Opus 17a + EVOL

I recently had the honour of working on something for EVOL, the computer music project by Roc Jiménez de Cisneros and Stephen Sharp. This particular project involved a re-creation of Opus 17a by Hanne Darboven, and my task was to analyse the music from an MP3 and identify the notes so that EVOL could play it.

Hanne Darboven was a conceptual artist who worked with processes that could be described as generative because they were based on rules for manipulating numbers and making patterns. Darboven often used calendar dates re-arranged according to rules, displayed visually in grids drawn on paper that were arranged in larger grids. Opus 17a is derived from one such calendar-based artwork (Wunschkonzert, 1984), in which the numbers are transcribed into notes. The result is an hour-and-a-half-long piece for solo double bass.

At first, Opus 17a might sound ‘random’, but there are different patterns at different scales that become identifiable with repeated listening. My approach to re-constructing this piece from the recording ended up using these perceivable patterns. Initially, I’d hoped to be able to identify the rules that Darboven used and to re-create the piece by coding it into a computer program, but this wasn’t possible because we couldn’t find enough information about her process. Roc managed to find the music for the piece Wende 80 (‘Turning Point’), which had been performed by Trevor Dunn and Eyvind Kang. Although this provided some clues to the process, it was still impossible to reverse-engineer the music of Opus 17a to figure out the numbers and the rules that had been used to map them to notes.

So with the idea of programming defeated, I ran the MP3 through a plugin that would analyse the pitch and timing and output MIDI. The result had lots of mistakes, and was quite pleasing in a way. I’d used this inaccurate conversion process before – in the piece Mirror in the Mirror (on the album Symmetry-Breaking, 2011), which was based on analysis of Spiegel im Spiegel by Arvo Pärt. In that case, the errors were central to the piece, but EVOL’s Opus 17a required a good, clean transcription. Tidying up this messy MIDI output was laborious but it seemed like the only viable option. This meant removing all the obviously wrong notes, re-aligning the remaining ones in time, and then gradually removing any remaining clutter and filling in the gaps by listening to the original and comparing it with the MIDI version. Working this way relied on identifying the patterns by ear, and then increasingly on recognising them visually as they took shape in the MIDI editor. The output of the MIDI conversion became less useful as the new piece, which was set to a fixed tempo grid, drifted away from the original recording, in which the performer modulates the tempo. It would have taken too long to do the whole piece this way, so it was restricted to the first 12 minutes, which is just over 1,500 notes. This 12-minute portion is cut off at a section similar to the one at the very end of the piece.

Listening to Opus 17a in the process of re-constructing it – bit by bit, and seeing the pattern of notes build up in the MIDI editor – brought its musical structures into focus. The process of analysis and re-synthesis revealed that it has a semi-regular tempo based mostly on 4 beats and sometimes 2. There are lots of 4-note arpeggios that gradually ascend in pitch. Often there are ascending arpeggios interleaved with static or ‘drone’ notes. It is based on a fixed scale (F Lydian) with a range of just over 2 octaves, from E1 to F3. The first 64 notes of the piece are based on this pattern of ascending arpeggios:

This image reveals how the 4-note arpeggios are related: first the middle two notes are raised, then the first and last notes are raised. The first 4 arpeggios are: {{F, A, C, F}, {F, B, D, F}, {G, B, D, G}, {G, C, E, G}}. The root note F is usually the lowest note in the piece, as in this section, but occasionally a note below it (E) is sounded. Numbering the lowest F as 1, the sequence for the first 16 notes is: {{1, 3, 5, 8}, {1, 4, 6, 8}, {2, 4, 6, 9}, {2, 5, 7, 9}}. With this numbering scheme, the low E would be numbered 0, and the highest pitch (F3) would be 15. It’s unlikely that Darboven used this method, because it’s not easy to see how the numbers 0 to 15 would map to the calendar numbers in the visual version of this generative artwork. So even though we can’t know Darboven’s generative process, this analysis does shed light on the generated structure. In the section pictured above, this process of shifting arpeggios continues until the pattern is just a step away from one octave above where it started. The next 64 notes in the piece look like this:


That section shows some of the variations on the main 4-note arpeggio, including a 2-note variation at bar 19. Starting near bar 28 is another type of pattern with ascending notes alternating with a steady pitch. Sometimes these ascend from the steady pitch, or descend from it, or pass through it from above or below. The whole piece has some symmetry, with the densest clusters of higher pitches appearing in groups at the start and end of the piece, and fewer in between.
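The note-numbering scheme described above is easy to check in code. Here’s a minimal sketch (in Python, purely for illustration – Darboven certainly didn’t work this way, and my own reconstruction was done by ear in a MIDI editor) that maps degree numbers to F Lydian note names:

```python
# F Lydian scale degrees, one octave; degree 1 is the lowest F,
# and degree 0 is the E below it.
LYDIAN = ["F", "G", "A", "B", "C", "D", "E"]

def note_name(n):
    """Map a degree number to its note name (octave ignored)."""
    return LYDIAN[(n - 1) % 7]

# The first four arpeggios from the analysis above:
arpeggios = [[1, 3, 5, 8], [1, 4, 6, 8], [2, 4, 6, 9], [2, 5, 7, 9]]
print([[note_name(n) for n in arp] for arp in arpeggios])
# → [['F', 'A', 'C', 'F'], ['F', 'B', 'D', 'F'],
#    ['G', 'B', 'D', 'G'], ['G', 'C', 'E', 'G']]
```

With seven scale steps per octave, degree 8 lands back on F an octave up, and degree 15 (F3) two octaves up, which is why the pattern reads so cleanly in the MIDI editor.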

EVOL’s début performance of Opus 17a opened the Unsound festival in New York, along with Oren Ambarchi, who performed the epic Knots from the album Audience of One. A review in the New York Times described how

Mr. Jiménez de Cisneros unleashed a prismatic extended tone at excessive volume. He shattered that core sound into jagged rhythmic clusters, each lacerating note accompanied with a piercing strobe light flash.

A clip of Roc’s performance has been posted on Vimeo but the quality isn’t very good, so I’ve removed the link until a better one is available. It’s interesting to note that the variation in tempo that had been introduced in the original instrumental performance, then straightened out in the analysis and re-synthesis process, is now taken to extremes in this computer music.

Posted in Art, Audio, Complexity, Music, Visual

Mapping Tintinnabuli Transformations

My project, Tintinnabuli Mathematica, involves trying to understand Arvo Pärt’s tintinnabuli method of composition in order to use this knowledge for my own music-making. The tintinnabuli method transforms the notes of the M-voice into notes of the scale’s triad, either above or below the original pitch. For example, the first T-voice above the M-voice (T1↑) takes the first note in the triad above the M-voice pitch. The T2↑ voice takes the second note above the M-voice, and so on. In Pärt’s compositions, T-voices may use these transformation rules consistently or may alternate between them. My aim in this project is to code this generative process into a program to create musical sequences, and to use the code to explore the process. For this purpose, I drew up a couple of charts that map the transformation rules. These simple charts allowed for an analysis of the process, and revealed that there are six different forms of T-voice. The following charts use scientific pitch notation, and are based on the A natural minor scale (or A Aeolian mode), which is commonly used by Pärt. Other scales or modes produce different characters, and the amount of consonance/dissonance varies not only with the chosen scale but also with how close the T-voices are to the M-voice.

The first chart is a concise representation of the tintinnabuli rules. The colours represent pitches: the scale forms a spectrum, and the triad (A-C-E) is red-yellow-blue. Each row represents a voice, with the M-voice in the middle. Reading down the columns shows which note each T-voice takes for each M-voice note. For example, if the M-voice is sounding the note D5, the corresponding T1↑ pitch will be the first note in the triad above it, which is E5. The pattern repeats in both directions, such that T4↑ is equivalent to T1↑, etc. This chart also demonstrates that there are six unique T-voices, i.e. that pairs of the upper (↑) and lower (↓) T-voices are dissimilar. For example, T3↑ is identical to T1↓ except when the M-voice is sounding one of the notes in the triad. The same goes for the pairs T2↑ & T2↓ and T1↑ & T3↓.
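The rule ‘take the n-th triad note above (or below) the M-voice pitch’ is simple enough to state in code. This is a small Python sketch of my understanding of the rule (illustrative only – not the program used for the project), working directly with MIDI note numbers:

```python
# Pitch classes of the A minor triad in MIDI convention: C=0, E=4, A=9.
TRIAD_PCS = {0, 4, 9}

def t_voice(m, position):
    """T-voice pitch for M-voice MIDI pitch m.
    position +1 = first triad note strictly above m (T1 up),
    position -1 = first triad note strictly below m (T1 down), and so on."""
    step = 1 if position > 0 else -1
    remaining = abs(position)
    p = m
    while remaining:
        p += step
        if p % 12 in TRIAD_PCS:
            remaining -= 1
    return p

D5 = 74
print(t_voice(D5, 1))   # 76 (E5), as in the chart example above
print(t_voice(D5, -1))  # 72 (C5)
```

Because the search is for the triad note strictly above or below, an M-voice note that is itself a triad note still moves – which is exactly where the upper and lower voice pairs stop mirroring each other, as the chart shows.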

The second chart looks at the same transformation rules in another way. This time, the rows represent notes in the scale rather than the voices. A range of 3 octaves is shown here, with higher pitches towards the top. The M-voice is shown in the middle again, but in this arrangement it forms a diagonal line. The six T-voices are coloured differently; because of this, you can see how the pattern of voices repeats, such that T1↑ comprises the same pitch classes as T4↑. Like the previous chart, it also shows that the six voices are unique because they don’t align with each other.

These charts have been useful for the purpose of my musical projects, but since they reflect my personal understanding, they may be an inaccurate representation of Pärt’s approach. My understanding is largely informed by Paul Hillier’s biography of Arvo Pärt and by the analyses in the Cambridge Companion to Arvo Pärt, edited by Andrew Shenton. Those studies suggest that Pärt uses only four of the six T-voices, to avoid octave transformations, which occur in the T3↑ and T3↓ voices only when the M-voice sounds one of the triad notes.

Posted in Audio, Music

Tintinnabuli Mathematica Vol. I

I’m pleased to announce that, with the help of Joe Evans at Runningonair Music, my new album, Tintinnabuli Mathematica Vol. I, has been released. It’s the result of a generative music project that I’ve been working on for over three years, and which is still ongoing. The Mathematica in the title refers to the algorithmic processes and number sequences that are used as the basis for the melodic parts of the music, and also to the programming language that is used to code the algorithms and generate the sequences. Tintinnabuli is Arvo Pärt’s compositional method. In this project the method is coded in Mathematica and programmed to generate harmonic parts by transforming the melodic parts.

The MIDI files that are generated with those processes are voiced with a single instrument – the free VSTi Synth1, which is modelled on the Nord Lead 2 ‘Red’ synthesizer. The idea was to keep the sounds quite similar to allow the different musical structures to be perceived. The main effects used are those by Variety of Sound, specifically FerricTDS and ThrillseekerVBL for dynamics processing, NastyVCS and pre-FIX for EQ, and NastyDLA mkII for delay. I’ve written more about the processes behind this project in the post Programming Arvo Pärt. This album – Vol. I – comprises the first 11 pieces of music from this project. The 12th piece was released on the SEQUENCE7 compilation album. The next volume will continue the exploration of the relationship between the programming language and the musical language of tintinnabuli, but with new methods and different sounds.

Posted in Audio, Music

Experimental Music Mix

On Tuesday 9th January, via KFAI radio, Eric Frye posted a new episode of Splice-Free, “a programme examining experimental compositions past and present”. The mix includes work by many artists I admire: Alva Noto, Theo Burt, Jo Thomas, Ryoji Ikeda, Rashad Becker, Jean-Claude Risset, Martin Neukom, Mark Fell, EVOL and Autechre. Read the full track listing and stream the mix here, or listen by clicking on this:

Given that selection of established artists, I was very pleased to find one of my own pieces of music included in the mix: Tintinnabuli Mathematica 10b, from my forthcoming album Tintinnabuli Mathematica vol. I. In trying to establish my solo musical practice over the past few years, it’s been a gradual process of discovering my own ‘voice’ or style. Participating in things like the Disquiet Junto (an open group for making music based on creative constraints) has been good for this kind of development because it encourages cross-fertilization and enables comparisons with alternative approaches to the same musical problems or projects. Through such activity I’ve developed friendships and forged working collaborations with a variety of musicians, many of whom tend to be loosely classified under the label ‘ambient music’. I’m not entirely uncomfortable situating my own work in this genre, since I too share an engagement with quieter and slower forms of music. But my affinity with ambient music has perhaps less to do with a particular style than with the approach to listening that it engenders or demands. This approach is articulated by Brian Eno in the liner notes for Music for Airports, where he describes his aim to make music that is “as ignorable as it is interesting” and “able to accommodate many levels of listening attention without enforcing one in particular”. So, whilst my own music shares some characteristics of ambient music, it also has much in common with the generative and computer-based approaches of the artists included in the Splice-Free programme. As a result, my work arguably sits more comfortably here than, for example, Tintinnabuli Mathematica 12d does among the mostly electro-acoustic pieces in the SEQUENCE7 compilation. And yet, of course, the label ‘experimental’ is no less problematic a term than ‘ambient’.
But, in the end, I’m just happy to hear my music amongst such esteemed company, and I’m pleased to have the chance to share my work with others who might appreciate this kind of thing.


Eric Frye runs the label Scumbag Relations, and can be followed on Twitter at @sleepycobalt.

Posted in Audio, Music

13 Music Highlights in 2013

Not really a ‘best of’, this list represents a few of the things that I’ve enjoyed this year. In no particular order:

Autechre – Exai and L-Event (Warp Records)

Complex sounds and complex rhythms, but with more soul than preceding releases. Their strongest work for a while.

NHK’Koyxeи – Dance Classics vol.III (PAN)

Latest installment in Kouhei Matsunaga’s wonky dancefloor experiments.

EVOL – Proper Headshrinker (Editions Mego)

It looks how it sounds. Spectrogram of Proper Headshrinker 6, created with Foobar2000 visualization plugin:

Jos Smolders – Music for FLAC-player (self-released)

1100 tracks, to be played on shuffle.

Rashad Becker – Traditional Music of Notional Species vol. I (PAN)

Lee Gamble – Dutch Tvashar Plumes (PAN)

Hard-to-categorise, easy-to-love music from both Becker and Gamble.

William Winant – Five American Percussion Pieces (Poon Village)

I first heard William Winant’s percussion via his work with Mr Bungle and Secret Chiefs 3. This is his first solo release. Stream the album and read an interview with Winant here: http://www.spin.com/articles/william-winant-five-american-percussion-pieces-stream/

Various – Touch Radio (Touch)

Always worth repeated listens, the Touch Radio series offers a platform for audio that is more than music, including recordings created in the context of scientific, documentary, environmental and political approaches to sound. A particular favourite is the contact microphone recording of an Icelandic longwave radio antenna by Aino Tytti:

Phirnis – Feeding Lions (Fwonk)

Nice noise from Kai Ginkel.

Yves de Mey – Metrics (Opal Tapes)

Beautifully balanced bleeps, beats and bass.

Mark Fell – n-Dimensional Analysis (Liberation Technologies)

Fell also released more good stuff via Editions Mego (Sensate Focus 1.666666 and Sensate Focus 2).

Tobias Reber – Kola (Iapetus)

Fractured, generative rhythms based on the sounds of pitched percussion.

Various – Disquiet Junto (via SoundCloud)

Group based on musical responses to creative constraints, with weekly assignments set by Marc Weidenbaum of Disquiet.com. Now in its second year, the Junto recently passed its 100th assignment, for which the task was to make something from the sound of water boiling (the very first project and the first anniversary project both used the sound of ice):

Posted in Audio, Music


Once again, the Futuresequence label has released a cracking compilation of experimental and ambient music for free. SEQUENCE7 contains 30 tracks that were whittled down from over 200 submissions through a selection process by label owner Michael Waring with the help of Pascal Savy, Ed Hamilton and Karl McGrath. This is the 7th in the series of SEQUENCE compilations, and I’m very pleased to say that it includes one of my tracks – the 12th in the series of Tintinnabuli Mathematica experiments. I’ve written elsewhere on this blog about that project in general; here I’ll explain a bit about this piece in particular.

Tintinnabuli Mathematica 12d is based on a melodic part (M-voice) constructed from a fractal integer sequence known as A194832 in the Online Encyclopedia of Integer Sequences (OEIS). I used the Mathematica code on that page to generate the first 657 numbers in the sequence. This is what it looks like when plotted as a graph:

This sequence of numbers is converted to MIDI notes, mapping higher numbers to higher pitches. The reason for using the first 657 numbers is that the range of numbers in the sequence fits the scale I wanted to use: the 657th number in the sequence is 36 (that’s the point at the top right of the graph above), and there are 36 notes in a 5-octave scale. This piece uses the scale of A natural minor, from A2 (in scientific pitch notation) to A7. So, the first 10 numbers in the sequence are {1, 1, 2, 3, 1, 2, 3, 1, 4, 2}, which are converted to the MIDI notes {A2, A2, B2, C3, A2, B2, C3, A2, D3, B2}. The result of the conversion process for the M-voice sounds like this:
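For anyone wanting to reproduce the mapping, here is a small sketch (in Python rather than the Mathematica actually used; the function names are my own) that converts sequence values to MIDI note numbers in A natural minor:

```python
# Semitone offsets of the natural minor (Aeolian) scale from its root.
MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10]
A2 = 45  # MIDI note number of A2

def value_to_midi(k, root=A2):
    """Map sequence value k (1-based) to the k-th note of the scale."""
    i = k - 1
    return root + 12 * (i // 7) + MINOR_STEPS[i % 7]

first_ten = [1, 1, 2, 3, 1, 2, 3, 1, 4, 2]
print([value_to_midi(k) for k in first_ten])
# → [45, 45, 47, 48, 45, 47, 48, 45, 50, 47]  (A2, A2, B2, C3, ...)
print(value_to_midi(36))  # 105 = A7, the top of the 5-octave range
```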

Once the M-voice has been created, the next step is to generate 6 T-voices using Arvo Pärt’s tintinnabuli method. I wrote a program to do this, which applies a transformation process to the M-voice, generating 6 MIDI files. In the final arrangement, the T-voices are staggered, so that the result is an arpeggiated pattern in which the M-voice notes are followed by the descending T-1, T-2 and T-3 voices. The other T-voices (T+1, T+2, T+3) provide a bass part and higher-sounding voices that are interleaved with the other layers:

And here’s what it looks like as a spectrogram:


For those who might be interested in trying to use integer sequences such as this for musical purposes, the OEIS is a handy resource because it not only provides the algorithms to generate your own sequences, but also offers a sonification facility (by clicking on the ‘listen’ link below the main title for each sequence). This allows you to play a sequence and tweak the parameters, and also save the output as a MIDI file.

The previous 11 TM pieces will be released as an album – Tintinnabuli Mathematica vol. I – via Runningonair Music in the new year. These experiments are ongoing. Currently I’m exploring ways of extending the method – trying out new M-voice patterns, more complex variations of T-voices, and experimenting with new sounds. The aim is to release a second volume in due course.

Posted in Audio, Music

Computer Fatigue and the Rise of Sonic Complexity

Last month sound artist Kate Carr wrote an article called Computer fatigue and the rise of the human, which examines how and why some electronic musicians are turning away from the computer and digital recording/processing in favour of analogue electronics and acoustic instruments. Apart from being an interesting article in itself, it caught my attention because it seems to relate to the complexity of sound in music. Complexity in music and visual art is a subject of deep interest for my work, both academic and creative, but it’s one that has been difficult to start writing about coherently. This post is an attempt to get the ball rolling. My initial aim was to make a simple point about how the musicians’ strategies to counter computer fatigue can also be understood as a search for greater sonic complexity, but attempts to elaborate this point led to other avenues of thought. As such, the following ideas are still a bit sketchy.

The article describes a trend amongst electronic musicians for increasing use of ‘real’ (as opposed to ‘virtual’) instruments. The “computer fatigue” in the title refers to a weariness or frustration with the characteristic sounds and techniques of digital audio. The problem with digital techniques, Carr notes, is that they undermine “spontaneity, accidents and the importance of live improvisation”. Musicians reveal a number of motivations for adopting a more lo-fi approach: a desire for greater authenticity, a nostalgia for retro technology, a search for ‘character’ or ‘soul’, and a need for manual control. Underlying all these motivations is a distaste for the precision, repeatability and consistency of digital audio. In contrast, more primitive technology offers a certain amount of variation in tone, timbre or timing. In addition, manual controls offer greater expressive capabilities and a more ‘human’ touch that also lead to greater variation in sound than digital methods. The appeal of the lo-fi is often expressed in terms of imperfection. For example, Zachary Corsa says that he prefers “imperfections and warmth to sterility and polished freeze”. Cameron Webb identifies the same thing in describing the difference compared with digital sound: “acoustic/analogue instrumentation … brings with it an element of imperfection”. The article concludes by citing the sensory philosophy of Michel Serres, using the metaphor of navigating the world of sound:

The closed off space of the computer offers an impoverished stage for such a voyage of self-discovery. Little wonder we have returned to the complexities, resonances, the physical strains and even failures that only an embodied experience of making and listening to sound can offer.

The complexity that Carr mentions in that final sentence is the idea I’d like to draw out here. Put simply, the imperfection of lo-fi approaches produces sounds that are more complex than their digital equivalents. Imperfection is a key characteristic of lo-fi sounds, and it is caused by the inherent instability of the instruments and the subtle variations of manual control. These deviations from the perfectly tuned and timed mean that no two sounds are exactly alike; lo-fi instruments therefore produce sounds with greater variation than digital instruments, which are capable of producing exact copies of sounds. Greater variation means greater complexity. My doctoral research revealed that people understand visual complexity in terms of the level of detail and the variation of elements or patterns in a picture. Visual complexity is perceived to increase in proportion with the number of different colours and patterns, but to decrease with repetition. Handmade images look more complex because of their small deviations from the straight and true, which makes them more interesting and more appealing. It seems plausible to suggest, therefore, that a similar effect occurs in music. In this sense, the condition of computer fatigue that has driven the move towards more lo-fi approaches can also be understood as a search for greater sonic complexity. (This doesn’t imply that this is very complex music; it just means that the sounds that musicians seek are rich and expressive, and it’s those properties that make them more complex.)

Computer fatigue may be explained by the fact that it is difficult to achieve natural or human variation using digital techniques. The alternatives to digital audio used by the musicians in Carr’s article offer easier ways to achieve musical variation. These include a range of technologies – manual (e.g. a glockenspiel), mechanical (pianoforte) and electronic (tape recorder, modular synthesizer). Each has its own characteristic imperfections, and each operates with different types of control mechanism that also contribute to the imperfections. To generate the same kind of pleasing variation with digital methods requires either painstaking adjustment of many details or the effort to introduce some randomness or non-linearity into the methods. Some musicians appear to have taken on this challenge, however, and are choosing to stick with the computer. The increasing sonic complexity of their music suggests that they too may suffer the symptom of computer fatigue, sharing the dissatisfaction with “impoverished” digital audio like the other musicians, but choosing instead to grasp the nettle and find new ways to generate greater sonic complexity using digital techniques. This is a theme that I’d like to expand upon.
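To make that last point concrete: introducing variation digitally can be as simple as jittering note onsets and velocities. This is a deliberately crude, hypothetical sketch (in Python; none of the musicians discussed necessarily work this way) of the kind of randomness that has to be explicitly programmed in:

```python
import random

def humanize(notes, timing_sd=0.01, velocity_sd=6, seed=None):
    """Add small Gaussian deviations to (onset_seconds, velocity) note
    events – a crude digital stand-in for human performance variation."""
    rng = random.Random(seed)
    out = []
    for onset, velocity in notes:
        onset += rng.gauss(0, timing_sd)
        velocity = min(127, max(1, round(velocity + rng.gauss(0, velocity_sd))))
        out.append((onset, velocity))
    return out

# A rigid four-note pattern becomes slightly different on every render:
pattern = [(0.0, 100), (0.25, 100), (0.5, 100), (0.75, 100)]
print(humanize(pattern))
```

Even this trivial jitter has to be imagined and coded by the musician, whereas a tape machine or a hand on a glockenspiel supplies it for free – which is precisely the asymmetry behind computer fatigue.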

The idea of a more natural variation or shaping of sound in digital music appeared in the recent ‘Ask Autechre Anything‘ thread on the WATMM forum (also available to view as a Google doc here). One question asked: “Given that you guys have been at the leading edge of sonic explorations for 20+ years have you developed any personal alternate theories and postulations about what music actually is?” Sean Booth’s response is a concise mathematical description:

yeah music = speech − text

at least roughly — i reckon it’s a kind of super-developed version of the pitch and intonation parts of speech (the aural bit that doesn’t contain textual info)

Autechre’s conception of music as “speech minus text” is supported by recent research in evolutionary psychology (e.g. the work of Diana Deutsch), which suggests that music has its origins in our faculty for producing and perceiving vocal sounds. Neurological studies suggest that speech and music are processed with similar mental circuits. The close ties between the two are also evidenced by education research that shows how participation in music can help to develop speaking and listening skills. This idea of music as ‘speech − text’ resonates because Autechre’s recent work seems to demonstrate a similar kind of naturally-formed sound. The music of Exai and L-Event in particular represents another step in the evolution of Autechre’s style, where the sounds seem to have a life of their own. These sounds are not quite organic (because they still sound synthetic, although less so than their previous work), but they are free from the unappealing traits of digital audio. So, by being more complex sonically, Autechre represent an instance of musicians who share the condition of computer fatigue but who choose an alternative strategy to get away from the problems inherent in digital audio techniques. A similar aesthetic – perhaps a similar approach also – is perceivable in some of the output from Bill Kouligas’ PAN label, such as Traditional Music of Notional Species Vol. I by Rashad Becker and Dutch Tvashar Plumes by Lee Gamble:

Finally, I’d like to put forward the suggestion that these digital approaches to sonic complexity can be understood as an example of Manuel De Landa’s idea of “topological music”, as described in his essay ‘The Virtual Breeding of Sound’ (PDF). The essay begins with a description of natural sounds – such as the song of a blackbird – that are formed via an evolutionary mechanism: “…these songs have become memes, patterns of behaviour transmitted through imitation and, as such, capable of having an evolution of their own.” The term “topological” refers to a method of shaping sound, in which De Landa suggests that the musician’s job of searching for and shaping musical forms may be aided by thinking in terms of biological evolution. Topological transformations – such as stretching or folding without cutting – are offered as a suitable resource for exploring evolutionary space. Evolutionary processes are proposed as a means of introducing musical variation by offering a way to sort and shape sounds. Techniques such as genetic algorithms can be used to generate new populations of sounds and select candidates for the next round of breeding. In this process, sounds evolve according to a set of fitness criteria, which constitute an abstract indication of desired properties rather than a specific and pre-determined design – a distinction made by De Landa in terms of metric and non-metric (topological) geometries. In this way, these topological techniques offer a means to escape the predictability and uniformity of more primitive digital audio methods, thereby avoiding computer fatigue:

It is possible, although I do not know how to theorize this yet, that musicians will have to start thinking in terms of abstract musical structures where the key properties for a sound are not those of fixed duration or a fixed wavelength and the like, but rather are something else corresponding to what we may call “topological music”, something we cannot hear [...] but which would define a rich search space, the final products of which would be audible. In turn, this implies representing within the computer something like the complex embryological processes, which map the genes (the genotype) into bodily traits (the phenotype), given that this complex mapping genotype-phenotype is where the conversion from topological to metric is achieved.
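As a toy illustration of the genetic-algorithm idea (entirely hypothetical – not how any of these artists work, and nothing like a real synthesis system), one could breed a vector of synth parameters toward a fitness function rather than specifying the sound directly:

```python
import random

def evolve(fitness, dims=4, pop_size=20, generations=50, seed=0):
    """Toy genetic algorithm: breed parameter vectors (imagined here as
    normalised synth settings) toward higher fitness, via selection,
    single-point crossover and point mutation."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # fittest first
        parents = pop[: pop_size // 2]            # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dims)          # single-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(dims)               # point mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Fitness here is just closeness to an arbitrary target "timbre" vector;
# in De Landa's scheme it would be an abstract indication of desired
# properties, not a pre-determined design.
target = [0.2, 0.8, 0.5, 0.1]
best = evolve(lambda v: -sum((x - t) ** 2 for x, t in zip(v, target)))
```

The point of the sketch is the division of labour: the musician specifies what counts as fit, and the breeding process does the searching – the metric result is grown from a non-metric specification.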

To support my argument that this other route away from computer fatigue may be characterized as topological music, I tried to find some information about these musicians’ techniques. Although details are scarce, it is clear that Autechre, Becker and Gamble still use the computer to shape sounds, develop rhythms and structure music. The evolutionary aspect of topological music described by De Landa is hinted at by the inclusion of the word ‘species’ in the title of Rashad Becker’s album, as if the tracks were specimens of a new line of evolution. An interview with Becker supports this idea: he discusses using software called The Brain that allows sounds to be characterized and grouped according to shared ‘genetic’ characteristics between parent and offspring sounds. In accordance with Autechre’s conception of music as ‘speech minus text’, Becker also describes his music in terms of the characteristics of speech:

It’s the envelopes and the harmonic progressions that the sounds have that are all—like syllables, maybe. These are the progressions that I obviously, or naturally, or automatically look for, that resemble speech, breathing and performance, that represent a certain actual shape of a body.

Another description that I came across seems to capture this biological or non-digital aesthetic of topological music that I’m attempting to describe: “His sounds actually sound like things”, wrote Marc Masters in a review of Becker’s album. This description also matches how I feel listening to Autechre’s recent work. These strange new musical forms express a kind of internal mechanism that gives them a feeling of natural variation as well as a sense of coherence or family resemblance.

This topological music seems to share with the musicians in Carr’s article a similar rejection of sterile digital sounds. In both cases, the resulting music can be characterised as having hallmarks of complexity – greater variation of sonic materials and more complicated texture and structure. But the response to “computer fatigue” by the topological musicians contrasts with the work of those who choose the lo-fi approach as the solution. The critical difference is that the topological musicians continue to use the computer and digital techniques. The topological techniques that De Landa proposes, such as genetic algorithms, are forms of generative music composition. Philip Galanter’s definition of generative art provides a clear and concise way of understanding what generative methods involve:

Generative art refers to any art practice where the artist uses a system, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is set into motion with some degree of autonomy contributing to or resulting in a completed work of art. (2003, What Is Generative Art?)

The key element is that the artist gives some amount of creative control to the system. By this definition, the strategies used by those musicians who have returned to lo-fi methods are also generative. An instrument or method that produces sounds that vary in an unpredictable way constitutes a form of generative music system, because the musician is allowing the method to contribute some of its own characteristics. In effect, the system makes some of the creative decisions. In a digital system, these decisions would first have to be thought up and then programmed into the system. Because digital methods have to be instructed what to do, variation and surprise are more difficult to achieve. So, the return to lo-fi methods and the adoption of evolutionary digital techniques represent two forms of generative music that seek to avoid the condition of computer fatigue. They choose different routes to solve that problem, but in both cases a search for greater sonic complexity can be seen to motivate the creation of new music.


Nathan Thomas also wrote a response to Kate Carr’s article, Nature and the Nature-Like: A Response to the Computer-Fatigued, which questions the usefulness of authenticity or ‘reality’ as the dividing line between digital and analogue.

Kate Carr is a sound artist from Sydney, Australia who runs the Flaming Pines label. She regularly posts music and field recordings on SoundCloud, and has a blog too.

Posted in Art, Audio, Complexity, Music, Research, Visual Perception