Computer Fatigue and the Rise of Sonic Complexity

Last month sound artist Kate Carr wrote an article called Computer fatigue and the rise of the human which examines how and why some electronic musicians are turning away from the computer and digital recording/processing in favour of using analogue electronics and acoustic instruments. Apart from being an interesting article in itself, it caught my attention because it seems to relate to the complexity of sound in music. Complexity in music and visual art is a subject of deep interest for my work, both academic and creative, but it's one that has been difficult to start writing about coherently. This post is an attempt to get the ball rolling. My initial aim was to make a simple point about how the musicians' strategies to counter computer fatigue can also be understood as a search for greater sonic complexity, but attempts to elaborate this point led to other avenues of thought. As such, the following ideas are still a bit sketchy.

Kate Carr's article describes a trend amongst electronic musicians for increasing use of 'real' (as opposed to 'virtual') instruments. The "computer fatigue" in the title refers to a weariness or frustration with the characteristic sounds and techniques of digital audio. The problem with digital techniques, Carr notes, is that they undermine "spontaneity, accidents and the importance of live improvisation". Musicians reveal a number of motivations for adopting a more lo-fi approach: a desire for greater authenticity, a nostalgia for retro technology, a search for 'character' or 'soul', and a need for manual control. Underlying all these motivations is a distaste for the precision, repeatability and consistency of digital audio. In contrast, more primitive technology offers a certain amount of variation in tone, timbre and timing. In addition, manual controls offer greater expressive capabilities and a more 'human' touch, which also lead to greater variation in sound than digital methods. The appeal of the lo-fi is often expressed in terms of imperfection. For example, Zachary Corsa says that he prefers "imperfections and warmth to sterility and polished freeze". Cameron Webb identifies the same quality in describing how it differs from digital sound: "acoustic/analogue instrumentation … brings with it an element of imperfection". The article concludes by citing the sensory philosophy of Michel Serres, using the metaphor of navigating the world of sound:

The closed off space of the computer offers an impoverished stage for such a voyage of self-discovery. Little wonder we have returned to the complexities, resonances, the physical strains and even failures that only an embodied experience of making and listening to sound can offer.

The complexity that Carr mentions in that final sentence is the idea I'd like to draw out here. Put simply, the imperfection of lo-fi approaches produces sounds that are more complex than their digital equivalents. Imperfection is a key characteristic of lo-fi sounds, and it is caused by the inherent instability of the instruments and the subtle variations of manual control. These deviations from the perfectly tuned and timed mean that no two sounds are exactly alike; lo-fi instruments therefore produce sounds with greater variation than digital instruments, which are capable of producing exact copies of a sound. Greater variation means greater complexity. My doctoral research revealed that people understand visual complexity in terms of the level of detail and the variation of elements or patterns in a picture. Visual complexity is perceived to increase in proportion with the number of different colours and patterns, but to decrease with repetition. Handmade images look more complex because of their small deviations from the straight and true, and those deviations make them more interesting and more appealing. It seems plausible to suggest, therefore, that a similar effect occurs in music. In this sense, the condition of computer fatigue that has driven the move towards more lo-fi approaches can also be understood as a search for greater sonic complexity. (This doesn't imply that the resulting music is especially complex; it just means that the sounds that musicians seek are rich and expressive, and it's those properties that make them more complex.)

Computer fatigue may be explained by the fact that it is difficult to achieve natural or human variation using digital techniques. The alternatives to digital audio used by the musicians in Carr's article offer easier ways to achieve musical variation. These include a range of technologies – manual (e.g. a glockenspiel), mechanical (pianoforte) and electronic (tape recorder, modular synthesizer). Each has its own characteristic imperfections, and each operates with a different type of control mechanism that also contributes to the imperfections. To generate the same kind of pleasing variation in digital methods requires either painstaking adjustment of many details or the effort to introduce some randomness or non-linearity into the methods. Some musicians appear to have taken on this challenge, however, and are choosing to stick with the computer. The increasing sonic complexity of their music suggests that they too may suffer the symptoms of computer fatigue, sharing the other musicians' dissatisfaction with "impoverished" digital audio, but choosing instead to grasp the nettle and find new ways to generate greater sonic complexity using digital techniques. This is a theme that I'd like to expand upon.
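To illustrate the second option, here is a minimal sketch (in Python, with made-up parameter names) of how a perfectly quantised note sequence might be given the kind of small, humanising deviations described above. It is not any particular musician's method, just one way of injecting randomness into an otherwise exact digital process.

```python
import random

def humanize(notes, timing_jitter=0.01, pitch_drift=5.0, velocity_spread=8):
    """Add small random deviations to an otherwise exact note sequence.

    notes: list of dicts with 'onset' (seconds), 'freq' (Hz), 'velocity' (0-127).
    The deviations are tiny, but they guarantee that no two renderings
    of the same sequence are ever exactly alike.
    """
    humanized = []
    for note in notes:
        humanized.append({
            # shift the onset by a few milliseconds either way
            'onset': max(0.0, note['onset'] + random.gauss(0.0, timing_jitter)),
            # detune by a few cents (pitch_drift is the standard deviation in cents)
            'freq': note['freq'] * 2 ** (random.gauss(0.0, pitch_drift) / 1200.0),
            # vary the loudness slightly
            'velocity': max(1, min(127, note['velocity'] +
                                   random.randint(-velocity_spread, velocity_spread))),
        })
    return humanized

# A perfectly regular sequence...
grid = [{'onset': i * 0.5, 'freq': 440.0, 'velocity': 100} for i in range(8)]
# ...and a slightly unstable, more 'lo-fi' rendering of it.
print(humanize(grid))
```

Even deviations this small mean that no two renderings of the sequence are identical, which is precisely the kind of variation that lo-fi instruments provide for free.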

The idea of a more natural variation or shaping of sound in digital music appeared in the recent 'Ask Autechre Anything' thread on the WATMM forum (also available to view as a Google doc here). One question asked: "Given that you guys have been at the leading edge of sonic explorations for 20+ years have you developed any personal alternate theories and postulations about what music actually is?" Sean Booth's response is a concise mathematical description:

yeah music = speech − text

at least roughly — i reckon it’s a kind of super-developed version of the pitch and intonation parts of speech (the aural bit that doesn’t contain textual info)

Autechre's conception of music as "speech minus text" is supported by recent research in evolutionary psychology (e.g. the work of Diana Deutsch) which suggests that music has its origins in our faculty for producing and perceiving vocal sounds. Neurological studies suggest that speech and music are processed by similar neural circuits. The close ties between the two are also evidenced by education research that shows how participation in music can help to develop speaking and listening skills. This idea of music as 'speech − text' resonates because Autechre's recent work seems to demonstrate a similar kind of naturally-formed sound. The music on Exai and L-Event in particular represents another step in the evolution of Autechre's style, where the sounds seem to have a life of their own. These sounds are not quite organic (because they still sound synthetic, although less so than their previous work), but they are free from the unappealing traits of digital audio. So, by being more complex sonically, Autechre are an instance of musicians who share the condition of computer fatigue but who choose an alternative strategy to get away from the problems inherent in digital audio techniques. A similar aesthetic – perhaps a similar approach also – is perceivable in some of the output from Bill Kouligas' PAN label, such as Traditional Music of Notional Species Vol. I by Rashad Becker and Dutch Tvashar Plumes by Lee Gamble:

Finally, I'd like to put forward the suggestion that these digital approaches to sonic complexity can be understood as an example of Manuel DeLanda's idea of "topological music", as described in his essay 'The Virtual Breeding of Sound' (PDF). The essay begins with a description of natural sounds – such as the song of a blackbird – that are formed via an evolutionary mechanism: "…these songs have become memes, patterns of behaviour transmitted through imitation and, as such, capable of having an evolution of their own." The term "topological" refers to the type of morphological transformation driven by evolution – the ways in which biological forms evolve: growing, twisting, folding, extruding, wrinkling, but not cutting. DeLanda proposes topological transformations as a method of shaping sound and exploring musical possibilities. Techniques such as genetic algorithms can be used to generate new populations of sounds and select candidates for the next round of breeding. In this process, sounds evolve according to a set of fitness criteria, which constitute an abstract indication of desired properties rather than a specific and pre-determined design – a distinction made by DeLanda in terms of metric and non-metric (topological) geometries. In this way, these topological techniques offer a means to escape the predictability and uniformity of more primitive digital audio methods, thereby avoiding computer fatigue:

It is possible, although I do not know how to theorize this yet, that musicians will have to start thinking in terms of abstract musical structures where the key properties for a sound are not those of fixed duration or a fixed wavelength and the like, but rather are something else corresponding to what we may call “topological music”, something we cannot hear […] but which would define a rich search space, the final products of which would be audible. In turn, this implies representing within the computer something like the complex embryological processes, which map the genes (the genotype) into bodily traits (the phenotype), given that this complex mapping genotype-phenotype is where the conversion from topological to metric is achieved.
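To make the breeding loop that DeLanda describes a little more concrete, here is a minimal sketch of such a process. It assumes a toy genotype of sixteen harmonic amplitudes and a deliberately abstract fitness criterion (a target spectral 'brightness' rather than a fixed waveform); it is only an illustration of the evolutionary idea, not the method used by any of the musicians discussed here.

```python
import random

HARMONICS = 16            # genotype length: amplitude of each harmonic partial
POP_SIZE = 24
GENERATIONS = 40
TARGET_BRIGHTNESS = 0.35  # abstract fitness criterion, not a fixed design

def random_genotype():
    return [random.random() for _ in range(HARMONICS)]

def brightness(genotype):
    """Crude 'spectral centroid' of the genotype: how much of its energy sits
    in the upper partials. This stands in for the phenotype we would actually
    render and listen to."""
    total = sum(genotype) or 1e-9
    return sum(a * (i / (HARMONICS - 1)) for i, a in enumerate(genotype)) / total

def fitness(genotype):
    # Higher is better: we ask only for a certain brightness, not a specific sound.
    return -abs(brightness(genotype) - TARGET_BRIGHTNESS)

def breed(parent_a, parent_b, mutation_rate=0.1):
    cut = random.randrange(1, HARMONICS)   # one-point crossover
    child = parent_a[:cut] + parent_b[cut:]
    # small random mutations, clamped to the valid amplitude range
    return [min(1.0, max(0.0, a + random.gauss(0, 0.1)))
            if random.random() < mutation_rate else a
            for a in child]

population = [random_genotype() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]   # selection of breeding candidates
    population = survivors + [breed(random.choice(survivors), random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print('best brightness:', round(brightness(best), 3))
```

The point of the sketch is the distinction DeLanda draws: the fitness criterion specifies a desired property in the abstract, and the final, audible sounds emerge from the breeding process rather than from a fixed, pre-determined design.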

To support my argument that this other route away from computer fatigue may be characterised as topological music, I tried to find some information about these musicians' techniques. Although the details are scarce, it is clear that Autechre, Becker and Gamble still make use of the computer to shape sounds, develop rhythms and structure music. The evolutionary aspect of topological music described by DeLanda is hinted at by the inclusion of the word 'species' in the title of Rashad Becker's album, as if the tracks were specimens of a new line of evolution. An interview with Becker supports this idea: he discusses using software called The Brain, which allows sounds to be characterised and grouped according to shared 'genetic' characteristics between parent and offspring sounds. In accordance with Autechre's conception of music as 'speech minus text', Becker also describes his music in terms of the characteristics of speech:

It’s the envelopes and the harmonic progressions that the sounds have that are all—like syllables, maybe. These are the progressions that I obviously, or naturally, or automatically look for, that resemble speech, breathing and performance, that represent a certain actual shape of a body.

Another description that I came across seems to capture this biological or non-digital aesthetic of topological music that I’m attempting to describe: “His sounds actually sound like things”, wrote Marc Masters in a review of Becker’s album. This description also matches how I feel listening to Autechre’s recent work. These strange new musical forms express a kind of internal mechanism that gives them a feeling of natural variation as well as a sense of coherence or family resemblance.

This topological music seems to share with the musicians in Carr's article a similar rejection of sterile digital sounds. In both cases, the resulting music can be characterised as having hallmarks of complexity – greater variation of sonic materials and more complicated texture and structure. But the response to "computer fatigue" by the topological musicians contrasts with the work of those who choose the lo-fi approach as the solution. The critical difference is that the topological musicians continue to use the computer and digital techniques. The topological techniques that DeLanda proposes, such as genetic algorithms, are forms of generative music composition. Philip Galanter's definition of generative art provides a clear and concise way of understanding what generative methods involve:

Generative art refers to any art practice where the artist uses a system, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is set into motion with some degree of autonomy contributing to or resulting in a completed work of art. (2003, What Is Generative Art?)

The key element is that the artist gives some amount of creative control to the system. By this definition, strictly speaking, the strategies used by those musicians who have returned to lo-fi methods are also generative, though mainly at the lower level of sound structure rather than compositional structure. An instrument or method that produces sounds that vary in an unpredictable way constitutes a form of generative music system, because the musician is allowing the method to contribute some of its own characteristics. In effect, the system makes some of the creative decisions. In a digital system, these decisions would first have to be thought up and then programmed into the system. Because digital methods have to be instructed what to do, variation and surprise are more difficult to achieve. So, the return to lo-fi methods and the adoption of evolutionary digital techniques represent two forms of generative music that seek to avoid the condition of computer fatigue. They choose different routes to solve that problem, but in both cases a search for greater sonic complexity can be seen to motivate the creation of new music.
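For completeness, here is a minimal sketch of generative decision-making at the compositional level, in the spirit of Galanter's definition: a toy first-order Markov chain (with made-up transition weights) that chooses the notes of a phrase once the composer has set the rules.

```python
import random

# A toy first-order Markov chain over note names: from each note, the system
# chooses the next one, so some of the creative decisions are made by the
# system rather than the composer. The transition weights are hypothetical.
TRANSITIONS = {
    'C': {'D': 0.4, 'E': 0.3, 'G': 0.3},
    'D': {'C': 0.3, 'E': 0.4, 'F': 0.3},
    'E': {'D': 0.3, 'F': 0.3, 'G': 0.4},
    'F': {'E': 0.5, 'G': 0.5},
    'G': {'C': 0.4, 'E': 0.3, 'A': 0.3},
    'A': {'G': 0.6, 'F': 0.4},
}

def generate_phrase(start='C', length=16):
    phrase = [start]
    for _ in range(length - 1):
        options = TRANSITIONS[phrase[-1]]
        notes, weights = zip(*options.items())
        phrase.append(random.choices(notes, weights=weights)[0])
    return phrase

# Each run yields a different phrase: the composer set the rules,
# the system set the notes.
print(' '.join(generate_phrase()))
```

Each run produces a different phrase: the composer designs the system and then hands over some of the note-level decisions to it, which is exactly the delegation of creative control that Galanter's definition identifies.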


Nathan Thomas also wrote a response to Kate Carr’s article, Nature and the Nature-Like: A Response to the Computer-Fatigued, which questions the usefulness of authenticity or ‘reality’ as the dividing line between digital and analogue.

Kate Carr is a sound artist from Sydney, Australia who runs the Flaming Pines label. She regularly posts music and field recordings on SoundCloud, and has a blog too.


6 Responses to Computer Fatigue and the Rise of Sonic Complexity

  1. Le Berger says:

    Computers are not in any way 'apart' from nature, and neither are we. Computerized, electronic and digital means of music production are simply other tools in the box. The fact that they have only been around for a fairly short time and cannot be approached as intuitively or as immediately as, let's say, a guitar makes us reach conclusions that are sometimes shaky and other times plain erroneous. Just as it would be easy to say 'one of the keys on this guitar is broken, I cannot tune it right', it's all the same when you claim 'computers lack human character / soul'. The flaw isn't inherent to the tool itself, but in your perception of it.

    Just like when electrical instrumentation made its way into music, it took many years for a lot of ‘purists’ to turn their boots around. And nowadays most will readily accept them as part of the musical landscape and very few would deny their ‘character’.

    Now perhaps the way we have designed computers and software tends to generate music that we identify as 'lacking soul'; perhaps the way we tend to use this software does not allow us to perceive an element of chance in the process, or happenstance / lucky accidents / errors.

    I personally try not to look to any side of the fence for 'flaws' (be it the design of instruments or the way we tend to use them) because there isn't a single culprit. The answer, for yours truly, lies in interaction, and our interaction with computers is still in its baby steps. Thus we are shaping the tools as we go along, and perhaps more and more of these tools will be in our image as time progresses. Then they will incorporate more chance elements and possibilities for errors and so-called 'character'. In fact I do think it's already begun and we are merely commencing to incorporate these elements into our conventions and language.

    I can easily understand that certain individuals may feel fatigued or downright irritated by the computer as a music-making tool; hell, I've felt it personally quite a few times. But if I take a step back and look at it from a wider perspective, there's no way the computer is going anywhere, and we should simply work hard as creatively inclined individuals to use this tool in the way that represents us best; the character will shine through.

    • Guy says:

      I apologize for this delayed response. You make some valid points about the musical use of the computer. The computer is *a part* of nature, not apart from it. Yet there is something new and different about the digital, and that’s one of the things I’m fascinated by.

  2. Robin Parmar says:

    Carr’s article is so flawed, at every turn, that I am considering assigning it to my students for critique. There is no “characteristic sound” of digital audio. (If there is, then it exists also in vinyl, since every record goes through a digital stage in mastering!) This simplistic and regressive idea is symptomatic of the “digital angst” I have examined in detail. Her use of Serres is poor and her characterisation of cyberpunk is way off.

    Besides this, there is also no “trend” away from computers. Carr cherry-picks her data with a few obscure names. I imagine that Bjork, Autechre, Carsten Nicolai (Alva Noto), Ryoji Ikeda, and thousands of other musicians do not share the naive ideas put forward here: that with digital there is no imperfection, no spontaneity, no improvisation — what rubbish!

    Your thesis is therefore based on a false premise. “Imperfection is a key characteristic of lo-fi sounds, and it is caused by the inherent instability of the instruments and the subtle variations of manual control”. Are you saying that one cannot code “instability”? In fact, it is the easiest thing to do! All but the most naive synthesis algorithms implement such characteristics, which is why we have replicas of analogue instruments… complete with quirks. Or we could simply buy a controller with loose knobs!

    I could make a robust argument exactly opposite to yours: that analogue technologies REMOVE complexity. Simply examine the spectrogram of a rich harmonic sound before and after it has been recorded to cassette tape. Which one is more complex?

    Whether we want or need complexity in music (and how much) is another question.

    Besides this, I agree with everything Le Berger wrote.

    “Yet there is something new and different about the digital, and that’s one of the things I’m fascinated by.”

    Then you might want to read my recent paper. Digital is not different so much as it throws into sharp relief certain characteristics that previously we could safely ignore.

  3. Guy says:

    Thanks, Robin. It’s always good to engage with a challenging response.
    I’m not sure that the existence of counter-examples to Kate Carr’s idea of ‘computer fatigue’ negates the premise of my thesis, since my argument is that what looks like ‘computer fatigue’ is better explained as a search for greater sonic complexity. I am certainly not saying that “one cannot code ‘instability'” – as you suggested – because, as I wrote above:
    “To generate the same kind of pleasing variation in digital methods requires either painstaking adjustment of many details or the effort to introduce some randomness or non-linearity into the methods.”
    In other words, to get a similar kind of instability/complexity in digital sounds requires either the time and effort to create these oneself or the use of ready-made presets/systems. As you pointed out, it's relatively easy to create instability, but my point is that it's difficult to achieve the particular kind of instability that we value in musical sounds. Computer musicians therefore face a dilemma: either put in some time and effort with the technology, or use presets (with the drawback of those being sterile, inflexible, impersonal – i.e. the same musical qualities that cause 'computer fatigue' in the first place).
    You raise an interesting point about the complexity of analogue technologies. I'd like to put that to the test – to find out experimentally what it actually does to the sound – because whilst the characteristics of tape might reduce complexity in some ways (e.g. through its limited bandwidth and reduced dynamics), they would also increase complexity in other ways (adding harmonic distortion, wow and flutter). And even if tape is one example of an analogue technology that reduces complexity, that does not necessarily undermine the point that many lo-fi technologies have inherent instabilities that contribute to sonic complexity and which – for some musicians – provide the grounds for using physical instruments as opposed to digital.
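    As a rough sketch of the kind of before-and-after comparison I have in mind (and only a sketch – the 'tape' here is just a slow pitch wobble plus soft saturation, not a real tape model), one could compare a simple spectral measure of a test tone before and after processing:

    ```python
    import numpy as np

    SR = 44100
    t = np.arange(SR) / SR
    clean = 0.8 * np.sin(2 * np.pi * 440 * t)  # plain 440 Hz test tone

    # Crude stand-in for 'tape': a slow pitch wobble (wow) plus soft saturation
    # (harmonic distortion). These are assumptions for illustration only.
    wow = 0.002 * np.sin(2 * np.pi * 0.7 * t)
    taped = np.tanh(2.0 * 0.8 * np.sin(2 * np.pi * 440 * (t + wow))) / np.tanh(2.0)

    def spectral_entropy(signal):
        """Shannon entropy of the normalised power spectrum: one crude proxy
        for how spread out (i.e. 'complex') the spectrum is."""
        power = np.abs(np.fft.rfft(signal)) ** 2
        p = power / power.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    print('clean tone:', round(spectral_entropy(clean), 2))
    print('taped tone:', round(spectral_entropy(taped), 2))
    ```

    On this one crude measure the added distortion and wow should register as an increase, but a proper test would need real tape, real musical material and measures that reflect perceived complexity rather than a single spectral statistic.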
    Thanks for the link to your paper. It’s interesting how your description of ‘digital angst’ differs from Kate Carr’s ‘computer fatigue’. I like your claim that “The ear is digital” – it suggests that ‘digital’ is as much a subjective viewpoint/approach to sound as it is an objective property of sound objects. And I share your position on the ‘corpuscular’ nature of sound – more closely aligned with Gabor’s granular theory of sound than with Fourier’s model of infinite timeless series.

    • Robin Parmar says:

      Thanks for your considered reply. In retrospect, I could have been clearer in separating out your concerns from Carr's. In the following I will use "analogue" and "digital" in their colloquially accepted ways and not deconstruct them.

      It is quite easy to perform an experiment with a source sound processed either digitally or through analogue, and examine the difference. In fact, recording engineers have been doing this since the mid-eighties, when digital came online. I recorded 12-bit digital audio in 1987 and was hardly the first. At that point it sounded crap, but the potential was obvious. Fast-forward 25 years and it's safe to say that analogue artefacts are all well-studied, to the point of exact replication in digital. "Exact" means that no-one could hear the difference in a "double blind" ("double deaf"?) test. In other words, I make these assertions based on a phenomenological perspective that respects the listener. We now have software to emulate microphones, amplifiers, pre-amps, mixers, outboard gear, tape saturation and every other analogue process. We have expert reviews where trained engineers compare a plugin to a 24-track deck with specially sourced tape formulations. The conclusion is that the plugin gives an authentic sound and more control with fewer distractions from the creative process. It is safe to conclude that in the field of audio, we have reached the point of full simulation.

      Why has all this effort been made? Because people value the “analogue sound”. Equally, just as many people have embraced the sounds of digital and post-digital music. It comes down to only aesthetics and fashion, something Carr missed out on totally, because this would undermine her arguments. The reason people use an analogue mixer and reel-to-reel tape instead of a computer is not because it’s “easier”… it isn’t. It’s not because it allows mistakes, since most of their time will be spent trying to fix things. It’s not because the same sounds can’t be formed digitally. It’s because it’s “cool”. And because of the digital angst I’ve already investigated. Fear and fashion.

      This is not to deny that there are things one can more readily do with certain analogue processes. But it *is* to say that these are not in truth the motivating factors. Turntablism was long a good example, but DJs are now quite happy with Serato. The use of long tape loops might be another use case, if people didn't have looper pedals. (I can remember the fun of stringing tape all around a control room!) But I think that the tactile nature of analogue is overplayed by those who wish to raise false distinctions. Certainly the recent analogophiles have never had to suffer the anguish of tape alignment, photo negative cleaning, film splicing, a wonky cable, or losing your best take to oxide flaking. These "accidents" can well be championed now by those sitting in the lap of luxury, twiddling their iPhones!

      To address your points. My disagreement with your contention about coding instability remains. You wrote:

      “To generate the same kind of pleasing variation in digital methods requires either painstaking adjustment of many details or the effort to introduce some randomness or non-linearity into the methods.”

      Painstaking? Composers have long used stochastic, chaotic, and other processes in order to provide appealing variation in the timbre, duration, envelope, rhythm, and structures of their music. It may have been hard work when Xenakis did it, but now we have simple objects to put in the audio chain of Max, Pd, Reaktor, whatever. Even those who resort to presets get baked-in oscillator instability, noise, distortion, and many other artefacts. They have no work to do except twiddle a knob. The simple method of playing a filter sweep on the hi-hats of an X0X drum line is enough to make a repeating pattern sound pleasing and less "digital". That's not "painstaking"; it's common practice.

      I am not sure why you think it is “difficult to achieve the particular kind of instability that we value in musical sounds”. Is this based on characteristics like tape wow and flutter or vinyl scratches? Because no-one ever valued these. It is only post-factum that they have been fetishised.

      As for the evil of “presets”, this needs further consideration. On my analogue keyboard in 1980 I had a bank of 16 presets that every other person also had, largely because this interface was modelled on an organ. When was the last time someone complained that their organ was “sterile, inflexible, impersonal”? Today, my digital patch has 2048 presets, each with 16 easy-to-access variations and several hundred others I can drill down to. That’s — what? — billions of sounds? Certainly we should not be using the same word “preset” to cover both of these cases?

      In fact, I would argue that the sense of fatigue comes from having too many choices, not too few. Those who are fatigued should be more introspective in their search for the causes. Constructing artificial external divisions and then blaming those is only a diversion. Personally I never feel fatigued, only exhilarated and privileged to live in such a time.

      But back to your argument, which contains an inherent error of logic. On the one hand, computer presets make everything too easy and homogeneous, so we need to go back to analogue technology to make things more difficult — and also (somehow) more “personal”. But if we strive to create individual sounds using digital technology, we are chastised for the work being too difficult! Which is it going to be? Is it not true that the difficult work we put into our digital sounds (a term I have only been provisionally accepting as one with meaning) makes these personal (or at least as “personal” as they need be)?

  4. Pingback: Exploring the Adjacent Possible in Music
