An animation of mine is to be screened this week at the Bonington Gallery, Nottingham, 15-19 October 2012. The exhibition, Chromista (named after photosynthetic aquatic microorganisms), is curated by Geoff Litherland and Jim Boxall, and focuses on video works that “exploit the physical surface of the projected image; light and imagery is abstracted to create works whose process of creation dictates the final image.”
The animation is generated with the flam3 (fractal flames) algorithm by Scott Draves (aka Spot) and Eric Reckase. 9,600 frames were generated and played back at 16 fps to produce a ten-minute video. The animation forms a loop made of four separate but related patterns, each of which cycles once and morphs into the next, shifting the colour spectrum a quarter turn at the same time. The earliest versions of this animation had far fewer frames: 20 or so at first, then a hundred. Generating more frames in an attempt to smooth the animation didn't slow it down as one might expect. Parts that had been moving too fast to see became visible, which suggests that the blurry, fast-moving streaks are features yet to be revealed at higher frame rates. A defining characteristic of fractals is that similar patterns are revealed as one zooms in to see more detail; in this case, more detail is revealed by ‘zooming in’ in time. It is as if the motion is fractal as well as the structure: a temporal fractal as well as a spatial one. The frames for the latest version took two days to render on a fast computer, and the computational resources required to render greater detail grow exponentially, making it difficult to explore further. The latest version represents the most detailed exploration thus far. Here is an early version, based on 2,000 frames:
The soundtrack is based on a visual complexity analysis of the 9,600 image frames that constitute the animation. The file size of each PNG frame and the number of unique colours per frame yield two strings of quantitative data representing aspects of visual complexity. These data are sonified in two ways:
- Conversion of the two sets of 9,600 data points to the L and R channels of a .WAV file yields an audio file of roughly 0.22 seconds at a sample rate of 44,100 Hz. This file is then stretched out to 10 minutes so that the audio matches the length of the video. Here is the original output audio:
- The data is output as MIDI files, encoded as pitch and volume, which modulate the parameters of VSTi synthesizers (Synth1 and XILS3). Ten tracks produced by this process are used in the current version: some modulate white and coloured noise, others use various audio waveforms and envelopes. In some tracks it is possible to hear the 16 beats per second that correspond to the video frame rate of 16 fps.
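As a sketch of the first method, assuming the per-frame file sizes and colour counts have already been collected into two lists (e.g. via `os.path.getsize` and Pillow's `Image.getdata`), writing them out as the channels of a stereo WAV might look like this. The function name, the normalisation, and the crude repeat-based stretch are my own illustrations, not the original pipeline:

```python
import struct
import wave

def data_to_stereo_wav(left, right, path, rate=44100, stretch=1):
    """Write two equal-length data series as the L and R channels of a
    16-bit stereo WAV, one data point per sample, each sample optionally
    repeated `stretch` times as a crude time-stretch."""
    assert len(left) == len(right)

    def normalise(xs):
        # Map the raw values (file sizes, colour counts) onto the
        # signed 16-bit sample range -32768..32767.
        lo, hi = min(xs), max(xs)
        span = (hi - lo) or 1
        return [int((x - lo) / span * 65535) - 32768 for x in xs]

    l, r = normalise(left), normalise(right)
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        for a, b in zip(l, r):
            w.writeframes(struct.pack("<hh", a, b) * stretch)

# 9,600 samples at 44,100 Hz last about 0.22 s; repeating each sample
# ~2,756 times brings that up to roughly ten minutes.
```

Repeating samples preserves the stepped, per-frame character of the data; a dedicated time-stretching tool, which the stretch described above was presumably done with, would instead smear each sample into a sustained texture.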
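The second method can be sketched similarly. The post does not give the exact mapping from data to MIDI, so the rescaling below and the chosen pitch and velocity ranges are illustrative assumptions; the only firm constraint is that MIDI note numbers and velocities fit in 0-127:

```python
def to_midi_range(values, lo_out=0, hi_out=127):
    """Linearly rescale a data series into a MIDI value range,
    e.g. note numbers or velocities (both limited to 0-127)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1
    return [lo_out + round((v - lo) / span * (hi_out - lo_out))
            for v in values]

# One note event per video frame: at 16 fps each event lasts 1/16 s,
# which is the 16-beats-per-second pulse audible in some tracks.
sizes = [14200, 15800, 13900, 16100]       # illustrative file sizes
pitches = to_midi_range(sizes, 36, 96)     # assumed pitch range (C2-C7)
velocities = to_midi_range(sizes, 20, 127) # assumed velocity range
```

Writing the resulting events to an actual .mid file is then a matter for a MIDI library; the synthesis itself happens in the VSTi instruments the file drives.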
The result of the process is that every change in the audio parameters corresponds to a change in the two measured aesthetic properties of the video. The work is an experiment in synchresis, the perceptual interaction of sight and sound, a concept developed by film theorist Michel Chion. In this experiment, the complexity of aesthetic information provides the basis of the relationship between image and sound. It is not a wholly successful experiment, because there are visible happenings that have no counterpart in the current soundtrack. The next step will be to calculate the rate of change in each parameter (i.e., the first derivative of the values) as the basis for modulating the sounds. This may strengthen the synchresis effect by bringing the auditory changes more in line with those of the visuals.
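The rate-of-change step described above is simple to sketch: a first difference of the per-frame series, the discrete stand-in for the first derivative:

```python
def first_difference(values):
    """Frame-to-frame change in a data series: d[i] = v[i+1] - v[i].
    Modulating synth parameters with this series makes the audio
    respond to *changes* in visual complexity rather than its level."""
    return [b - a for a, b in zip(values, values[1:])]

# A steady passage yields zeros; a visual jump yields a spike.
first_difference([10, 10, 14, 9])   # -> [0, 4, -5]
```

Driving the synthesizers with this series would leave still passages quiet and make sudden visual events audible as transients, which is closer to how synchresis is usually perceived.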
The soundtrack in this video is the 8th version. The 6th version was posted on SoundCloud about a year ago and is available to download: