On building a tonoscope…
A collaboration with Anthony Rowe of squidsoup – an ideal partner for this project, since squidsoup’s work focuses on exploring minimalist, interactive, virtual visual and aural environments.
What are we trying to achieve?
Ultimately the aim would be to create a beautiful and musically responsive visualisation device and/or piece of programming that could be used by both TSP and squidsoup for live gigs and/or exhibitions.
From what I’ve seen on youtube.com, analogue tonoscopes take a little time to settle down between tones, as illustrated HERE (about 4:40 in). To make a musically responsive tonoscope we’d need to speed these transitions up, making them almost instantaneous. The device could then create a unique pattern for each tone of a melody, however fast the notes change – although I admit there’s something aesthetically pleasing in the transitions.
There are a number of possible approaches:
- build a prototype analogue tonoscope, experiment with it and film the outputs – then build a virtual one that analyses frequency and amplitude (FFT analysis) to play back the appropriate output from the recordings.
- Anthony suggests “make a hybrid one; part real, part virtual, then use that as the basis for chopping up in realtime”.
- attempt to define and model the physics and build a virtual one from the outset – running this alongside an analogue version to compare and contrast effects.
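The frequency-and-amplitude analysis mentioned in the first approach can be sketched quite simply. This is a minimal sketch using NumPy, assuming a fixed frame size and CD-quality sample rate – a real version would run frame by frame on a live input:

```python
import numpy as np

def dominant_tone(frame, sample_rate=44100):
    """Return (frequency_hz, amplitude) of the strongest partial in one audio frame."""
    # Window the frame to reduce spectral leakage before the FFT.
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    peak = np.argmax(spectrum[1:]) + 1  # skip the DC bin
    return freqs[peak], spectrum[peak]

# A 440 Hz test tone in a 4096-sample frame:
t = np.arange(4096) / 44100
freq, amp = dominant_tone(np.sin(2 * np.pi * 440 * t))
```

The detected frequency could then index into a bank of filmed outputs (or drive a virtual plate) for playback.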
Later development could include a multi-tonal version – layering different elements of a track on top of each other – bass line, lead melody, chordal progressions etc – with different colour or shaped particles to create some fantastic realtime visualisations of an entire track.
A google.com search for “tonoscope” turns up a few interesting entries (albeit most are a bit too metaphysical for me):
The tonoscope was constructed to make the human voice visible without any electronic apparatus as an intermediate link. This yielded the amazing possibility of being able to see the physical image of the vowel, tone or song a human being produced directly. Not only could you hear a melody – you could see it.
Is there a connection between sound, vibrations and physical reality? Do sound and vibrations have the potential to create?
The article references Chladni figures – discovered by the physicist Ernst Chladni, 1756-1829, who laid the foundation for acoustics, the science of sound.
What we are seeing… is primarily two things: areas that are and are not vibrating. When a flat plate of an elastic material is vibrated, the plate oscillates not only as a whole but also as parts. The boundaries between these vibrating parts, which are specific for every particular case, are called node lines and do not vibrate. The other parts are oscillating constantly. If sand is then put on this vibrating plate, the sand (black in the illustration) collects on the non-vibrating node lines. The oscillating parts or areas thus become empty. According to Jenny, the converse is true for liquids; that is to say, water lies on the vibrating parts and not on the node lines.
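The node lines described above can be approximated numerically. For an idealised square plate, a common textbook approximation for mode (m, n) is a superposition of two degenerate modes, cos(nπx)cos(mπy) − cos(mπx)cos(nπy); the sand would gather where this amplitude is near zero. The mode numbers and threshold here are illustrative, not a model of any real plate:

```python
import numpy as np

def chladni_amplitude(m, n, size=200):
    """Approximate standing-wave amplitude over a unit square plate for mode (m, n)."""
    x = np.linspace(0, 1, size)
    X, Y = np.meshgrid(x, x)
    # Superposition of two degenerate modes; node lines are where this sums to zero.
    return (np.cos(n * np.pi * X) * np.cos(m * np.pi * Y)
            - np.cos(m * np.pi * X) * np.cos(n * np.pi * Y))

amp = chladni_amplitude(3, 5)
# "Sand" settles where the plate barely moves:
node_mask = np.abs(amp) < 0.05
```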
And Bowditch curves, also known as Lissajous figures – the result of two sine curves meeting at right angles.
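A Lissajous figure is just the parametric plot of those two perpendicular sine waves. A minimal sketch – the 3:2 frequency ratio and the phase offset are arbitrary choices:

```python
import numpy as np

def lissajous(freq_x, freq_y, phase=np.pi / 2, points=1000):
    """Points of a Lissajous curve: x and y driven by sines at right angles."""
    t = np.linspace(0, 2 * np.pi, points)
    return np.sin(freq_x * t + phase), np.sin(freq_y * t)

# A 3:2 frequency ratio gives the classic interleaved figure.
x, y = lissajous(3, 2)
```

With integer frequency ratios the curve closes on itself; irrational ratios fill the square over time.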
In the closing chapter of the book Cymatics, Jenny sums up these phenomena in a three-part unity. The fundamental and generative power is in the vibration which, with its periodicity, sustains phenomena with its two poles. At one pole we have form, the figurative pattern. At the other is motion, the dynamic process.
Mirror Tonoscopes, Milton Metfessel, Harrison Musgrave
JSTOR: The American Journal of Psychology: Vol. 46, No. 3 (Jul., 1934), pp. 478-480
From what I’ve uncovered so far I surmise:
- cymatics is based on standing wave patterns created through vibration – mostly on a 2D plate but also in 3D air columns,
- these patterns vary not only with the frequency and amplitude of the vibration/sound source but also with the physical properties – dimensions, elasticity etc. – of the medium,
- standing waves are created as part of the resonant system induced through the interference of bounded waves…
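The last point can be made concrete: two identical waves travelling in opposite directions sum to a standing wave, sin(kx − ωt) + sin(kx + ωt) = 2 sin(kx) cos(ωt), whose nodes at kx = nπ stay fixed whatever the time. A quick numerical check, with arbitrary wavenumber and frequency:

```python
import numpy as np

k, omega = 2 * np.pi, 3.0          # arbitrary wavenumber and angular frequency
x = np.linspace(0, 2, 401)

def standing_wave(t):
    """Sum of two counter-propagating waves at time t."""
    return np.sin(k * x - omega * t) + np.sin(k * x + omega * t)

# The node at x = 0.5 (where k*x = pi) stays at zero for all t,
# while the antinode at x = 0.25 oscillates with amplitude 2.
node = np.argmin(np.abs(x - 0.5))
antinode = np.argmin(np.abs(x - 0.25))
node_values = [standing_wave(t)[node] for t in np.linspace(0, 10, 50)]
```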
The implication of these ideas for us is that I don’t think we need to build a virtual particle system in which each particle knows where it is in relation to every other particle – as is the case for flocking behaviours, for example. All the particles need do is be introduced randomly into the system and then find the lowest-energy points, i.e. the nodes – vis-à-vis the description of Chladni figures above – allowing us to see the low- or zero-energy lines within the resonating system – and this, we agree, is a much easier task.
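That independence can be sketched directly: each grain samples only the local vibration amplitude and accepts a small random hop only when it lands somewhere quieter, so the grains pile up on the node lines with no particle-to-particle communication at all. A toy sketch – the amplitude field, grain count, step size, and iteration count are all arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def amplitude(p):
    """Toy standing-wave amplitude field on the unit square (illustrative mode 3,5)."""
    x, y = p[:, 0], p[:, 1]
    return np.abs(np.cos(5 * np.pi * x) * np.cos(3 * np.pi * y)
                  - np.cos(3 * np.pi * x) * np.cos(5 * np.pi * y))

# Scatter grains randomly, then let each one hop to a nearby spot
# only when the hop lowers the vibration amplitude it feels.
grains = rng.random((500, 2))
start_mean = amplitude(grains).mean()
for _ in range(300):
    proposal = np.clip(grains + rng.normal(0, 0.01, grains.shape), 0, 1)
    better = amplitude(proposal) < amplitude(grains)
    grains[better] = proposal[better]
```

After a few hundred iterations the grains have migrated onto the node lines, which is exactly the behaviour we want to render in real time.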