A Personal Philosophy & Approach to Audiovisualisation

Wearing my Cybersonica hat I’ve started InProcess:ing, a regular monthly open forum for sharing creative coding and practice, in association with Madlab.

I gave a ‘show and tell’ presentation about one of my own Processing sketches at the launch event – though Kyle McDonald’s presentation on his 3D scanning technique was the main attraction – and since it seems to outline my general philosophy and approach to audiovisualisation reasonably well, I thought I’d include an edited transcript here.

There’s fairly poor-quality video documentation on the Cybersonica Vimeo site… with useful links in the session output posting.


Spot the Difference: Tom Wilkinson’s Light Wave and Prodical’s Light Wave Virtual

“… Programming does not come naturally to me – but because it helps me realise creative ideas like no other way I know – I persevere with it. Quite often the sense of satisfaction I get from coding is actually just in stopping… At best I’m a code-jockey – I find bits of code that do things I like – and I then attempt to mix them together – frequently with messy results… but the more I do it the easier it becomes…

I know my strengths – and what I’m good at is an inherent Magpie talent for selecting ‘bright shiny things’ – I recognise a lovely bit of code when I see it – and extrapolation – finding an example and extending it – and probably then – with my usual musician’s approach – twisting and bending it ‘til it breaks.

…I’m a longstanding musician and music technologist… and as a practitioner I’ve most recently been a core member of The Sancho Plan – during their early development… but now my focus is on Monomatic – my ongoing collaboration with the composer and performance system designer Nick Rothwell, a.k.a. Cassiel.

We’re planning on using Processing to deliver real-time controlled visualisations to accompany our music – mostly for live performance but also for making audiovisual shorts – where we run visuals on one machine sitting side-by-side with another making the music, and where both receive the same performative inputs from our instruments and controllers. What we’re working towards – and it is taking some time to develop – is an integrated system where sound and visuals can be arranged simultaneously. On the spectrum from ‘make the music then shoot the video’ to ‘shoot and edit the film then compose the score’, we’re looking to develop a methodology that’s a halfway house – a setup which allows us to compose audio and visuals simultaneously, and in response to each other.
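As a rough illustration of that shared-input arrangement, here is a minimal Processing sketch of the visuals machine listening for performance data, assuming the instruments and controllers broadcast OSC (using Andreas Schlegel’s oscP5 library). The /monomatic/note address, its velocity argument and the port number are hypothetical stand-ins rather than our actual message scheme.

import oscP5.*;

OscP5 osc;
float brightness = 0;  // driven by incoming note velocity

void setup() {
  size(640, 360);
  // Both the audio machine and the visuals machine would listen
  // for the same broadcast messages; port 9000 is an assumption.
  osc = new OscP5(this, 9000);
}

void draw() {
  background(0);
  fill(255, brightness);
  ellipse(width/2, height/2, 200, 200);
  brightness *= 0.95;  // let each hit decay away
}

// Called by oscP5 whenever an OSC message arrives.
void oscEvent(OscMessage m) {
  // "/monomatic/note" carrying pitch and velocity is a hypothetical scheme.
  if (m.checkAddrPattern("/monomatic/note")) {
    float velocity = m.get(1).floatValue();
    brightness = map(velocity, 0, 127, 0, 255);
  }
}

The music machine would subscribe to exactly the same stream, so both renderings stay locked to the one performance.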

I’ve been on a quest to find this ‘golden fleece’ of audiovisual composition for quite some time… and The Sancho Plan did get me closer than ever before… and though I really like the engaging characters, surrealist environments and focus on narrative – and I admire much about Ed Cookson’s overall vision for The Sancho Plan – he is first-and-foremost an animator. The upshot is that the animation production takes so long compared to the music that for me the two processes are effectively divorced – the audiovisual composition experience is nowhere near immediate enough.

So Nick and I have taken an alternative approach – building visualisations rendered and controlled in real-time. These are partly based on simple but engaging vector-graphic characters – their various simple ‘movements’ stacked as layers in a single SVG file, made visible/invisible and motion-tweened, all controlled via an innovative animation control matrix that lets us build a sequence of short animated passages on-the-fly – and partly on virtual environments which model real-world physics – springs, fluids, natural oscillation etc. – so we can create systems that behave organically, in predictable ways, and that we can control in real-time… sometimes inspired by things we come across in the real world.
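To give a flavour of the layer-stack idea, here is a minimal Processing sketch that steps through a character’s poses by toggling the visibility of named SVG layers. The file name character.svg and the pose_0 to pose_3 layer names are hypothetical; this is a sketch of the technique, not our animation control matrix.

// Each 'movement' of a character lives as a named layer in one
// SVG file; frames are animated by toggling layer visibility.
PShape character;
int poseCount = 4;
int currentPose = 0;

void setup() {
  size(640, 360);
  character = loadShape("character.svg");  // all poses stacked as layers
  frameRate(8);  // step poses at a hand-animation tempo
}

void draw() {
  background(255);
  // Show only the current pose layer; hide the rest.
  for (int i = 0; i < poseCount; i++) {
    character.getChild("pose_" + i).setVisible(i == currentPose);
  }
  shape(character, 120, 40);
  currentPose = (currentPose + 1) % poseCount;  // advance the sequence
}

Toggling visibility is cheap, so sequences can be rearranged on-the-fly without re-rendering any artwork.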

In February 2009 I saw an art work at Kinetica Art Fair in London that I not only liked very much – but thought might be possible to model virtually using Processing…

Light Wave. Tom Wilkinson
1999. Glass, metals, nylon and electric motor. 450 x 168 x 110 cm. £40,000.00.

Originally commissioned by Enron for their London offices, the artwork was kindly given to Wilkinson by the building’s owners in 2002. He has since restored the work for the outside environment.

The wave that ripples across the glass is a true sine-wave. Gravity and kinetic energy operate in a state of equilibrium enabling the ninety ‘blades’ of glass, weighing half a ton, to be set in motion by a small motor. This work also investigates the ambiguous nature of matter; the glass in Light Wave appears to be molten, inviting us to reflect on the paradoxical nature of glass as an amorphous solid.

I’m 10 months on… and on my 10th Processing sketch iteration… and while it still has some bugs and needs refinement it’s definitely getting there… and the sketch not only shows some of my development in working with virtual models… but also with multiple cameras… and the beginnings of external real-time control.
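By way of illustration, here is a much-simplified Processing sketch of the core idea: ninety virtual ‘blades’ whose tilt follows a travelling sine wave, echoing the physical work. The blade count comes from the artwork’s description, but the dimensions, colours and wave parameters are arbitrary assumptions; this is a guess at the approach, not the actual Light Wave Virtual source.

// Ninety 'glass' blades tilted by a travelling sine wave.
int blades = 90;
float phase = 0;

void setup() {
  size(900, 360, P3D);
}

void draw() {
  background(0);
  lights();
  float spacing = width / float(blades);
  for (int i = 0; i < blades; i++) {
    pushMatrix();
    translate(i * spacing + spacing/2, height/2, 0);
    // Each blade's tilt is a sine of its position plus a phase
    // that advances over time, so the wave ripples along the row.
    rotateX(sin(TWO_PI * i / blades * 3 + phase) * QUARTER_PI);
    fill(180, 220, 255, 160);  // glassy translucency
    noStroke();
    box(4, 200, 20);           // a thin blade of 'glass'
    popMatrix();
  }
  phase += 0.03;  // the small 'motor' driving the wave
}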

So why try to do several things at once? Why not just concentrate on the virtual modelling? Well, when I make music I start with my well-seasoned virtual studio – a collection of synths and samplers, FX and processors I know and like – all laid out in a configuration of my choice.

Nick and I are working towards creating a framework in Processing that simulates this idea – but for visuals – so that before we even start a new sketch, the framework and tools we’ll need are in place: a virtual lighting rig we can use to create a real-time virtual light show, multiple cameras so we can move around and cut between different perspectives of the scene, and a persistent HUD showing a Monomatic ‘Live’ console with all the real-time input from our performance instruments and controllers rendered graphically, etc…”
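As a very rough sketch of what such a framework might look like, the following Processing skeleton combines the three ingredients mentioned above: a simple lighting rig, switchable cameras, and a persistent HUD drawn over the 3D scene. The camera positions, key binding and HUD contents are placeholders, not the Monomatic ‘Live’ console itself.

int activeCam = 0;  // which virtual camera is live

void setup() {
  size(800, 450, P3D);
}

void draw() {
  background(0);

  // --- virtual lighting rig ---
  ambientLight(40, 40, 60);
  pointLight(255, 200, 150, width/2, -200, 300);  // warm key light

  // --- switchable cameras: cut between perspectives ---
  if (activeCam == 0) {
    camera(width/2, height/2, 600, width/2, height/2, 0, 0, 1, 0);
  } else {
    camera(0, -200, 400, width/2, height/2, 0, 0, 1, 0);
  }

  // --- the scene itself (stand-in geometry) ---
  pushMatrix();
  translate(width/2, height/2, 0);
  rotateY(frameCount * 0.01);
  fill(120, 180, 255);
  box(150);
  popMatrix();

  // --- persistent HUD, drawn after resetting the camera ---
  camera();                      // back to the default view
  hint(DISABLE_DEPTH_TEST);      // keep the overlay on top of the 3D scene
  fill(255);
  text("cam: " + activeCam, 20, 30);  // stand-in for the 'Live' console
  hint(ENABLE_DEPTH_TEST);
}

void keyPressed() {
  activeCam = (activeCam + 1) % 2;  // cut to the other camera
}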

Overall, the rationale is to inform future visual production with the established practices of audio production.
