Live video in Processing
I’ve been hitting a wall trying to extend the Mirror2 – Video (Capture) example – patch by Daniel Shiffman – particularly with mirroring on the y axis as well as reflections, distortions and multiples. Luckily I came across the Create Digital Motion article Processing Tutorials: Getting Started with Video Processing via OpenCV which shows how to extend Processing’s treatment of live video by bypassing the somewhat limited inbuilt Video library and using the OpenCV Processing and Java library – which includes a flip function – instead…
Displaying two camera sources in the same Processing sketch – code found in processing.org’s Video capture, Movie Playback, Vision Libraries board – images thresholded to protect the innocent.
“OpenCV is an open source computer vision library originally developed by Intel. It is free for commercial and research use under a BSD license. The library is cross-platform, and runs on Mac OS X, Windows and Linux. It focuses mainly towards real-time image processing…”
Despite this implementation not being a complete port of OpenCV, there’s plenty of useful functionality. I followed Andy Best‘s tutorial in CDM and his follow-up Processing OpenCV Tutorial #2 – bubbles. I also enjoyed working through the examples and compiling them into an uber OpenCV_test sketch, which helped me get to grips with the library’s functionality and get used to consulting the reference – though other resources such as the OpenCVWiki seem a bit too hardcore.
Through CDM I also found out about another Processing video library – the GSVideo GStreamer library – but I haven’t looked at it yet.
So I proceeded on a fairly circuitous route happening as always across interesting stuff:
- Mother – “a program which allows the live mixing of the output of multiple Processing sketches, in a manner not unlike VJing” – sadly only for Windows at the moment;
- Processing Blogs and Code & Form: Computational Aesthetics;
- processing.org’s Video capture, Movie Playback, Vision Libraries board (and the somewhat older Video,Camera board);
all of which have lots of interesting posts/threads and useful code…
so I headed off into a Processing sunset attempting to:
- show and switch between two camera sources;
- change the video capture size on the fly;
- show subsequent frames from video input as a grid;
- apply colour filters;
- create stereoscopic images;
- apply polar-coordinate distortion;
- open multiple Processing and ControlP5 windows;
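As a flavour of one of those experiments – showing subsequent frames from video input as a grid – here’s a rough plain-Java sketch of the bookkeeping involved: each incoming frame (a flat int[] of pixels, the way a Processing PImage stores them) gets copied into the next cell of a larger grid buffer, wrapping round when the grid is full. FrameGrid and addFrame are hypothetical names of my own, not part of Processing or OpenCV.

```java
// Hypothetical sketch: tile successive video frames into a grid buffer.
// Pixels are flat int[] arrays in row-major order, like PImage.pixels.
class FrameGrid {
    final int cols, rows, fw, fh; // grid layout and per-frame size
    final int[] grid;             // cols*fw pixels wide, rows*fh pixels tall
    int next = 0;                 // index of the next cell to fill

    FrameGrid(int cols, int rows, int fw, int fh) {
        this.cols = cols; this.rows = rows; this.fw = fw; this.fh = fh;
        grid = new int[cols * fw * rows * fh];
    }

    // Copy one frame into the next free cell, row by row, then advance
    // (wrapping back to the first cell once every cell has been used).
    void addFrame(int[] frame) {
        int cx = (next % cols) * fw;  // cell origin, x, in grid pixels
        int cy = (next / cols) * fh;  // cell origin, y, in grid pixels
        int gw = cols * fw;           // grid width in pixels
        for (int y = 0; y < fh; y++)
            System.arraycopy(frame, y * fw, grid, (cy + y) * gw + cx, fw);
        next = (next + 1) % (cols * rows);
    }
}
```

In a real sketch the grid buffer would be written back into a PImage via its pixels[] array and updatePixels(), then drawn with image().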
which was all good head-scratching fun… if a bit of a diversion… but which obviously helped my overall understanding, because by the time I came back to the problem of updating my extended Mirror2 sketch to use the OpenCV library instead of the inbuilt Video library I’d sort of worked out what I needed to do. The Processing Video library allows loadPixels() on the Capture object, i.e. video.loadPixels(); – OpenCV doesn’t… so I created a PImage, copied the video frame to it and then loaded that into pixels – and so I now have mirroring on both the x and y axes through the flip() function in OpenCV – and also a good idea of how I’ll be able to use other OpenCV functionality and manipulation of the PImage to realise reflections, distortions and multiples in the future… hooray 😉
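For anyone wondering what the flip actually does to the pixels, here’s a rough plain-Java sketch of horizontal and vertical mirroring on a PImage-style flat pixels array – the same effect OpenCV’s flip() gives you, written out by hand. Mirror, flipX and flipY are hypothetical names of my own, not part of Processing or OpenCV.

```java
// Hypothetical manual mirroring on a flat int[] of pixels, row-major
// order, width w and height h - the layout Processing's PImage uses.
class Mirror {
    // Mirror left-right (flip about the vertical axis): reverse each row.
    static int[] flipX(int[] pixels, int w, int h) {
        int[] out = new int[pixels.length];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y * w + x] = pixels[y * w + (w - 1 - x)];
        return out;
    }

    // Mirror top-bottom (flip about the horizontal axis): reverse row order.
    static int[] flipY(int[] pixels, int w, int h) {
        int[] out = new int[pixels.length];
        for (int y = 0; y < h; y++)
            System.arraycopy(pixels, (h - 1 - y) * w, out, y * w, w);
        return out;
    }
}
```

In the sketch itself the flipped array would go back into the PImage’s pixels[] followed by updatePixels() before drawing.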