Processing – Viewport Cameras – PeasyCam & OCD conflicts?

I’d been having some issues mixing the PeasyCam and Obsessive Camera Display (OCD) libraries in the same Processing sketch – which I’ve now sort of resolved and learned a thing or two in the process…

[Image: PeasyCam home page]

I found that switching between a PeasyCam and a non-configured OCD viewport camera feed made PeasyCam behave idiosyncratically: loss of mouse control, some offsetting of Z-axis distance (only on the first switch back from an OCD feed – it’s consistent from then on), and clipping of the scene – as if the farClip setting in OCD had likewise been applied to PeasyCam, though clipping isn’t actually assignable within PeasyCam itself.

Turns out PeasyCam is now at v0.8.1 – I was running an older version – and the documentation now includes a number of methods I hadn’t noticed before.

Calling camera.setMouseControlled(true); in the draw(){} loop sorted out the mouse control, and once farClip was set to a reasonable distance in OCD, PeasyCam wasn’t affected… though I didn’t quite get to the bottom of the Z-axis alignment issues.
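For reference, the switching setup boils down to something like this – a minimal sketch rather than my actual code, with my own names (useOCD) and a farClip of 5000 standing in for a ‘reasonable distance’:

import peasy.*;
import damkjer.ocd.*;

PeasyCam peasy;
Camera ocd;
boolean useOCD = false; // toggle with any key

void setup() {
  size(600, 600, P3D);
  peasy = new PeasyCam(this, 500);
  // full OCD constructor – farClip set explicitly rather than left unconfigured
  ocd = new Camera(this, 0, 0, 500, 0, 0, 0, 0, 1, 0, PI/3, float(width)/height, 1, 5000);
}

void draw() {
  background(0);
  if (useOCD) {
    peasy.setMouseControlled(false); // hand control to OCD
    ocd.feed();                      // apply the OCD viewport camera
  } else {
    peasy.setMouseControlled(true);  // restore PeasyCam’s mouse control on switching back
  }
  box(100);
}

void keyPressed() {
  useOCD = !useOCD;
}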

Additionally, Heads Up Display looks interesting and, though I didn’t quite manage to implement it, an example from the guicomponents library – “Not only a set of 2D GUI components (buttons, sliders, labels, drop down lists (comboboxes), text boxes etc) but multiple windows that can be used for holding controls or showing separate drawings” – that extends PeasyCam by providing sliders to control the angles did hint at a solution. I can see a HUD being useful in my performance sketches – providing a persistent onscreen ‘console’ of monome button presses, LED states and accelerometer data; MIDI notes and controller data; OSC data from an iPhone/iPod Touch etc.
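Roughly what I was aiming for, assuming PeasyCam’s beginHUD()/endHUD() pair (and with a hypothetical distance readout standing in for the monome/MIDI/OSC console):

import peasy.*;

PeasyCam camera;

void setup() {
  size(600, 600, P3D);
  camera = new PeasyCam(this, 400);
}

void draw() {
  background(0);
  box(100); // the 3D scene, under camera control

  camera.beginHUD(); // suspend the camera transform – draw in screen space
  fill(255);
  text("distance: " + camera.getDistance(), 10, 20); // a persistent onscreen readout
  camera.endHUD();   // back to camera space
}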

I managed to get CameraState state = camera.getState(); to work – and I’d hoped I could adapt my userpreset class from other sketches to store and select between them – though in practice this wasn’t possible since “The PeasyCam ‘state’ is meant to be an opaque, meaningless object, only meant for saving the camera’s setting at a moment in time, and restoring it later” and can’t itself be saved to an array or file. However, I’ve found it is possible to create multiple state ‘holders’, e.g. CameraState CamPstate1, CamPstate2;, and I’ve managed to implement a longhand, runtime-only method to save and recall each independently – though my code isn’t particularly elegant. I’ve also set up ControlP5 (now up to version 0.4.6) to control the animation time between states – via a fader and radio buttons (a new and welcome implementation for me) – resulting in speedy or leisurely pans on the fly…
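The longhand version amounts to something like this – a sketch with hypothetical key bindings, and panMillis standing in for the value the ControlP5 fader supplies:

import peasy.*;

PeasyCam camera;
CameraState camPstate1, camPstate2; // two independent state ‘holders’
int panMillis = 1000;               // animation time – driven by the ControlP5 fader in the full sketch

void setup() {
  size(600, 600, P3D);
  camera = new PeasyCam(this, 400);
}

void draw() {
  background(0);
  box(100);
}

void keyPressed() {
  if (key == '1') camPstate1 = camera.getState();  // save the current shot
  if (key == '2') camPstate2 = camera.getState();
  if (key == 'q' && camPstate1 != null) camera.setState(camPstate1, panMillis); // animated recall
  if (key == 'w' && camPstate2 != null) camera.setState(camPstate2, panMillis);
}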

It would also be useful if I could use PeasyCam as a way to easily set up the shot – and then transfer the configuration into OCD:

float[] lookAt = camera.getLookAt();       // float[] { x, y, z }, looked-at point
float[] rotations = camera.getRotations(); // x, y, and z rotations required to face camera in model space
double distance = camera.getDistance();    // current distance
float[] position = camera.getPosition();   // x, y, and z coordinates of camera in model space

typically outputting:

getLookAt():
[0] 1000.0
[1] 300.0
[2] 0.0

getRotations():
[0] -0.9600173
[1] 0.048165336
[2] -0.8826245

getDistance():
579.2877927837172

getPosition():
[0] 1027.8907
[1] 774.0031
[2] 331.8396

This gets me most of the info I need from PeasyCam – now I need to work out how to coherently map these values into OCD, which has a different set of variables and methods…

Camera(parent,
cameraX, cameraY, cameraZ,
targetX, targetY, targetZ,
upX, upY, upZ,
fov, aspect, nearClip, farClip)

where

cameraX                float: x coordinate for the camera position
cameraY                float: y coordinate for the camera position
cameraZ                float: z coordinate for the camera position
targetX                float: x coordinate for the center of interest
targetY                float: y coordinate for the center of interest
targetZ                float: z coordinate for the center of interest
upX                        float: x component of the "up" direction vector, usually -1.0, 0.0, or 1.0
upY                        float: y component of the "up" direction vector, usually -1.0, 0.0, or 1.0
upZ                        float: z component of the "up" direction vector, usually -1.0, 0.0, or 1.0
fov                float: field-of-view angle in vertical direction
aspect                float: ratio of width to height
nearClip        float: relative z-position of the near clipping plane
farClip                float: relative z-position of the far clipping plane

which has been pretty straightforward – bar a little offsetting, due I think to there being no direct way to map PeasyCam rotations onto OCD viewport cameras.
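In outline the transfer looks like this – a sketch rather than my actual code, with an assumed (0, 1, 0) up vector, Processing’s default fov and aspect, and hypothetical clip values; note that getRotations() goes unused, which is presumably where the offsetting comes from:

import peasy.*;
import damkjer.ocd.*;

Camera ocdFromPeasy(PeasyCam camera) {
  float[] pos = camera.getPosition();  // camera location in model space
  float[] look = camera.getLookAt();   // looked-at point
  return new Camera(this,
    pos[0], pos[1], pos[2],            // cameraX, cameraY, cameraZ
    look[0], look[1], look[2],         // targetX, targetY, targetZ
    0, 1, 0,                           // up vector – assumed, since rotations don’t transfer
    PI/3, float(width)/height,         // Processing’s default fov and aspect ratio
    1, 5000);                          // nearClip, farClip – hypothetical values
}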

While I’ve now successfully managed to “set up several cameras and switch between them”, the next stage with OCD is to explore the various methods and “manipulate individual cameras using standard camera movement commands”.
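Something like the following, assuming OCD’s shorter Camera(parent, x, y, z) constructor and its dolly()/pan() movement commands, with my own key bindings:

import damkjer.ocd.*;

Camera cam1, cam2;
Camera active;

void setup() {
  size(600, 600, P3D);
  cam1 = new Camera(this, 0, 0, 500);   // two viewport cameras…
  cam2 = new Camera(this, 500, -200, 0);
  active = cam1;
}

void draw() {
  background(0);
  active.feed();                        // …feed whichever is active
  box(100);
}

void keyPressed() {
  if (key == '1') active = cam1;        // switch between cameras
  if (key == '2') active = cam2;
  if (key == 'd') active.dolly(10);     // standard movement commands on the active camera
  if (key == 'p') active.pan(radians(2));
}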

While I’m also generally happy with the behaviour of the Light Wave Virtual model, there’s still more functionality to explore with TRAER.PHYSICS 3.0 – particularly Custom Forces – such as audio input – but perhaps also Attraction.
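Attraction, at least, looks like just a couple of calls – a minimal sketch using traer.physics’ makeAttraction(), with the strength and minimum-distance values picked arbitrarily:

import traer.physics.*;

ParticleSystem physics;
Particle anchor, mover;

void setup() {
  size(400, 400);
  physics = new ParticleSystem(0, 0.1);             // no gravity, a little drag
  anchor = physics.makeParticle(1.0, width/2, height/2, 0);
  anchor.makeFixed();                               // pin the attractor in place
  mover = physics.makeParticle(1.0, 50, 50, 0);
  physics.makeAttraction(anchor, mover, 5000, 10);  // strength, minimum distance – arbitrary values
}

void draw() {
  physics.tick();                                   // advance the simulation
  background(255);
  fill(0);
  ellipse(anchor.position().x(), anchor.position().y(), 10, 10);
  ellipse(mover.position().x(), mover.position().y(), 6, 6);
}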

Learnt a bit about the datatypes long, double and float, and how to convert between them…
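The gist, in a hypothetical fragment (explicit casts narrow; widening is automatic):

double distance = 579.2877927837172;     // PeasyCam’s getDistance() hands back a double
float farClip = (float) distance * 10;   // OCD’s constructor wants floats – cast to narrow
long panMillis = (long) (2.5 * 1000);    // setState()’s animation time is a long, in milliseconds
float asFloat = panMillis;               // long to float widens automatically, no cast needed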

Typically, all this googling led to some interesting Processing references and projects:

  • The Processing camera() reference includes links to beginCamera(), endCamera(), frustum() and perspective(), which look worth investigating;
  • Yannis Chatzikonstantinou’s volatile prototypes blog – “Research and experimentation on computation-enabled integration of the heterogeneous information and processes taking part in the design of physical objects…” – features a range of Processing projects, including the code, such as aab – “an automatic composer of forms vaguely and loosely reminiscent of engineering aesthetics” – and fib3 – “a basic real time cloth simulator” – some requiring libraries, which led me to;
  • MSAFluid for Processing (from the same people who make MSA Remote for the iPhone) – “a library for solving real-time fluid dynamics simulations” – which includes a working mouse-controlled and TUIO demo, which in turn led me to the already familiar (but still requiring its own library);
  • the Processing TUIO Client API – “two demo sketches and a library which allows Processing to receive TUIO messages from any TUIO enabled tracker” – and reacTIVision 1.4 – “a toolkit for tangible multi-touch surfaces”.
