Thursday, April 26, 2007

Titan Class Vision & Quartz Composer



I’ve just about finished integrating Quartz Composer with Titan Class Vision. For those of you who know nothing of Titan Class Vision, it provides a global flight information display that tracks aircraft in real time. Our "Google Earth" style display presents new revenue-generation opportunities for airports and also serves as a goodwill initiative toward airport customers. Titan Class Vision has been installed on two displays near the A/B exit of the Terminal 1 Arrivals Hall at Sydney International Airport (T1), Australia.

Before I describe my initial requirements and how I’ve integrated Quartz Composer, here is a movie showing the animated integration.

I needed to consider using Titan Class Vision with plasma display panels and thus minimise burn-in. Titan Class Vision is already quite animated, zooming between various resolutions; however, more movement was required to eliminate the static text and the header and footer margins.

I could have programmed these animations into Titan Class Vision directly, but I’ve always felt that something like Quartz Composer should be composing Vision’s image with other digital media and effects. By externalising most of the content (other than the planet and the 3D objects that get overlaid onto it, e.g. the flight paths), I can now customise the display contents quite easily for individual customers, and with great effect!

Before I continue, thanks to everyone on the Quartz Composer forum who has helped me over the past few weeks. In particular, thanks go to “tkoelling”, Alessandro Sabatelli and Pierre-Olivier Latour.

The approach I’ve taken is to render my 3D world into a Core Video (CV) buffer. The viewport is the size of the screen and is rendered once per frame. I then pass this CV buffer to Quartz Composer (QC). QC and CV share their OpenGL resources (context), so the image that I have rendered stays on the GPU side of the fence (I believe CV holds my image as a texture). QC receives the CV buffer as an image parameter, and my compositions can then do what they need to do. Pierre describes how to do QC/CV integration on the forum, and I also posted some code there.
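In case it’s useful, here is a minimal sketch of that render step, assuming a QCRenderer created on the same OpenGL context as the buffer. Note that drawWorld and the “Vision_Image” input key are my own placeholder names, not part of any API:

#import <Quartz/Quartz.h>
#import <CoreVideo/CoreVideo.h>
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>

// Placeholder for the real OpenGL drawing of the planet and overlays.
static void drawWorld(void)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}

// Render one frame into a CVOpenGLBuffer and hand it to the QCRenderer
// as an image input. Because the renderer shares the buffer’s OpenGL
// context, the image stays on the GPU (CV exposes it as a texture).
static void renderFrame(QCRenderer *renderer, CVOpenGLBufferRef buffer,
                        CGLContextObj cglContext, NSTimeInterval time)
{
    // Bind the buffer as the current drawable (face 0, level 0, screen 0).
    CVOpenGLBufferAttach(buffer, cglContext, 0, 0, 0);
    drawWorld();

    // QCRenderer accepts a CVImageBufferRef directly as an image input.
    [renderer setValue:(id)buffer forInputKey:@"Vision_Image"];
    [renderer renderAtTime:time arguments:nil];
}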

Titan Class Vision makes no assumptions about the name and location of the QC file. Instead, I use AppleScript to pass in the path of the composition, which causes the instantiation of a new QCRenderer object (releasing any previously held one, of course). At this point I also reset the timing for my composition if it is a different file from the last one. In summary, I have an external program (which I call “Bootstrap”) that tells Titan Class Vision what to do, now including where my composition file is. Bootstrap is an AppleScript Studio application that bundles the composition file as a resource.
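For what it’s worth, the handler on the Cocoa side looks roughly like the sketch below; compositionPath, renderStartTime and qcRenderer are placeholder instance variables of a hypothetical NSOpenGLView subclass, not real API:

// Invoked when Bootstrap (via AppleScript) hands over a composition path.
// A fresh QCRenderer is built on the view’s OpenGL context, the old one
// is released, and the composition clock is reset if the file changed.
- (void)setCompositionPath:(NSString *)path
{
    if (![path isEqualToString:compositionPath]) {
        renderStartTime = [NSDate timeIntervalSinceReferenceDate];
        [compositionPath release];
        compositionPath = [path copy];
    }
    [qcRenderer release];
    qcRenderer = [[QCRenderer alloc] initWithOpenGLContext:[self openGLContext]
                                               pixelFormat:[self pixelFormat]
                                                      file:path];
}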

Bootstrap tells Titan Class Vision to zoom through different resolutions on a periodic basis, typically every 30 seconds. It now also sets input parameters on the QCRenderer so that parts of my composition are enabled or disabled. I typically have one composition file per client implementation and enable sub-patches as necessary, depending on what I want to show. This approach avoids any performance impact from instantiating a new QCRenderer object (not that there appeared to be much of an impact). Note that I do release and alloc a new QCRenderer if my context is reshaped, owing to a bug whereby QC takes note of its viewport only on instantiation; a sketch of that workaround follows.
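The workaround is tiny; in the same hypothetical NSOpenGLView subclass as above:

// QC notes its viewport only when the QCRenderer is instantiated, so on
// a reshape the renderer is discarded and rebuilt against the same file.
- (void)reshape
{
    [super reshape];
    if (compositionPath) {
        [qcRenderer release];
        qcRenderer = [[QCRenderer alloc] initWithOpenGLContext:[self openGLContext]
                                                   pixelFormat:[self pixelFormat]
                                                          file:compositionPath];
    }
}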

For your interest, Bootstrap passes input parameters to my QCRenderer via AppleScript as a URL-encoded string parameter, for example:

set composition parameters to "Fade_In=1&Fade_Out=0&World_Actual_Size=1&World_Zoomed=0"
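On the Cocoa side, a string like that can be split up and pushed into the composition’s published inputs. A rough sketch follows; the parsing is deliberately naive (no real percent-decoding), and the key names come from the example above:

#import <Quartz/Quartz.h>

// Apply a URL-encoded parameter string, e.g.
// "Fade_In=1&Fade_Out=0&World_Actual_Size=1&World_Zoomed=0",
// to a QCRenderer. Each value acts as a Boolean that enables or
// disables a sub-patch in the composition.
static void applyCompositionParameters(QCRenderer *renderer, NSString *params)
{
    NSEnumerator *pairs = [[params componentsSeparatedByString:@"&"] objectEnumerator];
    NSString *pair;
    while ((pair = [pairs nextObject]) != nil) {
        NSArray *keyValue = [pair componentsSeparatedByString:@"="];
        if ([keyValue count] != 2)
            continue;
        BOOL enabled = [[keyValue objectAtIndex:1] intValue] != 0;
        [renderer setValue:[NSNumber numberWithBool:enabled]
               forInputKey:[keyValue objectAtIndex:0]];
    }
}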


That’s about it from a high-level perspective, other than to say that I can easily get 60 fps on my relatively slow development machine (a dual 1 GHz G4). I haven’t deployed my changes to my Quad Xeon yet, but will do so in the next couple of months, and all of this should fly (pun intended)!

Please note that when rendering to a CVOpenGLBuffer, it must be treated as immutable: once you have written to it and passed it to the QCRenderer, do not write to it again. Re-using the buffer is exactly what I was doing on Tiger, and all was well. On Leopard, however, I had a problem, apparently because the QC imaging pipeline had changed.

The resolution is to use a CVOpenGLBufferPool and call CVOpenGLBufferPoolCreateOpenGLBuffer each time you want to render a new frame. After the frame has been passed to the QCRenderer, you release the CVOpenGLBuffer. Problem solved.
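Here is a sketch of that pool-based pattern; the Core Video calls and attribute keys are the real ones, while drawWorld and the “Vision_Image” input key are placeholders as before:

#import <Quartz/Quartz.h>
#import <CoreVideo/CoreVideo.h>

// One-time setup: a pool of screen-sized OpenGL buffers.
static CVOpenGLBufferPoolRef createBufferPool(int width, int height)
{
    NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:width],  (id)kCVOpenGLBufferWidth,
        [NSNumber numberWithInt:height], (id)kCVOpenGLBufferHeight,
        nil];
    CVOpenGLBufferPoolRef pool = NULL;
    CVOpenGLBufferPoolCreate(kCFAllocatorDefault, NULL,
                             (CFDictionaryRef)bufferAttributes, &pool);
    return pool;
}

// Per frame: take a fresh buffer from the pool, render into it, pass it
// to the QCRenderer, then release it. The buffer is never touched again
// once QC has seen it, which keeps Leopard’s QC pipeline happy.
static void renderOneFrame(QCRenderer *renderer, CVOpenGLBufferPoolRef pool,
                           CGLContextObj cglContext, NSTimeInterval time)
{
    CVOpenGLBufferRef buffer = NULL;
    if (CVOpenGLBufferPoolCreateOpenGLBuffer(kCFAllocatorDefault, pool, &buffer)
            != kCVReturnSuccess)
        return;
    CVOpenGLBufferAttach(buffer, cglContext, 0, 0, 0);
    drawWorld();   // as in the earlier sketch
    [renderer setValue:(id)buffer forInputKey:@"Vision_Image"];
    [renderer renderAtTime:time arguments:nil];
    CVOpenGLBufferRelease(buffer);
}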