Saturday, February 13, 2010


Read Stu's posting on ProLost about a new idea for a DV Rebel camera quality metric being called 'The Subway Short' (a phrase coined by Gage).


What are pixel aspect ratios?

Why do your graphic supers have funny jagged edges in Photoshop, but look fine on a television screen? For that matter, how can anamorphic formats cram so much width into a regular NTSC-type signal? The answer, simply put, is that pixels come in all shapes and sizes.

Recall that pixels are the individual points of color that make up a picture on your screen. While computer screens and similar displays usually use pixels that are square, televisions, historically, have not. In fact, the concept of a "pixel" didn't figure into analog television signals at all -- the NTSC specification called for 525 scan lines (about 480 of them visible), but the signal within those lines did not specify discrete units of width.

When the notion of digital video became a reality, the standards bodies decided that -- for both NTSC and PAL -- there would be exactly 720 pixels per line. Thus the standard-definition resolutions we know and love: 720x480 for NTSC, and 720x576 for PAL.

Now, in order for video rendered in the new 720x___ proportions to look the same as it always had on analog screens, it didn't make sense to think of the 720 dots on each row as square. NTSC video, for example, was customarily rendered at a ratio of 4 units wide by 3 units tall. That translates to 640 pixels wide for every 480 pixels tall -- not 720.

The solution, then, was to render pixels as non-square: about 0.9 units wide for every unit tall, in the case of NTSC video (and about 1.09:1 for PAL). When encoding widescreen video as anamorphic DV, the ratio became skewed to "fat" pixels -- 1.21:1.

Fortunately for all of us, modern standards like HD have evolved in an age where digital editing and dissemination are the norm. HD standards were drawn up with square pixels in mind, so pixel aspect ratios are unimportant when considering fully native HD workflows. But unfortunately, HDV at 1080i -- with a native resolution of 1440x1080 to represent HD's 1920x1080 -- assumes fat pixels just as its predecessor DV formats did, this time at a ratio of 1.33:1.
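The arithmetic behind these ratios can be sketched in a few lines. This is my own illustration, not a broadcast spec: the official PARs (e.g. 0.9091 for NTSC DV, 1.0940 for PAL DV, 1.2121 for anamorphic NTSC DV) differ slightly from this naive calculation because they account for the "active" picture area.

```python
# A sketch of where pixel aspect ratio (PAR) figures come from: the
# stored pixel grid must stretch to fill the intended frame shape, so
# PAR = (frame aspect * stored height) / stored width.
# Official broadcast PARs differ slightly (see note above).

def naive_par(frame_aspect, stored_w, stored_h):
    """Pixel width per unit of height needed to fill the target frame."""
    return frame_aspect * stored_h / stored_w

formats = {
    "NTSC DV 4:3":  (4 / 3, 720, 480),     # ~0.89 -- "about 0.9"
    "PAL DV 4:3":   (4 / 3, 720, 576),     # ~1.07 (spec figure is nearer 1.09)
    "NTSC DV 16:9": (16 / 9, 720, 480),    # ~1.19 -- "fat" anamorphic pixels
    "HDV 1080i":    (16 / 9, 1440, 1080),  # exactly 1.33
    "Full HD":      (16 / 9, 1920, 1080),  # exactly 1.00 -- square pixels
}

for name, (aspect, w, h) in formats.items():
    print(f"{name}: PAR ~ {naive_par(aspect, w, h):.2f}")
```

Note how HDV's 1440x1080 needs exactly 1920/1440 = 1.33 to reach full HD width, while true 1920x1080 HD comes out at exactly 1.0 -- which is why pixel aspect ratios drop out of fully native HD workflows.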

Of course, modern imaging tools like Photoshop and After Effects ship with a wide array of presets fully appropriate to each type of native footage. As long as you realize that these presets involve more than just codecs and pixel resolution, you should avoid nasty surprises involving "squished" graphics.

ProVideo Stunning Good Looks by Art Adams

The Manual Duck

Ahhh, the age-old struggle between Final Cut Pro and After Effects. For what seems like centuries now, we Final Cut Pro editors have been struggling to find an efficient and, moreover, convenient workflow between FCP and After Effects. Sure, products like LiveType and Motion have come along and made life easier for some tasks, but when it comes down to real motion graphics work and serious compositing, nothing beats After Effects. Have you ever put Motion's Primatte RT side by side with a key pulled from After Effects' Keylight? To me there's no comparison.

Coming from an editor's chair, not a designer's, it took me a while to really get up to speed with After Effects. In the past I used AE infrequently, for several reasons: 1. I didn't know the interface and key commands well, 2. I didn't know the software's capabilities well, 3. I was intimidated by the rigid workflow between FCP and AE. All these factors added up to an inefficient workflow, so I usually opted not to use AE in favor of a faster, more flexible option like LiveType or Motion.

However, in the past year the work we have been doing has called more and more for serious graphic design and compositing; LiveType and Motion were simply not going to cut it. So I buckled down and really learned the After Effects interface, key commands, and its capabilities. Through that hard work I quickly became much more efficient with AE and started creating some really cool stuff. But all this new-found efficiency with AE itself still did nothing to help with a round-trip workflow to and from FCP. And if we can assume anything about Apple and Adobe, there will probably never be an integrated round-trip solution between the two.

Now of course there are third-party solutions out there that help with this problem (at least half of the problem, anyway). Automatic Duck is a great third-party solution that exports Final Cut Pro projects and timelines in a format After Effects understands and converts to compositions. Bam -- half of the round-trip issue is solved right there: prepare a timeline in FCP, export with Automatic Duck into AE, and take care of your graphics and compositing. The problems? Output is still the same: you must render your comps out of AE and import them into FCP just like always. Then, if changes are needed later, you must go back to AE, make your changes, re-render the revised comp, and import it back into FCP. The other problem: Automatic Duck is expensive. If you're a home business or just struggling like everyone else in this economy, buying the plug-in may not be an option.

I call this solution The Manual Duck. It doesn’t involve any special plug-ins or any other software, it’s just a few simple steps to add to the workflow that can make the trip to and from AE much easier, and more importantly, leave less room for errors requiring revisions in After Effects later.

I had a job recently that I knew I was probably going to need to do almost entirely in After Effects. It was an image piece that involved nothing but text builds and a few stock images. The producer's instructions were simply to take the "boring" corporate message and make it "look cool." Ahhh, is there anything better than the old "Just make it look cool..." line? And what's more, from listening to the music that had been selected, it was going to be a music-driven edit.

In my opinion After Effects is not a good audio editor from a workflow perspective, and having to cut to music in After Effects can be a big hassle given that there's no "real" real-time playback or scrubbing of audio. All this added up to a perfect candidate for The Manual Duck workflow.

It's simple really: start in Final Cut Pro. I laid down the music track in an empty timeline and made the audio edit (the track did need to be cut down and mixed a bit). Once I was happy with the audio I started to block out what I wanted the shots to be using the built-in text tool. I had the script and knew what order the text builds had to go in, and with the text tool I was able to very quickly block out where the individual sentences would go. I went along through the song and timed out all the text builds, adding no style or animation of any sort. The key to this step is the speed at which you can work: just copy-paste the text clips from one edit to the next and copy-paste the next sentence from the script into the text tool. Format it just a bit so the lines can be read, and that's all you need to do. Of course, if your project is more complex you can get as complex as you'd like during this step, adding images, transitions, etc. The point is that you lay everything down and time everything out in Final Cut Pro, where you have quick real-time editing available with no significant render or RAM preview time.

The project blocked out in a FCP timeline

From there, export the timeline to a codec that After Effects plays well with. Import that QuickTime movie into AE and drop it into your comp. From here you can proceed however you prefer: scrub through the comp and add markers at the edit points, or do split-track edits. Either way you can quickly scrub the comp and see where you made edits in FCP, with no need for audio playback or scrubbing. You also now have a base layer that acts as a virtual storyboard: as you build your effects and composites, you can easily solo the base QuickTime layer to see what you blocked out next.

The FCP QT imported into an AE Comp

As a side note, after I have made my markers or split tracks I turn off the layer's visibility and lock it, both to ease RAM preview time and to avoid offsetting the layer with a stray drag.

The final composite

Now, there's nothing automated about the process and it doesn't add any sort of round-tripping between the two apps, but I've found that it helps a great deal with efficiency once in After Effects and leaves far less room for errors and mis-timing. If you can build your graphics and composites and get them right the first time, that is far more valuable than the extra time it took to block out the project in Final Cut Pro.

Color Theory for Cinematographers

At this year's San Antonio Film Academy, I gave two lectures on the three Cs of cinematography: composition, contrast, and color. Color is often overlooked by beginning DPs, yet it is an extremely powerful tool. I described color in cinematography as "the use of analogous or complementary color tones to create contrasts between elements in the frame and communicate emotional ideas to the audience."

Not a great description, but good enough for starters. Color can be used to communicate information to audiences in all kinds of ways. For example, the storyline in Steven Soderbergh’s Traffic takes place in three different places, each of which is a very different color. Viewers can instantly tell where characters are and what part of the story they are watching. This is a very obvious way to communicate basic information.

Color can also communicate emotional information. Certain cinematic conventions have developed to help with this: for example, warm lighting to convey safety and cool lighting to suggest danger are about as standard as shadows to convey mystery and brightness to signify security. Some directors, like James Cameron, stick to these conventions religiously, but others are willing to shake things up.

It can be very helpful to depart from the expected if your film requires it. Spielberg flipped the light=good/shadows=bad expectation on its head for E.T., and Ridley Scott changed all the normal color rules for Black Hawk Down. Because these films are more complex than, say, a standard comedy, forcing the audience to adjust to and rethink the world of the film is very effective.

When we first see Scott’s Somalia it looks like this – dirty, grungy, and brown. A greeny-orangey tobacco-filter brown. This is not the rich golden Africa of Sahara or Gladiator, but a dingy and dangerous place. Diesel smoke makes even the sky grubby. So far, so good.

By contrast, US soldiers live in high-tech steel barracks lit by cool halogen lights and laptop screens. Remember, cinematic convention usually says that warm tones indicate a cosy safe place and harsh blues like these mean cold clinical uncertainty, but not in Black Hawk Down. This color palette is unfamiliar territory, just like Somalia.

When Task Force Ranger goes into Mogadishu they go into the warm, brown, dangerous sunlight and bad things happen. This bright warm orange light is not safe. This is different. The audience has been thrown a curve ball, just like our heroes.

Even the command center has warmer light in it during the attacks than it did previously. The monitors are still blue, so the fill light is cool, but the key light on JSOC officers is warm, like on their men in the field. Command is just as messed up as the operation.

Ever since Saving Private Ryan war movies have tended towards a very desaturated bleach-bypass look, especially for combat scenes (including the opening scene of Gladiator). Ridley Scott and DP Slawomir Idziak have bucked the trend here as well, and it is very effective.

Finally, our men begin to find cover. Inexplicably, the basements of the abandoned slums they hide in have a very cool lighting scheme. Subconsciously, even though this is not conventional color use, the audience knows that they are safer here than outside in the brown. By now, all our viewers have picked up on how the palette works.

As time ticks away the odds get worse, the situation becomes more and more dangerous. Even that deadly warm sunlight is trying to invade the cool blue safe house. Every part of the film, including the color palette, is communicating jeopardy to the audience.

Traditionally, nighttime is communicated on film by desaturation and an ever-present blue moonlight, but once again Ridley Scott has a better idea. Somalian night is spooky green, and the tracers and explosions add orange to the scene. It’s the same sickly warm tone as the daytime, but brighter and scarier. There is no blue here; no safety.

But fortunately, a relief convoy is rolling out. The 10th Mountain Division brings bright blue halogen lights to banish the orange and green of danger. By amplifying the saturation of these night scenes far beyond what is "normal," the audience finds them very unsettling. This is the perfect emotion for what is being depicted.

Up until now, most of the scenes have been almost monochromatic, despite being highly saturated. Only at the climax do all of our colors really collide. These soldiers are pinned between threatening orange fire behind them and the uncertain dark green night in front of them, but safe blue headlights are coming in from the right. It’s final showdown time.

And of course, the battle ends just as the blue light of dawn makes everything safe and secure. The grueling Mogadishu Mile becomes almost a victory lap with this new color palette. The Rangers are back to their normal hue, and all is well… pretty much.

Ridley Scott does a tremendous job with this film through clever color use. It might be a little surprising, since everyone wears the same clothing, all the buildings are the same shade, and a lot of the film takes place at night, but I think this film makes better use of color as a storytelling tool than even Gladiator.

To see how closely color is tied to the events of the film, take a look at the chart below. Brendan Dawes has come up with a great new way to examine the pacing and overall color of films, and here are a few more color charts to look through.

As you can see, since the colors are tied directly to the moods of the film, clear trends are visible as different things happen in the film. We can see the film’s acts and turning points highlighted clearly. I am certain that Ridley Scott and Slawomir Idziak created a color chart like the stripe I made on the right to plan things out, and by analyzing this chart (slightly cropped for clarity), we can see a coherent vision appear.

Color is such a powerful part of cinema storytelling that we should never neglect it. And despite the power of modern color correction tools, we can never leave it to chance or expect to come up with a highly effective Ridley-Scott-style color script in post. All the Cs of cinematography take careful thought and a lot of planning to use properly, but when plotted out, they add a tremendous amount of storytelling power.

Tip: Render Faster & Smarter in After Effects with BG Renderer

My only assumption with this tip is that you use Adobe After Effects CS4 (or even CS3) on a multi-processor machine, Mac or Windows. Beyond that:

maybe you only have one main machine and often face the dilemma of wanting to render while continuing to work
perhaps you monitor your system’s performance carefully and have noticed that your After Effects renders don’t always peg all of the processors
possibly you own or have owned a copy or copies of Gridiron Software’s Nucleo Pro and have experienced the joy of background rendering in After Effects already. However you’re not experiencing that joy in CS4, because Gridiron has been too busy with another little project to update it.
It could even be that you are aware that you can kick off an After Effects render in a shell (Terminal on Mac, DOS on Windows), allowing you to render without the GUI, and thus keep working. If so, if you’re like 99% of visual artists, you’re not that fond of memorizing, typing or optimizing code.

If any or all of these are true, get ready to buy Lloyd Alvarez a beer, because he offers the answer to all of these and more. Lloyd's site is home to many useful tools, another of which may appear in this space this month, but as my first true tip of the month I wish to promote his most infinitely valuable script. And I say infinitely valuable because BG Renderer is offered free -- a 100% discount off the alternatives.
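For those who do want to see what the command-line route looks like, here is a rough sketch of assembling an aerender invocation. The aerender binary ships with After Effects and its -project, -comp, and -output flags are real options, but the helper function and paths below are hypothetical illustrations; check the documentation for your CS version.

```python
# Hypothetical convenience wrapper around After Effects' aerender CLI.
# The -project, -comp and -output flags are real aerender options; the
# function name and example paths are made up for illustration.

def build_aerender_cmd(project, comp, output, aerender="aerender"):
    """Assemble the argument list for a background After Effects render."""
    return [aerender,
            "-project", project,
            "-comp", comp,
            "-output", output]

cmd = build_aerender_cmd("/path/to/my_project.aep",
                         "Final Comp",
                         "/path/to/render_out.mov")
print(" ".join(cmd))
```

To actually kick off the render while you keep working in the GUI, you would hand this list to something like subprocess.Popen -- which is exactly the tedium a script like BG Renderer hides from you.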

Common Digital Image Formats

The following list summarizes key characteristics of common digital image file formats: bit depth per pixel, color model, compression, and notable features.

Adobe Digital Negative (DNG) -- various bit depths and color models; lossless compression; supports all features of the TIFF specification, plus various metadata.

Adobe Photoshop Document (PSD) -- various bit depths and color models; lossless compression; layers and alpha channels; supports vectors, parametric image operations, color profiles, and other metadata.

Kodak Cineon (CIN) -- 30 bits per pixel; logarithmic or linear RGB; can store key numbers and other metadata.

CompuServe Graphical Interchange Format (GIF) -- 8 bits per pixel; indexed RGB; lossless compression; supports animation and keyed alpha.

SMPTE Digital Picture Exchange (DPX) -- 24 or 30 bits per pixel; logarithmic or linear RGB; embedded timecode; supports all features of the Cineon specification.

JPEG -- 24 bits per pixel; linear RGB; lossy compression.

JPEG 2000 -- various bit depths and color models; lossy and lossless compression; supports color profiles and other metadata.

TIFF -- various bit depths and color models; lossy and lossless compression; layers and alpha channels; several variants, such as LogLuv and Pixar, allow for different dynamic ranges; supports color profiles and other metadata.

OpenEXR -- 48 bits per pixel; high dynamic range; lossy and lossless compression; alpha channels; covers 9.6 orders of magnitude with 0.1% precision; can store key numbers and other metadata.

Radiance -- 32 bits per pixel; logarithmic HDR; lossless compression; covers 76 orders of magnitude with 1.0% precision.

Portable Network Graphics (PNG) -- 24 bits per pixel; linear RGB; lossless compression; alpha channels.

Targa (TGA) -- 24 bits per pixel; linear RGB; lossless (RLE) compression.

Windows Bitmap (BMP) -- 8 or 24 bits per pixel; linear RGB.
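As a back-of-envelope check on the dynamic-range figures above (my own arithmetic, not part of the original table): Radiance's RGBE encoding shares one 8-bit exponent across the three channels, so representable magnitudes span roughly 2^256.

```python
import math

# Radiance RGBE stores a shared 8-bit exponent, so representable
# magnitudes span roughly 2**256. How many decimal orders is that?
radiance_orders = 256 * math.log10(2)
print(f"about {radiance_orders:.0f} decimal orders of magnitude")
```

This lands in the same ballpark as the quoted 76 orders; the exact published figure depends on how precision at the extremes of the range is counted, but the exponent arithmetic shows where a number that large comes from.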

Colour Grading - concepts and paradigms

Of all the digital cinematic processes I use and teach, it is colour grading and the manipulation of chromatic form -- light, shade, tone, style -- that I find the most engaging and pleasing. Editing has its thrills, the on-set experience of shooting is a great pressure cooker, and building visual effects is an intricate art, but none of these seems to have the same immediate power as colour grading. Here the response, the result, the impact of your manipulations is so immediate, tangible and, when done right, profound that it's hard not to get a kick out of it.

One of the things I love most about teaching what I've done as a profession for so long is that you, as tutor, are forced to question, interrogate and articulate not just the How but the Why of what you do. As a result I can say with great confidence that I now have a far greater understanding of cinema practice, process and artistic impact from teaching it than I did from making it. When you're teaching (particularly when teaching very smart, inquisitive and ambitious students) you're forced to investigate the deeper-level thinking that informs the instinct of what you do as a practitioner. 'How' is not good enough; it has to be reinforced with the Why. And it's here that the teacher is prompted to make great discoveries for themselves every bit as palpable as those of the students.

It's in this vein that I have recently been intrigued by an investigation into how we use and manipulate colour in cinema. The tools for colour grading are so powerful, so flexible, so accessible (Red Giant Magic Bullet Looks is arguably the most popular piece of software at the International Film School Sydney) that it's all too easy to get carried away and have students deliver childish results born of the 'because I can' approach to software. What I wanted was to build a viable conceptual framework to focus students' use of colour grading, with a set of guiding principles to make their use of colour more grounded and informed. More controlled brush strokes and less colour splatter.

The first run at such a taxonomical breakdown yielded more than a dozen modes of colour use in cinema, and it was invariably a mishmash of rather un-useful terms. The most powerful paradigms are most often simple ones: those that function as verbs, driving a process in action, rather than nouns naming that process after the fact.

So I distilled and came up with a much more modest and simple triptych of modes by which colour may be employed in a cinematic image; three methods to describe the intent of a colour change, look, style or grade.

1. Impression - our visual response
This kind of grade is one designed to imprint on the mind of the viewer an element beyond the picture; to leave an impression by creating a visual response from a set of tones overlaying the image.

An example of this would be Se7en, where a distinctly desaturated, sepia-toned, dreary and dirty palette seems intent on producing a specific feeling in the viewer. We might think of the Impression grade as one where the viewer's experience of the action and events comes through a filter, and it's the filter, rather than the contents of the frame, that leaves the most stark impression. We might forget scenes, plots and characters' faces, but no one who has seen Se7en can forget the impression the visual tone made.

2. Expression - our emotional response
The Expressionist grade reflects emotional states, emotional changes and emotional journeys. Where the Impressionist grade is concerned with the external -- the world and how that world feels -- the Expressionist grade is concerned with the internal: the mindset, and those elements stemming from character wants and desires. An Expressionist grade seeks to express the internal mind state of a character, or characters, on the world around them. Expression grades are therefore essentially reflective, reflecting in colour and tone the internal sentiments or journeys the characters are going through.

The image below is from the Korean film Natural City and each change in colour grade from scene to scene is intrinsically connected with the internal monologue of the characters. Desaturated, bleak, cold, high contrast when the character is cold and unfeeling. Warm, soft, glowing, saturated when the character is reflecting on memories and better times. Expressionist grades are outward reflections of internal states.

3. Construction - our cultural response
A Constructivist grade is one that builds upon, exploits, or plays with or against pre-existing knowledge the viewer may have. Such a grade relies on a cultural understanding of what the audience already knows, perceives or expects, and then plays to or against those preconceived notions. As viewers we are neither passive nor empty vessels. We enter the movie with an inbuilt knowledge bank, and that information is the touchstone filter for everything we watch.

A simple example would be Three Kings, where the image plays hard in opposition to the established visual references the viewer already possesses of desert war and the Middle East. Rather than the blazing yellows, hot red tones and open blue skies that form the popular conception of the Middle East, Three Kings delivers a deliberately desaturated palette: cold, bleached of warm tones, skies blown out to white. The film's grade plays against cultural assumptions to create a constructed cultural response.

Now this is of course not to suggest that all films work in one of these three modes exclusively. Most cinematic works range through each of them over the duration of their screening, drawing upon Impressionist, Expressionist and Constructivist modes at different times for different aesthetic outcomes. Nor should such a set of conceptualizations be taken as an all-encompassing tool for analyzing cinema colour. Rather, these three modes are merely the basis by which the colourist may begin to ask questions of the film they are working on. They are intended as a starting point. If a colourist can look at the film, sequence or shot they are working on and ask:

- What impression am I trying to leave?

- What expression am I trying to invoke?

- What cultural response am I trying to solicit?

then they will be able to enter the colour manipulation process with a far greater degree of clarity and articulated purpose... less colour splatter and more defined brush strokes.

As with all cinematic arts, the power to unlock creative endeavor lies in knowledge of technical detail. Cinema IS technology. As such, the list below of tutorials, articles and essays on colour grading should form the initial reading list for anyone interested in engaging with colour manipulation in a proactive way.


Colour Correction by Kevin Shaw

When colour correction is a necessity by Kevin Shaw

Layer it by Kevin Shaw

Website of Colorist Kevin Shaw (lots of great articles)

The nature of light and colour

The not so technical guide to log-gamma curves

Professional Colour Correction with Premiere Pro

The Colorists (article on colour grading professionals)

How to use Magic Bullet Looks (a step by step video)

Grading ‘return to dungeness’ : video tutorial on using MBL

Using Colorista in Final Cut Pro Introduction

Using Colorista in Final Cut Pro Advanced

The Concept of Colour Space from the practitioners standpoint

Colour is a phenomenon of mind and eye: what you perceive as colour is shape and form rendered as experience. Visible light is electromagnetic radiation with wavelengths between 400 and 700 nanometers. It is remarkable that so many distinct causes of colour should apply to the small band of electromagnetic radiation to which the eye is sensitive -- a band less than one 'octave' wide in an electromagnetic spectrum of more than 80 'octaves'. When thinking about the issue of synesthesia, remember that it occurs in a very limited sensorium.

Above are two representations of colour space. In each, the coloured area represents the visual field that evolution has endowed us with. This is one octave of possible experience.

Trying to systematize the idea of colour and naming that understanding 'colour space' has historical precedents, which I'll discuss later. But the notion of systematizing the concept grows out of 'Enlightenment' desires to know the world in a material way by mapping and planning, then proceeds into Victorian methods derived from indexing and cataloguing -- which come from a desire to create the experience of understanding from methodology, an idea whose dominance is still with us.

Colour has been formulated by intellectual cartographers, but it is not a map: colour is experiential, as the cinematographer well knows. In photographic terms, colour as a function of seeing and meaning came late to the form. Because of this, notions of colour as something contained within areas grew -- as if colour had been graphically applied to an area -- thus denying its inherence in form. This is in fact true of late analogue televisual forms, but not of digital electronic cinematography.

Film is exposed and latently holds an image, then is developed to ‘reveal’ that image. Film was and is a medium that had and still has many intricate and alchemical processes before its exhibition and revelation of a captured reality in the cinema: a temple built for the purpose of ritual display, where all who enter are required to suspend their disbelief. Film asks us to deny the actual material reality of the environment we are in and also to deny something of our own self.

Colour in this environment too was to be a function of the act of belief in the unreal. The generation of the idea of Colour space is an umbrella concept under which sets of ideas coalesced around the organisation of that function and as such took on various methodologies for its assemblage.

Because we are the ape that we are, mathematics quickly becomes for us a key organising factor in the description of this functionality.

“A colour model is an abstract mathematical model describing the way colours can be represented as tuples of numbers, typically as three or four values or colour components - for instance RGB and CMYK”.

“Most colour models begin as three dimensional forms because when you distribute the values in 2D space, that space cannot hold all the necessary axes relevant to the distribution of those values”.

In one of the examples above, there is a simple distribution of values that charts how a display from a computer is related to a display from a printer - in other words what the computer can display and what a printer can display.

The other example of colour space seeks to demonstrate the relationship of the visible spectrum to film, print and the computer.

“The range of colours varies enormously across different media. Of the billions of colours in the visible spectrum, a computer screen can display millions, a high-quality printer in the order of thousands, and older computer systems may support only 216 colours across different platforms.”
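The figures in that quotation are easy to verify with a couple of lines of arithmetic. (The 216-colour figure is the old "web-safe" palette, which used 6 levels per channel so that colours rendered consistently on 256-colour displays.)

```python
# Quick arithmetic behind the figures in the quotation above: a 24-bit
# "truecolor" display allocates 8 bits per RGB channel, and the old
# cross-platform "web-safe" palette used 6 levels per channel.
levels_per_channel = 2 ** 8          # 256 levels each for R, G and B
truecolor = levels_per_channel ** 3  # 16,777,216 -- the "millions" of colours
web_safe = 6 ** 3                    # 216 colours shared across old platforms
print(truecolor, web_safe)
```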

I could elucidate further on variations in the description of print colour space, film colour space and computer colour space; I could elucidate further on whether those spaces are best displayed in their respective display co-ordinates of RGB or CMYK.

I could try to tell a history of film space and how electronic space addressed it: of how Kodak generated the Cineon file format, created for electronic encoding of film colour space, and its later development into Digital Picture Exchange (DPX), both of which generate a set of separate files, each one frame of film taken over into one digital frame of display; of how each carries metadata about the conditions under which the image was generated -- and so on and so forth. But I won't, because that is for further reading, if you are interested.

What I will do, and I'm trying to do it right now, is to indicate that simple technical terms understood by 'the industries' are not only replete with cultural and social meaning, but also exist within paradigms of understanding that are now changing -- primarily due to the advent of the digital.

The Practitioner

I wish to turn now to the practical act of the cinematographer entering into the concept of colour space and how that can be achieved.
The early issues of video and film are now behind us and in some senses a rapprochement between film and the latest representative of the electronic – digital electronic cinematography - has occurred.

Original electronic imaging was analogue in form – as was film – yet the formulation of the capturing of an image was different from film.

Film has a large latitude – one could make an intelligent ‘mistake’ and rework the material and formulate a sense of ‘atmosphere’ within the image. This is commonly known as ‘the Look’.

Analogue video was clean and clinical, and you had to get the exposure right -- in the early days, if you didn't get exposure correct then you didn't get focus. Colour itself was grafted onto an already set formulation of image capture -- PAL, Phase Alternating Line -- and was effectively an afterthought.

I shot one of the first features generated on video and transferred to film for theatrical distribution: Birmingham Film and Video Workshop's production 'Out of Order'. I approached the task by imagining video as being like a reversal stock, with very little latitude for mistakes in exposure. The transfer to film was adequate but, compared with today's digital transfer techniques, not good in terms of colour.

With the advent of electronic cinematography something very important has happened in the capturing of the image. In both photo-chemical and electronic cinematography, until the image is developed it resides in a latent form -- in the silver halides or in the un-rendered data. Development, the bringing forth of the image in film, is similar to the rendering of the image in the digital and electronic domain -- and importantly, colour resides within the bit depth of electronic data and is therefore an integral part of its material form.

This developing practical understanding in the professional realm runs counter to arguments that circulate within media theory. For instance, New Media: A Critical Introduction (Lister et al., latest edition 2009) claims an essential virtuality for new media, stressing the immateriality of digital media over and over again.

However, industrial and professional expertise now challenges academic convention by seeking to re-inscribe digital image making as a material process.

One of the first films that took a material film base and dealt with colour in the electronic realm was ‘O Brother, Where Art Thou?’. When Roger Deakins was asked by the Coen Brothers to shoot this, being a creative and intuitive cinematographer, he knew that he was being offered a chance to cross the bridge between the ‘convergent’ and the ‘integrative’ paradigm. Deakins’ job as he saw it was to enact the kind of colour space seen in the faded, poignant postcards of the twenties. Deakins knows his film colour space. He’s had enough practice.

I want to tell you about the kind of method that a practitioner like Roger Deakins employs to understand the function of colour in the world when faced with a multi-million dollar set of technologies – a method also adopted by a friend of mine when shooting a quarter-of-a-billion dollar production recently.

If you decide that you’re going for a certain look because, intellectually, you’ve justified to yourself that this look in some way underlines the intention of the director – and after you and he or she have toured the galleries, looked through the books and seen the movies that seem to relate to the project – and after you’ve jettisoned all of that, because you know that referencing is mostly an act of creative failure, and after every residue of resistance has gone, then and only then do you turn to your intuition about the way you must proceed.

It might be that that intuition is to evoke green as a colour – or maybe it’s a magenta cast – or a warm glow which, at the dramatic end, you feel has to be taken away from the audience and supplanted by its opposite...

Whichever of these tactics you decide to embrace to achieve your goal, you accept the fact that you have to enter a colour space and live in that space until you know it fully - so fully that you can reveal its nature first to yourself and then to the audience.

So you buy yourself some sunglasses.

If the world you need to reveal is green, you find the right colour green and wear those sunglasses for a month or for however long you need to wear them to know the world that has that particular shade of green.
Conversely, but with a little more risk, you can take the opposite approach and buy a pair of sunglasses that are the complementary opposite colour of the world you eventually wish to invoke.

In so doing you expose yourself to the opposite world of colour, so that when you take the glasses off, the complementary opposite of the world is revealed with even greater intensity – more so than through continuous appraisal of the world in its correct colour. That moment is a moment of incomparable intensity.


I now want to give you a brief idea of how we began to systematise the idea of colour:

Aristotle developed the first known theory of colour. He postulated that God sent down colour from the heavens as celestial rays, and he identified four colours corresponding to the four elements: earth, air, fire, and water.

Leonardo da Vinci was the first to suggest an alternative hierarchy of colour. In his Treatise on Painting, he said that while philosophers viewed white as the 'cause, or the receiver' of colours and black as the absence of colour, both were essential to the painter, with white representing light, and black, darkness. He listed his six colours, and within this lies the age-old symbolic system of alchemy.

The Enlightenment project later stimulated a material examination of our physical state, so that eventually theories developed that began to mirror and explain how we next believed we ‘really’ perceive colour.

Isaac Newton created a colour wheel of perception in response.

Moses Harris wrote The Natural System of Colours in 1776.

J. W. Goethe developed a colour harmony theory on the basis of his hue circle. In this circle, colours are categorised into two sides, the positive and the negative.

Ewald Hering (1834-1918) devised the first accurate theory of colour vision. And so on and so forth until we truly enter the physiological description of ‘reality’:

“Colour is a response of the eye and brain to data received by the visual systems evolved from the immediate environment. Objects emit light in various mixtures of wavelengths. Our minds perceive those wavelength-mixtures as a phenomenon we call colour, and this perception creates questions that current colour theory tries to explain”.

Vertebrate animals were primitively tetrachromatic. Tetrachromacy is the condition of possessing four independent channels for conveying colour information, or possessing four different types of cone cells in the eye.

With the trichromacy normal in humans, the gamut of colours construed by our perception will not cover all possible colours. Human trichromatic colour vision is a recent evolutionary novelty that first evolved in the common ancestor of the Old World primates. Placental mammals had earlier lost two of the four ancestral cone types. Human red-green colour blindness occurs because the two copies of the red and green opsin genes remain in close proximity on the X chromosome.

So we humans have a weak link in our chain with regards to colour - we are not 4-cone tetrachromats; we have three cones, in some cases only two, and in extremely rare cases only one!

We are now within a profoundly material description of our experience, yet this is incomplete, especially in terms of emotion and intelligence, as has been brought out in other papers. I want to bring in another idea that relates to this description, which is called the Modulation Transfer Function.

We humans value detail and sharpness more than resolution in an image. High resolution is synonymous with preservation of detail and sharpness, but high pixel count - which is generally regarded as a measure of how good an image is - does not always translate into high resolution.

“As you try to increase the amount of information on the screen, the contrast that you get from a small resolution element to the next smallest resolution element is diminished. The point where we can no longer see contrast difference is called the limit of visual acuity. It’s the law of diminishing returns, and a system’s ability to preserve contrast at various resolutions is described by the modulation transfer function (MTF).“

The point of this technical description is that, as Alice observed on travelling through the looking glass, the smaller you go, the rounder you get. As Ivan Illich noted in his analysis of the creation of systems, there are drawbacks within the actual material construct of any system you design. And this is especially so with colour: a camera might offer 4K resolution in red but only 2K resolution in blue.
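The contrast falloff the quote describes is easy to demonstrate numerically. Below is a toy Python sketch, not modelled on any real lens or sensor: a simple Gaussian blur stands in for the whole imaging chain, and the surviving contrast of a sine grating drops as the pattern gets finer.

```python
import math

def modulation(samples):
    """Contrast of a pattern: (max - min) / (max + min)."""
    return (max(samples) - min(samples)) / (max(samples) + min(samples))

def blur(samples, sigma):
    """Gaussian blur by direct (wrap-around) convolution."""
    n = len(samples)
    radius = int(3 * sigma)
    kernel = [math.exp(-(k * k) / (2 * sigma * sigma))
              for k in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [w / total for w in kernel]
    return [sum(kernel[j] * samples[(i + j - radius) % n]
                for j in range(len(kernel)))
            for i in range(n)]

def mtf(cycles, n=512, sigma=4.0):
    """Contrast surviving the blur, relative to the original grating."""
    grating = [0.5 + 0.4 * math.sin(2 * math.pi * cycles * i / n)
               for i in range(n)]
    return modulation(blur(grating, sigma)) / modulation(grating)

# Finer detail (more cycles across the frame) -> less surviving contrast.
for cycles in (4, 16, 64):
    print(f"{cycles:3d} cycles: {mtf(cycles):.2f} of the contrast survives")
```

Coarse detail sails through almost untouched while fine detail is nearly erased - exactly the roll-off an MTF curve plots.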

What I’m trying to come to here, is that I regard the area that each colour space covers as being a footprint of understanding, demonstrative of a world view - and world views, as we know, have many ramifications.


In film alone, compare the Technicolor of the ’50s with colour film in the Eastern Bloc in the ’80s and Chinese colour film in the ’40s. Surely a statement about national psyches, all existing within different film colour spaces.... The dominant colouration of these spaces speaks about the state of the nation’s zeitgeist at the time of production.

The recent electronic and data based colour space is a statement about this particular time and the new possible epistemologies of understanding that are developing beyond the simple systems of materialist thought and materialist theory.


Using the metaphor of the modulation transfer function: a chain of information – and in this case a chain of understanding – is only as wide, as deep and as strong (to mix my metaphors) as its weakest link. And only in the narrow optical region, just that region to which the human eye is sensitive, is the energy of light well attuned to the electronic structure of matter, from which colour derives.

But we must not mistake this attunement for a metaphor of complete meaning - there are many meanings to be obtained within the concept of colour space: many emotional spaces, many spiritual and many intellectual spaces – and above all, many experiential spaces.

We see within a matrix of words when considering the subject, but when simply experiencing it, we do so on a different level of comprehension.

With the advent of the digital and our necessary remediation of it via older analogue understandings, we are on the brink of constructing new concepts – to utilise the metaphor of colour – that will enable us to see outside of our current visible spectrum and therefore gain understanding to illuminate our intellectual world with greater intensity and detail. In this ‘seeing’, new language will be generated: new ideas, new uses of light and new concepts of colour and understanding that will begin to match what is now intimated through the development of digital colour space.

To effect this change at a more rapid and experiential pace – to achieve revelation: let us all buy a new pair of sunglasses.

Aspect Ratios to Frame Rates

By Ron Seifried and Craig Rosenzweig
Every image tells a story. The subjects, colors, and composition of an image contain a wealth of information. Even if we removed these attributes, a digital image would still retain vital information necessary for any video editor. The following article covers the primary terminology required to understand the wide range of resolutions, fields, and frame rates. The learning curve for High Definition can seem steep at first, but the format is easily understood once you learn these basic concepts.

Chart illustrating the relative frame dimensions of different video formats in Progressive and Interlace

Ultimate Guide to Professional Video Cameras

How to pick a Professional Camera

by “Gospel” John Hess

The term “professional camera” is a very subjective one. As technology continually improves, the line in quality between consumer cameras and professional cameras gets increasingly thin. But as any serious filmmaker/videographer will tell you, the jump to a professional level camera comes with a huge improvement in the amount of control over your image. To make things more confusing, the word “prosumer” has popped up as a halfway point between the consumer and professional realms. For the purposes of this article, we will consider professional cameras to be cameras with 1/4″ chip sizes and up (more on chip size later).

When considering a professional camera, skip the beauty reels that you’ve seen on YouTube and Vimeo. Yes, we’ve all seen the beauty videos with the rolling treetops and the steel jungles. They’re fun to watch, but they say more about the producer than the camera. After all, you can point any halfway decent camera at a beautiful flower and capture a beautiful image. Beyond that, the fact is that all video hosting sites heavily compress their videos. Judging the capabilities of a camera based on compressed and resized web video will not give an accurate comparison. Instead, you want to focus on the features that will work for your film or video business.

The Basics

If you’re planning on investing in a professional camera, the camera must be at least capable of HD.

DO NOT INVEST IN A NEW Standard Def (SD) camera*

*(unless you are required by your workflow, i.e. to match other SD cameras in a studio or to fit with an established look of a show).

HD comes in a variety of flavors (HDV, XDCAM HD, P2, DVCPRO HD), which you must consider when you purchase a camera, but any HD camera, given the proper framing and lighting, will yield a better image than a similarly priced SD camera.

If you feel like you’re not ready to take the plunge into HD quite yet (for any reason), look around in the used market for a professional SD camera. There are plenty of them floating around as the market moves towards HD.

Keep in mind also that many HDV cameras are capable of recording in Standard Def DV or outputting an HD signal downconverted to SD. So even if the rest of your workflow isn’t ready for HD, you can still shoot HD and downconvert to SD as needed.

Since the camera is HD, you will be shooting in the 16×9 widescreen format. Make sure the camera shoots in the regional format that you want to deliver in (i.e. NTSC or PAL).

If you want to produce a more “film like” experience (another loaded term for a different article), make sure the camera has a 24p option. Be sure to research the 24p system that your camera uses - every brand seems to perform the 24p acrobatics in a different way. Consider the capabilities of your editing software as well.

Chip Size

Almost all lower to medium end professional cameras have a 1/3″ imaging chip. Some of the smaller guerrilla type cameras sport a smaller 1/4″ chip.

As you step up in chip size to 1/2″, 2/3″ and even the full 35mm styled chips (as in the RED), you will see an improvement in low light performance and a substantially shallower depth of field. The downside is that these larger chips will be substantially more expensive. Generally speaking, the larger the chip, the better the image quality.


The professional camera industry is currently in a transitional state between CCD style imaging chips and CMOS style imaging chips, although CCD cameras will be around for the foreseeable future. The difference between the two technologies is significant but goes way beyond the scope of this article. Both types are capable of producing great images, but both have their drawbacks.

There is one major drawback to CMOS cameras that is worth noting in this article. CMOS cameras use rolling shutters, which can result in some unpleasant motion flaws in extreme situations - such as a whip pan. These flaws include skew, wobble and partial exposure. They are present in all CMOS cameras but, with proper handling, can be avoided.

3 CCD / 3 CMOS

This is a term you’ll see thrown about a lot with these professional cameras. The concept involves splitting an image into its color components (Red, Green, and Blue) and recording them separately. Generally speaking in CCD cameras, having 3 chips is the prerequisite for a “professional” level camera. Because CMOS functions differently, many cameras (including the RED) only require 1 CMOS chip.

Interchangeable or Fixed Lens

For many professionals, this is the number one deciding factor when investing in a camera.

Cameras with interchangeable lenses allow you to swap out the lens for a different one. Although most cameras with an interchangeable lens come with a stock lens, there are a myriad of very high quality ENG (Electronic News Gathering) and Cine Lenses that you can buy or rent to achieve different focal lengths.

Cameras with fixed lenses tend to be less expensive and lighter. Although you cannot change the lens on a fixed lens camera, there are wide angle and telephoto adapters that can be attached on the front of the camera.

Side by side - a Fixed Lens Camera and an Interchangeable Lens Camera

Example: Sony offers two versions of their popular XDCAM EX cameras, the EX1 and the EX3. Among some other minor differences, the EX1 (left) is a fixed lens camera while the EX3 (right) has an interchangeable lens.

Low Light Capability

If you plan on using your camera to shoot events where you can’t control the lighting, low light capability is something you’ll want to have. Similarly, if you want to use a DOF adapter (which we’ll discuss later), low light capability will be to your advantage.

The good news is most professional cameras these days have excellent low light capabilities. Generally speaking, the larger the chip size and optics, the better the low light response - i.e. a camera with a 1/3″ chip will perform better than one with a 1/4″ chip.

The bad news is there is no industry standard for defining low light performance. Every manufacturer defines a camera’s minimum illumination differently, and oftentimes that minimum illumination is defined using a whole host of “light enhancing” electronic features like gain and frame accumulation, which add noise and motion artifacting that is unwanted in all but the most critical situations.
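Those gain figures follow the usual 20*log10 voltage convention, so the amplification behind a spec-sheet number is easy to compute. A quick sketch of that arithmetic:

```python
def gain_to_multiplier(db):
    """Convert video gain in dB to a linear amplification factor
    (the usual 20*log10 voltage convention)."""
    return 10 ** (db / 20)

# Each +6 dB step roughly doubles the signal -- and amplifies the sensor's
# noise floor right along with it, which is where the grain comes from.
for db in (0, 6, 12, 18):
    print(f"+{db:2d} dB gain -> x{gain_to_multiplier(db):.2f}")
```

So a "minimum illumination" quoted at +18 dB of gain is describing a signal boosted roughly eightfold, noise and all - which is why such specs flatter the camera.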

You may be able to use a manufacturer’s minimum illumination specs to compare that manufacturer’s cameras to each other. But for overall comparisons, search for independent comparisons (sites such as cover this extensively) and go with the general rule of thumb: the larger the chip size, the better the low light performance.

Tape or Tapeless

More and more camera manufacturers are offering tapeless options and it is becoming something worth considering when purchasing a camera.

Tape offers the advantage of being a simple, inexpensive, and well defined format. Archiving is simple - just pull the record inhibit tab on the tape and store in a cool dry environment away from light. Also if you’re in a collaborative environment, it’s easy just to hand off a tape to someone else. But tape is limited to a set bit rate so you are stuck with the limitation of the format (currently HDV).

Tapeless acquisition allows for much higher bit rates and opens the door to much higher quality video recording. The freedom from set bit rates also allows interesting camera options such as over-cranking and time lapse recording. The two major drawbacks to tapeless are the high cost of recording media and the fact that you will have to actively back up and archive footage on a hard drive in order to reuse your recording media.

Audio Connections

Bar none, the best kind of audio connection for microphones is XLR. Most professional cameras have XLR connections, but there are a few out there that use RCA which means you’ll need an external converter for your XLR microphones.


Keep in mind how you will be using your camera for your production. If you intend on using the camera for event/corporate work, you may want to consider cameras that have a shoulder mount design - something that will allow you to get steady shots even while handheld.

If you want to use a smaller camera stabilizer, you may want to opt for a smaller, lighter camera. Similarly, if you intend to shoot a lot of guerrilla shots, you will want a smaller, more inconspicuous camera.

If you happen to be used to a particular brand’s button layout, you will most likely want to stick with that brand, as different manufacturers all have their own layouts.

Big Rig

And of course, the “cool factor” does play into it as well. A big hulking camera with wires and blinking buttons looks great and can impress a client. But it can also be intimidating, requires a lot of setup time and additional people to operate (such as a focus puller), and makes it impossible to pull off a guerrilla shoot.


Not all viewfinders are created equal. Although a viewfinder does not affect the final output of a camera, a good one makes critical focus easier and the shooting experience generally more pleasant.

Unfortunately, manufacturers are not exactly upfront about the specs of their LCD/Viewfinders. Most manufacturers supply a size, some will even give you a pixel count. The more pixels in an LCD, the cleaner and sharper the image.

Ultimately, the best way to judge a viewfinder is to compare models first hand.

Video Out

You can forgo the viewfinder if you are planning to send the video to a monitor through video out. All professional cameras have video out. The video out connections can include: SDI/HD-SDI (highest professional quality SD and HD), HDMI, Component (HD and SD), S-Video (SD only), and Composite (consumer grade SD).

Make sure the camera you are purchasing has the same format as the monitor you are intending to use.

Video Out can also be used to send the video to a Digital Disk Recorder such as the AJA KiPro. On some cameras, the SDI connections actually send the video image out before the signal is compressed to the recording format (generally HDV). Sending the signal before the camera’s recording compression allows for better quality which can make color correction and greenscreen compositing easier in post production. If you intend on using these devices, keep the video out options in mind when selecting a camera.

Timecode / Genlock

If you intend on using your camera in a multicam studio situation, you will need a camera with Timecode/Genlock sync capabilities.


This may not be a concern if you are stepping up to your first professional camera. But if you are buying a second one, consider the type of battery the camera uses and how that will play into your budget.

Depth of Field Adapters

One of the more exciting tools for the independent filmmaker over the last few years has been the advent and steady improvement of the Depth of Field Adapter. These adapters allow you to use 35mm lenses (still photo or film lenses) with your camera. When considering buying a camera, it’s worth considering if and how you will use it with a DOF Adapter.

A Depth of Field Adapter, also called a 35mm Adapter, works essentially like a projection screen. The image passes through the lens, and is projected on a translucent screen - the camera takes a picture of the screen. Since the area of the translucent screen is much larger than the image sensor, Depth of Field Adapters are capable of much shallower depth of field than the camera alone.
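A rough sketch of that geometry: the adapter's ground glass is the size of a 35mm frame, so the camera effectively inherits a 35mm imaging area in place of its own tiny chip. The sensor widths below are approximate round figures for illustration, not exact specs.

```python
# Approximate active-area widths in mm (round illustrative figures).
WIDTHS = {
    '1/3" video chip': 4.8,
    '2/3" video chip': 8.8,
    "35mm film frame": 24.0,
}

def crop_factor(width_mm, reference_mm=24.0):
    """How much narrower a sensor's view is compared to a 35mm-wide frame."""
    return reference_mm / width_mm

for name, width in WIDTHS.items():
    print(f"{name}: {crop_factor(width):.1f}x crop relative to 35mm")
```

A 1/3" chip is roughly five times narrower than the 35mm frame; shooting the adapter's ground glass instead of the scene directly erases that five-fold penalty, which is where the shallow-focus look comes from.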

There are several limitations when using a depth of field adapter that you should consider in conjunction with your camera purchase. All Depth of Field Adapters project an upside down image on their translucent screen. Several DOF Adapter manufacturers offer image flipping devices; otherwise, a camera with a viewfinder flip option would be useful. DOF adapters also eat up a lot of light, so cameras with good low light capabilities fare best with them.

DOF Adapter Manufacturers

Popular Models

Here is a selection of professional level cameras. This is by no means an exhaustive list of the cameras available on the market. The cameras listed below are NTSC models; if you are purchasing for use in a non-NTSC country, look for alternative versions of these models.


Popular Sony Cameras:


Popular Canon Cameras:


Popular Panasonic Cameras:


Popular JVC Cameras:

Where To Buy?

When investing in a pro camera, cost is certainly an issue. The interwebs are full of companies saying they offer “low prices.” But is saving a couple dollars worth not having professional support for the huge number of questions you will have before AND after buying that camera? With a sales staff made up of working professional filmmakers and videographers with years of field experience, B&H is the only retailer we recommend (they also have some of the lowest prices). Yes, they are one of our sponsors, but the reason they are a sponsor is because they are the best. Don’t take our word for it, just ask around.


Avid Experiments

If you have read any of my previous blog entries here, you’ll know that I am a long-time Final Cut Pro user, but since the beginning of this year I have been working at an offline edit house (in addition to the time I spend doing latenite things!) that primarily uses Avid on Macs. As we solely do offline editing here, and all of the grading and online is done at other, more specialised post production facilities, generally speaking we don’t have to worry too much about gamma, colour spaces and getting files in and out of various programs. Most of our jobs are shot on 35mm and get telecined to DVCAM, which we then edit in DV-PAL, export an EDL + OMF, and we’re done. For RED projects we normally get handed a hard drive full of R3Ds, which we convert to DNxHD using RED Rushes and bring into Avid via an ALE. Everything is fairly simple and straightforward.


Cineform 3D Workflow

At NAB 2009, Cineform was demonstrating their comprehensive 3D workflow, using Neo3D. We stopped by to talk about these new tools for stereoscopic post-production, and along the way, Cineform’s David Newman gave us an in-depth education on 3D filmmaking and post. We’ve split the 30-minute interview into two parts:
Part 1 (Click to watch the video)
Part 2 (Click to watch the video)
Source: ProVideo Coalition

Using Media Manager to Consolidate your Media in Final Cut Pro

Media Manager is a great tool in Final Cut Pro that allows you to manage content based on timecode information embedded within all of your clips. You can use this tool to consolidate media that contains only the necessary media to play back your sequence. To consolidate a sequence using the Media Manager:
  • Highlight your finished sequence within the Browser Window
  • Go to File > Media Manager
  • Set the Media drop-down menu to Copy media referenced by duplicate items
  • Click the Delete unused media from duplicated items option
  • Click the Duplicate selected items and place into new project option
  • Click the Browse button to set a new media location for the new files that will be created
  • Click OK to consolidate the media into a new project
Just remember to be careful when using Media Manager since certain functions are irreversible. Also know that in order for Media Manager to work correctly, all of the clips must contain the original timecode information from their source. Clips that do not contain timecode information are not included as part of the managed process. Check out this article for more information on the Media Manager.

Apple Color workflows

Arguably Apple Color is among the most frustrating pieces of creative software around – frustrating because it is on one hand amazingly powerful and on the other putridly inefficient and dysfunctional. Simple tasks often seem far harder than they should be, and the round-trip between FCP and Color is not nearly as easy as it sounds. It’s also a tool that makes all but the uber-nerdy feel more like mathematicians than artists, with an interface that just isn’t conducive to creative flow. If you want a colour-grading experience that feels more like art than science, Red Giant’s Colorista and Magic Bullet Looks are the tools to go for.

But, that said, with excellent secondary colour correction tools and built-in motion tracking, Color offers two elements missing from Magic Bullet, so it can be well worth the effort if you can wrangle its quirks and issues into line.

Below is a set of good articles I've found that lay out different Color workflows and how to deal with some of its inconsistencies. Certainly we all look forward to the day when Apple finally converts Color from its clunky, Linux-like interface into a real Apple-esque package with an interface consistent with the rest of the FCStudio.

Understanding Apple Color Workflow

Color workflows with different types of sources

RED+FCP+COLOR: making it all work

And here’s also a 2-part video tutorial on the Color Workflow from FCP

FCP to Color part 1

FCP to Color part 2

XMLEditor for FCP XML Exports

XMLEditor is an interesting way to look at your XML exports (or one a client sends you). It's free and can actually help with some troublesome FCP XML saves if you know what you're doing.

Shooting and Posting on the 5DmkII

As I’ve mentioned before, we shot the documentary Homeschool Dropouts on the 5DmkII in August, and posted it during September. It was a great learning experience, since it was our first time shooting video on a dSLR. Below is the worst shot from the project - all of the 5D’s image issues are visible in it. All of them can be avoided in-camera and all but one of them can be repaired in post (not counting the awkward composition).

Above is the final image as it was rendered from After Effects. Firstly, we have repaired the exposure. This was a very early shoot, before we started using the Magic Lantern firmware, and without its live histograms and zebra bars, getting the right exposure was tricky. Even though the camera only saves an 8-bit image, there is lots of room for correction, and since it comes from a 14-bit sensor, there is a surprising amount of latitude recorded.
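A back-of-envelope look at that sensor-to-file squeeze (this ignores the gamma/tone curve the camera applies when mapping sensor data down to the file, so treat the numbers as loose):

```python
import math

sensor_bits = 14   # the 5DmkII's analog-to-digital conversion depth
file_bits = 8      # bit depth of the recorded frames

sensor_levels = 2 ** sensor_bits
file_levels = 2 ** file_bits

# Loosely speaking, each extra bit of sensor depth is another doubling of
# tonal information available before the encoder squeezes it down.
headroom_bits = math.log2(sensor_levels / file_levels)
print(f"{sensor_levels} sensor levels mapped into {file_levels} file levels "
      f"({headroom_bits:.0f} bits of headroom)")
```

That 64-to-1 squeeze is why the tone curve baked in at record time matters so much: the curve decides which parts of the sensor's range survive into the 8-bit file you grade later.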

Next, I adjusted the color. The green cast on the top of the wall is actually sunlight bouncing off the lawn in front of the porch. Since I had over-exposed a lot, and eyeballed the white balance badly, slight color changes like that were amplified, but it’s a great testament to the color sensitivity of the camera that it picked that up so vividly.

This shot also required a little bit of denoising. Even though we were using a low ISO setting, I had enabled Highlight Tone Priority on the camera. There’s some dispute as to how useful this setting is for video, and while it does provide more latitude in the highlights, it also adds some strange shadow noise. I used it on several early shoots, but I’m a little more leery of it now. I would use it with caution.

There’s also some subtle moiré patterning on Mr. Swanson’s sleeve. This is the Achilles’ heel of all of Canon’s video-enabled dSLRs, and it can be tricky to spot on the viewfinder. It’s also not especially predictable; note how it appears on the nearly invisible pattern of the oxford cloth shirt, but not at all on the very pronounced weave of my jacket.

This cannot be fixed in post, but slightly adjusting the camera’s position, zoom, and/or focus will often make obvious aliasing and moiré artifacts vanish. I overlooked this issue, like all the others in the shot, because the camera was new, the shoot was hasty, and I was on the wrong side of the lens. Still, no excuse.

But enough dwelling on the worst shot; have a look at some of the better footage that we got:

All the outdoor shots were natural light with a single reflector, and the indoor shots were using available lights in the various homes rather than a professional light kit. Outdoor shots generally used Canon’s EF 28-135mm zoom lens, and most interviews used EF 50mm 1.8 primes.

Minor grading was applied to each shot, but it was extremely limited since the raw footage was so good. Some interviews got a subtle vignette, and there was a bit of levels work here and there, but most of these shots are pretty natural. I’ve downsized all the 1080p screenshots to 720 for bandwidth reasons, but it’s still big enough to see noise, any artifacts and also the sharpness that the camera is capable of.


I forgot to mention audio, or how we actually got footage from the camera into the edit. This is an important part of posting, so here’s our process:

The Canon 5DmkII records to MPEG4 files at about 48 Mb/s. For reference, HDV is MPEG2 at 25 Mb/s. MPEG4 is a far more efficient codec, so I figure that there’s actually more than twice the image data contained in roughly twice the bitrate. But it’s not a good editing codec, so we converted everything to Cineform as we pulled it onto the computer.
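Those bitrates translate directly into storage consumption. A quick sketch, taking the quoted approximate rates at face value and using decimal gigabytes:

```python
def minutes_per_gb(bitrate_mbps):
    """Minutes of footage that fit in one decimal gigabyte at a given bitrate."""
    gigabyte_bits = 8 * 1000 ** 3
    return gigabyte_bits / (bitrate_mbps * 1_000_000) / 60

for name, rate in (("5DmkII MPEG4", 48), ("HDV MPEG2", 25)):
    print(f"{name} at {rate} Mb/s: about {minutes_per_gb(rate):.1f} min per GB")
```

Under three minutes of 5D footage per gigabyte is worth knowing before a long shoot day - card capacity goes fast at 48 Mb/s.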

The 5D also shoots a true 30 frames per second, which is a problem since NTSC video actually runs at 29.97 frames per second. Fortunately, Cineform fixes this automatically, conforming the footage to the proper framerate and stretching the audio to cover that extra 0.03 fps during conversion. We never had any sync issues once we realized we also had to stretch our external audio to match.
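The arithmetic behind that conform is a fixed 1000/1001 ratio - the 0.03 fps gap works out to a 0.1% stretch. A quick sketch:

```python
# NTSC's "29.97" is exactly 30000/1001 fps. Conforming true 30 fps footage
# keeps every frame but plays them fractionally slower, so the audio must
# be stretched by the same ratio to stay in sync.
shot_fps = 30.0
target_fps = 30000 / 1001

stretch = shot_fps / target_fps   # 1.001: the clip runs 0.1% longer
print(f"audio stretch factor: {stretch:.6f} ({100 * (stretch - 1):.2f}% longer)")
```

That same 1.001 factor is what any externally recorded audio needs, which is why the sync problems vanished once we stretched the Zoom tracks too.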

The interviews were recorded with a Sennheiser 100 G2 wireless mic running into a Zoom H2 audio recorder. All of the Botkin standups were recorded with that same mic running directly into the camera. Being primarily a stills camera, the 5D has crummy audio preamps, and they are by default set way too high and on automatic gain.

Using Magic Lantern, we were able to manually set the analog and digital gain to the appropriate levels, 0 dB and 6 dB respectively. With the Sennheiser receiver cranked all the way up to -6 dB, the audio signal was hot enough to need no real in-camera amplification, and so we got a very clean signal.