Monday, May 30, 2011
The “Acquisition Technical Metadata Set” is a set of metadata collected at capture through interfaces from live cameras or camcorders. It is intended to improve interoperability for the purposes of exchange of material. This set has been commonly agreed by EBU Members (users) and manufacturers for use in a tapeless file-based or live production environment.
The “Acquisition Technical Metadata Set” is organized into video and audio metadata clusters for the camera device (shooting parameters), the lens device (settings and identification) and the microphone device (identification). This document only provides definitions of the different relevant metadata attributes.
For cameras and camcorders, it is expected that the file format used for the export of the essence will be MXF. The metadata structure should support the exchange of native available MXF (export) and XML (Import and export) formats. Additional EBU specifications provide implementation guidelines for different configurations (e.g. using KLV and XML encodings for exchange inside MXF files, or using separate XML files). Descriptive Metadata sets are also specified in a separate EBU specification.
More information on the role of this specification with regard to other related EBU metadata specifications is provided in the ‘metadata’ section of the EBU TECHNICAL website.
NVIDIA demoed the new goodness on a Honeycomb slate with 1280 x 800 resolution and the frame rates remained smooth throughout. In order to emphasize the generational leap that we can expect with Kal-El, the company switched off two of the four cores momentarily, which plunged performance down to less than 10fps. That means the simulations we're watching require a full quartet of processing cores on top of the 12-core GPU NVIDIA has in Kal-El. Mind-boggling stuff. Glow Ball will be available as a game on Android tablets once this crazy new chip makes its way into retail devices -- which are still expected in the latter half of this year, August if everything goes perfectly to plan. One final note if you're still feeling jaded: NVIDIA promises the production chip will be 25 to 30 percent faster than the one on display today. Full video demo follows after the break.
Saturday, May 28, 2011
As a colorist, one of your challenges can be balancing out shots with extremely different luminance values. We are all human and our eyes can get tired after watching image after image all day. That’s where the scope comes in.
The Tektronix 7000 series of scopes has several tools that make it easier to visualize your clips' color and luminance without being affected by the actual image. In this tutorial, I will demonstrate two essential Tektronix functions: Capture and Line Select. Unlike software scopes, having immediate access to these hardware buttons can really speed up your workflow.
This knowledge can be applied whether you’re on DaVinci Resolve, Apple Color, Film Master, Avid Symphony, or any other color correction system.
Josh Petok is a Colorist who helps reality shows look their absolute best. From his beginnings on “The Surreal Life,” he strives to intensify drama or comedy while still keeping the presence and authenticity of reality tv. Completing work on his 47th show, Josh is continually learning and developing new methods for enhancing the shows that he works on. You can find Josh on his homepage, his blog The Current Cut, and – of course - Josh can be found on Twitter.
Related Posts (automatically generated)
- Baselight is Final Cut’s Newest Color Grading Plugin!
- Interview: Colorist Warren Eagles, Part 2
- Interview: Colorist Warren Eagles, Part 1
Tao of Color has not received compensation, goods, or services from anyone mentioned in this post or in the Video Tutorial. We hope, one day, this might change. Affiliate links are clearly marked, resulting in a commission on sales (which helps support TaoOfColor.com).
For the past few months I've been working on setting up and executing a simple and efficient workflow for the post of a delightful little movie called 'Stanley ka Dabba'. This movie was shot entirely on a Canon 7D DSLR camera. I've written many a post about still cameras that shoot video right here...
The Canon 7D DSLR camera shoots beautiful, natural stills, and the same sensor shoots HD video, saved to CF cards as QuickTime movies encoded in H.264 with PCM audio.
But there are two major problems. One is that the Canon 7D shoots 23.976 fps, not the true 24 fps that film in theaters runs at. The other is that the files have no timecode (TC for short) or reel/source associated with them. HD movie files out of a Canon are named MVI_xxxx, where xxxx is a number serially added by the camera.
So what happens is that after one finishes the edit, and needs to send it for further post, like colour correction, VFX etc, there is no EDL based route to sending the edit data onward.
For 'Stanley ka Dabba', I had devised a workflow where I converted the files from 23.98 to 24fps without any frame loss or motion conversion. I then striped all the files with continuous timecode, one following the next, sort of like they were captured off tape. And I assigned a reel name or source for all the files as per the folder they resided in.
Deepa Bhatia, the editor and co-producer of 'Stanley ka Dabba', edited on an Avid Media Composer. I made ALE files for her assistants to import into a bin and then batch-import the original (TC-striped) files from the Canon. That way Avid also 'gets' the TC from the Canon files.
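The striping step above can be sketched in code. Here is a minimal Python sketch (the clip names, durations and start timecode are hypothetical examples, not the actual project data) that assigns continuous 24 fps timecode to a list of clips, as if they had been captured sequentially off one tape:

```python
# Assign continuous, non-overlapping 24 fps timecode to a list of clips,
# as if they had been captured one after another off a single tape.
# Clip names and durations below are hypothetical examples.

FPS = 24

def frames_to_tc(total_frames, fps=FPS):
    """Convert a frame count to an HH:MM:SS:FF timecode string."""
    ff = total_frames % fps
    ss = (total_frames // fps) % 60
    mm = (total_frames // (fps * 60)) % 60
    hh = (total_frames // (fps * 3600)) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def stripe(clips, start_frames=0):
    """Return (name, start_tc, end_tc) for each (name, duration_in_frames) clip."""
    out, pos = [], start_frames
    for name, dur in clips:
        out.append((name, frames_to_tc(pos), frames_to_tc(pos + dur)))
        pos += dur
    return out

clips = [("MVI_0001.MOV", 240), ("MVI_0002.MOV", 480)]
for name, start, end in stripe(clips, start_frames=24 * 3600):  # start at 01:00:00:00
    print(name, start, end)
```

An ALE file for Avid would then carry these start/end timecodes plus a reel name per folder, which is what lets the EDL conform back to the original files later.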
After Deepa's edit, the colour correction happened at Reliance MediaWorks. On the Baselight at Reliance, the Avid EDLs that Deepa output conformed the original Canon files perfectly, and they were on their way to colour grading within minutes. There was no struggle whatsoever.
So what's with the title of this post? 'Struggling with digital movie-making'. Here's what...
On the same day as 'Stanley ka Dabba' was released, there was another movie released, a horror movie about text messages. This too was shot (partly or entirely) on a Canon DSLR. But they chose to do their post in a different way. They directly imported the Canon files into Avid Media Composer to edit them. No 24fps, no TC.
Long story short, that movie had to be re-assembled manually shot by shot, eye-matched on an Autodesk Smoke system. The post facility concerned, simply couldn't fathom a correct method for reproducing the original edit.
Similarly, there are other post houses which have yet another interesting workflow for Canon files. They advocate (or so I am told by clients who've been there) first transferring all the Canon 7D files to HDCam tape (eek, 3:1:1 colour sampling) or HDCamSR tape.
With HDCamSR tape nearly extinct and horrendously expensive (after the Japan earthquake and tsunami), this isn't a particularly bright suggestion.
I'm getting a bit snarky and sarcastic about all this because, as an editor it bothers me no end that people with no desire to find out new solutions, have such a say in the way post-production is (man)handled.
There are also other struggles and horror stories - with Alexa files, bad colour space conversions, loss of timecode, Red R3D file conversions that take days to get done, Weisscam files that just don't open and many such 'Adventures of a digital rebel'.
In my opinion 'Stanley ka Dabba' showed that if one has a story with a heart, a positive attitude, and the ability to take a calculated risk in doing something for the first time, then there is a medium that's waiting to help you tell your story. There are challenges for sure, and shooting Canon is not exactly simple.
So just go to a theatre near you where 'Stanley' is showing, and take a look for yourself. Cinematographers and even DOPs (are they different?) will of course find issues with latitude, resolution, gamma, etc etc.
But you know what? With some 'careful' lighting it is entirely possible to make even 35mm film look flat. And some 'adventurous' DI colour correction can easily make film look synthetic and 'inorganic' too.
As for operating the Canon 7D — not being able to pull focus or aperture, no optical viewfinder for video, and other whines from movie-film cinematographers — I got an interesting insight at a fashion show a few weeks ago at the Leela.
At the head of the ramp during the show, there were photographers taking stills on their DSLRs. How do they focus without a tape measure, with the artist walking toward them, no 'mark' to stop on, and no rehearsal for movement? And stills photographers have to capture a frozen moment, so going out of focus is also out of the question.
And with the light at fashion shows they are often shooting at wide apertures, so depth of field is shallow.
Bottom line: digital 'movie-making' is on its way to replacing the term 'film-making'. These are early days, of course. Those who forge ahead are the positive ones with guts. The rest will whine and sigh and hang on to film with a naive hope that digital is a fad that will go away and one day we will all go back to film.
As far as post production is concerned - editing, colour correction, sound editing and mixing - most of this has been digital for a decade or more. Doing post on digital is no more complex than post for film - managing logs, key codes, cut lists, pull lists, making cement joints, Hamman joints, winding and rewinding film on a winder - I'm glad we got rid of all that.
Red One, Red MX, Epic, Alexa, Canon 5D/7D, AVCHD, XDCamEx, HDCamSR, Viper, D-21, Genesis, ArriRAW, Canon MPEG2 422, AVC-Intra, P2, XDCamHD, Varicam, DVCProHD, KiPro, Gemini, Hyperdeck, Nanoflash, GoPro... the list goes on...
There are a large number of resources and personal experiences for each of these. Forums, blogs, web zines are oozing with information. And simple common sense, along with some basic knowledge of how computers work and store data, also helps. For my part, I've worked hard to be ready for each and every one of these new digital formats.
Store, Copy, Convert, Edit, colour grade, finish, backup, archive, re-purpose into new formats... the whole nine yards.
So digital doesn't have to be a struggle, unless you try hard, do the wrong things, or go to the wrong places.
Thursday, May 26, 2011
A paper from researchers at Microsoft Research and Hebrew University details a new, spline-based algorithm for transforming pixel art, such as that from early sprite-based games, into scalable vector images. Johannes Kopf and Dani Lischinski's 'Depixelizing Pixel Art' [PDF] describes an upscaling technique that differs from popular methods used by companies like Adobe in ways that are particularly suited for the low-resolution sprites of classic games. For example, the algorithm assumes that pixels in the ...
Wednesday, May 25, 2011
Tuesday, May 24, 2011
This is a damn tricky blog post, as new software comes out every 2.5 seconds and issues I raise with current software are most likely fixed by the time I finish writing this sentence…so this blog post will be a continual work in progress that I will update as much as possible. I can’t make a post “sticky”.
New developments in DI and colour grading have been emerging at two Australian facilities recently, thanks to R&D carried out for recent feature film projects, 'Tomorrow When the War Began' and 'Say Nothing'. Their research looks set to pave the way for future productions.
A couple of my colleagues at Northern Arizona University’s School of Communication noted how difficult it is to do manual white balance with the Canon DSLRs some of our students are using.
All I can say is: not as easy as with video cameras. The Canon presets have worked pretty well for all the projects I’ve shot, but I’m using a Canon 5D Mark II where I dial in the color temperature I want. Beginning students may not know what color temperature to use or may just rely on presets — especially when they have to go through the steps I outline below. Or some may just want the simplicity of video cameras. Whatever camera you choose to shoot with, manual white balance is an important step not to neglect.
And if it’s more difficult with a DSLR, then you have to decide if it’s worth the extra effort. For me and many of my students who have purchased their own DSLRs, we realize one thing: The image quality, baby, the image quality. But knowing how to color balance is part of mastering the image quality.
Why do we even bother with adjusting color temperature in the first place? Why not just set the camera to automatic? If you’re controlling your image professionally, then you need to use the manual settings so the camera doesn’t do things you don’t want it to do.
Our eyes balance white automatically. A camera’s sensor isn’t as smart as us and it doesn’t have the multitasking capabilities of our minds. So you need to tell the camera what kind of light it’s seeing so it can find true white. White indoors is different than white outdoors. See the chart below:
Color temperature in degrees Kelvin. This chart provides a list of different lamps and their corresponding color temps. (Image courtesy of Mapawatt.)
So if you’ve set your DSLR to an indoor light setting (~3700K), such as standard tungsten (a regular light bulb), and you go outdoors (~5500K), the image now has a bluish tint. If you’ve set your camera to daylight and you go indoors, the camera’s image now contains a warm yellow cast. See the images below.
The top image contains the bluish tint of an indoor white balance setting used incorrectly outdoors. The bottom image is too yellow — the typical problem with an outdoor setting used incorrectly indoors. The center image is properly color balanced. Photos by Kurt Lancaster (courtesy of Focal Press).
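Those casts fall straight out of black-body physics: a tungsten lamp simply emits far more red than blue light, while daylight is much more even. A short Python sketch using Planck's law to compare the relative red and blue energy of the two sources (the specific 650 nm/450 nm wavelengths chosen to stand in for "red" and "blue" are illustrative assumptions):

```python
import math

# Spectral radiance from Planck's law. Absolute units don't matter here,
# since we only compare ratios between colour channels.
H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    a = 2 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * K * temp_k)) - 1
    return a / b

RED, BLUE = 650e-9, 450e-9  # representative wavelengths (an assumption)

def red_blue_ratio(temp_k):
    return planck(RED, temp_k) / planck(BLUE, temp_k)

# Tungsten (~3200K) carries several times more red than blue;
# daylight (~5500K) is roughly balanced.
print(f"3200K red/blue ratio: {red_blue_ratio(3200):.2f}")
print(f"5500K red/blue ratio: {red_blue_ratio(5500):.2f}")
```

A camera balanced for the tungsten ratio boosts blue to compensate; take it outdoors where the source is already balanced and that same boost shows up as the blue cast in the top image.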
And if you are shooting a scene with multiple light sources (such as a window and a room with fluorescent lighting), you need to tell the camera which one it should see. Is the window your key light? Then dial in the proper color temperature or set the custom white balance.
The best way to make sure you don’t make mistakes? Use custom white balance. The most precise way to make sure your whites are really white is to use a standard 18% grey card. With a video camera, you simply fill the lens with the grey card (or a white sheet of paper when you need something handy) and press the manual white balance button. Be sure you’re angling the card or sheet of paper so that it’s reflecting the light you’re balancing into the camera’s lens!
However, a Canon DSLR is more complicated than a video camera. It doesn’t have a point-and-shoot white balance mode. However, some of the cameras, such as the 60D, 7D, and 5D Mark II, allow you to dial in the color temperature manually (you’ll need to know the color temperature of your light source, of course).
In this shot from the short, “The Last 3 Minutes“, the DP, Shane Hurlbut, ASC, dialed in the color temperature to 4700K in order to emphasize the warmth of the beach sunset. The Canon 60D, 7D, and 5D Mark II allow shooters to dial in color temperature in 100 degree increments.
However, until you gain the eye and experience of a professional photographer or cinematographer, it’s really good practice to manually set the white balance so as to avoid mistakes. Also, don’t rely on the “fix it in post” adage. DSLRs with their 8-bit color space don’t allow much room for error. As Shane Hurlbut, ASC, says, you need to get the image close, treating the DSLR like reversal film stock. Expose for the highlights and be sure to get the color temperature right. (See Hurlbut’s lecture from the Collision Conference, here; direct link to video segment about color balance, here.)
To white balance manually, follow the steps below to get the most accurate reading in the light conditions you’re shooting.
Manual settings of white balance for Canon DSLRs
- Take a photo of the white sheet or grey card, reflecting the light you want to balance from the card into your lens. Also, be sure it fills the lens, that it is in focus, and you have proper exposure (don’t blow it out or underexpose it).
- Turn your Mode Dial (top circle dial with images and letters on it) to manual camera mode (such as P). (This is necessary in order to access the Custom White Balance menu.)
- Press the menu button and wheel over to the second camera menu (the camera image with two dots beside it).
- Go down to Custom White Balance.
- The photo you took as your white balance reference will appear. Choose OK. The camera has been properly white balanced. You now need to make sure the camera is using custom white balance in the presets.
- Press the menu button to exit the menu.
- Switch the Mode Dial back to the video mode.
- Press the Q button (or white balance button) and move down to the white balance menu where you can rotate through the white balance settings: Auto, Daylight, Shade, Cloudy, Tungsten light, White fluorescent light, Flash, and Custom. Select Custom. The camera is now properly white balanced.
If you weren’t at NAB 2011 then you probably haven’t seen all the features of the soon-to-be-released DaVinci Resolve 8 in action. There were quite a few videos right from the NAB show floor (and here is one more) but this official video from Blackmagic Design does a good job of highlighting those features new to version 8, including stabilization, the multilayer timeline and XML export to and from Final Cut Pro.
The Sony PMW-F3 has a very nice standard look right out of the camera, but it also has many, many ways of adjusting the image to your liking. At AbelCine we have been making scene files for our clients, designed to both match cameras and create a look. With the wide range of adjustments available in the F3′s picture profile control, I was able to make several looks that you might find useful. Some are aimed at maximizing the range of the camera, while others are aimed at creating a specific look. You can download all of the scene files here, and easily upload them to your camera through an SxS card. Here is a brief description of each and what you can expect. Learn how these files were made and how to make your own in our F3 class, which is going on this week in NYC & LA.
Note: All of the stills were grabbed at +6db of gain, so you might notice a little noise in the stills. I changed exposure in each still to adjust for the different gamma modes.
Monday, May 23, 2011
At NAB yesterday, Promise Technology announced the SANLink Thunderbolt to Fibre Channel adapter. From the press release:
'SANLink will provide a dual 4G Fibre Channel link that can be used to connect to external Fibre Channel storage or to an Xsan network. Each adapter features full duplex FC ports that automatically detect connection speed and can operate independently at 1, 2 or 4Gb/s. SANLink is the perfect companion to connect to the new VTrak x30 Series as well as maintain compatibility with the previous generation VTrak x10 Series.'
Bringing Xsan to laptops is great, but what about having Mac Mini Servers to interface to Xsan Volumes?
Sunday, May 22, 2011
I didn't know that one. Thanks Nir!
On first run it will pull up this dialog. The values that must be changed to reflect your setup are:
1. Path to Backburner: the path where you have installed the Backburner manager, e.g. C:\Program Files\backburner
2. Path to aerender.exe: set to your shared AfterEffects\Support Files folder, e.g. F:\Video\AfterEffects\Support Files
3. Maximum # of Servers: top limit of potential render machines
The other settings are left blank (automatic), but can be set to fill in the UI panel defaults. These will be written to your user prefs file.
Submitting a job to the Backburner manager can be as simple as: opening a project, selecting the composition you wish to render (assuming it has been added to the project render queue), and clicking Submit to Backburner. This will send off a task based on what it has, adding the needed defaults. It will:
- name the Job with the selected comp name if blank
- set the destination group to the text prefs default
- leave Depend on Job Names empty and set Max # of render nodes to all
As well, it will pull the:
- currently active project name
- currently selected comp name
- currently selected comp frame bounds
You can explore further by working with the Additional AE Parameters section, e.g. using the OM and RS templates to override the render queue settings. Anything that will render with aerender will work with the Backburner manager.
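Under the hood, a farm submission like this boils down to constructing one aerender command line per render node. A minimal Python sketch of building such a command (the paths, comp name and template names are hypothetical examples; consult Adobe's aerender documentation for the full flag list):

```python
# Build an aerender command line for one render node.
# Paths, comp names and template names below are hypothetical examples.

def build_aerender_cmd(aerender, project, comp, start, end,
                       output, rs_template=None, om_template=None):
    cmd = [aerender,
           "-project", project,
           "-comp", comp,
           "-s", str(start),       # first frame for this node
           "-e", str(end),         # last frame for this node
           "-output", output]
    if rs_template:                # render settings template override
        cmd += ["-RStemplate", rs_template]
    if om_template:                # output module template override
        cmd += ["-OMtemplate", om_template]
    return cmd

cmd = build_aerender_cmd(
    r"F:\Video\AfterEffects\Support Files\aerender.exe",
    r"F:\jobs\spot.aep", "Main Comp", 0, 99,
    r"F:\renders\spot_[####].tif",
    rs_template="Best Settings", om_template="TIFF Sequence")
print(" ".join(cmd))
```

A manager like Backburner effectively runs one such command per node, splitting the frame range across machines; that is why anything aerender can render works with it.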
There are SSDs, and then there are SSDs -- the Texas Memory Systems (TMS) RamSan-70 is definitely the latter, packing 900GB of high-speed SLC NAND flash onto a single half-length PCIe card. Boasting an incredible 2GB-per-second sustained external throughput, this near-terabyte solid state drive is clearly overkill for most of us, considering that it's guaranteed to have a sky-high price (once details are released). Instead, the '900GB Gorilla,' as it's come to be known around TMS HQ, is destined for high-end servers -- though we certainly wouldn't object to clearing out a slot in our desktop, if by some miracle we can afford this monster when it starts shipping in four to eight weeks.
When you want to grade professionally, using a mouse is just not an option; you want to keep your eyes on the grading monitor. Moving your eyes from the grading monitor to the GUI monitor and back just to see where to click next both slows you down and makes your eyes more tired. When using a panel you can adjust two parameters at the same time, like the mids' and highs' color balance, or shadow and highlight luminance to get the right contrast. Once you get used to where all the buttons are, your grading gains both speed and quality.
The expensive grading systems often have dedicated panels, but the cheaper systems rely on third-party panels. There are three main competitors: Tangent, JLCooper and Avid (formerly Euphonix). I have had hands-on time with three of their panels: the Tangent Wave, Avid Artist Color and JLCooper Eclipse CX. The Tangent and Avid panels are in the same price range; the JLCooper is more expensive, but it also has a lot more direct-access buttons, which makes the mouse less necessary and, of course, the work quicker.
Literally a thousand years ago* I wrote a post titled Dear some nerd: Please port the Box2D open-source physics engine to an Adobe After Effects Script. I’m not sure if that’s exactly what Motion Boutique has done with Newton, but what they’ve made looks incredibly fun and usable.
Anyone know the release date? Price?
ARRI ALEXA post, part 4: "Local producers have started real productions with the ARRI ALEXA, so my work has moved from the theoretical to the practical. As an editor, working with footage from ALEXA is fun. The ProRes files are easily brought into FCP, Premiere Pro or Media Composer via import or AMA with little extra effort. The Rec 709 [...]"
Saturday, May 21, 2011
The digital intermediate (DI) has become an accepted part of the workflow for feature films ─ from the biggest blockbusters to smaller-budget independents. There is currently an enormous number of tools to get the job done, including software-based systems and DI systems with all the bells and whistles. P3 Update has taken a look at some recent DI work to see the challenges that DI artists are facing and the tools they use to complete the job.
This week I was at the LLB show in Stockholm and had a look at the new OLED displays from Sony. I was very impressed by them even when it's a bit hard to judge them in the bright lights of the conference hall.
OLED (Organic Light-Emitting Diode) is a new type of display which doesn't need a backlight. This has the advantage that it can display deep black levels, which LCD can't. OLED can therefore achieve a higher contrast ratio, which is preferred when color grading.
Sony has two versions of their OLED monitors: the BVM-E250, a grading monitor, and the PVM-2541, a production monitor.
Thursday, May 19, 2011
It occurred to us when creating the Camera Mounted Recorders comparison chart that there is often confusion around different HD formats. In 2009, I wrote a blog post titled Making Sense of HD Formats which covered the different HD formats used today. But what I didn’t go into was those other words and numbers that we often see associated with a video compression format: bit rate and bit depth. Don’t stop reading just yet — I promise to keep this simple, and you’ll see that these numbers may actually mean something when it comes to your next production.
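As a quick illustration of how resolution, frame rate, bit depth and chroma sampling combine into a bit rate, here's a back-of-the-envelope calculation in Python (the 1080p25 parameters are an illustrative example, not a claim about any specific recorder):

```python
# Uncompressed video bit rate = pixels * fps * bit depth * samples per pixel.
# In 4:2:2 sampling there are on average 2 samples per pixel
# (1 luma + 1 chroma); 4:4:4 would be 3, 4:2:0 would be 1.5.

def uncompressed_mbps(width, height, fps, bit_depth, samples_per_pixel=2):
    bits_per_second = width * height * fps * bit_depth * samples_per_pixel
    return bits_per_second / 1e6  # megabits per second

rate = uncompressed_mbps(1920, 1080, 25, 10)
print(f"Uncompressed 10-bit 4:2:2 1080p25: {rate:.0f} Mb/s")
```

Comparing the resulting figure (roughly a gigabit per second) to a typical 50 Mb/s broadcast codec shows why compression ratios on the order of 20:1 are the norm.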
Wednesday, May 18, 2011
Tuesday, May 17, 2011
It is amazing what can come to life if you have a good idea and you transform it into reality. Friedemann Wachsmuth and his friend built a fully working Super8 movie projector using Lego Technic blocks! Of course not all parts were made by Lego, but the only non-Lego parts are the lens, the reel spindles and the lamp. The author says that "the projector uses just two engines and is fully featured with automatic feeding, 24 fps, fast rewind and 120m reel capabilities. A decent LED flashlight makes it pretty amazingly bright".
Even though you might think that projecting Super8 film is a largely unnecessary hassle these days, you must admit that such objections are clearly meaningless in the face of what Friedemann has built. Check out the video included to see the projector in action!
We wonder what more could be built with the use of Lego blocks. Any cool ideas?
TheInquirer.net reports that Western Digital has released 2.5TB and 3TB hard drives aimed at storing video.
Western Digital, which recently purchased rival storage vendor Hitachi Global Storage Technologies, announced that its AV-GP series of hard disk drives now come in 2.5TB and 3TB models. Both drives have the firm’s Silkstream technology, which Western Digital claims can capture 12 simultaneous high definition video streams.
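A rough sanity check of that 12-stream claim is straightforward (the 25 Mb/s per-stream figure below is an assumption for a typical broadcast HD stream, not a number from Western Digital):

```python
# Aggregate bandwidth needed for N simultaneous HD streams.
# The per-stream bit rate is an assumed, typical broadcast figure.

def aggregate_mb_per_s(streams, mbps_per_stream):
    total_mbps = streams * mbps_per_stream
    return total_mbps / 8  # convert megabits/s to megabytes/s

print(f"{aggregate_mb_per_s(12, 25):.1f} MB/s for 12 streams")
```

The raw number is well within a modern hard drive's sustained throughput; the harder problem with many concurrent streams is seek behavior, which is presumably what AV-oriented firmware like Silkstream is meant to address.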
In what could be a classic case of the poacher turned gamekeeper, BitTorrent, widely condemned by content owners as providing a key channel for illegal content downloads, is to begin legal distribution of films over its network.
One of the big stories at NAB 2011 was the release of numerous Camera Mounted HD recorders. From the AJA Ki Pro Mini to the Sound Devices PIX 240, a huge variety of portable recorders are becoming available. These recorders all aim to increase recording quality and ease workflow needs, but they differ greatly in their form, function and price point. To help navigate these different options, we created a Camera Mounted Recorder Comparison Chart that compares several of the different camera mounted recorders. The chart includes details on recording format, media, inputs and other information that should help you decide which might be right for you. We also included the high end recorders from Codex and S.two, which feature ARRIRAW recording capability. ARRIRAW is a 14-bit RAW Uncompressed format that can be sent out of the ARRI ALEXA camera.
NOTE: Several excellent recorders from Sony, Panasonic, AJA and others are not included, because we wanted to focus on recorders that are designed to be mounted on a camera for production. Check out the chart and let us know what you think.
Saturday, May 14, 2011
This year’s NAB was a BLAST of fun and of new products and announcements. While there were literally tens of thousands of new products being launched – and almost as many companies vying for my and other people’s attention – for me this was the year of the CAMERA, as a slew of new cameras and corresponding footage were released. But what really made this the “Year of the Camera” for me – or what kept me focused on the new cameras – was Zacuto’s “Great Camera Shootout” camera test results that they were screening during the trade show (Zacuto won an Emmy last year for their tests).
The digital intermediate (DI) is often a high-ticket item in the postproduction budget, but more and more independent filmmakers are still able to finish their films with a flourish. Even better, top-end tools from such well-respected manufacturers as Blackmagic Design and FilmLight are part of the equation.
Here are 2 videos from Autodesk showing the integrated Real-Time Color Grading in Flame Premium. They show the multi-layer timeline, volumetric light, 3d cast shadows, lens flares etc.
If you are working with material from the ARRI Alexa you might need a matching LUT. ARRI provides a LUT generator where you can generate LUTs for most common grading and effects software. There are four categories — On-Set LUTs, Dailies LUTs, Postproduction and Advanced — which give you different options to change.
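At its simplest, a LUT is just a sampled transfer function with interpolation between the entries. A minimal Python sketch of applying a 1D LUT (the toy 5-entry curve is made up for illustration; real grading LUTs carry hundreds of 1D entries, or are 3D lattices applied per RGB triplet):

```python
# Apply a 1D LUT to a normalized 0..1 code value by linear interpolation.
# The 5-entry "shadow lift" curve below is a made-up example.

def apply_1d_lut(value, lut):
    """lut holds output values sampled at equal input intervals on [0, 1]."""
    if value <= 0.0:
        return lut[0]
    if value >= 1.0:
        return lut[-1]
    pos = value * (len(lut) - 1)   # fractional position in the table
    i = int(pos)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

lut = [0.0, 0.3, 0.55, 0.8, 1.0]  # gentle shadow lift (illustrative)
for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{v:.2f} -> {apply_1d_lut(v, lut):.3f}")
```

A Log C to Rec 709 viewing LUT of the kind ARRI's generator produces works on this same principle, just with a curve derived from the camera's encoding instead of a toy table.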
May 13, 2011
This tool allows you to manage preference files for Final Cut Studio and Final Cut Express. ‘Trash your preferences’ is often a mantra when troubleshooting errors in FCP. I found this free utility from Digital Rebellion works well. It makes it easy to back up and trash your preferences rather than navigating through the library files manually.
Preference files store information about user preferences, window and toolbar placements, and launch settings. Sometimes these can become corrupted, causing problems. Preference Manager allows you to trash corrupted preferences and keep backups of working preference files in order to quickly restore your settings.
- Works for all applications in Final Cut Studio, not just Final Cut Pro
- Backs up preferences, FCP button bars, keyboard layouts, column layouts, window layouts, track layouts, custom settings, user plugins, Boris text plugin preferences and Compressor settings and destinations
- Unlimited backups
- Completely free
- Lock preference files to prevent modification
- Assign a custom backup location
- Backups are stored in a single file, making it easier to move them to another machine
- Works with Final Cut Pro 3.0 and above and all versions of Final Cut Studio
- Link backups to a particular project so that the backup will be restored when the project is launched, useful for per-project settings such as scratch disks
- Categorize your backups for easy identification
A utility for managing preference files for Apple Final Cut Studio on Mac OS X. Trash, backup, lock and restore preferences.