I've recently been doing a little reading about YCbCr to RGB conversion and it occurred to me that this is an area that can have a profound effect on the quality one can get out of an HV20. You see, part of the issue, aside from the color sub-sampling inherent to compressed formats like DV and HDV, is that simple color space conversion is ALSO a lossy operation in and of itself. Add to that the issue of "video" color range [16-235] vs. "computer" color range [0-255], and it seems like a really good idea to understand the subtle but important details of how color is handled in one's "pipeline" when quality is a priority.
I did some tests to try to determine exactly what it is that comes out of the HV20 in terms of "color space". The results were a little surprising, but mostly in a good way.
As you know, the HV20 saves its video in "HDV" format, which is really nothing but an MPEG-2 stream with very specific parameters. MPEG-2 should use the Rec709 matrix/coefficients for its interpretation of YCbCr. Basically, Rec709 defines how the YCbCr is to be interpreted for conversion to RGB. (There is more than one "formula" for converting between YCbCr and RGB; Rec709 is the standard one used for HDTV and modern professional YCbCr video gear.)
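For the curious, the studio-range Rec709 conversion boils down to a little arithmetic. Here's a rough Python sketch (my own illustration with the usual rounded 8-bit coefficients, not code from any particular decoder) showing why anything encoded above Y=235 is at risk:

```python
def rec709_to_rgb(y, cb, cr):
    """Standard Rec709 "studio range" YCbCr -> full-range 8-bit RGB.

    Y [16..235] is scaled up to [0..255]; chroma is centered on 128.
    Coefficients follow from Kr=0.2126, Kb=0.0722 (Rec709).
    """
    r = 1.164 * (y - 16) + 1.793 * (cr - 128)
    g = 1.164 * (y - 16) - 0.213 * (cb - 128) - 0.533 * (cr - 128)
    b = 1.164 * (y - 16) + 2.112 * (cb - 128)
    # Clamp to 8-bit RGB -- this clamp is where super-whites die.
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

# "Video black" (Y=16) lands at RGB 0:
print(rec709_to_rgb(16, 128, 128))    # (0, 0, 0)
# Y=235 already reaches RGB 255, so Y=250 clips to the very same value:
print(rec709_to_rgb(235, 128, 128))   # (255, 255, 255)
print(rec709_to_rgb(250, 128, 128))   # (255, 255, 255)
```

The two last lines are the whole story of this post: once the scaled conversion is done, Y=235 and Y=250 are indistinguishable.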
For my tests, I used AviSynth to convert the HV20's MPEG-2 into RGB using the four standard formulas it provides. People familiar with the "Farnsworth" method for removing the hard telecine from HV20 movies might recognize the name of the program. Think of it as a Swiss Army knife for working with video.
This is where the surprises came up. It seems that while the HV20 pads the blacks as specified by Rec709 (it puts "video black" at code value 16), it uses the rest of the range, all the way up to code value 255, to encode whites.
This is actually good news for our purposes. We get more headroom for our whites!
The trick is, you need to convert the video correctly. If you convert the video in the standard (arguably "correct") way, scaling the Rec709 video [16-235] to computer space [0-255], the BLACKS will be in the correct place but you will clip the whites. By default, this is probably what most codecs do when decoding the video.
To hang on to those extra whites, what you need to do is pass the video straight through, converting the code values 1:1 without scaling. So, instead of scaling the video color range [16-235] to fit the computer color range [0-255], you pass it straight through with no scaling. It will make the blacks look a little milky, but you'll hang on to more of the highlight values. The milkiness in the blacks can be corrected in post, and it will be no more lossy a correction than if you had scaled them in the initial conversion anyway. The advantage is that you've held on to more highlights.
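To make the trade-off concrete, here's a hypothetical Python sketch comparing the two approaches on luma values alone (the function names are mine, purely for illustration):

```python
def scaled(y):
    """Standard conversion: scale video range [16-235] to [0-255].
    Anything encoded above Y=235 gets clipped to 255."""
    return max(0, min(255, round((y - 16) * 255 / 219)))

def passthrough(y):
    """1:1 conversion: code values pass straight through, no scaling."""
    return y

# A "super white" highlight encoded at Y=245:
print(scaled(245), passthrough(245))  # 255 245 -- scaling clips it, pass-through keeps it
# Video black at Y=16:
print(scaled(16), passthrough(16))    # 0 16 -- pass-through blacks sit high (milky)
```

The milky blacks (16 instead of 0) are exactly what you see in the "PC" images, and they're trivially pulled back down in grading; the clipped highlight, on the other hand, is gone for good.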
This is another advantage of using my AVISynth pipeline for converting HV20 HDV. Not only can you process the inverse telecine on the fly, you can also control precisely how the color is being handled on the YCbCr to RGB conversion, something that is not normally exposed by most playback codecs.
I'm attaching the output from the conversion tests. The files that have "PC" in the names are the ones that use full-range conversion (no scaling). As you can see, they look a little flatter and the blacks are milky. OK, no problem with the blacks; we can fix that. To really see the difference you have to look at the image with a color picker. See the headlights of the cars? They actually go above 235, proving that data is in fact encoded in that "super white" range. The standard Rec709 conversion would clip those values. (Compare the images, the scaled "Rec709" and the unscaled "PC709".) Way more room for grading in the PC709 images.
Considering the limited bandwidth of HDV, it makes sense to hang on to everything that we can. MOST editing programs, unfortunately, work in RGB color space, so this conversion step is often a necessary evil. It makes sense to take control of how the conversion is done. Converting the YCbCr straight across, rather than scaling it to PC color range, seems to be the way to go in this case.
First image... Incorrect conversion (the default!)
Second image... more correct... no scaling of "video" to computer color.
Last image... my graded image, which fixes the lifted blacks but leaves the newly found white dynamic range. (I lifted the saturation a little also.)
Another thing I want to add: even systems that work in Y'CbCr natively will sometimes run a "video legal" filter on footage without asking you. That will effectively cut off any of the out-of-range data. That is a caveat you definitely need to be aware of even if your NLE is YCbCr native. Make sure you know what your NLE's default behavior is. Also, keep in mind that a lot of filters in many NLEs are RGB only. The NLE will convert the video to RGB and back to YCC without telling you, often also running those "video legal" filters in the process.
My recent testing (October 2008) has revealed that Adobe CS3 handles the super-whites correctly. Using Premiere to edit and After Effects to remove pulldown and color grade, as described in this post [ http://hv20.com/showthread.php?t=14476 ] by Max Goldberg, is a straightforward system that addresses both issues while maintaining the highest quality. To save the super whites you must run After Effects in floating-point mode. At that point the only real caveat is that some effects in AE are NOT floating point. You must be careful to remap any super-white values back into the 0-1 range before running any non-float filters, or they will be clipped. It's not as complex as it sounds, and nearly ALL the color correction filters are float, so if you do all your color correction first you will likely never have any problems.
Non-float filters are labeled with a little exclamation point in a warning sign. They stand out in the filter stack quite obviously and are easy to spot. Color correct first and you will usually be OK. However, if for some reason you need to pass super whites THROUGH some non-float filters, there is a simple trick. You'll lose some precision, but not much. Use "Levels" to map the super whites into the 0-1 range with the output white slider before running the non-float filters. You can then map the values back up AFTER the non-float filters with another Levels, with its input white slider set to the same value you used for the previous Levels output white, if required. This will map the super whites back to the exact same place they were before the non-float filters. The non-float filters run in 16-bit mode, so there will be some loss in precision, but it will not be big enough to worry about if you map the float values down in a sane way, i.e. only pull the whites down just enough to get them under 1 (in most cases it should not require more than about a 10% reduction).
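If it helps to see the round trip as numbers, here's a small Python sketch of that Levels trick, with values normalized to 0-1 floats the way AE's float mode works (the helper names are mine, not AE's):

```python
def levels_output_white(v, out_white):
    """Like AE's Levels with only the output-white slider moved:
    linearly maps 0..1 down to 0..out_white, pulling super-whites under 1.0."""
    return v * out_white

def levels_input_white(v, in_white):
    """Like AE's Levels with only the input-white slider moved:
    maps 0..in_white back up to 0..1, restoring the original values."""
    return v / in_white

w = 0.9                 # a ~10% reduction, enough to get a 1.1 super-white clip-safe
super_white = 1.1
safe = levels_output_white(super_white, w)   # 0.99 -- now survives non-float filters
# ... run the non-float filters here ...
restored = levels_input_white(safe, w)
print(round(restored, 6))                    # 1.1 -- super-whites back where they were
```

Because both steps are a single multiply/divide by the same factor, the only loss is the 16-bit quantization inside the non-float filters themselves, which is the "some precision, but not much" mentioned above.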
To date I have only tested this with HDV sources and not with direct captures from the Intensity (I currently don't have access to my Intensity workstation). The more convoluted workflow requiring pre-processing with AviSynth might still be required for those sources. But since the pre-processing actually serves to COMPRESS the footage (we are talking about 60i footage in a pure uncompressed codec vs. the post-processed footage at 24p in a losslessly compressed format), in that particular case pre-processing in AviSynth actually has advantages that make up for the extra work it requires.