I cross-linked zephyrnoid's post to dvinfo here, along with some extra interpretation that I'll quote below, because (a) these threads on cine mode keep popping up on various forums, and (b) I'm pretty darned sure I'm right.
What that most likely means is that the high-precision "raw" data is mixed in-camera with a blurred copy of the same data before being quantized (down to 4:2:2 or 4:2:0 etc.). This is an old, not-unusual image-processing trick. It's "agnostic" in that it's applied uniformly; it doesn't seek out edges or bright regions or anything like that.
You can try it yourself in AE or Photoshop -- mix a pic with a blurred copy of the same pic. The precise mixing scheme can vary; I like multiplying, though that requires a gamma correction first, e.g.

final_color = (original_color^x) * (blurred_color^(1-x))

for x ranging from 0 to 1. You can do this in Photoshop by copying a layer, blurring it, changing the blend mode to "multiply", and using the "levels" command on each layer to change the gamma (that is, raise the color values to a power): 1/x on the original and 1/(1-x) on the blurred copy. Here's a quick over-done sample with x set to 0.5 (gamma 2.0 on both layers) and the blur very wide. Note that the visible noise is mostly in the original un-blurred layer, so noise seems to get suppressed, too.
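If you'd rather play with this outside Photoshop, here's a minimal sketch of the same blend in Python with numpy/scipy -- the function name, the default x, and the blur sigma are my own choices for illustration, not anything Canon has documented:

import numpy as np
from scipy.ndimage import gaussian_filter

def blur_mix(img, x=0.5, sigma=8.0):
    """Weighted geometric-mean blend of an image with a blurred
    copy of itself: out = img**x * blurred**(1-x).

    img   : float array in [0, 1], grayscale (HxW) or color (HxWx3)
    x     : weight on the sharp original, from 0 to 1
    sigma : Gaussian blur radius in pixels ("very wide" = large sigma)
    """
    if img.ndim == 3:
        # Blur spatially only -- don't smear the color channels together.
        blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
    else:
        blurred = gaussian_filter(img, sigma=sigma)
    return img ** x * blurred ** (1.0 - x)

With x = 0.5 and a large sigma you get the same over-done look as the sample above: the multiply pulls every pixel toward its blurred neighborhood, which is also why the layer noise gets knocked down.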
Why not do it in post? Well, you can, but the unblurred, pre-quantization signal available in-camera is higher quality than the already-quantized footage you'd be working with in post -- especially if they step on the gamma very hard (for a simple 8-bit image, gamma 2.0 is a pretty strong step).
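To see why gamma 2.0 is such a strong step on 8-bit data, a quick back-of-envelope (my own illustration, same Python as above): count how many distinct codes survive the stretch when it's applied after quantization.

import numpy as np

codes = np.arange(256) / 255.0                          # every possible 8-bit level
stretched = np.round((codes ** 0.5) * 255).astype(int)  # gamma 2.0 applied in post
print(len(np.unique(stretched)))   # 192 -- a quarter of the codes collapse
print(stretched[:4])               # [ 0 16 23 28] -- input code 1 lands on 16

The shadows take the worst of it: the first step above black jumps 16 output levels, which is exactly the banding you avoid by applying the gamma to the high-precision signal before quantization.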
(Edit postscript: probably what Canon does is use x = 0.5 and rearrange the terms -- multiply the un-gamma-corrected images, then raise the gamma of the result afterwards. That's fewer operations, and if the intermediate image is high-precision you won't notice that part of the quality loss. There is quality loss, in that we're mixing with a Gaussian-blurred image -- the good part is that you might pick up some detail in the extreme shadow/highlight ranges. So it depends on your subjective definition of "quality".)
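Here's that rearrangement as a sketch (my guess at the ordering, not anything Canon has published). With x = 0.5, the identity img**0.5 * blurred**0.5 == (img * blurred)**0.5 lets you multiply first and apply the gamma once at the end:

import numpy as np
from scipy.ndimage import gaussian_filter

def blur_mix_rearranged(img, sigma=8.0):
    # Same result as blur_mix(img, x=0.5, sigma=sigma), but with one
    # multiply and one power: multiply the un-gamma-corrected images,
    # then raise the gamma of the product.
    if img.ndim == 3:
        blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
    else:
        blurred = gaussian_filter(img, sigma=sigma)
    return np.sqrt(img * blurred)

On float data the two versions agree up to rounding (np.allclose passes); the reordering only costs anything visible when the intermediates are low-precision.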