Zebra Overlay
Open, Wishlist, Public

Description

Replace areas in the image that are above/below a certain luma value with a moving-stripes "zebra" pattern to indicate over-/underexposed areas.

Parameters:

  • Direction of the stripes
  • Color of the stripes
  • Overexposure threshold
  • Underexposure threshold
  • ON/OFF
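A minimal sketch of the feature as described above (function and parameter names are illustrative, not from the actual AXIOM codebase; thresholds assume 8-bit luma, and the diagonal stripe direction is just one of the options):

```python
# Hypothetical sketch of the zebra overlay: mark clipped pixels with a
# moving diagonal stripe pattern. Names and defaults are illustrative.

def zebra_overlay(luma, frame, over=235, under=16,
                  stripe_color=(255, 0, 255), period=8):
    """Return an overlay: stripe_color where luma clips, None elsewhere.

    luma:  2D list of 8-bit luma values
    frame: frame counter, used to animate ("move") the stripes
    """
    h, w = len(luma), len(luma[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if luma[y][x] >= over or luma[y][x] <= under:
                # diagonal stripes that scroll one pixel per frame
                if ((x + y + frame) // (period // 2)) % 2 == 0:
                    out[y][x] = stripe_color
    return out
```

The ON/OFF parameter would simply bypass this step; direction and colour map onto the stripe formula and `stripe_color`.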

Magic Lantern example:

sebastian updated the task description.
sebastian raised the priority of this task to Wishlist.
sebastian added a project: AXIOM Beta Software.
sebastian added a subscriber: sebastian.
Bertl added a subscriber: Bertl. Dec 29 2014, 1:21 PM

Direction of the stripes?
Color of the stripes?

Just to clarify, luma or luminance?

From Wikipedia: Luma is the weighted sum of gamma-compressed R'G'B' components of a color video – the prime symbols (') denote gamma-compression. The word was proposed to prevent confusion between luma as implemented in video engineering and luminance as used in color science (i.e. as defined by CIE). Luminance is formed as a weighted sum of linear RGB components, not gamma-compressed ones.
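The distinction can be made concrete in code. Luma uses the Rec. 709 weights on gamma-compressed components; luminance would use the same weights on linear components (sketch for illustration):

```python
# Rec. 709 luma: weighted sum of *gamma-compressed* R'G'B' components.
# Luminance (CIE) would apply the same weights to *linear* RGB instead.

def luma_709(r, g, b):
    """Luma Y' from gamma-compressed R'G'B' in [0, 1] (or any scale)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```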

sebastian updated the task description. Dec 29 2014, 1:23 PM

stripe color/direction added to parameter list.

Interesting, I always thought luma is just an abbreviation of luminance.
I guess since we are doing "video engineering" "luma" is the correct term.

It might also make sense to explore marking per-channel overexposure, i.e. when the red channel is starting to clip but none of the other channels is yet...

Bertl added a comment.Dec 29 2014, 2:04 PM

For me it would make most sense to check each channel in the raw bayer data for over/under a certain threshold (which could be different/weighted) as the main idea is to prevent information loss from clipping, no?

In T233#3329, @Bertl wrote:

For me it would make most sense to check each channel in the raw bayer data for over/under a certain threshold (which could be different/weighted) as the main idea is to prevent information loss from clipping, no?

Agreed
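A sketch of the per-channel raw check Bertl describes (the RGGB mosaic layout and the 12-bit thresholds are assumptions for illustration, not the actual sensor configuration):

```python
# Hedged sketch: check each raw Bayer sample against a per-channel
# threshold. Per-channel thresholds allow the weighting Bertl mentions.

def bayer_clip_mask(raw,
                    over={'R': 4000, 'G': 4000, 'B': 4000},
                    under={'R': 64, 'G': 64, 'B': 64}):
    """Return, per sample, the channel name if it clips, else None."""
    pattern = [['R', 'G'], ['G', 'B']]  # assumed RGGB mosaic
    mask = []
    for y, row in enumerate(raw):
        mrow = []
        for x, v in enumerate(row):
            ch = pattern[y % 2][x % 2]
            mrow.append(ch if (v >= over[ch] or v <= under[ch]) else None)
        mask.append(mrow)
    return mask
```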

Yes and no. One traditional use of zebra patterns was to set correct skin tone exposure levels - roughly 67 IRE in Rec 709 colour space. If this capability were lost (as the zebras became channel-specific only), I think many people would be more than a little annoyed, as they rely on it for correct exposure.

Zebras have become somewhat more complicated in recent years with the advent of log-style image profiles, as you potentially need multiple settings depending upon the form of image you're looking at. For example, IRE 70 would be an ideal setting when looking at a 'corrected' image (i.e. a translation of the log recording signal to Rec 709 for monitoring purposes, while log is still recorded), but IRE 52 would be needed when looking at a raw S-Log2 image (for example).

In these situations Luma would be the value to work from - mapped onto IRE values.

Also, given that in many cases a raw image will not be recorded (I'm assuming mostly the debayered, 10 bit uncompressed output will be recorded) having zebra set by Raw signal would be unhelpful as the uncompressed output will exhibit clipping when the Raw signal hasn't.

For completeness, ideally we'd want numerous options for zebra: luma value, 10-bit code value of the compressed stream, IRE value (conversion from luma to IRE), raw Bayer level (12-bit value, I guess), and exposure percentage (0% black clip, 100% white clip). But also global level (luma - for skin tone settings) and single channel (showing clipping). Having them colour-selectable would also be useful (i.e. red, blue or green when per-channel zebras are set, traditional black when luma values are set).

I'd also recommend enabling a 'false colour' option - which is a bit like a global version of zebras, where areas of set exposure values are separately coloured: 0-10% dark blue (black clip), 10-20% purple (shadow), 40-60% green (mid grey/log skin tone exposure), 65-75% pink (Rec 709 corrected skin tones), 80-90% yellow (approaching white clip), 90-100% red (white clip).

Of course each option should be assignable across the full range of values available.
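The suggested band-to-colour assignment could be expressed as a simple lookup (band edges are the commenter's suggestions above, not a standard, and should be user-assignable as noted):

```python
# Illustrative false-colour mapping using the bands suggested above
# (ranges in percent of full scale; edges are not a standard).

FALSE_COLOUR_BANDS = [
    ((0, 10),   'dark blue'),  # black clip
    ((10, 20),  'purple'),     # shadow
    ((40, 60),  'green'),      # mid grey / log skin tone
    ((65, 75),  'pink'),       # Rec 709 corrected skin tone
    ((80, 90),  'yellow'),     # approaching white clip
    ((90, 100), 'red'),        # white clip
]

def false_colour(percent):
    """Return the band colour for an exposure percentage, or None."""
    for (lo, hi), name in FALSE_COLOUR_BANDS:
        if lo <= percent <= hi:
            return name
    return None  # outside all bands: leave the pixel untouched
```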

Bertl added a comment.Dec 30 2014, 8:39 PM

Please, if possible, provide some examples for good/bad features so that we can avoid making the same mistakes others made and strive for the "best possible" implementation :)

One clear rule is to always show clipping as recorded.

Nothing would be more annoying than having the false sense that you are not clipping (for example on a debayered quick preview from raw), only to have clipping happen on the recorded data (lower range).

from magic lantern (http://wiki.magiclantern.fm/userguide):

  • Luma: zebras are computed from Y channel only.
  • RGB: check overexposure for each RGB channel. Clipped channels are displayed in the opposite color (i.e. clipped red shown as cyan, underexposed as white and so on).

You may adjust thresholds for underexposure and overexposure, or you can disable zebras while recording.
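The RGB mode's "opposite colour" display can be sketched as follows (the 8-bit threshold is an assumption for illustration; Magic Lantern's actual implementation may differ):

```python
# Sketch of the Magic Lantern RGB-zebra idea: a clipped channel is
# shown in its complementary colour, e.g. clipped red -> cyan.

def clip_indicator(r, g, b, hi=235):
    """Return the indicator colour for clipped channels, or None."""
    clipped = (r >= hi, g >= hi, b >= hi)
    if not any(clipped):
        return None
    # complement: turn the clipped channels off, the others fully on
    return tuple(0 if c else 255 for c in clipped)
```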

I think the first thing to consider is that there are multiple image pathways possible on the camera. So one output may be a log or even raw image stream, while another may show a monitoring LUT (e.g. Log to Rec709) so the user needs to be able to select which stream they want the Zebra to be applied to but also to select which stream they want the zebra to be calculated from.

i.e. if using a Rec 709 monitoring LUT but recording a log or raw stream, the user may want the zebras to be calculated from the LUT values (to set skin tone exposure, for example), or they may want them set from the log (to check clipping, or indeed skin tones based on the log values) but still have them applied over the LUT-enabled monitoring image (i.e. using log exposure controls on a monitoring LUT).

This comment was removed by Bertl.
Bertl added a comment.Jan 2 2015, 1:11 AM

Adding a complete image path/feature cross-switch would be possible but very expensive FPGA resource wise, so I would prefer to select in advance which path can apply what features.

For example, if we dedicate one output to recording, then this output wouldn't have to handle any features or overlays.

In T233#3449, @Bertl wrote:

Adding a complete image path/feature cross-switch would be possible but very expensive FPGA resource wise, so I would prefer to select in advance which path can apply what features.

For example, if we dedicate one output to recording, then this output wouldn't have to handle any features or overlays.

Understood, I created a new task for this purpose: http://lab.apertus.org/T241

Not sure why my previous comment was removed. I was simply pointing out the ideal set of features. As a general rule I don't think anyone is likely to have a problem having one feed permanently clean for recording purposes.

The other feeds, for monitoring purposes, should ideally have the option of displaying various exposure and monitoring tools as overlays, e.g. peaking, zebra, false colour, LUT - a user-selectable 3D LUT in particular is probably the most important, as the others are available from relatively cheap monitors.

In terms of exposure tools, I definitely think the ideal situation would be to give the user the option of displaying exposure tools calculated from the recording values (log or raw) overlaid upon a monitoring LUT.

If you feel that there are not enough resources in the image processing FPGA to achieve this, then it might be worth considering adding an additional FPGA to the IO shield in order to carry out these calculations and add them to the image pipeline.

As a minimum, you would want to be able to have them calculated from the values you are outputting on the stream being overlaid, i.e. if a LUT is applied then the values represent that image; if the LUT is disabled, then they represent those values - and have it very easy to switch the LUT on and off.

However, it is worth bearing in mind that many of these features are available as standard in even quite cheap monitors (such as the Lilliput LCDs), and higher-end monitor-recorders such as the Atomos Shogun and Convergent Design Odyssey series implement them very well (including 3D LUTs) - so it may not be strictly necessary for the Beta to include them as features.

Probably the only mission critical feature that you'd need would be the ability to identify and display areas of black and white clip when recording Raw.

One thing I realised last night was that there is no particular need these days to have the 'zebras' displayed as an actual zebra pattern (the stripes). The only reason they were originally displayed as stripes was because the first video cameras only had black-and-white viewfinders, hence there was no other way to display this relevant information.

I imagine it would be easier to show Zebra areas as blocks of a single colour. Then all you'd need to do is identify any pixels whose luma values fall within a selected range and change them to the selected colour.

Additional 'zebra patterns' could then be added with different colours. So you have one set of values represented by, for example dark purple (deep shadow) and another set by red (white clip).

This is essentially all a false colour image is: a series of coloured 'zebra patterns' layered over the image.

One thing worth mentioning is that the Convergent Design Odyssey has user-assignable false colour, where you set the ranges of a series of colours (dark blue, purple, green, yellow, red) and you can turn each of these on and off. Notably, when you turn them all off you are left with a black-and-white image (pure luma values). I mention this as I suspect this is the least processor-intensive approach to creating a false colour image: convert the image stream to basic luma values, then change the colour of any pixel in a given range.
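The two-step approach just described (reduce to luma, then recolour pixels in a given range) might look like this (hypothetical helper; Rec. 709 weights and the fall-through to black-and-white are illustrative):

```python
# Sketch: convert each pixel to luma, then replace any pixel whose luma
# falls inside a user-set range with that range's colour. Pixels outside
# all ranges become greyscale, matching the Odyssey behaviour described.

def colourise(image, ranges):
    """image: 2D list of (r, g, b); ranges: list of ((lo, hi), colour)."""
    out = []
    for row in image:
        orow = []
        for (r, g, b) in row:
            y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luma
            colour = next((c for (lo, hi), c in ranges if lo <= y <= hi),
                          (y, y, y))  # default: black-and-white image
            orow.append(colour)
        out.append(orow)
    return out
```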

I imagine it would be easier to show Zebra areas as blocks of a single colour. Then all you'd need to do is identify any pixels whose luma values fall within a selected range and change them to the selected colour.
Additional 'zebra patterns' could then be added with different colours. So you have one set of values represented by, for example dark purple (deep shadow) and another set by red (white clip).

The zebra pattern still makes sense with color monitors as it's a pattern you will not likely shoot as actual image content. Using a single-color overlay instead will mean you never know if your image is actually overexposed or actually supposed to be red in that particular area (if using red to indicate overexposure).

By all means, if it can be managed, why not do it - there's a lot to be said for doing something that people are used to. I was simply pointing out that blocks of colour might be easier to implement.

I do like the idea of being able to change the colour of the stripes though, as this allows you to add multiple zebras to the image and have each one easily readable. Personally, I find it much harder to interpret multiple zebras when the stripes just go in different directions - at least I can't interpret them at a quick glance, which is what you need when shooting a doc or working on set.

As if you didn't all have enough to think about, but… I personally am still more interested in a B/W version of the Beta camera.

Given that this may still be a possibility?

Would the implementation of zebras be completely different? More complicated? Or simply a matter of a moving window triggering over, near-over, near-under and under displays?

For what it's worth, marching ants seem preferable to colours, which seem overcomplicated to me.

If a B/W version is possible would it still be possible for focus peaking to use colour as per many other cameras?

Having the sensor information in b/w doesn't preclude having a color zebra.

The zebra will be added after the sensor image is processed anyway, so it doesn't change anything.