- User Since
- Dec 9 2014, 1:09 PM (245 w, 3 d)
Apr 2 2017
Some form of highlight recovery would still be useful in this instance though - albeit not one based on white balance.
Just to follow up from Sebastian - yeah, it wouldn't be necessary to do it to Raw data - as it is better handled in post.
As I understand it, the approach generally relies upon approximations from adjacent, unclipped pixels.
You're not quite getting it. It would probably be easier to understand by looking at an example of it in action. If you had a raw data sample in DNG format of a clipped source (such as a domestic light near a white wall), you could load it into Resolve and try turning highlight recovery on and off - then you'll see the detail appear and disappear in pixels near the point of clipping (it's often more dramatic with off-white lights).
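To illustrate the principle only - this is a toy, not Resolve's actual algorithm; the clip threshold and the use of global (rather than neighbourhood) channel ratios are my own simplifications:

```python
import numpy as np

CLIP = 0.95  # hypothetical normalised clip threshold

def recover_highlights(rgb):
    """Crude highlight recovery: where one channel has clipped,
    estimate it from an unclipped channel using the mean channel
    ratios taken from unclipped pixels elsewhere in the image."""
    out = rgb.copy()
    ok = ~(rgb >= CLIP).any(axis=-1)      # pixels with no clipped channel
    ratios = rgb[ok].mean(axis=0)         # global stand-in for a local search
    for c in range(3):
        ref = (c + 1) % 3                 # borrow from the next channel over
        mask = (rgb[..., c] >= CLIP) & (rgb[..., ref] < CLIP)
        out[mask, c] = rgb[mask, ref] * (ratios[c] / ratios[ref])
    return out
```

In a real implementation the ratios would come from adjacent unclipped pixels, which is why detail "appears" right at the edge of clipped highlights when you toggle the feature.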
Feb 28 2015
The Arri mount system seems to just be a plate held in place by four locking screws (like the Red mount) with the mount in the middle so I don't see why the Axiom system couldn't position the locking screws in the same place.
Feb 11 2015
It's worth noting that the fps1000 camera has just been given an upgrade to use the Cmosis CMV12000 sensor being used in the Beta: http://www.eoshd.com/2014/12/kickstarter-fps1000-high-speed-camera-upgraded-220fps-4k-super-35mm-2750/
Feb 8 2015
Ah, okay. I've had a quick look at the Wooden Camera website - it's not entirely clear what is going on there.
Red mount is a good idea - but you should assess any potential licensing issues.
Or just having good solid frame lines.
Personally I think I'd find odd colours around the image distracting. Having the area outside black and white and/or partially shaded would be better for me.
Feb 5 2015
MOX is an open-source format, and I know Brendan knows about the Axiom; I think Sebastian et al. have been in contact with him about maybe using it. As I understand it, however, MOX is essentially a flexible wrapper - it could be used as a production format, you'd just need to include an (open source) compression codec in the wrapper.
Feb 1 2015
That sounds like a good idea - although I'm not sure how this set up would be much different - except as a more basic form of the same thing?! Could you perhaps explain the exposure compensation idea in more detail?
Jan 31 2015
Actually I reread the title of this task. Exposure compensation is a pretty good way of describing what I'm talking about (duh! I'm pretty stupid sometimes) - so + or - a stop would be one way of expressing it, although some might prefer to see it as a change in the 'ISO' (not that digital sensors really have an ISO). But ultimately I think we're talking about the same thing, no?
Jan 29 2015
Of course, being able to adjust the ISO using analogue gain settings would also be useful, bearing in mind this would reduce the dynamic range of the chip, especially for low light situations and also for Raw recording. The question then becomes how to communicate to the user what's happening - is it a tone remapping or the application of analogue gain or both?
Hi Sebastian - take a look at the link I showed you. In the Arri, the variable ISO settings are not to do with digital or analogue gain - both remain the same (to make sure DR is maximised at all times); rather, they are to do with tone mapping.
Jan 28 2015
It's probably more 'standard' to express this as a variable ISO and have it realised through adjusting the gamma curve of the preview LUT. In this scenario the ISO represents the (shifted) mid grey point. It might also help to have a little +/- figure indicating how many stops above and below this mid grey point you have.
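A rough Python sketch of what I mean - purely illustrative; the 1/2.4 power curve and the LUT size are placeholder choices, not the camera's actual pipeline:

```python
import numpy as np

def preview_lut(stops_offset, size=4096, gamma=1 / 2.4):
    """Hypothetical 1D preview LUT: pre-scale linear input by
    2**stops_offset, then apply a plain power curve. A +1 'stop'
    of ISO thus brightens mid grey in the monitoring path only -
    sensor gain and the recorded data stay untouched."""
    x = np.linspace(0.0, 1.0, size)
    scaled = np.clip(x * 2.0 ** stops_offset, 0.0, 1.0)
    return scaled ** gamma
```

The +/- figure suggested above would then just report `stops_offset` alongside how much headroom remains before the scaled signal clips.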
Jan 26 2015
I imagine it would depend a lot on how the averaging was done. Anyway - can you post some examples of the sort of artefacts you're talking about? Only I had a look and all I could find were examples of stills built from multiple video frames - and these had quite nice motion blur.
Jan 25 2015
Also there is a marked difference between 10bit and 12bit - which is worth bearing in mind.
I believe Alex was talking about the mk1 sensor. The latest version is faster I believe.
I doubt you'd be able to do any processing on 4K (above 30fps) or high-speed data, would you? It'd be straight raw out.
Jan 24 2015
I did find this, however: http://www.cambridgeincolour.com/tutorials/image-averaging-noise.htm
Can you expand upon that as I'm not too sure what you're talking about and Google doesn't help.
I believe frame averaging doesn't have any motion artefact problems - it's effectively the same as taking one frame of 1/50th second exposure - but temporal noise is effectively lost in the averaging. Fixed pattern noise remains (although a black balance can deal with a lot of that). The main drawback is a significantly reduced ISO.
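A quick simulation backs this up - assuming simple Gaussian noise models (my numbers, not measured sensor data), averaging N frames cuts temporal noise by roughly the square root of N while fixed-pattern noise survives untouched:

```python
import numpy as np

# Toy model: N frames of the same scene, each carrying the same
# fixed-pattern noise (FPN) plus independent temporal noise.
# All sigma values are invented for illustration.
rng = np.random.default_rng(0)
N = 16
signal = 0.5
fpn = rng.normal(0.0, 0.01, size=(64, 64))          # identical in every frame
temporal = rng.normal(0.0, 0.05, size=(N, 64, 64))  # fresh each frame
frames = signal + fpn + temporal

avg = frames.mean(axis=0)
# Temporal noise in `avg` should now be ~0.05 / sqrt(16) = 0.0125,
# while the FPN term is completely unaffected by the averaging.
```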
Jan 23 2015
I think it's probably not worth it, no? Maybe just work on better processing ;-)
Response from Cmosis:
Jan 22 2015
I like the idea of the Beta shipping in some sort of foam that can be reused in a case. You could, perhaps, offer backers the option of paying a bit extra to get a solid case rather than a cardboard box.
Jan 20 2015
Not exactly true regarding the XLRs - even running dual-system sound you may want to use the onboard recording for a scratch track, via either an onboard mic or a radio mic, to help with syncing in post. It's also useful if the camera cannot accept timecode in, as you can use an audio track to record LTC timecode.
Jan 19 2015
I'm willing to give it a go. Who do I contact?
Jan 18 2015
Ah, yes. I suppose you would!
Okay, so a 16mm crop area would be simple enough to implement then. It might be worth using line skipping as well, since this would increase the max frame rate at the same time.
Jan 17 2015
Okay, cool - although bear in mind I'm a cameraman, not a sound recordist - they may have different opinions. Having said that, one of the great advantages of the Axiom is that it's open source - if people hate the noisy buttons they can go and make some silent ones...
I'm impressed you're thinking about the noise of the buttons - I doubt a single camera manufacturer has ever thought about that!
Jan 11 2015
Also: a detailed update on the Gamma (at some point) would be appreciated. I heard Sebastian on Ogy's podcast say that the timeline for delivery is quite strict due to the European funding - so it would be good to find out what the development roadmap for it is. I'd also be really interested to find out who your development partners are and what aspects of the development they are assisting you with.
Jan 10 2015
Although Alex at Magic Lantern seemed to indicate that the black reference columns are too small to do much to reduce the noise: http://www.magiclantern.fm/forum/index.php?topic=11787.0
Here's the bit I was thinking about:
Jan 8 2015
Yes, I'm not sure how it works. There are some pixels on the Cmosis chip set aside as a black reference, I believe, but I'd need to look through the datasheet to find them.
Jan 7 2015
I think both Cmosis and Alex from Magic Lantern should be able to help with this.
Some info on black shading in Magic Lantern: http://www.magiclantern.fm/forum/index.php?topic=9861.0
Jan 6 2015
The chip has a programmable window mode, which means the chip will only read the centre 1080 lines - but I think they are spread across the full width of 4096 pixels. I'm not sure if you can specify a window with a shorter line length, but perhaps Cmosis could let you know. If you can get the chip to read out only that window, it would make processing a lot easier. Or perhaps it's simple enough to disregard the first and last 1,088 pixels per line. One added advantage would be a higher max frame rate, à la Red.
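To put a rough number on the frame-rate advantage: readout time scales with the number of lines read, so assuming a full-frame rate of about 300fps in 10-bit mode (my assumption - worth checking against the CMV12000 datasheet), a 1080-line window gets you:

```python
# Rough frame-rate gain from a centre-1080-line window on the CMV12000.
# The 300fps full-frame figure is an assumption, not a verified spec.
FULL_LINES = 3072
FULL_FPS = 300
window_lines = 1080
window_fps = FULL_FPS * FULL_LINES / window_lines
print(round(window_fps))  # roughly 850fps, ignoring per-frame overhead
```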
Jan 5 2015
90 degree connectors would help, as would rotating the IO shield so the ports are at the top rather than the side.
Highlight recovery is only of use when recording debayered (non-raw) images, since for raw recordings it can be carried out in post. The process itself, however, needs to be carried out on the raw data stream.
Jan 4 2015
I'd put this into the serious consideration category - the dynamic range of the chip is already somewhat restricted - so any in camera processing that can increase dynamic range at the top end or reduce noise in the shadow areas would definitely be welcome.
I don't see why not - you could simply disable one of the HDMI 1.0 ports when running the HDMI 1.4 port 'hot' (as it were) - i.e. if it is used for raw, 4K or high-speed output. If it's just used for HD, the bandwidth would be the same as if it were HDMI 1.0 - so you'd have the extra bandwidth when needed and the extra output when you don't - surely it's the most flexible option?
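A back-of-envelope check of why plain HD fits in an HDMI 1.0-class link while 4K or high-speed HD needs the 1.4b rates - blanking intervals ignored and 24 bits per pixel assumed, so these are illustrative figures only:

```python
# Uncompressed video payload rates (24-bit RGB, no blanking).
def gbit_per_s(w, h, fps, bits_per_pixel=24):
    return w * h * fps * bits_per_pixel / 1e9

hd60 = gbit_per_s(1920, 1080, 60)    # ~3.0 Gbit/s - HDMI 1.0 territory
uhd30 = gbit_per_s(3840, 2160, 30)   # ~6.0 Gbit/s - needs the 1.4b link
hd120 = gbit_per_s(1920, 1080, 120)  # same payload as UHD30: ~6.0 Gbit/s
```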
Jan 3 2015
It might be easier to produce just one shield that has one 1.4b HDMI (for high speed or 4K) and two 1.0 ports for monitoring.
One more thing: am I right in thinking the Beta will have no LCD, status lights or buttons on it? If so, how is it switched on and off? Is it simply a question of pulling the power cable or removing the battery (and is this likely to damage the unit at all)?
Jan 2 2015
Alternatively it might be better to save the high speed and Raw modes for SDI shields as these seem to allow much higher data rates even with just 3G SDI (let alone 6G or 12G).
Those sound plausible to me. However, I do want to throw one thing out there: the Atomos Shogun has an HDMI 1.4b board in it. According to the specs this allows it to accept 4K HDMI signals and HD signals at up to 120fps.
By all means, if it can be managed why not do it - there's a lot to be said for doing something that people are used to. I was simply pointing out that blocks of colour might be easier to implement.
One thing I realised last night was that there is no particular need these days to have the 'zebras' displayed as an actual zebra pattern (the stripes). The only reason they were originally displayed as stripes was because the first video cameras only had black and white viewfinders hence there was no other way to display this relevant information.
Not sure why my previous comment was removed. I was simply pointing out the ideal set of features. As a general rule I don't think anyone is likely to have a problem having one feed permanently clean for recording purposes.
I think the first thing to consider is that there are multiple image pathways possible on the camera. One output may be a log or even raw image stream, while another may show a monitoring LUT (e.g. log to Rec709) - so the user needs to be able to select not only which stream they want the zebra applied to, but also which stream they want it calculated from.
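In code terms the zebra itself is trivial - the interesting design decision is which buffer you hand it, which is exactly the stream-selection question above. A minimal sketch (names and the default threshold are mine):

```python
import numpy as np

def zebra_mask(frame, level=0.75):
    """Mark pixels at or above `level` in a frame normalised to 0-1.
    Which stream `frame` comes from - raw, log, or post-LUT - is
    the selection the UI would need to expose."""
    return frame >= level
```

Rendering the mask as stripes versus a solid colour is then a purely cosmetic choice, separate from where the mask is computed.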
Jan 1 2015
One other thought: It would be good to get an update on the expected cost of the Beta to backers - some of us need to start saving now to afford it!
Dec 30 2014
I find the 'sharpening' approach less helpful than the coloured edges. The former often leaves me feeling a little uncertain, whereas the coloured edges leave you in no doubt.
Yes and no. One traditional use of zebra patterns was to set correct skin-tone exposure levels - roughly 67 IRE in Rec 709 colour space. If this capability were lost (with the zebras becoming channel-specific only), I think many people would be more than a little annoyed, as they rely on it for correct exposure.
Dec 28 2014
No, they're not - at least not at the moment (I wouldn't put it past Black Magic to release the firmware of the Black Magic cameras). But they are both incredibly cheap - and I doubt there is much that can be done to the Beta to make it cheaper, except what is already on offer (i.e. a cheaper sensor).
I'd like to see, in the future:
Obviously no compression is the easiest to manage in camera in terms of processing required (i.e. none), but further down the image pipeline the lack of compression becomes a problem.
The main problem with this suggestion is that there are already two very good 'low cost' cameras on the market:
Dec 25 2014
Could be useful - but equally tricky to programme, I would have thought, since the camera has to work out where that is. I suppose, once you have peaking working okay (i.e. highlighting of high-contrast pixels), you could tell it to work out a general centre for those areas.
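Working out "a general centre" could be as simple as taking the centroid of the peaking mask - a sketch of the idea, with invented names:

```python
import numpy as np

def focus_centre(peaking_mask):
    """Centroid of the high-contrast (peaking) pixels - a rough
    'where the focus is' point a UI could jump to. Returns None
    when nothing is flagged as in focus."""
    ys, xs = np.nonzero(peaking_mask)
    if len(xs) == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))
```

A centroid can mislead when two separate areas are in focus (it lands between them), so a real implementation might cluster the mask first.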
Dec 17 2014
A 512x288 image is insufficient for ensuring correct focus, especially at 4K - in my opinion. I'd also say that line skipping or scaling seems like a lot of extra processor work to me. I was thinking that having a simple black and white image would be the easiest solution: simple conversion of the raw pixel values straight to luma data.
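A sketch of the sort of cheap conversion I mean - the 2x2 quad averaging is my own addition to cancel the Bayer checkerboard; you could equally pass the raw samples straight through:

```python
import numpy as np

def raw_to_luma(bayer):
    """Cheap focus-assist 'luma': average each 2x2 Bayer quad
    (R, G, G, B) into one grey value. No debayering, no colour
    matrix - the checkerboard pattern cancels and edge detail
    from the raw samples is kept at half resolution."""
    h, w = bayer.shape
    quads = bayer[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return quads.mean(axis=(1, 3))
```

Even at half resolution, a 4K raw frame yields a 2048x1080 focus image - far more useful than 512x288.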
Dec 16 2014
Fair enough. I was just raising the question as to whether or not our own form of lossless compression could be implemented - even if it requires significant post-processing to unpack the data afterwards.
Dec 12 2014
Dec 10 2014
From a practical standpoint it's rather irrelevant that the compression is lossy. Obviously the ideal would be lossless compression, but what is needed is some form of compression that is achievable in real time (no mean feat when dealing with 4K footage, especially at rates above 30fps) and makes file sizes manageable. 4K raw or uncompressed (which is actually worse than raw) is a horrendous amount of data to deal with.
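To put a number on "horrendous" - assuming 12-bit raw Bayer at 4096x2160 and 30fps (illustrative figures, not camera specs):

```python
# Uncompressed data rate for 12-bit raw Bayer, 4096 x 2160 at 30fps.
raw_mb_per_s = 4096 * 2160 * 12 * 30 / 8 / 1e6
print(raw_mb_per_s)  # ~398 MB/s, i.e. ~1.4 TB per hour, before compression
# Uncompressed debayered 12-bit RGB is three times that - which is
# why 'uncompressed' is actually worse than raw.
```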