We need to record metadata anyway, for example to keep information about exposure time, sensor register settings, etc.
If we decide to do it in a separate file/stream, then we need very precise timestamps (or frame numbers) to go with it.
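As a rough illustration (the field names here are invented, not an agreed-on format), a sidecar metadata stream could tag every record with the frame number plus a monotonic timestamp so it can always be matched back to the video:

```python
# Hypothetical sketch only - field names are illustrative, not a proposed spec.
import json, time

def metadata_record(frame_no, exposure_time_s, registers):
    """One sidecar record per frame, tagged with frame number and timestamp."""
    return json.dumps({
        "frame": frame_no,                     # frame number in the video stream
        "timestamp_ns": time.monotonic_ns(),   # precise monotonic clock
        "exposure_time_s": exposure_time_s,
        "sensor_registers": registers,         # e.g. {"PGA_GAIN": 1}
    })

print(metadata_record(42, 1 / 250, {"PGA_GAIN": 1}))
```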
Oct 9 2016
Well, accelerometer info for a camera is actually a great idea. But the problem is how it can be included in the metadata. Should it be stored in another format like .aclmif (Accelerometer Info), which would contain a very precise track of the accelerometer over time? You should include timecode info too, so you can parse the accelerometer data over time. And would the only program able to render .aclmif be Open Cine, with Premiere Pro adding support in 2021? Well, adding a new format is not a wrong idea, but it should not be in this section - it should be in the software section.
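Purely as a sketch of what such a timecoded accelerometer sidecar could look like (the ".aclmif" idea above is speculative, and so is this layout):

```python
# Hypothetical layout: one sample per line - timecode, frame number,
# accelerometer values (m/s^2), gyro values (rad/s).
def accel_sample_line(timecode, frame_no, accel_xyz, gyro_xyz):
    ax, ay, az = accel_xyz
    gx, gy, gz = gyro_xyz
    return f"{timecode},{frame_no},{ax:.4f},{ay:.4f},{az:.4f},{gx:.4f},{gy:.4f},{gz:.4f}"

print(accel_sample_line("00:00:01:12", 37, (0.01, -0.02, 9.81), (0.001, 0.0, -0.002)))
```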
Jun 8 2016
May 22 2016
So, we have been trying to do something like this for the HDMI2USB-misoc-firmware, you can see our instructions at https://github.com/timvideos/HDMI2USB-misoc-firmware/tree/master/scripts
May 21 2016
Actually (nitpicking here :) the IMU doesn't record the motion, it tracks it with several sensors.
May 20 2016
The IMU just records the motion of the camera (3D rotation, acceleration, etc.); what you do with that kind of information in post production is then up to the user/software.
So I don't understand exactly how the IMU data recording works in the Beta.
Can you take data from the IMU and use it directly in post production for better stabilization?
May 15 2016
There is another application: realtime tracking could be very useful for green screen previsualisation of a live human in a 3D environment, or the other way around with a real environment and a VFX character like the troll in Lord of the Rings (that was done with a motion capture system, but today it should be possible to do it with camera motion tracking).
For camera stabilisation it would be very cool if somebody made an extension card to drive the brushless motors directly.
Maybe information to easily find the optical center, and the offset between the IMU and the optical center, could be useful too.
May 13 2016
The only applications for doing this live in the Beta that come to mind would be sports/broadcast, and there dedicated mechanical stabilization systems already exist... So I agree this is mainly a post production thing, but I think that's what chooksprod meant by mentioning opencine anyway.
While it certainly works well in post processing, this probably isn't a good idea for real-time stabilization in the AXIOM Beta.
May 12 2016
Say the sensor is running at 250fps, so each frame records 1/250s of the light coming onto the sensor.
For 25fps video at 1/50s you would take the first 5 frames of the 250fps stream and average them to calculate the first image of the 25fps stream.
Then skip the next 5 frames (so the next group starts 1/25s after the first one) and average the following 5, and so on.
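A minimal sketch of that scheme, assuming the raw frames arrive as numpy arrays (nothing here is actual Beta firmware):

```python
import numpy as np

def average_250_to_25(frames, exposed=5, group=10):
    """250 fps in, 25 fps out: average the first 5 frames of every group of 10,
    which sums 5 x 1/250 s = 1/50 s of light, then skip the remaining 5."""
    frames = list(frames)
    out = []
    for start in range(0, len(frames) - group + 1, group):
        block = np.stack(frames[start:start + exposed]).astype(np.float32)
        out.append(block.mean(axis=0))
    return out
```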
May 8 2016
Great!
I found this one, probably you know it as well: http://docs.opencv.org/3.1.0/#gsc.tab=0
Jan 28 2016
In T212#9262, @chooksprod wrote:
Probably you know this project:
Oct 14 2015
We also want sdcc and gputils (latest version) for PIC-related code, and general tools like vim, gcc, python, etc. for various tasks.
Sep 22 2015
Nice idea for the two-step system.
For the network config, it can be done in two steps (same for other critical stuff):
RAML looks nice, but probably a bit too complex for the initial brainstorming phase. We could first define what "stuff" will be read / written / created / deleted from the camera, then write the API specs.
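For illustration only (the resource names are invented, not a proposal), the kind of inventory meant here could be as simple as:

```python
# What can be read / written / created / deleted - hypothetical resource names.
CAMERA_API_RESOURCES = {
    "/sensor/registers": ["GET", "PUT"],             # read/write register values
    "/sensor/gain":      ["GET", "PUT"],
    "/network/config":   ["GET", "PUT"],             # one of the "critical stuff" paths
    "/recordings":       ["GET", "POST", "DELETE"],  # list / start / remove clips
    "/luts/{name}":      ["GET", "PUT", "DELETE"],
}
```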
To help with the API spec/doc, a language exists for that: RAML (http://raml.org/). It's clear and simple to understand, so it could be good to use it - what do you think about that?
Great, what would be the next step?
My current take on this is that we should follow the Laravel Lumen methodology and first agree on the API documentation, then implement it.
Aug 23 2015
An alternative may be to implement and use a dedicated PHY to stream the output over RTP: https://tools.ietf.org/html/rfc4175
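For reference, here is a sketch (not tested against any implementation) of packing the RFC 4175 payload header for a single scan-line segment; the 12-byte RTP fixed header that precedes it is omitted:

```python
import struct

def rfc4175_payload_header(ext_seq, length, line_no, offset, field=0, cont=0):
    """Layout per RFC 4175:
       Extended Sequence Number (16) | Length (16)
       F (1) + Line No (15)          | C (1) + Offset (15)"""
    return struct.pack("!HHHH",
                       ext_seq & 0xFFFF,
                       length & 0xFFFF,                    # segment length in bytes
                       (field << 15) | (line_no & 0x7FFF),
                       (cont << 15) | (offset & 0x7FFF))

hdr = rfc4175_payload_header(ext_seq=0, length=7680, line_no=0, offset=0)
```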
Another thing to look at is this FPGA Dirac encoding implementation: http://opencores.com/websvn,listing?repname=dirac&path=%2Fdirac%2F#path_dirac_
Jul 31 2015
Jul 21 2015
Well, Apple can be a bit of a pain sometimes, but they are opening up ProRes more and more, so maybe it is an option. A few years back there was no official solution for writing ProRes on Windows, and now different applications are getting the go-ahead, for example.
Jul 6 2015
Should we consider colour characterization in this? If so, it would be worth considering a light box that enables us to push extremely wide primaries outward as far as the spectral locus permits.
Jun 24 2015
May 21 2015
Indeed, Laravel can be a good choice. In fact, it is based on many Symfony components, but it is maybe easier to work with than the full-stack Symfony framework.
I hope I will soon have time to help with the web interface.
This looks like a nice and lean alternative to Symfony: http://lumen.laravel.com/docs/introduction
May 18 2015
Having the sensor information in b/w doesn't preclude having color zebra.
As if you didn't all have enough to think about, but… I personally am still more interested in a B/W version of the Beta camera.
Mar 31 2015
I'm working in the VFX industry and I'm waiting for this feature with great excitement. I hope it will be used by the Blender tracker as a basis for 3D tracking (the optical tracking could then just be used for refinement), because optical tracking alone doesn't always work (during fast motion, for example). This could also be used to automatically detect elements that move in the shot and separate them from the camera motion.
Maybe realtime optical camera tracking could be done by moving the optical search zone of the tracking points to their estimated position based on the IMU measurements?
This could be a killer app...
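A very rough sketch of that idea, assuming OpenCV and an inter-frame rotation estimate from the IMU (the function names and the camera matrix K are assumptions, not an existing AXIOM or Blender API; border handling is omitted):

```python
import cv2
import numpy as np

def predict_point(pt, K, R_delta):
    """Project a 2D feature through the rotation the IMU measured between frames."""
    ray = np.linalg.inv(K) @ np.array([pt[0], pt[1], 1.0])  # back-project to a ray
    p2 = K @ (R_delta @ ray)                                # rotate, re-project
    return p2[:2] / p2[2]

def track_with_imu(prev_gray, next_gray, pt, K, R_delta, tmpl=15, win=40):
    """Template-match only inside a small window centred on the IMU prediction."""
    x, y = int(pt[0]), int(pt[1])
    template = prev_gray[y - tmpl:y + tmpl, x - tmpl:x + tmpl]
    px, py = predict_point(pt, K, R_delta).astype(int)
    search = next_gray[py - win:py + win, px - win:px + win]
    res = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (dx, dy) = cv2.minMaxLoc(res)
    return (px - win + dx + tmpl, py - win + dy + tmpl)     # matched centre position
```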
Mar 16 2015
Back when working on the Alpha I started writing my own devicetree, and I simply copied the relevant part from there (adjusting it to the decompiled devicetree, of course).
u-boot networking is still a big problem, especially for TFTP based boot.
QSPI works with the correct devicetree entry.
Mar 15 2015
What happens is that the GEM is initialized with the u-boot hard-coded MAC address, and changing the ethaddr later in uEnv.txt does indeed change the u-boot generated packets, but it doesn't update (or reinitialize) the GEM, so it still only receives packets for the hard-coded MAC.
It seems u-boot doesn't update the GEM registers.
Just sent you an email on the sources topic.
Mar 14 2015
If we remove the ethaddr option, will it be recovered by Linux when the PHY is set up?
btw, where can I find/get the kernel/u-boot sources you used?
u-boot seems to have a problem with modified MAC addresses.
Mar 11 2015
In T277#4062, @colinelves wrote:
Personally I think I'd find odd colours around the image distracting. Having the area outside black and white and/or partially shaded would be better for me.
Yep. For the camera operator, I agree. Strong frame lines and a dimmed image beyond would be my preference.
Feb 16 2015
grandioso! that will be my favourite feature :)
yes :)
Feb 12 2015
Updated boot.bin to kernel 3.14. XADC has been added. QSPI still not detected.
Done reading!
By the way @WedgeSama, welcome to the lab !
Very interesting, thanks.
For a first step, I'd like to work with a minimalistic approach. Symfony seems a bit overkill for what we want to achieve. I'd start with the smallest working prototype, then build on top of that.
I put some information in the pad, what do you think about it?
Feb 11 2015
New boot.bin for testing.
Feb 10 2015
Yes. Expect an update of the image and uboot this evening.
Any progress here?
Can we expect something in the near future?
Feb 8 2015
Or just having good solid frame lines.
Personally I think I'd find odd colours around the image distracting. Having the area outside black and white and/or partially shaded would be better for me.
Feb 7 2015
Feb 2 2015
EI shifts the window.
Feb 1 2015
and the formula from Alex: y = (1 - 1/(1+a*x)) / (1 - 1/(1+a)), x is input data in [0..1], a is exposure_gain - 1 (e.g. for 3 EV, a = 2^3-1 = 7)
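Sketched as code (just a transcription of the formula above, nothing camera-specific):

```python
import numpy as np

def exposure_curve(x, ev):
    """y = (1 - 1/(1 + a*x)) / (1 - 1/(1 + a)), with a = 2**ev - 1."""
    a = 2.0 ** ev - 1.0
    if a == 0:
        return x                                  # EV 0 is the identity
    return (1.0 - 1.0 / (1.0 + a * x)) / (1.0 - 1.0 / (1.0 + a))

print(exposure_curve(np.linspace(0.0, 1.0, 5), 3))  # 0 and 1 stay fixed, midtones lift
```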
Alex mentioned this as good reference from ML: http://www.magiclantern.fm/forum/index.php?topic=9597.msg99327#msg99327
The ALEXA LUT/EI method shifts the middle values while blacks/whites stay untouched, in contrast to the exposure compensation method, which affects the entire range with a constant factor.
That sounds like a good idea - although I'm not sure how this set up would be much different - except as a more basic form of the same thing?! Could you perhaps explain the exposure compensation idea in more detail?
Technically speaking as I understand it those are two different things.
Jan 31 2015
Actually I reread the title of this task. Exposure compensation is a pretty good way of describing what I'm talking about (duh! I'm pretty stupid sometimes) - so + or - a stop would be one way of expressing it, although some might prefer to see it as a change in the 'ISO' (not that digital sensors really have an ISO). But ultimately I think we're talking about the same thing, no?
In T268#3955, @sebastian wrote:
So on an Alexa, if you close the iris by one stop and increase the ISO by one stop, you will end up with a completely different looking image than before?
Jan 29 2015
The variable ISO settings are not to do with digital or analogue gain - these both remain the same (to make sure DR is maximised at all times); rather, they are to do with tone mapping.
Of course, being able to adjust the ISO using analogue gain settings would also be useful, bearing in mind this would reduce the dynamic range of the chip, especially for low light situations and also for Raw recording. The question then becomes how to communicate to the user what's happening - is it a tone remapping or the application of analogue gain or both?
Hi sebastian - take a look at the link I showed you. In the Arri, the variable ISO settings are not to do with digital or analogue gain - these both remain the same (to make sure DR is maximised at all times); rather, they are to do with tone mapping.
ISO settings correspond to analog on-sensor gain. I know some manufacturers claim ISO to be a post processing digital gain only that is a non destructive operation, but that's not the case with the CMV12000.
Jan 28 2015
A note about astronomy imaging (for space telescopes):
The challenge there is capturing very faint objects without having the image "damaged" by highlights. Very long exposure times are used.
Caveat: on a CCD sensor, you get a white line when a pixel is oversaturated.
Caveat: on a CMOS sensor, a bright spot "bleeds" into neighbouring pixels.
It's probably more 'standard' to express this as a variable ISO and have it realised through adjusting the gamma curve of the preview LUT. In this scenario the ISO represents the (shifted) mid grey point. It might also help to have a little +/- figure indicating how many stops above and below this mid grey point you have.
Jan 26 2015
I imagine it would depend a lot on how the averaging was done. Anyway - can you post some examples of the sort of artefacts you're talking about? Only I had a look and all I could find were examples of stills built from multiple video frames - and these had quite nice motion blur.
All of these dynamic range hacks fall to the dreaded issue of motion; what can work theoretically for still images, works appallingly for motion based work. Hence the vast majority of these cheats are relegated to the bin as marketing hacks, with the exception being an exceptionally constrained and limited shooting context.
Jan 25 2015
Also there is a marked difference between 10bit and 12bit - which is worth bearing in mind.
I believe Alex was talking about the mk1 sensor. The latest version is faster I believe.
The sensor does not do 300fps at maximum resolution.
I doubt you'd be able to do any processing on 4K (above 30fps) or high speed data would you? It'd be straight raw out.
Jan 24 2015
I did find this, however: http://www.cambridgeincolour.com/tutorials/image-averaging-noise.htm
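A quick numeric check of the averaging argument from that tutorial (synthetic data, just to show the sqrt(N) behaviour):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((512, 512), 0.5)
frames = clean + rng.normal(0.0, 0.05, size=(16, 512, 512))  # 16 noisy "exposures"

print(frames[0].std())            # ~0.05   (single-frame noise)
print(frames.mean(axis=0).std())  # ~0.0125 = 0.05 / sqrt(16)
```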
Can you expand upon that as I'm not too sure what you're talking about and Google doesn't help.
@sebastian mostly it means that we probably won't have the resources to run a 3D LUT on full data, especially not on 150+ FPS 4K data in realtime, and it isn't necessary either, as this is better done in post.