4K RAW PC recording
Open, Needs Triage · Public

Description

In the crowdfunding campaign, the "AXIOM Beta 4K RAW PC recording option" got funded as a stretch goal.

There are several concepts but there is no definitive technical solution for that yet.

Ideas:

  • use the 3 HDMI ports from the AXIOM and connect them to a 3-port HDMI interface card, then combine the frames in software to get 4K at a high frame rate
  • SDI

Bertl: SDI is an option as well, but requires additional software and hardware on the camera side

  • Ethernet

Bertl: gigabit ethernet is way too slow for transferring 4K RAW in real time

davidak created this task. Nov 4 2014, 9:55 PM
davidak updated the task description. (Show Details)
davidak raised the priority of this task to Needs Triage.
davidak added a project: Brainstorming.
davidak added a subscriber: davidak.
PhilC added a subscriber: PhilC. Nov 4 2014, 10:42 PM

This should at some point become a project on apertus.org, the wiki, and the lab.

As soon as there is enough material, I'll create the related pages/projects.

Feel free to start the work on the wiki.

aombk added a subscriber: aombk. Nov 6 2014, 12:01 AM

I think I should mention this product: http://www.dexteralabs.com/inogeni/

Bertl added a subscriber: Bertl. Nov 8 2014, 7:34 AM

Yes, we stumbled upon the Inogeni converter several months ago; we even planned to get one back then, but it never happened.

It is still quite expensive for a single channel capture device which is not (yet) able to do deep color HDMI.

That said, I see that they offer customization of their product, so maybe at some point it might be worth contacting them in this regard.

Fazek added a subscriber: Fazek. Edited Nov 9 2014, 6:59 AM

I think there is no easy solution for this yet, but perhaps within a few years there will be a fast and widely used interface. Real-time full 4K is probably not that important; it is hard to process on a PC anyway. For example, in stop-motion animation you only need a good live preview until you capture the frame you want.

So I think a real-time preview (i.e. a 2K output) and a fast (but not real-time) full-frame capture could be a first goal. You can do that with a slower HDMI interface, like a BMD Intensity.

Or, maybe better, you could make a fully configurable multiplexer: the input is a stream in some picture format, the outputs are 1, 2 or 3 HDMI ports, and you can set the encapsulation format for each (1080p @ 24/25/30/50/60 fps...).
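To make the idea a bit more concrete, here is a rough sketch of what such a multiplexer configuration could look like in software; all names and format strings are purely illustrative and do not correspond to any existing AXIOM interface.

```python
# Hypothetical sketch of a configurable output multiplexer, as described above.
# None of these names exist in the AXIOM code base; they only illustrate the idea.
from dataclasses import dataclass, field

@dataclass
class OutputPort:
    kind: str = "HDMI"          # physical output; 1-3 HDMI ports could be used
    mode: str = "1080p60"       # encapsulation format for this port

@dataclass
class MuxConfig:
    input_format: str = "4K-RAW-12bit"           # picture format of the input stream
    outputs: list = field(default_factory=list)  # 1, 2 or 3 output ports

# Example: split the 4K input across two HDMI ports running 1080p60 each.
cfg = MuxConfig(outputs=[OutputPort(mode="1080p60"), OutputPort(mode="1080p60")])
print(cfg)
```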

sebastian updated the task description. (Show Details) Nov 9 2014, 4:53 PM
sebastian added a subscriber: sebastian. Edited Nov 9 2014, 9:24 PM

We will develop a custom PCIe card and, if that doesn't work, a PCI card, to take the input signals from the Beta. No details before we have actually tested anything.

Bertl added a comment. Nov 20 2014, 6:27 PM

Potential cooperation with inviso, who are considering opening up their frame grabber design and creating an AXIOM Beta SDI shield.

http://irc.apertus.org/index.php?day=20&month=11&year=2014

My preference would definitely be for SDI - HDMI connectors are so unreliable and you need a solid connection if you're going to be dealing with that much data.

The one thing that is worth bearing in mind is that we're talking about a huge amount of data. I've been shooting 4K raw with my FS700, and you're looking at 1/2 TB of data shot in 8 minutes even just at 60 fps. 120 fps is of course double that.

This is quite apart from the architecture needed to record it. At the moment my Odyssey 7Q can handle 60p 4K Raw by writing alternate frames to two high-speed SSDs, so 120 fps is likely to need either 2x Odysseys working in tandem (which would not necessarily be impossible to get working with Convergent Design's help) or possibly their as-of-yet unreleased Athena recorder. I don't think you'd even be able to record it to a PC without building an expensive SSD RAID system.

The issue then is: why use an AXIOM at all, when a Red Epic can provide the same, or even more (150 fps @ 4K), but with a much lighter data load thanks to its internal wavelet compression?

Which is not to say I don't think we should be aiming for 4K capture - but rather that it is worth thinking about how it can be made more practical to use. Is there room in the FPGA to carry out some form of compression before it hits the shield? Is there a possibility for additional processing in the shield itself to further compress the stream?

Would a high-speed buffer incorporated into the shield (to reduce the cost of the basic Beta model) to allow fixed-period high-speed recording (potentially at much higher frame rates) be a better option? I.e. have an SDI/high-speed IO shield as an optional extra.

I know I keep banging the same drum, but coming from a production background I know how the camera could potentially be used in the field - and have a good sense as to why people might choose to purchase an AXIOM over any of the other camera choices out there. Affordable 4K acquisition is definitely of interest and could be achieved by working with Convergent Design or Atomos to get the Odyssey or Ronin set up to record a 4K stream from the camera (bearing in mind affordable compressed 4K recording is already offered by Blackmagic Design) - something worth exploring anyway, since the Beta will need to work with some form of monitor/recorder however it is realised.

However I would point out that affordable, practical, high speed is still up for grabs - it's something I would definitely see myself using a lot and something that would easily push the Beta into the 'must have camera' territory...

@colinelves: The wavelet compression is lossy and slow to decompress on the computer side. I think even Red is using it as a compromise.

Maybe a stupid question, but isn't it possible to connect the computer's PCIe bus directly (with a short cable) to the camera? It doesn't sound professional, but it could be an easy and cheap solution for testing, without additional connectors, cards and conversions... And later you could use this to connect a converter module that provides SDI or HDMI signals.

From a practical standpoint it's rather irrelevant that the compression is lossy. Obviously the ideal would be lossless compression, but what is needed is some form of compression that is achievable in real time (no mean feat when dealing with 4K footage, especially at rates above 30 fps) and makes file sizes manageable. 4K Raw or uncompressed (which is actually worse than Raw) is a horrendous amount of data to deal with.

Likewise it is irrelevant that it is hard to decompress on the computer side - the important thing is to capture it on the day. Once captured (in some form or other) it can be transcoded to a more usable format for editing, and transcoding can be slow, as it doesn't need to happen until after the shoot. It can be done overnight, for example.

surami added a subscriber: surami. Edited Dec 11 2014, 11:01 AM

RAW recording means RAW, so there isn't any compression, debayering, etc.; we just take the RAW data directly from the sensor and save it. It is very good from the point of view of data amount, because each pixel stores only a single sample rather than full colour information. Let's count:

4K (UHD) 12-bit RAW frame means:
3840 x 2160 x 12 bit / 8 / 1,024,000 = 12.15 MB/frame

4K (UHD) 8-bit debayered uncompressed RGB frame means:
3840 x 2160 x 8 bit x 3 (color, RGB) / 8 / 1,024,000 = 24.30 MB/frame

4K (UHD) 10-bit debayered uncompressed RGB frame means:
3840 x 2160 x 10 bit x 3 (color, RGB) / 8 / 1,024,000 = 30.38 MB/frame

Please correct me if this calculation isn't right.
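For reference, a small Python sketch that reproduces the figures above; it follows the same 1,024,000-bytes-per-MB divisor used in the calculation, which is just a convention (with 10^6 or 2^20 bytes per MB the numbers shift slightly):

```python
# Reproduce the per-frame size figures above (assumes the same 1,024,000-bytes-per-MB
# convention used in the calculation; other conventions change the numbers slightly).
MB = 1000 * 1024

def frame_mb(width, height, bits_per_sample, samples_per_pixel=1):
    """Size of one uncompressed frame in MB."""
    return width * height * bits_per_sample * samples_per_pixel / 8 / MB

print(frame_mb(3840, 2160, 12))     # 12.15 MB - 12-bit RAW (1 sample per pixel)
print(frame_mb(3840, 2160, 8, 3))   # 24.30 MB - 8-bit debayered RGB
print(frame_mb(3840, 2160, 10, 3))  # 30.38 MB - 10-bit debayered RGB
```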

+1 for just getting the RAW data out somehow, because we can achieve more fps at the same data bandwidth. Of course in post it has to be debayered to view the footage; a virtual file system that presents CinemaDNG frames would be the best. Take a look at some solutions by the Magic Lantern team:

I think the Beta in RAW recording mode should work this way:

  • 1 HDMI port for live monitoring with a small TFT screen
  • 1 or 2 ports (I don't know what kind) for 12bit RAW data transfer

I already started to think on a portable Mini-ITX multifunctional PC:
http://lab.apertus.org/T217

Hi Surami,

I hear what you're saying, but the thing is: while most people may understand 'Raw' to mean 'uncompressed', in most practical cases it isn't; it generally just means 'undebayered'. So Red compresses all their Raw streams (3:1 compression is the minimum) in Red Raw. Likewise Sony Raw (on the F55, F5 and F65) is compressed by about 2.5x. As far as I know Blackmagic and Kine Cinema don't compress their Raw formats - but in all honesty they should, because although a Raw stream is a lighter data load than uncompressed (which is why I said uncompressed "is actually worse than Raw"), it is still a hugely impractical amount of data to deal with in the real world.

12.15 MB/frame = 291.6 MB/sec (at 24 fps) = 17,496 MB (17.5 GB)/min = 1,049,760 MB (1 TB!) an hour!

Trust me, I've shot a lot of 4K Raw in the last year, so I know that is a lot of data to be dealing with. E.g. with my Odyssey recorder I'm having to change SSDs every 15 mins - and each SSD can take 30-40 mins to download (because a fast RAID or SSD backup is too expensive) - and I'm usually shooting about 2 TB of data a day, which is a minimum of £160 in external hard drives for double backups. So 4K Raw, uncompressed, sounds like a great idea in principle, but in practice it is expensive and annoying to deal with.
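As a back-of-the-envelope illustration of why the media fills so fast, here is a sketch based on the per-frame figure from the calculation above; the 512 GB capacity and the listed frame rates are example values, not any particular product:

```python
# Rough recording-time estimate for uncompressed 4K 12-bit RAW on a single SSD.
# 512 GB capacity and the listed frame rates are example values, not a spec.
FRAME_MB = 12.15            # per-frame size from the calculation above
SSD_GB = 512

for fps in (24, 60, 120):
    rate_mb_s = FRAME_MB * fps
    minutes = SSD_GB * 1000 / rate_mb_s / 60
    print(f"{fps:3d} fps: {rate_mb_s:7.1f} MB/s sustained, ~{minutes:4.1f} min per {SSD_GB} GB SSD")
```

At 24 fps that is roughly half an hour per 512 GB; at 60 fps it drops to just over ten minutes, which matches the SSD-swapping experience described above.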

Bertl added a comment. Dec 16 2014, 4:11 PM

As usual, the question of using lossy or lossless compression will split the community into (at least) two groups :)

We decided to go the lossless route with the AXIOM for several reasons, but as the camera is completely open, the group favoring lossy compression to reduce the amount of stored data is free to go that route as well; it just needs somebody to implement it.

Best,
Herbert

colinelves added a comment. Edited Dec 16 2014, 10:55 PM

Fair enough. I was just raising the question as to whether or not our own form of lossless compression could be implemented - even if it requires significant post-processing to unpack the data afterwards.

Partly it's about keeping the amount of data manageable, partly it's about keeping the data rate manageable. Given current storage technology, the raw data will have to be compressed in some way in real time for there to be any hope of utilising the chip's full potential in a way that you can actually capture. 300 fps @ 4K is just too much data to be recorded without a very expensive SSD RAID array...

Bertl added a comment. Dec 18 2014, 8:31 PM

Compression doesn't really help the lossless path, as you have to plan for the worst case, which is no compression at all.
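A quick way to illustrate this: a general-purpose lossless coder gives no guaranteed gain on noise-like input, so the recording path has to be sized for the full raw rate regardless. The zlib example below is only an illustration of that worst case, not the camera's data path.

```python
# Illustration: lossless compression gives no guaranteed gain on worst-case
# (noise-like) input, so the recording path must be sized for the raw rate.
# zlib stands in for any lossless coder here; it is not the camera pipeline.
import os
import zlib

frame = os.urandom(1 << 20)                 # 1 MiB of noise-like "sensor" data
compressed = zlib.compress(frame, level=9)
print(len(compressed) / len(frame))         # ratio ~1.0, or even slightly above
```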

PRN added a subscriber: PRN. Dec 19 2014, 7:49 AM

The no-compression solution breaks my heart as well. In a recent shoot we had to shoot with a Blackmagic 4K camera (despite my initial choice being a Red Scarlet/One-MX, the producers went for the BM 4K due to cheap rental prices). The BM 4K data is a nightmare to manage; the post-production costs multiplied. This made us decide to go with my initial choice, a Red Scarlet, after one schedule.

I desperately wish someone would implement the AXIOM Beta compression version.

Fazek added a comment. Dec 19 2014, 1:53 PM

What's the problem with the BM 4K camera format? The big size? Or is it not supported by your software? I think the implemented file format should be open, free, and widely supported, compressed or not. The Redcode format is not usable here, as it's protected by patents etc., and if you implement your own format, it won't be compatible with existing software...

By the way, I think RAID 0 arrays are not that expensive; you can build very fast arrays with traditional hard disks too. Video capture is highly sequential and predictable, so the disk I/O can be optimized. Also, you can use the computer's RAM to buffer the stream.

PRN added a comment. Dec 19 2014, 3:44 PM

Yes, the size and all the problems caused by the size: transcoding times, the power of the machines needed for DI. The cost goes upwards.
I didn't mean apertus should use RC, of course. Compatibility with software will not be a problem in the long run; everything eventually catches up.

PRN added a comment. Dec 19 2014, 3:53 PM

And would someone please care to explain what kind of benefits a "no compression" image gives compared to, say, lossless compression to half the size.

I talked about this on IRC already: I shot the same composition on an Epic with RC 3:1 through 18:1 (at that time 18:1 was the maximum compression) and projected them on a Christie 4K projector for comparison. To my naked eye, I couldn't differentiate between 3:1, 4:1 and 5:1 - forget about 1:1 or no compression. Except in special cases like BG replacements, chroma keying and frame grabs, my opinion is that no compression is redundant!

Obviously no compression is the easiest to manage in camera in terms of processing required (i.e. none) but further down the image pipeline no compression becomes a problem.

Firstly, because the bandwidth of the outputs (HDMI or SDI) is limited, some compression in camera allows more frames and bigger frames to be output within this limited bandwidth.

Secondly, there is an issue with the capturing and storage of the frames produced: 4K Raw is a lot of data, as both PRN and I can testify. On location this requires the recorder to come with a lot of storage (capture media), and it means a lot of time spent on set backing it up throughout the day (as the media fills quickly). Then, back in the edit suite, it means a lot of storage is required to hold the rushes.

Furthermore, since we're talking about Raw (as opposed to uncompressed), there is little benefit in terms of image processing in the edit, since Raw images require a lot of processing for an NLE to interpret as a viewable image (e.g. each frame needs to be debayered) - unlike, say, certain video codecs (Avid DNxHD, Apple ProRes) which require much less processing and have the added benefit of requiring less storage space.

I found on opencores.org a project for a PCIe board with a SATA connector (http://opencores.org/project,spartan6_pcie).
The external interfaces are not the right ones, but the developer seems open to discussion. The board costs 200 USD (got feedback from the developer).

By the way, the Zynq provides up to 16 transceivers at 12.5 Gbit/s each, and we could use 10G Ethernet without collision detection (basically a raw serial connection).
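As a rough sanity check of the 10G Ethernet idea (ignoring all protocol and framing overhead, so this is an optimistic bound):

```python
# Rough check of how many uncompressed 4K 12-bit RAW frames/s fit into a
# 10 Gbit/s link, ignoring protocol/framing overhead (an optimistic bound).
LINK_GBPS = 10.0
frame_bits = 3840 * 2160 * 12               # ~99.5 Mbit per frame

for fps in (30, 60, 100, 120):
    gbps = frame_bits * fps / 1e9
    verdict = "fits" if gbps <= LINK_GBPS else "exceeds link"
    print(f"{fps:3d} fps: {gbps:5.2f} Gbit/s ({verdict})")
```

So a single 10 Gbit/s link would, in the best case, carry uncompressed 4K 12-bit RAW up to roughly 100 fps before overhead is even considered.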

Bertl added a comment. Dec 30 2014, 8:24 PM

Nice! Please investigate how we can get our hands on such a board for testing (in exchange for 200 USD of course).

@Zynq: yes, but there is a catch; any Zynq above the 7030 is not covered by the WebPACK license, so you need a (very expensive) license to generate the bitstream. This leaves the 7015 and 7030, both with "only" 4 MGTs.

Thanks,
Herbert

His name is Christophe Carpentier and he has one available for 200 USD. Go to the page http://opencores.org/acc,view,chipmaker78; his email address is there.
I gave him the address of the apertus website and I am investigating how to get the board.

@Zynq: it would have been too nice to be that easy.

Answer from Christophe

Hello,
I don't know if my board is powerful enough for your project, but I am sure it could help in developing your idea, maybe a part of it (everything about PCIe, processor on FPGA, DDR).
Also, I have plenty of blank PCBs; you can solder onto them any Spartan-6 FPGA (in the FGG484 package), up to the high-range parts with 10x more cells.
The XC6SLX45 on the board is OK for a full-featured MicroBlaze processor with direct PCIe DMA and registers.

PS: the SATA part is not assembled on the board (my previous customer did not need it).
PS: about SATA, I have plenty of SATA connectors, so I will add them in case you buy, but I don't have the ICS844071 chip (SATA PLL); it must be bought separately from Digikey in case you need it. It is easy to solder (with a magnifier, of course).
PS: I also have a spare XC6SLX45T in stock and plenty of DDR3 chips, so a second board would be possible...
PS: postage costs 6 EUR by tracked and insured package (not express). Express costs 30 EUR.

I saw a remark about the web licence not covering some FPGAs.
The web licence also does not cover EDK and the processor on FPGA.
But there is the 1-month trial licence, which is easy to renew just by changing PC (if it is not the same machine or not the same person, I think you can get a new trial).
Anyway, I think you will need EDK (processor on chip), which is also not covered by the web licence.

Photo of the board:

We are in touch with Christophe and just purchased this board from him.

Hello,
First, let me apologize for joining this conversation so late. For starters, my opinion is that working with 3 HDMIs shouldn't be an option. Cables are not solid over long distances, connections (even gold-plated ones) are not secure, and transferring that amount of data would be a real headache. I've worked as a data manager and DIT on several productions in Spain and Mexico, and SDI is the most reliable cable you can get at the moment. I know it requires implementing software and hardware, but in my opinion 3 HDMIs sounds scary.

Referring to compression: correct me if I'm wrong, but has the decision about compression already been made? That means the AXIOM Beta will not record compressed files? No codec will be implemented? Then what is the good part of having an open-source camera recording 4K if it can't be used on an "indie" type of film, because we need a studio debayering offline by night and then transcoding, so no dailies or rough cuts during production, so more money into post-production? I think I heard you guys talked about being able to use DNxHD but it didn't work, right? Also that you are in contact with Brendan Boles about the possibilities that MOX could bring, but as far as I know his open codec is basically for post-production apps. I don't want to sound pessimistic, because I'm very much the opposite. We are talking here about an open-source camera that meets film production standards. That by itself blows my mind, but if this project is about the small guy against the big leagues, then shooting with the AXIOM Beta should be about not compromising an independent film budget with post-production resources. (And I'm talking from a position in which I already have the possibility to work in a 4K workflow.)
Just an opinion.

MOX is an open-source, open format, and I know Brendan knows about the Axiom; I think Sebastian et al. have been in contact with him about maybe using it. As I understand it, however, MOX is essentially a flexible wrapper - it could be used as a production format, you just need to include an (open-source) compression codec in the wrapper.

The other option I believe people are considering is the Magic Lantern MLV video format.

Laoena added a subscriber: Laoena. Feb 25 2015, 1:11 PM

Just saw this on Kickstarter: TOB Cable: One cable for everything

https://www.kickstarter.com/projects/janulus/tob-cable-one-cable-for-everything

They are producing a cable that contains copper wire plus fibre, and swappable connectors. According to the page, they claim that "Using USB 3.1 TOB can transfer data up to 10 Gbps ...". Current connectors included in the package are USB (+ Micro & Mini) and HDMI (+ Micro).

The small company has already designed a few devices which they already sell. They estimate shipping the product in July. I went and pledged for a full connection kit. I can let you know what it is like when I get it, if you like.

Not sure if this fits in with your programme for the Beta; otherwise it is something to keep in mind, possibly for the Gamma version.

@Laoena, what exactly do you suggest we do with that cable?

What about something like this?
http://www.thessdreview.com/daily-news/latest-buzz/samsung-begins-mass-production-of-industrys-first-m-2-nvme-pcie-ssd-for-pcs-and-workstations/
or
http://www.legitreviews.com/intel-ssd-750-nvme-pcie-ssd-review_161829
Both address the write performance needed, I believe, and are getting cost effective. With the NVMe capabilities the latency and queues would be awesome.
The Intel SSD 750 is about $1K for 1.2 TB of storage.
There are others entering the market too like...
http://www.gskill.com/en/product/fm-pcx8g2r4-960g

Just trying to contribute as I wait for my Axiom Beta....

@kimbray, those definitely look like interesting options!

db added a subscriber: db. Jun 7 2015, 7:07 PM

Hello all!

First post, better late than never right?

4K RAW PC recording: https://lab.apertus.org/T136
colinelves: "At the moment my Odyssey 7Q can handle 60p 4K Raw by writing alternate frames to two high speed SSDs so 120fps is likely to need either 2x Odysseys working in tandem (which would not necessarily be impossible to get working with Convergent design's help) or possibly their as-of-yet unreleased Athena recorder. I don't think you'd even be able to record it to a PC without building an expensive SSD raid system."

Portable Mini-ITX multifunctional PC: https://lab.apertus.org/T217
surami: "Well, I started to think on SBCs at first, but there aren't any, which has a wide input bandwidth for 4K RAW data recording. So till now there are standalone portable recorders like Atomos Shogun, Odyssey7Q, etc.. For me the price of them is too much for "only" a recording solution and why not build a multifunctional portable PC, which later could be used for postproduction too? Of course it's a heavy thing and eats more power..."

C-Box System: https://www.kickstarter.com/projects/648471422/rig-friendly-c-box-system-use-any-ssd-on-your-cfas/description
"Truly Uncompressed Footage! This is not an external recorder quite like anything out there; it sends your camera’s highest quality footage directly to your choice of SSDs. External recorders require the camera to which they are connected to send a video signal which then must be encoded by the recorder. Most professional-grade external recorders will only handle up to 4K resolution footage, with a limited number of frame rate options, and some even require the purchase of proprietary SSDs. With the C-Box system you can record all of the frame rate and resolution options that your camera offers for internal cards, directly to external, non-proprietary SSDs."

So, it's my limited understanding that this is no good because it uses CFast and the Beta uses microSD, right? Maybe it's just wishful thinking that I might be able to afford all of these extras to go along with my Beta... I nearly fell off my seat at the price of just one Odyssey7Q... I suppose there are customers who want the PCIe card so they can use it with huge workstation RAIDs that have to be hauled to sets, and there's no way to funnel the PCIe card stretch-goal money into a C-Box clone for microSD instead?

Cheers!

Hi db

First post, better late than never right?

exactly! Great to have you here :)

The microSD card slots have a data rate limit of around 50 MB/s, so they are only suitable for still image recording.

We are currently experimenting with UHS-II SDXC cards that can take sustained write speeds of 200+ MB/s.
This will in any case require a new plugin module and will not work with the existing microSD slots.
And we have trouble sourcing UHS-II components; it still seems to be a very new thing...

Bertl added a comment. Jun 8 2015, 3:56 AM

As sebastian explained, UHS-II is electrically different from "normal" SD, so the only way to make this work is through special hardware.

CFast is a variant of CompactFlash which internally uses a SATA interface (thus it is rather simple to do the CFast-to-SSD conversion), but there is no advantage in using CFast over using SATA SSDs directly. The main problem with SATA on the Beta is the missing gigabit transceivers.

Note that a PCIe card design could also be adapted to "smaller" emerging standards like MiniPCI Express.

Best,
Herbert

Andrej added a subscriber: Andrej. Jan 3 2016, 11:21 PM

Hello,

@Herbert, as far as I recall we discussed the possibility of a CF-to-SSD conversion, which wasn't that simple because CF uses various modes - I think it was something like transfer/memory/IO - and all three modes had to work. People have discussed this when trying to get a RAW stream from Magic Lantern on a Canon 5D3 onto an SSD instead of an expensive CF card.

The SATA/PCI board certainly looks very promising and would be a great desktop/non-portable solution.

Regarding an (intermediate) portable solution, I'd still suggest going with the CD Odyssey 7+. Since CD is interested in cooperation with apertus, it would be helpful to include camera control for the Beta in the Odyssey's firmware and GUI. A Beta and a CD O7+ would be a working 4K/60 camera setup if bundled with some sort of harness and monitor shade - though not fully open source - and even a CMS20k would work at 30 fps (if supported by Convergent), making the setup future-proof and relatively easily achievable.

But as you mentioned, SATA/PCI, SDI/Odyssey and HDMI/DisplayPort are all in desperate need of gigabit transceivers.

Good luck,

Andrej

I think we should use PCI Express 3.0 x4 with 4 M.2 cards in RAID 0, so we can get uncompressed RAW at a very high frame rate and resolution, because if we go for SATA SSDs, SATA III would be a bottleneck. We should develop it with PCI Express so it is M.2 and XQD compatible. We can also add more memory modules for cache.
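For a rough comparison of the interfaces mentioned against the uncompressed 4K RAW data rate (nominal spec-level figures; real-world throughput is lower):

```python
# Nominal usable bandwidth of the interfaces discussed vs. the 4K RAW data rate.
# Figures are approximate spec-level numbers, not measured values.
interfaces_mb_s = {
    "SATA III (6 Gbit/s, after 8b/10b)": 600,
    "PCIe 3.0 x1 (~985 MB/s per lane)":  985,
    "PCIe 3.0 x4":                       4 * 985,
}
raw_rate = 12.15 * 120   # MB/s for 4K 12-bit RAW at 120 fps (per-frame size from earlier)

for name, bw in interfaces_mb_s.items():
    verdict = "ok" if bw >= raw_rate else "bottleneck"
    print(f"{name:36s} {bw:5d} MB/s -> {verdict} for {raw_rate:.0f} MB/s")
```

This is only a bandwidth comparison; it says nothing about whether the camera side can actually drive such a link.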

Bertl added a comment. Oct 9 2016, 2:41 PM

Sounds nice! Do you happen to have a FOSS/OH solution for PCI-E 3.0 and M.2?

Best,
Herbert