Low Resolution Real-time Preview Video Stream - FPGA feature
Open, Wishlist, Public

Description

I would like to be able to access a low-resolution, uncompressed RGB (24-bit) video stream from Linux userspace that either has a low enough bandwidth requirement to be streamed over the network, or is low enough resolution that one of the Zynq ARM cores can compress it in real time with existing software (e.g. ffmpeg, VLC, etc.) inside the camera.

A 512x288 (4096 / 8) image created by skipping pixels would be perfectly fine, IMHO.

Bandwidth requirements estimate:
Size per image: 512 × 288 × 3 bytes = 432 kB
Bandwidth requirement for 25 FPS: 432 kB × 25 ≈ 10.5 Mbyte/s

This live video feed could be packaged into an RTSP stream and then be viewed on any network-connected device.
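For illustration (untested, and the stream target is a placeholder): once the raw frames are readable from userspace, they could be piped into ffmpeg for compression and RTSP packaging along these lines:

    preview-reader | ffmpeg -f rawvideo -pix_fmt rgb24 -s 512x288 -r 25 -i - \
        -c:v libx264 -preset ultrafast -f rtsp rtsp://<receiver>/preview

where preview-reader stands for whatever hypothetical program exposes the frames on stdout.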

Thoughts?

Bertl added a comment. Dec 16 2014, 4:30 PM

Low resolution means scaling the high-resolution version down.
This is a very expensive (resource-wise) and power-hungry task.
Compressing the data with e.g. JPEG or similar is also very intensive.
While doable, I'm not sure we actually want that for the Beta.

Skipping pixels (keep 1, skip 7 in my example) should be a very resource-friendly way of scaling down, no?

I think Sebastian meant pixel skipping (every 4th line, every 4th column, or something like that).

Is it possible to share some memory between the FPGA and the ARM core? Something like this:

  • FPGA writes the low-res image into some RAM area
  • ARM core has read-only access to it (see the sketch below)
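On the Linux side, access could look something like this minimal sketch (untested, needs root; the physical address is a placeholder assumption, not a real Beta value):

    /* Minimal sketch: map a hypothetical FPGA-written preview buffer into
       userspace via /dev/mem and dump one RGB24 frame to stdout. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define PREVIEW_PHYS_ADDR 0x18000000UL             /* assumed reserved RAM region */
    #define PREVIEW_W 512
    #define PREVIEW_H 288
    #define PREVIEW_BYTES (PREVIEW_W * PREVIEW_H * 3)  /* RGB24; multiple of 4 KiB pages */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        uint8_t *frame = mmap(NULL, PREVIEW_BYTES, PROT_READ, MAP_SHARED,
                              fd, PREVIEW_PHYS_ADDR);
        if (frame == MAP_FAILED) { perror("mmap"); return 1; }

        /* The ARM core reads the FPGA-written image; one frame to stdout,
           ready to be piped into ffmpeg as raw video. */
        fwrite(frame, 1, PREVIEW_BYTES, stdout);

        munmap(frame, PREVIEW_BYTES);
        close(fd);
        return 0;
    }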
colinelves added a comment. Edited Dec 17 2014, 9:03 AM

A 512x288 image is insufficient for ensuring correct focus, especially at 4K - in my opinion. I'd also say that line skipping or scaling seems like a lot of extra processor work to me. I was thinking that a simple black-and-white image would be the easiest solution: converting the raw pixel values straight to luma data.
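As a rough illustration of that conversion (my sketch, untested; an RGGB Bayer layout and 12-bit samples are assumptions, and one green photosite per block serves as a crude stand-in for luma):

    /* Monochrome preview straight from Bayer raw: sample one green
       photosite per 8x8 block and scale 12-bit values to 8-bit. */
    #include <stdint.h>

    void raw_to_luma(const uint16_t *raw, uint8_t *out, int w, int h)
    {
        for (int y = 0; y < h; y += 8)
            for (int x = 0; x < w; x += 8)
                *out++ = (uint8_t)(raw[y * w + x + 1] >> 4);  /* G of the RGGB quad */
    }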

This would be perfect for framing and focussing (with or without tools - as it is much easier to assess focus on a black-and-white image) and would allow look-around too, I'd guess - which is a great feature.

The main downside being that you have no colour information (obviously) to assess white balance. Although I'd say this could be addressed simply enough with your standard white balance tools (auto white balance, white balance card, presets, dialling in colour temperature and offset, etc.).

Oh and directors these days often whine if they can't see a colour image. But that's less important ;-p

@philippej: if you check the image pipeline the AXIOM Alpha uses, you will see that the entire raw image is already present in memory, so it can be accessed from the ARM cores. It is very likely that the "normal" operation will be similar on the Beta.

Arbitrary decimation, scaling or thresholding can be done on this data in the ARM cores; it just needs to be synchronized.

> the entire raw image is already present in memory

Would a viable path for generating this real-time preview be a piece of software or a driver running on the ARM core that reads pixel data from just every 8th pixel address and wraps it into a new image/device? Then no FPGA processing would be required at all, and the load on the CPU would also be minimal, as no image resampling/processing is needed.
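A minimal sketch of that reader, assuming the raw frame is mapped as in the earlier /dev/mem example (Bayer layout, bit depth and geometry are again assumptions): it takes one RGGB quad per 8x8 block and emits RGB24, e.g. a 4096x2304 crop down to 512x288:

    /* Every-8th-pixel decimation on the ARM core: one RGGB quad per 8x8
       block of the (assumed) 12-bit raw frame becomes one RGB24 pixel. */
    #include <stdint.h>

    void decimate_rgb24(const uint16_t *raw, uint8_t *out, int w, int h)
    {
        for (int y = 0; y < h; y += 8) {
            for (int x = 0; x < w; x += 8) {
                const uint16_t *q = &raw[y * w + x];
                *out++ = (uint8_t)(q[0] >> 4);                 /* R             */
                *out++ = (uint8_t)(((q[1] + q[w]) / 2) >> 4);  /* avg of two Gs */
                *out++ = (uint8_t)(q[w + 1] >> 4);             /* B             */
            }
        }
    }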

This feature came from me with the idea of using the Axiom with Dragonframe stop motion software.

I think you need to double the suggested resolution, to 1024x576, for it to be useful for stop motion animation. And keep it in color.
Keep in mind that this video feed is just the video assist. Separately, the software would capture a full raw image for each frame.

If you watch the "How It Works" video on our site you can see an overview of how the software works with a Canon DSLR:
http://www.dragonframe.com/features.php

Bertl added a comment. Dec 18 2014, 5:17 PM

@sebastian: maybe; it would for sure put a certain load on the memory controller (a limited resource), and probably reading every 8th pixel would also give a very strange preview while not really reducing the memory load that much (access is 64-128 bit).
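To put rough numbers on that (assuming 12-bit packed pixels and 64-bit access): consecutive pixels are 12 bits apart, so every 8th pixel starts 96 bits apart and lands in words 0, 1, 3, 4, 6, 7, ... of a row. Within each fetched line you would therefore still touch about two thirds of the 64-bit words, nowhere near the 8x saving the pixel count suggests.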

@dyamicaliri: I think for this specific purpose it doesn't really matter how much load a preview puts on the FPGA or ARM cores, as it will be the only thing really running at that time and can probably be suspended while the high-resolution raw is captured.

Best,
Herbert

> @sebastian: maybe; it would for sure put a certain load on the memory controller (a limited resource), and probably reading every 8th pixel would also give a very strange preview while not really reducing the memory load that much (access is 64-128 bit).

I see - so doing the pixel skipping already in the FPGA and writing a smaller image into RAM from the PL side would be much faster, but also a lot more work, right? I would not mind the preview image looking ugly due to the skipping of pixels (we can simulate this beforehand and verify whether it's a bad idea, but I have a feeling it will work fine for just checking the framing).

Bertl added a comment. Dec 29 2014, 5:18 AM

Maybe test it out on the Alpha?

All it needs is a program which does the decimation and sends the (potentially compressed) image over the network.
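For a first test, the pieces could be chained without any new FPGA work, e.g. (hypothetical command lines; preview_decimate is a placeholder name, and netcat listen syntax varies by implementation):

    ./preview_decimate | nc <workstation-ip> 5000                                              # on the Alpha
    nc -l -p 5000 | ffplay -f rawvideo -pixel_format rgb24 -video_size 512x288 -framerate 25 -  # on the workstation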

Would be great if we could find someone interested in testing this on the Alpha!

Another thing to look at is this FPGA Dirac encoding implementation: http://opencores.com/websvn,listing?repname=dirac&path=%2Fdirac%2F#path_dirac_

An alternative may be to implement and use a dedicated PHY to stream the output as RTP (RFC 4175): https://tools.ietf.org/html/rfc4175

Uncompressed, we could output 4096 width × 3072 height × 12 bit at 5-6 fps within 1 Gbit. The suggestion was to use low resolution, while the alternative might be to decimate the framerate.
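Checking the arithmetic: 4096 × 3072 × 12 bit ≈ 151 Mbit per frame, so a 1 Gbit/s link carries at most ~6.6 such frames per second before protocol overhead; with RTP/UDP/IP overhead, 5-6 fps is about right.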

Bertl added a comment. Apr 2 2017, 1:42 PM

Sounds nice and should be doable on the MicroZed.
It might be worth using decimation and keeping the framerate to avoid extensive buffering.
The obvious drawback is that connectivity via Ethernet is lost.