Hi @maltefiala,
Please have a look at my proposal draft, and allow me to use your valuable feedback to improve it further so that I can upload a final version.
Thank you; I look forward to your response.
Mar 29 2017
In T734#11374, @Bertl wrote: @mash: while that is a good idea for cutting/post software, this is not an option for the AXIOM Beta as there is no GPU to accelerate anything.
It is unlikely that the lens system manufacturers will disclose information about their lenses, but of course it is worth trying to contact them and ask.
I figured out that camera lenses use a 16-bit addressing system and the SPI protocol. We need to send 16-bit data to the lenses and retrieve 16-bit data from them, and then decode that data stream into something human-readable. Unfortunately, the data that can be exchanged with a lens differs for every model, even from the same manufacturer. Camera manufacturers include all the information (focal length, aperture range) in their firmware, keyed by a lens ID code. So either we capture the data stream, observe it with an oscilloscope and a function generator, and translate it into lens information, which we would have to repeat for every lens model on the planet, or we ask the camera manufacturers about their own SPI systems (which bytes mean what).
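For illustration, a minimal sketch of what a single 16-bit full-duplex transfer could look like from Linux user space via the standard spidev interface; the device path, SPI mode, clock speed, and command word here are all assumptions, since the real framing is manufacturer-specific:

```cpp
// Hedged sketch: one 16-bit full-duplex SPI transfer via Linux spidev.
// /dev/spidev0.0, SPI_MODE_3, 500 kHz and the command word 0x0000 are
// placeholders; actual lens protocols differ per manufacturer and model.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

int main() {
    int fd = open("/dev/spidev0.0", O_RDWR);  // assumed SPI device node
    if (fd < 0) { perror("open"); return 1; }

    uint8_t  mode  = SPI_MODE_3;   // assumption; lens-specific
    uint8_t  bits  = 16;           // 16-bit words, as observed
    uint32_t speed = 500000;       // 500 kHz; assumption
    ioctl(fd, SPI_IOC_WR_MODE, &mode);
    ioctl(fd, SPI_IOC_WR_BITS_PER_WORD, &bits);
    ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

    uint16_t tx = 0x0000;          // hypothetical command word
    uint16_t rx = 0;
    struct spi_ioc_transfer tr;
    std::memset(&tr, 0, sizeof(tr));
    tr.tx_buf = reinterpret_cast<uintptr_t>(&tx);
    tr.rx_buf = reinterpret_cast<uintptr_t>(&rx);
    tr.len = 2;                    // one 16-bit word
    tr.bits_per_word = bits;
    tr.speed_hz = speed;

    if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 1) { perror("transfer"); return 1; }
    std::printf("received word: 0x%04x\n", unsigned(rx));  // decoding is lens-specific
    close(fd);
    return 0;
}
```

Capturing the same words with a logic analyser on a real camera/lens pair would then let us map commands to lens information.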
Every manufacturer has a different protocol for their lenses.
Could you clarify whether the communication system between cameras (of different manufacturers) and lenses is the same, or whether we need to implement different communication protocols for different camera manufacturers? Do all lens systems from various manufacturers follow the same communication protocol with their cameras?
Mar 28 2017
Sebastian, if there is a need at some point in time, I can translate the article natively, if machine translation is not clear enough.
The waveform and vectorscope feature requests are long-outstanding wishes.
Hi,
Is it something like this (http://www.ixbt.com/digimage/canonautosonyl.shtml) that you want to implement?
Mar 27 2017
My very direct and subjective feedback:
Mar 26 2017
In T765#11343, @RexOr wrote: @merikhan36 - Do you have any FPGA related experience?
Any feedback on the suggested project will be appreciated :)
Thanks for your application draft. I will look at it next week. If you don't hear back from me by April 1st, please ping me.
I have prepared the first draft of my proposal. Please review and comment wherever necessary; it will help me improve it further.
Mar 25 2017
Thanks, very interesting!
Can you elaborate more on the technical background of this proposed feature and the planned implementation?
Mar 24 2017
Sorry for the delay; yes @sebastian, this is my GSoC project proposal.
Mar 23 2017
@kkvasan92: AXI is not required/desired for simulation/emulation.
@anil: With Link Training we refer to training the LVDS connection required for the Gearwork.
@mehrikhan36 is this your GSoC project proposal?
Yes, we have IRC, it is on #apertus @ irc.freenode.
Just join there and ask if you need anything.
@sagnikbasu: Sorry for the delay, I obviously missed your questions.
Mar 22 2017
The test footage looks amazing.
Mar 21 2017
Hi,
I am interested in this project.
Can you please elaborate on the project goals?
In the goals you mentioned link training; are you referring to the link training that was added in the HDMI 2.1 spec?
Mar 17 2017
A useful resource for pipelined convolution implementation: https://daim.idi.ntnu.no/masteroppgaver/013/13656/masteroppgave.pdf
Mar 16 2017
Hello, do we have an IRC channel for apertus for the GSoC 2017 projects?
Mar 13 2017
Here is a link to a simple flowchart: https://github.com/sagniknitr/Real-time-sobel-filter-in-FPGA/blob/master/gsoc2.jpg
Regarding hardware emulation, can you please tell us more about the interfaces?
Can AXI be used as the input stream interface for the hardware emulation model, assuming this emulated sensor model is going to be used in the same FPGA where the main system will be implemented?
Thanks a lot for clarifying. I am interested in this project.
I need some more clarification.
Also, in addition to the above post, I would like to know about the various signals that the camera sensor (Truesense KAC12040 or CMOSIS CMV12000) data bus will provide.
Hi,
Here are some of my queries regarding this project.
-> What kind of edges does the camera require? What if the algorithm only computes edges while the camera is in motion? When the camera is stationary, it could keep the edge information stored in memory so that no further processing is needed. I think this may save computation time.
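A rough sketch of that idea (a cheap frame-difference check gating a Sobel pass; the threshold and frame format are illustrative assumptions, and a real implementation would sit in the FPGA pipeline rather than on the CPU):

```cpp
// Motion-gated edge detection sketch: recompute Sobel edges only when the
// frame content changes, otherwise reuse the cached edge map.
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

using Frame = std::vector<uint8_t>;  // 8-bit grayscale, width*height pixels

// Mean absolute difference between consecutive frames as a cheap motion metric.
static double motion(const Frame& a, const Frame& b) {
    long sum = 0;
    for (size_t i = 0; i < a.size(); ++i) sum += std::abs(int(a[i]) - int(b[i]));
    return double(sum) / double(a.size());
}

// Plain Sobel magnitude; just a reference model, not an optimised kernel.
static Frame sobel(const Frame& f, int w, int h) {
    Frame out(f.size(), 0);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int gx = -f[(y-1)*w+x-1] + f[(y-1)*w+x+1]
                     - 2*f[y*w+x-1] + 2*f[y*w+x+1]
                     - f[(y+1)*w+x-1] + f[(y+1)*w+x+1];
            int gy = -f[(y-1)*w+x-1] - 2*f[(y-1)*w+x] - f[(y-1)*w+x+1]
                     + f[(y+1)*w+x-1] + 2*f[(y+1)*w+x] + f[(y+1)*w+x+1];
            out[y*w+x] = uint8_t(std::min(std::abs(gx) + std::abs(gy), 255));
        }
    return out;
}

// Recompute only when motion exceeds a threshold, else return the cache.
Frame edges(const Frame& cur, const Frame& prev, Frame& cache, int w, int h) {
    const double kThreshold = 2.0;  // assumption; would need tuning on footage
    if (cache.empty() || motion(cur, prev) > kThreshold) cache = sobel(cur, w, h);
    return cache;
}
```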
Mar 12 2017
I don't think that this needs to be Vendor specific in any way.
I.e. you can use the Xilinx Vivado toolchain or any other (Altera Quartus, Lattice Diamond, ...) for testing and simulation, but the resulting HDL should be vendor independent.
Mar 11 2017
Hi Mr Bertl,
Can I assume that the Xilinx tool flow will be used for hardware emulation, as a Xilinx Zynq 7020 is used in the AXIOM camera? Uncompressed video or image data will be placed in memory, and the sensor model should generate an output bit stream for that image/video according to the timing and functional standard of the sensor; am I correct? And is it possible to use available IPs (OpenCores / Xilinx) where applicable?
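Independent of the toolchain question, a plain software reference model can pin down the expected behaviour before any HDL is written. A rough C++ sketch of the replay idea (the blanking constant, flags, and pixel layout are illustrative assumptions, not the actual sensor specification):

```cpp
// Behavioral sketch of a sensor emulation model: a frame placed in memory
// is replayed as a pixel stream with frame/line valid flags, roughly how a
// parallel-output sensor behaves.
#include <cstdint>
#include <cstdio>
#include <vector>

struct PixelBeat {
    uint16_t data;        // 12-bit pixel value in the low bits (assumed)
    bool frame_valid;
    bool line_valid;
};

class SensorModel {
public:
    SensorModel(std::vector<uint16_t> frame, int w, int h)
        : frame_(std::move(frame)), w_(w), h_(h) {}

    // One "clock" of the model: emit the next beat, including blanking gaps.
    PixelBeat tick() {
        PixelBeat b{0, false, false};
        if (y_ < h_) {
            b.frame_valid = true;
            if (x_ < w_) {                        // active pixels
                b.line_valid = true;
                b.data = frame_[y_ * w_ + x_];
            }                                     // else: horizontal blanking
            if (++x_ == w_ + kHBlank) { x_ = 0; ++y_; }
        }
        return b;
    }

private:
    static constexpr int kHBlank = 16;            // assumed blanking interval
    std::vector<uint16_t> frame_;
    int w_, h_;
    int x_ = 0, y_ = 0;
};

int main() {
    SensorModel m(std::vector<uint16_t>(64 * 4, 0x0800), 64, 4);  // tiny test frame
    for (int i = 0; i < (64 + 16) * 4; ++i) {
        PixelBeat b = m.tick();
        if (b.line_valid) std::printf("%03x ", unsigned(b.data));
    }
    std::printf("\n");
    return 0;
}
```

The same model, driven from a testbench, would act as the stand-in for the real sensor during emulation.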
Mar 10 2017
I had in mind techniques that could be used for blurring, sharpening, unsharpening, embossing, etc. I am not sure if any of these are already implemented, but my idea is to develop a common module in Verilog that can accept a kernel of any size (e.g. 3×3 or 5×5) and produce the desired effect on the image. For example, the user could input a 5×5 kernel to produce a Gaussian blur.
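As a reference for what such a module should compute, here is a plain software model of an N×N convolution (the function name is illustrative; a Verilog version would stream pixels through line buffers instead of using random access):

```cpp
// Generic kernel convolution reference model: accepts a square kernel of any
// odd size and convolves an 8-bit grayscale image, clamping at the borders.
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<uint8_t> convolve(const std::vector<uint8_t>& img, int w, int h,
                              const std::vector<float>& kernel, int k /* odd */) {
    std::vector<uint8_t> out(img.size(), 0);
    const int r = k / 2;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float acc = 0.0f;
            for (int ky = -r; ky <= r; ++ky)
                for (int kx = -r; kx <= r; ++kx) {
                    // clamp at borders so any kernel size works on full frames
                    int sy = std::min(std::max(y + ky, 0), h - 1);
                    int sx = std::min(std::max(x + kx, 0), w - 1);
                    acc += kernel[(ky + r) * k + (kx + r)] * img[sy * w + sx];
                }
            out[y * w + x] = uint8_t(std::min(std::max(acc, 0.0f), 255.0f));
        }
    return out;
}

// Example: a 3x3 box blur (every coefficient 1/9).
//   std::vector<float> box(9, 1.0f / 9.0f);
//   auto blurred = convolve(img, w, h, box, 3);
```

The FPGA module could then be verified against exactly such a model in simulation.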
Mar 9 2017
First, I am wondering if the mentors would be open to the implementation of other techniques (in addition to Sobel filters) that are useful in filmmaking?
I am a final-year undergraduate student at the National University of Sciences and Technology, with a major in Electrical Engineering. I participated in GSoC last year with the TimeLab organisation and developed a time-series simulator in Python.
I have a particular interest in parallel hardware programming. Recently, as part of my semester project, I implemented a pipelined version of a Convolutional Neural Network on an FPGA using Verilog.
Mar 8 2017
Yes, if we extend it a bit; we are just creating a simple daemon for now. The log is written to /var/log/syslog in my VM. You can see the log initialisation in the source I uploaded yesterday; just take a look at main.cpp. Thanks for reminding me about return values, as we also want to request the set values to display/verify them.
@anuditverma
No, you don't need to know about django, you only need to know about unit testing. Feel free to consult another resource of your liking.
Just my idea: as the daemon is planned for user space first, it would not be a big problem to test on a local machine. I am currently developing it on Linux Mint with Unix domain sockets, which are meant just for local usage; other languages of course need wrappers to be able to use commands that come in over the network. For security reasons, I don't want to allow access to the daemon over the network at the moment.
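To make that concrete, a minimal sketch of such a local-only daemon: a Unix domain socket server that logs through syslog and echoes the received command back so the client can verify it. The socket path and reply format are placeholders, not the actual command set:

```cpp
// Local-only control daemon sketch: Unix domain socket + syslog.
#include <cstring>
#include <string>
#include <sys/socket.h>
#include <sys/un.h>
#include <syslog.h>
#include <unistd.h>

int main() {
    openlog("axiom-daemon-sketch", LOG_PID, LOG_DAEMON);  // lands in /var/log/syslog

    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, "/tmp/axiom.sock",        // placeholder path
                 sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);                                 // remove a stale socket
    if (srv < 0 ||
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(srv, 4) < 0) {
        syslog(LOG_ERR, "socket setup failed: %m");
        return 1;
    }
    syslog(LOG_INFO, "listening on %s", addr.sun_path);

    for (;;) {
        int cli = accept(srv, nullptr, nullptr);
        if (cli < 0) continue;
        char buf[256];
        ssize_t n = read(cli, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            syslog(LOG_INFO, "command: %s", buf);
            // Echo the value back so the client can display/verify what was set.
            std::string reply = std::string("ok: ") + buf;
            write(cli, reply.c_str(), reply.size());
        }
        close(cli);
    }
}
```

Being a Unix domain socket, it is inherently unreachable from the network, which matches the security point above.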
Since the task has been updated, I have been following the resources listed on it, and I am also getting myself acquainted with the prerequisites.
One of the resource links leads to the O'Reilly book "Test-Driven Development with Python", and it is advised to read chapters 1-4; the book covers Python- and Django-based app development.
Mar 6 2017
Oh alright, I see. Thanks for the heads-up, @maltefiala; then I should certainly focus on other aspects of this project.
Dear @anuditverma, it's great that you want to take a look at the current examples. Just bear in mind that the camera hardware has not been virtualised so far, so you won't be able to get useful replies when running the scripts on your Raspberry Pi. The latest firmware is here: http://vserver.13thfloor.at/Stuff/AXIOM/BETA/beta_20170109.dd.xz
Thanks @maltefiala for the update and for the application reminder; I am looking forward to applying. I will go through this updated task once again to get a clearer understanding and strengthen my fundamentals.
@anuditverma, we finished updating this task. If you are still interested, please apply with a timeline proposal by April 3, 16:00 UTC. If you have any further questions, feel free to ask.
Mar 4 2017
Mar 3 2017
Not necessarily GSoC; see our comments in T757.
ah right, all clear now :)
As I said on IRC just a moment ago, it is a preparation for GSoC. It's faster if I start it than a student who has never seen the system; afterwards a student can proceed with improvements, like adding new protocols such as UART or similar.
I will add/edit the GSoC task tomorrow.
This is not a GSoC task but will be done by Andrej.
Please don't forget to fill in the required GSoC task fields:
Added by @sebastian:
The beauty of this task is its modularity. E.g., we could add the FPGA bitstreams as binaries in the beginning.
Here are a few quick comments, far from complete:
Mar 2 2017
Btw, I root for Travis, as we get it for free on GitHub: https://travis-ci.org/
Current image, posted by @BAndiT1983
CI definitely sounds like a good plan for this task, +1 from me
A good solution for this is to install Jenkins, e.g. through Docker, and let it build the packages for you, either manually or fully automatically (e.g. nightly builds). Jenkins is not limited to building; it's a batch task machine, so you could give it all the tasks that should produce repeatable results, like firmware images, OC nightlies, and unit testing.
@anuditverma, thanks a lot for your detailed answer. I will update this task as soon as we have improved the onboarding process.
Thanks for your interest in my work, so here are my views to your queries,
We are currently improving the onboarding process (documentation, tools, ...) for this task. I am positive we can give more information on Monday. Regarding dates and times: most of our team lives in MET (UTC+1). We will switch to daylight saving time (UTC+2) on the 26th of March.
Mar 1 2017
Someone at natron.fr is probably also working on a waveform monitor: https://forum.natron.fr/t/luma-waveform-display-using-shadertoy/985