This was forwarded in IRC:
Aug 29 2018
Jul 5 2018
The git commit hash and the date of the last update of the software (beta-software repo) are already displayed at login through /etc/motd. These can differ from the creation commit / date of the firmware, but those can easily be added as well. The standard disclaimer can also be added in the same place.
Possible framework, as a replacement for ImageMagick: http://www.graphicsmagick.org
more points for the checklist:
Note the comments by one of the authors of the new automatic build system today:
http://irc.apertus.org/index.php?day=04&month=07&year=2018#160
Jul 4 2018
Task is up for grabs again.
Jun 25 2018
May 22 2018
May 18 2018
Done. RESTServer part of the task is obsolete.
Apr 11 2018
Looks good, I just added the https://github.com/apertus-open-source-cinema/pcb-aoi repo to phabricator
Apr 10 2018
Finally restored the page again, not pretty, but fiducial search is working and the image is unwarped.
Apr 9 2018
This link is open for all
Cannot open it without logging in or a subscription.
This is the paper that demosaics/interpolates the pixels while preserving edge information:
http://ieeexplore.ieee.org/abstract/document/1703585
Apr 4 2018
This seems super interesting; I see it as a potential tool to help tweak the color science of the camera.
Mar 30 2018
Mar 17 2018
Mar 14 2018
Perhaps we should change the title of this thread - or move it to "websockets". REST doesn't seem to make sense anymore...
Mar 12 2018
I am not worried about the display of these values in realtime using a Vue value store (modified by websocket information). I think we could get to 30fps with less than 100ms latency - if the hist3 can pump the info out that fast - and still manage the wifi and the websockets...
I would also suggest we try drastically decimating the histogram's number of samples and bit depth and try sending multiple measurements per second; after all, as a filmmaker you are not interested in the details of the spikes, but rather in seeing the general value distribution and how it changes when opening or closing the lens aperture, for example. I guess testing GPU acceleration on smartphones for drawing these could be worth a try.
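A minimal sketch of that decimation idea (the bin count and bit depths here are illustrative, not the actual cmv_hist3 output format):

```python
def decimate_histogram(bins, target_bins=64, target_bits=8):
    """Reduce a raw histogram to fewer bins and a smaller value range.

    bins: list of raw bin counts (e.g. 4096 entries from the sensor).
    Returns target_bins entries, each scaled into 0..(2**target_bits - 1).
    """
    group = len(bins) // target_bins
    # Sum neighbouring bins together to reduce the sample count.
    coarse = [sum(bins[i * group:(i + 1) * group]) for i in range(target_bins)]
    peak = max(coarse) or 1  # avoid division by zero on an empty frame
    # Rescale to the reduced bit depth; spikes flatten, the shape survives.
    scale = (2 ** target_bits) - 1
    return [c * scale // peak for c in coarse]
```

With these (made-up) numbers, each measurement shrinks to 64 single-byte values, small enough to push over the websocket several times per second.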
Ok - I see. Then the interface will also need to show a folder list (with preview / editing options and perhaps metadata viewer - respective to data type.) Do you also expect / hope to send whitelisted commands via some kind of mock-shell?
Wow - I didn't know that that was one of the deliverables.
It isn't currently (well, the camera can capture pictures of course - but it can't serve them over http yet), but it could be useful in the future.
Wow - I didn't know that that was one of the deliverables. What do you mean by "etc."? Yes, we can serve from any folder. We would need to investigate the possibility of sym-linking a "DCIM" type of folder, but there isn't any real reason why it shouldn't work.
Can we also easily serve files (like images, etc.) from the camera's internal Linux user space to clients without lighttpd (for photography-like applications)? If yes, then I guess we could consider getting rid of lighttpd.
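For what it's worth, the Python standard library alone can already serve a directory from user space, which suggests lighttpd is not strictly required for this. A minimal sketch (the directory path and port are illustrative):

```python
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

def serve_directory(directory, port=8080):
    """Serve the given directory over plain HTTP using only the stdlib."""
    # The `directory` keyword of SimpleHTTPRequestHandler needs Python 3.7+.
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = ThreadingHTTPServer(("0.0.0.0", port), handler)
    server.serve_forever()  # blocks; run in a thread or separate process

# e.g. serve_directory("/var/media/DCIM", port=8080)  # hypothetical path
```

Whether this is acceptable on the camera depends on the resource and security constraints discussed elsewhere in this thread.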
As far as security is concerned, you are probably right about sending and receiving flatbuffers. We don't want the DAEMON to unexpectedly crash.
Here is how we could even serve http with libwebsockets, which would help to do the automatic upgrade to ws:// - it is public domain licensed.
Don't you think there is a bigger security problem when allowing websockets to run permanently in the daemon? That was also a reason for the separation of daemon and server. The daemon should just have a slim communication layer, like flatbuffers. This would also allow different types of servers to communicate with the daemon without opening security holes.
Mar 11 2018
There will be a minimum of three connections with the client device, perhaps four.
We could also have multiple clients, i.e. one webremote and one CLI (in case we are merging the server and the control daemon).
So I have "implemented something" - the frontend for the WebRemote - and worked with the team today to design a spec. But I would still like @jatha to show some evidence that suggests that "the camera has plenty of resources". I find this hard to believe (it's only a relatively low-powered dual-core ARM!), and only benchmarks running while the camera is filming and executing scripts could change my opinion. (Or alternatively, a video of an htop screen.)
Of course - usually you have the whole reverse proxy cruft because you don't know who / how many clients you will have. The luxury behind this WebRemote as we are building it is that we will at most have exactly one client, who has "properly" identified itself with a UUID. We don't care about the rest of the universe.
Sorry for being unclear. When writing "REST server", I actually wasn't thinking about REST but about exposing an interface with web standards. We could also replace any REST with websockets.
And furthermore, the only purpose of having lighttpd is to serve the initial html/js/css to the WebRemote. After a ws:// connection has been made we can even turn it off. (Technically speaking, we could even just use netcat to serve the folder to the browser.)
Can we get rid of the REST server entirely? I was under the impression that we actually agreed to do websockets...
Hm... I don't really get why we would want to disable the REST server / the control daemon separately. If we don't want to expose HTTP to the outside, we could prevent this by binding the REST server to a local address and doing reverse proxying with lighttpd. In this case, disabling lighttpd would disable accessibility from outside.
Connection to the daemon is done via UNIX domain sockets; a flatbuffers package is then sent over it. The split is intentional, so we can deactivate them separately (as asked for by @Bertl).
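That transport could be sketched like this (the socket path and the 4-byte length-prefix framing are assumptions for illustration, not necessarily the daemon's actual framing):

```python
import socket
import struct

DAEMON_SOCKET = "/tmp/axiom_daemon.sock"  # hypothetical path

def send_packet(sock, payload):
    """Send one flatbuffers packet, prefixed with its 4-byte little-endian length."""
    sock.sendall(struct.pack("<I", len(payload)) + payload)

def recv_packet(sock):
    """Receive one length-prefixed packet from the socket."""
    header = _recv_exact(sock, 4)
    (length,) = struct.unpack("<I", header)
    return _recv_exact(sock, length)

def _recv_exact(sock, n):
    """Read exactly n bytes, looping over short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("daemon closed the socket")
        buf += chunk
    return buf

def connect_to_daemon():
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(DAEMON_SOCKET)
    return sock
```

The explicit length prefix matters because SOCK_STREAM delivers a byte stream, not message boundaries, so the receiver must know where each flatbuffer ends.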
FYI: This is what we are working on for the specification of the C&C. It is neither finished, nor valid JSON or anything else.
"split the control daemon and rest server" -> that would be the approach with websockets, which I prefer and we are investigating right now.
Moreover, what is the transport mechanism for the flatbuffers? Network? Unix sockets? Files? FIFOs?
Why do we split the control daemon and REST server into two different programs? Couldn't the control daemon directly expose some kind of web-compatible API (websockets / REST) that the webapp and the CLI would use?
Mar 10 2018
Mar 8 2018
Mar 4 2018
@malita With HDMI know-how we mean that you have a basic understanding of how HDMI works, how the data is encoded and transported between source and sink, and what the building blocks of an HDMI image are... Tim gives a nice overview of HDMI here: https://media.ccc.de/v/33c3-8057-dissecting_hdmi
Hi,
I am from the University of Peradeniya, Sri Lanka, and I would like to engage in the project. Can you please explain what is meant by "HDMI know-how" in the prerequisites section?
Correct cmv_hist3 link: https://github.com/apertus-open-source-cinema/beta-software/tree/master/software/cmv_tools/cmv_hist3
Mar 1 2018
Great! I take it you mean this flavor of libwebsockets: https://libwebsockets.org/
We can test by replacing Pistache with libwebsockets, but it will take some time (maybe on the weekend), as preparations for GSoC and packing for my move to a new city are currently ongoing.
Here is some information about websockets vs. ajax
Feb 28 2018
@jatha Have you set Travis CI up?
That sounds great!
As a new Beta is finally connected to the server, I could test again whether I can finally set gain through the daemon. If it works, then more usage cases can be implemented and tested. This is necessary before we know what we need to supply to the camera; most of the work should be done by the daemon, also using pre-defined data (stored in binary files) like FPGA bitstreams.
As far as the API goes, I would like to suggest versioning it according to a reference standard that documents each REST call, its expectations and all values. The current state (at T865) is what I would consider to be V0 - because it is not systematically standardised. As soon as everything is "written in stone", I would propose promoting the API to V1.
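One lightweight way to carry such a version marker is a prefix on every route; a hypothetical sketch (the path names are invented, not the actual T865 calls):

```python
API_VERSION = "v1"  # bump only when the documented contract changes

def versioned(path):
    """Prefix an API path with the current version, e.g. /v1/settings/gain."""
    return "/%s/%s" % (API_VERSION, path.lstrip("/"))

# Clients pin the version they were written against, so promoting
# the API to v2 later cannot silently break an old WebRemote build.
```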
No complete reference yet, as it was created while developing. I have no direct access to the hardware, so it takes more effort to test.
Yes, I am often looking into Lab and IRC.
Andrej, are you tracking this conversation too?
Is there a complete list of all current and valid REST package requests?
Cool - I saw that referenced in the C code, but didn't know how to expect / construct it from the JS.
There is a thing missing which I forgot to mention: REST sends a JSON package; the format is described in https://lab.apertus.org/T865.
I totally understand the issue with node running on the box, and that is why I suggested the C library for websockets.
I created an overview block diagram with @BAndiT1983 of the current situation which hopefully provides some insight:
Should we make an issue here https://github.com/apertus-open-source-cinema/beta-software/issues about registering the available components and middleware routes in order to match the interface expectations as detailed here:
- One thing to keep in mind is that we should get a confirmation from the camera back to the remote that a particular setting/command was applied successfully.
- Also, can we currently push commands from the camera to the WebRemote without polling?
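A common pattern for that confirmation is to tag each command with a unique id and have the camera echo it back in an acknowledgment, so the remote can match acks to commands. A generic sketch, not the actual WebRemote protocol:

```python
import itertools
import json

_next_id = itertools.count(1)

def make_command(setting, value):
    """Remote side: build a command message carrying a unique id."""
    return json.dumps({"id": next(_next_id), "set": setting, "value": value})

def make_ack(command_json, ok=True):
    """Camera side: answer a command with an ack referencing the same id."""
    cmd = json.loads(command_json)
    return json.dumps({"ack": cmd["id"], "ok": ok})

def confirms(command_json, ack_json):
    """Remote side: check that an ack confirms the command we sent."""
    ack = json.loads(ack_json)
    return json.loads(command_json)["id"] == ack["ack"] and ack["ok"]
```

Over a websocket the same channel also answers the polling question: the camera can push unsolicited messages at any time, no request needed.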
Sounds good.
Feb 27 2018
Also, there is a small issue: ./run_image.sh dev/microzed-image-1.3 runs run_image.sh, which is not in the project. Maybe it should be ./runQEMU.sh?
There are still issues with the build scripts: most importantly, the sha256 checksums do not match for Xilinx. (./guest-images/dev/microzed-image-1.3/build.sh, which should be ./guest-images/dev/microzed-image-1.4/build.sh, given the patch at https://github.com/apertus-open-source-cinema/axiom-beta-qemu/tree/ffe128c9cbff62e75bd0c5d44ef1f914c393fd8f)
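Mismatches like this are easy to diagnose by recomputing the checksum locally; a small stdlib helper (the file name and expected digest in the comment are placeholders, not the real Xilinx values):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the sha256 of a file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the checksum recorded in the build script, e.g.:
# assert sha256_of("downloads/xilinx.tar.gz") == "<expected-hex-digest>"
```

If the recomputed digest differs from the one in build.sh, either the download was corrupted/updated upstream or the script pins a stale checksum.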
Feb 26 2018
Logs from LinuxMint 18.3
The following is a log of a build on Manjaro Community 17.1.5 (Arch):
https://pastebin.com/GZ136Ure
I took the local-storage route for the vue.js version of the interface that I built. Here is the flow:
Legal issues? If you mean JetBrains Webstorm - there aren't any legal issues. The EAP version is their prerelease software and it is entirely legal and free to use - without registration. Otherwise I wouldn't have shared it in the VM I built.
Aha, so you want to risk legal issues? VSCode is widely accepted, even by the Linux community.
Yeah, but I really avoid Microsoft at every level in the stack - and I won't be installing visual studio anything on linux. Webstorm is simply the best because the entire environment is built for the kind of engineering needed by web developers...
To bypass problems with licenses, you can use VSCode, which has a ton of plugins for different areas, including Node, Vue and so on.
I just made you all a VM snapshot for Oracle VM VirtualBox. Its current state has:
I also first had the issue that all the JS and CSS files were reported as 404 by the Python SimpleHTTPServer even though they were all in place...
Will do it later if the new commits are still not working; I am at work now and cannot access my private machine.
I really didn't want to sound condescending; I hope you don't have that impression. In the future it would be excellent if you could file a normal issue report on GitHub that has:
JS was enabled, but the browser refused to apply a CSS file with a cryptic name (seems like a hash). I am not new to web development; I do it daily in my regular job (although it's mainly GWT and Java).
And pull the repo again - there seem to have been a few files in the dist folder that weren't added to the repo yesterday.
Ok... what security measures? Have you enabled JavaScript?