AXIOM WebRemote: Where/How to save data?
Open, Normal, Public

Description

How should we save parameters for the WebGUI?

Things like preset values, button layout (once it's customizable) and user settings.

We could save them in the camera, but then different people would overwrite each other's settings.
Or should we save them locally (e.g. in browser cookies)?

Thoughts?

sebastian created this task.Feb 6 2018, 7:12 PM
sebastian updated the task description.
sebastian raised the priority of this task from to Normal.
sebastian assigned this task to BAndiT1983.
sebastian added a project: AXIOM Beta Software.
sebastian moved this task to Control daemon on the AXIOM Beta Software board.
sebastian added a subscriber: sebastian.
RexOr added a subscriber: RexOr.Feb 6 2018, 10:24 PM
sebastian updated the task description.Feb 6 2018, 11:13 PM

@BAndiT1983 suggested using http://tutorials.jenkov.com/html5/local-storage.html
I will look into that, it seems like a good alternative to cookies.

sebastian renamed this task from ControlGUI: Where/How to save data? to AXIOM WebRemote: Where/How to save data?.Feb 24 2018, 5:35 PM

I took the local-storage route for the Vue.js version of the interface that I built. Here is the flow (a rough code sketch follows the list):

  • Defaults are assumed by the "virgin state" of the WebRemote on load to populate the interface.
  • If there is anything in web storage, it is pulled into the interface and a command is sent to the AXIOM to change the settings.
  • On every setting change the web interface is immediately updated, local storage is updated and a POST request is made to the server.
  • It is possible to reset to defaults.
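
Roughly, that flow boils down to something like this (just a sketch; the setting names and the /api/settings endpoint are placeholders, not the actual camera API):

// Sketch of the load / change / reset flow (placeholder names throughout)
const DEFAULTS = { gain: 1, shutterAngle: 180 };

function loadSettings() {
  // fall back to defaults when nothing has been stored yet ("virgin state")
  const stored = localStorage.getItem('webremote-settings');
  return stored ? { ...DEFAULTS, ...JSON.parse(stored) } : { ...DEFAULTS };
}

function applySetting(key, value) {
  const settings = loadSettings();
  settings[key] = value;
  localStorage.setItem('webremote-settings', JSON.stringify(settings));
  // push the change to the camera as well
  fetch('/api/settings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ [key]: value }),
  });
}

function resetToDefaults() {
  localStorage.removeItem('webremote-settings');
}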

Please see this issue and leave a comment about the Endpoint construction:
https://github.com/KinoKabaret/AXIOM-WebRemote/issues/5

Sounds good.

One thing to keep in mind is that we should get a confirmation back from the camera to the remote that a particular setting/command was applied successfully.

Also, can we currently push commands from the camera to the WebRemote without polling?

  • One thing to keep in mind is that we should get a confirmation back from the camera to the remote that a particular setting/command was applied successfully.
  • Also, can we currently push commands from the camera to the WebRemote without polling?

Both of these issues would be resolved with websockets. In fact, using ws:// is much better than sending POST requests. This was not something I could build without guidance, but it is trivial once the team has decided on a communication protocol running on the camera. Socket.io is a bit heavy, but this would be a drop-in solution: https://github.com/MetinSeylan/Vue-Socket.io

If node.js can run on the AXIOM, then this would be a lighter solution:
https://github.com/socketio/engine.io

If it can't then we should consider using (and extending) this: https://github.com/mortzdk/Websocket (a C lib for websockets)
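
Whichever library ends up on the camera, the browser side of a ws:// connection needs very little code. A minimal sketch (the URL and message shape are assumptions, not the real API):

// Sketch of the browser side: send changes and receive pushed updates
const socket = new WebSocket('ws://axiom.local/ws'); // placeholder address

socket.addEventListener('open', () => {
  // a setting change goes out over the socket instead of a POST request
  socket.send(JSON.stringify({ module: 'gain', value: 2 }));
});

socket.addEventListener('message', (event) => {
  // the camera can push confirmations and state without the client polling
  console.log('camera says:', JSON.parse(event.data));
});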

Should we make an issue here https://github.com/apertus-open-source-cinema/beta-software/issues about registering the available components and middleware routes in order to match the interface expectations as detailed here:

sebastian added a comment.EditedFeb 28 2018, 4:42 PM

Together with @BAndiT1983 I created an overview block diagram of the current situation which hopefully provides some insight:

https://docs.google.com/drawings/d/18JjDpwX0aVS4sCGY7sPS5dM0RJI9Jc0x9uBt2KGNXgA/edit

The main concern with having node.js on the camera is resource/performance related. The Beta has two 800 MHz ARM cores and 1 GB of shared memory (most of it is used by the FPGA for image processing), so not a lot of resources. That's the main reason why the control daemon and REST server are all C/C++ so far.

nothingismagick added a comment.EditedFeb 28 2018, 4:49 PM

I totally understand the issue with node running on the box, which is why I suggested the C library for websockets.

Your document is correct. However, we could use a little trick to "fake" websockets without adding another library: delay the REST server's reply to the POST/PUT request until it has resolved in the control daemon. We could use this reply to update the WebRemote.

Making a request once every 10 seconds (in a debounced setTimeout loop) to /api/status would also be a way to "fake" a websocket...
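
That fallback could look roughly like this (the /api/status endpoint and the 10-second interval come from the text above; the response handling is a placeholder):

// Sketch of the debounced polling fallback: the next poll is only scheduled
// once the previous reply (or failure) has come back
function pollStatus() {
  fetch('/api/status')
    .then((res) => res.json())
    .then((status) => {
      // update the WebRemote state from the camera's reply
      console.log('status:', status);
    })
    .catch((err) => console.warn('status poll failed:', err))
    .finally(() => setTimeout(pollStatus, 10000));
}

pollStatus();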

One thing is missing which I forgot to mention: REST sends a JSON package; the format is described in https://lab.apertus.org/T865.

Cool - I saw that referenced in the C code, but didn't know what to expect or how to construct it from the JS.

Is there a complete list of all current and valid REST package requests?

Andrej, are you tracking this conversation too?

https://lab.apertus.org/T931

Yes, I look into the Lab and IRC often.

There is no complete reference yet, as it was created while developing. I have no direct access to the hardware, so testing takes more effort.

As far as the API goes, I would like to suggest versioning it according to a reference standard that documents each REST call, its expectations and all values. The current state (at T865) is what I would consider V0, because it is not systematically standardised. As soon as everything is "written in stone", I would propose promoting the API to V1.

But it's just a suggestion.

Now that a new Beta is finally connected to the server, I can test again whether I can finally set gain through the daemon. If it works, then more use cases can be implemented and tested. This is necessary before we know what we need to supply to the camera, but most of the work should be done by the daemon, also using pre-defined (stored in binary files) things like FPGA bitstreams.

That sounds great!

I watched the talk at 32c3 and one of the audience members asked a really good question. I am bringing it up because I think it's a great idea and something that could easily be implemented within documentStorage in the WebRemote scope: "preset files" - or, in practice, a collection of JSON that the API receives either in parallel or in series. This way we can guarantee that the initialization phase of the camera ends with either a preset (or the last settings)...
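
For example, such a preset could be a small JSON document kept in the WebRemote's storage and applied in series on startup (all field names, values and the endpoint below are illustrative, not the real API):

// Hypothetical shape of a "preset file"
const daylightPreset = {
  name: 'daylight-exterior',
  settings: [
    { module: 'gain', value: 1 },
    { module: 'shutterAngle', value: 180 },
    { module: 'whiteBalance', value: 5600 },
  ],
};

// Apply the entries one after another so the camera's initialization
// ends in a known state (a preset or the last-used settings)
async function applyPreset(preset) {
  for (const entry of preset.settings) {
    await fetch('/api/settings', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(entry),
    });
  }
}

applyPreset(daylightPreset);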

We can test by replacing Pistache with libwebsockets, but it will take some time (maybe on the weekend), as preparations for GSoC and packing for my move to a new city are currently ongoing.

Great! I take it you mean this flavor of libwebsockets: https://libwebsockets.org/

I too am quite busy for the next 10 days, but once you can confirm that libwebsockets is serving the ws:// interface I will take a few hours and retool the frontend to reflect this in the code, push to my git repo and update the VM accordingly (i.e. make a node server that mimics the responses that the AXIOM would be providing).

New version as a result of today's discussion:

anuejn added a subscriber: anuejn.Mar 11 2018, 5:28 PM

Why do we split the control daemon and REST server into two different programs? Couldn't the control daemon directly expose some kind of web-compatible API (websockets / REST) that the webapp and the CLI would use?

Moreover, what is the transport mechanism for the FlatBuffers? Network? Unix sockets? Files? FIFOs?

"split the control daemon and rest server" -> that would be the approach with websockets, which I prefer and we are investigating right now.

FYI: This is what we are working on for the specification of the C&C. It is neither finished nor valid JSON.

/**
 * module registration
 **/
"modules" : {
  $moduleName : {
    "description" : String,
    "values" : Array,
    "which" : "filelocation" // this should be an absolute path to the binary / memory location
  } ...
}

/**
 * ws:// WebRemote registration with the DAEMON; either it gets an answer, or it receives error code "500"
 **/
"message:WebRemote:whoami" : {
  "sender" : UUID, // WebRemote's UUID / cli wrapper
  "modules" : modules.all,
  "status" : one.of(messages.status)
}

/**
 * ws:// DAEMON response to whoami
 **/
"message:WebRemote:whoami" : {
  "DAEMON_UUID" : UUID,
  "ACCESS" : one.of(messages.access)
}

/**
 * ws:// control communication from the WebRemote
 **/
"message:WebRemote:CMD" : {
  "sender" : UUID, // WebRemote's UUID / cli wrapper
  "module" : $moduleName,
  "value" : one.of($moduleName.values),
  "timestamp" : timestamp
}

/**
 * ws:// communication from the DAEMON
 **/
"message:DAEMON" : {
  "sender" : UUID, // DAEMON's UUID
  "access" : one.of(messages.access),
  "module" : $moduleName,
  "response" : one.of(messages),
  "timestamp" : timestamp
}

/**
 * messages
 **/
"messages" : {
  "access" : one.of(messages),
  "status" : ["online", "working", err],
  "success" : $moduleName + " succeeded",
  "error" : "error: " + err,
  "malformed" : "command " + cmd + " malformed",
  "shutdown" : "shutting down, see ya"
}

For future reference you can find the current state here:
github gist
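
For illustration, the registration and a command from that draft could look like this as JSON on the WebRemote side (UUIDs and values are placeholders; the keys mirror the draft above, which is itself not final):

// Sketch of the draft's whoami / CMD payloads (placeholder values)
const whoami = {
  'message:WebRemote:whoami': {
    sender: '00000000-0000-0000-0000-000000000000', // WebRemote's UUID
    modules: 'all',
    status: 'online',
  },
};

const setGain = {
  'message:WebRemote:CMD': {
    sender: '00000000-0000-0000-0000-000000000000',
    module: 'gain',
    value: 2,
    timestamp: Date.now(),
  },
};

// both would go out over the ws:// connection, e.g. socket.send(JSON.stringify(setGain)),
// and the DAEMON would answer with its UUID, access level and a response message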

Connection to the daemon is done via UNIX domain sockets; a FlatBuffers package is then sent over it. The split is intentional, so that we can deactivate them separately (this was requested by @Bertl).

Hm... I don't really get why we would want to disable the REST server / the control daemon separately. If we don't want to expose HTTP to the outside, we could prevent this by binding the REST server to a local address and doing reverse proxying with lighttpd. In this case, disabling lighttpd would disable accessibility from outside.

Another question: how does the FlatBuffers protocol work? Are we going to write some question object to the socket and get some answer object (like with HTTP), or will there be a more message-bus-like publish/subscribe model?

IMO, it could be smart to implement an HTTP / websockets based API directly inside the control daemon to reduce software complexity. Otherwise we would have to specify two separate protocols which carry exactly the same data, and build a translator between them (the REST server).

Can we get rid of the REST server entirely? I was under the impression that we actually agreed to do websockets...

Otherwise the WebRemote is going to have to do either long-polling or timeout polling to get information (like how much storage space is left, what the battery status looks like, etc.).

nothingismagick added a comment.EditedMar 11 2018, 7:09 PM

And furthermore, the only purpose of having lighttpd is to serve the initial HTML/JS/CSS to the WebRemote. After a ws:// connection has been made we can even turn it off. (Technically speaking, we could even just use netcat to serve the folder to the browser.)

With hyper-aggressive caching via a cache.manifest there would never be a need to use lighttpd / netcat again...
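
Such a cache.manifest could be as simple as this (file names are placeholders; bumping the version comment is what forces clients to re-download):

CACHE MANIFEST
# WebRemote v0.1 - change this line to invalidate the cache

CACHE:
index.html
js/webremote.js
css/webremote.css

NETWORK:
*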

Sorry for being unclear. When writing "rest server", I actually didn't think about REST but about exposing an interface using web standards. We could also replace any REST with websockets.

And furthermore, the only purpose of having lighttpd is to serve the initial HTML/JS/CSS to the WebRemote.

Theoretically yes, but this is not the way websockets are intended to work. Most of the time you put them behind some kind of reverse proxy to have them on the same port and origin.

Of course - usually you have the whole reverse proxy cruft because you don't know who / how many clients you will have. The luxury of this WebRemote, as we are building it, is that we will have at most exactly one client, which has "properly" identified itself with a UUID. We don't care about the rest of the universe.

We could also have multiple clients... i.e. one WebRemote and one CLI (in case we are merging the server and the control daemon).

Anyway, I don't think the reverse proxy thing has anything to do with handling multiple clients. If you can handle multiple clients with your websockets lib behind a reverse proxy, you can also do it without one, because the reverse proxy is normally not doing any kind of aggregation. You reverse proxy only so that you have to expose a single service to the outside world.

There will be a minimum of three connections with the client device, perhaps four.

Connection 0 with the client will be via onboard wifi. This really has nothing to do with the web stack per se - it is a deeper layer of connectivity and out of scope for this discussion.

Connection 1 will only be for a single, unique WebRemote client. It will happen via netcat or lighttpd and it serves to negotiate the ws:// channel. This is a first come, first served approach. If the ws:// socket stays alive for more than 1 minute (for example), this interface will be torn down. If the ws:// socket fails, then this connection is built back up.

Connection 2 is the actual control session via websockets, and it negotiates control and response with the DAEMON (in the best case) using the libwebsockets that @BAndiT1983 proposed. The first client to present a valid WebRemote UUID to the DAEMON over the ws:// wins the election. After this successful negotiation, the PID of Connection 1 is sigkilled.

Connection 3 is SSH access to the APERTUS - but again, this is out of scope for the WebRemote, though not for the DAEMON, which will also be fully callable / scriptable over SSH / bash scripting.

Don't you think there is a bigger security problem when allowing websockets to run permanently in the daemon? That was also a reason for the separation of daemon and server. The daemon should just have a slim layer, like FlatBuffers, for communication. This would also allow different types of servers to communicate with the daemon without opening security holes.

As far as security is concerned, you are probably right about sending and receiving FlatBuffers. We don't want the DAEMON to unexpectedly crash.

Can we also easily serve files (like images, etc.) from the camera-internal Linux user space to clients without lighttpd (for photography-like applications)? If yes, then I guess we could consider getting rid of lighttpd.

Should I update the drawing?

Wow - I didn't know that that was one of the deliverables. What do you mean by "etc."? Yes, we can serve from any folder. We would need to investigate the possibility of sym-linking a "DCIM" type of folder, but there isn't any real reason why it shouldn't work.

Speaking of etc., in the mock-up there is also a histogram. Will this histogram be rendered as an image and sent to the client, or will the data points be sent so the client renders it? Both are possible.

sebastian added a comment.EditedMar 12 2018, 8:14 PM

Wow - I didn't know that that was one of the deliverables.

It isn't currently (well, the camera can capture pictures of course - but not serve them over HTTP yet) but it could be useful in the future.

What do you mean by "etc."?

Just thinking out loud: metadata-related files, calibration-related stuff, config files; and when the AXIOM Beta can in the future also record footage internally, we could even offer videos for download through the webserver.

Speaking of etc., in the mock-up there is also a histogram. Will this histogram be rendered as an image and sent to the client, or will the data points be sent so the client renders it? Both are possible.

Currently we have a camera-internal C program called cmv_hist3 that outputs histogram data on the command line with all kinds of options: https://wiki.apertus.org/index.php/AXIOM_Beta/Manual#Image_Histogram_Data

Ok - I see. Then the interface will also need to show a folder list (with preview / editing options and perhaps a metadata viewer, depending on data type). Do you also expect / hope to send whitelisted commands via some kind of mock shell?

If we can stream hist3 data, we can definitely show a near-realtime histogram in the browser - the entire packet (uncompressed and full-bandwidth) would be 16,384 bits, but really we only need the R, G, B, GB row values - right? We also won't really need the "exact" resolution of the histogram, so we could also decimate it to a 512 line-height, unless decimation requires additional cycles. We could also send an array of 12-bit binary words (in this case all are = 4096 (actually 4095 because we start at 0)):

binary_R_G_B_GB = [111111111111,111111111111,111111111111,111111111111]

Or just send the values if that is what we get natively:

num_R_G_B_GB = [4095,4095,4095,4095]

My feeling is that if the histogram updates every second it should be fine, and that is well within the capabilities of the WebRemote's interface (especially if we can offload it to the GPU). The question is if that is feasible on the hardware side. The less processing we can get away with in the camera, the better. Right?

I would also suggest we try decimating the histogram's number of samples and bit depth drastically and try sending multiple measurements per second. After all, as a filmmaker you are not interested in the details of the spikes; you rather want to see a general value distribution and how it changes when opening or closing a lens aperture, for example. I guess testing GPU acceleration on smartphones for drawing these could be worth a try.
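
A rough decimation sketch along those lines, whether it runs on the camera or in the browser (all numbers are illustrative; the input is assumed to be one array of 4096 12-bit bin counts per channel):

// Merge neighbouring bins and rescale counts to a fixed line height
function decimateHistogram(bins, targetBins = 128, targetHeight = 512) {
  const factor = Math.ceil(bins.length / targetBins);
  const peak = Math.max(...bins, 1);
  const out = [];
  for (let i = 0; i < bins.length; i += factor) {
    const sum = bins.slice(i, i + factor).reduce((a, b) => a + b, 0);
    out.push(Math.round((sum / factor) * (targetHeight / peak)));
  }
  return out; // small enough to send several times per second and draw client-side
}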

nothingismagick added a comment.EditedMar 12 2018, 11:00 PM

I am not worried about the display of these values in realtime using a Vue value store (modified by websocket information). I think we could get to 30 fps with less than 100 ms latency - if hist3 can pump the info out that fast - and still manage the wifi and the websockets...

Is there some way that you could make a capture from the camera where the values are like the following on a line-by-line basis?

like:

#!/bin/bash

# Append a millisecond timestamp followed by one histogram sample, forever.
while true; do
  date +%s%3N >> recording.hist3    # epoch time in milliseconds
  ./cmv_hist3 -h >> recording.hist3 # one line of histogram values
done

to get a file with lines something like this:

1520891842295
25,395,45,405
1520891843033
35,495,45,45
1520891843687
55,495,45,45
...

PS - you'll need to brew install coreutils on Mac to get at those pesky nanoseconds - and change date to gdate - Linux shouldn't have that problem. I would take this file and pump its lines through to the WebRemote at as close to the speed at which you recorded it as possible.
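
Replaying such a recording at roughly the captured pace could look like this (the file name and the send() placeholder are assumptions; it relies on the alternating timestamp/values line format shown above):

// Node sketch: schedule each sample relative to the first recorded timestamp
const fs = require('fs');

const lines = fs.readFileSync('recording.hist3', 'utf8').trim().split('\n');
const firstTimestamp = Number(lines[0]);

for (let i = 0; i < lines.length; i += 2) {
  const recordedAt = Number(lines[i]); // millisecond epoch from `date +%s%3N`
  const sample = lines[i + 1];         // the histogram values line
  setTimeout(() => send(sample), recordedAt - firstTimestamp);
}

function send(sample) {
  // placeholder: forward the sample to the WebRemote, e.g. over ws://
  console.log(sample);
}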