It'll do for the first month.
All Stories
Apr 9 2018
Apr 5 2018
Apr 4 2018
This seems super interesting; I see it as a potential tool to help tweak the color science of the camera.
Apr 2 2018
Duplicate to T989
Mar 30 2018
this electronically controlled ND filter built in would be the perfect solution for two reasons:
Mar 22 2018
Awesome, that's perfect!
Scrolling back, I noticed you said ‘cool idea’ about the extra bottom hole a while back. It didn’t make it into the current revision though.
Snap, totally forgot about the Compact Shell! Which is kind of a cage, I guess. And cages are... You need to buy a cage or extra rails for more mounting points with every new camera you buy these days. It would be great if cameras had all the mounting points themselves. Saves both weight and money.
Hi Iwan.
And maybe add an extra 1/4” hole on the bottom end of the camera, right underneath the 1/4” hole on the right hand side while you’re at it. A rail there could allow my wooden side handle to be attached.
Thanks guys. For the camera to work without a cage, I really need at least two 1/4” holes on either side, vertically aligned. So the right hand side has got that covered, the left hand side doesn’t. One extra hole down below would allow a Nato rail to be attached and then we’d be home free.
Mar 21 2018
Mar 20 2018
Mar 17 2018
Mar 16 2018
Mar 14 2018
Perhaps we should change the title of this thread - or move it to "websockets". REST doesn't seem to make sense anymore...
Mar 13 2018
Mar 12 2018
I am not worried about the display of these values in realtime using a Vue value store (modified by websocket information). I think we could get to 30fps with less than 100ms latency - if the hist3 can pump the info out that fast - and still manage the wifi and the websockets...
I would also suggest we try decimating the histogram's number of samples and bit depth drastically and sending multiple measurements per second. After all, as a filmmaker you are not interested in the details of the spikes; you'd rather see a general value distribution and how it changes when opening or closing the lens aperture, for example. I guess testing GPU acceleration on smartphones for drawing these could be worth a try.
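To make the decimation idea concrete, here is a minimal sketch of merging adjacent bins and reducing bit depth before pushing a measurement over the websocket. The function name, bin counts, and the stand-in input are illustrative assumptions, not actual camera code.

```python
# Hypothetical sketch: decimate a 4096-bin histogram (e.g. 12-bit data)
# down to 64 bins of 8-bit values, shrinking the payload sent per update.

def decimate_histogram(bins, out_bins=64, out_depth=8):
    """Merge adjacent bins, then rescale counts to out_depth bits."""
    step = len(bins) // out_bins
    merged = [sum(bins[i * step:(i + 1) * step]) for i in range(out_bins)]
    peak = max(merged) or 1           # avoid division by zero on empty frames
    scale = (1 << out_depth) - 1      # 255 for 8-bit output
    return [count * scale // peak for count in merged]

full = [i % 97 for i in range(4096)]  # stand-in for raw hist3 output
small = decimate_histogram(full)
print(len(small), max(small))         # → 64 255
```

At 64 bytes per update, even 10-20 measurements per second would be negligible next to the rest of the wifi traffic.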
Ok - I see. Then the interface will also need to show a folder list (with preview / editing options and perhaps a metadata viewer, depending on data type). Do you also expect / hope to send whitelisted commands via some kind of mock shell?
Wow - I didn't know that that was one of the deliverables.
It isn't currently (well the camera can capture pictures of course - but not serve them over http yet) but could be useful in the future.
Wow - I didn't know that that was one of the deliverables. What do you mean by "etc."? Yes, we can serve from any folder. We would need to investigate the possibility of sym-linking a "DCIM" type of folder, but there isn't any real reason why it shouldn't work.
Can we also easily serve files (like images, etc.) from the camera's internal Linux user space to clients without lighttpd (for photography-like applications)? If yes, then I guess we could consider getting rid of lighttpd.
As far as security is concerned, you are probably right about sending and receiving flatbuffers. We don't want the DAEMON to unexpectedly crash.
Here is how we could even serve http with libwebsockets, which would help to do the automatic upgrade to ws:// - it is public domain licensed.
Point of note: Blackmagic Design's Duplicator 4K is an SD card duplicator with built-in realtime H.264 and H.265 encoding, yet Blackmagic Design's DaVinci Resolve doesn't support H.265.
Don't you think there is a bigger security problem in allowing websockets to run permanently in the daemon? That was also a reason for the separation of daemon and server. The daemon should just have a slim communication layer, like flatbuffers. This would also allow different types of servers to communicate with the daemon without opening security holes.
Mar 11 2018
There will be at minimum three connections with the client device, perhaps four.
We could also have multiple clients, i.e. one WebRemote and one CLI (in case we are merging the server and the control daemon).
So I have "implemented something" - the frontend for the WebRemote - and worked with the team today to design a spec. But I would still like @jatha to show some evidence suggesting that "the camera has plenty of resources". I find this hard to believe (it's only a relatively low-powered dual-core ARM!) and only benchmarks run while the camera is filming and executing scripts could change my opinion. (Or alternatively, a video of an htop screen.)
Of course - usually you have the whole reverse proxy cruft because you don't know who / how many clients you will have. The luxury behind this WebRemote as we are building it is that we will at most have exactly one client, who has "properly" identified itself with a UUID. We don't care about the rest of the universe.
Sorry for being unclear. When writing "REST server", I actually wasn't thinking about REST but about exposing an interface via web standards. We could also replace any REST with websockets.
And furthermore, the only purpose for having lighttpd is to serve the initial html/js/css to the WebRemote. After a ws:// connection has been made we can even turn it off. (Technically speaking, we could even just use netcat to serve the folder to the browser.)
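To illustrate how little is actually needed to serve the initial bundle, here is a stdlib-only sketch of a throwaway static file server. The directory layout, port, and function name are assumptions for illustration; this is a stand-in for "lighttpd only serves the initial html/js/css", not project code.

```python
# Minimal sketch: serve the WebRemote's static html/js/css with nothing but
# the Python standard library, then shut it down once ws:// is connected.
import functools
import http.server
import threading

def serve_static(directory, port=8000):
    """Serve `directory` over HTTP in a background thread."""
    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=directory)
    httpd = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd  # caller can call httpd.shutdown() after the ws:// upgrade
```

Usage: `srv = serve_static("/path/to/webremote"); ...; srv.shutdown()`.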
Can we get rid of the REST server entirely? I was under the impression that we actually agreed to do websockets...
Hm... I don't really get why we would want to disable the rest server / the control daemon separately. If we don't want to expose http to the outside, we could prevent this by binding the rest server to a local address and doing reverse proxying with lighttpd. In this case, disabling lighttpd would disable accessibility from outside.
The connection to the daemon is done via UNIX domain sockets, then a flatbuffers package is sent over it. The split is intentional, so we can deactivate them separately (this was requested by @Bertl).
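A sketch of the transport side only: a UNIX domain socket carrying length-prefixed binary messages. The payload here is plain bytes standing in for a serialized FlatBuffer, and the socket path, framing, and function names are assumptions, not the daemon's actual protocol.

```python
# Hypothetical client-side sketch: send one framed binary message to the
# daemon over a UNIX domain socket and read one framed reply back.
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes, since recv() may return short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("daemon closed the socket")
        buf += chunk
    return buf

def send_message(path, payload):
    """Send a 4-byte little-endian length prefix plus payload; return reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(struct.pack("<I", len(payload)) + payload)
        size, = struct.unpack("<I", recv_exact(s, 4))
        return recv_exact(s, size)
```

The length prefix is what lets the daemon validate message size before parsing, which fits the "slim layer that must not crash" concern raised above.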
FYI: This is what we are working on for the specification of the C&C. It is neither finished, nor valid JSON or anything else.
"split the control daemon and rest server" -> that would be the approach with websockets, which I prefer and we are investigating right now.
Moreover, what is the transport mechanism for the flatbuffers? Network? Unix sockets? Files? FIFOs?
Why do we split the control daemon and rest server into two different programs? Couldn't the control daemon directly expose some kind of web-compatible API (websockets / REST) that the webapp and the CLI would use?
Mar 10 2018
Mar 8 2018
Would it be worth building a docker or vagrant profile to unify the build and development process?