It isn't obsolete at all. If you participated in such a project you would understand exactly why the work is divided this way. Costs, time, and quality all need to orbit around that breakdown. The distinction is not blurry to anyone with experience in the field. If anything, the division is much clearer than it was twenty years ago.
Jan 21 2016
Dec 30 2015
...while I'd like to add that this whole "offline" / "online" distinction seems to be becoming more and more an obsolete style of thinking, in a similar way as the distinction between "editing" and "compositing", or "editing" and "vfx", became more and more blurry, artificial, and beside the point.
Dec 29 2015
It's one of the goals on the list. I haven't forgotten our previous discussions.
I sincerely hope both of you are considering an offline pipeline while theorizing. Offline to picture lock. Focus on that.
Nov 29 2015
I've prepared a longer answer, but stashed it away in a text file for now, as I've noticed that more benchmarks for several parts are necessary.
In T609#9032, @BAndiT1983 wrote:It takes 60-70ms to decode raw array at the moment (source: https://www.apertus.org/axiom-beta-hello-world-article-may-2015).
Nov 22 2015
Ah, a Trekkie. I thought you were more of an original series guy rather than Next Generation. ;)
Nov 20 2015
In T609#9017, @BAndiT1983 wrote:By the way, what's Lumiera's point of view on OpenCL or similar things? Maybe also on OpenMP?
Nov 17 2015
In T609#9017, @BAndiT1983 wrote:... there are a couple of questions that have been bothering me for some days.
- How would Lumiera display 16bit per channel images (48bit ones)?
In T609#9017, @BAndiT1983 wrote:...and thought about multithreaded processing and almost slapped myself on the forehead as I remembered my former processing attempt for Bayer images. I used multiple arrays, one of them for RGGB (or similar patterns), and did complex modulo calculations, awkward iteration and so on. But then I realized that every thread could process a row of RG or GB (or any other combination), controlled by a couple of parameters, and write it to an output array. I see no problem, as the threads would never write to the same position, at least if there is no mistake in the code.
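Just to make the row-parallel idea concrete, here is a minimal sketch of how disjoint row ranges could be handed to worker threads. The naive per-pixel handling, the RGGB assumption and all names are made up for illustration and are not OC code:

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Very naive RGGB handling: copies each raw value into the channel its CFA
// position belongs to and leaves the other two channels at 0. Real
// demosaicing would interpolate the missing channels.
static void processRows(const uint16_t* raw, uint16_t* rgb,
                        int width, int rowBegin, int rowEnd)
{
    for (int y = rowBegin; y < rowEnd; ++y)
        for (int x = 0; x < width; ++x)
        {
            uint16_t v = raw[y * width + x];
            uint16_t* out = rgb + 3 * (y * width + x);    // interleaved R,G,B
            bool evenRow = (y % 2) == 0;
            bool evenCol = (x % 2) == 0;
            if (evenRow && evenCol)         out[0] = v;   // R
            else if (!evenRow && !evenCol)  out[2] = v;   // B
            else                            out[1] = v;   // G
        }
}

// Splits the image into disjoint row ranges; no two threads ever write to
// the same output position, so no locking is required.
void debayerParallel(const uint16_t* raw, uint16_t* rgb,
                     int width, int height, int threadCount)
{
    std::vector<std::thread> workers;
    int rowsPerThread = (height + threadCount - 1) / threadCount;
    for (int t = 0; t < threadCount; ++t)
    {
        int begin = t * rowsPerThread;
        int end = std::min(height, begin + rowsPerThread);
        if (begin >= end) break;
        workers.emplace_back(processRows, raw, rgb, width, begin, end);
    }
    for (auto& w : workers)
        w.join();
}
```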
Nov 16 2015
Hi, and sorry for not answering; I was away from home, but not from the project.
Nov 10 2015
...if there is some overarching state you need for processing a frame (or even multiple frames), we could go the old and proven "Handle / PImpl" route. As you probably know, you can build very nice opaque handle types on top of shared_ptr. On the API, we'd only expose the processing interface for the user. Thus, in order to do anything, the client needs to get such a handle instance from the OC library. Which then, opaquely, points to the state data you need internally to keep track of some stuff. And since the handle is ref-counting, clean-up works magically and airtight.
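To illustrate the Handle / PImpl route described above, here is a rough sketch of an opaque handle built on shared_ptr; the class and member names (ProcessingHandle, Impl, processFrame) are invented for the example and don't reflect any actual Lumiera or OC API:

```cpp
// ---- public header (what the library would expose) ----
#include <memory>

class ProcessingHandle
{
public:
    static ProcessingHandle open(/* e.g. clip or session descriptor */);

    void processFrame(int frameNumber);    // the only thing clients can do

private:
    struct Impl;                           // defined only inside the library
    std::shared_ptr<Impl> impl_;           // ref-counted -> automatic clean-up
    explicit ProcessingHandle(std::shared_ptr<Impl> p) : impl_(std::move(p)) {}
};

// ---- library-internal implementation file ----
struct ProcessingHandle::Impl
{
    // whatever per-clip / per-session state is needed across frames
    int framesProcessed = 0;
};

ProcessingHandle ProcessingHandle::open()
{
    return ProcessingHandle(std::make_shared<Impl>());
}

void ProcessingHandle::processFrame(int frameNumber)
{
    // ... actual decoding / processing would go here ...
    ++impl_->framesProcessed;
    (void)frameNumber;
}
```

The client only ever sees ProcessingHandle; the state behind it is cleaned up automatically when the last copy of the handle goes out of scope.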
In T609#8994, @BAndiT1983 wrote:What granularity is on your mind for the interaction between OC and Lumiera?
In T609#8994, @BAndiT1983 wrote:I've played with Docker some time ago, but don't know how it helps to stay "portable" between Linux versions. How would it benefit Lumiera or OC?
Nov 9 2015
No, C please! ;) Don't want to wake that dinosaur, Linux is still full of that stuff.
I've played with Docker some time ago, but don't know how it helps to stay "portable" between Linux versions. How would it benefit Lumiera or OC?
Nov 8 2015
In T599#8979, @BAndiT1983 wrote:I would suggest that we start from a very simple base to evaluate which direction to take. Lumiera should be the master and responsible for the session, at least for now.
In T599#8979, @BAndiT1983 wrote:
- What about some sort of event system to signal that processing is finished, or would you just wait until the processing thread is done?
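For what it's worth, both options from that question can be sketched in plain C++11; the processClip() job below is hypothetical and only stands in for whatever OC would actually run:

```cpp
#include <chrono>
#include <functional>
#include <future>
#include <thread>

int processClip() { /* ... heavy work ... */ return 42; }

int main()
{
    // Option 1: a future -- the caller can block on get(), or poll with wait_for().
    std::future<int> result = std::async(std::launch::async, processClip);
    while (result.wait_for(std::chrono::milliseconds(100)) != std::future_status::ready)
    {
        // keep the UI responsive, update a progress bar, ...
    }
    int value = result.get();
    (void)value;

    // Option 2: an explicit completion callback, fired from the worker thread.
    std::function<void(int)> onDone = [](int v) { (void)v; /* notify the GUI */ };
    std::thread worker([onDone] { onDone(processClip()); });
    worker.join();
    return 0;
}
```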
Nov 7 2015
In T609#8980, @BAndiT1983 wrote:I will answer on the other topic as soon as I have reflected on it. But here are two things which I would use for memory-mapped IO...
In T609#8980, @BAndiT1983 wrote:DEB packaging will be an important point, so I appreciate your offer and will refer to it some time in the future.
Nov 6 2015
DEB packaging will be an important point, so I appreciate your offer and will refer to it some time in the future. To automate the process I've planned to use Jenkins as a build system (local, for now). It would also accomplish a lot of other things, like performing automatic test runs and ensuring clean builds, because a developer machine is usually messy as hell with all the libs and other dependencies.
I would suggest that we start from a very simple base to evaluate which direction to take. Lumiera should be the master and responsible for the session, at least for now.
incidentally, while we're on that topic: I have meanwhile figured out quite well how DEB packaging works. So I offer to help with that task, i.e. I can help with or take care of getting either just that lib or OpenCine as a whole packaged properly for Debian and derivatives (Ubuntu, Mint, SteamOS :-D ), and I'll show you how to keep that stuff manageable.
fully agreed.
Nov 5 2015
Workflow-A is a must, but is trivially supported: all it needs on the side of OpenCine is to provide some kind of "Project" or persistent "Session" (see the sketch after this list), i.e. something to store
- all the shots belonging together
- the settings for these shots, so they can be reproduced / tweaked later.
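As a rough illustration of what such a persistent Session could hold, here is a minimal sketch; all field names and the idea of serializing to JSON/XML are assumptions, not an agreed format:

```cpp
#include <map>
#include <string>
#include <vector>

// Settings needed to reproduce / tweak a shot later.
struct ShotSettings
{
    double exposureCompensation = 0.0;
    std::string debayerMethod   = "bilinear";
    std::map<std::string, double> extra;     // room for future parameters
};

// One shot: where the raw footage lives plus how it should be processed.
struct Shot
{
    std::string sourcePath;
    ShotSettings settings;
};

// The persistent "Project" / "Session": all the shots that belong together.
struct Session
{
    std::string name;
    std::vector<Shot> shots;

    // Persisting could be as simple as writing this out as JSON or XML;
    // omitted here since the on-disk format is still an open question.
};
```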
My preference is a shared lib, as OC is already using OCcore for processing. Lumiera can be equipped with an OC plugin which uses the lib. One thing bothers me the most: I'm using Qt5 core features for disk IO (folder/file enumeration and similar) in the OCcore module. Is it a problem for you to have QtCore as a dependency for an OC adapter (at least for now)?
some notes on what IMHO would be the next steps...
Oct 27 2015
to summarise some of the previous discussion: we propose the following ideal-typical workflows as a point of reference for detailed analysis and planning.
Hello all!
Oct 21 2015
**Canon 5D (DSLR)**
**Black Magic Pocket Cinema Camera**
Oct 20 2015
**unknown cam** ?? (probably Canon Camcorder)
**Sony Consumer Cam AVCHD codec** (super shit, a lot of software struggles with it)
**Sony A7s**
Oct 14 2015
Oct 10 2015
Oct 8 2015
Sep 30 2015
Sep 25 2015
Adjusted control names. Will be set to "Resolved" after layout changes are completed.
Sep 23 2015
Current drive list state in Linux (adjusted visuals):
Sep 20 2015
References to find optimal block size for transfer:
Obsolete after splitting OC into multiple applications.
Sep 18 2015
Checked under Windows 8.1 and LinuxMint 17.2.
Sep 16 2015
After reading http://askubuntu.com/questions/561368/why-does-my-qquick-application-crash-on-ubuntu-14-04 changed constructor signature from
No longer relevant.
OC was restructured and this item is no longer relevant.
CMake scripts were adjusted according to the new modular structure of OC.
Still investigating, but it seems to be a Qt-related bug. Under Windows it works without hassle.
Sep 14 2015
Sep 13 2015
After some on and off OC development (regular job causes usual lack of time), here is the current state of removable drive list:
Sep 6 2015
Sep 3 2015
Tested and committed a bug fix.
Aug 31 2015
Used (maybe misused a little bit) GetVolumeInformation() to get only mounted removable drives. Seems to work for now. Doing some tests before uploading the current source code.
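For reference, a Win32 sketch of that approach (treating a failing GetVolumeInformation() call as "no medium mounted"); error handling and the exact filtering are simplified, and this is not the committed OC code:

```cpp
#include <windows.h>
#include <string>
#include <vector>

// Enumerate drive letters, keep only removable drives, and use
// GetVolumeInformation() as a "is a medium actually inserted?" probe.
std::vector<std::wstring> mountedRemovableDrives()
{
    std::vector<std::wstring> result;
    DWORD mask = GetLogicalDrives();                 // bit 0 = A:, bit 1 = B:, ...
    for (int i = 0; i < 26; ++i)
    {
        if (!(mask & (1u << i)))
            continue;

        std::wstring root = std::wstring(1, static_cast<wchar_t>(L'A' + i)) + L":\\";
        if (GetDriveTypeW(root.c_str()) != DRIVE_REMOVABLE)
            continue;

        wchar_t label[MAX_PATH + 1] = {0};
        wchar_t fs[MAX_PATH + 1]    = {0};
        // Succeeds only when a volume is actually mounted in the reader.
        if (GetVolumeInformationW(root.c_str(), label, MAX_PATH,
                                  nullptr, nullptr, nullptr, fs, MAX_PATH))
            result.push_back(root);
    }
    return result;
}
```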
Implemented and tested it successfully. Next step, before commit, is to retrieve only mounted drives as Windows provides all card reader drives (mounted or not).
Aug 30 2015
Jul 28 2015
Ok, I feel for you. Haven't had such a fast RAID yet. Copying one evening's worth of ARRIRAW skater footage (only OpenGate 75fps and 120fps 2.8K 16:9, no normal speed, all at maximum), about 1.5 TB, in a few hours was kind of hard on a 4-bay RAID5 Thunderbolt array (it was a fun project and no second target, except for slow single USB3 drives, was available).
yoyotta has an option to re-read checksums on the source to verify data integrity, which sounds like a good solution to me.
Copying more than one job/file in parallel is not a good solution, because then your frames are not stored physically in sequence and you can get problems with read speed when you are playing them back, because they could be fragmented. Especially with 4K frames. I use a 12-bay SAS storage on set as main storage, which provides about 1200 MB/s read/write, and 6-bay Thunderbolt/USB3 Areca RAIDs which provide about 700 MB/s.
With picture sequences, as with ARRIRAW or DNG etc., you have a high load of I/O operations, as that's a huge bunch of single files.
An option for in-camera MXF wrapping around the raw files would be very nice!
Jul 27 2015
In T449#7481, @RainerFritz wrote:Hi !
irieger:
For example, on a shooting day where you have to deal with, let's say, 1.5 TB of data, reading the source twice costs a lot of time.
You also need to do an "optical" check of the material (hashes cannot see problems with the picture) and then process it, for example to offline material, as well.
So you end up reading the source files on set at least three times; with no parallel processing in the copy process that would be a fourth time.
Nope, my script reads twice: once for the copies (reading into a buffer and writing to all destinations in parallel), and once for the checksums.
irieger:
For example, on a shooting day where you have to deal with, let's say, 1.5 TB of data, reading the source twice costs a lot of time.
You also need to do an "optical" check of the material (hashes cannot see problems with the picture) and then process it, for example to offline material, as well.
So you end up reading the source files on set at least three times; with no parallel processing in the copy process that would be a fourth time.
All copy programs I used at work, including yoyotta, do this. Think of the fact that you can have a multi-camera show where you need to handle two or three times the data I mentioned above. It could be less, but I would highly suggest doing parallel processing on the copy task.
Just for my understanding, would you do it like this or in some other way:
From my perspective the copy and verification task should run as: read once, write to multiple destinations in parallel.
I played with Python a little bit and read the files in binary mode, so the source checksums could be generated in parallel to the copy task, which is essential.
The slowest destination will then set the copy speed... I thought of a dynamic buffer size per destination write speed, or an SSD as an additional buffer, to compensate for too-large differences in storage speed.
The buffer is then written in parallel to all destinations. When the files are read back for destination checksum generation, it would be fine if there were a choice of how many files to process in parallel. With picture sequences it would be nice to process at least 4 files in parallel per backup. If there is a container format in the future, processing them one after another could be faster. I used md5deep/hashdeep very often, which is very fast.
After verification, if there are mismatches in checksums or missing files etc., it should ask to recopy/reverify those files.
A copy report in PDF format would also be nice, with or without thumbnails from the beginning, middle and end of each clip.
I uploaded a sample here
A text file with all checksums should be stored with every backup at the destinations, along with the report.
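A much-simplified sketch of the "read once, write to all destinations, hash while copying" idea discussed above; the trivial additive checksum only stands in for md5/xxHash, and buffer size and function names are arbitrary:

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Reads the source exactly once, writes each buffer to every destination,
// and updates a placeholder checksum from the same buffer while copying.
bool copyToAll(const std::string& source,
               const std::vector<std::string>& destinations,
               uint64_t& checksumOut)
{
    std::ifstream in(source, std::ios::binary);
    if (!in) return false;

    std::vector<std::ofstream> outs;
    for (const auto& d : destinations)
    {
        outs.emplace_back(d, std::ios::binary);
        if (!outs.back()) return false;
    }

    std::vector<char> buffer(8 * 1024 * 1024);        // 8 MiB read blocks
    uint64_t checksum = 0;
    while (in)
    {
        in.read(buffer.data(), buffer.size());
        std::streamsize n = in.gcount();
        if (n <= 0) break;
        for (std::streamsize i = 0; i < n; ++i)       // placeholder for md5/xxHash
            checksum = checksum * 131 + static_cast<unsigned char>(buffer[i]);
        for (auto& out : outs)                        // one read, many writes
            out.write(buffer.data(), n);
    }
    for (auto& out : outs)
        if (!out) return false;
    checksumOut = checksum;
    return true;
}
```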
Jul 17 2015
In T449#7312, @BAndiT1983 wrote:I know that everyone demands rsync or something similar, but e.g. librsync depends on Cygwin and I try to avoid many dependencies. Still searching for alternative libs before rolling my own implementation. OC doesn't need the network part of rsync, as Qt would be used for copying (or plain C++11), which should be portable. Also, OC is constrained to local drives (built-in or removable ones) for now. Later OC would send data over the local network (not the internet, not for now at least), but this should be similar to the "local" workflow inside the application.
I don't see rsync as a main need. Mostly it is a fast local transfer you need on set or so, not that kind of network transfer; or you use a network file system and can use the tool as if you were working locally.
Jul 16 2015
I know that everyone demands rsync or something similar, but e.g. librsync depends on Cygwin and I try to avoid many dependencies. Still searching for alternative libs before rolling my own implementation. OC doesn't need the network part of rsync, as Qt would be used for copying (or plain C++11), which should be portable. Also, OC is constrained to local drives (built-in or removable ones) for now. Later OC would send data over the local network (not the internet, not for now at least), but this should be similar to the "local" workflow inside the application.
Cool, I will keep an eye on your project and maybe find some time to help. Would be cool to finally have a nice data management tool. I have tried a few of the commercial ones and none of them really satisfied me, so there is some space to fill.
Jul 15 2015
Also very helpful and valid points. The decoupling of Backup is progressing and I'll move parts of the current layout over to the new OCBackup project (not committed yet).
Thanks for the input. For the "recopy" feature there were already plans to make something like one-way synchronization (see my last comment). I will evaluate it and the other things you mentioned as soon as I've finished decoupling the OC modules (as Troy suggested).
A few points that I see as important parts:
- 1:1 copy as mentioned earlier. (Maybe with an option to exclude .Spotlight, .Trash etc., which a Mac sadly creates directly when connecting a drive for the first time ...)
- Just have a list of targets where you can add one or multiple (not only two) targets
- Parallel copy: read from source to RAM and write to all target drives in parallel to speed up the process
- Parallel verification: some simple tools build a checksum file (or keep it in RAM) from the source and then check the targets. Skip this and do checksums of the source and all targets at once, then compare.
- A project mode where you can set the target drives and directory scheme for the project, which can be loaded as a starting point
- A variable-based directory scheme, so that you can have something like "/media/irieger/MY_EXTERNAL_DRIVE/OperationApertus/Footage/%Y-%m-%d/%camNo" (see the sketch below)
- An option to add another transfer to a queue that will be processed when the current one is finished
- An option to start a parallel file transfer (what I have in mind here is needing to copy a small card from the sound guy who wants to finish in the evening, without making him wait until all the RAW footage is backed up, or something similar)
It would be nice if the user could select a checksum method and the checksums were then written to a file for manual checking (for example an md5sum file that can be checked with md5sum -c $file). Checksum algorithms I'd really like to see are md5 and xxHash (https://code.google.com/p/xxhash/), which seems to be much faster, so it could be helpful when transferring huge amounts of 4K raw sequences.
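And a tiny sketch of how the variable-based directory scheme from the list above could be expanded; the placeholder set (%camNo plus strftime-style date codes) is just an example, not a proposed spec:

```cpp
#include <ctime>
#include <map>
#include <string>

// Expands a scheme like "/Footage/%Y-%m-%d/%camNo" into a concrete path.
std::string expandScheme(std::string scheme,
                         const std::map<std::string, std::string>& vars)
{
    // Project-specific variables such as %camNo first, so they don't clash
    // with strftime's own % codes.
    for (const auto& kv : vars)
    {
        const std::string key = "%" + kv.first;
        for (std::string::size_type pos = scheme.find(key);
             pos != std::string::npos;
             pos = scheme.find(key, pos + kv.second.size()))
            scheme.replace(pos, key.size(), kv.second);
    }

    // Then the date placeholders (%Y, %m, %d, ...) via strftime.
    char expanded[512];
    std::time_t now = std::time(nullptr);
    if (std::strftime(expanded, sizeof(expanded), scheme.c_str(),
                      std::localtime(&now)) == 0)
        return scheme;                       // buffer too small; give up gracefully
    return std::string(expanded);
}

// expandScheme("/Footage/%Y-%m-%d/%camNo", {{"camNo", "A001"}})
//   -> e.g. "/Footage/2015-07-15/A001"
```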
Jul 6 2015
I'd encourage this to be a standalone unit as it is with almost every other camera manufacturer.