Reading through the data sheet for the CMOSIS sensor, a couple of exciting features of the chip stand out.
Firstly, it says the max frame rate at 4K is 140fps in 12-bit (Whoa!), 300fps at 4K in 10-bit (Whoa!!), 528fps (Whoa!!!) in 12-bit via X/Y subsampling - which seems akin to the Half-4K Raw you get in the C500 - and 1048fps in 10-bit via X/Y subsampling (WHOA!!!! That's Phantom territory!). The possibilities here are very, very exciting - at least for the Gamma if not the Beta.
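Just to get a feel for what those frame rates mean downstream, here's some back-of-the-envelope data-rate arithmetic. The pixel count is my assumption (I'm using a 4096 x 3072 array here, which may not match the actual chip), so treat the numbers as illustrative only:

```python
# Rough raw data-rate arithmetic for the full-resolution modes.
# ASSUMPTION: a 4096 x 3072 pixel array - adjust for the real sensor.
WIDTH, HEIGHT = 4096, 3072
PIXELS = WIDTH * HEIGHT

def data_rate_gbps(fps, bits_per_pixel):
    """Raw sensor output in gigabits per second (no overhead counted)."""
    return PIXELS * bits_per_pixel * fps / 1e9

print(data_rate_gbps(140, 12))  # full-res 12-bit mode
print(data_rate_gbps(300, 10))  # full-res 10-bit mode
```

Even before the subsampled high-speed modes, that's tens of gigabits per second that the FPGA and storage path would have to swallow.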
The other interesting bit was this: "To maintain the same field of view but reduce the noise coming out of the sensor, a binning mode is implemented on the chip. This mode will average a number of pixels (at pixel level) to reduce the noise and data coming from the chip."
Reducing the noise coming off the chip should allow us to increase the dynamic range (by cleaning up the shadow stops). Obviously some effective resolution is lost - but since the chip is 4K, that still opens up the possibility of increased DR at HD resolutions. It is worth noting, though, that this mode would require a different OLPF in front of the sensor to avoid moiré.
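To make the noise argument concrete: averaging four pixels in a 2x2 block should cut uncorrelated read noise roughly in half (by 1/sqrt(4)). Here's a toy sketch on a synthetic flat frame - the noise figure is made up, it's just to show the statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat grey test frame with simulated read noise (sigma = 4, illustrative).
signal = np.full((512, 512), 100.0)
noisy = signal + rng.normal(0.0, 4.0, signal.shape)

# 2x2 binning: average each non-overlapping 2x2 block (pixel-level average).
binned = noisy.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(noisy.std())   # ~4: noise of the raw frame
print(binned.std())  # ~2: halved by averaging four pixels
```

Half the noise in the shadows is roughly an extra usable stop at the bottom end, which is where the DR gain at HD resolutions would come from.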
I'm also wondering if we can steal a few magic tricks from other camera manufacturers - specifically Arri, who have engineered an incredible camera largely through their ability to deal with sensor noise. As I understand it, they do this in part by black shading the chip between every exposure. Given the speed of the chip, I'm wondering if something like that could be programmed into the FPGA. Secondly, they talk about their 'dual gain architecture', which I'm guessing involves reading each pixel twice - once at low gain and once at high gain - and combining the data to increase DR. Again, I'm wondering if something similar could be programmed with this chip (although it does sound similar to the multiple-slope HDR mode already engineered into the chip).