This issue is to research options for an external autofocus system that can be added to any camera and (manual) lens.
There are several technologies for measuring the distance to an object (a rough sketch of the math behind two of them follows the list):
Main article: https://en.wikipedia.org/wiki/Range_imaging
- [[ https://en.wikipedia.org/wiki/Lidar | LIDAR ]]
- [[ https://en.wikipedia.org/wiki/Radar | RADAR ]]
- [[ https://en.wikipedia.org/wiki/Optical_flow | optical flow detection ]]
- Stereo image triangulation
- Coded aperture
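Purely for illustration, here is a small sketch of the math behind two of these approaches: a pulsed LIDAR converts the round-trip time of a light pulse into distance, and a stereo camera pair converts pixel disparity into depth. All functions and numbers below are made-up examples, not taken from any of the products listed here.

```
# Rough illustration of two range-measurement principles (example values only).

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Pulsed LIDAR / time-of-flight: the pulse travels to the object and back,
    so the distance is half the round-trip time times the speed of light."""
    return C * round_trip_time_s / 2.0

def stereo_distance(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo triangulation: depth = f * B / d, where f is the focal length in pixels,
    B the distance between the two cameras and d the disparity of a matched feature."""
    return focal_length_px * baseline_m / disparity_px

# A 40 ns round trip corresponds to an object roughly 6 m away:
print(tof_distance(40e-9))            # ~6.0 m
# 1400 px focal length, 10 cm baseline, 23 px disparity -> ~6.1 m:
print(stereo_distance(1400, 0.10, 23))
```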
LIDAR seems to be the most accurate of these technologies, which is why it is used in self-driving cars.
There are already some commercial LIDAR systems:
- [[ https://store.dji.com/de/product/ronin-3d-focus-system | DJI Ronin 3D Focus System ]] (159 €, range when filming a person is about 6 meters)
- https://www.youtube.com/watch?v=Y7UEHryq7zo
- https://www.youtube.com/watch?v=jtLODYh34uA&t=661s
- https://www.youtube.com/watch?v=DWhm_hpkuWY
- [[ https://www.blickfeld.com/de/firma/ | Blickfeld Cube ]] (made for industrial applications like self-driving cars)
- https://www.youtube.com/watch?v=VThDC1Tb8ZE
- [[ https://velodynelidar.com/products/velabit/ | Velodyne Velabit ]]
- [[ https://www.livoxtech.com/de/horizon | Livox Horizon ]] (€1,199)
- [[ https://www.lumotive.com/products | Lumotive ]]
- [[ https://www.embedded.com/open-lidar-api-aims-to-accelerate-software-defined-lidar-adoption/ | Open lidar API ]]
{F246586}
It would be awesome if we could build an open-hardware solid-state LIDAR. Maybe there are research papers we could implement, we could collaborate with open-hardware self-driving-car projects, or a university might be interested in such a project.
One practical use of such a system for video would be conference recording. Today a camera operator has to follow the speaker and keep them in focus. With a motorized camera mount (gimbal or robot arm) and LIDAR-driven focus this could be fully automated! A rough sketch of the focus part of such a loop follows below.
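Below is a minimal sketch of what the focus-follow loop could look like. It assumes a hypothetical `rangefinder.read_distance_m()` driver call, a hypothetical `focus_motor.move_to()` API, and a per-lens calibration table mapping object distance to focus-motor position; all names and values are placeholders, not an existing implementation.

```
import time
import bisect

# Calibration table mapping object distance (m) to the focus-motor position for
# one specific manual lens; it would have to be measured once per lens.
# (Hypothetical example values.)
CALIBRATION = [(0.5, 0), (1.0, 180), (2.0, 310), (4.0, 400), (8.0, 455), (1e9, 500)]

def focus_position_for(distance_m: float) -> int:
    """Linearly interpolate the calibrated distance -> motor-step table."""
    distances = [d for d, _ in CALIBRATION]
    i = bisect.bisect_left(distances, distance_m)
    i = min(max(i, 1), len(CALIBRATION) - 1)
    (d0, p0), (d1, p1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (distance_m - d0) / (d1 - d0)
    return round(p0 + t * (p1 - p0))

def focus_follow_loop(rangefinder, focus_motor, rate_hz: float = 30.0):
    """Continuously read the distance to the tracked subject and drive the lens."""
    while True:
        distance = rangefinder.read_distance_m()    # hypothetical LIDAR driver call
        if distance is not None:                    # None = no valid return (out of range)
            focus_motor.move_to(focus_position_for(distance))  # hypothetical motor API
        time.sleep(1.0 / rate_hz)
```

The same distance reading could also feed the gimbal / robot-arm controller so that framing and focus stay on the speaker without an operator.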