How will the OpenROV position and depth be reported?


Given that it is underwater and on a tether, how will the OpenROV's georeferenced position and depth be reported to the operator?

Will these OpenROV 3D tracks be stored?

Having a depth gauge would help prevent the user from going deeper than the case is rated for. A built-in depth auto-limit would be even better.


So far, OpenROV 2.3 is equipped with a live video feed for reference. We've got plans to integrate a compass and a pressure sensor for depth, but figured that would be an upgrade we could make later, with input from more of the community.

Do you have ideas about this? Please share!


Computing the 3D tethered offset from the boat, which I assume will have either RTK or DGPS positioning, would require an algorithm using a three-axis accelerometer and a three-axis direction sensor, similar to the Casio EX-H20G GPS camera.
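As a starting point for that accelerometer-plus-direction-sensor idea, here is a minimal sketch of a tilt-compensated compass: roll and pitch are estimated from the gravity vector (accelerometer at rest), and the magnetometer reading is rotated into the horizontal plane to get heading. The sensor axes and sign conventions here are assumptions (x forward, y right, z down toward gravity when level); a real IMU would need calibration and its own axis mapping.

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Heading in radians (0 = magnetic north) from raw
    accelerometer (ax, ay, az) and magnetometer (mx, my, mz)
    readings, assuming the vehicle is not accelerating."""
    # Roll and pitch from the gravity vector.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic field vector into the horizontal plane.
    mx_h = mx * math.cos(pitch) + mz * math.sin(pitch)
    my_h = (mx * math.sin(roll) * math.sin(pitch)
            + my * math.cos(roll)
            - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-my_h, mx_h)
```

With heading known, the ROV's position could then be offset from the boat's GPS fix by projecting the tether length along that heading and the measured depth.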

A side-scan sonar would be a great way to get detailed imagery of the bottom; the StarFish 450H has a very small transducer.


Starting simple is a great approach... that way alpha users can start building on top of a solid mechanical vehicle and test things out.

For tracking position, I think a gyroscope and accelerometer would be useful additions, as they help determine whether there is current drift, and whether the ROV is level and not being tilted by a shark :)

Maybe also a pitot tube to sense forward speed.

Also a pressure sensor (barometer) for sensing depth, and maybe a waterproof sonar to scan the bottom of the sea.


If you can put a small bubble compass within view of the camera, you get a really cheap compass and artificial horizon.


We definitely want to add both depth and heading sensors ASAP, but just haven't gotten around to that yet. The general plan is to use a depth sensor component of the kind normally found in a SCUBA diving computer, plus a simple digital compass chip. These components, coupled with the ability to sense motor power, could allow the user to do dead reckoning, but there are more advanced possibilities as well. The ROV also has a computer and webcam, so computer-vision techniques such as optical flow have a lot of potential. I've also been tinkering with using piezo elements to make a (very) rudimentary acoustic trilateration system. If someone wants to start experimenting with either of these advanced positioning methods, their work would be on the cutting edge of the technology.
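To illustrate the trilateration idea, here is a toy sketch of solving for 2D position from ranges to three beacons at known locations. It linearizes the three range equations by subtracting the first from the other two and solves the resulting 2x2 system. The beacon coordinates and ranges are made-up numbers; a real acoustic system would also need time-of-flight measurement, sound-speed calibration, and noise handling.

```python
def trilaterate(beacons, ranges):
    """2D position from ranges to three beacons at known (x, y).

    Subtracting the first range equation from the others turns the
    circle intersections into two linear equations, solved here by
    Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Beacons at three corners; true position (3, 4):
pos = trilaterate([(0, 0), (10, 0), (0, 10)],
                  [5.0, 65 ** 0.5, 45 ** 0.5])  # ~ (3.0, 4.0)
```

With hull-mounted piezo elements as the beacons (or transponders on the boat), the same math would give a relative fix that dead reckoning could be corrected against.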