Simple sonar help


#1

Is there a way to produce a simple sonar image (hardware side of things)? I’m used to the smaller sonar housings, but even those are massive for one of these ROVs.


#2

I have been thinking about sonar recently as well and found this product that looks very useful and compact.

https://www.lowrance.com/lowrance/type/castables/lowrance-fishhunter-3d/

It is a surface unit that connects to your device via Wi-Fi. I imagine it would be useful for mapping the bottom before a dive, or you may be able to use it during a dive as another source of navigation feedback. Seems kinda cool and not very expensive.


#3

@balder.matt Do you know if a GPS signal is required to be able to do the mapping? I’m working on a project to explore flooded mines and caves, and I would not be able to get a GPS signal inside either.

Thanks,
~Michael


#4

From what I understand of structure from motion (SfM), you do not need GPS to create a 3D mesh model from photos; your model just will not be placed accurately in real-world coordinates. As long as you have enough overlap between photos (70-80%), you will get a good outcome. It is also possible to use software like Pix4D to grab still frames from a video and process them into an orthomosaic map, 3D mesh, and point cloud.
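
To make that overlap number concrete, here is a rough back-of-the-envelope sketch of how far apart frame grabs can be for a target forward overlap. All the camera numbers below (field of view, distance, speed) are hypothetical, just to show the math:

```python
import math

def frame_spacing(fov_deg: float, distance_m: float, overlap: float) -> float:
    """Along-track spacing between frames for a target forward overlap.

    fov_deg:    camera horizontal field of view in degrees
    distance_m: distance from camera to the surface being imaged
    overlap:    desired overlap fraction (e.g. 0.75 for 75%)
    """
    # Width of the scene covered by one frame at that distance
    footprint = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    # Each new frame only needs to advance by the non-overlapping part
    return footprint * (1 - overlap)

def capture_interval(fov_deg, distance_m, overlap, speed_mps):
    """Seconds between still-frame grabs at a given ROV speed."""
    return frame_spacing(fov_deg, distance_m, overlap) / speed_mps

# Example: 80 deg FOV, 2 m from the wall, 75% overlap, 0.5 m/s
spacing = frame_spacing(80, 2.0, 0.75)            # ~0.84 m between frames
interval = capture_interval(80, 2.0, 0.75, 0.5)   # ~1.7 s between grabs
```

The takeaway is that at typical ROV speeds you need a frame every second or two, which is why pulling stills out of video works so well for this.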

Something I am eager to try is the use of a 360 camera mounted on Trident to use for mapping purposes. I have verified that Pix4D will support any spherical camera that supports the equirectangular format. The Insta360 One X is in this format. I will report back as soon as I get a chance to use it.

https://www.insta360.com/product/insta360-onex?gclid=Cj0KCQiA8_PfBRC3ARIsAOzJ2uoaLxIIPQ6i9VjqodAPYHPb4z4vL8EFVu29zpNJiZxYwT4XGLMtvlwaAn_vEALw_wcB
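
For reference, "equirectangular" just means viewing direction maps linearly to pixels: longitude spans the full image width and latitude the full height, which is why software can treat any spherical camera uniformly. A tiny sketch (the frame size below is hypothetical):

```python
def equirect_pixel(lon_deg, lat_deg, width, height):
    """Map a viewing direction (longitude, latitude in degrees) to a pixel
    in an equirectangular frame. Longitude -180..180 spans the width;
    latitude 90..-90 spans the height, both linearly."""
    x = (lon_deg / 360.0 + 0.5) * width
    y = (0.5 - lat_deg / 180.0) * height
    return x, y

# Hypothetical 5760x2880 frame: straight ahead lands at the image center,
# 90 degrees right / 45 degrees up lands three-quarters across, one-quarter down.
center = equirect_pixel(0, 0, 5760, 2880)    # (2880.0, 1440.0)
upper = equirect_pixel(90, 45, 5760, 2880)   # (4320.0, 720.0)
```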


#5

@balder.matt Thanks for responding.

Yes, I’m familiar with SfM. I started doing SfM back when I built the OpenROV 2.8, and have had great success. I use the Agisoft Photoscan software to render my models. Here is one of the earliest models I did.

The issue with doing SfM in the mines is lighting. Since there is no light other than the lights on the Trident, extreme shadowing makes accurate alignment of the images difficult to impossible, and getting a 3D mapping overview of the entire space of the flooded portion of the mine is virtually impossible.

This is why I have been looking into the very portable fish sonar, which should give me a more-than-adequate 3D map of the space below. Sonar does not require light, so the complete darkness of the mine is not an issue. I just need to make sure it does not need GPS coordinates to properly stitch together the sonar images it gets. I assume it does not, but I need to confirm this before I make any purchases.
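
If the unit does turn out to need GPS, one fallback for stitching is to tag each sonar ping with a dead-reckoned position built from the ROV's compass heading and estimated speed. A rough sketch of the idea (all numbers hypothetical):

```python
import math

def dead_reckon(fixes):
    """Place sonar pings by dead reckoning instead of GPS.

    fixes: list of (heading_deg, speed_mps, dt_s, depth_m) samples.
    Returns (x, y, depth) positions in a local frame starting at the origin,
    with x = east and y = north.
    """
    x = y = 0.0
    track = []
    for heading_deg, speed, dt, depth in fixes:
        h = math.radians(heading_deg)
        x += speed * dt * math.sin(h)  # east component of travel
        y += speed * dt * math.cos(h)  # north component of travel
        track.append((round(x, 2), round(y, 2), depth))
    return track

# Straight run due north at 0.5 m/s, one depth ping per second
pings = dead_reckon([(0, 0.5, 1.0, d) for d in (3.1, 3.3, 3.2)])
# → [(0.0, 0.5, 3.1), (0.0, 1.0, 3.3), (0.0, 1.5, 3.2)]
```

Dead reckoning drifts over time, of course, but over the scale of a single chamber it may be enough to tie the pings together into a coherent map.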

Thanks,
~Michael


#6

Very cool! It looks like you have been doing some great work. The first thought that comes to mind is to add additional self-powered LED arrays both above and below Trident. That may be of no use in murky conditions, though, or may even make the photos worse as particles reflect the light.

I like the idea of sonar as well. I do see the lack of precise position information about where the sonar data is being captured as a hurdle to overcome. You have probably already looked at some of the equipment Blue Robotics has, but they have some interesting GPS equipment: https://www.bluerobotics.com/product-category/sensors-sonars-cameras/

The fish finder I linked to before probably isn’t rated for any kind of depth, but its guts may be adaptable to something useful. My idea was to use it in conjunction with Trident in murky waters for search-and-recovery activities; being able to “see” Trident on a sonar screen could be very useful. I really don’t know if there is any downloadable output you can get from this sonar, because it is all app based.

This article looks pretty cool. I just glanced at it, but it mentions “sensor nodes” on the tether and the use of SLAM technology.

Sounds like a very cool project!


#7

Nice paper, Matt. I too have been working with my own acoustic components for various approaches. I also think short-distance lidar can add information to the visual images. I have been experimenting with a proximity sensor rated at 4 m in air (it will be less in water). The autonomous-car folks have been combining sensor data of different types to get better fidelity (a sensor-fusion approach).
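
As a toy illustration of the sensor-fusion idea (not any particular product’s algorithm), two independent range estimates can be combined by inverse-variance weighting, which is the one-dimensional core of what a Kalman filter update does. The numbers below are made up:

```python
def fuse_ranges(r_lidar, var_lidar, r_acoustic, var_acoustic):
    """Inverse-variance weighted fusion of two independent range estimates.

    Returns the fused range and its variance; the fused variance is always
    smaller than either input, which is the payoff of fusing sensors.
    """
    w1 = 1.0 / var_lidar
    w2 = 1.0 / var_acoustic
    fused = (w1 * r_lidar + w2 * r_acoustic) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return fused, var

# Lidar says 2.0 m (tight variance), acoustic says 2.6 m (noisy)
r, v = fuse_ranges(2.0, 0.01, 2.6, 0.09)
# Fused estimate leans toward the more trusted lidar: r ≈ 2.06 m, v = 0.009
```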

My current work is to integrate the Trident data stream with supplementary sensors using the DDS protocol and getting acoustic localization working to know where the ROV is.
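
For the acoustic localization piece, the classic approach is trilateration from measured ranges to fixed beacons. A minimal 2D sketch, assuming three beacons at known positions (all positions and ranges below are made up); subtracting the circle equations pairwise turns the problem into a small linear system:

```python
import math

def trilaterate(beacons, ranges):
    """Locate the ROV in 2D from ranges to three fixed acoustic beacons.

    beacons: three known (x, y) beacon positions; ranges: measured distances.
    Subtracting the circle equations pairwise gives a 2x2 linear system,
    solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    # Linearized system A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical setup: beacons at three corners of a 10 m square,
# true ROV position (3, 4); ranges computed from the truth for the demo.
beacons = [(0, 0), (10, 0), (0, 10)]
truth = (3.0, 4.0)
ranges = [math.dist(b, truth) for b in beacons]
x, y = trilaterate(beacons, ranges)  # recovers (3.0, 4.0)
```

In practice the ranges come from acoustic time-of-flight, so clock sync and sound-speed estimation end up being the hard parts, but the geometry itself is this simple.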

Jim