Structure from Motion for Citizen / Community Science and Marine Archaeology Documentation



Alternatively: a great way to document your favourite underwater sites using a simple GoPro

(A small reef site documented using Structure from Motion from about 70 images, with an inset showing the view from above with a diver for scale - Image by Huw Porter)

A while ago, I posted on the forums a bit of information about some of the work I had been playing with in relation to Structure from Motion.

Structure from Motion is a great way to document sites of interest, using multiple 2D images to produce stunning large-scale 3D results.

A number of us have collaborated to extend the original post and transform it into a guide, to aid others who may wish to do something similar with their projects, shorten the learning curve associated with the technique, and hopefully inspire others to document their local sites.

I would like to thank @Kevin_K, @Michael_Girard and Huw Porter for their help and assistance in putting together the guide, and I hope it is of assistance to others.

A PDF of the guide can be downloaded from OpenExplorer Guide to Structure from Motion Documentation.pdf (1.1 MB).

The document looks like this:

Hope this is of some assistance and encourages others to give it a try

Scott W


Going along with this document, I made an easy payload bay tray for my GoPro that I can use to take SfM or forward looking HD video. No results yet, but you get the idea.


I see you are using the killer thrusters from Blue Robotics like many OpenROVers, but I didn’t read any feedback about them. I just ordered some myself.



Here’s my build log with changes: OpenROV #1790


This is fantastic! I applaud your work. I’m so sad it’s taking me almost 1 year to get back to the site. So much has been happening!


Scott, Kevin, you have encouraged my classes and we are building SfM onto our ROV. Thanks for sharing your work.


This is some really exciting stuff, thanks for sharing Scott.
I’m definitely using this for our Cape Palos Wreck Documentation Project.
I would also like to feature your work during our workshop in Barcelona in April. I trust that’s ok?


@Jim_N Thanks Jim, it’s basically all about sharing the tips and tricks with everyone so we can all gain from each other’s advances, ideas and work.

@wkneipp I'd love to see what the class can do. Question to Brian @badevguru: does the new forum support picture galleries? E.g. a gallery of ROV modifications (I know there is a thread) and a gallery of SfM results?

@Roy_Petter_Dyrdahl_T Great to hear that you intend to use the technique. As an aside, this is really just the start of what you can then do - have a look at what @Michael_Girard has been doing via Blender.

I also know of a couple of cave diving guys who have been importing the base SfM model into the game engine Unity to allow people to interactively explore the site.

Feel free to use it in your workshop - it’s all about being open source / crowd science.



@Scott_W Once we have the ROV all completed, Scott, we will take a few pictures so you can have a preview of it. We still have the grab arm and water sampler to make, so once that happens I'm hoping to also upload a movie of its maiden exploration…


Please, can someone help me or explain: what is the process for lens correction with the GoPro?


Hi Alex

In short, don’t worry about it - none of the Gigapixel images here (cut down to allow faster loading) shot with a GoPro have had any external correction.

I have primarily used Agisoft PhotoScan; several other people on the forums have used various other pieces of software.

My experience is that with a GoPro no external or prior lens correction is required (even with the GoPro shooting at its widest, and hence most fisheye-affected, field of view).

Stealing heavily from a few posts on some of the photogrammetry forums:

Every photogrammetry package has a software lens correction built in, without which it could not process data to the desired degree of precision.

In a Structure from Motion approach such as PhotoScan, a self-calibration/auto-calibration is run to automatically define the camera’s interior orientation. The latter is stored for each image in the intrinsic parameter matrix K. Since PhotoScan can solve for four radial lens distortion parameters (k1, k2, k3, k4) and two decentring lens distortion parameters (p1, p2), the total lens distortion can be modelled very accurately and much better than most tools such as PTlens. In addition to the abovementioned parameters, several other camera characteristics can be calibrated such as affinity in the image plane, consisting of aspect ratio (or squeeze) and skew (or shear). However, zero skew (i.e. perpendicular axis) and a unit aspect ratio (i.e. photodetector width to height equals 1) can be assumed for any digital frame camera. The latter explains why I would never ask PhotoScan to optimise for aspect and skew.
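The radial (k1–k4) and decentring (p1, p2) parameters mentioned above belong to the Brown-Conrady style distortion model that PhotoScan and most SfM tools solve for. As a minimal sketch (not PhotoScan's actual code; the coefficient values below are made up for illustration), here is what that model does to a normalised image coordinate:

```python
import numpy as np

def distort(xy, k1, k2, k3, k4, p1, p2):
    """Apply a Brown-Conrady style lens model: four radial terms (k1-k4)
    plus two decentring/tangential terms (p1, p2), as solved by most
    SfM packages during self-calibration.

    xy: (N, 2) array of normalised image coordinates (x/z, y/z).
    Returns the distorted coordinates.
    """
    x, y = xy[:, 0], xy[:, 1]
    r2 = x ** 2 + y ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3 + k4 * r2 ** 4
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    y_d = y * radial + p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)

# With all coefficients zero the model is the identity (no distortion);
# a positive k1 pushes points radially outward (barrel-type behaviour).
pts = np.array([[0.1, 0.2], [-0.3, 0.4]])
undistorted = distort(pts, 0, 0, 0, 0, 0, 0)
```

During self-calibration the software estimates these six coefficients (plus focal length and principal point in the matrix K) directly from the image set, which is why no external correction of the GoPro footage is needed beforehand.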

As I indicated, I use Agisoft PhotoScan, but I am also strongly of the belief that the other software packages in use do the lens correction internally.

It’s all digital - just give it a try and see what results you get.

Scott W


Ocean71 has used his OpenROV to gather a nice series of images using a GoPro over at OpenExplorer, which is well worth a visit.

Ocean71 has documented a wreck by collecting multiple images and processing them into a wonderful Structure from Motion model, which he has then uploaded to Sketchfab.

This gives a really good indication of what can be done and how it can be used to share the experience of an underwater site

@badevguru do you know how to get the embed code to work? Thanks mate.


@Scott_W you’re welcome. When there are new sites that support embedding I have to add them to a whitelist. Done!


Thank you for this very useful guide!
You mentioned in the guide that you take a photo every 1 or 2 seconds in order to have good coverage; however, there is no mention of the speed of the ROV. How fast was it moving (on average)? What thrust factor did you use while taking these images?

Thank you


Hi Achraf

I have no fixed rules, and over time I have used a few different settings without dialling anything specific in (it is a bit of a balance: enough images to get good results, but not so many that the processing time becomes a pain).

If I was to suggest anything, it would be starting at 1 frame per second (I have also been playing with 1 frame every 0.5 seconds, but in low light the images can be a bit poor) at a power setting of 2 to 3, and you should get something out.

I have had it work at speeds up to say 25 metres/minute at 1 frame per second (height above the bottom is important so you still get good overlap coverage - I fly about 3 metres off the bottom), but I have also had poor results at quite slow speeds (I think there wasn’t a lot of light and the images were average).
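To get a feel for why those numbers work, you can estimate the along-track overlap between consecutive frames from altitude, field of view, speed and frame interval. This is a back-of-the-envelope sketch, and the 120° along-track field of view is my rough assumption for a GoPro on its wide setting, not a measured value:

```python
import math

def along_track_overlap(altitude_m, fov_deg, speed_m_per_min, interval_s):
    """Fraction of each frame's ground footprint shared with the next frame.

    altitude_m: height above the seabed
    fov_deg: camera field of view along the direction of travel
    speed_m_per_min: ROV speed in metres per minute
    interval_s: seconds between frames
    """
    # Width of seabed seen in one frame, from simple pinhole geometry.
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    # Distance travelled between frames.
    spacing = (speed_m_per_min / 60.0) * interval_s
    return max(0.0, 1.0 - spacing / footprint)

# Scott's example: ~3 m off the bottom, ~25 m/min, 1 frame per second,
# assuming ~120 degrees of along-track FOV -> roughly 96% overlap.
print(f"{along_track_overlap(3, 120, 25, 1):.0%}")
```

SfM software generally wants well over 60% overlap between neighbouring images, so at these speeds the frame rate is comfortably generous; the bigger practical limits, as noted above, are light and image quality.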

Have a go and let me know how you get on with it - it’s only digital so it hasn’t cost you anything.



I would recommend you try a land-based SfM project before attempting an underwater one. I used SfM to create a 3D model of my ROV as a first project, and it took several attempts before I started to understand what the software needs to do its thing.

The other thing I would recommend is to experiment with a small project before taking on a larger one. There’s a fair bit of technique involved but when done right the end result is impressive.

Incidentally, I’ve done some work with SfM in poor visibility (less than 1 metre) and it is doable, but it requires much more work.


I’ve been working on a method using SfM that pulls directly from the ROV stream (you can also feed it a file). It does distortion correction as well. I’ve got some good initial results, but they’re pretty sensitive to noise. It’s also written in Python, so it’s a bit slow. I think I should have the first version available for folks to play around with this week.


@heidtn - Are you planning on compiling it for a particular operating system?


No, I’ll be releasing the source once I have a version that isn’t a great big mess of code. It’s based on OpenCV, so it should work on Mac/Windows/Linux etc. Initially it will just be Python for ease and speed of development, but I’ll move to C++ once I get the basic stuff working. I’ll link to the GitHub repo when it’s ready.


That sounds fantastic :beers: I love hearing that this can be integrated with the ROV camera system.

I know it’s early days, but I can’t help asking a couple of questions: is it pseudo-live or is it post-processing? Would it take multiple camera systems (either stereo or panoramic)? Is it using camera pose from the IMU or just pure grunt?

Either way great work and hope you can get the "basic stuff working"