Structure from Motion for documentation in Citizen / Community Science and Marine Archaeology



Hi Jose

Have a look in particular at his SMS Dresden Anchor and Chain model (a long, linear model; sorry for not putting the direct link, but he asks to be contacted before sharing or embedding)

With the distortion problem, are the generated models curving up or down over the length of the model? If so, that may be an effect of the calibration, and you may have to look at the auto-calibration of the lens.


Hi Jose

I knew when I read your comment that I had seen something about this a while ago.

Have a look at

Broad terms

So in short, be a bit “wonky” with the camera: a few different slants left to right and back and forward as you go, plus a bit of change in depth, could well help.



Hello @Scott_W

That’s exactly our situation. During the video capture process we had in mind generating a 2D photomosaic with Hugin, so there was no non-parallel motion that could provide a higher angular deviation (some is still present due to natural diving motion).

I have been exploring the use of surface-fitting algorithms (assuming our model is near-flat ground along the 50 m transect), and 3D model rectification using GCPs and GIS tools (QGIS and GRASS). However, I would prefer to perform the rectification during the bundle adjustment phase. I must add that VisualSFM documentation is, uhm, let’s say… near-zero for this process, and I was about to give up and focus on updating the camera parameters until you posted that article!! The final part shows a method to estimate radial distortion parameters aiming at “zero doming”… I’ll be giving it a try in the next couple of weeks.
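As a rough illustration of the surface-fitting approach (this is not VisualSFM's internal method, just a sketch under the near-flat-ground assumption): fit a low-order polynomial to the reconstructed heights along the transect axis and subtract it, which removes a dome/bowl trend from the point cloud. Function names and the synthetic data below are purely illustrative.

```python
def fit_quadratic(xs, zs):
    # Least-squares fit of z = a*x^2 + b*x + c via the 3x3 normal equations.
    n = len(xs)
    Sx = sum(xs); Sx2 = sum(x * x for x in xs)
    Sx3 = sum(x ** 3 for x in xs); Sx4 = sum(x ** 4 for x in xs)
    Sz = sum(zs)
    Sxz = sum(x * z for x, z in zip(xs, zs))
    Sx2z = sum(x * x * z for x, z in zip(xs, zs))
    A = [[Sx4, Sx3, Sx2], [Sx3, Sx2, Sx], [Sx2, Sx, n]]
    rhs = [Sx2z, Sxz, Sz]
    # Gaussian elimination (no pivoting; fine for this well-behaved system).
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= f * A[i][k]
            rhs[j] -= f * rhs[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (rhs[i] - sum(A[i][k] * coeffs[k] for k in range(i + 1, 3))) / A[i][i]
    return coeffs  # (a, b, c)

def detrend_transect(xs, zs):
    # Subtract the fitted trend surface, flattening a doming artifact.
    a, b, c = fit_quadratic(xs, zs)
    return [z - (a * x * x + b * x + c) for x, z in zip(xs, zs)]
```

Applied to heights sampled along the transect axis, the residuals are the flattened seabed relief; the same idea extends to a bivariate fit for a full 2D footprint.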

Once we solve this issue, we aim to generate georeferenced DEMs based on the existing GCPs, for further analysis with existing GIS tools.

Thanks for sharing, regards


No problem Jose, happy I could point you in a direction



Hi Scott, hope all is well. I’ve resurfaced for a time, lol. Finally got around to starting my SfM project. Any experience with VisualSFM? Also, would you happen to have a dataset or two I could run through VisualSFM? Testing quality, feature descriptors, etc. I have nothing from my ROV yet… (still building… freakin’ ridiculous). Anyway, thanks for any pointers you can offer.



Hi @Jim_N

Yep, everything is good over here, just way too busy at work to get out and do much new exploring / searching for wrecks

I haven’t used VisualSFM but it looks pretty similar to Agisoft PhotoScan that I use

Send me an email (click on the Recaptcha link below to see my email address) and I will send through a link to Google Drive with a heap of raw images that you can play with

For a bit of inspiration have a look at what Simon Brown has been generating



Had my first crack at trying to create a 3D model from some of the video footage we took of site 019-A. I used about 120 images, but needless to say it didn’t turn out very well: the orientation is all weird and difficult to rotate in Sketchfab. I’m using the Pro version of Agisoft’s PhotoScan (got a good deal on the Educational License).

Site 019-A Test 2 by Endurance Marine Exploration on Sketchfab

Edit: How are we doing the embed?


Like this

(just the sketchfab hyperlink inserted)


Hi @Jim_N
Another useful tool, especially for post-processing the models obtained with VisualSFM, is CMPMVS. It may help you with surface generation for complex geometries. Its performance is better than the standard Poisson reconstruction provided by MeshLab.



What are you using for batteries? I’m working on an ROV, but the battery along with the charger costs a lot; an alternative would be an option.


This was the ROV I used to get the video and shots: it had 4 packs of the OpenROV batteries. Probably not the cheapest solution, but it was all that was available at the time.



Does anyone have some experience with OpenDroneMap for full textured 3D model generation from consumer-grade camera video? We have been tinkering with the Image -> PointCloud -> Mesh -> Surface pipeline employed in ODM, with some interesting results for a couple of “challenging” videos of 20 x 4 m transects inside an 1800s shipwreck. Yet we are trying to skip the video-to-image frame extraction, as there is a lot of information in the original video which could produce a richer 3D model.
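In the meantime, the usual compromise is smarter frame extraction rather than feeding every frame to the pipeline. Below is a pure-Python sketch (illustrative only; a real extractor would use ffmpeg or OpenCV) of picking the sharpest frame in each group of consecutive frames via variance of the Laplacian, a common blur heuristic; frames here are plain nested lists of grayscale values.

```python
def laplacian_variance(img):
    # Variance of the 4-neighbour Laplacian over interior pixels:
    # low values suggest a blurred frame, high values a sharp one.
    h, w = len(img), len(img[0])
    vals = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (img[i - 1][j] + img[i + 1][j] +
                   img[i][j - 1] + img[i][j + 1] - 4 * img[i][j])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def pick_sharp_frames(frames, keep_every=5):
    # From each run of `keep_every` consecutive frames, keep the index
    # of the sharpest one; the survivors go to the SfM pipeline.
    picked = []
    for start in range(0, len(frames), keep_every):
        group = frames[start:start + keep_every]
        best = max(range(len(group)), key=lambda k: laplacian_variance(group[k]))
        picked.append(start + best)
    return picked
```

Selecting for sharpness rather than at a fixed rate tends to keep the frames that contribute the most usable features, which partly recovers the information otherwise lost in naive frame extraction.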

Also, if somebody knows about any Video->xxxx->3D Model pipeline, drop the info here!

J. Cappelletto