Underwater Archaeological Camera


It’s great to see some cats using the Yale OpenHand, hydrophones, and Raspberry Pis in interesting ways with their ROVs. I’ve written posts about wanting to do just these things some time ago. Now I’d like to turn your attention to polynomial texture maps (PTMs) and how, in my opinion, they can help marine archaeology and possibly search-and-rescue groups identify submerged objects.

Quoted from the link:

"Polynomial Texture Maps (PTMs) are a simple representation for images of functions instead of just images of color values. In a conventional image, each pixel contains static red, green, and blue values. In a PTM, each pixel contains a simple function that specifies the red, green, and blue values of that pixel as a function of two independent parameters, lu and lv.

Typically, PTMs are used for displaying the appearance of an object under varying lighting direction, and lu,lv specify the direction of a point light source. However, other applications are possible, such as controlling focus of a scene. PTMs can be used as light-dependent texture maps for 3D rendering, but typically are just viewed as ‘adjustable images’."
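To make the quote above concrete: in Malzbender's formulation, the per-pixel "simple function" is a biquadratic polynomial in (lu, lv), stored as six coefficients per channel, so rendering the image under a new light direction is just a polynomial evaluation per pixel. A minimal sketch in Python (the coefficient values here are invented for illustration, not read from a real PTM file):

```python
def eval_ptm_pixel(coeffs, lu, lv):
    """Evaluate the biquadratic PTMs use for one pixel/channel:
        L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    where (lu, lv) is the projection of the light direction onto the
    image plane, and a0..a5 are the stored per-pixel coefficients."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0*lu*lu + a1*lv*lv + a2*lu*lv + a3*lu + a4*lv + a5

# Illustrative coefficients for a single pixel (made up, not from a file):
coeffs = [-0.4, -0.3, 0.1, 0.2, 0.1, 0.8]
print(eval_ptm_pixel(coeffs, 0.0, 0.0))  # light head-on: just a5 -> 0.8
print(eval_ptm_pixel(coeffs, 1.0, 0.0))  # raking light from one side (≈ 0.6)
```

Sweeping (lu, lv) over the unit disk is what gives the "adjustable image" effect described above: the raking-light settings are the ones that pull surface relief (tool marks, inscriptions) out of shadow.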

It may not be obvious at this moment what being able to modify the light source and refocus objects in a given image might illuminate for you. However, given the examples on HP’s website and the papers written by Tom Malzbender, highlighting Sumerian, Egyptian, and ancient Greek discoveries made using a PTM rig as an analysis tool, one may begin to consider what such techniques could do for underwater explorers like ourselves.

Well, as I sit here at 12am on a Friday morning, wrapping up the CTD work and getting ready for some wet testing of the near-finished kit, I find myself planning the next payload project: the ARC-CAM.

This, on an ROV:

Doing this for science:

And this:

You get the idea. Or you will, in the coming months. Being able to do this in real time, and to share the data in real time with a community of explorers, would be pretty exciting.

I’ll post CAD and specifications for the camera rig this weekend.


Hi Jim,
Interested to hear your thoughts on how you are going to do this.
I’m guessing you’re talking about some real-time SfM or full SLAM implementation?
Are you thinking the standard ROV forward-facing camera or a down-facing camera system?
Stereo / mono / multiple-camera rig?
Grabbing stills and integrating data from the IMU into their EXIF tags?


Hi Scott,

Actually, no. This technique uses a light umbrella and a single camera to capture varying light angle photos and stitches them together, allowing for light/shadow modifications and texture mixing to highlight features of the target object. Check out http://www.hpl.hp.com/research/ptm/antikythera_mechanism/index.html

I’d love to hear your thoughts about it. I also want to integrate an SfM module like yours in the next few weeks. However, the Chesapeake here is pretty murky and algae-bloomed at times, with some problematic currents, so we are considering a benthic rover for deployment.


Just realized I didn’t answer you entirely. The camera angle depends on the application, so the mount will need to be positionable: adjusted pre-dive at first, motorized on a larger kit in later implementations. The rig would be modular and not dependent on the on-board camera. Mono, grabbing stills.


Jim, it’s interesting how things tie back together. The example you give of the polynomial texture maps (PTM) of the Antikythera Mechanism

ties back to some of the SfM stuff from ROVs.

Synergy, fate, or just coincidence :wink:


Jim, getting back to the actual discussion:

Yep, I think an external module would be the ideal way to go, for a number of reasons.

As always, I see baby steps forward as an important way to progress any development (Jim, I know you are aware of these). Have a look at the topics below; they give a working, simplistic answer, including a DXF file of the backing plate to “bolt onto” an OpenROV.

Given that, I would see an ideal external module as having:

  1. Realistically maybe up to 4 cameras: 2 down-facing stereo, and potentially 2 angled at, say, 45 degrees out to the sides

With a single camera, I have had times where the software has not matched 1 or 2 poor shots; that gap in coverage can then exclude a larger number of shots in a “sequence run”, because the other shots can’t be positioned. A stereo rig may help prevent this.

With a stereo rig, if the distance between the stereo cameras is known, a bit tighter 3D positioning can be done (although the software is pretty good at positioning mono camera systems).

With the 2 cameras angled at approximately 45 degrees: I have seen times where I had good shots shooting straight down, but because all of the shots are basically shooting down, the side views of the generated 3D SfM model could be better.
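The point about a known stereo baseline can be made concrete with the classic rectified-stereo relation: depth falls straight out of pixel disparity once the cameras are calibrated. A small sketch (the focal length, baseline, and disparity numbers are invented for illustration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo: Z = f * B / d.

    focal_px:     focal length in pixels (from camera calibration)
    baseline_m:   known distance between the two cameras, in metres
    disparity_px: horizontal pixel shift of a matched feature
                  between the left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 10 cm baseline, 40 px disparity (≈ 2.5 m)
print(depth_from_disparity(1000.0, 0.10, 40.0))
```

This is why the fixed, known baseline tightens the 3D positioning: with a mono camera the SfM solver has to estimate scale from the scene, whereas a calibrated pair anchors it directly.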

  2. Potentially an IMU and/or integration with other navigation systems

Adding the ability to record camera pose https://en.wikipedia.org/wiki/3D_pose_estimation assists with the SfM modelling.

  3. An external module

So it can be used across multiple platforms: I currently do SfM from the OpenROV, from a dive scooter, and as a free-swimming diver, so the ability to move the same equipment across platforms would be nice.

4 cameras may be a bit too expensive to start with, so a modular design, able to add more cameras as your personal demands increase, may be beneficial.

Sometimes you’re not after an SfM output, so being able to easily remove it from the OpenROV is worthwhile.

Potentially integrating WiFi (given its ~100mm range underwater) to communicate from the external module back to the OpenROV.

  4. Potentially lasers

Everything is better with lasers :smile:

More importantly, though, lasers assist in locating the camera pose; alternatively, they could give the “depth lock” feature in Cockpit the ability to act as a “bottom lock”.

  5. Lighting (fairly obvious)

If I were looking at this, I would consider a “mast” (fixed, retractable, or fold-down) with the camera mounted on top to get better downward shots: rotate the mast, then move on a bit and repeat (almost Google Street View style).


Hi Jim:
As you all may know, I belong to a research group focused on historical ships. We are a consortium of departments located at different universities, including History, Archaeology, Marine Sciences, Naval Construction, and the Navy school …
We have planned an “on the field” research exploration for this coming summer (weather/ocean permitting).
The goal is taking some videos, pics, and data of an 18th-century ship we located a year ago.
I find the posts above of major interest, but as you surely also know, robotics is not my thing.
Anyway, I’ll do my best to involve my team’s robotics guys in testing your ideas during the research.

Thanks for sharing,


Oh man, awesome. I’ll endeavor to be more open and productive on this front, then :slight_smile: I’ve got some traveling and whatnot to do this week, but I’ll be posting on this topic as soon as I get both feet back on solid ground.


I plan to respond fully soon :slight_smile:


Here’s a tutorial paper, circa 2008, on cultural heritage imaging.



Also, a paper demonstrating a simple DIY rig.

Considerations for the LEDs and drivers, keeping in mind that you need around 25 images for a good, continuous light-angle variance:

  1. 25 Bright LEDs https://www.sparkfun.com/products/9850
  2. LED driver https://www.sparkfun.com/products/10616
  3. Light weight submersible umbrella rig (design phase)
  4. Raspberry Pi 2 and Cam http://www.adafruit.com/products/1367?gclid=CLHG7caAscYCFZKBfgod-_MB1g
  5. PTM Software with slick modifications http://www.hpl.hp.com/research/ptm/
  6. Camera / PC Pressure vessel…working on this one. Maybe an Otterbox…
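The ~25 images matter because the six per-pixel PTM coefficients are fitted by least squares over the captured light directions, and you want the samples well spread over the light hemisphere. A hedged NumPy sketch of that fitting step for one pixel (the light directions and intensities here are synthetic; a real rig would take them from the umbrella geometry and the captured frames):

```python
import numpy as np

def fit_ptm_coeffs(light_dirs, intensities):
    """Least-squares fit of a0..a5 in
        L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    for one pixel/channel, given N >= 6 samples under known lights."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix: one row of biquadratic terms per captured image
    A = np.column_stack([lu**2, lv**2, lu*lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

# Synthetic check: generate 25 samples from known coefficients, recover them
rng = np.random.default_rng(0)
true = np.array([-0.4, -0.3, 0.1, 0.2, 0.1, 0.8])
dirs = rng.uniform(-1, 1, size=(25, 2))            # 25 LED positions
obs = np.column_stack([dirs[:, 0]**2, dirs[:, 1]**2, dirs[:, 0]*dirs[:, 1],
                       dirs[:, 0], dirs[:, 1], np.ones(25)]) @ true
print(np.round(fit_ptm_coeffs(dirs, obs), 3))      # recovers the true coeffs
```

In a real capture the fit runs once per pixel per channel, which is why HP’s tooling is worth using rather than rolling your own; the sketch is just to show why a 25-LED umbrella gives a comfortably over-determined system for six unknowns.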


Just goes to show you, if you have an idea, execute on it…otherwise, someone else will, LOL


Anyway, I’ll be diving into this again…