Downward-facing camera


#1

OK, I have been playing with some structure-from-motion concepts, and I have gone out and purchased a Raspberry Pi, four fisheye Pi Cameras (5 MP), and a camera module multiplexer (to bring in all four cameras).

I have been thinking of a configuration where at least two of the cameras are down-facing, to gather stereo images for later structure-from-motion models (I have lots of different thoughts on configurations for the four cameras and have not settled on one yet, so there will most likely be a few different trial concepts along the way). But if I am only taking an image every 0.5 to 1 second, there is a bit of spare processing time (maybe an extra image every half second). Even with a 1 or 2 second processing lag, it should be enough to form a pretty good rough-and-ready site map of where the ROV has been and what it saw. Something like the capture loop sketched below is what I have in mind.
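As a minimal sketch of that capture loop (assuming the picamera library, and that the multiplexer selects a camera by driving GPIO select pins; the pin numbers and select levels below are made up, so check the multiplexer's docs for the real ones):

```python
# Capture-loop sketch: grab a still from each down-facing camera roughly
# every half second.  The GPIO pin numbers and select levels are hypothetical;
# they depend entirely on the multiplexer board.
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

SELECT_PINS = [7, 11]              # hypothetical multiplexer select pins
CAMERAS = {0: (0, 0), 1: (0, 1)}   # camera id -> select levels (made up)

GPIO.setmode(GPIO.BOARD)
GPIO.setup(SELECT_PINS, GPIO.OUT)

camera = PiCamera(resolution=(2592, 1944))   # 5 MP stills
frame = 0
while True:
    for cam_id, levels in CAMERAS.items():
        for pin, level in zip(SELECT_PINS, levels):
            GPIO.output(pin, level)
        time.sleep(0.1)            # let the mux and sensor settle
        camera.capture('cam%d_%06d.jpg' % (cam_id, frame))
    frame += 1
    time.sleep(0.5)                # one stereo pair every ~0.5-1 s
```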

I have for a while been interested in the navigation side of the ROV as well, and was wondering what any of you know (knowledge of the crowd) about real-time mosaics. I see this as a dumbed-down version of SLAM, but most likely more than enough for what we are doing. The rough idea is sketched below.
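Something like this OpenCV sketch is what I mean: match features between consecutive frames, chain the homographies, and paste each frame onto one big canvas. It assumes a roughly flat seabed, with no blending and no loop closure, just an accumulating map:

```python
# Rough-and-ready mosaic sketch (OpenCV): match ORB features between
# consecutive frames, chain the homographies, and paste each frame onto one
# big canvas.  Assumes a roughly flat seabed; no blending, no loop closure.
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

canvas = np.zeros((3000, 3000, 3), np.uint8)                  # fixed-size map
H_total = np.array([[1., 0, 1200], [0, 1, 1200], [0, 0, 1]])  # start mid-canvas
prev_kp = prev_des = None

def add_frame(frame):
    """Warp a new frame into the canvas via features matched to the last one."""
    global prev_kp, prev_des, H_total
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)
    if prev_des is not None:
        matches = matcher.match(prev_des, des)
        if len(matches) >= 4:
            src = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([prev_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                H_total = H_total.dot(H)   # new frame -> prev frame -> canvas
    warped = cv2.warpPerspective(frame, H_total, (canvas.shape[1], canvas.shape[0]))
    mask = warped.any(axis=2)
    canvas[mask] = warped[mask]            # crude overwrite, no blending
    prev_kp, prev_des = kp, des
```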



I was wondering what the pros and cons of processing on the surface are (similar to the BoofCV video above, or something else), and how this best integrates into the OpenROV world.

Interested in hearing @badevguru's thoughts, and whether anyone has played in this area (@Jim_N, @Darcy_Paulin?)


OpenROV software intern interested in ROV computer vision projects
Structure From Motion For Free
#2

An interesting idea… I’ve recently started a Ph.D. in SLAM for underwater robots, using the OpenROV as my platform. My initial idea was to create a virtual model of the OpenROV, complete with a realistic seabed model, so that I could try to implement a SLAM algorithm virtually before trying it on the robot itself. That felt too risky, though, as I think the technology (especially in the underwater domain) is young and the challenges for a complete SLAM implementation are significant (processing, feature recognition, turbidity, currents, motion modelling, etc.). So as a focus for my project I’ve decided to look at compensating for the effect of currents within the SLAM framework. Part of this will involve creating a simulation, and I hope to make it available to the community and play around with it myself in the future, so that we can start making some steps towards high-level control and autonomy.

Do you know if anyone has tried to implement dead reckoning via an inertial navigation system (INS)? This should be fairly straightforward, though I’m not sure how accurate it would be given the quality of the IMU. Perhaps a pragmatic approach to start would be to try to build a mosaic using the readings from the INS.
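For reference, the naive version of IMU-only dead reckoning is just rotating body-frame acceleration into the world frame and integrating twice. A minimal 2-D sketch (which also shows why it drifts so badly on cheap IMUs):

```python
# Naive 2-D dead reckoning: rotate body-frame acceleration into the world
# frame using the IMU heading, then integrate twice.  Any accelerometer bias
# grows as t^2 in position, which is why cheap IMUs drift so fast on their own.
import math

x = y = vx = vy = 0.0

def step(ax_body, ay_body, heading_rad, dt):
    """One update from body-frame accel (m/s^2), heading (rad), and dt (s)."""
    global x, y, vx, vy
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    ax = c * ax_body - s * ay_body     # world-frame acceleration
    ay = s * ax_body + c * ay_body
    vx += ax * dt                      # first integration: velocity
    vy += ay * dt
    x += vx * dt                       # second integration: position
    y += vy * dt
    return x, y
```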


#3

Indeed. I’ll respond more fully tonight. There’s always been interest, but I think there’s more of a desire and need now than before. I’ve learned a lot in the last few months about SLAM and other localization techniques that may or may not port well to the marine world. I have a rig for PTAM SLAM testing that should be in the water by the end of next week. I will be pushing updates and data up when I can. There are subtleties in getting these types of algorithms running robustly on the benchtop, much more so in a marine environment. More to come soon.


#4

Hi Zac

As far as I know a few people have had a look at it, but no one has successfully implemented an Extended Kalman Filter for dead reckoning on OpenROV… yet. (For anyone wanting a starting point, a skeleton is sketched below.)
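A minimal skeleton of the predict/update cycle (numpy only). With this linear constant-velocity model it is really a plain Kalman filter; an EKF swaps in the Jacobians of a nonlinear motion model but keeps the same structure. The position "measurement" is hypothetical here, e.g. a fix from a mosaic match, since the IMU alone only drives the predict step:

```python
# Skeleton of the (E)KF predict/update cycle.  State is [x, y, vx, vy] with a
# constant-velocity model; noise matrices are placeholder values to tune.
import numpy as np

x = np.zeros(4)                   # state: x, y, vx, vy
P = np.eye(4)                     # state covariance
Q = np.eye(4) * 0.01              # process noise (tune me)
R = np.eye(2) * 0.5               # measurement noise (tune me)
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])  # we observe position only

def predict(dt):
    global x, P
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                   # x += vx*dt, y += vy*dt
    x = F.dot(x)
    P = F.dot(P).dot(F.T) + Q

def update(z):
    global x, P
    y = z - H.dot(x)                         # innovation
    S = H.dot(P).dot(H.T) + R                # innovation covariance
    K = P.dot(H.T).dot(np.linalg.inv(S))     # Kalman gain
    x = x + K.dot(y)
    P = (np.eye(4) - K.dot(H)).dot(P)
```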

I was just considering this image-overlay-style strategy as a simplistic, non-integrated alternative step towards a bigger-picture navigation solution.

Eagerly looking forward to both, Jim :+1:

I also had a thought last night

This concept should be able to work simplistically with the Trident, given Eric’s comments (sorry @Eric_Stackpole, not trying to put words in your mouth or box you into a corner).

A simple GoPro (mounted down-facing) with WiFi, attached to the Trident and streaming back to the surface, with video frames captured topside at an appropriate rate, could provide a near-real-time mosaic from stills, even if it's a bit rough around the edges and the frames aren't blended beautifully together.
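The topside end of that could be as simple as this OpenCV sketch: grab frames from the video stream at a fixed rate and save stills for the mosaicker. The stream URL is a placeholder, since the real address and format depend on the GoPro model and firmware:

```python
# Topside frame-grabber sketch (OpenCV): pull frames from the video stream at
# a fixed rate and save stills for the mosaicker.
import time
import cv2

cap = cv2.VideoCapture('udp://10.5.5.9:8554')   # hypothetical stream URL
frame_id = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite('still_%06d.jpg' % frame_id, frame)
    frame_id += 1
    time.sleep(0.5)   # ~2 stills/s is plenty for a slow-moving ROV
```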


#5

Here’s a little more detail. Depending on what you want to accomplish, in the monocular sense there are three big players… but I’m not an expert, so I may be missing some.

  1. Semi-Direct Visual Odometry - SVO
  2. Parallel Tracking and Mapping - PTAM
  3. Large-Scale Direct Monocular SLAM - LSD-SLAM

I’ve spent the last few months at 5% of my time per week looking at SVO and PTAM. Yes, not much time, and that’s why it’s taking me forever :smile: Anyway, SVO and PTAM work well for aerial systems both indoors and outdoors. I'm currently in the process of evaluating outdoors… it takes a while since it’s a fairly unconstrained environment, so quantifying all the variables for accurate testing is arduous, to say the least. However, indoors is another matter. Anyway, not to bore you with the details: SVO comes in at about ~cm accuracy with camera only, while PTAM lies somewhere in the 1.5+ m range, and drifts a bit, with camera only (i.e. no fused IMU or barometer/altimeter data) for solving the camera resectioning problem, that is, localization of the camera in the world frame. LSD-SLAM is next on the list… maybe after the holidays; by late January I’ll have a good idea of how it performs.

Anyway, SVO is great as long as you're downward-facing; the out-of-the-box keyframe method is tuned for that camera orientation. Also, it doesn’t build much of a map, nor was it designed to. PTAM, on the other hand, builds a sparse map and, when fused with IMU and altitude data, comes in at around 1/2 m accuracy (granted, using cheapo IMUs). It’s also important to note that both of these methods break and drift off often, and need tuning given environmental differences. No solution exists that can run out of the box and be stable, reliable, and spot on… yet.

Continuing: read the papers, they are good, and they claim great results. Hopefully we can get close to them on our systems and in real deployments. We shall see.

In regards to ROVs/AUVs, @zacmacc, yes. LOL. That is fantastic. All of your points about why it seems difficult are why it is difficult, and a great deal of money has been thrown at this problem in the DOD world. They have their solutions; I don’t know what they are, but I don’t think they are that good, so you’re tackling a good problem space. I can say that noise from the motors will adversely affect your IMU. The big dogs, covert subs and million-dollar ROVs, have really expensive and accurate INS. On our kits, the motor noise and the quality of the cell phone IMU are going to give you headaches galore. They give them to me on a weekly basis, and to others on this forum.

But let’s keep this thread alive. We’ve been talking about this for at least a year and haven’t made any progress for marine work to date; now’s as good a time as any to actually do the work, lol.

Just a comment about SfM and localization: though they seem to be the same problem, they're not really. Doing SLAM, you need real-time or near real-time; given that the ROVs are mostly slow things, near real-time is fine. Processing on a topside buoy is brilliant, do it! Even the RPi2 has problems with denser point cloud work… it can’t really handle large point clouds for SLAM. We use an ODROID-XU4 for SVO and PTAM and it does a good job, but we do not map densely nor build large maps on it… I don’t think I’ll even attempt that yet.
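For what it's worth, the usual first trick for keeping clouds small enough for boards in that class is a voxel-grid downsample: keep roughly one point per voxel. A minimal numpy-only sketch (the leaf size is just an example value):

```python
# Voxel-grid downsample sketch (numpy only): keep roughly one point per voxel
# of side `leaf` metres.
import numpy as np

def voxel_downsample(points, leaf=0.05):
    """points: (N, 3) float array; returns one representative point per voxel."""
    keys = np.floor(points / leaf).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```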

Anyway, could talk for years about this subject. Here’s the rig I have,

Using an RPi2 with a Navio+ autopilot cape running ROS. I plan on a Bluefox global-shutter camera (absolutely needed for VO/SLAM work… we can argue about this later), but for now just a simple C920 webcam for preliminary feature testing. The Bluefox is somewhat pricey, so I have to play nice with others in order to use it for my testing. I gutted my ROV for the acrylic housing (I have a new pressure housing ordered from Blue Robotics) and am using a similar HomePlug network setup for the comms to topside. I'm building a larger ROV as well, but it's not needed for the tests coming up this month. Running power from an onboard 4200 mAh LiPo. The first set of tests will be in a small tank, easy features etc., and we'll go from there.
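The "preliminary feature testing" can be as dumb as this rospy sketch: subscribe to the camera topic and log how many ORB features each frame yields, as a cheap check of whether a scene is trackable. The topic name is an assumption; it depends on the camera driver in use:

```python
# Feature-test node sketch (rospy + OpenCV): count ORB features per frame.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
orb = cv2.ORB_create()

def on_image(msg):
    gray = bridge.imgmsg_to_cv2(msg, desired_encoding='mono8')
    kp = orb.detect(gray, None)
    rospy.loginfo('%d ORB features', len(kp))

rospy.init_node('feature_test')
rospy.Subscriber('/camera/image_raw', Image, on_image)
rospy.spin()
```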

@zacmacc, keep us updated, very interested in SLAM for underwater robotics…very.

More to come!

Jim


#6

BTW, @Scott_W, the real-time mosaic video is awesome!


#7

Ah yes, of course, I hadn’t considered the effect of magnetism from the motors… I was quite surprised when I read that these INS systems can deliver reasonable localisation with a drift of only 0.1% to 1% of distance traveled (i.e. 1 to 10 m of error over a 1 km transect), which would be more than sufficient for our needs, though they were probably talking about commercial systems.

Interesting, do you know what challenges they were facing besides the magnetism? Or is it just a matter of manpower?

[quote=“Jim_N, post:5, topic:3637”]
But let’s keep this thread alive. We’ve been talking about this for at least a year and haven’t made any progress for marine work to date; now’s as good a time as any to actually do the work, lol.[/quote]

Likewise, I’m also very interested in what you’re doing. It seems that you’re taking a very hands-on approach, and that’s great. It would be fantastic if we could solve this problem as a community and add higher-level capabilities to the OpenROV. Does anyone know of any websites, communities, or toolboxes specifically looking at SfM or SLAM?

Exactly what tests do you have in mind in the near and long term, @Jim_N?


#8

I did stumble across this article that could be of help:


I've also found some new info on a 360-degree 4K waterproof action camera.
It is available right now in Japan, and soon in the rest of the world.
http://petapixel.com/2015/09/07/kodak-pixpro-sp360-4k-a-360-degree-action-camera/
I'm writing up a post about the camera as we speak :slight_smile:

Cheers
OzyMark


#9

Thanks for the link. Jumping back into this project now :slightly_smiling: