Obstacle Avoiding OpenROV using an IMU and Laser-based Vision System (LVS)



Hi guys,

I would like to create a program on the BBB for an autonomous OpenROV that could avoid obstacles (like quays, rocks, and sunken wrecks). I’m interested in this article: http://advantech.gr/med07/papers/T20-006-812.pdf. In this project I would like to use an IMU, an LVS and OpenCV. In addition, the ROV should archive photos. So, what has been done so far?

  1. Added a microSD card as extra storage.
  2. Archived photos on the SD card with time, pitch, roll, yaw and depth data: http://oi63.tinypic.com/24180o7.jpg

Now I have a problem with real-time data processing on the BBB. I would like to achieve 15-30 fps, but I cannot install OpenCV ( http://blog.lemoneerlabs.com/3rdParty/Darling_BBB_30fps_DRAFT.html ). I have an issue with missing packages (for example: GLib, Pango, GdkPixbuf, ATK, cairo, librsvg, poppler, freetype2, etc.). Installing new packages is very cumbersome (long installation/configuration times, and a high ping rate in the cockpit after installation). Has anyone installed OpenCV on the 30.0.2/30.0.3 software?

Also, I don’t have access to accel data (“we are not actively pulling Acceleration data. You will definitely need to add new code to the existing software to access accel data.”). Any ideas?

I would like to point out that using OpenCV and archiving images in real time causes very high CPU usage.
How can I lower the CPU usage on the BBB? Maybe turn off the cockpit? A new data socket client?


There is already a Debian package for installing OpenCV. Have you tried “apt-get install opencv”?

Yeah, if you bypass the control system and talk serial directly to the MCU, you can save about 20% CPU.

Let us know how it goes!


Thank you for your response, but I still cannot get accel data :confused:
Please take a look at this image:

Any ideas? Where is the mistake?


You’ve modified the source correctly for the MPU9150 IMU, but your cockpit status shows that you are using the new IMU (BNO055), so these modifications need to be made in the CBNO055 class instead. I’ll work out the specific changes and post them tonight.


Can we get the pressure/depth value faster than 1 Hz?


You should be able to. I looked at the datasheet and didn’t immediately see a data rate specification other than:

“Fast conversion down to 1 ms”

That would lead me to believe that it can operate somewhere between 500-1000 Hz, since it’s just an I2C transmission plus an A-D conversion with a bit of math applied. You should be able to go to CMS803_XXBA.cpp and change the millisecond value at:

if( DepthSensorSamples.HasElapsed( 1000 ) )

to achieve whatever update rate you want, though I’m not sure what the exact upper limit is, or how it will affect the available clock cycles on the ATmega. Shoot for 100 Hz and see if it works well for you.


Thank you! Now I have 20 Hz. It works well, but there is an issue with false depth data. Take a look at this screenshot:

What could be wrong?


I’ve seen this behaviour before, even at the normal data rate, although not quite as frequently as what you are seeing. I only really have two possible explanations for it. The first is that there is still some wonky typecasting/overflow in the conversion math for certain ranges of values. I believe that was taken care of some time ago using Luke Miller’s code (http://github.com/millerlp), and I walked through it a couple of times myself to make sure that all of the casting between ints and floats was correct, but it’s possible that there is still a bug in there somewhere.

The other possibility is that the sensor has not yet finished performing the conversion, so you get a bad value if you try to read it. I’ll have to dig into the datasheet again and see how it behaves, and whether or not there is some kind of interrupt/event flag available for indicating valid data.


Hi charlesdc,

Is the issue solved for false depth data?

I’ve noticed in the code that the depth value depends on temperature (CMS5803_XX.cpp). I think this is a bad approach, because it compounds the error: the depth measurement should be independent, and note that the temperature sensor has its own error. I’ve also noticed wrong behaviour in the raw temperature data (the ‘uint32_t D2’ value in the code). An example is below.

I received other values, for example: 2047 (which could be correct - 20 degrees Celsius), 1, 6559928, 329431100, 1023, etc. I’m sure the sensor can send correct data, because I put in a band-pass filter (from 4 to 20 degrees), but the resulting update frequency is unsatisfying.

Can you explain this behavior? I think it isn’t normal. Is the sensor broken?


I recently took another look at the code for the depth sensor, and everything seemed to look correct in how the calculations are done, so I am led to believe that these abnormal values are actually being generated by the sensor itself. My theory is that we aren’t giving the sensor enough time to properly process the A-to-D conversion, and so there are sometimes garbage values that we pull from the sensor. There is a configuration variable for the resolution of the sensor which directly impacts how long it spends averaging samples and preparing an output from the ADC. Currently, we set the resolution with the value 512. I believe that if we knock that value down one more level to 256, the sensor will be able to provide outputs more rapidly, though at the cost of some sensor resolution. I believe the other option to avoid these errors is to sample the sensor less frequently.

Note, this is still just a theory that I haven’t had time to look into too much yet. If you are feeling froggy, you could try changing that 512 value to 256 and see if the behaviour changes, as well as varying your sampling rate. You could also bump the value up to one of the higher resolution levels and see if the frequency of gross errors increases for a given sampling rate. This would help to validate my theory.

You can change the default resolution value by going to OpenROV/CMS5803_XX.h and changing the value in this section:

// Constructor for the class.
// The argument is the desired oversampling resolution, which has
// values of 256, 512, 1024, 2048, 4096
CMS5803( uint16_t Resolution = 512 );

Eventually, I’ll get a chance to pound on this issue some more, but if you get a chance to experiment with my hypothesis in the meantime, please share your results!


I’m sorry, it still doesn’t work. I tried setting the resolution to 256, 512, 1024, 2048, and 4096.

Maybe the error is here?:

	// Wait a specified period of time for the ADC conversion to happen
	// See table on page 1 of the MS5803 data sheet showing response times of
	// 0.5, 1.1, 2.1, 4.1, 8.22 ms for each accuracy level.
	switch( commandADC & 0x0F )
	{
		case CMD_ADC_256 :
			delay( 1 ); // 1 ms
			break;

		case CMD_ADC_512 :
			delay( 3 ); // 3 ms
			break;

		case CMD_ADC_1024:
			delay( 4 );
			break;

		case CMD_ADC_2048:
			delay( 6 );
			break;

		case CMD_ADC_4096:
			delay( 10 );
			break;
	}