What is your workflow?

I’ve been trying different ways to fetch data off my phone after a dive. After converting the video with the app, uploading it to Google Drive, and then downloading it to my computer, I think I’ve found a workflow that suits me better, especially when I’m at a remote location without network access. Please share your workflow too, I’m interested in how people work with their ROV.

  1. Dive
  2. Connect my phone and computer to the same WiFi, and download all the raw content using the Sweech app and the accompanying command line interface.
  3. View the .h264 files directly using VLC.
  4. Convert the .h264 files to .mp4 using ffmpeg for further distribution and editing.
  5. Todo: use telemetry data and video together somehow.

The VLC command is:

    vlc -f --demux h264 --h264-fps 30 GUID.h264

The -f flag simply launches VLC in fullscreen mode; drop it if you like.

An alternative to VLC is ffplay, invoked like this for 30 fps; again, -fs starts it in fullscreen:

    ffplay -fs -f h264 -vf 'setpts=N/(30*TB)' GUID.h264

A working ffmpeg command to convert .h264 to .mp4 is:

    ffmpeg -r 30 -i GUID.h264 -c copy -r 30 GUID.mp4
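If a dive produces several recordings, the single-file command above can be wrapped in a small loop. This is just a sketch of how I'd do it, assuming all recordings sit in one folder and were made at 30 fps; the skip-if-already-converted check and the DRY_RUN switch are my own additions, not part of ffmpeg:

```shell
#!/bin/sh
# Batch-convert every .h264 recording in a folder to .mp4, skipping files
# that already have a matching .mp4. Set DRY_RUN=1 to only print the
# ffmpeg commands that would run instead of executing them.
convert_all() {
    dir="${1:-.}"
    for f in "$dir"/*.h264; do
        [ -e "$f" ] || continue          # folder had no .h264 files at all
        out="${f%.h264}.mp4"
        if [ -e "$out" ]; then
            echo "skip: $out already exists"
            continue
        fi
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "would run: ffmpeg -r 30 -i $f -c copy -r 30 $out"
        else
            ffmpeg -r 30 -i "$f" -c copy -r 30 "$out"
        fi
    done
}
```

Source the file (or append a `convert_all "$HOME/openrov"` line at the bottom) and it will only touch recordings that don't have an .mp4 yet, so it's safe to re-run after every dive.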

I’ve made a short wrapper script for the Sweech command line interface. It downloads the entire folder recursively, skipping files that have already been transferred, and the transfer happens over the local WiFi only. I fetched the sweech CLI from GitHub, but it can be installed with pip as well. The wrapper script lives in $HOME/bin/trident-fetch.sh and looks like this:

    #!/bin/sh
    exec "$HOME/git/sweech-cli/sweech.py" pull --keep /storage/emulated/0/Android/data/com.openrov.cockpit/files/data "$HOME/openrov/"

The URL can be specified with the --url switch, but it can also be added to a config file. I’ve gone with the config-file approach; the file should be called $HOME/.config/sweech.json. Mine looks like this, using the IP I get when I set up my phone as a hotspot:

    "url": ""

As discussed in the other thread on telemetry data, it’d be nice to get the data out, but I’ve heard back from OpenROV that they will change the format, so I’m not sure it’s worth a great effort. I’ll try to look into some basics tonight, like fishing out the start timestamp. Given the timestamp, it’ll be easy to rename the files from the GUID to something easier to relate to, like the date and time of the dive.
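Once the start timestamp can be fished out, the GUID-to-date rename could look something like this sketch. The "YYYY-MM-DD HH:MM:SS" input format and the "dive-" filename prefix are assumptions of mine, not anything the app defines:

```shell
#!/bin/sh
# Rename a GUID-named recording to a date-based name, given the dive's
# start timestamp. Timestamp format and "dive-" prefix are assumed.
rename_dive() {
    guid_file="$1"            # e.g. 1a2b3c4d.h264
    ts="$2"                   # e.g. "2018-03-14 09:26:53"
    ext="${guid_file##*.}"    # keep the original extension
    # "2018-03-14 09:26:53" -> "20180314-092653"
    stamp=$(printf '%s' "$ts" | sed 's/[-:]//g;s/ /-/g')
    mv "$guid_file" "$(dirname "$guid_file")/dive-$stamp.$ext"
}
```

Called as `rename_dive GUID.h264 "2018-03-14 09:26:53"`, this would leave a dive-20180314-092653.h264 in the same folder, which sorts chronologically and is a lot easier to relate to than a GUID.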



By the way, when the Trident starts storing video locally instead of recording on the phone, I really hope a command line interface will exist, or at least an API that lets us write a CLI similar to the sweech one. Being able to do a simple update like this is really nice; working through a GUI and manually converting and downloading one recording at a time is not the way to go.


I’ve shared some tools at https://github.com/kefir-/openrov-utils that perhaps someone else may find useful. For example, the poorly named script trident-metadata.py takes one of the forest.db.0 files and tries to parse out the timestamp. It’s a hack, but it seems to work both before and after the latest update. It’s useful together with the rest of the workflow above, and in my tests it produced the same timestamp as the Cockpit app’s conversion process.


Nice work! Confirmed working on macOS 10.13 High Sierra, with manually built forestdb_dump and snappy.