We needed a suite of inexpensive, geospatially aware video cameras that could run OpenSensorHub (OSH) onboard and store and/or stream video and navigation data (i.e., location and orientation) in real time. The OSH team therefore developed the GeoCam based on the Raspberry Pi (RPi), using the RPi HD camera module, an Adafruit GPS (with or without an external antenna), and an Adafruit orientation sensor. (Build your own GeoCams by following the recipe here.)


With the tripod mounting adapter attached, one can use GoPro or Garmin VIRB accessories to deploy these GeoCams as “drop cams”, “sticky cams”, “body cams”, etc. Each of these cost-effective, geospatially aware video cameras combines an RPi and its camera module with commercial off-the-shelf location and orientation sensors. Adding a compact power bank allows a camera unit to be deployed easily in almost any setting and to operate for extended periods of time (18 hours for the battery shown). Each unit can be built for under $200.


The GeoCam runs OSH directly onboard the RPi and can store its measurements (video, GPS location, and geo-orientation) locally and/or push these observations via WiFi to a local OSH field node. Below are screenshots of a GeoCam being used in a field campaign, showing the GeoCam's HD video in a window and its location and orientation as an icon on the map.

[Screenshots: GeoCam in the field, showing live HD video alongside a map view of its location and orientation]
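Since OSH exposes observations through an OGC Sensor Observation Service, a client can later pull just the navigation records back off the field node. The sketch below builds an SOS 2.0 GetResult request and parses a response; the endpoint URL, offering ID, and the `time,lat,lon,alt` record layout are assumptions for illustration — check your own node's capabilities document for the real values.

```python
# Hedged sketch: requesting and parsing GeoCam navigation records from an
# OSH SOS endpoint. URL, offering, and record layout are placeholders.
from urllib.parse import urlencode

SOS_BASE = "http://localhost:8181/sensorhub/sos"  # assumed node address

def get_result_url(offering, observed_property, begin, end):
    """Build an SOS 2.0 GetResult request URL (KVP binding)."""
    params = {
        "service": "SOS",
        "version": "2.0",
        "request": "GetResult",
        "offering": offering,
        "observedProperty": observed_property,
        "temporalFilter": f"phenomenonTime,{begin}/{end}",
    }
    return SOS_BASE + "?" + urlencode(params)

def parse_nav_records(text):
    """Parse comma-separated time,lat,lon,alt records (assumed layout)."""
    records = []
    for line in text.strip().splitlines():
        t, lat, lon, alt = line.split(",")
        records.append({"time": t, "lat": float(lat),
                        "lon": float(lon), "alt": float(alt)})
    return records

# Example response body in the assumed layout:
sample = ("2017-06-01T12:00:00Z,34.72,-86.64,198.0\n"
          "2017-06-01T12:00:01Z,34.72,-86.63,198.2")
for rec in parse_nav_records(sample):
    print(rec["time"], rec["lat"], rec["lon"])
```

Because the nav stream is separate from the video stream, a request like this costs only a few bytes per fix — a point Mike Botts expands on in the comments below.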


2 comments

  1. Mohammad, sorry to take so long to respond. One could try to embed synchronized navigation values “into the MPEG” if it's packaged in something like an MP4 wrapper. However, in our experience it has been MUCH better to keep the navigation and video data as separate outputs, for several reasons:

    (1) they are both sensor observations and should be treated equally in that regard, rather than the navigation information being treated as metadata;

    (2) the video and navigation data are rarely sampled synchronously, nor at the same rate, so attempts to tie a particular navigation measurement to a particular video frame are typically inaccurate (I know that you haven't suggested this, but many have);

    (3) there are many situations where you want the navigation data but not the video data (e.g., performing an on-demand search to determine whether a UAV looked at a particular location within a particular time window). It's nice to be able to ask an OSH/SOS to send only the nav data without flooding the bandwidth with a bunch of video that you don't need at that time;

    (4) there are libraries and tools that can handle the streaming of video efficiently, but not if you stick other data into the video stream.

    As long as each measurement (location, orientation, video, etc.) is tagged with an accurate time value, one can synchronize these data in the client or in processing algorithms (see our video demo regarding “image draping” and “video draping” from the 3DR Solo drone, for example).
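    The client-side synchronization described above can be sketched as a nearest-timestamp match: for each video frame time, pick the navigation sample closest in time. The function and data layout below are illustrative, not part of the OSH driver API.

```python
# Hedged sketch: pair each video frame with the nearest-in-time nav sample.
# Names and the (timestamp, data) layout are illustrative assumptions.
import bisect

def nearest_nav(nav_samples, frame_time):
    """Return the nav sample whose timestamp is closest to frame_time.

    nav_samples must be a list of (timestamp, data) tuples sorted by time.
    """
    times = [t for t, _ in nav_samples]
    i = bisect.bisect_left(times, frame_time)
    # Only the samples straddling the insertion point can be closest.
    candidates = nav_samples[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - frame_time))

# Nav fixes at 1 Hz, frames at ~30 Hz: each frame picks its closest fix.
nav = [(0.0, {"lat": 34.720}), (1.0, {"lat": 34.721}), (2.0, {"lat": 34.722})]
for frame_t in (0.02, 0.98, 1.51):
    t, fix = nearest_nav(nav, frame_t)
    print(frame_t, "->", t, fix["lat"])
```

    Because the match happens at read time, neither stream has to be resampled or repackaged to accommodate the other.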

    I understand that you may have an organization that requires the MPEG and navigation data to be put together. It's doable by modifying the existing driver, but it's not advisable.

    Thanks.
    Mike Botts

