Robotics Applications Employing OpenSensorHub

OpenSensorHub provides the world’s only complete implementation of the OGC’s Sensor Web Enablement (SWE) standards, but it is more than a standards-based sensor implementation: it is a framework for Sensors, Things, and Robots.  At Botts Innovative Research, Inc. we often develop solutions for government and commercial customers alike in areas such as vertical and horizontal sensor integration, visualization, distribution, and discovery through OpenSensorHub; but every now and then we like to get in touch with our inner child and play.  In the past we have played with common off-the-shelf sensors such as the Microsoft Kinect to create point clouds and Raspberry Pi cameras to catch troublemaking cats, each illustrating the ability of OpenSensorHub to integrate a variety of sensors, platforms, processes, and single-board computers into the SWE ecosystem.  On this occasion we wanted to play with robots!

The subject of our little experiment is the commercially available “Yahboom G1 AI vision smart tank robot kit with WiFi video camera for Raspberry Pi 4B”.  This relatively inexpensive package includes a high-definition camera, DC motors, SG90 servos, an ultrasonic sensor, an RGB lamp module, and a color sensor for basic line detection and following.  The kit includes an aluminum chassis, treads, wheels, a breakout board compatible with Arduino and Raspberry Pi, and of course a power source (a rechargeable battery).  It also comes with its own software and a handy iOS or Android application for remote control.  Overall, it is a good package for someone wanting to teach or learn about robotics with a turnkey solution.  However, being as curious as we are, we dumped the OEM software written in C and Python and replaced it with OpenSensorHub running on a Raspberry Pi 4B with 8 GB of RAM and a 16 GB SD card, hosting Ubuntu for Raspberry Pi as the operating system.  We chose this configuration to provide the greatest amount of processing power available on an RPi, with suitable storage capacity for the software and data requirements.

Each sensor, actuator, and process module developed for our “PiBot” is written in Java, the native programming language of OpenSensorHub.  We are exploring future integration with ROS (Robot Operating System) to transform ROS-based platforms into location-enabled, geographically aware, web-accessible services.  For this iteration we decided to implement the software using the OSH APIs to illustrate the capability of OpenSensorHub as a robotics framework.  At the time of writing, we have implemented modules to control locomotion, range detection with pan, RGB lamp on/off, and full pan/tilt control of the HD camera, plus an accompanying process module employing OpenCV-based Haar Cascades for feature detection.  The inherent modularity provided by OpenSensorHub promotes software reuse and reduces both development time and defects.
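To give a flavor of what a locomotion handler can look like, here is a minimal differential-drive sketch using the open-source Pi4J GPIO library.  The choice of Pi4J, the H-bridge wiring, and the pin assignments are illustrative assumptions rather than the PiBot’s actual code:

```java
import com.pi4j.io.gpio.*;

/**
 * Simplified differential-drive helper for a pair of tank treads.
 * Pin assignments assume a hypothetical H-bridge motor driver; the
 * real PiBot wiring and GPIO library may differ.
 */
class TankDrive {
    private final GpioPinDigitalOutput leftFwd, leftRev, rightFwd, rightRev;
    private final GpioPinPwmOutput leftPwm, rightPwm;

    TankDrive(GpioController gpio) {
        // Direction pins (hypothetical mapping)
        leftFwd  = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_00, PinState.LOW);
        leftRev  = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_02, PinState.LOW);
        rightFwd = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_03, PinState.LOW);
        rightRev = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_04, PinState.LOW);
        // Hardware PWM pins for speed control (0-1023)
        leftPwm  = gpio.provisionPwmOutputPin(RaspiPin.GPIO_01);
        rightPwm = gpio.provisionPwmOutputPin(RaspiPin.GPIO_26);
    }

    /** Tread speeds in [-1, 1]; negative values reverse that tread. */
    void drive(double left, double right) {
        leftFwd.setState(left >= 0);
        leftRev.setState(left < 0);
        rightFwd.setState(right >= 0);
        rightRev.setState(right < 0);
        leftPwm.setPwm((int) (Math.abs(left) * 1023));
        rightPwm.setPwm((int) (Math.abs(right) * 1023));
    }
}
```

With a helper like this, constructed via `new TankDrive(GpioFactory.getInstance())`, a tasking command such as `drive(0.5, -0.5)` translates directly into a turn in place.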

Each Sensor Module comprises a few Java classes: primarily a Descriptor, which registers the module with an instance of OpenSensorHub; a Sensor, which implements the API necessary to manage the lifecycle of the Sensor Module; a Config, exposing any configuration parameters; and one or more Outputs.  Collectively, these classes implement the entirety of a simple Sensor Module, describing the sensor, its observables, and its configuration, which OpenSensorHub publishes through a standard OGC Sensor Observation Service (SOS).  Additionally, if a module is taskable, i.e., it can receive tasking commands as with the pan/tilt HD camera, the tasking parameters are described and published as well.  Furthermore, through the use of the OGC Sensor Planning Service, a particular Sensor Module’s tasking parameters are exposed to external clients such as Command Posts, base stations, Common Operating Pictures, etc., whereby the sensor can be remotely tasked.
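As a compressed illustration of that anatomy, the skeleton below sketches a hypothetical RGB-lamp module against the OSH 1.x sensor API as we understand it.  Class and method names such as `AbstractSensorModule` and `init()`/`start()`/`stop()` may differ between OSH versions, and the lamp specifics are placeholders:

```java
import org.sensorhub.api.common.SensorHubException;
import org.sensorhub.api.module.IModule;
import org.sensorhub.api.module.IModuleProvider;
import org.sensorhub.api.module.ModuleConfig;
import org.sensorhub.api.sensor.SensorConfig;
import org.sensorhub.impl.sensor.AbstractSensorModule;

// Config: parameters exposed in the OSH admin console (field is illustrative)
class LampConfig extends SensorConfig {
    public int gpioPin = 18;  // hypothetical wiring default
}

// Sensor: manages the module lifecycle; later OSH versions rename
// these lifecycle methods doInit()/doStart()/doStop()
class LampSensor extends AbstractSensorModule<LampConfig> {
    @Override
    public void init() throws SensorHubException {
        super.init();
        // Assign the unique ID published in the sensor's SensorML description
        generateUniqueID("urn:osh:sensor:pibot-lamp:", config.id);
        // Outputs would be created and registered here via addOutput(...)
    }

    @Override
    public void start() throws SensorHubException { /* open GPIO, begin publishing */ }

    @Override
    public void stop() throws SensorHubException { /* release hardware */ }
}

// Descriptor: registers this module type with the OpenSensorHub instance
class LampDescriptor implements IModuleProvider {
    public String getModuleName() { return "PiBot RGB Lamp"; }
    public String getModuleDescription() { return "RGB lamp on/off control"; }
    public String getModuleVersion() { return "0.1"; }
    public String getProviderName() { return "example"; }
    public Class<? extends IModule<?>> getModuleClass() { return LampSensor.class; }
    public Class<? extends ModuleConfig> getModuleConfigClass() { return LampConfig.class; }
}
```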

Processes in OpenSensorHub can be chained together, taking as inputs specific sensor and process outputs, or collections of outputs, to produce new observations, perform tipping and cueing, generate alerts, etc.  Processing can be implemented as modules with a construction similar to sensor modules.  Processes can also be simple executable modules consisting of a callable function.  Process chains can be described in a SensorML document that OpenSensorHub utilizes to orchestrate complex processes, and also to capture the provenance of the outputs.
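The chaining idea can be pictured as plain function composition.  The sketch below uses hypothetical interfaces, not OSH’s actual processing API (which describes the equivalent structure in SensorML), simply to show how each atomic step’s output type becomes the next step’s input type:

```java
import java.util.function.Function;

// Hypothetical stand-in for an atomic process: one typed input in,
// one typed output out. OSH describes the real equivalents, and their
// wiring, in SensorML documents.
interface AtomicProcess<I, O> extends Function<I, O> {}

class ProcessChainSketch {
    // Chaining is composition: the output of each step feeds the next,
    // e.g. video frame -> detected features -> alert message.
    static <A, B, C> Function<A, C> chain(AtomicProcess<A, B> first,
                                          AtomicProcess<B, C> second) {
        return first.andThen(second);
    }
}
```

A chain like frame → detections → alert then falls out of composing two such steps, and the SensorML description of the chain doubles as a provenance record for the final output.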

We decided to process image frames captured from our HD Camera Sensor Module and perform feature detection by implementing a processing module utilizing an OpenCV-based Java library and bindings.  Our cat wasn’t too excited, but he cooperated nonetheless.  As can be seen in the figure, the left-hand image is the scene as captured by the camera sensor, while the right is the post-processed image with a bounding box around our not-so-grumpy cat.
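Stripped of the OSH module plumbing, the core of that detection step is a handful of OpenCV calls.  Below is a minimal sketch using OpenCV’s Java bindings and the pre-trained frontal cat-face cascade that ships with OpenCV; file paths are placeholders:

```java
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

class CatDetector {
    public static void main(String[] args) {
        // Load the OpenCV native library (name/path depends on your install)
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Pre-trained cat-face cascade distributed with OpenCV's data files
        CascadeClassifier cascade =
            new CascadeClassifier("haarcascade_frontalcatface.xml");

        Mat frame = Imgcodecs.imread("frame.jpg");   // one frame from the camera output
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.equalizeHist(gray, gray);            // improves cascade robustness

        // Run the detector and draw a bounding box around each hit
        MatOfRect detections = new MatOfRect();
        cascade.detectMultiScale(gray, detections);
        for (Rect r : detections.toArray())
            Imgproc.rectangle(frame, r.tl(), r.br(), new Scalar(0, 255, 0), 2);

        Imgcodecs.imwrite("frame_detected.jpg", frame);
    }
}
```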

The same mechanisms can be used to task and maneuver robots in the sky (a.k.a. drones… just wait for the future blog post on this!), and to make sense of the sensor observations they are collecting.  It’s just another use of the same modular and portable architecture offered by OpenSensorHub.

We point this out because the drone case helps further illustrate the processing capabilities of OpenSensorHub outlined above.  The image shows the exact same processing module applied to multiple outputs from a UAS platform sensor module, with OpenSensorHub hosted on a common business-class Windows laptop.  In the sample image, real-time sensor output from the camera interface is fed as input to the feature detection process, configured to detect cars; each detection is labeled with a time stamp and the approximate location of the sensor, supplied by a distinct output of the platform sensor module.  Since we used a Haar Cascade vehicle model trained on terrestrial traffic cameras, detection was not 100%, but that was not our intent.  Our intent was to illustrate that with OpenSensorHub we can run such processes from the Edge to the Cloud.

Processing such as this can be expanded to tracking an object or feature within the field of view: by analyzing the frames and generating tasking commands to pan and tilt the camera, by triggering range detection using the ultrasonic module, or by combining these within a process chain to identify a target in the scene, determine its range, and follow it.  Similarly, we could use the output from a beacon to direct a remote camera to locate and track the source of the signal, or simply to steer our robot in the direction of the signal, autonomously or semi-autonomously.  Complex processing chains can be implemented by combining atomic processes, each converting one or more inputs (sensor outputs) into one or more process outputs.  At each step, outputs from an atomic process can be treated as inputs to the next atomic process in the chain, until the desired output is achieved.

In fact, we have demonstrated this capability (VBC Exercise: Tasking Camera to Track Officer) using a PTZ camera atop City Hall in Huntsville, AL to track a GPS signal from an OpenSensorHub-enabled Android device worn as a body camera by a participant walking in a nearby park.  The camera, when tasked to track the subject, received PTZ commands converted from GPS positional information received by OpenSensorHub from the bearer’s Android device.  The result was an automatic response by the camera that maintained the subject within the field of view until the camera was redirected or tasked anew.  Similarly, this tracking data can be used to dynamically task the maneuvers of robots, whether on the ground or in the air, to further observe the object of interest.
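At the heart of that GPS-to-PTZ conversion is plain geometry.  Here is a simplified sketch, assuming a flat-earth approximation over short ranges and ignoring the camera calibration and mounting offsets that a real tasking chain must account for:

```java
/** Simplified GPS-to-pan/tilt conversion (flat-earth approximation). */
class PanTiltSolver {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Returns {panDeg, tiltDeg} pointing from the camera toward the target. */
    static double[] solve(double camLat, double camLon, double camAltM,
                          double tgtLat, double tgtLon, double tgtAltM) {
        // Local east/north offsets in meters (valid over short ranges)
        double north = Math.toRadians(tgtLat - camLat) * EARTH_RADIUS_M;
        double east  = Math.toRadians(tgtLon - camLon) * EARTH_RADIUS_M
                       * Math.cos(Math.toRadians(camLat));

        double pan    = Math.toDegrees(Math.atan2(east, north));  // bearing from north
        double ground = Math.hypot(east, north);
        double tilt   = Math.toDegrees(Math.atan2(tgtAltM - camAltM, ground));
        return new double[] { pan, tilt };
    }
}
```

A deployed chain would additionally subtract the camera’s installed heading and clamp the result to the unit’s mechanical limits before issuing the SPS command.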

OpenSensorHub is a truly modular and extensible framework for Sensors, Things, and Robots.  Computer vision, machine learning, and artificial intelligence can be easily incorporated, whether through existing open-source libraries, custom solutions, or a combination of open- and closed-source implementations.  OpenSensorHub provides robust sensor, process, and tasking descriptions with SensorML; gathers, harmonizes, stores, and publishes time-tagged sensor observations through the Sensor Observation Service (SOS); tasks sensors, actuators, and processes through the Sensor Planning Service (SPS); and offers versatile processing capabilities.  We are excited by what can be accomplished with OpenSensorHub and future integrations with more complex and advanced robotic systems and ROS-based platforms.

— Nick Garay 7/27/21
