OpenSensorHub allows for vertical and horizontal integration of systems: multiple OpenSensorHub-enabled systems can be deployed and configured to share data, forming a network and hierarchy of systems configurable to meet the desired objectives. OpenSensorHub offers two distinct methods for such integration: SOS-T (Sensor Observation Service – Transactional) and SWE (Sensor Web Enablement) Virtual Sensors. SOS-T provides a push mechanism, allowing a sensor with network connectivity to push directly to an OpenSensorHub instance, or a “local” OpenSensorHub instance to push a sensor’s description and observations to a “remote” instance of OpenSensorHub. SWE Virtual Sensors, on the other hand, provide a pull mechanism in which an instance of OpenSensorHub is configured to mirror one or more sensors hosted on a remote instance. To clients connecting to their “local” instance of OpenSensorHub, the fact that the sensor is actually hosted and managed by a “remote” instance is of no consequence.
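As an illustrative sketch of the push path, a local instance might register a sensor with a remote instance by POSTing an SOS 2.0 InsertSensor request to the remote endpoint. The fragment below is abridged: namespace declarations and the full SensorML description are omitted, and the identifiers are invented for this example.

```xml
<!-- Abridged SOS-T InsertSensor request; namespaces omitted,
     identifiers illustrative only. -->
<swes:InsertSensor service="SOS" version="2.0.0">
  <swes:procedureDescriptionFormat>
    http://www.opengis.net/sensorml/2.0
  </swes:procedureDescriptionFormat>
  <swes:procedureDescription>
    <sml:PhysicalSystem gml:id="csiCamera">
      <!-- full SensorML description of the sensor goes here -->
    </sml:PhysicalSystem>
  </swes:procedureDescription>
  <swes:observableProperty>
    http://sensorml.com/ont/swe/property/VideoFrame
  </swes:observableProperty>
</swes:InsertSensor>
```

Observations would then be pushed with subsequent InsertResult (or InsertObservation) requests against the same transactional endpoint.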
In “Robotics Applications Employing OpenSensorHub” we introduced the ability for OpenSensorHub to operate on single-board computers (SBCs) used to control robotics platforms. You may recall that the “PiBot” was controlled by a Raspberry Pi 4B with 8 GB of RAM and a 16 GB SD card hosting Ubuntu. This SBC was powerful enough to host OpenSensorHub and its sensor, process, and actuator modules. In fact, where processing is concerned, we developed a SensorML process chain for computer vision built using OpenCV and Haar Cascades. This allowed us to illustrate the ability to perform processing on board, albeit using only the Central Processing Unit (CPU). CPUs are capable of performing computer vision operations but suffer from throughput bottlenecks, especially for real-time applications such as video. It is thus advantageous to off-load processing-intensive operations from the CPU to hardware that can handle them more readily: Graphics Processing Units (GPUs).
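A SensorML process chain of this kind is described as an aggregate process that names its component processes and wires their outputs to inputs. The following fragment is a hedged sketch of that structure; the component identifiers, hrefs, and stream names are invented for illustration, not taken from the actual PiBot configuration.

```xml
<!-- Illustrative SensorML 2.0 aggregate process; ids and hrefs are invented. -->
<sml:AggregateProcess gml:id="visionChain">
  <sml:components>
    <sml:ComponentList>
      <sml:component name="camera" xlink:href="urn:example:sensor:picam"/>
      <sml:component name="detector" xlink:href="urn:example:process:haarDetect"/>
    </sml:ComponentList>
  </sml:components>
  <sml:connections>
    <sml:ConnectionList>
      <sml:connection>
        <sml:Link>
          <!-- wire the camera's video output into the detector's input -->
          <sml:source ref="components/camera/outputs/videoFrame"/>
          <sml:destination ref="components/detector/inputs/videoFrame"/>
        </sml:Link>
      </sml:connection>
    </sml:ConnectionList>
  </sml:connections>
</sml:AggregateProcess>
```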
The Nvidia Jetson TX2 was chosen as the platform on which to experiment with OpenSensorHub and SensorML processing. The Jetson TX2 development kit sports an NVIDIA Pascal™ architecture GPU, a dual-core NVIDIA Denver 2 64-bit CPU paired with a quad-core Arm Cortex-A57 complex, 8 GB of 128-bit LPDDR4 memory, 32 GB of eMMC 5.1 flash storage, connectivity to 802.11ac Wi-Fi and Bluetooth-enabled devices, and a 5 MP fixed-focus MIPI CSI camera. We used the Nvidia SDK Manager to flash our board with the recommended image of Ubuntu Linux and accompanying tools and libraries, including OpenCV. However, during development we found it necessary to remove the pre-installed OpenCV and to download and compile it from source. This allowed us to control which modules were built, as we wanted to ensure OpenCV was built with Compute Unified Device Architecture (CUDA) support. The final element to install was the OpenJDK 11 Software Development Kit (SDK), which includes the Java Virtual Machine (JVM) and the tools necessary to compile and execute Java code.
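Rebuilding OpenCV with CUDA support follows the usual CMake flow. The sketch below is a hedged build recipe, not our exact commands: versions, paths, and the job count are assumptions, and only the CUDA-related flags reflect the point being made here.

```shell
# Illustrative OpenCV-with-CUDA build recipe for the Jetson TX2.
# Versions, paths, and parallelism are assumptions; adjust to your setup.
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git
mkdir -p opencv/build && cd opencv/build
# CUDA_ARCH_BIN=6.2 targets the TX2's Pascal GPU compute capability.
cmake -D CMAKE_BUILD_TYPE=Release \
      -D WITH_CUDA=ON \
      -D CUDA_ARCH_BIN=6.2 \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
      ..
make -j4 && sudo make install
```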
Targeting the Nvidia Jetson TX2, we wrote two custom modules – a sensor driver for the CSI camera and a process module – along with a SensorML document that describes how the process chain is to be built. The SensorML document denotes the data sources serving as inputs to the process, the process configuration parameters, the process outputs necessary to complete the process chain, and how they should be “connected”, i.e., the mapping of outputs to inputs. When all three are loaded by OpenSensorHub, the process chain is established: video from the sensor driver is fed to the process module, which performs the processing logic and produces processed video frames.
While OpenSensorHub runs within a JVM, executing feature detection on the GPU requires writing the logic in a language such as C or C++ that can be compiled to machine instructions and interface with the physical GPU via OpenCV’s CUDA constructs. The logic for handling the actual feature detection is simple, and many examples are available online in C/C++ and Python. We opted for C++ and compiled our code into a shared library using CMake, a utility for writing build “recipes” for software projects. Before we could compile the code, we needed to write a special Java class called a Java Native Interface (JNI) class. This class acts as a bridge between the JVM and external libraries: it defines an interface specification through which calls can be made from the JVM to those libraries via an instance of the class. The C++ code becomes callable by implementing the logic required by that interface specification. Again, this code was written to operate directly with the GPU via OpenCV and CUDA on behalf of our Java-based processing module. The compiled shared library was then packaged with the Java process module for deployment. Once integrated and configured within OpenSensorHub, the process receives image frames from the sensor driver, calls the CUDA-based OpenCV logic via the JNI class, and receives back processed image frames with bounding boxes around the detected features. As with the “PiBot’s” CPU-based version of the module, feature detection was performed against the pre-trained Haar Cascade models distributed with OpenCV.
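The JNI bridge pattern can be sketched as follows. The class, method, and library names here are hypothetical, not OpenSensorHub’s actual API: the native method would be implemented in the CMake-built C++ shared library, while the pure-Java helper unpacks its flat result array into bounding boxes.

```java
// Hypothetical JNI bridge class; names are illustrative, not OSH's actual API.
public class CudaDetectorJni {

    private static boolean nativeLoaded;

    static {
        try {
            // Loads libcudadetector.so, the C++ shared library built with CMake.
            System.loadLibrary("cudadetector");
            nativeLoaded = true;
        } catch (UnsatisfiedLinkError e) {
            nativeLoaded = false; // native library not present on this machine
        }
    }

    /** True if the native shared library was found and loaded. */
    public static boolean isNativeLoaded() {
        return nativeLoaded;
    }

    /**
     * Implemented in C++ against OpenCV's CUDA API; runs the Haar cascade
     * on one frame and returns detections as a flat array [x, y, w, h, ...].
     */
    public static native int[] detect(byte[] frame, int width, int height);

    /** Pure-Java helper: unpack the flat result into one int[4] per box. */
    public static int[][] toBoxes(int[] flat) {
        int[][] boxes = new int[flat.length / 4][4];
        for (int i = 0; i < boxes.length; i++) {
            System.arraycopy(flat, i * 4, boxes[i], 0, 4);
        }
        return boxes;
    }
}
```

On the native side, the C++ implementation exports a function whose name and signature are generated from this class (e.g. via `javac -h`), calls OpenCV’s CUDA cascade classifier internally, and packs the resulting rectangles into the returned array.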
The importance of being able to run OpenSensorHub on an SBC such as the Nvidia Jetson TX2 is apparent in situations where one or more sensors are present on a platform such as an Android device. Running OpenSensorHub on an Android device allows the device’s on-board sensors’ observations to be recorded, shared, processed or analyzed, and even visualized by external clients. For example, the on-board camera, GPS, and magnetometer can provide a geolocated and oriented video stream in near real time to client applications that connect to the Android’s OpenSensorHub instance to retrieve such observations. Having the geolocated and oriented video stream available is valuable: the client knows where the device is geographically as well as its orientation, giving greater context to what the video stream presents. What happens, though, if the client wants feature detection applied to the video stream? OpenCV can run directly on the Android device and potentially provide this capability, but doing so may overburden the device. To avoid placing an unnecessary load on the Android device, increasing its power demands, and decreasing its usability, any extra processing should ideally be performed off the device. An external processing unit on an auxiliary pack connected (either tethered or wirelessly) to the Android device is ideal for handling the extra load. Running OpenSensorHub on such a unit and employing either SOS-T or SWE Virtual Sensors between the Android OpenSensorHub instance and the external unit alleviates the demand for extra processing power. The Jetson TX2 is well suited to such an application: it can host its own OpenSensorHub instance, perform processing, and, with additional storage capacity, record all sensor observations it receives as well as the new observations produced by its on-board SensorML process chains.
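A hedged sketch of what such a pairing might look like on the TX2 side is a virtual-sensor module pointed at the Android instance’s SOS endpoint. The module class name, endpoint URL, identifiers, and property names below are illustrative assumptions, not verbatim OpenSensorHub configuration.

```json
{
  "objClass": "org.sensorhub.impl.sensor.swe.SWEVirtualSensor",
  "id": "android-camera-mirror",
  "name": "Android Camera (mirrored)",
  "sosEndpointUrl": "http://android-device:8181/sensorhub/sos",
  "sensorUID": "urn:android:device:camera",
  "observedProperties": [
    "http://sensorml.com/ont/swe/property/VideoFrame"
  ]
}
```

With a module like this enabled, the TX2’s OpenSensorHub instance pulls the mirrored video stream and can feed it straight into the GPU-backed process chain described above.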
The ability to configure a networked, hierarchical system for vertical and horizontal integration is greatly enhanced by taking advantage of the full spectrum of available hardware technologies, from cloud to edge. This includes edge processors working in tandem, such as Android and Nvidia GPU compute platforms. Similarly, other combinations can be applied at the edge, such as Arduino- or Raspberry Pi-powered devices and other custom systems. Apply these capabilities to platforms, systems, or systems-of-systems employing OpenSensorHub and unleash the power of SensorML process chains with GPUs.