Sensor tasking and observations, as well as processes and process chains, are core capabilities of OpenSensorHub (OSH), provided through its implementation of the Open Geospatial Consortium's (OGC) Sensor Web Enablement (SWE), SWE Common Data, and SensorML standards. Sensor outputs can feed processes and process chains that take one or more sensors' outputs as input and in turn generate new outputs. Processes and process chains that employ Artificial Intelligence (AI), Machine Learning (ML), and Computer Vision (CV) algorithms and libraries augment situational awareness by providing reasoning, identification, classification, and knowledge discovery, generating novel and timely outputs for decision makers.
OSH implements OGC's SWE standards, allowing sensor observations to be harvested from geographically distributed, disparate sensors, sensor networks, platforms, and robots and served to clients as harmonized SWE Common Data observation offerings. Similarly, OSH provides the ability to task sensors, sensor networks, platforms, and robots. Completing the suite of capabilities, processes and process chains accept observations as inputs, allowing any number of custom transformations, aggregations, and analyses of the data, and generate new outputs. These processes and process chains can be described in SensorML documents or built with OSH's process engine API, which is used to develop processing modules and connect them to sensor outputs and taskable parameters. A process or process chain can be as simple as a coordinate transformation or as complex as a deep learning artificial neural network.
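To make the "simple as a coordinate transformation" case concrete, the sketch below shows what such an atomic process might look like: a geodetic-to-ECEF (Earth-Centered Earth-Fixed) conversion wrapped in a process object with a single-record execute step. The class and record layout are illustrative assumptions for this article, not OSH's actual process engine API.

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0            # semi-major axis (m)
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert a geodetic position (degrees, meters) to ECEF coordinates (meters)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius of curvature
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + alt_m) * math.sin(lat)
    return x, y, z

class CoordTransformProcess:
    """Hypothetical atomic process: one observation record in, one record out."""

    def execute(self, record):
        # 'record' is assumed to carry harmonized lat/lon/alt fields
        x, y, z = geodetic_to_ecef(record["lat"], record["lon"], record["alt"])
        return {"x": x, "y": y, "z": z}
```

In a chain, the output record of one such process would be wired to the input of the next, exactly as the SensorML description connects components.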
At OpenSensorHub.org we are integrating leading AI, ML, and CV methodologies and libraries into atomic processes: processing modules that can be used individually or aggregated into process chains. OpenCV, a leading open-source computer vision library used in robotics and other applications, has been integrated as a configurable process that performs feature detection on video data fed to OSH, identifying and classifying objects within the sensor's field of view on the fly. Similarly, algorithms such as K-Means clustering have been implemented as processes and used to classify heterogeneous sensor platforms. Classification is performed on dynamic common outputs shared across the sensors, harmonized as SWE Common Data, which serve as classification features; the aim is to enhance situational awareness by assigning sensors to K clusters. These same processes can provide automated tasking and management of sensor-enabled platforms based on any combination of sensor, atomic process, or process chain outputs.
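As a minimal sketch of the K-Means step, the following pure-Python Lloyd's algorithm groups sensor feature vectors (here, arbitrary 2-D tuples) into K clusters. It illustrates the technique only; the actual OSH process module, its inputs, and its configuration are not shown here.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-Means (Lloyd's algorithm) over a list of equal-length tuples.

    Returns (centroids, clusters), where clusters[i] holds the points
    currently assigned to centroids[i].
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from k distinct points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its members
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return centroids, clusters
```

In the scenario above, each tuple would be a feature vector built from the common, SWE-harmonized outputs of one sensor platform, and the resulting cluster assignment would drive tasking decisions.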
As OSH has been proven to run on devices from the edge (e.g., drones, Raspberry Pi, Android phones, Arduino) to the cloud, processing can be deployed with OSH nodes anywhere. Furthermore, OSH instances can be configured to share sensor observations with other instances across a network, providing significant flexibility in deciding where processing is best performed. For instance, given a video camera with a particular internet bandwidth limitation, is it better to push all data forward and process the video in the cloud, or to detect features at the camera and send only low-volume alerts of recognized objects and short video clips to a command center? This power and flexibility allow data consumers, from the field to the command center and anywhere in between, to share and leverage augmented observations in decision making, enhancing observe, orient, decide, and act (OODA) loops.
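A back-of-the-envelope comparison makes the tradeoff tangible. All figures below (a 4 Mbps video stream, 200 detections per day, 2 MB clips, 1 KB alerts) are illustrative assumptions, not measurements from any deployment.

```python
SECONDS_PER_DAY = 24 * 3600

def daily_gb_stream(video_mbps=4.0):
    """GB/day to push the full raw video stream to the cloud (decimal units)."""
    return video_mbps * SECONDS_PER_DAY / 8 / 1000

def daily_gb_edge(alerts_per_day=200, clip_mb=2.0, alert_kb=1.0):
    """GB/day when detection runs at the camera and only alerts + clips are sent."""
    return alerts_per_day * (clip_mb + alert_kb / 1000) / 1000

# A 4 Mbps camera streamed continuously costs ~43 GB/day, while edge
# detection with 200 alerts and clips costs well under 1 GB/day.
```

Under these assumptions the edge-processing option moves roughly two orders of magnitude less data, which is why the choice of where an OSH node runs its processing is an architectural decision rather than an afterthought.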