HELLE Project
Challenge
Wearable-based data are heterogeneous, due to varying sampling rates, sensor calibration, and different modalities. Although achieving homogeneity directly at the device is almost impossible (manufacturers would have to agree on a standard), mitigation techniques towards interoperability can be applied to define a common way of describing the produced values. Standards such as the W3C WoT exist; however, a middleware is needed to enforce them. Moreover, most existing datasets fail to qualify as FAIR because of the Interoperability pillar. On the other hand, existing DL models for inference are large, leading to their deployment in the cloud, which is ineffective in terms of cost, latency, and data privacy. HELLE produces hardware-specific data collection modules for heterogeneous devices, along with TinyML models and reusable data processing pipelines for time series, extending the NEPHELE VOStack (https://netmode.gitlab.io/vo-wot/).
Solution
To address these challenges, the HELLE project employed the W3C Web of Things (WoT) framework and the NEPHELE platform, enabling the creation of Virtual Objects (VOs) to abstract and manage the physical devices and sensors. Each VO was designed to represent a specific type of sensor input. For instance, a Gesture VO was developed to handle accelerometer and gyroscope data from a smartwatch, while a Stress VO managed data related to stress levels derived from physiological sensors. These VOs were organized in a modular and flexible system, with the Composite Virtual Object (cVO) serving as the digital twin interface, aggregating and contextualizing the data from multiple VOs to create a holistic view of the individual.
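To make the VO abstraction concrete, the sketch below shows a minimal, hypothetical W3C WoT Thing Description (TD) for the Gesture VO. The names used here ("GestureVO", "acceleration", "angularVelocity", "predictGesture") are illustrative assumptions, not the actual HELLE schema.

```python
import json

# Hypothetical, minimal W3C WoT Thing Description for the Gesture VO.
# All property and action names are illustrative, not the HELLE schema.
gesture_vo_td = {
    "@context": "https://www.w3.org/2022/wot/td/v1.1",
    "title": "GestureVO",
    "securityDefinitions": {"basic_sc": {"scheme": "basic"}},
    "security": ["basic_sc"],
    "properties": {
        "acceleration": {       # latest accelerometer reading
            "type": "object",
            "properties": {
                "x": {"type": "number"},
                "y": {"type": "number"},
                "z": {"type": "number"},
            },
        },
        "angularVelocity": {    # latest gyroscope reading
            "type": "object",
            "properties": {
                "x": {"type": "number"},
                "y": {"type": "number"},
                "z": {"type": "number"},
            },
        },
    },
    "actions": {
        "predictGesture": {     # run the TinyML classifier on a sensor window
            "output": {"type": "string"},  # one of the five gesture labels
        },
    },
}

# A TD is plain JSON-LD, so it can be serialized directly:
td_json = json.dumps(gesture_vo_td, indent=2)
```

A cVO acting as the digital twin interface can consume several such TDs (e.g. Gesture VO and Stress VO) and aggregate their properties into one contextual view of the individual.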
A key part of the solution was the integration of TinyML models within the gesture recognition VO. This allowed raw sensor data (gyroscope and accelerometer) to be processed locally on the VO after applying a set of generic preprocessing functions, such as data windowing, segmentation, normalization, and calibration. These functions ensured that raw data was prepared in a form optimized for inference with the TinyML model, which could run on low-power devices such as smartwatches. The TinyML model enabled the system to recognize five specific gestures while minimizing both energy consumption and latency.
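The windowing and normalization steps above can be sketched as follows. This is a minimal illustration of sliding-window segmentation followed by per-window min-max normalization; the function names, window size, and step are assumptions, not the HELLE pipeline's actual parameters.

```python
# Sketch of the generic preprocessing: sliding-window segmentation plus
# per-window min-max normalization. Parameters are illustrative only.

def segment(samples, window_size, step):
    """Split a 1-D sample stream into (possibly overlapping) windows."""
    return [samples[i:i + window_size]
            for i in range(0, len(samples) - window_size + 1, step)]

def normalize(window):
    """Scale a window to [0, 1]; a constant window maps to all zeros."""
    lo, hi = min(window), max(window)
    if hi == lo:
        return [0.0] * len(window)
    return [(x - lo) / (hi - lo) for x in window]

def preprocess(samples, window_size=50, step=25):
    """Windows ready to be fed to the TinyML gesture classifier."""
    return [normalize(w) for w in segment(samples, window_size, step)]

# Example: 100 accelerometer samples with 50% overlap -> 3 windows
windows = preprocess(list(range(100)))
```

Per-axis calibration (offset and scale correction) would typically be applied before segmentation, but is device-specific and therefore omitted here.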
Another benefit of using the NEPHELE platform was the ability to access both real-time and historical data. This allowed for more flexible interaction, such as running predictions on past data for analysis and asynchronously processing data without requiring continuous real-time input. Additionally, secure communication protocols, including token-based authentication for database interactions and basic authentication for inter-device communication, ensured that the data exchange between the VOs, cVO, and databases was secure, maintaining the integrity and confidentiality of the system. By combining real-time data processing with secure, efficient communication, the HELLE project provided a scalable and robust solution for digital twin applications, such as real-time monitoring of first responders’ physiological states.
Energy and Availability Optimization
In the first stage of the project, our focus was on running the generic functions (such as data segmentation, normalization, and calibration) and the TinyML models directly on the user's smartwatch. This setup allowed the real-time processing of sensor data from the accelerometer and gyroscope to predict user gestures. While this approach enabled real-time predictions, it posed significant challenges in terms of performance and energy consumption, as the smartwatch has limited processing power and battery life.
To improve both performance and energy efficiency, in the next phase, we moved the generic functions and TinyML models to the Virtual Object (VO) stack. By shifting this processing to the VO stack, we were able to offload computational tasks from the smartwatch to more capable cloud or edge computing resources. This change drastically reduced the energy consumption on the smartwatch, allowing it to last longer without compromising the responsiveness or accuracy of gesture detection. Additionally, the VO stack architecture allowed for more flexible data processing, including access to historical data and asynchronous model execution, further optimizing the system for real-time applications such as those used by first responders.
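The offloading step described above can be sketched as the smartwatch packaging a preprocessed sensor window and invoking the VO's prediction action over HTTP. The endpoint URL, payload shape, and placeholder credentials below are assumptions for illustration, not the HELLE implementation.

```python
import json

# Hypothetical VO-stack endpoint for the gesture prediction action.
VO_GESTURE_ENDPOINT = "http://vo-stack.example/gesture-vo/actions/predictGesture"

def build_offload_request(window):
    """Package one preprocessed sensor window as a WoT action invocation.

    Returns (url, headers, body) ready to be sent with any HTTP client.
    The basic-auth credentials are a placeholder, per the text's
    inter-device authentication scheme.
    """
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Basic <credentials>",  # placeholder, not real
    }
    body = json.dumps({"input": window})
    return VO_GESTURE_ENDPOINT, headers, body
```

Because inference now runs on the VO stack, the watch only spends energy on sensing and transmission, while the same action can also be invoked asynchronously on historical windows fetched from the database.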
Presentation Video: