Text by Konstantinos Kentrotis, MEng MDes PhD | Research & Innovation Consultant at EXUS

Apart from utilising state-of-the-art Human Computer Interaction (HCI) technologies to create a safe and pleasant working environment, WorkingAge (WA) deploys the same methods to interact with its users in an easy and privacy-respecting manner. To protect the privacy of WA users and of their co-workers, EXUS, a proud participant of the WA project, has developed a Face Recognition & Gesture-based Interaction platform (EFRGI). The platform enables its users to 1) be identified and authorised by the system and 2) interact with it through simple hand gestures (e.g. enumeration).

The EFRGI HCI platform comprises two main components:

  • The first component is responsible for face recognition and is mainly used for security purposes. Its goal is, on the one hand, to ensure that only registered, and therefore validated, users can interact with the platform and, on the other hand, that no human activity or face outside the system’s interest or access rights is recorded. When registering with the system, each platform user uploads a ‘selfie’ picture via a supported interface (e.g. a mobile application). The user’s picture and details are uploaded and stored on the platform’s cloud server in a GDPR-compliant way. Once registered, the user becomes identifiable by the platform and hence authorised to use its second component, i.e. to interact with it through a set of pre-defined hand gestures.
  • The second component is responsible for hand-gesture interaction with the system. Using any compatible, registered end-device (camera), and upon giving their consent, users can interact with the platform through hand gestures. This component relies on available datasets to train its gesture deep learning model, and the choice of dataset depends on the system’s requirements. At its current stage, the platform supports simple, gesture-based communication codes (e.g. enumeration), but, after suitable training of the underlying Machine Learning (ML) algorithms, this vocabulary can be adapted to the system’s needs.
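The article does not publish EFRGI’s implementation, but the face recognition step described above typically reduces to comparing a numerical “embedding” of the live camera frame against the embedding of the registered ‘selfie’. The sketch below illustrates that verification logic only; the embedding vectors, threshold value and function names are illustrative assumptions, not WorkingAge code (a real system would obtain the embeddings from a face-embedding network).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_face(probe_embedding, stored_embedding, threshold=0.6):
    """Accept the user only if the live-frame embedding is close enough
    to the embedding stored at registration time."""
    return cosine_similarity(probe_embedding, stored_embedding) >= threshold

# Illustrative 128-dimensional embeddings; in practice these would come
# from a face-embedding model applied to the selfie and the camera frame.
rng = np.random.default_rng(0)
stored = rng.normal(size=128)                           # registered user
probe_same = stored + rng.normal(scale=0.1, size=128)   # same face, new frame
probe_other = rng.normal(size=128)                      # a different person
```

With this gating in place, frames from unregistered faces are rejected before any gesture processing begins, which matches the privacy goal described above.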

A typical use case of the EFRGI platform is as follows: the user registers via their device (e.g. mobile phone), filling in their details and uploading a ‘selfie’ picture, which is stored on the server. The end-device (camera) begins capturing frames only upon a registered user’s permission/request. First, a face recognition phase takes place, confirming that the face of the user requesting the service matches the one stored in the server under that user’s dataset. From that point onwards, gestures are monitored: the continuous signal is transferred, sampled and processed, then compared with an “average gesture signal” using ML algorithms until a match is identified. Upon a match, the signal is translated into its corresponding interpretation and the system acts as planned by the platform operator. For example, in the use case below, an AI engine sends alerts and recommendations as ordered by the user’s gesture.

Figure 1. WorkingAge’s EFRGI platform
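The matching step in the walkthrough above, where a sampled signal window is compared against an “average gesture signal” until a match is found, can be sketched as template matching. Everything here is a hypothetical illustration: the 1-D templates, the gesture names, the action table and the 0.9 threshold are assumptions for the example, not the platform’s actual model, which the article describes only as a trained deep learning classifier.

```python
import numpy as np

# Hypothetical "average gesture signals" for two enumeration gestures,
# kept 1-D for illustration (real input would be multi-dimensional).
TEMPLATES = {
    "one": np.sin(np.linspace(0, np.pi, 50)),
    "two": np.sin(np.linspace(0, 2 * np.pi, 50)),
}

# Hypothetical mapping from a recognised gesture to the operator-planned action.
ACTIONS = {"one": "send alert", "two": "send recommendation"}

def match_gesture(window, threshold=0.9):
    """Compare one sampled window against each average template and
    return the best-matching gesture name, or None if nothing clears
    the similarity threshold."""
    best, best_score = None, threshold
    for name, template in TEMPLATES.items():
        # Normalised cross-correlation (Pearson correlation) of the window
        # with the template.
        w = (window - window.mean()) / (window.std() + 1e-9)
        t = (template - template.mean()) / (template.std() + 1e-9)
        score = float(np.dot(w, t) / len(w))
        if score > best_score:
            best, best_score = name, score
    return best

# A noisy capture of the "one" gesture should still map to its action.
noisy = TEMPLATES["one"] + np.random.default_rng(1).normal(scale=0.05, size=50)
```

In the platform itself a trained deep network would replace the correlation score, but the control flow is the same: sample, compare, and only act once a gesture clears the confidence threshold.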

The platform will have reached TRL 6 by the end of the WA project. EFRGI has been designed and realised by the EXUS Innovation team (https://www.exusinnovation.co.uk/) working on the WA project, and stems from the collaboration between its Research Consultants and Software Engineers.

(Photo by Michael Rodichev on Unsplash)
