Last year we introduced a new way to navigate with Sentiri, a proximity-sensing headband that lets the wearer "sense" their environment through haptic feedback from eight modules placed around the head.
Using sensors, coin cell motors and BLE connectivity, Sentiri communicates with the wearer to guide them in a specific direction, alert them when they’ve arrived at a destination, and even warn them of upcoming obstacles in their path.
That was last year. Now we’re kicking it up a notch.
Fjord R&D labs around the globe are currently collaborating to create an even more impressive Sentiri V2. For this version, we’ve transitioned from a headband to a haptic wearable that sits on the chest. Unlike the original, which used arbitrary infrared depth data, this version implements context-based navigation and SLAM (simultaneous localization and mapping) tracking.
It’s the context-based element of the navigation that makes for a more specific and customized experience. The wearer isn’t simply guided based on the obstacles around them; environments can also be pre-programmed to direct the wearer straight to a precise, desired location.
Take this tech in the home. After an area has been 3D scanned, a person can go in and tag details using the Bridge Engine: the location of the thermostat, closet or silverware drawer, for instance. Once this environment has been established, the user can issue verbal commands to find what they’re looking for. After a user asks “Where is the thermostat?”, for example, Sentiri will guide them there directly (and safely) using haptic feedback.
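To make that concrete, here’s a minimal sketch of the tag-lookup step. None of these names or structures come from Sentiri or the Bridge Engine; they only illustrate how a recognized voice query could resolve to a location tagged during scanning.

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Tag:
    """A label placed in the scanned map (hypothetical structure)."""
    name: str
    position: tuple[float, float, float]  # metres, in the scanned map's frame

# Tags a person might place after 3D scanning a home.
TAGS = {
    "thermostat": Tag("thermostat", (2.4, 1.5, 0.0)),
    "silverware drawer": Tag("silverware drawer", (-1.1, 0.9, 3.2)),
}

def resolve_query(utterance: str) -> Tag | None:
    """Match a recognized utterance such as 'Where is the thermostat?'
    against the names of the tags placed in the environment."""
    text = utterance.lower()
    for name, tag in TAGS.items():
        if name in text:
            return tag
    return None

print(resolve_query("Where is the thermostat?"))
# Tag(name='thermostat', position=(2.4, 1.5, 0.0))
```

Once a query resolves to a tagged position, guidance toward that position can begin; a sketch of that step appears further below.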
The technology is, in some ways, similar to V1, and includes:
Microcontroller and battery - Run Sentiri
Coin cell motors - Provide haptic feedback in the form of vibrations that increase in intensity as an object comes closer to the user (see the sketch after this list)
BLE connectivity - Allows control from another device or application and can be used to communicate information to Sentiri
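As an illustration of the coin cell motor behavior described above, here’s a minimal sketch that maps obstacle distance to vibration intensity. The range limits and the 8-bit PWM scale are assumed values for illustration, not Sentiri’s actual firmware parameters.

```python
MAX_RANGE_M = 3.0   # beyond this distance, no vibration (assumed)
MIN_RANGE_M = 0.2   # at or below this distance, full intensity (assumed)
PWM_MAX = 255       # assumed 8-bit motor driver duty cycle

def intensity_for_distance(distance_m: float) -> int:
    """Map a sensed obstacle distance to a motor duty cycle,
    increasing linearly as the obstacle approaches."""
    if distance_m >= MAX_RANGE_M:
        return 0
    if distance_m <= MIN_RANGE_M:
        return PWM_MAX
    closeness = (MAX_RANGE_M - distance_m) / (MAX_RANGE_M - MIN_RANGE_M)
    return round(closeness * PWM_MAX)

for d in (3.5, 2.0, 1.0, 0.3):
    print(f"{d} m -> PWM {intensity_for_distance(d)}")
```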
For V2, we added the following tech:
Motor drivers to allow for more complex communications via haptics
A phone that uses computer vision to track the user’s position in space
A setup app and Bridge Engine, which allow the user to 3D scan an environment and place contextual tags in 3D space via Mixed Reality
The start of a haptic language protocol that users would learn to understand the environment and guidance prompts (a sketch follows this list)
Speech recognition to add another interface for navigation control
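Here’s a speculative sketch of what such a haptic vocabulary could look like. The cue names and pulse patterns below are invented for illustration; the actual protocol is still at its start.

```python
# Each pattern is a list of (motor, duration_ms, duty) steps that a motor
# driver would play back in sequence. Motor names are hypothetical.
HAPTIC_VOCAB = {
    "turn_left":      [("left", 150, 200), ("left", 150, 200)],   # two short pulses
    "turn_right":     [("right", 150, 200), ("right", 150, 200)],
    "arrived":        [("center", 400, 255)],                     # one long pulse
    "obstacle_ahead": [("center", 80, 255)] * 3,                  # rapid triple buzz
}

def play(cue: str) -> None:
    """Print the steps a motor driver would execute for a given cue."""
    for motor, duration_ms, duty in HAPTIC_VOCAB[cue]:
        print(f"motor={motor} duty={duty} for {duration_ms} ms")

play("obstacle_ahead")
```

The point of a fixed vocabulary like this is that, once learned, the same pattern always means the same thing, regardless of environment.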
The first build of Sentiri only allowed for obstacle avoidance based on arbitrary depth data. This iteration, by contrast, takes into account context and location in the virtual space, translated to the real world via haptics.
In other words, you can imagine a “virtual layer”—living on top of our “real world”—that our devices can recognize and augment or “mix” to allow for more complex interactions and meaningful experiences, in more locations.
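To ground that idea, here’s a hedged sketch of the translation step: given the SLAM pose (position plus heading) and a tagged target in the same map frame, compute which haptic module should fire. The frame conventions and the evenly spaced module layout are assumptions for illustration, not Sentiri’s actual implementation.

```python
import math

def bearing_to_target(pose_xy, heading_rad, target_xy):
    """Angle from the wearer's facing direction to the target, in radians.
    Positive values mean the target is to the wearer's left."""
    dx = target_xy[0] - pose_xy[0]
    dy = target_xy[1] - pose_xy[1]
    world_angle = math.atan2(dy, dx)
    rel = world_angle - heading_rad
    return math.atan2(math.sin(rel), math.cos(rel))  # wrap to [-pi, pi]

def module_for_bearing(rel_rad, n_modules=8):
    """Pick the haptic module whose direction best matches the bearing,
    assuming n_modules evenly spaced around the wearer, with module 0
    straight ahead and indices increasing to the left."""
    step = 2 * math.pi / n_modules
    return round(rel_rad / step) % n_modules

# Wearer at the origin facing +y; target ahead and to the right.
rel = bearing_to_target((0.0, 0.0), math.radians(90), (2.4, 3.2))
print(module_for_bearing(rel))  # 7, i.e. the module just to the right of center
```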
For the visually impaired, this could be used for better accessibility in unfamiliar settings. Instead of placing braille signs around—where users might not necessarily know where to find them—the building could be scanned and the space tagged with relevant locations like entrances, emergency exits, bathrooms, etc. and that info stored in a centralized location. Once a user has entered, the info could sync to their device and they could navigate the environment.
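Purely as an illustration of that last idea, here’s what a centrally stored, syncable tag set for a building might look like. The schema and the sync step are invented; the scenario only specifies that the tag info lives in a centralized location and syncs to the device on entry.

```python
import json

# Hypothetical record for one scanned building, as it might be stored centrally.
building_record = {
    "building_id": "example-library",
    "map_version": 3,
    "tags": [
        {"name": "entrance",       "position": [0.0, 0.0, 0.0]},
        {"name": "emergency exit", "position": [12.5, 0.0, -4.2]},
        {"name": "bathroom",       "position": [6.1, 0.0, 9.8]},
    ],
}

def sync_to_device(record: dict) -> dict:
    """Stand-in for the download step: on entry, the device fetches the
    building's tag set and indexes it by name for voice queries."""
    payload = json.loads(json.dumps(record))  # simulate a network round-trip
    return {tag["name"]: tag["position"] for tag in payload["tags"]}

print(sync_to_device(building_record)["emergency exit"])  # [12.5, 0.0, -4.2]
```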