
EVENT


SXSW: New ways to play

Go behind the scenes with Fjord Austin R&D! Witness Fjord 2017 Trends come to life in the form of four amazing creations, and get your hands dirty experimenting with emerging technologies such as AI, VR, machine learning, voice recognition, proximity-sensing wearable tech and more!

For information on when and where you can interact with these projects live, check out Accenture Interactive’s SXSW schedule of events!

Here's a sneak peek at our four incredible projects.

PartyBOT
BrickClick
MELTDOWN
Sentiri 2.0

 

Your personal party programmer

The Story

When people shout a question at Alexa or Google Home and their AI of choice spits back an answer, it might feel like a conversation. But while this exchange of information is impressive, it lacks emotion, depth and nuance.

That’s how PartyBOT is different.

This isn’t simply a chatbot or an AI—it’s equipped with facial recognition. This gives the bot the ability to recognize users’ feelings through facial expressions and words—and recognize whether those align—allowing for more emotional intelligence and empathy, and resulting in more meaningful conversations. (It can even recognize sarcasm!)
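For the technically curious, here is a minimal, hypothetical sketch of the idea: compare a facial-emotion score with the sentiment of the user's words and flag a strong mismatch as possible sarcasm. The emotion labels, scores and functions below are stand-ins for PartyBOT's actual computer vision and NLP models, not the real thing.

```python
# Illustrative sketch only: stand-in scores replace the real computer
# vision and NLP models that PartyBOT would use.

from dataclasses import dataclass

@dataclass
class Reading:
    face_emotion: str      # e.g. "happy", "neutral", "annoyed" (from facial analysis)
    text: str              # what the user actually said
    text_sentiment: float  # -1.0 (negative) to 1.0 (positive), from NLP

# Rough, invented mapping from a detected facial emotion to a sentiment value.
FACE_SENTIMENT = {"happy": 0.8, "neutral": 0.0, "annoyed": -0.7, "sad": -0.8}

def interpret(reading: Reading, mismatch_threshold: float = 1.0) -> str:
    """Compare what the face says with what the words say."""
    face_score = FACE_SENTIMENT.get(reading.face_emotion, 0.0)
    gap = abs(face_score - reading.text_sentiment)
    if gap >= mismatch_threshold:
        # Words and face disagree strongly: likely sarcasm or masked feelings.
        return "possible sarcasm - respond playfully and double-check intent"
    if face_score < 0 and reading.text_sentiment < 0:
        return "user seems unhappy - respond with empathy"
    return "user seems genuine - continue the conversation normally"

if __name__ == "__main__":
    print(interpret(Reading("annoyed", "Oh sure, I LOVE waiting in line.", 0.9)))
    print(interpret(Reading("happy", "This party is great!", 0.8)))
```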

Not only are these meaningful conversations more engaging for the user, they also allow us to gather more data and understand users better than ever before.

 

In Action

PartyBOT will be on full display at our SXSW party, where it will use AI, facial recognition, natural language processing and a neural network to curate the perfect individual experience based on the user’s party preferences.

The user's relationship with the bot begins on a mobile application, where, through a facial recognition activity, the bot will learn to recognize the user and their emotions. Then, the bot will converse with the user about party staples (music, dancing, drinking and socializing) to learn about them and, most importantly, gauge their party potential. (Are they going to be a dance machine or a stick in the mud? The bot, acting as a bouncer of sorts, is here to find out.) After this covert evaluation, if it appears that the user will be a fun addition to the party, they'll make it through the (virtual) velvet rope and receive an exclusive invite.

And that’s just the beginning.

Upon arrival at the bar, the user will be recognized by PartyBOT, and throughout the party, the bot will work to ensure a personalized experience based on what it knows about them—their favorite music, beverages, and more. (For example, they might receive a notification when the DJ is playing one of their favorite songs.)

PartyBOT will gently nudge the individual throughout the party, checking on them and lightly curating their activities. Once the bot detects that the attendee has done their partying duty and hit the dance floor, the user will unlock the VIP experience, which upgrades them to premium drinks and access to cool swag.
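As a rough sketch of how these real-time, personalized nudges could work, the snippet below matches a DJ track change against each guest's stated favorites and "pushes" a message to their phone. An in-memory queue stands in for the actual WebSocket and push-notification plumbing, and the guest names and preferences are invented for illustration.

```python
# Minimal sketch, not PartyBOT's actual backend: an in-memory queue stands
# in for the WebSocket / push-notification channel to each guest's phone.

import asyncio

# Hypothetical guest preferences gathered during the chatbot onboarding.
GUEST_PREFERENCES = {
    "alex": {"favorite_songs": {"Dancing Queen", "One More Time"}},
    "sam": {"favorite_songs": {"Mr. Brightside"}},
}

async def notify(guest: str, channel: asyncio.Queue, message: str) -> None:
    """Stand-in for pushing a message to the guest's phone over a WebSocket."""
    await channel.put(f"[to {guest}] {message}")

async def on_track_change(track: str, channels: dict) -> None:
    """When the DJ starts a new track, ping every guest who loves it."""
    for guest, prefs in GUEST_PREFERENCES.items():
        if track in prefs["favorite_songs"]:
            await notify(guest, channels[guest],
                         f"The DJ just put on '{track}' - get to the dance floor!")

async def main() -> None:
    channels = {guest: asyncio.Queue() for guest in GUEST_PREFERENCES}
    await on_track_change("Dancing Queen", channels)
    # Drain and print whatever "arrived" on each phone.
    for guest, channel in channels.items():
        while not channel.empty():
            print(await channel.get())

asyncio.run(main())
```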

 

Tech Specs

  • Computer Vision

    • Face detection, recognition

    • Emotion analysis

  • Machine Learning

    • Knowledge Graph

    • Neural Networks

    • Natural Language Processing

  • Speech Processing

    • Speech-to-text

    • Text-to-speech

  • Real-Time Communication

    • Web sockets

    • Push notifications

Future Applications

The PartyBOT boasts way more than party potential, primarily because of the facial recognition element of the AI and the increased capacity for emotion and empathy. It could, for example, be used with an interactive agent at a hospital. If a patient claims to be in pain, the agent won’t simply assist, but will recognize the patient’s distress and pain and respond in an appropriately understanding and sympathetic manner. This adds a more personal element to the user experience, and makes for a more comforting interaction in a typically uncomfortable, scary situation.


 

A better way to build, no instruction manual necessary

The Story

Ever tried to put together an IKEA desk? (Yeah, we heard you swearing from here.) Well, imagine if, instead of trying to decipher Swedish instructions, follow a YouTube tutorial and find Part A so you can insert it into Part B, you could just strap on a Microsoft HoloLens for guided, real-time assembly instruction.

That’s the premise behind BrickClick, a project that will change the way we approach education and knowledge sharing through Mixed Reality.

In the use case we're presenting, the user will be building a specific structure with blocks, with the help of MR. Throughout the construction process, they will view a holographic projection of the finished product through the HoloLens, and as they build, BrickClick will guide them, recognizing parts, telling them which piece to grab next and showing exactly where to connect each one, resulting in a flawless finished product, built more efficiently than ever.

No instruction manual necessary.
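To make the guidance loop concrete, here is a toy, hypothetical sketch: an ordered build plan, a stubbed-out part recognizer standing in for what the HoloLens camera and Vuforia would actually provide, and prompts telling the builder what to grab and where it goes. The plan, part names and `detect_part` stub are all invented; this is not BrickClick's code.

```python
# Illustrative only: a toy version of a step-by-step build guide.
# In the real project, part recognition would come from the HoloLens
# camera via the Vuforia SDK; here detect_part() is a hard-coded stub.

from typing import Iterator

# A hypothetical build plan: which brick goes where, in order.
BUILD_PLAN = [
    {"part": "2x4 red brick", "target": "baseplate position A1"},
    {"part": "2x2 blue brick", "target": "on top of the red brick"},
    {"part": "1x4 yellow plate", "target": "bridging the two bricks"},
]

def detect_part(frame: str) -> str:
    """Stub for computer-vision part recognition (returns what's in view)."""
    return frame  # in this toy example, the "frame" is already a part name

def guide(frames: Iterator[str]) -> None:
    step = 0
    for frame in frames:
        if step >= len(BUILD_PLAN):
            print("Done! Flawless finished product, no manual required.")
            return
        expected = BUILD_PLAN[step]
        seen = detect_part(frame)
        if seen == expected["part"]:
            print(f"Good - now place the {seen} {expected['target']}.")
            step += 1
        else:
            print(f"That's a {seen}; grab the {expected['part']} instead.")

if __name__ == "__main__":
    guide(iter(["2x2 blue brick", "2x4 red brick", "2x2 blue brick",
                "1x4 yellow plate", "all done"]))
```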

 

Tech Specs

  • Developed for the Microsoft HoloLens

  • Utilizes the Vuforia augmented reality SDK

 

Future Applications

Imagine, for example, that you purchase a Vespa and, instead of providing an owner's manual, the company has invested in this technology. Using the HoloLens-compatible instructions, you could then fix it yourself.

Not only is this technology applicable in the cases mentioned above, but BrickClick also illustrates how Mixed Reality can be used for education and knowledge sharing. Whether it’s cooking, home maintenance or, yup, furniture assembly, this technology can be utilized to help users perform completely unfamiliar tasks more easily and efficiently than ever.

This training system also has the potential to teach new and/or unskilled workers—millennials entering the workforce without sufficient labor skills, for instance—how to complete specific tasks, and can be utilized to help experienced workers strengthen their skillsets and become more productive employees.


 

React well—or the reactor won’t

The Story

Meltdown is a virtual-reality experience that transports you to your first day on the job at the power plant—where something goes terribly wrong. With the door locked, the clock ticking and chaos erupting, it’s up to you to complete a series of tasks and save the day before time runs out and the plant goes into MELTDOWN!

 

Tech Specs

  • Developed in Unreal Engine 4

  • Utilizes the HTC Vive VR hardware

 

Future Applications

It's one thing to be told what to do in an emergency; it's another to experience it. While Meltdown is designed as a game, it's really representative of the way virtual reality can be used to simulate dangerous situations, all without the actual danger.

A company could use a variation of Meltdown to teach employees how to react in emergencies. Immersing them in a situation in which they actually must perform a set of functions serves as a far more captivating and convincing training method than a test, class or manual, and allows trainees to better understand how to perform in critical, high-pressure situations. This could be used to simulate the environment on an oil rig or a flight deck, for example, or to train employees how to work on a quick-moving assembly line.

It could also be used as a hiring tool to screen potential employees, with new candidates being run through the simulation to determine their qualifications and how they perform in intense situations.


 

Tap into your sixth sense

The Story

Last year we introduced a new way to navigate with Sentiri, a proximity-sensing headband that gives the wearer the ability to "sense" their environment using haptic feedback from eight modules placed around the wearer's head.

Using sensors, coin cell motors and BLE connectivity, Sentiri transmits information to the wearer to direct them in a specific direction, alert them when they’ve arrived at a destination, and even notify them of upcoming obstacles in their way.

That was last year. Now we’re kicking it up a notch.

Fjord R&D labs around the globe are currently collaborating to create an even more impressive Sentiri V2. For this version, we've transitioned from a headband to a haptic wearable that sits on the chest. Unlike the original, which utilized arbitrary infrared depth data, this version implements context-based navigation and SLAM (simultaneous localization and mapping) tracking.

It's the context-based element of the navigation that makes for a more specific and customized experience. The wearer isn't simply guided based on obstacles around them; the environment can be pre-programmed to direct them straight to a precise, desired location.

Take this tech into the home. After an area has been 3D scanned, a human can go in and tag details: the location of the thermostat, closet or silverware drawer, for instance. (This is done with the Bridge Engine.) Once this environment has been established, the user can use verbal commands to find what they're looking for. If a user asks "Where is the thermostat?", for example, Sentiri will guide them there directly (and safely) using haptic feedback.
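Here is a minimal sketch of how that guidance could work in principle: given the wearer's position and heading plus a tagged location, pick the haptic motor that points toward the target and raise the intensity as the wearer gets closer. The motor count, geometry and tagged coordinates are assumptions for illustration, not Sentiri's real firmware.

```python
# A minimal sketch, not Sentiri's firmware: it assumes a ring of 8 haptic
# motors and simple 2D positions, and maps "where is the thermostat?" to the
# motor that points toward the tagged location, with intensity rising as the
# wearer gets closer.

import math

NUM_MOTORS = 8  # assumption; the real module count and layout may differ

# Hypothetical tags placed during the 3D scan (x, y in meters).
TAGGED_LOCATIONS = {"thermostat": (4.0, 2.0), "silverware drawer": (1.0, 5.0)}

def haptic_cue(wearer_xy, wearer_heading_deg, target_name):
    """Return (motor_index, intensity 0..1) pointing the wearer at the target."""
    tx, ty = TAGGED_LOCATIONS[target_name]
    wx, wy = wearer_xy
    # Bearing to the target, relative to where the wearer is facing.
    bearing = math.degrees(math.atan2(ty - wy, tx - wx))
    relative = (bearing - wearer_heading_deg) % 360
    motor = round(relative / (360 / NUM_MOTORS)) % NUM_MOTORS
    # Closer target -> stronger vibration (capped at 1.0 within 1 meter).
    distance = math.hypot(tx - wx, ty - wy)
    intensity = max(0.1, min(1.0, 1.0 / max(distance, 1.0)))
    return motor, intensity

print(haptic_cue((0.0, 0.0), 0.0, "thermostat"))
```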

 

Tech Specs

The technology is, in some ways, similar to V1, and includes:

  • Microcontroller and battery - Run Sentiri

  • Coin cell motors - Provide haptic feedback in the form of vibrations that increase in intensity as an object comes closer to the user

  • BLE connectivity - Allows control from another device or application and can be used to communicate information to Sentiri

For V2, we added the following tech:

  • Motor drivers to allow for more complex communications via haptics

  • A phone that uses computer vision to track the user’s position in space

  • A setup app and Bridge Engine, which allow the user to 3D scan an environment and place contextual tags in 3D space via Mixed Reality

  • The start of a haptic language protocol that users would learn to understand the environment and guidance prompts (see the sketch after this list)

  • Voice and speech recognition to add another interface for navigation control
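To illustrate what such a haptic language could look like, here is an invented mini-vocabulary that maps a few guidance prompts to vibration patterns on specific motors. Since the protocol is only just taking shape, the prompts, motor indices and timings below are purely hypothetical.

```python
# Hypothetical illustration of a haptic "vocabulary"; the patterns, motor
# indices and timings are invented, not Sentiri's actual protocol.
# Each prompt maps to a set of motors and a (pulse_ms, pause_ms, repeats) pattern.

from time import sleep

HAPTIC_VOCAB = {
    "turn_left":      {"motors": [6], "pattern": (150, 100, 2)},         # left-side motor
    "turn_right":     {"motors": [2], "pattern": (150, 100, 2)},         # right-side motor
    "obstacle_ahead": {"motors": [0], "pattern": (60, 60, 4)},           # rapid front buzz
    "arrived":        {"motors": [0, 2, 4, 6], "pattern": (300, 0, 1)},  # all-around pulse
}

def drive_motor(motor: int, on: bool) -> None:
    """Stand-in for the motor-driver call; just logs what would happen."""
    print(f"motor {motor}: {'ON' if on else 'off'}")

def play(prompt: str) -> None:
    spec = HAPTIC_VOCAB[prompt]
    pulse_ms, pause_ms, repeats = spec["pattern"]
    for _ in range(repeats):
        for m in spec["motors"]:
            drive_motor(m, True)
        sleep(pulse_ms / 1000)
        for m in spec["motors"]:
            drive_motor(m, False)
        sleep(pause_ms / 1000)

play("turn_left")
play("arrived")
```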

 

Future Applications

The first build of Sentiri only allowed for obstacle avoidance based on arbitrary depth data. This iteration, by contrast, takes into account context and location in the virtual space, translated to the real world via haptics.

In other words, you can imagine a “virtual layer”—living on top of our “real world”—that our devices can recognize and augment or “mix” to allow for more complex interactions and meaningful experiences, in more locations.

For the visually impaired, this could be used for better accessibility in unfamiliar settings. Instead of placing braille signs around (where users might not necessarily know where to find them), the building could be scanned, the space tagged with relevant locations like entrances, emergency exits and bathrooms, and that info stored in a centralized location. Once a user has entered, the info could sync to their device and they could navigate the environment.
