TIAA interactive wall

I designed interactive walls with depth-sensing technology for TIAA's lobby to showcase the company's history to clients.

Team:

Ash Tiwari, Lead interaction engineer
Kensuke Sato, Interface designer
Michael Cooper, Systems engineer

Timeline:

October 2022 - November 2023
*in conjunction with other projects

Key responsibilities:

Designed and developed the interactive experience

Integrated hand-tracking algorithms, depth value monitoring, and machine learning to create an engaging, screen-free interactive experience.

User research and testing

Adapted and refined the project by switching from hand-tracking to depth value monitoring to overcome performance and accessibility issues, ensuring the final product was both responsive and inclusive.

Project and client management

Managed the outsourced animation work, the integration of the various software components, and rigorous testing, culminating in a flawlessly functioning interactive wall installed at the Charlotte campus.

overview:

This 40-foot-long projection wall presents an interactive narrative about TIAA’s commitment to lifetime income in retirement through responsible investments, diversity, equity and inclusion within the company and beyond, and shepherding a more sustainable future for our planet. Through fifteen boldly illustrated interaction points that trigger fun, beautifully animated sequences when activated, viewers will gain a deeper understanding of TIAA’s core values, past, present and future.

setup:

The setup comprised three walls, each equipped with a camera (Luxonis OAK-D PRO PoE, wide lens) mounted at the top, approximately 10 feet above the ground and angled toward the center of the wall to maximize the trackable touch area. A projector placed in front of each wall displayed the animations.
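
The production camera configuration isn't described beyond the hardware, but as a rough illustration, reading a depth stream from an OAK-D device with Luxonis' DepthAI Python API looks roughly like the following. The node choices and parameters here are a sketch, not the actual setup.

```python
import depthai as dai

# Build a minimal stereo-depth pipeline (illustrative settings, not the production config).
pipeline = dai.Pipeline()

mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    while True:
        # Each frame is a 2D array of per-pixel distances in millimeters.
        depth_frame = queue.get().getFrame()
        # ...pass depth_frame to the touch-detection logic...
```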

user experience:

When entering the hallway, the user sees a large wall filled with printed images, bright projected animated mood elements (e.g., floating clouds, buzzing bees, winking portraits, blowing wind), and text. Each printed image represents one of fifteen narratives about TIAA, which come to life as the user approaches.

Triggered by proximity, a blue circle on the image first begins to glow, prompting the user to interact further. When the user reaches out and touches the printed image, the interaction point is triggered and an animated sequence begins, building on the printed image. Some animations remain contained within their single story; others include animated elements that guide the user to related stories on the wall that build on one another. Additional text appears alongside the animation to provide narrative context for the visuals unfolding. The surprise and delight of having the wall respond to their touch prompts users to continue down the hall and interact with other elements. Though each wall has a coherent narrative theme, the individual stories are not designed to be viewed in any particular order: a user who jumps around the wall has the same experience as one who interacts linearly.

Camera output: depth values on the x, y, and z axes


a.i. / machine learning - Hand-tracking:

In our quest for greater accuracy, we decided to refine our hand-tracking by training a YOLOv8 machine learning model on a custom dataset. This dataset was created by labeling and annotating a video of users touching the walls, capturing all possible angles and movements. We also applied additional preprocessing and augmentation settings provided by Roboflow.

Ultimately, running inference on test images showed higher confidence in detecting hands.
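
As a rough sketch of that workflow with the Ultralytics YOLOv8 API (the checkpoint, dataset file, and values below are placeholders rather than our actual training configuration):

```python
from ultralytics import YOLO

# Start from a pretrained checkpoint and fine-tune on the annotated "hands touching the wall" dataset.
model = YOLO("yolov8n.pt")
model.train(data="hands_dataset.yaml", epochs=100, imgsz=640)  # placeholder paths and values

# Run inference on a test image and inspect the detections.
results = model("test_frame.jpg")
for result in results:
    for box in result.boxes:
        print(box.xyxy.tolist(), float(box.conf))  # bounding box corners and confidence score
```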

Research:

Depth-tracking:

Using a hand-tracking algorithm that incorporates depth readings from the environment, we evaluated how accurately the camera could track hands touching the wall from top-down video input. To visualize the results, we wrote a quick JavaScript test that maps the camera's 3D depth points onto a 2D canvas, showing approximately where the sensor detects a touch.
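
The original visualization was a quick JavaScript test; the mapping itself is simple enough to sketch in Python. Assuming the camera reports (x, y, z) points in millimeters, with x running across the wall, y measuring distance out from the wall surface, and z running down the wall away from the camera, a rough version of the projection might look like this. The axis conventions, ranges, and thresholds are assumptions for illustration only.

```python
# Map 3D depth points onto a 2D "wall" canvas to visualize roughly where touches land.
# Axis conventions, ranges, and thresholds below are illustrative assumptions, not measured values.

CANVAS_W, CANVAS_H = 1280, 720   # canvas size in pixels
WALL_SPAN_MM = 6_000             # horizontal span of wall covered by one camera (assumed)
WALL_DROP_MM = 3_000             # vertical span below the camera mount (assumed, ~10 ft)
TOUCH_DISTANCE_MM = 100          # a point within 10 cm of the wall surface counts as a touch (assumed)

def to_canvas(point_mm):
    """Project an (x, y, z) point in millimeters to (px, py) canvas pixels, or None if it isn't a touch."""
    x, y, z = point_mm           # x: across the wall, y: out from the wall, z: down/away from the camera
    if y > TOUCH_DISTANCE_MM:    # too far from the wall surface to count as a touch
        return None
    px = int(x / WALL_SPAN_MM * CANVAS_W)
    py = int(z / WALL_DROP_MM * CANVAS_H)
    if 0 <= px < CANVAS_W and 0 <= py < CANVAS_H:
        return px, py
    return None

# Example: a point 4.2 m across the wall, 3 cm off the surface, 1.8 m below the camera.
print(to_canvas((4200, 30, 1800)))
```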

Our findings indicated that the readings became less reliable further down the wall, away from the camera, and were often inaccurate at the far left or far right. This informed the optimal spacing of our interaction points.

user testing:

development:

Finally, to evaluate how well the application detected user interactions and played animations at the touched spots on the wall, we integrated a frontend application built in ReactJS with a Python script.

The ReactJS application manages the video playback, while the Python script simultaneously tracks depth values and signals when a hand is detected at a specific spot. We packaged the entire system using Electron for a seamless experience.
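
The write-up doesn't specify how the Python script signals the ReactJS app inside the Electron shell; one plausible pattern is for the frontend to spawn the tracker as a child process and read newline-delimited JSON events from its stdout, sketched below. The event format and spot IDs are invented for illustration.

```python
import json
import sys
import time

def emit_event(event_type, spot_id):
    """Write one JSON event per line to stdout for the Electron/React side to consume.
    The message format and spot IDs here are hypothetical."""
    message = {"type": event_type, "spot": spot_id, "ts": time.time()}
    sys.stdout.write(json.dumps(message) + "\n")
    sys.stdout.flush()

if __name__ == "__main__":
    # In the real script this would be called from the depth-monitoring loop
    # whenever a touch is detected; here we just emit a single example event.
    emit_event("touch", "story_03")
```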

wireframes & setup:

For user testing, we used a single wall of approximately the same size as the actual installation. We printed static backdrops for the animations (which in the final version would be vinyl stickers on the wall) and set up a camera and projector.

We outsourced the creation of the individual animations and reviewed the wireframes to place the animations at points where hand-tracking was most accurate and where multiple users wouldn't crowd one another (the areas shown in pink in the image).

problem-solving:

Problem:

After rigorous user testing, we identified several issues with this approach, mainly in performance and accessibility.

The algorithm's processing time caused a 1-2 second delay in video playback after an interaction was detected. Additionally, since the model was trained specifically to detect hands as the primary interaction points, it could not respond to accessibility tools or other body parts.

solution:

To address these issues, we abandoned the hand-tracking algorithm and instead focused on reading depth values directly. We defined bounding boxes over specific regions corresponding to different animations, each serving as either a proximity marker or an interaction point.

We monitored the depth values within these regions and triggered the associated animations upon detecting significant changes, such as spikes or drops. Along with resolving the performance issues, this approach addressed our accessibility concerns, allowing the wall to be used with more than just hands.

Proximity Markers: The white boxes on top trigger a rotating blue circle to form on the animation, prompting the user to interact with it further.

Interaction Points: The white boxes on bottom trigger the animation to play.
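
A minimal sketch of that trigger logic, assuming a per-pixel depth frame in millimeters from the camera: each box tracks a rolling baseline of its mean depth and fires when the current mean deviates beyond a threshold. The region coordinates, threshold, and callback are illustrative only, not the production values.

```python
import numpy as np

# Illustrative regions: (x0, y0, x1, y1) pixel boxes in the depth frame.
# Top boxes act as proximity markers, bottom boxes as interaction points (coordinates are made up).
REGIONS = {
    "story_03_proximity": (400, 80, 520, 160),
    "story_03_touch": (400, 300, 520, 380),
}
TRIGGER_MM = 300  # fire when the mean depth shifts by more than ~30 cm (assumed threshold)

baselines = {}

def check_regions(depth_frame, on_trigger):
    """Compare each region's mean depth to a slowly-updated baseline and fire on large changes."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        patch = depth_frame[y0:y1, x0:x1]
        valid = patch[patch > 0]          # ignore pixels with no depth reading
        if valid.size == 0:
            continue
        mean_depth = float(valid.mean())
        baseline = baselines.setdefault(name, mean_depth)
        if abs(mean_depth - baseline) > TRIGGER_MM:
            on_trigger(name)              # e.g. glow the blue circle or start the animation
        else:
            # Slowly adapt the baseline so gradual scene drift doesn't cause false triggers.
            baselines[name] = 0.95 * baseline + 0.05 * mean_depth

# Example usage with a fake frame: an empty wall roughly 2.5 m from the camera.
fake_frame = np.full((720, 1280), 2500, dtype=np.uint16)
check_regions(fake_frame, on_trigger=lambda name: print("trigger:", name))
```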

final product


final video

A final test during installation at the Charlotte campus confirmed that the interactive wall functioned flawlessly.