
Intel RealSense Spatial Awareness Wearable

 

The spatial awareness wearable is a project about augmenting the senses of visually impaired people so that they can feel what is around them and better engage with their environment.

It is not about replacing the cane; it is about augmenting it.

Our mission was to find out whether a depth camera could help a visually impaired person navigate a cluttered office space.

One problem our users told us about is that assistive devices usually rely on audio-based interfaces, which cause auditory overload. Our solution doesn't consume their hearing bandwidth, leaving them free to focus on other things, like engaging in a conversation.

The results were mind-blowing. We barely scratched the surface, but we learned the right questions to ask to understand the problem better. We worked with several visually impaired people and with the National Federation of the Blind to make something that really matters.

 

Intel CEO Brian Krzanich presenting the IRSAW project with Darryl Adams during his CES 2015 keynote

My Contribution

  • Collaborated in a team of four to design the whole system

  • Sketched solution concepts while working within the given constraints

  • Rapidly created and modified prototypes: programmed Arduinos and Particle Photons, and wrote C++ code in Cinder that used Winsock to communicate with the WiFi microcontrollers over TCP/IP (a sketch of this link follows this list)

  • Used OpenCV to find features in the depth data coming from the RealSense R200 depth camera (see the depth-processing sketch below)

  • Created a phone app using TouchOSC that let users control the prototype and tweak settings (see the OSC-decoding sketch below)

  • Tested the prototypes with people with different visual impairments, from full blindness to Retinitis Pigmentosa (partial blindness), and gathered extensive feedback that was implemented in later prototypes

  • Designed and 3D printed enclosures for the wearable's electronics

  • Built the WiFi-based vibration actuators with Particle Photons, programmed through their REST-based cloud API (see the firmware sketch below)

  • Finished the final belt-based design for the prototype, which was open sourced along with the code. We replaced the laptop-in-a-backpack with an embedded computer, a MinnowBoard MAX, which processed the depth data and sent feedback to the actuators; this significantly reduced the system's weight and made it more usable

  • Created documentation and tutorial videos using Adobe Premiere Pro

  • Showed the prototype to the CEO, who really liked it and showcased the project in his keynote at CES 2015
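
Below are a few illustrative sketches of these pieces. First, the Winsock link: a minimal C++ example of pushing per-motor vibration intensities to a WiFi microcontroller over TCP/IP. The device IP, port, and one-byte-per-motor message format are assumptions for illustration, not the project's actual protocol.

```cpp
// Minimal sketch: send per-motor vibration intensities to a WiFi
// microcontroller over TCP/IP using Winsock. The device IP, port, and
// message format are illustrative assumptions.
#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (sock == INVALID_SOCKET) { WSACleanup(); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(8000);                       // assumed port
    inet_pton(AF_INET, "192.168.1.50", &addr.sin_addr);  // assumed device IP

    if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0) {
        // One intensity byte (0-255) per vibration motor, left to right.
        unsigned char intensities[4] = { 0, 128, 255, 0 };
        send(sock, reinterpret_cast<const char*>(intensities),
             sizeof(intensities), 0);
    }

    closesocket(sock);
    WSACleanup();
    return 0;
}
```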
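
Next, the depth processing: a minimal OpenCV sketch showing one plausible reading of "finding features in the depth data," turning a frame into per-zone proximity values. It assumes the R200 depth frame has already been wrapped in a 16-bit cv::Mat of millimeter depths; the 2 m cutoff, the four vertical zones, and the linear mapping to vibration intensity are illustrative assumptions.

```cpp
#include <opencv2/opencv.hpp>
#include <array>

// Map a depth image (CV_16UC1, millimeters) to per-zone proximity values,
// one zone per vibration motor: 0 = nothing near, 255 = very close.
std::array<unsigned char, 4> depthToZones(const cv::Mat& depthMm) {
    CV_Assert(depthMm.type() == CV_16UC1);

    // Valid pixels closer than ~2 m count as obstacles; zeros are invalid.
    cv::Mat mask = (depthMm > 0) & (depthMm < 2000);

    std::array<unsigned char, 4> zones{};
    const int zoneWidth = depthMm.cols / 4;
    for (int z = 0; z < 4; ++z) {
        cv::Rect roi(z * zoneWidth, 0, zoneWidth, depthMm.rows);
        if (cv::countNonZero(mask(roi)) == 0) continue;  // nothing in range

        double nearestMm = 0;
        cv::minMaxLoc(depthMm(roi), &nearestMm, nullptr, nullptr, nullptr,
                      mask(roi));
        // Nearer obstacle -> stronger vibration, scaled linearly over 2 m.
        zones[z] = static_cast<unsigned char>(
            255.0 * (1.0 - nearestMm / 2000.0));
    }
    return zones;
}
```

Feeding each zone's value to the matching motor gives the wearer a coarse left-to-right sense of how close the nearest obstacle is.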
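
TouchOSC speaks plain OSC over UDP, so on the prototype side the settings panel boils down to listening for small control messages. This sketch hand-decodes a single-float message; the port and the /1/threshold address are assumptions, and the real app more likely used a ready-made OSC library.

```cpp
// Minimal sketch: receive a TouchOSC fader over UDP and decode the
// single-float OSC message by hand. Port and address are assumptions.
#include <winsock2.h>
#include <cstring>
#include <cstdio>
#include <cstdint>
#pragma comment(lib, "ws2_32.lib")

// Advance past a null-terminated, 4-byte-aligned OSC string.
static const char* oscSkipString(const char* p, const char* end) {
    size_t len = strnlen(p, end - p);
    size_t padded = (len + 4) & ~size_t(3);  // terminator included, rounded up
    return (p + padded <= end) ? p + padded : nullptr;
}

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);
    SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(9000);   // assumed TouchOSC outgoing port
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    char buf[512];
    for (;;) {
        int n = recv(sock, buf, sizeof(buf), 0);
        if (n < 8) continue;
        const char* end  = buf + n;
        const char* tags = oscSkipString(buf, end);  // skip address pattern
        if (!tags || end - tags < 4 || strncmp(tags, ",f", 2) != 0) continue;
        const char* arg = oscSkipString(tags, end);  // skip type tag string
        if (!arg || arg + 4 > end) continue;
        uint32_t be;   memcpy(&be, arg, 4);
        uint32_t host = ntohl(be);                   // OSC floats are big-endian
        float value;   memcpy(&value, &host, 4);
        printf("%s -> %.3f\n", buf, value);          // e.g. /1/threshold
    }
}
```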
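
Finally, the actuator side: a minimal sketch of Particle Photon firmware exposing a cloud function that sets a motor's vibration level through Particle's REST API. The pin choice, function name, and 0-255 intensity scale are assumptions.

```cpp
#include "Particle.h"

// Cloud function: arg is the intensity as a decimal string, "0".."255".
int setIntensity(String arg) {
    int level = constrain(arg.toInt(), 0, 255);
    analogWrite(D0, level);   // PWM pin driving the motor driver (assumed D0)
    return level;             // echoed back to the REST caller
}

void setup() {
    pinMode(D0, OUTPUT);
    Particle.function("vibe", setIntensity);  // exposed via the Particle Cloud
}

void loop() {}
```

A controller can then trigger it with a standard Particle Cloud call, for example: curl https://api.particle.io/v1/devices/<device-id>/vibe -d access_token=<token> -d arg=200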

Process

 

We followed a user-centered design process with weekly ideation, concepting, and prototyping sessions.

We conducted many interviews to understand what our users need.

We built quick working prototypes and tested them with visually impaired people to get fast feedback.

We made mistakes and failed often, and what we learned from those failures made each subsequent prototype better. The process was highly iterative and involved a lot of whiteboarding sessions.

We went from a fully wired on-body prototype to a completely wireless system that is very easy to set up.

