Blog Archives

DisastARcons

[Image: DisastARcons logo]




A HoloLens application for disaster response teams.

Introductory video that briefly describes the primary use case of the application.

Overview

The main inspiration behind this project was to empower disaster response teams to triage and respond to an emergency quickly, to rebuild and restore affected areas more effectively, and at the same time to address the most critical pain points the response teams themselves face.

DisastARcons lets responders wearing a Microsoft HoloLens assess damage and triage affected sites after an emergency by visually inspecting and marking areas that need attention or that pose health/safety risks. This information is synced via the cloud to the Incident Command System (ICS), which uses these data points to plan and direct the deployment of skilled teams to the area. Subsequent responders then use DisastARcons to locate and resolve the areas tagged by earlier teams more efficiently than conventional methods allow.

The project was born at the SEA-VR Hackathon IV in Oct 2016, where we won the Best Humanitarian Assistance Award. Since then, a subset of the team has continued developing it for a client.

Role: Project Lead/Product Manager, UX Researcher, Designer, Presenter

Key Activities: Research, Ideation, Prototyping, Design (Environment + Interaction + Key Features Hierarchy), Data Visualization, Coding, Demoing, Presentation and Evangelization, Client Support

Initial Team (at Hackathon): Abhigyan Kaustubh, Amanda Koster, Alicia Lookabill, Steven Dong, Tyler Esselstrom, Drew Stone, Evan Westenberger, Jared Sanson, Sebastian Sanchez

Final Team: Abhigyan Kaustubh, Amanda Koster, Alicia Lookabill, Steven Dong, Tyler Esselstrom, Drew Stone

Timeline: Oct 2016 – Present

Tools: Tableau, AWS, Unity, Balsamiq, Blender, Adobe Photoshop, Illustrator, Premiere, Visual Studio, HoloLens + its SDK, Asana

Website: http://disastarcons.com/

 


Process

[Flowchart: Process Flow]

 

 

Groundwork + Research

Our process started with a high-level analysis of the problem space: why did we care? We realized that the space lacked optimal solutions, and that solving its problems could save lives in areas affected by natural disasters. This grounded our emotional motivation and got the team fired up to develop a solution.

Secondary Research

The second part was clearly defining the actual problem space and our value proposition/solution, and gauging its viability and its short- and long-term adoption. To do this well, several things needed to happen (some in series, some in parallel):

  1. Research organizations in this field in terms of their needs, focus, specialties, customers, and pain points.
  2. Identify the target customer we would be designing the product for: what big problem of theirs would our product fix?
  3. Identify scenarios in which the target customer would use our product, pick the one main scenario where it would be indispensable to them, and understand how frequently that scenario occurs.

Meanwhile, in parallel,

  1. Envision how an organization could help out in a disaster-affected area.
  2. Determine the top three things that need to be done in such a scenario and the best way of doing them, remaining completely agnostic to any particular technology or process. Understand the need at the most fundamental level, then reason about how best to meet it.
  3. Ask whether we were developing a mixed reality solution just because this was a VR hackathon, or whether there was a genuine need that only a mixed reality application could meet at the highest level of efficiency.

We started with the users for whom we were going to design our application. After considering several organizations that were involved in this field, their needs, focus, specialties, customers and pain points, we decided to narrow our target customer to FEMA.

We believed it was vital to be clear about the above aspects of the project before diving into design and development, so we iterated on this exercise a couple of times to sharpen our understanding.

 

Primary Research

Given the resources available, we used two methods: interviews, and observation through role playing.

The purpose of the interviews was to deepen (and cross-check) our understanding from the secondary research, and to get a sanity check from people closest to our users. Additionally, none of the team members had ever been in the intended user’s situation (and had no way to be, with the resources and time we had), and we had limited experience in the field of mixed reality.
Hence, we interviewed experts in disaster management, accessibility, and mixed reality.

We interleaved the interviews with observation, which we implemented via role playing. This helped us find more pertinent questions to ask the experts as we came to understand the scenario from a potential user’s point of view.

We gained the following insights:

  1. Many people might initially be enamored by the mixed reality application simply because it was “cool” and a HoloLens was involved. This would bias users’ feedback on how useful they would actually find the app, especially when using it in a disaster-affected area.
  2. Movies like Iron Man, which depict augmented reality and are a significant initial motivation for people experimenting in this field, focus mainly on appealing to viewers rather than on usefulness to the person wearing the device. In practice, for example, the in-view interface should be as minimal as possible to reduce cognitive load.
  3. The environment in which the application will be used is hostile and may limit accessibility. Critical features should therefore support multiple modes of interaction.
  4. Along with minimalism and accessibility, the interface should be as universally comprehensible as possible.

The above process allowed us to come up with the following outline for our project (described from target user’s perspective):

  1. Situation:
    1. An 8.8-magnitude earthquake strikes, causing a devastating tsunami.
    2. First responders have performed search and rescue.
    3. You are a member of FEMA (Federal Emergency Management Agency), responsible for the coordination and response to a disaster that has occurred in the United States and that overwhelms the resources of local and state authorities.
  2. Problem Statement: A 2015 Government Accountability Office (GAO) audit report found:
    1. Response capability gaps identified through national-level exercises and real-world incidents.
    2. The status of agency actions to address these gaps is not collected by or reported to the Department of Homeland Security or the Federal Emergency Management Agency (FEMA). (Anthony Kimery, Editor-in-Chief, Homeland Security Today)
  3. Proposed Solution: DisastARcons
    1. DisastARcons uses the Microsoft HoloLens for damage assessment by visually inspecting and marking areas that need attention or that are health/safety risks.
    2. DisastARcons increases efficiency in capturing and sharing accurate data AND measures the time between identification and resolution.
  4. Why HoloLens?
    1. Always in front of you: the HoloLens uses the wearer’s entire field of view, whereas most devices, such as a cell phone, offer only a limited rectangle of view that depends on how the user holds the device.
    2. Example use case: for a second shift of maintenance workers, all data is readily accessible whenever it is relevant.
    3. Hands-free operation.
    4. Highest fidelity: the HoloLens can build a 3D, 360° (4π steradian) reconstruction of its surroundings.

Gaining Product Clarity

Integrating the above, we get the following high level scenario:

 

[Diagram: High Level Scenario]
Storyboarding

 

[Image: Storyboard]

Scoping

Following the above process, we scoped our project in terms of main goals and extension goals, as follows.

Main Goal:

  1. To build a HoloLens application with the simplest possible interface that lets the user mark hazards and assign severity rankings to them accurately and precisely, based on their inspection of the surroundings.
    1. Hazards are marked through tagging: an appropriate hologram is attached to the affected area.
    2. The severity of the hazard is indicated by the color of the hologram (a minimal sketch of such a tag record follows this list).
  2. Safety mechanism: since the user will be working in a dangerous area, there must be an easy but deliberate way to call for help (911).
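
To make the tagging model concrete, here is a minimal sketch (in Python, purely for illustration, since the actual app is built in Unity) of what a single hazard tag record might look like. The field names, severity levels, and color mapping are assumptions, not the shipped implementation:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
import time
import uuid


class Severity(Enum):
    """Illustrative severity levels; each maps to a hologram color."""
    LOW = "green"
    MEDIUM = "yellow"
    HIGH = "red"


@dataclass
class HazardTag:
    """One hologram anchored to a hazard in the responder's surroundings."""
    severity: Severity
    position: tuple                 # (x, y, z) in the headset's world coordinates
    description: str = ""
    responder_id: str = ""
    tag_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    created_at: float = field(default_factory=time.time)
    resolved_at: Optional[float] = None  # set once a later team clears the hazard

    @property
    def hologram_color(self) -> str:
        return self.severity.value


# Example: a responder marks a collapsed stairwell as a high-severity hazard.
tag = HazardTag(Severity.HIGH, (2.4, 0.0, -1.1), "Collapsed stairwell", "fema-07")
print(tag.hologram_color)  # -> "red"
```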

Extension Goals:

  1. Establish a connection with the ICS (or a remote server) to aggregate the data collected from different field agents (a rough sketch of this sync loop follows this list).
  2. Update the information points on every HoloLens in the field.
  3. Send the information to the ICS for analysis.
  4. Craft an interface for the ICS to analyze the data quickly and give out directives to field agents.
  5. Extend the ICS’s existing backend so that it can use the HoloLens data points alongside its other data sources in a seamless fashion.
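
As a rough illustration of the sync loop these goals describe, each headset could periodically push its new tags to the ICS (or remote server) and pull everyone else’s. The endpoint URL and payload shape below are hypothetical placeholders, not an existing API:

```python
import requests  # third-party HTTP client

ICS_BASE_URL = "https://ics.example.org/api"  # hypothetical endpoint, not a real service


def push_tags(tags: list) -> None:
    """Upload this headset's newly created or updated hazard tags to the ICS."""
    resp = requests.post(f"{ICS_BASE_URL}/tags", json={"tags": tags}, timeout=10)
    resp.raise_for_status()


def pull_updates(since: float) -> list:
    """Fetch tags created or resolved by other field agents since a given timestamp."""
    resp = requests.get(f"{ICS_BASE_URL}/tags", params={"since": since}, timeout=10)
    resp.raise_for_status()
    return resp.json()["tags"]


# Each headset would run something like this periodically while in the field:
#   push_tags(new_local_tags)
#   merged = pull_updates(last_sync_time)
```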

Ideation

The ideation process involved condensing the results of our different research methods and activities: role playing, using a custom hologram app on the HoloLens, 3D reconstruction, expert interviews, accessibility concepts, and so on.

We used this to explore different interface ideas and interaction methods while paring the use case down to its leanest form.

Based on our results, we came up with the following flow:

[Flowchart: The Disasters - Phase 1]

 

[Flowchart: Phase 2 - ICS + Maintenance Personnel POV]

Design

Mockups

The ideation process was translated into a UI flow for the app’s interface – with special emphasis on simplicity and ease of access.

 

[Diagram: UI Flow]


Result

We built a mixed reality HoloLens application that lets the user apply persistent tags to objects in their real environment and rate each hazard’s severity. The app records and transfers an accurate set of data points describing those hazards (which can later be located and attended to by other FEMA agents) to the remote Incident Command System, which analyzes all the incoming data streams and pushes prioritized, relevant information back into the users’ field of view. This enables responders to restore the most critical affected areas while staying safe and keeping track of new, potentially hazardous developments around them.

Next Steps

We are currently working on our primary extension goal (which is now our main goal):

To build the interface for the ICS and establish efficient data transmission between field agents and with the remote Incident Command System.

The process for that can be best represented by the following flow chart:

[Flowchart: Building a functioning dashboard (front end + back end)]
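
One concrete piece of that dashboard backend is the metric called out earlier: the time between a hazard’s identification and its resolution. Below is a minimal, illustrative sketch of how it could be aggregated from the tag records (field names assumed, as in the earlier sketch), not the actual implementation:

```python
from statistics import mean


def resolution_times(tags: list) -> list:
    """Seconds from a tag's creation to its resolution, for resolved tags only."""
    return [
        t["resolved_at"] - t["created_at"]
        for t in tags
        if t.get("resolved_at") is not None
    ]


def dashboard_summary(tags: list) -> dict:
    """Headline numbers an ICS dashboard might display for the affected area."""
    times = resolution_times(tags)
    return {
        "open_hazards": sum(1 for t in tags if t.get("resolved_at") is None),
        "resolved_hazards": len(times),
        "mean_resolution_seconds": mean(times) if times else None,
    }
```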

 

The prototype of the eventual incident command center interface, which gives an overview of what is happening across the affected area, is shown below:

[Image: Incident command dashboard prototype]

Memory Game

[Screenshot: Memory Game]

A mixed reality game for kids, built during HoloHacks in May 2016.

Overview

Memory Palace is a Windows HoloLens application that helps the user strengthen their memory for specific objects by leveraging the brain’s spatial mapping of their current, familiar environment.

Role: Product Management, UX Researcher, VR Interaction Designer

Key activities: Secondary Research, Brainstorming, Ideation, Roleplaying, Prototyping, Feature Identification and Design, VR Interaction and UI Flow, Coding, Presentation, Project Management

Team Members: Abhigyan Kaustubh, Malika Lim, John Shaff, Kevin Owyang, Hailey

Timeline: 36 hours

Tools: Unity3D, Windows 10, MS Visual Studio, HoloLens SDK, Maya

Demo Video:

 

Process

Starting from a brainstorming session that covered our inspiration, value propositions, user needs, and team skill set, we scoped the idea down and then designed and developed it into our final product.

[Flowchart: Process Flow - Memory Game (HoloHacks)]

Some pictures from our project:

[Photo gallery from the hackathon]

Solar System Simulation

[Image: Solar System simulation]

Summary

This project presents an animated 3D representation of our Solar System, viewable in virtual reality. It includes the Sun, the eight planets, the dwarf planet Pluto, the asteroid belt, and a few comets.

Currently, I am working on building natural satellites (moons).
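
The core of such a simulation is simply advancing each body along its orbit every frame. The snippet below is an illustrative Python sketch of that update step using simple circular orbits; the project itself does this in Unity, and the bodies listed are only a sample with approximate values:

```python
import math

# Approximate orbital data: (orbital radius in AU, orbital period in Earth years).
BODIES = {
    "Mercury": (0.39, 0.24),
    "Earth":   (1.00, 1.00),
    "Jupiter": (5.20, 11.86),
    "Pluto":   (39.5, 248.0),
}


def position_at(radius_au: float, period_years: float, t_years: float) -> tuple:
    """Point on a circular orbit in the ecliptic plane at time t."""
    angle = 2 * math.pi * (t_years / period_years)
    return (radius_au * math.cos(angle), 0.0, radius_au * math.sin(angle))


# One "frame" of the simulation: every body's position at t = 0.5 years.
for name, (radius, period) in BODIES.items():
    x, y, z = position_at(radius, period, 0.5)
    print(f"{name}: ({x:.2f}, {y:.2f}, {z:.2f}) AU")
```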

Created by: Abhigyan Kaustubh

Software used: Unity, Cardboard SDK, Windows 10

Mind Palace

[Screenshot: Mind Palace login]

A virtual reality application that allows the user to store, organize, search, explore and share memories from different moments in their lives in a 3D virtual environment.

Overview

I led my team in creating a 3D model in VR during the Seattle Virtual Reality Hackathon II in September 2015, and have since been researching, on my own, techniques that leverage the 3D VR environment to augment human memory. I have continued to develop the VR interaction design (IxD) for this application with a strong emphasis on user-centered design (UCD) principles.

Based on the learnings from this project, I led a team at RATLab LLC in further research on a very similar project. We identified the most compelling use cases and key features, which allowed us to design the virtual environment and interface first in High Fidelity (for prototyping) and then in Unity 3D (with a separate team).

The project received an honorable mention at the SEA-VR Hackathon II and was covered in GeekWire.

Role: Project Lead/Product Manager, UX Researcher, Designer, Presenter

Key Activities: Research, Ideation, Prototyping, Design (Environment + Interaction + Key Features Hierarchy), Data Visualization, Coding, Demoing, Presentation and Evangelization, Client Support

Team Members: Abhigyan Kaustubh, Xinglu Yao, Lucky Agung Pratama

Timeline: 36 hours

Tools: Unreal Engine, Balsamiq, Oculus DK2, Leap Motion

 
