Blog Archives

DisastARcons

[Image: DisastARcons logo]
A HoloLens application for disaster response teams.

Introductory video that briefly describes the primary use case of the application.

Overview

The main inspiration behind this project was to empower disaster response teams to triage and respond to an emergency quickly, to rebuild and restore affected areas more effectively, and at the same time to address the most critical pain points faced by response teams.

DisastARcons allows responders using a Microsoft HoloLens to triage sites after an emergency event by visually inspecting and marking areas that need attention for safety concerns or that pose health/safety risks. This information is synced via the cloud to the Incident Command System (ICS), which uses these data points to decide on and lead the deployment of skilled teams to the area. Subsequent responders then use DisastARcons to find and resolve the tagged areas more efficiently than conventional methods allow.
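To make the cloud sync concrete, here is a minimal sketch of what a single hazard-tag data point could look like. The field names, severity scale, and JSON encoding are illustrative assumptions, not the shipped schema:

    # A hypothetical hazard-tag record, kept flat and small so it stays cheap
    # to sync from the field even over a degraded connection.
    import json
    import time
    import uuid
    from dataclasses import dataclass, asdict, field
    from typing import Optional

    @dataclass
    class HazardTag:
        # World-anchored hologram position in meters (HoloLens spatial frame).
        x: float
        y: float
        z: float
        hazard_type: str            # e.g. "structural_damage", "gas_leak"
        severity: int               # 1 (low) .. 3 (critical) -- assumed scale
        reporter_id: str            # field agent who placed the tag
        created_at: float = field(default_factory=time.time)
        resolved_at: Optional[float] = None
        tag_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    tag = HazardTag(x=12.4, y=0.8, z=-3.1, hazard_type="structural_damage",
                    severity=3, reporter_id="fema-agent-07")
    payload = json.dumps(asdict(tag))  # body of the POST to the sync service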

The project was born at the SEA-VR Hackathon IV in Oct 2016, where we won the Best Humanitarian Assistance Award. Since then, a subset of the team has been developing it further for a client.

Role: Project Lead/Product Manager, UX Researcher, Designer, Presenter

Key Activities: Research, Ideation, Prototyping, Design (Environment + Interaction + Key Features Hierarchy), Data Visualization, Coding, Demoing, Presentation and Evangelization, Client Support

Initial Team (at Hackathon): Abhigyan Kaustubh, Amanda Koster, Alicia Lookabill, Steven Dong, Tyler Esselstrom, Drew Stone, Evan Westenberger, Jared Sanson, Sebastian Sanchez

Final Team: Abhigyan Kaustubh, Amanda Koster, Alicia Lookabill, Steven Dong, Tyler Esselstrom, Drew Stone

Timeline: Oct 2016 – Present

Tools: Tableau, AWS, Unity, Balsamiq, Blender, Adobe Photoshop, Illustrator, Premiere, Visual Studio, HoloLens + its SDK, Asana

Website: http://disastarcons.com/


Process

[Figure: Process flow diagram]

Groundwork + Research

Our process started with a high-level analysis of the problem space: why did we care? We realized that the space lacked optimal solutions, and that solving its problems could save lives in areas affected by natural disasters. This clarified our motivation and got the team fired up to develop a solution.

Secondary Research

The second part was to clearly define the actual problem space and our value proposition/solution, and to gauge its viability and its short- and long-term adoption. To do this well, several things needed to be done (in series and in parallel):

  1. Research organizations in this field in terms of their needs, focus, specialties, customers, and pain points.
  2. Identify the target customer we will be designing the product for: what is the big problem our product will fix for them?
  3. Identify scenarios in which the target customer would use the product. Identify one main scenario where the product would be indispensable to them, and understand how frequently that scenario occurs.

Meanwhile, in parallel,

  1. Envision how an organization could help in a disaster-affected area.
  2. Identify the top 3 things that need to be done in such a scenario and the best way of doing them, staying completely agnostic to any technology or process. Understand the need at the most fundamental level, and then reason about how best to meet it.
  3. Ask whether we are developing a mixed reality solution just because this is a VR hackathon, or whether there is a strong need that only a mixed reality application can meet at the highest level of efficiency.

We started with the users for whom we were going to design our application. After considering several organizations involved in this field (their needs, focus, specialties, customers, and pain points), we decided to narrow our target customer to FEMA.

We believed it was vital to be clear about the above aspects of the project before we dived into design and development. Hence, we iterated on the above exercise a couple of times to gain a clearer understanding.

Primary Research

Given the resources available, we used two methods: interviews, and observation through role playing.

The purpose of the interviews was to gain (and cross-check) a deeper comprehension of our secondary research, and to get a sanity check from the people closest to our users. Additionally, none of our team members had ever been in the intended user's situation (and had no way of experiencing it with the resources and time we had), and we had limited experience in the field of mixed reality. Hence, we interviewed experts from the fields of disaster management, accessibility, and mixed reality.

We interleaved this process with an observation methodology, implemented via role playing. This helped us find more pertinent questions to ask the experts as we came to understand the scenario from the perspective of a potential user.

We gained the following insights:

  1. Many people might initially be enamored by the mixed reality application simply because it is “cool” and a HoloLens is involved. This would bias users' feedback on how useful they find the app, especially when using it in a disaster-affected area.
  2. Movies like Iron Man, which depict augmented reality and are a significant initial motivation for people experimenting in this field, focus mainly on appealing to viewers rather than on usefulness to the character wearing the device. For example, the field of view should be as minimalistic as possible to reduce cognitive load.
  3. The environment in which the application will be used is hostile and might limit accessibility for users. Critical features should therefore support multiple modes of interaction.
  4. Along with focusing on minimalism and accessibility, the interface should be as universally comprehensible as possible.

The above process allowed us to come up with the following outline for our project (described from the target user's perspective):

  1. Situation:
    1. A magnitude 8.8 earthquake strikes, causing a devastating tsunami.
    2. First responders have performed search and rescue.
    3. You are a member of FEMA (Federal Emergency Management Agency), responsible for the coordination and response to a disaster that has occurred in the United States and that overwhelms the resources of local and state authorities.
  2. Problem Statement: A Government Accountability Office (GAO) 2015 audit report found:
    1. Response capability gaps have been identified through national-level exercises and real-world incidents.
    2. The status of agency actions to address these gaps is not collected by or reported to the Department of Homeland Security or the Federal Emergency Management Agency (FEMA). – Anthony Kimery, Editor-in-Chief, Homeland Security Today
  3. Proposed Solution: DisastARcons
    1. DisastARcons uses the Microsoft HoloLens for damage assessment by visually inspecting and marking areas that need attention or that are health/safety risks.
    2. DisastARcons increases efficiency in capturing and sharing accurate data AND measures the time between identification and resolution (see the sketch after this outline).
  4. Why Hololens?
    1. Always in front of you: the HoloLens utilizes the user's entire field of view, unlike most devices (such as a cell phone) that offer a limited rectangle of view and depend on how the user holds them.
    2. Example use case: for the second shift of maintenance workers, all data is always easily accessible when relevant.
    3. Hands free
    4. Highest fidelity: the HoloLens can build a 3D, 360° (4π steradian) reconstruction of its surroundings.
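The "time between identification and resolution" metric mentioned above can be computed directly from synced tag records. A minimal sketch, assuming UNIX-second timestamps and the illustrative fields from earlier:

    # Mean seconds from when a hazard was tagged to when it was resolved.
    # Tags are plain dicts here; this is illustrative, not production analytics.
    def mean_time_to_resolution(tags):
        durations = [t["resolved_at"] - t["created_at"]
                     for t in tags if t.get("resolved_at") is not None]
        return sum(durations) / len(durations) if durations else None

    tags = [
        {"created_at": 1000.0, "resolved_at": 4600.0},  # resolved in one hour
        {"created_at": 2000.0, "resolved_at": None},    # still open
    ]
    print(mean_time_to_resolution(tags))  # 3600.0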

Gaining Product Clarity

Integrating the above, we arrived at the following high-level scenario:

[Figure: High-level scenario diagram]
Storyboarding

[Figure: Storyboard]

Scoping

Following the above process, we scoped our project in terms of main goals and extension goals, as follows.

Main Goal:

  1. To build a HoloLens application with the simplest possible interface, one that allows the user to mark hazards and assign severity rankings to them with accuracy and precision, based on the user's inspection of their surroundings.
    1. Hazards will be marked through tagging, where appropriate holograms are attached to the affected area.
    2. The severity of the hazard will be indicated by the color of the hologram (a sketch of this mapping follows this list).
  2. Safety mechanism: since the user will be operating in a dangerous area, there should be a way for the user to call for help (911) easily and intentionally.
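A minimal sketch of the severity-to-color mapping described above. The real app renders this as hologram colors in Unity; the three-level scale and the RGBA values here are assumptions for illustration:

    # Map a severity ranking to the RGBA color of the hazard hologram.
    SEVERITY_COLORS = {
        1: (0.0, 1.0, 0.0, 1.0),  # low severity  -> green
        2: (1.0, 1.0, 0.0, 1.0),  # medium        -> yellow
        3: (1.0, 0.0, 0.0, 1.0),  # critical      -> red
    }

    def hologram_color(severity: int):
        # Clamp out-of-range rankings to the nearest valid severity level.
        return SEVERITY_COLORS[max(1, min(3, severity))]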

Extension Goals:

  1. Establishing a connection with the ICS (or a remote server) to pool the data collected from different field agents (see the sync sketch after this list).
  2. Updating the information points on every HoloLens in the field.
  3. Sending the information to the ICS for analysis.
  4. Crafting an interface for the ICS to analyze the data quickly and issue directives to field agents.
  5. Adding to the existing backend of the ICS so that it can utilize the HoloLens data points along with other sources in a seamless fashion.
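As a rough illustration of the first three extension goals, here is a sketch of a field-agent sync loop: push locally created tags to a remote server, then pull the tags other agents have created so that every HoloLens shows the same data. The endpoints, payload shape, and polling interval are hypothetical:

    import time
    import requests

    SERVER = "https://example-ics-server.org/api"  # placeholder URL

    def sync_once(local_tags, last_seen):
        # Push any tags created on this device since the last sync.
        for tag in local_tags:
            requests.post(f"{SERVER}/tags", json=tag, timeout=5)
        local_tags.clear()
        # Pull tags other agents have created since our last sync.
        resp = requests.get(f"{SERVER}/tags", params={"since": last_seen},
                            timeout=5)
        resp.raise_for_status()
        return resp.json()

    def sync_loop(local_tags):
        last_seen = 0.0
        while True:
            for tag in sync_once(local_tags, last_seen):
                last_seen = max(last_seen, tag["created_at"])
                # ...hand the tag to the rendering layer here...
            time.sleep(10)  # assumed polling interval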

Ideation

The ideation process involved condensing the results of our different research methods and activities: role playing, using a custom hologram app on the HoloLens, 3D reconstruction, expert interviews, concepts from accessibility, and so on.

We used this to play around with different interface ideas and interaction methods, while refining the use case to be as lean as possible.

Based on our results, we came up with the following flow:

[Figure: Phase 1 flow diagram]

[Figure: Phase 2 flow – ICS and maintenance personnel POV]

Design

Mockups

The ideation process was translated into a UI flow for the app's interface, with special emphasis on simplicity and ease of access.

[Figure: UI flow]


Result

We built a mixed reality HoloLens application that lets the user apply persistent tags to objects in their real environment and rate each hazard's severity. The app records and transfers an accurate set of data points describing the hazards (which later FEMA agents can locate and attend to) to the remote Incident Command System. The ICS analyzes all of the incoming data streams and pushes prioritized, relevant information into the users' field of view, enabling them to restore the most critical affected areas while remaining safe and keeping track of new, potentially hazardous developments in their vicinity.

Next Steps

We are currently working on our primary extension goal (which is now our main goal):

To build the interface for the ICS and establish efficient data transmission between field agents and with the remote Incident Command System.
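As a small illustration of what that interface needs to do, here is a sketch of how incoming tags might be prioritized for the ICS view: unresolved hazards first, ordered by severity and then by how long they have been waiting. The fields mirror the illustrative tag schema sketched earlier:

    import time

    def prioritize(tags):
        # Keep only unresolved hazards.
        open_tags = [t for t in tags if t.get("resolved_at") is None]
        # Highest severity first; among equals, the longest-waiting first.
        return sorted(open_tags, key=lambda t: (-t["severity"], t["created_at"]))

    now = time.time()
    tags = [
        {"severity": 2, "created_at": now - 7200, "resolved_at": None},
        {"severity": 3, "created_at": now - 600, "resolved_at": None},
        {"severity": 3, "created_at": now - 3600, "resolved_at": now - 60},
    ]
    for t in prioritize(tags):
        print(t["severity"], t["created_at"])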

The process for building the full dashboard can best be represented by the following flow chart:

[Figure: Building a functioning dashboard (front end + back end)]

The prototype of the eventual incident command center's interface, which gives an overview of the various things happening in the affected area, is shown below:

[Figure: Incident command center dashboard prototype]

Phytoplankton Trends


Discovered two types of phytoplankton by applying machine learning and data visualization to flow cytometry data.

Overview

This is a data science project underway at UW Seattle. The work was performed as a capstone project, with the primary goal of gaining a better comprehension of marine biology by analyzing the available flow cytometry data.

Oceanographers use flow cytometry to measure the optical properties of a given sample of water through radial dispersion. This is done by attaching flow cytometers to the bottoms of research ships, enabling coverage of a vast body of water.

We procured flow cytometry data sampled at 3-minute intervals, and used a suitable clustering technique to identify regions of the water body that have similar trends in microscopic life-form populations.
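A minimal sketch of the clustering step, assuming typical flow cytometry features such as forward scatter, side scatter, and fluorescence channels. The stand-in data, the scaling choices, and the use of k-means with k = 2 are illustrative, not necessarily the exact pipeline we ran:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Each row is one particle:
    # [forward scatter, side scatter, red fluorescence, orange fluorescence]
    rng = np.random.default_rng(0)
    measurements = rng.lognormal(mean=2.0, sigma=0.5, size=(5000, 4))

    # Flow cytometry intensities are heavy-tailed, so log-scale before scaling.
    X = StandardScaler().fit_transform(np.log(measurements))
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Population counts per cluster, which can then be tracked per 3-minute window.
    print(np.bincount(labels))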

To learn more about the project, please see the wiki.

Roles: Program Manager, Developer (Machine Learning + Data Visualization), Presenter

Key Activities: Acquiring client, gathering requirements, setting goals, creating a roadmap of deliverables, coordinating events with the stakeholders and ensuring that deliverables are on time, literature review, coding, data visualization, presentation

Team:

The primary contributors to this project's repository are:

  1. Abhigyan Kaustubh
  2. Elton Dias
  3. Tanmay Modak

This repo was compiled and documented by Abhigyan Kaustubh.

Stakeholders

  1. Bill Howe, eScience Institute, UW CSE
  2. Sophie Clayton, UW Oceanography
  3. Jeremy Hyrkas, UW CSE
  4. Daniel Halperin, UW CSE
  5. UW Oceanography Researchers (eScience Institute)

Timeline: Dec 2014 to Jun 2015 (7 months)

Sponsor: UW eScience

Poster for Presentation

[Figure: Capstone presentation poster]

fMRI Brain Scan


Predicted which object a person is looking at by applying machine learning to fMRI data from their brain scans (a sketch of the approach appears at the end of this section).

Tools: Python, Scikit-Learn

Author: Abhigyan Kaustubh

Role: Machine Learning Developer

Key Activities/algorithms/tools: Data Sanitization, Principal Component Analysis, Data Visualization, Machine Learning Algorithms

Timeline: 10 weeks
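A minimal sketch of the kind of pipeline this involved: PCA to reduce the voxel dimensionality, followed by a classifier that predicts the viewed object category. The data shapes and the choice of logistic regression are illustrative assumptions:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Stand-in data: 200 scans x 5000 voxels, 8 object categories.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5000))  # voxel activations per scan
    y = rng.integers(0, 8, size=200)  # object category shown during the scan

    model = make_pipeline(PCA(n_components=50),
                          LogisticRegression(max_iter=1000))
    print(cross_val_score(model, X, y, cv=5).mean())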

Stock Market Swings Prediction

[Image: NASDAQ stock market display]

Description

Explored the possibility of predicting stock market swings using sanitized Twitter data, as a first venture into data science.

Team Members: Abhigyan Kaustubh, Brennen Smith, Padma Vaithyam, Wenxuan Zheng

Tools: R, Excel

Key Activities: Data sanitization, regression, sentiment analysis, lexical analysis, TF-IDF, Data Visualization (heat maps)

Timeline: 10 weeks

Abstract

Statistical data concerning people's opinions and emotions, when understood and applied correctly, have in some instances been accurate in gleaning what is going to happen in the immediate or more distant future. Here, we adopted a data science approach to test our hypothesis that there is a correlation, and possibly causation, between the rise and fall of a particular company's stock price and the corresponding tweets on Twitter. Through our research and analysis, we incorporated three different methods of sentiment and lexical analysis, with the intention of producing accurate predictions for 5 major companies using tweet keywords generated from January 2008 through March 2010.
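The original analysis was done in R; the following Python sketch illustrates the core idea of one such method: score each day's tweets with a small sentiment lexicon, then correlate the daily scores with the stock's daily returns. The lexicon and the sample data are toy assumptions:

    import numpy as np

    POSITIVE = {"gain", "growth", "beat", "record", "up"}
    NEGATIVE = {"loss", "miss", "down", "lawsuit", "recall"}

    def day_sentiment(tweets):
        # Mean (+positive / -negative) word count per tweet for one day.
        scores = [sum(w in POSITIVE for w in t.lower().split()) -
                  sum(w in NEGATIVE for w in t.lower().split())
                  for t in tweets]
        return np.mean(scores) if scores else 0.0

    daily_tweets = [
        ["record gain today", "stock up"],
        ["big loss", "down again"],
        ["lawsuit filed", "product recall coming"],
        ["growth beat expectations"],
    ]
    daily_returns = np.array([0.012, -0.020, -0.008, 0.015])

    sentiment = np.array([day_sentiment(t) for t in daily_tweets])
    # Pearson correlation between daily sentiment and same-day returns.
    print(np.corrcoef(sentiment, daily_returns)[0, 1])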

The resulting figures and visualizations showed that there was no correlation between the stock market movement and the Twitter stock handles that we used.

The graphical results (graphs and heat maps) of the analysis can be viewed here.
