PerSim™: realistic, portable, low-cost simulation
short description
PerSim trains first responders using augmented reality simulation. It is more realistic, more portable, and lower in cost than current solutions.
About you
Bio (optional)
Our team is composed of the founders of MedCognition, Inc., a recently formed (2016), pre-product startup company based in San Antonio, Texas. Our mission is to make high-fidelity simulation accessible throughout the chain of medical care. With our combined expertise and experience, and our intention to commercialize this as a product, we feel we have a high potential of success. Our team is made up of the following personnel. Dr. Kevin King is a board-certified emergency physician with extensive military and combat experience who brings his administrative, clinical, and educational expertise to head this project. He envisioned the AR simulation device and has been integral to its development. Dr. John Quarles is an associate professor of computer science with an extensive background and research record in virtual and augmented reality. He is the primary software and hardware architect of the AR simulator. Roland Paquette is a former US Army Special Forces medic, now a Physician Assistant, who owns and operates his own first responder training company; thus, Roland has a deep understanding of EMT training needs and limitations. Dr. Hector Caraballo is a board-certified emergency physician with expertise in medical education and training. Brian Dedmon, MBA, is our chief financial officer, with extensive experience in startups and compliance, which will help ensure our prototype transitions successfully to a market product and into the hands of actual customers.
Technical Details
Solution overview
We are proposing our solution as an extension of our current functional prototype of PerSim™.
Current Prototype: We have developed a functioning prototype of PerSim using the Microsoft HoloLens for the trainee and a Microsoft Surface tablet controller for the instructor (Figure 1). The trainee views the patient naturally, as he or she would view a real patient (Figure 4), and interacts through visual, auditory, and haptic channels with a virtual patient situated in the real surrounding environment. The virtual patient has realistic coloring (e.g., looks pale and sweaty when in severe respiratory distress), realistic mouth movements and voice sounds, which can be pre-recorded or streamed in real time by the instructor, and realistic animations designed by a professional medical artist. To easily control the simulator, the instructor uses a tablet to change the states of the virtual patient simply by touching a button (Figure 3).
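The instructor-driven state model can be sketched as follows. This is a minimal illustration, not the actual PerSim (Unity3D) code; the state names and vitals values are hypothetical.

```python
# Minimal sketch of the instructor-driven patient state model: each
# button on the instructor tablet maps to a named state, and a state
# change updates the appearance and vitals rendered in the HoloLens.
# State names and vitals values here are illustrative only.

PATIENT_STATES = {
    "stable":               {"skin": "normal", "resp_rate": 16, "spo2": 98},
    "respiratory_distress": {"skin": "pale_sweaty", "resp_rate": 32, "spo2": 86},
    "seizure":              {"skin": "cyanotic", "resp_rate": 8,  "spo2": 80},
}

class VirtualPatient:
    def __init__(self, initial="stable"):
        self.state = initial
        self.log = [initial]          # state history, useful for later metrics

    def set_state(self, name):
        """Called when the instructor touches a state button."""
        if name not in PATIENT_STATES:
            raise ValueError(f"unknown state: {name}")
        self.state = name
        self.log.append(name)
        return PATIENT_STATES[name]   # appearance/vitals to render

patient = VirtualPatient()
vitals = patient.set_state("respiratory_distress")
print(vitals["skin"])  # pale_sweaty
```

Keeping the simulation state-based in this way is also what makes the automatic performance metrics described later possible.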
One of the most valuable aspects of our solution is the ability to design custom scenarios. Our solution includes scenario-authoring software (Figure 2), which allows users to create new scenarios from a library of scenario building blocks using our user-friendly touch-and-drag interface. The funds from this proposal will largely be used to expand this library. The scenarios we already have, and the ones we describe here, are merely examples of the thousands of scenarios that can be made by mixing and matching the individual parts (e.g., with building blocks from these two scenes, one could create a gunshot wound victim at a pickup truck accident). Scenarios can be customized or made from scratch in our editor interface, which uses a simple touch-and-drag paradigm to import new and existing assets and to mix and match the various textures, animations, and vitals monitor output during the live simulation. The realistic vitals monitor can be displayed either on an additional physical tablet or virtually through the HoloLens.
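The building-block idea behind scenario authoring can be illustrated with a small sketch. All asset names here are hypothetical, not the real library; the point is only that a scenario is a composition of reusable parts.

```python
# Illustrative sketch of scenario "building blocks": a scenario is a
# composition of reusable parts (environment, patient model, injury
# textures, animations, sounds). Asset names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Scenario:
    environment: str
    patient_model: str
    textures: list = field(default_factory=list)
    animations: list = field(default_factory=list)
    sounds: list = field(default_factory=list)

# Mixing blocks from two existing scenes, e.g. a gunshot-wound victim
# at a pickup truck accident:
custom = Scenario(
    environment="pickup_truck_accident",   # block from Scene 2
    patient_model="adult_male",
    textures=["gunshot_wound", "sweat"],   # blocks from Scene 1
    animations=["labored_breathing"],
    sounds=["cars_passing", "moaning"],
)
print(custom.environment, custom.textures)
```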
In addition, we are using a low-fidelity CPR mannequin torso to provide a physical reference for the AR projection, as well as to give trainees the ability to easily touch or move the simulated patient. We have developed our own software and animations for several common current scenarios: severe respiratory distress, blunt trauma (e.g., as is common in car accidents), seizure, and stroke.
Proposed Work: PerSim is currently missing the affective, emotional elements that are present in almost all real-life scenes. These include belligerent patients, panicky bystanders, and cars speeding by, which are all significant dangers, distractions, and pressures commonly found at real scenes. This is a significant need that is not being met by current simulation approaches, according to the 30+ EMTs and EMT trainers we have previously interviewed. These affective elements are often overlooked in current simulations, but they are of the utmost importance because they can significantly impact decision making and critical thinking. Specifically, we propose to develop mass casualty, tactical, and Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) scenarios using our approach.
DIY Recipe Kit
PerSim consists of a series of off-the-shelf consumer products that have been integrated into a portable, high-fidelity patient simulator at a more affordable cost than current high-fidelity mannequins.
Specifically, the components are 1) a Microsoft HoloLens for the trainee to see, hear, and interact with the virtual patients, 2) a Microsoft Surface tablet for the instructor to control the simulation in real time and to design new scenarios, and 3) a battery-powered wireless router that facilitates communication between the devices. The software is programmed in Unity3D.
PerSim is designed to be a turn-key application with minimal setup time (~5 minutes). Once connected to the wireless router, the user adjusts the HoloLens comfortably on their head and then starts the programs on the HoloLens and the tablet. The devices automatically discover each other and establish a network connection. Lastly, there is a short, ~20-second calibration step in which the HoloLens user aligns the virtual patient to the real environment by looking at 3 points and saying ‘left, right, middle, load’. This step only needs to be performed once per environment.
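The geometry of the 3-point calibration step can be sketched as follows. This is a hedged illustration of the idea only, not the actual HoloLens pipeline: given the three gaze points the user labels "left", "right", and "middle", compute a position and facing for the virtual patient on the floor plane.

```python
# Sketch (assumed geometry, not the real PerSim calibration code) of
# placing the virtual patient from three labeled gaze points.
import math

def calibrate(left, right, middle):
    """Each point is (x, z) on the floor plane, in meters."""
    # The patient anchor sits at the "middle" gaze point.
    anchor = middle
    # The patient's lateral axis runs from the "left" to the "right" point...
    dx, dz = right[0] - left[0], right[1] - left[1]
    # ...so the facing (yaw) is perpendicular to that axis.
    yaw = (math.degrees(math.atan2(dz, dx)) + 90.0) % 360.0
    return anchor, yaw

anchor, yaw = calibrate(left=(-0.5, 2.0), right=(0.5, 2.0), middle=(0.0, 2.0))
print(anchor, yaw)  # (0.0, 2.0) 90.0
```

Because the result only depends on the room, caching it per environment is what lets the calibration be a one-time step.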
Because PerSim is made of consumer-grade components that have been extensively safety tested, we expect that it will present little to no risk to users. Moreover, the HoloLens uses optical see-through AR, which means that the user can always see the real world with no delay. Thus, users are not in danger of tripping over obstacles as they would be in a virtual reality system, and they are not subject to the visually induced motion sickness commonly experienced in virtual reality systems and video see-through AR systems.
Realistic Elements - Virtual
The virtual patient is automatically aligned and integrated into the real world. PerSim can be easily and quickly aligned to most existing mannequin simulators for users who want to practice procedural skills and get realistic haptic (i.e., tactile) feedback. However, PerSim can align to any stationary object (e.g., the gurney in the back of an ambulance), so a mannequin is not required, which increases portability. A realistic vitals monitor can optionally be displayed on a physical tablet or virtually in the HoloLens.
The models, textures, and animations will be created by a professional medical artist and thus will be highly visually realistic and medically accurate (see examples in the video link). Moreover, we will hire professional voice actors to record the patient's audio if pre-recorded voices are desired. Note that PerSim also currently allows the instructor to speak through the patient, with the mouth moving realistically along with the streamed audio. We also plan to professionally record real ambient/environmental sounds (e.g., cars speeding by).
Realistic Elements - Physical
Because PerSim uses the Microsoft HoloLens, it can integrate with most physical environments, as long as they are not in direct sunlight, which reduces the visibility of the virtual elements. This enables users to work through the scenes using their real environments (e.g., the back of an ambulance, their classroom, their fire station, a restaurant) and their real tools (e.g., tourniquets, intubation tools, stethoscope).
Technology Testing
Based on our novel approach to integrating tablets and HoloLenses, our current system already includes most of NIST’s technologies of interest, such as heads-up displays, cameras, location-specific identification tools, sensors, touch displays, audio cues, voice commands, mapping technologies, interface layouts, and gesture recognition. Thus, our solution can provide an effective platform to test these technologies.
Moreover, our solution can be integrated seamlessly with physical equipment, such as most current mannequin simulator technologies, including CPR dummies that most first responder training agencies already have. Because our approach is so versatile, these existing simulation technologies and approaches can be tested within our system, but at a much higher level of realism than possible before.
Purchase Order List*
This is the list of hardware components needed to run PerSim (see system diagram in figure ?).
HoloLens – trainee interface: $3,000
Tablet (e.g., iPad, Android, Surface) – instructor interface: $800
Surface Pro Tablet – physical vitals monitor (optional; can be virtual to reduce cost): $800
Wireless router: $30
A low-fidelity dummy mannequin can also be incorporated for more skills-based training and a realistic feel. Most EMT training programs already have one of these; a new one can cost from $400 to $2,000, depending on the procedures supported. Our system also works without any mannequin for increased portability.
Total cost of hardware components (including the 2nd tablet and the optional mannequin at its $2,000 high end): $6,630.
Additional hardware may be required for additional trainees (e.g., multiple HoloLenses for team training), and additional tablets may be used to support multiple instructors concurrently.
First Responder Scenes
We plan to develop the following two highly detailed scenes using the HeroX funds: 1) an active shooter and 2) a pickup truck accident. What we propose here could be done with $50K; with less money, we can create fewer scenes with less detail. However, remember that all the building blocks - models, animations, patient textures (e.g., gunshot wound, seatbelt bruise, sweat, cyanosis) - can be mixed and matched by first responder trainers to create potentially thousands more custom scenarios of their own design, in real locations of their choosing.
Scene 1: Active shooter - multiple casualties. Gun Shot Wound (GSW).
At an active shooter scene with a direct threat, the first responders must direct other personnel while they care for the patient. In the proposed scene, trainees will direct the response team (police/SWAT) to return fire to keep the threat engaged or minimized. Trainees will also direct any bystanders to leave the scene and take cover, remembering not to place any of the public in harm’s way through use of force or threat suppression. Concurrently, the trainee must prevent casualties from sustaining further injury. If a casualty is responsive, the trainee must direct the casualty to get behind cover. If a casualty is responsive but unable to move, the trainee must use a tactically feasible plan to extract the casualty. Lastly, if the casualty is unresponsive, the trainee must weigh the risk vs. benefit of a rescue attempt based on the threat level and the likelihood of success. The trainee must also perform hemorrhage control, directing the casualty to self-aid if able and/or to buddy aid (place a hasty tourniquet or apply pressure to bleeding), and must put casualties in a recovery position.
At a scene with an indirect threat, the trainee must perform tactical field care. They must place casualties and needed gear at a safe collection point. They may need to triage casualties into those who are uninjured or capable of self-extraction, the deceased or expectant, and all others. The trainee must establish communication with command or 9-1-1. For treatment, trainees should use the MARCH algorithm (Massive hemorrhage, Airway, Respiration, Circulation, Hypothermia/Head). This includes skills such as:
- M) Massive hemorrhage: tourniquets, direct pressure, wound packing.
- A) Airway: assessing an unconscious patient, feeling for breath, checking rise and fall of the chest, checking for bleeding into the airway or an injury blocking the airway, jaw thrusts, nasopharyngeal airway placement.
- R) Respirations: check for torso trauma, cover with a chest seal, needle decompression of a tension pneumothorax.
- C) Circulation: treat non-massive hemorrhages, check limb pulses, start an IV if indicated for fluid resuscitation.
- H) Hypothermia/Head: keep the casualty warm; if a head injury is suspected, elevate the patient 30 degrees and place on supplemental oxygen to keep O2 sat > 90%.
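Because the simulation is state based, MARCH-style treatment steps can be tracked as a scored checklist. The sketch below is illustrative: the step names follow the MARCH mnemonic above, but the action names and scoring scheme are hypothetical, not PerSim's actual metrics code.

```python
# Illustrative sketch of scoring a trainee's run against the MARCH
# checklist. Action names and scoring are hypothetical.

MARCH_STEPS = [
    ("M", "massive_hemorrhage", {"tourniquet", "direct_pressure", "wound_packing"}),
    ("A", "airway",             {"assess_airway", "jaw_thrust", "npa_placement"}),
    ("R", "respirations",       {"chest_seal", "needle_decompression"}),
    ("C", "circulation",        {"check_pulses", "start_iv"}),
    ("H", "hypothermia_head",   {"keep_warm", "elevate_head", "supplemental_o2"}),
]

def score_run(actions_performed):
    """Return the MARCH letters for which at least one valid action was taken."""
    covered = []
    for letter, _name, valid_actions in MARCH_STEPS:
        if valid_actions & actions_performed:
            covered.append(letter)
    return covered

run = {"tourniquet", "jaw_thrust", "keep_warm"}
print(score_run(run))  # ['M', 'A', 'H']
```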
Scene 1 Performance Metrics Examples: a. Scene safety assessment and communication of details to central command. b. Assessment of the safety of bystanders running toward police. c. Time to identifying GSW victims who can be saved with a tourniquet. d. Geolocation by GPS and photo of victims and potential active shooter(s).
Other possible metrics:
- Return fire and take cover behaviors.
- Moving casualties to places of relative safety behavior.
- Assess the wound for the appropriate intervention: correctness of choice between a tourniquet and alternatives (e.g., combat gauze).
Scene 2: Pickup truck accident - South Texas / semi-rural area. A drum of pesticides has overturned and spilled onto the road. Multiple vehicles are involved. Two victims are lying on the roadside with a SLUDGE presentation (Salivation, Lacrimation, Urination, Defecation, Gastrointestinal distress, Emesis).
First, trainees should ask the dispatcher for information such as unusual signs and symptoms. Once a hazardous substance is suspected, responders should locate specific information on the chemical(s).

Upon arrival at a scene, responders make an initial assessment of the nature and extent of the incident. Trainees can demonstrate whether they correctly assess and secure the scene. Here is a partial list of behaviors that can be modeled in our system, given sufficient 3D/audio assets:
- Avoid unnecessary contamination of equipment.
- Use of adequate protective equipment while recovering shipping papers or manifests.
- Avoid exposure while approaching a scene.
- Do not approach anyone coming from contaminated areas.
- Do not attempt rescue unless trained and equipped with appropriate PPE for the situation.
- Report all suspicious packages, containers, or people to the command post.
- Immediately establish an Exclusion (Hot) Zone, taking care not to become exposed during the process.

Scene 2 Performance Metrics: a. Time to establishing scene safety. b. Ability to communicate level of threat and required resources to address the threat. c. Time elapsed to administering lifesaving interventions based on presentation. d. Proper notification to local hospitals to prepare for threat/ exposure.
PerSim can automatically record the following data: trainee/instructor speech, trainee gestures, time taken to perform an action or reach a simulation state change, when/if actions were performed (see scene descriptions for detailed actions), depth and RGB video of where the trainee is looking, head orientation, head position, and scene map.
Based on this data, PerSim can measure:
- Setup time at an emergency scene and reattempt rate – these can be measured through the time elapsed between states and between user interactions.
- Accuracy of completing a task (based on virtual patient state)
- Accuracy and precision of location-based technologies: on average, this is already known for static scenes, based on the HoloLens’s scene-mapping technology. However, having many moving real objects in a scene can disrupt and hinder tracking performance. For this, we can look at how many remapping operations are being performed.
- Speed from communication to resulting action; precision and granularity of voice commands. PerSim can record real-time audio from the user and instructor. Thus, we can analyze speech patterns and communication frequency, as well as test the correctness of PerSim’s voice command recognition and look at how this impacts performance time.
- Measuring multiple technologies on one task, such as visual and auditory.
- Enabling operations vs. distracting from operations. Because the PerSim simulation is state based, we can infer how distractors impact subsequent states.
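The timing metrics above can be derived from the kind of timestamped log PerSim records. The sketch below assumes a hypothetical event format of (seconds_since_start, kind, detail); it is an illustration of the analysis, not PerSim's actual log schema.

```python
# Sketch of deriving timing metrics from a timestamped event log of
# state changes and user actions. Event format is hypothetical.

def time_between_states(events):
    """Seconds elapsed between consecutive simulation state changes."""
    times = [t for t, kind, _ in events if kind == "state_change"]
    return [b - a for a, b in zip(times, times[1:])]

def reattempt_rate(events, action):
    """How many times an action was retried beyond the first attempt."""
    attempts = sum(1 for _, kind, detail in events
                   if kind == "action" and detail == action)
    return max(0, attempts - 1)

log = [
    (0.0,  "state_change", "stable"),
    (12.5, "action",       "apply_tourniquet"),
    (14.0, "action",       "apply_tourniquet"),   # a reattempt
    (30.0, "state_change", "bleeding_controlled"),
]
print(time_between_states(log))                 # [30.0]
print(reattempt_rate(log, "apply_tourniquet"))  # 1
```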
Supporting Documents - Visual Aids
herox Figures.pdf
