introduction
title
Untethered VR Training in Sync'ed Physical Spaces
short description
Inspired by developments in merged VR & physical environments for gaming, we describe a platform that supports the NIST challenge.
Technical Details
Solution overview
We propose a solution based on recent advances in merged immersive virtual & physical environments for gaming & other purposes, especially the "hyper-reality" work of the company “The VOID” [https://www.thevoid.com/] [https://en.wikipedia.org/wiki/The_Void_(virtual_reality)], and of others soon to come out of stealth development mode [https://thevideoink.com/spaces-is-one-step-closer-to-creating-a-vr-theme-park-fe527ba8599b]. The concepts and technology developed by VR companies like The VOID provide a powerful basis for supporting your needs, and can be extended in several important ways to meet your full set of test environment requirements.
The VOID began by prototyping the merger of physical spaces with developer-edition VR equipment to test their concept with people. They enabled participants to be untethered, carrying the VR experience computer in a backpack & using wireless technology where necessary. Trials proved this a very powerful way to overcome limitations of existing consumer-oriented 3D VR, such as the inability to move naturally for exploration, and to achieve a significantly deeper level of immersion. In short, testers of the early prototype found it extremely compelling, & the company has optimized their technology & the experience ever since. They began with Oculus Rift developer-edition HMDs, Leap Motion depth sensors mounted to the front for hand recognition, and headphones for immersive audio, all connected to a laptop in a backpack. They used optical IR markers and motion-tracking cameras mounted above for gross body position tracking. They mapped a physical space in 3D and imported it into the virtual space, registering the mapped physical space & the virtual representation to coincide and ensuring accurate co-location of walls, benches, etc., between the VR & real spaces. They also built basic peripherals such as guns & torches, which themselves communicated wirelessly and were position & orientation tracked. They then opened the prototype for people to explore the environments together & collected feedback on the experience (& some revenue) [https://www.technologyreview.com/s/544096/inside-the-first-vr-theme-park/].

The company has since been developing an entire range of custom technology, having discovered they could not purchase what they needed off the shelf. They have an exclusive agreement with contract design firm Optimal Design Co for development of this series of technologies, which they have labeled the “Rapture” system [http://optimaldesignco.com/portfolio/void-virtual-reality-system/]. They have developed a custom VR HMD with significant differences from consumer VR HMDs, since their hardware is less price-sensitive. Their HMD integrates dual curved 2K OLED screens with custom optics, giving a 180-degree peripheral field of view. They include quality Bang & Olufsen headphones, microphones for communications, & RF 3D position & orientation tracking devices.

They have developed a haptic vest with multiple types of haptic feedback, & wireless gloves with RF position trackers, built-in finger-bend sensors, & open fingertips so the hands can be fully represented in VR while still feeling surfaces of the physical environment.

They have built a custom small computer for the backpack to run each individual’s VR experience, partnering with NVIDIA & others to obtain the high-performance computing required for this demanding application.

They are also working on occlusion-free RF tracking of each participant’s body, with sensors for the ankles, gloves, back, and head. Existing optical motion trackers are very expensive, & there are always occlusion issues with walls and bodies. Their RF system is being designed to solve these issues.

The HMD is wired to the backpack computer; all other elements, such as gloves, ankle tracking sensors, & any held peripherals, are wireless, with no cables to get in the way.

The physical environments are set up with digitally triggered & controlled interactive physical elements, such as sprays of water mist, fans, heat sources, etc. Motion & vibration platforms can also be incorporated to represent physical movements & effects, such as a vibration platform to mimic an elevator.

One final piece that makes the whole system work is the ability to fit a lot of physically explorable virtual space into a relatively small real physical space. This is possible because, without the audio-visual cues of the real world, participants can be tricked with something called “redirected walking” into believing they are walking straight when they are walking in circles. This, along with the ability to map very different virtual environments to the same physical space, enables a relatively small physical arena to work for large & varying VR spaces [http://www.roadtovr.com/the-void-rapture-vr-headset-2k-curved-oled-display/].
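For illustration, here is a minimal sketch of the redirected-walking idea in Python, assuming a simple curvature gain applied per metre walked; the gain value is an assumption chosen for illustration, not The VOID’s actual parameter:

```python
import math

# Minimal sketch of "redirected walking" via a curvature gain. Each frame,
# the virtual heading is rotated slightly relative to the physical heading,
# steering the walker along a large circle in the real room while they
# perceive a straight path in VR.

CURVATURE_GAIN = 0.045  # rad per metre walked; assumed value, kept small to stay unnoticed

def redirect(virtual_heading: float, distance_walked: float) -> float:
    """Return the adjusted virtual heading after walking `distance_walked` metres."""
    return virtual_heading + CURVATURE_GAIN * distance_walked

# Walking 10 m "straight" accumulates a ~26 degree turn in the real room.
heading = 0.0
for _ in range(10):
    heading = redirect(heading, 1.0)
print(f"injected turn after 10 m: {math.degrees(heading):.1f} degrees")
```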
DIY Recipe Kit
The gaming environments companies are considering for the first applications of this technology will likely keep a physical environment set up for several months at least, so the public has a chance to buy tickets and experience the scenario. For first responder training, however, there may be greater need to reconfigure the available spaces to suit different first responder groups, test scenes, and scenarios. The physical environments should therefore use modular, movable, and storable physical infrastructure elements: walls, stairs, doors, furniture, vehicles, etc. In addition, feedback elements such as heat, smell, air, and water sources should be made modular and quick to set up, take down, move, and store. Assuming a large, open, warehouse-like space for the physical environments, we envision many of these elements on wheels, or pulled into position on overhead tracks, for rapid deployment to the training space and return to storage. Inspiration for these kinds of elements can come from theatre set design, and potentially from film set work as well.

Preparing A New Training Environment

Creating a new training environment involves a set of steps that allow it to be re-created anytime and anywhere using the system. First, the VR scene and physical scene are co-designed and created, each helping inform the other. Once the physical environment is built the first time, using as many standardized modular elements as possible, the scene can be 3D scanned into a 3D model. This model can be cleaned up, imported into the 3D VR world, and the VR world and physical model accurately registered to each other. To speed setup of the physical scene at later times and places, this registered 3D model can be used by staff wearing augmented reality (AR) HMDs such as Microsoft HoloLens [https://www.microsoft.com/en-ca/hololens] or others to rapidly position and orient all the physical elements into place with precision. This scene setup step could even use a version of the 3D model modified specifically for setup staff, where the software and model guide them through a sequence of steps: first this item goes here, next this item goes there, etc. This AR-visualized model could include guide markers, lines, rules, AR tags, etc., to help the staff with placement and setup. This is a great way to leverage the media and models created for participants for rapid setup by staff. It is also a way to incorporate the strengths of AR, where staff need to see the physical world with virtual world elements overlaid and collaborate as they assemble the scene [http://www.cio.com/article/3190557/virtual-reality/how-hololens-lets-you-collaborate-in-context.html].
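As a rough sketch of how the step-by-step AR guidance could be driven, the registered 3D model could be accompanied by a placement manifest consumed by the headset software. The element names, fields, and poses below are hypothetical:

```python
import json

# Hypothetical placement manifest for AR-guided scene setup: each step names a
# modular element and its pose (position in metres, yaw in degrees) relative to
# the room origin established when the physical scene was first 3D scanned.
SETUP_MANIFEST = [
    {"step": 1, "element": "wall_segment_A", "position": [0.0, 0.0, 2.4], "yaw_deg": 90.0},
    {"step": 2, "element": "door_frame_1",   "position": [1.2, 0.0, 2.4], "yaw_deg": 90.0},
    {"step": 3, "element": "bench_long",     "position": [3.5, 0.0, 1.0], "yaw_deg": 0.0},
]

def next_step(completed: set) -> dict:
    """Return the first uncompleted step, in sequence, for the AR headset to display."""
    for step in SETUP_MANIFEST:
        if step["step"] not in completed:
            return step
    return {}

print(json.dumps(next_step({1}), indent=2))  # guides staff to place door_frame_1 next
```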

VR/AR Software

We propose using the Unity 3D game engine for both the participant VR experiences and the optional setup-staff AR experiences [https://unity3d.com/]. Unity is the industry’s de facto shared standard, integrated by Microsoft HoloLens [https://unity3d.com/partners/microsoft/hololens], Oculus Rift and others [https://unity3d.com/unity/multiplatform/vr-ar], and even The VOID [https://madewith.unity.com/en/profiles/the-void] [https://unity3d.com/company/public-relations/news/void-launches-ghostbusters-dimension-immersive-hyper-reality]. Unity has all the capability needed to support every element of the system presented in this proposal.
Safety
Some elements of the physical environment can be made safer for participants without negatively affecting realism. For example, objects with sharp edges can be blunted so people won’t get cut, since they can’t actually see every aspect of the physical world. Heat levels can be kept below the skin-burning threshold.
Such modifications must be considered carefully so that important elements of realism in the training exercises are not degraded: the objective is to make participants feel they are immersed in real and possibly dangerous environments without actually exposing them to harm.

The VR HMD is a ‘helmet,’ with protective bicycle-helmet-like padding inside to protect the face and head while the ears and eyes are covered [https://uploadvr.com/void-upgrades-hardware-aims-20-installations-year/].
Realistic Elements - Virtual
How This Technology Supports First Responder Training:

Firstly, this technology supports deep immersion. People really experience a different world with so many cues informing their senses:
* a high-fidelity 3D VR world shown to the eyes,
* in-world audio to the ears,
* an avatar body that matches the movements of their own body,
* haptic feedback through gloves, vest, and held peripheral tools,
* smells, temperature and moisture variations that match the VR scene,
* physical navigation and physical tactile exploration of the environment,
* shared experience of the merged environment with others, and full communication between participants.

This is ideal for bringing high realism to first responder test scenarios.

Secondly, the scenes and scenarios can represent anything. Scenarios can involve real or manikin bodies, all tracked to synchronize the virtual and physical worlds. Medical and rescue tools and equipment can be mapped, modelled, and tracked so that physical interactions with them are accurately represented in the VR world. Props of any kind, including standard objects from the real world, rubble, vehicles, etc., can likewise be integrated for ultra-realism. Sensors can detect touch, proximity, pointing, targeting, etc., and provide data to the VR world and to controllers which operate actuators to make the physical environment respond. For example, a training scenario may involve a person (manikin) trapped under a vehicle, where the exercise goal is to jack the vehicle up off the person as quickly as possible with minimum further harm to the victim. A vehicle, or a partial physical model of one, can be incorporated in the physical scene and shown in high fidelity in the 3D VR scene, with the ability for trainees to coordinate jacking up the vehicle with a real jack and extricating the manikin body, and with data on every important aspect of the exercise digitally logged for repeatable training analysis.
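As a rough sketch of the sensor-to-log data flow in the vehicle-jack example, a height sensor on the jack could stream readings to the exercise controller, which updates the VR scene and logs when the lift clears an extraction threshold. The sensor name, threshold, and log shape are assumptions for illustration:

```python
import time

# Illustrative event flow for the vehicle-jack scenario: a (hypothetical)
# height sensor on the jack streams readings; the exercise controller logs
# each reading and records when the lift clears the extraction threshold.

EXTRACTION_CLEARANCE_M = 0.15  # assumed minimum lift to free the manikin

def on_jack_reading(height_m: float, session_log: list) -> None:
    session_log.append({"t": time.time(), "sensor": "jack_height", "value": height_m})
    if height_m >= EXTRACTION_CLEARANCE_M:
        session_log.append({"t": time.time(), "event": "vehicle_clear_of_victim"})

log = []
for reading in (0.02, 0.08, 0.16):  # simulated sensor stream
    on_jack_reading(reading, log)
print(log[-1]["event"])  # -> vehicle_clear_of_victim
```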

In another scenario, involving an instrument with button, dial, and switch interface elements and a display, either the real instrument or a low-cost mockup of it can be used with equal realism. If a real instrument already outputs digital data wirelessly through an API that can be integrated with the VR computation elements, there may be no need to build a mockup. Alternatively, a mockup of the instrument can be created which feels like the real thing, with buttons that depress, dials that turn, and switches that toggle, with sensors on these elements wirelessly relaying each interaction into the VR computation elements for modelling in the VR environment.
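A minimal sketch of what the wireless interaction message from such a mockup might look like follows; all field and device names here are assumptions, not an existing protocol:

```python
import json
import time

# Sketch of the wireless interaction message a mockup instrument might relay
# to the VR computation elements: every button press, dial turn, or switch
# toggle becomes one timestamped event.

def control_event(device_id: str, control: str, value) -> str:
    return json.dumps({
        "device_id": device_id,    # unique ID assigned to this mockup
        "timestamp": time.time(),  # synchronized session clock
        "control": control,        # e.g. "power_switch", "range_dial", "trigger"
        "value": value,            # bool for switches, float for dials
    })

print(control_event("mock_radiation_meter_07", "range_dial", 0.75))
```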

For realism, light and 3D spatial audio/noise are represented by the virtual scene and presented to the participant through the HMD. Weather can be represented to the visual and audio senses by the VR scene, and to the skin and nose by physical scene actuation elements that blow air using fans, spray mist or water, and heat or cool surfaces or spaces using various heating and cooling technologies. Physical instability can also be partially represented to the visual and audio senses by the VR scene and to the rest of the body by vibration platforms, pads, moveable and shifting floors, rubble or other surfaces, and haptics provided by a haptic feedback vest.
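As an illustrative sketch, a controller could map the VR scene’s weather state to physical actuator commands like so; the actuator names and command interface are assumptions:

```python
# Minimal sketch mapping VR scene weather state to physical actuator commands.
# Scene fields, actuator names, and intensity scaling are illustrative only.

def weather_to_actuators(scene: dict) -> list:
    """Translate the current VR weather state into (actuator, intensity) commands."""
    commands = []
    if scene.get("wind_mps", 0) > 0:
        commands.append(("fan_bank", min(scene["wind_mps"] / 20.0, 1.0)))
    if scene.get("rain", False):
        commands.append(("mist_nozzles", 0.6))
    if scene.get("ambient_c", 20) > 30:
        commands.append(("radiant_heaters", min((scene["ambient_c"] - 30) / 20.0, 1.0)))
    return commands

print(weather_to_actuators({"wind_mps": 8, "rain": True, "ambient_c": 42}))
# -> [('fan_bank', 0.4), ('mist_nozzles', 0.6), ('radiant_heaters', 0.6)]
```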
Realistic Elements - Physical
To make things simpler, reusable IoT-like wireless sensor modules can be created and used across many different instruments and infrastructure objects, avoiding custom builds for each new item that needs tracking. These modules can contain additional sensors which help with training exercise analysis. For example, medical instruments do not normally include accelerometers, yet mockups of those instruments can contain these wireless sensor nodes, with a 9-DoF inertial measurement unit (IMU) as well as the RF tracking electronics, to know where the instrument was at all times during a training exercise and what G-forces it experienced (was it used, where was it placed, was it dropped?). Simple proximity and touch sensors can be integrated to detect how things are touched and handled during training, potentially providing rich data for analysis of how tools, instruments, and devices are operated.
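A minimal sketch of the report such a sensor node might emit, including a crude drop-detection heuristic for exercise analysis, is shown below; the schema, field names, and threshold are assumptions:

```python
from dataclasses import dataclass, field
import time

# Sketch of a reusable wireless sensor node's report: one packet carries the
# node's unique ID, a 9-axis IMU sample, and simple touch/proximity states.

@dataclass
class SensorNodeReport:
    node_id: str                        # unique ID linked to an object or participant
    timestamp: float = field(default_factory=time.time)
    accel_g: tuple = (0.0, 0.0, 1.0)    # 3-axis accelerometer (g)
    gyro_dps: tuple = (0.0, 0.0, 0.0)   # 3-axis gyroscope (deg/s)
    mag_ut: tuple = (0.0, 0.0, 0.0)     # 3-axis magnetometer (uT)
    touched: bool = False
    proximity_m: float = -1.0           # -1.0 when no proximity reading

    def was_dropped(self, threshold_g: float = 4.0) -> bool:
        """Crude drop/impact heuristic: any axis exceeding the G threshold."""
        return any(abs(a) > threshold_g for a in self.accel_g)

report = SensorNodeReport("mock_defib_03", accel_g=(0.1, 5.2, 0.9), touched=True)
print(report.was_dropped())  # -> True: flag a possible drop for post-session review
```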

Beyond any instruments, tools, or devices interacted with during training, the infrastructure can also include digital sensor elements, such as proximity and touch pads, to know where and when, and optionally how hard, things were touched during a training exercise. For example, in a fire rescue scenario, each door in the building should be opened to check the room for people. The doors or door knobs can have sensors which detect that the door surface and knob were first checked for excessive heat, and that the door was then opened in the appropriate manner according to the test scenario for that room.
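A small sketch of how the logged door-sensor events could be checked against the required procedure follows; the event names and log shape are assumptions:

```python
# Sketch of validating the door-opening procedure from logged sensor events:
# the door surface and knob must both be checked for heat before the door is
# opened, per the test scenario for that room.

REQUIRED_BEFORE_OPEN = ["door_surface_touched", "door_knob_touched"]

def door_procedure_ok(events: list, door_id: str) -> bool:
    """True if both heat checks were logged before the door-open event."""
    seen = []
    for e in (e for e in events if e["door"] == door_id):
        if e["event"] == "door_opened":
            return all(req in seen for req in REQUIRED_BEFORE_OPEN)
        seen.append(e["event"])
    return False  # door never opened

events = [
    {"door": "room_2", "event": "door_surface_touched"},
    {"door": "room_2", "event": "door_knob_touched"},
    {"door": "room_2", "event": "door_opened"},
]
print(door_procedure_ok(events, "room_2"))  # -> True
```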

In addition, beyond video and audio fed digitally to the participant through the VR HMDs, and haptic feedback provided by wearable and held haptic interfaces, the environment itself contains modelled thermal, moisture, and olfactory elements that participants perceive directly through their own senses, like the nose and skin. These elements can include smoke and smells such as fuel, burning wood, plastic, carpet, upholstery, or hair. Thermal activity could be modelled by air blown from heaters or coolers, and by heated or cooled water mimicking rain, fire-hose spray, or surface water; surfaces can be heated or cooled electrically to model active fire and heat sources, and those sources being extinguished.

More sophisticated and expensive robot-arm-mounted machinery could be incorporated to mimic driving, flying, piloting marine vessels, spaceflight, and more [https://venturebeat.com/2017/01/17/mmone-unveils-commercial-version-of-insane-giant-simulator-arm-for-wild-vr-rides/] [https://thenextweb.com/virtual-reality/2017/03/12/these-terrifying-motion-machines-are-the-vr-experience-youve-always-wanted/#.tnw_x5ajJ8Qz].
Technology Testing
In the gaming scenarios this technology is initially being developed for, there may be little desire or need to move items or objects in the environment beyond the participants themselves and any peripherals they hold. For first responder training, however, moving items, objects, bodies, vehicles, etc., may all be important elements of the training. More items will therefore need to be tracked with the RF tracking technology, and optionally fitted with the additional sensing modules mentioned above, so that detailed orientation and forces can be captured and everything modelled accurately in real time in the VR scene.

For the gaming scenarios this technology is initially being developed for, a few basic peripheral devices for holding and manipulating, such as a gun, a torch, etc., are likely enough. For first responder training, a much broader selection of instruments, tools, and devices is likely to be useful for integration into the exercises. For this reason it likely makes sense to build the common wireless module mentioned above, supporting a variety of input controls (buttons, switches, dials, triggers, etc.), the RF tracking module, and a standard set of sensors for collecting training data, such as a 9-axis IMU and touch/force sensors. Each module would report a unique ID for data tracking, linked to a training-run participant at session setup if it is something used by one individual.
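A minimal sketch of linking module IDs to participants at session setup, so each module’s data stream is correctly attributed in the session record, is shown here; the mapping shape is an assumption:

```python
# Sketch of session-setup registration: module unique IDs are mapped to the
# participant (or left shared) so later data streams can be attributed.

session_registry = {}

def register_module(module_id: str, participant: str) -> None:
    session_registry[module_id] = participant

def attribute(module_id: str) -> str:
    return session_registry.get(module_id, "unassigned shared equipment")

register_module("glove_left_12", "trainee_A")
register_module("mock_jaws_of_life_02", "trainee_B")
print(attribute("glove_left_12"))  # -> trainee_A
```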

The system can test physical equipment, and machine automation or robotics equipment at any level of autonomy. It can test individual and team performance on tasks and full scenarios. It can incorporate physical elements where the tactile aspect is important, virtual elements when no tactile experience of physical objects or infrastructure is needed, and mapped, registered, tracked, and synchronized combinations of physical and virtual elements when both tactile (+ haptic) and virtual are required and need to match. The system can create a time-synchronized record of every audio-visual frame, every sensor data point, and every communication, providing an extremely rich set of data for session analysis and post-session performance review. Analysis software will need to be written to support post-session performance review.
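As a rough sketch, the time-synchronized record could be an append-only log where every stream shares one session clock; the JSON-lines schema below is an assumption:

```python
import json
import time

# Sketch of a time-synchronized session record: frame references, sensor data,
# and communications are appended with one shared clock so post-session
# analysis can replay all streams in lockstep.

class SessionRecord:
    def __init__(self, path: str):
        self._file = open(path, "a")
        self._t0 = time.monotonic()  # one clock for all streams

    def log(self, stream: str, payload: dict) -> None:
        entry = {"t": time.monotonic() - self._t0, "stream": stream, **payload}
        self._file.write(json.dumps(entry) + "\n")  # append-only JSON lines
        self._file.flush()

record = SessionRecord("session_001.jsonl")
record.log("sensor", {"node_id": "mock_defib_03", "touched": True})
record.log("comms", {"from": "trainee_A", "audio_ref": "clip_0042.ogg"})
record.log("video", {"participant": "trainee_A", "frame_ref": "pov_A_000917.png"})
```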
Purchase Order List*
All the proposed technology is currently available or achievable with further engineering and integration.
* Access to a merged VR/physical exploration platform, such as The VOID’s Rapture hardware series and Unity VR software = unknown cost
* Wireless IoT-like sensor modules, which extend data gathering to moveable objects and to mockup or real equipment, can be created for roughly $100 per node or less.
* Quality 3D scanner equipment can be expensive, upwards of $10,000-$25,000 [https://www.creaform3d.com/en/metrology-solutions/portable-3d-scanners] [https://www.artec3d.com/]
* Software for training session data analysis for session performance review will need to be designed, written and tested.
* Every scene and scenario will have a variable cost to produce and duplicate. Physical elements will cost much the same for every duplicate. Software elements have almost zero cost for duplicates, once created.
* Optionally, software to produce the AR setup guidance based on the scanned 3D model will need to be written.
First Responder Scenes
The design of the system is general-purpose enough to support all the first responder types, scenes, and scenarios envisioned in the challenge description. Variations are handled in the design and implementation of the VR training scenes and the associated physical training scenes, including which objects will be moveable or interfaced with (and therefore need to be tracked and have data collected from them), what equipment needs to be incorporated, modelled, and mocked up, where sensors and actuation elements (fans, heaters, etc.) need to be placed, and the like. Real or manikin victims can be incorporated. Vehicles can be incorporated. Tools, instruments, and other devices used in the scenarios can all be incorporated. Many indoor and outdoor scenes and scenarios can be modelled, with some limitations in size, area, and vertical height depending on the available space in the test building.
Measurements
Audio-visual, gestural, and textual communications can all be shared and saved to the time-synchronized session record. Tracking and sensor data from human and intelligent-machine training participants, and from any objects or equipment manipulated or operated, can be used by appropriate parts of the system during a training session and recorded for post-session analysis. Post-session analysis can involve playback of simplified visualizations of the time sequence of all events, actions, data, communications, etc., for review of individual and combined team performance.

Post-session analysis will be important in training exercises. Training analysis could involve a replay of the audio-visual VR scene from each participant’s PoV on large 2D display screens (3D immersion may not be needed for the post-analysis phase), with all the training metadata represented by time-synchronized visualizations beside the scene display, so that performance reviews can be conducted and training runs compared.
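A minimal sketch of the replay side of such analysis software, reading back the assumed JSON-lines session record (from the earlier sketch) in time order with optional fast-forward:

```python
import json

# Sketch of a post-session replay loop: read the time-synchronized record back
# and emit each event at its offset, so PoV video and metadata visualizations
# can be rendered side by side during performance review.

def replay(path: str, speed: float = 4.0):
    """Yield (wait_seconds, event) in time order; `speed` > 1 fast-forwards."""
    with open(path) as f:
        events = sorted((json.loads(line) for line in f), key=lambda e: e["t"])
    clock = 0.0
    for event in events:
        wait = (event["t"] - clock) / speed
        clock = event["t"]
        yield wait, event  # caller sleeps `wait`, then renders the event

for wait, event in replay("session_001.jsonl"):
    print(f"+{event['t']:8.3f}s  {event['stream']:>6}: {event}")
```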

The VR HMDs have headphones and microphones to support voice communications between training participants in the scene and those acting as remote command. The Unity 3D game engine used for the immersive VR scenes supports plugin SDKs, including one for voice and video communication in AR/VR apps [https://venturebeat.com/2017/04/16/voximplant-makes-it-easy-to-put-voice-and-video-communication-in-arvr-apps/]. Such audio communications can all be recorded into the time-synchronized session record for post analysis and review.

Cameras can be one of the instruments provided as a physical mockup. Participants will see a camera in the VR scene and be able to pick up and ‘operate’ a physical mockup of the camera. The participant can physically point and take pictures with the mockup camera, which they’ll see modelled in the VR scene; the images will be of the VR scene, not the physical scene. These snapshots can also become part of the session record and be reviewed in post analysis, since capturing images of a first responder scene can be a training scenario task.

Live video chat can be modelled in VR, with real video of participants acting as remote command, and VR scene avatar video of the on-scene participants.

Touch computers used by participants for communications, data sharing, etc., can be real computers with position and orientation tracking, so that the VR representation synchronizes with the real device and its display is duplicated on the virtual display, making everything match up.

The entire list of example metrics and data in the challenge description should be collectable and measurable by the proposed system.
Supporting Documents - Visual Aids
NISTVR1stRes.pdf
