Introduction
Title
Virtual Reality Scenario Builder
Short description
Allows users to create scenario layouts around common environments, deployable on an omni-treadmill and the HTC Vive.
About you
Bio (optional)
I am a current PhD student at the University of Arkansas at Little Rock working in the Emerging Analytics Lab and doing research in applied virtual reality, so this challenge was right up my alley. I hold an undergraduate degree in game simulation arts and sciences and electronic arts from Rensselaer Polytechnic Institute. I also worked briefly for the Army at the Judge Advocate General's Legal Center and School, providing e-learning solutions and simulations for Army lawyers deployed or in school for training. Unrelated to my experience, I'm an avid runner; I did cross country during undergrad and my team made it to nationals my senior year! I'm also a huge fashion enthusiast/addict.
Technical Details
Solution overview
This solution concentrates on four things: availability, scalability, ease of implementation, and reusability of the end product.

To keep the solution obtainable, it concentrates on high-end but consumer-grade virtual reality equipment, providing the best product with a realistic implementation. This means using Unity for building the application, the HTC Vive and an omni-treadmill for immersion, and a basic desktop computer for running the application.

Scalability rests on the fact that the end application will support multiple users, with the Photon Cloud plugin for Unity providing the networking. Recognizing that the equipment required for that kind of installation grows expensive quickly with each added user, the design also allows procedural control over the number of users intended in any one scenario. It is also worth noting that even though this implementation concentrates on the HTC Vive and omni-treadmill for greater immersion, the same design can be applied to a Unity application built for Google Cardboard or the Samsung Gear, excluding the omni-treadmill in favor of a basic Bluetooth controller, though the immersion and realism of the art design would fall sharply with that change.

Ease of implementation is the concentration of the final product, which would consist of an application with a GUI for a controller to build the scenario and for a user to enter the scenario after it is built, making the actual scenario generation clean and simple for users to understand.

Finally, reusability is highly important here, as it is for most simulations, so this solution proposes procedurally generated scenarios with a GUI exposing the most relevant metrics to the scenario builder: the ability to assign weather, control lighting, build a scene from available assets, and assign priority and response times to object targets placed in the scene. To increase immersion and realism, certain aspects remain available to the scene-builder even at run-time, including lighting, weather, certain hazards' damage and spread, and response times.
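To make the reusability idea concrete, the sketch below shows one way these scenario parameters might be grouped in a Unity C# data class. Everything here, from the class names to the individual fields, is a hypothetical illustration drawn from the description above rather than a finalized design.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical container for the scenario parameters described above.
// Serializable so a builder GUI could expose these fields directly.
[Serializable]
public class ScenarioConfig
{
    public enum WeatherType { Clear, Rain, Snow, Fog }

    public WeatherType weather = WeatherType.Clear;
    [Range(0f, 2f)] public float brightness = 1f; // fed to an image-effect shader
    [Range(0f, 2f)] public float contrast = 1f;
    public int maxUsers = 10;                     // networked room capacity

    public List<TargetEntry> targets = new List<TargetEntry>();
}

// One entry per object the builder tags as a target or hazard.
[Serializable]
public class TargetEntry
{
    public string objectTag;        // scene object tagged as a target or hazard
    public int priority;            // set by the scenario builder
    public float responseTimeLimit; // seconds; 0 = no limit
    public float damagePerSecond;   // used when the object is a hazard
}
```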
DIY Recipe Kit
• The concentration here is scalability: the project is designed to support about 10 users using the Photon Cloud room system in Unity for networking. That would require extensive VR equipment for each user, and given the cost of such installations it is practical to keep the solution operable with less equipment. A variation of this design could even exclude the omni-treadmills, though it would lack the immersion of physical movement controls.
• Materials list:
o Equipment:
 Omni-treadmill x users
 HTC Vive x users
 PC computers x (users + controller)
 Network connection
o Software:
 Unity
 Photon Plugin for Unity
o Assets:
 Scripts:
• Game controller script: maintains game logic and reconciles the metrics generated by multiple users
• Network controller script: manages connection of different users to the network
• Scenario generation:
o Metrics script that keeps track of objects tagged as targets and hazards in a scene and assigns each a priority or risk based on the scenario creator's input, plus a response-time limit or a damage count where applicable (a minimal sketch appears after this list)
o Weather script that allows the creator to assign weather to the environment
o Environment script that allows the user to choose the base environment (city, forest, lake, mountain); this assigns the appropriate asset list to the scene, and then assigns audio as appropriate to that environment
o Lighting script: access to an image-effect shader that can adjust brightness and contrast in the scene
o Building controls:
 Drag and drop script to allow object placement in scene
 Clickable instantiation to add assets to scene
 Spawn-point controller for each user in the scene (up to ten), listing spawn points 1-10 for where to populate
• Real-time controls:
o Script GUI for weather, still exposed to the controller
o Script GUI for lighting, still exposed to the controller
o Script GUI to adjust tagged objects' response times, exposed to the controller
o Script for real-time communication, including networked audio and image sending, using the Photon plugin for Unity
• User scripts:
o Player metrics script: maintains info on damage done to the player by hazards (visible through a health-bar GUI), as well as the player's response time to each object target and whether they responded before time ran out
o Communication script to send and receive audio to and from the controller or other players in the scalable environment, again using Photon in Unity
o Movement scripts
 Models:
• Hazards:
o Fires
o Falling rubble
o Deep water
o Electric wires
o Glass
o Animals
o Etc.
• Environmental assets:
o Forest assets
o City assets
o Mountain assets
o Etc.
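As a rough illustration of the metrics script referenced in the list above, the sketch below tracks tagged targets, their priorities, and response-time limits in Unity C#. The class and method names are assumptions, not a finalized API; real targets would call MarkResponded() from their own interaction scripts.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch of the metrics script from the materials list: targets
// register themselves when the scenario starts and report when handled.
public class ScenarioMetrics : MonoBehaviour
{
    private class TargetRecord
    {
        public int priority;
        public float timeLimit;     // seconds allowed; 0 = unlimited
        public float startTime;
        public float? responseTime; // null until the user responds
    }

    private readonly Dictionary<string, TargetRecord> records =
        new Dictionary<string, TargetRecord>();

    public void RegisterTarget(string id, int priority, float timeLimit)
    {
        records[id] = new TargetRecord
        {
            priority = priority,
            timeLimit = timeLimit,
            startTime = Time.time
        };
    }

    // Called by a target's interaction script when a user deals with it.
    public void MarkResponded(string id)
    {
        if (records.TryGetValue(id, out TargetRecord r) && r.responseTime == null)
            r.responseTime = Time.time - r.startTime;
    }

    // Simple scoring rule: priority-weighted credit for targets handled in time.
    public int ComputeScore()
    {
        int score = 0;
        foreach (TargetRecord r in records.Values)
            if (r.responseTime.HasValue &&
                (r.timeLimit <= 0f || r.responseTime.Value <= r.timeLimit))
                score += r.priority;
        return score;
    }
}
```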

After building the product in Unity, the end package will be a game that lets a participant enter either as the “controller,” who builds the scene, or as a “user.” Users will use the omni-treadmill and HTC Vive to enter the virtual space after the scenario is built, while the controller operates the scenario from a desktop PC.
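A minimal sketch of how the network-controller script might connect clients into a shared room follows. The connection and room callbacks are standard Photon PUN 2 API; the room name, the ten-player cap (from the design above), and the controller/user flag are illustrative assumptions.

```csharp
using Photon.Pun;
using Photon.Realtime;
using UnityEngine;

// Sketch of the network controller: connects to Photon and joins (or
// creates) a shared scenario room capped at ten users, per the design above.
public class NetworkController : MonoBehaviourPunCallbacks
{
    public bool isSceneController; // true when this client builds/runs the scenario

    private void Start()
    {
        PhotonNetwork.ConnectUsingSettings(); // uses the PhotonServerSettings app id
    }

    public override void OnConnectedToMaster()
    {
        RoomOptions options = new RoomOptions { MaxPlayers = 10 };
        PhotonNetwork.JoinOrCreateRoom("scenario-room", options, TypedLobby.Default);
    }

    public override void OnJoinedRoom()
    {
        // Hypothetical spawn hook: users get a VR rig, the controller a top-down camera.
        Debug.Log(isSceneController
            ? "Joined as scenario controller"
            : "Joined as user " + PhotonNetwork.LocalPlayer.ActorNumber);
    }
}
```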
Upon entering the space, each user will have metrics generated on them based on their speed in responding to targets and on interference from hazards. If there are multiple users, the controller will be able to set a parameter to generate a collective score or individual scores.
The controller will be able to change lighting, response times, and weather on the fly for the scene, as well as communicate with the users about targets that are not necessarily in their field of view. The controller maintains a top-down view of the entire scene and will also have access to certain extra parameters on certain hazards, e.g. the ability to increase or decrease the spread of fire or falling rubble.
Safety
This is an entirely safe solution; the implementation is entirely virtual. The worst thing that could happen to a user is falling while getting in and out of the omni-treadmill. (Which is actually more of a hazard than you might think.)
Realistic Elements - Virtual
Realism in the virtual environments concentrates on the three forms of feedback this implementation can provide for the user: audio, visual, and movement. Most of the audio will be controlled when the scene is built, outside of the front-end user controls. Objects will have sounds associated with them, as will certain environments overall. While audio will not be a proposed metric in this implementation, it will be kept as close to a realistic environment as possible, and all environmental audio implementation will be back-end work not exposed to end-users. There will also be networked audio between all users in the application, using the Photon chat plugin. This will allow users to talk to each other about the scenario as they would outside a virtual environment, and it will allow the scene-builder to communicate extra information about parts of the scene that users do not see in their field of view.
The visual element of realism will revolve around the artwork and modeling, which is why the HTC Vive implementation is highly recommended over the Samsung Gear or Cardboard: it supports higher-quality models and shaders on a computationally more powerful desktop. Lighting and shaders will implement realistic lighting design, with a few image-effect shader parameters exposed to the scene-builder even at run-time to adjust overall scene brightness and contrast. Higher-poly models with high-pixel-density textures will also add to the visual realism of the scene. The scene objects that can be added will be built as prefabs in the actual Unity build, including animations, colliders, audio, and controllers as makes sense for the particular object. For example, doors will have triggers and animations to open when a player steps into them, windows will be destructible, etc. Again, though, all this visual realism will be handled when the application is built in the game engine; the actual GUI exposed to the scene-builder will be a simple drag-and-drop of scene objects.
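As one concrete example of the prefab behavior described above, a door prefab could carry a small component like the following; the "Open" animator trigger and the "Player" tag are placeholder names, not part of the original design.

```csharp
using UnityEngine;

// Example prefab behavior for the door case mentioned above: a trigger
// collider around the doorway plays the open animation when a player enters.
[RequireComponent(typeof(Animator))]
public class DoorTrigger : MonoBehaviour
{
    private Animator animator;

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            animator.SetTrigger("Open"); // "Open" is a placeholder trigger name
    }
}
```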
Finally, the last virtual form of realism is movement. In the virtual space, the feel of movement is the most important part of this implementation for realism. Particularly on the omni-treadmill, many movement scripts do not handle player velocity well when turning, which results in a reduced frame-rate in the headset that can make users uncomfortable. The solution is very precise movement controls that account for velocity, and, failing the ability to improve turning frame-rate, movement controls that may be less realistic but feel less invasive and more comfortable to the user.
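A minimal sketch of such a movement script appears below, assuming for simplicity that the treadmill's walking input is exposed through Unity's standard input axes (a simplification of real omni-treadmill integration). The key idea is smoothing the yaw change rather than applying it instantly.

```csharp
using UnityEngine;

// Sketch of a smoothed movement script: walking input drives a
// CharacterController, and turn input is damped to avoid jarring yaw jumps.
[RequireComponent(typeof(CharacterController))]
public class SmoothedMovement : MonoBehaviour
{
    public float moveSpeed = 2f;        // walking speed in m/s
    public float turnSmoothTime = 0.2f; // higher = gentler, more comfortable turns

    private CharacterController controller;
    private float turnVelocity; // internal state for SmoothDampAngle

    private void Awake()
    {
        controller = GetComponent<CharacterController>();
    }

    private void Update()
    {
        float forward = Input.GetAxis("Vertical");
        float targetYaw = transform.eulerAngles.y + Input.GetAxis("Horizontal") * 90f;

        // Smooth the yaw change instead of applying it instantly, so sharp
        // turns do not produce the jarring velocity jumps described above.
        float yaw = Mathf.SmoothDampAngle(
            transform.eulerAngles.y, targetYaw, ref turnVelocity, turnSmoothTime);
        transform.rotation = Quaternion.Euler(0f, yaw, 0f);

        controller.SimpleMove(transform.forward * forward * moveSpeed);
    }
}
```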
Realistic Elements - Physical
The omni-treadmill and the HTC Vive's superior hardware quality (over the Oculus, yes, I am biased) are the main concentration for the physically realistic elements. Compared to competitors like the Cyberith, the omni-treadmill is far superior and feels much more natural. I have used both: the Cyberith requires an odd foot-sliding motion to operate, while the omni supports very natural movement, the limitation being running. The Vive, driven by a desktop, provides greater computing power than the Cardboard or Samsung Gear, allowing more realistic art assets, real-time lighting, and shaders, and, in my experience at least, it has done a better job of reducing the nausea some people used to experience with virtual reality headsets.
I would like to add that I'm sure other projects will concentrate on haptic responses, which I did consider for my implementation but ultimately decided against due to the limitations of most haptic options. Without designing hardware for a haptic response tailored specifically to emergency responders, the market is relatively limited to constraint controls. A more realistic haptic response for first responders would include shocks from electrical wires or pinches from glass and other hazards, but this becomes a question of safety. So, for that reason, I chose to leave haptic feedback out of this solution.
Technology Testing
Users will be provided with a HUD that lets them view information provided by the scene-builder or other users, including images sent across the network, accessible with buttons on the HTC Vive controller. This could also integrate smart alerts or other messaging between users, as well as basic environment briefs and information on weather, temperature, and maps.
Terrain generation will support scene-builders providing a height-map for an area, so that real terrain information can be used in scenario generation, though the rest of the scene would have to be built by hand.
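A sketch of that height-map support might look like the following, assuming the scene-builder supplies a readable grayscale texture sized to the terrain; Unity's TerrainData.SetHeights call does the actual work.

```csharp
using UnityEngine;

// Sketch of heightmap-based terrain generation: reads a grayscale texture
// supplied by the scene-builder and writes it into a Unity Terrain.
public class HeightmapTerrainLoader : MonoBehaviour
{
    public Terrain terrain;
    public Texture2D heightmap; // grayscale; must have Read/Write enabled

    private void Start()
    {
        TerrainData data = terrain.terrainData;
        int res = data.heightmapResolution;
        float[,] heights = new float[res, res];

        for (int y = 0; y < res; y++)
            for (int x = 0; x < res; x++)
                // Sample the texture in normalized coordinates; the grayscale
                // value (0..1) becomes the normalized terrain height.
                heights[y, x] = heightmap.GetPixelBilinear(
                    (float)x / res, (float)y / res).grayscale;

        data.SetHeights(0, 0, heights);
    }
}
```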
Most other technology testing would have to be considered and implemented before the final application is built in order to provide data surrounding that implementation, but since the solution uses the Unity game engine, changing the source code would be easy enough. Most other technologies listed would also have to be replicated in the virtual space or would require extra equipment.
For example, gesture recognition could be implemented, but it would require mounting a Leap Motion to the headset and then reconstructing users' movements virtually.
Biofeedback like heart rate could be monitored outside of the environment.
Purchase Order List*
Omni-treadmill: $300-$600 x users
HTC Vive: $700 x users
PC computers: $500-$800 x (users + controller)
Unity Pro: $125/month
Photon Plugin for Unity (PLUS): $95
First Responder Scenes
The idea is that this application provides procedural generation, allowing a user to build the scenario and assign response times to objects. The concentration will be on a limited number of the most common assignable environments, such as city, forest, mountain, and lake, all with parameters available to change weather, lighting, and temperature. From there, the user can build out the scene by dragging and dropping object targets with response parameters, and by assigning damage and spread to hazards, which affects how they spread in real time while the scenario is running. This gives the user complete control over the scene and the placement of hazards and response targets, so that they can build the scenario around a particular situation they would like to test and assign parameters to grade the user accordingly.
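A minimal sketch of the drag-and-drop placement described above could look like this, using a simple raycast from the builder's camera; the single-prefab setup is a stand-in for a real asset palette.

```csharp
using UnityEngine;

// Sketch of drag-and-drop placement for the scenario builder: while the
// mouse button is held, the selected prefab follows the surface under the cursor.
public class DragDropPlacer : MonoBehaviour
{
    public GameObject prefab;  // asset chosen from the builder palette
    private GameObject current;

    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
            current = Instantiate(prefab);

        if (current != null && Input.GetMouseButton(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
                current.transform.position = hit.point; // snap to the surface under the cursor
        }

        if (Input.GetMouseButtonUp(0))
            current = null; // drop the object where it sits
    }
}
```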
Measurements
The main solution for collecting data and metrics is a script that evaluates the user's response to targets: how quickly they responded and in which order, against the parameters set by the scene-builder before the scenario begins. It will also calculate any damage received from hazards, shown in a health-bar GUI, which can result in failure of the scenario. Certain parameters can also be tagged as necessary for data collection, in which case users will be graded on providing the controller with certain images or strings of text sent across the network, which the controller can approve or reject in real time and which will be calculated into the final score. The scene-builder will also be able to set up slightly different metrics for group scenarios and can adjust response times on the fly to see their effect on users' overall end scores.
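To complement the response-time sketch earlier, the following is a minimal illustration of the damage and health-bar portion of these measurements. The class name and damage hook are assumptions, and the on-screen GUI here is a stand-in for a proper in-headset HUD.

```csharp
using UnityEngine;

// Sketch of the player-metrics health tracking described above: hazard
// scripts apply damage, and a simple GUI bar reflects remaining health.
public class PlayerMetrics : MonoBehaviour
{
    public float maxHealth = 100f;
    private float health;

    private void Awake()
    {
        health = maxHealth;
    }

    // Called by hazard scripts (fire, rubble, etc.) while the player overlaps them.
    public void ApplyDamage(float amount)
    {
        health = Mathf.Max(0f, health - amount);
        if (health <= 0f)
            Debug.Log("Scenario failed: health depleted");
    }

    private void OnGUI()
    {
        // Minimal on-screen bar; a production HUD would render inside the headset.
        GUI.Box(new Rect(10, 10, 200 * (health / maxHealth), 20),
                string.Format("Health: {0:0}", health));
    }
}
```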
Supporting Documents - Visual Aids
gui.pdf
