Active Virtual Reality for Training
short description
First responders train and test skills for dangerous tasks in fully immersive wireless VR, with untethered motion and all real equipment.
About you
Bio (optional)
Christopher Chambers co-founded Serious Simulations in 2014, where he serves as CEO. The company is a Veteran-Owned Small Business startup specializing in developing immersive training experiences that allow unhindered human motion, real equipment and weapons, and wireless Virtual Reality.

Chris led a privately owned simulation company in Houston, Texas (Laser Shot, Inc.), accelerating growth in revenue, profit, and new product offerings while developing new business with federal agencies and defense contractors. The National Defense Industrial Association named the company "2010 Small Business Success Story" for most outstanding impact in support of U.S. warfighters.

Before joining Laser Shot, Mr. Chambers was responsible for leading the US Army Game Project (commercially brand-named America's Army®). He launched the game for the Army at E3 in 2002 and led a team of 60 full-time PC game developers and dozens of other skilled contractors who produced 21 PC releases (with a following of 8 million users), two Xbox titles, a mobile phone game, an arcade game, and several fielded Army training applications. The Army-produced game is often cited as the world's first major "serious game". Awards included: finalist in the Harvard University Innovations in Government Award for 2006; Best of Show at the Electronic Entertainment Exposition (E3) in 2002; MI 6's Best Advertorial award in 2005; and the Digital Marketing Society's Best Advergame in 2006.

Mr. Chambers holds degrees from West Point (BS) and Wharton (MBA). He was an officer in the US Army, serving in a variety of positions in staff and command in the U.S., Egypt, Germany, and Panama, and the 18th Airborne Corps in Operation Enduring Freedom (Afghanistan).

Chris hails from Massachusetts, his wife Sondra from Montana. With their daughter Harlow, and son Colton, they reside in Oviedo, Florida. They enjoy raising their children, travelling, outdoor activities, hunting, and the shooting sports.
Technical Details
Solution overview
Our proposed solution is a planned outgrowth of our military VR immersive simulator, which trains warriors individually and in small groups to move, shoot, and communicate as effective teams, day or night. Realism is the key design criterion for our system from both technical and experiential perspectives. We have pioneered the use of wireless techniques for capturing full human motion, monitoring equipment data, and delivering uncompressed video and audio to individuals, and we hold a portfolio of five patents and patents pending that underpins our simulator development. The system for first responders will capitalize on this existing technology base and adapt weapons, flame-retarding devices, and other equipment for highly immersive training and testing. Effectiveness is enhanced by moving the trainee's state of mind from simulation to reality (the "suspension of disbelief"): complete human motion without wires or instruments, use of real equipment in the simulator, the ability to see one's own hands and feet, and interactivity with highly realistic, physically correct immersive VR environments. Superb audio, microphone communications, and tactile feedback devices round out the realistic elements of the ready2train experience.

Trainees are tracked with sub-millimeter accuracy, without wires, using optical tracking (wireless IMUs are also possible). Body movements are transformed from tracking data into avatar movements in the virtual reality environment in real time. Graphics are output to an extremely wide field-of-view Head Mounted Display (enabling human peripheral vision) through a unique wireless VR system (patent pending by Serious Simulations). Real equipment is tracked, and mechanical manipulations are captured and wirelessly transmitted to the instructor station for real-time monitoring. All data and graphics of the individuals in the virtual situation are captured for later debriefings and lessons learned.
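As a simplified illustration of the tracking-to-avatar step described above, the sketch below maps one frame of reflective-marker positions to a segment pose and copies it onto an avatar. All names and numbers are hypothetical; the production pipeline solves full 6-DOF poses from the asymmetrical marker patterns rather than this 2D toy version.

```python
import math

def segment_pose(markers):
    """Estimate a tracked body segment's position and yaw from its
    reflective markers. Hypothetical 2D sketch; the real pipeline
    solves a full 6-DOF pose from the asymmetrical marker pattern.

    markers: list of (x, y) marker positions, meters, tracker frame.
    """
    cx = sum(m[0] for m in markers) / len(markers)
    cy = sum(m[1] for m in markers) / len(markers)
    (x0, y0), (x1, y1) = markers[0], markers[1]
    yaw = math.atan2(y1 - y0, x1 - x0)   # heading of the marker pattern
    return (cx, cy), yaw

def drive_avatar(avatar, markers):
    """Copy the tracked pose onto the avatar segment for this frame."""
    (x, y), yaw = segment_pose(markers)
    avatar["position"] = (x, y)
    avatar["yaw"] = yaw
    return avatar

# One frame of data from a three-marker chest plate (illustrative).
avatar = {"position": (0.0, 0.0), "yaw": 0.0}
drive_avatar(avatar, [(1.0, 2.0), (1.2, 2.0), (1.1, 2.2)])
```

In the real system this transform runs once per tracked segment per frame, so the avatar follows the trainee's full body motion in real time.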

Availability - All of the above technology exists now but has not yet been adapted to every type of first responder. Our system supports testing and training for Law Enforcement, Security, and SWAT in its current configuration. Serious Simulations would like to put resources toward moving forward rapidly with a partner company that currently has physics-correct firefighting software available to integrate into our training/testing package. In addition, although we have a robust solution for the physics-correct law enforcement virtual environment, we would like to develop a library of specific scenes and test procedures for target customer needs. Several interactive software options exist commercially for Law Enforcement, Fire Fighting, and some Medical training in the form of desktop/laptop simulators, and these can also be readily transformed for use in an interactive 3D VR simulation featuring live humans in simulators.

Overall, in terms of achieving realism in a fully interactive, virtual reality-based system, our untethered ready2train system, which enables real motion, real equipment use, real sound, accurate 3D models, rich graphics, real physics, and real tactile feedback, is perhaps the most realistic system in the world.
DIY Recipe Kit
Since the products in our ready2train simulator line were targeted for military use anywhere, we developed a completely portable, lightweight system that is easy to set up and use. It combines many existing technologies in unique ways but is easily replicable by other entities. The basic installation is achieved by setting up our portable tracking area of pre-configured optical trackers on telescopic "towers". (Alternate tracking via wireless IMUs does not need towers.) Each tracking area is variable in size from 20 x 20 feet to many times that size (additional tracking cameras required). Each individual's tracking data drives the movements of his avatar in the simulation. As in real life, the trainee's view is a first-person perspective, and all others see him in a third-person perspective. The tracking towers are wired to a host computer, which is networked with all the other tracking areas and the instructor station to run the overall simulation and capture data for After Action Reviews.

All computers are currently arranged in deployable rugged cases, with all connections and software pre-configured. Setup time is just a few minutes. Computers can alternatively be arranged as the customer requires for their classroom or training area.

Trainees first suit up in their real duty gear. Six lightweight plastic plates with asymmetrical reflective marker patterns are then attached to the trainee with Velcro straps or similar fasteners. Similarly, a few markers are added to real equipment, or to simulated equipment if preferred. Serious Simulations patented the ability to monitor mechanical data from items/tools such as weapons or fire hoses and wirelessly transmit their data to the simulation.

The trainee then mounts the wireless Head Mounted Display. Serious Simulations recently developed its own "Pro Grade" HMD and added a patent-pending Wireless VR package to it. Our HMD allows untethered movement at the same frame rate and higher resolution compared to Oculus- or Vive-type wired HMDs. Our use of two high-resolution displays in landscape orientation enables a remarkable 130-degree view in the virtual world, a field of view that exceeds consumer-grade HMDs. This groundbreaking wireless field of view reaches into a human's mid-peripheral vision and is essential for serious training.

The setup time for the training system is about 30 minutes for the individual tracking systems and instructor station. The trainee needs only 5 minutes or less to mount the individual gear, and individual calibration takes about 10 seconds.

Repeatability - Resetting scenarios takes less than 30 seconds, depending on the software package used.
Safety - Completely safe; there are no safety concerns in operating the system.

VR "sickness" is not an issue in our system due to our special HMD design, which allows trainees to see the floor and their feet, keeping their bodies grounded in the real world while operating in the virtual world. Also, our patent-pending wireless VR system reduces latency to less than 17 milliseconds, eliminating the sickness issues that are more common in slower simulators.
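To put that latency figure in context: a display refreshing at 60 Hz presents a new frame roughly every 16.7 ms, so an end-to-end pipeline under 17 ms keeps the displayed image within about one frame of the trainee's motion. A back-of-the-envelope budget (the stage numbers here are invented for illustration, not measured values from our system):

```python
# Illustrative motion-to-photon latency budget; stage numbers are
# invented for the example, not measured values from our system.
refresh_hz = 60
frame_budget_ms = 1000.0 / refresh_hz   # ~16.7 ms between frames

stages_ms = {
    "optical tracking + pose solve": 4.0,
    "render": 8.0,
    "wireless video link": 3.0,
}
total_ms = sum(stages_ms.values())              # end-to-end latency
within_one_frame = total_ms <= frame_budget_ms  # stays under one frame
```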
Realistic Elements - Virtual
- Extremely wide field of view in the virtual world, enabling human peripheral vision
- Force feedback from live or artificial intelligence avatars. One example for law enforcement is the ability to be "shot" by another character in the simulation. This is accomplished by a wireless taser-like device that gives variably adjustable sharp feedback when a person is hit by a bullet in the simulation. This same architecture can be used for other physical elements to be stimulated by virtual elements such as smells, heat, wind, and noise.
- Hand and arm signals can be made physically and represented accurately on the virtual avatar
- Depending on the software package of choice, real faces can be added to the simulated avatars
- Physics-correct programming for collisions between the live trainee and virtual objects such as walls or other avatars; gravity; wind effects; bullet and other object trajectories; proximity-based sound; and realistic ambient sounds (live or artificial intelligence)
- Video pass-through cameras allow a view of the real world to be blended with the virtual world
- Audio from the virtual world is replicated in trainee headphones or ambient speakers. Trainees speak using microphones or physical voice contact depending on setup.
- Realistic depictions of death, wounding, and other stress-inducing events to help trainees be pre-exposed and mentally prepared for the traumatic and stressful activities of their professions
- Realistic day and night conditions
- Night vision goggle (NVG) views and thermal sight views for night operations
- Full weather replication with proper physics on many objects
Realistic Elements - Physical
Our system enables high physical realism in a rich virtual environment:

- Real human motion without tethers of any kind
- Wireless VR display with dual screens and a 130-degree field of view (human mid-peripheral vision)
- One-minute add-on kit for real weapons to be used in the simulator with complete wireless data reporting and safe recoil (patented)
- 100% of real duty gear can be used in our simulator, including helmets, fire-retardant devices, protective masks and clothing, backpacks, rifles, pistols, and non-lethal weapons
- Wireless force feedback and environmental feedback, cued to triggers in the virtual world, aids realism in terms of smell, audio, wind, vibration, concussive impact, and taser-like shocker feedback mapped to various body zones. Other devices for unique purposes can also be added in the same architecture.
- Seamless transition from vehicles to foot
- Vehicle simulators can have motion platforms with 3 or 6 degrees of freedom to provide bumps, tilts, vibration, turns, etc. during vehicular operations
- Featherlight attachments for human body tracking with no wires and one minute to attach - far less cumbersome and more realistic than other wired sensor packages
Technology Testing
Our platform enables the same basic functions for all types of first responders (untethered human motion, real equipment, variable-size tracking area, unlimited virtual areas, rich virtual environments). Therefore, our simulator can support law enforcement testing followed immediately by firefighting testing, simply by opening different scenarios and putting markers on the next trainee and his equipment.

Testing can support assessments of teamwork, communications, maneuver, operating procedures, new challenges in the environment, or new equipment effectiveness. Examples of the latter might include testing new fire-retardant devices on a variety of fires under a variety of circumstances and environmental conditions. To accomplish this, we would model the new device and its physics, then apply the device to the physics-correct fires in the simulation, with the ability to change the drafting of air and the location/size/intensity of the combustible material, and measure the effectiveness of the individual using the device (time to accomplish tasks and pre-tasks, and completeness of extinguishment, measured against different types of combustible materials).
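As a toy illustration of how such a test could score a device, the sketch below models fire intensity that grows with air draft and shrinks with the agent application rate, and reports the time to extinguishment. The model and all of its constants are invented for illustration and are not the physics engine's actual equations.

```python
def time_to_extinguish(intensity, agent_rate, draft_factor=1.0, dt=0.1):
    """Toy suppression model: fire intensity grows with air draft and
    shrinks with the agent application rate. Returns seconds until the
    fire is out, or None if it is not out within 10 minutes. All
    constants are invented for illustration.
    """
    t = 0.0
    while intensity > 0 and t < 600.0:
        intensity += (0.05 * draft_factor - agent_rate) * dt
        t += dt
    return round(t, 1) if intensity <= 0 else None
```

A stronger device (higher agent rate) extinguishes the same fire faster, which is exactly the kind of comparative measurement the simulation's time logs would capture.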

Equipment that can be tested with minimal programming and modelling:
- Heads-up displays (these already exist in VBS3 and can be fed to our Head Mounted Display)
- Cameras can be used in the simulation to capture virtual snapshots. Testing of new cameras such as body-cams could be easily integrated by programming the in-game cameras for different resolutions and fields of view.
- Location-specific identification tools. VBS3 and other software packages already include geospecific terrain sets, topographical and street maps, and are tied in with Google Earth functions. All of this functionality is usable in the Serious Simulations ready2train system.
- Sensors. A variety of military sensors already exist in VBS3 and other sensors can be programmed quite easily. Motion, heat, sound, and other triggers are possible.
- Touch displays. We have experimented with instructors and trainees using touch displays while in the simulator. This assists in navigating menus without keyboards or mice while training or testing during the simulation.
- Audio cues. Standard for all simulations. Custom sounds can be added for audio cues.
- Voice commands/auditory controls. This is software specific and not currently in place in our ready2train system, but could be developed without too much effort given the advanced state of voice recognition software.
- Data analytics and sharing tools. Full after action review capability is standard, and custom programming for additional non-standard data collection should not require much effort.
- Biofeedback. Our company has individuals with experience incorporating stress related biofeedback devices through wireless brain monitoring, temperature and heartbeat monitoring.
- Haptic. The best haptic feedback is to use real equipment and we go to great lengths to support 100% real equipment in our simulators. The equipment often needs safe conversions to enable haptic feedback such as recoil, noise, vibration etc. and we are skilled in accomplishing these tasks.
- Gesture recognition. Our ready2train system has incorporated a wireless glove unit with finger sensors that enable a complete set of standard infantry hand and arm signals. These signals are accurately replicated by the avatar and can be recognized by the other live trainees. We currently have not incorporated facial gesture recognition, but are willing to do so by capitalizing on third party software of which we are aware.
Purchase Order List*
The ready2train system is available for purchase now and is most applicable to law enforcement and military customers. Individual tracking cubes, with all optical tracking equipment, sensors, communications, HMD, and weapon conversion kits, are available now for $50,000 to $65,000 depending on configuration. Our system is designed to be software agnostic, working with Unity, Unreal, and Virtual Battlespace 3, and other 3D interactive virtual environments/game engines can also be quickly integrated.

We are beginning to sell wireless head mounted displays for training and entertainment. We have single-screen wireless VR displays available for $2,500, and dual-screen wireless head mounted displays will be available in a few months for approximately $5,000 (quantity dependent).

Weapon conversion kits are also available for M4 carbines and AR-15s at $2,750 each. Recoil is sold separately as a third-party technology.

Our team is available to adapt our very flexible immersive suite of hardware to a variety of occupations, ranging from firefighting to maritime and medical training. We are a small business and do not have all of these solutions currently in existence, but we have the necessary architecture and early designs completed, ready to execute for willing customers. Pricing depends on a variety of factors and can be quoted upon request.
First Responder Scenes
Our architecture is designed such that each trainee is tracked individually, and their avatars are networked by the simulation engine in the virtual world. This approach enables unlimited individuals and teams to train virtually, with full human motion, in unlimited terrains (geo-specific or geo-generic), in vehicles or on foot, in urban or non-urban areas, while being completely interactive with virtual characters and other virtual assets. In such an architecture, any scene can be replicated by a game designer. In many cases, entire virtual scenes can be created by an untrained instructor using menu-driven scene development and no programming background. Real-time editing during the scenario is also easily accomplished by the instructor in the common robust game engines with which we integrate. Our simulator has been integrated with Unreal, Unity, and VBS3 but is easily integrated into other 3D interactive game engines. The game engines with which ready2train operates can accommodate virtually any scene or physical model.

The addition of real objects and obstacles, such as furniture, walls, or doorways, is easily accomplished. These items can be low- or high-fidelity mock-ups and can be made to feel appropriate to the touch. The objects need to be placed in the physical scene in a 1-to-1 mapping with the virtual scene; in other words, one foot in the virtual reality area equals one foot in the physical area, so that the visual representation of an object in the VR display lines up correctly with the touch of the object in the physical setting. If desired by the customer, a video camera attached to the HMD can provide a real-time look into the physical world, displayed in Picture-in-Picture format in the VR display ("video pass-through").
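The 1-to-1 registration described above amounts to a fixed-origin, unit-scale coordinate transform. A minimal sketch follows (function and parameter names are illustrative, not the actual ready2train API):

```python
def physical_to_virtual(p_feet, origin_feet=(0.0, 0.0, 0.0)):
    """Map a physical position (feet, tracker frame) into the virtual
    scene at 1:1 scale: one foot in the room equals one foot in VR.
    Assumes the two frames share a calibrated origin and axis alignment.
    """
    return tuple(p - o for p, o in zip(p_feet, origin_feet))

# A prop door 10 ft in front of the tracking origin renders 10 ft away
# in the virtual scene, so its image lines up with its physical touch.
door_virtual = physical_to_virtual((10.0, 0.0, 3.0))
```

Because the scale factor is exactly 1, any intentional mismatch between the physical and virtual layouts would break the alignment between what the trainee sees and what they touch.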

With the above mentioned set-up options, the following scenes are possible in the ready2train system:

Public building - Create a 1-to-1 physical/virtual world with real objects and a mannequin. Use the video pass-through HMD, or flip up the VR display, to assess the physical victim and take them to safety.
Private home - Virtual scene. 24 firefighters can easily participate using combinations of ready2train systems or PC/laptop stations with game interfaces.
Investigation - Virtual scene in the ready2train system with a physical lamp that has a virtual illumination effect.
Traffic stop - Easily accomplished in a ready2train system, with a vehicle mock-up, a real weapon, voice communications devices (communicating wirelessly with entities in the game), and police computers.
Public event - Possible in ready2train; it would be accomplished in a totally virtual scene. Special off-the-shelf crowd behavior software would need to be purchased and integrated to replicate very large crowds.
Crime scene walk-through - One of the best uses of ready2train: officers would be in their own tracking areas but networked into the same virtual scene, allowing them unlimited access and motion throughout the scene. Real weapons are converted with safe recoil kits and wireless data-sensing weapon skins for instructor/test data. The scene is very dynamic and can be altered (edited) by the instructor as conditions dictate during the action.
Documenting a scene - Possible in the virtual environment with correct modelling. A real or simulated camera could be used.

Accident response - This VR scene can be accomplished easily in ready2train, which has the ability to move from vehicle operations to foot and back. A motion platform under the vehicle (an off-the-shelf item) reacts to software inputs such as bumps and turns. The vehicle portion could be a high-fidelity mock-up without the use of VR displays.
Car crash triage - Can be accomplished in a VR scene with mixed-reality real or simulated medical monitoring/assessment devices for communications and vital data capture at the accident.
Heart attack in restaurant - Can be accomplished in a virtual environment in ready2train, but could potentially be better served by a live exercise with a mannequin, since most of this scenario is physical skill.

Search & Rescue - All scenes are well served in virtual environments in ready2train.

Explosives and Hazmat - Unmanned ground vehicles have been modelled and used in VR trainers by the Army; I have personal experience making the first Talon Robot trainer. This is a good application for a VR-based system on a laptop or robot-interface mock-up and may not require any human body tracking.
The software packages we use provide a complete capture of a mission or scene that can be replayed by the instructor for student after action review. In addition, data can be captured and time-logged for any desired event, whether it is based on a virtual event (fire station notified of fire, police car arrives on scene) or a human action (such as a shot fired, fire hose engaged, etc.).

The playback can be rolled forward and backward by the individual to quickly find the scene of interest. Playback includes first-person, third-person, and bird's-eye perspectives and replays audio, microphone, and video.
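A minimal sketch of the time-logging and scrubbing idea (an illustration of the approach, not the actual ready2train data format):

```python
import bisect

class EventLog:
    """Time-stamped scenario event log for after-action review; a
    sketch of the approach, not the actual ready2train data format."""

    def __init__(self):
        self.times = []    # seconds since scenario start, kept sorted
        self.events = []

    def record(self, t, event):
        """Insert an event, keeping the log ordered by timestamp."""
        i = bisect.bisect(self.times, t)
        self.times.insert(i, t)
        self.events.insert(i, event)

    def seek(self, t):
        """All events at or before time t, newest first, so playback
        can scrub backward and forward to a scene of interest."""
        i = bisect.bisect(self.times, t)
        return list(reversed(self.events[:i]))

# Hypothetical scenario events.
log = EventLog()
log.record(0.0, "scenario start")
log.record(12.4, "fire station notified")
log.record(95.0, "police car arrives on scene")
log.record(101.5, "shot fired")
```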

NIST Metrics that can be supported in ready2train:

1. How quickly can you get to a person in a fire and extricate them from the scene? - Yes, all actions or triggers can be measured in time logs and output to spreadsheets or after action reviews
2. Engaging with walls, doors, furniture etc. - Depending on customer needs, virtual or real objects can be used. Trade-offs in terms of 1-to-1 virtual/physical world mapping as mentioned above.
3. Improving officer safety by delivering records and information quickly (confirmation that an item has been stolen or a fugitive has been found) - Yes, easily accomplished.
4. Accuracy of completing a task - Yes, with comparison of pre-programmed steps in game engine versus virtual or human motion actions in ready2train.
5. Measuring mistakes - Yes, as in 4 above.
6. EMS: Minute-by-minute measurements; providing best care with prioritization; balancing care of multiple people on a scene; and optimizing care beyond visual assessment, using technology that provides more detailed medical information - Yes to all, as in 4 above.
7. Accuracy and precision of location based technologies - Yes, geo-specific terrain enabled in various game engines.
8. Precision and granularity of gestures - Yes, via sub-millimeter measurement of human motions. Voice commands - Yes.
9. Measure multiple technologies on one task—such as haptic, visual, auditory. See which is most effective to complete the task - Yes, these are all present and measurable for output. Effectiveness to be graded against a provided scale.
10. Victim identification, medical record receipt and display - Yes
11. Upload and/or sharing of visual elements (photos, video, etc.) - Yes, accurate visual information can be shared in game.
12. Speed of command center to access information from scene, make decisions, and send directions back to the first responders - Yes, time logs can be kept on all activities.
13. Set-up time at an emergency scene - Yes, time logs can be kept on all activities.
14. What level of information or data is actually required for optimal performance? Address data overload vs. not enough to make a good decision - Yes, data sufficiency versus overload could be easily tested in a scenario, and graded against definitions of known or approved "good" decisions.
15. Capturing and logging data from a crime scene or accident. Such as taking pictures of a scene, saving it, and sharing to appropriate staff while protecting private data as required for specific departments - Yes these elements can be programmed into a simulation.
16. Misunderstanding rate - Yes, if measured against known or approved correct actions/outcomes in the scenario.
17. Rate of requests resulting in incorrect action - Yes
18. Reattempt rate (measure of user impatience) - Yes
19. Speed from communication to resulting action- Yes, in game time logs.
20. Enabling operations vs. distracting from operations - Not enough information to answer this question.
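As an example of how metric 1 above could be derived from such time logs and output to a spreadsheet, the following sketch computes elapsed times between logged events and writes them as CSV rows. The event names and timestamps are hypothetical.

```python
import csv
import io

# Hypothetical scenario time log: (seconds since start, event name).
time_log = [
    (0.0, "scenario start"),
    (42.3, "victim reached"),
    (118.9, "victim extricated"),
]

def elapsed(log, start_event, end_event):
    """Seconds between two named events in the time log."""
    times = {name: t for t, name in log}
    return times[end_event] - times[start_event]

# Export the derived metrics as spreadsheet (CSV) rows.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["metric", "seconds"])
writer.writerow(["time to reach victim",
                 elapsed(time_log, "scenario start", "victim reached")])
writer.writerow(["time to extricate",
                 elapsed(time_log, "victim reached", "victim extricated")])
```

The same pattern applies to most of the time-based metrics in the list: log the triggering events during the scenario, then compute differences between their timestamps for the after action review.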
Supporting Documents - Visual Aids
NIST challenge document.pdf
