Introduction
Title
MultiVRse - parallel physical and VR universes
Short description
Auto VR visualisation matching the physical space, untethered & unbounded VR, interaction with real & VR objects, no cameras needed + more!
About you
Bio
UNSN (/unseen/) is a creative-tech innovation studio. We focus on pushing the boundaries of UX by introducing brave and unconventional ways of interacting with the physical and digital world around you, designing true experiences of the future.

UNSN's co-founders come from a creative-tech background. They have years of experience as Technical and Art Director respectively at some of the world's top innovation studios, delivering groundbreaking AR / VR, web, mobile and physical content to brands. They have technically and artistically led projects for clients like Disney, Google, 20th Century Fox, Nissan and Reebok, winning over 70 awards including an Emmy, Cannes Lions, SXSW, the Adobe Cutting Edge Award, Webby and CLIO awards.
Technical Details
Solution overview
*** We are proposing a solution that: ***
1. provides a modular and quick way of setting up physical test spaces of any size or layout, anywhere
2. automatically generates matching VR environments that can be further customised, with no external sensors or cameras required
3. provides untethered VR (no cables attached)
4. provides unbounded VR (unlimited positional tracking with no external sensors)
5. supports AR mode using the same headset
6. allows hands-free capture, visualisation and recording of the first responder's full-body pose down to individual fingers, with no external sensors or cameras required
7. allows operating and evaluating physical and virtual technologies and interfaces
8. supports multiple first responders at the same time, within the same physical space
9. records, analyses and allows challenging "ghost trails" for ultimate benchmarking

*** Here is how we achieve this: ***

*** 1. & 2. Building physical spaces with automatic generation of matching VR environments: ***
The physical space will be built using simple modular blocks (think of them as bricks, just usually bigger, e.g. entire wall modules). The blocks are entirely non-digital (created from any material, including wood, plastic, styrofoam or reusable materials). Each block has NFC stickers underneath that are recognised by special floor tiles, which then look up a matching 3D model. Mechanical and fully custom blocks are also possible - simply stick NFC stickers underneath them. The system knows exactly which block is placed on which floor tile and can thus recreate a virtual environment that perfectly matches the physical one, with no optical tracking required in the room - which makes the solution scalable and easy to set up anywhere.
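
To make the lookup concrete, here is a minimal sketch of the idea in Python. The tag UIDs, model paths and the `scene` API are illustrative assumptions, not a final implementation:

```python
# A minimal sketch, not a final implementation: tag UIDs, model paths and
# the scene API are illustrative assumptions.

TILE_SIZE_M = 0.6  # assumed edge length of one floor tile, in metres

BLOCK_REGISTRY = {
    # NFC tag UID -> (block type, 3D model asset)
    "04:A2:3B:1C": ("wall_module", "models/wall_2x1.obj"),
    "04:7F:90:D2": ("table", "models/table_standard.obj"),
}

def on_tile_report(tile_x, tile_y, tag_uid, scene):
    """Called whenever a floor tile reads an NFC sticker placed on top of it."""
    block_type, model = BLOCK_REGISTRY.get(
        tag_uid, ("generic_block", "models/generic_block.obj"))
    # Tile grid coordinates map 1:1 to virtual world coordinates, so the
    # virtual block appears exactly where the physical one stands.
    scene.spawn(model, kind=block_type,
                position=(tile_x * TILE_SIZE_M, 0.0, tile_y * TILE_SIZE_M))
```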

*** 3. & 4. & 5. Untethered and unbounded VR and AR: ***
It is absolutely essential that the VR solution doesn't impede the first responder's actions. A PC headset like the Oculus Rift or HTC Vive would not allow walking or running any reasonable distance: these headsets require cables to be plugged in, they are heavy, and their positional tracking only works within a highly limited range (around 4x4 m max); going beyond that would require extremely costly optical mocap systems ($200k+, ref: https://www.optitrack.com/systems/?gclid=CJuG4t6KvtMCFUdfGQodPYYEcw#virtual-reality/prime-13w/100).

To solve all the above issues and, most importantly, to provide a comfortable and truly mobile VR solution, our system will use the BRIDGE (https://bridge.occipital.com/) mobile headset. This headset is untethered and provides built-in positional tracking with no external sensors, so its range is unbounded. It also provides an AR mode, or even a seamless Mixed Reality mode.

*** 6. Hands-free full-body motion capture with no external sensors: ***
Our solution makes use of Perception Neuron (https://neuronmocap.com/) - a modular, wireless motion capture system capable of full-body tracking down to individual fingers, with no external sensors and without having to hold any controllers. It can also be naturally extended to track other objects and report their position and orientation. Precision demo: https://www.youtube.com/watch?v=PU652js7ztU
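
As an illustration of how the host software might consume the mocap data: Axis Neuron (Perception Neuron's host application) can broadcast skeleton data over the network. The sketch below assumes a newline-delimited text stream and a port chosen for illustration; the vendor's actual framing and defaults should be checked against their documentation.

```python
import socket

# Assumed endpoint of the mocap broadcast; check Axis Neuron's settings for
# the real host, port and frame format.
HOST, PORT = "127.0.0.1", 7001

def mocap_frames(host=HOST, port=PORT):
    """Yield one full-body skeleton frame (a list of floats) per line,
    assuming a newline-delimited text stream."""
    with socket.create_connection((host, port)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                return  # stream closed
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                if line.strip():
                    yield [float(v) for v in line.decode().split()]

# Usage: feed each frame into the VR avatar every tick, e.g.
# for joints in mocap_frames():
#     avatar.apply(joints)
```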

*** 7. Operating both virtual and physical devices: ***
First of all, thanks to the proposed mocap approach, first responders have their hands free to operate real devices. They can also use natural hand gestures to operate virtual ones as their individual fingers are tracked.

Our system supports emulating devices in VR, which can then be virtually attached to any part of the first responder's body. It also allows testing physical devices (e.g. drones) by visualising their position in VR using the same motion capture system as for the first responder's body. Finally, running the simulation in AR mode allows seeing and operating physical devices like smartwatches in real life, while having other effects applied on top of the video feed. More details in the sections below.

*** 8. Multi-user co-op or challenge: ***
All our solutions are wireless, self-contained and do not require external trackers to work. They are also not prone to occlusions. This means multiple first responders can operate or cooperate at the same time without any issue.

*** 9. Ultimate data analysis and comparison with "ghosts": ***
Thanks to having a full-body motion capture system and unbounded positional tracking, we can easily record complete "ghost passes" - full-body recordings of the first responder at any point in time, throughout the entire test scenario. Such passes can then be evaluated like a video recording, but better: with a free 3D camera, in slow motion, overlaid for comparison, or processed by other systems, including AI. Recorded passes can also be visualised in VR for other first responders to try to beat, potentially using different aiding technologies / devices.
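
A minimal sketch of how such ghost passes could be recorded and replayed, assuming each mocap frame arrives as a flat list of joint values (the function names and JSON-lines format are illustrative):

```python
import json
import time

def record_pass(frames, path):
    """Store timestamped full-body frames for an entire test run."""
    start = time.time()
    with open(path, "w") as f:
        for joints in frames:
            f.write(json.dumps({"t": time.time() - start,
                                "joints": joints}) + "\n")

def load_pass(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

def ghost_pose_at(recording, t):
    """Return the recorded pose closest to time t, used to render the ghost
    avatar alongside the live first responder."""
    return min(recording, key=lambda frame: abs(frame["t"] - t))["joints"]
```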
DIY Recipe Kit
Our proposed solution only makes use of technology that already exists, including all the VR equipment, sensors and visualisation technologies. There are only two areas that require bespoke pre-production (done only once, before the first setup):
- the floor tiles need Arduino-based NFC sensors and edge connectors installed
- the software that joins all the pieces together needs to be created

Once done, you will have a complete set of technologies and hardware that can be used to build any test scenario environment, adapt to any needs, be set up in a matter of minutes and be reused over and over again for new test cases and in new locations.

*** Pre-production: ***
First, we need to gather or produce the building blocks for the physical environment. We would reuse or custom-make different sized blocks that can serve as floor tiles, parts of walls, doors, sofas, beds, tables etc. They can be made of anything - wood, plastic, waste - or they can even be actual physical objects (actual windows, actual tables). Then, we would stick durable and waterproof NFC stickers under each corner of each block. So far nothing complex. Next, we would create simple Arduino-based NFC readers to be put inside or under each floor tile, and we would provide connecting outlets on each of the four floor tile sides so they can be connected together in a grid. Finally, we would design 3D models for each block type; for most of them (walls, floor tiles) these are 3D primitives that can be put together in a few minutes. That's it. See below how this will work.

The software build can't be detailed in a couple of sentences, but it is estimated to take three to five months for a team of three developers.

*** New environment setup: ***
Get a PC with the software installed and plug it into mains power (or use a battery). Connect the first floor tile to the PC via USB and put it on the floor. Attach other floor tiles to the sides of the first one, creating a floor surface of any shape or size. Place blocks on top of the floor (effectively creating rooms, passages, furniture etc.) with their underside NFC stickers touching the floor tiles.

At this point a matching 3D environment has already been created for you thanks to:
- the floor tiles being able to identify other tiles attached to their sides
- each tile being able to identify the object put on top of it, as well as its orientation, thanks to that object's unique NFC tags attached to each of its corners (see the sketch below).
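
Here is a rough sketch of that inference step, assuming each tile reports which neighbours are attached to which side and which corner tags it detects (the data structures are illustrative):

```python
import math
from collections import deque

SIDE_OFFSETS = {"north": (0, 1), "south": (0, -1),
                "east": (1, 0), "west": (-1, 0)}

def layout_grid(root_tile, neighbours):
    """BFS from the USB-connected root tile. `neighbours` maps each tile id
    to {side: neighbouring tile id}; the side fixes every tile's (x, y)."""
    coords, queue = {root_tile: (0, 0)}, deque([root_tile])
    while queue:
        tile = queue.popleft()
        for side, other in neighbours[tile].items():
            if other not in coords:
                dx, dy = SIDE_OFFSETS[side]
                x, y = coords[tile]
                coords[other] = (x + dx, y + dy)
                queue.append(other)
    return coords

def block_pose(corner_tiles):
    """`corner_tiles` lists the (x, y) of the tiles that detected a block's
    corner tags; the centroid gives its position and the vector between the
    first two corners its orientation."""
    xs = [x for x, _ in corner_tiles]
    ys = [y for _, y in corner_tiles]
    (x0, y0), (x1, y1) = corner_tiles[0], corner_tiles[1]
    centre = (sum(xs) / len(xs), sum(ys) / len(ys))
    heading = math.atan2(y1 - y0, x1 - x0)
    return centre, heading
```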

At this point you can dive straight into VR or customise the virtual environment.

For new NFC tags you need to specify the object's height, and you can provide a custom model if you're not happy with the generic block generated from the block's outline and height. Once set, the corresponding model is assigned to the NFC tags, and when the same block is moved or placed in a new setup, the system already knows what it is.

Further customisation of the VR environment can be done simply by clicking on walls and other items and choosing different textures for them or adding visual effects like smoke or fire to them.

To initiate the VR session, put on the body tracking sensors and place the VR headset on the first floor tile. This synchronises the headset's position in the virtual world with the real world (as we're not using any external tracking cameras). Then just pick up the headset and dive into the experience - walk, run around and operate within the parallel physical and virtual environments.
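
A minimal sketch of that one-off calibration step, assuming a simple translation-only alignment (a full implementation would also align rotation about the vertical axis; all names are illustrative):

```python
def calibrate(headset_pose_on_tile, first_tile_world_pos):
    """While the headset rests on the first tile, capture its self-tracked
    position and return the offset into tile-grid world space."""
    hx, hy, hz = headset_pose_on_tile
    wx, wy, wz = first_tile_world_pos
    return (wx - hx, wy - hy, wz - hz)

def to_world(headset_pos, offset):
    """Apply the calibration offset to every subsequent headset position."""
    return tuple(h + o for h, o in zip(headset_pos, offset))
```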

*** Disassembly & reusing: ***
At the end of the experience, simply remove all objects from the floor tiles, detach the floor tiles from each other and shut down the PC - that's it. Pack the blocks and send them to another place (or just get / produce more blocks). Set up a new layout simply by altering how the floor tiles are connected together and what objects are put on top. Again, the 3D environment matching the real-life one will be automatically generated for you.

*** Additional hardware required: ***
General:
- Wi-Fi router for local area network (no internet connection is required)
- PC, PC mouse, keyboard, monitor
- USB cables

Per first responder:
- BRIDGE VR goggles (https://bridge.occipital.com/)
- iPhone 6 or 7 (for the VR goggles)
- Perception Neuron body tracking sensors (https://neuronmocap.com/)
- Wireless headphones for athletes (https://www.jaybirdsport.com/en-us/x3-bluetooth-headphones.html)
Safety
*** Headset: ***
Our system uses BRIDGE - a production-ready, lightweight mobile headset that is untethered. Therefore there is no risk of getting tangled in cables or having the headset pulled off one's head.

Another safety feature of the headset is its ability to sense physical obstacles in its immediate proximity that are not shown in the virtual world. If another person is standing within the test space, or there is an object that is not part of the installation, it becomes highlighted and overlaid in VR as the first responder comes close to it.

*** Full-body tracking: ***
The Perception Neuron body tracking system is capable of tracking the full body down to individual fingers, providing the next level of comfort and realism in VR experiences. The first responder will feel comfortable being able to see their limbs as they are in the real world, instead of not seeing them at all or seeing just floating hands, as tends to be the case in most VR experiences.

Also, Perception Neuron doesn't require holding anything in the hands and is wireless, not only leaving the first responder free to operate other devices but also letting them support themselves naturally if they lose balance or fall.

*** Physical environment: ***
The proposed approach to building physical test environments allows an ultra-precise, 1:1 automatic replica in VR. Whenever the layout changes, there is no need to rely on a human to adjust the 3D representation - it's done automatically, leaving no room for human error. The first responder can feel safe knowing that what they see in VR is, in fact, how the environment looks in real life. This also means they get natural haptic feedback from the environment.

Furthermore, because the environment building blocks can be made of anything (as long as they remain mostly rigid and an NFC sticker can be attached underneath), the building blocks could just as well be made of sponge- or rubber-like materials, providing increased safety upon contact or collision.
Realistic Elements - Virtual
*** Intro: what is realism? ***
Whether something is realistic or not depends entirely on the target audience. If we were targeting tennis players, ball and racket physics would probably be what defines realism for them. If our target audience were interior designers, textures and lighting would probably be what they care about most in VR. We are targeting first responders, and no VR experience will be "realistic" to first responders if they can't have complete freedom of movement. They need to be able to walk, run, crouch, operate tools and do all this without any constraints. Whether fire is rendered at a slightly higher or lower resolution probably doesn't matter much to them, as long as it's there.

*** Complete freedom of movement: ***
Thanks to the BRIDGE mobile headset, the first responder's movements are unconstrained. The headset is fully mobile and untethered, requiring no cables, which allows the first responder to forget about the technology and focus on their actions. The BRIDGE headset also provides unbounded positional tracking, which lets first responders move freely between rooms.

*** Interact with VR objects as in real life: ***
The Perception Neuron motion capture system tracks the full body, including individual fingers. This allows the first responder to precisely operate virtual tools and devices. Virtual keyboards, screens, fire extinguishers, handles, levers - everything can be operated in VR with near-real-life precision. Perception Neuron works wirelessly and is comfortable to wear. It also leaves the hands free to operate other devices and doesn't require holding any controllers.

*** Full body presence: ***
Another advantage of using a full-body tracking system like Perception Neuron is being able to visualise the first responder's body in VR. Most VR experiences avoid showing the user's body because they have no way of tracking its position precisely - but this completely kills immersion. We will be able to achieve the highest level of realism: the first responder will see their entire body as it is in real life, making them feel comfortable and, again, letting them forget about the technology and focus on the task.

*** Haptic feedback: ***
Thanks to having 1:1 matching VR and physical environments, the first responders get haptic feedback whenever they touch a wall or any other object in the room. This is achieved thanks to the automatic VR model generation when using the connected blocks to build the physical space.

*** Carrying or moving objects, opening doors: ***
Our system naturally scales to tracking the position and orientation of physical objects, with their movement reflected live in VR. This is achieved by attaching additional Perception Neuron sensors to those objects; their position and orientation are then tracked similarly to how the first responder's body is tracked. The neurons are small and can be attached to anything, including doors, boxes, tools or other devices.
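
As a sketch of how this could look in the host software, assuming a hypothetical mapping from spare sensors to virtual props and an illustrative `scene` API:

```python
# Hypothetical mapping from spare neurons to virtual props.
SENSOR_TO_PROP = {
    "sensor_17": "virtual_door_03",   # spare neuron taped to a real door
    "sensor_18": "virtual_drone_01",  # neuron mounted on a real drone
}

def update_tracked_props(sensor_poses, scene):
    """Each frame, copy the tracked pose of every tagged physical object onto
    its virtual counterpart so VR mirrors the real movement live."""
    for sensor_id, prop_id in SENSOR_TO_PROP.items():
        pose = sensor_poses.get(sensor_id)
        if pose is not None:
            scene.set_pose(prop_id, position=pose.position,
                           rotation=pose.rotation)
```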

*** Smoke, light, fire and other effects: ***
Our system works with all major game engines, including Unity and Unreal. Both provide particle systems, dynamic lighting and other effects that can be added to increase the realism of the scene or to simulate extreme conditions.
Realistic Elements - Physical
*** Any layout, any size: ***
Most importantly, our design is modular. It is not constrained by shape or size and effectively allows building out any physical space, including separate rooms, doors or even floors. You start by shaping the floor, but you're not limited to one level: the floor tiles come in different thicknesses and can be placed on top of each other, effectively letting you create raised floors or even stairs to another level. The floor tiles can also include motors that make them shake. Walls and objects can be placed anywhere on the floor tiles.

*** Custom objects: ***
You can also create and reuse custom objects. It is enough to stick NFC stickers to them and provide a 3D model that matches the object. Then, it can be placed anywhere in the scene simply by putting it on the floor tiles. The position and orientation of the object are automatically mapped in VR.

*** Moving objects and devices: ***
As mentioned in the Realistic Elements - Virtual section, physical objects can be tracked with Perception Neuron so they can seamlessly blend between VR and the physical space.
Technology Testing
*** Physical equipment: ***
With the proposed setup it is possible to test real, physical devices. One way to do that is using the Augmented Reality mode (seeing the physical environment with virtual overlays on top of it). The first responder can operate real-life objects like smartwatches, levers, touchscreens or fire extinguishers, seeing the physical surroundings and the tested devices directly. On top of the "reality" video feed, we add augmentations like fire, smoke, decor, victims etc. The VR and AR modes can be switched seamlessly at any time, providing the ability to e.g. start in VR mode for the best immersion, switch to AR mode only when the first responder needs to operate a physical device, then switch back to VR.

Physical objects can be easily motion- and position-tracked with the Perception Neuron - the same sensors that are already used for body tracking. This is another way of testing physical devices in VR. An example could be sticking the sensor on top of an actual drone to visualise it in VR (while having the actual device operate in real world). It would be the real drone navigating in the real, physical environment and by having its position tracked, it could be visualised in VR without having to re-implement its autonomy or functional logic specifically for VR.

*** Virtual interfaces and simulated technologies: ***
The proposed setup also enables testing of new device concepts before they are even manufactured. They can be simulated in VR and in AR. Existing devices can be virtually simulated the same way, without the need to have them physically on site. Anything from HUDs / helmet data overlays, to throwable flares that "are aware" of the environment, to autonomous drones that navigate around the virtual environment - it can all be visualised in both VR and AR at no extra hardware cost or setup.

This is possible thanks to the first responder's entire body being tracked. We can interpret their hand movements to compute the flight path of an item being virtually thrown, and, knowing the environment setup, we can also simulate it bouncing off walls, floors etc.

Another interesting benefit of full-body tracking is being able to virtually attach devices to any part of the user's body. An obvious example would be a HUD attached to the responder's head, so it stays in front of their eyes at all times, but it could also be a virtual smartwatch attached to their wrist, a virtual knife attached to their hip, a light attached to their shoulder that they toggle by simply tapping it with their hand, or even a virtual backpack from which they can take out virtual items or into which they can put collected virtual objects. It could also be virtual CPR that interacts with a virtual victim. With full-body motion tracking, free range of motion and no cables attached, the possibilities are endless.
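
Returning to the thrown-item example above, here is a minimal sketch of the idea, assuming hand positions sampled from the mocap stream and an illustrative `environment.collision_normal()` query against the auto-generated geometry:

```python
GRAVITY = (0.0, -9.81, 0.0)

def release_velocity(hand_positions, dt):
    """Finite-difference velocity from the last two tracked hand samples."""
    (x0, y0, z0), (x1, y1, z1) = hand_positions[-2], hand_positions[-1]
    return ((x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt)

def simulate_throw(pos, vel, environment, dt=1 / 60, bounciness=0.4, steps=600):
    """Step a thrown item forward; since the tile/block layout is known, the
    collision check runs against the same geometry the VR scene uses."""
    path = [pos]
    for _ in range(steps):
        vel = tuple(v + g * dt for v, g in zip(vel, GRAVITY))
        pos = tuple(p + v * dt for p, v in zip(pos, vel))
        normal = environment.collision_normal(pos)  # assumed geometry query
        if normal is not None:
            # Reflect the velocity about the surface normal and damp it.
            dot = sum(v * n for v, n in zip(vel, normal))
            vel = tuple((v - 2 * dot * n) * bounciness
                        for v, n in zip(vel, normal))
        path.append(pos)
    return path
```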
Purchase Order List
All key components required for the proposed solution are already available on the market and can be purchased. Custom software needs to be written using one of the leading game engines, Unity or Unreal. Finally, the building blocks for the physical environment need to be organised / bought / produced.

*** Common: ***
- middleweight PC: $1600
- PC display: $200
- Wi-Fi router: $200
- PC accessories (keyboard, mouse, USB cables etc.): $200
- Custom software build: TBC, estimated at three to five months for a team of three developers

*** Single floor tile (# of tiles depends on setup size): ***
Floor tiles can be built from anything that’s durable enough. It can be wood or plastic.
Each floor tile needs to have edge connectors to neighbouring tiles and an Arduino-based NFC reader.
- Connectors: $20
- Arduino: $15
- Arduino NFC shield: $29.50
An estimate for a 700-square-foot setup is 200 floor tiles. All tiles can be reused.
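(Per-tile electronics: $20 + $15 + $29.50 = $64.50, so the electronics for 200 tiles come to roughly $12,900, excluding the cost of the tile material itself.)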

*** Building blocks: ***
They can be anything, including real furniture, cardboard, styrofoam etc. The ballpark cost per 700-square-foot setup is $500.

*** Per first responder: ***
- BRIDGE headset: $399
- iPhone 7: $649
- Perception Neuron motion capture system: $1,499
- Jaybird X3 Wireless headphones: $129.99

*** Software: ***
Software cost is still TBC; it will depend on the final set of features agreed. There is no 3rd-party software cost involved (other than the Windows operating system). All software is built custom for the purpose of this project, using a cost-efficient approach based on Unity or Unreal. The software will be developed once and can be reused for multiple setups. A rough ballpark is three to five months of development for a team of three developers.

*** Example total cost: ***
Here is an example total ballpark for a 700-square-foot setup of a couple of rooms for one first responder. The entire setup can easily be reused for future purposes and other scenarios:

Total: $23,100 (exclusive of one-time software development cost).
First Responder Scenes
Our system addresses a wide range of scenes by allowing the first responder to freely move, walk and run without restrictions. The first responder has their full body tracked, allowing them to precisely manipulate objects in VR. The AR mode enables interaction with physical objects without the need to replicate their functionality in VR. The Perception Neuron motion capture system allows tracking other objects too, which makes them movable. And finally, the automatic generation of a matching VR 3D environment works with any layout, shape or size of the testing space.

Below are the examples provided in the brief, with a short explanation of how each is possible with the proposed system.

*** Firefighters - Public building: ***
Our system fully covers this test case. The blocks allow creating a hotel room layout with actual furniture placed around, all automatically mapped 1:1 in VR, which lets the first responder touch walls and get haptic feedback; the motion capture system lets them move freely, including crawling. Interaction with the victim can be performed in AR with an actual human, or in VR with a simulated person.

*** Firefighters - Private home: ***
Similar to the above. Moving between floors (upstairs / downstairs) can be achieved via a physical multi-floor layout or it can be simulated in VR. Having more than one first responder is not a problem, as everyone has their own self-contained motion capture and VR system. 24 first responders is also doable.

*** Firefighters - Investigation: ***
Technically identical to the above. Visually, Unity and Unreal provide realistic shaders and textures to emulate charred remains. Photographing and documenting the environment can be done in VR with a simulated camera (which the first responder can operate naturally with real-life hand gestures, thanks to the full-body motion capture system).

*** Police - Traffic stop: ***
We assume that simulating the first part (the chase) is out of scope for this project, although technically there is no issue adding an interactive experience that plays at the start while the first responder is seated. Our system allows placing a mock car (or any similarly sized placeholder object) on the floor and having the car visualised in VR. The first responder can walk freely around it and operate a weapon as in real life, thanks to the full-body motion capture system. Audio / video communication with a remote teammate is also fully possible, i.e. it can be simulated in Unity or Unreal. Alternatively, the first responder can use a real-life communication system in AR mode.

*** Police - Public event: ***
It is possible to add purely virtual objects that are not physically placed within the physical setup (in this case, crowds of people). Metal detectors, screens and comms can be visualised in VR. Multiple first responders can operate within the same space, move around between stations and cooperate.

*** Police - Crime scene walk-through: ***
Multiple officers can operate at the same time. They can move and get as close to or as far from the evidence / objects as they want. They can document everything with a VR camera. VR bodies can be moved with real-life gestures.

*** Police - Documenting the scene: ***
Weather, lighting and environmental effects can be easily simulated with the technology of our choice (Unity or Unreal). First responders can freely walk around the car and take pictures with a VR camera.

*** Search and Rescue - Maritime: ***
Water can be simulated in VR. First responders can put on thicker clothes to impede their movements. They can still move freely and use their hands as they would in water to emulate this scenario.

These are just a couple of examples of how our system generalises and adapts to various situations. The essence, though, is that if the first responder can move freely, if they can see and use their whole body as they would in real life, and if they have haptic feedback from a matching physical environment around them, we have everything needed to simulate any scenario. The rest can be added in VR.
Measurements
At all times, the user’s body motion would be tracked thanks to the Perception Neuron full-body tracking set. The data would be stored which would allow for precise analysis of the first responder’s behavior and performance. It’s like a video recording that you can scrub back and forth, just in 3D and with extreme precision. What’s more, this data could be used to visualize a “ghost” that the first responder can compete with in the next try. Add to that having recordings for each technological aid (e.g. one for using a smartwatch, one for using an AR data overlay) and you can easily see and compare their benefits!

As the system is already capturing the first responder's entire body position and pose throughout the test runs, there is no issue with adding additional data overlays to the "ghost" recordings for further analysis or comparison. Additional measurement equipment like heart rate monitors or heat cameras can be plugged in and synchronised with ease. What's more, our system will provide a Machine Learning layer on top of the analysis to spot and learn facts that may not be obvious to the human eye. Which equipment setup is best for the first responder? Is it only time that matters? AI will help answer that.
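
As a sketch of the kind of automated comparison this enables, here is an illustrative computation of simple per-run metrics from the recorded ghost passes (the recording format matches the earlier ghost-pass sketch; the metric choice and the root-joint assumption are examples, not final design):

```python
def run_metrics(recording):
    """`recording` is a list of {"t": seconds, "joints": [...]} frames, with
    the root joint position assumed to be the first three values."""
    total_dist = 0.0
    for prev, cur in zip(recording, recording[1:]):
        (x0, y0, z0), (x1, y1, z1) = prev["joints"][:3], cur["joints"][:3]
        total_dist += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return {"time_s": recording[-1]["t"], "distance_m": total_dist}

def compare(runs):
    """`runs` maps a label (e.g. "smartwatch", "AR overlay") to a recording."""
    return {name: run_metrics(rec) for name, rec in runs.items()}
```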

Next, there will be a Supervisor Mode. A Supervisor is a person watching the first responder's performance from the outside. They will be able to see, monitor and analyse the performance of the first responder live, in AR, using Microsoft HoloLens AR goggles. In this approach, a simplified version of the VR environment (without walls) is overlaid on top of the real-world environment. The actual first responder is seen as they are in real life, with their position highlighted even through walls. Additional HUD and data overlays as well as controls are also possible, e.g. showing the responder's current heart rate, precision or elapsed time.

Furthermore, we can implement virtual devices with ease. These can be cameras, thermometers, communication devices and many more - the possibilities are endless. They can interact with the virtual world around them, and collect and store information. Most importantly, thanks to the full-body motion tracking system, they can be operated exactly as in real life, giving the responder the most realistic experience possible.
Supporting Documents - Visual Aids
MultiVRse - Overview.pdf
