Title: Shadow Dreamer
Creators: Julie Huynh & Gabriel Gianordoli
We were inspired by In the Woods, an interactive installation by Swiss designer Camille Scherrer that combines computer vision and projection mapping to recast visitors’ shadows as animal silhouettes. The half-imaginary creatures are programmed with simple animations that react to people. This led us to the idea of using a Kinect sensor to detect people and activate their shadows in Unity, and a brainwave sensor to affect the environment the shadow inhabits.
We combined this idea with Julie’s Dream homework assignment to create a dreamscape in which the dreamer wakes up in a shadow world: every creature is a shadow trapped in a room, and each room holds a totem that triggers the scene change.
We began by connecting the hardware. The Zigfu add-on for Unity handles the Kinect sensor, and we merged Zigfu’s example avatar into the dreamlike medieval-styled room from Julie’s homework assignment. Once the Kinect plug-in was tested and working, we moved on to sending data over OSC from NeuroSky’s ThinkGear brainwave sensor. After verifying the connection with the Oscillator on the Mac, we installed an OSC receiver plug-in for Unity and checked the Unity console to confirm the data was arriving.
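To picture what travels over that wire: each ThinkGear reading arrives as a small OSC message. Our actual pipeline used off-the-shelf tools (and the Unity side is a C# plug-in), but as a hedged illustration, here is how a single reading could be packed into an OSC 1.0 message in Python. The address `/thinkgear/meditation` is a hypothetical name, not necessarily what the sensor bridge emits.

```python
import struct

def _osc_string(s: str) -> bytes:
    """OSC strings are NUL-terminated and padded to a 4-byte boundary."""
    raw = s.encode("ascii")
    return raw + b"\x00" * (4 - len(raw) % 4)

def osc_int_message(address: str, value: int) -> bytes:
    """Pack one int32 argument into an OSC 1.0 message:
    padded address string, padded type-tag string ",i", big-endian int32."""
    return _osc_string(address) + _osc_string(",i") + struct.pack(">i", value)

# A meditation reading of 57 on a hypothetical address:
packet = osc_int_message("/thinkgear/meditation", 57)
```

An OSC receiver plug-in on the Unity side decodes this same layout and hands the integer to whatever script is listening on that address.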
With the Kinect and ThinkGear connections complete, we had 3D modeler Aaren Grace customize an early-21st-century-style bed and Atlantic spotlight to create a surreal scene from the past. The other lamp, animal, and environment models were free assets we found to populate the two scenes we created.
The cave scene was based on flight, so we mapped wings onto the avatar for the user to control; however, the skeleton-to-wing connection was glitchy and still needs to be fixed (we found a solution and will update as soon as we have access to a Kinect again). The scene is set in a cave with bats, birds, and owls. The room’s light source was mapped to the meditation level retrieved from the ThinkGear sensor to simulate how dreams can be fuzzy and convoluted: when the meditation level is low the shadows cannot be seen and the room blacks out, but everything lights up when the meditation level is high. We set a bird emitter to randomly confuse the user, and set an owl to take flight when the user’s attention level reached a threshold. We placed the totem for this scene change on the bed knob; since the theme was flight, we used a small bird poised to take off from the knob. A light source aimed directly at the totem creates a strong glare that flickers when the user looks toward the bed, cueing them to approach and inspect the totem, where a raycast triggers an owl hoot and the change to the forest scene.
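The two sensor mappings in this scene are simple in outline. The real versions live in C# Unity scripts; this is only a Python sketch, and the 0–100 value range and the threshold of 70 are assumptions, not values from our project.

```python
def light_intensity(meditation: int, max_intensity: float = 1.0) -> float:
    """Map a ThinkGear meditation value (assumed 0-100) onto the room's
    light intensity: low meditation blacks the room out, high meditation
    lights everything up."""
    clamped = max(0, min(100, meditation))
    return max_intensity * clamped / 100.0

def owl_should_fly(attention: int, threshold: int = 70) -> bool:
    """Trigger the owl's flight once attention crosses an assumed threshold."""
    return attention >= threshold
```

In Unity the first value would be assigned to something like a Light component’s intensity every frame, and the second check would fire the owl’s flight animation once.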
The forest scene was based on growth. When the user enters it, their avatar has a bull’s head, and the trees grow once the attention reading from the ThinkGear sensor crosses a threshold. The lighting in this scene is again mapped to the meditation level to continue the shadow-dream simulation. Bull noises play in the background, along with grunting from a gorilla. The forest becomes more muddled here, with a mixture of animals meant to confuse the user, so we kept the same totem of a small bird on a bed knob for the raycasting; in this scene the change is signified by a bull noise.
Live Dino Playtesting Pictures:
- Even though we got a lens to shorten the distance you have to stand from the Kinect, it was still difficult to move around with the Oculus cords while keeping the ThinkGear sensor in range.
- Wearing the ThinkGear sensor and the Oculus headset at the same time was uncomfortable.
GitHub link with the exported project