I was awarded a fellowship to create an exhibit for the Scott Family Amazeum, a museum in Bentonville, AR.
The exhibit takes the form of a large box with a mirror mounted on one side. When you stand in front of the mirror, different effects are overlaid on your face, much like a Snapchat filter. You can also trigger environmental effects by opening your mouth or by touching the bucket in front of the mirror.
The piece tracks the location of the user's face with a depth camera. Effects are then displayed on the monitor at the correct (x, y) location by moving elements on screen, while the z axis is handled mechanically: the entire monitor slides back and forth on a pair of linear rails driven by stepper motors. The environmental effects are triggered by capacitive sensors attached to a bucket, with Arduinos controlling various LEDs and relays.
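The tracking-to-output pipeline can be sketched roughly as follows. This is a minimal illustration, not the exhibit's actual code: the screen resolution, camera field of view, steps-per-millimeter value, and home position are all hypothetical calibration numbers, and the function names are mine.

```python
# Sketch: map a tracked face position from depth-camera space to
# (a) pixel coordinates for the on-screen effects and
# (b) a stepper-motor step count for the z-axis rails.
# All constants are hypothetical calibration values.

SCREEN_W_PX, SCREEN_H_PX = 1920, 1080    # monitor resolution
CAM_FOV_W_MM, CAM_FOV_H_MM = 800, 450    # camera field of view at the mirror plane
STEPS_PER_MM = 40                        # stepper steps per mm of rail travel
Z_HOME_MM = 300                          # rail position when the motors are homed

def face_to_screen(face_x_mm, face_y_mm):
    """Convert a face position (mm, camera frame) to on-screen pixels."""
    px = int(face_x_mm / CAM_FOV_W_MM * SCREEN_W_PX)
    py = int(face_y_mm / CAM_FOV_H_MM * SCREEN_H_PX)
    # Clamp so effects never render off-screen.
    return max(0, min(SCREEN_W_PX - 1, px)), max(0, min(SCREEN_H_PX - 1, py))

def depth_to_steps(face_z_mm):
    """Convert the face's distance from the mirror to a stepper target."""
    return int((face_z_mm - Z_HOME_MM) * STEPS_PER_MM)

print(face_to_screen(400, 225))  # face at the center of the camera's view
print(depth_to_steps(450))       # face 450 mm from the mirror
```

In the real exhibit the step target would be streamed to the stepper drivers each frame, with the usual homing and limit-switch safeguards on the rails.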
A common question I get is why the monitor must physically move instead of simply scaling the image. If the monitor stayed put, the overlay would only line up from a single viewpoint; anyone standing elsewhere would see the effects offset from the user's face due to parallax. The eye's focusing mechanism also contributes to depth perception, so with a stationary monitor your reflection and the effects would sit in different focal planes, making the effects appear blurry and unrealistic.
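The geometry behind this is the plane-mirror rule: a mirror forms a virtual image as far behind the glass as the subject stands in front of it, so the monitor has to travel the same distance the viewer does to keep the effects in the reflection's focal plane. A toy calculation (distances are illustrative, the function name is mine):

```python
# Sketch of the plane-mirror depth rule: the virtual image sits as far
# behind the mirror as the viewer stands in front of it, so the monitor
# must match that depth. Distances are illustrative only.

def monitor_target_mm(viewer_distance_mm):
    """Monitor depth behind the mirror plane that matches the virtual image."""
    return viewer_distance_mm  # plane-mirror image depth equals object distance

# A viewer steps back from 500 mm to 700 mm, so the monitor must retreat 200 mm.
travel = monitor_target_mm(700) - monitor_target_mm(500)
print(travel)
```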