The project combines an LCD flat-panel screen with force sensors and a robotic arm that moves it backwards and forwards. By controlling how much resistance a user's fingertip meets, the firm says it can simulate the shape and weight of objects shown on screen. Microsoft says the device could have medical as well as gaming applications.
Work on the project is being carried out at the firm's Redmond campus near Seattle. When a person touches the prototype it pushes back with a light force, ensuring their finger stays in contact with the screen.
If they then press against it, the robotic arm instantly pulls the screen backwards in a matching smooth movement. If they start to retract their finger, the arm moves the screen back towards them. Meanwhile a computer adjusts the size and perspective of the on-screen graphics to create a 3D effect.
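The behaviour described above resembles a simple force-servo loop: the arm moves the screen so the measured contact force stays at a light setpoint, so pressing harder drives the screen back and easing off brings it forward again. A minimal sketch of that idea, with entirely hypothetical setpoint and gain values (this is not Microsoft's code):

```python
# Illustrative force-tracking sketch: the screen chases a light contact-force
# setpoint. All constants are made-up values for demonstration only.

SETPOINT_N = 0.3     # light force keeping the finger in contact (assumed)
GAIN_MM_PER_N = 2.0  # hypothetical gain: mm of screen travel per newton of error

def step(screen_pos_mm: float, measured_force_n: float) -> float:
    """One control update: move the screen to restore the force setpoint.
    Larger positions mean the screen is further from the user."""
    error = measured_force_n - SETPOINT_N
    return screen_pos_mm + GAIN_MM_PER_N * error  # press harder -> screen retreats

pos = 0.0
pos = step(pos, 1.3)  # finger presses hard: screen moves back
pos = step(pos, 0.1)  # finger eases off: screen comes forward again
```

A real controller would run at a high rate with damping and safety limits; this only shows the direction of the response described in the article.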
The trick to simulating a physical sense of touch is to adjust the amount of force-feedback resistance. So, in an application showing graphics representing different square blocks on a ledge, a "stone" block requires a relatively large amount of force to push off while a "sponge" block requires far less.
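One way to picture this material-dependent resistance is as a per-material stiffness: the further the finger pushes the screen in, the harder the arm pushes back, and the stiffness constant decides whether the object feels like stone or sponge. A hedged sketch with invented numbers (not Microsoft's implementation):

```python
# Illustrative sketch: resistance felt by the fingertip depends on the
# on-screen material. Stiffness values are hypothetical.

STIFFNESS_N_PER_MM = {
    "stone": 4.0,   # hard: large force needed to displace the block
    "sponge": 0.5,  # soft: the same displacement needs far less force
}

CONTACT_FORCE_N = 0.3  # light baseline force keeping the finger on the screen

def resistance(material: str, displacement_mm: float) -> float:
    """Force (N) the arm pushes back with at a given screen displacement."""
    return CONTACT_FORCE_N + STIFFNESS_N_PER_MM[material] * displacement_mm

# Pushing the screen in by 5 mm meets very different resistance:
print(resistance("stone", 5.0))   # stone resists strongly
print(resistance("sponge", 5.0))  # sponge gives way easily
```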
In addition, the kit can be used to provide a sense of shape by adjusting the screen's position to match a virtual object's contours as a person drags their finger over its surface.
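Following a contour can be pictured as driving the screen to the virtual surface's depth under the finger as it moves. A minimal sketch for one shape, a hemispherical bump standing in for the virtual ball; the geometry and dimensions are assumptions for illustration, not details from the project:

```python
# Illustrative contour-following sketch: as the finger is dragged across a
# virtual bump, the arm would drive the screen to the surface depth under it.
import math

def dome_depth_mm(x_mm: float, radius_mm: float = 40.0) -> float:
    """Screen displacement for a hemispherical bump centred at x = 0:
    the circle's height at horizontal offset x, and 0 outside the bump."""
    if abs(x_mm) >= radius_mm:
        return 0.0
    return math.sqrt(radius_mm ** 2 - x_mm ** 2)

def screen_targets(finger_path_mm):
    """Target screen positions for each finger position along a drag."""
    return [dome_depth_mm(x) for x in finger_path_mm]

# Dragging across the bump: the screen rises towards the finger over the
# centre and falls away again at the edges.
print(screen_targets([-50.0, -40.0, 0.0, 40.0, 50.0]))
```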
"As your finger pushes on the touchscreen and the senses merge with stereo vision, if we do the convergence correctly and update the visuals constantly so that they correspond to your finger's depth perception, this is enough for your brain to accept the virtual world as real," said senior researcher Michael Pahud.
His team have used the technique to allow users to feel the shape of a virtual cup and ball, among other objects, while viewing them using special glasses to get a stereo-vision effect.