This is a project we made for an AR course. Large interactive screens are useful but usually expensive, so this project explores using relatively cheap hardware to turn an ordinary display surface into one with multi-touch support. The idea is to use one (or more) RGB cameras to detect where the screen is occluded. It is built in Unity, with the image processing done on the GPU to reach real-time performance.
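The core of the occlusion step can be sketched as comparing each camera frame against a reference image of the unoccluded screen and thresholding the per-pixel color difference. This is a minimal illustrative sketch, not the project's actual code: the function name, the frame representation (grids of (r, g, b) tuples), and the threshold value are all assumptions, and the real implementation runs on the GPU rather than in Python.

```python
def detect_occlusion(reference, frame, threshold=60):
    """Flag pixels whose color differs enough from the reference frame.

    reference, frame: 2-D grids (lists of rows) of (r, g, b) tuples.
    threshold: illustrative sum-of-absolute-differences cutoff.
    Returns a same-sized grid of booleans: True = likely occluded.
    """
    mask = []
    for ref_row, cur_row in zip(reference, frame):
        row = []
        for (r0, g0, b0), (r1, g1, b1) in zip(ref_row, cur_row):
            diff = abs(r1 - r0) + abs(g1 - g0) + abs(b1 - b0)
            row.append(diff > threshold)
        mask.append(row)
    return mask
```

In practice the reference image changes with the screen content, which is why the project also needs color calibration; a fixed reference as shown here would only work for a static background.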
The blue circles mark where an object is detected on top of the surface. The video below shows the program in action, along with part of the physical setup. It uses homography to map camera pixels to screen coordinates, and color calibration to separate occluding objects from the content shown on the screen.
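The homography step amounts to applying a 3x3 projective transform to each detected camera point. As a minimal sketch (the function name is illustrative, and in practice the matrix would be estimated from corresponding corner points during calibration, e.g. with OpenCV's `cv2.findHomography`):

```python
def apply_homography(H, x, y):
    """Map a camera pixel (x, y) to screen coordinates via a 3x3 homography H.

    H is a 3x3 matrix given as nested lists. The result is obtained by
    multiplying H with the homogeneous point (x, y, 1) and dividing by
    the resulting w component.
    """
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    sx = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    sy = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return sx, sy
```

With the identity matrix the point is unchanged; a calibrated matrix would warp the camera's oblique view of the screen into the screen's own coordinate system.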