Photogrammetry for Projection Mapping

Projection Mapping

We will cover the basics of how photogrammetry works and why we should use it for projection mapping.

Today I'm going to show you what photogrammetry is and how we can use it in our next projection mapping project. Whenever we start a 3D projection mapping design, we need a solid understanding of the surface we're going to be projecting onto. One common way to get this is to use a laser scanner to create a point cloud of the surface. These devices use a spinning laser to determine how far each point is from the sensor, and they may also use an RGB camera to map color data onto each of those points. You can then take that point cloud and convert it into a mesh. From that point on, it is fairly easy to apply any textures we want to the surface, as well as add effects. Unfortunately, these scanners are not very accessible: they are often extremely expensive, costing tens of thousands of dollars, and they are quite bulky and can be difficult to use.

Point Cloud
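
As a rough illustration of that point-cloud-to-mesh step, here is a minimal Python sketch using the open-source Open3D library. This is my own choice for illustration rather than any particular scanner's workflow, and the file names are hypothetical.

```python
# Minimal sketch: turn a scanned point cloud into a mesh with Open3D.
# "scan.ply" is a hypothetical file name; install with: pip install open3d
import open3d as o3d

# Load the scanned points (XYZ positions, optionally RGB per point).
pcd = o3d.io.read_point_cloud("scan.ply")

# Surface reconstruction needs per-point normals, so estimate them first.
pcd.estimate_normals()

# Poisson reconstruction fits a surface to the points;
# a higher depth gives more detail but runs slower.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Save the mesh so it can be textured and used for projection mapping.
o3d.io.write_triangle_mesh("scan_mesh.obj", mesh)
```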


Thankfully, there is a much more accessible technology called photogrammetry. To better understand how photogrammetry works, let's take a brief look under the hood. Imagine that we have a set of three photos of the same object. If we try to line them up, we will see that we cannot make them match no matter how hard we try, because they were taken from different angles.

Image Angles


What photogrammetry does is use a computer algorithm to determine how these photos were taken in relation to each other. With this information, it can calculate the three-dimensional mesh that must have been present to produce these three photos, and it then outputs that mesh with the RGB data applied to it. This is very useful to us because we only need a camera to capture the object we want to scan into the computer. To convert our photos into a 3D model, we are going to need photogrammetry software.

Resulting Mesh
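
To make that reconstruction step concrete, here is a minimal Python sketch of the same idea for just two photos, using OpenCV: match feature points between the images, recover the relative camera pose, then triangulate the matches into 3D points. This is an illustrative assumption on my part, not the exact pipeline any particular photogrammetry package runs; real tools repeat this across many photos and then densify the result. The file names and camera matrix K are hypothetical.

```python
# Sketch of the core photogrammetry idea for two photos using OpenCV.
import cv2
import numpy as np

img1 = cv2.imread("photo1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Find distinctive feature points in each photo and match them.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. From the matches, work out how the two cameras were positioned
#    relative to each other (rotation R and translation t).
K = np.array([[1000.0, 0, 640],   # assumed camera intrinsics
              [0, 1000.0, 360],
              [0, 0, 1]])
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# 3. Triangulate each match into a 3D point; a dense cloud of these
#    is what later becomes the textured mesh.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
print(pts3d.shape)  # one 3D point per matched feature
```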


If you are on Windows, you want to look at AliceVision, which is an open-source piece of software. If you are using a Mac like I am, you want to use Apple's Object Capture. Object Capture is a great photogrammetry tool that was introduced in recent macOS updates. It also takes advantage of the Apple Neural Engine included with the new M1 line of chips, making photogrammetry extremely fast.

Now let's talk about how to take our photos. We don't need any fancy camera; in fact, I just use my cell phone's camera. There are a couple of things to watch out for when taking these photos. We want to avoid surfaces with excessive reflections, such as mirrors. We also want to make sure the lighting is as flat as possible: if you are taking photos outside, try to do it on an overcast day, and if you are indoors, make sure there are no bright light sources that might make the camera expose incorrectly. With photogrammetry, the more photos the better, because more photos give the algorithm more data points for reconstructing our surface. We want to capture the object from multiple angles and multiple depths, making sure there is not too much deviation in position from the previous photo.

Now that we have our photos, let's drop them into a folder on our computer. I will leave a link below to AliceVision as well as to a program called PhotoCatch, which uses Apple's Object Capture. After opening PhotoCatch, we need to click on "Select a folder of images." After we select our folder of images, PhotoCatch will perform the photogrammetry process. Once the process is complete, select OBJ as the file type and click Save.

PhotoCatch


The photogrammetry scan is not going to be very clean, so we need to do some post-processing on it. I will be using Blender, an open-source 3D modeling program. We just need to import our OBJ file into the scene. Next, we add a Smooth modifier and change the factor to 1.5, which makes the surfaces of the scan less rough. At this point you will want to do much more cleanup of the scan, and there are many good tutorials online on how to do so. Since I was short on time, I only used the Smooth modifier, so my scan will look a little rough.

Blender Smoothing
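
If you prefer scripting to clicking through the UI, the same steps can be sketched with Blender's Python API. This is a hedged equivalent of the manual steps above; the file name and iteration count are assumptions.

```python
# Sketch of the same cleanup step as a Blender Python script.
# Run inside Blender; "scan.obj" is a hypothetical file name.
import bpy

# Import the photogrammetry scan (Blender 3.x+ OBJ importer).
bpy.ops.wm.obj_import(filepath="scan.obj")
obj = bpy.context.selected_objects[0]

# Add a Smooth modifier and raise its factor to relax the rough surface.
mod = obj.modifiers.new(name="Smooth", type='SMOOTH')
mod.factor = 1.5
mod.iterations = 5  # assumed value; more iterations = smoother result
```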


Now that we have our smoothed object, we can proceed with traditional 3D projection mapping. If you would like to learn more about 3D projection mapping, check out my previous video here. For this demonstration I simply used Unity acting as a 3D projection mapping software. I went ahead and did a couple of different effects using the 3D object. First, I added a directional light to the scene and changed its angle to make it look like the time of day was changing. Second, I added a point light and moved it around the scene to see how it would interact with the 3D geometry. I then added a 3D sphere and showed how it would interact with shadows on the surface as well as pass through our 3D object. Lastly, I decided to play around with some particle effects.

Unity

End Result