Nov 08, 2021 Experiments with Photogrammetry
By TJ Ferrill
Lately I have been experimenting with photogrammetry on the newly installed GIS/Visualization lab computers in ProtoSpace 2751 to better understand its possibilities and limitations using free software. Photogrammetry is the process of reconstructing 3D geometry from a series of still images. The Marriott Library’s GIS/Visualization lab is in ProtoSpace on level 2 and is a great resource for our students and faculty. The lab enables us to host technology workshops that require more computing power than the laptops and mobile devices we typically carry around with us can provide.
This is where photogrammetry comes in. It is a promising method for creating a 3D model from still images, but it requires a lot of computational power. Taking the idea a bit further, still images can be extracted from video, giving a simple pipeline from video to 3D model. I have been experimenting with this process on different videos, with varying degrees of success. In this post I’d like to share some of the takeaways. But first, a rundown of the process itself.
Starting with a video, extract frames using VirtualDub. Depending on the length of the video you may want to take every other frame, every 5th frame, or every 50th frame. The right interval depends on how quickly the camera is moving, but keep in mind that more frames generally result in a better model, at the cost of more processing time. I should also note that VirtualDub requires a plugin in order to read MP4 video files.
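If you prefer to script this step instead of using VirtualDub, the same every-Nth-frame extraction can be done with OpenCV in Python. This is just a minimal sketch for illustration; the file name, output folder, and frame step are assumptions you would adjust for each video.

```python
# Minimal sketch: extract every Nth frame from a video with OpenCV.
# The input path, output folder, and step size are placeholders -- adjust per video.
import os
import cv2

VIDEO_PATH = "input_video.mp4"  # hypothetical file name
OUT_DIR = "frames"
STEP = 5  # keep every 5th frame; a larger step means fewer images and faster processing

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)

frame_index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    if frame_index % STEP == 0:
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{frame_index:06d}.png"), frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} of {frame_index} frames to {OUT_DIR}/")
```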
Once the frames have been extracted to images, drop all of them into a new project workspace in Meshroom. From there, right-click the “Texturing” node at the bottom of the screen and choose “Compute” (Meshroom will prompt you to save if you haven’t already, so save the project and then click Compute again). After several minutes (or hours, depending on your photo set and the computer you use), you will end up with a textured mesh in OBJ format. Models usually require some cleanup in Blender or another 3D editing program before they can be used in your next workflow.
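For unattended or batch runs, Meshroom also ships a command-line entry point (meshroom_batch in recent releases) that runs the same pipeline without the GUI. The sketch below simply wraps that call in Python; the executable name and flags can vary between Meshroom versions, so treat them as assumptions and check meshroom_batch --help on your install.

```python
# Hedged sketch: run the Meshroom pipeline headless on a folder of frames.
# The executable name and flags are assumptions -- verify them against your
# Meshroom version (meshroom_batch --help) before relying on this.
import subprocess

frames_dir = "frames"        # folder of extracted frames (see sketch above)
output_dir = "meshroom_out"  # where the textured OBJ ends up

subprocess.run(
    ["meshroom_batch", "--input", frames_dir, "--output", output_dir],
    check=True,  # raise an error if Meshroom exits unsuccessfully
)
```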
Example outputs:
Oquirrh Mountains with clouds. Video taken from a flight into SLC International. 235 images. Processed for 90 minutes.
Rock on laptop. Video taken at Marriott Library. 171 images. Processed for 1 hour.
Man seated. Video taken at Marriott Library. 550 images. Processed for 3 hours, 30 minutes.
Disk space is another consideration, and it adds up quickly with more images and more projects. On average, a project seems to grow by roughly 1.25GB for every 100 input images, so a 200-image project can be expected to require about 2.5GB of disk space. Once the project is done, though, the final result is much smaller (usually less than 100MB).
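That rule of thumb is easy to turn into a quick back-of-the-envelope check before kicking off a big batch. The 1.25GB per 100 images figure is just my observed average, so treat the estimates accordingly; the image counts below are from the example outputs above.

```python
# Back-of-the-envelope disk estimate based on the ~1.25 GB per 100 images
# average observed above; actual usage varies with resolution and settings.
GB_PER_100_IMAGES = 1.25

def estimated_project_gb(num_images: int) -> float:
    return num_images / 100 * GB_PER_100_IMAGES

for n in (171, 235, 550):
    print(f"{n} images -> roughly {estimated_project_gb(n):.1f} GB of working space")
```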
There are still questions I would like to investigate, such as: what is the benefit of higher-resolution images? What is the rate of diminishing returns for additional images? It would be interesting to run identical workflows on image sets derived from high-resolution photos versus image sets extracted from video.
I will continue chasing answers to these and other questions as time goes on; it’s fairly common to hear from patrons who want to know what options they have for 3D scanning and capture, so nailing down precisely what can be done with photogrammetry is a priority. Next steps include putting together a more complete written tutorial, as well as a video tutorial that walks through the steps I have outlined here. There will also be a Digital Matters workshop in Spring ’22 that covers this topic in greater detail. Stay tuned!
Best wishes to all,
TJ
TJ Ferrill | Assistant Head of Creative Spaces
Creativity & Innovation Services / Creative Spaces
thomas.ferrill@utah.edu