Importing 3D models of animal houses into Unity for VR content
Photogrammetry is a useful method for creating 3D models of real buildings and spaces, although it comes with many constraints: choosing suitable subjects and locations, shooting conditions, software processing performance, and reducing the size of the model data. Beyond the game and construction industries, it is also attracting attention in academic and educational fields as a way to archive 3D models of historical buildings and archaeological artifacts.
With permission from Higashiyama Zoo and Botanical Gardens (Nagoya City, Aichi Prefecture, Japan), we photographed the animal houses with handheld cameras from the public walkways and created 3D models using photogrammetry.
This article provides an overview of each step in the process. This was my first time doing photogrammetry over an area as large as a zoo, and I stumbled many times, so I'll share the points I noticed along the way. (If there's a better way, I'd be happy to hear it...)
Note that the deliverables described in this article were commissioned by the Kawaguchi Laboratory, Faculty of Engineering / Graduate School of Engineering, Nagoya University, which is how I was able to photograph inside Nagoya City Higashiyama Zoo and Botanical Gardens.
This time, the goal was to import the 3D models of the animal houses into Unity and use them for VR content, so we produced the models under 2 rules: “keep each fbx under 100,000 polygons” and “reduce data size by limiting textures to four 4K JPGs.” (The point is to keep the data size small while minimizing the loss of quality. There is still room for improvement on the “under 100,000 polygons” figure.)
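As a concrete illustration of the texture rule, here is a minimal Python sketch using the Pillow library that caps textures at 4096 × 4096 and re-encodes them as JPG. The folder names, the assumption of PNG sources, and the quality setting are my own hypothetical choices, not part of the actual pipeline.

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

SRC = Path("textures_raw")      # hypothetical input folder
DST = Path("textures_4k_jpg")   # hypothetical output folder
DST.mkdir(exist_ok=True)

for src in SRC.glob("*.png"):
    img = Image.open(src).convert("RGB")   # JPG has no alpha channel
    # Downscale anything larger than 4K; thumbnail() preserves aspect ratio.
    img.thumbnail((4096, 4096), Image.LANCZOS)
    out = DST / (src.stem + ".jpg")
    img.save(out, "JPEG", quality=90)      # quality is a size/fidelity trade-off
    print(out, img.size)
```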
The main production flow is: 1. Shooting, 2. Processing with photogrammetry software, 3. Model correction.
1. Shooting
In photogrammetry shooting, you need to photograph every spot you want to appear in the 3D model, shooting from multiple angles and circling the subject a full 360 degrees wherever possible. For areas where I wanted to bring out detail, I shot in stages at several distances: far, intermediate, and close. It also helps if you can shoot from above.
This time, to avoid stressing the animals, I photographed the area around each animal house from the public visitor walkways only, and did not shoot from high positions using a drone or a monopod.
Points to keep in mind when shooting:
・Use pan focus and make sure nothing is blurred (blur causes errors)
・Keep shadows and moving objects out of the frame
・Take distant shots that show the whole subject
・Shoot so that adjacent photos overlap by 60 to 80%, without taking too many (the basic rule seems to be “the more the better,” but this time, with data size in mind, I challenged myself to shoot with as few photos as possible; a rough way to estimate the spacing is sketched after this list)
・Shoot so that the photo sets connect at alignment
That's it.
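As a rough way to reason about the 60 to 80% overlap guideline, the lateral step between shots can be estimated from the camera's horizontal field of view. This is my own back-of-the-envelope geometry, not something from the article referenced below; it assumes a full-frame sensor and a hypothetical subject distance of 5 m.

```python
import math

def lateral_step(focal_mm, sensor_mm, distance_m, overlap):
    """Distance to move sideways between shots for a given frame overlap."""
    fov = 2 * math.atan(sensor_mm / (2 * focal_mm))   # horizontal field of view
    footprint = 2 * distance_m * math.tan(fov / 2)    # scene width in one frame
    return footprint * (1 - overlap)

# Full-frame camera at 24 mm (camera (1) below), subject 5 m away, 70% overlap:
print(f"{lateral_step(24, 36, 5.0, 0.70):.2f} m between shots")  # -> 2.25 m
```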
I referred to Ryu Lilea's article: How “Zeniarai Benten VR” is made with photogrammetry (Part 1: Photography Edition)
The animal houses were photographed with 2 sets of equipment and settings. The conditions and equipment settings on the shooting day were as follows.
Weather: Cloudy with occasional sunny spells
Shooting time: 10:30 to 16:30, in January (being winter, the sun starts to sink from around 15:00)
Shooting equipment and settings:
Camera ①: Canon EOS 5D, focal length 24 mm, f/9 to f/11
Camera ②: Sony α7R IV, focal length 16 mm to 20 mm, f/9 to f/13
(Lenses were swapped for some shots; 50 mm and 70 mm were also used)
The number of photos varied with the size and structure of each animal house. Since almost everything was shot from eye height, the photo counts are on the low side; for example, I took 1,481 photos of the gorilla and chimpanzee house, 575 of the Asian elephant house, 344 of the polar bear house, and 108 of the Indian rhino house. Animal houses with complicated structures were shot separately for the exterior and the interior viewing passages, so they came to over 1,000 photos.
2. Photogrammetry software processing
For the photogrammetry software, we used Reality Capture.
2.1 Preprocessing
You can use the captured photo data as-is, but to get more accurate camera position estimation, meshes, and textures from the photogrammetry software, preprocessing is recommended: deleting EXIF tags, and adjusting exposure, chromatic aberration, and noise with image processing software such as Lightroom.
This time, for areas where a lot of sky appeared in the photos, color adjustment increased the number of junk sky polygons, so in some cases we proceeded with the JPGs exactly as shot.
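For the EXIF-stripping part of the preprocessing, a minimal Python sketch using the piexif library might look like the following (the folder names are hypothetical; exposure, chromatic aberration, and noise adjustments are better left to Lightroom or similar).

```python
from pathlib import Path
import piexif  # pip install piexif

SRC = Path("photos_raw")       # hypothetical folder of camera JPGs
DST = Path("photos_noexif")    # hypothetical output folder
DST.mkdir(exist_ok=True)

for src in SRC.glob("*.jpg"):
    out = DST / src.name
    # piexif.remove strips the EXIF segment without re-encoding the image,
    # so the pixel data (and therefore reconstruction quality) is untouched.
    piexif.remove(str(src), str(out))
    print("stripped:", out)
```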
2.2 Create models with Reality Capture
A lot of know-how is documented in the Reality Capture help and community Q&A. @jyouryuusui's “Get started with photogrammetry with the RealityCapture Quick Start” and the Japanese explanations in the Reality Capture help window are also helpful. The basic procedure is to load the photos and process them following the workflow.
Following the 2 rules of “keep each fbx under 100,000 polygons” and “reduce data size by limiting textures to four 4K JPGs,” we went through trial and error and finally settled on the following steps.
① Alignment and component creation: when the alignment splits into multiple components, use merging and control points to build a single good component. Parts that can be predicted from the start to come out separate are handled in a separate project from the alignment stage onward.
② Setting the reconstruction area: narrow it down to the area needed for the model to reduce calculation time.
③ Model creation: create with Normal Detail at the default settings.
④ Polygon reduction: use Reality Capture's Filter Selection and Simplify tools. Select and delete unnecessary polygons with Filter Selection, then gradually reduce the vertex and polygon counts with the Simplify tool.
⑤ Texture generation: once the model is generally clean (around 30 million polygons), run the smoothing tool once, then generate the texture. The higher the polygon and vertex counts, the more detailed the generated texture, so this texture is later applied to a further-reduced model by retexturing.
⑥ Reduce the polygon count to under 100,000. Use the smoothing tool again to smooth the model surface, then apply the texture created in ⑤ to the final model with the retexturing tool.
⑦ Model export: export in .obj and .jpg formats (a quick polygon-count check is sketched below).
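To sanity-check the “under 100,000 polygons” rule on an exported .obj, a short Python sketch like this can count vertices and faces (the file name is hypothetical; photogrammetry meshes are triangulated, so each `f` line is one triangle).

```python
from pathlib import Path

OBJ = Path("elephant_house.obj")  # hypothetical export from Reality Capture

verts = faces = 0
with OBJ.open() as f:
    for line in f:
        if line.startswith("v "):       # vertex positions (not "vt"/"vn")
            verts += 1
        elif line.startswith("f "):     # one face record per line
            faces += 1

print(f"{OBJ.name}: {verts:,} vertices, {faces:,} faces")
assert faces < 100_000, "over the Unity budget; simplify further"
```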
2.3 Case Study: Making a model for an Asian elephant house
When the 579 JPG files were loaded and the alignment was created, a component came out with the angle of the ground at the back right misaligned. I exported the back-right component and the other components separately and ran alignment on them in a new project, but they did not merge well; by adding control points I was able to create and merge the alignment. I set a reconstruction area for the resulting component (575 photos), which produced a model with a triangle count of 299.33 million polygons and a vertex count of 150 million points, so polygon reduction started from there.
Reducing roughly 300 million polygons straight down to 100,000 turns the model into a rounded blob. To avoid this, I first used the Selection Tool to select unneeded areas and ground that could reasonably be replaced by flat polygons, and deleted them with Filter Selection. I then used the Simplify Tool. It offers 3 methods: ① absolute, which reduces to a specified polygon count; ② relative, which reduces to a specified percentage of the original count; and ③ maximum of absolute and relative, which applies whichever of the two gives the larger result. The Simplify tool can also process only a selected area, so you can preserve the details you want to keep. When the initial polygon count was large, the finish was cleaner if I simplified in several passes rather than all at once, reducing to under 100,000 polygons only at the end.
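As a back-of-the-envelope illustration of why several passes are gentler than one (my own arithmetic, not a Reality Capture feature): going from 300 million to 100,000 polygons in a single relative pass keeps only about 0.03% of the triangles, while the same overall reduction spread over four passes keeps about 13.5% per pass.

```python
# Per-pass "relative" keep ratio needed to go from START to TARGET in n passes.
START, TARGET = 300_000_000, 100_000

for n in (1, 2, 4, 8):
    keep = (TARGET / START) ** (1 / n)   # n-th root of the overall ratio
    print(f"{n} pass(es): keep {keep:.2%} of polygons per pass")
```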
Also, keeping the entire Asian elephant house under 100,000 polygons would inevitably have made the model rough, so in the end we divided it into 7 parts and then reduced the polygon count of each. (I suspect there is a better way to cut it up.) For the division, I cut out the model 7 times using the reconstruction area settings.
3. Model correction
I used Blender 2.8. In Blender, I removed unnecessary polygons that were difficult to delete in Reality Capture, corrected distorted parts, and so on. Finally, the model is exported in .fbx format.
It is also possible to reduce the polygon count in Blender and then return to Reality Capture to generate the texture again. The important caveat is that Reality Capture will raise an import error if the .obj file is no longer consistent with the polygon IDs held in the accompanying file (.obj.rcinfo). (It seems that once you merge or split the mesh, you can no longer go back to Reality Capture.)
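For the Blender cleanup itself, a minimal bpy sketch along these lines (written against Blender 2.8's Python API; the file paths are hypothetical) imports the OBJ, deletes loose geometry, and exports the FBX:

```python
import bpy

# Import the Reality Capture export (path is hypothetical).
bpy.ops.import_scene.obj(filepath="/path/to/elephant_house.obj")
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# Delete loose vertices/edges left over as photogrammetry noise.
bpy.ops.object.mode_set(mode="EDIT")
bpy.ops.mesh.select_all(action="SELECT")
bpy.ops.mesh.delete_loose()
bpy.ops.object.mode_set(mode="OBJECT")

# Export for Unity (path hypothetical). Note that editing topology like this
# breaks consistency with the .obj.rcinfo polygon IDs, so the result cannot
# go back into Reality Capture for retexturing.
bpy.ops.export_scene.fbx(filepath="/path/to/elephant_house.fbx",
                         use_selection=True)
```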
This concludes my introduction to the animal house 3D model production flow using photogrammetry.
We will continue to build up know-how on production efficiency through photogrammetry trial and error, and write it up here.