Department of Infrastructure Engineering
GEOM90038 Advanced Imaging
Lab Assignment 1: image-based 3D modelling
Due for submission at 10:00 pm on Friday of Week 3
Note: This assignment can be carried out in a group. You can choose your group member (no more than 2 members per group) and work together on the assignment. However, each student must submit an individual report outlining their method and results. It is recommended that you use a camera rather than a phone for photography.
The task
The aim of this assignment is to reconstruct a 3D model of a man-made object with planar surfaces, e.g. a building, from a set of images. Your tasks include camera calibration, image acquisition, camera orientation, accuracy analysis, and generation of a textured wireframe model of the selected object. To carry out these tasks you will use software called iWitness, which is installed on the lab computers. You can also download the software from the link below and install it on your personal computer.
http://www.photometrix.com.au/downloads/iWitnessPRO_STUDENT/iWitnessPRO_4.105_STUDENT_DEMO_x64.exe
The user manual of the iWitness software is available on LMS. You are also provided with a PDF file containing 20 B&W (black and white) codes used for camera calibration, which is a prerequisite step to conducting 3D measurement of objects with iWitness. Figure 1 shows an example 3D model of a building created from a set of images in iWitness.
Figure 1. A screenshot of the iWitness interface showing a textured 3D model of a building reconstructed from images.
The procedure
1. Camera calibration. In this step, you are required to calibrate your camera using the automatic calibration tool in iWitness. Details of this step can be found in Section C2 of the manual (iWitness Manual.pdf on LMS). You will need to set up the camera and target layout with the 20 black and white coded targets (iWitness_B&W_CalCodes.pdf on LMS) for the automatic calibration. It is recommended that you turn off “auto-rotate”, avoid refocusing or zooming while photographing the target layout, set the focus to infinity, and shoot at the highest resolution. Once the calibration is finished, check its accuracy. If the internal accuracy of referencing exceeds 0.25 pixels, you will have to redo the process, perhaps with more images (see the RMS sketch after this procedure). Save the calibration parameters in your project and record the accuracy for your report.
2. Image acquisition. Capture a sufficient number of images covering all sides of the selected object (e.g., a building). Make sure there is sufficient overlap between each pair of consecutive images. Remember to measure a few distances on the real object using a measuring tape so that you can scale your model later.
3. Camera orientation. Perform relative orientation of the first two images via the point marking and referencing operation in iWitness (Section B3.1 of the iWitness manual). You should use at least 6 points and check the Total RMS error, which is displayed at the bottom right of the 3D list. If the total RMS error of the image marking/referencing exceeds 1.5 pixels, you will have to redo the orientation, perhaps with more points or by picking the marking/referencing points more carefully. Orient the third and subsequent images using the referencing tool (Section B3.4 of the manual). Use review mode (Section B7) to check your point references and correct the point positions if necessary.
4. Scaling the model. Apply the correct scale to your model by entering the actual distances between one or more pairs of points on the real object as the distances between their corresponding points in the 3D model. The details of this process can be found in Section B4.1 of the iWitness manual. Remember to set a suitable project unit (which can be changed at any time). A scale-factor check is sketched after this procedure.
5. Accuracy evaluation. Export the 3D coordinates of all points and the corresponding standard errors to a .txt or .csv file (Section B5.1). Analyse the accuracy of the camera orientation based on the standard errors of the point coordinates and include your analysis in the report (a sketch for summarising the exported standard errors is given after this procedure). Verify your results against the project status summary (Section B9).
6. 3D modelling. Create a 3D wireframe model of the object by drawing polylines connecting vertices of planar surfaces (Section B11).
7. Texture mapping. Map texture onto each planar polyline in the 3D view (Section B12). Export your textured 3D model as a .vrml file.
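A note on the accuracy criteria: the 0.25-pixel (calibration, step 1) and 1.5-pixel (orientation, step 3) thresholds refer to root-mean-square (RMS) values of the image-referencing residuals, which iWitness computes and displays for you. The following minimal Python sketch only illustrates how such an RMS figure is formed; the residual values are hypothetical and not taken from iWitness.

    import math

    # Hypothetical image-referencing residuals in pixels (for illustration only;
    # iWitness computes and displays the RMS value itself).
    residuals_px = [0.12, 0.08, 0.21, 0.15, 0.10, 0.18]

    # Root-mean-square of the residuals: the quantity compared against the
    # 0.25-pixel (calibration) and 1.5-pixel (orientation) thresholds.
    rms_px = math.sqrt(sum(r * r for r in residuals_px) / len(residuals_px))
    print(f"RMS referencing error: {rms_px:.3f} px")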
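Steps 2 and 4 rely on taped distances to scale the model. iWitness applies the scale when you enter the taped distance between two referenced points; the sketch below, using hypothetical numbers, simply shows how the implied scale factor can be derived and checked against a second taped distance that was not used for scaling.

    # Hypothetical distances: metres for the tape, model units for the unscaled model.
    taped_ab = 3.420    # distance A-B measured on the real object with the tape
    model_ab = 1.235    # distance A-B read from the unscaled 3D model

    scale = taped_ab / model_ab   # implied scale factor

    # Independent check with a second taped distance not used for scaling.
    taped_cd = 5.110
    model_cd = 1.842
    print(f"Scale factor: {scale:.4f}")
    print(f"Check C-D: predicted {model_cd * scale:.3f} m vs taped {taped_cd:.3f} m")

Reporting such a check distance is a simple way to support your accuracy analysis in the report.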
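For step 5, the exported .txt/.csv file can be summarised with a short script. The sketch below assumes the export contains columns named sX, sY and sZ for the coordinate standard errors and is saved as points_export.csv; both the column names and the file name are assumptions, so check the header of your actual export.

    import csv
    import math

    sx, sy, sz = [], [], []
    with open("points_export.csv", newline="") as f:   # hypothetical file name
        for row in csv.DictReader(f):
            sx.append(float(row["sX"]))   # assumed column names; adjust to your export
            sy.append(float(row["sY"]))
            sz.append(float(row["sZ"]))

    def rms(values):
        return math.sqrt(sum(v * v for v in values) / len(values))

    for label, s in (("X", sx), ("Y", sy), ("Z", sz)):
        print(f"{label}: mean {sum(s)/len(s):.4f}, RMS {rms(s):.4f}, max {max(s):.4f}")

The mean, RMS and maximum standard errors per axis give you concrete numbers to discuss in the Results section.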
Submission
Write an individual report outlining the process and your results. Include the following content:
a. Introduction: describe the aims and your general approach to the assignment.
b. Methods: explain the method you followed to perform each of the tasks of the project.
c. Results: report the results including the accuracy of camera calibration and orientation. Include screenshots of your textured 3D model. Provide an analysis of the results and discuss if/how the results can be improved.
d. Conclusions: provide a summary of your findings and what you learned about image-based modelling.
e. References: provide a list of references if your text includes any information from other sources.
Submit a digital version of your report via LMS, in PDF format only.
Marking rubric
Appropriate length and proper formatting 5%
Proper introduction 10%
Proper description of the method 15%
Proper analysis of the results 20%
Accuracy of the camera calibration/orientation 20%
Quality of the model 20%
Logical conclusions 10%