Processing Oblique UAS Imagery Using Image Annotation
Introduction:
Fittingly for a geography- and GIS-associated class, all projects up to this point have focused on bird's-eye-view, or nadir (directly below the sensor), UAS remote sensing data. That data was then processed into DSM and orthomosaic raster images, which could be brought into Esri software and other packages for mapping and further processing, such as value-added data analysis.
In this project, however, oblique imagery is used in conjunction with image annotation to create a stand-alone three-dimensional model of a specific object. The 3D Models processing template is chosen instead of the 3D Maps or Ag Multispectral options. Image annotation refers to the user-defined specification of the object of interest in the pictures from which the three-dimensional model is created. In practice, the user picks out each software-found set of pixels (grouped by similar color) that belongs to the background rather than to the object of interest.
After a sufficient number of images are annotated, covering many angles of the object, a 3D model can be built by Pix4D. These models are used for building and tower inspection, among other applications. More fields are adopting these models every year due to the relative ease and low cost of UAS flight, image capture, and processing.
Before processing, images needed to be captured. These were captured low to the ground at various heights, always pointed at the object, with 360 degrees of coverage at each height. The Pix4D manual gives guidance on capturing oblique imagery for 3D models: it suggests flying around a building with a downward camera angle of 45 degrees first, then again at a higher altitude with a downward angle of 30 degrees. For best results, it recommends flying at various heights in between as well.
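The two-orbit pattern described above has a simple geometric consequence: the downward camera angle and the height above the object together determine how far out each orbit should be flown. The sketch below illustrates that relationship with basic trigonometry; it is not taken from the Pix4D manual, and the function names and parameters are hypothetical.

```python
import math

def orbit_radius(height_above_target_m, camera_tilt_deg):
    """Horizontal distance at which a camera tilted camera_tilt_deg
    below horizontal points at the target from the given height."""
    return height_above_target_m / math.tan(math.radians(camera_tilt_deg))

def orbit_waypoints(center_xy, height_above_target_m, camera_tilt_deg,
                    n_points=12):
    """Evenly spaced (x, y) waypoints on a circle around the object."""
    r = orbit_radius(height_above_target_m, camera_tilt_deg)
    cx, cy = center_xy
    return [
        (cx + r * math.cos(2 * math.pi * i / n_points),
         cy + r * math.sin(2 * math.pi * i / n_points))
        for i in range(n_points)
    ]

# At a 45-degree tilt the orbit radius equals the height above the target;
# the shallower 30-degree pass from a higher altitude sits farther out.
print(orbit_radius(10.0, 45.0))
print(orbit_radius(10.0, 30.0))
```

This also shows why the manual's shallower second pass is flown higher: at 30 degrees the same height above the object yields a wider orbit, capturing more of the object's sides.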
Figure 1
Methods:
After obtaining the sets of UAS-captured images from the instructor of the course, each set was processed with Pix4D twice: once with annotation performed before the second processing step was run, and once with no annotation at all.
A new project was made six times. Each time, the project was given a descriptive name, then the directory containing the desired images was selected with the Add Directories... button in the Select Images step of the wizard. Next, the camera settings were changed in the Image Properties step of the wizard: the edit button was clicked (highlighted in Figure 2), then the edit button in the resulting window was clicked, and the Shutter Model dropdown was set to Linear Rolling Shutter (buttons highlighted in Figure 3). The spatial reference information was not changed; however, in the last step of the wizard, the 3D Models template was chosen instead of the options used in past projects (Figure 4).
Figure 2
Figure 3
Figure 4
From here, initial processing was started by making sure that only the Initial Processing box was checked in the main interface window of the software, then clicking Start (Figure 5).
Figure 5
After this finished processing, annotation could begin. Four to six images covering a variety of angles on the object were selected (only calibrated images were chosen, because uncalibrated images can be annotated but their annotation is not processed), and each was annotated individually. An image was clicked on and viewed in the selection section on the right of the screen, the annotation button was clicked (highlighted in Figure 6), and the annotation process began. After annotating, the Apply button that appears where the Disable button is at the bottom of Figure 6 was clicked, and the process was repeated with the next image.
Figure 6
Figure 7
In annotation, a mask is applied around the object of focus. For example, in Figure 7 the mask (the area colored purple) has been applied around part of the image. The mask is made up of small sections of like-colored pixels chosen by the program. Each section can be selected or deselected with a click, and zooming in exposes smaller sections that would have been part of a larger section when zoomed out. This feature makes it easier to specify the border around an object accurately.
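The section-based masking described above can be sketched in a few lines. This is not Pix4D's actual algorithm; here "similar color" is approximated by crude quantization of RGB values, and all names are hypothetical.

```python
def section_id(pixel, bucket=64):
    """Group a pixel into a color section by quantizing its RGB values."""
    r, g, b = pixel
    return (r // bucket, g // bucket, b // bucket)

def build_mask(image, masked_sections, bucket=64):
    """Return a 2-D boolean mask: True where a pixel's color section
    has been toggled into the background mask."""
    return [[section_id(px, bucket) in masked_sections for px in row]
            for row in image]

# A tiny 2x2 "image": a near-white background column and a dark-green
# object column.
image = [
    [(250, 250, 250), (30, 90, 40)],
    [(245, 248, 250), (35, 85, 45)],
]
# "Clicking" the white section masks every pixel that falls in it.
masked = {section_id((250, 250, 250))}
print(build_mask(image, masked))  # [[True, False], [True, False]]
```

In this toy model, zooming in to expose finer sections corresponds to regrouping with a smaller bucket size, which is why more precise borders become selectable at higher zoom.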
Figure 8
After every image chosen for a dataset has been annotated, the second processing step can be run (Figure 9). First, however, the user must make sure certain settings are turned on; these are accessed in the second tab of the processing options window and are highlighted in Figure 8.
Figure 9
This completes all necessary processing for the imagery.
Results and Discussion:
Below are with- and without-annotation comparisons of the processed data for each dataset. Figures 10, 12, 14, and 16 are models made without annotation; Figures 11, 13, 15, and 17 were made with annotation. Each dataset also has a flyby video for the Pix4D project that was processed with annotation.
Dozer:
Despite annotating five images for the dozer, little if any difference was made; the outcomes looked almost identical. The dozer did, however, come out relatively accurate, the exceptions being the front end (understandably difficult because it is concave) and the gaps between the bars and the cab at the top of the dozer.
Figure 10
Figure 11
Shed:
With the shed, not much changed after annotation. The lack of stray points in the annotated model (Figure 13) is only because a layer was turned off before the screenshot was taken.
Figure 12
Figure 13
Tundra:
The Tundra also did not change in any substantial way. The geometry of the aberrations changed, but their number did not.
Figure 14
Figure 15
Person:
Our controller gained a head after annotation; this is one of the only substantial changes.
Figure 16
Figure 17
General Discussion:
After processing the images, it is apparent that image annotation does not add much at the level of annotation done here. About five images were annotated for each dataset, which did not make any substantial change to the outcome. At higher levels of annotation, certain details may change. In future experimentation, it would be smart to annotate more images, perhaps 20 or 30. Zooming in farther and paying closer attention to the detail of the boundary between background and subject would be something else to try that may yield higher-quality results. Unfortunately, Pix4D's annotation interface is frustratingly slow to work with, and it is sometimes buggy both in what it labels as masked versus unmasked and in that it changes what one has already annotated. For example, when an image was zoomed into and annotated at a high level of detail, unwanted connected pieces of the image sometimes became annotated as well once the image was zoomed back out. Another complaint is that there is no way to click and drag to mask a large portion of an image; instead, one must go piece by piece through large areas, clicking and slowly moving along while the software catches up and highlights the next desired piece. These issues make the software very annoying and time-consuming to work with.
Conclusion:
Annotation may help create higher-quality data if done extensively and extremely carefully. That, however, would require a team of people, given the time-consuming and frustrating nature of careful image annotation. Other processing programs may handle this function better than Pix4D. Nonetheless, the three-dimensional models produced could be very useful in many applications: building inspection has already been noted as one modern use of this technology, and the field of UAS three-dimensional modeling is growing as more uses are found.