Thursday, November 28, 2013

Lab 7: Photogrammetry and Orthorectification

Goals and Objectives

The main goal of this laboratory exercise is to develop your skills in performing key photogrammetric tasks on aerial photographs and satellite images. Specifically, the lab is designed to train you in the mathematics behind calculating photographic scales, measuring the areas and perimeters of features, and computing relief displacement. Moreover, this lab is intended to introduce you to stereoscopy and to performing orthorectification on satellite images. At the end of this lab exercise, students will be in a position to perform diverse photogrammetric tasks.

Methods


Photogrammetry 


Four different photogrammetry methods were used to derive scales and relief displacement.  In the first method, we were given an aerial image and a known ground distance.  We found the scale by measuring the corresponding distance on the image with a ruler and applying the equation: Actual Ground Distance / Distance Measured on the Image.  In the second method, we were given the focal length of the lens, the flying height of the aircraft, and the elevation of the point on the surface.  We calculated the scale with the equation: Focal Length / (Flying Height − Terrain Elevation).  The third method was to find the area and perimeter of a lagoon on the southwest side of Eau Claire using the area and perimeter measure tool in ERDAS.  I digitized the lagoon and the software reported the area in hectares and acres and the perimeter in meters and miles.  The fourth method was to calculate the relief displacement of a smoke stack, given its real-world height and the flying height of the camera, using the equation: (Height of Object in Reality × Radial Distance from the Principal Point to the Top of the Object) / Height of Camera.
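The three formula-based calculations in this section can be sketched in a few lines of Python. This is a minimal illustration with hypothetical function names and values; units just need to be consistent within each call.

```python
def scale_from_measurement(ground_dist, image_dist):
    """Scale denominator = actual ground distance / distance measured on the image."""
    return ground_dist / image_dist

def scale_from_flight(focal_len, flight_height, terrain_elev):
    """Scale fraction = focal length / (flying height - terrain elevation)."""
    return focal_len / (flight_height - terrain_elev)

def relief_displacement(obj_height, radial_dist, camera_height):
    """d = (h * r) / H: displacement of an object's top away from the principal point."""
    return obj_height * radial_dist / camera_height
```

For example, a ground distance of 38,498 m that measures 1 m on the photo gives a scale denominator of 38,498, i.e. a scale of 1:38,498.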

Stereoscopy


This tool was rather simple to use.  Given a DEM, certain parameters were adjusted to produce the desired end result.  I had to create a stereoscopic image of the city of Eau Claire; the end result was an image that displayed the elevation of the City of Eau Claire.

Orthorectification


This was a long, tedious process!  I first had to set ground control points on my images, creating 12 GCPs in total.  I then had to assign the image a DEM and another panchromatic image to correct.


Results


Photogrammetry Results

The image below is the first task of the photogrammetry section of the lab.  I calculated the scale to be 1:38,498.


The image below is the second task in photogrammetry.  I calculated the scale to be 1:38,509.


The image below is the third task in the photogrammetry section of the lab.  I calculated the area and perimeter of the lagoon in acres, hectares, miles, and meters.


The image below is the last part of the photogrammetry section of the lab.  I had to calculate the relief displacement of the smoke stack.


Stereoscopic Results

The image below is the result of the stereoscopic task.  It is supposed to be almost a 3D image of Eau Claire, but it is hard to see the elevation and relief without 3D glasses on.


Orthorectification Results

The image below is the result of the orthorectification section of the lab.  The image was created through multiple steps: adding GCPs, importing a DEM, and rectifying the image.


Conclusion

This was the longest lab of the semester so far for this class, and it took a long time to complete.  Overall, I believe it was a very helpful lab that taught us a lot.  The photogrammetry skills practiced in this lab were very useful and could help in getting a job.

Tuesday, November 19, 2013

Lab 6: Geometric Correction

Goal and Objective

This lab is designed to introduce you to a very important image pre-processing exercise known as geometric correction. The lab is structured to develop your skills in the two major types of geometric correction that are normally performed on satellite images as part of pre-processing, prior to the extraction of biophysical and socio-cultural information from satellite images.


Methods

In part 1, image-to-map rectification was used.  We were given a topographic map of the greater Chicago area and a Landsat image of the same area.  Our objective was to geometrically correct the Landsat image to fit the topographic map using ground control points.  Ground control points are used to stretch and warp the image; each control point has latitude and longitude coordinates, which provide an accurate correction.  In part 1, we used a first-order polynomial correction, which requires only three GCPs.  The rectified image had a root mean square error of 0.4379.  This is an acceptable value because anything under 0.5 is considered very good.  The second method used was image-to-image rectification.  In part 2, we were given a Landsat image of Sierra Leone that needed to be corrected, and we used a third-order polynomial correction, which requires at least 10 ground control points.  We used 12 ground control points for better accuracy.  My root mean square error was 0.1576, also an acceptable value for the correction.
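The GCP counts and error values quoted here follow two simple formulas: an order-n polynomial needs (n + 1)(n + 2) / 2 control points, and the reported RMSE is the root mean square of the GCP residuals. A small sketch, with function names and residual values of my own invention:

```python
import math

def min_gcps(order):
    """Minimum GCPs for an order-n polynomial transform: (n + 1)(n + 2) / 2."""
    return (order + 1) * (order + 2) // 2

def gcp_rmse(residuals):
    """Root mean square error over a list of (x, y) GCP residuals, in pixels."""
    sq = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(sq) / len(sq))

# First-order needs 3 GCPs; third-order needs 10, matching the lab.
assert min_gcps(1) == 3 and min_gcps(3) == 10
```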

Results

The first image below is a screenshot of the root mean square error and ground control points for the first part of the lab.
The image above is a screenshot of the first part of the lab, using a first-order polynomial equation.
The second part of the lab was image-to-image correction.  Using a third-order polynomial equation, we placed 12 ground control points for better accuracy.

The image above is from the second part of the lab, where we used a third-order polynomial equation for better accuracy.


Wednesday, November 13, 2013

Lab 5: Image Mosaic and Miscellaneous Image Functions 2

Goal

This lab is designed to introduce you to some important analytical processes in remote sensing. The lab explores the RGB to IHS transform and back, image mosaicking, spatial and spectral image enhancement, band ratioing, and binary change detection. The image mosaicking adopted in this lab is structured to teach you how to process individual scenes of satellite images to arrive at one seamless scene, in the event you are faced with a project that covers a geographic area exceeding the spatial extent of one satellite image scene, or your study area straddles portions of two scenes. At the end of the lab exercise, students will be in a position to apply all the analytical processes introduced in this lab in real-life projects.

Methods

Part 1: RGB to IHS and IHS to RGB

In the first part of the lab we had to transform images from Red, Green, Blue (RGB) to Intensity, Hue, Saturation (IHS) and then back to RGB.  Various steps were followed in ERDAS in order to produce the specific results.  The RGB to IHS function transforms an image in a way that is supposed to produce a more realistic image; the IHS image uses a different band combination.
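ERDAS implements its own IHS transform, but the round-trip idea can be illustrated with Python's standard-library HSV conversion, a closely related (not identical) color-space transform. A minimal sketch for one pixel with made-up band values:

```python
import colorsys

# Hypothetical normalized band values (0-1) for a single pixel.
r, g, b = 0.8, 0.4, 0.2

# Forward transform: separate brightness/color components from RGB.
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Inverse transform: recovers the original RGB values.
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
```

The point of the round trip is that no information is lost: processing can happen in the intensity/hue/saturation space and the result can still be displayed as RGB.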
Part 2: Image Mosaicking

In the next part of the lab we were asked to mosaic images together in two different ways: Mosaic Express and MosaicPro.  The first method we used was Mosaic Express, which is the quicker way to mosaic images but does not always produce consistent results.  The second method was MosaicPro, which has you set parameters in order to produce the intended results.


Part 3: Band Ratioing

Band ratioing is a very helpful tool for picking out and emphasizing a specific spectral signature within an image.  In this part of the lab, we used band ratioing to compute a normalized difference vegetation index (NDVI) image.  In the resulting image you can tell exactly where vegetation is growing and where it is not.  This tool is very helpful when working with an image containing vegetation or when trying to find vegetation.
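In ERDAS this is done through the indices tool, but the NDVI band ratio itself is one line of arithmetic. A minimal sketch, using hypothetical reflectance values:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).

    Ranges from -1 to 1; healthy vegetation reflects strongly in the
    near-infrared, so it scores high, while bare soil sits near zero.
    """
    return (nir - red) / (nir + red)

# Hypothetical pixel reflectances: vegetation vs. bare ground.
vegetation = ndvi(0.50, 0.10)   # strongly positive
bare_soil = ndvi(0.22, 0.20)    # near zero
```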
  
Part 4: Spatial and Spectral Image Enhancement

Spatial and spectral enhancement are techniques applied to images to improve their appearance for visual analysis.  They change the contrast and brightness so the human eye can better perceive the image.  For example, we were given a high-frequency image and had to tone down its brightness level.  We were also given a low-frequency image and had to apply high-pass and low-pass convolution masks (edge enhancement) to improve it.  In the last part of this section we had to equalize the histogram to improve the image contrast.  These are all useful tools for better perceiving an image.
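The convolution masks mentioned here are just small kernels of weights slid across the image. A minimal sketch of one high-pass (Laplacian-style) kernel applied to a single pixel; the grid and function name are my own, and edge pixels are ignored for brevity:

```python
# A common 3x3 high-pass kernel: weights sum to zero, so flat areas
# produce 0 while abrupt brightness changes (edges) produce large values.
HIGH_PASS = [[-1, -1, -1],
             [-1,  8, -1],
             [-1, -1, -1]]

def convolve_pixel(img, row, col, kernel):
    """Weighted sum of the 3x3 neighborhood centered on (row, col)."""
    total = 0
    for i in range(3):
        for j in range(3):
            total += img[row - 1 + i][col - 1 + j] * kernel[i][j]
    return total
```

A low-pass (smoothing) mask works the same way with all-positive weights that average the neighborhood instead of differencing it.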

 Part 5: Binary Change Detection (Image Differencing)

Binary change detection is a useful tool for seeing the difference between images taken in different years.  For example, in our lab we used images from 1991 and 2011 to look at the change in a four-county area in western Wisconsin.
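The underlying image-differencing logic is simple: subtract the two dates band-by-band and flag pixels whose difference exceeds a threshold. A minimal sketch, with the function name and values made up for illustration:

```python
def binary_change(band_t1, band_t2, threshold):
    """Return a 0/1 change mask: 1 where |later - earlier| exceeds the threshold."""
    return [[1 if abs(b - a) > threshold else 0
             for a, b in zip(row1, row2)]
            for row1, row2 in zip(band_t1, band_t2)]

# Hypothetical one-row images: the second pixel brightens by 40 units.
mask = binary_change([[10, 50]], [[12, 90]], threshold=20)
```

The threshold is the key choice; it is typically set from the statistics of the difference image (e.g. mean ± some multiple of the standard deviation).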

Results

The following images are a select few of the important tools that we learned how to use in ERDAS Imagine in this lab.  The first image is a transformation from IHS to RGB.

Figure 1: The image above is a transformation from IHS to RGB. The color change is from false color to true color.


Figure 2: The image above is of a stretched RGB.


Mosaic Express
Figure 3: The image above is from Mosaic Express.  As you can see, there is not a smooth transition between images.

Mosaic Pro
Figure 4: The image above is from MosaicPro.  As you can see, there is a smoother transition between images.

Conclusion

This lab was extremely beneficial for learning new tools in the ERDAS Imagine program.  In this lab we learned to transform images from RGB to IHS and back, mosaic images, perform band ratioing, apply spatial and spectral image enhancement, and carry out binary change detection.  Overall, these tools are essential for being able to interpret remotely sensed images.