Friday, December 6, 2013

Lab 8: Spectral Signature Analysis

Goals and Objectives

The main goal of this lab is for you to gain experience measuring and interpreting the spectral reflectance (signatures) of various Earth surface and near-surface materials captured in satellite images. In this lab, you will learn how to collect spectral signatures from remotely sensed images, graph them, and analyze them to verify whether they pass the spectral separability test discussed in lectures. This is a prerequisite for image classification, which will be covered in the advanced version of this class. At the end of this lab, students will be able to collect and properly analyze spectral signature curves for various Earth surface and near-surface features in any multispectral remotely sensed image.

Methods

This lab was all about collecting spectral signatures for 12 different Earth surfaces.  The 12 surfaces we needed to find were:

   1. Standing water
   2. Moving water
   3. Vegetation
   4. Riparian vegetation
   5. Crops
   6. Urban grass
   7. Dry soil (uncultivated)
   8. Moist soil (uncultivated)
   9. Rock
   10. Asphalt highway
   11. Airport runway
   12. Concrete surface (parking lot)

In order to do this, I opened the image that was provided and connected to Google Earth.  Connecting to Google Earth helps with finding the right surface; it is difficult to identify a surface just by looking at the image in ERDAS.  Once I found the right surface, I used the polygon drawing tool to outline the surface area.  To find the spectral reflectance, I used another tool called the Signature Editor.  This tool shows the spectral reflectance in each band for that surface and plots it on a graph.
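What the Signature Editor does with each polygon is essentially a per-band mean of the pixels inside it. A minimal NumPy sketch of that idea (the `mean_signature` helper and toy arrays are mine for illustration, not part of ERDAS):

```python
import numpy as np

def mean_signature(image, mask):
    """Mean pixel value per band inside a polygon mask.

    image: (bands, rows, cols) array of DNs or reflectance
    mask:  (rows, cols) boolean array, True inside the drawn polygon
    """
    return np.array([band[mask].mean() for band in image])

# Toy 3-band image with a 2x2 "polygon" in the upper-left corner
img = np.zeros((3, 4, 4))
img[0, :2, :2] = 10   # band 1
img[1, :2, :2] = 20   # band 2
img[2, :2, :2] = 40   # band 3
m = np.zeros((4, 4), dtype=bool)
m[:2, :2] = True

print(mean_signature(img, m))  # [10. 20. 40.]
```

Plotting these per-band means against band number gives the signature curve that the Signature Editor graphs.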

Results

The image below shows the 12 different surfaces on which we had to draw a polygon and collect the spectral reflectance.  Using the Signature Editor tool, we built this table listing each surface we had to find.



The image below is a graph showing all of the surfaces and their respective spectral reflectance.  As you can see, different surfaces have different reflectance curves.














Conclusion

Overall, this lab was very interesting to do.  It was good to revisit tools we used earlier in the semester, like syncing Google Earth with our image to tell what kind of surface is in the image, and it was interesting to use new tools as well.  This was a great lab because remote sensing is all about reflectance, and it helped us find reflectance values for different surfaces.


Thursday, November 28, 2013

Lab 7: Photogrammetry and Orthorectification

Goals and Objectives

The main goal of this laboratory exercise is to develop your skills in performing key photogrammetric tasks on aerial photographs and satellite images. Specifically, the lab is designed to train you in the mathematics behind calculating photographic scales, measuring areas and perimeters of features, and calculating relief displacement. Moreover, this lab is intended to introduce you to stereoscopy and to performing orthorectification on satellite images. At the end of this lab exercise, students will be in a position to perform diverse photogrammetric tasks.

Methods


Photogrammetry 


Four different photogrammetry methods were used to derive scales and relief displacement.  In the first method, we were given an aerial image and an actual ground distance, and we had to find the scale by measuring the corresponding distance on the aerial image with a ruler.  The scale denominator was found using this equation: Actual Ground Distance / Distance Measured on the Photo.  In the second method, we were given the camera's focal length, the height of the aerial flight, and the elevation of a point on the surface.  We calculated the scale with this equation: Focal Length / (Flying Height − Point Elevation).  The third method was to find the area and perimeter of a lagoon on the southwest side of Eau Claire.  We had to use the area and perimeter tools in ERDAS.  I digitized the lagoon; the area was reported in acres and hectares, and the perimeter in meters and miles.  The fourth method was to calculate relief displacement, given the real-world height of a smoke stack and the flying height of the camera.  This was calculated with this equation: Height of Object in Reality × Radial Distance from Principal Point to Top of Object / Flying Height of Camera.
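The three formulas above can be checked numerically. The numbers in this sketch are made up for illustration; they are not the lab's actual measurements:

```python
# 1) Scale from a measured distance: the scale denominator is
#    ground distance / photo distance (both in the same units).
ground_dist_m = 3849.8          # hypothetical actual ground distance
photo_dist_m = 0.10             # hypothetical distance measured on the photo (10 cm)
scale_denom = ground_dist_m / photo_dist_m
print(f"1:{scale_denom:,.0f}")  # 1:38,498

# 2) Scale from flying height: S = f / (H - h)
f_m = 0.152                     # focal length (152 mm is a common aerial camera lens)
H_m = 6000.0                    # flying height above datum
h_m = 300.0                     # terrain elevation at the point
print(f"1:{(H_m - h_m) / f_m:,.0f}")  # 1:37,500

# 3) Relief displacement: d = (object height * radial distance) / flying height
h_obj = 50.0                    # real-world height of the object (e.g. smoke stack)
r = 0.068                       # radial distance, principal point to object top (photo units)
d = h_obj * r / H_m
print(f"d = {d:.6f} photo units")
```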

Stereoscopy


This tool was rather simple to use.  Given a DEM, certain parameters were changed to get the desired end result.  I had to create a stereoscopic image of the city of Eau Claire.  The end result was an image that displayed the elevation of the city of Eau Claire.

Orthorectification


This was a long, tedious process!  I first had to set ground control points on my images; I created 12 GCPs.  I then had to assign the image a DEM and a panchromatic image to correct. 


Results


Photogrammetry Results

The image below is the first task of the photogrammetry section of the lab.  I calculated the scale to be 1:38,498.


The image below is the second task in photogrammetry.  I calculated the scale to be 1:38,509.





















The image below is the third task in the photogrammetry section of the lab.  I calculated the area and perimeter of the lagoon in acres, hectares, miles, and meters.





















The image below is the last part of the photogrammetry section of the lab.  I had to calculate the relief displacement of the smoke stack.


















Stereoscopic Results

The image below is the result of the stereoscopic task.  It is supposed to be almost a 3D image of Eau Claire, but it is hard to see the elevation and relief without 3D glasses on.





















Orthorectification Results

The image below shows the result of the orthorectification section of the lab.  The image was created through multiple steps: adding GCPs, importing a DEM, and rectifying the image.















Conclusion

This was the longest lab of the semester so far for this class, and it took a long time to complete.  Overall, I believe it was a very helpful lab that taught us a lot.  The photogrammetry skills in this lab were very useful and could help in getting a job.

Tuesday, November 19, 2013

Lab 6: Geometric Correction

Goal and Objective

This lab is designed to introduce you to a very important image pre-processing exercise known as geometric correction. The lab is structured to develop your skills on the two major types of geometric correction that are normally performed on satellite images as part of the pre-processing activities prior to the extraction of biophysical and socio-cultural information from satellite images.


Methods

In part 1, image-to-map rectification was used.  We were given a topographic map of the greater Chicago area and a Landsat image of the same area.  Our objective was to geometrically correct the topographic map to fit the Landsat image using ground control points (GCPs).  Ground control points are used to stretch the image or map; each control point has latitude and longitude coordinates, providing an accurate correction.  In part 1, we used a first-order polynomial correction, which needs only three GCPs.  The rectified image had a root mean square (RMS) error of 0.4379.  This is an acceptable value because anything under 0.5 is considered very good.

The second method used was image-to-image rectification.  In part 2, we were given a Landsat image of Sierra Leone that needed to be corrected, and we used a third-order polynomial correction.  A third-order correction needs at least 10 ground control points; we used 12 to get better accuracy.  My RMS error was 0.1576, which is an acceptable amount of error for the correction.
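The GCP counts and the RMS figure can be checked with a short sketch. The `min_gcps` and `rms_error` helpers and the residual values are mine for illustration; ERDAS reports these numbers for you:

```python
import math

def min_gcps(order):
    """Minimum GCPs for a polynomial transform of a given order:
    a polynomial of order n has (n + 1)(n + 2) / 2 coefficients per axis,
    and each GCP supplies one equation per axis."""
    return (order + 1) * (order + 2) // 2

def rms_error(residuals):
    """Total RMS error from per-GCP (dx, dy) residuals in pixels."""
    sq = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(sq) / len(sq))

print(min_gcps(1))  # 3  -> first-order needs at least 3 GCPs
print(min_gcps(3))  # 10 -> third-order needs at least 10

# Hypothetical residuals for 4 GCPs; well under the 0.5 rule of thumb
print(rms_error([(0.2, 0.1), (-0.3, 0.2), (0.1, -0.1), (0.0, 0.25)]))
```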

Results

The first image below is a screenshot of the root mean square error and ground control points for the first part of the lab.  
The image above is a screenshot of the first part of the lab, using a first-order polynomial equation.
The second part of the lab was image-to-image correction.  Using a third-order polynomial equation, we placed 12 ground control points for better accuracy.

The image above is from the second part of the lab, where we used a third-order polynomial equation for better accuracy.






  

Wednesday, November 13, 2013

Lab 5: Image Mosaic and Miscellaneous Image Functions 2

Goal

This lab is designed to introduce you to some important analytical processes in remote sensing. The lab explores the RGB to IHS transform and back, image mosaic, spatial and spectral image enhancement, band ratio, and binary change detection. The image mosaic section of this lab is structured to teach you how to process individual scenes of satellite images into one seamless scene, in the event you are faced with a project that covers a large geographic area exceeding the spatial extent of one satellite image scene, or your study area straddles portions of two satellite image scenes. At the end of the lab exercise, students will be in a position to apply all the analytical processes introduced in this lab in real-life projects.

Methods

Part 1: RGB to IHS and IHS to RGB

In the first part of the lab we had to transform images from Red, Green, Blue (RGB) to Intensity, Hue, Saturation (IHS) and then back to RGB.  Various steps were followed in ERDAS to produce the specific results.  The RGB to IHS transform separates an image's brightness (intensity) from its color (hue and saturation), which can produce a more realistic-looking image; the IHS image displays a different band combination than the original RGB image.
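ERDAS's IHS transform is closely related to the HSV color model, and Python's standard `colorsys` module can sketch the per-pixel round trip. This is an analogy (HSV rather than IHS proper), not the ERDAS algorithm itself:

```python
import colorsys

# One pixel, RGB components in the 0-1 range
r, g, b = 0.4, 0.6, 0.2

# Forward transform: separate color (hue, saturation) from brightness (value)
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Inverse transform recovers the original pixel, just as the lab's
# IHS-to-RGB step recovers an RGB image
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
print(round(r2, 6), round(g2, 6), round(b2, 6))  # 0.4 0.6 0.2
```

The round trip being lossless is why the lab can go RGB → IHS → RGB and end up with a usable RGB image again.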
Part 2: Image Mosaicking

In the next part of the lab we were asked to mosaic images together in two different ways: Mosaic Express and MosaicPro.  The first method we used was Mosaic Express, which is the quicker way to mosaic images but gives you less control over the results.  The second method was MosaicPro, which has you set parameters in order to produce the results we intended.  


Part 3: Band Ratioing

Band ratioing is a very helpful tool for picking out and emphasizing specific features in an image.  In this part of the lab, we used band ratioing to create a normalized difference vegetation index (NDVI) image.  In the NDVI image you can tell exactly where vegetation is growing and where it is not.  This tool is very helpful when using an image that contains vegetation, or when you are trying to find vegetation.
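NDVI is the standard band ratio (NIR − Red) / (NIR + Red). A minimal NumPy sketch with made-up reflectance values:

```python
import numpy as np

# Toy near-infrared and red bands (reflectance values, not real data)
nir = np.array([[0.50, 0.40],
                [0.10, 0.30]])
red = np.array([[0.10, 0.10],
                [0.09, 0.30]])

# NDVI = (NIR - Red) / (NIR + Red)
# Healthy vegetation reflects strongly in NIR and absorbs red,
# so it pushes NDVI toward +1; bare soil and water sit near 0 or below.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```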
  
Part 4: Spatial and Spectral Image Enhancement

Spatial and spectral enhancement are techniques applied to images to improve their appearance for visual analysis.  They change the contrast and brightness so the human eye can better perceive the image.  For example, we were given a high-frequency image and had to tone down its brightness level.  We were also given a low-frequency image and had to apply high- and low-pass convolution masks/edge enhancement to improve it.  In the last part of this section we had to equalize the contrast with a histogram to improve the image quality.  These are all useful tools for better perceiving an image.
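A high-pass convolution mask like the ones used in this part can be sketched in plain NumPy. The 3x3 kernel and the `convolve3x3` helper here are illustrative, not ERDAS's exact filter:

```python
import numpy as np

def convolve3x3(img, kernel):
    """Valid-mode 3x3 convolution (no padding), plain NumPy loops."""
    rows, cols = img.shape
    out = np.zeros((rows - 2, cols - 2))
    for i in range(rows - 2):
        for j in range(cols - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

# A classic high-pass (edge-enhancing) kernel: coefficients sum to zero,
# so flat areas go to zero and abrupt changes stand out
high_pass = np.array([[-1., -1., -1.],
                      [-1.,  8., -1.],
                      [-1., -1., -1.]])

# Uniform background with one bright pixel in the middle
img = np.full((5, 5), 10.0)
img[2, 2] = 20.0
print(convolve3x3(img, high_pass))
```

A low-pass (smoothing) mask works the same way with an all-positive averaging kernel instead.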

 Part 5: Binary Change Detection (Image Differencing)

Binary change detection is a useful tool for seeing the differences between images from different years.  For example, in our lab we used images from 1991 and 2011 to look at the change in a four-county area in western Wisconsin.
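The image-differencing idea can be sketched in a few lines. The arrays are made up, and the mean ± 1.5 standard deviation threshold is a common rule of thumb; the exact threshold used in the lab may differ:

```python
import numpy as np

# Toy single-band images from two dates (e.g. 1991 and 2011)
older = np.array([[10., 10.],
                  [50., 80.]])
newer = np.array([[12., 11.],
                  [49., 20.]])

# Image differencing: subtract date 1 from date 2
diff = newer - older

# Flag pixels whose change falls outside mean +/- 1.5 standard deviations;
# everything else is treated as no-change noise
mu, sigma = diff.mean(), diff.std()
changed = np.abs(diff - mu) > 1.5 * sigma
print(changed)  # only the pixel that dropped from 80 to 20 is flagged
```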

Results

The following images are a select few of the important tools that we learned how to use in ERDAS Imagine in this lab.  The first image is a transformation from IHS to RGB.

Figure 1: The image above is a transformation from IHS to RGB. The color change is from false color to true color.


Figure 2: The image above is of a stretched RGB image.


Mosaic Express
Figure 3: The image above is a Mosaic Express result.  As you can see, there is not a smooth transition between images.

Mosaic Pro
Figure 4: The image above is a MosaicPro result.  As you can see, there is a smoother transition between images.

Conclusion

This lab was extremely beneficial for learning new tools in the ERDAS Imagine program.  In this lab we learned how to transform images from RGB to IHS and back, mosaic images, ratio bands, apply spatial and spectral image enhancement, and perform binary change detection.  Overall, these tools are all necessary for being able to read remotely sensed images.









Tuesday, October 29, 2013

Lab 4: Miscellaneous Image Functions 1

Goals and Background

This laboratory exercise is designed around the following: (1) delineating a study area from a larger satellite image scene, (2) demonstrating how the spatial resolution of images can be optimized for visual interpretation purposes, (3) introducing some radiometric enhancement techniques for optical images, (4) linking a satellite image to Google Earth, which can be a source of ancillary information, and (5) introducing various methods of re-sampling satellite images. At the end of this lab exercise we will have gained skills in image pre-processing and enhancing images for visual interpretation, and will be in a position to delineate any study area (area of interest) from a larger satellite image scene. 

Methods


Figure 1: Create Subset&Clip
In part 1 of this 5-part lab, we looked at image sub-setting.  There are two methods of image sub-setting: an inquire box and creating an area of interest, or AOI.  When creating a sub-set with the inquire box, you make a square or rectangle around the part of the image that you want to delineate from the rest of the image.  To insert the inquire box into your image, right-click on the image and hit "Inquire Box".  The inquire box can be re-positioned and re-sized to the specifications you need.  Once the inquire box is around the area you want to sub-set, you then create the sub-set: under Raster, click on Subset & Clip and choose Create Subset Image, as seen in Figure 1.  In the output file field, go to your folder and save it.  When the sub-set is created, the end result is an image of the area the inquire box surrounded.

Figure 2: Shape file Area of Interest
The next way to sub-set an area is by creating an area of interest, or AOI.  To create an AOI, you need a shapefile of your AOI; the one used in part 1 was of Eau Claire and Chippewa Counties, as seen in Figure 2.  Once the AOI is selected, you can clip the AOI and save it using the sub-set tool.  A new image of the AOI is created. 






Figure 3: Pan Sharpen/Resolution Merge
In part 2, we looked at image fusion.  Image fusion creates a higher spatial resolution image by fusing a coarser-resolution multispectral image with a panchromatic image.  This is called pan sharpening: we merge the two images together.  With both images open in ERDAS, click on Raster, then Pan Sharpen, and choose Resolution Merge from the drop-down box, as shown in Figure 3. 






Figure 4: How to save image
A window like the one in Figure 4 will open. For the high-resolution input choose the panchromatic image, for the multispectral input choose the original image, and in the output field choose a file name.  In the method box check Multiplicative, and in the re-sampling techniques check Nearest Neighbor.  With two viewers open in ERDAS, sync the views and compare them.  When the two images are merged you get a pan-sharpened image with better color, more depth of detail, and features that are easier to distinguish.
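The multiplicative merge method can be sketched for a single band. This is the general idea (multiply by the pan band, then rescale back to the band's original range); ERDAS's exact implementation details may differ, and the arrays here are toy values:

```python
import numpy as np

def multiplicative_merge(ms_band, pan):
    """Multiplicative resolution merge for one band: multiply the
    (already resampled) multispectral band by the pan band, then
    stretch the product back to the band's original value range."""
    fused = ms_band * pan
    fused = (fused - fused.min()) / (fused.max() - fused.min())
    return fused * (ms_band.max() - ms_band.min()) + ms_band.min()

# Toy 2x2 example: one multispectral band and a pan band on the same grid
ms = np.array([[10., 20.],
               [30., 40.]])
pan = np.array([[1., 2.],
                [3., 4.]])
print(multiplicative_merge(ms, pan))
```

The pan band contributes the fine spatial detail; the rescaling keeps the output comparable to the original band's brightness range.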

 
In part 3, we looked at enhancing an image by clearing its haze.  An image with haze is harder to read and shows less detail.  With an image open in ERDAS, click on Raster, then Radiometric, and in the drop-down choose Haze Reduction, as seen in Figure 5.
Figure 5: Haze Reduction
You will get a window to name your output/new image.  Once the image is done processing and the haze is cleared up, the image will be sharper and easier to interpret.  
















In part 4, we linked an image with a Google Earth image.  This is a neat feature of ERDAS. With an image open in ERDAS, you connect to Google Earth, as shown in Figure 6.  
Figure 6: Linking to Google Earth
When the images are side by side, you can sync them together and scroll and zoom in both at once.  Since the images are synced, both move the same.  Google Earth is a great tool for interpreting aerial photos: you can zoom in to a very near extent and use the image interpretation keys to help identify objects in the frame. 





In part 5, we took a look at re-sampling.  Re-sampling changes the size of the pixels.  With an image open in ERDAS, click on Raster, Spatial, and Re-sample Pixel Size, as shown in Figure 7.
Figure 7: Re-sampling
When the pixels are made smaller it is called re-sampling up, and when the pixels are made larger it is called re-sampling down.
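Nearest-neighbor re-sampling (one of the techniques offered in the ERDAS dialog) simply repeats the closest original pixel. A NumPy sketch of re-sampling up by an integer factor (the `resample_nearest` helper is mine for illustration):

```python
import numpy as np

def resample_nearest(img, factor):
    """Nearest-neighbor re-sample up by an integer factor: each output
    pixel takes the value of the nearest (here, enclosing) input pixel.
    Re-sampling down by a factor n would instead keep every nth pixel."""
    rows = np.arange(img.shape[0] * factor) // factor
    cols = np.arange(img.shape[1] * factor) // factor
    return img[np.ix_(rows, cols)]

img = np.array([[1, 2],
                [3, 4]])
print(resample_nearest(img, 2))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```

Nearest neighbor preserves the original pixel values exactly, which is why it is preferred when the values will be analyzed later; bilinear or cubic methods produce smoother-looking but altered values.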












Results


This is the result of sub-setting with an inquire box.  We started out with an image of Eau Claire County and some outside areas.  With the inquire box you can zoom in on the area you want to observe, whether large or small; here we chose to zoom in on the city of Eau Claire.  Image sub-setting is a great tool for picking an area out of the big image so you can concentrate only on the area you need.


Inquire Box

In this image we created an area of interest, which is another form of sub-setting.  Here our area of interest was Eau Claire and Chippewa Counties.  A shapefile was overlaid on the image, and we were able to select the two counties and make them our area of interest.  

Area of Interest