The calibration of multispectral and hyperspectral imaging systems is typically performed in the laboratory with an integrating sphere, a source that is usually red rich. Using such a source to calibrate environmental monitoring systems presents difficulties. Not only is much of the calibration data outside the range and spectral quality of the values expected in the field, but using these measurements alone may also exaggerate the optical flaws within the system. Left unaccounted for, these flaws become embedded in the calibration and are passed on to the field data when the calibration is applied. To address these issues, we incorporated a series of well-characterized spectral filters into our calibration. This provided a set of stable spectral standards with which to test and account for inadequacies in the spectral and radiometric integrity of the optical imager.
©2004 Optical Society of America
The complexity and variety of marine optical signals in coastal ocean areas have created a challenging environment for the development of remote sensing instrumentation and algorithms. To date, most successful oceanic algorithms have been based upon multispectral data streams and have focused on the characterization of Case 1 waters (waters whose optical constituents co-vary with each other) [1–3]. While these algorithms have worked well for classifying marine water types, they have been less successful in describing near-shore environments. The near-shore environment adds influences on the optical signals that do not necessarily co-vary with the signals produced by ecological interactions. These additions include the influence of the bottom, whose spectral signal and magnitude vary with the bathymetry and bottom type of the region. These bottom effects have temporal as well as spatial variations, including the impact of seasonal changes in macrophyte coverage and resuspended sediments. The algorithms are also hampered by colored degradation matter and sediments from terrestrial sources that contaminate the marine-produced color signals.
The first step in any algorithm development for coastal optical remote sensing requires the accurate retrieval of water-leaving radiance, Lw(λ), from the sensor measured radiance. The sensor radiance signal is most often dominated by the atmospheric radiance additions and attenuations, such that Lw(λ) is often just a small fraction of the measured photon density. The removal of the atmospheric interference in the water-leaving radiance signal requires a priori knowledge of a host of atmospheric constituents, e.g., column water vapor, aerosol type and density, ozone concentration, etc. Without a priori knowledge, these constituents must be retrieved from the spectral data stream itself, decreasing the degrees of freedom with which to resolve the water leaving radiance signal.
The increased potential for bottom- or suspended-sediment-influenced water-leaving radiance within the coastal area further complicates the atmospheric correction of such scenes. In contrast to deeper offshore waters, the assumption that these waters have no optical return in the near infrared is no longer valid [4–7]. These considerations, coupled with the dominance of the atmosphere's optical effects over the weak optical return from the coastal environment, make atmospheric correction of these coastal areas a non-trivial matter.
Promising to deliver the extra information needed to properly handle such spectrally complex scenes, hyperspectral remote sensing emerged as a collection tool more than a decade ago [8, 9]. Hyperspectral data, with their numerous, narrow, contiguous wavebands, approximate the true electromagnetic signature of the target. With this new information, mathematical techniques originally developed for laboratory spectroscopy could be applied to the data in an attempt to characterize the imagery. There have been some recent efforts to use high-spectral-resolution data in mapping the coastal zone [11–15]. However, the sensors used to collect the data for these previous studies suffer from sensitivity and calibration issues that become apparent in these low-light scenes. The instruments' limitations require an on-site, vicarious calibration for the data to be useful in these environments. This, in turn, reduces the applicability of these tools and techniques to other coastal areas, or even to other water types within the same image. In addition, remote sensing analyses such as temporal scene-to-scene comparisons are virtually impossible to interpret if the physical units of the individual images are in question. This demand for a high degree of radiometric certainty has, to date, inhibited this data stream from reaching its full potential as an oceanographic tool. Therefore, for newly developed algorithms to be of greater use, high confidence in the absolute radiometric calibration of the hyperspectral data they utilize is needed.
Over the last several years, the Florida Environmental Research Institute (FERI) in cooperation with the Naval Research Laboratory (NRL) Code 7200 has been developing, deploying, and calibrating a hyperspectral sensor that was specifically designed for the coastal marine environment. During this time, the key issue we encountered centered on how to use a red rich calibrating source to correct an imager focused on blue dominated scenes. This paper will illustrate the steps taken to radiometrically calibrate this sensor using a filter technique in an attempt to reach the goal of producing absolute spectral images of the coastal environment. The radiometric calibration is intimately tied to the spectral calibration and other characteristics of the sensor. Hence, we include a full description of the calibration and characterization of the Portable Hyperspectral Imager for Low-Light Spectroscopy 2 (PHILLS 2) instrument.
The PHILLS 2 is an aircraft-mounted, push-broom sensor. It utilizes a two-dimensional charge-coupled device (CCD) camera to collect the spectral information along a single line on the ground perpendicular to the direction of aircraft travel. As the aircraft moves forward along its trajectory, the sensor's CCD camera captures the spatially dependent spectral information one frame at a time across a thin swath of ground; as the along-track frames are assembled, the image cube is built. The integration time of the sensor is a function of the frame rate of the camera. The cross-track spatial resolution depends upon the sensor's lens, the CCD dimensions, and the aircraft altitude. While the along-track spatial resolution is also a function of the lens and altitude, it depends more on the frame rate of the camera and the speed of the aircraft. Davis et al. provide a detailed description of the sensor.
What distinguishes this instrument from other hyperspectral push-broom sensors is that from concept this sensor was designed specifically for oceanic and near-shore hyperspectral remote sensing. Capturing coastal optical signals from an airborne platform poses two major design challenges that are not usually considerations for terrestrially focused systems. The first challenge is signal sensitivity. When imaging over optically deep waters from high altitudes, the atmosphere makes up the majority of the observed signal (~90–100%). This, combined with the nonlinear attenuation properties of water, requires a sensor to have a high degree of sensitivity in order to properly map the water's subtle spectral characteristics. In the coastal environment, this is compounded by the relatively bright returns from shallow areas where the effect of the bottom albedo is visible. The challenge is to resolve both shallow- and deep-water signals effectively without saturating the sensor's CCD, and also to avoid saturating while imaging adjacent bright land or clouds. Sensors of limited dynamic range must compromise between the range of signals they can detect and the sensitivity with which they can detect those signals. To overcome this limitation, the PHILLS 2 utilizes a high-dynamic-range camera, the 14-bit PlutoCCD camera from PixelVision, to capture the spectral information.
The second issue is the spectral character of the target itself. Water is a blue-dominant target. However, traditional charge-coupled device (CCD) cameras are inherently inefficient at detecting blue light. This limitation was accounted for by employing a Scientific Imaging Technologies' (SITe) thinned, backside-illuminated CCD. Thinned, backside-illuminated chips are essentially normal CCD chips; the difference is that the silicon wafer from which the chip was constructed is thinned, and the chip is flipped upside down when it is mounted in the camera. This process lets incident photons avoid the silicon nitride passivation layer and the silicon dioxide and polysilicon gate structures on the front side of the chip before being recorded by the silicon substrate [19, 20]. It greatly increases the quantum efficiency of the chip, from ~5% to 60% at 400 nm and from ~40% to 85% at 700 nm [20, 21]. Although the quantum efficiency of the chip is greatly increased, the procedure does have its disadvantages. By avoiding the silicon gate structures, the probability of blooming during CCD saturation conditions is greatly increased. Blooming happens when a single CCD well overflows due to saturation and its overflow corrupts neighboring cells. The SITe CCD chip that the PHILLS 2 employs is broken into four quadrants, each with its own analog-to-digital converter. This division allows the chip to clock off data at a faster rate. The camera's ability to achieve very fast frame rates allows the flexibility to select a frame rate (1/integration time) at which the maximum brightness of the expected target does not approach saturation, and thus the probability of CCD blooming is diminished.
However, faster frame rates equate to shorter integration times. Although short integration times over dark oceanic targets are problematic, the high quantum efficiency of the sensor's chip offsets this concern. To further improve the blue light throughput, the spectrograph chosen for the sensor, the American Holographics (now Headwall Photonics Inc.) HyperSpec VS-15 Offner Spectrograph, was optimized to be efficient at the shorter wavelengths. The camera data is also spectrally binned to enhance the signal-to-noise ratio. The PHILLS 2 utilizes a 652 by 494 (spatial by spectral) CCD. The spectral resolution is ~1.1 nanometers prior to binning and ~4.4 nanometers after binning.
No matter how thorough the design and development of the instrument, the sensor is of little use if its output cannot be related to physical reality. The calibration and characterization of this instrument is a multi-step process that includes spectral calibration, angular response characterization, and radiometric calibration.
3.1 Spectral calibration
Spectral calibration is a relatively straightforward procedure. As light enters the PHILLS 2, it passes through a spectrograph prior to being recorded by the CCD camera. The spectrograph separates the incoming light into its wavelength components. The spectral calibration is a determination of the relationship between the true spectral position of the incoming light and the observed effect.
In order to determine this relationship, a spectral element lamp is set up in front of the sensor. Between the lamp and the sensor, a piece of ground glass is placed to diffuse the source and ensure that the entire aperture of the detector is illuminated. Sensor data is then collected using krypton, oxygen, hydrogen, helium, mercury, and argon lamps. This combination of lamps was selected because together they produce emission lines that cover the full spectral range of the PHILLS 2 instrument (~400 to 960 nanometers). The known wavelengths of the emission lines are then regressed against the positions of the pixels within the sensor in which the responses were recorded.
The optical design and construction of the PHILLS 2 is such that the CCD plane should be perpendicular to the incoming diffracted light. Thus, a first-order regression should properly explain the spectral calibration data. A demonstrated need for a higher-order regression would suggest that the camera was misaligned relative to the spectrograph, or worse, that the CCD and/or the spectrograph did not meet original specifications or were damaged.
As can be seen in Fig. 1, the first-order regression properly summarizes the spectral data. This regression is performed for every spatial element of the CCD. How the regression varies as a function of spatial position illustrates the spectral smile of the sensor. Smile is a feature of all diffraction gratings; however, the PHILLS 2 was specifically designed to have minimal smile effects. As can be derived from Fig. 1, the camera is perpendicular to the projection of the spectrograph; however, the rotation of the camera within this perpendicular plane cannot be determined. Any rotational tilt would be convolved into the projection of the sensor's smile. As is witnessed in Fig. 2, the smile-tilt of the sensor is small (~1.15 nm over 652 spatial pixels).
It may be noted that the sensor’s smile-tilt is described on a sub-pixel scale. In developing the spectral calibration, the element lamp data was fit with a series of Gaussian curves. The relatively coarse spectral scale of the PHILLS 2 is not fine enough to detect the true position of the spectral peaks. Thus, the peaks that make up the observed spectral elements response were each modeled with a Gaussian curve. The position of the peak of this fit was then equated to the true spectral position of the element peak within the CCD. And it is these derived positions that are then propagated through the remainder of the spectral calibration.
Once the smile-tilt relationship is determined for all the recorded spectral lamp responses across the spatial dimension of the CCD, a smile-tilt map of the CCD can be generated (see Fig. 3). This map is then used to warp all future data (both laboratory calibration and field data) to a set wavelength vector.
3.2 Angular response characterization
The angular response characterization is a procedure developed to determine the viewing angle of every pixel within the spatial dimension of the sensor. A physically thin spectral light source is set up a fixed distance away from the sensor, far enough that its response within the sensor occupies only one spatial pixel in width. Once this distance is noted, great care is taken to determine the plane occupied by the source that is parallel to the focal plane of the sensor. Next, the position along this plane that lies directly in front of (perpendicular from) the center of the sensor's focal plane is found. The thin source is placed there, and the spatial position illuminated within the sensor is noted. The source is then repositioned along the parallel plane at predetermined intervals, and the corresponding positions of the response within the CCD are recorded. Employing the physical geometry of the sensor and the source throughout the trials, the angular displacements of the source are derived. These are then regressed against the observed CCD spatial positions, which in turn produces a map of the angular response of the sensor.
As can be seen in Fig. 4, the sensor’s angular response is nearly symmetrical. Also, the angular spacing is linear. This supports the conclusion determined during the spectral calibration that the spectrograph-camera alignment was appropriate. Ideally, the sensor’s keystone, the spatial equivalent of spectral smile, could be determined from this procedure. Unfortunately, the spectral source we employed for this test did not extend into the near infrared. Therefore, any determination of keystone over that region would be the result of extrapolation, and thus, the determination of the system’s keystone was not performed. However, like spectral smile, this sensor was developed to have minimal keystone. The spatially dependent angular response as a function of spectral position witnessed in the visible part of the spectrum supports this claim.
3.3 Radiometric calibration
Simply stated, radiometric calibration relates the digital counts recorded by the sensor to the true physical units of the signal being sampled. Although silicon CCDs are generally considered linear sensors with respect to intensity, we determined and verified this relationship by testing the PHILLS 2 with a ten-lamp, 40-inch-diameter Labsphere integrating sphere as our known source. The lamps and sphere were calibrated to NIST-traceable (National Institute of Standards and Technology) standards one month prior to our measurements.
The tungsten-halogen lamps utilized within the integrating sphere are red rich. This is not ideal, because the oceanic signals the instrument is designed to measure in the field are blue rich. As can be seen in Fig. 5, using the sphere measurements alone is not sufficient: most of the ocean target spectra lie outside the calibration range of the sphere, forcing the retrieved upwelling radiance values to be extrapolated from regions outside the calibration series. Left unaccounted for, this extrapolation exaggerates the effects of small flaws and imperfections within the spectrograph and camera. These exaggerated values become embedded within the radiometric calibration, and thus propagate to the field data when the calibration is applied. To overcome this problem, a series of filters was employed so that the calibration spectra more closely resembled the expected field spectra. The use of these filters over a range of lamp intensities allows us to provide a calibration series that covers almost the entire spectral range of the expected targets [23, 24].
The PHILLS 2 is a push-broom sensor, and thus by design different pixels in the cross track view the target at different angles. Accounting for the additional pathlength through the filter for off-angle measurements is critical prior to utilizing any filter measurements within the calibration. Using the angular response measurements outlined in the previous section, an estimate of the effective filter transmission can be derived for every viewing angle.
Before starting, several assumptions are needed. First, the filters are assumed to be of a uniform thickness. This assumption is satisfied since the Schott color glass filters utilized in this study had specifications that state that their front and back planes were parallel to within two arc minutes.
Second, it is assumed that the filter is placed within a plane parallel to the sensor's aperture. With this in mind, great care was used in placing the filter in the filter holder prior to the collection of the calibration measurements. It should be noted that the filter holder for the PHILLS 2 was integrated into the sensor's design. The filter holder, and thus the filter, is positioned in front of the sensor's fore optics. This allowed for consistency in filter measurements from one calibration to another. Also, because the filter holder is a permanent part of the sensor, its effect on the calibration measurements does not need to be considered prior to applying the calibration set to field data. We have observed that the filter holder reduces the impact of off-angle environmental stray light, acting much like the lens hood on traditional imaging systems. Thus, if the filter holder were not integrated into the sensor design, its effect on the calibration data stream would need to be accounted for prior to completing the calibration process.
The final assumption dealt with the influence of the internal transmission relative to the total transmission for off angle measurements. Prior to the calibration, the total transmission of each of the filters was measured at a zero off angle, using a Perkin Elmer Lambda 18 UV/VIS Spectrophotometer. However, the influence that the front plane, back plane, and internal medium had on the total transmission was not measured. This may account for small errors in determining the transmission at wider angles where the influence of the internal medium is a greater percentage of the total transmission.
The total filter transmission can be summarized by the following equation:
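The equation itself did not survive in this copy; reconstructed from the variable definitions that follow (the published notation may differ), it is presumably:

```latex
T_t = (1 - r)\,(1 - r')\,t_i
\tag{1}
```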
where: T_t is the total filter transmission, t_i is the internal transmittance of the filter media, r is the reflectance due to the front side of the filter (air-filter interface), and r' is the reflectance due to the back side of the filter (filter-air interface). All four parameters are a function of wavelength and angle. In order to determine the losses due to front- and back-plane reflectance, Snell's Law was employed to determine the path angle within the filter for a particular viewing angle:
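Snell's Law in the form used here, reconstructed with a wavelength-dependent filter index (the published notation may differ):

```latex
n_{air}\sin\theta_{air} = n_{filter,\lambda}\sin\theta_{filter,\lambda}
\tag{2}
```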
where: n is the index of refraction and θ is the corresponding angle. The index of refraction for the filter was supplied by the manufacturer. In the previous section, we measured the angular response of every spatial pixel of the sensor. With this angular information, the reflectance losses can be determined via Fresnel's formula [25, 26]:
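The standard Fresnel reflectance for unpolarized light, reconstructed from the definitions above (the published form may differ in notation), with the same expression applying to r' at the back (filter-air) interface:

```latex
r = \frac{1}{2}\left[
\frac{\sin^{2}(\theta_{air}-\theta_{filter,\lambda})}{\sin^{2}(\theta_{air}+\theta_{filter,\lambda})}
+ \frac{\tan^{2}(\theta_{air}-\theta_{filter,\lambda})}{\tan^{2}(\theta_{air}+\theta_{filter,\lambda})}
\right]
\tag{3}
```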
These expressions are valid if θ_air is not equal to zero (if θ_{filter,λ} equals zero, θ_air must also equal zero according to Eq. (2)). In the case that θ_air equals zero, the surface reflectance is equal to:
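The normal-incidence limit of the Fresnel expression, reconstructed here in its standard form:

```latex
r = \left(\frac{n_{filter,\lambda}-n_{air}}{n_{filter,\lambda}+n_{air}}\right)^{2}
\tag{4}
```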
The determination of the effective internal transmission utilizes Bouguer's Law (also known as the Beer-Lambert law), which states:
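Reconstructed from the pathlength definitions that follow, the law as applied here presumably relates the internal transmittance at angle N to that at normal incidence through the path ratio:

```latex
t_{i,\theta=N,\lambda} = t_{i,\theta=0,\lambda}^{\;d_{\theta=N,\lambda}/d_{\theta=0,\lambda}}
\tag{5}
```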
where: d_{θ=0,λ} and d_{θ=N,λ} are the distances through the filter at the zero and N angles, respectively. Using Snell's Law (Eq. (2)) and geometry, the ratio of the distances can be found via:
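For a plane-parallel filter, the geometric path ratio is presumably:

```latex
\frac{d_{\theta=N,\lambda}}{d_{\theta=0,\lambda}} = \frac{1}{\cos\theta_{filter,\lambda}}
\tag{6}
```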
And thus by combining Eqs. (3) through (6), the total transmission at an off angle of N can be solved by:
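One plausible reconstruction of the combined expression, assuming the zero-angle internal transmittance is recovered from the measured total transmission by dividing out the two surface losses:

```latex
T_{t,\theta=N,\lambda} =
\left(1-r_{\theta=N,\lambda}\right)\left(1-r'_{\theta=N,\lambda}\right)
\left[\frac{T_{t,\theta=0,\lambda}}
{\left(1-r_{\theta=0,\lambda}\right)\left(1-r'_{\theta=0,\lambda}\right)}
\right]^{1/\cos\theta_{filter,\lambda}}
\tag{7}
```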
Utilizing Eq. (7), a filter map that corresponds to the view angles of the sensor’s CCD can be generated for each filter. Using these effective filter transmission maps, the filters can be employed so that the range and spectral shape of the sphere data will more closely resemble optical signals seen in the field (see Fig. 5).
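The chain of corrections (Eqs. (2) through (7)) can be sketched as below; the refractive index, zero-angle transmission, and view angle are illustrative values, not properties of the actual Schott filters.

```python
# Sketch of the effective filter transmission at an off-nadir view angle.
import numpy as np

def effective_transmission(T0, n_filter, theta_air_deg, n_air=1.0):
    """Total filter transmission at a given view angle.

    T0: measured total transmission at normal incidence (per wavelength).
    n_filter: filter refractive index (from the manufacturer).
    theta_air_deg: view angle in air for a given CCD column.
    """
    th_a = np.radians(theta_air_deg)
    th_f = np.arcsin(n_air * np.sin(th_a) / n_filter)    # Snell's law, Eq. (2)
    r0 = ((n_filter - n_air) / (n_filter + n_air)) ** 2  # normal incidence, Eq. (4)
    if theta_air_deg == 0.0:
        r_n = r0
    else:
        # Fresnel reflectance for unpolarized light, Eq. (3)
        r_n = 0.5 * (np.sin(th_a - th_f) ** 2 / np.sin(th_a + th_f) ** 2
                     + np.tan(th_a - th_f) ** 2 / np.tan(th_a + th_f) ** 2)
    # Zero-angle internal transmittance: divide the two surface losses out of T0.
    t_i0 = T0 / (1.0 - r0) ** 2
    # Bouguer's law with the longer internal path, Eqs. (5)-(6).
    t_in = t_i0 ** (1.0 / np.cos(th_f))
    return (1.0 - r_n) ** 2 * t_in

T0 = 0.80  # illustrative zero-angle total transmission
print(round(effective_transmission(T0, n_filter=1.52, theta_air_deg=0.0), 3))   # 0.8
print(effective_transmission(T0, n_filter=1.52, theta_air_deg=20.0) < T0)       # True
```

Evaluating this for every column's view angle yields the effective transmission map for one filter.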
As already stated, the mismatch between the calibration lamp spectra and the anticipated field spectra could result in an incorrect calibration of the final data stream by exaggerating spectral stray light and other artifacts inherent to the sensor and projecting them onto the radiometric calibration. To characterize the instrument's combined spectral and radiometric response, a set of measurements was taken in front of the sphere through several different filters (three long-pass cutoff filters and three blue balancing filters). Utilizing these measurements and a measurement of the sphere without the filters present, the sensor-perceived filter transmission can be determined.
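Reconstructed from the variable definitions that follow, the perceived transmission is presumably the ratio of the filtered to the unfiltered sphere measurements:

```latex
PT_{\lambda} = \frac{dn_{\lambda}^{filtered}}{dn_{\lambda}^{unfiltered}}
\tag{8}
```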
where: PT_λ is the PHILLS 2 perceived filter transmission and dn_λ is the raw, dark-current-corrected digital number recorded by the sensor. Using Eq. (8), the perceived filter transmissions were determined (Fig. 6(a)). As can be seen, the perceived transmission does not approach the true filter transmission in the blue part of the spectrum.
In an attempt to explain this outcome, it was hypothesized that some undiffracted light (a zero-order effect) was, either directly or through a reflection within the sensor body, impinging on the blue side of the CCD. To combat this effect, a light trap was placed within the sensor to better shield the CCD from the zero-order light. The data were then recollected with the modified sensor, and the perceived filter transmissions were recalculated (Fig. 6(b)). The sensor design limits the influx of UV light, and second-order light greater than 515 nm is blocked by a filter. The combination of these measures and the light trap should ensure that only the first-order diffracted light is measured by the sensor.
While there was marked improvement after the placement of the trap, the perceived filter responses are still far from ideal. This outcome may be the result of two other possible sources of error: out-of-band response and frame transfer smear. Out-of-band response occurs when some percentage of photons is recorded on the sensor's CCD outside of their intended spectral position due to imperfections in the diffraction grating. This flaw is not unique to the PHILLS 2 sensor; it is present in all diffraction grating spectrometers.
Frame transfer smear is a possible flaw because the PHILLS 2 is a shutterless system. During the transfer of individual frames off the CCD, the CCD is left open and continues to collect data. The CCD transfers information in a circular "bucket-brigade" fashion. Consider a single chain of CCD wells in the spectral dimension. The chain is continuous: as one spectral pixel's information is delivered and discharged, the bucket is moved back to the end of the array and the remaining CCD values shift one well (bucket) closer to the read-out row. This process continues until the entire spectral dimension is read off and each bucket is back in its original position, ready for another integration period. However, while advancing back to their original positions and waiting to be read off, the remaining CCD wells acquire additional information. This additional information is not inherently theirs; it was intended for other spectral buckets. Frame transfer smear is thus the accumulation of the additional photons gained by the added integration time in different spectral locations. Fortunately, the read-out time is a very small fraction of the integration time, but the frame smear effects must still be accounted for in the calibration procedures.
While these are two very distinct problems (out-of-band response and frame transfer smear), their effects on the data stream are very similar. This similarity makes it difficult to distinguish which of the two caused the errors, and thus in our analysis we estimate their joint effect rather than their individual contributions to the data stream. Assuming this joint error function can be measured, translating the measured sensor response to the true response is relatively straightforward. However, directly measuring the function that would be needed to correct these misappropriations is very difficult. These effects can be described via the following:
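A reconstruction consistent with the description below, where p_{j→i} denotes the probability that a photon belonging to spectral position j is recorded at position i:

```latex
mdn_{1} = tdn_{1}\,p_{1\rightarrow 1} + \sum_{j\neq 1} tdn_{j}\,p_{j\rightarrow 1},
\qquad \sum_{i} p_{j\rightarrow i} = 1
\tag{9}
```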
where: tdn_1 is the true digital number associated with the first spectral position, mdn_1 is the measured digital number associated with the first spectral position, and p is the probability that describes the stray and smeared light. As Eq. (9) illustrates, the measured response of spectral position one is a function of the counts that truly belong in position one but were distributed to neighboring positions, the counts found in position one that belong to other spectral positions, and the counts that both belong in and are found in position one.
Using linear algebra, Eq. (9) can be expanded and rearranged to incorporate all the sensor’s spectral bands and their interactions with their neighbors simultaneously to determine the true digital counts of the data.
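In matrix form, with P the matrix of the probabilities p_{j→i}, the system presumably reads:

```latex
\mathbf{mdn} = \mathbf{P}^{\mathsf{T}}\,\mathbf{tdn}
\quad\Longrightarrow\quad
\mathbf{tdn} = \left(\mathbf{P}^{\mathsf{T}}\right)^{-1}\mathbf{mdn}
\tag{10}
```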
It should be noted that Eq. (10) describes a closed system; there is no possibility for photon gain or loss. Photons that are not registered by the sensor are not considered to exist within this model. For example, digital counts registered by the sensor according to the model belong only to other elements of the CCD and not to a source outside the CCD’s bounds. While these limitations exist, Eq. (10) utilizes all of the information that is available, and thus, it will be employed in its present form.
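A plausible form of the fitness function, reconstructed from the surrounding text: the candidate probability matrix is chosen to minimize, over all filters and lamp levels, the squared mismatch between the corrected perceived transmission and the independently measured transmission,

```latex
F(\mathbf{P}) = \sum_{\lambda}\left(
\frac{\left[(\mathbf{P}^{\mathsf{T}})^{-1}\,\mathbf{mdn}^{filtered}\right]_{\lambda}}
{\left[(\mathbf{P}^{\mathsf{T}})^{-1}\,\mathbf{mdn}^{unfiltered}\right]_{\lambda}}
- tft_{\lambda}\right)^{2}
```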
where: tft is the true filter transmission as determined by independent testing. By summing the fitness function over diverse spectral and illumination intensities, the determined probability function approaches the ideal distribution function of the sensor, and thus, this unbiased function should be appropriate to employ on a wide range of optical signals that will be viewed in the field. The perceived transmission spectra after the stray light probability was applied to the data can be seen in Fig. 6(c).
Once this probability matrix was determined, it was used to correct all of the calibration data (as well as all of the field data). The derived probability matrix characterizes the probability that a photon intended for one spectral position is reported in another. The matrix is independent of spatial CCD location, and thus, to be applied, it is run against the collected spectral signal for every spatial location within the CCD. The matrix is used to post-correct every frame of calibration data, and the stray light effects found within field data are addressed by applying it to all future frames of collected data.
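Applying the correction to a frame amounts to solving the linear system for every spatial column at once. The sketch below uses a toy nearest-neighbor leakage matrix, not the actual PHILLS 2 probabilities.

```python
# Sketch of applying a stray-light/frame-smear probability matrix to a frame.
import numpy as np

n_bands = 5
# Toy leakage model: each band leaks 5% of its counts into each spectral
# neighbor; rows of P sum to one, mirroring the closed-system assumption.
P = np.eye(n_bands) + 0.05 * (np.eye(n_bands, k=1) + np.eye(n_bands, k=-1))
P /= P.sum(axis=1, keepdims=True)

def correct_frame(measured_frame, P):
    """Recover true counts for every spatial column by solving P.T @ tdn = mdn."""
    return np.linalg.solve(P.T, measured_frame)

# Simulate a frame (bands x spatial columns), smear it, then undo the smear.
rng = np.random.default_rng(0)
true_frame = rng.uniform(100.0, 4000.0, size=(n_bands, 652))
measured = P.T @ true_frame              # forward model: smear the spectra
recovered = correct_frame(measured, P)
print(np.allclose(recovered, true_frame))  # True
```

Solving the system column-by-column like this avoids forming the explicit inverse, though either route implements Eq. (10).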
With the spectral stray light and frame smear accounted for, the radiometric relationship between the calibration source and the sensor can be determined. Using this corrected data for the blue-balance-filtered and unfiltered data over several illumination intensities, a linear regression was performed to determine the relationship between the observed data and the NIST-calibrated source (Fig. 7). This regression is performed for every element of the CCD to create a radiometric calibration map. It is this relationship that translates the digital counts collected by the camera in the field into the physical units of W m^-2 sr^-1 µm^-1.
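The per-element regression can be sketched as below; the gain and the set of lamp radiances are invented for illustration and do not reflect the actual sphere calibration values.

```python
# Sketch of the per-element radiometric regression: counts from several lamp
# levels regressed against the NIST-traceable sphere radiance.
import numpy as np

# Hypothetical sphere radiances (W m^-2 sr^-1 um^-1) for five lamp settings.
L_known = np.array([5.0, 10.0, 20.0, 40.0, 80.0])

# Simulated dark-corrected counts for one CCD element (noise-free toy model).
true_gain = 0.004                 # radiance per count, illustrative
dn = L_known / true_gain

slope, intercept = np.polyfit(dn, L_known, 1)  # radiance = slope*dn + intercept

def calibrate(dn_field):
    """Convert field digital counts to radiance with the fitted coefficients."""
    return slope * dn_field + intercept

print(round(calibrate(10000.0), 2))  # 10000 counts -> 40.0 W m^-2 sr^-1 um^-1
```

Repeating the fit for every CCD element yields the radiometric calibration map applied to field imagery.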
Prior to the application of the derived calibration maps to field collected data, the issue of polarization should be addressed. The effects of polarization were not measured in the calibration of this instrument. This is not to say that the influence of polarization was ignored. Polarization effects are difficult to accurately measure and at the time of this analysis, the lab was not setup to characterize its effects. The development of the needed procedures to make these measurements is underway, and polarization studies will become a part of future calibrations.
Even with the sensor’s response to polarization accounted for, it application to field collected data is not directly apparent. This instrument is an aircraft mounted imager. And unlike a laboratory or handheld field spectroradiometer, it collects data under very dynamic conditions. The a priori information about the scene geometry that is needed to properly configure a polarization filter prior to the collecting of the data is impossible to gather. Aircraft turbulence, continuously changing solar angles, surface wave fields, and alternating of the aircraft’s trajectories all contribute to the difficulty in utilizing a polarization filter. The future characterization of the sensor in regards to polarization will have little effect on the actual calibration of field collected data. It will, however, aid us in better determining the error bounds in which we trust our data.
In regards to the field deployments of the instrument, we have made every attempt to make measurements in which the effects of polarization within the targeted scene are at a minimum. Flight trajectories are set up to avoid sun glint and to follow the recommendations of Fougnie et al., who directly address polarization issues when taking optical data above the water's surface. Their results indicate that polarization has a minimal spectral effect on the optical signals that our instrument collects in the field.
As can be seen from the perceived filter comparisons (Figs. 6(a)-6(c)), the use of filters in the development of the calibration coefficients has a dramatic influence. However, the question remains: what effect do these filter-derived calibrations have on remotely sensed imagery collected in the field?
In an attempt to address this question, two sets of calibration coefficients were developed. The first set utilized all of the procedures outlined above. The second set was developed using the same steps, but without the use of any of the sphere filter data. Thus, the stray light – frame transfer smear correction was not applied prior to the generation of the radiometric calibration, and only the unfiltered sphere measurements were used in the regression that determined the second set’s radiometric calibration.
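The construction of the two calibration sets can be sketched as a per-band linear regression of known sphere radiance against dark-subtracted counts. This is a minimal illustration only, not the actual PHILLS 2 processing code; the function and variable names (including the `stray_light_correction` step, shown commented out) are hypothetical.

```python
import numpy as np

def radiometric_gains(dn, radiance):
    """Per-band linear regression of sphere radiance against sensor counts.

    dn:       (n_levels, n_bands) dark-subtracted counts from the sphere series
    radiance: (n_levels, n_bands) known sphere radiances at each lamp level
    Returns per-band (gain, offset) such that L = gain * DN + offset.
    """
    n_bands = dn.shape[1]
    gains = np.empty(n_bands)
    offsets = np.empty(n_bands)
    for b in range(n_bands):
        # degree-1 polyfit returns (slope, intercept) for this band
        gains[b], offsets[b] = np.polyfit(dn[:, b], radiance[:, b], 1)
    return gains, offsets

# Filtered calibration: correct stray light / frame smear first, then regress
# over the full (filtered + unfiltered) sphere series:
#   corrected = stray_light_correction(dn_all)      # hypothetical step
#   gains_f, offsets_f = radiometric_gains(corrected, radiance_all)
#
# Unfiltered calibration: regress over the unfiltered sphere levels only:
#   gains_u, offsets_u = radiometric_gains(dn_unfiltered, radiance_unfiltered)
```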
Once developed, these calibrations were applied individually to field data. During this application, the filtered calibration included the stray light correction on the field data prior to the calibration’s application; the application of the unfiltered calibration did not include this step. The field study selected for this analysis was the PHILLS 2 October 29th, 2002 flight over Looe Key, Florida (Fig. 8). This data set was selected because of the availability of high quality ground truth data collected coincident with the overflight. The PHILLS 2 data were collected at an altitude of 10,000 feet, which yielded a 2 meter ground spatial resolution. A detailed description of the flight can be found on the FERI website (http://www.flenvironmental.org/Projectpages/flightlogindex.htm).
With both calibrations applied, the effects of the atmosphere were removed so that the results could be compared to the ground truth measurements. Tafkaa, an atmospheric correction program developed by the Naval Research Laboratory - DC, was used for this step [28, 29]. Tafkaa uses look-up tables generated with a vector radiative transfer code, and it includes a correction for sky light reflected off the wind roughened sea surface. The model was run using scene invariant atmospheric parameters that were consistent with the environmental and atmospheric conditions witnessed during the data collection (Table 1).
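The end result of any such correction can be summarized by the standard simplification in which a modeled path radiance is subtracted from the at-sensor radiance and the remainder is normalized by the diffuse transmittance and downwelling irradiance. The sketch below shows only that final step; it is not Tafkaa itself, which uses full look-up tables, and all names are illustrative.

```python
import numpy as np

def simple_reflectance(Lt, Lpath, t_up, Ed):
    """Toy single-step atmospheric correction (not Tafkaa):
    Rrs = (Lt - Lpath) / (t_up * Ed), applied per band.

    Lt:    at-sensor radiance (per-band array)
    Lpath: modeled atmospheric path radiance (per-band array)
    t_up:  diffuse upward transmittance (scalar or per-band)
    Ed:    downwelling irradiance at the surface (per-band array)
    """
    return (Lt - Lpath) / (t_up * Ed)
```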
With the effects of the atmosphere removed, each of the data sets was compared to the optical signals measured at the ground truth stations. At the shallow water station (3.2 m), both data sets resembled the ground spectra (Fig. 9), although the filtered data spectrum had a marginally better fit. The minor errors seen in both data sets could be attributed to any combination of unaccounted-for sensor noise, flaws in the application of the atmospheric correction, inaccuracies in the geo-registration of the imagery and the ground spectra, or measurement differences between radiometers.
On the other hand, the filtered data were clearly superior to the unfiltered data in their comparison to the ground truth spectrum at the deep water (62.5 m) station (Fig. 10). This finding is consistent with the assertion that the spectral quality and intensity of the light used during an instrument’s calibration is critical for coastal and oceanic remote sensing instruments. The signal from the deep water target is so dark that the influences of the stray light and frame smear become more evident. The obvious deviations can be attributed to the need to extrapolate the unfiltered radiometric calibration in its application to this blue rich, dark data, as well as to the stray light and frame smear impacts on the calibration series.
The complex yet subtle optical signatures found throughout coastal and oceanic regions require instrumentation and corresponding calibrations that are appropriately sensitive. The promise of properly classifying these heterogeneous areas lies in the successful deployment of hyperspectral instrumentation, with its numerous, finely spaced wavebands. However, the high spectral detail that these systems gather requires proportionally high data throughput and processing. In addition, these instruments also need the dynamic range and signal-to-noise ratio necessary to accurately measure the ocean’s low light targets. As has been illustrated in both the development and deployment of these coastal remote sensing systems, the goals of fast and accurate passive optical imaging are often at odds with each other. With the advancement of imaging technology, development and deployment decisions should become less constrained. However, any gains granted by future technological advancements will be lost if the careful calibration of these systems is ignored.
No system is perfect. Grating spectrometer based systems typically have on the order of 10⁻⁴ spectral stray light. In addition, PHILLS 2 utilizes a frame transfer camera that imparts a small amount of spectral smear during readout. Other systems may have the same or other second-order artifacts, which must be corrected if one is to obtain well calibrated data for dark scenes like the ocean. To solve this problem, we have developed a technique that uses filters to extend the radiometric calibration range. We have also produced a calculation for the filter effect that accounts for all viewing angles of the system. Using the data collected with the filters in place, we derived a correction for the residual stray light and frame transfer smear inherent in the PHILLS 2 data stream. While this approach was developed for the PHILLS 2 sensor, it is universal in nature and can be applied to other spectral imaging systems to assess and correct similar problems.
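One common way to model and remove this kind of contamination, shown here as a minimal sketch rather than the actual PHILLS 2 correction, is to treat the measured spectrum as the true spectrum multiplied by a band-to-band distribution matrix whose small off-diagonal terms carry the stray light; inverting that matrix recovers the corrected spectrum. The flat 10⁻⁴ off-diagonal level below is a toy assumption.

```python
import numpy as np

def correct_stray_light(measured, A):
    """Invert a measured spectrum through a stray-light distribution matrix.

    A: (n_bands, n_bands) matrix; the diagonal carries the in-band response,
       the off-diagonal terms model spectral stray light.
    Solving A @ x = measured recovers the stray-light-corrected spectrum x.
    """
    return np.linalg.solve(A, measured)

# Toy example: a uniform off-diagonal stray-light level of 1e-4.
n = 64
A = np.eye(n) + 1e-4 * (np.ones((n, n)) - np.eye(n))
true = np.linspace(1.0, 0.1, n)   # dark, monotonically decreasing spectrum
measured = A @ true               # contaminated measurement
recovered = correct_stray_light(measured, A)
```

The effect is small for bright targets but matters for dark ocean scenes, where the summed off-diagonal contributions can rival the in-band signal.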
This work was funded by the Office of Naval Research. We would like to thank Daniel Dye (FERI) and Mubin Kadiwala (FERI) for their help in the development of the processing programs that aided us in the analysis of these data. We thank Jeff Bowles (NRL) and Daniel Korwan (NRL) for their help and hospitality during our visits to the NRL Remote Sensing Optics Lab (Code 7200). And finally, we want to thank Marcos Montes (NRL) for his guidance in the application of the atmospheric correction software to this data.
References and links
1. H. R. Gordon and A. Morel, Remote assessment of ocean color for interpretation of satellite visible imagery, A review (Springer-Verlag, New York, 1983), p. 114.
2. A. Morel, “Optical modeling of the upper ocean in relation to its biogenous matter content (Case I waters),” J. Geophys. Res. 93(C9), 10,749–10,768 (1988).
3. H. R. Gordon, O. B. Brown, R. H. Evans, J. W. Brown, R. C. Smith, K. S. Baker, and D. K. Clark, “A semianalytic radiance model of ocean color,” J. Geophys. Res. 93(D9), 10,909–10,924 (1988).
4. C. Hu, K. L. Carder, and F. E. Muller-Karger, “Atmospheric Correction of SeaWiFS Imagery over Turbid Coastal Waters: A Practical Method,” Remote Sensing of Environment 74, no. 2 (2000). [CrossRef]
5. D. Siegel, M. Wang, S. Maritorena, and W. Robinson, “Atmospheric correction of satellite ocean color imagery: the black pixel assumption,” Appl. Opt. 39, 3582–3591 (2000). [CrossRef]
7. K. Ruddick, F. Ovidio, and M. Rijkeboer, “Atmospheric correction of SeaWiFS imagery for turbid coastal and inland waters,” Appl. Opt. 39, 897–912 (2000). [CrossRef]
8. R. J. Birk and T. B. McCord, “Airborne Hyperspectral Sensor Systems,” IEEE AES Systems Magazine 9, 26–33 (1994). [CrossRef]
9. K. L. Carder, P. Reinersman, R. F. Chen, F. Müller-Karger, C. O. Davis, and M. Hamilton, “AVIRIS calibration and application in coastal oceanic environments,” Remote Sensing of Environment 44, 205–216 (1993). [CrossRef]
10. Z. Lee, K. L. Carder, R. F. Chen, and T. G. Peacock, “Properties of the water column and bottom derived from Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data,” J. Geophys. Res. 106, 11,639–11,652 (2001). [CrossRef]
11. D. D. R. Kohler, “An evaluation of a derivative based hyperspectral bathymetric algorithm,” Dissertation Cornell University, Ithaca, NY, (2001).
12. E. Louchard, R. Reid, F. Stephens, C. Davis, R. Leathers, and T. Downes, “Optical remote sensing of benthic habitats and bathymetry in coastal environments at Lee Stocking Island, Bahamas: A comparative spectral classification approach,” Limnol. Oceanogr. 48, 511–521 (2003). [CrossRef]
13. J. C. Sandidge and R. J. Holyer, “Coastal bathymetry from hyperspectral observations of water radiance,” Remote Sensing of Environment 65, 341–352 (1998). [CrossRef]
14. Z. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch, “Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization,” Appl. Opt. 38, 3831–3843 (1999). [CrossRef]
15. Z. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch, “Hyperspectral remote sensing for shallow waters. 1. A semianalytical model,” Appl. Opt. 37, 6329–6338 (1998). [CrossRef]
16. R. O. Green, “Spectral calibration requirement for Earth-looking imaging spectrometers in the solar-reflected spectrum,” Appl. Opt. 37, 683–690 (1998). [CrossRef]
17. C. O. Davis, J. Bowles, R. A. Leathers, D. Korwan, T. V. Downes, W. A. Snyder, W. J. Rhea, W. Chen, J. Fisher, W. P. Bissett, and R. A. Reisse, “The Ocean PHILLS Hyperspectral Imager: Design, Characterization, and Calibration,” Opt. Express 10, 210–221 (2002), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-10-4-210 [CrossRef] [PubMed]
18. A. Morel, “In-water and remote measurement of ocean color,” Boundary-Layer Meteorology 18, 117–201 (1980). [CrossRef]
19. C. M. Huang, B. E. Burke, B. B. Kosicki, R. W. Mountain, P. J. Daniels, D. C. Harrison, G. A. Lincoln, N. Usiak, M. A. Kaplan, and A. R. Forte, “A new process for thinned, back-illuminated CCD imager devices,” presented at the International Symposium on VLSI Technology, New York, (1989).
20. G. M. Williams, H. H. Marsh, and M. Hinds, “Back-illuminated CCD imagers for high information content digital photography,” presented at the Digital Solid State Cameras: Designs and Applications, San Jose, CA, (1998).
21. Scientific Imaging Technologies, Inc., “The CCD Imaging Array: An Introduction to Scientific Imaging Charge-Coupled Devices,” Beaverton, Oregon, (1994).
22. G. Meister, P. Abel, R. Barnes, J. Cooper, C. Davis, M. Godin, D. Goebel, G. Fargion, R. Frouin, D. Korwan, R. Maffione, C. McClain, S. McLean, D. Menzies, A. Poteau, J. Robertson, and J. Sherman, The First SIMBIOS Radiometric Intercomparison (SIMRIC-1), April-September 2001 (NASA Center for AeroSpace Information, Greenbelt, MD, 2002), Vol. NASA Technical Memorandum 2002-210006, p. 60.
23. C. Cattrall, K. L. Carder, K. J. Thome, and H. R. Gordon, “Solar-reflectance-based calibration of spectral radiometers,” Geophys. Res. Lett. 29, 2.1–2.4 (2002). [CrossRef]
24. P. N. Slater, S. Biggar, J. M. Palmer, and K. J. Thome, “Unified approach to absolute radiometric calibration in the solar reflective range,” Remote Sensing of Environment 77, 293–303 (2001). [CrossRef]
25. A. Ryer, Light Measurement Handbook (International Light, Inc., Newburyport, MA, 1997), p. 64.
26. C. D. Mobley, Light and Water (Academic Press, San Diego, CA, 1994), p. 592.
27. B. Fougnie, R. Frouin, P. Lecomte, and P.-Y. Deschamps, “Reduction of skylight reflection effects in the above-water measurement of diffuse marine reflectance,” Appl. Opt. 38, 3844–3856 (1999). [CrossRef]
28. B.-C. Gao, M. J. Montes, Z. Ahmad, and C. O. Davis, “Atmospheric correction algorithm for hyperspectral remote sensing of ocean color from space,” Appl. Opt. 39, 887–896 (2000). [CrossRef]
29. M. J. Montes, B. C. Gao, and C. O. Davis, “A new algorithm for atmospheric correction of hyperspectral remote sensing data,” presented at the GeoSpatial Image and Data Exploration II, Orlando, FL, 2001.
30. B.-C. Gao and C. O. Davis, “Development of a line by line based atmosphere removal algorithm for airborne and spaceborne imaging spectrometers,” presented at the Imaging Spectrometry III, 1997.