OSA Publishing

Early Posting

Accepted papers to appear in an upcoming issue

OSA now posts prepublication articles as soon as they are accepted and cleared for production. See the FAQ for additional information.

Improving the performance of interferometric imaging through the use of disturbance feedforward

Michael Boehm, Martin Glueck, Alexander Keck, Jörg Pott, and Oliver Sawodny

Doc ID: 277984 Received 25 Oct 2016; Accepted 20 Feb 2017; Posted 21 Feb 2017  View: PDF

Abstract: In this paper we present a disturbance compensation technique to improve the performance of interferometric imaging for extremely large ground-based telescopes, e.g. the Large Binocular Telescope (LBT), which serves as the application example in this contribution. The most significant disturbance sources at ground-based telescopes are wind-induced mechanical vibrations in the range of 8 Hz to 60 Hz. Traditionally, their optical effect is eliminated by feedback systems, such as the adaptive optics control loop combined with a fringe tracking system within the interferometric instrument. In this paper, accelerometers are used to measure the vibrations. These measurements are used to estimate the motion of the mirrors, i.e. tip, tilt and piston, with a dynamic estimator. Additional delay compensation methods are presented to cancel sensor network delays and actuator input delays, further improving the estimation result, particularly at higher frequencies. Because various instruments benefit from telescope vibration mitigation, the estimator is implemented as separate, independent software on the telescope, publishing the estimated values via multicast on the telescope's Ethernet. Every client capable of using and correcting the estimated disturbances can subscribe and use these values in a feedforward to its compensation device, e.g. the deformable mirror, the piston mirror of LINC-NIRVANA, or the fast pathlength corrector of LBTI. This easy-to-use approach eventually leveraged the presented technology for interferometric use at the LBT and now significantly improves the sky coverage, performance and operational robustness of interferometric imaging on a regular basis.
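
The multicast publish/subscribe architecture described above can be illustrated with a minimal sketch. The group address, port, packet layout and the get_tip_tilt_piston callable below are hypothetical placeholders, not the LBT's actual configuration.

# Minimal sketch of the estimator-side multicast publisher and a client subscriber.
# The group address, port and packet layout are illustrative assumptions only.
import socket
import struct
import time

MCAST_GROUP = "239.0.0.42"   # hypothetical multicast group
MCAST_PORT = 5007            # hypothetical port

def publish_estimates(get_tip_tilt_piston, rate_hz=1000.0):
    """Send the latest (tip, tilt, piston) estimate to all subscribers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    period = 1.0 / rate_hz
    while True:
        tip, tilt, piston = get_tip_tilt_piston()
        # timestamp plus three float64 estimates, packed little-endian
        packet = struct.pack("<dddd", time.time(), tip, tilt, piston)
        sock.sendto(packet, (MCAST_GROUP, MCAST_PORT))
        time.sleep(period)

def subscribe_estimates():
    """Client side: join the group and yield (t, tip, tilt, piston) tuples."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(32)
        yield struct.unpack("<dddd", data)

In this pattern, a client such as a fringe tracker or deformable-mirror controller would iterate over subscribe_estimates() and apply each sample as a feed-forward offset to its own corrector.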

Holographic Aperture Ladar with Range-Compression

Jason Stafford, David Rabb, and Bradley Duncan

Doc ID: 279459 Received 27 Oct 2016; Accepted 20 Feb 2017; Posted 21 Feb 2017  View: PDF

Abstract: Simultaneous range-compression and aperture synthesis is experimentally demonstrated with a stepped linear frequency modulated waveform and holographic aperture ladar. The resultant 3D data has high resolution in the aperture synthesis dimension and is recorded using a conventional low bandwidth focal plane array. Individual cross-range field segments are coherently combined using data driven registration, while range-compression is performed without the benefit of a coherent waveform. Furthermore, we demonstrate a synergistically enhanced ability to discriminate image objects due to the coaction of range-compression and aperture synthesis. We show that two objects can be precisely located in 3D space, despite being unresolved in two directions, due to resolution gains in both the range and azimuth cross-range dimensions.
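
As a hedged illustration of the range-compression step only, the sketch below compresses a stepped linear-frequency-modulated return into a range bin with an inverse FFT across the frequency steps. The step count, step size and target range are made-up, and this is not the authors' holographic processing or data-driven registration chain.

# Illustrative sketch: range compression of a stepped LFM waveform by an inverse FFT
# across the frequency steps, for a single point target. Parameters are assumptions.
import numpy as np

c = 3e8                      # speed of light, m/s
n_steps = 64                 # number of frequency steps (assumed)
delta_f = 20e6               # frequency step size, Hz (assumed)
target_range = 5.0           # one-way range to a point target, m (assumed)

# Field sample at each frequency step: phase advances by 4*pi*f*R/c (two-way path).
freqs = np.arange(n_steps) * delta_f
samples = np.exp(-1j * 4 * np.pi * freqs * target_range / c)

# Inverse FFT across the steps compresses the target into a range bin.
profile = np.fft.ifft(samples)
range_axis = np.arange(n_steps) * c / (2 * n_steps * delta_f)   # range-bin centers

print("range resolution ~ %.2f m" % (c / (2 * n_steps * delta_f)))
print("estimated range  ~ %.2f m" % range_axis[np.argmax(np.abs(profile))])

The estimated range is quantized to the nearest bin of width c/(2*N*delta_f), which is the usual resolution limit set by the total synthesized bandwidth.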

Fundamental limits of target detection performance in passive polarization imaging

Francois Goudail and Matthieu Boffety

Doc ID: 282810 Received 13 Dec 2016; Accepted 17 Feb 2017; Posted 21 Feb 2017  View: PDF

Abstract: We quantitatively determine the target detection performance that can be obtained with different passive polarization imaging architectures perturbed by signal-independent detection noise or signal-dependent Poisson shot noise. We compare the fully adaptive polarimetric imager and the best channel of a static polarimetric imager, and in each case, we compare the use of a polarizer or of a polarization beam-splitter in the polarization analyzing device. For all these configurations, we derive a closed-form expression of the target/background separability, and we quantify the performance gain brought by polarization imaging compared to standard intensity imaging. These results are useful to evaluate the fundamental limits of this gain and to determine, in practice, which type of imaging architecture is preferable for a given application.
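
The paper's closed-form separability expressions are not reproduced here. As a generic illustration only, the snippet below computes a contrast-to-noise style separability for the two noise models mentioned above, with assumed mean intensities; it is not the authors' derivation.

# Generic target/background separability (contrast-to-noise) under the two noise
# models discussed above. The intensities are arbitrary assumptions.
import numpy as np

I_target, I_background = 120.0, 100.0     # mean detected intensities (assumed)
sigma_det = 5.0                           # std of signal-independent detector noise

# Signal-independent (additive) noise: same variance in both regions.
sep_additive = abs(I_target - I_background) / np.sqrt(2 * sigma_det**2)

# Poisson shot noise: variance equals the mean intensity in each region.
sep_poisson = abs(I_target - I_background) / np.sqrt(I_target + I_background)

print(f"separability, additive noise: {sep_additive:.2f}")
print(f"separability, Poisson noise:  {sep_poisson:.2f}")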

Accurate wavelength calibration method for compact CCD-spectrometer

Yanchao Sun, Guo Xia, Chan Huang, Shiqun Jin, and Hongbo Lu

Doc ID: 283742 Received 28 Dec 2016; Accepted 17 Feb 2017; Posted 17 Feb 2017  View: PDF

Abstract: Wavelength calibration is an important step for a compact CCD spectrometer. In this paper, an accurate calibration method is proposed. A model of the line-profile spectrum is built first, followed by noise reduction, bandwidth correction and automatic peak-seeking. Experimental tests are conducted on a USB4000 spectrometer with a mercury-argon calibration light source. Compared with the traditional method, the results show that this wavelength calibration procedure achieves higher accuracy, with deviations within 0.1 nm.
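
The conventional pixel-to-wavelength step that such a procedure refines can be sketched as a low-order polynomial fit of peak pixel positions against known Hg-Ar line wavelengths. The peak pixel positions below are made-up placeholders; a real calibration would take them from the peak-seeking step applied to the measured Hg-Ar spectrum.

# Sketch of the polynomial pixel-to-wavelength fit underlying CCD-spectrometer
# calibration. Peak pixel positions are assumed values, not measured data.
import numpy as np

known_lines_nm = np.array([404.656, 435.833, 546.074, 696.543, 763.511])  # Hg/Ar lines
peak_pixels    = np.array([210.3,   330.0,   754.2,   1332.9,  1590.5])   # assumed

# Second-order polynomial wavelength(pixel) mapping, as commonly used for
# compact spectrometers.
coeffs = np.polyfit(peak_pixels, known_lines_nm, deg=2)
wavelength_of = np.poly1d(coeffs)

residuals_nm = known_lines_nm - wavelength_of(peak_pixels)
print("calibration residuals (nm):", np.round(residuals_nm, 3))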

Wave-Amplitude Synthesis Applied to Gaussian-Beam Scattering by an Off-Axis Sphere

Dimitrios Chrissoulidis and Elodie Richalot

Doc ID: 281458 Received 23 Nov 2016; Accepted 15 Feb 2017; Posted 16 Feb 2017  View: PDF

Abstract: Electromagnetic scattering of a Gaussian beam by an off-axis dielectric sphere is treated by the sum-of-waves formulation which is inherent in Lorenz-Mie theory. Each "wave" is a spherical eigenvector, defined in the natural frame of the scatterer, and the coefficient of that wave is the "wave amplitude." Decomposition of the beam into homogeneous plane waves lays the ground for a synthesis of the wave amplitudes, which is done by an integration over the polar angle that defines the direction of propagation of the plane-wave constituents of the beam. Concise analytical results are developed for (a) the electric-field intensity in every part of space, (b) the bistatic and monostatic radar cross sections of the scatterer, and (c) the power extracted from the beam by scattering and absorption. Numerical calculations are made for a spherical glycerol droplet of radius 1.5 µm that is excited by an adjacent, infrared, Gaussian beam of wavelength 1.1424 µm and spot size 2 µm. The numerical application manifests (a) how the beam is coupled with the droplet and (b) the effect of the droplet on the power intercepted by a receiver-end fibre placed on the beam axis, beyond the focal plane. Comparisons to numerical results obtained by FEM software (a) validate the sum-of-waves theory, (b) evaluate the performance of the code implementing that theory, and, succinctly, (c) manifest the limits of the plane-wave decomposition of the beam.

GREAT: A Gradient-based Color Sampling Scheme for Retinex

Michela Lecca, Alessandro Rizzi, and Raul Paolo Serapioni

Doc ID: 279301 Received 24 Oct 2016; Accepted 15 Feb 2017; Posted 17 Feb 2017  View: PDF

Abstract: Modeling the local spatial color distribution is a crucial step for the algorithms of the Milano-Retinex family. Here we present GREAT, a novel, noise-free Retinex implementation based on an image-aware spatial color sampling. As a member of the Retinex family, GREAT processes the chromatic channels of an input RGB image separately and outputs a new color image, called the color filtering, which is a qualitative estimate of the human color sensation. For each channel, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the chromatic intensity of each image pixel, called the target, by the average of the intensities of the selected edges, weighted by a function of their positions, gradient magnitudes and intensity relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as the pixels relevant to color filtering is justified by the important role that edges play in human color formation. The name GREAT comes from the expression "Gradient RElevAnce for reTinex", which refers to the threshold-based definition of a gradient relevance map for the edge selection and thus for the estimation of the color filtering.
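
A simplified per-channel sketch of the idea described above is given below: select edge pixels whose gradient magnitude exceeds a threshold, then rescale each target pixel by a weighted average of the selected edge intensities. The inverse-distance weighting and the threshold value are assumptions for illustration, not the authors' exact formulation.

# Simplified sketch of threshold-based edge selection and weighted rescaling,
# per chromatic channel. Weighting and threshold are illustrative assumptions.
import numpy as np

def great_channel(channel, grad_threshold=0.05):
    """channel: 2D float array in (0, 1]. Returns the rescaled (filtered) channel."""
    gy, gx = np.gradient(channel)
    grad_mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(grad_mag > grad_threshold)   # "relevant" edge pixels
    if ys.size == 0:
        return np.ones_like(channel)                 # no relevant edges: flat output
    edge_int = channel[ys, xs]
    edge_grad = grad_mag[ys, xs]

    h, w = channel.shape
    out = np.empty_like(channel)
    for i in range(h):
        for j in range(w):
            d = np.hypot(ys - i, xs - j) + 1.0       # distance from target to each edge pixel
            weights = edge_grad / d                  # nearer and stronger edges weigh more
            local_ref = np.sum(weights * edge_int) / np.sum(weights)
            out[i, j] = np.clip(channel[i, j] / max(local_ref, 1e-6), 0.0, 1.0)
    return out

# Toy usage: a bright patch on a dark background provides the relevant edges.
img = np.full((64, 64), 0.2)
img[20:44, 20:44] = 0.6
out = great_channel(img)
print(round(float(out.min()), 3), round(float(out.max()), 3))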

Transformation of a High-dimensional Colour Space for Material Classification

Huajian Liu, Sang-heon Lee, and Javaan Chahl

Doc ID: 279565 Received 27 Oct 2016; Accepted 15 Feb 2017; Posted 17 Feb 2017  View: PDF

Abstract: Images in red-green-blue (RGB) colour space often need to be transformed to other colour spaces for image processing or analysis. For example, the well-known hue-saturation-intensity (HSI) colour space, which is similar to human colour perception, can aid many computer vision applications. Some birds are tetrachromatic or pentachromatic, and their visual systems can sense ultraviolet as well as visible light. Similar to humans, the brains of birds might transform the received optical signals into a colour perception of hue, saturation and intensity. Inspired by this, transforming multispectral or hyperspectral images to a colour space which separates 'hue' from saturation and intensity would be useful for computer vision; however, related work is limited. Some methods can transform hyperspectral images to new colour spaces intended for display or human observation; however, most of them require dimension reduction, which can cause loss or distortion of the original data, and therefore the transformed colour spaces are not suitable for material classification. This paper describes a method which transforms high-dimensional images to a colour space called hyper-hue-saturation-intensity (HHSI), which is analogous to HSI in high dimensions. The transformation does not need dimension reduction and therefore preserves the original information. Experimental results showed that the hyper-hue is independent of saturation and intensity and is more suitable for material classification of proximal or remote sensing images of natural environments, where illumination is usually not controlled.
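
One way to realize the HSI analogy in n dimensions is sketched below: intensity as the band mean, saturation as the distance from the achromatic axis, and a normalized chromatic direction as the "hyper-hue". These particular definitions are illustrative assumptions and are not necessarily the authors' formulation.

# Sketch of a hyper-hue / saturation / intensity style decomposition of an n-band
# pixel spectrum, by analogy with HSI. Definitions are illustrative assumptions.
import numpy as np

def hhsi(spectrum):
    """spectrum: 1D array of n band values. Returns (hyper_hue, saturation, intensity)."""
    spectrum = np.asarray(spectrum, dtype=float)
    intensity = spectrum.mean()
    chroma = spectrum - intensity              # component off the achromatic (gray) axis
    saturation = np.linalg.norm(chroma)
    if saturation < 1e-12:
        hyper_hue = np.zeros_like(spectrum)    # achromatic pixel: hue undefined
    else:
        hyper_hue = chroma / saturation        # unit vector, invariant to brightness scaling
    return hyper_hue, saturation, intensity

# A pixel and a brighter version of it share the same hyper-hue direction.
pixel = np.array([0.12, 0.18, 0.35, 0.40, 0.22])
for scale in (1.0, 2.5):
    h, s, i = hhsi(scale * pixel)
    print(np.round(h, 3), round(s, 3), round(i, 3))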

Cavity-backed metasurface antennas and their application to frequency diversity imaging

Daniel Marks, Okan Yurduseven, and David Smith

Doc ID: 281777 Received 29 Nov 2016; Accepted 13 Feb 2017; Posted 13 Feb 2017  View: PDF

Abstract: Frequency diversity antennas with spatially structured radiation patterns reduce the reliance on actively switched elements for beamforming which become increasingly expensive and impractical as frequency increases. As the quality factor Q of a frequency diverse antenna increases, the antenna samples more spatial structure as the number of unique radiated coded spatial patterns correspondingly increases. Antennas that combine hollow cavities and metamaterial apertures achieve both large fractional bandwidth, in excess of 40%, and a high Q of 1600, so that each antenna radiates over 640 unique coded patterns. As compared to switched active antennas, such a passive antenna replaces the 50 antennas and switches that would produce at most (50/2)^2=625 unique patterns. Furthermore, the engineered metamaterial apertures enable a radiation efficiency exceeding 60% to be achieved in a single desired polarization. The theory of cavity-backed metasurface antennas is explained, and frequency diverse imaging is demonstrated with a pair of these antennas.
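
The pattern counts quoted above can be checked with a back-of-envelope estimate: the number of distinct radiated patterns is roughly the number of resolvable modes in the band, i.e. Q times the fractional bandwidth. This heuristic is used here only as a sanity check on the quoted figures and is not a formula stated by the authors.

# Back-of-envelope check of the pattern counts quoted above, using the heuristic
# "distinct patterns ≈ Q × fractional bandwidth" for a frequency-diverse cavity.
Q = 1600
fractional_bandwidth = 0.40

cavity_patterns = Q * fractional_bandwidth          # resolvable modes in the band
print("cavity-backed metasurface patterns ≈", int(cavity_patterns))   # 640

n_switched = 50                                     # switched antennas in the comparison
switched_patterns = (n_switched // 2) ** 2
print("switched-array unique patterns     =", switched_patterns)      # 625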

High performance analysis of layered nanolithography masks by a surface impedance generating operator

Alireza Gholipour, Reza Faraji-Dana, and Guy Vandenbosch

Doc ID: 279075 Received 26 Oct 2016; Accepted 12 Feb 2017; Posted 13 Feb 2017  View: PDF

Abstract: A fast computational algorithm is presented for the analysis of multilayered nanolithography masks (M-NLM). The technique used is an exact field-theoretical approach which can model the diffraction effects in subwavelength propagation regimes. The field scattered by the mask pattern is obtained in two steps. First, a surface impedance generating operator (SIGO) that relates the tangential electric field on the boundary of each etched area to its equivalent surface electric current is computed. Second, the exterior problem is formulated based on the equivalence theorem in electromagnetics and is combined with the SIGO model. These two steps may be executed in parallel, making the lithography simulation fast and numerically efficient. For an arbitrary 2D mask illuminated by a TMy-polarized incident wave, the required Green's functions are obtained. The Green's function of the interior problem is calculated directly in the spatial domain, while the complex images method is used for computing the Green's functions of the exterior multilayer problem. Based on this forward modeling procedure, a parameter sweep is performed and a binary mask pattern under normal-incidence coherent illumination is analyzed.

Optical performance characterisations of light-logging actigraphy dosimeters

Luke Price, Andrey Lyachev, and Marina Khazova

Doc ID: 280192 Received 10 Nov 2016; Accepted 10 Feb 2017; Posted 13 Feb 2017  View: PDF

Abstract: There are several wearable products specially developed or marketed for studying sleep, circadian rhythms and light levels. However, new recommendations relating to human physiological responses to light have changed what measurements researchers may demand. The performance of eleven light-logging dosimeters from eight manufacturers was compared. The directional and spectral sensitivities, linearity, dynamic range and resolution were tested for seven models and compared along with other published data. The sample mainly comprised light-logging actigraphy dosimeters wearable as badges, in accordance with measurement protocols for larger-scale field studies. A proposed standard for optical performance assessments is set out.

The spectral shift of a light wave on scattering from an ellipsoidal particle with arbitrary orientation

Zhimin Shi, Tao Wang, Darrick Hay, and Hao Wu

Doc ID: 282941 Received 15 Dec 2016; Accepted 10 Feb 2017; Posted 13 Feb 2017  View: PDF

Abstract: Within the accuracy of the first-order Born approximation, a general expression for the far-zone spectrum of a light wave on scattering from an arbitrarily-orientated ellipsoidal particle is derived. We show that the spectrum of the scattered field, in general, changes with the scattering azimuthal angle, displaying rotational non-symmetry. The influence of the orientation of the particle on the spectrum of the scattered field is discussed, and the relationship between the orientation of scattering particle and the distribution of the relative spectral shift of the scattered field is investigated.

Fractal and spinodal-decomposed turbidities of nanoporous glass: Fluctuation picture in turbid and transparent Vycor

Shigeo Ogawa and Jiro Nakamura

Doc ID: 275751 Received 12 Sep 2016; Accepted 10 Feb 2017; Posted 13 Feb 2017  View: PDF

Abstract: The light propagation and scattering in monolithic transparent nanoporous materials such as Vycor glasses exhibit two optical turbidities, both of which deviate slightly from the λ^−4 Rayleigh wavelength dependence in the visible (Vis) region: one is a transient white turbidity τ_f, characterized by a convex-upward dependence on the inverse fourth power of wavelength, and the other is the turbidity τ_sp inherent to the structural inhomogeneity, characterized by a convex-downward dependence. The former is attributed to a fractal-like percolation network of imbibed or drained pores, a consequence of transient imbibition or drainage of a wetting fluid into or from the pore space. The latter is attributed to the structural inhomogeneities inherent to the original dry porous glass, which is produced by spinodal decomposition. In this paper, we develop a general scheme to estimate the transmittance spectra of Vycor through the turbidities τ_f and τ_sp in the visible region on the basis of the theory of dielectric-constant fluctuations. We show the applicability, and the limitations, of the turbidity analysis of photospectroscopically measured data as a method to study the correlation functions that characterize the pore space and the structural features of isotropic transparent nanoporous media, on the presupposition that there exists no light attenuation other than scattering.
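
The basic link between turbidity and measured transmittance invoked above can be illustrated with Beer-Lambert attenuation and a baseline λ^−4 dependence. The coefficients and slab thickness below are made-up, and the fluctuation-theory corrections that distinguish τ_f from τ_sp are not modeled; this is only an illustration, not the paper's scheme.

# Illustrative sketch: transmittance from two Rayleigh-like turbidity terms via
# Beer-Lambert attenuation. All coefficients and the thickness are assumptions.
import numpy as np

wavelength_nm = np.linspace(400, 800, 9)          # visible range
lam = wavelength_nm * 1e-9                        # metres
thickness = 1e-3                                  # 1 mm slab (assumed)

A_sp = 1.0e-23                                    # strength of structural turbidity (assumed)
A_f  = 2.0e-23                                    # strength of transient turbidity (assumed)

tau_sp = A_sp / lam**4                            # spinodal-decomposition turbidity, 1/m
tau_f  = A_f  / lam**4                            # transient (imbibition/drainage) turbidity, 1/m

transmittance = np.exp(-(tau_sp + tau_f) * thickness)
for w, t in zip(wavelength_nm, transmittance):
    print(f"{w:5.0f} nm  T = {t:.3f}")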

Interaction of aberrations, diffraction, and quantal fluctuations determine the impact of pupil size on visual quality

Renfeng Xu, Huachun Wang, Larry Thibos, and Arthur Bradley

Doc ID: 279244 Received 20 Oct 2016; Accepted 06 Feb 2017; Posted 09 Feb 2017  View: PDF

Abstract: Purpose: To develop a computational approach that jointly assesses the impact of stimulus luminance and pupil size on visual quality. Methods: We compared traditional optical measures of image quality and those that incorporate the impact of retinal-illuminance-dependent neural contrast sensitivity. Visually weighted image quality was calculated for a presbyopic model eye with representative levels of chromatic and monochromatic aberrations as pupil diameter was varied from 7 mm to 1 mm, stimulus luminance from 2000 to 0.1 cd/m2, and defocus from zero to -2 diopters. The model included the effects of quantal fluctuations on neural contrast sensitivity. We tested the model's predictions by measuring contrast sensitivity for 5 cycles per degree gratings. Results: Unlike the traditional Strehl ratio and the visually weighted area under the modulation transfer function, the visual Strehl ratio derived from the optical transfer function was able to capture the combined impact of optics and quantal noise on visual quality. In a well-focused eye, provided retinal illuminance is held constant as pupil size varies, visual image quality scales approximately as the square root of illuminance because of quantum fluctuations, but optimum pupil size is essentially independent of retinal illuminance and quantum fluctuations. Conversely, when stimulus luminance is held constant (and therefore illuminance varies with pupil size), optimum pupil size increases as luminance decreases, thereby compensating partially for increased quantum fluctuations. However, in the presence of -1 and -2 diopters of defocus and at high photopic levels where Weber's law operates, optical aberrations and diffraction dominate image quality and pupil optimization. Similar behavior was observed in human observers viewing sinusoidal gratings. Conclusions: Optimum pupil size increases as stimulus luminance drops for the well-focused eye, and the benefits of small pupils for improving defocused image quality remain throughout the photopic and mesopic ranges. However, restricting pupils to <2 mm will cause significant reductions in best-focus vision at low photopic and mesopic luminances.
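
As a back-of-envelope illustration of the square-root dependence on retinal illuminance noted above, the sketch below converts the quoted luminances and a few pupil diameters into trolands (luminance in cd/m^2 times pupil area in mm^2) and applies a square-root (de Vries-Rose type) scaling. The absolute scale is arbitrary and only relative comparisons are meaningful.

# Illustration of sqrt(retinal illuminance) scaling in the quantum-limited regime.
# Retinal illuminance in trolands = luminance (cd/m^2) x pupil area (mm^2).
import numpy as np

luminances = [2000.0, 100.0, 1.0, 0.1]        # cd/m^2, spanning the range quoted above
pupil_diameters = [1.0, 2.0, 4.0, 7.0]        # mm

for L in luminances:
    for d in pupil_diameters:
        area_mm2 = np.pi * (d / 2.0) ** 2
        trolands = L * area_mm2
        relative_quality = np.sqrt(trolands)   # quantum-limited (de Vries-Rose) scaling
        print(f"L={L:7.1f} cd/m^2  pupil={d:.0f} mm  E={trolands:9.1f} Td  "
              f"sqrt(E)={relative_quality:7.1f}")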

Robust object tracking based on local discriminative sparse representation

Xin Wang, Siqiu Shen, Chen Ning, Yuzhen Zhang, and Guofang Lv

Doc ID: 272047 Received 21 Jul 2016; Accepted 01 Jan 2017; Posted 21 Feb 2017  View: PDF

Abstract: Although much success has been demonstrated in applying sparse representation to object tracking, most existing sparse-representation-based tracking methods are still not robust enough to challenges such as pose variations, illumination changes, occlusions, and background distractions. In this paper, we propose a robust object tracking algorithm based on local discriminative sparse representation. The key idea of our method is a novel local discriminative sparse representation for object appearance modeling, which helps to overcome appearance variations, occlusions, etc. A robust tracker based on this local discriminative sparse appearance model is then used to track the object over time. Additionally, an online dictionary update strategy is introduced for further robustness. Experimental results on challenging sequences demonstrate the effectiveness and robustness of the proposed method.
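
The generic building block behind such trackers, coding a candidate patch over a template dictionary and scoring it by reconstruction error, can be sketched with a small orthogonal matching pursuit. The random dictionary, patch and sparsity level below are placeholders; this is not the authors' local discriminative appearance model or dictionary update.

# Minimal sketch: sparse-code a candidate patch over a template dictionary with
# orthogonal matching pursuit and score it by reconstruction error.
import numpy as np

def omp(D, y, n_nonzero=2):
    """Greedy orthogonal matching pursuit: y ≈ D @ x with at most n_nonzero coefficients."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))     # atom most correlated with residual
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x, np.linalg.norm(residual)

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 20))                      # 20 template atoms of 8x8 patches
D /= np.linalg.norm(D, axis=0)                         # unit-norm columns
patch = D[:, 3] * 0.8 + D[:, 11] * 0.5 + 0.01 * rng.standard_normal(64)

x, err = omp(D, patch, n_nonzero=2)
print("selected atoms:", np.nonzero(x)[0], " reconstruction error:", round(float(err), 4))

In a tracker, each candidate sampled around the previous object location would be coded this way, and the candidate with the smallest (discriminatively weighted) reconstruction error would be taken as the new object state.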

Correction of turbulence-degraded underwater images using shift map analysis

Kalyan Halder, Manoranjan Paul, Murat Tahtali, Sreenatha Anavatti, and Manzur Murshed

Doc ID: 278065 Received 03 Oct 2016; Accepted 12 Dec 2016; Posted 13 Dec 2016  View: PDF

Abstract: In underwater imaging, the water waves cause severe geometric distortions and blurring of the acquired short-exposure images. Corrections for these distortions have been tackled reasonably well by previous efforts, but still need improvement in the estimation of pixel shift maps to increase the restoration accuracy. This paper presents a new algorithm that efficiently estimates the shift maps from the geometrically distorted video sequences and uses those maps to restore the sequences. A non-rigid image registration method is employed to estimate the shift maps of the distorted frames against a reference frame. The sharpest frame of the sequence, determined using a sharpness metric, is chosen as the reference frame. A k-means clustering technique is employed to discard frames that are too blurry and could cause inaccuracy in the shift maps' estimation. The estimated pixel shift maps are processed to generate the accurate shift map that is used to dewarp the input frames into their non-distorted forms. The proposed method is applied on several synthetic and real-world video sequences, and the obtained results exhibit significant improvements over the state-of-the-art methods.
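
Three of the steps described above can be sketched as follows: picking the reference frame with a sharpness metric (variance of the Laplacian is assumed here), discarding blurry frames with a simple two-cluster k-means over the sharpness scores, and dewarping a frame with a pixel shift map via scipy's map_coordinates. The non-rigid registration that produces the shift maps is omitted, and the toy frames and shift map are synthetic placeholders.

# Sketch of reference-frame selection, blurry-frame rejection and shift-map dewarping.
# The registration step is omitted; frames and shift map are synthetic placeholders.
import numpy as np
from scipy.ndimage import laplace, map_coordinates

def sharpness(frame):
    """Variance of the Laplacian: higher means sharper (assumed metric)."""
    return float(np.var(laplace(frame.astype(float))))

def split_sharp_blurry(scores, n_iter=20):
    """1-D two-means clustering; returns a boolean mask of frames in the sharper cluster."""
    scores = np.asarray(scores, dtype=float)
    c_lo, c_hi = scores.min(), scores.max()
    for _ in range(n_iter):
        assign_hi = np.abs(scores - c_hi) < np.abs(scores - c_lo)
        c_hi = scores[assign_hi].mean() if assign_hi.any() else c_hi
        c_lo = scores[~assign_hi].mean() if (~assign_hi).any() else c_lo
    return assign_hi

def dewarp(frame, shift_map):
    """shift_map: array of shape (2, H, W) giving per-pixel (dy, dx) displacements."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([yy + shift_map[0], xx + shift_map[1]])
    return map_coordinates(frame, coords, order=1, mode="reflect")

# Toy usage with synthetic frames and a smooth (uniform) synthetic shift map.
rng = np.random.default_rng(1)
frames = [rng.random((32, 32)) for _ in range(6)]
scores = [sharpness(f) for f in frames]
keep = split_sharp_blurry(scores)
reference = frames[int(np.argmax(scores))]
shift_map = 0.5 * np.stack([np.ones((32, 32)), -np.ones((32, 32))])   # half-pixel warp
print("kept frames:", np.nonzero(keep)[0],
      " dewarped mean:", round(float(dewarp(reference, shift_map).mean()), 4))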
