OSA Publishing

Early Posting

Accepted papers to appear in an upcoming issue

OSA now posts prepublication articles as soon as they are accepted and cleared for production. See the FAQ for additional information.

Phase estimation using phase gradients obtained through Hilbert transform

Jebathilagar Ivan and Ameen Yasir

Doc ID: 267070 Received 25 May 2016; Accepted 23 Aug 2016; Posted 24 Aug 2016  View: PDF

Abstract: An algorithm to extract phase in its unwrapped form from an interferogram having perturbed straight-line fringes is proposed and studied. Phase gradients are extracted from an interferogram using the Hilbert transform, and the phase is then estimated from these gradients using the method of least squares for the Hudgin geometry. The matrix inversion required in implementing the method of least squares for the Hudgin geometry is carried out analytically by exploiting the additional symmetries available in the Hudgin matrix. The consistency of the proposed algorithm is demonstrated through its implementation both on numerically generated interferograms and on interferograms measured in a Mach-Zehnder interferometric setup, where the respective imparted phases were random and corresponded to atmospheric-turbulence-like models.
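
A minimal numerical sketch of the two-step pipeline the abstract describes: Hilbert-transform demodulation of the fringes to obtain wrapped phase gradients, followed by least-squares integration on the Hudgin geometry. The sketch assumes NumPy/SciPy and a fringe carrier running along the second image axis, and it solves the Hudgin system iteratively with lsqr rather than with the analytic inversion developed in the paper.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.sparse import eye, kron, diags, vstack
from scipy.sparse.linalg import lsqr

def wrapped_phase_from_fringes(fringes):
    """Wrapped phase of a fringe pattern whose carrier runs along axis 1,
    estimated row by row via the Hilbert transform (analytic signal)."""
    analytic = hilbert(fringes - fringes.mean(axis=1, keepdims=True), axis=1)
    return np.angle(analytic)

def unwrapped_phase_hudgin(wrapped):
    """Least-squares integration of wrapped first differences on the Hudgin
    (forward-difference) geometry, solved iteratively with lsqr here; the
    paper carries out the corresponding matrix inversion analytically."""
    wrap = lambda p: (p + np.pi) % (2.0 * np.pi) - np.pi
    ny, nx = wrapped.shape
    gx = wrap(np.diff(wrapped, axis=1)).ravel()   # x phase gradients
    gy = wrap(np.diff(wrapped, axis=0)).ravel()   # y phase gradients

    def d(n):  # (n-1) x n forward-difference operator
        return diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))

    A = vstack([kron(eye(ny), d(nx)),             # differences along rows
                kron(d(ny), eye(nx))]).tocsr()    # differences along columns
    phi = lsqr(A, np.concatenate([gx, gy]))[0]    # phase, up to an additive constant
    return phi.reshape(ny, nx)

# Example: tilted fringes perturbed by a smooth Gaussian bump
y, x = np.mgrid[0:128, 0:128]
true_phase = 0.4 * x + 2.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 800.0)
fringes = 1.0 + np.cos(true_phase)
estimate = unwrapped_phase_hudgin(wrapped_phase_from_fringes(fringes))
```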

Adaptive Multi Focus Image Fusion Using Block Compressed Sensing With SPL Integration In Wavelet Domain

UNNI VS, Sai Subrahmanyam Gorthi, and Deepak Mishra

Doc ID: 267357 Received 24 Jun 2016; Accepted 23 Aug 2016; Posted 24 Aug 2016  View: PDF

Abstract: The need for image fusion in current image processing systems is increasing, mainly due to the growing number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or for other computer vision tasks. This paper presents a new approach to multi-focus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven CS recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a Block Compressive Sensing (BCS) method. An adaptive thresholding technique within the Smoothed Projected Landweber (SPL) recovery process reconstructs a high-resolution focused image from low-dimensional CS measurements of the out-of-focus images. The Discrete Wavelet Transform (DWT) and the Dual Tree Complex Wavelet Transform (DTCWT) are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the pdf parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. The proposed method is compared with the fusion scheme that does not employ the PL recovery, and it is observed that the proposed method outperforms the latter even with fewer samples.
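
A heavily simplified sketch of the acquisition/recovery machinery named in the abstract: blockwise random CS measurements followed by a projected-Landweber style reconstruction with wavelet soft-thresholding (PyWavelets assumed). The adaptive threshold, Wiener smoothing, DTCWT option, Laplacian-mixture modelling, and the fusion step itself are not reproduced; the block size, sampling rate, step size, and threshold below are arbitrary illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def bcs_measure(image, phi, block=16):
    """Block-based CS: the same random projection phi is applied to every
    non-overlapping block of the image."""
    h, w = image.shape
    blocks = (image.reshape(h // block, block, w // block, block)
                   .transpose(0, 2, 1, 3).reshape(-1, block * block))
    return blocks @ phi.T                      # one measurement vector per block

def bcs_pl_recover(meas, phi, shape, block=16, n_iter=100, step=0.5, thresh=0.02):
    """Projected-Landweber recovery with wavelet soft-thresholding: a crude
    stand-in for SPL (no Wiener smoothing, no adaptive threshold)."""
    h, w = shape
    nby, nbx = h // block, w // block
    x = np.zeros((nby * nbx, block * block))
    for _ in range(n_iter):
        x = x + step * (meas - x @ phi.T) @ phi          # gradient step
        img = (x.reshape(nby, nbx, block, block)
                .transpose(0, 2, 1, 3).reshape(h, w))    # reassemble image
        coeffs = pywt.wavedec2(img, "db4", level=3)      # sparsify in DWT domain
        coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, thresh, mode="soft")
                                      for c in detail) for detail in coeffs[1:]]
        img = pywt.waverec2(coeffs, "db4")[:h, :w]
        x = (img.reshape(nby, block, nbx, block)
                .transpose(0, 2, 1, 3).reshape(-1, block * block))
    return img

# Example: measure a smooth test image at a 25% sampling rate and recover it
rng = np.random.default_rng(0)
img = np.outer(np.hanning(64), np.hanning(64))
phi = rng.standard_normal((64, 256)) / 16.0              # 64 of 256 samples per block
recovered = bcs_pl_recover(bcs_measure(img, phi), phi, img.shape)
```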

Design of an autofocus capsule endoscope system and the corresponding 3D reconstruction algorithm

Wei Zhang, Yi-Tao Jin, Xin Guo, Jin-Hui Su, and Su-Ping You

Doc ID: 267730 Received 06 Jun 2016; Accepted 22 Aug 2016; Posted 23 Aug 2016  View: PDF

Abstract: A traditional capsule endoscope can only take two-dimensional (2D) images, and most of the images are not clear enough to be used for diagnosis. A three-dimensional (3D) capsule endoscope can help doctors make a quicker and more accurate diagnosis. However, blurred images negatively affect reconstruction accuracy. A compact, autofocus capsule endoscope system is designed in this study. Using a liquid lens, the system can be electronically controlled to autofocus without any moving elements. The depth of field of the system is in the 3–100 mm range and its field of view is about 110°. The images captured by this optical system are much clearer than those taken by a traditional capsule endoscope. A 3D reconstruction algorithm is presented to adapt to the zooming function of the proposed system. Simulations and experiments have shown that more feature points can be correctly matched and a higher reconstruction accuracy can be achieved by this strategy.

Calibration method for center of mass method to enlarge the solvable range of fluorescence lifetime

Jiangtao Xu, Jun Qiao, Kaiming Nie, and An Zhang

Doc ID: 267934 Received 08 Jun 2016; Accepted 22 Aug 2016; Posted 23 Aug 2016  View: PDF

Abstract: This paper presents a calibration method for the center of mass method (CMM), based on a rough rapid lifetime determination (RLD), to enlarge the solvable range of fluorescence lifetimes. The proposed method defines the ratio of two photon count numbers as a threshold parameter to characterize the length of the sample lifetime. When detecting long lifetimes beyond the threshold, a raw lifetime is first estimated through RLD. The raw lifetime is then compensated to obtain a precise one. Simulation results show that the solvable range is extended from T/τ > 4 to T/τ > 1.5 with less than 1% error. The extended range, with a guaranteed 40 dB SNR, enables higher-frequency laser pulses to resolve long lifetimes or incomplete decays, and has promising biomedical applications such as measurements of quantum dots.
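
For reference, the two estimators named in the abstract are standard: the center-of-mass method averages photon arrival times, and rapid lifetime determination compares counts in two gates. The sketch below (NumPy assumed) implements both and adds an illustrative fixed-point compensation for the finite measurement window T, using the fact that the center of mass of a truncated mono-exponential decay equals tau - T/(exp(T/tau) - 1); the paper's specific threshold parameter and RLD-based calibration are not reproduced.

```python
import numpy as np

def cmm_lifetime(t_bins, counts):
    """Plain center-of-mass lifetime estimate; unbiased only when the
    measurement window T is much longer than the lifetime."""
    return np.sum(t_bins * counts) / np.sum(counts)

def rld_lifetime(t_bins, counts, T):
    """Rapid lifetime determination from two equal gates of width T/2."""
    first_half = t_bins < T / 2
    d0, d1 = counts[first_half].sum(), counts[~first_half].sum()
    return (T / 2) / np.log(d0 / d1)

def cmm_lifetime_compensated(t_bins, counts, T, n_iter=30):
    """Illustrative window compensation: for a mono-exponential decay truncated
    at T, the center of mass is tau - T/(exp(T/tau) - 1); invert this relation
    by fixed-point iteration.  (The paper's RLD-based calibration differs.)"""
    m = cmm_lifetime(t_bins, counts)
    tau = max(m, 1e-9)
    for _ in range(n_iter):
        tau = m + T / np.expm1(T / tau)
    return tau

# Example at T/tau = 2, where the plain CMM estimate is strongly biased
rng = np.random.default_rng(1)
tau_true, T, n_bins = 5.0, 10.0, 256
t = (np.arange(n_bins) + 0.5) * (T / n_bins)
decay = np.exp(-t / tau_true)
counts = rng.poisson(1e4 * decay / decay.sum())
print(cmm_lifetime(t, counts), rld_lifetime(t, counts, T),
      cmm_lifetime_compensated(t, counts, T))
```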

Perifoveal L- and M-cone driven temporal contrast sensitivities at different retinal illuminances

Cord Huchzermeyer and Jan Kremers

Doc ID: 263622 Received 21 Apr 2016; Accepted 21 Aug 2016; Posted 22 Aug 2016  View: PDF

Abstract: We established a protocol, using a well-established LED stimulator with four independent primaries, to measure temporal contrast sensitivities driven by sine-wave modulation of the L- and M-cones in the perifovea using triple silent substitution. The stimulus was presented in an annular field (2° inner diameter; 13° outer diameter). We validated this technique by studying the contrast sensitivity in three color-normal observers at 10 different temporal frequencies between 1 and 28 Hz and over a large range of retinal illuminances between 0.07 and 587 phot Td, spanning the complete mesopic range. In one subject, sensitivities to counter-phase modulation of the L- and M-cones (L-minus-M) and to in-phase modulation of the L-, M-, and S-cones, putatively mediated by the parvo- and magnocellular retinogeniculate pathways respectively, were additionally measured. Furthermore, we measured temporal contrast sensitivities as a function of frequency at 294 phot Td in two protanopes, two deuteranopes, and one subject with S-cone monochromacy. The quality of isolation was satisfactory, and we were able to reproduce known physiological patterns of temporal vision, such as the typical temporal contrast sensitivity functions of the L- and M-cones and of the parvo- and magnocellular retinogeniculate pathways, as well as the light adaptation curves. These results will also help determine optimal stimulus conditions in future studies. The results in the dichromats and the S-cone monochromat support the quality of isolation and underpin the potential clinical value of our protocol.
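
For a four-primary stimulator, the triple silent substitution mentioned in the abstract reduces to solving a small linear system: the modulation depths of the four primaries are chosen so that only the targeted photoreceptor class sees contrast while the other three are silenced. The sketch below illustrates this with a purely hypothetical 4×4 excitation matrix; real cone and rod fundamentals and the stimulator's measured primary spectra would be used in practice, and this is not the authors' calibration.

```python
import numpy as np

# Hypothetical excitation matrix: rows = photoreceptor classes (L, M, S, rod),
# columns = the four LED primaries.  In practice each entry is the integral of
# a primary's spectrum with the corresponding photoreceptor fundamental.
A = np.array([
    [0.60, 0.30, 0.08, 0.02],   # L-cone excitation per unit primary level
    [0.35, 0.40, 0.15, 0.05],   # M-cone
    [0.02, 0.05, 0.55, 0.30],   # S-cone
    [0.10, 0.25, 0.35, 0.28],   # rods
])

def silent_substitution(A, target, contrast, background):
    """Primary modulation depths giving the requested Michelson contrast in the
    target photoreceptor while silencing the other three (triple silent
    substitution).  `background` is the mean excitation of each receptor."""
    desired = np.zeros(4)
    desired[target] = contrast * background[target]   # excitation amplitude
    return np.linalg.solve(A, desired)                # modulation of each primary

background = A @ np.array([1.0, 1.0, 1.0, 1.0])       # equal mean primary levels
m_L = silent_substitution(A, target=0, contrast=0.2, background=background)
print("L-cone-isolating primary modulations:", m_L)
print("resulting receptor modulations:", A @ m_L)     # only the L entry is non-zero
```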

Weakly Diverging to Tightly Focused Gaussian Beams: a Single Set of Analytic Expressions

Uri Levy and Yaron Silberberg

Doc ID: 269781 Received 06 Jul 2016; Accepted 21 Aug 2016; Posted 22 Aug 2016  View: PDF

Abstract: Analytic expressions describing all vector components of Gaussian beams, linearly polarized as well as radially polarized, are presented. These simple expressions, accurate to high powers in the divergence angle, were derived from a single-component vector potential. The vector potential itself, as in the 1979 work of L. W. Davis, was approximated by the first two terms of an infinite series solution of the Helmholtz equation. The expressions presented here were formulated so as to emphasize the dependence of the amplitude of the various field components on the beam’s divergence angle. We show that the amplitude of the axial component of a linearly polarized Gaussian beam scales as the divergence angle squared, whereas the amplitude of the cross-polarized component of a linearly polarized Gaussian beam scales as the divergence angle to the fourth power. Weakly diverging Gaussian beams as well as strongly focused Gaussian beams can be described by exactly the same set of mathematical expressions, up to a normalization constant. For a strongly focused linearly polarized Gaussian beam, the ellipticity of the dominant electric field component, typically calculated by the Debye-Wolf integral, is reproduced. For yet higher accuracy, terms with higher powers in the divergence angle are presented, but the inclusion of these terms is limited to low divergence angles and short axial distances.

Spectral changes of cosine-Gaussian-correlated Schell-model beams with rectangular symmetry scattered on a deterministic medium

Liu Zr, Xun Wang, Kelin Huang, and Deming Zhu

Doc ID: 270337 Received 14 Jul 2016; Accepted 20 Aug 2016; Posted 22 Aug 2016  View: PDF

Abstract: Based on the first-order Born approximation, analytical expressions for the far-zone spectrum of cosine-Gaussian-correlated Schell-model beams with rectangular symmetry (RCGSM beams) scattered by a deterministic medium are derived. Using the analytical formulas obtained, the changes in the scattered spectrum of the RCGSM beam are numerically investigated. The results show that several parameters (including the scattering directions sx and sy, the effective radius σ of the scattering medium, the initial beam’s correlation widths δx and δy, and the line width Γ0 of the incident spectrum) strongly influence the distribution of the normalized scattered spectrum in the far zone. These features of the scattered spectrum of RCGSM beams can be used to obtain information about the structure of a deterministic medium.

Controlling electromagnetic scattering with wire metamaterial resonators

Dmitry Filonov, Alexander Shalin, Ivan Iorsh, Pavel Belov, and Pavel Ginzburg

Doc ID: 265215 Received 13 May 2016; Accepted 14 Aug 2016; Posted 16 Aug 2016  View: PDF

Abstract: Manipulation of radiation is required for enabling a span of electromagnetic applications. Since the properties of antennas and scatterers are very sensitive to the surrounding environment, macroscopic artificially created materials are good candidates for shaping their characteristics. In particular, metamaterials enable control of both the dispersion and the density of electromagnetic states available for scattering from an object. As a result, a properly designed electromagnetic environment can govern wave phenomena. Here, the electromagnetic properties of scattering dipoles situated inside a wire medium (metamaterial) are analyzed both numerically and experimentally. The impact of the metamaterial geometry, the dipole arrangement inside the medium, and the frequency of the incident radiation on the scattering was studied. It is shown that the resonance of the dipole hybridizes with the Fabry–Pérot modes of the metamaterial, giving rise to a complete reshaping of the electromagnetic properties. Regimes of controlled scattering suppression and super-scattering were observed. The numerical analysis is in agreement with experiments performed in the GHz spectral range. The reported approach to scattering control with metamaterials could be directly mapped to the optical and infrared spectral ranges by exploiting the scalability of Maxwell's equations.

Monocular catadioptric panoramic depth estimation via caustics-based virtual scene transition

Lingxue Wang, Yu He, Yi Cai, and Wei Xue

Doc ID: 260255 Received 01 Mar 2016; Accepted 12 Aug 2016; Posted 12 Aug 2016  View: PDF

Abstract: Existing catadioptric panoramic depth estimation systems usually require two panoramic imaging subsystems to achieve binocular disparity. The system structures are complicated and only sparse depth maps can be obtained. We present a novel monocular catadioptric panoramic depth estimation method that achieves dense depth maps of panoramic scenes using a single unmodified conventional catadioptric panoramic imaging system. Caustics model the reflection of the curved mirror and establish the distance relationship between the virtual and real panoramic scenes to overcome the nonlinear problem of the curved mirror. Virtual scene depth is then obtained by applying our structure classification regularization to depth from defocus (DFD). Finally, real panoramic scene depth is recovered using the distance relationship. Our method's effectiveness is demonstrated in experiments.

Time behavior of focused vector beams

Ilya Golub and Svetlana Khonina

Doc ID: 268732 Received 20 Jun 2016; Accepted 11 Aug 2016; Posted 12 Aug 2016  View: PDF

Abstract: We elucidate the peculiarities of the time behavior of focused vector optical fields. In particular, for linear or radial incident polarizations, we demonstrate explicitly the π/2 phase delay between the transverse and longitudinal components of the field generated at the focus, i.e., their reaching peak values at different instants of the optical period. For clockwise circular polarization with a −1 order vortex, the longitudinal component is in phase with the transverse one. For clockwise circular polarization, for the same circular polarization with a +1 order vortex, and for radial polarization with a +1 order vortex, the longitudinal field component has a constant shape that rotates azimuthally in time, and it coexists with one or with both of the x and y field components. In addition, we show that the recently studied ultrafast rotating dipole produced by focusing an azimuthally polarized vortex beam (Opt. Lett. 41, 1605 (2016)) differs significantly from the pattern obtained by focusing circularly polarized light. The numerically calculated field component distributions are verified by simplifying the system with a narrow ring aperture, which allows precise analytical expressions to be obtained that confirm the phase relations between the different field components. These findings will have to be taken into account, or can be taken advantage of, when using vector beams to study light-matter interactions (particle manipulation and acceleration) and especially ultrafast optical phenomena.
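
The π/2 delay between transverse and longitudinal components can already be seen in the paraxial limit, where the divergence-free condition gives Ez ≈ (i/k) ∂Ex/∂x for an x-polarized beam; the factor i is a quarter-period shift. The short check below (NumPy assumed) is only this paraxial argument, not the full vectorial focal-field calculation used in the paper.

```python
import numpy as np

# Paraxial, x-polarized Gaussian beam at its waist.  From div(E) = 0 with a
# slowly varying envelope, Ez ~ (i/k) dEx/dx, so Ez carries an extra factor i,
# i.e. a quarter-period (pi/2) delay relative to Ex.
wavelength, w0 = 1.0, 2.0           # arbitrary units
k = 2 * np.pi / wavelength
x = np.linspace(-4 * w0, 4 * w0, 2049)
Ex = np.exp(-x**2 / w0**2)          # transverse component (real at the waist)
Ez = (1j / k) * np.gradient(Ex, x)  # longitudinal component

ix = np.argmax(np.abs(Ez))          # look off-axis, where Ez is largest
delay = np.angle(Ez[ix]) - np.angle(Ex[ix])
print("phase(Ez) - phase(Ex) =", delay, "rad (expect +/- pi/2)")
```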

Optical Properties of V Groove Silicon Nitride Trench Waveguides

Qiancheng Zhao, Yuewang Huang, and Ozdal Boyraz

Doc ID: 266760 Received 23 May 2016; Accepted 10 Aug 2016; Posted 11 Aug 2016  View: PDF

Abstract: We numerically investigate the mode properties of V-groove silicon nitride trench waveguides based on experimental results. The trench waveguides are suitable for nonlinear applications. By manipulating the waveguide thicknesses, the waveguides can achieve zero dispersion or a maximized nonlinear parameter of 0.219 W⁻¹·m⁻¹ at 1550 nm. Broadband four-wave mixing with a gain of 5.545 m⁻¹ is presented as an example. The waveguides can also be applied in sensing applications with an optimized evanescent intensity ratio. By etching away the top flat slabs, the wide trapezoidal trench waveguides can be utilized for plasmonic sensing due to their TE fundamental mode.
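
As a rough companion to the four-wave-mixing example quoted in the abstract, the sketch below evaluates the textbook degenerate-FWM parametric gain g = sqrt((gamma*P)^2 - (kappa/2)^2) with kappa = beta2*dOmega^2 + 2*gamma*P. Only gamma is taken from the abstract; the pump power and group-velocity dispersion are assumed values, so the numbers are illustrative rather than a reproduction of the paper's mode simulations.

```python
import numpy as np

c = 2.998e8                       # m/s
gamma = 0.219                     # W^-1 m^-1, nonlinear parameter from the abstract
beta2 = -5e-27                    # s^2/m, assumed anomalous group-velocity dispersion
P = 25.0                          # W, assumed pump peak power

lam_pump = 1550e-9
lam_sig = np.linspace(1450e-9, 1650e-9, 1001)
d_omega = 2 * np.pi * c * (1 / lam_sig - 1 / lam_pump)   # signal-pump detuning
kappa = beta2 * d_omega**2 + 2 * gamma * P               # total phase mismatch
gain = np.sqrt(np.maximum((gamma * P)**2 - (kappa / 2)**2, 0.0))  # m^-1

print("peak parametric gain: %.3f m^-1 (gamma*P = %.3f m^-1)"
      % (gain.max(), gamma * P))
```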

Three-dimensional polarization algebra

Colin Sheppard, Marco Castello, and Alberto Diaspro

Doc ID: 268402 Received 14 Jun 2016; Accepted 08 Aug 2016; Posted 09 Aug 2016  View: PDF

Abstract: If light is focused or collected with a high numerical aperture lens, as may occur in imaging and optical encryption applications, polarization should be considered in three dimensions (3D). The matrix algebra of polarization behavior in 3D is discussed. It is useful to convert between the Mueller matrix and two different Hermitian matrices, representing an optical material or system, which appear in the literature. Explicit transformation matrices for converting the column vector forms of these different matrices are extended to the 3D case, where they are large (81 × 81) but can be generated using simple rules. It is found that there is some advantage in using a generalization of the Chandrasekhar phase matrix treatment, rather than that based on the Gell-Mann matrices, as the resultant matrices are of simpler form and reduce to the 2D case more easily. Explicit expressions are given for the 3D complex field components in terms of the Chandrasekhar-Stokes parameters.
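
One of the two 3D formalisms the abstract compares expands the 3×3 polarization (coherency) matrix in the Gell-Mann matrices; the expansion coefficients act as generalized Stokes parameters (normalization conventions vary between papers). A minimal NumPy sketch of that expansion and its inverse is given below; the Chandrasekhar-style treatment favored in the paper, and the 81×81 Mueller-matrix conversions, are not reproduced.

```python
import numpy as np

# Gell-Mann matrices: together with the identity they form a basis for 3x3
# Hermitian matrices, so any 3D coherency matrix can be expanded in them.
L = np.zeros((8, 3, 3), dtype=complex)
L[0][0, 1] = L[0][1, 0] = 1
L[1][0, 1], L[1][1, 0] = -1j, 1j
L[2][0, 0], L[2][1, 1] = 1, -1
L[3][0, 2] = L[3][2, 0] = 1
L[4][0, 2], L[4][2, 0] = -1j, 1j
L[5][1, 2] = L[5][2, 1] = 1
L[6][1, 2], L[6][2, 1] = -1j, 1j
L[7] = np.diag([1, 1, -2]) / np.sqrt(3)

def generalized_stokes(C):
    """Expansion coefficients of the coherency matrix C in {I, Gell-Mann}."""
    s0 = np.trace(C).real / 3.0                               # Tr(I I) = 3
    s = np.array([np.trace(C @ Li).real / 2.0 for Li in L])   # Tr(Li Lj) = 2 dij
    return s0, s

def coherency_from_stokes(s0, s):
    """Inverse expansion: rebuild C from its generalized Stokes parameters."""
    return s0 * np.eye(3) + np.tensordot(s, L, axes=1)

# Round trip on a random valid coherency matrix (Hermitian, positive semi-definite)
rng = np.random.default_rng(2)
E = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
C = E @ E.conj().T / 5
assert np.allclose(coherency_from_stokes(*generalized_stokes(C)), C)
```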

An updated version of interim connection space LabPQR for spectral color reproduction: LabLab

Qian Cao, Xiaoxia Wan, Junfeng Li, and Jinxing Liang

Doc ID: 268266 Received 13 Jun 2016; Accepted 08 Aug 2016; Posted 10 Aug 2016  View: PDF

Abstract: In this paper we propose a new interim connection space, named LabLab, which is an updated version of LabPQR that overcomes the drawback that the last three dimensions of LabPQR have no definite colorimetric meaning. We extend and improve the method by which the first three dimensions of LabPQR are deduced to obtain an interim connection space consisting of two sets of CIELAB values under different illuminants, by means of the computational formula for the CIEXYZ tristimulus values combined with a least-squares best fit. The improvement obtained from the proposed method is tested by compressing and reconstructing the reflectance spectra of 1950 Natural Color System color chips and 6160 Canon IPF5100 printer color patches measured by different spectrophotometers, as well as six multispectral images acquired by multispectral image acquisition systems, using 1600 glossy Munsell color chips as training samples. The performance is evaluated by the mean values of the color differences between the original and reconstructed spectra under the CIE 1931 standard colorimetric observer and the CIE standard illuminants D50, D55, D65, D75, F2, F7, F11 and A. The mean and maximum values of the root mean square errors between the original and reconstructed spectra are also calculated. The experimental results show that the three proposed LabLab interim connection spaces significantly outperform PCA, LabPQR, XYZLMS, XYZXYZ and LabRGB in colorimetric reconstruction accuracy, at the cost of a slight reduction in spectral reconstruction accuracy, and that the illuminant independence of the color differences of the LabLab interim connection spaces surpasses that of the other interim connection spaces except PCA. In addition, the presented LabLab interim connection spaces are well compatible with the extensively used colorimetric management systems, since each of their dimensions has a definite colorimetric meaning and is perceptually uniform.
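
Schematically, a LabPQR-style interim connection space (ICS) works by mapping each training reflectance to a low-dimensional colorimetric vector and learning a least-squares transform back to the full spectrum. The sketch below follows that scheme with a 6-D ICS built from colorimetric values under two illuminants, mirroring the abstract's construction, but it uses placeholder weighting matrices and raw tristimulus-like values instead of real color matching functions and CIELAB coordinates, and synthetic smooth reflectances instead of Munsell chips.

```python
import numpy as np

rng = np.random.default_rng(3)
n_wl = 31                                  # e.g. 400-700 nm in 10 nm steps

# Placeholder colorimetric weights: columns play the role of color matching
# functions multiplied by two different illuminant spectra.  Real CMF and
# illuminant data (and CIELAB rather than raw tristimulus values) are used in
# the paper.
W1 = np.abs(rng.standard_normal((n_wl, 3)))
W2 = np.abs(rng.standard_normal((n_wl, 3)))

def to_ics(R):
    """6-D interim-connection-space coordinates: colorimetric values of each
    reflectance under the two illuminants, stacked."""
    return np.hstack([R @ W1, R @ W2])

wl_norm = np.linspace(-1.0, 1.0, n_wl)
poly_basis = np.vstack([np.ones(n_wl), wl_norm, wl_norm**2, wl_norm**3])

def synthetic_reflectances(n):
    """Smooth synthetic reflectances (stand-ins for the Munsell training chips)."""
    return np.clip(0.5 + 0.25 * rng.standard_normal((n, 4)) @ poly_basis, 0.0, 1.0)

R_train, R_test = synthetic_reflectances(1600), synthetic_reflectances(100)

# Least-squares map from the 6-D ICS back to the full spectra, then reconstruct
M = np.linalg.lstsq(to_ics(R_train), R_train, rcond=None)[0]
R_rec = to_ics(R_test) @ M
rms = np.sqrt(np.mean((R_rec - R_test) ** 2, axis=1))
print("mean spectral RMS reconstruction error:", rms.mean())
```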

Performance comparison of fully adaptive and static passive polarimetric imagers in the presence of intensity and polarization contrast

Francois Goudail and Matthieu Boffety

Doc ID: 265115 Received 12 May 2016; Accepted 06 Aug 2016; Posted 09 Aug 2016  View: PDF

Abstract: We address the comparison of contrast improvement obtained with a fully adaptive polarimetric imager and the best channel of a static polarimetric imager in the presence of both intensity and polarization difference between the target and the background. We develop an in-depth quantitative study of the performance loss incurred by a static imager compared to a fully adaptive one in this case. These results are useful to make a well-informed choice between these two polarimetric imaging architectures in a given application.

Application of Derivative Matrices of Skew Rays to Design of Compound Dispersion Prisms

Psang Dain Lin

Doc ID: 265432 Received 17 May 2016; Accepted 05 Aug 2016; Posted 05 Aug 2016  View: PDF

Abstract: Numerous optimization methods have been developed in recent decades for optical system design. However, these methods rely heavily on ray tracing and finite difference techniques to estimate the derivative matrices of the rays. Consequently, the accuracy of the results obtained from these methods is critically dependent on the incremental step size used in the tuning stage. To overcome this limitation, the present study proposes a comprehensive methodology for the design of compound dispersion prisms based on the first- and second-order derivative matrices of skew rays. The proposed method facilitates the analysis and design of prisms with respect to arbitrary system variables and provides an ideal basis for automatic prism design applications. Four illustrative examples are given. It is shown that the optical quantities required to evaluate the prism performance can be extracted directly from the proposed derivative matrices. In addition, it is shown in this study that the single-element 3-D prism can have the same deviation angle and spectral dispersion as the 2-D compound prism.

Recognizing blurred, non-frontal, illumination and expression variant partially occluded faces

Abhijith Punnappurath and Ambasamudram Rajagopalan

Doc ID: 259294 Received 25 Apr 2016; Accepted 02 Aug 2016; Posted 11 Aug 2016  View: PDF

Abstract: The focus of this paper is on the problem of recognizing faces across space-varying motion blur, changes in pose, illumination, and expression, as well as partial occlusion, when only a single image per subject is available in the gallery. We show how the blur incurred due to relative motion between the camera and the subject during exposure can be estimated from the alpha matte of pixels that straddle the boundary between the face and the background. We also devise a strategy to automatically generate the trimap required for matte estimation. Having computed the motion via the matte of the probe, we account for pose variations by synthesizing, from the intensity image of the frontal gallery, a face image that matches the pose of the probe. To handle illumination and expression variations, and partial occlusion, we model the probe as a linear combination of nine blurred illumination basis images in the synthesized non-frontal pose, plus a sparse occlusion. We also advocate a recognition metric that capitalizes on the sparsity of the occluded pixels. The performance of our method is extensively validated on synthetic as well as real face data.
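
The occlusion model in the abstract (probe as a linear combination of nine blurred illumination basis images plus a sparse error) can be prototyped with any off-the-shelf L1 solver. The sketch below uses scikit-learn's Lasso on the stacked dictionary [basis | identity], where the identity block absorbs the sparse occlusion; blur estimation, pose synthesis, and the paper's recognition metric are omitted, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso  # generic L1 solver, assumed available

def fit_probe(probe, basis, alpha=1e-3):
    """Model a vectorized probe as  probe ~ basis @ c + e  with e sparse
    (occluded pixels).  A single Lasso over the stacked dictionary
    [basis | identity]; a small alpha keeps the shrinkage of c mild."""
    n_pix, n_basis = basis.shape
    D = np.hstack([basis, np.eye(n_pix)])
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    model.fit(D, probe)
    c = model.coef_[:n_basis]          # illumination coefficients
    e = model.coef_[n_basis:]          # sparse occlusion estimate
    residual = probe - basis @ c - e
    return c, e, np.linalg.norm(residual)

# Toy example: 9 random "illumination basis" images, 20% of pixels occluded
rng = np.random.default_rng(4)
n_pix = 400
basis = rng.random((n_pix, 9))
c_true = rng.random(9)
probe = basis @ c_true
occluded = rng.choice(n_pix, n_pix // 5, replace=False)
probe[occluded] = 1.0                  # overwrite occluded pixels with a constant
c_hat, e_hat, res = fit_probe(probe, basis)
```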

Color characterization of coatings with diffraction pigments

Alejandro Ferrero, Berta Bernad, Joaquin Campos-Acosta, Esther Perales, José luis Velázquez, and Francisco Martínez-Verdú

Doc ID: 267930 Received 09 Jun 2016; Accepted 31 Jul 2016; Posted 24 Aug 2016  View: PDF

Abstract: Coatings with diffraction pigments exhibit high iridescence, which needs to be characterized to describe their appearance. The spectral Bidirectional Reflectance Distribution Function (BRDF) of six coatings with SpectraFlair diffraction pigments was measured using the robot-arm-based goniospectrophotometer GEFE, designed and developed at CSIC. Principal Components Analysis (PCA) has been applied to study the coatings’ BRDF data. From the data evaluation, and based on theoretical considerations, we propose a relevant geometric factor for studying the spectral reflectance and color gamut variation of coatings with diffraction pigments. At fixed values of this geometric variable, the spectral BRDF component due to diffraction is almost constant. Commercially available portable goniospectrophotometers, extensively used in several industries (automotive and others), have to be provided with more aspecular measurement angles to characterize goniochromatic coatings based on diffraction pigments.
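
The PCA step mentioned in the abstract is the standard decomposition of a measurement matrix whose rows are measurement geometries and whose columns are wavelength bands. A minimal NumPy/SVD sketch on synthetic stand-in data is shown below; the actual GEFE BRDF measurements and the proposed geometric factor are not reproduced here.

```python
import numpy as np

# Synthetic stand-in for measured spectral BRDF data: one row per measurement
# geometry (incidence/detection pair), one column per wavelength band.
rng = np.random.default_rng(5)
n_geom, n_wl = 500, 36
true_components = rng.standard_normal((3, n_wl))          # 3 latent spectra
weights = rng.random((n_geom, 3))
brdf = weights @ true_components + 0.01 * rng.standard_normal((n_geom, n_wl))

# PCA via SVD of the mean-centered data matrix
mean_spectrum = brdf.mean(axis=0)
U, S, Vt = np.linalg.svd(brdf - mean_spectrum, full_matrices=False)
explained = S**2 / np.sum(S**2)

print("variance explained by the first 3 components:", explained[:3].sum())
scores = (brdf - mean_spectrum) @ Vt[:3].T    # per-geometry loadings on PC1-3
```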

Detection and removal of fence occlusions in an image using a video of the static/dynamic scene

Sankaraganesh Jonna, Krishna Nakka, Vrushali Khasare, Rajiv Sahay, and Mohan Kankanhalli

Doc ID: 270635 Received 14 Jul 2016; Accepted 26 Jul 2016; Posted 05 Aug 2016  View: PDF

Abstract: The advent of inexpensive smartphones, tablets, and phablets equipped with cameras has resulted in the average person capturing cherished moments as images and videos and sharing them on the internet. However, at several locations an amateur photographer is frustrated with the captured images; for example, the object of interest might be occluded or fenced. Currently available image de-fencing methods in the literature are limited by non-robust fence detection and can handle only static occluded scenes whose video is captured by constrained camera motion. In this work, we propose an algorithm to obtain a de-fenced image using a few frames from a video of the occluded static or dynamic scene. We also present a new fenced image database captured under challenging scenarios such as clutter, poor lighting, viewpoint distortion, etc. Initially, we propose a supervised learning based approach to detect fence pixels and validate its performance with qualitative as well as quantitative results. We rely on the idea that free-hand panning of the fenced scene is likely to render visible, in other frames of the captured video, pixels that are hidden in the reference frame. Our approach necessitates the solution of three problems: (i) detection of the spatial locations of fences/occlusions in the frames of the video, (ii) estimation of the relative motion between the observations, and (iii) data fusion to fill in the occluded pixels of the reference image. We model the de-fenced image as a Markov random field and obtain its maximum a posteriori estimate by solving the corresponding inverse problem. Several experiments on synthetic and real-world data demonstrate the effectiveness of the proposed approach.
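
Of the three sub-problems listed in the abstract, the data-fusion step is the easiest to sketch: once fence masks and inter-frame motion are known, each occluded reference pixel is filled from the first frame in which it is visible. The toy NumPy function below does exactly that under the simplifying assumptions of binary fence masks and integer global translations; the learned fence detector, subpixel motion estimation, and the MRF/MAP inversion of the paper are not reproduced.

```python
import numpy as np

def fuse_defenced(frames, masks, shifts):
    """Fill fence-occluded pixels of the reference frame (frames[0]) using the
    other frames.  masks[k] is True where frame k is covered by the fence, and
    shifts[k] = (dy, dx) maps reference pixel (y, x) to pixel (y+dy, x+dx) of
    frame k.  A toy fusion step: binary masks, integer global shifts, no prior."""
    ref = frames[0].astype(float).copy()
    missing = masks[0].copy()                  # reference pixels still to fill
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, mask, (dy, dx) in zip(frames[1:], masks[1:], shifts[1:]):
        yk, xk = ys + dy, xs + dx              # corresponding pixels in frame k
        inside = (yk >= 0) & (yk < h) & (xk >= 0) & (xk < w)
        occluded_k = np.ones((h, w), dtype=bool)
        occluded_k[inside] = mask[yk[inside], xk[inside]]
        visible = missing & inside & ~occluded_k
        ref[visible] = frame[yk[visible], xk[visible]]
        missing &= ~visible
    return ref, missing                        # `missing` marks still-unfilled pixels
```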

Temporal coherence effects on target-based phasing of laser arrays

Milo Hyde and Glenn Tyler

Doc ID: 267113 Received 27 May 2016; Accepted 21 Jul 2016; Posted 09 Aug 2016  View: PDF

Abstract: This paper studies how temporal coherence (in particular, the linewidth broadening introduced to suppress stimulated Brillouin scattering) affects target-based phasing of fiber laser arrays. A radio-frequency-modulated array, whose elements are fed by a broadband laser source and which is phased on a remote step target, is theoretically analyzed. An expression for the detector-plane irradiance, ultimately used to phase the array on the target, is derived and discussed in detail. Simulation results of a seven-element hexagonal array phasing on a distant step target with scattering surfaces separated by many coherence lengths are presented to validate the theoretical findings.

Losing focus: how lens position and viewing angle affect the function of multifocal lenses in fishes

Yakir Gagnon, David Wilby, and Shelby Temple

Doc ID: 259077 Received 12 Feb 2016; Accepted 12 Jun 2016; Posted 14 Jun 2016  View: PDF

Abstract: Light rays of different wavelengths are focused at different distances when they pass through a lens (longitudinal chromatic aberration; LCA). For animals with colour vision this can pose a serious problem, because in order to perceive a sharp image the rays must be focused at the shallow plane of the photoreceptor outer segments in the retina. A variety of fish and tetrapods have been found to possess multifocal lenses, which correct for LCA by assigning concentric zones to correctly focus specific wavelengths. Each zone receives light from a specific beam entrance position (BEP), the lateral distance between incoming light and the centre of the lens. Any occlusion of incoming light at specific BEPs changes the composition of the wavelengths that are correctly focused on the retina. Here, we calculated the effects of lens position relative to the plane of the iris, and of light entering the eye at oblique angles, on how much of the lens was involved in focusing the image on the retina (measured as the availability of BEPs). We used rotational photography of fish eyes and mathematical modelling to quantify the degree of lens occlusion. We found that at most lens positions and viewing angles there was a decrease in BEP availability, and in some cases a complete absence of some BEPs. Given the implications of these effects for image quality, we postulate that three morphological features (aphakic spaces, curvature of the iris, and intraretinal variability in spectral sensitivity) may, in part, be adaptations to mitigate the loss of spectral image quality in the periphery of the eyes of fishes.
