
CMOS approach to compressed-domain image acquisition

Open Access

Abstract

A hardware implementation of a real-time compressed-domain image acquisition system is demonstrated. The system performs front-end computational imaging, whereby the inner product between an image and an arbitrarily specified mask is implemented in silicon. The acquisition system is based on an intelligent readout integrated circuit (iROIC) that is capable of providing independent bias voltages to individual detectors, which enables implementation of spatial multiplication with any prescribed mask through a bias-controlled response-modulation mechanism. The modulated pixels are summed up in the image grabber to generate the compressed samples, namely aperture-coded coefficients, of an image. A rigorous bias-selection algorithm is presented for the readout circuit, which exploits the bias-dependent nature of the imager's responsivity. The functionality of the hardware is demonstrated in transform-coding-based compressed image acquisition, silicon-level compressive sampling, in-pixel nonuniformity correction, and hardware-level region-based enhancement.

© 2017 Optical Society of America

1. Introduction

Dramatic advances in the field of computational and medical imaging over the past decades have enabled many critical applications such as night vision, medical diagnosis, quality control, and remote sensing [1–5]. The increasing demand for image quality and fidelity requires higher pixel counts and sophisticated post-processing mechanisms to efficiently store, transmit, and analyze the resulting data [6–9]. There is an inherent trade-off between the generation of big data by such imaging systems and the efficient extraction of useful information within real-time constraints, which limits the efficacy of such sensors in real-time decision-making systems [10,11]. The traditional imaging system is burdened by the acquisition, transmission, and storage of excess data bearing redundant information for the given application of interest [12–16]. Transmission of the extra information requires high bandwidth and extra power to store or transmit. Similarly, post-processing imposes extra latency and requires additional power consumption, which is troublesome for many low-power, real-time applications and portable devices [17].

There is a need to address this problem by intelligently acquiring a limited but most informative set of data and processing the abstract information. This, in turn, requires the ability to perform computations at the pixel level, within the readout integrated circuit, at the front end of the imager [18].

In pursuit of efficient computational-imaging hardware that addresses the memory-efficiency, low-power-consumption, and minimal-latency requirements, we demonstrate CMOS-based imaging hardware [19] that supports compression at acquisition time, inside the pixel [20]. Figure 1(a) shows a block diagram of the long-established conventional imaging system, and our alternative approach, compressed-domain imaging, is shown in Fig. 1(b). The proposed approach integrates post-processing into acquisition, which results in lower latency and reduced power consumption.

Fig. 1 a) A system-level block diagram of a conventional imaging system, which includes image acquisition, storage, and post-processing stages. b) Block diagram of the intelligent readout integrated circuit we propose for on-chip image acquisition and compression.

In Section 2, we discuss background and prior work in the area of sensor-level compression. Section 3 covers our proposed compressed-domain imaging hardware and the photodetector embedded in the design. The experimental setup is explained in Section 4. Different applications, including nonuniformity correction (Section 5), compressed-domain acquisition and compressive sensing (Section 6), stand-alone operation (Section 7), and region-of-interest enhancement (Section 8), are discussed along with experimental results. Finally, we outline conclusions and future work in Section 9.

2. Background and previous work

For a typical image sensor, imaging involves reading out the values sampled at different pixels [21]; in the case of compressed-domain hardware, by contrast, a set of gain matrices is loaded to the pixel array, and the image sensor's output is a linear combination of the projections of the object's reflectance function onto the gain matrices [16,22]. In the following paragraphs, we compare a few other works that have been devoted to the problem of online compression and hardware-domain sensing based on matrix projection.
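This acquisition model is compact enough to state in a few lines. The following sketch is our illustration of the projection model (not the authors' firmware): each compressed sample is the inner product of the scene with one gain matrix.

```python
import numpy as np

def compressed_samples(scene, masks):
    """Compressed-domain acquisition model: each output sample is the
    inner product of the scene with one gain (mask) matrix."""
    # scene: (H, W) reflectance map; masks: (K, H, W) gain matrices
    return np.array([np.sum(scene * m) for m in masks])

# Example: a 96 x 96 scene projected onto K = 64 random gain matrices.
rng = np.random.default_rng(0)
scene = rng.random((96, 96))
masks = rng.random((64, 96, 96))
y = compressed_samples(scene, masks)  # 64 compressed samples
```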

One of the earliest reported hardware implementations of compressive sensing is based on a single-pixel camera [23]. Single-pixel imaging utilizes a digital micromirror device (DMD) [24] to project the incident light coming from the object onto the digital masks. The photodetector samples the integrated light coming from the sample, which is modulated using the DMD. This method is usually used for far-infrared imaging, where having an array of low-cost, small-size photodetectors is not feasible. The DMD degrades the sensitivity of the imager, and the alignment of the different components limits the scaling of this method.

Compressive sensing based on optical-domain coded apertures is demonstrated in [25]. A random phase mask injects the measurement matrices, and the modulated intensities at different pixels are sampled using a low-resolution imager. This technique suffers from the noise added by the optical masks, and the complexity of the alignment setup is a major challenge.

A CMOS imager is demonstrated in [26] that utilizes a flip-flop-based shift register distributed over the pixel array to hold the random digital patterns. The shift register selectively disconnects pixels from the readout and thereby implements the measurement matrices. The proposed hardware offers multiplication only by a binary value. This limits the compressive-sensing algorithm to binary projection matrices, which are composed of only ones and zeros. Furthermore, there is no control over the bias voltage of the detectors; as a result, many features offered by modulation at the detector level are not supported. Finally, because the unitcell does not support integration, the proposed hardware cannot work with detectors with lower quantum efficiency.

Figure 2 presents our proposed monolithic CMOS image sensor, which can run as a stand-alone image sensor and is able to perform spatiotemporal region-of-interest enhancement. The hardware is also capable of generating compressed images directly, as well as canceling the nonuniformity inherited from process variation or other sources such as a voltage drop across the image-sensor chip. The main contribution of this hardware is the introduction of control over a per-pixel modulation factor through controlling the photodetector's responsivity, which is shown as a controllable-gain symbol in the pixels. The capacitor represents the analog memory that is embedded to store and hold the bias information for individual pixels. The AND gate selectively enables different pixels to load the bias voltage to the active pixel, and this selection occurs at the same time that the pixel is being read out; therefore, no delay penalty is associated with the new design. While sampling the integrated voltage onto the sample-and-hold (S&H) capacitor, the voltage Vref is used as a global reference voltage for all of the preamplifiers. This prevents the bias voltage from appearing in the readout and makes the readout value meaningful.

Fig. 2 Block diagram of the individual pixel bias tunable readout integrated circuit and the CTIA-based unitcell at the extended view. The extra circuitry added to the CTIA-based unitcell enables setting independent bias voltages for each individual pixel while the previously integrated voltage is being read out.

During the readout, the bias loaded to each pixel can differ from pixel to pixel and also from the bias loaded to the same pixel in the previous frame. This is what we refer to as the spatiotemporally independent pixel-biasing scheme.

The proposed hardware has the unique feature of performing application-specific transform coding based on a specialized set of bias masks. These sets of bias masks are dictated by a rigorous bias-selection algorithm and are then stored in the memory of the device. The incoming image data are projected onto the designated masks to generate the code words used for image reconstruction. Most importantly, our proposed bias-selection algorithm, which has not been reported in the literature, considers the responsivity of the device, resulting in remarkably lower reconstruction error. We will discuss the detailed implementation of the iROIC in the next section.

3. Design of the pixel

Implementation of a compressed-domain imaging system requires a means of projecting the object's reflectance function onto the gain matrices. We have approached this problem by embedding fine control over the operating voltage of each individual pixel's detector. The current hardware is designed with an array of n+/nwell/psub detectors that is laid out along with the rest of the readout integrated circuit in silicon. The fill factor of the detector is 8.4%. A cross section of the photodetector is shown in Fig. 3(a).

Fig. 3 a) A cross section of the n+/nwell/psub photodetector used in this chip, b) the measured photoresponse of n+/nwell/psub photodetector as a function of the applied bias voltages at different illumination levels, and c) the same measured results that are scaled to one. In this experiment, a green LED is used as the illumination source and the dimension of the photodetector is 100 µm × 100 µm.

The graph in Fig. 3(b) shows the measured photocurrent of the n+/nwell/psub photodetector at six different illumination levels and in the dark. A green LED is used as the light source in this experiment, and the intensity is modulated by controlling the injection current. The LED is placed almost 40 cm away from the detector, so the intensity of illumination at the scale of the detector and the optical power meter can be approximated as uniform. The illumination intensity is measured simultaneously, and the reported optical power is scaled to the area of the detector. As seen in Fig. 3(b), because the photoresponse is a function of both the bias voltage and the intensity of the light, one can load the projection matrix to the pixel array and acquire the image while the pixels are operating at different response-modulation factors. The graph in Fig. 3(c) shows the same measured data normalized to one. Because the normalized data approximately overlap, we can state that modulating the bias voltage scales the measured photocurrent. This enables many applications, which are discussed in the following sections.
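The scaling claim can be checked numerically in the same way Fig. 3(c) is produced: normalize each bias sweep to its own maximum and verify that the curves collapse onto one another. A minimal sketch, assuming the measured sweeps are stored as rows of an array (one row per illumination level):

```python
import numpy as np

def normalized_sweeps(photocurrent):
    """Scale each bias sweep (one row per illumination level) to one,
    as in Fig. 3(c)."""
    return photocurrent / photocurrent.max(axis=1, keepdims=True)

def separability_error(photocurrent):
    """If response ~ intensity * modulation(v), all normalized sweeps
    coincide; report their worst-case spread as a sanity check."""
    norm = normalized_sweeps(photocurrent)
    return np.max(np.ptp(norm, axis=0))  # ~0 for separable data
```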

Table 1 briefly compares different possible configurations for the preamplifier stage of the unitcell. Because the capacitive trans-impedance amplifier (CTIA) provides the best performance in terms of precise control over the detector's bias voltage, as well as high injection efficiency, large voltage swing, and good charge storage, we have selected this configuration as the basis for the preamplifier.

Table 1. Comparison between different configurations for the preamplifier used in an imager. Due to the need for good bias control, high injection efficiency, and sufficient charge storage, we have selected the CTIA configuration for iROIC.

Figure 4(a) depicts the detailed block diagram of the unitcell of iROIC. Figure 4(b) shows the video switches and the active load for the source follower at the output of the unitcells. The ROIC peripherals are shown in Fig. 4(c). In the proposed unitcell, the conventional CTIA configuration is augmented with the ability to control each individual pixel's bias voltage. Here, we briefly explain the process followed to operate compressed-domain imaging; the readout mechanism is also demonstrated in Fig. 4(d):

  1. The bias-control circuit is composed of an analog switch, SWBias, that is enabled when the row-select and column-select signals address the pixel; the analog memory is then loaded with the bias voltage.
  2. During the integration, the bias is held on the analog memory. Both the SWBias and SWRef switches are off for the entire integration time to preserve the charge on the CBias capacitor.
  3. At the end of the integration, the SWRef switch is enabled to set the same reference voltage for all of the pixels and to make the sampled value meaningful.
To provide a high voltage-swing range, the chip has been fabricated by Taiwan Semiconductor Manufacturing Co. (TSMC) in the CL035HV process technology node, a standard CMOS process technology supporting four metal layers, two poly layers, and two different voltage domains. The metal layers serve as the interconnection between various devices; the poly layers are used at the gates of the transistors and have been employed to form the inter-poly capacitors. The high-voltage domain is used for the unitcells to support a high voltage swing; the low-voltage domain is employed for the row-select and column-select circuitry, resulting in higher integration and lower power consumption. The minimum feature size of the devices in the CL035HV process node is 1.5 µm for the transistors in the high-voltage domain and 0.35 µm for those in the low-voltage domain.

Fig. 4 a) Switch level implementation of iROIC unitcell. The unitcell includes 15 transistors and three capacitors, b) the video switches, c) the row/column select peripherals, and d) a sample timing diagram of a single unitcell.

A major challenge in the design of this circuit was the trade-off between the number of functionalities and the area of the pixel. To comply with the pitch of standard focal plane arrays (FPAs), we decided to restrict the unitcell to 30 µm × 30 µm. The constraint imposed by area forced us to make all of the switches the minimum size supported by the technology node. This minimum feature size of 1.5 µm is still large enough to neglect the leakage currents that are dominant mainly in submicron or deep-submicron devices. All of the switches are based on a single NMOS transistor. The rest of the area was equally divided between the capacitors to achieve the highest possible resolution for the output image data. In total, the unitcell is composed of seven transistors for the dual-stage differential amplifier and eight transistors for the rest of the unitcell circuitry. The unitcell also includes four capacitors that serve as the compensation, integration, sample-and-hold, and bias-voltage-holding capacitors.

To obtain a model for the response-modulation function of the imager, the response of the system to a uniform level of illumination at different bias voltages is measured. The normalized imager photoresponse is shown in Fig. 5. In the error-bar graph, the mean and standard deviations are based on a statistical analysis over all of the pixels in the entire 96 × 96 frame, and each measurement was repeated 10 times to reduce random noise. The mean value and standard deviation shown in this figure are employed as the basis for bias selection in a real-time system. The curve indicates that the system responds to the bias voltage in a semi-linear fashion as long as the detector's bias voltage is limited to approximately [+0.4 V, +3.5 V].
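The statistics behind Fig. 5 amount to a simple computation over repeated frames. A sketch, under the assumption that the captures are stored as an array indexed by bias setting and repeat:

```python
import numpy as np

def modulation_statistics(frames):
    """Per-bias mean and standard deviation of the photoresponse.

    frames: shape (num_biases, num_repeats, 96, 96); the 10 repeats
    average out random noise, and the statistics pool all pixels."""
    per_bias = frames.reshape(frames.shape[0], -1)  # pool repeats and pixels
    mean = per_bias.mean(axis=1)
    std = per_bias.std(axis=1)
    scale = mean.max()
    return mean / scale, std / scale  # normalized, as in Fig. 5
```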

Fig. 5 Demonstration of the normalized modulation function of the system to a uniform illumination level. The graph reflects the system’s response to the modulation of the detector’s bias.

The silicon-based photodetector has been laid out in the form of a 10 µm strip on the right and top side of the unitcell, which increases the size of the pixel to 40 µm × 40 µm. A microphotograph of the fabricated chip is shown in Fig. 6(a), and the layout of the unitcell is shown in the extended view. The dimension of the pixel array is 3840 µm × 3840 µm, and the total area of the chip, including test-cells, PADs, and ESD protection, is 5140 µm × 5140 µm.

Fig. 6 a) A microphotograph of the fabricated ROIC, the row and column select circuitry, and the test devices. The unitcell is shown in the extended view. b) A block diagram of the experimental setup, which includes a Raspberry Pi board as the main controller of the system, a DAC to set the bias voltages of the detectors, and an ADC to grab the readout of the imager. All communication between the controller and a remote machine is over SSH.

Although we have considered n+/nwell/psub photodetectors as a means to exploit compressed-domain image acquisition, the circuit would work with any detector whose nominal operating voltage and current fit the specifications of the designed readout integrated circuit. Additionally, we have embedded extra knobs, such as the bias current of the preamplifier, the integration time, and the readout clock speed, that are set from outside the chip. These knobs can be employed to optimize the operating point of the system.

4. Experimental setup

In the implemented hardware, the timing signals and the analog biases for the photodetectors are generated using a Raspberry Pi board (RPB). The main reason for choosing the RPB as the main controller is its extended support for on-board memory in the form of a micro-SD card. Typical FPGAs do not support high-volume storage, which makes storing the massive bias information a challenge. A DAC converts the digital values to analog and feeds them to the iROIC. The output video signal is sampled using an ADC chip, which is driven by the RPB. The sampled data are both sent to a remote computer for online monitoring and stored in the local memory of the controller for later processing. The RPB acts as a stand-alone controller for the iROIC and handles all image-acquisition details. A custom PCB is designed to host the test chip, to interface with the RPB, and to deliver high signal integrity. The RPB is controlled from a desktop over LAN, and test vectors are loaded using Linux's standard commands such as rsync, ssh, and scp. A block diagram of the experimental setup is shown in Fig. 6(b). The control over the bias information of every pixel's detector and the flexibility offered by the experimental setup have enabled many different applications, which are explained in the following sections.
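The acquisition loop on the RPB reduces to three steps per frame: load one bias matrix through the DAC, integrate, and sample the video output through the ADC. The following is a schematic sketch only; dac_write_pixel_bias, start_integration, and adc_read_frame are hypothetical stand-ins for the board-specific driver calls, not a published API:

```python
import numpy as np

def grab_frame(bias_matrix, dac_write_pixel_bias, start_integration, adc_read_frame):
    """One frame of bias-modulated acquisition on the experimental setup.

    The three callables are hypothetical board-specific drivers: load a
    per-pixel bias through the DAC, trigger integration, and sample the
    readout through the ADC, respectively."""
    for (i, j), v in np.ndenumerate(bias_matrix):
        dac_write_pixel_bias(i, j, v)  # row/column-addressed bias load
    start_integration()
    return adc_read_frame()  # (96, 96) sampled output
```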

5. Nonuniformity correction

The pixels are designed to maximize the sensitivity of the photoresponse. However, the overall performance of the sensor is limited by noise, which comes from many different sources and contributes to the output signal. Random noise is a temporal variation in the signal that changes over time, from frame to frame. This type of noise, which is hard to predict, has a statistical distribution and can be canceled statistically by means of averaging [27,28].

On the other hand, pattern noise is the spatial variation in the photoresponse of different pixels exposed to a uniform illumination. This type of noise is fixed over time and cannot be reduced by averaging. Pattern noise stems from variations in the growth or fabrication of the photodetectors. Differences in the driving and sampling circuitry or variations in power distribution also result in responsivity deviations in the form of pattern noise [29].

The pattern noise is composed of fixed pattern noise (FPN) [30, 31] and photo response nonuniformity (PRNU) components [32,33]. The FPN is measured in the absence of illumination and is a result of variations in growth, detector dimension, doping concentrations, fabrication defects, characteristics of transistors (VT, gm, W, L, etc.) [34, 35], or nonuniformity in the distribution of power [36]. Additionally, at high-speed readout, the differences between the resistance and capacitance that are seen at the output of different unitcells can also cause nonuniformity. The second component of pattern noise, PRNU, is a function of illumination and varies based on the dimension of the photodetector, the doping concentration, and the color of the light incident to the detector [37].

Nonuniformity correction is an important topic under investigation and deals with processing inconsistencies that lead to unfavorable pattern noise. Independent of the source of the nonuniformity, it can be corrected using single-point calibration, two-point calibration [38, 39], or scene-based nonuniformity correction [40].

Because pattern noise does not change with time, it can be canceled by proper biasing of the circuit. We have used a two-point nonuniformity correction to calibrate the responsivity of the image sensor. Two different uniform illuminations are used as the calibration points, and as a result, an offset and a gain are calculated for each pixel and employed to correct the photoresponses read from each pixel. This method has the extra benefit of offering a better correction if the nonuniformity grows with temperature. The mathematical formulation for the correction algorithm we used is given below [41]. The linear model of the imaging device is estimated by:

$$Y_{ijk} = g_{ijk} I_{ijk} + o_{ijk}, \tag{1}$$
where $I_{ijk}$ is the actual object's reflection function, which is incident on the image sensor, and the observed pixel value is given by $Y_{ijk}$. The variable $k$ is the frame index, and the gain and offset of the $(i,j)$th detector are denoted by $g_{ijk}$ and $o_{ijk}$, respectively. Here, nonuniformity correction is carried out by means of a linear transformation of the observed pixel values $Y_{ijk}$. The goal is to provide an estimate of the true intensity $I_{ijk}$ so that all of the detectors appear to be performing uniformly. The correction is given by:
$$I_{ijk} = w_{ijk} Y_{ijk} + b_{ijk}, \tag{2}$$
where $w_{ijk}$ and $b_{ijk}$ are the gain and offset of the linear correction model of the $(i,j)$th detector.
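A compact sketch of the two-point calibration described above, assuming two uniform-illumination calibration frames with known target levels; the per-pixel gain w and offset b then correct subsequent frames per Eq. (2):

```python
import numpy as np

def two_point_nuc(frame_low, frame_high, level_low, level_high):
    """Estimate per-pixel gain w and offset b from two uniform
    calibration exposures, following Eqs. (1) and (2)."""
    w = (level_high - level_low) / (frame_high - frame_low)
    b = level_low - w * frame_low
    return w, b

def correct(frame, w, b):
    """Apply the per-pixel linear correction I = w * Y + b of Eq. (2)."""
    return w * frame + b
```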

After we estimate the parameters $w_{ijk}$ and $b_{ijk}$ (or $g_{ijk}$ and $o_{ijk}$), the NUC can be achieved per Eq. (2), for which we compute the corrected bias to be applied from the responsivity graph. In Fig. 7(a), we demonstrate an image of a white paper taken with uniform biasing for all of the pixels. Although the bias information is uniform, the pixels' response across the image varies because of the nonuniform illumination, weakly sensitive pixels, and other sources of fixed pattern noise. Figure 7(b), on the other hand, shows another image under the same illumination condition with a bias matrix optimized using the NUC technique discussed above. The gain and offset are calculated per pixel according to Eqs. (1) and (2) and are embedded in the bias applied to each pixel using the RPB. The 3D intensity levels shown in Figs. 7(a) and 7(b) reflect a Gaussian-like distribution whose variance is large in part (a), due to the presence of nonuniformity, and minimal in part (b), due to its correction.

Fig. 7 a) The result of imaging a white paper with uniform biasing, while the illumination is not uniform. Defects and other sources of nonuniformity also contribute to the variation across the image. The stack of three graphs demonstrates (I) camera output image, (II) illumination contour, and (III) 3D view of the intensities. b) Another white paper is imaged with the same illumination condition using the implemented nonuniformity correction. The graph has the same scale as part (a), and the legend in the middle is for part (II). c) and d) show the histogram for the measured results of part (a) and (b), respectively.

Figures 7(c) and 7(d) depict the histograms of the images shown in Figs. 7(a) and 7(b), respectively. The histogram in Fig. 7(c) shows that, under the given nonuniform illumination, the camera produces an image with a wide range of pixel intensity levels, while our NUC method results in the narrow histogram shown in Fig. 7(d). The point here is that the hardware is able to cancel the integrated nonuniformity that stems from the pixels, the ROIC, and also the illumination.

The nonuniformity correction also aided in the fine-tuning of the responsivity curves. Because the responsivity is based on the calibration of pixels under different bias and lighting conditions, enabling nonuniformity correction before this calibration process yielded uniform responsivity behavior across all of the pixels that is less sensitive to any form of noise. This also guaranteed that the SNR of the responsivity is above a certain threshold, which enabled the bias-selection technique to deliver superior performance, as discussed in the results.

6. Compressed-domain image acquisition

The most important application of the chip is compressed-domain imaging. The hardware achieves compression by projecting the image onto a set of basis masks implemented in the detectors' biases. We have considered two different in-hardware compression modalities: in-pixel discrete-cosine-transform (DCT) based compressed-domain image acquisition and a compressive-sensing framework [42,43].

To implement the compression modalities in hardware, we need to adapt the compressive masks per the device responsivity to ensure that the mask coefficients are exactly achievable as modulation factors at the pixels.

6.1. Discrete cosine transform

In this part, we present the mathematical formulation for compression and reconstruction of the image using the DCT. To realize any transform coding on the computational-imaging hardware, one needs to project the acquired image onto the designated masks, where the transform coefficients are realized at each pixel as multiplication factors. Let R be the response of the image sensor, which is a function of the object's reflectance function I and the detector's bias voltage V:

$$R = g(I, V), \tag{3}$$

where g is some nonlinear function of I and V. If I is the object reflectance function in the spatial domain, then its frequency-domain transform is given by:

$$y_{uv} = \frac{2}{\sqrt{MN}} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left[ C(u)\, C(v)\, I_{ij} \cos\frac{\pi(2i+1)u}{2N} \cos\frac{\pi(2j+1)v}{2N} \right], \tag{4}$$

where i and j are integers in the range [0, N − 1], used to address different pixels, and C(u) and C(v) are defined as:

$$C(u), C(v) = \begin{cases} \frac{1}{\sqrt{2}} & \text{if } u, v = 0 \\ 1 & \text{otherwise.} \end{cases} \tag{5}$$

The inverse of the DCT transform function is defined as:

$$I_{ij} = \frac{2}{\sqrt{MN}} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \left[ y_{uv} \cos\frac{\pi(2i+1)u}{2N} \cos\frac{\pi(2j+1)v}{2N} \right]. \tag{6}$$

To implement the computationally intensive DCT in hardware, we have reordered Eq. (4) and decoupled the bias (mask) matrices from the image-sensor responses, as shown below:

$$y_{uv} = \frac{2}{\sqrt{MN}}\, C(u)\, C(v) \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left[ I_{ij}\, \mathrm{Mask}_{uv}(i,j) \right], \tag{7}$$

where u, v = 0, 1, …, N − 1. In the above equation, Mask_uv(i, j) is the mask set that is loaded to the image sensor as the bias information. If we assume N equals M, the total number of masks required for exact reconstruction is N × N. The mask matrices can be represented as

$$\mathrm{Mask}_{uv}(i,j) = \cos\frac{\pi(2i+1)u}{2N} \cos\frac{\pi(2j+1)v}{2N}. \tag{8}$$

In the calculation of the mask matrices, because C(u) and C(v) are not functions of i and j, they are treated as constants and are not included in Eq. (8). Because all of the coefficients are limited to the same range of [−1, +1], we can efficiently use the limited dynamic range of the analog memory to store the bias voltage; otherwise, the DCT coefficients would need a greater number of bits to deliver the same SNR.
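For reference, a simulation-level sketch of the mask generation of Eq. (8) and the projection of Eq. (7); this is our illustration of the mathematics, not the on-chip signal path, and it is intended for small block sizes such as the 8 × 8 blocks used later in Fig. 9:

```python
import numpy as np

def dct_masks(N):
    """Generate the N*N masks of Eq. (8)."""
    n = np.arange(N)
    # cosines[u, i] = cos(pi * (2i + 1) * u / (2N))
    cosines = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    return np.einsum('ui,vj->uvij', cosines, cosines)  # Mask_uv(i, j)

def dct_coefficients(block, masks):
    """Eq. (7): project the block onto every mask, then apply the
    C(u)C(v) and 2/sqrt(MN) factors (with M = N here)."""
    N = block.shape[0]
    c = np.where(np.arange(N) == 0, 1.0 / np.sqrt(2.0), 1.0)
    y = (2.0 / N) * np.einsum('ij,uvij->uv', block, masks)
    return y * c[:, None] * c[None, :]
```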

The discussion above works fine as long as the system is noise free; however, the system's response-modulation function shown in Fig. 5 triggers the need for a more intelligent bias-selection algorithm. Due to the device's limited dynamic range and the noise behavior of the system, a bias-selection algorithm is a must. This algorithm efficiently prescribes the optimal bias for each pixel, which leads to minimization of the effect of noise. Also, a linear transformation is used to map all coefficients into the implementable dynamic range. The next section is devoted to the mathematical model of the device response and the bias-selection algorithm.

6.2. Bias selection algorithm

In this section, we describe a novel bias-selection algorithm based on the MMSE approach, which addresses the issue of image reconstruction when noise comes into play in the responsivity of the device. When the bias corresponding to a basis coefficient is computed without considering the effect of noise in the responsivity of the device, we call it a naïve technique. This term will be used frequently in the rest of the paper to refer to such cases.

The projection and reconstruction are exact as long as the device behaves deterministically for the applied mask. However, complexity arises when its behavior is random and there exists finite uncertainty in its response. In this case, the common reconstruction method does not lead to exact recovery, as it is difficult to find a unique bias that achieves the designated gain factor. Next, we discuss a technique that enables us to optimally choose the bias for a given mask coefficient.

To begin describing the bias-selection method, as shown in Fig. 8, we consider a set of basis masks, $\{B^{(k)}\}_{k=1}^{N}$, each of which is to be implemented by a 2D array of biases to be determined later. Each of these masks consists of a 2D array of coefficients, given by $\{b_{ij}^{(k)}\}_{i,j=1}^{N}$. The objective is to map each of these $b_{ij}^{(k)}$ coefficients into achievable responsivity values by applying an appropriate bias drawn from the responsivity function $\tilde{R}(v)$. Here, $\tilde{R}(v)$ is the noisy responsivity of the device as a function of the applied bias. This bias assignment is carried out according to the optimization criterion stated below.

Fig. 8 Acquisition and compression processes, which include mapping k mask matrices to their corresponding bias voltages. The mapping is based on the system’s response-modulation function shown in Fig. 5. Then the bias matrices that are stored in the Raspberry Pi memory are loaded to the imager and projected to the object’s reflectance function. The resultant dot product is optionally summed up in the hardware, and the k resulting coefficients are sent to the remote computer for reconstruction.

For an imaging system of resolution N = 96 × 96 pixels, the image captured by the system, $I$, the matrix of DCT coefficients, $Y$, and the $k$-th ideal DCT mask, $B^{(k)}$, are represented by

$$I = \begin{bmatrix} I_{1,1} & \cdots & I_{1,96} \\ \vdots & \ddots & \vdots \\ I_{96,1} & \cdots & I_{96,96} \end{bmatrix}, \qquad Y = \begin{bmatrix} y^{(1)} & \cdots & y^{(96)} \\ \vdots & \ddots & \vdots \\ y^{(96^2-96)} & \cdots & y^{(96^2)} \end{bmatrix},$$

and

$$B^{(k)} = \begin{bmatrix} b_{1,1}^{(k)} & \cdots & b_{1,96}^{(k)} \\ \vdots & \ddots & \vdots \\ b_{96,1}^{(k)} & \cdots & b_{96,96}^{(k)} \end{bmatrix}.$$

The $k$-th practical mask based on the noisy responsivity is

$$\tilde{R}^{(k)} = \begin{bmatrix} \tilde{r}_{1,1}^{(k)} & \cdots & \tilde{r}_{1,96}^{(k)} \\ \vdots & \ddots & \vdots \\ \tilde{r}_{96,1}^{(k)} & \cdots & \tilde{r}_{96,96}^{(k)} \end{bmatrix},$$

with

$$\tilde{r}(v) = r(v) + \eta(\mu, \sigma_v^2),$$

where $r(v)$ is the implementable gain based on the ideal responsivity when the system is noise free, and $\eta(\mu, \sigma_v^2)$ denotes noise with mean $\mu$ and bias-dependent variance $\sigma_v^2$. The DCT coefficient corresponding to the noisy responsivity mask and the corresponding error are given by

$$y_{\tilde{R}}^{(k)} = \sum_{i=1}^{96} \sum_{j=1}^{96} I_{i,j}\, \tilde{r}_{i,j}^{(k)}(v),$$

and

$$y_{\mathrm{err}}^{(k)} = y_{\mathrm{idl}}^{(k)} - y_{\tilde{R}}^{(k)} = \sum_{i=1}^{96} \sum_{j=1}^{96} I_{i,j} \left( b_{i,j}^{(k)} - \tilde{r}_{i,j}^{(k)}(v) \right),$$

where the $k$-th DCT coefficient corresponding to the ideal mask is denoted by

$$y_{\mathrm{idl}}^{(k)} = \sum_{i} \sum_{j} I_{i,j}\, b_{i,j}^{(k)}.$$

For a specific pixel at position $(i, j)$, if $b$ is the mask coefficient to be achieved and $\tilde{r}(v)$ is the realizable coefficient from the responsivity, then the objective function for bias selection for that pixel is

$$f(v) = \left( b - \tilde{r}(v) \right)^2,$$

and the optimization problem is

$$\underset{v}{\text{minimize}} \; f(v) \quad \text{subject to} \quad E(f(v)) = 0,$$

where
  • E(f(v)) stands for the expected value of the entity f(v), which is a function of v;
  • f(v) : ℝⁿ → ℝ is the objective to be minimized over the variable v;
  • E(f(v)) = 0 is the equality constraint.
Equivalently, the problem can be reformulated as

$$v_{\mathrm{opt}} = \underset{v}{\operatorname{argmin}}\; E(f(v)),$$

where

$$E(f(v)) = (b - r(v))^2 - 2\mu\,(b - r(v)) + \mu^2 + \sigma_v^2.$$

To find the optimum $v_{\mathrm{opt}}$, we differentiate the objective with respect to $v$ and set $\frac{d}{dv} E(f(v_{\mathrm{opt}})) = 0$, which yields

$$r(v_{\mathrm{opt}}) = b - \mu - \sigma_{v_{\mathrm{opt}}}\, \frac{\frac{d}{dv}\sigma_{v_{\mathrm{opt}}}}{\frac{d}{dv} r(v_{\mathrm{opt}})},$$

and

$$v_{\mathrm{opt}} = \sigma_{v_{\mathrm{opt}}}^2 \left[ \left( \frac{\frac{d}{dv}\sigma_{v_{\mathrm{opt}}}}{\frac{d}{dv} r(v_{\mathrm{opt}})} \right)^2 + 1 \right],$$

where $r(v_{\mathrm{opt}})$ is the optimal realizable gain coefficient for a given ideal mask coefficient $b$, and $v_{\mathrm{opt}}$ is the optimal bias to be applied to realize the gain $r(v_{\mathrm{opt}})$. The above expressions define the optimal bias-selection rule, giving the gain coefficients to be implemented on the pixel to realize the ideal mask coefficient $b$.
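In practice, the optimum can be found by a grid search over the measured responsivity table rather than by solving the derivative condition in closed form. A sketch, assuming v_grid, r_mean, and r_std hold the bias grid and the normalized mean and standard deviation behind Fig. 5, and that the noise mean μ is known:

```python
import numpy as np

def mmse_bias(b, v_grid, r_mean, r_std, mu=0.0):
    """Pick the bias minimizing E(f(v)) = (b - r)^2 - 2*mu*(b - r)
    + mu^2 + sigma_v^2 over a measured responsivity table (grid-search
    form of the MMSE bias-selection rule)."""
    err = b - r_mean
    expected_cost = err**2 - 2.0 * mu * err + mu**2 + r_std**2
    k = np.argmin(expected_cost)
    return v_grid[k], r_mean[k]

def biases_for_mask(mask, v_grid, r_mean, r_std, mu=0.0):
    """Map every mask coefficient to its MMSE-optimal bias."""
    flat = [mmse_bias(b, v_grid, r_mean, r_std, mu)[0] for b in mask.ravel()]
    return np.array(flat).reshape(mask.shape)
```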

The bias-selection algorithm works fine as long as the variance of the noise in the responsivity lies within some limit and the lighting conditions do not change drastically. This is because, under different operating light conditions, the responsivity might change, and the designed bias stored in memory will no longer satisfy the objective.

6.3. Conditioning the masks for mapping the bias into device dynamic range

For an image $\{I_{ij}\}_{i,j=1}^{N}$ and basis masks given by $B^{(k)} = \{b_{ij}^{(k)}\}_{i,j=1}^{N}$, the DCT coefficient for the ideal case is obtained as

$$y^{(k)} = \sum_{i} \sum_{j} I_{ij}\, b_{ij}^{(k)}.$$

However, due to the device's limited operating dynamic range and memory, the mask coefficients need to be appropriately conditioned so that they are realizable per the device responsivity. Once the projection is obtained, an equivalent inverse transform is applied to retrieve the actual DCT coefficients.

Consider a linear transformation given by $r = mb + c$, where $m$ is the gain, $c$ is the offset, and $r$ is the entity equivalent to $b$ in the transform domain. This transformation is applied identically to all of the basis coefficients to accommodate them within the working dynamic range of the device responsivity. If $r_{ij} = m b_{ij} + c$, then the measured projection is

$$y'^{(k)} = \sum_{i}\sum_{j} I_{ij}\, r_{ij}^{(k)} = \sum_{i}\sum_{j} I_{ij}\,(m b_{ij} + c) = m \sum_{i}\sum_{j} I_{ij}\, b_{ij} + c \sum_{i}\sum_{j} I_{ij} = m\, y^{(k)} + c \sum_{i}\sum_{j} I_{ij}.$$

Then, for each measured projection coefficient $y'^{(k)}$, we can invert the conditioning to retrieve the actual projection coefficient (the term $\sum_{ij} I_{ij}$ can be obtained, for instance, from one projection with a uniform mask):

$$y^{(k)} = \frac{y'^{(k)} - c \sum_{i}\sum_{j} I_{ij}}{m}.$$
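The conditioning and its inverse are two affine maps; a minimal sketch, assuming the realizable responsivity range is [r_min, r_max] and the mask coefficients lie in [−1, +1]:

```python
import numpy as np

def condition_mask(mask, r_min, r_max):
    """Affine map r = m*b + c taking coefficients in [-1, 1] into the
    realizable responsivity range [r_min, r_max]."""
    m = (r_max - r_min) / 2.0
    c = (r_max + r_min) / 2.0
    return m * mask + c, m, c

def recover_projection(y_measured, m, c, sum_I):
    """Invert the conditioning: y = (y' - c * sum_ij I_ij) / m, where
    sum_I can be measured once with a uniform (all-ones) mask."""
    return (y_measured - c * sum_I) / m
```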

This conditioning is responsible for mapping the target mask coefficients into the realizable region, the distribution of which is shown in Figs. 9(a) and 9(c) for the naïve and MMSE methods, respectively. Also, as observed from the bias distributions in Figs. 9(b) and 9(d), the MMSE method spreads out the bias to ensure that the quantization effects on the implementation are minimized. Because MMSE considers the effects of noise when the bias is prescribed for a given mask, variance is added to the realizable mask coefficients, which leads to their spread compared to those designed without considering the effect of noise.

Fig. 9 a) Distribution of 8 × 8 block-based DCT mask coefficients for naïve method, b) distribution of bias for naïve method, c) distribution of 8 × 8 block-based DCT mask coefficients for MMSE method, and d) distribution of bias for MMSE method.

6.4. DCT-based image compression

Once the optimal masks and gains are designed with the aid of the bias-selection algorithm, the biases are applied to the hardware, which in turn achieves the desired coefficients as modulation factors at each pixel. Finally, the DCT coefficient corresponding to each mask is obtained as

$$y_{\mathrm{opt}}^{(k)} = \sum_{i=1}^{96} \sum_{j=1}^{96} I_{i,j}\, \tilde{r}_{i,j}^{(k)}(v_{\mathrm{opt}}).$$

6.5. DCT-based image reconstruction

Image reconstruction is achieved by simply taking the linear combination of the masks onto which the image was projected:

$$I \approx \sum_{k} R_{\mathrm{opt}}^{(k)}\, y_{\mathrm{opt}}^{(k)}.$$

Following the discussion above, we performed DCT-based image compression optimally on the hardware. However, some error still exists in the projection coefficients and propagates through the reconstruction; this is mainly due to the limited dynamic range of the pixels and various uncharacterized random noise sources present in the hardware.
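Putting the pieces together, a simulation-level round trip in the spirit of the ideal-DCT result of Fig. 10(d): project a block onto orthonormal DCT masks, keep the largest-magnitude coefficients to emulate compression, and reconstruct by the weighted sum above. Unlike the earlier sketch, the C(u)C(v) and normalization factors are folded into the masks so that reconstruction is a plain linear combination:

```python
import numpy as np

def orthonormal_dct_masks(N):
    """Orthonormal 2D DCT basis: Eq. (8) masks with the C(u)C(v) and
    2/sqrt(MN) normalization absorbed into the basis vectors."""
    n = np.arange(N)
    c = np.where(n == 0, 1.0 / np.sqrt(2.0), 1.0)
    cosines = np.sqrt(2.0 / N) * c[:, None] * np.cos(
        np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    return np.einsum('ui,vj->uvij', cosines, cosines).reshape(N * N, N, N)

def compress_reconstruct(block, masks, keep):
    """Project, keep the `keep` largest-magnitude coefficients, and
    reconstruct as I ~ sum_k y_k * mask_k."""
    y = np.einsum('ij,kij->k', block, masks)
    top = np.argsort(np.abs(y))[::-1][:keep]
    return np.einsum('k,kij->ij', y[top], masks[top])
```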

6.6. Compressive sensing implementation

The second type of in-pixel compressed-domain acquisition we have explored is compressive sensing (CS). While in DCT transform coding the gain vectors vary continuously, which leads to maximal exploitation of the device dynamic range, the CS implementation reduces complexity by using only zeros and ones, which makes the system more resilient to noise. Here, we present some background on CS and the implementation methodology on the proposed hardware.

CS is based on the principle of achieving larger and more efficient compression, provided that the desired data are sparse in some basis. Sparsity is the primary condition here and leads to efficient reconstruction of the data if they are sampled in a proper domain. We consider the input image as a discrete-time column vector $x \in \mathbb{R}^P$ with elements $x[n]$, where $n = 1, 2, \ldots, P$ and $P = 96 \times 96$. Then, $x$ can be represented as a linear combination of elements from an orthonormal basis $\{\phi_i\}_{i=1}^{P}$ with coefficients $s_i$:

$$x = \sum_{i=1}^{P} s_i\, \phi_i,$$

or

$$x = \phi s.$$

We assume that $s$ is sparse, with $K$ nonzero coefficients. By selecting an efficient binary random sensing matrix $\psi$, we can represent the reduced data set as $y = \psi x$, where $\psi$ is a binary matrix of size $M \times P$ and $M \ll P$. In this way, the dimension of the data set is reduced from $P$ to $M$. However, the size $M$ also needs to be properly determined for stable reconstruction. The standard expression for computing $M$ is

$$M \geq c\, K \log\left(\frac{P}{K}\right),$$

where $c$ is a constant. Here, the matrix $\psi$ is composed of $M$ basis functions in $P$ dimensions onto which the data $x$ are projected, i.e., $\psi = [\psi_1 \,|\, \psi_2 \,|\, \cdots \,|\, \psi_M]^T$, where each $\psi_i$ is of size $P \times 1$. The matrix is designed to satisfy the restricted isometry property (RIP) [44]:

$$(1 - \sigma_K)\, \|x\|_2^2 \leq \|\psi x\|_2^2 \leq (1 + \sigma_K)\, \|x\|_2^2,$$

where $\sigma_K \in [0, 1)$. Moreover, each $\psi_i$ is converted to an equivalent 2D data set and then implemented on the hardware as a measurement mask. Because this mask is composed of binary elements, it is easier to achieve the projections, as the detector simply switches on or off depending on the bias applied during acquisition. After obtaining the coefficients from the projection of the image onto the reduced basis, the challenging problem is to reconstruct the image from its dimensionally reduced format. Specifically, we seek to reconstruct the image vector $x$ using only the $M$ measurements in the vector $y$, the random measurement matrix $\psi$, and the orthonormal basis $\phi$. Equivalently, we could reconstruct the sparse coefficient vector $s$. The estimate is given by the $\ell_1$-minimization criterion, which uses a convex relaxation of the $\ell_0$ norm:

$$\hat{x} = \arg\min_x \|x\|_1,$$

such that

$$\psi x = y,$$

where

$$\|x\|_1 = \sum_i |x_i|.$$

The reconstruction was performed with the aid of the ℓ1-magic solver [45], using the same random basis for reconstruction that was used for the projection during the hardware implementation.
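As an illustrative stand-in for the ℓ1-magic solver, basis pursuit can be posed as a linear program (splitting x into nonnegative parts) and solved with SciPy. A toy sketch, with ϕ taken as the identity so that x itself is sparse:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(psi, y):
    """Solve min ||x||_1 s.t. psi @ x = y as an LP: x = u - w with
    u, w >= 0, minimizing sum(u) + sum(w)."""
    M, P = psi.shape
    c = np.ones(2 * P)
    A_eq = np.hstack([psi, -psi])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * P), method='highs')
    u, w = res.x[:P], res.x[P:]
    return u - w

# Toy example: K-sparse signal sampled by a binary random matrix.
rng = np.random.default_rng(1)
P, M, K = 64, 32, 4
x = np.zeros(P)
x[rng.choice(P, K, replace=False)] = rng.standard_normal(K)
psi = rng.integers(0, 2, size=(M, P)).astype(float)
x_hat = basis_pursuit(psi, psi @ x)  # typically recovers x for small K
```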

6.7. Performance comparison between naïve DCT, MMSE DCT, and CS reconstruction

For a prescribed response-modulation factor, mandated by the DCT masks, for example, we analytically calculated the required voltage using the bias-selection algorithm discussed in Section 6.2. Note that without such a statistical calculation of the voltage, the implementation of the modulation level would be inexact and would result in errors in the image reconstruction. Figure 10 shows reconstructed images for the different compression methods with different numbers of projection coefficients taken into account. The criticality of the statistical calculation of the voltages is evidenced by the presence of noise in the images reconstructed using the naïve approach, which uses bias voltages calculated without considering uncertainty in the ROIC's implementation of the masks, as shown in Fig. 10(a). In contrast, the reconstruction based on the bias-selection algorithm achieves a better result, as seen in Fig. 10(b). In addition, the CS reconstruction, as shown in Fig. 10(c), outperforms the DCT-based approach. From the given results, we can see that the naïve reconstruction fails to retrieve the details of the image as well as the contrast levels due to the presence of noise. The MMSE-based results, however, achieve better contrast and reproduce most of the details of the original image. Note that CS gives almost exact reconstruction when a sufficient number of coefficients is used. This is due to the fact that CS exploits randomness as a tool to extract information with fewer coefficients, and the uncertainty in responsivity has less of an implication for it compared to the DCT approach, which relies on the exact implementation of the masks. Also, for the DCT transform, the reconstruction is a linear combination of the projection coefficients with the corresponding basis masks, so an error in the projection propagates through the reconstruction. CS reconstruction uses ℓ1 minimization-based optimization, which tends to keep the reconstruction noise as low as possible. Hence, the CS-based reconstruction is more tolerant of uncertainty in the electronic mask implementation due to its robust ℓ1 optimization, whereas the DCT approach uses an ℓ2 optimization, which is known for its inferior performance compared to ℓ1 optimization.

Fig. 10 The resulting images reconstructed using a) naïve DCT, b) minimum-mean-square-error based DCT, c) compressive sensing, and d) ideal DCT. e) The performance of the different methods is compared in terms of the mean square error between the reconstructed image and the original image.

A reconstruction based on the ideal DCT is depicted in Fig. 10(d), where the given input image was projected onto a set of ideal DCT masks and the reconstruction was performed with different numbers of projection coefficients. The results demonstrated in Fig. 10(d) are carried out entirely at the simulation level.

The reconstruction errors shown in Fig. 10(e) are computed with respect to the ground-truth image. For the MMSE and naïve methods, although the visual results with a higher percentage of coefficients look better than those of binary CS with a lower percentage, the individual pixel values deviated from the original pixels, whereas the deviation was smaller for CS. This is because the correlation between pixels retains the image structure, which looks better to the user; thus, the pixel correlation when using a larger number of projection coefficients in the naïve and MMSE reconstructions is higher than when using a lower number of projection coefficients in binary CS. However, because the coefficients are more sensitive to noise for the MMSE and naïve methods than for CS, their reconstruction error is higher. In this context, considering more coefficients in the reconstruction leads to more propagation of the projection error; this error is smaller for MMSE than for the naïve method.

The analog image sensor has limited memory, which forces the device to operate over a limited dynamic range; this constrains the device to rely on small, block-sized transform coding instead of a large kernel mask. The reason is that, for a large block size, the mask coefficients are significantly larger in number and denser. This gives rise to quantization issues, as most of the neighboring coefficient values are rounded to their nearby realizable coefficients. As a result, the realized mask loses its orthonormal property, and the implemented mask is no longer equivalent to the targeted mask, leading to reconstruction errors.

7. Functioning as a stand-alone camera

Depending on the modulation scheme applied to the chip, different applications could be delivered. In the simplest scenario, if all of the pixels are biased with the same voltage, the iROIC camera can be used as a stand-alone camera. In this mode of operation, Vbias should remain constant, and as a result, the modulation factor that is used for different pixels is the same.

The extra benefit of this hardware over the conventional CTIA is that, in stand-alone mode, because the reference voltage for the readout differs from the detector's bias voltage, a Vref − Vbias offset is applied to the measured values; in effect, a level shifter is embedded in every pixel. This method is beneficial if there is a constant offset at the output of the imager. Figure 11 shows four images taken by the iROIC camera in stand-alone mode.

Fig. 11 Four images taken using the iROIC camera in normal mode: a) a phantom, b) a cell, c) some rice grains, and d) the UNM logo.

8. Region of interest (ROI) enhancement

The support for continuous spatiotemporal control over the bias voltage applied to each photodetector enables ROI enhancement, achieved by selectively modulating the responsivity of detectors located in the region of interest. Several applications benefit from this; some are briefly discussed below, and a sketch of constructing a region-selective bias matrix follows the list:

  • It aids in enhancing the contrast of the image over a given region whose contrast is originally poor due to the limited dynamic range of the sensor. This is also a solution to the challenge of finding an optimum bias for a high-contrast image in which part of the scene is saturated while another part is at the noise level. A smart selection of bias voltages forces all pixels to operate in the linear region.
  • This method facilitates achieving different resolutions for different regions of a given image by using sub-masks corresponding to low-pass and high-pass responses. This is useful in surveillance and medical applications, where the user may be interested in a specific region and wants to ignore the information in the rest of the image.
  • Spectral selectivity in different areas of the image is another application of the hardware; however, it requires support for multispectral tunability at the photodetectors.
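As a minimal illustration of the region-selective biasing above, the following sketch builds a bias matrix that boosts the modulation factor inside a rectangular ROI; the voltage values are placeholders chosen inside the semi-linear ~[+0.4 V, +3.5 V] operating range reported in Section 3:

```python
import numpy as np

def roi_bias_mask(shape, roi, v_background=1.0, v_roi=3.0):
    """Uniform background bias with a boosted bias over the ROI.

    roi: (row0, row1, col0, col1); the voltages are placeholder values
    inside the usable ~[+0.4 V, +3.5 V] range."""
    bias = np.full(shape, v_background)
    r0, r1, c0, c1 = roi
    bias[r0:r1, c0:c1] = v_roi
    return bias

# Example: enhance a 32 x 32 region of the 96 x 96 array.
mask = roi_bias_mask((96, 96), (32, 64, 32, 64))
```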
Figure 12(a) shows the original white-matter image, which we have used at the input of iROIC in the image-segmentation experiment. Figure 12(b) depicts the white-matter image taken with iROIC when a uniform bias is applied to all of the pixels, and Figs. 12(c)–12(f) present the same scene with different biases applied to selected areas, referred to as regions of interest.

Fig. 12 a) Original white-matter image used for imaging. b) Image taken using iROIC with uniform biasing for all of the pixels, where some pixels are saturated due to the high intensity. In c), d), e), and f), the same scene is imaged using proper biasing for the different areas that are normally at the noise floor of the imager.

9. Conclusions

A monolithic implementation of compressed-domain image acquisition is presented, where all of the computations are performed at acquisition time within the analog ROIC circuit. The detector-bias information is the knob we employ to control the modulation factor of each individual pixel. The reported hardware outputs a reduced set of compression coefficients of an image, thereby avoiding the generation of big data. A flexible image-retrieval setup enables fine control over the matrix that is projected onto the image.

The enhanced acquisition technique, which utilizes a statistical detector-biasing scheme, offers many different applications, such as in-place nonuniformity correction, sensor-level region-of-interest enhancement, transform coding embedded in the ROIC, and compressive sampling, all of which are demonstrated through the selection of proper biasing matrices. Additionally, for the case of transform coding, an intelligent bias-selection algorithm is proposed, and its result is compared against the naïve method.

The motivation for the current extension is to efficiently acquire data and reconstruct with fewer projection coefficients, which is highly desirable for multispectral imaging, as it reduces acquisition time and instrument complexity. Here we deploy a CS-based compression technique in which an entire signal can be reconstructed from a sparse data set, with a proper basis-pursuit algorithm used for reconstruction of the multispectral image from the reduced data set.

Funding

This work was supported primarily by the Engineering Research Centers Program (ERC) of the National Science Foundation under NSF Cooperative Agreement No. EEC-0812056.

References and links

1. M. Prastawa, E. Bullitt, S. Ho, and G. Gerig, “A brain tumor segmentation framework based on outlier detection,” Med. Image Anal. 8, 275–283 (2004).

2. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5, 10669 (2015).

3. M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed sensing MRI,” IEEE Sig. Proc. Mag. 25, 72–82 (2008).

4. F. Shao, W. Lin, G. Jiang, and Q. Dai, “Models of monocular and binocular visual perception in quality assessment of stereoscopic images,” IEEE T. Comput. Imag. 2, 123–135 (2016).

5. S. V. Venkatakrishnan, L. F. Drummy, M. Jackson, M. De Graef, J. Simmons, and C. A. Bouman, “Model-based iterative reconstruction for bright-field electron tomography,” IEEE T. Comput. Imag. 1, 1–15 (2015).

6. A. Chowdhury, R. Darveaux, J. Tome, R. Schoonejongen, M. Reifel, A. De Guzman, S. S. Park, Y. W. Kim, and H. W. Kim, “Challenges of megapixel camera module assembly and test,” in Proceedings Electronic Components and Technology Conference, 2005 (ECTC '05) (IEEE, 2005), pp. 1390–1401.

7. N. Nakano, R. Nishimura, H. Sai, A. Nishizawa, and H. Komatsu, “Digital still camera system for megapixel CCD,” IEEE T. Consum. Electron. 44, 581–586 (1998).

8. C. F. Weiman and J. M. Evans, Jr., “Digital image compression employing a resolution gradient,” US Patent 5,103,306 (1992).

9. P. T. Barrett, “Method for image compression on a personal computer,” US Patent 5,287,420 (1994).

10. J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE T. Pattern Anal. 15, 1148–1161 (1993).

11. A. Gandomi and M. Haider, “Beyond the hype: Big data concepts, methods, and analytics,” Int. J. Inform. Manage. 35, 137–144 (2015).

12. Y. Oike, M. Ikeda, and K. Asada, “Design and implementation of real-time 3-D image sensor with 640 × 480 pixel resolution,” IEEE J. Solid-St. Circ. 39, 622–628 (2004).

13. R. LiKamWa, B. Priyantha, M. Philipose, L. Zhong, and P. Bahl, “Energy characterization and optimization of image sensing toward continuous mobile vision,” in Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services (ACM, 2013), pp. 69–82.

14. I. Cevik, X. Huang, H. Yu, M. Yan, and S. U. Ay, “An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability,” Sensors 15, 5531–5554 (2015).

15. M. Dadkhah, M. J. Deen, and S. Shirani, “Compressive sensing image sensors-hardware implementation,” Sensors 13, 4961–4978 (2013).

16. R. G. Baraniuk, “Compressive sensing,” IEEE Sig. Proc. Mag. 24, 118 (2007).

17. J. Ribas-Corbera and S. Lei, “Rate control in DCT video coding for low-delay communications,” IEEE T. Circuits Syst. Video Technol. 9, 172–185 (1999).

18. M. Leinonen, M. Codreanu, and M. Juntti, “Compressed acquisition and progressive reconstruction of multi-dimensional correlated data in wireless sensor networks,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2014), pp. 6449–6453.

19. G. R. C. Fiorante, P. Zarkesh-Ha, J. Ghasemi, and S. Krishna, “Spatio-temporal tunable pixels for multi-spectral infrared imagers,” in 2013 IEEE 56th International Midwest Symposium on Circuits and Systems (MWSCAS) (IEEE, 2013), pp. 317–320.

20. M. Bhattarai, J. Ghasemi, G. R. Fiorante, P. Zarkesh-Ha, S. Krishna, and M. M. Hayat, “Intelligent bias-selection method for computational imaging on a CMOS imager,” in 2016 IEEE Photonics Conference (2016).

21. J. Lee, S. Lim, and G. Han, “A 10b column-wise two-step single-slope ADC for high-speed CMOS image sensor,” in Proc. IEEE International Image Sensor Workshop, Ogunquit, ME (2007), pp. 196–199.

22. M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magn. Reson. Med. 58, 1182–1195 (2007).

23. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. E. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25, 83 (2008).

24. J. B. Sampsell, “Digital micromirror device and its application to projection displays,” J. Vac. Sci. Technol. B 12, 3242–3246 (1994).

25. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).

26. Y. Oike and A. El Gamal, “A 256 × 256 CMOS image sensor with ΔΣ-based single-shot compressed sensing,” in 2012 IEEE International Solid-State Circuits Conference (IEEE, 2012), pp. 386–388.

27. H. Tian, “Noise analysis in CMOS image sensors,” Ph.D. thesis, Stanford University (2000).

28. M. Bigas, E. Cabruja, J. Forest, and J. Salvi, “Review of CMOS image sensors,” Microelectron. J. 37, 433–451 (2006).

29. S. Mendis, S. E. Kemeny, and E. R. Fossum, “CMOS active pixel image sensor,” IEEE T. Electron. Dev. 41, 452–453 (1994).

30. A. Mehrish, A. Subramanyam, and S. Emmanuel, “Sensor pattern noise estimation using probabilistically estimated RAW values,” IEEE Signal Process. Lett. 23, 693–697 (2016).

31. K. Yonemoto and H. Sumi, “A CMOS image sensor with a simple fixed-pattern-noise-reduction technology and a hole accumulation diode,” IEEE J. Solid-St. Circ. 35, 2038–2043 (2000).

32. A. J. Cooper, “Improved photo response non-uniformity (PRNU) based source camera identification,” Forensic Sci. Int. 226, 132–141 (2013).

33. M. J. Schulz and L. V. Caldwell, “Nonuniformity correction and correctability of infrared focal plane arrays,” in SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics (International Society for Optics and Photonics, 1995), pp. 200–211.

34. D. Litwiller, “CCD vs. CMOS,” Photon. Spectra 35, 154–158 (2001).

35. B. E. Stine, D. S. Boning, and J. E. Chung, “Analysis and decomposition of spatial variation in integrated circuit processes and devices,” IEEE T. Semicond. Manuf. 10, 24–41 (1997).

36. N. Ricquier and B. Dierickx, “Active pixel CMOS image sensor with on-chip non-uniformity correction,” in Proc. IEEE Workshop on Charge-Coupled Devices and Advanced Image Sensors (1995), pp. 20–22.

37. A. Piva, “An overview on image forensics,” ISRN Sig. Proc. 2013, 496701 (2013).

38. M. Sheng, J. Xie, and Z. Fu, “Calibration-based NUC method in real-time based on IRFPA,” Physics Procedia 22, 372–380 (2011).

39. D. L. Perry and E. L. Dereniak, “Linear theory of nonuniformity correction in infrared staring sensors,” Opt. Eng. 32, 1854–1859 (1993).

40. S. N. Torres, E. M. Vera, R. A. Reeves, and S. K. Sobarzo, “Adaptive scene-based nonuniformity correction method for infrared-focal plane arrays,” in AeroSense 2003 (International Society for Optics and Photonics, 2003), pp. 130–139.

41. C. Zuo, Q. Chen, G. Gu, and X. Sui, “Scene-based nonuniformity correction algorithm based on interframe registration,” J. Opt. Soc. Am. A 28, 1164–1176 (2011).

42. S. Saha, “Image compression-from DCT to wavelets: a review,” Crossroads 6, 12–21 (2000).

43. Y.-M. Zhou, C. Zhang, and Z.-K. Zhang, “An efficient fractal image coding algorithm using unified feature and DCT,” Chaos, Solitons & Fractals 39, 1823–1830 (2009).

44. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Sig. Proc. Mag. 25, 21–30 (2008).

45. E. Candès and J. Romberg, “l1-magic: Recovery of sparse signals via convex programming,” www.acm.caltech.edu/l1magic/downloads/l1magic.pdf (2005).
