## Abstract

We demonstrate an approach that allows taking videos at very high frame rates of over 100,000 frames per second by exploiting the fast sampling rate of the standard rolling-shutter readout mechanism, common to most conventional sensors, together with a compressive-sampling acquisition scheme. Our approach is applied directly to a conventional imaging system by the simple addition of a diffuser at the pupil plane, which randomly encodes the entire field-of-view to each camera row while maintaining diffraction-limited resolution. A short video is reconstructed from a single camera frame via a compressed-sensing reconstruction algorithm, exploiting the inherent sparsity of the imaged scene.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction and principle

High-speed imaging is important for observing fast phenomena, such as high-speed tracking of single particles [1], the study of explosive materials, applications in the automotive industry, and more [2]. While high-resolution cameras are available today in affordable devices such as smartphones, very high-speed ($>$10,000 frames-per-second (fps)) cameras are still uncommon and expensive, with price tags in the thousands of dollars. The high price is a result of the hardware requirements, which include high-speed readout, a high transfer bit-rate from the sensor to the memory, and high-speed memory allocated close to the sensor.

Here, we suggest a simple solution for high-speed (>100,000 fps) imaging of sparse scenes that goes beyond the camera frame-rate for the same region of interest (ROI) by two orders of magnitude. The fast frame-rate is obtained by exploiting the fast rolling-shutter readout common to most conventional sensors, together with a compressive-sampling (CS) acquisition scheme, without changing the camera readout speed. Our technique can be applied to any conventional imaging system by the simple addition of a diffuser at the imaging-system pupil plane. The goal of the diffuser is to generate a random speckle point-spread-function (PSF), which encodes the entire scene to each camera row. The rolling shutter samples the scene at 100,000 rows per second, allowing high-speed video reconstruction from a single camera frame via a post-processing computational reconstruction.

In recent years, various techniques that allow video reconstruction from a single frame based on compressed sensing have been developed [3]. Among them are coded exposure photography [4], dynamic coded-aperture photography [5], dynamic PSF modulation [6], and many more [7–12]. However, these approaches rely on fast dynamic modulation and require relatively complex modifications or additions to the imaging system.

In contrast to these approaches, ours requires only a simple, straightforward addition of a static diffuser at the imaging-system pupil plane. Recently, a similar technique utilizing the rolling-shutter effect for video reconstruction from a single frame was presented by Antipa et al. [13] for lensless imaging systems. The implementation of this approach, however, required access to the bare sensor, and is thus not directly adaptable to all conventional imaging systems. Here, we demonstrate that the rolling shutter can be exploited for high-speed photography in any lens-based imaging system by accessing only the pupil plane. In addition, by relying on a speckle-based encoding, rather than the caustics-based one used by Antipa et al., our implementation preserves a diffraction-limited spatial resolution, a potential advantage for microscopic imaging.

The enabling principle in all approaches of video reconstruction from a single frame is that the dynamic scene to be captured is compressible in some known domain and that each pixel in the acquired frame encodes information from a large number of video pixels, in both *space* and *time*. Following the principles of CS [14], a dynamic video can be reconstructed from such a single frame under the conditions of an appropriate encoding and a minimal number of acquired pixels. Importantly, a random encoding, i.e. an encoding where each camera pixel is a random linear superposition of video pixels in space and time, satisfies those conditions.

In a conventional camera with a rolling-shutter readout, the camera image is sampled row after row in a serial, consecutive manner [9]. Therefore, each row of pixels in the acquired frame encodes the scene information at a specific, short time (Fig. 1(a)). This effect is usually a hurdle for high-speed photography [15], causing artifacts for fast-moving objects or fast-changing scenes, as depicted in Fig. 1(a). However, one can exploit the inherently fast sampling rate of the rolling shutter for video reconstruction by randomly encoding the entire scene information (in all spatial and temporal coordinates) to every camera row (scanline).

A schematic realization of this principle using an optical diffuser is presented in Figs. 1(b)-(g). A light-scattering diffuser at the pupil plane of the imaging system (Fig. 1(c)) optically and randomly encodes the entire scene onto each row of the camera sensor by scattering (Fig. 1(d)). A camera with a rolling-shutter readout captures the single frame (Fig. 1(e)). Each row in the captured image encodes a single time frame of the video. The single captured image is fed into a CS reconstruction algorithm that decodes the video (Figs. 1(f)-(g)). Importantly, in common CMOS cameras, the readout speed of each row is usually several orders of magnitude faster than the full-image acquisition rate (by a factor of the number of rows in the image) [9].
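To make this encoding concrete, the rolling-shutter capture described above can be sketched in a few lines of NumPy. This is a toy simulation with illustrative dimensions and a random stand-in for the speckle PSF, not the experimental code:

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(0)
ny, nx, nt = 64, 64, 64          # rows, columns, time bins (one time bin per row)

# Random non-negative pattern standing in for the diffuser's speckle PSF.
psf = rng.random((ny, nx))
psf /= psf.sum()

# Sparse dynamic scene: a single bright point moving over time.
scene = np.zeros((nt, ny, nx))
for t in range(nt):
    scene[t, (2 * t) % ny, (3 * t) % nx] = 1.0

def blur(frame, psf):
    """Convolve one scene frame with the PSF (circular convolution via FFT)."""
    return np.real(ifft2(fft2(frame) * fft2(np.fft.ifftshift(psf))))

# Rolling-shutter capture: row n of the single acquired frame is row n of the
# blurred scene at the time bin n (i.e. t_n = n / V_s).
captured = np.empty((ny, nx))
for n in range(ny):
    captured[n] = blur(scene[n], psf)[n]
```

Because the PSF spreads every scene point over the full sensor, each captured row carries a random mixture of the whole field-of-view at its own sampling time, which is the property the reconstruction exploits.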

## 2. Forward model and reconstruction scheme

The acquisition and reconstruction of our approach can be mathematically described as follows: for imaging a 2D spatially-incoherently illuminated scene, described by $O(x,y,t),$ the image intensity at the sensor plane at any time, $t,$ is given by:

$$I(x,y,t) = \mathrm{PSF}(x,y) \underset {x,y} \circledast O\left(\frac{x}{M},\frac{y}{M},t\right) \tag{1}$$

where $\mathrm{PSF}(x,y)$ is the imaging PSF, $M$ is the magnification of the imaging system, and $\underset {x,y} \circledast$ denotes 2D spatial convolution over $(x,y)$. The single image acquired by a camera with a rolling-shutter readout, $I_{cam}(m,n)$, is a sampled version of $I(x,y,t)$, where the row at coordinate $n$ is sampled at a time $t_n=n/V_s$:

$$I_{cam}(m,n) = I(x_m,y_n,t_n) \tag{2}$$

where $V_s$ is the rolling-shutter speed in rows per second, i.e. the row sampling frequency is $f_s=V_s.$ Assuming an exposure time $T_{exp},$ the captured image is given by:

$$I_{cam}(m,n) = \int_{t_n}^{t_n+T_{exp}} I(x_m,y_n,t)\, dt \tag{3}$$

The linear forward model described in Eq. (3) can be written as a matrix-vector multiplication of a convolution-like matrix, $\textbf {A}$, that describes the PSF and the rolling-shutter readout, with a vector $\textbf {v}$ that describes the dynamic scene $O(x,y,t)$ [16,17]:

$$\textbf{b} = \textbf{A}\textbf{v} \tag{4}$$

where $\textbf {b}$ is a vector of dimensions $[m=n_x\cdot n_y,1]$, representing the captured $n_x$ by $n_y$ pixel image. $\textbf {v}$ is a vector of dimensions $[n = n_x \cdot n_y \cdot n_t,1]$ that represents the 3D dynamic scene having $n_t$ temporal bins, and $\textbf {A}$ is the forward-model (Eq. (3)) matrix of dimensions $[m,n].$ The $i$-th row of the matrix $\textbf {A}$ describes the random encoding of the spatio-temporal scene to the $i$-th sensor pixel. The $j$-th column of $\textbf {A}$ describes the spreading of each spatio-temporal pixel intensity in the scene, $I(x_j,y_j,t_j)$, to all camera pixels. The columns of $\textbf {A}$ are thus shifted, sampled representations of the PSF. In the framework of CS, the scene video, $\textbf {v}$, can be reconstructed from the acquired frame, $\textbf {b}$, by e.g. finding the solution to the convex-minimization problem [14]:

$$\hat{\textbf{v}} = \underset{\textbf{v}}{\operatorname{argmin}} \ \frac{1}{2}\|\textbf{A}\textbf{v}-\textbf{b}\|_2^2 + \tau\|\textbf{v}\|_1 \tag{5}$$
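As an illustration of this minimization, the same $\ell_1$-regularized least-squares problem can be attacked with a plain iterative shrinkage-thresholding (ISTA) loop, a simpler stand-in for the GPSR solver [24] used in this work. All dimensions, the regularization weight, and the iteration count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 120, 400, 8          # measurements, unknowns, non-zero entries (toy sizes)

# Random Gaussian matrix standing in for the PSF/rolling-shutter matrix A.
A = rng.standard_normal((m, n)) / np.sqrt(m)
v_true = np.zeros(n)
v_true[rng.choice(n, size=k, replace=False)] = 1.0
b = A @ v_true                 # noiseless single-frame measurement

# ISTA for  min_v  0.5 * ||A v - b||_2^2 + tau * ||v||_1
tau = 0.01
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth term's gradient
v = np.zeros(n)
for _ in range(1000):
    z = v - A.T @ (A @ v - b) / L                           # gradient step
    v = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)   # soft-thresholding

rel_err = np.linalg.norm(v - v_true) / np.linalg.norm(v_true)
```

With $m \gg k$ and an incoherent random $\textbf{A}$, the sparse vector is recovered to within the small bias introduced by the $\ell_1$ penalty.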

## 3. Numerical results

The imaging procedure consists of three steps: PSF calibration, acquisition, and reconstruction. For calibration, a point-object is used to record the system PSF. For an ideal thin diffuser placed at the pupil plane, the system is expected to be isoplanatic, and a single PSF recording is sufficient. For realistic diffusers, the isoplanatic angle is given by the angular "memory effect" [21] of the diffuser, which for ground-glass or holographic diffusers is a couple of degrees [22,23]. Thus, the PSF calibration can be done by recording the PSF in a few isoplanatic patches. The acquisition (encoding) step is a simple recording of a single camera frame. Finally, the reconstruction (decoding) step is performed by running a conventional CS reconstruction algorithm to solve the inverse problem [24].

As a first step to confirm the proposed approach, we performed a numerical simulation using a single experimentally measured PSF, generated by a $1^{\circ }$ diffuser (Newport). The results of this simulation are presented in Fig. 2.

The simulated scene is composed of 54 digits changing in space and time over 54 time bins and 108x108 spatial pixels. The captured image (Fig. 2(a)) contains 108x108 pixels. Thus, m=11,664, n=629,856, and the number of non-zero entries in the scene is k=679. Poisson noise was added to the raw image pixels to simulate a measurement SNR of 50 (17 dB), where the SNR is defined as the ratio of the signal mean intensity to the standard deviation of the noise intensity. These values were chosen since they are representative of the SNR in our experiments (Fig. 3 and Fig. 4). To appropriately simulate the dual rolling shutter of our camera (Andor Zyla 4.2), two camera rows simultaneously sample the scene at each specific time, rolling from the top and bottom of the frame at earlier times to the center of the frame at later times.
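The quoted SNR maps directly onto the Poisson statistics of shot noise: for a Poisson-distributed signal, SNR = mean/std = √mean, so an SNR of 50 corresponds to roughly 2,500 detected photons per pixel. A quick numerical check (the sample size here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# For Poisson noise, SNR = mean / std = sqrt(mean), so an SNR of 50
# corresponds to ~2500 detected photons per pixel on average.
mean_photons = 50 ** 2
pixels = rng.poisson(mean_photons, size=100_000)

snr = pixels.mean() / pixels.std()
```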

Under these conditions, a high-fidelity reconstruction of the 54 video frames at a resolution of 108x108 pixels was obtained and compared to the ground-truth video (Figs. 2(c) and (d)), yielding a $\times 60$ increase in acquisition speed compared to the raw camera frame-rate. Figures 2(e)-(h) and Figs. 2(i)-(l) show a zoomed-in comparison of two of the digits: one that appears early and is located at the bottom edge of the field-of-view (FoV) (Figs. 2(e)-(h)), and one that is located at the center of the FoV and appears at a later time (Figs. 2(i)-(l)). As a result of the finite dimensions of the speckle PSF, the signal from objects located at the bottom or top edges of the FoV does not spread evenly over the entire sensor, and thus results in a lower number of effective samples (Fig. 2(e)) than an object at the center of the FoV (Fig. 2(i)). This effect may reduce the reconstruction fidelity at the edges of the FoV. It can be reduced by choosing a larger PSF, albeit with a reduced SNR at the center of the frame.

We performed additional investigations of the reconstruction fidelity at various SNR levels and scene sparsities by simulating three different scenes with varying sparsity levels and calculating, for each scene, the Pearson correlation coefficient between the simulated video and the full video reconstruction. The results of these investigations are presented in Fig. 2(m), confirming that the complexity of our scene is close to the bounds of CS reconstruction.

## 4. Experimental proof-of-principle

For a proof-of-principle experimental demonstration, we constructed the setup depicted in Figs. 3(a)-(c). The setup is a simple imaging system consisting of two 4f imaging telescopes with a $\times 22.5$ total de-magnification. An optical diffuser (10DKIT-C1 holographic diffuser, Newport) having a 1-degree scattering angle, without a dominant ballistic component (zero-order diffraction peak), is placed at the Fourier plane (Fig. 3(b)). The criterion for choosing the diffuser scattering angle is that the PSF spread is such that significant intensity from a point object located at the bottom of the field-of-view (FoV) reaches the entire camera sensor. A larger PSF would waste the photon flux reaching the sensor; a smaller PSF would not encode the information from the lower and upper edges of the FoV to all camera rows. The experimental PSF was measured by imaging a $30\,\mu m$-diameter pinhole located at the center of the FoV. The measurement matrix $\textbf {A}$ was built as a convolution matrix from the single measured PSF, assuming perfect shift-invariance of the PSF (i.e. an infinite memory effect).

As a dynamic, rapidly changing scene, three LEDs at a wavelength of $625\,nm$ (Thorlabs M625F2, fiber-coupled to 200 $\mu$m fibers) were modulated at different frequencies of up to 52 kHz, with different duty cycles, using three independent function generators. An sCMOS camera (Andor Zyla 4.2), set to a fast readout mode of $V_s=f_s=104,166 \ rows/sec$ and an exposure time of $T_{exp}=1/f_s=9.6 \,\mu s$, was used to capture the scene. According to the Shannon-Nyquist sampling criterion, the maximum frequency our system can record without aliasing is $f_{max} = f_s/2 = 52,083 \ Hz$. A sample video of 54 frames, reconstructed at a frame rate of 104,166 fps and a pixel resolution of 108x108 pixels, is presented in Figs. 3(d)-(f). This video is compared to measurements taken with direct diffraction-limited imaging, captured with the same optical setup without the optical diffuser and without temporal modulations. In addition, measurements of the temporal traces were taken with a fast photodiode (Figs. 3(g)-(i)). The reconstructed frame-rate is $\times 60$ higher than the highest native frame-rate of the camera at such a small ROI (1,600 fps), and $\times 1000$ higher than the camera frame-rate at full resolution. The diffraction-limited resolution of our system was chosen such that the speckled PSF (Fig. 2(b)) is Nyquist-sampled by the camera pixels. Thus, the optical resolution of the system is $\delta x \approx \Delta x/M_x$, where $M_x=1/22.5$ is the magnification of the optical system. It is worth noting that the CS reconstruction enables a sub-Nyquist super-resolution reconstruction for sparse samples [25].
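The timing figures above follow directly from the row readout rate; the arithmetic (with the speed-up factor computed only approximately from the quoted numbers) is:

```python
# Row sampling rate of the sCMOS camera in fast readout mode.
f_s = 104_166            # rows per second
T_exp = 1 / f_s          # exposure time per row, ~9.6 microseconds

# Nyquist limit: highest temporal frequency recordable without aliasing.
f_max = f_s / 2          # 52,083 Hz

# Speed-up relative to the camera's fastest native frame rate at this ROI
# (the text quotes ~x60).
native_roi_fps = 1_600
speedup_roi = f_s / native_roi_fps
```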

To validate that the proposed approach preserves a diffraction-limited spatial resolution, we performed an additional experiment with a more complex multi-point object, with spacings close to the diffraction-limited resolution of the optical imaging system. For this experiment, the object was composed of eight diffraction-limited dots with a diameter of $\sim 250\,\mu m$ at spacings of $\sim 600\,\mu m$, placed in front of an LED (Thorlabs M625L4) that was modulated at a frequency of 20,833 Hz. The reconstructed 40-frame video, at a frame rate of 41,666 fps and a pixel resolution of 96x80 pixels, is presented in Fig. 4. To compare the reconstruction resolution to the diffraction-limited resolution, two direct images of the object were taken without the diffuser, with and without the iris diaphragm limiting the system NA (Figs. 4(a)-(b)). Comparing the direct images (Figs. 4(a)-(b)) to the sum of the frames in our reconstruction (Fig. 4(c)) shows that the reconstruction quality is comparable to the diffraction-limited image. Moreover, a quantitative analysis of a cross-section of these images (Fig. 4(d)) reveals that our method maintains diffraction-limited spatial resolution. Cross-sections and sample frames from the full video reconstruction are given in Figs. 4(e)-(h). The raw image (Fig. 4(e)) was taken with the same sCMOS camera set to a slow readout mode of $V_s=f_s=41,666 \ rows/sec$ and an exposure time of $T_{exp}=1/f_s=24 \,\mu s$. The same optical diffuser is placed at the camera pupil plane, generating a random speckle PSF (Fig. 4(f)). The illuminating LED is modulated at a frequency of 20,833 Hz, which is the maximum frequency our system can record without aliasing at this slower readout mode. Three video frames (Fig. 4(g)) are presented, and a full time-trace for the eight object dots is presented for every video frame (Fig. 4(h)).

## 5. Conclusion

We presented a method for high-speed imaging that relies on the rolling-shutter effect. Although placing an optical diffuser at the pupil plane results in a random speckle PSF, it does not degrade the resolution limit of the imaging system, since the speckle grain size (Fig. 2(b)) is diffraction-limited [26]. While in our demonstration we used a simple diffuser that produces a speckle-pattern PSF obeying Rayleigh statistics [26], an engineered phase mask could be used for a more efficient encoding or to customize the intensity statistics [27].

As speckles can be very sensitive to the optical wavelength [28], broadband scenes may result in low-contrast raw images. This can be alleviated by engineered phase masks. However, the spectral sensitivity of speckles can be an interesting advantage for hyperspectral (x,y,t,$\lambda$) imaging [29,30]. Moreover, the suggested approach could also be extended to recovering depth information (x,y,t,z) by exploiting the natural orthogonality of speckles at different axial planes, as was recently demonstrated [17]. However, both of these extensions would come at the cost of more demanding sparsity constraints, since a higher-dimensional vector must be reconstructed from the same number of measurements (camera pixels). To maximize reconstruction fidelity, the PSF can be specifically engineered, as was recently demonstrated for the goal of 3D reconstruction using deep learning [31].

As in similar approaches, an inherent drawback of the presented approach is that the intensity from each spatial position in the scene is spread over a large number of camera pixels, reducing the raw signal-to-noise ratio. The approach is thus not ideal for high pixel-count imaging of low photon-flux, non-sparse scenes. Nonetheless, Antipa et al. [13] demonstrated that spatially-complex scenes can be reconstructed from a low-contrast single frame. This was achieved with an increased exposure time and considerably lower frame-rates than those considered in our experiments (4,500 fps vs. 100,000 fps). The high frame-rate of our reconstruction limits the spatial complexity of the scenes, as the total spatio-temporal sparsity and number of measurements need to satisfy the CS theory condition: $m \geq \mathcal {O}(k \cdot \log(n/k))$. The goal of our work is to demonstrate the highest frame-rate possible using conventional rolling-shutter sensors, by a simple pupil-plane manipulation. The ability to reconstruct spatially-sparse fast dynamic scenes may be most valuable for high-speed simultaneous tracking of multiple particles. A demonstration of such an application is beyond the scope of the current work.
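Plugging the simulation values from Section 3 into this condition (using the natural logarithm and ignoring the constant hidden in the $\mathcal{O}(\cdot)$ notation) confirms that the measurement budget exceeds the nominal bound:

```python
import math

m = 11_664      # measured pixels (108 x 108)
n = 629_856     # spatio-temporal unknowns (108 x 108 x 54)
k = 679         # non-zero entries in the simulated scene

bound = k * math.log(n / k)   # ~4,640, up to the O(.) constant
satisfied = m >= bound
```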

The CS reconstruction algorithm used for our demonstration is gradient projection for sparse reconstruction (GPSR) [24], which takes approximately three hours to reconstruct 54 video frames at a pixel resolution of 108x108 on an Intel i7 CPU. The use of GPUs and faster reconstruction algorithms can considerably reduce the run-time. In addition, deep-learning-based reconstruction frameworks can also be used [32,33] to improve the reconstruction run-time and potentially the reconstruction fidelity. Finally, the approach can also be extended to spatially-coherent scenes, e.g. for high-speed holographic video [34] or optical coherence tomography (OCT) [35], by considering holographic detection of the fields rather than intensity-only detection.

## Funding

European Research Council (ERC) Horizon 2020 research and innovation program (677909); Azrieli Foundation; Ministry of Science, Technology and Space (712845); Human Frontier Science Program (RGP0015/2016); Israel Science Foundation (1361/18).

## Disclosures

The authors declare that there are no conflicts of interest related to this article.

## References

**1. **H. Shen, L. J. Tauzin, R. Baiyasi, W. Wang, N. Moringo, B. Shuang, and C. F. Landes, “Single particle tracking: from theory to biophysical applications,” Chem. Rev. **117**(11), 7331–7376 (2017). [CrossRef]

**2. **H. Mikami, L. Gao, and K. Goda, “Ultrafast optical imaging technology: principles and applications of emerging methods,” Nanophotonics **5**(4), 497–509 (2016). [CrossRef]

**3. **J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica **5**(9), 1113–1127 (2018). [CrossRef]

**4. **R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” in * ACM transactions on graphics (TOG)*, (ACM, 2006), pp. 795–804.

**5. **P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express **21**(9), 10526–10545 (2013). [CrossRef]

**6. **Y. Shi, Y. Liu, W. Sheng, J. Wang, and T. Wu, “Speckle rotation decorrelation based single-shot video through scattering media,” Opt. Express **27**(10), 14567–14576 (2019). [CrossRef]

**7. **L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature **516**(7529), 74–77 (2014). [CrossRef]

**8. **R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express **23**(12), 15992–16007 (2015). [CrossRef]

**9. **J. Gu, Y. Hitomi, T. Mitsunaga, and S. Nayar, “Coded rolling shutter photography: Flexible space-time sampling,” in 2010 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2010), pp. 1–8.

**10. **Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in 2011 International Conference on Computer Vision, (IEEE, 2011), pp. 287–294.

**11. **Y. Sun, X. Yuan, and S. Pang, “Compressive high-speed stereo imaging,” Opt. Express **25**(15), 18182–18190 (2017). [CrossRef]

**12. **X. Yuan and S. Pang, “Compressive video microscope via structured illumination,” in 2016 IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1589–1593.

**13. **N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: Lensless imaging with rolling shutter,” in 2019 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2019), pp. 1–8.

**14. **D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**(4), 1289–1306 (2006). [CrossRef]

**15. **C.-K. Liang, L.-W. Chang, and H. H. Chen, “Analysis and compensation of rolling shutter effect,” IEEE Trans. on Image Process. **17**(8), 1323–1330 (2008). [CrossRef]

**16. **N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “Diffusercam: lensless single-exposure 3d imaging,” Optica **5**(1), 1–9 (2018). [CrossRef]

**17. **M. Pascucci, S. Ganesan, A. Tripathi, O. Katz, V. Emiliani, and M. Guillon, “Compressive three-dimensional super-resolution microscopy with speckle-saturated fluorescence excitation,” Nat. Commun. **10**(1), 1327 (2019). [CrossRef]

**18. **O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. **95**(13), 131110 (2009). [CrossRef]

**19. **S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, and T. Q. Nguyen, “An augmented lagrangian method for total variation video restoration,” IEEE Trans. on Image Process. **20**(11), 3097–3111 (2011). [CrossRef]

**20. **E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory **52**(12), 5406–5425 (2006). [CrossRef]

**21. **I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. **61**(20), 2328–2331 (1988). [CrossRef]

**22. **O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics **6**(8), 549–553 (2012). [CrossRef]

**23. **J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature **491**(7423), 232–234 (2012). [CrossRef]

**24. **M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. **1**(4), 586–597 (2007). [CrossRef]

**25. **S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, “Super-resolution and reconstruction of sparse sub-wavelength images,” Opt. Express **17**(26), 23920–23946 (2009). [CrossRef]

**26. **J. W. Goodman, * Speckle phenomena in optics: theory and applications* (Roberts and Company Publishers, 2007).

**27. **N. Bender, H. Yılmaz, Y. Bromberg, and H. Cao, “Customizing speckle intensity statistics,” Optica **5**(5), 595–600 (2018). [CrossRef]

**28. **B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics **7**(9), 746–751 (2013). [CrossRef]

**29. **M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, J. Hauser, S. Gurevitch, R. Malinsky, and A. Kagan, “Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser,” Appl. Opt. **55**(3), 432–443 (2016). [CrossRef]

**30. **S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica **4**(10), 1209–1213 (2017). [CrossRef]

**31. **E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, “Deepstorm3d: dense 3d localization microscopy and psf design by deep learning,” Nat. Methods **17**(7), 734–740 (2020). [CrossRef]

**32. **M. Kellman, M. Lustig, and L. Waller, “How to do physics-based learning,” arXiv preprint arXiv:2005.13531 (2020).

**33. **M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photonics **5**(3), 030801 (2020). [CrossRef]

**34. **Z. Wang, L. Spinoulas, K. He, L. Tian, O. Cossairt, A. K. Katsaggelos, and H. Chen, “Compressive holographic video,” Opt. Express **25**(1), 250–262 (2017). [CrossRef]

**35. **D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science **254**(5035), 1178–1181 (1991). [CrossRef]