We introduce an image upscaling method that reduces bit errors caused by Nyquist apertures. Nyquist apertures used for higher storage densities generate optical aberrations and degrade the quality of the image that is recorded on the medium. Here, to correct the bit errors caused by the Nyquist aperture, an image upscaling method is used to restore the degraded image in the enhanced spatial frequency domain using its point spread function (PSF) as a restoration filter. The proposed method reduces the bit error rate (BER) significantly and hence allows higher storage densities.
© 2011 OSA
1. Introduction
Since the start of volume recording on holographic media and data retrieval from holographic media by digital sensors, many research groups have focused on the technologies needed for high data transfer rates and high storage densities in holographic data storage. Generally, the Nyquist aperture is used to increase the recording density. However, because the aperture becomes smaller at higher storage densities, the bit error rate (BER) inevitably increases. Bit error minimization is therefore a key technology for holographic data storage systems that use small apertures. Several methods, including modulation codes and channel codes [6, 7], have been proposed to decrease the BER in the signal-processing step. Methodologies for the mitigation of pixel misregistration have also been reported [8–10].
In this paper, an image restoration method that minimizes the BER is introduced. While a small aperture size is maintained, the degraded image is resized for effective restoration and deconvolved in the enhanced spatial frequency domain. In this domain, the restoration filter can contain higher spatial frequencies and can hence be calculated more precisely from the point spread function (PSF) of the optical system. As a result, the restored image has a higher resolution and fewer bit errors than images that are not restored, because the PSF describes the exact degradation of the image. The restored image has an upscaled resolution because the proposed method restores the lost high-spatial-frequency components.
The BER is reduced from 0.0215 to 0.0018 for a Nyquist factor of 1.1 and from 0.0061 to 0.00014 for a Nyquist factor of 1.2, enhancements of almost 12 and 44 times, respectively. The storage density can be increased using the proposed image upscaling method because a smaller aperture can be used at the reduced BER. This enhancement is possible because we identified the original source of the bit errors and restored the lost high-frequency components of the image. Conventional approaches use iterative methods such as minimum mean square error (MMSE) estimation to find an optimum filter for the image; however, the MMSE solution is not an exact solution for the degraded image. We calculated the transfer function of the optical system, which carries its exact characteristics, and used it as the restoration filter for the degraded image. Simulation and experimental results for various aperture sizes are presented.
2. Research background
Like other optical disc drives, page-based holographic data storage systems need many optical and electrical devices. However, they require additional techniques to store more than a terabyte of data on a single 12 cm optical disc. This is because the page-based holographic data storage system records an image in a single hologram. The recording and reading of the hologram start with image encoding.
2.1 Image encoding
The conventional image reconstruction process is shown in Fig. 1. A spatial light modulator (SLM) is used to transfer the two-dimensional (2-D) image to the optics. The signal beam passes through the SLM and carries the 2-D data generated by the SLM. Because the SLM can generate only 1 or 0 at each pixel, each pixel of the user image, which holds 8-bit data, must be encoded. We used the 6:8 balanced modulation code as the recording code, encoding each serial 6 bits of the user data into 8 bits. Each encoded 8-bit word has four 1s and four 0s for effective distinguishability in the image signal processing. Our simulations and experiments used the 6:8 balanced modulation code as a reference encoding/decoding method to ensure the consistency of our results. We used an amplitude-type SLM. The resolution, pixel size, and contrast ratio of the SLM were 120 x 120, 36 μm, and over 100:1, respectively. The resolution and pixel size of the complementary metal–oxide–semiconductor (CMOS) sensor were 360 x 360 and 12 μm, respectively.
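The 6:8 balanced code described above can be sketched as follows. The paper does not specify its codeword assignment, so the symbol-to-codeword mapping below (the enumeration order of the 70 balanced 8-bit words, of which the first 64 are used) is a hypothetical choice for illustration:

```python
from itertools import combinations

def build_codebook():
    """All 8-bit words with exactly four 1s (there are C(8,4) = 70);
    the first 64 serve as codewords for the 6-bit user symbols.
    The ordering here is an assumption, not the paper's mapping."""
    words = []
    for ones in combinations(range(8), 4):
        word = [0] * 8
        for i in ones:
            word[i] = 1
        words.append(tuple(word))
    return words[:64]

CODEBOOK = build_codebook()
DECODE = {w: i for i, w in enumerate(CODEBOOK)}

def encode(symbol):
    # 6-bit user symbol (0..63) -> balanced 8-bit codeword
    return CODEBOOK[symbol]

def decode(word):
    # balanced 8-bit codeword -> 6-bit user symbol
    return DECODE[tuple(word)]
```

Every codeword has four 1s and four 0s, which is what makes the later thresholding step well posed.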
2.2 Hologram recording
In the recording process, the interference patterns of the signal and reference beams are recorded on the medium. By changing the angle of the reference beam, holograms can be recorded many times at the same place to achieve a high storage density. In addition to angle-multiplexed recording, the Nyquist aperture is placed in the Fourier plane of the lens to narrow the gap between holograms. The Nyquist aperture is a low-pass filter that blocks the high-spatial-frequency components of the signal beam. This allows the holograms to be recorded more closely without overlap or interference: the minimum spacing between adjacent holograms recorded on the medium equals the Nyquist aperture size. This means that the storage density can be increased by adjusting the Nyquist aperture. The size of the aperture (per dimension) is

D = m λf/p, (1)

where m is the Nyquist factor, λ is the wavelength, f is the focal length of the Fourier-transform lens, and p is the SLM pixel pitch.
2.3 Hologram reading
In the reading process, only the reference beam is used. The reconstructed beam emerges from the medium upon irradiation by the reference beam and forms an image at the CMOS sensor through optical components that introduce aberrations. In our simulations and experiments, a CMOS sensor with a pixel size of 12 μm was used. Because the pixel size of the SLM was 36 μm, the CMOS image oversampled the SLM image by a ratio of 3:1.
2.4 Digitizing and decoding of the image
Each pixel of the SLM was oversampled by 9 pixels (3 x 3) of the CMOS sensor. Our goal is to obtain at the CMOS sensor the same image as the original SLM image. In practice, however, we get degraded images due to optical aberrations, distortion, and the oversampling ratio, and additional errors are introduced when the image is digitized. The CMOS image is first resampled to the same size as the SLM image; each serial 8 pixels of the image are then digitized so that they contain four 1s and four 0s. Finally, the image is decoded from the digitized CMOS image by the 6:8 balanced modulation code.
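The resampling and balanced digitizing steps above can be sketched as follows, assuming a simple 3 x 3 block average for the 3:1 resampling and a brightest-four rule to enforce the four-ones/four-zeros constraint; the paper does not detail its exact rules:

```python
import numpy as np

def downsample_3to1(cmos):
    """Average each 3x3 block of CMOS pixels back onto the SLM grid
    (e.g. 360x360 -> 120x120 for the 3:1 oversampling ratio)."""
    h, w = cmos.shape
    return cmos.reshape(h // 3, 3, w // 3, 3).mean(axis=(1, 3))

def digitize_balanced(pixels):
    """Digitize 8 serial pixels: the four brightest become 1 and the
    rest 0, matching the balanced structure of the 6:8 code."""
    pixels = np.asarray(pixels)
    out = np.zeros(8, dtype=int)
    out[np.argsort(pixels)[-4:]] = 1
    return out
```

The brightest-four rule exploits the code's guarantee that exactly half the pixels in each 8-pixel group were recorded as 1s.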
2.5 Calculation of BER and SNR
The bit errors in our experimentally obtained images were mainly generated by the optics and the digitizing process. In addition, the laser beam was not perfectly uniform, and each pixel of the SLM had a different value. The lenses had aberrations, and the apertures generated severe aberrations. When we used smaller apertures for higher storage densities, the optical aberrations increased and the images degraded accordingly. The CMOS images, PSFs, and storage densities for the various apertures are shown in Fig. 2, and the optical configuration and experimental setup are shown in Fig. 3. The CMOS images were acquired by the sensor, and the PSFs were calculated from the optics used in our experiment. We used transmission images to exclude the effects of the medium. The shapes of the PSFs explain the image degradation, because the CMOS image is a convolution of the PSF and the SLM image. We calculated the errors as the BER by comparing the SLM and digitized images. The signal-to-noise ratio (SNR) was calculated by comparing the CMOS and digitized images. The BER and SNR were defined in Eq. (2) and Eq. (3), respectively; the values are shown in Table 1. The smaller apertures (smaller Nyquist factors) had larger BERs and smaller SNRs.
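Since Eqs. (2) and (3) are not reproduced here, the sketch below uses the stated comparisons (SLM vs. digitized image for the BER, CMOS vs. digitized image for the SNR) with a common SNR definition based on the on/off intensity statistics; the paper's exact formula may differ:

```python
import numpy as np

def ber(slm_bits, detected_bits):
    """Fraction of detected bits that differ from the recorded SLM bits."""
    slm_bits = np.asarray(slm_bits)
    detected_bits = np.asarray(detected_bits)
    return np.mean(slm_bits != detected_bits)

def snr(cmos, detected_bits):
    """SNR from the intensity statistics of the on/off pixel classes,
    a common definition: (mu1 - mu0) / sqrt(sigma1^2 + sigma0^2).
    This is an assumed form, not necessarily the paper's Eq. (3)."""
    cmos = np.asarray(cmos, dtype=float)
    detected_bits = np.asarray(detected_bits)
    on = cmos[detected_bits == 1]
    off = cmos[detected_bits == 0]
    return (on.mean() - off.mean()) / np.sqrt(on.var() + off.var())
```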
3. Image restoration and upscaling
Our study shows that the bit errors caused by the optics and apertures can be reduced by the proposed image restoration and upscaling process (Fig. 4). The degraded CMOS image can be enhanced by the image restoration because we know the degradation function as a PSF. The CMOS image is a 2-D convolution of the SLM image and the PSF, and the restoration process is a deconvolution of the CMOS image and the PSF. The restored image has enhanced resolution and produces fewer bit errors compared to images that are not restored and are conventionally processed.
The CMOS image produced by our experimental setup was restored toward the original image generated by the SLM. The CMOS image could have been restored at the same resolution (360 x 360). Here, however, we applied the upscaling concept to restore the lost higher-spatial-frequency components of the image. The CMOS image was resized to a higher pixel resolution (720 x 720), and the resized image was restored in the enhanced spatial frequency domain using the more precisely calculated restoration filter. The image resizing itself was nothing but a magnification and subsequent reduction of the image size; it involved no complex physics and no enhancement of image quality. The image quality was enhanced in the restoration step, i.e., the deconvolution of the resized image with the fine restoration filter. Finally, the restored image was digitized and decoded to recover the user data.
3.1 Image resizing
Our two key concepts are (i) to use a restoration filter with a small grid size so that the filter more closely matches the original PSF and (ii) to restore the lost high-spatial-frequency components of the image in the enhanced spatial frequency domain. Both concepts are realized by resizing the CMOS image.
To decrease the grid size of the restoration filter, the pixel size of the image should also be decreased without changing the actual CMOS sensor. The pixel size of the image can be reduced by resizing the image. Image resizing itself does not enhance the quality of the image, but it lets us calculate a finer restoration filter that contains much more optical information. The upscaling of the image resolution and the reduction of the BER are more effective when a smaller grid size is used for the restoration filter. Simulation and experimental results show that the information lost in the optics can be restored using a restoration filter with a smaller grid size. Another contribution to be considered here is detector convolution during the calculation of the restoration filter. Detector convolution occurs because the grid size of the restoration filter must be matched to the pixel size of the image sensor. The original PSF is calculated with a grid size of less than 1 μm, whereas restoration filters are calculated with a grid size equal to the pixel size (12 μm) of the image sensor. The calculated restoration filters therefore show the effect of detector convolution.
The SLM image, of 120 x 120 resolution with a 36 μm pixel size, passed through the lenses and was focused on the CMOS sensor, which had a resolution of 360 x 360 and a 12 μm pixel size. The CMOS image was blurred by the optics and apertures: the PSF of the optical system was degraded by the optics, and the CMOS image was in turn blurred by that PSF. To restore the CMOS image, we needed to know the PSF that blurred it; the restoration filter was then calculated from the PSF. However, the pixel size of the CMOS sensor was too coarse for the PSF to serve as an effective restoration filter, because the grid size of the restoration filter was matched to the pixel size of the CMOS sensor. In theory, if the original PSF with a small grid size could be used as the restoration filter, the image restoration would be more effective, but this would require an equally small pixel size at the CMOS sensor. Because the pixel size of the CMOS sensor was too large for such a fine calculation, our idea was to resize the CMOS image so that the grid size of the restoration filter could be smaller, allowing the filter to describe the optics more accurately. The resized image and the restoration filter showed enhanced resolution and higher-spatial-frequency components, as shown in Fig. 5.
As shown in Fig. 6, additional high-spatial-frequency components could be included in the fast-Fourier-transformed image by 2x resizing of the image in the spatial domain, because the resizing extended the spatial frequency range of the Fourier transform. The image with the additional high-frequency components could then be restored by the proposed image upscaling method using the precisely calculated restoration filter.
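The frequency-range argument above can be checked numerically: a 2x-resized image has twice as many samples over the same physical field, so its FFT grid extends to twice the original maximum spatial frequency. The 360/720 sizes follow the setup in this paper:

```python
import numpy as np

# Frequency grids in cycles per original CMOS pixel. The 2x-resized
# image has half the sample pitch, so its FFT grid reaches twice as
# far, leaving headroom for the restored high-frequency components.
n = 360
freqs_orig = np.fft.fftfreq(n, d=1.0)      # original grid, pitch = 1 pixel
freqs_up = np.fft.fftfreq(2 * n, d=0.5)    # 2x resize: half the pitch
print(freqs_orig.max(), freqs_up.max())    # ~0.5 vs ~1.0 cycles/pixel
```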
3.2 Image restoration
To allow a high data transfer rate, we restored the image in the spatial frequency domain. The image restoration was a deconvolution of the CMOS image and the PSF. We restored the CMOS image using constrained least-squares filtering (CLSF), which minimizes noise amplification during the restoration while enhancing the resolution of the image; resolution and noise amplification have a trade-off relation in image restoration. The amount of noise amplification can be reduced by changing the value of γ in Eq. (4). The frequency-domain solution of CLSF is

F̂(u,v) = [H*(u,v) / (|H(u,v)|² + γ|P(u,v)|²)] G(u,v), (4)

where G is the transform of the degraded image, H is the transform of the PSF, and P is the transform of the Laplacian operator used as the smoothness constraint.
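CLSF in this textbook form [12] can be sketched as below; the kernel centering, the discrete Laplacian, and the γ value are illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

def pad_center(k, shape):
    """Zero-pad a small kernel to the full image size, centered."""
    out = np.zeros(shape)
    r0 = shape[0] // 2 - k.shape[0] // 2
    c0 = shape[1] // 2 - k.shape[1] // 2
    out[r0:r0 + k.shape[0], c0:c0 + k.shape[1]] = k
    return out

def clsf_restore(g, psf, gamma=0.01):
    """Constrained least-squares filtering:
    F_hat = conj(H) / (|H|^2 + gamma*|P|^2) * G,
    where H is the transform of the PSF, G that of the degraded
    image g, and P that of the Laplacian smoothness constraint."""
    H = np.fft.fft2(np.fft.ifftshift(pad_center(psf, g.shape)))
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    P = np.fft.fft2(np.fft.ifftshift(pad_center(lap, g.shape)))
    G = np.fft.fft2(g)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2) * G
    return np.real(np.fft.ifft2(F_hat))
```

Larger γ suppresses noise amplification at the cost of resolution; γ → 0 approaches inverse filtering.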
A schematic diagram of the image restoration and image upscaling is shown in Fig. 7. We calculated the fast Fourier transform (FFT) of the CMOS image and of the deconvolution filter, performed CLSF in the spatial frequency domain, and obtained the restored image by an inverse FFT (iFFT). At read time, the proposed method needs only the FFT of the CMOS image: because the deconvolution filter derived from the PSF is fixed for a given optical configuration, the filter in its CLSF form can be stored in read-only memory (ROM), and only the elementwise product of the 2-D spectra and the iFFT are computed during the restoration. Our method needs additional memory and computation time; however, there is a trade-off between the transfer rate and the storage density. This paper focuses on achieving a higher storage density by reducing the BER, and the proposed method lets us store more data on the medium.
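The read-time pipeline described above can be sketched as a precomputed frequency-domain filter applied once per page:

```python
import numpy as np

def precompute_clsf_filter(H, P, gamma):
    """Computed once offline (and stored, e.g. in ROM) for a fixed
    optical configuration: the CLSF filter in the frequency domain.
    H and P are the transforms of the PSF and the Laplacian constraint."""
    return np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)

def restore_page(g, W):
    """Per data page at read time: one FFT, one elementwise product
    with the stored filter W, and one inverse FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(g) * W))
```

Splitting the work this way moves the filter computation out of the read path, which is the trade of memory for transfer rate mentioned above.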
3.3 Image upscaling
For more effective image restoration, we resized the CMOS image (360 x 360 → 720 x 720) by cubic interpolation and recalculated the deconvolution filter to capture more exact information about the optical system. The nearest-neighbor, bilinear, and cubic interpolation methods were tested for the resizing; the best result was obtained with cubic interpolation for both the magnification and the demagnification. The recalculated restoration filter for the resized image was closer to the original PSF (Fig. 5) than the filter for the unresized image. Without image resizing, only the low-frequency components of the image could be restored. With resizing, the restoration filter can be calculated to include higher-frequency components, and the image can be restored in the enhanced high-frequency region. Hence the image upscaling process enhances both the resolution and the quality of the image.
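The magnify-restore-demagnify pipeline can be sketched as below, assuming SciPy's `zoom` for the cubic interpolation (the paper's exact resampling implementation is not specified) and with `restore_fn` standing in for the deconvolution on the fine grid:

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_restore_downscale(cmos, restore_fn):
    """Sketch of the upscaling pipeline: 2x cubic magnification
    (e.g. 360x360 -> 720x720), restoration on the fine grid with a
    filter computed for that grid, then cubic reduction back to the
    original size. restore_fn is a placeholder for the CLSF step."""
    fine = zoom(cmos, 2, order=3)        # cubic interpolation, 2x up
    restored = restore_fn(fine)
    return zoom(restored, 0.5, order=3)  # cubic interpolation, 2x down
```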
3.4 Simulation results
We simulated the CMOS image using commercial optical software (CODE V). After the PSFs for each Nyquist aperture were calculated, the convolutions of the PSFs and the SLM images were simulated. Because the simulated image contains only the optical aberrations and distortions caused by the lenses and apertures, we added extrinsic noise. The simulated images at each step are shown in Fig. 8. The error pixels were calculated by comparing the SLM and digitized images. The resized and restored image showed enhanced resolution and contrast. Upscaling and restoration are additional steps, but they reduced the number of error pixels.
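The degradation model used in the simulation can be sketched as a convolution with the PSF plus additive noise; here the PSF would come from the CODE V calculation, and the Gaussian noise statistics are an illustrative assumption:

```python
import numpy as np

def simulate_cmos(slm, psf, noise_sigma=0.02, seed=0):
    """Simulated CMOS page: 2-D convolution of the SLM image with the
    aperture-limited PSF (done via FFT) plus additive extrinsic noise.
    The Gaussian noise model and sigma are assumptions for this sketch."""
    H = np.fft.fft2(psf, s=slm.shape)
    g = np.real(np.fft.ifft2(np.fft.fft2(slm) * H))
    rng = np.random.default_rng(seed)
    return g + rng.normal(0.0, noise_sigma, slm.shape)
```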
The BERs were reduced for all the Nyquist factors by the image restoration method, as shown in Fig. 9 . The upscaling method also demonstrated that the BERs could be minimized. The increased SNR shows the performance of the image restoration method. Our results indicate that the upscaling method could be very effective for page-based holographic data storage.
3.5 Experimental results
We assume that the optics, SLM, and CMOS sensor are the same in systems using the conventional method and those using the proposed method. The storage density is increased by recording more holograms on the medium. The size of the hologram can be reduced by using a smaller Nyquist aperture; however, the BER is increased due to the aberrations caused by the smaller aperture. The level of the BER should be maintained for the smaller Nyquist factor to increase the storage density. The main idea in this paper is to keep or reduce the BER for smaller-sized apertures. If we could use smaller apertures with a lower level of BER, then the storage density could be increased.
Figure 10 shows the experimental images at each step. The CMOS image was resized and restored to have fewer bit errors using the proposed method. The BER and SNR are shown in Fig. 11 . For a Nyquist factor of 1.1, the BER was reduced from 0.0215 to 0.0018, an enhancement of nearly a factor of 12. For a Nyquist factor of 1.2, the BER was reduced by a factor of 44, from 0.0061 to 0.00014, and there were no bit errors for Nyquist factors over 1.3. The SNRs were nearly doubled for all Nyquist factors.
The experimental results show that the image upscaling method is very effective for page-based holographic data storage systems. Using the proposed image upscaling method and smaller apertures, the storage density can be increased because we can narrow the gap between adjacent holograms recorded on the medium with the same optics, SLM, and CMOS sensors. Optically generated errors are calculated as PSFs, and the restoration filters are then calculated from the PSFs. To restore high-frequency components of the image, a restoration filter is calculated in the enhanced high-frequency domain with a grid size that matches the pixel size of the resized image. Bit errors generated by the lenses and apertures are effectively eliminated by the proposed image upscaling method.
In this research, we achieved a reduced BER without varying the Nyquist aperture size by using the proposed image upscaling method. The bit errors caused by the optics and Nyquist apertures were compensated through image restoration with the PSF in the enhanced frequency domain. This was possible because we knew that the errors originated from the aberrations of the optics. Our results show that the storage density of holographic data storage systems can be increased without changing the optical configuration or the image sensors.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011-0000023).
References and links
1. P. J. van Heerden, “Theory of optical information storage in solids,” Appl. Opt. 2(4), 393–400 (1963). [CrossRef]
3. V. Vadde, B. V. K. Vijaya Kumar, G. W. Burr, H. Coufal, J. A. Hoffnagle, and C. M. Jefferson, “A figure-of-merit for the optical aperture used in digital volume holographic data storage,” in Optical Data Storage ’98, S. Kubota, T. D. Milster, and P. J. Wehrenberg, eds., Proc. SPIE 3401, 194–200 (1998).
5. G. W. Burr, J. Ashley, H. Coufal, R. K. Grygier, J. A. Hoffnagle, C. M. Jefferson, and B. Marcus, “Modulation coding for pixel-matched holographic data storage,” Opt. Lett. 22(9), 639–641 (1997). [CrossRef] [PubMed]
6. J. F. Heanue, M. C. Bashaw, and L. Hesselink, “Channel codes for digital holographic data storage,” J. Opt. Soc. Am. A 12(11), 2432–2439 (1995). [CrossRef]
7. V. Vadde and B. V. Kumar, “Channel modeling and estimation for intrapage equalization in pixel-matched volume holographic data storage,” Appl. Opt. 38(20), 4374–4386 (1999). [CrossRef]
8. G. W. Burr, “Holographic data storage with arbitrarily misaligned data pages,” Opt. Lett. 27(7), 542–544 (2002). [CrossRef]
10. C. Y. Chen, C. C. Fu, and T. D. Chiueh, “Low-complexity pixel detection for images with misalignment and interpixel interference in holographic data storage,” Appl. Opt. 47(36), 6784–6795 (2008). [CrossRef] [PubMed]
12. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, 2002), Chap. 5.