Size and shape recognition using measurement statistics and random 3D reference structures

Open Access

Abstract

Three-dimensional (3D) reference structures segment source spaces based on whether particular source locations are visible or invisible to the sensor. A lensless imaging system based on a 3D reference structure measures projections of this source space on a sensor array. We derive and experimentally verify a model that predicts the statistics of the measured projections for a simple 2D object. We show that the statistics of the measurement can yield an accurate estimate of the size of the object without ever forming a physical image. Further, we conjecture that the measured statistics can be used to determine the shape of 3D objects and present preliminary experimental measurements for 3D shape recognition.

©2003 Optical Society of America

1. Introduction

Rapid advances in digital processing and focal planes, together with the availability of ample, ubiquitous computing power, have led to the growing popularity of integrated computational imaging systems. Integrated computational imaging systems are a new class of imaging systems that integrate optical and electronic processing to achieve new functionalities [1, 2]. Some of these systems use nonconventional optical elements to preprocess the field for digital analysis [3, 4]. These nonconventional optical elements can perform a wide range of transformations that can be used to implement complicated multidimensional/multi-spectral mappings of a source space into a measurement space. The nonconventional optical elements used include coded apertures [7, 8, 9, 10], 3D reference structures [11, 12, 13, 14], and volume holograms [16, 17].

A computational sensor system consists of a mapping from a source state $s$ to a measurement state $m$, together with an inverse mapping from $m$ back to estimated source parameters. The source state is embedded in an object space and determines the measurement state via optical radiation. Historically, sensor systems have been designed to form an isomorphism between the object space and the sensor embedding space. This is the case, for example, with the mapping from an object plane to an image plane via lenses. More recently, tomographic and interferometric systems have been emphasized. In these systems, the measurement state still samples a continuous space; however, this measurement space represents a global transformation of the object space, such as a Radon or Fourier transformation.

This paper discusses an imaging system based on the principle of reference structure tomography (RST). An RST-based system consists of a 3D reference structure placed between the source space and the sensor space. The reference structure is a volume that is partially filled with small opaque obscurants. These opaque obscurants segment the source space in front of the reference structure into regions that are either visible or invisible to a sensor behind the reference structure. Thus, any measurement acquired by the sensor is a visibility-modulated projection of the source space. RST and related systems implement algebraic transformations of the source with no simple measurement embedding space. The potential advantage of the RST approach is that one can implement data-efficient and computationally efficient sensing directly in the physical layer. An example of such efficiency can be found in [13]. This paper explores a further generalization of the concept of efficient measurement. We specifically consider the statistics of the reference-structure-modulated intensity. We envision a sensor in which detector statistics could be read directly from a mixed-mode focal plane array by manipulating the hardware circuitry. These statistics would themselves constitute the system's measurement, from which object attributes are determined. We show that a few statistical measures allow us to estimate the size and shape of objects using O(1) values, thus reducing the system's computational and data load. To this end, in section 2 we derive a theoretical model that relates the statistics of the measurement to the size of a simple 2D object, and we present experimental verification of the model in section 3.

Potuluri et al. [14] have demonstrated imaging based on RST. If the region of interest in the source space is restricted to a small volume, it is possible to efficiently acquire measurements throughout the region of interest at a predefined sampling frequency, thus dividing the region into voxels. An unknown object in this region can be considered a linear superposition of these voxels, and the resulting measurement would likewise be a superposition of a few or all of the previously acquired measurements. If the source space is itself sparse, it is possible to efficiently invert the resulting measurement to obtain the true source structure, as sketched in the code below. However, this method scales poorly for objects with complex shapes and becomes computationally expensive. In section 4, we conjecture that the statistics of the measurements (acquired from multiple perspectives) are an adequate representation of the shape of an object and present some preliminary theoretical considerations and experimental results. Finally, we conclude in section 5 with some suggestions for future work on statistical shape recognition.
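As a concrete illustration of this voxel-superposition model, the following minimal sketch (not the authors' code; the matrix size, fill fraction, and the least-squares inversion are illustrative assumptions) builds a random binary visibility matrix $V$, forms measurements $m = Vs$ for a sparse source $s$, and inverts them:

```python
# Minimal sketch of the discrete RST measurement model m = V s and its
# inversion. Sizes, fill fraction, and the seed are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_sensors, n_voxels = 64, 32                    # assumed, overdetermined system
V = (rng.random((n_sensors, n_voxels)) < 0.5).astype(float)  # binary visibility

s_true = np.zeros(n_voxels)
s_true[rng.choice(n_voxels, size=3, replace=False)] = 1.0    # sparse source

m = V @ s_true                                  # visibility-modulated projections

# Least-squares inversion; adequate only while V is well conditioned, which is
# the regime in which the voxel-superposition approach scales acceptably.
s_hat, *_ = np.linalg.lstsq(V, m, rcond=None)
print(np.allclose(s_hat, s_true, atol=1e-8))    # True
```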

2. Size recognition using measurement statistics

Consider Fig. 1 as a representative scheme for reference structure tomography. The obscurants within the reference structure are assumed to be perfectly opaque. Hence, the obscurants inside the reference structure segment the source space into regions of varying visibility, as shown in Fig. 1. Thus, for each pair of source and measurement points at $\mathbf{r}$ and $\mathbf{r}_m$ respectively, one can associate a visibility function $v(\mathbf{r}_m, \mathbf{r})$. For opaque obscurants, the visibility function is binary-valued depending on whether the source located at $\mathbf{r}$ is visible or invisible to the measurement point $\mathbf{r}_m$. Thus, the visibility function imposed by the reference structure modulates the measurement by segmenting the source. For a measurement point $\mathbf{r}_m$, the value of the measurement $m$ can be obtained by integrating over this segmented source space

Fig. 1. Obscurants distributed within the reference structure volume segment the source space based on whether a region is visible to a sensor.

$$m = \int v(\mathbf{r}_m, \mathbf{r}) \, s(\mathbf{r}) \, d\mathbf{r}. \tag{1}$$

In (1), $s(\mathbf{r})$ is the density function of sources over the source space. We can discretize (1) by assuming that the source space is segmented into non-overlapping cells [14]. Now, the integral in (1) can be replaced by a summation over all the cells in the source space to yield

$$m = \sum_i v_{m,i} \, s_i. \tag{2}$$

In (2), $i$ is a dummy variable that spans all the possible source cells and $v_{m,i}$ represents the visibility of source $s_i$ to the measurement point $\mathbf{r}_m$. If there are multiple measurement points $j = 1 \ldots M$ available, (2) can be written as

$$m_j = \sum_i v_{j,i} \, s_i. \tag{3}$$

In (3), $v_{j,i}$ represents the visibility of the $i$th source cell to the $j$th measurement point. The above analysis is applicable for a deterministic reference structure for which the exact locations of the obscurants are known [13]. On the other hand, if the reference structure contains randomly distributed obscurants, it is difficult and often computationally expensive to ascertain $v_{j,i}$ for every source-sensor pair. Instead, we can efficiently determine the nature of the object based on the statistics of the measurements. We associate with each source cell and measurement pair a probability $p_{m,i}$ that the $i$th source is visible to the sensor located at $\mathbf{r}_m$. The expected value of the measurement, $\langle m \rangle$, is then given by

$$\langle m \rangle = \sum_i \bigl( 1 \times p_{m,i} \, s_i + 0 \times (1 - p_{m,i}) \, s_i \bigr). \tag{4}$$

In other words, the expected value of the visibility is given as

$$\langle v_{m,i} \rangle = 1 \times p_{m,i} + 0 \times (1 - p_{m,i}). \tag{5}$$

Equation (4) can now be written as

$$\langle m \rangle = \sum_i \langle v_{m,i} \rangle \, s_i = \sum_i p_{m,i} \, s_i. \tag{6}$$

In (6), $p_{m,i}$ can be determined from the nature of the reference structure. We will assume that the obscurants are uniformly and randomly distributed within the reference structure. Consider a reference structure that occupies a volume $V$, with small obscurants of average volume $v$ dispersed throughout the entire reference structure volume. Hence, the maximum number of obscurants $N$ that can be accommodated within the reference structure volume is given by

Fig. 2. Schematic to determine the probability of visibility of a source cell to a sensor.

$$N = \frac{V}{v}. \tag{7}$$

However, only a small fraction $\psi$ of the entire volume is actually filled with the obscurants. Thus, the number of obscurants present in the reference structure is $\psi N$. For the purposes of this paper, we have used $\psi = 0.1$ for both theory and experiment. Now consider Fig. 2. The space enclosed by the lines joining the edges of source cell (1) to the sensor point intercepts a certain volume $\sigma$ of the reference structure. The number of obscurants that could possibly be accommodated within this volume is

$$K = \frac{\sigma}{v}. \tag{8}$$

Now, the probability $p_{m,(1)}$ can be determined as the answer to the following question: given that only $\psi N$ of the $N$ available locations within the reference structure actually contain obscurants, what is the probability that $K$ randomly selected locations contain no obscurants? This probability can easily be determined for each source cell and sensor pair, in a manner similar to computing probabilities for a Bernoulli distribution. However, it is not necessary to do this. From the question posed above, it is obvious that the probability changes only if the value of $\psi$ or $K$ changes. $\psi$ is fixed for an imaging system, and from (8) we know that $K$ depends only on $\sigma$. Thus, if $\sigma$ remains constant, the probability also stays unchanged. This observation implies that closely spaced source cells will have approximately equal probabilities of being visible to the same sensor. For a random reference structure, we define the uniform field of view (UFOV) of a single sensor as the angular range over which $\sigma$ changes by less than 1%. Based on this, we calculate the UFOV of a single sensor to be $\phi \approx 16.14°$ for $\psi = 0.1$. For an object located within the UFOV of the sensor, (6) can again be simplified by assuming that $p_{m,i}$ remains constant over the object to yield

$$\langle m \rangle = p_m \sum_i s_i. \tag{9}$$
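Returning to the counting question posed above: the number of obscurants among the $K$ intercepted sites follows a hypergeometric distribution, and the visibility probability is its zero-occupancy term, $p = \binom{N - \psi N}{K} / \binom{N}{K}$. A short numerical sketch (the values of $N$ and $K$ are illustrative, not those of the experiment):

```python
# Visibility probability for one source-sensor pair: with psi*N of N sites
# occupied, the chance that K randomly chosen sites are all empty is the
# zero term of a hypergeometric distribution. N and K here are illustrative.
from math import comb

def p_visible(N: int, K: int, psi: float) -> float:
    """Probability that none of K sampled sites contains an obscurant."""
    occupied = round(psi * N)
    return comb(N - occupied, K) / comb(N, K)

N = 50_000                    # assumed number of obscurant sites, N = V / v
for K in (10, 100, 1000):     # sites intercepted by the source-sensor cone
    # For N >> K this approaches (1 - psi)**K, the Bernoulli-like form
    # suggested in the text.
    print(K, p_visible(N, K, psi=0.1), (1 - 0.1) ** K)
```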

We would like to point out that the true field of view of the sensor can exceed the UFOV; however, (9) would not hold outside the UFOV. This suggests that it might be possible to measure object shapes based on the joint statistics of the measurements throughout the entire field of view of the sensor. The details of this are beyond the scope of this paper, and we will focus on objects that lie within the UFOV. Equation (9) states that the expected value of a measurement depends on the source distribution and intensity. This result is hardly surprising and is observed in almost all imaging systems. However, an analysis of the higher order moments of the measurements is more interesting. Proceeding in a manner similar to (9), we obtain for the $n$th order statistics of the measurement

$$\langle m^n \rangle = p_m^n \left( \sum_i s_i \right)^{\! n}. \tag{10}$$

In (10), $\langle m^n \rangle$ is the $n$th moment of the measurement located at $\mathbf{r}_m$. Consider a continuous, self-luminous 2D object; if the luminous flux $\Phi$ of the object is constant, the intensity emitted by the object is a function of only the object's area $A$, i.e., $(\sum_i s_i)^n \propto A^n$, yielding

$$\langle m^n \rangle = c_n A^n. \tag{11}$$

In (11), $c_n$ is a proportionality constant. In other words, the $n$th order statistic of the measurement varies directly as the $n$th power of the object area for a simple 2D object. This analysis also applies to a system in which the visibility of the individual sensors is unobscured (i.e., without the reference structure between the source and the sensors). However, in this case $p_m = 1$ and the variance $\langle m^2 \rangle - \langle m \rangle^2 = 0$. This scaling of $\langle m^n \rangle$ with $A^n$ differs substantially from focal systems, for which $\langle m^n \rangle$ is linear in $A$ or can be derived for specific illumination cases [18]. By determining $p_m$, a reference structure imposes certain statistics on the image intensity. This report experimentally demonstrates the dependence of the moments and variance of $m$ on $A$ for reference structure based sensors.
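As a sanity check on (11), the scaling can be reproduced with a toy Monte Carlo model in which a single Bernoulli visibility of probability $p$ gates the whole object, consistent with the constant-$p_m$ assumption inside the UFOV. The probability, the areas, and the resulting constants $c_n$ below are illustrative, not the experimental values:

```python
# Toy Monte Carlo check of the scaling <m^n> ~ A^n from (11). A single
# Bernoulli visibility of probability p gates an object of area A; p and
# the areas are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
p, trials = 0.6, 200_000

for A in (1.0, 2.0, 4.0):
    v = rng.random(trials) < p          # Bernoulli visibility of the object
    m = v * A                           # measurement ~ visible area
    print(A, [float(np.mean(m**n)) for n in (1, 2, 3)])
# Doubling A doubles <m>, quadruples <m^2>, and multiplies <m^3> by 8,
# i.e., each moment scales as the corresponding power of the area.
```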

3. Experimental verification of size recognition using reference structure tomography

Fig. 3. Experimental setup for verifying RST based size recognition.

Fig. 4. Lensless measurements obtained on the CCD (a) without reference structure, (b) with reference structure.

To experimentally verify the predictions of (11), we would require a large ensemble of random reference structures and a single sensor. A large number of measurements acquired using the entire ensemble of random reference structures would yield the probability density function (PDF) of the measurement, which in turn would be analyzed to calculate the statistical moments. However, this is an extremely time-consuming process. Instead, we again take advantage of the fact that the probability of visibility depends only on $\sigma$. However, instead of looking back at the source space from the sensor, we look forward from the source space into the measurement space. In a method similar to that described in section 2, we can calculate the UFOV of each source cell over which the probability of visibility does not change. Now, if we place multiple sensors within this UFOV, we can acquire multiple measurements in one shot without sampling over an ensemble of many random reference structures. In other words, the measurement process is ergodic: an expectation over an ensemble of reference structures is the same as an expectation over an ensemble of many sensors, provided the sensors lie within the UFOV of the source object.

Based on the above discussion, we used an experimental setup similar to the schematic shown in Fig. 3. The object used was the output of an incandescent fiber light source. An iris was placed in front of the end of the fiber to alter the diameter of the object and thus change its size. The object was placed at a distance of 61 cm from the CCD camera. The camera was a 1320×1040 pixel cooled scientific camera manufactured by Roper Scientific Instruments. The pixels of the camera provided the sensors to acquire the multiple measurements. The random reference structure was created by printing opaque dots on a slide transparency with a 10% fill factor. It was made 3D by stacking several layers of transparency on top of each other. The experimental reference structure had opaque dots of average diameter 100 µm, with 16 layers, each 100 µm thick, stacked on top of each other. The reference structure was located 2 cm away from the pixel array of the camera. We took measurements for objects of different sizes both with and without a reference structure present. Figure 4 shows the measurements on the CCD with and without a reference structure. Notice that the measurement with the reference structure has regions of high and low brightness corresponding to the visibility imposed by the obscurants, whereas the measurement without the reference structure shows no such structure.
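Under this ergodic argument, sample moments taken across the CCD pixels that lie within the object's UFOV stand in for moments over an ensemble of reference structures. A minimal sketch of that estimation step, with a synthetic array standing in for a measured frame:

```python
# Ergodic moment estimation: sample moments across the pixels of one frame
# replace an ensemble average over many random reference structures. The
# Poisson `frame` below is a synthetic stand-in for a measured CCD image;
# a real analysis would restrict to pixels inside the object's UFOV.
import numpy as np

rng = np.random.default_rng(2)
frame = rng.poisson(lam=40.0, size=(1040, 1320)).astype(float)

pixels = frame.ravel()
raw_moments = {n: float(np.mean(pixels**n)) for n in (1, 2, 3, 4)}
variance = raw_moments[2] - raw_moments[1] ** 2
print(raw_moments[1], variance)
```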

Fig. 5. Histograms of different objects (a) without and (b) with reference structure.

Fig. 6. Statistical moments plotted versus object area: (a) $\langle m \rangle$, (b) $\langle m^2 \rangle$, (c) $\langle m^3 \rangle$, (d) $\langle m^4 \rangle$.

We took readings for 29 objects of different sizes and binned all the measurements into 100 bins. The resulting histograms are a representation of the PDF of the measurements corresponding to the object of that particular size. Figure 5 shows the histograms for the objects both with and without a reference structure. Notice that Fig. 5(a) shows just 15 of the histograms (every alternate histogram has been plotted) because all 29 would appear too crowded. We note that there is no discernible change in the variance, skewness, or kurtosis of the distributions; the only change observed is an increase in the mean intensity with object size. Figure 5(b) shows the histograms for all 29 objects with exactly the same arrangement but with a random reference structure placed in front of the CCD. Note that the statistics, i.e., the shape of the histogram, change as the object size changes. Figure 6 shows the higher moments of the distributions as a function of the object areas. From the figure, we see that each moment fits well with the polynomial power corresponding to that particular moment. Further, Fig. 7 is a plot of the normalized measurement variance versus the 2D object area for the RST system. We see that the variance increases quadratically for the RST system. The higher central moments (skewness, kurtosis, etc.) show similar behavior; hence, they can be used to obtain an accurate representation of the object size, consistent with the theoretical predictions.
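One way to quantify how well each moment fits its polynomial power is a log-log regression: if $\langle m^n \rangle = c_n A^n$, the fitted slope of $\log \langle m^n \rangle$ against $\log A$ should be close to $n$. The sketch below uses synthetic placeholder data in place of the 29 measured objects:

```python
# Log-log fit of a moment against area: under <m^n> = c_n * A^n the slope
# should recover n. The areas, constant, and noise level are placeholders
# for the measured data of the 29 objects.
import numpy as np

rng = np.random.default_rng(3)
areas = np.linspace(1.0, 10.0, 29)                           # placeholder areas
m2 = 0.35 * areas**2 * (1 + 0.02 * rng.standard_normal(29))  # noisy <m^2>

slope, intercept = np.polyfit(np.log(areas), np.log(m2), 1)
print(f"fitted exponent for <m^2>: {slope:.2f}")             # close to 2
```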

Fig. 7. The variance of the measurements of an RST system scales quadratically with the area of the 2D object.

Fig. 8. Measurement statistics for three different object locations.

Finally, Fig. 8 shows the statistics for the object placed at three different spatial locations: on axis and 65 cm from the CCD, 5 cm off axis and 61 cm from the CCD, and 5 cm off axis and 65 cm from the CCD. The fact that the statistics appear to be unaffected by small changes in object location is also consistent with our theory.

4. Shape recognition using measurement statistics

Fig. 9. The statistical RST system measures the projected size of the object in different directions.

Fig. 10. The measured projections can be used to reconstruct the convex hull of the object of interest.

From the preceding discussion, we know that the moments of the measurements can be used to determine the size of a 2D object of uniform brightness, as discussed in (11). We now present some preliminary analysis on using the statistics of the measurements to recognize 3D object shapes. We conjecture that several statistical size measurements of a 3D object acquired in different directions can be used to reconstruct the convex hull of the 3D object.

Fig. 11. Shapes used and their projections at $\theta = 0°$.

Consider Fig. 9: the RST-based statistical measurement system measures the size of the projections of the object in several directions. Thus, the statistical RST system yields a measure of an object consisting of the size of the projection as a function of direction, i.e., we obtain a function $A = A(\theta)$ corresponding to the extent of the object in the direction specified by $\theta$. Based on this information $A(\theta)$, it is possible to obtain the convex hull of the object shape by using the shadow-backprojection algorithm described below. For simplicity, we discuss reconstruction of a 2D convex hull based on line projections; however, the approach is identical for reconstructing a 3D shape using area projections. The 2D statistical RST system returns a function $l(\theta)$ that describes the length of the projection of the object for a particular direction $\theta$. To reconstruct the 2D object, we start with any two projections, say $l(\theta = 0)$ and $l(\theta = \pi/2)$. Now, based on the geometry of the projections, the object is constrained to lie inside the region specified by

$$x \geq 0, \tag{12}$$
$$x \leq l\!\left(\frac{\pi}{2}\right), \tag{13}$$
$$y \geq 0, \tag{14}$$
$$y \leq l(0). \tag{15}$$

Now consider an arbitrary projection angle $\theta$ with projection length $l(\theta)$. We can show that the innermost allowable edges of the 2D object inclined at $\theta$ are constrained by the inequalities

$$y - x\tan\theta - l(\theta)\sec\theta + l\!\left(\frac{\pi}{2}\right)\tan\theta \leq 0, \tag{16}$$
$$y - x\tan\theta + l(\theta)\sec\theta - l(0) \geq 0. \tag{17}$$

Thus, each projection specifies a forbidden region within the rectangle defined by (12)–(15). A superposition of all these regions for a large number of projections specifies the convex hull of the object of interest, as shown in Fig. 10. However, it is not necessary to actually construct the convex hull by shadow-backprojection. It is sufficient to establish a correspondence between a particular "image" $l(\theta)$ and the corresponding convex hull of the shape, and to identify the shape's convex hull from the statistical signature $l(\theta)$ alone. Thus, the statistical signature $l(\theta)$ identifies the convex hull of a particular object, and this can be used for RST based shape recognition. We would like to mention here that the convex hull of an object shape is not necessarily unique. Consequently, we plan to use correlations between the measured pixel values of the projections and then relate these correlations to a particular object shape. Research in this area is currently ongoing. Object analysis in sensor systems has conventionally been implemented in image post-processing, although optical processing based on field correlation has also been implemented [19, 20, 21, 22].
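A grid-based sketch of the carving step follows. For brevity it intersects bands of width $l(\theta)$ centered on a known centroid, a simplification relative to the corner-anchored inequalities (16)–(17); the elliptical test object and all sizes are hypothetical:

```python
# Shadow-backprojection sketch: intersect, over many angles, the bands of
# width l(theta) that must contain the object. For simplicity the bands are
# centered on a known centroid (the corner-anchored constraints (16)-(17)
# remove that assumption). The ellipse is a toy object.
import numpy as np

thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
a, b = 1.0, 0.5                                    # ellipse semi-axes (toy)
# Shadow width of a centered ellipse along direction theta:
l = 2.0 * np.sqrt((a * np.cos(thetas))**2 + (b * np.sin(thetas))**2)

x, y = np.meshgrid(np.linspace(-1.2, 1.2, 400), np.linspace(-1.2, 1.2, 400))
hull = np.ones_like(x, dtype=bool)
for theta, width in zip(thetas, l):
    proj = x * np.cos(theta) + y * np.sin(theta)   # signed coordinate along theta
    hull &= np.abs(proj) <= width / 2.0            # carve the forbidden region
print(hull.mean())  # fraction of the grid inside the reconstructed hull
```

For a convex, centrally symmetric object such as this ellipse, the intersection of the bands converges to the object itself as the number of angles grows.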

Most contemporary probabilistic shape recognition systems [23] compare a measurement with a database of standard shapes to determine the shape of an object. We use a similar approach, using the statistical size measurements to form a statistical "image" of the shape of the object. This is depicted in Fig. 11, which shows the measurements obtained from 5 objects of different shapes. Each object was placed on a rotational stage, and 5 different perspective views were obtained for each object at angles of 0°, 45°, 90°, 135°, and 180°.

Figure 12(a) shows plots of the statistics for the different shapes and perspectives. We examine each shape separately. The shape of the sphere does not change as the perspective changes, and this is reflected in the histograms for the sphere, which remain approximately invariant over the entire range of angles. A parallelepiped, on the other hand, appears different from different angles: while the histograms for 0° and 180° look similar, the other three angles give rise to different histograms. The cylinder, like the sphere, gives rise to similar histograms at each angle. The tetrahedron appears to be a different shape from each perspective, and thus the resulting histograms all appear different. Finally, the hemisphere looks like a sphere from one side and like a parallelepiped from the other; the resulting histograms show corresponding behavior, initially yielding statistics like a sphere while the opposite perspective yields statistics like a flat surface. Figure 12(b) shows the corresponding statistical images obtained from the histograms.
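A minimal sketch of the database-matching step that such a system implies is given below. The per-angle histograms are random placeholders for the measured five-view signatures, and the chi-square distance is one plausible choice of similarity metric, not necessarily the one used here:

```python
# Nearest-signature shape matching: each shape is represented by a stack of
# per-angle measurement histograms, and a query is assigned to the closest
# database entry. Histograms are placeholders; chi-square is one assumed metric.
import numpy as np

rng = np.random.default_rng(4)
n_angles, n_bins = 5, 100

def signature():
    h = rng.random((n_angles, n_bins))              # placeholder histograms
    return h / h.sum(axis=1, keepdims=True)         # normalize per angle

database = {name: signature() for name in
            ("sphere", "parallelepiped", "cylinder", "tetrahedron", "hemisphere")}

query = database["sphere"] + 0.01 * rng.random((n_angles, n_bins))  # noisy view
query /= query.sum(axis=1, keepdims=True)

def chi2(p, q, eps=1e-12):
    return 0.5 * np.sum((p - q)**2 / (p + q + eps))

best = min(database, key=lambda name: chi2(database[name], query))
print(best)                                         # "sphere"
```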

Fig. 12. (a) Shapes used and their projections; (b) statistical "images" ($A(\theta)$) of the shapes. These statistical images are a representation of the object shape.

5. Conclusions and Future work

We have shown that the size of a uniform-brightness object can be accurately predicted by determining the higher moments of the statistics of the RST measurement. Further, we see that complex shape recognition can also be achieved using this method. RST thus seems attractive for implementing shape recognition because of the relatively few measurements required: we envision that a typical shape recognition system would require just a few statistical measurements in each direction to determine the projected size of the object in that direction. These projections could then be inverted to yield information about the convex hull of the shape. While it is true that an RST based shape recognition system delocalizes the object information, a separate RST system (described in [13]) can be used in conjunction with the shape recognition system to both locate the object and determine its shape. Thus, we believe that it is possible to perform shape recognition with low power consumption and data transfer rates using RST systems. Efforts are currently underway in our laboratory to implement a prototype RST based 3D shape recognition system.

Acknowledgment

The authors would like to thank George Barbastathis, Nikos Pitsianis, Prasant Potuluri, Steven Feller, Evan Cull, Mohan Shankar, and Unnikrishnan Gopinathan for useful discussions. This work was supported by the Defense Advanced Research Projects Agency through grant DAAD 19-01-1-0641.

References and links

1. D. J. Brady and Z. U. Rahman, "Integrated analysis and design of analog and digital processing in imaging systems: introduction to feature issue," Appl. Opt. 41, 6049 (2002).

2. W. T. Cathey and E. R. Dowski, "New paradigm for imaging systems," Appl. Opt. 41, 6080–6092 (2002).

3. D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, "Visible cone-beam tomography with a lensless interferometric camera," Science 284, 1561–1564 (1999).

4. G. Barbastathis and D. J. Brady, "Multidimensional tomographic imaging using volume holography," Proc. IEEE 87, 2098–2120 (1999).

5. M. R. Descour, C. E. Volin, E. L. Dereniak, T. M. Gleeson, M. F. Hopkins, D. W. Wilson, and P. D. Maker, "Demonstration of a computed-tomography imaging spectrometer using a computer-generated hologram disperser," Appl. Opt. 36, 3694–3698 (1997).

6. E. R. Dowski and W. T. Cathey, "Extended depth of field through wave-front coding," Appl. Opt. 34, 1859–1866 (1995).

7. T. M. Cannon and E. E. Fenimore, "Tomographical imaging using uniformly redundant arrays," Appl. Opt. 18, 1052–1057 (1979).

8. E. E. Fenimore, "Coded aperture imaging: predicted performance of uniformly redundant arrays," Appl. Opt. 17, 3562–3570 (1978).

9. A. R. Gourlay and J. B. Stephen, "Geometric coded aperture masks," Appl. Opt. 22, 4042–4047 (1983).

10. K. A. Nugent, "Coded aperture imaging: a Fourier space analysis," Appl. Opt. 26, 563–569 (1987).

11. P. Potuluri, M. R. Fetterman, and D. J. Brady, "High depth of field microscopic imaging using an interferometric camera," Opt. Express 8, 624–630 (2001), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-8-11-624.

12. P. Potuluri, U. Gopinathan, J. R. Adleman, and D. J. Brady, "Lensless sensor system using a reference structure," Opt. Express 11, 965–974 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-8-965.

13. U. Gopinathan, D. J. Brady, and N. P. Pitsianis, "Coded apertures for efficient pyroelectric motion tracking," Opt. Express 11, 2142–2152 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2142.

14. P. Potuluri, M. Xu, and D. J. Brady, "Imaging with random 3D reference structures," Opt. Express 11, 2134–2141 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2134.

15. T. Cannon and E. Fenimore, "Coded aperture imaging: many holes make light work," Opt. Eng. 19, 283–289 (1980).

16. G. Barbastathis, M. Balberg, and D. J. Brady, "Confocal microscopy with a volume holographic filter," Opt. Lett. 24, 811–813 (1999).

17. A. Sinha and G. Barbastathis, "Volume holographic telescope," Opt. Lett. 27, 1690–1692 (2002).

18. J. W. Goodman, Statistical Optics (Wiley, 2000), Ch. 6, p. 237.

19. J. Rosen, "Three-dimensional optical Fourier transform and correlation," Opt. Lett. 22, 964–966 (1997).

20. B. Javidi and E. Tajahuerce, "Three-dimensional object recognition by use of digital holography," Opt. Lett. 25, 610–612 (2000).

21. J. J. Esteve-Taboada, D. Mas, and J. García, "Three-dimensional object recognition by Fourier transform profilometry," Appl. Opt. 38, 4760–4765 (1999).

22. Y. Frauel and B. Javidi, "Digital three-dimensional image correlation using computer-reconstructed integral imaging," Appl. Opt. 41, 5488–5496 (2002).

23. N. Saito et al., "Discriminant feature extraction using empirical probability density estimation and a local basis library," Pattern Recognition 35, 2841–2852 (2002).
