Optica Publishing Group

Optical frequency and phase information-based fusion approach for image rotation symmetry detection

Open Access

Abstract

Detecting objects by their rotation symmetry is widely applicable, as most artificial objects possess this property. However, existing techniques often fail because they rely on a single symmetry energy. To tackle this problem, this paper proposes a novel method consisting of two steps: 1) Based on an optical image, two independent symmetry energies are extracted from the optical frequency space (RSS, rotation symmetry strength) and phase space (SSD, symmetry shape density), and an optimized symmetry-energy-based fusion algorithm is applied to these two energies to achieve a more comprehensive representation of the symmetry information. 2) In the fused symmetry energy map, a local region detection algorithm is used to detect multi-scale symmetry targets. Compared with known methods, the proposed method detects more rotation symmetry centers across multiple scales (skewed, small-scale, and regular) and significantly boosts detection accuracy. Experimental results confirm the performance of the proposed method, which is superior to state-of-the-art methods.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

An image is the result of frequency and phase modulation of reflected light, so detecting the symmetry of a given image allows the modulated light to be assessed. Symmetrical target detection is also a computer application that mimics the human instinct to understand nature. It is therefore an important research subject: symmetry detection allows intelligent machines to quickly and accurately determine a given target and its structure [1–7]. Rotation symmetry detection is a common approach in such applications. A symmetrical form has a simple, clear pattern that can be expressed mathematically in the form of a group, and a clear understanding of symmetrical patterns is necessary to establish an effective symmetry detection algorithm [8]. Obtaining symmetry information from the optical information in an image is thus a topic that merits scholarly attention.

Recent researchers have explored this topic from the dual perspectives of space and frequency and have established various algorithms to detect and identify symmetrical targets. In the spatial domain, most algorithms are based on the detection of image features (e.g., edges, angles, corners, and textures) [9–12] or of gradient features [13–15]. Gradient-based detection exploits the significant direction changes that symmetry produces, and this line of research has attracted growing attention [16]. Other algorithms are based on frequency analysis tools designed for spectral decomposition [17–20], which map specific objects into a transform space; this can quickly reveal the spectral characteristics of a signal that reflect certain targeted characteristics. Tools such as the discrete Fourier transform (DFT) and the wavelet transform, for example, use spectral decomposition to analyze signal characteristics. The DFT, in particular, provides very useful and clear connections in the frequency domain for specific symmetric modes in the spatial domain [21–24]. Image-based detection generally requires feature detection in a local region, while frequency-based detection operates on the whole image (i.e., the global region). Detection algorithms can accordingly be divided into two categories, local and global; current rotation symmetry center detection and recognition algorithms can be categorized in the same way.

Local rotation symmetry center detection and recognition algorithms in use today are mainly based on specific feature detectors. For example, Loy et al. [25] used all direction, scale, and position information in an image to detect and identify rotation symmetry centers. Their method is applicable to images of single as well as multiple symmetry targets, and uses a robust feature point matching algorithm in which feature points are generated by feature detectors (e.g., the SIFT algorithm). Cornelius et al. [26] used Hessian-affine, Harris-affine, and SIFT feature detectors to perform affine projection. To detect rotation symmetry, they assumed that each feature pair has a tilt angle and direction forming a set of rotation symmetry centers; a significant indicator of rotation symmetry can be determined from this set, and close characteristics can also be used to determine the center of symmetry. However, texture information is required, which prevents recognition of objects with symmetrical textures. Although the above algorithms can detect rotation symmetry centers, they do so with limited accuracy, especially in images containing multiple rotation symmetry centers.

Frequency analysis tools are commonly used in global rotation symmetry center detection and recognition algorithms. Although robust, automatic algorithms of this kind are relatively rare [23,24], the frieze-expansion pattern (FEP) is often utilized in rotation symmetry detection. The FEP algorithm [24] uses RSS and SSD maps to detect the center of rotation symmetry, and the periodicity of the signal is used to identify the rotation symmetry fold number. This algorithm is not suitable for scenes with rotation symmetry centers at multiple scales, because it relies on a global maximum search over the image [27,28]: it calculates the RSS and SSD of the image and takes the maximum value as the center of rotation symmetry. Rotation symmetry centers with relatively small dimensions then remain obscured because they have lower symmetry energy values. Additionally, a given image must be traversed twice globally to calculate the RSS and SSD maps, which makes the calculation costly and complex. Further, the RSS and SSD maps may place the rotation symmetry center at inconsistent positions, which forces the algorithm to identify a single center rather than multiple rotation symmetry centers.

Many researchers have grappled with these issues [29–32]. For example, Pan et al. [33] used a radius-based FEP algorithm to detect rotation symmetry centers; it suffers from missing-center problems due to defects in the global RSS maps. Itti et al. [34] established a region-of-interest detection algorithm suitable for detecting single or multiple rotation symmetry center regions. Huang et al. [35] used Itti's algorithm to detect rotation symmetry center regions with fast global computation confined to the detected interest region (IR). However, if the IR does not cover the rotation symmetry center in the image, the algorithm fails to detect the center and its overall error rate increases. A novel approach was developed in this study to calculate the RSS and SSD symmetry energy maps simultaneously; the symmetry center is then identified from optimized symmetry energy peaks. First, the RSS and SSD symmetry energy maps are calculated globally, and the two maps are fused by an optimization algorithm based on symmetry energy and min-max gradient changes (surface smoothing). Second, salient interest regions are detected in the fused symmetry energy map. In this way, the global maximum symmetry center detection process becomes a local maximum symmetry center detection process. Multiple symmetry centers can be identified efficiently with a low likelihood of missing rotation symmetry centers, and the problem of independent RSS and SSD detections [24] is effectively avoided, which prevents identifying an excess of symmetry centers.

2. Basic theory and background

The mathematical description of symmetry rests on the concept of a group. If all elements of a group satisfy the commutative law $a \cdot b = b \cdot a,\ a,b \in G$, the group is an Abelian group [36,37]. The rotation (symmetry) group satisfies the commutative law, so the rotation group is Abelian [38]. A simple example is the symmetry group of the equilateral triangle (bilateral symmetry), $D_3$, composed of three rotation symmetry and three reflection symmetry elements; like $C_6$, it contains six elements [38]. The rotation group is the focus of the present study.
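Concretely, two planar rotations about a common center compose by angle addition, so their order does not matter (an added illustrative identity using standard $2 \times 2$ rotation matrices, not part of the original derivation):

$$R_\alpha R_\beta = \left( {\begin{array}{cc} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{array}} \right) \left( {\begin{array}{cc} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{array}} \right) = R_{\alpha + \beta} = R_{\beta + \alpha} = R_\beta R_\alpha$$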

2.1 Introduction to RSS rotation symmetry detection and recognition algorithm

Lee et al. [23] found that, in the FEP pattern, after a periodic signal is subjected to the DFT there is a correspondence between the frequency domain signal and the fold number of the time domain periodic signal. The core of the detection algorithm is that when a signal passes through the Fourier transform, its fold number in the time domain corresponds to the subscript (index) value of the peak point in the frequency domain. This peak can be considered the main energy in the time domain. High-frequency components hidden in the signal have fold numbers that are multiples of the fold number of the main energy signal. Lee et al. [23] used this observation to establish an effective RSS-based algorithm for detecting and identifying rotation symmetry centers.

In an actual image, rotation symmetry is usually dominated by the annulus around the center point (red point in Fig. 1(a)). When traversed around the center point, the annulus contains a repeating pattern in the circumferential direction (Fig. 1(a)). The rotation symmetry in the original image is detected by the FEP algorithm: rotation symmetry on the annulus becomes translational symmetry in the FEP pattern (Fig. 1(b)). Each line of the FEP pattern (e.g., the red dotted line in Fig. 1(b)) is then subjected to a one-dimensional (1D) DFT, and the absolute value is taken to create a discrete energy density map for each line (Fig. 1(c)). By finding the maximum energy density value (magnitude) of each line and its corresponding index number, we obtain the rotation symmetry fold number of that line. Adjacent lines with the same fold number are unified into a symmetry region. Among the peak points of the symmetry energy density map (Fig. 1(c)), point A has the highest symmetry energy density; its index number is 5, corresponding to the pattern in Figs. 1(a) and 1(b) repeating 5 times per revolution. Point B has the second-highest symmetry energy density and indicates a high-frequency symmetry component; its index number is 10, so the fold number of the high-frequency component is a multiple of that of the low-frequency component (index number, subscript number). The same rule applies to point C, the third-highest symmetry energy density (Fig. 1(c)).
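This index-to-fold-number correspondence can be sketched in a few lines (a minimal illustration, assuming one FEP row stored as a 1-D NumPy array; the function name `fold_number` is hypothetical, not from the original paper):

```python
import numpy as np

def fold_number(fep_row):
    """Estimate the rotation symmetry fold number of one FEP row.

    The index of the largest DFT magnitude (DC excluded) corresponds
    to the number of pattern repetitions along the row.
    """
    spectrum = np.abs(np.fft.fft(fep_row))
    # Search only the first half (a real signal's spectrum is symmetric),
    # skipping the DC coefficient at index 0.
    half = len(fep_row) // 2
    return int(np.argmax(spectrum[1:half]) + 1)

# A synthetic FEP row repeating 5 times per revolution -> fold number 5.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
print(fold_number(np.cos(5 * theta)))  # -> 5
```

Adding a weaker harmonic at index 10 (a multiple of 5, as with point B above) leaves the detected fold number unchanged.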


Fig. 1. Core theory of RSS algorithm based on FEP [23]. (a) Original image (red point is rotation symmetry center). (b) FEP pattern. (c) Row of DFT magnitude results for (b) red dotted line.


2.2 Introduction of SSD rotation symmetry detection and recognition algorithm

Lee et al. [24] selected an arbitrary pixel around the center of rotation symmetry for FEP pattern processing and found a correspondence between the phase information in the frequency domain and the center of symmetry. This correspondence was used to establish their SSD rotation symmetry detection and recognition algorithm. The theoretical description of this correspondence is simplified here due to space limitations and is given as a more concise mathematical expression. To operate the method, first randomly select a pixel around the center of rotation symmetry for FEP pattern processing, then obtain the phase angle of the first ($k = 1$) frequency component. The line through the pixel point at this phase angle passes through the rotation symmetry center point. Straight lines from multiple different pixel points all pass through the rotation symmetry center, so its position can be confirmed accordingly. On this basis, Lee et al. [24] established an SSD-based rotation symmetry detection and recognition algorithm that outperformed previously published algorithms.

The core of the algorithm is to locate the rotation symmetry center from the phase of the energy density after the 1D DFT. Two pixels are selected near the center of rotation symmetry (the two red points in Fig. 2(a)) and processed by FEP to obtain the phase of the first ($k = 1$) frequency component. The tangent of the phase serves as the slope of a straight line drawn through each of the two known red points. The point where the two lines meet is a possible center of rotation symmetry, as shown in Fig. 2(b) where the blue and red lines intersect. All pixels around the rotation symmetry point are then processed (in Fig. 2(c), the red region is the pixel region to be calculated); each pixel is subjected to FEP processing to obtain the corresponding phase value. Most of the resulting lines pass through the center point of rotation symmetry (Fig. 2(d)). The specific calculation process of the SSD algorithm is only briefly described here; the proposed formulation is more concise mathematically but produces results consistent with those of Lee et al. [24].
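The phase-reading step can be sketched as follows (an illustration assuming a 1-D NumPy FEP row; `first_coeff_phase` is a hypothetical name, and the standard `arctan2` convention is used here rather than the paper's $\arctan(\operatorname{Re}/\operatorname{Im})$ form):

```python
import numpy as np

def first_coeff_phase(fep_row):
    """Phase angle of the first (k = 1) DFT coefficient of a FEP row.

    Following the SSD idea, this phase fixes the slope of a line that
    passes through the sampled pixel and the rotation symmetry center.
    """
    P1 = np.fft.fft(fep_row)[1]          # k = 1 spectral coefficient
    return np.arctan2(P1.imag, P1.real)  # phase in (-pi, pi]
```

For a row of the form $\cos(\theta - \phi_0)$, the returned phase is $-\phi_0$, so a known angular shift of the FEP row is read off directly.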


Fig. 2. Core theory of SSD algorithm based on FEP [24]. (a) Original image with red points. (b) Two lines through (a) red points (slope based on phase value). (c) Red region representing all calculated points. (d) Lines through all points.


3. Optimization research based on RSS and SSD

3.1 Optimized RSS and SSD fusion algorithm flowchart

In the first step of the proposed algorithm, the RSS and SSD maps of the original image (Fig. 3(a)) are calculated by interval sampling (Figs. 3(b), 3(c)). In the second step, the fusion algorithm fuses the RSS and SSD symmetry energy maps (Fig. 3(d)). In the third step, regions of interest are obtained in the fused symmetry energy map using the visual algorithm based on salient interest regions (Figs. 3(e), 3(f)). In the fourth step, the potential rotation symmetry centers are obtained (Fig. 3(g)). In the fifth step, each potential rotation symmetry center is converted into an FEP pattern (Figs. 3(h), 3(i)). Finally, the symmetry regions are detected and the symmetry characteristics are analyzed across the whole image (Figs. 3(j), 3(k)).


Fig. 3. Flowchart of RSS and SSD fusion algorithm. (a) Input image. (b) RSS map. (c) SSD map. (d) RSS and SSD fusion map. (e) Attended location. (f) Symmetry center region. (g) Location map. (h) Cartesian space. (i) Polar-transformed space. (j) Symmetry region. (k) Rotation symmetry order, type, regions.


3.2 Optimized RSS and SSD fusion algorithm

To complete the fusion task effectively, the independent symmetry energies represented by the SSD and RSS must be expressed uniformly in a common energy domain. A normalized numerical function is proposed here for this purpose. A fusion function based on gradient changes was also designed to obtain the maximum fusion energy with the lowest noise (i.e., a smoother energy surface with fewer glitches). An optimization function ensures the maximum fusion energy sum and the minimum gradient variation sum. The fusion function is defined first, followed by the optimization function and the normalized numerical function (Eqs. (1), (2), and (3), respectively). ${\left\| \cdot \right\|_1}$ denotes the 1-norm and $\nabla$ denotes the image gradient operator (Eq. (4)).

$$F(x,y,\lambda ) = \begin{cases} F_{RSS1}(x,y) + F_{SSD1}(x,y), & \left\| \nabla F_{RSS1}(x,y) \right\|_1 > \lambda,\ \left\| \nabla F_{SSD1}(x,y) \right\|_1 > \lambda, \\ \min \left( F_{RSS1}(x,y), F_{SSD1}(x,y) \right), & \textrm{otherwise.} \end{cases}$$
$$\begin{cases} \lambda_s = \left\{ \arg \max\limits_{\lambda} \sum\limits_{(x,y) \in S} F(x,y,\lambda) \;\middle|\; \lambda = 1,2,\ldots,\max\limits_{(x,y) \in S} \left( \left\| \nabla F_{RSS1}(x,y) \right\|_1, \left\| \nabla F_{SSD1}(x,y) \right\|_1 \right) \right\} \\ \lambda = \min \left\{ \arg \min\limits_{\lambda \in \lambda_s} \sum\limits_{(x,y) \in S} \left\| \nabla F(x,y,\lambda) \right\|_1 \right\} \end{cases}$$
$$\begin{cases} F_{RSS1}(x,y) = \dfrac{\max_{(x,y) \in S} \left( F_{RSS}(x,y), F_{SSD}(x,y) \right)}{\max_{(x,y) \in S} \left( F_{RSS}(x,y) \right)} \cdot F_{RSS}(x,y) \\[2ex] F_{SSD1}(x,y) = \dfrac{\max_{(x,y) \in S} \left( F_{RSS}(x,y), F_{SSD}(x,y) \right)}{\max_{(x,y) \in S} \left( F_{SSD}(x,y) \right)} \cdot F_{SSD}(x,y) \end{cases}$$
$$\begin{cases} \nabla F(x,y) = \left[ {\begin{array}{cc} F_x(x,y) & F_y(x,y) \end{array}} \right]^T \\ F_x(x,y) = F(x+1,y) - F(x-1,y) \\ F_y(x,y) = F(x,y+1) - F(x,y-1) \end{cases}$$

The normalized numerical function (Eq. (3)) scales the RSS and SSD energies to the same value range within a specific area (here, the entire energy map): it identifies which of the RSS and SSD energy maps contains the maximum energy value, and the maps are numerically enlarged to that maximum to complete the unified numerical processing. The fusion function (Eq. (1)) then combines the RSS and SSD energies under the gradient threshold $\lambda$ to give the fused symmetry energy of each pixel. When the 1-norm gradients of the RSS and SSD values both exceed the threshold $\lambda$, the two energies are added to obtain the fusion energy value; otherwise, their minimum is taken as the fusion energy value, which suppresses noise glitch energy. Finally, the optimization function (Eq. (2)) sweeps the threshold $\lambda$ from 1 to the maximum of the RSS and SSD 1-norm gradients to obtain the maximum symmetric energy sum over the entire region. The same sum may correspond to multiple thresholds $\lambda_s$; all of them are searched to identify the smallest threshold $\lambda$ that minimizes the 1-norm gradient sum of the fusion energy. This threshold $\lambda$ is plugged into the fusion function (Eq. (1)) to obtain the final fusion energy map, whose surface is consequently very smooth.
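The whole fusion step can be sketched compactly in NumPy (an illustration of Eqs. (1)-(4); the function names `grad_l1` and `fuse` are hypothetical, and boundary pixels of the central-difference gradient are simply left at zero):

```python
import numpy as np

def grad_l1(F):
    """1-norm of the central-difference gradient at every pixel (Eq. (4))."""
    Fx = np.zeros_like(F, dtype=float)
    Fy = np.zeros_like(F, dtype=float)
    Fx[1:-1, :] = F[2:, :] - F[:-2, :]
    Fy[:, 1:-1] = F[:, 2:] - F[:, :-2]
    return np.abs(Fx) + np.abs(Fy)

def fuse(F_rss, F_ssd):
    """Fuse RSS and SSD energy maps (Eqs. (1)-(3))."""
    # Eq. (3): scale both maps to the common maximum energy value.
    peak = max(F_rss.max(), F_ssd.max())
    F1 = F_rss * (peak / F_rss.max())
    F2 = F_ssd * (peak / F_ssd.max())
    g1, g2 = grad_l1(F1), grad_l1(F2)

    def fused(lam):
        # Eq. (1): add where both gradients exceed lam, else take the min.
        strong = (g1 > lam) & (g2 > lam)
        return np.where(strong, F1 + F2, np.minimum(F1, F2))

    # Eq. (2): among thresholds giving the maximum energy sum, keep the
    # smallest one that minimizes total gradient variation (smoothness).
    lam_max = max(1, int(max(g1.max(), g2.max())))
    sums = {lam: fused(lam).sum() for lam in range(1, lam_max + 1)}
    best = max(sums.values())
    candidates = [lam for lam, s in sums.items() if s == best]
    lam_star = min(candidates, key=lambda lam: grad_l1(fused(lam)).sum())
    return fused(lam_star)
```

The `np.where` branch mirrors the two cases of Eq. (1); the dictionary sweep mirrors the exhaustive $\lambda$ search of Eq. (2).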

A fast calculation step was added to the proposed algorithm to remedy the time-consuming nature of the RSS calculation; the interval calculation algorithm quickly computes the RSS, as discussed in detail below. The 1D DFT is calculated for each line of the FEP. At image pixel $(x,y)$, $S_{x,y}(r,k)$ represents a $K \times R$ FEP energy spectral density, where $r \in [0,R-1]$ and $k \in [0,K-1]$; $R$ and $K$ represent the height (number of rows) and length (number of columns) of the FEP, respectively. The RSS rotation symmetry function is defined as follows [24]:

$$F_{RSS}(x,y) = \sum_{r = 5\delta} \rho_r \frac{\operatorname{mean} \left( S_{x,y}\left( r, k_{peak}(r) \right) \right)}{\operatorname{mean} \left( S_{x,y}(r,k) \right)}, \quad \delta = 0,1,2,\ldots,\left\lfloor (R-1)/5 \right\rfloor$$
$$\textrm{s.t.} \quad \rho_r = \begin{cases} 1, & \textrm{if } \operatorname{Mod} \left( k_{peak}(r), \min \left( k_{peak}(r) \right) \right) = 0, \\ 0, & \textrm{otherwise.} \end{cases}$$

where $\left\lfloor \cdot \right\rfloor$ is the round-down (floor) operation. In the fast calculation, $r = 5\delta$: the summation is performed once every five rows. In the FEP mode, each line carries the symmetry information of the image and adjacent consecutive lines carry the same symmetry information, so computing every fifth row does not affect the symmetry center; it only decreases the overall value of the energy map and saves up to 80% of the calculation time. The main differences between the equations used here and those of Lee et al. [24] are: 1) the standard DFT index convention (counting from 0) is used here, and 2) the interval calculation method is used. The content is otherwise consistent with Lee et al. [24].
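The interval-sampled RSS summation can be sketched as follows (a simplified reading of Eq. (5): each sampled row contributes the ratio of its peak spectral energy to the mean spectral energy, $\rho_r$ keeps rows whose fold number is a multiple of the smallest detected fold number, and `rss_strength` is a hypothetical name):

```python
import numpy as np

def rss_strength(fep, step=5):
    """Simplified RSS score of a FEP pattern (a sketch of Eq. (5)).

    Only every `step`-th row (r = 5*delta) is transformed, which is
    where the claimed ~80% saving over a full evaluation comes from.
    """
    R, K = fep.shape
    peaks, ratios = {}, {}
    for r in range(0, R, step):
        S = np.abs(np.fft.fft(fep[r])) ** 2  # energy spectral density
        S = S[1 : K // 2]                    # drop DC, keep one half
        m = S.mean()
        if m == 0:
            continue                         # flat row: no symmetry vote
        k = int(np.argmax(S)) + 1            # fold-number candidate
        peaks[r] = k
        ratios[r] = S[k - 1] / m
    if not peaks:
        return 0.0
    k_min = min(peaks.values())
    # rho_r = 1 only for rows whose fold number is a multiple of k_min.
    return sum(v for r, v in ratios.items() if peaks[r] % k_min == 0)
```

With `step=5`, a 20-row FEP is transformed only 4 times instead of 20, while a clearly 5-fold pattern still produces a large positive score.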

The energy spectral density is defined as follows [24]:

$$S_{x,y}(r,k) = \left| \sum_{n = 5\delta} f_{x,y}(r,n)\, e^{-i\frac{2\pi}{K} nk} \right|^2, \quad \delta = 0,1,2,\ldots,\left\lfloor (K-1)/5 \right\rfloor$$

where $f_{x,y}(r,n)$ is the $n$th pixel value of the $r$th row. Here, a fast algorithm is used: DFT calculations are performed only at $n = 5\delta$, i.e., at intervals of five points on each line. Huang et al. [39] proved mathematically that the symmetry fold number of the image corresponds to the peak frequency subscript value, which is much smaller than the number of samples. This approach ($n = 5\delta$) requires at least 80% less calculation than the standard DFT. The peak $S_{x,y}(r, k_{peak}(r))$ must also satisfy the following condition [24], excluding the DC coefficient ($k = 0$); $\beta = 2$ is used in the following experiments:

$$S_{x,y}(r, k_{peak}(r)) \ge \operatorname{mean} \left\{ S_{x,y}(r,k) \,\middle|\, k = 1,2,\ldots,\left\lfloor (K-1)/5 \right\rfloor \right\} + \beta \cdot \operatorname{std} \left\{ S_{x,y}(r,k) \,\middle|\, k = 1,2,\ldots,\left\lfloor (K-1)/5 \right\rfloor \right\}$$
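This peak-significance test is straightforward to express (a sketch; `valid_peak` is a hypothetical helper operating on one row's energy spectral density):

```python
import numpy as np

def valid_peak(S_row, beta=2.0):
    """Check the peak-significance condition of Eq. (7).

    A row's spectral peak only counts if it exceeds the mean of the
    DC-free spectrum by at least `beta` standard deviations.
    """
    S = np.asarray(S_row, dtype=float)[1:]   # exclude DC (k = 0)
    k = int(np.argmax(S))
    return bool(S[k] >= S.mean() + beta * S.std())
```

A spectrum with one dominant coefficient passes the test; a nearly flat spectrum, whose maximum sits within two standard deviations of the mean, does not.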

Calculating the SSD is also time-consuming, so another fast calculation step was added to the proposed method; the interval calculation algorithm is used to quickly calculate the SSD. At image pixel $(x_i,y_i)$, $P_{x_i,y_i}(r,k)$ represents the frequency spectral coefficient of a $K \times R$ FEP, where $r \in [0,R-1]$ and $k \in [0,K-1]$; $R$ and $K$ represent the height (number of rows) and length (number of columns) of the FEP. The $k$th frequency spectrum coefficient $P_{x_i,y_i}(r,k)$ in the $r$th row is defined as follows [24]:

$$P_{x_i,y_i}(r,k) = \sum_{n = 5\delta} p_{x_i,y_i}(r,n)\, e^{-i\frac{2\pi}{K} nk}, \quad \delta = 0,1,2,\ldots,\left\lfloor (K-1)/5 \right\rfloor$$
where $p_{x_i,y_i}(r,n)$ is the $n$th pixel value of the $r$th row. For efficiency (as in Eq. (6)), DFT calculations are performed only at $n = 5\delta$, i.e., at intervals of five points on each line.

The phase value $\phi_i(r)$ and the median phase value $\Phi_i$ of the first frequency spectral coefficient in the $r$th row are defined as follows [24]:

$$\begin{cases} \phi_i(r) = \arctan \left( \dfrac{\operatorname{Re} \left( P_{x_i,y_i}(r,1) \right)}{\operatorname{Im} \left( P_{x_i,y_i}(r,1) \right)} \right) \\[2ex] \Phi_i = \operatorname{median} \left( \phi_i(r) \right) \end{cases}$$
The corresponding straight line is then [24]:
$$\frac{{\tan {\Phi _i}}}{{{x_i} + {y_i}\tan {\Phi _i}}}y + \frac{1}{{{x_i} + {y_i}\tan {\Phi _i}}}x = 1$$
The potential rotation symmetry center position $C$ between the image pixel $(x_i,y_i)$ and $(x_j,y_j)$ is defined as follows [24]:
$$C = \left( {\begin{array}{c} x\\ y \end{array}} \right) = \left( {\begin{array}{c} {\frac{{{s_i}{x_i} - {s_j}{x_j} + {y_j} - {y_i}}}{{{s_i} - {s_j}}}}\\ {\frac{{{s_i}{s_j}({x_i} - {x_j}) + {s_j}{y_j} - {s_i}{y_i}}}{{{s_i} - {s_j}}}} \end{array}} \right)$$
where ${s_i} = - 1/ \tan {\Phi _i}$ and ${s_j} = - 1/ \tan {\Phi _j}$.
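The intersection can be sketched by solving the two line equations of Eq. (10) directly rather than transcribing the closed form of Eq. (11) (an illustration; `symmetry_center` is a hypothetical name):

```python
import math

def symmetry_center(p_i, phi_i, p_j, phi_j):
    """Candidate rotation symmetry center from two phase lines.

    Eq. (10) puts each line in the form x + tan(Phi)*y = x0 + tan(Phi)*y0;
    the two lines are intersected by solving that 2x2 linear system.
    """
    (xi, yi), (xj, yj) = p_i, p_j
    ti, tj = math.tan(phi_i), math.tan(phi_j)
    ci = xi + ti * yi            # x + ti*y = ci
    cj = xj + tj * yj            # x + tj*y = cj
    y = (ci - cj) / (ti - tj)    # eliminate x between the two lines
    x = ci - ti * y
    return x, y
```

Two lines chosen to pass through a known point recover that point, which is the check used below.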

In the image region, the SSD rotation symmetry function is defined as follows:

$$F_{SSD}(x,y) = \sum_{(x_i,y_i) \in S,\, (x_j,y_j) \in S} \rho_{ij}$$
$$\rho_{ij} = \begin{cases} 1, & \left( \begin{array}{c} x \\ y \end{array} \right) = \left( \begin{array}{c} \left\lfloor \dfrac{s_i x_i - s_j x_j + y_j - y_i}{s_i - s_j} \right\rfloor \\[2ex] \left\lfloor \dfrac{s_i s_j (x_i - x_j) + s_j y_j - s_i y_i}{s_i - s_j} \right\rfloor \end{array} \right), \\ 0, & \textrm{otherwise.} \end{cases}$$
Lee et al. [24] calculated the SSD using a Gaussian kernel fusion algorithm. Here, a simpler version is utilized: the number of straight lines passing through each potential symmetry center is counted directly. Testing showed that the peaks of symmetry energy remain observable with this method (Figs. 4(d), 5(d)).
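The simplified counting version can be sketched as follows (an illustration; slopes follow $s = -1/\tan\Phi$, each pixel pair votes for the floored intersection of its two phase lines computed from the line equations of Eq. (10), and `ssd_map` is a hypothetical name):

```python
import numpy as np

def ssd_map(points, phases, shape):
    """Accumulate line-pair intersections into an SSD energy map.

    Every pair of sampled pixels votes for the intersection of its two
    phase lines; the vote counts form the symmetry energy map.
    """
    H, W = shape
    F = np.zeros((H, W))
    slopes = [-1.0 / np.tan(phi) for phi in phases]
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            si, sj = slopes[i], slopes[j]
            if si == sj:                     # parallel lines: no vote
                continue
            (xi, yi), (xj, yj) = points[i], points[j]
            # Intersection of y - yi = si*(x - xi) and y - yj = sj*(x - xj).
            x = (si * xi - sj * xj + yj - yi) / (si - sj)
            y = yi + si * (x - xi)
            xf, yf = int(np.floor(x)), int(np.floor(y))
            if 0 <= xf < H and 0 <= yf < W:
                F[xf, yf] += 1               # rho_ij = 1
    return F
```

Three sample pixels whose phase lines all pass near one point concentrate all three pairwise votes in the same cell.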


Fig. 4. Comparison of various fusion algorithms with non-interval sampling calculations. (a) Original image. (b) Red region where RSS and SSD calculations are performed. (c) RSS result. (d) SSD result. (e) Fusion result of RSS direct addition to SSD. (f) Fusion result of RSS multiplied by SSD. (g) RSS gradient result. (h) SSD gradient result. (i) Numerical result after detecting RSS gradient. (j) Fusion result of proposed method.



Fig. 5. Comparison of various fusion algorithms with interval sampling calculation. (a) Original image. (b) Red region where RSS and SSD calculations are performed. (c) RSS result. (d) SSD result. (e) Fusion result of RSS direct addition to SSD. (f) Fusion result of RSS multiplication by SSD. (g) RSS gradient result. (h) SSD gradient result. (i) Numerical result after detecting RSS gradient. (j) Fusion result of proposed method.


In this study, a series of experiments compared the non-interval sampling calculation algorithm (Fig. 4) with the interval sampling calculation algorithm (Fig. 5). The non-interval sampling algorithm (Fig. 4) was processed first; although it is more computationally intensive, it produces effects similar to those of the interval sampling algorithm (Fig. 5).

To operate the non-interval sampling algorithm, a region containing the center of rotation symmetry is set in advance (Fig. 4(b), where the red region contains the rotation symmetry center). In the second step, the RSS and SSD formulas are used to calculate the respective symmetry energy maps in this region (RSS, Fig. 4(c); SSD, Fig. 4(d)); their results are normalized by Eq. (3) and transferred into the same metric range for the fusion step. The fusion algorithm is operated in the third step. A series of fusion algorithms was designed for comparison with the proposed method. The first is the simplest, addition fusion, which directly adds the symmetry energy values of each pixel in the RSS and SSD maps (Fig. 4(e)). The second is multiplication fusion, which directly multiplies the symmetry energy values of each pixel in the RSS and SSD maps (Fig. 4(f)). The third is the proposed method: the RSS gradient is calculated first (Fig. 4(g)), followed by the SSD gradient (Fig. 4(h)). Equation (13) is evaluated to check whether the RSS gradient has a large negative component (in this experiment, $\delta = -10$ and $\alpha = 20$ were used as input parameters); if the condition is met, the RSS values are inverted, and otherwise they are left unchanged (Fig. 4(i)). Finally, Eqs. (1), (2), and (3) are used to fuse each pixel of the RSS and SSD maps (Fig. 4(j)). The adaptive gradient threshold $\lambda$ of the fusion calculation satisfies the optimality conditions of maximum RSS and SSD symmetry energy sum and minimum gradient change (surface smoothing).

The results of this experiment show that direct multiplication performs best when only the non-interval sampling algorithm is considered (Fig. 4(f)), as per the energy marked by the red circle, about $4 \times 10^6$ (Fig. 4(f)). The peak of the fused symmetry energy obtained by direct multiplication is very high, the curved surface indicated by the yellow ellipse region is smooth, and the wave peak is easily distinguished. The proposed method also yields a smooth curved surface and an easily distinguishable peak (Fig. 4(j)), although its fusion effect is not as strong as that of direct multiplication here. The direct addition fusion algorithm performs relatively poorly, as the surface change is less clear when the basic RSS and SSD energies are simply added together. Further assessment is needed to determine whether direct multiplication truly outperforms the proposed method.

$$F_{RSS}(x,y) = \begin{cases} \max\limits_{(x,y) \in S} \left( F_{RSS}(x,y) \right) - F_{RSS}(x,y), & \left( \sum\limits_{(x,y) \in S} \left( \left( \left[ {\begin{array}{cc} 1 & 1 \end{array}} \right] \cdot \nabla F_{RSS}(x,y) \right) < \delta \right) \right) > \alpha, \\ F_{RSS}(x,y), & \textrm{otherwise.} \end{cases}$$
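The trough-inversion test of Eq. (13) can be sketched as follows (an illustration using the central differences of Eq. (4); `maybe_invert_rss` is a hypothetical name, and boundary pixels of the gradient are left at zero):

```python
import numpy as np

def maybe_invert_rss(F_rss, delta=-10.0, alpha=20):
    """Conditionally invert an RSS map (a sketch of Eq. (13)).

    If more than `alpha` pixels have a summed gradient below `delta`,
    the map is dominated by a trough rather than a peak, so it is
    flipped about its maximum value.
    """
    Fx = np.zeros_like(F_rss, dtype=float)
    Fy = np.zeros_like(F_rss, dtype=float)
    Fx[1:-1, :] = F_rss[2:, :] - F_rss[:-2, :]   # central differences, Eq. (4)
    Fy[:, 1:-1] = F_rss[:, 2:] - F_rss[:, :-2]
    summed = Fx + Fy                             # [1 1] . grad F
    if np.count_nonzero(summed < delta) > alpha:
        return F_rss.max() - F_rss
    return F_rss
```

A map with a deep square trough (as in Fig. 5(c)) is flipped so that the trough becomes the peak, while a map with an ordinary isolated peak is returned unchanged.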

The same techniques were compared again with the interval sampling algorithm (Fig. 5), which requires significantly less calculation than non-interval sampling. The larger sampling interval left no obvious peak in the RSS map but a strong trough (yellow circle, Fig. 5(c)). This suggests that regardless of how the samples are taken, the symmetry information remains prominent, an important finding; the symmetry energy must then be extracted appropriately in the next step. The direct addition algorithm again performed worst among the three algorithms tested (yellow circle, Fig. 5(e)). The direct multiplication algorithm also performed relatively poorly in this case, with several glitches (yellow ellipse, Fig. 5(f)); the energy marked by the red circle is about $3 \times 10^4$, and the energy at the center of symmetry is still the highest. The proposed algorithm produced the best fusion effect in this case (Fig. 5(j)): the curved surface in the yellow ellipse region changes smoothly and the wave peak is easily distinguishable.

Together, these two experiments suggest that direct multiplication is not robust and direct addition is even less so. The fusion algorithm proposed in this paper, which seeks the maximum symmetry energy with the smallest gradient change (surface smoothing), is strongly robust and produces the best fusion effects overall.

The algorithmic effect was further tested on whole images. The original experimental image (Fig. 6(a)) was processed with Eq. (5) and Eq. (12) to calculate the RSS (Fig. 6(b)) and SSD (Fig. 6(c)) symmetry energies, followed by the fusion algorithms above: direct addition (Fig. 6(d)), direct multiplication (Fig. 6(e)), and the proposed fusion algorithm (Fig. 6(f)). The proposed algorithm again outperformed the others in terms of robustness.


Fig. 6. Comparison of various fusion algorithms with interval sampling to calculate the whole image. (a) Original image. (b) RSS result. (c) SSD result. (d) Fusion result of RSS direct addition to SSD. (e) Fusion result of RSS multiplication by SSD. (f) Fusion result of proposed method.


The same experimental image (Fig. 7(a)) was used to calculate the fused RSS and SSD symmetry energy map (Fig. 7(b)), which was then processed by the saliency-based visual attention (SBVA) detection algorithm (latest code, V2.3, July 2013, http://saliencytoolbox.net). The SBVA default parameters (color, intensity, and direction characteristics) were all set to 1, and energy regions of interest were detected repeatedly; detection terminated as soon as a newly detected energy region of interest coincided with one already detected. The results are shown in Figs. 7(c) and 7(d). The fusion algorithm again produced favorable effects with clearly observable rotation symmetry centers, laying the foundation for subsequent multi-rotation-symmetry-center detection.
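The repeat-until-overlap ROI loop described above can be sketched as follows; `detect_roi` is a hypothetical stand-in for the SaliencyToolbox call and is not part of its real API:

```python
import numpy as np

def iterate_rois(energy, detect_roi, max_iters=20):
    """Sketch of the ROI loop around SBVA: keep requesting the next most
    salient region and stop as soon as a new region overlaps one already
    found. `detect_roi` must return a boolean mask of the attended region
    (or None when nothing salient remains)."""
    covered = np.zeros_like(energy, dtype=bool)
    rois = []
    for _ in range(max_iters):
        roi = detect_roi(energy, covered)
        if roi is None or (roi & covered).any():  # coincides with a detected region
            break
        rois.append(roi)
        covered |= roi
    return rois
```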


Fig. 7. Process of proposed fusion algorithm. (a) Original image. (b) Fusion result. (c) SBVA region of interest. (d) Concentrated energy region.


The grayscale FEP image (Fig. 3(i)) was next used to segment the symmetry regions. In the segmented FEP region, a minimum sequence number algorithm and a pipeline algorithm detect the symmetric region. The first algorithm changes each index number that is an integer multiple of the minimum index number to that minimum index number. The pipeline algorithm then eliminates fluctuating noise: over a selected pipeline length (10 pixels in this case), if the head and tail of the detection sequence share the same ordinal value (head and tail lengths of 3 here), a pipeline exists; any differing values in the middle of the pipeline are defined as noise and replaced with the head value. This eliminates fluctuations and eventually produces a stable symmetry region (Figs. 3(j), 3(k)).
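A minimal sketch of these two steps, assuming integer region labels and the stated pipeline length of 10 with head/tail length 3:

```python
def min_index_remap(labels):
    # Map any label that is an integer multiple of the minimum positive
    # label down to that minimum label.
    m = min(l for l in labels if l > 0)
    return [m if (l > 0 and l % m == 0) else l for l in labels]

def pipeline_filter(seq, pipe_len=10, end_len=3):
    """Sketch of the pipeline denoiser: within a window of `pipe_len`
    samples, if the first and last `end_len` samples share one value,
    any deviating samples in the middle are treated as noise and
    overwritten with the head value."""
    out = list(seq)
    for i in range(len(out) - pipe_len + 1):
        win = out[i:i + pipe_len]
        head, tail = win[:end_len], win[-end_len:]
        if len(set(head)) == 1 and set(head) == set(tail):
            for j in range(i, i + pipe_len):
                out[j] = head[0]
    return out
```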

The same method as Lee et al. [24] was used in this study to detect rotation symmetry properties. The FEP pattern of a rotation symmetry center has no vertical reflection, while the FEP pattern of a bilateral symmetry does; any vertical reflection in the FEP pattern can therefore distinguish bilateral from rotational symmetry. Most values were zero in this experiment, indicating a uniform region in the original image that can be considered an orthogonal symmetry region.
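The reflection test can be sketched as below, assuming the FEP pattern is given as a 2D array with the angular axis running left to right; the tolerance `tol` is an assumption:

```python
import numpy as np

def classify_symmetry(fep, tol=0.1):
    """Sketch of the reflection test described above: a bilateral-symmetry
    center yields an FEP pattern with a vertical reflection, a rotation
    center does not, and a mostly-zero FEP comes from a uniform region."""
    if np.allclose(fep, 0):
        return "uniform"               # mostly-zero values: uniform region
    mirrored = np.fliplr(fep)          # vertical reflection of the pattern
    diff = np.abs(fep - mirrored).mean() / (np.abs(fep).mean() + 1e-9)
    return "bilateral" if diff < tol else "rotation"
```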

3.3 Flow chart of optimized RSS and SSD fusion algorithm

The optimized RSS and SSD fusion algorithm (Algorithm 1) is discussed in this section; the calculation process follows the principles described in the previous section. First, every pixel in the original image is sampled at intervals to calculate the respective RSS and SSD symmetry energy maps. The next step is to judge whether the obtained RSS symmetry energy map needs to be inverted, as a valley of the RSS map may carry the symmetry information; the gradient of the RSS map is calculated to make this decision (Eq. (13)), using the general image gradient definition (Eq. (4)). The RSS and SSD symmetry energy maps are then normalized (Eq. (3)) to place them in the same numerical space, so that neither symmetry energy dominates merely through its numerical range. Equations (1) and (2) then perform the optimal RSS and SSD fusion. The SBVA algorithm is applied to detect the energy regions of interest, and finally, within each local energy region of interest, the optimal symmetry energy is identified.
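The normalization (Eq. (3)) and fusion (Eq. (1)) steps can be sketched as follows for a fixed λ; the use of the L1 gradient norm in the condition is an assumption drawn from the surrounding description, not a verbatim transcription of the paper's equations:

```python
import numpy as np

def normalize_pair(F_rss, F_ssd):
    # Eq. (3) sketch: scale both maps so their peaks meet the joint
    # maximum, putting RSS and SSD in the same numerical space.
    top = max(F_rss.max(), F_ssd.max())
    return F_rss * (top / F_rss.max()), F_ssd * (top / F_ssd.max())

def fuse(F_rss1, F_ssd1, lam):
    """Eq. (1) sketch for a fixed lambda: where both maps vary strongly
    (L1 gradient norm above lambda) the energies add; elsewhere the
    smaller energy is kept (surface smoothing)."""
    def grad_l1(F):
        gy, gx = np.gradient(F)
        return np.abs(gx) + np.abs(gy)
    strong = (grad_l1(F_rss1) > lam) & (grad_l1(F_ssd1) > lam)
    return np.where(strong, F_rss1 + F_ssd1, np.minimum(F_rss1, F_ssd1))
```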

4. Algorithm complexity

The image is $I \times I$ in size. The temporal complexity of sampling the image pixels is $O(I^2)$, that of the FEP algorithm is $O(I)$, and that of building the RSS/SSD symmetry energy map is $O(I^3)$ [24]. The RSS/SSD algorithms used in this study retain a temporal complexity of $O(I^3)$, although interval sampling improves the calculation efficiency. The gradient calculation operates on neighboring points but must still cover the entire image, for a temporal complexity of $O(I^2)$. The fusion algorithm processes the entire energy map ($I \times I$), so its temporal complexity is $O(I^2)$. The SBVA algorithm [34] uses image brightness, color, and directional characteristics; its temporal complexity is $O(I^2)$. The temporal complexity of the proposed algorithm is therefore $O(I^3)$. The image resolutions used in this study range from 204 × 204 to 800 × 600, so the result can be computed in a limited amount of time; larger images require lengthier calculation, which the interval sampling algorithm helps to minimize.

5. Experimental results and analysis

The experiments were conducted in MATLAB on a PC with a 3.7 GHz AMD 860K CPU and 16 GB of RAM. Test images from the IEEE TPAMI test data set (http://ieeexplore.ieee.org/ielx5/34/5530071/5276798/ttp2009990119.zip) were utilized. The average running time per image on this PC was 10 min; these images require lengthy processing due to their complex backgrounds and large resolution.

The image occlusion effects of the SSD algorithm [24] and the proposed algorithm were compared first (Fig. 8). The original images with varying degrees of occlusion are shown in Figs. 8(a) and 8(e). Figures 8(b) and 8(f) show the results of the SSD algorithm [24], which demonstrates good occlusion resistance. Figures 8(c), 8(d), 8(g), and 8(h) show the results of the proposed fusion algorithm (as a flat display and a 3D stereo display, respectively): the detection center still exists, though the result is less concentrated. These results indicate that the proposed fusion algorithm has a certain anti-occlusion ability.


Fig. 8. Comparison of SSD algorithm [24] and proposed fusion algorithm under image occlusion. (a), (e) Original image (small portion occluded, half of image occluded). (b), (f) Results of SSD algorithm. (c), (d), (g), (h) Results of proposed fusion algorithm.


The single and multiple rotation symmetry center detection results of the RSS [23], SSD [24], SBVA + RSS [35], and proposed algorithms were compared next (Fig. 9). The first and second images in the figure contain a single rotation symmetry center; the other two contain multiple rotation symmetry centers. All are test images with obvious symmetry. In images containing multiple rotation symmetry centers, the RSS-based method [23] shows the worst effects (Figs. 9(b-3)–9(b-4)); because it is based on global maximum-probability detection, it reveals only the maximum-probability rotation symmetry center. The rotation symmetry detection algorithm based on the SSD [24] works well (Figs. 9(c-1)–9(c-4)) but also reveals several incorrect symmetry centers because there is no obvious peak effect. The SBVA + RSS algorithm [35] performs better than the above algorithms (Figs. 9(d-1)–9(d-4)) but misses any rotation symmetry center not covered by a region of interest. The proposed algorithm produces better results than all of these (Figs. 9(e-1)–9(e-4)).


Fig. 9. Comparison of various rotation symmetry detection algorithms. (a-*) Original image. (b-*) Results of Lee algorithm [23]. (c-*) Results of Lee algorithm [24]. (d-*) Results of Huang algorithm [35]. (e-*) Results of proposed method.


The same series of algorithms was next tested on simple and complex multi-rotation symmetry center detection and recognition (Fig. 10 and Fig. 11). The Loy algorithm [25] is a global detection algorithm that can result in rotation symmetry center loss (Fig. 10(c)). The Huang algorithm [35] first uses the SBVA to detect the region of interest in the original image (Fig. 10(d)), then computes the RSS symmetry energy map in the region of interest (Fig. 10(e)) to obtain the rotation symmetry centers (Fig. 10(f)); finally, the symmetry attribute is calculated as shown in Fig. 10(g). The Huang algorithm [35] cannot detect all the rotation symmetry centers (Fig. 10(f)) because the SBVA does not cover rotation symmetry centers outside of the region of interest. The Lee algorithm [24] uses RSS and SSD symmetry energy maps over the original image (Figs. 10(h), 10(i)). It produces fairly effective detection results (Figs. 10(j), 10(k)), but the rotation symmetry centers in the RSS and SSD results must be merged manually. The proposed algorithm first detects the rotation symmetry center in the original image through the RSS and SSD symmetry energy maps, then applies an optimization algorithm with the largest symmetry energy and smallest gradient change (surface smoothing) to fuse the maps (Fig. 10(l)); the SBVA is then applied to search for regions of interest in the fused symmetry energy map. The Huang algorithm [35] applies the SBVA to the original image, while the proposed algorithm applies it to a symmetry energy map. The SBVA yields local detection regions, which remedies the rotation symmetry center loss inherent to global calculation over the RSS and SSD symmetry energy maps [24]. The SBVA effectively reveals each region of interest that contains a center of symmetry (Fig. 10(m)), thereby achieving favorable detection results (Fig. 10(o)).


Fig. 10. Comparison of various rotation symmetry center detection and recognition algorithms. (a) Original image. (b) Ground truth (GT). (c) Loy algorithm [25]. (d) ~(g) SBVA results with Huang algorithm [35], RSS maps, symmetry center detection results (red +), and symmetry attribute detection results. (h) ~(k) RSS map, SSD map, symmetry center detection results (red x), symmetry attribute detection results with Lee algorithm [24]. (l) ~(o) Proposed RSS and SSD fusion map, SBVA interest region detection results under fusion map, symmetry center detection results (red +), and symmetry attribute detection results.



Fig. 11. Comparison of various rotation symmetry center detection and recognition algorithms. (a) Original image. (b) Ground truth (GT). (c) Loy algorithm [25]. (d) ~(g) SBVA results with Huang algorithm [35], RSS maps, symmetry center detection results (red +), and symmetry attribute detection results. (h) ~(k) RSS map, SSD map, symmetry center detection results (red x), symmetry attribute detection results with Lee algorithm [24]. (l) ~(o) Proposed RSS and SSD fusion map, SBVA interest region detection results under fusion map, symmetry center detection results (red +), and symmetry attribute detection results.


The various algorithms tested in this study also performed differently on the complex multi-rotation symmetry center detection experiments. The Loy algorithm [25] is a global detection algorithm, so it suffers rotation symmetry center loss (Fig. 11(c)). The Huang algorithm [35] first uses the SBVA to detect the region of interest in the original image (Fig. 11(d)), then calculates the RSS symmetry energy map (Fig. 11(e)) in the region of interest to detect the symmetry centers. The result (Fig. 11(f)) is finally used to calculate the symmetry attribute (Fig. 11(g)), but the algorithm was unable to detect all the rotation symmetry centers in this experiment, as some centers fell outside the SBVA regions of interest (Fig. 11(g)). The Lee algorithm [24] uses the RSS and SSD symmetry energy maps to detect the centers of rotation symmetry in the original image (Figs. 11(h), 11(i)). This yields fairly effective detection results (Figs. 11(j), 11(k)), but they must be merged manually and some rotation symmetry centers are still lost: because the RSS and SSD are global algorithms, certain small local rotation symmetry centers cannot be detected. The proposed algorithm first detects the rotation symmetry center in the original image through the RSS and SSD symmetry energy maps, respectively, then uses the optimization algorithm with the largest symmetry energy and smallest gradient change (surface smoothing) to fuse them (Fig. 11(l)). The SBVA is then applied to search for regions of interest in the fused symmetry energy map; it effectively detects many regions of interest that contain rotation symmetry centers (Fig. 11(m)), yielding favorable results (Fig. 11(o)). The Lee algorithm [24], conversely, requires a manual step to merge the rotation symmetry centers, whereas the proposed algorithm uses the fusion technique to calculate them automatically, achieving a higher level of automated detection.

As shown in Table 1, by using local regions to detect rotation symmetry centers together with the RSS/SSD fusion algorithm, the proposed method reveals abundant potential rotation symmetry centers. The recall rate achieved in this study is higher than in previous studies [24,25,35]. The Lee algorithm [24] computes RSS and SSD symmetry energy maps over the entire image; these maps are representative of global detection algorithms, so the algorithm's precision is low and some potential rotation symmetry centers are lost. The proposed method, based on the saliency-based visual attention regions of interest, searches a limited number of local regions, so its precision rate is higher than that of the Lee algorithm [24]. The number of correct folds obtained in this study is also higher than in previous studies [24,25,35] because narrow symmetrical regions are successfully detected. In short, the proposed algorithm detects more rotation symmetry centers while correctly identifying the fold numbers.
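For reference, precision and recall over detected centers can be scored as in this sketch; the matching tolerance `tol` is an assumption rather than the paper's stated criterion:

```python
def precision_recall(detected, ground_truth, tol=5.0):
    """Score detected symmetry centers against ground truth: a detection
    counts as correct if it lies within `tol` pixels of an unmatched
    ground-truth center."""
    matched = set()
    tp = 0
    for dx, dy in detected:
        for i, (gx, gy) in enumerate(ground_truth):
            if i not in matched and (dx - gx) ** 2 + (dy - gy) ** 2 <= tol ** 2:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```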


Table 1. Experimental results.a

6. Conclusion

In this paper, to solve the problem that a single symmetry energy cannot fully express the symmetry information, a novel two-step method has been proposed: 1) Two independent symmetry energies are extracted from the RSS and SSD maps based on an optical image, and an optimized symmetry-energy-based fusion algorithm is applied to these two energies to achieve a more comprehensive symmetry energy map. 2) In the fused symmetry energy map, a local region detection algorithm accomplishes the detection of multi-scale symmetry targets. Compared with state-of-the-art algorithms, the proposed algorithm detects more multi-scale (skewed, small-scale, and regular) rotation symmetry centers and significantly boosts the accuracy of symmetry property detection, as confirmed by the experimental results.

Funding

National Natural Science Foundation of China (61860206007, U19A2071).

Acknowledgments

The authors would like to thank Lee et al. [24] for the rotation symmetry center test images and to Itti et al. [34] for the source code of the saliency-based visual attention algorithm.

Disclosures

The authors declare no conflicts of interest.

References

1. X. Xu, Q. Huang, Y. Ren, D.-Y. Zhao, and J. Yang, “Sensor fault diagnosis for bridge monitoring system using similarity of symmetric responses,” Smart Struct. Syst. 23(3), 279–293 (2019). [CrossRef]  

2. S. Wentao, H. Yong, G. Cailan, and K. Dingbo, “Spatial characteristics analysis of multi-scale ship target in scanning detection system,” Acta Opt. Sin. 39(7), 0728010 (2019). [CrossRef]  

3. B. Hatipoglu, C. M. Yilmaz, and C. Kose, “A signal-to-image transformation approach for EEG and MEG signal classification,” Signal, Image Video Process., pp. 1–8 (2018).

4. M. A. Zambrello, M. W. Maciejewski, A. D. Schuyler, G. Weatherby, and J. C. Hoch, “Robust and transferable quantification of NMR spectral quality using iROC analysis,” J. Magn. Reson. 285, 37–46 (2017). [CrossRef]

5. J. Yao, “Peak detection method for mass spectrometry and system therefor,” US Patent 9,613,786 (2017).

6. G. Kootstra, A. Nederveen, and B. De Boer, “Paying attention to symmetry,” in British Machine Vision Conference (BMVC2008), (The British Machine Vision Association and Society for Pattern Recognition, 2008), pp. 1115–1125.

7. P. Kovesi, “Symmetry and asymmetry from local phase,” in Tenth Australian joint conference on artificial intelligence, vol. 190 (Citeseer, 1997), pp. 2–4.

8. Y. Liu, H. Hel-Or, and C. S. Kaplan, Computational symmetry in computer vision and computer graphics (Now publishers Inc, 2010).

9. P. Ma, Z. Zhang, X. Zhou, Y. Yun, Y. Liang, and H. Lu, “Feature extraction from resolution perspective for gas chromatography-mass spectrometry datasets,” RSC Adv. 6(115), 113997 (2016). [CrossRef]  

10. K. D. Bemis, A. Harry, L. S. Eberlin, C. R. Ferreira, S. M. van de Ven, P. Mallick, M. Stolowitz, and O. Vitek, “Probabilistic segmentation of mass spectrometry (MS) images helps select important ions and characterize confidence in the resulting segments,” Mol. Cell. Proteomics 15(5), 1761–1772 (2016). [CrossRef]

11. P. Cools, E. Ho, K. Vranckx, P. Schelstraete, B. Wurth, H. Franckx, G. Ieven, L. Van Simaey, S. Verhulst, F. De Baets, and M. Vaneechoutte, “Epidemic Achromobacter xylosoxidans strain among Belgian cystic fibrosis patients and review of literature,” BMC Microbiol. 16(1), 122 (2016). [CrossRef]

12. Y. Lei and K. C. Wong, “Detection and localisation of reflectional and rotational symmetry under weak perspective projection,” Pattern Recognit. 32(2), 167–180 (1999). [CrossRef]  

13. G. Peron, “Metabolomics in natural products research: application to in vivo bioactivity studies involving nutraceuticals,” Dipartimento di Scienze Chimiche (2018).

14. M. Galli, “An easy-to-use software program for the ensemble pixel-by-pixel classification of MALDI-MSI datasets,” Università degli Studi di Milano-Bicocca (2018).

15. Y. C. Hernandez, T. Boskamp, R. Casadonte, L. Hauberg-Lotte, J. Oetjen, D. Lachmund, A. Peter, D. Trede, K. Kriegsmann, M. Kriegsmann, J. Kriegsmann, and P. Maass, “Targeted feature extraction in MALDI mass spectrometry imaging to discriminate proteomic profiles of breast and ovarian cancer,” Proteomics Clin. Appl., p. 1700168 (2018).

16. S. Ren, A. A. Hinzman, E. L. Kang, R. D. Szczesniak, and L. J. Lu, “Computational and statistical analysis of metabolomics data,” Metabolomics 11(6), 1492–1513 (2015). [CrossRef]  

17. B. J. White and D. P. Munoz, “Neural mechanisms of saliency, attention, and orienting,” in Computational and Cognitive Neuroscience of Vision, (Springer, 2017), pp. 1–23.

18. R. Arya, N. Singh, and R. Agrawal, “A novel combination of second-order statistical features and segmentation using multi-layer superpixels for salient object detection,” Appl. Intell. 46(2), 254–271 (2017). [CrossRef]  

19. H. Pashler, Attention (Psychology University, 2016).

20. K. Gupta and A. P. Chattopadhyay, “DFT studies of small rare-gas clusters,” PARIPEX-Indian J. Res. 4(10), 4 (2015). [CrossRef]

21. P. L. Hill, “Post-processing method for determining peaks in noisy strain gauge data with a low sampling frequency,” Ph.D. thesis, Virginia Tech (2017).

22. A. El ouaazizi, A. Nasri, and R. Benslimane, “A rotation symmetry group detection technique for the characterization of islamic rosette patterns,” Pattern Recognit. Lett. 68, 111–117 (2015). [CrossRef]  

23. S. Lee, R. T. Collins, and Y. Liu, “Rotation symmetry group detection via frequency analysis of frieze-expansions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008) (IEEE, 2008), pp. 1–8.

24. S. Lee and Y. Liu, “Skewed rotation symmetry group detection,” IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1659–1672 (2010). [CrossRef]

25. G. Loy and J.-O. Eklundh, “Detecting symmetry and symmetric constellations of features,” in European Conference on Computer Vision, (Springer, 2006), pp. 508–521.

26. H. Cornelius and G. Loy, “Detecting rotational symmetry under affine projection,” in 18th International Conference on Pattern Recognition (ICPR’06), vol. 2 (IEEE, 2006), pp. 292–295.

27. T. O’Haver, A Pragmatic Introduction to Signal Processing (Lulu. com, 2016).

28. J. Lu, M. J. Trnka, S.-H. Roh, P. J. Robinson, C. Shiau, D. G. Fujimori, W. Chiu, A. L. Burlingame, and S. Guan, “Improved peak detection and deconvolution of native electrospray mass spectra from large protein complexes,” J. Am. Soc. Mass Spectrom. 26(12), 2141–2151 (2015). [CrossRef]

29. I. R. Atadjanov and S. Lee, “Robustness of reflection symmetry detection methods on visual stresses in human perception perspective,” IEEE Access 6, 63712–63725 (2018). [CrossRef]  

30. Z. He and H. He, “Unsupervised multi-object detection for video surveillance using memory-based recurrent attention networks,” Symmetry 10(9), 375 (2018). [CrossRef]  

31. C. Bartalucci, R. Furferi, L. Governi, and Y. Volpe, “A survey of methods for symmetry detection on 3d high point density models in biomedicine,” Symmetry 10(7), 263 (2018). [CrossRef]  

32. R. Furferi, L. Governi, F. Uccheddu, and Y. Volpe, “A rgb-d based instant body-scanning solution for compact box installation,” in Advances on Mechanics, Design Engineering and Manufacturing, (Springer, 2017), pp. 819–828.

33. G. Pan, D. Sun, Y. Chen, and C. Zhang, “Multiresolution rotational symmetry detection via radius-based frieze-expansion,” J. Electr. Comput. Eng. 2016, 1–8 (2016). [CrossRef]  

34. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998).

35. R. Huang, Y. Liu, Z. Xu, P. Wu, and Y. Shi, “Multiple rotation symmetry group detection via saliency-based visual attention and frieze expansion pattern,” Signal Process. Image Commun. 60, 91–99 (2018). [CrossRef]  

36. R. C. Lyndon and P. E. Schupp, Combinatorial group theory (Springer, 2015).

37. M. Hamermesh, Group theory and its application to physical problems (Courier Corporation, 2012).

38. M. Zhongqi, Group Theory in Physics (Science and Technology University, 2006).

39. R. Huang, Y. Liu, X. Shi, Y. Zheng, Y. Wang, and B. Zhai, “A mathematical analysis method of the relationship between DFT magnitude and periodic feature of a signal,” Sens. Imaging 20(1), 8 (2019). [CrossRef]



Figures (11)

Fig. 1. Core theory of RSS algorithm based on FEP [23]. (a) Original image (red point is rotation symmetry center). (b) FEP pattern. (c) Row of DFT magnitude results for (b) red dotted line.
Fig. 2. Core theory of SSD algorithm based on FEP [24]. (a) Original image with red points. (b) Two lines through (a) red points (slope based on phase value). (c) Red region representing all calculated points. (d) Lines through all points.
Fig. 3. Flowchart of RSS and SSD fusion algorithm. (a) Input image. (b) RSS map. (c) SSD map. (d) RSS and SSD fusion map. (e) Attended location. (f) Symmetry center region. (g) Location map. (h) Cartesian space. (i) Polar-transformed space. (j) Symmetry region. (k) Rotation symmetry order, type, regions.
Fig. 4. Comparison of various fusion algorithms with non-interval sampling calculations. (a) Original image. (b) Red region where RSS and SSD calculations are performed. (c) RSS result. (d) SSD result. (e) Fusion result of RSS direct addition to SSD. (f) Fusion result of RSS multiplied by SSD. (g) RSS gradient result. (h) SSD gradient result. (i) Numerical result after detecting RSS gradient. (j) Fusion result of proposed method.
Fig. 5. Comparison of various fusion algorithms with interval sampling calculation. (a) Original image. (b) Red region where RSS and SSD calculations are performed. (c) RSS result. (d) SSD result. (e) Fusion result of RSS direct addition to SSD. (f) Fusion result of RSS multiplication by SSD. (g) RSS gradient result. (h) SSD gradient result. (i) Numerical result after detecting RSS gradient. (j) Fusion result of proposed method.


Equations (13)


$$F(x,y,\lambda)=\left\{\begin{array}{ll} F_{RSS1}(x,y)+F_{SSD1}(x,y), & \left\|\nabla F_{RSS1}(x,y)\right\|_1>\lambda,\ \left\|\nabla F_{SSD1}(x,y)\right\|_1>\lambda,\\ \min\left(F_{RSS1}(x,y),F_{SSD1}(x,y)\right), & \textrm{other}. \end{array}\right.\tag{1}$$
$$\left\{\begin{array}{l} \lambda_s=\left\{\arg\max_{\lambda}\sum_{(x,y)\in S}F(x,y,\lambda)\ \middle|\ \lambda=1,2,\ldots,\max_{(x,y)\in S}\left(\left\|\nabla F_{RSS1}(x,y)\right\|_1,\left\|\nabla F_{SSD1}(x,y)\right\|_1\right)\right\}\\ \lambda=\min\left\{\arg\min_{\lambda\in\lambda_s}\sum_{(x,y)\in S}\left\|\nabla F(x,y,\lambda)\right\|_1\right\} \end{array}\right.\tag{2}$$
$$\left\{\begin{array}{l} F_{RSS1}(x,y)=\dfrac{\max_{(x,y)\in S}\left(F_{RSS}(x,y),F_{SSD}(x,y)\right)}{\max_{(x,y)\in S}\left(F_{RSS}(x,y)\right)}\,F_{RSS}(x,y)\\ F_{SSD1}(x,y)=\dfrac{\max_{(x,y)\in S}\left(F_{RSS}(x,y),F_{SSD}(x,y)\right)}{\max_{(x,y)\in S}\left(F_{SSD}(x,y)\right)}\,F_{SSD}(x,y) \end{array}\right.\tag{3}$$
$$\left\{\begin{array}{l} \nabla F(x,y)=\left[F_x(x,y)\ \ F_y(x,y)\right]^T\\ F_x(x,y)=F(x+1,y)-F(x-1,y)\\ F_y(x,y)=F(x,y+1)-F(x,y-1) \end{array}\right.\tag{4}$$
$$F_{RSS}(x,y)=\sum_{r=5\delta}\rho_r\,\frac{\mathrm{mean}\left(S_{x,y}(r,k_{peak}(r))\right)}{\mathrm{mean}\left(S_{x,y}(r,k)\right)},\ \delta=0,1,2,\ldots,(R-1)/5,\quad \mathrm{s.t.}\ \rho_r=\left\{\begin{array}{ll}1, & \mathrm{if}\ \mathrm{Mod}\left(k_{peak}(r),\min(k_{peak}(r))\right)=0,\\ 0, & \textrm{other}.\end{array}\right.\tag{5}$$
$$S_{x,y}(r,k)=\left|\sum_{n=5\delta}f_{x,y}(r,n)\,e^{-i\frac{2\pi}{K}nk}\right|^2,\ \delta=0,1,2,\ldots,(K-1)/5\tag{6}$$
$$S_{x,y}(r,k_{peak}(r))\ge \mathrm{mean}\left\{S_{x,y}(r,k)\,|\,k=1,2,\ldots,(K-1)/5\right\}+\beta\,\mathrm{std}\left\{S_{x,y}(r,k)\,|\,k=1,2,\ldots,(K-1)/5\right\}\tag{7}$$
$$P_{x_i,y_i}(r,k)=\sum_{n=5\delta}p_{x_i,y_i}(r,n)\,e^{-i\frac{2\pi}{K}nk},\ \delta=0,1,2,\ldots,(K-1)/5\tag{8}$$
$$\left\{\begin{array}{l} \phi_i(r)=\arctan\left(\dfrac{\mathrm{Re}\left(P_{x_i,y_i}(r,1)\right)}{\mathrm{Im}\left(P_{x_i,y_i}(r,1)\right)}\right)\\ \Phi_i=\mathrm{median}\left(\phi_i(r)\right) \end{array}\right.\tag{9}$$
$$\frac{\tan\Phi_i}{x_i+y_i\tan\Phi_i}\,y+\frac{1}{x_i+y_i\tan\Phi_i}\,x=1\tag{10}$$
$$C=\left(\begin{array}{c}x\\ y\end{array}\right)=\left(\begin{array}{c}\dfrac{s_ix_i-s_jx_j+y_j-y_i}{s_i-s_j}\\[2mm] \dfrac{s_is_j(x_i-x_j)+s_jy_j-s_iy_i}{s_i-s_j}\end{array}\right)\tag{11}$$
$$F_{SSD}(x,y)=\sum_{(x_i,y_i)\in S,\,(x_j,y_j)\in S}\rho_{ij},\quad \rho_{ij}=\left\{\begin{array}{ll}1, & \left(\begin{array}{c}x\\ y\end{array}\right)=\left(\begin{array}{c}\dfrac{s_ix_i-s_jx_j+y_j-y_i}{s_i-s_j}\\[2mm] \dfrac{s_is_j(x_i-x_j)+s_jy_j-s_iy_i}{s_i-s_j}\end{array}\right),\\ 0, & \textrm{other}.\end{array}\right.\tag{12}$$
$$F_{RSS}(x,y)=\left\{\begin{array}{ll} \max_{(x,y)\in S}\left(F_{RSS}(x,y)\right)-F_{RSS}(x,y), & \left(\sum_{(x,y)\in S}\left(\left(\left[\begin{array}{cc} 1 & 1 \end{array}\right]\cdot\nabla F_{RSS}(x,y)\right)<\delta\right)\right)>\alpha,\\ F_{RSS}(x,y), & \textrm{other}.\end{array}\right.\tag{13}$$