We introduce an optical method based on white light interferometry in order to solve the well-known NP–complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non–polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of photons available for solving the problem. It will turn out that this number of photons is proportional to N^N for a traveling salesman problem with N cities, so that for large numbers of cities the method is in practice limited by the signal–to–noise ratio. The proposed method is meant purely as a gedankenexperiment.
© 2007 Optical Society of America
Corrections: Tobias Haist and Wolfgang Osten, "An optical solution for the traveling salesman problem: erratum," Opt. Express 15, 12627 (2007)
The traveling salesman problem (TSP) is one of the most important unsolved problems in computer science. Apart from practical applications like transportation, plotting, chip fabrication, record balancing or routing (see e.g. Ref.  for an overview), its importance stems from the fact that it belongs to the class of so-called NP–complete problems. "NP–complete" more or less means that for such problems the runtime for finding the optimum solution increases faster than polynomially with the problem size N. It can be shown that once a fast solution to one NP–complete problem is found, it is possible to map that result (in polynomial time) to any other NP–complete problem. Therefore fast optimum solutions to the TSP are of considerable importance for all NP–complete problems.
Figure 1 depicts the TSP: A salesman wants to visit a set of N cities. The task is to find the shortest tour through all the cities with the additional constraint of visiting each city exactly once. In the classical TSP the trip should end at the same city where it started, i.e. we have a round trip. It is easy to see that for N cities there are (N−1)!/2 possible tours. So even for a moderate number of cities one would have to check an astronomical number of possible tours (e.g. 30 cities lead to 4.4×10^30 tours) in order to find the shortest one. For 60 cities we would have more tours than there are atoms in the universe.
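The combinatorics can be checked directly; a small Python sketch (exact integer arithmetic):

```python
from math import factorial

def num_tours(n: int) -> int:
    """Number of distinct round trips through n cities: fixing the start
    city leaves (n-1)! orderings, and dividing by 2 merges the two
    traversal directions of each tour."""
    return factorial(n - 1) // 2

print(num_tours(30))  # 29!/2, roughly 4.4e30
```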
It has been proven that no polynomial-time algorithm can guarantee the optimum solution of a TSP of a given size (assuming the widely believed conjecture P≠NP). So the global optimum for large problems in general cannot be found. But a lot of methods, e.g. based on local optimization (e.g. 3-opt) or global optimization and heuristics (e.g. simulated annealing, neural networks, tabu search, the Lin–Kernighan algorithm, the greedy algorithm), have been used in the past to find good approximate solutions. Even for large numbers of cities (N>100) very good solutions can be found today in reasonable time using conventional computers. Ref.  gives a good overview. Polynomial-time algorithms are available if one is satisfied with obtaining good solutions, where one can even set upper bounds for the deviation from global optimality. For large practical problem sizes, solutions which differ only by about 1–2% from optimality can typically be found because, compared to other NP–complete problems, the TSP is rather well-behaved for real-life examples. Unconventional strategies like quantum computers, DNA computing, or optical hardware have also been proposed in recent years. Shaked et al. recently proposed an all-optical method for solving the TSP. Their approach does not reduce the complexity of the problem but rather introduces a fast method for solving the TSP by a parallel matrix–vector multiplication based on optical correlation.
Despite all these advances, finding the global solution for pathologically posed TSPs with large N is still not possible by conventional computing. In this paper we will propose the first, to our knowledge, method that finds the global optimum of the TSP by reducing the NP complexity of the problem to N^2 complexity. We achieve this by using white light interferometry (WLI). From the mathematical point of view we unfortunately have not disproved the commonly believed conjecture that NP≠P (the Clay Mathematics Institute set out a prize of 1,000,000 US-$ for doing that), but we apparently reduced the complexity by a trick, namely replacing operations by photons (and the physics of interference).
In section 2 we introduce the method as a gedankenexperiment. After that we explain in section 3 in more detail how we avoid wrong interferences, and in section 4 we show that the practical implementation will be unrealistic for large problem sizes due to the low signal-to-noise ratio. Finally we will state our opinion on the question of whether the proposed method should be regarded as a quantum computer.
2. Basic idea
The proposed system (as shown in Fig. 2) is mainly a white light interferometer with a complicated signal path. A wave-packet with limited temporal extension (due to the short coherence length) entering the signal tour will travel through the network of N connected cities and finally might exit at the final (=initial) city. If the optical path length through the network corresponds to the reference path length, interference at the detector will be observed.
The cities are connected by input and output fibers with a length corresponding to their distance in reality. We denote the fiber length between city i and city j by dij. It should be noted that by using fibers we can implement an arbitrary distance metric as well as asymmetric connections between the cities (so called asymmetric TSP). The input signals from other cities to the exit(=initial) city are connected directly to the white light interferometer.
Figure 3 shows the structure of one city. It has N input and N output channels. A photon entering one of the input channels will travel through a fiber optical delay line before entering a fan-out element that outputs the photon to one of the output channels. In the quantum mechanical particle model the choice of the output port is purely random. In the wave-optical model we of course have a splitting of the wave into the N output channels. Different methods (e.g. Y-couplers) can be used for a practical implementation, and by using directional couplers we can prevent light from leaking back into the input channels. To make the cities distinguishable we use additional delay lines of different length within the cities.
The overall idea for globally solving the TSP is to perform the following four steps:
1. Setup of the scenario.
2. Computation of a lower bound for the overall optical path length for a correct solution.
3. Increasing the reference path length of the WLI, starting from the lower bound while checking for correct interference to obtain the global minimum of the tour length.
4. Finding the corresponding tour.
The most critical step is to check for the correct interferences (step 3). To this end a careful choice of the delays within the cities is mandatory. For the TSP we chose (other choices are possible) for city number i the delay line length

D_i = 2^i α + d,    (1)

with i = 0, 1, …, N−1 and with a small d in the range of tens of microns. The reason for this choice will become clear in section 3. The constant α is chosen such that it is considerably larger than N times the largest distance max(d_ij) between two cities.
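The effect of the binary weighting 2^i α can be illustrated numerically. The sketch below (hypothetical toy values for α, d and N; inter-city fibers are omitted since they are all much shorter than α and cannot bridge a delay mismatch, cf. section 3) checks by brute force that only the visit-count vector "every city exactly once" reproduces the summed city delay:

```python
from itertools import product

ALPHA = 1_000_000  # delay unit in µm (hypothetical; must exceed N * max inter-city distance)
D = 10             # small per-city offset d in µm (hypothetical)

def total_city_delay(visits):
    """Summed delay when city i is visited visits[i] times, with D_i = 2**i * ALPHA + D."""
    return sum(v * (2**i * ALPHA + D) for i, v in enumerate(visits))

N = 4
target = ALPHA * (2**N - 1) + N * D   # delay of a correct tour (each city exactly once)

# Enumerate every way of visiting each of the N cities 0..3 times:
# only the vector (1, 1, 1, 1) matches the target delay.
matches = [v for v in product(range(4), repeat=N) if total_city_delay(v) == target]
print(matches)  # [(1, 1, 1, 1)]
```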
1) Setup of the scenario
The main work for setting up the experiment is to connect the cities. With N cities this setup time will be proportional to

N(N−1)/2,

which corresponds to a complexity of N^2.
2) Computation of a lower bound for the overall optical path length for a correct solution
We first connect the white light source and the interferometer to one of the cities (Fig. 2(a)). For searching the global solution it does not matter which city we choose as the starting city of the trip because the trip is in any case a round trip.
According to Eq. 1, visiting all the cities means that the overall path length L is at least

L ≥ α(2^N − 1) + N d + P,

with P being the overall path length traveled between the cities,

P = Σ_{k=1}^{N} d_{c_k c_{k+1}},  with c_{N+1} = c_1,

where c_1, c_2, …, c_N denotes the order in which the cities are visited. P is the function to be minimized by the TSP.
We start by setting the reference path length of the WLI to α(2^N − 1) + N d, which is the sum of all delay lines in the cities.
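As a toy model of steps 2 and 3, the sketch below (hypothetical values; a brute-force search over tours replaces the physical scan of the reference arm) computes the starting reference length and the position of the first "interference", i.e. the length of the shortest round trip:

```python
import itertools
import random

random.seed(1)
N, ALPHA, D = 5, 10_000_000, 10  # µm; hypothetical values with ALPHA >> N * max(d_ij)

# Symmetric random distance matrix; all fiber lengths are multiples of 1 mm (in µm).
dist = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        dist[i][j] = dist[j][i] = random.randrange(1000, 100_000, 1000)

start = ALPHA * (2**N - 1) + N * D  # reference arm starts at the summed city delays

def tour_length(perm):
    """Total optical path length of the wave packet for one round trip."""
    p = sum(dist[perm[k]][perm[(k + 1) % N]] for k in range(N))
    return start + p

# The first reference position showing interference corresponds to the shortest tour.
best = min(tour_length((0,) + p) for p in itertools.permutations(range(1, N)))
print(best - start)  # P of the shortest round trip, a multiple of 1 mm
```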
3) Increasing the reference path length of the WLI, starting from the lower bound while checking for correct interference to obtain the global minimum of the tour length.
Now we continue to increase the reference path length until we detect interference. The first interference signal will occur when the shortest tour is approached. This detection of interference is a little more complicated than in a conventional white light interferometer because we have to check for additional interferences due to wrong tours within our network of cities, e.g. if not all cities were visited exactly once. We will describe the interference detection in detail in section 3.
Once we have found the first position of interference we have completed step 3, and the path length of the reference arm minus α(2^N − 1) + N d exactly equals the minimum path length that the salesman has to travel during his trip.
It should be pointed out that this was the hard part of the problem and itself corresponds to an NP–complete problem. We continue by also finding the correct tour through the network of cities.
4) Finding the corresponding tour
We now connect the interferometer input, one after another, to all of the possible N − 1 remaining cities (see Fig. 2(b)). For city i we have to reduce the length of the reference arm by

Δ_i = D_i + d_{ij} = 2^i α + d + d_{ij},
with j being the index of the exit city. Of course we will also disconnect the input connections to the final city.
If we still have a correct interference pattern (see section 3), city i is the correct last but one city. We now regard this city as the exit city and continue the same way, now with N-2 remaining cities.
Therefore we have to do fewer than

(N−1) + (N−2) + … + 1 = N(N−1)/2 < N^2

tests to completely find the optimum tour for the salesman. The complexity is the same as for setting up the experiment and proportional to N^2. Therefore we have reduced the NP–complete problem to a problem with cost proportional to N^2. For the example at the beginning with N=30 we have to perform 900 comparisons, which is negligible compared to the original cost proportional to 10^30.
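Step 4 can be mimicked in software. In the following sketch (hypothetical 5-city distance matrix) the "interference oracle" is replaced by a brute-force feasibility check; the peeling loop itself issues at most N(N−1)/2 oracle calls, matching the quadratic count above:

```python
import itertools

def seq_len(dist, seq):
    """Path length of a city sequence (sum over consecutive hops)."""
    return sum(dist[seq[k]][seq[k + 1]] for k in range(len(seq) - 1))

def solve_tsp_by_peeling(dist):
    """Recover an optimal round trip the way step 4 does: fix the exit city
    (here city 0), then repeatedly identify the last-but-one city by asking
    an 'interference oracle' whether an optimal tour with the current tail
    still exists. The oracle is brute force here; in the proposed setup the
    interferometer would answer it physically."""
    n = len(dist)
    best = min(seq_len(dist, (0,) + p + (0,))
               for p in itertools.permutations(range(1, n)))

    suffix = (0,)                 # known tail of the tour, ending at the exit city
    remaining = set(range(1, n))
    while remaining:
        for i in sorted(remaining):
            rest = remaining - {i}
            # Oracle call: does some ordering of the other cities complete
            # an optimal tour running ... -> i -> suffix?
            if any(seq_len(dist, (0,) + p + (i,) + suffix) == best
                   for p in itertools.permutations(rest)):
                suffix, remaining = (i,) + suffix, rest
                break
    return (0,) + suffix, best

dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
tour, best = solve_tsp_by_peeling(dist)
print(tour, best)
```

With the optical oracle each call is a single interference measurement, so only the quadratic number of calls remains.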
3. Filtering out the correct interference
Apart from the wanted interference of the reference wave packet with the wave packet that has run through all the cities exactly once along the optimum tour, we unfortunately might have a lot of other interferences at the detector. In the following we will analyze these spurious interferences and explain how to detect the optimum tour.
We denote the reference wave packet by R and the wave packet that corresponds to the correct solution by C. We have to keep in mind that there might be multiple global solutions C_i present at the same time (see also below); therefore we have to use Σ_{i=1}^{#C} C_i for the analysis of the interference. #C denotes the number of global solutions.
We also might have interference of the reference and the C_i with wave packets that have the same overall path length L but might not have traveled through all the cities exactly once. We denote these wave packets by WC_j and the number of these wave packets by #W.
Additionally we might have a lot of interference due to the superposition of wave packets which have the same overall path length among each other, where this path length differs from the reference path length L. If we have #K different such path lengths, each with #H_k wave packets, we might denote this term by Σ_{k=1}^{#K} Σ_{h=1}^{#H_k} W_{kh}.
The overall intensity at the detector therefore will be given by

I_1 = |R + Σ_{i=1}^{#C} C_i + Σ_{j=1}^{#W} WC_j|^2 + Σ_{k=1}^{#K} |Σ_{h=1}^{#H_k} W_{kh}|^2 + I_B.

I_B is the (very strong) background intensity due to the light not interfering with the reference. One might use heterodyne detection in order to get rid of this background signal, but we will look more closely at the signal–to–noise problem, which sets the practical limit for the method, in section 4.
We eliminate the last double sum for example by making a second measurement with the reference blocked:

I_2 = |Σ_{i=1}^{#C} C_i + Σ_{j=1}^{#W} WC_j|^2 + Σ_{k=1}^{#K} |Σ_{h=1}^{#H_k} W_{kh}|^2 + I_B.

The difference I_1 − I_2 therefore is

I_1 − I_2 = |R|^2 + 2 Re{R* (Σ_{i=1}^{#C} C_i + Σ_{j=1}^{#W} WC_j)}.    (11)
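A small numerical sketch (arbitrary toy amplitudes) illustrates why the second measurement removes the mutual interference of the wrong-length packets: they contribute the same term to I_1 and I_2 and therefore cancel in the difference, leaving only |R|^2 plus the cross term with the reference:

```python
import cmath
import random

random.seed(0)
R = 2.0                              # reference amplitude (toy value)
C = 0.02 * cmath.exp(1j * 0.3)       # amplitude of the matching wave packet (toy value)

# Nuisance packets of equal length among themselves (but not matching the
# reference): they interfere with each other, yet identically in both runs.
noise = [0.5 * cmath.exp(2j * cmath.pi * random.random()) for _ in range(50)]
I_noise = abs(sum(noise)) ** 2       # their mutual-interference contribution

I1 = abs(R + C) ** 2 + I_noise       # reference open
I2 = abs(C) ** 2 + I_noise           # reference blocked
diff = I1 - I2                       # = |R|^2 + 2 Re{R* C}
print(diff, abs(R) ** 2 + 2 * (R * C.conjugate()).real)
```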
In the following it will become clear why we have chosen the delays in the cities according to Eq. 1. We will show that by this choice we can effectively remove the WC_j.
When connecting all the cities by fibers we choose the inter-city fiber lengths d_ij as well as α (see Eq. 1) in multiples of 1 mm. Consequently, P then is also a multiple of 1 mm:

P = G · 1 mm,    (12)

with a positive integer G.
For a practical problem we might choose max(d_ij) = 1 m. That means the resolution of our problem is 1:1000 (e.g. 1 km for a trip with maximum inter-city distances of 1000 km), which is enough to find the best trip for non-artificial problems; of course other choices, with a better resolution, are possible. The computational cost does not depend on the chosen resolution.
d is chosen larger than twice the coherence length of the light source, e.g. d = 10 µm.
We distinguish between three cases in order to prove the non-existence of any WC_j for the chosen delays.

Case A: Is it possible that we have a WC_j with exactly N cities?

No, because that would imply that at least one city of our normal list of the N cities is replaced by another one. But that would mean that the overall delay within the cities changes by at least α. Since α > P we cannot cancel that change by a different way through the cities, because any such compensation would be too weak. Therefore we will not achieve interference with the reference.
Case B: Is it possible that we have a WC_j with fewer than N cities?

Two reasons lead to a contradiction. First, with fewer than N cities we would have to compensate the missing delay of one (or more) cities by visiting a city with a large delay multiple times. This is not possible with the chosen delays D_i.
The second reason uses the term N d within the delay. Due to Eq. 12 we can only visit very special numbers N′ of cities in order to achieve this delay, namely

N′ = N + k · (1 mm / d),    (14)

with integer k.
For the chosen values (d = 10 µm and e.g. N = 10) this leads to N′ = 10, 110, 210, 310, …. In practice only the first possible value of N′, which corresponds to case A, is detectable at all. The ratio between the intensities of the first two possible solutions for N = 10, namely N′ = 10 and N′ = 110, is 10^−10/10^−110 = 10^100, which is definitely far beyond any possibility for detection (compare section 4).
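The admissible city counts N′ follow from a simple congruence, since all lengths except the N′·d contribution are multiples of 1 mm; a quick check with the values above (d = 10 µm, 1 mm quantization, N = 10):

```python
# Which city counts N' reproduce the fractional delay part N*d modulo 1 mm?
# With d = 10 µm, 1 mm = 100 d, so N' must equal N modulo 100.
d_um, mm_um = 10, 1000   # d and the quantization unit, both in µm
N = 10

candidates = [n_prime for n_prime in range(1, 400)
              if (n_prime * d_um) % mm_um == (N * d_um) % mm_um]
print(candidates)  # [10, 110, 210, 310]
```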
Even if one does not like such an argument based on the probability of detection, one could detect such a wrong tour by continuing with step 4 (finding the corresponding tour). In this case the worst-case computational complexity would be increased and more complicated to calculate.
Case C: Is it possible that we have a WC_j with more than N cities?

Here the same argument as in case B shows that such a situation cannot lead to interference with the reference: only numbers N′ of cities according to Eq. 14 are possible. And again the extreme difference in intensities leads to the situation that the probability of detecting even one wrong photon is beyond any practical limit.

We therefore can conclude that there are no WC_j disturbing any measurement if we choose the setup as described above.
One might also check for the very unlikely case that Σ_i C_i = 0 by increasing some (randomly chosen) distances between cities by a very small amount (e.g. λ/5), i.e. by performing phase shifting.
Additional note: Since both directions of a tour are equally valid, we will always have at least two solutions. These two solutions lead to constructive interference due to the equal path length of both ways.
In conclusion we have shown that by detecting a modulation in the difference given by Eq. 11 while scanning (using the conventional axial shift of the white light interferometer) we detect a possible solution of the TSP. If we start scanning from the lower bound α(2^N − 1) + N d, the first solution that we find is the global solution to the TSP.
4. Signal-to-Noise Ratio
For a TSP with N cities we have to use fan-out elements with N connections. That means that the intensity of a wave packet entering the network will be attenuated by a factor of (1/N)^N before being detected by the interferometer.
Since we always have two possible solutions this results in an amplitude of 2(1/N)^{N/2}, and the interference with the reference (which should have a large amplitude; we take here 2) leads to an intensity of

I(φ) = |2 + 2 N^{−N/2} e^{iφ}|^2 = 4 + 4 N^{−N} + 8 N^{−N/2} cos φ,

with a peak-to-valley modulation of 16 N^{−N/2}.
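The modulation formula is easy to verify numerically; a short sketch of the interference signal (reference amplitude 2, signal amplitude 2N^{−N/2} as above):

```python
import cmath

def pv_modulation(n: int) -> float:
    """Peak-to-valley modulation 16 * n**(-n/2) of the interference signal."""
    return 16.0 * n ** (-n / 2)

def intensity(n: int, phi: float) -> float:
    """|2 + 2 n^(-n/2) e^{i phi}|^2: reference amplitude 2 interfering with
    the doubly attenuated signal amplitude 2 n^(-n/2)."""
    a = 2.0 * n ** (-n / 2)
    return abs(2.0 + a * cmath.exp(1j * phi)) ** 2

for n in (5, 10, 16):
    print(n, pv_modulation(n))
```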
In principle photon noise (due to the Poisson distribution for coherent light) will set the lower bound for the number of photons necessary to achieve detection. For the Poisson distribution the standard deviation of the photon number is directly proportional to the square root of the total photon number N_g, that is σ = √N_g.
If we demand that the expected interference signal is at least larger than the standard deviation of the overall photon number (on the detector), we have to claim that

2 √(M N_g) ≥ √N_g,

where we denote the total number of photons coming out of the network by M and the total number of photons (dominated by the reference light) by N_g.
Therefore we can conclude that

M ≥ 1/4.
So in total we will need a number of photons entering the network that is proportional to N^N, with the amplitudes chosen above about

N_ph ≈ N^N / 16

photons per measurement.
For photons with a wavelength of 1 µm the energy per photon is E = hν ≈ 2×10^−19 J. With a 1 W light source we therefore obtain 5×10^18 photons per second and should be able to solve problems with about 16 cities (detector integration 1 s). Using millimeter waves (λ = 1 mm) and waveguides we gain a factor of 1000, but even together with raising the power to 1 kW this leads only to a small improvement of the maximum number of cities, to about 20 (see Fig. 4).
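Under these order-of-magnitude assumptions (photon demand ≈ N^N/16 per measurement; the constant 1/16 follows from the amplitudes chosen above and is not critical) the quoted city limits can be reproduced with a few lines:

```python
def max_cities(photons: float) -> int:
    """Largest N for which the photon demand ~ N**N / 16 stays within the
    given photon budget per measurement (order-of-magnitude estimate)."""
    n = 2
    while (n + 1) ** (n + 1) / 16 <= photons:
        n += 1
    return n

print(max_cities(5e18))        # 1 W at 1 µm wavelength, 1 s integration
print(max_cities(5e18 * 1e6))  # 1 kW and lambda = 1 mm (a factor of 1000 each)
```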
Of course we completely neglected practical issues like the limited signal–to–noise ratio of real detectors as well as the unavoidable loss of photons. In practice sensitivities of 10^11 have been reported (see the detailed discussion in ).
5. Is this a quantum computer?
A single photon running through the network of cities will simultaneously run through all the possible paths. Just like in Young's double slit interference experiment we have the superposition of all possible tours. Only by the detection through interference do we select one of these tours, and the wavefunction collapses. The connection with the wave-optical description that we used in sections 2 and 3 of this article is well known. The amplitudes of the electromagnetic waves correspond to the wavefunction describing the probability of detection.
For the basic principle of the method we nevertheless do not need quanta or particles. Ordinary wave packets (of limited extent because of the short coherence length) together with quadratic detection of the interference are sufficient.
According to the free encyclopedia Wikipedia a quantum computer "is any device for computation that makes direct use of distinctively quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data." Conventionally, in a quantum computer we have the entanglement of qubits, e.g. realized by photons or atoms.
Here it is different. We definitely are using the superposition of all possible solutions in order to find the optimum. But we are not using conventional entanglement. Instead of describing the parameters (in our case the tour sequence) by entangled qubits, we introduced them here by the superposition of different optical path lengths. Therefore we would describe this sort of computing, which can be used not only for the traveling salesman problem but also for other problems, as "wave-optical computation".
We have shown that the complexity of the traveling salesman problem can be dramatically reduced from N! to N^2 by optical means. To this end we employed white light interferometry and a fiber optic model of the network of cities that the salesman should travel through.
The maximum number of cities N (problem size) is fundamentally limited by the number of photons. Problems with N=20 cities can be solved if 1 kW of power is available (1 s integration time per individual measurement). Since for practical (non-pathological) problems very good solutions even to large problem sizes can be found by purely electronic means, our proposed method is not meant to solve real-world traveling salesman problems, but rather as a gedankenexperiment to show how photons and the laws of physics can considerably reduce the computational complexity of difficult mathematical problems.
We want to thank Thomas Schuster, Christof Pruss, and Martin Schönleber for fruitful discussions on this topic.
References and links
1. G. Laporte, A. Asef-Vaziri, and C. Sriskandarajah, “Some applications of the generalized travelling salesman problem,” J. Oper. Res. Soc. 47, 1461–1467 (1996).
2. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, (MIT Press, 2001).
3. C. Voudouris and E. Tsang, “Guided local search and its application to the traveling salesman problem,” Eur. J. Oper. Res. 113, 80–110 (1998).
4. S. Kirkpatrick, “Optimization by simulated annealing: Quantitative studies,” J. Stat. Phys. 34, 975–986 (1984).
5. J. J. Hopfield and D. W. Tank, “‘Neural’ computation of decisions in optimization problems,” Biol. Cybern. 52, 381–384 (1985).
6. F. Glover, E. Taillard, and D. de Werra, “A user’s guide to tabu search,” Ann. Oper. Res. 41, 3–28 (1993).
7. S. Lin and B. W. Kernighan, “An effective heuristic algorithm for the traveling-salesman problem,” Oper. Res. 21, 498–516 (1973).
8. J. Edmonds, “Matroids and the greedy algorithm,” Math. Program. 1, 127–136 (1971).
9. S. Arora, “Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems,” J. ACM 45, 753–782 (1998).
10. D. S. Johnson and L. A. McGeoch, “The traveling salesman problem: a case study in local optimization,” in Local Search in Combinatorial Optimization, E. Aarts and J. K. Lenstra, eds. (Princeton University Press, 2003).
11. T. Hogg and D. Portnov, “Quantum optimization,” Inf. Sci. 128, 181–197 (2000).
12. S. Y. Shin, B. T. Zhang, and S. S. Jun, “Solving traveling salesman problems using molecular programming,” in Proceedings of the Congress on Evolutionary Computation, P. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao, and A. Zalzala, eds. (IEEE Press, 1999), pp. 994–1000.
15. V. A. Kozlov, J. Hernandez-Cordero, and T. F. Morse, “All-fiber coherent beam combining of fiber lasers,” Opt. Lett. 24, 1814–1816 (1999).
16. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics, (Wiley, 1991).
17. A. F. Fercher, W. Drexler, C. K. Hitzenberger, and T. Lasser, “Optical coherence tomography - principles and applications,” Rep. Prog. Phys. 66, 239–303 (2003).