The simulation of nonlinear wave propagation in the ray regime, i.e., in the limit of geometrical optics, is discussed. The medium involved is nonlinear, which means that the field amplitudes affect the constitutive parameters (e.g., the dielectric constant) involved in the propagation formalism. Conventionally, linear ray propagation is computed using Hamilton’s ray equations, whose terms are derived from the appropriate dispersion equation; such a set of equations is usually solved with one of the variants of the Runge–Kutta algorithm. In the present case of nonlinear propagation, a proper dispersion equation must first be established, from which the rays can be computed. Linear ray tracing with Hamilton’s ray theory allows for the computation of ray trajectories and wave fronts, and the convergence or divergence of rays suggests heuristic methods for computing the variation of amplitudes. Here, terms appearing in the Hamiltonian ray equations involve field amplitudes, which themselves are determined by the convergence (or divergence) of the rays. This dictates the simultaneous computation of a beam comprising many rays, so it is necessary to modify the original Runge–Kutta scheme by building into it an iteration mechanism such that the process converges to values that take the amplitude effect into account. This research attempts to modify the existing propagation formalism and apply the new algorithm to simple problems of nonlinear ray propagation. The results display self-focusing effects characteristic of nonlinear optics problems. The influence of weak losses on the beam propagation and its self-focusing is also discussed. The results obtained by simulating the modified formalism appear physically plausible and are in excellent agreement with experimental results reported in the literature.
© 1998 Optical Society of America
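To make the linear building block of the scheme concrete, the following sketch integrates Hamilton's ray equations with a classical fourth-order Runge–Kutta step for a two-dimensional medium whose refractive index has a Gaussian bump on the axis. This is not the paper's actual code: the index profile stands in for a frozen (already converged) intensity-dependent index of a self-focusing beam, and the parameter values `N0`, `DN`, and `W` are illustrative assumptions. With the Hamiltonian H = (kx² + kz² − n(x)²)/2, the ray equations are dr/dτ = ∂H/∂k and dk/dτ = −∂H/∂r.

```python
import math

# Assumed example parameters (not from the paper):
N0, DN, W = 1.0, 0.1, 1.0  # background index, on-axis index bump, bump width

def n(x):
    """Refractive-index profile n(x) = N0 + DN*exp(-(x/W)^2)."""
    return N0 + DN * math.exp(-(x / W) ** 2)

def dn_dx(x):
    """Derivative of the index profile with respect to x."""
    return DN * (-2.0 * x / W ** 2) * math.exp(-(x / W) ** 2)

def deriv(state):
    """Hamilton's ray equations for H = (kx^2 + kz^2 - n(x)^2)/2:
    dx/dt = kx, dz/dt = kz, dkx/dt = n*dn/dx, dkz/dt = 0."""
    x, z, kx, kz = state
    return (kx, kz, n(x) * dn_dx(x), 0.0)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * d for s, d in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * d for s, d in zip(state, k2)))
    k4 = deriv(tuple(s + dt * d for s, d in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def trace_ray(x0, steps=1000, dt=0.01):
    """Launch a ray parallel to the z axis at height x0; kz is chosen
    from the dispersion equation so that H = 0 on the initial state."""
    state = (x0, 0.0, 0.0, n(x0))
    path = [state]
    for _ in range(steps):
        state = rk4_step(state, dt)
        path.append(state)
    return path

# A ray launched off-axis is pulled toward the high-index axis,
# the geometrical-optics signature of self-focusing.
path = trace_ray(0.5)
```

In the nonlinear scheme described in the abstract, a bundle of such rays would be traced simultaneously and the index profile re-derived from the resulting amplitudes, iterating the Runge–Kutta pass until the rays and the amplitude-dependent index are mutually consistent; the fragment above shows only one linear pass of that loop.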