The matrix-free formulation of DBIM can provide solutions to very large problems when combined with parallelization of the solvers on large supercomputers. The inventory of iterative solvers for nonlinear optimization includes steepest-descent, conjugate-gradient, and Newton-type solvers. These solvers require matrix-vector multiplications but do not require the matrices themselves, and therefore MLFMA can be employed for on-the-fly matrix-vector multiplications. Each of these solvers follows a different path in the object space to find the minimum of the cost functional (1) closest to the initial guess.
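As an illustration of the matrix-free idea, the sketch below shows a minimal conjugate-gradient solver in Python that accesses the system only through a `matvec` callback; in the setting above, that callback would be played by the MLFMA-accelerated product, whereas here a small dense matrix stands in for it. This is a generic sketch, not the authors' implementation.

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=500):
    """Solve A x = b by conjugate gradients, given only v -> A v.

    A must be symmetric positive definite; the matrix itself is
    never formed, only applied through `matvec`.
    """
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)  # exact step along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate update
        rs = rs_new
    return x

# Stand-in for the MLFMA product: a small dense SPD system.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M + 5 * np.eye(5)    # symmetric positive definite
b = rng.standard_normal(5)
x = cg(lambda v: A @ v, b)
```

Because only `matvec` is required, the same driver works unchanged whether the product is computed densely, with MLFMA, or in parallel across nodes.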
This communication addresses some performance considerations and strategies for the iterative solutions. For example, regularization of the Newton-type solvers is required because the Fréchet derivative to be inverted is ill-conditioned in practice. Over-regularization may slow down the convergence rate drastically; on the other hand, under-regularization may yield unstable iterations that fail to converge. Another consideration is the convergence rate and per-iteration cost of the iterative solvers. A conjugate-gradient solver has a low per-iteration cost and is easy to implement, but converges more slowly than a Newton-type solver. One significant burden in each iteration is the line search along the step direction. We propose an analytical way to perform this one-dimensional search to provide stable iterations. Last but not least, DBIM may break down when the scatterer has a high contrast. In this case, the iterative solution converges to a local minimum in the vicinity of the initial guess. A good initial guess preconditions the problem and provides convergence to the global minimum, as desired. These problematic cases and respective practical solutions will be demonstrated with numerical examples.
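The one-dimensional search mentioned above admits a closed form when the cost along the step direction is quadratic, as it is for the linearized (Born-step) residual: minimizing F(α) = ½‖b − A(x + αd)‖² over α gives α = (Ad)ᵀ(b − Ax) / ‖Ad‖². The sketch below illustrates this closed-form step for a generic linear least-squares cost; the variable names (`A`, `b`, `x`, `d`) are illustrative stand-ins, not the paper's notation or its proposed method verbatim.

```python
import numpy as np

def exact_step(matvec, x, d, b):
    """Closed-form minimizer of alpha -> 0.5 * ||b - A(x + alpha*d)||^2.

    Setting dF/dalpha = -(Ad)^T (r - alpha*Ad) = 0 with r = b - Ax
    yields alpha = (Ad)^T r / ||Ad||^2.
    """
    r = b - matvec(x)      # current residual
    Ad = matvec(d)         # action of the operator on the step direction
    return (Ad @ r) / (Ad @ Ad)

# Demonstration on a random overdetermined least-squares problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
x = np.zeros(4)
d = A.T @ (b - A @ x)      # steepest-descent direction of the cost
alpha = exact_step(lambda v: A @ v, x, d, b)

def f(a):
    return 0.5 * np.linalg.norm(b - A @ (x + a * d)) ** 2
```

A single extra operator application (`Ad`) replaces an iterative bracketing search, which is the practical appeal when each matrix-vector product is an expensive MLFMA call.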