
The matrix-free formulation of DBIM can provide solutions to very large problems when the solvers are parallelized on large supercomputers. The inventory of iterative solvers for nonlinear optimization includes steepest-descent, conjugate-gradient, and Newton-type solvers. These solvers require matrix-vector multiplications but not the matrices themselves, so MLFMA can be employed for on-the-fly matrix multiplications. Each of these solvers follows a different path in the object space toward the minimum of the cost functional (1) closest to the initial guess. This communication discusses some performance considerations and strategies for the iterative solutions. For example, regularization of the Newton-type solvers is required because the Fréchet derivative to be inverted is ill-conditioned in practice. Over-regularization may slow down the convergence rate drastically, whereas under-regularization may yield unstable iterations that fail to converge. Another consideration is the convergence rate and per-iteration cost of the iterative solvers. A conjugate-gradient solver has a low cost and is easy to implement, but it converges more slowly than a Newton-type solver. One significant burden in each iteration is the line search along the step direction. We propose an analytical way to perform this one-dimensional search that provides stable iterations. Last but not least, DBIM may break down when the scatterer has a high contrast. In this case the iterative solution converges to a local minimum in the vicinity of the initial guess. A good initial guess preconditions the problem and yields convergence to the desired global minimum. These problematic cases and respective practical solutions will be demonstrated with numerical examples.
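The regularization trade-off described above can be illustrated with a minimal Tikhonov-regularized Gauss-Newton step. This is only a sketch under simplifying assumptions: `J` stands in for a discretized Fréchet derivative, `r` for the data residual, and the parameter `alpha` and the toy singular values are hypothetical; the communication does not specify a particular regularization or parameter-selection rule.

```python
import numpy as np

def regularized_gn_step(J, r, alpha):
    """One Tikhonov-regularized Gauss-Newton update:
    solve (J^H J + alpha I) dx = J^H r for the contrast update dx."""
    JhJ = J.conj().T @ J
    rhs = J.conj().T @ r
    n = JhJ.shape[0]
    return np.linalg.solve(JhJ + alpha * np.eye(n), rhs)

# Toy ill-conditioned operator: the tiny second singular value mimics the
# ill-conditioned Frechet derivative. With almost no regularization the
# update blows up; a moderate alpha keeps it bounded.
J = np.diag([1.0, 1e-6])
r = np.array([1.0, 1.0])
dx_under = regularized_gn_step(J, r, 1e-16)  # nearly unregularized step
dx_ok = regularized_gn_step(J, r, 1e-6)      # moderately regularized step
```

The extremes of the trade-off appear directly: the near-unregularized step is enormous along the weak singular direction (unstable iterations), while a very large `alpha` would shrink all components toward zero (slow convergence).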
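For a linearized least-squares cost, the one-dimensional line search indeed admits a closed form, which may be what makes an analytical search attractive here. The sketch below is illustrative only: it uses a generic cost F(x) = 0.5||Ax - d||^2 with a dense `A`, whereas in DBIM the forward operator would be applied matrix-free via MLFMA, and the actual cost functional (1) is not reproduced here.

```python
import numpy as np

def exact_step(A, d, x, p):
    """Closed-form minimizer of F(x + alpha*p) for F(x) = 0.5||Ax - d||^2:
    alpha = <r, Ap> / ||Ap||^2, with residual r = d - Ax."""
    r = d - A @ x    # current data residual
    Ap = A @ p       # forward operator applied to the search direction
    return float(r @ Ap) / float(Ap @ Ap)

# Toy example: one exact steepest-descent step on a 2x2 problem.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
d = np.array([2.0, 1.0])
x = np.zeros(2)
p = -(A.T @ (A @ x - d))   # steepest-descent direction: negative gradient
alpha = exact_step(A, d, x, p)
x_new = x + alpha * p
```

Because the step is the exact minimizer along `p`, the new residual is orthogonal to `Ap`, so each iteration is guaranteed not to increase the cost; only one extra operator application (`A @ p`) is needed per search.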
