Alternating Direction Implicit (ADI) methods have long been used to solve both parabolic and elliptic partial differential equations. When good estimates of the eigenvalues of the operator are available, the convergence of these methods can be dramatically accelerated.
On parallel computers, however, the solution of the tridiagonal systems can impose unreasonable overhead. We discuss methods to lower this overhead. The proposed method has the same convergence properties as a standard ADI method, but all of the solves run in approximately the same time as the "fast" direction. It therefore acts as a "transpose-free" method while retaining the smoothing properties of ADI.
Algorithms are derived and convergence theory is provided. Numerical examples on serial machines, parallel machines, and clusters of processors show the speedup attainable with the new method.
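To make concrete what the per-direction tridiagonal solves look like, the following sketch implements the classical Peaceman-Rachford ADI step for the 2D heat equation on a uniform grid with zero Dirichlet boundaries. This is the standard baseline, not the transpose-free variant proposed here, and all names and parameters are illustrative. Each half step is a batch of independent tridiagonal solves along one grid direction; on a distributed grid, one of the two directions cuts across the data layout, which is the source of the overhead discussed above.

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def d2(u, axis):
    """Second difference along `axis`, assuming zero values outside the grid."""
    v = np.moveaxis(u, axis, 0)
    w = np.empty_like(v)
    w[1:-1] = v[2:] - 2 * v[1:-1] + v[:-2]
    w[0] = v[1] - 2 * v[0]
    w[-1] = v[-2] - 2 * v[-1]
    return np.moveaxis(w, 0, axis)

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy with r = dt/h^2.
    Implicit in x, then implicit in y; each half step is a batch of
    independent tridiagonal solves along one direction."""
    n, m = u.shape
    # Half step 1: (I - (r/2) d_xx) u_half = (I + (r/2) d_yy) u
    rhs = u + 0.5 * r * d2(u, axis=1)
    a = np.full(n, -0.5 * r); b = np.full(n, 1.0 + r); c = np.full(n, -0.5 * r)
    half = np.empty_like(u)
    for j in range(m):                 # one tridiagonal solve per column
        half[:, j] = thomas(a, b, c, rhs[:, j])
    # Half step 2: (I - (r/2) d_yy) u_new = (I + (r/2) d_xx) u_half
    rhs = half + 0.5 * r * d2(half, axis=0)
    a = np.full(m, -0.5 * r); b = np.full(m, 1.0 + r); c = np.full(m, -0.5 * r)
    out = np.empty_like(u)
    for i in range(n):                 # one tridiagonal solve per row
        out[i, :] = thomas(a, b, c, rhs[i, :])
    return out
```

If the grid were partitioned among processes by columns, the column solves above would run entirely locally (the "fast" direction), while the row solves would require either transposing the data or running a distributed tridiagonal solver; avoiding that cost without degrading convergence is the goal of the proposed method.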