Implementation of a Scalable Preconditioned Eigenvalue Solver Using Hypre

Merico E. Argentati

Center for Computational Mathematics
University of Colorado at Denver
Campus Box 170, P.O. Box 173364
Denver, Colorado 80217-3364

Andrew V. Knyazev


The goal of this project is to develop a scalable preconditioned eigenvalue solver for the partial solution of eigenvalue problems for large sparse symmetric matrices on massively parallel computers. The work takes advantage of advances in the Scalable Linear Solvers project, in particular in multigrid technology and incomplete factorizations (ILU), developed under the HYPRE project at the Center for Applied Scientific Computing, Lawrence Livermore National Laboratory (LLNL-CASC). The solver makes HYPRE preconditioners available for symmetric eigenvalue problems. In this talk we discuss the implementation of a flexible "matrix-free" parallel algorithm, the capabilities of the developed software, and its performance on a set of test problems.

The base iterative method implemented is the locally optimal block preconditioned conjugate gradient (LOBPCG) method described in: A. V. Knyazev, Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method, SIAM Journal on Scientific Computing 23 (2001), no. 2, pp. 517-541. The LOBPCG solver finds one or more of the smallest eigenvalues and the corresponding eigenvectors of a symmetric matrix.

The code is written in C using MPI and calls the HYPRE and LAPACK libraries. It has been tested with HYPRE version 1.6.0. The user interface to the solver is implemented using HYPRE-style object-oriented function calls. The matrix-vector multiply and the preconditioned solve are performed through user-supplied functions. This approach provides significant flexibility, and the implementation illustrates that the method can successfully and efficiently use parallel libraries.

The following HYPRE preconditioners have been tested in the eigenvalue solver: AMG-PCG, DS-PCG, ParaSails-PCG, Schwarz-PCG, and Euclid-PCG. The partitioning across processors is determined by user input, consisting of an initial array of parallel vectors. The code has been developed and tested mainly on a Beowulf cluster at CU Denver. This system comprises 36 nodes with 2 processors per node, 933 MHz Pentium III processors, and 2 GB of memory per node, running Red Hat Linux with a 7.2 SCI Dolphin interconnect. LOBPCG has also been tested on several LLNL clusters using Compaq and IBM hardware, running Unix and/or Linux.

Keywords: Eigensolvers, parallel preconditioning, sparse matrices, parallel computing, conjugate gradients, additive Schwarz preconditioner.