The LSMS method has been implemented on a variety of distributed memory message passing platforms, including CRAY T3Es and workstation clusters, using either PVM or MPI as the communication tool. Its ideal linear O[N] scaling has been observed for system sizes up to 1024 atoms. This is the first time that O[N] scaling of a first principles method has been observed on a massively parallel computer for systems of this size. Continued O[N] scaling is expected for even larger systems as more powerful MPPs become available or multiple parallel machines are coupled together.
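Linear scaling of this kind is typically verified by fitting wall-clock time against system size on a log-log scale, where a slope near 1 indicates O[N] behavior. The sketch below illustrates that procedure; the atom counts and per-iteration timings are synthetic placeholders, not measured LSMS data.

```python
# Sketch: checking for O[N] (linear) scaling from wall-clock timings.
# The (atom count, seconds) pairs are synthetic, for illustration only;
# they are NOT measured LSMS performance data.
import math

runs = [(128, 10.1), (256, 20.3), (512, 40.2), (1024, 80.9)]

# Least-squares fit of log(t) = a*log(N) + b; a ~ 1 indicates O[N] scaling.
xs = [math.log(n) for n, _ in runs]
ys = [math.log(t) for _, t in runs]
count = len(runs)
mx, my = sum(xs) / count, sum(ys) / count
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
print(f"scaling exponent ~ {slope:.2f}")
```

With timings that double as the atom count doubles, the fitted exponent comes out close to 1.0, the signature of linear scaling.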
The LSMS method has been applied to very large cell (256&lt;N&lt;1024) simulations of disordered alloys, bulk amorphous metals, and magnetic inhomogeneities in disordered alloys. It has been used to understand the nature of charge correlation and magnetic moment correlation in random alloys. In the former case, a new relationship has been discovered between charge transfer and the Madelung contribution to the total energy of random alloys, clarifying an area of recent controversy. Large cell spin-polarized calculations of disordered NiCu alloys have been used to understand the nature of magnetic moment inhomogeneities in these alloys and to provide the first quantitative theory of the magnetic scattering cross-sections measured in neutron scattering experiments.
On Nov. 9, running on a 1,480-processor CRAY T3E system, LSMS achieved a sustained performance of 1.02 teraflops (trillions of floating-point operations per second). For their work on this project, a team of scientists from Oak Ridge National Lab, the National Energy Research Scientific Computing Center, the University of Bristol (UK), and the Pittsburgh Supercomputing Center won the 1998 Gordon Bell Prize, given for the best achievement in high-performance computing.