Jones, R. W. L. (2003) ATLAS computing and the GRID. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 502 (2-3). pp. 372-375.
Abstract
At the most conservative estimates, ATLAS will produce over 1 PB of data per year, requiring 1-2 million SpecInt95 of CPU to process and analyse it and to generate large Monte Carlo datasets. The collaboration is worldwide, and only Grids will allow all collaborators to have access to the full datasets. ATLAS must develop an intercontinental distributed computing and data Grid with a user interface that shields the user from the Grid middleware and from the distributed nature of the processing; we must develop automated production systems using the Grid tools; and we must provide tools that automatically distribute, install and verify the required experimental software and run-time environment at remote sites, to avoid the problems of chaotic multi-site management. Bookkeeping, replication and monitoring tools are also required. All of these topics are being addressed within the collaboration, with Grid tools being used for large-scale Data Challenges.
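As a rough illustration of the software-distribution problem described in the abstract, the sketch below checks an installed release at a remote site against a manifest of expected file checksums. It is not taken from the paper or from any ATLAS tool; all file names, paths and the manifest format are hypothetical, and the sketch only assumes the general idea of automated installation verification.

```python
"""Hypothetical sketch: verify a distributed software release at a remote
site by comparing installed files against a published checksum manifest.
All names and paths are illustrative, not from the ATLAS production system."""

import hashlib
import json
from pathlib import Path


def sha1_of(path: Path) -> str:
    """Return the SHA-1 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.sha1()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_release(install_dir: Path, manifest_file: Path) -> list:
    """Compare an installed release against a manifest mapping relative
    paths to expected SHA-1 digests; return a list of problems found."""
    manifest = json.loads(manifest_file.read_text())
    problems = []
    for rel_path, expected in manifest.items():
        target = install_dir / rel_path
        if not target.exists():
            problems.append("missing: " + rel_path)
        elif sha1_of(target) != expected:
            problems.append("checksum mismatch: " + rel_path)
    return problems


if __name__ == "__main__":
    # Hypothetical install directory and manifest file for one release.
    issues = verify_release(Path("/opt/atlas/release-6.0.4"),
                            Path("manifest.json"))
    print("release OK" if not issues else "\n".join(issues))
```

A verification step of this kind would typically run after automated installation at each remote site, so that production jobs are only dispatched to sites whose run-time environment matches the reference release.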