Campus researchers have access to Strelka, Swarthmore's computer cluster. To learn more about the capabilities of the system and obtain an account, email support@swarthmore.edu.
Technical Specifications
System Configuration
The cluster consists of 13 compute nodes plus a head node that handles user logins and job scheduling:
- 8 mid-memory nodes (384GB RAM)
- 3 high-memory nodes (768GB RAM)
- 2 GPU nodes, each with 4x NVIDIA RTX 2080 Ti GPUs
- High-speed InfiniBand networking
Jobs are submitted through the Slurm job scheduling system.
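Once logged in, the state of the cluster and the queue can be checked with Slurm's standard query commands; a brief sketch (partition names and output will reflect Strelka's actual configuration):

```bash
# List the available partitions (queues) and the state of their nodes
sinfo

# Show every job currently in the queue
squeue

# Show only your own jobs ($USER expands to your username)
squeue -u $USER
```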
Summary
| Feature | Total |
|---|---|
| CPUs | 18x Intel Xeon Gold 6230 |
| Total Cores | 648 |
| Total Memory | 4.6TB |
| GPUs | 8x NVIDIA RTX 2080 Ti |
| User Storage | 100TB |
Creating an Account
Please see these instructions for creating an account on Strelka.
Logging in
Please see these instructions for logging into Strelka.
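Those instructions cover the details; as a general sketch, clusters running Slurm are typically reached over SSH. The hostname below is a placeholder, not a confirmed address:

```bash
# Log in with your college username; the hostname shown here is
# illustrative, so use the address given in the login instructions
ssh username@strelka.swarthmore.edu
```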
Transferring Files
Please see these instructions for transferring files to/from Strelka.
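As a sketch, `scp` and `rsync` are the usual command-line tools for moving data to a cluster; the hostname, filenames, and paths below are placeholders:

```bash
# Copy a single file into your home directory on the cluster
# (hostname and filename are placeholders)
scp data.tar.gz username@strelka.swarthmore.edu:~/

# Mirror a whole directory; -a preserves permissions and timestamps,
# -v prints each file, -P shows progress and lets interrupted
# transfers resume
rsync -avP results/ username@strelka.swarthmore.edu:~/results/
```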
Submitting Jobs
To run code on Strelka, you need to submit a job to the Slurm queue. For complete information, see the Slurm Commands page.
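As a minimal sketch of a batch script for a simple single-node job, the job name, resource requests, and `my_script.py` are all placeholders; the Slurm Commands page has the options that actually apply on Strelka:

```bash
#!/bin/bash
#SBATCH --job-name=example      # name shown in squeue output
#SBATCH --nodes=1               # run on a single node
#SBATCH --ntasks=1              # launch one task
#SBATCH --cpus-per-task=4       # reserve four cores for that task
#SBATCH --mem=8G                # request 8GB of memory on the node
#SBATCH --time=01:00:00         # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out      # log file named jobname-jobid.out

# Replace with the program you actually want to run
python my_script.py
```

Saved as, say, `job.sh`, the script is submitted with `sbatch job.sh`; Slurm prints a job ID, and `squeue -u $USER` tracks the job until it runs.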
Other Strelka Resources
Acknowledgement
If you publish a paper in which the cluster was used for calculations, please include the following acknowledgement:
“This work used the Strelka Computing Cluster, which is supported by the Swarthmore College Office of the Provost.”
Governance
An Advisory Group has been established to oversee decisions concerning Strelka. The members of the Advisory Group are:
- Tristan Smith, Physics
- Dan Grin, Haverford
- Jason Simms, ITS
- Andrew Ruether, ITS