Campus researchers have access to Strelka, Swarthmore's computer cluster.  To learn more about the capabilities of the system and obtain an account, email  

Technical Specifications

System Configuration

The cluster consists of 13 compute nodes plus a head node that handles user logins and job scheduling.

  • 8 mid-memory nodes (384GB RAM)
  • 3 high-memory nodes (768GB RAM)
  • 2 GPU nodes, each with 4x NVIDIA RTX 2080 Ti GPUs
  • High-speed InfiniBand networking

Jobs are submitted through the Slurm job scheduling system.

  • CPUs: 18x Intel Xeon Gold 6230
  • Total Cores: 648
  • Total Memory: 4.6TB
  • User Storage: 100TB

Creating an Account

Please see these instructions for creating an account on Strelka.

Logging in

Please see these instructions for logging into Strelka.  
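As a typical example, access is via SSH from a terminal. The hostname below is an assumption used for illustration; the linked instructions give the actual address.

```shell
# Log in with your Swarthmore credentials.
# NOTE: the hostname is an assumed example -- see the login
# instructions for the real address.
ssh username@strelka.swarthmore.edu
```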

Transferring Files

Please see these instructions for transferring files to/from Strelka.
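For illustration, standard tools such as scp or rsync work for file transfer; the hostname and paths below are assumed examples, so substitute your own.

```shell
# Copy a single local file to your home directory on the cluster
# (hostname and destination path are assumed examples).
scp data.csv username@strelka.swarthmore.edu:~/project/

# rsync skips unchanged files and can resume interrupted transfers,
# which is useful for large datasets.
rsync -avP results/ username@strelka.swarthmore.edu:~/project/results/
```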

Submitting Jobs

To run code on Strelka, you need to submit a job to the queue. For complete information, see the Slurm Commands page.
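A minimal batch script might look like the following sketch. The resource values and the program name are illustrative assumptions; consult the Slurm Commands page for the partitions and limits that apply on Strelka.

```shell
#!/bin/bash
#SBATCH --job-name=example       # name shown in the queue
#SBATCH --nodes=1                # run on a single node
#SBATCH --ntasks=1               # one task
#SBATCH --cpus-per-task=4        # cores for that task
#SBATCH --mem=8G                 # memory for the job
#SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)

# Replace with your actual program (assumed example below)
python my_analysis.py
```

Submit the script with `sbatch job.sh`, check its status with `squeue -u $USER`, and cancel it with `scancel <jobid>`.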

Other Strelka Resources


If you publish a paper in which the cluster was used for computation, please include the following acknowledgement:

“This work used the Strelka Computing Cluster, which is supported by the Swarthmore College Office of the Provost.”


An Advisory Group governs decisions concerning Strelka. The members of the Advisory Group are:

  • Tristan Smith, Physics
  • Dan Grin, Haverford
  • Jason Simms, ITS
  • Andrew Ruether, ITS
