Anselm was installed in the summer of 2013 with a theoretical peak performance of 94 TFlop/s. It consists of 209 compute nodes, each equipped with 16 cores (two eight-core Intel Sandy Bridge processors). The compute nodes are interconnected by InfiniBand (QDR) and Ethernet networks.

There are four types of compute nodes:

  • 180 compute nodes without any accelerator, with 2.4 GHz CPUs and 64 GB RAM,
  • 23 compute nodes with GPU accelerators (NVIDIA Tesla K20), with 2.3 GHz CPUs and 96 GB RAM,
  • 4 compute nodes with MIC accelerators (Intel Xeon Phi 5110P), with 2.3 GHz CPUs and 96 GB RAM,
  • 2 fat nodes with larger RAM and faster storage (2.4 GHz CPUs, 512 GB RAM and two SSD drives).

The total theoretical peak performance of the cluster is 94 TFlop/s, with a maximum LINPACK performance of 73 TFlop/s.
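
As a rough sanity check, the peak figure decomposes into CPU and accelerator contributions. The sketch below is only an estimate: it assumes 8 double-precision FLOPs per cycle per Sandy Bridge core (AVX) and nominal DP peaks of roughly 1.17 TFlop/s per Tesla K20 and 1.01 TFlop/s per Xeon Phi 5110P, values not stated on this page.

```python
# Back-of-the-envelope check of the 94 TFlop/s figure (a sketch; the
# FLOPs-per-cycle and per-accelerator peaks are assumed nominal values).

FLOPS_PER_CYCLE = 8  # double-precision FLOPs per core per cycle with AVX (assumed)

cpu_nodes = [
    (180, 2.4),  # nodes without accelerators, 2.4 GHz
    (23,  2.3),  # GPU-accelerated nodes, 2.3 GHz
    (4,   2.3),  # MIC-accelerated nodes, 2.3 GHz
    (2,   2.4),  # fat nodes, 2.4 GHz
]

# 16 cores per node; result converted from GFlop/s to TFlop/s
cpu_peak = sum(n * 16 * ghz * FLOPS_PER_CYCLE for n, ghz in cpu_nodes) / 1000
gpu_peak = 23 * 1.17   # NVIDIA Tesla K20, nominal DP peak ~1.17 TFlop/s each (assumed)
mic_peak = 4 * 1.01    # Intel Xeon Phi 5110P, nominal DP peak ~1.01 TFlop/s each (assumed)

print(f"CPU peak   ~ {cpu_peak:.1f} TFlop/s")                        # ~63.9 TFlop/s
print(f"Total peak ~ {cpu_peak + gpu_peak + mic_peak:.1f} TFlop/s")  # ~94.8 TFlop/s
```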

All compute nodes share 320 TiB of /home disk storage for user files, and a 146 TiB shared /scratch storage is available for temporary data. Both file systems are provided by the Lustre parallel file system. In addition, every compute node has a local 500 GB hard drive.
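
The intended division of labour between these tiers can be illustrated with a short sketch; the /scratch/<username> directory layout used here is an assumption for illustration and may differ on the real system.

```python
# Minimal sketch of using the storage tiers; the /scratch/<user> layout
# is an assumption for illustration, not taken from this page.
import getpass
from pathlib import Path

user = getpass.getuser()
scratch = Path("/scratch") / user   # fast shared Lustre space for temporary job data
home = Path.home()                  # shared /home space for sources and results
local_tmp = Path("/tmp")            # node-local disk for small per-node temporaries

scratch.mkdir(parents=True, exist_ok=True)
(scratch / "intermediate.dat").write_bytes(b"\x00" * 1024)  # large intermediate output
(home / "results.txt").write_text("final summary\n")        # keep only results in /home
```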

Key figures: 94 TFlop/s theoretical peak, 466 TiB storage, 40 Gb/s interconnect.

Technical information of the Anselm supercomputer

  • Put into operation: summer 2013
  • Theoretical peak performance: 94 TFlop/s
  • Operating system: RedHat Linux 64-bit 6.x
  • Compute nodes: 209
  • CPU: 2x Intel Sandy Bridge, 8 cores each, 2.3 / 2.4 GHz (3,344 cores in total)
  • RAM per compute node: 64 GB / 96 GB / 512 GB
  • Accelerators: 4x Intel Xeon Phi 5110P, 23x NVIDIA Tesla K20 (Kepler)
  • Storage: 320 TiB /home (speed 2 GB/s), 146 TiB /scratch (speed 6 GB/s)
  • Interconnect: InfiniBand QDR, 40 Gb/s

Learn more at docs.it4i.cz.

Figure: cluster utilization of the Anselm supercomputer

Dashboard Anselm

View current Anselm supercomputer data.

The dashboard displays the current state of the supercomputer. The displayed values are:

  • utilization
  • node allocation
  • usage of the HOME and SCRATCH file systems
  • statistics of PBS jobs and the TOP7 software modules
  • the number of days of operation since the last outage and the date of the next planned outage
  • MOTD (Message of the Day)
  • the person responsible for the Service of the Day
