
Hardware

SBEL uses Euler, a multi-core supercomputer cluster built by the Wisconsin Applied Computing Center. Multi-core hardware is the foundation of parallel computing, in which many calculations are carried out at the same time. Parallel computing lets researchers break a complex problem into many simpler pieces and solve them simultaneously; SBEL uses this approach to model and simulate complex mechanical systems.
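
As a minimal illustration of that idea (a sketch assuming a reasonably recent CUDA-capable GPU and the CUDA toolkit, not code from SBEL's own simulation software; the file and kernel names are made up), the kernel below splits the element-wise sum of two large arrays into about a million independent pieces, one per GPU thread:

    // parallel_add.cu -- each GPU thread handles one element of c = a + b,
    // so one large problem is decomposed into many small, independent pieces
    // that run at the same time across the GPU's cores.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vector_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                    // about one million elements
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);             // unified memory, visible to CPU and GPU
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;                        // threads per block
        int blocks = (n + threads - 1) / threads; // enough blocks to cover every element
        vector_add<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();                  // wait for the GPU to finish

        printf("c[0] = %f\n", c[0]);              // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compiled with nvcc (for example, nvcc parallel_add.cu -o parallel_add), this runs on a single GPU; larger problems are spread across Euler's many GPUs and nodes in the same spirit.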

Euler: NVIDIA GPU + AMD CPU Cluster:

Assembled 2011-05-04

Expanded 2011-12-31, 2012-04-01

Specifications:

  • OS: CentOS Linux 7.2
  • Peak Flop Rate [Single Precision]: 432 Teraflops (GPU; peak rates are estimated as sketched in the note after this list)
  • Peak Flop Rate [Double Precision]: 13 Teraflops (GPU), 10.0 Teraflops (CPU)
  • 1 x Login Head Node
  • 1 x Supervisor Head Node
  • 15 x GPU Compute Nodes
  • 26 x CPU Compute Nodes
  • 4 x Development Nodes
  • 2 x Haswell-E Buildbot Nodes
  • 1 x AMD APU Node
  • 2 x NVIDIA Jetson TK1 Development Boards
  • 4 x NVIDIA GeForce GTX 480 GPUs
  • 3 x NVIDIA GeForce GTX 680 GPUs
  • 9 x NVIDIA GeForce GTX Titan X (Maxwell) GPUs
  • 48 x NVIDIA GeForce GTX 1080 GPUs
  • 8 x NVIDIA Tesla K20x GPUs
  • 4 x NVIDIA Tesla K40c GPUs
  • 2 x NVIDIA GeForce GTX 770 GPUs
  • 4 x Intel Xeon Phi 31S1P Coprocessors
  • Interconnect: 10-Gigabit Ethernet, Infiniband
  • Infiniband Switch: QLogic 12200-BS01
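
Note on the peak rates above (an illustrative back-of-the-envelope estimate, not an official SBEL figure): a single GPU's peak single-precision rate is roughly CUDA cores x clock x 2 FLOPs per cycle, since each core can issue one fused multiply-add per clock. A GeForce GTX 1080, for example, has 2560 CUDA cores at a boost clock of about 1.73 GHz, giving 2560 x 1.73e9 x 2, or roughly 8.9 Teraflops; the 48 GTX 1080 cards alone therefore account for roughly 425 of the 432 single-precision Teraflops listed, with the remaining GPUs making up the balance. The exact total depends on which clock (base or boost) is assumed for each card.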

Cluster Layout:

(Cluster layout diagram not shown here; it may not reflect the current state of the cluster.)

    Head Node

  • Supermicro 6016T-NTF SuperServer
  • 2 x Intel Nehalem Xeon E5520 2.26GHz processor
  • 48GB DDR3 ECC Registered
  • 1 x Mellanox ConnectX-2 MHQH19B-XTR Infiniband

    GPU Compute Node

  • 1 x Supermicro 7028GR-TRT SuperServer
  • 2 x Intel Haswell-E Xeon E5-2650 v3 2.30GHz 10-core processor
  • 12 nodes: 4x NVIDIA GeForce GTX 1080 GPUs
  • 1 node: 4x NVIDIA GeForce GTX 480 GPUs
  • 1 node: 3x NVIDIA GeForce GTX 680 and 1x Titan X GPUs
  • 128GB DDR4 ECC Registered
  • 1 x 10GBase-T RJ45 Ethernet (10 Gb/s)

    AMD CPU Node

  • 1 x Supermicro 1042-LTF SuperServer
  • 4 x AMD Opteron 6274 2.2GHz 16-core processor
  • 128GB DDR3 ECC Registered
  • 1 x Mellanox ConnectX-2 MHQH19B-XTR Infiniband HCA (40 Gb/s)

    Intel CPU Node

  • 1 x Supermicro 1028-TRT SuperServer
  • 2 x Intel Broadwell-E Xeon E5-2640 v4 2.40GHz 10-core processor
  • 128GB DDR4 ECC Registered
  • 1 x 10GBase-T RJ45 Ethernet (10 Gb/s)

    Development Node

  • 2 x Intel Xeon E5-2630 2.30GHz
  • 64GB DDR3 ECC Registered
  • 2x NVIDIA Tesla K20x
  • 2x NVIDIA Tesla K40c
  • 1 x Mellanox ConnectX-2 MHQH19B-XTR Infiniband HCA (40 Gb/s)

    Development Node

  • 2 x Intel Xeon E5-2690v2
  • 64GB DDR3 ECC Registered
  • 2x NVIDIA Tesla K20x
  • 2x NVIDIA Tesla K40c
  • 1 x Mellanox ConnectX-2 MHQH19B-XTR Infiniband HCA (40 Gb/s)

    Development Node

  • 2x Intel Xeon E5-2650 v3 2.30GHz
  • 128GB DDR4 ECC Registered
  • 2x NVIDIA Tesla K20x
  • 2x NVIDIA Tesla K40
  • 1x 10GBase-T RJ45 Ethernet (10 Gb/s)

    Development Node

  • 2 x Intel Xeon E5-2650 v3 2.30GHz
  • 128GB DDR4 ECC Registered
  • 4x Intel Xeon Phi 31S1P Coprocessor
  • 1x 10GBase-T RJ45 Ethernet (10 Gb/s)

    i7 5960x (Haswell-E) Node

  • 1 x Intel i7 5960x
  • 32GB DDR4
  • NVIDIA GTX 770

    AMD APU Node

  • 1 x AMD A10-7850K
  • 16GB DDR3
  • AMD Radeon R7

    NVIDIA Jetson TK1

  • 1 x ARM Cortex A15
  • 2GB DDR3L
  • NVIDIA Kepler GK20A

    File Server

  • Installed 2011-05-04
  • OS: Scientific Linux 6.3
  • 1 x Intel Westmere Xeon E5630 2.53GHz processor
  • 24 x Western Digital 2TB 7200RPM 64MB SATA II drives – WD2003FYYS
  • 6 x 4GB DDR3-1333 ECC Registered
  • 3ware 9690SA-4I battery-backed RAID controller
  • 34TB for home directories, backed by RAID6
  • 1 x Mellanox ConnectX-2 MHQH19B-XTR Infiniband HCA (40 Gb/s)
  • Interconnect: Gigabit Ethernet, Infiniband

    File Server

  • Installed in Jan 2016
  • OS: CentOS 7.2
  • 1 x Intel Haswell-E Xeon E5-2623 v3 3.00GHz processor
  • 16 x HGST 8TB 7200RPM 64MB SATA III drives
  • 64GB DDR4 ECC Registered
  • LSI MegaRAID SAS 2208 battery-backed RAID controller
  • 87TB for home directories, backed by RAID50 with 1 global hot spare
  • 1x 10GBase-T RJ45 Ethernet (10 Gb/s)
  • Interconnect: 10 Gigabit Ethernet


SBEL is led by Mead Witter Foundation Professor Dan Negrut in the Department of Mechanical Engineering at UW-Madison.
