# Cluster Plan
## Hardware
### Available
**Compute Nodes**

16 servers, each:

- 2 nodes, each with 2 quad-core CPUs [E5530](https://www.intel.com/content/www/us/en/products/sku/37103/intel-xeon-processor-e5530-8m-cache-2-40-ghz-5-86-gts-intel-qpi/specifications.html?wapkw=e5530)
- Power draw ~400 W per node at full load, 770 W power supply unit per node
- 19-inch form factor, 80 cm rack depth required
- Server: [Asus RS700D-E6/PS8](https://www.manualslib.com/manual/445256/Asus-Rs700d-E6-Ps8-0-Mb-Ram.html?page=92#manual)
- Mainboard: [Z8NH-D12](https://dlcdnet.asus.com/pub/ASUS/mb/socket1366/Z8PH-D12_SE/QDR/e5743_Z8PH-D12-SE-QDR.pdf)
- 1x PCIe 2.0 x16, low profile
- 12x DDR3 RAM slots, 6x 4 GB RDIMM 10600R populated
- InfiniBand, 20 Gbit/s
- 2x 1 Gbit/s Ethernet
- 100 Mbit/s management port
- 3 fans
- 4x 2.5" drive slots
**Login Nodes**

- 1 login server with 2 nodes, same hardware as the compute nodes
**Ethernet Switch**

- 1 U
- 2x 48-port Gigabit switches available (48x 1 Gbit/s ports each)
**InfiniBand Switch**

- 1 U Mellanox [MTS3600](https://andovercg.com/datasheets/mellanox-MTS3600Q-1BNC-MTS3600.pdf)
- 36 InfiniBand QSFP ports
**Storage Server**

- 2 U
- 12x 2.5" slots
- 18x DDR3 RAM slots, 3x triple channel
- 2 CPU sockets, E5640
### Needed
**Network Cabinet**

- [eBay offer, €320](https://www.ebay.de/itm/315685371014)
- Or available at the ZIH
**Power Supply**

- Power distribution unit (PDU) fed from a 32 A three-phase supply (rough sizing check below)
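Not in the original list, but as a rough sizing check for the PDU, assuming 16 compute servers with 2 nodes each at the stated ~400 W full load per node and a standard 230/400 V three-phase feed:

```latex
% Compute load at full utilization (login/storage servers and switches come on top)
P_{\text{compute}} \approx 32 \times 400\,\mathrm{W} = 12.8\,\mathrm{kW}

% Capacity of a 32 A three-phase feed at 400 V line-to-line
P_{\text{feed}} = \sqrt{3} \cdot 400\,\mathrm{V} \cdot 32\,\mathrm{A} \approx 22.2\,\mathrm{kW}
```

That leaves headroom for the login and storage servers and the switches; sizing by the 770 W PSU nameplate instead (32 × 770 W ≈ 24.6 kW) would exceed a single 32 A feed, so the ~400 W full-load figure should be verified under real load.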
### Optional
- [Xilinx Artix-7 FPGA](https://www.amd.com/content/dam/amd/en/documents/products/adaptive-socs-and-fpgas/fpga/7-series/artix7-product-brief.pdf) XC7A15T, ~8 available (bare chips only, would have to be desoldered)
- Per-node SSD storage
## Software
**SLURM**

- As the HPC workload manager (minimal configuration sketch below)
- [Wiki](https://en.wikipedia.org/wiki/Slurm_Workload_Manager)
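A minimal `slurm.conf` sketch for this layout, purely as orientation; the hostnames (`login1`, `node[01-32]`) are placeholders and the hardware figures follow the compute-node specs above (2x quad-core E5530, 6x 4 GB RAM per node):

```conf
# Sketch of a slurm.conf -- hostnames are placeholders, values need checking
ClusterName=cluster
SlurmctldHost=login1                 # controller running on a login node (assumption)
ProctrackType=proctrack/cgroup
SchedulerType=sched/backfill
SelectType=select/cons_tres
SelectTypeParameters=CR_Core

# 32 compute nodes: 2 sockets x 4 cores (E5530), ~24 GB RAM each
# ThreadsPerCore=2 if Hyper-Threading stays enabled; keep RealMemory below actual free RAM
NodeName=node[01-32] Sockets=2 CoresPerSocket=4 ThreadsPerCore=1 RealMemory=23000 State=UNKNOWN
PartitionName=batch Nodes=node[01-32] Default=YES MaxTime=INFINITE State=UP
```

Jobs would then be submitted from the login nodes with the usual front ends, e.g. `sbatch --nodes=2 --ntasks-per-node=8 job.sh`.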
**Proxmox**

- Login nodes running Proxmox
- Storage node running Proxmox
- Proxmox cluster/quorum across the 2 login nodes and the storage server (see the sketch below)
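A sketch of how the three-member cluster could be brought up with Proxmox VE's `pvecm` tool; the cluster name and addresses are placeholders:

```sh
# On the first login node: create the Proxmox VE cluster (name is a placeholder)
pvecm create hpc-mgmt

# On the second login node and on the storage server: join the cluster
pvecm add <IP-of-first-login-node>

# Check membership and quorum (expected: 3 votes, quorum with any 2 members)
pvecm status
```

With three voting members the cluster keeps quorum when any single member fails, which is the point of including the storage server alongside the two login nodes.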
**Linux Distro**

- NixOS on each server node (configuration sketch below)
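A minimal per-node `configuration.nix` sketch; the hostname is a placeholder and the `services.slurm` option paths should be verified against the nixpkgs release that ends up being used:

```nix
# configuration.nix sketch for a compute node -- names and versions are placeholders
{ config, pkgs, ... }:

{
  networking.hostName = "node01";   # per-node hostname (placeholder)
  services.openssh.enable = true;   # remote administration

  # SLURM compute daemon via the NixOS slurm module (verify option names);
  # the controller/login nodes would enable services.slurm.server instead
  services.slurm.client.enable = true;

  time.timeZone = "Europe/Berlin";
  system.stateVersion = "24.05";    # pin to the release used at install time
}
```

Keeping the full node definition in Nix makes it straightforward to roll the same configuration out to all nodes, e.g. with `nixos-rebuild switch --target-host`.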