Yes. From the total memory it looks like you have two 16GB DIMMs per socket.
Correct, I did not pay attention to that until you mentioned it. From
https://www.intel.com/content/www/us/en ... tions.html
I understand the E7-8880 v3 has 4 channels per socket.
I have only 16x16GB modules, and therefore used 2 modules for each of the 8 CPUs. Are half the memory channels empty or did you use 8GB DIMMs?
I only powered the lower node with 4 CPUs and all HDDs, and the system worked as an x3850 X6 4-socket system. The maximum NUMA distance between sockets is only 12. The similar-vintage dual-socket Xeon here has a socket-to-socket distance of 21.
Although the 8-way system is more uniform, since everything is normalised to 10 on the diagonal, it's not clear whether that uniformity comes from faster interconnects or from generally higher latency on the E7 series.
It'd be interesting to run the core-to-core latency tests at
https://github.com/ChipsandCheese/Micro ... yLatency.c
to see more details.
There were two ratings for quad E7-8880 v3 showing an average of 50,157:
https://www.cpubenchmark.net/cpu.php?cp ... cpuCount=4
I first ran the PassMark benchmark on the new 4-socket system with 128GB memory.
Even with only two memory channels populated, the result was much better: CPU Mark 52,900.
After powering off, I installed the other 8x16GB modules in the recommended order.
Now all 4 CPUs have 4x16GB memory.
The new CPU Mark value with 4 memory channels is 57,828(!):
https://www.passmark.com/baselines/V11/ ... 0102157717
Since the system with only the bottom node powered draws half the power and still has 72C/144T, I will keep it that way for now.
I will develop my pthread code on a 16C/32T AMD 7950X and do initial testing there.
If more cores are needed, I will use my new system with 72 = 4.5*16 cores.
When I do production runs of my optimization software later, I will go back to the 8-socket system with 144C/288T.
See numactl output for new system below.
I stopped the code you asked me to execute after 2h(!), once all entries for cores 0-17 had been computed.
I did not want to run it for 7x2 more hours to complete.
I hope you can find what you wanted to see in attached output.
hermann@x3950-X6:~$ numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
node 0 size: 64321 MB
node 0 free: 63566 MB
node 1 cpus: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107
node 1 size: 64499 MB
node 1 free: 62551 MB
node 2 cpus: 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125
node 2 size: 64456 MB
node 2 free: 63714 MB
node 3 cpus: 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143
node 3 size: 64493 MB
node 3 free: 63751 MB
node distances:
node   0   1   2   3
  0:  10  11  11  11
  1:  11  10  11  11
  2:  11  11  10  11
  3:  11  11  11  10
hermann@x3950-X6:~$
P.S.:
TIL that I don't need to use the IMM WebGUI.
After turning on the lower node's power plug in the browser, this simply powers on the server:
Code:
pi@raspberrypi5:~ $ ssh USERID@192.168.178.87 power on
(USERID@192.168.178.87) Password:
system> ok
system>
pi@raspberrypi5:~ $

Statistics: Posted by HermannSW — Fri Aug 15, 2025 5:22 pm