climateprediction.net (CPDN) home page
Thread 'Hardware for new models.'

Message boards : Number crunching : Hardware for new models.
Mr. P Hucker

Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 66318 - Posted: 9 Nov 2022, 15:10:03 UTC - in response to Message 66317.  

My machine came with two of these, and I added two more:
I do not remember if there is room for another four or not. (I must power down the system to open the box.)

Dell Memory Upgrade - 16GB - 2RX8 DDR4 RDIMM 2933MHz

Data Integrity Check ECC
Speed 2933 MHz (PC4-23400)
Dual rank, registered
I use CPU-Z to find what motherboard and RAM I have, whether I can add more, and what type.
Mr. P Hucker

Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 66323 - Posted: 9 Nov 2022, 18:40:35 UTC - in response to Message 66322.  
Last modified: 9 Nov 2022, 18:48:53 UTC

There are seven more, but four say no module installed. So I seem to be using four memory modules, and I could add up to four more (in pairs) if I were rich enough and needed more RAM.
Agreed, sort of. Last time I checked, £100 for 32GB.

You can't see what motherboard you have, but if it's taking 16GB modules, it should take a 16GB in every slot. At least. Maybe a 32GB in every slot.

Since you have a W-2245 CPU, I assume you have a Dell Precision 5820. If so, they can take 512GB! So that would be 8 x 64GB modules!

But.... it's *4* channel not 2. So you're better off buying 4 modules of whatever you need / can afford (unless you want to leave free sockets for another upgrade).
Jean-David Beyer

Joined: 5 Aug 04
Posts: 1120
Credit: 17,202,915
RAC: 2,154
Message 66324 - Posted: 9 Nov 2022, 20:14:16 UTC - in response to Message 66323.  

There are seven more, but four say no module installed. So I seem to be using four memory modules, and I could add up to four more (in pairs) if I were rich enough and needed more RAM.

Agreed, sort of. Last time I checked, £100 for 32GB.

You can't see what motherboard you have, but if it's taking 16GB modules, it should take a 16GB in every slot. At least. Maybe a 32GB in every slot.

Since you have a W-2245 CPU, I assume you have a Dell Precision 5820. If so, they can take 512GB! So that would be 8 x 64GB modules!

But.... it's *4* channel not 2. So you're better off buying 4 modules of whatever you need / can afford (unless you want to leave free sockets for another upgrade).


Yes, my machine is a Dell 5820 desktop workstation. Those modules cost me more than that when I bought them in late 2020. I suppose they are less now.

Dell Memory Upgrade - 16GB - 2RX8 DDR4 RDIMM 2933MHz
QUANTITY: 2 | UNIT PRICE: $293.40


Dell have this to say about memory for this machine:
Memory specifications
Features Specifications
Type 
● DDR4 ECC RDIMMs - Supported only with Xeon W Series CPUs  <---<<< what I have
● DDR4 Non-ECC UDIMMs supported with Core X Series CPUs
Speed 
● 2666 MHz (Discontinued on system configurations purchased after October 2020)
● 2933 MHz <---<<< what is in there now.
● 3200 MHz
NOTE: 2933 MHz RDIMMs are not offered with Xeon W Skylake Series CPUs.
NOTE: Computer configurations offered with 2933 MHz RDIMMs operating with Skylake
processors will operate at 2666 MHz.
NOTE: Computer configurations offered with 3200 MHz RDIMMs operating with Cascade Lake
processors will operate at 2933 MHz.

Connectors 8 DIMM Slots
DIMM capacities 
● 32 GB per slot 2666 MHz DDR4
● 64 GB per slot 2933 MHz DDR4
● 64 GB per slot 3200 MHz DDR4


So I would have to replace the current modules with 64 GB modules to max out my box.

Dell Memory Upgrade - 64GB - 2RX4 DDR4 RDIMM 3200MHz (Not Compatible with Skylake CPU)

Estimated Value
$3,579.00
Dell Price $1,789.50
You Save $1,789.50 (50%)
Temporarily Out of Stock  <---<<<


Right now, with my machine up 11 days since the most recent reboot and running 11 Boinc tasks, mainly WCG, I use very little RAM, so I hardly think I will need much more unless the new OpenIFS tasks become absolute RAM hogs. I am more worried about running out of processor cache.


$ free -hw
              total        used        free      shared     buffers       cache   available
Mem:           62Gi       6.7Gi       802Mi       148Mi       601Mi        54Gi        54Gi
Swap:          15Gi       137Mi        15Gi

Mr. P Hucker

Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 66325 - Posted: 9 Nov 2022, 21:14:15 UTC - in response to Message 66324.  
Last modified: 9 Nov 2022, 21:15:50 UTC

Those modules cost me more than that when I bought them in late 2020. I suppose they are less now.

Dell Memory Upgrade - 16GB - 2RX8 DDR4 RDIMM 2933MHz
QUANTITY: 2 | UNIT PRICE: $293.40
$140 a piece now, so the same price. Or $45 a piece 2nd hand. I always get 2nd hand and run a thorough memtest on it.

2933 MHz or above would be perfect (that's the maximum speed of the memory; it will just run slower if need be), or go down a notch to 2666 if it saves a lot of money, so you can afford larger modules.

Right now with my machine up 11 days since most recent reboot, running 11 Boinc tasks, mainly WCG, I use very little RAM, so I hardly think I will be needing much more unless the new OpenIFS become absolute RAM hogs. I am more worried about running out of processor cache.
Since you already have 4 channel memory in use, I guess you can't improve on that.

I run big stuff like LHC. And the OpenIFS looks like it will be big too, especially if you want to use all your cores on it.
Jean-David Beyer

Joined: 5 Aug 04
Posts: 1120
Credit: 17,202,915
RAC: 2,154
Message 66327 - Posted: 10 Nov 2022, 3:28:53 UTC - in response to Message 66325.  

Since you already have 4 channel memory in use, I guess you can't improve on that.

I run big stuff like LHC. And the OpenIFS looks like it will be big too, especially if you want to use all your cores on it.


I cannot really run all cores on Boinc tasks. I do not have air conditioning. I can control the fan speeds by BIOS settings. But the maximum I can run the fans at, before their noise gets really annoying, lets me run 8 or so cores in summer and I am currently running 11. The maximum number of cores for Boinc tasks is easily set in the Boinc-manager. And I can set the number of concurrent tasks for a given project in the app_config.xml file in the appropriate directory.

For me, they are here:

$ locate app_config
/var/lib/boinc/projects/boinc.bakerlab.org_rosetta/app_config.xml
/var/lib/boinc/projects/climateprediction.net/app_config.xml
/var/lib/boinc/projects/milkyway.cs.rpi.edu_milkyway/app_config.xml
/var/lib/boinc/projects/universeathome.pl_universe/app_config.xml
/var/lib/boinc/projects/www.worldcommunitygrid.org/app_config.xml
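For reference, a minimal app_config.xml that limits how many tasks of one app run at once might look like the sketch below. The app name here is a placeholder: copy the real one from the <app> entries in client_state.xml for the project in question. The client picks up changes via Options → Read config files in the BOINC Manager, or on restart.

```xml
<app_config>
   <app>
      <!-- Placeholder app name: use the real one from client_state.xml -->
      <name>oifs_43r3_ps</name>
      <!-- Run at most 4 tasks of this app at a time -->
      <max_concurrent>4</max_concurrent>
   </app>
</app_config>
```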

Mr. P Hucker

Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 66328 - Posted: 10 Nov 2022, 3:42:04 UTC - in response to Message 66327.  

I cannot really run all cores on Boinc tasks. I do not have air conditioning. I can control the fan speeds by BIOS settings. But the maximum I can run the fans at, before their noise gets really annoying, lets me run 8 or so cores in summer and I am currently running 11. The maximum number of cores for Boinc tasks is easily set in the Boinc-manager. And I can set the number of concurrent tasks for a given project in the app_config.xml file in the appropriate directory.
I'm surprised. CPUs generate so little heat compared with GPUs. I have 12 GPUs and 100 CPU cores. Most are in my garage, and I just leave windows open. No, the glass ones not the MS ones! One day I'll join the garage to the house so I can let the heat drift through in winter.

As for noise, just install a bigger heatsink and fan (or even water cooling). My 130W Ryzen 9 CPUs make very little noise running flat out. Two 6 inch fans on a 6x6x6 inch heatsink. Hence slow quiet fans.
ProfileDave Jackson
Volunteer moderator

Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 66329 - Posted: 10 Nov 2022, 6:04:25 UTC - in response to Message 66328.  

I'm surprised. CPUs generate so little heat compared with GPUs. I have 12 GPUs and 100 CPU cores. Most are in my garage, and I just leave windows open. No, the glass ones not the MS ones! One day I'll join the garage to the house so I can let the heat drift through in winter.
Some crunchers live in hotter climes than the UK. Even in Cambridge we are getting days above 40°C now.
Mr. P Hucker

Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 66330 - Posted: 10 Nov 2022, 6:10:49 UTC - in response to Message 66329.  

Some crunchers live in hotter climes than UK. Even in Cambridge we are getting days above 40C now.
Even so, about 100W is nothing. A fridge gives that off. Open the window.
wateroakley

Joined: 6 Aug 04
Posts: 195
Credit: 28,374,828
RAC: 10,749
Message 66333 - Posted: 10 Nov 2022, 10:20:07 UTC - in response to Message 66330.  

Even so, about 100W is nothing. A fridge gives that off. Open the window.
Are you sure? An E-rated American-style fridge-freezer uses about 350 kWh a year, which averages 40W. A B-rated fridge is about 137 kWh a year, or roughly 15W. Our i7-based PCs pull over 100W each without the monitors, about 900 kWh a year. The waste heat from two PCs is warming our home office today, very slightly.
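The comparison is just an annual-energy-to-average-power conversion (kWh per year divided by hours per year); a quick sketch with the figures from this post (note that 137 kWh/year rounds to closer to 16 W):

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def avg_watts(kwh_per_year: float) -> float:
    """Average continuous power draw implied by an annual energy figure."""
    return kwh_per_year * 1000 / HOURS_PER_YEAR

print(round(avg_watts(350)))  # E-rated fridge-freezer: ~40 W
print(round(avg_watts(137)))  # B-rated fridge: ~16 W
print(round(avg_watts(900)))  # a ~100 W PC crunching 24/7: ~103 W
```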
Richard Haselgrove

Joined: 1 Jan 07
Posts: 1061
Credit: 36,717,389
RAC: 8,111
Message 66334 - Posted: 10 Nov 2022, 10:29:34 UTC

A fridge is much bigger than a CPU, so the energy density is lower.
Jean-David Beyer

Joined: 5 Aug 04
Posts: 1120
Credit: 17,202,915
RAC: 2,154
Message 66337 - Posted: 10 Nov 2022, 12:45:45 UTC - in response to Message 66328.  

As for noise, just install a bigger heatsink and fan


My heat sink is about 5 inches high, 5 inches deep, and 4 inches wide with half the RAM chips on each side. No room for a bigger fan. There is a temperature-controlled fan built onto one end of that heat sink. There are three fans at the front of the computer blowing air in, two fans in the power supply.

coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +79.0°C  (high = +88.0°C, crit = +98.0°C)
Core 1:        +65.0°C  (high = +88.0°C, crit = +98.0°C)
Core 2:        +63.0°C  (high = +88.0°C, crit = +98.0°C)
Core 3:        +66.0°C  (high = +88.0°C, crit = +98.0°C)
Core 5:        +73.0°C  (high = +88.0°C, crit = +98.0°C)
Core 8:        +74.0°C  (high = +88.0°C, crit = +98.0°C)
Core 9:        +65.0°C  (high = +88.0°C, crit = +98.0°C)
Core 11:       +79.0°C  (high = +88.0°C, crit = +98.0°C)
Core 12:       +61.0°C  (high = +88.0°C, crit = +98.0°C)

amdgpu-pci-6500
Adapter: PCI adapter
vddgfx:       +0.76 V
fan1:        2052 RPM  (min = 1800 RPM, max = 6000 RPM)
edge:         +32.0°C  (crit = +97.0°C, hyst = -273.1°C)
power1:        4.25 W  (cap =  25.00 W)

dell_smm-virtual-0
Adapter: Virtual device
fan1:        4252 RPM
fan2:        1135 RPM
fan3:        3869 RPM

ProfileDave Jackson
Volunteer moderator

Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 66339 - Posted: 10 Nov 2022, 16:51:45 UTC
Last modified: 10 Nov 2022, 17:00:57 UTC

I think we need a new thread dedicated to discussing hardware.. ;)
xii5ku

Joined: 27 Mar 21
Posts: 79
Credit: 78,306,092
RAC: 230
Message 67049 - Posted: 26 Dec 2022, 7:05:07 UTC - in response to Message 66327.  

On RAM upgrades:
Unless obliged by a service contract to do so, don't buy RAM from the OEM (Dell, for instance); just buy RAM according to the CPU and mainboard specifications. Intel ARK lists the maximum RAM capacity, which most likely requires more than one DIMM per channel, and the maximum RAM clock, which likely applies to one DIMM per channel or to LRDIMMs (i.e. it may be lower with two DIMMs per channel in the case of plain RDIMMs).

On CPU temperature and heat output:
– If core clocks are high, hot-spot temperatures will be high, almost independently of the size of the heatsink and its intake air temperature.
– Take a look into the BIOS for power limits. They might be set needlessly high by default.
– There is a trend among consumer desktop mainboard BIOSes to apply too high a voltage by default; I hope that's not the case with workstation BIOSes.
xii5ku

Joined: 27 Mar 21
Posts: 79
Credit: 78,306,092
RAC: 230
Message 67051 - Posted: 26 Dec 2022, 10:08:27 UTC
Last modified: 26 Dec 2022, 10:16:21 UTC

Hardware requirements for current "OpenIFS 43r3 Perturbed Surface" work:

The following items need to be taken into account, in descending order of concern:
1. Upload bandwidth of your internet link.
2. Disk space.
3. RAM capacity.
99. CPU. This one doesn't really matter, except of course that CPU core count has got an influence on how many tasks you may want to run in parallel, and that core count × core speed influences how many tasks you can complete per day at most. One or both of these factors (concurrent tasks, average rate of task completions) influence the sizing of items 1…3 in this list.

What I mean to say: While CPU performance related questions like "Should or shouldn't I use HT a.k.a. SMT?", "How much processor cache should be there per task to avoid memory access bottlenecks?" are arguably interesting, they are unlikely to be the deciding questions regarding possible OIFS throughput on your equipment. (From here on, I am using "OIFS" as a shorthand for "current OpenIFS 43r3 Perturbed Surface work".)

1. Sizing of the upload bandwidth of your internet link
– Each OIFS task produces 1.72 GiB of compressed result data files which need to be uploaded.
– Take the sum of the average_task_completions_per_day of all of your computers behind a given internet link, multiply with 1.72 GiB per result, and you have got the minimum uplink bandwidth which is required to sustain those average task completions per day.
– Now make an estimate of internet link downtimes and CPDN upload server downtimes, as a percentage over a longer time frame, and increase the figure for uplink bandwidth accordingly. (Problem: We can't predict these downtimes. But we can set a figure of a downtime which we aim to bridge without interruption of computation.) Also increase the figure for anything else which you want to use the uplink for.

Most of us won't resize our internet link based on what we want to compute, therefore the consideration is usually the other way around: Take your uplink bandwidth, divide by 1.72 GiB (1.85 GB) per result, multiply by estimated task duration, and you know how many tasks you can run concurrently in steady state in the optimum case. Reduce this figure for downtimes and other overhead.

Example:
– Let's say your uplink bandwidth is 7 Mb/s = 0.875 MB/s, and your average task duration is 15 h = 54,000 s.
– Then your uplink can sustain 0.875 MB/s / 1850 MB * 54,000 s ≈ 25 tasks running in parallel as long as there is no downtime.
– Let's say the CPDN upload server was down from a Friday night until the following Monday morning = 2.5 days, and you would like to be able to clear your resulting upload backlog within 1.5 days after this outage. Then this would only work if you ran no more than 25*1.5/(2.5+1.5) ≈ 9 tasks in parallel (on average during the period from beginning of the outage to the end of the clearing of your upload backlog).
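The two estimates above can be reproduced in a few lines (numbers exactly as in the example; 1.72 GiB ≈ 1850 MB per result):

```python
RESULT_MB = 1850            # ~1.72 GiB of compressed uploads per task
UPLINK_MB_S = 7 / 8         # 7 Mb/s uplink = 0.875 MB/s
TASK_SECONDS = 15 * 3600    # 15 h average task duration

# Steady-state concurrency the uplink can sustain with zero downtime:
max_parallel = UPLINK_MB_S / RESULT_MB * TASK_SECONDS
print(int(max_parallel))    # ~25 tasks

# Derate for a 2.5-day outage to be cleared within 1.5 days:
outage_days, clear_days = 2.5, 1.5
derated = max_parallel * clear_days / (outage_days + clear_days)
print(int(derated))         # ~9 tasks
```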

2. Sizing of disk space
– On the BOINC server, the workunit property rsc_disk_bound is set to 7.0 GiB.
– The server will eventually stop assigning new tasks to a host, based on rsc_disk_bound and on what the client reports about disk capacity to the server. Unfortunately I'm unclear about the details: possible versus actual disk utilization; number of new tasks or number of the host's tasks in progress… Perhaps somebody can set us straight on how this works from the BOINC server's point of view.
– Of lesser concern for disk sizing: The BOINC client should stop starting tasks once the remaining allowed disk space becomes less than rsc_disk_bound.
– On a side note, the BOINC client will kill a running task if it notices that the files which the client can identify as belonging to this task exceed rsc_disk_bound.
– Furthermore, there are the aforementioned 1.72 GiB of result data per task. During longer outages of your internet link or of CPDN's upload server (or, more generally, during periods in which you compute faster than you can upload), you need correspondingly more disk space to store the pending uploads.

3. Sizing of RAM
– You want RAM for the running tasks, some RAM for disk cache, and RAM for everything else which goes on on the computer.
– On the server, the workunit property rsc_memory_bound is set to ≈5.6 GiB.
– In my observation, on a stripped-down compute server which runs nothing but OIFS, time-averaged resident memory size of OIFS tasks is at the order of 3.5 GiB, and time-averaged disk cache footprint is at the order of 1.5 GiB/task, roughly. (So, 5 GiB together, per running task.)
– In my observation, via spot checks of completed tasks, peak resident memory size of OIFS tasks was ≈4.8 GiB on dual-socket computers running a distributor's kernel, and ≈4.4 GiB on a single-socket desktop computer running a somewhat stripped-down self-compiled kernel. The OIFS executable is statically linked, so the difference in peak memory consumption can't come from differences in installed libraries. Also, these peak sizes were consistent across a good number of completed tasks, and the different computers fetched work in the same time frames, so the difference is unlikely to come from workunit differences. I suspect the difference in peak resident memory sizes is caused by kernel and machine configuration, e.g. via per-CPU memory allocations.
– The BOINC client will not start new tasks (or will put running tasks into a waiting state) when the actual memory used by all running tasks reaches the limit of memory that BOINC is allowed to use.
– If there is swap space, the kernel will begin to swap from RAM to swap space when the resident memory of all processes on the system reaches the available physical memory.
– If the kernel needs to swap frequently, the machine will basically become unresponsive.
– The kernel will start to kill processes (memory hogs first) when the resident memory of all processes reaches the sum of available physical memory and swap space. But the machine will already have become unresponsive before that.
– On a side note, the BOINC client will kill a science task if its resident memory exceeds rsc_memory_bound, as far as I know. This should never happen since rsc_memory_bound is supposed to be set big enough by the project administrator.

To summarize, supply about 5 GiB RAM for each OIFS task which you want to run concurrently, plus enough RAM for everything else which happens on the computer. The latter is equally important to get right, but not as straightforward.
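As a rough sanity check of that rule of thumb, here is a sketch that reads installed RAM from /proc/meminfo (Linux-specific) and applies the ~5 GiB-per-task estimate; the 8 GiB reserve for "everything else" is my own assumed figure, not from the post:

```python
GIB = 1024 ** 3

def max_oifs_tasks(mem_total: int,
                   per_task: int = 5 * GIB,    # ~5 GiB per OIFS task (post's estimate)
                   reserve: int = 8 * GIB) -> int:  # assumed headroom for the rest of the system
    """How many concurrent OIFS tasks the installed RAM comfortably allows."""
    return max(0, (mem_total - reserve) // per_task)

# MemTotal is reported in KiB on Linux
with open("/proc/meminfo") as f:
    mem_kib = int(next(line for line in f if line.startswith("MemTotal")).split()[1])

print(max_oifs_tasks(mem_kib * 1024))
```

With 64 GiB installed, for example, this suggests around 11 concurrent tasks.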

If you want to avoid task failures because of out-of-memory situations, supply enough swap space on top of enough RAM. In contrast, if you want to keep the computer from ever becoming unresponsive, supply plenty of RAM for all processes plus the system's filesystem caches, yet maybe disable swap devices.
Jean-David Beyer

Joined: 5 Aug 04
Posts: 1120
Credit: 17,202,915
RAC: 2,154
Message 67063 - Posted: 28 Dec 2022, 1:22:01 UTC - in response to Message 66330.  

Even so, about 100W is nothing. A fridge gives that off. Open the window.


My machine is drawing 265 watts at the moment, running 12 Boinc tasks and my Firefox browser where I am typing this.
It is wintertime, so I do not need to open a window.

In the summertime I have a double window fan blowing outside air inside and windows open elsewhere, but when it is 90F outside, it is tough to keep the computer box cool enough to keep the processor cool enough unless I run the fans so fast as to drive me crazy.
Glenn Carver

Joined: 29 Oct 17
Posts: 1049
Credit: 16,476,460
RAC: 15,681
Message 67119 - Posted: 29 Dec 2022, 18:59:11 UTC - in response to Message 67051.  

The following items need to be taken into account, in descending order of concern:
1. Upload bandwidth of your internet link.
2. Disk space.
3. RAM capacity.
99. CPU.
As the developer of OpenIFS, I'd say that order is not correct: RAM should be top, closely followed by CPU and upload capacity. Why? Because the current configurations used by CPDN are very low resolution, bordering on scientifically unimportant. The plan is to go to higher resolutions, which will require much more RAM to fit the model comfortably, and more CPU, especially multiple cores, because the higher resolutions take longer to run.

To give you an idea, the next resolution configuration up from the one currently being run will need ~12 GB, and the one after that ~22 GB. As the resolution goes higher, the model timestep has to decrease, so runtimes get longer.

Disk space: lowest priority; most machines will have hundreds of GB if not more available. Storage is cheap.

In any system one of these will be a bottleneck to throughput; I don't think we can generalize too much here, as it will depend on what individuals have.
wujj123456

Joined: 14 Sep 08
Posts: 127
Credit: 41,996,185
RAC: 68,842
Message 67120 - Posted: 29 Dec 2022, 20:37:11 UTC - in response to Message 67119.  
Last modified: 29 Dec 2022, 20:37:44 UTC

To give you an idea, the next resolution configuration up from the one currently being run will need ~12 Gb, the one after that ~22Gb. As the resolution goes higher, the model timestep has to decrease so runtimes get longer.


This is very helpful, thanks. Guess I need to get more aggressive with memory upgrades now...

In any system, one of these will be a bottleneck to throughput, I don't think we can generalize here too much as it will depend on what individuals have.

Agree. I can see @xii5ku probably has the same unfortunate copper-based "broadband" as I do, where I get pitiful upload bandwidth relative to download. The down:up ratio is like 50-100:1 here. :-(

With the current oifs3 tasks, my upload link indeed saturates before I run out of memory, though that will likely change with the next resolutions, and that's good news for me.
Mr. P Hucker

Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 67126 - Posted: 30 Dec 2022, 9:52:46 UTC - in response to Message 66333.  

Even so, about 100W is nothing. A fridge gives that off. Open the window.
Are you sure? An E-rated American style fridge-freezer is about 350 kWh a year, that's 40W. A B-rated fridge is about 137 kWh a year, or 15W. Our i7-cpu based PCs pull over 100W without the monitors, about 900kWh a year. The waste heat from two PCs is warming our home office today, very slightly.
It gives off 100W while it's running; it depends how long it runs for. In hot weather that will be a lot. Dunno what an E rating is, but my fridge-freezer runs most of the time when the room is 25C.
Mr. P Hucker

Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 67127 - Posted: 30 Dec 2022, 9:53:13 UTC - in response to Message 66334.  

A fridge is much bigger than a CPU, so the energy density is lower.
Irrelevant, the room the heat is going into is the same size.
Mr. P Hucker

Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 67128 - Posted: 30 Dec 2022, 9:54:41 UTC - in response to Message 66337.  

My heat sink is about 5 inches high, 5 inches deep, and 4 inches wide with half the RAM chips on each side. No room for a bigger fan. There is a temperature-controlled fan built onto one end of that heat sink. There are three fans at the front of the computer blowing air in, two fans in the power supply.
I have two machines with a 6x6x6 inch cooler with two 6 inch fans on it. Ryzen 9 3900XT. Can't hear the fans. If it won't fit, get a bigger case or, as I do, just leave the side off.

©2024 cpdn.org