Questions and Answers : Getting started : Is more tasks better?
Joined: 28 Jul 17 | Posts: 1 | Credit: 524,650 | RAC: 0
What is better: running numerous tasks simultaneously, or only one at a time? If numerous is better, how many are recommended?

Warm regards, Newbie
Johan Swanepoel
Joined: 31 Dec 07 | Posts: 1152 | Credit: 22,363,583 | RAC: 5,022
That's a complex question, and the answer depends on several factors: how many cores you have (and how fast they are), and how much RAM. 2 GB of RAM per model (simulation) is recommended.

Running a model on each core will slow down your machine to some extent, so how much performance loss are you willing to accept? Many people leave at least one core free for their own use. Also, running your machine flat out can cause it to run hot, which stresses the processor. If the cooling fan goes into high gear as soon as you start BOINC, you may want to cut down on the number of models being run at once to prolong the life of your machine.
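As a rough illustration of that advice, here is a minimal Python sketch (nothing CPDN or BOINC provides; the 2 GB per model and one spare core figures are just the rules of thumb above) for estimating how many models a machine could comfortably run at once:

```python
# Rough estimate of how many CPDN models to run at once, based on the
# rules of thumb above: ~2 GB of RAM per model and one core left free.

def recommended_tasks(cores: int, ram_gb: float,
                      gb_per_model: float = 2.0,
                      spare_cores: int = 1) -> int:
    """Return a conservative count of simultaneous models."""
    by_cpu = max(cores - spare_cores, 1)   # leave a core for yourself
    by_ram = int(ram_gb // gb_per_model)   # don't exceed available RAM
    return max(min(by_cpu, by_ram), 0)

# Example: a 4-core machine with 8 GB of RAM -> 3 simultaneous models.
print(recommended_tasks(cores=4, ram_gb=8))
```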
Joined: 15 May 09 | Posts: 4544 | Credit: 19,039,635 | RAC: 18,944
Also, I don't have experience of this myself, not having CPUs with hyperthreading, but someone did report a while ago that total throughput of models on a four-core machine with hyperthreading was greatest using six or seven virtual cores rather than eight.

Over the years there have also been several posts asking why we don't have tasks that can use more than one core at a time. There are two aspects to this. Firstly, the Fortran program, which is owned by the Met Office, would have to be rewritten, and CPDN doesn't have the necessary permission to modify that code. Secondly, the serial nature of climate modelling means each result is contingent on the previous one, so probably little would be gained compared to some other computing tasks, such as rendering 3D images, where it makes a massive difference.
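To illustrate that second point in general terms (a schematic sketch only, not the Met Office code): each model timestep needs the previous timestep's output, so the main loop cannot simply be split across cores the way independent frames of a 3D render can.

```python
# Schematic contrast (not real CPDN/Met Office code): a climate-style
# timestep loop is inherently serial, while rendering frames is not.

def step(state):
    # each timestep depends entirely on the state produced by the last one
    return state + 1          # stand-in for one model timestep

def run_climate_model(n_steps):
    state = 0
    for _ in range(n_steps):  # must run in order; extra cores can't help here
        state = step(state)
    return state

def render_frame(i):
    return i * i              # stand-in for rendering one independent frame

def render_animation(n_frames):
    # frames are independent, so this loop could be spread over many cores
    return [render_frame(i) for i in range(n_frames)]
```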
Joined: 5 Aug 04 | Posts: 1496 | Credit: 95,522,203 | RAC: 0
Johan, for what it's worth: my i7-4790, running its four real cores plus one of its four hyper-threads, completes similar CPDN tasks more slowly than my lower-rated i5-4670. When the i7 was new, I tested it by adding one more hyper-thread "CPU" per test. Each additional "CPU" took a larger bite out of the processing rate than the one before it -- an obviously non-linear curve when plotted.

"We have met the enemy and he is us." -- Pogo

Greetings from coastal Washington state, the scenic US Pacific Northwest.
Joined: 7 Aug 04 | Posts: 2187 | Credit: 64,822,615 | RAC: 5,275
For most situations, hyperthreading adds 10 to 15% additional work accomplished compared to the same processor with hyperthreading turned off. Individual tasks will take quite a bit longer to complete, but the total number of models completed over a long period, and the total credits per day or week, will be 10 to 15% greater with hyperthreading.

That said, this is less true for larger, complex models that need more memory and therefore hit the cache and memory bandwidth harder. On the other hand, models with a small memory footprint (like the old FAMOUS) don't tax the cache and memory bandwidth as much and can reach the upper end of the benefit, near 20%.

With the various energy-saving and heat-protection settings for the processor in the BIOS, CPU speed may also be throttled down when you max out the number of logical cores running at the same time, which further complicates throughput expectations. It's harder to do such a test nowadays, as we often have numerous model types (batches) at different resolutions and grid sizes, with different physics taking more or less memory.
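As a rough worked example of that trade-off (the model times and the 15% figure below are illustrative assumptions, not measurements):

```python
# Hypothetical numbers illustrating the hyperthreading trade-off described
# above: total throughput rises ~10-15% while each individual task slows down.

cores = 4                 # physical cores
threads = 8               # logical cores with hyperthreading on
days_per_model = 10.0     # assumed time per model with HT off (illustrative)
ht_gain = 0.15            # assumed 15% throughput improvement

throughput_off = cores / days_per_model          # models finished per day
throughput_on = throughput_off * (1 + ht_gain)

# With 8 tasks sharing that throughput, each one takes much longer to finish.
days_per_model_ht = threads / throughput_on

print(f"HT off: {throughput_off:.2f} models/day, {days_per_model:.1f} days each")
print(f"HT on : {throughput_on:.2f} models/day, {days_per_model_ht:.1f} days each")
# -> roughly 0.40 vs 0.46 models/day, but ~17.4 days per model instead of 10.
```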