climateprediction.net (CPDN) home page
Thread 'New work Discussion'

Message boards : Number crunching : New work Discussion
Conan
Joined: 6 Jul 06
Posts: 147
Credit: 3,615,496
RAC: 420
Message 65740 - Posted: 3 Aug 2022, 9:08:12 UTC

Do you know if they run faster as 64 bit or are they the same?
If the same then what is the benefit?

Is there any reason (that you know of) why they need so much memory?
More expansive models, more parameters or something else?

Still keen to try some OpenIFS work units.

I have 64 GB of RAM on my AMD 5900X (12 cores/24 threads). As only 4 work units seem to be downloaded at any particular time (in my last two attempts to get work), I should be OK. (I run a lot of other projects as well, which limits how much work can be downloaded.)

Conan

Dave Jackson
Volunteer moderator
Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 65741 - Posted: 3 Aug 2022, 11:16:53 UTC

Do you know if they run faster as 64 bit or are they the same?
If the same then what is the benefit?

Is there any reason (that you know of) why they need so much memory?
More expansive models, more parameters or something else?


I'm afraid I don't know enough about the modelling process to answer that. I can say that many of these tasks from the testing site have peaked at 6GB of RAM on my computers. Running more than one at a time on my now-dead laptop, which only had 8GB of RAM, really slowed them down. Some have peaked at around 9GB per task, but as they don't all peak at the same time, staggering them seems to avoid major problems, in that it doesn't cause crashes. The OpenIFS code is open source and produced by a pan-European consortium, which I see as an advantage over the Met Office code. It was the amount of memory they used that alerted one of my fellow mods to the fact that they must be 64 bit.
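Limiting how many of these run at once can also be enforced with BOINC's app_config.xml, placed in the project directory (e.g. projects/climateprediction.net/). A minimal sketch; the app short name "oifs_43r3" here is a placeholder, so check the name your BOINC client actually reports (in the event log or client_state.xml):

```xml
<!-- app_config.xml: cap concurrent OpenIFS tasks so their memory
     peaks are less likely to overlap.  "oifs_43r3" is a placeholder
     app name; substitute the short name your client reports. -->
<app_config>
  <app>
    <name>oifs_43r3</name>
    <max_concurrent>2</max_concurrent>
  </app>
</app_config>
```

After editing the file, use the client's "Read config files" option (or restart BOINC) for the limit to take effect.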

Jean-David Beyer
Joined: 5 Aug 04
Posts: 1120
Credit: 17,202,915
RAC: 2,154
Message 65755 - Posted: 5 Aug 2022, 11:11:00 UTC - in response to Message 65738.  

Make sure that you have LOTS of ram before trying any!!!


When I heard about OpenIFS memory requirements, I doubled my RAM size. Right now almost all of that is unused. If I ever get any more CPDN work, any Rosetta work, or WCG work, that usage may go up some, but OpenIFS should use more.


$ free -hw
              total        used        free      shared     buffers       cache   available
Mem:           62Gi       3.4Gi       1.1Gi        87Mi       166Mi        57Gi        58Gi
Swap:          15Gi       1.0Mi        15Gi
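The "available" column above is the figure that matters for fitting tasks. A rough capacity check, as a sketch only: the ~9 GB per-task peak is an assumption taken from the figures reported earlier in this thread, not a project-supplied number.

```python
def tasks_that_fit(available_gib: float,
                   per_task_peak_gib: float = 9.0,
                   reserve_gib: float = 2.0) -> int:
    """Rough count of OpenIFS-sized tasks that fit in memory.

    per_task_peak_gib is an assumption based on the ~9 GB peaks
    reported in this thread; reserve_gib leaves headroom for the OS.
    """
    usable = available_gib - reserve_gib
    return max(0, int(usable // per_task_peak_gib))

# With the 58 GiB shown as "available" in the free output above:
print(tasks_that_fit(58))  # 6
```

Since the tasks don't all peak at once, this is conservative; staggered starts may let you run more, at the cost of swapping during overlapping peaks.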


Computer 1511241

CPU type 	GenuineIntel
Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz [Family 6 Model 85 Stepping 7]
Number of processors 	16

Operating System 	Linux Red Hat Enterprise Linux
Red Hat Enterprise Linux 8.6 (Ootpa) [4.18.0-372.19.1.el8_6.x86_64|libc 2.28 (GNU libc)]
BOINC version 	7.16.11
Memory 	62.28 GB
Cache 	16896 KB
Swap space 	15.62 GB
Total disk space 	488.04 GB
Free Disk Space 	482.76 GB
Measured floating point speed 	6.58 billion ops/sec
Measured integer speed 	30.58 billion ops/sec

Dave Jackson
Volunteer moderator
Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 65756 - Posted: 5 Aug 2022, 14:41:19 UTC

Just for information, hoping these do come to the main site soon: one of my OpenIFS tasks from testing had the following on the task page.

Peak working set size 8.77 GB
Peak swap size 9.38 GB

Oh and some of them have had final uploads in the region of 1GB too!

SolarSyonyk
Joined: 7 Sep 16
Posts: 262
Credit: 34,915,412
RAC: 16,463
Message 65757 - Posted: 5 Aug 2022, 16:12:46 UTC

Woah.

Is there any way to make those opt-in? That's massive, even for CPDN, and I would wager not many people have enough RAM to run more than one of those at a time. One of my dedicated compute boxes could only run one of those, and the other might manage three...

Mr. P Hucker
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 65758 - Posted: 5 Aug 2022, 16:15:43 UTC - in response to Message 65757.  

Woah.

Is there any way to make those opt-in? That's massive, even for CPDN, and I would wager not many people have enough RAM to run more than one of those at a time. One of my dedicated compute boxes could only run one of those, and the other might manage three...


10GB is not massive. I have three computers that will take 128GB and currently have 40, 50, and 64. RAM is cheap, upgrade your dinosaurs. Trouble is I run Windows, not geeky Linux, so I can't do them.

Dave Jackson
Volunteer moderator
Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 65759 - Posted: 5 Aug 2022, 17:21:09 UTC - in response to Message 65757.  

Woah.

Is there any way to make those opt-in? That's massive, even for CPDN, and I would wager not many people have enough RAM to run more than one of those at a time. One of my dedicated compute boxes could only run one of those, and the other might manage three...

As long as you have the minimum amount of RAM to run one task, you can run more than that if you have sufficient swap space. The first ones wouldn't download if you had less than 5GB of RAM, so the project won't let you run them unless you have at least the minimum required for one task. I was able to run four of those on a laptop with 8GB of RAM, but it slowed them down a lot. Running two was relatively OK: they spend much of their time not using that much memory, and as long as they are out of sync and not all peaking at once you can get away with it.
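Adding swap on Linux is straightforward if you have disk to spare. A sketch of the usual swap-file recipe (needs root; the 8 GiB size is illustrative, chosen to cover roughly one extra OpenIFS-sized peak):

```shell
# Create and enable an 8 GiB swap file (adjust size to taste).
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile      # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show                 # confirm the new swap is active
# To keep it across reboots, add this line to /etc/fstab:
# /swapfile none swap defaults 0 0
```

On filesystems where fallocate doesn't produce a swap-usable file (e.g. some btrfs setups), `dd if=/dev/zero of=/swapfile bs=1M count=8192` is the fallback.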

Dave Jackson
Volunteer moderator
Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 65760 - Posted: 5 Aug 2022, 17:22:48 UTC

On the Windows front, I have 2 NZ WAH2 tasks running under WINE from testing branch. There is something to do with the ancillary files that they want to get sorted before moving to bigger batches and main site work.

Mr. P Hucker
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 65761 - Posted: 5 Aug 2022, 17:37:59 UTC - in response to Message 65760.  

On the Windows front, I have 2 NZ WAH2 tasks running under WINE from testing branch. There is something to do with the ancillary files that they want to get sorted before moving to bigger batches and main site work.
Thanks, 6 computers waiting.

SolarSyonyk
Joined: 7 Sep 16
Posts: 262
Credit: 34,915,412
RAC: 16,463
Message 65762 - Posted: 5 Aug 2022, 18:17:59 UTC - in response to Message 65758.  
Last modified: 5 Aug 2022, 18:18:19 UTC

10GB is not massive. I have three computers that will take 128GB and currently have 40, 50, and 64. RAM is cheap, upgrade your dinosaurs. Trouble is I run Windows, not geeky Linux, so I can't do them.


It is "an order of magnitude larger than any other distributed compute tasks I've run." Even the 1.5GB tasks are fairly rare, and CPDN only.

Most of my boards support larger RAM, but only at significantly slower timings. They won't run 4 sticks at DDR4-3200, which is why I'm running two sticks in the compute boards right now - because bandwidth is useful with the big stuff. Though I'd hardly call a 3900X a dinosaur.

If any of those show up in bulk, I'll certainly try them out, and I can add swap space easily enough (though not enough to load up all the cores, not that I do that anyway), but it's just a major shift in RAM-per-task sizes.

Mr. P Hucker
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 65763 - Posted: 5 Aug 2022, 18:25:46 UTC - in response to Message 65762.  

It is "an order of magnitude larger than any other distributed compute tasks I've run." Even the 1.5GB tasks are fairly rare, and CPDN only.
If you've ever done Rosetta or LHC, or some of the maths projects, you'll be used to big tasks. I have two 10-year-old machines that take 128GB. I never compromise on the motherboard or RAM.

Most of my boards support larger RAM, but only at significantly slower timings. They won't run 4 sticks at DDR4-3200, which is why I'm running two sticks in the compute boards right now - because bandwidth is useful with the big stuff. Though I'd hardly call a 3900X a dinosaur.
That's odd, because none of my 7 boards change timings with more sticks. In fact, they will all run all sticks faster than the CPU supports.

If any of those show up in bulk, I'll certainly try them out, and I can add swap space easily enough (though not enough to load up all the cores, not that I do that anyway), but it's just a major shift in RAM-per-task sizes.
Ebay is full of all sorts of RAM.

SolarSyonyk
Joined: 7 Sep 16
Posts: 262
Credit: 34,915,412
RAC: 16,463
Message 65764 - Posted: 5 Aug 2022, 21:09:45 UTC

Ah, Intel side of things?

Both of my boards drop from 3200 down to 2667 or so going from 2 to 4 sticks.

I mean, I've got a rig with 72GB RAM, but it's an old Westmere era dual Xeon box with power consumption to match, and "asleep" it still pulls 150W, so I'm not using it much.

Mr. P Hucker
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 65765 - Posted: 5 Aug 2022, 21:42:42 UTC - in response to Message 65764.  
Last modified: 5 Aug 2022, 21:45:23 UTC

Ah, Intel side of things?

Both of my boards drop from 3200 down to 2667 or so going from 2 to 4 sticks.

I mean, I've got a rig with 72GB RAM, but it's an old Westmere era dual Xeon box with power consumption to match, and "asleep" it still pulls 150W, so I'm not using it much.
My motherboards are:

Ryzen: https://uk.msi.com/Motherboard/X470-GAMING-PLUS-MAX/
i5: https://www.gigabyte.com/Motherboard/Z370-HD3P-rev-10#kf
Two dual Xeon servers: https://www.dell.com/learn/us/en/05/shared-content~data-sheets~en/documents~r410-specsheet.pdf and https://www.dell.com/learn/us/en/16/shared-content~data-sheets~en/documents~r510-specsheet.pdf
Ancient quad core: https://www.asus.com/uk/supportonly/P5N-D/HelpDesk_Knowledge/
Plus these laptop sorta things it's impossible to get documentation on: https://icecat.biz/p/packard+bell/dt.ua3eh.001/imedia-pcs-workstations-s2984-32917145.html and https://www.expertreviews.co.uk/laptops/49686/acer-aspire-5741-review

I can find no indication that the first 4 (the decent ones that take loadsa RAM) will slow down with more RAM. Anyway, who needs fast RAM? I changed my Ryzen from single to dual channel and can hardly tell the difference in most games and most BOINC projects. But more RAM is always very helpful: whatever you aren't using becomes a massive disk cache.

As for power, I've put all but my gaming Ryzen in the garage. They provide a tropical environment for 2 of my parrots. Who cares if I'm using 4kW 24/7? We have fossil fuels to burn baby!

Les Bayliss
Volunteer moderator
Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 65766 - Posted: 5 Aug 2022, 22:25:11 UTC

This is getting off topic.

Mr. P Hucker
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 65767 - Posted: 5 Aug 2022, 22:28:18 UTC - in response to Message 65766.  
Last modified: 5 Aug 2022, 22:28:59 UTC

This is getting off topic.
Welcome to the art of conversation.

Anyway, it isn't off topic at all; we're discussing memory size and memory speed, which are relevant to the new larger work units.

Les Bayliss
Volunteer moderator
Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 65769 - Posted: 6 Aug 2022, 0:29:27 UTC

Hardware discussion in the new thread, please.

[SG]Felix
Joined: 4 Oct 15
Posts: 34
Credit: 9,075,151
RAC: 374
Message 65772 - Posted: 6 Aug 2022, 7:19:37 UTC - in response to Message 65736.  

All batches are being processed, are there any new work developments on the horizon?
Another 63 HADCM3S tasks on the testing server at the moment. Pretty sure all that testing will lead to main site work at some point but no hints in discussions in other places about how long this will take or what they need to know before sending them out.



Hold them back, I still have more than enough work downloaded ^^

https://www.cpdn.org/results.php?hostid=1521318&offset=0&show_names=0&state=1&appid=

One day I looked into the VM, and it was more than full :)

Greets
Felix

JIM
Joined: 31 Dec 07
Posts: 1152
Credit: 22,363,583
RAC: 5,022
Message 65779 - Posted: 7 Aug 2022, 2:39:00 UTC

What OS will these HADCM3S tasks be for?

Les Bayliss
Volunteer moderator
Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 65780 - Posted: 7 Aug 2022, 3:07:02 UTC - in response to Message 65779.  

Mac OS

Conan
Joined: 6 Jul 06
Posts: 147
Credit: 3,615,496
RAC: 420
Message 65781 - Posted: 7 Aug 2022, 3:48:09 UTC - in response to Message 65756.  

Just for information. Hoping these do come to main site soon. One of my Open IFS tasks from testing had the following on the task page.

Peak working set size 8.77 GB
Peak swap size 9.38 GB

Oh and some of them have had final uploads in the region of 1GB too!


The application has been placed on the Application Page, just awaiting the actual work units to go with it.

So something is moving.

Conan


©2024 cpdn.org