climateprediction.net (CPDN)

Thread 'HadAM3 Models Discussion'

Message boards : Number crunching : HadAM3 Models Discussion

DJStarfox

Joined: 27 Jan 07
Posts: 300
Credit: 3,288,263
RAC: 26,370
Message 31960 - Posted: 2 Jan 2008, 19:02:02 UTC
Last modified: 2 Jan 2008, 19:03:24 UTC

OK, I downloaded a new HadAM3 model the other day. Unlike the other sulfur and coupled models, the attribution models keep calling malloc() and free() every few seconds, back and forth, as evidenced by my memory usage going up and down in a sinusoidal way. As any programmer knows, getting memory from the OS costs significant CPU time in the kernel. Has anyone else on Linux noticed an HadAM3 model doing this? (I do not recall the Seasonal Attribution Project models doing this, but it's been a long time since I ran one of those.)

Also, the model's progress is crawling along: BOINC estimates my current sulfur model will take 520 hours to finish, but the attribution model will take 800 hours.
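
To illustrate the pattern, here is a minimal sketch (hypothetical C, not the actual model code, which is Fortran) of what this kind of per-timestep allocation churn looks like:

```c
/* Hypothetical sketch of per-timestep allocation churn; not the actual
 * model code. A large work buffer is requested and released on every
 * timestep, so resident memory rises and falls in a sawtooth and each
 * iteration pays the kernel cost of a fresh allocation. */
#include <stdlib.h>
#include <string.h>

#define NSTEPS    1000
#define BUF_BYTES (200u * 1024u * 1024u)   /* ~200 MB work array */

int main(void) {
    for (int step = 0; step < NSTEPS; step++) {
        double *work = malloc(BUF_BYTES);  /* new allocation every timestep */
        if (work == NULL)
            return 1;
        memset(work, 0, BUF_BYTES);        /* stand-in for the real physics */
        free(work);                        /* handed straight back to the OS */
    }
    return 0;
}
```

(An allocation this large typically comes from the OS via mmap and goes straight back on free, which is exactly what produces the up-and-down memory graph.)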
ID: 31960
geophi
Volunteer moderator
Joined: 7 Aug 04
Posts: 2186
Credit: 64,822,615
RAC: 5,275
Message 31961 - Posted: 2 Jan 2008, 20:24:56 UTC

I noted the memory usage pattern on the Seasonal project, and it continues with this new version. From a post I made to the beta site's forum about this version of hadam3...

I downloaded a beta Linux hadam3 model and tracked the memory usage as a function of timestep. In the terminal window, the beta Linux hadam3 logs every timestep. As in the hadsm3 and hadcm3 models, the radiative timestep occurs every three model hours; in hadam3 it appears to occur between hh10 and hh20, and takes about 9 times as long as a regular timestep. Observed memory usage of hadam3_um:

During regular timesteps - hadam3_um takes up ~140 to 225 MB of resident memory
During radiation timesteps - hadam3_um takes up ~240 to 415 MB of resident memory (mostly between 300 and 400 MB)

So, the high memory usage is associated with the radiation timesteps.
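
For anyone wanting to reproduce the measurement, something like the following sketch works (Linux-specific, and a hypothetical reconstruction rather than what I actually used): it polls the VmRSS line from /proc once a second, which is the resident-memory figure quoted above.

```c
/* Hypothetical Linux-only helper: print the resident memory (VmRSS) of a
 * running process once per second. Pass the hadam3_um pid as argv[1]. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    char path[64];
    snprintf(path, sizeof path, "/proc/%s/status", argv[1]);
    for (;;) {
        FILE *f = fopen(path, "r");
        if (f == NULL)                      /* process has exited */
            break;
        char line[256];
        while (fgets(line, sizeof line, f) != NULL)
            if (strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);        /* e.g. "VmRSS:  225340 kB" */
        fclose(f);
        sleep(1);
    }
    return 0;
}
```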
ID: 31961
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 31962 - Posted: 2 Jan 2008, 20:31:17 UTC


... my current sulfur model ...

Are you sure about that?
The sulphur models became obsolete in February 2006.

ID: 31962
DJStarfox

Joined: 27 Jan 07
Posts: 300
Credit: 3,288,263
RAC: 26,370
Message 31963 - Posted: 2 Jan 2008, 20:33:18 UTC - in response to Message 31961.  

Do you have any information from the science team about the future of the HadAM3 models?

Obviously, there is still data to crunch, but if they intend to keep using these models in research, I would highly recommend a little development effort on the code. It's probably moot to argue for a code change, but in this case a little effort would go a long way in terms of performance. Reusing a block of memory between loop iterations is really easy to implement; the trick will be getting a few grad students who know Fortran to do it. :) If it is taking 9x longer per timestep than the other models, then even a small per-timestep saving means a lot more models can be completed. Again, the development effort makes sense only if the HadAM models will continue to be used in research.
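
To make the suggestion concrete, here is the reuse applied to the churn sketch from earlier in the thread (hypothetical C again; in the model's Fortran the equivalent change is hoisting the allocate/deallocate pair out of the timestep loop):

```c
/* Hypothetical sketch of the suggested fix: allocate the work buffer once,
 * reuse it on every timestep, and free it once at the end. Resident memory
 * stays flat and the allocator is touched only twice. */
#include <stdlib.h>
#include <string.h>

#define NSTEPS    1000
#define BUF_BYTES (200u * 1024u * 1024u)   /* ~200 MB work array */

int main(void) {
    double *work = malloc(BUF_BYTES);      /* one allocation, up front */
    if (work == NULL)
        return 1;
    for (int step = 0; step < NSTEPS; step++) {
        memset(work, 0, BUF_BYTES);        /* buffer reused each timestep */
    }
    free(work);                            /* released once, at the end */
    return 0;
}
```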
ID: 31963
DJStarfox

Joined: 27 Jan 07
Posts: 300
Credit: 3,288,263
RAC: 26,370
Message 31964 - Posted: 2 Jan 2008, 20:34:10 UTC - in response to Message 31962.  


... my current sulfur model ...

Are you sure about that?
The sulphur models became obsolete in February 2006.


What are the HadSM3 models? That's what I have.
hadsm3fub_00ni_005917257_3
ID: 31964
geophi
Volunteer moderator
Joined: 7 Aug 04
Posts: 2186
Credit: 64,822,615
RAC: 5,275
Message 31965 - Posted: 2 Jan 2008, 20:39:59 UTC

I'm not absolutely sure that some enhancement couldn't be made to the memory usage, but the primary reason the memory requirements are higher, and the timesteps take over 10 times as long as the slab model's, is that hadam3 is a much higher-resolution model than hadsm3 or hadcm3. More grid points horizontally and vertically mean much more computation time per timestep.
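
As a rough illustration of why resolution dominates (the factors below are made up for the example, not the actual hadam3 grid dimensions):

```c
/* Illustrative arithmetic only; hypothetical scaling factors, not the
 * actual hadam3 vs hadsm3 grids. */
#include <stdio.h>

int main(void) {
    double horiz_points = 2.0 * 2.0;  /* doubled resolution in both horizontal directions */
    double vert_levels  = 1.5;        /* e.g. 50% more vertical levels */
    double timesteps    = 2.0;        /* stability (CFL) roughly halves the usable dt */
    printf("relative cost per model-day: ~%.0fx\n",
           horiz_points * vert_levels * timesteps);   /* ~12x */
    return 0;
}
```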
ID: 31965
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 31966 - Posted: 2 Jan 2008, 20:54:53 UTC


HadSM3 = slab (ocean) models (the SM part).

The HadAM3 models were tested for some time on the beta site before being released here, so I would think that they were made as good as it was possible to make them.

They WILL be around for a while, and used for a variety of research, not just the UK floods of 2000 as the SAP project was.

ID: 31966
astroWX
Volunteer moderator
Joined: 5 Aug 04
Posts: 1496
Credit: 95,522,203
RAC: 0
Message 31967 - Posted: 2 Jan 2008, 20:58:58 UTC
Last modified: 2 Jan 2008, 21:01:24 UTC

Last I saw, when the England flood study is concluded, there will be regional studies for South Africa, and for snow accumulation and melt rates in the US Pacific Northwest. From there? Who knows?

Edit: Heh! You must have posted at the time I began to write, Les.
"We have met the enemy and he is us." -- Pogo
Greetings from coastal Washington state, the scenic US Pacific Northwest.
ID: 31967
DJStarfox

Joined: 27 Jan 07
Posts: 300
Credit: 3,288,263
RAC: 26,370
Message 31968 - Posted: 2 Jan 2008, 21:14:24 UTC - in response to Message 31965.  

I'm not absolutely sure that some enhancement couldn't be made to the memory usage, but the primary reason the memory requirements are higher, and the timesteps take over 10 times as long as the slab model's, is that hadam3 is a much higher-resolution model than hadsm3 or hadcm3. More grid points horizontally and vertically mean much more computation time per timestep.


Since the increased compute time is due to the higher resolution, it's hard to say how much of an improvement smoothing out the up-and-down memory usage would bring to the code. I can promise you there would be some performance benefit; I just don't know how much. I guess when the time comes to apply the models to the South African or US Pacific Northwest studies, the team should look at revising the code in this way.

If I weren't already busy with my graduate studies, I'd try to help, as I've always wanted to work with weather models as a career.
ID: 31968
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 31969 - Posted: 2 Jan 2008, 21:31:09 UTC


If I weren't already busy with my graduate studies, I'd try to help, as I've always wanted to work with weather models as a career.

Hate to be a wet blanket, but you'd need to work for the project, as part of Oxford University's Atmospheric, Oceanic & Planetary Physics department.
(Although I think that the programmers were moved to the Comms lab section last year.)

Or the UK Met Office's Hadley Centre in Exeter. (They own the code.)

But if that's not possible, you can always join the beta testers when the next models come up for testing.

ID: 31969
MikeMarsUK
Volunteer moderator
Joined: 13 Jan 06
Posts: 1498
Credit: 15,613,038
RAC: 0
Message 31970 - Posted: 2 Jan 2008, 21:48:15 UTC


The SAP models were identical to these, except these do 500 times fewer disk reads (at the cost of a slightly higher memory footprint).

The radiation timesteps are fairly infrequent, so the model only peaks briefly. The downside to having it maintain a steady memory footprint instead is that you'd be less able to run two or four models together on a memory-restricted machine. I can (just) run two simultaneously on a cut-down 1 GB XP machine (not recommended). If they both maintained a steady 450 MB each, that'd be impossible; a lot of memory would be sitting around unused for extended periods of time.


I've been running one at a time on a 512 MB laptop (also not recommended!) without any major issues (the laptop only gets rebooted once every couple of months, so any memory fragmentation is being dealt with successfully by the operating system).
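
Putting rough numbers on the headroom argument (all hypothetical, loosely based on the figures earlier in the thread):

```c
/* Back-of-envelope check; every number here is hypothetical. */
#include <stdio.h>

static void report(const char *label, double mb, double avail_mb) {
    printf("%-34s %4.0f MB (%s)\n", label, mb,
           mb > avail_mb ? "won't fit" : "fits");
}

int main(void) {
    double base  = 200.0;  /* MB resident during regular timesteps */
    double peak  = 450.0;  /* MB resident during a radiation timestep */
    double avail = 700.0;  /* ~1 GB machine minus what XP itself needs */

    report("two models, both steady at peak:", 2 * peak, avail);    /* 900 MB */
    report("two models, both at baseline:", 2 * base, avail);       /* 400 MB */
    report("two models, one peaking:", base + peak, avail);         /* 650 MB */
    return 0;
}
```

Since the peaks are brief, the last line is the common case, which is why two models squeeze onto a 1 GB machine at all.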
ID: 31970
DJStarfox

Joined: 27 Jan 07
Posts: 300
Credit: 3,288,263
RAC: 26,370
Message 31972 - Posted: 3 Jan 2008, 2:15:32 UTC - in response to Message 31969.  
Last modified: 3 Jan 2008, 2:16:13 UTC

Hate to be a wet blanket, but you'd need to work for the project, as part of Oxford University's Atmospheric, Oceanic & Planetary Physics department.
(Although I think that the programmers were moved to the Comms lab section last year.)

Or the UK Met Office's Hadley Centre in Exeter. (They own the code.)

But if that's not possible, you can always join the beta testers when the next models come up for testing.


Well, studying overseas is a very remote possibility, but not completely zero. Studying weather overseas sounds like a cool way to start my career. It all depends on how things go with my first grad degree and job prospects here in the USA. I suspect that "life" will change a lot for me after graduation, so perhaps I'll just be a cruncher for now. :)
ID: 31972
MikeMarsUK
Volunteer moderator
Joined: 13 Jan 06
Posts: 1498
Credit: 15,613,038
RAC: 0
Message 31978 - Posted: 3 Jan 2008, 8:08:46 UTC


Optimising the memory usage may also be a time-consuming task: the current generation of models is one million lines of Fortran, developed over thirty years. The next generation of models (HadGEM etc.) is ten million lines ...

ID: 31978
Steinar1965

Joined: 4 Sep 06
Posts: 79
Credit: 5,583,517
RAC: 0
Message 31984 - Posted: 3 Jan 2008, 22:22:37 UTC - in response to Message 31978.  


The next generation of models (HadGEM etc.) is ten million lines ...


Ouch! When do they come? And what will the requirements be to run these models?
Can't imagine running four models simultaneously...
ID: 31984
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 31985 - Posted: 3 Jan 2008, 23:10:21 UTC


These big models were/are from a group at a different uni, and as Carl (who was working on them) has now left, there's no feedback.
But Carl was working on a 64-bit model requiring 4.7 GB of RAM, with the possibility of smaller, simpler models to get them under 4 GB.

There was also talk of multithreading these big models across several CPU cores, but that first of all requires that BOINC be able to do it, and the last I heard, that was some time off.

This is some of what Carl wrote privately back in June 2007:
I ran some numbers on all cpdn/boinc machines that have trickled in the past 2 weeks:

10,545 have 2GB or more
477 have 4 GB or more
116 have 6 GB or more
92 have 8 GB or more

so you can see that it's still heavily skewed towards the 2GB mark (for the curious, 28,915 have 1GB or more, with 22,664 still getting by with less than a gig!)


Whatever, they are a long way off yet.

ID: 31985
