Questions and Answers : Wish list : Using GPUs for number crunching
Author | Message |
---|---|
Send message Joined: 16 Jan 10 Posts: 1084 Credit: 7,904,898 RAC: 2,026 |
Can 'efficiency' be used to choose between a climate model and, for example, testing a number to see whether it's prime? If knowledge is proportional to work then it might be amusing to attend a seminar on "Entropy and Knowledge" given by a philosopher and a physicist. In fact, if I remember correctly, the CPDN project leader Myles Allen is both - he could do it! Or perhaps the BOINC credit unit, the cobblestone, is the measure of knowledge and we should simply devote our efforts to the project that produces the most credits. I suspect that if minimising energy consumption is the objective then the best thing to do is to avoid distributed computing altogether. |
Send message Joined: 31 Oct 04 Posts: 336 Credit: 3,316,482 RAC: 0 |
The developer of Infernal (used by RNA-World) tested a GPU version, and of course the raw calculation was faster - but more time was lost loading the data into, and retrieving the results from, GPU memory than was saved by crunching them. Even if someone took on this job for CPDN, my guess is that CPDN would have an even worse savings (GPU) to expense (bus) ratio. By the way, I don't think Fortran is a fundamental obstacle: Fortran can certainly call functions in .so or .dll files compiled from C sources. |
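To illustrate that last point, here is a minimal sketch of what such a Fortran-to-C call could look like, using the standard ISO_C_BINDING module. The routine name gpu_saxpy and its signature are hypothetical, purely for illustration - this is not CPDN or Infernal code.

```fortran
! Hypothetical interface to a C routine "gpu_saxpy" compiled into a shared
! library; it only shows that the Fortran/C boundary itself is not the problem.
module gpu_interface
   use iso_c_binding, only: c_int, c_double
   implicit none
   interface
      ! Assumed C prototype: void gpu_saxpy(int n, double a, const double *x, double *y);
      subroutine gpu_saxpy(n, a, x, y) bind(C, name="gpu_saxpy")
         import :: c_int, c_double
         integer(c_int), value         :: n
         real(c_double), value         :: a
         real(c_double), intent(in)    :: x(*)
         real(c_double), intent(inout) :: y(*)
      end subroutine gpu_saxpy
   end interface
end module gpu_interface
```

The Fortran side only needs the interface block; any GPU work would live behind the C entry point in the library, which is where the data-transfer cost mentioned above comes in.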
Send message Joined: 28 Mar 14 Posts: 7 Credit: 47,798 RAC: 0 |
Well, AFAIK all currently active climate models use SSE2 optimizations, and my guess is that this means they're using double precision. The Fortran compiler linked a few posts back targets CUDA, and Nvidia cards have abysmally poor double-precision speed - only 1/24 of single-precision performance unless you pay $$$$ for the professional cards - so even a top-end Nvidia GTX 780 Ti only manages about 210 GFLOPS at most. A quad-core (8 threads with HT) CPU, on the other hand, is around 100 GFLOPS, meaning that even in the best case the Nvidia GPU will only be about 2x faster than the CPU. In reality even 50% of that peak on the GPU can be optimistic, meaning your "slow" CPU is outperforming your "fast" GPU. I'm actually moving away from the Nvidia card and, once prices settle, will probably try for an R9 280X as a compromise between my wish list and what I can afford. Hopefully an E3-1275 v3 will be paired with my new Asus P9D-WS motherboard (which in theory supports 4x CrossFire, ECC, RAID, etc.). In the distant future, when the price of these cards crashes on eBay, I'll probably move to 2x or 3x CrossFire to stretch the life of the system, or sooner if I need a performance upgrade. This is practically trailing-edge hardware for a gamer, but as an older gamer I'm looking for reliability over performance and have more requirements than pure gaming - although I'm not into coin mining. The R9 280X runs double precision at 1/4 of its single-precision rate; the AMD trade-off is heat for performance. This is where a comparison could start: the performance/power ratio of a ~90 W quad-core Xeon running 8 work units against a GPU. Will any GPUs be more efficient than the CPU? As I've yet to buy the graphics card there is some flexibility, but 1 GByte cards are not going to be acceptable. I was aware of the performance problems of the Nvidia line, but it gave me some crude numbers to show CPU vs GPU performance, plus there was a commercial FORTRAN compiler available. Some comments in these GPU-related threads seemed concerned that FORTRAN was not available to support GPU hardware. |
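The rough arithmetic behind those two peak figures looks something like the sketch below; the clock rates and FLOPs-per-cycle are assumptions taken from public spec sheets, not measurements of any real workload.

```fortran
! Back-of-envelope peak figures only; every number here is an assumption
! from a spec sheet, not a benchmark.
program peak_flops
   implicit none
   real :: gpu_sp, gpu_dp, cpu_dp
   ! GTX 780 Ti: 2880 CUDA cores * ~0.875 GHz * 2 FLOPs per FMA ~ 5040 GFLOPS SP
   gpu_sp = 2880.0 * 0.875 * 2.0
   ! Consumer Kepler runs double precision at 1/24 of single precision
   gpu_dp = gpu_sp / 24.0
   ! Quad-core CPU at ~3.5 GHz, ~8 DP FLOPs/cycle/core with 256-bit AVX (add + mul)
   cpu_dp = 4.0 * 3.5 * 8.0
   print '(a,f7.1)', 'GPU peak DP GFLOPS (approx): ', gpu_dp   ! ~210
   print '(a,f7.1)', 'CPU peak DP GFLOPS (approx): ', cpu_dp   ! ~112
end program peak_flops
```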
Send message Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0 |
FORTRAN has been able to support GPUs for several years - just not at the capabilities required by these professional climate models (either double precision, or 80 bits wide). And climate models are mostly about serial processing, not parallel. You can't calculate tomorrow's weather before first calculating today's. And so on. |
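A toy sketch of that serial dependency (purely schematic, nothing like the real model code): each step consumes the previous step's state, so the outer time loop cannot simply be fanned out across thousands of GPU threads; only the work inside a single step could be.

```fortran
! Schematic only: the time loop is inherently ordered because each step
! needs the result of the previous one.
program time_stepping
   implicit none
   integer, parameter :: nsteps = 365
   integer :: t
   real(8) :: state
   state = 15.0d0                        ! some initial condition
   do t = 1, nsteps                      ! must run in order: step t needs step t-1
      state = step_forward(state)
   end do
   print *, 'final state: ', state
contains
   pure function step_forward(s) result(snew)
      real(8), intent(in) :: s
      real(8) :: snew
      snew = s + 0.1d0 * sin(s)          ! stand-in for one model time step
   end function step_forward
end program time_stepping
```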
Send message Joined: 31 Dec 07 Posts: 1152 Credit: 22,363,583 RAC: 5,022 |
This issue just won't stay dead. Every time that you think it has been safely staked through the heart, it rises from the grave like Dracula in one of those old Hammer movies. |
Send message Joined: 28 Mar 14 Posts: 7 Credit: 47,798 RAC: 0 |
Hi Volvo, GPUs and weather modelling are not new. Here are some random samples from Google: http://www.mmm.ucar.edu/wrf/WG2/michalakes_lspp.pdf http://www.nvidia.com/content/PDF/sc_2010/theater/Govett_SC10.pdf http://data1.gfdl.noaa.gov/multi-core/2011/presentations/Govett_Successes%20for%20Challenges%20Using%20GPUs%20for%20Weather%20and%20Climate%20Models.pdf Being an ace FORTRAN programmer does not instantly grant detailed knowledge of programming GPU hardware. If you said there was a dedicated team researching GPU solutions, then I would concede they have the knowledge; finding somebody who has played with GPUs at home does not translate into that person having any influence over the future direction of a corporate programming project. To my knowledge these organisations all run their models on CPUs, in some cases on supercomputers - for example, a supercomputer in Tokyo is used for this purpose. If using GPUs were possible for the type of calculations required for climate models, I'm pretty sure that all these model programmers in several institutions would already have harnessed this possibility. They have every motivation to complete model runs as quickly as possible, because similar models based on the UM are used for weather prediction, for which they also run ensembles, albeit much smaller than ours at CPDN. Supercomputers can be built with GPUs, e.g. http://en.wikipedia.org/wiki/Titan_(supercomputer) or http://en.wikipedia.org/wiki/Tianhe-I, and used for climate modelling.
I wouldn't expect these two programmers to design the model, but I expect they would be considered experts in implementing stable code in the BOINC environment. If they have researched GPU processing and say it can never be done due to design limitations of the hardware platform (e.g. rounding errors), then end of story. If the BOINC programming team hasn't made that evaluation, or extra programming staff would be required to implement GPU processing, then it is a financial problem, not a technical one. Maybe the only option is to ask for donations specifically to investigate GPU processing. We are all aware that running research tasks on computers uses electricity and that we need to ensure that our computers run as efficiently as possible. One way we can reduce the carbon footprint is by ensuring that as few models crash as possible. As a Problem Manager (ITIL) in a large Telco I've seen a few train wrecks by teams and their programmers; must admit the spectacular crashes do get attention. I tend to take notice when one of the earlier posts here said, "The project's servers are already struggling to cope with the huge amounts of data being returned. Why do you want to increase this so drastically?" |
Send message Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0 |
You STILL haven't got the point. The Met Office models are a global standard. They're run by researchers in a lot of places around the globe, who know that these models are stable, and that they can compare results with other people using them. Every researcher currently using these models would have to switch to GPU programs and start again with their testing to see if they get consistent results. Why bother when the current system works? And if they don't change, then we don't. And have you also missed the point from earlier, that they're OWNED by the Met Office, who doesn't provide anyone with the source code? |
Send message Joined: 28 Mar 14 Posts: 7 Credit: 47,798 RAC: 0 |
This issue just won't stay dead. Every time that you think it has been safely staked through the heart, it rises from the grave like Dracula in one of those old Hammer movies. I haven't seen it staked through the heart, not even a flesh wound, so I expect it will keep coming back each year as a wish-list item. Floating-point double precision didn't exist in hardware on the early GPUs, and there must have been a time when there wasn't FORTRAN compiler support for GPU hardware either. Seeing postings saying that single precision is not adequate, or that FORTRAN compilers supporting GPU hardware don't exist, doesn't inflict even a flesh wound. There are real issues with GPU hardware rounding, FORTRAN compilers and their extensions, and so on. It appears from these wish-list threads that people give reasons not to investigate GPU processing, while appearing to say that nobody has actually attempted to investigate BOINC GPU processing and failed due to a GPU hardware limitation. My guess is that the expense of optimizing and then testing the code for parallel GPU processing is the killer: impossible with the available resources, and unlikely in the foreseeable future. The aim has to be the same results as the existing BOINC work units, not a new model. Throw a lot of money, probably a vast pile of money, and programmers at the problem and BOINC GPU processing is possible. That is a flesh wound that won't stop people revisiting this as a wish-list item. |
Send message Joined: 28 Mar 14 Posts: 7 Credit: 47,798 RAC: 0 |
You STILL haven't got the point. The wish-list request here is not to change the model but to optimize code running on particular hardware. The Unified Model does not run on the same supercomputer hardware at all the different sites around the world, and the FORTRAN compilers will introduce low-level optimization choices appropriate to each machine. Hopefully these optimizations do not affect the accuracy of the results, even when they change, for example, the execution order of instructions. I seem to remember that even in my CDC 6400/6600 SCOPE/KRONOS days, FORTRAN optimizations for specific hardware could exist without invalidating results. I'd expect the Met Office would see little practical benefit in resourcing a team to produce code optimized for execution on GPUs; in effect they control the purse strings, and it is their cost/benefit considerations that determine priority. The Unified Model code is not static, as improvements are incorporated over time, but if the cost of developing or maintaining GPU code exceeds any benefit to them then it is pointless. "Every researcher currently using these models would have to switch to GPU programs and start again with their testing to see if they get consistent results." A request to optimize code is not a request to change the model, therefore there is no requirement for researchers to switch programs. Of course, if the new code were faster than the existing code there might be an incentive to upgrade. "Why bother when the current system works? And if they don't change, then we don't." Viewpoints are different. On one side there is access to a resource of wasted CPU instruction cycles that could be utilized for climate studies, while on the other side there are people who are willing to donate their wasted CPU and GPU cycles for a good purpose. As a person donating CPU and GPU cycles I wish to donate all those cycles, not just a fraction. Other BOINC projects will fully utilize spare CPU and GPU processor cycles, therefore I perceive a higher benefit in donating CPU and GPU cycles to one of those projects. I'm guessing people will keep revisiting this wish-list topic. |
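On the instruction-ordering point, here is a tiny, purely illustrative sketch (nothing to do with the UM code) of why reordering floating-point operations is not automatically harmless: summing the same numbers in a different order gives a slightly different answer, which is exactly the kind of change a different optimizer or processor can introduce.

```fortran
! Illustrative only: the same numbers summed in two different orders.
! Floating-point addition is not associative, so the results can differ
! in the last bits - the reproducibility worry behind re-optimized code.
program summation_order
   implicit none
   integer, parameter :: n = 1000000
   integer :: i
   real(8) :: forward, backward
   real(8), allocatable :: x(:)
   allocate(x(n))
   do i = 1, n
      x(i) = 1.0d0 / real(i, 8)          ! harmonic series terms
   end do
   forward  = 0.0d0
   backward = 0.0d0
   do i = 1, n
      forward = forward + x(i)           ! largest terms first
   end do
   do i = n, 1, -1
      backward = backward + x(i)         ! smallest terms first
   end do
   print '(a,es23.16)', 'forward sum:  ', forward
   print '(a,es23.16)', 'backward sum: ', backward
   print '(a,es10.2)',  'difference:   ', forward - backward
end program summation_order
```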
Send message Joined: 15 May 09 Posts: 4542 Credit: 19,039,635 RAC: 18,944 |
I am sure you are right that this topic will keep being revisited, but until the ability to cope with 80-bit numbers appears on GPUs, my understanding is that it just isn't worth starting on. Once it is widely available, then it may well be worth working on, but only if someone comes up with enough money to fund the work. You have suggested having donations specifically for this, but as these would almost certainly take away from the donations the project currently receives, I think this is unlikely. The only ways I can see it happening are if a group of volunteer programmers with the skills, time and hardware resources wishes to take it on, or if someone takes it on as a master's/PhD project. Any takers? |
Send message Joined: 28 Mar 14 Posts: 7 Credit: 47,798 RAC: 0 |
I am sure you are right that this topic will keep being revisited, but until the ability to cope with 80-bit numbers appears on GPUs, my understanding is that it just isn't worth starting on. Once it is widely available, then it may well be worth working on, but only if someone comes up with enough money to fund the work. Double precision started appearing in AMD GPUs in 2008, and they all stress IEEE 754 compliance in their marketing hype. Now double precision is in every GPU, and the discussion has moved on to performance differences between Nvidia and AMD, with the cost/power/performance trade-offs between different GPUs. Programmers still have to know the hardware: https://developer.nvidia.com/sites/default/files/akamai/cuda/files/NVIDIA-CUDA-Floating-Point.pdf You have suggested having donations specifically for this, but as these would almost certainly take away from the donations the project currently receives, I think this is unlikely. The only ways I can see it happening are if a group of volunteer programmers with the skills, time and hardware resources wishes to take it on, or if someone takes it on as a master's/PhD project. I think there was a comment in this thread from a moderator that the Met Office doesn't release its code, which would eliminate volunteer programmer teams. I cannot know whether the task is too large or too small for a single Piled Higher and Deeper student, or whether a code optimization for a different processor is a suitable thesis topic. |
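As a rough, purely illustrative aside on why the precision width matters for long runs (toy arithmetic, not model code): an increment that falls below the resolution of single precision simply disappears, while double precision keeps it.

```fortran
! Toy accumulation only: 0.5 is below single-precision resolution at 1e8,
! so it is rounded away every time; double precision carries it fine.
program precision_demo
   implicit none
   integer :: i
   real(4) :: sum_sp
   real(8) :: sum_dp
   sum_sp = 1.0e8
   sum_dp = 1.0d8
   do i = 1, 10000000
      sum_sp = sum_sp + 0.5              ! lost at every step
      sum_dp = sum_dp + 0.5d0
   end do
   print *, 'single precision: ', sum_sp ! still ~1.0e8
   print *, 'double precision: ', sum_dp ! ~1.05e8 as expected
end program precision_demo
```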
Send message Joined: 28 Mar 14 Posts: 7 Credit: 47,798 RAC: 0 |
Last work unit completed. I'm moving to another BOINC project and won't be following this thread. CYA.. |
Send message Joined: 20 Jun 12 Posts: 1 Credit: 76,526 RAC: 0 |
Just chalk it up to inefficient code and an archaic mainframe. Eventually, to get timely results, the powers that be will realize they need to change. Until then, you might as well talk to the wall. |
Send message Joined: 13 Jan 07 Posts: 195 Credit: 10,581,566 RAC: 0 |
A CPDN model is of the order of 1 million lines of FORTRAN code. Who fancies taking that on as a project to convert to GPU? (Even if the UK Met Office, which owns the code, were to agree to release the source to make it feasible.) |
Send message Joined: 5 Aug 04 Posts: 1496 Credit: 95,522,203 RAC: 0 |
Methinks you know not of which you write. Lockleys calls it correctly. The UK Met Office Model was developed over decades by PhD physicists and meteorologists. Are you qualified to use the adjectives you used against that development and its ongoing maintenance? (I doubt it.) If your argument is against FORTRAN, well, yes, it is old. It is, however, a language developed specifically for scientific projects. You are on shaky ground calling it inefficient -- or calling CRAY supercomputers 'archaic'. Do you have a serious point, or are you merely another troll? (If you can do a better job, why not form a company and do it?) "We have met the enemy and he is us." -- Pogo Greetings from coastal Washington state, the scenic US Pacific Northwest. |
Send message Joined: 15 May 09 Posts: 4542 Credit: 19,039,635 RAC: 18,944 |
A case of, "If I wanted to get there, then I wouldn't start from here." |
Send message Joined: 22 Apr 15 Posts: 1 Credit: 10,139 RAC: 0 |
Titan owner here, reporting in: 60+ hours on my i7 3770K. I'm not sure how big this is compared to other projects, but http://allprojectstats.com/showuser.php?id=3492102 shows I'm on my way to being ranked #1 most handsome BOINCer in the universe, and it's 100% due to my Titan. I didn't read this thread, so I'm not sure if they have a good reason, but you'd get a lot more work done putting this on graphics cards instead of CPUs, as would every BOINC project. I would love to find out how many hours would get shaved off. Willing to beta test and everything, as long as I get the weather forecast before everyone else and get to use it as part of my magic show. |
Send message Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0 |
Perhaps you SHOULD have read the thread before posting. Then you would have found out that all of the modelling programs used here are owned and developed by the UK Met Office, for use on their supercomputers. The versions that we run are the desktop versions supplied by the Met Office for use by professional climate scientists around the planet. So, no GPUs now or in the future. |
Send message Joined: 15 May 09 Posts: 4542 Credit: 19,039,635 RAC: 18,944 |
Those of us who have followed this thread since the start know about the million lines of Fortran code involved. Out of interest, how much new code is there for each new model type? I imagine it is at least an order of magnitude less, in order for new models to come out looking at specific extreme weather events? I also wonder whether there are other sets of code being used for climate modelling apart from the Met Office programs and, assuming there are, why we do not hear of them. |
Send message Joined: 16 Jan 10 Posts: 1084 Credit: 7,904,898 RAC: 2,026 |
[Dave Jackson wrote:] Those of us who have followed this thread since the start know about the million lines of Fortran code involved. Out of interest, how much new code is there for each new model type? I imagine it is at least an order of magnitude less, in order for new models to come out looking at specific extreme weather events? ... ... my impression is that the representation of the physics changes relatively slowly, though there are new land, carbon cycle and other models added from time to time. The different experiments are principally different sets of input parameters, which include data (e.g. oceans, number of years), switch settings (i.e. turn this or that model component on or off) and output selections. There is then a rather problematic BOINC layer on top, which tries to get the industrial Met Office code to fit into the BOINC framework - and to adapt to changes in both the client and server ends of that software. At least that's not FORTRAN! |