Message boards : Number crunching : Validation pending for 9 years...
Joined: 14 Sep 08 Posts: 127 Credit: 41,725,848 RAC: 62,638
For some random reason, I decided to check back on this project after leaving it for almost 10 years. Funny that I found one WU still pending validation! Just putting it here in case it's a bug and someone is interested in digging into a time capsule: https://www.cpdn.org/workunit.php?wuid=6868191 (No, I am not really expecting anyone to waste time on that. :-P) I am also curious whether the 32-bit library requirement for Linux is still current? Ubuntu announced a while back that it planned to drop them from the repositories entirely, though the schedule has been pushed back after a lot of pushback. But it's 2019 now, so why do we still need 32-bit libraries...
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154
I am also curious whether the 32-bit library requirement for Linux is still current? It is still current.
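If it helps anyone checking their own machine, here is a rough Python sketch (my own heuristic, not anything from the project) that looks for the 32-bit loader and the usual multiarch library directories on a Linux host. It only tells you whether any i386 runtime is present at all; the specific 32-bit libraries the CPDN applications need vary by distribution and are not checked here.

```python
# Rough heuristic only: does this Linux host appear to have any 32-bit (i386)
# runtime support? Looks for the 32-bit dynamic loader and the usual
# multiarch library directories; it does not check for the specific
# 32-bit packages the CPDN applications need.

from pathlib import Path

LOADER = Path("/lib/ld-linux.so.2")  # 32-bit loader; required to start any i386 binary
LIB_DIRS = [
    Path("/lib/i386-linux-gnu"),
    Path("/usr/lib/i386-linux-gnu"),
    Path("/usr/lib32"),
]

found_dirs = [str(d) for d in LIB_DIRS if d.is_dir()]
print("32-bit loader:", "found" if LOADER.exists() else "MISSING", f"({LOADER})")
print("32-bit library directories:", ", ".join(found_dirs) or "none found")
if not LOADER.exists():
    print("32-bit CPDN tasks are likely to fail on this host.")
```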
Joined: 15 May 09 Posts: 4540 Credit: 19,013,957 RAC: 21,195
As I am sure you know, validation on some projects means you don't get credit until a second machine has confirmed the result. CPDN doesn't use validation, and you will have been granted credit if one of the tasks from that work unit was yours. However, the BOINC server code labels all returned results "validation pending" if they have not been validated by a second matching task. I hope that makes sense.
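To illustrate that distinction, here is a conceptual sketch; it is not CPDN's or BOINC's actual server code, just a toy showing how credit can be granted per returned result while the state label stays at "validation pending".

```python
# Conceptual sketch only - not BOINC server code. It illustrates why a CPDN
# task can be credited yet still show "validation pending": with no second
# matching result to compare against, nothing ever flips the state.

class Workunit:
    def __init__(self, quorum=2):
        self.quorum = quorum      # results that must agree on a quorum-based project
        self.results = []

    def report(self, credit, project_uses_validation):
        self.results.append(credit)
        if not project_uses_validation:
            # CPDN-style: credit is granted as results come in, but the
            # state label never changes.
            return f"credit {credit} granted; state: validation pending"
        if len(self.results) >= self.quorum:
            return "quorum reached, results compared; state: valid"
        return "state: validation pending (waiting for a second result)"

wu = Workunit()
print(wu.report(5000, project_uses_validation=False))
```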
With the exception of the OpenIFS tasks, which are 64-bit. The trouble is that without the 32-bit libraries you will crash a lot of the other tasks, as in their "wisdom" the project has decided not to continue the option of letting users choose on the website which types of tasks they want to run. Also, there has only been one batch of the OpenIFS tasks outside of the testing programme to date, so relying on those would mean very long gaps between getting work.
Joined: 14 Sep 08 Posts: 127 Credit: 41,725,848 RAC: 62,638
Thanks. There were 4 computers that returned results for that WU, even if I exclude the one without a runtime on the assumption that it's faulty. I don't think there is credit listed for any of the results. For now I will go and install the 32-bit libraries. Hopefully the requirement will be gone by the time Ubuntu truly stops providing them. I likely wouldn't bother to compile them manually if Ubuntu no longer maintained them; it would be a hassle to keep up to date with versions and vulnerabilities myself.
Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0
Validation, as used here, is a BOINC thing whereby the results from one computer are compared to the results produced by another computer from the same starting data. It's not possible to do this here, so the validation part has never been used and never will be.

The climate models being used now are far more complex than those used 10 years ago. A recent batch was using about 3.5 GB per model, and the models also seem to run faster (better) with 4 MB of L3 cache per model. And there is a new source of models that were tried out earlier this year. On a test run, each model was using a bit over 5 GB. It's not known when these will start appearing, but your computer is going to need a lot more memory if you intend to use all of the cores. Or else you'll need to use the "percentage to use" option in your account preferences to limit the number of cores used.

And the project people DO know about the decreasing support for 32-bit programs. I don't know why so many people think it's necessary to tell us this.
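As a rough worked example of the memory point (the 5 GB per model is the figure above; the 32 GB of RAM, 8 cores and 2 GB operating-system reserve are made-up numbers), here is how you might estimate how many models fit and what "percentage to use" setting that implies.

```python
# Back-of-the-envelope helper for the memory point above. The 5 GB per model
# comes from the post; the 32 GB of RAM, 8 cores, and 2 GB OS reserve are
# example numbers - substitute your own.

def models_that_fit(ram_gb, per_model_gb, cores, reserve_gb=2.0):
    """Return (concurrent models that fit, rough 'percentage to use' setting)."""
    fit = int((ram_gb - reserve_gb) // per_model_gb)  # leave some RAM for the OS
    fit = max(0, min(fit, cores))                     # cannot exceed the core count
    percent = int(100 * fit / cores) if cores else 0
    return fit, percent

models, percent = models_that_fit(ram_gb=32, per_model_gb=5, cores=8)
print(f"{models} models fit; set the preference to about {percent}% of processors")
# -> 6 models fit; set the preference to about 75% of processors
```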
Joined: 7 Aug 04 Posts: 2187 Credit: 64,822,615 RAC: 5,275
That was an interesting work unit, since 5 computers ran it all the way to the end. Some had errors in stderr and others didn't, but they all apparently completed successfully according to BOINC. When you look at the work unit's task listing in the link you provided, the credit column is blank. However, when you look at the individual tasks, they all have the correct credit. I've seen this with other old work units that are no longer listed on the Applications page for this project: the task listing won't show the credits, but looking at the individual tasks, the credit is there, and it is counted toward the host and user totals. It's something odd about the new website's listing of tasks.
Joined: 14 Sep 08 Posts: 127 Credit: 41,725,848 RAC: 62,638
And there is a new source of models that were tried out earlier this year. 5 GB per core?! I do run a mix of memory-light and memory-heavy workloads, but so far the biggest one used "only" 2 GB per core. 5 GB is definitely a new high. It will probably change my decision about how much memory I need for my next build.

And the project people DO know about the decreasing support for 32-bit programs. Sorry if I made you feel I am "telling" you. That's not what I meant, and I am more or less aware of how legacy stuff can linger around forever. The context is that most of the time I don't check forums; I just add a project in BAM and let it crunch. I don't have a fully automated framework to watch results across all projects on my few computers. I could have been failing all WUs on my Linux hosts due to the 32-bit library requirement without knowing for a while. A similar thing happened with LHC@home for me, as it requires some special setup for one of its apps. For weeks I was just returning errors for that application, which isn't really a nice thing. That's why I brought this up and wondered if/when I can just attach in BAM and not worry about checking results too much each time I reinstall or change my setup.
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154
5 GB per core?! I do run a mix of memory-light and memory-heavy workloads, but so far the biggest one used "only" 2 GB per core. My biggest ones (hadam4_um_8.52_i686-pc-linux-gnu N216) each use 1385 MB of virtual memory, but the amount actually resident in RAM is only 1.3 GB. Are you perhaps referring to OpenIFS models? I have not seen any of those yet.
Joined: 15 May 09 Posts: 4540 Credit: 19,013,957 RAC: 21,195
Are you perhaps referring to OpenIFS models? I have not seen any of those yet. Yes. I have only seen them in testing. I think the ones that did make it to this site actually only had about 3 GB max resident in RAM at any one time, but the ones in testing went to just over 5 GB.
Joined: 6 Oct 06 Posts: 204 Credit: 7,608,986 RAC: 0
I have seven validation-pending WUs going back nine years, but I'm not worried. Here for the science. I think I got the credits, who knows. Maybe the server was annoyed with our absence.
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154
I have 77 still pending validation, but I am not worried either. They start sometime in 2010 and end in December 2015. I have 157 valid ones. I have this much credit: 3,212,752. I wonder what it all means.
Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0
It means that BOINC has many, many, many, many, many options because of requests over the years. Projects don't have to use them if they don't want to, and CPDN doesn't use several. But they show up because of other parts of BOINC. Just ignore them. They won't go away, but they're meaningless. Of more importance is something that I've just noticed with my current list. There are 5 tasks in the tasks tab, and the last character (the issue number) is, from top to bottom, 0, 1, 2, 3, and 4. Is someone trying to tell me something, and what? Also, who?
Joined: 15 May 09 Posts: 4540 Credit: 19,013,957 RAC: 21,195
There are 5 tasks in the tasks tab, and the last character (the issue number) is, from top to bottom, 0, 1, 2, 3, and 4. You must have somehow enabled the hidden option to sort by the last character of the task name. :-) Edit: At least it shows that the implementation of giving Linux tasks 5 attempts instead of 3, to try to increase the completion rate, is working.
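To put a number on why the extra attempts help (purely illustrative; the per-attempt completion rate below is made up): if each attempt completes independently with probability p, the work unit completes within N attempts with probability 1 - (1 - p)^N.

```python
# Purely illustrative arithmetic: if each attempt at a work unit completes
# independently with probability p, the chance it completes within N
# attempts is 1 - (1 - p)**N. The value of p here is made up.

def completion_chance(p, attempts):
    return 1 - (1 - p) ** attempts

p = 0.6  # hypothetical per-attempt completion rate for long-running tasks
for n in (3, 5):
    print(f"{n} attempts: {completion_chance(p, n):.1%} chance of completion")
# -> 3 attempts: 93.6%   5 attempts: 99.0%
```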
Joined: 16 Jan 10 Posts: 1084 Credit: 7,808,726 RAC: 5,192
To add to what has been said about validation ...

The CPDN climate models run for a relatively long time and produce a relatively large amount of data. The run-time libraries of each operating system are compatible in that they can be called by a cross-platform application, but they are not, as I understand it, identical in their outputs: a library function might, for example, use an approximation or an optimisation or some processor feature in different circumstances, leading to significantly different results after millions of calculations. One model result is not more or less valid than another result just because the results differ - by what standard would it be judged?

A climate model is a markedly different type of calculation from, for example, determining whether an integer is a prime number - so it seems to me not unnatural for PrimeGrid to implement validation and for CPDN not to. Moreover, participants who have run a model for a long time might be irritated to the point of non-participation were credits to be allocated only for complete models validated by some opaque method. So, all in all, the current approach to validation seems right for CPDN.

[There was an attempt within BOINC to produce a common framework that would produce exactly the same numerical results across processors and operating systems, but I haven't seen anything about that for a long time - perhaps it was too slow. Virtual machines might be another route to reducing the variability of results, which could enable a practical validation approach.]
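A toy illustration of the reproducibility point (a chaotic iteration, not a climate model): start two runs that differ only at the level of floating-point rounding, of the kind a different maths library or processor could introduce, and after enough steps the results are unrelated, so a bitwise comparison of long runs would reject perfectly good work.

```python
# Toy illustration only (a chaotic logistic map, not a climate model): a
# perturbation near the limit of double precision - roughly the size of
# difference a different math library or CPU could introduce - grows until
# the two runs no longer agree at all, which is why bitwise comparison of
# long model runs is not a workable validation test.

def logistic_run(x, steps, r=3.9):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

x0 = 0.123456789
a = logistic_run(x0, 1000)
b = logistic_run(x0 + 1e-15, 1000)  # start ~1e-15 apart, i.e. last-digit noise
print(a, b, abs(a - b))             # after 1000 steps the results are unrelated
```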