Questions and Answers : Getting started : How to read server status?
Joined: 3 Dec 19 · Posts: 2 · Credit: 9,617 · RAC: 0
Hi there! After two years, I have rejoined CPDN. I added the project to BOINC and all looked OK, but no tasks. I looked at the server status page and it said tasks were unsent. How do I read the server status?

-----
Info:

Sat 12 Mar 2022 03:39:31 PM EST | climateprediction.net | Fetching scheduler list
Sat 12 Mar 2022 03:39:34 PM EST | climateprediction.net | Master file download succeeded
Sat 12 Mar 2022 03:39:39 PM EST | climateprediction.net | Sending scheduler request: Project initialization.
Sat 12 Mar 2022 03:39:39 PM EST | climateprediction.net | Requesting new tasks for CPU and AMD/ATI GPU
Sat 12 Mar 2022 03:39:41 PM EST | climateprediction.net | Scheduler request completed: got 0 new tasks
Sat 12 Mar 2022 03:39:41 PM EST | climateprediction.net | No tasks sent
Sat 12 Mar 2022 03:39:41 PM EST | climateprediction.net | Project requested delay of 3636 seconds
Sat 12 Mar 2022 03:39:41 PM EST | climateprediction.net | General prefs: from climateprediction.net (last modified 12-Mar-2022 15:33:16

and

Mar 11 00:47:59 pc-14 boinc[1102]: 11-Mar-2022 00:47:59 [---] Starting BOINC client version 7.16.6 for x86_64-pc-linux-gnu
Mar 11 00:48:00 pc-14 boinc[1102]: 11-Mar-2022 00:47:59 [---] log flags: file_xfer, sched_ops, task
Mar 11 00:48:00 pc-14 boinc[1102]: 11-Mar-2022 00:47:59 [---] Libraries: libcurl/7.68.0 OpenSSL/1.1.1f zlib/1.2.11 brotli/1.0.7 libidn2/2.2.0 libpsl/0.21.0 (+libidn2/2.2.0) libssh/0.9.3/openssl/zlib nghttp2/1.40.0 librtmp/2.3
Mar 11 00:48:00 pc-14 boinc[1102]: 11-Mar-2022 00:47:59 [---] Data directory: /var/lib/boinc-client
Mar 11 00:48:00 pc-14 boinc[1102]: 11-Mar-2022 00:48:00 [---] OpenCL: AMD/ATI GPU 0: AMD VERDE (LLVM 13.0.0, DRM 2.50, 5.13.0-36-generic) (driver version 22.0.0 - kisak-mesa PPA, device version OpenCL 1.1 Mesa 22.0.0 - kisak-mesa PPA, 2048MB, 2048MB available, 512 GFLOPS peak)
Mar 11 00:48:01 pc-14 boinc[1102]: 11-Mar-2022 00:48:01 [---] libc: Ubuntu GLIBC 2.31-0ubuntu9.7 version 2.31
Mar 11 00:48:01 pc-14 boinc[1102]: 11-Mar-2022 00:48:01 [---] Host name: pc-14
Mar 11 00:48:01 pc-14 boinc[1102]: 11-Mar-2022 00:48:01 [---] Processor: 8 AuthenticAMD AMD FX(tm)-8150 Eight-Core Processor [Family 21 Model 1 Stepping 2]
Mar 11 00:48:01 pc-14 boinc[1102]: 11-Mar-2022 00:48:01 [---] Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt fma4 nodeid_msr topoext perfctr_core perfctr_nb cpb hw_pstate ssbd ibpb vmmcall arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
Mar 11 00:48:01 pc-14 boinc[1102]: 11-Mar-2022 00:48:01 [---] OS: Linux Ubuntu: Ubuntu 20.04.4 LTS [5.13.0-36-generic|libc 2.31 (Ubuntu GLIBC 2.31-0ubuntu9.7)]
Mar 11 00:48:01 pc-14 boinc[1102]: 11-Mar-2022 00:48:01 [---] Memory: 11.53 GB physical, 9.31 GB virtual
Mar 11 00:48:01 pc-14 boinc[1102]: 11-Mar-2022 00:48:01 [---] Disk: 91.17 GB total, 82.26 GB free
Mar 11 00:48:01 pc-14 boinc[1102]: 11-Mar-2022 00:48:01 [---] Local time is UTC -5 hours

THANKS in advance!!!
Jay
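For anyone who wants to summarise an event log like the one above from a script, here is a minimal Python sketch. It assumes only the "date | project | message" layout shown in the paste; where your client writes its log (e.g. stdoutdae.txt in the BOINC data directory) varies by install.

```python
import re
import sys

# Matches the "date | project | message" layout of the event log pasted above.
LINE = re.compile(r"^(?P<when>.+?) \| (?P<project>[^|]+) \| (?P<msg>.*)$")
GOT = re.compile(r"got (\d+) new tasks")

def summarise(lines):
    """Count scheduler replies per project and how many tasks they delivered."""
    replies, tasks = {}, {}
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue  # skip anything that isn't in the expected log format
        proj = m.group("project").strip()
        got = GOT.search(m.group("msg"))
        if got:
            replies[proj] = replies.get(proj, 0) + 1
            tasks[proj] = tasks.get(proj, 0) + int(got.group(1))
    for proj in replies:
        print(f"{proj}: {replies[proj]} scheduler replies, {tasks[proj]} tasks received")

if __name__ == "__main__":
    # e.g.  python3 summarise_log.py < stdoutdae.txt   (path varies by install)
    summarise(sys.stdin)
```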
Joined: 7 Sep 16 · Posts: 262 · Credit: 34,915,412 · RAC: 16,463
Computing -> Server Status should take you here: https://www.cpdn.org/server_status.php

There are no Windows tasks, no Linux tasks, and the only MacOS tasks are running out quickly (mostly because machines with the wrong CPU type fail them - they're 32-bit binaries and modern MacOS only runs 64-bit). They should be fully gone in another 2-3 days. Some of us are running Mojave VMs to help compute them, but it's not worth setting that up at this point in time.

No idea when more work will show up; the project has been fairly dry lately. WCG is down for its move, so some of my boxes may just end up very bored too.
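If you'd rather check that page from a script, here is a minimal Python sketch. It assumes the stock BOINC server-status page also offers an XML export via "?xml=1"; whether CPDN's copy exposes that, and what the exact element names are, is an assumption, so the code just scans for anything that looks like an unsent-task counter:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Stock BOINC projects usually expose the status page as XML via "?xml=1".
# Whether CPDN's server does is an assumption, not something confirmed here.
URL = "https://www.cpdn.org/server_status.php?xml=1"

def print_unsent_counts(url: str = URL) -> None:
    """Fetch the server-status export and print any counters that look like
    'unsent' / 'ready to send' figures, whatever the exact schema is."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        root = ET.fromstring(resp.read())
    for elem in root.iter():
        tag = elem.tag.lower()
        if "unsent" in tag or "ready_to_send" in tag:
            print(f"{elem.tag}: {(elem.text or '').strip()}")

if __name__ == "__main__":
    print_unsent_counts()
```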
Joined: 5 Sep 04 · Posts: 7629 · Credit: 24,240,330 · RAC: 0
Also, at the top left of this page, click on Computing and then on Applications. That page lists which OS each model type will run on - except, as has been said, for the only types available at present, whose Linux versions were deprecated because of the high percentage of them failing.
Joined: 3 Dec 19 · Posts: 2 · Credit: 9,617 · RAC: 0
Thanks for the explanations! Any idea when WUs for Linux with 64-bit libraries will be created? Some of the 32-bit libraries are no longer supported in distributions.

Jay

PS - Is there a way to tell which applications require VBox?
Joined: 12 Apr 21 · Posts: 318 · Credit: 15,022,490 · RAC: 5,406
Last I've seen on the forums is that no one knows when OpenIFS (the 64-bit Linux app) will be available mainstream; it is in the testing phase, though.

None of the CPDN apps require VBox in the way that some apps at the LHC and Rosetta projects do, for example - those will tell you explicitly. At CPDN you'd use VBox to run apps for the OSs that you don't have installed. So if your PC is Linux, you'd need to install VBox (or another hypervisor) and create a Windows VM to run Windows apps, or a macOS VM to run macOS apps. You can actually do this for any project that doesn't have apps for your operating system. For example, my PCs are Windows 10, but I have a macOS Mojave VM on VBox running the macOS app at CPDN. I also use WSL2 (a way to virtualize Linux) to run Linux apps for different projects like CPDN, LHC, etc., as well as running Windows apps on Windows 10 since it's my main OS.
Joined: 15 May 09 · Posts: 4541 · Credit: 19,039,635 · RAC: 18,944
> Last I've seen on the forums is that no one knows when OpenIFS (the 64-bit Linux app) will be available mainstream; it is in the testing phase, though.

Another mod has suggested to the project that we really should only be using 64-bit task types for Mac and Linux, given recent macOS's lack of 32-bit support. I don't know enough (anything) about what makes a particular piece of research more suitable for one model type or another to know whether, once OpenIFS is out of testing and more people get familiar with it, it will mean the retirement of the other task types. What I do know is that if the testing so far is anything to go by, a lot of older computers will fall by the wayside due to insufficient RAM.
Joined: 7 Sep 16 · Posts: 262 · Credit: 34,915,412 · RAC: 16,463
Releasing 32-bit MacOS tasks that will download, try to execute, and fail on 64-bit-only MacOS is a bit weird in 2022 (Mojave is the last release that supports 32-bit tasks, and it came out over three years ago; newer Macs are ARM and can emulate x86 tasks, but only 64-bit ones), as there aren't really many machines left that can actually run them - see the failure rate from CPU-type errors. I don't know the exact figure, but a brief sample of tasks would indicate that maybe 50-75% of them have failed with that error, or with the "suspend and resume fails and gets them killed" errors. They're rather unsuited to modern MacOS...

I'd expect Linux support for 32-bit binaries to drop similarly at some point, as you can make your life an awful lot simpler that way. I get that there's concern about rewriting tasks and getting different results and such, but at some point you won't be able to get any results without rewriting stuff to be a bit more modern and keeping up with things. Keep the floating point the same and it shouldn't be that difficult to validate the same results, but at this point I do wonder if there are any people left who actually deeply understand the software they're using, or if it's just teams using "the software handed to us by the researchers of yesterdecade."

In any case, CPDN is, by far, the touchiest set of tasks any of my heaters run. Suspend/resume isn't reliable, resuming from "system randomly rebooted" isn't reliable, they need some exceedingly specific environments set up (see the MacOS VMs and the complexity of getting those going), etc. I don't mind it, but neither am I a good sample of the general population when it comes to computing. The more broken things are, the more I seem to be drawn to making them work...

It's a useful project, but it's definitely going to need some modernization of its applications in the next few years, or the pool of computers able to run the tasks is going to keep declining. Submitting 2000 tasks that "mostly fail" and getting a few results back isn't any way to run a project, IMO.
Joined: 5 Sep 04 · Posts: 7629 · Credit: 24,240,330 · RAC: 0
All of the climate models so far are created, owned, and maintained by the UK Met Office. We're just "along for the ride".
Joined: 7 Sep 16 · Posts: 262 · Credit: 34,915,412 · RAC: 16,463
I totally understand that - but at some point, the UK Met Office people need to recognize that if they don't spend some resources to catch up with the state of computing, they won't have any grid computing left. |
Joined: 12 Apr 21 · Posts: 318 · Credit: 15,022,490 · RAC: 5,406
CPDN is an odd project: it has been around for a long time, yet it has the fewest active users, the longest deadlines, and the longest-running tasks; it is sensitive to pauses and restarts, is somewhat complicated to set up and run, and has a very high task failure rate (mainly due to the previous two points). It seems best suited to a more dedicated, computer-savvy user base, but it does not do a good job of maintaining such a user base.

I think the 32-bit/64-bit thing is not as big an issue. The biggest, by far, is not policing the user base - or more precisely the computer base - by having some kind of system that reduces the number of tasks a given computer gets, to say 1 a day, if it keeps failing them. Once the machine gets fixed, or the user changes his/her usage habits and is able to complete tasks successfully, the amount can increase.
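For what it's worth, BOINC's scheduler already has a per-host daily quota mechanism roughly along these lines. A rough sketch of the idea (illustrative names only, not the actual server code): the quota is cut on every errored task, down to a trickle, and earned back as the host returns valid results.

```python
class HostQuota:
    """Illustrative per-host daily task quota: shrinks on failures, recovers on successes."""

    def __init__(self, max_per_day: int = 32, floor: int = 1):
        self.max_per_day = max_per_day  # project-configured ceiling
        self.floor = floor              # never drop below a slow trickle
        self.quota = max_per_day        # a new host starts at the ceiling

    def task_failed(self) -> None:
        # Halve the quota on every errored/failed task, down to the floor.
        self.quota = max(self.floor, self.quota // 2)

    def task_succeeded(self) -> None:
        # Recover gradually once the host starts returning valid results.
        self.quota = min(self.max_per_day, self.quota + 1)

    def may_send(self, sent_today: int) -> bool:
        return sent_today < self.quota


if __name__ == "__main__":
    # A host that trashes everything quickly ends up limited to 1 task/day,
    # then earns its quota back by completing work successfully.
    h = HostQuota()
    for _ in range(10):
        h.task_failed()
    print(h.quota)   # 1
    for _ in range(5):
        h.task_succeeded()
    print(h.quota)   # 6
```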
Joined: 7 Sep 16 · Posts: 262 · Credit: 34,915,412 · RAC: 16,463
When you're releasing MacOS tasks that don't run on any modern MacOS, I think I'd consider that an "issue." They won't run properly on the last couple of Intel releases, and they can't be emulated with Rosetta on Apple Silicon either, because that's 64-bit only as well. If the available compute pool for "MacOS" tasks is "those running old versions of MacOS in virtual machines," I don't think that's a particularly good spot for a distributed computing project to be in.

I absolutely agree about some kind of "computer reputation" - if you keep failing tasks, especially with 0.00 compute time because they can't even launch, you shouldn't get more than a slow trickle of tasks. But that doesn't change the reality that the number of computers that can run 32-bit tasks is decreasing with time. Having to dirty up a 64-bit clean install to run CPDN tasks will reduce the pool of compute even further.

If the people responsible for the tasks are fine with this then, I suppose, there's nothing more to do. It just seems a less-than-helpful status quo to me.
Joined: 12 Apr 21 · Posts: 318 · Credit: 15,022,490 · RAC: 5,406
I completely agree about modernizing the software, and that the number of computers that can run 32-bit natively is decreasing. At the same time this project has a licensing limitation (and potentially 32-bit vs. 64-bit pros and cons). OpenIFS appears to be slow-moving in development and testing, but from what some are saying, even when it comes out, the high RAM requirements will again (potentially significantly) limit the number of users able to participate. Also, its arrival won't necessarily mean the deprecation of 32-bit, as posted by Dave above. The 32-bit constraint can be overcome with virtualization and libraries, potentially for a long time going forward.

I agree that CPDN is a complicated project to run, but of the projects I'm familiar with, by far the most complicated (though pretty modern) are the LHC Theory and ATLAS sub-projects (especially ATLAS). Consider that besides BOINC, for (native) Theory and ATLAS you need a specific OS, a software distribution service, a container, and, if you regularly contribute 5 or more CPU threads, a caching proxy. Each must be properly configured, but even then things don't always work right. The projects have tried to simplify things by bundling up the containers, but that doesn't always work and one still has to install them separately. A constant internet connection is needed, RAM requirements are on the high side, and they also have issues with any kind of pause or restart - typically not erroring out tasks but restarting them from scratch (tasks take half a day or so of computing time). Tasks should be run uninterrupted from start to finish.

All things considered, I think the best thing CPDN can do right now is put a system in place to prevent unsuitable computers from wasting tasks, as well as cutting the deadlines in half. The project doesn't seem to care about getting tasks back quickly, so it seems like 1,000 or so suitable, regularly contributing computers would be enough to run the project pretty well.
Joined: 9 Dec 05 · Posts: 116 · Credit: 12,567,835 · RAC: 1,448
I completely agree with your concerns, and I would like to see those problems tackled soon. But there might be a reason for the lack of development here. Climate change has been in the news a lot in recent years, and policy makers may see this area of science as more 'sexy' and vital for mankind. The scientific groups doing the research are nowadays perhaps more likely to get funding for their work, and therefore push their climate simulations to supercomputer environments. That would probably give them results faster, and without the hassle and uncertainties involved in the CPDN environment. Just my 2 cents.
Joined: 15 May 09 · Posts: 4541 · Credit: 19,039,635 · RAC: 18,944
> I totally understand that - but at some point, the UK Met Office people need to recognize that if they don't spend some resources to catch up with the state of computing, they won't have any grid computing left.

The Met Office use these programs on their supercomputers; they don't have any reason to update them to 64-bit because they don't use distributed computing. CPDN has a license to use their programs, which, at around a million lines of Fortran, is easier than writing its own. If the project wants to go down the completely 64-bit route, it will have to write its own programs from scratch, which would entail having programmers with the right sort of expertise and experience, and the time away from doing other things - resources that are not in place at present. So the only 64-bit option that is even close to the horizon is OpenIFS, again written primarily for supercomputers.
Joined: 7 Sep 16 · Posts: 262 · Credit: 34,915,412 · RAC: 16,463
> They don't have any reason to update their programs to 64 bit

Other than the rather radically increased register count, the doubled integer register size (though if they're purely running the vector engine it may not matter that much), the radically increased virtual memory space, and the various x86 extensions that are only useful in 64-bit operating modes...

In any case, I agree there's nothing people around here can do about it at this point, beyond the sport of running ever more ancient binaries on modern systems. Time to let the current workloads drain and wait for what's next!
Joined: 5 Aug 04 · Posts: 1120 · Credit: 17,202,915 · RAC: 2,154
> Having to dirty up a 64-bit clean install to run CPDN tasks will reduce the pool of compute even further.

Yes. I am running Red Hat Enterprise Linux release 8.5 (Ootpa) and I could no longer find the 32-bit compatibility libraries on the Red Hat web site. They had them for RHEL7, but not RHEL8 - and RHEL9 is either coming out soon or out already. I looked at the EPEL site and they do not have them either. I think I got mine from CentOS (which is now owned by Red Hat).