Thread '*** Running 32bit CPDN from 64bit Linux - Discussion ***'


Questions and Answers : Unix/Linux : *** Running 32bit CPDN from 64bit Linux - Discussion ***
Jim1348

Joined: 15 Jan 06
Posts: 637
Credit: 26,751,529
RAC: 653
Message 60807 - Posted: 7 Aug 2019, 16:24:53 UTC - in response to Message 60806.  

I was just looking at the Project status list, though whether it is really accurate is another question.
https://www.cpdn.org/cpdnboinc/server_status.php
ID: 60807
Thomas Wiegand

Joined: 4 Jul 19
Posts: 31
Credit: 252,192
RAC: 0
Message 60810 - Posted: 8 Aug 2019, 3:15:58 UTC - in response to Message 60807.  

though whether it is really accurate is another question.

Yep:
Tasks in progress: 84,082 - against the
sum of the individual apps: 928 + 2,320 + 55,430 + 4,815 + 286 = 63,779

but UK Met Office HadAM4 at N144 resolution started on 1 Jul 2019 with about 626 tasks and is now at 286, so about 55% done. I completed 2 sets of them, each taking 7+ days ... so most other hosts also seem to be slow ...
ID: 60810
Dave Jackson
Volunteer moderator
Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 60811 - Posted: 8 Aug 2019, 5:14:14 UTC - in response to Message 60810.  

but UK Met Office HadAM4 at N144 resolution started on 1 Jul 2019 with about 626 tasks and is now at 286, so about 55% done. I completed 2 sets of them, each taking 7+ days ... so most other hosts also seem to be slow ...


A lot seem to go to computers that spend much of their time switched off.
ID: 60811
Thomas Wiegand

Joined: 4 Jul 19
Posts: 31
Credit: 252,192
RAC: 0
Message 60825 - Posted: 9 Aug 2019, 23:11:55 UTC - in response to Message 60811.  

A lot seem to go to computers that spend much of their time switched off.

Yes, and I explained before why this seems to happen.
24/7 computers build up a bigger gap between having a report available and actually reporting, whereas computers that are often switched off have just freshly checked in with "I am ready to take work".
I have seen this at home, tested with 6 computers here.
But someone wrote that it might not be urgent to get the results back.
ID: 60825
Michael Goetz
Joined: 2 Feb 05
Posts: 11
Credit: 983,334
RAC: 6,066
Message 61371 - Posted: 24 Oct 2019, 15:16:36 UTC

Any guidance for which packages need to be added to Debian 9 (Stretch) or Debian 10 (Buster)? I've been having a devil of a time getting 32-bit BOINC apps (not just yours) to work on Stretch or Buster. Eventually I took the path of least resistance and created a 32-bit Debian VM. It would still be nice to know what's needed for the 64-bit installs. The Ubuntu instructions don't seem to work anymore for Debian.
Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.

ID: 61371
Dave Jackson
Volunteer moderator
Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 61372 - Posted: 24 Oct 2019, 15:28:16 UTC - in response to Message 61371.  
Last modified: 24 Oct 2019, 16:11:44 UTC

Any guidance for which packages need to be added to Debian 9 (Stretch) or Debian 10 (Buster)? I've been having a devil of a time getting 32-bit BOINC apps (not just yours) to work on Stretch or Buster. Eventually I took the path of least resistance and created a 32-bit Debian VM. It would still be nice to know what's needed for the 64-bit installs. The Ubuntu instructions don't seem to work anymore for Debian.


Have you tried running ldd on the downloaded executable files? You can then put any libraries reported missing into a search engine along with "Debian" to find the package name that includes them. I'm afraid I have never used Debian, having been on various RPM distros before I moved to Ubuntu, so I don't have any better ideas. Having just "rolled my own" manager and client, I know that the number of dependencies to address is a lot less than it might seem, and I managed to work them out eventually.
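For example (the file name below is only illustrative - use whatever executables the project actually put in projects/climateprediction.net/), the lines marked "not found" are the missing 32-bit libraries:

$ ldd ./hadam4_um_8.02_i686-pc-linux-gnu | grep "not found"

If ldd only says "not a dynamic executable", the 32-bit loader itself is probably missing. On Debian, the apt-file tool (if installed) can then map a missing library to the package that provides it:

$ apt-file search libstdc++.so.6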

Edit: I looked at what the BOINC site has to say about this, but it only gives instructions for Red Hat and Ubuntu, and in the case of the latter the instructions have changed since the page was last updated.
ID: 61372
ToKamaK

Joined: 9 Dec 18
Posts: 3
Credit: 10,186,332
RAC: 279,715
Message 61468 - Posted: 5 Nov 2019, 20:39:01 UTC - in response to Message 61371.  

To install 32-bit libraries on recent Debian versions, you need to add the i386 architecture to APT using the multiarch mechanism:
$ sudo dpkg --add-architecture i386

Check that it has been properly taken into account:
$ dpkg --print-foreign-architectures
i386

Update APT's package lists:
$ sudo apt update

Then you can install any library you want in its 32-bit form thanks to multiarch. Assuming the listing at the top of this forum thread is still accurate, the Debian Buster equivalent would be:
$ sudo apt install zlib1g:i386 libncurses5:i386 libbz2-1.0:i386 libstdc++6:i386 -y

I have this in place on my Debian Sid box, but it works on Debian 10 too. Multiarch handling should not have changed much since Wheezy, so I expect the procedure is roughly the same on Debian 9.
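If you want to double-check afterwards that the dynamic loader actually sees the 32-bit copies (just an optional sanity check, not part of the procedure above):

$ ldconfig -p | grep libstdc++

Both the x86-64 entry and an i386-linux-gnu entry should be listed.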

I hope this helps
ID: 61468
Michael Goetz
Joined: 2 Feb 05
Posts: 11
Credit: 983,334
RAC: 6,066
Message 61469 - Posted: 5 Nov 2019, 22:06:44 UTC - in response to Message 61468.  

To install 32-bit libraries on recent Debian versions, you need to add the i386 architecture to APT using the multiarch mechanism: ...


Thanks!

I can confirm that this works on both Buster and Stretch.
Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.

ID: 61469
wolfman1360

Joined: 18 Feb 17
Posts: 81
Credit: 14,062,567
RAC: 2,946
Message 61707 - Posted: 18 Dec 2019, 20:41:11 UTC

Hello.
I know Linux tasks are few and far between, but this machine hasn't gotten any since I added it to the project in September.
https://www.cpdn.org/show_host_detail.php?hostid=1493090

I have all the relevant 32-bit libraries installed as far as I know - I simply followed the instructions given in this thread. Is there something else I'm missing here? This machine is running BOINC 7.16.3.

Yet after following the same steps again, I just added this other machine and it got a task right away, with an earlier version of BOINC, 7.9.3.

https://www.cpdn.org/show_host_detail.php?hostid=1496254

Any help is appreciated at this point. I don't want to force a manual update as that has never ended well for me.
ID: 61707
Jim1348

Joined: 15 Jan 06
Posts: 637
Credit: 26,751,529
RAC: 653
Message 61708 - Posted: 18 Dec 2019, 21:08:18 UTC - in response to Message 61707.  
Last modified: 18 Dec 2019, 21:13:15 UTC

Any help is appreciated at this point. I don't want to force a manual update as that has never ended well for me.

I can't offer much help, except that all four of my machines running CPDN are on BOINC 7.16.3, so that by itself is not a problem.
And there are a ton of Linux tasks now. For once, availability is not a problem (mainly because a lot of people can't run them apparently).

But I would make one suggestion: make sure your buffer is large enough. I have to set mine to 1.0 + 1.5 days now to get downloads reliably.
You might have to do more, depending on how fast your machine is. And go ahead with a manual update. The worst it can do is fail.
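If it helps, the same settings can also go in a global_prefs_override.xml file in the BOINC data directory rather than through the Manager - a minimal sketch, assuming the standard BOINC preference names (the 1.0 / 1.5 figures are just the ones I use):

<global_preferences>
   <work_buf_min_days>1.0</work_buf_min_days>
   <work_buf_additional_days>1.5</work_buf_additional_days>
</global_preferences>

The client picks it up after Options -> Read local prefs file in the Manager, or after a restart.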

EDIT: I see one of your machines has only 4 GB memory. That won't do it any more. 8 GB might work, but I would not try it with less than 16 GB these days.
ID: 61708
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 61711 - Posted: 18 Dec 2019, 22:24:56 UTC - in response to Message 61707.  

If you're also running tasks for other projects, then perhaps BOINC has decided that the computer has enough work for now.

But the best place to look is the Event Log.
This should have a line or 2 of text saying why you didn't get any work.
ID: 61711
wolfman1360

Joined: 18 Feb 17
Posts: 81
Credit: 14,062,567
RAC: 2,946
Message 61712 - Posted: 18 Dec 2019, 22:40:54 UTC - in response to Message 61708.  

Any help is appreciated at this point. I don't want to force a manual update as that has never ended well for me.

I can't offer much help, except that all four of my machines running CPDN are on BOINC 7.16.3, so that by itself is not a problem.
And there are a ton of Linux tasks now. For once, availability is not a problem (mainly because a lot of people can't run them apparently).

But I would make one suggestion: make sure your buffer is large enough. I have to set mine to 1.0 + 1.5 days now to get downloads reliably.
You might have to do more, depending on how fast your machine is. And go ahead with a manual update. The worst it can do is fail.

EDIT: I see one of your machines has only 4 GB memory. That won't do it any more. 8 GB might work, but I would not try it with less than 16 GB these days.

The machine with 4 GB total only has 2 physical cores, though. Does that make a difference? I was assuming 1 GB per core - or are these Linux tasks much more intensive? I remember Windows tasks being something like 200 MB to 1 GB depending on the task.
I'll increase the buffer on the two machines that don't seem to be getting any work - one with a 4770 and another with a 2600. The 4770 at least has 32 GB of RAM and the 2600 has 16 GB, so I can't see either of these having issues.
I'm running CPDN with a resource share of 300 now in the hope that it will get more tasks. Everything else is now set to 0 resource share, so hopefully the work queue will clear out and CPDN can start crunching.
Should I enable anything in my event log in case the scheduler is throwing back a particular error regarding libraries or the like?
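For reference, I believe the switches for that would go in cc_config.xml in the BOINC data directory - a minimal sketch, assuming the standard BOINC log flag names:

<cc_config>
   <log_flags>
      <sched_op_debug>1</sched_op_debug>
      <work_fetch_debug>1</work_fetch_debug>
   </log_flags>
</cc_config>

sched_op_debug should show what the scheduler reply actually says, and work_fetch_debug why the client did or didn't ask for work; the file is re-read via Options -> Read config files, or a client restart.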
ID: 61712
wolfman1360

Joined: 18 Feb 17
Posts: 81
Credit: 14,062,567
RAC: 2,946
Message 61714 - Posted: 18 Dec 2019, 22:47:07 UTC - in response to Message 61711.  

If you're also running tasks for other projects, then perhaps BOINC has decided that the computer has enough work for now.

But the best place to look is the Event Log.
This should have a line or 2 of text saying why you didn't get any work.


From just a few minutes ago.
12/18/2019 4:42:46 PM | climateprediction.net | Sending scheduler request: To fetch work.
12/18/2019 4:42:46 PM | climateprediction.net | Requesting new tasks for CPU
12/18/2019 4:42:48 PM | climateprediction.net | Scheduler request completed: got 0 new tasks
12/18/2019 4:42:48 PM | climateprediction.net | No tasks sent

From this machine.
https://www.cpdn.org/show_host_detail.php?hostid=1496255
ID: 61714
Dave Jackson
Volunteer moderator
Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 61715 - Posted: 18 Dec 2019, 22:54:59 UTC - in response to Message 61712.  

The machine with 4 GB total only has 2 physical cores, though. Does that make a difference? I was assuming 1 GB per core - or are these Linux tasks much more intensive?


The hadam4h tasks (the N216 resolution ones) use about 1.4 GB. I think the hadam4 (N144 resolution) tasks take a bit less. If I run 2 of the hadam4h ones at once on my 4 GB, 2 physical core machine it becomes very difficult to use, often freezing for a minute or more at a time, but it still gets tasks, so I suspect that isn't the problem. Running one hadam4h at a time is no problem, but running the African weather app from WCG alongside makes the machine freeze for up to about 30 seconds at a time.

When the OpenIFS tasks come online, that machine certainly won't get any. In testing they took just over 5 GB per task; I tried getting some and the event log told me I didn't have enough memory for them. Interestingly, my 4 core, 8 GB laptop would run four at once, but they slowed down a lot because of swapping to disk. Throughput still went up all the way to using all four cores, though not by much between three and four.
ID: 61715
wolfman1360

Joined: 18 Feb 17
Posts: 81
Credit: 14,062,567
RAC: 2,946
Message 61719 - Posted: 19 Dec 2019, 0:19:26 UTC - in response to Message 61715.  

The machine with 4 GB total only has 2 physical cores, though. Does that make a difference? I was assuming 1 GB per core - or are these Linux tasks much more intensive?


The hadam4h tasks (the N216 resolution ones) use about 1.4 GB. I think the hadam4 (N144 resolution) tasks take a bit less. If I run 2 of the hadam4h ones at once on my 4 GB, 2 physical core machine it becomes very difficult to use, often freezing for a minute or more at a time, but it still gets tasks, so I suspect that isn't the problem. Running one hadam4h at a time is no problem, but running the African weather app from WCG alongside makes the machine freeze for up to about 30 seconds at a time.

When the OpenIFS tasks come online, that machine certainly won't get any. In testing they took just over 5 GB per task; I tried getting some and the event log told me I didn't have enough memory for them. Interestingly, my 4 core, 8 GB laptop would run four at once, but they slowed down a lot because of swapping to disk. Throughput still went up all the way to using all four cores, though not by much between three and four.

I plan on setting up a Ryzen 1700X with 64 GB of RAM, so that should prove interesting. All the Linux machines are exclusively crunchers - I'm still very new to Linux and mainly SSH into them only when absolutely needed.
One thing I found quite odd is that the download size of the libraries differed between the Core 2 Duo and the 4770. One required over 100 MB and the other just over 85 MB. Is this normal? I did of course run apt-get update and upgrade just before this to make sure.

I still haven't gotten anything from the new project at WCG, but I only just opted in to it and nothing else recently, so we'll see. I've been struggling lately to find projects that pique my interest scientifically - but I always find my way back here. On both machines without CPDN work I've either aborted all other WUs or they have finished successfully, so we'll see what happens. I hope I don't end up with a bunch of idle cores because something isn't installed/configured correctly...
ID: 61719
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 61720 - Posted: 19 Dec 2019, 1:18:42 UTC - in response to Message 61712.  

wolfman

I was assuming 1 GB per core - or are these Linux tasks much more intensive?

My current hadam4, which are the N144, are using about 650 Megs each.
The current hadam4h, which are the N216, are using about 3.6 Gigs each.

It's also been found experimentally that the latter like to have access to 4 gigs of L3 cache each, otherwise they slow right down.
For the long discussion about this, see the start of this thread in the Number crunching section: UK Met Office HadAM4 at N216 resolution

So it's best to have plenty of RAM available; the next lot they come up with may need even more.
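If anyone wants to check what their own CPU has, something like this should show it (output varies a bit between distros):

$ lscpu | grep -i "l3"
$ getconf LEVEL3_CACHE_SIZE

The second one reports the size in bytes.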
ID: 61720
geophi
Volunteer moderator
Joined: 7 Aug 04
Posts: 2187
Credit: 64,822,615
RAC: 5,275
Message 61721 - Posted: 19 Dec 2019, 2:28:45 UTC - in response to Message 61720.  

wolfman

I was assuming 1 GB per core - or are these Linux tasks much more intensive?

My current hadam4, which are the N144, are using about 650 Megs each.
The current hadam4h, which are the N216, are using about 3.6 Gigs each.

It's also been found experimentally that the latter like to have access to 4 gigs of L3 cache each, otherwise they slow right down.
For the long discussion about this, see the start of this thread in the Number crunching section: UK Met Office HadAM4 at N216 resolution

So it's best to have plenty of RAM available; the next lot they come up with may need even more.

Les, I'm seeing 1.4 GB max of RAM for hadam4h (N216). The experimental openIFS models were taking 3.5 to over 5 GB of RAM each, depending on how they configured the batch.
ID: 61721
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 61722 - Posted: 19 Dec 2019, 2:39:40 UTC

Those numbers are what is showing in Properties in BOINC.
ID: 61722
wolfman1360

Joined: 18 Feb 17
Posts: 81
Credit: 14,062,567
RAC: 2,946
Message 61723 - Posted: 19 Dec 2019, 2:59:59 UTC

Is there a command to see which specific 32-bit libraries are installed? Or would I get some sort of error in the event log if they weren't?
Still no go. I may try rebooting one more time and manually updating the project on all machines, as I've heard that can force work fetch for some people.
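I think something like this should list the i386 packages that are actually installed, though I'm not sure it catches everything:

$ dpkg -l | grep :i386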
ID: 61723
geophi
Volunteer moderator
Joined: 7 Aug 04
Posts: 2187
Credit: 64,822,615
RAC: 5,275
Message 61724 - Posted: 19 Dec 2019, 4:13:13 UTC - in response to Message 61722.  

Those numbers are what is showing in Properties in BOINC.

I was checking the RES column for those tasks when running the top command in a terminal window, as well as the Memory column in the Processes tab of System Monitor, and the "Peak working set size" displayed on the completed task pages. They all show the memory used by the hadam4h tasks as between 1.3 and 1.4 GB.
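For anyone wanting to do the same check from a terminal, roughly (field names and sorting options can differ between top versions):

$ top -o RES

or

$ ps -eo rss,comm --sort=-rss | head

ps reports RSS in kilobytes.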
ID: 61724
