Message boards : Number crunching : How big a task can I run?
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154
I rebooted my Linux machine last evening and this morning I looked at my Event Log. It described, in part, my machine, thus:

Fri 12 May 2023 07:04:46 PM EDT | | Processor: 16 GenuineIntel Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz [Family 6 Model 85 Stepping 7]
Fri 12 May 2023 07:04:46 PM EDT | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est [snip]
Fri 12 May 2023 07:04:46 PM EDT | | OS: Linux Red Hat Enterprise Linux: Red Hat Enterprise Linux 8.7 (Ootpa) [4.18.0-425.13.1.el8_7.x86_64|libc 2.28]
Fri 12 May 2023 07:04:46 PM EDT | | Memory: 125.34 GB physical, 15.62 GB virtual

Now I understand the "Memory: 125.34 GB physical" part, but what does the "15.62 GB virtual" mean? Is that the largest size task I can run, even though I have lots of physical RAM left? Or what? What does it mean? If OpenIFS sends out a 16 GByte task, will I be able to run it?
Joined: 28 Jul 19 Posts: 149 Credit: 12,830,559 RAC: 228
I’ve always understood it to be the size of your swap file. If that is correct, it will not limit the size of task you can run.
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154
I’ve always understood it to be the size of your swap file.

That is extremely close...

Sat May 13 14:00:03 EDT 2023
              total        used        free      shared     buffers       cache   available
Mem:         128345       10876        3272          79         141      114054      116164
Swap:         15991           1       15990
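For anyone who wants to check the same thing on their own machine, a couple of standard Linux commands show where that "virtual" figure comes from. This is just a generic sketch, assuming a reasonably recent util-linux and procps install:

$ swapon --show                   # lists each swap partition or swap file with its size
$ grep SwapTotal /proc/meminfo    # the same total as the kernel reports it, in kB
$ free -h                         # the "Swap:" line is evidently what BOINC logs as "virtual"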
Joined: 29 Oct 17 Posts: 1048 Credit: 16,386,107 RAC: 14,921
It used to be the case that swap should always be configured to be at least as big as the RAM size, so that memory contents could be put entirely to swap. But that tends not to happen these days, though I still do it on my machines (an old sysadmin habit).

For OpenIFS and the other CPDN models you really don't want them to start using swap space, as it would cripple performance. I've discussed running OpenIFS tasks needing up to 64 GB of RAM with CPDN, but we are still some way from trying that in production.

---
CPDN Visiting Scientist
Joined: 15 May 09 Posts: 4535 Credit: 18,961,772 RAC: 21,888
It used to be the case that swap should always be configured to be at least as big as the RAM size, so that memory contents could be put entirely to swap. But that tends not to happen these days, though I still do it on my machines (an old sysadmin habit).

I have also followed that maxim, but I am likely to give it up when I add more RAM to my machine, at least until my next re-install. Currently I have 58 GB of swap, the other half of the 120 GB drive that /var/lib/boinc-client is on, mounted as that partition. At some point I will get a second NVMe disk that will allow much faster swap performance, though whether I will actually go into swap often enough to notice is a moot point.
Joined: 9 Mar 22 Posts: 30 Credit: 1,065,239 RAC: 556
The free command indeed shows the sum of all swap spaces, which can be dedicated partitions and swap files.

On modern Linux systems the old rule to set as much (or more) swap as RAM is likely obsolete, except for special use cases like hibernation or huge DBs - but it might not be a good idea to run a heavy BOINC app on such a DB system anyway. From the performance perspective it is usually much better to invest in more RAM.

Swap usage can be tuned via "vm.swappiness=[value]" in "/etc/sysctl.conf". The default value is usually 60, and you can find suggestions to increase it as well as to decrease it. I personally prefer to set it to 1 or even 0 (avoid swapping as much as possible) on a system running BOINC and having enough RAM. This works fine and fast as long as you regularly monitor your system status and plan your project mix. Other users may prefer other settings.
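For illustration, here is how that knob can be inspected and changed on a typical distribution. The value 1 is just the example mentioned above, and the use of sudo and of /etc/sysctl.conf as the persistent location are assumptions about your setup:

$ cat /proc/sys/vm/swappiness                              # show the current value, usually 60
$ sudo sysctl vm.swappiness=1                              # change it for the running kernel only
$ echo "vm.swappiness=1" | sudo tee -a /etc/sysctl.conf    # make the change persistent
$ sudo sysctl -p                                           # re-read /etc/sysctl.conf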
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154
I agree. My Linux machine is currently running 13 BOINC tasks, including three ClimatePrediction ones. It has been up quite a while (but not setting any records*). After almost two weeks, it has used a little swap space, but it is by no means thrashing to it.

$ uptime
 18:32:15 up 13 days, 23:27,  1 user,  load average: 13.39, 13.72, 13.90
$ free -hw
              total        used        free      shared     buffers       cache   available
Mem:          125Gi        10Gi       2.5Gi       152Mi       986Mi       111Gi       113Gi
Swap:          15Gi       410Mi        15Gi

_____
* My uptime record, long ago, was about six months running Red Hat Linux 7.3. They skipped the Red Hat Linux 8 series and I did not like Red Hat Linux 9, so I switched over to Red Hat Enterprise Linux 3 at that time. I am currently running Red Hat Enterprise Linux release 8.7 (Ootpa).
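If anyone wants to watch whether their tasks are actually driving the machine into swap while they run, vmstat (from the standard procps package) reports swap traffic directly. A small sketch, with an arbitrary sampling interval:

$ vmstat 5 3     # three samples, five seconds apart
# the "si" and "so" columns show memory swapped in from and out to disk per second;
# sustained non-zero values there indicate real thrashing, not just a little stale data parked in swap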