Message boards : Number crunching : Windows & WSL give quite different CPU benchmark results in boinc?
Send message Joined: 29 Oct 17 Posts: 1049 Credit: 16,476,460 RAC: 15,681 |
I am trying to understand why I get markedly different CPU benchmark results between my machines (or rather, why Windows performs poorly). I have a Win11 i7-11700K @ 3.60GHz and a Linux i7-3770 @ 3.4GHz. CPU benchmarks (https://cpu.userbenchmark.com/Compare/Intel-Core-i7-3770-vs-Intel-Core-i7-11700K/1979vs4107) say the 11th-gen chip is significantly faster than the 3rd-gen, by 41% overall. However, boincmgr tells me the CPU benchmarks are:

i7-11700K (Win11): 5747 floating point MIPS
i7-3770 (Linux): 5725 floating point MIPS

which is ridiculous. However, if I run boinc in a WSL instance (Ubuntu) on the same Win11 machine and run the benchmarks there, I get:

i7-11700K (WSL: Ubuntu): 7100 floating point MIPS

which is much more what I would expect, and that's running inside a container too. My question is: why is Windows 11 performance so poor? I have seen the same with some tasks; my i3 will complete them in about the same time as my i7. Does anyone know what's going on? I haven't developed code on Windows since XP and have only very recently come back to it, but this is not impressive, and I wonder if there are any bells and whistles I need to fix. The i7-11700K is not overclocked; it has an AIO and the CPU temperature stays in the 55-60C range (which is fine for Intel chips). Boinc is only using 4 cores, to keep throughput decent. I am considering ditching boinc on Win11 and running the projects only under WSL, given the 23% speed improvement. Comments welcome. Thanks.
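For concreteness, the gaps can be quantified directly from the scores quoted above (a trivial sketch; the numbers are the ones reported by boincmgr in this post):

```python
def pct_diff(a, b):
    """Percentage by which score a exceeds score b."""
    return (a - b) / b * 100

win11 = 5747   # i7-11700K, native Windows 11 (floating point MIPS)
linux = 5725   # i7-3770, Linux
wsl   = 7100   # i7-11700K, WSL2 Ubuntu on the same Win11 host

print(f"Win11 i7-11700K vs Linux i7-3770: {pct_diff(win11, linux):.1f}% apart")
print(f"WSL2 vs native Win11 on the same chip: {pct_diff(wsl, win11):.1f}% faster")
```

The near-parity (under half a percent) between an 11th-gen and a 3rd-gen part is what makes the native Windows score look suspect, while the WSL2 uplift on the identical hardware is the ~23% figure quoted.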
Send message Joined: 7 Aug 04 Posts: 2187 Credit: 64,822,615 RAC: 5,275 |
I've seen the same thing in Linux vs. Windows boinc benchmarks for quite a while. On a Ryzen 5 5600X (6 cores/12 threads) with a dual boot of Windows 10 and Ubuntu 20.04, and also a VMware Ubuntu 20.04 VM in Windows 10:

Windows 10, boinc 7.20.2: ~6300
Ubuntu 20.04, boinc 7.16.6: ~8945
Ubuntu 20.04, boinc 7.16.6 in a VMware VM on the Windows 10 host: ~8880

On an i7-3770, in Windows 10 and in an Ubuntu 20.04 VM on that Windows 10 host:

Windows 10, boinc 7.20.2: ~4800
Ubuntu 20.04, boinc 7.16.6 in a VMware VM on the Windows 10 host: ~5350

It isn't specific to these boinc versions, as this difference in benchmarks has been seen for quite some time. The Whetstone FP benchmark is supposed to be compiled without optimizations, but my guess as to the difference in scores between Linux and Windows is that boinc was compiled with different compiler brands or versions and/or optimization switches, so Linux boinc uses more optimizations for the benchmark. This shouldn't change the speed of a project's science application, which depends on the compiler switches used for that application. Over at World Community Grid, over the years, some science applications have run as fast or faster on Windows as on Linux on the same system, but others definitely give the edge to Linux (like ARP). What i3 do you have, and what applications are you running that take the same time as your i7-11700K?
Send message Joined: 12 Apr 21 Posts: 317 Credit: 14,884,880 RAC: 19,188 |
Never paid attention to this before, as I thought the only use of BOINC benchmarks is that runtimes are estimated more accurately if they're run. I just ran them on my PCs and got similar results.

i7-4790 (ran on all 8 cores):
- Windows 10 21H2: 4473
- Ubuntu 22.04 on WSL2: 5371 (20% higher)

Ryzen 9 5900X (ran on all 24 cores):
- Windows 10 21H2: 5354
- Ubuntu 22.04 on WSL2: 7643 (43% higher)

The only CPU project that I know of that is significantly faster on Linux (at least Ubuntu) is Universe@Home, and I guess also WCG ARP per geophi. I might have to look at some other ones more closely just to make sure. From what I remember reading, Universe got faster starting with 18.04, I believe, due to some math library updates in that version.
Send message Joined: 1 Jan 07 Posts: 1061 Credit: 36,716,561 RAC: 8,355 |
geophi wrote: "The Whetstone FP benchmark is supposed to be compiled without optimizations, but my guess as to the difference in scores between Linux and Windows is that boinc was compiled with different compiler brands or versions and/or optimization switches, so Linux boinc uses more optimizations for the benchmark."

I'd agree with that line of thinking. The Whetstone benchmark (Wiki, Roy Longbottom) dates from 1972, and is thus essentially a mainframe concept. The objective was to measure hardware speed exclusively, and to this end compiler optimisations were excluded as far as possible. In recent years, in the microprocessor age (and in particular in the Intel CPU ranges), attention has shifted from raw clock speed to maximising throughput per clock cycle: hence the introduction of ever more sophisticated SIMD instruction sets. This has largely been driven by the need for energy saving and thermal management, but it has relevance for the significance of benchmarks too. Recent versions of BOINC have been released for 64-bit operating systems only. These also require CPUs with 64-bit support, which brings with it a minimum SIMD level of SSE2; many have moved on to the various levels of AVX. By all means keep an eye on BOINC's Whetstone reports, but I would suggest that you run some real-world science jobs on varying hardware, and various operating systems, before coming to a final conclusion.
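To make the compiler-sensitivity point concrete, here is a toy Whetstone-style loop. This is an illustration only, not BOINC's actual benchmark code (which is C/C++); the point is that the reported figure depends entirely on how the loop body's arithmetic gets translated, which is exactly what varies between compiler brands, versions, and switch settings:

```python
import math
import time

def toy_whetstone(iterations=200_000):
    """Toy Whetstone-style kernel: mixed floating-point arithmetic and
    transcendental calls, echoing the classic 1972 benchmark design.
    Returns a rough 'MFLOPS'-like throughput figure."""
    x, y = 1.0, 1.0
    t0 = time.perf_counter()
    for _ in range(iterations):
        x = (x + y + math.sin(y)) * 0.49999975   # the magic constant keeps values bounded
        y = math.sqrt(x * x + 1.0) - 1.0
    elapsed = time.perf_counter() - t0
    # Count roughly 5 floating-point operations per iteration
    return (5 * iterations) / elapsed / 1e6

if __name__ == "__main__":
    print(f"{toy_whetstone():.1f} toy-MFLOPS")
```

Run the equivalent C loop compiled with `-O0` versus `-O2` and the "hardware speed" it reports changes substantially, which is the suspected source of the Windows/Linux gap.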
Send message Joined: 29 Oct 17 Posts: 1049 Credit: 16,476,460 RAC: 15,681 |
I'd agree with that as well. Off the top of my head I can't remember which tasks I noted were running slower on the Win11 boinc. It does surprise me, however, that there is such a difference in performance between the Microsoft compilers and the (probably) Intel compilers on Linux if both are supposedly using minimal optimizations. Then again, Intel does tend to be a more aggressive compiler even at low optimization levels, in my experience. I did some testing with OpenIFS and found Intel produced 15% faster code than the GNU compiler (latest versions of both) at what was supposed to be the same optimization level. The penalty was higher memory usage (~10%) for the Intel-built application. It may not be down to just the compilers though; libraries can make a difference too, particularly the maths library. I remember back in the day having a chat with a Cray compiler engineer. They always saw Intel as their direct competitor, and if a new Intel version produced faster code than the Cray compiler on their benchmarks, they'd have a very close look at what it was doing.

Interesting topic. It suggests that if we knew which project applications ran best on which OS, we could noticeably optimize throughput (i.e. credit) by choosing to run a project either on Windows or on a WSL/VBox Linux on the same host. There's probably someone somewhere who knows all this...!
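That per-project, per-OS scheduling idea could be sketched as a simple lookup over measured progress rates. The table below is hypothetical: the Rosetta figures are loosely based on measurements reported in this thread, and the Universe@Home numbers are made up purely for illustration.

```python
# Hypothetical progress rates (% of a task per hour) per project and
# runtime on the same physical host. Illustrative numbers only.
rates = {
    "Rosetta":       {"Windows 11": 1.4, "WSL2/VBox Linux": 2.9},
    "Universe@Home": {"Windows 11": 2.0, "WSL2/VBox Linux": 2.6},
}

def best_runtime(project):
    """Return the runtime with the highest measured progress rate."""
    per_os = rates[project]
    return max(per_os, key=per_os.get)

for project in rates:
    print(f"{project}: run under {best_runtime(project)}")
```

In practice one would fill the table by timing a few identical (or at least similar) tasks under each runtime on the same hardware, as suggested above, rather than trusting the Whetstone numbers.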
Send message Joined: 15 Jan 06 Posts: 637 Credit: 26,751,529 RAC: 653 |
I have done a bit of Win10 WSL2 work on various projects, and find that it runs just as fast as my Ubuntu 20.04 machines on comparable hardware. I wouldn't pay much attention to any BOINC measurement. https://www.cpdn.org/forum_thread.php?id=9025#63462 |
Send message Joined: 1 Jan 07 Posts: 1061 Credit: 36,716,561 RAC: 8,355 |
"libraries can make a difference too, particularly the maths library."

This was noticed during the heyday of SETI@Home, with volunteer developers trying all sorts of optimisation tricks. That project made heavy use of FFT transforms, and the Intel (commercial) compiler had the best library. But its use was ruled out on an open-source project.
Send message Joined: 29 Oct 17 Posts: 1049 Credit: 16,476,460 RAC: 15,681 |
"That project made heavy use of FFT transforms, and the Intel (commercial) compiler had the best library. But its use was ruled out on an open-source project."

I have the same problem with OpenIFS on CPDN. Internally at ECMWF the FFTW library ("Fastest Fourier Transform in the West") is used, but it would have cost real money to use it here. The benefit at low resolutions is small anyway.

@Jim1348, yes, I would actually say that Microsoft have done a really nice job with WSL. I run WSL2 with boinc, though it does still freeze on me now and again. The memory overhead is quite small too. I almost feel guilty for praising Microsoft, but it really is well done, even supporting X11.
Send message Joined: 15 Jan 06 Posts: 637 Credit: 26,751,529 RAC: 653 |
"I run WSL2 with boinc, though it does still freeze on me now and again."

I get the freezes too, every couple of weeks, so I don't use it any more. I hope they fix it with an upgrade. Maybe it works better on Win11, but I would prefer to avoid that as long as possible.
Send message Joined: 29 Oct 17 Posts: 1049 Credit: 16,476,460 RAC: 15,681 |
"I would suggest that you run some real-world science jobs on varying hardware, and various operating systems, before coming to a final conclusion."

To give just one example: both my 3rd-gen and 11th-gen i7s are running similar Rosetta jobs, 14res_af_hallucinated_*, with rosetta_4.20_windows_x86_64.exe and rosetta_4.20_x86_64-pc-linux-gnu respectively. On the (much) slower 3rd-gen i7 with Linux, progress is 2.9%/hr. On the faster 11th-gen i7 with Win11, progress is a measly 1.4%/hr; in fact it won't even finish in the 3 days allocated. Both machines have 'use at most' set to 70% and carry very similar loads. As I'll be looking at using VBox+OpenIFS for Windows, I think this tells me that is a potentially better move than trying to compile OpenIFS natively on Windows.
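The deadline arithmetic behind "it won't even finish in the 3 days allocated" can be checked directly from the progress rates quoted above (a sketch that assumes a perfectly steady rate and ignores time the task has already spent running):

```python
def hours_to_finish(progress_pct_per_hr):
    """Hours a task needs at a steady progress rate to reach 100%."""
    return 100.0 / progress_pct_per_hr

deadline_hr = 3 * 24  # the 3-day allocation mentioned above

linux_i7_3770 = hours_to_finish(2.9)    # ~34.5 h
win11_i7_11700k = hours_to_finish(1.4)  # ~71.4 h

print(f"Linux i7-3770:   {linux_i7_3770:.1f} h needed "
      f"(slack: {deadline_hr - linux_i7_3770:.1f} h)")
print(f"Win11 i7-11700K: {win11_i7_11700k:.1f} h needed "
      f"(slack: {deadline_hr - win11_i7_11700k:.1f} h)")
```

At a steady 1.4%/hr the task needs about 71.4 of the 72 available hours, so with any time already elapsed or any interruption it misses the deadline, consistent with what was observed.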
©2024 cpdn.org