Message boards : Number crunching : New work Discussion
Send message Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0 |
My turn: And they're all gone. :) |
Send message Joined: 7 Aug 04 Posts: 2187 Credit: 64,822,615 RAC: 5,275 |
They must be dumping more into that SAM50 batch, as there are still quite a lot in the queue now. |
Send message Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0 |
It now says 7,560 tasks for batch 859. They must have thought of something else to look for. :) |
Send message Joined: 18 Feb 17 Posts: 81 Credit: 14,029,434 RAC: 1,176 |
Just pulled in some new work. Just enough to keep some Windows boxes busy for a few days :) Luckily I just installed Windows updates on everything last night, so I should be fine for at least a week. |
Send message Joined: 11 Dec 19 Posts: 108 Credit: 3,012,142 RAC: 0 |
Some "back of the envelope" number crunching for Weather At Home 2 (wah2) hosts and jobs: 53,090 jobs in progress divided by 282 users active in the last 24 hours gives an average of about 188 jobs downloaded per user. Multiplying by the average run time of the last 100 jobs (112.1 hours) gives roughly 21,104 hours of total run time per host. That works out to about 879 days; even on the very generous assumption that the average user is running an 8-core computer 24/7, that is still about 110 days of work stored per user on average. Wouldn't it be better for the project in the long run if that number was an order of magnitude smaller? |
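The arithmetic above can be checked in a few lines (all figures are the server-status numbers quoted in the post; the 8-core, 24/7 assumption is the poster's own):

```python
jobs_in_progress = 53_090   # wah2 jobs in progress (from the post)
active_users = 282          # users active in the last 24 hours
avg_runtime_h = 112.1       # average run time of the last 100 jobs
cores = 8                   # generous assumption: 8 cores running 24/7

jobs_per_user = jobs_in_progress / active_users   # roughly 188 jobs each
hours_per_host = jobs_per_user * avg_runtime_h    # roughly 21,104 hours
days_stored = hours_per_host / 24 / cores         # roughly 110 days
print(f"{days_stored:.1f} days of work stored per user on average")
```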
Send message Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0 |
It would be better. And it's been discussed by the project. But try telling that to people who never look at their results, never visit this board, and don't seem to care about the research. |
Send message Joined: 11 Dec 19 Posts: 108 Credit: 3,012,142 RAC: 0 |
I would propose that the points awarded for a work unit be decreased on a log scale based on the turnaround time of the host. |
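A log-scale decrease like the one proposed could look something like the sketch below. Everything here is illustrative: the baseline, decay constant, and floor are invented for the example, and CPDN has no such scheme.

```python
import math

def credit_multiplier(turnaround_days, baseline_days=10.0, floor=0.25):
    """Hypothetical credit scaling: full credit for results returned
    within the baseline, then a logarithmic decay down to a floor so
    very late results still earn something."""
    if turnaround_days <= baseline_days:
        return 1.0
    decayed = 1.0 - 0.25 * math.log10(turnaround_days / baseline_days)
    return max(floor, decayed)
```

With these constants, a host returning within 10 days keeps full credit, one taking 100 days earns 0.75x, and hoarders who sit on tasks for years bottom out at the floor.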
Send message Joined: 7 Aug 04 Posts: 2187 Credit: 64,822,615 RAC: 5,275 |
And it's a little more complicated than the back-of-the-envelope calculation suggests. Many of the tasks "in progress" really aren't in progress: with deadlines of nearly a year, tasks abandoned on hosts that, for whatever reason, aren't crunching any more are still counted as in progress. How many haven't trickled for 2 months or more I don't know, but I'm sure it's a lot. The deadline should really be shortened to something far less than a year (1-3 months, depending on the model type?) so that the number is more representative of the tasks actually in progress, and tasks get re-issued sooner when they time out on a host that is no longer crunching. Just my opinion... |
Send message Joined: 18 Feb 17 Posts: 81 Credit: 14,029,434 RAC: 1,176 |
And it's a little more complicated than the back-of-the-envelope calculation suggests. Many of the tasks "in progress" really aren't in progress: with deadlines of nearly a year, tasks abandoned on hosts that, for whatever reason, aren't crunching any more are still counted as in progress. How many haven't trickled for 2 months or more I don't know, but I'm sure it's a lot. The deadline should really be shortened to something far less than a year (1-3 months, depending on the model type?) so that the number is more representative of the tasks actually in progress, and tasks get re-issued sooner when they time out on a host that is no longer crunching. Just my opinion... I feel like the current deadline is more representative of a single-core system from 16-18 years ago. Perhaps it's time to decrease it to something more realistic so the research and results can have quicker turnaround times. Lord knows I'm guilty enough of doing bad things: a few years ago I was struggling to get a machine to behave and ended up with tons of aborted or failed tasks. This certainly isn't a set-and-forget project when BOINC is set to hold 10 days of work. |
Send message Joined: 16 Jan 10 Posts: 1084 Credit: 7,827,799 RAC: 5,038 |
3,500 "Australia and New Zealand" 31-month simulations at 50 km have just been added in batch #860 (batch list). |
Send message Joined: 15 May 09 Posts: 4540 Credit: 19,039,635 RAC: 18,944 |
There are two new Windows batches in testing which may well lead to some main site work once they are finished. It also looks like some time this week there may be some more of the N216 tasks, which I shall avoid if possible on my hardware. |
Send message Joined: 11 Dec 19 Posts: 108 Credit: 3,012,142 RAC: 0 |
While I would really like to test how well my CPUs can cope with multiple N216's, I have set my clients to "No New Work" until I finish up the six N144's I am running. I expect them all to finish within the next 36 hours. I need to install my new Noctua NH-U9S CPU cooler once I clear my work queue. Hopefully that will get my server cool enough to do CPU and GPU work at the same time. |
Send message Joined: 15 May 09 Posts: 4540 Credit: 19,039,635 RAC: 18,944 |
While I would really like to test how well my CPUs can cope with multiple N216's, I have set my clients to "No New Work" until I finish up the six N144's I am running. I expect them all to finish within the next 36 hours. Given the speed and reliability of my crystal ball, my guess is you will have plenty of time to finish the N144's before they are all gone, and possibly before they are released! |
Send message Joined: 15 Jan 06 Posts: 637 Credit: 26,751,529 RAC: 653 |
While I would really like to test how well my CPUs can cope with multiple N216's Your Ryzen 3700x will work fine if you limit the N216 to four at a time. |
Send message Joined: 11 Dec 19 Posts: 108 Credit: 3,012,142 RAC: 0 |
Your Ryzen 3700x will work fine if you limit the N216 to four at a time. I was hoping to do more like six at a time. I think it might be possible if I can keep three on each 4-core chiplet by setting CPU affinity. That would use 12MB of the 16MB L3 per chiplet and prevent the processes from migrating between chiplets across the Infinity Fabric. I'll let you know how it turns out. |
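The pinning described above could be sketched as follows. The CPU numbering is an assumption (a 3700X with SMT disabled, logical CPUs 0-3 on one chiplet and 4-7 on the other; check `lscpu -e` on the actual host), and `os.sched_setaffinity` is Linux-only:

```python
import os
from itertools import cycle

# Assumed topology: logical CPUs 0-3 on chiplet (CCD) 0, 4-7 on chiplet 1.
CHIPLET_CPUS = {0: {0, 1, 2, 3}, 1: {4, 5, 6, 7}}

def plan_affinity(pids):
    """Alternate task PIDs across the two chiplets, so six tasks
    end up three per chiplet as described in the post."""
    chiplets = cycle(sorted(CHIPLET_CPUS))
    return {pid: next(chiplets) for pid in pids}

def apply_affinity(plan):
    """Pin each PID to its chiplet's CPU set so the process cannot
    migrate across the Infinity Fabric (Linux only)."""
    for pid, chiplet in plan.items():
        os.sched_setaffinity(pid, CHIPLET_CPUS[chiplet])
```

In practice one would collect the N216 task PIDs (e.g. from `pgrep`) and pass them to `plan_affinity` followed by `apply_affinity`.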
Send message Joined: 18 Feb 17 Posts: 81 Credit: 14,029,434 RAC: 1,176 |
While I would really like to test how well my CPUs can cope with multiple N216's Will that work for my 1700x as well? Unfortunately the 1800x is running Windows and I have no plans to install Linux, but does the same (running all cores) apply to WAH tasks? I will be setting max concurrent to 8 just in case; that way WCG can go ahead and scoop up the remainder. |
Send message Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0 |
The WaH2 models aren't as resource needy as the N216, so use whatever you like. |
Send message Joined: 16 Jan 10 Posts: 1084 Credit: 7,827,799 RAC: 5,038 |
3,150 Linux-only 4-month simulations at N216 resolution have just been added in batch #861 (batch list). |
Send message Joined: 31 Dec 07 Posts: 1152 Credit: 22,363,583 RAC: 5,022 |
There are two new Windows batches in testing which may well lead to some main site work once they are finished. Did these 2 batches of Windows work ever appear, or are they still in testing? |
Send message Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0 |
They're still in testing. |
©2024 cpdn.org