Thread 'Intel I7 Woes....No successful completion since April 2015'

Message boards : Number crunching : Intel I7 Woes....No successful completion since April 2015

Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 53819 - Posted: 25 Mar 2016, 18:30:49 UTC - in response to Message 53816.  

It's certainly an interesting problem:

I can run all my projects on four other machines without getting these CPDN failures. So far, running only CPDN on my I7 seems to be working consistently without failures. On my I7 (and only my I7), running one or more of the other projects somehow causes CPDN work units to fail ... but all the other projects run fine -- and this behavior only started in March/April 2015.

Of course, it could also be an overall CPU load problem -- it just seems strange that the failures only impact CPDN work units....

It would be nice if I could find the "one" project that causes the failures when I add it in. That would point to some strange interaction within BOINC that the developers could then test for -- but that may be too simple an explanation to hope for.

More later...more testing. If these six CPDN WUs finish normally (which it looks like they might), I'll start testing by adding one other project at a time...to see if I can find the culprit.

Happy Easter!
ID: 53819
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 53820 - Posted: 25 Mar 2016, 20:49:01 UTC - in response to Message 53819.  

Well...here we go...
I tried to bring my processing load up by allowing two more CPDN WUs to process. The four WUs I tried to add almost immediately failed with "error while computing". I'm now back to running just the six WUs I started testing as TEST 5 below.

Current thinking is that pushing the number of WUs processing beyond 6 is the likely suspect behind the CPDN failures. I would appreciate any insights into the error messages. I'm continuing to process the six CPDN WUs, so far with no errors....interesting....interesting....
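Side note, in case it helps anyone trying the same thing: one way to hold a project to six concurrent tasks, without limiting BOINC's overall CPU count, is an app_config.xml in the climateprediction.net folder under the BOINC projects directory. This is only a minimal sketch, assuming a 7.x client that supports the project_max_concurrent option:

<app_config>
   <!-- illustrative sketch, not from the project docs: run at most 6 CPDN tasks at once -->
   <project_max_concurrent>6</project_max_concurrent>
</app_config>

After saving it, "Read config files" in BOINC Manager (Advanced view) should pick it up without restarting the client.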


Failures were as follows:


WU 10335878 -- Failed quickly...but this WU failed early on another's machine previously
<core_client_version>7.6.29</core_client_version>
<![CDATA[
<stderr_txt>
05:25:27 (13812): start_timer_thread(): CreateThread() failed, errno 0

Model crashed: 
Leaving CPDN_Main::Monitor...
05:29:05 (17460): called boinc_finish(0)

WU 10335869 -- Failed quickly...but this WU failed early on 2 other machines previously
<core_client_version>7.6.29</core_client_version>
<![CDATA[
<stderr_txt>
05:25:29 (14204): start_timer_thread(): CreateThread() failed, errno 0

Model crashed: 
Leaving CPDN_Main::Monitor...
05:29:03 (9536): called boinc_finish(0)




WU 10332884 -- Failed with the following error:
<core_client_version>7.6.29</core_client_version>
<![CDATA[
<message>
The device does not recognize the command.
(0x16) - exit code 22 (0x16)
</message>
<stderr_txt>
Suspended CPDN Monitor - Suspend request from BOINC...
Suspended CPDN Monitor - Quit request from BOINC...
13:40:42 (18012): start_timer_thread(): CreateThread() failed, errno 0
Model crashed: ATM_DYN : INVALID THETA DETECTED. tmp/pipe_dummy 2048
Model crashed: ATM_DYN : INVALID THETA DETECTED. tmp/pipe_dummy 2048
13:48:38 (18316): start_timer_thread(): CreateThread() failed, errno 0

Model crashed: ATM_DYN : INVALID THETA DETECTED. tmp/pipe_dummy 2048

Model crashed: ATM_DYN : INVALID THETA DETECTED. tmp/pipe_dummy 2048
13:48:53 (11724): start_timer_thread(): CreateThread() failed, errno 0

Model crashed: ATM_DYN : INVALID THETA DETECTED. tmp/pipe_dummy 2048
13:49:00 (3796): start_timer_thread(): CreateThread() failed, errno 0

Model crashed: ATM_DYN : INVALID THETA DETECTED. tmp/pipe_dummy 2048
13:49:09 (18304): start_timer_thread(): CreateThread() failed, errno 0

Model crashed: ATM_DYN : INVALID THETA DETECTED. tmp/pipe_dummy 2048

Model crashed: ATM_DYN : INVALID THETA DETECTED. tmp/pipe_dummy 2048

Model crashed: ATM_DYN : INVALID THETA DETECTED. tmp/pipe_dummy 2048
Sorry, too many model crashes! :-(
Called boinc_finish

WU 1044207 -- Failed with the following error:
<core_client_version>7.6.29</core_client_version>
<![CDATA[
<stderr_txt>
diagnostics_init_unhandled_exception_monitor(): Creating hExceptionMonitorThread failed, errno 12
WARNING: BOINC Windows Runtime Debugger has been disabled.
13:44:22 (16004): start_timer_thread(): CreateThread() failed, errno 0
Global Worker:: CPDN process is not running, exiting, bRetVal = 0, checkPID=0, selfPID=16004, iMonCtr=1
diagnostics_init_unhandled_exception_monitor(): Creating hExceptionMonitorThread failed, errno 12
WARNING: BOINC Windows Runtime Debugger has been disabled.
13:45:06 (15564): start_timer_thread(): CreateThread() failed, errno 0
Regional Worker:: CPDN process is not running, exiting, bRetVal = 1, checkPID=0, selfPID=0, iMonCtr=2
Controller:: CPDN process is not running, exiting, bRetVal = 1, checkPID=15564, selfPID=19296, iMonCtr=1
Model crash detected, will try to restart...
Leaving CPDN_Main::Monitor...
13:45:33 (19296): called boinc_finish(0)

ID: 53820
Eirik Redd

Joined: 31 Aug 04
Posts: 391
Credit: 219,896,461
RAC: 649
Message 53822 - Posted: 26 Mar 2016, 7:41:20 UTC - in response to Message 53820.  

Those failures seem fairly normal - I've had several like them that were not CPU dependent; they turned out to be model problems, especially the INV THETA ones.
Keep on with the "allowing 4 cores".
Patience pays off.
If you can run 2 CPDN models at a time -- good. 3 models on 4 cores -- better.

My experience -- on older "bridge"-generation I7s -- is that they still run well.
Leave at least one real core for the OS.
Lately there are unstable models out there - they're for testing the limits of the models -
so the "INV THETA" is usually not a problem with your rig.

If you've got 2 cores/models running OK -- good. Try 3 at a time.




ID: 53822
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 53824 - Posted: 26 Mar 2016, 10:42:44 UTC - in response to Message 53822.  

My I7 has 8 cores. All projects are still disabled except for CPDN as I continue TEST 5.

Right now I don't constrain BOINC to a maximum # of cores (I never have). I'm going to let the 6 WUs continue for another day -- current processing is taking approx 75%-80% of my CPU capacity, and my CPU cores are running at 65-75 degrees centigrade.

I may try allowing another WU to run again to see if adding 1 or 2 more is what is actually causing the added WUs to fail...it seems like the failures hit the newly added WUs when I allow them.

Perhaps I could/should restrict BOINC to no more than 75% of my processing capacity (six cores), but I don't want to do that yet, until I can "prove" that the failures are being caused by my CPU (and not just "bad" models).
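For the record, the knob for that is "Use at most X% of the CPUs" under Computing preferences, and the same thing can be written into a global_prefs_override.xml in the BOINC data directory. A minimal sketch only, assuming the usual 7.x override-file format (75% of 8 logical CPUs works out to 6):

<global_preferences>
   <!-- illustrative sketch: let BOINC use at most 75% of the logical CPUs (6 of 8) -->
   <max_ncpus_pct>75.0</max_ncpus_pct>
</global_preferences>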

ID: 53824
Jim1348

Joined: 15 Jan 06
Posts: 637
Credit: 26,751,529
RAC: 653
Message 53827 - Posted: 26 Mar 2016, 19:54:28 UTC - in response to Message 53824.  

I may try allowing another WU to run again to see if adding 1 or 2 more is what is actually causing the added WU's to fail...seems like the failures are for the new/added WUs when I allow them.

I have noticed a curious phenomenon, maybe more imagined than real, but when I start a new batch of work units after a gap of days or weeks since the last one, the first one or two always fail after a few seconds. At first I thought it was maybe some necessary file that didn't get downloaded when it should. Now, I almost suspect that the server dishes out the bad ones first, if that is possible.
ID: 53827
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 53829 - Posted: 26 Mar 2016, 23:46:53 UTC - in response to Message 53827.  

I have noticed a curious phenomenon, maybe more imagined than real, but when I start a new batch of work units after a gap of days or weeks since the last one, the first one or two always fail after a few seconds. At first I thought it was maybe some necessary file that didn't get downloaded when it should. Now, I almost suspect that the server dishes out the bad ones first, if that is possible.


It's a mystery.
Continuing to run TEST 5 for another 24-48 hours.
Six CPDN WUs simultaneously.

If these start to complete successfully...I'll start trying to max out my CPU with CPDN WUs and start adding other projects one by one.

I can't tell if the four WUs that failed were "bad" ones or part of the problem I'm experiencing. I'm hoping someone from the project can look at the error messages and offer some insight.

ID: 53829
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 53943 - Posted: 9 Apr 2016, 18:01:14 UTC - in response to Message 53829.  

TEST 5 RESULTS

Of the six CPDN work units running, five finished successfully and one (WU 10198749) errored after 11 trickles with the following error:

20:08:42 (9572): start_timer_thread(): CreateThread() failed, errno 0
Regional Worker:: CPDN process is not running, exiting, bRetVal = 1, checkPID=6020, selfPID=6020, iMonCtr=2
Controller:: CPDN process is not running, exiting, bRetVal = 1, checkPID=6020, selfPID=11072, iMonCtr=1
Model crash detected, will try to restart...
Leaving CPDN_Main::Monitor...
01:36:48 (11072): called boinc_finish(0)


TEST 6 is as follows:

I've enabled all projects again but will continue to run BOINC with a maximum of 6 CPUs. If I start seeing CPDN errors again, I will start disabling other projects until I see no CPDN errors. The objective is to determine whether the problem is caused by CPU load or by a conflict with one or more of the other projects. Time will tell again...this will take a while, since CPDN work units will run a lot less overall with all projects enabled.

Projects running are CPDN, Einstein@Home, Enigma@Home, SETI@Home, and Milkyway@Home.
ID: 53943
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54004 - Posted: 24 Apr 2016, 16:50:07 UTC - in response to Message 53943.  

My machine crashed and I lost everything....all of my CPDN WUs errored. I will restart with 6 new WUs.
ID: 54004
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54049 - Posted: 6 May 2016, 0:28:44 UTC - in response to Message 54004.  

Update on Test 5

Processing has continued to be successful with my machine set to use no more than 6 CPUs (of 8 total) and running CPDN, Einstein@Home, and SETI@Home simultaneously.

TEST 6 STARTED on May 3, 2016

After 3 completed CPDN work units, I have now also re-enabled Milkyway@Home. If CPDN jobs continue to succeed, I'll then re-enable Enigma@Home for TEST 7 (again keeping the maximum # of processors used by BOINC at 6). It will likely be a few weeks before any further results.

If all this is successful I will push the processor loading up to 7 then 8 processors to see at what point CPDN failures start occurring.

Art Masson
ID: 54049
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54083 - Posted: 13 May 2016, 4:35:57 UTC - in response to Message 54049.  

UPDATE ON TEST 6

I had a large number of failures (9 CPDN WUs) in a very short time frame (May 12 at approx 8:56 PM UTC)...this included 3 WUs with significant processing completed. I've stopped TEST 6. I also noted that my system logged a number of automatically installed Windows patches starting at 8:33 UTC...and then rebooted itself. I can't correlate the times exactly, but I suspect these failures all occurred during the reboot process.

TEST 7

I have disabled automatic Windows updates and will continue processing -- with Milkyway@Home also continuing....

ID: 54083
Jim1348

Joined: 15 Jan 06
Posts: 637
Credit: 26,751,529
RAC: 653
Message 54084 - Posted: 13 May 2016, 9:19:31 UTC - in response to Message 54083.  

Also noted that my system logged a number of automatically installed Windows patches which started at 8:33 UTC...and rebooted itself...can't correlate times exactly, but I suspect these failures all occurred during the reboot process.

I had a couple of Einstein work units (Gravitational Wave search O1 all-sky F v1.04) fail on me two days ago as a result of the 10 May Microsoft update. It happens. So on all my dedicated machines where I run CPDN, I turn off automatic updates; the patches are not needed for security purposes anyway.
ID: 54084
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54086 - Posted: 13 May 2016, 16:54:23 UTC - in response to Message 54084.  

Thanks.

This Intel I7 machine is not dedicated to BOINC processing, unfortunately. So what I've done is set Windows Updates to download but not install automatically. That way I can shut down BOINC and then manually start the update process. Hopefully that will largely prevent Windows Updates from trashing in-progress BOINC work during an uncontrolled reboot.

Perhaps that was my problem all along...since my machine was set for automatic Windows updates every Wednesday at 3 AM local time (the default). That doesn't explain why the problems started in April 2015 and not before, but some mysteries may never be explained.

Time will tell...Still processing TEST 7 with four projects and no more than 6 (out of my 8) CPUs available to BOINC.

ID: 54086
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54087 - Posted: 13 May 2016, 17:01:33 UTC - in response to Message 54086.  

Jim1348 -- By the way, I've also noticed that when WUs fail and new ones are started, a number of the new WUs fail almost immediately as well (per your earlier note). I have no explanation for this, but I share your suspicion that maybe it's just picking up WUs that are "bad" for some reason before finding some "good" ones. Might be worth more experimenting, but right now I'm focused on trying to understand my larger problem!

Wonder if others are noticing this...

ID: 54087
Dave Jackson
Volunteer moderator

Joined: 15 May 09
Posts: 4540
Credit: 19,039,635
RAC: 18,944
Message 54088 - Posted: 13 May 2016, 17:14:54 UTC - in response to Message 54087.  

When I can afford some faster silicon...........
ID: 54088
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54149 - Posted: 19 May 2016, 2:38:25 UTC - in response to Message 54083.  

UPDATE TEST 7

Processing continues to be successful...so I have now re-enabled all the projects that were originally running when I was getting consistent CPDN WU failures. The only differences are: 1) I'm limiting BOINC to 6 of my 8 available CPUs, and 2) automatic Windows Updates are disabled. I will continue this way for several weeks to confirm CPDN WU stability.
ID: 54149
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54285 - Posted: 11 Jun 2016, 15:24:58 UTC - in response to Message 54149.  

UPDATE TEST 7 RESULTS

I've experienced NO failures with my CPDN WUs since re-enabling all my projects -- running my I7 with a maximum of 6 CPUs for BOINC and with Windows Updates in manual mode.

TEST 8

I've now re-enabled all 8 CPUs. The only difference in my configuration from when I was having problems is that my weekly Windows updates are no longer automatic. It would be amazing to me if this was the problem for the year-plus in which I had NO successful CPDN WUs complete. If true, it will be a struggle to understand why it started to impact only CPDN WUs in April 2015.

I'll let my machine continue to process BOINC with all 8 CPUs for a few weeks before I conclude it was the Windows updates all along....

ID: 54285
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54290 - Posted: 12 Jun 2016, 15:17:33 UTC - in response to Message 54285.  

TEST 8

Now getting multiple CPDN WU failures again after less than 24 hours.

TEST 9

Back to only 6 CPUs processing. Apparently, fully loading my I7 with all CPUs running results in CPDN failures. I will run with 6 CPUs for a while again to verify stability.
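As an alternative to the percentage preference, the client can also be pinned to an exact CPU count with a cc_config.xml in the BOINC data directory -- again just a sketch, assuming the 7.x <ncpus> option, which makes the client act as if the host had only that many CPUs:

<cc_config>
   <options>
      <!-- illustrative sketch: treat this host as having 6 CPUs -->
      <ncpus>6</ncpus>
   </options>
</cc_config>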

ID: 54290
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54310 - Posted: 14 Jun 2016, 20:34:02 UTC - in response to Message 54290.  

TEST 9 PARTIAL RESULTS

I had one CPDN processing error today. I'm not sure if this is I7 processing related or not. Five other jobs are processing well at this time, so I'm continuing with up to 6 CPUs (of 8) available to BOINC.

Error message for the one failure is as follows:

<core_client_version>7.6.33</core_client_version>
<![CDATA[
<stderr_txt>
diagnostics_init_unhandled_exception_monitor(): Creating hExceptionMonitorThread failed, errno 12
WARNING: BOINC Windows Runtime Debugger has been disabled.
08:10:23 (17772): start_timer_thread(): CreateThread() failed, errno 0
diagnostics_init_unhandled_exception_monitor(): Creating hExceptionMonitorThread failed, errno 12
WARNING: BOINC Windows Runtime Debugger has been disabled.
08:10:24 (18212): start_timer_thread(): CreateThread() failed, errno 0
Global Worker:: CPDN process is not running, exiting, bRetVal = 0, checkPID=0, selfPID=17772, iMonCtr=1
Regional Worker:: CPDN process is not running, exiting, bRetVal = 1, checkPID=18212, selfPID=18212, iMonCtr=2
Suspended CPDN Monitor - Suspend request from BOINC...
Controller:: CPDN process is not running, exiting, bRetVal = 1, checkPID=18212, selfPID=17756, iMonCtr=1
Model crash detected, will try to restart...
Leaving CPDN_Main::Monitor...
19:03:55 (17756): called boinc_finish(0)

</stderr_txt>
<message>
upload failure: <file_xfer_error>
<file_name>hadam3p_eu_iarp_20164_3_396_010527395_1_1.zip</file_name>
<error_code>-161 (not found)</error_code>
</file_xfer_error>
<file_xfer_error>
<file_name>hadam3p_eu_iarp_20164_3_396_010527395_1_2.zip</file_name>
<error_code>-161 (not found)</error_code>
</file_xfer_error>
<file_xfer_error>
<file_name>hadam3p_eu_iarp_20164_3_396_010527395_1_3.zip</file_name>
<error_code>-161 (not found)</error_code>
</file_xfer_error>

</message>
]]>

ID: 54310
john

Joined: 20 May 14
Posts: 13
Credit: 7,586,474
RAC: 0
Message 54311 - Posted: 15 Jun 2016, 2:29:20 UTC - in response to Message 54310.  
Last modified: 15 Jun 2016, 2:30:30 UTC

I have an I7 at (I think) 2.6 GHz, running Climateprediction as my sole project, with few errors that I didn't cause myself. I'm using all 8 (4 real, 4 hyperthreaded) CPUs at 100 percent of capacity (if I remember correctly).

I'm running Win7 sp1.

I saw that you have 14 gig of RAM reported by Windows and it passed a memtest. On a lark I checked mine, and it reported 16 gig of RAM. Do you have 4 MATCHED 4 gig sticks of RAM? I'm wondering if you're getting an imbalance somewhere that's causing your projects to crash. Computers with non-symmetrical RAM capacities, or even just mis-matched RAM sticks, can get real finicky real fast.

Did you change your RAM? I didn't read the whole thread through.
ID: 54311
Art Masson
Joined: 16 Oct 11
Posts: 254
Credit: 15,954,577
RAC: 0
Message 54312 - Posted: 15 Jun 2016, 5:50:13 UTC - in response to Message 54311.  

John -- I have never changed the RAM in my machine, but interestingly it has two banks with two memory slots each, for a total of 4 slots. Maximum memory capacity would be 32GB if all 4 slots held an 8GB module.

The current configuration has two 4GB modules in the first bank, while the second bank has a 2GB and a 4GB module (for a total of 14GB)...all as originally configured on the machine by HP.

This machine successfully ran BOINC with all 8 CPUs active for a couple of years...until April 2015, when only my CPDN work units started failing -- all other projects continue to run without errors. I've been able to get things working again by dropping the # of CPUs BOINC can utilize to 6....

The rest is a mystery -- why the errors started only in April 2015. It could have been a Windows patch or some change in BOINC (or some other application interaction). It probably wouldn't hurt for me to swap out the 2GB module for a 4GB one and see if that makes any difference......

Art Masson
ID: 54312