Unable to get more than 20 WU

Message boards : Number crunching : Unable to get more than 20 WU


Profile marsinph

Joined: 23 Mar 18
Posts: 14
Credit: 56,870,295
RAC: 0
   
Message 784 - Posted: 3 Apr 2018, 14:45:09 UTC

I run one WU every 30 minutes, so I can only store less than 10 hours of work.
Is this a limitation? I can understand such a policy, but with my credit and RAC I think I crunch enough to show I am not a "ghost", or someone who tries work units and then deletes them without crunching.
Of course my preferences (general and project) are set to 2 days, so as not to overload the server.
I see some hosts have a lot of WUs. What am I doing wrong?
Greetings from Belgium

Sergei Chernykh
Project administrator
Project developer

Joined: 5 Jan 17
Posts: 457
Credit: 72,451,573
RAC: 0
   
Message 785 - Posted: 3 Apr 2018, 15:47:04 UTC - in response to Message 784.  

The limitation is 20 WU per single GPU and 2 WU per single CPU core.
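As a back-of-the-envelope illustration of how that limit adds up per host (a sketch only; the function name and the idea of summing the GPU and CPU caps into one total are my assumptions, not the project's actual scheduler code):

```python
# Illustrative sketch of the stated per-host cap:
# 20 WU per GPU plus 2 WU per CPU core.
# Hypothetical helper -- not the project's server code.

def max_tasks_in_progress(num_gpus: int, num_cpu_cores: int) -> int:
    """Per-host task cap implied by the limits stated above."""
    return 20 * num_gpus + 2 * num_cpu_cores

# A host with one GPU and a quad-core CPU:
print(max_tasks_in_progress(1, 4))  # -> 28
```

So a single-GPU host tops out at 20 GPU tasks no matter how fast the card is, which matches what the original poster is seeing.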
Profile marsinph

Joined: 23 Mar 18
Posts: 14
Credit: 56,870,295
RAC: 0
   
Message 786 - Posted: 3 Apr 2018, 19:24:05 UTC - in response to Message 785.  

Hello Sergei,
I understand the reasons, but perhaps it is possible to send an amount of WUs depending on RAC, or on average turnaround time?
On SETI it is 100 WU per GPU and 100 per CPU.

If I disconnect my host for more than 10 hours, I run out of work.
I repeat that I agree with the reasons for the 20 WU limitation. But...
I have a modest mid-range GPU and a mid-range CPU.
Some of us have very high-end GPUs, so they run WUs very fast.
Probably after 2 hours all their WUs are done! If the project is down for a few hours, we cannot crunch any more.
That is why I ask you to reconsider the (understandable) limitation, making it depend on RAC and/or average turnaround time.
Best regards, and thanks for your answer.
Sergei Chernykh
Project administrator
Project developer

Joined: 5 Jan 17
Posts: 457
Credit: 72,451,573
RAC: 0
   
Message 787 - Posted: 3 Apr 2018, 20:15:08 UTC - in response to Message 786.  

The project has never been down for more than a few minutes so far; it runs on a dedicated server in a data center. Is this a real issue for you? Do you often get disconnected for many hours?
xii5ku

Joined: 26 Apr 18
Posts: 3
Credit: 448,358,594
RAC: 0
   
Message 800 - Posted: 27 Apr 2018, 10:50:18 UTC - in response to Message 785.  

"Sergei Chernykh" wrote:
The limitation is 20 WU per single GPU and 2 WU per single CPU core.

In this calculation, the number of CPU cores is the minimum of
    - the number of active CPUs set in the boinc-client, and
    - the "Max # CPUs" setting in the project preferences on this web site,

right?

Sergei Chernykh
Project administrator
Project developer

Joined: 5 Jan 17
Posts: 457
Credit: 72,451,573
RAC: 0
   
Message 801 - Posted: 27 Apr 2018, 11:30:08 UTC - in response to Message 800.  
Last modified: 27 Apr 2018, 11:40:44 UTC

No, the number of CPU cores that counts for this restriction is the one you can see on your "Computers belonging to" page.

P.S. Actually, BOINC client settings can change this number, but the "Max # CPUs" setting shouldn't influence it.
xii5ku

Joined: 26 Apr 18
Posts: 3
Credit: 448,358,594
RAC: 0
   
Message 804 - Posted: 27 Apr 2018, 21:55:59 UTC - in response to Message 801.  
Last modified: 27 Apr 2018, 21:56:24 UTC

It does, though... When I joined, I set "Max # CPUs = 8" and got only 16 tasks in progress per host. Now I have removed the setting temporarily and received more.
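The behavior reported here can be sketched as follows (a guess based only on the observation above, not confirmed server logic; the function name and the min() rule are my assumptions):

```python
from typing import Optional

# Guess at the observed behavior: the CPU-side cap of 2 WU per core
# appears to use min(host cores, web "Max # CPUs" preference).
# Hypothetical sketch -- not the project's actual scheduler code.

def cpu_task_cap(host_cores: int, max_ncpus: Optional[int]) -> int:
    """CPU-side task cap, with the core count limited by the web preference."""
    effective_cores = host_cores if max_ncpus is None else min(host_cores, max_ncpus)
    return 2 * effective_cores

print(cpu_task_cap(16, 8))     # -> 16, matching the tasks seen with "Max # CPUs = 8"
print(cpu_task_cap(16, None))  # -> 32 once the setting is removed
```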



©2022 Sergei Chernykh