Credit system.

Message boards : Random stuff : Credit system.

Sergei Chernykh
Project administrator
Project developer

Joined: 5 Jan 17
Posts: 461
Credit: 72,451,573
RAC: 0
Message 104 - Posted: 9 Feb 2017, 9:53:23 UTC

All work units are more or less uniform now and take around 40-50 minutes to complete on a 6-core Intel Xeon E5-1650 v3, so the project now uses a fixed credit per work unit. No more "credit random" system.
Profile STMahlberg

Joined: 23 Jan 17
Posts: 1
Credit: 3,036,258
RAC: 0
Message 112 - Posted: 12 Feb 2017, 17:04:41 UTC

If you're going to nerf the credits, at least make them consistent with run time. Giving a fixed credit of 1618.03 per WU regardless of whether it takes 4500 secs or 12000 secs is also unreasonable.
Sergei Chernykh
Project administrator
Project developer

Joined: 5 Jan 17
Posts: 461
Credit: 72,451,573
RAC: 0
Message 113 - Posted: 12 Feb 2017, 18:01:32 UTC - in response to Message 112.  
Last modified: 12 Feb 2017, 18:12:35 UTC

If you're going to nerf the credits, at least make them consistent with run time. Giving a fixed credit of 1618.03 per WU regardless of whether it takes 4500 secs or 12000 secs is also unreasonable.

Fixed credit per work unit is reasonable because all work units now contain the same amount of work. Credits are consistent with the actual work done for the project, not with run time. I've checked your PCs, and their run times per work unit are all consistent too. The 12000-sec run time is from your slowest PC; it's just slow and therefore earns fewer credits. The 4500-sec run time is from your fastest PC. Don't try to fool me :)
Tern

Joined: 17 Feb 17
Posts: 27
Credit: 46,930,874
RAC: 0
Message 134 - Posted: 17 Feb 2017, 14:07:04 UTC

Credit is a never-ending discussion. Go back ten or twelve years and you'll see the same points raised as today. Cobblestones, "credit-new": all are attempts to be 'fair'.

Sergei has the "least objectionable" (there is no "best") method with fixed credit per WU. Whether the number chosen is "too high" or "too low" is not answerable. It just "is".

Picking the number based on a reference machine at least gives it some validity; it's not just a made-up number. The trick is to make sure that the number per DAY the reference machine can generate is at least somewhat comparable to what that same machine can produce on other projects that are considered to be in the "reasonable" credit range. You don't want to be Sztaki or (as much as I love the project) Rosetta. You don't want to be BitCoin or Collatz. PrimeGrid is difficult to use because different subprojects there give vastly different credits. Personally, I'd use Einstein: they are quite a bit higher than Seti, but not so much as to be objectionable. Or Seti would work, but then people would complain the credit is too low. Or maybe run both and set Amicable to equal the average of those two. Sigh.

Bottom line is that it doesn't matter what you do, you cannot make everyone happy. I haven't been running here long enough to have an opinion yet.
Tern

Joined: 17 Feb 17
Posts: 27
Credit: 46,930,874
RAC: 0
Message 148 - Posted: 17 Feb 2017, 15:09:27 UTC - in response to Message 134.  

One other suggestion - now that you are "live", make any changes gradually. If you do decide you need to give fewer (or more) credits, don't make big changes all at once. If, for example, you decided to go from 1800 to 1200, go to 1600 one day, wait a while (at least a few days), go to 1400...

Just remember to tell us the "plan" here before you do it!
Profile [B@P] Daniel

Joined: 27 Feb 17
Posts: 15
Credit: 613,209,034
RAC: 0
Message 262 - Posted: 28 Feb 2017, 19:40:35 UTC - in response to Message 67.  

In sieve, yes. If you try LLR, it would be half of that. BTW, does your application use AVX/FMA/AVX2?

Half of that would be exactly what it is here now. My application uses only integer arithmetic. Floating point just can't represent 64-bit integers without losing precision.

SSE2 and AVX2 have separate instructions for integer operations too, and SSE versions above 2 added some specialized integer instructions. FMA is for floating point only.
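[The precision point can be seen directly: a double has a 53-bit mantissa, so round-tripping a large 64-bit integer through double silently drops the low bits. A minimal illustration, not part of the project's code:]

```c
#include <stdint.h>

/* Round-trip a 64-bit integer through double.  double has a 53-bit
 * mantissa, so integers above 2^53 are rounded to the nearest
 * representable value and the low bits are lost. */
static uint64_t through_double(uint64_t n) {
    return (uint64_t)(double)n;
}
```

through_double(1000) returns 1000, but through_double(12345678901234567891ULL) does not return its argument: the value is odd and needs all 64 bits, so rounding to 53 bits of mantissa changes it.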
Sergei Chernykh
Project administrator
Project developer

Joined: 5 Jan 17
Posts: 461
Credit: 72,451,573
RAC: 0
Message 263 - Posted: 28 Feb 2017, 19:50:53 UTC - in response to Message 262.  
Last modified: 28 Feb 2017, 19:54:40 UTC

Daniel
And? I haven't heard of AVX/AVX2/whatever being able to do 64x64->64-bit and 64x64->128-bit integer multiplications. These are the crucial parts of my program.
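[For context, the two multiplication widths look like this in C. This is only an illustration, assuming a GCC/Clang-style compiler with the unsigned __int128 extension; on x86-64 the full product compiles down to a single mul instruction:]

```c
#include <stdint.h>

/* 64x64->64: C multiplication keeps the low 64 bits of the product,
 * which is what imul produces. */
static uint64_t mul_lo(uint64_t a, uint64_t b) {
    return a * b;
}

/* 64x64->128: the full product, split into high and low halves.
 * unsigned __int128 is a GCC/Clang extension; on x86-64 this whole
 * function compiles down to a single mul instruction. */
static void mul_full(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo) {
    unsigned __int128 p = (unsigned __int128)a * b;
    *hi = (uint64_t)(p >> 64);
    *lo = (uint64_t)p;
}
```

For example, mul_full on two copies of 2^64-1 yields hi = 2^64-2 and lo = 1, since (2^64-1)^2 = 2^128 - 2^65 + 1.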
Profile [B@P] Daniel

Joined: 27 Feb 17
Posts: 15
Credit: 613,209,034
RAC: 0
Message 264 - Posted: 28 Feb 2017, 21:29:56 UTC - in response to Message 263.  

Daniel
And? I haven't heard of AVX/AVX2/whatever being able to do 64x64->64-bit and 64x64->128-bit integer multiplications. These are the crucial parts of my program.

Hmm, I saw other int ops there, so I thought 64-bit multiplication would be there too, but it turns out it is missing. There is _mm_mullo_epi64, which performs a 64x64->64 multiplication, but it needs AVX512VL + AVX512DQ. The closest alternative is _mm256_mul_epu32 from AVX2; gcc uses it to implement 64x64->64 vector multiplication, taking 8 instructions plus loads/stores. I do not have hardware to benchmark it, so it is hard to tell whether it would provide any performance gain.

64x64->128 probably cannot be optimized this way at all. I played a bit with gcc: it was able to vectorize 128x128->128, but it needed many instructions for this (~30), including a few loads and stores. The code used AVX instructions; AVX2 does not change anything.
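[The 32-bit decomposition gcc relies on can be sketched in scalar C. This is only an illustration of the technique, not the project's code: _mm256_mul_epu32 multiplies the low 32 bits of each 64-bit lane, so the low 64 bits of a product are rebuilt from three 32x32->64 partial products per lane:]

```c
#include <stdint.h>

/* Scalar sketch of the decomposition used by the vectorized 64x64->64:
 * split each operand into 32-bit halves.  The a_hi*b_hi partial product
 * lies entirely above bit 64 and can be dropped. */
static uint64_t mul64_via_32bit(uint64_t a, uint64_t b) {
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;
    /* Only the low 32 bits of the cross terms survive the shift. */
    uint64_t cross = (a_lo * b_hi + a_hi * b_lo) << 32;
    return a_lo * b_lo + cross;
}
```

With AVX2, each 32x32->64 partial product maps to one _mm256_mul_epu32, plus shuffles, shifts, and adds, which is where the ~8 instructions come from.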
Sergei Chernykh
Project administrator
Project developer

Joined: 5 Jan 17
Posts: 461
Credit: 72,451,573
RAC: 0
Message 266 - Posted: 28 Feb 2017, 22:30:33 UTC - in response to Message 264.  

Hmm, I saw other int ops there, so I thought 64-bit multiplication would be there too, but it turns out it is missing. There is _mm_mullo_epi64, which performs a 64x64->64 multiplication, but it needs AVX512VL + AVX512DQ. The closest alternative is _mm256_mul_epu32 from AVX2; gcc uses it to implement 64x64->64 vector multiplication, taking 8 instructions plus loads/stores. I do not have hardware to benchmark it, so it is hard to tell whether it would provide any performance gain.

64x64->128 probably cannot be optimized this way at all. I played a bit with gcc: it was able to vectorize 128x128->128, but it needed many instructions for this (~30), including a few loads and stores. The code used AVX instructions; AVX2 does not change anything.

People have tried: http://stackoverflow.com/questions/28807341/simd-signed-with-unsigned-multiplication-for-64-bit-64-bit-to-128-bit and concluded that a single mul instruction is still the most efficient for 64x64->128, and imul is the most efficient for 64x64->64. My CPU program's main loop is a sequence of 4 instructions, "mov-imul-cmp-jbe", which performs a single trial-division check; when unrolled, they execute in 2 cycles on Haswell. SSE/AVX was just not meant for this type of loop, with a branch in every iteration, so I doubt it can help here.
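[For readers wondering how a divisibility check fits in imul-cmp-jbe with no division: the standard trick (an assumption for illustration, not necessarily this project's exact code) multiplies by the precomputed modular inverse of the divisor and compares against a precomputed threshold. For odd d, d divides n exactly when n * inv(d) mod 2^64 <= floor((2^64-1)/d):]

```c
#include <stdint.h>

/* Inverse of odd d modulo 2^64, by Newton iteration:
 * x = d is correct to 3 bits; each step doubles the correct bits,
 * so 5 steps give all 64. */
static uint64_t inv64(uint64_t d) {
    uint64_t x = d;
    for (int i = 0; i < 5; i++)
        x *= 2 - d * x;
    return x;
}

/* d | n  <=>  n * inv64(d) <= UINT64_MAX / d   (for odd d).
 * With inv64(d) and the threshold precomputed per candidate divisor,
 * the per-iteration check reduces to a mov-imul-cmp-jbe sequence. */
static int divisible(uint64_t n, uint64_t d) {
    return n * inv64(d) <= UINT64_MAX / d;
}
```

In a real trial-division loop, inv64(d) and UINT64_MAX / d are computed once per divisor outside the loop; only the multiply and compare remain inside it.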


©2023 Sergei Chernykh