#51 Re: SiDock September sailing challenge
Posted: Sun Sep 19, 2021 1:09 am
So, a service brat driving on the wrong side of the road. At least you would have had to drive 'stick'.
Welcome to the forum of Scotland's biggest, oldest and best distributed computing team.
https://tsbt.co.uk/forum/
I don't mind. Keep up the good work.
There's a discussion about it on the SiDock forum. Basically, credit screw strikes again. https://www.sidock.si/sidock/forum_thread.php?id=143
scole of TSBT wrote: ↑Sat Sep 18, 2021 11:18 pm
The shorter corona_3CLpro_v5 WUs pay well.
The much longer, hours-long corona_Eprot_v1 WUs pay terribly.
Looks like Intel has paid for this credit system.
This is a nice conspiracy theory, but it really is not an Intel vs AMD (or Windows vs Linux) issue. It is just CreditNew turning things into a lottery, as it is prone to do.
Basically, for CreditNew to work smoothly, the following two assumptions have to be met:
task run time scales reasonably well with the estimated FLOP count (definitely not true here so far, as both the short 3CLpro and the long Eprot tasks carried the same FLOP count estimate; I don't know yet how it will work out once hoarfrost's recent changes take effect)
the computation speed of each computer is constant (a very bold assumption, especially with modern CPUs and GPUs that adjust their clock frequencies on the fly based on temperature or power draw)
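To see why the first assumption matters, here is a toy model (not the real CreditNew code, and with invented runtimes and credit values): if a short 3CLpro-like task and a long Eprot-like task carry the same FLOP estimate, they end up worth roughly the same credit, so the pay rate per second diverges enormously between the two types:

```python
# Toy illustration (NOT the real CreditNew implementation) of what
# happens when one shared FLOP estimate covers two very different
# task types. Every number below is invented for the example.

SHARED_CREDIT = 90.0  # credit implied by the shared FLOP estimate


def credit_per_second(runtime_s: float) -> float:
    """Both task types are paid the same, so the pay rate depends
    only on how long the task actually ran."""
    return SHARED_CREDIT / runtime_s


short_rate = credit_per_second(600.0)    # a ~10-minute 3CLpro-like task
long_rate = credit_per_second(18000.0)   # a ~5-hour Eprot-like task

print(f"short task: {short_rate:.3f} credit/s")
print(f"long task:  {long_rate:.3f} credit/s")
print(f"pay-rate ratio: {short_rate / long_rate:.0f}x")
```

The adaptive averaging in CreditNew then tries to correct for discrepancies like this over time, which is exactly where the lottery-like swings come from.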
As soon as reality deviates from these assumptions, CreditNew does all sorts of weird things that may or may not average out long term (and definitely not short term, e.g. over the duration of a typical competition like the one going on right now). I have two screenshots to illustrate the complete mess CreditNew created here:
A set of 3CLpro_v5 tasks. The longest task took ~50% longer than the shortest, but credit varies by a factor of 4.3 and there is no correlation between run time and credit. And this is just a small subset, I have also seen 3CLpro_v5 tasks with credit as low as ~20 and as high as ~210, i.e. 10 times the credit for roughly the same amount of work.
A set of Eprot tasks. In this case, the longest task took only ~10% longer than the shortest, but again, credit varies a lot more and there is no correlation with run time.
Three years ago, even David Anderson came to realise that CreditNew is not working all that well and now recommends it only in cases where no better option is available. Let's take a look at the other options:
Pre-assigned credit: I think this is the best option available. There could be a fixed amount of credit based on the target, e.g. 30 Cobblestones for 3CLpro tasks, 450 Cobblestones for Eprot tasks. Yes, as shown above, there is some variation in task run time even for the same target, but a ~50% difference in credit per second is much better than the ~1000% difference we see with CreditNew. It is also cheat-proof, device-neutral, and immediately rewards a CPU switching to turbo mode rather than punishing it. The downside is, of course, that it takes a bit more work when preparing a new target, as the right amount of credit needs to be known in advance.
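A sketch of how pre-assigned credit would look, assuming the per-target amounts suggested above (30 for 3CLpro, 450 for Eprot) and the ~50% runtime spread observed for 3CLpro tasks; the runtimes here are invented for illustration:

```python
# Sketch of pre-assigned credit: each task of a given target is
# always worth the same fixed amount. No runtimes, benchmarks, or
# FLOP counting are involved in granting credit.

FIXED_CREDIT = {
    "corona_3CLpro_v5": 30.0,   # amount suggested in the post
    "corona_Eprot_v1": 450.0,   # amount suggested in the post
}


def grant_credit(target: str) -> float:
    return FIXED_CREDIT[target]


# With a ~50% runtime spread (fastest 600 s, slowest 900 s, both
# invented), the credit-per-second rate varies by at most 1.5x --
# far from the ~10x spread seen under CreditNew.
fastest, slowest = 600.0, 900.0
spread = (grant_credit("corona_3CLpro_v5") / fastest) / \
         (grant_credit("corona_3CLpro_v5") / slowest)
print(f"credit-per-second spread: {spread:.1f}x")
```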
Post-assigned credit: AFAIK, the run time variations are caused by different run times for the docking simulations for each ligand, while the number of ligands processed in each task is constant. So this credit option would require implementing some sort of FLOP count. I don't think this is feasible.
Runtime-based credit: In theory, this sounds like a good choice for this project, as long as there is no GPU application. In reality, however, it is a complete nightmare that really should not be used by any project any more. It was already a nightmare back when this was the standard credit system, because some users ran clients that reported inflated benchmark values; nowadays, the benchmark does not really mirror true CPU performance even without those "optimisations", because it runs on only one CPU core, so the CPU may run at a much higher frequency than during the actual computations.
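The cheating problem can be shown with a rough sketch: under benchmark-based credit, the claim scales with CPU time multiplied by the host's self-reported benchmark, so inflating the benchmark inflates the claim proportionally. The scale constant and benchmark figures below are arbitrary, not BOINC's actual values:

```python
# Rough sketch of benchmark-based ("runtime") credit. SCALE is an
# arbitrary placeholder, not BOINC's real Cobblestone constant; the
# point is only the proportionality.

SCALE = 1.0 / 1e9  # placeholder scale factor


def claimed_credit(cpu_seconds: float, benchmark_flops: float) -> float:
    """Credit claim grows linearly with the self-reported benchmark."""
    return cpu_seconds * benchmark_flops * SCALE


honest = claimed_credit(3600.0, 4e9)      # host honestly reporting 4 GFLOPS
inflated = claimed_credit(3600.0, 12e9)   # same host, benchmark "optimised" 3x
print(f"inflated claim is {inflated / honest:.0f}x the honest claim")
```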
Ironically, that leaves adaptive credit aka CreditNew as the recommended option for exactly those cases where it performs the worst. I would argue that even in those cases, some fixed amount of credit per task would be much less of a lottery than the mess created by CreditNew.
Why stop there?