
#1 Native Theory and Native Atlas

Posted: Sun Jul 07, 2019 10:17 pm
by scole of TSBT
Can anyone point me to a how-to for installing and running Native Theory and Native Atlas?

#2 Re: Native Theory and Native Atlas

Posted: Sun Jul 07, 2019 10:21 pm
by davidbam
I think there was stuff on that on the Trello Pentathlon board. I'll have a look.

#3 Re: Native Theory and Native Atlas

Posted: Sun Jul 07, 2019 10:29 pm
by davidbam
OldChap seemed to be the one trying it out, but see what you find in this (long) list: https://trello.com/c/bfA9zUTp/86-lhc-discussion#

#4 Re: Native Theory and Native Atlas

Posted: Mon Jul 08, 2019 9:28 pm
by Dirk Broer
Oldchap posted here too.

#5 Re: Native Theory and Native Atlas

Posted: Tue Jul 09, 2019 12:13 am
by scole of TSBT
Thanks. That's the one I was thinking about. I have it running. I'm looking for a good Native Theory How-To also.

#6 Re: Native Theory and Native Atlas

Posted: Sun Aug 25, 2019 9:30 am
by Hal Bregg
scole of TSBT wrote: Tue Jul 09, 2019 12:13 am Thanks. That's the one I was thinking about. I have it running. I'm looking for a good Native Theory How-To also.
Have a look at this post on the LHC@home website:

https://lhcathome.cern.ch/lhcathome/for ... 4971#38259

#7 Re: Native Theory and Native Atlas

Posted: Sun Aug 25, 2019 2:59 pm
by Bryan
If you have native Atlas running then you are good to go for Theory as well.

I ran it (Atlas) quite a bit last month. Memory usage was 2.6G per WU, and it doesn't care how many threads you assign ... it is 2.6G/WU, period. I have 128G in my servers and this was the first time I've been able to get all threads involved. Just divide your available memory by 2.6G and that tells you how many WUs you can run. Then divide your actual thread count by the number of WUs and that tells you how many threads to assign to each WU.
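That sizing arithmetic can be sketched in a few lines. This is just Bryan's rule of thumb as code; the 2.6 GB/WU figure is from his post, and the 128 GB / 64-thread example machine is an assumption for illustration:

```python
# Sketch of the WU-sizing rule of thumb from the post above.
# MEM_PER_WU_GB is the figure quoted for native ATLAS (2.6G per WU,
# regardless of thread count); machine specs below are example values.
MEM_PER_WU_GB = 2.6

def atlas_sizing(total_mem_gb, total_threads):
    """Return (max concurrent WUs, threads to assign per WU)."""
    # Memory is the binding constraint, but never exceed the thread count.
    max_wus = max(1, int(total_mem_gb // MEM_PER_WU_GB))
    max_wus = min(max_wus, total_threads)
    threads_per_wu = max(1, total_threads // max_wus)
    return max_wus, threads_per_wu

print(atlas_sizing(128, 64))  # 128 GB box with 64 threads -> (49, 1)
```

So on a hypothetical 128 GB / 64-thread server, memory caps you at 49 concurrent WUs with 1 thread each; a smaller box (say 32 GB / 16 threads) works out to 12 WUs.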

The only problem with Theory, which I didn't run much of, is that with the native app the project only gives 10 WUs to a machine. To get everything cranking you need to run multiple client instances. With the VBox version it will give you bunches of WUs, but they take longer and use a boatload more memory.
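One way to get a second client instance going (the BOINC client supports this via `--allow_multiple_clients`); the data directory and RPC port below are example values, not anything from the post:

```
# Hypothetical second BOINC client in its own data directory.
# /var/lib/boinc2 and port 31418 are example values - pick your own.
mkdir -p /var/lib/boinc2
boinc --allow_multiple_clients --dir /var/lib/boinc2 --gui_rpc_port 31418 --daemon

# Manage that instance with boinccmd by naming the same port:
boinccmd --host localhost:31418 --get_tasks
```

Each instance attaches to the project separately, so each one gets its own 10-WU allotment.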

If you are going to run Atlas I STRONGLY suggest you set up a Squid3 proxy server. At the start of every WU it downloads a boatload of stuff from the project. Squid caches those downloads, so when a WU asks for something that has been downloaded before, it gets it from the local disk or from memory instead of the project server. With everything coming straight from the server, the first 18 minutes of a WU were spent downloading while the CPUs sat idle. After installing Squid, the dead time at the start dropped to 3.5 minutes.
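For reference, a minimal `squid.conf` sketch for this kind of caching setup; all values (port, subnet, cache sizes, paths) are example assumptions to tune for your own LAN, not settings from the post:

```
# Minimal squid.conf sketch - example values only
http_port 3128
acl localnet src 192.168.0.0/16        # adjust to your cluster's subnet
http_access allow localnet
http_access deny all
cache_mem 1024 MB                      # hot objects served from RAM
maximum_object_size 1024 MB            # ATLAS downloads can be large
cache_dir ufs /var/spool/squid 20000 16 256   # ~20 GB on-disk cache
```

Then point each BOINC client at the proxy (HTTP proxy host/port in the Manager's options) so every WU's downloads pass through the cache.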

#8 Re: Native Theory and Native Atlas

Posted: Sun Aug 25, 2019 3:07 pm
by Bryan
Scamming LHC :lol:

LHC uses credit screw. Credit screw looks at the first 10 WUs you turn in and then "adjusts" your point payout (gives a credit boost of 2-4X). Then after 128 WUs it does another adjustment, and so on.

To scam it, you want the first 10 WUs you turn in to run as fast as possible. The best way is to turn HT off and run a single WU at a time until you've turned in 11. Then turn HT back on and load the machine with everything it can do. For about 36-48 hours you will get much higher credits than where you will eventually stabilize. If you do nothing, your credits start low and slowly work their way up to the stable level; with the scam technique they start way high and work their way down to it. Better to start high and work down 8-)