Pudge,
Thanks for the update. I have to admit I don't understand half of what you're talking about because I'm not a very knowledgeable computer guy, but it gives me a road map of things to research and teach myself.
Hi Vinkman,
No problem. If it helps, use the search feature here to search the forum, as I've posted on a lot of what you read in my last post, along with links to some of the specific articles and files you can use to learn and apply the same stuff I'm doing now. I've been at this over the last 4-5 years, teaching myself how to take better advantage of what these multi-core CPUs with more than 4 cores can bring to the usage experience, including gaming. A lot of it comes through coding already available within Microsoft's Windows OSes (it's been there since the Vista days forward) that can help existing apps (which includes games) and hardware run better, or at least run the way you want them to according to your definition of "better", without needing 3rd party software interfaces to access it.
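To give a concrete flavor of the kind of built-in Windows plumbing I mean (no 3rd party software involved), here's a minimal C++ sketch using documented Win32 calls that have shipped with Windows for ages: pinning an already-running process to a chosen set of cores and nudging its scheduling priority. The PID and core mask below are hypothetical placeholders you'd fill in for your own box; this is just one illustration of the OS-level knobs, not the whole of what I do.

```cpp
// affinity_sketch.cpp - one example of built-in Windows facilities:
// restrict a running process to chosen logical CPUs and raise its priority.
// The PID and mask are placeholders; look the PID up in Task Manager and
// pick a mask that matches your own CPU's core layout.
#include <windows.h>
#include <iostream>

int main()
{
    DWORD pid = 1234;             // hypothetical game process ID
    DWORD_PTR mask = 0x0FC0;      // hypothetical mask: logical CPUs 6-11

    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                           FALSE, pid);
    if (!h) {
        std::cerr << "OpenProcess failed: " << GetLastError() << "\n";
        return 1;
    }

    // Restrict the process to the chosen logical CPUs...
    if (!SetProcessAffinityMask(h, mask))
        std::cerr << "SetProcessAffinityMask failed: " << GetLastError() << "\n";

    // ...and ask the scheduler to favor it slightly.
    if (!SetPriorityClass(h, ABOVE_NORMAL_PRIORITY_CLASS))
        std::cerr << "SetPriorityClass failed: " << GetLastError() << "\n";

    CloseHandle(h);
    return 0;
}
```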
The object is to make everything run smoother, with as much latency removed from the entire process as can be achieved (thus the phrase "your definition of better, faster, etc."). For example, anything that reduces latency does speed a process up: the CPU can process "faster", so the GPU can process "faster", even though the on-screen FPS numbers may not look any faster. The on-screen cinema appears to flow faster because in reality it is being processed more smoothly, with less latency, at the current frame flip rate to the monitor (or FPS/monitor refresh rate, take your pick). The differences can be seen in a GPU frametime graph running in the background.
The goal is for the GPU to be fully optimized so it can process as fast as it can without having to wait on any data it needs, and the answer lies within the Windows OS itself: optimize the CPU side in both power delivery and efficiency by making the most efficient use of all the extra CPU cores and the much larger on-die L3 cache, tie that together with the system memory's capacity AND memory speed optimizations to keep the GPU fed, then take advantage of any coding within the vid card drivers to make the most of the GPU's performance, both power and efficiency (which can increase the actual FPS numbers), running with a game's software.
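To illustrate the frametime-graph point with made-up numbers: two runs can report the same average FPS while one of them hides a hitch that the frametime graph (and your eyes) will catch. A small C++ sketch, purely illustrative, with invented frame times:

```cpp
// frametime_sketch.cpp - same average FPS, different smoothness.
// The frame times below are invented purely for illustration.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

static void report(const char* name, const std::vector<double>& ms)
{
    double total  = std::accumulate(ms.begin(), ms.end(), 0.0); // total ms
    double avgFps = 1000.0 * ms.size() / total;                 // average FPS
    double worst  = *std::max_element(ms.begin(), ms.end());    // worst frame
    std::cout << name << ": avg " << avgFps << " FPS, worst frame "
              << worst << " ms\n";
}

int main()
{
    // Smooth run: 144 frames, each ~6.94 ms (steady 144 Hz pacing).
    std::vector<double> smooth(144, 6.94);

    // Stuttery run: same one-second total, but one 38 ms hitch buried in it.
    std::vector<double> stutter(143, 6.72);
    stutter.push_back(38.0);

    report("smooth ", smooth);   // ~144 FPS, worst frame ~6.94 ms
    report("stutter", stutter);  // ~144 FPS, worst frame 38 ms
    return 0;
}
```

Both runs print roughly 144 FPS on average, but only one of them would look smooth on screen, and only the frametime trace shows why.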
Then anything a software developer can do through their own coding to make better use of the OS AND these newer multi-core CPUs with more than 4 cores can make it even better. But I don't blame any software developer who doesn't take all the time and effort to do this through their coding, as in reality this should all be done through the OS itself. MS is only going to cater to the areas of their OS where the majority of their money is made, and that ain't us gamers. BUT MS has coded into their OSes everything necessary to do all this; they just haven't taken the time to work out how to apply it LOGICALLY so that it fits the myriad of system configurations AND the myriad of software development processes used to write all the software out there.
But if a CONSUMER will take the time and make the effort to learn where all this is located within Windows and how to properly apply it to the software of their choice, so it runs as efficiently as it can on their specific platform configuration, MS and other outlets have made all of this available to the public on the Internet (where I found it all), along with instruction and direction on how to apply it on a Windows-based platform. It was available to the public even before I found it some 4-5 years ago, so in some sense you really can't blame MS either, at least not and call yourself genuine about it in all fairness. At least that's the way I look at it from my POV. For one sketch of what "applying it to the software of your choice" can look like, see just below.
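The snippet below starts a program suspended, pins it to a chosen set of cores before it executes a single instruction, then lets it go. The executable path and core mask are hypothetical placeholders; this is one illustration under those assumptions, not the whole method.

```cpp
// launch_pinned.cpp - apply the affinity idea at launch time:
// create the process suspended, set its affinity, then resume it.
// The path and mask are hypothetical placeholders.
#include <windows.h>
#include <iostream>

int main()
{
    wchar_t cmd[] = L"C:\\Games\\game.exe";  // hypothetical executable path
    STARTUPINFOW si{};  si.cb = sizeof(si);
    PROCESS_INFORMATION pi{};

    if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                        CREATE_SUSPENDED, nullptr, nullptr, &si, &pi)) {
        std::cerr << "CreateProcess failed: " << GetLastError() << "\n";
        return 1;
    }

    SetProcessAffinityMask(pi.hProcess, 0x0FC0); // hypothetical core mask
    ResumeThread(pi.hThread);                    // now let it start running

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```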
This is not an AMD vs. Intel thing, as every Intel multi-core CPU with more than 4 physical cores on die will benefit from the exact same stuff applied through Windows, just as AMD's will. Most of what I use now was initially validated on an Intel i7-5820K 6-core CPU on an X99 platform, then carried over to the AMD Ryzen AM4 platforms I've used since and am currently using, and it is still valid today, just reworked to fit AMD Ryzen's newer CPU chiplet layout. The base SMP design layout is still the same between the two, so the Windows OS still looks at and manages them both in the same manner and by the same logical criteria as a 4-core-or-less SMP-designed CPU using SMT (or HT if you prefer), which is where some of the optimization issues arise when an SMP-designed CPU with more than 4 physical cores is used with a consumer-level Windows OS, and that includes the current version of Win 10.
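You can see that scheduler's-eye view for yourself. This C++ sketch uses the documented GetLogicalProcessorInformation Win32 call to count physical cores, SMT/HT siblings, and total L3 cache, which is the same topology information Windows works from whether the chip is an i7-5820K or a Ryzen. (On a chiplet Ryzen the L3 shows up as multiple slices, which this sketch simply sums.)

```cpp
// topology_sketch.cpp - how Windows itself enumerates the CPU:
// physical cores, SMT/HT siblings, and L3 cache, via a documented Win32 call.
#include <windows.h>
#include <iostream>
#include <vector>

static int countBits(ULONG_PTR mask)
{
    int n = 0;
    for (; mask; mask >>= 1) n += static_cast<int>(mask & 1);
    return n;
}

int main()
{
    DWORD len = 0;
    GetLogicalProcessorInformation(nullptr, &len);   // ask for buffer size
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    if (!GetLogicalProcessorInformation(info.data(), &len)) {
        std::cerr << "GetLogicalProcessorInformation failed: "
                  << GetLastError() << "\n";
        return 1;
    }

    int cores = 0, logical = 0;
    unsigned long long l3Bytes = 0;
    for (const auto& e : info) {
        if (e.Relationship == RelationProcessorCore) {
            ++cores;                                 // one physical core
            logical += countBits(e.ProcessorMask);   // its SMT siblings
        } else if (e.Relationship == RelationCache && e.Cache.Level == 3) {
            l3Bytes += e.Cache.Size;                 // sum the L3 slices
        }
    }

    std::cout << cores << " physical cores, " << logical
              << " logical CPUs, " << (l3Bytes >> 20) << " MB total L3\n";
    return 0;
}
```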
So if I can do it, you can too, as I started from the same level of understanding as you. If I can help you in any way along the way, just shout.
Now, after all this, I have gone back into the BIOS and enabled the ErP efficiency setting (disabled by default) that a lot of mobo manufacturers provide through their BIOSes to give a system energy-efficiency capability. It was mainly intended for enterprise operations initially but is making its way down to consumers as well. I wanted to see what effect it would have on overall system performance and stability, since I've always skipped it in times past, and since this particular platform has some similarity to low-level server equipment used in some business environments.
Upon rebooting, my box initially seems more responsive and, judging by my watercooling loop, is using less energy overall at the desktop level (the loop is much quieter, with less fan oscillation). I can't see any effects on the AMD Ryzen 9 3900X's operation at the desktop that would explain what I sense happening outside of my WC loop, but when I run AHIII I can clearly see some of the effects of the ErP setting: the CPU is still running at the base boost clock speed of 4.1 GHz all-core, BUT the 2nd CPU chiplet's 6 cores now show as limited to a max boost of 4.2 GHz on all 6 cores while running the AHIII game client, instead of random cores being allowed to clock higher when core load and temps permit, as was the case with ErP disabled. The other 6 cores in the 1st chiplet are held to 4.1 GHz max, or allowed to clock down as low as 3.4 GHz. So the ErP setting at the mobo level is influencing power usage across the mobo to the components mounted on it, aligning each component's power draw to the amount of work needed to achieve a result.
The game runs flawlessly and is very smooth. The GPU is running flat out at 96%-98% usage with no oscillation or stuttering, so the CPU is adequately providing all the data to the GPU and it's not having to wait on anything to do its thing (this was also true before enabling ErP, just with the CPU core speeds allowed to clock higher, all else assumed equal). The game runs at the monitor's native RR of 144 Hz (144 FPS) 96% of the time using AMD's Enhanced Sync through the drivers, with AMD's FreeSync still enabled and all in-game AHIII graphics settings maxed except the tree detail level (around 2/3 max), environmental reflections (at the default level of 1), and the checkbox set to not allow downloading of skins, while using all the rest of the in-game post-processing graphics controls along with some graphics-enhancing settings on the AMD driver side. That puts a very sizeable graphics load on this RX Vega 64's GPU and, to a lesser extent, on this AMD Ryzen 9 3900X, but both handle it without any adverse operational issue. The CPU does so easily, as it is only using the same 64-67W of power (it is rated for a max of 105W), whereas the GPU needs ALL of the 218-225W it is calling for to maintain these levels of performance. That is only possible because my closed WC loop keeps her cooled down sufficiently, along with my SeaSonic PRIME GOLD 850W PSU giving her all she needs with ease (just near/at 50% of total available PSU power across all the rails in use). So I'm assuming this ErP setting in the mobo's BIOS is reducing whatever other power draws it can across the mobo without affecting component effectiveness during operation.
So far this seems to work as intended, and as long as it isn't affecting gameplay in any negative way I'm going to leave it enabled, as from my perspective this CPU really doesn't need to run any faster than 4.0 GHz to maintain this level of performance. My prior AMD Ryzen 7 1800X would almost allow the GPU to run flat out using Enhanced Sync, but I would have had to manually OC the 1800X to get 4.1 GHz on a core or two to have any chance of getting there (it would only go to its base clocks of 3.8 GHz all-core on its own default settings, which wasn't quite the level of smoothness I wanted with the RX Vega 64 running flat out; it couldn't quite keep up at 144 Hz, though with the GPU speeds cut down under 120 Hz it could maintain the GPU smoothly). This is why I wanted to move up to this Ryzen 9 3900X anyway, as the Ryzen 7 1800X was the only item in the way, its core clock speeds not being high enough across all cores. My Gigabyte GA-AX370 Gaming K5 mobo's BIOS chips failing gave me a reason to start the upgrade process.
Now AMD has come out with a BIOS update that will allow a 1st gen Ryzen CPU to run on an X570 chipset AM4 socket mobo. Glad this happened after I finished upgrading mine...
Now once I pick up one of these PowerColor Radeon RX 5700 XT Liquid Devil vid cards, I should be able to run AHIII at full graphics load at the monitor's full native RR on my current platform with ease, and use less power overall to do it than I'm using now. A win-win situation using all Team Red...
Or I could wait on Navi 21... but this particular PowerColor card is a niche product and may not be around much longer, so I'll have to weigh that out...
Then I think I'll be satisfied for a while...