Question is tho, ignoring the graphs, could you really tell/see a diff in game play?
I never could regarding AH.

Hi MADe,
The short answer is yes, I can see a difference in gameplay, but the difference is not always due to FPS alone. Too many times this is misunderstood IMHO. In general terms, FPS is a measurement of how fast a graphics card is flipping finished graphics frames to the display, but it is just as important, if not more important, that the frame flipping sequence be consistent so the scene animation is consistent, which shows up as SMOOTH SCENE MOVEMENT. It is this aspect of computer graphics that affects gameplay the most, over and above FPS, as it WILL impact any control input, gunnery, views, etc....any part of the scene that imparts motion and thus creates a timing scenario within the scene. FPS alone is not the end-all answer. In fact, FPS can mask these issues just as easily as show them up.
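To put some rough numbers behind that last point, here is a quick sketch (the frame times are made up for illustration, not from any real capture on my box): two frame-time traces that report the same average FPS, yet one of them would look and feel noticeably rougher because of a handful of long frames.

# Quick sketch with made-up frame times (in milliseconds) showing how an
# average-FPS number can hide inconsistent frame pacing.
smooth = [16.7] * 60                 # every frame ~16.7 ms -> a steady ~60 FPS
stuttery = [12.0] * 55 + [68.4] * 5  # mostly fast frames plus a few long spikes

def average_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

for name, trace in [("smooth", smooth), ("stuttery", stuttery)]:
    print(f"{name}: avg {average_fps(trace):.1f} FPS, worst frame {max(trace):.1f} ms")

Both traces come out at roughly 60 FPS on average, but the second one has frames nearly 70 ms long, and that is exactly the kind of hitch a frametiming graph line exposes and an FPS counter hides.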
I use the MSI graphs to back this up because I can type my findings here all day, but unless a person has a good understanding of GPU frametiming and its effects on what they see on screen, the point is most likely gonna be missed or dismissed.
This is the main reason why both Nvidia and AMD are very big on GPU frametiming (the FCAT process) for determining overall graphics performance, and also why IMHO it should be closely monitored on a computer being used to play games.
GPU FPS and GPU frametiming are not the same thing, and a good number for one does not guarantee the other, especially when you start adding in other methods that try to achieve the same end, such as Vsync, VRR (G-Sync or FreeSync), or high-refresh-rate monitors. These are all viable methods for achieving graphics scene consistency, but my testing has shown that these issues are not centered only on the GPU or the monitor....the CPU also has a part in this.
This is especially true when a CPU has more than 4 physical cores on die running software that isn't written to make full use of all of them. The extra CPU cores start to exhibit excessive idle\wait time (read as CPU latency) because Windows will try to use ALL of them to parse game threads, which throws off the CPU's signal timing to the GPU, either to fetch work from the system mem cache to continue to draw\render graphics frames OR to flip finished graphics frames to the display in sequence, and that WILL show up as a stutter or a screen tear as well. We have always just associated these w\ the graphics card or the Internet, but CPU latency can affect the operation of ALL of these devices.
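If you want to see that idle\wait pattern on your own box, here is a rough sketch that samples per-core usage while the game is running. It assumes the third-party psutil package is installed (pip install psutil), which is just my choice of tool here, nothing that MSI AB or HTC provide. On a 6-core chip running a game threaded for 2 cores you would expect to see a couple of busy cores and several mostly idle ones.

# Rough sketch, assuming the third-party psutil package (pip install psutil).
# Takes a handful of per-core utilization snapshots while the game is running.
import psutil

SAMPLES = 10         # how many snapshots to take
INTERVAL_S = 1.0     # seconds between snapshots

for i in range(SAMPLES):
    # percpu=True returns one utilization percentage per logical core
    per_core = psutil.cpu_percent(interval=INTERVAL_S, percpu=True)
    print(f"sample {i + 1}: " + "  ".join(f"core{c}={p:5.1f}%" for c, p in enumerate(per_core)))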
Just to make it very clear, my box IS using Vsync, FreeSync and AMD FRTC set @ 80 FPS, which have all worked to smooth out the graphics frametiming to the point that any CPU anomalies will show up in the GPU frametiming graph line on my box......as I have shown. The differences are easily noted in the SMOOTHNESS of the game running, which includes any control inputs, gunnery angles\bullet trajectories, etc., as the CPU does all of this work and it has to be done in sync w\ the graphics frames being displayed, not necessarily at the SPEED or FPS of the GPU alone.
This is most noticeable when viewing a plane's prop rotational animation while playing. If the prop arc rotation is smooth w\o any hitching, then everything is in sync; if it is not smooth and shows variations in the rotational animation, then something is not in sync. I have seen stutters and hitches in the screen animation as well, and these will show up on the GPU frametiming graph line....as I have shown thru these graph snippets. But they aren't ALWAYS the fault of the GPU....as I have also shown.
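For anyone wondering what a frame cap like FRTC is actually doing for the frametiming, here is a bare-bones sketch of the idea (my own simplification, nothing to do with AMD's actual driver code): the render loop is held to a fixed frame budget, so frames that finish early wait out the remainder instead of being flipped immediately, which evens out the flip cadence.

# Bare-bones sketch of a frame limiter, using an 80 FPS target like my FRTC setting.
# This is a simplification of the idea, not AMD's actual implementation.
import random
import time

TARGET_FPS = 80
FRAME_BUDGET_S = 1.0 / TARGET_FPS   # 12.5 ms per frame at 80 FPS

def render_frame():
    # Stand-in for real render work that takes a variable amount of time.
    time.sleep(random.uniform(0.004, 0.010))

for frame in range(20):
    start = time.perf_counter()
    render_frame()
    worked = time.perf_counter() - start
    # If the frame finished early, sleep out the rest of the budget so frames
    # are presented at an even cadence instead of whenever they happen to finish.
    if worked < FRAME_BUDGET_S:
        time.sleep(FRAME_BUDGET_S - worked)
    total = time.perf_counter() - start
    print(f"frame {frame:2d}: work {worked * 1000:5.1f} ms, presented at {total * 1000:5.1f} ms")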
As CPU core counts continue to grow this is gonna become a bigger problem IMHO if game developers don't compensate for this thru their software.
Pudg,
If you had, say, a 4-core 2700K, do you think it would make a noticeable difference?
Next question: how exactly do you go about setting the affinity so the CPU uses only the 2 cores?
I ask because in some frames of FSO, when there are large groups of planes, I will see large frame-rate reductions if I don't reduce my settings. I use an AMD 7950 card, which I hope to upgrade soon, and I'm not sure if it's a map thing or a computer resource thing. I find AH3 runs much smoother for me than AH2 did when you drop a few frames, but I would like to set my detail and terrain to a higher setting for better visuals. If I do this in FSO I risk a slideshow effect ATM.

Hi Morfiend,
From my testing on my box I have found that AHIII will run pretty well on 2-4 CPU cores w\ little variation, but here is the process to set this if you want........
http://www.techrepublic.com/blog/windows-and-office/change-the-processor-affinity-setting-in-windows-7-to-gain-a-performance-edge/
HTC has said several times that AH was written to use 2 CPU cores, meaning they have optimized the game threading across 2 CPU cores. It will run on a single CPU core as well as on many CPU cores, but the results may not be as good as when it is run on the 2 CPU cores the game was designed to be optimized for. This is due to how Windows handles CPU core affinity, not due to HTC.
My testing has only partly confirmed what HTC has stated; it has done more to expose what happens when AHIII is run on a multi-core CPU with more than 4 CPU cores (like my Intel i7 5820K 6-core CPU). From my testing I'm finding that the extra CPU cores are inducing too much overall CPU latency because those cores experience excessive CPU idle\wait time, so they are not as efficient in processing game threads, causing timing issues w\ my box's GPU and culminating in GPU stuttering as seen both visually in-game AND thru the GPU frametiming graph line of MSI AB.
This kinda fits the scenario described in the blog link about setting CPU core affinity for older software that hasn't been written to specifically make good use of all the extra CPU cores.
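If you would rather script the affinity change than click thru Task Manager each time, here is a rough sketch of one way to do it, again assuming the third-party psutil package; the executable name below is only a placeholder, so swap in whatever the AHIII process is actually called on your box.

# Rough sketch, assuming the third-party psutil package (pip install psutil).
# Pins an already-running process to CPU cores 0 and 1, similar to what the
# Task Manager affinity dialog in the blog link does.
import psutil

TARGET_NAME = "aceshigh.exe"   # placeholder; check Task Manager for the real executable name
CORES = [0, 1]                 # restrict the process to the first two cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == TARGET_NAME:
        proc.cpu_affinity(CORES)   # psutil's setter for processor affinity
        print(f"Set affinity of PID {proc.pid} to cores {CORES}")
        break
else:
    print(f"{TARGET_NAME} is not running")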
But your mileage may vary, as your particular issues may have other, more mainstream causes, so don't get your hopes up..........
Hope this helps you out.
