Before running any benchmarks, make sure of the following:
1. You have the latest drivers for your video card.
2. Your motherboard drivers/BIOS are up to date.
3. You have all service packs and hotfixes for your O/S.
4. You have optimised all BIOS/display properties settings (if you're
unsure of how to do this, please check the BEGINNERS GUIDE TO TWEAKING
YOUR VIDEO CARD for further information). See:
http://www.atomicmpc.com.au/forums.asp?s=2&c=7&t=3253
5. You have closed ALL applications not required to be running, for
example ICQ, MSN or Outlook.
Now that you've run your default test, jump into your overclocking program.
Whether you have a 3rd-party utility open or you're using the driver
tools through display properties/settings/advanced/clock rate, you
should be presented with two slider bars: one for the core clock and one for
the memory. Welcome to the place where you'll do most of your fiddling.
So do you just move the sliders really high, click "APPLY" and then you
have the best graphics card on the face of the planet? Hell no. What you
would end up with is a melted glob of silicon dripping over your
motherboard. The best and safest way to overclock is in increments. I'm
talking about tiny increments, like 10MHz at a time. Some people would say
15MHz at a time, but really 10MHz will give you a more detailed idea of when
you hit your overclock ceiling (when bad things start to happen).
Start by bumping your core speed up by around 10MHz and applying the change.
Now open your benchmark tool and disable all but one test. Keep the "High
polygon count, 1 light" test enabled. This test will be enough to show any
artefacts that might crop up, without having to run all 16 or so tests
through 3dmark2001SE.
Run it and check for artefacts. (Artefacts are glitches in the rendering
of scenes and objects during benchmarking/gaming. They indicate that the
graphics hardware is clocked too high and may be overheating. If you start
to see objects wildly distorted or polygons where they shouldn't be, it's
probably the core overheating. If you see white dots flickering all over the
screen, it's more than likely memory heat.)
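To summarise the diagnosis above in one place, here's a rough symptom-to-culprit map. This is just an illustrative Python sketch of the rule of thumb described, not anything exhaustive:

```python
# Rough symptom -> likely-culprit map, per the rule of thumb above.
ARTEFACT_SYMPTOMS = {
    "wildly distorted objects / stray polygons": "core clocked too high (core overheating)",
    "white dots flickering across the screen": "memory clocked too high (memory heat)",
}

for symptom, culprit in ARTEFACT_SYMPTOMS.items():
    print(f"{symptom} -> {culprit}")
```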
NOTE: Rather than doing this, you can download specific artefact-testing
programs, designed to run a series of tests and check for graphical
glitches. These might suit your purposes better. Linkage coming soon. (Some
new driver sets, such as the latest Omega drivers, actually come with an
in-built artefact-testing program as part of the suite. You will have to
make sure you choose to install this component when installing the drivers.)
Link to artefact tester program (stand-alone):
http://www.majorgeeks.com/download4109.html

If you don't see any problems, raise the core speed by another 10MHz and
continue your testing loop until you spot a problem. A problem could be
anything from the benchmarking tool hanging/freezing, to artefacts, to the
PC rebooting itself. That's a sign you've gone too far.
Once you've reached a safe limit with the core speed and you've run the
benchmarking tool a few times, leave the setting where it is and move on to
the memory. Repeat the same steps until you hit the ceiling for your RAM
modules.
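The trial-and-error loop above can be sketched in code form. This is purely illustrative Python — the names (`find_ceiling`, `passes_artefact_test`) are made up, and in reality you move the slider and run the artefact test by hand in 10MHz steps:

```python
# Illustrative sketch of the incremental overclocking loop described above.
# These names are hypothetical -- real overclocking is done by hand through
# the driver/utility sliders, one small bump at a time.

def find_ceiling(start_mhz, step_mhz, passes_artefact_test):
    """Raise the clock in small steps until a test run shows problems,
    then settle on the last known-good speed."""
    clock = start_mhz
    while passes_artefact_test(clock + step_mhz):
        clock += step_mhz  # no glitches at the higher speed: keep it, go again
    return clock  # highest speed that still tested clean

# Example: pretend this card starts artefacting above 450MHz.
core_ceiling = find_ceiling(400, 10, lambda mhz: mhz <= 450)
print(core_ceiling)  # 450
```

The same loop applies to the memory slider; only the starting clock and the test change.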
You may find that raising the core to its ceiling and then raising the
memory to its maximum will allow you to raise the core speed again by a
small margin. So once you've got both sliders as high as they can go without
any glitches, try the core again and see if you can move it any further.
So let's take a look at a typical overclock I managed out of my Radeon
9600pro. Keep in mind that the 9600pro runs as cool as a cucumber due to
various factors, so don't always expect an improvement like this from every
card. There are a lot of more powerful cards out there that don't run on
the 0.13-micron core.
Default: 400MHz Clock (400MHz * (4 pipelines * 1 TMU) = 1600 Mega-Texels/sec
= 1.6 Giga-Texels/sec fill rate)
600MHz Memory (128 bits * 600MHz / 8 = 9600MB/sec = 9.6GB/sec bandwidth)
O/Clock: 500MHz Clock (500MHz * (4 * 1) = 2000 Mega-Texels/sec
= 2 Giga-Texels/sec fill rate)
680MHz Memory (128 bits * 680MHz / 8 = 10880MB/sec = 10.88GB/sec bandwidth)
Not a bad result, especially from the bandwidth point of view.
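As a sanity check, the figures above come from two simple formulas: fill rate is core clock times pipelines times texture units per pipe (4 x 1 on the 9600pro), and bandwidth is bus width in bits times effective memory clock, divided by 8 to get bytes. A quick sketch:

```python
def fill_rate_mtexels(core_mhz, pipelines=4, tmus_per_pipe=1):
    # Fill rate in mega-texels/sec: core clock x pipelines x TMUs per pipe.
    return core_mhz * pipelines * tmus_per_pipe

def bandwidth_gb_per_sec(mem_mhz_effective, bus_width_bits=128):
    # Bus width in bits x effective memory clock, / 8 for bits -> bytes,
    # then / 1000 for MB/sec -> GB/sec.
    return bus_width_bits * mem_mhz_effective / 8 / 1000

print(fill_rate_mtexels(500))       # 2000 mega-texels/sec (2 giga-texels)
print(bandwidth_gb_per_sec(680))    # 10.88 GB/sec
```

Plug in your own card's clocks, pipeline count and bus width to see what an overclock buys you on paper.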
You should now have a card overclocked in both the memory and core settings.
Now run 3dmark again with all the tests enabled and check your score.
Hopefully you'll see a measurable increase in 3dmarks. Congratulations,
a few hundred points' difference is worth all that crazy effort and risk! If
you really want to be sure about the stability of your newly tricked-up
card, it would be worth your while to loop the 3dmark test several times
over. You may have run it once or twice with no problems, but graphics
hardware sometimes needs a little time before it gets worked up enough to
crap itself due to overclocking. You may also find that although
3dmark2001SE runs all tests multiple times without any glitches, an actual
game engine will produce many. The best solution? Use a combination of
"Real World" and "Benchmark" tests to complete the checking of your
hardware. More on that below!
REAL WORLD Vs BENCHMARKING
This is one of the single most important questions that faces the avid
overclocker of video hardware. What's more important to test with? Real game
titles or benchmarking programs?
The answer is that both are important, for different reasons. Let's take a
look at some different angles of testing and whether you should look for
answers in Real World or Synthetic benchmarking.
I want to see the improvement my efforts have made!
In the above case, most people will agree that you would be best running a
Synthetic benchmark to test the results of an overclocking or tweaking
experiment. Synthetic benchmarking tools return an exact numerical score
that accurately reflects any changes to the bandwidth or fill rate of a
graphics card. While the score might differ by around 100 or 200 points,
that may only translate to a few measly frames in an actual game title. If
you tried to run a game title to test your results, chances are you wouldn't
notice the difference between 40fps and 43fps. If you somehow manage to
drastically improve your Synthetic benchmark score from, say, 11,000 3dmarks
to 14,500 3dmarks, you might find testing with a game title such as UT2003
to be worth it, as you'll see a real performance increase in framerate.
What's the best way to test for artefacts? Synthetic, Real World or an
artefact tester?
The easy answer: all three. There are many different 3D engines out there
that all utilise different aspects of a 3D video card. The only way to
comprehensively test for glitches is to use a wide variety of testing
methods to ensure stability. I can't stress that enough. It might be tedious
waiting for 3dmark to finish for the 10th time, but wouldn't you rather your
hardware was running OK than have it crash during an online game? Also
remember that glitches aren't just limited to funny polygons appearing on
the screen. They can be white or coloured speckles, machine lock-ups, games
crashing back to the desktop - a whole variety of different buggy
happenings.