Author Topic: Beginners guide to video cards  (Read 648 times)

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« on: August 05, 2004, 12:21:17 AM »
OK, this is very long, but it has tons of info and will probably help a lot of you out. This is a general guide and is NOT AH2 specific.

BEGINNERS GUIDE to tweaking your graphics card:

Warning: This guide is not for the advanced user; the topics covered here will already be known to most of you. For everyone else, it's a guide to save you surfing the net for correct settings and basic information.

Yes, well, you now have the best graphics card your budget will allow and you've hurriedly installed Unreal Tournament 2003 so you can enjoy it in triple-digit frames per second, with all the "holy ****" options turned on. But wait! What's this? UT2003 chugging along, not giving you your money's worth? This cannot be so. I mean, you spent 360 bucks on this damned piece of PCB; you expected more, MORE!

Don't do any of the following just yet:

- Blame it on your mother
- Kick the family dog (Replace 'dog' with appropriate favourite pet)
- Burn the graphics card in a ritualistic orgy of frenzied religious
fanaticism


It might be that some of your BIOS, Windows or other settings need to be "tweaked" a little to get better performance. Don't assume that just because you installed the card and the drivers, everything will be as it should be. So let me take you through some of the BASIC tweaking and option settings available to you, spread across three different categories: BIOS, Windows and "In game".

NOTE: Tweaking is NOT overclocking; these are separate things. Even so, toggling options on or off can result in instability where your graphics card is concerned, so tread carefully. If you turn an option on or off and observe problems with the card afterwards, set the option back to whatever it was beforehand. With the plethora of different brands and configurations out there, you're bound to come across some unique situations when playing with graphics card options. But we love 'unique' situations, don't we At0micans? ...don't we? ...er... hello??

The BIOS, and flashing your BITS.

Before we jump into talking about the motherboard BIOS itself, it's worth taking some time to understand that with a video card you actually have two BIOSes available: the one on your mainboard, and the video card's own BIOS. We'll deal with the motherboard options in a moment, but what of the video card BIOS itself? Just like the mobo BIOS, the video card BIOS tells your computer information about your 3D card, such as its brand, model, and more importantly its core clock and memory speeds. Most of you won't ever need to muck around with the video card's BIOS, but you should realise that each time you make permanent changes to the core/memory speed of your card, you're writing those changes to the video card's BIOS.

That being said, sometimes you might get a card that needs its BIOS "flashed" out of the box, because it is part of a bad batch and may be incorrectly recognised by your system. GeXcube's 9800pro Extreme is a recent example of this (if you don't know what a BIOS flash is, stop reading NOW and learn about that first before proceeding). The GeXcube example is simple: the 9800pro Extreme should be a higher-clocked version of the 9800pro, however a batch was released that still contained the normal, everyday 9800pro BIOS, so the cards were showing the slower core/memory speeds. What's the answer? To flash the BIOS of the video card! This usually involves creating a bootable floppy disk which contains the new BIOS (which will usually have a .rom extension) and several batch programs that will both apply the new BIOS to your card and, more importantly, back up the old one. It's extremely important that you play it safe when flashing the video card BIOS, as doing it incorrectly may leave your card completely unusable. Always make a backup so you can restore it if the new BIOS doesn't take for some reason. Do your homework!

Apart from the "bad batch" scenario, there are also newer revision BIOSes released for most major graphics cards that contain updated settings, or allow a previously 'locked' card to be 'unlocked' for overclocking. It's up to you whether you want to flash the BIOS to keep up with these revisions, but honestly, if it ain't broke, don't fix it!

Now onto the motherboard BIOS.

These settings are at the heart of your graphics card, determining many of its operational defaults before Windows even loads. Let's take a little tour of some of the settings contained in the MOTHERBOARD BIOS...

AGP FASTWRITE:
Most modern cards support the fast write feature. Enabling it allows a "shortcut" to be created between the motherboard and your graphics card, taking your system's main memory out of the loop - in theory increasing performance. Personally I've tried this option both enabled and disabled, and haven't found much of a difference either way. For the sake of testing though, try it both ways and see how you go.

AGP MODE:
The AGP mode setting should offer options of 2x, 4x and 8x. These options are only valid if both the graphics card and the motherboard support that AGP mode. You should always set the AGP mode to the highest setting your configuration will allow; for all modern graphics cards, such as the DirectX 9.0 series, this will be AGP 8x. If you only have an AGP 4x board and video card, don't stress that you're missing out on too much though. The performance difference between 4x and 8x is barely measurable, and may only be apparent in benchmarks.

VGA PALETTE SNOOP:
This feature is disabled by default on most if not all motherboards. It's best to leave it that way, as this option only comes into effect when the PC is running in 256 colour mode, which doesn't often happen nowadays. This option does not impact performance.

AGP APERTURE SIZE:
This feature is a bone of contention amongst many users, with several different schools of thought as to what the best setting is and how much it increases or decreases performance. I can only offer my suggestion, which is to set the aperture size to half the available system memory. So if you have 256MB of main memory, set your aperture size to 128MB. The aperture size just tells the computer how much of the system's main memory the graphics card can access if it requires more memory than is available on board. Given that most video cards today have 128MB as a minimum, aperture size doesn't really come into play. However, if you're using an older 64MB or 32MB card, it may be wise to look into this option for a slight performance increase.

VIDEO BIOS SHADOWING:
This feature is still available on most modern motherboards, yet is completely defunct nowadays. Video BIOS shadowing was once used to allow the computer to copy the video card's BIOS to system memory for faster access. Modern PCs no longer access the BIOS when talking to the graphics card anyway, so leave this disabled (as it should be by default).

VIDEO MEMORY CACHE:
OK, I'm not entirely sure about this feature, so if I'm wrong someone let me know and I'll fix it up. As I understand it there are two options, UC and USWC. UC stands for Uncacheable and USWC stands for Uncacheable Speculative Write Combining. USWC should be the option to stick with for a minor performance increase, but if you spot problems with the video card, UC can be set as an alternative.

Note: These options by themselves offer nothing meaty in performance gains, but a well-tuned BIOS with all the options set to their most desirable values can make a difference to your graphics card's overall performance.


The WINDOWS Environment:

Bless its crummy heart for trying, but Bill Gates' baby doesn't always give your graphics card its most ideal home, so here are some tips to make gaming a more comfortable journey within the Windows environment.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #1 on: August 05, 2004, 12:23:34 AM »
cont ........

DISPLAY SETTINGS

Windows XP (which the author assumes
you're using) has a set of Display
Properties which can be tweaked to
improve performance without the need
for any additional software.

- Right click on your desktop
- Choose "Properties"
- Once your properties are displayed,
choose "Settings" then "Advanced"

You will now have access to various different panels of options concerning your display and your video card. The one that concerns us for the most part is "Direct3D", so click on that one.

You'll notice at the very top of the Direct3D settings a slider bar that has performance at one end and quality at the other. Moving it closer to one or the other extreme changes your display quality for applications (mostly games). The slider bar simply makes adjustments to things like anti-aliasing and anisotropic filtering depending on where you slide it. Obviously, the closer to optimal quality you slide the bar, the better things like textures will appear in games, but your framerate will take a hit. The opposite is true if you move it further towards the performance end.

Below the main slider are several other slider bars, allowing you to choose your settings more specifically. You'll notice these change depending on where your "Main settings" slider is placed. If you want to fine-tune things like anti-aliasing or MipMap detail level, you do it with these sliders.

IF YOU BENCHMARK USING APPLICATIONS
SUCH AS 3DMARK2001SE OR CODECREATURES
ALWAYS SET THE MAIN SLIDER TO MAXIMUM
PERFORMANCE


DRIVERS:
There are basically three different sets of drivers for the two most popular video card chipset makers out there - Nvidia and ATI. (Forgive me for not including any others such as SiS's Xabre series, but the minority loses out when making a general guide; besides, if you have a Xabre series card, there's probably not a lot anyone can do to help you.)

Boxed Drivers:
These come on the CD with your video card when you first buy it, and are basically one of the following two types of driver:

1. A manufacturer-specific driver, such as those created by Leadtek

or

2. A driver that comes from the chipset maker, such as Nvidia or ATI. PowerColor is an example of a brand that uses the basic driver set developed by the chipset maker themselves. (We'll deal with those a little further down.)

The easiest way to tell whether your card comes with a general driver, the same as those obtained from Nvidia or ATI, or with a manufacturer-specific driver, is to check the following menu:

Right click on your desktop
Choose the Properties option from the drop-down list
A box will come up; choose the Settings tab within this box.

You will get something along the lines of "Geforce FX5200 on FLATRON 795+ Monitor".

A manufacturer-specific driver will contain the manufacturer's name within the name of the graphics card, so a Geforce FX5200 with Leadtek graphics drivers will report as a "Leadtek Winfast A340", whereas if your chosen brand uses the basic Nvidia chipset drivers, your card will just report as the model of card it is, in this case a Geforce FX5200. It's debatable whether using manufacturer-specific drivers makes a hell of a difference to performance, so I'd say try it both ways and see how you go. Others, though, adhere to the golden rule of ALWAYS using manufacturer-specific drivers where applicable.

If you have a card for which manufacturer-specific drivers are available, you will find them at the manufacturer's website. Check the documentation that came with your graphics card for the manufacturer's website details; some even provide a link on the drivers CD.

IMPORTANT NOTE: Always remember that when you buy a graphics card, the drivers on the CD will more often than not be out of date, so you may be missing out on performance gains, image quality and options that newer drivers have. If you have manufacturer-specific drivers, visit their website to obtain the latest version. If your card comes with the standard ATI or NVIDIA drivers, visit either http://www.ati.com or http://www.nvidia.com to get the updates.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #2 on: August 05, 2004, 12:25:31 AM »
DETONATOR DRIVERS AND CATALYST DRIVERS:
These are the drivers created and maintained by the chipset designers themselves; they are called "reference" drivers. In this case we have the Detonator drivers from Nvidia and the Catalyst drivers from ATI. These drivers are updated to newer versions quite frequently and are available for download from a variety of sites, though Nvidia's and ATI's main sites would probably be the default place to go.

Detonator drivers usually have a version number format of xx.xx
Catalyst drivers usually have a version number format of x.x

As a general rule, it's always best to have the most up-to-date drivers from the Nvidia and ATI sites, so always check their websites for details of available or upcoming drivers. However, it is always worth checking out individual users' responses to new driver sets, as they don't always improve upon the last set. Some people have had tales of woe when it comes to using some of the latest Detonator drivers, for instance, complaining about everything from a loss of image quality to reduced frames per second in games, or lower benchmark scores. Always do a forum search for information about the latest Catalyst and Detonator drivers and judge the feedback for yourself.

OMEGA Drivers for Nvidia and ATI cards:

The Omega set of drivers is designed, released and maintained at http://www.omegacorner.com. They were designed as a stable alternative to the reference drivers from Nvidia or ATI. The benefits of using the Omega drivers over reference drivers, according to their author, are:

* More optimisations.
* Extra features, such as a built in overclocking utility.
* More resolution modes to choose from.
* Internal tweaks/options, possibly giving performance gains or image
quality gains.

The Omega drivers come in separate sets for the Nvidia and ATI ranges of cards. Usually a new version of an Omega driver is released shortly after the corresponding driver set from Nvidia or ATI, and it can be downloaded from the omegacorner site.

ATI Omega drivers take the format of - x.x.x
Nvidia Omega drivers take the format of - Vx.xxxx

According to the author, these driver sets have been tested for compatibility using a PC set up with both an Nvidia and an ATI graphics card, but this in no way guarantees that they will work well with your configuration of card and machine, or even work at all.

The reference drivers from ATI and Nvidia are assumed to be tested more stringently and with a wider array of hardware setups, due to the extensive resources available to those companies. The debate rages here, as in many other areas of tweaking, as to the benefits of using the Omega drivers over reference drivers. I can happily report that using the latest Omega set vs the latest Catalyst driver, I managed a 300-point (approx) increase in 3DMark2001SE, which is no small thing. As a matter of fact, I celebrate the occasion every so often with an orgiastic feast of chicken strips and Counter-Strike.

Your experience with the drivers may vary wildly from this; some users have reported a performance drop after installing them. Bearing that in mind, the Omega drivers are a welcome addition to any enthusiast's graphics card arsenal, as the extra options alone make them worth investigating as a replacement for the reference set of drivers.

If there is a demand for a more in-depth look at the Omega driver set from forum users, I will include an analysis that breaks down all the optimisations, enhancements and overclocking tools contained within the drivers. Until then, this serves as an introduction to Omega as an alternative driver set.

NOTE: the Omega website also includes links to BIOS updates for various graphics cards. BIOS updates for your graphics card should not be attempted by novice users, just as you wouldn't attempt to flash the motherboard BIOS without at least the assistance of a more intermediate to advanced user to guide you. Graphics card BIOS updates are covered earlier in this guide.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #3 on: August 05, 2004, 12:28:25 AM »
CLEANING UP OLD DRIVER FILES:

As much as it seems relatively easy these days to install a new set of drivers in Windows for your graphics card, sometimes it doesn't all go as smoothly as it appears to the naked eye. (Now now, kiddies, don't go getting all frothy over the word naked; it's just a word, not an invitation.)

For the sake of saving space (because yes, this post is becoming bigger than Ben Hur on an acid trip in the jungle), we'll stick to some simple advice and suggestions relating to Windows XP. Earlier versions of Windows may treat things a little differently, but if there are enough Windows for Workgroups 3.11 users out there, I'll write something up for you upon request.

The best and most concrete way to remove old drivers for your video card is to go into Device Manager in Windows XP (via the My Computer/Properties menu), expand "Display adapters", then right-click your card and choose "Uninstall".

This removes the display adapter drivers and device from your OS; the next time it boots up, it will be running in VGA compatibility mode, resulting in a loss of resolution and colour depth for most users. This should ALWAYS be done before installing new drivers, as installing new drivers over the top of previous installations can cause issues. Some driver packages, once installed, will also have an "Uninstall" option in the Start Menu/Programs section of the desktop, under the display drivers entry. It's best that you also use this option to remove the drivers from your Programs menu.

A known problem with ATI display drivers is that if you attempt to install new driver sets over old ones, you will get the following error during setup:

"Video INF file not found"

The setup will then exit. If you get this error, you have not correctly uninstalled the display adapter from your system and need to do this first. Sometimes, however, especially in the case of Nvidia Detonator drivers, even removing the display adapter and uninstalling the device drivers will not result in a complete erasure of the files, settings and registry entries for that driver.

To completely ensure that those entries are gone from your machine, you may need something like RegCleaner, or a manual cleaning of some registry entries. I won't go into detail listing the procedure for this here, as it can be slightly dangerous to muck around too much with regedit, but if enough people request the information, I will post it here. (Safe to say that if you think this is an option you need to take, speak to someone with a bit of knowledge here at the forums and they should be able to help you out.)

A useful tool for getting rid of your Detonator drivers completely is "Detonator Destroyer", available at http://www.guru3d.com/detonator-destroyer/ although I believe it is only compatible with the following operating systems:

Windows 95 (all versions), Windows 98, Windows 98SE, Windows ME

There are also some steps you can take for removing video drivers, listed at http://forums.extremeoverclocking.com/archive/topic/15567-1.html. These steps are a little long-winded, however, and not terribly clear in their layout, but hopefully there's some information in them to suit your particular situation.

Display Settings In Game:

A number of users often express confusion and plain paranoia when it comes to all the settings available in a game's "Video" or "Graphics" menu. I mean, just about every single one of us has played with these settings from time to time, but do we all really understand what it is we're affecting, and whether it impacts performance or not?

The golden rule is that any effect or option turned to a higher or more severe setting will impact game performance; only the upper echelon of video cards will be able to handle a game like Unreal Tournament 2003 with all the graphics options turned up.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #4 on: August 05, 2004, 12:29:21 AM »
I'll now take you through some of the more common settings available to you in-game, and give you an idea of what each setting represents.


32-bit Colour / 16-bit Colour

This option will also be present within your Windows desktop, and most users will have it set to 32-bit by default. It is usually tied in with choosing a resolution for the game to run at; if you look in your resolution options, you'll see the format xxxx x xxxx xx-bit, for example 1600 x 1200 32-bit or 640 x 480 16-bit. The bit depth refers to how strong the colour depth is during the game, i.e. the number of colours displayed. If the game is run in 16-bit, then the engine will do its best to emulate 32-bit colour; the difference is noticeable to some, but not to others.

Resolution.

The resolution options should be present in any game, and can range from 640 x 480 (pixels wide and high) to 1600 x 1200. The range you have available depends on the size of your monitor, your driver set, and the limits set within the game engine. The greater the resolution set within the game, the more viewable area you may have when playing and the smoother the graphics appear, because the system can use more pixels to represent objects on-screen.
Increasing the resolution in a game does impact performance, and depending on your graphics hardware the framerate drop can be quite severe. The standard setting for games years ago used to be 640 x 480; this then increased to 1024 x 768, and it's now nestled somewhere between 1280 x 1024 and 1600 x 1200. The best resolution to use when playing a game also depends on how graphics-intensive the game itself is. For example, I have no problems playing Warcraft 3 at 1600 x 1200 with flawless framerates, but when it comes to Unreal Tournament 2003, 1600 x 1200 can produce slowdown when there is a lot happening on screen.

Filtering:

Filtering is a method of determining the color of a pixel based on the texture maps provided by the game engine.
It comes into play, for instance, when as a player you get too close to a textured polygon: the texture doesn't have enough information to determine the "real" color of each pixel you're viewing, so it uses the colors of the surrounding texels to determine the best color based on mathematical averages. Have I lost you yet? I hope not. Filtering gives the 'blurred' effect to textures you are viewing up close; it is designed to hide the jaggedness of the texture, however it can result in a vague, washed-out look. An example of filtering would be the monsters in Quake: when you get close to them, they appear blurred and, at best, unrealistic. Filtering works best on texture maps that are vague by nature; roads, floors and walls use filtering best.

Filtering comes in the following types; some are applicable to modern gaming, some are not really in use anymore:


Point filtering:

Point filtering just copies the color of the nearest real texel, so it effectively enlarges that texel. This creates a blocky effect, and it's used in software 3D rendering because it requires very little calculation power. This type of filtering will be in place if you choose software rendering over Hardware T&L or pure Hardware T&L.

Bilinear filtering

Bilinear filtering uses four adjacent texels containing real color information to determine the output pixel value. This results in a smoother textured polygon, as the interpolation filters out the blockiness associated with point filtering. The disadvantage of bilinear filtering is that it results in an approximate fourfold increase in texture memory bandwidth.

Trilinear filtering

Trilinear filtering combines bilinear filtering across two MIP levels. This, however, requires 8 texels, so the memory bandwidth requirement is doubled again. This usually means the memory will suffer serious bandwidth problems, so trilinear filtering is usually offered as an option, although higher-end graphics cards should be able to handle this filtering with greater success.
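
To make the texel-blending idea concrete, here's a minimal sketch of bilinear and trilinear sampling in Python. It's my own illustration rather than anything from a driver; plain lists stand in for textures and mip levels:

def bilinear_sample(texture, u, v):
    """Blend the four texels surrounding (u, v); texture is a 2D list of values."""
    h, w = len(texture), len(texture[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def trilinear_sample(mip_levels, u, v, lod):
    """Bilinear-sample two adjacent mip levels and blend between them (8 texels total)."""
    lo = int(lod)
    hi = min(lo + 1, len(mip_levels) - 1)
    frac = lod - lo
    return (bilinear_sample(mip_levels[lo], u, v) * (1 - frac) +
            bilinear_sample(mip_levels[hi], u, v) * frac)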

Anisotropic Filtering:

Anisotropic filtering addresses quadrilateral-shaped and angled areas of a texture image. A sharper image is accomplished by interpolating and filtering multiple samples from one or more MIP-maps to better approximate very distorted textures. This is the next level of filtering after trilinear filtering. While it will create the best looking images, it comes at a serious price and should only be used when your system can handle it. If your system is performing slowly, try turning off anisotropic filtering for better performance.

Fill Rate

The number of pixels that a video card can render (textured and shaded) over a given time period (millions of pixels per second, MPPS). This is taken into account when determining the "grunt" of a given video card over others; the higher the fill rate, the better the card will perform.

Fogging

Creates a fog-like effect by placing a haze over the scene. It is used to make objects appear gradually, to avoid the sudden pop-in of distant objects. I'm not sure if this effect is as widely used in modern games as it was 'back in the day' of DOOM and Quake.

MIP mapping

A technique in which scaled-down versions of a texture image, generated beforehand and stored in memory, are used when rendering a 3D scene to provide the best quality. This allows objects to look more detailed as you get closer to them by defining multiple texture maps: very detailed texture maps are used when the object is close, and less detailed ones when the object is further away. This helps avoid blocky textures and the stepping effect on lines. Usually people talk about MIP levels or Level of Detail (LOD), which refers to the quality of the texture map used.
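
As a rough illustration of what a mip chain actually is (again my own sketch, not from the original post), the following Python fragment builds successively smaller copies of a texture by averaging 2x2 blocks of texels until the image can't be halved any more:

def build_mip_chain(texture):
    """Return a list of mip levels, each half the size of the previous one."""
    levels = [texture]
    while len(texture) > 1 and len(texture[0]) > 1:
        smaller = []
        for y in range(0, len(texture) - 1, 2):
            row = []
            for x in range(0, len(texture[0]) - 1, 2):
                # Average a 2x2 block of texels into one texel of the next level down.
                row.append((texture[y][x] + texture[y][x + 1] +
                            texture[y + 1][x] + texture[y + 1][x + 1]) / 4.0)
            smaller.append(row)
        levels.append(smaller)
        texture = smaller
    return levels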

Sideband Signalling (I've included this one because I think someone asked me about it a day or so ago, so here you are!)

An extra 8 bits of addressing capability built into AGP which, in effect, allows the AGP graphics board to request information over AGP at the same time as it is receiving data over the 32-bit data path of the bus. This is yet another way an AGP graphics board can create better efficiencies and improve overall graphics performance.

Texture Mapping

In 3D graphics, texture mapping is the process of adding a graphic pattern to the polygons of a 3D scene. Unlike simple shading, which applies colors to the underlying polygons of the scene, texture mapping applies simple textured graphics, also known as patterns or more commonly "tiles", to simulate walls, floors, the sky, and so on.

Vertical Sync (V-Sync) refers to a video card synchronizing its output to the monitor's vertical refresh rate. A monitor's refresh rate is the number of times per second that the monitor redraws the screen at a given resolution (expressed in Hertz).


Video cards typically use two or three frame buffers to process and display 3D graphics. When V-Synch is enabled, a video card will hold a completed frame in a frame buffer until the display is finished drawing the current rendered frame on the screen. This forces the video card to match its display speed to that of the monitor. Disabling V-Synch lets the video card render frames as fast as possible regardless of the display's refresh rate. This will eliminate a refresh rate bottleneck, but you may (or may not) notice some visual anomalies during gameplay.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #5 on: August 05, 2004, 12:32:34 AM »
USING THE REGISTRY TO MAKE DIRECTX CHANGES

There is, of course, another, more hands-on way to muck around with your graphics card settings: the Registry. If you don't know how to get to the registry, just go to START/RUN and type "regedit" and ZAM! You're in the registry.

Here are some example settings for the Radeon series of cards that can be accessed through the registry. These are the same registry options that a GUI tweaking program will access; this just lets you see exactly what's being changed and where the information for these values is stored on the computer.

WARNING: Don't mess with these settings if you're unsure of how to properly use regedit. Just stick to the GUI interface rather than risk fux0ring your beautiful rig :).

The following settings are explanations of the entries that can be found in the registry under the following key: [HKEY_LOCAL_MACHINE\SOFTWARE\ATI Technologies\Driver\xxxx\atidxhal], where xxxx is the adapter "number" as assigned by Windows. All entries in this key should be STRING values. Where applicable I've specified which settings can be modified from within the advanced display properties.

These settings apply to games and applications that use the DirectX API.
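
If you'd rather read these values from a script than click around in regedit, here's a minimal read-only sketch using Python 3's standard winreg module (Windows only). The adapter number "0000" is just an assumption for illustration; check regedit for the real subkey on your machine, and bear in mind the key only exists with the era of ATI drivers this post describes:

import winreg

# Assumed adapter number for illustration; yours may differ.
KEY_PATH = r"SOFTWARE\ATI Technologies\Driver\0000\atidxhal"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    for name in ("AntiAlias", "AntiAliasRatio", "Vsync"):
        try:
            value, _ = winreg.QueryValueEx(key, name)
            print(name, "=", value)
        except FileNotFoundError:
            print(name, "is not set on this machine")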


-AntiAlias: Settings: 2 = Enabled (forced), 1 = Enabled by Application Preference, 0 = Disabled.

Full Scene Anti-Aliasing in this context can help to smooth the jagged lines and flickering of very thin objects in 3D scenes. In simple terms, the Radeon does this by internally rendering the image at a higher resolution, and then downsampling the image to be displayed at the selected output resolution. This can dramatically improve image quality, but incurs a severe performance penalty. Use the AntiAliasRatio setting to trade some AA quality for speed. Setting this to 2 forces FSAA for all D3D apps; 1 only enables FSAA if the application requests it; 0 disables FSAA on the Radeon. This setting can be modified from within the D3D tab of the ATI advanced display properties.

-AntiAliasRatio: Settings: 512 = 4 samples, 384 = 2 samples

This setting affects how many samples are taken for FSAA (set in the AntiAlias key listed above). 4 sample FSAA will remove the most "jaggies" and have the most significant impact on image quality, but incurs an extreme performance penalty. 2 sample FSAA will be less effective at removing "jaggies" but will significantly reduce the performance penalty. This setting can be modified from within the D3D tab of the ATI advanced display properties.

-Colorfill: Settings: 1 = Enabled, 0 = Disabled

Setting this to "1" is said to improve the color saturation in 3D games. I've not explored this setting enough to say what the cause might be, but reports have been that with it enabled, colors tend to be more vivid.

-DitherAlpha: Settings: 2 = No Dithering, 1 = Ordered Dithering, 0 = Error Diffusion Dithering

Somewhat similar to the OpenGL setting OGLDisableDitherWhenAlphaBlending, this setting affects the dithering method utilized when Alpha blending is used. Experiment to find what looks best to you. This setting can be modified from within the D3D tab of the ATI advanced display properties.

-DisableHierarchicalZ: Settings: 1 = Disabled, 0 = Enabled.

This HyperZ setting enables the Radeon to use a reduced resolution Z-buffer stored in the GPU to conservatively determine the visibility of pixels and reject them before texturing. This helps save texture bandwidth, and can increase performance. Note: the current implementation occasionally introduces rendering artifacts.

-DisableHyperZ: Settings: 1 = Disabled, 0 = Enabled.

This HyperZ feature enables the Radeon to use a lossless form of compression on the Z-buffer stored in video memory. This can save significant bandwidth as well as freeing some extra video memory for textures due to the smaller Z-buffer size.

-EnableUntransformedInLocalMem: Settings: 1 = Enabled, 0 = Disabled

When set to "1" this entry allows the Radeon drivers to place vertices of a hardware T&L application into system memory instead of keeping them all in video memory. This setting should likely be enabled so the drivers can determine what is best, though in general transforming from system memory will be slower than from video memory.

-EnableWaitUntilIdxTriList2: Settings: 1 = Enabled, 0 = Disabled.

From what I can gather, this entry forces the Radeon to wait until all triangles specified in a triangle list (or possibly any of the primitive types) sent to the DirectX function "DrawPrimitive" have been rendered before returning to the program calling the function. Disabling it returns control to the program before rendering is completed, which may slightly improve performance, but according to the DirectX SDK this should only be used for debugging purposes. I generally recommend leaving this entry at its default setting of Enabled. Note: The spelling on this is UntilIdx, with a capital I, not a lowercase L. I had this misspelled here for quite some time.

-ExportBumpMappedTex: Settings: 1 = Enabled, 0 = Disabled

This setting should enable Bump Mapped Texturing.

-ExportCompressedTex: Settings: 1 = Enabled, 0 = Disabled.

Setting this to "1" enables texture compression functionality in Direct3D. This increases performance for games and applications that support compressed textures. This setting can be modified from within the D3D tab of the ATI advanced display properties.

-ExportWBuffer: Settings: 1 = Enabled, 0 = Disabled

Setting this to "1" enables W-Buffer support. The W-Buffer is another way to determine the depth of pixels in a scene. (For the techies: the Z-Buffer stores the actual depth of each pixel, while the W-Buffer interpolates 1/w or 1/z, then (1/(1/w or z)) is performed per pixel, which is then stored in the depth buffer.) The advantage of the W-Buffer is that it distributes the depth values evenly across a 3D scene, whereas the Z-Buffer uses most of its precision for objects close to the viewport, leaving fewer values available for objects deeper in the 3D scene. This more uniform distribution of depth precision can help avoid pixel/polygon popping within 3D scenes, but the application has to support the W-Buffer for this setting to matter.
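
To see why the Z-buffer concentrates its precision near the viewer while the W-buffer spreads it evenly, here's a small numeric sketch of my own, using a typical Direct3D-style depth mapping rather than anything taken from the ATI driver:

# Compare a Z-buffer value (derived from 1/z) with a W-buffer value
# (linear in depth), both quantised to 16 bits.
near, far, steps = 1.0, 1000.0, 2 ** 16

def z_buffer_value(z):
    # Typical perspective mapping: near plane -> 0.0, far plane -> 1.0,
    # but the curve is a function of 1/z.
    return (far / (far - near)) * (1.0 - near / z)

def w_buffer_value(z):
    # Linear mapping of view-space depth between the near and far planes.
    return (z - near) / (far - near)

for z in (2.0, 500.0, 990.0):
    print(z, round(z_buffer_value(z) * steps), round(w_buffer_value(z) * steps))

# Near the camera the Z-buffer values change rapidly with depth (lots of
# precision); far away they barely change at all, which is where the
# W-buffer's even distribution helps avoid pixel/polygon popping.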

-FastZClearEnabled: Settings: 1 = Enabled, 0 = Disabled.

This feature of HyperZ is used to rapidly clear the Z-buffer by tagging entire blocks of memory in the Z-buffer as cleared instead of writing zeros to the entire contents of each block. This saves bandwidth and can increase performance.

-PixelShaderVersion: Settings: 10 = (1.0)Enabled, 11 = (1.1)Enabled, 0 = Disabled(?)

When set to 10, this value enables the DirectX 8 v1.0 pixel shader capabilities of the Radeon. A setting of 11 enables DirectX8 v1.1 pixel shader support. Many Dx8 pixel shader functions are accelerated in hardware when this is enabled.

-PureDevice: Settings: 1 = Enabled, 0 = Disabled(?)

This setting enables support for Pure Device mode. An application must be coded to support the pure device type.

The following is paraphrased from the DirectX SDK:
The pure device is a variant of the hardware abstraction layer (HAL) device and is focused on hardware acceleration with more limited software emulation support than a HAL device. The pure device type supports hardware vertex processing only, and allows only a small subset of the device state to be queried by the application. Additionally, the pure device is available only on adapters that have a minimum level of capabilities.

The pure device type is intended for performance-sensitive applications that do not rely on software vertex processing or on the ability to query the device state. The pure HAL device has significant performance advantages over the (non-pure) HAL device, due to a guaranteed close mapping to the hardware, and reduced need for state shadowing.

-RasterGuardbandEnable: Settings: 1 = Enabled, 0 = Disabled

Also known as Guard Band Clipping, this is a "protective band" that surrounds the visible screen area and allows vertex values to be sent to the video card that are completely off screen. Doing this would normally cause a system crash (in a software engine) because you would be drawing into memory that you are not supposed to. It also reduces the number of triangles a video card's GPU must clip in full 3D. It will reduce the number of triangles to be clipped for games with a software engine too, but in that case the game must be designed with support for it. Triangles that are inside the guard band but partly or totally outside the visible screen can be clipped in 2D, saving a lot of time (bandwidth) because there is no need to recompute Z, color, or texture values.

-SysMemBlts: Settings: 1 = Enabled, 0 = Disabled

This setting allows the drivers to blit directly from system memory to video memory. If the drivers actually did this all the time, it would result in poor 2D and windowed 3D performance. Just as is the case with EnableUntransformedInLocalMem, though, having it enabled does not mean the drivers will do this; it just gives the drivers the option if you are low on video memory.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #6 on: August 05, 2004, 12:33:31 AM »
-TableFogEnable: Settings: 1 = Enabled, 0 = Disabled.

This entry enables support for Table Fog. [Note: Some games exhibit serious artifacts when this setting is enabled, so use where appropriate.]

-TCL: Settings: 1 = Enabled, 0 = Disabled

Enables the Transform, Clipping, and Lighting engine on the Radeon. This may only apply to DirectX applications but I have not tested this yet.

-TclEnableBackFaceCulling: Settings: 1 = Enabled, 0 = Disabled

Enables hardware back face culling for 3D applications. Back Face Culling is a way to calculate the polygons in a 3D scene that are not facing the camera and therefore don't need to be rendered. For this to work hardware accelerated, I believe the application has to be designed with support for it.

-TclEnableVertexBlend2Optimize: Settings: 1 = Enabled, 0 = Disabled

This setting enables the drivers to optimize the Vertex buffers so they can be drawn faster.

-TclEnableVertexBlendUseProjMat: Settings: 1 = Enabled, 0 = Disabled

In 3D programming there are three matrices that are commonly used to transform objects on the screen. They are View, World, and Projection matrices. This setting does just what it says, meaning it enables vertex blending to use Projection matrices. This setting does not seem to affect performance in current games, but has been demonstrated to improve performance in the Skinned mesh DirectX8 demo. For now it is disabled by default (7020 drivers at least), and enabling this feature doesn't seem to have any real benefit at this time.

-VertexShaderVersion: Settings: 10 = Enabled, 0 = Disabled(?)

When set to 10, this value enables the DirectX 8 Vertex Shader capabilities of the Radeon. A couple of sites have reported that current drivers exhibit serious problems when accelerating many vertex shader functions. I will update as I get a chance to test some vertex shader demos.

-VolTxEnable: Settings: 1 = Enabled, 0 = Disabled

Enables Volume Textures. May not work with early driver revisions.

-Vsync: Settings: 1 = Enabled, 0 = Disabled.

Enabling this function synchronizes the display of frames from the Radeon to the monitor refresh rate. This stops images from "tearing" when rapidly moving or panning the view, but it also caps the rendering speed. Disabling Vsynch will improve performance, but if the image tearing is distracting then re-enable Vsync. This setting can be modified from within the D3D tab of the ATI advanced display properties.


Of course, to muck around with some of these things, you're going to want a dedicated tweaking program. Here is a list I have compiled, which I will make sure grows, and grows, and grows - to become THE place to find your tweakin' toolz. Enjoy! I'll be including doco on some of the more obscure options within these programs soon.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #7 on: August 05, 2004, 12:36:29 AM »
RECOMMENDED TWEAKING PROGRAMS
---------------------------------------

RivaTuner(Nvidia) @ http://www.guru3d.com/

Rage3d Tweaker (ATI) @
http://www.rage3d.com

Powerstrip (Nvidia/ATI) @
http://www.entechtaiwan.com/

NVMax (Nvidia) @
http://www.nvmax.com/start/load.nsf/data.shtml

ATuner (Nvidia) @
http://www.3dcenter.org/atuner/index_e.php

RefreshForce (Nvidia) @
http://www.etplanet.com/download/ati_nvidia.shtm

X-Bios Editor (Nvidia) @
http://www.etplanet.com/download/ati_nvidia.shtm

NVHardpage (Nvidia) @
http://www.etplanet.com/download/ati_nvidia.shtm


Geforce Tweak Utility (Nvidia) @
http://www.guru3d.com/geforcetweakutility/

RadLinker (ATI) @
http://www.etplanet.com/download/ati_nvidia.shtm

Radeonater (ATI) @

http://www.etplanet.com/download/ati_nvidia.shtm

Geforce AA Set (NVIDIA. Note this is for older Geforce 256, Geforce2 & 3 video cards) @

http://www.rivastation.com/go_e.htm

Radeon Tweaker (ATI) @

http://www.rivastation.com/go_e.htm

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #8 on: August 05, 2004, 12:37:29 AM »
Overclocking: the realm of the elitist snob? Not really. Most people with half a brain for computers and half a ball for tweaking get around to overclocking their CPU or graphics card after a while. Gone are the days of cranially magnified nerds hidden in underground bunkers, learning the secret arts of the front side bus. Overclocking is there for just about anyone to enjoy. It's all about the challenge of squeezing more out of your hardware for no extra cost. Sometimes you win out and gain colossal performance increases; sometimes you just blow your hardware to bits in a flurry of heat and copper.

Because of the fine line between one and the other, I have to put a little note in here about assuming absolutely no responsibility for damage to hardware as a result of this guide. Overclock at your own risk, not mine. Remember kids, overclocking voids the warranty, and warranties are nice things to have, right?

Now that all that's out of the way, let's get down to business. This section is going to teach you a little about how to overclock your video card: the dos, the do-nots and all that. It's not here for the veterans; it's here for the beginners. With that in mind, let's get our hands dirty in some thermal-pasted joy.


GET THE TOOLS

First up, you'll probably need a third-party utility for pushing up the speed of your core and memory. Most recent driver sets from ATI and Nvidia contain a built-in overclocking function within the drivers themselves (to check this, have a look in Display Properties/Advanced/Clock Rate and see if you can change the speeds). Most people prefer to use a third-party tool because of the interface and extra tweaking features that come with the program. If you don't know where to find them, check the "BEGINNERS guide to tweaking your graphics card" thread at http://www.atomicmpc.com.au/forums.asp?s=2&c=7&t=3253 - I've included a list of Nvidia/ATI programs for your consumption.

Personally, I'd stick with something like PowerStrip, which caters for any chipset and has some good features apart from the obvious. It is a generic utility though, so it doesn't have some of the more specific features that an ATI-specific or NVIDIA-specific tool may have. The choice is ultimately yours. Just experiment till you find the one that suits your needs best.

It's great to have a tool to overclock with, but what about something to test your efforts? Most overclocking you do won't give a massive real-world performance difference in games, so you'll need a benchmarking program to accurately reflect any increase in the power of your vid card. For the purposes of this guide, I've chosen 3DMark2001SE. Available from http://www.madonion.com or a million different cover CDs, 3DMark2001SE is widely known and respected as a benchmarking tool. Despite being overtaken by 3DMark03 and Aquamark for DX9.0 testing, it remains an excellent gauge of your success or failure with overclocking.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #9 on: August 05, 2004, 12:39:26 AM »
MEMORY, CORE & EVERYTHING IN-BETWEEN

The CORE: Also known as the clock or GPU speed, the core doesn't seem to take as kindly to being pushed higher than its default settings as the memory on a video card does. Heat is the main adversary in your quest to push the core speed higher. Try to find out what the micron size is for your video card's core; the smaller this number, the better the core will overclock. For example, a Radeon 9700pro runs on a .15 micron core, so the 9600pro actually allows more headroom for overclocking due to its smaller (and therefore cooler) .13 micron core. Head even further back in time and older cards like the Geforce2 GTS Pro run on a .18 micron core, which is harder to push beyond its defaults.

The fabled .13 micron fabrication process was heralded as a major breakthrough in the chip-making industry. First onto the market was SiS with the .13 micron Xabre card, but its poor performance left little room for praise. ATI was the first to release a strong-performing card on .13: the much-heralded 9600pro. I've used it in several examples here because it is a great overclocker and worthy of mucking around with. Now you can find a range of .13 micron cores on the market, including the new 9600XT and FX5700s. A glorious age for those who want to push the speed barrier a little.

Don't expect to push the core too high on stock cooling; an extra 30-50MHz or so should be a good result. In saying that, there are exceptions to the rule. Several models of 9600pro have been able to go as high as 520MHz (from a 400MHz default) on stock cooling. Truly impressive. Increasing the core speed will increase the raw power and fill rate of your video card. If you want to work out the fill rate in giga-texels, there's an extremely rough equation that isn't exact, but helps to show what kind of improvement you're getting from an overclock:

Core speed * (number of pipelines * maximum simultaneous textures per pipeline)

So for example: 400MHz * (8 * 1) = 3200 mega-texels = 3.2 giga-texels

The complete formula for working out fill rate is much more complex, but you can use this to get a basic idea of your fill rate, and of the improvement you gain through overclocking. So let's bump up our theoretical core speed a little and see what happens.

Default: 400MHz * (8 * 1) = 3200 mega-texels = 3.2 giga-texels
O/clock: 430MHz * (8 * 1) = 3440 mega-texels = 3.4 giga-texels

There ya go! An improvement in theoretical numbers; feels good, eh?
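
If you want to play with those numbers yourself, here's the same rough estimate as a few lines of Python. It's just a restatement of the formula above, not an exact model of real hardware:

def fill_rate_gtexels(core_mhz, pipelines, textures_per_pipeline=1):
    """Rough theoretical fill rate in giga-texels per second."""
    mega_texels = core_mhz * pipelines * textures_per_pipeline
    return mega_texels / 1000.0

print(fill_rate_gtexels(400, 8))  # 3.2  (default clock)
print(fill_rate_gtexels(430, 8))  # 3.44 (mild overclock)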

The MEMORY: The memory on a video card is a little kinder to the overclocker, and can usually be pushed further than the core clock. There are a few things to keep an eye out for when pushing your memory speed up. The first is its "ns" value, for example 2.8ns or 3.3ns. There's some information in the "BEGINNERS guide to tweaking your graphics card" about ns ratings for RAM. Simply put, the lower the number, the better the transfer rate of your RAM, as the ns rating refers to the timing of the RAM in nanoseconds. Use the following formula to work out the MHz speed of your RAM:

1000 / ns rating = MHz speed

For example: 1000 / 2.8ns = 357MHz

NOTE: There are always slight variations in formulas like these; for instance, my PC states that the above 2.8ns RAM actually runs at 351MHz, not 357MHz, but really it's about getting a general idea you can work with. So we know it's around the 350-360MHz mark, which is good enough for most purposes.
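
For convenience, here's the same conversion in Python (a ballpark figure only, as the note above says):

def ram_mhz_from_ns(ns_rating):
    """Approximate rated memory clock in MHz from the chip's ns timing."""
    return 1000.0 / ns_rating

print(round(ram_mhz_from_ns(2.8)))  # ~357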


You'll often find that the RAM on your video card is actually running at a speed less than the maximum rated speed for that RAM module. For instance, someone could have a video card whose RAM speed is 300MHz (or 600MHz DDR), while the actual maximum rated speed for that particular RAM module could be 350MHz. To find out which RAM modules you have and their maximum speed, check the actual memory chips on your video card. There should be a maker such as "Samsung" written on them, and a serial number, e.g. NSA-9238-1330. Take your newfound information and visit the vendor's website; you should be able to find the maximum rated speed for the modules, which gives you a good place to start with your overclocking attempts. After all, if your RAM is running under its maximum rated speed, you shouldn't have too many problems taking it to that speed through overclocking.

Bumping up your memory speed should give you a greater increase in performance than increasing the core speed, as memory bandwidth is of more importance overall than core clock speed. Hey, guess what? It's time for some more theoretical number crunching! Video card memory controls the bandwidth of the card - how much data it can squeeze onto your screen at any one time. This equation, like the one above, is not very accurate when it comes to giving exact bandwidth figures, but it can be used to give an overall picture of how much of a bandwidth increase you get from an overclock:

Memory bus width * effective memory speed / 8
(Note: "effective" speed is your memory clock * 2, so a 200MHz memory clock is 400MHz effective)

So for example: 256 * 700MHz / 8 = 22,400MB/sec = 22GB/sec bandwidth.


That example was for a Radeon 9800pro - not too shabby at 22GB a second. Now let's check out the little brother, the 9600pro.

128 * 600MHz / 8 = 9600MB/sec = 9.6GB/sec bandwidth

As you can see, memory bandwidth differences between mid and high range cards vary much more than the core clock speeds do. Let's take a look at how a mild overclock would affect the figures for a 9600pro.

Default: 128 * 600MHz / 8 = 9600MB/sec = 9.6GB/sec bandwidth
O/clock: 128 * 640MHz / 8 = 10240MB/sec = 10.2GB/sec bandwidth

Oh yeah, love that bandwidth increase; much fanfare all around.
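
And the matching bandwidth estimate in Python, again just the rough formula from above, taking the bus width in bits and the effective memory clock in MHz:

def memory_bandwidth_gb(bus_width_bits, effective_mhz):
    """Rough theoretical memory bandwidth in GB/sec."""
    mb_per_sec = bus_width_bits * effective_mhz / 8.0
    return mb_per_sec / 1000.0

print(memory_bandwidth_gb(256, 700))  # ~22.4 (9800pro-class card)
print(memory_bandwidth_gb(128, 600))  # 9.6   (9600pro at default)
print(memory_bandwidth_gb(128, 640))  # ~10.2 (9600pro, mild overclock)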

So now you understand your video card more intimately. You've shared a few candlelit dinners together, held hands and walked along the beach at sunset. Now it's time to overclock the buggery out of it!

OVERCLOCKING THE BEAST

First things first! Run the FULL benchmark in 3DMark2001SE to get the score for your card at its default settings. This will give you an idea of what you're improving on.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #10 on: August 05, 2004, 12:40:11 AM »
Before running any benchmarks, make sure of the following:

1. You have the latest drivers for your video card.
2. Your motherboard drivers/BIOS are up to date.
3. You have all service packs and hotfixes for your O/S.
4. You have optimised all BIOS/Display Properties settings (if you're unsure of how to do this, please check the BEGINNERS GUIDE TO TWEAKING YOUR VIDEO CARD for further information). See: http://www.atomicmpc.com.au/forums.asp?s=2&c=7&t=3253
5. You have closed ALL applications not required to be running, for example ICQ, MSN or Outlook.

Now that you've run your default test, jump into your overclocking program.

If you have your third-party utility open, or even if you're using the driver tools through Display Properties/Settings/Advanced/Clock Rate, you should be presented with two slider bars: one for the clock and one for the memory. Welcome to the place where you'll do most of your fiddling.

So do you just move the sliders really high, click "APPLY" and then you have the best graphics card on the face of the planet? Hell no. What you would end up with is a melted glob of silicon dripping over your motherboard. The best and safest way to overclock is in increments. I'm talking about tiny increments, like 10MHz at a time. Some people would say 15MHz at a time, but really 10MHz will give you a more detailed idea of when you hit your overclocking ceiling (when bad things start to happen).

Start by bumping your core speed up by around 10MHz and applying the change.

Now open your benchmark tool and disable all but one test. Keep the "High polygon count, 1 light" test enabled. This test will be enough to show up any artefacts that might crop up, without having to run all 16 or so tests in 3DMark2001SE.

Run it and check for artefacts. (Artefacts are glitches in the rendering of scenes and objects during benchmarking/gaming. They indicate that the graphics hardware is clocked too high and may be overheating. If you start to see objects wildly distorted or polygons where they shouldn't be, it's probably the core overheating. If you see white dots flickering all over the screen, it's more than likely memory heat.)

NOTE: Rather than doing this, you can download specific artefact-testing programs, designed to run a series of tests and check for graphical glitches. These might suit your purposes better. Linkage coming soon. (Some new driver sets, such as the latest Omega drivers, actually come with a built-in artefact-testing program as part of the suite. You will have to make sure you choose to install this component when installing the drivers.)

Link to artefact tester program (stand-alone):
http://www.majorgeeks.com/download4109.html

If you don't see any problems, raise the core speed by another 10MHz and continue your testing loop until you spot a problem. A problem could be anything from the benchmarking tool hanging or freezing, to artefacts, to the PC rebooting itself. That's a sign you've gone too far.

Once you've reached a safe limit with the core speed and you've run the benchmarking tool a few times, leave that setting where it is and move onto the memory. Repeat the same steps until you hit the ceiling for your RAM modules.

You may find that raising the core to its ceiling and then raising the memory to its maximum will allow you to raise the core speed again by a small margin. So once you've got both sliders as high as they can go without any glitches, try the core again and see if you can move it any further.
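
The whole core-then-memory procedure boils down to a simple loop. Here's a conceptual sketch of it in Python; set_core_clock, set_memory_clock and run_artifact_test are hypothetical stand-ins for whatever your tweaking utility and artefact tester actually do, so treat this as pseudocode rather than something you can point at your card as-is:

STEP_MHZ = 10  # small increments, as described above

def find_ceiling(current_mhz, apply_clock, run_artifact_test):
    """Raise a clock in small steps until the artefact test fails, then back off one step."""
    while True:
        candidate = current_mhz + STEP_MHZ
        apply_clock(candidate)
        if not run_artifact_test():     # glitches, hangs or a reboot = gone too far
            apply_clock(current_mhz)    # drop back to the last known-good speed
            return current_mhz
        current_mhz = candidate

# Hypothetical usage: core first, then memory, then re-check the core.
# core = find_ceiling(default_core, set_core_clock, run_artifact_test)
# mem  = find_ceiling(default_memory, set_memory_clock, run_artifact_test)
# core = find_ceiling(core, set_core_clock, run_artifact_test)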

So let's take a look at a typical overclock I managed out of my Radeon 9600pro. Keep in mind that the 9600pro runs as cool as a cucumber due to various factors, so don't always expect an improvement like this from every card. There are a lot of more powerful cards out there that don't run on the .13 micron core.

Default: 400MHz clock (400MHz * (4 * 1) = 1600 mega-texels = 1.6 giga-texels)
         600MHz memory (128 * 600MHz / 8 = 9600MB/sec = 9.6GB/sec bandwidth)

O/clock: 500MHz clock (500MHz * (4 * 1) = 2000 mega-texels = 2.0 giga-texels)
         680MHz memory (128 * 680MHz / 8 = 10880MB/sec = 10.9GB/sec bandwidth)

Not a bad result, especially from the bandwidth point of view.

You should now have a card overclocked on both the memory and core settings. Now run 3DMark again with all the tests enabled and check your score. Hopefully you'll have a measurable increase in 3DMarks. Congratulations, a few hundred points' difference is worth all that crazy effort and risk! If you really want to be sure about the stability of your newly tricked-up card, it would be worth your while to loop the 3DMark test several times over. You may have run it once or twice with no problems, but graphics hardware sometimes needs a little time before it gets worked up enough to crap itself due to overclocking. You may also find that although 3DMark2001SE runs all tests multiple times without any glitches, an actual game engine will produce many. The best solution? Use a combination of "real world" and "benchmark" tests to complete the checking of your hardware. More on that below!

REAL WORLD Vs BENCHMARKING

This is one of the single most important questions facing the avid overclocker of video hardware. What's more important to test with: real game titles or benchmarking programs? The answer is that both are important, for different reasons. Let's take a look at some different angles of testing and whether you should look for answers in real-world or synthetic benchmarking.

I want to see the improvement my efforts have made!

In this case, most people will agree that you're best off running a synthetic benchmark to test the results of an overclocking or tweaking experiment. Synthetic benchmarking tools return an exact numerical score that accurately reflects any change to the bandwidth or fill rate of a graphics card. While the score might only differ by 100 or 200 points, that may translate to just a few measly frames in an actual game title. If you tried to run a game title to test your results, chances are you wouldn't notice the difference between 40fps and 43fps. If you somehow manage to drastically improve your synthetic benchmark score, from say 11,000 3DMarks to 14,500 3DMarks, you might find testing with a game title such as UT2003 to be worth it, as you'll see a real performance increase in framerate.

What's the best way to test for artefacts: synthetic, real world, or an artefact tester?

The easy answer: all three. There are many different 3D engines out there that all utilise different aspects of a 3D video card. The only way to comprehensively test for glitches is to use a wide variety of testing methods to ensure stability. I can't stress that enough. It might be tedious waiting for 3DMark to finish for the 10th time, but wouldn't you rather your hardware was running OK than have it crash during an online game? Also remember that glitches aren't just limited to funny polygons appearing on the screen. They can be white or coloured speckles, machine lock-ups, games crashing back to the desktop - a whole variety of different buggy happenings.

Offline Roscoroo

  • Plutonium Member
  • *******
  • Posts: 8424
      • http://www.roscoroo.com/
Beginners guide to video cards
« Reply #11 on: August 05, 2004, 12:40:53 AM »
OH BABY, YOU'RE SO DAMNED HOT

Once you've hit the maximum possible MHz for your clock and memory, you'll start to feel a little depressed. Where do you go from here? Well, the answer's simple really. The main reason you reached that particular ceiling with your card is heat. To get around that problem, you'll need a custom cooling kit to replace your standard HSF. I could also mention ramsinks here, but I won't recommend them: they are basically money wasters that only make your card look prettier. That doesn't mean you shouldn't get them, but they won't allow you to push the memory much higher.

Your standard HSF is usually held on by two plastic clips through the PCB, which, when squeezed from the underside of the card, will allow the HSF to pop off. There are other methods of mounting a HSF, but that seems to be the most common method card makers use. I won't go into detail about the removal process; there are several sites around which can provide a guide to doing this, with pictures and all! We love pictures, don't we? Now, a good video card will have thermal compound between the HSF and the GPU, so you'll have to scrape that off (gently) before mounting a new HSF. There are, however, several incredibly dodgy brands out there that have no thermal material between the heatsink and the GPU. This is bad. Even if you don't want to install custom cooling, I'd recommend removing the standard HSF and applying some thermal paste before mounting it again. Anyway, on with the show. Once you've removed the old HSF, you spread a nice thin, even layer of thermal paste onto the core and mount your custom cooling device. It's all pretty straightforward; if you've ever done it with a CPU, you should be fine doing it to a video card. From my experience, I've mounted custom cooling on my Ti4200 and my R9800pro. For the Ti4200 I'd recommend Thermaltake's Copper VGA cooler, which did an excellent job. For the high-end Radeon, you'd be best with the Zalman heatpipe, a very slick, silent cooling solution which uses two massive heatsinks joined by a copper heat-pipe that sandwich the card between them.

If your card has a passive cooling solution (a heatsink without a fan) then I'd recommend you don't even bother trying to overclock. Change the cooling to a heatsink + fan and then give it a shot, as passive cooling isn't really designed with overclocking in mind.

A LITTLE NOTE ABOUT KILLING YOUR VIDEO CARD

Overclocking IS a risk, there's no doubt about it. There's a slim chance that you can burn the memory out and a far greater chance that you'll fry the core. In saying that, I've overclocked many CPUs and many graphics cards in my time. In my career as chief silicon burner, I've killed about three CPUs and zero video cards. There have been times when I pushed my Ti4200 well and truly beyond the barriers it was designed to stick to. It caused heaps of reboots, artefacts and glitches, but upon returning to default speeds it always came up OK. It seems that video card overclocking is a little safer than its CPU equivalent. That doesn't make video hardware invulnerable to destruction, though.


Written by  Amiga4Eva  @here

Offline Defiance

  • Nickel Member
  • ***
  • Posts: 424
Beginners guide to video cards
« Reply #12 on: August 05, 2004, 06:28:46 PM »
Hiya,
Nice detailed layout wtg

Offline Kev367th

  • Platinum Member
  • ******
  • Posts: 5290
Beginners guide to video cards
« Reply #13 on: August 08, 2004, 11:58:52 AM »
Would only disagree with one thing -
Fastwrites - disabled, always.
No longer used by modern cards; in fact it can cause instability. ATI themselves recommend fast writes off.
AMD Phenom II X6 1100T
Asus M3N-HT mobo
2 x 2Gb Corsair 1066 DDR2 memory