Aces High Bulletin Board
General Forums => Hardware and Software => Topic started by: 1776 on August 28, 2001, 01:17:00 PM
-
Is 5 GHz far off? http://www.anandtech.com/cpu/showdoc.html?i=1525
-
every 18 months speed doubles...
SKurj
-
Anandtech also has an article up about the 2 GHz P4s. Not surprisingly the Athlon still wins most of the benchmarks, but the majority of computer users only look at clock speed...
On the plus side Intel is finally releasing the 478 pin processors. That's a good thing for future upgradability.
I'm still waiting for the release of AMD's Athlon 4 desktop CPUs paired with Nvidia's nForce to decide which platform is best.
It's going to be an interesting Q4 for sure. :)
-
Originally posted by SKurj:
every 18 months speed doubles...
SKurj
So that means 8 years from now the desktop PC will average 64 GHz!
:cool:
[ 08-28-2001: Message edited by: jihad ]
-
Our roadmap is 10 GHz by 2005.
3 GHz will be pretty easy... beyond that we're not sure exactly what we are going to do.
AKDejaVu
Intel - Components Research
-
DejaVu-
Are you at Ronler Acres, possibly the Aloha site? I spent a few years at each as a contractor.
-Greese
-
Yep.. Ronler Acres.. was at Aloha until D1B was built. Moved over to Ronler in 96.
What company were you with?
AKDejaVu
-
I worked for VWR Scientific, the company that delivers all the disposable cleanroom garments. I actually was the "find it and get it" guy for about a year at Ronler before I moved to LA. I miss that site, and the people at it. I worked in and around the Intel sites in Hillsboro/Aloha for about 5 years. I started back at Aloha before they closed FAB4. They don't build 'em like that anymore...
Were you familiar with any of the chemical engineering guys? Those were the people I came to be most familiar with (Skip Friesen and Glen McWhirter), but I knew a lot of people from a lot of areas. Usually people charged with finding out where to get strange objects they couldn't find anywhere else...
-Greese
-
Jihad, look back over the development of the PC :)
SKurj
-
every 18 months speed doubles...
This is a very common rule. Anyone know who the "law" is named after?
AKDejaVu
-
I used to know it damnut!! But alas I have forgotten...
SKurj
-
Fatty's Mom???
-
Moore's law, and it has held for thirty years.
When I went to college in 1997 the fastest computer available was 266 MHz; now it's 2 GHz :)
4 years ~ 48 months ≈ 2.67 doublings
266 * 2^2.67 ≈ 1700 MHz
we're actually still beating it by a bit :)
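If anyone wants to play with the projection themselves, it's a one-liner. This is just a toy Python sketch of mine; the starting speed and the 18-month doubling period are the same numbers quoted above:

    # Rough Moore's-law style projection: speed doubles every 18 months.
    def projected_mhz(start_mhz, months, doubling_period=18.0):
        return start_mhz * 2 ** (months / doubling_period)

    print(projected_mhz(266, 48))    # ~1690 MHz after 4 years, vs the 2 GHz parts out now
    print(projected_mhz(2000, 96))   # ~80000 MHz if the trend somehow held another 8 years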
-
Hey, what would be the starting salary for an EE guy? I was thinking about doing EE in college.
-
Hey DejaVu, have you looked into some of the research on crystalline nanotubules yet? The plan is to build single-electron switches by basically allowing tunneling between adjacent cells. To be frank, I've never fully gotten a handle on electron tunneling concepts, so it seems kind of like a shot-in-the-dark attempt in my mind, but maybe you see something there... :)
I'm guessing your research at this time primarily involves SOI techniques and high-K dielectrics. What are you guys looking into at this point? 10 GHz seems to be a pretty lofty goal with current technologies. 5 GHz should be easy enough with the P4 design, but that means the ALUs are going to be running at 10 GHz, which ought to be interesting.
I don't think Moore's law is going to hold much longer. My opinion is that the power dissipation combined with smaller and smaller transistor sizes is going to be a very large problem, especially considering the current trend in increasing transistor count dramatically for additional cache memory etc. One thing is for sure, heatsink design is going to have to improve.
---------------------------------------------
Rendar, EE salaries depend highly on where and who you work for. In Oregon $45,000 - $70,000 is about the average, but some very senior guys can make 1/4 mil a year with the stock options and benefits. I definitely wouldn't suggest taking EE in college if you are just doing it for the money. It's HARD work. After your first year and a half you have no real life to speak of.
I'll be going into my final year at the end of this month. (Oregon State University) Last year I spent about 6 - 10 hours PER DAY between studying, class, and working near the end of the term. If you do take EE you are going to discover just how large a field it is and how hopelessly little you can hope to learn in 5 or even 8 years. You really have to specialize in one area and try to learn all you can there.
Last year I took classes dealing with:
Electrical and magnetic fields
Digital Circuit design
Analog Circuit design
Microprocessor Architecture
Wave propagation and transmission lines
Semiconductor materials
Electric motors and transformers
Thermodynamics
Anthropology ( LOL, it's a requirement. I guess it's supposed to improve your attitude. :D )
You can see just how much they cover in just one year, and all of it requires pretty intense math and computer skills. (Except anthropology. ;) ) Once you get used to doing that much math it's actually pretty interesting, and it certainly makes you appreciate just what's been achieved in the past 100 years or so in Electronics.
If you want to know more about OSU's engineering program, send me an e-mail and I'd be glad to tell you more: bloom@engr.orst.edu
:)
-
Bloom, check your private messages, I have a business proposition for you. ;)
-
I will be studying Engineering at college...I like EE, Computer, and Mechanical. :)
-
Hey DejaVu, have you looked into some of the research on crystalline nanotubules yet? The plan is to build single-electron switches by basically allowing tunneling between adjacent cells. To be frank, I've never fully gotten a handle on electron tunneling concepts, so it seems kind of like a shot-in-the-dark attempt in my mind, but maybe you see something there...
Thought about bs'ing to make it sound like I knew what you were talking about, but then thought better of it.
I do know we are doing work with fiber optics in microprocessors, but it is crude and very young in its development. There is also talk of crystalline mass storage devices, and it has been proven to work at a transistor (or whatever they call the cell) level... but a real application is questionable.
One thing that is popular right now (already used in the industry in non-mpc chips) is 3D manufacturing: actually placing transistors on top of transistors. I've seen some work on it and it's quite fascinating. It really doesn't help in regards to speed, but it does offer the opportunity to do things such as place a video chipset inside of your CPU packaging.
I'm guessing your research at this time primarily involves SOI techniques and high-K dielectrics. What are you guys looking into at this point? 10 GHz seems to be a pretty lofty goal with current technologies. 5 GHz should be easy enough with the P4 design, but that means the ALUs are going to be running at 10 GHz, which ought to be interesting.
I work with spin-on dielectrics. It's actually at the other end of the spectrum from what you are thinking. Our group is called the "Low-K" group for a good reason ;) We work more on the top portion of the processor (well above the transistors).
There is a clear path to 10 GHz. Many chemical manufacturers are working toward that quite nicely. I don't foresee a problem there. Unfortunately, we are running into various issues such as a certain type of molecule/atom being too thick... problems for which there seems to be no solution.
I do believe (my opinion here) that we will eventually run into a hard barrier for speed. At that time, we will have to either find a completely different medium for manufacturing processors, or change the role processors play in computing. I believe Intel is working to cover both angles... but only time will tell on that.
I don't think Moore's law is going to hold much longer.
You aren't the first to say this. Many have been saying it since Gordon first made the statement.
My opinion is that the power dissipation combined with smaller and smaller transistor sizes is going to be a very large problem, especially considering the current trend in increasing transistor count dramatically for additional cache memory etc.
It's my job to take your opinion on this subject and shove it up your posterior. So far, our group has been doing that very successfully. I worked for lithography and was there when we proved quite conclusively that you could pattern a line/trench smaller than the wavelength of light being used to expose it. :D
Smaller transistors are not a problem in regards to heat; they're a benefit. You decrease the length of your gate, reducing the resistance in the line. The obvious paths here are finding less resistive chemistries to make your gates out of and minimizing the amount of current used to run the devices. This is an area where Intel kicks AMD's ass. Unfortunately, cooler does not translate to faster. But when it comes to the laptop market, we will keep a firm handle on things.
The P-4 is in its infancy. I guarantee that reducing power consumption is one of the primary objectives in its future designs.
One thing is for sure, heatsink design is going to have to improve.
I have a tendency to think that power consumption is the area that has to improve. AMD is pushing the limits as it is right now. Intel did it too (quite to their demise) with the 1 GHz processor. Intel has since backed off a tad with the P-3, while AMD is still riding the edge.
I can't help but laugh when I read threads about heatsinks in this forum. The things people are starting to accept as commonplace are incredible. Having to sand the back of your processor is not acceptable (IMHO). Having your PC shut down because you used 1 micron too much heat-transfer goo is not acceptable. Requiring 18 fans to keep a processor cool is not acceptable. Both companies need to work on that first and foremost.
AKDejaVu
-
Bloom? Ya made my head explode :(
Anyone besides me feeling obsolete? :)
-- Westy
-
I recommend some of you guys read:
http://www.amazon.com/exec/obidos/ISBN%3D0670882178/102-5545064-4020148
Agree or disagree, there are some interesting theories in there. It's not highly technical, just more of a prediction thing. Very interesting though.
-
When I went to college in 1997 the fastest computer available was 266 MHz; now it's 2 GHz :)
Damn, Zigrat, you make me feel old! The best we had when I was in school (also EE) was a 1 MHz 6809! Anyone care to guess the date?
Hint: We had a presentation my senior year by a guy from Intel on the new 186 architecture, and he wowed us with wet dreams of the 286. He even mentioned that somewhere off in the distant future there might even be a 386!
-
I'll see if I can round that up at a library Lephturn.
I don't really pay much heed to reviews though... especially when they call someone a visionary because of what he said yesterday in regards to tomorrow. "He is a visionary" had better have something to back it up other than a good read.
AKDejaVu
-
Feeling old? I started on teletypes and paper tape. The "Winchester" hard drives were just coming into commercial use, the 8-inch floppy had not even been invented, there was speculation that the new solid-state memory chips might become viable as a lower-cost alternative to iron core memory, and Intel had just started up and was talking about a new semiconductor processor all on one chip (the 4004).
-
All of you are wrong, check this out. 75 GHz... Wow: http://www.usatoday.com/life/cyber/tech/review/2001-09-04-motorola-chip.htm
In 3 years we should have 70+ GHz.
-
Unfortunately that Motorola article is too general to gain any real information from it. It's certainly interesting, but GaAs has been in use for a long time. Actually the coolest application I can see for this technology that they didn't mention is the possibility of a light-emitting panel. It would be necessary to get materials besides just GaAs to "stick" to silicon to get all of the available colors though. :) Blue and white LEDs use InGaN (indium gallium nitride). (In case you guys were curious, the white LEDs are just a blue LED with a "glow in the dark" phosphor coating. :D )
"75 GHz" looks like some number the press pulled off a press release to me. ;) It's going to take a LOT of work to get that far.
Some VERY interesting research I saw at OSU last spring involved molecular switches made of organic compounds. Basically they allowed a current to flow when exposed to certain wavelengths of light. The potential of these things is HUGE if they work as planned.
Jihad, I just checked my messages. That's very interesting, but WAY beyond me at this point. :( (My area is digital circuits.) I'll have to do some serious thinking (and pulling out the old textbooks) on this one. In the meantime let me try to dig up some good info for you. :)
-
this thread makes my head want to explode.
-
Here's that article I was talking about DejaVu: http://www.eetimes.com/story/technology/OEG20010822S0059
I agree with you that Intel is a long way ahead of AMD in process technology and power consumption. My dream right now is an Athlon CPU produced on Intel's .13 micron process. ;) AMD is working on an SOI .13 micron production line for their Barton core CPUs slated for late 2002, though.
---------------------------------------------
(And now I'm going to ramble totally off-topic, but I'm bored and you don't have to read it. :p )
I had a couple (I think) good ideas for products a few months ago that unfortunately I didn't do anything with. Just a week ago I saw two of them for sale. :( I might as well tell you what they were, since I don't have the resources to do anything with them. The first was (don't laugh) a purse light using a 3mm white LED sewn into the rim of a purse. I built a very crude prototype but didn't do anything with it. A couple days ago my mom told me she saw almost exactly what I built at JCPenney. The other idea was to build a replacement Maglite bulb with a white LED. I built one for myself. (Bright and minimal power consumption, not to mention much longer bulb life.) A few days ago I saw a white LED Maglite-type flashlight on a Snap-on truck.
Recently I've designed and built some pretty simple circuits for people as side projects this summer. A few of them would certainly sell if I were to produce them. The oddest circuit I designed for someone is a solid-state replacement for the sequential turn signal lights on an old '66 T-bird. :D (The original units were an electric motor that spun around in a HUGE box that took up 1/4 of the trunk.) The circuit I designed involved all of 5 $.50 chips for each side and just worked off the already existing turn signal flasher signal. I'm currently building a little circuit for a piece of artwork involving light and sound effects. ;) You guys wouldn't believe how many people want to make little lights flash on and off real fast. :D (<- hint: a 555 timer, 4017 decade counter, and LM7805 5V regulator will get you far.) The coolest project I did this summer was a security system for a Harley that was small enough to fit behind the battery (including siren) under the seat.
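If anybody actually wants to do the blinky-light thing, the standard 555 astable formulas are easy to sanity-check before you buy parts. This is just a quick throwaway Python script; the resistor and capacitor values are arbitrary examples, not from any particular project of mine:

    # Classic 555 astable (free-running) oscillator equations.
    # R1, R2 in ohms, C in farads; the values below are arbitrary examples.
    def astable_555(r1, r2, c):
        freq = 1.44 / ((r1 + 2.0 * r2) * c)   # output frequency in Hz
        duty = (r1 + r2) / (r1 + 2.0 * r2)    # fraction of each cycle spent high
        return freq, duty

    freq, duty = astable_555(r1=10e3, r2=47e3, c=1e-6)
    print(round(freq, 1), "Hz at", round(duty * 100), "% duty")   # ~13.9 Hz, ~55%

Feed that output into the 4017 and you can sequence or divide it down however you like.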
All of these circuits have come about totally by chance, really. For the past 2 summers I've worked at a local electronics parts store part time. (Norvac) We get people in nearly every day who want something that doesn't exist as a complete product, but would be relatively simple to build. Several of them have market potential.
That motorcycle security system has extreme potential IMO. Basically, the way I built it, it disables the ignition system if the bike is tilted too far off its kickstand for a settable time period. In addition it sounds a siren for several seconds. I also added cool little extras like a redundant 9V power supply backup. It does a lot of stuff, but all it is is one quad 555 timer (LM558) and a couple logic-level MOSFETs to handle the siren and a relay that disconnects the hot wire to the coil. All of this is in a unit 2" wide by 5" long by 1" tall, including the attached 105 dB siren. Total cost was about $10 US. I figure people would pay at least $75 for it. :)
(Edit: That crazy instant graemlins feature went nuts. )
[ 09-06-2001: Message edited by: bloom25 ]
-
Ah.. I hadn't heard of that particular technology Bloom... The problem isn't really creating smaller transistors, but rather integrating them into anything useful.
As for AMD and SOI on a .13 micron process... I'd venture to say you'll see them do .13 micron long before you see it on SOI.
It's funny this came up this week. We just did our Components Research "poster session" this week. It covers most of the projects we are currently working on. A few have been mentioned by others in this thread. I kinda wish I could say more about them... but such is life.
Oh, and on a side note... my first patent submission just cleared Intel legal and is on its way to the patent office. We aren't really sure just what we've patented, but it sounds really cool in legalese. :D
AKDejaVu
-
Jihad, I just checked my messages. That's very interesting, but WAY beyond me at this point.
Don't sell yourself short, this is actually such a simple concept and natural dynamic that when people see it they will slap their forehead and say "why didn't we think of that?"
If an uneducated dumb bellybutton like me can see the concept then I'm sure you could improve on it. :D
I found some of the info I asked you about last night and have drawn up some info and diagrams outlining the design; when I get the patent process underway I'll share some of it with you. ;)
-
Just heard from a little birdie that 3.6 GHz is doable by Q3 of '02.
AKDejaVu
-
Did I stumble into a foreign country here ???
Geez I wish I understood what you guys are talking about...
Makes my head spin...
:rolleyes:
BOOT
-
If you think your head is spinning, you should try sitting through some of the classes I'm taking this term. :eek: I usually end up with a migraine at the end of them.
I've done some more research into GaAs since the Motorola announcement a month ago. One problem they did not mention is that heat is harder to dissipate off GaAs than Si. There are some advantages to GaAs that tend to improve the situation, though. GaAs has an intrinsic carrier concentration of about 2.25x10^6/cm^3, as opposed to Si with 1x10^10/cm^3 (at 300K), meaning it's easier to control. This gets even better as temperature increases: at 400K silicon is up to around 8x10^12/cm^3 and GaAs is only 8x10^9/cm^3, which is better than Si at room temperature. This kind of nullifies the issue that GaAs is harder to remove heat from, though as clock speeds ramp up I still see heat as a very big issue.
The advantages of GaAs over silicon are enormous when it comes to the clock speed that can be attained. Even at light doping levels the electron mobility in GaAs is MUCH higher than in Si, due to the electrons' much reduced effective mass. My pocket reference book tells me 1360 cm^2/V-sec for n-type Si doped at 10^14/cm^3 VERSUS 8000 cm^2/V-sec with equal doping in GaAs, both at room temp. That means the electrons are moving about 6 TIMES faster in GaAs than in Si under fairly equal conditions. That means that if Intel could use GaAs right now with the P4 design, 10 GHz would probably be attainable today.
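To put those mobility numbers side by side, here's the little Python script I used; the field value is an arbitrary example, so only the ratio really means anything:

    # Low-field drift velocity v = mu * E, using the mobilities quoted above
    # (n-type, 10^14/cm^3 doping, room temperature). The field is a made-up example.
    mu_si   = 1360.0    # cm^2/(V*s), silicon
    mu_gaas = 8000.0    # cm^2/(V*s), GaAs
    e_field = 1000.0    # V/cm, arbitrary illustrative field

    v_si   = mu_si * e_field      # cm/s
    v_gaas = mu_gaas * e_field    # cm/s
    print(v_gaas / v_si)          # ~5.9, i.e. roughly 6x the electron velocity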
At this point I think Gordon Moore's statement will probably hold true for another 18 months, after that we'll just have to wait and see what happens. :D Once we do find out how to use GaAs (or SiC) we can expect clock speeds to take off once again.
(DejaVu, I was wondering how many Angstroms thick the dielectrics you're working with are, and what their K values are? If it's not a trade secret that is... I'm just curious to do some spice simulation runs to see what's possible. :) )
-
Another EE here :) Graduated Dec. '91 from WVU, mostly specializing in digital control systems and anything to do with computer systems/applications.
I then, by a blind fluke of fate, ended up going to grad school and getting my Masters in Environmental Engineering from Marshall University. Yeah, I know, strange combo, but it's a long story.
I now do environmental/pollution computer simulations for the government. It's actually quite fun :D I get to play with some REALLY big computer systems. Trying to talk my boss into building a nice 16-node Beowulf cluster right now.
I never really was into chip design, but I'm glad I still at least understand the technology/terminology that you guys are discussing, so I'm not too far out of the loop.
Cya at the Con !
-
Moore already adjusted his law.
There are now clauses and rules for it. I can't remember all of them, but a search on the web should show it.
-
That means that if Intel could use GaAs right now with the P4 design, 10 GHz would probably be attainable today.
Sorry bloom... that's just not true. You are focusing too much on one aspect and not really figuring in the rest. Each processor is basically two sections: the top end (metal lines and interconnects) and the transistors. In one area, we'd see a definite improvement... enough to totally melt the other area.
The 3.6 GHz is going to be achieved on bare Si. I do know that GaAs and SOI are being looked at, but the cost per unit involved with each goes up dramatically... especially on 12" wafers. I'm not sure if much GaAs processing is going on above 6" right now. TriQuint is one of the main movers in this area and they are still on 4" wafers.
At this point I think Gordon Moore's statement will probably hold true for another 18 months, after that we'll just have to wait and see what happens. Once we do find out how to use GaAs (or SiC) we can expect clock speeds to take off once again.
LOL! People have been saying that about Moore's law for 15 years now ;) It's our job to keep proving Dr. Moore right... and we do it well ;)
And I don't necessarily know that your two options for improving speed are really the only path. I do believe that after 10 GHz we'll have to go in a completely different direction... but I'm pretty sure nobody has any idea what that is right now. We have several things we're looking at, but the above are only two of them... there are many more options to explore.
DejaVu, I was wondering how many Angstroms thick the dielectrics you're working with are, and what their K values are? If it's not a trade secret that is... I'm just curious to do some spice simulation runs to see what's possible.
The current process of record (data is published) runs the ILD <inter-layer dielectric> anywhere from 1.2um to 3um with an effective K value somewhere around 3.6. I can't really say what K value we're working on for future processes... but it is considerably less than the above mentioned.
Besides, in addition to the K and thickness, you'd have to know the line pitch and via height. Remember, with copper we use a dual damascene process where we etch the vias, then we etch trenches and fill the whole thing in with copper. That means the ILD thickness is the height of one whole via/metal layer as opposed to being the distance between two metal layers.
AKDejaVu
-
Double E dorks. Real engineers build airplanes :)
-
You're right, I forgot totally about the interconnects and such. ;) (I work mainly with design, not layout so that stuff really doesn't enter my mind. :D )
3.6 is the Er value for regular old silicon dioxide, so that doesn't surprise me much. :)
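Just for my own curiosity, before firing up spice I'd probably start from a crude parallel-plate estimate like this Python sketch; the K value is the one DejaVu quoted, but the line height, spacing, and length are pure guesses on my part:

    # Very rough line-to-line capacitance, parallel-plate approximation only.
    EPS0 = 8.854e-12        # F/m, permittivity of free space

    k_ild      = 3.6        # effective K from the published process numbers above
    line_h     = 1.2e-6     # m, assumed line height (my guess)
    line_space = 0.5e-6     # m, assumed line-to-line spacing (my guess)
    length     = 1e-3       # m, 1 mm of interconnect

    c = k_ild * EPS0 * (line_h * length) / line_space
    print(round(c * 1e15, 1), "fF per mm of line (order of magnitude only)")

Knock the K down and the capacitance (and the RC delay along with it) drops proportionally, which I assume is the whole point of the low-K work.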
Oh, and Zigrat, you are totally wrong. REAL engineers build egos. :D
-
Originally posted by AKDejaVu
As for AMD and SOI on a .13 micron process... I'd venture to say you'll see them do .13 micron long before you see it on SOI.
Hehehehe... anyone read the news on this today?
MiniD
-
REAL engineers build egos.
and all this time I thought they only built cube-farms...
-
LOL, who dug this up? This is like going back in time to when tech companies made money! :D
Judging by what was posted above it looks like we did alright in our predictions. The Barton core Athlon XPs come out on the 10th of this month in the 2500+ and 3000+ speeds. They aren't using SOI technology though. I think Opteron (Hammer) will be the first AMD CPU to do that, and it's supposed to be out in April. I think Intel is supposed to launch the 3.2 GHz P4 in March or April as well.
-
A tidbit on SOI:
http://biz.yahoo.com/rc/030131/tech_amd_athlon_1.html
-
I read that the other day. It didn't surprise me too much really. There doesn't seem to be a whole lot going on as far as CPU releases until the second half of this year. From a practical standpoint, I can see why AMD would do this.
The way I see it, Athlon 64 and Opteron (Hammer) are a make-or-break type of product for AMD. Considering the fact that Hammer CPUs are built on a more expensive partially depleted channel SOI wafer, have a considerably larger die size, and have increased packaging costs (750+ pin chip), it makes some sense that AMD would want to push the existing Tbred 'B' and Barton cores to their limits. (About 2.5 GHz true clock speed.)
I also can't think of much of a reason to push 64 bit on the consumer market. The reality of the situation is that there are very few circumstances where a 64 bit processor would be required by most home users. (Very large data sets or extremely high precision mathematical computations.) Obviously it's worth some marketing value, but probably not much real practical value until software is written that can take advantage of it. (It's also worth considering that the much greater number of gates required for a 64 bit ALU and FPU, and the associated larger registers required by them, will also increase die size and thus costs.) Since Opteron is targeted toward the workstation/server market, it makes sense to launch it sooner.
Hammer does have some performance-enhancing features though. It adds SSE2 instruction support, which will certainly allow it to benefit greatly in applications now optimized for the P4. Its on-die memory controller will also increase performance by reducing memory latency. (It also reduces MB chipset northbridge costs significantly.) Since Opteron is supposed to launch at 1.4, 1.6, and 1.8 GHz true clock speeds, and a rating somewhere around that of a 3.2 GHz P4 for the 1.8 GHz model, it appears AMD will match the performance of the Intel Xeon processors available in that time frame.
-
I wasn't so much referring to the delay as to the cause of the delay. IBM went through a series of miraculous process disclosures, low-K spin-on dielectric and SOI being two thrown out in the mix. Since then, they've already dropped the low-K dielectric due to integration issues. I'm wondering if the SOI will go next due to scaling issues.
MiniD
-
3 GHz will be pretty easy... beyond that we're not sure exactly what we are going to do.
AKDejaVu
Intel - Components Research
________________________
:cool: watch what you say when you say it
only the paranoid survive..
regards,
andy and craig
-
Isn't Intel doing some 90nm runs this year?
-
I also can't think of much of a reason to push 64 bit on the consumer market. The reality of the situation is that there are very few circumstances where a 64 bit processor would be required by most home users.
I remember much the same thing being said when the i386 came out in the mid-80s.
Don't forget that it took 10 years for 32-bitness to take off in the consumer sector. The market is much more mature now, and will take advantage much more quickly.
For example, artists will quickly benefit from the ease of manipulating 64 bit colour. This will have a knock-on effect on games.
Anyone doing mathematical modelling will benefit significantly speedwise - and that means spreadsheet users.
People authoring DVDs will benefit from the increased memory space - imagine being able to hold an entire DVD in memory.
I hope to be able to buy a dual AMD system sometime next year, once the price has come down significantly.
-
Originally posted by Skuzzy
Isn't Intel doing some 90nm runs this year?
Intel to qualify 90-nm process by mid-2003 (http://www.siliconstrategies.com/story/OEG20030114S0044).
We've done 90nm. Making it what Intel terms a qualified process is another thing.
MiniD
-
You've got some good points there, qts. The point about memory addressing is quite valid, but the impact of being able to address more than 4 GB of memory would currently only benefit users of high-end servers and workstations. It will also require the use of a 64 bit OS. (There is already a 64 bit Linux port available and Microsoft is working on an x86-64 Windows version.) Maybe in 3 or 4 years 4 GB+ of memory will be in typical desktop PCs.
The only thing you're missing is that a 64 bit ALU or FPU is going to be inherently slower than a 32 bit unit. (Why do you think the P4 uses two 16 bit ALUs running at 2x clock frequency? Its FPU is also inferior in design to that of the Athlon. It is done to allow higher clock speeds and to reduce the number of gates.) A 64 bit carry-lookahead or carry-save adder, which are typical designs used for high-speed ALUs, would have up to 4x the number of gates of a 32 bit unit. It would not surprise me in the least if this is the reason that the Hammer CPUs are currently running no faster than 2 GHz. What a 64 bit FPU gives you directly is precision. Greater speed is not a given when moving from 16 bit to 32 bit, or likewise to 64 bit.
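To show where that "up to 4x" figure comes from, here's the naive gate-count model I have in mind: a flat single-level carry lookahead, counting every product term as one gate and ignoring fan-in limits, so treat it as a rough illustration rather than a real design:

    # Naive gate count for a flat (single-level) carry-lookahead adder.
    # Each multi-input product term counts as one AND gate; fan-in limits,
    # wiring, and block/group lookahead are all ignored.
    def cla_gate_count(n_bits):
        pg   = 2 * n_bits                  # generate (AND) + propagate (XOR) per bit
        sums = n_bits                      # one XOR per sum bit
        ands = n_bits * (n_bits + 1) // 2  # product terms across all carry equations
        ors  = n_bits                      # one OR per carry equation
        return pg + sums + ands + ors

    print(cla_gate_count(64) / cla_gate_count(32))   # ~3.6x, in the ballpark of "up to 4x"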
Unfortunately, I'm not a graphics guy, so I can't comment on the 64 bit color issue, but I would imagine that specialized software would have to be written to take advantage of it. I do know that currently 32 bit color does not actually use all 32 bits for color information. (If I remember right, only 24 bits are used for the color; the last 8 bits are used for alpha information.) There is one other big problem: most graphics cards only use a 10 bit D/A converter when generating an analog output, and most digital LCDs can only display 16 million colors (24 bit).
I hope AMD is successful with the Hammer. The Hammer processor is certainly as revolutionary in design as the 80386 or the original Pentium. Hopefully, like them, it will in time prove to be a success.
-
Originally posted by jihad
Bloom, check your private messages, I have a business proposition for you. ;)
I think OBL is looking for a few good men. Don't forget your FLUX capacitor.:eek:
-
Bloom, two points.
Firstly, with respect to DVDs, time is money, and memory is orders of magnitude faster than disk, so a business case can easily be made for large amounts of memory.
Secondly, with respect to colour, the issue is not how many colours end up being displayed, but the number of bits per channel (R/G/B) that the computer can manipulate. Very basically, the more bits, the less error. Carmack has written on this in relation to games, so I'll defer to him.
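A trivial way to see the error point, with toy numbers in a little Python loop that have nothing to do with any particular renderer:

    # Round-trip quantization error for one colour channel value in [0, 1].
    def quantize(x, bits):
        levels = (1 << bits) - 1
        return round(x * levels) / levels

    x = 0.123456789
    for bits in (8, 10, 16):
        print(bits, "bits -> error", abs(x - quantize(x, bits)))
    # 8 bits leaves an error on the order of 1e-3; 16 bits is around 1e-5,
    # which is why repeated blending and lighting passes drift far less.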
BTW did you note that the AMD64 was benchmarked at 1 GHz and matched a 2GHz P4?
-
Interesting point on the color issue qts, I hadn't considered that. (As I said, I'm not a graphics expert, though John Carmack certainly is.) :)
I certainly have high expectations of the performance from Hammer and haven't questioned that. I think I wrote something about this in the past on this BBS. (The original estimate, which I believe still stands, is that Hammer at 2 GHz will match a 3.4 GHz P4.) Opteron will launch at 1.8 GHz in April.
I do have reservations when it comes to the economics of producing Hammer processors. The way I see it, AMD does not yet have the "image" that Intel does. They cannot afford to charge the premium price for a desktop version of Hammer that its performance versus the P4 would likely justify. I honestly think that this is the reason AMD is launching the workstation/server Opteron processor in April and delaying Athlon 64 until September. It's only in that market segment that AMD would be able to charge a high enough price to offset the increased costs of the Hammer core versus the current Barton Athlon MP core. I have no doubt that Opteron will easily match and exceed the performance of the Xeons available in April. The Hammer architecture lends itself extremely well to a multiple-CPU machine. Also, that is the one market segment where 64 bit support will have an immediate impact; there is already an x86-64 Linux port available. A successful launch in the workstation market for Hammer should help to smooth its launch into the desktop market later in the year.