Originally Posted by Tatwi
That's the major reason why I'd like the industry to adopt a unified gaming platform. The platform could add more processing units, allowing upgrades to perform better without changing any of the fundamental architecture. So rather than having an Xbox, PlayStation, and Wii, you'd have one universal machine that all titles work on, and Sony/Microsoft/Nintendo/Sega/etc. would make games and peripherals to set themselves apart. There really isn't a technological reason why a system like this wouldn't be possible.
That would be a fantastic way to stagnate the industry.
You've created a system in which new features don't happen, and all you ever get is scalability. Go back in time and propose this in 1993, then come back in your time machine to the present to enjoy your Gouraud-shaded graphics with models that have over 9 quintillion polygons, and tell me how much more awesome that is than bump mapping, pixel shading, and physics-accelerated particle effects. Oh. And textures.
The alternative is to do what consoles do and force obsolescence on old hardware every 3-5 years. That goes over so well on $400 consoles as it is; how enthusiastically do you think PC gamers who sink $2k+ into their rigs would support it?
If you don't do that, then you're stuck supporting backwards compatibility, and now, welcome to the modern PC hardware environment.
Honestly, it's not *that* bad. There are two major hardware vendors, each on an architecture cycle of roughly 36 months (a new architecture every 36 months, with a mid-cycle clock bump or process shrink in between, giving us the familiar 18-month video card "generations"), and both run against the same standard APIs (Direct3D, OpenGL), which get extended as new features are incorporated and standardized. It's not like the mid-'90s, when half a dozen manufacturers were putting out cards with proprietary APIs (even if some of those were outstandingly awesome, like Glide).
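To put a concrete face on that "same API, extended over time" point, here's a minimal sketch in C against OpenGL of the feature-detection dance PC games do at startup. The API calls and the GL_ARB_multitexture extension are real, but the surrounding structure is illustrative, and it assumes a GL context has already been created through the platform's usual windowing code.

```c
/* Sketch: probing a shared, extensible API (OpenGL) for an optional
 * feature at runtime. On Windows, include <windows.h> before <GL/gl.h>. */
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Assumes a current GL context already exists; context creation is
 * platform-specific and omitted here. */
static int has_extension(const char *name)
{
    /* glGetString(GL_EXTENSIONS) returns a space-separated list of
     * everything this driver/card pair supports beyond the baseline. */
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

void pick_render_path(void)
{
    /* GL_ARB_multitexture is a real, era-appropriate extension;
     * the fallback branch is the part that matters. */
    if (has_extension("GL_ARB_multitexture"))
        printf("using single-pass multitexturing\n");
    else
        printf("falling back to multi-pass texturing\n");
}
```

That's the mechanism that lets one API span a decade of cards: new capabilities ship as extensions, games probe for them, and older hardware still gets a working baseline path.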