2012-08-19, 04:09 PM | #121
Corporal
AMD cards have another advantage: in general you can assume you get more performance for your money. They have fewer advanced features, though, and people knew about that trade-off when they bought them. They knew their cards could not process particles, flags, wind, cloth and so on through PhysX, and that NVIDIA cards had the potential for that.

Whining now about their own decision and asking SOE to write its own physics engine (lol), or use a worse one, or probably none at all (as in, no banners, no particles, no cloth moving in the wind), when a toolkit that has been in development for years is already available, is outright ridiculous. You know what? I want my Prowler to sport our future Outfit's flag. I want cloth banners hanging down the bases' walls, trees moving in the wind, water trails, splashes, leaves rustling, a more alive environment. Who would not want that? How can you defend leaving such features out, or settling for mediocre ones? Developing an in-house physics engine could take years just to reach the level of the PhysX toolkit. To anyone who wants to respond to this, first answer yourself one question: it was your choice, wasn't it?

Last edited by JoCool; 2012-08-19 at 04:16 PM.
2012-08-19, 04:22 PM | #123
Staff Sergeant
I'm not sure how people get the idea that PhysX on the CPU is poorly implemented.

Did it ever occur to you that the tech in Nvidia's GPUs comes from Ageia's card, which was developed specifically to run physics code fast? A processor built for a single task is far more efficient at it than a general-purpose processor like a CPU; that is also why we have dedicated GPUs in the first place. So could it be that the parallel processing power of the GPU, combined with the Ageia tech, simply lets it run more advanced physics models? And since CPUs aren't as efficient at parallel workloads as GPUs, it follows directly why the same effects perform poorly on a CPU: not because of bad drivers, but simply because the CPU isn't fast enough for them. So basically, get a better CPU or an Nvidia card.

Feel free to provide reliable facts showing that PhysX is poorly implemented and that this is the real cause, rather than what I said here. I doubt you can.

Last edited by zomg; 2012-08-19 at 04:24 PM.
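To make the parallelism point concrete, here is a minimal sketch (plain C++, not PhysX code) of a per-particle update. No particle depends on any other, so the loop can be split across however many threads the hardware offers: a quad-core CPU gets four workers, while a GPU runs thousands of such lanes at once, which is the gap being described above. The names and thread counts are purely illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Integrate all particles over one timestep, splitting the work across threads.
void integrate(std::vector<Particle>& ps, float dt, unsigned workers)
{
    if (workers == 0) workers = 1;

    auto work = [&](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) {
            ps[i].vy -= 9.81f * dt;      // gravity
            ps[i].x  += ps[i].vx * dt;   // each particle's update touches
            ps[i].y  += ps[i].vy * dt;   // only that particle's own data,
            ps[i].z  += ps[i].vz * dt;   // so the chunks never conflict
        }
    };

    std::vector<std::thread> pool;
    std::size_t chunk = ps.size() / workers + 1;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end   = std::min(ps.size(), begin + chunk);
        if (begin < end) pool.emplace_back(work, begin, end);
    }
    for (auto& t : pool) t.join();
}
```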
2012-08-19, 04:42 PM | #124
Major
Contrary to some headlines, the Nvidia PhysX SDK actually offers multi-core support for CPUs. When used correctly, it even comes dangerously close to the performance of a single-card, GPU-based solution. There is still a catch, however: PhysX automatically handles thread distribution, moving the load away from the CPU and onto the GPU when a compatible graphics card is active, so game developers need to shift some of the load back to the CPU themselves. The effort and expense required to implement those coding changes obviously works as a deterrent. We still think that developers should be honest and openly admit this. Studying certain games (with a certain logo in the credits) raises the question of whether this additional expense was spared for commercial or marketing reasons. On one hand, Nvidia has a duty to developers, helping them integrate compelling effects that gamers can enjoy and that might not have made it into the game otherwise. On the other hand, Nvidia wants to prevent (and with good reason) prejudices from getting out of hand. According to Nvidia, SDK 3.0 already offers these capabilities, so we look forward to seeing developers implement them.

Last edited by EVILoHOMER; 2012-08-19 at 04:47 PM.
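For anyone wondering what "shifting the load back to the CPU" actually involves, here is a rough sketch against the PhysX 3.x SDK (setup is simplified, error handling is omitted, and the thread count is illustrative). As far as I can tell, the key line is the one giving the scene a CPU dispatcher with several worker threads, which is how the multi-core support the article mentions gets switched on.

```cpp
#include <PxPhysicsAPI.h>

using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main()
{
    // Core SDK objects (the version macro checks that headers and library match).
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // Scene description: the cpuDispatcher is where the multi-core choice is made.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4); // 4 worker threads

    PxScene* scene = physics->createScene(sceneDesc);

    // Step the simulation at 60 Hz; solver tasks are spread across the workers.
    for (int frame = 0; frame < 600; ++frame)
    {
        scene->simulate(1.0f / 60.0f);
        scene->fetchResults(true); // block until this step finishes
    }

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
```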
2012-08-19, 04:54 PM | #125
Sergeant
Interesting read.
__________________
Planetside 2 =/= Planetside 1 V2. This means there will be changes in gameplay between Planetside 1 and Planetside 2. Cope.
2012-08-19, 06:33 PM | #126
Sergeant
Love Tom's, and it is an interesting read. This is a very tricky situation, though. Is purposefully nerfing the CPU potential of PhysX, so that an nVidia GPU is by far the best option for playing these games, ethical conduct? Decidedly not. Would nVidia cards be as compelling if the CPU performance more closely matched them? No, they wouldn't. So it is a balance between monopolizing great GPU physics performance to sell more cards, and not screwing over everyone who owns another brand. It is a tough call, and I don't see a clear solution to the problem. Why would nVidia optimize the CPU code path if it would lose them money? They wouldn't; that's a silly idea. And again, the PhysX source IS available for developers to optimize, if they so desire.
2012-08-19, 07:06 PM | #127
Sergeant
I think you've more or less hit the nail on the head here. nVidia are walking a fine line right now between making their product more desirable (PhysX, wooo!) and making it something the general populace despises (because of what could be viewed as underhanded tactics). At the moment they're doing an admirable job of it. Right now I'm using AMD rather than nVidia; I bought a 7970. The thing that made me pause before making that decision was PhysX, so nVidia are clearly succeeding to some degree.

It is a difficult situation, and there aren't any clear solutions. As you said, nVidia are unlikely to upgrade the x86 code themselves; it would be silly for them to do so. At the end of the day, the onus is really on the game's developers to use PhysX to the best degree. It is possible, with some work, to move PhysX onto x86/SSE, and assuming the developers implement threading properly, there should be a negligible difference between GPU and CPU (as the post above nicely demonstrates). Even if they don't (although I have the utmost confidence in the dev team), there is still another option for most AMD users: hacked nVidia drivers with a dedicated nVidia card for PhysX offloading. Using the modified 1.05ff drivers, one can create a "hybrid" set-up that is comparable to a single-card nVidia set-up. I personally don't think PhysX is anything to worry about. Others disagree, and they're welcome to.
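For anyone wondering what "converting to x86/SSE" actually means, here is a generic sketch (not PhysX code, just an illustration): SSE intrinsics operate on four floats per instruction, so a vectorised integration step issues roughly a quarter of the instructions of a scalar one.

```cpp
// Generic sketch, not PhysX code: integrate four particle positions at once
// with SSE intrinsics instead of one value at a time.
#include <xmmintrin.h>

void integrate4(float* pos, const float* vel, float dt)
{
    __m128 p = _mm_loadu_ps(pos);         // load four positions
    __m128 v = _mm_loadu_ps(vel);         // load four velocities
    __m128 d = _mm_set1_ps(dt);           // broadcast the timestep to all four lanes
    p = _mm_add_ps(p, _mm_mul_ps(v, d));  // p += v * dt, four lanes per instruction
    _mm_storeu_ps(pos, p);                // store the updated positions
}
```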
__________________
Planetside 2 =/= Planetside 1 V2. This means there will be changes in gameplay between Planetside 1 and Planetside 2. Cope.

Last edited by julfo; 2012-08-19 at 07:07 PM.
2012-08-19, 07:51 PM | #128
Sergeant
As a last note, while it is possible to mod in PhysX support, I do not see the majority of AMD users doing that. It is so niche (and possibly warranty-voiding) that most won't attempt to do it, though that doesn't make it an invalid tactic.
Tags: amd, cpu, gpu, nvidia, physx