On its custom PCB, ASUS places all eight Samsung memory chips on the front side, meaning that the backplate does not directly cool any of them. In fact, the chips are left without any contact plate or heatsink touching them, relying solely on air from the fans to cool them. We also find a 6-phase power delivery system for the GPU, a 50 percent upgrade from stock specifications. It also uses ASUS's DIGI+ VRM controller for precise, digital voltages, as well as high-quality Super Alloy Power components for buzz-free choke operation, longer capacitor lifespan, and MOSFETs with a 30 percent higher voltage threshold than standard. Sadly, the memory has not been granted the same treatment. It is fed by a single phase found at the other side of the PCB, and this one does not use any special components. The MOSFETs of the DIGI+ power phases are cooled by a small heatsink, but the VRM controller and the MOSFETs for the memory power phase are left, like the memory chips, to fend for themselves without direct cooling.

Stock boost core 1386 MHz + stock memory 1753 MHz > stable on Fire Strike Extreme, FurMark burn test
OC boost 1501 MHz + OC memory 2000 MHz > stable on Fire Strike Extreme, FurMark burn test
OC boost 1601 MHz + OC memory 2000 MHz > failed mem; stable at 1866 MHz mem OC on Fire Strike Extreme, FurMark burn test
OC boost 1701 MHz + OC memory 2000 MHz > failed mem; stable at stock mem clock on Fire Strike Extreme, FurMark burn test

Temperatures don't go beyond 68°C; mounting two 120mm fans on the HDD cage of my current case helps pull air from a 200mm intake and dump it directly onto the naked heatsink. I might get some stability at higher clocks if I allow more power, but I kinda don't feel like putting more power through the card, don't wanna damage anything :/ Do you feel that 1500+2000 is optimal overall, considering you can get the GPU higher but would then have to lower the VRAM?

I know you can get VRAM bottlenecking, though that's heavily dependent on resolution and other factors. The GTX 980, for example, sometimes scales slightly better at higher resolutions such as 4K because the increased ROP count makes texture compression faster. (BTW, people still complain about this texture compression for some reason. There is minimal latency to encode, and it's lossless compression. It's lossless, fast, and keeps the price down a bit, so I guess people complain about anything.)

Back to the potential VRAM bottleneck issue. Most importantly, if the bus width were insufficient it would cause a severe bottleneck. For the most part I'm not seeing a problem in games, since increasing the GPU frequency often produced a linear improvement in frame rate.

We're talking about a theoretical best-case advantage of under 7% (1600/1500) in a completely GPU-bound environment, so it may be hard to test. The worst case is also about the same if VRAM-bottlenecked (2000/1866). So chances are you're looking at maybe 3% or so of benefit, depending on the game. It's worth pointing out that some game benchmarks, like Metro 2033 and the Batman Arkham series, don't compare to average gameplay, and sometimes certain bottlenecks aren't common but can hit you at the worst possible moment. Again, I guess it's going to really vary, but it would be interesting to know if you've done any testing to compare.

Not trying to hijack the thread, but I think the main question was answered and this info is still relevant. Much longer than I meant, but it's a bit complicated. (I'm holding off, as my Asus GTX680 TOP is playing most games at full settings and I've got a ridiculous back catalogue of Steam games, so I decided to just quit buying for another year or two and then get The Witcher 3 as my first game.)
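The clock ratios being discussed (1600/1500 core, 2000/1866 memory) can be sanity-checked with a few lines of arithmetic. A minimal sketch, assuming (as the thread does) that frame rate scales roughly linearly with whichever clock is the bottleneck; the `uplift` helper name is mine:

```python
# Back-of-the-envelope check of the clock ratios discussed in the thread.
# Assumption: performance scales linearly with the bottlenecked clock,
# so the percentage uplift is just the clock ratio minus one.

def uplift(new_clock, old_clock):
    """Percentage gain if performance scales linearly with this clock."""
    return (new_clock / old_clock - 1) * 100

# Fully GPU-bound: 1600 MHz core vs 1500 MHz core.
best_case = uplift(1600, 1500)   # ~6.7%

# Fully memory-bandwidth-bound: 2000 MHz memory vs 1866 MHz memory.
mem_case = uplift(2000, 1866)    # ~7.2%

print(f"GPU-bound (1600/1500):  {best_case:.1f}%")
print(f"VRAM-bound (2000/1866): {mem_case:.1f}%")
```

Since real games sit somewhere between fully GPU-bound and fully bandwidth-bound, the roughly 3% average estimate above is just the midpoint of these extremes and zero.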
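For anyone puzzled by the "lossless, minimal latency" texture compression mentioned in the thread, here is a toy sketch of the general idea behind delta color compression: smooth image regions have tiny pixel-to-pixel differences, which take fewer bits to move across the bus. This is an illustration of the concept only, not NVIDIA's actual hardware algorithm, and both function names are mine:

```python
# Toy sketch of lossless delta encoding, the general idea behind
# delta color compression. Not NVIDIA's actual algorithm.

def delta_encode(pixels):
    """Store the first value plus the difference to each neighbour.
    Small deltas need fewer bits downstream; the transform is lossless."""
    if not pixels:
        return []
    out = [pixels[0]]
    for prev, cur in zip(pixels, pixels[1:]):
        out.append(cur - prev)
    return out

def delta_decode(deltas):
    """Exact inverse of delta_encode: a running sum restores the pixels."""
    if not deltas:
        return []
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [200, 201, 201, 203, 202, 202]  # one channel of a smooth gradient
encoded = delta_encode(row)           # [200, 1, 0, 2, -1, 0] -- mostly tiny values
assert delta_decode(encoded) == row   # round-trip is bit-exact (lossless)
```

Because the transform is a single pass of subtractions, encoding cost is trivial, which is why it adds essentially no latency while cutting effective bandwidth use.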