Rambling about RTX 3090s getting bricked by Amazon's New World game



This is all 100% speculation on my part

My Patreon: https://www.patreon.com/buildzoid
Teespring: https://teespring.com/stores/actually-hardcore-overclocking
The Twitch: https://www.twitch.tv/buildzoid
The Facebook: https://www.facebook.com/actuallyhardcoreoverclocking

#newworld #rtx3090 #nvidia


47 thoughts on “Rambling about RTX 3090s getting bricked by Amazon's New World game”

  1. My suspicion is that a small part of the core, one that happens to sit far from the nearest thermal sensor, is being overloaded and overheating, and then something on the power side dies.

    Reply
  2. Didn't CryEngine play a role in killing some Fermi graphics cards back in the day?
    The Lumberyard game engine is just a modified CryEngine made by Amazon.
    Also, JayzTwoCents alleges that two people have told him their RX 6900s were killed by New World.

    Reply
  3. Someone please correct me: since most games are played without vsync at full 100% load, why is this particular game causing issues? My guess is wrong or harmful code that pushes the RTX 3090's power delivery to its limit, or maybe the GPU is overvolting. Other than that, running a GPU at 100% should never be able to brick a card. Why aren't 3080s being bricked, or 6900 XTs, or other high-tier GPUs? It has to be something about how the game's code interacts with the RTX 3090, or some kind of secret GPU-killing exploit.

    Reply
  4. For those who don't know, there was a design flaw in the first several batches of the 3090 FTW3 that caused hundreds of cards to brick themselves long before this game even came out. This is not Nvidia's or Amazon's fault; this is EVGA making garbage hardware. There are threads on EVGA's forum dating back to October where people said their cards died while doing some really mundane things, like watching YouTube or playing Overwatch.

    Reply
  5. I thought it was probably a fuse too. I can tell you a lot of the FTW3 3090s have power balancing issues and the PCIE slot is drawing too much power. Mine is affected by this; it was a big deal on the EVGA forums for a while. They offered a replacement, but no guarantee of when they could replace it, so I just hung on to mine for now as I can't be without it indefinitely. I figure if it dies they'll replace it immediately, and I'm fairly satisfied with it for now.
    To be specific, one 8-pin, in my case #3, will max out at 150 W, but 8-pins 1 & 2 will get stuck at around 110-120 W. It's as if, as soon as the card detects one 8-pin at the limit, it stops any further power draw on the other two. There's something fundamentally wrong with the design that can't be fixed by a BIOS or firmware flash; even with a higher-power BIOS it won't draw more power. And the PCI-E slot is drawing 80-85 watts regularly when it shouldn't exceed 75 watts. From what I understand the fuse on the PCIE slot power is good for roughly 120 watts, so it would have to spike really hard to pop it. I can't remember the fuse rating for the 8-pins, but it's around 200 watts I think, maybe 225. It's been several months since I've thought about it.

    Reply
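[Editor's note] The imbalance described in the comment above can be sanity-checked with a few lines of arithmetic. The numbers below are the commenter's reported readings and guessed fuse ratings, not official EVGA specifications:

```python
# Per-rail power readings (watts) as reported in the comment above.
# These are one user's observations, not measured specs.
PCIE_SLOT_SPEC = 75.0    # PCI-E slot limit per the PCIe spec
SLOT_FUSE_GUESS = 120.0  # commenter's rough guess at the slot fuse rating

readings = {
    "8pin_1": 115.0,     # stuck around 110-120 W
    "8pin_2": 118.0,
    "8pin_3": 150.0,     # maxed out at the 150 W connector limit
    "pcie_slot": 83.0,   # regularly 80-85 W, over the 75 W spec
}

total = sum(readings.values())
print(f"total board power: {total:.0f} W")

# Flag the slot rail if it exceeds its nominal spec limit
if readings["pcie_slot"] > PCIE_SLOT_SPEC:
    over = readings["pcie_slot"] - PCIE_SLOT_SPEC
    margin = SLOT_FUSE_GUESS - readings["pcie_slot"]
    print(f"slot is {over:.0f} W over the 75 W spec, "
          f"with {margin:.0f} W of margin left to the guessed fuse rating")
```

With those figures the slot sits well over spec in steady state, so only a modest transient spike would be needed to reach a ~120 W fuse.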
  6. I highly doubt it's Nvidia or EVGA at fault. It's happened to AMD GPUs and other board partners. If it were their fault, the issue would have come to light much sooner, and it would have also occurred in other games.
    This guy said he tested other games without issue after the OCP tripped, and presumably he had been playing other games prior to New World too. Yet New World has caused three faults, and two resulted in deaths (one within a few seconds). Other people have reported the same thing: New World killed their GPU. Not some other game, but New World. In my mind that pins the blame directly on New World. The developers even knew about this issue since alpha, yet they still pushed it out for a practically open beta.

    Sure, you could say that if hardware is ever killed by software, then the software is just highlighting a design flaw in the hardware. But malicious software has been known to be capable of killing hardware. Is the hardware at fault for being unable to prevent that? Even non-malicious software has damaged perfectly good hardware in the past, like Prime95 and FurMark.

    Reply
  7. Pretty sure the risk correlates with higher-power-limit cards. Probably power surges, with the game generating a load similar to FurMark or something. Especially considering EVGA's cards have pretty mediocre VRMs compared to the Nvidia FE while carrying a much higher power limit.

    Reply
  8. Somebody else who posted about this said that on initial game boot-up the FPS shot up to over 1000 fps, on a 2D black screen where it doesn't matter, but the card popped as soon as a 3D image flashed on screen, then dead.
    It seems it's getting hit with so much power, so quickly, that the OCP can't shut it off fast enough.
    That means the game, and Amazon, bear at least some responsibility here.
    Yeah, most of it is on Nvidia, but the program can be pointed at for what it's asking the hardware to do and the way it's asking. Short-term fix: just turn on Freesync; it slows things down enough.

    Reply
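[Editor's note] The uncapped menu screen described above is the kind of thing a simple frame limiter prevents. A minimal sketch of the idea, with an illustrative `run_frame_capped` helper that stands in for a real render loop (none of these names are New World's actual code):

```python
import time

def run_frame_capped(render_frame, fps_cap=60.0, frames=5):
    """Render `frames` frames, sleeping so the loop never exceeds fps_cap.

    Without the sleep, a trivial 2D menu scene can render thousands of
    frames per second, which is the runaway load the comment describes.
    """
    frame_budget = 1.0 / fps_cap
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()  # placeholder for the real draw call
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:
            # Idle out the rest of the frame budget instead of spinning
            time.sleep(frame_budget - elapsed)

t0 = time.perf_counter()
run_frame_capped(lambda: None, fps_cap=100.0, frames=10)
print(f"10 frames took {time.perf_counter() - t0:.2f} s")
```

At a 100 FPS cap, 10 frames of a near-instant draw call take roughly 0.1 s instead of microseconds, so the GPU spends most of each frame idle.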
  9. I think 🤔 🧐 it is totally Amazon's fault; they are shipping the code that does this, somehow interacting with the BIOS/firmware/drivers and directing the card to do something outside normal parameters, and of course it happens every time. And now that you mention it, Nvidia's design is deficient (not at full potential); it needs way better throttling and power control. How is this even possible? Did we suddenly travel 30 years into the past? 😲 This is so dumb 🤪😤

    Reply
  10. I wonder what his resolution was, because the game feels single-thread CPU-bound. It's not even maxing out my 3070 at 1440p when there are like 5 people on screen. It's ice cold most of the time, especially in town; GPU usage is like 40% lol. The only time I've seen it at 99% is when I'm alone, which is rare because everyone's leveling and running around.

    But New World is also weird and unoptimized af. It's the only (unmodded) game I've seen use 20 GB of RAM; that's 70% of the full game's install size lol.

    Reply
  12. The exact same thing happened to me while playing RDR2: 4K + high + FS @ 58 FPS + Vulkan. I had to reset the BIOS so the PC would restart. Then I changed the setting from Vulkan to DX12 and everything worked fine. 5600X + 6800 + 4x8 GB DDR4-3600. Anyone else?

    Reply
  13. I would have thought this is caused in part by some kind of harmonics in the current-sense circuitry, maybe worsened by the very fast current slew rate. The current spikes so high before the sense circuitry picks up on it and pulls down the clocks, and if that happens multiple times per frame, I'd expect a lot more current going into the card than intended.

    Reply
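[Editor's note] The lag the comment above describes can be shown with a toy model: treat the current-sense output as a first-order low-pass filter and feed it a fast spike. All the numbers here (filter time constant, spike amplitude and duration) are invented for illustration, not the 3090's actual sense circuit:

```python
# Toy model of current-sense lag: a first-order low-pass filter
# never sees the full height of a spike shorter than its time constant.
dt = 1e-7    # 100 ns simulation step
tau = 5e-6   # assumed 5 us sense-filter time constant (illustrative)
alpha = dt / (tau + dt)  # discrete low-pass coefficient

sensed = 0.0
peak_actual = peak_sensed = 0.0
for step in range(200):  # 20 us of simulated time
    t = step * dt
    # 2 us, 600 A load spike followed by a 50 A idle current (made-up values)
    actual = 600.0 if t < 2e-6 else 50.0
    sensed += alpha * (actual - sensed)  # first-order low-pass update
    peak_actual = max(peak_actual, actual)
    peak_sensed = max(peak_sensed, sensed)

print(f"actual peak: {peak_actual:.0f} A, sensed peak: {peak_sensed:.0f} A")
```

With these numbers the filtered reading peaks around a third of the true spike, so protection keyed to the sensed value would react far too late for a sub-microsecond event.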
  14. Don't bother making your videos shorter; maybe just change how the information is delivered. You repeat things multiple times, but I'm pretty sure the viewers who commit to watching don't mind, and lazy viewers can just skip ahead. You'd save time and could focus on something else after shooting just one take. If the delivery isn't satisfactory, you could add a summary at the end that corrects or re-explains anything you want; that makes the video longer, but still shorter than a full retake. You could try it next time, maybe it works, maybe not. Just say at the beginning that any corrections will be re-explained in the summary, even if they were already addressed in the moment. I don't know what actually causes your retakes, so maybe my suggestion fits, maybe not. You could also run a poll asking whether longer ramblings with a summary at the end for the TL;DR people would be appreciated; both kinds of viewer would be served, and a comment with timestamps could be pinned. Or maybe I'm just wrong, haha. Nonetheless, have a good one. I knew this one would run way over what I was expecting. Thank you.

    Reply
  15. Maybe the VRM feedback loop can start to oscillate under some conditions, for example when the GPU loads for 1 µs, idles for another 1 µs, and so on, which could make the core voltage completely unstable and blow the card or the power stages that way. Obviously any VRM has to be tested against that, just like all switching (and even linear) power supplies.

    Reply
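[Editor's note] The oscillation scenario above can be sketched with a generic resonance toy model: drive a lightly damped second-order system (a stand-in for an underdamped regulator loop) with a load that toggles at its resonant frequency, and the response grows far past the steady-state value. Every parameter here is invented for illustration; this is not a model of any real VRM:

```python
import math

# Toy second-order system standing in for an underdamped control loop.
dt = 1e-8          # 10 ns step (small enough for stable integration)
f_res = 500e3      # assumed 500 kHz loop resonance (illustrative)
w = 2 * math.pi * f_res
zeta = 0.05        # lightly damped: quality factor Q = 1/(2*zeta) = 10

x = v = 0.0        # response and its rate of change
peak = 0.0
for step in range(int(20e-6 / dt)):  # simulate 20 us
    t = step * dt
    # Load toggling at exactly the resonant frequency (square-wave drive)
    drive = 1.0 if math.sin(w * t) >= 0 else -1.0
    a = (drive - x) * w * w - 2 * zeta * w * v
    v += a * dt    # semi-implicit Euler keeps the oscillator stable
    x += v * dt
    peak = max(peak, abs(x))

print(f"steady-state response to a constant unit load would be 1.0; "
      f"resonant peak: {peak:.1f}")
```

With Q = 10 the resonant response ends up an order of magnitude above the steady-state answer, which is the qualitative point: a load toggling near the loop's natural frequency gets amplified rather than regulated out.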
  16. I don't get why no other game has been able to fry cards. Plenty of games don't have frame limits, yet people are blaming New World's uncapped frame rate? There should be nothing a developer can do to fry hardware, ever.

    Reply
  17. My 3090 FE trips OCP on my SF750 Platinum PSU even when the CPU is doing almost nothing. The power-draw spikes must be insanely high to do that. I'm kind of not surprised a workload exists that could kill a card with a 10.5K-CUDA-core GPU.

    Reply
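[Editor's note] The OCP trips described above are plausible on back-of-the-envelope numbers alone. Millisecond-scale transients on high-end Ampere cards have been widely reported at roughly double the average board power; the figures below are assumed for illustration:

```python
# Back-of-the-envelope PSU transient check (illustrative numbers only).
psu_rating = 750.0      # SF750 continuous rating, watts
gpu_average = 350.0     # 3090 FE nominal board power
transient_factor = 2.0  # assumed spike multiplier for short transients
cpu_and_rest = 100.0    # assumed rest-of-system draw, watts

spike = gpu_average * transient_factor + cpu_and_rest
print(f"instantaneous spike estimate: {spike:.0f} W "
      f"({spike / psu_rating:.0%} of PSU rating)")
```

Under these assumptions a brief spike lands above the PSU's continuous rating even with the CPU nearly idle, so a fast-acting OCP tripping is exactly what you'd expect.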

Leave a Comment