Quadro vs GeForce

NVIDIA GeForce RTX 3070 Modded With 16 GB GDDR6 Memory

Russia-based modder VIK-on has upgraded the NVIDIA GeForce RTX 3070 graphics card with 16 GB of GDDR6 memory. The mod required some technical skill, but in the end VIK-on had the RTX 3070 running with twice its original memory capacity.

My point is that if the difference between Quadro and GeForce was insignificant, the redesigns would be mass-produced.

Perhaps there is a slightly deeper difference in the hardware than is commonly thought.

    Tomcat Perhaps there is a slightly deeper difference in the hardware than is commonly thought.

    I believe the difference is 64 bit performance.

      Pixophir I believe the difference is 64 bit performance.

      This is a somewhat belated response to the old discussion:

      cybereality …but there is nothing special about them, besides a firmware device id which the software checks for. It's all just a scam.

      @cybereality's word certainly carries more weight than mine. If there are differences between the Quadro and GeForce, they can certainly be found out.

      I want 64-bit on the GPU. That'll make my life so much easier :-)
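
      To put the 64-bit point in concrete terms, here is a minimal Python/NumPy sketch, purely my own illustration, of why double precision matters for simulation-style work; consumer GeForce cards typically run FP64 at only a small fraction of their FP32 rate, which has long been one of the professional-card differentiators:

        import numpy as np

        # Small increments simply vanish at 32-bit precision but survive at 64-bit.
        print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))   # True: the step is lost
        print(np.float64(1.0) + np.float64(1e-8) == np.float64(1.0))   # False: the step is kept

        # Machine epsilon makes the gap explicit.
        print(np.finfo(np.float32).eps)   # ~1.19e-07
        print(np.finfo(np.float64).eps)   # ~2.22e-16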

      Bimbam

      That's my favorite time to buy... the older models. Get 'em dirt cheap and use the money you saved for something fun. 🙂

      Tomcat My point is that if the difference between Quadro and GeForce was insignificant, the redesigns would be mass-produced.

      Well, the board is not exactly the same. I don't know about the current designs, but at one point (about 15 years ago) the GeForce and Quadro cards were exactly the same, just with basically a firmware switch that unlocked additional functions. This has changed since then. But in most cases, I would assume the GPU chip itself is the same, because it is much cheaper to mass-produce the same chip for all the different models than to spin up a production line for each SKU. However, that is not to say the board is exactly the same. They could have different RAM sizes, different memory controllers or bus speeds, etc., so it can get complicated.

      CPUs are the same way. Basically every i7 chip (of a generation) is the same chip. Intel will mass-produce the same die, and the ones that test better become the high-end model (which they charge more for), while the ones that are not as good become mid-range or low-end. Or some are slightly defective, so they take an 8-core part, disable 2 cores, and sell it as a 6-core. But they are all essentially the same chip. See the sketch below for the general idea.
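
      To make the binning idea concrete, here's a toy Python sketch of the sorting step (the core counts, clock thresholds, and SKU names are invented for illustration, not Intel's actual criteria):

        # Toy model of binning: every die is manufactured identical, and
        # post-fab test results decide which SKU it ships as.
        def assign_sku(working_cores: int, max_stable_ghz: float) -> str:
            if working_cores >= 8 and max_stable_ghz >= 5.0:
                return "8-core flagship"                  # best testers, highest price
            if working_cores >= 8:
                return "8-core mid-range"                 # fully working, clocks a bit lower
            if working_cores >= 6:
                return "6-core (two cores fused off)"     # partially defective die, still sellable
            return "salvage bin"

        print(assign_sku(8, 5.2))   # 8-core flagship
        print(assign_sku(8, 4.6))   # 8-core mid-range
        print(assign_sku(6, 4.8))   # 6-core (two cores fused off)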

        cybereality But in most cases, I would assume the GPU chip itself is the same.

        The fact that there is only one basic chip is not a secret and is not hidden. But a simple re-flash no longer turns a GeForce into a Quadro the way it did "about 15 years ago". That may have been true back then, but today I can't call the situation "all just a scam".

        Unfortunately, I can't check the differences in practice right now. I was going to get a Xeon W-3335 and an RTX A2000 12GB, but circumstances forced me to get a tablet for mobility instead.

        Well, not necessarily a scam, but in some ways artificial market segmentation. Though now that I look, there is not as huge a difference in Quadro prices as there once was. Either way, all companies do this in some way or another. You can decide whether you call that a scam or just good business.

        And this was not just 15 years ago. There was a hack from last year that enabled GPU virtualization on consumer cards, a feature supposedly only possible on Quadro and Tesla chips. You can even look at the code on GitHub; the driver basically just checks an ID number to decide whether the feature will work or not.

        https://www.notebookcheck.net/Hack-allows-unlocking-GPU-virtualization-functionality-on-consumer-NVIDIA-cards.531761.0.html
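
        Just to illustrate the kind of gate being described (this is not the actual driver code; the function and the device IDs below are made up), the pattern boils down to something like this Python sketch:

          # Illustrative sketch only -- not NVIDIA's code. The feature is gated
          # on the PCI device ID the card reports, not on what the silicon can do.
          VGPU_ALLOWED_IDS = {0x1E30, 0x1DB6}       # placeholder "professional" IDs

          def vgpu_allowed(reported_device_id: int) -> bool:
              # The check only looks at the reported ID.
              return reported_device_id in VGPU_ALLOWED_IDS

          consumer_id = 0x1E84                      # placeholder GeForce ID
          spoofed_id = 0x1E30                       # same card reporting a "pro" ID

          print(vgpu_allowed(consumer_id))          # False -> feature refused
          print(vgpu_allowed(spoofed_id))           # True  -> same silicon, feature enabled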

          I can't help but miss my old game engine sometimes, just because I am used to it. Granted, I have used it for eight years straight. Then I come back here after I get sick of it... I am so conflicted, as I just want an engine to learn and stick with. I don't want to spend another eight years to learn coding basics, tho. Hoping after a while, I will become more satisfied here.

          Okay, rant over.

          cybereality There was a hack from last year that enabled GPU virtualization on consumer cards, a feature supposedly only possible on Quadro and Tesla chips.

          Oh, that's very interesting 👍️ but that's just one of the features of Quadro:

          All said, the vgpu_unlock mod does not fully replace purchasing an NVIDIA-recommended vGPU solution.

          Buying Autodesk/NVIDIA's 'recommended solution' is almost entirely about warranty and support.

          GeForce cards have been shown to be 100% compliant with Autodesk's own tests before; they just aren't listed as 'verified' because NVIDIA won't support them at the driver level for CAD software if something goes wrong due to a GeForce driver issue (I can't vouch for all card/driver combos obviously, but you can find videos of people demonstrating this).

          If you're running a business where reliability and warranties are paramount, then Quadro is the de facto standard, I'm not debating that. But that's not because dropping in a GeForce card won't do the same job most of the time for a tenth of the price; it's because NVIDIA doesn't profit from it, and it will arguably be somewhat less 'reliable'. It's not a scam, just business really.

          For VRAM-limited tasks like certain ML, pro 3D animation, or simulation work, the extra VRAM available on the Quadro/Tesla range (24 GB+) and the double-precision performance may force your hand, but for most people (surveyors especially) these limits are unlikely to be an issue.

          Anyway, I guess my point is that if you're an indie, self-employed, or even a startup/SMB, the actual risk of using your GeForce card, VRAM permitting, to do the same job for a tenth of the price is minimal (though it should always be weighed against your use case). And since your GeForce card is likely clocked higher, there's a fair chance it will outperform its like-for-like Quadro counterpart in such tasks anyway, but it comes with a shorter third-party warranty and potential (albeit small) driver compatibility problems.

          If, however, you're already paying ~$2k/yr for Maya, you're probably not fussed that Quadros are expensive anyway.

          Kojack So holding off until maybe september for imminent 4000 series seems a good idea. Well, at least to see what's up with them, since current rumors are only the 4090 will be september, the cheaper ones are later.

          And it'll take time, but previous generations' prices should drop a bit in response to the new generation's cards, unless NVIDIA pulls an NVIDIA and increases the MSRP again...

          I'm glad nothing at all has changed since I went through my consumer electronics phase.

          If your intent is to write games that other people will run on their platforms, does it really matter whether your development system has a state-of-the-art GPU?

          Not at all. It is arguably a good idea to use a lower-end rig for development, just to be sure the game will run smoothly later. It is only when there is a lot of compilation going on, specifically C++, that a faster system makes a difference in the turnaround time between testing and coding. But with a game engine that's not much of an issue.

          Except for Unreal maybe ...

          For Godot it doesn't matter much; you can dev on a low-end system. But faster C++ compile times can be a huge benefit. For example, I can compile the Godot source from scratch in less than 2 minutes. On my old computer it took about 10 minutes. And compiling Unreal Engine from scratch took about 2 hours on my old machine (I didn't even bother on my new PC, since I wasn't using it at that point). So that can be a serious time savings and productivity boost, to the point where you could actually make money by buying a more expensive computer.

          On my previous laptop, compiling Godot took maybe 20 minutes, but developing on it never posed totally prohibitive problems until I touched anything slightly GPU-intensive, and that wasn't just Godot: Blender was always a 50/50 shot whether it would crash when the shaders compiled.
          Developing on powerful hardware is always better if possible; save the low-end hardware just for testing. Growing up poor, I know how much it's appreciated when a talented designer produces games that are not locked behind a hardware paywall, so I've kept all of my older machines. That especially includes the ones knocking on death's door.

          I never had issues with the GPU, yet. My deal was always space. Unity and Unreal are both enormous engines, and so are the games they create. Try making Unreal run on a laptop: it will drive you insane waiting for it, if it doesn't crash, unless you have one heckuva device. Then each project takes up space as well.

          See, opinions differ :-)

          Yeah, I've experienced Unreal as a real resource hog as well.

          But then, if you have that big fat Threadripper, 64 GB of negative-latency RAM, GPUs in parallel or whatever that's called, and fans that blow the neighbourhood away (and 100 bucks/month extra on the utility bill), can you be sure your game will run on a Walmart/Aldi/Worten PC from 3 years ago without your customers walking out on you with pitchforks in hand?

          Not totally serious, ofc :-)