• Krotiuz@kbin.social

I’m one of those people that uses DLSS, because I’ve got a large fancy 4K monitor that’s big enough that it looks like shit at lower resolutions.

DLSS is better than nothing, but it’s no replacement for native rendering. It introduces a heap of visual anomalies and inconsistencies, especially in games with consistent motion (racing games look like shit with DLSS), so I have to be seeing lows of 50fps on medium before I’ll even think about DLSS.
I’m also pretty sure Nvidia is paying devs to have it on by default, because every time it’s patched into a game it clears all the current graphics settings to turn on DLSS, at least in my experience.

    • Nefyedardu@kbin.social

      I hate how AI upscaling looks and I really don’t get why everyone seems to be gaga over it. In addition to the artifacts and other weirdness it can introduce, it just looks generally like someone smeared vaseline over the picture to me.

      • FooBarrington@lemmy.world

        That’s not inherent to “AI upscaling” as a process. ESRGAN for example is pretty good at upscaling pictures while keeping the quality.
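If anyone wants to try single-image AI upscaling themselves, here’s a minimal sketch using OpenCV’s dnn_superres module (requires opencv-contrib-python; it uses a pretrained EDSR model rather than ESRGAN itself, and the filenames are only examples, since you download the model weights separately):

```python
# Minimal single-image AI upscaling sketch using OpenCV's dnn_superres module.
# Assumes opencv-contrib-python is installed and a pretrained EDSR x4 model
# (e.g. "EDSR_x4.pb") has been downloaded separately; filenames are examples.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")      # path to the downloaded model weights
sr.setModel("edsr", 4)          # model name and scale factor must match the file
img = cv2.imread("input.png")
upscaled = sr.upsample(img)     # 4x larger image, inferred by the network
cv2.imwrite("upscaled_4x.png", upscaled)
```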

        • Nefyedardu@kbin.social

          I’ve tried upscaling with ESRGAN as well and it has similar problems. It messes with the original textures too much. For example, it made carpet look like a solid surface. Skin looks too smooth and shiny. That kind of thing.

          • FooBarrington@lemmy.world

            It depends a lot on the source picture, but it’s definitely not a general problem inherent to AI upscaling. Otherwise there wouldn’t be so many positive examples of ESRGAN.

      • Whirlybird

        DLSS isn’t like all the other upscalers, it’s on a whole different level. FSR is a blur filter. FSR2 is better, but still noticeably upscaled with tonnes of artifacting. Same with XeSS, because that and FSR are just software upscaling.

        DLSS on the other hand has actual hardware that is dedicated to it. It actually gives better than native results quite often. It doesn’t at all look like someone smeared Vaseline on the screen.

        • Nefyedardu@kbin.social

FSR/XeSS are basic sharpening tools, and yeah, they’re inherently limited because upscaling is just an impossible thing to do with 100% accuracy. DLSS is the same thing, except NVIDIA tries to circumvent this limitation through some kind of proprietary AI magic, accelerated via their hardware. It’s impossible for it to be “better than native”; it’s using AI to approximate what “native” is. And in doing so, it makes the original image look too different for my liking. In motion the textures definitely look a little muddied to me as things blend into each other, since the AI can’t accurately predict how things should look in real time. At that point I’d rather just use FSR/XeSS, as it at least preserves the original art style.

      • TheYang@lemmy.world

        Or 1600x1200 when most LCDs were 1024x768.

CRTs really have gotten a bad rep, even though they stayed great for quite a while after LCDs came onto the market.

        • timo_timboo@lemmy.world

> they were great for a while still, after LCDs came on the market

          and they are still great, if not better. I’d take a high-end CRT over a modern LCD any day.

          • MrScottyTay@sh.itjust.works

I really wish there was still a market for new, modern CRTs. I’d have loved to see how that technology would’ve matured further.

        • Zombiepirate@lemmy.world

          Look at fancy-pants here rendering four colors at a time!

In my day we had green and black. And we were grateful for it!

    • Heavybell@lemmy.world

      This is a big part of why I’m sticking to 1440p for as long as it’s a viable option. Not like my imperfect vision with glasses on would benefit from more PPI anyway.

  • Fizz@lemmy.nz

Why is native gaming out and DLSS here to stay? I hate the feel and look of DLSS and FSR.

    • Delphia@lemmy.world

Because the largest gaming GPU manufacturer in the world says so. Unfortunately they have the clout to drive this narrative to devs, who will accommodate them because devs don’t want their game to look like shit on an Nvidia GPU.

I think these technologies are still very new. Nvidia aren’t going to let us know what their skunkworks is up to or what the next generation of the tech is going to look like.

      I live in hope.

    • Whirlybird

      Good DLSS is indistinguishable from, or even better than, native resolution.

  • hot_milky@lemmy.ml

    It’s not a prediction, {Company} will simply push whatever future that benefits {Company}.

  • lorty@lemmy.ml

So long as games don’t force it to be on, then whatever. Although I expect it to become a requirement for a usable framerate in next-gen games. Big developers don’t want to optimize anymore, and upscaling/framegen technologies are a great crutch.

    • emptyother@programming.dev

Of course nobody wants to optimize. It’s boring. It messes up the code. It often requires cheating the player with illusions. And it’s difficult. Not something just any junior developer can be put to work on.

      • miss_brainfart@lemmy.ml

You’d expect that once ray tracing/path tracing and the products that drive it have matured enough to be mainstream, devs will have more time for that.
Just place your light sources and the tech does the rest. It’s crazy how much less work that is.

        But we all know the Publishers and shareholders will force them to use that time differently.

        • DWin@sh.itjust.works

Eh, in my experience that’s not how development works. With every new tool that improves efficiency, the result is just more features, rather than using your newfound time to improve your code base.

It’s not just the publishers and shareholders either. Fixing technical debt is hard, and the solutions often need a lot of time for reflection. It’s far easier to bolt a crappy new feature on top and call it a day. It’s the lower-effort thing to do for everyone, management and the rank-and-file programmers alike.

          • miss_brainfart@lemmy.ml

New features are what sell a product, so that’s not far from my original point, I’d say.
Definitely a bit of both, and improving code is never the highest priority, yeah.

      • iegod@lemm.ee

Who are you directing the comments at? The dev company or individuals? I disagree on the latter. On the former, I still think it’s a mischaracterization of the situation. If the choice is to spend budget on scope and graphics at the expense of optimization, that doesn’t seem like a hard choice to make.

        • emptyother@programming.dev

          I might have generalized a bit too much. Of course some individual devs love the challenge of getting better performance out of anything.

But not enough of them that every dev company has an army of good developers with expertise in exactly the areas where it needs performance. There are a lot of ways one dev can specialize: GPU APIs (DirectX/OpenGL/Vulkan/etc.), OS, game engine, disk access, database queries. Someone who knows a graphics API well might not know how to optimize database queries. Throwing money at the problem doesn’t help either; the people who know this stuff usually already have good jobs. So you might have no choice but to use the devs you have, and the money you have budgeted, to release the game within the contracted time.

    • exscape@kbin.social

It doesn’t always look bad; it’s game-dependent.

In Hardware Unboxed’s testing it’s better than native in some titles, and better or equivalent in half of the titles tested.

It looks way better than native with TAA for me in BG3 (at 1440p). TAA is way too blurry. And yet it’s also faster.

      • hubobes@sh.itjust.works

        That’s like their opinion. In every single game I tried it looked worse than native. I always try it at first and then usually disable it.

        • Whirlybird

          Which version of DLSS, which GPU, and which profile do you choose? Obviously choosing a low internal res and then using DLSS to scale to 4K will look bad, but using say 1440p DLSS on quality profile looks as good as native or better.
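For reference, the commonly cited per-axis render scales for the DLSS 2 presets work out roughly as below (a quick sketch; the exact factors can vary per game and DLSS version, so treat the numbers as approximate):

```python
# Rough internal render resolutions for the usual DLSS 2 presets.
# The scale factors below are the commonly cited per-axis defaults;
# individual games can override them, so treat the output as approximate.
DLSS_SCALE = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def internal_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    """Return the approximate internal render resolution for a DLSS preset."""
    s = DLSS_SCALE[preset]
    return round(out_w * s), round(out_h * s)

for preset in DLSS_SCALE:
    w, h = internal_resolution(2560, 1440, preset)
    print(f"1440p output, {preset}: renders at ~{w}x{h}")
```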

          • hubobes@sh.itjust.works

Whatever version the game provides (or I can patch in). Besides other games, of course, I play Cyberpunk 2077, which should be the most polished implementation of RTX and DLSS to date, in my opinion. Usually Quality on a 1440p screen. I really only use DLSS if the native frame rate is unbearable.

            Current GPU is a 4070Ti.

    • OneNot@lemmy.world

      It’s constantly improving though.

DLSS 3.5, for example, comes with that new AI-enhanced RT that makes RT features look better, respond to changes in lighting conditions faster, and still run at pre-enhancement levels of performance or better.

      And Reflex fixes a lot of the latency issue.

      A lot of games don’t use the latest version of DLSS though, so I don’t blame you if you have a bad experience with it.

      • hubobes@sh.itjust.works

Uh yes, I am super excited for the AI denoiser. The videos from Nvidia looked great. And Reflex is also awesome.

        I don’t dislike everything from Nvidia, I just don’t like what the upscaler and frame-gen do.

    • Whirlybird

DLSS 2 onwards looks fantastic, often better than native resolution. I don’t care about frame gen, but the framerate boost you get simply from rendering at a lower internal resolution is big.

    • jacaw@sh.itjust.works

For some reason, Larian shipped an old version of DLSS with the game. It looks better if you swap out the DLL for a newer one. I use DLAA on my 3070 Ti and it looks good, but I did have to swap the DLL.

        • jacaw@sh.itjust.works

          You can either use DLSS Swapper or manually download a new DLL and drop it in yourself. It’s essentially just replacing the nvngx.dll in the game’s directory with a new one.

          There are some issues, though - for example, upgrading from a version prior to 2.5.1 will disable the use of the sharpness slider. I mitigate this by using DLSSTweaks to force preset C, which favors the newest frame more heavily.
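If you end up doing the manual swap a lot, it’s easy to script. A rough sketch (the paths and the DLL filename are just examples; check what your game actually ships and adjust accordingly):

```python
# Rough sketch of the manual DLSS DLL swap described above.
# The paths and filename below are examples only -- point them at your own
# game install and at whatever newer DLSS DLL you downloaded. The exact DLL
# name can vary between games, so verify it before running anything.
import shutil
from pathlib import Path

game_dir = Path(r"C:\Games\Baldurs Gate 3\bin")    # example install path
new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")     # newer DLSS DLL you grabbed
target = game_dir / new_dll.name                   # DLL to replace in the game dir

# Keep a backup of the shipped DLL so you can roll back if the game complains.
backup = target.with_name(target.name + ".bak")
if target.exists() and not backup.exists():
    shutil.copy2(target, backup)

shutil.copy2(new_dll, target)
print(f"Replaced {target} (backup at {backup})")
```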

  • vrighter@discuss.tchncs.de

I prefer native. If you can’t render something, then just don’t. Don’t make everything else worse just so you can claim to use a feature, and then try to make up junk to fill in the gaps. Upscaling is upscaling. It will never be better than native.

    • ඞmir@lemmy.ml

Have you tried DLSS Quality at 1440p or 4K? I genuinely think it looks like better anti-aliasing than 4x MSAA or whatever you’d usually use.

      • vrighter@discuss.tchncs.de

They have to “guess” what to fill the missing data in with. Or you could render natively and actually calculate it, so you don’t have to guess and can’t get it wrong.

  • regbin_@lemmy.world

I can agree, but with two conditions: benchmarks must always be done at native resolution, and hardware capability / system requirements must not take any upscaling into account.

For example, if a studio publishes the requirements for playing at 1080p, 60 FPS, High RT, it must be native 1080p and not 1080p with upscaling.

    • dzire187@feddit.de

Benchmarks should not be disconnected from actual games. If games don’t play at native resolution, then benchmarks should not be limited to native resolution. They should check both native and upscaled rendering, and rate the quality of the upscaling.

    • conciselyverbose@kbin.social

      Why?

      RT + DLSS is less cheating than most other graphics effects, especially any other approach to lighting. The entire graphics pipeline for anything 3D has always been fake shortcut stacked on top of fake shortcut.

• AutoTL;DR@lemmings.world [bot]

    This is the best summary I could come up with:


    During their discussion with Digital Foundry’s Alex Battaglia and PCMR’s Pedro Valadas, Bryan Catanzaro — Nvidia’s VP of Applied Deep Learning Research — stated that native resolution gaming is no longer the best solution for maximum graphical fidelity.

    Catanzaro’s statement was in response to a question from Valadas regarding DLSS and whether Nvidia planned to prioritize native resolution performance in its GPUs.

    Catanzaro pointed out that improving graphics fidelity through sheer brute force is no longer an ideal solution, due to the fact that “Moore’s Law is dead.”

    In the case of Cyberpunk 2077, both Valadas and CD Projekt Red’s Jakub Knapik said that full path-tracing would have been impossible in that game without all of DLSS’s technologies — especially in terms of image upscaling and frame generation.

    Catanzaro continued by saying the industry has realized it can learn much more complicated functions by looking at large data sets (with AI) rather than by building algorithms from the ground up (“traditional rendering techniques”).

    With the alleged “death” of Moore’s Law, AI manipulation may be the only thing that continues to drive 3D graphics forward for the foreseeable future.


    The original article contains 416 words, the summary contains 188 words. Saved 55%. I’m a bot and I’m open source!

  • culpritus [any]@hexbear.net

    This seems more like just a reality of LCD / LED display tech than anything. CRTs (remember those?) can do a lot of resolutions pretty well no problem, but new stuff not so much. I remember using a lower rez on early LCDs as a ‘free AA’ effect before AA got better/cheaper. This just seems like a response to folks getting ~4k or similar high rez displays and gfx card performance unable to keep up.

    I was just playing around with gamescope that allows for this kind of scaling stuff (linux with AMD gfx). Seems kinda cool, but not exactly a killer feature type thing. It’s very similar to the reprojection algos used for VR.

  • PenguinTD@lemmy.ca

I don’t get where this “raw pixels are the best pixels” sentiment comes from. Judging from the thread, everyone has their own opinion but hasn’t actually looked at the reasons people build upscalers in the first place. Well, bad news for you: games have been using virtual pixels for all kinds of effects for ages, and your TV broadcast goes through upscalers too (4K broadcast isn’t that popular yet).

I play Rocket League with FSR upscaling from 1440p to 2160p and it looks practically the same as native 2160p, AND it feels more visually pleasing, because the upscale also serves as an extra AA filter that smooths and sharpens at the same time. Frame rate matters a lot to older upscaler tech (and to features like distance field AO), since much of it relies on information from previous frames.

Traditionally, when the GPU was more powerful than the engine demanded, render engines took the brute-force route: render at something like 4x resolution and then downscale for AA. Sure, it looks nice and sharp, but it’s a brute-force, wasteful way to approach it, and plenty of follow-up AA techniques proved more useful for gamedev. Upscaler tech is the same. It’s not intended for you to render at 320x240 and upscale all the way to 4K or 8K; it paves the way for better post-processing features and lighting tech like Lumen or ray tracing/path tracing to actually become usable in games with decent final output. (Remember the PS4 Pro’s checkerboard 4K? That was a genuinely good technique for overcoming the PS4 Pro’s hardware limits in more quality-demanding games.)

In the end, consumers vote with their wallets for nicer-looking games all the time, and that’s what drives developers toward photoreal/feature-film-quality rendering. There are still plenty of studios going for stylized or pixel-art looks, and everyone flips their shit and praises them, while that tech mostly relies on the underlying hardware advances pushed by the photoreal approach; they just use the same pipeline in their own way to reach their desired look. Octopath Traveler II used Unreal Engine.

Game rendering is always about trade-offs. We’ve come a LONG way and will keep pushing boundaries. Will upscaler tech become obsolete somewhere down the road? I have no idea; maybe AI can generate everything at native pixels, right?
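To put rough numbers on that trade-off, here’s the plain pixel-count arithmetic for supersampling versus upscaling to 4K (base render cost only; real upscalers add their own, much smaller, per-frame cost that isn’t modeled here):

```python
# Back-of-the-envelope pixel counts: 4x supersampling vs. upscaling to 4K.
# Plain arithmetic only -- the upscaler's own per-frame cost is ignored.
def pixels(w: int, h: int) -> int:
    return w * h

native_4k = pixels(3840, 2160)
ssaa_4x   = pixels(3840 * 2, 2160 * 2)   # render at 2x per axis, downscale for AA
from_1440 = pixels(2560, 1440)           # e.g. FSR/DLSS upscaling 1440p -> 2160p

print(f"native 4K:        {native_4k:>12,} px")
print(f"4x supersampled:  {ssaa_4x:>12,} px  ({ssaa_4x / native_4k:.1f}x the work)")
print(f"1440p upscaled:   {from_1440:>12,} px  ({from_1440 / native_4k:.2f}x the work)")
```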

    • miss_brainfart@lemmy.ml

I don’t have anything against upscaling per se; in fact, I’m surprised at how good FSR 2 can look even at 1080p. (And FSR is open source, at least, so I can happily try it on my GTX 970.)

What I hate about it is how Nvidia uses it as a tool to price gouge harder than they ever have.

      • NineSwords@lemmy.ml

To me, FSR2 always looks like shit. I use it when playing on my Steam Deck or Ally and the results always look horrible.

        • miss_brainfart@lemmy.ml

          I mean, I didn’t say it looked great or anything. Just better than I expected.
          But of course my expectations were extremely low when I saw so many comments like yours, so I was actually pleasantly surprised with what it can do for what it is.

          Though to be fair to the Deck, the native resolution is already so low that there isn’t a whole lot FSR can work with.

      • PenguinTD@lemmy.ca

Well, don’t buy NV cards then. I switched and actually feel my dollars were worth the purchase (6800 XT).

        • miss_brainfart@lemmy.ml

          My next card will be AMD, but that doesn’t change the fact that Nvidia is the biggest authority in this market. They do whatever they want, and AMD doing their best to only be slightly worse isn’t helping.

          • PenguinTD@lemmy.ca

Nvidia uses its investors’ dollars really efficiently, which is what led to today’s dominance, but it also makes them act like a bully toward their business partners (see EVGA; who knows how the other vendors are being treated).

Some of the early investments that pushed their dominance in CUDA:

• NV directly funds research and provides equipment for accelerated computing (both graphics and non-graphics). In return, researchers become really familiar with CUDA, and their results feed back into CUDA’s design, driver, and compiler. The AI training side eventually led to tensor cores.
• NV then uses those to help software developers build CUDA-accelerated applications: GPU renderers, GPU simulation, GPU deep learning, GPU denoisers, GPU video encoding.
• NV also helps game developers implement or integrate tech like RTX and DLSS, or earlier ones like hair/PhysX, etc. And then there are those notorious game-specific driver enhancements: they basically work with each game and have ways to set driver-side parameters for it. These collaborations also led to GeForce Experience’s auto “best quality settings for your PC” feature.
• They also make CUDA-only cards for number crunching in data centers.
• All of the above means that when you make a purchase, if you’re not just playing games, the most cost-efficient option is to buy NV, provided your work software also uses those CUDA features.

The business plan and its results then form a positive feedback cycle. The crypto surge in sales and investment money was a bonus, but Nvidia did put it to good use, and the plan above makes more investors willing to pump money into NV. There is no better business than a monopoly business.

Then something happened on the consumer end. I don’t know exactly when or why they started selling flagships and cranking up their GPU prices. People would say, dude, their used GPUs during the crypto boom were selling at 3x~5x MSRP, why wouldn’t they just raise prices and take all that revenue themselves? That may be part of the reason, but I think they were probably testing the waters on both fronts (their data-center number-crunching cards were way, way more expensive than even the top-tier consumer cards). They took the chance, with the global chip shortage and other “valid reasons”, to raise prices and see how the market responded; now they have about two generations’ worth of price-gouging market data to set their prices properly (plus the door-in-the-face effect). Note that big manufacturers sign component deals in years, not quarters; the chip shortage might have hit some sectors hard, like laundry machines, but for NV you can bet your ass their supply was top priority.

They did lose out on the console front, and as many have already mentioned, NV’s CEO no longer has a passion for pushing game tech; he’s all AI now. Depending on how they aim their business, their gaming GPU side may not do anything really worth mentioning until AMD can put up a serious threat.

      • Whirlybird

Nvidia can, because DLSS is much better than FSR thanks to having hardware dedicated to it.

  • 👁️👄👁️@lemm.ee

    It won’t be until it becomes more universally adopted. I’m sure they thought the same thing about ray tracing, and no one gave a shit lol.

    • Whirlybird

Umm, lots and lots of people care about ray tracing, and if a PC game launches without DLSS it gets trashed by PC players.

        • Whirlybird

          The top games list is dominated by old online multiplayer games. That’s irrelevant to people wanting ray tracing.

          • 👁️👄👁️@lemm.ee

            That’s exactly my point lol, people wanting ray tracing is irrelevant because it’s not popular. You’ve been falling for too much marketing hype.

            • Whirlybird

It’s hugely popular. My point is that games exist outside of the top 25 Steam games, and basically all AAA games these days get slated if they don’t have both DLSS and ray tracing.

              The top played games aren’t top played because of their graphics, but that doesn’t mean that graphics are irrelevant.

              • 👁️👄👁️@lemm.ee

What makes you say it’s hugely popular? According to the marketing? I’ve never actually seen anyone care, especially in this market of overpriced GPUs, with it being exclusive to them. It’s a very small market that can actually run ray tracing, and an even smaller set of games that actually support it, usually just AAA games with deals from Nvidia. The fidelity isn’t worth the performance trade-off. Of course none of this is true of DLSS, which is why it’s actually really nice to have, except it’d be a whole lot better if it worked on non-Nvidia cards too, the way FSR does.

                • Whirlybird

If you frequent any PC gaming forums you’ll see pretty much everyone wants ray-traced lighting/shadows/reflections in every AAA game. Most console AAA games these days are adding ray tracing modes; Forza Motorsport is the next big one with it next month, and it’s running at 60fps.