I’m one of those people who uses DLSS, because I’ve got a large fancy 4K monitor that’s big enough that it looks like shit at lower resolutions.
DLSS is better than nothing, but it’s no replacement for native rendering. It introduces a heap of visual anomalies and inconsistencies, especially in games with constant motion (racing games look like shit with DLSS), so I need to be seeing lows of 50 fps on medium before I’ll even think about DLSS.
I’m also pretty sure Nvidia is paying devs to have it on by default, because every time it’s patched into a game they wipe all your existing graphics settings just to turn DLSS on, at least in my experience.
I hate how AI upscaling looks, and I really don’t get why everyone seems to be gaga over it. On top of the artifacts and other weirdness it can introduce, it just generally looks to me like someone smeared Vaseline over the picture.
That’s not inherent to “AI upscaling” as a process. ESRGAN, for example, is pretty good at upscaling pictures while preserving quality.
I’ve tried upscaling with ESRGAN as well and it has similar problems. It messes with the original textures too much. For example, it made carpet look like a solid surface. Skin looks too smooth and shiny. That kind of thing.
It depends a lot on the source picture, but it’s definitely not a general problem inherent to AI upscaling. Otherwise there wouldn’t be so many positive examples of ESRGAN.
DLSS isn’t like all the other upscalers; it’s on a whole different level. FSR is a blur filter. FSR2 is better, but still noticeably upscaled, with tonnes of artifacting. Same with XeSS, because that and FSR are just software upscaling.
DLSS, on the other hand, has actual hardware dedicated to it. It quite often gives better-than-native results. It doesn’t look at all like someone smeared Vaseline on the screen.
FSR/XeSS are basic sharpening tools, and yeah, they’re inherently limited, because upscaling is just impossible to do with 100% accuracy. DLSS is the same thing, except NVIDIA tries to get around that limitation with some kind of proprietary AI magic accelerated by their hardware. It’s impossible for it to be “better than native”; it’s using AI to approximate what “native” is, and in doing so it makes the original image look too different for my liking. In motion the textures definitely look a little muddied to me as things blend into each other, since the AI can’t accurately predict how things should look in real time. At that point I’d rather just use FSR/XeSS, since they at least preserve the original art style.
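To be concrete about what I mean by “basic sharpening”: FSR 1.0 is roughly an edge-aware spatial upscale followed by a contrast-adaptive sharpen. Here’s a very rough stand-in using Pillow (this is obviously not AMD’s actual shader code, and the filenames and filter parameters are made up) just to show the general shape of the operation:

```python
# Very rough stand-in for a spatial upscaler: resize + unsharp mask.
# NOT AMD's FSR code, just the same general idea: upscale, then sharpen.
from PIL import Image, ImageFilter

img = Image.open("frame_1080p.png")                     # hypothetical low-res frame
up = img.resize((3840, 2160), resample=Image.LANCZOS)   # spatial upscale to 4K
# Sharpen to bring back some perceived detail; parameters are arbitrary.
out = up.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=2))
out.save("frame_upscaled.png")
```

Every frame gets processed in isolation, which is exactly why it can’t recover detail that was never rendered in the first place.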
It’s not impossible for it to be better than native.
https://www.techspot.com/article/2665-dlss-vs-native-rendering/
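The intuition: native rendering only ever takes one sample per pixel per frame, while a temporal upscaler accumulates jittered samples across many frames, so over time it can integrate more information per pixel than a single native frame contains. Here’s a tiny 1-D toy (nothing to do with NVIDIA’s actual network, numbers are arbitrary) where a running average of jittered samples beats a single centered sample:

```python
# 1-D toy: accumulating jittered samples approximates the properly filtered
# "image" better than one sample per pixel. Sketch only, not NVIDIA's algorithm.
import numpy as np

rng = np.random.default_rng(0)
pixels = 64

def scene(x):
    # "Scene" with detail well above the per-pixel sampling rate.
    return 0.5 + 0.5 * np.sin(2 * np.pi * 100 * x)

# Ground truth: each pixel is the average of the scene over its footprint.
x_fine = (np.arange(pixels * 256) + 0.5) / (pixels * 256)
truth = scene(x_fine).reshape(pixels, 256).mean(axis=1)

# "Native": one sample per pixel at the pixel center (aliased).
x_center = (np.arange(pixels) + 0.5) / pixels
native = scene(x_center)

# Temporal accumulation: each frame samples at a random sub-pixel jitter.
# (Static scene, so no motion vectors / history reprojection needed here.)
accum = np.zeros(pixels)
frames = 256
for _ in range(frames):
    jitter = rng.uniform(-0.5, 0.5) / pixels
    accum += scene(x_center + jitter)
accum /= frames

print("error, single centered sample:    ", np.abs(native - truth).mean())
print("error, accumulated jittered frames:", np.abs(accum - truth).mean())
```

In a real game the scene moves, so the history has to be reprojected with motion vectors before blending, and that’s where the ghosting and smearing people complain about comes from.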
Just play in 640x360 and squint your eyes like we used to in the CRT days.
CGA used to be good enough, you kids with your fancy pixels are just spoiled.
Look at fancy-pants here rendering four colors at a time!
In my day we had green and black. And we were grateful for it!
Or 1600x1200 when most LCDs were 1024x768.
CRTs have really gotten a bad rep, even though they stayed great for a good while after LCDs came on the market.
and they are still great, if not better. I’d take a high-end CRT over a modern LCD any day.
I really wish there were still a market for new, modern CRTs. I’d have loved to see how that technology would’ve matured further.
If you were gaming at 1600x1200 you either had a supercomputer, or you were gaming on a machine built after 2000.
*Cries in 320x200
This is a big part of why I’m sticking to 1440p for as long as it’s a viable option. Not like my imperfect vision with glasses on would benefit from more PPI anyway.