I don’t just mean DLSS or frame generation as it exists today… I mean completely re-interpreting what is rendered before it’s displayed, with full temporal consistency and determinism. Given that we’d already seen demos of the concept in action over a year ago, I really don’t think it’s far off, either.
Imagine booting up classic Monkey Island, and Nvidia’s AI reinterpretation makes it look like a high-end modern animated TV show. That’s the kind of thing I’m talking about.
It’s only a matter of time before we can use real-time AI upscalers. Nvidia (et al.) have been working on this quite a bit.
In the meantime, the two options you mentioned are it.
I thought their 40 series already did?