Facebook has developed an AI-assisted method for supersampling real-time rendered content, something that could become an integral addition to games coming to future generations of high-resolution VR headsets.
There’s an ongoing arms race between display technology and GPUs, and adding VR to the mix only underscores the disparity. The problem isn’t simply putting higher-resolution displays in VR headsets; those panels already exist, and there’s a reason most manufacturers aren’t cramming the latest and greatest into their headsets. It’s really about striking a smart balance between display resolution and the end user’s hardware being able to render VR content at that resolution and still have it look good. Those are the basics, anyway.
That’s why Facebook is researching AI-assisted supersampling in a recently published paper, dubbed ‘Neural Supersampling for Real-time Rendering’. Using neural networks, Facebook researchers have developed a system that takes low-resolution rendered images as input and produces high-resolution output suitable for real-time rendering. This, they say, restores sharp details while saving computational overhead.
Researchers claim the approach is “the first learned supersampling method that achieves significant 16x supersampling of rendered content with high spatial and temporal fidelity, outperforming prior work by a large margin.”
From the paper:
“To reduce the rendering cost for high-resolution displays, our method works from an input image that has 16 times fewer pixels than the desired output. For example, if the target display has a resolution of 3840×2160, then our network starts with a 960×540 input image rendered by game engines, and upsamples it to the target display resolution as a post-process in real-time.”
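The paper describes a far more involved network (which also draws on depth, motion vectors, and previous frames), but the core idea, rendering at a fraction of the target resolution and letting a learned network upsample the frame as a post-process, can be sketched roughly as follows. This is an illustrative PyTorch sketch, not Facebook’s actual model; the layer structure and sizes are assumptions made purely for demonstration:

```python
# Illustrative sketch only: a tiny learned 4x-per-axis upsampler (16x total pixels).
# Facebook's actual network also uses depth, motion vectors, and temporal history;
# none of that is modeled here, and all layer sizes are assumptions.
import torch
import torch.nn as nn

class ToyNeuralSupersampler(nn.Module):
    def __init__(self, channels: int = 32, scale: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Predict scale*scale sub-pixel outputs per low-res pixel...
            nn.Conv2d(channels, 3 * scale * scale, kernel_size=3, padding=1),
        )
        # ...and rearrange them into an image scale-times larger in each dimension.
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.features(low_res))

# Example: a 960x540 frame rendered by the engine becomes a 3840x2160 output.
model = ToyNeuralSupersampler()
low_res_frame = torch.rand(1, 3, 540, 960)   # (batch, RGB, height, width)
high_res_frame = model(low_res_frame)        # -> shape (1, 3, 2160, 3840)
print(high_res_frame.shape)
```

In practice the heavy lifting happens at the low input resolution, which is what makes the post-process cheap relative to rendering the full-resolution frame in the first place.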
If all of this sounds familiar, that’s because it’s a similar concept to Nvidia’s Deep Learning Super Sampling (DLSS), which is currently only available on its RTX GPUs.
Facebook researchers say, however, that methods like DLSS either introduce “obvious visual artifacts into the upsampled images, especially at upsampling ratios higher than 2 × 2,” or rely on “proprietary technologies and/or hardware that may be unavailable on all platforms.”
Moreover, Facebook’s neural supersampling approach is said to be easy to integrate into modern game engines and to require no special hardware or software, such as proprietary drivers (as DLSS does). It’s also designed to be compatible with a wider array of software platforms, acceleration hardware, and displays.
It’s admittedly a difficult problem to address, and Facebook says more work is still needed to bring the technique to fruition, so we may not see it in the wild for some time, at least not directly from Facebook.
“This work points toward a future for high-resolution VR that isn’t just about the displays, but also the algorithms required to practically drive them,” the researchers conclude.
This, combined with a number of technologies such as foveated rendering, may bring the generational leap developers are looking for, and truly give VR games an undeniable edge in the visual department. VR games are often maligned for their lower-quality graphics, something that will hopefully soon be a thing of the past as we approach an era of photorealistic virtual reality.
– – — – –
If you’re looking for a deeper dive into Facebook’s machine-learning supersampling method, check out the full paper here, which is slated to be presented at SIGGRAPH 2020 this summer.