The studio behind GOLF+ (2020) is aiming to expand the game this year in a bid to solve one of the most persistent problems in off-course golf simulation: building real-world muscle memory in a virtual environment.
GOLF+ CEO Ryan Engle announced that the studio’s popular golf sim is getting “major product updates” this year, which are set to include a new social lobby, UI improvements, and over a dozen new courses.
In addition, Engle showed off a fresh look at a mixed reality mode which ostensibly tracks real-world golf balls and clubs so players can work on driving, iron play, and putting in a simulated golf environment.
Check it out in action below:
Traditional golf simulators use large 2D impact screens and sensors to measure ball speed and direction. While they’re generally considered effective for practicing full swings and driving, they tend to be less reliable at the slower ball speeds used in putting and short-game shots.
Worse yet, these sorts of simulator screens lack parallax, as courses are projected from a fixed viewpoint. Looping in a mixed reality setup, though, could allow golfers not only to build muscle memory with a real ball and club, but also to play in a more realistic environment.
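To see why that matters, consider how a point projects onto a screen as the viewer moves: objects at different depths should shift by different amounts, which a projection locked to a single viewpoint can’t reproduce. Here’s a minimal toy sketch of that geometry; the function name and all distances are made-up values purely for illustration.

```python
# Toy illustration of why a fixed-viewpoint projection lacks parallax.
# All positions and distances below are arbitrary example values.

def screen_x(point_x: float, point_z: float, head_x: float, screen_z: float) -> float:
    """Perspective-project a point onto a screen plane at depth screen_z,
    as seen by a viewer at lateral offset head_x (similar triangles)."""
    t = screen_z / point_z
    return head_x + (point_x - head_x) * t

# Viewer steps 0.2 m to the right; the screen sits 2 m away.
near = screen_x(0.0, 3.0, head_x=0.2, screen_z=2.0)    # object 3 m away
far = screen_x(0.0, 10.0, head_x=0.2, screen_z=2.0)    # object 10 m away
print(f"near shifts to x={near:.3f} m, far shifts to x={far:.3f} m")
# Different shifts = parallax. A simulator screen rendered for a fixed
# viewpoint draws both objects as if head_x were 0, so the scene reads
# as flat the moment the golfer's head moves.
```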
It’s unclear whether the studio intends to release the mixed reality implementation as an update to the current game, or as a separate version for location-based golf sims.
Engle says, however, that we should expect GOLF+ on more platforms in the near future. Although it’s currently only available on Quest, the studio shared plans to expand the game to PC VR headsets.
Additionally, the studio says it’s exploring flatscreen PC gameplay, as well as offering a “unified experience with shared physics, multiplayer, and cross-play across all platforms.”
Steam Frame is still slated to ship sometime in the first half of 2026, although Valve now says the ongoing component shortage has led the company to revisit both the price and release date of the standalone VR headset.
Valve announced in a hardware news update that Steam Frame, Steam Machine, and Steam Controller are all being affected by the component shortage.
“When we announced these products in November, we planned on being able to share specific pricing and launch dates by now,” Valve says. “But the memory and storage shortages you’ve likely heard about across the industry have rapidly increased since then.”
Photo by Road to VR
Due to a surge in demand from AI and data centers, RAM and storage prices have increased significantly since this time last year, with PCPartPicker data charting a 300 percent price increase in DDR5 RAM alone.
As component availability dwindles and prices rise, Valve says it “must revisit […] exact shipping schedule and pricing,” noting that both Steam Machine and Steam Frame have been especially affected.
Still, Valve says it’s hoping to ship all three products in the first half of the year—meaning sometime before July 1st.
Valve told Road to VR in November that it expects the price of Steam Frame to be ‘cheaper than Index’, although the company didn’t qualify pricing further than that. At its 2019 launch, a Valve Index ‘full kit’ was priced at $1,000 (headset, controllers, SteamVR trackers), while the headset alone was priced at $500.
While Valve hasn’t commented on what Steam Machine will cost, it confirmed to YouTuber ‘Skill Up’ back in November that the PC won’t be subsidized like a console.
Price estimates are fairly scattered at this point. Linus Tech Tips has suggested the lowest configuration could land somewhere around $700, based on a custom PC built from comparable parts.
In early January, Czech retailer Alza may have leaked Steam Machine’s pricing, with the 512GB model priced around $950 USD and the 2TB model at $1,070 USD.
Meta released a new update for Quest that brings a host of new features, including a new ‘Surface Keyboard’ that lets you type on any flat surface.
Meta has already started rolling out Quest’s v85 update on the public test channel (PTC), making it the first major update since the company revealed it was shaking up Reality Labs in a bid to shift focus to AI and smart glasses.
According to the v85 patch notes, the update is set to retire ‘Horizon Feed’, which launched with v57 in late 2023 and served up a mishmash of user-created Worlds, apps, games, and Reels.
The update is also set to make Navigator the default UI, replacing the platform’s dock-based interface with a more mobile-style launcher overlay.
Image courtesy Meta
For years now, if you wanted to type something on Quest you had three main methods: use the floating virtual keyboard, pair a physical keyboard the headset can actually track, or use voice input.
Now, Meta is rolling out ‘Surface Keyboard and Touchpad’ as an experimental feature on Quest 3, which lets users input text and control a cursor by mapping a virtual keyboard and touchpad onto a desk or table.
Meta says the new Surface Keyboard is ideal for “casual productivity, browsing, messaging, and 2D applications,” as it includes a basic key set.
Meanwhile, the virtual touchpad supports index-finger actions like move, click (tap), and drag (double tap and move), as well as two-finger scrolling with your index and middle finger. Meta says, however, that a physical keyboard “is still advised for high-volume writing.”
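For a rough sense of how a gesture set like that might map to pointer events, here’s a hypothetical sketch; the function, input fields, and thresholds are illustrative assumptions, not Meta’s actual API or detection logic.

```python
# Hypothetical gesture-to-pointer mapping for a virtual touchpad.
# Field names and logic are illustrative assumptions, not Meta's API.

def classify(fingers: int, moved: bool, tapped: bool, recent_tap: bool) -> str:
    """Return a pointer action for one touch sample.

    fingers:    1 = index finger only, 2 = index + middle finger
    moved:      the finger translated across the surface
    tapped:     brief contact without movement
    recent_tap: a tap landed within a double-tap window (e.g. ~300 ms)
    """
    if fingers == 2 and moved:
        return "scroll"   # two-finger scroll
    if moved and recent_tap:
        return "drag"     # double tap, then move
    if tapped:
        return "click"    # single tap
    if moved:
        return "move"     # plain cursor move
    return "idle"

# A real detector would also debounce taps (waiting out the double-tap
# window before committing to "click") and track state across frames.
assert classify(2, True, False, False) == "scroll"
assert classify(1, True, False, True) == "drag"
```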
Quest 3 users who are enrolled in the PTC can opt into the feature from the ‘Experimental’ section of ‘Settings’. If you’re not already, here’s how to enroll in PTC:
1. Open the mobile app, tap Menu in the bottom-right corner, then tap Devices.
2. Tap Headset settings, then tap Advanced settings.
3. Tap the toggle next to Public Test Channel to try to join Quest PTC.

If the toggle doesn’t work, Quest PTC is currently full and not available.
Another feature is the newly redesigned activity bar, which is said to allow for quicker and easier access to controls like recording, calls, and media.
Other updates include the ability to temporarily hide virtual hands via quick actions, customize the Quest 3S’ action button to trigger preferred system actions with short or long presses, and scan for malware. You can check out the full v85 (PTC) release notes here.
DeepMind, Google’s AI research lab, announced Genie 3 last August, showing off an AI system capable of generating interactive virtual environments in real-time. Now, Google has released an experimental prototype that Google AI subscribers can try today. Granted, you can’t generate VR worlds on the fly just yet, but we’re getting tantalizingly close.
The News
Project Genie is what Google calls an “experimental research prototype,” so it isn’t exactly the ‘AI game machine’ of your dreams just yet. Essentially, it allows users to create, explore, and modify interactive virtual environments through a web interface.
The system is a lot like previous image and video generators, which require inputting a text prompt and/or uploading reference images, although Project Genie takes this a few steps further.
Instead of one, Project Genie has two main prompt boxes—one for the environment and one for the character. A third prompt box also lets you modify the initial look before fully generating the environment (e.g. make the sword bigger, change the trees to autumn colors).
As an early research system, Project Genie has limitations, Google says in a blog post. Generated environments may not closely match real-world physics or prompts, character control can be inconsistent, sessions are limited to 60 seconds, and some previously announced features are not yet included.
And for now, the only thing you can output is a video of the experience, although you can explore and remix other ‘worlds’ available in the gallery.
Project Genie is now rolling out to Google AI Ultra subscribers in the US, aged 18 and over, with broader availability planned for some point in the future. You can find out more here.
My Take
There are a lot of hurdles to get over before we can see anything like Project Genie running on a VR headset.
One of the most important hurdles is undoubtedly cloud streaming. Cloud gaming does exist on VR headsets, but it’s not great right now, since latency varies widely depending on how close you are to your service’s data center. That, and the big names in cloud gaming today (i.e. NVIDIA GeForce Now, Xbox Cloud Gaming) are generally geared towards flatscreen games, where the bar for render and input latency is far lower than on VR headsets, which generally require motion-to-photon latency of 20ms or less to avoid user discomfort.
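To make that budget concrete, here’s a rough back-of-the-envelope sketch; every stage timing below is an assumption picked for illustration, not a measurement of any real streaming service.

```python
# Illustrative motion-to-photon budget for cloud-rendered VR.
# All stage timings are assumed round numbers, not real measurements.

VR_BUDGET_MS = 20.0  # comfort threshold cited above

stages_ms = {
    "pose sample + uplink": 8.0,       # network share of the round trip
    "server-side render": 6.0,
    "video encode": 4.0,
    "downlink + jitter buffer": 8.0,
    "decode + display scan-out": 5.0,
}

total = sum(stages_ms.values())
print(f"total: {total:.0f} ms vs {VR_BUDGET_MS:.0f} ms budget")
# -> total: 31 ms vs 20 ms budget. Even these generous numbers blow past
# the threshold, which is why cloud VR systems lean on local reprojection
# to mask streaming latency rather than waiting on a fresh remote frame.
```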
And that’s not taking into account that Project Genie would also need to render the world with stereoscopy in mind—which may present its own problems, since the system would need to generate two distinct points of view that resolve into a single, solid 3D picture.
As far as I understand, world models created in Project Genie are probabilistic, i.e. objects can behave slightly differently each time, which is part of the reason Genie 3 can only support a few minutes of continuous interaction at a time. Genie 3’s world generation also has a tendency to drift from prompts, which can produce undesired results.
So while it’s unlikely we’ll see a VR version of this in the very near future, I’m excited to see the baby steps leading to where it could eventually go. The thought of being able to casually order up a world on the fly, Holodeck-style—be it past, present, or any fiction of my choosing—feels so much more interesting to me from a learning perspective. One of my most-used VR apps to date is Google Earth VR, and I can only imagine a more detailed and vibrant version of that helping me learn foreign languages, time travel, and tour the world virtually.
Before we even get that far though, there’s a distinct possibility that the Internet will be overrun by ‘game slop’, which feels like asset flipping taken to the extreme. It will also likely expose game developers to the same struggles that other digital artists are facing right now when it comes to AI sampling and recreating copyrighted works—albeit on a whole new level (GTA VI anyone?).
That, and I can’t shake the feeling that the future is shaping up to be a very odd, but hopefully also very interesting and not entirely terrible place. I can imagine a future wherein photorealistic, AI-driven environments go hand-in-hand with brain-computer interfaces (BCI)—two subjects Valve has been researching for years—to serve up The Virtual Reality I’m actually waiting for.
Xreal has rolled out a real-time 3D conversion feature to its flagship AR glasses, which the company says converts any 2D content to 3D.
Xreal initially launched its ‘Real 3D’ software on the Xreal 1S AR glasses earlier this month; now the company has rolled out an update to Xreal One and One Pro that brings optional real-time 3D conversion of 2D content.
The company says Real 3D doesn’t require special video files, apps, DRM-protected content, or external software. All of the conversion is done in real-time on device via the company’s X1 spatial computing chipset built into the One series glasses.
XREAL One Pro | Image courtesy XREAL
“Because it doesn’t depend on proprietary players or formats, Real 3D works across connected desktops, consoles, phones, and other devices,” the company says, noting that content includes movies, streaming videos, locally stored media, and games.
Xreal tells Road to VR it does this by using the X1 chip’s NPU (neural processing unit) to perform depth estimation inference on every incoming frame and to generate the corresponding left- and right-eye views with depth relationships.
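As a rough illustration of the general technique (depth-image-based rendering), the sketch below shifts pixels horizontally in proportion to an estimated depth map to synthesize two eye views. This is a naive stand-in for illustration; it is not Xreal’s actual model or pipeline, and the function name and parameters are invented for the example.

```python
# Naive depth-image-based rendering (DIBR): synthesize left/right eye views
# from a 2D frame plus a per-pixel depth map. Purely illustrative; not
# Xreal's implementation. Real systems also fill disocclusion holes.
import numpy as np

def synthesize_views(frame: np.ndarray, depth: np.ndarray, max_disp: int = 8):
    """frame: (H, W, 3) uint8; depth: (H, W) floats in [0, 1], 1 = nearest.
    Nearer pixels get a larger horizontal offset between the two views."""
    h, w, _ = frame.shape
    disp = (depth * max_disp).astype(np.int32)
    cols = np.arange(w)
    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    for y in range(h):
        lx = np.clip(cols - disp[y] // 2, 0, w - 1)  # offset for left eye
        rx = np.clip(cols + disp[y] // 2, 0, w - 1)  # offset for right eye
        left[y, lx] = frame[y, cols]
        right[y, rx] = frame[y, cols]
    return left, right

# Usage with dummy data (a depth ramp running far-left to near-right):
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
depth = np.tile(np.linspace(0.0, 1.0, 320), (240, 1))
left_eye, right_eye = synthesize_views(frame, depth)
```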
The company says it’s still investigating Real 3D’s latency. Notably, it says that, compared to other display modes, its real-time 3D conversion results in “slightly higher power consumption” of around 300mW.
Additionally, Xreal tells Road to VR that its Real 3D technology is entirely developed in-house.
“We trained a highly compact model that balances performance and power consumption specifically for integrating into the X1 chip. While real-time 3D conversion is relatively straightforward on high-end GPUs, we have not found any comparable solutions in the industry that can operate effectively on low-power platforms like X1.”
The Beijing-based AR glasses maker sells a fairly wide range of AR glasses, all of which target traditional content consumption, such as flatscreen games, TV, and film, running on its own Android-based operating system.
Alongside the announcement it had secured a $100 million financing round, Xreal also recently became Google’s lead AR partner following a multi-year extension of an agreement initially struck in late 2024.
As a result, Xreal aims to bring Google’s Android XR operating system to its AR glasses over the next few years, which is slated to kick off with Xreal’s Project Aura when it launches at some point this year. In the meantime, you can check out our recent hands-on with Project Aura here.
Snapchat maker Snap announced it’s formed a new business dedicated to its upcoming AR glasses.
The News
Called Specs Inc, the wholly-owned subsidiary within Snap is said to allow for “greater operational focus and alignment” ahead of the public launch of its latest AR glasses coming later this year.
In addition to operating its AR efforts directly under the new brand, Snap says Specs Inc will also allow for “new partnerships and capital flexibility,” including the potential for minority investment.
Snap Spectacles Gen 5 (2024) | Image courtesy Snap Inc
In September, Snap CEO Evan Spiegel noted in an open letter that the company is heading into a make-or-break “crucible moment” in 2026, characterizing Specs as an integral part of the company’s future.
“This moment isn’t just about survival. It’s about proving that a different way of building technology, one that deepens friendships and inspires creativity, can succeed in a world that often rewards the opposite,” Spiegel said.
While the company hasn’t shown off its next-gen Specs yet, it touts the device’s built-in AI, something that “uses its understanding of you and your world to help get things done on your behalf while protecting and respecting your privacy.”
Snap further notes that it’s “building a computer that we hope you’ll use less, because it does more for you.”
My Take
Snap (or rather, Specs) is set to release its sixth-gen Spectacles this year, although this is the first pair of AR glasses the company is ostensibly hoping to pitch directly to the public, and not just developers and educational institutions.
Info is still thin surrounding Specs Inc’s launch plans for the device, although forming a new legal entity for its AR business right beforehand could mean a few things.
For now, it doesn’t appear Snap is “spinning out” Spectacles proper; Snap hasn’t announced new leadership, leading me to believe it’s more of a play to not only attract more targeted investment in its AR efforts, but also insulate the parent company from potential failure.
It’s all fairly opaque at this point, although the move does allow investors to more clearly choose between supporting the company’s traditional ad business or investing in the future of AR.
However you slice it though, AR hardware development is capital intensive, and Snap’s pockets aren’t as deep as its direct competitors, including Meta, Apple, Google, and Microsoft.
While Snap confirmed it spent $3 billion over the course of 11 years creating its AR platform, that’s notably less than what Meta typically spends in a single quarter on its Reality Labs XR division.
It’s also risky. The very real flipside is that Specs Inc could go bankrupt. Maybe it’s too early. Maybe it underdelivers in comparison to competitors. Maybe it’s too expensive out of the gate for consumers, and really only appeals to enterprise. Maybe it isn’t too expensive, but the world heads into its sixth once-in-a-generation economic meltdown.
Simply put, there are a lot of ‘maybes’ right now. And given the new legal separation, Snap still has the option to survive relatively unscathed if Specs Inc goes belly up, living to find another existential pivot.
Indie studio Pixelity confirmed that its previously announced VR game based on the hit ’90s anime Neon Genesis Evangelion (1995) is still coming, as the studio just showed off its first teaser image.
While series protagonist Shinji Ikari never actually signed a waiver to join NERV in the anime—he was all but forced into the pilot’s chair in the first episode as Tokyo-3 came under attack—the image above seems to suggest a much more tranquil recruitment into Gendo Ikari’s mysterious defense organization.
Pixelity says Cross Reflections will be a three-part experience based on the story of all 26 episodes of the original anime, with the first installment expected to arrive in 2026.
There’s no release date or list of confirmed target platforms yet, although a few lucky attendees at Evangelion’s upcoming 30th anniversary event in Japan will get a first hands-on with a demo.
The event is set to take place in Tokyo from February 21st – 23rd. To find out how to buy tickets and sign up for a chance to demo the game, click here.