Wednesday, 11 February 2026

‘Star Citizen’ VR Support Isn’t Prime Time Yet, But It’s Getting There

https://ift.tt/cigDvyp

Cloud Imperium Games (CIG) added experimental PC VR support to Star Citizen late last year, taking a first step toward fulfilling a more than decade-old promise. And with the release of the game’s second VR-focused update, things are getting increasingly serious.

December’s Alpha 4.5 update initially brought a VR theater and a full VR mode to the game, letting users play the bulk of the game on PC VR headsets for the first time, including walking, flying, EVA, combat, and menu navigation.

Granted, it’s still a (very) experimental mode. Initially, some users even had to add VR config lines to a file in the game’s directory to get it working, on top of keeping track of the keybinds used to cycle through VR modes on the fly.

Now the studio has released Star Citizen Alpha 4.6, which for the first time adds an official VR option to the settings menu, making enabling and managing VR mode at startup a much easier affair.

Image courtesy Ray’s Guide

Although 4.6 doesn’t radically expand VR features, it’s certainly a vote of confidence that VR support is not only still on track, but moving closer to the core of the game. The update also polishes a number of usability issues, bringing better menus and a smoother overall experience.

That said, as mentioned in a recent ‘Ray’s Guide’ video, players still need to carefully tune OpenXR settings, upscaling options, and in-game VR settings, such as UI scale, distance, and IPD alignment, just to get comfortable results. Users also typically need to switch constantly between full VR and theater mode for inventory and kiosk interactions, which is a definite immersion breaker.

That, and it doesn’t include VR motion controller support yet, making control remapping almost mandatory, with many users relying on a mix of gamepads, keyboards, HOTAS setups, and voice command software to manage the game’s enormous number of bindings.

As Silvan-CIG said in the 4.5 update announcement in December though, it’s all done in the spirit of open development.

“This is not our full VR launch. When that day arrives, we will make plenty of noise about it. What we are rolling out today is an opportunity for some early hands-on time, very much in the spirit of Open Development, so you can jump in, see how things are shaping up, and help guide what comes next.”

That said, creating a VR-native experience out of a game like Star Citizen is a tall order. Looking ahead, CIG’s biggest challenge will probably be balancing that ambition with the rest of the game’s development, which is constantly growing in scope and graphical complexity.

The post ‘Star Citizen’ VR Support Isn’t Prime Time Yet, But It’s Getting There appeared first on Road to VR.



from Road to VR https://ift.tt/ILOmYub
via IFTTT

Tuesday, 10 February 2026

Ray-Ban Smart Glasses Get Massive Utility Boost with Cool (but risky) ClawdBot Hack

https://ift.tt/q5xG4dS

If you’re comfortable mucking around with a new open source project, you could be shopping on Amazon just by looking at an object with your Ray-Ban Meta smart glasses.

Ray-Ban Meta smart glasses are pretty useful out of the box, offering photo & video capture, calls, music playback, and the standard assortment of AI assistant features. They don’t have an app store though, which means you’re basically stuck with a handful of curated services.

Now, indie developer Sean Liu has released an open-source project called VisionClaw that links Ray-Ban Meta smart glasses with OpenClaw (aka ClawdBot), essentially giving the autonomous AI agent eyes and ears.

Check out VisionClaw in action below, courtesy Liu:

OpenClaw isn’t an AI model like ChatGPT or Google Gemini though. It’s an agentic layer—essentially a messaging and orchestration system built on top of an AI model that interacts with services on your behalf, like sending emails, managing shopping lists, or controlling smart home devices—just three of the 56+ tools OpenClaw can integrate with right now.

It works like this: VisionClaw uses Gemini Live for real-time voice and computer vision, which can do things like describe what you’re seeing and answer questions—much the same sort of tasks the glasses’ native Meta AI can handle.

Image courtesy Sean Liu

But once you want to actually interact with an app or service—like when you want to send a message over email or your favorite non-Meta messaging app like Signal or Telegram—Gemini Live hands off the request to OpenClaw, which takes action.
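
Conceptually, the routing is simple. Below is a minimal Python sketch of that handoff pattern; it’s purely illustrative and not from Liu’s Swift codebase, and the helper functions are hypothetical stand-ins for Gemini Live and an OpenClaw endpoint.

```python
# Purely illustrative sketch of a voice/vision-to-agent handoff in the
# spirit of VisionClaw; not taken from Liu's Swift codebase.

ACTION_KEYWORDS = ("send", "email", "message", "order", "buy", "add to", "turn on")

def is_actionable(request: str) -> bool:
    """Naive intent check; a real system would have the model classify intent."""
    lowered = request.lower()
    return any(keyword in lowered for keyword in ACTION_KEYWORDS)

def answer_with_vision(request: str, camera_frame: bytes) -> str:
    # Placeholder: a real build would stream audio/frames to Gemini Live.
    return f"[live model] answering: {request}"

def forward_to_openclaw(request: str, camera_frame: bytes) -> str:
    # Placeholder: a real build would call the agent, which owns the tool
    # integrations (email, messaging, smart home) and takes the action.
    return f"[agent] acting on: {request}"

def handle_request(request: str, camera_frame: bytes) -> str:
    if is_actionable(request):
        return forward_to_openclaw(request, camera_frame)
    return answer_with_vision(request, camera_frame)

print(handle_request("what am I looking at?", b""))    # stays with the live model
print(handle_request("send this photo to Alex", b""))  # handed off to the agent
```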

Users looking to run VisionClaw will need an iPhone, as Liu’s codebase is written as an Xcode/Swift app that specifically uses Meta’s Wearables Device Access Toolkit (DAT) for iOS to connect the phone to Ray-Ban Meta glasses.

Beyond that, you’ll also need a fair understanding of the risks involved with running OpenClaw on your personal hardware.

While it can do some pretty amazing things, it’s a third-party piece of software that could require you to input passwords, API keys, and personal information, which can open you up to malicious actors. Notably, OpenClaw’s skill integrations can be written by anyone, so users need to be especially vigilant.

The post Ray-Ban Smart Glasses Get Massive Utility Boost with Cool (but risky) ClawdBot Hack appeared first on Road to VR.



from Road to VR https://ift.tt/Ayv1sg4
via IFTTT

Friday, 6 February 2026

Studio Behind VR’s Most Popular Golf Game Aims to Solve a Key Challenge with Golf Training Sims

https://ift.tt/ChtAUTn

The studio behind GOLF+ (2020) is aiming to expand the game this year in a bid to solve one of the most persistent problems with off-course golf simulators: building real-world muscle memory in a virtual environment.

Golf+ CEO Ryan Engle announced that the studio’s popular golf sim is getting “major product updates” this year, which is set to include a new social lobby, UI improvements, and over a dozen new courses.

In addition, Engle showed off a fresh look at a mixed reality mode, which ostensibly tracks real-world golf balls and clubs so players can work on driving, iron play, and putting in a simulated golf environment.

Check it out in action below:

Traditional golf simulators use large 2D impact screens and sensors to measure ball speed and direction. While they’re generally considered effective for practicing full swings and driving, they tend to be less reliable at the slower ball speeds used in putting and short-game shots.

Worse yet, these sorts of simulator screens lack parallax, as courses are projected from a fixed viewpoint. Looping in a mixed reality setup, though, could allow golfers not only to build muscle memory with a real ball and club, but also to benefit from golfing in a more realistic environment.
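
To put rough numbers on the parallax problem, consider how differently a nearby screen and the distant point it depicts appear to shift as you move your head. A quick Python sketch, using illustrative figures rather than anything from the article:

```python
import math

def parallax_deg(head_shift_m: float, distance_m: float) -> float:
    """Apparent angular shift of a point when the viewer moves sideways."""
    return math.degrees(math.atan2(head_shift_m, distance_m))

head_shift = 0.3  # golfer leans 30 cm to the side
screen = 3.0      # impact screen ~3 m away
flag = 100.0      # the flag that screen is depicting, ~100 m downrange

error = parallax_deg(head_shift, screen) - parallax_deg(head_shift, flag)
print(f"screen shift: {parallax_deg(head_shift, screen):.2f} deg")  # ~5.71
print(f"true shift:   {parallax_deg(head_shift, flag):.2f} deg")    # ~0.17
print(f"mismatch:     {error:.2f} deg")
```

A head-tracked headset re-renders the course per eye and per pose, so that mismatch largely disappears.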

It’s unclear whether the studio intends to release the mixed reality implementation as an update to the current game, or as a separate version for location-based golf sims.

Engle says, however, that we should expect Golf+ on more platforms in the near future. Although the game is currently only available on Quest, the studio shared plans to expand it to PC VR headsets.

Additionally, the studio says it’s exploring flatscreen PC gameplay, as well as offering a “unified experience with shared physics, multiplayer, and cross-play across all platforms.”

The post Studio Behind VR’s Most Popular Golf Game Aims to Solve a Key Challenge with Golf Training Sims appeared first on Road to VR.



from Road to VR https://ift.tt/HtC1GVI
via IFTTT

Thursday, 5 February 2026

Valve Reconsiders Steam Frame Price & Release Date Amid RAM & Storage Shortage

https://ift.tt/O4SX6v1

Steam Frame is still shipping sometime in the first half of 2026, although Valve now says the ongoing component shortage has led the company to revise both the price and release date of its standalone VR headset.

Valve announced in a hardware news update that Steam Frame, Steam Machine, and Steam Controller are all being affected by the component shortage.

“When we announced these products in November, we planned on being able to share specific pricing and launch dates by now,” Valve says. “But the memory and storage shortages you’ve likely heard about across the industry have rapidly increased since then.”

Photo by Road to VR

Due to a surge in demand from AI and data centers, RAM and storage prices have increased significantly since this time last year, with PCPartPicker data charting a 300 percent price increase in DDR5 RAM alone.

As component availability dwindles and prices rise, Valve says it “must revisit […] exact shipping schedule and pricing,” noting that both Steam Machine and Steam Frame have especially been affected.

Still, Valve says it’s hoping to ship all three products in the first half of the year—that is, sometime before July 1st.

Valve told Road to VR in November that it expects the price of Steam Frame to be ‘cheaper than Index’, although the company didn’t qualify pricing further than that. At its 2019 launch, a Valve Index ‘full kit’ was priced at $1,000 (headset, controllers, SteamVR trackers), while the headset alone was priced at $500.

While Valve hasn’t commented on what Steam Machine will cost, it confirmed to YouTuber ‘Skill Up’ back in November that the PC won’t be subsidized like a console.

Price estimations are fairly scattered at this point. Linus Tech Tips has suggested the lowest configuration could fetch somewhere around $700, based on a custom PC built on comparable parts.

In early January, Czech retailer Alza may have leaked Steam Machine’s pricing, with the 512GB model priced around $950 USD and the 2TB model at $1,070 USD.

Looking for more Steam Frame news?

Valve Unveils Steam Frame VR headset to Make Your Entire Steam Library Portable: Valve shows off Steam Frame, the standalone headset that can stream and natively play your entire Steam library—with only a few caveats right now.

Hands-on: Steam Frame Reveals Valve’s Modern Vision for VR and Growing Hardware Ambitions: We go hands-on with Valve’s latest and greatest VR headset yet.

Valve Says No New First-party VR Game is in Development: Valve launched Half-Life: Alyx (2020) a few months after releasing Index, but no such luck for first-party content on Steam Frame.

Valve is Open to Bringing SteamOS to Third-party VR Headsets: Steam Frame is the first VR headset to run SteamOS, but it may not be the last.

Valve Plans to Offer Steam Frame Dev Kits to VR Developers: Steam Frame isn’t here yet; Valve says it needs more time with developers first so they can optimize their PC VR games.

Valve Announces SteamOS Console and New Steam Controller, Designed with Steam Frame Headset in Mind: Find out why Valve’s new SteamOS-running Console and controller will work seamlessly with Steam Frame.

Steam Frame vs. Quest 3 Specs: Better Streaming, Power & Hackability: Quest 3 can do a lot, but can it go toe-to-toe with Steam Frame?

Steam Frame vs. Valve Index Specs: Wireless VR Gameplay That’s Generations Ahead: Valve Index used to be the go-to PC VR headset, but the times have changed.

The post Valve Reconsiders Steam Frame Price & Release Date Amid RAM & Storage Shortage appeared first on Road to VR.



from Road to VR https://ift.tt/5Exc3OH
via IFTTT

Wednesday, 4 February 2026

Quest Update Brings New ‘Surface Keyboard’ Feature, UI Changes & More

https://ift.tt/jrAflak

Meta released a new update for Quest that brings a host of new features, including a new ‘Surface Keyboard’ that lets you type on any flat surface.

Meta has already started rolling out Quest’s v85 update on the public test channel (PTC), making it the first major update since the company revealed it was shaking up Reality Labs in a bid to shift focus to AI and smart glasses.

According to the v85 patch notes, the update is set to retire ‘Horizon Feed’, which launched with v57 in late 2023 and served up a mishmash of user-created Worlds, apps, games, and Reels.

The update is also set to make Navigator the default UI, replacing the platform’s dock-based interface with a more mobile-style launcher overlay.

Image courtesy Meta

For years now, if you wanted to type something on Quest, you had three main methods: use the floating keyboard, pair a physical keyboard (which the headset can actually track), or use voice for text input.

Now, Meta is rolling out ‘Surface Keyboard and Touchpad’ as an experimental feature on Quest 3, which lets users input text and control a cursor by mapping a virtual keyboard and touchpad onto a desk or table.

Image courtesy ‘amtexe’

Meta says the new Surface Keyboard is ideal for “casual productivity, browsing, messaging, and 2D applications,” as it includes a basic key set.

Meanwhile, the virtual touchpad supports index-finger actions like move, click (tap), drag (double tap and move), and two-finger scrolling with your index and middle finger. Meta says, however, that a physical keyboard “is still advised for high-volume writing.”
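
As a rough illustration of how such a gesture set might map to pointer events, here’s a small Python sketch; it’s not Meta’s implementation, and the event names and timings are assumptions.

```python
# Illustrative gesture-to-pointer mapping for a surface touchpad.
# Not Meta's code; event names and timings are assumptions.
from dataclasses import dataclass

DOUBLE_TAP_WINDOW = 0.3  # seconds between taps to count as a double tap

@dataclass
class TouchEvent:
    kind: str        # "tap", "move", or "two_finger_move"
    timestamp: float

class SurfaceTouchpad:
    def __init__(self) -> None:
        self.last_tap = -1.0
        self.dragging = False

    def handle(self, ev: TouchEvent) -> str:
        if ev.kind == "tap":
            if ev.timestamp - self.last_tap < DOUBLE_TAP_WINDOW:
                self.dragging = True  # double tap arms a drag
                return "drag_start"
            self.last_tap = ev.timestamp
            return "click"  # a real system would delay this to disambiguate
        if ev.kind == "move":
            return "drag_move" if self.dragging else "cursor_move"
        if ev.kind == "two_finger_move":
            return "scroll"
        return "ignore"

pad = SurfaceTouchpad()
print(pad.handle(TouchEvent("tap", 0.00)))   # click
print(pad.handle(TouchEvent("tap", 0.15)))   # drag_start
print(pad.handle(TouchEvent("move", 0.30)))  # drag_move
```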

Quest 3 users who are enrolled in the PTC can opt into the feature from the ‘Experimental’ section of ‘Settings’. If you’re not already, here’s how to enroll in PTC:

  1. Open the mobile app, tap Menu in the bottom-right corner, then tap Devices.
  2. Tap Headset settings, then tap Advanced settings.
  3. Tap the toggle next to Public Test Channel to try to join Quest PTC.
    • If the toggle doesn’t work, Quest PTC is currently full and not available.

Another feature is the newly redesigned activity bar, which is said to allow for quicker and easier access to controls like recording, calls, and media.

Other updates include the ability to temporarily hide virtual hands via quick actions, customize the Quest 3S’ action button to trigger preferred system actions with short or long presses, and scan for malware. You can check out the full v85 (PTC) release notes here.

The post Quest Update Brings New ‘Surface Keyboard’ Feature, UI Changes & More appeared first on Road to VR.



from Road to VR https://ift.tt/7OX3eol
via IFTTT

Friday, 30 January 2026

Google’s Project Genie Makes Real-time Explorable Virtual Worlds, Offering a Peek Into VR’s Future

https://ift.tt/irVCmfd

DeepMind, Google’s AI research lab, announced Genie 3 last August, showing off an AI system capable of generating interactive virtual environments in real-time. Now, Google has released an experimental prototype that Google AI Ultra subscribers can try today. Granted, you can’t generate VR worlds on the fly just yet, but we’re getting tantalizingly close.

The News

Project Genie is what Google calls an “experimental research prototype,” so it isn’t exactly the ‘AI game machine’ of your dreams just yet. Essentially, it allows users to create, explore, and modify interactive virtual environments through a web interface.

The system is a lot like previous image and video generators, which require inputting a text prompt and/or uploading reference images, although Project Genie takes this a few steps further.

Instead of one, Project Genie has two main prompt boxes—one for the environment and one for the character. A third prompt box also allows you to modify the initial look before fully generating the environment (e.g., make the sword bigger, change the trees to fall colors).

As an early research system, Project Genie has limitations, Google says in a blog post. Generated environments may not closely match real-world physics or prompts, character control can be inconsistent, sessions are limited to 60 seconds, and some previously announced features are not yet included.

And for now, the only thing you can output is a video of the experience, although you can explore and remix other ‘worlds’ available in the gallery.

Project Genie is now rolling out to Google AI Ultra subscribers in the US, aged 18 and over, with broader availability planned for some point in the future. You can find out more here.

My Take

There are a lot of hurdles to get over before we can see anything like Project Genie running on a VR headset.

One of the most important hurdles to get over is undoubtedly cloud streaming. Cloud gaming already exists on VR headsets, but it’s not great right now, since latency varies widely based on how close you are to your service’s data center. That, and the big names in cloud gaming today (i.e. NVIDIA GeForce Now, Xbox Cloud Gaming) are generally geared towards flatscreen games; when it comes to render and input latency, the bar there is much lower than for VR headsets, which generally require motion-to-photon latency of 20ms or less to avoid user discomfort.
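
For a sense of scale, here’s a back-of-the-envelope motion-to-photon budget in Python; every stage figure is an illustrative assumption rather than a measurement:

```python
# Rough cloud-VR latency budget; all numbers are illustrative assumptions.
budget_ms = 20.0  # common comfort target cited for VR

pipeline_ms = {
    "head-pose sample & send": 2.0,
    "network uplink": 10.0,          # assumes a relatively nearby data center
    "server render + encode": 8.0,
    "network downlink": 10.0,
    "decode + display scanout": 6.0,
}

total = sum(pipeline_ms.values())
for stage, ms in pipeline_ms.items():
    print(f"{stage}: {ms:.0f} ms")
print(f"total: {total:.0f} ms vs. {budget_ms:.0f} ms budget")
# Even these optimistic figures land around 36 ms, which is why streamed
# VR leans on local reprojection to mask the difference.
```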

And that’s not taking into account that Project Genie would also need to render the world with stereoscopy in mind—which may present its own problems, since the system would technically need two distinct points of view that resolve into a single, solid 3D picture.

As far as I understand, world models created in Project Genie are probabilistic, i.e. objects can behave slightly differently each time, which is part of the reason Genie 3 can only support a maximum of a few minutes of continuous interaction at a time. Genie 3 world generation also has a tendency to drift from prompts, which can produce undesired results.

So while it’s unlikely we’ll see a VR version of this in the very near future, I’m excited to see the baby steps leading to where it could eventually go. The thought of being able to casually order up a world on the fly, Holodeck-style, that I can explore—be it past, present, or any fiction of my choosing—is especially interesting to me from a learning perspective. One of my most-used VR apps to date is Google Earth VR, and I can only imagine how a more detailed and vibrant version of that could help me learn foreign languages, time travel, and tour the world virtually.

Before we even get that far though, there’s a distinct possibility that the Internet will be overrun by ‘game slop’, which feels like asset flipping taken to the extreme. It will also likely expose game developers to the same struggles that other digital artists are facing right now when it comes to AI sampling and recreating copyrighted works—albeit on a whole new level (GTA VI anyone?).

That, and I can’t shake the feeling that the future is shaping up to be a very odd, but hopefully also a very interesting and not entirely terrible place. I can imagine a future wherein photorealistic, AI-driven environments go hand-in-hand with brain-computer interfaces (BCIs)—two subjects Valve has been researching for years—to serve up The Virtual Reality I’m actually waiting for.

The post Google’s Project Genie Makes Real-time Explorable Virtual Worlds, Offering a Peek Into VR’s Future appeared first on Road to VR.



from Road to VR https://ift.tt/EqmzFIe
via IFTTT

XREAL Rolls out Automatic Real-time 3D Conversion Feature for Its AR Glasses

https://ift.tt/EqzFV56

XREAL has rolled out a real-time 3D conversion feature to its flagship AR glasses, which the company says converts any 2D content to 3D.

Xreal initially launched its ‘Real 3D’ software on its Xreal 1S AR glasses earlier this month; now the company has rolled out an update to Xreal One and One Pro that can optionally convert 2D content to 3D in real time.

The company says Real 3D doesn’t require special video files, apps, DRM-protected content, or external software. All of the conversion is done in real-time on device via the company’s X1 spatial computing chipset built into the One series glasses.

XREAL One Pro | Image courtesy XREAL

“Because it doesn’t depend on proprietary players or formats, Real 3D works across connected desktops, consoles, phones, and other devices,” the company says, noting that content includes movies, streaming videos, locally stored media, and games.

Xreal tells Road to VR it does this by using the X1 chip’s NPU (neural processing unit) to perform depth estimation inference on every incoming frame, then generating the corresponding left- and right-eye views that preserve those depth relationships.
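
The general technique here is commonly called depth-image-based rendering: estimate per-pixel depth, then shift pixels horizontally by a disparity derived from that depth to synthesize the two eye views. Here’s a minimal NumPy sketch of the idea; it’s not Xreal’s pipeline, and a synthetic depth ramp stands in for NPU inference output.

```python
# Minimal depth-image-based rendering (DIBR) sketch; not Xreal's pipeline.
import numpy as np

def synthesize_views(frame: np.ndarray, depth: np.ndarray, max_disparity: int = 8):
    """frame: (H, W, 3) uint8 image; depth: (H, W) floats in [0, 1], 1 = near."""
    h, w, _ = frame.shape
    disparity = (depth * max_disparity).astype(int)  # nearer pixels shift more
    left, right = np.zeros_like(frame), np.zeros_like(frame)
    cols = np.arange(w)
    for row in range(h):
        left[row, np.clip(cols + disparity[row], 0, w - 1)] = frame[row, cols]
        right[row, np.clip(cols - disparity[row], 0, w - 1)] = frame[row, cols]
    return left, right  # real systems also inpaint the disocclusion holes

# Toy usage: random frame plus a left-to-right depth ramp.
frame = np.random.randint(0, 255, (4, 16, 3), dtype=np.uint8)
depth = np.tile(np.linspace(0.0, 1.0, 16), (4, 1))
left_view, right_view = synthesize_views(frame, depth)
```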

The company says it’s still investigating Real 3D’s latency. Notably, compared to other display modes, the real-time 3D conversion results in “slightly higher power consumption,” which Xreal pegs at around 300mW.

Additionally, Xreal tells Road to VR that its Real 3D technology is entirely developed in-house.

“We trained a highly compact model that balances performance and power consumption specifically for integrating into the X1 chip. While real-time 3D conversion is relatively straightforward on high-end GPUs, we have not found any comparable solutions in the industry that can operate effectively on low-power platforms like X1.”

The Beijing-based AR glasses maker sells a fairly wide range of AR glasses, all of which target traditional content consumption, such as flatscreen games, TV, and film, running on the company’s own Android-based operating system.

Alongside announcing it had secured a $100 million financing round, Xreal also recently became Google’s lead AR partner following a multi-year extension of an agreement initially struck in late 2024.

As a result, Xreal aims to bring Google’s Android XR operating system to its AR glasses over the next few years, which is slated to kick off with Xreal’s Project Aura when it launches at some point this year. In the meantime, you can check out our recent hands-on with Project Aura here.

The post XREAL Rolls out Automatic Real-time 3D Conversion Feature for Its AR Glasses appeared first on Road to VR.



from Road to VR https://ift.tt/5LDATc3
via IFTTT