Chocolate, the latest virtual creation from the mind of Tyler Hurd—the VR animator and director behind BUTTS (2015) and Old Friend (2016)—is set to launch on August 17th on the HTC Vive and Oculus Rift.
Chocolate is a real-time VR music video set to a catchy, building electronic track of the same name by artist Giraffage. The three-minute experience is a quick but trippy and polished delight, placing you at the center of a psychedelic stage with choreographed animation synchronized to the beat.
I previewed Chocolate during its showing at Sundance earlier this year and found that it got me moving to the beat and held my attention over the course of the music video; it also compelled me to show several others so they could experience the crazy, colorful visuals and the toe-tapping beat.
Come August 17th, Chocolate will launch to the public on the HTC Vive and Oculus Rift, priced at $2 for the first week and $3 thereafter. The experience is also ‘Subpac Optimized’, meaning that it’s been specially tuned for those using the Subpac bass backpack.
HP announced a high-end backpack PC, built around top-tier NVIDIA and Intel chips, to power wireless VR for serious commercial use cases.
The 10.2-pound wearable PC includes a dock so it can serve as a high-end desktop workstation, then be undocked and strapped on as a wearable system. It carries a pair of on-board batteries that can be swapped out one at a time, so users can stay inside VR for long periods.
The system costs $3,295 and will be available starting in September.
Updates to come.
The moment millions of VR enthusiasts have been waiting for has finally arrived: Samsung’s Galaxy S8 and S8+ smartphones are now 100% Daydream-ready. This means the Galaxy S8 line of phones are the first ever to support both the Oculus-powered Gear VR ecosystem and Google’s budding Daydream platform.
We first found out back in May of this year that support would officially be coming, and saw hints only a couple of weeks ago that it was just around the corner. Today, Google made it official with an announcement GIF on Twitter linking to the Google blog.
The Daydream-ready update is rolling out now to @SamsungMobile Galaxy S8 and S8+. Explore new worlds with #Daydream. http://pic.twitter.com/PEeC6RfyyZ
— Google VR (@googlevr) July 31, 2017
If you don’t have the update yet, be patient. Roll outs like this usually happen in phases and can take a little while.
This update makes the Galaxy S8 one of the most attractive VR devices on the planet from a development perspective. With over 5 million potential users on the Gear VR, plus however many thousands (or millions? Fingers crossed?) who already own a Daydream View or will soon be purchasing one (it’s only $63 on Amazon right now and comes with a controller), the market is more ripe than ever before.
When we originally reviewed the S8 in conjunction with the new Gear VR + its controller, we called it the best mobile VR combination available. That’s still true and now with Daydream support, the S8 is even harder to beat.
As for other Daydream-ready devices, Google’s own Pixel is of course on the list, as well as the Moto Z, Mate 9 Pro, Axon 7, and soon the ASUS ZenFone AR. However, the Galaxy S8 and S8+ are the only phones on the market that support both Daydream and Gear VR.
What do you think of this news? Do you own both a View headset and a Gear VR that you still use? Let us know down in the comments below!
Nvidia spans both gaming graphics and artificial intelligence, and it is showing that with its announcements this week at the Siggraph computer graphics event in Los Angeles.
Those announcements range from providing external graphics processing for content creators to testing AI robotics technology inside a virtual environment known as the Holodeck, named after the virtual reality simulator in the Star Trek series. Nvidia’s researchers have also found a way for AI to generate realistic human facial animation in a fraction of the time it takes human artists to do the same work.
“We are bringing artificial intelligence to computer graphics,” said Greg Estes, vice president of developer marketing at Nvidia, in an interview with GamesBeat. “It’s bringing things full circle. If you look at our history in graphics, we took that into high-performance computing and took that into a dominant position in deep learning and AI. Now we are closing that loop and bringing AI into graphics.”
“Our strategy is to lead with research and break new ground,” he said. “Then we take that lead in research and take it into software development kits for developers.”
Above: Nvidia’s Optix 5.0 can “de-noise” images by removing graininess.
Nvidia has 10 research papers at this year’s Siggraph event, Estes said. Some of that work will be relevant to Nvidia’s developers, who now number about 550,000. About half of those developers are in games, while the rest are in high-performance computing, robotics, and AI.
Among the announcements, one is particularly cool. Estes said that Nvidia will show off its Isaac robots in a new environment. These robots, which are being used to vet AI algorithms, will be brought inside the virtual environment that Nvidia calls Project Holodeck. Project Holodeck is a virtual space for collaboration, where full simulations of things like cars and robots are possible. By putting the Isaac robots inside that world, they can learn how to behave without causing havoc in the real world.
Above: The Project Holodeck demo
“A robot will be able to learn things in VR,” Estes said. “We can train it in a simulated environment.”
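The core idea Estes describes, learning against a simulator instead of real hardware, can be caricatured in a few lines of Python. The environment and the crude policy search below are invented purely for illustration and have nothing to do with Nvidia's actual Isaac or Holodeck code:

```python
import random

class SimulatedRobotEnv:
    """Toy stand-in for a physics-simulated robot world (hypothetical)."""
    def reset(self):
        self.position = 0.0
        return self.position

    def step(self, action):
        # Reward the robot for closing the distance to a target at 10.0.
        self.position += action
        reward = -abs(10.0 - self.position)
        done = abs(10.0 - self.position) < 0.1
        return self.position, reward, done

def train(episodes=200, steps=50):
    """Crude policy search: try random constant actions, keep the best."""
    env = SimulatedRobotEnv()
    best_action, best_return = 0.0, float("-inf")
    for _ in range(episodes):
        env.reset()
        action = random.uniform(-1.0, 1.0)
        total = 0.0
        for _ in range(steps):
            _, reward, done = env.step(action)
            total += reward
            if done:
                break
        if total > best_return:
            best_action, best_return = action, total
    return best_action  # learned entirely in simulation, never on hardware

print(train())
```

The payoff is the last comment: every crash, collision, or bad policy happens inside the simulator, and only the trained behavior ever reaches a physical robot.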
Nvidia is providing external Titan X or Quadro graphics cards through an external graphics processing unit (eGPU) chassis. That will boost workflows for people who use their laptop computers for video editing, interactive rendering, VR content creation, AI development, and more, Estes said.
To ensure professionals can enjoy great performance with applications such as Autodesk Maya and Adobe Premiere Pro, Nvidia is releasing a new performance driver for Titan X hardware. The Quadro eGPU solutions will be available in September through partners such as Bizon, Sonnet, and One Stop Systems/Magma.
Nvidia also said it is launching its OptiX 5.0 SDK on the Nvidia DGX AI workstation. That combination will give designers, artists, and other content-creation professionals the rendering capability of 150 standard central processing unit (CPU) servers.
The tech could be used by millions of people, Estes said. And that kind of system would cost $75,000 over three years, compared to $4 million for a CPU-based system, the company said.
OptiX 5.0’s new ray tracing capabilities will speed up the process required to visualize designs or characters, thereby increasing a creative professional’s ability to interact with their content. It features new AI “de-noising” capability to accelerate the removal of graininess from images, and brings GPU-accelerated motion blur for realistic animation effects. It will be available for free in November.
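For context on what AI de-noising means in practice, here is a minimal PyTorch sketch of the general technique: a small convolutional network trained to map grainy, low-sample renders to converged reference frames. The random tensors are placeholders for real image pairs, and none of this is the actual OptiX API:

```python
import torch
import torch.nn as nn

# A tiny convolutional denoiser; real denoisers are much deeper.
denoiser = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# noisy: renders with few samples per pixel; clean: converged references.
noisy = torch.rand(8, 3, 64, 64)   # placeholder batch
clean = torch.rand(8, 3, 64, 64)   # placeholder batch

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(denoiser(noisy), clean)
    loss.backward()
    optimizer.step()

# At render time, a grainy low-sample frame is passed through the trained
# network instead of waiting for thousands more samples to converge.
denoised = denoiser(noisy[:1])
```

The speedup comes from substituting a single network pass for the long tail of Monte Carlo samples that would otherwise be needed to clear the grain.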
By running Nvidia OptiX 5.0 on a DGX Station, content creators can significantly accelerate training, inference, and rendering (meaning both AI and graphics tasks).
“AI is transforming industries everywhere,” said Steve May, vice president and chief technology officer of Pixar, in a statement. “We’re excited to see how Nvidia’s new AI technologies will improve the filmmaking process.”
On the research side, Nvidia is showing how it can animate realistic human faces and simulate how light interacts with surfaces. It will tap AI technology to improve the realism of facial animations. Right now, it takes human artists hundreds of hours to create digital faces that closely match those of human actors.
Nvidia Research partnered with Remedy Entertainment, maker of games such as Quantum Break, Max Payne and Alan Wake, to help game makers produce more realistic faces with less effort and at lower cost.
Above: Nvidia is using AI to create human facial animations.
The parties combined Remedy’s animation data and Nvidia’s deep learning technology to train a neural network to produce facial animations directly from actor videos. The research was done by Samuli Laine, Tero Karras, Timo Aila, and Jaakko Lehtinen. Nvidia’s solution requires only five minutes of training data to generate all the facial animation needed for an entire game from a simple video stream.
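As a rough sketch of what "facial animation directly from actor videos" can mean, here is a toy PyTorch model that regresses per-frame facial rig parameters (for example, blendshape weights) from video frames. The architecture, the 50-blendshape rig, and every name here are invented for illustration; the actual Remedy/Nvidia network is far more sophisticated:

```python
import torch
import torch.nn as nn

N_BLENDSHAPES = 50  # assumed size of the facial rig

class FaceAnimationNet(nn.Module):
    """Maps a video frame of an actor's face to rig weights in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, N_BLENDSHAPES)

    def forward(self, frames):              # frames: (batch, 3, H, W)
        x = self.features(frames).flatten(1)
        return torch.sigmoid(self.head(x))  # per-frame blendshape weights

model = FaceAnimationNet()
video_frames = torch.rand(4, 3, 128, 128)  # placeholder actor footage
rig_weights = model(video_frames)          # (4, 50) animation curves
```

Trained on Remedy-style pairs of footage and artist-authored animation, a network like this turns a plain video stream into animation curves frame by frame, which is what lets five minutes of training data go so far.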
Antti Herva, lead character technical artist at Remedy, said that over time, the new methods will let the studio build larger, richer game worlds with more characters than are now possible.
Already, the studio is creating high-quality facial animation in much less time than in the past.
“Based on the Nvidia research work we’ve seen in AI-driven facial animation, we’re convinced AI will revolutionize content creation,” said Herva, in a statement. “Complex facial animation for digital doubles like that in Quantum Break can take several man-years to create. After working with Nvidia to build video- and audio-driven deep neural networks for facial animation, we can reduce that time by 80 percent in large scale projects and free our artists to focus on other tasks.”
In another research project, Nvidia trained a system to generate realistic facial animation using only audio. With this tool, game studios will be able to add more supporting game characters, create live animated avatars, and more easily produce games in multiple languages.
Above: AI can smooth out the “jaggies,” or rough edges in 3D graphics.
AI also holds promise for rendering 3D graphics, the process that turns digital worlds into the lifelike images you see on screen. Filmmakers and designers use a technique called “ray tracing” to simulate light reflecting from surfaces in a virtual scene. Nvidia is using AI to improve both ray tracing and rasterization, a less costly rendering technique used in computer games.
In a related project, Nvidia researchers used AI to tackle a computer game rendering problem known as anti-aliasing. Like de-noising, anti-aliasing removes artifacts from partially computed images; here the artifacts are the stair-stepped edges known as “jaggies.” Nvidia researchers Marco Salvi and Anjul Patney trained a neural network to recognize jaggy artifacts and replace the affected pixels with smooth, anti-aliased ones. The AI-based solution produces images that are sharper (less blurry) than existing algorithms.
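To make the supervised setup concrete, here is a hedged NumPy sketch of how aliased/anti-aliased training pairs can be produced: render once at target resolution for the jaggy input, and once at double resolution, box-filtered down, for the smooth reference. The render function is a dummy stand-in, and this mirrors the general approach rather than Nvidia's exact pipeline:

```python
import numpy as np

def downsample_2x(img):
    """Average each 2x2 block: a cheap box filter for supersampling."""
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def make_training_pair(render, scene, width=256, height=256):
    aliased = render(scene, width, height)                    # jaggy input
    reference = downsample_2x(render(scene, width * 2, height * 2))
    return aliased, reference  # network learns aliased -> reference

def checkerboard_render(scene, width, height):
    """Dummy renderer: a hard diagonal edge; stands in for a real engine."""
    y, x = np.mgrid[0:height, 0:width]
    img = (x + y * 1.3 > width).astype(np.float32)
    return np.repeat(img[:, :, None], 3, axis=2)

aliased, reference = make_training_pair(checkerboard_render, scene=None)
```

Once trained on such pairs, the network runs on ordinary single-sample frames at game time, which is why it can beat blur-prone screen-space anti-aliasing filters on sharpness.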
Nvidia is also developing more efficient methods to trace virtual light rays. Computers sample the paths of many light rays to generate a photorealistic image. The problem is that not all of those light paths contribute to the final image.
Researchers Ken Dahm and Alex Keller trained a neural network to guide the choice of light paths. They accomplished this by connecting the math of tracing light rays to the AI concept of reinforcement learning. Their solution taught the network to distinguish the paths most likely to connect lights with virtual cameras from the paths that contribute nothing to the image.
Above: Nvidia uses AI to figure out light sources in 3D graphics.
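The reinforcement-learning connection can be caricatured with a tabular sketch: keep a learned value per (spatial cell, direction), sample directions in proportion to those values, and nudge each value toward the radiance the sampled ray actually returned. The real research operates on the rendering equation itself; everything below is a deliberately simplified illustration:

```python
import random

N_CELLS, N_DIRS = 64, 16
Q = [[1.0] * N_DIRS for _ in range(N_CELLS)]  # optimistic initial values
ALPHA = 0.2                                    # learning rate

def choose_direction(cell):
    """Sample a direction index proportionally to its learned value,
    so directions that historically led to light get sampled more."""
    return random.choices(range(N_DIRS), weights=Q[cell])[0]

def update(cell, direction, observed_radiance):
    """Move the stored value toward the radiance this sample returned."""
    Q[cell][direction] += ALPHA * (observed_radiance - Q[cell][direction])

# One bounce worth of learning at cell 12:
d = choose_direction(12)
update(12, d, observed_radiance=0.7)
```

Over many samples the table concentrates effort on paths that actually reach lights, which is the point: fewer wasted rays per converged pixel.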
Lastly, Nvidia said it is taking immersive VR to more people by releasing the VRWorks 360 Video SDK, which lets production houses livestream high-quality, 360-degree stereo video to their audiences.
Normally, it takes a lot of computation time to stitch together images for 360-degree videos. By doing live 360-degree stereo stitching, Nvidia is making life a lot easier for the live-production and live-event industries, said Zvi Greenstein, vice president at Nvidia.
The VRWorks SDK enables production studios, camera makers, and app developers to integrate 360-degree stereo stitching into their existing workflows for live and post production. The Z Cam V1 Pro (made by VR camera firm Z Cam) is the first professional 360-degree VR camera that will fully integrate the VRWorks SDK.
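For intuition about the core of 360-degree stitching, here is a heavily simplified NumPy sketch that pastes each camera's frame into a shared equirectangular panorama according to its yaw and averages the overlaps. A real stitcher, VRWorks included, also handles lens calibration, per-pixel warping, seam finding, and stereo disparity; none of that is modeled here:

```python
import numpy as np

def stitch_equirect(frames, yaws_deg, pano_width=1024, pano_height=512):
    """Naively place each camera frame by yaw and average overlaps."""
    pano = np.zeros((pano_height, pano_width, 3), dtype=np.float32)
    weight = np.zeros((pano_height, pano_width, 1), dtype=np.float32)
    for frame, yaw in zip(frames, yaws_deg):
        h, w, _ = frame.shape
        # Horizontal placement from the camera's yaw; simple paste, no warp.
        x0 = int((yaw % 360) / 360 * pano_width)
        y0 = (pano_height - h) // 2
        for dx in range(w):
            x = (x0 + dx) % pano_width      # wrap around the 360 seam
            pano[y0:y0 + h, x] += frame[:, dx]
            weight[y0:y0 + h, x] += 1.0
    return pano / np.maximum(weight, 1.0)   # blend overlapping columns

# Three synthetic cameras spaced 120 degrees apart:
frames = [np.full((128, 171, 3), fill, dtype=np.float32)
          for fill in (0.2, 0.5, 0.8)]
pano = stitch_equirect(frames, yaws_deg=[0, 120, 240])
```

Doing this, plus the warping and blending omitted above, for two eyes at video rate is the computation Nvidia is moving onto the GPU to make live stereo stitching feasible.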
“We have clients across a wide range of industries, from travel through sports, who want high quality, 360 degree video,” said Chris Grainger, CEO of Grainger VR, in a statement. “This allows filmmakers to push the boundaries of live storytelling.”
Elite: Dangerous is one of the most complex, detailed, and engrossing space flight simulators out there right now. My first attempt at flying a spacecraft in that game ended with me crashing into the docking bay before I could even get out into the sea of stars. But with practice I got the hang of it, and eventually I was exploring the galaxy like a true commander.
The game is currently available on PC, Mac, Xbox One, and PS4. However, the only way to play it in VR right now is on PC with a Rift or Vive. The developers at Frontier Developments are doing their best to bring PlayStation VR (PSVR) support to the PS4 version of the game, but it’s a work-in-progress that’s “not there yet in terms of quality.”
That said, PSVR support is actively being worked on. But like any developer’s, Frontier’s resources can only go so far. They have to prioritize the features that serve the community best, and one way to figure out where to spend time on future updates is simply to see what the community wants most.
Reddit user WilfridSephiroth created a survey posted on the official Elite: Dangerous subreddit designed to see exactly which platforms most people play on and to determine which future updates players are most excited about. Notably, PSVR is listed as one of the options.
But to be clear, the user that created and posted this survey is not a member of the development team, and is simply doing this to help raise awareness.
You can find the survey to fill out right here, which only takes about two minutes. Make sure to go ahead and select PSVR in the section at the end if you want to see the game on PSVR next. There is no guarantee that this will actually guide Frontier’s development roadmap, but letting the devs know where the community stands is never a bad thing.
Augmented reality lets you go on a date with virtual idol Hatsune Miku.
Over the course of the past couple of decades, Japan has firmly established itself as the number one “WTF” country on planet Earth. The endlessly entertaining nation is responsible for some of the most interesting trends ever created, from highly stylized anime to used-underwear vending machines. No, for real. Look it up… So, to be perfectly honest, when it was revealed that a new cafe in Japan would be setting patrons up on virtual dates using augmented reality technology, it didn’t come as much of a shock to anyone.
Developed as part of a promotion with telecommunications provider au/KDDI and Crypton Future Media, patrons of the Blue Leaf Cafe in Sendai, Japan were given the opportunity to share time, as well as some food and drink, with vocaloid megastar Hatsune Miku. Using a collection of smartphones preloaded with an updated iteration of the AR program Miku Stroll entitled Miku Stroll Cafe Edition, select patrons were able to enjoy a lovely virtual afternoon with the digital popstar, sharing food, posing for photos together and even just enjoying some light conversation.
Customers were also treated to a variety of Hatsune Miku-themed food and drinks, from the 700 yen ($6.34 USD) Miku Summer Tropical Soda and 650 yen ($5.88) Miku Latte to the 800 yen ($7.20) roll cake and 900 yen ($8.15) Miku Sandwich. Visitors also had the chance to pick up limited edition character art drink coasters, because why the hell not?
While the actual Miku AR experience only ran July 8, 9, 15, 16, and 17, the Miku-focused treats are available July 8th through the 31st, so there is still time to grab a Miku Latte before they’re all gone! All the studs may have wined and dined Hatsune already, but at least you can make yourself feel a little better by crying away your pain into a delicious roll cake.
Japan has a fascinating relationship with its many fictional characters and entertainers, with fans often developing a deep appreciation for these animated personalities. This has even led to some of the more dedicated fans going so far as to actually marry their virtual “waifus.” As virtual and augmented reality continue to progress, it’s highly likely these immersive platforms will play an integral role in the future of the virtual girlfriend/boyfriend trend. So long as companies continue to plaster Hatsune’s face onto every conceivable surface available, you’ll hear no complaints from me.