Facebook chief AI scientist Yann LeCun believes augmented reality glasses are an ideal challenge for machine learning (ML) practitioners — a “killer app” — because they involve a confluence of unsolved problems.
Perfect AR glasses will require combining conversational AI, computer vision, and other complex systems in a form factor as small as a pair of spectacles. Low-power AI will be necessary to ensure battery life long enough for users to wear and use the glasses for extended periods.
Alongside companies like Apple, Niantic, and Qualcomm, Facebook this fall confirmed plans to make augmented reality glasses by 2025.
“This is a huge challenge for hardware because you might have glasses with cameras that track your vision in real time at variable latency, so when you move … that requires quite a bit of computation. You want to be able to interact with an assistant through voice by talking to it so it listens to you all the time, and it will talk to you as well. You want to have gesture [recognition] so the assistant [can perform] real-time hand tracking,” he said.
Real-time hand tracking works already, LeCun said, but “we just don’t know how to do it in a tiny form factor with power consumption that will be compatible with AR glasses.”
“In terms of the bigger variant[s] of power and power consumption, and performance and form factor, it’s really beyond what we can do today, so you have to use tricks that people never thought were appropriate. One trick for example is neural nets,” he added.
Becoming more efficient
LeCun spoke last Friday at the EMC2 energy-efficient machine learning workshop at NeurIPS, the largest machine learning research conference in the world. He talked about how hardware limitations can restrict what researchers allow themselves to imagine is possible and said that good ideas are sometimes abandoned when hardware is too slow, software isn’t readily available, or experiments are not easily reproducible.
He also talked about specific deep learning approaches — like differentiable associative memory and convolutional neural networks — that pose a challenge and may require new hardware. Differentiable associative memory, or soft RAM, is a kind of computation that’s currently widely used in natural language processing (NLP) and is beginning to appear more often in computer vision applications.
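For readers unfamiliar with the term, “soft RAM” means reading memory as a softmax-weighted average over all stored entries rather than fetching a single hard address — the same basic operation behind the attention mechanisms now ubiquitous in NLP. Below is a minimal, self-contained sketch of that idea in Python; the function and variable names are mine, and it illustrates the general technique rather than any specific FAIR system.

```python
import numpy as np

def soft_memory_read(query, keys, values):
    """Differentiable associative memory ("soft RAM"):
    instead of fetching one address, return a softmax-weighted
    average of all stored values based on query-key similarity."""
    scores = keys @ query                      # similarity of query to each key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over memory slots
    return weights @ values                    # blended, fully differentiable read-out

# Toy usage: 4 memory slots with 3-dim keys and 2-dim values.
rng = np.random.default_rng(0)
keys, values = rng.normal(size=(4, 3)), rng.normal(size=(4, 2))
query = rng.normal(size=3)
print(soft_memory_read(query, keys, values))
```

Because every slot contributes to every read, this is friendly to gradient-based training but expensive in memory traffic — part of why LeCun argues it may call for new hardware.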
“Deep learning and machine learning architectures are going to change a lot in the next few years. You can see a lot of this already, where now with NLP, the only game in town basically is Transformer networks,” he said.
He added that more efficient batch processing, along with self-supervised learning techniques that help AI learn more the way humans and animals do, may also lead to more energy-efficient AI.
Following LeCun’s talk, Vivienne Sze, MIT associate professor of electrical engineering and computer science, talked about the need for a systematic way to evaluate deep neural networks. Earlier in the week, Sze’s presentation on efficient deep neural networks garnered some of the most views of any NeurIPS video shared online, according to the SlidesLive website.
“Memories that are larger and farther tend to consume more power,” Sze said. “All weights are not created equal.” Sze also demonstrated Accelergy, a framework for estimating hardware energy consumption developed at MIT.
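Her point that weight accesses cost very different amounts of energy depending on where the data lives can be made concrete with a rough cost model. The relative figures below are ballpark values of the kind cited in efficient-DNN tutorials, not numbers taken from Accelergy or from Sze’s talk.

```python
# Illustrative only: ballpark relative energy costs per access,
# normalized to one multiply-accumulate (MAC). Real values depend on
# the process node and chip design; these are not Accelergy outputs.
RELATIVE_ENERGY = {
    "mac": 1,           # the arithmetic itself is cheap
    "local_buffer": 2,  # small scratchpad next to the compute unit
    "on_chip_sram": 6,  # larger shared SRAM, farther away
    "dram": 200,        # off-chip DRAM dominates the energy budget
}

def layer_energy(macs, accesses):
    """Estimate a layer's energy in 'MAC units' from its op count and
    the number of accesses it makes to each memory level."""
    total = macs * RELATIVE_ENERGY["mac"]
    for level, count in accesses.items():
        total += count * RELATIVE_ENERGY[level]
    return total

# Same compute, very different energy depending on where weights come from.
print(layer_energy(1_000_000, {"on_chip_sram": 1_000_000}))
print(layer_energy(1_000_000, {"dram": 1_000_000}))
```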
In addition to the talks, the workshop’s poster session showcased noteworthy low-power AI solutions. They include DistilBERT, a lighter version of Google’s BERT that Hugging Face made especially for fast deployment on edge devices, and a comparison of quantization for deep neural networks by SRI International and Latent AI.
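As a concrete illustration of what a lighter, quantized model looks like in practice, here is a minimal sketch that loads DistilBERT through the Hugging Face transformers library and applies PyTorch’s built-in dynamic quantization. The model name and quantization step are a generic example, not the exact setup from the posters mentioned above, and the snippet assumes the transformers and torch packages are installed.

```python
# Minimal sketch: load DistilBERT and shrink it with dynamic quantization
# for cheaper CPU/edge inference. Illustrative only.
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
model.eval()

# Dynamic quantization converts linear-layer weights to int8 at inference
# time, cutting model size and often energy per query.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("AR glasses need low-power AI.", return_tensors="pt")
with torch.no_grad():
    outputs = quantized(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```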
A number of prominent voices are calling for the machine learning community to confront climate change, arguing that such a focus can drive innovation. In a panel conversation at NeurIPS last week, fellow deep learning pioneer Yoshua Bengio called on ML researchers to place more value on work that addresses climate change and less on the number of publications they produce.
And in an interview with VentureBeat, Google AI chief Jeff Dean said he supports the idea of creating a compute-per-watt standard as a way to encourage more efficient hardware.
Saving power and the planet
Alongside theoretical work at NeurIPS to explain the workings of deep learning algorithms, a number of works at the conference highlighted the importance of accounting for AI’s contribution to climate change. These include a paper titled “Energy Usage Reports: Environmental awareness as part of algorithmic accountability.”
“The carbon footprint of algorithms must be measured and transparently reported so computer scientists can take an honest and active role in environmental sustainability,” the paper reads.
In line with this assertion, conference organizers suggested earlier in the week that researchers submitting work to NeurIPS in 2020 may be required to report its carbon footprint.
The recently released 2019 AI Now Institute report included measuring the carbon footprint of algorithms among a dozen recommendations that it says can lead to a more just society.
In other energy-efficient AI news, machine learning practitioners from Element AI and Mila Quebec AI Institute last week introduced a tool that estimates the carbon emissions of training AI models on GPUs, based on factors like length of use and cloud region.
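The arithmetic behind such calculators is straightforward: estimate energy from hardware power draw and runtime, then multiply by the carbon intensity of the grid serving the chosen cloud region. The sketch below is a rough illustration with placeholder numbers, not the actual formula or data used by the Element AI / Mila tool.

```python
# Back-of-the-envelope carbon estimate for a GPU training run.
# The wattage, overhead, and grid-intensity values are illustrative
# placeholders, not figures from the tool described above.

def training_emissions_kg(gpu_count, hours, gpu_watts=250, pue=1.1,
                          grid_kg_co2_per_kwh=0.4):
    """Energy (kWh) = GPUs x watts x hours x datacenter overhead (PUE);
    emissions = energy x carbon intensity of the region's grid."""
    energy_kwh = gpu_count * gpu_watts * hours * pue / 1000.0
    return energy_kwh * grid_kg_co2_per_kwh

# e.g. 8 GPUs running for 72 hours in a region with a moderately clean grid
print(f"{training_emissions_kg(8, 72):.1f} kg CO2e")
```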
This drive toward more efficient machine learning could lead to innovations that change the planet. But big ideas and challenges need a focal point — something to make the theoretical feel more practical, with actual, specific problems that need to be solved. According to LeCun, AR glasses may be that ideal use case for machine learning practitioners.
This post by Khari Johnson originally appeared in VentureBeat.