
Meta begins testing a GPT-4V rival multimodal AI in smart glasses

by WeeklyAINews



More news from Meta Platforms today, parent company of Facebook, Instagram, WhatsApp and Oculus VR (among others): hot on the heels of its release of a new voice cloning AI called Audiobox, the company today announced that this week it is beginning a small U.S. trial of a new multimodal AI designed to run on its Ray-Ban Meta smart glasses, made in partnership with the signature eyewear company, Ray-Ban.

The new Meta multimodal AI is set to launch publicly in 2024, according to a video post on Instagram by longtime Facebook, now Meta, chief technology officer Andrew Bosworth (aka “Boz”).

“Next year, we’re going to launch a multimodal version of the AI assistant that takes advantage of the camera on the glasses in order to give you information not just about a question you’ve asked it, but also about the world around you,” Boz said. “And I’m so excited to share that starting this week, we’re going to be testing that multimodal AI in beta via an early access program here in the U.S.”

Boz didn’t include instructions for how to participate in the program in his post.

The glasses, the latest version of which was unveiled at Meta’s annual Connect conference in Palo Alto back in September, start at $299 and already ship in current models with a built-in AI assistant onboard, but that assistant is fairly limited and can’t intelligently respond to video or photos, much less a live view of what the wearer is seeing (despite the glasses having built-in cameras).


Instead, that assistant was designed simply to be controlled by voice, with the wearer speaking to it as if it were a voice assistant similar to Amazon’s Alexa or Apple’s Siri.

Boz showcased one of the new capabilities of the multimodal version in his Instagram post, including a video clip of himself wearing the glasses and looking at a lighted piece of wall art depicting the state of California in an office. Interestingly, he appeared to be holding a smartphone as well, suggesting the AI may need a smartphone paired with the glasses to work.

A screen displaying the apparent user interface (UI) of the new Meta multimodal AI showed that it successfully answered Boz’s prompt “Look and tell me what you see” and identified the art as a “wooden sculpture” which it called “beautiful.”

Video showing Meta’s multimodal AI in beta. Credit: @boztank on Instagram.

The move is perhaps to be expected given Meta’s wholesale embrace of AI across its products and platforms, and its promotion of open source AI through its signature LLM, Llama 2. But it’s interesting to see its first attempt at a multimodal AI arriving not in the form of an open source model on the web, but through a device.

Generative AI’s move into the hardware category has been slow so far, with a few smaller startups, including Humane with its “Ai Pin” running OpenAI’s GPT-4V, making the first attempts at dedicated AI devices.

Meanwhile, OpenAI has pursued the route of offering GPT-4V, its own multimodal AI (the “V” stands for “vision”), through its ChatGPT app for iOS and Android, though access to the model also requires a ChatGPT Plus ($20 per month) or Enterprise subscription (variable pricing).


The move also calls to mind Google’s ill-fated trials of Google Glass, an early smart glasses prototype from the 2010s that was derided for its fashion sense (or lack thereof) and visible early-adopter userbase (spawning the term “Glassholes”), as well as for its limited practical use cases, despite heavy hype prior to its launch.

Will Meta’s new multimodal AI for Ray-Ban Meta smart glasses be able to avoid the Glasshole trap? Has enough time passed, and have sensibilities toward strapping a camera to one’s face changed enough, to allow a product of this nature to succeed?



