October 21, 2025
Meta announces Meta Wearables Device Access Toolkit

After so many rumors about an upcoming “SDK for smartglasses”, Meta finally announced it at today’s Meta Connect. But let’s say that it is not exactly what we imagined. Let me tell you what has been announced, what is good about it, and what could be better.

The announcement of an SDK for smartglasses

During the developer keynote at Meta Connect, Meta CTO Andrew Bosworth took the stage to announce that Meta is finally releasing some tools for developers related to smartglasses. The whole crowd (me included) cheered at the news. You can see that in the video below:

I was waiting for this moment, and I recorded it

Boz talked about some tools that let developers use the contextual audio and video capabilities of the glasses in their own apps, then showed some examples of applications that partners are building. For instance, Disney is building prototypes where smartglasses help people navigate its parks, Twitch can livestream through the glasses, Logitech Streamlabs can build personalized streams and multistreams with overlays and alerts, and Humanware is using smartglasses to help people with vision impairments navigate the world. Then he basically said that the sky is the limit for apps that combine the power of contextual AI and smart glasses.

What I naively understood from this announcement was that we would get an SDK to develop applications that run on smartglasses. But when I went to speak with a Meta engineer and asked questions about the SDK, I realized the reality is quite different.

Boz, on stage, carefully never spoke about an “SDK for glasses”: he only talked about apps that use the audio and video capabilities of the glasses. And the SDK itself is called “Meta Wearables Device Access Toolkit” and not “Meta Wearables SDK”. The reason for this careful choice of words is that there is no actual SDK for the glasses, just an SDK for mobile applications that lets your mobile app “access” the capabilities of the glasses.

The “Meta Wearables Device Access Toolkit” lets an existing mobile phone app interact with the glasses in three possible ways:

  • Stream video from the glasses to the mobile app
  • Stream audio from the glasses to the mobile app
  • Play back audio from the mobile app on the glasses

Users will have to explicitly grant the mobile app permission to perform these operations.
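To make this more concrete, here is a minimal Kotlin sketch of what a mobile app using these three capabilities could look like. The toolkit is not publicly downloadable yet, so every type and method name below (GlassesSession, startVideoStream, and so on) is a placeholder I invented for illustration, not the actual API.

```kotlin
// Hypothetical sketch: the real Meta Wearables Device Access Toolkit API is not
// public yet, so all names below are invented placeholders.

// A made-up facade representing a connection to the paired glasses.
interface GlassesSession {
    fun requestPermissions(onGranted: () -> Unit, onDenied: () -> Unit)
    fun startVideoStream(onFrame: (ByteArray) -> Unit)       // camera frames -> phone
    fun startAudioStream(onAudioChunk: (ByteArray) -> Unit)  // mic audio -> phone
    fun playAudio(pcmData: ByteArray)                         // phone audio -> glasses speakers
    fun stop()
}

// The three allowed interactions, gated behind an explicit user permission grant.
fun runGlassesDemo(session: GlassesSession) {
    session.requestPermissions(
        onGranted = {
            // 1. Stream video from the glasses to the mobile app
            session.startVideoStream { frame -> println("Video frame: ${frame.size} bytes") }
            // 2. Stream audio from the glasses to the mobile app
            session.startAudioStream { chunk -> println("Audio chunk: ${chunk.size} bytes") }
            // 3. Play back audio from the mobile app on the glasses
            session.playAudio(ByteArray(0) /* your synthesized audio goes here */)
        },
        onDenied = { println("The user refused to give the app access to the glasses") }
    )
}
```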

The streaming of audio and video from the glasses to the app is what you can use to analyze the environment around the user with your own AI models. You can have AI models running in your mobile app that analyze the audio+video stream coming from the glasses and return feedback to the user by playing audio into his/her ears. At the moment, Meta AI cannot be used directly, so you must run your own AI models.
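As an example of this pattern, the sketch below labels each incoming camera frame with Google’s ML Kit (a real on-device library, standing in here for “your own AI model”) and turns the most confident label into a sentence to be spoken to the user. How the frames actually arrive from the glasses depends on the toolkit, so the JPEG callback is an assumption on my side.

```kotlin
import android.graphics.BitmapFactory
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// On-device image labeler from Google's ML Kit, used here as a stand-in for
// "your own AI model"; any local or cloud model would work the same way.
private val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

// Assumed to be called for every JPEG frame received from the glasses
// (the actual frame format and delivery mechanism depend on the toolkit).
fun onGlassesFrame(jpegBytes: ByteArray, speak: (String) -> Unit) {
    val bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size) ?: return
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Turn the most confident label into audio feedback for the user.
            val best = labels.maxByOrNull { it.confidence } ?: return@addOnSuccessListener
            speak("I can see a ${best.text}")
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```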

Regarding compatibility, I have been told that the Toolkit works with the whole line of Meta smartglasses, including Ray-Ban Meta, Oakley Meta, and Ray-Ban Meta Display. This is good because it means that applications built with it can be compatible with the millions of smartglasses that Meta has already sold to consumers.

Me wearing the Ray-Ban Meta Display glasses. The SDK interaction with them will be pretty limited

Let’s say that this is not exactly what I was hoping for. First of all, I hoped for applications that could run directly on the smart glasses, not for mobile apps that can use the glasses. But this actually makes sense: Ray-Ban Meta glasses have been built with very limited onboard capabilities to respect the size and weight constraints that make them fashionable and comfortable. They probably can’t run much logic on their own, and in fact, Meta itself has you operate them via a companion mobile app. So running apps directly on them seems out of the question, at least for now.

A choice that sounds a bit less logical is that there is no way to interact with the specific capabilities of the Ray-Ban Meta Display:

  • The toolkit doesn’t allow you to put text or images on the display of the smartglasses
  • The toolkit doesn’t allow you to access gestures of the Meta Neural Wristband

It is not that these capabilities will never arrive: it is just that they are not available in the current version of the toolkit. Meta suggests finding ways to work around the current limitations: for instance, if you want to give feedback to the user, instead of writing something on the display, you should play audio with that feedback, as in the sketch below. Remember that audio playback is the only thing that can go from the mobile app to the glasses, so it is the only channel through which you can provide information to the user.
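For example, a navigation hint that you would normally draw on the display has to become speech. Here is a minimal sketch using Android’s standard TextToSpeech API; the assumption that this audio actually reaches the glasses (because they are the connected audio output, or through the toolkit’s playback channel) is mine, not something Meta has documented.

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech

// Speaks feedback instead of displaying it, since audio playback is the only
// channel from the mobile app to the glasses.
class SpokenFeedback(context: Context) {
    private var ready = false
    private val tts = TextToSpeech(context) { status ->
        ready = (status == TextToSpeech.SUCCESS)
    }

    // E.g. notify("Turn left at the next corner") instead of drawing an arrow.
    fun notify(message: String) {
        if (!ready) return
        // Assumption: this audio reaches the user's ears through the glasses,
        // either because they are the active audio device or via the toolkit.
        tts.speak(message, TextToSpeech.QUEUE_ADD, null, "glasses-feedback")
    }

    fun shutdown() = tts.shutdown()
}
```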

Regarding all the missing features, Meta is waiting for developer feedback to decide what to integrate in future versions of the SDK: it may add operations running on the glasses themselves, access to the display, and so on.

The programming language

I’ve been told that this Wearables Toolkit is available for native apps: for Android developers, there is a Kotlin SDK, while for iOS developers, there is a Swift package. This means that if you are a Unity developer, you must write some wrapper to be able to call its functionality from .NET, as sketched below.
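The usual pattern would be to wrap the native SDK in a small Kotlin class and call it from C# through AndroidJavaObject. Below is a sketch of what the Kotlin side of such a wrapper could look like; the package name is made up, and the toolkit calls inside it are still the hypothetical ones from the earlier sketch.

```kotlin
package com.example.glassesbridge  // hypothetical package name

// A thin bridge that a Unity C# script could instantiate with
// new AndroidJavaObject("com.example.glassesbridge.GlassesBridge")
// and then drive with bridge.Call("startStreaming") and
// bridge.Call<string>("getLastLabel").
class GlassesBridge {

    @Volatile
    private var lastLabel: String = ""

    // Starts the (hypothetical) glasses video stream and keeps the most
    // recent AI result so Unity can poll it from the C# side.
    fun startStreaming() {
        // session.startVideoStream { frame -> lastLabel = runYourModel(frame) }
        // ^ placeholder: the real toolkit call is not public yet
    }

    fun getLastLabel(): String = lastLabel

    fun stopStreaming() {
        // session.stop()  // placeholder as well
    }
}
```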

How to access the SDK

The SDK is not readily available to everyone: you have to apply by filling out this form, and I’ve been told that in a “few weeks,” people will start receiving access to it.

Funds availability

I asked if there is some developer fund that people who want to develop applications for the glasses can apply for, and I was told that, currently, there is not.

Enterprise opportunities

There are currently no licensing options for Meta smartglasses for enterprise use. This means that if you develop an application, it should be designed for consumers.

While with VR headsets I have many times seen small companies using consumer-licensed devices, with smartglasses this could be a bit trickier. The main reason is that companies will not want their employees to wear camera-equipped devices on their heads without a clear statement that the captured images are discarded: otherwise, the risk of a breach of trade secrets is too high.

Personal considerations

How Disney plans to use smartglasses for navigation inside parks

My impression of this toolkit announcement is that Meta’s managers decided that the company had to launch a sort of SDK for glasses at Meta Connect, and that this could not be delayed, because Google has already announced that it will release the developer preview of its glasses SDK before the end of the year. But there was probably not enough time to develop a proper SDK for the glasses, and the devices already on the market were probably not originally designed to have an SDK, so Meta rushed to release “something for developers”.

In my opinion, a few hints here and there make it clear that there has been a rush. One is that the keynote announced a toolkit for glasses that is actually not ready yet: you must register to be notified at some unspecified moment when you can download it. Another is that the first version of this toolkit does not allow you to use the display of the Ray-Ban Meta Display, which should be a no-brainer feature.

So I wouldn’t be surprised if the first version of this toolkit weren’t perfect. But I’m pretty sure it will improve with time.

Notwithstanding all these limitations, I still think that a toolkit to use Meta glasses is an interesting opportunity, considering the growing success of these devices and the endless potential they have for integrating AI into people’s lives. It is an important opportunity for us developers to do something new and find our own success.

It will be important to evaluate the differences between this toolkit and the one that Google will release. In the same joint Google-Qualcomm session where Google announced its upcoming SDK, Qualcomm showed a Small Language Model running entirely on glasses. So I wonder if the Google glasses SDK will also allow running small applications directly on the glasses. This would be incredibly powerful, and it would also mean not necessarily burning your mobile phone battery when running AI glasses applications. But at the same time, I know that current glasses can’t run complex AI models, so even if running standalone glasses apps were possible, in most cases you would still have to build a hybrid glasses-phone app.

There are also market considerations: Meta is currently the market leader. There are already more than 2 million Ray-Ban Meta glasses out there, and I wouldn’t be surprised if that number doubled next year. In the short term, Google can’t compete with these numbers: the partnership with Luxottica guarantees Meta a wide distribution network, which is a huge advantage for selling the glasses. So, even if Google’s SDK and smartglasses turned out to be better, it could still make more sense for us developers to release something for Meta, simply because there would be many more potential customers. Of course, I’m talking about general-purpose applications… if you are building for a specific use case, you can simply pick the best glasses and dev tools for it.


And that’s it for the newly announced developer tools. For any questions and considerations, as usual, feel free to use the comment section below. And if you shared this article with your XR peers, you would make me very happy.


Disclaimer: this blog contains advertisements and affiliate links to sustain itself. If you click on an affiliate link, I’ll be very happy because I’ll earn a small commission on your purchase. You can find my boring full disclosure here.
