Today at Meta Connect, Meta announced the Ray-Ban Meta Display glasses, its first smartglasses with a display. I managed to go hands-on with them on the very same day, so I can give you my first impressions right away!
Before delving into my actual review, I must issue a disclaimer about the demo I experienced. The demo of the Ray-Ban Meta Display that Meta provided to us attendees was not ideal, so take this first-impressions article with a grain of salt.
The first problem was that I had to wait 2 hours in line to try the glasses. Two hours is a lot of time, and even if I spent it chatting with friends, by the time I got to the demo, I was very tired and irritated. Not the ideal state of mind to review a new device.
Then, notwithstanding the long wait, the demo itself was super-rushed and lasted something like 7 minutes. A Meta employee guided us: she made us enter a room, said “ok, this is the feature, you can activate it this way. Now do that. Done? Ok, let’s go to the next room!”, rinse and repeat for around 5 rooms. There was no time to explore the functionalities of the device, no time to adapt to the monocular display, no time to learn how to properly use the wristband. I didn’t even have time to shoot proper pictures of the device!
The length and structure of the demo were absolutely insufficient to understand the potential of these glasses. I will try to write a review that goes beyond these demo limitations, but it is objectively hard to give a reliable first impression based on the demo I was able to try.
Wearing the devices and their comfort
While I was still in line, a Meta employee quickly measured my wrist and determined that my correct size was “2”, since the wristband comes in different sizes. This is good to guarantee optimal comfort for users with different wrist sizes.
The wristband closes around your wrist a bit like a belt or certain types of watch straps. The employee closed it very tightly, so it was not very comfortable to wear: it hurt a bit. If I had been able to fit it properly myself, I probably would have had a better experience. During my tests, I had no particular concern with the weight of the neural wristband, so on that side, I can say it was pretty comfortable.
As for the glasses, wearing them just meant putting them on my face like a pair of standard glasses. They are lightweight and resemble standard glasses like the original Ray-Ban Meta, but while wearing them, I had the sensation they were heavier than they should be. For instance, I could feel a slight discomfort from the weight of the glasses on my nose. It was a subtle sensation, but it was there. Since the demo was so short, I can’t tell if, in the long term, you just get used to it, or if the sensation of their weight accumulates until the glasses feel too uncomfortable.
Design
Both the glasses and the neural wristband have a classy design. The glasses are made by Ray-Ban, so of course, they look cool. But the fact that they need to be bulkier than standard Ray-Bans to accommodate the display technology has an impact on the final look: the frames are thicker, and the glasses are, in general, bigger than they should be, which hurts their aesthetics. I found that I looked cooler wearing the original Ray-Ban Meta than these.
Visuals
The Ray-Ban Meta Display has a display on the right lens only. The fact that the display is monocular heavily impacted my experience. Our brain reconstructs the world from the views of both eyes, and when something is present in front of only one eye, it gets a bit confused. I found it somewhat uncomfortable to have just one display; I could also feel the eye strain from having to focus on it with a single eye. This is probably something you can get used to with time, but during my short demo, I definitely found it uncomfortable.
Meta claims a 600×600-pixel display with a 20° FOV, and in fact, the information window you see is pretty tiny: it is big enough only to accommodate a few buttons of a menu, or a few lines of text for the live captioning. This is still acceptable given the current state of the technology. The display is not in the center of your vision, anyway, but slightly shifted to the right, so that it doesn’t appear in front of what you are looking at in the real world. This is a good choice, because it avoids disturbing the user.
Meta claims that the display is bright and can be used outdoors thanks to its 5,000 nits. In my tests, I wouldn’t define it as “bright”… most of the things I saw in it looked semitransparent, and the colors didn’t look vibrant to me. That said, thanks to the 5,000 nits, I could put the display info over a not-so-strong light and still read its text. But when I put it in front of a strong artificial light, like the DJ’s big screen during the party, I still had a hard time reading its content.
Anyway, even if the display is tiny, I found the text inside it very readable. The text has to be small, so my mother probably couldn’t read it, but it was definitely readable for me.
Since I like to do crazy tests, I also started shaking my head left and right while wearing the glasses, and all the visuals on the display separated into their red, green, and blue components until my head was still again. These glasses are not meant for fast movements yet.
One thing I found cool about the display is that, looking at the glasses from the outside, I couldn’t see that content was playing on the lenses… there were no bright spots whatsoever. Meta told Road To VR that the outward light leakage is only around 2%. This means that if you use these devices on the street, you don’t look like a weirdo with a display in your eye; you just look like someone wearing standard glasses. This is the feature that probably impressed me the most. You can see it in this video I shot while using the glasses: from the outside, you have no clue that I’m looking at a display.
The display information is designed to appear for just a few seconds in front of the user, after which the glasses hide it until the user requests it again. This serves a double purpose: not cluttering the user’s view when it is not needed, and saving battery.
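The “show briefly, then auto-hide” behavior described above can be sketched as a tiny state machine. This is purely my illustration of the concept, not Meta’s implementation; the class and timeout value are invented:

```python
class AutoHideDisplay:
    """Sketch of a display that shows info briefly, then hides itself.

    This models only the behavior described in the article; the 5-second
    timeout is an assumption, not a confirmed spec.
    """

    def __init__(self, timeout_s: float = 5.0):
        self.timeout_s = timeout_s
        self._shown_at = None  # None means the display is currently off

    def show(self, now: float) -> None:
        """The user requested the info window: turn the display on."""
        self._shown_at = now

    def is_visible(self, now: float) -> bool:
        """The window stays visible only for `timeout_s` seconds."""
        if self._shown_at is None:
            return False
        if now - self._shown_at >= self.timeout_s:
            self._shown_at = None  # auto-hide: declutter the view, save battery
            return False
        return True
```

Passing the clock in explicitly (`now`) keeps the sketch deterministic; a real device would of course use its internal clock and render loop.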
Audio
Our tests were mostly visual, and there were just a couple of moments when we could hear any audio. I can say it worked, but I cannot judge the quality or the volume.
The Meta employee told us that there are many microphones on the device, so it is able to capture not only the voice of the user but also the voices of the people around them, and to filter out background noise.
Interactions
You interact with the glasses using the Neural Wristband. The wristband detects your finger gestures by reading the electrical impulses that your brain sends to your fingers. From my tests, I can say it is fairly accurate: apart from a couple of misdetections, it always worked. And while we naturally tended to keep our hands in front of us, it actually worked very well even with the arm fully relaxed along the body, which is much more comfortable than doing the air-tap in front of you as with the Vision Pro.
What was puzzling during the demo was learning all the different commands. If I remember well:
- The double-tap of the thumb with the middle finger opened the menu
- The single tap of the thumb with the middle finger went back
- The single tap of the thumb with the index finger was to confirm
- A small swipe of the thumb on the index finger moved the selection up/down or left/right
- Sometimes you could also pretend you were grabbing a knob with your thumb and index finger, and rotate your hand to make this imaginary knob turn and select a different value. This way, you could, for instance, adjust the zoom before taking a picture
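The command scheme above is essentially a gesture-to-action mapping. As a thought experiment, here is how it could look in code; there is no public SDK for the Neural Band yet, so every name here (the event type, the action strings) is invented for illustration:

```python
# Hypothetical sketch of the gesture-to-action mapping described above.
# All names are assumptions; no real Neural Band API is public.
from dataclasses import dataclass

@dataclass
class GestureEvent:
    finger: str         # "index" or "middle"
    kind: str           # "tap", "double_tap", "swipe", "knob"
    value: float = 0.0  # swipe direction or knob rotation angle

def handle_gesture(event: GestureEvent) -> str:
    """Map a detected gesture to a UI action, per the list above."""
    if event.finger == "middle" and event.kind == "double_tap":
        return "open_menu"
    if event.finger == "middle" and event.kind == "tap":
        return "back"
    if event.finger == "index" and event.kind == "tap":
        return "confirm"
    if event.finger == "index" and event.kind == "swipe":
        return "navigate"      # up/down/left/right depending on value
    if event.kind == "knob":
        return "adjust_value"  # e.g., camera zoom before a shot
    return "ignored"
```

Seeing it laid out like this also makes the confusion I describe below more understandable: the same motion (a tap) means different things depending on which finger performs it.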
These interactions are all very new, and they require some time to learn. A few minutes are definitely not enough to do that, so we were all very confused about how to interact with the UI, and we continuously made mistakes like using the index finger instead of the middle one and vice versa. My impression is that the interface is not natural and must be learned. It doesn’t seem very complex to learn… it is just that a few minutes is far too little time.
It was pretty disappointing that they did not let us try writing with the wristband: it is the thing I was most curious about, and it can potentially be a game-changer for text input in XR. Zuck showed it on stage, but we have no clue how well it actually works.
Applications
In our short time with the device, we were able to try:
- The main menu, which is a grid of like 6×2 rounded-rectangular buttons that you can navigate with the swipe gesture
- A very simple 2D minigame, where we could move a ball on a grid to make it enter a hole. I played it for like 15 seconds, so I can’t say much about it. It seemed very basic, made only to showcase that some entertainment content on these glasses is possible
- Meta AI, which we could activate and ask either to open an app (e.g., telling it to take a picture) or to do something contextual to what we had in front of us (e.g., explaining a picture, or restyling the world in front of us)
- An application to listen to some music, which played the audio via the speakers, while showing simple controls to play/stop/pause/change volume
- The ability to take a picture. We could see a tiny preview of what we were taking the picture of, then change the zoom with the “knob gesture”, and then finally take the picture
- The live captioning feature. The Meta employees were speaking in front of us, and we could read what they were saying, transcribed by the AI in a little bubble in front of our eyes. It worked fairly well, the transcription was quite accurate, and the text was readable.
Price and availability
This is what Meta says about the availability of the device:
Starting at $799 USD, which includes both the glasses and Meta Neural Band, Meta Ray-Ban Display will let you experience, learn about, and interact with the world in a totally new way. It hits shelves September 30 at limited brick-and-mortar retailers in the US, including Best Buy, LensCrafters, Sunglass Hut and Ray-Ban Stores. Availability in select Verizon stores will follow soon after. Expansion to Canada, France, Italy, and the UK is planned for early 2026. We’re starting with select retailers and regions to make sure customers get the glasses and band that’s perfect for them, and we’ll expand buying options over time.
Final impressions
You know, I’m not the “enthusiast” guy, so I did not come out of this demo thinking that this is a mind-blowing device. This is not the first time I’ve tried a wristband that can detect my finger input: Doublepoint can already detect a click using a smartwatch. And it is not the first time I’ve tried smartglasses with a color display: TCL RayNeo already has one, and it is even binocular, which is much more comfortable for my eyes. So I can’t say that the Ray-Ban Meta Display is mind-blowing or unique. Probably its strength is that it puts many interesting features found in other glasses into a single polished package. And the neural wristband is something original, but I think it has yet to unleash its full potential. In the end, the Ray-Ban Meta Display is an interesting device that marks a step Meta is taking from audio-only smartglasses toward full AR glasses.
I think the glasses are well-made, and their interface is pretty neat. They seem pretty comfortable to wear for short to medium periods. The display is pretty readable, but it is tiny, and the choice of going monocular is a big no from me. The neural wristband does its job, and its interface can be learned with time. I also think that an open SDK for this wristband would let us developers create a lot of applications with it.
Ray-Ban Meta Display can be an interesting gadget for the tech enthusiast or the developer who wants to create applications for it. At $800, and with the limitations it has, I’m not completely sold that they are ready for mainstream adoption. I think the average user can go with the standard Ray-Ban Meta.
But they are an interesting step forward, and if the next iterations can be cheaper and have a bigger, binocular display, then they could become interesting for the average user too. Having a display is a game-changer, because audio-only is very limited, so this is for sure the direction smartglasses should take.
I stayed awake until 3.30 am to write this article to inform you about this new device… so if you want to reward my hard work, consider subscribing to my newsletter or resharing this post on your social media channels!
Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I’ll be very happy because I’ll earn a small commission on your purchase. You can find my boring full disclosure here.