Smart glasses with augmented reality functions look more natural than VR goggles, but today they are heavily reliant on a phone for compute and next-gen communication.
More augmented reality (AR), virtual reality (VR), and mixed reality (MR) wearables are coming, but how they are connected, and where image and other data are processed, are still in flux.
Ray-Ban Meta AI glasses, for example, look like classic eyeglasses, but they rely on a tethered smart phone for such functions as taking pictures, AI voice assistance, and object identification. In contrast, the Apple Vision Pro mixed reality headset has sufficient built-in compute and battery power to operate as a standalone device, providing both AR and VR functions, but it is relatively heavy and cumbersome. As a result, few consumers beyond gamers and those seeking new ways to work may be willing to wear the goggles in daily life.
“My opinion is that the next big market after mobile is AR wearables, but not VR wearables,” said Vitali Liouti, senior director of segment strategy, product management at Imagination Technologies. “There’s a good reason for that. If I use Vision Pro on flights, people look at me weirdly. If I wear the Ray-Ban Meta glasses, no one even notices. I take photos all the time. The nice thing about wearables like the Ray-Bans is they’re comfortable, they’re natural, and like mobile phones, they’re easy to work.”
When it comes to these devices, function determines form as much as form determines function.
“AR glasses have temporarily shifted away from specialized optics or overlaying displays,” said Amol Borkar, director of product management and marketing for Tensilica DSPs in the Silicon Solutions Group at Cadence. “Instead, they resemble regular spectacles equipped with a camera and microphone to capture verbal prompts. This simplifies the design significantly, as illustrated by products such as Meta’s Ray-Ban Stories.”
In contrast, VR goggles tend to be bulkier and heavier because they fully enclose the eyes to block out the real world. That requires more processing power. “VR systems generally incur higher costs due to their focus on providing an immersive experience,” Borkar said. “Since VR goggles do not allow for pass-through vision, images must be rendered at high frame rates on high-resolution screens (often OLED or fast-switching LCD) with high refresh rates (90Hz–120Hz, or more) to ensure smooth visuals. Additionally, VR systems require highly accurate tracking of head, hand, and eye movements to reflect them in the display render correctly. Failure to achieve this may result in motion sickness and a subpar user experience.”
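To put those refresh rates in perspective, a quick calculation shows the per-frame rendering budget they imply. This is a rough figure only; real motion-to-photon budgets also have to absorb tracking, compositing, and display latency.

```python
# Rough per-frame rendering budget implied by the refresh rates Borkar cites.
# Illustrative arithmetic only; tracking, compositing, and display latency
# consume part of the real motion-to-photon budget.
for refresh_hz in (90, 120):
    frame_budget_ms = 1000.0 / refresh_hz
    print(f"{refresh_hz} Hz -> {frame_budget_ms:.1f} ms per frame")
# 90 Hz  -> 11.1 ms per frame
# 120 Hz ->  8.3 ms per frame
```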
Battery vs. cord
The simplest way to give users more AI, AR, and eventually VR functions in small-form-factor glasses is to keep them tethered to a phone or other gateway device for the necessary compute power.
“It remains to be seen what will happen, but the predominant situation will be that you will have your wearable, and it will leverage the extremely advanced, extremely expensive chipset of the mobile phone around those AI models,” said Imagination’s Liouti. “Here’s where we have not just the challenge, but the opportunity. The more performant the mobile phone chipset becomes in running local models, the more performant the wearables will become. One thing will help the other grow.”
Others agree that a phone or gateway device will continue to play a key role.
“There are two schools of thought,” said John Weil, vice president and general manager of IoT and edge AI processors at Synaptics. “If the thing you put on your head has more compute, less phone is needed. But if I were a betting man, the trend is more phone. The first one is trying to take a mobile processor and embed a device on your head. The other is what I call semi-custom products that are specifically optimized on the vision and audio modalities, and taking that, digitizing that, and using the cell phone as the compute element. One is you try to do more on the AR/VR headset turnkey — both modalities, vision and audio — and bring it back out to the physical world, and you have limited battery life. But now imagine you can manipulate the modalities. You can do vision, you can do speech, you can do audio. You can do all of that, but you need a secondary computer — your cell phone, in this example — and then a third level might go all the way back to the cloud. Think of it as distributed computing. You need a tiny amount of vision and audio in the glass, and then you hop to the phone, and then you hop from the phone to the cloud and then back. The human latency that you need, in milliseconds, determines how far back you can go, and that will dramatically improve battery power.”
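As a rough illustration of the distributed-computing tiering Weil describes, the sketch below places a task at the farthest tier whose round-trip latency still fits the budget. The tier names and latency numbers are hypothetical placeholders, not measured figures from any of the companies quoted here.

```python
# Hypothetical sketch of glasses -> phone -> cloud offload: run a task at the
# farthest tier whose round-trip latency still fits the budget, since farther
# tiers offer more compute. All numbers are assumed, illustrative values.
TIERS = [
    ("glasses", 2),    # on-device DSP/NPU, ~2 ms round trip (assumed)
    ("phone",   20),   # wireless hop to the handset (assumed)
    ("cloud",   120),  # cellular backhaul to a data center (assumed)
]

def place_task(latency_budget_ms: float) -> str:
    """Pick the farthest tier that still meets the latency budget."""
    chosen = TIERS[0][0]
    for tier, round_trip_ms in TIERS:
        if round_trip_ms <= latency_budget_ms:
            chosen = tier
    return chosen

print(place_task(10))   # -> "glasses": head-tracking-class latency
print(place_task(50))   # -> "phone": voice commands, simple vision
print(place_task(500))  # -> "cloud": large-model reasoning
```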
In the second scenario, the glasses serve as the visual instrument to interact with a phone. “Instead of pulling the phone out of your pocket, the next logical step is you’ll just wear glasses,” Weil noted. “The analogy I use is the way you buy a smart monitor and you hook it to your computer over, say, Thunderbolt. You walk in, sit down at your desk, plug in, get a monitor. That monitor has various capabilities. Today, your AR/VR things are going to become smart monitors to your cell phone, so most people are shifting toward the cell phone as the primary compute.”
In a recent report, Counterpoint defined smart glasses as being tethered to a smart phone, while AI glasses have their own computing processor with a dedicated unit for processing AI tasks, such as a neural processing unit, data processing unit, or other AI accelerator to run tiny AI models and perform on-device AI tasks.
Fig. 1: Defining AI glasses vs. smart glasses. Source: Counterpoint Global Smart Glasses Ecosystem & Market Trends, January 2025
By this definition, AI glasses have limited AR and no VR capability. So wearables have a way to go if users want more AR/VR without having to wear goggles.
“You can communicate with Ray-Bans right now, but it’s a bit clunky,” said Imagination’s Liouti. “At some point we’re going to see the tree, and it’s going to tell you, ‘This is a cherry blossom tree. This needs watering.’ I get really geeky about it, because it’s such an amazing experience, but these technologies need a few generations to come to the level where mobile phones are. People forget that mobile phones have gone through 50 iterations. That’s what wearables need.”
Others agree. “While the first wave of smart glasses focused on capturing moments, the next generation will be about interpreting and understanding them,” said Parag Beeraka, senior director of consumer computing, client line of business at Arm. “As the demand for smaller, smarter, always-on devices offering premium user experiences grows, AI-first wearables will not just be connected and voice-enabled, but will use agentic AI to reason, predict, assist, and adapt — allowing us to mix both the physical and virtual world together.”
For example, you’ll be able to step outside and ask the AI to find a coffee place and your glasses will guide you there. “The next generation of XR [extended reality] smart glasses will be capable of interpreting user behavior and reading the environment in real-time, making them a context-aware assistant,” said Beeraka. “Edge AI platforms are evolving to support advanced, power-efficient inference and on-device reasoning in ultra-low power form factors, making heterogeneous compute central to the success of future wearables.”
Leading-edge 3D-ICs may be the missing piece that lets smaller glasses have more functions.
“Whether compute is in the headset or in a separate device depends on the form factor,” said Marc Swinnen, director of product marketing at Ansys. “How big is this going to be? That has always been one of the attractions of 3D-IC. You can shrink the form factor of your system. Instead of having a PCB with four or five chips on there, you scrunch them together. If you want to make these VR sets realistic and commercially viable, there’s a huge drive toward making custom silicon that does exactly what you need, the way you do it, and is as efficient as possible on power, on speed, on application software. Nobody wants their ear burning as the thing gets hot. For example, Meta has a division that is working on silicon for their VR headsets. It’s a testament to the power of silicon these days. The bespoke silicon has become really, really central to the success of so many of these entire companies or divisions in companies.”
Smaller manufacturing technology nodes also may make it possible to incorporate more compute power into the smaller form factors and make them self-contained without a tethered device, said Cadence’s Borkar. “However, this involves significant costs. Reducing the size to or below 7nm is very expensive, and the current total addressable market (TAM) or return on investment (ROI) in the AR/VR sector does not yet fully justify such an investment, even for the big players.”
In terms of custom products, TSMC recently showed a concept for AR glasses, while Bloomberg reported that Apple is designing specialized chips for smart glasses, likely to be manufactured by TSMC. Meanwhile, GlobalFoundries is working on MicroLED displays built on two wafers, a GaN LED frontplane and a CMOS backplane, to enable better micro-displays for smart glasses.
Fig. 2: TSMC’s concept for AR glasses chips, as shown at its recent North America Technology Symposium. Source: TSMC
Overall, the solution is likely to include a mix of advanced chips, heterogeneous compute, and offloading some functions to another device.
“As AR/VR glasses become more compact, lightweight, and smarter, we’re seeing a dynamic blend of compute models,” said Arm’s Beeraka. “While many experiences will leverage the tethered device — typically a smartphone or wearable device — for offloading heavy processing, the ambition for truly standalone smart glasses is accelerating. Both approaches demand heterogeneous compute architectures to enable efficient processing across a range of diverse workloads, including sensor fusion, AI-driven perception, and real-time rendering. Architectural breakthroughs and AI processing shifts are already facilitating the high-performance, low-power balancing act that makes sleek, mainstream wearables commercially viable. Whether offloaded or on the device itself, edge compute is key to bringing together power efficiency with comfort and usability to deliver immersive experiences without compromising battery life.”
Connectivity challenges for tethered devices
As long as AI/AR glasses are tethered to a phone or other device, there is the question of which communication standard is best to link the phone to the glasses, and to link the phone to network towers and the cloud. Challenges are compounded when considering 5G and eventually 6G, which will offer the very low latency needed for advanced AR/VR features.
A full 5G/6G chipset is expensive and power-hungry, said Ansys’ Swinnen. “It might not be worth putting the whole telephone communication system into your VR when you could get by with something like Wi-Fi or Bluetooth that’s cheaper and uses less power.”
However, Bluetooth and Wi-Fi also have their limitations. “As data needs and usage grow, we can expect more standards to be developed,” said Cadence’s Borkar. “The future will likely be some type of wireless tethering with higher bandwidth and lower latency than Bluetooth while providing the ability to send videos, images, and other human inputs over short distances.”
Others agree the solution lies in new wireless standards. “These will improve the bandwidth between the AR/VR instrument and the compute element on your cell phone, or whatever device you’re using,” said Synaptics’ Weil. “There’s a higher data rate Bluetooth standard out now that goes up to 8-megabit speed, so you get the power of low-energy Bluetooth [BLE] at a higher data rate. That gives you a lot more capability so you don’t need a Wi-Fi direct connection.”
To help solve these multi-protocol challenges, Synaptics recently released Wi-Fi, Bluetooth, and Zigbee/Thread combo SoCs that support high peak speed and low latency for applications including AR/VR and gaming.
Another option is ultra-wideband (UWB) wireless transceivers, designed to coexist with Bluetooth and Wi-Fi. NXP and SPARK Microsystems are both working on such devices, though the chips would need to be adopted by the AR/VR companies and the phone companies to solve the 5G/6G delivery challenge.
“With Wi-Fi technology, AI glasses cannot have a battery that is well suited to a comfortable form factor,” said Fares Mubarak, CEO of SPARK. “And Bluetooth, frankly, isn’t good enough to do the kind of connectivity at the latency and at the data rate that they look for.”
Companies will not want to spend billions of dollars deploying 6G infrastructure to get from the cloud to a phone in single-digit milliseconds, but then take 140 milliseconds to get from the phone to the VR device, noted Frederic Nabki, co-founder and CTO of SPARK. “Instead of burning all that latency benefit by Bluetooth, they can use SPARK technology to close that half-meter between the compute device and your ear with sub-millisecond latency. Now, finally, the promise of 6G is fully kept all the way from the cloud to your ear.”
Another solution is to connect the devices with a cable in order to guarantee better performance and latency. For example, the Sony PlayStation VR2 is tethered to the PlayStation 5 gaming console with a USB-C cable, providing both power and data transmission.
“There’s a transition before XR can be just a pair of glasses,” said Gervais Fong, senior director of product marketing for mobile, automotive, and consumer interfaces at Synopsys. “As they become more powerful, they will likely need to connect those glasses in certain operating modes to an XR-capable phone. The cable could be USB4 v2.0, because if you look at the XR glass designs, I’ve seen anywhere from 12 to 16 different cameras and sensors in those types of glasses, and that consumes a huge amount of bandwidth. Imagine all that video that you have to transmit, either from the cameras sending it down to the phone for processing, or for the phone sending 4K-type resolution to each one of your eyes — and having the resolution and the frame refresh rate that’s fast enough so that you don’t get dizzy. That consumes a high degree of video bandwidth, so at the same time, it’s going to consume power.”
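A back-of-the-envelope calculation shows why that bandwidth adds up. Assuming uncompressed 24-bit RGB at 4K per eye and a 90Hz refresh (illustrative figures; real links add protocol overhead and typically use display stream compression), the raw video alone reaches tens of gigabits per second:

```python
# Back-of-the-envelope bandwidth for the "4K per eye" case Fong describes.
# Assumes uncompressed 24-bit RGB at a 90 Hz refresh; actual links would add
# blanking/protocol overhead and normally rely on display stream compression.
width, height  = 3840, 2160   # 4K UHD per eye
bits_per_pixel = 24
refresh_hz     = 90
eyes           = 2

gbps = width * height * bits_per_pixel * refresh_hz * eyes / 1e9
print(f"~{gbps:.1f} Gb/s uncompressed")   # ~35.8 Gb/s
```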
In terms of channel loss, Synopsys uses margin to meet the worst-case specification across process, voltage, and temperature. “We don’t know the quality of the cable a person uses, or the channel, or the quality of the PHY on the other side that we’re connecting through the cable, and anything along the line that’s connected there,” Fong said. “If it is not good quality, and within spec, it’s going to cause problems. The more margin we can have on our side, the better the chance that an SoC designer using our PHY is able to successfully get that signal through to the other end. That, for them, improves interoperability, which is the big thing with USB. When you plug it in, you expect it to work. But there’s a lot of work behind the scenes that industry has done so that when you plug it in, it just works.”
Others say a cable is a short-term solution only.
“While today’s AR/VR experiences may benefit from high bandwidth wired standards like USB4 v2.0, cables are unlikely to be the long-term answer,” said Arm’s Beeraka. “The future of spatial computing relies on freedom of movement and seamless interactions. We expect to see more advanced low-power wireless technologies emerging to bridge this performance gap, delivering the bandwidth and latency required to support immersive experiences without sacrificing comfort or convenience. Platforms that enable the compute efficiency and intelligence required at the edge will be crucial to enabling a wireless future.”
Enabling 6G through edge AI and edge compute
The low latency and high determinism of edge AI technology are creating new use cases for AR/VR, as well as improving existing use cases through AI.
“There are two major things driving the adoption of edge AI,” said Steve Tateosian, senior vice president of IoT, consumer, and industrial MCUs at Infineon. “One, latency goes down drastically. I’m not going to say it’s zero latency, but from a human perspective, it’s zero latency to act, to engage with a device locally, as opposed to going to the cloud. Two is determinism. Especially when we’re talking about human-machine interaction, we as humans naturally have an expectation around how we interact with our environment. For example, if you walk into a dark room, your expectation when you flip the light switch on is that the light comes on immediately, and you’re not standing there in the dark for three seconds wondering, ‘Do I need to turn this light switch on again?'”
Others agree that really short latencies are key. “With the delay time, when you have a VR or AR headset on and turn your head, the maximum lag you can tolerate is going to be a millisecond or it becomes really confusing to your brain, because all the overlays are trying to catch up,” said Shawn Carpenter, 5G/6G program director at Ansys. “Because you also have this time-of-flight delay, from where you’re operating to wherever the information is being processed to put the overlay down for you, you can’t have the actual edge processing be too far away.”
What that probably means is that with 6G, very-high-performance computing will be brought directly to the base station. “Then you can have just the time-of-flight delay,” Carpenter said. “Physics imposes this delay time for how long it physically takes to transmit the signal to the nearest base station. If the signal then has to go to some data center in Virginia and then come back to you, that’s going to impose delays that are dead on arrival. The concept won’t work. You’re going to probably have something like a set of GPUs or TPUs right there at the base station.”
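The time-of-flight argument can be made concrete with a rough propagation-delay estimate. Assuming signals travel through fiber at roughly two-thirds the speed of light, and ignoring routing and queuing delays, the round-trip time grows quickly with distance to the compute:

```python
# Rough fiber propagation delay behind Carpenter's time-of-flight argument.
# Assumes ~2/3 of c in fiber (about 200 km per millisecond); routing and
# queuing add further delay in practice.
C_FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / C_FIBER_KM_PER_MS

print(f"{round_trip_ms(1):.3f} ms")     # nearby base station   (~0.010 ms)
print(f"{round_trip_ms(50):.1f} ms")    # metro edge site       (~0.5 ms)
print(f"{round_trip_ms(1000):.1f} ms")  # distant data center   (~10 ms)
```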
In addition to processing delays, a further challenge is that a lot of bandwidth will be needed in order to get data through the pipe. “If you don’t have a large number of devices that you’re trying to communicate to, you could probably get [6G-enabled VR] done within the home with Wi-Fi 7,” he said. “Then the question is whether your pipe out of the home gives you enough bandwidth, or whether you’re going to need to have some kind of fixed wireless access. If you have fiber to the home, you may be able to get some of that done, but you’re probably going to have to communicate with a fairly extensive computing resource to do the overlay, to do all the AI that recognizes what you’re looking at.”
For example, an aircraft mechanic wearing VR goggles to see inside a virtual aircraft engine will want AI to label things — what’s hot, what’s electric. Something is going to have to do that computing, and it’s probably not going to be a PC, said Carpenter. “Probably it doesn’t have enough power, so you’re going to have to connect to a computing resource. You probably could get it done with two hops, one within the home, and then another one to the local fixed wireless station, provided you have a high enough bandwidth access point to do something like that. You could use your phone as that relay device, as well.”
Cellular is a mixed bag. While 4G LTE and sub-6 GHz 5G are slower but relatively predictable, millimeter-wave 5G and 6G require line of sight. “The higher you go in frequency, the shorter the range is, and UWB is no exception to physics,” said Mubarak. But the challenge can be solved through complementary technologies. For example, if a person is using AR glasses for gaming, they want high-resolution audio with the lowest latency on the headset, and UWB would provide it while Bluetooth would be inactive — unless the person got a phone call. “Let’s say you lose your game. You get up, walk to the kitchen, you lose UWB connectivity. Then the device can, with the proper application layer, switch from ultra-wideband to a compressed solution such as Bluetooth, and the consumer will end up having their seamless solution.”
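The UWB-to-Bluetooth handoff Mubarak describes amounts to a simple link-selection policy at the application layer. The sketch below is a hypothetical illustration only; the link-status flags and stream descriptions are placeholders, not a real API:

```python
# Hypothetical sketch of the radio fallback Mubarak describes: prefer UWB for
# low-latency, high-rate audio while it has link, and drop back to a
# compressed Bluetooth stream when UWB connectivity is lost.
def pick_audio_link(uwb_link_ok: bool, bt_link_ok: bool) -> str:
    if uwb_link_ok:
        return "UWB: uncompressed, lowest-latency audio"
    if bt_link_ok:
        return "Bluetooth: compressed audio, higher latency"
    return "no link: buffer locally"

print(pick_audio_link(True, True))    # at the desk, line of sight
print(pick_audio_link(False, True))   # walked to the kitchen, UWB dropped
```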
However, with a good line-of-sight connection, or with an array of well-placed repeaters, 6G could be the enabler of next-gen AR/VR capabilities. “We’re not talking about a virtual reality kind of situation or assisted virtual,” said Ron Squiers, solution networking specialist at Siemens EDA. “We’re talking about holographic images being in the room with you and not noticing any difference about the person that you’re talking to.”
Conclusion
As with AI, we are only just starting to see the huge scope of AR/VR applications, but the shrinking from goggles to glasses will take time.
“Clearly people are moving to these AI glasses, which are not a ski mask like the first-gen devices,” said SPARK’s Nabki. “They want rich interaction with the AI. They don’t want you to type. They want you to talk to it. They want you to interact with it. They want you to show it images.”
The bottom line is, “No matter how good this VR headset is, if you have a thick cable coming off and there’s a box the size of a fridge that needs to run it, it’s never going to be a commercial success, no matter how well it works,” said Ansys’ Swinnen. “The silicon is critical to making it work.”
It also remains to be seen what killer application will make VR a must-have technology. “Both AR and VR technologies need a solid ecosystem to thrive,” said Cadence’s Borkar. “As a self-contained system, VR calls for its own ecosystem and applications. Despite numerous attempts, it has seen limited success and remains a niche market without a definitive ‘killer’ app or use case. When considered an extension of a phone or PC, VR might seem like just an enhanced display, possibly not worth the cost for some customers.”
Related Reading
Wearable Connectivity, AI Enable New Use Cases
New types of wearables and devices can record bodily data or simulate the senses without needing to meet stringent med-tech rules.
Three-Way Race To 3D-ICs
Intel, TSMC, and Samsung are developing a broad set of technologies and relationships that will be required for the next generation of AI chips.