Wearable Connectivity, AI Enable New Use Cases

New types of wearables and devices can record bodily data or simulate the senses without needing to meet stringent med-tech rules.

The sensing and processing technology used in smartphones, watches, and rings is starting to be deployed in a wide variety of wearable devices, ranging from products that bridge the gap between sports and med tech to haptic devices that assist the visually impaired and AR/VR glasses.

Emerging applications include payment, building, and factory wearables. Most of these devices process signals, then plug into AI and ML tools to analyze the data at the edge, on a phone, or in the cloud.

For example, Belgian startup Skinetix developed athletic leggings with embedded sensors that measure muscle activity and movement and then send data back to the wearer to help with injury recovery. The company first considered focusing on neurological impairments such as stroke recovery, but the barrier to entry was too high. “It’s a medical application that needs medical approval,” said Joris De Winter, co-founder and CEO of Skinetix, a spin-off from imec and Vrije Universiteit Brussel.

So far, Skinetix has created a complex, functional prototype. “It needs to be cut down now into a minimal viable product, but the legging has sensors integrated inside it, and then all the signals are measured,” De Winter said. “All the routing of the signals goes through the textile toward a single data-capturing box.”

The device starts with electrodes on the skin to measure analog signals. “You do common mode rejection, and you reject some of that noise out, then you filter it,” he explained. “An instrumentation amplifier is in contact with your skin, and every time you contract your muscles there’s an electric potential that you can measure there and that you want to amplify. It’s a really small voltage. We’re in the range of a couple of microvolts to millivolts. It’s an analog signal that you want to amplify, then filter, because there are a number of sources of noise that can enter the signal. An important consideration when you do tests in a lab is noise coming from power lines. The 50/60 hertz frequency is one that you want to leave out, for example.”
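As a rough illustration of that signal chain, the sketch below band-passes a raw EMG trace and notches out power-line hum using SciPy. The sample rate, band edges, and notch frequency are illustrative assumptions, not Skinetix’s actual design parameters.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 1000.0          # assumed sample rate in Hz (illustrative)
POWERLINE_HZ = 50.0  # 50 Hz power lines in Europe; 60 Hz in North America

def clean_emg(raw_uv: np.ndarray) -> np.ndarray:
    """Band-pass a raw EMG trace (in microvolts) and notch out power-line hum."""
    # Most surface-EMG energy sits roughly between 20 and 450 Hz.
    b_bp, a_bp = butter(4, [20.0, 450.0], btype="bandpass", fs=FS)
    emg = filtfilt(b_bp, a_bp, raw_uv)

    # Remove the 50/60 Hz power-line component with a narrow notch filter.
    b_n, a_n = iirnotch(POWERLINE_HZ, Q=30.0, fs=FS)
    return filtfilt(b_n, a_n, emg)

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)
    # Synthetic "muscle burst" after t = 1 s, plus 50 Hz hum, purely for demonstration.
    raw = 30 * np.random.randn(t.size) * (t > 1.0) + 20 * np.sin(2 * np.pi * POWERLINE_HZ * t)
    print(clean_emg(raw)[:5])
```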

Noise like that makes testing the device a challenge. Another design challenge is cross-talk. “When you’re trying to measure a muscle, if your electrode is not placed on the right spot, you could be measuring activity from a muscle that is right next to it,” De Winter said. “You also need to ensure you have good contact between your electrodes and the skin. We mostly measure EMG activity, so the sensors are made of the EMGs and inertial measurement units (IMUs). These are the same as you have in your phone that measures its orientation.”

There are many models of instrumentation amplifiers on the market with different advantages and disadvantages. “They read in different bandwidths and amplify in different ways,” he said. “They do the common mode in different ways, so that’s where you need to select the right one for the type of activity you want to measure and the conditions you want to work in. After that, you have to filter it. There again, you can use other types of chips to just do the filtering. You can also use passive components to filter with regular RC filters. This means you need to know the range of frequencies you’re interested in for your exercises. Based on that, you’re going to choose whether to use a passive element or a chip there. Then you need to digitalize the signal using an ADC.”
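The passive-filter and ADC choices he describes come down to straightforward arithmetic. The sketch below, using made-up component values and resolution, shows how an RC cutoff frequency and an ADC step size fall out of those choices, and why microvolt-level EMG needs gain before digitization.

```python
import math

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """Corner frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def adc_step_uv(vref_volts: float, bits: int) -> float:
    """Smallest voltage step (one LSB) an ADC can resolve, in microvolts."""
    return vref_volts / (2 ** bits) * 1e6

# Illustrative values only, not Skinetix's design choices.
print(f"RC cutoff:  {rc_cutoff_hz(3_300, 100e-9):.1f} Hz")  # 3.3 kOhm and 100 nF -> ~482 Hz
print(f"12-bit LSB: {adc_step_uv(3.3, 12):.1f} uV")          # ~806 uV, far above raw EMG levels,
                                                             # hence the instrumentation amplifier
```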

There are a variety of other wearable use cases and products:

  • Athletes’ heart rate, body temperature, sweat rate, and distance covered can be monitored. (Cadence)
  • Secure payments can be made with watches and rings. (Infineon)
  • Smart buildings can be controlled via a wearable or implantable device. (Siemens)
  • AR/VR can be used to enhance tasks such as remote monitoring, training, and equipment management in smart factories. (Imagination)
  • Biometric data from commercial-grade, off-the-shelf wearables can be used to enable early detection of infectious diseases, per the U.S. Department of Defense.
  • Virtual reality experience can be boosted via haptic feedback clothing such as bHaptics’ TactSuit.
  • Training, simulation, and engineering can be supported via VR gloves like those from Manus.
  • Programmable fiber can be sewn or woven into clothes, enabling distributed inference. (MIT)
  • 3D virtual twins can be created with AR goggles for an enhanced design and simulation process. (Dassault Systèmes and Apple)

Smart glasses, in particular, seem poised for wider adoption. Counterpoint reported that global smart glasses shipments rose 210% year-over-year in 2024, following 156% growth in 2023.

Processing data
Much of this falls into the next phase of sensor development, which takes advantage of advances in semiconductors, connectivity, and more sophisticated system design, and packs it all into a much smaller device, whether that’s a multi-function headset, a smart watch, or a patch of some sort.

“Phase one was where you’re taking a product, improving the quality, and enhancing the performance,” said John Weil, vice president and general manager of the Edge AI Processor Business at Synaptics. “So it might be a power improvement, and maybe more adaptive, like adaptive noise canceling headphones. The new ones can isolate my voice, and even if somebody is talking into the same mic or in the same background, it can ignore that voice. That’s kind of the next logical step. It’s doing noise canceling, and it’s probably focused on the energy coming from my voice. It has another set of filters and voice isolation. But now, with the small micros we have, we can run AI algorithms. So they can do some of the final cut algorithms on the fly. That’s Gen 1.”

The next phase is to have optimized use cases for what customers really care about, including a much smaller form factor. “Moving data around the chip is a big part of that,” Weil said. “It’s bringing human modalities into things. So you have a vision pipeline, auditory pipelines, the five senses. Nobody has gotten taste down pat yet, but smell is actually on the roadmap. So there are electrochemical sensors coming down the pipe. Processors like Astra, with the right AI models, can take that data and start to infer through training what the odor is.”

Once a device has gathered data, it needs to be processed and analyzed into meaningful information.

“What do you have to do to the signal once you have amplified and received it and done some filtering to it?” said Prakash Madhvapathy, director of product marketing, Tensilica audio/voice DSPs at Cadence. “You have to train it, and you have to train the signal processor to understand what a normal, healthy signal looks like, and what normal behavior looks like. That is where AI comes into play. Machine learning is something that one trains with real data, or lots and lots of samples to make sure there is nothing specific to the sample set that will not apply for the larger population.”
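A heavily simplified version of that training step might look like the sketch below, which extracts two common EMG features from labeled windows and fits an off-the-shelf classifier. The feature set, window length, and labels are assumptions for illustration; the quote does not describe Cadence’s or Skinetix’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(window: np.ndarray) -> list[float]:
    """Two classic surface-EMG features computed over one window of samples."""
    rms = float(np.sqrt(np.mean(window ** 2)))  # root mean square
    mav = float(np.mean(np.abs(window)))        # mean absolute value
    return [rms, mav]

# Hypothetical dataset: 200 windows of 250 samples, labeled 0 = "normal", 1 = "abnormal".
rng = np.random.default_rng(0)
windows = rng.normal(scale=rng.choice([10.0, 25.0], size=200)[:, None], size=(200, 250))
labels = (windows.std(axis=1) > 17.0).astype(int)

X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```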

The AI model for the Skinetix leggings is trained on an athlete’s data from the field. “The data processing platform is in the cloud,” said De Winter. “We measure inside the box. We transfer this to a cell phone where the athlete can have some basic feedback, but the actual models that we use are too complex to run on a smartphone, so we send the data to the cloud, do the processing, and then afterwards the athlete can check them out on a dashboard.”
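That split between quick on-phone feedback and heavier cloud processing could be sketched as below. The endpoint URL, payload format, and feedback metric are placeholders for illustration, not Skinetix’s actual API.

```python
import json
import time
import requests  # assumed available; the endpoint below is a placeholder

CLOUD_URL = "https://example.com/api/v1/sessions"  # hypothetical, not a real Skinetix endpoint

def quick_feedback(samples: list[float]) -> float:
    """Lightweight on-phone metric: a rough activity level shown immediately to the athlete."""
    return sum(abs(s) for s in samples) / max(len(samples), 1)

def upload_session(athlete_id: str, samples: list[float]) -> bool:
    """Send the full session to the cloud, where the heavier models run."""
    payload = {"athlete": athlete_id, "captured_at": time.time(), "samples": samples}
    resp = requests.post(CLOUD_URL, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"}, timeout=10)
    return resp.status_code == 200

session = [0.1, -0.4, 0.9, -0.2]          # stand-in for a captured EMG stream
print("activity level shown to athlete:", quick_feedback(session))
# upload_session("athlete-42", session)   # heavier analysis and the dashboard live server-side
```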

Professional athletes will have a team around them to analyze the data, including medical doctors, physiotherapists, and coaches who can interpret the data and then implement corrective actions.

Meanwhile, researchers at Hokkaido University created a multi-modal, flexible wearable sensor patch to detect arrhythmia, coughs, and falls. The circuit includes a microcomputer, an amplifier, an A/D converter, a BLE unit, and a lithium-ion battery. It uses edge computing on a smartphone to analyze data.

Fig. 1: A multi-modal sensor patch with edge processing. Source: Hokkaido University

Kuniharu Takei, professor of information science and technology at Hokkaido University, said the challenges for a wearable, flexible multi-modal sensor include ensuring long-term stability for detecting vital signs, reducing noise caused by body motion, physiological testing, and data processing.

“By collecting more datasets and optimizing the algorithm for machine learning – in our study this was reservoir computing – the data processing of the machine learning most likely can be simplified,” said Takei. “For the training, a new efficient algorithm needs to be developed.”
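Reservoir computing keeps that training step cheap because only the linear readout is fit, while the recurrent “reservoir” stays fixed. The sketch below is a bare-bones echo state network in NumPy, included only to show the shape of the idea; it is not Hokkaido University’s implementation, and the data and labels are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed, random reservoir: only the linear readout below is trained.
N_IN, N_RES = 3, 100                       # e.g., 3 sensor channels, 100 reservoir nodes
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1 for stability

def run_reservoir(inputs: np.ndarray) -> np.ndarray:
    """Drive the reservoir with a (T, N_IN) input sequence; return (T, N_RES) states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train a ridge-regression readout on hypothetical labeled data.
T = 500
U = rng.normal(size=(T, N_IN))             # stand-in for multi-modal sensor streams
y = (U[:, 0] > 0).astype(float)            # stand-in labels (e.g., event / no event)
S = run_reservoir(U)
ridge = 1e-2
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N_RES), S.T @ y)
print("training accuracy:", np.mean((S @ W_out > 0.5) == y))
```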

Connectivity and battery life
A key hurdle is working out how to do more at the edge, maximizing battery life by avoiding unnecessary data transmission to the cloud.

“If you take the IP camera with the latest PSoC edge microcontrollers, for example, can you do object detection at the edge itself without having to do that detection on the cloud? Then you are doing fewer transmissions with Wi-Fi,” said Shantanu Bhalerao, vice president of Bluetooth products at Infineon. “You’re doing more at the edge. You can make the camera battery last longer. But what are the tradeoffs there? If you’re doing the object recognition at the edge, is that going to require more power? In general, the compute power there is going to be lower than the transmit power, because the compute power is for some object detection, at least. The tradeoff is that you may add more cost at the edge because you’re doing more processing there. In general, cloud costs are going up, and the cost of power to transmit data is high, so doing more at the edge can save. It’s also more privacy-friendly.”
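The tradeoff Bhalerao describes can be sanity-checked with rough numbers. The sketch below compares the energy of running a small detection model on-device against streaming a raw frame over Wi-Fi; every figure in it is an illustrative assumption, not Infineon data.

```python
# Back-of-envelope energy comparison. All figures are illustrative assumptions.

FRAME_BYTES = 200_000          # one compressed camera frame
WIFI_ENERGY_PER_BYTE_UJ = 1.0  # microjoules per byte to transmit over Wi-Fi
INFER_TIME_S = 0.05            # on-device object detection time per frame
MCU_ACTIVE_POWER_MW = 150.0    # MCU plus accelerator power while inferring
RESULT_BYTES = 200             # tiny "object detected" message

def transmit_energy_mj(n_bytes: int) -> float:
    return n_bytes * WIFI_ENERGY_PER_BYTE_UJ / 1000.0  # millijoules

def edge_energy_mj() -> float:
    infer = MCU_ACTIVE_POWER_MW * INFER_TIME_S         # mW * s = mJ
    return infer + transmit_energy_mj(RESULT_BYTES)

print(f"stream raw frame to cloud:   {transmit_energy_mj(FRAME_BYTES):.1f} mJ")
print(f"detect at edge, send result: {edge_energy_mj():.1f} mJ")
```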

Other workloads from a wearable device could run at the edge, such as snoring detection, wake-word detection, various kinds of sound detection, image classification, and people recognition.

To drive the AI there, Infineon adds an MCU-based ML accelerator onto the chip, rather than a neural processing unit (NPU), which is frequently used in AI/ML applications. “These MCUs don’t have a full-blown Linux operating system,” Bhalerao explained. “They do not need DRAM. They use an external flash, plus an MCU, so it dramatically reduces the cost. The more powerful the microcontrollers become in terms of machine learning and accelerating machine learning, and the lower the cost, the more it will be prevalent in more applications.”
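One common way such models are squeezed into MCU-class memory budgets, which Bhalerao does not detail here, is quantization. The toy sketch below maps 32-bit float weights to int8, cutting the footprint by 4x; it is a generic illustration, not Infineon’s toolchain.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
print("float32 size:", w.nbytes, "bytes -> int8 size:", q.nbytes, "bytes")
print("max quantization error:", np.max(np.abs(w - dequantize(q, scale))))
```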

Still, a key challenge to overcome is intermittent connectivity or connectivity loss. There are several ways to solve this for both Wi-Fi and Bluetooth. “The first thing is a very robust RF performance even in noisy conditions or longer range,” said Bhalerao. “Second is planning for all these things with re-transmission of data. Third, we store data on the device itself in the onboard flash, so that when we can connect, we can transmit. So, it’s chip performance, RF performance, as well as software techniques that we are using.”
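The store-and-forward approach he mentions can be sketched in a few lines: buffer readings in non-volatile storage while the link is down, and flush the backlog once it comes back. This is a generic illustration of the technique, not Infineon’s firmware; the deque stands in for on-board flash.

```python
from collections import deque

class StoreAndForward:
    """Buffer readings while the radio link is down; flush them when it returns."""

    def __init__(self, max_buffered: int = 10_000):
        # Stand-in for on-board flash; oldest samples drop first if it fills up.
        self.buffer = deque(maxlen=max_buffered)

    def record(self, sample: dict, link_up: bool, send) -> None:
        if link_up:
            self.flush(send)
            send(sample)
        else:
            self.buffer.append(sample)

    def flush(self, send) -> None:
        while self.buffer:
            send(self.buffer.popleft())

# Usage with a dummy transmit function.
sent = []
sf = StoreAndForward()
sf.record({"hr": 62}, link_up=False, send=sent.append)  # buffered, link down
sf.record({"hr": 63}, link_up=False, send=sent.append)  # buffered
sf.record({"hr": 64}, link_up=True, send=sent.append)   # backlog plus new sample sent
print(sent)  # [{'hr': 62}, {'hr': 63}, {'hr': 64}]
```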

Also, good RF performance means fewer retransmits of data. “The fewer re-transmits you do, the longer the battery lasts,” he added.

A Siemens blog noted that billions of IoT devices, including connected health devices, are expected to come online in the future. They will require RF design capabilities that support ultra-fast 5G speeds. “You see more and more applications where you have some RF connectivity to the device – that’s really important,” said Ryan Bauer, director of medical device and pharmaceutical solutions at Siemens Digital Industries Software. He cited examples such as smart knees, cardio MEMS, and pacemakers.

Wi-Fi versus Bluetooth
Because wearable devices tend to be connected to other devices, whether for memory, data processing, or power supply, engineers need to weigh the pros and cons of each connectivity option, since the choice affects how long the device can last between charges. The main tradeoff is throughput versus battery lifetime.

“Because Wi-Fi has very good throughput, the battery lifetime is limited,” said Infineon’s Bhalerao. “Your consideration is weighing how long the device needs to last. What is the user base? If the user base is using it as a substitute for the smartphone, meaning text notifications, watching videos, using apps, then that’s a smart watch versus fitness tracker, which is more hardcore fitness guys. There are very different ways to segment the market in the wearable area.”

A smart watch is at the highest end of use cases and is typically charged daily. “It has an operating system – either Android or iOS – and typically uses a Wi-Fi and Bluetooth combo,” he said. “At the other end of the spectrum is something like a fitness wearable or a tracker that is Bluetooth-only. Bluetooth tends to be much lower power, lasting anywhere from 7 to 14 days.”
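Those battery-life ranges follow directly from average current draw. The sketch below estimates days of runtime for a BLE-only tracker from a simple duty cycle; the current and capacity figures are illustrative assumptions, not Infineon measurements, chosen only to land in the same ballpark as the lifetimes quoted above.

```python
# All figures are illustrative assumptions for a BLE-only fitness tracker.

BATTERY_MAH = 180.0        # small wearable cell
SLEEP_CURRENT_MA = 0.01    # deep sleep between events
ACTIVE_CURRENT_MA = 15.0   # sensors, display, and BLE radio active

def runtime_days(battery_mah: float, active_duty: float) -> float:
    """Estimate runtime from the duty-cycled average current."""
    avg_ma = active_duty * ACTIVE_CURRENT_MA + (1.0 - active_duty) * SLEEP_CURRENT_MA
    return battery_mah / avg_ma / 24.0

print(f"at 5% duty cycle:  {runtime_days(BATTERY_MAH, 0.05):.1f} days")
print(f"at 10% duty cycle: {runtime_days(BATTERY_MAH, 0.10):.1f} days")
# Heavier use (more active time, or a Wi-Fi radio) pushes runtime toward daily charging.
```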

Other devices need to last much longer. For example, an implantable device that monitors a kidney after a transplant is not going to be replaced often. “So you want to make sure that the battery there lasts a very long time, or that you are powered not by battery, but by a Wi-Fi or Bluetooth type of device,” said Cadence’s Madhvapathy. “You could make it so a smartphone close to it, using near-field communication, powers the device. That way you can read it when you bring the phone close to it, and you get continuous monitoring.”

The key is to make devices low power, with small batteries that last as long as the use case requires. An IP camera may last three months, for example, but ideally would last six, possibly with the addition of a solar cell, said Madhvapathy.

Infineon’s Bhalerao agreed. “We have to make our products lower and lower power for a certain operation. The trend is to move from coin-cell batteries to silver-oxide batteries, since that will shrink the size of a device.”

Wearables to stimulate or create sensations
A burgeoning use of patches is not to capture data but to transmit haptic signals, whether for drug-free pain relief, to boost the AR/VR experience, or to assist the visually impaired or people with missing limbs. For example, Northwestern University and Georgia Tech researchers have been working on a device that transmits the complexity of touch to the skin.

“Vibration is a really important part of our sense of touch, but it’s only one part of it,” said Matthew T. Flavin, assistant professor in the school of electrical and computer engineering at Georgia Tech, who runs a research lab studying neural mechatronics and extended reality for patient care. “Most haptic devices that people are familiar with are vibration actuators. Our phones and our video game controllers have them, but the limitation is that it’s like having black-and-white vision. Our goal is to create these structures that can render sensations to all the different types of receptors that we have in our skin, to engage with the skin in new ways, to create a more immersive and realistic sense of touch.”

Fig. 2: This device created by researchers at Northwestern University consists of a hexagonal array of 19 small magnetic actuators encapsulated within a thin, flexible silicone-mesh material. Each actuator can deliver different sensations, including pressure, vibration, and twisting. Source: John A. Rogers/Northwestern University

The “touch” device involves a mechanical interaction, which is different from electrodes that measure electrical activity in the brain (EEG), muscles (EMG), or heart (ECG). “Sometimes we want to not just sense. We want to deliver current, and that has a number of applications,” said Flavin. “There are certain ways of treating pain, for instance, and especially if you are talking about implanting a device and getting closer to the nerves, you can have even more control over what’s happening. In a very similar way, we can drive some current into our skin, and that will elicit neural activity.”

The device consists of a battery-powered array connected to a smartphone, which has sensors to map the 3D surroundings of the user, identify objects, and indicate how far away they are. There are potential AR/VR applications for such a device when combined with smart glasses or goggles, but the team is most interested in helping people by creating therapeutic biomedical systems. “We are using this to substitute and augment missing sensory information that people have — in particular, for people that have a visual impairment,” said Flavin.
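For that sensory-substitution use case, one simple policy is to drive an actuator harder the closer an obstacle gets. The sketch below maps depth readings from a phone’s 3D scene map to drive levels for a small actuator array; the ranges and the linear ramp are assumptions for illustration, not the research team’s actual control scheme.

```python
MAX_RANGE_M = 4.0  # ignore obstacles farther than this (assumption)
MIN_RANGE_M = 0.3  # anything closer gets full intensity (assumption)

def drive_level(distance_m: float) -> float:
    """Map obstacle distance to an actuator drive level between 0.0 and 1.0."""
    if distance_m >= MAX_RANGE_M:
        return 0.0
    if distance_m <= MIN_RANGE_M:
        return 1.0
    # Linear ramp: closer obstacles feel stronger.
    return (MAX_RANGE_M - distance_m) / (MAX_RANGE_M - MIN_RANGE_M)

# One depth reading per actuator direction, e.g., from the phone's 3D scene map.
depths_m = [3.8, 1.2, 0.4]
print([round(drive_level(d), 2) for d in depths_m])  # [0.05, 0.76, 0.97]
```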

To help keep the device low power and mobile, the team harnessed some energy from the skin contact. This energy is recovered within the mechanical system, but not fed back into the battery.

“You can think of your skin as a spring, and we’re pressing it in and storing mechanical energy that way,” said Flavin. “Then when we release it, which is done by triggering magnetic materials, we can allow the skin to push the armature back into the opposite position without requiring a lot of energy to do that. So we have this switchable system, which creates a bistable operation.”

The device is controlled by an Insight SiP module, a system-in-package (SiP) that includes an antenna, a Nordic Bluetooth chip with an Arm processor, and the Bluetooth communication stack. “We’re designing this board-level integration where we’re making a flexible electronic circuit and then soldering these pieces,” he said. “We’re not designing the semiconductor device. We’re designing the device that integrates all those things together.”

In addition to the functionality from the Insight SiP module, there are GPIO expanders, PMICs, inductance measurement units, and analog multiplexers.

“We have an inductance measurement unit, which is multiplexed to all those sensors,” said Flavin. “The control of that is mediated by the processor, which is part of the system on a chip, and so it’s performing those actions and sensing. That’s what’s responsible for communicating with external sensors and our smart phone. We have a Bluetooth connection, which we’re creating in our firmware, that’s running on that processor that’s connected to a smartphone, which is what has those visual sensors and that we’re using for that visual sensory substitution task. Also, in a broader sense, we’re utilizing a lot of the advances that are incorporated into things like smartphones with all these sensors being used for AR.”
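On the receiving side, a BLE central subscribing to those sensor notifications might look like the sketch below, written here with the Python bleak library. The device address and characteristic UUID are placeholders, and the packet format is assumed; the research device’s actual GATT layout is not described in the article.

```python
import asyncio
from bleak import BleakClient  # cross-platform BLE central library

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                       # placeholder address
SENSOR_CHAR_UUID = "00002a58-0000-1000-8000-00805f9b34fb"  # placeholder characteristic

def on_notify(_sender, data: bytearray) -> None:
    # Assume each notification carries one little-endian 16-bit sensor reading.
    reading = int.from_bytes(data[:2], "little")
    print("sensor reading:", reading)

async def main() -> None:
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(SENSOR_CHAR_UUID, on_notify)
        await asyncio.sleep(10.0)  # stream notifications for 10 seconds
        await client.stop_notify(SENSOR_CHAR_UUID)

if __name__ == "__main__":
    asyncio.run(main())
```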

Research into types of energy harvesting other than solar and wind has been going on for decades, but at the portable-device level the market has yet to take off. “It’s definitely a consideration, but I personally haven’t seen it really applied very successfully,” said Siemens’ Bauer.

Conclusion
Use cases for wearable devices and patches are growing rapidly as the technology evolves to capture more accurate data and process it into meaningful insights. Wearables can make everyday life easier, augment video gaming, assist athletes and people with disabilities, monitor health concerns, and much more. The next frontier may be choosing to implant a device for convenience, not just for medical reasons.

Related Reading
Med Tech Morphs Into Consumer Wearables
Smart watches, rings, and a growing array of patches are adding more functionality and being used across a growing set of applications.
Challenges Grow For Medical ICs
Making devices that are defect-free and able to withstand years of harsh environments is made more difficult by a combination of low volume and high complexity.


