Fashion and many other trends have a way of reappearing every few years. So we probably shouldn’t be surprised that smart glasses are doing the rounds once more, after a failed attempt by Google to popularize them over a decade ago. The difference this time round is that they’re not just more stylish – and arguably harder to distinguish from regular shades. They’re also packed with far more powerful technology, capable of tracking and recording their surroundings, and of letting the wearer ask AI about the things they can see around them.
This presents significant security and privacy risks for both smart glasses users and the people they interact with.
What are the privacy risks?
Anyone who’s ever lived in a city will be used to being monitored. Germany and the UK have among the highest numbers of CCTV cameras in the world. But when that monitoring is targeted and not based on informed consent, it can quickly get out of hand. Smart glasses give anyone the ability to record or take photos of strangers surreptitiously. Although they feature a small LED light, this can be covered up, and in any case may be difficult for bystanders to spot.
But that’s not all. Harvard University researchers have demonstrated how video taken via smart glasses and livestreamed to Instagram can be connected to AI. Algorithms identify faces and then pull information on those individuals from the internet. Suddenly that cool accessory becomes a powerful, portable surveillance device capable of empowering stalkers, bullies and fraudsters.
The bad news is that Meta may be looking to streamline the process with a controversial Name Tag feature. The social media giant has also come under scrutiny from regulators recently after reports revealed that some outsourced workers in Kenya were able to view highly sensitive images as part of their job to monitor users’ interaction with its AI platform. Even if users don’t have data monitored in this way, it might still be used to train AI models, according to an updated Meta privacy policy. And any voice recordings made after the “Hey Meta” wake word will be stored (along with transcripts) for up to a year by default.
When privacy risk becomes a security problem
This isn’t just about privacy. Any sensitive information shared with a public AI platform via a pair of smart glasses could theoretically be regurgitated to other users if they prompt it in the right way – a potential security risk if those users choose to exploit the information fraudulently. And then there are those outsourced workers and contractors who may stumble across information harvested by glasses, which they might decide to sell to scammers.
Information you might accidentally send to the cloud/AI model could include:
- Card PINs that you type in at ATMs or in-store payment terminals
- Passwords typed in at your desk or on your phone which could be used to hijack accounts
- Bank statements or bills with full details that could be used to impersonate you
There’s also a risk of nefarious smart glasses users shoulder surfing behind you in public, in order to steal PINs, passwords and other secrets. Combined with facial recognition technology, this data extraction may allow them to build up a sizeable digital profile on their targets. With enough detail they could either launch convincing phishing attacks, hijack your accounts or impersonate you in new account creation attempts.
Hacking the smart glasses ecosystem
Like any smart device, glasses could also be hacked more conventionally, by:
- Exploiting the operating system/firmware
- Hijacking connected apps/smartphones
- Intercepting traffic or injecting malicious content via fake Wi-Fi hotspots
- Social engineering, such as sending a malicious QR code to scan
- Malicious lookalike smart glasses apps
These attack vectors could in turn enable bad actors to hijack your device for direct data theft, account takeover, or surveillance that could put you in physical danger.
How to manage smart glasses risks
Whether you are wearing them or being observed by someone else, there are a few steps you can take to mitigate the risks we’ve outlined above:
For wearers:
- Keep your firmware and software (apps) updated to minimize the risk of hackers compromising the device
- Only download companion apps from trusted sources and check permissions before doing so
- Use multi-factor authentication (MFA) and strong, unique passwords for your smart glasses apps and smartphone to minimize the risk of them being hijacked by hackers
- Use strong PINs or biometrics to unlock your smart glasses, and switch off pairing mode when not in use
- Never connect to public Wi-Fi hotspots unless you also use a virtual private network (VPN), as some public networks may be insecure or may even be rogue access points set up by hackers
- Disable AI training/human review if possible to prevent leakage of recordings to the cloud and potential access by contractors
- Keep your glasses in a case when not in use to minimize the risk of them accidentally capturing sensitive images or information
- Regularly audit and delete any unwanted recordings stored in the companion app, to minimize risk exposure
- Don’t get distracted by AR overlays; losing track of your surroundings could put you in physical danger
For bystanders:
- Keep your eyes peeled for anyone wearing smart glasses. Look for the LED light on the frame; it will pulse if recording video or flash once when taking a photo
- Be mindful of shoulder surfing in crowded public spaces (e.g., on transport) or at ATMs
- Challenge wearers if you feel uncomfortable
- If you feel uneasy about usage in a business setting (e.g., a gym or high-street store), ask the wearer to remove their glasses or report it to management
Meta isn’t the only tech giant rolling out smart glasses. Google, Apple, Amazon and a host of Chinese players are said to be developing similar products. Unfortunately, many prioritize competitive advantage over users’ rights. Keep a close eye on developments to ensure your security and privacy don’t suffer as a result.