Two students just proved that Meta’s new smart glasses are not rose-tinted

By combining smart glasses with AI and face recognition software, two students have exposed something troubling.

Be honest, did you smirk a little when everyone was posting their ‘legal’ message to Instagram? You probably saw it doing the rounds; after all, it was one of the most viral trends the app has ever seen. Stories were flooded with a message reading “Goodbye Meta AI. Please note an attorney has advised us to put this on, failure to do so may result in legal consequences. As Meta is now a public entity all members must post a similar statement. If you do not post at least once it will be assumed you are okay with them using your information and photos. I do not give Meta or anyone else permission to use any of my personal data, profile information or photos”.

It sounds suitably official. It mentions an ‘attorney’ and lays down rules requiring every member to post a similar statement. It was shared far and wide by influencers with millions of followers, lending it instant credibility. Perhaps you even shared it yourself and congratulated yourself on protecting your account and, with it, your personal information. The problem with this message, however, is that it did absolutely nothing. Whilst there are ways to object to Meta using your data, they apply only in certain countries, and even then they may not be enough to protect you and your data.

Whenever these messages appear and I comment on them, I get the usual round of replies saying ‘well, if anyone thinks social media is private then they’re deluded’. I agree. Social media is not private, nor was it ever intended to be; a social network that no one ever saw would defeat its own purpose. We have all seemingly accepted that our data will be used for personalisation and advertising.

We’ve all had that unnerving moment where we discuss a possible purchase with our other half, only for adverts for that exact product to appear the next time we scroll. We’ve gone along with it because we want the entertainment these apps offer, and we’re willing to sacrifice our right to privacy to get it. But would we feel the same knowing that our personal information was being accessed not for advertising, but to be used against us in real time?

Meta recently released ‘smart’ glasses, aimed at utilising AI to ‘explore the world around us’. Two Harvard students, however, paired these glasses with software that uncovered personal data in real time. Wearing the glasses and walking past individuals, they were able to scan faces, run reverse image searches and uncover all kinds of personal information, which they then used to strike up conversations with complete strangers. Think of the potential for malicious use, a question this podcast also poses.

As the students’ post on X shows, the instant credibility this information confers is remarkable. Imagine a stranger approaching you to say they were at the conference you presented at last month, that they loved your paper, that they are really inspired by your charity work. Would you question it? Would you really say, ‘I don’t believe you were there, I think you’ve just scraped all my personal data using your specs’? Unlikely. In the video clip they shared, the students showcased mainly positive interactions (bar the unnerving moment when they uncover a young female student’s home address), but it is easy to see how this could have gone in a different direction.

Using this (already accessible) technology, I could call you pretending to be your bank and say, ‘I can see you just spent money in XYZ store and we need further details to correct an accidental second purchase’. Wouldn’t you believe me, given that I could describe everything about that purchase, having just witnessed it first hand? I could also sell or exploit this information, or put it to any number of inappropriate ends that you would know absolutely nothing about until the damage was done.

Whilst we may have smirked or rolled our eyes at people attempting to protect their data, the truth is that we all need protecting. These were two students mucking around, curious to see what the latest tech could do, using publicly available tools. Imagine what a tech corporation could do with billions in investment behind it (such as OpenAI’s $6.6 billion fundraising round). We are not being protected enough. We are not being made aware enough. We are not being educated enough.

I support companies every day in considering the ethical and sustainable impact of AI. This isn’t about being the ‘fun police’ or stifling innovation. It is about all of us being aware, being educated and understanding that posting to Stories is not enough. We need to push for fair, ethical and sustainable AI that serves us well, rather than threatening our personal security and, ultimately, our safety.

Are you confident you are using AI in an ethical and sustainable way? What about those around you? Let’s stop mocking those seeking protection from large corporations and start questioning why they, and all of us, need protecting in the first place.