September 18, 2023
Smart technology needs to start with people if it wants to get smarter
“My engineering students had come to class with technology on their minds.” So says artist and design researcher Sara Hendren, author of What Can a Body Do? How We Meet the Built World. It’s a fascinating book in which she consciously pushes back against the prevailing narrative that so-called smart technology has a fix for every problem. As a professor teaching design for disability at Olin College of Engineering, Massachusetts, Hendren draws attention to the normative assumptions that determine what counts as a ‘problem’ in the first place.
Paying attention to the actual needs of disabled people – as opposed to technologists who seek to impose solutions to ‘help’ them – she relates how a deaf person reframes ‘hearing loss’ as ‘deaf gain’; how a quadriplegic prefers to use cable ties attached to the stump of one of her limbs rather than the expensive, heavy robotic arm made for her; and how the very word “inclusion” establishes a category of ‘normal’ from which vast numbers of people are excluded.
Too often, we’re thinking of the solution before we’ve identified the problem, or even whether there is a problem at all. And like Hendren’s students, we start with technology on our minds.
This is no accident: the narrative is being peddled to us by corporations seeking to exploit us for commercial gain. First, they define the problem. Then they define the solution. Then they package up the benefits to make a proposition so compelling we feel we can’t do without it. Nowhere is this more prevalent than in data-driven ‘smart’ solutions.
‘Smart’ has become a ubiquitous label for living and working in the 21st century. It’s no longer just people who are smart. Objects are smart too. We have smart watches, smart fridges, smart speakers, and smart chewing gum. If you’re a Manchester City fan, you can buy a smart football scarf with a biometric sensor that measures your heart rate, body temperature and emotional responses. Smart shoes are no longer just something that might be considered fashionable or well-polished.
It’s the smart playbook: imply a problem; provide a data-rich solution; promise irresistible benefits. The scarf makes the club ‘more connected with its fans’. Shoes ‘help cut your running times and trim your waistline’. Following the COVID-19 pandemic, cleaning cobots that can ‘evidence their performance’ create ‘a safe and healthy work environment’, providing assurance ‘to bring people back to work’.
How smart are the solutions, anyway?
These data-driven solutions are smart for their makers. And they may be well-intentioned. But are they smart for us? How do we tell when technology is providing something truly useful? In her 1983 book More Work for Mother, historian Ruth Schwartz Cowan neatly shows how the revolution in white goods that promised to liberate women from household chores eventually left them struggling to keep up with ever higher standards of cleanliness. The promises were made to women; the benefits were felt by men, children and servants, whose work the machines actually replaced.
Where data is concerned, big tech has a lot to answer for. Two moments stand out in Shoshana Zuboff’s bestselling book The Age of Surveillance Capitalism, in which she relates how some global giants turned us from willing consumers of data into naive providers of data.
First was Google’s decision, in the heat of the dot-com crash, to switch its model from unbiased search results to results supported by advertisements. This was not their original intention: at the 1998 World Wide Web Conference, Google’s co-founders had argued for the importance of integrity in search results, warning: “We expect that advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.” Faced with an urgent need to generate revenues, however, the company launched AdWords within two years, Zuboff notes, converting user behaviour into valuable data to be auctioned to the highest bidder.
Second was Facebook founder and CEO Mark Zuckerberg’s announcement in 2010 (long before the Cambridge Analytica scandal that would flow from it) that the platform’s users no longer had an expectation of privacy: “We decided that these would be the social norms now, and we just went for it,” he said. What these and many other moments like them show is that ‘computational capitalism’ is capricious: promises made about your data today will not necessarily be honoured tomorrow.
A data-driven solution currently occupying many workplace managers is the digital twin: the reconstruction of a physical asset in digital form, generated using real-time data from an extensive network of sensors in an actual building or estate. Once the digital twin is set up and replicating the ‘real world’, the software version can be used to model and optimise building performance and efficiency, recommending changes to the real building and eventually, in fully autonomous mode, operating ‘to learn and act on behalf of users’.
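To make the idea concrete, here is a minimal sketch of that sense, model, recommend loop. It is purely illustrative: the zone names, sensor fields and thresholds are hypothetical, and a real digital twin would sit on top of a building management system, simulation engines and far richer data.

from dataclasses import dataclass

@dataclass
class SensorReading:
    zone: str             # e.g. "Floor 3, open plan" (hypothetical zone names)
    occupancy: int        # number of people counted in the zone
    temperature_c: float  # air temperature in Celsius
    co2_ppm: float        # CO2 concentration in parts per million

class DigitalTwin:
    """Keeps the latest reading per zone and suggests changes to the real building."""

    def __init__(self) -> None:
        self.state: dict[str, SensorReading] = {}

    def ingest(self, reading: SensorReading) -> None:
        # Mirror the 'real world' by storing the most recent reading for each zone.
        self.state[reading.zone] = reading

    def recommendations(self) -> list[str]:
        # Simple rule-based suggestions; a production twin would rely on
        # simulation or learned models rather than fixed, hand-picked thresholds.
        suggestions = []
        for zone, r in self.state.items():
            if r.occupancy == 0 and r.temperature_c > 19.0:
                suggestions.append(f"{zone}: unoccupied, consider reducing heating")
            if r.co2_ppm > 1000:
                suggestions.append(f"{zone}: CO2 high, increase ventilation")
        return suggestions

# Hypothetical readings, standing in for a live sensor feed.
twin = DigitalTwin()
twin.ingest(SensorReading("Floor 3, open plan", occupancy=0, temperature_c=21.5, co2_ppm=620))
twin.ingest(SensorReading("Meeting room B", occupancy=8, temperature_c=23.0, co2_ppm=1150))
print(twin.recommendations())

Even in this toy version, the crucial choices (which readings to capture, which thresholds count as a ‘problem’, whether the suggestions are ever acted on) are made by people, not by the data.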
While it makes sense to use technology to model real-world operations, would you really want it to be fully autonomous? Think how many reductionist assumptions go into capturing what goes on in a building when creating its digital twin. When human interaction is converted into digital format, what is lost between what actually happened and the data displayed? No matter how sophisticated the digital twin, the built environment still needs to work in the all-too-messy world of the human: a world of cash-strapped local authorities, risk-averse pension funds and competitive portfolio managers who don’t, won’t or can’t co-operate, no matter what the digital twin is telling them.
More widely, the battle is on to recreate our whole lives digitally in the form of the metaverse, the subject of an article in a recent issue of IN Magazine. If rendering a building in digital format lets us monitor and manipulate its performance in the real world, why not do the same with human beings and let them recreate themselves online? In this sense, the metaverse is simply a home for the data being collected by the smart watches, smart chewing gum and smart football scarves we’re using already. Why not simply plug all this disparate data into one overarching digital space and use these data-driven insights to improve our health, increase our resilience and optimise our own performance?
A word of caution
We should be extremely cautious of these developments – and I say that as a technology advocate.
We’ve allowed data to become conflated with action. In simple terms, we have become convinced that the insight data brings gives us power – and that the more data we have, the more power it gives us. While data can help us make informed decisions, it has no intrinsic value if those decisions don’t result in action – action that we may not choose, or may not be able, to take. Knowing we need to reduce our calorie intake or increase the amount of exercise we take is not the same as actually doing so.
Too often, we allow corporations to frame a problem; define the data needed to address it; choose the metrics to measure it; and repackage the benefits to us. Google recently stood accused of effectively halving the reported environmental impact of flying through a process not dissimilar to this. We’ve also forgotten to look at the costs and risks.
Virtual reality veteran Louis Rosenberg warns of the dangers of navigating an augmented or virtual metaverse where every action we take can be recorded in highly granular detail in order to manipulate us. Every time we linger outside a virtual shop; every item we glance at; the expression on our face; whether our heart rate or breathing increases; the emotions we experience … each one can be converted into data and auctioned to a retailer. How, then, to know whether the person you’re chatting with in the metaverse is real or a construct, using every bit of data it knows about you to tailor its conversation and sell to you?
While we already know that social media can be harmful, we risk even greater exposure to these harms. In the metaverse, interactions will be realistic and in real time, making the user experience that much harder to moderate. Women already report being sexually assaulted in the metaverse, according to The New York Post. The irony of a sophisticated digital universe should not be lost on us, given that its very existence depends on a growing number of human ‘ghost workers’, some of the most undervalued and exploited people in the digital economy. Someone has to review and moderate content that might be deemed unfit for public consumption. It’s their job.
And it shouldn’t be forgotten that cloud computing is expensive and environmentally damaging, creating a huge draw on the world’s finite supply of water, mineral and energy resources.
How should we respond? It’s not as if everyone involved in computational capitalism is either oblivious to the potential hazards that result from their work or else doesn’t care. Ethical guidelines do address concerns about algorithmic bias, social media harms, and the opacity of ‘black box’ artificial intelligence applications so complex it’s impossible for a human to understand them. But the voluntary codes, policies and governance documents produced by advisory bodies, governments, and commercial operators are all too often ‘ethics after the event’, swept along on the tide of technological determinism.
The one control that should be meaningful is regulation. But – rather like taxation – in a global market where online operators can choose their jurisdiction, it’s far too easy for technology businesses to work to the lowest applicable standard. In a discussion on Ethics and the Future of AI in Oxford last year, former Google CEO Eric Schmidt was contemptuous of EU legislation designed to safeguard its citizens, describing as “chilling” the idea that the regulation of AI would be drafted to protect people rather than “build huge wealth impact”. Introduction of the legislation in question – the EU AI Act – has been long delayed following extensive challenge and consultation.
Technology can and should play a part in creating a better world. But we need to be smart. That means starting with the humans, not the data; paying careful attention to the real needs of real people; and working the solutions back from there.
David Sharp is Founder and CEO of International Workplace, a learning provider specialising in health, safety and workplace management. He is currently studying on the Masters in AI Ethics and Society programme at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.
Images: Max Gruber from Better Images of AI