Getting back to basics in The Great Workplace Conversation

There’s nowhere near enough talk about our base instincts in the Great Workplace Conversation. Objectively speaking, we remain relatively highly evolved, communal and intelligent primates. And so we are driven by things we like to admit to – love, empathy and the Golden Rule. But also things we don’t care to admit to in quite the same way – status, jealousy and self-interest.

Often these two facets of our nature get tangled up. We will act out empathy to heighten our status among peers. And often we ascribe to others noble motives that may be rooted in something ignoble. As La Rochefoucauld wrote:

“Great and brilliant deeds that dazzle the onlooker are depicted by strategists as the result of great plans, whereas they are usually the result of temperament and passion. So, the war between Augustus and Antony, which is ascribed to their ambition to gain mastery of the world, may merely have been due to jealousy.”

We’re going to have to account for this kind of thing at some point in all of the talk of hybrid working, and not just to acknowledge the resentment among the majority of people for whom all the chatter about it, four-day weeks and the rest isn’t even relevant.

Status will perhaps be the most important of the shady motivators of human behaviour. It is possible to display status remotely, but I suspect we are about to discover new ways of making clear just where we stand in the pecking order.

Just as executives became more creative when it became clear that mahogany desks, high back leather chairs and private offices were likely to be interpreted as a sign of inadequacy (often on good grounds), so too will we see the emergence of new status symbols in the new ways of working.


The Great Eye

One of the ways this might happen is to be among those people left unwatched by the Great Eye of surveillance and productivity measurement, which has been awakened from its slumber.

Personality tests are back on the menu, boys.

As this piece in The New York Times makes clear, Myers-Briggs may not be rehabilitated, but firms are looking at new ways of categorising people so they know how to manage them, or not.

“Today, there’s mounting pressure on companies to gather those perspectives on their workers, as executives wrestle with costly decisions about whether to require in-person office work or even keep office space. At the very least, personality testing can give companies the vocabulary to talk about how their workers like to socialize: whether they crave water cooler banter, or dread the holiday party.”

And sometimes, questionnaires, tests and interviews aren’t enough. We are already being asked to consider how we might respond to companies having a live feed into our minds as part of a neural interface. One legal expert has already suggested that we might want to establish the notion of cognitive liberty in law to anticipate such tech, which, based on our recent experience with the shock of AI, could arrive any time soon.


AI caramba

Talking of which, the hot takes on AI continue to come thick and fast, some of which might be useful or at least informed.

One of them is this piece from The Conversation, which explains how one of the problems we have in addressing the changes that AI will bring about is the way we anthropomorphise technology. We shouldn’t ascribe to it abilities and characteristics that are largely in our own heads.

“Popular culture has primed people to think about dystopias in which artificial intelligence discards the shackles of human control and takes on a life of its own, as cyborgs powered by artificial intelligence did in “Terminator 2.”

“Entrepreneur Elon Musk and physicist Stephen Hawking, who died in 2018, have further stoked these anxieties by describing the rise of artificial general intelligence as one of the greatest threats to the future of humanity.

“But these worries are – at least as far as large language models are concerned – groundless. ChatGPT and similar technologies are sophisticated sentence completion applications – nothing more, nothing less. Their uncanny responses are a function of how predictable humans are if one has enough data about the ways in which we communicate.”

The same calls for us not to be fooled into believing AI is something it is not are repeated here by James Bridle, and in this Twitter thread from William Eden, who also highlights other constraints on the expansion of AI and its computing power, including the availability of hardware and a lack of profitability.

We must also keep an eye on its hunger for energy, as we should with all tech. As this piece in Bloomberg suggests, we need to find out soon just how much resource this stuff will consume. It will change everything, but we must also be careful about the ways in which that happens.