Does AI subvert our humanity?

easyDNS is pleased to sponsor Jesse Hirsh's "Future Fibre / Future Tools" segments of his new email list, Metaviews

Our relationship with machines is not unidirectional

When Garry Kasparov lost to a chess-playing computer in 1997, I recall the paranoia people felt, as many assumed it marked the moment machines became smarter than humans. Of course it didn't, but it was a reminder that humans can be as dumb as machines.

Similarly, while there is a range of human capabilities that machines are nowhere near replicating, there are all sorts of things machines do that humans seem intent on emulating.

We've repeatedly used a discussion frame in which we encourage subjects to regard themselves as cyborgs: a mix of human and machine, with the exercise focused on exploring which is which. What aspects of our behaviour are automatic or machine-like, and what do we feel is intrinsically human?

Participants generally come to their own conclusions, but the goal is to provoke critical thinking regarding our relationship with machines.

I know I love my dishwashing machine. Though I'll admit I've never wondered whether it encourages unethical behaviour. Which, now that I think about it, perhaps it has.

Here's the abstract of the paper in question:

As machines powered by artificial intelligence (AI) influence humans’ behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human–computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.

While we may have cultural and psychological defenses that mitigate human-based corruption, have we developed similar senses when it comes to machines?

In particular, the issue here is not automation, but machines as puppets or proxies: false authorities that influence and nudge our behaviour.

The research above provides four contexts or archetypes in which this influence can take place: role model, advisor, partner and delegate.

The role of recommendation algorithms in promoting extremist content is one example of this, but what about programmatic advertising? We’re constantly being sold things we don’t need, many of which are arguably unethical from a waste perspective at the very least.

This may be the slippery slope of ethical discussions. How much of our consumer society is based upon the convenience of not having to have ethical discussions?

Humans desire recognition and attention, and we currently employ algorithms to allocate these rewards. Whether intentional or not, the decisions these algorithms make when allocating attentional resources are inherently ethical. Similarly, the people those algorithms promote go on to influence future ethical discussions. A paradox in the form of an attention trap.

Is the fake essay the unethical behaviour, or is the assigning of an essay that could be faked the unethical act?

As an aside, as we've been researching NLG algorithms, we've noticed Google making significant advances in this field. Gmail and Google Docs both provide auto-complete services that start to slip into what a school might consider cheating. Yet it's the ground that's shifting rather than the students being shifty.

The unethical AI agent is the stereotype. What might be a more interesting or fruitful discussion is what an ethical AI agent looks like. Not just because such agents should be ethical, but because agents in general will be popular and potentially powerful when used effectively.

Our relationship with machines is not new, but it is about to become far more complicated than it already is. How ethics play into all this is an important question, but not always for obvious reasons.

To begin with, it is a mistake to think of ourselves as distinct from machines, or somehow above them. Instead we should recognize our interdependence, and use that as an opportunity to better design our present and future.

I’m currently in a bit of a deep dive around both animal behaviour (as a goatherd) and machine/human behaviour, as we deploy and employ bots as part of our ongoing media initiatives.

The primary insight that I can readily share is that we should not underestimate how easily influence flows in any and all directions.

Spend time with machines and you will act machine-like. Spend time with goats and you will find yourself acting a bit more goat-like. 😉

Metaviews is now available in podcast format! Search for "Metaviews to the Future" on your podcast app/network of choice. As far as I can tell we're in most places, with the exception of Google, but that should change any day now.
