Our relationship with machines is not unidirectional
When Garry Kasparov lost to a chess-playing computer in 1997, I recall the paranoia people felt as they assumed this marked the moment when machines became smarter than humans. Of course it didn’t, but it was a reminder that humans can be as dumb as machines.
Similarly, while there is a range of human capabilities that machines are nowhere near replicating, there are all sorts of things machines do that humans seem intent on emulating.
We’ve repeatedly used a discussion frame in which we encourage subjects to regard themselves as cyborgs: a mix of human and machine, with the exercise focusing on exploring which is which. What aspects of our behaviour are automatic or machine-like, and what do we feel is intrinsically human?
Participants generally come to their own conclusions, but the goal is to provoke critical thinking regarding our relationship with machines.
I know I love my dishwashing machine. Though I’ll admit I’ve never wondered whether it encouraged unethical behaviour. Which, now that I think about it, perhaps it has.
Our review on how #AI can corrupt human ethical behavior just got published in @NatureHumBehav.
Super excited about it!!!
With a true dream team of @iyadrahwan @JFBonnefon
You can read it here: https://t.co/eBF0xhLd8j
P.s. coincidentally it drops right during #UNGASS2021— Nils Kobis (@NCKobis) June 3, 2021
Here’s the abstract:
As machines powered by artificial intelligence (AI) influence humans’ behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human–computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.
While we may have cultural and psychological defences that mitigate human-based corruption, have we developed similar senses when it comes to machines?
In particular, the issue here is not automation, but machines as puppets or proxies: false authorities used to influence and nudge our behaviour.
The research above provides four contexts or archetypes in which this influence can take place:
ADVISOR:
AI now often gives us advice about how to invest, consume news, even write emails. These AI advisors may recommend unethical actions, e.g. buying products more harmful to the environment. pic.twitter.com/Q4dflezsXj— Iyad Rahwan (@iyadrahwan) June 3, 2021
The role of recommendation algorithms in promoting extremist content is one example of this, but what about programmatic advertising? We’re constantly being sold things we don’t need, many of which are arguably unethical from a waste perspective at the very least.
This may be the slippery slope of ethical discussions. How much of our consumer society is based upon the convenience of not having to have ethical discussions?
ROLE MODEL:
Alternatively, we may observe AI agents behaving in unethical ways (e.g. spreading misinformation on social media, or manipulating markets) and be tempted to imitate them. pic.twitter.com/m8Is196XSr— Iyad Rahwan (@iyadrahwan) June 3, 2021
Humans desire recognition and attention, and we currently employ algorithms to allocate these rewards. Whether intentional or not, the decisions these algorithms make when allocating attentional resources are inherently ethical. Similarly, the people they promote shape future ethical discussions. A paradox in the form of an attention trap.
PARTNER IN CRIME:
AI systems may alternatively tempt us to engage with them in unethical behavior. E.g. we may engage with AI algorithms in market collusion, or we can co-author fake essays with a text generation algorithm. pic.twitter.com/NxQyVjKHck— Iyad Rahwan (@iyadrahwan) June 3, 2021
Is the fake essay the unethical behaviour, or is the assigning of an essay that could be faked the unethical act?
As an aside, as we’ve been researching NLG algorithms, we’ve noticed Google making significant advances in this field. Gmail and Google Docs both provide autocomplete services that start to slip into what a school might consider cheating. Yet it’s the ground that’s shifting rather than the students being shifty.
DELEGATE:
Perhaps most seriously, we may be more willing to delegate unethical behavior to AI agents, who “do the dirty work” on our behalf. By distancing ourselves from the crime, we may be more tempted to be unethical. pic.twitter.com/5coFrn42Ej— Iyad Rahwan (@iyadrahwan) June 3, 2021
The unethical AI agent is the stereotype. What might be a more interesting or fruitful discussion is what an ethical AI agent looks like. Not just because such agents should be ethical, but because agents in general will be popular and potentially powerful when used effectively.
Our relationship with machines is not new, but it is about to become far more complicated than it already is. How ethics play into all this is an important question, but not always for obvious reasons.
To begin with, it is a mistake to think of ourselves as distinct from machines, or somehow above them. Instead we should recognize our interdependence, and use that as an opportunity to better design our present and future.
The article is a call-to-arms for behavioral scientists to study how machine behavior may influence or enable unethical human behavior.
It is part of a broader endeavor to study all forms of machine behavior, which was outlined in an earlier review:https://t.co/Y4O7xxaCvh
— Iyad Rahwan (@iyadrahwan) June 3, 2021
I’m currently in a bit of a deep dive around both animal behaviour (as a goatherd) and machine/human behaviour, as we deploy and employ bots as part of our ongoing media initiatives.
The primary insight that I can readily share is that we should not underestimate how easily influence flows in any and all directions.
Spend time with machines and you will act machine like. Spend time with goats and you will find yourself acting a bit more goat like. 😉
Metaviews is now available in podcast format!? Search for “Metaviews to the Future” on your podcast app/network of choice. As far as I can tell we’re most places, with the exception of Google, but that should change any day now.