The algorithm wants you to send nudes

easyDNS is pleased to sponsor Jesse Hirsh's "Future Fibre / Future Tools" segments of his new email list, Metaviews

Social impacts of algorithmic bias

The discussion and understanding of algorithmic bias is still in its infancy. Arguably the same is true of the study of cognitive bias in general, and of the subjective nature of reality.

However, algorithms may be an easier context in which to study bias, as unlike humans, they're objects, not really subjects. If we accept that algorithms are tools made by humans, and not magic evoked from the ether, then we can examine their design flaws and social impacts.

In this regard, the critical study of algorithms is advancing, as there is a growing body of research that analyzes the impact of algorithmic bias, and how to identify it.

Sometimes what is identified is not surprising but still disturbing.

Sarah is a food entrepreneur in a large European city (the name was changed). The company she created helps women feel at ease with their food intake and advocates “intuitive eating”. Like many small-business owners, Sarah relies on social media to attract clients. Instagram, Europe’s second-largest social network after Facebook, is a marketing channel she could not do without, she said.

But on Instagram, which is heavily oriented towards photos and videos, she felt that her pictures did not reach many of her 53,000 followers unless she posed in swimwear. Indeed, four of her seven most-liked posts of the last few months showed her in a bikini. Ely Killeuse, a book author with 132,000 followers on Instagram who agreed to speak on the record, said that “almost all” of her most liked pictures showed her in underwear or bathing suits.

It could be the case that their audiences massively prefer to see Sarah and Ely in bathing suits. But since early 2016, Instagram has arranged the pictures in a user's newsfeed so that the photos a user "cares about most will appear towards the top of the feed". If the other pictures Sarah and Ely post are less popular, it could be that they are not shown to their followers as much.

Which photos are shown and which are not is not just a matter of taste. Entrepreneurs who rely on Instagram to acquire clients must adopt the norms the service encourages in order to reach their followers, even if these norms do not reflect the values they built their businesses on, or those of their core audience and clients.

This is an important point. The logic of the algorithm, combined with the way it influences user behaviour, produces social norms that must be followed in order to find (economic) success.

We touched upon this in an earlier issue on the political economy of influencers, but this research goes deeper, and connects to similar research on "engagement bias", which sees algorithms promoting anything that increases engagement. In this case it's nudity; in others (like YouTube) it's extremist content.

Here’s more from this study about the engagement logic:

In a patent published in 2015, engineers at Facebook, the company that runs Instagram, explained how the newsfeed could select which pictures to prioritize. When a user posts a picture, it is analyzed automatically on the spot, according to the patent. Pictures are given an “engagement metric”, which is used to decide whether or not to show an image in the user’s newsfeed.

The engagement metric is partly based on past user behavior. If a user liked a specific brand and a photo shows a product of the same brand, the engagement metric increases. But the engagement metric can also be computed based on past behavior from all users of the service. The patent specifically states that the gender, ethnicity and “state of undress” of people in a photo could be used to compute the engagement metric.

While Instagram claims that the newsfeed is organized according to what a given user “cares about most”, the company’s patent explains that it could actually be ranked according to what it thinks all users care about. Whether or not users see the pictures posted by the accounts they follow depends not only on their past behavior, but also on what Instagram believes is most engaging for other users of the platform.
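To make the patent's logic concrete, here is a minimal sketch of how such an "engagement metric" could rank a feed. Everything in it, the weights, the feature names, the numbers, is a hypothetical illustration of the mechanism described above, not Instagram's actual code.

# Hypothetical sketch of the patent's ranking logic; all names, weights and
# numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    label: str
    liked_brand_match: float   # per-user signal: does the photo match brands this user liked?
    global_engagement: float   # platform-wide prediction (0..1); per the patent this can
                               # factor in gender, ethnicity and "state of undress"

def engagement_metric(post: Post, personal_weight: float = 0.4, global_weight: float = 0.6) -> float:
    # The score blends what THIS user seems to care about with what the platform
    # predicts ALL users engage with, so other people's behaviour shapes your feed.
    return personal_weight * post.liked_brand_match + global_weight * post.global_engagement

feed = [
    Post("recipe photo", liked_brand_match=1.0, global_engagement=0.2),
    Post("bikini photo", liked_brand_match=0.0, global_engagement=0.9),
]
feed.sort(key=engagement_metric, reverse=True)
for post in feed:
    print(post.label, round(engagement_metric(post), 2))

Under these made-up weights the bikini photo (0.54) outranks the recipe photo (0.52) for every follower, even one whose own behaviour favours the recipes, which is exactly the dynamic Sarah and Ely describe.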

This invisible dance between audience and algorithm merits greater scrutiny and study, particularly given the precarious position it puts creators in. Women especially face pressure to show skin, but are punished if they show too much.

Instagram’s guidelines state that nudity is “not allowed” on the service, yet the platform favors posts that show skin. The subtle difference between what is encouraged and what is forbidden is decided by unaudited, and likely biased, computer vision algorithms. Every time they post a picture, content creators must tread a very fine line between revealing enough to reach their followers and not revealing so much that they get booted off the platform.

A 2019 survey of 128 Instagram users by the US magazine Salty showed that abusive removal of content was common. Just how common such occurrences are, and whether People of Color and women are disproportionately affected, is impossible to say as long as Instagram’s algorithms remain unaudited.

However, a review of 238 patents filed by Facebook containing the phrase “computer vision” showed that, of the 340 people listed as inventors, only 27 (under 8 percent) were women. Male-dominated environments usually lead to outcomes that are detrimental to women. Car seat-belts, for instance, have historically been tested mainly on dummies modeled on male bodies, leading to higher rates of injury for women. Our research shows that Facebook’s algorithms could follow this pattern.

Overall this is an important study, but it is arguably incomplete. The findings have been dismissed as statistically insignificant because the researchers’ sample is tiny compared to the data the company possesses.

Of course no social media platform is going to enable or permit research that might impact its bottom line. As a result, crowdsourced research is the only option.

If you’re an Instagram user, you might consider participating in their study.

It is also worth noting that this issue of algorithmic bias is not limited to the software itself; through use of and reliance upon that software, the bias can come to encompass the entire platform and the company behind it.

During an internal presentation at Facebook on Wednesday, the company debuted features for Facebook Workplace, an intranet-style chat and office collaboration product similar to Slack.

On Facebook Workplace, employees see a stream of content similar to a news feed, with automatically generated trending topics based on what people are posting about. One of the new tools debuted by Facebook allows administrators to remove and block certain trending topics among employees.

The presentation discussed the “benefits” of “content control.” And it offered one example of a topic employers might find it useful to blacklist: the word “unionize.”

After a modest protest by Facebook employees, the company spun this as a careless, unauthorized attempt at humour, but the functionality remains, and clearly there are ideas about how it can be used.

As much as I enjoy maligning Facebook (which owns Instagram), we must acknowledge that this is endemic to the sector as a whole.

TikTok is my guilty social media pleasure and I am consistently shocked by what I see on the platform. If Instagram has a bias towards partial nudity, TikTok definitely has a bias towards violence and soft porn. This is probably a result of algorithmic effects similar to those the Instagram study describes; however, it is also a result of far looser content moderation, which allows content on TikTok to survive several hours longer than it would on other platforms.

We should also not rule out that TikTok has different geopolitical priorities, as there’s tons of political content, especially the kind that further polarizes and sensationalizes. My geographic position in a rural community results in TikTok serving me tons of redneck content, including an alarming amount of (very popular) fantasies involving truck drivers driving their rigs (or pickups) into crowds of protesters.

Perhaps this is why we need to be careful that in our critical assessment of algorithms we do not succumb to the hubris and arrogance that might make us think bias can be removed or avoided. The goal should not be elimination of bias (arguably impossible) but rather the identification and mitigation of it.

Having a subjective (or biased) perspective can be a good thing in certain contexts. Similarly, having an agenda or political imperative can also be important and essential. There are always benefits, and there are always costs; we should understand these factors, not ignore or suppress them.

It is important therefore to remember that the current dialogue around algorithmic bias is in the context of monopoly, where choice is restricted if not unavailable.

Instead what if algorithmic bias was situated in the context of agency and choice?

When newspapers were in their prime, their biases were clear, and the marketplace was competitive, so readers could choose which newspaper to read based on their desire for a particular bias. Newspapers remain biased, although there’s no longer any choice (or competition).

What if, instead of the futile pursuit of eliminating bias in algorithms, we offered people choices about which biases their algorithms possess? On a basic level this is already happening, as effective (and expansive) algorithms are able to personalize themselves to the interests of the user. However, in a black box system the user cannot scrutinize which biases the algorithm is using. What if all of that was transparent?
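As a thought experiment, here is a minimal sketch of what that could look like: the ranking weights are exposed as a plain profile the user can read and edit, rather than hidden inside a black box. The feature names and numbers are invented purely to illustrate the idea.

# Hypothetical sketch of a transparent, user-chosen ranking profile;
# every feature name and weight here is invented for illustration.
from typing import Dict

PLATFORM_DEFAULT: Dict[str, float] = {
    "skin_exposure": 0.6,    # the engagement bias the platform would otherwise apply
    "topic_relevance": 0.3,
    "recency": 0.1,
}

MY_PROFILE: Dict[str, float] = {
    "skin_exposure": 0.0,    # this user opts out of that bias entirely
    "topic_relevance": 0.7,
    "recency": 0.3,
}

def score(post_features: Dict[str, float], profile: Dict[str, float]) -> float:
    # Transparent scoring: the user can see exactly which biases apply and by how much.
    return sum(weight * post_features.get(name, 0.0) for name, weight in profile.items())

post = {"skin_exposure": 0.9, "topic_relevance": 0.2, "recency": 0.5}
print("platform default:", round(score(post, PLATFORM_DEFAULT), 2))
print("my profile:      ", round(score(post, MY_PROFILE), 2))

The point is not the specific weights but that they are visible and adjustable; that is what would let people choose algorithms that reflect their values.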

This is why algorithmic transparency matters far more than algorithmic bias; a focus on bias alone serves to entrench proprietary and secretive technology. We can and should expose the racist and misogynist algorithms so that we can choose algorithms that reflect our values and desires.

How could we do that if we didn’t know the logic or methodology of the algorithm (and those who created it)?
