John and Mary didn’t speak to each other as they drove home from the store. The icy silence wasn’t animosity — at least, not with each other. But it almost felt like it. They weren’t thinking about preparing for John’s business trip anymore.
Eventually, it was John who broke the silence. “She was probably just trying to help.”
This didn’t seem to help.
“Are you serious, John?” said Mary. Her voice cracked slightly on “serious.” “Have you learned anything at all about women since we’ve been married?”
“Well, clearly not. Explain it to me.”
“There is NO REASON she should be recommending condoms for your travel bags. I was literally RIGHT THERE!”
“Not a logical reason. But maybe she’s just trying to sell more stuff.”
“By pissing off the customers?”
What I am depicting here is a kind of interaction that needs a name, but — to my knowledge — does not yet have one. The scenario is fictional, but the interaction is real; I have experienced it in my own life. A few months after I was married, I was advised by a family member in the medical industry to get my Human Papillomavirus (HPV) vaccine. HPV is a sexually transmitted virus, and the vaccine is not recommended for people older than 26. As it turns out, the reason it isn’t recommended over the age of 26 is that it isn’t believed to be cost-effective. Most people “get it on” and make their mistakes in their late teens and early twenties. Twenty-six seems to be the average age when people stop fooling around and enter into long-term relationships (like marriage)… as was the case with me.
Yet my nurse family member was advising me to get the vaccine because I was still 26, still inside the window before my 27th birthday. Never mind that I had literally just gotten married.
I was slightly annoyed at the time — as I am by all medical nags — and my wife was more seriously offended at my family member’s implied doubts about our fidelity to each other. But what I realize now is that I wasn’t actually being addressed by a person. I was — in effect — being addressed by an algorithm speaking through a person.
The hypothetical story of John and Mary in the convenience store is the same scenario: someone trying to sell a product — under the beneficent guise of “protection” — casually insults customers by implication. Behind it lies a sales pipeline, a sort of flowchart of “best practices,” or simply patter that — when spoken to enough people — increases sales by some percentage. Protection, too, is measured by percentage, rather than on an individual basis. Presumably, it would be good for everyone, even in their 30s or 40s, to get the HPV shot, except that Harvard deems it uneconomical. Doctors — as well as customer service people — are trained to speak this way. The worst are trained to think this way as well.
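To make the idea concrete, here is a minimal sketch of the sort of script I have in mind. Everything in it is hypothetical (the products, the lines, the function names), invented purely for illustration:

```python
# A hypothetical "best-practices" upsell script: a fixed lookup that fires
# on product category alone, blind to who is standing at the register.

UPSELL_SCRIPT = {
    "travel_toiletries": "Don't forget protection for the trip!",
    "wine": "Planning a party? Our cheese selection is on aisle 3.",
}

def clerk_patter(basket: list[str], customer_context: dict) -> list[str]:
    """Return the scripted lines for a basket of items.

    Note what never happens here: customer_context is never read.
    """
    return [UPSELL_SCRIPT[item] for item in basket if item in UPSELL_SCRIPT]

# John and Mary, shopping together for his business trip:
print(clerk_patter(["travel_toiletries"], {"shopping_with": "spouse"}))
# -> ["Don't forget protection for the trip!"]  (the context made no difference)
```

The script improves sales in aggregate, and the aggregate is all it can see.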
If humans truly were rational animals, understanding the algorithmic origin of the insinuating question would remove any offense. To borrow from Scott Adams, if a robot insults you, you don’t feel insulted because you know it’s a robot.
But we humans are not rational animals. We have our rational moments, but our cognitive home is in emotional heuristics. We interpret meaning based upon association, not semantics. Moreover, attention dictates importance; in our heads, whatever we spend the most time thinking about, we consider to be the most important. We can tell ourselves that it’s just an algorithm, that the salesperson is just trying to keep us safe, or that the doctor is just trying to sell a vaccine. But in the back of our minds, we are still being reminded of the possibility of infidelity. If the soul is dyed the color of its thoughts, then the things we can be reminded of can literally warp our souls.
And where algorithms are concerned, the scale of attentional capture is — in theory — limitless.
In his essay “Algorithmic Governance and Political Legitimacy,” Matthew Crawford criticizes the way algorithmic rule seems to separate us from our supposed “representatives,” insulating them from criticism behind a veneer of objectivity. But these algorithms don’t just politically isolate us. They actually dehumanize us.
Asking a newlywed to get an STD vaccine before a statistically determined age is not the sort of question you ask a human with emotions. It’s also not the sort of question a human with empathy is likely to ask. It’s the sort of question one might expect from an algorithm — or which one might carelessly ask of an inanimate object, like a computer.
When we wrap our heads around what’s going on here, offense should be the least of our concerns. An algorithm like this could of course be tweaked and refined to navigate around human emotions, and even to manipulate them.
One gets the distinct impression that we might already be well into that stage of the process.
Rather, I think the more serious danger is that of dehumanizing ourselves.
By “dehumanization,” I do not mean looking down on others as if they were less than human. I mean we might literally be turning each other and ourselves into sub-human beings.
All of us are composites of our environments. We are byproducts of the people and experiences and relationships in our lives. One might say that algorithms — from social media, or from a corporate sales best-practices sheet — are simply a new addition to this mix. Certainly, they don’t erase the other components of our composite existence. But there is a difference in kind between the ordinary external make-up of our identity and algorithms that function through people, and I think this distinction is visible in the kinds of conversations we end up having with people who are operating as skin-suits for algorithms.
Ordinary people have a wide variety of experiences that compose who they are. They are, in some sense, “choosing” which parts to take and which parts to ignore. They might lean into a traumatic event, embracing it as a part of their identity, or they might treat that event as a springboard for resilience and growth in a different direction. In either case, the trauma was formative, but the manner in which it shaped their character is not necessarily predictable. This process of shaping an identity out of experience is organic, and it is done by the individual in question. That is what makes it feel as if you are talking to “a person” when you speak to someone. It also gives hope that your own unique identity might be perceived, because such an organic individual is dynamic, and you are now a part of their surroundings.
By contrast, algorithms are designed from the top down. The connection between “organic” and “bottom-up” is more than a convention of speech; it describes how a thing comes to be. Things designed from the top down feel artificial — they are “made things,” artifice. They have a set purpose, and because of that top-down design, they cannot really perceive the way living organisms do. They perceive only what relates to the logic-tree of the algorithm.
The pinnacle of such a dehumanizing trajectory might be pictured as follows. Instead of Mary getting upset with the clerk (and with her husband) for suggesting condoms for his packing list, imagine it is Mary herself who puts them in. We might imagine this hypothetical Mary in a Trojan ad on TV: “I trust my husband to stay loyal to me, but on vacation, [smiles and shrugs] things happen. And no matter what happens, I want my husband to come home safe.”
You can’t really ask ‘what’s wrong with that logic?’ because the answer is nothing. If your spouse is going to cheat, wouldn’t you rather reduce your risks of venereal diseases? By condom or by vaccine?
But this kind of logic is how machines think. It isn’t how humans think.
The bonds that form human identity are dynamic, just as the identity they define is organic. And our thinking reflects this. Marriage isn’t just held together by contract. It is held together by psychology. All of the people who gather together on a wedding day represent peer pressure. They are there to help you push through the difficult times. They are there so that whenever you think about leaving your partner or otherwise exploding your marriage, you aren’t just letting down your partner — at the back of your mind, you’re letting down all of them as well. And years later (we hope), we’re grateful for the assistance.
Algorithmic risk-calculation cannot account for this kind of relationship. It sees only possibilities, through the lens of statistical analysis. From this perspective, divorce — like cheating — is a possibility. So how stupid would you be not to prepare for it?
But don’t you undermine your marriage if you do hedge against it?
In the financial world, “shorting” something (betting against it) is considered something of a dick move. This is because in a psychological world like the stock market, bets can easily become self-fulfilling prophecies. Get enough people to short a stock, and the consensus will soon become “everyone thinks this is going down, so it probably is.” More people sell, the value of the stock plummets, potential investors withhold their investments, and the stock begins to collapse.
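A toy simulation makes the spiral visible. Every number in it is an assumption of mine, chosen only for illustration; nothing here models a real market:

```python
# Illustrative feedback loop: visible short interest persuades holders to
# sell, selling pushes the price down, and the falling price attracts
# more shorts. All parameters are invented.

price = 100.0
short_interest = 0.05  # fraction of the market betting against the stock

for step in range(1, 6):
    selling_pressure = 0.5 * short_interest   # sellers follow the shorts
    price *= (1 - selling_pressure)           # net selling lowers the price
    short_interest = min(1.0, short_interest * 1.8)  # the drop draws more shorts
    print(f"round {step}: price={price:6.2f}, short interest={short_interest:.0%}")
```

Five rounds in, a stock that nothing fundamental has touched has lost nearly half its value, purely because the bet against it was visible.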
Why would we not think the same principle applies in marriage? If you hedge against your marriage, why would your partner not begin to hedge also? Why would they not “keep their options open”?
I use marriage as a particularly emotional example, but the way algorithmic logic dehumanizes us — by treatment and by assimilation — extends to every aspect of our lives (politics is probably where it is most visible). It should concern us that we are treated as depersonalized statistics in a flowchart, rather than as individuals. The trade-off we are being offered is a degree of intimacy for a degree of “risk management.” But as we become more dependent upon these “fail-safes,” we run up against a wall of diminishing returns, because all of the insurance measures and Plan-B options reduce the need for skill, which is our primary tool for thriving in the face of risk. Less skill will require exponentially more safety nets, in a downward spiral of dependence and planned living that threatens to make us complete passengers in a system that, at the end of the day, needs at least some people to be drivers.
Meanwhile, the degree of intimacy sacrificed also leads into a spiral of mutual hedging.
There is a classic game-theory scenario called the “prisoner’s dilemma,” in which a guard attempts to coerce two prisoners into ratting on each other by promising freedom to a lone rat and mitigated sentences if both rat. To win the prisoner’s dilemma, the prisoners need mutual trust. There is probably no better way to destroy that trust than by introducing a small hedge: say, a statistic on the general odds that a prisoner will betray their partner. From there, it’s a small step to dramatically increasing those very odds, which can then be reported in turn.
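A toy calculation shows how little it takes. The sentences below are the classic payoffs; the “guilt cost” for betraying a loyal partner is my own assumption, added so that trust carries any weight at all:

```python
# Classic prisoner's-dilemma sentences, plus one assumed ingredient: a
# psychological "guilt cost" for ratting on a partner who stayed loyal.
# All numbers are illustrative.

SENTENCE = {  # (my_move, partner_move) -> my sentence in years
    ("silent", "silent"): 1,
    ("silent", "rat"): 10,
    ("rat", "silent"): 0,
    ("rat", "rat"): 5,
}
GUILT = 6  # assumed years-equivalent cost of betraying a loyal partner

def utility(my_move: str, p_betray: float) -> float:
    """Expected utility (negative years) given a belief that the partner
    rats with probability p_betray."""
    u = -(p_betray * SENTENCE[(my_move, "rat")]
          + (1 - p_betray) * SENTENCE[(my_move, "silent")])
    if my_move == "rat":
        u -= (1 - p_betray) * GUILT  # guilt applies only if the partner was loyal
    return u

# Quote the prisoner a "betrayal statistic" and watch the rational choice
# flip once the believed odds cross about 50%:
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    choice = "rat" if utility("rat", p) > utility("silent", p) else "stay silent"
    print(f"believed odds of betrayal: {p:.2f} -> best response: {choice}")
```

In other words, the quoted statistic does not merely describe the odds of betrayal; above a threshold, it sets them.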
Unrelated: have you heard that 50% of marriages end in divorce?
Never mind the source. I’m sure divorce lawyers would have no ulterior motive in manipulating statistics.
Trust the science.
Jokes aside, it is of course a dramatically misleading statistic, since it counts marriages rather than people, letting serial divorcés weigh in once per remarriage. Looking at the odds of a first marriage ending in divorce, things are a lot more optimistic: the real number is somewhere between 20% and 30%.
But I wonder whether bandying about the “50%” statistic has added to that number…
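To see the sleight of hand in miniature, take a made-up population of ten people (the numbers are invented; only the arithmetic is the point):

```python
# A made-up ten-person population showing how marriage-level and
# person-level divorce rates diverge.

never_divorced = 7   # one marriage each, still intact
serial_divorces = 3  # three marriages each, all ended in divorce

marriages = never_divorced * 1 + serial_divorces * 3  # 7 + 9 = 16
divorces = serial_divorces * 3                        # 9

print(f"divorces per marriage: {divorces / marriages:.0%}")  # 56% -- the headline
print(f"people whose first marriage failed: "
      f"{serial_divorces / (never_divorced + serial_divorces):.0%}")  # 30%
```

More than half of the marriages fail, yet seven of the ten people keep their first spouse for life.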
In his book The World Beyond Your Head, Crawford advocates for the conceptualization of an “attentional commons,” and for respecting a new right within this commons: the right not to be addressed. He does not frame this as a right against being addressed by other humans, but as a right against marketing and algorithms, which can scale infinitely. His concern was the destruction — or, rather, seizure — of our attentional bandwidth. But I would add that perhaps this right should cover people who are operating as agents of an algorithm or top-down system. They are essentially not-human while operating in this capacity, and their purpose is to incorporate other people into that system as well.
This leads them to say horrific and even evil things, and they are invariably surprised — even upset — when it is pointed out how nasty they are acting.
But of course, it isn’t really a person acting. It’s an algorithm. And such things seem best avoided as much as possible, for the sake of preserving our own relationships, identity, and humanity.