What Is the Argument Against Replacing Human Professionals with Qualified AI Professionals?

I have not read New Laws of Robotics: Defending Human Expertise in the Age of AI by Frank Pasquale, but I did read this interview with him.

Perhaps because I only read the interview and not the book, I do not understand his argument for the first new law: “Digital technologies ought to ‘complement professionals, not replace them.'” At first, I wondered whether his argument was that AI simply cannot replace professionals, but on closer reading, it’s clear that he’s open to the possibility that AIs might eventually be entirely capable of doing some of these jobs on their own. His position, then, is that they should not, not that they cannot. And yet, later in the interview, he seems to be in favor of AI/automation taking over blue-collar jobs such as supermarket cashier “unless people who are in those positions can say, ‘Hey, there’s a reason why you need human judgment and humans in control of this process. And if you take us out of the loop, there’s going to be a big problem.'”

I do not understand (at least without reading the book) why he wants to automate away blue-collar jobs by default while remaining adamant that digital technologies should only supplement, never replace, white-collar professions. If his position were simply ‘preserve human jobs because our political-economy is structured so that people are forced to sell labor in order to meet their material needs,’ would that not apply to all jobs? Does the cashier not need to pay their bills just as badly as the doctor? And if he’s okay with ‘automating away’ cashier jobs, why not the same for doctors, teachers, etc.?

Take, for example, accounting. Back in the 1980s, there was a panic that computers were going to automate away all accounting jobs. That did not happen because, so far, digital technology has supplemented, not replaced, human accountants. Digital technology does almost all of the recordkeeping and arithmetic these days, but human accountants have to organize and interpret all of that financial information so that it is actually useful.

As a profession, accounting has always been dependent on technological supplementation. Accounting does not exist at all without the technology to externally record data, such as writing or quipu.

Let’s say that computer software really had replaced human accountants in the 1980s. That would have been a problem for people who were already invested in an accounting career, as well as accounting firms, but that problem would have been temporary. If there were no jobs, people would stop training as accountants and do something else.

If the quantity and/or quality of human jobs goes down, the political-economy can be restructured to ensure that humans can still meet their economic needs.

The only coherent argument I can see when I re-read the article is that, perhaps, Pasquale wants to keep power in human hands, even if human doctors make more errors, and thus cause more human death and suffering, than a future competent autonomous AI-doctor robot would. I’m not sure how I feel about this argument. It’s hard for me to say “we’re going to let more people suffer and die from medical errors so that human doctors stay in control,” but I can also recognize that giving autonomous AI-doctor robots too much power presents difficult-to-predict risks.

The obvious nightmare is “what if the AIs decide to get rid of humanity?” I don’t know how likely that is, but even if it is a highly unlikely scenario, it needs to be taken seriously. There is also the scenario in which the AIs don’t try to get rid of humanity, but their costs exceed their benefits, and humans are too dependent on them to live without them and must bear those costs. It could turn into a predicament like the one we have with fossil fuels/petrochemicals. Fossil fuels/petrochemicals are doing a lot more harm (greenhouse gases, ocean acidification, plastic pollution, etc.) than good right now, but we are so dependent on them that we cannot cut them out. We will cut them out eventually, because they are nonrenewable resources and anything that is unsustainable will not be sustained, but the fallout will be painful. Perhaps we want to nip that scenario in the bud with AI technology so that we never face the dilemma presented in “The Gods Have Not Died in Vain” by Ken Liu, where a group of drivers and workers protest the loss of their jobs and are met with counter-protestors who want robots to deliver essential supplies to Boston:

“If everything is handed over to Centillion’s robots, wouldn’t another god—I mean a rogue AI—put us at even more risk?”

“We have grown to the point where we must depend on machines to survive,” said Mom. “The world has become too fragile for us to count on people, and so our only choice is to make it even more fragile.”

Another example of the power argument is the thesis that coal promotes democracy and oil undermines it: coal mining is labor-intensive, so unhappy workers can strike to increase their wages or gain voting rights, whereas oil drilling requires much less labor and thus offers the working class less political leverage. Even if this thesis is not strictly true (no, I haven’t read the book), it explains how structuring the political-economy to depend on human labor gives ordinary people more economic and political power. And it’s clear from the interview that Pasquale is at least as afraid of power being concentrated in the hands of a few large corporations and government leaders as he is of the robot-AIs themselves.

The power argument is a good argument for having a minimal number of human accountants around to keep the automated accountants honest. You always want to be able to diagnose automated accountants’ failures and, if necessary, shut them down and replace them. Heck, you need human accountants to keep the other human accountants honest. One reason why so many accounting jobs exist is that some accountants (called auditors) are hired to monitor other accountants. An extreme example of this is forensic accounting.

The power argument also applies to cashiers at supermarkets. Cashiers oversee financial transactions and the movement of inventory, and they also have relationships with customers that affect the business (and the community at large). For example, they check the ages of people who buy tobacco and alcohol (right now, it is not legal to sell tobacco/alcohol via self-checkout machines in California). I’m sure that, if there isn’t already an AI that can check whether someone can legally buy tobacco/alcohol, such an AI will exist in the future, and it might have a lower error rate than a human cashier. But maybe giving an automated AI-robot that power is not a good idea, especially if it verifies a customer’s age by accessing their private records from a centralized database. I recognize that a robot-AI supermarket cashier is less dangerous than, say, a robot-AI that controls nuclear warheads, but I’m not sure it would be less dangerous than a robot-AI doctor, since the robot-AI supermarket cashier would have significant power over the distribution of food.

The more I think about it, the more convincing I find the power argument.
