In the same interview I commented on in last week’s post, Frank Pasquale claims that 1) post-humanists and trans-humanists are pushing for AIs/robots which simulate humans and 2) this is anti-humanist. I quote:
How, specifically, are these positions anti-humanist?
In part, an essential element of being human is accepting and understanding our limitations. Our frailties. And that effort to transcend it and say, “Well, here’s an immortal entity; let’s treat it as being above and beyond the human,” is problematic. It involves rejecting the fact that we are mortal. That we feel pain. We have a limited amount of things that we can spend our attention on.
I do not understand how trying to deal with our limitations is a rejection of their existence. As far as I can tell, this argument could also be used against writing; our human minds have limited memories, so we should embrace that and stop writing down things that we might otherwise forget. By writing things down, we are rejecting our forgetfulness, which is an essential human quality.
Pasquale frames the post-humanist position as:
So, a post-human subscriber would say, “Well, we’ve had a good run on Earth, but you know, ultimately our brains are too slow. Robots have faster processors. They’re going to understand more and do more in the world. Let them.”
If Pasquale’s description of post-humanist views is correct, then I consider that an acceptance of our mortality and limitations, not a rejection.
In Olaf Stapledon’s novel Last and First Men, human species keep going extinct, yet each manages to leave behind just enough descendants to found a new human species. Some of these later species are intentionally bred/engineered to survive in new conditions, because there is not enough time to rely on natural selection alone and they want to prevent premature deaths. Spoiler: in the end, the last human species goes extinct without a successor, because life has become impossible in the solar system and they lack the technology to escape. Their last hope is to leave an imprint on non-biological successors, but they have no idea whether anything will come of the ‘seed’ of their humanity which they send out into the galaxy.
I don’t believe that Olaf Stapledon’s exact vision will come to pass (heck, many of his predictions of what would happen in the 20th century were wrong), but I think his core extrapolation is correct: we will go extinct just as other human species before us have, and any successor species we leave behind will eventually go extinct too.
To me, part of acknowledging human mortality is acknowledging that our own species is mortal. Homo sapiens will go extinct. The only questions are when we will go extinct, and whether we will leave behind successors, just as we are the successors of the other species in the Homo genus (which, I point out, have already gone extinct).
I am open to arguments both for and against creating post-human successors. But I don’t think putting effort towards establishing successors is ‘anti-humanist’ if the goal is to extend our legacy further into the future.
Pasquale goes on to say:
Hey, when I die, there’ll be a billion books I haven’t read. And I don’t think someone might say, “Well, imagine if you could, you’d be so much better — if you were a robot that could process 1 million books in a short timespan.” My short response is I don’t think it would be in any way similar to what happens when I, as a person, read a book, or when any human does. There is a unique way we engage with things, and that’s what makes us singular.
I don’t know whether a robot that processes a million books in a short time might engage with them in a way similar to a human reading a book, and I claim that Pasquale doesn’t know either. No, I don’t think any AI in existence now has a remotely similar experience, but that could change.
And how precious is our unique way of engaging with things? A social history of, for example, reading habits demonstrates that the way we engage with things has changed a lot over decades, centuries, and millennia. I don’t know about Asian, African, or Meso-American civilizations, but at least in Europe, it was rare for anyone to read silently until mere centuries ago. Widespread reading and writing in solitude is a relatively new thing in European societies. Is our unique way of engaging with books more precious than our predecessors’ unique way of engagement? And might not an AI/robot’s way of engagement be unique and, yes, precious?
Accepting that robots/AIs might have their own unique and precious way of processing books is not anti-humanist, just as accepting that ancient peoples engaged with books differently than we do today is neither anti-ancient-people nor anti-contemporary-people.
I don’t want to destroy our current ways willy-nilly. But rigidly holding onto the status quo just because it’s the status quo seems both unhelpful and futile. If our ancestors had always rigidly clung to the old ways of doing things, there would be no writing, and neither this blog nor Pasquale’s book would exist.
In my experience, anti-transhumanism arguments fall into three rough categories: “technology is bad”, “technology is developing too fast without sufficient precautions”, and “transhumanists are not the sort of people you want deciding the future”. It seems like this guy has put together a book about the middle one and just landed in the other two through lazy thinking.
Though maybe he does actually think that, say, pacemakers should be banned for enshrining machines as immortal and blurring the lines of human mortality. The contradiction of considering human doctors essential because people want human interaction, while ignoring the same argument for people working at check-outs, is bewildering.
I agree with the ‘technology is developing too fast without sufficient precautions’ argument, but as a reason to slow down and put in sufficient precautions, not to stop developing technology altogether. Yeah, I see your point about how Pasquale might have fallen into those other arguments by lazy thinking.
Like you, I don’t understand why he draws that distinction between doctors and cashiers, as I discussed in my previous post.
From the very little that I’ve read by this guy, it looks like his argument can be summed up not as anything so high-minded as sable’s three categories, but as something more transparently self-interested:
AI is worrisome if it threatens MY livelihood and privileges, and those of people like me. It wasn’t a big deal when automation replaced the little people. I order from Amazon just fine, and enjoy all sorts of computerized services. But my skills, my job… They’re special!
To go back to what you wrote about in your last post:
> I think the big dream of a lot of folks in A.I. is presumably just
> letting it take on the job of a doctor, nurse, journalist, teacher, and so on.
The big dream is the same old technological dream: can we use technology to make things cheaper, faster, more powerful, or more efficient?
> And my idea is that’s really not the goal we should be going for, right?
That “right?” is pretty funny. I translate it as: “we don’t want to lose our jobs, because you and me are special, right?”
That explanation did occur to me, but I figured that, even if it’s true, it’s better not to say it out loud. If someone who thinks like Pasquale reads this blog, telling them ‘you are a selfish classist jerk’ is unlikely to change their minds.
Not that I think I can persuade such people to stop worrying about losing their own livelihoods (nor would I want to; I think they are right to be worried), but perhaps they can learn to empathize with the ‘little people’ and care about their livelihoods too.
Interesting! You are kind and smart to engage in argument in that way.