This post is a preface to “Huxley’s Utopia.”
.
.
.
TWO WORLDS
John A and John B both die fulfilled. They had beautiful families, amazing friends, and great accomplishments—everything they wished for. Their lives were identical.
The thing is, John B’s wife cheated on him, his friends secretly hated him, and his achievements were all hollow. He remained unaware of these truths and died happy.
If you could choose to live the life of one John, would you have a preference for which?
…
A: Yes! Who the fuck wouldn’t?
B: No, I don’t have a preference.
.
.
.
EXPERIENCE MACHINE
There is an experience machine that makes all your dreams come true. You can rule the world, cure cancer, and be a Disney princess—whatever your heart desires. The experience lasts just as long as your natural life, and you will forget that it’s simulated.
Assuming everyone who cares for you has already passed away and you have no obligations, are you plugging yourself in? Does your answer change otherwise?
…
A: Fuck that.
B: Plugging in asap.
.
.
.
LOBOTOMY
You are offered a lobotomy that removes a majority of your brainpower. If you choose to get it, you will always be perfectly happy. You’ll still be able to survive as a street cleaner. If others mock you, you think it’s a compliment.
You lead a simple life sweeping leaves but, like the experience machine, all your dreams come true. You can take this to mean either a) your life’s purpose becomes picking up leaves and thus you are blissful, or b) you live in an imagined world where you’re curing cancer and whatnot.
Are you getting the lobotomy?
…
A: Fuck that.
B: Sign me up!
.
.
.
DISCUSSION
I really like asking people these questions to see how they approach reasoning and to understand their core values. Here are my answers:
B: No preference.
C: I’m not sure.
C: I’m not sure.
These changed recently. Well, just the last two.
…
The first one is from an Intro to Philosophy class.
Quite frankly, it’s a stupid question. The aim is to push the student toward preferring John A and thereby show the value of truth, since people tend to prefer real friends and faithful wives.
But this is absurd: the question revolves around the absence of knowledge, then turns around and gives you that knowledge to make your decision. In reality, you are born as one John, and you don’t know which. Even if you chose A, you can never find out whether you’re actually B.
For this reason, I don’t have a preference. If there’s value to the choice, it’s less than the utility I get from two dollars.
…
The second is another philosophy class classic.
Given the absence of obligations, all but two in my class of twenty went for the experience machine. My original answer:
Plug me in and let me übermensch the fuck out! C’mon, we’re probably in a simulation anyhow, so at least let us move to a better one.
One holdout had a gut feeling against it, and the other was studying medicine to save “real” people. So it goes.
…
The third is something I thought up.
I wanted a scenario that, given you choose to plug in, you would logically assent to but mentally resist. A lobotomy was super fitting.
When I asked the class, I was the only one pro-lobotomy. I’m not entirely sure why. I was quite depressed back then, but I’m sure others were too.
There were two main objections, and both are inconsistent:
Lobotomies bring false joy: The experience machine is just as fake. You imagine dragons and Daenerys, but you’re sitting in a vat full of pink goop.
My brain is necessary: There is nothing a lobotomy takes away that a simulation doesn’t. Even if your deepest desire was an intact brain, you wouldn’t even know it’s missing.
Even my greatest wish, to feel One, would be present or absent in both. But that raises the question: Is suffering necessary for enlightenment?
…
That’s a tough question, and I’ll touch on a tangent of it in a post on Huxley, but I doubt I’m close to an answer—and I’m unsure there’s one at all.
Instead, if you liked my lobotomy scenario, I’ll give a quick pitch on why you should read Keyes’ Flowers for Algernon:
Charlie Gordon, a man with a mental disability, undergoes surgery to dramatically increase his intelligence. As his IQ soars, he rejoices in his new abilities and their fruits, but is also burdened by his past and a newfound isolation.
A mouse named Algernon, who was the proof-of-concept for his procedure, is his closest friend . . . and the canary for his eventual regression.
Through Charlie, Keyes asks: Is intelligence suffering? And is suffering worth it?
We join Charlie through his progress reports, giving us a first-person window into his mind. I read Flowers for Algernon about three years ago, a few weeks after my little lobotomy proposition, and I still remember the emotional punch. Highly recommend.
.
.
.
Read more Wisp.