This week in epistemology and ontology, human and machine
A few pieces of robot- and AI-related discourse that caught my attention recently.
Over the weekend, I read this piece from James Somers at the New Yorker, which makes a compelling argument that the neural networks underpinning generative AI actually do represent a form of thinking.
Today's leading A.I. models are trained on a large portion of the internet, using a technique called next-token prediction. A model learns by making guesses about what it will read next, then comparing those guesses to whatever actually appears. Wrong guesses inspire changes in the connection strength between the neurons; this is gradient descent. Eventually, the model becomes so good at predicting text that it appears to know things and make sense. So that is something to think about. A group of people sought the secret of how the brain works. As their model grew toward a brain-like size, it started doing things that were thought to require brain-like intelligence. Is it possible that they found what they were looking for?

The Case That A.I. Is Thinking
ChatGPT does not have an inner life. Yet it seems to know what it’s talking about.
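The loop that excerpt describes, guess the next token, compare the guess against what actually appears, nudge the connection strengths, can be sketched in miniature. This is only a toy illustration on an invented two-character corpus, with a bigram weight table standing in for a neural network; it is nothing like the scale of a real model, but the training signal is the same:

```python
import math

# Toy next-token prediction (illustrative only): a bigram model learns to
# predict the next character of a tiny made-up corpus by gradient descent
# on cross-entropy loss. Corpus, learning rate, and step count are invented.

corpus = "abababababababab"
vocab = sorted(set(corpus))
ix = {c: i for i, c in enumerate(vocab)}
V = len(vocab)

# One logit per (previous token, next token) pair, initialized to zero.
W = [[0.0] * V for _ in range(V)]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

lr = 0.5
pairs = list(zip(corpus, corpus[1:]))
for step in range(200):
    grads = [[0.0] * V for _ in range(V)]
    for prev, nxt in pairs:
        p = softmax(W[ix[prev]])
        # A wrong guess (low probability on the actual next token)
        # produces a large gradient; a right guess produces almost none.
        for j in range(V):
            grads[ix[prev]][j] += p[j] - (1.0 if j == ix[nxt] else 0.0)
    # Gradient descent: nudge each weight against its gradient.
    for i in range(V):
        for j in range(V):
            W[i][j] -= lr * grads[i][j] / len(pairs)

# After training, the model puts nearly all its probability on the
# correct next token: p("b" | "a") is close to 1.0.
print(softmax(W[ix["a"]])[ix["b"]])
```

The "knowledge" the trained table holds is just weights that make good guesses; the article's argument is that, scaled up by many orders of magnitude, that same process starts to look like thinking.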
I've also received a review copy of Michael Pollan's upcoming book, A World Appears, which has to do with consciousness—a slippery-enough concept even before you ask the "is a machine thinking?" question. I've flipped open to a few random pages, and Pollan's doing his thing with another big concept. I can't wait to read it.

A World Appears - Michael Pollan
A Journey Into Consciousness. From the #1 New York Times bestselling author of How to Change Your Mind, a panoptic exploration of consciousness—what it is, who has it, and why—and a meditation on the essence of our humanity. When it comes to the phenomenon that is consciousness, there is one point on which scientists, philosophers, and artists all […]
And I came across this set from comedian Josh Johnson, who is frequently insightful, and here he's talking about how the way we relate to our digital assistants is, in his observation, changing the ways we relate to each other.
From the transcript, lightly edited:
4:40 Cuz we also not polite. We've been taught to not be polite to the robot. That's been baked in now. I don't even know if you could be polite anymore. It would feel weird for you to be polite.
You know, millions and millions of people every day say, "Hey, Alexa." And you know what they don't say? "Please."
You want to know how I know that? First Alexa experience I had, I didn't like it... we just inviting the ghost in then. You way too comfortable with ghosts, cuz you don't know if your house haunted. So now you over here talking to Alexa all the time. Then one day Alexa voice a little raspy. You don't even clock it. Now you in a whole murder house with a bad history and you just, oh, it's malfunctioning. You got a whole poltergeist going on and you over here calling Amazon.
And so then, you know, I go over to my friend's house and my friend's like, "Oh, hey, we got this new thing. If you want to play some music, just say, 'Hey, Alexa, play whatever.'"
And I was like, "Hey, Alexa." I just didn't want to do it. And he was like, "No, go ahead. Give it a try." I was like, "All right. Hey, Alexa, play Kendrick, please." And he was like, "Why'd you say please like that?" I was like, "I don't know. It's weird. Like, this is weird. This doesn't feel weird to you?" He's like, "No, it's not weird. It's the future." I was like, "All right, the future's uncomfortable." You know what I mean?
I just don't think that's good practice for our interactions person to person, that every day you make demands of something without any courtesy, without any politeness. And then we wonder why people are rude. You know what I mean? It's like, you're dealing with so much AI, you're dealing with so many robots, you start treating people like AI, like robots, like people who just sort of serve a function.
