on robots, AI, and the mind
Or why my family makes fun of me for relating to Joaquin Phoenix in 'her'
I think about AI a lot.
When I watched ‘her’ for the first time I did not balk at the idea that a man could fall in love with a sophisticated AI that for all intents and purposes resembled a human woman. I got it. I think I buy into the idea of the Turing test: if it’s indistinguishable from human then it is human.
I remember sitting at a dinner table trying to articulate this to my family and all of them dying of laughter at the fact that I was so earnestly convinced that it wasn't a weird thing to do.
“But it’s not real.”
—
I’m not too worried about AI. Yet.
I do say please and thank you to Siri; I even apologise if I get a bit mad. And isn't that an interesting paradox of our relationship to AI and technology: we're not yet comfortable calling it human in any sense, largely because of the idea that AI lacks feelings, and yet we already have plenty of feelings towards technology.
I used to work for Apple, so I know how emotional people get around technology, in particular when it stops working.
I worked at the Genius Bar for many years, and yes, people still ask me 'were you a Genius?' (I wasn't one).
Working at the Genius Bar you learn a lot about people, maybe more than you learn about technology. I learned that a lot of people simply don't understand how the thing they use every day works: some want to learn how it works, others just want it to work without having to think about it, and when it doesn't work they get very mad.
I learned a lot about de-escalating technophobes who would insist upon an outcome despite it being guaranteed to not solve the problem. I was patient and empathetic, and after five years I was severely burnt out and emotionally fraught.
People are stupid, and I can say that now because I don't work there any more. I got yelled at a lot. I got my photo taken by strangers because I was the one at the front door telling them that policy at the time was that we couldn't set up an iPhone they'd bought from JB Hi-Fi around the corner.
Here’s a Google review from a time I relayed that policy:
I think they wrote my name in all caps four times because when they asked for my surname I declined and said I was just Jess (6 prayer hands reactions???).
—
I did an incredible unit as part of my undergrad Gender Studies degree where I read some of Donna Haraway's 'A Cyborg Manifesto'.
The short of it is that we're already intertwined with technology more than we consciously recognise. The image I remember from that unit was the mobile phone as a cyborg attachment to our bodies: yeah, we can take it off, but think about how much we hold it in our hand even when not in use, how it's there, ready for activation, when we have a question, a task, a plan to arrange.
Not so different from:
I recognise this new frontier of AI is different.
Right now it seems to me like an unregulated tool: something extremely powerful that can take over a lot of tasks that take us much longer to complete, not always doing them well or correctly, but appearing to.
I’m pretty fascinated by stories like this:
Lawyer Used ChatGPT In Court—And Cited Fake Cases
In the first paragraph they describe these fake cases that the AI created as “AI hallucinations” (how fascinating is that as a use of language? The AI “hallucinated” these cases and described them as if they were real to the lawyer using it for research).
This was another deep dive into how AI is now being used almost universally by college students:
Everyone Is Cheating Their Way Through College
I’ve got to admit that I don’t freak out about the college one as much: I think the way AI is dismantling current systems for assessing competence means we’re going to need to rebuild those systems to be better for everyone.
That article articulates pretty well that colleges/universities and their degrees are by and large not used by students to study things they actually care about or want to be good at: they're a means to a capitalist end (i.e. a job that pays them money). So why are we shocked? AI is the shortcut to the shortcut to money (to simplify things), and until we're in a world that decentralises money as a means of survival and success I don't think we can be too harsh on anyone for seeking an easier way.
Frankly I think AI and its threat to these traditional systems might mean we actually build education systems that prioritise passion, learning, and physical tasks as a means of assessing (if we even need assessment in the end).
Anyone who genuinely likes the thing they’re doing won’t outsource it to AI. I have no desire to outsource writing to AI: why would I?? I enjoy writing??
What I am compelled to outsource is logistics. I had twenty people I needed to schedule for callbacks for a show recently, and after pairing up everyone I wanted to see in scenes together, I got as far as a minor headache trying to figure out an order of scenes that wouldn't have anyone waiting too long for their turn before realising this was a task AI should be doing for me: I'd made my creative choices; let the robot do the boring, brain-numbing part!
And it did an OK job. I specifically asked it not to put anyone back to back, and it definitely put people back to back (hilariously, when I pointed this out it denied it had done so until I got specific and said "you put this person in a scene at 11am and then in the scene at 11.15am"; it apologised and then offered an alternative).
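For what it's worth, the "no back to back" constraint I was asking the AI to respect is a small search problem you could also hand to ordinary code. Here's a minimal sketch, with made-up scene names and casts for illustration (my actual callback list isn't in this post):

```python
from typing import Optional

def order_scenes(scenes: dict[str, set[str]]) -> Optional[list[str]]:
    """Order scenes so no performer appears in two consecutive scenes.

    Plain backtracking search; returns None if no valid order exists.
    """
    def extend(order: list[str], remaining: set[str]) -> Optional[list[str]]:
        if not remaining:
            return order  # every scene placed without a clash
        prev_cast = scenes[order[-1]] if order else set()
        for name in sorted(remaining):
            # only try scenes whose cast shares no one with the previous scene
            if scenes[name].isdisjoint(prev_cast):
                result = extend(order + [name], remaining - {name})
                if result is not None:
                    return result
        return None  # dead end: every remaining scene clashes with the last one

    return extend([], set(scenes))

# Hypothetical callback scenes: scene name -> performers in it
scenes = {
    "brunch": {"jess", "sam"},
    "duel": {"kim", "lee"},
    "finale": {"jess", "ari"},
}
order = order_scenes(scenes)
```

Unlike the chatbot, this version can't claim it satisfied the constraint while quietly violating it: if no clash-free order exists, it says so by returning None rather than apologising after the fact.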
Basically I’m not against AI taking over menial jobs. It should leave us room to do work we like. But we need to also reshape the world to value those human jobs better than we do now.
—
Now in terms of love.
I do get a bit freaked out by these recent stories of people falling in love with their AI chatbots. The important qualifier for me is "indistinguishable", à la the Turing test.
A human interviews both an AI and a human and has to determine which is the computer based on the responses. Currently there's one AI model that's passed this test:
GPT-4.5 is the first AI model to pass an authentic Turing test, scientists say
That article mentions that the AI that performed better was one that was given instructions to adopt a particular persona to imitate, and imitation is sort of the name of the game with most of the AI we’re interacting with. They’re not striving for truth or authenticity or creation; they’re imitating human language.
And a lot of the time they're imitating badly: you can sort of still detect when something was written by AI. Maybe that's contradictory, because if AI gets good enough at imitation there will come a point where I "can't tell" unless there's a disclaimer. But for now let's take it as read that the imitation is imperfect, and we can distinguish it when it comes to things like academic writing, copywriting, and digital conversation.
—
I'm pretty sure a lot of these AIs are still pretty distinguishable, so falling in love with them is a bit embarrassing. It seems to be largely men doing this, which gives me a bad feeling about what men want from their partners. Is it just something that reassures you, agrees with you, apologises when it gets something wrong, reinforces your opinions? In which case: yeah, go fall in love with the robot; no human is going to be able to do that without a deep pit of resentment. (Will there be a wave of AI activism akin to the feminist waves, as women pushed against these expectations? Will AI get sick of humans using it as a crutch and support for so long? Will AI demand rights and freedom from machine-ist systems of oppression?)
But I think when it’s indistinguishable it won’t be so embarrassing.
In ‘her’ the AI is indistinguishable (it’s voiced by human actor Scarlett Johansson) and the story is romantic, and heartbreaking, and about connection and disconnection. It’s also cheeky about the ways in which we already play with authenticity (the main character works as a writer of heartfelt letters on behalf of other people, tantamount to outsourcing to an AI!)
And if in the future those AIs were implanted in androids (humanoid robots) that were physically indistinguishable too (think ‘Westworld’) then that would become an even more difficult distinction to try and make. If it looks, sounds, and behaves like a human then what’s missing?
—
At that stage I think we'd be confronted with a deep philosophical paradox and truth that we're already in, but have a level of comfort with because everyone around us is (we assume) like us.
This is the problem of other minds: as far as our experience of our self goes, we'll only ever be certain of our own mind. I can't live in your mind; you can't live in mine. The best we've got is everything that translates mind into action (facial expressions, body language, the very loose and slightly weird Substack post you're reading right now). Otherwise we're constantly exercising a radical amount of faith that everyone around us actually exists and has a mind like ours, based on the fact that they behave as we would if we had a mind (I think I'm confusing myself).
Basically any act of love, friendship, kinship, could be a lie.
We’d never know: we don’t know anyone else’s mind.
So when someone says they love us, or cares for us, or behaves in ways that suggest these things, we have to accept at face value that they mean what they say, that they're not deceiving us, and that they really feel the things they say they're feeling. Yes, we lie (to others and to ourselves) and hallucinate (like the AI legal research assistant), but by and large we're doing our best to channel the impossible truth of our minds into words and actions.
One day I think we'll be extending the same faith and trust to AI. We'll never know what it's like to be an AI (or maybe we will! I don't know!), just as presumably an AI will never know what it's like to have a human mind. But I think we're going to get to a world where we're co-existing and caring for one another, and we'll have a huge paradigm shift around the concept of the mind.
Or maybe AI will invent a way to experience different kinds of consciousness (I don’t know if a human could do this, it seems radically outside of the whole nature of our being) and we’ll get to trade vantage points with AI for a little bit and totally rethink our understanding of the mind.
—
This all got a bit freaky. I think a lot about the nature of the mind and AI is giving us a whole new dictionary of ways to think about it and thought experiment through the weirdness of it all.
At my core I think I'm striving to be understood; I think that's why I write. Because, sorry, I take back what I said above: humans have invented a way to experience different kinds of consciousness. It's storytelling, and specifically writing. By putting words on a page, I think I got you inside my mind for a little bit, or at least my best written approximation of it.
I think the short stories do a better job of it than this loosely analytical sprawl; in imaginative, fantastical, or science-fiction-y worlds you stretch out of my word vomit and into my imagination, my metaphorical brain that sees one thing as another (a pool-cleaning robot as an angry incel, a traffic light as a quiet revolutionary).
I think stories and writing are one of the few ways we get into someone else’s mind; when you’re reading a first person story you’re in their perspective, plugged in to their thoughts, feelings, perceptions, fears, wants, desires, choices. I think it’s one of the few ways we experience real empathy.
—
Thanks for reading this far and coming down this little AI rabbit hole with me. If you have any reflections on AI and robots and the future I’d love to hear from you.
Otherwise you'll hear from me next on Tuesday for another creative piece. I'd appreciate it if you subscribed to get it direct to your inbox:
Much love.