
Has Bunny The TikTok Dog Actually Acquired Self-Consciousness?

Bunny the TikTok dog can "talk". But can she actually understand?


Sure, Ludwig Wittgenstein famously wrote that if a lion could speak, we would not be able to understand him. But what would Wittgenstein have to say about Bunny the TikTok dog?

Over the last few months, Bunny, a black-and-white sheepadoodle with a serious expression and a philosopher’s penchant for the big questions, has seemingly been acquiring self-consciousness. Using a wooden board fitted with chunky buttons that play pre-recorded phrases, Bunny has been stringing together crude sentences that imply she has not only a workable understanding of language — she asks her owner for treats, and to use the bathroom — but also a conception of time.

In one particularly viral video, Bunny demonstrates that she understands she went to the park, dropping a use of the past tense that would totally blow apart our understanding of non-human animal consciousness, if we had reason to believe Bunny has any idea what she’s actually talking about.

Which is the big question. Does Bunny actually understand what she’s saying? Could she really be recalling linear narratives about her life? And if the answer is “yes”, can all dogs do the same? Or is Bunny just a particularly impressive specimen; the Plato of the sheepadoodle world?

As it turns out, answering these questions requires not only engaging with what dog brains are like. It requires re-acquainting ourselves with language; even with consciousness itself.

Does Bunny The TikTok Dog Understand What She’s Saying?

Let’s say that you find yourself trapped in a locked room, sitting at a desk with a thick instruction manual in front of you. Suddenly, a grate in the heavy door pops open, and a slip of paper drops through. It’s a string of Chinese characters that you, as someone who does not speak Chinese, have no immediate way of understanding.

But when you flip back through the instruction manual on the desk — which is written in English — you see that the manual contains a very basic guide to “manipulating” Chinese symbols. It doesn’t translate the symbols. It shows you that some correspond to others.

So, because you are a diligent, unquestioning prisoner, you begin this process of manipulation. You look at a Chinese character on the scrap of paper you have been given, look up that character in the manual, see it corresponds to a new Chinese character you also don’t understand, and then write this new symbol on a fresh scrap of paper. You do this for each character on the scrap of paper, until eventually, you have written out a whole string of new characters — though you can’t be sure, it seems like you’ve written a string of sentences. You have no idea what you’ve written. You’ve simply followed instructions.

And so when you are done, and you slide your piece of paper back under the door, you are totally unaware that the guide helped you write out the answers to a set of comprehension questions — questions that were written in Chinese on the first slip of paper that dropped through the locked door.

But if you didn’t understand the questions, and you didn’t understand your answers, can you be said to have understood anything at all?

This is John Searle’s famous Chinese Room thought experiment, designed to argue against the idea that AI programs can have what are known as “intentional states.” Intentional states are mental processes that are about something; that represent other objects and propositions. Say I’m sitting here at home, and I think about a tree. The image I conjure up of a tree is a representation of a real-life oak I once saw. Therefore, I’m having an intentional state; my mental processing is about something.

Searle does not think that computers can have intentional states. Sure, we can make a computer process things. But just because a computer can produce an output doesn’t mean it meaningfully understands what it’s doing — just as the diligent prisoner can “answer” questions without understanding what the hell they’re writing on their scrap of paper. In order to understand something, you must have intentional states. It’s not enough to simply process.
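To see just how little “understanding” this kind of processing requires, here is a toy sketch: a few lines of Python invented purely for illustration (nothing from Searle himself), in which the whole “rulebook” is a lookup table and the program replies without any representation of what the symbols mean.

```python
# A toy "Chinese Room" (illustrative only): the rulebook is just a lookup table.
# Nothing here represents meaning; the program only matches symbols to symbols.

RULEBOOK = {
    "你去过公园吗？": "我去过公园。",  # made-up rule: this string of symbols maps to that one
    "你叫什么名字？": "我没有名字。",  # another arbitrary, hand-written mapping
}

def answer(slip: str) -> str:
    """Follow the manual: look the incoming symbols up and copy out the reply."""
    return RULEBOOK.get(slip, "")  # no matching rule, no reply (and still no understanding)

if __name__ == "__main__":
    # From outside the room, this looks like comprehension.
    # Inside, it is pure symbol manipulation.
    print(answer("你去过公园吗？"))
```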


Clearly, the case of Bunny the TikTok dog is analogous. Bunny might be able to “manipulate” speech for some kind of reward — she has learned that pressing a certain button gets a certain reaction. That’s communication, on at least some definitions of the word. But it’s not clear that Bunny understands what she’s saying. She’s just sitting in a locked room, using an instruction guide to help her turn one kind of language she doesn’t understand into more of that same language she doesn’t understand. In order for her to properly “understand” in the way that we use that word, we want her to be doing something more — we want her thoughts to be about something. When she says, “Bunny walk”, we want that to be an expression of an intentional state.

Meaning Is Complicated, Whether You’re A Dog Or Not

Which is the thing: some philosophers have argued that it is not possible for non-human animals to have intentional states at all. Donald Davidson, in his famous paper ‘Thought and Talk’, notes how difficult it can be to attribute any meaningful, intentional beliefs to a dog. We can say that Bunny has some thoughts, certainly. But that’s very different from being able to assert what Bunny’s thoughts mean, or that they stand in for things in the world.

Take another example. I’m currently working from home, and my rescue greyhound Ida is sitting on the other side of the room, sleepily staring at me. Because I know that Ida fusses and complains when I am not around, and I can see she’s currently not fussing, I can broadly attribute to her the understanding that I am home. I can therefore say that Ida has some kind of thought in her head, something along the lines of, “Joseph is home.”

This is the example that Davidson uses in ‘Thought and Talk’. But as Davidson points out, it is very hard for me to get more specific about the thoughts of non-human animals. Who is “Joseph” to Ida? Certainly, she does not know that the “Joseph” sitting at the desk across from her ignoring her need for treats and pats is the same Joseph who is a 29-year-old writer for the youth media website Junkee. Indeed, most of the things that I would describe as making me who I am are unknown to Ida. So, is Ida’s thought about me? It seems like it would have to be a very specific thought for that to be true. And Davidson shows us that we can’t really make specific claims about the thoughts of non-human animals at all.

So even if we want to say that a dog like Bunny, or a dog like Ida, has mental pictures, what are they about; what is their intentional content?

The answer is not clear. Bunny can press the button that says, “went”. But we have no proof that pressing that button corresponds with a mental state that Bunny has which is meaningfully about the time she went to the park. And we need those intentional states to argue that she understands what she’s saying.

What’s It Like To Be A Dog?

None of this is to say that dogs aren’t, or can’t be, conscious in some sense. The question is what kind of consciousness they possess.

For his part, the philosopher Thomas Nagel explores the idea that non-human animals like dogs have what’s called “phenomenal consciousness.” That’s a type of consciousness related, broadly, to experiencing things. If a creature has “phenomenal consciousness”, then I can meaningfully ask the question, “what’s it like” to be that creature.

For instance, I can look at Bunny and ask the question, “What’s it like to be a sheepadoodle on TikTok?” I can only ask this question because Bunny has some kind of subjective experience. There’s something about what it’s like to be Bunny — a way that stuff in the world is processed by her.

I can’t ask, “what’s it like to be a rock?” There is no imagining to be done in answering that second question — the rock doesn’t subjectively experience any phenomena, so I can’t ask what it’s like to experience phenomena as a rock does. On this basis, we can say that Bunny is conscious, and the rock isn’t.

But Nagel’s entire argument makes it less likely that Bunny understands what she’s saying when she’s hitting buttons, not more. As Nagel points out, the subjective experience of animals is exactly that: subjective. It’s not obvious to us what it’s like to be Bunny. Nagel believes that I can’t just place myself meaningfully into the perspective of Bunny and therefore know precisely what it’s like to be her, no matter how hard I try.

Sure, I can imagine myself being close to the ground, as Bunny is, while a human sticks a camera in my face and I run around pressing buttons on the board. But that’s not the same as experiencing things in the way that Bunny does. I’m just experiencing things as a human does, but from Bunny’s point of view.

For Nagel, humans can only understand consciousness from the deeply subjective experience of being a human. That means Bunny’s understanding of subjective matters like the passage of time is possibly totally different to my own; I will never meaningfully understand what it is like for her.

That means even if Bunny knows that the “went” button corresponds to the idea of going to the park that she has in her head — which is one helluva “if” — then there’s nothing that should make me believe that Bunny’s subjective experience of going to the park is anything like my own. Bunny and I can’t share a language, because we don’t share the same experience of the world.

Hence the Wittgenstein quote that opened this article. According to Wittgenstein, how we speak is connected to how we live our lives. And who the fuck knows how an animal like Bunny lives her life; her subjective experience will be totally, mind-meltingly different from our own.

I, Bunny

In a recent video, Bunny hit the buttons “WHO?” and “THIS”, wandered over to a mirror, and stared in what TikTok assumed to be existential horror at her own reflection. It had finally happened, the memes said: Bunny had acquired self-consciousness.

Only, as everything in this article has probably already suggested to you, she almost certainly hadn’t. For Bunny to understand a question like “who this?”, she needs an accompanying intentional state, and for that intentional state to be about self-consciousness, she needs an image of herself. In short, Bunny needs to understand who Bunny is, and that seems like a tall goddamn order.

Maybe that’s a sad revelation. We like to think that animals are meaningfully like us, particularly those that we have chosen to live our lives alongside. Imagining that Bunny is haunted by the same questions of purpose and direction that the rest of us are is kinda sweet, in its own horrifying way. After all, it would be refreshing to learn that being conscious is necessarily kinda shitty, whether you’re a sheepadoodle or a dude in his late-’20s whose idea of a good time is reading a bunch of philosophy in order to write an article about a TikTok dog.

Moreover, if we take Nagel seriously, then it seems like we’re very alone. Consciousness is subjective enough that the mind of a dog is foreign to us in the most essential sense of that word, and in a way that we can’t even begin to comprehend — a string of characters written in a language that we do not speak.

But why does this necessarily have to be a bad thing? It seems exhausting to imagine that everything is comprehensible; that we can have all the answers.

On the question of what it is like to be a dog, we’ll never be able to generate answers; Bunny won’t be able to tell us, and we won’t be able to guess. And yet I reckon this is precisely why we love dogs — because they are both so close to, and so far from, us. They want our company, and they live in our houses, and they pee on our mailboxes. But their brains are a necessarily mysterious pocket of the universe; an entire alien landscape, packed into a compact, loveable ball of fluff.


Overthinking It is a regular column about philosophy and pop culture, created by Junkee and The Ethics Centre.


Joseph Earp is a staff writer at Junkee. He tweets @JosephOEarp.