
What is it like to be a bee?

Do bees have feelings? What would that mean? And if they do have feelings, how should we treat them? Do we have a moral obligation toward insects?

Honeybees “exhibit pessimism” according to a recent study published in Current Biology, and summarized in this Wired Science article. Pay attention to the Wired headline – “Honeybees might have emotions” – and to these choice clippings as well: “You can’t be pessimistic if you don’t have an inner life.” And, “invertebrates like bees aren’t typically thought of as having human-like emotions.” The implication, of course, is that these invertebrates have been shown to have them.

Inner life? Human-like emotions? Is there “something it is like,” then, to be a bee?

From an ethics standpoint, questions like these make a big difference. As Sam Harris has recently argued, morality is all about the well-being of conscious creatures—that is, creatures with inner life, felt emotions, or “qualia” to use the philosophers’ term. Humans are a paradigm example of qualia-possessing beings, and most of us would agree that there are certain ways we should (and shouldn’t) treat each other, based primarily on the principle that it’s bad to cause unnecessary suffering. Why is it bad? Because suffering hurts—it feels bad, subjectively—and it would be supremely selfish for any of us to avoid suffering only for ourselves.

Why shouldn’t we be selfish, you ask? Good question, but not right now, Johnny.

Ethicists like Peter Singer have done a lot of work to get us thinking about the suffering of non-human animals, and have urged that we have a moral responsibility not to harm them. That is, we have a responsibility to extend the “do no harm” principle beyond the realm of Homo sapiens. This feels intuitively right when it comes to the family dog or cat; and it’s certainly no surprise that many vegetarians come from the ranks of former meat-eaters who read Upton Sinclair’s The Jungle. Clearly other animals feel pain, and we shouldn’t inflict it on them willy-nilly. Maybe we shouldn’t inflict it at all.

But bees? Those stinging little buggers from the garden? Who cares?

Well, let’s not raise the morality alarm just yet. First we should take a more detailed look at the bee experiment, conducted by Melissa Bateson and Jeri Wright of Newcastle University, to see what it actually involved, and what it can reasonably be taken to show.

Here’s what they did. The researchers trained a handful of worker bees—strapped into tiny bee-harnesses, by the way—to associate a certain distinctive odor (call it odor A) with a reward, namely a lick of sugar. In addition, they trained those same bees to associate a different odor (call it odor B) with punishment: a lick of quinine, which tastes bitter and unpleasant. Spray the odor, give the sugar or quinine, rinse and repeat. It’s “Pavlov’s Dog” for bees. The actual behavior they looked at—to measure the “association”—was the extension or retraction of mouthparts. Pushing mouthparts outward showed the bee was reaching for an anticipated reward; pulling mouthparts inward meant it was avoiding anticipated punishment.

After this training session, the researchers took half of the bees and shook them for 60 seconds (leaving the other half alone) and then exposed both groups to novel odors intermediate between odor A and odor B.

Shaking is stressful for bees, as it can signal an attack by a predator.

They found that the all-shook-up bees were more likely to associate the in-between odors with punishment compared to reward. That is, they were more likely to retract their mouthparts when faced with the ambiguous smells than they were to extend them.

This pattern of behavior can fairly be called a bias, and the agitated bees exhibited it to a statistically significant degree when compared with their undisturbed counterparts.
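To make the comparison concrete, here is a toy sketch of the kind of behavioral difference the researchers measured. The group sizes and probabilities below are invented purely for illustration; this is not the study’s data or analysis.

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

def simulate_group(n_bees, p_extend):
    """Count how many bees extend their mouthparts toward an
    ambiguous odor, given a per-bee probability of extending."""
    return sum(random.random() < p_extend for _ in range(n_bees))

# Invented probabilities, chosen only to illustrate the reported pattern:
# unshaken bees extend toward ambiguous odors more often than shaken bees.
unshaken = simulate_group(100, p_extend=0.6)
shaken = simulate_group(100, p_extend=0.3)

print(f"Unshaken bees extending: {unshaken}/100")
print(f"Shaken bees extending:   {shaken}/100")
# The "pessimistic bias" is just this gap in extension rates.
```

The statistical question in the study is simply whether a gap like this one is too large to be chance variation between the two groups.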

That’s a pretty interesting finding, and it tells us something about how bees respond to ambiguous stimuli after they’ve been rattled around a bit. Maybe it’s an evolved survival strategy with a logic something like this: When you’re in a dangerous or stressful situation, it’s best to play it safe when it comes to (possible) poison. OK—so far so good.

But what is all this talk about human-like emotions and inner life? Are we supposed to bee-lieve (sorry) that the jangled-up insects subjectively felt pessimistic—or maybe even depressed? Are bees “conscious” in the way that humans are?

Not necessarily. I think there’s some confusion going on about the word “emotion” – and I’ll explain what this confusion is in just a moment. First, though, let’s take a closer look at the scientists’ argument, in particular their reasons for suggesting that bees may have emotions.

Step one: Human beings sometimes show “pessimistic” cognitive biases, as when a depressed person sees a frown in a neutral expression.

Step two: We know that these cognitive biases correlate with certain felt emotions in humans—like the sad feeling that comes with depression—as well as with certain chemical and physiological signals that can be measured objectively.

Step three: Human beings have a really handy self-report tool—language—which they can use to tell other human beings about their internal states. In addition, each of us knows, from our own experience, what it feels like to be in a state like sadness, and we assume that others feel that way when they tell us, “I’m feeling blue.” Other animals, and insects like bees, don’t have this nice language tool, so we’re stuck using the “objective” measures only when trying to decide what’s going on inside their heads.

Step four: Other animals, and now insects like bees, have been shown to exhibit the following things: (1) pessimistic cognitive biases (as shown through their behavior), and (2) some of the chemical and physiological signals that correlate with felt, subjective emotions (like sadness) in humans. (I didn’t tell you about this part, but the researchers took a separate group of bees, shook them up, and extracted chemical samples to prove the point.)

Step five: Given that the bees show the very same type of behavior (as well as the same chemical markers) that humans show when they experience certain emotions, shouldn’t we suppose that bees experience those emotions, too?

What do you think?

I’m not totally convinced. Here’s where I’ll tease out the confusion about the word “emotion” because it will help me explain why not. “Emotion” can refer to any number of things, but there are at least a couple of major senses of the term as it applies to human beings. On the one hand, “emotion” can refer to certain brain processes and physiological states of arousal that are triggered by stimuli and which guide behavior—a sort of “brain-level” or unconscious sense of emotion, and the sort we can measure “objectively” in ourselves and other animals. On the other hand, it can refer to that first-personal, private, subjective, self-reportable feeling people have when their brains and bodies are going through those processes and states.

It should be pretty easy to believe that bees have emotions of the first kind. But to call those emotions “human-like” assumes that the first sense always goes together with the second sense, as it seems to do in humans. But why should we think it does?

To be fair to the scientists, they were careful to address this point in the original Current Biology article:

“Although our results do not allow us to make any claims about the presence of negative subjective feelings in honeybees, they call into question how we identify emotions in any nonhuman animal.”

So what does all of this mean for morality? In the case of humans, we think it’s wrong to cause needless pain, in large part because we know, from our own, first-person experience, what it’s like to feel pain. And we sense that there is something unfair about wishing that felt experience on someone else—specifically someone else capable of subjectively having those very same sort of feelings. It’s not that we want to avoid triggering certain brain states in our fellow humans; we want to avoid triggering the way those brain states feel to them.

To extend this reasoning to bees, then, we’ll have to make up our minds about the relationship between objective “brain states” and subjective, felt experience in the case of other animals and insects. I haven’t made up my mind yet—at least when it comes to bees. Have you?


13 Comments on this post

  1. Given the radically different structure of the nervous system of bees, especially how it is heavily segmented, it seems unlikely that even if they have subjective feelings they would map neatly onto our feelings. Being a centralized and largely non-segmented creature, I cannot have one feeling in my arms and a different one in my legs, yet bees could perhaps have that. But it is easier to anthropomorphize than to imagine anything like that – hence "angry", "busy" or emotional bees.

    Incidentally, Melissa Bateson, the primary author of this study, is also the author of several of the studies showing that the presence of eyes makes people behave better, which I refer to in my post about panopticons. It is a small (academic) world.

    1. Thank you for your thoughtful comment, Anders. One question for you: if we were to agree that there is something it is like to be a bee, would it make a difference, morally, whether "what it is like" for the bee maps on to "what it is like" for humans? What kind or degree of mapping-on would matter morally?

      Is it simply enough that there is something-it-is-like at all?

      When we think about the subjective experience of non-human animals in general, how much, and in what way, do their qualia need to be human-like for us to care, or to have a moral obligation toward them? As Thomas Nagel famously pointed out in his 1974 paper — "What is it like to be a bat?" — it may be actually impossible for us to imagine the inner life of a creature with a central nervous and sense-processing system very different from our own.

      But what if we focus just on pain? Pain seems to be at the fulcrum of moral — and especially utilitarian — arguments about inter-subjective treatment and harm. Probably pain feels subjectively unpleasant in a dog or a cat in much the same way it does in a human. Maybe the dog won’t reflect on that pain using thought-language later in the day, but in the moment, it HURTS for the dog. And in this sense, I’d be willing to believe that pain feels subjectively unpleasant in a bee just the same — even if that unpleasantness is segmented to the bee’s leg, or otherwise a clunky translation of the human (or dog, or cat, etc.) experience.

      Pain serves just about the same purpose (from an evolutionary standpoint) in creatures great and small, and it seems implausible that it should FEEL (essentially) very different across the range. Bees do press our intuitions to the limit, though.

      This conversation gets us very quickly into questions about the evolved or functional purpose of qualia, and hence David Chalmers’ "hard problem" of consciousness: why should pain feel, subjectively, like anything at all? Why couldn’t, as Richard Dawkins asked in his book "The Greatest Show on Earth," the brain simply raise a "little red flag" in the dark when we touch a hot stove, creating a memory or information processing bias that steered us away from such contact in the future … without it FEELING any way at all?

      I haven’t seen a good solution to this puzzle in any of the literature. Have you?

      1. "Why couldn’t, as Richard Dawkins asked in his book "The Greatest Show on Earth," the brain simply raise a "little red flag" in the dark when we touch a hot stove, creating a memory or information processing bias that steered us away from such contact in the future … without it FEELING any way at all?"

        In his article "Do animals feel pain?", Peter Harrison claims pain only has a reason to evolve in self-conscious animals that could otherwise choose to overrule any genetic (or psychological, e.g. phobias) biases towards performing a particular behaviour. Thus, if we saw some nice fruit hanging from a tree, but to access it we had to navigate past a bees' nest, any "flag" that was raised could easily be overridden by the desire for food (of course you could also just ask "why do we need to FEEL hungry?"), even if the consequences involve being stung all over.

        But knowing that we *might* feel pain provides a strong incentive to consider our actions *before* we do them, rather than relying on experience to teach us every time. We might come across some fruit which required navigating past some unknown insects. If we were relying on an information processing bias against *all* unknown insects, then we would not even consider attempting to get the fruit even if these insects were harmless, which would be evolutionarily detrimental.

        1. Thanks for your thoughts, Matt. Harrison's idea reminds me of Supramodular Interaction Theory, proposed by Ezequiel Morsella in his paper "The Function of Phenomenal States," available here: With both theories, though, I don't see why all those considerations, computations, calculations, balancing-of-immediate-versus-longer-term-goals, etc., couldn't happen in the dark. Anticipating "red flag" pain could happen unconsciously (in principle, it seems). A higher-order, but nevertheless unconscious, overruling system could trump a genetic bias given certain considerations (in principle, it seems). Why do ANY of those variables — even the seemingly complicated ones described in Harrison's or Morsella's theories — need to FEEL LIKE something to the organism? I can't seem to sort this doozy out.

          1. Perhaps they don't *need* to feel like anything. Perhaps the same abilities could have evolved without such feelings, but it was a mere accident of evolution that the route taken led to their existence? Perhaps the easiest route to gaining the benefits of greater processing power in the brain was to build upon pre-existing brain structures in a manner that simply happened to cause consciousness?

  2. An intriguing idea, Matt. I think admirers of natural selection are sometimes too quick to assume an adaptive explanation for every organismic phenomenon. Sometimes features are byproducts of adaptive selection — spandrels. What if consciousness (in the sense of qualia-possession) is simply a spandrel? I haven't come across an argument for this recently, but I'm sure it's been made. I'll look into it.

    1. If this is correct, then Peter Harrison is wrong, as it means pain (and other sentience) can evolve in animals even if they're not self-conscious. I mean, I don't think many biologists thought he was right, but it's an argument that's been bugging me for a while.

      1. By self-conscious here do you mean, possessing of qualia? Or do you mean it in the stronger sense that requires things like language, self-reflection, etc.? Just to make sure I understand your point about Harrison. Can you find a link to his article? (Or is it a book?) I'd love to see what he says.

        1. I meant the stronger sense, i.e. self-aware/self-reflective, capable of thinking about one's thoughts. Actually, I've just looked through Harrison's paper and he goes further than this to say that pain can only serve a purpose in rational, decision making agents. (I guess it's possible to be self-aware but only behave instinctively?):

          "Pain is the body's representative in the mind's decision-making process. Without pain, the mind would imperil the body (as cases of insensitivity to pain clearly show). But without the rational, decision-making mind, pain is superfluous. Animals have no rational or moral considerations which might overrule the needs of the body." (p.38)

          The paper's here. I assume you can access it:

          1. Matt — thanks. I can only read the first page/abstract because I'm not wired into the Oxford network at the moment (I'm traveling, at a conference). I'll check out the link again soon. The quote you give seems interesting, but at first blush weak/implausible. I'll have to sort out why I have this intuition. First step is to read the article, which I'll do soon. Thanks for your very thoughtful contributions to this discussion.

  3. First, I would like to say that this was a fun-to-read and well-written post. Thanks.

    Second, to the argument, I think the arg you offer is subtle, clever and sly. I take it that your thesis is: if you and I are going to resolve this debate about whether our commitment to not cause unnecessary suffering extends to non-human things like bees, then "we'll have to make up our minds about the relationship between objective “brain states” and subjective, felt experience." Is this your position? I take it to be rather striking. Rather than argue against your argument, let me just ask whether the following two lines of thought might defuse your conclusion.

    First, a point about metaethics. Not many of us think that we have to settle deeply metaphysical questions or solve the HARD problem to deliberate about whether we ought to eat tuna tonight. Are we wrong to think this? It seems like a good or at least convenient belief, considering that metaphysical questions are extremely hard to answer. This is not so much a direct objection to your argument proper, but to the metaethics behind it. Basically, if I or someone else were to argue to the conclusion "that the moment you start doing metaphysics to answer ethical questions you've made a big mistake", then would this be a falsification of, or credible objection to, your broad thesis?

    I think there can be a direct objection to your argument as well. The faulty premise, on my view, is the justification of the truth of 'suffering is bad' with 'the feeling of suffering stinks'. Is that really why we think suffering is bad? Imagine your best friend suddenly dies and this causes a bunch of suffering and depression in your life. We all think that is bad and we empathize with you. Why? Because a life without a dear friend is bad. And a life full of friendship is good. Our empathy comes from the intuition that we sure think a life without friends would be a bad one. Now, I think it would be a big reach to say that "a life with friendship is good" is true because a life with friendship forwards pleasurable high-order qualia experiences and, consequently, that the suffering accompanying the death of friends is bad because it forwards a bunch of not-so-pleasant what-it's-like feelings. I am not saying that "good" is some non-natural thing. Rather, it seems that your position requires, not just hedonism, but a form of hedonism that defines pleasure as "the subjective/high-order experience of pleasure". Are you committed to something like this? Basically, I think it is probably a stretch to account for the badness of suffering in terms of the feeling of suffering. If this premise is false, then are we free to think about whether we should eat cows without resolving the question of qualia?

    1. Hello — thank you for your nice reply. I think my position is slightly different from what you ask in your first paragraph. I'm saying we need to figure out the relationship between those things (brain states and subjective experience) SPECIFICALLY in other animals and insects — i.e., going beyond the human case. In the human case, we can correlate objective measures with our own felt experience. We can't do that with the other animals (or, strictly, any other individual besides ourselves). So it would be useful to see if we could come up with a theory about the relationship between the objective and subjective in cases besides our own. In humans we know they go together. But do they go together in bees?

      I think the answer to this question matters. If bees share with us the objectively-measurable stuff, but they DON'T have felt experience — qualia — then it seems less important morally if they undergo "pain" … if that pain doesn't FEEL like anything, then I think, yes, it matters less morally. Now, do we have to sort out meta-ethics before deciding about our next meal? Well, no. I think people already KNOW (or at least believe) that cows (for example) suffer miserably in slaughterhouses — and yet meat-eaters (myself included) still eat them. I think this is a moral weakness of mine. But if I truly thought that cows DIDN'T subjectively feel anything like pain — if I thought it was "all dark inside" a cow's head — then, actually, I would be much LESS morally disturbed at my own selfish, meat-loving behavior. So, you're right that we make our decisions without first checking in with the latest philosophy journals to see the state of metaphysical debate on whatever topic … but I think each of us has an intuitive metaphysics — an intuitive belief about, say, how cows feel as their throats are slit — and we either act in line with the entailments of our intuitive metaphysics (as in the case of someone who gives up eating meat under this belief), or we act in tension with it (as I do, every time I have a steak).

      I'm not sure what to make of your second point. It's interesting. But I think I disagree. I think suffering is bad because it feels, subjectively, bad. Its badness just consists in the fact that it feels that way — bad. When I say, "suffering is bad" — I don't mean "there is a metaphysical principle floating in the sky which says so," but rather something more like: "I hate the way suffering feels, and so do you. So let's agree not to inflict it upon each other, or on other creatures who can feel it, insofar as it's possible and reasonable, OK?"

      To pursue your example … WHY is life without friendship bad? WHY is life full of friendship good? Imagine that we were a planet full of zombies, with no conscious life whatsoever. Let's say it didn't FEEL like anything, subjectively, when anything happened, ever. There were no lights on in any heads, anywhere in the world. But keep everything else pretty much the same (insofar as you can). Now, let's say your friend dies. There is no consciousness in this world. Just unconscious organisms, one of whom has just stopped fulfilling the criteria for being alive. Do you think in such a world — in this case, of your friend dying — there would be some meaningful way to talk about that being a bad thing? It just would be, objectively? Is that what you propose? … I have to say, even though I really loathe Sam Harris' recent book, "The Moral Landscape," on the point of his atrocious philosophical underpinnings (on the relationship between science and morality), I do think he makes a fair point that "good" and "bad" are very, very hard to make sense of if they have no reference whatsoever to the experience of conscious beings. What do you think?
