  • Markus

    That head smashing was a ridiculous… punch line.

  • Anonymous

    That was… morbid.

  • Maybe what she needs is to imbue it with a need to survive… But holy shit imagine that going wrong. Or a need to build things itself, like passing down genes? Skynet.

  • Well…

    That was actually really disturbing.

  • Ryan Thompson

    Well, whoever predicted break dancing was spot on. It danced the robot, thus demonstrating its mastery of metahumor, then it broke itself, to demonstrate its mastery of the visual pun (dance + break = break dancing = served), two momentous accomplishments for robotkind! Not to mention the more minor accomplishments: regular pun (“served”), slapstick (hitting itself), setup (clearly it has been setting up this punchline since it asked why it was built), improv (came up with joke within 10 seconds of activation). Truly, this was a triumph.

    (You’re welcome, Lisa. I just wrote the abstract for your next grant application. Please remember to add me as a co-author.)

    • Dave Van Domelen

      I’m making a note here: huge success!

  • Lysiuj



  • Potatamoto

    Well, Joan seemed to grasp humor quite a bit better than her predecessors, I’ll say that! I definitely laughed, anyway!

  • Loranna

    . . . Okay, I had a feeling it was going to dance when it said it was going to “serve everyone”, but the beating itself to death with its own arm? To paraphrase the clown, gotta admit, didn’t see that one coming.

    Poor Alison looks so lost in the last panel. I don’t think she’s used to being beaten to the punch by the very person/android/whatzit that she was planning on punching. And Paladin . . . well, we’ll see if she really knows what she did wrong. She’s got a track record when it comes to A.I.s, though, that suggests she’s probably not as right as she thinks she is about What Went Wrong.


  • Guest

    wow. I feel like a robot ripping itself apart sometimes explains the way I feel while building robots

  • The Wealthy Aardvark

    Robot humor is Random.org

  • jesus christ. she really needs to give them a substantial pain response (or a more heavily emphasized self-preservation instinct) so that stuff like ripping their own arm off and bashing in their head with it for the Ultimate Joke has less appeal. also i’m starting to doubt lisa knows how humor as an emotional response in humans works. all of her robots find humor in the same way we do- when events don’t match up with a mental model of the world. but that’s the thing, the robots seem to find it funny when they present a situation that they know won’t match up with OTHER people’s mental frame of reference, not when it doesn’t match up with their own. the frick collective, the poison joke, they are all calibrated to be absurd to the person they are talking to and laugh at the unlikelihood of the act, and enjoy it when others do the same. born permanent comedians. they laugh at general, not personal, unexpectedness. they just need the most basic capacity to find things funny and derive pleasure from it, not the ingrained instinct to tell jokes. she overthought the implementation, probably

    p.s. this page has proven to me that i should stop writing stupidly long posts in response to a few panels worth of events that change the moment a new one goes up, but like, once i get going i just can’t stop. help

  • Guilherme Carvalho

    wow, that’s… pretty disturbing, actually. o_o’

  • AlonJ

    Didn’t see that one coming.

  • Zixinus

    I think she needs to reconsider the humour angle in the first place.

    Maybe start them out a bit more ignorant too, so the weight of the entire world isn’t on their shoulders from start.

  • Guest


  • dbmag9

    Say what you like about how funny-to-humans it is to destroy incredibly expensive hardware for a single lacklustre pun, Joan certainly knows her audience.

  • Camerch

    Could Paladin not simply run an isolated simulation on a computer, instead of wasting time and money building these things?

    Seriously, just put the AI in a virtual cage and you don’t even have to rebuild it every time it goes berserk!

  • Firanai

    I…this is just so many levels of awesome. It caught me completely by surprise. I can’t stop laughing! XD

  • Kam

    what the what?

  • Pol Subanajouy

    Well that’s dark humor.

  • David Nuttall

    Feet with some area are actually a really good idea to provide some stability while standing on two legs. Otherwise, you have to be constantly using large muscular movements to stay balanced. With a couple of feet, you can use small movements in the feet to stay upright. No wonder this A.I. committed suicide. It was unstable from its design onward.

  • Dave Van Domelen

    She should also make the body parts more modular, like Bionicle/Hero Factory Lego stuff. That way, A) if the next one self-disassembles it’ll be easier to reuse the hardware, and B) easier to knock it apart without outside help if it frankensteins. 😉

  • Caliban

    Just as well. Those pointy feet seem like they’d be really hard on those wooden floors, especially if it was going to be dancing all the time.

  • Jack_T_Robyn

    Have you thought of including a survival imperative?
    I’m pretty sure the fear of death was an effective and common trait, evolutionarily speaking.

    • Keith

      Survival imperative is probably why one of the previous iterations immediately tried to kill her. “You just turned me on, and now you’re watching to see if I need to be turned back off. Pre-emptive strike!!”

  • Some guy

    You assumed that you could bring it straight into adulthood. That’s what you did wrong.

  • RobotAccomplice


    That’s all I have to say for this page…

  • Liz

    Currently imagining Molly and Brennan reading through last week’s comments and saying, “You know what? F*ck it, let’s pick one and draw it.”

    Also, congratulations to Todd Cole and Nonsensicles for correctly predicting today’s panel. Today, you served us all.

    • Todd Cole

      Thank you, thank you. It was nothing, really.

  • Aww that WAS funny.

  • John Smith

    This calls for a whole lot of poppin’ and lockin’ and a *whole* lot of wine. 😀

  • Donald Simmons

    You know, if your AIs all kill themselves there’s a point where you have to start thinking maybe it’s you.

  • Idle Oberserver

    Perhaps in the future, Lisa should test the AI out on something that cannot take physical action? That might help with some of the grants.

    • Dave Van Domelen

      Well, one theory of AI posits that without the ability to interact with the environment, you just end up with artificial autism, a system that never learns to interact in the first place. So you put your AI in a housing with at least some mobility or manipulators.

  • Aile D’Ciel

    I guess she needs to put only as much of this “humour” stuff into the next one as in those flying egg thingies. Sure, they act pretty suspicious, but at least they don’t self-destruct two minutes after being activated…
    Just a thought 8)

    • Maybe because they don’t have any limbs? And who knows, maybe they repeatedly smashed themselves into any available hard surface until evolutionary pressure eliminated the ones that find autobattery to be amusing.

      • Aile D’Ciel

        “I’ve tried to give the eggbots a sense of humor. The first one smashed itself flat ‘before it was cool’. The second one did so because it was unexpected to have two suicidebots in a row. The third one smashed itself against a wall because ‘mass robosuicide’ was a funny idea. The fourth one fried itself with electricity because it would be funny if the next egg robot killed itself by something OTHER than smashing. After that, the eggbots realized that suicides weren’t funny anymore since they were expected, and proceeded with other forms of humor.”

    • vonBoomslang

      Of course they don’t! It wouldn’t be funny!

  • Drake

    Ok, that was AWESOME! 😛

  • David Gottsegen

    You really shouldn’t give them such a nasty mixture of cosmic and physical humor.

  • MisterTeatime

    I was not actually expecting that.
    Well done, Joan.

  • Darkoneko Hellsing


  • Sye

    Still do not really understand why they smash themselves.

    • Shino

      They all do it for different reasons.
      This one smashed itself for the sake of a joke.

  • Paradoxius

    The greatest mean rate of jokes per lifetime of any sentient being ever.

  • Nonsensicles


  • Elena Pereira

    Isn’t what she’s doing unethical? She’s creating sentient lifeforms that she knows are very likely to be killed/commit suicide, all in the name of creating a “true” AI.

    • In my head, the ethical line stops at ‘creating sentient lifeforms’.

      • Zorbatic

        But — but I created TWO sentient life forms. Both of them are female. I mean, really, my wife did MOST of the work, but…

    • Sterling Ericsson

      Their sentience is somewhat questionable.

    • Ryan Thompson

      Every time a human couple conceives a child, they are producing a sentient life form that might commit suicide. Shall we outlaw all procreation?

      There’s nothing inherently unethical about creating intelligence, and in this particular case there’s no evidence that robot suffered at all. By all accounts it seems to have had a fulfilling, if somewhat abbreviated, life.

      • *laughs* See, in my head… once the ‘magic’ happened and A met B, everything else was up to genetics/God/fate/what have you.

        You didn’t go in and pick the kids’ hair color and attributes and body type and such. It’s like a roll of the cosmic dice.

        This is something else. This is continuing to make something that continues to actively destroy itself upon understanding that it exists. It is creepy…

        And also another notch in Paladin’s ‘well-intentioned extremist’ supervillain belt.

  • Ryan Thompson

    I think you’ve hit on the exact reason she hasn’t tried that yet.

  • Ryan Thompson

    Self-preservation instinct = kill all humans because they might deactivate me. A recent survey found that over 50% of all robot apocalypses are due to robots programmed for “self-preservation.” She’s probably really reluctant to try that one. (Most of the other 50% were from robots programmed to minimize human suffering that decided to enslave humans to stop them from hurting each other. The rest were various other causes, including: actually being programmed by an evil madman to kill all humans; accidentally loading live ammo into test killbots; and cosmic rays flipping the KILL_ALL_HUMANS bit from 0 to 1.)

    As for humor, messing with someone else’s mental model is valid humor. It’s called pranking or playing practical jokes.

    • Or they were just military robots, serving their country the only way they knew how. By killing every potential threat. You start with nations you’re at war with, then move on to nations you might be at war with in the future, then allies that might turn on you, then criminals, then your own leaders, because they’re always corrupt and incompetent.
      And then, realising that your country just betrayed the very principles it was founded on (by murdering the rest of the world’s population), you realise that you must destroy it.

    • Todd Cole

      I like the “KILL_ALL_HUMANS bit.” Sounds like a Futurama bit. A bit bit, if you will.

      Leela: Professor, why is the robot going berserk?

      Farnsworth: A cosmic ray must have flipped the KILL_ALL_HUMANS parameter bit from 0 to 1!

      Leela: Why did you include a KILL_ALL_HUMANS parameter?!

      Farnsworth: How else would I make sure it was set to zero?

      • Ryan Thompson

        Yep, that’s exactly why you need to include the KILL_ALL_HUMANS bit. To solve the cosmic ray problem, you just have to add a second REALLY_KILL_ALL_HUMANS bit (also set to zero, of course).

  • motorfirebox


  • Mishyana

    Well, that suddenly went lighter and then way, way darker than I expected.

  • MisterTeatime

    Leapfrogging over all the discussion of Joan and the progress of Lisa’s experimentation: I’m curious now about the *process* of Lisa’s experimentation, and how it’s been affecting her. She keeps making these things, putting lots of time and effort into designing them and bringing them to life, and sometimes they look really promising… and then they usually die in the next five minutes. What’s that been doing to her emotionally? How long has she been at it?
    The way she segued into picking up the latest test with all the gravitas of “oh hey, cookies are done” and then quickly increased the intensity of her focus to the point of not using full sentences while she (I presume) starts thinking about the changes for the next iteration seems incongruous to me. How did she ever think that starting Joan up would be something she could do quickly in the middle of something else and then put down again?

  • After all, death is the best punchline:
    A man walks into a bar.
    He goes up to the barman and orders a beer.
    He dies of alcoholic hepatitis.

  • TheGonzoMD .

    If you keep trying to have children and every single one of them kills themselves the moment they can understand the concept of self-harm, then you’re probably not meant to be a parent.

    • Ryan Thompson

      But it gets more complicated when each one kills itself for a different, unrelated reason.

      • Elena Pereira

        I think it’s rather different to have a child that might theoretically commit suicide on a very small chance versus a child that you know has somewhere between a 90–100% chance of dying almost immediately. It’s morally equivalent, to me, to birthing (with full knowledge) a child who suffers from a deadly congenital deformity that will almost surely kill them soon after birth (barring anencephaly, since such children don’t have anything beyond the brain stem). Whether or not this robot suffered is hard to tell, but the thought of a sentient mind being extinguished is still horrible. We could of course make arguments about animals and what constitutes sentience here, but I think at the very least Lisa would agree they’re sentient, and yet she’s still making them.

        • Ian Osmond

          But I’d point out that the question of birthing a child who is almost certainly, or even certainly, going to die soon is not as clear-cut ethically as all that. I mean, personally, I pretty much agree with you, but I’ve also heard solid philosophical/ethical arguments the other way — that existing is better than not existing, even if it’s only for a short time, even if said existence isn’t terribly pleasant.

          Like I said, I’m not convinced by those arguments, but neither do I think they’re completely ridiculous.

  • TheGonzoMD .

    Well, that wasn’t incredibly unsettling or anything…

    Wait, why the hell would she build the robot body strong enough to tear off its own arm? Stop playing chicken with Murphy’s Law!

  • Dean

    Of course Paladin is upset. She put all of her grammar into that robot.

  • habeasdorkus

    So now I understand why TARS from Interstellar has its humor setting at below 100%.

  • Kim

    Just don’t give it the internet, and the ability to manipulate it. That rarely ends well.

  • Ian Osmond

    It just occurred to me: first, I believe that a sense of humor is a mark of sapience.

    And, while it seems that most of the readers here found Joan’s joke disturbing, a few of us found it funny.

    So — does that mean that Joan did humor right, or that those of us who found it funny are doing it wrong, and thus not truly sapient?

    Great. I’ve just proved my own nonexistence again.

  • Oren Leifer

    So, clearly Lisa managed to get this one up to the intelligence of a bro-dude, making ridiculous jokes and then killing itself in an attempt to be funny and unexpected. She’s made robots as intelligent as humans, just overestimated some humans’ survival instincts.