SFP

[comic page image: sfp-5-72-for-web]

Comments
  • …I want a floating coffee maker.

  • vonBoomslang

    I wonder if anybody asked that question, though.

  • adamsbja

    Good philosophy.

  • Timothy O’Brien

    Ah, super villain logic at its finest… Yeah, this thing could destroy us all. I want to be the one to do it.

  • Darkoneko Hellsing

    That’s a sensible way of thinking.
    I’m still really wary of true AI, tho.

  • Ryan Thompson

    Suddenly, morality! She makes an excellent point, though, that all the supposed dangers of AI are really just dangers of any sentient intelligence, and realistically, there’s no particular reason that we should expect AI to be more dangerous to humans than humans already are to themselves. In other words, worrying about Skynet instead of being assaulted in the street by a human is like worrying about killer robots and mind control when poverty and starvation kill far more people. This is what Lisa and Alison have in common.

    • Caliban

      I think that an underlying fear in a lot of “Killer Robot/AI” fiction isn’t so much that they will inevitably be a danger to humanity, but that they will be BETTER at it than we are. No one likes being second best.

      • Sir_Krackalot

        Frankly, the only way we survive a strong AI is if we not only manage to make it better than us, but also manage to make it a better person.

        • Sabriel

          I always thought the scary thing about AI was the total lack of emotion, especially empathy.

          • I don’t think we know enough about AI, or emotion for that matter, to be able to predict what sorts of emotion, or lack thereof, might appear. As I understand it, emotion is a complex interaction between organic cognition, loosely related perception (which is a species of cognition itself), biochemical states (including various hormones and endorphins), and memory. Obviously, none of these assume or require either consciousness or sapience. All higher animals have emotions, and good arguments have been made for lower animals and even plants to have emotional states. Any machine intelligence of sufficient complexity and flexibility to qualify as a classic sci-fi AI ought to have enough “black gang” physicality and substructure to demonstrate some sort of emotional continuum, especially if you make a requirement that it display some sort of creative capacity.

            Whether any hypothetical “machine emotions” might have *any* sort of relational resemblance to human or even organic emotional states… well, I kind of doubt it on a rational level, but serendipity makes fools and poets of us all.

          • Sabriel

            Black gang? Is that something about the basal ganglia and/or substantia nigra? I don’t think that googling is going to go well for me.

          • Nah, it’s just an allusion to the traditional name of the engine crew in old coalers in steam-ship days – the always-filthy underdeck people who kept the boilers stoked but couldn’t be allowed to mix with the abovedecks “quality”. Something I picked up from Lois Bujold, calling those grotty, morlock-ish, absolutely necessary processes “the black gang”. Probably the equivalent of what Lisa likes to call “batteries”, although her term is no more accurate than mine in my opinion. I just acknowledge that I’m being… populist-sentimental in my idiom.

          • It’s not so much the fear that it will hate us. We could cope with that.
            It’s the fear that we would be completely irrelevant. That we could create something that doesn’t need us, something that finds us an outdated curiosity at best, and an irritating pest at worst.

    • The perceived problems of AI aren’t so much that of garden-variety human intelligence, but rather transcendence. There’s the worry that computer-assisted AI will operate on time frames which will basically run the apocalypse on nanosecond scale – we’ll have the four horsemen, a battle on some virtual Megiddo, then a silicon devil or second coming before anyone can blink. The techno-pessimists can’t imagine a deus from any machina made by human hands, while the techno-optimists… well, hell, I’m not sure how exactly they justify their Panglossian projections.

      And as Ms. Rosenbaum pointed out a couple weeks ago, when it comes to SkyNet scenarios, Cylons are preferable to The Machine, and are much less likely to instantiate at “fast AI” speed. Perhaps this is why Lisa’s been testing with robotics – hoping all the necessary pseudo-organic operators and feedback mechanisms use up enough cycles to slow down cognition to something a sort-of-human can react to in time?

      • Ryan Thompson

        There’s nothing stopping humans from writing an ordinary, non-sentient program to enact a nanosecond-scale apocalypse with the same speed as a sentient AI. (There are currently a lot of things making it equally impractical for both.) Anything a computer AI can do, computer-assisted human intelligence can do equally well.

        Incidentally, if you want to read a book about a human doing exactly that, look up Daemon by Daniel Suarez.

        • I think a human could enact an Armageddon script, but only an AI operating on at least a microsecond timescale could play Xanatos Speed Chess with a rapidity and flexibility sufficient to defeat any possible opposition, barring another AI of equal capacity to counter it.

          As it is, we have enough difficulty noticing a server going out of whack before it locks up entirely and has to be rebooted – and that’s just from workaday buggy code and punch-clock slovenly maintenance. I honestly fear what a malicious intelligence operating in that sort of environment could get up to before anyone noticed.

          • Ryan Thompson

            Putting intelligence on silicon chips instead of a network of neurons doesn’t magically make it faster. Computers seem faster because we simplify the problems we give them: cleaning up the input data, formalizing the problem, constraining the solution space. This makes sense: why would we bother giving a computer a problem to solve unless it can solve it faster than a human? But if you take away all those training wheels and require a computer to deal with the messy complexity of the real world in the same way that we do, the computer is going to seem a whole lot slower.

            As an example, witness IBM’s Watson, which was at best on par with the best human Jeopardy players. In particular, note that even though Watson was given the full text of the question instantaneously at the same time as it was revealed to the human contestants (who then required time to read the question), it still needed time to “think”, and that time was roughly equal to the time that the human contestants took. And it’s not necessarily a question of throwing more processing power at the problem to make it faster. You run up against limits like communication latency between memory and CPU. Especially when you consider that even Watson is nowhere near the intelligence of a human, and the problem it solves is still very simplified relative to the real world, and is even simplified relative to a real game of Jeopardy. (For example, they could have required it to use a camera to “read” the question from the screen just like the human players.)

            More generally, any intelligence playing Xanatos Speed Chess in the real world is going to be primarily limited by the speed at which information comes in, and a human with a better information network will easily beat out a computer.

            Bottom line, it’s likely that any computer intelligence that is as smart as a human would also be about as fast (or slow) as a human in practice.

            And don’t underestimate what can be accomplished by a non-sentient computer program. Consider the programming behind NPCs and computer opponents in video games. If a sentient computer did make a bid for world domination, most of the work would probably be done by non-sentient subroutines reminiscent of video game AIs, which the sentient intelligence would periodically check in on, much like a computer-assisted human would.
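
            A toy sketch of that division of labor, in Python (all the names and numbers here are invented for illustration; this is just the shape of the idea, not anything from the comic or a real system): scripted, non-sentient routines grind away every tick, while the “overseer” checks in only occasionally.

            ```python
            import random

            class ScriptedAgent:
                """Non-sentient subroutine: a fixed script, like a video game NPC."""
                def __init__(self, task):
                    self.task = task
                    self.progress = 0.0

                def step(self):
                    # Blindly follow the script; no understanding, just rules.
                    self.progress += random.uniform(0.0, 0.1)

            class Overseer:
                """The 'sentient' part: reviews the agents only every so often."""
                def __init__(self, agents, check_interval=100):
                    self.agents = agents
                    self.check_interval = check_interval

                def run(self, ticks):
                    for t in range(ticks):
                        for agent in self.agents:
                            agent.step()                  # bulk of the work, non-sentient
                        if t % self.check_interval == 0:  # the rare "check-in"
                            for agent in self.agents:
                                if agent.progress >= 1.0:
                                    agent.task, agent.progress = "next objective", 0.0

            Overseer([ScriptedAgent("patrol"), ScriptedAgent("gather")]).run(1000)
            ```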

          • At this point in the conversation, you start getting into definitional questions about whether sentience is equivalent to consciousness, sufficient for consciousness, or if consciousness is necessary to sentience. Your garden-variety human being is a skim-skin of conscious thought on a milk-pitcher of reflexes, muscle memory, prejudices, expectations and other non-sentient autonomous processes that make up the bulk of the “person”. The touch-stone we’ve been throwing around in comments for the last month or so has been Person of Interest‘s Machine, which was never intended to be a proper sci-fi AI, but rather a heightened version of real-world AI, which is to say, a “Watson” dedicated to the detection of real-time terrorist threats. In other words, entirely constructed out of learning algorithms and autonomous routines to solve explicitly moral equations, yet still deliberately gimped to avoid consciousness. The conceit of the drama is that this is inherently impossible, and that the stripped-down rudimentary learning-machine developed a very alien sort of sentience anyways, and evolved around the blockages.

            As vastly superior as the organic Turing machine which is the human brain is to existing architecture, it is still reliant on an inherently inefficient random-walk evolutionary anti-design. Synapses are *not* superior in speed even to modern-day doped-silicon circuits; they’re merely more dense and the beneficiaries of a billion years of cut-throat, massively parallel software and hardware development, packed solid with all the cheats and brilliant hacks, the accumulated inventive genius of nature’s twin god-hackers, Random Chance and Time. They’re equivalent to the mad, crufty monstrosities written over thirty years by dozens if not hundreds of anonymous programmers for the creaking old university mainframes, impenetrably optimized for certain half-forgotten tasks, nigh-unmaintainable but somehow staying up and operational despite all logic and reason.

          • Ryan Thompson

            Indeed, human consciousness is, as you say, a thin veneer over a mess of autonomous processes. And of course, the only reason organic computers are currently superior (for non-simplified real-world tasks) is the 4 billion years’ head start that they have over silicon chips. (Neurons haven’t existed for all 4 billion of those years, but they rely on biochemical reactions that are that old.) So I guess it comes down to a question of how close our brains have come to the theoretical limits of computational power (per watt, per unit volume, per unit mass, etc.). The more of a gap there is between our brains and theoretical max efficiency, the more plausible it is that computerized intelligences could eventually out-compete us. As to that, I don’t really have any idea, though I’m sure people have studied it. But my gut feeling, based mostly on speculation and not actually substantiated by anything, is what I already said: we likely won’t see a computer that can match wits with a human in real time, at least not for a long, long time (centuries).

            (Also, by the time computer AIs overtake us, our brains will probably also be computer-augmented, so that should even the odds.)

        • agent

          As a Computer Science major, trust me when I say there’s a lot of Hollywood science going on in Daemon.

          • Ryan Thompson

            Yes, that’s partly what I meant by “there are currently a lot of things making it equally impractical for both [humans and computers].” But my point was that Daemon is equally plausible, if not more so, compared to most fiction about sentient AI takeovers.

      • Adam McKinney Souza

        You people keep reminding me of Eclipse Phase, the best post-apocalyptic transhumanist sci-fi-horror-conspiracy-adventure tabletop RPG where you can play as an uplifted octopus ever.

  • tygertyger

    Ah, and in that last panel we see that Lisa is just as prone to hubris as any other super scientist — that “I don’t trust anybody else to do it right” mindset is hubristic, after all. She seems to be aware of her bias, at least.

    • Sir_Krackalot

      Hubris, or responsibility?

    • Insanenoodlyguy

      Not enough to stop. I stubbornly keep to my “she’ll make robots that will kill everything” doctrine because it’s too much fun not to keep! As long as mad scientists like Paladin are allowed to go unchecked, humanity is doooooomed

  • Pol Subanajouy

    Ah, so she is different from other “technopath”-like characters, in that it isn’t spontaneous insight or intuition, just effortless learning. I always thought it was weird that Forge from the Marvel Universe apparently had a subconscious that worked out so many engineering problems for him, because it made me feel like his personality should be way weirder and/or dysfunctional. Paladin is coming off as fairly believable.

    And she makes a good case on why AI couldn’t be much worse than us. (Probably just mess up way faster and more efficiently.)

    • Markus

      I think the one problem Paladin could run into that most other biodynamics don’t tend to run into is figuring out how and where her powers are acting versus where she’s functionally the same as a chromosomally stable person. Every other biodynamic’s powers we’ve seen are some combination of visible, continuous, and controlled, but hers are none of those.

      • Pol Subanajouy

        That’s a good point. I believe that was brought up at one point: when does it stop being a talent and start being a power? Clearly Paladin is firmly in super-power territory, but it could be an interesting backstory to explore.

  • StashaBoBasha

    Coming from Williams Sonoma Holiday Catalogue 2016, Floating Keurig Bot! (Do not leave unattended)

  • Firanai

    As far as I can see, in comic books super geniuses come in two brands. On one hand you have people who have an innate or subconscious understanding of how the universe works and the laws that rule it. On the other you have people who simply have a more powerful brain – more efficient hardware, if you prefer. I think in this case Paladin is the first type.

  • zophah

    Instead of making a mature AI from the beginning, shouldn’t she start with basic education and work up from there? Just make a weakened robot and “teach” it progressively. As it grows in understanding, then you can give it the means to act on its knowledge.
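
    In machine-learning terms, this is roughly “curriculum learning”. A toy sketch of the idea in Python (the skill model, thresholds, and “capabilities” are all invented for illustration, not anything from the comic): the learner masters easy lessons before harder ones, and only gains the means to act as it does.

    ```python
    import random

    def train(skill, level, trials=1000):
        """Toy stand-in for one training run at a given difficulty level."""
        for _ in range(trials):
            task = random.uniform(0, level)              # a task of random difficulty
            if skill >= task or random.random() < 0.05:  # success, or a lucky guess
                skill += 0.01                            # learn a little from each success
        return skill

    # The "weakened robot" idea: master each lesson before the next one, and
    # only unlock the means to act (new capabilities) as understanding grows.
    skill, unlocked = 0.0, []
    for level, ability in [(1, "speech"), (2, "mobility"), (4, "manipulators"), (8, "autonomy")]:
        while skill < 0.9 * level:                       # repeat the lesson until mastered
            skill = train(skill, level)
        unlocked.append(ability)
        print(f"level {level} mastered; capabilities now: {unlocked}")
    ```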

    • John Smith

      Maybe Alison could make that suggestion? 😀

    • I mean, it must be hard just suddenly springing into existence fully formed, without any sort of meaning or purpose. That probably causes bad decision making.

      • I don’t know, most people spring into existence half-formed, without any noticeable meaning or purpose. I’m not sure what coming into existence, half-formed or not, with meaning and purpose might be like. Rather like being an angel or a devil, one might imagine.

        • Rattigan

          It seems nonsensical not to try it, though? As a scientist, anyway, dismissing a method without even testing it seems wrong. Well, since she isn’t testing multiple different methods simultaneously, it might just be something she’s planning to do later? Who knows what her criteria are for prioritizing one method over the others.

        • My point is really that we get to develop a purpose and goals gradually, because we can’t form them straight away.
          The AIs here just get thrown right into it.

        • Also, a fully formed being with a purpose is probably what an actual AI in the real world would be. We’d be trying to solve some really complex problem, and eventually link so many non-sentient computer programs together that we create a conscious machine, greater than the sum of its parts, in the same way that the brain is capable of more than the network of neurons that it’s made of.
          Although even that would be a gradual process, I suppose. It’s funny to think that we could very well accidentally create an AI, and not be aware of it. I guess that’s a good reason to try to deliberately create one first.

          • Korataki

            The phenomenon you describe was explored to some degree in Mass Effect with the Geth. They were a collaborative gestalt AI which gradually gained self-awareness and determination.

            Of course, in that depiction they went berserk and killed everyone anyway.

  • Thomas

    I see Paladin having one of the problems that self-taught people run into: unexpected gaps in knowledge like reinventing things that already exist, incorrect terminology (calling an auto transmission a “gear box” for example), and not knowing common work-arounds to typical problems. … It is the kind of thing people who *have* had formal education tend to jump on.

  • Oz

    Just read the archive, this is a great comic!

  • Ryan Thompson

    I’ll bet it brews the coffee using the waste heat from the antigravity device.

    • So… the coffee is made of nuclear waste?

      • Johan

        Sounds like something you’d get super-powers from. I want a cup XD

        • Ryan Thompson

          Worst. Origin story. Ever.

          “Uh, yeah, so I, um, drank this cup of coffee, and then, um, you know, super powers? …What? It was so traumatic and harrowing! I almost burned my tongue!”

          • Johan

            Good luck finding a superhero name on this theme XD
            What kind of super power would that give us? Some kind of super speed? Sharper senses? The ability to produce coffee from the palm of our hands and throw it at the villains?

            That could be funny 🙂

  • wanderingdreamer

    Since I don’t think it’s been mentioned in the comic yet, how old is she? I was under the impression that most of the biodynamics were pretty close to Alison’s age when everything got crazy, but that was just my impression.

    • EgilGB

      I believe this is the closest there is to a precise date for the appearance of biodynamics:

      http://sfp.nsch.co/issue-4/page-36-3/

    • Sabriel

      Yeah, there was a season of strange thunderstorms that might have caused it. Alison’s mother was pregnant during the storms. Biodynamics are all about the same age.

    • Johan

      Actually, the way I understood it, the storms touched a lot of different people regardless of their age. When they became too visible, the government “invited” them into camps for training. I don’t think it was specifically said that only newborns received the “gift” during the storms.

      Paladin looks and sounds like she is in her early thirties to me.

  • RobNiner

    Coffee bot may begin chirping adorably and splashing spurts of coffee everywhere; do not encourage it.

  • Regarding Lisa’s AI problems, I’ve read a very interesting webcomic called Genocide Man. In the story, AI has been developed multiple times, but there was no singularity. Anything smarter than a human inevitably becomes insane and commits suicide, often in the most destructive way possible.
    What do you expect from something with Genocide in the title?

  • Insanenoodlyguy

    NOBODY ELSE CAN BE TRUSTED, THE FOOLS!

  • Those coffee packs are really wasteful. It doesn’t fit with her character, in my opinion, to be using the least sustainable method of making coffee.