Suddenly, morality! She makes an excellent point, though, that all the supposed dangers of AI are really just dangers of any sentient intelligence, and realistically, there’s no particular reason that we should expect AI to be more dangerous to humans than humans already are to themselves. In other words, worrying about Skynet instead of being assaulted in the street by a human is like worrying about killer robots and mind control when poverty and starvation kill far more people. This is what Lisa and Alison have in common.
I think that an underlying fear in a lot of “Killer Robot/AI” fiction isn’t so much that they will inevitably be a danger to humanity, but that they will be BETTER at it than we are. No one likes being second best.
Frankly, the only way we survive a strong AI is if we not only manage to make it better than us, we also manage to make it a better person.
I always thought the scary thing about AI was the total lack of emotion, especially empathy.
I don’t think we know enough about AI, or emotion for that matter, to be able to predict what sorts of emotion, or lack thereof, might appear. As I understand it, emotion is a complex interaction between organic cognition, loosely related perception (which is a species of cognition itself), biochemical states (including various hormones and endorphins), and memory. Obviously none of these assumes or requires either consciousness or sapience. All higher animals have emotions, and good arguments have been made for lower animals and even plants having emotional states. Any machine intelligence of sufficient complexity and flexibility to qualify as a classic sci-fi AI ought to have enough “black gang” physicality and substructure to demonstrate some sort of emotional continuum, especially if you make a requirement that it display some sort of creative capacity.
Whether any hypothetical “machine emotions” might have *any* sort of relational resemblance to human or even organic emotional states… well, I kind of doubt it on a rational level, but serendipity makes fools and poets of us all.
The perceived problems of AI aren’t so much that of garden-variety human intelligence, but rather transcendence. There’s the worry that computer-assisted AI will operate on time frames which will basically run the apocalypse on nanosecond scale – we’ll have the four horsemen, a battle on some virtual Megiddo, then a silicon devil or second coming before anyone can blink. The techno-pessimists can’t imagine a deus from any machina made by human hands, while the techno-optimists… well, hell, I’m not sure how exactly they justify their Panglossian projections.
And as Ms. Rosenbaum pointed out a couple weeks ago, when it comes to SkyNet scenarios, Cylons are preferable to The Machine, and are much less likely to instantiate at “fast AI” speed. Perhaps this is why Lisa’s been testing with robotics – hoping all the necessary pseudo-organic operators and feedback mechanisms use up enough cycles to slow down cognition to something a sort-of-human can react to in time?
There’s nothing stopping humans from writing an ordinary, non-sentient program to enact a nanosecond-scale apocalypse with the same speed as a sentient AI. (There are currently a lot of things making it equally impractical for both.) Anything a computer AI can do, computer-assisted human intelligence can do equally well.
Incidentally, if you want to read a book about a human doing exactly that, look up Daemon by Daniel Suarez.
I think a human could enact an Armageddon script, but only an AI operating on at least a microsecond-scale timeframe could play Xanatos Speed Chess with a rapidity and flexibility sufficient to defeat any possible opposition, barring another AI of equal capacity to counter it.
As it is, we have enough difficulty noticing a server going out of whack before it locks up entirely and has to be rebooted – and that’s just from workaday buggy code and punch-clock slovenly maintenance. I honestly fear what a malicious intelligence operating in that sort of environment could get up to before anyone noticed.
Putting intelligence on silicon chips instead of a network of neurons doesn’t magically make it faster. Computers seem faster because we simplify the problems we give them: cleaning up the input data, formalizing the problem, constraining the solution space. This makes sense: why would we bother giving a computer a problem to solve unless it can solve it faster than a human? But if you take away all those training wheels and require a computer to deal with the messy complexity of the real world in the same way that we do, the computer is going to seem a whole lot slower.
As an example, witness IBM’s Watson, which was at best on par with the best human Jeopardy players. In particular, note that even though Watson was given the full text of the question instantaneously at the same time as it was revealed to the human contestants (who then required time to read the question), it still needed time to “think”, and that time was roughly equal to the time that the human contestants took. And it’s not necessarily a question of throwing more processing power at the problem to make it faster. You run up against limits like communication latency between memory and CPU. Especially when you consider that even Watson is nowhere near the intelligence of a human, and the problem it solves is still very simplified relative to the real world, and is even simplified relative to a real game of Jeopardy. (For example, they could have required it to use a camera to “read” the question from the screen just like the human players.)
More generally, any intelligence playing Xanatos Speed Chess in the real world is going to be primarily limited by the speed at which information comes in, and a human with a better information network will easily beat out a computer.
Bottom line, it’s likely that any computer intelligence that is as smart as a human would also be about as fast (or slow) as a human in practice.
And don’t underestimate what can be accomplished by a non-sentient computer program. Consider the programming behind NPCs and computer opponents in video games. If a sentient computer did make a bid for world domination, most of the work would probably be done by non-sentient subroutines reminiscent of video game AIs, which the sentient intelligence would periodically check in on, much like a computer-assisted human would.
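For concreteness, a toy sketch of what such a non-sentient subroutine looks like in practice: a plain finite-state machine with hand-written conditions, no learning or awareness anywhere. The states, distances, and thresholds here are all invented for illustration.

```python
# Toy video-game-style "AI": a finite-state-machine guard.
# No sentience anywhere -- just condition checks and state transitions,
# run once per game tick.

def guard_step(state, player_distance, guard_health):
    """Return the guard's next (state, action) for one game tick."""
    if guard_health < 20:
        return "flee", "run to exit"
    if state == "patrol":
        if player_distance < 10:
            return "chase", "move toward player"
        return "patrol", "walk route"
    if state == "chase":
        if player_distance < 2:
            return "attack", "swing weapon"
        if player_distance > 15:
            return "patrol", "walk route"
        return "chase", "move toward player"
    if state == "attack":
        if player_distance >= 2:
            return "chase", "move toward player"
        return "attack", "swing weapon"
    return "patrol", "walk route"  # fallback for "flee" or unknown states

state = "patrol"
for dist in [30, 8, 1, 1, 20]:
    state, action = guard_step(state, dist, guard_health=100)
    print(state, "->", action)
```

A real game NPC is this plus pathfinding and animation, but the decision core really is about this dumb, which is the point: it looks purposeful from the outside while containing nothing you could call thought.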
At this point in the conversation, you start getting into definitional questions about whether sentience is equivalent to consciousness, sufficient for consciousness, or if consciousness is necessary to sentience. Your garden-variety human being is a skim-skin of conscious thought on a milk-pitcher of reflexes, muscle memory, prejudices, expectations and other non-sentient autonomous processes that make up the bulk of the “person”. The touch-stone we’ve been throwing around in comments for the last month or so has been Person of Interest‘s Machine, which was never intended to be a proper sci-fi AI, but rather a heightened version of real-world AI, which is to say, a “Watson” dedicated to the detection of real-time terrorist threats. In other words, entirely constructed out of learning algorithms and autonomous routines to solve explicitly moral equations, yet still deliberately gimped to avoid consciousness. The conceit of the drama is that this is inherently impossible, and that the stripped-down rudimentary learning-machine developed a very alien sort of sentience anyways, and evolved around the blockages.
As vastly superior as the organic Turing machine which is the human brain is to existing architecture, it is still reliant on an inherently inefficient random-walk evolutionary anti-design. Synapses are *not* superior in speed even to modern-day doped-silicon circuits; they’re merely more dense and the beneficiaries of a billion years of cut-throat, massively parallel software and hardware development, packed solid with all the cheats and brilliant hacks, the accumulated inventive genius of nature’s twin god-hackers, Random Chance and Time. They’re equivalent to the mad, crufty monstrosities written over thirty years by dozens if not hundreds of anonymous programmers for the creaking old university mainframes, impenetrably optimized for certain half-forgotten tasks, nigh-unmaintainable but somehow staying up and operational despite all logic and reason.
You people keep reminding me of Eclipse Phase, the best post-apocalyptic transhumanist sci-fi-horror-conspiracy-adventure tabletop RPG where you can play as an uplifted octopus ever.
Ah, and in that last panel we see that Lisa is just as prone to hubris as any other super scientist — that “I don’t trust anybody else to do it right” mindset is hubristic, after all. She seems to be aware of her bias, at least.
Hubris, or responsibility?
Ah, so she is different from other “technopath”-like characters in that it isn’t spontaneous or intuitive, just effortless learning. I always thought it was weird that Forge from the Marvel Universe apparently had a subconscious that worked out so many engineering problems for him, because it made me feel like his personality should be way weirder and/or dysfunctional. Paladin is coming off as fairly believable.
And she makes a good case on why AI couldn’t be much worse than us. (Probably just mess up way faster and more efficiently.)
I think the one problem Paladin could run into that most other biodynamics don’t tend to run into is figuring out how and where her powers are acting versus where she’s functionally the same as a chromosomally stable person. Every other biodynamic’s powers we’ve seen are some combination of visible, continuous, and controlled, but hers are none of those.
Coming from Williams Sonoma Holiday Catalogue 2016, Floating Keurig Bot! (Do not leave unattended)
As far as I can see in comic books super geniuses come in two brands. On one hand you have people who have an innate or subconscious understanding of how the universe works and the laws that rule it. On the other you have people who simply have a more powerful brain, a more efficient hardware if you prefer. I think in this case paladin is the first type.
Instead of making a mature AI from the beginning, shouldn’t she start with basic education and work up from there? Just make a weakened robot and “teach” it progressively. As it grows in understanding, then you can give it the means to act on its knowledge.
Maybe Alison could make that suggestion?
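The “teach it progressively” idea is roughly what machine learning people call curriculum learning. A minimal sketch of the suggestion, gating each new capability behind demonstrated competence on the last one, where all the task names, scores, and thresholds are invented for illustration:

```python
# Hypothetical staged training: the agent only unlocks the next, harder
# task (and the means to act on it) once it scores well enough on the
# current one.

curriculum = [
    ("recognize objects", 0.90),   # (task, required score) -- invented values
    ("manipulate objects", 0.85),
    ("plan simple goals", 0.80),
]

def train_on(task):
    """Stand-in for a real training-and-evaluation loop; returns a score."""
    # In reality this would run many training episodes and then evaluate.
    return {"recognize objects": 0.95,
            "manipulate objects": 0.88,
            "plan simple goals": 0.75}[task]

unlocked = []
for task, threshold in curriculum:
    score = train_on(task)
    if score < threshold:
        print(f"stuck at '{task}' (score {score:.2f}); keep training")
        break
    unlocked.append(task)
    print(f"passed '{task}', unlocking next stage")
print("capabilities granted:", unlocked)
```

The point of the structure is exactly the commenter’s: the robot never gets the means to act on knowledge it hasn’t demonstrated it can handle.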
I see Paladin having one of the problems that self-taught people run into: unexpected gaps in knowledge like reinventing things that already exist, incorrect terminology (calling an auto transmission a “gear box” for example), and not knowing common work-arounds to typical problems. … It is the kind of thing people who *have* had formal education tend to jump on.
Just read the archive, this is a great comic!
I’ll bet it brews the coffee using the waste heat from the antigravity device.
Since I don’t think it’s been mentioned in the comic yet, how old is she? I was under the impression that most of the biodynamics were pretty close to Alison’s age when everything got crazy, but that was just my impression.
I believe this is the closest there is to a precise date for the appearance of biodynamics:
Yeah, there was a season of strange thunderstorms that might have caused it. Alison’s mother was pregnant during the storms. Biodynamics are all about the same age.
Coffee bot may begin chirping adorably and splashing spurts of coffee everywhere, do not encourage it.