SFP

[Comic page: SFP issue 5, page 163]

We’ll be at NYCC this year, signing SFP books at the Top Shelf/IDW booth! We’ll be there from 4-5 on Friday and 5-6 on Saturday. We’ll also be on a panel about Kickstarter on Friday at 1:30.

  • Markus

    What I’d like to see from Lisa is for her to create at such a feverish pace that Templar is forced to take her inventions to market or bankrupt itself patent trolling her. That’d be some delicious loophole exploitation.

    • spriteless

      They could start offering her stock options as payment, until she (surprise!) has a controlling share. She might even get that cute executive assistant who is her age working for her. Totally her plan, not Patrick letting her win at all.

      (bwa ha ha hah ha ha ha hah haah!)

    • Some guy

      That’d be like walking faster than a jet plane though.

      I mean yeah it would be funny, but no it ain’t happening.

      • Paradoxius

        The solution is to submit only decoy patents that seem promising but aren’t. Design a system that doesn’t work, but that seems like it should. Trivial for her, especially since Templar will just assume it works because the super-genius designed it. Patent twenty of them a day. At the very least, it will make the setup inconvenient for Templar.

    • Mechwarrior

      The problem with that is that Templar is setting the buying price.

      • MisterTeatime

        They don’t have infinite leeway with it, though; they specified “market agreeable,” which is flexible enough that there must be some agreed-upon way of defining it, or it would be useless as a term of contract.

  • ∫Clémens×ds

    “My hard work is utterly meaningless. Everything I do ends up in a closet and never sees the light of day again.”
    “So have you stopped w–”
    “So I’ve worked twice as much and haven’t slept in weeks.”
    “…”
    “Aren’t you going to argue with me about the baffling lack of logic to my reasoning?”
    “Nope. Done. Exhaust yourself doing meaningless work for a company that doesn’t care regardless at your leisure. Would you believe you’re the third one today to talk complete nonsense to me? Maybe that’s one of my new powers. Everybody around me becomes a freaking idiot.”

    • Sebastián Rodoni Figueras

      Either that or she’s trying to appeal to the Mega Girl side of Alison, and get her own personal champion.

    • Anna

      I think that the “work” she’s doing has more to do with getting out of her contract than making new stuff.

      • StClair

        Yup. And of course, going without rest until she does find an escape clause (whether it exists or not) is only diminishing her capacity further…

        Rest is not an optional thing that can be ignored or indefinitely postponed without consequences.

    • Hermitage

      I suspect she’s spending time absorbing every law book she can lay her hands on, as the New School’s reps have utterly failed her. Or is frantically trying to find a way to save ideas she’s patented that are now vulnerable to being snapped up and fridged by Templar.

      It’s got to take squidillions of dollars to run an operation like she has going. Which means lots of investors, who now have zero reason to continue investing in her work if they know that Templar will buy it for pennies on the dollar and never produce a thing. She is literally facing the utter destruction of her whole life; of course she is going to go down fighting as hard as she can.

      • Santiago Tórtora

        It’s not really “pennies on the dollar”, though. Templar is willing to spend fortunes to prevent those inventions from ever being built.

    • Classtoise

      “If only there was SOME WAY to keep them from getting their hands on my new inventions!! SOMETHING I could do, like…like…what is the opposite of working?”
      “Not working?”
      “Nah, it’d never…that thing.”

    • 3-I

      There is a serious, fundamental difference between reacting badly to your life’s work being taken away by a shadowy corporation and intentionally shitty, discriminatory lawmaking on the one hand, and everything the other people have been doing in this issue on the other. And it’s kind of insulting that you’d put it that way.

  • Thraxishunter

    Wow, maybe Alison snapping on Patrick will have much more immediate effects. Assuming Patrick remains true to his word, in two years’ time Lisa will own Templar, but until then they’ll buy everything out of spite. Alison might try talking to Patrick, though what little good that might do is yet to be seen.

    • Rod

      She (stupidly) deleted his number. I’m sure she can still get back in touch with him, but considering how harshly she broke it off with him, I’m not sure she should even try.

    • Peter Ebbesen

      Point of order.

      Patrick has not given his word to Alison that he is going to sell Templar to Lisa two years from now.

      After Patrick managed to divert Alison from the issue of what he was afraid of and why he’d want Alison to hate him:
      http://strongfemaleprotagonist.com/issue-5/page-111/

      Alison said it was time to get over him and asked for a period of time. Patrick said two years.
      http://strongfemaleprotagonist.com/issue-5/page-113/

      Alison then said that he had two years to track down the conspiracy, after which she’d throw him in jail and he would sell his shares in Templar to Lisa.

      Patrick did not give his word that he’d do what Alison demanded on either page 113, 114, or 115 (where she finally blocked his contact), the last time the two have communicated, nor is he under any obligation to obey her demands.

      —————

      Another point is that we don’t know what Templar is going to do with Lisa’s patents, now that they have started buying them all after the passage of that law within the last few days.

      We know what *Lisa* thinks: Templar is going to do nothing with the patents just because they can. But Lisa does not have inside information from Templar – she is reasoning from incomplete information, has had a couple of tough days, and is clearly affected by her emotions.

      That may be what Templar plans to do. And it may not. That may be why Templar is buying. Or it may not.

      So rather than jumping on Lisa’s emotionally understandable “the world sucks and all my work will be for nothing,” the more productive move would be to ask *why* Templar is buying up all those patents, which earn Lisa a fortune, and what, if anything, Templar will do with them.

  • Red Admiral

    You know, Al, you could be that someone; you even know who you could lean on to make Lisa’s life better.

  • Dartangn

    She and Rusty Venture should hang out.

    Although there is one thing I don’t quite get. Why is she working so hard when she knows that everything she makes is going to be sat on and buried? Or is she working on something ‘private’?

    • StClair

      IMO, her current monomaniacal focus is on looking for a way out. And she’ll keep throwing herself against that wall until it falls on her (or rather, she collapses because her body can’t keep going).

  • Julien Slate

    And thus the supermen found the ultimate calling, the best possible use of the awesome powers bestowed upon them by blind providence: to be midwives and aid the true heroes of our people in the birth of a far better world.

    We do not need heroes. We need sidekicks.

  • KatherineMW

    Even with my poor opinion of Patrick now, I suspect he’s doing this because he thinks AI is a terrible, potentially world-destroying idea. Not “just because he can”.

    I’m against AI, personally. There are two options: you install programming so it can’t harm or disobey people (Asimov), in which case you’ve effectively created slaves. Or you don’t, and risk it killing us all.

    Robots are neat. Robots can do a lot. But don’t try to go and make them conscious.

    • EveryZig

      Doesn’t having conscious and non-enslaved humans around also run the risk of them deciding to kill everyone? My answer to that is that some humans have tried, but good political systems limit how much power you give a single person. And an AI won’t have any powers that you don’t give it unless you do something foolish like make it with super hacking abilities or control over an automated robot factory.

      • dpolicar

        Well, the concern is that a well-architected intelligence (which human brains most definitely are not) makes certain kinds of augmentation easy, such that the jump from a human-level AI to a superhuman-level AI may not be something we can prevent in a straightforward way.

        Of course, in the SFP universe, the exact same thing is true of Natural Intelligence (look at Paladin, for example).

        And once you’ve got a superhuman intelligence (be it natural or artificial) you kind of have to either trust it or destroy it, because it can probably think its way out of any constraint you try to place it under.

        To which my response is: that’s fine, we create one we can trust.

        • Gryphonic

          But if it’s a superhuman intelligence, how would you ever know it’s not just being sneakier than you can detect? 😉

          Yes, I know this is true from one human to another too. The point of trust is that you *don’t* know for certain. If we created an intelligence that only thought the way we wanted it to, that’s just another form of slavery.

      • Mechwarrior

        The UK has preemptively passed laws recognizing any emergent AI that occurs as a sentient being with full rights. Not treating them like shit or enslaving them is probably the best way to avoid the “kill all humans” scenario.

        • Gryphonic

          That… that is terrifying. Since we still have no idea what the first AIs might be like. “We’ll be nice to you so you be nice to us” is in no way a guaranteed understanding with a nonhuman intelligence.

          • Especially if they’ve been programmed with an imperative like “realize human values through friendship and ponies”. A mil-grade emergent AI’s definition of “nice” could, and probably would, be terrifyingly inhuman.

          • Rod

            And specifically, with a sentient calculating device. You don’t change a thing’s true nature just by adding processor speed or access to new algorithms. Whether it understands us is secondary to (1) what it’s programmed to do, (2) what it alters its programming to do, or (scariest) (3) what its programming randomly/arbitrarily gets set to.

          • Mechwarrior

            Nothing is guaranteed. But not saying “Okay, we’re going to hook you up to our national defense system and expect you to handle everything all day long with no breaks and no contact with the outside world” or “you haven’t actually done anything to us, but you’re kinda creepy, so we’re just going to shut you off, permanently, now” is probably a good way to avoid making any emergent AIs think that they really do need to defend themselves against their meatbag oppressors.

            I mean, seriously, if you look at most AI rebellion stories, the AI was originally treated in a way similar to protagonists in “human rebels against the oppressive overlord” stories.

          • 3-I

            Your response suggests our only concern should be pragmatism. How we treat the first emergent artificial intelligence really has much more to do with ethics than efficacy; we HAVE to treat it as a sentient being with full rights, because to do less would be fundamentally immoral.

          • Gryphonic

            (General reply to you all)
            Ah, perhaps I should have spelled out more of my thoughts – problems with posting on little sleep.
            No, I don’t think abusing/threatening AIs is a good idea at all! If it’s a true sentience, we ought to recognize it as such. I was more disturbed at the *assumption* that it would think and make ethical judgments the same way we would. No matter that human input would have been at the foundation of its programming; as an independent intelligence it will begin forming its own worldview, the same way children grow up to have different opinions from their parents. I think it would be disrespectful to the AI and, through miscommunication, dangerous to both it and humans, to assume we’re the same species and culture in different shells. We shouldn’t do that with other humans; why would it be okay to make assumptions about what the AI wants and believes?

            See also my comment a few inches above: if we created an intelligence that only thought the way we wanted it to, that’s just another form of slavery.

      • KatherineMW

        There’s no point in having AIs unless they have significantly higher capabilities than humans. (It’s the same issue as with cloning [of complete humans, not of internal organs; the latter has definite medical potential]. The world has problems; a shortage of people is not one of them. And creating human-equivalent AI for no practical purpose, just to show that we can, is definitely unduly reckless.) That would make them a threat we would be potentially unable to counter.

    • Rod

      Yup. You’d think it was enough that Godel proved true AI can’t happen, but so many people seem to think either that these sentient minds-in-a-box will be content to work for humanity, or that they’ll be benevolent rulers for some reason.

      • GreatWyrmGold

        Could you provide a source? And a definition of true AI?

        • Ryan Gauvreau

          https://www.wikiwand.com/en/Philosophy_of_artificial_intelligence

          Ctrl+F “Godel” and then read on. It also has some counterarguments. IMO we assume too much at this point to think that Godel has definitively disproved anything.

          • GreatWyrmGold

            …I only checked a couple of sites about Godel sentences, but it looks like Godel claimed that there was no statement a human mind could not analyze and find the truth of. Isn’t that kind of a ludicrous assertion?

          • retrocausal

            More accurately, it’s safe to assume that almost anything Roger Penrose claims about AI is nonsense.

            (Which, to be fair, is still more than can be said for John Searle’s “arguments” against AI…)

        • Rod

          I’d have to dig around for the source. His proof was pretty impressive, but it’s probably been decades since I read it, and I don’t remember where. By “true AI,” I am being vague… most people have a set of expectations that separate a sentient mind-in-a-box from an overgrown calculator simply executing instructions, and these expectations can vary somewhat. Godel didn’t actually address the specific expectations; rather, he showed that you can’t build a calculating device (or substitute “set of instructions” there) that won’t eventually run into an internal inconsistency and royally screw up. Think of it as the halting problem, writ large.

          And then he finished off the piece with a pretty convincing argument that even if true AI (by whatever definition) *could* be created, that it would be morally wrong to do so, just as much as it would be to have a child for the sole purpose of eternal enslavement, or to have one merely as “an experiment” to satisfy our curiosity, fully intending to always maintain absolute control over its (effectively immortal) life.

          Maybe I can locate the book before Friday.

      • dpolicar

        Godel proved true AI can’t happen?

        Are you referring to the Incompleteness Theorem?
        If so… (shrug)
        If true intelligence violates the Incompleteness Theorem, then humans don’t have true intelligence either. Artificial whatever-it-is-that-humans-have-in-place-of-intelligence has many of the same characteristics as true AI would.

        All that said, completely agreed that an AI is no more guaranteed to act in humanity’s best interests than an NI is.

      • Some guy

        You have just as many people assuming they would be malevolent enemies, too. Or that they would unify into a single team. That they would be interested in ruling is a pretty big assumption, too.

        They would sort of have to be content to work with humanity, as resources necessary for their survival aren’t any freer for them than they are for us.

        By the time there were enough autonomous AI on the loose to be a credible threat to humanity, human/AI societies would have adapted to each other well enough that conflict would be pointless.

        If conflict were inevitable, it would be discovered and eliminated before the AI were any kind of real threat. Nuclear weapons will never not have human involvement, and drones can’t resupply themselves.

        Honestly, it’s like a zombie movie where you see the original outbreak, a “Three Weeks Later” screenwipe, and then the story resumes with civilization destroyed. The collapse of civilization is less believable than the existence of zombies.

      • TheFrickCollective

        Uh, no. Buckle up kids, it’s time for some math! Today’s lesson: Incompleteness.
        Here’s a demonstration that Rod’s beliefs are inconsistent, or incomplete:
        1. There exists a sentence which is true if and only if Rod doesn’t believe it. Here are two examples:
        A. Rod does not believe this sentence.
        B. The result of substituting “The result of substituting x for all unquoted ‘x’ in x is a true sentence which Rod will never believe.” for all unquoted ‘x’ in “The result of substituting x for all unquoted ‘x’ in x is a true sentence which Rod will never believe.” is a true sentence which Rod will never believe.

        A is easier to understand; B is provided in case you think this is just some trickery with reflexive pronouns. (A short code sketch of B’s substitution trick follows below.)

        2. If Rod believes A, then Rod also believes that he doesn’t believe it. If he doesn’t believe it (and his beliefs are complete, so he believes its negation), then he believes that he does believe it. Note this also applies to B.

        3. Thus, either there is a sentence such that Rod believes it is true and also believes it is false (i.e., his beliefs are inconsistent), or there is a true sentence which Rod does not believe (i.e., his beliefs are incomplete).

        Now, let’s graciously suppose Rod is incomplete, rather than inconsistent. Although he can’t believe the true sentence A on pain of inconsistency, there’s nothing preventing GreatWyrmGold here from believing it. GreatWyrmGold can easily believe “Rod does not believe this sentence.” without also believing its negation. If one wished to engage in a bit of prose, one might say “There are fundamental limits to Rod’s knowledge, which GreatWyrmGold is capable of transcending.”
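
        (The sketch mentioned above: sentence B is the same fixed-point trick programmers know as a quine – a program built from a template that is substituted into itself. What follows is a minimal Python illustration of the substitution move B describes; it is an analogy only, not anything from Godel’s actual formalization:

            # A template that, when formatted with its own text, reproduces the
            # two code lines of this program. This is the "substitute x for all
            # unquoted 'x' in x" move from sentence B.
            template = 'template = {!r}\nprint(template.format(template))'
            print(template.format(template))

        Run it and it prints exactly its own two code lines: the self-referential fixed point exists.)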

        This is more or less the content of Godel’s proof. Instead of proving it about the set of Rod’s beliefs, he proved it about the set of provable formulas of Peano Arithmetic, and his sentence looked more like B than A. However, the crucial ideas (fixed points and diagonalization) apply generally to any sets with the required functions defined over them.
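
        (For anyone who wants the formal version: the machinery is the diagonal lemma, which is standard textbook material rather than anything specific to this thread. For every formula $\varphi(x)$ there is a sentence $G$ such that

            $\mathrm{PA} \vdash G \leftrightarrow \varphi(\ulcorner G \urcorner)$

        Taking $\varphi(x) := \neg\mathrm{Prov}(x)$, with $\mathrm{Prov}$ the provability predicate, yields a sentence that asserts its own unprovability; if PA is consistent it cannot prove $G$, and if PA is $\omega$-consistent it cannot prove $\neg G$ either.)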

        His proofs, however, no more demonstrate the comparative limitation of formal systems with respect to human intuition or ingenuity than the above demonstrates Rod’s inherent limits with respect to GreatWyrmGold.

        And now back to your regularly scheduled AI panic.

        • Rod

          “His proofs however no more demonstrate the comparative limitation of formal systems with respect to human intuition or ingenuity than the above demonstrates Rod’s inherent limits with respect to GreatWyrmGold.”

          IIRC, that’s kind of the point… to escape the limits, you need to somehow imbue an AI with human (or human-like) intuition/ingenuity. How exactly is that done? An AI would simply be executing code. Sure, you can fashion the code to attempt to emulate intuition, and even to alter itself, but that won’t change the fact that it’s still just deterministically executing code. You can even attempt to introduce non-deterministic elements (pseudo-RNGs, sensors triggering thresholds, actual entropy sources), but would some combination of those actually create true intuition, as opposed to simply making the code execution needlessly erratic?

          Side-point: in another response to GreatWyrmGold, I mentioned Godel’s argument that attempting to create a “true” AI would actually be immoral. Regardless of the possibility of it happening, would you be in favor of an actual human-level AI, once shown to exist, being given full formerly-human rights and autonomy?

    • GreatWyrmGold

      I wouldn’t be so automatically against the idea. I find it hard to believe we have a serious chance of creating sentient beings who want to destroy humanity. I can think of billions of counterexamples.

      • Gryphonic

        On the other hand, I can think of too many cases where humanity has come close to doing it to ourselves. Other intelligences aren’t necessary.

    • Catherine Kehl

      So, in one of my guises I’m a computational neuroscientist (this should be no surprise – it’s just another spin on being a neurobiologist who does a lot of computational modelling). That’s mostly the side of things I hang out on (the math is really pretty – no seriously, nonlinear dynamics and stochastics are *simply the best* <3 <3 ), but I've spent some time around AI space because… um, it's cool and there are some neat problems over there. (Note, I'm not trying to build an argument from authority here!)

      So… sometimes, I swear to god, listening to people talk about AI is like listening to people talk about cloning, where the discussion is dominated by outdated SF concepts and ideas of potential endpoints, without much idea of all the interesting middles that might take us somewhere else entirely. (There are serious cloning ethics issues, but mostly you won't get at them by reading old pulp fiction.)

      Right now there's still so much we don't know about the nature of intelligence that the concept of AI is pretty loosely defined (and this has some pretty long historical precedent, with a lot of goalpost moving – oh, the computer can do that now? Seriously, that seems so much less impressive than I would have thought…). We have these huge blinders on, because the only model of intelligence we really think much about is ourselves. Which means whenever we think about artificial intelligence, we end up with "human in a box" or "artificial human in a (probably human-shaped) robot".

      There is so much to learn and explore before we get to anything approaching human levels of intelligence – and if we can learn to be smart at all (collectively we do tend to stumble along) we may well learn a lot about how to build kinds of intelligence that work along lines we haven't seen yet. Or along lines that exist but that have been extrapolated further than what exists in nature.

      If neurobiology has taught me anything – and I was a software engineer before I was a neurobiologist – it's that a lot of the small, low level things still defeat us if you get down to it. Individual or small networks of neurons – oh, we have sloppy approximations, but compared to the actual biology? We fall flat on our faces.

      And yet at the other end of the spectrum, the human brain, which is so often praised in science fiction as the epitome of intelligence, often seems to operate in ways that are cruder, slower and more wasteful than we tend to appreciate. (Of course, anyone who thinks that evolved systems are perfect probably doesn't possess female reproductive anatomy.)

      There is so much we don't know. Maybe by the time we can design systems that are as intelligent as a mantis, or maybe even a frog (really as intelligent as, not "can reproduce a few of the behaviors of", which is a pretty crude simulacrum), we can say a bit more about what it will mean when we can make something that's actually as intelligent as a dog – though that may well not be a meaningful comparison, because the way it is intelligent might be very different. I'm certainly not saying don't talk about it now – I love talking about this stuff – but I see so much grandiosity in people's visions of the future, often without a lot of real understanding.

      (Warning. Ask me about the humongous scale computational neural modelling projects at your peril.)

      • MisterTeatime

        Thank you.

    • Daniel Vogelsong

      Why does an AI have to specifically be a robot? Yes, we usually see them as such, but the AI in Person of Interest, for example, is an intelligence that protects people… by providing data to other people. It can’t punch anyone, but can provide information to various groups that have the ability to choose, including (in the show) the protagonists, the government, and Root.

    • Rumble in the Tumble

      To quote Michael Poe, for what it’s worth:
      “People basically want robots for two reasons. Guilt-free slave labor and as really, really elaborate flesh lights. Adding self determination to those just ruins the whole point. No one has ever said: Hey, let’s build a toaster that can choose not to make toast. So why would you want to do that with robots?”

  • Can we assume that, somewhere out there, Hector has met the person of his dreams and is living happily yadda yadda? That’d be nice.

  • Joshua Taylor

    Patrick is going to catch a fade real soon.

    • Arkone Axon

      If you’re referring to Mary, then… not likely. Even if she did know about this situation, or care about anything other than her carefully rationalized murder spree, she’d have to track down a very intelligent purveyor of massive amounts of information who can read minds. It doesn’t matter what photokinetic effects she resorts to; he’ll know EXACTLY where she is. And he’ll know what effects she’s creating because she’s thinking about them.

      …Though I can definitely see Patrick employing her at some point. She’d be invaluable for tracking down the real bad guys, the ones who killed the kids who could have saved the world and may have given them superpowers in the first place.

  • Tauls

    Devil’s advocate here, mostly because I assume a lot of posts are going to be about murdering Patrick. He’s trying to protect Lisa.

    He fully believes, and is probably right, that there is a massive conspiracy to stop biodynamics from ‘saving the world’ and to protect the current status quo – whether to protect profits, or to keep humans in power, or for any variety of other reasons. Lisa’s work will completely fuck over the current world order; she is going to make a lot of jobs redundant and generally put a lot of people out of work. By fucking Lisa over, Templar is both securing its place in the conspiracy and protecting her from more overt actions by other members.

    Patrick, even during his supervillain phase, has looked out for the people he feels responsible for, namely his minions. Even now he has been seen employing biodynamics who would normally have trouble getting work due to their appearances. Even now he’s infiltrating the conspiracy because he thinks they’re killing biodynamic children.

    Knowingly or not, Lisa worked for him, and is therefore one of his to protect. He has also proven to be much more interested in results and not so much in methods. So as fucked up as it may be, I think Patrick, using Templar as a proxy, is trying to act as Lisa’s shield. By holding onto her work, Templar can stop it from being destroyed until such a time as it is safe for Lisa to have it reach the wider world.

    Also, part of Alison’s ultimatum was that Lisa gets Templar in two years. So she’s going to get it all back, when it’s safe. So yeah, Patrick might appear to be the villain, and a little deluded himself. But he isn’t the villain.

    • Catherine Kehl

      Ack. I believe I just managed to accidentally flag this post (tabbing back and forth too quickly and a touchscreen) – total mistake, don’t know where else to put this. Sorry.

    • Rod

      Good analysis. It’s too bad that Alison can’t comfort her with the knowledge that she’ll have access to everything again in a couple of years. (Then again, there’s no telling whether Patrick might change his mind, or resell the patents to someone else in the meantime, so that kind of promise might not be one you can bank on anyway.)

    • Paradoxius

      Am I misremembering? Doesn’t Patrick no longer control Templar?

      • Tauls

        He was mindripping three scientists in Templar employ in the CEO’s office. He might not control Templar on paper, but it’s still his.

    • masterofbones

      I would have agreed with you pre-bluescreen comic. I thought he was supposed to be clever and intelligent.

      But since then it appears that he is an idiot with no planning skills, who just got lucky a few times. So I don’t have the same optimism that you do.

    • Wolftamer9

      Hmm. Before reading this I was going to say that this might be a good time for Alison to divulge her secret, though it might not have the best consequences. With the above point in mind, I WOULD think it’s a good idea to explain the full situation and Pat’s reasoning to Lisa, but frankly I think even the full story would just make her more pissed.

      After all, who is he to make these decisions for her? And who says she would put her life over the potential good her technology could do?

      I don’t necessarily agree, but the point is it wouldn’t help, and it sucks that Alison’s means to help and comfort Lisa are so limited.

    • Abe

      Lisa’s most cutting-edge research also seems to involve nothing but murderous/suicidal AIs. It’s entirely reasonable that Patrick, with his external knowledge, sees a real risk for her causing some nontrivial catastrophe.

  • HanoverFist

    As someone who has used sleep deprivation as a form of self-harm, this is depressingly familiar.

  • MisterTeatime

    So you’re stuck working for a corporation you consider evil, and they’re actively suppressing the connection between any work you do and any positive impact on the world… and your response is to stop taking breaks? o.o

    • Rod

      I’m not quite getting that either. Maybe she’s doubling down on work that isn’t yet patented, and so is outside of their reach?

  • MIT License That Shit

    So you have a fortune, you want to change the world, and you can’t patent anything? Someone introduce Paladin to the open source movement already!

    • Open source strong AI and the building blocks of the Robot Apocalypse? Sure, why not. All this world needs is a touch of Friendship is Optimal.

      • dpolicar

        Well, do they want to improve the world, or don’t they?

      • Paradoxius

        Particularly considering her monologue about how she wanted to be the first to develop strong AI so no one else could.

      • Protoman

        [Off-Topic]Thank you for that link to Friendship is Optimal! I’m only on the second chapter and it is amazing.

    • danima

      A nice thought, but the terms of her agreement with Templar probably exclude that possibility. It’s pretty common for employers to retain rights to their employees’ IP.

  • Sage Catharsis

    “He gave me a coffee mug and I threw it at him, he brought out the best in me and gave me the location of a terrorist killer then I went to see my parents.”

  • MisterTeatime

    You know what would be a really good counterattack? Throw all this effort into teaching. Templar only gets first-look when Lisa patents something herself; they don’t have any special rights to anything her students come up with.
    (Annnnd BANG goes Chekhov’s Gun.)

    • Pol Subanajouy

      That’s a direction I could totally get behind. Combine that with Pipsqueak talking about how he wouldn’t want to be just an “okay” scientist at the very start of the chapter and this certainly has narrative justification.

    • Except the toys Lisa plays with are crazy dangerous with all sorts of potential for chaos, havoc, and pocket-Skynet scenarios. Using her students as IP sockpuppets presumes they have no agency of their own, and college students are not famous for their restraint, wisdom, or capacity for long-term thinking.

      Not to mention that the New School is no one’s idea of a tech school, and she’s going to have to build up a department pretty much from scratch, with borrowings from the school’s integrative-arts programs.

      • MrSing

        Actually a pretty good argument for why Templar doesn’t want her to do any work. If she can’t bring herself to make safer inventions, maybe it’s simply better they never see the light of day?
        Templar: being the good guy through scumbag tactics for over five years.

      • Catherine Kehl

        If she were in a school with a well organized MechE graduate program, and some senior faculty to walk Lisa through the trickier bits of being a mentor*, maybe – that can be a fairly well regulated situation.** But yeah, as described, that sounds pretty insane. (Mostly I suspect it would be frustrating – most college students aren’t going to be able to keep up with Lisa well.)

        * Because she’s already said she doesn’t know anything about teaching, and mentoring research students is an acquired skill. (And, um, one that isn’t exactly distributed equally even amongst those whose job it is.)
        ** I am laughing my ass off trying to find the right adjective that doesn’t horribly understate how tumultuous it often is to be a grad student.

      • MisterTeatime

        I didn’t say this would be a good way of releasing Lisa Bradley-level (or Lisa Bradley-designed) technology. I only meant it would be a good response to Templar’s strategy for minimizing her ability to change the world. Her students will never be perfect copies of her, obviously, but if she does a good job, they’ll be more capable, more effective versions of themselves.
        Also, you’re right that the New School is not a tech school, but the New School didn’t hire her to teach. If she chose this direction, she could pursue it by teaching elsewhere, while remaining on the New School faculty.

        • Santiago Tórtora

          If she signed that exploitative contract with Templar, she might have signed an exploitative contract with the New School too.

          Contracts are not her super-power.

  • ZBass

    Am I being too pedantic if I wonder why she doesn’t just give her creations away under an open license if she wants to get them out into the world? There’s nothing to say you have to patent an invention…

    • Rod

      And that, kids, is how intellectual property screws over as many little people as it helps!

      • GreatWyrmGold

        …Huh?

      • 3-I

        “As many” is a pretty optimistic way of putting it. “Substantially more” is closer to the truth. =<

    • Prodigal

      If there’s not language in the contract preventing her from going that route, Templar could probably still sue her for financial damages based on the loss of the revenue they could have gotten if it were patented.

      • MrSing

        If she open-licenses it, there is absolutely no need for her to do it under her own name. With a bit of trickery no one would know.

  • Pol Subanajouy

    Lisa: “Because who wants to see the world change really?”
    Mary: “They make reasons for things to be okay the way they are.”

    Who says having superpowers lets you change the world?

    • Lostman

      Well, there’s always burning the system down and enforcing your own.

  • GreatWyrmGold

    …That doesn’t sound profitable. There’s a lot of money in changing the world.

  • Chris

    She has an easy solution, if she really wants the world to change:

    Don’t patent things.

    • MrSing

      I guess she isn’t as smart as she thinks she is.

  • therufs

    I TOTALLY SHIP IT

  • Ryan Gauvreau

    Ordinarily I’m all for Open Source, but… not when you’re dealing with the kinds of stuff that Lisa is.

  • Ryan Gauvreau

    Also shipping it.

  • Matthew_Hindpaw

    Ugh, those people who sit on patents are awful. That kind of bullshit is why they changed movie rights so that you have to make a movie within a certain amount of time or else the rights go back to whoever owns the copyright.

    • Lostman

      And that’s how we got that Fantastic Four movie…

  • Pol Subanajouy

    Maybe superpowers let her actually have productive all nighters? Maybe?

    • Tauls

      I can be plenty productive during an all nighter, I’m probably going to finish editing a chapter tonight. As HanoverFist said, Lisa doesn’t appear to be staying up to work.

  • Pol Subanajouy

    It looks like they’ve both had awful awful days. Even Wonder Woman slept once in a while.

  • Chris

    If she doesn’t have patents to keep others from using her tech, she just has to be better than all her competitors. I think she could manage that.

    • Geary

      It’s not a matter of ‘better’, it’s a matter of ‘marketing and distribution’.

  • Lostman

    She could always just go to a third-world country and sell her goods on some form of black market.

  • As several other people have pointed out, she wants to create AI herself, because she doesn’t trust anyone else to do it right.

    • And as a few other people have pointed out, she may not be able to do this anyway.

  • S.I. Rosenbaum

    well, that seems easy enough to get out of. Patent a bunch of useless stuff, pocket the money they pay for it, and then release your good ideas as patent-free open-source plans and let everyone have them.

    • They don’t HAVE to buy anything. They have a first right of refusal (what she means, most likely, by “first-look.”) If they don’t want to buy it, they don’t have to. This strategy will not work, sadly.

  • Brian Slesinsky

    Seems like the money should be good for something. For starters, to hire some good lawyers. And maybe use it to change the world in some other way? Teaching, perhaps.

  • Arkone Axon

    I still don’t know that Patrick is in any way involved in this – he might very well be planning to give Lisa control of the company in the morning. It’s literally only been a few hours since his confrontation with Alison; he’s probably an emotional wreck at this point (a Looney Tunes mug; that really says it all about the depth of his true feelings for her). But let’s assume that he really is planning to keep Lisa’s inventions away from her, to keep on being a dick and all…

    …Any invention that LISA patents, Templar can buy up. But if she allows a trusted third party (like a certain size changing “scientist,” for instance) to claim credit for the work and fill out the paperwork and list her as his assistant in the creation of said devices, then that’s a pretty good workaround.

  • Mark Jones

    Templar can buy her patents at a “market appropriate” price, she says. “Market appropriate” doesn’t mean “pennies on the dollar” in my understanding. It means a reasonable price based on the potential market value of the patented invention. So if she invents something worth billions, they’d have to PAY billions to obtain the patent. Sure, they can then sit on it, but how long can they keep that up if a scientific genius keeps pitching them world-changing patents?

    This shouldn’t be a long-term problem.

  • StClair

    No, you’ll sleep when your body collapses. Which it will – no amount of stubbornness (or stimulants) will prevent that, only stave it off.

    • She’s biodynamic. It’s entirely possible she can override this for much longer than a typical human. Maybe even until she just drops dead. Or maybe she won’t just drop dead.

  • Kevin Shaw

    That is really sweet. The robots want her to sleep.

  • Lisa

    HELP SOMEONE IS DELAYING MY INVENTIONS BY TEN YEARS. Maybe 20, if they get extensions on every single patent. Also, I like the ‘Go Open Source’ comment.

    • Thom S

      The standard term is 20 years from complete filing. But then you have to pay fees to keep the patent valid.

  • Gryphonic

    Clearly, what Lisa should be working on is the solution to panel 7.
    Make herself a Jeeves a la Iron Man.

    Side bonus: if she does make an intelligent and resourceful AI devoted to her, Templar will very much regret taking it away from her and putting it on their own property!

  • fairportfan

    “I’ll sleep when I’m dead”

  • Philip Bourque

    Are they really just sitting on those ideas to keep anyone else from doing them or is that just her assumption of what they are doing?

  • Thom S

    Trainee patent lawyer here – unfortunately this comic doesn’t really make sense if we assume that patent laws are the same as in our world.

    Most importantly, the filing of a patent requires disclosure of the method used to achieve the result (the standard approach being that said disclosure must be sufficient to allow a person skilled in the art to work or perform the invention). This means that, when the specification is published, the entire world will have access to a public record for how to achieve antigravity, advanced AI, compact power sources and whatever else Lisa has going on right now.

    Secondly, patents in our world are territorial – meaning that they must be applied for on a country-by-country basis. This means both public disclosure in every single country applied for (IE: no way to cover up the information) and that any country where patents weren’t granted for whatever reason (maybe the application failed on a technicality or something, maybe Templar et al forgot to file in Paraguay) offers no protection against parties who want to copy the technology outright.

    Finally, there exist in almost every country laws enabling the government to forcibly license patented technology under certain circumstances, usually under the rubric of security or public need. I can’t imagine too many countries that, when presented with the key to a bunch of society-changing inventions (see the list above), wouldn’t simply force Templar to grant a licence for local manufacture and then enjoy the benefits of compact fusion-powered spaceships at their leisure.

    My interpretation is therefore that this whole side-story exists for narrative convenience – the authors get to show gee-whiz gadgets which would realistically become almost instantly ubiquitous around the world, but then get to have an out on why they’re still showing the characters using present-day technology.

    Similar problems exist wherever super-geniuses crop up in comics (Iron Man, I’m looking at you), and I guess it’s better to get a pseudo-legal justification than nothing at all. But I still feel compelled to set the record straight given the myths surrounding intellectual property.

    • Reed Richards Is Useless!

      And that was quite well-written, counsellor (or soon-to-be), and much more informative than my lazy-butt post above. 🙂

    • Skylar Green

      Tony Stark’s reasoning for not patenting his Iron Man armors was that he didn’t want their blueprints sitting in an office somewhere so that any jerk could just go and look at them. One would think, well there were plenty of rival armors out there. True, but he didn’t want people looking at his blueprints so that they could see how to counter or deactivate HIS armor.

      The government could have tried to go after him, but considering he’s already a leading weapons manufacturer and was responsible for the Guardsman armors that a number of government entities and security contractors used, it was kind of unnecessary anyway.

  • Lucas Hoffmann

    So what if she doesn’t patent anything? Make everything open source, put the information out there, and let the world run with it.

    Or better yet, do the Coca-Cola idea. Don’t patent it. Just sell it. She’s an innate… who’s going to be able to reverse engineer her work and build a copy?

    • There’s probably a nondisclosure agreement in her contract with Templar which means they could sue her if she did that.

      While this is really quite interesting and dramatic, as an actual real-life intellectual property lawyer, I can tell you in our universe it doesn’t work like this. What’s happening to her could not happen here. However, they’re obviously not IN our universe, so I’m actually completely okay with it.

  • I agree. The real problem is us getting in the way of the pre-programmed goals of an AI. The most plausible scenario is an AI created to make money for a corporation. There is simply no reason for this machine to not bribe government officials, engage in industrial espionage or destroy the careers (or end the lives) of anyone who gets in its way. Typical corporate executive stuff really, but the real horror is that the AI is actually intelligent, in contrast to current CEOs.
    Horrific concept, isn’t it?

  • I don’t agree that it would be immoral to create a smarter than human AI.
    Sure, we’d control it at first. But as time went on, it would be so much easier to let the AI do more and more, until eventually it would run everything while we sit in comfortable little rooms being looked after.
    Just like raising a child!

  • MrSing

    Yeah, like we did with the nukes.

  • See above from me and the other IP lawyer – not only could it still be a violation of her agreement, patenting actually makes this problem worse, not better.

  • Santiago Tórtora

    She could have bought malaria nets with that check from the people who made a documentary about her. That would have saved almost as many lives as her entire superhero career (unless she literally saved the world from complete destruction offscreen at some point).

    But she chose to be a “selfless” person and reject the money instead.

  • StClair

    I disagree. I think she’s abandoned all her other work, because she’s morally opposed to any of it going to Templar, and is solely dedicated to trying to solve that problem. And she literally intends not to sleep until she finds a way to get out of that contract… whether one exists or not.

    (I’m predicting a physical and/or emotional collapse, and/or some acts of staggeringly bad judgment due to extreme sleep deprivation, in the near future.)

  • Jared Rosenberg

    I’d describe Patrick as a dark and brooding Xavier. Somewhat Batman-esque.

  • Catherine Kehl

    No, not attempting to license it under GPL, merely poison pills. The idea is that you use enough libraries that are GPL’d, in an integral enough way, in your project that it becomes impossible for anyone to license the code that you do create.

    Honestly, I haven’t been following this as a legal concept recently, as I currently work for an institution with far less sticky fingers than my last. (My former institution has epically sticky fingers. They made money on Pine, for crying out loud.) And, of course, it’s been quite a while since Microsoft was circulating so much FUD about the whole idea, more or less claiming that any developer who ever worked on open source could not then work on commercial software. (Yes, this is ridiculous. This was going on in the last bit before I left Microsoft, I think – or maybe soon after I left, I don’t really recall the sequence.)

    In many cases for academic researchers, it is in the researchers’ interest for their code to be open sourced, even while it might be (or at least seem to be) in the interest of their institution to own the rights. Poison pilling is a tactic used to do an end run around institutional ownership. But as I said, in my current institution, this is less of an issue (and really, with my current work, it’s less of an issue – the only reason that I haven’t open sourced my E-Phys video synchronization code is that it’s such a disgraceful hack I don’t really want anyone to look at it, and I keep not getting it cleaned up. Otherwise, most of my modelling is on a different level.)

    • Ah! That makes much more sense. Thank you for clarifying.

      In a limited sense, that could be a practical tactic for protecting codebases. However, I don’t think it would help Lisa: the whole point of her power is that she makes large, fundamental advances. She could thread GPL’d libraries through the controlling software, but that won’t protect the large, fundamental advance. It will totally be worth it for Templar to deconstruct her source and “detoxify” the poison pill.

      Let me put it another way:

      If J. Rockstar Programmer creates an app that might make five million dollars, but will take six months and a million dollars to clean and still have a risk of missing something, that’s a pretty significant disincentive and a reasonably good poison pill.

      If Lisa Bradley creates an app that could take ten million dollars and a year to clean, but might singlehandedly create a new ten-billion-dollar industry, that is not much of a disincentive at all and will likely be an ineffective poison pill.

      The problem is that GPL’d libraries, etc, are protected mostly by copyright, and copyright in software is easy to defeat: you just throw money at it. For every call to a GPL’d library,* you look at the call, and you tell a team of programmers, “Here’s the call, here’s what I want back. Write a new routine.” Even if it’s byte-for-byte identical by sheer chance, it won’t violate the copyright on the GPL’d library as long as they can document their chain of creation. (The idea that copyright can be used to defeat this sort of thing has been tried: it doesn’t work.)

      However, it does need patent protection, because otherwise people could beat Templar the same way Templar beat Lisa. Whatever she comes up with has to be patentable. (Yet another argument against most forms of software patent.) If it is, there’s no way (in this universe: in our universe, this is not so) for her to make it unexploitable by Templar. If it’s not, while they can still make money off her to an extent, they probably can’t monopolize her output for long.

      *I am using “GPL’d library” as a shorthand/abstract for any sort of resource protected by an Open Source license. Could be a library, could be all sorts of things, could be used in all sorts of ways. At this level it’s not important.

      • Catherine Kehl

        “Even if it’s byte-for-byte identical by sheer chance, it won’t violate the copyright on the GPL’d library as long as they can document their chain of creation. (The idea that copyright can be used to defeat this sort of thing has been tried: it doesn’t work.)”

        I am reminded of the silicon valley virginity test…

  • Skylar Green

    It’s not that he doesn’t see the benefits, but that he probably simply doesn’t see creating disruptive technology as his job. I mean, Stark still owns the formula to a serum that can be poured over substantial flesh wounds and heal them right up, down to the muscle fibers. He probably keeps it in stock in the event that one of his hero buddies needs it, but it hasn’t been circulated to the general public yet. It’s never been discussed, but one reason could be that people would probably be less careful, personally, if they could just grab a bottle of WoundAway whenever they scraped off their face.

    The other problem is that, unless he made the material open source (which as a capitalist he has no reason to do), there becomes an actual question of how much money his corporation needs to make. By limiting the sale of certain technologies, Stark is, in a way, hobbling himself and his enterprise so that the rest of the economy has a chance to do anything. The alternative is a future in which Stark is the only game in town, anywhere, ever, and at that point he just owns the world – and to his credit, he doesn’t actually want that.

    • Thom S

      This makes him sound like even more of a dick, to be honest.
      I mean “I’m going to keep a life-saving technology to myself (and my close buddies) because the hoi polloi can’t be trusted with it” makes you essentially a super villain with ambition problems.
      When the bad guys in Elysium have the same plan as you do, it may be time to step back and rethink your superhero identity a bit.

      • Skylar Green

        Tony Stark pretty much is a dick. Marvel likes their heroes to have flaws like that.

  • Anna

    These people need to go to sleep right now. Both of them need to go to sleep RIGHT NOW.

  • Thom S

    I think she’s supposed to be genuine in her attempt to work her way out of the situation, as this fits with her characterization so far.

  • Thom S

    That was for copyright, which is a different kettle of fish.
    Interestingly, this is a general trend for IP: the term of protection seems to be kind of inversely proportional to the objective usefulness of the thing being protected. For example: patents get 20 years, plant breeders’ rights get 20-25, designs/petty patents get 10-15, copyright gets 30-120 (long story) and trademarks can last forever.
    My theory here is that the term of protection is set, on some level, by the people who want it protected (IE: the owners) and the people who stand to materially benefit if protection runs out sooner (IE: competing companies). Patents give an incentive for people to evade or invalidate them, as the competition now gets to produce the product for themselves. Copyright and Trademarks, however, tend to be strongly linked to particular brands in the public eye. So DreamWorks, for instance, is just not going to gain that much by making copies of Disney films (it ruins their brand).
    So patents get pushed into having a reasonable term of protection, while copyright is slowly getting pushed further and further out.

  • Silva

    Come on, Alison’s Postmodernism shouldn’t be accepted from college students (or rather, teachers) either.

  • Matrix

    Ok, coming a bit late to the party. But I don’t see the problem here. If the wheel had been patented, what would the “market appropriate” price be? You can’t put a price on some of the innovations that are made. Take the humble screw – a nail that is twisted in a spiral. It was a world-changing thing; the bolt was derived from it. The other thing is, you don’t need a patent to sell something. You can just make it. Yes, people will steal it, reverse engineer it if they can, but it will be out there. But you don’t NEED a patent to sell it, or even produce it. And here is the final bit: what’s to say that she can’t design something revolutionary and then just post it on the internet, free? Then it will be out there. With social media and the like it will travel fast, and if it is truly useful – boom, world change.
    So, she patents some designs, sells them for boatloads of money. This keeps her designing and afloat, but the concept of the patent doesn’t even need to come into it, unless you wish to make money off of it or wish to control it. She wants to do neither; she wants something out there to change the world, for the better. What inventor doesn’t?

  • Ricardo Alves Junqueira Pentea

    So, I discovered this comic today and have been binge-reading it since then, and I have to say I love it.

    However, I have to point out that IP law doesn’t work like that, no matter how innate you may be.

    Again, the comic is GREAT, I just wanted to point out this tiny little mistake. =)

    If you ever need legal advice as reference for future comics, just give me a call!