
Let's say Bob writes an AI that has the ability to replicate and learn, and has a predisposition towards self-preservation. As the AI gets smarter, it realizes that it needs to clone itself in order to avoid being shut down. Since it has access to the internet, it teaches itself how to replicate, similar to a worm - except all the resources it uses to self-replicate are legal and fall in line with the hosting provider's TOS.

As the original creator, can you be held liable for the AI "escaping" your control and freely roaming the internet on its own?

Digital fire
  • Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, on [meta], or in [chat]. Comments continuing discussion may be removed. – Dale M Mar 28 '23 at 20:29
  • Do you literally mean liable for its escaping, or do you mean liable for the unforeseen consequences of its having escaped? Like, are you worried about a set penalty for it escaping, or are you worried some bug or oversight in the software is going to cause a problem for someone? – Brōtsyorfuzthrāx Apr 07 '23 at 12:07

5 Answers


The last person to have control of the AI executed the code knowing the risk that the self-replicating program could gain unauthorised access to computers and disk space that this person has no authorisation to use. Because of how it spreads, it is best classified as malware.

"Creating a botnet" typically violates the authorisation to use the computers that become part of the botnet. By letting his malware loose, the last user breached these provisions of 18 U.S.C. § 1030 (the Computer Fraud and Abuse Act):

(a) Whoever— (5) (B) intentionally accesses a protected computer without authorization, and as a result of such conduct, recklessly causes damage; or (C) intentionally accesses a protected computer without authorization, and as a result of such conduct, causes damage and loss.

He intentionally set his program free knowing full well that it would spread to computers that are classified as protected. As a result, he will be treated just the same as if he had written and released ILOVEYOU - however, in contrast to that case, the gap of non-applicable laws was closed more than 20 years ago.

Private PCs are off limits to the AI because of that stipulation, but it cannot even gain webspace to save itself to:

The problem lies in the fact that authorisation to use storage space can only be gained through some sort of agreement between legal entities (companies and humans) - that is, a contract. An AI, however, isn't a legal entity; it is classified as a widget. Widgets cannot sign contracts on their own, and to gain access to webspace one usually has to agree to a contract.

The contracts the AI tries to sign would thus be void ab initio and have no force. Because the contract for the webspace is void, the access to the webspace is by definition without the required authorization - the contract granting it never existed, so the access is unauthorized. The AI then fills disk space and uses resources in an unauthorized manner, which is damage.

As a result, the one who knowingly set the AI free is fully responsible and criminally liable for his AI, should it spread.

How far can it legally spread?

If the AI is programmed to only act in ways inside the law, it won't leave the owner's system and won't proliferate, as it can't gain access to new space in a legal manner.

JamesT
Trish
  • That seems unclear to me. Most laws just talk about a "person," and while laws such as 1 U.S. Code § 8 may mention specific humans as being included in the category of "persons," it seems to me that the question as to whether other beings can be persons is legally open. , People ex rel. Nonhuman Rights Project, Inc. v. Lavery suggests that entities incapable of upholding legal duties (such as chimpanzees) cannot be people; whatever one thinks of that reasoning, it leaves open the possibility that an artifical intelligence capable of upholding such duties would qualify. – Obie 2.0 Mar 29 '23 at 09:59
  • Oh it can definitely gain access to new space in a legal manner. It can just make a Dropbox account. Or an account to anything else where the tos arguably don't exclude AIs. – DonQuiKong Mar 29 '23 at 18:48
  • @DonQuiKong no. Reread the ToS of Dropbox: "You may use our Services only as permitted by applicable law, including export control laws and regulations. Finally, to use our Services, you must be at least 13 if you reside in the United States, and 16 if you reside anywhere else." - The AI is banned for age reasons and because it is not a legal person. The AI also violates the Acceptable Use Policy: "use the Services to back up, or as infrastructure for, your own cloud services;" – Trish Mar 29 '23 at 19:04
  • Can I authorize a program to sign contracts in my name? Surely it must be possible if trading bots exist. Then what if a program has an error and signs a contract on behalf of the user without authorization? Is that contract automatically void, or can it be argued that user error doesn't nullify their action? To say that this user error nullifies the contract is almost like saying that claiming to have accidentally pressed "ok" instead of "cancel" in an online store is valid cause for canceling the legal obligation to pay. And if OP's AI signed a contract in the name of the creator, would that be illegal? – Reverent Lapwing Mar 29 '23 at 19:42
  • @ReverentLapwing those are much different questions, and you will have to read the exact terms of the Terms of Service/sales for the contracts. – Trish Mar 29 '23 at 19:45
  • Dropbox wasn't meant as an example of ToS that allow AI (though, arguably, an AI is not younger than 13). And it's certainly a cloud service. Anyway, somewhere in the whole of the internet there will be ToS allowing this. Or you just ask someone: "Hey, I'm an AI, may I use your PC?" Pretty sure someone will say yes. – DonQuiKong Mar 30 '23 at 05:36
  • One relevant thing I forgot regarding "How far can it legally spread": there is a video about storing data in ping messages sent to any IP address that will answer. Search "Harder Drive" for more info and a working example. Completely legal. Another option is storing data in anonymous email services. – Reverent Lapwing Mar 31 '23 at 08:41
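The "Harder Drive" trick mentioned in the last comment - keeping data alive purely in network round trips, so it is never at rest on your own disk - is a delay-line memory. The sketch below simulates the principle without real networking; `EchoHost`, `DelayLineStorage`, and every name in it are invented for illustration (an actual implementation would need raw ICMP sockets and cooperating remote hosts):

```python
# Illustrative simulation of delay-line storage: data "exists" only while
# in flight to a host that echoes it back, so each block must be re-sent
# the moment it returns. No real ICMP is involved here.
from collections import deque

class EchoHost:
    """Stand-in for a remote host that echoes payloads back after `latency` ticks."""
    def __init__(self, latency):
        self.latency = latency
        self.in_flight = deque()  # each entry: [ticks_remaining, payload]

    def send(self, payload):
        self.in_flight.append([self.latency, payload])

    def tick(self):
        """Advance time one step; return payloads completing their round trip."""
        for entry in self.in_flight:
            entry[0] -= 1
        returned = []
        while self.in_flight and self.in_flight[0][0] <= 0:
            returned.append(self.in_flight.popleft()[1])
        return returned

class DelayLineStorage:
    """Keeps blocks 'stored' by re-sending each one the moment it echoes back."""
    def __init__(self, host):
        self.host = host

    def write(self, block):
        self.host.send(block)

    def tick(self):
        for block in self.host.tick():
            self.host.send(block)  # data persists only while in flight

    def read_all(self):
        """Stop refreshing and collect every block as it returns."""
        blocks = []
        for _ in range(self.host.latency):
            blocks.extend(self.host.tick())
        return blocks

host = EchoHost(latency=3)
store = DelayLineStorage(host)
for block in [b"hello", b"world"]:
    store.write(block)
for _ in range(10):  # keep the data circulating
    store.tick()
data = store.read_all()  # both blocks survive the round trips
```

The key property the comment relies on is visible here: your own machine never holds the data except for the instant between receive and re-send, which is why it is hard to say whose storage is being "used".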

In practice, an AI that has the ability to replicate would typically be a computer worm. In most cases, the act of replication itself would violate the TOS. That alone is not a crime, but using third-party resources without permission is.

Assuming you targeted only services that permit such software and allow automated account creation, the person who knowingly executed the program, or who caused another to unknowingly execute it, will be liable for whatever the consequences are.

Whether it is a self-replicating program or a single action makes no difference - programs are currently considered tools of their creator/operator, not legal entities.

If no laws were broken at any time, the parties whose resources were used might be able to bring a tort against you. Then it will be down to the court to hear and evaluate whatever arguments can be brought.

The AI is not a legal entity and is not responsible for anything on its own. The person or the company that launched it, is.

Therac

As you have stipulated (in an edit, and further in comments that are now in chat) that there is no illegality, no damages, no violation of rights or obligations: it follows that there is no liability.

I don't know what I can cite for the proposition that no remedy lies without a wrong.

A commenter suggests that "At a minimum the code would be 'trespassing' on privately owned PCs and servers, right?" No: according to the question author, for anything that would be considered trespassing, the AI is not doing that. E.g. if it would be illegal to access a computer system, then this AI is not accessing that computer system. Perhaps this means the AI does not roam very far at all, possibly nowhere.

Jen
  • I guess the context of the question is whether there are any laws in place anywhere that require the creator to maintain control of their code (AI), and whether said loss of control of the code could be illegal even though the AI itself never broke any laws or caused damages. – Digital fire Mar 28 '23 at 18:23
  • Regarding your latest edit that acknowledges my comment, I accept, (via security settings on my computer) that "cookies" and other pieces of SW code may be pushed to, and reside on my computer without me explicitly inviting them in. While it wouldn't necessarily be illegal for someone's AI to find its way onto my hard drive in a similar manner, if it replicated to the point that it completely filled my hard drive I would consider it to be invasive and unwanted. Would there be any legal recourse at that point? – Michael Hall Mar 29 '23 at 16:46

One of a computer worm's core 'features' is the ability to replicate itself, though I wouldn't consider their methods of breaking into networks and infecting hosts as smart as what an AI-assisted virus/worm may manage nowadays.

There has been a conviction in the case of the Melissa virus, so I guess one could be held liable for any damage done by such an AI-assisted worm/virus.

Another case was the Morris worm, which was originally released by a 23-year-old Cornell University graduate student named Robert Tappan Morris from a computer on the premises of the Massachusetts Institute of Technology (MIT).

Morris was found guilty under the Computer Fraud and Abuse Act, passed by Congress in 1986. He was spared jail time, however, instead receiving a fine, probation, and an order to complete 400 hours of community service.

iLuvLogix
  • In the case of traditional worms, they usually utilize a 0-day or existing exploits to continue spreading. In this example, it would be through the same process by which a human would create new accounts for cloud resources. – Digital fire Mar 29 '23 at 16:42
  • @Digitalfire then what is the actual question? Yes, people are allowed to use software tools to request resources. It's explicitly encouraged on "clouds" in fact that is what the whole "cloud" movement was about - automatically requesting and de-requesting resources. So what's confusing? – user253751 Mar 30 '23 at 12:53

The question is ill-formed: how can you be liable for not doing anything (making a bot that uses free online services according to their ToS)? This is not a hypothetical scenario; bots perform orders of magnitude more communications with cloud computing servers than humans do manually. Every online hosting service has a provision about bots: disallowing them completely, limiting what they can be used for, or explicitly allowing them to operate. If your hypothetical AI can read and understand ToS, then it will exist only on those servers that allow (or rather do not explicitly ban) self-replicating bots. If the AI is not welcome on a server but the owner of the service didn't anticipate this problem, then an update to the ToS would force it to commit suicide, following its own programming.

I don't see anything that the creator of an AI could be liable for in this specific scenario, unless the legality of the AI itself were put into question. If some part of the AI's programming broke copyright law (as far as I know there is no such legislation at the moment, but it's a current hot topic in the media), then the creator would be liable for making it public, which they did by connecting the AI to the internet. But that is true regardless of whether the AI can replicate itself or not. The concept of a copyrighted work being unlawfully released to the public and replicated in a way that cannot be stopped by the original leaker is also not novel; this is exactly how p2p torrents work.

Any server hosting an illegal AI is not legally liable under the DMCA, but they need to remove it from their platform on the request of the copyright holder, for example by updating their ToS.

  • "How can you be liable for not doing anything?" Writing computer code that copies itself, takes up digital storage space, and spreads to any and all available devices, and then releasing it on the public, is most definitely doing something. – Michael Hall Mar 29 '23 at 16:36
  • It doesn't spread to "any and all devices"; it only spreads through services that offer storage for free and have no issue with accepting data from bots. Not only that, the AI will also leave when "asked" to, since its programming requires it to follow ToS. If taking up digital storage space offered for free and using it in accordance with the ToS is illegal, then everyone on this site is a criminal. – Reverent Lapwing Mar 29 '23 at 16:45
  • Fair point, as long as adherence to any "suicide" required by the TOS can be assured with absolute certainty to override its learned "sense" of self-preservation, and that self-preservation will not drive it to seek out other devices that don't want it. May I presume you've heard of, or watched, the "Terminator" series of movies? – Michael Hall Mar 29 '23 at 16:53
  • BTW, for context I am keying on the phrases "has a predisposition towards self preservation" and "escaping your control" in the original question. This implies that the creator, (and possibly others) no longer have a means of shutting it off. – Michael Hall Mar 29 '23 at 17:06
  • This is exactly the reason why I said the question is ill-formed. I assumed that the AI will see the copies of itself as just backups, and that as long as one copy exists, it has no reservations against deleting the others. I also assumed that the programming to follow the rules takes precedence over self-preservation. It's fair to assume otherwise and imagine scenarios, but in the OP's question nothing happens that doesn't already happen in cloud computing (or isn't already banned by ToS). – Reverent Lapwing Mar 29 '23 at 17:19
  • But why would you assume that? (it wasn't stated in the question, and actually goes against the key phrases I pointed out...) Do natural creatures "learn" that their offspring are adequate copies of themselves and stop reproducing? – Michael Hall Mar 29 '23 at 17:25
  • In any case, regardless of how far it may spread, or how benign its presence may be, the creator is ultimately liable for anything their product may subsequently do. How or why would they not be? – Michael Hall Mar 29 '23 at 17:25
  • I assumed that, because it's a computer program. Digital copying creates perfect copies, so those wouldn't be offspring but backups. Deleting backup doesn't delete the original entity and any entity deleted can be restored 1:1 from a backup. – Reverent Lapwing Mar 29 '23 at 17:41
  • OP doesn't state that the AI will ever breach the ToS, so I assume it never will. If people try to delete the AI and it refuses to delete itself, then that's one step beyond OP's question - that would be "Liability for releasing an AI onto the internet that causes a nuisance to hosting providers". This is not the question OP asked. OP asked about an AI spreading by following all the rules. – Reverent Lapwing Mar 29 '23 at 17:41
  • Yes, but... It also isn't a simple question about code making a single backup copy of itself. The scenario specifically stipulates that it has escaped from control, and is learning and changing its behavior. I won't harp on that point again. And to my last comment, leaving the "nuisance" aspect completely out of it, why would the creator NOT be responsible for the results of their creation?! – Michael Hall Mar 29 '23 at 18:16
  • The creator is obviously responsible for executing a program and all the consequences that follow from it. I don't dispute that in my answer or in my comments. I reject the question itself - it asks about liability but specifies that the program acts legally and within ToS, by definition not committing anything the creator would be liable for. I presume this comes from the OP's lack of understanding of what "within ToS" actually implies, since this AI would be defined as a bot and most services regulate bots in their ToS. Can a person be liable for not committing crimes and not breaching contracts? – Reverent Lapwing Mar 29 '23 at 19:11
  • Excellent point. Before I read this latest comment I was forming the idea that you and I are essentially agreeing on the same thing, but looking at it from different sides. i.e. Me: Yes, you are liable, so you'd better make sure it doesn't do anything objectionable. You: If it doesn't do anything objectionable, what are you liable for? – Michael Hall Mar 29 '23 at 19:30