Artificial Intelligence, Robotics and Tort Law

If you are nervous that a food delivery robot will run over your foot while traveling at 4 mph, you are not alone. Read this article for a quick tutorial on tort law, and breathe a little easier knowing you can sue.

4 mins read

Devolving to Extinction 

Human evolution is marked by mankind’s never-ending quest to make every task easier. Where would we be without the invention of the remote control? I imagine experiencing muscle cramps between the couch and the television. Evolution is why I do not heat a hot dog over a fire, but in a microwave. Due to human “progress,” penmanship and cursive are no longer taught in schools. Technological advancements allow me to live a more sedentary life, where I dine on heavily salted pink slime that is heated in a magical box, and my children (okay, Sara’s children) cannot read the birthday cards I send written in pseudo-cursive script. They will, however, be able to cash their birthday checks! Kidding, I will Venmo them. Of course, the natural end (and this may actually be the end) is autonomous machines. No longer must humans be burdened with paying a delivery person to bring dinner; oh no, there are machines for that. But what happens when artificially intelligent machines cause physical harm?

A Remarkably Concise Summary of AI and Its Present Use

Artificial intelligence (AI) has existed since computers were first engineered, and society grows more dependent on it as the technology improves. For example, the military is testing fleets of autonomous drones that make decisions without human control. These machines can enter a hostile environment without “fear” and make choices more quickly than their human counterparts. Some of these machines are run by artificial intelligence, which allows them to learn and improve over time.[1] At Amazon’s Machine Learning, Automation, Robotics and Space Exploration (MARS) conference in 2017, the company publicly demonstrated its drone delivery system for the first time. Amazon’s goal is for orders to be delivered within thirty minutes.[2] Google and Facebook have artificial intelligence research labs dedicated to automating tasks that previously could be performed only by humans. These companies are also in a race to acquire robotics and artificial intelligence startups.[3] Due to the novelty of autonomous robotic technology, a new regulatory system should be enacted to govern the safety of these machines. Several causes of action might arise in a product liability case: design or manufacturing defects, misrepresentation, failure to warn, and breach of warranty. This piece will discuss only strict liability and negligence in tort law.

What are Torts? 

Torts are civil wrongs (as opposed to criminal wrongs) that cause injury and for which legal liability is imposed. The remedy for a tortious act is usually monetary damages. The most common tort is negligence: a breach of the duty of care that a person of ordinary prudence would exercise in similar circumstances. Negligence actions often stem from personal injury or product liability claims. To be liable in a negligence claim, the negligence must be the proximate cause of the injury, and to meet this standard, foreseeability is a “necessary ingredient.”[4]

Products liability claims can also be brought under strict liability, where the defendant is liable if a product defect is proven to be unreasonably dangerous, regardless of the care exercised. State law varies regarding the scope of liability; however, most states follow the Restatement (Second) of Torts (1965). There are two tests under which a defendant can overcome strict liability for a product defect: (1) the risk-utility test, where the product’s functionality outweighs its inherent risk of injury; and (2) the consumer expectation test, where a reasonable consumer would not find the product defective when it is used in a reasonable manner.

How Should Robots Be Regulated?

A new federal regulatory agency should be created to set safety standards for AI. The new agency would first be tasked with defining what counts as a robot for regulatory purposes. One definition is offered by Ryan Calo, an assistant professor at the University of Washington School of Law, who states that robots are machines that “take the world in, process what they sense, and in turn act upon the world.”[5] Calo draws a distinction between technology that acts and technology that merely informs.[6] If a technology meets the AI robot definition, how it is regulated will depend on its type. Some existing agencies can already regulate aspects of AI, such as the Federal Aviation Administration (FAA) for AI drones. The FAA monitors where drones are used and should work with the new federal agency to develop safety standards.

Matthew U. Scherer, an attorney at Buchanan Angeli Altschul & Sullivan LLP, suggests that the new agency be given access to AI source code, descriptions of tested hardware and software, the results of safety tests, and any other pertinent information.[7] If a product meets the safety standards set by the agency, its manufacturer will have upheld its duty of care.

Since AI is designed to be autonomous and to respond creatively, traditional notions of manufacturer liability in negligence cases will not apply in some instances. One challenge posed by AI is whether an injury is foreseeable. Calo contends that if an AI system is incredibly beneficial to society, liability might need to be assessed outside of foreseeability.[8] He further argues that foreseeability questions will be compounded when AI begins to interact with other AI. Scherer believes that AI is created in part to exhibit unforeseeable behavior, thus freeing it from the confines of human creativity; he cites unusual (but ultimately ingenious) chess moves played by autonomous computers. Unforeseeable behavior is part of the design of these machines, even if specific unforeseeable actions are not.[9] It is imperative that rules be drafted to clarify when an injury caused by AI is considered foreseeable under negligence claims; otherwise, injured parties may find a remedy only in strict liability suits.

BS Conclusion 

The more I considered products liability, the more I regretted starting this article. How is harm defined? Do I solely discuss physical injury, or should monetary or emotional harm also be addressed? Should I consider the degree of machine autonomy? Should there be a distinction in liability between hardware and software defects? What if a user failed to install a software update and an injury subsequently occurred: is the company still liable? Should I discuss upstream versus downstream supply chains? IT IS TOO MUCH! This piece is incredibly limited in scope and pertains only to a robot that very lightly runs over your foot as it delivers Pad Thai. That is it!

BS


[1] http://www.cbsnews.com/news/60-minutes-autonomous-drones-set-to-revolutionize-military-technology/.

[2] Jacob Siegel, Watch the Amazon Prime Air drone make its first delivery in the US, BGR, March 24, 2017, http://bgr.com/2017/03/24/amazon-prime-air-drone-delivery-demo-us/.

[3] http://www.economist.com/news/briefing/21650526-artificial-intelligence-scares-peopleexcessively-so-rise-machines.

[4] Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 555 (2015).

[5] Id. at 529.

[6] Id. 

[7] Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J. L. & Tech. 353, 397 (2016).

[8] Calo at 555.

[9] Scherer at 366.