
Artificial Intelligence: Navigating Its Potential Uses and Risks in the Legal Industry

Anyone who has ever seen Stanley Kubrick’s seminal 1968 film “2001: A Space Odyssey” (written in tandem with sci-fi pioneer Arthur C. Clarke’s novel of the same name) will recall the quiet menace of HAL 9000, the supercomputer aboard the spaceship Discovery One that decides unceremoniously to kill its human masters rather than be disconnected or reveal the true nature of their mission. Kubrick’s movie portrays the soft-spoken, gentle-sounding HAL as a red-rimmed camera lens — all-seeing, all-knowing — that becomes progressively more menacing as the story unfolds. When astronauts Dave Bowman and Frank Poole decide to disconnect HAL in response to slight malfunctions they have detected, the computer takes matters into its own proverbial hands: It kills Poole while he is on a spacewalk, disconnects the life support mechanisms of hibernating crew members, then locks Bowman out of the ship when he tries to re-enter.

This is riveting cinema, to be sure, but perhaps the most striking scene in a movie filled with striking scenes comes when Bowman physically disconnects HAL, entering a narrow chamber where the brightly lit computer modules that essentially make up HAL can be removed. As Bowman does so, HAL sings “Daisy Bell” in a haunting voice — leaving viewers to answer their own questions about the actual or perceived humanity of the machines man himself has brought to life.

Kubrick’s “2001” is but one in a long line of films to prominently feature AI, or artificial intelligence, and the way it is portrayed has no doubt played a part in how society views the technology. We don’t have androids walking among us, but we do have machines on the manufacturing production line—building our cars, producing our food—and in the operating room, where they perform delicate surgeries on human bodies. Seven years after IBM’s Watson supercomputer beat “Jeopardy!” masters Ken Jennings and Brad Rutter to take home a $1 million prize, smartphones are ubiquitous, and we now have Siri, Amazon Echo and Google Home—devices we interact with to play our music, dial our phones, place takeout orders and check the weather. The ridesharing service Uber is inching closer to offering driverless cars on a wider basis, and three years ago, Tesla’s Elon Musk co-founded the non-profit OpenAI, a self-described artificial intelligence research company with the goal to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”[i]

Indeed, it’s a brave new world—one that we’re still learning how to navigate, as individuals, in business and in society as a whole.

Legal Risks Associated with the Technology — and Their Defenses

Cary Silverman, a partner in Shook, Hardy & Bacon’s public policy group who has spoken and written widely on AI issues, explains, “In many areas, AI might be more appropriately considered ‘augmented intelligence’ rather than ‘artificial intelligence’ because what the technology does is provide a tool to help people analyze data and make better, faster decisions, whether it is through identifying strong applicants for a job, or finding important documents during litigation.”

With Silverman’s explanation in mind, it is easy to understand why AI is increasingly gaining a foothold in business. In a 2017 cover story in the Harvard Business Review, authors Erik Brynjolfsson and Andrew McAfee projected: “The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning.”[ii]

We are now building machines that learn how to perform tasks on their own, the authors explain. This will be transformational because it means machines can build on their knowledge as time goes on, whether they are detecting fraud or diagnosing disease in humans. Indeed, a major improvement in AI has been in machines’ cognition and problem-solving abilities, which help with everything from detecting malware in computer systems to preventing money laundering in digital banking and transactions.

It goes without saying that there are legal risks associated with the technology, even as companies continue to ramp up their use of AI. According to a survey conducted by ALM in February 2018 to gauge general counsels’ experiences with their companies’ use of AI, some 58 percent of respondents said they expect an increase in its use, while only 2 percent said they expect a decrease; another 11 percent said they expect no company investment in AI. Cory Fisher, a partner in Shook, Hardy & Bacon’s intellectual property group who monitors and speaks on the legal implications of AI implementation, notes, “As the business needs of a company push for greater efficiency and effectiveness resulting from implementation of artificial intelligence, the in-house counsel role will need to grow to not only understand the underlying technology, but also the new avenues of potential liability resulting from supplanting traditional actors with AI.”

But what are the risks of incorporating AI into business operations? For one, there is a risk that courts will accept invitations from plaintiffs’ lawyers to view AI as a novel technology that warrants abandoning longstanding principles of liability. In 2017, for example, the Washington Supreme Court rejected a medical device company’s “learned intermediary” defense in a case involving a robotic surgery device. While the device itself did not specifically “have” AI — it was controlled by the doctor who performed the surgery — the case nevertheless demonstrates that courts might be willing to expand liability or limit defenses when presented with cases involving new technology. The ruling vacated a 2013 defense verdict for Intuitive Surgical, Inc., in which a jury found the surgical-device maker was not negligent under the Washington Product Liability Act when it did not warn the hospital about the dangers of using its da Vinci system to perform the plaintiff’s prostatectomy.[iii] In reversing the defense verdict, the court ruled that a manufacturer’s duty to warn (one such longstanding liability principle) “is not excused when a manufacturer warns doctors who use the devices because hospitals need to know the dangers of their own products, which cannot be accomplished simply by the manufacturer’s warnings to the doctor who uses the product.”

To be sure, only a handful of cases involving artificial intelligence have been litigated thus far, and most forecasts of where the legal threats or challenges lie involve the financial and healthcare industries, both of which are heavily regulated. In health care, where AI and machine learning can help make diagnoses or even read radiological scans to detect minute patterns that might previously have gone undetected, the risks are even more pronounced: AI can never replace the value of the human intuition physicians gain over years of interacting with patients in highly personal situations. As a recent CIO article asks[iv]: When it comes to liability, is the doctor or healthcare center using the technology liable, or the designer or programmer of the applications? More broadly, courts will have to determine in the near future what happens when defendants — or plaintiffs — aren’t human beings. That raises the question of whether robots themselves can be sued.

While more than 70 percent of respondents to ALM’s February 2018 survey said they had not yet seen any litigation stemming from their company’s use of artificial intelligence, they nevertheless voiced specific concerns about its adoption in their workplaces. Close to 60 percent named privacy and security as their top concern, while 45 percent and 33 percent reported reservations about the lack of regulation and standards and the lack of legal precedent, respectively. One concrete way to mitigate some of the risk associated with AI, at least from a product liability standpoint, is to document the safety benefits of AI-enabled products compared with the same tasks performed by people.

“In the wake of a fatal accident involving a vehicle on which its Autopilot feature was engaged, one automaker wisely submitted data to NHTSA indicating that the vehicle’s crash rate fell 40% after it introduced the Autosteer feature,” says Shook’s Silverman. “That type of analysis will be important in product liability lawsuits in which courts apply a risk-utility test, in addition to government investigations.” Note that there were roughly 40,000 motor vehicle deaths in the U.S. in 2017, many of them caused by decidedly human error: distracted driving, speeding and so on.[v]

In another example, Cambridge Consultants in 2016 unveiled Axsis, a miniature robot developed to perform cataract surgery; it stands no taller than a can of soda and carries instruments a mere 1.8 millimeters in diameter.[vi] Axsis was specifically designed to provide greater accuracy in surgery than the human hand, though there is no data to date on whether it is outpacing its human counterparts in the operating room.

Regardless of industry, businesses using AI need to make sure their users understand how the technology works and exercise careful discretion when deciding whether to deploy it. Regulators and the courts will likely have little patience with companies that deploy systems with which they are not fully conversant.[vii]

What Do Lawyers Need to Know?

First, lawyers should be aware that their own industry is not sitting on the sidelines when it comes to using AI: analytics, discovery and legal research, and process automation are already gaining traction inside law firms.[viii] In terms of practice, the current lack of legal precedent can represent both a challenge and an opportunity. Lawyers must take heed — and dive deep into case filings to stay on top of advancements in both AI and the law. It also helps to know exactly what types of actions AI cases involve. Rather than a complex product liability action alleging defective design, notes Shook’s Silverman, the first lawsuit against an automaker over an incident involving its autonomous test vehicle was a four-page complaint alleging negligence that subjected the manufacturer’s technology to a reasonable person standard.

As AI technology makes products more autonomous, courts will have an ample body of law from which to draw when evaluating the liability of a manufacturer or owner after an injury occurs. “They can look not only to principles of product liability law, but also to agency law and even the law of pets for models of how the law imposes responsibility and places constraints on when a person or business is responsible for the actions of a third party who makes its own decisions,” says Silverman. “The challenge, therefore, is to ensure that the law develops in a way that does not deter the development of innovative products that are safer than what we have today and improve our quality of life.”

[i] “Introducing OpenAI.” OpenAI Blog, December 11, 2015. https://blog.openai.com/introducing-openai/

[ii] “The Business of Artificial Intelligence.” Harvard Business Review, July 2017. https://hbr.org/cover-story/2017/07/the-business-of-artificial-intelligence

[iii] “Washington Supreme Court: Da Vinci Robot Maker Must Warn Hospitals.” Lexis Legal News, February 10, 2017. https://www.lexislegalnews.com/articles/14734/washington-supreme-court-da-vinci-robot-maker-must-warn-hospitals

[iv] “Risky AI Business: Navigating the Legal and Regulatory Dangers to Come.” CIO, February 19, 2018. https://www.cio.com/article/3256031/artificial-intelligence/risky-ai-business-navigating-regulatory-and-legal-dangers-to-come.html

[v] “U.S. Vehicle Deaths Topped 40,000 in 2017, National Safety Council Estimates.” USA Today, February 15, 2018. https://www.usatoday.com/story/money/cars/2018/02/15/national-safety-council-traffic-deaths/340012002/

[vi] “Axsis Technology Heralds the Next Wave of Surgical Robotics Innovation.” Cambridge Consultants, November 2016. https://www.cambridgeconsultants.com/press-releases/miniaturising-robotics-design

[vii] “Risky AI Business: Navigating the Legal and Regulatory Dangers to Come.” CIO, February 19, 2018. https://www.cio.com/article/3256031/artificial-intelligence/risky-ai-business-navigating-regulatory-and-legal-dangers-to-come.html

[viii] “AI in Law and Legal Practice—A Comprehensive View of 35 Current Applications.” TechEmergence, November 29, 2017. https://www.techemergence.com/ai-in-law-legal-practice-current-applications/
