Once little more than science fiction, products based on artificial intelligence (“AI”) have found their way into many aspects of our daily lives. Self-driving, autonomous vehicles are on the roads in parts of the country. Millions of Americans rely on Fitbits and similar products to record and track biometric data and to inform fitness, nutrition, and health decisions. Robotic systems are increasingly used in product manufacturing and in the medical field. As with many scientific and technological advances, government regulation and legal doctrines tend to lag behind. As the use of AI expands, questions arise concerning the extent to which the technology should be regulated. And when AI fails or causes injury, unanswered questions remain as to whether liability exists for such injuries, who bears that liability, and under what legal theories.
What Is AI?
AI is broadly defined as computer systems and programs that perform tasks normally requiring human intelligence and decision-making. AI systems rely on technology, typically algorithms and neural networks, that, when combined with computer programs, accomplishes specific tasks by recognizing and processing data. Two forms of AI generally exist and operate in today’s world: reactive AI and limited memory AI.