Document Type
Article
Publication Title
Columbia Law Review
Abstract
Many legal scholars have explored how courts can apply legal doctrines, such as procedural due process and equal protection, directly to government actors when those actors deploy artificial intelligence (AI) systems. But very little attention has been given to how courts should hold private vendors of these technologies accountable when the government uses their AI tools in ways that violate the law. This is a concerning gap, given that governments are turning to third-party vendors with increasing frequency to provide the algorithmic architectures for public services, including welfare benefits and criminal risk assessments. As such, when challenged, many state governments have disclaimed any knowledge or ability to understand, explain, or remedy problems created by AI systems that they have procured from third parties. The general position has been “we cannot be responsible for something we don’t understand.” This means that algorithmic systems are contributing to the process of government decisionmaking without any mechanisms of accountability or liability. They fall within an accountability gap. In response, we argue that courts should adopt a version of the state action doctrine to apply to vendors who supply AI systems for government decisionmaking. Analyzing the state action doctrine’s public function, compulsion, and joint participation tests, we argue that—much like other private actors who perform traditional core government functions at the behest of the state—developers of AI systems that directly influence government decisions should be found to be state actors for purposes of constitutional liability. This is a necessary step, we suggest, to bridge the current AI accountability gap.
First Page
1941
Volume
119
Publication Date
2019
Recommended Citation
Kate Crawford & Jason M. Schultz, AI Systems as State Actors, 119 Columbia Law Review 1941 (2019).
Available at:
https://gretchen.law.nyu.edu/fac-articles/1027
