Report for the Administrative Conference of the United States
When individuals have questions about Federal benefits, services, and legal rules, they increasingly seek help from government chatbots, virtual assistants, and other automated tools. Current automated legal guidance platforms include the U.S. Citizenship and Immigration Services’ “Emma,” the U.S. Department of Education’s “Aidan,” and the Internal Revenue Service’s “Interactive Tax Assistant.” Most scholars who have studied artificial intelligence and Federal government agencies have not focused on the government’s use of technology to offer guidance to the public. The absence of scholarly attention to automation as a means of communicating government guidance is an important gap in the literature, given the strong influence that these communications can have on individuals’ decisions about the law.
This Report describes the results of a qualitative study of automated legal guidance across the Federal government, which included semi-structured interviews with both agency technology experts and lawyers. This study was conducted under the auspices of the Administrative Conference of the United States (ACUS). During our study, we reviewed the automated legal guidance activities of all Federal agencies and conducted in-depth research on agencies that are already using well-developed chatbots, virtual assistants, or other related tools to assist the public in understanding or following relevant law. After identifying the agencies that are primary adopters of automated legal guidance, we conducted interviews with multiple individuals from each agency, as well as representatives from the U.S. General Services Administration.
We find that automated legal guidance offers agencies an inexpensive way to help the public navigate complex legal regimes. However, we also find that automated legal guidance may mislead members of the public about how the law will apply in their individual circumstances. In some cases, agencies exacerbate this problem by, among other things, making guidance seem more personalized than it is, failing to recognize how users may rely on the guidance, and not adequately disclosing that the guidance cannot be relied upon as a legal matter. In many respects, this is not a problem of agencies’ own making. Rather, agencies face the difficult task of translating complex statutory and regulatory regimes for a public that has limited capacity to understand them. Agencies also often lack sufficient resources to engage in more personalized outreach. Fundamentally, we identify a tension between agencies’ reasonable desire to promote automated legal guidance and its underappreciated limitations.
In this Report, after exploring these challenges, we chart a path forward. We offer policy recommendations organized into five categories: transparency; reliance; disclaimers; process; and accessibility, inclusion, and equity. We believe this Report, and the detailed policy recommendations that flow from it, will be critical for evaluating both the existing and the future development of automated legal guidance by governments.