Carnegie Mellon Institute for Strategy & Technology

CMU's Home for Political Science and International Relations


Ashley Deeks, vice dean and Class of 1948 Scholarly Research Professor at the University of Virginia Law School, virtually joined the Carnegie Mellon Institute for Strategy and Technology (CMIST) for the spring 2026 installment of CMIST’s Lawfully Speaking series.

March 31, 2026

Accountability Within the Double Black Box: Lawfully Speaking with Professor Ashley Deeks

By Aleksaundra Handrinos

As technology continues to advance, legal frameworks must adapt to ensure effective regulation. With national security operations in the United States increasingly adopting advanced artificial intelligence (AI) systems, there is growing concern about the capacity of existing regulations to ensure government accountability. To unpack what she labels “the Double Black Box,” Ashley Deeks, vice dean and Class of 1948 Scholarly Research Professor at the University of Virginia Law School, virtually joined the Carnegie Mellon Institute for Strategy and Technology (CMIST) for the most recent installment of CMIST’s Lawfully Speaking series. Lecturing on “National Security, Artificial Intelligence, and the Struggle for Democratic Accountability,” Deeks outlined how the use of AI in national security creates a “double black box” and identified resources for responding to the resulting challenges of oversight and responsibility.

The very nature of national security work, particularly the use of sensitive intelligence to guide classified decisions, creates a “black box” that makes oversight of the executive branch difficult. That check on executive power is essential to ensuring the government complies with public law values, so partial (at best) oversight is cause for concern. Congress and the courts currently offer the two primary channels of oversight, Deeks explained, but both are imperfect and reactive in nature: members of Congress depend on information supplied by the executive, while the courts must wait for cases to be brought and afford the executive branch significant deference in matters of national security. Given these limitations, Deeks identified several other actors who can help ensure that the US government’s actions align with public law values, including foreign allies, executive branch lawyers, technology companies, and states and localities. “None of them is perfect, none provides comprehensive oversight, but they can sometimes see behind a curtain of secrecy, and they sometimes have some leverage over the executive,” Deeks offered.


As US military and intelligence agencies have begun to incorporate AI into their operations, the challenges of overseeing the executive have compounded. For example, while the expanded use of AI-driven cyber operations is attractive from a national security standpoint, it also poses new risks. “The US and its allies are going to need speed” and the ability to respond to incoming cyber attacks faster than humans can, Deeks noted. “But you can also imagine how a clash of cyber tools could escalate into full-blown conflict, almost accidentally,” she said. Meanwhile, the difficulty of identifying exactly how advanced algorithms reach their determinations has led these tools themselves to be described as “black boxes.” Layered on top of the secrecy that already surrounds government national security activity, the use of AI tools that decision-makers may not be able to accurately interpret or fully understand creates a “double black box.”

This double black box raises questions about efficacy, accountability, and justification. “In thinking about our values, on the legality point, I think the difficulties we have today in identifying unlawful executive branch activities that are happening in a classified setting are going to be amplified by the challenges of understanding whether the use of a particular AI tool complies with the law,” Deeks summarized. Evaluating the efficacy of a given national security policy choice, already a task based on limited information, will become even harder given the barriers to assessing the quality of the AI recommendations that inform those decisions. The risk of algorithmic error will likewise exacerbate the already complicated task of assigning responsibility for illegal or poor policy choices. And the executive branch may be able to avoid justifying national security decisions not only on grounds of classification, as has often been the case historically, but also because of the opacity and inexplicability of the AI systems themselves.

CMIST Director Audrey Kurth Cronin moderated the event

Ultimately, Deeks predicts that already hard national security questions will become even harder with the inclusion of AI tools. Believing that mounting pressure on national security officials to adopt AI makes this challenge unavoidable, Deeks highlighted possible solutions. Because it is generally easier to reach consensus on procedural solutions than substantive ones, she recommended fortifying procedural checks by drawing on existing processes and statutes. For instance, Congress could use the existing statutes regulating covert action and the conduct of foreign intelligence agents as models for a new statute requiring the president to sign off on high-risk uses of national security AI. Deeks also outlined potential avenues for executive branch self-regulation that build on existing processes. Moreover, foreign allies, who may initially seem a counterintuitive choice for regulating secret operations, can provide incentives for the United States to adhere to public law values.

To close her lecture, Deeks raised the question of what role international agreements could play in shrinking this double black box. Such measures might include agreements among states not to pursue certain uses of national security AI. Given the practical difficulty of enforcing such limits, however, Deeks is skeptical that international agreements will succeed. Instead, drawing on the history of how countries have addressed the difficulties posed by cyber operations, she believes effective oversight tools are more likely to emerge through domestic efforts than through international agreements.

Lecturing on “National Security, Artificial Intelligence, and the Struggle for Democratic Accountability,” Deeks outlined how the use of AI in national security creates a “double black box.”

The session culminated in a dynamic conversation. Despite the remote setup, Deeks and CMIST Director Audrey Kurth Cronin engaged in a seamless fireside chat, closing the event by fielding questions from the room. Asked about the future of oversight in the field of AI, Deeks emphasized that the slow and reactive nature of the law has left technology without timely regulation, though Congress has taken small steps toward that goal. She predicted a continuation of these incremental advances unless a major crisis sparks larger change.

Deeks left attendees with important takeaways about the law and emerging technology. While the increasing use of AI in national security operations compounds existing obstacles to the executive branch oversight that is critical to a healthy democracy, Deeks nonetheless charted a path forward. Despite the complexities posed by AI, existing international legal frameworks, such as the United Nations Charter and the laws of armed conflict, will continue to play a role. At the same time, the difficulty of reaching substantive multilateral AI regulation underscores the importance of ensuring effective checks and balances at the domestic level. “The public itself needs to signal to our government what kinds of national security AI tools we support and which ones we want our government to avoid,” she said.

Following the lecture, attendees had the opportunity to ask questions


Due to unexpected travel delays, the guest speaker joined this event virtually.


