For decades, science fiction writers have tackled philosophical and existential questions arising from the creation of artificial intelligence (“AI”) by human beings. AI, however, is no longer a fictional concept, but rather an evolving part of modern society. How will AI systems affect the national security interests of the United States? Considering the increased national security threat posed by actors in cyberspace, policymakers should consider the cybersecurity risks of AI systems that operate entirely in cyberspace. This article argues that a serious threat to national security will arise from a cyberspace-bound, decentralized autonomous entity (“CyDAE”) for three reasons: the “unexplainability” of current AI system design (that is, the difficulty of understanding why or how an AI arrived at its conclusion or behaved the way it did); the lack of legal personhood arrangements for autonomous systems; and the already difficult task of attributing acts in cyberspace to human actors or States, a task complicated by outdated Westphalian notions of sovereignty and territoriality. The article ultimately offers several broad policy suggestions, including: (1) an AI registry; (2) “explainability” criteria for AI system designs; (3) requiring human oversight in legal personhood arrangements (whether structured as a corporation, limited liability entity, or otherwise) tailored specifically to autonomous AI systems that lack human members; and (4) universal jurisdiction of States over malicious CyDAEs that obfuscate attributive links to human actors or States.
Jonathan A. Schnader,
Mal-Who? Mal-What? Mal-Where? The Future Cyber-Threat Of A Non-Fiction Neuromancer: Legally Un-Attributable, Cyberspace-Bound, Decentralized Autonomous Entities,
21 N.C. J.L. & Tech. (vol. 21, iss. 2).
Available at: https://scholarship.law.unc.edu/ncjolt/vol21/iss2/2