Addressing the Legal Gaps in AI Regulation for National Security: The Case of Ukraine's Defense Sector
DOI: https://doi.org/10.26512/lstr.v17i2.56351

Keywords: Economic Security. Cybersecurity Regulation. AI Governance. Cyber Threats. Legal System.

Abstract
[Purpose] The purpose of this study was to develop a scientific and methodological framework for the legal regulation of artificial intelligence systems introduced to ensure cybersecurity in Ukraine's defense sector.
[Methodology] This paper evaluates the weaknesses of Ukraine's current legislation on AI in defense cybersecurity through a comparison with legislative frameworks worldwide. Drawing on case studies and expert interviews, it also develops a risk assessment methodology and proposes legislative amendments to close the identified gaps.
[Findings] Thirty experts in national security, law, and cybersecurity participated in the expert survey. The study identifies serious flaws in Ukraine's legal framework governing AI in defense cybersecurity, most notably the absence of precise definitions for AI systems and autonomous cyber defense systems. This lack of regulatory clarity makes it difficult to adequately address the risks of integrating AI technology into critical defense infrastructure. The study identifies key legal risks, including potential human rights violations such as invasions of privacy, restrictions on freedom of expression, and discrimination. It also highlights concerns about the accountability of autonomous AI systems for their actions and regulatory non-compliance caused by outdated or insufficient legislation. The paper further examines ethical dilemmas that AI decision-making systems may present, with significant implications for both individual rights and national security. To address these problems, the study calls for dedicated legislation that provides precise definitions, certification procedures, accountability mechanisms, and transparency requirements for AI systems used in the defense sector. These findings demonstrate the urgency with which Ukraine must revise its legal system to ensure that AI integration keeps pace with contemporary technological advances while upholding human rights and national security.
[Practical implications] The results of the study can be applied directly to the development of regulations and cybersecurity strategies in Ukraine, particularly in the context of hybrid threats and information warfare.
[Originality] This study develops a scientific and methodological framework for the legal regulation of artificial intelligence systems to enhance cybersecurity in Ukraine's defense sector. It identifies significant gaps in Ukrainian legislation, proposes the Legal Risk Assessment Matrix to evaluate AI-related legal risks, and offers legislative recommendations, including the creation of a dedicated law on AI regulation.
License
Copyright (c) 2025 Law, State and Telecommunications Review

This work is licensed under a Creative Commons Attribution 4.0 International License.