With rapid technological advances, robots, unmanned vehicles, and other artificial intelligence (hereafter referred to as AI) entities – systems that simulate human behavior on a computer – are proliferating into everyday life. Increasingly, the military is using AI to keep troops out of harm's way; however, a question of criminal liability arises when certain weapons and drones can cause damage on a grand scale, at a distance, and with greater frequency. Who is to be held responsible for the potentially widespread war offenses of these automated systems when no one is necessarily controlling them on the ground: the manufacturer, the programmer, or the AI entity itself? While questions of this caliber have fueled widespread opposition to autonomous weapons, the purpose of this study is to determine ways of adjudicating their use in war zones rather than banning them. To that end, this project turns to international law (specifically international criminal law) and explores the precedent set by past cases to establish how responsibility attaches to particular individuals in mass violations. It then examines the types of crimes adjudicated and reviews statistical data on how often each of international criminal law's provisions has been invoked in individual cases, in order to develop a definition of prosecutable criteria in terms of weapons and destruction and to identify a forum with jurisdiction to assess the admissibility of autonomous weapons. Preliminary findings support the International Criminal Court (which prosecutes large-scale murder and war crimes) as a strong candidate for trying individuals for AI-related violations. Because the Court's already well-defined regulations can be expanded to encompass criminal liability for the unlawful use of AI, this paper concludes that fully autonomous weapons could indeed have positive consequences.