Prime numbers are fascinating objects in mathematics, fundamental to number theory and, in particular, to cryptography. A primality test takes an integer as input and outputs whether that number is prime or composite. Most practical applications require primality tests to be efficient. Today, the largest known prime number has over twenty million digits, and proving that a number of this size is prime can be computationally expensive. Most practical tests are probabilistic and do not guarantee that a number they report as prime is in fact prime. Other tests, such as the Fermat test, fail to filter out pseudoprimes like the Carmichael numbers, composite integers that nevertheless pass the test. In 2002, Agrawal, Kayal, and Saxena published the first deterministic primality test that runs in time polynomial in the length of the input's binary representation. Although their algorithm represents an important breakthrough in computational number theory, it is seldom used in practice. Our objective was to determine why this theoretically appealing algorithm is not practical compared to other primality tests. We re-created the algorithm and optimized our initial naïve implementation by studying the complexity of each step, using Fermat's Little Theorem, modular arithmetic, and the Fast Fourier Transform for fast multiplication. To confirm that probabilistic methods are still preferred, we compared the execution times and accuracy of other tests against our own results.
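To make the contrast concrete, the following minimal sketch (not part of the original work; the function name, chosen bases, and example values are illustrative) shows a probabilistic Fermat test in Python and how a Carmichael number such as 561 = 3 · 11 · 17 passes it for every coprime base despite being composite.

    from math import gcd

    def fermat_probably_prime(n: int, bases=(2, 3, 5, 7)) -> bool:
        """Fermat test: a prime n satisfies a^(n-1) ≡ 1 (mod n) for gcd(a, n) = 1.
        Some composites also satisfy this, so True only means 'probably prime'."""
        if n < 2:
            return False
        for a in bases:
            if gcd(a, n) == 1 and pow(a, n - 1, n) != 1:
                return False  # a is a Fermat witness: n is definitely composite
        return True  # no witness found among the chosen bases

    # 561 is the smallest Carmichael number: it passes the test for every
    # base coprime to it, even though it is composite.
    print(fermat_probably_prime(561))  # True, yet 561 = 3 * 11 * 17 is not prime
    print(fermat_probably_prime(557))  # True, and 557 is actually prime

A deterministic test such as AKS never produces the false positive seen for 561, which is the property the comparison in this work sets out to price in terms of running time.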