Computer vision models help analyze biomedical images for diagnosis and treatment by detecting differences between an image and a template image. For instance, optical coherence tomography (OCT) is used to diagnose and treat retinal disease. In the brain, injury, cellular uptake, and characteristic features vary across regions, so images are often segmented into established brain regions to determine how the brain is affected in a particular study. Current models fail to segment brain regions reliably because each brain varies in local microstructure, making it difficult to compare one brain to another. Furthermore, when brains are sliced, the exact location of a slice within the brain, particularly its depth, can be difficult to pinpoint because the regions vary from slice to slice. My research therefore addresses the increasing need for an analysis method that aligns and compares images of brain regions across slices from a single brain and from brain to brain. Using scikit-image analysis tools, I extracted information from cell images and videos of nanoparticles obtained in brain slices and identified trends within various regions. My program extracted cell density, cell shape, and cell death, then analyzed nanoparticle uptake to determine where a small image segment is most likely located within the brain. Iterating over the entire image generated a rough map of the brain's regions, which was then refined using mapping descriptions detailed in the literature. This research produced a systematic program that uses image analysis tools to extract features of defined brain regions, allowing quick, accurate, and consistent analysis of regional differences in cellular features, nanoparticle distribution, toxicity, and other important measures.
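
The patch-to-region assignment step can be illustrated with a minimal scikit-image sketch. The code below is an assumed simplification of the pipeline, not the exact program: it thresholds a grayscale patch of a cell image, measures cell density and mean eccentricity as shape descriptors, and assigns the patch to whichever brain region has the closest reference feature vector. The patch size, the `region_templates` values, the input file name, and the two-feature set are placeholders introduced for illustration.

```python
import numpy as np
from skimage import filters, io, measure, morphology


def extract_patch_features(patch):
    """Compute simple features (cell density, mean eccentricity) for a
    grayscale patch of a cell-stained brain slice."""
    if patch.max() == patch.min():
        # Uniform patch: nothing to segment.
        return np.array([0.0, 0.0])
    # Separate cells from background with Otsu thresholding.
    mask = patch > filters.threshold_otsu(patch)
    # Drop small speckle unlikely to be a cell body.
    mask = morphology.remove_small_objects(mask, min_size=20)
    # Label connected components and measure each putative cell.
    props = measure.regionprops(measure.label(mask))
    density = len(props) / patch.size  # cells per pixel
    shape = np.mean([p.eccentricity for p in props]) if props else 0.0
    return np.array([density, shape])


def classify_patch(features, region_templates):
    """Assign a patch to the region whose reference feature vector is
    nearest in Euclidean distance (templates are hypothetical)."""
    return min(region_templates,
               key=lambda name: np.linalg.norm(features - region_templates[name]))


# Hypothetical per-region reference vectors: (density, mean eccentricity).
region_templates = {
    "cortex": np.array([0.004, 0.55]),
    "hippocampus": np.array([0.009, 0.70]),
    "thalamus": np.array([0.006, 0.60]),
}

# Slide a window over a slice image to build a coarse region map.
slice_img = io.imread("slice.tif", as_gray=True)  # placeholder file name
step = 128
region_map = {}
for r in range(0, slice_img.shape[0] - step + 1, step):
    for c in range(0, slice_img.shape[1] - step + 1, step):
        feats = extract_patch_features(slice_img[r:r + step, c:c + step])
        region_map[(r, c)] = classify_patch(feats, region_templates)
```

In the full analysis, additional measures such as cell death and nanoparticle uptake from video data would enter as extra feature dimensions, and the resulting coarse map would then be refined against the region descriptions reported in the literature.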