Segmentation is an imaging technique commonly used to isolate an object of interest, such as an organ, from the background or from other objects in the image. When analyzing the shape of an anatomical structure, segmenting that structure is often the first step. Precise anatomical segmentations are often created manually by subject-matter experts, which is time-consuming, does not scale well, and can be prone to error because it is subjective. In this project, we aim to develop a machine-learning model to expedite whole-body surface segmentation from fetal mouse scans as part of an automated pipeline to detect asymmetry and abnormality in the facial region. The International Mouse Phenotyping Consortium (IMPC) has generated a large repository of three-dimensional (3D) imaging data from mouse embryos, providing a rich resource for investigating genotype/phenotype interactions. To generate the segmentations required for training and validating our deep learning model, the full body surface was manually segmented in 91 baseline scans from the IMPC's Knockout Mouse Phenotyping Program (KOMP2) dataset. We trained a UNETR (UNet with transformers) model on these segmentations; the trained model estimates surface segmentations from new micro-CT mouse scans with an accuracy of 0.9. We are currently developing SurfaceExtract, a fetal mouse full-body segmentation application powered by our deep learning model, which will be made publicly available as an extension to the open-source image analysis platform 3D Slicer. SurfaceExtract will allow our lab to quickly and accurately generate segmentations of fetal mice as part of our automated facial asymmetry phenotyping pipeline.
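The reported accuracy of 0.9 is presumably a volumetric overlap score; a standard choice for evaluating segmentations is the Dice similarity coefficient (the specific metric is our assumption, as the text does not name it). A minimal sketch of how such a score would be computed for two binary 3D label volumes:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary volumes.

    Dice = 2 * |pred AND truth| / (|pred| + |truth|), ranging from
    0 (no overlap) to 1 (perfect agreement).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Both volumes empty: define as perfect agreement.
        return 1.0
    return 2.0 * intersection / total

# Toy example: two overlapping cuboid masks in a 4x4x4 volume.
pred = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1   # 8 voxels
truth = np.zeros((4, 4, 4), dtype=np.uint8)
truth[1:3, 1:3, 0:3] = 1  # 12 voxels, 8 of them shared with pred
score = dice_coefficient(pred, truth)  # 2*8 / (8+12) = 0.8
```

In practice the same computation would be applied to the model's predicted surface mask and the expert's manual segmentation, averaged over a held-out set of scans.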