Modern astronomical survey telescopes, such as the Vera C. Rubin Observatory and the Extremely Large Telescope, are projected to produce terabytes of data each observing night, raising the need for efficient machine learning algorithms to flag astronomical data for further study. One such algorithm is the Random Forest (RF), which has previously been used to search some 2.36 million galaxy spectra (in 12 hours on 128 CPUs) for potentially unique or undiscovered phenomena. That demonstration, however, used spectra taken from the combined light of each galaxy. RF algorithms are ensembles of structures called “decision trees,” which categorize data points through sequences of value-comparison questions; this structure can be extended into a metric that quantifies how unusual a data point is relative to the rest of the dataset. Our project extends this RF approach to the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey, which resolves spectra across different regions of each galaxy, and explores the behavior of RF algorithms when these spatially varying features are taken into account. Our methods include the generation of synthetic data to train the random forests, hyperparameter searches over the RF models, and comparisons between models. We also compare our results to those of Baron and Poznanski (2016), who applied RF algorithms to the 2.36 million integrated spectra. We present the conclusions drawn from the RF algorithms, including whether prominent emission lines such as hydrogen-alpha and [O III] are flagged, and we discuss the features characteristic of outlier galactic-region spectra. Successful implementation of RF algorithms in the data pipelines of upcoming large surveys has the potential to accelerate the rate of astronomical discoveries to unprecedented levels.
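
The abstract describes training an RF on a mix of real and synthetic spectra and then deriving an outlier ("weirdness") score from the trained forest, in the spirit of Baron and Poznanski (2016). The following is a minimal sketch of that general technique, not the project's actual pipeline: the data are random stand-ins for spectra, the column-shuffling construction of the synthetic sample and all parameter choices are assumptions for illustration, and scikit-learn's RandomForestClassifier is used as a generic RF implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for real spectra: n_spectra rows of n_wavelengths flux values.
# (In practice these would be per-region galaxy spectra; here they are random.)
n_spectra, n_wavelengths = 500, 100
real = rng.normal(size=(n_spectra, n_wavelengths))

# Synthetic "background" sample: shuffle each wavelength column independently,
# destroying correlations between features while keeping their marginal distributions.
synthetic = np.column_stack(
    [rng.permutation(real[:, j]) for j in range(n_wavelengths)]
)

# Train an RF to separate real (label 1) from synthetic (label 0) spectra.
X = np.vstack([real, synthetic])
y = np.concatenate([np.ones(n_spectra), np.zeros(n_spectra)])
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Leaf index of each real spectrum in every tree: shape (n_spectra, n_trees).
leaves = forest.apply(real)

# Pairwise similarity = fraction of trees in which two spectra land in the same leaf;
# "weirdness" of a spectrum = one minus its mean similarity to all other spectra.
similarity = np.mean(leaves[:, None, :] == leaves[None, :, :], axis=-1)
weirdness = 1.0 - (similarity.sum(axis=1) - 1.0) / (n_spectra - 1)

# Spectra with the highest weirdness scores are candidate outliers for follow-up.
print(np.argsort(weirdness)[-10:])

In this sketch the synthetic sample plays the role of a featureless reference class, so the forest's leaf structure encodes which real spectra resemble one another; spectra that rarely share leaves with the rest of the sample receive high weirdness scores and would be the ones flagged for further study.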