Constructed-response short-answer assessments provide greater insight into student understanding than multiple-choice evaluation, but they require time-intensive grading. To increase scoring efficiency, we worked with the Automated Analysis of Constructed Responses (AACR) Research Group to use supervised machine learning to build a computer scoring program for a constructed-response formative assessment question in biology. However, ensuring accurate and unbiased scoring is necessary before this technology enters classrooms. Because first-generation and minority students are underrepresented in STEM classrooms, and because the assessment rubric may be specific to the University of Washington curriculum, I hypothesized decreased scoring accuracy for responses from first-generation and minority students and from students not attending the University of Washington. Responses to the constructed-response formative assessment question were collected from five institutions, including public universities and community colleges, and were scored by me, by another trained human scorer, and by the scoring program. Previous research found that, when human-scored, this question shows no bias by student demographics (i.e., no differential item functioning). Responses were de-identified prior to human scoring, and human scores were reviewed by the supervising researcher before analysis. Using logistic regression and model selection, I analyzed whether the scoring program's accuracy depended on students' reasoning level, GPA, university, timing of assessment, first-generation status, race or ethnicity, and gender. My analysis found no significant demographic or institutional bias in the scoring program. However, the results did indicate decreased computer scoring accuracy for responses earning higher-level reasoning scores (i.e., when students gave more accurate responses to the question). For this assessment question and scoring program, my results indicate that further training of the program on higher-level responses is needed before this scoring bias is eliminated. This bias-analysis research helps ensure that the increased scoring efficiency offered by computer scoring programs does not come with an increase in assessment bias.
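The kind of analysis described above could be sketched roughly as follows: model whether the program's score matches the human score (a binary accuracy outcome) as a function of student and response characteristics, then compare candidate logistic regression models. This is a minimal illustration, not the study's actual code; the data file, column names, and the use of AIC as the selection criterion are assumptions made for the example.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical de-identified, response-level data: one row per student response.
df = pd.read_csv("responses_scored.csv")
df["match"] = (df["computer_score"] == df["human_score"]).astype(int)

# Candidate models: scoring accuracy as a function of different predictor sets.
candidate_formulas = {
    "null": "match ~ 1",
    "reasoning": "match ~ C(reasoning_level)",
    "demographics": "match ~ C(first_gen) + C(race_ethnicity) + C(gender)",
    "institution": "match ~ C(university) + C(assessment_timing)",
    "full": ("match ~ C(reasoning_level) + gpa + C(university) + "
             "C(assessment_timing) + C(first_gen) + C(race_ethnicity) + C(gender)"),
}

# Fit each logistic regression and rank by AIC (lower indicates better support).
fits = {name: smf.logit(f, data=df).fit(disp=0)
        for name, f in candidate_formulas.items()}
for name, fit in sorted(fits.items(), key=lambda kv: kv[1].aic):
    print(f"{name:12s} AIC = {fit.aic:.1f}")

# Inspect the best-supported model's coefficients (log-odds of a score match).
best = min(fits.values(), key=lambda f: f.aic)
print(best.summary())
```

A pattern like this would surface demographic or institutional bias as a well-supported model containing those predictors; under the results reported above, only the reasoning-level term would be expected to carry a meaningful effect.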