The ability to derive information through automated scanning of personal documents has significant economic and societal value, stemming from applications in surveillance and digital forensics, e-commerce, tailored advertising, recommender systems, human resource management, mental health care, and more. Giving applications access to one's personal text messages and e-mails can easily lead to (un)intentional privacy violations. We have developed and implemented cryptographic protocols to scan personal documents in a privacy-preserving manner, using techniques from Machine Learning (ML) and Secure Multiparty Computation (SMC). In a typical scenario of interest for our research, there are two parties, nicknamed Alice and Bob. Bob has a trained ML model that can automatically classify texts such as e-mails, for instance inferring whether the author is depressed, suicidal, or a terrorist threat, or whether the e-mail is a spam message. Our SMC-based protocols allow a personal text written by Alice to be classified with Bob's ML model in such a way that Bob does not learn anything about Alice's text (other than the class label resulting from the classification) and Alice does not learn anything about Bob's model. We demo the cryptographic protocols in an application for privacy-preserving detection of hate speech against women and immigrants in text messages, built on top of the SMC framework Lynx developed at UW. In this use case, Bob has a boosted decision tree model that flags texts as hateful based on the occurrence of particular words. We show that Bob can label Alice's texts as hateful or not without learning which words occur in Alice's texts, and Alice does not learn which words are in Bob's hate speech lexicon, nor how these words are used in the classification process.
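To make the use case concrete, the following is a minimal plaintext sketch of the kind of classification Bob's model performs: an ensemble of decision stumps over word-occurrence features whose scores are summed and compared against a threshold. The lexicon, weights, and threshold below are hypothetical stand-ins, not Bob's actual model, and in the demo this entire computation runs under SMC so that neither party sees the other's inputs.

```python
# Hypothetical hate-speech lexicon mapping each word to a stump weight.
# These words and weights are illustrative only; the real lexicon and
# model parameters stay private to Bob.
LEXICON = {"word_a": 1.2, "word_b": 0.8, "word_c": -0.5}
THRESHOLD = 1.0  # hypothetical decision threshold

def classify(text: str) -> bool:
    """Flag a text as hateful if the summed stump scores of the
    lexicon words occurring in it exceed the threshold."""
    words = set(text.lower().split())
    score = sum(weight for term, weight in LEXICON.items() if term in words)
    return score > THRESHOLD
```

In the privacy-preserving version, Alice's word-occurrence features and Bob's lexicon weights are secret-shared between the parties, and only the final boolean label is revealed.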