I am a PhD candidate at the University of Michigan, School of Information, advised by Florian Schaub. Previously, I obtained an M.Sc. in biostatistics from the University of Michigan, School of Public Health, and a B.A. in applied math and political science from Macalester College in Saint Paul, Minnesota.
I study human-centered AI safety, with the goal of empowering people to recognize and respond to AI-mediated risks, including data exploitation, privacy violations, and AI-enabled deception. In particular, I study how security and privacy protections play out when people interact with data-driven AI by combining technical methods (e.g., large-scale measurements) with qualitative studies of users and experts (e.g., interviews). I further design human-centered solutions that make technical protections understandable and actionable.
My work spans four interconnected streams that engage diverse users and experts, tackling challenges from data protection to data misuse:
I advance the legibility of privacy information by transforming existing legal and technical mechanisms, which are intended to protect users yet are rarely understood, into communicable and interpretable infrastructures. For example, my large-scale analyses of privacy notices in the financial industry exposed how fragmented privacy laws lead to inconsistent privacy disclosures, and I offered actionable policy recommendations for useful and usable transparency. I broadened this research stream by designing user-centered explanations of privacy-enhancing technologies (e.g., differential privacy, federated learning) that support users' informed privacy decision-making.
I address the tension between AI's data demands and data sensitivity by creating human-centered tools for valid data analysis under privacy constraints.
I examine AI-mediated deception, particularly deepfake scams, whose distinctive deceptive intimacy exploits familiar social relationships and identity-based trust. I am designing and evaluating deepfake scam warnings with actionable advice that enable immediate user response to this new class of security threats during real-time video calls.
I investigate AI's societal implications across domains, especially creative and knowledge work. I found that creative work relies on identity-bearing materials and invisible labor, suggesting the need for process- and labor-aware protections when creators' content is taken up as AI training data.
I have published extensively across top-tier venues in cybersecurity (e.g., ACM Conference on Computer and Communications Security (CCS), Proceedings on Privacy Enhancing Technologies (PoPETs/PETS)), human-computer interaction (e.g., ACM CHI Conference on Human Factors in Computing Systems (CHI), ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW)), and computational social science (e.g., AAAI Conference on Web and Social Media (ICWSM)). My work has been recognized with a Distinguished Paper Award (top <1%) at ACM CCS 2025.