Privasis: Synthesizing the Largest "Public" Private Dataset from Scratch
Abstract
A large-scale synthetic dataset called Privasis is introduced to address privacy concerns in AI research, enabling more effective text sanitization with compact models that outperform existing large language models.
Research involving privacy-sensitive data has always been constrained by data scarcity, in sharp contrast to other areas that have benefited from data scaling. This challenge is becoming increasingly urgent as modern AI agents, such as OpenClaw and Gemini Agent, are granted persistent access to highly sensitive personal information. To tackle this longstanding bottleneck and the rising risks, we present Privasis (i.e., privacy oasis), the first million-scale fully synthetic dataset built entirely from scratch: an expansive reservoir of texts with rich and diverse private information, designed to broaden and accelerate research in areas where processing sensitive social data is inevitable. Compared to existing datasets, Privasis, comprising 1.4 million records, offers orders-of-magnitude larger scale without sacrificing quality, and far greater diversity across document types, including medical histories, legal documents, financial records, calendars, and text messages, with a total of 55.1 million annotated attributes such as ethnicity, date of birth, and workplace. We leverage Privasis to construct a parallel corpus for text sanitization using our pipeline, which decomposes texts and applies targeted sanitization. Our compact sanitization models (≤4B parameters) trained on this dataset outperform state-of-the-art large language models such as GPT-5 and Qwen-3 235B. We plan to release our data, models, and code to accelerate future research on privacy-sensitive domains and agents.
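To make the decompose-then-sanitize idea concrete, here is a minimal sketch, not the authors' released pipeline: it assumes each Privasis-style record pairs a free-text document with its annotated sensitive attributes, splits the text into sentence-level units, and rewrites only the units that contain an annotated value. All names (`Record`, `decompose`, `sanitize`) are hypothetical.

```python
# Hypothetical sketch of targeted text sanitization over an annotated record.
# Assumption: a record carries raw text plus a dict of annotated attribute values.
import re
from dataclasses import dataclass


@dataclass
class Record:
    text: str                   # synthetic document, e.g. a medical note
    attributes: dict[str, str]  # annotated sensitive values, e.g. {"workplace": "Northwind Clinic"}


def decompose(text: str) -> list[str]:
    """Split a document into sentence-level units for targeted sanitization."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def sanitize(record: Record) -> str:
    """Replace each annotated attribute value with a typed placeholder,
    touching only the units that actually contain sensitive content."""
    cleaned = []
    for unit in decompose(record.text):
        for attr_name, attr_value in record.attributes.items():
            if attr_value and attr_value in unit:
                unit = unit.replace(attr_value, f"[{attr_name.upper()}]")
        cleaned.append(unit)
    return " ".join(cleaned)


if __name__ == "__main__":
    rec = Record(
        text="Maria Keller was born on 1984-03-02. She works at Northwind Clinic.",
        attributes={
            "full_name": "Maria Keller",
            "date_of_birth": "1984-03-02",
            "workplace": "Northwind Clinic",
        },
    )
    print(sanitize(rec))
    # -> "[FULL_NAME] was born on [DATE_OF_BIRTH]. She works at [WORKPLACE]."
```

A parallel corpus for training compact sanitization models could then pair each original `text` with its sanitized counterpart; the paper's actual pipeline may differ in how units are formed and how replacements are generated.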
Community
This is an automated message from the Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- Continual Pretraining on Encrypted Synthetic Data for Privacy-Preserving LLMs (2026)
- PrivacyBench: A Conversational Benchmark for Evaluating Privacy in Personalized AI (2025)
- CTIGuardian: A Few-Shot Framework for Mitigating Privacy Leakage in Fine-Tuned LLMs (2025)
- Data-Free Privacy-Preserving for LLMs via Model Inversion and Selective Unlearning (2026)
- Chain-of-Sanitized-Thoughts: Plugging PII Leakage in CoT of Large Reasoning Models (2026)
- You Only Anonymize What Is Not Intent-Relevant: Suppressing Non-Intent Privacy Evidence (2026)
- Unintended Memorization of Sensitive Information in Fine-Tuned Language Models (2026)