Advertisement, Privacy, and Intimacy: Lessons from Social Media for Conversational AI
Following recent hints from OpenAI that ChatGPT may eventually feature advertising, as discussed in this recent interview, we find ourselves at an important juncture in the evolution of digital privacy. The prospect of AI-powered search engines and chatbots incorporating advertising mechanisms raises pressing questions about how we share personal information and how that data might be monetized.
The Great Privacy Awakening We’ve Forgotten
Cast your mind back to the early 2010s. Social media platforms like Facebook were under intense scrutiny for their data collection practices. The Cambridge Analytica scandal in 2018 sent shockwaves through the digital world, revealing how personal data could be harvested and used to influence political outcomes. Users became aware that their posts, likes, and demographic information were being packaged and sold to advertisers. This awareness sparked a digital privacy revolution. People learned to scrutinize privacy settings, became selective about what they shared publicly, and grew suspicious of “free” platforms that seemed too good to be true. The mantra “if you’re not paying for the product, you are the product” became common knowledge. Educational campaigns about digital literacy flourished, teaching users to think twice before posting personal information.
The outrage was palpable. Users felt betrayed when they discovered that their vacation photos, relationship status updates, and casual comments were being analyzed to create detailed psychological profiles. This led to tangible changes: stricter privacy regulations like the European GDPR, increased transparency requirements for tech companies, and a general cultural shift toward data consciousness.
The Conversational AI Blind Spot
Today, however, we face a remarkable disconnect. While we’ve become more cautious about what we post on social media platforms, we’re simultaneously sharing our most intimate thoughts, fears, and personal details with conversational AI systems like ChatGPT, Claude, and Gemini.
Think about the nature of these interactions. Research reveals that people are comfortable sharing deeply personal topics, far beyond what we’d post publicly. Many users disclose health concerns and medical history, financial anxieties, sexual preferences, and relationship or emotional issues. Often, they even share identifying or sensitive personal details in contexts like translation or coding tasks. Some also open up about their creative or business ideas, and political or philosophical views. The conversational format creates an illusion of privacy and intimacy. Unlike a public Facebook post, these AI interactions feel like private conversations with a knowledgeable assistant or therapist. The one-on-one nature of the exchange mimics human counseling sessions, encouraging users to open up in ways they never would on traditional social media platforms.
The Advertising Integration Challenge
This gap in our privacy awareness becomes even more significant when we consider where conversational AI might be headed. Just as social media platforms turned personal sharing into a revenue model through targeted ads, AI companies are beginning to explore similar strategies.
As Giada cautioned in a 2023 presentation on chatbots and search engines (University of Sydney, ChatLLM2023), integrating advertising into AI systems raises ethical concerns that intensify existing privacy risks:
Neglect of Information Hierarchy
When AI systems prioritize advertiser-sponsored content, users may receive biased recommendations that serve commercial interests rather than their genuine needs. The understanding these systems have of user preferences and vulnerabilities makes targeted manipulation particularly effective and concerning.
Monetization of Attention and Intimacy
Unlike traditional advertising that interrupts content consumption, AI-integrated advertising operates within the conversation itself. This creates what we’ve been calling unfairly shared “unobjective information”: advertising influence woven seamlessly into responses that appear to be objective, helpful advice.
Echo Chambers and Filter Bubbles
AI systems that incorporate advertising mechanisms risk creating personalized information environments that reflect user preferences and advertiser objectives. This can lead to a loss of diversity in search results, limiting exposure to alternative viewpoints or solutions that might better serve the user’s interests.
The Trust Transfer Problem
The shift from social media skepticism to AI trust stems from interface design that mimics intimate human conversation rather than public posting. Users perceive AI systems as neutral tools rather than commercial products with monetization objectives, while the immediate value of helpful responses makes data sharing feel justified. Most importantly, AI conversations create the illusion of privacy, as mentioned, and unlike social media’s visible audiences, users believe they’re only sharing with the AI system itself.
Moving Forward Mindfully… With a Little Help from Open Source!
We need the same critical awareness around AI that we developed for social media privacy. This means educating users about AI data collection, demanding corporate transparency about how personal information is analyzed and monetized, and developing ethical guidelines that prioritize user interests over advertising revenue. Most importantly, we must approach AI systems with healthy skepticism, remembering that “free” (and occasionally even paid) services often extract value in ways that aren’t immediately visible.
But unlike the early days of social media, we now have a choice. Open-source models give individuals, researchers, and organizations the ability to build their own AI assistants without relying solely on closed systems with opaque monetization strategies. By using open models, you can decide how your data is stored, processed, and shared, or keep it entirely local. This could mean building your own chatbot, hosting it on Hugging Face Spaces, or integrating privacy-first AI into your workflow. With open source, we can build systems that are transparent, ethical, and genuinely user-centered. It’s not a silver bullet, but it does mean we can imagine conversational AI that isn’t built around advertising, and that’s a chance worth taking.
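To make the local option concrete, here is a minimal sketch of a fully local assistant built with the `transformers` library. The model name is just an example (any open-weight instruct model works), and this assumes a recent `transformers` release with chat-style text-generation pipelines; the point is that the weights are downloaded once and the conversation itself never leaves your machine.

```python
# A minimal, fully local assistant: a sketch assuming a recent version of
# the `transformers` library and the example open-weight model below.
from transformers import pipeline

# Weights are downloaded once, then cached; inference runs on your hardware.
chat = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # example; swap in any open model
)

messages = [
    {"role": "user", "content": "I have a sensitive health question..."},
]

# The reply is generated locally; no conversation log leaves your machine
# unless you explicitly add network or storage code yourself.
result = chat(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```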
And whether you are a developer or a user, there are practical steps you can take to protect privacy:
If you’re a developer:
◦ Choose where you run your system: local deployment is best for privacy
◦ Minimize data collection: don’t log sensitive user inputs unless absolutely necessary, and offer clear options for deletion (see the sketch after this list)
◦ Be transparent: document how your system processes, stores, and possibly shares data
◦ Design for privacy by default: opt-out should not require digging through hidden settings
◦ Resist monetization shortcuts: advertising or behavioral profiling may bring short-term revenue, but undermines user trust in the long run
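As a rough illustration of the “minimize data collection” and “privacy by default” points above, here is a sketch in Python. The helper names and in-memory storage are hypothetical, not a prescribed design; the idea is simply that raw user text is never written to persistent logs, only operational metadata is, and deletion is a single explicit call rather than a buried setting.

```python
# Hypothetical sketch: privacy-by-default message handling.
# Raw user text stays in memory for the session; logs carry only metadata.
import hashlib
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("assistant")

# In-memory only by default: nothing is persisted unless you opt in.
conversations: dict[str, list[str]] = {}

def handle_message(session_id: str, user_text: str) -> None:
    conversations.setdefault(session_id, []).append(user_text)
    # Log a hashed session identifier, a timestamp, and the message length,
    # never the message content itself.
    digest = hashlib.sha256(session_id.encode()).hexdigest()[:12]
    log.info("session=%s ts=%d chars=%d", digest, int(time.time()), len(user_text))

def delete_conversation(session_id: str) -> None:
    # A clear option for deletion: one call removes everything we hold.
    conversations.pop(session_id, None)
```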
If you’re a user:
◦ Know your provider: check who created your AI assistant and what their business model is
◦ Ask where it runs: does the assistant process your data locally on your device, or is everything sent to remote servers?
◦ Check the fine print: does the provider store your conversations, and if so, for how long? Are your conversations used for training the model?
◦ Watch for monetization: is your data used for training, targeted advertising, or resold to third parties?
◦ Stay skeptical: treat conversational AI like any other digital service. If it feels “free”, ask what you’re really paying with
Ultimately, privacy in the age of conversational AI is not just a technical or regulatory issue; it’s about trust. The more personal and intimate these systems become, the more careful we need to be about what happens behind the interface. Social media taught us hard lessons about trading convenience for surveillance. With AI, we have the chance to do better: to demand transparency, to choose open and privacy-first systems, and to design technology that serves people rather than advertisers.