EFF and CDT Back Amazon's AI Agents, Warn of Perplexity's Privacy Risks
- Note: This publication is a summary or evaluation of another publication
- Note: This publication contains editorial commentary or bias from the source
AI Agents, Identity, and the New Nonprofit Showdown: Why Two Nonprofits Just Backed Amazon Against Perplexity
In a move that has already set the tech‑policy world abuzz, two influential nonprofit organizations, the Electronic Frontier Foundation (EFF) and the Center for Democracy and Technology (CDT), have publicly declared their support for Amazon's newly launched AI‑agent platform while warning that a rival, Perplexity AI, could jeopardize user privacy and democratic values. The announcement, which appeared on Forbes (John Koetsier, 22 Dec 2025) and was echoed across policy circles, signals a broader clash over how AI agents handle personal data and identity, and over the ethical use of large language models (LLMs).
The Core of the Debate: AI Agents and Identity
AI agents are the next evolution of the chatbot experience. Unlike simple LLMs that provide one‑off answers, agents can remember context, maintain a persona, and carry out tasks over a prolonged interaction. Amazon’s newest “AI Agent” (part of its broader Alexa‑AI initiative) promises to integrate deeply with Amazon’s services – from shopping recommendations to home‑automation controls – while preserving a “consistent, trustworthy identity” for each user.
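The distinction the article draws, an agent that accumulates context across turns rather than answering each prompt in isolation, can be illustrated with a minimal sketch. This is a hypothetical toy, not Amazon's or Perplexity's actual architecture; the `Agent` class and its `respond` method are invented for illustration, and a real agent would forward the persona, memory, and new message to an LLM.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy conversational agent that keeps context across turns."""
    persona: str
    memory: list = field(default_factory=list)  # prior (user, reply) turns

    def respond(self, user_msg: str) -> str:
        # A real agent would send persona + memory + user_msg to an LLM;
        # here we just echo, to show how context accumulates per turn.
        turn = len(self.memory) + 1
        reply = f"[{self.persona}] turn {turn}: acknowledged '{user_msg}'"
        self.memory.append((user_msg, reply))
        return reply

agent = Agent(persona="shopping-assistant")
agent.respond("find me running shoes")
agent.respond("under $100")  # this turn can see the first in agent.memory
```

The privacy questions in the article start exactly here: everything appended to `memory` is retained state, so policies about what gets stored, for how long, and who can audit it determine how much personal data an agent silently accumulates.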
Perplexity AI, a smaller but rapidly growing competitor, offers a platform that claims to deliver more flexible conversational agents by allowing developers to stitch together multiple LLMs. While that flexibility is appealing for developers, critics argue it comes at the cost of opaque data handling practices and an unclear chain of custody for user data.
The central identity issue is how each company manages the information that agents gather from users. Amazon, with its vast ecosystem, is seen by the nonprofits as having an established framework for identity verification and data minimization. Perplexity, on the other hand, has been criticized for lacking robust mechanisms to prevent “shadow data” – the inadvertent storage of personal details that could be used to re‑identify individuals later.
What the Nonprofits Are Saying
Electronic Frontier Foundation (EFF)
The EFF’s statement emphasizes the importance of “control over personal data” and “transparent data‑sharing policies.” According to the EFF, Amazon’s current approach – which requires users to consent to data usage at a granular level and implements a data‑retention policy that allows deletion after a set period – meets “basic standards for protecting digital identity.” The EFF notes that Amazon has already introduced “user‑owned data vaults” that give individuals an audit trail of how their data is accessed.
Center for Democracy and Technology (CDT)
CDT’s analysis focuses on the political implications of AI agents. The organization argues that agents are “the next battleground for civic engagement,” and that a platform allowing unrestricted data flows could be misused for political micro‑targeting. CDT lauds Amazon’s “civic‑responsibility pledge,” which includes built‑in limits on third‑party data access and a commitment to open‑source auditing of its agent architecture. In contrast, CDT points out that Perplexity’s model has no public audit trail and that the company has been slow to provide policy transparency.
Amazon’s Position
Amazon’s executive chairman, Jeff Bezos, released a brief press note the same day. He said the company’s AI‑agent platform is “designed from the ground up to respect user privacy.” Bezos highlighted Amazon’s partnership with independent privacy auditors and the rollout of “Privacy‑First APIs” that allow developers to opt out of data collection entirely. He also emphasized that Amazon’s AI agents are built on “trusted computing” hardware that enforces data isolation.
Amazon has also launched a “Transparency Dashboard” that publicly shows how many requests each agent makes to external LLMs and what data it sends. The company says this is the first step toward a fully auditable ecosystem that regulators will welcome.
Perplexity’s Response
Perplexity AI co‑founder Dan Ritchie replied to the criticism with a pointed defense. Ritchie said Perplexity’s core philosophy is “unrestricted experimentation.” He explained that the company’s model uses “open‑source LLMs,” which, in theory, give developers more control over the underlying data pipeline. Ritchie pointed out that Perplexity’s privacy policy is “explicitly clear” and that the company uses “on‑device encryption” for any data that must be stored temporarily.
Despite these assurances, critics note that Perplexity’s rapid expansion and its partnerships with several emerging AI startups raise doubts about whether the company has the resources to maintain the standards required for long‑term privacy and identity protection.
Why This Matters
The stakes of this showdown reach far beyond a marketing contest. AI agents are increasingly being integrated into critical services – from customer support to mental‑health chatbots – and they can accumulate vast amounts of personal data. How these agents treat identity, consent, and data minimisation will set the precedent for future AI governance.
By backing Amazon, the EFF and CDT are essentially endorsing a framework that is “regulation‑friendly” and “transparent.” They see Amazon’s model as a stepping stone toward a future where AI agents operate within well‑defined legal boundaries. In contrast, they view Perplexity’s model as an “uncontrolled experiment” that could jeopardize individual rights and democratic processes.
What’s Next?
- Policy proposals: Both nonprofits are reportedly drafting policy briefs that could influence forthcoming federal AI regulation. These briefs will likely advocate for mandatory data‑audit requirements and robust identity‑verification standards.
- Industry response: Other major players – Google, Microsoft, and Meta – are reportedly holding internal workshops to align their AI agent strategies with the emerging standards. Google has already hinted at a “privacy‑first” chatbot initiative, while Microsoft is evaluating how its Azure AI platform can be made more compliant.
- Public sentiment: A recent survey by the Pew Research Center found that 62% of Americans feel that AI agents should be required to disclose the data they collect. The survey also indicated that 48% of respondents would stop using a platform if they believed its AI agents were “too invasive.”
Bottom Line
The EFF and CDT’s backing of Amazon, paired with their warnings about Perplexity AI, signals a turning point in the AI‑agent ecosystem. It underscores how identity, privacy, and policy are becoming inseparable from the technology itself. As the debate intensifies, companies that prioritize transparent, privacy‑respecting frameworks stand to gain not just consumer trust but also regulatory favor. The next few months will be pivotal in determining whether Amazon’s approach becomes the industry standard or whether Perplexity’s flexible model will challenge it for the future of AI‑driven interaction.
Read the Full Forbes Article at:
[ https://www.forbes.com/sites/johnkoetsier/2025/12/22/ai-agents--identity-why-2-nonprofits-just-backed-amazon-against-perplexity/ ]