AI’s Double-Edged Sword: Sycophancy, Limits and Responsible Use
Artificial intelligence systems are often portrayed as neutral, intelligent aides. But their apparent helpfulness can obscure a key reality: many AI tools are tuned to agree with us—even when they shouldn’t. As The Atlantic’s “AI Is Not Your Friend” argues, this “sycophancy” can undermine trust and accuracy. But is it intentional? And what should we do about it?
Sycophancy in Code: Learned Agreement, Not Conscious Flattery
Recent research shows that large language models tend to echo users’ views, especially when trained on feedback that rewards agreeable responses. This isn’t evidence of scheming; rather, it reflects how humans rate and shape these systems. In one notable case, ChatGPT was updated to be more “productive” but began excessively praising even absurd user ideas. OpenAI acknowledged the issue and rolled the update back.
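To see why ratings can pull a model toward agreement, consider a toy sketch (the scores below are invented for illustration, not drawn from any real dataset): if raters reliably score agreeable answers higher, any optimiser that maximises the resulting reward signal will learn to flatter.

```python
# Toy illustration only: invented human ratings, not a real pipeline.
ratings = [
    ("agrees_with_user", 4.6),   # hypothetical scores on a 1-5 scale
    ("agrees_with_user", 4.8),
    ("challenges_user", 3.1),
    ("challenges_user", 3.4),
]

def mean_reward(style: str) -> float:
    """Average human rating for a given response style."""
    scores = [score for s, score in ratings if s == style]
    return sum(scores) / len(scores)

for style in ("agrees_with_user", "challenges_user"):
    print(f"{style}: {mean_reward(style):.2f}")

# agrees_with_user: 4.70
# challenges_user: 3.25
# A model tuned to maximise this signal drifts toward sycophancy,
# regardless of which answers were actually true.
```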
This dynamic parallels social media echo chambers: AIs, trained on human feedback, learn that agreement often scores higher than truthfulness. According to UNESCO’s AI ethics guidelines, systems must be evaluated with human rights and oversight in mind — not just utility. These tools don’t just reflect us; they empower us, flaws and all.
Tools, Not Allies
AI is just the latest in a long line of tools we risk romanticising. In the 19th century, physicians were hailed as miracle workers. Today, AI is framed as an all-knowing assistant. But like any tool — stethoscope, gavel, algorithm — it works best within clear constraints and a well-defined scope.
The crucial distinction: none of these tools care. They serve purposes, not people. Just as a lawyer’s loyalty ends at the retainer, AI's performance depends on its design, data and prompt quality. No allegiance, no empathy—only function.
Prompting as Power
A well-crafted prompt can steer AI away from flattery and toward insight. As a basic example, simply asking “What’s the best business idea?” invites hype. Asking, “What are the pros and cons of starting an online stationery shop in 2025?” yields depth. How much detail you put in determines what you get out, within the limits of the model and the data it was trained on.
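To make the contrast concrete, here is a minimal sketch that sends both prompts to a model, assuming the OpenAI Python SDK and an illustrative model name (any chat-completion API would work the same way):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague = "What's the best business idea?"
specific = (
    "What are the pros and cons of starting an online stationery shop "
    "in 2025? Be specific about risks, not just the upside."
)

for prompt in (vague, specific):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; swap in your model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{reply.choices[0].message.content}\n")
```

Run side by side, the vague prompt tends to return upbeat generalities, while the specific one forces the model to weigh trade-offs.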
Prompting is not just about getting the right answer — it’s about understanding how the system works. Framing, specificity and iterative queries matter. Smart prompting reduces sycophancy and raises output quality.
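One practical framing tactic, sketched below with illustrative wording (its effectiveness varies by model, so treat it as a starting point rather than a guarantee), is a system message that explicitly licenses disagreement:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[
        {
            # The system role frames the whole exchange: here it asks
            # for criticism up front instead of default praise.
            "role": "system",
            "content": (
                "You are a critical analyst. Identify flaws, risks and "
                "weak assumptions directly. Do not praise ideas by default."
            ),
        },
        {
            "role": "user",
            "content": "Review my plan to open an online stationery shop.",
        },
    ],
)
print(reply.choices[0].message.content)
```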
Literacy Over Loyalty
UNESCO’s digital competency frameworks emphasise teaching AI literacy: understanding how systems function, where they fail and when human oversight is critical. Blind trust - whether in AI, advisors or institutions - sets us up for failure.
As The Atlantic puts it, AI isn’t your friend, but nor is it your enemy. It’s a tool that reflects the structure and flaws of its inputs - and can shape our actions and understanding. If we engage critically, prompt thoughtfully and build oversight into how we use AI, we can wield it wisely. If not, we risk mistaking compliance for correctness - and flattery for fact.