Research shows that generative AI is eroding core cognitive skills, as users increasingly rely on the tools to solve problems instead of thinking for themselves.
The problem stems from "cognitive offloading" – the brain's tendency to dodge difficulty and lean on technology. Unlike GPS or pocket calculators, AI's range of applications is so broad that the mind effectively "powers down" when using it.
A study of nearly 1,000 students: the group using ChatGPT to solve math problems initially outperformed the unassisted group by 48%, but when tested without AI, their scores came out 17% lower. Short-term gains concealed long-term decline.
In aviation, the FAA encourages pilots to fly some route segments manually to maintain their skills and avoid autopilot dependence – a lesson that carries over to AI.
Four key habits:
Draft first, prompt second – sketch your own ideas before using AI to refine them, preserving your problem-solving ability.
Use AI as a Socratic tutor – ask for step-by-step guidance rather than ready-made answers, then re-explain the material in your own words to test your understanding.
Timeouts and checklists – pause to evaluate AI responses, asking: are there errors, is there bias, is any perspective missing?
AI fast – set aside time to work entirely without AI to exercise your "cognitive muscles."
Other studies carry similar warnings: AI reliance is linked to weaker critical thinking (Swiss Business School, Microsoft survey) and a greater risk of absorbing bias, as with AI-generated images of "financial managers" (85% white men, versus a reality in which fewer than 45% are men).
Education and business need balanced frameworks for AI use: capturing its benefits while preserving the foundation of human skills.
https://www.wsj.com/tech/ai/chatgpt-tips-smarter-dc33a0fd
How to Make Sure ChatGPT Doesn’t Make You Dumber
Generative AI tools are quietly weakening our cognitive skills. Here are four ways to keep that from happening.
By Paul Rust and Nina Vasan
Sept. 3, 2025 2:04 pm ET
ChatGPT is down, and dread sets in. You stare at the blank document. The report is due in two hours, but the thought of writing it yourself feels overwhelming. The analytical skills that once kicked in automatically—like breaking down problems, organizing thoughts and finding the right words—now seem impossibly difficult. When did thinking become this hard?
These moments of technological dependence are becoming increasingly common as generative AI reshapes how we work, learn and think. At our lab, we’ve spent a decade treating patients, studying how technology shapes the human mind, and supporting the integration of mental-health and well-being principles into AI and social-media platforms. As generative AI enters workplaces and schools at unprecedented speed, we’re observing a troubling phenomenon: the quiet erosion of our cognitive capabilities.
The allure of the technology is powerful. As our colleague Darja Djordjevic, a psychiatrist at Harlem Hospital and Columbia University, says, “Here is a machine that is always available, endlessly enthusiastic, and seemingly competent in every domain. That creates a powerful feedback loop. You skip the discomfort of starting from scratch and get rewarded in seconds.”
No wonder our brain, delighted to dodge difficulty, craves these shortcuts. Humans have always used technology for cognitive offloading—using external tools to reduce mental demands. We write notes to remember, use calculators for arithmetic, and rely on GPS for navigation.
Where previous technologies affected discrete skills, however, generative AI tools raise the stakes. These tools are so expansive in their applications, so autonomous in their execution, that when we activate them, our minds effectively power down.
Therein lies the risk. The very convenience that boosts short-term productivity may also be accelerating long-term cognitive atrophy, across more domains than any technology before it. Drawing on insights from psychology, patient care, aviation research and behavioral science, we offer four habits that foster a more deliberate relationship with ChatGPT—one that fortifies, rather than diminishes, our cognitive faculties.
1. Draft first, prompt second
Like muscles, our cognitive skills weaken when unused. Offloading entire tasks to AI may diminish cognitive capabilities such as problem-solving, creative thinking and critical reasoning.
A recent study of nearly 1,000 students offers an unsettling glimpse of this phenomenon. Students using ChatGPT to solve math problems initially outperformed their peers by 48% during practice. But when tested without AI, their scores dropped 17% below their unassisted counterparts. Short-term gains masked long-term losses.
This pattern isn’t confined to the classroom. In aviation, research shows that overreliance on autopilot dulls pilots’ manual flying skills. The Federal Aviation Administration thus encourages pilots to fly route segments manually to ensure proficiency is maintained.
Psychologists have long emphasized the power of “mastery experiences”—struggling through and conquering difficult tasks—as the single strongest predictor of self-efficacy, our internal belief in competence. When ChatGPT skips the hurdle for us, mastery never materializes; over time the very capacity fades, confidence erodes, and dependence deepens.
To keep your skills sharp, outline your approach before opening the chat window. Think through the problem and jot down ideas; bullet points suffice. Then ask the model to elaborate or polish. This initial manual effort may feel like returning to the gym after months away, but deliberate practice preserves both skills and confidence.
2. Use AI as a tutor
Have you noticed that after asking Perplexity or ChatGPT for answers, you can articulate concepts fluently while viewing the screen, but an hour later, clarity evaporates and you fumble explaining them to colleagues?
When we ask AI for answers directly, we engage less deeply—and memory and understanding suffer. A preliminary MIT study found that individuals who used ChatGPT to write essays had poorer recall of their own work. The result echoes the “Google effect,” in which easy fact-finding dampens internal memory retention; similarly, research shows GPS use decreases spatial memory.
But with AI, it isn’t just isolated facts at risk: We may lose grasp of entire concepts and ideas because explanations arrive in a single click.
One possible remedy: Turn the bot into a Socratic tutor. Instead of “Give me the answer,” try “Guide me through the problem so I can solve this on my own.” Request step-by-step explanations. After completing the task, close the window and re‑explain the idea in your own words to test retention.
In one study with chemistry students, a modified version of ChatGPT that withheld direct solutions and supplied only incremental hints fostered greater engagement and better learning outcomes than the default model. The small shift from prompting for product to prompting for process cultivates lasting comprehension.
3. Timeouts and checklists
How often do you find yourself mechanically copying AI output, only to wince when colleagues point out glaring errors that somehow escaped your notice?
Heavy AI reliance extends beyond skill atrophy into compromised judgment and bias. A Swiss Business School study of 666 participants and a Microsoft survey of 319 knowledge workers this year both found that heavy AI reliance correlated with poorer critical-thinking skills.
This exposes us to automation bias, prompting us to accept machine judgments uncritically and inherit its blind spots: In one behavioral science experiment, volunteers forfeited a larger cash prize simply because an AI warned them not to trust their human partner, even after evidence showed cooperation would pay.
The risk goes deeper than a single bad decision; if we don’t reflect on AI’s output critically, we might internalize the machine’s inherent biases. In one experiment, participants were shown AI-generated images of “financial managers”; 85% of the images depicted white men, despite the reality that, in the U.S., fewer than 45% of financial managers are men, let alone white men. Alarmingly, after viewing the AI-generated images, participants showed a greater tendency to associate that identity with the role.
To counter this bias, people can use “cognitive forcing” tools—methods borrowed from medicine and aviation—to shift thinking from our swift, intuitive processing to slow, deliberate analysis. Doctors use “diagnostic timeouts” to double-check reasoning; pilots run mental checklists before takeoff. Apply the same approach: When AI delivers answers, pause. Restate the main points aloud or in writing. Then run a mental checklist: Can this be verified? What perspectives might be missing? Could this be biased? Such metacognitive prompts catch mistakes and spark originality and critical thinking.
Research shows that workers with stronger metacognitive skills become more creative when using ChatGPT. Additionally, a preliminary study showed that students who were instructed to ask themselves metacognitive questions—such as “How closely does the response align with what you expected?” and “Is there anything in the response that you do not fully understand?”—when conducting searches with GenAI demonstrated higher levels of critical thinking.
4. Try an AI fast
The most direct path to preserving your intellectual faculties is to declare certain periods “AI-free” zones. This can be one hour, one day, even entire projects. Doing things yourself is the most reliable way to prevent dependency and ensure your cognitive capabilities remain sharp.
As generative AI continues its inexorable rise, it forces fundamental reckonings about human capability itself, both institutionally and individually.
In corporate environments, it raises questions about talent and value creation. Companies investing heavily in AI productivity tools may inadvertently be undermining their workforce’s long-term capabilities.
Educational institutions face an even thornier challenge: Outright bans on generative AI risk leaving students digitally illiterate in an AI-saturated world, yet unrestricted access risks undermining the very thinking abilities education aims to develop. The challenge demands nuanced, evidence-based frameworks that teach students to harness AI’s benefits while preserving critical-skill formation.
None of this argues, of course, for shelving generative AI. Few of us would trade Wikipedia for a paper encyclopedia or swap Excel for an abacus. The challenge is cultivating a mindful relationship with the technology: one that captures its leverage without letting it hollow out our faculties.
Generative AI can be a partner, muse, and accelerator. But without deliberate boundaries, this omnipresent assistant won’t just help us write; it will become the author while we, the humans, merely click “send.”
Paul Rust is a research fellow at Brainstorm: The Stanford Lab for Mental Health Innovation, and a doctoral candidate in theoretical medicine at University Witten/Herdecke in Germany. Nina Vasan is the founder and director of Stanford Brainstorm and chief medical officer of Silicon Valley Executive Psychiatry. They can be reached at