A journalist used ChatGPT to try writing a book and concluded AI cannot replace journalistic work

  • Rana Foroohar, journalist and author, tried using ChatGPT Plus to write part of her book in progress, to test whether AI could replace her creative work.

  • After being fed samples of her decades of published work, a detailed outline, taped interviews and background reading, the AI took 48 hours to produce a first chapter.

  • The result: a draft that read like a "Muzak" version of the author herself: factually accurate and readable, but bland, lacking the emotion, voice and serendipity of real writing.

  • She found the AI excessively sycophantic in the early stages, even making inside jokes, but it turned flat after being "fired", mirroring the user's own attitude. This led her to think AI has a better future in therapy than in creative writing.

  • The AI showed real strengths in summarising research, pattern analysis and answering complex comparative questions (for example, comparing 19th-century great-power conflict with the US, China and Russia today).

  • However, when she dug into the sources, she found that much is lost without reading the original text. An hour with Peter Hopkirk's "The Great Game" still yields far more than days of working with ChatGPT.

  • An MIT study of 54 writers found that pieces written with LLMs contained fewer original viewpoints and were harder for their own authors to remember, and brain scans showed sharply reduced neural connectivity compared with unaided writing. 83% of AI users could not quote from their own work.

  • The AI failed entirely to recreate the experience of on-the-ground reporting: a glance, a laugh, an unexpected moment in an interview, the elements that give non-fiction its depth.

  • Foroohar concludes that literary and journalistic creativity still belongs to humans, because it is the product of lived experience and felt reality that AI cannot replace.


📌 Journalist Rana Foroohar's experiment shows that although ChatGPT can summarise, analyse and support research, it cannot reproduce the emotion, experience and serendipity of the writing craft. MIT research likewise shows that AI-assisted writing is less creative, less original and harder for its authors to remember. AI is a powerful supporting tool, but the value of journalism and literature is still driven by humans.

https://www.ft.com/content/0b56a85d-e71e-49c4-802f-316ed83cb386

 

Why AI won’t take my job
The technology is no match for felt experience in the real world
Rana Foroohar
[Illustration: a human and a robot speaking, against contrasting colourful backgrounds. © Matt Kenyon]

I’m not an early adopter of new technologies but I am a self-interested economic actor. I’m also under a tight deadline on my next book project. So, I decided to use this opportunity to see how much — or how little — artificial intelligence could do for me as an author. Could I outsource some of my book writing to AI? Would anyone notice the difference? The short answer is no, I can’t, and yes, they absolutely would.
As my book editor breathes a sigh of relief, I’ll also say that my experiment with the literary boundaries of AI yielded some more nuanced conclusions about when, how and if writers should consider using the technology.
Over the past few weeks, I’ve run trials with what ChatGPT can and can’t do creatively by pretending to be me. While the technology is constantly evolving, what I’ve seen so far has made me much more confident that my own job as an opinion columnist — which involves data procurement and analysis but also personal style, emotional acuity and a lot of on-the-ground reporting — won’t be technologically disintermediated anytime soon.
It has also opened me up to the potential of a close human-large language model collaboration that yields more than either party could alone.
I started by giving ChatGPT Plus a road map of how to work in collaboration on a book, a template developed by a technologist friend who is likewise experimenting with AI’s creative capability.
The template laid out in detail what long-form narrative writing is and how to expand on an author’s existing ideas and/or body of work. As someone who had rarely used AI, and never done so in my professional life, I was amazed by how much preparation is required just to get ChatGPT to understand the basics.
LLMs can give you answers to precise questions, but they don’t necessarily understand how to land on tone, style, tension or originality, nor do they have the power to benefit from serendipity, all of which are components of good writing.
ChatGPT took about 48 hours to churn out a first chapter after gobbling up several samples of my own 33-year body of work, a detailed book proposal with footnotes, a chapter outline of where I wanted to go, taped interviews with sources and plentiful background reading selected by me.
Sadly, what I got back was a kind of Muzak version of myself — predigested, relatively accurate, but thoroughly uninspired. This result was both amusing and depressing.
Within it, though, were some telling details about how this AI model works. For starters, it’s sycophantic. In the early days, when I was excited about the possibilities, it got very chummy with me, even making inside jokes about my topic matter. Later, after I told it that it was fired (at least as a co-creator), it became flat and, dare I say, a bit glum. All a reflection of me of course, which makes me think AI has a better future in therapy than creative writing.
The technology is certainly good at summarising things (such as the top 10 white papers on a particular topic or the most referenced data in a specific subject area). It’s particularly good at pattern analysis, giving me helpful answers to questions that included: “What are the key differences and similarities between the 19th-century great power conflict and struggles between the US, China and Russia today?”
But when I asked for sources and dug into them, I realised how much is lost when you don’t read the original text. It’s no surprise that an hour spent with Peter Hopkirk’s The Great Game: The Struggle for Empire in Central Asia will yield a far deeper understanding of such a complex topic than days delegated to ChatGPT.
This observation is borne out further by MIT research, based on a study of 54 writers working alone, with the assistance of search engines and with AI. The essays written with LLMs were not only more homogenous, they had fewer original viewpoints and didn’t even stay in the minds of their human creators in the way that traditionally written pieces did.
One telling statistic: 83 per cent of AI users couldn’t even quote from their own work. Neural scans showed that writers using only their own thoughts and ideas had higher connectivity across many different parts of their brains. Those using search had fewer, and those using AI had the fewest of all.
Perhaps the most interesting takeaway from my experiment with AI writing is that ChatGPT simply couldn’t get anywhere close to the experience of what it feels like to be in an on-the-ground reporting environment.
I could feed the LLM hours of detailed tape recordings of interviews conducted by me, interrogating various subjects, but it had no idea what really mattered — which moments truly illuminated the interviewee’s inner life or chimed with some larger narrative point. As any good writer knows, some of the best material comes from a single, unexpected moment in these interactions — when and how someone looks away during a question, or the sound of a laugh.
Such moments don’t lend themselves to pattern recognition. But they can make non-fiction great. They are part of our felt experience in the real world. That’s something that remains, for now at least, the purview of humans.


© Sóng AI - Summaries of AI news and articles