In this rapidly evolving technological era, the question of what artificial intelligence is doing to the human mind has understandably sparked widespread debate.

I recently came across a report titled “ChatGPT and the Brain: A Hidden Risk for Writers and Journalists”, which cited worrying research findings from the Massachusetts Institute of Technology.
According to the study, prolonged use of AI—particularly ChatGPT—may actually dull brain function instead of stimulating it.
In the four-month study, participants had their brains scanned while using ChatGPT, and the results were troubling: 83.3 percent of users could not recall a single sentence they had written just minutes earlier, while those who wrote without AI had no such problem.
Even more concerning, the study found that brain connectivity dropped sharply from 79 to 42 points, a decline of nearly half in neural engagement.
These findings cannot be dismissed lightly, especially in a time when AI has rapidly penetrated nearly every aspect of professional and personal life.
Journalists, writers, academics, and creators are now increasingly using AI tools to draft, polish, or even generate entire pieces of work.
The fear, therefore, is that the very foundation of journalism and writing—creativity, critical thinking, and originality—may be under threat.
This may well be true.
But I believe the matter demands deeper reflection before we rush to condemn AI as the enemy of human intelligence.
Let us first acknowledge an undeniable fact: AI is here, and it is here to stay.
No amount of resistance or romantic longing for “the good old days” will change that.
Just as the printing press, typewriters, and later the internet changed the way writers worked, AI is another disruptive tool in the evolutionary journey of communication.
What matters now is not whether AI exists, but how we choose to use it.
The truth is, AI is not a magical tool that writes high-quality, original work at the push of a button.
As someone who uses ChatGPT regularly, I can confidently say that, if anything, there is now a greater demand on the brain than before AI’s advent.
It is naïve to think that one can simply come up with a topic, feed it into ChatGPT, and expect a compelling article to emerge.
The result will almost always be generic and bland—clearly the product of automated logic.
What AI cannot replicate, and never will, is the personal touch that a writer brings: the analysis, the lived experience, the emotional depth, the unique voice that makes writing resonate with readers.
That is why, despite using AI, I still write my articles myself—more or less in the same way I did before AI came into the picture.
The heavy lifting remains mine.
The thinking, the structuring, the arguments, the examples—all of that comes from me.
Only then do I turn to AI, asking it to review the piece, polish the grammar, or check spelling, while being careful to preserve my style.
I may also instruct it to include some research material, but only within my directions and framework.
The key here is control.
AI is a tool, nothing more.
It works according to the instructions of the user.
This is where a writer is either made or unmade.
A lazy writer who simply feeds AI a vague topic or a handful of notes and then steps back will inevitably produce mediocre work.
That kind of reliance does dull the brain.
It is comparable to a student who copies assignments word-for-word from a textbook without attempting to understand the content.
There will always be a price to pay later, as the quality of work declines and credibility suffers.
On the other hand, an engaged writer uses AI as an assistant, not a substitute.
Even after AI has helped refine an article, the writer still has to read through the entire piece, ensuring that the instructions have been followed, that there are no factual or logical errors, and that research material or statistics have been correctly presented.
AI is not infallible—it makes mistakes, misinterprets prompts, or mixes up facts.
That is why the writer’s mind must remain fully engaged, constantly reviewing, correcting, and even rewriting sections to ensure accuracy and authenticity.
This is no different from the traditional editing process, except with the added advantage of having a tool that can speed up some of the more tedious tasks.
Ironically, when used properly, AI can actually sharpen the mind.
It demands more clarity of thought, not less.
Because AI can only produce what is asked of it, the user must be specific, detailed, and intentional in their instructions.
This requires deeper planning, sharper thinking, and a clearer understanding of what one is trying to achieve.
In this way, far from eroding brain function, AI can serve as a stimulus—challenging the user to be more precise, analytical, and thoughtful.
Consider a simple example.
ChatGPT will never know that I was born at Torwood Hospital unresponsive—without breathing, movement, or even a cry.
Yet that personal story can form a powerful part of an article I write, something that lends authenticity and depth AI could never generate on its own.
It is in weaving such human experiences, perspectives, and insights into our work that writing retains its true essence.
AI can polish, but it cannot originate human memory, emotion, or consciousness.
This is why I disagree with the assumption that the use of AI automatically leads to brain decline.
What leads to brain decline is laziness—the decision to let a tool do all the thinking for you.
A calculator does not destroy mathematical ability when used responsibly; it becomes harmful only if students are allowed to rely on it exclusively without first learning the principles of arithmetic.
Likewise, AI does not erode creativity when used responsibly; it only becomes harmful when users abdicate responsibility and originality.
Nevertheless, the concerns raised by the MIT study should not be dismissed.
They serve as a timely reminder of the risks of misuse.
Writers and journalists must resist the temptation to outsource their thinking.
Institutions, too, should develop clear guidelines on the ethical and responsible use of AI in writing.
Just as plagiarism has long been condemned in academic and journalistic circles, the lazy over-reliance on AI to generate content must equally be discouraged.
Yet, even with these warnings, it is important to emphasize the immense value AI brings.
For writers like myself, ChatGPT has become indispensable.
It helps polish my work, assists with quick fact-checks, offers suggestions I can either take or discard, and remembers my style, making future collaborations smoother.
But at no point does it replace my voice, my mind, or my calling as a writer.
In the end, the debate over AI and brain function comes down to this: tools are only as dangerous—or as empowering—as the hands that wield them.
A knife can be a weapon or a cooking utensil.
Fire can destroy or warm a home.
AI can dull the brain or sharpen it.
The difference lies in how we choose to use it.
Writers and journalists must therefore embrace AI as a companion, not a crutch.
The challenge before us is not to reject this new tool out of fear, but to master it with discipline and integrity.
Only then will we ensure that our creativity remains vibrant, our minds engaged, and our work authentically human in an age of machines.