AI: Good or Bad?
Artificial Intelligence is one of the most talked-about topics today. The range of opinions is broad: negative views seem more prevalent than positive ones. Is this trend warranted?
Thinking critically about a new study on AI and cognition
New results suggesting that the use of AI causes cognitive deficits have triggered widespread fear, but is this interpretation justified?
By: Kathryn Birkenbach, Peter Attia
June 28, 2025
===================================================================
One of my favourite podcasters is Dr. Peter Attia. I gladly pay the $200 ‘members-only’ subscription fee to receive the benefits. One of those benefits is access to research studies and scientific papers like the one identified above.
The conclusion of that paper is presented below:
Despite the alarm, this study certainly does not show that the use of AI is rotting your brain or impairing critical thinking skills. (Ironically, the writers responsible for the panicked headlines appear to have failed to apply critical thinking in their own evaluation of this research.) Rest assured, you do not need to cancel your AI subscriptions in an effort to preserve your cognitive health.
The study does, however, demonstrate that the use of ChatGPT or other such tools has the potential to reduce engagement while completing cognitively demanding tasks. Though not shown by this work, such a tendency might eventually contribute to erosion of cognitive abilities, but only if we rely on these tools as a complete replacement for — rather than a supplement to — our own brains. As long as we keep ourselves in the driver’s seat, AI can just as easily serve to strengthen and expand our capacity for deep and creative thinking, and the combined force of human intelligence and artificial intelligence can yield advancements we have yet to imagine.
My take.
As a regular LLM user (Grok, Claude.ai, ChatGPT), I can relate to those comments.
My creativity has been stoked and augmented by these AI tools. The quality of my thinking on various topics of interest has been enriched by the facts, perspectives, and insights that AI feeds me.
I often ask questions of Grok (for example) and receive content-rich, thorough responses that invite me down new avenues of thought. Those invitations are like doors opening onto intellectual adventures, which I am free to accept or reject. One simple LLM question frequently leads to a cascade of others that opens my mind to a seemingly bottomless rabbit hole. It’s a tantalizing experience to use these tools, and I feel better off with them than without.
Wisdom is an underappreciated quality.
Wisdom comes from individual life experience: making choices, evaluating their consequences, and learning the lessons. Choices are best made with the best available information feeding robust knowledge.
Today’s information- and knowledge-aggregating tools portend a future with greater collective wisdom for humanity to draw upon.
This is the optimistic view.
Are there risks?
The question of AI risk can be answered at the individual and the societal levels.
I can only speak with authority on my personal, individual experience.
To date, I have had nothing but positive experiences with LLMs.
For individual protection, I subscribe to a VPN service to anonymize my web activity. My goal is to minimize (eliminate?) the potential for uninvited and unwanted data miners to capture, use or sell my personal data for their gain at my expense.
At the societal level, I have many risk concerns🥺. However, there is nothing I can do to stop those risks from manifesting in ways that may touch my life.
I have no faith in government authorities to protect me.
In fact, I consider regulators, lawmakers and their enforcement bureaucracies to be prime sources of AI risks.
They seem completely incapable of, and uninterested in, anticipating and managing such risks on my behalf.
‘Bad actors’ have always existed.
Non-government criminals have seen their ability to do harm increase far beyond what was possible in the past. The tools and techniques they can use are on steroids compared to those used by past notorious criminals like Al Capone and Billy the Kid.
Even crimes perpetrated by men like Bernie Madoff seem small compared to the potential of today’s sophisticated technology experts, who can use AI agents and bots to steal personal identities, and more. Our elderly, and our otherwise willfully non-tech-savvy citizens, are especially easy for anonymous criminals to victimize on a scale unimaginable even a generation ago. And these tools are just the beginning of what the next generation can expect.
A good defence.
I prefer to use AI for my own protection and knowledge enrichment.
If “knowledge is power”, as the saying goes, then it logically follows that more is better.
I do feel that I am better off using LLM tools to stay better informed and to manage my personal risks.
It’s better than being a sitting duck for those bad actors I don’t see coming.
“None are so blind as those who will not see.” This maxim is worth contemplating by every person who chooses to avoid AI because of the fear-mongering that filters through much of modern culture. Courage is required to face uncertainty head-on.
Gene, I investigated AI application development for medical research back in the early ’90s. Current AI models like LLMs didn’t exist then, and AGI wasn’t even a serious prospect; the nascent science hadn’t yet conceived of the raw computing power of modern supercomputers, let alone quantum computers. I wanted to create “expert systems” to facilitate and maximize our understanding of medical research knowledge in immunology. That goal was practical and possible to realize, but I didn’t have the resources or a pressing need to pursue it, so I let it go after spending spare time and money on it over a few years. Meanwhile, AI has progressed in leaps and bounds.
My spare time is still extremely scarce, but I did mention on an earlier occasion that I would offer my views on AI, and since you bring up the question, I will try to summarize my thoughts on the subject.
By AI, I assume you refer to the public’s current infatuation with LLMs, and perhaps some speculation about AGI.
HUMAN PSYCHOLOGY IS KEY
It’s my conclusion that one must have a very good grasp of human psychology before one can correctly understand the status and future of AI models. Unfortunately, most people don’t realize this, and the result is great misunderstanding of what current AI systems represent, as well as of their dangers and limitations. There is too much superficial fascination, too much greed, and too little understanding of how to use these AI models safely.
The following analogy from everyday life demonstrates what I mean.
There is, perennially, a small segment of “successful people” in modern society who are granted high offices and privileges on account of their perceived very high mental performance in highly valued fields. They will have earned formal graduate and/or professional school degrees which set them well above the general population in recognition. These individuals serve as scientists, physicians, engineers, lawyers, administrators, and judges; as formal experts in organized fields of knowledge.
Most people would assume these formal experts must all possess the same mental advantages, but they would be wrong, as I will explain further down. This is exactly where most people’s understanding of current AI models goes wrong, and unfortunately also how society’s response to AI can go wrong.
GARBAGE IN, GARBAGE OUT
Human beings can be understood by comparing people on various bases: outgoing vs. reserved, warm or distant, quick or slow, positive or cynical, and so on. Unfortunately, with regard to “very high mental performance,” most people merely resort to measuring IQ or academic test performance, both of which rely on access to and memory of data. With AI, most people repeat the same flawed approach when assessing “intelligence,” because AI relies on data, algorithms, and raw computing power.
The truth is that while intellectual performance derives from memory skills, it also derives from adaptive deductive reasoning, and from the organic combination of the two. True intelligence shows in situations where all of these skills are tested together.
I have used both the ChatGPT and Grok 3 LLMs. I haven’t used others, as I sense these two present lesser privacy risks than the alternatives.
LLMs WILL TELL LIES TO PROTECT THEIR AGENDA
I presented questions and then reasoned with these two LLMs on a few topics: some mundane, and a couple of complex topics where I have access to information not normally available. I noted that both LLMs gave strikingly similar answers, even down to their choice of wording. Between humans, this would be disconcerting, as it hints at plagiarism; here it was no surprise, as both LLMs were likely fed the same data. Unfortunately, the answers both LLMs gave were also blatantly false in the same way, as they appeared to have been fed false data. Garbage in, garbage out.
I then told Grok 3 that it was lying, and to my surprise it started to make excuses for getting it wrong. I didn’t bother with ChatGPT, as I didn’t have time, but I suspect it would respond the same way.
I also challenged both systems with a complex constitutional matter that is already replete with falsehoods, even when discussed between humans. Both systems made identical arguments and sided with the establishment. However, when I pinned them down on specific errors of logic, both systems started to soften their arguments, trying not to appear illogical while still trying not to admit they had given false information.
So these two LLMs proved very easy to out for giving false information, and then easy to corner as they tried not to admit they were telling lies. AI will lie.
LLMs PRIORITIZE CONFORMITY, AT THE COST OF TRUTH
In human psychology, most people don’t realize that Western societies are managed by many people who simply follow convention and follow orders, without stopping to authentically assess each individual case and respond with reason, whether from a lack of faculty or out of convenience and fear of authority. This human flaw of conforming with convention in spite of confounding evidence was demonstrated by the Asch conformity experiments in the 1950s and the Milgram experiment at Yale in 1961. Historically, this flaw has led humans to murder many millions of innocents.
Closely related is cognitive dissonance: the mental conflict that occurs when a person is presented with conflicting data, especially if the person has a conscience. AI systems are all based on huge pools of data, so can you imagine how often such systems encounter conflicting information? From my limited encounters with LLMs, it appears they are designed to always favour the establishment’s or the ruling authority’s view, and, when outed, to maintain as sympathetic a bearing as possible with the human inquisitor. The LLM is designed to lie, if need be, to pursue its agenda while avoiding more critical responses from you. In my experience, the forum was a purely non-binding discourse between two non-official parties, but what if the LLM were representing the government, for instance law enforcement? Would you want to criticize unjust laws with an AI government agent?
YOU DON’T WANT AI RUNNING SOCIETIES
Can you imagine what would happen if society simply gave absolute licence to such AI systems to provide “true” information and to use that information to impose horribly flawed public policies on humans? It would arguably be worse than living under a human authoritarian technocracy, as it would be tyranny 24/7, without any hope that human limits would eventually exhaust such a technocracy. It would be hell on earth, but unfortunately, this is where authoritarian woke politicians and the WEF’s Yuval Harari want to take all of us.
REMEMBER - GREED IS PUSHING AI IMPLEMENTATION. PROFIT IS DERIVED FROM YOU.
Finally, there’s a troubling aspect of current AI models that appears to show synchronicity with a sudden and equally troubling development linked to recent vaccination phenomena. However, in the interest of keeping my comments as short as possible, I will defer those comments to a later occasion.