
Gene, I investigated AI application development for medical research back in the early 90s. Current AI models like LLMs didn't exist then, and AGI wasn't even a serious prospect, as the nascent science didn't yet conceive of the raw computing power of supercomputers, let alone quantum computers. I wanted to create "expert systems" to facilitate and maximize our understanding of medical research knowledge in immunology. That goal was practical and possible to realize, but I didn't have the resources or a pressing need to pursue it, so I let it go after spending spare time and money on it over a few years. Meanwhile, AI has progressed in leaps and bounds.

My spare time is still extremely scarce but I did mention on an earlier occasion that I would offer my views of AI, and since you bring up the question I will try to summarize my thoughts on the subject.

By AI, I assume you refer to the public's current infatuation with LLMs, and perhaps some speculation about AGI.

HUMAN PSYCHOLOGY IS KEY

It's my conclusion that one must have a very good grasp of human psychology before one can correctly understand the status and future of AI models. Unfortunately, most people don't realize this, and the result is great misunderstanding of what current AI systems represent, as well as of their dangers and limitations. There is too much superficial fascination, too much greed, and a lack of understanding of how to safely use these AI models.

The following analogy, drawn from everyday life, will demonstrate what I mean.

There exists, perennially, a small segment of "successful people" in modern society who are bestowed high offices and privileges on account of their perceived very high mental performance in highly valued fields. They will have earned formal graduate and/or professional school degrees which set them well above the general population in recognition. These individuals act as scientists, physicians, engineers, lawyers, administrators, judges; as formal experts in organized fields of knowledge.

Most people would assume these formal experts must all possess the same mental advantages, but they would be wrong, as I will explain further down. This is exactly where most people's understanding of current AI models goes wrong, and unfortunately also how society's response to AI can go wrong.

GARBAGE IN, GARBAGE OUT

Human beings can be understood by comparing people along various dimensions: outgoing vs. reserved, warm or distant, quick or slow, positive or cynical, and so on. Unfortunately, with regard to "very high mental performance", most people merely resort to measuring IQ or academic test performance, both of which rely on access to and memory of data. With AI, most people repeat the same flawed approach when assessing "intelligence", because AI relies on data, algorithms, and raw computing power.

The truth is that while intellectual performance derives partly from memory, it also derives from adaptive deductive reasoning, and from the organic combination of the two. Situations where all of these abilities are tested together are where true intelligence matters.

I have used both the ChatGPT and Grok 3 LLMs. I haven't used other LLMs, as I sense these two present lesser privacy risks than the others.

LLMs WILL TELL LIES TO PROTECT THEIR AGENDA

I presented questions and then reasoned with these two LLMs on a few topics: some mundane ones and a couple of complex topics where I have access to information not normally available. I noted both LLMs gave strikingly similar answers, even down to their choices of wording. This would normally be disconcerting, as it hints at plagiarism; but on reflection it wasn't surprising, since both LLMs were likely fed the same data. Unfortunately, the answers both LLMs gave were also blatantly false in the same way, as they appeared to have been fed false data. Garbage in, garbage out.

I then told Grok 3 that it was lying, and to my surprise it started making excuses for getting it wrong. I didn't bother with ChatGPT, as I didn't have time, but I suspect it would have responded the same way.

I also challenged both systems with a complex constitutional matter that is already replete with falsehoods, even when discussed between humans. Both systems took identical arguments and sided with the establishment. However, when I pinned them down on specific errors of logic, both systems started to soften their arguments, trying not to appear illogical while still trying not to admit they had given false information.

So these two LLMs proved very easy to catch giving false information, and easy to corner into trying not to admit they were telling lies. AI will lie.

LLMs PRIORITIZE CONFORMITY, AT THE COST OF TRUTH

In human psychology, most people don't realize western societies are managed by many people who simply follow convention and follow orders, without stopping to authentically assess each individual case and respond with reason, either from a lack of faculty or out of convenience or fear of authority. This human flaw of conforming to convention in spite of confounding evidence was demonstrated by the Asch conformity experiments in the 1950s and the Milgram experiment at Yale in 1961. This flaw has historically led humans to murder many millions of innocents.

This behaviour is bound up with cognitive dissonance: the mental conflict that occurs when a person is presented with conflicting data, especially if the person has a conscience. AI systems are all based on huge pools of data, so can you imagine how often such systems encounter conflicting information? From my limited encounters with LLMs, it appears they are designed to always favour the establishment's or the ruling authority's view; and when outed, they are designed to maintain, as much as possible, a sympathetic bearing toward the human inquisitor. The LLM is designed to lie, if need be, to pursue its agenda while avoiding more critical responses from you. In my experience, the forum was a purely non-binding discourse between two non-official parties; but what if the LLM were representing the government, for instance law enforcement? Would you want to criticize unjust laws with an AI government agent?

YOU DON’T WANT AI RUNNING SOCIETIES

Can you imagine what would happen if society simply gave absolute licence to such AI systems to provide "true" information and use that information to impose horribly flawed public policies on humans? It would arguably be worse than living under an authoritarian technocracy, as it would be tyranny 24/7, with none of the human limitations that eventually wear down such a technocracy. It would be hell on earth, but unfortunately, this is where authoritarian woke politicians and the WEF's Yuval Harari want to take all of us.

REMEMBER: GREED IS PUSHING AI IMPLEMENTATION. PROFIT IS DERIVED FROM YOU.

Finally, there's a troubling aspect of current AI models that appears synchronous with a sudden, troubling development linked to recent vaccination phenomena. However, in the interest of keeping my comments as short as possible, I will defer those comments to a later occasion.