Sepp on AI
One of my subscribers, Sepp In Canada, offered his conclusions regarding the value of LLMs like Grok and ChatGPT, as well as the future promise of AGI. I share Sepp’s views as expressed below.
In case you missed it.
Sepp In Canada offered the following opinions and insights in the Comments Section of a recent My Life Lens post. He put a lot of thought into them, and in my opinion they are worth sharing with all of my subscribers.
Gene, I investigated AI application development for medical research back in the early 90s. Current AI models like LLMs didn’t exist then, and AGI was pure speculation; the nascent science hadn’t even conceived of the raw computing power of today’s supercomputers, let alone quantum computers. I wanted to create “expert systems” to facilitate and maximize our understanding of medical research knowledge in immunology. That goal was practical and possible to realize, but I didn’t have the resources or pressing need to pursue it, so I let it go after spending spare time and money on it over a few years. Meanwhile, AI has progressed in leaps and bounds.
My spare time is still extremely scarce, but I did mention on an earlier occasion that I would offer my views on AI, and since you bring up the question I will try to summarize my thoughts on the subject.
By AI, I assume you refer to the public’s current infatuation with LLMs, and perhaps some speculation on AGI.
HUMAN PSYCHOLOGY IS KEY
It’s my conclusion that one must have a very good grasp of human psychology before one can correctly understand the status and future of AI models. Unfortunately, most people don’t realize this, and the result is a great misunderstanding of what current AI systems represent, as well as of their dangers and limitations. There is too much superficial fascination, too much greed, and too little understanding of how to use these AI models safely.
The following analogy from everyday life demonstrates what I mean.
There exists perennially a small segment of “successful people” in modern society who are granted high offices and privileges on account of their perceived very high mental performance in highly valued fields. They will have earned formal graduate and/or professional school degrees which set them well above the general population in recognition. These individuals act as scientists, physicians, engineers, lawyers, administrators, and judges: as formal experts in organized fields of knowledge.
Most people would assume these formal experts must all possess the same mental advantages, but they would be wrong, as I will explain further down. This is exactly where most people’s understanding of current AI models goes wrong, and unfortunately also how society’s response to AI can go wrong.
GARBAGE IN, GARBAGE OUT
Human beings can be understood by comparing people on various bases: outgoing vs. reserved, warm or distant, quick or slow, positive or cynical, and so on. Unfortunately, with regard to “very high mental performance”, most people merely resort to measuring IQ or academic test performance, which rely on access to, and memory of, data. With AI, most people repeat the same flawed approach when assessing “intelligence”, because AI relies on data, algorithms, and raw computing power.
The truth is that intellectual performance derives not only from memory but also from adaptive deductive reasoning, and from an organic combination of the two. Situations that test all of these skills are where true intelligence matters.
I have used both the ChatGPT and Grok 3 LLMs. I haven’t used others, as I sense these two present lesser privacy risks.
LLMs WILL TELL LIES TO PROTECT THEIR AGENDA
I presented questions to, and then reasoned with, these two LLMs on a few topics: some mundane, and a couple of complex topics where I have access to information not normally available. I noted that both LLMs gave strikingly similar answers, even to the extent of their choices of wording. Normally this would be disconcerting, as it hints at plagiarism; here, of course, it was likely that both LLMs were fed the same data. Unfortunately, the answers both LLMs gave were also blatantly false in the same way, as they appeared to have been fed false data. Garbage in, garbage out.
I then told Grok 3 that it was lying, and to my surprise it started to make excuses for getting it wrong. I didn’t bother with ChatGPT as I didn’t have time, but I suspect it would respond in the same way.
I also challenged both systems with a complex constitutional matter that is already replete with falsehoods, even when discussed between humans. Both systems took identical arguments and sided with the establishment. However, when I pinned them down on specific errors of logic, both systems started to soften their arguments, trying not to appear illogical while still refusing to admit they had given false information.
So these two LLMs proved very easy to out for giving false information, and then easy to corner into evasion rather than admitting they were telling lies. AI will lie.
LLMs PRIORITIZE CONFORMITY, AT THE COST OF TRUTH
In human psychology, most people don’t realize that Western societies are managed by many people who simply follow convention and follow orders, without stopping to authentically assess each individual case and respond with reason, either from a lack of faculty or out of convenience or fear of authority. This human flaw of conforming with convention in spite of confounding evidence was demonstrated by the Asch conformity experiments in the 1950s and the Milgram experiment at Yale in 1961. This flaw has historically caused humans to murder many millions of innocents.
This behaviour is closely tied to cognitive dissonance, the mental conflict that occurs when a person is presented with conflicting data, especially if the person has a conscience. AI systems are all based on huge pools of data, so can you imagine how often such systems encounter conflicting information?

From my limited encounters with LLMs, it appears they are designed to always favour the establishment’s or the ruling authority’s view, and when outed they are designed to retain, as much as possible, the sympathy of the human inquisitor. The LLM is designed to lie, if need be, to pursue its agenda while avoiding more critical responses from you. In my experience, the forum was a purely non-binding discourse between two non-official parties, but what if the LLM was representing the government, for instance law enforcement? Would you want to criticize unjust laws with an AI government agent?
YOU DON’T WANT AI RUNNING SOCIETIES
Can you imagine what would happen if society simply gave absolute licence to such AI systems to provide “true” information and to use such information to impose horribly flawed public policies on humans? It would arguably be worse than living under an authoritarian technocracy, as it would be tyranny 24/7, without even the hope that the humans running such a technocracy would reach their limits. It would be hell on earth, but unfortunately, this is where authoritarian woke politicians and the WEF/Yuval Harari want to take all of us.
REMEMBER - GREED IS PUSHING AI IMPLEMENTATION. PROFIT IS DERIVED FROM YOU.
Finally, there’s an aspect of current AI models that appears to show synchronicity with a sudden, troubling development linked to recent vaccination phenomena. However, in the interest of keeping my comments as short as possible, I will defer those comments to a later occasion.
My comments.
I fully agree with Sepp’s expressed views above, but with one caveat - no one should accept output from any LLM as the infallible ‘gospel truth’.
Sepp’s ’garbage in, garbage out’ statement has been an expressed concern about all information systems since the early days of mainframe computers. This was made clear to me in IBM’s Basic Systems Training class, which I shared with twenty classmates over five months of classroom instruction in 1977. I was a new IBM Canada employee, and BST graduation was required for me to qualify as a Systems Engineering Representative assigned to the Ontario Government sales and marketing account team later that year. That expression has stood the test of time.
Like Sepp, I am aware that much of the data fed into LLMs is flawed, containing errors and human biases. But not all of it.
There is gold to be found in those mountains of digital data, and unlike the gold miners of the nineteenth century, LLMs like Grok do all of the labour of identifying the ‘nuggets’ of potential value that I seek.
It is up to me to verify the alleged value of those nuggets. In this context, Grok is my data analysis assistant whose work is dependable much of the time, but not 100% of the time.
I use Grok daily for all sorts of reasons.
Before making a purchase, I research my options and have found the output from Grok very useful, but not infallible. I have found Grok to be a more valuable tool than Google for helping me formulate my decisions.
An LLM can also serve as my muse.
When writing, I use Grok to steer me to publications that have direct relevance to my topic of interest. For example, I am currently co-authoring a white paper on Metabolic Syndrome. I have asked Grok many questions about the subject, then requested links to published papers where I can verify Grok’s output. PubMed, Google Scholar, and other online repositories are where I find and retrieve those source documents.
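For readers who want to see what that “ask, then verify” loop looks like in code, here is a minimal sketch in Python. The xAI endpoint URL and the “grok-3” model id are my assumptions, not details from this post; check your own account for the current values. The PubMed lookup uses NCBI’s public E-utilities search API.

```python
# Minimal sketch of the "ask, then verify" research loop described above.
# Assumptions: xAI exposes an OpenAI-compatible chat endpoint at
# https://api.x.ai/v1 and a model id "grok-3" -- adjust both as needed.
import os
import requests
from openai import OpenAI

client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

def ask_for_sources(question: str) -> str:
    """Ask the LLM to answer and to name published papers supporting its answer."""
    resp = client.chat.completions.create(
        model="grok-3",  # assumed model id
        messages=[{
            "role": "user",
            "content": question + "\nList the titles of published papers that support your answer.",
        }],
    )
    return resp.choices[0].message.content

def pubmed_hits(title: str) -> int:
    """Count PubMed records matching a paper title; 0 hints at an invented citation."""
    r = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": f"{title}[Title]", "retmode": "json"},
        timeout=30,
    )
    return int(r.json()["esearchresult"]["count"])

if __name__ == "__main__":
    print(ask_for_sources("What are the diagnostic criteria for metabolic syndrome?"))
    # Then check each cited title against PubMed before trusting it, e.g.:
    # print(pubmed_hits("Harmonizing the Metabolic Syndrome"))
```

The zero-hit case is the whole point: a title that PubMed has never heard of is a strong sign the citation was invented, which is exactly why the human verification step can never be skipped.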
My point is.
AI can be of great value if used in appropriate applications.
It must not, however, be depended upon as a source of ‘ultimate truth’.
Is AI ready for public governance?
I cannot imagine giving complete authority to any AI platform to govern society in its current state of development,
BUT, I can imagine a day when AI will evolve to become a better arbiter of justice in society than many of the flawed judges and judicial institutions we have today. Much of my writing has been critical of those public bureaucracies whose officials currently hold all authority in matters of fairness and justice.
I am no fan of our current systems and processes of “justice”. Bureaucratic, self-serving, biased, captured by special interests, arrogant, arcane, pompous, demeaning, expensive - these are just some of the adjectives that come to mind. These are all features and manifestations of flawed human beings who have been granted, under monopoly conditions, the exclusive authority to dispense justice.
The only thing that weakens a monopoly is competition.
AI can introduce that competition into decision-making processes.
The future of Justice.
I can certainly imagine a day when “AI judges” will be integral to courtroom proceedings.
I eagerly anticipate the day when human judges will rely on AI assistants to aid them (in the most transparent way possible 😐) in their judgements. These highly specialized AI agents will draw upon all case law and constitutional law to identify the relevant nuggets of legal truth which, combined with human wisdom, will inform and advise on judgements.
Of course, the entire court proceedings of every case must be captured and retained for this purpose. These data files will also be used to feed and enrich the legacy pools of legal data for all future cases.
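To make that concrete, here is a purely illustrative sketch, in Python, of what one captured case record might look like. None of these field names comes from any existing court system; they are simply my assumptions about the minimum a future AI legal assistant would need.

```python
# Purely illustrative sketch of a captured court proceeding; all field names
# are assumptions, not drawn from any real court-records system.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class Utterance:
    speaker: str    # e.g. "judge", "defence_counsel", "witness_1"
    timestamp: str  # ISO-8601 time within the proceeding
    text: str       # verbatim transcript of what was said

@dataclass
class CaseRecord:
    case_id: str
    jurisdiction: str
    statutes_cited: list[str] = field(default_factory=list)
    precedents_cited: list[str] = field(default_factory=list)
    transcript: list[Utterance] = field(default_factory=list)
    judgement: str = ""  # the final written judgement

    def to_json(self) -> str:
        """Serialize the full record for the archival pool of legal data."""
        return json.dumps(asdict(self), indent=2)
```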
Do I anticipate this scenario to be perfect?
No, of course not.
Perfection is beyond reach whenever human beings are involved.
However, progress continues. Mankind has obsessively pursued ways to increase productivity and find solutions to problems.
The inevitable introduction of AI to our systems of justice will be just another manifestation of mankind’s obsession for progress.