GROKipedia vs Wikipedia ❓
Elon Musk plans to take on the human-controlled Wikipedia platform with an AI-controlled LLM competitor. Wikipedia, he argues, has become biased, shaped by left-leaning, anti-religion influencers😳.
LARRY SANGER comments.
Mr. Sanger co-founded Wikipedia in 2001. He says it has grown progressively biased over the last decade, reflecting the views of roughly 85 of its most influential editors and content contributors, and perhaps other unidentified influencers.
This is a legitimate concern, but can it be avoided❓ After all, human bias has always been present in every medium of information. Our minds absorb those biases, sometimes overtly but more often subliminally, and these influences powerfully shape our cultures and beliefs and nudge our group affinities.
In this short (four-minute) interview, Larry expresses hope that Elon Musk will “get it right” by using a “prompt” interface that filters human bias out of GROKipedia. His pessimism, however, arises from his observation that a leftist bias has been creeping into GROK over the past year🥴.
Every LIFE LENS holds bias.
Each of the eight billion humans who inhabit Earth at any point in time will necessarily perceive “reality” through the lens of their own lifetime of experiences. This unique, deeply personal perspective is inescapable.
Any attempt to understand humanity collectively will necessarily require sampling larger populations of these unique LIFE LENS perspectives. Social scientists and media journalists generally rely on population surveys, data analytics, and statistics to tell their stories about human behaviour.
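To make that concrete, here is a minimal Python sketch with toy numbers (illustrative only): the arithmetic is honest in both cases, but who ends up in the sample decides what the survey “finds”.

```python
import random

# Toy population: 8,000 people, half holding view A, half view B.
population = ["A"] * 4000 + ["B"] * 4000

def survey(sample):
    """Share of respondents holding view A."""
    return sample.count("A") / len(sample)

random.seed(1)

# A representative random sample recovers the true 50/50 split.
representative = random.sample(population, 500)
print(f"random sample: {survey(representative):.0%} hold view A")

# A biased frame: if A-holders respond twice as often as B-holders,
# the estimate skews even though the statistics are computed honestly.
respondents = [p for p in population
               if random.random() < (0.6 if p == "A" else 0.3)]
biased = random.sample(respondents, 500)
print(f"biased sample: {survey(biased):.0%} hold view A")
```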
Now we use LARGE LANGUAGE MODELS (LLMs) to do the same work, hyper-charging those efforts.
Simple LLM prompts now harness the most powerful data collection and aggregation technologies ever devised, and engage genius-level algorithmic “brains” to think through humanity’s most complex puzzles.
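As a hedged illustration of what a “prompt” interface can reveal about bias, here is a minimal Python sketch; `ask_llm` and “policy X” are hypothetical stand-ins, not any vendor's real API:

```python
# ask_llm is a hypothetical placeholder, not any vendor's real API;
# swap in the actual client call for whichever model you use.
def ask_llm(prompt: str) -> str:
    return f"[model answer to: {prompt}]"

# Asking one question under several framings is a cheap probe for
# bias: an even-handed model should answer all three consistently.
framings = [
    "Summarize the strongest arguments for and against policy X.",
    "Explain why policy X is a good idea.",
    "Explain why policy X is a bad idea.",
]
for prompt in framings:
    print(prompt, "->", ask_llm(prompt))
```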
The data must be “pure”❓ Is this even possible⁉️
GROK, and every other LLM, is fed information from an unimaginable number of sources. Research papers, published articles, government documents, corporate websites, and more are among those sources, and every one of them is written by human beings, each with a unique LIFE LENS.
Suppose, for example, that an LLM had been trained only on Roman Catholic literature. Needless to say, its answers would very likely be infused with a Roman Catholic bias.
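The point can be made mechanical. The toy sketch below is not a real LLM, just a unigram word sampler over two invented corpora, but it shows the principle: a model's outputs can only redistribute what its sources put in.

```python
from collections import Counter
import random

# Toy "language model": a unigram sampler whose only knowledge is
# its training corpus. Both corpora below are invented one-liners.
single_source = "grace sacrament scripture grace mass scripture grace".split()
mixed_source = single_source + "market election climate physics novel".split()

def train(corpus):
    """Word frequencies as a crude probability model."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def generate(model, k=5, seed=0):
    """Sample k words in proportion to their training frequency."""
    rng = random.Random(seed)
    words, weights = zip(*model.items())
    return rng.choices(words, weights=weights, k=k)

# The single-source model can only ever echo its single source.
print("single-source:", generate(train(single_source)))
print("mixed-source: ", generate(train(mixed_source)))
```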
AI algorithms also have human authors.
While an increasing amount of computer code is being generated by AI agents, much of it is still written by human beings, each possessing a unique LIFE LENS.
In our Catholic LLM example, what are the chances that the engineer employed to write its algorithms was not a Catholic, or even a Christian?
LLMs provide PERSPECTIVE, not TRUTH.
Critics of LLMs often cite evidence that they lie. Yes, models have “hallucinated” answers to user prompts in the past, but the problem is shrinking as platform creators develop ways to detect and suppress it.
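One such mitigation, sketched loosely below, is self-consistency voting: ask the same question several times and only trust an answer the model repeats. The five sampled answers here are canned for illustration.

```python
from collections import Counter

# Simulated samples of the same question asked five times,
# e.g. "In what year did the Titanic sink?" (correct answer: 1912).
samples = ["1912", "1912", "1915", "1912", "1912"]

# Keep the majority answer only if agreement clears a threshold.
answer, votes = Counter(samples).most_common(1)[0]
confidence = votes / len(samples)
if confidence >= 0.6:
    print(f"answer: {answer} (agreement {confidence:.0%})")
else:
    print("answers disagree; flag for human review")
```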
In my opinion, human bias will never be eliminated from LLMs as long as human beings author the content and the algorithms.
When I use the five LLMs I favour, I always treat the “human element” as the existential blemish that will forever keep them short of human expectations of “perfection”.

