ALEXA CHIRNOAGA
CREATIVE TECH ✧ INNOVATION ✧ VISUAL NARRATIVES


Can LLMs Truly Mirror Reality?



Published on: April 4th, 2024.


It's a question that lingers in my mind. Large Language Models (LLMs) hold a fascinating, yet potentially deceptive, promise for understanding the world and ourselves. Trained on enormous datasets, they analyze patterns and create narratives at a scale and intricacy far beyond what the human mind could manage alone. But can they ever truly give us an objective window into reality, or are they just composing elaborate stories from what they've been fed?


At first glance, LLMs do appear to be mirrors, reflecting the world just as it is. But dig a little deeper, and you realize they're more like funhouse mirrors. They can stretch and warp our ideas about the future, technology, even reality itself... but only with the building blocks they've already got. They can't invent something truly new or reach for something fundamentally 'real' beyond their existing knowledge.


Things get even more interesting if you consider how LLMs can (potentially) shape reality, not just reflect it. Just as the stories we tell ourselves influence our actions, LLMs are powerful storytellers in their own right. Don Ihde, a philosopher of science and technology, would argue that they act as hermeneutic instruments: interpretive tools that shape how we understand the world by presenting it through a particular lens. Yet, as with any storyteller, bias plays a role. The narratives used to train them can be riddled with blind spots, leading to outputs that don't always align with the truth.


The challenge is to figure out how to use the power of LLMs while staying aware of their limitations. We need to be critical of their outputs, understand where their data comes from, and recognize where it might fall short. Perhaps their true value isn't in mirroring reality, but in making us question the stories we already tell. By exposing the biases embedded in their data, LLMs can push us towards a more nuanced and honest understanding of the world and ourselves.


LLMs also hold the promise of unlocking insights into the messy, beautiful thing we call the human experience. These interpreters can create complex stories and fictions about our past, present, and where we might be headed. But can we trust them to (always) represent us faithfully?


LLMs inherently mirror a linguistic construction of reality rather than 'real-life' reality. The trick is to see them not as all-knowing oracles, but as creative partners in the cyborgian sense Donna Haraway imagined: the cyborg blurs the boundaries between human and machine, the organic and the artificial. If we see LLMs as collaborators, we open up a whole new space for co-creation.


If we leverage their ability to construct worlds, we have to keep checking where their narratives might miss the mark, while also questioning our own assumptions about how we understand the world. Perhaps this cyborgian approach is where the true magic lies: LLMs not giving us the one right answer or a singular prediction, but pushing us to review the very questions we ask, re-examine what we believe, and gain a more comprehensive understanding of the world.


✧ ✧ ✧