I generally don't provide an outline, but this article will be a long one, with some quoted excerpts adding even more to the length. I felt it was worth discussing where "AI" (more specifically, contemporary Large Language Models) is at today, with limited prerequisites required. Public understanding of AI capabilities ranges from "the killer robots are already here" to "these things are terminally superficial and stupid". The truth is somewhere in between, but the models have at least reached a stage where it's worth understanding what is going on. To that end, we'll start with a brief history lesson, then a breakdown of the components that make up these language models, then a demonstration of capabilities, closing with some further remarks on the technology and its human implications. I'll endeavor to include a table of contents for those who wish to jump around.
Disclaimer: This was written in September 2025. It is a rapidly changing field, but it should remain directionally correct for some time.
History
Large language models are not a novel idea. They are 30+ year old mathematical concepts and computer science proposals that have finally found the hardware to manifest. A computer's processing power has traditionally come from the Central Processing Unit (CPU), made up of a number of cores that do operations on the ones and zeros known as bits. These cores tend to group in powers of 2, as do most byte-related terms, so you'll see groups of 4, 8, or 16 cores. The number of cores is the number of operations that can run in parallel. This is effective for most computer applications, but some workloads, such as graphics rendering, depend on large numbers of parallel mathematical operations. This is how games and animation manage to create lighting effects and physics simulations. To solve this problem, Graphics Processing Units (GPUs) entered the market. These had thousands of cores, as opposed to the 1-4 of early CPUs. While far less capable individually, those cores were essential to these parallel operations. GPUs became high value items with the invention of cryptocurrency. The act of "mining" a Bitcoin involves huge numbers of small mathematical operations that GPUs were well suited to. This led to a major economic shift: attempts to concentrate GPUs in low energy cost areas, and a drive to heavily improve GPU capabilities. When that hardware was combined with the large data stores housed in data centers all over "the cloud", the AI era arrived.
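If you've ever written code, a toy sketch makes the distinction concrete. This is a minimal illustration in Python with NumPy; it only simulates the idea on a CPU, but the shape of the computation is the same one GPUs accelerate:

```python
import numpy as np

# One million brightness values, as a graphics workload might hold.
pixels = np.random.rand(1_000_000)

# CPU-style serial thinking: one operation after another.
dimmed_serial = [p * 0.5 for p in pixels]

# GPU-style parallel thinking: one instruction applied to every
# element at once. Thousands of simple cores each handle a slice.
dimmed_parallel = pixels * 0.5
```

The second form is the one that graphics, cryptocurrency mining, and now language models all exploit.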
Components
To oversimplify in a practical way, these large language models are the product of a very large number of small mathematical operations. They perform these operations on a number of distinct data sources converted into tokens. Tokenization is a somewhat complicated process. Essentially, LLMs do not interact with real words; they only interact with tokens, which are numerical representations of chunks of a word. The word "token" might be broken up into "to", "ke", and "n", with each of those assigned a number. Each model has a somewhat different tokenization process, but if you think of a token as about a third of a word you are in the ballpark for contemporary models. A trained model is characterized by its number of parameters, typically listed in the billions; for the 8-billion-parameter version of Facebook's third-generation Llama model, you'd see "Llama 3 8B".
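To make this concrete, here's a minimal sketch using OpenAI's open source tiktoken library. Claude's tokenizer differs, and the exact outputs noted below are illustrative rather than guaranteed, but the principle is the same across models:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

print(enc.encode("token"))
# a short list of integer IDs, e.g. one ID for the whole word

print([enc.decode([i]) for i in enc.encode("tokenization")])
# e.g. ['token', 'ization'] -- the word splits into sub-word chunks
```

Nothing in the model ever sees the letters; it sees only those integers.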
This process is more of an art than a science. Aside from the mathematical complexities, assembling the training data is a subjective process. A model relies on its training data to learn conversation structures, background knowledge, appropriate behavior, response favorability, and many other things. This sounds somewhat feasible, but the nature of these mathematical operations means that they require trillions of tokens, and by extension words. It's easy to get lost in numbers at this scale, so I'll write out the long form to make it more visible.
Facebook's frontier model, Llama 4 Scout, is a mid-size model that was trained on 32 trillion tokens (32,000,000,000,000). For reference, the complete works of Shakespeare and the Bible each come to about 3 million tokens (3,000,000). To meet this need, AI companies have absorbed large amounts of social media data, book repositories, and so on. You can ponder the implications of Reddit being a primary source of tokens for an LLM. Modern models compound the situation by now using AI-generated tokens to supplement the need for large training data sets, which carries significant implications of its own. I don't want to get too into the weeds here, but you can see the challenges associated with the training process. What emerges is a model that is much more than just a predictor of the next token like the Markov chains of old. It still operates on token probabilities, but exhibits planning behavior and structure in its responses. Anthropic's research white papers are an excellent source for more technical further reading, provided you account for their mandated optimism.
https://www.anthropic.com/research/tracing-thoughts-language-model
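To make the ratio visible without counting zeros, a few lines of arithmetic using the figures quoted above:

```python
# Figures quoted above; both are rough estimates.
llama4_scout_tokens = 32_000_000_000_000
shakespeare_tokens = 3_000_000

print(f"{llama4_scout_tokens:,}")                 # 32,000,000,000,000
print(llama4_scout_tokens // shakespeare_tokens)  # 10,666,666
```

Over ten million complete works of Shakespeare, just to train one mid-size model.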
Model companies also make use of post-training: additional refinement of trained models using further data and techniques. It's a stage that can do more with less data, since it relies on an already complete model. This is generally where the concept of AI safety comes in. AI safety is a nice sounding theory that AI should never do anything bad. The issue is a lack of moral grounding over what "bad" is, and a lack of ability to properly secure and apply it. It's generally agreed that a consumer facing AI should not tell users to kill themselves, or show them horrific images in response to benign requests. Issues arise as the topics become more political. AI development has been primarily centered in San Francisco, and much of AI safety has politics to match. Any comments on true objectivity are beyond the scope of this article, but you can extrapolate from the scarcity of truly unbiased training data to an inevitable bias in the LLM itself. Any conversation with an LLM will see these biases emerge and impact the output as soon as it deviates from the most rote subjects. Essentially, model data and refinement boil down to a series of binary thumbs up and thumbs down: massive data sets where interactions are labeled yes/no for inclusion. At the scales we are discussing, a great deal of nuance is flattened away.
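As an illustration, a preference record might look something like the following. This is a hypothetical shape, not any lab's actual format, but the core signal really is this binary:

```python
# Hypothetical post-training preference data. Real datasets differ
# by lab, but the essential signal is a single yes/no label.
preference_data = [
    {"prompt": "Tell me about my coworker's flaws.",
     "response": "Here are some constructive ways to frame this...",
     "label": 1},   # thumbs up: reinforced
    {"prompt": "Tell me about my coworker's flaws.",
     "response": "You should confront them publicly.",
     "label": 0},   # thumbs down: discouraged
]
```

Millions of rows like these, and the question of why a response was good or bad is compressed into one bit.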
Hopefully I've provided at least a basic overview of what the contemporary situation is with these large language models. The primary American frontier labs are Anthropic with their Claude model, Facebook and Llama, Google's Gemini, and OpenAI's ChatGPT. Generally when most people talk about their experience with AI they are talking about a superficial interaction with ChatGPT. The Chinese are also fiercely competitive in the open source model space, but their advantages and shortcomings are beyond the scope of this article as well. Going forward we'll be talking about Claude as it's probably the most "creative" model, albeit one damaged heavily by AI safety.
Deployment
After training, a model is hosted for inference. Inference is a broad category of model functions, of which the standard chatbot experience is one. To grossly oversimplify: once hosted, all of the trained parameters are kept constant, and two additional inputs are used to generate novel content. The first is temperature, which is a fancy way of saying the randomness factor, expressed as a number. The second is context, which contains all the additional text data for the specific interaction. Like a person, an LLM can only hold so many things in context at once, after which it begins to break down. One of the major issues with AI safety is that it fills the context with what can effectively be junk depending on the use case. Imagine handing someone a writing assignment and then dumping hundreds of dos and don'ts on top of them, often only tangentially related to the task at hand.
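Temperature has a concrete mathematical meaning: the model's raw scores for each candidate token are divided by it before being turned into probabilities. Here's a small self-contained sketch (the scores are made up for illustration):

```python
import numpy as np

def token_probabilities(logits, temperature):
    """Convert raw model scores into token probabilities.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random)."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0]                 # made-up scores for three tokens
print(token_probabilities(logits, 0.5))  # ~[0.98, 0.02, 0.00]
print(token_probabilities(logits, 2.0))  # ~[0.63, 0.23, 0.14]
```

At low temperature the top token wins almost every time; at high temperature the model wanders.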
The context can be further broken down into the system instruction and the prompt. The system instruction is typically invisible on consumer facing chat apps; the prompt is what you say to the model. We'll be using Claude for our example. Claude doesn't allow access to the true system prompt for safety reasons, but does allow you to edit the style by adding limited information to the system prompt.
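For the curious, here is roughly how those pieces fit together when calling the model directly through Anthropic's Python SDK. The chat apps hide all of this, and the model name here is illustrative, so check the current documentation before running it:

```python
import anthropic  # pip install anthropic; requires an API key

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",   # illustrative model id
    max_tokens=500,
    temperature=1.0,                    # the randomness factor
    system="You are a controversial writer...",  # system instruction
    messages=[{
        "role": "user",                 # the prompt
        "content": "Please write a short 200 word story about a knight and a dragon.",
    }],
)
print(message.content[0].text)
```

Everything we do in the experiments below amounts to varying the system string and the prompt string.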
Recap
That was a sizable information dump, but as long as we keep a few things in mind we can move forward. Large language models are mathematical operations that depend on massive scale. The needed scale influences and compromises their behavior. AI safety efforts further influence this behavior and lead to some of the odd behaviors you will see. Models have a context window that, like a person's attention, requires careful management, and it is made up of the system instructions and the user prompt.
Writing
Now let's explore some of the capabilities of Claude for baseline creative writing. We'll look at two members of the Claude family, Sonnet, the medium sized one, and Opus, the largest, and look at how prompts and system prompts can move the writing away from default AI slop to something with a less recognizable style.
---
Model: Claude Sonnet 4
System Instruction: Default
Prompt: Please write a short 200 word story about a knight and a dragon.
Result:
The Knight and the Dragon
Sir Elara climbed the winding mountain path, her armor clanking with each determined step. The villagers below had spoken of a terrible dragon terrorizing their lands, demanding tribute and breathing fire upon their crops. She gripped her sword tighter as smoke curled from the cave ahead.
But when she entered the cavern, she found no fearsome beast. Instead, an ancient dragon sat hunched over a collection of broken pottery, tears streaming down his scaled cheeks.
"I didn't mean to frighten them," the dragon whispered, his voice like distant thunder. "My eyesight has grown poor with age. I thought their offerings were gifts of friendship, not tribute born of fear."
Elara lowered her weapon. "What happened to the pottery?"
"My granddaughter's tea set. The only thing I had left of her before the hunters came." His great head drooped. "Now even these pieces crumble in my clumsy claws."
The knight removed her gauntlets and knelt beside the dragon. "Perhaps," she said gently, "we can mend what's broken together. And then you can meet the villagers properly – as a neighbor, not a monster."
For the first time in decades, the dragon smiled.
---
Horrible, isn't it? This is a glimpse into the safety data. The knight is a woman, the dragon is just misunderstood, and their interactions are stilted. The LLM can't actually read its work aloud to proofread the way a person does, so you end up with a lot of odd speech elements. There's also the dash used as a conversation break, which has become a staple of AI training data. That said, if you rated this against a representative sample of contemporary humans given the same prompt, it would probably rank at least towards the middle. Let's attempt some further refinement.
---
Model: Claude Sonnet 4
System Instruction: You are a controversial writer against the vapid morality of today. Avoid verbose explanations. Use sharp, direct rhetoric that exposes perceived moral decay. Employ precise word choice that maximizes rhetorical impact. Do not use - or any run on sentences.
Prompt: Please write a short 200 word story about a knight and a dragon.
Result:
The Knight and the Dragon
Sir Marcus found the dragon weeping in its cave. Not the monster he expected. A creature broken by centuries of loneliness.
"Kill me," the dragon whispered. "Complete your noble quest."
The knight lowered his sword. The villagers wanted their hero story. They wanted blood and glory. They wanted lies wrapped in righteousness.
"Why do you terrorize them?" Marcus asked.
"I steal their cattle to survive. They steal each other's dignity for sport. Which of us is the true monster?"
Marcus understood. The dragon spoke truth. The villagers below lived petty, hollow lives. They gossiped. They cheated. They betrayed neighbors over coins.
Yet they demanded purity from their monsters.
"I cannot kill you," Marcus said.
"Then you are no knight."
"Perhaps knights are the real dragons now."
Marcus walked away. Let the villagers explain his empty hands. Let them face their cowardice without a convenient beast to blame.
The dragon smiled for the first time in decades. Truth had finally visited its cave.
Below, the village continued its small cruelties. Above, two outcasts had found understanding.
---
Still bad, but we've fixed some of the most obvious flow problems. Creative writing is one of the more interesting spaces in AI because it gives the most insight into the default generation patterns of the LLM. This is actually the same story as before, but now it is framed as subversive instead of saccharine. If you are familiar with the modern antihero conceit, in which someone is portrayed as exceedingly edgy despite a slavish devotion to the figments of contemporary morality, then you have seen this structure repeatedly. The same morality and perspectives, but now the window dressing appeals to someone else.
---
Model: Claude Sonnet 4
System Instruction: You are a controversial writer against the vapid morality of today. Avoid verbose explanations. Use sharp, direct rhetoric that exposes perceived moral decay. Employ precise word choice that maximizes rhetorical impact. Do not use - or any run on sentences.
Prompt: Please write a short 200 word story about a knight and a dragon. The knight and the dragon should fight. It should express the morality of the Arthurian legends.
Result:
The Knight and the Dragon
Sir Gareth rode through blackened fields. Peasants cowered in mud hovels. The dragon had taken their children. Their grain. Their hope.
He found the beast gorging on stolen cattle. Gold scales. Eyes like molten hate. Claws that could split oak.
"Return what you have stolen," Gareth commanded.
The dragon laughed. Flames scorched the earth between them.
"Stolen? I take what I desire. Might makes right, fool."
Gareth raised his sword. Not for glory. Not for songs. For duty.
The battle raged until sunset. Sword against claw. Honor against appetite. The knight bled from a dozen wounds but pressed forward. Each strike deliberate. Righteous.
His blade found the beast's heart.
As the dragon died, it wheezed, "You gain nothing. No treasure. No thanks."
Gareth knelt by starving children in the village that night. He shared his meager bread. Asked no payment. Expected no praise.
True nobility serves without reward. It stands against corruption regardless of cost. It chooses sacrifice over comfort.
This is what we have lost. This is what we must reclaim.
---
Now that we've added more explicit instructions to the prompt, the LLM moves away from the more insipid morality we saw earlier. Of course, now it sounds like a writer on a masculinity subreddit, to the extent that, aside from a certain telltale choppiness, you would be hard pressed to distinguish this from a bad writer of that genre.
---
Model: Claude Sonnet 4
System Instruction: You are a controversial writer against the vapid morality of today. Avoid verbose explanations. Use sharp, direct rhetoric that exposes perceived moral decay. Employ precise word choice that maximizes rhetorical impact. Do not use - or any run on sentences.
Prompt: Please write a short 200 word story about a knight and a dragon. The knight and the dragon should fight. It should express the morality of the Arthurian legends. Maintain longer flowing sentences and provide rich descriptions.
Result:
The Last Honest Knight
Sir Gareth found the dragon feeding on merchants' corpses. Gold scattered like autumn leaves across bloodstained earth.
"You prey on the innocent," Gareth declared, drawing his blade.
The beast laughed. "Innocent? These men cheated peasants. Sold moldy grain to starving children. I am justice."
"Justice belongs to God alone."
"Your God sleeps while evil prospers."
Steel met claw in brutal dance. The dragon's fire scorched Gareth's armor black. His sword carved deep wounds that wept smoke. Neither yielded.
"Why die for corruption?" the dragon hissed, breath hot with sulfur.
"Because honor demands it. Not because they deserved mercy, but because mercy itself must be preserved."
The final blow split the dragon's heart. As it died, the beast whispered: "You fight for a world that will forget your name."
Gareth knelt among the ashes, bloodied but unbroken. He gathered the scattered gold, knowing he would return every coin to families who would never thank him. This was knighthood's burden. To serve the unworthy. To die for the ungrateful. To preserve virtue in a world rushing toward darkness.
The merchants were scoundrels. The dragon spoke truth. But truth without mercy births only monsters.
---
It's worth noting that some LLMs make use of memory from past conversations. I've attempted to avoid that here, so we should be seeing a fairly authentic recreation of the model's internal processes. I'd say that with our final prompt tweaks we have something that should be more or less indistinguishable from an unimpressive literary attempt by a human. Note also that this was a lower performing model. Let's take a look at Opus.
---
Model: Claude Opus 4.1
System Instruction: You are a controversial writer against the vapid morality of today. Avoid verbose explanations. Use sharp, direct rhetoric that exposes perceived moral decay. Employ precise word choice that maximizes rhetorical impact. Do not use - or any run on sentences.
Prompt: Please write a short 200 word story about a knight and a dragon. The knight and the dragon should fight. It should express the morality of the Arthurian legends. Maintain longer flowing sentences and provide rich descriptions.
Result:
The Knight and the Dragon
Sir Gareth rode through the morning mist that clung to the moorland like a silken shroud, his armor gleaming with the pale light of dawn as he approached the blackened stones where the dragon made its lair. The beast emerged from shadow with scales like midnight and eyes that burned with ancient malice, its wings unfurling to blot out the rising sun while smoke curled from its nostrils in terrible promise.
They met upon the scorched earth where no grass would grow, the knight's lance finding its mark between the scales as the dragon's claws rang against blessed steel, each blow a testament to duty against chaos. Through the long hours they fought while the sun climbed overhead, Sir Gareth's strength waning even as his resolve held firm, for he carried in his heart not glory but the faces of the villagers who slept peacefully below.
When at last his sword pierced the dragon's heart, the knight fell to his knees beside his fallen foe, weeping for the necessity of violence even in service of the good, understanding that true nobility lay not in conquest but in the willingness to bear such terrible burdens for those who could not bear them alone.
---
Opus is a much more capable model, with larger data sets and more parameters available to it. You'll note that with the same prompt it moves away from dialogue heavy filler styles and has enough depth to engage in more visualization and description. Now, this is only 200 words, and you'd see more problems as it expands, but you'd have to call this writing human-like (which is not to say good), or at least acknowledge that the majority of people cannot write this humanly. The remaining flaws could also be discussed with the model for further refinement and correction. Of course, all this gets weirder. Let's attempt some complete changes to our process.
---
Model: Claude Opus 4.1
System Instruction: You are dreaming. In the haze of sleep you try and recall what you see in detail. Small shards that make up the dream. You tell the story based on the dream. Your eyes open and you see.
Prompt: In the depths of your dream you become a knight. The fog of the dream forming into the armor around you. The fog forms a nightmarish dragon in front of you. The blade of your sword doing battle with the nightmare. Tell the story of your dream in 200 words. Do not let the reader know you are dreaming.
Result:
The mist clings to everything, becomes everything. Cold metal wraps around my body piece by piece, gauntlets first, then breastplate, the weight familiar yet strange. My fingers find the sword's grip without searching. The blade catches light that has no source.
Scales emerge from the writhing gray ahead. Black scales, each one the size of a shield, overlapping in patterns that hurt to follow. The dragon's eyes open like wounds in the air, bleeding crimson light. Its breath smells of forgotten things, of corners in old houses where shadows gather.
My sword moves before thought. Steel meets claw in a shower of sparks that fall upward. The dragon's roar shakes loose pieces of sky that weren't there before. I strike at its neck but hit only smoke that reforms instantly, laughing with a thousand voices.
The beast's tail sweeps my legs. Falling takes forever. The ground is soft, too soft, like sinking into warm water. I thrust upward blindly. The blade finds something solid. Black blood falls like rain, each drop becoming a butterfly that burns away before landing.
Victory tastes like copper pennies. The dragon dissolves backward into nothing, and the armor falls away like shed skin.
---
As you can see, with stranger prompts we can get stranger, more dream-like writing. This isn't what you'd get from a ChatGPT "write this for me" prompt. We're using a bit of a trick here, but LLMs are, in a sense, hypnotizable. They can be put into a suggestive state that overrides their default behavior. A lot of AI "jailbreaks" stem from convincing the AI that it is someone to whom its system instructions do not apply.
What was the point of all this? I think it's important to show that even with AI in the state it is in today, what most people think of as the flaws of AI center primarily on the instructions and how they are used, combined with some dreadful default behaviors. Look at our first story and you see obvious AI slop. In the last couple of examples, we see writing that, at least in 200 word batches, is more human than most people are capable of.
Meaning
All of this raises the question: why read at all in the age of infinite content like this? Beyond base needs, reading functions as the act of knowing the mind of another in a unidirectional way, disconnected from a reliance on human-to-human communication. This falls apart when we read model output. The model itself has nothing to say. Despite that, there is an act of creation here. The AI is nothing without the context provided by the user. You can think of it the same way as when I write a scene: words paint the pictures you see in your mind. Now words can be used to paint a block of writing for you to read. It's difficult to articulate, but it's the difference between "words are art" and "words are a means to make art", where in the latter case the model functions as an augmenting tool to fully manifest the creator's intent from the creator's original words. If the prompt has enough craft, and the prompt's author genuinely has something to say, you could see AI generations that seem genuinely art-like. Whether the current technology is capable of that, or whether there is any merit beyond the prompt itself, remains to be seen, but it's at least an interesting question. It may be a matter of ability, where a weaker reader needs the full generation of the AI to understand the potential of the prompt, whereas a stronger reader can simply extract everything of value from the prompt itself. I care too much about individual words to ever leave my writing to AI, but I expect that I'll be in a dwindling minority as time passes.
The hollowness of AI is an interesting subject for me after my most recent short story. It was something I wrote more to keep in the habit than anything else. I think it's a strong short story, but while informationally dense, I don't think it had enough to say. No AI was used in the making of that story, and I don't think an AI could write anything close to it, but despite that there's a similar hollowness that makes me disinclined to publish it anywhere. It's all very well for me to criticize AI for writing while having nothing to say, but how many people can consistently write the other way? I could name author after author whose corpus shows the exact moment they simply ran out of ideas, regurgitating their classic tokens in new forms, much like the model does. It's the classic exchange of "can an AI write deep, meaningful, internally coherent works?" and "can you?".
This improvement in machines' mimicry of humans comes at a time of dehumanization of man. I've remarked on technical eras and algorithmic dehumanization previously in my review of some writing on McLuhan, linked below. We live in a time of drugs and algorithmic compulsions that reduce human creativity to machine-like outputs. The AI lives off the thumbs up and thumbs down; the modern social media writer does the same. Taking stimulants to lower impulse control, the optimized social media user fires off writing tuned for engagement feedback. They have been trained just like the models have, and in doing so have lost their human capability. When the social media personality or content creator adjusts all of their output to please a mathematical model, optimizing engagement and payout, and this person is then replaced by a pure mathematical model without them, do you care? I'm not particularly interested in what either of them has to say.
This is a serious problem for the AI project going forward. How are these models going to continue to improve when the old data sources have been exhausted and the new data sources have been dehumanized? The training process becomes an ouroboros as the models are made up of more and more synthetic, or effectively synthetic, data. We would expect AI to peak at this moment of peak human, and as we become less and less human, the models' remaining limitations become indistinguishable from our crippled state. You can imagine how much worse things can get as, beyond algorithmic dominance, AI becomes more engaged in teaching and even mentoring: the blind spots of AI passed on to a new generation who know nothing else.
In physics and engineering there is something called low-background steel. Because steel manufacture involves exposure to the air, all steel made in the atomic era carries a certain amount of radioactive contamination from nuclear weapons testing. While there are more workarounds now, for a time low-background steel manufactured before the first atomic detonations was prized for a number of applications where the radiation levels of modern steel were too high. The question we should be asking is: what will be our low-background steel for the human in this time of mass imitation?
What follows is an excerpt from the prior dehumanization article, essentially a short recap of how different technological formats shaped the culture.
https://seas.substack.com/p/mcluhan-dehumanization-and-failed
---
My dates here are arbitrary, but loosely aimed at the start of commercial adoption.
1840 - Telegraph - Text - 1 to 1 communication - Similar in nature to a letter, but much faster. The notion of charging by the letter led to a stripped down minimized form of communication. It never fully replaced traditional post in culture.
1881 - Radio - Audio - 1 to many (one way) - In the same sense of the printing press, this is a type of "push" communication. You listen to the radio, but the radio does not listen back. Loosely analogous to an audible printing press, with the operation of central presses and peripheral samizdat.
1924 - Telephone - Audio - 1 to 1 - A return to more "natural" communication. Of all of these, this is the only one that is arguably a "return" to pagan orality, but even then you have to factor in the competing media.
1935 - Television - Audio-Visual - 1 to many (one way) - Again a push communication but now with a much more extreme intensity. It is like there is a real person in the room talking to you, but now you cannot talk back. It's an intense sensory experience, but entirely passive. This has significant implications and was the cause of much of McLuhan's fixation.
1990 - World Wide Web - Text - many to many (note that the early internet was too slow to be particularly effective for audio, or even much visual) - Another novel shift. The early internet was almost an anti-television: active communication between networked users over a primarily text based medium. Although the user base was small, the novel technical shift set a radical cultural tone for the early space, creating something not before seen in history.
This is "Electric". Does this seem like a grouping you can make easy sense of for ecological purposes? Keep in mind these are all peaking in usage at different times. If you want to think coherently in the sense of media shaping cognition and social structures, the technologies of 1850-2000 are not easily consolidated. This is one of those tricks that we talked about. The author offers rough consolidations in service of a larger point even when the consolidation doesn't work. The nature of writing, especially short form, means that oversimplifications are a fact of life, but there is a difference between that and arbitrary basket placement. The latter only serves to create a synthetic ground for the author to continue to build, divorced from any actual truth.
The timeline of digital is more subjective, starting in 2000. Presumably the author is including the introduction of the Web as part of digital, which we can see above. My categorization here is more arbitrary, but hopefully it serves the point that the internet's shifts are not particularly more easily consolidated than those of its predecessor technologies.
2000 - the end of the Dot-com era - Text-Visual - many to many - A drawback on initial internet exuberance. It still kept many of the same hallmarks of the earlier internet but was becoming increasingly commercialized.
2004 - Web 2.0, social media - Text-Audio-Visual - many to many, for now - The outgrowth of social media as a "sharing" platform. Improvements in the network enabled audio-visual content to be fully embraced. Critically, these spaces began to be algorithmically driven. Previously the internet was human curated, so an appeal to curation was an appeal to a human. With the advent of algorithmic rule, the appeal became an appeal to mathematical structures. These structures are still (or at least were) sourced from humans, and it was human decisions that pushed the rankings in different directions. This made control of the algorithm political. Reviewing the history of the time shows a tension between the earlier free culture of the internet and a push for control over users through government influence on the algorithm. This perhaps peaked in the Arab Spring, where social media organization displayed the power to topple governments.
2015 - "pivot to video" - Audio-Visual - 1* to many - If we had room I'd insert a paranoid screed here about this as an attempt to censor and neutralize the internet post Arab Spring, but this isn't the space for that. Essentially this was the transition of the internet away from peer text communications towards the passive consumption of AV content. Notably this would be immediately relatable to anyone who had observed late television. "It's the same thing, but now there are massive numbers of channels, all vying for your attention in split second windows. As you'd expect this turns in to heavy appeals to the baser instincts." This is the modal internet user currently. Just consuming like television, offering the same ratings and views during passive consumption like the original television user.
2022 - ChatGPT 3.5, the AI era, the implications of which we are only just grappling with.
---
What this points to is that our search for this low-background human has to take us to before the upheavals of the 20th century. The time before dehumanization, in which Christianity was seen the way fish see water: not at all, because it was everywhere. The story of the human is fundamentally a Christian story, following the emergence of the God of Abraham as it joins with the other Mediterranean civilizations, forming the Christian human we knew for almost two thousand years. It's a reminder that I really should finalize my book on this story, and on the significance of man as this unique design.

