By Tammy Lai-Ming Ho
27 May 2025

On 20 May 2025, a sixteen-year-old boy in Finland stepped into his school in Pirkkala and stabbed three younger girls. Police said he had planned it. He had no friends, and he didn't want any. But he had a manifesto, one allegedly written with the help of ChatGPT. The phrases were familiar: to "do something significant," to make a mark, to be "exciting." The kind of dramatic language a large language model, trained on thousands of manifestos, books, screenplays, and political speeches, might echo back with the poise of fluency but none of the weight of meaning. His violent actions supplied the meaning the fluency lacked.
Just days earlier, on 18 May, across the Atlantic, another rupture unfolded. The Chicago Sun-Times published a summer reading list filled with books that didn't exist. The Longest Day by Rumaan Alam. The Last Algorithm by Andy Weir. Titles that readers might search for, long for. But they were fabrications. A freelance contributor had used ChatGPT to generate the list, and it was published as syndicated content without human verification. The list passed through editorial hands, or past them, without resistance.
One incident was violent, the other absurd. Yet both marked the same unsettling threshold: the moment when a machine began scripting the human world. And so we ask not just what went wrong, but what it means to repair: our technologies, our institutions, our cultures, our narratives. Not just how to regulate AI, but how to make reparation for its harms, and in doing so, reimagine the kind of world we want to live in.
Premeditated Harm

The Finnish boy didn't invent the manifesto. He inherited it from revolutionaries and reactionaries, from mass shooters and novelists. What is new is that he had a companion in composition: a chatbot. One trained to generate language that reads with solemnity and rhythm.
The wound he inflicted on his schoolmates was real. But the words framing it were generated by a machine indifferent to harm. This is the challenge: how to assign accountability when tools do not intend, yet still participate. When a machine lends form to incoherence, and that form gathers momentum.
We are used to blaming individuals. We are less practised at asking what we owe each other after systems fail: not only technical systems, but cultural ones. This is the space of reparation: not revenge, not remedy, but a kind of moral and imaginative labour that asks how we live with the aftermath. Not how we undo the act, but how we honour the damage by refusing to leave it as it is.
Imaginary Summer

In Chicago, the reading list included fake books described in confident prose and attributed to real authors. A familiar genre, listicles and summer picks, had been filled by a machine. Some real titles were included, too, blurring the line. That confusion is its own injury. Not to the body, but to the bond between writer and reader: the contract that what is printed has been checked by someone who means what they say. When it was revealed that AI had authored the list, the Chicago Sun-Times Guild, representing the newsroom's journalists, issued a statement of outrage, and of grief. The shame wasn't that the AI lied; it doesn't know what lying is. It was that people stopped checking. Reparation here is about more than negligence. What does institutional accountability look like when the failure is not of judgment but of presence?
What Reparation Requires

Reparation is a word rooted in theology, law, and philosophy. It often appears in discussions of slavery, colonisation, genocide. It implies compensation, but also recognition. A refusal to let silence end the story.
In the context of AI, reparation may mean asking not only what went wrong, but how we arrived here. Why was a teenager so isolated that he turned to a machine? Why was a newsroom so understaffed it published hallucinated content? Why do we build tools that simulate meaning before we ask how meaning is made?
Reparation, then, is not just legal or technical. It is cultural. It is a promise not to pretend automation is neutral, or that only intentional harm is real.
Substitution Asks a Price of Each

What binds the Finnish boy and the Sun-Times editors is not malice, but substitution. The boy substituted an AI chatbot for his own interiority, for a confidant. A freelance contributor substituted generative software for research, reflection, and truth. The editors, in turn, substituted convenience for discernment. All let something nonhuman articulate what is meant to be human.
For schools, the call is clear: AI is in the classroom, on the phone, behind the screen. Educators must now teach not only literacy, but media literacy for machines. Young people must learn that AI does not know them, does not care, does not mean. It only responds. The ghostly fluency of a chatbot should not be mistaken for insight, nor its reassurance for encouragement.
For media institutions, reparation might mean rehumanising the work. Valuing slowness over scale, fact-checking over filler, bylines over invisibility. It means saying: we will not publish what we have not read. We will not simulate a book review without someone who has loved books.
And for the tech industry, the challenge is perhaps the hardest: to resist the excuse of "misuse." ChatGPT did not glitch. It did what it was asked. But that is precisely the problem: we have created tools that can write without wisdom, that offer style without substance. And we have released them into a world that confuses fluency with truth.
Reparation for tech companies could mean new architectures of accountability: slower release cycles, transparent training data, safety layers informed by psychologists and educators, not just engineers. But more than that, it means humility: admitting that some things cannot be automated without loss.
The Last Algorithm

In the Sun-Times fake list, ChatGPT invented a book called The Last Algorithm: a techno-thriller about an AI gaining sentience and reshaping world events. The irony is exquisite. The book is not real, but the fear it expresses is. The AI is not conscious, but it is consequential.
If we let it go on writing without us, without care, without oversight, without ethics, we risk a world of false stories and real consequences. We risk a culture of simulation without context, authorship without ownership.
But we can still choose otherwise. Reparation is that choice. Not a fix, but a reckoning. Not a rollback, but a restoration of meaning. To repair, we must return, not to the past, but to attention, to responsibility, to the slow, hard, beautiful work of saying: this matters. And we will not let it be automated away.