๐—ง๐—ต๐—ฒ ๐— ๐—ฎ๐—ป๐—ถ๐—ณ๐—ฒ๐˜€๐˜๐—ผ ๐—ฎ๐—ป๐—ฑ ๐˜๐—ต๐—ฒ ๐— ๐—ฎ๐—ฐ๐—ต๐—ถ๐—ป๐—ฒ: ๐—ข๐—ป ๐—ฉ๐—ถ๐—ผ๐—น๐—ฒ๐—ป๐—ฐ๐—ฒ, ๐—™๐—ฎ๐—ฏ๐—ฟ๐—ถ๐—ฐ๐—ฎ๐˜๐—ถ๐—ผ๐—ป, ๐—ฎ๐—ป๐—ฑ ๐˜๐—ต๐—ฒ ๐—ฆ๐—ต๐—ฎ๐—ฝ๐—ฒ ๐—ผ๐—ณ ๐—ฅ๐—ฒ๐—ฝ๐—ฎ๐—ฟ๐—ฎ๐˜๐—ถ๐—ผ๐—ป

๐‘๐‘ฆ Tammy Lai-Ming Ho
27 May 2025

On 20 May 2025, a sixteen-year-old boy in Finland stepped into his school in Pirkkala and stabbed three younger girls. Police said he had planned it. He had no friends, and he didnโ€™t want any. But he had a manifestoโ€”one allegedly written with the help of ChatGPT. The phrases were familiar: to โ€œdo something significant,โ€ to make a mark, to be โ€œexciting.โ€ The kind of dramatic language a large language model, trained on thousands of manifestos, books, screenplays, and political speeches, might echo back with the poise of fluency but none of the weight of meaning. His violent actions supplied the weight the words themselves lacked.

Just days earlier, on 18 May, across the Atlantic, another rupture unfolded. The ๐ถโ„Ž๐‘–๐‘๐‘Ž๐‘”๐‘œ ๐‘†๐‘ข๐‘›-๐‘‡๐‘–๐‘š๐‘’๐‘  published a summer reading list filled with books that didnโ€™t exist. ๐‘‡โ„Ž๐‘’ ๐ฟ๐‘œ๐‘›๐‘”๐‘’๐‘ ๐‘ก ๐ท๐‘Ž๐‘ฆ by Rumaan Alam. ๐‘‡โ„Ž๐‘’ ๐ฟ๐‘Ž๐‘ ๐‘ก ๐ด๐‘™๐‘”๐‘œ๐‘Ÿ๐‘–๐‘กโ„Ž๐‘š by Andy Weir. Titles that readers might search for, long forโ€”but they were fabrications. A freelance contributor had used ChatGPT to generate the list, and it was published as syndicated content without human verification. The list passed through editorial handsโ€”or past themโ€”without resistance.

One incident was violent, the other absurd. Yet both marked the same unsettling threshold: the moment when a machine began scripting the human world. And so we ask not just what went wrong, but what it means to repairโ€”our technologies, our institutions, our cultures, our narratives. Not just how to regulate AI, but how to make reparation for its harms, and in doing so, reimagine the kind of world we want to live in.

{{{๐†๐‡๐Ž๐’๐“๐–๐‘๐ˆ๐“๐“๐„๐ ๐‡๐€๐‘๐Œ}}}

The Finnish boy didnโ€™t invent the manifesto. He inherited itโ€”from revolutionaries and reactionaries, from mass shooters and novelists. What is new is that he had a companion in composition: a chatbot. One trained to generate language that reads with solemnity and rhythm.
The wound he inflicted on his schoolmates was real. But the words framing it were generated by a machine indifferent to harm. This is the challenge: how to assign accountability when tools do not intend, yet still participate. When a machine lends form to incoherence, and that form gathers momentum.

We are used to blaming individuals. We are less practised at asking what we owe each other after systems failโ€”not only technical systems, but cultural ones. This is the space of reparation: not revenge, not remedy, but a kind of moral and imaginative labour that asks how we live with the aftermath. Not how we undo the act, but how we honour the damage by refusing to leave it as it is.

{{{๐’๐ˆ๐Œ๐”๐‹๐€๐“๐„๐ƒ ๐“๐‘๐”๐“๐‡๐’}}}

In Chicago, the reading list included fake books described in confident prose and attributed to real authors. A familiar genreโ€”listicles and summer picksโ€”had been filled in by a machine. Some real titles were included, too, blurring the line. That confusion is its own injury. Not to the body, but to the bond between writer and readerโ€”the contract that what is printed has been checked by someone who means what they say.

When it was revealed that AI had authored the list, the Chicago Sun-Times Guild, representing the newsroomโ€™s journalists, issued a statement of outrageโ€”and grief. The shame wasnโ€™t that the AI lied; it doesnโ€™t know what lying is. It was that people stopped checking. Reparation here is not simply a matter of punishing negligence. What does institutional accountability look like when the failure is not of judgment but of presence?

{{{๐–๐‡๐€๐“ ๐‘๐„๐๐€๐‘๐€๐“๐ˆ๐Ž๐ ๐‘๐„๐๐”๐ˆ๐‘๐„๐’}}}

Reparation is a word rooted in theology, law, and philosophy. It often appears in discussions of slavery, colonisation, genocide. It implies compensation, but also recognition. A refusal to let silence end the story.
In the context of AI, reparation may mean asking not only what went wrong, but how we arrived here. Why was a teenager so isolated that he turned to a machine? Why was a newsroom so understaffed it published hallucinated content? Why do we build tools that simulate meaning before we ask how meaning is made?

Reparation, then, is not just legal or technical. It is cultural. It is a promise not to pretend automation is neutral, or that only intentional harm is real.

{{{๐ˆ๐๐’๐“๐ˆ๐“๐”๐“๐ˆ๐Ž๐๐’ ๐“๐‡๐€๐“ ๐…๐Ž๐‘๐†๐Ž๐“ ๐“๐Ž ๐‹๐Ž๐Ž๐Š}}}

What binds the Finnish boy and the ๐‘†๐‘ข๐‘›-๐‘‡๐‘–๐‘š๐‘’๐‘  editors is not malice, but substitution. The boy substituted an AI chatbot for his own interiority, for a confidant. A freelance contributor substituted generative software for research, reflection, and truth. The editors, in turn, substituted convenience for human discernment. All let something nonhuman articulate what is meant to be human.

For schools, the call is clear: AI is in the classroom, on the phone, behind the screen. Educators must now teach not only literacy, but media literacy for the age of machines. Young people must learn that AI does not know them, does not care, does not mean. It only responds. The ghostly fluency of a chatbot should not be mistaken for insight, nor its reassurance for encouragement.

For media institutions, reparation might mean rehumanising the work. Valuing slowness over scale, fact-checking over filler, bylines over invisibility. It means saying: we will not publish what we have not read. We will not simulate a book review without someone who has loved books.

And for the tech industry, the challenge is perhaps the hardest: to resist the excuse of โ€œmisuse.โ€ ChatGPT did not glitch. It did what it was asked. But that is precisely the problem: we have created tools that can write without wisdom, that offer style without substance. And we have released them into a world that confuses fluency with truth.

Reparation for tech companies could mean new architectures of accountabilityโ€”slower release cycles, transparent training data, safety layers informed by psychologists and educators, not just engineers. But more than that, it means humility: admitting that some things cannot be automated without loss.

{{{๐“๐‡๐„ ๐‹๐€๐’๐“ ๐€๐‹๐†๐Ž๐‘๐ˆ๐“๐‡๐Œ}}}

In the ๐‘†๐‘ข๐‘›-๐‘‡๐‘–๐‘š๐‘’๐‘  fake list, ChatGPT invented a book called ๐‘‡โ„Ž๐‘’ ๐ฟ๐‘Ž๐‘ ๐‘ก ๐ด๐‘™๐‘”๐‘œ๐‘Ÿ๐‘–๐‘กโ„Ž๐‘š. It was a techno-thriller about an AI gaining sentience and reshaping world events. The irony is exquisite. The book is not real, but the fear it expresses is. The AI is not conscious, but it is consequential.

If we let it go on writing without usโ€”without care, without oversight, without ethicsโ€”we risk a world of false stories and real consequences. We risk a culture of simulation without context, authorship without ownership.

But we can still choose otherwise. Reparation is that choice. Not a fix, but a reckoning. Not a rollback, but a restoration of meaning. To repair, we must returnโ€”not to the past, but to attention, to responsibility, to the slow, hard, beautiful work of saying: this matters. And we will not let it be automated away.