A Review of a ChatGPT Review of The Last of Us Part Two
May occasionally produce…biased content
-ChatGPT product disclaimer
Hey, ditto.
It’s actually been some time since I’ve done any proper writing, which, naturally, has made the thought of doing it seem wondrous when I picture it in my head. It’s a fantasy of output without the input, a vision I imagine afflicts writers the most among the creative arts, because there’s no kinaesthesia to fix the labor of writing in one’s memory. Outside of henpecking a keyboard—which basically everyone in the world does to some extent—procrastination is writing’s only physical manifestation, and is manifestly about not thinking about writing. And so it becomes easy for me to imagine that the work only ever required making ink appear on a page.
Specifically, I wanted to write something for #ReviewJam, which helpfully suggested spending only an hour or two on a submission. For two weeks, cruising past the deadline and noodling away in the kitchen (physically manifesting my not-writing), I kept overhearing NPR segments about ChatGPT, a new-ish bot that’s purported to be uniquely good at affecting human writing. So good, the pieces suggested, that a tsunami of bot-produced term papers and book reports loomed.
Well, a pitch wrote itself for me in that instant, you might say. I’d lay a game on this ChatGPT thing’s algorithmic altar, and with a simple incantation have it render for me a whole bunch of game review lorem ipsum to joke about. The words themselves hardly mattered, I figured; even if ChatGPT turned out to be all evening-news hype, I could still riff on whatever it produced for quick #content.
And then a couple things happened. 1) The results were…particular. Particular enough to make me want to properly write up some of the things they’re doing. And 2) it then turned out that CNET had already been demoing the bot for articles (with all the low managerial panache of a game publisher deciding to get into NFTs, and about as much success). And before, during, and after these things, the ongoing media layoff wildfire kept burning up more acreage.
I decamped from social media because I was tired of having the arc of each day defined by pseudo-events that upset me and felt like they mattered greatly but were, in hindsight, just storms in a teacup. Listening to the radio and trying to suss out the mattering of ChatGPT, I hadn’t been sure which I was seeing play out; the whole thing had a certain, oh, insubstantial Musk about it. But it turns out it’s hard to even dance on the surface of a seemingly half-baked thing like ChatGPT without brushing up against very real problems. This is an obnoxious feature of Silicon Valley: tech doesn’t have to be substantial or serious to suddenly be A Thing you now get to contend with.
I chose The Last of Us Part II for my test case because of its prestige, and not for any latent buzziness from the HBO show’s release (though the two are certainly interrelated). And because I could recall contemporaneous reporting and criticism from the game’s release that punctured that prestige, even if it didn’t completely deflate it.
The Last of Us and its sequel are shooters first and foremost, which are lowbrow affairs almost by definition. Brutish things, to want to make their trade a form of violence that we Americans collectively do not understand. And yet, like a mafia don trying to take the family business legit, the TLOU games nurse naked ambitions for a class of media that has long held itself above them and their new money. Reporters and critics nevertheless sought out the back entrances, read the ledgers, found out where the bodies were buried. I wanted to see how both impressions of the game would be reflected in the bot’s narrative.
So: the content. Without knowing too much about how the bot works, it’s still easy to recognize it as a web-crawling aggregator, evident in its “view from nowhere” perspective on matters, everything a bit comme ci, comme ça. In doing so, it’s of course only replicating tendencies it finds in human writing. When in doubt, we retrench into the comfortable confines afforded by passivity and specious impartiality. It’s a hallmark of juvenile writing, and also of Wikipedia, which perhaps goes to explain why teachers seem more nonchalant about ChatGPT than the media, by and large—I imagine they’re pretty used to parsing wiki-speak in student papers.
Not that professional writers are immune. In ChatGPT’s leavings we can also find evidence of journalists’ myriad discrete uses of “some players” to describe any instance of online opinion sharing, percolating down from individual reported pieces and reviews into the internet’s catch basins, like Wikipedia, where they become part of great subterranean lakes of disattribution. A typically inane one you can find on TLOU2’s own Wikipedia entry begins “Some players criticized [the female protagonist’s] muscular physique…”
And then, out from the aggregate comes a sudden yawp of disavowal: it’s “important to note” that the game’s themes do not represent commentary. Gone, for the moment, is the equivocation, the dissembling. This bit is recognizable as the language of a splash screen media disclaimer, à la “events and characters have been fictionalized, modified, or composited for dramatic purposes.” But there’s also the distinct whiff of that old games PR canard, “our game isn’t trying to be political,” usually trotted out when, say, a game taps into BLM iconography for a quick injection of meaningfulness. Having sat up in a sudden moment of lucidity to declare The Last of Us Part II inculpable, the bot then slouches back into a haze of truism. “Different people interpret things differently,” though, hides an old disarming rhetorical move used to blunt legitimate critique, flattening both it and indefensible nonsense into equally valid perspectives.
In considering a chatbot, we must ask ourselves: who has an interest in making the words they want said appear to come out of someone else’s mouth, in ventriloquizing speech? Social media has already given us that answer: it’s who we see conducting astroturfing campaigns and creating sockpuppet accounts—governments, corporations, and right-wingers, the latter as an extension of Sartre’s observation about fascists, who “have the right to play” with words, because they approach the public forum as a farce. And what better way to play with words than with a chatbot? Gaming’s most prominent instance of this, of course, was a mummer’s parade of reactionary memes and sockpuppets, blackface optional.
Back at that time, and on the subject of reviews, I wrote about the pains that figures on that reactionary front were taking to tamp video game criticism back down into the facile objectivity of what “the average gamer” might think about matters—“why should anyone care what, say, a Native reviewer might think about a game’s portrayal of their people, when that critique won’t apply to the ‘average’ player,” and so on. In the right’s preferred approach, truths spoken by minority critics were either to be A) dismissed entirely as statistically insignificant, B) neutralized by situating them alongside the myriad unserious things said by an agglomerated “some players,” or C) countered with the falsified, thrown speech of a sockpuppet or chatbot. ChatGPT, or any other network that can be counted on to turn to PR-vetted talking points, or is susceptible to sudden, concerted efforts by the online right to flood a communication channel, seems like it might be quite useful for all of these use cases. Conversely, I have trouble imagining a benevolent one.
A problem for teachers in particular? Probably not. For digital journalists? Are there any of us left? For everyone? I could see it.
But for the already marginalized, especially?
Well…
…
…do I really need to write it?