CNET's new guidelines for AI journalism met with union pushback
The site was previously forced to correct dozens of inaccurate, AI-written stories.
Nearly seven months after it began publishing machine-generated stories without disclosing their true authorship (or lack thereof) to readers, CNET has finally, publicly changed its policy on the use of AI in its journalistic endeavors. In short, stories written by its in-house artificial intelligence — which it calls Responsible AI Machine Partner (RAMP) — are no more, but the specter of AI in its newsroom is far from exorcised.
The site indicates, however, that there are still two broad categories of pursuits where RAMP will be deployed. The first, which it calls "Organizing large amounts of information," comes with an example that seems more authorial than that umbrella descriptor lets on. "RAMP will help us sort things like pricing and availability data and present it in ways that tailor information to certain audiences. Without an AI assist, this volume of work wouldn’t be possible."
The other ("Speeding up certain research and administrative portions of our workflow.") is more troubling. "CNET editors could use AI to help automate some portions of our work so we can focus on the parts that add the most unique value," the guidelines state."RAMP may also generate content such as explanatory material (based on trusted sources) that a human could fact-check and edit. [emphasis ours]" You'd be forgiven if that sounds nearly identical to what got CNET into trouble in the first place.
The venerable tech site first posted an innocuously titled explainer ("What Is a Credit Card Charge-Off?") on November 11, 2022, under the byline "CNET Money Staff" with no further explanation as to its provenance, and continued posting dozens more short finance stories under that byline through mid-January. It was around that time that Futurism discovered two important details: CNET Money Staff stories were AI-generated, and much of that work was wildly inaccurate. CNET issued corrections on over half of those stories and, by all appearances, stopped using such tools in response to the deserved criticism that followed.
In the interim, the remaining CNET staff publicly announced their intention to unionize with the Writers Guild of America, East. Alongside the more typical areas of concern for a shrinking newsroom during these trying times in the media industry (retention, severance, editorial independence, et cetera), the bargaining unit also specifically pushed back against the site's intention to keep deploying AI.
Based on the union's response on Twitter, the guidelines fall well short of the kinds of protections CNET's workers were hoping for. "Before the tool rolls out, our union looks forward to negotiating," they wrote, "how & what data is retrieved; a regular role in testing/reevaluating tool; right to opt out & remove bylines; a voice to ensure editorial integrity."
New AI policy @CNET affects workers. Before the tool rolls out, our union looks forward to negotiating: how & what data is retrieved; a regular role in testing/reevaluating tool; right to opt out & remove bylines; a voice to ensure editorial integrity. https://t.co/7FQFWhRoui
— CNET Media Workers Union (@cnetunion) June 6, 2023
Granted, CNET claims it will never deploy RAMP to write full stories, though it also denies it ever did so. However, the new guidelines leave the door open for that possibility, as well as the eventuality that it uses AI to generate images or videos, promising only that for stories containing "text that originated from our AI tool, we’ll include that information in a disclosure." CNET's apparent bullishness on AI (and its staff's wariness) also arrives against a backdrop of news organizations broadly looking to survive the technology's potential ill effects. The New York Times and other media groups began preliminary talks this week to discuss AI's role in disinformation and plagiarism, as well as how to ensure fair compensation when authorship becomes murky.
The prior CNET Money Staff articles have since been updated to reflect the new editorial guidelines. Each is credited to a human staff member who has rewritten the story and also lists the name of the overseeing editor. Each is now appended with the following note at the bottom: "Editors’ note: An earlier version of this article was assisted by an AI engine. This version has been substantially updated by a staff writer."
This sort of basic disclosure is neither difficult nor unusual. Including the provenance of information has been one of the core tenets of journalism since well before AI became advanced enough to get a credit on the masthead, and The Associated Press has been including such disclosures in its cut-and-paste-level financial beat stories for the better part of a decade. On the one hand, much of the embarrassment around CNET's gaffe could have been avoided if it had simply warned readers where the text of these stories had come from at the outset. But the larger concern remains that, unlike AP's use of these tools, CNET seems poised to allow RAMP more freedom to do more substantive work, the bounds of which are not meaningfully changed by these guidelines.
Correction, June 6th, 2023, 11:47am ET: An earlier version of this story inaccurately described how the altered stories previously written by CNET Money Staff appeared on the page.