AI Writes Fictitious and Erroneous Stories for Major News App

Date: 06/07/2024
Tag: #ai #newsbreak #news #journalism #powerelectronics
Turns out artificial intelligence is just as capable of sloppy journalism as humans. According to Reuters, NewsBreak has used AI to write at least 40 stories since 2021, many of them erroneous or carrying fictitious bylines. This certainly won't be the last time we hear a story like this.

If you've been active on eBay lately (or several other sites), you might have noticed AI-generated item descriptions, which is fine: computers are generally better at recalling details than humans, and no one goes to e-commerce sites for award-winning prose. And if I've learned anything in publishing, it's that e-media has an insatiable appetite for new content, and to feed the beast, many brands resort to piping RSS feeds or other sources directly into their sites. It makes sense that online media would explore ways to automate the content that isn't produced in-house, and AI certainly has a role to play. But handing AI the entire journalistic process is problematic at best.

In NewsBreak's case, the app publishes licensed content from sources like Reuters, Fox, AP, and CNN. But it also uses AI to scrape the web for news, rewrite it, and publish it as new content. And it has run into problems. Reuters notes that NewsBreak published about 10 news stories from local news sites under fictitious bylines, stole content from competitors, and got some important details wrong.

For example, NewsBreak published incorrect food distribution times for Food to Power, a Colorado-based food bank. It also claimed that Harvest912, a charity in Erie, Pennsylvania, was holding a 24-hour foot-care clinic for homeless people that didn't exist. "You are doing HARM by publishing this misinformation - homeless people will walk to these venues to attend a clinic that is not happening," Harvest912 told NewsBreak.

NewsBreak eventually removed the tainted articles, but curiously, it has said nothing about reining in AI-generated content in general. In fact, it recently added a disclaimer saying that its content "may not always be error-free."

Norm Pearlstine, a former executive editor at the Wall Street Journal and the Los Angeles Times and a NewsBreak consultant, said: "I question the legality of creating fake accounts using content publishers put behind their paywalls. If I had learned about the practice while at the LA Times, I would have instructed our lawyer to seek a restraining order and sue for damages."

With the public's confidence in the media at all-time lows, this is probably not the best way to restore trust.