“AI,” “Artificial Intelligence,” “Generative AI.” These are some of the terms I searched for in the syllabus of a reporting class I took in fall 2024, when I was a senior in my undergraduate studies at UNC-Chapel Hill. My computer made a pinging sound when I hit enter.
Nothing.
It was hard to believe that zero results appeared, given that a year later, conversations about AI seemed to be all around me on campus. It shows up in syllabi, class discussions, assignments and even events. As a graduate student at the UNC Hussman School of Journalism and Media, I experiment with it in my own reporting, primarily to help generate questions. At the same time, I consider some of AI’s ethical implications, particularly for the work of editors: which tasks AI is appropriate for, when its use should be disclosed and who is involved in those decisions.
Along with a newsroom’s own policy, guidelines created by nonprofit media institutions, such as the Poynter Institute and Trusting News, offer a roadmap for the responsible use of AI. How and when AI should be disclosed to audiences is a complex issue, yet it is critical in building trust with readers as a growing number of journalists and newsrooms use the technology in different ways.
As an aspiring news editor, I strive to build a comprehensive understanding of AI from both technical and ethical perspectives. It’s also knowledge I know will be helpful to news organizations. So when my professor asked our class last semester to write a story on an ethical issue facing our profession, I wanted mine to center on AI. My interest in the topic grew after writing the story, and this semester I’m doing an independent study through school focused on how editors are integrating AI into their work.
For my class assignment last fall, I interviewed three professionals with unique experiences using AI in their work. These interviews have been edited for brevity and clarity.
Erin Servais is the founder of AI for Editors, a live, primarily online editing training program for professional editors who work in fields ranging from education to government.
What are some of the best basic practices for ethically integrating AI in editing?
Servais: It’s a best practice to follow the AI use policy of your publication, if it has one. If it doesn’t, lobby for one. The same applies if you’re a freelancer working directly with clients and writers, which wouldn’t really be the case for newsrooms. The best policy is to look at your workflow, all of the individual tasks you do as an editor, and determine at what level AI use is appropriate. I break it down into three main categories:
Automation, meaning the task is done one hundred percent by AI
Augmentation, meaning a percentage of the work is done by AI and the rest by humans
All human, meaning the task is done one hundred percent by a human being
In what sorts of situations, if any, could you see a publication disclosing that AI was used in copyediting? Or is that not as needed because it’s just copyediting?
Servais: If we’re talking about strictly copyediting, I would lean much more toward a disclosure not being strictly required. I could see that some publications might like their readers to know that a human signed off on all decisions related to the article and its editing. For that reason, maybe they would want a disclosure that was more of a catch-all rather than one explicitly stating that they used such and such program for copyediting.
Among the editors you’ve spoken with, who I’m sure come from a wide range of disciplines, what do you feel has been the overall consensus about incorporating AI into their work?
Servais: For the copyeditors, it depends on the length of text they typically have to edit at one time, because it doesn’t work yet that you can upload a book manuscript and have AI copyedit the whole thing in one go. People who work with very long texts are finding it less useful than people who might be in newsrooms working on shorter texts.
For people where that is the case, where they can’t use AI on the entire project at one time, they’re using it the way a line editor might: putting in that long, convoluted sentence and having AI work on that sentence first. Hopefully it does it exactly the way they want on the first try; otherwise, it at least helps them figure out where to get started.
Brighton McConnell is the news director at 97.9 The Hill WCHL and Chapelboro, the latter an online news publication based in Chapel Hill. His newsroom experimented with an AI chatbot last year as part of a research project by the UNC Center for Innovation and Sustainability in Local Media.
Why did Chapelboro decide that it wanted to have this chatbot on its website?
McConnell: When we saw that CISLM, the UNC Center for Innovation and Sustainability in Local Media, was doing this research project and looking for small newsrooms to participate, that jumped out to us as a good opportunity… Not only was it something that’s like, ‘OK, they help build the tool, that saves us a bit of time,’ but the idea was that it would go pretty easily onto the website, which also sounded attractive. And it was in the name of helping a group that’s UNC-affiliated and helping with research.
I do not use ChatGPT in any form or fashion in putting together stories or research. I’ve just been really skeptical of those tools, but we also understand that a lot of outlets are beginning to use them. We thought this could be a potentially good way to dip our toe into what artificial intelligence would look like and how it would interact on our website.
What are some examples of reader feedback that you received while having the bot up?
McConnell: In part because of the way we presented it to readers, I think they struggled to understand that it was not like ChatGPT or something using search engines to pull in all of this information and spit a response back. Sometimes, readers asked it [the bot] non-Chapelboro-related things that were Chapel Hill- and Carrboro-specific, to which the bot would sometimes have up-and-down answers.
We had people being like, “Tell me the history of Chapel Hill,” and the bot just wasn’t prepared to do that. A lot of the feedback we received was skepticism about seeing something labeled as artificial intelligence around a trusted news source.
What do you think are some of the most important lessons that you learned about AI from this experience?
McConnell: I think this was an example of how not every single AI tool is like the ones we know, ChatGPT and other really established, big AI tools. There is this middle ground, this part of the industry that’s still finding its way. It reaffirmed that it [AI] is not going to be right every single time.
Jasmine McNealy is a professor in the Department of Media Production, Management, and Technology at the University of Florida. She researches AI and its impact on communities, people and organizations, as well as the policies around it.
Since you teach some classes over at the University of Florida, what does your AI policy look like for students?
McNealy: I usually tell students not to use it. However, if you’re going to use it, you’re responsible for any of its outputs that you use. One of the things about thinking critically about AI is that, hopefully, you get an opportunity to correct it.
What do you think young communications professionals should watch out for when using AI?
McNealy: Young professionals need to be able to communicate well, aside from artificial intelligence. It’s OK to make mistakes; it’s OK to grow as a writer, communicator [and] graphic designer, and there are tools that are now able to assist you. But it is also imperative that when the system goes down, as it inevitably does, you know how to use the skills and expertise you have.
Are there any news outlets that you think have a solid AI policy or are doing good work with it ethically? And what in your view makes a good AI policy?
McNealy: CBC News, Global Voices and The New York Times. I think a good AI policy is one that is transparent about whether a news organization will use AI and perhaps how it may or may not use it.
Thank you, Mila Mascenik, for sharing your piece on AI & ethics for The Hub newsletter! If you'd like to share your newsroom's AI policy and/or experiences using tools, get in touch with her at mmila@email.unc.edu.