Little Rock Public Radio AI Policy

Little Rock Public Radio guidance and policies on using artificial intelligence in our work.
Last updated: 12/5/2025

This policy is based on guidelines from Poynter and has been modified to best serve the needs of Little Rock Public Radio.

Generative artificial intelligence is the use of large language models to create something new, such as text, images, graphics and interactive media. These terms will be referenced throughout this policy:

Generative AI: A type of artificial intelligence that creates new content, such as text, images, or media, by interpreting and generating based on input data.
Large language models (LLMs): AI systems trained on vast datasets of text to understand and generate human-like language; LLMs are the informational backbone that powers generative AI.
AI prompt: A specific input or instruction provided to an AI tool to generate a desired output.
Hallucination: The phenomenon where AI generates information or responses that are fabricated, inaccurate, or not grounded in fact.
Training data: The dataset — articles, research papers or social media posts — used to teach an AI model patterns, relationships and knowledge for making predictions or generating content.

Although generative AI has the potential to improve newsgathering, it also has the potential to harm journalists’ credibility and our unique relationship with our audience. As we proceed, the following five core values will guide our work. These principles apply explicitly to the newsroom and extend to non-news departments, including advertising, events, marketing and development.

Transparency
When we use generative AI in a significant way in our journalism (i.e., when content created by AI is published or aired in our reporting), we will document the tools and describe them to our audience with specificity, in a way that both discloses and educates. This may be a short tagline, a caption or credit, or, for something more substantial, an editor’s note. When appropriate, we will include the prompts fed into the model to generate the material.

Accuracy and human oversight
All information generated by AI requires human verification. Everything we publish will live up to our longtime standards of verification. For example, an editor will review prompts and any other inputs used to generate substantial content, including data analysis, in addition to applying the editing process in place for all of our content.
We will actively monitor and address biases in AI-generated content, ensuring fairness and equity in our journalism. Our newsroom will regularly evaluate and update our standards to ensure uses and tools are equitable and minimize bias.

Privacy and security
Our relationship with our audience is rooted in trust and respect. To that end, we will protect our audience’s data in accordance with our newsroom’s policies. We will never enter sensitive or identifying information about our audience members, sources or our own staff into any generative AI tools. As technology advances and opportunities to customize content for our audience arise, we will be explicit about how your data is collected in accordance with our organization’s ethical policy.

Accountability
We take responsibility for all content generated or informed by AI tools. Any errors or inaccuracies resulting from the use of these tools will be transparently addressed and corrected. We will consider audience feedback in policy updates. Violations of this policy will result in retraining and possible disciplinary action.

Exploration
With the five previous principles as our foundation, we will embrace exploration and experimentation. We will strive to invest in newsroom training so every staff member is knowledgeable about the responsible and ethical use of generative AI tools.

Logistics
The point person/team on generative AI in our newsroom is the news director. Coordinate all use of AI with them. The newsroom will also be the source of frequent interim guidance distributed throughout our organization.
In addition, members of this team will:
            Write clear guidance about how we will or will not use AI in content generation.
            Edit and finalize our AI policy and ensure that it is both internally available and, where appropriate, publicly available (with our other standards and ethics guidelines).
            Manage all disclosures about partnerships, grant funding or licensing from AI companies.
            Understand our ethics policies and explain how they apply to AI and our newsroom values. This may include consulting with editors, lawyers or other privacy experts.
            Innovate ways to communicate with the audience to both educate them and gather data about their needs and concerns.

All uses of AI should start with journalism-centered intentions and be cleared by the news director. Human verification and supervision are essential. Here are the questions we consider when using AI:

How do you want to use AI?
What is the journalistic purpose of this work?
How can you gather knowledge on audience needs and attitudes about your intended use?
How should the audience’s needs and attitudes inform your AI use?
How will you fact-check the results?
Will any material be published?
Which journalists will be responsible for overseeing this work and reporting out the results?
Which editors or managers will oversee the work?
What are the risks (e.g., hallucinations, copyright or legal issues, data privacy violations), and what safety nets can you devise to intervene before negative outcomes harm your newsroom’s reputation?
What are the privacy implications of this use, and how will we protect user data?
Editorial use

Approved generative AI tools
Here is a list of tools that are currently approved for use at Little Rock Public Radio. Reporters must reach out to the news director with any tools they’d like to start using, and we will update the list pending approval.
            ChatGPT
            Google Gemini
            Grammarly
            NotebookLM
            Apple Intelligence
            Microsoft Copilot
            Existing tools (Zoom, Canva, Adobe Creative Suite, etc.) that have added AI capabilities.

In upholding the five principles of AI use in our organization, these caveats apply:

            Preserve our editorial voice: We will be cautious when using AI tools to edit content, ensuring that any changes maintain Little Rock Public Radio’s editorial voice and style guidelines.
            Prohibit full writes and rewrites: Generative AI tools will not be used for wholesale writing or rewriting of content. We will use them for specific edits rather than rewriting entire paragraphs or articles.
            Proprietary content: We will not input any private or proprietary information, such as contracts, email lists or sensitive correspondence into generative AI tools.
            Verification: We will be mindful that generative AI tools may introduce errors, misinterpret context or suggest phrasing that unintentionally changes meaning, and will review all suggestions critically to ensure accuracy.
            Disclosure: In most cases, we will disclose the use of generative AI. Our goal is to be specific and highlight why we’re using the tool to better engage with readers.

Research
We may use generative AI to research a topic. This includes using chatbots to summarize academic papers and suggest related ones, surface historical information or data about the topic, and suggest story angles. Generative AI tools may be used to find checkable claims to pursue, or by journalists to sift through social media posts for article topics. A reminder: These tools are prone to factual errors, so all outputs will be verified by reporters and editors.

Transcription
We may use generative AI to transcribe interviews and make our reporting more efficient. Our journalists will review transcriptions and cross-check with recordings for any material to be used in articles or other content.

Translation
We may use generative AI tools to translate material for article research. We may also use those tools to translate article content, which will always be reviewed by an expert in the language and include the following disclosure: This article/audio/video was translated using generative AI. It has been reviewed by our editorial team to ensure accuracy. Read more about how and why we use AI in our reporting at https://www.littlerockpublicradio.org/little-rock-public-radio-ai-policy. Send feedback to comments@littlerockpublicradio.org.

Searching and assembling data
We may use AI to search for information, mine public databases or assemble and calculate statistics that would be useful to our reporting and in the service of our audience. Any data analysis and writing of code used on the website will be checked by an editor with relevant data skills.

Copyediting
Generative AI may NOT be used to assist with copyediting tasks, such as identifying grammar issues, suggesting style improvements or rephrasing sentences for clarity.

Social media content
Generative AI tools will NOT be used to summarize articles to create social media posts.

Visuals
Little Rock Public Radio holds AI-generated visuals to the same rigorous ethical standards as all forms of journalism. Because images shape perception instantly and powerfully, our use of generative AI in visual storytelling is governed by principles of truth, transparency and audience trust.
These guidelines apply to all AI-generated or AI-assisted visual materials, including illustrations, composites, animations and enhanced photographs. Every visual must serve a clear editorial purpose and uphold our responsibility to inform, not mislead.

Humanity first
When a scene can be documented ethically and accurately by our journalists, human coverage is the preferred option.
AI-generated visuals may only be used when:
            They are essential to the audience’s understanding
            The image is impossible or inappropriate to obtain through traditional means
            Example: https://www.npr.org/2024/10/18/nx-s1-5153741/ai-images-hurricanes-disasters-propaganda
           
Accuracy over aesthetics
AI photo enhancement tools (e.g., sharpening, lighting correction, denoising) must reflect reality, not dramatize or distort it; see AP’s guidelines (page 17) for photos. Edits that exaggerate emotion, alter mood or misrepresent the scene violate visual ethics. For example, deepening shadows to heighten drama in disaster imagery is not permitted. All AI enhancements must be disclosed internally and reviewed against the original.

Review and verification
Given the rise of AI generation tools for the public, editors and journalists must be vigilant about analyzing reader-submitted content. Media verification must rely on multiple methods — metadata checks, source verification, AI-assisted forensics — and never on one tool. Verification decisions must be documented internally for future review and accountability.

No manipulation of real people or events
We do not use AI to create or alter depictions of real people or places unless clearly disclosed and editorially justified. This includes recreating faces, changing expressions, or adding or removing individuals from scenes. We will not use AI to simulate likenesses of staff or sources in news reporting.

Disclosure
AI-generated illustrations or composites must be clearly labeled. Captions should disclose the method and source of generation. Here is an example, in which a reporter is working on a story about a rescue in a cave at a National Park: This image was generated using Adobe Firefly based on geological maps, 3D topographic data and field notes from our reporter’s visit to the site. It is a visual approximation of the cave’s interior and is intended to help readers understand the story’s context. No photographs were taken inside the cave due to access restrictions. This illustration has been reviewed by our editorial team for accuracy. Read more about how we use AI in our reporting at this link.

Reporters may choose whether to include the specific prompt used in the generation process. For example: Prompt used: “Interior of a natural limestone cave system with uneven rock walls, narrow passageways, and underground pools, based on elevation maps and field notes. Dim natural lighting from overhead shafts, no artificial lighting, realistic texture, no people.”

Commitment to audience AI literacy
We publish our AI policy to help our audience understand how and why we’re using generative AI. This material will be regularly updated to reflect our most current experimentation. As our language evolves, we will be better able to describe specific AI applications and tools.

All interns are prohibited from using generative AI.