AI content creation practices at FMR

FMR does not use AI to create wholesale content for FMR.org or other public-facing channels. We believe we would qualify for a “Not created by AI” badge on FMR.org, but we have not applied for one, since such badges may make a website more attractive to large language models’ scraping bots. We will continue to look into this.

For now, as we value transparency, we are outlining our current practices below.

Why not ban all AI?

AI has long been built into the software that we use every day (spam filters are AI, as are standard features of video-meeting tools, etc.).

The practices below focus on the AI usage we can control. In this, FMR staff are highly conscientious, cognizant of both the human and environmental costs, and intentionally limited in our usage.

Our focus: Generative AI

The practices below apply to staff use of generative AI. Generative AI is a type of artificial intelligence that can generate new content in response to user prompts.

Many programs that FMR and other nonprofits use offer both traditional and generative AI tools:

  • In Canva, the background remover and image resizing functions used to edit visual content are traditional AI.
    But Canva’s Magic Design tool, which generates new visual content based on a text prompt, is generative AI.
  • Adobe Reader can refine a scan of a document, turning images of letters into editable text, with traditional AI.
    Adobe Reader can also summarize and create new content from documents based on prompts with generative AI.
  • Spelling and grammar checkers are traditional AI.
    Grammarly Premium can also rewrite or create new content from a prompt using generative AI.
  • In Excel, data analysis tools are a form of traditional AI.
    But if we access Copilot in Excel and enter a prompt to have it create formulas or dashboards, that’s generative AI.

For the purposes of these practices, if you are entering a prompt and the AI can create or generate something from that prompt, the AI is generative.


Why we’re laying out these practices

FMR is committed to promoting gainful employment and believes that AI should serve as a supportive tool, augmenting rather than replacing human workers. Our intention is to use generative AI to enhance human decision-making, expertise, and creativity rather than to substitute for these human qualities.

We will choose generative AI tools that align with our mission and goals, rather than selecting them simply because they are new technologies. We want everyone at FMR to use generative AI wisely.

We believe we can judiciously leverage select benefits of generative AI while also acknowledging its challenges.

We fully acknowledge that generative AI can harm our environment by using large amounts of energy and water, and that generated content may include:

  • Misinformation or inaccuracies
  • Harmful bias
  • Unvetted or unclear sources
  • Illegal use of proprietary, copyrighted and private information

To the best of our ability, the practices outlined below take these concerns into account, allowing us to minimize the risks and harms associated with even minimal generative AI usage.

Generative AI is rapidly evolving, and we will strive to select the most ethical tools and keep these practices current. At a minimum, we will review this document and any public-facing webpages based on it annually.

When these practices apply

Generative AI should be used judiciously to support or augment current FMR workflows and projects.

All the usual branding requirements, FMR practices and workflows still apply.

All staff may use approved generative AI tools to support or refine content already on their to-do list — an NRMP, comment letter, FMR.org update, grant application, emails to partners, etc. — and then send their draft to the next planned reviewer. Non-communications staff should not use generative AI to create, say, two social media posts from that NRMP, comment letter, or other content. In turn, communications staff should not use generative AI to draft an NRMP, comment letter, etc. to offer to program staff.

Staff who have used generative AI in any content submitted to the communications department for edits should inform the department of that usage. This is in part to help us keep track of and understand current usage.

As always, FMR communications staff reserve the right to request a rewrite if the content does not align with FMR’s style. We may also use tools that attempt to gauge whether something is AI-generated. Regardless, if something sounds or feels AI-generated, staff will be asked to draft it in their natural voice, without AI assistance, before passing the content back to communications for final edits.

Who these practices apply to

This policy applies to everyone working or volunteering at FMR or advocating on behalf of FMR in a representative capacity who uses generative AI to create or share content for FMR's use.

It also applies to contractors as much as feasible. We encourage staff to review these practices with contractors to ensure compatibility.

At times, FMR may need to contract with consultants who rely on generative AI, such as an SEO expert or Google Ads creator who must use native ad tools to meet their deliverables. (This is a standard exemption to qualify for a “Not by AI” badge. We are not linking out to the badges because we haven’t picked one to support.) The contractor should be informed of our practices and preferences and develop a plan with FMR staff to complete their work while minimizing any potential harm.

Core requirements of using AI

Staff may only use the generative AI tools listed in this overview (see below).

Under no circumstances is it appropriate to blame the generative AI for what may ultimately be revealed as bad, harmful or mistaken content. If we use the tool, we must take full responsibility for using it and what we publish after using it.

FMR keeps confidential information out of all generative AI

Under no circumstances would FMR staff input confidential or private information about our organization or any of the people associated with it — including staff, board members, volunteers, or clients — into generative AI, even in a system that claims to be closed. Staff assume that anything included in an AI prompt is potentially public information and may become part of the AI’s “learning material.”

FMR staff commit to significant fact-checking and reworking

While one benefit of using AI is saving time*, staff who use the approved generative AI tools (see below) must also commit time to the following activities:

  • Carefully reviewing the content for embedded bias, meaning bias in the substance or results as well as biased language. (FMR staff: For inclusive language, see our styleguide.)
  • Fact-checking any declarations, statements, or findings presented by AI. Be especially skeptical of words such as best, worst, should, must, and the like.
  • Running any “original” content over a few hundred words through a plagiarism or copyright infringement checker (again using our approved AI tools, per below).
  • Editing the content to match the voice, style, and tone guidance of FMR. (FMR staff: See our styleguide.)

* Rule of thumb: Generative AI should not save more than a third of the time you would spend on a writing project. If it saves more, that is overuse.

FMR is transparent about usage

This page will be linked in the footer of every webpage, like our privacy policy, land acknowledgment, etc., and covers all content on our site. Additionally, if and when any visual assets that include AI-generated components appear on FMR.org or in public-facing content, we will disclose that in the caption.

Approved tools

FMR allows the continued use of the following generative AI tools. None should be used to create content wholesale; each should be used only in the ways spelled out in more detail in the following sections.

If you would like to use one not listed below, please contact the Communications Director before doing so.

Only use the tools below. Do not use or test a program without prior approval. (Many generative AI tools and extensions pose significant security risks.)

  • Extremely limited use of image creation tools that utilize prompts in Canva, Photoshop, Google Slides and Animoto. (See below.)
  • Use of Grammarly Premium to generate starting-point outlines or ideas for a rough draft, detect plagiarism, or summarize and distill complex content (not to create content wholesale, see below).
  • Use of Rev.com or native Google Drive recording tools (not extensions) to transcribe, caption videos for accessibility, or generate a reference point for or starting brainstorm outlines of spinoff content from videos and meeting notes (summaries, etc.).

How we decide which generative AI tools to use

FMR considers the potential positive impact of the desired generative AI tool, the level of effort required for implementation, how the product will fit into workflows, and whether it aligns with our organizational values.

When selecting AI products to use internally, we will utilize tools that enable us to opt out of having our data used in the product’s training data. We will also prefer tools that have worked with non-profits or other community members and constituents and are built to be mindful of our concerns.

Grammarly Premium, for example:

  • Allows us to avoid feeding a large language model (LLM), supporting the augmentation rather than replacement of human skills and reducing data usage.
  • Does not sell user data.
  • Filters biased language.
  • Offers tools to promote ethical use and transparency (such as plagiarism trackers, code that indicates AI inclusion, dashboards that allow administrators to see what percent of final copy is coming from their service, etc.).
  • Note: FMR pays for pro-grade access to utilize these features. (For comparison, if staff use a free ChatGPT model, we would be providing the labor to feed and evolve a Large Language Model that may replace someone’s job and consumes more energy.)

Similarly, Rev:

  • Does not sell data to third-party LLMs.
  • Uses AI-powered speech-to-text (STT) technology that it developed as a longstanding leader in the field.
  • Also employs and offers human closed captioning.

Approved uses of AI to support the creation of written copy

FMR does not use AI to create substantial content without significant human editing.

When creating content, we use Grammarly Premium’s drafting tools so we do not feed/teach large language models (such as ChatGPT, Claude, etc.).

The following uses of AI are acceptable at FMR without additional attribution to the AI in the content we share on FMR.org and in other public-facing channels:

  • To help you brainstorm multiple versions of subject lines, headlines, and the like, from which you will pick and edit the best.
  • To repurpose content that has already been approved, such as generating a draft slide deck or presentation outline from an edited article or document.
  • To draft shorter summaries of longer documents.
  • To translate approved content into another language, only if reviewed by a native speaker of that language.
  • To write first drafts of content based on your prompts, but only if you are committed to substantively editing the draft, adding your own original material (e.g. quotes, stories, statistics), and adjusting the language to reflect the voice, style, and tone of FMR. Again, the AI should save no more than 30% of the time you would normally spend.
  • To transcribe video and audio recordings, only if the transcript is reviewed and edited before publication.
  • For public-facing content, we also strongly encourage staff to utilize FMR’s Grammarly Premium professional/paid tools to review complex content to ensure that it is inclusive and accessible, meaning that it is approachable and written at an appropriate grade level.

Approved, but highly restricted use of generative AI to support visual content

Using generative AI to create visual content such as photographic-style images and realistic artwork comes with many additional issues. In addition to the threat it poses to graphic designers and artists, it is especially energy-intensive. (Estimates vary, with MIT Technology Review on the high side and some Nature.com articles on the lower side, but sources agree that having AI write an email uses roughly as much energy as running a microwave for a tenth to an eighth of a second, while the wholesale generation of a photo or video equals hours of microwave usage.)

Since FMR has an extensive library of assets (see our Flickr albums), we highly restrict and discourage the use of generative AI to create visual content.

We may, on very rare occasions, use generative AI to create on-brand clipart-style images that we would not otherwise contract with an artist to create. For example, the clipart pipe in this water blog image was edited with Canva generative AI tools, but the underlying map and river image was created by a paid freelancer. (On a related note, FMR is highly conscientious of intellectual property and does not share images or artwork without permission.)

Any image or part of an image created with AI should be labeled as such in the image caption and include a link to our AI practices.

We also allow generative AI to create videos or animations from pre-existing images and stock clips that FMR has clear permission to use, such as the built-in video-making tools in social media platforms, Google Ads, etc. (This is similar to letting the Mac Photos app create a video album from your images. Note: Some people consider this traditional AI rather than generative, but the lines are fuzzy, so we wanted to note it here.)

Such videos, animations, or presentations should be labeled or credited in public-facing content. Captions or credit lines should include "Created in part using __tool__."
