We are at the dawn of a new age of fundraising—with great challenges to tackle. The responsible use of Artificial Intelligence is one of many ways in which we can (and should) advance our sector to better understand our constituents, develop connection, scale fundraising, build community, and deliver mission.
But we must harness AI ethically.
On Oct. 23-24, nonprofit professionals will gather virtually to unpack the responsible use of AI for fundraising, and JGA is proud to be a sponsor of this pioneering gathering.
I hope you will join me at the conference by registering here. It’s free. And it’s crucial.
This short guide, from Nathan Chappell, Senior Vice President of DonorSearch, will orient you to the responsible use of AI for nonprofits and the key considerations to keep in mind if your organization chooses to invest in new AI tools.
There are three overarching ways that nonprofits are using AI today:
- For fundraising purposes. This typically involves using machine learning to model predictions about your donors’ and prospects’ giving habits, which helps you refine your development workflows and pinpoint those who are most likely ready to give specific amounts at specific times.
- For engaging directly with donors and constituents. This field involves using generative AI to interact with the public in the form of online chatbots.
- For streamlining other internal tasks. This also involves using generative AI tools (think ChatGPT) to help accomplish tasks like drafting appeals and emails. Nonprofits may also automate certain tasks with AI, like data management, meeting and event scheduling, and more.
For nonprofits looking to make meaningful, value-generating investments in AI, the first use case (fundraising) is where you should focus your attention. Nonprofits of all sizes now understand the value of data—predictive modeling technology takes that value to the next level by using it to proactively guide your fundraising strategies in more targeted, efficient, and forward-thinking ways than ever before.
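To make the predictive-modeling use case concrete, here is a minimal, hypothetical sketch in Python using scikit-learn and synthetic data. The feature names, the synthetic labels, and the idea of surfacing a "top prospects" list are illustrative assumptions for this sketch, not drawn from any specific vendor's product:

```python
# Hypothetical sketch: scoring donors on likelihood to give again.
# All data here is synthetic; feature names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Pretend features per donor: e.g., years on file, gifts in last 3 years,
# average gift size (standardized). 200 synthetic donor records.
X = rng.normal(size=(200, 3))

# Synthetic "gave again" labels, loosely tied to the features so the
# model has a real signal to learn.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score every donor (probability of giving again) and surface the
# highest-scoring prospects for human review.
scores = model.predict_proba(X)[:, 1]
top_prospects = np.argsort(scores)[::-1][:10]
```

Note that the output of a sketch like this is a ranked list for fundraisers to review, not a decision: the human-judgment best practices discussed below still apply to every name on it.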
The most significant risks of using AI irresponsibly (often inadvertently) come when you don't understand the data sources your tools are using.
AI tools draw from extensive stores of data to generate predictions and responses. Some platforms violate data privacy and security regulations by pulling from private or unauthorized data sources. Even in cases where regulations are unclear, this is widely considered unethical.
Some AI tools are designed to analyze your nonprofit’s own data to generate predictions. The concern is that sharing this data with a third-party platform can create opportunities for it to get stolen, sold, used to train the platform’s broader algorithm (and therefore be made accessible in some form to all its users), or otherwise dispersed in unauthorized ways. To mitigate such risks, ensure that you prioritize privacy and security by complying with all local, state, and federal laws.
Aside from data privacy concerns, other clear-cut risks emerge when organizations blindly accept decisions, suggestions, and recommendations by generative AI tools.
AI can’t be a total replacement for human interaction and expertise. Its outputs should be manually verified through human judgment to protect your nonprofit and community, for a couple of reasons:
- AI systems can sometimes create their own feedback loops, amplifying any incorrect information, biases, and prejudices that are present in their source data. These flaws can then find their way into your organization’s decision-making completely inadvertently.
- Generative AI can also simply give you wrong or contextually inappropriate responses. For public-facing tools, it’s highly irresponsible to not consider the potential consequences of AI malfunctions or unexpected behavior. For example, the National Eating Disorders Association suffered reputational and brand damage when it attempted to replace human hotline staff with a chatbot.
The world of AI regulation is vague and rapidly changing, so responsible use of this technology also requires keeping up with its developments, but it’s always better to be safe than sorry. When you understand these risks and follow best practices, the benefits of using AI responsibly can far outweigh the effort involved.
So, what are the actual best practices to keep in mind in order to avoid all these risks? Here are five of the most important and actionable.
- Do your due diligence. Vet your AI software providers carefully and rely on trusted names in the space that back up their products with testimonials and explanations of their security measures. Ideally, choose a vendor who endorses the Framework for Responsible AI.
- Train your team. Ensure that everyone who will be involved with using your AI tool or its predictions understands both how it works and the essentials of responsible AI usage. For those who’ll be using the tools, give them ample training and a clear rundown of all the included settings, options, and security protocols. See if your vendor can provide training services or official documentation.
- Build human insights and judgment into your workflows. You can’t blindly use AI predictions and suggestions without first giving them real, careful consideration. Once you determine exactly how you’ll be using your tools (for example, to generate outreach lists of donors likely to give to particular campaigns), make sure that your workflow includes steps for human fundraisers to screen the lists. You wouldn’t solicit a major gift from someone without having built a relationship and double-checked your prospecting records, so you shouldn’t let new tech change that.
- Maintain data hygiene. As your AI tools draw from your own data, keeping your database clean and up-to-date is more important than ever to ensure the AI’s outputs are as precise and helpful as possible. In the example of generating outreach lists, incorrect or outdated information can throw off your predictions and lead to poor ROIs or return-to-sender snafus if not caught beforehand (or avoided upfront through proper data hygiene practices). Support from a nonprofit technology consultant might be a smart choice if you already know that your approach to data management needs an upgrade.
- Regularly audit your tools. It’s a good idea to frequently review the effectiveness of your AI tools—are they generating value, and are their predictions and suggestions truly helpful? If you notice problems, they can point to issues with the tool itself, the data it draws from, and/or the quality of your in-house data. Reassessing your potential for risk should also be a key component of these regular audits.
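As one concrete illustration of the data-hygiene point above, here is a small hypothetical Python sketch using pandas. The column names, dates, and staleness cutoff are assumptions made for this example only; the pattern—collapse duplicate records to the most recent one, then flag stale entries for review before they feed your AI tools—is the part that generalizes:

```python
# Hypothetical sketch of basic donor-data hygiene before feeding an AI tool.
# Column names and the 2022 staleness cutoff are illustrative assumptions.
import pandas as pd

donors = pd.DataFrame({
    "email": ["a@example.org", "a@example.org", "b@example.org"],
    "last_gift_date": pd.to_datetime(["2024-01-15", "2023-06-01", "2019-03-10"]),
})

# Keep only the most recent record per donor (deduplication)...
clean = (donors.sort_values("last_gift_date")
               .drop_duplicates("email", keep="last"))

# ...then flag records with no activity since an agreed cutoff for review.
stale = clean[clean["last_gift_date"] < pd.Timestamp("2022-01-01")]
```

In a real database this kind of cleanup would be one step in a broader data-management routine, which is exactly where a nonprofit technology consultant can help.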
These best practices should help cover your bases, but it’s worth gaining a deeper understanding of the underlying principles that make up responsible AI use. DonorSearch and the Fundraising.AI collaborative sort them into key tenets, including:
- Privacy and security
- Data ethics
- Transparency and explainability
- Continuous learning
- Legal compliance
- Social impact
Understanding how AI for fundraising works, vetting your tools carefully, and integrating the best practices above into your AI workflows will allow you to build these tenets into how your nonprofit approaches AI. For a closer look at each of them, explore the DonorSearch guide to responsible AI usage.
All new tools that use sensitive data bring some level of risk, but as a wide and rapidly changing frontier, AI stands out. Its potential risks don’t have to be a mystery, though.
Learning more about how your AI technology works, what it uses to generate predictions, and what constitutes responsible AI use will be the best first steps you can take. As you learn and refine your approach to AI, keep in mind that building and maintaining trust with your community is essential. Be transparent about the steps you’re taking to use this technology safely, back it up with best practices, and you should see some amazing results for your nonprofit.