Another 38% feel that disclosure is not relevant to their organisation’s use of AI tools. Here, individuals often reported that they do not disclose use when the AI tool is already embedded into existing software, or if the tool is used for internally facing activity.
A discussion group participant from a community building organisation, with no generative AI policies, said:
“Do people disclose that they’ve used a spell-checker in a document? Do they disclose that they’ve had an alarm to remind them when there’s a call?… how many tools do we use during our day, that we never even think twice about needing to disclose to somebody that we’re using that? So, I think it is a good question and I think it depends on how people are using it.”
For some organisations, the decision on whether to disclose use also depends on the perceived risk of the context in which the tools are used. For perceived lower-risk tasks, such as generating ideas, many deem disclosure unnecessary.
An interviewee from an organisation working with immigrant and refugee communities, with no generative AI policies, said:
“I don’t think there’s anything on our platform that has been, had such an influence from AI, that we would need to disclose it… it would kind of be used to inspire something but even we would then go and change most of it, so the AI influence on it would be, maybe 1%. So no, we don’t.”
Some organisations actively reject the need to disclose AI use altogether, including when producing written content. Many see disclosure in these cases as undermining their efforts to oversee and edit the content.
A discussion group participant from a local grassroots organisation with no generative AI policies said:
“No, I don’t disclose it because I work at it pretty hard. You know, it’s not a copy-and-paste job. It’s like having an assistant who’s doing a portion of the thinking. I’m still pretty much doing the thinking, I’m leading with the questions… I feel like it’s just helping me to sort out my thinking. Like, I’m working pretty hard, so I still think that’s just my work.”
Such divergent views on disclosure tend to be based on differences in organisational beneficiaries. If organisations believe that their beneficiaries are vulnerable and/or may be sceptical about technology, they tend not to disclose their use. Many focus on what AI enables them to do, rather than the fact that they use AI to do it.
A discussion group participant from a local organisation with no generative AI policies said:
“I have fears about [disclosure], mainly because… we currently have so many beneficiaries and members of the community that are really fearful of technological change. Many of our beneficiaries are older and potentially vulnerable and… I worry that they would see something like that and be frightened about the way that our charity is operating.”
Other organisations working on public campaigning, influencing or policy, which may engage with a broader range of audiences, are more concerned about not disclosing their generative AI use, fearing that non-disclosure would discredit their organisations. The output such organisations create using generative AI tends to reach larger audiences, making disclosure even more critical to being seen as trusted organisations.
An interviewee from a campaigning organisation with generative AI policies said:
“We probably never want to be opaque about our use of AI, particularly in image generation… I could see how it would be tempting in some contexts… for example, we want to present ourselves as being a more diverse organisation than we actually are and we’re going to generate AI people [for organisational flyer]… I feel like that would be skating us right over the edge of what would be acceptable to beneficiaries and supporters.”
An interviewee from a community building organisation with generative AI policies said:
“Public trust in charities is absolutely fundamental to what we do… we can behave in a way that does not damage that trust. So when you’re using things like AI imagery, [other organisation] have already run into a little bit of a trouble where they used [AI-generated images]. Everyone looked at it and assumed it was real and then got a bit hissy when they realised that it wasn’t real.”
There is a strong relationship between disclosure and trust, and while some organisations are aware that disclosure connects to their values, this is not the case for all. Some may not have considered the connections between disclosure, trust, transparency and authenticity; others may simply see non-disclosure as the right thing to do. It is important to clarify, however, that these organisations do not see non-disclosure as an act of deception. Rather, non-disclosure is presented as an act of care, to avoid panicking beneficiaries who may be fearful of technologies.
It is not the purpose of this report to evaluate organisations and identify ‘correct’ practices, and we acknowledge the diverse circumstances that non-profit and grassroots organisations operate within. Many face tough decisions in using generative AI, and disclosure is not common practice in other sectors either. Yet the question remains of what the future of disclosure might look like for trusted organisations, as AI becomes more embedded in existing tools.
Considerations
Our exploration of non-profit and grassroots organisations’ governance structures for generative AI use, including policies, disclosure and trust, has led to the following recommendations for consideration.
Non-profit and grassroots leadership
- Consider having someone with responsibility for AI governance across all elements of administration and service delivery.
- Prioritise identifying with communities how the use of generative AI tools might affect trust and service delivery, particularly in relation to transparency.
Funders and/or supporting bodies
- Consider whether a service could be provided to support smaller organisations with relevant generative AI policies without adding too much operational burden.
- Build on existing resources for the sector specifically as it relates to generative AI, to ensure that even smaller organisations are aware of the potential risks and benefits and have the guidance they need.
Sector collaboration
- Continue to discuss how to navigate the trade-offs between values and imperatives in the ways generative AI is used, not only as purely internal decisions but also as ones on which the sector as a whole can take positions. Consider developing frameworks that do not preclude supporting organisations with different priorities and missions, such as those that do not wish to use generative AI and those that have found genuinely transformational uses for the tools.
- Consider what can be learned (particularly from environmental justice organisations) to develop approaches across the sector for knowing how to respond to the environmental impacts of AI.
Generative AI readiness
AI readiness has been described as an organisation’s ability to use AI in ways that add value (Holstrom, 2022). It can include areas such as digital and data infrastructure, skills, organisational culture and mindset. Previous research has surveyed charities’ use of AI and their capacity to adapt to and benefit from it (Amar and Ramset, 2023). We wanted to dig into perceptions of what is needed to adapt specifically to generative AI, and how organisations are managing this challenge. Specifically, we explore:
- training on generative AI
- leadership and frontline perspectives.
Training on generative AI
The pandemic catalysed changes in how non-profit and grassroots organisations used digital technology to deliver their services: 82% of organisations said they had to invest in new technology to adapt to the pandemic, which drove demand for digital skills among staff and volunteers (NCVO, 2021). Generative AI is now also being used to help solve problems related to the increasing economic pressures that organisations are facing. A similar trend can therefore be predicted, in which organisations look to upskill their workforce to use generative AI more effectively. This also reflects the narrative that there is a race to adopt AI tools to secure one’s job: ‘AI won’t replace humans, but humans with AI will replace humans without AI’ (Lakhani, 2023).
Previous research has also highlighted the lack of AI training; in CAST’s AI survey, 51% of respondents had not received any training or support around AI (CAST, 2024b). Through our engagement with charities, we uncovered 2 axes of discussion when it comes to training:
- AI training content
- AI training creators.
AI training content
The majority (69%) of organisations using generative AI tools have not received formal training. Nevertheless, most organisations expressed a need for such training. When asked about the content of that training, 2 perceived requirements emerged. Some organisations want operational training on how to use generative AI tools.
A discussion group participant, director at a disability justice organisation, said:
“I want a person talking me through the first steps of, ‘Okay, you sign up, and then these are the… this box does this for you… and this is the area where you do X, Y, Z’. It demystifies how scary and difficult it will be. So, I think, yeah, that practical 1–2–1 support, but also just having a general, like, okay, these are the things that it’s able to do on a broad umbrella, but then also examples.”
Many participants are conscious of the need for a more expansive understanding of AI as a socio-technical phenomenon, referring not just to technologies but also to the complex social interrelations with which they are imbued. Some organisational leaders also expressed the need for training on the technical foundations of AI and data, to enable critical thinking about how to respond to tools.
A discussion group participant, head of AI/digital at a national charity, said:
“Now suddenly, everybody is extremely excited about AI and wants to skip all these other steps. But the risk there is that some of the foundational understanding in what computing is, what data analysis is, without having that in place, there’s a lack of interrogation that might be happening in the output that you’re getting from a generative AI tool, or a lack of awareness of where you might be encountering disinformation.”
A discussion group participant, head of AI/digital at a large organisation, said:
“The training courses that are out there are either really basic or really technical, and I actually need something a bit in-between that’s going to take our context into consideration.”
AI training creators
Faced with stretched budgets and an expressed need for AI training, our research participants are drawn to freely accessible training resources. However, free resources could be problematic for 3 reasons:
- Discoverability: there is a lack of awareness around ‘training available that’s low cost because charities don’t have a lot of money’ (senior leadership, local organisation).
- Tailoring: free resources are not personalised and, therefore, not necessarily effective within an organisation’s specific context.
- Motivation: many training resources exist to promote AI tools to potential users and, as such, are not as balanced as they need to be.
Unpacking this first point, many organisations remain unsure where to go for training. The quality of the training available is also highly variable. In CAST’s 2024 AI survey, of those who received training on AI, only 6% felt that it was sufficient (CAST, 2024b).
A discussion group participant, director at a community building organisation, said:
“I might not just go to YouTube and look about it [learning about AI]. So, yeah, where’s that training going to come from and who are we going to trust and what are the stages of it?”
This concern shows a lack of confidence in auto-didactic (self-teaching) methods for learning new tools. It could be useful to explore the extent to which this response reflects an already overstretched workforce struggling to keep up with releases of new tools, or a feeling of disempowerment when it comes to grappling with AI technologies generally.
It also raises the question as to who and what defines a trusted training provider. Two opposing views emerge throughout the discussions. Some argue that the companies developing AI tools should also provide training.
A discussion group participant, director at a disability justice organisation, said:
“Do they [tech companies] have a representative that could come and do a session… how we can get improved access so that they’re giving direct support, whether that looks like having some kind of discount code or having those sessions where they can come and ask questions and support people to set it up.”
Others oppose the idea; they perceive technology companies as ‘the bad guys’ who may embed their profit motivations into training materials. Indeed, the UK lacks nationwide AI literacy initiatives, which means that there is a market gap for tech companies to provide free explainers and training resources as part of their content marketing strategies (Duarte, 2024). Research participants who oppose tech-company-led training place the responsibility on the non-profit sector to develop or procure training resources, alongside providing spaces for sector-wide discussions to highlight shared challenges and case studies.
A discussion group participant, director at a local organisation, said:
“The charity sector should be… producing something with some credentials. But it’s certainly not our bag to do that. We rely on the bad guys to kind of be coming up with stuff like that, and it’d be good if they didn’t look at it as an income-generating opportunity.”
Questions remain around to whom the sector can turn for unbiased AI training resources. Interestingly, government agency support is not mentioned at all as a possibility. Whilst big tech firms may hold the technical knowledge, they may not be best placed to provide training, nor may their business models or marketing strategies align with what non-profits would find most useful.
Leadership and frontline perspectives
Larger organisations point out a disconnect between leadership and frontline staff’s perspectives on generative AI. Leadership, who mainly decide on AI policies and strategies, are often seen as ‘too cautious’ regarding such technology. In contrast, frontline or junior staff are pressed for resources and often turn to generative AI tools in response.
A discussion group participant, head of AI/digital at a large organisation, said:
“…wider awareness within our leadership teams, around risks of AI, are very strong. And I would say, at senior level, there is more trepidation than there is in some more, kind of, people on the ground using it who are definitely playing around with it, trying it, doing more things with it than at leadership level.”
Leadership’s relative caution around using generative AI may indicate that they place greater weight on ethical concerns or consider the potential risks of reputational damage and violation of legal frameworks (such as those around data privacy), the consequences of which would most likely fall on their shoulders.
In contrast, it is reported that more junior or frontline staff engage in more experimentation with generative AI tools. This unsanctioned or ad hoc use (also called ‘shadow use’) often aims to free up time that can then be spent delivering in-person services. Shadow use of AI tools is not unique to the sector; it has been reported as a growing concern across the private and public sectors (Salesforce, 2023; GovUK, 2024c). Almost a quarter (24%) of survey respondents report using generative AI tools that are not formally approved by management.