How Philanthropy’s $1 Billion Big Bet on AI Can Help Nonprofits Advance Technological Literacy
There is a lot to celebrate about the announcement that a coalition of major philanthropists will commit $1 billion to support the development of artificial intelligence (AI) tools for frontline workers, including public defenders, social workers, and others serving people in vulnerable situations. If these AI tools help the nonprofit workforce, as well as the for-profit and government sectors, with tasks such as organizing caseloads, completing required reporting, and communicating with constituents, they could help bring about meaningful outcomes.
While this investment has the potential to transform how organizations serve their communities, that will only be possible if the implementation strategies are shaped around day-to-day realities, strengthen organizations’ ability to carry out their mission, and are built in genuine partnership with the people delivering direct services.
Practical Realities and Aspirations Need to Align
The nonprofit sector has a long history of being asked to solve some of society’s most complex problems while operating with fragile budgets, understaffed teams, and aging technology. Staff and boards often lack training with even the most common software. I’ve seen peers struggle to access a Google Doc or add a meeting to their calendar. While I acknowledge that AI can be used by people without advanced digital skills, shoring up basic technological literacy is an essential first step.
Capacity building has to be part of the plan from the start: nonprofits need hardware, software, technical support, hands-on training, and adequate time to learn. Strategic planning, clear metrics, and reporting support should be integrated into the grantmaking process. When an organization's goals for using technology, including AI, are built into its strategic plan, the frontline team can accelerate progress toward those goals and improve results. Without that foundation, an investment in new technology, including AI, risks becoming another shiny new tool bought with enthusiasm and left to gather dust.
Blending Technology and Social Impact
Nonprofit leaders, especially those in civil rights, environmental justice, and public health, have seen what happens when Big Tech’s culture collides with community-based work. Emphasizing efficiency over deliberation, scale over nuance, and disruption over repair can make sense in Silicon Valley but doesn’t always fit the reality of how trust is built or progress is made on the ground.
Still, this initiative is an opening to do things differently. If foundations and technologists co-create with frontline organizations from the very beginning by deciding together who is at the table, who is heard, who is compensated, and who is accountable, the resulting tools can reflect shared values and real-world needs. That kind of design process takes longer, but it produces something worth using.
Making Big Bets Work
In large-scale philanthropy, the method of allocation often matters as much as the amount. Decisions about who is eligible, who gets invited to apply, and who sits on review committees can unintentionally favor well-known organizations or those already in a donor's network. In Chicago, I've seen promising groups left out simply because no one on the decision-making team knew of their work, or because decision-makers discounted it for superficial reasons.
For this AI initiative to succeed, the process should actively put frontline organizations in leadership roles, learn from research on equity in nonprofit technology, and make space for course correction. Big bets lose steam for many reasons: leadership turnover, shifting donor priorities, economic downturns, or other unexpected disruptions, such as COVID-19. Without a long runway and steady investment, nonprofits could be left with outdated hardware, expired licenses, and no ongoing training.
AI for the Public Good
Technology is never neutral. If tools built for nonprofits turn out to be ineffective, or worse, harmful, the sector will have spent precious time and energy for little return. Since AI can replicate bias and obscure harm, this effort needs clear safeguards, regular evaluation, and the willingness to halt or adjust if problems emerge. The measure of success here isn’t the size of the investment or the novelty of the tools, but how much they improve people’s lives.
None of this is to question the goodwill or intentions of the very experienced grantmakers leading this initiative at NextLadder Ventures. The social sector badly needs both the funding and an AI upgrade. However, lasting impact depends on what happens after the announcement: who is at the table, how decisions are made, and whether the rollout matches the daily realities of nonprofit work.
Done well, this could be a model for how big bets in philanthropy deliver on their promise, not just for the donors, but for the communities they aim to serve.
Further Reading/References:
Lisa Bertagnoli, "Why Did the Mayor Abandon Get In Chicago?" Crain's Chicago Business, February 4, 2017.
Robert Hulshof-Schmidt, 2024 Nonprofit Digital Investments Report, NTEN and Heller Consulting.
"Equity Guide for Nonprofit Technology," NTEN, March 2025.
Questions or comments?
Reach out to us at founders@planperfect.co!