Charting the Path to AI Readiness for Non-Profits: A Conversation with Andrew Welch, HSO's CTO of Cloud Services
In October, HSO is pleased to be presenting at the NetHope Global Summit for the second year in a row. NetHope is a consortium of more than 60 leading global non-profits that unites with technology companies and funding partners to design, fund, and implement innovative solutions to address development, humanitarian, and conservation challenges. Each year, NetHope members meet with corporate partners and supporters to address how to use technology to make collective and impactful progress against the world’s most pressing issues.
It will come as no surprise that the hottest topic this year is AI. AI has only recently come into the spotlight in a significant way and is constantly changing, so non-profits might feel that they're not quite ready to tackle it yet, and more than likely are not even sure what it means for the sector and for their organizations.
But AI is here to stay, and it’s not too early for non-profits to start thinking in that direction. At this year’s NetHope Global Summit, Andrew Welch, HSO’s CTO of Cloud Services, and Tim Bachta, Vice President, Global Information Technology at Children International, will be discussing what non-profits can start doing now to prepare for and take full advantage of what AI has to offer.
We recently sat down with Andrew to get his take on AI in the non-profit world. Following is an excerpt from that discussion:
Andrew, we’re familiar with your 20-plus years of experience in cloud technology, but what makes you particularly interested in how AI is impacting non-profits?
I come from a sort of double background, in technology and, well, not technology. I grew up in the US Coast Guard, which is not a non-profit, but because of its humanitarian mission it does have several non-profits associated with it or under its control. Earlier in my career, I also worked with an organization called the Global Environment and Technology Foundation that essentially brought technical solutions to environmental challenges. So, the non-profit world has always been something I care very much about.
So, in your opinion, is the non-profit sector ready for AI? Should they even be thinking about it right now?
Listen, the truth is that almost nobody is ready for AI now. Almost nobody is really ready to capitalize on what AI can do for them now, let alone six months or a year from now, or far into the future.
The funny thing is that the reason they're not ready for AI is not the reason they think. Over the last 20 years or so, the volume of data that individuals and organizations produce, and that society owns collectively, has become overwhelming. Our ability to produce and store data has really outpaced our ability to use, curate, and consolidate it, to really do something with all that data.
Data has proliferated, but we haven't had the wherewithal to really do anything with that data, so we just figured we’d let the future “us” deal with how to corral it. So today, most organizations have data all over the place and in various states, not consolidated. It’s not unusual for an organization today to have dozens of “single sources” of truth for data because the data just landed wherever it landed.
How does the data problem impact thinking about AI?
Data is at the core of AI. AI depends on data. So, as organizations—and not just non-profits—start to think about their AI strategy, they must first deal with the data problem.
With that in mind, here is how I look at an AI strategy. There are four pillars:
The first pillar is data consolidation. We have to bring all data together into services and data storage facilities that are accessible or addressable by AI.
The second is data readiness. Once the data is consolidated and in places where AI can access it and use it, we next need the wherewithal to be able to do that. Part of that is data governance, and part of that is data security. How do we know what’s in those consolidated stores, and how do we make sure it’s secure and governed properly?
Part of the answer is indexing. In the Microsoft world, this is accomplished through a service called Cognitive Search, which indexes your enterprise data and makes it searchable, much like Googling something.
But what Microsoft has quite ingeniously done is take Cognitive Search a step further and use the index as a way to feed the AI model, a pattern often called retrieval-augmented generation. Data lands inside of, say, a data lake or Azure Blob Storage; Cognitive Search indexes that data; and then the AI uses that indexed data to produce the response for the human.
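To make that pipeline concrete, here is a minimal sketch of the pattern in Python, using the azure-search-documents and openai packages, and assuming an index has already been populated from Blob Storage. The endpoint, keys, index name, the "content" field, and the deployment name are all hypothetical placeholders, and this is one illustrative way to wire it up rather than a prescribed implementation:

```python
# A minimal sketch of the "index feeds the model" pattern. Documents are
# assumed to have already landed in Blob Storage and been indexed by
# Cognitive Search. All endpoints, keys, and names below are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="org-documents",                    # hypothetical index
    credential=AzureKeyCredential("<search-key>"),
)

question = "Which of our programs grew the most last year?"

# Step 1: retrieve the most relevant indexed passages for the question.
results = search.search(search_text=question, top=5)
context = "\n\n".join(doc["content"] for doc in results)  # "content" field is assumed

# Step 2: hand the retrieved passages to the model as grounding context.
client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<openai-key>",
    api_version="2024-02-01",
)
response = client.chat.completions.create(
    model="<your-gpt-deployment>",                 # deployment name, not model family
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

The key design point is that the model is not trained on your data; it simply reads whatever the index returns at question time, which is why the consolidation and readiness pillars have to come first.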
The third pillar is incremental AI. This is all about using AI to do things that a bunch of humans would otherwise have done. Microsoft’s Copilot technology falls into this category. For example, we need to identify which donors to approach. Traditionally, that would require a lot of manual analysis, or we could take a scattershot approach. But today, right now, we can use AI to be more precise, efficient, and effective about it. Those sorts of workloads will tend to be the lowest-hanging fruit and, in the aggregate, will make organizations more efficient and more effective, able to do more with less, et cetera.
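As a rough illustration of that donor-targeting example, here is a hypothetical sketch using a simple scikit-learn classifier. The CSV file and column names are invented for illustration, and this stands in for whatever model or Copilot capability an organization might actually use:

```python
# A hypothetical sketch of incremental AI for donor targeting: rank donors by
# predicted likelihood to give instead of taking a scattershot approach.
# The CSV file and column names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

donors = pd.read_csv("donors.csv")  # hypothetical export from a CRM

features = donors[["gifts_last_3_years", "avg_gift_amount", "events_attended"]]
target = donors["gave_to_last_campaign"]  # 1 if they responded to the last appeal

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# Score every donor and approach the most likely responders first.
donors["likelihood"] = model.predict_proba(features)[:, 1]
print(donors.sort_values("likelihood", ascending=False).head(10))
```

The specific algorithm matters less than the workflow: once the data is consolidated and ready, even a simple model can replace a scattershot appeal with a ranked outreach list.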
The fourth pillar is differential AI. I would define these as moon-shot, “Oh wow, maybe we can do this thing with AI that we never would have thought of” workloads. Experimental stuff, but potentially extremely valuable. For example, when it comes to food production, we can analyze millions of images of crops, at various points in the growing cycle, across entire regions, countries, or continents, and identify anomalies to be investigated. In the non-profit world, perhaps AI could be used to explore an entirely new line of service to support the mission, but done incrementally, in shorter sprints, to reduce risk and avoid overextending from an investment perspective. Set up the model. Expand the model. Prove it works. Rinse and repeat.
How can organizations balance their AI investments to reduce risk?
It’s true that there’s some risk that a particular use of AI won’t work out as you intend, particularly in those “moon shot” or differential AI types of workloads. It’s still early days. So there are several ways an organization can balance that risk, so that they don’t spend all of their investment in one place, so to speak.
The first option is to strike a balance between safer incremental workloads, which often involve turning on capabilities you may already have from Microsoft or another cloud provider, and a handful of more differential uses of AI that have a higher upside but a much greater risk of not panning out.
Essentially, balance the portfolio of AI initiatives so that there is a lot of incremental AI working to improve efficiencies, interspersed with differential AI. That way, the organization is still realizing value from its AI and data platform investments and not betting the whole farm on moon-shot ideas.
Second, I suggest that when investing in some of the more unproven differential uses of AI, make smaller investments and measure progress regularly. Demand that the people working on this for you show small progress every couple of weeks until you grow more comfortable with the likelihood of a particular investment working out. Don’t be afraid to walk away when it seems the technology or the data you have just isn’t going to get the job done, for now at least.
And finally, organizations can reduce their risk by working together with other organizations and other technology leaders—partners like HSO—sharing ideas and developments, thus sharing the risk that a particular AI workload won't work out, but also sharing the benefits of the AI workloads that do work out. This could be done through forums like NetHope. In the new landscape of AI, nobody has all the expertise or all the answers, so this approach is a very sound one.
This is where we are with AI: the work of finding that competitive edge, in my opinion, should be shared as much as possible. In theory, this should be much easier in the non-profit space than in the commercial space, because if you have two non-profits that are both pointed in a similar direction, they get a whole lot more from working together than they do from working apart.
That said, I would argue that very few, if any, organizations have gotten this far. And with non-profits having limited budgets and needing to spend that money to advance the mission, they are even further behind.
So, we know we have some way to go before most non-profits are ready to put AI to good use, but that doesn’t mean we can’t start preparing for it now.
What does “preparing for AI” look like for non-profits?
Every organization out there has a “we need to do more with less” problem, right?
But non-profits have a unique challenge in that they are accountable to their donors as well as to the people they serve. Unfortunately, many non-profits do not spend their money wisely. This needs to change, or they will fail. They must understand how to stretch every dollar as far as it can go and ensure they’re not spending anything on frivolous IT pursuits.
There's an opportunity here, I think, for non-profits to make investments before the AI train gets too far out of the station—investments in terms of data consolidation, data readiness, et cetera—that are going to allow them to use AI to dramatically improve their efficiency in the future.
I also think it’s going to become more cost-effective to implement an AI strategy sooner than we think. We’ve already seen the cost of data storage go down dramatically, and I think we’re going to see the cost of AI workloads go down as well. Microsoft is investing in technology to make AI easier and more accessible; it’s already much better than it was a year ago.
So, where do non-profits start with preparing for AI? The first step is to make data addressable so it can be used to run operations more efficiently: diverting resources within the organization away from administration and towards actual mission performance, raising money in a more targeted way, and showing donors that their money is making a difference. Now is the time to get all the pieces in place so non-profits don’t fall behind.
Tim Bachta, Vice President, Global Information Technology at Children International, is going to be joining you in this discussion. What do you hope attendees can learn from Tim?
Tim is going to bring real-world experience with AI. I want to learn from him what specific problems his organization is thinking about using AI to address, and what specific workloads they want to enable with it. That’s going to spark some very interesting conversation and questions, I think. I mean, non-profits are not all alike, but I’m sure any non-profit will glean very useful insights from Children International’s experience in the realm of AI.
That should be interesting, indeed. So, one last question: What do you hope to give to the attendees? What should they take away from the session?
Ultimately, I want to share with them a framework for how they can begin to think for themselves about AI and fashion their own AI strategy. I want to get them away from panicking about AI and toward thinking proactively about how it can help their organization in ways they never imagined.