A couple of weeks ago, Vu Le wrote about how useful AI can be in the grant writing process. Granting organizations so often ask for essentially the same information, with some variation in what they want answered when and in the word or character limits they set for each response.
Given that grant awards tend to favor organizations with the resources to hire a professional grant writer versed in the terminology and language funders seek, under-resourced groups, and those less comfortable or fluent in the preferred vernacular, could benefit from using AI.
Unfortunately, Le notes, some funders are using AI to detect if an organization is using AI to write their grants. Le writes:
“Grants are not college essays or news articles, where it matters who actually does the writing. Grants are a tedious mechanism for delivering answers about an organization and its work. AI just makes it less tedious. Punishing nonprofits for using AI is petty and paternalistic.”
He also says some funders are moving toward having AI evaluate grant proposals, which is even worse for a number of reasons.
“Funders who use AI to write grant RFPs, read proposals, eliminate applications, come up with a list of grant finalists, or whatever, should be aware that AI engines, which are mostly designed by white dudes, will likely favor white-coded proposals. It will be interesting to see the dynamics between AI-generated grant proposals and AI-supported grant review and selection. To keep it from reinforcing inequity, both funders and nonprofits need to be aware of biases that are built into these tools.”
For years there have been conversations about the job-seeking process and how dispiriting it is to have a computer program evaluate, and summarily reject, your resume and cover letter before a human ever sees them. Many applicants have learned to game the system by stuffing their materials with keywords, sometimes producing stilted or nonsensical content that nonetheless advances their application.
The grant application process is bad enough as it is without incentivizing cynical attempts to game the system. What would it say if an AI awarded a grant to an AI-constructed application that no human ever seriously evaluated, over an impassioned application written by a person? Should funding for homelessness projects be determined solely by algorithms conversing with each other?
If funders are trying to detect grants written by AI out of concern about possible fraud, that is certainly valid. But that is also an indication that funding decisions should never be entirely made on the basis of polished prose. Vu Le suggests that just as AI can free applicants up to concentrate on delivering their core services, so too can it free funders up to focus on more directly interacting with those they fund to learn more about the work they do. Likewise, they can work on re-evaluating the criteria and processes they employ as part of their funding decisions.
There is an opportunity to double-check the AI. Are its recommendations poor to middling in quality? Are the applicants it rejects doing better work than the AI indicates? AI can certainly be useful in removing some of the subjectivity a person brings to evaluation, but for every example of it outperforming humans, there are gaps it fails to fill, sometimes so glaring that a five-year-old would have avoided them.