I was slightly unsettled to be sent details of this impending webinar about automated grant writing using generative AI. I have no doubt it can be used effectively in this way as a means of supporting an existing process, particularly when it comes to discerning and reproducing a best guess at the form a funder is seeking, based on what they’ve rewarded in the past:
To do this well, an emphasis will be placed on how to intelligently prompt ChatGPT with the contextual knowledge it will require to write your grant well — using the funding body’s grant proposal guidelines, past successful applications, and more as input prior to requesting ChatGPT’s assistance. You will learn how to synthesize complex research goals into compelling and digestible content for potential national and other funding bodies, ensuring adherence not only to formal grant guidelines but also to more subtle normative rules for grant applications. The practicalities of using AI to maintain alignment with the ever-changing landscape of grant funding will also be discussed, as well as the usefulness of the ChatGPT skills you will acquire for doing a wide array of academic tasks — including various administrative tasks that otherwise might take up significant amounts of your time.
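For what it’s worth, the context-loading technique the blurb describes is not mysterious: it amounts to pasting the funder’s guidelines and an example of a previously funded application into the prompt before asking for a draft. Here is a minimal sketch of the pattern, assuming the official OpenAI Python client; the file names, model choice, and prompt wording are all illustrative rather than anything the webinar prescribes.

```python
from pathlib import Path
from openai import OpenAI  # official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative inputs: the funder's guidelines, one past successful
# application, and a summary of the applicant's research goals.
# These file names are hypothetical placeholders.
guidelines = Path("funder_guidelines.txt").read_text()
past_application = Path("successful_application_2023.txt").read_text()
research_summary = Path("my_research_summary.txt").read_text()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are assisting with an academic grant application. "
                       "Follow the funder's guidelines exactly, and imitate "
                       "the register of the example application provided.",
        },
        {
            "role": "user",
            "content": f"Funder guidelines:\n{guidelines}\n\n"
                       f"Example of a previously funded application:\n"
                       f"{past_application}\n\n"
                       f"My research goals:\n{research_summary}\n\n"
                       "Draft a project summary section that fits these "
                       "guidelines.",
        },
    ],
)

print(response.choices[0].message.content)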
But I find the second-order effects likely to flow from this extremely worrying. Will it lead to an increasing quantity of grant applications? Will they resemble each other? What will funders do to manage the glut and the overlap? What happens to innovation if we get into a cycle of automated grant applications being used to train future grant applications?
