
Prompt Engineering: Using AI and Large Language Models (LLMs) for Grant Writing

By Griffin Smith

In today’s blog post, I’d like to consider how to get the most out of AI tools, wherever you might use them in your workflow. This is sometimes referred to as “prompt engineering,” and it’s an important part of any experimentation with generative AI. How can we get AI models to do what we want across various domains?

Part 1 by Dr. Agnella Izzo Matic was a useful primer called “Don’t Believe the Hype: Setting Realistic Expectations About AI in Grant Writing.” It’s worth reading as a general introduction, and it provides a list of AI tools specific to grant writing that you can explore. In general, though, the piece cautions us to be aware of AI hallucinations and not to blindly trust what a chatbot writes.

Part 2 by Dr. Becky Miro introduced helpful tips for using AI, like when to experiment with chatbots and what to be wary of. Becky suggests you generate an initial draft and have the AI refine it, so the ideas remain yours. She also points out that experts can potentially get more out of AI than less-experienced researchers, and using AI may lead younger professionals to miss out on valuable skills and human collaborations.

The Problem with Large Language Models

It’s easy to write off Large Language Models (LLMs) because they are known to hallucinate. AI models can confidently say things that sound correct but lead to imaginary books, fake articles, common misconceptions, or blatant falsehoods. As Dr. Agnella Izzo Matic points out in Part 1, chatbots model the shape of language, in effect guessing the next word in a conversation over and over again, which can lead to insidious biases and convincing hallucinations. The result can be a “word salad” (to use Dr. Matic’s term) masquerading as sensible language. However, a great deal of research has been done on precisely this issue.

Any chatbot you use today is a more refined version of a cruder previous bot, so it can be hard to keep track of its changing weaknesses and remain skeptical of the right things over time. In general, each new generation of LLM is trained on more data and refined with different guardrails through testing and reinforcement learning. The aim is to make bots that are truthful, helpful, and polite.

While there is still much work to do, huge advances have been made recently in “aligning” AI models with these parallel goals. Not only have hallucinations been greatly reduced, but chatbots can now often search the web and cite human sources in their answers, and new models are trained to be increasingly flexible, accountable, and accurate. While we should remain thorough and always check the AI’s output (as Becky emphasizes in Part 2), we can increasingly rely on chatbots to adhere to our instructions and carry out tasks reliably. So how do we get LLMs to perform at a high level, and how do we provide clear instructions?

Prompt Engineering

Unlike a device with intuitive buttons or handholds, an LLM’s blank text box can be hard to approach. The nice thing, though, is that chatbots expect natural language. Unlike software with confusing menus or a dense instruction manual, a chatbot lets you ask clarifying questions anytime you’re confused.

Users can input text and upload files, which leads to AI output, which leads to more human input, and so on. This is the chat. OpenAI has a prompting guide here, which is great to consult for specific strategies, but in general, here’s what you should do when prompting any kind of chatbot:

1. Prioritize clarity in your prompts

  • Use specific terms from your expert domain when possible, provide clear measurable goals, and note things for the AI to avoid
  • It can be helpful to provide a role or a “voice” for the AI to perform, because this can imply many things about what you want the AI to do. The prompt “You are a Shakespearean pirate” condenses a great deal of information; it’s more efficient than describing how to talk or what to say. Consider the kind of voice you require, and describe the AI’s role in detail.
  • The AI should also understand your perspective: what do you hate writing or need help with, what is your background and level of experience, what is the scope of your project? The AI can use these details to provide you with precise answers.

2. Supply the AI with reference texts

  • In addition to a text prompt, you can usually upload files to an AI chat. You should provide the AI with any reference texts you’re working with, some examples of good output, and any formatting guidelines.
    • For example, say you have 100 journal articles and you want to convert each one into a single-sentence description. This seems like a good task for AI, but maybe the LLM is struggling. Try doing it once or twice manually, converting a few articles into perfect one-sentence descriptions yourself. Then show these mappings to the AI and tell it to base future output on this gold-standard example (see the sketch after this list for what that can look like).

3. Break complex tasks into smaller steps

  • Treat the AI with a little bit of suspicion, like a new assistant who requires clear instructions because they don’t have great judgement. Leave no room for error in your prompts.
  • This means describing what you need step-by-step, and asking the AI to show its process for later reference.

4. Refine your prompt through experimentation

  • Using the previous tactics, you can produce a robust initial prompt. When the bot’s first response falls short, though, it’s important to analyze your prompt and consider how to refine it.
  • Once you notice problems, try to adjust your wording, remove unnecessary terms, or add in details. If things really go off the rails, try opening a whole new chat and starting over. Prompting is all about iteration, detailed phrasing, and tweaking your instructions based on initial results.

5. Move between tools

  • Produce an initial rough draft by hand for the AI to refine, and then treat the AI’s output as a more polished draft which you’ll finalize by hand. This maintains your voice and allows you to format and export things with perfect control.
  • Export the AI output (which might be in the form of spreadsheets, computer code snippets, images, drafts of emails, bullet points, or paragraphs) for manual editing.
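
If you’re comfortable with a little code, the gold-standard-example idea from point 2 above can also be scripted. Below is a minimal sketch written against OpenAI’s Python SDK; the model name, article texts, and hand-written summaries are placeholders for illustration, not anything from this post or a recommended setup.

```python
# Few-shot prompting sketch using OpenAI's Python SDK (pip install openai).
# The model name and all texts below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Two "gold-standard" mappings written by hand: each pair shows the model
# exactly what a perfect one-sentence summary looks like.
examples = [
    ("Full text of journal article A...",
     "A hand-written one-sentence description of article A."),
    ("Full text of journal article B...",
     "A hand-written one-sentence description of article B."),
]

messages = [{
    "role": "system",
    "content": ("You are a grant-writing assistant. Summarize each "
                "journal article in exactly one clear sentence."),
}]

# Supply the worked examples as prior turns in the conversation.
for article, summary in examples:
    messages.append({"role": "user", "content": article})
    messages.append({"role": "assistant", "content": summary})

# Now ask for the next article in the same format.
messages.append({"role": "user", "content": "Full text of journal article C..."})

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```

The same pattern works in a regular chat window: paste your worked examples first, then ask the AI to handle the next item in exactly that format.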

Putting it All Together

A good prompting workflow looks like this: you gather primary sources, formatting requirements, and a rough draft with some initial ideas. You might upload all of these files with the prompt:

“You are a grant-writing expert who helps produce compact, clear prose. You use contemporary scientific language in the field of Reconstructive ACL Surgery. Emphasize our lab’s efficiency and the specific need for October travel funding (see the attached budgeting proposal for dates). Be straightforward but upbeat in your language, and include all the authors’ names and full titles in the sign-off. Do not mention any work from Yale, as we have another paper referencing them in more detail later. Also, make sure the sources are in MLA style, and follow the updated department guidelines I’ve attached (they are very strict about page number formatting, so note those in particular). Using the bullet points in my rough draft, show me a few possible structures for the piece.”

After seeing the initial output, you might refine the prompt and try again, or say something granular like “Change it to present tense, and rewrite section 4 to be less technical and more straightforward. Remove section 5.”

You might follow up with “Now develop a full outline for each section, and note which bullet point from my rough draft is included where.”

You might then say “Produce the full introduction, and I’ll review it to see if anything is wrong.”

You can produce each section of your piece this way, and conclude by saying “Now synthesize all of the parts we’ve written into a complete draft.” Then, you can bring that draft into a Word document for final edits.
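
The same back-and-forth can be scripted if you prefer working through an API rather than a chat window. The sketch below uses OpenAI’s Python SDK with abbreviated stand-ins for the prompts above; the model name is a placeholder, and in practice you would also paste in your reference material.

```python
# The iterative workflow above, sketched with OpenAI's Python SDK.
# Prompts are abbreviated stand-ins; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system",
     "content": "You are a grant-writing expert who produces compact, clear prose."},
    {"role": "user",
     "content": "Using the bullet points in my rough draft, show me a few possible structures."},
]

# Each refinement is a new turn appended to the same history, so the model
# keeps the full context of earlier drafts and corrections.
follow_ups = [
    "Change it to present tense, and rewrite section 4 to be less technical.",
    "Now develop a full outline for each section.",
    "Produce the full introduction.",
    "Now synthesize all of the parts we've written into a complete draft.",
]

for prompt in follow_ups:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": prompt})

# One last call answers the final "synthesize" request; the result can be
# pasted into a Word document for manual editing.
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```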

Using all of these strategies, you will be able to get the most from AI tools even as they evolve in the coming months and years. The strange experience of working with AI can push you to analyze your entire workflow, identify your pain points, and devote your energy to the parts of writing you actually care about. Think of AI as a truly around-the-clock assistant who can proofread, format, brainstorm, research, and support your own writing in practically any way, but only when given the right set of instructions.

And while it’s true that recent LLMs can lack common sense, commit simple errors, and even hallucinate on the job, there is no doubt that they are getting better in every conceivable way, and fast. At this point in their development, LLMs can offer roughly the writing support of a capable human assistant, and they will supercharge anyone who uses them effectively. You need a critical eye and a clear goal, but with those two things, LLMs can help you do almost anything.

Common Mistakes in Prompting AI Models

  • Over-Relying on Single Complex Prompts
    • Many users try to accomplish everything with one lengthy prompt, leading to confused or inconsistent outputs. Break your tasks into smaller, focused prompts that build upon each other. For example, instead of requesting a complete grant proposal in one go, separate the work into outline creation, section drafting, and revision phases.
  • Not Verifying AI-Generated Citations and Sources
    • LLMs may sometimes fabricate citations, blend references, or make incorrect assertions about source material (although with search engine integration, many AI models are solving this problem). Always verify citations, statistics, or specific claims in the AI’s output against your original sources. This is especially crucial in grant writing, where accuracy and credibility are paramount.
  • Failing to Iterate and Refine Prompts
    • Many users abandon their efforts after a single subpar result. Effective prompt engineering requires iteration and refinement. Keep a record of successful prompts and the specific elements that led to better outputs.
  • Oversharing Sensitive Information
    • In the rush to provide context, users sometimes include unnecessary sensitive details in their prompts. Remember that anything you share with an LLM should be treated as potentially public information. Sanitize your prompts by removing confidential data, personal identifiers, or proprietary information before seeking AI assistance (a small example of this kind of scrubbing follows this list).
  • Forgetting to Specify Output Length and Detail Level
    • Without clear parameters, AI outputs can range from overly brief to unnecessarily verbose. Specify your desired length (e.g., “in approximately 500 words” or “in 2-3 paragraphs”) and the level of detail you expect. This is especially important when working with format-specific content like grant sections or executive summaries.
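
For the oversharing pitfall in particular, even a small scripted pass can help before you paste text into a chatbot. The patterns below are illustrative assumptions; a real redaction step should be tailored to whatever identifiers actually appear in your documents.

```python
# A tiny, illustrative sanitizer for removing obvious identifiers before
# sharing text with a chatbot. The patterns are examples, not a complete list.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),   # US-style phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # SSN-shaped numbers
]

def sanitize(text: str) -> str:
    """Replace obvious personal identifiers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(sanitize("Contact Dr. Smith at jsmith@example.edu or 413-555-0100."))
# -> Contact Dr. Smith at [EMAIL] or [PHONE].
```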

Author:
Dr. Meg Bouvier

Margaret Bouvier received her PhD in 1995 in Biomedical Sciences from the Mount Sinai School of Medicine. After an NINDS post-doctoral fellowship, she worked as a staff writer for long-standing NIH Director Dr. Francis Collins in the Office of Press, Policy, and Communications for the Human Genome Project and NHGRI. Since 2007, Meg has specialized in editing and advising on NIH submissions, and she began offering virtual courses in 2015. She has recently worked with more than 25% of the nation’s highest-performing hospitals*, three of the top 10 cancer hospitals*, three of the top 16 medical schools for research*, and eight NCI-Designated Cancer Centers. Her experience at NIH as both a bench scientist and a staff writer greatly informs her approach to NIH grantwriting. She has helped clients land over half a billion dollars in federal funding. Bouvier Grant Group is a woman-owned small business.

*As recognized by the 2024/25 US News & World Report honor roll.