How to write prompts that get the answers you want: don’t ask “questions”—communicate “specifications.”


For generative AI to be effective in day-to-day operations, it needs to handle a wide range of tasks that people used to do. That requires understanding internal data and domain-specific expertise, and being able to make the kinds of judgments unique to each business process.

Existing generative AI tools such as ChatGPT and Gemini are highly capable and can produce accurate answers. However, when you want them to support daily work—or even take tasks off your hands—two challenges tend to surface: inconsistent answer quality and the effort required to give instructions.

To address these challenges, you’ll need to customize generative AI.

In this column, we explain prompt engineering—both the foundation and the most important method for using customized generative AI effectively.

Three ways to customize generative AI

How “prompt engineering,” “RAG (retrieval-augmented generation),” and “fine-tuning” differ

To use generative AI for work, you need to draw out the best possible output—such as generating knowledge tailored to your industry and role—which means customizing how the model is used. There are three main approaches to customization: “prompt engineering,” “RAG (retrieval-augmented generation),” and “fine-tuning.”

Prompt engineering

A method for getting the output you want by refining the instructions (prompts) you give to generative AI. You control the model by embedding instructions, knowledge, and other guidance into the prompt.

RAG (retrieval-augmented generation)

A method that searches external data relevant to the user’s question and feeds both the search results and the user’s question into generative AI to obtain an answer.
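The retrieve-then-answer flow can be sketched in a few lines. This is a toy illustration only: the keyword-overlap retriever, the document store, and the prompt wording are all stand-ins; a real system would use a vector database and an actual model call.

```python
# A minimal sketch of the RAG flow: retrieve documents relevant to the
# question, then feed both the results and the question to the model.
# The keyword-overlap retriever below is a toy stand-in for real search.

DOCUMENTS = [
    "Expense reports must be submitted by the 5th business day of each month.",
    "Wireless earbuds for business use should support multipoint pairing.",
    "The VPN client must be updated before connecting from outside the office.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_rag_prompt(question: str, docs: list[str]) -> str:
    """Combine the search results and the user's question into one prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return ("# Reference material\n" + context + "\n\n"
            "# Question\n" + question + "\n\n"
            "Answer using only the reference material above.")

prompt = build_rag_prompt("When are expense reports due?", DOCUMENTS)
print(prompt)
```

The key point is visible in the final string: the model never has to rely on what it memorized during training, because the relevant knowledge rides along inside the prompt.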

Fine-tuning

A method that further trains generative AI on data from a specific domain to permanently shift its knowledge, phrasing, and decision-making tendencies.

|  | Prompt engineering | RAG | Fine-tuning |
| --- | --- | --- | --- |
| What you customize | Customization through instructions and knowledge included in the prompt | Knowledge customization by connecting to external data | Advanced customization beyond knowledge, including phrasing and decision tendencies |
| Response quality | Depends on how the prompt is written | Depends on the quality of retrieval | Depends on the quality and volume of training data |
| Knowledge updates | Real-time (immediate via prompt changes) | Real-time (handled via data source updates) | Slow (requires retraining) |
| Response style | Adjustable, but with limits | Primarily knowledge-focused; style mostly depends on the base model | Highly customizable |
| Implementation cost | Low (quick to get started) | Medium (requires building a retrieval system) | High (requires data preparation, compute resources, and expertise) |
| Ongoing maintenance | Easy (just update the prompt) | Medium (requires database updates) | Difficult (requires retraining) |

Feature comparison

Criteria for choosing between “prompt engineering,” “RAG (retrieval-augmented generation),” and “fine-tuning”

Prompt engineering is the foundation of all three approaches—and it’s also a necessary technique when implementing RAG or fine-tuning.

With that as a premise, the criteria for choosing among these three approaches are as follows.

Selection criteria

You can get good enough answers with instructions and examples, and you want to try using generative AI with minimal investment. 
 → Prompt engineering
You want answers based on a large amount of specialized knowledge that the model has not learned.
 → RAG
You want responses optimized for a specific domain, including not only knowledge but also phrasing and judgment.
 → Fine-tuning

In practice, these approaches are sometimes used in combination.

Examples of combined use

RAG + Prompt engineering:
By refining prompt design, you can leverage retrieval results more effectively.
RAG + Fine-tuning:
Use RAG to retrieve external knowledge so you can reference the latest information. Then use fine-tuning to build the capability to interpret retrieved results and respond like an expert.
Fine-tuning + Prompt engineering:
Give even more granular instructions to a specialized model.

In this column, we focus on “prompts” and explain practical tips for writing them effectively.

The frustration of “AI not answering the way you want”

Once you start using generative AI seriously at work, the first issue you tend to run into is hallucination. Nonexistent information, incorrect facts, simple mistakes in programming code, or unsolicited deletions of parts of a text during translation—errors in the output are all but inevitable.

And even when the information is correct, you’ll often get answers that miss the intent of the question, leaving you no closer to a solution. You may find yourself asking repeatedly, getting nowhere, and growing frustrated.

These common failure modes in using generative AI can be significantly reduced through better prompt design.

The flow of dialogue between people and generative AI

The big picture

Before we dive into prompt writing, let’s take a look at the process of how people interact with generative AI.

Interaction with generative AI follows the flow: “input → processing (AI) → output.”

1. Input

In conversations with generative AI, “questions” and “instructions” are essential to start the interaction—but by themselves, they’re often not enough as “input.”

In many cases, you need to provide source material and clarify assumptions. It helps to think of “input” as handing the generative AI program a “spec sheet” so it can do the data-processing work you need.

2. Processing (AI)

A probability-based “guess the next word” game

Based on the context you provide, AI predicts the next word. By learning from data using this next-word prediction mechanism, generative AI can handle a wide variety of tasks.
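The "guess the next word" mechanism can be made concrete with a toy model: count which word follows which in a small corpus, then always pick the most frequent successor. Real models use neural networks over subword tokens and far larger corpora, but the underlying principle is the same prediction game.

```python
# Toy next-word predictor: count word bigrams in a tiny corpus and
# return the most frequently observed successor. This illustrates the
# principle only; real LLMs predict over subword tokens with a network.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

bigrams: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" more often than "mat" or "fish"
```

This also shows why wording pulls the model: change the corpus (the context), and the "most probable" continuation changes with it.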

You can also see that generative AI runs on “word prediction” in how strongly its answers can be “pulled” by the wording of the question. Before the opening of Expo 2025 Osaka, a generative-AI chatbot run by Osaka Prefecture became a topic of discussion after giving false answers such as “the Expo got canceled.” This can be seen as the model being led by a misleading question that implied cancellation.

It tries to follow instructions faithfully

Generative AI tries to follow what it’s told. So if instructions are vague or the information needed to answer is insufficient, it may fill in missing conditions on its own just to keep generating. In those cases, falsehoods can creep in, or the answer can drift away from what you expected.

3. Output

With books or websites, where an author takes clear responsibility for the content, you can regard the material as “correct” as long as you accept the premise that the author can be trusted.

By contrast, generative AI output is the result of probabilistic computation. That means you need to treat it with the assumption that it may contain mistakes.

Only after a person reviews the content, fact-checks it, and makes adjustments does it become a usable deliverable.

Write prompts with the process in mind

By keeping this information-processing flow in mind, it becomes easier to think about how to write prompts. In particular, focus on the following two points.

Tip 1: Design your prompt by working backward from the “output”

Imagine you’re a beginner in the field you’re asking about and don’t have much knowledge. If the “output” is written for experts, it will be hard to understand. In such cases, design your prompt by thinking, “What do I need to do to make it output a beginner-friendly answer?”

Tip 2: Focus less on “teaching me” and more on “making it process”

Generative AI works by returning answers based on the user’s instructions. While it contains a vast amount of knowledge, you’ll often fail to get good answers if you take a passive stance and simply ask it to teach you something. Instead, be proactive and focus on making the AI process information.

Prompt components and writing techniques for getting answers that exceed expectations

Next, we’ll cover specific techniques for writing prompts.

Four elements that make up a prompt

When writing prompts, think in terms of four sections: (1) Input, (2) Context, (3) Instructions, and (4) Output.

Input

  • Materials that generative AI will process.
    • For example, in translation, the original text you want translated is the “input information.”

Context (background, objective, scope, role)

  • What gives AI a “role” and “constraints”.
    • Background and explanation of the current situation
    • Reason and objective (why / what you want to do)
    • Scope and details (5W1H)
    • The role of generative AI

Instructions

  • Action directives to generative AI—i.e., what you want it to do.
    • Don’t be vague—state it clearly using verbs

Output

  • Specify the format you want to receive.
    • Output format
    • Sample answer
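The four sections can be assembled mechanically. Here is a minimal sketch of a helper that joins them into one prompt; the function name and the use of Markdown headings are our own choices, not a required convention.

```python
# Sketch: assemble the four prompt sections (Input, Context,
# Instructions, Output) into a single Markdown-formatted prompt.
# Empty sections are skipped, matching the "use it as a checklist" idea.

def build_prompt(input_text: str, context: str,
                 instructions: str, output_spec: str) -> str:
    sections = [
        ("Input", input_text),
        ("Context", context),
        ("Instructions", instructions),
        ("Output", output_spec),
    ]
    parts = [f"# {name}\n\n{body.strip()}"
             for name, body in sections if body.strip()]
    return "\n\n".join(parts)

prompt = build_prompt(
    input_text="Original text to translate ...",
    context="You are a professional EN->JA translator.",
    instructions="Translate the input faithfully; do not omit sentences.",
    output_spec="Return only the translated text.",
)
print(prompt)
```

A small helper like this is handy when the same prompt skeleton is reused across a team: only the section bodies change, so the structure stays consistent from run to run.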

Prompt formatting

By showing the structure of your prompt clearly, you can improve answer accuracy. Any format is fine as long as the structure is clear to the AI. However, to reduce misunderstandings in back-and-forth with generative AI, we recommend using Markdown notation, which is commonly used in technical specifications. The rules are as follows.

Headings

  • Add # before a heading. The number of # marks indicates the heading level.
    • # Heading level 1
    • ## Heading level 2
    • ### Heading level 3

Delimiters

  • When you want to clearly separate sections, such as marking the beginning and end of a quoted block, use a delimiter: write three hyphens in a row (`---`).

Prompt example

When you apply our recommended prompt structure, it looks like the following example.

Prompt example

# Input

Example title for a content marketing article:
A practical guide to choosing convenient, comfortable wireless earbuds that dramatically improve productivity at work

# Context

## Role
You are a world-class copywriter.

## Project overview
・To boost sales of wireless earbuds, we will create a helpful article to drive traffic to our website.
・This article will not directly promote wireless earbuds.
・It will explain how to choose wireless earbuds suitable for business use.

# Instructions

1. Explain what makes an article title highly effective for SEO.
2. From that perspective, score the example title provided under “Input” on a 10-point scale, taking the project overview into account.
3. Based on the score, create 10 improvement ideas.

# Output

Please provide answers to Instructions 1 and 3 in bullet points.

Notes

Because input information, context, instructions, and output format are interrelated, you may sometimes hesitate—for example, wondering whether something belongs to context or output format. For instance, providing sample answers is both an output-format specification and a way to convey context.

Therefore, you don’t need to be overly strict about how you structure the prompt text. Use the four sections—input information, context, instructions, and output format—as a checklist to ensure there are no major gaps.

Techniques to further improve accuracy

Being mindful of the four prompt elements and formatting can significantly improve the accuracy of generative AI’s answers, but there are also techniques to increase accuracy even further. Here, we’ll introduce a few of them.

Prompts in general

  • Keep it concise. As prompts get longer, they can become redundant or even contradictory, which can also cause the AI to get confused.

Techniques for “Input”

  • Adding sample answers as a reference improves accuracy. You can keep the number of examples small, and they can be hypotheses—just include them whenever possible.
  • It’s fine to include them in the prompt itself or provide them as an attachment.
  • When instructing tasks such as extraction, summarization, or tagging, output accuracy improves if the input data is also structured (i.e., organized and categorized).
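Structuring the input can be as simple as converting free-form notes into labeled fields before pasting them into the prompt. A sketch, with illustrative field names only (any consistent schema works):

```python
# Sketch: organize records into labeled fields before an extraction or
# tagging instruction, so the model sees unambiguous record boundaries.
# The field names ("id", "channel", "body") are illustrative choices.

tickets = [
    {"id": "T-101", "channel": "email", "body": "The VPN drops every hour."},
    {"id": "T-102", "channel": "chat",  "body": "How do I reset my password?"},
]

def format_structured_input(records: list[dict]) -> str:
    """Render records as Markdown blocks, one heading per record."""
    lines = []
    for r in records:
        lines.append(f"### Ticket {r['id']}")
        lines.append(f"- channel: {r['channel']}")
        lines.append(f"- body: {r['body']}")
    return "\n".join(lines)

print(format_structured_input(tickets))
```

Compared with pasting the same two messages as a wall of text, the headings and field labels make it far harder for the model to mix up which sentence belongs to which ticket.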

Techniques for “Context”

  • Even without specifying context, you can still get decent answers. But if you want the output you truly need, providing context is essential.
  • In particular, it helps to clearly state what you want to do as the overarching premise for your “instructions,” and to define the scope of what you want the AI to consider.

Techniques for “Instructions”

  • You can often draw out better answers by having the AI think step by step, or by explaining from “abstract (conceptual)” to “concrete.”
  • By intentionally narrowing the scope, you can elicit more detailed answers. To do that, break the work or project into stages and specify something like, “For now, answer only the first stage.”
  • Avoid negative phrasing, as it tends to reduce accuracy.

Techniques for “Output”

  • If you ask it to play both roles—an experienced expert and a curious beginner—you’re more likely to get an answer that’s easy to understand even without prior knowledge.

Use multiple generative AI tools

  • Because generative AI creates text by predicting the next word based on probabilities, it doesn’t “know” the one correct answer. As a result, answers may contain errors and vary from run to run.
  • For example, even in software development—where you might think there’s a single right output—providing specs and asking for code doesn’t guarantee just one answer.
  • With this in mind, it can be useful to use different generative AI services or models—such as ChatGPT, Gemini, and Claude—compare their outputs, and choose the best one.
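A comparison workflow like this can be wrapped in a small harness. In the sketch below the "models" are stubs; in practice each entry would wrap a real API client for a service such as ChatGPT, Gemini, or Claude.

```python
# Sketch of a comparison harness: send the same prompt to several
# models and collect the answers side by side for human review.
# The lambdas below are stubs standing in for real API calls.
from typing import Callable

def compare_models(prompt: str,
                   models: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Run every model on the same prompt; return model name -> answer."""
    return {name: ask(prompt) for name, ask in models.items()}

stubs = {
    "model_a": lambda p: f"[A] answer to: {p}",
    "model_b": lambda p: f"[B] answer to: {p}",
}

results = compare_models("Summarize our return policy.", stubs)
for name, answer in results.items():
    print(f"{name}: {answer}")
```

Because the harness only depends on a `str -> str` callable, swapping a stub for a real client later does not change the comparison code itself.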

Master generative AI by specifying the “four elements”

The accuracy of generative AI’s answers can improve dramatically with better prompting. Specify the four elements—input, context, instructions, and output—in your prompt to draw out the answers you want.


Author

Taiichi Enari

Has worked in digital marketing throughout his career, at Sony, Nissan Motor, MSD, and other companies.
Led initiatives from strategy development to corporate website builds, and executed lead-generation programs including SEO, search ads, and email marketing, as well as inside sales operations. Also has experience working overseas.

