A roadmap for adopting generative AI without failure: how to build the “system” that transforms your operations


In this column, we explain the key points you should know when introducing generative AI, the implementation steps, and the risks and solutions you need to consider.

What Does Implementing Generative AI Mean?

Implementing generative AI is not “adopting a tool” — it is “transforming business processes.”

Implementing generative AI is not simply about starting to use tools like ChatGPT. It is an initiative to redesign “which tasks to improve, and how,” using generative AI — and to transform the business processes themselves.

When you interact with generative AI such as ChatGPT, its intelligence can make it easy to assume it is a “thinking tool” that can solve anything. However, in real business settings, simply delegating the thinking to generative AI and expecting a “good enough” answer rarely works.

It is important to view generative AI not as a “thinking tool,” but as a component that delivers value only within a properly designed business process.

Why Generative AI Implementations Often Fail

When a generative AI implementation fails, the root cause is often not the AI itself. In many cases, the issue is that teams have not clarified “how to break down human work and what to hand off to AI.”

Much attention tends to go to “how to use” generative AI and “what to make it do,” while the analysis and design work is underweighted: the actual business tasks, the knowledge and data the AI should reference, and the operational rules. As a result, the way generative AI is used becomes overly dependent on individual users, and misalignment with the real work easily occurs.

Implementing generative AI is not merely adopting a tool; it is a business process transformation — and designing the overall mechanism is critical.

The Two Core Goals of Using Generative AI: “Knowledge Acquisition” and “Automation”

At its core, using generative AI in business tends to pursue two directions: “acquiring knowledge and insights” and “automating work.” Rather than focusing on only one, many use cases aim to achieve both.

Here are some examples of how generative AI can be used.

  • Automatically generating job postings for HR (Work automation)
    • To reduce HR workload, the HR representative simply selects relevant keywords, and the company’s job posting is generated automatically.
  • Financial market analysis and insights delivery (Knowledge and insights acquisition)
    • A media company that provides financial information trains generative AI on a vast repository of accumulated financial data to deliver advanced market analysis and insights.
  • Automating inspections in an automotive factory (Work automation)
    • By analyzing product images to identify defects or anomalies, quality control is improved.
  • AI answering questions in live commerce (Work automation / Knowledge and insights acquisition)
    • In live commerce (selling products via social media), when viewers ask questions in chat during a live stream, AI answers in real time.
  • Creating promotional emails (Work automation)
    • Generative AI automatically drafts email copy that is more likely to capture interest, tailored to customer preferences.
  • Market research and campaign planning (Work automation / Knowledge and insights acquisition)
    • For marketing teams, generative AI analyzes market trends, consumer reactions, and competitor activities to develop marketing plans.

Two Key Outcomes: “Cost Reduction” and “Revenue Growth”

By leveraging AI, you can achieve cost reduction or revenue growth.

Because many use cases focus on automating complex tasks previously handled by people, it is common to target cost reduction through reduced work time and effort.

On the other hand, examples of revenue growth include using generative AI in marketing to analyze information and personalize content, thereby increasing the number of prospective customers acquired.

A Fail-Safe Implementation Process and Roadmap for Generative AI

Generative AI implementation typically follows these steps.

  1. Analyze challenges and narrow down where to apply generative AI
  2. Set objectives and targets
  3. PoC (Proof of Concept)
  4. Design and implement the system
  5. Operations

Step 1: Analyze challenges and narrow down where to apply generative AI

Introducing generative AI is only a means (How). To avoid making the means the goal, the golden rule is to start by working backward from the project outcomes—namely, “what challenge are we solving?”

At this stage, focus on analyzing the challenges objectively. It is also common for countermeasures like “we should do X” to creep in even during the analysis phase, so caution is needed.

Step 2: Set objectives and targets

Clarify the project objectives. Also, to enable verification after implementation, set numerical targets before you move into implementation.

Be careful not to make the “objective” too abstract or generic. Simulating what would happen if you did not implement it—and estimating the resulting losses—can also help clarify the objective.

Because outcomes are hard to predict in a first-time initiative, teams often start without quantifying targets. Even if it is a hypothesis, be sure to quantify your targets.

Step 3: PoC (Proof of Concept)

Because generative AI is a new field, there are many cases where the “right” way to implement it is unclear. In a PoC (Proof of Concept), you validate not only the impact but also the method. Through this process, you may uncover what it takes to improve, for example, the accuracy of information retrieval.

After the PoC, it is also important to decide whether to move forward into production or to stop.

Step 4: Design and implement the system

It is important to proceed in the order of design → implementation.

In detailed work such as program logic and knowledge database design, you will often discover things only once you start implementing. Therefore, you also need the flexibility to avoid over-investing in upfront design and to revisit the design as you implement.

Step 5: Operations

In operations, you collect and analyze usage data and user feedback, and make improvements as needed.

It is important to see operations not as the “end” of the project, but as the “start.” By turning actual usage into data and continuously analyzing and improving it, you can increase the impact of business process improvements.

Key Risks and Considerations When Implementing Generative AI

The risks of implementing generative AI can be broadly divided into two categories: security and legal risks, and risks related to system quality and operations.

Security and legal risks

  • Information leakage and security risks
    • When individuals use generative AI by freely entering prompts, internal information, customer data, and non-public materials can easily become mixed in without the person realizing it. This is not a technical problem; it is a design problem—information flows are not visualized or controlled.
  • Copyright infringement and compliance risks
    • Some risks cannot be fully solved by generative AI alone. Especially in areas that lead to legal liability—such as “copyright infringement” and “compliance”—it is important to design processes that include human review.

Risks Related to System Quality and Operations

  • Risks of incorrect answers and poor judgment
    • Hallucinations—when generative AI produces false answers—occur when “which knowledge to base the answer on” is not defined. To reduce hallucinations, it becomes necessary to specify internal rules, up-to-date materials, and the domains where judgment is required.
  • Risks of individual dependency and black-boxing in generative AI usage
    • In the operational phase, a gap tends to emerge between people who can “use generative AI well” and those who cannot. If left unaddressed, the work itself may end up depending on specific individuals, so caution is required.
  • Risk of the system not being used
    • From a cost-effectiveness standpoint, a system that is implemented but not used is a business risk. One cause of “unused systems” is the lack of “operational design” at the time of implementation.

How to Think About Risk Mitigation

For issues such as information leakage, individual dependency, incorrect answers, and lack of adoption, it is important to examine causes and countermeasures from both perspectives: “the generative AI itself” and “how generative AI is used.”

In the following sections, we will explain “how to use generative AI,” including the risk mitigation perspective.

Three Elements Needed to Use Generative AI Reliably in Business

The Three Essential Elements

To use generative AI reliably in business, the following three elements are required.

  • Generative AI model
  • Knowledge data
  • Workflow

Element 1: Generative AI Model

A generative AI model is the “brain” of AI. Well-known examples include OpenAI’s GPT (the models behind ChatGPT), Google’s Gemini, and Anthropic’s Claude.

Generative AI models are characterized by their versatility: a single model can handle a wide range of tasks such as answering questions, writing, and translation. The foundation of generative AI models is the technique of “predicting the text that follows a given text.”

When leveraging generative AI models in a company’s business processes, it is common to use enterprise-oriented services. For business use, it is important to choose a service that meets the following three criteria.

Checklist for Choosing an Enterprise Service

• Access to high-performance generative AI models
• Assurance that input data is not used for training the generative AI
• Safe and reliable use for enterprises, including authentication, billing, and ease of testing

Three strong candidates that meet these criteria are Microsoft Azure OpenAI, Amazon Bedrock, and Google Vertex AI. Their characteristics are as follows.

Major Enterprise Generative AI Model Platforms

  • Microsoft Azure OpenAI: Combines “OpenAI performance” with “Azure’s enterprise governance and security”
  • Amazon Bedrock: Designed around the concept of “not depending on a specific model,” making multi-model implementations easier
  • Google Vertex AI: Strong at combining data analytics/search with generative AI

These three services give you access to the same “brains” (AI models) used in consumer services like ChatGPT and Gemini. For example, the GPT models that power ChatGPT can be used as the brain behind Azure OpenAI.
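
As a rough illustration, here is a minimal Python sketch of calling a GPT model through a hypothetical Azure OpenAI deployment using the openai package. The endpoint, API key, API version, and deployment name are placeholders; the same pattern applies, with different client libraries, to Amazon Bedrock and Google Vertex AI.

```python
# Minimal sketch: calling a GPT model through a (hypothetical) Azure OpenAI deployment.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<your-resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # check the current GA version for your environment
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # the name of your deployment, not the raw model name
    messages=[{"role": "user", "content": "Summarize our paid-leave policy in three bullet points."}],
)
print(response.choices[0].message.content)
```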

Element 2: Knowledge Data

When using generative AI in business, there are many cases where you want the AI to respond based on knowledge and information specific to the company and the target work.

For example, you may want to build a chatbot that answers employee questions in line with internal policies, or you may want to create emails that clearly communicate the features of your products. In such cases, you need to provide the generative AI with your own knowledge and data.

One way to do this is to embed the knowledge into the prompt along with your instructions, so that the responses reflect your company’s own information. Because you simply write it into the prompt, the difficulty is low, and you can also expect highly accurate answers.
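
As a minimal sketch of this approach, the Python example below embeds an illustrative policy excerpt and a question into a single prompt string; the policy text and the question are placeholders, not real company data.

```python
# Minimal sketch: embedding company knowledge directly in the prompt.
# The policy excerpt and the question are illustrative placeholders.
policy_excerpt = (
    "Employees may take paid leave in half-day units. Requests must be "
    "submitted in the attendance system at least one business day in advance."
)
question = "Can I take paid leave for just the afternoon?"

prompt = f"""Answer the question using only the internal policy below.
If the policy does not cover the question, say that it does not.

# Internal policy
{policy_excerpt}

# Question
{question}"""
# `prompt` is then sent to the generative AI model (for example, via the Azure OpenAI call shown earlier).
```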

However, this approach has drawbacks: the information must be provided in the prompt every time, so results depend on each user’s approach, and longer prompts increase the cost of using generative AI.

One solution is a technique called RAG (Retrieval-Augmented Generation). With RAG, the system first searches external data for content relevant to the user’s question, then provides both the search results and the question to the generative AI to obtain an answer.
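
The sketch below shows the RAG flow conceptually in Python. The in-memory document list and the toy keyword scoring stand in for a real search index, which would typically be a vector database or an enterprise search service.

```python
# Conceptual RAG sketch: retrieve relevant text first, then combine it with the
# user's question into a single prompt for the generative AI model.
import re

DOCUMENTS = [
    "Paid leave: employees may take leave in half-day units via the attendance system.",
    "Expenses: business-trip costs are reimbursed when receipts are submitted within 30 days.",
    "IT support: PC issues should be reported through the internal help desk portal.",
]

def search_documents(question: str, top_k: int = 2) -> list[str]:
    """Toy keyword scoring; in practice this would query a vector or enterprise search index."""
    terms = set(re.findall(r"[a-z]+", question.lower()))
    scored = [(len(terms & set(re.findall(r"[a-z]+", doc.lower()))), doc) for doc in DOCUMENTS]
    scored.sort(reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_rag_prompt(question: str) -> str:
    context = "\n".join(search_documents(question))
    return (
        "Answer using only the reference material below.\n\n"
        f"# Reference material\n{context}\n\n# Question\n{question}"
    )

print(build_rag_prompt("How do I get my travel expenses reimbursed?"))
```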

Element 3: Workflow

In real business settings, not all tasks can be completed with a simple “question” → “answer” flow—in other words, not everything is a task where “as long as it answers the question, it’s fine.”

For example, when a customer support representative answers a user’s question, they often go through multiple steps: understanding the question, gathering necessary information, selecting what matters, and drafting a response. In manufacturing inspections, the required follow-up may also vary depending on the type of defect found—such as sorting defective products or notifying a supervisor.

If you want generative AI to handle work that involves multiple steps or different actions depending on decisions, you need to specify task order and conditions. To achieve this, you design and define workflows that control how the generative AI behaves.
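
As a minimal sketch, the Python example below hard-codes the task order and a branching condition around a placeholder model call, loosely following the inspection example above; the classification string and the follow-up actions are illustrative only.

```python
# Minimal workflow sketch: the order of steps and the branching conditions are
# defined in code, and the generative AI is called only inside specific steps.
def ask_model(prompt: str) -> str:
    """Placeholder for a real generative AI call (e.g. the Azure OpenAI example above)."""
    return "defect: scratch"  # canned response for illustration

def inspection_workflow(image_description: str) -> str:
    # Step 1: have the model classify the inspection finding
    result = ask_model(f"Classify the inspection finding: {image_description}")

    # Step 2: branch on the classification, as a human inspector would
    if "defect" in result:
        return "Route the item to the rework line and notify the supervisor."
    return "Pass the item to packaging."

print(inspection_workflow("small scratch on the housing"))
```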

Practical Ways to Implement Generative AI

Two Implementation Approaches Based on Generative AI Maturity

There are two broad approaches to implementing generative AI in day-to-day operations.

  • Approach 1: Use generative AI as-is (e.g., ChatGPT, Gemini)
    • Users enter instructions into chat-based tools such as ChatGPT or Gemini. While these tools can respond to many types of questions and requests, each user must input prompts every time.
  • Approach 2: Build a system that combines a generative AI model, knowledge data, and workflows
    • By combining a generative AI model with knowledge data and workflows that control behavior, even complex tasks that previously required humans can be handled automatically. You can also standardize and automate specific work and roll it out company-wide as a tool anyone can use.

Comparison of the Two Implementation Approaches

  • Interface
    • Approach 1 (use as-is): Enter questions into chat tools such as ChatGPT or Gemini
    • Approach 2 (build a system): Supports various inputs, such as chat questions and button clicks
  • How to give instructions
    • Approach 1 (use as-is): Users provide instructions via prompts
    • Approach 2 (build a system): In addition to prompts, supports autonomous operation and other methods such as condition settings
  • How to provide proprietary information
    • Approach 1 (use as-is): Include it in the prompt
    • Approach 2 (build a system): Reference external data
  • Combining multiple tasks
    • Approach 1 (use as-is): Humans instruct the AI separately for each task
    • Approach 2 (build a system): The system executes autonomously according to workflows
  • Pros
    • Approach 1 (use as-is): Ready to use immediately; versatile enough to handle a wide range of questions and requests
    • Approach 2 (build a system): Enables advanced automation of business processes
  • Cons
    • Approach 1 (use as-is): Hard to automate work; usage becomes dependent on individuals
    • Approach 2 (build a system): Requires time and cost to set up; becomes a system specialized for specific tasks

Challenges and Countermeasures for Approach 1: “Use Generative AI as-is”

With Approach 1, users need to enter prompts each time. Because prompting styles differ by user, there is a risk that know-how becomes dependent on individuals. There is also concern that users may include personal data that should not be entered, potentially leaking information to the generative AI model.

This approach can work well in early stages of adoption when usage is limited to people with high IT literacy and flexibility is valuable. However, when moving to full-scale implementation, you should also consider designing mechanisms that embed generative AI into business processes.

Challenges and Countermeasures for Approach 2: “Build a system that combines a generative AI model, knowledge data, and workflows”

When building a system that combines a generative AI model, knowledge data, and workflows, incorporating knowledge data can become a key challenge.

RAG, which is often used to incorporate knowledge data, relies on search technology that sits outside the prompt used to instruct the generative AI, and that search layer requires its own design considerations.

Simply feeding “existing data” does not necessarily improve retrieval accuracy. It is essential to carefully map business tasks to the information required, and to design for your data characteristics—such as refining prompts to guide results toward the intended outcome.

Example: Using Generative AI for Internal Inquiry Handling

Let’s Use Generative AI to Streamline the Task of “Searching for and Reading Various Policies”

Let’s walk through a practical example to explain how to use generative AI.

In any company, employees have a wide range of questions and requests—for example, “I want to know the rules for taking paid leave,” “I need to understand the process for expense reimbursement on business trips,” or “My PC isn’t working well, so please fix it.” A common flow is to first read internal policies and then ask the inquiry desk. When policies are hard to understand or it is unclear where to ask, it can feel frustrating.

Let’s consider how to use generative AI to make inquiries to HR, Accounting, and IT more efficient.

We will first introduce “using generative AI as-is,” and then “building a system that combines a generative AI model, knowledge data, and workflows.”

Approach 1: “Use Generative AI as-is”

Setup and Usage Flow

The flow for using “generative AI as-is” for internal inquiries is as follows.

Setup

Inform employees which generative AI to use and how they are allowed to use it.

Usage Flow

  1. Search for the relevant policy owned by the department responsible for the type of inquiry.
  2. Download the policy document PDF to your PC.
  3. Open a generative AI tool such as ChatGPT or Gemini.
  4. In the generative AI chat window, attach the policy document and enter your question.
  5. Receive the answer.

Pros of Approach 1

  • You can start immediately using existing services.
  • You can avoid the effort of manually referencing and carefully reading policy documents to find the sections relevant to your question.
  • You can use it flexibly, for example by uploading multiple documents at once, or by asking follow-up questions within the same chat thread to explore solutions.

Cons of Approach 1

  • “Find the document → attach it in chat and ask a question” takes time and effort.
  • Because usage methods vary by person, there is a risk that usage becomes dependent on individuals.
  • Many internal documents are confidential. If you use generative AI such as ChatGPT in a typical consumer environment, the documents you upload may be used for training.

By changing the generative AI settings, you can prevent inputs from being used for training. Even so, you should avoid entering privacy-related information such as personal data, and if individuals are free to use generative AI on their own, it becomes difficult to prevent such inputs or to monitor what is being entered.

Approach 2: “Build a system that combines a generative AI model, knowledge data, and workflows”

Setup and Usage Flow

The setup and usage flow for building “a system that combines a generative AI model, knowledge data, and workflows” for internal inquiries is as follows.

Setup

Generative AI model

Select the best fit for your company from options such as Microsoft Azure OpenAI, Amazon Bedrock, and Google Vertex AI introduced earlier. In addition to each service’s characteristics, it is also helpful to confirm whether you can secure the right talent to handle the platform, and whether the fit between your company and the support partner is strong.
Knowledge data

A strong approach is to use RAG, as introduced earlier. The key is searching external data relevant to the user’s question. To improve retrieval accuracy, you need elements such as “techniques to vectorize the meaning of text,” “search algorithms,” and “document preprocessing.”

Using this approach, you process the necessary documents—such as policies for HR, Accounting, and IT—and convert them into “knowledge data.”

Because the search results will be combined with the user’s question and provided to the generative AI, adjusting the generative AI instructions (prompts) at that stage is also important.
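
As one illustration of the “document preprocessing” piece, the Python sketch below splits policy text into overlapping chunks that could then be embedded and stored in a search index. The chunk size, overlap, and sample text are illustrative values to tune against your own documents.

```python
# Sketch of preparing knowledge data: split a policy document into overlapping
# chunks so each piece can later be embedded and stored in a search index.
def chunk_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap."""
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

sample_policy = (
    "Business-trip expenses are reimbursed when receipts are submitted within 30 days. "
    "Approval by the department manager is required before the trip. "
    "Accommodation above the standard rate requires prior approval from Accounting."
)

for chunk in chunk_text(sample_policy, chunk_size=120, overlap=30):
    # In practice: compute an embedding for each chunk and store (vector, chunk)
    # in the search index, using the embeddings API of the platform chosen above.
    print(chunk)
```
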
Workflow

Design the “flow” so the system can appropriately handle a wide range of user questions.

For example, questions about expenses should reference accounting policies, while questions about paid leave should reference HR policies. For inputs unrelated to policies (such as “Thank you for your support”), you can design the system so the generative AI responds without referencing policies, making conversations feel more natural. Additionally, it’s necessary to design a workflow aligned with your business processes—for example, forwarding the case to a human representative when an automated response doesn’t resolve the issue.

Once the “flow” is designed, it is common to build an application using a programming language. Recently, tools that allow you to build applications without coding have also emerged, so it is worth considering them.
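
To make the “flow” concrete, here is a simplified Python sketch of the routing described above. The keyword-based category detection, the placeholder model call, and the retrieval function are all illustrative stand-ins for the model and knowledge-data pieces set up earlier.

```python
# Simplified sketch of the inquiry-routing workflow. The keyword routing,
# the placeholder model call, and the retrieval function are illustrative only.
def ask_model(prompt: str) -> str:
    """Placeholder for a call to the generative AI model."""
    return "(model answer)"

def search_policies(category: str, question: str) -> list[str]:
    """Placeholder for retrieval scoped to one department's policy documents."""
    return []  # e.g. query the knowledge data built in the previous step

def classify_inquiry(question: str) -> str:
    q = question.lower()
    if any(w in q for w in ("expense", "reimburse", "receipt")):
        return "accounting"
    if any(w in q for w in ("leave", "vacation", "payroll")):
        return "hr"
    if any(w in q for w in ("pc", "password", "network")):
        return "it"
    return "chitchat"

def handle_inquiry(question: str) -> str:
    category = classify_inquiry(question)
    if category == "chitchat":
        return ask_model(question)  # answer conversationally, without referencing policies
    passages = search_policies(category, question)
    if not passages:
        return "Your question has been forwarded to a human representative."
    context = "\n".join(passages)
    return ask_model(f"Answer using only this policy excerpt:\n{context}\n\nQuestion: {question}")

print(handle_inquiry("How do I get reimbursed for business-trip expenses?"))
```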

Usage Flow

General users within the company obtain the information they need by freely asking questions in the provided application, or by clicking pre-prepared options.

Pros of Approach 2

  • General users within the company can use it easily.
  • By limiting and monitoring what can be entered, you can reduce the risk of information leakage (see the sketch after this list).
  • By analyzing usage logs, you can improve the quality of internal services.
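
As a minimal sketch of that limiting and monitoring, assuming hypothetical blocking patterns and log format, the Python example below screens obvious personal data before a question reaches the model and records request metadata for later log analysis.

```python
# Sketch of an input guard in the application layer: block obvious personal data
# before it reaches the model, and log request metadata for later analysis.
# The patterns and the log format are illustrative assumptions.
import datetime
import json
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\b0\d{1,4}-\d{1,4}-\d{3,4}\b"),
}

def guard_and_log(user_id: str, question: str) -> bool:
    """Return True if the question may be sent to the generative AI."""
    hits = [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(question)]
    record = {
        "time": datetime.datetime.now().isoformat(),
        "user": user_id,
        "blocked": hits,
        "length": len(question),  # log metadata rather than raw text to limit exposure
    }
    print(json.dumps(record))  # in practice, write to the usage-log store
    return not hits

print(guard_and_log("u123", "My phone number is 03-1234-5678, please update my record."))
```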

Cons of Approach 2

  • You need business process analysis and design, as well as implementation of knowledge data and the application.
  • Ongoing maintenance is required, such as updating the knowledge data.

Comparison of Approaches 1 and 2

  • Setup
    • Approach 1 (use as-is): Announce how to use common services such as ChatGPT
    • Approach 2 (build a system): Analyze business processes and build a mechanism that combines a generative AI model, knowledge data, and workflows
  • Usage flow
    • Approach 1 (use as-is): General users attach policy documents to their questions, ask the generative AI, and receive answers
    • Approach 2 (build a system): General users obtain answers by asking questions as guided by the app or by selecting options
  • Pros
    • Approach 1 (use as-is): Easy to start; flexible to use
    • Approach 2 (build a system): Improved usability; reduced risk of information leakage; service improvement through log analysis
  • Cons
    • Approach 1 (use as-is): Time-consuming for general users; usage becomes dependent on individuals; some risks such as information leakage remain
    • Approach 2 (build a system): Requires time and cost to implement; ongoing maintenance is needed

What Matters Most for Successful Generative AI Implementation

The key to successful generative AI implementation lies not in “generative AI,” but in “the overall system.” By designing the end-to-end process to match your business and embedding knowledge data and AI appropriately, you can make generative AI implementation a success.


Author

Taiitsu Enari

Has worked in digital marketing throughout his career, at Sony, Nissan Motor, MSD, and other companies.
Has handled everything from strategy development to corporate website builds, lead generation through SEO, search ads, and email marketing, and inside sales operations.
Also has overseas assignment experience.

