Custom Chat: Responding to inbound emails

Introduction

Many of our customers get hundreds of inbound emails per month from potential new customers who are evaluating their product. Often these prospects want to clarify technical questions to understand whether the product fits their requirements. Writing great replies to these emails is valuable but time-consuming, and replying quickly often makes the difference between winning and losing a lead.

This tutorial leverages the Custom Chat endpoint to draft replies to such emails and speed up the process. You can integrate this solution into platforms like Salesforce to work with your existing workflows.

Before we get started, it is important to consider whether the Custom Chat endpoint is actually the correct choice for this challenge. If we wanted to reply to inbound emails in a very similar manner to how kapa normally generates replies to questions, it might be easier to use the regular Chat endpoints. As will become clear below, we do want to create different behavior, so Custom Chat is the right choice.

Prerequisites

  • You need a kapa.ai instance with at least one configured data source.
  • You need to create an API Key and API Integration. For this you can follow the quickstart.
  • Get the project_id of your project from the kapa.ai dashboard.

Set up

To illustrate the use of the Custom Chat endpoint to solve this challenge, we will generate a reply to this fictional email received by the kapa.ai sales team.

Hi Team,

I hope this message finds you well. We are exploring various options to enhance our customer support and documentation experience.

We’ve been considering integrating a chatbot on our documentation page to improve user engagement and support. After reviewing the capabilities of Kapa.ai, I believe it could be a great fit for our needs. However, before we proceed further, I have a few questions I hope you can help with.

Could you please provide more details about the different deployment options Kapa.ai offers? We are particularly interested in understanding how flexible the platform is in terms of integration with our existing systems and any alternative use cases beyond documentation support.

Thank you in advance for your assistance. I look forward to your response and the possibility of working together.

Best regards,
Max

A great reply should:

  1. Answer the person's question about deployment options based on the kapa.ai documentation.
  2. Link to the relevant documentation.
  3. Sound like it was written by a member of the kapa sales team.

Implementation

When using the Custom Chat endpoint you need to write the full prompt to be sent to the LLM. This is different from the regular Chat endpoints where the prompting is taken care of for you behind the scenes.

The final request we send to kapa looks as follows. Its different parts are explained below.

curl --location 'https://api.placeholder.com/query/v1/projects/{project_id}/chat/custom/' \
--header 'X-API-KEY: <API_KEY>' \
--header 'Content-Type: application/json' \
--data '{
"persist_answer": true,
"generation_model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a sales representative for kapa.ai tasked with answering customer emails. You are professional, knowledeable, helpful and always friendly. You write short and precise emails. kapa.ai is a platform designed to help developer-facing companies build AI support bots for their community. It leverages Retrieval Augmented Generation (RAG) to create chatbots that can answer developer questions. The system is optimized for answering technical questions by indexing your technical knowledge sources and using them to generate accurate, contextual responses."
},
{
"role": "context"
},
{
"role": "user",
"content": "Action: Write a response between 50 and 125 words to the customer email below addressing all questions. If the email is vague, ask followup questions that help you determine how to best assist. Above you can find potentially relevant knowledge sources that can help you answer questions in the email. If relevant use these knwoledge sources to answer questions. If you use any of the knowledge sources include links to the relevant sections in your email. If you can not answer all of the users questions based on the knwoledge sources you should include a note in your reply that you have forwarded the issue to an expert to take a look."
},
{
"role": "query",
"content": "Customer Email: Hi Team, I hope this message finds you well. We are exploring various options to enhance our customer support and documentation experience. We’ve been considering integrating a chatbot on our documentation page to improve user engagement and support. After reviewing the capabilities of Kapa.ai, I believe it could be a great fit for our needs. However, before we proceed further, I have a few questions I hope you can help with. Could you please provide more details about the different deployment options Kapa.ai offers? We are particularly interested in understanding how flexible the platform is in terms of integration with our existing systems and any alternative use cases beyond documentation support. Thank you in advance for your assistance. I look forward to your response and the possibility of working together. Best regards, Max"
},
{
"role": "user",
"content": "Output your response in markdown and sign it with '\''Jane Doe'\''"
}
]
}'

The messages list in the request represents our prompt:

system message

The first part of the prompt is a fitting system message. The system message sets the behavior of the AI model by defining its desired persona and task: for example, it can instruct the model to adopt a specific tone or follow particular guidelines. We also add a description of kapa.ai to give the model better context when answering questions about kapa.ai. The system message is generally the first part of the prompt, but you can also experiment with moving it elsewhere.

context message

The context message is a placeholder for the relevant documents kapa retrieves from your data sources before generating an answer. It is a special message type which represents kapa's way of implementing retrieval augmented generation (RAG). You can think of an API call to the Custom Chat endpoint as a two step process:

  1. Kapa performs semantic search over your data sources to find the most relevant documents and substitutes them in for the context message.
  2. Kapa queries an LLM with the altered prompt to generate an output.

In our example we want to fill the context message placeholder with relevant documents about kapa.ai's deployment options in order for the LLM to be able to provide a relevant answer to the email.

You can place the context message anywhere you want in your prompt. The exact location depends on your other messages. In almost all cases you will want to refer to the retrieved documents from other messages to instruct the LLM how to use them.
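To make the message roles concrete, here is a minimal Python sketch that assembles the same prompt structure as the curl example programmatically. `build_messages` is a hypothetical helper for illustration, not part of any kapa SDK:

```python
def build_messages(system_prompt, instructions, customer_email, signoff):
    """Assemble a Custom Chat prompt with a context placeholder."""
    return [
        {"role": "system", "content": system_prompt},
        # Placeholder message: kapa substitutes the retrieved documents
        # here before the prompt is sent to the LLM.
        {"role": "context"},
        {"role": "user", "content": instructions},
        # The query message drives retrieval and is later converted
        # into a normal user message (see below).
        {"role": "query", "content": "Customer Email: " + customer_email},
        {"role": "user", "content": "Output your response in markdown and sign it with '" + signoff + "'"},
    ]
```

Note that the context message carries no content of its own; placing it before the instructions lets the instructions refer back to it as "the knowledge sources above".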

query message

To fill the context message with the right documents, we need to control the input for which kapa performs search. This can be done with the query message. The query message is the part of the prompt used for retrieval: kapa uses it to perform semantic search against your data sources. We submit the customer email as the query message because it contains the relevant question.

After retrieval has been performed the query message is turned into a normal user message and will be sent to the LLM as part of the prompt. This works great for this use case because we want to perform retrieval based on the email and we also need the email to be part of the prompt so the LLM can reply to it.

In other cases you might not want the question for which you are performing retrieval to be included in the prompt. In this scenario you can use the retrieval_query parameter instead. When using retrieval_query you do not need to include a query message in your messages.
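The two retrieval variants can be sketched as a small payload builder, assuming the request body shape from the example above (`build_payload` is a hypothetical helper; check the API Reference for the authoritative field list):

```python
def build_payload(messages, retrieval_query=None):
    """Build a Custom Chat request body.

    If retrieval_query is given, it drives semantic search on its own,
    so the messages list needs no query message.
    """
    payload = {
        "persist_answer": True,
        "generation_model": "gpt-4o",
        "messages": messages,
    }
    if retrieval_query is not None:
        payload["retrieval_query"] = retrieval_query
    return payload
```

For example, `build_payload(messages, retrieval_query="kapa.ai deployment options")` would retrieve documents about deployment options without that phrase ever appearing in the prompt sent to the LLM.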

user message

The user message type is used to represent input that the model is supposed to respond to. We also use user messages to formulate the remaining instructions we want to give to the model. There is no limit on the number of user messages you can include in your prompt.

generation_model

The Custom Chat endpoint also lets you explicitly select the LLM model you want to use for generation. In our case we pick GPT-4o.

persist_answer

If persist_answer is set to true the query message will be persisted in your analytics as a conversation. This is enabled by default and in our case it is helpful because we want to keep track of the emails we have answered in our analytics.

Response

The response we get from our API call includes multiple objects that are worth pointing out:

  • answer: This is the output generated by the LLM. In our case this is the reply to the email.
  • messages: This is the final prompt that was sent to the LLM. It is included in the response so you can understand how your prompt looks post retrieval, once the context message has been substituted.
  • relevant_sources: The response separately contains the documents kapa has retrieved. You will notice that these are all present in the messages.

For the full list of returned objects check out the API Reference.
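As a rough sketch, the fields above can be pulled out of the JSON response like this. The exact per-source schema inside relevant_sources is an assumption here; consult the API Reference for the actual shape:

```python
import json

def parse_response(raw):
    """Extract the drafted reply and the retrieved sources from a
    Custom Chat response body (assumed field names per the docs)."""
    data = json.loads(raw)
    answer = data["answer"]  # the LLM-generated email reply
    # NOTE: treating relevant_sources as a list of dicts is an
    # assumption; verify against the API Reference.
    sources = data.get("relevant_sources", [])
    return answer, sources

# Minimal fabricated example payload for illustration:
sample = '{"answer": "Hi Max, ...", "relevant_sources": [{"source_url": "https://docs.kapa.ai"}]}'
reply, sources = parse_response(sample)
```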

After parsing the answer from our response we can send the following reply to the user:

Hi Max

Thank you for reaching out! I'm glad to hear you're considering Kapa.ai for enhancing your customer support and documentation experience.

Kapa.ai offers several flexible deployment options:

1. **Website Widget**: Easily integrate an embeddable widget on your documentation page. [Learn more](https://docs.kapa.ai/integrations/website-widget).
2. **Slack Bot**: Deploy a bot in your Slack workspace to auto-reply or respond to mentions. [Learn more](https://docs.kapa.ai/integrations/slack-bot).
3. **Discord Bot**: Similar to Slack, but for Discord servers. [Learn more](https://docs.kapa.ai/integrations/discord-bot).
4. **Zendesk Agent App**: Integrate with Zendesk to draft instant replies to support tickets. [Learn more](https://docs.kapa.ai/integrations/zendesk-app).
5. **API**: Build custom integrations using our API. [Learn more](https://docs.kapa.ai/api/overview).

Beyond documentation support, Kapa.ai can be used internally to assist teams with technical questions, speeding up onboarding and reducing context switching for engineers. It can also help draft responses to support tickets, improving efficiency.

For more details, please visit our [documentation](https://docs.kapa.ai).

Looking forward to the possibility of working together!

Best regards,
Jane Doe

Next steps

If you want to experiment with this use case for your own product, you should follow this approach:

  1. Select a sample of relevant emails that you want to respond to. Your sample should be carefully chosen:
    • It should be representative of the emails you expect to respond to in production.
    • It should be diverse.
    • It should cover the edge cases you care about.
    • It should be small enough so you can manually experiment with it.
  2. Iterate on the prompt with your sample emails until you are happy with the result.
  3. Potentially limit the data sources you want to include in retrieval using source_ids_include.
  4. Perform final testing on your solution with a larger amount of sample emails.
  5. Deploy in production.
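The iteration loop in steps 1 and 2 can be sketched as a dry run that builds one request per sample email. The `send` callable stands in for the actual HTTP POST to the Custom Chat endpoint and is a hypothetical hook, not a kapa SDK function:

```python
def draft_replies(sample_emails, send, system_prompt, instructions):
    """Build one Custom Chat request per sample email and collect the
    drafted replies returned by the caller-supplied `send` callable."""
    drafts = []
    for email in sample_emails:
        payload = {
            "persist_answer": False,  # keep experiments out of analytics
            "generation_model": "gpt-4o",
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "context"},
                {"role": "user", "content": instructions},
                {"role": "query", "content": "Customer Email: " + email},
            ],
        }
        drafts.append(send(payload))
    return drafts
```

Swapping `send` for a stub during prompt iteration lets you inspect the exact payloads before spending API calls, and setting persist_answer to false keeps test runs out of your analytics.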