Overview

A project can be configured as a RAG-only solution or as a ReAct solution that includes a reasoning engine, agents/tools, and HTML web forms. To set the mode, navigate to AI Settings --> ReAct & LLM Agent Settings and select or clear the Enable ReAct agentic workflow check box.

RAG (Retrieval-Augmented Generation): Question Answering Only

When Enable ReAct agentic workflow is disabled in the ReAct & LLM Agent Settings, the system operates as a RAG-only system.

The RAG system handles user queries by employing AI to analyze the context and history of the session. The workflow is a step-by-step process that generates an accurate, context-aware response. Rather than relying on the LLM's built-in knowledge, it answers questions from the organization's own content stored in the vector database.
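
As a rough illustration of this flow, here is a minimal sketch in Python; embed, vector_search, and answer_llm are hypothetical stand-ins for the platform's embedding model, Vector DB, and Answer AI LLM.

```python
# Minimal RAG sketch: retrieve organization content, then generate an
# answer grounded in it. All three helpers are toy stand-ins.

def embed(text: str) -> list[float]:
    # Stand-in: a real system calls an embedding model here.
    return [float(ord(c)) for c in text[:8]]

def vector_search(query_vector: list[float], top_k: int = 6) -> list[str]:
    # Stand-in: a real system runs a similarity search in the Vector DB.
    corpus = ["Install the product from Settings > Downloads.",
              "The standard plan includes 5 seats and email support."]
    return corpus[:top_k]

def answer_llm(prompt: str) -> str:
    # Stand-in: a real system calls the Answer AI LLM with this prompt.
    return "Answer grounded in the retrieved organization content."

def rag_answer(query: str) -> str:
    docs = vector_search(embed(query))          # retrieve org content
    context = "\n\n".join(docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return answer_llm(prompt)                   # generate the response

print(rag_answer("How do I install the product?"))
```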

Figure: Block diagram of the RAG system, showing CMS connectors, website scraping, document assets, the knowledge base, Google Drive, and OneDrive, along with the flow of the AI processing pipeline.

ReAct Agentic Workflow: Reasoning Engine, Agents, and Forms

When Enable ReAct agentic workflow is enabled in the ReAct & LLM Agent Settings, the system becomes a reasoning engine with agents/tools and forms.

When an LLM is used as a reasoning engine together with agents/tools and forms, it can plan its actions to achieve a goal. Based on which agents and forms are enabled, the reasoning engine decides which to call and in what order, and it may execute several planning steps to achieve the goal.
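
A minimal sketch of such a plan/act loop follows, assuming a hypothetical llm_decide helper and two made-up tools; the real reasoning engine is considerably more sophisticated.

```python
# ReAct-style loop sketch: the reasoning engine repeatedly asks the LLM
# which enabled tool to call next until it can answer. llm_decide and
# the tools below are hypothetical stand-ins.

TOOLS = {
    "rag_search": lambda arg: f"Top documents for '{arg}'",
    "order_status": lambda arg: f"Order {arg} has shipped",
}

def llm_decide(goal: str, observations: list[str]) -> tuple[str, str]:
    # Stand-in: the LLM returns ("tool_name", argument) or ("finish", answer).
    if not observations:
        return ("rag_search", goal)
    return ("finish", f"Answer based on: {observations[-1]}")

def react(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):                   # several planning steps
        action, arg = llm_decide(goal, observations)
        if action == "finish":
            return arg                           # goal achieved
        observations.append(TOOLS[action](arg))  # act, then observe
    return "Stopped after max_steps without finishing."

print(react("Where is my order 1234?"))
```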

Figure: Block diagram with agents enabled, where the RAG system is just another agent; it shows CMS connectors, website scraping, document assets, the knowledge base, Google Drive, and OneDrive, plus the forms and web components used for input, along with the flow of the AI processing pipeline.

The Components of the System

The following sections drill into some of the components of the system.

Figure: Flowchart of the prompt management pipeline: the search query input, word replacement, a choice between Context AI (single-session context) and History AI (multi-session context), vector query creation, a vector database of document embeddings, reranking of the results, and finally answer generation by the main/base Answer AI LLM. Considering the user's session history is what keeps responses contextually accurate.

1. Query and Image Upload Description

The process begins with the user's input, known as the query, which may include one or more images. The system then processes the user's question in the context of the uploaded images.

2. Word Replace

Following the query, a Word Replace function substitutes certain words or phrases, for example swapping in synonyms so the query better matches the terminology used in the indexed content.
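
For illustration only, word replacement could be as simple as a lookup table applied before the query is vectorized; the actual replacement rules are configured in the platform.

```python
# Word Replace sketch: substitute configured words in the query before
# it is embedded. The replacement table here is a made-up example.

REPLACEMENTS = {
    "cheap": "affordable",
    "docs": "documentation",
}

def word_replace(query: str) -> str:
    words = query.split()
    return " ".join(REPLACEMENTS.get(w.lower(), w) for w in words)

print(word_replace("Show me the docs for cheap plans"))
# -> "Show me the documentation for affordable plans"
```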

3. Reasoning Engine and Vision AI

When a user uploads images with their query, the AI generates a description of these images and updates the vector query accordingly. This vector query is then used by the vector database to find relevant content.
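
A minimal sketch of how the image description could be folded into the vector query; describe_image is a hypothetical stand-in for the Vision AI.

```python
# Vision AI sketch: append a generated image description to the query so
# the Vector DB can match on the image content as well as the typed text.

def describe_image(image_bytes: bytes) -> str:
    # Stand-in for the Vision AI: return a text description of the image.
    return "a screenshot of an error dialog reading 'license expired'"

def build_vector_query(query: str, image: bytes | None = None) -> str:
    if image is not None:
        return f"{query}\nImage description: {describe_image(image)}"
    return query

print(build_vector_query("Why am I seeing this?", image=b"\x89PNG..."))
```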

When the Copilot feature is enabled, the language model reads the list of tools and agents, including their descriptions and the parameters that can be passed to them. The platform supports various out-of-the-box agents, Python function calls, and REST API calls, allowing seamless integration with any third-party service. It also reads the available forms and decides when they should be pushed to the Bot or Search control.
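
Conceptually, the reasoning engine is shown something like the following lists of agent and form descriptions; the exact schema is internal, so the field names here are hypothetical.

```python
# Hypothetical shape of the agent and form descriptions the reasoning
# engine reads when deciding what to call or which form to push.

AGENT_DESCRIPTIONS = [
    {
        "name": "order_status",               # a REST API call agent
        "description": "Look up the shipping status of an order.",
        "parameters": {"order_id": "string"},
    },
    {
        "name": "rag_search",                 # the RAG system as an agent
        "description": "Search the organization's knowledge base.",
        "parameters": {"vector_query": "string"},
    },
]

FORM_DESCRIPTIONS = [
    {
        "name": "contact_form",
        "description": "Collect the visitor's email and callback topic.",
        "components": ["validation_text_field", "single_select"],
    },
]
```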

4. Vector Database (DB)

In RAG-only mode, the Vision AI, Context AI, and History AI output a modified version of the query, called the vector_query, which is run against the Vector DB. When the Copilot AI is enabled, history and context handling are instead performed by the reasoning engine, i.e. the Copilot AI. The Vector DB stores vectorized data representations for efficient searching and matching. A record consists of vectors, text content, and metadata; examples of metadata include the URL of the content's source and the images related to that content.
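
A record might look roughly like this; the field names are illustrative, not the platform's actual schema.

```python
# Illustrative Vector DB record: an embedding plus the text content and
# metadata (source URL, related images) described above.

record = {
    "vector": [0.12, -0.48, 0.33],  # embedding of the text chunk (truncated)
    "text": "The standard plan includes 5 seats and email support.",
    "metadata": {
        "source_url": "https://example.com/pricing",        # content source
        "images": ["https://example.com/img/pricing.png"],  # related images
    },
}
```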

5. Rerank

The results from the Vector DB are then passed to a Rerank process, which prioritizes them according to relevance, accuracy, and possibly other metrics to ensure the best matching results are selected. Embeddings do a good job of finding relevant documents; reranking ensures the best ones come first, since only the top 3-6 documents are passed to the Answer AI.
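
A minimal sketch of the rerank step, with a crude word-overlap score standing in for a real reranking model (such as a cross-encoder):

```python
# Rerank sketch: score each retrieved document against the query and
# keep only the best few for the Answer AI prompt.

def relevance(query: str, doc: str) -> float:
    # Stand-in score: word overlap instead of a real reranking model.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query: str, docs: list[str], keep: int = 4) -> list[str]:
    return sorted(docs, key=lambda d: relevance(query, d), reverse=True)[:keep]

docs = ["pricing for teams", "installation guide", "pricing FAQ"]
print(rerank("team pricing", docs))
```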

6. Answer AI LLM

The reranked results become part of the prompt for the Answer AI LLM. The prompt can contain other context (see below) such as history data. This AI is responsible for interpreting the reranked results and formulating an appropriate response, guided by the prompt and system prompt for this LLM.

7. Image AI Match

The Image AI matches images to both the query and the answer from the LLM.
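
This can be pictured as an embedding-similarity match; the vectors and the cosine scoring below are illustrative, not the platform's internal matcher.

```python
# Image match sketch: pick the stored image whose embedding is most
# similar to a combined query+answer embedding. Vectors are made up.

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_answer_vec = [0.9, 0.1, 0.2]
images = {"pricing.png": [0.8, 0.2, 0.1], "logo.png": [0.0, 1.0, 0.0]}

best = max(images, key=lambda name: cosine(query_answer_vec, images[name]))
print(best)  # the image that best matches both the query and the answer
```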

8. Form Components (Bot Controls)

The output can either be a rich text bubble or a bot component. The bot components include:

  1. Validation Text Fields: Fields for validating specific formats such as email addresses and phone numbers.
  2. Multi-Select: Allows users to select multiple options from a list.
  3. Single Select: Allows users to select a single option from a list.
  4. Image Upload: Enables users to upload images as part of their query.
  5. File Upload: Enables users to upload files in various formats.
  6. Date Picker: Allows users to select a date from a calendar.
  7. Time Picker: Allows users to select a specific time.
  8. Checkboxes: Allows users to select multiple items by checking boxes.
  9. Radio Buttons: Allows users to select one option from a set of options.
  10. Text Area: Provides a larger text input field for longer responses or comments.
  11. Slider: Allows users to select a value from a range by sliding a handle.

These components enhance user interaction by providing a variety of input methods tailored to different types of data and user needs.
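
As an illustration, a form pushed to the bot control might be described by a payload roughly like the following; the field names are hypothetical.

```python
# Hypothetical payload for a form the reasoning engine pushes to the
# bot control; each component maps to one of the controls listed above.

support_form = {
    "title": "Contact support",
    "components": [
        {"type": "validation_text_field", "label": "Email", "format": "email"},
        {"type": "single_select", "label": "Topic",
         "options": ["Billing", "Technical", "Other"]},
        {"type": "date_picker", "label": "Preferred callback date"},
        {"type": "text_area", "label": "Describe your issue"},
    ],
}
```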

System Prompts and Dynamic Tokens

In a Large Language Model (LLM) such as OpenAI GPT, the prompt and the system prompt are the initial instructions or inputs provided to the model that set the context or request a specific type of response. They act as directives for the model, guiding it on what information to generate, how to structure its response, or what kind of task to perform. System prompts are crucial because they significantly influence the model's outputs, ensuring they are relevant and valuable for the intended application or user query. In the system prompt, you can insert dynamic tokens that will be replaced with their values at runtime.

Dynamic Tokens Supported by Each AI Module

Each AI module accepts a variety of dynamic tokens that allow for a dynamic and responsive interaction with the user. At runtime, each token is replaced with real data. The supported tokens include the following (a substitution sketch follows the list):

  • {query}: Directly represents the user's inputted question.
  • {vector-query}: The query as adjusted by the Context AI, Vision AI, or History AI; the Vector DB uses it to retrieve the most relevant information to feed to the Answer AI.
  • {history}: Captures the conversation's history, which the AI uses to maintain context over the interaction.
  • {title}: Reflects the webpage's title where the search is taking place, often providing critical context for the query.
  • {origin}: Indicates the original URL of the webpage, which may be relevant to the search.
  • {language}: Specifies the language of the page, which is essential to return results in the user's language.
  • {referrer}: Points to the URL that led the user to the current page, which might affect the user's search intention.
  • {attributes}: Allows for the insertion of supplementary context, generally injected via JavaScript into the search control.
  • {org_name}: Denotes the name of the organization for which the AI is configured, which helps customize the AI's functionality to suit the organization's needs.
  • {purpose}: Outlines the AI bot's intended purpose, aiding the AI in focusing and streamlining the search results.
  • {org_url}: Represents the organization's domain URL, enabling the bot to give precedence to content from that specific domain.
  • {tz=America/New_York}: Inserts the current date and time in the specified timezone as text for the LLM to use.
  • {image_upload_description}: Descriptions, generated by the Vision AI, of images the site visitor uploaded into the search box.
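
As a minimal sketch, assuming plain string replacement (the platform's template engine may differ), token substitution could look like this:

```python
# Dynamic token substitution sketch: replace each {token} in the system
# prompt with its runtime value. The values shown are illustrative.

SYSTEM_PROMPT = (
    "You are the assistant for {org_name} ({org_url}). "
    "Purpose: {purpose}. Answer in {language}.\n"
    "Context:\n{vector-query}"
)

values = {
    "org_name": "Acme Corp",
    "org_url": "https://acme.example",
    "purpose": "help visitors find product documentation",
    "language": "en",
    "vector-query": "pricing for the standard plan",
}

prompt = SYSTEM_PROMPT
for token, value in values.items():
    prompt = prompt.replace("{" + token + "}", value)

print(prompt)
```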

Enhancing AI Performance with Context

By utilizing these dynamic tokens, the AI modules can be provided with additional context, allowing them to operate at their fullest potential. Including dynamic content ensures that responses are not just based on static programming but are adapted to the user's real-time needs and environment.

These dynamic tokens are crucial for maintaining a nuanced and relevant dialogue between the AI and the user, allowing for a more personalized and effective user experience. Whether retaining the conversation thread through {history} or providing localized responses via {language}, these dynamic tokens are indispensable for a sophisticated AI interaction.

Conclusion

This system is designed to provide accurate and context-aware answers by considering the current session's context and the user's interaction history. It represents an advanced approach to managing AI prompts that could be employed in various applications where user interaction and historical data play a significant role in the quality of the AI's responses.