AI12Z WebSocket API Documentation
Table of Contents
- Introduction
- Prerequisites
- Connecting to the WebSocket API
- Sending Queries
- Handling Responses
- Example Client Implementation
- Advanced Topics
- Reference
- Troubleshooting
- Glossary
- About AI12Z
- API Version
Introduction
The AI12Z WebSocket API enables real-time communication between clients and the AI12Z server, facilitating dynamic interactions such as submitting queries and receiving responses asynchronously. This documentation guides you through setting up and using the WebSocket API, complete with an example client implementation.
Before building something custom, be sure to check out the out-of-the-box Web Components and React Controls.
Prerequisites
Obtaining an API Key
To use the AI12Z WebSocket API, you need a valid apiKey. You can obtain an API key by registering on our Developer Portal and creating a new application.
Required Libraries
Ensure you include the following libraries in your project:
- Socket.IO: For WebSocket communication.
<script src="https://cdn.socket.io/4.0.1/socket.io.min.js"></script>
- Showdown: For converting Markdown to HTML.
<script src="https://cdn.jsdelivr.net/npm/showdown/dist/showdown.min.js"></script>
Connecting to the WebSocket API
URL Format
The WebSocket URL is constructed based on the environment you are working in. For example:
- Production:
wss://api.ai12z.net
Establishing a Connection
Use the io.connect method from the Socket.IO library to establish a WebSocket connection:
const endpoint = "wss://api.ai12z.net"
const socket = io.connect(endpoint)
Sending Queries
Sending Text Queries
To send a text query to the AI12Z server, emit the evaluate_query event with the required data:
const data = {
apiKey: apiKey,
query: query,
conversationId: conversationId, // Optional, for follow-up queries
event: "evaluate_query",
base64Images: [], // Optional, if not sending images
}
socket.emit("evaluate_query", data)
Sending Queries with Images
To send a query along with images, include the base64Images array in your data payload. You can send multiple images.
Note: There is a maximum payload size of 16 MB per emit. If your payload exceeds this limit, resize your images to a width of around 1024 pixels and compress them before converting to Base64.
const data = {
apiKey: apiKey,
query: query,
conversationId: conversationId, // Optional
event: "evaluate_query",
base64Images: [
"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAS....",
"data:image/png;base64,...",
],
includeTags: [], // Optional
excludeTags: [], // Optional
requestMetadata: {}, // Optional
}
socket.emit("evaluate_query", data)
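Because the 16 MB limit applies to the Base64-encoded payload, it can help to estimate the encoded size before emitting. Base64 inflates data by roughly a third, so an image well under 16 MB on disk can still exceed the limit once encoded. The helper names below are illustrative, not part of the ai12z API:

```javascript
// Estimate the character length of a Base64 data URL built from a binary
// image. Base64 encodes every 3 input bytes as 4 output characters, plus
// the "data:<mime>;base64," prefix.
function estimateDataUrlBytes(binaryBytes, mimeType = "image/jpeg") {
  const prefixLength = `data:${mimeType};base64,`.length;
  const base64Chars = 4 * Math.ceil(binaryBytes / 3);
  return prefixLength + base64Chars;
}

// Check whether a set of images (raw byte sizes) fits in one emit.
function fitsInEmit(imageByteSizes, limitBytes = 16 * 1024 * 1024) {
  const total = imageByteSizes.reduce(
    (sum, bytes) => sum + estimateDataUrlBytes(bytes),
    0
  );
  return total <= limitBytes;
}
```

A 13 MB JPEG, for example, grows to roughly 17.3 MB once encoded, so it would need resizing even though the raw file is under the limit.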
Optional Fields
- includeTags (Optional): An array of tags to include in the query context.
- excludeTags (Optional): An array of tags to exclude from the query context.
- requestMetadata (Optional): An object containing additional metadata for the request.
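The optional fields can be folded into a small payload builder so callers only specify what they need. This helper is a sketch of our own, not part of the ai12z client library:

```javascript
// Build an evaluate_query payload, filling optional fields with safe
// defaults. Only apiKey and query are required.
function buildQueryPayload(apiKey, query, options = {}) {
  return {
    apiKey,
    query,
    event: "evaluate_query",
    conversationId: options.conversationId || "", // empty string starts a new conversation
    base64Images: options.base64Images || [],
    includeTags: options.includeTags || [],
    excludeTags: options.excludeTags || [],
    requestMetadata: options.requestMetadata || {},
  };
}

// Example: restrict the query context to documentation pages.
const payload = buildQueryPayload("your-api-key", "How do I reset my password?", {
  includeTags: ["docs"],
  excludeTags: ["internal"],
});
// socket.emit("evaluate_query", payload)
```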
Handling Responses
Partial Responses
The server sends partial responses (tokens) via the response event. Accumulate these tokens in a buffer (e.g., a string variable) and convert the accumulated Markdown to HTML for display.
let markdownBuffer = ""
socket.on("response", function (event) {
markdownBuffer += event.data // Accumulate markdown data
updateResponseContainer()
})
Final Responses
The end_response event indicates the end of a response. It provides the complete answer and additional data.
socket.on("end_response", function (data) {
if (data.error) {
console.error("Error from server:", data.error)
// Handle error accordingly
return
}
conversationId = data.conversationId || conversationId
handleEndResponse(data)
})
Response Data Structure
The data object received in the end_response event includes the following fields:
Field | Type | Description |
---|---|---|
answer | String | The complete answer from the AI, including hyperlinks, images, and videos. |
formModel | Object | If applicable, the form model data for client-side rendering. |
controlData | Object | Data returned by the agent that bypasses the LLM. |
controlType | String | The type of control data (e.g., form, carousel, custom). |
title | String | For AnswerAI, the most relevant title from the vector database. |
link | String | For AnswerAI, the most relevant link from the vector database. |
description | String | For AnswerAI, the most relevant description from the vector database. |
relevanceScore | Number | For AnswerAI, the relevance score from the vector database. |
assetType | String | The type of asset (e.g., web, pdf, docx). |
didAnswer | Boolean | Indicates if the AI provided an answer (true) or not (false). |
context | Object | For AnswerAI, the contextual data from the vector database. |
insightId | String | Used for tracking user feedback on the content (like/dislike). |
error | String | Contains error information if an error occurred; otherwise, null. |
conversationId | String | The conversation ID for maintaining context in follow-up queries. |
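As a sketch of how these fields fit together, a handler might branch on didAnswer and fall back to the AnswerAI metadata (title, link) when no direct answer is available. The helper name and return shape below are our own, not part of the ai12z API:

```javascript
// Turn an end_response data object into a display decision.
// Illustrative only; adapt the branches to your UI.
function interpretEndResponse(data) {
  if (data.error) {
    return { kind: "error", message: data.error };
  }
  if (data.didAnswer) {
    // The answer is Markdown; insightId can be used for like/dislike feedback.
    return { kind: "answer", markdown: data.answer, insightId: data.insightId };
  }
  // No direct answer: fall back to the most relevant AnswerAI result, if any.
  if (data.link) {
    return { kind: "reference", title: data.title, link: data.link };
  }
  return { kind: "empty" };
}
```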
Error Handling
Check the error field in the end_response data object to handle any errors returned by the server.
if (data.error) {
console.error("Error from server:", data.error)
// Display error message to the user
}
Example Client Implementation
Below is an example client that connects to the AI12Z WebSocket API, sends a query, and handles the response.
HTML Structure
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>ai12z WebSocket Client</title>
<!-- Include Socket.IO -->
<script src="https://cdn.socket.io/4.0.1/socket.io.min.js"></script>
<!-- Include Showdown for Markdown to HTML conversion -->
<script src="https://cdn.jsdelivr.net/npm/showdown/dist/showdown.min.js"></script>
</head>
<body>
<input type="text" id="queryInput" placeholder="Type your question here" />
<button onclick="askAI()">Ask AI</button>
<div id="responseContainer"></div>
<!-- Your JavaScript code will go here -->
<script src="client.js"></script>
</body>
</html>
JavaScript Code
Create a file named client.js and include the following code:
// Replace 'your-api-key' with your actual API key
const apiKey = "your-api-key"
let conversationId = ""
const endpoint = "wss://api.ai12z.net/" // Ensure the correct WebSocket path
let socket
let markdownBuffer = ""
// Initialize Markdown converter
const converter = new showdown.Converter()
function connectWebSocket() {
socket = io(endpoint, {
transports: ["websocket"], // Force WebSocket transport
secure: true, // Ensure secure connection
rejectUnauthorized: false, // If you're using self-signed certificates for SSL, this might help
})
socket.on("connect_error", (error) => {
console.error("WebSocket connection error:", error)
})
// Handle partial responses
socket.on("response", function (event) {
markdownBuffer += event.data // Accumulate markdown data
updateResponseContainer()
})
// Handle final response
socket.on("end_response", function (data) {
if (data.error) {
console.error("Error from server:", data.error)
return
}
conversationId = data.conversationId || conversationId
handleEndResponse(data)
})
}
// Send query to AI12Z server
function askAI() {
const query = document.getElementById("queryInput").value
const data = {
apiKey: apiKey,
query: query,
conversationId: conversationId,
event: "evaluate_query",
base64Images: [], // Include images if needed
}
socket.emit("evaluate_query", data)
markdownBuffer = "" // Clear the markdown buffer for new response
updateResponseContainer()
}
// Update the response container with the accumulated Markdown converted to HTML
function updateResponseContainer() {
// Convert the accumulated markdownBuffer to HTML
const html = converter.makeHtml(markdownBuffer)
// Find the container in the DOM to display the response
const responseContainer = document.getElementById("responseContainer")
// Set the HTML content of the container to the converted HTML
responseContainer.innerHTML = html
}
// Handle the final response
function handleEndResponse(data) {
markdownBuffer = data.answer // Update markdown buffer with the complete response
updateResponseContainer()
if (data.formModel) {
console.log("Form Model:", data.formModel)
// Handle form rendering here
}
// Handle other control types if necessary
if (data.controlType) {
switch (data.controlType) {
case "carousel":
// Handle carousel rendering
break
case "custom":
// Handle custom control rendering
break
// Add more cases as needed
}
}
}
// Connect to WebSocket on page load
window.onload = connectWebSocket
Explanation
- HTML Elements:
  - Input Field: An input field (queryInput) for the user to type their query.
  - Ask AI Button: A button that triggers the askAI() function.
  - Response Container: A div (responseContainer) where the AI's response will be displayed.
- JavaScript Code:
  - Variables:
    - apiKey: Your API key.
    - conversationId: Maintains conversation context.
    - endpoint: The WebSocket endpoint URL.
    - socket: The WebSocket connection instance.
    - markdownBuffer: Accumulates partial responses.
    - converter: An instance of Showdown's Markdown converter.
  - Functions:
    - connectWebSocket(): Establishes the WebSocket connection and sets up event listeners.
    - askAI(): Sends the user's query to the AI12Z server.
    - updateResponseContainer(): Converts the accumulated Markdown to HTML and updates the response container.
    - handleEndResponse(data): Processes the final response, updates the conversation ID, and handles any additional data like forms or custom controls.
- Event Listeners:
  - socket.on("response", callback): Handles partial responses.
  - socket.on("end_response", callback): Handles the final response and error checking.
- Handling Forms and Control Data:
  - If data.formModel is present, the AI has provided a form to render on the client side.
  - data.controlType can be used to handle different types of controls, such as carousels or custom components.
Advanced Topics
Image Processing Before Sending
To ensure your images do not exceed the payload size limit (16 MB), resize and compress them before conversion to Base64.
function resizeAndCompressImage(file, maxWidth, callback) {
const reader = new FileReader()
reader.onload = function (event) {
const img = new Image()
img.onload = function () {
const canvas = document.createElement("canvas")
const scaleSize = maxWidth / img.width
canvas.width = maxWidth
canvas.height = img.height * scaleSize
const ctx = canvas.getContext("2d")
ctx.drawImage(img, 0, 0, canvas.width, canvas.height)
const compressedDataUrl = canvas.toDataURL("image/jpeg", 0.7) // Adjust quality as needed
callback(compressedDataUrl)
}
img.src = event.target.result
}
reader.readAsDataURL(file)
}
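The scaling arithmetic in resizeAndCompressImage can be factored out and checked independently of the browser APIs. The helper below is a sketch of our own; unlike the snippet above, it also guards against upscaling images that are already narrower than maxWidth:

```javascript
// Compute target canvas dimensions for a maximum width, preserving the
// image's aspect ratio. Mirrors the math used in resizeAndCompressImage.
function scaledDimensions(width, height, maxWidth) {
  if (width <= maxWidth) {
    return { width, height }; // already small enough; no upscaling
  }
  const scale = maxWidth / width;
  return { width: maxWidth, height: Math.round(height * scale) };
}
```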
Maintaining Conversation Context
The conversationId is used to maintain the context of a conversation across multiple queries. If you provide a conversationId, the AI will consider previous interactions in its response. If omitted or set to an empty string, a new conversation context is started.
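One way to manage this is a small session wrapper that captures the conversationId from each end_response and can be reset to start a fresh conversation. This class is a sketch of our own, not part of the ai12z API:

```javascript
// Track conversation context across queries. The server assigns a
// conversationId on the first response; sending it back keeps context.
class ConversationSession {
  constructor() {
    this.conversationId = ""; // empty string => new conversation
  }

  // Call with the data object from each end_response event.
  absorb(endResponseData) {
    if (endResponseData && endResponseData.conversationId) {
      this.conversationId = endResponseData.conversationId;
    }
  }

  // Forget the context so the next query starts a new conversation.
  reset() {
    this.conversationId = "";
  }
}

const session = new ConversationSession();
session.absorb({ conversationId: "abc-123" });
// Subsequent payloads would include session.conversationId ("abc-123").
session.reset(); // the next query starts fresh
```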
Reference
Data Object Definitions
Request Data Object (evaluate_query event):
Field | Type | Required | Description |
---|---|---|---|
apiKey | String | Yes | Your API key. |
query | String | Yes | The user's query or question. |
conversationId | String | No | The conversation ID for maintaining context. |
event | String | Yes | Should be set to "evaluate_query". |
base64Images | Array | No | An array of Base64-encoded images. |
includeTags | Array | No | Tags to include in the query context. |
excludeTags | Array | No | Tags to exclude from the query context. |
requestMetadata | Object | No | Additional metadata for the request. |
Event Listeners
-
response
Event: Receives partial responses (tokens) from the server.socket.on("response", function (event) {
// Handle partial response
}) -
end_response
Event: Indicates the end of a response and provides the complete answer.socket.on("end_response", function (data) {
// Handle final response
})
Things to Consider
Consider using the ai12z Web Components and React controls; they handle much of what you would otherwise build yourself.
- Client Responsibility:
  - This code does not manage a chatbot.
  - Buttons returned in responses use a JavaScript sendQuery call that your custom chatbot would need to handle.
  - A form-rendering script needs to be created.
Troubleshooting
- Connection Errors: If you cannot establish a connection, check your internet connectivity and ensure that the endpoint URL is correct.
- Authentication Failures: If you receive an authentication error, verify that your API key is valid and has not expired.
- Payload Size Exceeded: If you encounter payload size errors, reduce the size of your images or split your data into smaller chunks.
- Unhandled Errors: Always check the error field in the end_response event to handle any server-side errors.
Glossary
- LLM (Large Language Model): A type of AI model that can understand and generate human-like text.
- Vector Database: A database optimized for storing and querying high-dimensional vectors, often used in machine learning applications.
- AnswerAI: A feature of AI12Z that provides answers based on vector database searches.
- Bubble: A message or response unit displayed in the user interface.
- Control Data: Data returned by the agent that bypasses the Large Language Model.
- Form Model: A data structure representing a form to be rendered on the client side.
About AI12Z
The AI12Z platform provides advanced AI capabilities, including natural language understanding and image processing. You can use it to build chatbots, virtual assistants, and other AI-driven applications.
API Version
This documentation refers to AI12Z WebSocket API version 1.0. Ensure that your client library is compatible with this version.
By following this guide, you should be able to integrate the AI12Z WebSocket API into your application, handle real-time communication, and process AI responses effectively. If you have any questions or need further assistance, please refer to our Developer Support page.