Managing Assistant
Last updated
All your datasources can be viewed in the Datasources tab, accessible from the side menu.
You can either chat with your assistant or manage its settings using the corresponding buttons.
The details page has the following menus:
View and edit basic information such as the Name and Description. You can also view other assistants from the side menu.
Like all datasources, assistants have a “brain” that helps them resolve your queries. You can ask questions in natural English, and the assistant answers by orchestrating and executing tasks in a logical order. The assistant’s brain, powered by a large language model (LLM), is responsible for understanding your query, determining the correct sequence of tasks, deciding which datasource to query, retrieving data, and presenting it to you in a summarized form.
This brain is configurable, allowing you to tweak the LLM based on your specific requirements and performance expectations.
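The orchestration flow described above (understand the query, order the tasks, pick a datasource, retrieve data, summarize) can be sketched as follows. This is purely illustrative: all class and function names here are invented, and a trivial keyword match stands in for the LLM's actual planning.

```python
from dataclasses import dataclass

# Illustrative sketch of the assistant's orchestration loop; names are
# hypothetical and the real product's internals may differ entirely.

@dataclass
class Datasource:
    name: str
    tables: dict  # table name -> rows

    def run(self, task: str):
        # Stand-in for executing a generated query against this source.
        return self.tables.get(task, [])

def answer(query: str, datasources: list) -> str:
    # 1. "Understand" the query and break it into ordered tasks.
    #    (A real LLM plans this; here we hard-code one task per keyword.)
    tasks = [word for word in query.lower().split()
             if any(word in ds.tables for ds in datasources)]
    # 2. For each task, pick the datasource that can serve it and fetch data.
    results = []
    for task in tasks:
        source = next(ds for ds in datasources if task in ds.tables)
        results.append((task, source.name, source.run(task)))
    # 3. Present the retrieved data in summarized form.
    return "; ".join(f"{t} from {s}: {len(rows)} rows" for t, s, rows in results)

sales = Datasource("sales_db", {"orders": [1, 2, 3]})
hr = Datasource("hr_db", {"employees": [1, 2]})
print(answer("show orders and employees", [sales, hr]))
# → orders from sales_db: 3 rows; employees from hr_db: 2 rows
```

The point of the sketch is the division of labor: planning and summarization are LLM responsibilities, while each datasource only executes the task it is handed.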
OpenAI: Uses OpenAI’s models like GPT-4 for SQL generation and other tasks.
Fine-tuned: Allows you to use a fine-tuned version of a language model specific to your dataset or use case.
Vertex AI: Google’s Vertex AI platform that allows for scalable model training and deployment.
Claude AI: Developed by Anthropic, this is another powerful LLM for natural language understanding and processing.
Azure OpenAI: Uses Microsoft’s Azure platform to access OpenAI models integrated with Azure’s enterprise-grade infrastructure.
Choose the specific language model. Each model might vary in size, speed, and capabilities.
Hyperparameter Tuning
Temperature:
What it Does: Controls the randomness of the AI’s output. Lower values make the output more focused and deterministic, while higher values make it more creative and diverse.
Example: Setting it to 0 will make the model output more predictable and fact-based.
Top P:
What it Does: Known as "nucleus sampling," Top P controls the diversity of the output by choosing from the smallest possible set of words whose probabilities add up to P.
Example: A Top P of 0.9 means the model samples from the smallest set of tokens whose combined probability reaches 90%, adding some variability to the output.
Frequency Penalty:
What it Does: Penalizes the model for using the same words too frequently, promoting more diverse wording in the output.
Example: A higher value discourages repetitive words or phrases in the generated SQL query.
Presence Penalty:
What it Does: Similar to the frequency penalty, but it applies to the presence of specific terms in the output, making the AI less likely to repeat the same concepts.
Example: Setting a higher presence penalty will push the model to explore new topics or avoid repeating the same logic.
Max Output Tokens:
What it Does: Defines the maximum number of tokens (or word pieces) that the model can generate. This limits the length of the SQL query or the response.
Example: A value of 1000 means the output is capped at 1000 tokens, which is useful for controlling how verbose or concise the output is.
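To make Temperature and Top P concrete, the following minimal sketch implements both over a toy four-token vocabulary. The logits and helper names are invented for illustration; real model vocabularies are far larger, but the math is the same.

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by temperature, then softmax into probabilities.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more diverse)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p):
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability reaches p, then renormalize the kept probabilities."""
    indexed = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, prob in indexed:
        kept.append((idx, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {idx: prob / total for idx, prob in kept}

# Toy vocabulary of four tokens with raw model scores (logits).
logits = [2.0, 1.0, 0.5, 0.1]

sharp = apply_temperature(logits, 0.2)  # low temperature: near-deterministic
flat = apply_temperature(logits, 2.0)   # high temperature: closer to uniform
print(max(sharp) > max(flat))           # True: low temperature concentrates mass

nucleus = top_p_filter(apply_temperature(logits, 1.0), 0.9)
print(len(nucleus) < len(logits))       # True: the low-probability tail is dropped
```

Frequency and presence penalties work differently: rather than reshaping the whole distribution, they subtract a penalty from the logits of tokens that have already appeared in the output.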
Select a model that best suits your data complexity and query requirements.
You can define the Assistant’s role and functionality by entering the prompt. This helps guide the Assistant in how to handle the user’s query and interact with the dataset.
Example Prompt: "You are an intelligent AI system designed to assist with data analysis and insights generation based on data available to you as tools."
Note: The default prompt should work for most use cases, but users can change it as per their specific requirements.
It is important to enter the prompt carefully, as it directly affects the behaviour of the assistant. The prompt can also be changed later, after onboarding.
View, add, and manage the datasources that your assistant can access. These act as tools at the assistant's runtime, helping it solve its tasks.
You can also select any datasource from the side menu and tweak its settings:
Here, you can add prompts to guide the assistant on when and how to use each datasource. First, specify prompts that tell the assistant when to reference a particular datasource. Next, set instructions on how to handle query modifications before sending them to the datasource. In most cases, simply "sending the query as is" will work well. However, for specific needs, you can customize how the assistant modifies the user’s query before it’s sent to the datasource.
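The two per-datasource prompts described above — when to reference a datasource, and how to modify the query before sending it — can be sketched as a simple routing step. Everything here is a hypothetical illustration: the keyword lists, rules, and function names are invented, and "send the query as is" is just the identity rule.

```python
# Hypothetical sketch of per-datasource "when" hints and "modify" rules.

def route_and_modify(user_query: str, datasources: dict) -> tuple:
    """Pick the first datasource whose 'when' hint matches the query,
    then apply that datasource's 'modify' rule to the query."""
    for name, config in datasources.items():
        if any(keyword in user_query.lower() for keyword in config["when"]):
            return name, config["modify"](user_query)
    # Fallback: send the query as is to the first datasource.
    first = next(iter(datasources))
    return first, user_query

datasources = {
    "sales_db": {
        "when": ["revenue", "orders"],
        # "Send the query as is" — the common case noted above.
        "modify": lambda q: q,
    },
    "hr_db": {
        "when": ["employee", "headcount"],
        # A custom modification: scope every HR query to the current year.
        "modify": lambda q: q + " for the current year",
    },
}

print(route_and_modify("show employee headcount", datasources))
# → ('hr_db', 'show employee headcount for the current year')
```

In practice you express both rules as natural-language prompts rather than code; the sketch only shows the routing behaviour those prompts produce.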
'Chat History' allows you to view your chat threads with the assistant. You can resume a chat by clicking the corresponding button.