Harnessing the Power of Azure Assistants for Intelligent Interactions
In today’s data-driven world, businesses rely on artificial intelligence (AI) to automate processes, enhance decision-making, and improve user experiences. Azure OpenAI provides an extensive platform that allows developers to build and integrate intelligent assistants capable of handling complex tasks.
In this blog, we’ll explore how you can build a system where two Azure Assistants communicate with each other, working together to solve real-world problems. We’ll dive into the architecture, showcase the interaction between these assistants, and look at the flexibility of this approach for a wide range of applications.
Introduction: Why Use Multiple Assistants?
As the complexity of tasks increases, breaking them down into smaller, manageable pieces handled by separate agents or assistants can be highly beneficial. Instead of relying on a single assistant to manage all the operations, you can have multiple specialized assistants that collaborate to achieve better results. For instance, one assistant might handle data preprocessing, while another focuses on analysis or decision-making.
Azure Assistants make it easy to set up and manage this kind of multi-agent system, thanks to their robust APIs and flexible architecture. In this blog, we’ll explore how to implement two assistants interacting in sequence to complete a given task, demonstrating the power of this approach for a variety of use cases.
A Walk Through the Code
Step 1: Set Up Your Environment
Before we jump into the code, it’s important to ensure the environment is correctly configured. Sensitive data like API keys, endpoints, and logging levels should be managed securely through environment variables. Here’s how you can set it up:
from dotenv import load_dotenv
from app.utils.environment_details import EnvironmentDetails
import logging
import os
load_dotenv() # Load environment variables from .env file
logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"))
env_var = EnvironmentDetails() # Helper class to fetch environment details
This configuration helps you avoid hardcoding sensitive information directly in your script, following best practices for security and maintainability.
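The EnvironmentDetails helper class isn't shown in the post. A minimal sketch of what it might look like, assuming conventional variable names (AZURE_OPENAI_API_KEY and friends) rather than the project's actual ones:

```python
import os

class EnvironmentDetails:
    """Hypothetical helper that centralizes access to Azure OpenAI settings."""

    def get_azure_openai_api_key(self) -> str:
        return os.environ["AZURE_OPENAI_API_KEY"]

    def get_azure_openai_api_version(self) -> str:
        return os.environ["AZURE_OPENAI_API_VERSION"]

    def get_azure_openai_endpoint(self) -> str:
        return os.environ["AZURE_OPENAI_ENDPOINT"]
```

Keeping every lookup behind one class means a missing variable fails fast with a KeyError, instead of surfacing as a confusing authentication error deep inside a request.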
Step 2: Building the Assistant Service
The core of this system is the AssistantService class. This service allows you to initialize multiple Azure OpenAI assistants, each designed to handle specific tasks. Let’s walk through how we can set up this service.
from openai import AzureOpenAI  # Azure-flavored client from the openai SDK

class AssistantService:
    def __init__(self, input_data: list, user_token_id: str):
        # Initialize the first Azure OpenAI client
        self.client1 = AzureOpenAI(
            api_key=env_var.get_azure_openai_api_key(),
            api_version=env_var.get_azure_openai_api_version(),
            azure_endpoint=env_var.get_azure_openai_endpoint(),
        )
        # Initialize the second Azure OpenAI client
        self.client2 = AzureOpenAI(
            api_key=env_var.get_azure_openai_api_key(),
            api_version=env_var.get_azure_openai_api_version(),
            azure_endpoint=env_var.get_azure_openai_endpoint(),
        )
        self.input_data = input_data
        self.user_token_id = user_token_id
        logging.info("Assistant Service initialized.")
Here, we set up two Azure OpenAI clients (client1 and client2), each responsible for interacting with a separate assistant. This lets you divide tasks cleanly between the two assistants, and the pattern extends naturally to additional agents as your application grows.
Step 3: Managing Assistant Communication
Now, let’s set up how these assistants communicate. The idea is for the assistants to interact via message threads, where one assistant processes input and passes the result to the next.
Create Message Threads
Each assistant gets its own thread so that its task stays isolated. Here's how to create and manage message threads:
def create_message_thread(self, client):
    """Create a new empty message thread for a given client."""
    return client.beta.threads.create()

def add_user_message_to_thread(self, client, thread, message):
    """Add a user message to the specified thread."""
    return client.beta.threads.messages.create(
        thread.id, role="user", content=message
    )
Step 4: Executing Tasks with Assistant 1
The first assistant handles the initial data processing. After processing the input, it returns a response, which is then passed to the second assistant.
async def run_assistant_1(self):
    # Create a thread for Assistant 1
    thread1 = self.create_message_thread(self.client1)
    # Add the input data as user messages
    for item in self.input_data:
        self.add_user_message_to_thread(self.client1, thread1, str(item))
    # Start the run for Assistant 1
    run1 = self.execute_thread_for_assistant(self.client1, thread1, self.assistant_id1)
    # Poll until the run completes; await keeps the event loop free
    # (requires "import asyncio" at the top of the module)
    while run1.status != "completed":
        await asyncio.sleep(10)
        run1 = self.check_thread_run_status(self.client1, thread1, run1)
    # Fetch the assistant's response
    thread_messages1 = self.fetch_all_thread_messages(self.client1, thread1)
    assistant_message1 = next(message for message in thread_messages1 if message.role == "assistant")
    return assistant_message1.content[0].text.value  # Response from Assistant 1
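The polling loop relies on three helpers (execute_thread_for_assistant, check_thread_run_status, fetch_all_thread_messages) that the snippets don't show. Against the OpenAI Python SDK's beta Assistants endpoints, plausible implementations look like this; treat it as a sketch of the pattern, not the post's actual code (they'd be methods with self in the service itself):

```python
def execute_thread_for_assistant(client, thread, assistant_id):
    """Start a run of the given assistant over the thread's messages."""
    return client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant_id
    )

def check_thread_run_status(client, thread, run):
    """Re-fetch the run so the caller sees its latest status."""
    return client.beta.threads.runs.retrieve(
        thread_id=thread.id, run_id=run.id
    )

def fetch_all_thread_messages(client, thread):
    """Return the thread's messages (the API lists newest first)."""
    return client.beta.threads.messages.list(thread_id=thread.id).data
```

The newest-first ordering is what makes `next(... if message.role == "assistant")` pick up the latest assistant reply rather than an older one.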
Step 5: Processing with Assistant 2
The second assistant takes the response from the first assistant and refines or processes it further. Here’s how this flow works:
async def run_assistant_2(self, assistant_1_response):
    # Create a thread for Assistant 2
    thread2 = self.create_message_thread(self.client2)
    # Add the response from Assistant 1 as input for Assistant 2
    self.add_user_message_to_thread(self.client2, thread2, assistant_1_response)
    # Start the run for Assistant 2
    run2 = self.execute_thread_for_assistant(self.client2, thread2, self.assistant_id2)
    # Poll until the run completes (requires "import asyncio" at the top of the module)
    while run2.status != "completed":
        await asyncio.sleep(10)
        run2 = self.check_thread_run_status(self.client2, thread2, run2)
    # Fetch the assistant's response
    thread_messages2 = self.fetch_all_thread_messages(self.client2, thread2)
    assistant_message2 = next(message for message in thread_messages2 if message.role == "assistant")
    return assistant_message2.content[0].text.value  # Final response
Step 6: Real-World Use Cases
This assistant architecture is flexible and can be adapted to multiple scenarios, such as:
- Customer Support: One assistant gathers customer information, while another provides solutions or troubleshooting steps.
- Document Processing: One assistant extracts key details, and the other organizes the data into a structured report.
- Conversational AI: One assistant manages user queries, while another handles tasks like bookings or recommendations.
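In each of these scenarios the orchestration is the same: run Assistant 1, hand its output to Assistant 2, persist the result. A minimal sketch of that glue, assuming the AssistantService methods shown above:

```python
import asyncio

async def run_pipeline(service):
    """Chain two assistants: Assistant 1's output becomes Assistant 2's input."""
    intermediate = await service.run_assistant_1()       # e.g., extraction/preprocessing
    final = await service.run_assistant_2(intermediate)  # e.g., refinement/decision
    service.save_recommendation(final)                   # persist for reporting/auditing
    return final
```

A caller would kick it off with something like `asyncio.run(run_pipeline(AssistantService(input_data, user_token_id)))`. Keeping the chaining in one place makes it easy to insert a third assistant later without touching the individual run methods.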
Step 7: Storing Results for Future Use
Finally, once both assistants complete their tasks, the results can be saved in a database for future use, reporting, or auditing.
def save_recommendation(self, result):
    # "db" is assumed to be a MongoDB database handle (e.g., from pymongo),
    # configured elsewhere in the application
    collection = db['recommendations']
    recommendation_document = {
        'user_id': self.user_token_id,
        'result': result
    }
    collection.insert_one(recommendation_document)
Conclusion
Using multiple Azure Assistants to handle distinct yet interconnected tasks is a powerful way to build intelligent, scalable systems. Whether you are working with customer data, financial records, or any other domain, dividing tasks between specialized agents allows for more efficient processing and better results.
With Azure OpenAI, you can easily set up and manage these assistants, build flexible workflows, and adapt them to a wide range of business applications. Start exploring the possibilities today and unlock the full potential of AI-powered assistants in your projects!