Middleware
While direct API calls work well, middleware provides an automated way to track and evaluate LLM interactions. Elluminate's OpenAI middleware offers:
- Automatic Response Tracking: All interactions with the LLM are automatically recorded without additional code, ensuring comprehensive documentation of model outputs
- Seamless Integration: Works with your existing OpenAI client code - minimal changes required to start collecting data
- Automated Evaluation: Automatically generates and applies evaluation criteria to responses, providing continuous quality assessment
- Development Insights: Helps identify patterns in model behavior and response quality across different prompts and use cases
- Simplified Workflow: Reduces boilerplate code needed for logging and evaluation by handling these tasks automatically
```python
import os

from dotenv import load_dotenv
from elluminate.middleware_sdk import ElluminateOpenAIMiddleware
from openai import AzureOpenAI

load_dotenv(override=True)

# Initialize the ElluminateOpenAIMiddleware
ElluminateOpenAIMiddleware.initialize()  # (1)!

# Initialize the OpenAI client
client = AzureOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_version=os.environ.get("OPENAI_API_VERSION"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
)

# List of animals
animals = ["elephant", "penguin"]
# animals = ["elephant", "penguin", "octopus", "giraffe", "platypus"]

for animal in animals:
    # Create the prompt
    prompt = f"Tell me a fun and surprising fact about a {animal} in one paragraph."

    # Make the API call
    response = client.chat.completions.create(  # elluminate: animal-world
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a knowledgeable zoologist."},
            {"role": "user", "content": prompt},
        ],
    )

    # Print the result
    print(f"Fun fact about a {animal}:")
    print(response.choices[0].message.content)
```
1. That's it. You can now use the OpenAI client as usual; Elluminate automatically records the responses, generates evaluation criteria, and rates them.
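The example above uses `AzureOpenAI`, but nothing in the pattern is Azure-specific. Below is a minimal sketch of the same flow with the standard `OpenAI` client; whether the middleware also wraps this client is an assumption here, and the `OPENAI_API_KEY` variable and `animal-world` tag are illustrative:

```python
import os

from dotenv import load_dotenv
from elluminate.middleware_sdk import ElluminateOpenAIMiddleware
from openai import OpenAI  # standard (non-Azure) client; assumed to be supported

load_dotenv(override=True)

# Assumption: initialize() also instruments the standard OpenAI client.
ElluminateOpenAIMiddleware.initialize()

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# The inline tag mirrors the Azure example above.
response = client.chat.completions.create(  # elluminate: animal-world
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a knowledgeable zoologist."},
        {"role": "user", "content": "Tell me a fun and surprising fact about an octopus in one paragraph."},
    ],
)
print(response.choices[0].message.content)
```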