Using Instructor for Structured Data Output in AI Agents

A Quick Tutorial for Integrating Instructor into AutoGen Apps

Yeyu Huang
Sep 22, 2024

The ability to generate structured data from unstructured inputs has become a crucial feature for AI applications. OpenAI introduced JSON mode at DevDay last year, which improved model reliability in generating valid JSON outputs for downstream agentic processing. However, it didn’t guarantee that the model’s response would conform to a particular schema.

To address this limitation, OpenAI recently introduced Structured Outputs in its API. This feature ensures that model-generated outputs exactly match the JSON schemas provided by developers. On evaluations of complex JSON schema following, the new GPT-4o-2024-08-06 model with Structured Outputs achieves a perfect 100% score, compared with less than 40% for GPT-4-0613, which has no structured output feature.
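
For reference, this native capability is exposed through the OpenAI Python SDK's parse helper. Below is a minimal sketch, using an illustrative CalendarEvent schema (not from this tutorial) and assuming an OPENAI_API_KEY is set in the environment; the parsed output is guaranteed to match the supplied Pydantic model.

from typing import List
from openai import OpenAI
from pydantic import BaseModel

# Illustrative schema for demonstration purposes
class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: List[str]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Alice and Bob meet for a science fair on Friday."}],
    response_format=CalendarEvent,  # the response is parsed into this schema
)
print(completion.choices[0].message.parsed)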

While OpenAI has implemented this feature in GPT-4o, it’s important to note that other models, including open-source ones, can also benefit from similar capabilities. This tutorial focuses on using the Instructor library to enable structured output functionality for various models, not just for simple chatbots but also for complex multi-agent applications like those built with the AutoGen framework.

Instructor is a library designed to generate structured outputs from language models. Built on Pydantic, it offers simplicity, transparency, and a user-centric design. Instructor supports applications by managing validation and re-asking, retries, and the streaming of lists and partial responses for UI rendering.

At its core, Instructor combines function calling and Pydantic to create structured LLM outputs. It uses Pydantic’s mechanism to define the schema and validate the data in a single class, which makes it easy for developers to integrate into their agent-powered Python projects.
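
As an example of the validation and re-asking flow, here is a minimal sketch with a hypothetical UserInfo schema against the standard OpenAI endpoint (not the OpenRouter setup used later): if the response fails the Pydantic validator, Instructor sends the validation error back to the model and retries up to max_retries times.

import instructor
from openai import OpenAI
from pydantic import BaseModel, field_validator

# Hypothetical schema for illustration
class UserInfo(BaseModel):
    name: str
    age: int

    @field_validator("name")
    @classmethod
    def name_must_be_uppercase(cls, v: str) -> str:
        # If this raises, Instructor re-asks the model with the error message
        if v != v.upper():
            raise ValueError("name must be in uppercase")
        return v

client = instructor.from_openai(OpenAI())

user = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=UserInfo,
    max_retries=2,  # number of re-ask attempts on validation failure
    messages=[{"role": "user", "content": "Extract: John is 31 years old."}],
)
print(user)  # e.g. UserInfo(name='JOHN', age=31)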

Basic Usage of Instructor

Let’s start with a basic example of using Instructor to generate structured output from an LLM.

First, install the latest Instructor, Pydantic and OpenAI packages.

pip install instructor pydantic openai

We’ll import the necessary libraries and set up our environment:

import instructor
from typing import List
from pydantic import BaseModel, Field
from openai import OpenAI
import os
from dotenv import load_dotenv

# Load environment variables from a .env file and read the OpenRouter API key
load_dotenv()
api_key = os.getenv("OPENROUTER_API_KEY")

Here, we load the API key for OpenRouter, a low-cost inference platform that lets users call various open-source and commercial language models with high throughput via its API. You can acquire an API key from its website. We will use its OpenAI-compatible API to quickly see how Instructor produces reliably structured output from smaller LLMs.

Next, we’ll define our desired output structure using a Pydantic model:


class ReasoningSteps(BaseModel):
    reasoning_steps: List[str] = Field(
        ..., description="The detailed reasoning steps leading to the final conclusion."
    )
    answer: str = Field(..., description="The final answer, taking into account the reasoning steps.")

This is a practical case that benefits from structured output: separating the reasoning steps from the final answer. To implement it, we define a ReasoningSteps class using Pydantic's BaseModel. It contains two required fields:

  1. reasoning_steps: A list of strings representing the steps in the reasoning process.

  2. answer: A string for the final conclusion.

Both fields use Pydantic’s Field: the ... placeholder marks them as required, and the description tells the model what each field should contain.
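
To illustrate this independently of any LLM call, you can instantiate the model directly with sample values; omitting a required field raises a ValidationError.

from pydantic import ValidationError

# A valid instance with illustrative values
steps = ReasoningSteps(
    reasoning_steps=["9.9 equals 9.90", "9.90 is greater than 9.11 and 9.10"],
    answer="9.9 is the largest",
)
print(steps.model_dump_json(indent=2))

# Missing the required 'answer' field triggers a validation error
try:
    ReasoningSteps(reasoning_steps=["steps without an answer"])
except ValidationError as e:
    print(e)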

Now, we’ll set up the client by wrapping the OpenAI client with Instructor’s from_openai constructor, pointing it at OpenRouter’s inference endpoint. You can also use other built-in constructors, such as from_anthropic for Claude or from_gemini for Gemini, and even LiteLLM, which can act as a proxy for your local models.


client = instructor.from_openai(
    OpenAI(base_url="https://openrouter.ai/api/v1", api_key=api_key),
    mode=instructor.Mode.JSON,
)

result = client.chat.completions.create(
    model="mistralai/mistral-large",
    response_model=ReasoningSteps,
    messages=[{"role": "user", "content": "Compare three decimal numbers 9.11, 9.9, 9.10, which is bigger?"}],
)

print(result)

We specify the “mistralai/mistral-large” model and set the response_model to ReasoningSteps, ensuring the output conforms to the previously defined structure.

In our user message, we ask the model to compare three decimal numbers: 9.11, 9.9 and 9.10. The create method is called to generate a response, including reasoning steps and a final answer about which number is the largest, structured according to the ReasoningSteps class.

Running this code produces exactly the output we expect: a structured, step-by-step chain of reasoning followed by the final answer.
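
Since result is a validated ReasoningSteps instance rather than a raw JSON string, its fields can be read as typed attributes, for example:

# Iterate over the typed fields of the returned ReasoningSteps instance
for i, step in enumerate(result.reasoning_steps, start=1):
    print(f"Step {i}: {step}")
print("Answer:", result.answer)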

Integrating Instructor with AutoGen

Now, let’s explore how to integrate Instructor with the AutoGen framework to create agents that produce reliable structured outputs conforming to their defined schemas.
