Lab For AI
How to Orchestrate the Conversation Between Multiple Agents in AutoGen

A Quick Guide to the Usage of the "Description" Field in the GroupChat

Yeyu Huang
Jan 18, 2024
Image by author

Having walked through quite a few tutorials on the AutoGen framework, it’s time to look at some tricks that help you build a practical app. Currently, when a group of agents (LLM bots) need to talk to each other, the usual approach is to create a GroupChatManager to manage them in a group chat. If you are developing a real-world AutoGen project with multiple agents completing complex tasks, you’ll quickly find that managing the order in which they speak is critical. It’s not simply about what expertise they have or what content they should generate; it’s about ensuring they speak at the right time so the conversation stays coherent and purposeful.

Without careful management, the conversation can become chaotic, with agents talking over each other or out of turn, which obscures the overarching goal of the interaction and wastes tokens. Orchestrating the conversation effectively is challenging given the complexity of the tasks and the roles of the agents, and even more so when you need a human to jump in for approval. This is where orchestration via the “description” field comes into play.


From System Message to Description

Before version 0.2.2 of AutoGen, the GroupChat manager looked at an agent’s system_message and name to figure out what the agent does. This worked fine when the system_message was short, to the point, and free of special instructions, but it got messy when the system_message tried to guide the agent to collaborate with other agents.

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="You are a helpful assistant.",
    llm_config={
        "timeout": 600,
        "seed": 42,
        "config_list": config_list,
    },
)

The version 0.2.2 update introduces a description field for all agents, which replaces the system_message for group-member recognition in GroupChat.

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="You are a helpful assistant.",
    llm_config={
        "timeout": 600,
        "seed": 42,
        "config_list": config_list,
    },
    description="This is a helpful assistant",
)

If you leave the description empty, the orchestrator (GroupChatManager) will still use the system_message for conversation orchestration. Note that even when the orchestrator does not rely on the system_message, it remains a required field, because each agent uses it as its own system prompt. If you were having trouble with GroupChat before, tweaking the description might help.

More details about the differences between the old and new orchestration templates can be found in AutoGen’s official blog post.

It is worthwhile to examine the benefits of this field for an automated conversation workflow in the experiment below, which I find more intuitive than the one in the official blog post.

GroupChat Experiment

Let me first introduce the design of this experiment: we are going to create a group chat app that helps the user generate a travel plan.

In this group chat, we aim to create an automated conversation workflow that generates custom travel plans and insurance packages for given locations and dates. The system includes several AI agents: 

  • Weather reporter, who provides the weather conditions based on the locations and dates. Since function calling is not related to today’s topic, we ask this agent to use its training knowledge to provide historical data.

  • Activity agent, who provides recommendations for travel activities based on the location and weather conditions.

  • Travel advisor, who generates a travel itinerary including each day’s activities.

  • Insurance agent, who recommends insurance options tailored to the planned activities and potential risks. 

To be specific, the speaking sequence is designed to follow Weather reporter -> Activity agent -> Travel advisor -> Insurance agent.

Image by author

Now let’s write the code and compare the results.

a. No Description

First, we will construct these agents with legacy system messages.

Step 1 — Install the AutoGen package first.

pip install pyautogen

Step 2 — Define the llm_config.

In this case, I am using the GPT-3.5 model instead of GPT-4 because most AutoGen developers choose GPT-3.5 for its fast generation and lower cost, and because GPT-3.5 makes it easier to expose the defects of orchestration by system messages.

import autogen

config_list = [
    {
        'model': 'gpt-3.5-turbo-1106',
        'api_key': 'sk-OpenAI_API_Key',
    }
    ]

llm_config = {"config_list": config_list, "seed": 52}

Step 3 — Construct the UserProxy

We don’t really need the user proxy for human interaction or code execution in this case, so we just create a simple one.

user_proxy = autogen.UserProxyAgent(
   name="User_proxy",
   system_message="A human admin.",
   is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
   code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
   human_input_mode="TERMINATE"
)

Step 4 — Construct the four AI agents

Now we should start building the LLM-powered agents with the proper system messages to guide what content they should generate.

weather_reporter = autogen.AssistantAgent(
    name="Weather_reporter",
    system_message="""You are a weather reporter who provides weather 
    overall status based on the dates and location user provided.
    Using historical data is OK.
    Make your response short.""",
    llm_config=llm_config,
)
activity_agent = autogen.AssistantAgent(
    name="activity_agent",
    system_message="""You are an activity agent who recommends 
    activities considering the weather situation from weather_reporter.
    Don't ask questions. Make your response short.""",
    llm_config=llm_config,
)
insure_agent = autogen.AssistantAgent(
    name="Insure_agent",
    system_message="""You are an Insure agent who gives 
    the short insurance items based on the travel plan. 
    Don't ask questions. Make your response short.""",
    llm_config=llm_config,
)
travel_advisor = autogen.AssistantAgent(
    name="Travel_advisor",
    system_message="""After activities recommendation generated 
    by activity_agent, You generate a concise travel plan 
    by consolidating the activities.
    """,
    llm_config=llm_config,
)

In the system messages, we clearly instruct each agent which other agent’s output it should refer to, such as “considering the weather situation from weather_reporter”, “based on the travel plan”, or “After activities recommendation generated by activity_agent”, which looks obvious enough for these GPT-3.5-powered agents to follow. Note that we also ask the agents to keep their output short for a clearer demonstration.

Step 5 — Create a group chat and prompt it.

Put all the agents into a group chat and create a manager to orchestrate them. Then send a user request to generate a travel plan for Bohol Island in September.

groupchat = autogen.GroupChat(
    agents=[user_proxy, travel_advisor, activity_agent, weather_reporter, insure_agent],
    messages=[],
    max_round=8,
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Give me travel advice for Bohol Island in Sept.")

Now, let’s wrap up and run the app to see the output.

Unfortunately, the speaker selection did not go as expected. The actual speaking order was Weather_reporter -> Travel_advisor -> Insure_agent -> Activity_agent: the Travel advisor generated the travel plan directly without waiting for the activity items from the Activity agent. This happened regularly when I developed AutoGen projects for my clients, which are far more complicated than this example, and it got even worse when human interaction was required under certain conditions.
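As an aside, if the order must be strictly sequential no matter what, GroupChat also accepts a speaker_selection_method parameter, where "round_robin" rotates through the agents in list order. A fragment reusing the agents defined above (assuming AutoGen v0.2+):

```python
# Alternative: bypass LLM-based selection entirely and rotate through
# the agents in list order (fragment; reuses the agents defined above).
groupchat = autogen.GroupChat(
    agents=[user_proxy, weather_reporter, activity_agent, travel_advisor, insure_agent],
    messages=[],
    max_round=8,
    speaker_selection_method="round_robin",
)
```

This sidesteps the problem rather than solving it, since a fixed rotation gives up the flexibility of LLM-driven selection that descriptions are meant to steer.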

b. Use Description

Now, let’s extend the above code to equip these agents with proper descriptions. Here you should review your descriptions carefully to make sure the content does not address the agent as “you” the way a system prompt does; instead, describe each agent’s role in the third person.
