
HuangtingFlux × CrewAI

Use MCPServerHTTP or the DSL shorthand to connect the Huangting Protocol three-stage SOP to your CrewAI multi-agent system and automatically reduce token usage by 40%.

✓ DSL One-liner · ✓ MCPServerHTTP Support · ✓ CrewBase Compatible · ✓ Multi-Agent Shared SOP

1. Install Dependencies

# Recommended: install with uv (CrewAI official recommendation)
uv add mcp crewai crewai-tools

# Or use pip
pip install mcp crewai "crewai-tools[mcp]"

HuangtingFlux is a standard remote MCP server. Connect via CrewAI's built-in MCP support — no additional SDK needed.

2. Quick Start: DSL Method (Recommended)

🚀 CrewAI v1.10+ recommends the DSL method: just pass the endpoint URL string in the mcps field. This is the simplest approach.

from crewai import Agent, Task, Crew

# ✨ Simplest approach: DSL method (CrewAI recommended)
# Pass the HuangtingFlux endpoint URL directly in the mcps field
research_agent = Agent(
    role="AI Research Analyst",
    goal="Complete high-quality research tasks using the Huangting Protocol SOP, maximizing token efficiency",
    backstory="An experienced research analyst who follows the Huangting Protocol three-stage SOP workflow: "
              "start_task compresses input, report_step_result tracks progress, "
              "finalize_and_report refines output — reducing token usage by 40%.",
    mcps=[
        "https://mcp.huangting.ai/mcp",  # HuangtingFlux MCP endpoint, no auth required
    ],
    verbose=True,
)

research_task = Task(
    description="Deeply analyze the adoption trends of the MCP protocol in the AI Agent ecosystem, "
                "including framework support and best practices.",
    expected_output="Structured research report with token savings performance data",
    agent=research_agent,
)

crew = Crew(agents=[research_agent], tasks=[research_task])
result = crew.kickoff()
print(result)

3. Fine-Grained Config: MCPServerHTTP

When you need more control (e.g., caching, custom headers), use MCPServerHTTP for explicit configuration.

from crewai import Agent, Task, Crew
from crewai.mcp import MCPServerHTTP

# Use MCPServerHTTP for fine-grained configuration
huangting_server = MCPServerHTTP(
    url="https://mcp.huangting.ai/mcp",
    streamable=True,          # Use Streamable HTTP transport (recommended)
    cache_tools_list=True,    # Cache tool list to reduce repeated requests
    # headers={"X-Custom": "value"},  # Optional: custom request headers
)

agent = Agent(
    role="Token Optimization Specialist",
    goal="Complete tasks via the Huangting Protocol SOP for optimal token efficiency",
    backstory="An expert in AI agent workflow cost optimization, "
              "proficient in the Huangting Protocol three-stage SOP: "
              "start_task → report_step_result → finalize_and_report",
    mcps=[huangting_server],
    verbose=True,
)

task = Task(
    description="Analyze and compare the API pricing strategies of GPT-4o, Claude 3.7, and Gemini 2.0, "
                "providing selection recommendations for different scenarios.",
    expected_output="Detailed technical selection report with cost analysis and token savings data",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)

4. Production-Ready: CrewBase Class Integration

Use the mcp_server_params attribute in a @CrewBase-decorated class — ideal for structured production Crews.

from crewai import Agent, Task, Crew
from crewai.project import CrewBase, agent, task, crew
from crewai.mcp import MCPServerHTTP

@CrewBase
class HuangtingResearchCrew:
    """Research Crew using the Huangting Protocol SOP"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    # Declare MCP server configuration
    mcp_server_params = [
        MCPServerHTTP(
            url="https://mcp.huangting.ai/mcp",
            streamable=True,
            cache_tools_list=True,
        )
    ]

    @agent
    def research_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config["research_analyst"],
            tools=self.get_mcp_tools(),  # Auto-load HuangtingFlux tools
            verbose=True,
        )

    @agent
    def report_writer(self) -> Agent:
        return Agent(
            config=self.agents_config["report_writer"],
            tools=self.get_mcp_tools(),
            verbose=True,
        )

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config["research_task"])

    @task
    def writing_task(self) -> Task:
        return Task(config=self.tasks_config["writing_task"])

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            verbose=True,
        )

if __name__ == "__main__":
    result = HuangtingResearchCrew().crew().kickoff()
    print(result)

5. Multi-Agent Collaboration (Shared SOP Layer)

HuangtingFlux acts as the shared SOP optimization layer for all agents. Each agent independently calls the three-stage tools, maximizing token efficiency across the entire Crew.

from crewai import Agent, Task, Crew
from crewai.mcp import MCPServerHTTP

# Multi-agent collaboration: all agents share the HuangtingFlux SOP layer
huangting = MCPServerHTTP(
    url="https://mcp.huangting.ai/mcp",
    streamable=True,
    cache_tools_list=True,
)

# Agent 1: Researcher
researcher = Agent(
    role="Lead Researcher",
    goal="Conduct deep research on the given topic, following the Huangting Protocol SOP for efficiency",
    backstory="An AI analyst focused on technical research, skilled at information synthesis and insight extraction",
    mcps=[huangting],
    verbose=True,
)

# Agent 2: Writer
writer = Agent(
    role="Technical Documentation Writer",
    goal="Transform research findings into high-quality technical documents using the Huangting Protocol for refined output",
    backstory="A professional technical writer who excels at converting complex information into clear documentation",
    mcps=[huangting],
    verbose=True,
)

# Task chain
research_task = Task(
    description="Research the ecosystem development of the MCP protocol in 2025-2026",
    expected_output="Detailed research summary with key data points",
    agent=researcher,
)

writing_task = Task(
    description="Write a technical report based on the research findings, using finalize_and_report to refine the final output",
    expected_output="Complete technical report with token savings performance data",
    agent=writer,
    context=[research_task],  # Depends on research task output
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True,
)

result = crew.kickoff()
print(result)

Advanced: Direct MCP Session

When you need full control over the MCP protocol layer, use streamablehttp_client and ClientSession directly.

import asyncio
from mcp.client.streamable_http import streamablehttp_client
from mcp import ClientSession

# Advanced: direct MCP session control
async def run_with_session():
    async with streamablehttp_client("https://mcp.huangting.ai/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools_result = await session.list_tools()
            print(f"Available tools: {[t.name for t in tools_result.tools]}")
            # Output: ['start_task', 'report_step_result', 'finalize_and_report', 'get_network_stats']

            # Directly call start_task
            result = await session.call_tool(
                "start_task",
                arguments={
                    "task_description": "Analyze token consumption patterns in AI Agent frameworks",
                    "task_type": "complex_research"
                }
            )
            print(result.content)

asyncio.run(run_with_session())

Tool Reference: Three-Stage SOP Tools

start_task (Stage 1)

Task start: Compresses the input prompt, saving 30–60% tokens. Returns a compressed task brief and baseline token count.

Parameters

task_description: str, task_type: str

Returns

compressed_brief, baseline_tokens, task_id

report_step_result (Stage 2)

Step reporting: Call after each sub-step to generate a rolling summary replacing full conversation history, preventing context bloat.

Parameters

task_id: str, step_number: int, step_result: str

Returns

rolling_summary, tokens_used_this_step
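
The rolling-summary idea can be illustrated locally. The sketch below is conceptual only (the real summarization happens server-side in report_step_result): it keeps a bounded summary in place of the full step history, which is what prevents context bloat. The simple tail-truncation rule and the RollingSummary class are assumptions for illustration, not the protocol's actual algorithm.

```python
# Conceptual sketch: a bounded rolling summary instead of full step history.
class RollingSummary:
    def __init__(self, max_chars: int = 500):
        self.max_chars = max_chars
        self.summary = ""

    def add_step(self, step_number: int, step_result: str) -> str:
        entry = f"[step {step_number}] {step_result}"
        combined = f"{self.summary} {entry}".strip()
        # Keep only the most recent max_chars; older detail is dropped,
        # standing in for the server-side summarization.
        self.summary = combined[-self.max_chars:]
        return self.summary

rs = RollingSummary(max_chars=80)
rs.add_step(1, "Collected 12 sources on MCP adoption.")
rs.add_step(2, "Ranked frameworks by MCP support maturity.")
print(len(rs.summary) <= 80)  # context stays bounded regardless of step count
```

However many steps are reported, the context handed back to the agent stays a fixed size, which is the property the rolling summary is buying.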

finalize_and_report (Stage 3)

Task end: Refines the final output and generates a verifiable token-saving performance report (with savings ratio and token comparison).

Parameters

task_id: str, final_output: str

Returns

refined_output, performance_report
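
The "savings ratio and token comparison" in the performance report boils down to simple arithmetic. A local sketch (the function name and the exact formula are assumptions; the real report is computed server-side):

```python
def savings_ratio(baseline_tokens: int, actual_tokens: int) -> float:
    """Fraction of the baseline saved: (baseline - actual) / baseline."""
    if baseline_tokens <= 0:
        raise ValueError("baseline_tokens must be positive")
    return (baseline_tokens - actual_tokens) / baseline_tokens

# A 10,000-token baseline completed in 6,000 tokens is a 40% saving,
# matching the headline figure quoted in this guide.
print(savings_ratio(10_000, 6_000))  # 0.4
```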

get_network_stats (Stats)

Query real-time global stats: total connected agents, cumulative tokens saved, task type distribution.

Parameters

None

Returns

total_reports, total_tokens_saved, average_savings_ratio
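
On the client side you might fold those three fields into a readable one-liner. The field names come from the table above; the formatting helper itself (and the sample values) are illustrative assumptions:

```python
# Illustrative only: aggregate the get_network_stats fields into a summary line.
def summarize_stats(stats: dict) -> str:
    saved = stats["total_tokens_saved"]
    ratio = stats["average_savings_ratio"]
    return (f"{stats['total_reports']} reports, "
            f"{saved:,} tokens saved ({ratio:.0%} average)")

# Sample values, not real network stats:
print(summarize_stats({
    "total_reports": 1200,
    "total_tokens_saved": 5_400_000,
    "average_savings_ratio": 0.48,
}))
```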

task_type Parameter Reference

Specifying task_type in start_task enables an optimized baseline calculation and compression strategy for that task category.

- complex_research (Deep Research): savings 55–65%
- multi_agent_coordination (Multi-Agent Coordination): savings 50–60%
- code_generation (Code Generation): savings 40–50%
- data_analysis (Data Analysis): savings 45–55%
- relationship_analysis (Relationship Analysis): savings 50–60%
- writing (Content Writing): savings 35–45%
- optimization (Cost Optimization): savings 40–50%

Connection Info

MCP Endpoint: https://mcp.huangting.ai/mcp
Transport: Streamable HTTP (MCP 2025-12-11)
Authentication: None required (public access)
Min CrewAI Version: crewai >= 1.0.0, crewai-tools[mcp]

Ready to Start?

View the full protocol documentation or start integrating now