As an early adopter of LangChain, I have watched them build their ecosystem piece by piece. I even used to wonder how they came up with catchy names like LangGraph, LangSmith, and the rest. So imagine my surprise when Harrison, LangChain's creator, called a few points in OpenAI's new guide on building agents "misguided." That definitely caught my attention.
We all perk up whenever OpenAI ships something new. After all, when OpenAI says, "this is how you do it," most of us just nod along. But this time Harrison pushed back, and it is worth understanding why.
Ever since AI agents went mainstream, the industry has struggled to establish what exactly an "agent" is. We are not just missing a shared definition but also facing different views on how to build them and which approach actually works in the real world.
The debate really took off when OpenAI published its 32-page "Practical Guide to Building Agents," laying out their own agent definition and orchestration patterns, implicitly challenging LangGraph’s approach. Harrison fired back with his How to Think About Agent Frameworks post, laying out LangGraph’s orchestration-first model and pointing out where OpenAI mischaracterized declarative workflows and agent abstractions.
Let's see what OpenAI's agent guide actually says before diving into why Harrison disagrees with it.
[Sponsored] Diskless Kafka? A crucial pivot for the modern cloud era
Proposed by Aiven, Apache Kafka® KIP-1150: Diskless Topics is poised to be a game changer. Imagine a streaming world where Kafka infrastructure costs are reduced by up to 80%. That is exactly what becomes possible when local disks work in tandem with object storage.
If accepted, this Kafka innovation would offload disk replication to object storage (like S3) – dramatically reducing costs, keeping flexibility and scalability high, and remaining fully open source.
OpenAI’s Agent guide in a nutshell
Here is the quick breakdown of their Practical Guide to Building Agents.
They Say…
"Agents are systems that independently accomplish tasks on your behalf."
You hook up an LLM in a loop, give it tools, and let it run until it is done.
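That loop (model, tools, stop condition) can be sketched in a few lines of plain Python. The `stub_model` and `get_weather` below are stand-ins invented for illustration, so the control flow runs end to end without any real API calls:

```python
# Minimal sketch of "an LLM in a loop with tools": the model either
# requests a tool call or returns a final answer, and the loop stops
# when it does. The model here is a hand-written stub, not a real LLM.

def stub_model(messages):
    """Pretend LLM: asks for the weather tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"final": "It is sunny in Paris."}

TOOLS = {"get_weather": lambda city: f"sunny in {city}"}

def run_agent(user_input, model=stub_model, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):              # loop until the model stops
        action = model(messages)
        if "final" in action:               # model decided it is done
            return action["final"]
        result = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish")

print(run_agent("What's the weather in Paris?"))
```

The `max_steps` cap is the one piece real systems always add: an unbounded loop with a confused model is how agents run away.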
The "Choose Wisely" Warning!
OpenAI tells you to reach for agents only when simple automation falls flat. Their three red-flag scenarios where it makes sense to switch to agents are:
Complex decision-making: When your workflow needs nuanced judgment, handles exceptions, or relies on deep context, such as approving refunds in customer support.
Difficult-to-maintain rules: When your system has complex rules that are expensive and error-prone to update, like vendor security reviews.
Heavy reliance on unstructured data: When you must interpret free-form text or documents, think processing a home insurance claim.
Build With These Ingredients
OpenAI says agents need three core components: the right model, appropriate tools, and clear, focused instructions.
Now you know the components. Next, they explain how to wire them up, and this is where OpenAI's orchestration approach has stirred the pot.
The Orchestration Battleground
OpenAI groups orchestration into two main patterns.
Single-agent systems: A single agent handles the entire workflow in a loop, calling tools as needed and stopping when it is done. It extends capabilities by adding more tools to your one agent rather than creating specialized agents for each function.
Multi-agent systems: When one agent no longer suffices because there is too much branching logic or too many overlapping tools, you can split responsibilities across multiple specialized agents. OpenAI calls out two simple patterns for multi-agent systems.
Manager pattern: One central manager agent delegates tasks by calling other agents as tools.
Decentralized handoffs: Agents pass control directly to each other, picking the next specialist based on the task.
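To make the two patterns concrete, here is a minimal sketch with plain functions standing in for LLM-backed agents. All names (`billing_agent`, `pick_specialist`, and so on) are illustrative inventions, not part of any SDK:

```python
# Two multi-agent patterns, framework-free. In a real system each
# function would be an LLM-backed agent and routing would be done by
# the model; keyword routing keeps this sketch runnable.

def billing_agent(task):
    return f"billing resolved: {task}"

def shipping_agent(task):
    return f"shipping resolved: {task}"

SPECIALISTS = {"billing": billing_agent, "shipping": shipping_agent}

def pick_specialist(task):
    return "billing" if "refund" in task else "shipping"

# Manager pattern: the manager calls a specialist as a tool and keeps
# control, so it can post-process or call another specialist next.
def manager(task):
    result = SPECIALISTS[pick_specialist(task)](task)
    return f"manager summary: {result}"

# Decentralized handoff: triage passes control to the specialist,
# whose output IS the final answer; control never comes back.
def triage_with_handoff(task):
    specialist = SPECIALISTS[pick_specialist(task)]
    return specialist(task)

print(manager("refund order #123"))
print(triage_with_handoff("where is my parcel?"))
```

The structural difference is who owns the final answer: in the manager pattern control returns to the coordinator; in a handoff, it does not.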
Whether you build a single-agent or multi-agent system, OpenAI points out there are two fundamentally different approaches to implementing it.
Declarative vs. Code-First
Declarative: You declare the workflow upfront, with nodes for tasks and edges for transitions in a graph. The system then steps through that map exactly as drawn. In short, you describe what you want to happen, not how to do it.
Code-first (imperative): You script how each step runs, with if-then logic, loops, and tool calls directly in code, and the SDK executes those instructions without a pre-built graph. In short, you tell the system how to do each step.
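The difference is easier to see side by side. In this framework-free sketch, the same three-step workflow is written once as data (a graph walked by a generic runner) and once as ordinary code; the node names and payloads are made up for illustration:

```python
# Declarative: the workflow is data -- nodes and edges declared upfront,
# executed by a generic runner that just follows the map.
GRAPH = {
    "fetch":   (lambda s: {**s, "text": "raw claim text"},      "extract"),
    "extract": (lambda s: {**s, "fields": ["date", "amount"]},  "decide"),
    "decide":  (lambda s: {**s, "approved": True},              None),
}

def run_graph(graph, start, state):
    node = start
    while node is not None:          # walk edges until a terminal node
        fn, next_node = graph[node]
        state = fn(state)
        node = next_node
    return state

# Imperative (code-first): the same steps as ordinary control flow.
def run_imperative(state):
    state = {**state, "text": "raw claim text"}
    state = {**state, "fields": ["date", "amount"]}
    state = {**state, "approved": True}
    return state

# Both styles produce the same result for this workflow.
assert run_graph(GRAPH, "fetch", {}) == run_imperative({})
```

Note that the graph version buys you something the imperative one lacks: the structure is inspectable data, so it can be visualized, checkpointed, or resumed mid-run.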
OpenAI’s claim: Declarative graphs can quickly become cumbersome and challenging as workflows grow more dynamic and complex. They also position their Agent SDK as superior because it apparently lets developers use familiar programming patterns without having to map everything out in advance.
This is where OpenAI directly targets frameworks like LangGraph, claiming they cannot scale with complexity. It is precisely this assertion that triggered Harrison's detailed takedown of what he calls "misguided" points. The gloves came off because this strikes at the core philosophy of how agent systems should be built.
Harrison insists LangGraph is perfectly capable of handling complex workflows. He also says OpenAI misunderstands the fundamental challenge. They are focused on declarative vs. code-first when the real challenge is context control. LangGraph's design prioritizes this context control above all else, letting developers see and manage exactly what reaches the model at each step.
Harrison’s Counterpoint
Looking at OpenAI's framework comparison above, Harrison doesn't hold back.
“Your 'Agent-SDK' Is Just Another Abstraction"
He argues that many so-called "code-first" or "imperative" frameworks (like OpenAI's Agents SDK) are really just higher-level abstractions, not full imperative orchestration engines. He points out that OpenAI's SDK hides the core logic behind easy-to-use classes: you call a few methods on their Agent class, and the SDK quietly does all the real work. To Harrison, that is not a true orchestration framework. It is just another dressed-up abstraction.
Abstraction: A simplified interface such as a pre-built agent class that hides the detailed steps of prompt construction and tool calls, making it easier to start but harder to see and control what is going into the LLM at all steps.
Harrison brings up LangChain’s early journey to make this point, and the way he puts it here is worth reading.
This is a crucial admission. The LangChain team learned firsthand that pure agent abstractions can become limiting in production environments. Harrison acknowledges there is value in these abstractions for getting started quickly but questions their long-term viability.
This direct experience shaped LangGraph's architecture. He built simplified agent abstractions on top of LangGraph's core engine, giving users an easy starting point but ensuring they have full access to the underlying framework when they need precise control over what reaches the LLM.
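A rough sketch of that layering idea: a one-line convenience helper for beginners, built on a small explicit engine that stays fully accessible. Both `create_agent` and `Engine` are hypothetical names invented here, not LangGraph's actual API:

```python
# "Abstraction on top of an engine": easy to start, nothing hidden.

class Engine:
    """Low-level layer: the caller sees and owns every message
    that gets sent to the model."""
    def __init__(self, model):
        self.model = model

    def step(self, messages):
        return self.model(messages)

def create_agent(model):
    """High-level convenience wrapper (hypothetical, for illustration):
    one call, no setup, context assembled for you."""
    engine = Engine(model)
    def agent(user_input):
        return engine.step([{"role": "user", "content": user_input}])
    return agent

# A trivial stand-in model so the sketch runs without an API key.
echo_model = lambda messages: messages[-1]["content"].upper()

agent = create_agent(echo_model)
print(agent("hello"))                     # the easy starting point

messages = [                              # the escape hatch: build the
    {"role": "system", "content": "be terse"},   # context yourself
    {"role": "user", "content": "hello"},
]
print(Engine(echo_model).step(messages))
```

The point is that the wrapper is sugar over the engine, not a wall in front of it: graduating from one to the other requires no rewrite.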
Don’t Confuse Workflows with Frameworks
Harrison catches OpenAI conflating two completely different concepts: frameworks (the tools you build with) and workflows (a specific pattern you build).
When OpenAI says "declarative graphs become cumbersome as workflows grow complex," they are mixing apples and oranges. The framework (declarative or imperative) is separate from what you are building (workflow or agent).
This distinction matters because OpenAI's criticism of "declarative frameworks" is actually about workflows becoming rigid, not about the framework architecture itself. As Harrison points out:
"This does not have anything to do with declarative or non-declarative. This has everything to do with workflows vs agents."
He explains that you could implement the exact same agent logic using either OpenAI's SDK or LangGraph, the framework choice does not determine how dynamic your system can be.
It is like blaming Python for a slow program when the real problem is your inefficient logic. The framework is just the toolset, how you use it determines whether your system is flexible or rigid.
Context Control is more important
When OpenAI says declarative frameworks become hard to use as workflows grow more dynamic, Harrison counters that this misrepresents LangGraph's capabilities. He points out that LangGraph's declarative structure does not limit its flexibility; it can handle equally dynamic workflows while providing better context control.
He also says declarative frameworks like LangGraph give you exact control over what goes into the LLM at each step, which is essential for reliability.
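Here is what "exact control over what goes into the LLM" can look like in practice: a step that assembles its model input deterministically, so you can inspect and trim it before anything is sent. This is a generic sketch, not LangGraph code, and the field names are assumptions:

```python
# Context control: the prompt for each step is built by explicit code,
# so stale or irrelevant history never silently reaches the model.

def build_context(state, max_history=2):
    """Deterministically assemble the model input:
    system instructions plus only the most recent turns."""
    history = state["history"][-max_history:]     # drop stale turns
    return [{"role": "system", "content": state["instructions"]},
            *history]

state = {
    "instructions": "You approve or reject refunds.",
    "history": [
        {"role": "user", "content": "old, irrelevant turn"},
        {"role": "user", "content": "order #42 arrived broken"},
        {"role": "user", "content": "please refund order #42"},
    ],
}

context = build_context(state)
assert len(context) == 3                 # system + last 2 turns
assert "irrelevant" not in str(context)  # stale turn was trimmed
```

An abstraction that builds this context for you behind a class is convenient right up until the model misbehaves and you cannot see why; owning this function is the reliability argument.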
Blend Is Inevitable
In real systems you often mix declarative and imperative approaches. Harrison’s LangGraph supports both declarative graphs and direct code hooks, so you can pick the right tool for each part of your workflow.
LangGraph does more than the Agents SDK
I would like to highlight some of the points Harrison emphasized when comparing the Agents SDK and LangGraph.
Harrison did not just argue with words, he backed it up with data. His comparison chart examines 13 frameworks across key dimensions, clearly showing where LangGraph provides both orchestration capabilities and agent abstractions that other frameworks lack.
You can check the comparison chart in this link.
Why Harrison's Defense of LangGraph Makes Sense
What I appreciate most about Harrison's response is how he reframes the entire debate. Instead of getting stuck in the code-first vs. declarative argument, he focuses on something more important: the balance between workflows and agents.
This is not just theory. It is a real tradeoff between predictability and flexibility. As your system moves toward the agent end of the spectrum, it naturally becomes less predictable. Sometimes that is exactly what you want, but in many cases, especially where user trust or regulations matter, predictability remains crucial.
Harrison captures this perfectly in the simple graph above. The beauty of his argument is that LangGraph does not force you to pick a side, it lets you position your application anywhere along this curve. Need more predictability? Lean toward workflows. Need more agency? Dial it up. Your application, your choice.
What really resonates with me from personal experience is that LangGraph manages to be both beginner-friendly and capable of advanced orchestration. The high-level abstractions help you get started quickly, but you can always go deeper when needed. You are not locked into one way of working.
I have used LangGraph myself and found the orchestration features solid, once you understand how it works. Yes, the documentation could use some organization, and there are rough edges to smooth out. But for teams needing to build systems that balance predictability with agency, LangGraph gives you the tools without forcing you to choose one over the other.
And that is not all: LangGraph does much more.
What I Learned from This Debate
This debate is not really about whether you should choose OpenAI’s Agent SDK or LangGraph for your next project.
After reading Harrison’s post, it became clear to me that OpenAI’s SDK is mainly an abstraction to help you get started, while LangGraph is a full framework that offers much more control. Harrison makes a strong point that building reliable agent systems is not just about wiring up models and tools. It is about making sure the model receives the right context at every step. That is exactly what LangGraph is designed to handle.
And OpenAI and LangGraph agree on a few important ideas. They both believe that simple, rule-based workflows can be a better choice in many situations. Not every use case needs an agent. They also agree that giving a model too much freedom can reduce reliability, especially when you need consistent and predictable results.
I learned that simple abstractions are great in the beginning because they help you get moving quickly. But when an abstraction does too much, it becomes harder to fix problems later. The best frameworks make it easy to start, but still let you access everything under the hood when you need to.
The Bigger Picture
OpenAI’s perspective makes sense if you believe models will eventually become powerful enough that orchestration no longer matters. Their approach is designed for a future where the model handles most of the complexity by itself.
Harrison’s view feels more practical. He believes that most applications will continue to combine both workflows and agents, depending on the use case. His argument for LangGraph is not just about using a declarative style. It is about giving developers full control over context, while also allowing flexibility to shift between workflow-based and agent-based designs.
In the end, for those of us building real AI systems, it is not about which side wins the debate. It is about using tools that provide the right foundation. That includes features like memory handling, human-in-the-loop options, and observability. But most importantly, it is about keeping control of context, no matter what pattern you follow.
The online debates are fun to follow, but when it comes to actual engineering, it is always the practical tools that win.
Comment your thoughts on the OpenAI Agents SDK vs. LangGraph.
Happy Coding!