Reflection Agent: Writing the Ideal Tweet with Self-Adjusting AI
- SquareShift Content Team
- Jun 27
- 7 min read
Updated: Jul 8

As artificial intelligence evolves at a rapid pace, the ability to design agents that not only create content but also critically analyze and improve it becomes increasingly important. We have already seen how AI helps enterprises optimize storage. Now imagine an AI capable of writing a tweet, receiving feedback, and then refining it, repeating the process until specific conditions are met, much like a human content writer. This is not science fiction; it is entirely possible with powerful tools like LangChain and LangGraph.
This blog post dissects how one might build such a system in practice: an AI "Twitter Techie Influencer Assistant" that creates and improves tweets, paired with a "Viral Twitter Influencer" persona that critiques them. We will examine the key building blocks, how they work together, and the design of self-improving AI workflows, with particular emphasis on the role of the reflection agent.
The Building Blocks: LangChain and Generative AI
At the core of our tweet-writing assistant are LangChain and Google's Gemini 2.0 Flash model. LangChain acts as the conductor, providing the tools to tie large language models (LLMs) together with various data sources and other components. Gemini 2.0 Flash, a fast and capable generative AI model, serves as the "brain" for both tweet creation and critique.
The basic concept is to develop two independent "personalities" for our AI:
The Generator: This is a "Twitter techie influencer assistant that is responsible for generating great Twitter tweets." Its role is to receive a user's prompt and generate the optimal tweet.
The Reflector (Our Reflection Agent): This is a "viral Twitter influencer grading a tweet." Its function is to give detailed critiques and suggestions, highlighting issues such as length, virality, and style. This is our reflection agent, dedicated to driving the self-correction loop.
These personas are established with ChatPromptTemplate from LangChain, which enables us to specify the system's role and supply context for the AI to respond. MessagesPlaceholder is crucial here, helping the AI hold some memory of the conversation and grasp the current context, particularly when revisions are being requested.
Orchestrating the Flow: Introducing LangGraph for the Reflection Agent
Whereas LangChain is great for constructing individual chains of prompts and LLM calls, LangGraph goes further by making it possible to build stateful, multi-actor applications. Imagine a whiteboard on which you sketch the flow of your AI application, with nodes for various actions or agents and edges for the transitions between them.
In our tweet-generating system, we have two principal nodes:

GENERATE:
This node generates the initial tweet, or a revised tweet in response to feedback. It applies the generate_chain, which combines the generation_prompt with the LLM.
REFLECT:
This node is our reflection agent. It takes the generated tweet and gives a critique. It applies the reflect_chain, making use of the reflection_prompt and the LLM.
The elegance of LangGraph is that it is able to keep track of the "state" of the conversation. As messages are sent back and forth and tweets are created and critiqued, the state (which is a Sequence[BaseMessage]) is propagated between nodes, so that each component of the system has access to the entire conversation history.
The Self-Correction Loop: How the Reflection Agent Drives Improvement
The real innovation in this system is the self-correction loop, which is primarily prompted by the reflection agent. Once the GENERATE node generates a tweet, the system does not end there. Rather, it goes into a reflective state.
Here is how the loop works:

Initial Generation: The user makes a first request for a tweet. This request is sent to the GENERATE node.
Critique and Feedback (by the Reflection Agent): As soon as the GENERATE node produces a tweet, the should_continue function (a LangGraph conditional edge) decides what happens next. In our scenario, until the message history grows beyond six messages (roughly three generate/reflect rounds), the system routes to the REFLECT node. Here our reflection agent enters the scene: the "viral Twitter influencer" gives meaningful feedback on the generated tweet. These comments are then added back to the message history as a HumanMessage, effectively mimicking a user critique.
Revision and Iteration: With the critique now in the conversation history, the system returns to the GENERATE node. The "Twitter techie influencer assistant" (our generative persona) now has access to its previous tweet and the in-depth critique from the reflection agent. It then makes use of this to come up with a revised tweet, trying to address the feedback.
This cycle repeats, with the AI improving its result based on the ongoing feedback from the reflection agent, until the message limit is exceeded or some other termination criterion is satisfied. This cyclical process mirrors how a human might refine their work, resulting in higher-quality and more targeted outcomes.
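The control flow of the steps above can be sketched without any framework. The generate and reflect callables below are hypothetical stubs standing in for the real LLM chains; the point is to show how the critique is fed back into the history as if a user wrote it, and how a message-count check ends the loop:

```python
# Framework-free sketch of the self-correction loop. The generate/reflect
# callables are illustrative stubs, not the real LLM chains.
def reflection_loop(generate, reflect, request, max_messages=6):
    history = [("user", request)]
    while True:
        draft = generate(history)           # plays the GENERATE node
        history.append(("ai", draft))
        if len(history) > max_messages:     # plays the should_continue check
            return draft, history
        critique = reflect(history)         # plays the REFLECT node
        # The critique re-enters the history as if a human user wrote it
        history.append(("user", critique))

# Stub personas for demonstration only
def stub_generate(history):
    return f"draft v{sum(1 for role, _ in history if role == 'ai') + 1}"

def stub_reflect(history):
    return "make it shorter and add a hook"

final, history = reflection_loop(stub_generate, stub_reflect, "write a tweet")
# final == "draft v4": the loop produced four drafts before the history
# exceeded six messages
```

With the real chains plugged in, each "draft" is a tweet and each "critique" is the influencer persona's feedback; LangGraph manages exactly this accumulation of state for us.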
Streaming for an Improved User Experience
One of the nice touches in the code presented here is the streaming in both the generation and reflection nodes. Rather than displaying the whole LLM response only once it is fully generated, the code streams the content as it is produced. This creates a much more responsive and interactive user experience, giving the sense that the AI is "thinking aloud" rather than simply rendering a giant chunk of text.
The for chunk in llm.stream(...) loop and the print(chunk.content, end="", flush=True) call are critical to making this real-time output work, increasing the system's apparent speed and interactivity.
Visualizing the Workflow: Graph Representation
LangGraph provides useful visualization tools for the defined workflow. The builder.compile() call produces a graph, and methods such as graph.get_graph().draw_mermaid() and graph.get_graph().print_ascii() give developers a clear view of the nodes and edges, making it easy to inspect and debug the flow of information. This kind of visualization is invaluable when building complicated multi-agent systems.
The Power of Iteration and Feedback
The sample tweet in the if __name__ == "__main__" block is an excellent demonstration of how valuable this self-improvement mechanism is. A sample tweet about LangChain's Tool Calling is fed into the system. The reflection agent then evaluates it, perhaps recommending changes such as adding a hook, writing in a more engaging tone, or trimming it to fit Twitter's character limit. The GENERATE node then uses this feedback to produce a more polished tweet, showing the AI's capacity for learning and improvement.
This constant feedback loop, fueled by the reflection agent, isn't merely about generating one perfect output; it's about creating an AI that can continually refine its performance based on predefined standards, resulting in increasingly advanced and human-like content creation.
# chains.py — defines the generator and reflector prompts and chains
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv
import os

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
reflection_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a viral Twitter influencer grading a tweet. Generate critique and recommendations for the user's tweet. "
            "Always make full recommendations, such as requests for length, virality, style, etc.",
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

generation_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a Twitter techie influencer assistant whose job is to come up with great Twitter posts. "
            "Create the best possible Twitter post for the user's request. "
            "If the user gives feedback, return an improved version of your earlier attempts.",
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", google_api_key=GOOGLE_API_KEY)
generate_chain = generation_prompt | llm
reflect_chain = reflection_prompt | llm
# main.py — builds the reflection graph from the chains defined in chains.py
from typing import List, Sequence
from dotenv import load_dotenv

load_dotenv()
from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph import END, MessageGraph
from chains import generate_chain, reflect_chain, llm, generation_prompt, reflection_prompt

REFLECT = "reflect"
GENERATE = "generate"
def generation_node(state: Sequence[BaseMessage]):
    # Streaming version of generation: print the tweet as it is produced
    print("\n[Generation node reasoning]:", flush=True)
    # Format the prompt manually so the raw LLM output can be streamed
    prompt_value = generation_prompt.format_prompt(messages=state)
    for chunk in llm.stream(prompt_value.to_messages()):
        if hasattr(chunk, "content"):
            print(chunk.content, end="", flush=True)
    print("\n[End of generation]\n", flush=True)
    # Return the full message as before
    return generate_chain.invoke({"messages": state})
def reflection_node(messages: Sequence[BaseMessage]) -> List[BaseMessage]:
    # Streaming version of reflection: print the critique as it is produced
    print("\n[Reflection node reasoning]:", flush=True)
    prompt_value = reflection_prompt.format_prompt(messages=messages)
    for chunk in llm.stream(prompt_value.to_messages()):
        if hasattr(chunk, "content"):
            print(chunk.content, end="", flush=True)
    print("\n[End of reflection]\n", flush=True)
    res = reflect_chain.invoke({"messages": messages})
    # Feed the critique back into the history as if it came from the user
    return [HumanMessage(content=res.content)]
builder = MessageGraph()
builder.add_node(GENERATE, generation_node)
builder.add_node(REFLECT, reflection_node)
builder.set_entry_point(GENERATE)

def should_continue(state: List[BaseMessage]):
    # Stop once the history holds more than six messages (about three rounds)
    if len(state) > 6:
        return END
    return REFLECT

builder.add_conditional_edges(GENERATE, should_continue)
builder.add_edge(REFLECT, GENERATE)
graph = builder.compile()

# Visualize the compiled workflow
print(graph.get_graph().draw_mermaid())
graph.get_graph().print_ascii()
if __name__ == "__main__":
    print("Hello LangGraph")
    inputs = HumanMessage(content="""Make this tweet better:
    @LangChainAI
    -- The newly added Tool Calling feature is seriously underrated.
    Finally, after all the wait, it's here - making function calling and implementation of agents between models a cakewalk.
    """)
    response = graph.invoke(inputs)
    for msg in response:
        if hasattr(msg, "content"):
            print(msg.content)
Graph visualization
Beyond Tweets: The Broader Implications of Reflection Agents

Though this is an example of tweet creation, the principles involved are widely applicable. The idea of a specialized reflection agent critiquing and advising the output of another generative AI can be used in the following:
Blog post composition: An AI composes a paragraph, a "content editor" reflection agent checks it for clarity, brevity, and tone, and the composing AI makes the necessary changes.
Code generation: AI produces code, a "code reviewer" reflection agent inspects for bugs, efficiency, and best practices, and the coding AI improves its work.
Creative writing: AI produces a story, a "literary critic" reflection agent offers criticism on plot, character development, and pacing, and the writing AI improves.
Customer service replies: An AI writes a reply, a "customer satisfaction expert" reflection agent judges its helpfulness and empathy, and the AI adjusts its message.
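Because the graph topology never changes across these use cases, the domain-specific behavior lives entirely in the two system prompts. A hypothetical persona registry (the names and prompt wording below are illustrative, not from the original code) could parameterize the same GENERATE/REFLECT graph for any of the domains above:

```python
# Hypothetical persona registry: the two-node reflection graph stays the same,
# only the generator/reflector system prompts change per domain.
PERSONAS = {
    "tweets": {
        "generator": "You are a Twitter techie influencer assistant writing great posts.",
        "reflector": "You are a viral Twitter influencer grading a tweet.",
    },
    "code_review": {
        "generator": "You are a senior engineer writing production-quality code.",
        "reflector": "You are a code reviewer checking for bugs, efficiency, and best practices.",
    },
    "customer_service": {
        "generator": "You are a support agent drafting helpful replies.",
        "reflector": "You are a customer satisfaction expert judging helpfulness and empathy.",
    },
}

def system_prompts(domain):
    """Return the (generator, reflector) system prompts for a given domain."""
    persona = PERSONAS[domain]
    return persona["generator"], persona["reflector"]

gen_sys, ref_sys = system_prompts("code_review")
```

Each pair of prompts would then be dropped into the same ChatPromptTemplate construction shown earlier, leaving the graph, the streaming nodes, and the termination logic untouched.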
The possibilities of self-correcting AI agents with specialized reflection agents are tremendous. By meshing the creative capability of LLMs with cognitive feedback loops managed by frameworks such as LangGraph, we can design AI agents that are not only reactive but genuinely proactive in their quest for greatness.
Conclusion
The evolution from a simple prompt-response system to a self-correcting AI agent is a testament to the dramatic pace at which the field has evolved. Technologies such as LangChain and LangGraph enable developers to create powerful AI workflows capable of learning, self-adjusting, and even improving their results in an automated manner. The "Twitter Techie Influencer Assistant" and its indispensable reflection agent are just a tiny tip of the iceberg regarding what we can achieve when we take the raw power of generative AI and add intelligent orchestration to it. As these technologies mature, we can anticipate yet more breathtaking and independent AI systems being developed, able to perform complicated tasks with human-like accuracy and ongoing refinement.