I’m inspired by Anthropic’s Claude Code, which started as an internal developer tool for their own engineers.
So the goal today: make the minimal AI coding agent, then use it to build toward a self-improving AI coding agent. The end of this tutorial will be written by your new agent 🧠.
Minimal requirements for an AI Coding Agent
- 1. Chat loop
- 2. Call an LLM
- 3. Add tools to call
- 4. Handle tool requests
Step 1: Chat Loop
At its most basic, the chat loop is an infinite loop that waits for your input. Python’s input() does just that: it pauses execution and waits for your message and an <Enter> keypress.
# step1.py
print("Type q to quit")
while True:
    user_message = input("You: ")
    if user_message == "q":
        break
    ai_message = f"You said {user_message}... so insightful"
    print(ai_message)
Most LLM interfaces are stateless, so you need to maintain the messages list yourself and pass the full history with every request. Here we’ll make a fake_ai function to simulate a real LLM call with the required “role”/“content” keys.
# step1_1.py
import requests  # used by the real llm() call in the next step

def fake_ai(messages):
    latest_user_message = messages[-1]["content"]
    ai_message = f"You said {latest_user_message}... so insightful"
    return {
        "role": "assistant",
        "content": ai_message
    }

print("Press q to quit")
messages = []
while True:
    user_message = input("You: ")
    if user_message == 'q':
        break
    messages.append({
        "role": "user",
        "content": user_message
    })
    ai_message_obj = fake_ai(messages)
    print("AI: " + ai_message_obj["content"])
    messages.append(ai_message_obj)
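Because fake_ai is a pure function of the messages list, you can check the plumbing without any network call. A minimal sketch (restating fake_ai so it runs standalone):

```python
def fake_ai(messages):
    latest_user_message = messages[-1]["content"]
    return {"role": "assistant", "content": f"You said {latest_user_message}... so insightful"}

# The stateless contract: pass the whole history in, get one new message back
history = [{"role": "user", "content": "hello"}]
reply = fake_ai(history)
assert reply["role"] == "assistant"
assert "hello" in reply["content"]
```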

Ok, enough of this maximal sycophancy.
Step 2: Call an LLM
Our chat loop already speaks the right message format, so create an llm function and replace fake_ai with it. Congratulations, you have a chatbot CLI.
# step2.py
def llm(messages):  # new
    headers = {
        "Authorization": "Bearer sk-your-key",  # your key
        "Content-Type": "application/json"
    }
    data = {
        "model": "gpt-3.5-turbo",  # your choice
        "messages": messages
    }
    url = "https://api.openai.com/v1/chat/completions"  # your choice
    try:
        response = requests.post(url, json=data, headers=headers)
        message = response.json()["choices"][0]["message"]
        return message
    except Exception as e:
        # Include "role" so this dict is safe to append to messages
        return {"role": "assistant", "content": f"Error: {e}"}
while True:
    # ...
    ai_message_obj = llm(messages)  # new
    # ... unchanged
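One quick hardening note: rather than hardcoding the key, read it from an environment variable (OPENAI_API_KEY is the conventional name). A small sketch:

```python
import os

def auth_headers():
    # Read the key from the environment; fall back to a placeholder
    # so the demo still constructs headers without a key set
    key = os.environ.get("OPENAI_API_KEY", "sk-your-key")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }

headers = auth_headers()
```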
Step 3: Add tools to call
What tools are useful? An AI coding agent should at least be able to read files.
Let’s start with a “read_file” tool. We define the tool and its arguments, and pass a list of all available tools to the llm. When the LLM decides a response requires a tool call, it can return content as None along with a list of tool calls.
Let’s print the full message to see which tools the llm wants us to call.
# step3.py
TOOL_SPECS = [
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "The file path to read"
                    }
                },
                "required": ["path"]
            }
        }
    }
]
def llm(messages):
    # ...
    data = {
        # ...
        "tools": TOOL_SPECS,  # new
        "tool_choice": "auto"  # new
    }
    try:
        # ...
        print("Full message")  # new; to see the tool_calls key
        print(message)  # new; to see the tool_calls key
        return message
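For reference, the printed message has roughly this shape when the model wants a tool (the values here are illustrative, not real API output):

```python
import json

# Illustrative shape of an assistant message that requests a tool call
ai_message_obj = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_abc123",  # hypothetical id
            "type": "function",
            "function": {
                "name": "read_file",
                # Note: arguments arrive as a JSON *string*, not a dict
                "arguments": '{"path": "step1.py"}',
            },
        }
    ],
}

args = json.loads(ai_message_obj["tool_calls"][0]["function"]["arguments"])
```

This is why handle_tool (next step) needs json.loads before it can pass the arguments along.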

At this point, we haven’t handled the tool_calls requests; we simply print the content. This makes your LLM very sad. It needs context from outside the program.
Step 4: Handle tool requests
While an LLM can request a tool call, it’s up to the program to invoke the code that adds context or takes action. The LLM then returns a final message to the user, drawing on all the added context. Here we need to:
- A. Check for the tool_calls key
- B. Append the ai_message to the message list
- C. Call all the tools and add those results to the messages list
- D. Call the LLM one last time with all the added context
# step4.py
def handle_tool(tool_call):
    # TODO: we need to call a real read_file function
    content = "The secret is diving deep"
    return {
        "role": "tool",
        "tool_call_id": tool_call['id'],
        "content": content
    }
while True:
    # ... unchanged
    ai_message_obj = llm(messages)
    # A: Check if AI wants to use tools
    if 'tool_calls' in ai_message_obj and ai_message_obj['tool_calls']:
        # B: Add AI message with tool calls
        messages.append(ai_message_obj)
        # C: Execute each tool and add results
        for tool_call in ai_message_obj['tool_calls']:
            tool_result = handle_tool(tool_call)
            messages.append(tool_result)
        # D: Get final response from AI
        final_response = llm(messages)
        print(f"AI: {final_response['content']}")
        messages.append(final_response)
    else:  # Default flow
        print(f"AI: {ai_message_obj['content']}")
        messages.append(ai_message_obj)

# step4_1.py
import json

def read_file(path):
    """Read the contents of a file"""
    try:
        with open(path, 'r') as f:
            content = f.read()
        return content
    except Exception as e:
        return f"Error reading file: {str(e)}"

def handle_tool(tool_call):
    """Execute a single tool call and return the result"""
    tool_name = tool_call['function']['name']
    tool_args = json.loads(tool_call['function']['arguments'])
    print(f"[Executing {tool_name}...]")
    if tool_name == 'read_file':
        result = read_file(**tool_args)
    else:
        result = f"Unknown tool: {tool_name}"
    return {
        "role": "tool",
        "tool_call_id": tool_call['id'],
        "content": result
    }
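You can sanity-check this dispatch without an API key by hand-building a tool_call dict (the id is made up; read_file and handle_tool are restated so the sketch runs standalone):

```python
import json

def read_file(path):
    try:
        with open(path, 'r') as f:
            return f.read()
    except Exception as e:
        return f"Error reading file: {str(e)}"

def handle_tool(tool_call):
    tool_name = tool_call['function']['name']
    tool_args = json.loads(tool_call['function']['arguments'])
    if tool_name == 'read_file':
        result = read_file(**tool_args)
    else:
        result = f"Unknown tool: {tool_name}"
    return {"role": "tool", "tool_call_id": tool_call['id'], "content": result}

# Hand-built request, as if the model had asked for it
fake_call = {
    "id": "call_test",  # made up
    "function": {"name": "read_file", "arguments": json.dumps({"path": "no_such_file.py"})},
}
result = handle_tool(fake_call)
# The missing file exercises the error path, which comes back as content
```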
Here, we ask what step1.py does (remember, it’s the chat loop! Congrats for getting this far 🎉). The llm requests a read_file tool call and we allow it.

🎉 We’ve done all the steps 🎉
Chat loop, llm call, tools passed to the llm call, and handling the tools. Now things get fun! We have read_file, so we can ask the AI to look at our current implementation and suggest changes.
Gist: https://gist.github.com/theptrk/d1457e039a2b15b4e650b5310246c289
Bonus section
What if we went wild?
- Use our new agent to suggest changes for a “write_file” function
- Allow agent to write_file for a README.md
- Allow agent to write_file to create list_file functionality
- This requires handling chains of tools since editing a file is a read+write operation
- Comparing three files is a read+read+read operation
- Editing an unfamiliar file may be a read+web_research+list+read+write operation
Let AI suggest the “write file” function
- copy step4_1.py into step4_2.py
- run step4_1.py
- prompt: please read the file step4_2.py. how would we add a write_file tool to it most effectively? make sure all the types match the rest of the file. you are an expert staff engineer, show the code changes.
- For brevity: comment out the full message log for now
🚀 aiagent % python step4_1.py
Press q to quit
You: please read the file step4_2.py. how would we add a write_file tool to it most effectively? make sure all the types match the rest of the file. you are an expert staff engineer, show the code changes.
[Executing read_file...]
AI: To add a `write_file` tool to the existing code most effectively, we need to follow a similar structure as `read_file` tool. Here is the updated code with the `write_file` tool added:
```python
import requests
import os
import json

TOOL_SPECS = [
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "The file path to read"
                    }
                },
                "required": ["path"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "write_file",
            "description": "Write content to a file",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "The file path to write"
                    },
                    "content": {
                        "type": "string",
                        "description": "The content to write to the file"
                    }
                },
                "required": ["path", "content"]
            }
        }
    }
]

def read_file(path):
    """Read the contents of a file"""
    try:
        with open(path, 'r') as f:
            content = f.read()
        return content
    except Exception as e:
        return f"Error reading file: {str(e)}"

def write_file(path, content):
    """Write content to a file"""
    try:
        with open(path, 'w') as f:
            f.write(content)
        return "File successfully written"
    except Exception as e:
        return f"Error writing file: {str(e)}"

def handle_tool(tool_call):
    """Execute a single tool call and return the result"""
    tool_name = tool_call['function']['name']
    tool_args = json.loads(tool_call['function']['arguments'])
    print(f"[Executing {tool_name}...]")
    if tool_name == 'read_file':
        result = read_file(**tool_args)
    elif tool_name == 'write_file':
        result = write_file(**tool_args)
    else:
        result = f"Unknown tool: {tool_name}"
    return {
        "role": "tool",
        "tool_call_id": tool_call['id'],
        "content": result
    }
```
How cool is this? The coding agent is already helping us code and improve the agent itself.
We can improve write_file by making sure the write path does not traverse into parent directories.
# Define safe directory (current directory only)
SAFE_DIR = os.path.abspath(os.getcwd())

def is_safe_path(path):
    """Check if path is within the safe directory"""
    # Resolve the absolute path
    abs_path = os.path.abspath(path)
    # Check if it's within the safe directory
    return abs_path.startswith(SAFE_DIR)

def write_file(path, content):
    """Write content to a file"""
    if not is_safe_path(path):
        return "Error: Access denied - path outside safe directory"
    try:
        with open(path, 'w') as f:
            f.write(content)
        return "File successfully written"
    except Exception as e:
        return f"Error writing file: {str(e)}"
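One caveat: a plain startswith check can be fooled by sibling directories (e.g. /home/me/aiagent vs /home/me/aiagent-evil both pass). Comparing with os.path.commonpath is stricter; a sketch, not the only possible fix:

```python
import os

SAFE_DIR = os.path.abspath(os.getcwd())

def is_safe_path(path):
    """The resolved path must have SAFE_DIR as its common root"""
    abs_path = os.path.abspath(path)
    return os.path.commonpath([SAFE_DIR, abs_path]) == SAFE_DIR
```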
PROCEED AT YOUR OWN RISK
Let’s take a risk and ask the agent to write a file called README.md with a short note.
🚀 aiagent % python step4_2.py
Press q to quit
You: write a file called "README.md" that has the text "# aiagent wrote this"
[Executing write_file...]
AI: The file "README.md" has been created with the text "# aiagent wrote this".
You: q
🚀 aiagent % cat README.md
# aiagent wrote this%
Let AI write “list_files” with recursive tool calling
Now that the coding agent has write_file, it can suggest code changes for a new list_files function and write those changes to disk. Again, be careful.
However, we encounter a problem.
- Improving a file requires two chained tool calls: 1. read the file, 2. write the file
- Comparing three files requires three chained tool calls: 1. read, 2. read, 3. read
Refactor: recursive tool calling
# step4_3.py
# Yes I know, it's scary to mutate the messages list without guarantees of access,
# we'll write the next version in Rust to denote ownership
def handle_message(messages, ai_message_obj):
    """
    Danger zone: messages is being mutated
    """
    # Check if AI wants to use tools
    if 'tool_calls' in ai_message_obj and ai_message_obj['tool_calls']:
        # Add AI message with tool calls
        messages.append(ai_message_obj)
        # Execute each tool and add results
        for tool_call in ai_message_obj['tool_calls']:
            tool_result = handle_tool(tool_call)
            messages.append(tool_result)
        # Get the next response from AI
        final_response = llm(messages)
        print(f"maybe final response: {final_response['content']}")
        # Recurse: if it requests more tools, we handle them;
        # if not, the else branch below prints and appends it
        handle_message(messages, final_response)
    else:
        print(f"AI: {ai_message_obj['content']}")
        messages.append(ai_message_obj)
    return

while True:
    # ...
    ai_message_obj = llm(messages)
    handle_message(messages, ai_message_obj)
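One caveat with the recursive version: nothing stops a model that keeps requesting tools forever. A depth cap is cheap insurance. A self-contained sketch with scripted stand-ins for llm and handle_tool (the cap value and stop message are my own choices):

```python
def handle_message(messages, ai_message_obj, depth=0, max_depth=5):
    """Same flow as above, but bail out after max_depth tool rounds"""
    if ai_message_obj.get('tool_calls'):
        if depth >= max_depth:
            messages.append({"role": "assistant", "content": "Stopped: tool-call limit reached"})
            return
        messages.append(ai_message_obj)
        for tool_call in ai_message_obj['tool_calls']:
            messages.append(handle_tool(tool_call))
        handle_message(messages, llm(messages), depth + 1, max_depth)
    else:
        messages.append(ai_message_obj)

# Scripted stand-ins: this "model" requests a tool on every single turn,
# which would recurse forever without the cap
def llm(messages):
    return {"role": "assistant", "content": None,
            "tool_calls": [{"id": "call_x", "function": {"name": "read_file"}}]}

def handle_tool(tool_call):
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": "stub"}

messages = []
handle_message(messages, llm(messages))
# Five rounds of (assistant + tool) messages, then the stop message
```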

✅ Awesome. Chaining works.
Let’s write list_files
The prompt matters a lot here:
This prompt wrote only the changed parts of the file.
read step4_4.py and add the functionality to list_files in the TOOL_SPEC and create the function that performs that action. do not delete any existing functionality like read file and write file. you are a staff engineer. ultrathink
This prompt wrote a lot of the file (with some bugs).
read step4_4.py and add the functionality to list_files in the TOOL_SPEC and create the function that performs that action. do not delete any existing functionality like read file and write file. you are a staff engineer. ultrathink. when you finish, read the file to make sure all the functionality remains: the chat loop, calling the llm, handling tool calls, and all tools (read file, write file, the new list file)
Terminal output: notice the read, write, read cycles it goes through
🚀 aiagent % cp step4_3.py step4_4.py
🚀 aiagent % python step4_3.py
Press q to quit
You: read step4_4.py and add the functionality to list_files in the TOOL_SPEC and create the function that performs that action. do not delete any existing functionality like read file and write file. you are a staff engineer. ultrathink. when you finish, read the file to make sure all the functionality remains: the chat loop, calling the llm, handling tool calls, and all tools (read file, write file, the new list file)
[Executing read_file...]
[Executing read_file...]
[Executing write_file...]
[Executing read_file...]
[Executing write_file...]
[Executing read_file...]
AI: The functionality to list files in a directory has been successfully added to the `step4_4.py` script. All existing functionalities like reading files, writing files, the chat loop, calling LLN, handling tool calls, and all tools (read file, write file, and the new list file) have been retained.
If you would like to make further modifications or have any other requests, feel free to let me know!
You can “diff” the two files to see the changes and fix what you need. I added the string conversion.
🚀 aiagent % diff --color step4_3.py step4_4.py
42a43,59
> },
> {
> "type": "function",
> "function": {
> "name": "list_files",
> "description": "List files in a directory",
> "parameters": {
> "type": "object",
> "properties": {
> "directory": {
> "type": "string",
> "description": "The directory to list files from"
> }
> },
> "required": ["directory"]
> }
> }
66a84,89
> def list_files(directory):
> """List files in a directory"""
> files = os.listdir(directory)
> return files
>
>
77a101,102
> elif tool_name == 'list_files':
> result = list_files(**tool_args)
80a106,109
> # Ensure result is always a string
> if not isinstance(result, str):
> result = str(result)
>
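The generated list_files has no error handling, and it returns a Python list, which is why the string conversion above was needed. A slightly hardened version (my sketch) that returns a string directly:

```python
import os

def list_files(directory):
    """List files in a directory, returned as one string per the tool-message contract"""
    try:
        # Sorted for deterministic output; one entry per line
        return "\n".join(sorted(os.listdir(directory)))
    except Exception as e:
        return f"Error listing files: {str(e)}"
```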
Congrats on reaching the end!
Gist of final implementation: https://gist.github.com/theptrk/08e82be93847db5d4d38f561dc882ed4
There is A TON to improve here. Maybe using Claude Code’s system prompt would make this a lot better.
- The agent does not do multi-step tool calling and messaging cycles: if you ask it to compare 5 files, it’ll often compare 2 files and stop there.
- The agent does not plan at all; planning should be the default
- Web search
- Rewrite in Rust