Build Your First Agent
Learn how to define a custom agent, write effective system prompts, run code generation tasks, and test your agent from end to end.
Defining an Agent
An agent in ThePopeBot is a specialized AI personality with a specific purpose, a set of tools, and configuration that controls its behavior. Let us create a new agent from scratch.
Create a new file at src/agents/coder.ts:
import { defineAgent } from '../engine/agent';

export const coderAgent = defineAgent({
  name: 'Coder',
  description: 'A code generation specialist that writes clean, tested code',
  model: 'gpt-4',
  temperature: 0.2,
  maxTokens: 4096,
  tools: ['filesystem', 'git', 'search'],
  systemPrompt: `You are an expert software developer. When asked to write code:

1. Understand the requirements fully before writing
2. Write clean, well-documented code
3. Include error handling
4. Suggest tests for the code you generate

Always explain your reasoning before presenting code.`,
});
Then register it in config/agents.yaml:
agents:
  coder:
    name: "Coder"
    module: "./agents/coder"
    enabled: true
Writing System Prompts
The system prompt is the most important part of your agent. It defines the agent's personality, capabilities, and constraints. Here are key principles for writing effective prompts:
Be Specific About the Role
Bad: "You are a helpful assistant."

Good: "You are a senior TypeScript developer specializing in Node.js backend services. You follow SOLID principles and always include JSDoc comments."
Define the Output Format
When generating code:
- Use TypeScript with strict mode
- Export all public functions
- Include a brief description comment at the top of each file
- Use async/await over raw promises
Set Boundaries
Rules:
- Never modify files outside the project directory
- Always ask for confirmation before deleting files
- If a task is ambiguous, ask clarifying questions
- Do not execute shell commands without user approval
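Prompt rules alone are advisory, so boundaries like the first rule are best enforced in the tool layer as well. Below is a minimal sketch of such a guard; `isInsideProject` is a hypothetical helper for illustration, not an existing ThePopeBot API:

```typescript
import * as path from 'path';

// Returns true only if targetPath resolves to a location inside projectRoot.
// Guards against both absolute paths and '../' traversal.
export function isInsideProject(projectRoot: string, targetPath: string): boolean {
  const root = path.resolve(projectRoot);
  const resolved = path.resolve(root, targetPath);
  return resolved === root || resolved.startsWith(root + path.sep);
}
```

A filesystem tool would call this before every write and refuse paths that fall outside the project root.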
Include Examples
Providing one or two examples in your system prompt dramatically improves output quality:
Example task: "Create a utility function to validate email addresses"
Example response:
1. I'll create a validateEmail function with regex validation
2. I'll include edge case handling for common invalid formats
3. I'll add JSDoc with parameter and return type documentation
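Taken together, the four principles above suggest assembling prompts from structured parts rather than hand-editing one long string. A sketch of such a helper; `buildSystemPrompt` and its field names are hypothetical, not a ThePopeBot API:

```typescript
interface PromptParts {
  role: string;           // specific role description
  outputFormat: string[]; // formatting rules
  rules: string[];        // hard boundaries
  examples: string[];     // one or two worked examples
}

// Joins the parts in a fixed order: role first, then format, rules, examples.
export function buildSystemPrompt(parts: PromptParts): string {
  return [
    parts.role,
    'When generating output:',
    ...parts.outputFormat.map((r) => `- ${r}`),
    'Rules:',
    ...parts.rules.map((r) => `- ${r}`),
    'Examples:',
    ...parts.examples,
  ].join('\n');
}
```

Keeping the parts separate makes it easy to review each principle in isolation and to reuse the same rules across several agents.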
Running Code Generation Tasks
With your coder agent registered, you can now send it code generation tasks. Start the server and route a message to it:
npm run dev
Send a request through the Web UI or API:
{
  "agent": "coder",
  "message": "Create a REST API endpoint for user registration with email validation and password hashing"
}
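If you are scripting against the API rather than using the Web UI, you can build the same payload in TypeScript. The endpoint URL below is an assumption for a typical local setup, not a documented ThePopeBot route, and the helper is likewise hypothetical:

```typescript
// Hypothetical endpoint; substitute the route your server actually exposes.
export const API_URL = 'http://localhost:3000/api/message';

interface TaskRequest {
  method: 'POST';
  headers: Record<string, string>;
  body: string;
}

// Builds the JSON request shown above for any agent/message pair.
export function buildTaskRequest(agent: string, message: string): TaskRequest {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ agent, message }),
  };
}

// Usage (with the dev server from `npm run dev` running):
// const res = await fetch(API_URL, buildTaskRequest('coder', 'Create a REST API endpoint ...'));
```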
The agent will:
- Plan the implementation (identify files to create, dependencies needed)
- Generate the code using the filesystem tool
- Review its own output for correctness
- Report what was created and any follow-up suggestions
Example output you might see:
Created the following files:
- src/routes/auth/register.ts (POST /api/register endpoint)
- src/utils/validation.ts (email and password validators)
- src/utils/password.ts (bcrypt hashing utilities)
Dependencies to install:
npm install bcrypt @types/bcrypt
Suggested next steps:
- Add rate limiting to the registration endpoint
- Create integration tests
- Set up email verification flow
Reviewing PRs with the Agent
One of the most practical uses of an agent is automated code review. Rather than overloading the coder agent, let us define a dedicated reviewer agent (register it in config/agents.yaml under the key reviewer, the same way as coder):
export const reviewerAgent = defineAgent({
  name: 'PR Reviewer',
  model: 'gpt-4',
  temperature: 0.3,
  tools: ['git', 'filesystem', 'search'],
  systemPrompt: `You are a thorough code reviewer. For each PR:

1. Summarize the changes
2. Check for bugs, security issues, and performance problems
3. Verify error handling and edge cases
4. Suggest improvements with specific code examples
5. Rate the PR: approve, request changes, or comment

Use this format for each finding:

[SEVERITY: low/medium/high/critical]
File: path/to/file.ts:lineNumber
Issue: Description of the problem
Suggestion: How to fix it`,
});
Trigger a review by sending:
{
  "agent": "reviewer",
  "message": "Review PR #42 in the main repository"
}
The agent will use the git tool to fetch the PR diff, read the changed files, and produce a structured review.
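Because the reviewer prompt pins down a finding format, the review text can be parsed by downstream tooling (for example, to post inline PR comments). This parser and its `Finding` type are illustrative sketches, not part of ThePopeBot:

```typescript
type Severity = 'low' | 'medium' | 'high' | 'critical';

interface Finding {
  severity: Severity;
  file: string;
  issue: string;
  suggestion: string;
}

// Parses findings emitted in the format the reviewer prompt requests:
//   [SEVERITY: high]
//   File: src/x.ts:42
//   Issue: ...
//   Suggestion: ...
export function parseFindings(review: string): Finding[] {
  const pattern =
    /\[SEVERITY:\s*(low|medium|high|critical)\]\s*\nFile:\s*(.+)\nIssue:\s*(.+)\nSuggestion:\s*(.+)/g;
  const findings: Finding[] = [];
  for (const m of review.matchAll(pattern)) {
    findings.push({
      severity: m[1] as Severity,
      file: m[2].trim(),
      issue: m[3].trim(),
      suggestion: m[4].trim(),
    });
  }
  return findings;
}
```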
Testing Your Agent
Testing agents requires both unit tests for individual components and integration tests for the full pipeline.
Unit Testing the Agent
Start with fast unit tests that pin down the agent's configuration, the settings that most affect output consistency:
import { test, expect } from 'vitest';
import { coderAgent } from '../agents/coder';

test('coder agent has required tools', () => {
  expect(coderAgent.tools).toContain('filesystem');
  expect(coderAgent.tools).toContain('git');
});

test('coder agent uses low temperature for consistency', () => {
  expect(coderAgent.temperature).toBeLessThanOrEqual(0.3);
});
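Beyond configuration, you can assert that the system prompt still encodes the behaviors the rest of your pipeline relies on. A self-contained sketch; in your real suite you would import coderAgent and check coderAgent.systemPrompt instead of the inline copy used here:

```typescript
// Inline copy of the coder agent's system prompt, so this sketch runs standalone.
const systemPrompt = `You are an expert software developer. When asked to write code:
1. Understand the requirements fully before writing
2. Write clean, well-documented code
3. Include error handling
4. Suggest tests for the code you generate
Always explain your reasoning before presenting code.`;

// Returns the phrases NOT found in the prompt (case-insensitive),
// so an empty array means full coverage.
export function promptCovers(prompt: string, requiredPhrases: string[]): string[] {
  return requiredPhrases.filter((p) => !prompt.toLowerCase().includes(p.toLowerCase()));
}

// e.g. promptCovers(systemPrompt, ['error handling', 'tests']) returns []
// because both phrases appear in the prompt above.
```

A test like this catches the common failure mode where someone rewords the prompt and silently drops a rule the agent depended on.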
Integration Testing
Test the full request-response cycle:
import { test, expect } from 'vitest';
import { AgentEngine } from '../engine';

test('coder agent generates valid TypeScript', async () => {
  const engine = new AgentEngine();
  const result = await engine.run({
    agent: 'coder',
    message: 'Create a function that adds two numbers',
  });

  expect(result.files).toHaveLength(1);
  expect(result.files[0].content).toContain('function');
  expect(result.files[0].path).toMatch(/\.ts$/);
});
Manual Testing Checklist
Before deploying a new agent, verify:
- The agent responds appropriately to its intended use cases
- The agent gracefully handles ambiguous or off-topic requests
- Tool calls execute correctly and produce expected results
- Error handling works when tools fail or return unexpected data
- The agent stays within its defined boundaries and permissions