
Build Your First AI Agent from Scratch - Part 2: Architecting the Core Agent Loop

By AgentForge Hub · 2/10/2025 · 6 min read · Beginner

After Part 1, the Lumenly team had a pristine repo but no agent to speak with. Stakeholders asked, "Can it talk yet?" and that is where Part 2 begins. The trap many teams fall into is hacking together a single client.chat.completions.create() call and calling it a day. Two weeks later they try to add memory, tool calls, or logging and everything unravels.

This tutorial builds the skeleton we will rely on for the rest of the series. The thesis: a good agent loop has clean seams--config in one place, conversation state in another, logging everywhere, and a CLI harness that humans actually want to use. Invest in those seams now, and Parts 3-5 (memory, tools, deployment) will feel like bolt-ons instead of rewrites.


Define the Responsibilities of the Core Agent

Before writing code, articulate what the "agent" owns:

  1. Conversation lifecycle: track user/system messages, enforce max history, surface metadata for future memories.
  2. API orchestration: build prompts, call language models, handle timeouts or retries gracefully.
  3. Policy enforcement: ensure every response flows through the same safety checks and instrumentation.
  4. Interface hooks: expose a CLI (or later, an HTTP server) that can send/receive events without knowing internals.

Capture this in a simple interface contract. In src/agent_lab/contracts.py:

from dataclasses import dataclass
from typing import Literal, Sequence

Role = Literal["user", "assistant", "system"]

@dataclass
class Message:
    role: Role
    content: str

class Agent:
    def send(self, message: Message) -> Message:
        raise NotImplementedError

    def history(self) -> Sequence[Message]:
        raise NotImplementedError

This is intentionally minimal. Each future part (memory, tools) will extend the contract without breaking existing code.
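
To see the contract in action before any model is wired up, here is a throwaway implementation (an illustration only, not part of the series code) that simply echoes input; it is handy for exercising the CLI offline:

# Illustrative only: a trivial Agent implementation that echoes the user.
from agent_lab.contracts import Agent, Message

class EchoAgent(Agent):
    def __init__(self) -> None:
        self._messages: list[Message] = []

    def send(self, message: Message) -> Message:
        self._messages.append(message)
        reply = Message(role="assistant", content=f"echo: {message.content}")
        self._messages.append(reply)
        return reply

    def history(self) -> list[Message]:
        return list(self._messages)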

Takeaway: Write down the contract first so every feature that follows has a place to plug in.


Build a Configuration and Dependency Layer

We need a centralized way to manage API keys, defaults, and toggles. Either pydantic models or vanilla dataclasses will work; we'll go with dataclasses for simplicity and reuse the .env file from Part 1.

# src/agent_lab/config.py
from dataclasses import dataclass
from pathlib import Path
from dotenv import load_dotenv
import os

load_dotenv()

@dataclass(frozen=True)
class AppConfig:
    openai_api_key: str
    model: str = "gpt-4o-mini"
    max_turns: int = 20
    temperature: float = 0.3
    log_dir: Path = Path("logs")

def load_config() -> AppConfig:
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY missing")
    cfg = AppConfig(openai_api_key=key)
    cfg.log_dir.mkdir(exist_ok=True)
    return cfg

Expose the config via a module-level settings = load_config() so other modules import from one place, as sketched below. Document overrides (e.g., AGENT_MODEL) in the README. Later, when we add YAML policy files or feature flags, they will live here too.
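
A minimal sketch of that pattern, assuming a module-level settings object and an AGENT_MODEL environment override (both names are ours to choose; adjust to taste):

# src/agent_lab/config.py (continued) -- sketch of a module-level settings object
# plus an optional AGENT_MODEL override; the override name is illustrative.
def load_config() -> AppConfig:
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY missing")
    cfg = AppConfig(
        openai_api_key=key,
        model=os.getenv("AGENT_MODEL", AppConfig.model),  # falls back to the dataclass default
    )
    cfg.log_dir.mkdir(exist_ok=True)
    return cfg

settings = load_config()  # other modules do: from agent_lab.config import settings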

Takeaway: One config module = single source of truth for secrets, models, and paths.


Implement the Conversation Engine

With contracts and config ready, create the agent implementation. Focus on modularity: a ConversationStore handles history, an LLMClient wraps OpenAI, and a CoreAgent coordinates both (three small modules, shown back to back below).

# src/agent_lab/conversation.py
from collections import deque
from typing import Deque
from agent_lab.contracts import Message

class ConversationStore:
    def __init__(self, limit: int):
        self._limit = limit
        self._messages: Deque[Message] = deque(maxlen=limit)

    def append(self, message: Message) -> None:
        self._messages.append(message)

    def history(self) -> list[Message]:
        return list(self._messages)

# src/agent_lab/llm_client.py
from openai import OpenAI
from agent_lab.contracts import Message
from agent_lab.config import load_config

class LLMClient:
    def __init__(self):
        cfg = load_config()
        self._client = OpenAI(api_key=cfg.openai_api_key)
        self._model = cfg.model
        self._temperature = cfg.temperature

    def complete(self, messages: list[Message]) -> Message:
        resp = self._client.chat.completions.create(
            model=self._model,
            temperature=self._temperature,
            messages=[{"role": m.role, "content": m.content} for m in messages],
        )
        content = (resp.choices[0].message.content or "").strip()
        return Message(role="assistant", content=content)
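
The responsibility list earlier called out timeouts and retries; the client above stays minimal, so here is a hedged sketch of how a retry could wrap complete() later without touching callers (attempt count and backoff are illustrative):

# Sketch: optional retry helper around LLMClient.complete; values are illustrative.
import time

def complete_with_retry(client: LLMClient, messages: list[Message], attempts: int = 3) -> Message:
    last_error: Exception | None = None
    for attempt in range(attempts):
        try:
            return client.complete(messages)
        except Exception as exc:  # narrow to the SDK's specific error types in real code
            last_error = exc
            time.sleep(2 ** attempt)  # simple backoff: 1s, 2s, 4s
    raise RuntimeError("LLM call failed after retries") from last_error
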
# src/agent_lab/core.py
from agent_lab.config import load_config
from agent_lab.contracts import Agent, Message
from agent_lab.conversation import ConversationStore
from agent_lab.llm_client import LLMClient
from agent_lab.telemetry import emit_metric, logger

class CoreAgent(Agent):
    def __init__(self, system_prompt: str):
        cfg = load_config()
        self.store = ConversationStore(limit=cfg.max_turns)
        self.llm = LLMClient()
        self.store.append(Message("system", system_prompt))

    def send(self, message: Message) -> Message:
        self.store.append(message)
        history = self.store.history()
        logger.info("sending turn", extra={"turn": len(history)})
        reply = self.llm.complete(history)
        self.store.append(reply)
        emit_metric("agent.turn", tokens=len(reply.content.split()))
        return reply

    def history(self) -> list[Message]:
        return self.store.history()

Note the call to emit_metric: this reuses the logging hooks from Part 1. We now have a testable loop.
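
A quick smoke test of the loop from a Python shell (assumes OPENAI_API_KEY is available via the .env from Part 1):

from agent_lab.contracts import Message
from agent_lab.core import CoreAgent

bot = CoreAgent(system_prompt="You are a pragmatic assistant.")
reply = bot.send(Message("user", "Summarize this project in one sentence."))
print(reply.content)          # assistant text
print(len(bot.history()))     # system + user + assistant = 3 messages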

Takeaway: Split responsibilities so later we can swap out ConversationStore for a vector DB or LLMClient for local inference.


Create a Human-Friendly CLI Harness

Engineers need a tight feedback loop. Instead of invoking Python scripts manually, build a CLI with typer and rich for a better UX. Add commands for chatting, inspecting history, and viewing metrics (a metrics sketch follows after the main module).

# src/agent_lab/cli.py
import typer
from rich.console import Console
from rich.panel import Panel
from agent_lab.contracts import Message
from agent_lab.core import CoreAgent

cli = typer.Typer()
console = Console()

def agent() -> CoreAgent:
    return CoreAgent(system_prompt="You are a pragmatic assistant.")

@cli.command()
def chat():
    bot = agent()
    console.print(Panel("Agent ready. Type 'exit' to leave.", title="Agent CLI"))
    while True:
        user = console.input("[bold cyan]You[/]: ")
        if user.strip().lower() in {"exit", "quit"}:
            break
        reply = bot.send(Message("user", user))
        console.print(Panel(reply.content, title="Agent", expand=False))

@cli.command()
def history():
    bot = agent()
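    # Note: each invocation builds a fresh agent, so this only shows the seed
    # system prompt until persistence arrives in Part 3.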
    for msg in bot.history():
        console.print(f"[bold]{msg.role}[/]: {msg.content}")

if __name__ == "__main__":
    cli()

Run via python -m agent_lab.cli chat. This interface will become our staging ground for testing memory (Part 3) and tools (Part 4).
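
The section opener promised a metrics view as well; here is a minimal sketch (the command name and --last option are our own additions) that tails the logs/metrics.ndjson file written by the telemetry module in the next section:

# src/agent_lab/cli.py (continued) -- sketch of a metrics command; names are illustrative
import json
from pathlib import Path

@cli.command()
def metrics(last: int = 10):
    """Print the most recent entries from logs/metrics.ndjson."""
    path = Path("logs") / "metrics.ndjson"
    if not path.exists():
        console.print("[yellow]No metrics recorded yet.[/]")
        return
    for line in path.read_text(encoding="utf-8").splitlines()[-last:]:
        console.print(json.loads(line))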

Takeaway: Invest in the CLI--it keeps humans in the loop while APIs are still evolving.


Add Logging, Metrics, and Tests

Borrowing from Part 1's observability theme, we'll log structured events and assert core behaviors with pytest.

# src/agent_lab/telemetry.py
import logging, json, time
from pathlib import Path
from agent_lab.config import load_config

cfg = load_config()
logger = logging.getLogger("agent")
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
    handlers=[
        logging.StreamHandler(),
        logging.FileHandler(cfg.log_dir / "agent.log"),
    ],
)

def emit_metric(name: str, **data):
    entry = {"ts": time.time(), "metric": name, **data}
    Path(cfg.log_dir).mkdir(exist_ok=True)
    with open(cfg.log_dir / "metrics.ndjson", "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

Tests:

# tests/test_core_agent.py
import pytest
from agent_lab.core import CoreAgent
from agent_lab.contracts import Message

@pytest.mark.integration
def test_agent_generates_response():
    bot = CoreAgent(system_prompt="Test mode.")
    msg = Message("user", "Say 'hello world'")
    reply = bot.send(msg)
    assert "hello" in reply.content.lower()

def test_history_limits():
    bot = CoreAgent(system_prompt="Test")
    for i in range(50):
        bot.store.append(Message("user", f"msg {i}"))
    assert len(bot.history()) <= bot.store._limit

Mock external APIs during unit tests; run the integration tests manually or opt in via the marker (pytest -m integration), and register the integration marker in pytest.ini or pyproject.toml so pytest does not warn about unknown marks. A sketch of the mocking approach follows.
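
A hedged sketch of what that mocking can look like with monkeypatch (the stub bypasses the network entirely; it still assumes OPENAI_API_KEY resolves via .env, because the client object is constructed but never called):

# tests/test_core_agent.py (continued) -- sketch of a fully offline unit test
from agent_lab.contracts import Message
from agent_lab.core import CoreAgent
from agent_lab.llm_client import LLMClient

def test_send_with_stubbed_llm(monkeypatch):
    # Replace the real completion call with a canned reply so no request leaves the machine.
    monkeypatch.setattr(
        LLMClient,
        "complete",
        lambda self, messages: Message(role="assistant", content="stubbed reply"),
    )
    bot = CoreAgent(system_prompt="Test mode.")
    reply = bot.send(Message("user", "ping"))
    assert reply.role == "assistant"
    assert reply.content == "stubbed reply"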

Takeaway: Observability + tests = confidence when we layer memory and tools later.


Run the End-to-End Flow

  1. Activate the environment (source .venv/bin/activate).
  2. Export OPENAI_API_KEY, or update .env.
  3. Run python -m agent_lab.cli chat and hold a conversation.
  4. Inspect logs/agent.log and logs/metrics.ndjson.
  5. Execute pytest to ensure contracts hold.

Checklist accomplished:

  • Agent contract and configuration pattern.
  • Conversation store with bounded history.
  • LLM client wrapper with instrumentation.
  • CLI harness for human testing.
  • Logging + metrics + tests.

Looking Ahead

In Part 3 we will teach this agent to remember--summary memory, vector stores, and context compression. The seams we built today (history store, telemetry, CLI) will make that addition straightforward. Until then:

Fire up the CLI, give the agent a few turns, and get ready to give it a memory in Part 3.



Recommended Tools & Resources

* This section contains affiliate links. We may earn a commission when you purchase through these links at no additional cost to you.

OpenAI API

AI Platform

Access GPT-4 and other powerful AI models for your agent development.

Pay-per-use

LangChain Plus

Framework

Advanced framework for building applications with large language models.

Free + Paid

Pinecone Vector Database

Database

High-performance vector database for AI applications and semantic search.

Free tier available

AI Agent Development Course

Education

Complete course on building production-ready AI agents from scratch.

$199

💡 Pro Tip

Start with the free tiers of these tools to experiment, then upgrade as your AI agent projects grow. Most successful developers use a combination of 2-3 core tools rather than trying everything at once.
