Build a Personal AI Assistant – Part 2: Core Logic, Conversation Loop, and CLI

With the environment locked in, it is time to give the assistant a brain. Lumenly’s team originally glued prompt strings directly to API calls; when the model timed out, the entire CLI froze and no one knew which turn failed. In this tutorial we focus on resilient core logic—conversation state, error handling, retries, and an interactive CLI that exposes debugging hooks.
Outline the Assistant Contract
Define the interface up front. In src/core/types.ts:
export type Role = "system" | "user" | "assistant";

export interface Message {
  role: Role;
  content: string;
}

export interface Assistant {
  send(message: Message): Promise<Message>;
  history(): Message[];
}
This boundary ensures future upgrades (memory, tools) keep the same API. Every module interacts with the assistant via send and history.
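Because every consumer codes against this contract, you can verify any implementation the same way. Here is a minimal sketch of a contract-level check (the smokeTest helper is our own illustration, not part of the tutorial's file tree):
// Hypothetical helper: exercises any Assistant implementation via the contract alone.
import { Assistant, Message } from "./types";

export async function smokeTest(assistant: Assistant): Promise<boolean> {
  const reply: Message = await assistant.send({ role: "user", content: "ping" });
  // The contract guarantees a reply plus an inspectable history,
  // regardless of which provider or memory layer sits behind it.
  return reply.role === "assistant" && assistant.history().length > 0;
}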
Takeaway: Contracts keep refactors safe.
Build the OpenAI Provider with Resilience
Install the openai SDK, plus p-retry for the retry wrapper:
npm install openai p-retry
src/providers/openai.ts:
import OpenAI from "openai";
import pRetry from "p-retry";
import { Message } from "../core/types";
import { loadSettings } from "../config/settings";

const settings = loadSettings();
const client = new OpenAI({ apiKey: settings.openAiKey });

export async function createCompletion(messages: Message[]) {
  return pRetry(
    async () => {
      const response = await client.chat.completions.create({
        model: "gpt-4o-mini",
        temperature: 0.3,
        messages,
      });
      return response.choices[0].message;
    },
    {
      retries: 3,
      onFailedAttempt: (error) => {
        console.warn(`OpenAI attempt ${error.attemptNumber} failed`);
      },
    },
  );
}
Takeaway: Wrap model calls in retry logic so transient errors don’t break UX.
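The same resilience thinking applies to demos: sometimes the best retry is not calling the API at all. Here is a minimal sketch of a mock-mode wrapper, assuming the MOCK_PROVIDERS flag from Part 1 is read off the environment (adapt it to however your settings module exposes the flag):
// src/providers/mock.ts — hypothetical wrapper that short-circuits the real provider.
import { Message } from "../core/types";
import { createCompletion } from "./openai";

export async function completeOrMock(messages: Message[]) {
  if (process.env.MOCK_PROVIDERS === "true") {
    // Deterministic canned reply keeps demos and CI stable (and free).
    const last = messages[messages.length - 1];
    return { role: "assistant" as const, content: `Mock reply to: ${last?.content ?? ""}` };
  }
  return createCompletion(messages);
}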
Implement the Conversation Store
Add src/core/conversation.ts:
import { Message } from "./types";

export class ConversationStore {
  private history: Message[] = [];

  constructor(private maxTurns = 30) {}

  append(message: Message) {
    this.history.push(message);
    if (this.history.length > this.maxTurns) {
      // Trim the oldest turns, but never drop the system prompt at index 0.
      const system = this.history[0]?.role === "system" ? [this.history[0]] : [];
      this.history = [
        ...system,
        ...this.history.slice(-(this.maxTurns - system.length)),
      ];
    }
  }

  getHistory() {
    return [...this.history];
  }
}
Seed the store with a system prompt describing the assistant persona; the Assistant class in the next section does exactly that in its constructor.
Assemble the Assistant Core
src/core/assistant.ts:
import { ConversationStore } from "./conversation";
import { Message, Assistant as AssistantContract } from "./types";
import { createCompletion } from "../providers/openai";
import { recordMetric } from "../utils/metrics";
import { logger } from "../utils/logger";

export class Assistant implements AssistantContract {
  private store: ConversationStore;

  constructor(
    private readonly systemPrompt = "You are a pragmatic, concise personal assistant.",
  ) {
    this.store = new ConversationStore();
    this.store.append({ role: "system", content: this.systemPrompt });
  }

  async send(message: Message): Promise<Message> {
    this.store.append(message);
    const start = Date.now();
    try {
      const reply = await createCompletion(this.store.getHistory());
      const assistantMessage: Message = {
        role: reply.role ?? "assistant",
        content: reply.content ?? "",
      };
      this.store.append(assistantMessage);
      recordMetric("assistant.turn", {
        latency_ms: Date.now() - start,
        // Whitespace split is a rough word count, not true token usage.
        tokens_out: assistantMessage.content.split(/\s+/).length,
      });
      return assistantMessage;
    } catch (err) {
      logger.error({ err }, "Assistant turn failed");
      const fallback: Message = {
        role: "assistant",
        content: "I ran into an error fulfilling that request. Can you try again?",
      };
      this.store.append(fallback);
      return fallback;
    }
  }

  history(): Message[] {
    return this.store.getHistory();
  }
}
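The recordMetric helper comes from Part 1's utilities. If you skipped it, here is a minimal sketch that appends NDJSON lines to logs/metrics.ndjson (the path and record shape are our assumptions, chosen to match the verification checklist below):
// src/utils/metrics.ts — minimal NDJSON metrics sink (a sketch; adapt to your setup).
import { appendFileSync, mkdirSync } from "node:fs";
import { dirname } from "node:path";

const METRICS_PATH = "logs/metrics.ndjson";

export function recordMetric(name: string, fields: Record<string, number>) {
  mkdirSync(dirname(METRICS_PATH), { recursive: true }); // ensure logs/ exists
  const line = JSON.stringify({ name, ts: new Date().toISOString(), ...fields });
  appendFileSync(METRICS_PATH, line + "\n"); // one JSON object per line
}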
Takeaway: Metrics + logging belong inside send so every turn is observable.
Craft a Developer-Friendly CLI
Either commander or ink works here; we'll use commander plus chalk for colored output.
npm install commander chalk
src/interfaces/cli.ts:
import { Command } from "commander";
import chalk from "chalk";
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";
import { Assistant } from "../core/assistant";

const program = new Command();
program.name("assistant-cli").description("Interact with your personal assistant");

program
  .command("chat")
  .option("--system <prompt>", "Override system prompt")
  .action(async (opts) => {
    const assistant = new Assistant(opts.system);
    const rl = readline.createInterface({ input, output });
    console.log(chalk.green("Assistant ready. Type 'exit' to quit."));
    while (true) {
      const user = await rl.question(chalk.cyan("You ▸ "));
      if (user.trim().toLowerCase() === "exit") break;
      const reply = await assistant.send({ role: "user", content: user });
      console.log(chalk.yellow(`Assistant ▸ ${reply.content}`));
    }
    rl.close();
  });

program
  .command("history")
  .action(async () => {
    // Note: each CLI invocation creates a fresh Assistant, so this only shows
    // the seeded system prompt until persistence arrives in Part 3.
    const assistant = new Assistant();
    assistant.history().forEach((msg, idx) =>
      console.log(`${idx}: [${msg.role}] ${msg.content}`),
    );
  });

program.parse();
Add matching scripts to package.json, e.g. "cli:chat": "ts-node src/interfaces/cli.ts chat" and "cli:history": "ts-node src/interfaces/cli.ts history".
Takeaway: The CLI is your lab bench for future experiments.
Write Tests that Mirror Real Usage
Install vitest:
npm install -D vitest @vitest/coverage-v8
tests/assistant.test.ts:
import { describe, it, expect, vi } from "vitest";
import * as openaiProvider from "../src/providers/openai";
import { Assistant } from "../src/core/assistant";

describe("Assistant", () => {
  it("stores conversation turns", async () => {
    const spy = vi.spyOn(openaiProvider, "createCompletion").mockResolvedValue({
      role: "assistant",
      content: "Hello there",
    });
    const assistant = new Assistant("Test persona");
    const reply = await assistant.send({ role: "user", content: "Hey" });
    expect(reply.content).toContain("Hello");
    // system prompt + user turn + assistant reply
    expect(assistant.history()).toHaveLength(3);
    spy.mockRestore();
  });
});
Mark integration tests (real API calls) separately:
it.skip("hits the real OpenAI API", async () => {
  // Set MOCK_PROVIDERS=false and run manually.
});
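If skipping by hand feels brittle, Vitest's runIf can gate the test on an environment variable instead (the opt-in convention here is our own):
// Runs only when you explicitly opt in to live API calls.
const LIVE = process.env.MOCK_PROVIDERS === "false";

it.runIf(LIVE)("hits the real OpenAI API", async () => {
  const assistant = new Assistant("Integration persona");
  const reply = await assistant.send({ role: "user", content: "Say hello" });
  expect(reply.content.length).toBeGreaterThan(0);
});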
Takeaway: Mock providers in tests so CI stays deterministic.
Operational Hooks
- Config flag for mock mode. MOCK_PROVIDERS is already in your settings; use it to bypass real API calls for demos.
- Transcript export. Write a helper that dumps assistant.history() to logs/transcript-<timestamp>.md (see the sketch after this list).
- Error budget. Track the number of fallback responses per session; if it exceeds a threshold, surface a warning.
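A minimal sketch of that transcript helper (file name, path, and Markdown layout are our choices):
// src/utils/transcript.ts — hypothetical exporter for assistant.history().
import { writeFileSync, mkdirSync } from "node:fs";
import { Message } from "../core/types";

export function exportTranscript(history: Message[], dir = "logs"): string {
  mkdirSync(dir, { recursive: true });
  const path = `${dir}/transcript-${Date.now()}.md`;
  const body = history
    .map((m) => `**${m.role}**\n\n${m.content}`)
    .join("\n\n---\n\n");
  writeFileSync(path, body);
  return path; // callers can log where the transcript landed
}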
Add CLI options (see the sketch after this list):
--mock Run in mock-provider mode
--transcript <path> Path to save conversation log
--debug Increase log verbosity
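One way those flags could wire into the chat command (a sketch: it extends the command registration from cli.ts above, and assumes the provider and logger read MOCK_PROVIDERS and LOG_LEVEL from the environment):
program
  .command("chat")
  .option("--system <prompt>", "Override system prompt")
  .option("--mock", "Run in mock-provider mode")
  .option("--transcript <path>", "Path to save conversation log")
  .option("--debug", "Increase log verbosity")
  .action(async (opts) => {
    if (opts.mock) process.env.MOCK_PROVIDERS = "true"; // checked by the provider layer
    if (opts.debug) process.env.LOG_LEVEL = "debug"; // assumes the logger honors LOG_LEVEL
    // ...same REPL loop as before; on exit, pass assistant.history()
    // to the transcript helper when opts.transcript is set.
  });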
Takeaway: Small operational flags save hours once stakeholders start testing.
Verification Checklist
- npm run cli:chat responds promptly and logs metrics.
- npm run test passes with the mocked provider.
- npm run lint / npm run format succeed.
- logs/metrics.ndjson captures latency and token counts.
- .env includes MOCK_PROVIDERS=true for local dev.
- Optionally, npm run transcript -- --output logs/demo.md produces a readable conversation.
Once these checks are green, you have a stable assistant nucleus that the next tutorials will extend.
Looking Ahead
Part 3 introduces memory—episodic storage, semantic embeddings, and context compression. The modules you built today (Assistant, ConversationStore, CLI, provider hooks) are deliberately modular so memory can plug in with minimal changes. Until then, keep experimenting with the CLI, tweak the system prompt per persona, and inspect the transcripts to see how your assistant behaves under different scenarios.
Run one more conversation and commit your work. The assistant finally has a brain.