Asaduzzaman Pavel

Practical Look at Building Tool-Using AI Agents in Go

I've been spending more time with AI agents lately, specifically looking at how to make them actually useful in a Go backend. It's one thing to have a model chat with you, but it's another to give it "hands"—tools it can use to fetch real data or trigger actions.

We're using LangChainGo for this. It's the go-to library for this kind of work in the Go ecosystem. You can find the full source code for this demo on GitHub.

Setting up the workspace

I'm using Ollama to run things locally. It keeps the feedback loop fast and doesn't cost anything while you're just messing around with prompts. For this setup, qwen3.5:9b has been performing well enough to follow tool-calling instructions reliably.

mkdir ai-agent && cd ai-agent
go mod init ai-agent
go get github.com/tmc/langchaingo@latest
go get github.com/tmc/langchaingo/llms/ollama

# Make sure you've pulled the model
ollama pull qwen3.5:9b

Giving the agent "Hands" (Tools)

In LangChainGo, a tool is basically just a struct that implements a few methods. The most important part is the Description. You have to be very literal here because this is what the LLM reads to decide if it should call your code.

Here's how I've implemented a simple weather tool:

package tools

import (
	"context"
	"encoding/json"
	"fmt"
)

type WeatherInput struct {
	City string `json:"city"`
	Unit string `json:"unit"`
}

type Weather struct{}

func (w Weather) Name() string { return "get_weather" }

func (w Weather) Description() string {
	// Be explicit about the expected JSON format.
	return `Get weather. Input: {"city": "Tokyo", "unit": "celsius"}`
}

func (w Weather) Call(ctx context.Context, input string) (string, error) {
	var inp WeatherInput
	if err := json.Unmarshal([]byte(input), &inp); err != nil {
		// If the LLM sends bad JSON, return the hint as the observation
		// rather than as an error (which would abort the run), so it can retry.
		return `invalid JSON: use {"city": "NAME", "unit": "celsius"}`, nil
	}
	if inp.Unit == "" {
		inp.Unit = "celsius"
	}
	return fmt.Sprintf(`{"city": "%s", "temp": 22, "unit": "%s", "cond": "sunny"}`, inp.City, inp.Unit), nil
}

I also usually throw in a search tool for anything the model can't find in its internal training data:

type Search struct{}

func (s Search) Name() string { return "web_search" }

func (s Search) Description() string {
	return `Search the web for travel tips. Input is a search query string.`
}

func (s Search) Call(ctx context.Context, input string) (string, error) {
	// Stubbed results for the demo; swap in a real search API here.
	return fmt.Sprintf(`{"query": "%s", "results": ["Tokyo travel guide", "Top 10 things to do in Tokyo"]}`, input), nil
}

The Reasoning Loop

The core logic lives in the Agent and the Executor. The agent handles the prompt engineering, and the executor manages the "think-act-observe" loop.

When using local models like Qwen, I've found that you need a very strict system prompt. Without it, the model tends to get creative with its output format, which breaks the tool-calling parser.

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"ai-agent/internal/tools"
	"github.com/tmc/langchaingo/agents"
	"github.com/tmc/langchaingo/callbacks"
	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/memory"
	langchaintools "github.com/tmc/langchaingo/tools"
)

func main() {
	ctx := context.Background()

	llm, err := ollama.New(
		ollama.WithModel("qwen3.5:9b"),
	)
	if err != nil {
		log.Fatal(err)
	}

	// This prompt is the "guardrail" for local models.
	systemPrompt := strings.TrimSpace(`
You are a helpful assistant that uses tools to answer questions.

IMPORTANT: You must follow this exact format:
For using a tool:
Thought: [your reasoning]
Action: [tool name]
Action Input: [tool input]

For final answer:
Thought: I now know the final answer
Final Answer: [your answer]

Always use "Final Answer:" to indicate your final response.
	`)

	agent := agents.NewConversationalAgent(
		llm,
		[]langchaintools.Tool{
			tools.Weather{},
			tools.Search{},
		},
		// WithSystemMessage only applies to the OpenAI functions agent,
		// so for the conversational agent the prompt goes in as the prefix.
		agents.WithPromptPrefix(systemPrompt),
	)

	// I use the LogHandler so I can see what the agent is "thinking" in real-time.
	executor := agents.NewExecutor(
		agent,
		agents.WithMemory(memory.NewConversationBuffer()),
		agents.WithMaxIterations(5),
		agents.WithCallbacksHandler(callbacks.LogHandler{}),
	)

	input := "What is the weather in Tokyo? Also, search for travel tips."
	response, err := chains.Run(ctx, executor, input)
	if err != nil {
		log.Fatalf("Agent failed: %v", err)
	}

	fmt.Println("Agent Response:", response)
}
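Under the hood, the executor is doing something conceptually simple. Here's a stripped-down sketch of the think-act-observe loop with a scripted stand-in for the LLM — this is not LangChainGo's actual implementation, just the shape of it:

```go
package main

import (
	"fmt"
	"strings"
)

// fakeLLM stands in for the model: it returns canned "Thought/Action" turns.
type fakeLLM struct{ turn int }

func (f *fakeLLM) Generate(prompt string) string {
	f.turn++
	if f.turn == 1 {
		return "Thought: I need the weather.\nAction: get_weather\nAction Input: {\"city\": \"Tokyo\"}"
	}
	return "Thought: I now know the final answer\nFinal Answer: It's 22 and sunny in Tokyo."
}

// parse pulls the Action and Action Input (or the Final Answer) out of a turn.
func parse(output string) (action, input string, final bool) {
	for _, line := range strings.Split(output, "\n") {
		switch {
		case strings.HasPrefix(line, "Final Answer:"):
			return "", strings.TrimSpace(strings.TrimPrefix(line, "Final Answer:")), true
		case strings.HasPrefix(line, "Action:"):
			action = strings.TrimSpace(strings.TrimPrefix(line, "Action:"))
		case strings.HasPrefix(line, "Action Input:"):
			input = strings.TrimSpace(strings.TrimPrefix(line, "Action Input:"))
		}
	}
	return action, input, false
}

func main() {
	llm := &fakeLLM{}
	toolbox := map[string]func(string) string{
		"get_weather": func(in string) string { return `{"temp": 22, "cond": "sunny"}` },
	}

	prompt := "What is the weather in Tokyo?"
	for i := 0; i < 5; i++ { // the max-iterations guardrail
		out := llm.Generate(prompt)
		action, input, final := parse(out)
		if final {
			fmt.Println("Final:", input)
			return
		}
		// Act, then feed the observation back into the next prompt.
		observation := toolbox[action](input)
		prompt += "\n" + out + "\nObservation: " + observation
	}
	fmt.Println("gave up after max iterations")
}
```

Once you've seen the loop written out, the failure modes make sense: a model that drifts off the `Action:`/`Final Answer:` format breaks `parse`, which is exactly why the strict system prompt matters.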

Memory and State

The ConversationBuffer is the simplest way to give the agent a "short-term memory." It just stores the raw message history and injects it back into the prompt on the next turn. It's fine for simple CLI tools, but for anything long-lived, you'll eventually want to swap this out for a persistent store.
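The idea is easy to see without the library. Here's a hand-rolled sketch of what a conversation buffer does — accumulate turns, replay them verbatim into the next prompt — assuming nothing from LangChainGo:

```go
package main

import (
	"fmt"
	"strings"
)

// buffer is the whole trick: keep every turn, replay it verbatim.
type buffer struct {
	turns []string
}

func (b *buffer) Add(role, text string) {
	b.turns = append(b.turns, role+": "+text)
}

// Prompt prepends the full history to the new user input,
// which is how the agent "remembers" earlier turns.
func (b *buffer) Prompt(input string) string {
	return strings.Join(b.turns, "\n") + "\nHuman: " + input
}

func main() {
	mem := &buffer{}
	mem.Add("Human", "What's the weather in Tokyo?")
	mem.Add("AI", "It's 22 and sunny.")

	// The next turn carries the whole history along.
	fmt.Println(mem.Prompt("What about tomorrow?"))
}
```

This also makes the scaling problem obvious: the prompt grows with every turn, which is why long-lived agents need summarization or a persistent store instead of a raw buffer.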

Debugging the "Thoughts"

The biggest challenge with agents is when they get stuck in a loop. By using callbacks.LogHandler{}, you can see the raw "Thought" process in your terminal.

If you see the agent repeatedly trying the same tool with the same bad input, it's usually a sign that your tool's Description isn't clear enough, or your error message isn't helpful. Instead of just returning "error," I try to return something the LLM can actually use to fix its next attempt—like "invalid JSON, expected 'city' field."

About the Author

Asaduzzaman Pavel is a Software Engineer who enjoys the friction of a well-architected system. He has over 15 years of experience building high-performance backends and infrastructure that can handle the real-world chaos of scale.

Currently looking for new opportunities to build something amazing.