I’ve created a new TS-based AI Agentic framework

Piotr Frankowski
7 min read · Mar 3, 2025


🚀 I’ve created a thing! 🚀

TL;DR

I’ve just published my new project called TS-Agents. It’s a TypeScript-based framework for building Agentic AI flows. Go check it out here: https://github.com/piotrfrankowski/ts-agents

But why?

Yes, there are already so many frameworks for building agentic AI flows.

Recent advancements in LLMs, the performance of new models like DeepSeek-R1, and the ability to run them locally have reignited my interest in AI and AI agents. So I decided it was high time to learn more in this field. I started by checking out existing frameworks and doing some testing. I noticed that while Python has a tonne of such frameworks, TypeScript seems far less plentiful in this regard, and I thought it was a great opportunity to learn by doing. So I rolled up my sleeves and got cracking.

So please, give it a try and share your feedback!

Welcome TS-Agents 🤖

Currently, the framework supports the following:

  • Connecting to LLMs and executing simple tasks.
  • Creating an agent that can be instructed to perform a task, using a provided LLM as a base.
  • Giving agents access to tools and performing tool calls.
  • Creating a flow where multiple agents are executed in series.
  • Using previous agents’ responses as context for the next one.
  • Executing multiple agents in parallel.

While this covers the basics, I’m planning to add more features in the future. The current TODO list includes:

  • Add graph-based flows (start, end, task).
  • Add decision-making nodes.
  • Add map-reduce nodes.
  • Add memory for graph flows.
  • Add a way to pass fine-tuning data to LLM models.
  • Add a way to generate training data for LLM models.
  • Add more tooling.
  • Add more examples.
  • Add more LLM connectors.
  • Add different types of models.

Overview 🔎

The framework uses a class-based approach.

The LLM class is responsible for connecting to the LLM provider and executing prompts. It defines the model and the connector, and it is provided to the Agent class.

Each agent within a flow can use a different LLM. Apart from that, an Agent has a persona, a list of tools, a context, and a task. The context is an array of agent instances whose responses are added to this agent’s context. The tools are an array of Tool instances that are available to the agent.

The Tool class defines a tool: its name, description, hint, and parameters. It also holds the function that is executed when the tool is called.

The Flow class is responsible for orchestrating the agents. It can execute agents in series or in parallel: the steps are passed as an array whose items are either a single Agent or an array of Agent instances that are meant to be executed in parallel.
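Conceptually, the series/parallel orchestration boils down to something like the following sketch. This is a simplified stand-in, not the framework's actual Flow implementation; AgentLike and runFlow are illustrative names.

```typescript
// Simplified stand-in for the Flow orchestration described above;
// not the framework's actual implementation.
type AgentLike = { execute: () => Promise<string> };
type Step = AgentLike | AgentLike[];

// Run steps in order; an array step runs its agents concurrently.
async function runFlow(steps: Step[]): Promise<string[]> {
  const results: string[] = [];
  for (const step of steps) {
    if (Array.isArray(step)) {
      // Parallel step: all agents start together; results keep array order.
      results.push(...(await Promise.all(step.map((a) => a.execute()))));
    } else {
      results.push(await step.execute());
    }
  }
  return results;
}
```

A steps array like `[dev, [auditor, reviewer], summarizer]` would then run the developer first, the auditor and reviewer concurrently, and the summarizer last.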

All of this is located in the lib directory. In addition, there is a tools directory that contains custom tools for the agents to use. For the time being, it only contains a tool for reading the file system, but I’m planning to add more tools in the future.

The example directory contains a few examples of using the framework.

The journey 🎒

With this project, I set out to create an easy-to-use framework that allows defining agents by specifying their personas and providing them with tasks to achieve. As I wanted to run it locally, I decided to start by implementing Ollama as an LLM provider, but the LLM class can take any arbitrary connector that implements the LLMConnector interface.

const llm = new LLM({ model: "llama3.3", connector: "ollama" });

const blockchainDeveloper = new Agent({
  persona: {
    role: "Blockchain Developer",
    background:
      "You specialize in developing smart contracts and dapps on the blockchain.",
    goal: `Writing smart contracts in solidity for EVM compatible blockchains.`,
  },
  llm,
  task: `Write a smart contract for the new ERC20 token called ${coinName}.
This token should have the following features:
- It should be an ERC20 token
- It should have a name
- It should have a symbol
- It should have decimals
- It should have a total supply
- It should have a mint function that is only available to the owner
- It should have a burn function that is only available to the owner
- It should have a transfer function
- It should have a balanceOf function
- It should have a totalSupply function
- It should have an owner function
- It should have an approve function
- It should have an allowance function
- It should have a transferFrom function
`,
});
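The LLMConnector interface itself isn't shown in this post, so the following is only a guess at its general shape, meant to illustrate the idea of swapping providers behind one interface; the real interface in the repo may differ.

```typescript
// Hypothetical shape of a connector; the real LLMConnector interface
// in the repo may differ. This only illustrates provider swapping.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LLMConnector {
  // Send the conversation to the provider and return the model's reply.
  chat(model: string, messages: ChatMessage[]): Promise<string>;
}

// A trivial echo connector, handy for testing a flow offline.
const echoConnector: LLMConnector = {
  async chat(_model, messages) {
    return `echo: ${messages[messages.length - 1].content}`;
  },
};
```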

The next step was to provide agents with context from previous steps. That was pretty straightforward: I just injected previous results as system messages. In order not to feed consecutive steps too much information, I added a property to the Agent class that allows setting which other agents’ results should be added to the context.

const llm = new LLM({ model: "llama3.3", connector: "ollama" });

const blockchainDeveloper = new Agent({
  persona: {
    role: "Blockchain Developer",
    background:
      "You specialize in developing smart contracts and dapps on the blockchain.",
    goal: `Writing smart contracts in solidity for EVM compatible blockchains.`,
  },
  llm,
  context: [],
  task: `Write a smart contract for the new ERC20 token called ${coinName}.
This token should have the following features:
- It should be an ERC20 token
- It should have a name
- It should have a symbol
- It should have decimals
- It should have a total supply
- It should have a mint function that is only available to the owner
- It should have a burn function that is only available to the owner
- It should have a transfer function
- It should have a balanceOf function
- It should have a totalSupply function
- It should have an owner function
- It should have an approve function
- It should have an allowance function
- It should have a transferFrom function
`,
});

const blockchainSecurityExpert = new Agent({
  persona: {
    role: "Blockchain Security Expert",
    background:
      "You specialize in securing smart contracts on the blockchain.",
    goal: `Create a security audit report for the smart contract.
Provide a list of vulnerabilities and recommendations for improvements.`,
  },
  llm,
  context: [blockchainDeveloper],
  task: `You are given a smart contract in Solidity.
The smart contract is for the new ERC20 token called MeowCoin.
Review the code taking into account the security of the contract.
Suggest code improvements and best practices.
Implement the code improvements.
`,
});

After that, I wanted to give the agents access to tools. I created the Tool class, which describes a tool to the LLM in system messages and provides the tool’s actual implementation. I started by adding a simple tool that allows reading the file system.

const llm = new LLM({ model: "llama3.3", connector: "ollama" });

const readRepoTool: Tool<{ repo_name: string }> = new Tool({
  name: "read_repo",
  description: "Read file tree of a code repository",
  hint: `Provide the name of the repository you want to read.
If it won't be accessible, you will be notified with the "Repository repo_name does not exist" message`,
  params: [
    {
      name: "repo_name",
      type: "string",
      description: "The name of the repository",
      required: true,
    },
  ],
  fn: async (args) =>
    readRepository(args.repo_name, {
      basePath: "~/code",
      limitToDir: "src",
    }),
});

const codeAnalyst = new Agent({
  persona: {...},
  tools: [readRepoTool],
  llm,
  context: [],
  task: ...,
});

Agents are prompted to use the tools in the system message. If the response contains tool calls, the framework automatically performs them and adds the results to the conversation history. This worked okay, but I ran into issues where the model would respond with just an idea of how to approach the given task. In order to prod agents to actually use the tools and continue, they are also asked to mark their final response with a tag.

If the tag is not present, the framework will keep asking the agent to continue with the task.
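That retry loop can be sketched roughly like this; the tag string, prompt wording, and function names here are illustrative, not the framework's actual ones.

```typescript
const FINAL_TAG = "<final>"; // illustrative marker; the framework's tag may differ

type Chat = (history: string[]) => Promise<string>;

// Re-prompt the model until its response carries the final-answer tag,
// with a cap on iterations so a stubborn model can't loop forever.
async function askUntilFinal(chat: Chat, task: string, maxTurns = 5): Promise<string> {
  const history = [task];
  for (let turn = 0; turn < maxTurns; turn++) {
    const response = await chat(history);
    history.push(response);
    if (response.includes(FINAL_TAG)) {
      // Strip the marker and return the model's final answer.
      return response.replace(FINAL_TAG, "").trim();
    }
    // No tag yet: nudge the model to keep going.
    history.push("Please continue with the task and mark your final answer.");
  }
  throw new Error("Model never produced a final answer");
}
```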

Having achieved this, I was content, but I recognized that certain flows remain out of reach with this approach. To address this limitation, I drew inspiration from other libraries and started working on a Graph-based flow. I envision it as a versatile tool capable of covering a broad spectrum of tasks, built from several types of nodes:

  • Task node — take an input, execute an agent or some function, and return the output
  • Decision node — branch out to different nodes based on a condition
  • Map-reduce node — take an input, apply some mapping, apply an agent or function to each element, then reduce the outputs and return the result

This approach, however, requires more complex memory management between agents and making sure that inputs and outputs are handled properly. I’m still ideating on this one, so I’m happy to hear suggestions!
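One way the planned node types could be modeled is as a discriminated union. This is purely a speculative sketch of the idea, over plain string payloads, since that part of the design is still open; the eventual TS-Agents design may look quite different.

```typescript
// Speculative sketch of the three planned node types; not the
// framework's actual design.
type GraphNode =
  | { kind: "task"; run: (input: string) => Promise<string> }
  | { kind: "decision"; choose: (input: string) => GraphNode }
  | {
      kind: "map-reduce";
      split: (input: string) => string[];
      apply: (item: string) => Promise<string>;
      reduce: (items: string[]) => string;
    };

// Execute a node: a task transforms its input, a decision picks the
// next node and runs it, and a map-reduce node fans out over the
// split input and folds the results back together.
async function runNode(node: GraphNode, input: string): Promise<string> {
  switch (node.kind) {
    case "task":
      return node.run(input);
    case "decision":
      return runNode(node.choose(input), input);
    case "map-reduce":
      return node.reduce(await Promise.all(node.split(input).map(node.apply)));
  }
}
```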

Once I’ve tackled this, I’ll work on the fine-tuning ability and on adding support for different types of models.

Trying it out ⚙️

In order to execute some of the examples, you will need a few prerequisites.

  • Node.js, duh
  • Ollama running on your machine
  • The desired model downloaded; you can do that by executing ollama run model_name, for example: ollama run llama3.3
  • Clone the repo
  • Install the dependencies with yarn install
  • And start hacking by running any of the examples

Be wary of model sizes (those can be quite hefty) and their memory consumption. You can check out ready-made models for Ollama here: https://ollama.com/search

If you want to, you can also try running it with OpenAI. Make sure to copy the .env.example file, rename it to .env, and paste in your OpenAI API key.

The connector and model in each example are set up to be the same for every agent in the flow and are defined at the top of the corresponding agents.ts file.

Conclusion

I’ve learned a lot while working on this project, but I’m sure there are pitfalls I’ve fallen into, and there are a tonne of things that can be improved. I also have a huge list of features and ideas that I’d like to add in the future, like an option to use different types of models, including image generation and recognition.

Thus, I’m open to any feedback and suggestions and will be happy to hear from you!

