
just-bash: TypeScript Bash Simulation for AI Agents

Stanley Ulili
Updated on February 9, 2026

One of the most powerful capabilities you can give an AI agent is access to a command-line environment, specifically a Bash shell. A shell lets the agent interact with files, process data, and perform complex tasks programmatically. However, providing direct access to a real shell is fraught with complexity and significant security risk: it requires substantial infrastructure (servers, containers, file systems) and careful sandboxing to prevent malicious or unintended actions.

What if you could grant your AI agent the full power of a Bash shell without any of the associated infrastructure overhead or security headaches? This article explores just-bash, an open-source package that provides a fully simulated Bash environment, complete with an in-memory virtual filesystem, written entirely in TypeScript. That means you can run Bash commands directly within your Node.js or browser environment, securely and efficiently.

You'll discover how to leverage just-bash to build more powerful, cost-effective, and capable AI agents. Starting with the basics of the library and simple command execution, you'll see how to tackle a real-world problem: enabling an AI chatbot to query and analyze a massive JSON dataset. You'll compare the traditional, token-intensive approach with the elegant, efficient solution provided by just-bash, seeing how this tool can dramatically reduce your LLM costs and improve performance. By the end, you'll have a comprehensive understanding of how to integrate a secure, simulated shell into your AI applications.

Understanding just-bash architecture

At its core, just-bash is a clever and powerful re-implementation of the Bash shell and its core utilities, but instead of being written in a low-level language like C, it's written in TypeScript.

A simulated, sandboxed environment

The most significant feature of just-bash is that it is a simulation. It doesn't actually spin up a separate process or interact with your computer's underlying operating system shell. When you execute a command like echo "Hello" > greeting.txt, the library doesn't call out to the real echo binary on your system. Instead, it has its own TypeScript function that perfectly mimics the behavior of the echo command.

A screenshot of the `just-bash` GitHub README, highlighting the description: "A simulated bash environment with an in-memory virtual filesystem, written in TypeScript."

This simulation-based approach provides two immense benefits: security and portability.

Since the agent is operating within a completely self-contained, sandboxed environment, it cannot break out and affect the host system. It can't delete critical system files, access sensitive information, or execute malicious code outside of its designated virtual space. This is a game-changer for building AI agents that need to execute code, as it mitigates one of the biggest risks associated with the technology.

Because it's just TypeScript, just-bash can run anywhere JavaScript can run. This includes backend Node.js environments (like Vercel Serverless Functions or AWS Lambda), and even directly in the user's browser. You get the power of a Unix-like shell without needing a Unix-like operating system, eliminating a huge layer of infrastructure complexity.

The in-memory virtual filesystem

When a command creates a file, that file doesn't get written to your actual hard drive. It exists only in the memory of the running application, within the context of that specific Bash instance. This filesystem is persistent for the life of the Bash object, meaning an agent can create a file in one step, modify it in another, and read from it in a third, all within the same session. This is perfect for agents that need a temporary "scratchpad" to process data without needing persistent storage. While there are options to connect it to a real filesystem if needed, the default in-memory approach is fast, secure, and requires zero setup.

How translation from Bash to TypeScript works

The magic of just-bash lies in its translation of shell commands into executable TypeScript code. The library contains a comprehensive parser that understands Bash syntax (commands, arguments, flags, pipes, and redirections). When you pass a command string to the exec function, it parses this string and maps the command to its corresponding TypeScript implementation.

A view of the source code for the `echo` command within the `just-bash` library, showing a TypeScript function named `execute` that handles the logic.

For example, the source code for the echo command is a TypeScript module that exports a command object with a name (echo) and an execute function. This function contains all the logic for parsing flags like -n (to suppress newlines) and processing the arguments to produce the final output string. This output is then returned in a structured object containing stdout, stderr, and an exitCode, just like a real shell command would. Every supported command in just-bash has a similar implementation, creating a robust and feature-rich shell experience built on a modern, safe programming language.
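To make this concrete, here is a simplified, hypothetical sketch of what such a command module could look like. The names and shapes below are illustrative only, not the library's actual source:

echo-command.ts
// Illustrative sketch only; not the actual just-bash source.
interface CommandResult {
  stdout: string;
  stderr: string;
  exitCode: number;
}

export const echoCommand = {
  name: "echo",
  execute(args: string[]): CommandResult {
    // Handle the -n flag, which suppresses the trailing newline
    const suppressNewline = args[0] === "-n";
    const words = suppressNewline ? args.slice(1) : args;
    return {
      stdout: words.join(" ") + (suppressNewline ? "" : "\n"),
      stderr: "",
      exitCode: 0,
    };
  },
};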

Executing your first just-bash commands

The quickest way to appreciate just-bash is to run a few commands yourself and watch how the basic API and the persistent environment behave.

Setting up the project

First, you'll need a Node.js project. Create a new directory for your project and initialize it:

mkdir ai-bash-demo
cd ai-bash-demo
npm init -y
npm install just-bash

If you're using TypeScript, also install the compiler and Node type definitions as dev dependencies:

npm install -D typescript @types/node

Writing and executing basic commands

Here's a demonstration script that shows just-bash in action. Create a file named demo.ts:

A code editor showing the basic `just-bash` demonstration script, with the corresponding terminal output below

demo.ts
import { Bash } from "just-bash";

async function main() {
  console.log("Initializing a new simulated Bash environment...");

  // Create a new Bash instance
  const env = new Bash();

  console.log("Executing first command: echo 'Hello' > greeting.txt");

  // Execute the first command to create a file
  await env.exec("echo 'Hello' > greeting.txt");

  console.log("Executing second command: cat greeting.txt");

  // Execute a second command in the SAME environment
  const result = await env.exec("cat greeting.txt");

  console.log("\n--- Results ---");
  // Analyze the result object
  console.log("Standard Output (stdout):", result.stdout);
  console.log("Standard Error (stderr):", result.stderr);
  console.log("Exit Code:", result.exitCode);
  console.log("Current Environment Variables:", result.env);
  console.log("----------------\n");
}

main();

Understanding the execution flow

The const env = new Bash() line creates a new instance of the Bash class. This env object represents the entire simulated, sandboxed environment. It holds the state of the virtual filesystem, environment variables, and current working directory. Each Bash instance is completely isolated from any others.

The await env.exec("echo 'Hello' > greeting.txt") call uses the asynchronous exec method on the environment instance. This command tells the shell to take the string "Hello" and redirect it into a new file named greeting.txt. Because this is an in-memory filesystem, this file now exists only within the env object's state, not on your computer's disk.

The crucial part demonstrating the persistent environment is const result = await env.exec("cat greeting.txt"). This calls exec again on the same env instance. The cat command reads the content of a file and prints it to standard output. Since the greeting.txt file was created in the previous step, the environment remembers it, and cat can access it successfully.

Analyzing the result object

The exec method returns a promise that resolves to a result object. This object contains everything you'd expect from a shell command's execution. The result.stdout is a string containing the standard output of the command (in this case, "Hello\n", which is the content of greeting.txt). The result.stderr is a string for any error messages (empty for this successful command). The result.exitCode is a number representing the exit code (0 typically means success, while any non-zero value indicates an error). The result.env is an object containing the current state of the environment variables within the simulated shell (e.g., HOME, PATH).
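If a command fails, the same fields let you detect and handle it. Continuing with the same env, here is a small sketch, assuming just-bash mirrors the real cat's non-zero exit code for a missing file:

const missing = await env.exec("cat no-such-file.txt");
if (missing.exitCode !== 0) {
  // stderr should describe the failure; the exact wording may differ
  console.error(`Command failed (exit ${missing.exitCode}):`, missing.stderr);
}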

When you run this script (with bun run demo.ts or ts-node demo.ts), the output will clearly show the "Hello" string being printed from the stdout of the cat command, proving that the file was created and then read within the isolated, in-memory environment.
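Assuming the API behaves as described, the output should look roughly like the following (the environment-variable object's exact contents will vary):

Initializing a new simulated Bash environment...
Executing first command: echo 'Hello' > greeting.txt
Executing second command: cat greeting.txt

--- Results ---
Standard Output (stdout): Hello

Standard Error (stderr):
Exit Code: 0
Current Environment Variables: { ... }
----------------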

Empowering AI agents with large datasets

The true power of just-bash shines when you integrate it into AI agents, especially when dealing with tasks that involve large amounts of data. This is where you can see dramatic improvements in efficiency and cost.

The problem with LLMs and large context windows

Modern Large Language Models (LLMs) like GPT-4 have increasingly large context windows, which can seem like a simple solution for data analysis. The naive approach is to simply "stuff" all the data into the prompt, along with the user's question.

Consider a scenario where a user wants to ask questions about a very large JSON file (large-records.json) containing thousands of records. The inefficient approach reads the entire file, converts it to a string, and includes it directly in the system prompt sent to the LLM on every single request:

inefficient-approach.ts
// The inefficient approach: the whole dataset goes into every prompt
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI();

const largeJsonData = fs.readFileSync("large-records.json", "utf-8");

const response = await openai.chat.completions.create({
  model: "gpt-5.2",
  messages: [
    {
      role: "system",
      content: `You are a helpful assistant. You answer questions about the following JSON data: ${largeJsonData}`,
    },
    {
      role: "user",
      content:
        'For recordId "record-1737", what are the ownerEmail and retrievalKey?',
    },
  ],
});

While this can work for simple retrieval tasks, it has massive drawbacks. This single query can consume over 133,000 input tokens. LLM API calls are priced per token, so processing this much data on every query is financially unsustainable for any real application. Sending and processing such a large payload takes significant time, leading to a slow and frustrating user experience. This approach is limited by the model's maximum context window, so if the data file grows larger than the limit, the entire method fails. When context windows are filled with vast amounts of raw data, LLMs can struggle to find the specific piece of information they need, decreasing accuracy and increasing the likelihood of hallucination.

A screenshot of the chatbot UI showing the "Normal Context Chat" with a session token usage of 133,952 for a single query.

The efficient approach using bash-tool

Instead of giving the AI the data, you can give it the tools to find the data itself. The AI SDK ecosystem provides a helper package called bash-tool, a convenient wrapper around just-bash designed specifically for this purpose.

Integration process

You'll need the Vercel AI SDK and the bash-tool package:

npm install ai bash-tool

In your API route, you first initialize the bash-tool. The key step is to load your large dataset into the tool's virtual filesystem:

api-handler.ts
import { createBashTool } from "bash-tool";

// In your API handler...

// Get the raw text of your large dataset
const datasetText = await getLargeDatasetText(); // A function to read your file

const bashTool = await createBashTool({
  // The 'files' option populates the in-memory filesystem
  files: {
    // Key is the filename, value is the file content
    "large-records.json": datasetText,
  },
  // The destination is the working directory for the shell
  destination: "/workspace",
});

This code creates a sandboxed Bash environment and pre-loads it with the large-records.json file inside a /workspace directory. The AI agent can now operate on this file.

Using the Vercel AI SDK's streamText function, you can pass the tool to the model. You also need to provide instructions so the AI knows how to use it:

api-handler-stream.ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { getBashToolAgentsInstructions } from "bash-tool";

// ... inside the API handler, after creating bashTool

const bashToolAgentsInstructions = await getBashToolAgentsInstructions();

const result = await streamText({
  model: openai("gpt-5.2"),
  system: `You are a helpful assistant that can answer questions.
             ${bashToolAgentsInstructions.join("\n")}`,
  messages: userMessages,
  tools: {
    // We name the tool 'bash' and provide the tool instance
    bash: bashTool.bash,
  },
});

When a user asks a question, the AI won't try to answer it from its memory. Instead, it will reason that it has a bash tool available. It will intelligently construct a shell command using powerful utilities like jq (for JSON parsing) or grep (for searching) to extract only the necessary information from the large-records.json file. The tool executes this command in the just-bash simulated environment and returns the small, relevant output back to the AI, which then uses that output to formulate the final answer.
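For the record-1737 question from earlier, for example, the generated command might look something like this (hypothetical; the exact jq filter depends on how the JSON is actually structured):

jq '.[] | select(.recordId == "record-1737") | {ownerEmail, retrievalKey}' large-records.json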

The efficiency gains

A screenshot of the "Bash Retrieval Chat" UI. It shows the AI using the `jq` command and a session token usage of only 6,447 tokens—a >95% reduction.

The results are staggering. The exact same query using the bash-tool method can consume only around 6,270 input tokens. This is a reduction of over 95% compared to the full-context approach.

This method is superior in every way. The token reduction translates directly into massive cost savings, making the application economically viable. The agent is no longer just retrieving data; it's processing it. It can answer much more complex questions, such as "How many records between record-1000 and record-2500 have metadata.active = true?" Answering this with the full-context method would be unreliable and prone to error, but with bash-tool the agent can construct a jq command to filter, count, and return the precise, correct answer every time. The size of the initial prompt is now constant, regardless of the size of the data file, so the agent's performance and cost are no longer tied to the dataset's size.
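As a sketch, assuming large-records.json is a JSON array whose records carry string ids like "record-1234" and a metadata.active boolean (the real structure may differ), a single hypothetical command could compute that answer:

jq '[.[] | select((.recordId | ltrimstr("record-") | tonumber) >= 1000
    and (.recordId | ltrimstr("record-") | tonumber) <= 2500
    and .metadata.active == true)] | length' large-records.json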

Supported commands and shell features

just-bash is not limited to just echo and cat. It comes with a remarkably extensive list of supported commands that make it a truly versatile tool for your AI agents.

A list of supported commands from the `just-bash` documentation, categorized into File Operations, Text Processing, Data Processing, etc.

Some of the key supported commands include file operations like ls, cp, mv, rm, mkdir, touch, and tree. For text processing, you have grep, sed, awk, sort, uniq, wc, head, and tail. Data processing commands include jq (for JSON), python3 (via Pyodide, an opt-in feature), sqlite3, and yq (for YAML/XML). This allows agents to perform sophisticated data analysis and transformation.

Networking commands like curl and html-to-markdown are available. The curl command can be configured with a URL whitelist for security, allowing the agent to safely fetch data from approved external APIs. It also supports core shell functionality like pipes (cmd1 | cmd2), command chaining (&&, ||), variables, loops, and functions, allowing the AI to construct complex, multi-step command sequences.
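As a quick sketch using the exec API from earlier (and assuming these utilities behave like their real counterparts), pipes, chaining, and loops compose naturally:

import { Bash } from "just-bash";

const env = new Bash();

// Chain commands with && and build a file via redirection and appending
await env.exec("echo b > letters.txt && echo a >> letters.txt && echo b >> letters.txt");

// Pipe: sort the lines, then count duplicates
const piped = await env.exec("sort letters.txt | uniq -c");
console.log(piped.stdout);

// A loop with a shell variable
const looped = await env.exec('for i in 1 2 3; do echo "step $i"; done');
console.log(looped.stdout);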

This rich feature set transforms your AI agent from a simple conversationalist into a powerful data analyst and automation engine, all within a secure, zero-infrastructure environment.

Final thoughts

The just-bash library represents a significant leap forward in the development of capable and efficient AI agents. By providing a secure, portable, and feature-rich simulated Bash environment entirely in TypeScript, it solves many of the fundamental challenges associated with giving AI the ability to interact with data and execute commands.

This innovative approach offers a powerful alternative to the costly and limited method of stuffing large amounts of data into an LLM's context window. By equipping agents with tools instead of just raw data, you can achieve a staggering 95% reduction in token usage, which directly translates to lower operational costs and faster response times. More importantly, it unlocks a new level of capability, enabling the agent to perform complex data analysis and manipulation with a high degree of accuracy and reliability.

The beauty of just-bash lies in its simplicity and power. It requires no additional infrastructure, runs anywhere JavaScript does, and is secure by design. Whether you are building a chatbot to query a database, an agent to automate file processing, or any application that can benefit from programmatic shell access, just-bash provides an elegant, efficient, and cost-effective solution. It is a testament to how clever software design can empower developers to build the next generation of smarter, more powerful AI applications.


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.