
RUN AI-GENERATED CODE SECURELY IN YOUR APP

E2B is an open-source runtime for executing AI-generated code in secure cloud sandboxes. Made for agentic & AI use cases.
TRUSTED BY
[Illustration: an E2B Sandbox running code (CPU: 8 cores, RAM: 4 GB)]
[Illustration: example sandbox outputs: bar charts, generated files (CSV, TXT, JS), a sign-in UI, a stock chart, and a runtime error]
20K+
DEVELOPERS
250K+
MONTHLY DOWNLOADS
10M+
SANDBOXES STARTED
AI

BUILT FOR AI USE CASES

From running short AI-generated code snippets to fully autonomous AI agents.
AI Data Analysis

Run AI-generated code to analyze your data.
LEARN MORE
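A minimal sketch of this use case with the Python SDK, assuming a hypothetical local sales.csv file and the sandbox filesystem API:

# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Upload a dataset into the sandbox filesystem (sales.csv is a hypothetical local file)
    with open("sales.csv", "rb") as f:
        sandbox.files.write("/home/user/sales.csv", f)

    # Run LLM-generated analysis code against the uploaded file
    execution = sandbox.run_code(
        "import pandas as pd\n"
        "df = pd.read_csv('/home/user/sales.csv')\n"
        "df.describe()"
    )
    print(execution.text)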

AI Data Visualization

Run AI-generated code to render charts, plots, and visual outputs based on your data.
LEARN MORE
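For example, a matplotlib chart produced inside the sandbox comes back on the execution results as display data; a minimal sketch, assuming each result exposes a base64-encoded png attribute:

# pip install e2b-code-interpreter
import base64
from e2b_code_interpreter import Sandbox

llm_generated_code = """
import matplotlib.pyplot as plt
plt.bar(["A", "B", "C"], [5, 8, 3])
plt.title("Example chart")
plt.show()
"""

with Sandbox() as sandbox:
    execution = sandbox.run_code(llm_generated_code)
    # Save any chart results returned by the sandbox as PNG files
    for i, result in enumerate(execution.results):
        if result.png:
            with open(f"chart-{i}.png", "wb") as f:
                f.write(base64.b64decode(result.png))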

Coding Agents

Use sandboxes to execute code, handle I/O, access the internet, or run terminal commands.
LEARN MORE
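A minimal sketch of what an agent can do inside a sandbox, assuming the SDK's commands API for terminal access:

# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Run a terminal command inside the sandbox
    result = sandbox.commands.run("ls -la /home/user")
    print(result.stdout)

    # The sandbox has internet access, so agent code can fetch resources
    execution = sandbox.run_code(
        "import urllib.request\n"
        "print(urllib.request.urlopen('https://example.com').status)"
    )
    print(execution.logs.stdout)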

Generative UI

Use sandboxes as a code runtime for AI-generated apps. Supports any language and framework.
LEARN MORE
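A minimal sketch of serving a generated app from a sandbox; the background flag and the get_host helper are assumptions about the SDK's commands and networking API:

# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Start a web server for the generated app inside the sandbox (assumed background flag)
    sandbox.commands.run("python -m http.server 3000", background=True)

    # Map the sandbox's port 3000 to a public hostname you can open or embed (assumed helper)
    host = sandbox.get_host(3000)
    print(f"https://{host}")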

Codegen Evals

Use sandboxes as a codegen gym for popular evals like SWE-bench or for your internal evals.
LEARN MORE
NEW

Computer Use

Use the Desktop Sandbox to provide secure virtual computers in the cloud for your LLM.
LEARN MORE
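A heavily hedged sketch of the Desktop Sandbox; the package name and the screenshot/left_click helpers are assumptions about the e2b-desktop SDK and may differ:

# pip install e2b-desktop  (assumed package name)
from e2b_desktop import Sandbox

with Sandbox() as desktop:
    # Assumed helper: capture the screen so the LLM can decide what to do next
    image = desktop.screenshot()
    with open("screen.png", "wb") as f:
        f.write(image)

    # Assumed helper: perform the click the model asked for
    desktop.left_click(100, 200)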
A FEW LINES

IN YOUR CODE WITH A FEW LINES

Need help? Join Discord, check the Docs, or email us.
// npm install @e2b/code-interpreter
import { Sandbox } from '@e2b/code-interpreter'

// Create an E2B Code Interpreter with a JavaScript kernel
const sandbox = await Sandbox.create()

// Execute JavaScript cells
await sandbox.runCode('x = 1')
const execution = await sandbox.runCode('x+=1; x')

// Outputs 2
console.log(execution.text)
“~/index.ts”
# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

# Create an E2B Sandbox
with Sandbox() as sandbox:
    # Run code
    sandbox.run_code("x = 1")
    execution = sandbox.run_code("x+=1; x")

    print(execution.text) # outputs 2
“~/index.py”
// npm install ai @ai-sdk/openai zod @e2b/code-interpreter
import { openai } from '@ai-sdk/openai'
import { generateText } from 'ai'
import z from 'zod'
import { Sandbox } from '@e2b/code-interpreter'

// Create OpenAI client
const model = openai('gpt-4o')

const prompt = "Calculate how many r's are in the word 'strawberry'"

// Generate text with OpenAI
const { text } = await generateText({
  model,
  prompt,
  tools: {
    // Define a tool that runs code in a sandbox
    codeInterpreter: {
      description: 'Execute python code in a Jupyter notebook cell and return result',
      parameters: z.object({
        code: z.string().describe('The python code to execute in a single cell'),
      }),
      execute: async ({ code }) => {
        // Create a sandbox, execute LLM-generated code, and return the result
        const sandbox = await Sandbox.create()
        const { text, results, logs, error } = await sandbox.runCode(code)
        return results
      },
    },
  },
  // This is required to feed the tool call result back to the LLM
  maxSteps: 2
})

console.log(text)
“~/aisdk_tools.ts”
# pip install openai e2b-code-interpreter
from openai import OpenAI
from e2b_code_interpreter import Sandbox

# Create OpenAI client
client = OpenAI()
system = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Send messages to OpenAI API
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": prompt}
    ]
)

# Extract the code from the response
code = response.choices[0].message.content

# Execute code in E2B Sandbox
if code:
    with Sandbox() as sandbox:
        execution = sandbox.run_code(code)
        result = execution.text

    print(result)
“~/oai.py”
# pip install anthropic e2b-code-interpreter
from anthropic import Anthropic
from e2b_code_interpreter import Sandbox

# Create Anthropic client
anthropic = Anthropic()
system_prompt = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Send messages to Anthropic API
response = anthropic.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {"role": "user", "content": prompt}
    ]
)

# Extract code from response
code = response.content[0].text

# Execute code in E2B Sandbox
with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    result = execution.logs.stdout

print(result)
“~/anth.py”
# pip install mistralai e2b-code-interpreter
import os
from mistralai import Mistral
from e2b_code_interpreter import Sandbox

api_key = os.environ["MISTRAL_API_KEY"]

# Create Mistral client
client = Mistral(api_key=api_key)
system_prompt = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Send the prompt to the model
response = client.chat.complete(
    model="codestral-latest",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": prompt}
    ]
)

# Extract the code from the response
code = response.choices[0].message.content

# Execute code in E2B Sandbox
with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    result = execution.text

print(result)
“~/mistral.py”
# pip install ollama e2b-code-interpreter
import ollama
from e2b_code_interpreter import Sandbox

# Send the prompt to the model
response = ollama.chat(model="llama3.2", messages=[
    {
        "role": "system",
        "content": "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
    },
    {
        "role": "user",
        "content": "Calculate how many r's are in the word 'strawberry'"
    }
])

# Extract the code from the response
code = response['message']['content']

# Execute code in E2B Sandbox
with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    result = execution.logs.stdout

print(result)
“~/llama.py”
# pip install langchain langchain-openai e2b-code-interpreter
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from e2b_code_interpreter import Sandbox

system_prompt = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Create LangChain components
llm = ChatOpenAI(model="gpt-4o")
prompt_template = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("human", "{input}")
])

output_parser = StrOutputParser()

# Create the chain
chain = prompt_template | llm | output_parser

# Run the chain
code = chain.invoke({"input": prompt})

# Execute code in E2B Sandbox
with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    result = execution.text

print(result)
“~/lchain.py”
# pip install llama-index e2b-code-interpreter
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent
from e2b_code_interpreter import Sandbox

# Define the tool
def execute_python(code: str):
    with Sandbox() as sandbox:
        execution = sandbox.run_code(code)
        return execution.text

e2b_interpreter_tool = FunctionTool.from_defaults(
    name="execute_python",
    description="Execute python code in a Jupyter notebook cell and return result",
    fn=execute_python
)

# Initialize LLM
llm = OpenAI(model="gpt-4o")

# Initialize ReAct agent
agent = ReActAgent.from_tools([e2b_interpreter_tool], llm=llm, verbose=True)
agent.chat("Calculate how many r's are in the word 'strawberry'")
“~/llindex.py”
FEATURES

FEATURES FOR THE LLM-POWERED DEVELOPER

We built E2B with the next generation of developers in mind — software engineering AI agents.

Works with any LLM

Use OpenAI, Llama, Anthropic, Mistral, or your own custom models. E2B is LLM-agnostic and compatible with any model.

Quick start

E2B Sandboxes in the same region as the client start in less than 200 ms.
NO COLD STARTS

Run any AI-generated code

AI-generated Python, JavaScript, Ruby, or C++? Popular framework or custom library? If you can run it on a Linux box, you can run it in the E2B sandbox.
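A minimal sketch, assuming Node.js is available in the default sandbox image and using the SDK's commands API:

# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Not just Python: anything that runs on Linux runs here
    node = sandbox.commands.run("node -e \"console.log('hello from Node')\"")
    print(node.stdout)

    bash = sandbox.commands.run("echo 'hello from bash'")
    print(bash.stdout)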

Features made for LLMs

E2B features are made to turn your LLM into a competent coder: control the code execution context, inspect errors, install packages, render interactive charts, and use filesystem I/O.
TAILOR-MADE FOR AI
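A minimal sketch of two of these features, filesystem I/O and error inspection, with the Python SDK:

# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Filesystem I/O: write a file the generated code can read back
    sandbox.files.write("/home/user/data.txt", "hello")
    print(sandbox.files.read("/home/user/data.txt"))

    # Inspect errors from failing AI-generated code instead of crashing your app
    execution = sandbox.run_code("1 / 0")
    if execution.error:
        print(execution.error.name)       # ZeroDivisionError
        print(execution.error.traceback)  # traceback you can feed back to the LLM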

Secure & battle-tested

Sandboxes are powered by Firecracker microVMs, a virtualization technology built for running untrusted code.
BATTLE-TESTED
24H

Up to 24h-long sessions

Run for a few seconds or several hours; each E2B sandbox can run for up to 24 hours.
AVAILABLE IN PRO
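A minimal sketch of configuring a longer session; the timeout parameter (in seconds) and the set_timeout helper are assumptions about the SDK:

# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

# Assumed: the timeout is given in seconds when the sandbox is created
with Sandbox(timeout=60 * 60) as sandbox:  # keep the sandbox alive for one hour
    sandbox.run_code("print('long-running session')")

    # Assumed: extend the lifetime later, up to the plan's 24-hour limit
    sandbox.set_timeout(2 * 60 * 60)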

Install any package or system library

Completely customize the sandbox for your use case by creating a custom sandbox template or by installing packages while the sandbox is running.
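A minimal sketch of installing packages at runtime, either from the Jupyter kernel or via a terminal command:

# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Install a Python package from the running kernel, Jupyter-style
    sandbox.run_code("!pip install tabulate")

    # Or install it through a terminal command inside the sandbox
    sandbox.commands.run("pip install cowsay")

    execution = sandbox.run_code("import tabulate; print(tabulate.__version__)")
    print(execution.logs.stdout)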

Self-hosting

Deploy E2B in your AWS or GCP account and run sandboxes in your VPC.
SOON
COOKBOOK

GET INSPIRED BY OUR COOKBOOK

Production use cases & full-fledged apps.
Filter by language/framework or LLM provider:
Python
Mistral
Fireworks AI
Autogen
LangGraph
LangChain
Magentic
OpenAI
Firecrawl
Together AI
Anthropic
Next.js
COMPANIES

USED BY TOP COMPANIES

From running short AI-generated code snippets to fully autonomous AI agents.
“E2B has a great product that unlocked a new set of rich answers for our users. We love working with both the product and team behind it.”
— Denis Yarats, CTO
Data Analysis
“It took just one hour to integrate E2B end-to-end. The performance is excellent, and the support is on another level. Issues are resolved in minutes.”
— Maciej Donajski, CTO
Finance
Data Processing
“E2B has revolutionized our agents' capabilities. This advanced alternative to OpenAI's Code Interpreter helps us focus on our unique product.”
— Kevin J. Scott, CTO/CIO
AI CHATBOT
Enterprise
Contact us for a custom enterprise solution with special pricing.
Contact Us
GET A REPLY IN 24H
“It just works. The product is great, and the support that the E2B team provides is next level.”
— Max Brodeur-Urbas, CEO
Workflow Automation
“E2B helps us gain enterprises’ trust. Executing the code from Athena inside the sandbox makes it easy to check and automatically fix any errors.”
— Brendon Geils, CEO
Data Analysis
Today

GET STARTED TODAY

E2B is an open-source runtime for executing AI-generated code in secure cloud sandboxes. Made for agentic & AI use cases.
RUN CODE
GitHub

See our complete codebase, Cookbook examples, and more — all in one place.
STAR (7.1K+) ↗

Join our Discord

Become part of the AI developer community and get support from the E2B team.
Join Today ↗
Docs

See the walkthrough of how E2B works, including hello world examples.
Browse