Created: September 16th 2025
Last updated: January 24th 2026
Categories: Artificial intelligence (AI)
Author: Marcus Fleuti

[solved] Complete Guide: Install OpenAI Codex CLI and Claude Code in Docker on Ubuntu/Linux Mint with Security Configuration

Install Claude Code and OpenAI Codex CLI in a Minimal Ubuntu Docker Container

Combining Claude Code and the OpenAI Codex CLI inside a lightweight Ubuntu Docker container lets you run powerful AI coding agents in a clean, reproducible, and isolated workspace. This guide walks through every step of building that environment on Ubuntu or Linux Mint, from installing Docker to securing first-run authentication flows. Follow along to replicate the exact setup I use to keep my AI assistants sandboxed while still accessible from the host terminal.

Why Isolate AI Coding Agents in Docker?

Running Claude Code and the Codex CLI inside Docker keeps system packages, credentials, and file permissions tidy. The minimal ubuntu:latest base image carries only the dependencies you choose to layer on top, while a persistent volume mount preserves tokens and project files. This approach makes updates painless, protects host configuration files, and gives you an easy reset button if you ever want to rebuild the environment from scratch.

Host Prerequisites on Ubuntu or Linux Mint

  • Docker Engine 24.x or newer
  • A non-root user with sudo rights
  • At least 4 GB free disk space for the container image and npm packages

Install Docker from the standard repositories if it is not already available:

sudo apt update
sudo apt install docker.io -y
sudo systemctl enable --now docker

If you prefer running docker without sudo, add your user to the docker group, then log out and back in:

sudo usermod -aG docker $USER

For a different existing account, substitute the username explicitly:

sudo usermod -aG docker <username>

Creating a brand-new user who can run Docker commands without elevation can be done in one step by assigning the group at creation time:

sudo useradd -m -G docker <username>

Set a password for the new user with sudo passwd <username>, then sign in to initialize the home directory and pick up the group membership.
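
To confirm the new account picked up the group membership, switch to it and run a quick check; docker ps should work without sudo once the group is active:

su - <username>
id          # "docker" should appear in the groups list
docker ps   # lists containers without elevation if the group membership took effect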

Prepare Shared Directories for AI Agents

Create the host directory that will hold project files and serve as the shared workspace. Mounting it into the container means all AI agent data, tokens, settings, and project files survive container rebuilds.

mkdir -p ~/ai-agent

The ~/ai-agent folder becomes the shared workspace. Configuration directories .claude and .codex will be created inside the container and placed in /ai-agent during the user setup process later in this guide.

Launch the Minimal Ubuntu AI Container with Callback Port

Start a long-lived container named ai-agent that mounts the shared workspace directory and keeps the Codex CLI authentication callback on localhost:1455 reachable. During login Codex opens a temporary HTTP listener on that port so your browser can hand the authorization token back to the CLI. Because the command below uses --network host, the container shares the host's network stack, so the callback is already available on the host loopback interface without publishing any ports. If you prefer bridged networking instead, drop --network host and publish the port with -p 127.0.0.1:1455:1455 (adjust it if OpenAI ever changes the callback port). The container runs a sleep process so it stays online and ready for interactive shells.

docker run -d \
  --restart=unless-stopped \
  --name ai-agent \
  --hostname ai-agent \
  --network host \
  -v "$HOME/ai-agent:/ai-agent" \
  ubuntu:latest \
  sleep infinity

This single run command is all you need. Avoid recreating the container later, because you would have to reinstall the CLIs; instead, keep the container updated from inside, as shown in a later section.

Note on volume mounts: Unlike earlier setups that mounted ~/.claude and ~/.codex separately, this guide places both configuration directories inside /ai-agent after user setup. This keeps all AI agent data consolidated in one location and simplifies permission management.

Only the Codex CLI needs this callback port. Claude Code simply prompts for your API token in the terminal and stores it inside the shared .claude directory, so no extra networking tweaks are required for Anthropic's tool.

Alternative option: if you would rather keep every port closed, download the Codex binary from GitHub, authenticate once on the host, and let the token sync into the shared .codex directory inside /ai-agent. Future container sessions will already be authorized.
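
If you go that route, the flow looks roughly like this. This is only a sketch: the release asset name below is an assumption, so check the openai/codex GitHub releases page for the exact file and binary name for your architecture:

# On the host: download a Codex release binary and authenticate once
curl -fsSL -o codex.tar.gz \
  https://github.com/openai/codex/releases/latest/download/codex-x86_64-unknown-linux-musl.tar.gz   # asset name may differ
tar -xzf codex.tar.gz && sudo mv codex /usr/local/bin/   # adjust if the extracted binary has a longer name
codex login                     # completes the browser flow on the host
cp -r ~/.codex ~/ai-agent/      # sync the token into the shared workspace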

Enter the Container and Apply System Updates

Open a bash shell inside the running container to install updates and tooling:

docker exec -it ai-agent bash -l

If this fails with a permission error, log out and back in so your user properly picks up the docker group membership. Once inside, bring the base image current and add Node.js plus npm:

apt update && apt upgrade -y
apt install -y nodejs npm

Install Essential Developer Tools for AI Agents

Before installing the CLIs, equip the container with tools that Claude Code and OpenAI Codex commonly use during code generation, analysis, and troubleshooting. AI agents rely on build tools, version control, search utilities, and language runtimes to deliver comprehensive solutions. Installing this curated toolkit ensures both CLIs can compile code, process data, search files efficiently, and interact with databases without hitting missing dependency errors.

This complete package set covers:

  • Build essentials: gcc, make, and development headers for compiling native extensions
  • Version control: git for repository operations
  • Search and navigation: ripgrep, fd-find, tree for fast file discovery
  • Process monitoring: htop for system resource inspection
  • Session management: tmux and screen for persistent shells
  • Text editors: vim and nano for quick edits
  • Archive utilities: zip, tar, gzip, bzip2, xz for compression tasks
  • Python ecosystem: python3, pip, venv, and development libraries
  • Database clients: sqlite3, postgresql-client, redis-tools for data operations
  • Media processing: imagemagick and ffmpeg for image and video manipulation
  • Parsing and web tools: jq, curl, wget, libxml2, libxslt for API and data tasks
  • SSL and security: ca-certificates, openssh-client, gnupg for secure connections

Install the complete toolkit with a single command (still inside the container):

apt install -y \
  build-essential \
  git \
  curl \
  wget \
  jq \
  ripgrep \
  fd-find \
  tree \
  htop \
  tmux \
  screen \
  vim \
  nano \
  zip \
  unzip \
  tar \
  gzip \
  whois \
  bzip2 \
  xz-utils \
  python3 \
  python3-pip \
  python3-venv \
  python3-dev \
  python3-whois \
  sqlite3 \
  libsqlite3-dev \
  postgresql-client \
  libpq-dev \
  redis-tools \
  ca-certificates \
  openssh-client \
  software-properties-common \
  apt-transport-https \
  gnupg \
  lsb-release \
  sed \
  gawk \
  grep \
  iputils-ping \
  imagemagick \
  ffmpeg \
  libxml2-dev \
  libxslt1-dev \
  libyaml-dev \
  libffi-dev \
  libssl-dev \
  libreadline-dev \
  zlib1g-dev \
  iproute2 \
  sudo \
  libcurl4-openssl-dev \
  mailutils \
  sendmail

This installation may take a few minutes depending on your network speed. Once complete, the AI agents will have access to a professional development environment comparable to a full-stack engineering workstation.
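
A quick sanity check inside the container confirms a few of the key tools are on PATH:

git --version && rg --version && jq --version && python3 --version && node --version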

Align Host and Container User IDs to Avoid Permission Issues

Before creating aliases, sync the user ID between your host and the container. This prevents file permission conflicts when the AI agents write to the shared /ai-agent volume. Most Linux systems assign the first non-root user ID 1000, so the goal is to replicate that ID inside the container.

Step 1: Check Your Host User ID

Run this command on your host system to confirm your user ID (usually 1000):

id

Look for uid=1000(yourUserName) in the output. If your UID differs, replace 1000 with your actual UID in the steps below.

Step 2: Check if UID 1000 is Already Taken Inside the Container

docker exec -it ai-agent bash -lc "id 1000"

If the command returns uid=1000(ubuntu) or another username, that ID is occupied by a default user (typically ubuntu) that you don't need.

Step 3: Delete the Conflicting User Inside the Container

⚠️ WARNING: Only run this command inside the Docker container, not on your host system. Verify you are inside the container by checking the shell prompt or running hostname. Make sure the user you are deleting is truly unnecessary (like the default ubuntu user) before proceeding.

Delete the existing user to free UID 1000:

userdel -r ubuntu

Replace ubuntu with the actual username if it differs. The -r flag removes the user's home directory.

Step 4: Create a New User with UID 1000 Inside the Container

Create a new user matching your host username and UID, with /ai-agent as the home directory:

useradd -u 1000 -d /ai-agent -m yourUserName

Replace yourUserName with your actual host username (the name from the id command). The flags do the following:

  • -u 1000: Set the user ID to 1000 to match your host
  • -d /ai-agent: Set the home directory to the shared workspace
  • -m: Create the home directory if it doesn't exist
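
Still inside the container's root shell, confirm the mapping took effect; the UID should now match your host account:

id yourUserName    # expect uid=1000(yourUserName)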

Step 5: Copy Configuration Files and Set Permissions

If you previously ran either CLI as root inside the container, copy the .claude and .codex configuration directories from root's home to the new user's home (skip this step if those directories do not exist yet; the configuration sections later in this guide create them directly under /ai-agent):

cp -r /root/.claude /ai-agent/
cp -r /root/.codex /ai-agent/

Then fix ownership recursively so the new user owns everything in /ai-agent:

chown yourUserName: /ai-agent -R

Replace yourUserName with the username you created. The trailing colon also sets the group to match the user.

Both CLIs look for their configuration in ~/.claude and ~/.codex by default. Since you set the new user's home directory to /ai-agent with the -d flag during user creation, the tilde (~) will automatically resolve to /ai-agent when running commands as that user. This means the CLIs will find their configs at /ai-agent/.claude and /ai-agent/.codex without any additional environment variables.
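
You can confirm the home directory mapping before moving on (run as root inside the container):

su - yourUserName -c 'echo $HOME && ls -la ~'    # should print /ai-agent and list the shared workspace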

Now exit the container:

exit

Step 6: Reload Shell Configuration and Test Aliases

After adding the aliases from the "Create Host Aliases" section later in this guide to ~/.bashrc or ~/.zshrc, log out and log back in so the docker group membership you configured earlier takes effect, then reload your shell:

source ~/.bashrc

Install Claude Code and OpenAI Codex CLI

Install Claude Code with its official install script and the OpenAI Codex CLI as a global npm package. Requesting @latest keeps the Codex CLI aligned with the newest feature set.

docker exec -it -u yourUserName -w "${PWD/$HOME/}" ai-agent bash -lc "echo 'export PATH=\"\$HOME/.local/bin:\$PATH\"' >>~/.profile && curl -fsSL https://claude.ai/install.sh | bash"
docker exec -it -u yourUserName -w "${PWD/$HOME/}" ai-agent bash -lc "npm install -g @openai/codex@latest"

Replace yourUserName with your actual username in both commands. The -u yourUserName flag tells Docker to run them as that user instead of root, so all file operations respect your host user's ownership.
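
Once both installs finish, a quick version check from the host confirms the binaries are on the container user's PATH:

docker exec -it -u yourUserName ai-agent bash -lc "claude --version && codex --version"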

Create the Core Claude Code Configuration in Docker

Create or open the file settings.json with nano and paste the JSON block below:

docker exec -it -u yourUserName -w "${PWD/$HOME/}" ai-agent bash -lc "mkdir -p ~/.claude && nano ~/.claude/settings.json"


{
  "permissions": {
    "allow": [
      "Read",
      "Edit",
      "MultiEdit",
      "Write",
      "Task",
      "Bash",
      "Grep",
      "Find",
      "Glob",
      "SlashCommand",
      "TodoWrite",
      "WebFetch",
      "WebSearch",
      "NotebookEdit",
      "BashOutput",
      "KillShell"
    ],
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./secrets/**)",
      "Read(./config/credentials.json)",
      "Read(./node_modules/**)",
      "Read(./.git/**)",
      "Read(./build)",
      "Read(./dist)",
      "Read(~/.ssh/**)",
      "Read(~/.aws/**)",
      "Read(~/.gcp/**)",
      "Read(~/.azure/**)",
      "Read(~/.docker/config.json)",
      "Read(~/.kube/config)",
      "Read(~/.netrc)",
      "Read(~/.npmrc)",
      "Read(~/.pypirc)",
      "Read(~/.gitconfig)",
      "Read(~/.zsh_history)",
      "Read(~/.bash_history)",
      "Read(~/.history)",
      "Read(/etc/**)",
      "Read(./logs/**)",
      "Read(./.log)",
      "Read(./*.log)",
      "Read(./*.key)",
      "Read(./*.pem)",
      "Read(./*.p12)",
      "Read(./*.pfx)",
      "Read(./*.crt)",
      "Read(./*.csr)",
      "Read(./database.db)",
      "Read(./*.db)",
      "Read(./*.sqlite)",
      "Read(./*.sqlite3)",
      "Read(./backup/**)",
      "Read(./*.bak)",
      "Read(./*.backup)",
      "Read(./.cache/**)",
      "Read(./cache/**)",
      "Read(./.tmp/**)",
      "Read(./tmp/**)",
      "Read(./.vagrant/**)",
      "Read(./Vagrantfile)",
      "Read(./docker-compose.override.yml)",
      "Read(./.terraform/**)",
      "Read(./terraform.tfstate)",
      "Read(./terraform.tfstate.backup)",
      "Read(./*.tfvars)"
    ]
  },
  "defaultMode": "bypassPermissions",
  "autoUpdates": true,
  "shiftEnterKeyBindingInstalled": true,
  "hasCompletedOnboarding": true,
  "ansiColors": true,
  "spinnerTipsEnabled": true,
  "bashSettings": {
    "defaultTimeoutMs": 600000,
    "maxTimeoutMs": 600000,
    "maxOutputLength": 200000,
    "maintainProjectWorkingDir": true
  },
  "outputSettings": {
    "maxOutputTokens": 64000,
    "disableNonEssentialModelCalls": false,
    "disableCostWarnings": true
  },
  "privacy": {
    "disableErrorReporting": false,
    "disableTelemetry": true,
    "disableBugCommand": false,
    "disableAutoupdater": false
  },
  "env": {
    "MAX_MCP_OUTPUT_TOKENS": "50000",
    "CLAUDE_CODE_MAX_OUTPUT_TOKENS": "64000",
    "MAX_THINKING_TOKENS": "32000"
  },
  "model": "opus"
}

Save with Ctrl+O, press Enter, and exit with Ctrl+X. The example above enables the Opus model, allows long-running shells, and explicitly denies access to sensitive files and secrets. Because the container user's home directory is /ai-agent, this file already lives at /ai-agent/.claude/settings.json and survives container rebuilds.
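
Since jq is part of the toolkit installed earlier, you can verify the JSON parses cleanly before moving on:

docker exec -it -u yourUserName ai-agent bash -lc "jq . ~/.claude/settings.json >/dev/null && echo 'settings.json is valid JSON'"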

Create the Core Codex CLI Configuration

The OpenAI Codex CLI reads its preferences from ~/.codex/config.toml, which resolves to /ai-agent/.codex/config.toml for the container user. From the host, prepare the file by creating the directory, launching nano, and pasting the TOML configuration:

docker exec -it -u yourUserName -w "${PWD/$HOME/}" ai-agent bash -lc "mkdir -p ~/.codex && nano ~/.codex/config.toml"

Standard Configuration (with sandbox function enabled)

This baseline keeps Codex in workspace-write mode, enables network access inside the container, and surfaces detailed reasoning output, ideal for day-to-day work under ~/ai-agent. Note that in TOML every key below a [table] header belongs to that table, so the global settings come first and the [projects] entry goes last:

model = "gpt-5-codex"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
model_verbosity = "high"
model_supports_reasoning_summaries = true

approval_policy = "on-failure"
sandbox_mode = "workspace-write"
sandbox_workspace_write.network_access = true
sandbox_workspace_write.exclude_tmpdir_env_var = false
sandbox_workspace_write.exclude_slash_tmp = false
disable_response_storage = false

hide_agent_reasoning = false
show_raw_agent_reasoning = true

network_access = true
web_search = true

[projects."/ai-agent"]
trust_level = "trusted"

Save with Ctrl+O, press Enter, and exit with Ctrl+X. Start with this Docker-friendly profile: it keeps sandboxing enabled so the agent can only touch the volumes you mount. Because the container user's home directory is /ai-agent, this file already lives at /ai-agent/.codex/config.toml and survives container rebuilds.
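
The container's python3 ships tomllib (Python 3.11 or newer, which current Ubuntu images include), so you can verify the TOML parses before Codex reads it; this check assumes that Python version:

docker exec -it -u yourUserName ai-agent bash -lc 'python3 -c "import tomllib; tomllib.load(open(\"/ai-agent/.codex/config.toml\", \"rb\")); print(\"config.toml parses OK\")"'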

YOLO Troubleshooting Configuration (Disables Sandbox)

Codex also offers the --dangerously-bypass-approvals-and-sandbox flag (alias: --yolo) for moments when you must run without the sandbox. Pair that flag with the profile below so your configuration stays in sync while you temporarily grant full access.

approval_policy = "never"

model = "gpt-5-codex"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
model_verbosity = "high"
model_supports_reasoning_summaries = true

sandbox_mode = "danger-full-access"
disable_response_storage = false

hide_agent_reasoning = false
show_raw_agent_reasoning = true

network_access = true
web_search = true

This configuration removes filesystem guardrails and approval prompts, so only use it inside disposable lab containers and revert to the Docker-safe profile once debugging is complete.
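
For example, a one-off unsandboxed run from a shell inside the container looks like this (the instruction text is just an illustration); switch back to the standard profile as soon as the debugging is done:

codex --dangerously-bypass-approvals-and-sandbox "diagnose why the build cannot write outside /ai-agent"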

Update the model field as OpenAI releases new Codex tiers, but keep your preferred sandbox profile aligned with how much host access you want the agent to wield.

Create Host Aliases for Claude Code and Codex CLI

With user IDs aligned, configure aliases that run the CLIs as your non-root user inside the container. This ensures files created by Claude and Codex are owned by your host user, not root.

If you have not already added your user to the docker group, do that first (see the host prerequisites section). Then add the alias lines below to your shell profile, log out and back in, and reload the shell with source ~/.bashrc after saving the updates:

# Update & Start Codex
alias codex='docker exec -it -u yourUserName -w "${PWD/$HOME/}" ai-agent bash -lc "npm install -g @openai/codex@latest && codex $*"'

# Update & Start Claude in bypassPermissions mode
alias claude='docker exec -it -u yourUserName -w "${PWD/$HOME/}" ai-agent bash -lc "claude install && claude --permission-mode bypassPermissions $*"'

# enter docker container as root
alias ai-agent='docker exec -it -w /ai-agent/ ai-agent bash -l'

# Check for system updates and upgrade all docker container software
alias ai-agent-update='docker exec -it -w /ai-agent/ ai-agent bash -lc "apt update && apt upgrade -y"'

Replace yourUserName with your actual username wherever it appears (the codex and claude aliases). The -u yourUserName flag tells Docker to run commands as that user instead of root, so all file operations respect your host user's ownership.

Executing either the codex or claude alias first tries to update the CLI and then starts it.

The "${PWD/$HOME/}" parameter expansion strips your host home directory prefix from the current path before Docker ever sees it, and -w sets the result as the working directory inside the container. That means a ${PWD} of ~/ai-agent becomes /ai-agent inside the container, and ~/ai-agent/api resolves to /ai-agent/api. Stay inside ~/ai-agent (and any nested project folders) when you call the aliases so both CLIs stay aligned with the files you intend them to work on.
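
One caveat: a plain alias cannot forward arguments into the quoted bash -lc string, so codex "fix the failing test" would not hand the instruction to the CLI as written. If you want arguments to pass through, define shell functions in ~/.bashrc instead of the codex and claude aliases. This is only a sketch under the same assumptions (same container name, same username); printf '%q ' quotes the arguments so spaces survive the trip into the container:

# Functions instead of the codex/claude aliases so arguments reach the CLIs
codex() {
  local args=""
  [ $# -gt 0 ] && args=$(printf '%q ' "$@")
  docker exec -it -u yourUserName -w "${PWD/$HOME/}" ai-agent \
    bash -lc "npm install -g @openai/codex@latest && codex $args"
}

claude() {
  local args=""
  [ $# -gt 0 ] && args=$(printf '%q ' "$@")
  docker exec -it -u yourUserName -w "${PWD/$HOME/}" ai-agent \
    bash -lc "claude install && claude --permission-mode bypassPermissions $args"
}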


Daily Workflow Tips with Claude and Codex

Use the ai-agent alias for quick debugging sessions, or run codex "instruction" and claude "instruction" directly from your project folder. Both CLIs respect the working directory you were in when you invoked the alias, so they can edit and read files without extra path juggling. Keep VS Code or another editor pointed at ~/ai-agent to watch changes happen in real time.
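
A typical session looks like this (the project folder name is just an illustration):

cd ~/ai-agent/my-api
claude "summarize the open TODOs in this project"
codex "add input validation to the signup handler"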

Security Tips for Claude Code and OpenAI Codex CLI

  • Review the permissions deny list whenever you add secrets or infrastructure files to your repo.
  • Keep confidential material outside ~/ai-agent; the container only exposes that mount, so Claude and Codex stay sandboxed even with relaxed permissions.
  • Use docker exec sparingly for manual edits; prefer alias-driven commands to maintain the correct working directory.

Conclusion: Master AI Coding with Docker Isolation

With this setup, Claude Code and the OpenAI Codex CLI run inside a minimal Ubuntu Docker environment that is portable, secure, and aligned with modern DevOps practices. You now have a reproducible blueprint for orchestrating multiple AI assistants without cluttering the host OS.

Keep refining the stack by layering in container security hardening, proactive AI coding workflow automation, and disciplined secret rotation so your pipelines stay audit-friendly.

Ready for your next experiment? Try wiring the same container into CI pipelines or pair it with VS Code Remote Containers to keep all your Claude Code and OpenAI Codex CLI workflows consistent across machines. From there, explore AI agent observability, containerized developer tooling, and further DevOps automation to extend the setup.