OpenCode is Remarkable

I started using OpenCode at work recently. It is a very usable tool, and its capabilities are remarkable. Its biggest strength is something that can only be understood by those who have experienced it once: it has the best user experience of any AI tool I have used. It is also easy to understand conceptually. It is a Terminal UI program. It runs completely locally1 and has access to all the files on your filesystem2. It can run commands using the shell, so it can run git, grep, rspec, go test, etc.

This sounds basic, but it is a huge improvement over existing AI tools, because one of the questions I keep asking myself when using tools like Claude or Gemini is “What is part of your context?” and “What do you have access to?” If I ask “What was the final conclusion of <ticket link>?” and the machine responds with “Sorry, I don’t have access to <ticket link>. Can you paste the contents as text here?”, that is a problem. Honestly, I cannot think of a machine-generated response more infuriating than this. This basic lack of clarity about context is why I had very little confidence that LLM-based tools had any real capability beyond chat bots preloaded with information from 2021. Until now. Using LLMs through a terminal UI is every computer worker’s best option for improving their productivity at work today.

Another reason I like OpenCode is that its values align with many of my own regarding what constitutes good software. OpenCode is:

  1. Open-source
  2. Capable of running completely locally, without relying on the Internet
  3. Being developed in the open
  4. Not backed by a company whose incentives are to increase token usage
  5. Inter-operable with many model providers

OpenCode stores everything related to its internal state in a local SQLite database. This makes it easy to start and resume sessions. (If I had to start OpenCode in a container, the only things that would have to be mounted would be the directory containing this database and OpenCode’s configuration, which is a JSON file.)
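
The parenthetical above can be sketched concretely. Both paths and the image name here are assumptions on my part (the state and config locations may differ on your system), so treat this as a sketch rather than a recipe:

```shell
#!/bin/sh
# Sketch: run OpenCode in a container while persisting only its state and
# configuration. 'opencode-image' and both paths are assumptions, not
# official documentation; verify them on your own installation.
STATE_DIR="$HOME/.local/share/opencode"  # assumed home of the SQLite database
CONFIG_DIR="$HOME/.config/opencode"      # assumed home of the JSON config

build_run_cmd() {
  # Print the docker invocation instead of running it, so it can be inspected.
  printf 'docker run -it --rm -v %s:/root/.local/share/opencode -v %s:/root/.config/opencode opencode-image\n' \
    "$STATE_DIR" "$CONFIG_DIR"
}

build_run_cmd
```

With only these two mounts, sessions started inside the container would survive container restarts.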

Adherence to these values has undoubtedly contributed to how good the end product is. OpenCode is supposed to be similar to Claude Code3 (I am not sure which came first). Now I understand why my friends were talking so much about Claude Code as early as August 2025.

The list of built-in tools in OpenCode is short: 15 tools such as bash, grep, and webfetch. However, the bash tool alone lets it do a lot: given the right credentials, it can use the CLI tool of your favorite website and read things from that website. This works well for GitHub and GitLab, both of which have APIs that allow bots to do almost anything a user can. This is the cleanest sandbox I can think of: if I start OpenCode with a read-only API token for GitLab, I know with certainty that no matter how berserk the LLM goes, I am protected from catastrophic write or delete operations. There is also the ability to set permissions that force the program to ask for the user’s approval before using a tool, or to disable a tool such as webfetch entirely, forcing OpenCode to be completely local.
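
As a sketch of what that permission setup might look like in OpenCode’s JSON configuration (I have not verified the exact schema, so the keys below are assumptions; check the official docs before relying on them):

```json
{
  "permission": {
    "bash": "ask",
    "edit": "ask",
    "webfetch": "deny"
  }
}
```

The idea is that “ask” forces an approval prompt before the tool runs, while “deny” disables the tool outright.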

Yet another good thing is that OpenCode is designed as a client-and-server program: when you run the CLI command opencode locally, it starts a server in the background and a client in the foreground. The client I work in is the Terminal UI, and I like it a lot. However, this background server can serve other clients too. Clients already exist for editors like VSCode. I am looking forward to an Emacs client, probably built by someone in the community. Emacs can render complex UIs, so a simple chat bot UI should work well. This would be a key contribution to making Emacs the most versatile kitchen sink ever created!


No tool is perfect. The corollary of that axiom is that even OpenCode leaves some things to be desired. The main thing missing for me is a proper sandbox: a great sandbox would reliably prevent the tool from doing what I definitely don’t want it to do:

  1. I don’t want it to use sudo on my machine
  2. When I am using it in Plan mode, I don’t want it to write anything at all
  3. When I am using it in Build mode, I don’t want it to write anything outside of the directory where I started it

These features are built into OpenCode, but they are not guarantees. For me, a guarantee works at a lower layer. For instance: if I mount a directory as a read-only volume into a Docker container, I know that no program running in that container can edit files in the directory. This increases my confidence and makes it easier to experiment. OpenCode’s Build mode asks for permission when attempting to run commands outside the directory where it was started, but it already has access to all files on the filesystem. So the request-approval sequence is more of a formality, and something an LLM could sidestep. I don’t know when the LLM might decide to sidestep it, but it is possible, and that is what makes me uncomfortable.
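
For example, Docker can provide that lower-layer guarantee today. The image name below is a placeholder, so this is only a sketch of the idea:

```shell
#!/bin/sh
# Sketch: a read-only guarantee enforced by the container runtime rather than
# by OpenCode itself. 'opencode-image' is a placeholder image name.
plan_cmd() {
  # The project directory is mounted read-only (:ro): no process inside the
  # container can modify it, no matter what the LLM decides to run.
  printf 'docker run -it --rm -v %s:/work:ro -w /work opencode-image\n' "$PWD"
}

plan_cmd
```

A Build-mode variant would drop the :ro flag on the project mount, keeping writes confined to that single directory.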

An inherent risk of using non-deterministic software (like LLMs) is that there is no telling what the model might decide to do: what if it attempts to use its bash tool to sudo rm -rf /? I don’t know whether or why it would do that, but it is possible, and I do not want to take that risk. So far, this has prevented me from using this amazing tool on my personal machine.4 There are some projects out there which put OpenCode in a sandbox: fluxbase-eu/opencode-docker and ocvm. I have not yet tested any of these. Once I find a solution that introduces a sandbox while maintaining the great UX, I would like to build the following two wrappers around OpenCode:

$ opencode_plan

$ opencode_build

These should work as “true” wrappers, so that I can start a session in Plan mode and later switch to Build mode using opencode_build -s <session-ID>.
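
A first sketch of those wrappers as shell functions. The --agent flag for selecting the mode is an assumption about the opencode CLI, so adjust it to whatever flag the tool actually uses:

```shell
#!/bin/sh
# Sketch of the two wrappers. The --agent flag is an assumption about the
# opencode CLI; the session-resumption flag -s is passed through unchanged.
opencode_plan() {
  # Start (or resume, via -s <session-ID>) a session in Plan mode.
  opencode --agent plan "$@"
}

opencode_build() {
  # The same session ID works here, so a plan can continue in Build mode.
  opencode --agent build "$@"
}
```

Because both wrappers pass their arguments straight through, opencode_build -s <session-ID> picks up a session that was started with opencode_plan.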

OpenCode has improved my productivity at work. It has made it possible to complete tasks that had been on the “back burner” for months. It has made it trivial to submit improvements that would have taken hours to complete manually. It has significantly improved my velocity in generating code and submitting concise change requests. I look forward to using this tool to review change requests as well, and will post an update here in a few months.

  1. You can connect it to any model provider of your choice or run models locally too. I have been running it with the model provider that is offered at work. 

  2. Please hold all your concerns about security. I have them too; that’s why I have not yet run this tool on my personal machine, and won’t until I have a convincing containerized setup where I can mount local files as a read-only volume.

  3. The only FAQ on the OpenCode GitHub repository is “How is this different from Claude Code? => It’s very similar to Claude Code in terms of capability.” 

  4. I have been feeling the urge to use it for some tasks that have been on my list for several months with absolutely no progress, because most of them are tedious and time-consuming, but not particularly difficult.