
We've been experimenting with Claude Code, Anthropic's terminal-based coding assistant, alongside their latest Opus 4.5 model released in November 2025. Here's what we've learned so far.
Last year we wrote about agentic coding with Cursor and Claude. At the time, Cursor offered the more productive workflow—it let us edit code directly alongside AI suggestions rather than delegating tasks entirely. Claude Code existed but required pay-per-use API billing, which discouraged experimentation.
Since then, Anthropic added Claude Code to their fixed-price subscription plans and continued improving the underlying models. Opus 4.5, their latest release, has noticeably better accuracy than earlier versions, which prompted us to revisit the tool.
Claude Code is a coding assistant from Anthropic that runs in the terminal. Rather than typing code manually, developers describe what they want in plain language and Claude Code writes the code for them. Opus 4.5 is Anthropic's most advanced AI model, known for its accuracy on complex tasks. Claude Code uses this model by default.
Claude Code works best when given context about a project. We use markdown files to share details about our stack—which is tailored for China's internet environment—and client-specific conventions. Without this setup, the tool makes generic suggestions that don't always fit our requirements. With it, the output is more relevant and useful.
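As a concrete illustration: Claude Code reads a `CLAUDE.md` file from the project root at the start of a session and treats it as standing instructions. A minimal sketch of what such a file might contain is below; the stack details and conventions are invented placeholders, not our actual setup:

```markdown
# Project context for Claude Code

## Stack
- Next.js frontend deployed on Alibaba Cloud (ICP-licensed domain)
- Self-host all fonts and scripts; avoid CDNs unreachable from mainland China

## Conventions
- WeChat login is the primary auth flow; email/password is the fallback
- All user-facing strings live in zh-CN locale files, never hard-coded
```

Kept short and specific, a file like this steers the tool toward suggestions that fit the project's constraints instead of generic defaults.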
In practice, these tools haven't shortened our timelines. Instead, we're reinvesting the time into more complete early-stage drafts and more polished final work. The efficiency gains go toward quality rather than speed.
Claude Code is especially good at reviewing large codebases and applying consistent changes, and at building something similar from a provided example. For best results, though, it still needs the direction of an experienced professional.
Online discussions sometimes suggest these tools work perfectly every time. That hasn't been our experience. Claude Code rarely gets it right on the first try: sometimes it needs small refinements, sometimes it misunderstands entirely. With feedback, it usually gets there within two or three attempts. When it doesn't, we go in and fix it by hand, the old-fashioned way. Developer judgment remains part of the process.
Claude Code requires a VPN and an international credit card, which puts it out of reach for most local teams. Local alternatives like Baidu Comate and Alibaba's Tongyi Lingma exist, though in our experience working with local clients, AI coding tools haven't come up in conversation yet.
Anthropic's policy is that they don't train on data from paid subscribers, so project code isn't used to improve their models. Code is sent to Anthropic's servers for processing, but client data stored in databases remains on your infrastructure.
Claude Code with Opus 4.5 is a capable tool that continues to improve. Like other AI coding tools, it works best with proper context and human oversight. We're continuing to explore where it fits in our workflow.