We're switching things up with the newsletter. Every Sunday, we'll share our unfiltered thoughts: one thing from the week you can take with you into the next one — a tool, a new obsession, a realization, or something that stuck with us.

There’s a lot of AI stuff out there, so here’s what you can expect from us: what we're actually doing with it, what's working, what surprised us, and what you might want to try yourself.

Let’s get into it.

Russell:

This week was all about token optimization. A week ago, Anthropic changed its policy on how third-party harnesses, like OpenClaw, can access its models. This was huge news in the developer world, because it meant people had to actually construct and optimize workflows for their agents.

The news came on April 4th, when Anthropic announced an official end to a loophole that allowed developers to run almost unlimited workflows using Anthropic’s flagship model, Opus, without worrying about cost or optimizations. It’s easy to forget the cost of AI, because talking to tools like ChatGPT and Claude has become second nature for so many of us by now. But every time you ask Chat a question, that costs tokens (and money), and it can add up fast.  

Thousands of developers woke up to messages like this on April 5th from their agents:

“LLM request rejected: Third-party apps now draw from your extra usage, not your plan limits. We've added a $100 credit to get you started. Claim it at claude.ai/settings/usage and keep going.”

We had been leveraging Claude Code in some of our agent products, so this week was about digging back in on the APIs for Anthropic and OpenAI. Here’s what we found:

  • Anthropic models are still better, pound for pound, at tool calls and following directions than OpenAI's models

  • Sonnet is the best model for general use cases, while Haiku can be used strategically for small, simple tasks

  • Optimization strategies like prompt caching and tool-call scripts are essential to keeping costs down, so your workflows execute in as few model turns as possible
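To make the caching point concrete, here's a minimal sketch of how prompt caching looks with Anthropic's Messages API: you mark a large, stable prefix (like tool instructions) with `cache_control` so repeated requests reuse the cached prefix instead of paying for those input tokens on every turn. The `TOOL_DEFINITIONS` text and the helper function are illustrative, not from any of our actual products, and the model ID is just a placeholder for whichever Sonnet version you run.

```python
# Sketch: building a messages.create payload with a cacheable system prefix.
# Only the structure matters here; we never hit the network in this example.

TOOL_DEFINITIONS = "…large, rarely-changing tool instructions…"  # placeholder

def build_request(user_message: str) -> dict:
    """Assemble a request payload with the static prefix marked cacheable."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model ID
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": TOOL_DEFINITIONS,
                # cache_control tells the API this block is a stable prefix
                # that can be cached across requests.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request("Summarize yesterday's deploy logs.")
```

The key design choice is keeping everything that changes per request (the user message) out of the cached prefix, so the cache actually gets hit.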

When Claude hits its limit but you haven’t…

Julia:

Change is hard. Most of the businesses we work with have been running on the same systems for years (sometimes decades) and "this is how we've always done it" is deeply embedded. Pushing someone to rip everything out overnight is a recipe for resistance. So we always try to meet clients where they are.

This week, we spoke to someone dealing with this exact issue for a business that relies heavily on ERP and inventory systems. Instead of forcing a full migration, here's what we proposed: build a neutral database layer in between their existing systems, like a bridge. Webhooks fire when something happens in system A, the data lands in a connective layer, and flows wherever it needs to go in system B. The systems don't need to know about each other. They just need to talk to the bridge. And when the client eventually migrates off one of those legacy tools, the bridge stays. The integrations, the data, the automations — none of it gets thrown away. It just points somewhere new.
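The bridge pattern above can be sketched in a few lines: system A fires a webhook into a neutral layer, the layer persists the event, and subscribers (system B, or whatever replaces it later) receive it. All class, event, and field names here are made up for illustration — this isn't code from any real ERP integration.

```python
# Minimal sketch of the "bridge" pattern: a neutral layer that receives
# webhook events, stores them, and routes them to subscribers. Systems A
# and B never talk to each other directly — only to the bridge.

from typing import Callable

class Bridge:
    """Neutral connective layer between legacy systems."""

    def __init__(self) -> None:
        self.store: list[dict] = []  # the neutral database layer
        self.routes: dict[str, list[Callable[[dict], None]]] = {}

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        """Register a downstream handler (e.g. system B) for an event type."""
        self.routes.setdefault(event_type, []).append(handler)

    def receive(self, event: dict) -> None:
        """Webhook endpoint for system A: persist, then fan out."""
        self.store.append(event)  # data lands in the connective layer
        for handler in self.routes.get(event["type"], []):
            handler(event)  # and flows wherever it needs to go

# System B only knows about the bridge. Swapping system B out later
# means re-pointing the subscription, not rebuilding the integration.
inventory_updates: list[dict] = []
bridge = Bridge()
bridge.subscribe("inventory.updated", inventory_updates.append)
bridge.receive({"type": "inventory.updated", "sku": "A-100", "qty": 42})
```

When the client migrates off a legacy tool, only the subscription changes; the stored events and the routing logic stay put.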

AI is already intimidating enough: it's new, it requires changing habits, and nobody wants to blow up what's working. The biggest competitor we face isn't another agency or tool. It's inertia. But baby steps still move you forward. And enough of them — especially when every phase of work compounds — eventually add up to one big one, just without scaring everyone off.

Stay curious,

Julia & Russell
