Steve Yegge has released Gas Town, a high-leverage productivity environment where tooling, automation, and agentic workflows dramatically amplify how much real work a developer can get done. Figuring out how to use multiple agents, even 10-12 of them, is now a (steep) productivity imperative.
One piece of low-hanging fruit is to move non-coding development workflows to AI. What are these workflows? They are small activities, performed several times a day, that can be automated away using AI.
A few examples:
- Ad hoc querying of the database
- One-off manual API endpoint testing
- Ad hoc UI tests
- Merging worktree feature branches
- Composite workflows: call an API endpoint, then query the database to verify the result
Of course, coding agents, given some configuration, can do some of these on their own out of the box. But we would typically like to not burn too many tokens, to complete the task fast and correctly, and to have guardrails around what agents can do. Let's see how that can be done through the examples.
Ad hoc querying of the database
Say I want a list of users who have not posted a blog in the last month but had more than 1000 views on their last post. Earlier I would write the query and run it; prompting is a faster way to provide the input.
But if we just prompt, it takes much longer than I like and consumes tokens in the process. It takes long because the agent has to figure out how to accomplish the task from scratch. It needs help with:
- the database schema. e.g. if I am using Prisma, I can provide the schema.prisma file and it can find out.
- the database connection settings
Since I will use this regularly, ensuring that it runs only read-only queries will put me at ease. Hence, a simple script that provides this guardrail. See all the pieces here.
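The guardrail script can be very small. Below is a minimal sketch of the idea, not my actual script: it assumes a Postgres database, the psql CLI on the PATH, and a DATABASE_URL in .env. The keyword blocklist is deliberately coarse (it will also refuse a harmless column name like updated_at), which is the right trade-off for a guardrail.

```shell
# query-guard.sh -- guardrail for ad hoc, agent-issued SQL.
# Sketch only: assumes Postgres, psql on PATH, and DATABASE_URL in .env.

# Return 0 only for queries that look like a plain read-only SELECT.
is_read_only() {
  normalized=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$normalized" in
    select*) ;;            # must start with SELECT
    *) return 1 ;;
  esac
  # Coarse blocklist: refuse if any mutating keyword appears anywhere.
  for verb in insert update delete drop alter truncate grant; do
    case "$normalized" in
      *"$verb"*) return 1 ;;
    esac
  done
  return 0
}

run_ro_query() {
  if ! is_read_only "$1"; then
    echo "refused: only read-only SELECT queries are allowed" >&2
    return 1
  fi
  . ./.env                                  # provides DATABASE_URL
  # Belt and braces: also open the session itself read-only.
  psql "$DATABASE_URL" -v ON_ERROR_STOP=1 \
    -c "SET default_transaction_read_only = on;" -c "$1"
}
```

The agent is told to run queries only through this wrapper, so even a confused prompt can't mutate data.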
Performing API endpoint tests
My pre-AI approach was to create a Postman collection and maintain all endpoints in it. I end up with a lot of items in the collection that I don't reuse, and over time they become too many to find easily. Now I use a simple prompt template like the following.
## Instructions
- Call the HTTP API running on the port provided in .env
- Call custom endpoints at / defined in apiRoutes in file generic-agents/src/mastra/index.ts
## Rules
- Do NOT repeat the test on your own
- Do not start / stop servers. They are managed separately
An important consideration here is whether I want the agent to start the server on demand and stop it when done. I have decided against it for now. The coding agent's management of, and interface to, background processes is not robust, which makes this cumbersome. Secondly, it is difficult to tell the agent not to analyze the stdout/stderr of these processes; it tends to analyze them and ends up consuming a lot of tokens without proportionate value for me.
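What the agent ends up running for each one-off test is usually just a curl call. A sketch of that shape is below; the PORT variable in .env and the /api/health path are illustrative assumptions, not my actual setup.

```shell
# endpoint-check.sh -- the shape of a one-off check the agent ends up running.
# Sketch: PORT in .env and the example endpoint path are assumptions.

# Read the server port from a dotenv-style file.
port_from_env() { grep '^PORT=' "${1:-.env}" | cut -d'=' -f2; }

# Hit an endpoint on the locally running server; succeed only on HTTP 200.
check_endpoint() {
  port=$(port_from_env .env)
  status=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:${port}$1")
  [ "$status" = "200" ] || { echo "unexpected status $status for $1" >&2; return 1; }
}

# Usage: check_endpoint /api/health
```

Because the server's lifecycle is managed outside the agent, the check stays cheap: one process, one request, exit code as the verdict.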
Also, a bit about git worktrees. Git worktrees are an excellent way to work on multiple features in parallel, with a different agent session in each. Such a setup also allows one to run API endpoint tests against a server while another agent session is modifying the code; the code modification doesn't force a restart of the server.
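Setting this up takes a couple of commands. The repository, branch, and directory names below are illustrative:

```shell
# Demo: one working directory per feature branch (names are illustrative).
set -eu
git init -q -b main myapp && cd myapp
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"
git branch feature-a
git branch feature-b

# Each worktree is an independent checkout; run a separate agent session in each.
git worktree add -q ../myapp-feature-a feature-a
git worktree add -q ../myapp-feature-b feature-b
git worktree list
```

All worktrees share one object store, so branches stay cheap to create and merge across sessions.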
Ad hoc web UI tests
This requires a regular functional test setup (like Playwright or Selenium), with one difference: the generated tests are ephemeral (throwaway). The prompt template for this could look like the following.
### Rules
- create the tests in /ephemeral-tests and results in /ephemeral-tests/results folder
- before starting the test, delete any previous tests present in the folder
- use playwright for running the tests
### Setup
- playwright.config.ts file is present in the root folder
- before starting test, add local storage item "accessToken" with value "ya29.a0AQQ_BDRpp-dfdsfdsf" without quotes
### Capture and show in the output
- network errors from the browser
When prompting, one can then do:
<prompt>. follow @prompt-template.md
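The housekeeping in the rules above (throw away previous tests, keep results in a subfolder) can be captured in a tiny helper the agent runs first. The paths mirror the template; everything else is a sketch.

```shell
# reset-ephemeral.sh -- housekeeping the rules above describe, as a script.
set -eu
TESTS_DIR="ephemeral-tests"

rm -rf "$TESTS_DIR"              # delete any previous throwaway tests
mkdir -p "$TESTS_DIR/results"    # fresh folders for tests and their results

# The agent then writes new specs into $TESTS_DIR and runs them, e.g.:
#   npx playwright test ephemeral-tests   # assumes playwright.config.ts in root
```

Keeping the throwaway specs in their own folder also makes it trivial to .gitignore them so they never pollute the real test suite.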
Let's look at another example now: merging git worktree feature branches.
Managing multiple feature branches locally using git worktrees requires constant merging between branches. This can be offloaded to the agent. Again, just a prompt template describing the process can be used.
1. Ensure there are no local changes
2. Merge the main branch into the local branch, ignoring whitespace and taking main's changes on conflict
3. On a successful merge, run all unit tests using npm run test:unit. Analyse only the exit code, not the output.
4. On a successful run, ask a yes/no question on whether to push or not.
5. If answered yes then push.
6. If no, do nothing
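The steps above translate almost mechanically into a script, which is a useful way to sanity-check the prompt template. This is a sketch: the merge flag choices (ignore whitespace, prefer main's side on conflict) are my reading of step 2's intent, not a verified recipe.

```shell
# merge-main.sh -- steps 1-6 as a script (sketch; merge flags are assumptions).

# Step 1 helper: true only when the worktree has no local changes.
worktree_clean() { [ -z "$(git status --porcelain)" ]; }

merge_and_push() {
  worktree_clean || { echo "local changes present, aborting" >&2; return 1; }

  # Step 2: merge main, ignoring whitespace, taking main's side on conflicts.
  git merge main -X ignore-all-space -X theirs || return 1

  # Step 3: unit tests; exit code only, output discarded.
  if ! npm run test:unit > /dev/null 2>&1; then
    echo "unit tests failed, not pushing" >&2
    return 1
  fi

  # Steps 4-6: push only on an explicit "yes".
  printf "push to remote? (yes/no) "
  read -r answer
  if [ "$answer" = "yes" ]; then git push; fi
}
```

Discarding the test output (step 3's "exit code only") is what keeps this cheap for the agent: there is nothing verbose for it to analyze.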
There are three building blocks: prompts, prompt templates, and the software infrastructure to run them. The latter two are version controlled. The mental model is: Claude Code has been given a computer, and the terminal is its interface to everything. It is super important to create a firewall between what is fine for Claude to access and what isn't.
Once we have these different primitives, like running a query or running an API endpoint test, one can compose them in a prompt.
Finally, on the importance of ephemeral testing
Ephemeral testing is the basic end-to-end manual testing an engineer does to check whether what they have built works from the user's perspective in a development environment. This is a very important feedback tool. People who are used to moving a story to QA or Done without doing it learn its importance the hard way (e.g. a story pushed back to "In Dev" with 7 bugs within half an hour of testing).
At the same time, ephemeral testing is the kind of testing that is too early to fully automate and make part of the test suite. Automating too early usually results in maintaining both the code and the tests without getting the desired value, increasing overall work.







