I start with a high-level design md doc which an AI helps write. Then I ask another AI - whether the same model without the context, or another model - to critique it and spot bugs, gaps and omissions. It always finds stuff that's obvious in hindsight. So I ask it to summarize its findings, paste that into the first AI, and ask for its opinion. We agree on a change, make it, and carry on this adversarial round robin until no model can suggest anything that seems weighty.
I then ask the AI to make a plan. And I round robin that through a bunch of AIs adversarially as well. In the end, the plan looks solid.
Then the end-to-end test case plan, and so on.
By the end of the first day or week or month - depending on the scale of the system - we are ready to code.
And as code gets written I paste it into other AIs along with the spec and plan and ask them to spot bugs, omissions and gaps too, and so on - continually using other AIs to check on the main one implementing.
And of course you have to go read the code yourself, because I have found that AI misses polish.
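To make the loop concrete, here is a minimal sketch of that adversarial round robin, assuming the OpenAI and Anthropic Python SDKs with API keys in the environment; the model names, prompts, and file name are illustrative assumptions, not the commenter's actual setup:

    # Hypothetical sketch, not the commenter's actual tooling.
    # Requires: pip install openai anthropic, plus API keys in the env.
    from openai import OpenAI
    from anthropic import Anthropic

    critic = OpenAI()      # the adversarial reviewer
    author = Anthropic()   # the "first AI" that owns the doc

    def critique(doc: str) -> str:
        # Fresh context every round: the critic never sees prior discussion.
        resp = critic.chat.completions.create(
            model="gpt-4o",  # assumed model name
            messages=[{"role": "user", "content":
                       "Critique this design doc. Spot bugs, gaps and "
                       "omissions, then summarize your findings.\n\n" + doc}],
        )
        return resp.choices[0].message.content

    def revise(doc: str, findings: str) -> str:
        # The first AI weighs the critique and emits an agreed revision.
        resp = author.messages.create(
            model="claude-sonnet-4-5",  # assumed model name
            max_tokens=8192,
            messages=[{"role": "user", "content":
                       f"Design doc:\n{doc}\n\nA reviewer found:\n{findings}\n\n"
                       "Give your opinion on each finding, then rewrite the "
                       "doc applying the changes you agree with."}],
        )
        return resp.content[0].text

    doc = open("design.md").read()
    for _ in range(5):  # in practice: stop once nothing weighty comes back
        doc = revise(doc, critique(doc))
    open("design.md", "w").write(doc)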
- the project essentially spans almost 3 different (albeit minor) generations of LLMs. Have you noticed major differences in their personas, behavior, and output for that specific use case?
- when using AI for feedback, have you ever considered giving it different "personalities"? I have a few skills that role-play as very different reviewers, each with their own (by design conflicting) personality. I found this improves the output, but it is also extremely tiring.
- when did you, if ever, feel that AI was slowing you down massively compared to just doing it yourself (e.g. on a specific bug, performance issue, or design fix)?
- conversely, how often did AI have moments where it genuinely gave you feedback or ideas that wouldn't have come to you?
- last: do you have specific prompts, skills, setups, etc. to work on specific repositories?
Closely matches my own experiences with current SOTA AI. Extremely useful collaborator, far from being a replacement for human intelligence and creativity.
There are projects that I develop mostly without looking at the code, owning the concepts, algorithms and ideas, asking questions and giving hints, and especially owning the product. But not for Redis, not yet at least. When this becomes possible in the future, server software, the way it is developed today, will be over. I bet there will still be projects and repositories, as the accumulation of features, fixes and experience will still be worth it, but the role of programmers will be very similar to what Linus has done so far for the kernel. And for certain projects I'm developing, like the DeepSeek v4 inference engine, I'm already working like that.
LLMs are the insensitive Asimovian robots I’ve always wanted, who translate and do the hardest part of my job: ensuring my emails are polite and none of my true thoughts or feelings are revealed…
Now I just need a way to protect my chats from any potential discovery, and <pew pew> business’ll be easy.
I occasionally type into slack "Future lawyers, the previous conversation is a joke. No one is doing cocaine to get through writing requirements docs."
It feels like Redis is becoming a small database, which seems to make it more convenient to use. Could you add more examples that clarify where the boundary should be?
Well, Redis is a data structures server, and already has very complicated and edgy data structures like the HyperLogLog, so I have very little doubt that a fundamental data type like the Array will fit :) Also, the actual complexity added is mostly two C files that are well commented and understandable.
Sure, there is also the AOF/RDB glue, the tests, and the vendored TRE library for ARGREP. But all in all it's self-contained complexity with few interactions with the rest of the server.
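For readers who haven't met it, the HyperLogLog mentioned above estimates the number of distinct elements in a set using only a few KB of memory, no matter how many elements you add. A minimal sketch using redis-py against a local server (key and member names are made up):

    import redis

    r = redis.Redis()
    # PFADD feeds elements into the probabilistic counter; re-adding
    # duplicates costs nothing and does not inflate the count.
    for user in ("u1", "u2", "u3", "u2", "u1"):
        r.pfadd("visitors:today", user)
    # PFCOUNT returns an estimate of the distinct count (~3 here).
    print(r.pfcount("visitors:today"))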
A quick note: if we focus only on that part of the implementation, skipping the tests and the persistence code (which is not huge), 4075 lines in 4 months is an average of about 33 lines per day, which is quite low.
I’m a big fan of your work, and I honestly didn’t expect to receive a reply from you. Thank you.
Also, thank you for pointing out exactly where I was misunderstanding the issue.
In the past, I used Redis for temperature measurements in a smart farm project. I used Hashes back then, but it seems like Array would fit that use case much better.
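For illustration, the Hash-based layout looked roughly like this (redis-py; key and field names here are hypothetical, and I'm not showing the new Array commands since their exact names aren't given in this thread):

    import redis, time

    r = redis.Redis()
    # One Hash per sensor, one field per reading timestamp. It works, but
    # an unordered field->value map is an awkward fit for an ordered series
    # of measurements, which is where an Array type would be more natural.
    r.hset("farm:sensor:42", mapping={str(int(time.time())): "21.5"})
    print(r.hgetall("farm:sensor:42"))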
This looks like a very useful feature. Thank you again for the reply.
He is not "your avg dev", and it took him 4 months with an LLM.
This is not a seal of approval for you to go and command all your developers to move fully to Claude Code/Codex/any other AI coding tool.
I'm looking at you, average startup CEO.
Very cool anyway! Can I expect a youtube video about this soon?