At Meta, Microsoft, Salesforce and other large companies, devs are purposefully burning tokens (and money!) to inflate their AI usage and hit AI usage metrics which they treat as targets.
Do you not use it enough because you get bad results? I discovered that, no matter how smart the LLM might be, its first attempt is never its best work. Tell it to review its work (or its plan, if using planning mode). If it makes any changes, tell it to review its work again. Repeat until there are no more changes.
(You don’t actually have to do this repetition manually; just tell the AI to do it in a loop. I recommend making it into a SKILL.md so you don’t have to explain the loop every time.)
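As a rough sketch, such a SKILL.md might look like the following. (The frontmatter fields and wording here are just an illustration of the idea, not a canonical format; adapt them to whatever your tool expects.)

```markdown
---
name: review-loop
description: Iteratively self-review work or plans until a review pass produces no changes
---

When the user asks you to apply the review loop:

1. Critically review your most recent output (code, plan, or answer),
   looking for bugs, omissions, and weak reasoning.
2. If the review produces any changes, apply them, then return to step 1.
3. Stop only when a full review pass produces no changes, and state that
   explicitly.
```

With this in place, a single instruction like "apply the review loop" replaces having to spell out the repetition each time.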
With these loops, I get better results AND burn lots of tokens. (Yes, it feels strange that excessive token consumption is actually considered a good thing)