jmathai 16 hours ago

I agree that people have varied experiences with LLMs helping them write code. I think the article is right that it's the lack of intuitiveness that leads people into inefficient ways of working with them.

I think the most valuable suggestions from the article that I've found work well for me are:

Context - Provide sufficient context, and a way to do this continually. Some tools, like Cursor or Claude Code, do this for you.

Testing - You need to be able to quickly test the code it gives you. It may be wrong the first time but right the second; the faster you can validate, the faster you get to the right code. Even with a retry or two, it's likely faster than writing it yourself.
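As a concrete sketch of what I mean by fast validation (my own hypothetical example, not from the article): if you ask the LLM for a `slugify` helper, a few assertions you can rerun in seconds make the validate-then-regenerate loop cheap:

```python
import re

# Hypothetical LLM-generated function under test.
def slugify(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into one hyphen,
    # and strip leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Fast checks: rerun these after every LLM iteration.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Already--slugged  ") == "already-slugged"
```

If an assertion fails, paste the failing case back into the conversation and regenerate; that round trip is usually faster than debugging by eye.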

If you're still having trouble, find someone who isn't and ask if they'll let you watch them code with LLMs!

goosejuice 12 hours ago

Indeed, function signatures and/or types help tremendously. So does a TDD loop. LLMs are productive with guardrails.
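For illustration (a hypothetical stub, not from any particular codebase): handing the LLM a typed signature and docstring to fill in, rather than a vague prompt, pins down the contract it has to satisfy:

```python
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count weekdays in the half-open range [start, end).

    Giving the model this exact signature and docstring constrains
    the shape of its answer; the body below is one valid completion.
    """
    days = 0
    d = start
    while d < end:
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            days += 1
        d += timedelta(days=1)
    return days
```

The types make wrong answers fail fast: a completion returning a float or accepting strings is immediately visible to a type checker or a test.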

  • czk 12 hours ago

    I have had great success with Claude Code by asking it to do TDD (something I personally don't practice). The agentic loop seems to benefit from writing the tests first and then hands-off code iteration until they pass.

    When working with Claude Code, I also tell it to add a lot of very verbose debugging output to the software. I find that being able to ingest the stdout logging in its agentic loop improves the quality of iterative results.
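    A minimal sketch of the kind of logging I ask for (the logger name and format are my own choices, not a Claude Code convention): verbose, structured stdout output so the agent can read the full trace of a run:

```python
import logging
import sys

# Verbose stdout logging so an agent (or a human) can ingest the trace.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.DEBUG,
    format="%(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("pipeline")  # hypothetical module name

def parse_record(raw: str) -> dict:
    # Log inputs and outputs at each step so failures are self-explaining.
    log.debug("parse_record input=%r", raw)
    key, _, value = raw.partition("=")
    result = {key.strip(): value.strip()}
    log.debug("parse_record output=%r", result)
    return result

parse_record("name = Ada")
```

    When a test fails, the agent sees exactly which intermediate value went wrong in its own stdout capture, instead of guessing from a bare traceback.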