Redundant code generation, and code responses limited to 3 code files

Hello,

I use Cody Pro with VS Code and Experimental.

I noticed that it stops answering after generating 3 or 4 screens of code (files for a class or a namespace) and then asks the question again, even though I asked it in the prompt to go step by step.

I always have to ask it to continue.

Also, even though I work with a GitHub repository, it loses the project context and starts generating the same code redundantly.
I’m not going to say that I’m an expert in prompting, but I also work with ChatGPT Plus, and I don’t encounter these problems.

Do you have any advice for me?

Thanks

Some models tend to spit out the complete code file even when only a small change should be applied.
You can prompt the model to “Only provide the code that needs to be changed and leave out the rest for brevity.”
Usually Sonnet does not behave like this, but the Google models do.
Every LLM has its own characteristics and is sensitive to prompting. I remember that back in the early days, before ChatGPT was released, even a small change like the placement of a comma, whitespace, or a spelling error could make a model behave differently.

Additionally, it helps to decompose your tasks into smaller sub-tasks and stay within the token limits. An overly long conversation leads the LLM to hallucinate.
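To make the decomposition idea concrete, here is a minimal sketch of batching sub-tasks so each prompt stays under a token budget. The ~4-characters-per-token heuristic and the batching logic are my own assumptions for illustration, not anything Cody or a specific model actually does; real tokenizers differ per model.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic (~4 characters per token); real tokenizers differ.
    return max(1, len(text) // 4)

def chunk_subtasks(subtasks, budget_tokens=1000):
    """Group sub-task descriptions into batches that each fit the budget."""
    batches, current, used = [], [], 0
    for task in subtasks:
        cost = estimate_tokens(task)
        # Start a new batch when adding this task would exceed the budget.
        if current and used + cost > budget_tokens:
            batches.append(current)
            current, used = [], 0
        current.append(task)
        used += cost
    if current:
        batches.append(current)
    return batches

# Hypothetical sub-tasks; each batch would be sent as its own short conversation.
tasks = [f"Refactor module {i}: extract interface and add tests" for i in range(20)]
for batch in chunk_subtasks(tasks, budget_tokens=50):
    prompt = "Please do only the following steps:\n" + "\n".join(batch)
```

Sending each batch in a fresh, short conversation keeps the context small, which in my experience reduces both the truncated answers and the hallucinated, redundant code.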

Thanks for your reply.
I'll follow all of your advice.
