When using Smart Apply with languages like Go, Cody fails more often than alternatives like Cursor on the same prompts. Other languages seem to fare better, but I had to temporarily switch to Cursor on one project, despite preferring Cody, because I kept losing code or Cody didn't know where to put things.
Smart Apply either misses where the change should go, or it completely replaces or deletes code blocks.
Are there any prompts or instructions to include in your queries that help Cody find the right code snippets to work on? For example:
"Include the 3 lines immediately before and after an edit so I can identify where to make the change."

or

"When providing code changes, put each change in a code block and include 3 or more lines before and after it so I can figure out the context it should go in. Include line numbers and file names if possible."
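To make the anchoring idea concrete, here is a rough sketch of why those surrounding lines matter: an apply step can search for the exact unchanged lines and splice the new code in after them instead of guessing. I don't know how Cody's Smart Apply works internally; this is just the general idea, with made-up names:

```go
package main

import (
	"fmt"
	"strings"
)

// applyAnchoredEdit splices insert after the first exact match of the
// "before" anchor lines. If the anchor isn't found, it leaves the file
// untouched rather than guessing, which is the failure mode I'd prefer
// over replaced or deleted code. All names here are hypothetical.
func applyAnchoredEdit(file, before, insert []string) ([]string, bool) {
	for i := 0; i+len(before) <= len(file); i++ {
		matched := true
		for j, want := range before {
			if file[i+j] != want {
				matched = false
				break
			}
		}
		if !matched {
			continue
		}
		at := i + len(before)
		out := append([]string{}, file[:at]...)
		out = append(out, insert...)
		out = append(out, file[at:]...)
		return out, true
	}
	return file, false
}

func main() {
	file := []string{"func run(cfg Config) error {", "\tsrv := newServer(cfg)", "\treturn srv.ListenAndServe()", "}"}
	anchor := []string{"\tsrv := newServer(cfg)"}
	insert := []string{"\tlog.Printf(\"starting server on %s\", cfg.Addr)"}
	patched, ok := applyAnchoredEdit(file, anchor, insert)
	fmt.Println("anchor found:", ok)
	fmt.Println(strings.Join(patched, "\n"))
}
```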
Cursor injects prompts into the LLM like this:
First, I will give you some potentially helpful context about my code.
Then, I will show you the insertion point and give you the instruction. The insertion point will be in openai-gemini/src/worker.mjs.

Potentially helpful context
```
file_context_4
path/to/file1.js from line 1:
{fake context here}

file_context_3
path/to/file2.mjs from line 13:
{fake context here}

file_context_2
path/to/file3.css from line 1:
{fake context here}
```
### Generation Prompt
{My prompt here}
## Your Task
Generate the code to be inserted in accordance with the instructions.
Please format your output as:
```
// Start Generation Here
// INSERT_YOUR_CODE
// End Generation Here
```
Immediately start your response with ```
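For anyone comparing, the structure above boils down to roughly this. This is only my own Go sketch of the prompt shape; the helper and field names are mine, not an API exposed by Cursor or Cody:

```go
package main

import (
	"fmt"
	"strings"
)

// fileContext mirrors one "file_context_N" section from the Cursor excerpt above.
type fileContext struct {
	Path     string
	FromLine int
	Snippet  string
}

// buildCursorStylePrompt reassembles the structure shown above: context
// sections, then the insertion point and instruction. The wording is copied
// from the excerpt; the helper itself is just a sketch of that structure.
func buildCursorStylePrompt(contexts []fileContext, insertionFile, instruction string) string {
	var b strings.Builder
	b.WriteString("First, I will give you some potentially helpful context about my code.\n")
	fmt.Fprintf(&b, "Then, I will show you the insertion point and give you the instruction. The insertion point will be in %s.\n\n", insertionFile)
	b.WriteString("Potentially helpful context\n")
	for i, c := range contexts {
		fmt.Fprintf(&b, "file_context_%d\n%s from line %d:\n%s\n", len(contexts)-i, c.Path, c.FromLine, c.Snippet)
	}
	fmt.Fprintf(&b, "\nGeneration Prompt\n%s\n\nYour Task\nGenerate the code to be inserted in accordance with the instructions.\n", instruction)
	return b.String()
}

func main() {
	prompt := buildCursorStylePrompt(
		[]fileContext{{Path: "path/to/file1.js", FromLine: 1, Snippet: "{fake context here}"}},
		"openai-gemini/src/worker.mjs",
		"{My prompt here}",
	)
	fmt.Println(prompt)
}
```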