I’m not sure whether this is a Cody problem or a GPT-4o problem.
I tried to have Cody optimize my Readme.md file, but the output isn’t in Markdown format. I usually chat with it and ask: “Optimize the readme.md. output in Markdown format”, with the readme.md as context. It then starts to print the result in Markdown, but after a few lines it switches back to the usual output.
Is there a way to get a readme.md in Markdown format?
It’s a rendering problem. Cody outputs the Markdown inside a code block (since it uses Markdown itself for all of its formatting), but when the README itself contains code blocks, the renderer gets confused about what is meant to be display formatting and what is part of the README.
Copy the entire response with the button at the bottom and paste it into an empty file; the full content should be there.
Strange, you’re right: I’m not seeing it now either, but I do see a banner saying “Cody updated to v1.26”. Maybe they removed it? I hope it comes back soon; it’s very much needed, especially in situations like this.
The Copy and Apply buttons only help with copying the code inside the code block. The more serious problem is that after the ```…``` code block, Cody may have produced further output that was meant to be part of the file it was creating, but all of that following content ends up outside the code block. It looks fine as rendered text, but it isn’t copyable as Markdown source; copy-pasting it as plain text means I (or an AI) have to reformat it back into Markdown source.
The problem is caused by incorrectly parsing the final “```”, which ends the code block itself rather than ending the block of text that was supposed to be embedded inside it. In the example above, after the “Install required plugins” item, the “```sh …” block should be terminated with another “```”, and then item 2 would follow, and so on.
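One way to avoid that early termination follows from CommonMark’s fence rules: a fenced block is only closed by a run of at least as many backticks as the opening run, so the outer fence just needs to be longer than any backtick run inside the README. A rough Python sketch of that idea (my own helper, not anything Cody does):

```python
import re

def wrap_in_fence(text: str, info: str = "markdown") -> str:
    """Wrap `text` in a backtick fence longer than any backtick run
    inside it, so embedded ``` blocks cannot close the outer fence
    (CommonMark: a closing fence needs at least as many backticks
    as the opening one)."""
    longest = max((len(run) for run in re.findall(r"`+", text)), default=0)
    fence = "`" * max(longest + 1, 3)
    return f"{fence}{info}\n{text}\n{fence}\n"

readme = "# My project\n\n```sh\nnpm install\n```\n"
print(wrap_in_fence(readme))  # outer fence ends up with four backticks
```

If the model quoted its output this way, the embedded ```sh block would stay inside the outer fence instead of terminating it.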
Glad to hear it is a known issue. I had a look at the code, and I can believe it is tricky.
One workaround I have found that helps with the quoting problem, and also with long messages getting truncated, is to request responses in diff format. It often works, and Cody seems able to apply the diffs correctly, at least most of the time. A diff can actually be longer than the full text, depending on how much has changed, but for small changes diffs are much better. This also works better when a change to a file consists of several disconnected parts, since the elided sections otherwise confuse Cody.
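To illustrate why diffs stay small for small changes (this is just a generic unified diff via Python’s `difflib`, not necessarily the format Cody emits):

```python
import difflib

old = ["# Project\n", "1. Install deps\n", "2. Run the tests\n"]
new = ["# Project\n", "1. Install the dependencies\n", "2. Run the tests\n"]

# A one-line edit yields only a few diff lines, however long the file is.
diff = "".join(difflib.unified_diff(old, new, fromfile="README.md", tofile="README.md"))
print(diff)
```

The diff carries only the changed hunk plus a little context, which is why it dodges both truncation and most of the quoting trouble.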
That is great to hear. Out of curiosity, which model has the highest success rate with your technique? It is tricky if, for example, the LLM produces text enclosed in Markdown that itself contains code enclosed in triple backticks. Simply counting how many triple backticks appear in a text does not work in every case.
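Agreed, backtick counting fails exactly in the nested case. A rough sketch (my own, based on CommonMark’s rule that a closing fence needs at least the opening fence’s length and no info string) of tracking which lines actually land inside a block:

```python
import re

def inside_fence_map(lines):
    """For each line, report whether it lands inside a fenced block.
    Per CommonMark, a fence is closed only by a run of at least as
    many backticks with no info string, so an embedded "```sh" is
    literal content, while its bare "```" closes the OUTER fence."""
    open_len, inside = 0, []
    for line in lines:
        m = re.match(r"^(`{3,})(.*)$", line.strip())
        if m and open_len == 0:
            open_len = len(m.group(1))  # opening fence
            inside.append(False)
        elif m and len(m.group(1)) >= open_len and not m.group(2).strip():
            open_len = 0                # closing fence
            inside.append(False)
        else:
            inside.append(open_len > 0)
    return inside

reply = [
    "```markdown",       # model wraps the whole README in one fence
    "# My project",
    "```sh",             # has an info string, so it is literal content
    "npm install",
    "```",               # closes the OUTER fence, not the sh block
    "2. Run the tests",  # meant for the README, but renders as chat text
    "```",
]
# The reply contains an even number of ``` runs, so naive counting says
# "balanced", yet the README content after the bare ``` falls outside.
print(inside_fence_map(reply))
```

This reproduces exactly the failure described above: everything after the embedded block’s closing fence escapes the README.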