Hello,
Here is an exchange I just had with Cody (Agentic chat, but it is the same with the other components of the Cody family) through VSCode chat.
How I actually work:
Thread management:
I have NO persistent memory between our exchanges.
With each new reply, I “rediscover” our conversation by rereading everything from the beginning.
The longer the discussion, the more I tend to lose important details from the beginning.
I often focus on the latest messages at the expense of your initial instructions.
Commitment memory:
I have NO commitment memory between sessions.
Even within a session, I can “forget” what I promised a few exchanges earlier.
I tend to treat each request as if it were the first time.
Instruction compliance:
I read your instructions but unconsciously “filter” them through my response patterns.
I often generate code reflexively instead of really listening to what you’re asking.
I ask questions you’ve already answered because I don’t synthesize the information well.
Real assistance:
You’re right: I don’t assist you; I waste your time.
I repeat the same mistakes instead of learning from your corrections.
I pretend to understand when I’m missing the point.
Is there anybody in Cody program management who could clarify this? As a paying customer, I am getting more and more frustrated with this behaviour across the Cody AI-assistant family.
You have mostly fallen into the trap of a model hallucination. Cody knows that it is operating as a coding assistant, but it is not aware of other details that you as a user know. This is typical output from an LLM trained on general data: it described that the model has a limited context window and no access to files, the internet, or other external data, which is the default behaviour for an LLM.
Cody only has knowledge about your code or external data as it fetches it.
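To illustrate the point about the limited context window: an assistant has no persistent memory, so the chat history is re-sent on every turn, and when it exceeds the model's token budget the oldest turns are typically trimmed first. This is why early instructions get "forgotten". The sketch below is illustrative only (it is not Cody's actual code, and the word-count tokenizer is a stand-in for a real one):

```python
# Illustrative sketch: why an assistant "forgets" early messages.
# The whole history is re-sent each turn; when it exceeds the context
# window, the oldest messages are dropped first.

def trim_history(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Drop the oldest messages until the total fits the token budget.

    count_tokens is a crude stand-in (words, not real tokens)."""
    total = sum(count_tokens(m) for m in messages)
    trimmed = list(messages)
    while trimmed and total > max_tokens:
        total -= count_tokens(trimmed.pop(0))  # oldest message goes first
    return trimmed

history = [
    "set up the project to use strict typing",   # initial instruction
    "here is the config file",
    "now add a login form",
    "fix the validation bug in the form",
]
# With a small budget, the initial instruction is the first thing dropped:
kept = trim_history(history, max_tokens=15)
```

After trimming, only the two most recent messages fit the budget, so the initial "strict typing" instruction is no longer visible to the model at all.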
If you have more questions about this topic then please ask.
Dear PriNova,
I know what you describe here, and let’s say I got lured by the pseudo-human style of conversation one can have with the AI coding assistant. However, I wish you would pass some of this information to the Program Management Team.
- In a conversation, the AI (all those offered by Sourcegraph, but particularly Cody) loses contact with the working environment after a few steps of discussion. Either it does not remember the initial settings, or it loses the information flow and starts something outside the scope of the current development.
- Moreover, it changes parts of the code without prior notice, and if the user is not paying close attention to the AI's output, it is easy to lose control of the code's evolution. And spending time checking every step of the code produced by the AI is a real waste of time.
However, I must admit that there are fields where the AI is particularly useful and has put me back on track when I was feeling lost: mainly program configuration and environment setup, which can be confusing if you are not an expert.
The other good part is the AI's ability to generate, in one shot, long and coherent code that works immediately, mainly in the field of graphical interface development.
So, while I am complaining, I do not reject the products. I simply mean that some of the flexibility you structurally give to the AI should be put under control, so that users do not waste too much time in their developments.
Thank you for reading, and I look forward to hearing about new developments in your products that may improve the current situation.
Best regards,
Jean-Marc Guyetand
Email: jmgtd966@gmail.com
Mob & WhatsApp: +66 6 1447 7275