I’ve used a number of AI-powered code assistant tools, but I chose Cody because of its reasonable price, easy-to-use interface, and frequent updates.
However, I’m discontinuing my Cody subscription because:
- I cannot access it in a VPN environment.
- The servers are unreliable (especially recently).
- It hasn’t stayed innovative in the face of competing products.
It is with mixed feelings of gratitude and sadness that I write this post to bid farewell to a product I have loved, and also recommended to my colleagues, over the years.
Hey @4N3MONE
Sad to hear that you are considering cancelling your subscription.
Regarding VPNs, we use Cloudflare to protect our services against malicious attacks and DDoS attacks. It is Cloudflare’s responsibility to block VPN connections if the IPs are on its blacklist; there is nothing we can do from our side.
The Free/Pro plans are currently in a migration phase to new server infrastructure because of high traffic. We apologize for any downtime that comes along with this.
As for innovation, what features would you like to see in Cody? Keep in mind that Cody is also used by enterprise companies, and they need features to be highly reliable. For example, “vibe coding” is not a thing for enterprises, because it introduces a lot of technical debt and security risks.
Thank you for your feedback, and we hope you can understand our decisions.
I can second the sentiment of disappointment (although I don’t really share the love; it was more like high hopes). I get that it takes time to create a smooth experience, but the amount of bugs and small annoyances eats into the time savings big time. Having observed a colleague using Cursor and heard his experiences (he used Cody before), Sourcegraph needs to up its game soon…
EDIT: Honestly, the more I try to use Cody, the more frustrated I become. It’s just not an enjoyable process. Maybe I’m not great at “prompting”… So I’ve tried to choose simpler tasks to get a higher “hit rate”, but it also fails at very simple things like converting a Python docstring to reST format. In this case I can see that the output is correct, but “Apply” fails (1). It also spends a significant amount of time applying. This is not consistent: often it works, but the number of times it doesn’t is too high, making the whole thing a timesink.
EDIT: And either the pre-chat/edit instructions are not applied, or the models are just incapable of writing Sphinx/reStructuredText docstrings…
(1) Yes, I could use the edit-code command (which might be more robust/faster for this), but the lack of history makes adjusting the prompt really annoying, and for some reason my custom edit prompts can’t be used?
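For anyone unfamiliar with the conversion task mentioned above, here is a minimal sketch of what it involves (the function and its docstrings are made-up examples, not code from the thread): rewriting a Google-style docstring into the Sphinx/reST field-list form.

```python
def scale_google(value, factor=2):
    """Multiply a value by a factor.

    Google-style docstring (the "before" format):

    Args:
        value (float): The number to scale.
        factor (int): The multiplier.

    Returns:
        float: The scaled value.
    """
    return value * factor


def scale_rest(value, factor=2):
    """Multiply a value by a factor.

    Sphinx/reST field-list docstring (the "after" format):

    :param value: The number to scale.
    :type value: float
    :param factor: The multiplier.
    :type factor: int
    :returns: The scaled value.
    :rtype: float
    """
    return value * factor
```

The code itself is untouched; only the docstring changes, which is why a failed “Apply” on such an edit is especially frustrating.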