I have a question about indexing and embedding projects.
Is having the project under Git and pushed to a public remote mandatory? Isn't it possible to do the embedding locally, for example via transformers.js?
If embedding has already started, how do I stop and restart it? Say I make changes to the .cody/ignore file: an embedding run that has already started never picks up the change, no matter how many times I restart VS Code.
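For context, my .cody/ignore just uses gitignore-style patterns (at least that is how I understood the experimental Cody Ignore feature), something like:

    # keep generated and vendored code out of Cody's context
    node_modules/
    dist/
    **/*.min.js

so I would expect a fresh embedding run to skip those folders after the change.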
I have the same question.
I’ve been evaluating Cody and am using a local copy of code, without a repo. Cody seems to have zero context of anything outside of the open file.
Can I run a local index somehow?
Am I doing something wrong?
Is this just an oversight in how it works?
I can’t justify rolling this out to my teams if I can’t see it working better than cutting and pasting directly into an LLM…
For Cody to work properly, you need to initialize a Git repo in your project and add a remote.
After that, automatic indexing should start and Cody will be able to understand your codebase.
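Roughly, from the project root (the remote URL below is only a placeholder, use your own):

    git init
    git remote add origin git@github.com:your-org/your-repo.git
    git add -A
    git commit -m "Initial commit"

Once the repo has a remote and at least one commit, indexing should have something to work with.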
I have some questions about this. What happens if my repository is a bit big: around 50 files with 200 to 400 lines of code plus comments each?
I assume the repository has to be public, or can it also be private?
I also have a problem with context: I run out of tokens quickly in each chat, just while introducing the project so Cody understands what it is about and where the development is heading.
Depending on the plan you are on, Cody can understand your local codebase through a symbol finder in local mode. This means that if you are familiar with your codebase, you can ask targeted questions using symbols such as class, function, and variable names. This works even better if the code is well documented. For the Pro plan, the repository must at least be initialized with Git, and you must commit your changes for the local index to update.
For the Enterprise Starter plan, the repository can be private, and you can even point Cody at multiple repositories.
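As an example of a symbol-targeted question (the class and function names here are invented, substitute your own):

    What does PaymentService.refundOrder() do, and where is it called from?
    Which validations run in UserController.update() before the model is saved?

The more precisely you name the symbol, the better the retrieved context tends to be.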
I have the Pro plan and my repository is public, but I have no idea how to configure this in PhpStorm (JetBrains).
Can you help me in any way?
What I want is for Cody to understand my development workflow without burning through tokens (normally this means manually adding the project files to the context one by one), so that when I ask about changing or implementing something it already has access to the project's contents. I don't mind if the answer also includes information pulled from the internet, etc.
If you have cloned your repository locally, which I assume you did, and it has commits, you can @-mention the current repository in your chat message box and Cody will work out which files to add to the context based on your prompt. How well this works depends heavily on how meaningful your naming scheme is.
Usually, since this is your own repository, you will add the relevant files or code snippets to the context yourself, based on your workflow, to get the most reliable answer from Cody.
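For example, something along these lines (the repository and file names are made up, yours will differ):

    @your-org/your-repo Where do we load and cache the translation strings for the user-facing pages?
    @src/helpers/global_functions.php Add a translate() helper that wraps the existing lookup in this file.

In the first prompt Cody has to pick the files itself; in the second you attach the file explicitly, which is the more reliable option.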
What exactly should those look like in my case?
Can you give an example where the wording alone should be enough for Cody to find a file, just because I mentioned it clearly in the prompt?
For example, in a project where I'm adding a multi-language feature for the User section, even though I've told it that the global functions file exists at path X and that it should take that file into account, because it contains the functions that need to be implemented, it ignores the contents of that file. Then I have to spend tokens on another interaction to correct it and attach the file to the context.
I thought that if I gave it this file, the AI could pull in anything that had been mentioned to it.