Cody cannot index or access local repo


I have tried multiple times to make Cody access my local repo. Earlier it was saying: “direct access to your local files or any particular repository unless you provide more context”, and then I tried asking in public knowledge, which didn’t help at all.
After deleting the embeddings to rebuild them, it can only access 2 files from the repo. A complete understanding of the code base is missing, which is much needed.
I am also unable to locate the enhanced context option.


Hello @l1nk

LLMs usually don’t work for that use case by default. For folder structures you would normally use a terminal command like tree, ls, etc.
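As a minimal sketch of what that looks like in practice (the directory and file names here are made up for illustration), you can capture the folder structure with a terminal command and save it to a file you can paste into the chat as context:

```shell
# Create a sample project layout (hypothetical names, for illustration only)
mkdir -p demo/src demo/tests
touch demo/src/train.py demo/src/infer.py demo/tests/test_train.py

# 'tree' prints the directory hierarchy; fall back to 'find' if tree is absent
tree demo 2>/dev/null || find demo -print | sort

# Save the structure to a file you can attach or paste as context
find demo -type f | sort > structure.txt
cat structure.txt
```

This keeps the structure question in the terminal, where it is cheap and exact, and reserves the LLM for questions about what the code does.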

LLMs like Cody are for use cases that involve understanding the content of files, in this case the code they contain, in order to comprehend the semantics of what the code does and to help with fixing bugs, implementing new features, or simply understanding an unfamiliar code base.

I hope that explanation helps.

I am using the folder structure command just to check Cody’s access. From what I can see, it is not able to access all the files; instead, I need to attach them one by one as context to get results.
[screenshot: cody2]

It doesn’t make sense if I have to do this every time.


Think of it this way: if you write a very general prompt like “What is this project about?” with only the repo chip, and it seems there is no readme file or similar, it is a hard task for any LLM to infer the repo context.

There also seems to be no usual entry point file like ‘main.py’, ‘app.py’, or similar.

The best approach with such general prompting may be to include at least the entry point of the app as a starting point.

Looking at your second example image, at least for me, it is hard to recognize where to start, even knowing which files to consider from your image alone. I can only infer from the file names what the content might be.

Start by iterating simply. If you have a readme file, take it as context and ask “What is this project about?”. Then take the next step with whichever feature interests you the most, for example training.py or inference.py.
Write down all the knowledge you gained from asking those questions in a text file.

Later, when you deep dive into that repo again, include that generated text file so you avoid asking the same questions again.
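The note-taking step above can be sketched like this (the file names and descriptions are hypothetical examples, not real project files):

```shell
# Hypothetical note-taking workflow: record what you learned from each answer
printf '%s\n' 'training.py: builds the dataset and runs the training loop' >> project-notes.txt
printf '%s\n' 'inference.py: loads a saved model and generates predictions' >> project-notes.txt

# In a later session, attach project-notes.txt as context instead of re-asking
cat project-notes.txt
```

The appended file acts as a growing summary of the repo that you can reuse as a single context attachment.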

Does that make sense?

The problem for me is not whether the LLM understands what my project does… the problem is the ease of telling the LLM which files and structure it should take as context… if you have to add them one by one to get a result, that is horrible and tedious.

And if the entry point is irrelevant, such as when what I want is to show the LLM how 2 or more files relate and work together… that’s a problem…
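One workaround for the attach-one-by-one pain, sketched here with made-up file names and contents, is to bundle the related files into a single file and attach that once:

```shell
# Hypothetical workaround: concatenate related files into one attachable file.
# The src/ files below are created only so the example is self-contained.
mkdir -p src
printf 'def train():\n    pass\n' > src/train.py
printf 'def infer():\n    pass\n' > src/infer.py

: > bundle.txt                            # start with an empty bundle
for f in src/train.py src/infer.py; do
  printf '### %s\n' "$f" >> bundle.txt    # header marks each file boundary
  cat "$f" >> bundle.txt
done
```

Attaching bundle.txt gives the model both files plus their names in one shot, so it can reason about how they relate without you adding each file separately.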
