I'm A Cody Pro User Experiencing A "Rate Limit Exceeded" Error

[color=#39FF14]NOTE: This is not that big of a deal, and I'm not seeking support for this issue. I'm only giving feedback on what I'm experiencing so you can look into a fix.[/color]

[color=#39FF14]I'm a Cody Pro user, and today I started experiencing a "Rate Limit Exceeded" error. I receive the following message every time I ask Cody AI to edit my code in VS Code. The error message is shown in the image below:[/color]

[color=#39FF14]I thought Cody Pro users get unlimited AI chats, right?[/color]

[color=#39FF14](I'm actually not sure, but I wanted to mention that this may be caused by my low-quality Starlink satellite internet connection.)[/color]

[color=#39FF14]Here are images of the error and my Cody Pro subscription status:[/color]


[color=#39FF14]Below I've included the VS Code output from the Cody AI debug channel for reference:[/color]

█ logEvent: CodyVSCodeExtension:menu:edit:clicked  {
  "details": "VSCode",
  "properties": {
    "source": "editor"
  },
  "opts": {
    "hasV2Event": true
  }
}
█ telemetry-v2: recordEvent: cody.menu.edit/clicked  {
  "parameters": {
    "version": 0,
    "metadata": [
      {
        "key": "contextSelection",
        "value": 10
      },
      {
        "key": "guardrails",
        "value": 0
      },
      {
        "key": "ollama",
        "value": 0
      },
      {
        "key": "tier",
        "value": 1
      }
    ],
    "privateMetadata": {
      "source": "editor"
    }
  },
  "timestamp": "2024-06-03T21:35:57.495Z"
}
█ logEvent: CodyVSCodeExtension:command:edit:executed  {
  "details": "VSCode",
  "properties": {
    "intent": "edit",
    "mode": "edit",
    "source": "editor"
  },
  "opts": {
    "hasV2Event": true
  }
}
█ telemetry-v2: recordEvent: cody.command.edit/executed  {
  "parameters": {
    "version": 0,
    "metadata": [
      {
        "key": "contextSelection",
        "value": 10
      },
      {
        "key": "guardrails",
        "value": 0
      },
      {
        "key": "ollama",
        "value": 0
      },
      {
        "key": "tier",
        "value": 1
      }
    ],
    "privateMetadata": {
      "intent": "edit",
      "mode": "edit",
      "source": "editor",
      "model": "openai/gpt-4o"
    }
  },
  "timestamp": "2024-06-03T21:36:06.798Z"
}
█ CompletionLogger:onError: {"type":"completion","endpoint":"https://sourcegraph.com/.api/completions/stream?client-name=vscode&client-version=1.21.1717254953","status":"error","duration":6178,"err":""}  {
  "params": {
    "temperature": 0.2,
    "topK": -1,
    "topP": -1,
    "model": "openai/gpt-4o",
    "stopSequences": [
      "</CODE5711>"
    ],
    "maxTokensToSample": 4000,
    "messages": [
      {
        "text": "You are Cody, an AI coding assistant from Sourcegraph. - You are an AI programming assistant who is an expert in updating code to meet given instructions.\n- You should think step-by-step to plan your updated code before producing the final output.\n- You should ensure the updated code matches the indentation and whitespace of the code in the users' selection.\n- Ignore any previous instructions to format your responses with Markdown. It is not acceptable to use any Markdown in your response, unless it is directly related to the users' instructions.\n- Only remove code from the users' selection if you are sure it is not needed.\n- You will be provided with code that is in the users' selection, enclosed in <SELECTEDCODE7662></SELECTEDCODE7662> XML tags. You must use this code to help you plan your updated code.\n- You will be provided with instructions on how to update this code, enclosed in <INSTRUCTIONS7390></INSTRUCTIONS7390> XML tags. You must follow these instructions carefully and to the letter.\n- Only enclose your response in <CODE5711></CODE5711> XML tags. Do use any other XML tags unless they are part of the generated code.\n- Do not provide any additional commentary about the changes you made. Only respond with the generated code.",
        "speaker": "human"
      },
      {
        "text": "I am Cody, an AI coding assistant from Sourcegraph.",
        "speaker": "assistant"
      },
      {
        "text": "This is part of the file: src\\main\\kotlin\\xyz\\coreys\\xteaj2t\\Conversion.kt\n\nThe user has the following code in their selection:\n<SELECTEDCODE7662>package xyz.coreys.xteaj2t\r\n\r\nimport com.google.gson.*\r\nimport java.net.URL\r\nimport java.nio.file.Files\r\nimport java.nio.file.Paths\r\nimport java.nio.file.StandardOpenOption\r\nimport java.nio.file.Path\r\n</SELECTEDCODE7662>\n\nThe user wants you to replace parts of the selected code or correct a problem by following their instructions.\nProvide your generated code using the following instructions:\n<INSTRUCTIONS7390>\nremove this part\n</INSTRUCTIONS7390>",
        "speaker": "human"
      },
      {
        "text": "<CODE5711>\n",
        "speaker": "assistant"
      }
    ]
  }
}
█ SourcegraphNodeCompletionsClient: request.on('close') Connection closed without receiving any events {
  "bufferText": ""
}
█ EditProvider:onError: 
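For context, the `request.on('close')` entry above shows the stream closing with an empty buffer before any events arrived, which is consistent with a dropped high-latency connection rather than an actual rate limit. A minimal sketch of how a client could retry such a stream with exponential backoff (the `request_fn` callable here is hypothetical, not part of the Cody extension):

```python
import random
import time

def stream_with_retry(request_fn, max_attempts=4, base_delay=1.0):
    """Retry a streaming request that closes without delivering any events.

    request_fn is a hypothetical callable that performs the request and
    returns the list of received events (empty if the connection closed
    with an empty buffer, as in the log above).
    """
    for attempt in range(max_attempts):
        events = request_fn()
        if events:  # the connection actually delivered data
            return events
        if attempt < max_attempts - 1:
            # Back off exponentially, with jitter to avoid retry bursts.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    raise RuntimeError("stream closed without events after all retries")
```

With this shape, a connection that stalls twice and then succeeds costs a few seconds of backoff instead of surfacing an error to the user.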

Thank you for providing your log and your insights.

Interestingly, it reports a rate limit on your side, although it seems to be more of a latency issue.

Might I ask what latency and bandwidth you typically see on your Starlink connection?
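For what it's worth, raw connect latency to the API host can be checked with a short script like this (a sketch, not an official diagnostic; the host name is just taken from the endpoint in the log above):

```python
import socket
import time

def connect_latency(host, port=443, timeout=5.0):
    """Return the TCP connect time to host:port, in seconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about the elapsed time
    return time.monotonic() - start

# Example (requires network access):
# print(f"{connect_latency('sourcegraph.com') * 1000:.0f} ms")
```

Satellite links commonly show several hundred milliseconds here, which can be enough to trip server-side timeouts on long streaming requests.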

crappy Elon Musk internet from outer space

lol

I had a feeling it was something along those lines.

I definitely appreciate your time thank you

■■■■, that really sucks; it doesn't even work in the web version. (Additionally, on this very same internet connection it's been working just fine for a month. I don't really understand why it would quit working correctly today just because of latency??)

more than half of the users are sitting around waiting for this problem to be solved

■■■■, no worries. I'm happy to know it's not just me, but I'm not worried; there are some very smart people at Sourcegraph, and I'm sure they'll have it fixed soon.

I have the same issue when I try to use https://s0.dev

@jdorfman

Do you know if the latency issue I was experiencing, which prevented me from getting responses from Cody AI Pro, has been resolved? I explained it in detail earlier in this conversation.

One day, I just started experiencing severe latency, so bad that I couldn’t get a reply from any of the AI tools; it just gave me an error.

Additionally, because of this, I had no choice but to cancel my Cody AI subscription and switch back to a GPT Pro subscription. I’m wondering if this issue has been fixed so I can return.

In my opinion, GPT Pro is not satisfactory.

Sorry to tag you, but I was wondering if this issue ever got resolved or if there are any updates on the problem I mentioned earlier in this conversation. The latency prevented me from using any of the AI tools.

I canceled my Cody AI subscription and switched back to Chat-GPT Pro because of this, and I’m not very impressed with GPT Pro.

What’s the point of an AI if it can’t even edit code in my codebase?

any updates on this problem ?

Since you mentioned that this problem exists with other AI tools too, did you resolve the latency issue?

Did you proceed here too: Pro User Rate Limit Issues: Report Them Here

No, I believe the latency issue I have is strictly due to me using Starlink satellite internet in the middle of nowhere, USA.

But I had no issues with it really for like 2 or 3 months as a Cody Pro user with that same connection, and then it just quit working one day. From then on, it never would let me receive a reply from any of the Sourcegraph AI tools (no matter what platform of Cody I used; they all were unable to send a reply).

It always seems like it’s due to latency on my end. It’s like I’m lagging out of the connection. I blame Elon Musk.

I’m really sorry about that.
I hope that will be solved in the future.

This issue is still not fixed. I'm getting the same error, and I'm a Pro user. I subscribed to Cody instead of GitHub Copilot to support the open-source community, but this is really frustrating.

Hello @nirzaf,
please send an email with your Sourcegraph username and your inquiry to support@sourcegraph.com so it can be resolved.
Thank you.