models.localModels is not iterable

Hi all, I have done a fresh install of VS Code (1.95.2) and Cody AI, and I am getting the error “models.localModels is not iterable” when trying to edit my JS files. I sign in to Cody AI successfully each time. I have uninstalled VS Code, rimraffed .vscode and .cache, and I am still getting the same error message.
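If it helps narrow things down, a TypeError of this shape is what JavaScript throws when code spreads or for-of-iterates a value that turned out not to be an array or other iterable. A minimal stand-alone illustration (not Cody's actual code, just the language behaviour, with made-up names):

```typescript
// Minimal illustration of the error class: spreading a value that is not
// an array or other iterable (e.g. it came back undefined or as an object).
const models: any = { localModels: undefined };

try {
  const all = [...models.localModels]; // TypeError: models.localModels is not iterable
  console.log(all);
} catch (err) {
  console.error((err as Error).message);
}

// A defensive guard avoids the crash:
const localModels: string[] = Array.isArray(models.localModels)
  ? models.localModels
  : [];
console.log(localModels); // []
```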

Version: 1.95.2
Commit: e8653663e8840adaf45af01eab5c627a5af81807
Date: 2024-11-07T11:07:22.054Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Linux x64 5.4.0-200-generic

Each time in the debug console I get the same error:

[error] {
  "agent": "vscode",
  "message": "Request Finished with Error",
  "protocol": "https",
  "timestamp": "2024-11-13T19:16:05.612Z",
  "timings": {
    "close": "2024-11-13T19:16:05.612Z",
    "error": "2024-11-13T19:16:05.612Z",
    "errorValue": {
      "code": "ECONNRESET",
      "message": "socket hang up",
      "stack": "Error: socket hang up\n\tat TLSSocket.socketCloseListener (node:_http_client:477:27)\n\tat TLSSocket.emit (node:events:531:35)\n\tat TLSSocket.emit (node:domain:488:12)\n\tat node:net:339:12\n\tat TCP.done (node:_tls_wrap:648:7)\n\tat TCP.callbackTrampoline (node:internal/async_hooks:130:17)"
    },
    "socket": {
      "connect": "2024-11-13T19:16:05.206Z",
      "start": "2024-11-13T19:16:05.164Z"
    },
    "start": "2024-11-13T19:16:04.975Z"
  },
  "url": "https://sourcegraph.com/.api/graphql?EvaluateFeatureFlag"
}
2024-11-13 21:16:05.614 [error] {
  "agent": "vscode",
  "message": "Request Finished with Error",
  "protocol": "https",
  "timestamp": "2024-11-13T19:16:05.614Z",
  "timings": {
    "close": "2024-11-13T19:16:05.614Z",
    "error": "2024-11-13T19:16:05.614Z",
    "errorValue": {
      "code": "ECONNRESET",
      "message": "socket hang up",
      "stack": "Error: socket hang up\n\tat TLSSocket.socketCloseListener (node:_http_client:477:27)\n\tat TLSSocket.emit (node:events:531:35)\n\tat TLSSocket.emit (node:domain:488:12)\n\tat node:net:339:12\n\tat TCP.done (node:_tls_wrap:648:7)\n\tat TCP.callbackTrampoline (node:internal/async_hooks:130:17)"
    },
    "socket": {
      "connect": "2024-11-13T19:16:05.205Z",
      "start": "2024-11-13T19:16:05.150Z"
    },
    "start": "2024-11-13T19:16:04.974Z"
  },
  "url": "https://sourcegraph.com/.api/graphql?EvaluateFeatureFlag"
}

Thank you for providing the Cody network logs.
There should be another log channel in the Output panel called “Cody by Sourcegraph”; that is where we find the relevant logs.

The first log you provided shouldn’t affect the functionality.
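For context, ECONNRESET with “socket hang up” just means the remote side closed the connection before a response arrived, which is typically a transient network hiccup. A minimal Node/TypeScript sketch that reproduces the same error (against a throwaway local server, nothing Cody- or Sourcegraph-specific) looks like this:

```typescript
import * as http from "node:http";

// Throwaway server that accepts the request and then destroys the socket
// before sending any response - what a dropped upstream connection looks
// like from the client's point of view.
const server = http.createServer((req) => {
  req.socket.destroy();
});

server.listen(0, () => {
  const { port } = server.address() as import("node:net").AddressInfo;

  const request = http.get({ host: "127.0.0.1", port, path: "/" }, () => {
    // Never reached: the server hangs up before responding.
  });

  request.on("error", (err: NodeJS.ErrnoException) => {
    console.log(err.code, err.message); // ECONNRESET socket hang up
    server.close();
  });
});
```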

Hi there 🙂

Thanks for the quick reply. Here are the logs from “Cody by Sourcegraph” after hitting Alt-K a couple of times and getting the same message. In the meantime I installed Debian 11 in VirtualBox, and after installing Cody AI (the only extension on the new installation) I got the same message again.

█ telemetry-v2 recordEvent: cody.extension/savedLogin:

█ auth Authenticating to https://sourcegraph.com/…:

█ ModelsService User model preferences changed: {"defaults":{"chat":"anthropic::2023-06-01::claude-3.5-sonnet","edit":"anthropic::2023-06-01::claude-3.5-sonnet","autocomplete":"fireworks::v1::deepseek-coder-v2-lite-base"},"selected":{}}

█ ChatsController:constructor init:

█ GraphQLTelemetryExporter evaluated export mode:: 5.2.5+

█ auth Authentication succeed to endpoint https://sourcegraph.com/:

█ telemetry-v2 recordEvent: cody.auth/connected:

█ SymfRunner unsafeEnsureIndex: file:///home/karanaso/coding/google-cloud-storage

█ ClientConfigSingleton refreshing configuration:

█ Autocomplete:initialized using "dotcom-feature-flags": "fireworks::deepseek-coder-v2-lite-base":

█ ClientConfigSingleton refreshed: {"codyEnabled":true,"chatEnabled":true,"autoCompleteEnabled":true,"customCommandsEnabled":true,"attributionEnabled":false,"smartContextWindowEnabled":true,"modelsAPIEnabled":false,"latestSupportedCompletionsStreamAPIVersion":5}

█ ModelsService new models API enabled:

█ ModelsService ModelsData changed: 13 primary models

█ ClientConfigSingleton refreshing configuration:

█ ClientConfigSingleton refreshed: {"codyEnabled":true,"chatEnabled":true,"autoCompleteEnabled":true,"customCommandsEnabled":true,"attributionEnabled":false,"smartContextWindowEnabled":true,"modelsAPIEnabled":false,"latestSupportedCompletionsStreamAPIVersion":5}

█ UpstreamHealth Ping took 294ms (Gateway: 317ms):