Request Failed: context deadline exceeded

This situation often occurs when using o1, and it also happens when Claude or GPT-4o outputs long sections of code. I believe the context length should be extended further.

Hello @wudizhe002

This is a known issue and happens because the model's reply exceeds the output context limit.

Try appending “Keep your answer brief!” to your prompt.

The team is evaluating increasing the output token limit.

Thank you