To specify the model's provider, append it after the model name with `@`, as before.
This format also supports providers such as Google Vertex AI, whose model names themselves contain an `@`, e.g. `claude-3-5-sonnet@20240620`.
For instance, `claude-3-5-sonnet@20240620@vertex-ai` is split on its last `@` by `split(/@(?!.*@)/)` into
`[ 'claude-3-5-sonnet@20240620', 'vertex-ai' ]`, where the former is the model name and the latter is the custom provider.
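
A minimal TypeScript sketch of this splitting logic; `splitModelAndProvider` is an illustrative name, not the actual function in the codebase:

```ts
// Sketch only: split a "model@provider" string on its last "@".
function splitModelAndProvider(model: string): [string, string | undefined] {
  // The regex matches an "@" that is not followed by another "@",
  // i.e. only the last "@" in the string.
  const [modelName, providerName] = model.split(/@(?!.*@)/);
  return [modelName, providerName];
}

console.log(splitModelAndProvider("claude-3-5-sonnet@20240620@vertex-ai"));
// => [ 'claude-3-5-sonnet@20240620', 'vertex-ai' ]
console.log(splitModelAndProvider("gpt-4o@azure"));
// => [ 'gpt-4o', 'azure' ]
```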
- [+] fix(common.ts): improve handling of the OpenAI-Organization header (see the sketch after this list)
  - Check whether serverConfig.openaiOrgId is defined and not an empty string
  - Log the value of openaiOrganizationHeader if it is present; otherwise log that the header is not present
  - Conditionally delete the OpenAI-Organization header from the response if [Org ID] is undefined or empty (not set up in ENV)
- [+] refactor(common.ts): remove an unnecessary console.log for [Org ID] in the requestOpenai function
- [+] refactor(common.ts): conditionally delete the OpenAI-Organization header from the response if [Org ID] is not set up in ENV
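
A hedged sketch of the header handling described in the fix entry above, assuming a `ServerConfig` shape with an `openaiOrgId` field; `buildResponseHeaders` is an illustrative helper, not the exact code in `common.ts`:

```ts
// Sketch only: ServerConfig and buildResponseHeaders are assumptions for illustration.
interface ServerConfig {
  openaiOrgId?: string;
}

function buildResponseHeaders(
  serverConfig: ServerConfig,
  upstreamHeaders: Headers,
): Headers {
  const headers = new Headers(upstreamHeaders);
  const openaiOrganizationHeader = headers.get("OpenAI-Organization");

  // Log the header value if present, otherwise note its absence.
  if (openaiOrganizationHeader) {
    console.log("[Org ID] OpenAI-Organization:", openaiOrganizationHeader);
  } else {
    console.log("[Org ID] OpenAI-Organization header is not present");
  }

  // If [Org ID] is undefined or empty (not set up in ENV), strip the header
  // from the response so it is not forwarded downstream.
  if (!serverConfig.openaiOrgId || serverConfig.openaiOrgId.trim() === "") {
    headers.delete("OpenAI-Organization");
  }

  return headers;
}
```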
This commit resolves a memory leak issue that was occurring due to fetch requests hanging indefinitely. A timeout has been introduced to the `requestOpenai` function which ensures that these requests are aborted after a set period of time (currently 10 minutes). Additionally, error handling has been added to catch and log `AbortError` when a fetch request is aborted. This fix significantly improves the stability and reliability of the application by preventing memory leaks related to unresolved fetch requests.
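
A minimal sketch of the timeout approach, assuming an `AbortController`-based abort; the 10-minute value comes from the commit message, while the `fetchWithTimeout` name and the logging are illustrative rather than the actual code in `requestOpenai`:

```ts
// Sketch only: abort a fetch request that has not settled within the timeout window.
const REQUEST_TIMEOUT_MS = 10 * 60 * 1000; // 10 minutes, per the commit message

async function fetchWithTimeout(
  url: string,
  init: RequestInit = {},
): Promise<Response> {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), REQUEST_TIMEOUT_MS);

  try {
    // Tie the request to the controller so it cannot hang (and leak memory) indefinitely.
    return await fetch(url, { ...init, signal: controller.signal });
  } catch (e: unknown) {
    // Log aborted requests, then rethrow so callers can surface the error.
    if ((e as { name?: string })?.name === "AbortError") {
      console.error("[Fetch] request aborted after timeout:", url);
    }
    throw e;
  } finally {
    clearTimeout(timeoutId);
  }
}
```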