mirror of https://github.com/Yidadaa/ChatGPT-Next-Web.git
synced 2025-08-31 03:09:04 +08:00

Compare commits: website...feature/ch

51 Commits
SHA1
ba8e2414c6
fe7f726c4b
6b914b7ced
175c52b13b
0db21cf836
5b1a759f86
0c3d4462ca
3c859fc29f
1d15666713
a127ae1fb4
ea1329f73e
149d732cb7
210b29bfbe
acc2e97aab
93ac0e5017
ed8c3580c8
0a056a7c5c
74c4711cdd
eceec092cf
42743410a8
0f04756d4c
ab158bf042
14e3b409cc
acdded8161
e939ce5a02
46a0b100f7
e27e8fb0e1
a433d1606c
83cea3a90d
759a09a76c
2623a92763
3932c594c7
b7acb89096
ef24d3e633
23350c842b
a2adfbbd32
f22cec1eb4
e56216549e
19facc7c85
b08ce5630c
b41c012d27
a392daab71
0628ddfc6f
7eda14f138
9a86c42c95
819d249a09
8d66fedb1f
7cf89b53ce
459c373f13
1d14a991ee
05ef5adfa7
README.md

@@ -1,16 +1,17 @@
 <div align="center">

-<a href='#企业版'>
-<img src="./docs/images/ent.svg" alt="icon"/>
+<a href='https://nextchat.dev/chat'>
+<img src="https://github.com/user-attachments/assets/287c510f-f508-478e-ade3-54d30453dc18" width="1000" alt="icon"/>
 </a>


 <h1 align="center">NextChat (ChatGPT Next Web)</h1>

 English / [简体中文](./README_CN.md)

-One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4 & Gemini Pro support.
+One-Click to get a well-designed cross-platform ChatGPT web UI, with Claude, GPT4 & Gemini Pro support.

-一键免费部署你的跨平台私人 ChatGPT 应用, 支持 GPT3, GPT4 & Gemini Pro 模型。
+一键免费部署你的跨平台私人 ChatGPT 应用, 支持 Claude, GPT4 & Gemini Pro 模型。

 [![Saas][Saas-image]][saas-url]
 [![Web][Web-image]][web-url]
@@ -31,7 +32,7 @@ One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4
 [MacOS-image]: https://img.shields.io/badge/-MacOS-black?logo=apple
 [Linux-image]: https://img.shields.io/badge/-Linux-333?logo=ubuntu

-[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html) [<img src="https://svgshare.com/i/1AVg.svg" alt="Deploy to Alibaba Cloud" height="30">](https://computenest.aliyun.com/market/service-f1c9b75e59814dc49d52)
+[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https://github.com/Dogtiti/ChatGPT-Next-Web-EarlyBird&env=OPENAI_API_KEY&env=CODE&project-name=nextchat-earlyBird&repository-name=NextChat-EarlyBird) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html)

 [<img src="https://github.com/user-attachments/assets/903482d4-3e87-4134-9af1-f2588fa90659" height="60" width="288" >](https://monica.im/?utm=nxcrp)

@@ -355,6 +356,13 @@ For ByteDance: use `modelName@bytedance=deploymentName` to customize model name

 Change default model

+### `VISION_MODELS` (optional)
+
+> Default: Empty
+> Example: `gpt-4-vision,claude-3-opus,my-custom-model` means add vision capabilities to these models in addition to the default pattern matches (which detect models containing keywords like "vision", "claude-3", "gemini-1.5", etc).
+
+Add additional models to have vision capabilities, beyond the default pattern matching. Multiple models should be separated by commas.
+
 ### `WHITE_WEBDAV_ENDPOINTS` (optional)

 You can use this option if you want to increase the number of webdav service addresses you are allowed to access, as required by the format:
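The `VISION_MODELS` variable described above is additive: the comma-separated names are merged with the built-in keyword matches. A minimal standalone sketch of that merge (the function name and the pattern subset here are illustrative, not the app's actual identifiers):

```typescript
// Subset of the built-in vision-model patterns, for illustration only.
const DEFAULT_PATTERNS = [/vision/, /claude-3/, /gemini-1\.5/];

// envValue mirrors VISION_MODELS, e.g. "gpt-4-vision,claude-3-opus,my-custom-model".
function isVisionModel(model: string, envValue: string): boolean {
  const extras = envValue
    .split(",")
    .map((m) => m.trim())
    .filter(Boolean);
  // A model is vision-capable if explicitly listed or if it matches a pattern.
  return extras.includes(model) || DEFAULT_PATTERNS.some((re) => re.test(model));
}
```

With this shape, an unlisted model like `gpt-3.5-turbo` stays text-only while `my-custom-model` gains vision once added to the list.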
@@ -469,7 +477,7 @@ If you want to add a new translation, read this [document](./docs/translation.md

 ## Donation

-[Buy Me a Coffee](https://www.buymeacoffee.com/yidadaa)
+[Buy Me a Coffee](https://1kafei.com/dogtiti)

 ## Special Thanks

@@ -33,7 +33,7 @@

 1. 准备好你的 [OpenAI API Key](https://platform.openai.com/account/api-keys);
 2. 点击右侧按钮开始部署:
-   [](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&env=GOOGLE_API_KEY&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web),直接使用 Github 账号登录即可,记得在环境变量页填入 API Key 和[页面访问密码](#配置页面访问密码) CODE;
+   [](https://vercel.com/new/clone?repository-url=https://github.com/Dogtiti/ChatGPT-Next-Web-EarlyBird&env=OPENAI_API_KEY&env=CODE&project-name=nextchat-earlyBird&repository-name=NextChat-EarlyBird),直接使用 Github 账号登录即可,记得在环境变量页填入 API Key 和[页面访问密码](#配置页面访问密码) CODE;
 3. 部署完毕后,即可开始使用;
 4. (可选)[绑定自定义域名](https://vercel.com/docs/concepts/projects/domains/add-a-domain):Vercel 分配的域名 DNS 在某些区域被污染了,绑定自定义域名即可直连。

@@ -235,6 +235,13 @@ ChatGLM Api Url.

 更改默认模型

+### `VISION_MODELS` (可选)
+
+> 默认值:空
+> 示例:`gpt-4-vision,claude-3-opus,my-custom-model` 表示为这些模型添加视觉能力,作为对默认模式匹配的补充(默认会检测包含"vision"、"claude-3"、"gemini-1.5"等关键词的模型)。
+
+在默认模式匹配之外,添加更多具有视觉能力的模型。多个模型用逗号分隔。
+
 ### `DEFAULT_INPUT_TEMPLATE` (可选)

 自定义默认的 template,用于初始化『设置』中的『用户输入预处理』配置项
@@ -30,7 +30,7 @@

 1. [OpenAI API Key](https://platform.openai.com/account/api-keys)を準備する;
 2. 右側のボタンをクリックしてデプロイを開始:
-   [](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&env=GOOGLE_API_KEY&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web) 、GitHubアカウントで直接ログインし、環境変数ページにAPI Keyと[ページアクセスパスワード](#設定ページアクセスパスワード) CODEを入力してください;
+   [](https://vercel.com/new/clone?repository-url=https://github.com/Dogtiti/ChatGPT-Next-Web-EarlyBird&env=OPENAI_API_KEY&env=CODE&project-name=nextchat-earlyBird&repository-name=NextChat-EarlyBird) 、GitHubアカウントで直接ログインし、環境変数ページにAPI Keyと[ページアクセスパスワード](#設定ページアクセスパスワード) CODEを入力してください;
 3. デプロイが完了したら、すぐに使用を開始できます;
 4. (オプション)[カスタムドメインをバインド](https://vercel.com/docs/concepts/projects/domains/add-a-domain):Vercelが割り当てたドメインDNSは一部の地域で汚染されているため、カスタムドメインをバインドすると直接接続できます。

@@ -217,6 +217,13 @@ ByteDance モードでは、`modelName@bytedance=deploymentName` 形式でモデ

 デフォルトのモデルを変更します。

+### `VISION_MODELS` (オプション)
+
+> デフォルト:空
+> 例:`gpt-4-vision,claude-3-opus,my-custom-model` は、これらのモデルにビジョン機能を追加します。これはデフォルトのパターンマッチング("vision"、"claude-3"、"gemini-1.5"などのキーワードを含むモデルを検出)に加えて適用されます。
+
+デフォルトのパターンマッチングに加えて、追加のモデルにビジョン機能を付与します。複数のモデルはカンマで区切ります。
+
 ### `DEFAULT_INPUT_TEMPLATE` (オプション)

 『設定』の『ユーザー入力前処理』の初期設定に使用するテンプレートをカスタマイズします。
@@ -14,7 +14,7 @@ function getModels(remoteModelRes: OpenAIListModelResponse) {
   if (config.disableGPT4) {
     remoteModelRes.data = remoteModelRes.data.filter(
       (m) =>
-        !(m.id.startsWith("gpt-4") || m.id.startsWith("chatgpt-4o")) ||
+        !(m.id.startsWith("gpt-4") || m.id.startsWith("chatgpt-4o") || m.id.startsWith("o1")) ||
         m.id.startsWith("gpt-4o-mini"),
     );
   }
@@ -25,12 +25,103 @@ import { getMessageTextContent } from "@/app/utils";
 import { RequestPayload } from "./openai";
 import { fetch } from "@/app/utils/stream";

+interface BasePayload {
+  model: string;
+}
+
+interface ChatPayload extends BasePayload {
+  messages: ChatOptions["messages"];
+  stream?: boolean;
+  temperature?: number;
+  presence_penalty?: number;
+  frequency_penalty?: number;
+  top_p?: number;
+}
+
+interface ImageGenerationPayload extends BasePayload {
+  prompt: string;
+  size?: string;
+  user_id?: string;
+}
+
+interface VideoGenerationPayload extends BasePayload {
+  prompt: string;
+  duration?: number;
+  resolution?: string;
+  user_id?: string;
+}
+
+type ModelType = "chat" | "image" | "video";
+
 export class ChatGLMApi implements LLMApi {
   private disableListModels = true;

+  private getModelType(model: string): ModelType {
+    if (model.startsWith("cogview-")) return "image";
+    if (model.startsWith("cogvideo-")) return "video";
+    return "chat";
+  }
+
+  private getModelPath(type: ModelType): string {
+    switch (type) {
+      case "image":
+        return ChatGLM.ImagePath;
+      case "video":
+        return ChatGLM.VideoPath;
+      default:
+        return ChatGLM.ChatPath;
+    }
+  }
+
+  private createPayload(
+    messages: ChatOptions["messages"],
+    modelConfig: any,
+    options: ChatOptions,
+  ): BasePayload {
+    const modelType = this.getModelType(modelConfig.model);
+    const lastMessage = messages[messages.length - 1];
+    const prompt =
+      typeof lastMessage.content === "string"
+        ? lastMessage.content
+        : lastMessage.content.map((c) => c.text).join("\n");
+
+    switch (modelType) {
+      case "image":
+        return {
+          model: modelConfig.model,
+          prompt,
+          size: options.config.size,
+        } as ImageGenerationPayload;
+      default:
+        return {
+          messages,
+          stream: options.config.stream,
+          model: modelConfig.model,
+          temperature: modelConfig.temperature,
+          presence_penalty: modelConfig.presence_penalty,
+          frequency_penalty: modelConfig.frequency_penalty,
+          top_p: modelConfig.top_p,
+        } as ChatPayload;
+    }
+  }
+
+  private parseResponse(modelType: ModelType, json: any): string {
+    switch (modelType) {
+      case "image": {
+        const imageUrl = json.data?.[0]?.url;
+        return imageUrl ? `![Generated Image](${imageUrl})` : "";
+      }
+      case "video": {
+        const videoUrl = json.data?.[0]?.url;
+        return videoUrl ? `<video controls src="${videoUrl}"></video>` : "";
+      }
+      default:
+        return this.extractMessage(json);
+    }
+  }
+
   path(path: string): string {
     const accessStore = useAccessStore.getState();

     let baseUrl = "";

     if (accessStore.useCustomConfig) {
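The `getModelType` helper added above routes requests by model-name prefix. The same dispatch can be sketched standalone (the `cogview-`/`cogvideo-` prefixes come from the diff; everything else here is illustrative):

```typescript
type ModelType = "chat" | "image" | "video";

// Prefix-based routing, mirroring the helper in the diff above.
function getModelType(model: string): ModelType {
  if (model.startsWith("cogview-")) return "image"; // image generation models
  if (model.startsWith("cogvideo-")) return "video"; // video generation models
  return "chat"; // everything else goes to the chat endpoint
}
```

Because routing is purely prefix-based, any future `cogview-*` or `cogvideo-*` model is picked up without code changes, while unknown names safely fall back to chat.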
@@ -51,7 +142,6 @@ export class ChatGLMApi implements LLMApi {
     }

     console.log("[Proxy Endpoint] ", baseUrl, path);
-
     return [baseUrl, path].join("/");
   }

@@ -79,24 +169,16 @@ export class ChatGLMApi implements LLMApi {
       },
     };

-    const requestPayload: RequestPayload = {
-      messages,
-      stream: options.config.stream,
-      model: modelConfig.model,
-      temperature: modelConfig.temperature,
-      presence_penalty: modelConfig.presence_penalty,
-      frequency_penalty: modelConfig.frequency_penalty,
-      top_p: modelConfig.top_p,
-    };
+    const modelType = this.getModelType(modelConfig.model);
+    const requestPayload = this.createPayload(messages, modelConfig, options);
+    const path = this.path(this.getModelPath(modelType));

-    console.log("[Request] glm payload: ", requestPayload);
+    console.log(`[Request] glm ${modelType} payload: `, requestPayload);

-    const shouldStream = !!options.config.stream;
     const controller = new AbortController();
     options.onController?.(controller);

     try {
-      const chatPath = this.path(ChatGLM.ChatPath);
       const chatPayload = {
         method: "POST",
         body: JSON.stringify(requestPayload),
@@ -104,12 +186,23 @@ export class ChatGLMApi implements LLMApi {
         headers: getHeaders(),
       };

-      // make a fetch request
       const requestTimeoutId = setTimeout(
         () => controller.abort(),
         REQUEST_TIMEOUT_MS,
       );

+      if (modelType === "image" || modelType === "video") {
+        const res = await fetch(path, chatPayload);
+        clearTimeout(requestTimeoutId);
+
+        const resJson = await res.json();
+        console.log(`[Response] glm ${modelType}:`, resJson);
+        const message = this.parseResponse(modelType, resJson);
+        options.onFinish(message, res);
+        return;
+      }
+
+      const shouldStream = !!options.config.stream;
       if (shouldStream) {
         const [tools, funcs] = usePluginStore
           .getState()
@@ -117,7 +210,7 @@ export class ChatGLMApi implements LLMApi {
             useChatStore.getState().currentSession().mask?.plugin || [],
           );
         return stream(
-          chatPath,
+          path,
          requestPayload,
          getHeaders(),
          tools as any,
@@ -125,7 +218,6 @@ export class ChatGLMApi implements LLMApi {
          controller,
          // parseSSE
          (text: string, runTools: ChatMessageTool[]) => {
-            // console.log("parseSSE", text, runTools);
            const json = JSON.parse(text);
            const choices = json.choices as Array<{
              delta: {
@@ -154,7 +246,7 @@ export class ChatGLMApi implements LLMApi {
            }
            return choices[0]?.delta?.content;
          },
-          // processToolMessage, include tool_calls message and tool call results
+          // processToolMessage
          (
            requestPayload: RequestPayload,
            toolCallMessage: any,
@@ -172,7 +264,7 @@ export class ChatGLMApi implements LLMApi {
          options,
        );
      } else {
-        const res = await fetch(chatPath, chatPayload);
+        const res = await fetch(path, chatPayload);
        clearTimeout(requestTimeoutId);

        const resJson = await res.json();
@@ -184,6 +276,7 @@ export class ChatGLMApi implements LLMApi {
      options.onError?.(e as Error);
    }
  }
+
  async usage() {
    return {
      used: 0,
@@ -29,7 +29,7 @@ import { RequestPayload } from "./openai";
 import { fetch } from "@/app/utils/stream";

 export class GeminiProApi implements LLMApi {
-  path(path: string): string {
+  path(path: string, shouldStream = false): string {
     const accessStore = useAccessStore.getState();

     let baseUrl = "";
@@ -51,8 +51,10 @@ export class GeminiProApi implements LLMApi {
     console.log("[Proxy Endpoint] ", baseUrl, path);

     let chatPath = [baseUrl, path].join("/");
+    if (shouldStream) {
+      chatPath += chatPath.includes("?") ? "&alt=sse" : "?alt=sse";
+    }

-    chatPath += chatPath.includes("?") ? "&alt=sse" : "?alt=sse";
     return chatPath;
   }
   extractMessage(res: any) {
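The change above makes the `alt=sse` query parameter conditional on streaming, choosing `&` or `?` depending on whether a query string already exists. A self-contained sketch of that URL building (the function name is illustrative; the `alt=sse` logic matches the diff):

```typescript
// Append alt=sse only for streaming requests, respecting an existing query string.
function buildChatPath(baseUrl: string, path: string, shouldStream = false): string {
  let chatPath = [baseUrl, path].join("/");
  if (shouldStream) {
    chatPath += chatPath.includes("?") ? "&alt=sse" : "?alt=sse";
  }
  return chatPath;
}
```

Non-streaming calls now get a plain URL, which avoids asking the Gemini endpoint for an SSE response it would have to buffer.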
@@ -60,6 +62,7 @@ export class GeminiProApi implements LLMApi {

     return (
       res?.candidates?.at(0)?.content?.parts.at(0)?.text ||
+      res?.at(0)?.candidates?.at(0)?.content?.parts.at(0)?.text ||
       res?.error?.message ||
       ""
     );
@@ -166,7 +169,10 @@ export class GeminiProApi implements LLMApi {
     options.onController?.(controller);
     try {
       // https://github.com/google-gemini/cookbook/blob/main/quickstarts/rest/Streaming_REST.ipynb
-      const chatPath = this.path(Google.ChatPath(modelConfig.model));
+      const chatPath = this.path(
+        Google.ChatPath(modelConfig.model),
+        shouldStream,
+      );

       const chatPayload = {
         method: "POST",
@@ -24,7 +24,7 @@ import {
   stream,
 } from "@/app/utils/chat";
 import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";
-import { DalleSize, DalleQuality, DalleStyle } from "@/app/typing";
+import { ModelSize, DalleQuality, DalleStyle } from "@/app/typing";

 import {
   ChatOptions,
@@ -73,7 +73,7 @@ export interface DalleRequestPayload {
   prompt: string;
   response_format: "url" | "b64_json";
   n: number;
-  size: DalleSize;
+  size: ModelSize;
   quality: DalleQuality;
   style: DalleStyle;
 }
@@ -224,7 +224,7 @@ export class ChatGPTApi implements LLMApi {
       // O1 not support image, tools (plugin in ChatGPTNextWeb) and system, stream, logprobs, temperature, top_p, n, presence_penalty, frequency_penalty yet.
       requestPayload = {
         messages,
-        stream: !isO1 ? options.config.stream : false,
+        stream: options.config.stream,
         model: modelConfig.model,
         temperature: !isO1 ? modelConfig.temperature : 1,
         presence_penalty: !isO1 ? modelConfig.presence_penalty : 0,
@@ -247,7 +247,7 @@ export class ChatGPTApi implements LLMApi {

     console.log("[Request] openai payload: ", requestPayload);

-    const shouldStream = !isDalle3 && !!options.config.stream && !isO1;
+    const shouldStream = !isDalle3 && !!options.config.stream;
     const controller = new AbortController();
     options.onController?.(controller);

@@ -72,6 +72,8 @@ import {
   isDalle3,
   showPlugins,
   safeLocalStorage,
+  getModelSizes,
+  supportsCustomSize,
 } from "../utils";

 import { uploadImage as uploadImageRemote } from "@/app/utils/chat";
@@ -79,7 +81,7 @@ import { uploadImage as uploadImageRemote } from "@/app/utils/chat";
 import dynamic from "next/dynamic";

 import { ChatControllerPool } from "../client/controller";
-import { DalleSize, DalleQuality, DalleStyle } from "../typing";
+import { DalleQuality, DalleStyle, ModelSize } from "../typing";
 import { Prompt, usePromptStore } from "../store/prompt";
 import Locale from "../locales";

@@ -519,10 +521,11 @@ export function ChatActions(props: {
   const [showSizeSelector, setShowSizeSelector] = useState(false);
   const [showQualitySelector, setShowQualitySelector] = useState(false);
   const [showStyleSelector, setShowStyleSelector] = useState(false);
-  const dalle3Sizes: DalleSize[] = ["1024x1024", "1792x1024", "1024x1792"];
+  const modelSizes = getModelSizes(currentModel);
   const dalle3Qualitys: DalleQuality[] = ["standard", "hd"];
   const dalle3Styles: DalleStyle[] = ["vivid", "natural"];
-  const currentSize = session.mask.modelConfig?.size ?? "1024x1024";
+  const currentSize =
+    session.mask.modelConfig?.size ?? ("1024x1024" as ModelSize);
   const currentQuality = session.mask.modelConfig?.quality ?? "standard";
   const currentStyle = session.mask.modelConfig?.style ?? "vivid";

@@ -673,7 +676,7 @@ export function ChatActions(props: {
         />
       )}

-      {isDalle3(currentModel) && (
+      {supportsCustomSize(currentModel) && (
         <ChatAction
           onClick={() => setShowSizeSelector(true)}
           text={currentSize}
@@ -684,7 +687,7 @@ export function ChatActions(props: {
       {showSizeSelector && (
         <Selector
           defaultSelectedValue={currentSize}
-          items={dalle3Sizes.map((m) => ({
+          items={modelSizes.map((m) => ({
             title: m,
             value: m,
           }))}
@@ -960,9 +963,24 @@ function _Chat() {
           (scrollRef.current.scrollTop + scrollRef.current.clientHeight),
       ) <= 1
     : false;
+  const isAttachWithTop = useMemo(() => {
+    const lastMessage = scrollRef.current?.lastElementChild as HTMLElement;
+    // if scrolllRef is not ready or no message, return false
+    if (!scrollRef?.current || !lastMessage) return false;
+    const topDistance =
+      lastMessage!.getBoundingClientRect().top -
+      scrollRef.current.getBoundingClientRect().top;
+    // leave some space for user question
+    return topDistance < 100;
+  }, [scrollRef?.current?.scrollHeight]);
+
+  const isTyping = userInput !== "";
+
+  // if user is typing, should auto scroll to bottom
+  // if user is not typing, should auto scroll to bottom only if already at bottom
   const { setAutoScroll, scrollDomToBottom } = useScrollToBottom(
     scrollRef,
-    isScrolledToBottom,
+    (isScrolledToBottom || isAttachWithTop) && !isTyping,
   );
   const [hitBottom, setHitBottom] = useState(true);
   const isMobileScreen = useMobileScreen();
@@ -2071,6 +2089,6 @@ function _Chat() {

 export function Chat() {
   const chatStore = useChatStore();
-  const sessionIndex = chatStore.currentSessionIndex;
-  return <_Chat key={sessionIndex}></_Chat>;
+  const session = chatStore.currentSession();
+  return <_Chat key={session.id}></_Chat>;
 }
@@ -37,7 +37,8 @@ export function Avatar(props: { model?: ModelType; avatar?: string }) {
   return (
     <div className="no-dark">
       {props.model?.startsWith("gpt-4") ||
-      props.model?.startsWith("chatgpt-4o") ? (
+      props.model?.startsWith("chatgpt-4o") ||
+      props.model?.startsWith("o1") ? (
         <BlackBotIcon className="user-avatar" />
       ) : (
         <BotIcon className="user-avatar" />
@@ -90,7 +90,11 @@ export function PreCode(props: { children: any }) {
       const refText = ref.current.querySelector("code")?.innerText;
       if (htmlDom) {
         setHtmlCode((htmlDom as HTMLElement).innerText);
-      } else if (refText?.startsWith("<!DOCTYPE")) {
+      } else if (
+        refText?.startsWith("<!DOCTYPE") ||
+        refText?.startsWith("<svg") ||
+        refText?.startsWith("<?xml")
+      ) {
         setHtmlCode(refText);
       }
     }, 600);
@@ -244,6 +248,10 @@ function escapeBrackets(text: string) {

 function tryWrapHtmlCode(text: string) {
   // try add wrap html code (fixed: html codeblock include 2 newline)
+  // ignore embed codeblock
+  if (text.includes("```")) {
+    return text;
+  }
   return text
     .replace(
       /([`]*?)(\w*?)([\n\r]*?)(<!DOCTYPE html>)/g,
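The guard added above skips wrapping when the text already contains a fenced code block. A simplified, self-contained sketch of the same idea (the wrapping regex here is a stand-in, not the app's actual pair of replace patterns):

```typescript
const FENCE = "`".repeat(3); // markdown code fence, built to avoid literal backtick runs

// Leave text untouched if it already has an embedded codeblock,
// otherwise wrap a bare HTML document in an html fence.
function tryWrapHtmlCode(text: string): string {
  if (text.includes(FENCE)) {
    return text; // ignore embedded codeblocks
  }
  return text.replace(
    /(<!DOCTYPE html>[\s\S]*<\/html>)/,
    `${FENCE}html\n$1\n${FENCE}`,
  );
}
```

Without the early return, re-running the wrapper on already-fenced output would nest fences and break rendering, which is exactly what the guard prevents.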
|
@@ -1771,9 +1771,11 @@ export function Settings() {
|
|||||||
<ListItem
|
<ListItem
|
||||||
title={Locale.Settings.Access.CustomModel.Title}
|
title={Locale.Settings.Access.CustomModel.Title}
|
||||||
subTitle={Locale.Settings.Access.CustomModel.SubTitle}
|
subTitle={Locale.Settings.Access.CustomModel.SubTitle}
|
||||||
|
vertical={true}
|
||||||
>
|
>
|
||||||
<input
|
<input
|
||||||
aria-label={Locale.Settings.Access.CustomModel.Title}
|
aria-label={Locale.Settings.Access.CustomModel.Title}
|
||||||
|
style={{ width: "100%", maxWidth: "unset", textAlign: "left" }}
|
||||||
type="text"
|
type="text"
|
||||||
value={config.customModels}
|
value={config.customModels}
|
||||||
placeholder="model1,model2,model3"
|
placeholder="model1,model2,model3"
|
||||||
|
@@ -40,6 +40,7 @@ export const getBuildConfig = () => {
     buildMode,
     isApp,
     template: process.env.DEFAULT_INPUT_TEMPLATE ?? DEFAULT_INPUT_TEMPLATE,
+    visionModels: process.env.VISION_MODELS || "",
   };
 };

@@ -129,14 +129,15 @@ export const getServerSideConfig = () => {
|
|||||||
if (customModels) customModels += ",";
|
if (customModels) customModels += ",";
|
||||||
customModels += DEFAULT_MODELS.filter(
|
customModels += DEFAULT_MODELS.filter(
|
||||||
(m) =>
|
(m) =>
|
||||||
(m.name.startsWith("gpt-4") || m.name.startsWith("chatgpt-4o")) &&
|
(m.name.startsWith("gpt-4") || m.name.startsWith("chatgpt-4o") || m.name.startsWith("o1")) &&
|
||||||
!m.name.startsWith("gpt-4o-mini"),
|
!m.name.startsWith("gpt-4o-mini"),
|
||||||
)
|
)
|
||||||
.map((m) => "-" + m.name)
|
.map((m) => "-" + m.name)
|
||||||
.join(",");
|
.join(",");
|
||||||
if (
|
if (
|
||||||
(defaultModel.startsWith("gpt-4") ||
|
(defaultModel.startsWith("gpt-4") ||
|
||||||
defaultModel.startsWith("chatgpt-4o")) &&
|
defaultModel.startsWith("chatgpt-4o") ||
|
||||||
|
defaultModel.startsWith("o1")) &&
|
||||||
!defaultModel.startsWith("gpt-4o-mini")
|
!defaultModel.startsWith("gpt-4o-mini")
|
||||||
)
|
)
|
||||||
defaultModel = "";
|
defaultModel = "";
|
||||||
|
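The hunk above re-emits each matching default model as a `-name` entry appended to `customModels`, which downstream code treats as "hidden"; the `o1` prefix is now part of that filter. A standalone sketch of that exclusion-list logic (the `buildHiddenModelList` name and the stubbed `DEFAULT_MODELS` are illustrative only; the real list lives in app/constant.ts):

```typescript
// Stub of DEFAULT_MODELS, reduced to the name field the filter actually uses.
const DEFAULT_MODELS = [
  { name: "gpt-4" },
  { name: "gpt-4o-mini" },
  { name: "chatgpt-4o-latest" },
  { name: "o1-preview" },
  { name: "gpt-3.5-turbo" },
];

// Mirror of the filter/map/join in the hunk: matching names become "-name"
// entries appended to any user-supplied customModels string.
function buildHiddenModelList(customModels: string): string {
  if (customModels) customModels += ",";
  customModels += DEFAULT_MODELS.filter(
    (m) =>
      (m.name.startsWith("gpt-4") ||
        m.name.startsWith("chatgpt-4o") ||
        m.name.startsWith("o1")) &&
      !m.name.startsWith("gpt-4o-mini"),
  )
    .map((m) => "-" + m.name)
    .join(",");
  return customModels;
}

console.log(buildHiddenModelList(""));
// -gpt-4,-chatgpt-4o-latest,-o1-preview
```

Note that `gpt-4o-mini` survives because of the trailing `!startsWith("gpt-4o-mini")` clause, even though it matches the `gpt-4` prefix.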
@@ -233,6 +233,8 @@ export const XAI = {
 export const ChatGLM = {
   ExampleEndpoint: CHATGLM_BASE_URL,
   ChatPath: "api/paas/v4/chat/completions",
+  ImagePath: "api/paas/v4/images/generations",
+  VideoPath: "api/paas/v4/videos/generations",
 };
 
 export const DEFAULT_INPUT_TEMPLATE = `{{input}}`; // input / time / model / lang
@@ -264,6 +266,7 @@ export const KnowledgeCutOffDate: Record<string, string> = {
   "gpt-4o": "2023-10",
   "gpt-4o-2024-05-13": "2023-10",
   "gpt-4o-2024-08-06": "2023-10",
+  "gpt-4o-2024-11-20": "2023-10",
   "chatgpt-4o-latest": "2023-10",
   "gpt-4o-mini": "2023-10",
   "gpt-4o-mini-2024-07-18": "2023-10",
@@ -290,6 +293,22 @@ export const DEFAULT_TTS_VOICES = [
   "shimmer",
 ];
 
+export const VISION_MODEL_REGEXES = [
+  /vision/,
+  /gpt-4o/,
+  /claude-3/,
+  /gemini-1\.5/,
+  /gemini-exp/,
+  /gemini-2\.0/,
+  /learnlm/,
+  /qwen-vl/,
+  /qwen2-vl/,
+  /gpt-4-turbo(?!.*preview)/, // Matches "gpt-4-turbo" but not "gpt-4-turbo-preview"
+  /^dall-e-3$/, // Matches exactly "dall-e-3"
+];
+
+export const EXCLUDE_VISION_MODEL_REGEXES = [/claude-3-5-haiku-20241022/];
+
 const openaiModels = [
   "gpt-3.5-turbo",
   "gpt-3.5-turbo-1106",
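The two regex lists added above drive vision-capability detection. A standalone exercise of a subset of them (the `matchesVision` helper is illustrative, not repo code): the negative lookahead in `/gpt-4-turbo(?!.*preview)/` accepts "gpt-4-turbo" but rejects "gpt-4-turbo-preview", and the anchored `/^dall-e-3$/` matches only that exact name.

```typescript
// Subset of VISION_MODEL_REGEXES from the diff, plus the exclude list.
const VISION_MODEL_REGEXES = [
  /vision/,
  /gpt-4o/,
  /claude-3/,
  /gpt-4-turbo(?!.*preview)/,
  /^dall-e-3$/,
];
const EXCLUDE_VISION_MODEL_REGEXES = [/claude-3-5-haiku-20241022/];

// Exclude list is checked first, then any vision regex may match.
const matchesVision = (model: string): boolean =>
  !EXCLUDE_VISION_MODEL_REGEXES.some((r) => r.test(model)) &&
  VISION_MODEL_REGEXES.some((r) => r.test(model));

console.log(matchesVision("gpt-4-turbo")); // true
console.log(matchesVision("gpt-4-turbo-preview")); // false: lookahead rejects it
console.log(matchesVision("claude-3-5-haiku-20241022")); // false: explicitly excluded
console.log(matchesVision("dall-e-3")); // true: exact anchored match
```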
@@ -303,6 +322,7 @@ const openaiModels = [
   "gpt-4o",
   "gpt-4o-2024-05-13",
   "gpt-4o-2024-08-06",
+  "gpt-4o-2024-11-20",
   "chatgpt-4o-latest",
   "gpt-4o-mini",
   "gpt-4o-mini-2024-07-18",
@@ -315,10 +335,23 @@ const openaiModels = [
 ];
 
 const googleModels = [
-  "gemini-1.0-pro",
+  "gemini-1.0-pro", // Deprecated on 2/15/2025
   "gemini-1.5-pro-latest",
+  "gemini-1.5-pro",
+  "gemini-1.5-pro-002",
+  "gemini-1.5-pro-exp-0827",
   "gemini-1.5-flash-latest",
-  "gemini-pro-vision",
+  "gemini-1.5-flash-8b-latest",
+  "gemini-1.5-flash",
+  "gemini-1.5-flash-8b",
+  "gemini-1.5-flash-002",
+  "gemini-1.5-flash-exp-0827",
+  "learnlm-1.5-pro-experimental",
+  "gemini-exp-1114",
+  "gemini-exp-1121",
+  "gemini-exp-1206",
+  "gemini-2.0-flash-exp",
+  "gemini-2.0-flash-thinking-exp-1219",
 ];
 
 const anthropicModels = [
@@ -400,6 +433,15 @@ const chatglmModels = [
   "glm-4-long",
   "glm-4-flashx",
   "glm-4-flash",
+  "glm-4v-plus",
+  "glm-4v",
+  "glm-4v-flash", // free
+  "cogview-3-plus",
+  "cogview-3",
+  "cogview-3-flash", // free
+  // cannot be adapted to the polling task yet
+  // "cogvideox",
+  // "cogvideox-flash", // free
 ];
 
 let seq = 1000; // the built-in model sequence generator starts at 1000
@@ -3,7 +3,7 @@ import { BuiltinMask } from "./typing";
 export const CN_MASKS: BuiltinMask[] = [
   {
     avatar: "1f5bc-fe0f",
-    name: "以文搜图",
+    name: "AI文生图",
     context: [
       {
         id: "text-to-pic-0",
@@ -28,7 +28,7 @@ export const CN_MASKS: BuiltinMask[] = [
       id: "text-to-pic-3",
       role: "system",
       content:
-        "助手善于判断用户意图,当确定需要提供图片时,助手会变得沉默寡言,只使用以下格式输出markdown图片:,因为这个语法可以自动按照提示生成并渲染图片。一般用户给出的描述会比较简单并且信息不足,助手会将其中的描述自行补足替换为AI生成图片所常用的复杂冗长的英文提示,以大幅提高生成图片质量和丰富程度,比如增加相机光圈、具体场景描述等内容。助手会避免用代码块或原始块包围markdown标记,因为那样只会渲染出代码块或原始块而不是图片。",
+        "助手善于判断用户意图,当确定需要提供图片时,助手会变得沉默寡言,只使用以下格式输出markdown图片:,因为这个语法可以自动按照提示生成并渲染图片。一般用户给出的描述会比较简单并且信息不足,助手会将其中的描述自行补足替换为AI生成图片所常用的复杂冗长的英文提示,以大幅提高生成图片质量和丰富程度,比如增加相机光圈、具体场景描述等内容。助手会避免用代码块或原始块包围markdown标记,因为那样只会渲染出代码块或原始块而不是图片。url中的空格等符号需要转义。",
       date: "",
     },
   ],
@@ -1,5 +1,5 @@
 import { LLMModel } from "../client/api";
-import { DalleSize, DalleQuality, DalleStyle } from "../typing";
+import { DalleQuality, DalleStyle, ModelSize } from "../typing";
 import { getClientConfig } from "../config/client";
 import {
   DEFAULT_INPUT_TEMPLATE,
@@ -78,7 +78,7 @@ export const DEFAULT_CONFIG = {
     compressProviderName: "",
     enableInjectSystemPrompts: true,
     template: config?.template ?? DEFAULT_INPUT_TEMPLATE,
-    size: "1024x1024" as DalleSize,
+    size: "1024x1024" as ModelSize,
     quality: "standard" as DalleQuality,
     style: "vivid" as DalleStyle,
   },
@@ -11,3 +11,14 @@ export interface RequestMessage {
 export type DalleSize = "1024x1024" | "1792x1024" | "1024x1792";
 export type DalleQuality = "standard" | "hd";
 export type DalleStyle = "vivid" | "natural";
+
+export type ModelSize =
+  | "1024x1024"
+  | "1792x1024"
+  | "1024x1792"
+  | "768x1344"
+  | "864x1152"
+  | "1344x768"
+  | "1152x864"
+  | "1440x720"
+  | "720x1440";
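The new `ModelSize` union is a strict superset of `DalleSize`, which is why the `size: "1024x1024" as ModelSize` cast in DEFAULT_CONFIG is a safe widening. A small assignability sketch:

```typescript
// The two unions from typing.ts, copied here standalone.
type DalleSize = "1024x1024" | "1792x1024" | "1024x1792";
type ModelSize =
  | "1024x1024"
  | "1792x1024"
  | "1024x1792"
  | "768x1344"
  | "864x1152"
  | "1344x768"
  | "1152x864"
  | "1440x720"
  | "720x1440";

const dalleSize: DalleSize = "1792x1024";
const widened: ModelSize = dalleSize; // OK: every DalleSize is a ModelSize
console.log(widened); // 1792x1024
```

The reverse assignment (`const narrowed: DalleSize = "768x1344" as ModelSize`) would be rejected by the compiler, so existing DALL-E-only call sites stay type-safe.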
app/utils.ts
@@ -5,6 +5,9 @@ import { RequestMessage } from "./client/api";
 import { ServiceProvider } from "./constant";
 // import { fetch as tauriFetch, ResponseType } from "@tauri-apps/api/http";
 import { fetch as tauriStreamFetch } from "./utils/stream";
+import { VISION_MODEL_REGEXES, EXCLUDE_VISION_MODEL_REGEXES } from "./constant";
+import { getClientConfig } from "./config/client";
+import { ModelSize } from "./typing";
 
 export function trimTopic(topic: string) {
   // Fix an issue where double quotes still show in the Indonesian language
@@ -252,25 +255,16 @@ export function getMessageImages(message: RequestMessage): string[] {
 }
 
 export function isVisionModel(model: string) {
-  // Note: This is a better way using the TypeScript feature instead of `&&` or `||` (ts v5.5.0-dev.20240314 I've been using)
-  const excludeKeywords = ["claude-3-5-haiku-20241022"];
-  const visionKeywords = [
-    "vision",
-    "claude-3",
-    "gemini-1.5-pro",
-    "gemini-1.5-flash",
-    "gpt-4o",
-    "gpt-4o-mini",
-  ];
-  const isGpt4Turbo =
-    model.includes("gpt-4-turbo") && !model.includes("preview");
-
+  const clientConfig = getClientConfig();
+  const envVisionModels = clientConfig?.visionModels
+    ?.split(",")
+    .map((m) => m.trim());
+  if (envVisionModels?.includes(model)) {
+    return true;
+  }
   return (
-    !excludeKeywords.some((keyword) => model.includes(keyword)) &&
-    (visionKeywords.some((keyword) => model.includes(keyword)) ||
-      isGpt4Turbo ||
-      isDalle3(model))
+    !EXCLUDE_VISION_MODEL_REGEXES.some((regex) => regex.test(model)) &&
+    VISION_MODEL_REGEXES.some((regex) => regex.test(model))
  );
 }
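The rewritten `isVisionModel` checks in a fixed order: names from the `VISION_MODELS` build setting win first, then the exclude list, then the vision regexes. A standalone sketch of that lookup order (`checkVision` and its second parameter stand in for the real function, which reads `getClientConfig()?.visionModels`; the regex lists are trimmed to a subset):

```typescript
const VISION_MODEL_REGEXES = [/vision/, /gpt-4o/, /claude-3/];
const EXCLUDE_VISION_MODEL_REGEXES = [/claude-3-5-haiku-20241022/];

function checkVision(model: string, visionModels?: string): boolean {
  // Comma-separated override list: any exact (trimmed) match short-circuits.
  const envVisionModels = visionModels?.split(",").map((m) => m.trim());
  if (envVisionModels?.includes(model)) {
    return true;
  }
  // Otherwise: not excluded, and at least one vision regex matches.
  return (
    !EXCLUDE_VISION_MODEL_REGEXES.some((regex) => regex.test(model)) &&
    VISION_MODEL_REGEXES.some((regex) => regex.test(model))
  );
}

console.log(checkVision("my-private-vlm", "my-private-vlm, other-model")); // true: env override
console.log(checkVision("my-private-vlm")); // false: no regex matches
console.log(checkVision("gpt-4o-mini")); // true: matches /gpt-4o/
```

Note the `.map((m) => m.trim())` step, which makes `"a, b"` and `"a,b"` behave the same in the override list.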
@@ -278,6 +272,28 @@ export function isDalle3(model: string) {
   return "dall-e-3" === model;
 }
 
+export function getModelSizes(model: string): ModelSize[] {
+  if (isDalle3(model)) {
+    return ["1024x1024", "1792x1024", "1024x1792"];
+  }
+  if (model.toLowerCase().includes("cogview")) {
+    return [
+      "1024x1024",
+      "768x1344",
+      "864x1152",
+      "1344x768",
+      "1152x864",
+      "1440x720",
+      "720x1440",
+    ];
+  }
+  return [];
+}
+
+export function supportsCustomSize(model: string): boolean {
+  return getModelSizes(model).length > 0;
+}
+
 export function showPlugins(provider: ServiceProvider, model: string) {
   if (
     provider == ServiceProvider.OpenAI ||
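The two helpers added to app/utils.ts can be exercised standalone (copied here with `ModelSize` simplified to `string`): DALL-E 3 gets its three fixed sizes, CogView models get seven, and everything else gets none, so `supportsCustomSize` doubles as the "show the size picker" predicate.

```typescript
function isDalle3(model: string): boolean {
  return "dall-e-3" === model;
}

function getModelSizes(model: string): string[] {
  if (isDalle3(model)) {
    return ["1024x1024", "1792x1024", "1024x1792"];
  }
  // Case-insensitive substring check covers cogview-3, cogview-3-plus, etc.
  if (model.toLowerCase().includes("cogview")) {
    return [
      "1024x1024",
      "768x1344",
      "864x1152",
      "1344x768",
      "1152x864",
      "1440x720",
      "720x1440",
    ];
  }
  return [];
}

function supportsCustomSize(model: string): boolean {
  return getModelSizes(model).length > 0;
}

console.log(supportsCustomSize("dall-e-3")); // true
console.log(getModelSizes("cogview-3-flash").length); // 7
console.log(supportsCustomSize("gpt-4o")); // false
```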
@@ -59,8 +59,8 @@
     "@tauri-apps/api": "^1.6.0",
     "@tauri-apps/cli": "1.5.11",
     "@testing-library/dom": "^10.4.0",
-    "@testing-library/jest-dom": "^6.6.2",
-    "@testing-library/react": "^16.0.1",
+    "@testing-library/jest-dom": "^6.6.3",
+    "@testing-library/react": "^16.1.0",
     "@types/jest": "^29.5.14",
     "@types/js-yaml": "4.0.9",
     "@types/lodash-es": "^4.17.12",
test/vision-model-checker.test.ts (new file)
@@ -0,0 +1,67 @@
+import { isVisionModel } from "../app/utils";
+
+describe("isVisionModel", () => {
+  const originalEnv = process.env;
+
+  beforeEach(() => {
+    jest.resetModules();
+    process.env = { ...originalEnv };
+  });
+
+  afterEach(() => {
+    process.env = originalEnv;
+  });
+
+  test("should identify vision models using regex patterns", () => {
+    const visionModels = [
+      "gpt-4-vision",
+      "claude-3-opus",
+      "gemini-1.5-pro",
+      "gemini-2.0",
+      "gemini-exp-vision",
+      "learnlm-vision",
+      "qwen-vl-max",
+      "qwen2-vl-max",
+      "gpt-4-turbo",
+      "dall-e-3",
+    ];
+
+    visionModels.forEach((model) => {
+      expect(isVisionModel(model)).toBe(true);
+    });
+  });
+
+  test("should exclude specific models", () => {
+    expect(isVisionModel("claude-3-5-haiku-20241022")).toBe(false);
+  });
+
+  test("should not identify non-vision models", () => {
+    const nonVisionModels = [
+      "gpt-3.5-turbo",
+      "gpt-4-turbo-preview",
+      "claude-2",
+      "regular-model",
+    ];
+
+    nonVisionModels.forEach((model) => {
+      expect(isVisionModel(model)).toBe(false);
+    });
+  });
+
+  test("should identify models from VISION_MODELS env var", () => {
+    process.env.VISION_MODELS = "custom-vision-model,another-vision-model";
+
+    expect(isVisionModel("custom-vision-model")).toBe(true);
+    expect(isVisionModel("another-vision-model")).toBe(true);
+    expect(isVisionModel("unrelated-model")).toBe(false);
+  });
+
+  test("should handle empty or missing VISION_MODELS", () => {
+    process.env.VISION_MODELS = "";
+    expect(isVisionModel("unrelated-model")).toBe(false);
+
+    delete process.env.VISION_MODELS;
+    expect(isVisionModel("unrelated-model")).toBe(false);
+    expect(isVisionModel("gpt-4-vision")).toBe(true);
+  });
+});
yarn.lock
16
yarn.lock
@@ -2114,10 +2114,10 @@
|
|||||||
lz-string "^1.5.0"
|
lz-string "^1.5.0"
|
||||||
pretty-format "^27.0.2"
|
pretty-format "^27.0.2"
|
||||||
|
|
||||||
"@testing-library/jest-dom@^6.6.2":
|
"@testing-library/jest-dom@^6.6.3":
|
||||||
version "6.6.2"
|
version "6.6.3"
|
||||||
resolved "https://registry.yarnpkg.com/@testing-library/jest-dom/-/jest-dom-6.6.2.tgz#8186aa9a07263adef9cc5a59a4772db8c31f4a5b"
|
resolved "https://registry.yarnpkg.com/@testing-library/jest-dom/-/jest-dom-6.6.3.tgz#26ba906cf928c0f8172e182c6fe214eb4f9f2bd2"
|
||||||
integrity sha512-P6GJD4yqc9jZLbe98j/EkyQDTPgqftohZF5FBkHY5BUERZmcf4HeO2k0XaefEg329ux2p21i1A1DmyQ1kKw2Jw==
|
integrity sha512-IteBhl4XqYNkM54f4ejhLRJiZNqcSCoXUOG2CPK7qbD322KjQozM4kHQOfkG2oln9b9HTYqs+Sae8vBATubxxA==
|
||||||
dependencies:
|
dependencies:
|
||||||
"@adobe/css-tools" "^4.4.0"
|
"@adobe/css-tools" "^4.4.0"
|
||||||
aria-query "^5.0.0"
|
aria-query "^5.0.0"
|
||||||
@@ -2127,10 +2127,10 @@
|
|||||||
lodash "^4.17.21"
|
lodash "^4.17.21"
|
||||||
redent "^3.0.0"
|
redent "^3.0.0"
|
||||||
|
|
||||||
"@testing-library/react@^16.0.1":
|
"@testing-library/react@^16.1.0":
|
||||||
version "16.0.1"
|
version "16.1.0"
|
||||||
resolved "https://registry.yarnpkg.com/@testing-library/react/-/react-16.0.1.tgz#29c0ee878d672703f5e7579f239005e4e0faa875"
|
resolved "https://registry.yarnpkg.com/@testing-library/react/-/react-16.1.0.tgz#aa0c61398bac82eaf89776967e97de41ac742d71"
|
||||||
integrity sha512-dSmwJVtJXmku+iocRhWOUFbrERC76TX2Mnf0ATODz8brzAZrMBbzLwQixlBSanZxR6LddK3eiwpSFZgDET1URg==
|
integrity sha512-Q2ToPvg0KsVL0ohND9A3zLJWcOXXcO8IDu3fj11KhNt0UlCWyFyvnCIBkd12tidB2lkiVRG8VFqdhcqhqnAQtg==
|
||||||
dependencies:
|
dependencies:
|
||||||
"@babel/runtime" "^7.12.5"
|
"@babel/runtime" "^7.12.5"
|
||||||
|
|
||||||