GH Action - Upstream Sync 2025-01-04 06:36:47 +00:00
commit 641078744d
39 changed files with 1018 additions and 125 deletions

View File

@@ -1,16 +1,17 @@
 <div align="center">
-<a href='#企业版'>
-  <img src="./docs/images/ent.svg" alt="icon"/>
+<a href='https://nextchat.dev/chat'>
+  <img src="https://github.com/user-attachments/assets/287c510f-f508-478e-ade3-54d30453dc18" width="1000" alt="icon"/>
 </a>

 <h1 align="center">NextChat (ChatGPT Next Web)</h1>

 English / [简体中文](./README_CN.md)

-One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4 & Gemini Pro support.
+One-Click to get a well-designed cross-platform ChatGPT web UI, with Claude, GPT4 & Gemini Pro support.

-一键免费部署你的跨平台私人 ChatGPT 应用, 支持 GPT3, GPT4 & Gemini Pro 模型。
+一键免费部署你的跨平台私人 ChatGPT 应用, 支持 Claude, GPT4 & Gemini Pro 模型。

 [![Saas][Saas-image]][saas-url]
 [![Web][Web-image]][web-url]

@@ -18,9 +19,9 @@ One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4
 [![MacOS][MacOS-image]][download-url]
 [![Linux][Linux-image]][download-url]

-[NextChatAI](https://nextchat.dev/chat?utm_source=readme) / [Web App](https://app.nextchat.dev) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Enterprise Edition](#enterprise-edition) / [Twitter](https://twitter.com/NextChatDev)
+[NextChatAI](https://nextchat.dev/chat?utm_source=readme) / [Web App Demo](https://app.nextchat.dev) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Enterprise Edition](#enterprise-edition) / [Twitter](https://twitter.com/NextChatDev)

-[NextChatAI](https://nextchat.dev/chat) / [网页版](https://app.nextchat.dev) / [客户端](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [企业版](#%E4%BC%81%E4%B8%9A%E7%89%88) / [反馈](https://github.com/Yidadaa/ChatGPT-Next-Web/issues)
+[NextChatAI](https://nextchat.dev/chat) / [自部署网页版](https://app.nextchat.dev) / [客户端](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [企业版](#%E4%BC%81%E4%B8%9A%E7%89%88) / [反馈](https://github.com/Yidadaa/ChatGPT-Next-Web/issues)

 [saas-url]: https://nextchat.dev/chat?utm_source=readme
 [saas-image]: https://img.shields.io/badge/NextChat-Saas-green?logo=microsoftedge

@@ -31,7 +32,7 @@ One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4
 [MacOS-image]: https://img.shields.io/badge/-MacOS-black?logo=apple
 [Linux-image]: https://img.shields.io/badge/-Linux-333?logo=ubuntu

-[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html) [<img src="https://svgshare.com/i/1AVg.svg" alt="Deploy to Alibaba Cloud" height="30">](https://computenest.aliyun.com/market/service-f1c9b75e59814dc49d52)
+[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html)

 [<img src="https://github.com/user-attachments/assets/903482d4-3e87-4134-9af1-f2588fa90659" height="60" width="288" >](https://monica.im/?utm=nxcrp)

@@ -311,6 +312,14 @@ ChatGLM Api Key.
 ChatGLM Api Url.

+### `DEEPSEEK_API_KEY` (optional)
+
+DeepSeek Api Key.
+
+### `DEEPSEEK_URL` (optional)
+
+DeepSeek Api Url.
+
 ### `HIDE_USER_API_KEY` (optional)

 > Default: Empty

@@ -355,6 +364,13 @@ For ByteDance: use `modelName@bytedance=deploymentName` to customize model name
 Change default model

+### `VISION_MODELS` (optional)
+
+> Default: Empty
+> Example: `gpt-4-vision,claude-3-opus,my-custom-model` means add vision capabilities to these models in addition to the default pattern matches (which detect models containing keywords like "vision", "claude-3", "gemini-1.5", etc).
+
+Add additional models to have vision capabilities, beyond the default pattern matching. Multiple models should be separated by commas.
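Concretely, the value appears to be a plain comma-separated allowlist that is consulted before the built-in regex matching defined in app/constant.ts later in this diff. A minimal sketch of that flow — the function name and the stand-in parameter are assumptions, not upstream code:

// Sketch: parse VISION_MODELS and check it before the regex defaults.
const extraVisionModels = (process.env.VISION_MODELS ?? "")
  .split(",")
  .map((m) => m.trim())
  .filter(Boolean);

export function hasVisionCapability(
  model: string,
  matchesDefaultPatterns: (m: string) => boolean, // hypothetical stand-in for the regex matching
): boolean {
  // env-listed models win; everything else falls back to pattern matching
  return extraVisionModels.includes(model) || matchesDefaultPatterns(model);
}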
 ### `WHITE_WEBDAV_ENDPOINTS` (optional)

 You can use this option if you want to increase the number of webdav service addresses you are allowed to access, as required by the format

View File

@@ -192,6 +192,14 @@ ChatGLM Api Key.
 ChatGLM Api Url.

+### `DEEPSEEK_API_KEY` (可选)
+
+DeepSeek Api Key.
+
+### `DEEPSEEK_URL` (可选)
+
+DeepSeek Api Url.
+
 ### `HIDE_USER_API_KEY` (可选)

@@ -235,6 +243,13 @@ ChatGLM Api Url.
 更改默认模型

+### `VISION_MODELS` (可选)
+
+> 默认值:空
+> 示例:`gpt-4-vision,claude-3-opus,my-custom-model` 表示为这些模型添加视觉能力,作为对默认模式匹配的补充(默认会检测包含"vision"、"claude-3"、"gemini-1.5"等关键词的模型)。
+
+在默认模式匹配之外,添加更多具有视觉能力的模型。多个模型用逗号分隔。
+
 ### `DEFAULT_INPUT_TEMPLATE` (可选)

 自定义默认的 template,用于初始化『设置』中的『用户输入预处理』配置项

View File

@@ -217,6 +217,13 @@ ByteDance モードでは、`modelName@bytedance=deploymentName` 形式でモデ
 デフォルトのモデルを変更します。

+### `VISION_MODELS` (オプション)
+
+> デフォルト:空
+> 例:`gpt-4-vision,claude-3-opus,my-custom-model` は、これらのモデルにビジョン機能を追加します。これはデフォルトのパターンマッチング("vision"、"claude-3"、"gemini-1.5"などのキーワードを含むモデルを検出)に加えて適用されます。
+
+デフォルトのパターンマッチングに加えて、追加のモデルにビジョン機能を付与します。複数のモデルはカンマで区切ります。
+
 ### `DEFAULT_INPUT_TEMPLATE` (オプション)

 『設定』の『ユーザー入力前処理』の初期設定に使用するテンプレートをカスタマイズします。

View File

@@ -10,6 +10,7 @@ import { handle as alibabaHandler } from "../../alibaba";
 import { handle as moonshotHandler } from "../../moonshot";
 import { handle as stabilityHandler } from "../../stability";
 import { handle as iflytekHandler } from "../../iflytek";
+import { handle as deepseekHandler } from "../../deepseek";
 import { handle as xaiHandler } from "../../xai";
 import { handle as chatglmHandler } from "../../glm";
 import { handle as proxyHandler } from "../../proxy";

@@ -40,6 +41,8 @@ async function handle(
       return stabilityHandler(req, { params });
     case ApiPath.Iflytek:
       return iflytekHandler(req, { params });
+    case ApiPath.DeepSeek:
+      return deepseekHandler(req, { params });
     case ApiPath.XAI:
       return xaiHandler(req, { params });
     case ApiPath.ChatGLM:

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelAvailableInServer } from "@/app/utils/model";
+import { isModelNotavailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();

@@ -89,7 +89,7 @@ async function request(req: NextRequest) {
     // not undefined and is false
     if (
-      isModelAvailableInServer(
+      isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model as string,
         ServiceProvider.Alibaba as string,

View File

@@ -9,7 +9,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "./auth";
-import { isModelAvailableInServer } from "@/app/utils/model";
+import { isModelNotavailableInServer } from "@/app/utils/model";
 import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";

 const ALLOWD_PATH = new Set([Anthropic.ChatPath, Anthropic.ChatPath1]);

@@ -122,7 +122,7 @@ async function request(req: NextRequest) {
     // not undefined and is false
     if (
-      isModelAvailableInServer(
+      isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model as string,
         ServiceProvider.Anthropic as string,

View File

@@ -92,6 +92,9 @@ export function auth(req: NextRequest, modelProvider: ModelProvider) {
       systemApiKey =
         serverConfig.iflytekApiKey + ":" + serverConfig.iflytekApiSecret;
       break;
+    case ModelProvider.DeepSeek:
+      systemApiKey = serverConfig.deepseekApiKey;
+      break;
     case ModelProvider.XAI:
       systemApiKey = serverConfig.xaiApiKey;
       break;

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelAvailableInServer } from "@/app/utils/model";
+import { isModelNotavailableInServer } from "@/app/utils/model";
 import { getAccessToken } from "@/app/utils/baidu";

 const serverConfig = getServerSideConfig();

@@ -104,7 +104,7 @@ async function request(req: NextRequest) {
     // not undefined and is false
     if (
-      isModelAvailableInServer(
+      isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model as string,
         ServiceProvider.Baidu as string,

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelAvailableInServer } from "@/app/utils/model";
+import { isModelNotavailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();

@@ -88,7 +88,7 @@ async function request(req: NextRequest) {
     // not undefined and is false
     if (
-      isModelAvailableInServer(
+      isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model as string,
         ServiceProvider.ByteDance as string,

View File

@@ -2,7 +2,7 @@ import { NextRequest, NextResponse } from "next/server";
 import { getServerSideConfig } from "../config/server";
 import { OPENAI_BASE_URL, ServiceProvider } from "../constant";
 import { cloudflareAIGatewayUrl } from "../utils/cloudflare";
-import { getModelProvider, isModelAvailableInServer } from "../utils/model";
+import { getModelProvider, isModelNotavailableInServer } from "../utils/model";

 const serverConfig = getServerSideConfig();

@@ -118,15 +118,14 @@ export async function requestOpenai(req: NextRequest) {
     // not undefined and is false
     if (
-      isModelAvailableInServer(
+      isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model as string,
-        ServiceProvider.OpenAI as string,
-      ) ||
-      isModelAvailableInServer(
-        serverConfig.customModels,
-        jsonBody?.model as string,
-        ServiceProvider.Azure as string,
+        [
+          ServiceProvider.OpenAI,
+          ServiceProvider.Azure,
+          jsonBody?.model as string, // support provider-unspecified model
+        ],
       )
     ) {
       return NextResponse.json(
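The rename reflects a flipped polarity: the helper now answers "should this request be refused?" and, as the OpenAI/Azure call site shows, accepts a single provider or an array (including the raw model string, to support provider-unspecified custom models). A plausible reconstruction of the new signature in app/utils/model.ts, inferred from the call sites in this diff — the CUSTOM_MODELS parsing details are assumptions:

export function isModelNotavailableInServer(
  customModels: string,
  modelName: string,
  providerNames: string | string[],
): boolean {
  const providers = (
    Array.isArray(providerNames) ? providerNames : [providerNames]
  ).map((p) => String(p).toLowerCase());
  const model = modelName.toLowerCase();
  // CUSTOM_MODELS is a comma-separated rule list such as "-all,+gpt-4o,-gpt-4@azure"
  const rules = customModels
    .toLowerCase()
    .split(",")
    .map((r) => r.trim())
    .filter(Boolean);
  if (rules.includes(`+${model}`)) return false; // explicitly re-enabled
  // refuse only when the model is disabled globally or for every candidate provider
  return providers.every(
    (p) => rules.includes(`-${model}`) || rules.includes(`-${model}@${p}`),
  );
}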

View File

@@ -14,6 +14,7 @@ const DANGER_CONFIG = {
   disableFastLink: serverConfig.disableFastLink,
   customModels: serverConfig.customModels,
   defaultModel: serverConfig.defaultModel,
+  visionModels: serverConfig.visionModels,
 };

 declare global {

app/api/deepseek.ts (new file, 128 lines)
View File

@@ -0,0 +1,128 @@
import { getServerSideConfig } from "@/app/config/server";
import {
  DEEPSEEK_BASE_URL,
  ApiPath,
  ModelProvider,
  ServiceProvider,
} from "@/app/constant";
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelNotavailableInServer } from "@/app/utils/model";

const serverConfig = getServerSideConfig();

export async function handle(
  req: NextRequest,
  { params }: { params: { path: string[] } },
) {
  console.log("[DeepSeek Route] params ", params);

  if (req.method === "OPTIONS") {
    return NextResponse.json({ body: "OK" }, { status: 200 });
  }

  const authResult = auth(req, ModelProvider.DeepSeek);
  if (authResult.error) {
    return NextResponse.json(authResult, {
      status: 401,
    });
  }

  try {
    const response = await request(req);
    return response;
  } catch (e) {
    console.error("[DeepSeek] ", e);
    return NextResponse.json(prettyObject(e));
  }
}

async function request(req: NextRequest) {
  const controller = new AbortController();

  // strip the ApiPath prefix, then resolve the base url (custom DEEPSEEK_URL or the default)
  let path = `${req.nextUrl.pathname}`.replaceAll(ApiPath.DeepSeek, "");

  let baseUrl = serverConfig.deepseekUrl || DEEPSEEK_BASE_URL;

  if (!baseUrl.startsWith("http")) {
    baseUrl = `https://${baseUrl}`;
  }

  if (baseUrl.endsWith("/")) {
    baseUrl = baseUrl.slice(0, -1);
  }

  console.log("[Proxy] ", path);
  console.log("[Base Url]", baseUrl);

  const timeoutId = setTimeout(
    () => {
      controller.abort();
    },
    10 * 60 * 1000,
  );

  const fetchUrl = `${baseUrl}${path}`;
  const fetchOptions: RequestInit = {
    headers: {
      "Content-Type": "application/json",
      Authorization: req.headers.get("Authorization") ?? "",
    },
    method: req.method,
    body: req.body,
    redirect: "manual",
    // @ts-ignore
    duplex: "half",
    signal: controller.signal,
  };

  // #1815 try to refuse some request to some models
  if (serverConfig.customModels && req.body) {
    try {
      const clonedBody = await req.text();
      fetchOptions.body = clonedBody;

      const jsonBody = JSON.parse(clonedBody) as { model?: string };

      // not undefined and is false
      if (
        isModelNotavailableInServer(
          serverConfig.customModels,
          jsonBody?.model as string,
          ServiceProvider.DeepSeek as string,
        )
      ) {
        return NextResponse.json(
          {
            error: true,
            message: `you are not allowed to use ${jsonBody?.model} model`,
          },
          {
            status: 403,
          },
        );
      }
    } catch (e) {
      console.error(`[DeepSeek] filter`, e);
    }
  }

  try {
    const res = await fetch(fetchUrl, fetchOptions);

    // to prevent browser prompt for credentials
    const newHeaders = new Headers(res.headers);
    newHeaders.delete("www-authenticate");
    // to disable nginx buffering
    newHeaders.set("X-Accel-Buffering", "no");

    return new Response(res.body, {
      status: res.status,
      statusText: res.statusText,
      headers: newHeaders,
    });
  } finally {
    clearTimeout(timeoutId);
  }
}

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelAvailableInServer } from "@/app/utils/model";
+import { isModelNotavailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();

@@ -89,7 +89,7 @@ async function request(req: NextRequest) {
     // not undefined and is false
     if (
-      isModelAvailableInServer(
+      isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model as string,
         ServiceProvider.ChatGLM as string,

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelAvailableInServer } from "@/app/utils/model";
+import { isModelNotavailableInServer } from "@/app/utils/model";
 // iflytek

 const serverConfig = getServerSideConfig();

@@ -89,7 +89,7 @@ async function request(req: NextRequest) {
     // not undefined and is false
     if (
-      isModelAvailableInServer(
+      isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model as string,
         ServiceProvider.Iflytek as string,

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelAvailableInServer } from "@/app/utils/model";
+import { isModelNotavailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();

@@ -88,7 +88,7 @@ async function request(req: NextRequest) {
     // not undefined and is false
     if (
-      isModelAvailableInServer(
+      isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model as string,
         ServiceProvider.Moonshot as string,

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelAvailableInServer } from "@/app/utils/model";
+import { isModelNotavailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();

@@ -88,7 +88,7 @@ async function request(req: NextRequest) {
     // not undefined and is false
     if (
-      isModelAvailableInServer(
+      isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model as string,
         ServiceProvider.XAI as string,

View File

@@ -20,6 +20,7 @@ import { QwenApi } from "./platforms/alibaba";
 import { HunyuanApi } from "./platforms/tencent";
 import { MoonshotApi } from "./platforms/moonshot";
 import { SparkApi } from "./platforms/iflytek";
+import { DeepSeekApi } from "./platforms/deepseek";
 import { XAIApi } from "./platforms/xai";
 import { ChatGLMApi } from "./platforms/glm";

@@ -154,6 +155,9 @@ export class ClientApi {
       case ModelProvider.Iflytek:
         this.llm = new SparkApi();
         break;
+      case ModelProvider.DeepSeek:
+        this.llm = new DeepSeekApi();
+        break;
       case ModelProvider.XAI:
         this.llm = new XAIApi();
         break;

@@ -247,6 +251,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
   const isAlibaba = modelConfig.providerName === ServiceProvider.Alibaba;
   const isMoonshot = modelConfig.providerName === ServiceProvider.Moonshot;
   const isIflytek = modelConfig.providerName === ServiceProvider.Iflytek;
+  const isDeepSeek = modelConfig.providerName === ServiceProvider.DeepSeek;
   const isXAI = modelConfig.providerName === ServiceProvider.XAI;
   const isChatGLM = modelConfig.providerName === ServiceProvider.ChatGLM;
   const isEnabledAccessControl = accessStore.enabledAccessControl();

@@ -264,6 +269,8 @@ export function getHeaders(ignoreHeaders: boolean = false) {
         ? accessStore.moonshotApiKey
         : isXAI
           ? accessStore.xaiApiKey
+          : isDeepSeek
+            ? accessStore.deepseekApiKey
             : isChatGLM
               ? accessStore.chatglmApiKey
               : isIflytek

@@ -280,6 +287,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
     isAlibaba,
     isMoonshot,
     isIflytek,
+    isDeepSeek,
     isXAI,
     isChatGLM,
     apiKey,

@@ -302,6 +310,13 @@ export function getHeaders(ignoreHeaders: boolean = false) {
     isAzure,
     isAnthropic,
     isBaidu,
+    isByteDance,
+    isAlibaba,
+    isMoonshot,
+    isIflytek,
+    isDeepSeek,
+    isXAI,
+    isChatGLM,
     apiKey,
     isEnabledAccessControl,
   } = getConfig();

@@ -344,6 +359,8 @@ export function getClientApi(provider: ServiceProvider): ClientApi {
     case ServiceProvider.Moonshot:
       return new ClientApi(ModelProvider.Moonshot);
     case ServiceProvider.Iflytek:
       return new ClientApi(ModelProvider.Iflytek);
+    case ServiceProvider.DeepSeek:
+      return new ClientApi(ModelProvider.DeepSeek);
     case ServiceProvider.XAI:
       return new ClientApi(ModelProvider.XAI);
     case ServiceProvider.ChatGLM:

View File

app/client/platforms/deepseek.ts (new file, 200 lines; path inferred from the DeepSeekApi import in api.ts above)

@@ -0,0 +1,200 @@
"use client";
// azure and openai, using same models. so using same LLMApi.
import {
ApiPath,
DEEPSEEK_BASE_URL,
DeepSeek,
REQUEST_TIMEOUT_MS,
} from "@/app/constant";
import {
useAccessStore,
useAppConfig,
useChatStore,
ChatMessageTool,
usePluginStore,
} from "@/app/store";
import { stream } from "@/app/utils/chat";
import {
ChatOptions,
getHeaders,
LLMApi,
LLMModel,
SpeechOptions,
} from "../api";
import { getClientConfig } from "@/app/config/client";
import { getMessageTextContent } from "@/app/utils";
import { RequestPayload } from "./openai";
import { fetch } from "@/app/utils/stream";
export class DeepSeekApi implements LLMApi {
private disableListModels = true;
path(path: string): string {
const accessStore = useAccessStore.getState();
let baseUrl = "";
if (accessStore.useCustomConfig) {
baseUrl = accessStore.deepseekUrl;
}
if (baseUrl.length === 0) {
const isApp = !!getClientConfig()?.isApp;
const apiPath = ApiPath.DeepSeek;
baseUrl = isApp ? DEEPSEEK_BASE_URL : apiPath;
}
if (baseUrl.endsWith("/")) {
baseUrl = baseUrl.slice(0, baseUrl.length - 1);
}
if (!baseUrl.startsWith("http") && !baseUrl.startsWith(ApiPath.DeepSeek)) {
baseUrl = "https://" + baseUrl;
}
console.log("[Proxy Endpoint] ", baseUrl, path);
return [baseUrl, path].join("/");
}
  extractMessage(res: any) {
    return res.choices?.at(0)?.message?.content ?? "";
  }

  speech(options: SpeechOptions): Promise<ArrayBuffer> {
    throw new Error("Method not implemented.");
  }

  async chat(options: ChatOptions) {
    const messages: ChatOptions["messages"] = [];
    for (const v of options.messages) {
      const content = getMessageTextContent(v);
      messages.push({ role: v.role, content });
    }

    const modelConfig = {
      ...useAppConfig.getState().modelConfig,
      ...useChatStore.getState().currentSession().mask.modelConfig,
      ...{
        model: options.config.model,
        providerName: options.config.providerName,
      },
    };

    const requestPayload: RequestPayload = {
      messages,
      stream: options.config.stream,
      model: modelConfig.model,
      temperature: modelConfig.temperature,
      presence_penalty: modelConfig.presence_penalty,
      frequency_penalty: modelConfig.frequency_penalty,
      top_p: modelConfig.top_p,
      // max_tokens: Math.max(modelConfig.max_tokens, 1024),
      // max_tokens is deliberately not sent; see the commented line above.
    };

    console.log("[Request] deepseek payload: ", requestPayload);

    const shouldStream = !!options.config.stream;
    const controller = new AbortController();
    options.onController?.(controller);

    try {
      const chatPath = this.path(DeepSeek.ChatPath);
      const chatPayload = {
        method: "POST",
        body: JSON.stringify(requestPayload),
        signal: controller.signal,
        headers: getHeaders(),
      };

      // make a fetch request
      const requestTimeoutId = setTimeout(
        () => controller.abort(),
        REQUEST_TIMEOUT_MS,
      );

      if (shouldStream) {
        const [tools, funcs] = usePluginStore
          .getState()
          .getAsTools(
            useChatStore.getState().currentSession().mask?.plugin || [],
          );
        return stream(
          chatPath,
          requestPayload,
          getHeaders(),
          tools as any,
          funcs,
          controller,
          // parseSSE
          (text: string, runTools: ChatMessageTool[]) => {
            // console.log("parseSSE", text, runTools);
            const json = JSON.parse(text);
            const choices = json.choices as Array<{
              delta: {
                content: string;
                tool_calls: ChatMessageTool[];
              };
            }>;
            const tool_calls = choices[0]?.delta?.tool_calls;
            if (tool_calls?.length > 0) {
              const index = tool_calls[0]?.index;
              const id = tool_calls[0]?.id;
              const args = tool_calls[0]?.function?.arguments;
              if (id) {
                runTools.push({
                  id,
                  type: tool_calls[0]?.type,
                  function: {
                    name: tool_calls[0]?.function?.name as string,
                    arguments: args,
                  },
                });
              } else {
                // @ts-ignore
                runTools[index]["function"]["arguments"] += args;
              }
            }
            return choices[0]?.delta?.content;
          },
          // processToolMessage, include tool_calls message and tool call results
          (
            requestPayload: RequestPayload,
            toolCallMessage: any,
            toolCallResult: any[],
          ) => {
            // @ts-ignore
            requestPayload?.messages?.splice(
              // @ts-ignore
              requestPayload?.messages?.length,
              0,
              toolCallMessage,
              ...toolCallResult,
            );
          },
          options,
        );
      } else {
        const res = await fetch(chatPath, chatPayload);
        clearTimeout(requestTimeoutId);

        const resJson = await res.json();
        const message = this.extractMessage(resJson);
        options.onFinish(message, res);
      }
    } catch (e) {
      console.log("[Request] failed to make a chat request", e);
      options.onError?.(e as Error);
    }
  }

  async usage() {
    return {
      used: 0,
      total: 0,
    };
  }

  async models(): Promise<LLMModel[]> {
    return [];
  }
}
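Wired up through getClientApi (see the api.ts hunk above), a call looks roughly like this — a sketch, with the ChatOptions fields abbreviated:

const api = getClientApi(ServiceProvider.DeepSeek);
await api.llm.chat({
  messages: [{ role: "user", content: "Hello" }],
  config: { model: "deepseek-chat", stream: true },
  onUpdate: (message) => console.log("partial:", message),
  onFinish: (message) => console.log("final:", message),
  onError: (e) => console.error(e),
});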

View File

@@ -21,16 +21,108 @@ import {
   SpeechOptions,
 } from "../api";
 import { getClientConfig } from "@/app/config/client";
-import { getMessageTextContent } from "@/app/utils";
+import { getMessageTextContent, isVisionModel } from "@/app/utils";
 import { RequestPayload } from "./openai";
 import { fetch } from "@/app/utils/stream";
+import { preProcessImageContent } from "@/app/utils/chat";
+
+interface BasePayload {
+  model: string;
+}
+
+interface ChatPayload extends BasePayload {
+  messages: ChatOptions["messages"];
+  stream?: boolean;
+  temperature?: number;
+  presence_penalty?: number;
+  frequency_penalty?: number;
+  top_p?: number;
+}
+
+interface ImageGenerationPayload extends BasePayload {
+  prompt: string;
+  size?: string;
+  user_id?: string;
+}
+
+interface VideoGenerationPayload extends BasePayload {
+  prompt: string;
+  duration?: number;
+  resolution?: string;
+  user_id?: string;
+}
+
+type ModelType = "chat" | "image" | "video";

 export class ChatGLMApi implements LLMApi {
   private disableListModels = true;

+  private getModelType(model: string): ModelType {
+    if (model.startsWith("cogview-")) return "image";
+    if (model.startsWith("cogvideo-")) return "video";
+    return "chat";
+  }
+
+  private getModelPath(type: ModelType): string {
+    switch (type) {
+      case "image":
+        return ChatGLM.ImagePath;
+      case "video":
+        return ChatGLM.VideoPath;
+      default:
+        return ChatGLM.ChatPath;
+    }
+  }
+
+  private createPayload(
+    messages: ChatOptions["messages"],
+    modelConfig: any,
+    options: ChatOptions,
+  ): BasePayload {
+    const modelType = this.getModelType(modelConfig.model);
+    const lastMessage = messages[messages.length - 1];
+    const prompt =
+      typeof lastMessage.content === "string"
+        ? lastMessage.content
+        : lastMessage.content.map((c) => c.text).join("\n");
+
+    switch (modelType) {
+      case "image":
+        return {
+          model: modelConfig.model,
+          prompt,
+          size: options.config.size,
+        } as ImageGenerationPayload;
+      default:
+        return {
+          messages,
+          stream: options.config.stream,
+          model: modelConfig.model,
+          temperature: modelConfig.temperature,
+          presence_penalty: modelConfig.presence_penalty,
+          frequency_penalty: modelConfig.frequency_penalty,
+          top_p: modelConfig.top_p,
+        } as ChatPayload;
+    }
+  }
+
+  private parseResponse(modelType: ModelType, json: any): string {
+    switch (modelType) {
+      case "image": {
+        const imageUrl = json.data?.[0]?.url;
+        return imageUrl ? `![Generated Image](${imageUrl})` : "";
+      }
+      case "video": {
+        const videoUrl = json.data?.[0]?.url;
+        return videoUrl ? `<video controls src="${videoUrl}"></video>` : "";
+      }
+      default:
+        return this.extractMessage(json);
+    }
+  }
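Read together, the three new helpers dispatch by model family — a worked example (illustrative, not part of the diff):

// getModelType("cogview-3-plus") → "image", getModelPath("image") → ChatGLM.ImagePath,
// so cogview-* requests go to api/paas/v4/images/generations, cogvideo-* would take
// VideoPath, and every other model stays on the ordinary chat-completions path.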
   path(path: string): string {
     const accessStore = useAccessStore.getState();

     let baseUrl = "";

     if (accessStore.useCustomConfig) {

@@ -51,7 +143,6 @@ export class ChatGLMApi implements LLMApi {
     }
     console.log("[Proxy Endpoint] ", baseUrl, path);
     return [baseUrl, path].join("/");
-
   }

@@ -64,9 +155,12 @@ export class ChatGLMApi implements LLMApi {
   }

   async chat(options: ChatOptions) {
+    const visionModel = isVisionModel(options.config.model);
     const messages: ChatOptions["messages"] = [];
     for (const v of options.messages) {
-      const content = getMessageTextContent(v);
+      const content = visionModel
+        ? await preProcessImageContent(v.content)
+        : getMessageTextContent(v);
       messages.push({ role: v.role, content });
     }

@@ -78,25 +172,16 @@ export class ChatGLMApi implements LLMApi {
         providerName: options.config.providerName,
       },
     };
+    const modelType = this.getModelType(modelConfig.model);
+    const requestPayload = this.createPayload(messages, modelConfig, options);
+    const path = this.path(this.getModelPath(modelType));

-    const requestPayload: RequestPayload = {
-      messages,
-      stream: options.config.stream,
-      model: modelConfig.model,
-      temperature: modelConfig.temperature,
-      presence_penalty: modelConfig.presence_penalty,
-      frequency_penalty: modelConfig.frequency_penalty,
-      top_p: modelConfig.top_p,
-    };
+    console.log(`[Request] glm ${modelType} payload: `, requestPayload);

-    console.log("[Request] glm payload: ", requestPayload);
-
-    const shouldStream = !!options.config.stream;
     const controller = new AbortController();
     options.onController?.(controller);

     try {
-      const chatPath = this.path(ChatGLM.ChatPath);
       const chatPayload = {
         method: "POST",
         body: JSON.stringify(requestPayload),

@@ -104,12 +189,23 @@ export class ChatGLMApi implements LLMApi {
         headers: getHeaders(),
       };

-      // make a fetch request
       const requestTimeoutId = setTimeout(
         () => controller.abort(),
         REQUEST_TIMEOUT_MS,
       );

+      if (modelType === "image" || modelType === "video") {
+        const res = await fetch(path, chatPayload);
+        clearTimeout(requestTimeoutId);
+
+        const resJson = await res.json();
+        console.log(`[Response] glm ${modelType}:`, resJson);
+
+        const message = this.parseResponse(modelType, resJson);
+        options.onFinish(message, res);
+        return;
+      }
+
+      const shouldStream = !!options.config.stream;
       if (shouldStream) {
         const [tools, funcs] = usePluginStore
           .getState()

@@ -117,7 +213,7 @@ export class ChatGLMApi implements LLMApi {
             useChatStore.getState().currentSession().mask?.plugin || [],
           );
         return stream(
-          chatPath,
+          path,
           requestPayload,
           getHeaders(),
           tools as any,

@@ -125,7 +221,6 @@ export class ChatGLMApi implements LLMApi {
           controller,
           // parseSSE
           (text: string, runTools: ChatMessageTool[]) => {
-            // console.log("parseSSE", text, runTools);
             const json = JSON.parse(text);
             const choices = json.choices as Array<{
               delta: {

@@ -154,7 +249,7 @@ export class ChatGLMApi implements LLMApi {
             }
             return choices[0]?.delta?.content;
           },
-          // processToolMessage, include tool_calls message and tool call results
+          // processToolMessage
           (
             requestPayload: RequestPayload,
             toolCallMessage: any,

@@ -172,7 +267,7 @@ export class ChatGLMApi implements LLMApi {
         options,
       );
     } else {
-      const res = await fetch(chatPath, chatPayload);
+      const res = await fetch(path, chatPayload);
       clearTimeout(requestTimeoutId);

       const resJson = await res.json();

@@ -184,6 +279,7 @@ export class ChatGLMApi implements LLMApi {
       options.onError?.(e as Error);
     }
   }
+
   async usage() {
     return {
       used: 0,

View File

@@ -60,9 +60,18 @@ export class GeminiProApi implements LLMApi {
   extractMessage(res: any) {
     console.log("[Response] gemini-pro response: ", res);

+    const getTextFromParts = (parts: any[]) => {
+      if (!Array.isArray(parts)) return "";
+
+      return parts
+        .map((part) => part?.text || "")
+        .filter((text) => text.trim() !== "")
+        .join("\n\n");
+    };
+
     return (
-      res?.candidates?.at(0)?.content?.parts.at(0)?.text ||
-      res?.at(0)?.candidates?.at(0)?.content?.parts.at(0)?.text ||
+      getTextFromParts(res?.candidates?.at(0)?.content?.parts) ||
+      getTextFromParts(res?.at(0)?.candidates?.at(0)?.content?.parts) ||
       res?.error?.message ||
       ""
     );
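A concrete example of the new helper's behavior (illustrative only):

// getTextFromParts([{ text: "Hello" }, { text: "  " }, { text: "world" }])
// → "Hello\n\nworld" — whitespace-only parts are filtered out before joining,
// so multi-part candidates no longer lose everything after the first part.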
@@ -223,7 +232,10 @@ export class GeminiProApi implements LLMApi {
               },
             });
           }
-          return chunkJson?.candidates?.at(0)?.content.parts.at(0)?.text;
+          return chunkJson?.candidates
+            ?.at(0)
+            ?.content.parts?.map((part: { text: string }) => part.text)
+            .join("\n\n");
         },
         // processToolMessage, include tool_calls message and tool call results
         (

View File

@@ -24,7 +24,7 @@ import {
   stream,
 } from "@/app/utils/chat";
 import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";
-import { DalleSize, DalleQuality, DalleStyle } from "@/app/typing";
+import { ModelSize, DalleQuality, DalleStyle } from "@/app/typing";

 import {
   ChatOptions,

@@ -73,7 +73,7 @@ export interface DalleRequestPayload {
   prompt: string;
   response_format: "url" | "b64_json";
   n: number;
-  size: DalleSize;
+  size: ModelSize;
   quality: DalleQuality;
   style: DalleStyle;
 }

View File

@@ -72,6 +72,8 @@ import {
   isDalle3,
   showPlugins,
   safeLocalStorage,
+  getModelSizes,
+  supportsCustomSize,
 } from "../utils";

 import { uploadImage as uploadImageRemote } from "@/app/utils/chat";

@@ -79,7 +81,7 @@ import { uploadImage as uploadImageRemote } from "@/app/utils/chat";
 import dynamic from "next/dynamic";
 import { ChatControllerPool } from "../client/controller";
-import { DalleSize, DalleQuality, DalleStyle } from "../typing";
+import { DalleQuality, DalleStyle, ModelSize } from "../typing";
 import { Prompt, usePromptStore } from "../store/prompt";
 import Locale from "../locales";

@@ -519,10 +521,11 @@ export function ChatActions(props: {
   const [showSizeSelector, setShowSizeSelector] = useState(false);
   const [showQualitySelector, setShowQualitySelector] = useState(false);
   const [showStyleSelector, setShowStyleSelector] = useState(false);
-  const dalle3Sizes: DalleSize[] = ["1024x1024", "1792x1024", "1024x1792"];
+  const modelSizes = getModelSizes(currentModel);
   const dalle3Qualitys: DalleQuality[] = ["standard", "hd"];
   const dalle3Styles: DalleStyle[] = ["vivid", "natural"];
-  const currentSize = session.mask.modelConfig?.size ?? "1024x1024";
+  const currentSize =
+    session.mask.modelConfig?.size ?? ("1024x1024" as ModelSize);
   const currentQuality = session.mask.modelConfig?.quality ?? "standard";
   const currentStyle = session.mask.modelConfig?.style ?? "vivid";

@@ -673,7 +676,7 @@ export function ChatActions(props: {
         />
       )}

-      {isDalle3(currentModel) && (
+      {supportsCustomSize(currentModel) && (
         <ChatAction
           onClick={() => setShowSizeSelector(true)}
           text={currentSize}

@@ -684,7 +687,7 @@ export function ChatActions(props: {
       {showSizeSelector && (
         <Selector
           defaultSelectedValue={currentSize}
-          items={dalle3Sizes.map((m) => ({
+          items={modelSizes.map((m) => ({
             title: m,
             value: m,
           }))}

@@ -897,6 +900,12 @@ export function ShortcutKeyModal(props: { onClose: () => void }) {
       title: Locale.Chat.ShortcutKey.showShortcutKey,
       keys: isMac ? ["⌘", "/"] : ["Ctrl", "/"],
     },
+    {
+      title: Locale.Chat.ShortcutKey.clearContext,
+      keys: isMac
+        ? ["⌘", "Shift", "backspace"]
+        : ["Ctrl", "Shift", "backspace"],
+    },
   ];

   return (
     <div className="modal-mask">

@@ -1549,7 +1558,7 @@ function _Chat() {
   const [showShortcutKeyModal, setShowShortcutKeyModal] = useState(false);

   useEffect(() => {
-    const handleKeyDown = (event: any) => {
+    const handleKeyDown = (event: KeyboardEvent) => {
       // 打开新聊天 (open new chat) command + shift + o
       if (
         (event.metaKey || event.ctrlKey) &&

@@ -1600,14 +1609,30 @@ function _Chat() {
         event.preventDefault();
         setShowShortcutKeyModal(true);
       }
+      // 清除上下文 (clear context) command + shift + backspace
+      else if (
+        (event.metaKey || event.ctrlKey) &&
+        event.shiftKey &&
+        event.key.toLowerCase() === "backspace"
+      ) {
+        event.preventDefault();
+        chatStore.updateTargetSession(session, (session) => {
+          if (session.clearContextIndex === session.messages.length) {
+            session.clearContextIndex = undefined;
+          } else {
+            session.clearContextIndex = session.messages.length;
+            session.memoryPrompt = ""; // will clear memory
+          }
+        });
+      }
     };

-    window.addEventListener("keydown", handleKeyDown);
+    document.addEventListener("keydown", handleKeyDown);
     return () => {
-      window.removeEventListener("keydown", handleKeyDown);
+      document.removeEventListener("keydown", handleKeyDown);
     };
-  }, [messages, chatStore, navigate]);
+  }, [messages, chatStore, navigate, session]);

   const [showChatSidePanel, setShowChatSidePanel] = useState(false);

View File

@@ -73,6 +73,7 @@ import {
   Iflytek,
   SAAS_CHAT_URL,
   ChatGLM,
+  DeepSeek,
 } from "../constant";
 import { Prompt, SearchService, usePromptStore } from "../store/prompt";
 import { ErrorBoundary } from "./error";

@@ -1197,6 +1198,47 @@ export function Settings() {
     </>
   );

+  const deepseekConfigComponent = accessStore.provider ===
+    ServiceProvider.DeepSeek && (
+    <>
+      <ListItem
+        title={Locale.Settings.Access.DeepSeek.Endpoint.Title}
+        subTitle={
+          Locale.Settings.Access.DeepSeek.Endpoint.SubTitle +
+          DeepSeek.ExampleEndpoint
+        }
+      >
+        <input
+          aria-label={Locale.Settings.Access.DeepSeek.Endpoint.Title}
+          type="text"
+          value={accessStore.deepseekUrl}
+          placeholder={DeepSeek.ExampleEndpoint}
+          onChange={(e) =>
+            accessStore.update(
+              (access) => (access.deepseekUrl = e.currentTarget.value),
+            )
+          }
+        ></input>
+      </ListItem>
+      <ListItem
+        title={Locale.Settings.Access.DeepSeek.ApiKey.Title}
+        subTitle={Locale.Settings.Access.DeepSeek.ApiKey.SubTitle}
+      >
+        <PasswordInput
+          aria-label={Locale.Settings.Access.DeepSeek.ApiKey.Title}
+          value={accessStore.deepseekApiKey}
+          type="text"
+          placeholder={Locale.Settings.Access.DeepSeek.ApiKey.Placeholder}
+          onChange={(e) => {
+            accessStore.update(
+              (access) => (access.deepseekApiKey = e.currentTarget.value),
+            );
+          }}
+        />
+      </ListItem>
+    </>
+  );
+
   const XAIConfigComponent = accessStore.provider === ServiceProvider.XAI && (
     <>
       <ListItem

@@ -1733,6 +1775,7 @@ export function Settings() {
               {alibabaConfigComponent}
               {tencentConfigComponent}
               {moonshotConfigComponent}
+              {deepseekConfigComponent}
               {stabilityConfigComponent}
               {lflytekConfigComponent}
               {XAIConfigComponent}

View File

@@ -22,7 +22,6 @@ import {
   MIN_SIDEBAR_WIDTH,
   NARROW_SIDEBAR_WIDTH,
   Path,
-  PLUGINS,
   REPO_URL,
 } from "../constant";

@@ -32,6 +31,12 @@ import dynamic from "next/dynamic";
 import { showConfirm, Selector } from "./ui-lib";
 import clsx from "clsx";

+const DISCOVERY = [
+  { name: Locale.Plugin.Name, path: Path.Plugins },
+  { name: "Stable Diffusion", path: Path.Sd },
+  { name: Locale.SearchChat.Page.Title, path: Path.SearchChat },
+];
+
 const ChatList = dynamic(async () => (await import("./chat-list")).ChatList, {
   loading: () => null,
 });

@@ -219,7 +224,7 @@ export function SideBarTail(props: {
 export function SideBar(props: { className?: string }) {
   useHotKey();
   const { onDragStart, shouldNarrow } = useDragSideBar();
-  const [showPluginSelector, setShowPluginSelector] = useState(false);
+  const [showDiscoverySelector, setshowDiscoverySelector] = useState(false);
   const navigate = useNavigate();
   const config = useAppConfig();
   const chatStore = useChatStore();

@@ -254,21 +259,21 @@ export function SideBar(props: { className?: string }) {
             icon={<DiscoveryIcon />}
             text={shouldNarrow ? undefined : Locale.Discovery.Name}
             className={styles["sidebar-bar-button"]}
-            onClick={() => setShowPluginSelector(true)}
+            onClick={() => setshowDiscoverySelector(true)}
             shadow
           />
         </div>
-        {showPluginSelector && (
+        {showDiscoverySelector && (
           <Selector
             items={[
-              ...PLUGINS.map((item) => {
+              ...DISCOVERY.map((item) => {
                 return {
                   title: item.name,
                   value: item.path,
                 };
               }),
             ]}
-            onClose={() => setShowPluginSelector(false)}
+            onClose={() => setshowDiscoverySelector(false)}
             onSelection={(s) => {
               navigate(s[0], { state: { fromHome: true } });
             }}

View File

@@ -1,5 +1,6 @@
 import md5 from "spark-md5";
 import { DEFAULT_MODELS, DEFAULT_GA_ID } from "../constant";
+import { isGPT4Model } from "../utils/model";

 declare global {
   namespace NodeJS {

@@ -22,6 +23,7 @@ declare global {
       DISABLE_FAST_LINK?: string; // disallow parse settings from url or not
      CUSTOM_MODELS?: string; // to control custom models
       DEFAULT_MODEL?: string; // to control default model in every new chat window
+      VISION_MODELS?: string; // to control vision models

       // stability only
       STABILITY_URL?: string;

@@ -71,6 +73,9 @@ declare global {
       IFLYTEK_API_KEY?: string;
       IFLYTEK_API_SECRET?: string;

+      DEEPSEEK_URL?: string;
+      DEEPSEEK_API_KEY?: string;
+
       // xai only
       XAI_URL?: string;
       XAI_API_KEY?: string;
@@ -124,23 +129,16 @@ export const getServerSideConfig = () => {
   const disableGPT4 = !!process.env.DISABLE_GPT4;
   let customModels = process.env.CUSTOM_MODELS ?? "";
   let defaultModel = process.env.DEFAULT_MODEL ?? "";
+  let visionModels = process.env.VISION_MODELS ?? "";

   if (disableGPT4) {
     if (customModels) customModels += ",";
-    customModels += DEFAULT_MODELS.filter(
-      (m) =>
-        (m.name.startsWith("gpt-4") || m.name.startsWith("chatgpt-4o") || m.name.startsWith("o1")) &&
-        !m.name.startsWith("gpt-4o-mini"),
-    )
+    customModels += DEFAULT_MODELS.filter((m) => isGPT4Model(m.name))
       .map((m) => "-" + m.name)
       .join(",");
-    if (
-      (defaultModel.startsWith("gpt-4") ||
-        defaultModel.startsWith("chatgpt-4o") ||
-        defaultModel.startsWith("o1")) &&
-      !defaultModel.startsWith("gpt-4o-mini")
-    )
-      defaultModel = "";
+    if (defaultModel && isGPT4Model(defaultModel)) {
+      defaultModel = "";
+    }
   }
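The extracted helper presumably carries the exact predicate that was inlined here before; a sketch consistent with the removed lines (the real body lives in app/utils/model.ts):

export function isGPT4Model(modelName: string): boolean {
  return (
    (modelName.startsWith("gpt-4") ||
      modelName.startsWith("chatgpt-4o") ||
      modelName.startsWith("o1")) &&
    !modelName.startsWith("gpt-4o-mini")
  );
}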
   const isStability = !!process.env.STABILITY_API_KEY;

@@ -155,6 +153,7 @@ export const getServerSideConfig = () => {
   const isAlibaba = !!process.env.ALIBABA_API_KEY;
   const isMoonshot = !!process.env.MOONSHOT_API_KEY;
   const isIflytek = !!process.env.IFLYTEK_API_KEY;
+  const isDeepSeek = !!process.env.DEEPSEEK_API_KEY;
   const isXAI = !!process.env.XAI_API_KEY;
   const isChatGLM = !!process.env.CHATGLM_API_KEY;
   // const apiKeyEnvVar = process.env.OPENAI_API_KEY ?? "";

@@ -219,6 +218,10 @@ export const getServerSideConfig = () => {
     iflytekApiKey: process.env.IFLYTEK_API_KEY,
     iflytekApiSecret: process.env.IFLYTEK_API_SECRET,

+    isDeepSeek,
+    deepseekUrl: process.env.DEEPSEEK_URL,
+    deepseekApiKey: getApiKey(process.env.DEEPSEEK_API_KEY),
+
     isXAI,
     xaiUrl: process.env.XAI_URL,
     xaiApiKey: getApiKey(process.env.XAI_API_KEY),

@@ -248,6 +251,7 @@ export const getServerSideConfig = () => {
     disableFastLink: !!process.env.DISABLE_FAST_LINK,
     customModels,
     defaultModel,
+    visionModels,
     allowedWebDavEndpoints,
   };
 };

View File

@@ -28,6 +28,8 @@ export const TENCENT_BASE_URL = "https://hunyuan.tencentcloudapi.com";
 export const MOONSHOT_BASE_URL = "https://api.moonshot.cn";
 export const IFLYTEK_BASE_URL = "https://spark-api-open.xf-yun.com";

+export const DEEPSEEK_BASE_URL = "https://api.deepseek.com";
+
 export const XAI_BASE_URL = "https://api.x.ai";

 export const CHATGLM_BASE_URL = "https://open.bigmodel.cn";

@@ -65,6 +67,7 @@ export enum ApiPath {
   Artifacts = "/api/artifacts",
   XAI = "/api/xai",
   ChatGLM = "/api/chatglm",
+  DeepSeek = "/api/deepseek",
 }

 export enum SlotID {

@@ -119,6 +122,7 @@ export enum ServiceProvider {
   Iflytek = "Iflytek",
   XAI = "XAI",
   ChatGLM = "ChatGLM",
+  DeepSeek = "DeepSeek",
 }

 // Google API safety settings, see https://ai.google.dev/gemini-api/docs/safety-settings

@@ -143,6 +147,7 @@ export enum ModelProvider {
   Iflytek = "Iflytek",
   XAI = "XAI",
   ChatGLM = "ChatGLM",
+  DeepSeek = "DeepSeek",
 }

 export const Stability = {

@@ -225,6 +230,11 @@ export const Iflytek = {
   ChatPath: "v1/chat/completions",
 };

+export const DeepSeek = {
+  ExampleEndpoint: DEEPSEEK_BASE_URL,
+  ChatPath: "chat/completions",
+};
+
 export const XAI = {
   ExampleEndpoint: XAI_BASE_URL,
   ChatPath: "v1/chat/completions",

@@ -233,6 +243,8 @@ export const XAI = {
 export const ChatGLM = {
   ExampleEndpoint: CHATGLM_BASE_URL,
   ChatPath: "api/paas/v4/chat/completions",
+  ImagePath: "api/paas/v4/images/generations",
+  VideoPath: "api/paas/v4/videos/generations",
 };

 export const DEFAULT_INPUT_TEMPLATE = `{{input}}`; // input / time / model / lang

@@ -275,6 +287,8 @@ export const KnowledgeCutOffDate: Record<string, string> = {
   // it's now easier to add "KnowledgeCutOffDate" instead of stupid hardcoding it, as was done previously.
   "gemini-pro": "2023-12",
   "gemini-pro-vision": "2023-12",
+  "deepseek-chat": "2024-07",
+  "deepseek-coder": "2024-07",
 };

 export const DEFAULT_TTS_ENGINE = "Edge-TTS";

@@ -291,6 +305,23 @@ export const DEFAULT_TTS_VOICES = [
   "shimmer",
 ];

+export const VISION_MODEL_REGEXES = [
+  /vision/,
+  /gpt-4o/,
+  /claude-3/,
+  /gemini-1\.5/,
+  /gemini-exp/,
+  /gemini-2\.0/,
+  /learnlm/,
+  /qwen-vl/,
+  /qwen2-vl/,
+  /gpt-4-turbo(?!.*preview)/, // Matches "gpt-4-turbo" but not "gpt-4-turbo-preview"
+  /^dall-e-3$/, // Matches exactly "dall-e-3"
+  /glm-4v/,
+];
+
+export const EXCLUDE_VISION_MODEL_REGEXES = [/claude-3-5-haiku-20241022/];
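How the two lists likely combine in the consuming helper (assumed to be isVisionModel in app/utils.ts, which also folds in the VISION_MODELS env additions), with a few worked matches:

// "claude-3-opus"             → vision (matches /claude-3/)
// "claude-3-5-haiku-20241022" → not vision (the exclude list wins)
// "gpt-4-turbo-preview"       → not vision (blocked by the negative lookahead)
const isVision = (model: string) =>
  !EXCLUDE_VISION_MODEL_REGEXES.some((r) => r.test(model)) &&
  VISION_MODEL_REGEXES.some((r) => r.test(model));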
 const openaiModels = [
   "gpt-3.5-turbo",
   "gpt-3.5-turbo-1106",

@@ -317,13 +348,23 @@ const openaiModels = [
 ];

 const googleModels = [
-  "gemini-1.0-pro",
+  "gemini-1.0-pro", // Deprecated on 2/15/2025
   "gemini-1.5-pro-latest",
+  "gemini-1.5-pro",
+  "gemini-1.5-pro-002",
+  "gemini-1.5-pro-exp-0827",
   "gemini-1.5-flash-latest",
+  "gemini-1.5-flash-8b-latest",
+  "gemini-1.5-flash",
+  "gemini-1.5-flash-8b",
+  "gemini-1.5-flash-002",
+  "gemini-1.5-flash-exp-0827",
+  "learnlm-1.5-pro-experimental",
   "gemini-exp-1114",
   "gemini-exp-1121",
-  "learnlm-1.5-pro-experimental",
-  "gemini-pro-vision",
+  "gemini-exp-1206",
+  "gemini-2.0-flash-exp",
+  "gemini-2.0-flash-thinking-exp-1219",
 ];

 const anthropicModels = [

@@ -394,6 +435,8 @@ const iflytekModels = [
   "4.0Ultra",
 ];

+const deepseekModels = ["deepseek-chat", "deepseek-coder"];
+
 const xAIModes = ["grok-beta"];

 const chatglmModels = [

@@ -405,6 +448,15 @@ const chatglmModels = [
   "glm-4-long",
   "glm-4-flashx",
   "glm-4-flash",
+  "glm-4v-plus",
+  "glm-4v",
+  "glm-4v-flash", // free
+  "cogview-3-plus",
+  "cogview-3",
+  "cogview-3-flash", // free
+  // 目前无法适配轮询任务 (polling tasks are not yet supported)
+  // "cogvideox",
+  // "cogvideox-flash", // free
 ];

 let seq = 1000; // 内置的模型序号生成器从1000开始 (built-in model sequence generator starts at 1000)

@@ -541,6 +593,17 @@ export const DEFAULT_MODELS = [
       sorted: 12,
     },
   })),
+  ...deepseekModels.map((name) => ({
+    name,
+    available: true,
+    sorted: seq++,
+    provider: {
+      id: "deepseek",
+      providerName: "DeepSeek",
+      providerType: "deepseek",
+      sorted: 13,
+    },
+  })),
 ] as const;

 export const CHAT_PAGE_SIZE = 15;

@@ -560,11 +623,6 @@ export const internalAllowedWebDavEndpoints = [
 ];

 export const DEFAULT_GA_ID = "G-89WN60ZK2E";
-
-export const PLUGINS = [
-  { name: "Plugins", path: Path.Plugins },
-  { name: "Stable Diffusion", path: Path.Sd },
-  { name: "Search Chat", path: Path.SearchChat },
-];

 export const SAAS_CHAT_URL = "https://kaiyanmedical.com";
 export const SAAS_CHAT_UTM_URL = "https://kaiyanmedical.com/chat?utm=github";

View File

@ -106,6 +106,7 @@ const cn = {
copyLastMessage: "复制最后一个回复", copyLastMessage: "复制最后一个回复",
copyLastCode: "复制最后一个代码块", copyLastCode: "复制最后一个代码块",
showShortcutKey: "显示快捷方式", showShortcutKey: "显示快捷方式",
clearContext: "清除上下文",
}, },
}, },
Export: { Export: {
@ -176,7 +177,7 @@ const cn = {
}, },
}, },
Lang: { Lang: {
Name: "Language", // ATTENTION: if you wanna add a new translation, please do not translate this value, leave it as `Language` Name: "Language", // 注意:如果要添加新的翻译,请不要翻译此值,将它保留为 `Language`
All: "所有语言", All: "所有语言",
}, },
Avatar: "头像", Avatar: "头像",
@ -462,6 +463,17 @@ const cn = {
SubTitle: "样例:", SubTitle: "样例:",
}, },
}, },
DeepSeek: {
ApiKey: {
Title: "接口密钥",
SubTitle: "使用自定义DeepSeek API Key",
Placeholder: "DeepSeek API Key",
},
Endpoint: {
Title: "接口地址",
SubTitle: "样例:",
},
},
XAI: { XAI: {
ApiKey: { ApiKey: {
Title: "接口密钥", Title: "接口密钥",
@ -630,7 +642,7 @@ const cn = {
Sysmessage: "你是一个助手", Sysmessage: "你是一个助手",
}, },
SearchChat: { SearchChat: {
Name: "搜索", Name: "搜索聊天记录",
Page: { Page: {
Title: "搜索聊天记录", Title: "搜索聊天记录",
Search: "输入搜索关键词", Search: "输入搜索关键词",

View File

@ -103,6 +103,7 @@ const en: LocaleType = {
copyLastMessage: "Copy Last Reply", copyLastMessage: "Copy Last Reply",
copyLastCode: "Copy Last Code Block", copyLastCode: "Copy Last Code Block",
showShortcutKey: "Show Shortcuts", showShortcutKey: "Show Shortcuts",
clearContext: "Clear Context",
}, },
}, },
Export: { Export: {
@ -442,6 +443,17 @@ const en: LocaleType = {
SubTitle: "Example: ", SubTitle: "Example: ",
}, },
}, },
DeepSeek: {
ApiKey: {
Title: "DeepSeek API Key",
SubTitle: "Use a custom DeepSeek API Key",
Placeholder: "DeepSeek API Key",
},
Endpoint: {
Title: "Endpoint Address",
SubTitle: "Example: ",
},
},
XAI: { XAI: {
ApiKey: { ApiKey: {
Title: "XAI API Key", Title: "XAI API Key",

View File

@ -100,6 +100,7 @@ const tw = {
copyLastMessage: "複製最後一個回覆", copyLastMessage: "複製最後一個回覆",
copyLastCode: "複製最後一個程式碼區塊", copyLastCode: "複製最後一個程式碼區塊",
showShortcutKey: "顯示快捷方式", showShortcutKey: "顯示快捷方式",
clearContext: "清除上下文",
}, },
}, },
Export: { Export: {
@ -485,7 +486,7 @@ const tw = {
}, },
}, },
SearchChat: { SearchChat: {
Name: "搜尋", Name: "搜尋聊天記錄",
Page: { Page: {
Title: "搜尋聊天記錄", Title: "搜尋聊天記錄",
Search: "輸入搜尋關鍵詞", Search: "輸入搜尋關鍵詞",

View File

@ -3,7 +3,7 @@ import { BuiltinMask } from "./typing";
export const CN_MASKS: BuiltinMask[] = [ export const CN_MASKS: BuiltinMask[] = [
{ {
avatar: "1f5bc-fe0f", avatar: "1f5bc-fe0f",
name: "以文搜图", name: "AI文生图",
context: [ context: [
{ {
id: "text-to-pic-0", id: "text-to-pic-0",
@ -28,7 +28,7 @@ export const CN_MASKS: BuiltinMask[] = [
id: "text-to-pic-3", id: "text-to-pic-3",
role: "system", role: "system",
content: content:
"助手善于判断用户意图当确定需要提供图片时助手会变得沉默寡言只使用以下格式输出markdown图片![描述](https://image.pollinations.ai/prompt/描述)因为这个语法可以自动按照提示生成并渲染图片。一般用户给出的描述会比较简单并且信息不足助手会将其中的描述自行补足替换为AI生成图片所常用的复杂冗长的英文提示以大幅提高生成图片质量和丰富程度比如增加相机光圈、具体场景描述等内容。助手会避免用代码块或原始块包围markdown标记因为那样只会渲染出代码块或原始块而不是图片。", "助手善于判断用户意图当确定需要提供图片时助手会变得沉默寡言只使用以下格式输出markdown图片![description](https://image.pollinations.ai/prompt/description?nologo=true)因为这个语法可以自动按照提示生成并渲染图片。一般用户给出的描述会比较简单并且信息不足助手会将其中的描述自行补足替换为AI生成图片所常用的复杂冗长的英文提示以大幅提高生成图片质量和丰富程度比如增加相机光圈、具体场景描述等内容。助手会避免用代码块或原始块包围markdown标记因为那样只会渲染出代码块或原始块而不是图片。url中的空格等符号需要转义。",
date: "", date: "",
}, },
], ],

View File

@ -13,6 +13,7 @@ import {
MOONSHOT_BASE_URL, MOONSHOT_BASE_URL,
STABILITY_BASE_URL, STABILITY_BASE_URL,
IFLYTEK_BASE_URL, IFLYTEK_BASE_URL,
DEEPSEEK_BASE_URL,
XAI_BASE_URL, XAI_BASE_URL,
CHATGLM_BASE_URL, CHATGLM_BASE_URL,
} from "../constant"; } from "../constant";
@ -47,6 +48,8 @@ const DEFAULT_STABILITY_URL = isApp ? STABILITY_BASE_URL : ApiPath.Stability;
const DEFAULT_IFLYTEK_URL = isApp ? IFLYTEK_BASE_URL : ApiPath.Iflytek; const DEFAULT_IFLYTEK_URL = isApp ? IFLYTEK_BASE_URL : ApiPath.Iflytek;
const DEFAULT_DEEPSEEK_URL = isApp ? DEEPSEEK_BASE_URL : ApiPath.DeepSeek;
const DEFAULT_XAI_URL = isApp ? XAI_BASE_URL : ApiPath.XAI; const DEFAULT_XAI_URL = isApp ? XAI_BASE_URL : ApiPath.XAI;
const DEFAULT_CHATGLM_URL = isApp ? CHATGLM_BASE_URL : ApiPath.ChatGLM; const DEFAULT_CHATGLM_URL = isApp ? CHATGLM_BASE_URL : ApiPath.ChatGLM;
@ -108,6 +111,10 @@ const DEFAULT_ACCESS_STATE = {
iflytekApiKey: "", iflytekApiKey: "",
iflytekApiSecret: "", iflytekApiSecret: "",
// deepseek
deepseekUrl: DEFAULT_DEEPSEEK_URL,
deepseekApiKey: "",
// xai // xai
xaiUrl: DEFAULT_XAI_URL, xaiUrl: DEFAULT_XAI_URL,
xaiApiKey: "", xaiApiKey: "",
@ -124,6 +131,7 @@ const DEFAULT_ACCESS_STATE = {
disableFastLink: false, disableFastLink: false,
customModels: "", customModels: "",
defaultModel: "", defaultModel: "",
visionModels: "",
// tts config // tts config
edgeTTSVoiceName: "zh-CN-YunxiNeural", edgeTTSVoiceName: "zh-CN-YunxiNeural",
@ -138,7 +146,10 @@ export const useAccessStore = createPersistStore(
return get().needCode; return get().needCode;
}, },
getVisionModels() {
this.fetch();
return get().visionModels;
},
edgeVoiceName() { edgeVoiceName() {
this.fetch(); this.fetch();
@ -183,6 +194,9 @@ export const useAccessStore = createPersistStore(
isValidIflytek() { isValidIflytek() {
return ensure(get(), ["iflytekApiKey"]); return ensure(get(), ["iflytekApiKey"]);
}, },
isValidDeepSeek() {
return ensure(get(), ["deepseekApiKey"]);
},
isValidXAI() { isValidXAI() {
return ensure(get(), ["xaiApiKey"]); return ensure(get(), ["xaiApiKey"]);
@ -207,6 +221,7 @@ export const useAccessStore = createPersistStore(
this.isValidTencent() || this.isValidTencent() ||
this.isValidMoonshot() || this.isValidMoonshot() ||
this.isValidIflytek() || this.isValidIflytek() ||
this.isValidDeepSeek() ||
this.isValidXAI() || this.isValidXAI() ||
this.isValidChatGLM() || this.isValidChatGLM() ||
!this.enabledAccessControl() || !this.enabledAccessControl() ||

View File

@ -1,5 +1,5 @@
import { LLMModel } from "../client/api"; import { LLMModel } from "../client/api";
import { DalleSize, DalleQuality, DalleStyle } from "../typing"; import { DalleQuality, DalleStyle, ModelSize } from "../typing";
import { getClientConfig } from "../config/client"; import { getClientConfig } from "../config/client";
import { import {
DEFAULT_INPUT_TEMPLATE, DEFAULT_INPUT_TEMPLATE,
@ -78,7 +78,7 @@ export const DEFAULT_CONFIG = {
compressProviderName: "", compressProviderName: "",
enableInjectSystemPrompts: true, enableInjectSystemPrompts: true,
template: config?.template ?? DEFAULT_INPUT_TEMPLATE, template: config?.template ?? DEFAULT_INPUT_TEMPLATE,
size: "1024x1024" as DalleSize, size: "1024x1024" as ModelSize,
quality: "standard" as DalleQuality, quality: "standard" as DalleQuality,
style: "vivid" as DalleStyle, style: "vivid" as DalleStyle,
}, },

View File

@ -11,3 +11,14 @@ export interface RequestMessage {
export type DalleSize = "1024x1024" | "1792x1024" | "1024x1792"; export type DalleSize = "1024x1024" | "1792x1024" | "1024x1792";
export type DalleQuality = "standard" | "hd"; export type DalleQuality = "standard" | "hd";
export type DalleStyle = "vivid" | "natural"; export type DalleStyle = "vivid" | "natural";
export type ModelSize =
| "1024x1024"
| "1792x1024"
| "1024x1792"
| "768x1344"
| "864x1152"
| "1344x768"
| "1152x864"
| "1440x720"
| "720x1440";

View File

@ -5,6 +5,9 @@ import { RequestMessage } from "./client/api";
import { ServiceProvider } from "./constant"; import { ServiceProvider } from "./constant";
// import { fetch as tauriFetch, ResponseType } from "@tauri-apps/api/http"; // import { fetch as tauriFetch, ResponseType } from "@tauri-apps/api/http";
import { fetch as tauriStreamFetch } from "./utils/stream"; import { fetch as tauriStreamFetch } from "./utils/stream";
import { VISION_MODEL_REGEXES, EXCLUDE_VISION_MODEL_REGEXES } from "./constant";
import { useAccessStore } from "./store";
import { ModelSize } from "./typing";
export function trimTopic(topic: string) { export function trimTopic(topic: string) {
// Fix an issue where double quotes still show in the Indonesian language // Fix an issue where double quotes still show in the Indonesian language
@ -252,27 +255,16 @@ export function getMessageImages(message: RequestMessage): string[] {
} }
export function isVisionModel(model: string) { export function isVisionModel(model: string) {
// Note: This is a better way using the TypeScript feature instead of `&&` or `||` (ts v5.5.0-dev.20240314 I've been using) const visionModels = useAccessStore.getState().visionModels;
const envVisionModels = visionModels
const excludeKeywords = ["claude-3-5-haiku-20241022"]; ?.split(",")
const visionKeywords = [ .map((m) => m.trim());
"vision", if (envVisionModels?.includes(model)) {
"gpt-4o", return true;
"claude-3", }
"gemini-1.5",
"gemini-exp",
"learnlm",
"qwen-vl",
"qwen2-vl",
];
const isGpt4Turbo =
model.includes("gpt-4-turbo") && !model.includes("preview");
return ( return (
!excludeKeywords.some((keyword) => model.includes(keyword)) && !EXCLUDE_VISION_MODEL_REGEXES.some((regex) => regex.test(model)) &&
(visionKeywords.some((keyword) => model.includes(keyword)) || VISION_MODEL_REGEXES.some((regex) => regex.test(model))
isGpt4Turbo ||
isDalle3(model))
); );
} }
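The rewrite gives the server-configured visionModels list precedence over the regex lists. A self-contained sketch of the resolution order, with the useAccessStore lookup replaced by a plain parameter for illustration:

// Illustrative only: resolution order of the rewritten isVisionModel.
function isVisionModelSketch(model: string, visionModels = ""): boolean {
  const envVisionModels = visionModels.split(",").map((m) => m.trim());
  if (envVisionModels.includes(model)) return true; // server-side list wins first
  return (
    !EXCLUDE_VISION_MODEL_REGEXES.some((re) => re.test(model)) &&
    VISION_MODEL_REGEXES.some((re) => re.test(model))
  );
}

isVisionModelSketch("my-private-vlm", "my-private-vlm,other-model"); // true via the list
isVisionModelSketch("my-private-vlm");                               // false, no regex match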
@ -280,6 +272,28 @@ export function isDalle3(model: string) {
return "dall-e-3" === model; return "dall-e-3" === model;
} }
export function getModelSizes(model: string): ModelSize[] {
if (isDalle3(model)) {
return ["1024x1024", "1792x1024", "1024x1792"];
}
if (model.toLowerCase().includes("cogview")) {
return [
"1024x1024",
"768x1344",
"864x1152",
"1344x768",
"1152x864",
"1440x720",
"720x1440",
];
}
return [];
}
export function supportsCustomSize(model: string): boolean {
return getModelSizes(model).length > 0;
}
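Usage sketch for the two helpers above — the intended flow is that the UI only renders a size picker when a model reports custom sizes. Names and the surrounding flow here are illustrative:

// Illustrative only: driving a size picker off the new helpers.
const model = "cogview-3-plus";
if (supportsCustomSize(model)) {
  const sizes = getModelSizes(model);
  // sizes -> ["1024x1024", "768x1344", "864x1152", "1344x768",
  //           "1152x864", "1440x720", "720x1440"]
  // populate the size <select> from `sizes` here
}
getModelSizes("gpt-4o"); // [] — supportsCustomSize is false, so hide the picker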
export function showPlugins(provider: ServiceProvider, model: string) { export function showPlugins(provider: ServiceProvider, model: string) {
if ( if (
provider == ServiceProvider.OpenAI || provider == ServiceProvider.OpenAI ||

View File

@ -202,3 +202,52 @@ export function isModelAvailableInServer(
const modelTable = collectModelTable(DEFAULT_MODELS, customModels); const modelTable = collectModelTable(DEFAULT_MODELS, customModels);
return modelTable[fullName]?.available === false; return modelTable[fullName]?.available === false;
} }
/**
* Check if the model name is a GPT-4 related model
*
* @param modelName The name of the model to check
* @returns True if the model is a GPT-4 related model (excluding gpt-4o-mini)
*/
export function isGPT4Model(modelName: string): boolean {
return (
(modelName.startsWith("gpt-4") ||
modelName.startsWith("chatgpt-4o") ||
modelName.startsWith("o1")) &&
!modelName.startsWith("gpt-4o-mini")
);
}
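A few concrete cases of the prefix logic above (illustrative; callers lower-case the model name before the check):

// Illustrative only: what the prefix checks accept and reject.
isGPT4Model("gpt-4");             // true
isGPT4Model("gpt-4o");            // true
isGPT4Model("chatgpt-4o-latest"); // true
isGPT4Model("o1-mini");           // true — the "o1" prefix counts as GPT-4 class
isGPT4Model("gpt-4o-mini");       // false — explicitly excluded
isGPT4Model("gpt-3.5-turbo");     // false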
/**
* Checks if a model is not available on any of the specified providers on the server.
*
* @param {string} customModels - A string of custom models, comma-separated.
* @param {string} modelName - The name of the model to check.
* @param {string|string[]} providerNames - A string or array of provider names to check against.
*
* @returns {boolean} True if the model is not available on any of the specified providers, false otherwise.
*/
export function isModelNotavailableInServer(
customModels: string,
modelName: string,
providerNames: string | string[],
): boolean {
// Check DISABLE_GPT4 environment variable
if (
process.env.DISABLE_GPT4 === "1" &&
isGPT4Model(modelName.toLowerCase())
) {
return true;
}
const modelTable = collectModelTable(DEFAULT_MODELS, customModels);
const providerNamesArray = Array.isArray(providerNames)
? providerNames
: [providerNames];
for (const providerName of providerNamesArray) {
const fullName = `${modelName}@${providerName.toLowerCase()}`;
if (modelTable?.[fullName]?.available === true) return false;
}
return true;
}
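A worked example of the model@provider lookup, mirroring the tests added further down ("-all" disables everything; later entries selectively re-enable models):

// Illustrative only; behavior as exercised by the new test file below.
isModelNotavailableInServer("", "gpt-4", "OpenAI");                             // false — available by default
isModelNotavailableInServer("-all,gpt-4o-mini", "gpt-4", "OpenAI");             // true — only gpt-4o-mini re-enabled
isModelNotavailableInServer("-all,gpt-4@google", "gpt-4", ["OpenAI", "Azure"]); // true — re-enabled for google only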

View File

@ -62,7 +62,7 @@
"@tauri-apps/cli": "1.5.11", "@tauri-apps/cli": "1.5.11",
"@testing-library/dom": "^10.4.0", "@testing-library/dom": "^10.4.0",
"@testing-library/jest-dom": "^6.6.3", "@testing-library/jest-dom": "^6.6.3",
"@testing-library/react": "^16.0.1", "@testing-library/react": "^16.1.0",
"@types/jest": "^29.5.14", "@types/jest": "^29.5.14",
"@types/js-yaml": "4.0.9", "@types/js-yaml": "4.0.9",
"@types/lodash-es": "^4.17.12", "@types/lodash-es": "^4.17.12",

View File

@ -0,0 +1,80 @@
import { isModelNotavailableInServer } from "../app/utils/model";
describe("isModelNotavailableInServer", () => {
test("test model will return false, which means the model is available", () => {
const customModels = "";
const modelName = "gpt-4";
const providerNames = "OpenAI";
const result = isModelNotavailableInServer(
customModels,
modelName,
providerNames,
);
expect(result).toBe(false);
});
test("test model will return true when model is not available in custom models", () => {
const customModels = "-all,gpt-4o-mini";
const modelName = "gpt-4";
const providerNames = "OpenAI";
const result = isModelNotavailableInServer(
customModels,
modelName,
providerNames,
);
expect(result).toBe(true);
});
test("should respect DISABLE_GPT4 setting", () => {
process.env.DISABLE_GPT4 = "1";
const result = isModelNotavailableInServer("", "gpt-4", "OpenAI");
expect(result).toBe(true);
});
test("should handle empty provider names", () => {
const result = isModelNotavailableInServer("-all,gpt-4", "gpt-4", "");
expect(result).toBe(true);
});
test("should be case insensitive for model names", () => {
const result = isModelNotavailableInServer("-all,GPT-4", "gpt-4", "OpenAI");
expect(result).toBe(true);
});
test("support passing multiple providers, model unavailable on one of the providers will return true", () => {
const customModels = "-all,gpt-4@google";
const modelName = "gpt-4";
const providerNames = ["OpenAI", "Azure"];
const result = isModelNotavailableInServer(
customModels,
modelName,
providerNames,
);
expect(result).toBe(true);
});
// FIXME: this test case is broken and needs to be fixed
// test("support passing multiple providers, model available on one of the providers will return false", () => {
// const customModels = "-all,gpt-4@google";
// const modelName = "gpt-4";
// const providerNames = ["OpenAI", "Google"];
// const result = isModelNotavailableInServer(
// customModels,
// modelName,
// providerNames,
// );
// expect(result).toBe(false);
// });
test("test custom model without setting provider", () => {
const customModels = "-all,mistral-large";
const modelName = "mistral-large";
const providerNames = modelName;
const result = isModelNotavailableInServer(
customModels,
modelName,
providerNames,
);
expect(result).toBe(false);
});
});

View File

@ -0,0 +1,67 @@
import { isVisionModel } from "../app/utils";
describe("isVisionModel", () => {
const originalEnv = process.env;
beforeEach(() => {
jest.resetModules();
process.env = { ...originalEnv };
});
afterEach(() => {
process.env = originalEnv;
});
test("should identify vision models using regex patterns", () => {
const visionModels = [
"gpt-4-vision",
"claude-3-opus",
"gemini-1.5-pro",
"gemini-2.0",
"gemini-exp-vision",
"learnlm-vision",
"qwen-vl-max",
"qwen2-vl-max",
"gpt-4-turbo",
"dall-e-3",
];
visionModels.forEach((model) => {
expect(isVisionModel(model)).toBe(true);
});
});
test("should exclude specific models", () => {
expect(isVisionModel("claude-3-5-haiku-20241022")).toBe(false);
});
test("should not identify non-vision models", () => {
const nonVisionModels = [
"gpt-3.5-turbo",
"gpt-4-turbo-preview",
"claude-2",
"regular-model",
];
nonVisionModels.forEach((model) => {
expect(isVisionModel(model)).toBe(false);
});
});
test("should identify models from VISION_MODELS env var", () => {
process.env.VISION_MODELS = "custom-vision-model,another-vision-model";
expect(isVisionModel("custom-vision-model")).toBe(true);
expect(isVisionModel("another-vision-model")).toBe(true);
expect(isVisionModel("unrelated-model")).toBe(false);
});
test("should handle empty or missing VISION_MODELS", () => {
process.env.VISION_MODELS = "";
expect(isVisionModel("unrelated-model")).toBe(false);
delete process.env.VISION_MODELS;
expect(isVisionModel("unrelated-model")).toBe(false);
expect(isVisionModel("gpt-4-vision")).toBe(true);
});
});

View File

@ -2807,10 +2807,10 @@
lodash "^4.17.21" lodash "^4.17.21"
redent "^3.0.0" redent "^3.0.0"
"@testing-library/react@^16.0.1": "@testing-library/react@^16.1.0":
version "16.0.1" version "16.1.0"
resolved "https://registry.yarnpkg.com/@testing-library/react/-/react-16.0.1.tgz#29c0ee878d672703f5e7579f239005e4e0faa875" resolved "https://registry.yarnpkg.com/@testing-library/react/-/react-16.1.0.tgz#aa0c61398bac82eaf89776967e97de41ac742d71"
integrity sha512-dSmwJVtJXmku+iocRhWOUFbrERC76TX2Mnf0ATODz8brzAZrMBbzLwQixlBSanZxR6LddK3eiwpSFZgDET1URg== integrity sha512-Q2ToPvg0KsVL0ohND9A3zLJWcOXXcO8IDu3fj11KhNt0UlCWyFyvnCIBkd12tidB2lkiVRG8VFqdhcqhqnAQtg==
dependencies: dependencies:
"@babel/runtime" "^7.12.5" "@babel/runtime" "^7.12.5"