Merge branch 'main' into feat-mcp

commit a3d3ce3f4c
Author: Kadxy
Date: 2025-01-19 23:28:12 +08:00 (committed by GitHub)
37 changed files with 896 additions and 206 deletions

README.md

@ -1,16 +1,19 @@
<div align="center">
<a href='#企业版'>
<img src="./docs/images/ent.svg" alt="icon"/>
<a href='https://nextchat.dev/chat'>
<img src="https://github.com/user-attachments/assets/287c510f-f508-478e-ade3-54d30453dc18" width="1000" alt="icon"/>
</a>
<h1 align="center">NextChat (ChatGPT Next Web)</h1>
English / [简体中文](./README_CN.md)
One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4 & Gemini Pro support.
<a href="https://trendshift.io/repositories/5973" target="_blank"><img src="https://trendshift.io/api/badge/repositories/5973" alt="ChatGPTNextWeb%2FChatGPT-Next-Web | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
One-click free deployment of your private cross-platform ChatGPT app, with GPT3, GPT4 & Gemini Pro support.
One-Click to get a well-designed cross-platform ChatGPT web UI, with Claude, GPT4 & Gemini Pro support.
[![Saas][Saas-image]][saas-url]
[![Web][Web-image]][web-url]
@ -18,9 +21,8 @@ One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4
[![MacOS][MacOS-image]][download-url]
[![Linux][Linux-image]][download-url]
[NextChatAI](https://nextchat.dev/chat?utm_source=readme) / [Web App](https://app.nextchat.dev) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Enterprise Edition](#enterprise-edition) / [Twitter](https://twitter.com/NextChatDev)
[NextChatAI](https://nextchat.dev/chat?utm_source=readme) / [Web App Demo](https://app.nextchat.dev) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Enterprise Edition](#enterprise-edition) / [Twitter](https://twitter.com/NextChatDev)
[NextChatAI](https://nextchat.dev/chat) / [Web Version](https://app.nextchat.dev) / [Desktop Client](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Enterprise Edition](#%E4%BC%81%E4%B8%9A%E7%89%88) / [Feedback](https://github.com/Yidadaa/ChatGPT-Next-Web/issues)
[saas-url]: https://nextchat.dev/chat?utm_source=readme
[saas-image]: https://img.shields.io/badge/NextChat-Saas-green?logo=microsoftedge
@ -31,7 +33,7 @@ One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4
[MacOS-image]: https://img.shields.io/badge/-MacOS-black?logo=apple
[Linux-image]: https://img.shields.io/badge/-Linux-333?logo=ubuntu
[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html) [<img src="https://svgshare.com/i/1AVg.svg" alt="Deploy to Alibaba Cloud" height="30">](https://computenest.aliyun.com/market/service-f1c9b75e59814dc49d52)
[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html)
[<img src="https://github.com/user-attachments/assets/903482d4-3e87-4134-9af1-f2588fa90659" height="60" width="288" >](https://monica.im/?utm=nxcrp)
@ -50,20 +52,12 @@ Meeting Your Company's Privatization and Customization Deployment Requirements:
For enterprise inquiries, please contact: **business@nextchat.dev**
## Enterprise Edition
## Screenshots
Meeting enterprise users' needs for private deployment and personalized customization:
- **Brand customization**: VI/UI tailored to the company, seamlessly matching the corporate brand image
- **Resource integration**: dozens of AI resources configured and managed centrally by company administrators, ready for team members out of the box
- **Permission management**: clearly layered member, resource, and knowledge-base permissions, governed centrally through an enterprise-grade Admin Panel
- **Knowledge integration**: combines the internal enterprise knowledge base with AI capabilities, closer to the company's own business needs than general-purpose AI
- **Security auditing**: automatically intercepts sensitive questions and keeps the full conversation history traceable, so AI usage can comply with corporate information-security rules
- **Private deployment**: enterprise-grade private deployment on all mainstream private clouds, ensuring data security and privacy
- **Continuous updates**: ongoing upgrades with cutting-edge capabilities such as multimodality and agents, staying fresh and advanced over time
![Settings](./docs/images/settings.png)
For enterprise-edition inquiries: **business@nextchat.dev**
![More](./docs/images/more.png)
<img width="300" src="https://github.com/user-attachments/assets/3d4305ac-6e95-489e-884b-51d51db5f692">
## Features
@ -110,50 +104,8 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
- 🚀 v2.7 let's share conversations as image, or share to ShareGPT!
- 🚀 v2.0 is released, now you can create prompt templates, turn your ideas into reality! Read this: [ChatGPT Prompt Engineering Tips: Zero, One and Few Shot Prompting](https://www.allabtai.com/prompt-engineering-tips-zero-one-and-few-shot-prompting/).
## Main Features
- **Free one-click deployment** on Vercel in under 1 minute
- A tiny (~5MB) cross-platform client for Linux/Windows/MacOS, [download here](https://github.com/Yidadaa/ChatGPT-Next-Web/releases)
- Full Markdown support: LaTeX formulas, Mermaid flowcharts, code highlighting, and more
- Carefully designed UI: responsive design, dark mode, PWA support
- Extremely fast first-screen load (~100kb), with streaming responses
- Privacy first: all data is stored locally in the user's browser
- Prebuilt personas (masks) for easily creating, sharing, and debugging personalized conversations
- A huge built-in prompt list, sourced from [Chinese](https://github.com/PlexPt/awesome-chatgpt-prompts-zh) and [English](https://github.com/f/awesome-chatgpt-prompts) collections
- Automatic chat-history compression, supporting very long conversations while saving tokens
- Multilingual support: English, 简体中文, 繁体中文, 日本語, Español, Italiano, Türkçe, Deutsch, Tiếng Việt, Русский, Čeština, 한국어, Indonesia
- Have your own domain? Even better: once bound, you can reach it quickly from anywhere, **without obstacles**
## Roadmap
- [x] Set a system prompt for each conversation [#138](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/138)
- [x] Allow users to edit the built-in prompt list
- [x] Masks: quickly customize new conversations with prebuilt personas [#993](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/993)
- [x] Share as image, share to ShareGPT [#1741](https://github.com/Yidadaa/ChatGPT-Next-Web/pull/1741)
- [x] Desktop app packaged with Tauri
- [x] Self-hosted LLM support: [RWKV-Runner](https://github.com/josStorer/RWKV-Runner) works out of the box, or deploy [LocalAI](https://github.com/go-skynet/LocalAI) on the server (llama / gpt4all / rwkv / vicuna / koala / gpt4all-j / cerebras / falcon / dolly, etc.), or use [api-for-open-llm](https://github.com/xusenlinzy/api-for-open-llm)
- [x] Artifacts: preview, copy, and share generated content/interactive pages in a separate window [#5092](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/pull/5092)
- [x] Plugin mechanism supporting `web search`, `calculator`, and calls to other platforms' APIs [#165](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/165) [#5353](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5353)
- [x] Web search, calculator, and calls to other platforms' APIs [#165](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/165) [#5353](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5353)
- [x] Realtime Chat support [#5672](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5672)
- [ ] Local knowledge base
## What's New
- 🚀 v2.15.8 now supports Realtime Chat [#5672](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5672)
- 🚀 v2.15.4 the Tauri client supports calling LLM APIs directly from the local machine, which is more secure [#5379](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5379)
- 🚀 v2.15.0 now supports plugins! Learn more: [NextChat-Awesome-Plugins](https://github.com/ChatGPTNextWeb/NextChat-Awesome-Plugins)
- 🚀 v2.14.0 now supports Artifacts & SD.
- 🚀 v2.10.1 now supports the Gemini Pro model.
- 🚀 v2.9.11 custom Azure services can now be used.
- 🚀 v2.8 released a tiny client covering Linux/Windows/MacOS.
- 🚀 v2.7 conversations can now be shared as images, or via ShareGPT links.
- 🚀 v2.0 released: you can now create prebuilt conversations quickly with masks! Learn more: [ChatGPT prompt engineering tips: zero-, one- and few-shot prompting](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/138).
- 💡 Want to use this project conveniently anywhere? Try this desktop plugin: https://github.com/mushan0x0/AI0x0.com
## Get Started
> [简体中文 > 如何开始使用](./README_CN.md#开始使用)
1. Get [OpenAI API Key](https://platform.openai.com/account/api-keys);
2. Click
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web), remember that `CODE` is your page password;
@ -161,14 +113,10 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
## FAQ
[简体中文 > 常见问题](./docs/faq-cn.md)
[English > FAQ](./docs/faq-en.md)
## Keep Updated
> [简体中文 > 如何保持代码更新](./README_CN.md#保持更新)
If you have deployed your own project with just one click following the steps above, you may encounter the issue of "Updates Available" constantly showing up. This is because Vercel will create a new project for you by default instead of forking this project, resulting in the inability to detect updates correctly.
We recommend that you follow the steps below to re-deploy:
@ -195,8 +143,6 @@ You can star or watch this project or follow author to get release notifications
## Access Password
> [简体中文 > 如何增加访问密码](./README_CN.md#配置页面访问密码)
This project provides limited access control. Please add an environment variable named `CODE` on the Vercel environment variables page. The value should be a comma-separated list of passwords, like this:
```
@ -207,8 +153,6 @@ After adding or modifying this environment variable, please redeploy the project
## Environment Variables
> [简体中文 > 如何配置 api key、访问密码、接口代理](./README_CN.md#环境变量)
### `CODE` (optional)
Access password, separated by comma.
@ -311,6 +255,14 @@ ChatGLM Api Key.
ChatGLM Api Url.
### `DEEPSEEK_API_KEY` (optional)
DeepSeek Api Key.
### `DEEPSEEK_URL` (optional)
DeepSeek Api Url.
### `HIDE_USER_API_KEY` (optional)
> Default: Empty
@ -387,7 +339,6 @@ NodeJS >= 18, Docker >= 20
## Development
> [简体中文 > 如何进行二次开发](./README_CN.md#开发)
[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web)
@ -412,10 +363,6 @@ yarn dev
## Deployment
> [简体中文 > 如何部署到私人服务器](./README_CN.md#部署)
### BT Install
> [简体中文 > 如何通过宝塔一键部署](./docs/bt-cn.md)
### Docker (Recommended)
@ -464,11 +411,7 @@ bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/s
- [How to use Vercel (No English)](./docs/vercel-cn.md)
- [User Manual (Only Chinese, WIP)](./docs/user-manual-cn.md)
## Screenshots
![Settings](./docs/images/settings.png)
![More](./docs/images/more.png)
## Translation
@ -480,37 +423,7 @@ If you want to add a new translation, read this [document](./docs/translation.md
## Special Thanks
### Sponsor
> Only users whose donations total >= 100 RMB are listed.
[@mushan0x0](https://github.com/mushan0x0)
[@ClarenceDan](https://github.com/ClarenceDan)
[@zhangjia](https://github.com/zhangjia)
[@hoochanlon](https://github.com/hoochanlon)
[@relativequantum](https://github.com/relativequantum)
[@desenmeng](https://github.com/desenmeng)
[@webees](https://github.com/webees)
[@chazzhou](https://github.com/chazzhou)
[@hauy](https://github.com/hauy)
[@Corwin006](https://github.com/Corwin006)
[@yankunsong](https://github.com/yankunsong)
[@ypwhs](https://github.com/ypwhs)
[@fxxxchao](https://github.com/fxxxchao)
[@hotic](https://github.com/hotic)
[@WingCH](https://github.com/WingCH)
[@jtung4](https://github.com/jtung4)
[@micozhu](https://github.com/micozhu)
[@jhansion](https://github.com/jhansion)
[@Sha1rholder](https://github.com/Sha1rholder)
[@AnsonHyq](https://github.com/AnsonHyq)
[@synwith](https://github.com/synwith)
[@piksonGit](https://github.com/piksonGit)
[@ouyangzhiping](https://github.com/ouyangzhiping)
[@wenjiavv](https://github.com/wenjiavv)
[@LeXwDeX](https://github.com/LeXwDeX)
[@Licoy](https://github.com/Licoy)
[@shangmin2009](https://github.com/shangmin2009)
### Contributors


@ -192,6 +192,14 @@ ChatGLM Api Key.
ChatGLM Api Url.
### `DEEPSEEK_API_KEY` (optional)
DeepSeek Api Key.
### `DEEPSEEK_URL` (optional)
DeepSeek Api Url.
### `HIDE_USER_API_KEY` (optional)


@ -10,6 +10,7 @@ import { handle as alibabaHandler } from "../../alibaba";
import { handle as moonshotHandler } from "../../moonshot";
import { handle as stabilityHandler } from "../../stability";
import { handle as iflytekHandler } from "../../iflytek";
import { handle as deepseekHandler } from "../../deepseek";
import { handle as xaiHandler } from "../../xai";
import { handle as chatglmHandler } from "../../glm";
import { handle as proxyHandler } from "../../proxy";
@ -40,6 +41,8 @@ async function handle(
return stabilityHandler(req, { params });
case ApiPath.Iflytek:
return iflytekHandler(req, { params });
case ApiPath.DeepSeek:
return deepseekHandler(req, { params });
case ApiPath.XAI:
return xaiHandler(req, { params });
case ApiPath.ChatGLM:


@ -8,7 +8,7 @@ import {
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelAvailableInServer } from "@/app/utils/model";
import { isModelNotavailableInServer } from "@/app/utils/model";
const serverConfig = getServerSideConfig();
@ -89,7 +89,7 @@ async function request(req: NextRequest) {
// reject the request if the model is not available on this server
if (
isModelAvailableInServer(
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.Alibaba as string,


@ -9,7 +9,7 @@ import {
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "./auth";
import { isModelAvailableInServer } from "@/app/utils/model";
import { isModelNotavailableInServer } from "@/app/utils/model";
import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";
const ALLOWD_PATH = new Set([Anthropic.ChatPath, Anthropic.ChatPath1]);
@ -122,7 +122,7 @@ async function request(req: NextRequest) {
// reject the request if the model is not available on this server
if (
isModelAvailableInServer(
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.Anthropic as string,


@ -92,6 +92,9 @@ export function auth(req: NextRequest, modelProvider: ModelProvider) {
systemApiKey =
serverConfig.iflytekApiKey + ":" + serverConfig.iflytekApiSecret;
break;
case ModelProvider.DeepSeek:
systemApiKey = serverConfig.deepseekApiKey;
break;
case ModelProvider.XAI:
systemApiKey = serverConfig.xaiApiKey;
break;


@ -8,7 +8,7 @@ import {
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelAvailableInServer } from "@/app/utils/model";
import { isModelNotavailableInServer } from "@/app/utils/model";
import { getAccessToken } from "@/app/utils/baidu";
const serverConfig = getServerSideConfig();
@ -104,7 +104,7 @@ async function request(req: NextRequest) {
// reject the request if the model is not available on this server
if (
isModelAvailableInServer(
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.Baidu as string,


@ -8,7 +8,7 @@ import {
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelAvailableInServer } from "@/app/utils/model";
import { isModelNotavailableInServer } from "@/app/utils/model";
const serverConfig = getServerSideConfig();
@ -88,7 +88,7 @@ async function request(req: NextRequest) {
// reject the request if the model is not available on this server
if (
isModelAvailableInServer(
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.ByteDance as string,


@ -2,7 +2,7 @@ import { NextRequest, NextResponse } from "next/server";
import { getServerSideConfig } from "../config/server";
import { OPENAI_BASE_URL, ServiceProvider } from "../constant";
import { cloudflareAIGatewayUrl } from "../utils/cloudflare";
import { getModelProvider, isModelAvailableInServer } from "../utils/model";
import { getModelProvider, isModelNotavailableInServer } from "../utils/model";
const serverConfig = getServerSideConfig();
@ -118,15 +118,14 @@ export async function requestOpenai(req: NextRequest) {
// reject the request if the model is not available on this server
if (
isModelAvailableInServer(
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.OpenAI as string,
) ||
isModelAvailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.Azure as string,
[
ServiceProvider.OpenAI,
ServiceProvider.Azure,
jsonBody?.model as string, // support provider-unspecified model
],
)
) {
return NextResponse.json(


@ -14,6 +14,7 @@ const DANGER_CONFIG = {
disableFastLink: serverConfig.disableFastLink,
customModels: serverConfig.customModels,
defaultModel: serverConfig.defaultModel,
visionModels: serverConfig.visionModels,
};
declare global {

app/api/deepseek.ts (new file)

@ -0,0 +1,128 @@
import { getServerSideConfig } from "@/app/config/server";
import {
DEEPSEEK_BASE_URL,
ApiPath,
ModelProvider,
ServiceProvider,
} from "@/app/constant";
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelNotavailableInServer } from "@/app/utils/model";
const serverConfig = getServerSideConfig();
export async function handle(
req: NextRequest,
{ params }: { params: { path: string[] } },
) {
console.log("[DeepSeek Route] params ", params);
if (req.method === "OPTIONS") {
return NextResponse.json({ body: "OK" }, { status: 200 });
}
const authResult = auth(req, ModelProvider.DeepSeek);
if (authResult.error) {
return NextResponse.json(authResult, {
status: 401,
});
}
try {
const response = await request(req);
return response;
} catch (e) {
console.error("[DeepSeek] ", e);
return NextResponse.json(prettyObject(e));
}
}
async function request(req: NextRequest) {
const controller = new AbortController();
// resolve the base url and strip the ApiPath.DeepSeek prefix from the request path
let path = `${req.nextUrl.pathname}`.replaceAll(ApiPath.DeepSeek, "");
let baseUrl = serverConfig.deepseekUrl || DEEPSEEK_BASE_URL;
if (!baseUrl.startsWith("http")) {
baseUrl = `https://${baseUrl}`;
}
if (baseUrl.endsWith("/")) {
baseUrl = baseUrl.slice(0, -1);
}
console.log("[Proxy] ", path);
console.log("[Base Url]", baseUrl);
const timeoutId = setTimeout(
() => {
controller.abort();
},
10 * 60 * 1000,
);
const fetchUrl = `${baseUrl}${path}`;
const fetchOptions: RequestInit = {
headers: {
"Content-Type": "application/json",
Authorization: req.headers.get("Authorization") ?? "",
},
method: req.method,
body: req.body,
redirect: "manual",
// @ts-ignore
duplex: "half",
signal: controller.signal,
};
// #1815 try to refuse some request to some models
if (serverConfig.customModels && req.body) {
try {
const clonedBody = await req.text();
fetchOptions.body = clonedBody;
const jsonBody = JSON.parse(clonedBody) as { model?: string };
// reject the request if the model is not available on this server
if (
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.DeepSeek as string,
)
) {
return NextResponse.json(
{
error: true,
message: `you are not allowed to use ${jsonBody?.model} model`,
},
{
status: 403,
},
);
}
} catch (e) {
console.error(`[DeepSeek] filter`, e);
}
}
try {
const res = await fetch(fetchUrl, fetchOptions);
// to prevent browser prompt for credentials
const newHeaders = new Headers(res.headers);
newHeaders.delete("www-authenticate");
// to disable nginx buffering
newHeaders.set("X-Accel-Buffering", "no");
return new Response(res.body, {
status: res.status,
statusText: res.statusText,
headers: newHeaders,
});
} finally {
clearTimeout(timeoutId);
}
}


@ -8,7 +8,7 @@ import {
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelAvailableInServer } from "@/app/utils/model";
import { isModelNotavailableInServer } from "@/app/utils/model";
const serverConfig = getServerSideConfig();
@ -89,7 +89,7 @@ async function request(req: NextRequest) {
// reject the request if the model is not available on this server
if (
isModelAvailableInServer(
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.ChatGLM as string,


@ -8,7 +8,7 @@ import {
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelAvailableInServer } from "@/app/utils/model";
import { isModelNotavailableInServer } from "@/app/utils/model";
// iflytek
const serverConfig = getServerSideConfig();
@ -89,7 +89,7 @@ async function request(req: NextRequest) {
// reject the request if the model is not available on this server
if (
isModelAvailableInServer(
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.Iflytek as string,


@ -8,7 +8,7 @@ import {
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelAvailableInServer } from "@/app/utils/model";
import { isModelNotavailableInServer } from "@/app/utils/model";
const serverConfig = getServerSideConfig();
@ -88,7 +88,7 @@ async function request(req: NextRequest) {
// reject the request if the model is not available on this server
if (
isModelAvailableInServer(
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.Moonshot as string,


@ -8,7 +8,7 @@ import {
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "@/app/api/auth";
import { isModelAvailableInServer } from "@/app/utils/model";
import { isModelNotavailableInServer } from "@/app/utils/model";
const serverConfig = getServerSideConfig();
@ -88,7 +88,7 @@ async function request(req: NextRequest) {
// reject the request if the model is not available on this server
if (
isModelAvailableInServer(
isModelNotavailableInServer(
serverConfig.customModels,
jsonBody?.model as string,
ServiceProvider.XAI as string,


@ -20,6 +20,7 @@ import { QwenApi } from "./platforms/alibaba";
import { HunyuanApi } from "./platforms/tencent";
import { MoonshotApi } from "./platforms/moonshot";
import { SparkApi } from "./platforms/iflytek";
import { DeepSeekApi } from "./platforms/deepseek";
import { XAIApi } from "./platforms/xai";
import { ChatGLMApi } from "./platforms/glm";
@ -154,6 +155,9 @@ export class ClientApi {
case ModelProvider.Iflytek:
this.llm = new SparkApi();
break;
case ModelProvider.DeepSeek:
this.llm = new DeepSeekApi();
break;
case ModelProvider.XAI:
this.llm = new XAIApi();
break;
@ -247,6 +251,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
const isAlibaba = modelConfig.providerName === ServiceProvider.Alibaba;
const isMoonshot = modelConfig.providerName === ServiceProvider.Moonshot;
const isIflytek = modelConfig.providerName === ServiceProvider.Iflytek;
const isDeepSeek = modelConfig.providerName === ServiceProvider.DeepSeek;
const isXAI = modelConfig.providerName === ServiceProvider.XAI;
const isChatGLM = modelConfig.providerName === ServiceProvider.ChatGLM;
const isEnabledAccessControl = accessStore.enabledAccessControl();
@ -264,6 +269,8 @@ export function getHeaders(ignoreHeaders: boolean = false) {
? accessStore.moonshotApiKey
: isXAI
? accessStore.xaiApiKey
: isDeepSeek
? accessStore.deepseekApiKey
: isChatGLM
? accessStore.chatglmApiKey
: isIflytek
@ -280,6 +287,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
isAlibaba,
isMoonshot,
isIflytek,
isDeepSeek,
isXAI,
isChatGLM,
apiKey,
@ -302,6 +310,13 @@ export function getHeaders(ignoreHeaders: boolean = false) {
isAzure,
isAnthropic,
isBaidu,
isByteDance,
isAlibaba,
isMoonshot,
isIflytek,
isDeepSeek,
isXAI,
isChatGLM,
apiKey,
isEnabledAccessControl,
} = getConfig();
@ -344,6 +359,8 @@ export function getClientApi(provider: ServiceProvider): ClientApi {
return new ClientApi(ModelProvider.Moonshot);
case ServiceProvider.Iflytek:
return new ClientApi(ModelProvider.Iflytek);
case ServiceProvider.DeepSeek:
return new ClientApi(ModelProvider.DeepSeek);
case ServiceProvider.XAI:
return new ClientApi(ModelProvider.XAI);
case ServiceProvider.ChatGLM:


@ -0,0 +1,200 @@
"use client";
// DeepSeek exposes an OpenAI-compatible API, so this client mirrors the OpenAI LLMApi implementation.
import {
ApiPath,
DEEPSEEK_BASE_URL,
DeepSeek,
REQUEST_TIMEOUT_MS,
} from "@/app/constant";
import {
useAccessStore,
useAppConfig,
useChatStore,
ChatMessageTool,
usePluginStore,
} from "@/app/store";
import { stream } from "@/app/utils/chat";
import {
ChatOptions,
getHeaders,
LLMApi,
LLMModel,
SpeechOptions,
} from "../api";
import { getClientConfig } from "@/app/config/client";
import { getMessageTextContent } from "@/app/utils";
import { RequestPayload } from "./openai";
import { fetch } from "@/app/utils/stream";
export class DeepSeekApi implements LLMApi {
private disableListModels = true;
path(path: string): string {
const accessStore = useAccessStore.getState();
let baseUrl = "";
if (accessStore.useCustomConfig) {
baseUrl = accessStore.deepseekUrl;
}
if (baseUrl.length === 0) {
const isApp = !!getClientConfig()?.isApp;
const apiPath = ApiPath.DeepSeek;
baseUrl = isApp ? DEEPSEEK_BASE_URL : apiPath;
}
if (baseUrl.endsWith("/")) {
baseUrl = baseUrl.slice(0, baseUrl.length - 1);
}
if (!baseUrl.startsWith("http") && !baseUrl.startsWith(ApiPath.DeepSeek)) {
baseUrl = "https://" + baseUrl;
}
console.log("[Proxy Endpoint] ", baseUrl, path);
return [baseUrl, path].join("/");
}
extractMessage(res: any) {
return res.choices?.at(0)?.message?.content ?? "";
}
speech(options: SpeechOptions): Promise<ArrayBuffer> {
throw new Error("Method not implemented.");
}
async chat(options: ChatOptions) {
const messages: ChatOptions["messages"] = [];
for (const v of options.messages) {
const content = getMessageTextContent(v);
messages.push({ role: v.role, content });
}
const modelConfig = {
...useAppConfig.getState().modelConfig,
...useChatStore.getState().currentSession().mask.modelConfig,
...{
model: options.config.model,
providerName: options.config.providerName,
},
};
const requestPayload: RequestPayload = {
messages,
stream: options.config.stream,
model: modelConfig.model,
temperature: modelConfig.temperature,
presence_penalty: modelConfig.presence_penalty,
frequency_penalty: modelConfig.frequency_penalty,
top_p: modelConfig.top_p,
// max_tokens: Math.max(modelConfig.max_tokens, 1024),
// max_tokens is intentionally not sent; in practice it causes more problems than it is worth.
};
console.log("[Request] openai payload: ", requestPayload);
const shouldStream = !!options.config.stream;
const controller = new AbortController();
options.onController?.(controller);
try {
const chatPath = this.path(DeepSeek.ChatPath);
const chatPayload = {
method: "POST",
body: JSON.stringify(requestPayload),
signal: controller.signal,
headers: getHeaders(),
};
// make a fetch request
const requestTimeoutId = setTimeout(
() => controller.abort(),
REQUEST_TIMEOUT_MS,
);
if (shouldStream) {
const [tools, funcs] = usePluginStore
.getState()
.getAsTools(
useChatStore.getState().currentSession().mask?.plugin || [],
);
return stream(
chatPath,
requestPayload,
getHeaders(),
tools as any,
funcs,
controller,
// parseSSE
(text: string, runTools: ChatMessageTool[]) => {
// console.log("parseSSE", text, runTools);
const json = JSON.parse(text);
const choices = json.choices as Array<{
delta: {
content: string;
tool_calls: ChatMessageTool[];
};
}>;
const tool_calls = choices[0]?.delta?.tool_calls;
if (tool_calls?.length > 0) {
const index = tool_calls[0]?.index;
const id = tool_calls[0]?.id;
const args = tool_calls[0]?.function?.arguments;
if (id) {
runTools.push({
id,
type: tool_calls[0]?.type,
function: {
name: tool_calls[0]?.function?.name as string,
arguments: args,
},
});
} else {
// @ts-ignore
runTools[index]["function"]["arguments"] += args;
}
}
return choices[0]?.delta?.content;
},
// processToolMessage, include tool_calls message and tool call results
(
requestPayload: RequestPayload,
toolCallMessage: any,
toolCallResult: any[],
) => {
// @ts-ignore
requestPayload?.messages?.splice(
// @ts-ignore
requestPayload?.messages?.length,
0,
toolCallMessage,
...toolCallResult,
);
},
options,
);
} else {
const res = await fetch(chatPath, chatPayload);
clearTimeout(requestTimeoutId);
const resJson = await res.json();
const message = this.extractMessage(resJson);
options.onFinish(message, res);
}
} catch (e) {
console.log("[Request] failed to make a chat request", e);
options.onError?.(e as Error);
}
}
async usage() {
return {
used: 0,
total: 0,
};
}
async models(): Promise<LLMModel[]> {
return [];
}
}


@ -21,16 +21,108 @@ import {
SpeechOptions,
} from "../api";
import { getClientConfig } from "@/app/config/client";
import { getMessageTextContent } from "@/app/utils";
import { getMessageTextContent, isVisionModel } from "@/app/utils";
import { RequestPayload } from "./openai";
import { fetch } from "@/app/utils/stream";
import { preProcessImageContent } from "@/app/utils/chat";
interface BasePayload {
model: string;
}
interface ChatPayload extends BasePayload {
messages: ChatOptions["messages"];
stream?: boolean;
temperature?: number;
presence_penalty?: number;
frequency_penalty?: number;
top_p?: number;
}
interface ImageGenerationPayload extends BasePayload {
prompt: string;
size?: string;
user_id?: string;
}
interface VideoGenerationPayload extends BasePayload {
prompt: string;
duration?: number;
resolution?: string;
user_id?: string;
}
type ModelType = "chat" | "image" | "video";
export class ChatGLMApi implements LLMApi {
private disableListModels = true;
private getModelType(model: string): ModelType {
if (model.startsWith("cogview-")) return "image";
if (model.startsWith("cogvideo-")) return "video";
return "chat";
}
private getModelPath(type: ModelType): string {
switch (type) {
case "image":
return ChatGLM.ImagePath;
case "video":
return ChatGLM.VideoPath;
default:
return ChatGLM.ChatPath;
}
}
private createPayload(
messages: ChatOptions["messages"],
modelConfig: any,
options: ChatOptions,
): BasePayload {
const modelType = this.getModelType(modelConfig.model);
const lastMessage = messages[messages.length - 1];
const prompt =
typeof lastMessage.content === "string"
? lastMessage.content
: lastMessage.content.map((c) => c.text).join("\n");
switch (modelType) {
case "image":
return {
model: modelConfig.model,
prompt,
size: options.config.size,
} as ImageGenerationPayload;
default:
return {
messages,
stream: options.config.stream,
model: modelConfig.model,
temperature: modelConfig.temperature,
presence_penalty: modelConfig.presence_penalty,
frequency_penalty: modelConfig.frequency_penalty,
top_p: modelConfig.top_p,
} as ChatPayload;
}
}
private parseResponse(modelType: ModelType, json: any): string {
switch (modelType) {
case "image": {
const imageUrl = json.data?.[0]?.url;
return imageUrl ? `![Generated Image](${imageUrl})` : "";
}
case "video": {
const videoUrl = json.data?.[0]?.url;
return videoUrl ? `<video controls src="${videoUrl}"></video>` : "";
}
default:
return this.extractMessage(json);
}
}
path(path: string): string {
const accessStore = useAccessStore.getState();
let baseUrl = "";
if (accessStore.useCustomConfig) {
@ -51,7 +143,6 @@ export class ChatGLMApi implements LLMApi {
}
console.log("[Proxy Endpoint] ", baseUrl, path);
return [baseUrl, path].join("/");
}
@ -64,9 +155,12 @@ export class ChatGLMApi implements LLMApi {
}
async chat(options: ChatOptions) {
const visionModel = isVisionModel(options.config.model);
const messages: ChatOptions["messages"] = [];
for (const v of options.messages) {
const content = getMessageTextContent(v);
const content = visionModel
? await preProcessImageContent(v.content)
: getMessageTextContent(v);
messages.push({ role: v.role, content });
}
@ -78,25 +172,16 @@ export class ChatGLMApi implements LLMApi {
providerName: options.config.providerName,
},
};
const modelType = this.getModelType(modelConfig.model);
const requestPayload = this.createPayload(messages, modelConfig, options);
const path = this.path(this.getModelPath(modelType));
const requestPayload: RequestPayload = {
messages,
stream: options.config.stream,
model: modelConfig.model,
temperature: modelConfig.temperature,
presence_penalty: modelConfig.presence_penalty,
frequency_penalty: modelConfig.frequency_penalty,
top_p: modelConfig.top_p,
};
console.log(`[Request] glm ${modelType} payload: `, requestPayload);
console.log("[Request] glm payload: ", requestPayload);
const shouldStream = !!options.config.stream;
const controller = new AbortController();
options.onController?.(controller);
try {
const chatPath = this.path(ChatGLM.ChatPath);
const chatPayload = {
method: "POST",
body: JSON.stringify(requestPayload),
@ -104,12 +189,23 @@ export class ChatGLMApi implements LLMApi {
headers: getHeaders(),
};
// make a fetch request
const requestTimeoutId = setTimeout(
() => controller.abort(),
REQUEST_TIMEOUT_MS,
);
if (modelType === "image" || modelType === "video") {
const res = await fetch(path, chatPayload);
clearTimeout(requestTimeoutId);
const resJson = await res.json();
console.log(`[Response] glm ${modelType}:`, resJson);
const message = this.parseResponse(modelType, resJson);
options.onFinish(message, res);
return;
}
const shouldStream = !!options.config.stream;
if (shouldStream) {
const [tools, funcs] = usePluginStore
.getState()
@ -117,7 +213,7 @@ export class ChatGLMApi implements LLMApi {
useChatStore.getState().currentSession().mask?.plugin || [],
);
return stream(
chatPath,
path,
requestPayload,
getHeaders(),
tools as any,
@ -125,7 +221,6 @@ export class ChatGLMApi implements LLMApi {
controller,
// parseSSE
(text: string, runTools: ChatMessageTool[]) => {
// console.log("parseSSE", text, runTools);
const json = JSON.parse(text);
const choices = json.choices as Array<{
delta: {
@ -154,7 +249,7 @@ export class ChatGLMApi implements LLMApi {
}
return choices[0]?.delta?.content;
},
// processToolMessage, include tool_calls message and tool call results
// processToolMessage
(
requestPayload: RequestPayload,
toolCallMessage: any,
@ -172,7 +267,7 @@ export class ChatGLMApi implements LLMApi {
options,
);
} else {
const res = await fetch(chatPath, chatPayload);
const res = await fetch(path, chatPayload);
clearTimeout(requestTimeoutId);
const resJson = await res.json();
@ -184,6 +279,7 @@ export class ChatGLMApi implements LLMApi {
options.onError?.(e as Error);
}
}
async usage() {
return {
used: 0,


@ -60,9 +60,18 @@ export class GeminiProApi implements LLMApi {
extractMessage(res: any) {
console.log("[Response] gemini-pro response: ", res);
const getTextFromParts = (parts: any[]) => {
if (!Array.isArray(parts)) return "";
return parts
.map((part) => part?.text || "")
.filter((text) => text.trim() !== "")
.join("\n\n");
};
return (
res?.candidates?.at(0)?.content?.parts.at(0)?.text ||
res?.at(0)?.candidates?.at(0)?.content?.parts.at(0)?.text ||
getTextFromParts(res?.candidates?.at(0)?.content?.parts) ||
getTextFromParts(res?.at(0)?.candidates?.at(0)?.content?.parts) ||
res?.error?.message ||
""
);
@ -223,7 +232,10 @@ export class GeminiProApi implements LLMApi {
},
});
}
return chunkJson?.candidates?.at(0)?.content.parts.at(0)?.text;
return chunkJson?.candidates
?.at(0)
?.content.parts?.map((part: { text: string }) => part.text)
.join("\n\n");
},
// processToolMessage, include tool_calls message and tool call results
(


@ -24,7 +24,7 @@ import {
stream,
} from "@/app/utils/chat";
import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";
import { DalleSize, DalleQuality, DalleStyle } from "@/app/typing";
import { ModelSize, DalleQuality, DalleStyle } from "@/app/typing";
import {
ChatOptions,
@ -73,7 +73,7 @@ export interface DalleRequestPayload {
prompt: string;
response_format: "url" | "b64_json";
n: number;
size: DalleSize;
size: ModelSize;
quality: DalleQuality;
style: DalleStyle;
}


@ -70,9 +70,8 @@ import {
isDalle3,
isVisionModel,
safeLocalStorage,
selectOrCopy,
showPlugins,
useMobileScreen,
getModelSizes,
supportsCustomSize,
} from "../utils";
import { uploadImage as uploadImageRemote } from "@/app/utils/chat";
@ -80,7 +79,7 @@ import { uploadImage as uploadImageRemote } from "@/app/utils/chat";
import dynamic from "next/dynamic";
import { ChatControllerPool } from "../client/controller";
import { DalleQuality, DalleSize, DalleStyle } from "../typing";
import { DalleQuality, DalleStyle, ModelSize } from "../typing";
import { Prompt, usePromptStore } from "../store/prompt";
import Locale from "../locales";
@ -557,10 +556,11 @@ export function ChatActions(props: {
const [showSizeSelector, setShowSizeSelector] = useState(false);
const [showQualitySelector, setShowQualitySelector] = useState(false);
const [showStyleSelector, setShowStyleSelector] = useState(false);
const dalle3Sizes: DalleSize[] = ["1024x1024", "1792x1024", "1024x1792"];
const modelSizes = getModelSizes(currentModel);
const dalle3Qualitys: DalleQuality[] = ["standard", "hd"];
const dalle3Styles: DalleStyle[] = ["vivid", "natural"];
const currentSize = session.mask.modelConfig?.size ?? "1024x1024";
const currentSize =
session.mask.modelConfig?.size ?? ("1024x1024" as ModelSize);
const currentQuality = session.mask.modelConfig?.quality ?? "standard";
const currentStyle = session.mask.modelConfig?.style ?? "vivid";
@ -711,7 +711,7 @@ export function ChatActions(props: {
/>
)}
{isDalle3(currentModel) && (
{supportsCustomSize(currentModel) && (
<ChatAction
onClick={() => setShowSizeSelector(true)}
text={currentSize}
@ -722,7 +722,7 @@ export function ChatActions(props: {
{showSizeSelector && (
<Selector
defaultSelectedValue={currentSize}
items={dalle3Sizes.map((m) => ({
items={modelSizes.map((m) => ({
title: m,
value: m,
}))}
@ -936,6 +936,12 @@ export function ShortcutKeyModal(props: { onClose: () => void }) {
title: Locale.Chat.ShortcutKey.showShortcutKey,
keys: isMac ? ["⌘", "/"] : ["Ctrl", "/"],
},
{
title: Locale.Chat.ShortcutKey.clearContext,
keys: isMac
? ["⌘", "Shift", "backspace"]
: ["Ctrl", "Shift", "backspace"],
},
];
return (
<div className="modal-mask">
@ -1592,7 +1598,7 @@ function _Chat() {
const [showShortcutKeyModal, setShowShortcutKeyModal] = useState(false);
useEffect(() => {
const handleKeyDown = (event: any) => {
const handleKeyDown = (event: KeyboardEvent) => {
// open a new chat: command + shift + o
if (
(event.metaKey || event.ctrlKey) &&
@ -1643,14 +1649,30 @@ function _Chat() {
event.preventDefault();
setShowShortcutKeyModal(true);
}
// clear context: command + shift + backspace
else if (
(event.metaKey || event.ctrlKey) &&
event.shiftKey &&
event.key.toLowerCase() === "backspace"
) {
event.preventDefault();
chatStore.updateTargetSession(session, (session) => {
if (session.clearContextIndex === session.messages.length) {
session.clearContextIndex = undefined;
} else {
session.clearContextIndex = session.messages.length;
session.memoryPrompt = ""; // will clear memory
}
});
}
};
window.addEventListener("keydown", handleKeyDown);
document.addEventListener("keydown", handleKeyDown);
return () => {
window.removeEventListener("keydown", handleKeyDown);
document.removeEventListener("keydown", handleKeyDown);
};
}, [messages, chatStore, navigate]);
}, [messages, chatStore, navigate, session]);
const [showChatSidePanel, setShowChatSidePanel] = useState(false);


@ -73,6 +73,7 @@ import {
Iflytek,
SAAS_CHAT_URL,
ChatGLM,
DeepSeek,
} from "../constant";
import { Prompt, SearchService, usePromptStore } from "../store/prompt";
import { ErrorBoundary } from "./error";
@ -1197,6 +1198,47 @@ export function Settings() {
</>
);
const deepseekConfigComponent = accessStore.provider ===
ServiceProvider.DeepSeek && (
<>
<ListItem
title={Locale.Settings.Access.DeepSeek.Endpoint.Title}
subTitle={
Locale.Settings.Access.DeepSeek.Endpoint.SubTitle +
DeepSeek.ExampleEndpoint
}
>
<input
aria-label={Locale.Settings.Access.DeepSeek.Endpoint.Title}
type="text"
value={accessStore.deepseekUrl}
placeholder={DeepSeek.ExampleEndpoint}
onChange={(e) =>
accessStore.update(
(access) => (access.deepseekUrl = e.currentTarget.value),
)
}
></input>
</ListItem>
<ListItem
title={Locale.Settings.Access.DeepSeek.ApiKey.Title}
subTitle={Locale.Settings.Access.DeepSeek.ApiKey.SubTitle}
>
<PasswordInput
aria-label={Locale.Settings.Access.DeepSeek.ApiKey.Title}
value={accessStore.deepseekApiKey}
type="text"
placeholder={Locale.Settings.Access.DeepSeek.ApiKey.Placeholder}
onChange={(e) => {
accessStore.update(
(access) => (access.deepseekApiKey = e.currentTarget.value),
);
}}
/>
</ListItem>
</>
);
const XAIConfigComponent = accessStore.provider === ServiceProvider.XAI && (
<>
<ListItem
@ -1733,6 +1775,7 @@ export function Settings() {
{alibabaConfigComponent}
{tencentConfigComponent}
{moonshotConfigComponent}
{deepseekConfigComponent}
{stabilityConfigComponent}
{lflytekConfigComponent}
{XAIConfigComponent}


@ -23,7 +23,6 @@ import {
MIN_SIDEBAR_WIDTH,
NARROW_SIDEBAR_WIDTH,
Path,
PLUGINS,
REPO_URL,
} from "../constant";
@ -34,6 +33,12 @@ import { Selector, showConfirm } from "./ui-lib";
import clsx from "clsx";
import { isMcpEnabled } from "../mcp/actions";
const DISCOVERY = [
{ name: Locale.Plugin.Name, path: Path.Plugins },
{ name: "Stable Diffusion", path: Path.Sd },
{ name: Locale.SearchChat.Page.Title, path: Path.SearchChat },
];
const ChatList = dynamic(async () => (await import("./chat-list")).ChatList, {
loading: () => null,
});
@ -222,7 +227,7 @@ export function SideBarTail(props: {
export function SideBar(props: { className?: string }) {
useHotKey();
const { onDragStart, shouldNarrow } = useDragSideBar();
const [showPluginSelector, setShowPluginSelector] = useState(false);
const [showDiscoverySelector, setshowDiscoverySelector] = useState(false);
const navigate = useNavigate();
const config = useAppConfig();
const chatStore = useChatStore();
@ -279,21 +284,21 @@ export function SideBar(props: { className?: string }) {
icon={<DiscoveryIcon />}
text={shouldNarrow ? undefined : Locale.Discovery.Name}
className={styles["sidebar-bar-button"]}
onClick={() => setShowPluginSelector(true)}
onClick={() => setshowDiscoverySelector(true)}
shadow
/>
</div>
{showPluginSelector && (
{showDiscoverySelector && (
<Selector
items={[
...PLUGINS.map((item) => {
...DISCOVERY.map((item) => {
return {
title: item.name,
value: item.path,
};
}),
]}
onClose={() => setShowPluginSelector(false)}
onClose={() => setshowDiscoverySelector(false)}
onSelection={(s) => {
navigate(s[0], { state: { fromHome: true } });
}}


@ -40,7 +40,6 @@ export const getBuildConfig = () => {
buildMode,
isApp,
template: process.env.DEFAULT_INPUT_TEMPLATE ?? DEFAULT_INPUT_TEMPLATE,
visionModels: process.env.VISION_MODELS || "",
};
};


@ -1,5 +1,6 @@
import md5 from "spark-md5";
import { DEFAULT_MODELS, DEFAULT_GA_ID } from "../constant";
import { isGPT4Model } from "../utils/model";
declare global {
namespace NodeJS {
@ -22,6 +23,7 @@ declare global {
DISABLE_FAST_LINK?: string; // disallow parse settings from url or not
CUSTOM_MODELS?: string; // to control custom models
DEFAULT_MODEL?: string; // to control default model in every new chat window
VISION_MODELS?: string; // to control vision models
// stability only
STABILITY_URL?: string;
@ -71,6 +73,9 @@ declare global {
IFLYTEK_API_KEY?: string;
IFLYTEK_API_SECRET?: string;
DEEPSEEK_URL?: string;
DEEPSEEK_API_KEY?: string;
// xai only
XAI_URL?: string;
XAI_API_KEY?: string;
@ -126,25 +131,16 @@ export const getServerSideConfig = () => {
const disableGPT4 = !!process.env.DISABLE_GPT4;
let customModels = process.env.CUSTOM_MODELS ?? "";
let defaultModel = process.env.DEFAULT_MODEL ?? "";
let visionModels = process.env.VISION_MODELS ?? "";
if (disableGPT4) {
if (customModels) customModels += ",";
customModels += DEFAULT_MODELS.filter(
(m) =>
(m.name.startsWith("gpt-4") ||
m.name.startsWith("chatgpt-4o") ||
m.name.startsWith("o1")) &&
!m.name.startsWith("gpt-4o-mini"),
)
customModels += DEFAULT_MODELS.filter((m) => isGPT4Model(m.name))
.map((m) => "-" + m.name)
.join(",");
if (
(defaultModel.startsWith("gpt-4") ||
defaultModel.startsWith("chatgpt-4o") ||
defaultModel.startsWith("o1")) &&
!defaultModel.startsWith("gpt-4o-mini")
)
if (defaultModel && isGPT4Model(defaultModel)) {
defaultModel = "";
}
}
const isStability = !!process.env.STABILITY_API_KEY;
@ -159,6 +155,7 @@ export const getServerSideConfig = () => {
const isAlibaba = !!process.env.ALIBABA_API_KEY;
const isMoonshot = !!process.env.MOONSHOT_API_KEY;
const isIflytek = !!process.env.IFLYTEK_API_KEY;
const isDeepSeek = !!process.env.DEEPSEEK_API_KEY;
const isXAI = !!process.env.XAI_API_KEY;
const isChatGLM = !!process.env.CHATGLM_API_KEY;
// const apiKeyEnvVar = process.env.OPENAI_API_KEY ?? "";
@ -223,6 +220,10 @@ export const getServerSideConfig = () => {
iflytekApiKey: process.env.IFLYTEK_API_KEY,
iflytekApiSecret: process.env.IFLYTEK_API_SECRET,
isDeepSeek,
deepseekUrl: process.env.DEEPSEEK_URL,
deepseekApiKey: getApiKey(process.env.DEEPSEEK_API_KEY),
isXAI,
xaiUrl: process.env.XAI_URL,
xaiApiKey: getApiKey(process.env.XAI_API_KEY),
@ -252,6 +253,7 @@ export const getServerSideConfig = () => {
disableFastLink: !!process.env.DISABLE_FAST_LINK,
customModels,
defaultModel,
visionModels,
allowedWebDavEndpoints,
enableMcp: !!process.env.ENABLE_MCP,
};


@ -28,6 +28,8 @@ export const TENCENT_BASE_URL = "https://hunyuan.tencentcloudapi.com";
export const MOONSHOT_BASE_URL = "https://api.moonshot.cn";
export const IFLYTEK_BASE_URL = "https://spark-api-open.xf-yun.com";
export const DEEPSEEK_BASE_URL = "https://api.deepseek.com";
export const XAI_BASE_URL = "https://api.x.ai";
export const CHATGLM_BASE_URL = "https://open.bigmodel.cn";
@ -66,6 +68,7 @@ export enum ApiPath {
Artifacts = "/api/artifacts",
XAI = "/api/xai",
ChatGLM = "/api/chatglm",
DeepSeek = "/api/deepseek",
}
export enum SlotID {
@ -121,6 +124,7 @@ export enum ServiceProvider {
Iflytek = "Iflytek",
XAI = "XAI",
ChatGLM = "ChatGLM",
DeepSeek = "DeepSeek",
}
// Google API safety settings, see https://ai.google.dev/gemini-api/docs/safety-settings
@ -145,6 +149,7 @@ export enum ModelProvider {
Iflytek = "Iflytek",
XAI = "XAI",
ChatGLM = "ChatGLM",
DeepSeek = "DeepSeek",
}
export const Stability = {
@ -227,6 +232,11 @@ export const Iflytek = {
ChatPath: "v1/chat/completions",
};
export const DeepSeek = {
ExampleEndpoint: DEEPSEEK_BASE_URL,
ChatPath: "chat/completions",
};
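// The client-side DeepSeekApi.path() joins this with the base url, e.g.
// [DEEPSEEK_BASE_URL, DeepSeek.ChatPath].join("/") === "https://api.deepseek.com/chat/completions".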
export const XAI = {
ExampleEndpoint: XAI_BASE_URL,
ChatPath: "v1/chat/completions",
@ -235,6 +245,8 @@ export const XAI = {
export const ChatGLM = {
ExampleEndpoint: CHATGLM_BASE_URL,
ChatPath: "api/paas/v4/chat/completions",
ImagePath: "api/paas/v4/images/generations",
VideoPath: "api/paas/v4/videos/generations",
};
export const DEFAULT_INPUT_TEMPLATE = `{{input}}`; // input / time / model / lang
@ -401,6 +413,8 @@ export const KnowledgeCutOffDate: Record<string, string> = {
// it's now easier to add "KnowledgeCutOffDate" instead of stupid hardcoding it, as was done previously.
"gemini-pro": "2023-12",
"gemini-pro-vision": "2023-12",
"deepseek-chat": "2024-07",
"deepseek-coder": "2024-07",
};
export const DEFAULT_TTS_ENGINE = "OpenAI-TTS";
@ -429,6 +443,7 @@ export const VISION_MODEL_REGEXES = [
/qwen2-vl/,
/gpt-4-turbo(?!.*preview)/, // Matches "gpt-4-turbo" but not "gpt-4-turbo-preview"
/^dall-e-3$/, // Matches exactly "dall-e-3"
/glm-4v/,
];
export const EXCLUDE_VISION_MODEL_REGEXES = [/claude-3-5-haiku-20241022/];
@ -546,6 +561,8 @@ const iflytekModels = [
"4.0Ultra",
];
const deepseekModels = ["deepseek-chat", "deepseek-coder"];
const xAIModes = ["grok-beta"];
const chatglmModels = [
@ -557,6 +574,15 @@ const chatglmModels = [
"glm-4-long",
"glm-4-flashx",
"glm-4-flash",
"glm-4v-plus",
"glm-4v",
"glm-4v-flash", // free
"cogview-3-plus",
"cogview-3",
"cogview-3-flash", // free
// polling-based tasks cannot be supported yet
// "cogvideox",
// "cogvideox-flash", // free
];
let seq = 1000; // built-in model sequence numbers start from 1000
@ -693,6 +719,17 @@ export const DEFAULT_MODELS = [
sorted: 12,
},
})),
...deepseekModels.map((name) => ({
name,
available: true,
sorted: seq++,
provider: {
id: "deepseek",
providerName: "DeepSeek",
providerType: "deepseek",
sorted: 13,
},
})),
] as const;
export const CHAT_PAGE_SIZE = 15;
@ -712,11 +749,6 @@ export const internalAllowedWebDavEndpoints = [
];
export const DEFAULT_GA_ID = "G-89WN60ZK2E";
export const PLUGINS = [
{ name: "Plugins", path: Path.Plugins },
{ name: "Stable Diffusion", path: Path.Sd },
{ name: "Search Chat", path: Path.SearchChat },
];
export const SAAS_CHAT_URL = "https://nextchat.dev/chat";
export const SAAS_CHAT_UTM_URL = "https://nextchat.dev/chat?utm=github";


@ -106,6 +106,7 @@ const cn = {
copyLastMessage: "复制最后一个回复",
copyLastCode: "复制最后一个代码块",
showShortcutKey: "显示快捷方式",
clearContext: "清除上下文",
},
},
Export: {
@ -176,7 +177,7 @@ const cn = {
},
},
Lang: {
Name: "Language", // ATTENTION: if you wanna add a new translation, please do not translate this value, leave it as `Language`
Name: "Language", // 注意:如果要添加新的翻译,请不要翻译此值,将它保留为 `Language`
All: "所有语言",
},
Avatar: "头像",
@ -462,6 +463,17 @@ const cn = {
SubTitle: "样例:",
},
},
DeepSeek: {
ApiKey: {
Title: "接口密钥",
SubTitle: "使用自定义DeepSeek API Key",
Placeholder: "DeepSeek API Key",
},
Endpoint: {
Title: "接口地址",
SubTitle: "样例:",
},
},
XAI: {
ApiKey: {
Title: "接口密钥",
@ -633,7 +645,7 @@ const cn = {
Sysmessage: "你是一个助手",
},
SearchChat: {
Name: "搜索",
Name: "搜索聊天记录",
Page: {
Title: "搜索聊天记录",
Search: "输入搜索关键词",


@ -107,6 +107,7 @@ const en: LocaleType = {
copyLastMessage: "Copy Last Reply",
copyLastCode: "Copy Last Code Block",
showShortcutKey: "Show Shortcuts",
clearContext: "Clear Context",
},
},
Export: {
@ -446,6 +447,17 @@ const en: LocaleType = {
SubTitle: "Example: ",
},
},
DeepSeek: {
ApiKey: {
Title: "DeepSeek API Key",
SubTitle: "Use a custom DeepSeek API Key",
Placeholder: "DeepSeek API Key",
},
Endpoint: {
Title: "Endpoint Address",
SubTitle: "Example: ",
},
},
XAI: {
ApiKey: {
Title: "XAI API Key",


@ -100,6 +100,7 @@ const tw = {
copyLastMessage: "複製最後一個回覆",
copyLastCode: "複製最後一個程式碼區塊",
showShortcutKey: "顯示快捷方式",
clearContext: "清除上下文",
},
},
Export: {
@ -485,7 +486,7 @@ const tw = {
},
},
SearchChat: {
Name: "搜尋",
Name: "搜尋聊天記錄",
Page: {
Title: "搜尋聊天記錄",
Search: "輸入搜尋關鍵詞",


@ -13,6 +13,7 @@ import {
MOONSHOT_BASE_URL,
STABILITY_BASE_URL,
IFLYTEK_BASE_URL,
DEEPSEEK_BASE_URL,
XAI_BASE_URL,
CHATGLM_BASE_URL,
} from "../constant";
@ -47,6 +48,8 @@ const DEFAULT_STABILITY_URL = isApp ? STABILITY_BASE_URL : ApiPath.Stability;
const DEFAULT_IFLYTEK_URL = isApp ? IFLYTEK_BASE_URL : ApiPath.Iflytek;
const DEFAULT_DEEPSEEK_URL = isApp ? DEEPSEEK_BASE_URL : ApiPath.DeepSeek;
const DEFAULT_XAI_URL = isApp ? XAI_BASE_URL : ApiPath.XAI;
const DEFAULT_CHATGLM_URL = isApp ? CHATGLM_BASE_URL : ApiPath.ChatGLM;
@ -108,6 +111,10 @@ const DEFAULT_ACCESS_STATE = {
iflytekApiKey: "",
iflytekApiSecret: "",
// deepseek
deepseekUrl: DEFAULT_DEEPSEEK_URL,
deepseekApiKey: "",
// xai
xaiUrl: DEFAULT_XAI_URL,
xaiApiKey: "",
@ -124,6 +131,7 @@ const DEFAULT_ACCESS_STATE = {
disableFastLink: false,
customModels: "",
defaultModel: "",
visionModels: "",
// tts config
edgeTTSVoiceName: "zh-CN-YunxiNeural",
@ -138,7 +146,10 @@ export const useAccessStore = createPersistStore(
return get().needCode;
},
getVisionModels() {
this.fetch();
return get().visionModels;
},
edgeVoiceName() {
this.fetch();
@ -183,6 +194,9 @@ export const useAccessStore = createPersistStore(
isValidIflytek() {
return ensure(get(), ["iflytekApiKey"]);
},
isValidDeepSeek() {
return ensure(get(), ["deepseekApiKey"]);
},
isValidXAI() {
return ensure(get(), ["xaiApiKey"]);
@ -207,6 +221,7 @@ export const useAccessStore = createPersistStore(
this.isValidTencent() ||
this.isValidMoonshot() ||
this.isValidIflytek() ||
this.isValidDeepSeek() ||
this.isValidXAI() ||
this.isValidChatGLM() ||
!this.enabledAccessControl() ||


@ -244,7 +244,11 @@ export const useChatStore = createPersistStore(
const newSession = createEmptySession();
newSession.topic = currentSession.topic;
newSession.messages = [...currentSession.messages];
// deep-copy the messages
newSession.messages = currentSession.messages.map(msg => ({
...msg,
id: nanoid(), // generate a new message id
}));
newSession.mask = {
...currentSession.mask,
modelConfig: {


@ -1,5 +1,5 @@
import { LLMModel } from "../client/api";
import { DalleSize, DalleQuality, DalleStyle } from "../typing";
import { DalleQuality, DalleStyle, ModelSize } from "../typing";
import { getClientConfig } from "../config/client";
import {
DEFAULT_INPUT_TEMPLATE,
@ -78,7 +78,7 @@ export const DEFAULT_CONFIG = {
compressProviderName: "",
enableInjectSystemPrompts: true,
template: config?.template ?? DEFAULT_INPUT_TEMPLATE,
size: "1024x1024" as DalleSize,
size: "1024x1024" as ModelSize,
quality: "standard" as DalleQuality,
style: "vivid" as DalleStyle,
},


@ -11,3 +11,14 @@ export interface RequestMessage {
export type DalleSize = "1024x1024" | "1792x1024" | "1024x1792";
export type DalleQuality = "standard" | "hd";
export type DalleStyle = "vivid" | "natural";
export type ModelSize =
| "1024x1024"
| "1792x1024"
| "1024x1792"
| "768x1344"
| "864x1152"
| "1344x768"
| "1152x864"
| "1440x720"
| "720x1440";


@ -6,7 +6,8 @@ import { ServiceProvider } from "./constant";
// import { fetch as tauriFetch, ResponseType } from "@tauri-apps/api/http";
import { fetch as tauriStreamFetch } from "./utils/stream";
import { VISION_MODEL_REGEXES, EXCLUDE_VISION_MODEL_REGEXES } from "./constant";
import { getClientConfig } from "./config/client";
import { useAccessStore } from "./store";
import { ModelSize } from "./typing";
export function trimTopic(topic: string) {
// Fix an issue where double quotes still show in the Indonesian language
@ -254,8 +255,8 @@ export function getMessageImages(message: RequestMessage): string[] {
}
export function isVisionModel(model: string) {
const clientConfig = getClientConfig();
const envVisionModels = clientConfig?.visionModels
const visionModels = useAccessStore.getState().visionModels;
const envVisionModels = visionModels
?.split(",")
.map((m) => m.trim());
if (envVisionModels?.includes(model)) {
@ -271,6 +272,28 @@ export function isDalle3(model: string) {
return "dall-e-3" === model;
}
export function getModelSizes(model: string): ModelSize[] {
if (isDalle3(model)) {
return ["1024x1024", "1792x1024", "1024x1792"];
}
if (model.toLowerCase().includes("cogview")) {
return [
"1024x1024",
"768x1344",
"864x1152",
"1344x768",
"1152x864",
"1440x720",
"720x1440",
];
}
return [];
}
export function supportsCustomSize(model: string): boolean {
return getModelSizes(model).length > 0;
}
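// Illustrative examples (not part of the diff):
// getModelSizes("dall-e-3") -> ["1024x1024", "1792x1024", "1024x1792"]
// getModelSizes("cogview-3-flash") -> the seven CogView sizes listed above
// supportsCustomSize("gpt-4o") -> false, so the chat UI hides the size selector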
export function showPlugins(provider: ServiceProvider, model: string) {
if (
provider == ServiceProvider.OpenAI ||


@ -202,3 +202,52 @@ export function isModelAvailableInServer(
const modelTable = collectModelTable(DEFAULT_MODELS, customModels);
return modelTable[fullName]?.available === false;
}
/**
* Check if the model name is a GPT-4 related model
*
* @param modelName The name of the model to check
* @returns True if the model is a GPT-4 related model (excluding gpt-4o-mini)
*/
export function isGPT4Model(modelName: string): boolean {
return (
(modelName.startsWith("gpt-4") ||
modelName.startsWith("chatgpt-4o") ||
modelName.startsWith("o1")) &&
!modelName.startsWith("gpt-4o-mini")
);
}
/**
* Checks if a model is not available on any of the specified providers in the server.
*
* @param {string} customModels - A string of custom models, comma-separated.
* @param {string} modelName - The name of the model to check.
* @param {string|string[]} providerNames - A string or array of provider names to check against.
*
* @returns {boolean} True if the model is not available on any of the specified providers, false otherwise.
*/
export function isModelNotavailableInServer(
customModels: string,
modelName: string,
providerNames: string | string[],
): boolean {
// Check DISABLE_GPT4 environment variable
if (
process.env.DISABLE_GPT4 === "1" &&
isGPT4Model(modelName.toLowerCase())
) {
return true;
}
const modelTable = collectModelTable(DEFAULT_MODELS, customModels);
const providerNamesArray = Array.isArray(providerNames)
? providerNames
: [providerNames];
for (const providerName of providerNamesArray) {
const fullName = `${modelName}@${providerName.toLowerCase()}`;
if (modelTable?.[fullName]?.available === true) return false;
}
return true;
}


@ -82,7 +82,7 @@
Meanwhile, to let ChatGPT understand the context of a conversation, multiple historical messages are usually sent along with each request, and once a conversation has gone on for a while, it is easy to hit the length limit.
To solve this, we added a history compression feature. Suppose the threshold is 1000 characters; whenever the user's accumulated chat history exceeds 1000 characters, the messages that have not yet been summarized are sent to ChatGPT, asking it to produce a 100-character summary.
To solve this, we added a history compression feature. Suppose the threshold is 1000 characters; whenever the user's accumulated chat history exceeds 1000 characters, the messages that have not yet been summarized are sent to ChatGPT, asking it to produce a summary of about 100 characters.
This way, the history shrinks from 1000 characters to 100. It is a lossy compression, but it is sufficient for most use cases.
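A minimal TypeScript sketch of that trigger; `HistoryMessage`, `compressHistory`, and `summarize` are illustrative names, not the project's actual implementation:
```
// Illustrative sketch of threshold-based history compression.
// `summarize` is assumed to ask the model for a ~100-character digest.
const COMPRESS_THRESHOLD = 1000;

interface HistoryMessage {
  content: string;
  summarized?: boolean;
}

async function compressHistory(
  messages: HistoryMessage[],
  summarize: (text: string) => Promise<string>,
): Promise<string | undefined> {
  const pending = messages.filter((m) => !m.summarized);
  const total = pending.reduce((n, m) => n + m.content.length, 0);
  if (total <= COMPRESS_THRESHOLD) return undefined; // below the threshold
  const digest = await summarize(pending.map((m) => m.content).join("\n"));
  pending.forEach((m) => (m.summarized = true)); // mark as summarized
  return digest; // sent as context in place of the raw history
}
```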


@ -0,0 +1,80 @@
import { isModelNotavailableInServer } from "../app/utils/model";
describe("isModelNotavailableInServer", () => {
test("test model will return false, which means the model is available", () => {
const customModels = "";
const modelName = "gpt-4";
const providerNames = "OpenAI";
const result = isModelNotavailableInServer(
customModels,
modelName,
providerNames,
);
expect(result).toBe(false);
});
test("test model will return true when model is not available in custom models", () => {
const customModels = "-all,gpt-4o-mini";
const modelName = "gpt-4";
const providerNames = "OpenAI";
const result = isModelNotavailableInServer(
customModels,
modelName,
providerNames,
);
expect(result).toBe(true);
});
test("should respect DISABLE_GPT4 setting", () => {
process.env.DISABLE_GPT4 = "1";
const result = isModelNotavailableInServer("", "gpt-4", "OpenAI");
expect(result).toBe(true);
});
test("should handle empty provider names", () => {
const result = isModelNotavailableInServer("-all,gpt-4", "gpt-4", "");
expect(result).toBe(true);
});
test("should be case insensitive for model names", () => {
const result = isModelNotavailableInServer("-all,GPT-4", "gpt-4", "OpenAI");
expect(result).toBe(true);
});
test("support passing multiple providers, model unavailable on one of the providers will return true", () => {
const customModels = "-all,gpt-4@google";
const modelName = "gpt-4";
const providerNames = ["OpenAI", "Azure"];
const result = isModelNotavailableInServer(
customModels,
modelName,
providerNames,
);
expect(result).toBe(true);
});
// FIXME: this test case is broken and needs to be fixed
// test("support passing multiple providers, model available on one of the providers will return false", () => {
// const customModels = "-all,gpt-4@google";
// const modelName = "gpt-4";
// const providerNames = ["OpenAI", "Google"];
// const result = isModelNotavailableInServer(
// customModels,
// modelName,
// providerNames,
// );
// expect(result).toBe(false);
// });
test("test custom model without setting provider", () => {
const customModels = "-all,mistral-large";
const modelName = "mistral-large";
const providerNames = modelName;
const result = isModelNotavailableInServer(
customModels,
modelName,
providerNames,
);
expect(result).toBe(false);
});
});