Compare commits


23 Commits

Author SHA1 Message Date
lloydzhou   009447a188  Merge remote-tracking branch 'origin/main' into website                          2024-11-11 13:26:21 +08:00
lloydzhou   d7270a5840  Merge remote-tracking branch 'origin/main' into website                          2024-11-05 17:47:01 +08:00
lloydzhou   a0c78b8260  Merge branch 'main' into website                                                 2024-10-24 16:36:12 +08:00
lloydzhou   143fdf4c9d  Merge branch 'main' into website                                                 2024-10-16 00:46:25 +08:00
lloydzhou   37e48b431d  Merge remote-tracking branch 'origin/main' into website                          2024-10-15 17:29:42 +08:00
lloydzhou   eed651ddde  Merge branch 'main' into website                                                 2024-10-10 12:56:59 +08:00
lloydzhou   22fa98d73d  Merge branch 'main' into website                                                 2024-10-09 19:34:01 +08:00
lloydzhou   7d3b5f1ac8  Merge remote-tracking branch 'origin/main' into website                          2024-09-26 13:57:05 +08:00
lloydzhou   1df5a16141  Merge remote-tracking branch 'origin/main' into website                          2024-09-25 15:54:41 +08:00
lloydzhou   2b944b154c  hotfix openai function call tool_calls no index                                  2024-09-22 19:09:28 +08:00
lloydzhou   9b7cb795ca  hotfix openai function call tool_calls no index                                  2024-09-22 19:09:18 +08:00
lloydzhou   20ae4f54e6  Merge remote-tracking branch 'origin/main' into website                          2024-09-13 17:41:44 +08:00
lloydzhou   78facb282b  Merge remote-tracking branch 'origin/main' into website                          2024-09-07 22:13:20 +08:00
lloydzhou   6f3d7530b9  Merge remote-tracking branch 'origin/main' into website                          2024-09-06 20:18:21 +08:00
lloydzhou   5e1064a5c8  Merge branch 'main' into website                                                 2024-08-16 16:58:30 +08:00
lloydzhou   faac0d9817  Merge remote-tracking branch 'origin/main' into website                          2024-08-06 22:45:16 +08:00
lloydzhou   c440637ad0  Merge remote-tracking branch 'origin/main' into website                          2024-07-27 01:32:47 +08:00
lloydzhou   284d33bcdf  Merge remote-tracking branch 'origin/main' into website                          2024-07-19 18:37:32 +08:00
lloydzhou   d9573973ca  Merge remote-tracking branch 'origin/main' into website                          2024-07-13 21:31:15 +08:00
fred-bf     cd354cf045  Merge pull request #4685 from ChatGPTNextWeb/main (feat: update upstream)       2024-05-14 17:40:46 +08:00
fred-bf     1cce87acaa  Merge pull request #4181 from ChatGPTNextWeb/main (merge main)                  2024-03-01 11:10:11 +08:00
fred-bf     78c4084501  Merge pull request #4148 from ChatGPTNextWeb/main (feat: catch up latest commit) 2024-02-27 10:43:15 +08:00
Fred Liang  1d0a40b9e8  chore: low the google safety setting to avoid unexpected blocking                2023-12-31 19:50:06 +08:00
35 changed files with 129 additions and 556 deletions

View File

@@ -1,17 +1,16 @@
 <div align="center">
-<a href='https://nextchat.dev/chat'>
-<img src="https://github.com/user-attachments/assets/287c510f-f508-478e-ade3-54d30453dc18" width="1000" alt="icon"/>
+<a href='#企业版'>
+<img src="./docs/images/ent.svg" alt="icon"/>
 </a>
 <h1 align="center">NextChat (ChatGPT Next Web)</h1>
 English / [简体中文](./README_CN.md)
-One-Click to get a well-designed cross-platform ChatGPT web UI, with Claude, GPT4 & Gemini Pro support.
+One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4 & Gemini Pro support.
-一键免费部署你的跨平台私人 ChatGPT 应用, 支持 Claude, GPT4 & Gemini Pro 模型。
+一键免费部署你的跨平台私人 ChatGPT 应用, 支持 GPT3, GPT4 & Gemini Pro 模型。
 [![Saas][Saas-image]][saas-url]
 [![Web][Web-image]][web-url]
@@ -32,7 +31,7 @@ One-Click to get a well-designed cross-platform ChatGPT web UI, with Claude, GPT
 [MacOS-image]: https://img.shields.io/badge/-MacOS-black?logo=apple
 [Linux-image]: https://img.shields.io/badge/-Linux-333?logo=ubuntu
-[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html)
+[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html) [<img src="https://svgshare.com/i/1AVg.svg" alt="Deploy to Alibaba Cloud" height="30">](https://computenest.aliyun.com/market/service-f1c9b75e59814dc49d52)
 [<img src="https://github.com/user-attachments/assets/903482d4-3e87-4134-9af1-f2588fa90659" height="60" width="288" >](https://monica.im/?utm=nxcrp)
@@ -356,13 +355,6 @@ For ByteDance: use `modelName@bytedance=deploymentName` to customize model name
 Change default model
-### `VISION_MODELS` (optional)
-
-> Default: Empty
-> Example: `gpt-4-vision,claude-3-opus,my-custom-model` means add vision capabilities to these models in addition to the default pattern matches (which detect models containing keywords like "vision", "claude-3", "gemini-1.5", etc).
-
-Add additional models to have vision capabilities, beyond the default pattern matching. Multiple models should be separated by commas.
 ### `WHITE_WEBDAV_ENDPOINTS` (optional)
 You can use this option if you want to increase the number of webdav service addresses you are allowed to access, as required by the format
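The `VISION_MODELS` documentation in the hunk above describes a comma-separated override on top of keyword-based vision detection. A minimal standalone sketch of that behavior, assuming an env-string parameter; the names here are illustrative — the project's actual helper lives elsewhere and may differ:

```typescript
// Keywords the docs above say are matched by default.
const DEFAULT_VISION_KEYWORDS = ["vision", "claude-3", "gemini-1.5"];

// A model counts as vision-capable if it is explicitly listed in the
// VISION_MODELS env value (comma-separated), or matches a default keyword.
function isVisionModel(model: string, visionModelsEnv = ""): boolean {
  const extra = visionModelsEnv
    .split(",")
    .map((m) => m.trim())
    .filter(Boolean);
  if (extra.includes(model)) return true;
  return DEFAULT_VISION_KEYWORDS.some((kw) => model.includes(kw));
}
```

For example, `isVisionModel("my-custom-model", "gpt-4-vision,my-custom-model")` would be treated as vision-capable even though no default keyword matches.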

View File

@@ -235,13 +235,6 @@ ChatGLM Api Url.
 更改默认模型
-### `VISION_MODELS` (可选)
-
-> 默认值:空
-> 示例:`gpt-4-vision,claude-3-opus,my-custom-model` 表示为这些模型添加视觉能力,作为对默认模式匹配的补充(默认会检测包含"vision"、"claude-3"、"gemini-1.5"等关键词的模型)。
-
-在默认模式匹配之外,添加更多具有视觉能力的模型。多个模型用逗号分隔。
 ### `DEFAULT_INPUT_TEMPLATE` (可选)
 自定义默认的 template,用于初始化『设置』中的『用户输入预处理』配置项

View File

@@ -217,13 +217,6 @@ ByteDance モードでは、`modelName@bytedance=deploymentName` 形式でモデ
 デフォルトのモデルを変更します。
-### `VISION_MODELS` (オプション)
-
-> デフォルト:空
-> 例:`gpt-4-vision,claude-3-opus,my-custom-model` は、これらのモデルにビジョン機能を追加します。これはデフォルトのパターンマッチング("vision"、"claude-3"、"gemini-1.5"などのキーワードを含むモデルを検出)に加えて適用されます。
-
-デフォルトのパターンマッチングに加えて、追加のモデルにビジョン機能を付与します。複数のモデルはカンマで区切ります。
 ### `DEFAULT_INPUT_TEMPLATE` (オプション)
 『設定』の『ユーザー入力前処理』の初期設定に使用するテンプレートをカスタマイズします。

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelNotavailableInServer } from "@/app/utils/model";
+import { isModelAvailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();
@@ -89,7 +89,7 @@ async function request(req: NextRequest) {
   // not undefined and is false
   if (
-    isModelNotavailableInServer(
+    isModelAvailableInServer(
       serverConfig.customModels,
       jsonBody?.model as string,
       ServiceProvider.Alibaba as string,

View File

@@ -9,7 +9,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "./auth";
-import { isModelNotavailableInServer } from "@/app/utils/model";
+import { isModelAvailableInServer } from "@/app/utils/model";
 import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";

 const ALLOWD_PATH = new Set([Anthropic.ChatPath, Anthropic.ChatPath1]);
@@ -122,7 +122,7 @@ async function request(req: NextRequest) {
   // not undefined and is false
   if (
-    isModelNotavailableInServer(
+    isModelAvailableInServer(
       serverConfig.customModels,
       jsonBody?.model as string,
       ServiceProvider.Anthropic as string,

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelNotavailableInServer } from "@/app/utils/model";
+import { isModelAvailableInServer } from "@/app/utils/model";
 import { getAccessToken } from "@/app/utils/baidu";

 const serverConfig = getServerSideConfig();
@@ -104,7 +104,7 @@ async function request(req: NextRequest) {
   // not undefined and is false
   if (
-    isModelNotavailableInServer(
+    isModelAvailableInServer(
       serverConfig.customModels,
       jsonBody?.model as string,
       ServiceProvider.Baidu as string,

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelNotavailableInServer } from "@/app/utils/model";
+import { isModelAvailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();
@@ -88,7 +88,7 @@ async function request(req: NextRequest) {
   // not undefined and is false
   if (
-    isModelNotavailableInServer(
+    isModelAvailableInServer(
       serverConfig.customModels,
       jsonBody?.model as string,
       ServiceProvider.ByteDance as string,

View File

@@ -2,7 +2,7 @@ import { NextRequest, NextResponse } from "next/server";
 import { getServerSideConfig } from "../config/server";
 import { OPENAI_BASE_URL, ServiceProvider } from "../constant";
 import { cloudflareAIGatewayUrl } from "../utils/cloudflare";
-import { getModelProvider, isModelNotavailableInServer } from "../utils/model";
+import { getModelProvider, isModelAvailableInServer } from "../utils/model";

 const serverConfig = getServerSideConfig();
@@ -118,14 +118,15 @@ export async function requestOpenai(req: NextRequest) {
   // not undefined and is false
   if (
-    isModelNotavailableInServer(
-      serverConfig.customModels,
-      jsonBody?.model as string,
-      [
-        ServiceProvider.OpenAI,
-        ServiceProvider.Azure,
-        jsonBody?.model as string, // support provider-unspecified model
-      ],
+    isModelAvailableInServer(
+      serverConfig.customModels,
+      jsonBody?.model as string,
+      ServiceProvider.OpenAI as string,
+    ) ||
+    isModelAvailableInServer(
+      serverConfig.customModels,
+      jsonBody?.model as string,
+      ServiceProvider.Azure as string,
     )
   ) {
     return NextResponse.json(
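The hunk above swaps between a single `isModelNotavailableInServer` call taking a provider array and two OR'd `isModelAvailableInServer` calls. A standalone sketch of how an array-based "not available" check could work — the signature mirrors the diff, but the body is an illustrative assumption, not the repository's actual implementation:

```typescript
// Hypothetical sketch: a model is "not available" only if it is disabled
// (via a "-model" or "-model@provider" entry in customModels) for EVERY
// provider in the list. Semantics are assumed, not taken from the repo.
function isModelNotavailableInServer(
  customModels: string,
  modelName: string,
  providerNames: string | string[],
): boolean {
  const providers = Array.isArray(providerNames)
    ? providerNames
    : [providerNames];
  // Collect models disabled with a leading "-" in the customModels string.
  const disabled = new Set(
    customModels
      .split(",")
      .filter((m) => m.startsWith("-"))
      .map((m) => m.slice(1).toLowerCase()),
  );
  return providers.every(
    (p) =>
      disabled.has(modelName.toLowerCase()) ||
      disabled.has(`${modelName}@${p}`.toLowerCase()),
  );
}
```

Under this sketch, disabling `gpt-4` only for Azure still leaves it available through OpenAI, so the check returns `false` and the request proceeds.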

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelNotavailableInServer } from "@/app/utils/model";
+import { isModelAvailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();
@@ -89,7 +89,7 @@ async function request(req: NextRequest) {
   // not undefined and is false
   if (
-    isModelNotavailableInServer(
+    isModelAvailableInServer(
       serverConfig.customModels,
       jsonBody?.model as string,
       ServiceProvider.ChatGLM as string,

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelNotavailableInServer } from "@/app/utils/model";
+import { isModelAvailableInServer } from "@/app/utils/model";
 // iflytek

 const serverConfig = getServerSideConfig();
@@ -89,7 +89,7 @@ async function request(req: NextRequest) {
   // not undefined and is false
   if (
-    isModelNotavailableInServer(
+    isModelAvailableInServer(
       serverConfig.customModels,
       jsonBody?.model as string,
       ServiceProvider.Iflytek as string,

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelNotavailableInServer } from "@/app/utils/model";
+import { isModelAvailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();
@@ -88,7 +88,7 @@ async function request(req: NextRequest) {
   // not undefined and is false
   if (
-    isModelNotavailableInServer(
+    isModelAvailableInServer(
       serverConfig.customModels,
       jsonBody?.model as string,
       ServiceProvider.Moonshot as string,

View File

@@ -14,7 +14,7 @@ function getModels(remoteModelRes: OpenAIListModelResponse) {
   if (config.disableGPT4) {
     remoteModelRes.data = remoteModelRes.data.filter(
       (m) =>
-        !(m.id.startsWith("gpt-4") || m.id.startsWith("chatgpt-4o") || m.id.startsWith("o1")) ||
+        !(m.id.startsWith("gpt-4") || m.id.startsWith("chatgpt-4o")) ||
         m.id.startsWith("gpt-4o-mini"),
     );
   }

View File

@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelNotavailableInServer } from "@/app/utils/model";
+import { isModelAvailableInServer } from "@/app/utils/model";

 const serverConfig = getServerSideConfig();
@@ -88,7 +88,7 @@ async function request(req: NextRequest) {
   // not undefined and is false
   if (
-    isModelNotavailableInServer(
+    isModelAvailableInServer(
       serverConfig.customModels,
       jsonBody?.model as string,
       ServiceProvider.XAI as string,

View File

@@ -21,108 +21,16 @@ import {
   SpeechOptions,
 } from "../api";
 import { getClientConfig } from "@/app/config/client";
-import { getMessageTextContent, isVisionModel } from "@/app/utils";
+import { getMessageTextContent } from "@/app/utils";
 import { RequestPayload } from "./openai";
 import { fetch } from "@/app/utils/stream";
-import { preProcessImageContent } from "@/app/utils/chat";
-
-interface BasePayload {
-  model: string;
-}
-
-interface ChatPayload extends BasePayload {
-  messages: ChatOptions["messages"];
-  stream?: boolean;
-  temperature?: number;
-  presence_penalty?: number;
-  frequency_penalty?: number;
-  top_p?: number;
-}
-
-interface ImageGenerationPayload extends BasePayload {
-  prompt: string;
-  size?: string;
-  user_id?: string;
-}
-
-interface VideoGenerationPayload extends BasePayload {
-  prompt: string;
-  duration?: number;
-  resolution?: string;
-  user_id?: string;
-}
-
-type ModelType = "chat" | "image" | "video";

 export class ChatGLMApi implements LLMApi {
   private disableListModels = true;

-  private getModelType(model: string): ModelType {
-    if (model.startsWith("cogview-")) return "image";
-    if (model.startsWith("cogvideo-")) return "video";
-    return "chat";
-  }
-
-  private getModelPath(type: ModelType): string {
-    switch (type) {
-      case "image":
-        return ChatGLM.ImagePath;
-      case "video":
-        return ChatGLM.VideoPath;
-      default:
-        return ChatGLM.ChatPath;
-    }
-  }
-
-  private createPayload(
-    messages: ChatOptions["messages"],
-    modelConfig: any,
-    options: ChatOptions,
-  ): BasePayload {
-    const modelType = this.getModelType(modelConfig.model);
-    const lastMessage = messages[messages.length - 1];
-    const prompt =
-      typeof lastMessage.content === "string"
-        ? lastMessage.content
-        : lastMessage.content.map((c) => c.text).join("\n");
-
-    switch (modelType) {
-      case "image":
-        return {
-          model: modelConfig.model,
-          prompt,
-          size: options.config.size,
-        } as ImageGenerationPayload;
-      default:
-        return {
-          messages,
-          stream: options.config.stream,
-          model: modelConfig.model,
-          temperature: modelConfig.temperature,
-          presence_penalty: modelConfig.presence_penalty,
-          frequency_penalty: modelConfig.frequency_penalty,
-          top_p: modelConfig.top_p,
-        } as ChatPayload;
-    }
-  }
-
-  private parseResponse(modelType: ModelType, json: any): string {
-    switch (modelType) {
-      case "image": {
-        const imageUrl = json.data?.[0]?.url;
-        return imageUrl ? `![Generated Image](${imageUrl})` : "";
-      }
-      case "video": {
-        const videoUrl = json.data?.[0]?.url;
-        return videoUrl ? `<video controls src="${videoUrl}"></video>` : "";
-      }
-      default:
-        return this.extractMessage(json);
-    }
-  }
-
   path(path: string): string {
     const accessStore = useAccessStore.getState();
     let baseUrl = "";
     if (accessStore.useCustomConfig) {
@@ -143,6 +51,7 @@ export class ChatGLMApi implements LLMApi {
     }
     console.log("[Proxy Endpoint] ", baseUrl, path);
     return [baseUrl, path].join("/");
   }
@@ -155,12 +64,9 @@ export class ChatGLMApi implements LLMApi {
   }

   async chat(options: ChatOptions) {
-    const visionModel = isVisionModel(options.config.model);
     const messages: ChatOptions["messages"] = [];
     for (const v of options.messages) {
-      const content = visionModel
-        ? await preProcessImageContent(v.content)
-        : getMessageTextContent(v);
+      const content = getMessageTextContent(v);
       messages.push({ role: v.role, content });
     }
@@ -172,16 +78,25 @@ export class ChatGLMApi implements LLMApi {
         providerName: options.config.providerName,
       },
     };

-    const modelType = this.getModelType(modelConfig.model);
-    const requestPayload = this.createPayload(messages, modelConfig, options);
-    const path = this.path(this.getModelPath(modelType));
-    console.log(`[Request] glm ${modelType} payload: `, requestPayload);
+    const requestPayload: RequestPayload = {
+      messages,
+      stream: options.config.stream,
+      model: modelConfig.model,
+      temperature: modelConfig.temperature,
+      presence_penalty: modelConfig.presence_penalty,
+      frequency_penalty: modelConfig.frequency_penalty,
+      top_p: modelConfig.top_p,
+    };
+
+    console.log("[Request] glm payload: ", requestPayload);
+
+    const shouldStream = !!options.config.stream;
     const controller = new AbortController();
     options.onController?.(controller);

     try {
+      const chatPath = this.path(ChatGLM.ChatPath);
       const chatPayload = {
         method: "POST",
         body: JSON.stringify(requestPayload),
@@ -189,23 +104,12 @@ export class ChatGLMApi implements LLMApi {
         headers: getHeaders(),
       };

+      // make a fetch request
       const requestTimeoutId = setTimeout(
         () => controller.abort(),
         REQUEST_TIMEOUT_MS,
       );

-      if (modelType === "image" || modelType === "video") {
-        const res = await fetch(path, chatPayload);
-        clearTimeout(requestTimeoutId);
-
-        const resJson = await res.json();
-        console.log(`[Response] glm ${modelType}:`, resJson);
-
-        const message = this.parseResponse(modelType, resJson);
-        options.onFinish(message, res);
-        return;
-      }
-
-      const shouldStream = !!options.config.stream;
       if (shouldStream) {
         const [tools, funcs] = usePluginStore
           .getState()
@@ -213,7 +117,7 @@ export class ChatGLMApi implements LLMApi {
           useChatStore.getState().currentSession().mask?.plugin || [],
         );
         return stream(
-          path,
+          chatPath,
           requestPayload,
           getHeaders(),
           tools as any,
@@ -221,6 +125,7 @@ export class ChatGLMApi implements LLMApi {
           controller,
           // parseSSE
           (text: string, runTools: ChatMessageTool[]) => {
+            // console.log("parseSSE", text, runTools);
             const json = JSON.parse(text);
             const choices = json.choices as Array<{
               delta: {
@@ -249,7 +154,7 @@ export class ChatGLMApi implements LLMApi {
             }
             return choices[0]?.delta?.content;
           },
-          // processToolMessage
+          // processToolMessage, include tool_calls message and tool call results
           (
             requestPayload: RequestPayload,
             toolCallMessage: any,
@@ -267,7 +172,7 @@ export class ChatGLMApi implements LLMApi {
           options,
         );
       } else {
-        const res = await fetch(path, chatPayload);
+        const res = await fetch(chatPath, chatPayload);
         clearTimeout(requestTimeoutId);

         const resJson = await res.json();
@@ -279,7 +184,6 @@ export class ChatGLMApi implements LLMApi {
       options.onError?.(e as Error);
     }
   }

   async usage() {
     return {
       used: 0,
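One side of the glm.ts hunk above routes ChatGLM requests by model-name prefix: `cogview-` models go to an image endpoint, `cogvideo-` models to a video endpoint, everything else to chat. A self-contained sketch of that dispatch — the endpoint strings here are placeholders standing in for the project's `ChatGLM.ImagePath` / `VideoPath` / `ChatPath` constants:

```typescript
type ModelType = "chat" | "image" | "video";

// Prefix-based model classification, as in the diff above.
function getModelType(model: string): ModelType {
  if (model.startsWith("cogview-")) return "image";
  if (model.startsWith("cogvideo-")) return "video";
  return "chat";
}

// Map the model type to an API path; these strings are assumptions,
// not the repository's actual constants.
function getModelPath(type: ModelType): string {
  switch (type) {
    case "image":
      return "paas/v4/images/generations";
    case "video":
      return "paas/v4/videos/generations";
    default:
      return "paas/v4/chat/completions";
  }
}
```

The advantage of centralizing this dispatch is that `chat()` only needs one call (`getModelPath(getModelType(model))`) instead of branching on prefixes at every request site.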

View File

@@ -29,7 +29,7 @@ import { RequestPayload } from "./openai";
 import { fetch } from "@/app/utils/stream";

 export class GeminiProApi implements LLMApi {
-  path(path: string, shouldStream = false): string {
+  path(path: string): string {
     const accessStore = useAccessStore.getState();

     let baseUrl = "";
@@ -51,27 +51,15 @@ export class GeminiProApi implements LLMApi {
     console.log("[Proxy Endpoint] ", baseUrl, path);

     let chatPath = [baseUrl, path].join("/");
-    if (shouldStream) {
-      chatPath += chatPath.includes("?") ? "&alt=sse" : "?alt=sse";
-    }
+    chatPath += chatPath.includes("?") ? "&alt=sse" : "?alt=sse";

     return chatPath;
   }

   extractMessage(res: any) {
     console.log("[Response] gemini-pro response: ", res);

-    const getTextFromParts = (parts: any[]) => {
-      if (!Array.isArray(parts)) return "";
-
-      return parts
-        .map((part) => part?.text || "")
-        .filter((text) => text.trim() !== "")
-        .join("\n\n");
-    };
-
     return (
-      getTextFromParts(res?.candidates?.at(0)?.content?.parts) ||
-      getTextFromParts(res?.at(0)?.candidates?.at(0)?.content?.parts) ||
+      res?.candidates?.at(0)?.content?.parts.at(0)?.text ||
       res?.error?.message ||
       ""
     );
@@ -178,10 +166,7 @@ export class GeminiProApi implements LLMApi {
     options.onController?.(controller);
     try {
       // https://github.com/google-gemini/cookbook/blob/main/quickstarts/rest/Streaming_REST.ipynb
-      const chatPath = this.path(
-        Google.ChatPath(modelConfig.model),
-        shouldStream,
-      );
+      const chatPath = this.path(Google.ChatPath(modelConfig.model));

       const chatPayload = {
         method: "POST",
@@ -232,10 +217,7 @@ export class GeminiProApi implements LLMApi {
             },
           });
         }
-        return chunkJson?.candidates
-          ?.at(0)
-          ?.content.parts?.map((part: { text: string }) => part.text)
-          .join("\n\n");
+        return chunkJson?.candidates?.at(0)?.content.parts.at(0)?.text;
       },
       // processToolMessage, include tool_calls message and tool call results
       (
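One side of the Gemini hunk above hardens response parsing with a `getTextFromParts` helper: instead of reading only `parts[0].text` (which drops content when a candidate returns multiple parts), it joins every non-empty part. The helper, reproduced as a standalone function from the diff:

```typescript
// Join the text of every part in a Gemini candidate, skipping empty or
// whitespace-only parts, so multi-part responses are not truncated.
function getTextFromParts(parts: any[]): string {
  if (!Array.isArray(parts)) return "";

  return parts
    .map((part) => part?.text || "")
    .filter((text) => text.trim() !== "")
    .join("\n\n");
}
```

Because it returns `""` for non-array input, it can be chained with `||` fallbacks (as in the diff: candidate parts, then `res.error.message`, then `""`) without extra null checks.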

View File

@@ -24,7 +24,7 @@ import {
   stream,
 } from "@/app/utils/chat";
 import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";
-import { ModelSize, DalleQuality, DalleStyle } from "@/app/typing";
+import { DalleSize, DalleQuality, DalleStyle } from "@/app/typing";

 import {
   ChatOptions,
@@ -73,7 +73,7 @@ export interface DalleRequestPayload {
   prompt: string;
   response_format: "url" | "b64_json";
   n: number;
-  size: ModelSize;
+  size: DalleSize;
   quality: DalleQuality;
   style: DalleStyle;
 }
@@ -224,7 +224,7 @@ export class ChatGPTApi implements LLMApi {
       // O1 not support image, tools (plugin in ChatGPTNextWeb) and system, stream, logprobs, temperature, top_p, n, presence_penalty, frequency_penalty yet.
       requestPayload = {
         messages,
-        stream: options.config.stream,
+        stream: !isO1 ? options.config.stream : false,
         model: modelConfig.model,
         temperature: !isO1 ? modelConfig.temperature : 1,
         presence_penalty: !isO1 ? modelConfig.presence_penalty : 0,
@@ -247,7 +247,7 @@ export class ChatGPTApi implements LLMApi {
     console.log("[Request] openai payload: ", requestPayload);

-    const shouldStream = !isDalle3 && !!options.config.stream;
+    const shouldStream = !isDalle3 && !!options.config.stream && !isO1;
     const controller = new AbortController();
     options.onController?.(controller);
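One side of the openai.ts hunk above gates the payload for o1 models, which (per the comment in the diff) do not yet support streaming or custom sampling parameters. A minimal sketch of that gating as a pure function; `ChatConfig` is a simplified stand-in for the project's real config type:

```typescript
interface ChatConfig {
  stream?: boolean;
  temperature?: number;
}

// Force streaming off and pin temperature to 1 for o1 models; dall-e-3
// responses are not streamed either, mirroring the shouldStream check.
function buildPayloadFlags(
  model: string,
  config: ChatConfig,
  isDalle3 = false,
) {
  const isO1 = model.startsWith("o1");
  return {
    shouldStream: !isDalle3 && !!config.stream && !isO1,
    temperature: !isO1 ? config.temperature : 1,
  };
}
```

Keeping the condition as `!isO1 ? x : defaultValue`, rather than omitting the field, means the payload shape stays constant across models, which keeps the TypeScript type of `requestPayload` simple.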

View File

@@ -72,8 +72,6 @@ import {
isDalle3, isDalle3,
showPlugins, showPlugins,
safeLocalStorage, safeLocalStorage,
getModelSizes,
supportsCustomSize,
} from "../utils"; } from "../utils";
import { uploadImage as uploadImageRemote } from "@/app/utils/chat"; import { uploadImage as uploadImageRemote } from "@/app/utils/chat";
@@ -81,7 +79,7 @@ import { uploadImage as uploadImageRemote } from "@/app/utils/chat";
import dynamic from "next/dynamic"; import dynamic from "next/dynamic";
import { ChatControllerPool } from "../client/controller"; import { ChatControllerPool } from "../client/controller";
import { DalleQuality, DalleStyle, ModelSize } from "../typing"; import { DalleSize, DalleQuality, DalleStyle } from "../typing";
import { Prompt, usePromptStore } from "../store/prompt"; import { Prompt, usePromptStore } from "../store/prompt";
import Locale from "../locales"; import Locale from "../locales";
@@ -521,11 +519,10 @@ export function ChatActions(props: {
const [showSizeSelector, setShowSizeSelector] = useState(false); const [showSizeSelector, setShowSizeSelector] = useState(false);
const [showQualitySelector, setShowQualitySelector] = useState(false); const [showQualitySelector, setShowQualitySelector] = useState(false);
const [showStyleSelector, setShowStyleSelector] = useState(false); const [showStyleSelector, setShowStyleSelector] = useState(false);
const modelSizes = getModelSizes(currentModel); const dalle3Sizes: DalleSize[] = ["1024x1024", "1792x1024", "1024x1792"];
const dalle3Qualitys: DalleQuality[] = ["standard", "hd"]; const dalle3Qualitys: DalleQuality[] = ["standard", "hd"];
const dalle3Styles: DalleStyle[] = ["vivid", "natural"]; const dalle3Styles: DalleStyle[] = ["vivid", "natural"];
const currentSize = const currentSize = session.mask.modelConfig?.size ?? "1024x1024";
session.mask.modelConfig?.size ?? ("1024x1024" as ModelSize);
const currentQuality = session.mask.modelConfig?.quality ?? "standard"; const currentQuality = session.mask.modelConfig?.quality ?? "standard";
const currentStyle = session.mask.modelConfig?.style ?? "vivid"; const currentStyle = session.mask.modelConfig?.style ?? "vivid";
@@ -676,7 +673,7 @@ export function ChatActions(props: {
/> />
)} )}
{supportsCustomSize(currentModel) && ( {isDalle3(currentModel) && (
<ChatAction <ChatAction
onClick={() => setShowSizeSelector(true)} onClick={() => setShowSizeSelector(true)}
text={currentSize} text={currentSize}
@@ -687,7 +684,7 @@ export function ChatActions(props: {
{showSizeSelector && ( {showSizeSelector && (
<Selector <Selector
defaultSelectedValue={currentSize} defaultSelectedValue={currentSize}
items={modelSizes.map((m) => ({ items={dalle3Sizes.map((m) => ({
title: m, title: m,
value: m, value: m,
}))} }))}
@@ -963,24 +960,9 @@ function _Chat() {
             (scrollRef.current.scrollTop + scrollRef.current.clientHeight),
         ) <= 1
       : false;
-  const isAttachWithTop = useMemo(() => {
-    const lastMessage = scrollRef.current?.lastElementChild as HTMLElement;
-    // if scrolllRef is not ready or no message, return false
-    if (!scrollRef?.current || !lastMessage) return false;
-    const topDistance =
-      lastMessage!.getBoundingClientRect().top -
-      scrollRef.current.getBoundingClientRect().top;
-    // leave some space for user question
-    return topDistance < 100;
-  }, [scrollRef?.current?.scrollHeight]);
-  const isTyping = userInput !== "";
-  // if user is typing, should auto scroll to bottom
-  // if user is not typing, should auto scroll to bottom only if already at bottom
   const { setAutoScroll, scrollDomToBottom } = useScrollToBottom(
     scrollRef,
-    (isScrolledToBottom || isAttachWithTop) && !isTyping,
+    isScrolledToBottom,
   );
   const [hitBottom, setHitBottom] = useState(true);
   const isMobileScreen = useMobileScreen();
@@ -2089,6 +2071,6 @@ function _Chat() {
 export function Chat() {
   const chatStore = useChatStore();
-  const session = chatStore.currentSession();
-  return <_Chat key={session.id}></_Chat>;
+  const sessionIndex = chatStore.currentSessionIndex;
+  return <_Chat key={sessionIndex}></_Chat>;
 }


@@ -37,8 +37,7 @@ export function Avatar(props: { model?: ModelType; avatar?: string }) {
   return (
     <div className="no-dark">
       {props.model?.startsWith("gpt-4") ||
-      props.model?.startsWith("chatgpt-4o") ||
-      props.model?.startsWith("o1") ? (
+      props.model?.startsWith("chatgpt-4o") ? (
         <BlackBotIcon className="user-avatar" />
       ) : (
         <BotIcon className="user-avatar" />


@@ -90,11 +90,7 @@ export function PreCode(props: { children: any }) {
       const refText = ref.current.querySelector("code")?.innerText;
       if (htmlDom) {
         setHtmlCode((htmlDom as HTMLElement).innerText);
-      } else if (
-        refText?.startsWith("<!DOCTYPE") ||
-        refText?.startsWith("<svg") ||
-        refText?.startsWith("<?xml")
-      ) {
+      } else if (refText?.startsWith("<!DOCTYPE")) {
         setHtmlCode(refText);
       }
     }, 600);
@@ -248,10 +244,6 @@ function escapeBrackets(text: string) {
 function tryWrapHtmlCode(text: string) {
   // try add wrap html code (fixed: html codeblock include 2 newline)
-  // ignore embed codeblock
-  if (text.includes("```")) {
-    return text;
-  }
   return text
     .replace(
       /([`]*?)(\w*?)([\n\r]*?)(<!DOCTYPE html>)/g,


@@ -1771,11 +1771,9 @@ export function Settings() {
         <ListItem
           title={Locale.Settings.Access.CustomModel.Title}
           subTitle={Locale.Settings.Access.CustomModel.SubTitle}
-          vertical={true}
         >
           <input
             aria-label={Locale.Settings.Access.CustomModel.Title}
-            style={{ width: "100%", maxWidth: "unset", textAlign: "left" }}
             type="text"
             value={config.customModels}
             placeholder="model1,model2,model3"


@@ -22,6 +22,7 @@ import {
   MIN_SIDEBAR_WIDTH,
   NARROW_SIDEBAR_WIDTH,
   Path,
+  PLUGINS,
   REPO_URL,
 } from "../constant";
@@ -31,12 +32,6 @@ import dynamic from "next/dynamic";
 import { showConfirm, Selector } from "./ui-lib";
 import clsx from "clsx";
 
-const DISCOVERY = [
-  { name: Locale.Plugin.Name, path: Path.Plugins },
-  { name: "Stable Diffusion", path: Path.Sd },
-  { name: Locale.SearchChat.Page.Title, path: Path.SearchChat },
-];
-
 const ChatList = dynamic(async () => (await import("./chat-list")).ChatList, {
   loading: () => null,
 });
@@ -224,7 +219,7 @@ export function SideBarTail(props: {
 export function SideBar(props: { className?: string }) {
   useHotKey();
   const { onDragStart, shouldNarrow } = useDragSideBar();
-  const [showDiscoverySelector, setshowDiscoverySelector] = useState(false);
+  const [showPluginSelector, setShowPluginSelector] = useState(false);
   const navigate = useNavigate();
   const config = useAppConfig();
   const chatStore = useChatStore();
@@ -259,21 +254,21 @@ export function SideBar(props: { className?: string }) {
           icon={<DiscoveryIcon />}
           text={shouldNarrow ? undefined : Locale.Discovery.Name}
           className={styles["sidebar-bar-button"]}
-          onClick={() => setshowDiscoverySelector(true)}
+          onClick={() => setShowPluginSelector(true)}
           shadow
         />
       </div>
-      {showDiscoverySelector && (
+      {showPluginSelector && (
         <Selector
           items={[
-            ...DISCOVERY.map((item) => {
+            ...PLUGINS.map((item) => {
               return {
                 title: item.name,
                 value: item.path,
               };
             }),
           ]}
-          onClose={() => setshowDiscoverySelector(false)}
+          onClose={() => setShowPluginSelector(false)}
           onSelection={(s) => {
             navigate(s[0], { state: { fromHome: true } });
           }}


@@ -40,7 +40,6 @@ export const getBuildConfig = () => {
     buildMode,
     isApp,
     template: process.env.DEFAULT_INPUT_TEMPLATE ?? DEFAULT_INPUT_TEMPLATE,
-    visionModels: process.env.VISION_MODELS || "",
   };
 };


@@ -1,6 +1,5 @@
 import md5 from "spark-md5";
 import { DEFAULT_MODELS, DEFAULT_GA_ID } from "../constant";
-import { isGPT4Model } from "../utils/model";
 
 declare global {
   namespace NodeJS {
@@ -128,12 +127,19 @@ export const getServerSideConfig = () => {
   if (disableGPT4) {
     if (customModels) customModels += ",";
-    customModels += DEFAULT_MODELS.filter((m) => isGPT4Model(m.name))
+    customModels += DEFAULT_MODELS.filter(
+      (m) =>
+        (m.name.startsWith("gpt-4") || m.name.startsWith("chatgpt-4o")) &&
+        !m.name.startsWith("gpt-4o-mini"),
+    )
       .map((m) => "-" + m.name)
       .join(",");
-    if (defaultModel && isGPT4Model(defaultModel)) {
+    if (
+      (defaultModel.startsWith("gpt-4") ||
+        defaultModel.startsWith("chatgpt-4o")) &&
+      !defaultModel.startsWith("gpt-4o-mini")
+    )
       defaultModel = "";
-    }
   }
 
   const isStability = !!process.env.STABILITY_API_KEY;
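As a hedged illustration (hypothetical helper names, not code from the repository), the inlined GPT-4 filter on the incoming side of this hunk can be isolated like this: matching model names are appended to `customModels` with a leading `-`, which is how the config marks them disabled.

```typescript
// Hypothetical standalone sketch of the DISABLE_GPT4 exclusion logic shown in
// the hunk above. Names like appendGpt4Exclusions are illustrative only.
interface ModelRecord {
  name: string;
}

// Mirrors the inlined predicate: gpt-4* and chatgpt-4o* count, gpt-4o-mini does not.
function isGpt4Family(name: string): boolean {
  return (
    (name.startsWith("gpt-4") || name.startsWith("chatgpt-4o")) &&
    !name.startsWith("gpt-4o-mini")
  );
}

// Append "-<name>" entries for every GPT-4-family model to the customModels string.
function appendGpt4Exclusions(customModels: string, models: ModelRecord[]): string {
  let result = customModels;
  if (result) result += ",";
  result += models
    .filter((m) => isGpt4Family(m.name))
    .map((m) => "-" + m.name)
    .join(",");
  return result;
}
```

With `[{ name: "gpt-4" }, { name: "gpt-4o-mini" }, { name: "gpt-3.5-turbo" }]` and an empty starting string, this yields `"-gpt-4"`.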


@@ -233,8 +233,6 @@ export const XAI = {
 export const ChatGLM = {
   ExampleEndpoint: CHATGLM_BASE_URL,
   ChatPath: "api/paas/v4/chat/completions",
-  ImagePath: "api/paas/v4/images/generations",
-  VideoPath: "api/paas/v4/videos/generations",
 };
 
 export const DEFAULT_INPUT_TEMPLATE = `{{input}}`; // input / time / model / lang
@@ -266,7 +264,6 @@ export const KnowledgeCutOffDate: Record<string, string> = {
   "gpt-4o": "2023-10",
   "gpt-4o-2024-05-13": "2023-10",
   "gpt-4o-2024-08-06": "2023-10",
-  "gpt-4o-2024-11-20": "2023-10",
   "chatgpt-4o-latest": "2023-10",
   "gpt-4o-mini": "2023-10",
   "gpt-4o-mini-2024-07-18": "2023-10",
@@ -293,25 +290,6 @@ export const DEFAULT_TTS_VOICES = [
   "shimmer",
 ];
 
-export const VISION_MODEL_REGEXES = [
-  /vision/,
-  /gpt-4o/,
-  /claude-3/,
-  /gemini-1\.5/,
-  /gemini-exp/,
-  /gemini-2\.0/,
-  /learnlm/,
-  /qwen-vl/,
-  /qwen2-vl/,
-  /gpt-4-turbo(?!.*preview)/, // Matches "gpt-4-turbo" but not "gpt-4-turbo-preview"
-  /^dall-e-3$/, // Matches exactly "dall-e-3"
-  /glm-4v-plus/,
-  /glm-4v/,
-  /glm-4v-flash/,
-];
-
-export const EXCLUDE_VISION_MODEL_REGEXES = [/claude-3-5-haiku-20241022/];
-
 const openaiModels = [
   "gpt-3.5-turbo",
   "gpt-3.5-turbo-1106",
@@ -325,7 +303,6 @@ const openaiModels = [
   "gpt-4o",
   "gpt-4o-2024-05-13",
   "gpt-4o-2024-08-06",
-  "gpt-4o-2024-11-20",
   "chatgpt-4o-latest",
   "gpt-4o-mini",
   "gpt-4o-mini-2024-07-18",
@@ -338,23 +315,10 @@ const openaiModels = [
 ];
 
 const googleModels = [
-  "gemini-1.0-pro", // Deprecated on 2/15/2025
+  "gemini-1.0-pro",
   "gemini-1.5-pro-latest",
-  "gemini-1.5-pro",
-  "gemini-1.5-pro-002",
-  "gemini-1.5-pro-exp-0827",
   "gemini-1.5-flash-latest",
-  "gemini-1.5-flash-8b-latest",
-  "gemini-1.5-flash",
-  "gemini-1.5-flash-8b",
-  "gemini-1.5-flash-002",
-  "gemini-1.5-flash-exp-0827",
-  "learnlm-1.5-pro-experimental",
-  "gemini-exp-1114",
-  "gemini-exp-1121",
-  "gemini-exp-1206",
-  "gemini-2.0-flash-exp",
-  "gemini-2.0-flash-thinking-exp-1219",
+  "gemini-pro-vision",
 ];
 
 const anthropicModels = [
@@ -436,15 +400,6 @@ const chatglmModels = [
   "glm-4-long",
   "glm-4-flashx",
   "glm-4-flash",
-  "glm-4v-plus",
-  "glm-4v",
-  "glm-4v-flash", // free
-  "cogview-3-plus",
-  "cogview-3",
-  "cogview-3-flash", // free
-  // 目前无法适配轮询任务
-  // "cogvideox",
-  // "cogvideox-flash", // free
 ];
 
 let seq = 1000; // 内置的模型序号生成器从1000开始
@@ -600,6 +555,11 @@ export const internalAllowedWebDavEndpoints = [
 ];
 
 export const DEFAULT_GA_ID = "G-89WN60ZK2E";
+export const PLUGINS = [
+  { name: "Plugins", path: Path.Plugins },
+  { name: "Stable Diffusion", path: Path.Sd },
+  { name: "Search Chat", path: Path.SearchChat },
+];
 
 export const SAAS_CHAT_URL = "https://nextchat.dev/chat";
 export const SAAS_CHAT_UTM_URL = "https://nextchat.dev/chat?utm=github";


@@ -176,7 +176,7 @@ const cn = {
     },
   },
   Lang: {
-    Name: "Language", // 注意:如果要添加新的翻译,请不要翻译此值,将它保留为 `Language`
+    Name: "Language", // ATTENTION: if you wanna add a new translation, please do not translate this value, leave it as `Language`
     All: "所有语言",
   },
   Avatar: "头像",
@@ -630,7 +630,7 @@ const cn = {
     Sysmessage: "你是一个助手",
   },
   SearchChat: {
-    Name: "搜索聊天记录",
+    Name: "搜索",
     Page: {
       Title: "搜索聊天记录",
       Search: "输入搜索关键词",


@@ -485,7 +485,7 @@ const tw = {
     },
   },
   SearchChat: {
-    Name: "搜尋聊天記錄",
+    Name: "搜尋",
     Page: {
       Title: "搜尋聊天記錄",
       Search: "輸入搜尋關鍵詞",


@@ -3,7 +3,7 @@ import { BuiltinMask } from "./typing";
 export const CN_MASKS: BuiltinMask[] = [
   {
     avatar: "1f5bc-fe0f",
-    name: "AI文生图",
+    name: "以文搜图",
     context: [
       {
         id: "text-to-pic-0",
@@ -28,7 +28,7 @@ export const CN_MASKS: BuiltinMask[] = [
         id: "text-to-pic-3",
         role: "system",
         content:
-          "助手善于判断用户意图当确定需要提供图片时助手会变得沉默寡言只使用以下格式输出markdown图片![description](https://image.pollinations.ai/prompt/description?nologo=true)因为这个语法可以自动按照提示生成并渲染图片。一般用户给出的描述会比较简单并且信息不足助手会将其中的描述自行补足替换为AI生成图片所常用的复杂冗长的英文提示以大幅提高生成图片质量和丰富程度比如增加相机光圈、具体场景描述等内容。助手会避免用代码块或原始块包围markdown标记因为那样只会渲染出代码块或原始块而不是图片。url中的空格等符号需要转义。",
+          "助手善于判断用户意图当确定需要提供图片时助手会变得沉默寡言只使用以下格式输出markdown图片![描述](https://image.pollinations.ai/prompt/描述)因为这个语法可以自动按照提示生成并渲染图片。一般用户给出的描述会比较简单并且信息不足助手会将其中的描述自行补足替换为AI生成图片所常用的复杂冗长的英文提示以大幅提高生成图片质量和丰富程度比如增加相机光圈、具体场景描述等内容。助手会避免用代码块或原始块包围markdown标记因为那样只会渲染出代码块或原始块而不是图片。",
         date: "",
       },
     ],


@@ -1,5 +1,5 @@
 import { LLMModel } from "../client/api";
-import { DalleQuality, DalleStyle, ModelSize } from "../typing";
+import { DalleSize, DalleQuality, DalleStyle } from "../typing";
 import { getClientConfig } from "../config/client";
 import {
   DEFAULT_INPUT_TEMPLATE,
@@ -78,7 +78,7 @@ export const DEFAULT_CONFIG = {
     compressProviderName: "",
     enableInjectSystemPrompts: true,
     template: config?.template ?? DEFAULT_INPUT_TEMPLATE,
-    size: "1024x1024" as ModelSize,
+    size: "1024x1024" as DalleSize,
     quality: "standard" as DalleQuality,
     style: "vivid" as DalleStyle,
   },


@@ -11,14 +11,3 @@ export interface RequestMessage {
 export type DalleSize = "1024x1024" | "1792x1024" | "1024x1792";
 export type DalleQuality = "standard" | "hd";
 export type DalleStyle = "vivid" | "natural";
-
-export type ModelSize =
-  | "1024x1024"
-  | "1792x1024"
-  | "1024x1792"
-  | "768x1344"
-  | "864x1152"
-  | "1344x768"
-  | "1152x864"
-  | "1440x720"
-  | "720x1440";


@@ -5,9 +5,6 @@ import { RequestMessage } from "./client/api";
 import { ServiceProvider } from "./constant";
 // import { fetch as tauriFetch, ResponseType } from "@tauri-apps/api/http";
 import { fetch as tauriStreamFetch } from "./utils/stream";
-import { VISION_MODEL_REGEXES, EXCLUDE_VISION_MODEL_REGEXES } from "./constant";
-import { getClientConfig } from "./config/client";
-import { ModelSize } from "./typing";
 
 export function trimTopic(topic: string) {
   // Fix an issue where double quotes still show in the Indonesian language
@@ -255,16 +252,25 @@ export function getMessageImages(message: RequestMessage): string[] {
 }
 
 export function isVisionModel(model: string) {
-  const clientConfig = getClientConfig();
-  const envVisionModels = clientConfig?.visionModels
-    ?.split(",")
-    .map((m) => m.trim());
-  if (envVisionModels?.includes(model)) {
-    return true;
-  }
+  // Note: This is a better way using the TypeScript feature instead of `&&` or `||` (ts v5.5.0-dev.20240314 I've been using)
+  const excludeKeywords = ["claude-3-5-haiku-20241022"];
+  const visionKeywords = [
+    "vision",
+    "claude-3",
+    "gemini-1.5-pro",
+    "gemini-1.5-flash",
+    "gpt-4o",
+    "gpt-4o-mini",
+  ];
+  const isGpt4Turbo =
+    model.includes("gpt-4-turbo") && !model.includes("preview");
+
   return (
-    !EXCLUDE_VISION_MODEL_REGEXES.some((regex) => regex.test(model)) &&
-    VISION_MODEL_REGEXES.some((regex) => regex.test(model))
+    !excludeKeywords.some((keyword) => model.includes(keyword)) &&
+    (visionKeywords.some((keyword) => model.includes(keyword)) ||
+      isGpt4Turbo ||
+      isDalle3(model))
   );
 }
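For reference, the keyword-based check on the incoming (`+`) side of this hunk is self-contained enough to run in isolation. The sketch below mirrors its logic; `isDalle3` is inlined for self-containment and the function name is illustrative, not the repo's export.

```typescript
// Standalone sketch of the keyword-based vision-model check from the hunk above.
function isDalle3(model: string): boolean {
  return model === "dall-e-3";
}

function isVisionModelSketch(model: string): boolean {
  // Explicit exclusions win over any keyword match.
  const excludeKeywords = ["claude-3-5-haiku-20241022"];
  const visionKeywords = [
    "vision",
    "claude-3",
    "gemini-1.5-pro",
    "gemini-1.5-flash",
    "gpt-4o",
    "gpt-4o-mini",
  ];
  // "gpt-4-turbo" counts, but preview builds do not.
  const isGpt4Turbo =
    model.includes("gpt-4-turbo") && !model.includes("preview");
  return (
    !excludeKeywords.some((k) => model.includes(k)) &&
    (visionKeywords.some((k) => model.includes(k)) || isGpt4Turbo || isDalle3(model))
  );
}
```

Here `isVisionModelSketch("gpt-4o")` and `isVisionModelSketch("gpt-4-turbo")` are true, while `"gpt-4-turbo-preview"` and the explicitly excluded `"claude-3-5-haiku-20241022"` are false.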
@@ -272,28 +278,6 @@ export function isDalle3(model: string) {
   return "dall-e-3" === model;
 }
 
-export function getModelSizes(model: string): ModelSize[] {
-  if (isDalle3(model)) {
-    return ["1024x1024", "1792x1024", "1024x1792"];
-  }
-  if (model.toLowerCase().includes("cogview")) {
-    return [
-      "1024x1024",
-      "768x1344",
-      "864x1152",
-      "1344x768",
-      "1152x864",
-      "1440x720",
-      "720x1440",
-    ];
-  }
-  return [];
-}
-
-export function supportsCustomSize(model: string): boolean {
-  return getModelSizes(model).length > 0;
-}
-
 export function showPlugins(provider: ServiceProvider, model: string) {
   if (
     provider == ServiceProvider.OpenAI ||
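The removed `getModelSizes`/`supportsCustomSize` helpers are small enough to try in isolation. This sketch reproduces them (with `isDalle3` inlined and the `ModelSize` union taken from the typing diff further down) to show the mapping they encode: three fixed sizes for DALL·E 3, a wider set for CogView models, and none otherwise.

```typescript
// Reproduction of the removed helpers, for illustration only.
type ModelSize =
  | "1024x1024"
  | "1792x1024"
  | "1024x1792"
  | "768x1344"
  | "864x1152"
  | "1344x768"
  | "1152x864"
  | "1440x720"
  | "720x1440";

function isDalle3(model: string): boolean {
  return model === "dall-e-3";
}

// Map a model name to the image sizes it supports.
function getModelSizes(model: string): ModelSize[] {
  if (isDalle3(model)) {
    return ["1024x1024", "1792x1024", "1024x1792"];
  }
  if (model.toLowerCase().includes("cogview")) {
    return [
      "1024x1024",
      "768x1344",
      "864x1152",
      "1344x768",
      "1152x864",
      "1440x720",
      "720x1440",
    ];
  }
  return [];
}

// A model supports custom sizes iff it maps to a non-empty size list.
function supportsCustomSize(model: string): boolean {
  return getModelSizes(model).length > 0;
}
```

With this, `supportsCustomSize("cogview-3-flash")` is true while `supportsCustomSize("gpt-4o")` is false, which is why the chat UI hunk above could swap `isDalle3(currentModel)` for `supportsCustomSize(currentModel)`.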


@@ -202,52 +202,3 @@ export function isModelAvailableInServer(
   const modelTable = collectModelTable(DEFAULT_MODELS, customModels);
   return modelTable[fullName]?.available === false;
 }
-
-/**
- * Check if the model name is a GPT-4 related model
- *
- * @param modelName The name of the model to check
- * @returns True if the model is a GPT-4 related model (excluding gpt-4o-mini)
- */
-export function isGPT4Model(modelName: string): boolean {
-  return (
-    (modelName.startsWith("gpt-4") ||
-      modelName.startsWith("chatgpt-4o") ||
-      modelName.startsWith("o1")) &&
-    !modelName.startsWith("gpt-4o-mini")
-  );
-}
-
-/**
- * Checks if a model is not available on any of the specified providers in the server.
- *
- * @param {string} customModels - A string of custom models, comma-separated.
- * @param {string} modelName - The name of the model to check.
- * @param {string|string[]} providerNames - A string or array of provider names to check against.
- *
- * @returns {boolean} True if the model is not available on any of the specified providers, false otherwise.
- */
-export function isModelNotavailableInServer(
-  customModels: string,
-  modelName: string,
-  providerNames: string | string[],
-): boolean {
-  // Check DISABLE_GPT4 environment variable
-  if (
-    process.env.DISABLE_GPT4 === "1" &&
-    isGPT4Model(modelName.toLowerCase())
-  ) {
-    return true;
-  }
-
-  const modelTable = collectModelTable(DEFAULT_MODELS, customModels);
-  const providerNamesArray = Array.isArray(providerNames)
-    ? providerNames
-    : [providerNames];
-  for (const providerName of providerNamesArray) {
-    const fullName = `${modelName}@${providerName.toLowerCase()}`;
-    if (modelTable?.[fullName]?.available === true) return false;
-  }
-  return true;
-}


@@ -59,8 +59,8 @@
     "@tauri-apps/api": "^1.6.0",
     "@tauri-apps/cli": "1.5.11",
     "@testing-library/dom": "^10.4.0",
-    "@testing-library/jest-dom": "^6.6.3",
-    "@testing-library/react": "^16.1.0",
+    "@testing-library/jest-dom": "^6.6.2",
+    "@testing-library/react": "^16.0.1",
     "@types/jest": "^29.5.14",
     "@types/js-yaml": "4.0.9",
     "@types/lodash-es": "^4.17.12",


@@ -1,80 +0,0 @@
-import { isModelNotavailableInServer } from "../app/utils/model";
-
-describe("isModelNotavailableInServer", () => {
-  test("test model will return false, which means the model is available", () => {
-    const customModels = "";
-    const modelName = "gpt-4";
-    const providerNames = "OpenAI";
-    const result = isModelNotavailableInServer(
-      customModels,
-      modelName,
-      providerNames,
-    );
-    expect(result).toBe(false);
-  });
-
-  test("test model will return true when model is not available in custom models", () => {
-    const customModels = "-all,gpt-4o-mini";
-    const modelName = "gpt-4";
-    const providerNames = "OpenAI";
-    const result = isModelNotavailableInServer(
-      customModels,
-      modelName,
-      providerNames,
-    );
-    expect(result).toBe(true);
-  });
-
-  test("should respect DISABLE_GPT4 setting", () => {
-    process.env.DISABLE_GPT4 = "1";
-    const result = isModelNotavailableInServer("", "gpt-4", "OpenAI");
-    expect(result).toBe(true);
-  });
-
-  test("should handle empty provider names", () => {
-    const result = isModelNotavailableInServer("-all,gpt-4", "gpt-4", "");
-    expect(result).toBe(true);
-  });
-
-  test("should be case insensitive for model names", () => {
-    const result = isModelNotavailableInServer("-all,GPT-4", "gpt-4", "OpenAI");
-    expect(result).toBe(true);
-  });
-
-  test("support passing multiple providers, model unavailable on one of the providers will return true", () => {
-    const customModels = "-all,gpt-4@google";
-    const modelName = "gpt-4";
-    const providerNames = ["OpenAI", "Azure"];
-    const result = isModelNotavailableInServer(
-      customModels,
-      modelName,
-      providerNames,
-    );
-    expect(result).toBe(true);
-  });
-
-  // FIXME: 这个测试用例有问题,需要修复
-  // test("support passing multiple providers, model available on one of the providers will return false", () => {
-  //   const customModels = "-all,gpt-4@google";
-  //   const modelName = "gpt-4";
-  //   const providerNames = ["OpenAI", "Google"];
-  //   const result = isModelNotavailableInServer(
-  //     customModels,
-  //     modelName,
-  //     providerNames,
-  //   );
-  //   expect(result).toBe(false);
-  // });
-
-  test("test custom model without setting provider", () => {
-    const customModels = "-all,mistral-large";
-    const modelName = "mistral-large";
-    const providerNames = modelName;
-    const result = isModelNotavailableInServer(
-      customModels,
-      modelName,
-      providerNames,
-    );
-    expect(result).toBe(false);
-  });
-});


@@ -1,67 +0,0 @@
-import { isVisionModel } from "../app/utils";
-
-describe("isVisionModel", () => {
-  const originalEnv = process.env;
-
-  beforeEach(() => {
-    jest.resetModules();
-    process.env = { ...originalEnv };
-  });
-
-  afterEach(() => {
-    process.env = originalEnv;
-  });
-
-  test("should identify vision models using regex patterns", () => {
-    const visionModels = [
-      "gpt-4-vision",
-      "claude-3-opus",
-      "gemini-1.5-pro",
-      "gemini-2.0",
-      "gemini-exp-vision",
-      "learnlm-vision",
-      "qwen-vl-max",
-      "qwen2-vl-max",
-      "gpt-4-turbo",
-      "dall-e-3",
-    ];
-
-    visionModels.forEach((model) => {
-      expect(isVisionModel(model)).toBe(true);
-    });
-  });
-
-  test("should exclude specific models", () => {
-    expect(isVisionModel("claude-3-5-haiku-20241022")).toBe(false);
-  });
-
-  test("should not identify non-vision models", () => {
-    const nonVisionModels = [
-      "gpt-3.5-turbo",
-      "gpt-4-turbo-preview",
-      "claude-2",
-      "regular-model",
-    ];
-
-    nonVisionModels.forEach((model) => {
-      expect(isVisionModel(model)).toBe(false);
-    });
-  });
-
-  test("should identify models from VISION_MODELS env var", () => {
-    process.env.VISION_MODELS = "custom-vision-model,another-vision-model";
-
-    expect(isVisionModel("custom-vision-model")).toBe(true);
-    expect(isVisionModel("another-vision-model")).toBe(true);
-    expect(isVisionModel("unrelated-model")).toBe(false);
-  });
-
-  test("should handle empty or missing VISION_MODELS", () => {
-    process.env.VISION_MODELS = "";
-    expect(isVisionModel("unrelated-model")).toBe(false);
-
-    delete process.env.VISION_MODELS;
-    expect(isVisionModel("unrelated-model")).toBe(false);
-    expect(isVisionModel("gpt-4-vision")).toBe(true);
-  });
-});


@@ -2114,10 +2114,10 @@
   lz-string "^1.5.0"
   pretty-format "^27.0.2"
 
-"@testing-library/jest-dom@^6.6.3":
-  version "6.6.3"
-  resolved "https://registry.yarnpkg.com/@testing-library/jest-dom/-/jest-dom-6.6.3.tgz#26ba906cf928c0f8172e182c6fe214eb4f9f2bd2"
-  integrity sha512-IteBhl4XqYNkM54f4ejhLRJiZNqcSCoXUOG2CPK7qbD322KjQozM4kHQOfkG2oln9b9HTYqs+Sae8vBATubxxA==
+"@testing-library/jest-dom@^6.6.2":
+  version "6.6.2"
+  resolved "https://registry.yarnpkg.com/@testing-library/jest-dom/-/jest-dom-6.6.2.tgz#8186aa9a07263adef9cc5a59a4772db8c31f4a5b"
+  integrity sha512-P6GJD4yqc9jZLbe98j/EkyQDTPgqftohZF5FBkHY5BUERZmcf4HeO2k0XaefEg329ux2p21i1A1DmyQ1kKw2Jw==
   dependencies:
     "@adobe/css-tools" "^4.4.0"
     aria-query "^5.0.0"
@@ -2127,10 +2127,10 @@
   lodash "^4.17.21"
   redent "^3.0.0"
 
-"@testing-library/react@^16.1.0":
-  version "16.1.0"
-  resolved "https://registry.yarnpkg.com/@testing-library/react/-/react-16.1.0.tgz#aa0c61398bac82eaf89776967e97de41ac742d71"
-  integrity sha512-Q2ToPvg0KsVL0ohND9A3zLJWcOXXcO8IDu3fj11KhNt0UlCWyFyvnCIBkd12tidB2lkiVRG8VFqdhcqhqnAQtg==
+"@testing-library/react@^16.0.1":
+  version "16.0.1"
+  resolved "https://registry.yarnpkg.com/@testing-library/react/-/react-16.0.1.tgz#29c0ee878d672703f5e7579f239005e4e0faa875"
+  integrity sha512-dSmwJVtJXmku+iocRhWOUFbrERC76TX2Mnf0ATODz8brzAZrMBbzLwQixlBSanZxR6LddK3eiwpSFZgDET1URg==
   dependencies:
     "@babel/runtime" "^7.12.5"