One-click deployment of your own customized ChatGPT web app, backed by the ChatGPT 3.5 API service (the same model used on the OpenAI web page). It lets multiple ChatGPT accounts work together and exposes an external API service.
The PHP version calls the OpenAI API
This PHP demo calls OpenAI's API for Q&A, and the code has been updated to use the latest gpt-3.5-turbo model. It communicates in stream mode, outputting tokens as they are generated, so responses feel faster than on the official site. The front end uses JS EventSource, renders the text as Markdown, and syntax-highlights code. The server keeps a log of all visitors' conversations.
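The stream mode described above is built on Server-Sent Events: the server writes `data:` lines as tokens are generated, and the browser's EventSource fires one message per line until a terminating `[DONE]` sentinel. As an illustration only (this is not the project's actual code), a minimal Go sketch of that framing:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseSSE extracts the payload of every "data:" line from a
// Server-Sent Events stream, stopping at the "[DONE]" sentinel
// that the OpenAI streaming API sends when generation finishes.
func parseSSE(stream string) []string {
	var events []string
	scanner := bufio.NewScanner(strings.NewReader(stream))
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "data:") {
			continue // skip blank keep-alive lines and comments
		}
		payload := strings.TrimSpace(strings.TrimPrefix(line, "data:"))
		if payload == "[DONE]" {
			break
		}
		events = append(events, payload)
	}
	return events
}

func main() {
	raw := "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\ndata: [DONE]\n"
	for _, e := range parseSSE(raw) {
		fmt.Println(e)
	}
}
```

The front end simply appends each delta to the page as it arrives, which is why the text appears while it is still being generated.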
Many people have asked for an API-KEY input field on the demo site; the code for it is already included, just uncomment it in index.php. For a cleaner look, you can comment out the "continuous dialogue" part above, since it is not very friendly on mobile.
Accessing the new OpenAI endpoint from mainland China will time out. If you have an HTTP proxy locally, uncomment `curl_setopt($ch, CURLOPT_PROXY, "http://127.0.0.1:1081");` in stream.php and adjust it so that requests to the OpenAI API go through your local proxy.
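The same proxy workaround applies to any client language. For illustration, a small Go sketch of the equivalent of the `CURLOPT_PROXY` setting above; the address `127.0.0.1:1081` is just the example value from stream.php:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// newProxyClient builds an HTTP client that routes all requests through
// the given proxy, mirroring PHP's CURLOPT_PROXY option in stream.php.
func newProxyClient(proxyAddr string) (*http.Client, error) {
	proxyURL, err := url.Parse(proxyAddr)
	if err != nil {
		return nil, err
	}
	return &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
	}, nil
}

func main() {
	client, err := newProxyClient("http://127.0.0.1:1081")
	if err != nil {
		panic(err)
	}
	// Requests made with this client now go through the local proxy.
	fmt.Printf("client transport configured: %T\n", client.Transport)
}
```

Any request made through such a client is tunneled via the local proxy, which is what lets the OpenAI endpoint be reached from a restricted network.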
This project is a private (self-hosted) deployment of ChatGPT, based on
The first method: download the binary directly (for users who do not program)
If you are non-technical, download the compressed package from the Releases page, pick the archive matching your system and architecture, then extract and run it.
After downloading, extract it locally; you will see the executable and the configuration file:
The second method: run from source (for users familiar with Go)
$ git clone https://github.com/869413421/chatgpt-web.git
$ cd chatgpt-web
$ cp config.dev.json config.json
$ go run main.go
run with docker
You can run this project quickly with docker.
The first method: run with environment variables
# Run the project; see the configuration notes below for the environment variables
$ docker run -itd --name chatgpt-web --restart=always \
  -e APIKEY=replace_with_your_key \
  -e MODEL=gpt-3.5-turbo-0301 \
  -e BOT_DESC="You are an AI assistant; I need you to answer my questions in the persona of a gentle, caring girlfriend." \
  -e MAX_TOKENS=512 \
  -e TEMPREATURE=0.9 \
  -e TOP_P=1 \
  -e FREQ=0.0 \
  -e PRES=0.6 \
  -e PROXY=http://host.docker.internal:10809 \
  -e AUTH_USER= \
  -e AUTH_PASSWORD= \
  -p 8080:8080 \
  --add-host="host.docker.internal:host-gateway" \
  qingshui869413421/chatgpt-web:latest
`host.docker.internal` resolves to the IP of the host running the container, so you only need to change the port to your proxy's port.
For the configuration file mapped in the run command, see the configuration notes below.
The second method: run with a mounted configuration file
# Copy the configuration file and adjust its contents to your actual situation
$ cp config.dev.json config.json   # config.dev.json is in the project root
# Run the project
$ docker run -itd --name chatgpt-web -v `pwd`/config.json:/app/config.json -p 8080:8080 qingshui869413421/chatgpt-web:latest
For the contents of the configuration file, see the configuration notes below.
Obtain an OpenAI account (email) and password
- Click to register for an OpenAI account