Merge branch 'xtekky:main' into main

This commit is contained in:
Valerii 2023-05-05 01:14:29 +03:00 committed by GitHub
commit b3754facf9
37 changed files with 387 additions and 303 deletions

.dockerignore Normal file

@ -0,0 +1,9 @@
# Development
.dockerignore
.git
.gitignore
.github
.idea
# Application
venv/

.github/FUNDING.yml vendored

@ -1,13 +1,3 @@
# These are supported funding model platforms
github: [onlp]
patreon: xtekky
open_collective: # Replace with a single Open Collective username
ko_fi: xtekky
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: tekky
issuehunt: xtekky
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.github/workflows/ci.yml vendored Normal file

@ -0,0 +1,37 @@
name: Build and push `gpt4free` docker image

on:
  workflow_dispatch:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up qemu
        uses: docker/setup-qemu-action@v2
      - name: Set up docker buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to docker hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and push docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: ${{ github.ref == 'refs/heads/main' }}
          tags: |
            ${{ secrets.DOCKER_USERNAME }}/gpt4free:latest

.vscode/settings.json vendored Normal file

@ -0,0 +1,4 @@
{
    "editor.tabCompletion": "on",
    "diffEditor.codeLens": true
}

View File

@ -1,18 +1,29 @@
-FROM python:3.10
-RUN apt-get update && apt-get install -y git
-RUN mkdir -p /usr/src/gpt4free
-WORKDIR /usr/src/gpt4free
-COPY requirements.txt /usr/src/gpt4free/
-RUN pip install --no-cache-dir -r requirements.txt
-COPY . /usr/src/gpt4free
-RUN cp gui/streamlit_app.py .
-EXPOSE 8501
+FROM python:3.11 as builder
+WORKDIR /usr/app
+ENV PATH="/usr/app/venv/bin:$PATH"
+#RUN apt-get update && apt-get install -y git
+RUN mkdir -p /usr/app
+RUN python -m venv ./venv
+COPY requirements.txt .
+RUN pip install -r requirements.txt
# RUN pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
# RUN pip config set global.trusted-host mirrors.aliyun.com
+FROM python:3.11
+WORKDIR /usr/app
+ENV PATH="/usr/app/venv/bin:$PATH"
+COPY --from=builder /usr/app/venv ./venv
+COPY . .
+RUN cp ./gui/streamlit_app.py .
CMD ["streamlit", "run", "streamlit_app.py"]
+EXPOSE 8501

View File

@ -1,7 +1,34 @@
Due to legal and personal issues, the development speed of this repository may slow down over the next one to two weeks. I apologize for any inconvenience this may cause. I have been putting a lot of effort into this small personal/educational project, and it is now on the verge of being taken down.
<p>You may join our discord: <a href="https://discord.com/invite/gpt4free">discord.gg/gpt4free</a> for further updates. <a href="https://discord.gg/gpt4free"><img align="center" alt="gpt4free Discord" width="22px" src="https://raw.githubusercontent.com/peterthehan/peterthehan/master/assets/discord.svg" /></a></p>
<img alt="gpt4free logo" src="https://user-images.githubusercontent.com/98614666/233799515-1a7cb6a3-b17f-42c4-956d-8d2a0664466f.png">
## Legal Notice <a name="legal-notice"></a>
This repository is _not_ associated with or endorsed by providers of the APIs contained in this GitHub repository. This project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to improve their security or request the removal of their site from this repository.
Please note the following:
1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners. This project is _not_ claiming any right over them, nor is it affiliated with or endorsed by any of the providers mentioned.
2. **Responsibility**: The author of this repository is _not_ responsible for any consequences, damages, or losses arising from the use or misuse of this repository or the content provided by the third-party APIs. Users are solely responsible for their actions and any repercussions that may follow. We strongly recommend that users follow the Terms of Service of each website.
3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By using the information and code provided, users acknowledge that they are using the APIs and models at their own risk and agree to comply with any applicable laws and regulations.
4. **Copyright**: All content in this repository, including but not limited to code, images, and documentation, is the intellectual property of the repository author, unless otherwise stated. Unauthorized copying, distribution, or use of any content in this repository is strictly prohibited without the express written consent of the repository author.
5. **Indemnification**: Users agree to indemnify, defend, and hold harmless the author of this repository from and against any and all claims, liabilities, damages, losses, or expenses, including legal fees and costs, arising out of or in any way connected with their use or misuse of this repository, its content, or related third-party APIs.
6. **Updates and Changes**: The author reserves the right to modify, update, or remove any content, information, or features in this repository at any time without prior notice. Users are responsible for regularly reviewing the content and any changes made to this repository.
By using this repository or any code related to it, you agree to these terms. The author is not responsible for any copies, forks, or reuploads made by other users. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.
<br>
<img src="https://media.giphy.com/media/LnQjpWaON8nhr21vNW/giphy.gif" width="100" align="left">
Just APIs from some language model sites.
<p>Join our <a href="https://discord.com/invite/gpt4free">discord.gg/gpt4free</a> Discord community! <a href="https://discord.gg/gpt4free"><img align="center" alt="gpt4free Discord" width="22px" src="https://raw.githubusercontent.com/peterthehan/peterthehan/master/assets/discord.svg" /></a></p>
# Related gpt4free projects
@ -24,6 +51,13 @@ Just API's from some language model sites.
<td><a href="https://github.com/xtekky/gpt4free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td> <td><a href="https://github.com/xtekky/gpt4free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/gpt4free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td> <td><a href="https://github.com/xtekky/gpt4free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
</tr> </tr>
<tr>
<td><a href="https://github.com/xiangsx/gpt4free-ts"><b>gpt4free-ts</b></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/issues"><img alt="Issues" src="https://img.shields.io/github/issues/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/xtekky/chatgpt-clone"><b>ChatGPT-Clone</b></a></td>
<td><a href="https://github.com/xtekky/chatgpt-clone/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
@ -32,11 +66,11 @@ Just API's from some language model sites.
<td><a href="https://github.com/xtekky/chatgpt-clone/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td> <td><a href="https://github.com/xtekky/chatgpt-clone/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
</tr> </tr>
<tr> <tr>
<td><a href="https://github.com/mishalhossin/Coding-Chatbot-Gpt4Free"><b>ChatGpt Discord Bot</b></a></td> <td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free"><b>ChatGpt Discord Bot</b></a></td>
<td><a href="https://github.com/mishalhossin/Coding-Chatbot-Gpt4Free/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/mishalhossin/Coding-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td> <td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Coding-Chatbot-Gpt4Free/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/mishalhossin/Coding-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td> <td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Coding-Chatbot-Gpt4Free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/mishalhossin/Coding-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td> <td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Coding-Chatbot-Gpt4Free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/mishalhossin/Coding-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td> <td><a href="https://github.com/mishalhossin/Coding-Chatbot-Gpt4Free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
</tr> </tr>
</tbody> </tbody>
</table> </table>
@ -86,7 +120,7 @@ Just API's from some language model sites.
| [sqlchat.ai](https://sqlchat.ai) | GPT-3.5 |
| [bard.google.com](https://bard.google.com) | custom / search |
| [bing.com/chat](https://bing.com/chat) | GPT-4/3.5 |
-| [chat.forefront.ai/](https://chat.forefront.ai/) | GPT-4/3.5 |
+| [italygpt.it](https://italygpt.it) | GPT-3.5 |
## Best sites <a name="best-sites"></a>
@ -110,8 +144,7 @@ pip3 install -r requirements.txt
## To start gpt4free GUI <a name="streamlit-gpt4free-gui"></a>
-Move `streamlit_app.py` from `./gui` to the base folder
-then run:
+Move `streamlit_app.py` from `./gui` to the base folder then run:
`streamlit run streamlit_app.py` or `python3 -m streamlit run streamlit_app.py`
## Docker <a name="docker-instructions"></a>
@ -119,7 +152,7 @@ then run:
Build
```
-docker build -t gpt4free:latest -f Docker/Dockerfile .
+docker build -t gpt4free:latest .
```
Run
@ -127,37 +160,21 @@ Run
```
docker run -p 8501:8501 gpt4free:latest
```
Another way - docker-compose (no docker build/run needed)
```
docker-compose up -d
```
## Deploy using docker-compose
Run the following:
```
-docker-compose up -d
+docker-compose up --build -d
```
## ChatGPT clone
-> currently implementing new features and trying to scale it, please be patient it may be unstable
+> Currently implementing new features and trying to scale it; please be patient, it may be unstable
-> https://chat.chatbot.sex/chat
+> https://chat.g4f.ai/chat
> This site was developed by me and includes **gpt-4/3.5**, **internet access** and **gpt-jailbreaks** like DAN
-> run locally here: https://github.com/xtekky/chatgpt-clone
+> Run locally here: https://github.com/xtekky/chatgpt-clone
## Legal Notice <a name="legal-notice"></a>
This repository uses third-party APIs and is _not_ associated with or endorsed by the API providers. This project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to improve their security.
Please note the following:
1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners. This project is _not_ claiming any right over them.
2. **Responsibility**: The author of this repository is _not_ responsible for any consequences arising from the use or misuse of this repository or the content provided by the third-party APIs and any damage or losses caused by users' actions.
3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By using the information and code provided, users acknowledge that they are using the APIs and models at their own risk and agree to comply with any applicable laws and regulations.
## Copyright:

View File

@ -2,8 +2,14 @@ version: "3.9"
services:
  gpt4free:
    build:
-      context: .
+      context: ./
      dockerfile: Dockerfile
    container_name: dc_gpt4free
    # environment:
    #   - http_proxy=http://127.0.0.1:1080 # modify this for your proxy
    #   - https_proxy=http://127.0.0.1:1080 # modify this for your proxy
    image: img_gpt4free
    ports:
-      - "8501:8501"
+      - 8501:8501
    restart: always

View File

@ -1,12 +0,0 @@
version: '3.8'

services:
  gpt4:
    build:
      context: .
      dockerfile: Dockerfile
    image: gpt4free:latest
    container_name: gpt4
    ports:
      - 8501:8501
    restart: unless-stopped

View File

@ -4,8 +4,8 @@ from gpt4free import cocalc
from gpt4free import forefront
from gpt4free import quora
from gpt4free import theb
-from gpt4free import you
from gpt4free import usesless
+from gpt4free import you


class Provider(Enum):
@ -24,7 +24,6 @@ class Completion:
    @staticmethod
    def create(provider: Provider, prompt: str, **kwargs) -> str:
        """
        Invokes the given provider with the given prompt and additional arguments and returns the string response
@ -47,10 +46,10 @@ class Completion:
            return Completion.__useless_service(prompt, **kwargs)
        else:
            raise Exception('Provider does not exist, please try again')

    @staticmethod
    def __useless_service(prompt: str, **kwargs) -> str:
-        return usesless.Completion.create(prompt = prompt, **kwargs)
+        return usesless.Completion.create(prompt=prompt, **kwargs)

    @staticmethod
    def __you_service(prompt: str, **kwargs) -> str:
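For reference, a minimal sketch of how this dispatcher is typically invoked; the call shape mirrors the usesless test later in this diff, and the empty `parentMessageId` for a fresh conversation is an assumption:

```python
import gpt4free

# Route a prompt through the UseLess provider via the unified dispatcher.
req = gpt4free.Completion.create(provider=gpt4free.Provider.UseLess, prompt='hello world', parentMessageId='')
print(req['text'])
```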

View File

@ -6,8 +6,11 @@ from gpt4free import forefront
token = forefront.Account.create(logging=False)
print(token)

# get a response
-for response in forefront.StreamingCompletion.create(token=token,
-                                                     prompt='hello world', model='gpt-4'):
-    print(response.completion.choices[0].text, end='')
+for response in forefront.StreamingCompletion.create(
+    token=token,
+    prompt='hello world',
+    model='gpt-4'
+):
+    print(response.choices[0].text, end='')
print("")
```

View File

@ -1,14 +1,13 @@
from json import loads
+from xtempmail import Email
from re import findall
-from typing import Optional, Generator
+from faker import Faker
from time import time, sleep
+from typing import Generator, Optional
from uuid import uuid4
from fake_useragent import UserAgent
from requests import post
-from pymailtm import MailTm, Message
from tls_client import Session
from .typing import ForeFrontResponse
@ -16,11 +15,13 @@ class Account:
    @staticmethod
    def create(proxy: Optional[str] = None, logging: bool = False):
        proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy} if proxy else False

        faker = Faker()
        name = (faker.name().replace(' ', '_')).lower()

        start = time()

-        mail_client = MailTm().get_account()
-        mail_address = mail_client.address
+        mail_client = Email(name=name)
+        mail_address = mail_client.email

        client = Session(client_identifier='chrome110')
        client.proxies = proxies
@ -54,21 +55,17 @@ class Account:
        if 'sign_up_attempt' not in response.text:
            return 'Failed to create account!'

-        verification_url = None
-        while True:
-            sleep(1)
-            new_message: Message = mail_client.wait_for_message()
-            if logging:
-                print(new_message.data['id'])
-            verification_url = findall(r'https:\/\/clerk\.forefront\.ai\/v1\/verify\?token=\w.+', new_message.text)[0]
-            if verification_url:
-                break
-
-        if verification_url is None or not verification_url:
-            raise RuntimeError('Error while obtaining verfication URL!')
+        new_message = mail_client.get_new_message(5)
+        for msg in new_message:
+            verification_url = findall(r'https:\/\/clerk\.forefront\.ai\/v1\/verify\?token=\w.+', msg.text)[0]
+            if verification_url:
+                break

        if logging:
            print(verification_url)

        response = client.get(verification_url)
        response = client.get('https://clerk.forefront.ai/v1/client?_clerk_js_version=4.38.4')

View File

@ -1,4 +1,5 @@
from typing import Any, List
from pydantic import BaseModel
@ -22,4 +23,4 @@ class ForeFrontResponse(BaseModel):
    model: str
    choices: List[Choice]
    usage: Usage
    text: str

View File

@ -65,4 +65,13 @@ poe.chat('who won the football world cup most?')
# new bot creation
poe.create_bot('new_bot_name', prompt='You are new test bot', base_model='gpt-3.5-turbo')
# delete account
poe.delete_account()
```
### Deleting the Poe Account
```python
from gpt4free import quora
quora.Account.delete(token='')
```

View File

@ -104,8 +104,8 @@ class Model:
    def create(
        token: str,
        model: str = 'gpt-3.5-turbo',  # claude-instant
-        system_prompt: str = 'You are ChatGPT a large language model developed by Openai. Answer as consisely as possible',
-        description: str = 'gpt-3.5 language model from openai, skidded by poe.com',
+        system_prompt: str = 'You are ChatGPT a large language model. Answer as concisely as possible',
+        description: str = 'gpt-3.5 language model',
        handle: str = None,
    ) -> ModelResponse:
        if not handle:
@ -285,6 +285,11 @@ class Account:
        cookies = open(Path(__file__).resolve().parent / 'cookies.txt', 'r').read().splitlines()
        return choice(cookies)

    @staticmethod
    def delete(token: str, proxy: Optional[str] = None):
        client = PoeClient(token, proxy=proxy)
        client.delete_account()
class StreamingCompletion:
    @staticmethod
@ -293,11 +298,11 @@ class StreamingCompletion:
        custom_model: bool = None,
        prompt: str = 'hello world',
        token: str = '',
-        proxy: Optional[str] = None
+        proxy: Optional[str] = None,
    ) -> Generator[PoeResponse, None, None]:
        _model = MODELS[model] if not custom_model else custom_model
-        proxies = { 'http': 'http://' + proxy, 'https': 'http://' + proxy } if proxy else False
+        proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy} if proxy else False
        client = PoeClient(token)
        client.proxy = proxies
@ -333,7 +338,7 @@ class Completion:
        custom_model: str = None,
        prompt: str = 'hello world',
        token: str = '',
-        proxy: Optional[str] = None
+        proxy: Optional[str] = None,
    ) -> PoeResponse:
        _model = MODELS[model] if not custom_model else custom_model
@ -454,14 +459,7 @@ class Poe:
            response = chunk['text']
        return response

-    def create_bot(
-        self,
-        name: str,
-        /,
-        prompt: str = '',
-        base_model: str = 'ChatGPT',
-        description: str = '',
-    ) -> None:
+    def create_bot(self, name: str, /, prompt: str = '', base_model: str = 'ChatGPT', description: str = '') -> None:
        if base_model not in MODELS:
            raise RuntimeError('Sorry, the base_model you provided does not exist. Please check and try again.')
@ -475,3 +473,6 @@ class Poe:
    def list_bots(self) -> list:
        return list(self.client.bot_names.values())

    def delete_account(self) -> None:
        self.client.delete_account()

View File

@ -548,5 +548,11 @@ class Client:
        self.get_bots()
        return data

    def delete_account(self) -> None:
        response = self.send_query('SettingsDeleteAccountButton_deleteAccountMutation_Mutation', {})
        data = response['data']['deleteAccount']
        if 'viewer' not in data:
            raise RuntimeError('Error occurred while deleting the account, please try again!')


load_queries()

View File

@ -1,7 +1,10 @@
-from requests import Session
-from time import sleep
from json import loads
from re import findall
+from time import sleep
+from requests import Session
class Mail:
    def __init__(self) -> None:
        self.client = Session()
@ -9,29 +12,34 @@ class Mail:
        self.cookies = {'acceptcookie': 'true'}
        self.cookies["ci_session"] = self.client.cookies.get_dict()["ci_session"]
        self.email = None

    def get_mail(self):
-        respone=self.client.post("https://etempmail.com/getEmailAddress")
-        #cookies
+        respone = self.client.post("https://etempmail.com/getEmailAddress")
+        # cookies
        self.cookies["lisansimo"] = eval(respone.text)["recover_key"]
        self.email = eval(respone.text)["address"]
        return self.email

    def get_message(self):
        print("Waiting for message...")
        while True:
            sleep(5)
-            respone=self.client.post("https://etempmail.com/getInbox")
-            mail_token=loads(respone.text)
+            respone = self.client.post("https://etempmail.com/getInbox")
+            mail_token = loads(respone.text)
            print(self.client.cookies.get_dict())
            if len(mail_token) == 1:
                break

-        params = {'id': '1',}
-        self.mail_context = self.client.post("https://etempmail.com/getInbox",params=params)
+        params = {
+            'id': '1',
+        }
+        self.mail_context = self.client.post("https://etempmail.com/getInbox", params=params)
        self.mail_context = eval(self.mail_context.text)[0]["body"]
        return self.mail_context

-    #,cookies=self.cookies
+    # ,cookies=self.cookies
    def get_verification_code(self):
        message = self.mail_context
        code = findall(r';">(\d{6,7})</div>', message)[0]
        print(f"Verification code: {code}")
        return code

View File

@ -0,0 +1 @@
mutation SettingsDeleteAccountButton_deleteAccountMutation_Mutation{ deleteAccount { viewer { uid id } }}
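For context, this mutation is what the new `Client.delete_account` above sends through `send_query`; end users reach it via the `quora.Account.delete` wrapper added in this commit. A hedged usage sketch (the token value is a placeholder):

```python
from gpt4free import quora

# Permanently deletes the Poe account behind the given token.
quora.Account.delete(token='<your_poe_token>')  # placeholder token
```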

gpt4free/test.py Normal file

@ -0,0 +1,4 @@
import forefront
token = forefront.Account.create()
response = forefront.Completion.create(token=token, prompt='Hello!')
print(response)

View File

@ -5,7 +5,10 @@
from gpt4free import theb

# simple streaming completion
-for token in theb.Completion.create('hello world'):
-    print(token, end='', flush=True)
-print("")
+while True:
+    x = input()
+    for token in theb.Completion.create(x):
+        print(token, end='', flush=True)
+    print("")
```

View File

@ -17,9 +17,10 @@ class Completion:
    timer = None
    message_queue = Queue()
    stream_completed = False
    last_msg_id = None

    @staticmethod
-    def request(prompt: str, proxy: Optional[str]=None):
+    def request(prompt: str, proxy: Optional[str] = None):
        headers = {
            'authority': 'chatbot.theb.ai',
            'content-type': 'application/json',
@ -28,26 +29,35 @@ class Completion:
        }

        proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy} if proxy else None

        options = {}
        if Completion.last_msg_id:
            options['parentMessageId'] = Completion.last_msg_id

        requests.post(
            'https://chatbot.theb.ai/api/chat-process',
            headers=headers,
            proxies=proxies,
            content_callback=Completion.handle_stream_response,
-            json={'prompt': prompt, 'options': {}},
+            json={'prompt': prompt, 'options': options},
+            timeout=100000
        )

        Completion.stream_completed = True

    @staticmethod
-    def create(prompt: str, proxy: Optional[str]=None) -> Generator[str, None, None]:
+    def create(prompt: str, proxy: Optional[str] = None) -> Generator[str, None, None]:
+        Completion.stream_completed = False
        Thread(target=Completion.request, args=[prompt, proxy]).start()

        while not Completion.stream_completed or not Completion.message_queue.empty():
            try:
                message = Completion.message_queue.get(timeout=0.01)
                for message in findall(Completion.regex, message):
-                    yield loads(Completion.part1 + message + Completion.part2)['delta']
+                    message_json = loads(Completion.part1 + message + Completion.part2)
+                    Completion.last_msg_id = message_json['id']
+                    yield message_json['delta']
            except Empty:
                pass
@ -55,3 +65,12 @@ class Completion:
    @staticmethod
    def handle_stream_response(response):
-        Completion.message_queue.put(response.decode())
+        Completion.message_queue.put(response.decode(errors='replace'))

    @staticmethod
    def get_response(prompt: str, proxy: Optional[str] = None) -> str:
        response_list = []
        for message in Completion.create(prompt, proxy):
            response_list.append(message)
        return ''.join(response_list)
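A minimal usage sketch of the two entry points above: streaming `create`, and the new `get_response` helper that joins the streamed deltas into one string. Illustrative only:

```python
from gpt4free import theb

# Stream tokens as they arrive.
for token in theb.Completion.create('hello world'):
    print(token, end='', flush=True)

# Or block until the full reply has been assembled.
print(theb.Completion.get_response('hello world'))
```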

View File

@ -1,6 +1,7 @@
-import requests
import json
+import requests
class Completion:
    headers = {
@ -24,7 +25,7 @@ class Completion:
        model: str = "gpt-3.5-turbo",
    ):
        print(parentMessageId, prompt)

        json_data = {
            "openaiKey": "",
            "prompt": prompt,
@ -42,14 +43,14 @@ class Completion:
        url = "https://ai.usesless.com/api/chat-process"
        request = requests.post(url, headers=Completion.headers, json=json_data)
        content = request.content

        response = Completion.__response_to_json(content)
        return response
    @classmethod
    def __response_to_json(cls, text) -> dict:
        text = str(text.decode("utf-8"))

        split_text = text.rsplit("\n", 1)[1]
        to_json = json.loads(split_text)
        return to_json
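For reference, a small sketch of calling this module directly; `parentMessageId` threads a conversation by passing back the `id` of the previous reply (call shape taken from the tests elsewhere in this diff):

```python
from gpt4free import usesless

# First turn: an empty parent id starts a new conversation.
req = usesless.Completion.create(prompt='hello', parentMessageId='')
print(req['text'])

# Follow-up turn: thread the conversation with the returned id.
follow_up = usesless.Completion.create(prompt='what did I just say?', parentMessageId=req['id'])
print(follow_up['text'])
```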

View File

@ -30,12 +30,12 @@ class Completion:
        include_links: bool = False,
        detailed: bool = False,
        debug: bool = False,
-        proxy: Optional[str] = None
+        proxy: Optional[str] = None,
    ) -> PoeResponse:
        if chat is None:
            chat = []

-        proxies = { 'http': 'http://' + proxy, 'https': 'http://' + proxy } if proxy else {}
+        proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy} if proxy else {}

        client = Session(client_identifier='chrome_108')
        client.headers = Completion.__get_headers()

View File

@ -2,6 +2,8 @@
This code provides a Graphical User Interface (GUI) for gpt4free. Users can ask questions and get answers from GPT-4 APIs, utilizing multiple API implementations. The project contains two different Streamlit applications: `streamlit_app.py` and `streamlit_chat_app.py`.

In addition, a new GUI script implemented with PyWebIO has been added; it can be found in the pywebio-gui folder. If you run into errors with the Streamlit version, you can try the PyWebIO version instead.

Installation
------------
@ -69,4 +71,4 @@ There is a bug in `streamlit_chat_app.py` right now that I haven't pinpointed ye
License
-------
This project is licensed under the MIT License.

gui/pywebio-gui/README.md Normal file

@ -0,0 +1,24 @@
# GUI with PyWebIO
Simple, fast, and with fewer errors
It only requires:
```bash
pip install gpt4free
pip install pywebio
```
Clicking on `pywebio-usesless.py` will run it.
PS: Currently, only 'usesless' is implemented, and the GUI is expected to be updated infrequently, with a focus on stability.
↓ The same introduction follows in zh-Hans-CN (translated to English):
# A minimal GUI built with PyWebIO
Simple, fast, and with few errors
It only requires:
```bash
pip install gpt4free
pip install pywebio
```
Double-click pywebio-usesless.py to run it.
P.S.: Currently only 'usesless' is implemented; this GUI will likely be updated infrequently, with stability as the goal.

View File

@ -0,0 +1,59 @@
from gpt4free import usesless
import time
from pywebio import start_server, config
from pywebio.input import *
from pywebio.output import *
from pywebio.session import local

message_id = ""


def status():
    try:
        req = usesless.Completion.create(prompt="hello", parentMessageId=message_id)
        print(f"Answer: {req['text']}")
        put_success(f"Answer: {req['text']}", scope="body")
    except:
        put_error("Program Error", scope="body")


def ask(prompt):
    req = usesless.Completion.create(prompt=prompt, parentMessageId=local.message_id)
    rp = req['text']
    local.message_id = req["id"]
    print("AI\n" + rp)
    local.conversation.extend([
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": rp}
    ])
    print(local.conversation)
    return rp


def msg():
    while True:
        text = input_group("You:", [textarea('You', name='text', rows=3, placeholder='Please enter your question')])
        if not bool(text):
            break
        if not bool(text["text"]):
            continue
        time.sleep(0.5)
        put_code("You" + text["text"], scope="body")
        print("Question" + text["text"])
        with use_scope('foot'):
            put_loading(color="info")
            rp = ask(text["text"])
        clear(scope="foot")
        time.sleep(0.5)
        put_markdown("Bot:\n" + rp, scope="body")
        time.sleep(0.7)


@config(title="AIchat", theme="dark")
def main():
    put_scope("heads")
    with use_scope('heads'):
        put_html("<h1><center>AI Chat</center></h1>")
    put_scope("body")
    put_scope("foot")
    status()
    local.conversation = []
    local.message_id = ""
    msg()


print("Click link to chat page")
start_server(main, port=8099, allowed_origins="*", auto_open_webbrowser=True, debug=True)
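A note on the session-state pattern above: `pywebio.session.local` is session-scoped, so each browser tab gets its own `conversation` and `message_id`. A minimal illustration of that behavior, independent of gpt4free:

```python
from pywebio import start_server
from pywebio.output import put_text
from pywebio.session import local

def app():
    # Each connected session sees its own counter, not a shared one.
    local.counter = getattr(local, 'counter', 0) + 1
    put_text(f'counter = {local.counter}')

# start_server(app, port=8099)  # uncomment to serve; port mirrors the script above
```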

View File

@ -1,4 +1,5 @@
import atexit
import Levenshtein
import os
import sys
@ -37,6 +38,17 @@ def save_conversations(conversations, current_conversation):
    os.replace(temp_conversations_file, conversations_file)


def delete_conversation(conversations, current_conversation):
    for idx, conversation in enumerate(conversations):
        conversations[idx] = current_conversation
        break

    conversations.remove(current_conversation)

    temp_conversations_file = "temp_" + conversations_file
    with open(temp_conversations_file, "wb") as f:
        pickle.dump(conversations, f)
    os.replace(temp_conversations_file, conversations_file)
def exit_handler():
    print("Exiting, saving data...")
@ -64,26 +76,29 @@ if 'input_field_key' not in st.session_state:
if 'query_method' not in st.session_state:
    st.session_state['query_method'] = query

if 'search_query' not in st.session_state:
    st.session_state['search_query'] = ''

# Initialize new conversation
if 'current_conversation' not in st.session_state or st.session_state['current_conversation'] is None:
    st.session_state['current_conversation'] = {'user_inputs': [], 'generated_responses': []}

input_placeholder = st.empty()
user_input = input_placeholder.text_input(
-    'You:', value=st.session_state['input_text'], key=f'input_text_{st.session_state["input_field_key"]}'
+    'You:', value=st.session_state['input_text'], key=f'input_text_-1'  # {st.session_state["input_field_key"]}
)
submit_button = st.button("Submit")

if (user_input and user_input != st.session_state['input_text']) or submit_button:
    output = query(user_input, st.session_state['query_method'])

    escaped_output = output.encode('utf-8').decode('unicode-escape')

-    st.session_state.current_conversation['user_inputs'].append(user_input)
+    st.session_state['current_conversation']['user_inputs'].append(user_input)
    st.session_state.current_conversation['generated_responses'].append(escaped_output)
    save_conversations(st.session_state.conversations, st.session_state.current_conversation)
    st.session_state['input_text'] = ''
    st.session_state['input_field_key'] += 1  # Increment key value for new widget
    user_input = input_placeholder.text_input(
        'You:', value=st.session_state['input_text'], key=f'input_text_{st.session_state["input_field_key"]}'
    )  # Clear the input field
@ -92,27 +107,50 @@ if (user_input and user_input != st.session_state['input_text']) or submit_butto
if st.sidebar.button("New Conversation"):
    st.session_state['selected_conversation'] = None
    st.session_state['current_conversation'] = {'user_inputs': [], 'generated_responses': []}
-    st.session_state['input_field_key'] += 1
+    st.session_state['input_field_key'] += 1  # Increment key value for new widget

st.session_state['query_method'] = st.sidebar.selectbox("Select API:", options=avail_query_methods, index=0)
# Proxy
st.session_state['proxy'] = st.sidebar.text_input("Proxy: ")

# Searchbar
search_query = st.sidebar.text_input("Search Conversations:", value=st.session_state.get('search_query', ''), key='search')

if search_query:
    filtered_conversations = []
    indices = []
    for idx, conversation in enumerate(st.session_state.conversations):
        if search_query in conversation['user_inputs'][0]:
            filtered_conversations.append(conversation)
            indices.append(idx)

    filtered_conversations = list(zip(indices, filtered_conversations))
    conversations = sorted(filtered_conversations, key=lambda x: Levenshtein.distance(search_query, x[1]['user_inputs'][0]))
    sidebar_header = f"Search Results ({len(conversations)})"
else:
    conversations = st.session_state.conversations
    sidebar_header = "Conversation History"
# Sidebar
-st.sidebar.header("Conversation History")
+st.sidebar.header(sidebar_header)
+sidebar_col1, sidebar_col2 = st.sidebar.columns([5, 1])

-for idx, conversation in enumerate(st.session_state.conversations):
-    if st.sidebar.button(f"Conversation {idx + 1}: {conversation['user_inputs'][0]}", key=f"sidebar_btn_{idx}"):
+for idx, conversation in enumerate(conversations):
+    if sidebar_col1.button(f"Conversation {idx + 1}: {conversation['user_inputs'][0]}", key=f"sidebar_btn_{idx}"):
        st.session_state['selected_conversation'] = idx
-        st.session_state['current_conversation'] = st.session_state.conversations[idx]
+        st.session_state['current_conversation'] = conversation
+    if sidebar_col2.button('🗑️', key=f"sidebar_btn_delete_{idx}"):
+        if st.session_state['selected_conversation'] == idx:
+            st.session_state['selected_conversation'] = None
+            st.session_state['current_conversation'] = {'user_inputs': [], 'generated_responses': []}
+        delete_conversation(conversations, conversation)
+        st.experimental_rerun()

if st.session_state['selected_conversation'] is not None:
-    conversation_to_display = st.session_state.conversations[st.session_state['selected_conversation']]
+    conversation_to_display = conversations[st.session_state['selected_conversation']]
else:
    conversation_to_display = st.session_state.current_conversation

if conversation_to_display['generated_responses']:
    for i in range(len(conversation_to_display['generated_responses']) - 1, -1, -1):
        message(conversation_to_display["generated_responses"][i], key=f"display_generated_{i}")
        message(conversation_to_display['user_inputs'][i], is_user=True, key=f"display_user_{i}")
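For context, a tiny sketch of the ranking idea used in the sidebar search above: candidates are first filtered by substring match, then ordered by edit distance to the query via the `Levenshtein` package (added to requirements in this commit). Illustrative only:

```python
import Levenshtein

conversations = ['hello world', 'hello there', 'how do I dock']
query = 'hello'

matches = [c for c in conversations if query in c]
ranked = sorted(matches, key=lambda c: Levenshtein.distance(query, c))
print(ranked)  # closest matches to the query come first
```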

View File

@ -1,5 +1,5 @@
[tool.poetry]
-name = "openai-rev"
+name = "gpt4free"
version = "0.1.0"
description = ""
authors = []

View File

@ -12,3 +12,4 @@ twocaptcha
https://github.com/AI-Yash/st-chat/archive/refs/pull/24/head.zip https://github.com/AI-Yash/st-chat/archive/refs/pull/24/head.zip
pydantic
pymailtm
Levenshtein

View File

@ -1,6 +1,6 @@
from time import sleep
from gpt4free import quora

token = quora.Account.create(proxy=None, logging=True)
print('token', token)
@ -9,3 +9,5 @@ sleep(2)
for response in quora.StreamingCompletion.create(model='ChatGPT', prompt='hello world', token=token):
    print(response.text, flush=True)
quora.Account.delete(token)

View File

@ -2,4 +2,4 @@ from gpt4free import theb
for token in theb.Completion.create('hello world'):
    print(token, end='', flush=True)
print('asdsos')

View File

@ -11,7 +11,6 @@ while True:
    print(f"Answer: {req['text']}")
    message_id = req["id"]

import gpt4free

message_id = ""
@ -20,8 +19,7 @@ while True:
    if prompt == "!stop":
        break

-    req = gpt4free.Completion.create(provider = gpt4free.Provider.UseLess,
-                                     prompt=prompt, parentMessageId=message_id)
+    req = gpt4free.Completion.create(provider=gpt4free.Provider.UseLess, prompt=prompt, parentMessageId=message_id)

    print(f"Answer: {req['text']}")
    message_id = req["id"]

View File

@ -1,8 +0,0 @@
# asyncio.run(gptbz.test())
import requests
image = '/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAAoALQDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3+iiigDkZP+EhS4W0k1S+VntQPtEWmRsgkNwBu4ZsHYQNvTbls5BA6DS7uW6S6E0VwjQ3UsQM0Pl71DZUrydy4IAbvg8CsTx3DbHQLi4uVs9scWzdd+dsAaWI4PlfNjKjpzkDtmpoNSgbWYpLR7Ty5bq5trw/vd3nIowBxtzti53Y6fKT3z2djra56fNbv07HR1z13ZRX/jDyby0+02f9nfdmsEeHd5o/5anndwPkxjjPWuhrh9Mvra88RLqccmnOHtvLEqfaN+1r1lUcjbg4PbO4H+Cqk+hnRi9ZI29E0uC2N1eG3Am+13DITZRwuqlsYG0ZYEKCGJywwT2AtWTapcW1vcPPCiyrE5ils2SRQV+dW/ecMT/3zgj5utZtpdwL4e190e02W9xeb9vm7FOWY78/NnnJ28f3ahkgtptD8JRlbMos9s8QPnbcrEzDy/4sgDjzOMdeaSZbi23f8vmbfn6hBFuktmuWWPJWCNELNuxgbpcDj1Pbr2qJ9bMVyIZNK1JVLyr5qwB1AjUNu+Uk4bovGSRjAqCTwdoElv5B02MReT5G1HZfk8zzMcEfx81YlsJ7NJX0tolZzNK8dyZJA8jDIwd3yjcBkAHjOAM09SP3b/q36mkjiSNXAYBgCNykH8QeRWdfaw1ldSW66XqN0UgE++3iBRsvt2BiQN/8WPQZqharF9oN5osVml1NLbLqUbmUFY/L4CrgYYKy4yoGM5xjhlnc2OoeMrfULV7aQXGkExyYlErJ5oPQ/Jtye/zZ9qLgqaTba0NyzvPtizH7NcQeVM8OJ49u/acbl9VPY96s1geFjF/xOhF9m41Wfd9n8z73BO7f/Fzzt+X0q7c6mWvRY2DwSXcUsQuUff8Auo2ySflB+YqrYyQOmTyARPQmVP32kLqF1cbmsrJZkuni3rcfZ98UfzKvJJUE4JOM5wpODwDl3Meuf2rHbRatcBJXuj5iachjhUovlBmZudrNkEZ3HIOMGlhREhbS9He2a8MO6a4fzmGDMQ3zAk5yZ8DzMgj0yRuWdha2CzLawrEJpnnkx/G7HLMfc0bl3VNf5pff/kVLS8uxFHHJZ3s5Xyo2mZI4y2VBZyN44B6gDrwAcVZ069Go2EV2Le5t/MBPlXMZjkXnGGU9OlULSdbfTt8LWy5mt0JAkK4YRLjnnODx26Z71TXULEWn/CUWDwmxeDbM4WbkCXJbaB23SnlM5PUDNF7CcObZf12OlpCcDoTz2oVlcZVgRkjIPccGo7hgsSk7ceYg+bP94elUYpamda64915GdH1SESxiTM0KjZmTZtbDHB53Y/u89eK1qw4xD9l0mIC3wLdCg/eYwHh+73x0+9znb71uUkXUSWyCiiimZhRRRQBieL5Hj8LXjxySxuNmGivFtWHzr0lbhfx69O9MvHdZpbKKWYnUluNji+VGikVFULHnkdGbjO05JHPEviyF5/DF7HGkjuQpCx2i3THDA8RNw3Tv069qR0kk0i4uFilF3bSXTwE2a+YGzIAUQnnIPByN46kbjUPc6YNKC9X+SLtjeB9Mt5ZyqzbI1lQzK5R2C/KWGAT8w6dcjHUVzemSyxeCba9e5uWfzIgxl1aOTgXPebGw5BwR3ACdalna8+0R3Kx3nk6jc2MvkjTI2MH97zDnI+4uWOSny4z2Lqxmt/hytvHHIZhFHJsj0yJnyXDEfZ87M9cjPB56ik2y4xSsu7XcnjMsejeJszXBZZrgozaihZAYwQFfGIQM8Bvu9ehrTKuJtOg3y5gKs/8ApAy2Y5B846uMj8Tz/CaqzROH1C3EchW6uHGRZIVx9nHXs4yPvN1PydBV2Lc+u3eUkCJBDtZoAFJzJna/VjgjI/h/4EaaM5PS/wDXRF+iiirOcy7RZE8RanukmKPFA6q9yHVfvg7Y+qfd5J4Y9OhrJ8Nm4FxYJNNdORaXCsJtTS4yVnAyQoG5sfxfw/dPJrUslmGt6rcymQxM0MMStahMALk4cfM65c9cBSGA7mqmi2k9t/ZZuDJJKbSdpHNjHEdzyRvhtv3G5PyjIbBJOVqDpurP5d+zGWtzeLdahZQLNK895PiV7+N/IURKQQMEqNzKAm1tucnggG4Fkhs4INNuJL145oEuHa7BcIAuWOQRkrhiAFzkkEE8rNDJPczWtnG1rG7yfapvsqESsY1AIJPP3hztbPllTjHKvpv2CWKbTUSHdJCk8cVtH+8jUFOSNpGAynOTgJgL1BNRNxf9fmWNGa3fR7U2ty9zDswJZJxMzHvlwSCc5BwccVerBZ3tLf8Atqyguvsxt/n02OyUSsxk3FsHa24bnyM4ycgE9d1WDDIz1I5BHQ471SM6i1uY8cjjSIWLyFjLbDJu1J5Mefn6HryP4snH3hRdmTS5f7T82aS2WBY5Y5LpVjX94Pn+YYzhmydw4UDB4wio/wDY8K+XLuE1qcfY1B4MWfk6DHOT/Bg4+6K1zGkkHlSoroy7WVlGCCOQRSsU5JGUrPo96EZ5p7O7mmmlubm7XFqQoYIobB2fK3Aztwe3TQvX2QKQSMyxDiQJ1dR1P8u/TvWb5bWty2m3KTXlvqMs7Ky2ieVbqVBKSEcHJL4JB3ZwfeLfcQRnTpY7mT7PLZiOdbJSkillzgA44KMScLsBBAOBkuNxu0/6epcQv9s0+LfJzauxBuVJJDRckdXPJ+YcDJH8QrTrN2sNcsxsk2LZyjd9nXaCWj439VPH3R
wcZ/hFaVNGc+gUUUUyAooooAxfFVxZxeG9RS7ltVQ25ytwzbCCQBkJ82MkD5eeah0G7tYLi/sZJrKO4fUbjy4oncM/SQ5D9Ww4J25Xniiis2/eO2FNOhf1/CxmamsEGp2+nzx2CwxajYyWKN9o3KdpX+Ebd2I2287ePm973i3UdMg0W+0y4mtUkNqJPKuBJ5ewuEBYx8gbiBxz+FFFS3ZM1p01OdNN/wBaFfVtU0qHxHplx9qsSkEl2853SvIjxwjdtCZXIX7wbt05q7YJdS6nc6vYxWEtpfi2KS+bKsjQhCSWBBG4bhtAAyCcmiinF3k0RWgqdKMl1VvxZfM2s+VkWFh5nl5x9tfG/djGfK6bec468Y/irN1CeUCeHXbrTItPc3O6GN5PNltxHx0I+YKXLYB42455ooqpaIwo2lO1rE1rZjUYrcCO2Giw/Zp7BYzKrkKu4bh8oAB2EA56HIz0u3uxL+1kbygQpQFt2fmki4GOOuOvfHbNFFPpcTu6nKFpsTU75V8oNJKXIXduOI4hk54zjHTjGO+a0KKKaM59PQxLqNNBMuoQpDFYJEfPQLISp8zcWAXIxh5CcLnOMnHQaFNKkkvtOFoli0k9xqP32Zn24LIFyM7kwRg98c5yUVL3No6xTfV2/IrxyW0vh21kQ2phaexKn97s5aErj+LPTbnj7u7+KujoopxZNZW+9/oQXdpBfWk1rcxiSGVGjdSSMhgQeRyOCRxWOtvbXU0Ol6mIHksJbea0IMoJYISGy3U5ST+JuB83uUUMVJuz121JnaL/AITOBSYPOGnyEA7/ADdvmJnH8G3IHX5s4xxmtmiihdRVFZR9AoooqjI//9k='
response = requests.get('https://ocr.holey.cc/ncku?base64_str=%s' % image) # .split('base64,')[1])
print(response.content)

View File

@ -1,41 +0,0 @@
import requests
class Completion:
    def create(prompt: str,
               model: str = 'openai:gpt-3.5-turbo',
               temperature: float = 0.7,
               max_tokens: int = 200,
               top_p: float = 1,
               top_k: int = 1,
               frequency_penalty: float = 1,
               presence_penalty: float = 1,
               stopSequences: list = []):

        token = requests.get('https://play.vercel.ai/openai.jpeg', headers={
            'authority': 'play.vercel.ai',
            'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
            'referer': 'https://play.vercel.ai/',
            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'}).text.replace('=', '')
        print(token)

        headers = {
            'authority': 'play.vercel.ai',
            'custom-encoding': token,
            'origin': 'https://play.vercel.ai',
            'referer': 'https://play.vercel.ai/',
            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
        }

        for chunk in requests.post('https://play.vercel.ai/api/generate', headers=headers, stream=True, json={
                'prompt': prompt,
                'model': model,
                'temperature': temperature,
                'maxTokens': max_tokens,
                'topK': top_p,
                'topP': top_k,
                'frequencyPenalty': frequency_penalty,
                'presencePenalty': presence_penalty,
                'stopSequences': stopSequences}).iter_lines():
            yield (chunk)

View File

@ -1,33 +0,0 @@
(async () => {
    let response = await fetch("https://play.vercel.ai/openai.jpeg", {
        "headers": {
            "accept": "*/*",
            "accept-language": "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3",
            "sec-ch-ua": "\"Chromium\";v=\"112\", \"Google Chrome\";v=\"112\", \"Not:A-Brand\";v=\"99\"",
            "sec-ch-ua-mobile": "?0",
            "sec-ch-ua-platform": "\"macOS\"",
            "sec-fetch-dest": "empty",
            "sec-fetch-mode": "cors",
            "sec-fetch-site": "same-origin"
        },
        "referrer": "https://play.vercel.ai/",
        "referrerPolicy": "strict-origin-when-cross-origin",
        "body": null,
        "method": "GET",
        "mode": "cors",
        "credentials": "omit"
    });

    let data = JSON.parse(atob(await response.text()))
    let ret = eval("(".concat(data.c, ")(data.a)"));

    botPreventionToken = btoa(JSON.stringify({
        r: ret,
        t: data.t
    }))

    console.log(botPreventionToken);
})()

View File

@ -1,67 +0,0 @@
import requests
from base64 import b64decode, b64encode
from json import loads
from json import dumps

headers = {
    'Accept': '*/*',
    'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
    'Connection': 'keep-alive',
    'Referer': 'https://play.vercel.ai/',
    'Sec-Fetch-Dest': 'empty',
    'Sec-Fetch-Mode': 'cors',
    'Sec-Fetch-Site': 'same-origin',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36',
    'sec-ch-ua': '"Chromium";v="110", "Google Chrome";v="110", "Not:A-Brand";v="99"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"macOS"',
}

response = requests.get('https://play.vercel.ai/openai.jpeg', headers=headers)

token_data = loads(b64decode(response.text))
print(token_data)

raw_token = {
    'a': token_data['a'] * .1 * .2,
    't': token_data['t']
}
print(raw_token)

new_token = b64encode(dumps(raw_token, separators=(',', ':')).encode()).decode()
print(new_token)

import requests

headers = {
    'authority': 'play.vercel.ai',
    'accept': '*/*',
    'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
    'content-type': 'application/json',
    'custom-encoding': new_token,
    'origin': 'https://play.vercel.ai',
    'referer': 'https://play.vercel.ai/',
    'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"macOS"',
    'sec-fetch-dest': 'empty',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'same-origin',
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
}

json_data = {
    'prompt': 'hello\n',
    'model': 'openai:gpt-3.5-turbo',
    'temperature': 0.7,
    'maxTokens': 200,
    'topK': 1,
    'topP': 1,
    'frequencyPenalty': 1,
    'presencePenalty': 1,
    'stopSequences': [],
}

response = requests.post('https://play.vercel.ai/api/generate', headers=headers, json=json_data)
print(response.text)

View File

@ -1,5 +0,0 @@
import vercelai
for token in vercelai.Completion.create('summarize the gnu gpl 1.0'):
    print(token, end='', flush=True)