diff --git a/README.md b/README.md
index a2ec683c..9fd505f2 100644
--- a/README.md
+++ b/README.md
@@ -1,22 +1,46 @@
-##### You may join our discord server for updates and support ; )
-
-- [Discord Link](https://discord.gg/gpt4free)
-
-image
-
+gpt4free logo
+
 Just API's from some language model sites.
+
+Join our discord.gg/gpt4free Discord community!
+
-## Legal Notice
-This repository uses third-party APIs and is _not_ associated with or endorsed by the API providers. This project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to improve their security.
+# Related gpt4free projects
-Please note the following:
+
+| 🎁 Projects | ⭐ Stars | 📚 Forks | 🛎 Issues | 📬 Pull requests |
+| ------------------- | ------- | -------- | --------- | ---------------- |
+| gpt4free | Stars | Forks | Issues | Pull Requests |
+| ChatGPT-Clone | Stars | Forks | Issues | Pull Requests |
+| ChatGpt Discord Bot | Stars | Forks | Issues | Pull Requests |
+
-
-1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners. This project is _not_ claiming any right over them.
-
-2. **Responsibility**: The author of this repository is _not_ responsible for any consequences arising from the use or misuse of this repository or the content provided by the third-party APIs and any damage or losses caused by users' actions.
-
-3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By using the information and code provided, users acknowledge that they are using the APIs and models at their own risk and agree to comply with any applicable laws and regulations.
 
 ## Table of Contents
 
 | Section | Description | Link | Status |
@@ -28,9 +52,6 @@ Please note the following:
 | **Docker** | Instructions on how to run gpt4free in a Docker container | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#docker-instructions) | - |
 | **ChatGPT clone** | A ChatGPT clone with new features and scalability | [![Link to Website](https://img.shields.io/badge/Link-Visit%20Site-blue)](https://chat.chatbot.sex/chat) | - |
 | **How to install** | Instructions on how to install gpt4free | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#install) | - |
-| **Legal Notice** | Legal notice or disclaimer | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#legal-notice) | - |
-| **Copyright** | Copyright information | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#copyright) | - |
-| **Star History** | Star History | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#star-history) | - |
 | **Usage Examples** | | | |
 | `theb` | Example usage for theb (gpt-3.5) | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](gpt4free/theb/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) |
 | `forefront` | Example usage for forefront (gpt-4) | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](gpt4free/forefront/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) |
 ||
@@ -39,8 +60,12 @@ Please note the following:
 | **Try it Out** | | | |
 | Google Colab Jupyter Notebook | Example usage for gpt4free | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DanielShemesh/gpt4free-colab/blob/main/gpt4free.ipynb) | - |
 | replit Example (feel free to fork this repl) | Example usage for gpt4free | [![](https://img.shields.io/badge/Open%20in-Replit-1A1E27?logo=replit)](https://replit.com/@gpt4free/gpt4free-webui) | - |
+| **Legal Notice** | Legal notice or disclaimer | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#legal-notice) | - |
+| **Copyright** | Copyright information | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#copyright) | - |
+| **Star History** | Star History | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#star-history) | - |
 
-## Todo
+
+## To do list
 
 - [x] Add a GUI for the repo
 - [ ] Make a general package named `gpt4free`, instead of different folders
@@ -82,6 +107,7 @@ install requirements with:
 pip3 install -r requirements.txt
 ```
 
+
 ## To start gpt4free GUI
 
 Move `streamlit_app.py` from `./gui` to the base folder
@@ -121,6 +147,18 @@ docker-compose up -d
 
 > This site was developed by me and includes **gpt-4/3.5**, **internet access** and **gpt-jailbreak's** like DAN
 > run locally here: https://github.com/xtekky/chatgpt-clone
 
+## Legal Notice
+
+This repository uses third-party APIs and is _not_ associated with or endorsed by the API providers. This project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to improve their security.
+
+Please note the following:
+
+1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners. This project is _not_ claiming any right over them.
+
+2. **Responsibility**: The author of this repository is _not_ responsible for any consequences arising from the use or misuse of this repository or the content provided by the third-party APIs and any damage or losses caused by users' actions.
+
+3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By using the information and code provided, users acknowledge that they are using the APIs and models at their own risk and agree to comply with any applicable laws and regulations.
+
 ## Copyright:
 
 This program is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -147,6 +185,9 @@ You should have received a copy of the GNU General Public License
 along with this program.  If not, see <https://www.gnu.org/licenses/>.
 ```
 
+
 ## Star History
 
-[![Star History Chart](https://api.star-history.com/svg?repos=xtekky/gpt4free&type=Date)](https://star-history.com/#xtekky/gpt4free)
+
+Star History Chart
+
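The usage-example rows in the table of contents point at per-provider READMEs; the unified entry point added in `gpt4free/__init__.py` further down in this diff can also be called directly. A minimal sketch, with an illustrative prompt, using the `UseLess` provider introduced by this PR:

```python
# Minimal sketch of the unified interface (see gpt4free/__init__.py below).
# The prompt is illustrative; per testing/useless_test.py, the UseLess provider
# returns a dict carrying 'text' and 'id'.
import gpt4free
from gpt4free import Provider

response = gpt4free.Completion.create(
    provider=Provider.UseLess,
    prompt='What is the capital of France?',
)
print(response['text'])
```
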
diff --git a/docker-compose.yml b/docker-compose.yml
new file mode 100644
index 00000000..e8e7119b
--- /dev/null
+++ b/docker-compose.yml
@@ -0,0 +1,12 @@
+version: '3.8'
+
+services:
+  gpt4:
+    build:
+      context: .
+      dockerfile: Dockerfile
+    image: gpt4free:latest
+    container_name: gpt4
+    ports:
+      - 8501:8501
+    restart: unless-stopped
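The compose file publishes port 8501, Streamlit's default. After `docker-compose up -d`, a quick sketch for checking that the GUI container answers locally — the host, port and root path are assumptions based on the mapping above:

```python
# Reachability check for the container started from the compose file above.
# Assumes it runs on the local machine with the default 8501:8501 mapping.
import requests

resp = requests.get('http://localhost:8501', timeout=5)
print(resp.status_code)  # 200 once the Streamlit GUI is up
```
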
- """ - proxies = {'http': f'http://{proxy}', 'https': f'http://{proxy}'} if proxy else None - - mail_client = pymailtm.MailTm().get_account() + mail_client = MailTm().get_account() mail_address = mail_client.address - session = requests.Session() - session.proxies = proxies - session.headers = { + client = Session(client_identifier='chrome110') + client.proxies = proxies + client.headers = { 'origin': 'https://accounts.forefront.ai', - 'user-agent': fake_useragent.UserAgent().random, + 'user-agent': UserAgent().random, } - response = session.post( + response = client.post( 'https://clerk.forefront.ai/v1/client/sign_ups?_clerk_js_version=4.38.4', data={'email_address': mail_address}, ) try: trace_token = response.json()['response']['id'] + if logging: + print(trace_token) except KeyError: - return None + return 'Failed to create account!' - response = session.post( + response = client.post( f'https://clerk.forefront.ai/v1/client/sign_ups/{trace_token}/prepare_verification?_clerk_js_version=4.38.4', data={ 'strategy': 'email_link', @@ -61,26 +49,38 @@ class Account: }, ) + if logging: + print(response.text) + if 'sign_up_attempt' not in response.text: - return None + return 'Failed to create account!' while True: - time.sleep(1) - new_message = mail_client.wait_for_message() + sleep(1) + new_message: Message = mail_client.wait_for_message() + if logging: + print(new_message.data['id']) + + verification_url = findall(r'https:\/\/clerk\.forefront\.ai\/v1\/verify\?token=\w.+', new_message.text)[0] - verification_url = re.findall(r'https:\/\/clerk\.forefront\.ai\/v1\/verify\?token=\w.+', new_message.text) if verification_url: break - response = session.get(verification_url[0]) + if logging: + print(verification_url) - response = session.get('https://clerk.forefront.ai/v1/client?_clerk_js_version=4.38.4') + response = client.get(verification_url) + + response = client.get('https://clerk.forefront.ai/v1/client?_clerk_js_version=4.38.4') token = response.json()['response']['sessions'][0]['last_active_token']['jwt'] with open('accounts.txt', 'a') as f: f.write(f'{mail_address}:{token}\n') + if logging: + print(time() - start) + return token @@ -100,7 +100,7 @@ class StreamingCompletion: if not chat_id: chat_id = str(uuid4()) - proxies = { 'http': f'http://{proxy}', 'https': f'http://{proxy}' } if proxy else None + proxies = { 'http': 'http://' + proxy, 'https': 'http://' + proxy } if proxy else None headers = { 'authority': 'chat-server.tenant-forefront-default.knative.chi.coreweave.com', @@ -191,6 +191,4 @@ class Completion: raise Exception('Unable to get the response, Please try again') return final_response - - - + \ No newline at end of file diff --git a/gpt4free/forefront/typing.py b/gpt4free/forefront/typing.py index 23e90903..a9025419 100644 --- a/gpt4free/forefront/typing.py +++ b/gpt4free/forefront/typing.py @@ -1,5 +1,4 @@ from typing import Any, List - from pydantic import BaseModel @@ -23,4 +22,4 @@ class ForeFrontResponse(BaseModel): model: str choices: List[Choice] usage: Usage - text: str + text: str \ No newline at end of file diff --git a/unfinished/usesless/README.md b/gpt4free/usesless/README.md similarity index 100% rename from unfinished/usesless/README.md rename to gpt4free/usesless/README.md diff --git a/unfinished/usesless/__init__.py b/gpt4free/usesless/__init__.py similarity index 96% rename from unfinished/usesless/__init__.py rename to gpt4free/usesless/__init__.py index 6f9a47ef..6029009d 100644 --- a/unfinished/usesless/__init__.py +++ b/gpt4free/usesless/__init__.py 
diff --git a/unfinished/usesless/README.md b/gpt4free/usesless/README.md
similarity index 100%
rename from unfinished/usesless/README.md
rename to gpt4free/usesless/README.md
diff --git a/unfinished/usesless/__init__.py b/gpt4free/usesless/__init__.py
similarity index 96%
rename from unfinished/usesless/__init__.py
rename to gpt4free/usesless/__init__.py
index 6f9a47ef..6029009d 100644
--- a/unfinished/usesless/__init__.py
+++ b/gpt4free/usesless/__init__.py
@@ -23,6 +23,8 @@ class Completion:
         temperature: float = 1,
         model: str = "gpt-3.5-turbo",
     ):
+        print(parentMessageId, prompt)
+
         json_data = {
             "openaiKey": "",
             "prompt": prompt,
@@ -40,12 +42,14 @@ class Completion:
         url = "https://ai.usesless.com/api/chat-process"
         request = requests.post(url, headers=Completion.headers, json=json_data)
         content = request.content
+
         response = Completion.__response_to_json(content)
         return response
 
     @classmethod
     def __response_to_json(cls, text) -> dict:
         text = str(text.decode("utf-8"))
+
         split_text = text.rsplit("\n", 1)[1]
         to_json = json.loads(split_text)
         return to_json
diff --git a/gui/streamlit_chat_app.py b/gui/streamlit_chat_app.py
index fc5c8d8e..6abc9caf 100644
--- a/gui/streamlit_chat_app.py
+++ b/gui/streamlit_chat_app.py
@@ -11,7 +11,6 @@ import pickle
 
 conversations_file = "conversations.pkl"
 
-
 def load_conversations():
     try:
         with open(conversations_file, "rb") as f:
diff --git a/test.py b/testing/theb_test.py
similarity index 100%
rename from test.py
rename to testing/theb_test.py
diff --git a/testing/useless_test.py b/testing/useless_test.py
new file mode 100644
index 00000000..9b613aac
--- /dev/null
+++ b/testing/useless_test.py
@@ -0,0 +1,27 @@
+from gpt4free import usesless
+
+message_id = ""
+while True:
+    prompt = input("Question: ")
+    if prompt == "!stop":
+        break
+
+    req = usesless.Completion.create(prompt=prompt, parentMessageId=message_id)
+
+    print(f"Answer: {req['text']}")
+    message_id = req["id"]
+
+
+import gpt4free
+
+message_id = ""
+while True:
+    prompt = input("Question: ")
+    if prompt == "!stop":
+        break
+
+    req = gpt4free.Completion.create(provider=gpt4free.Provider.UseLess,
+                                     prompt=prompt, parentMessageId=message_id)
+
+    print(f"Answer: {req['text']}")
+    message_id = req["id"]
\ No newline at end of file
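`__response_to_json` in the usesless module above keeps only the last line of the body returned by `ai.usesless.com`, which assumes the endpoint sends several newline-separated JSON objects with the finished message last. A small illustration of that parsing step using a fabricated two-line body — the field values are invented for the example:

```python
import json

# Fabricated response body mimicking the assumed newline-separated JSON stream.
content = b'{"id": "msg-1", "text": "partial"}\n{"id": "msg-1", "text": "full answer"}'

text = content.decode("utf-8")
last_line = text.rsplit("\n", 1)[1]   # same split used by Completion.__response_to_json
final = json.loads(last_line)
print(final["text"])  # -> "full answer"
```
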
'"Windows"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'cross-site', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36' - } + 'authority': 'chat-pr4yueoqha-ue.a.run.app', + 'accept': '*/*', + 'accept-language': 'en-US,en;q=0.9', + 'atoken': token, + 'content-type': 'application/json', + 'origin': 'https://www.chatpdf.com', + 'referer': 'https://www.chatpdf.com/', + 'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"', + 'sec-ch-ua-mobile': '?0', + 'sec-ch-ua-platform': '"Windows"', + 'sec-fetch-dest': 'empty', + 'sec-fetch-mode': 'cors', + 'sec-fetch-site': 'cross-site', + 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36' + } - response = requests.request("POST", url, headers=headers, data=payload).text + response = requests.request( + "POST", url, headers=headers, data=payload).text Completion.stream_completed = True return {'response': response}