t.me/xtekky 2023-04-27 19:16:07 +01:00
commit 10104774c1
38 changed files with 1054 additions and 950 deletions

View File

@@ -11,24 +11,6 @@ Have you ever come across some amazing projects that you couldn't use **just bec
By the way, thank you so much for [![Stars](https://img.shields.io/github/stars/xtekky/gpt4free?style=social)](https://github.com/xtekky/gpt4free/stargazers) and all the support!!
## Announcement
Dear Gpt4free Community,
I would like to thank you for your interest in and support of this project, which I only intended to be for entertainment and educational purposes; I had no idea it would end up being so popular.
I'm aware of the concerns about the project's legality and its impact on smaller sites hosting APIs. I take these concerns seriously and plan to address them.
Here's what I'm doing to fix these issues:
1. Removing APIs from smaller sites: To reduce the impact on smaller sites, I have removed their APIs from the repository. Please shoot me a dm if you are an owner of a site and want it removed.
2. Commitment to ethical use: I want to emphasize my commitment to promoting ethical use of language models. I don't support any illegal or unethical behavior, and I expect users to follow the same principles.
Thank you for your support and understanding. I appreciate your continued interest in gpt4free and am committed to addressing your concerns.
Sincerely,
**xtekky**
## Legal Notice <a name="legal-notice"></a>
This repository uses third-party APIs and AI models and is *not* associated with or endorsed by the API providers or the original developers of the models. This project is intended **for educational purposes only**.
@@ -54,14 +36,12 @@ Please note the following:
| **How to install** | Instructions on how to install gpt4free | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#install) | - |
| **Legal Notice** | Legal notice or disclaimer | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#legal-notice) | - |
| **Copyright** | Copyright information | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#copyright) | - |
| **Star History** | Star History | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#star-history) | - |
| **Usage Examples** | | | |
| `forefront` | Example usage for forefront (gpt-4) | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](./forefront/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| `quora (poe)` | Example usage for quora | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](./quora/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| `phind` | Example usage for phind | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](./phind/README.md) | ![Inactive](https://img.shields.io/badge/Inactive-red) |
| `you` | Example usage for you | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](./you/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Try it Out** | | | |
| Google Colab Jupyter Notebook | Example usage for gpt4free | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DanielShemesh/gpt4free-colab/blob/main/gpt4free.ipynb) | - |
| replit Example (feel free to fork this repl) | Example usage for gpt4free | [![](https://img.shields.io/badge/Open%20in-Replit-1A1E27?logo=replit)](https://replit.com/@gpt4free/gpt4free-webui) | - |
@@ -154,3 +134,6 @@ GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
```
## Star History <a name="star-history"></a>
[![Star History Chart](https://api.star-history.com/svg?repos=xtekky/gpt4free&type=Date)](https://star-history.com/#xtekky/gpt4free)

Singularity/gpt4free.sif Normal file
View File

@@ -0,0 +1,15 @@
Bootstrap: docker
From: python:3.10-slim
%post
apt-get update && apt-get install -y git
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
pip install --no-cache-dir -r requirements.txt
cp gui/streamlit_app.py .
%expose
8501
%startscript
exec streamlit run streamlit_app.py

View File

@@ -1,19 +1,22 @@
from json import loads
from re import match
from time import time, sleep
from uuid import uuid4

from requests import post
from tls_client import Session

from forefront.mail import Mail
from forefront.typing import ForeFrontResponse
class Account:
    @staticmethod
    def create(proxy=None, logging=False):
        proxies = {
            'http': 'http://' + proxy,
            'https': 'http://' + proxy} if proxy else False

        start = time()
@@ -21,17 +24,17 @@ class Account:
        mail_token = None
        mail_adress = mail.get_mail()

        # print(mail_adress)

        client = Session(client_identifier='chrome110')
        client.proxies = proxies
        client.headers = {
            "origin": "https://accounts.forefront.ai",
            "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36",
        }

        response = client.post('https://clerk.forefront.ai/v1/client/sign_ups?_clerk_js_version=4.32.6',
                               data={
                                   "email_address": mail_adress
                               }
                               )
@@ -39,9 +42,10 @@ class Account:
        trace_token = response.json()['response']['id']
        if logging: print(trace_token)

        response = client.post(
            f"https://clerk.forefront.ai/v1/client/sign_ups/{trace_token}/prepare_verification?_clerk_js_version=4.32.6",
            data={
                "strategy": "email_code",
            }
        )
@@ -61,7 +65,9 @@ class Account:
        if logging: print(mail_token)

        response = client.post(
            f'https://clerk.forefront.ai/v1/client/sign_ups/{trace_token}/attempt_verification?_clerk_js_version=4.38.4',
            data={
                'code': mail_token,
                'strategy': 'email_code'
            })
@@ -79,43 +85,44 @@ class Account:

class StreamingCompletion:
    @staticmethod
    def create(
            token=None,
            chatId=None,
            prompt='',
            actionType='new',
            defaultPersona='607e41fe-95be-497e-8e97-010a59b2e2c0',  # default
            model='gpt-4') -> ForeFrontResponse:

        if not token: raise Exception('Token is required!')
        if not chatId: chatId = str(uuid4())

        headers = {
            'authority': 'chat-server.tenant-forefront-default.knative.chi.coreweave.com',
            'accept': '*/*',
            'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
            'authorization': 'Bearer ' + token,
            'cache-control': 'no-cache',
            'content-type': 'application/json',
            'origin': 'https://chat.forefront.ai',
            'pragma': 'no-cache',
            'referer': 'https://chat.forefront.ai/',
            'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"macOS"',
            'sec-fetch-dest': 'empty',
            'sec-fetch-mode': 'cors',
            'sec-fetch-site': 'cross-site',
            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
        }

        json_data = {
            'text': prompt,
            'action': actionType,
            'parentId': chatId,
            'workspaceId': chatId,
            'messagePersona': defaultPersona,
            'model': model
        }
        for chunk in post('https://chat-server.tenant-forefront-default.knative.chi.coreweave.com/chat',
@@ -127,19 +134,19 @@ class StreamingCompletion:
            if token != None:
                yield ForeFrontResponse({
                    'id': chatId,
                    'object': 'text_completion',
                    'created': int(time()),
                    'model': model,
                    'choices': [{
                        'text': token,
                        'index': 0,
                        'logprobs': None,
                        'finish_reason': 'stop'
                    }],
                    'usage': {
                        'prompt_tokens': len(prompt),
                        'completion_tokens': len(token),
                        'total_tokens': len(prompt) + len(token)
                    }
                })
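For reference, the OpenAI-style payload that `StreamingCompletion.create` yields around each streamed token can be sketched in isolation. The `wrap_token` helper below is illustrative, not part of the forefront module:

```python
from time import time


def wrap_token(token: str, prompt: str, chat_id: str, model: str = 'gpt-4') -> dict:
    # Illustrative helper: build the OpenAI-style completion dict that the
    # streaming client above wraps around a single streamed token.
    return {
        'id': chat_id,
        'object': 'text_completion',
        'created': int(time()),
        'model': model,
        'choices': [{
            'text': token,
            'index': 0,
            'logprobs': None,
            'finish_reason': 'stop',
        }],
        'usage': {
            'prompt_tokens': len(prompt),
            'completion_tokens': len(token),
            'total_tokens': len(prompt) + len(token),
        },
    }


payload = wrap_token('hello', prompt='hi', chat_id='1337')
print(payload['usage']['total_tokens'])  # 5 + 2 = 7
```

Note the token counts here are character lengths, not real tokenizer counts, mirroring the client above.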

View File

@@ -1,6 +1,8 @@
from random import choices
from string import ascii_letters

from requests import Session
class Mail:
    def __init__(self, proxies: dict = None) -> None:
@@ -27,12 +29,12 @@ class Mail:
    def get_mail(self) -> str:
        token = ''.join(choices(ascii_letters, k=14)).lower()
        init = self.client.post("https://api.mail.tm/accounts", json={
            "address": f"{token}@bugfoo.com",
            "password": token
        })

        if init.status_code == 201:
            resp = self.client.post("https://api.mail.tm/token", json={
                **init.json(),
                "password": token
            })
@@ -52,4 +54,3 @@ class Mail:
    def get_message_content(self, message_id: str):
        return self.get_message(message_id)["text"]

View File

@@ -24,7 +24,6 @@ class ForeFrontResponse:
            return f'''<__main__.APIResponse.Usage(\n prompt_tokens = {self.prompt_tokens},\n completion_tokens = {self.completion_tokens},\n total_tokens = {self.total_tokens})object at 0x1337>'''

    def __init__(self, response_dict: dict) -> None:
        self.response_dict = response_dict
        self.id = response_dict['id']
        self.object = response_dict['object']

View File

@@ -1,25 +1,34 @@
import os
import sys

sys.path.append(os.path.join(os.path.dirname(__file__), os.path.pardir))

import streamlit as st
import phind

# Set cloudflare clearance and user agent
phind.cf_clearance = ''
phind.user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'


def get_answer(question: str) -> str:
    # Set cloudflare clearance cookie and get answer from GPT-4 model
    try:
        result = phind.Completion.create(
            model='gpt-4',
            prompt=question,
            results=phind.Search.create(question, actualSearch=True),
            creative=False,
            detailed=False,
            codeContext=''
        )
        return result.completion.choices[0].text
    except Exception as e:
        # Return error message if an exception occurs
        return f'An error occurred: {e}. Please make sure you are using a valid cloudflare clearance token and user agent.'
# Set page configuration and add header
st.set_page_config(
    page_title="gpt4freeGUI",
    initial_sidebar_state="expanded",
@@ -30,16 +39,18 @@ st.set_page_config(
        'About': "### gptfree GUI"
    }
)
st.header('GPT4free GUI')

# Add text area for user input and button to get answer
question_text_area = st.text_area(
    '🤖 Ask Any Question :', placeholder='Explain quantum computing in 50 words')
if st.button('🧠 Think'):
    answer = get_answer(question_text_area)
    # Display answer
    st.caption("Answer :")
    st.markdown(answer)

# Hide Streamlit footer
hide_streamlit_style = """
            <style>
            footer {visibility: hidden;}

View File

@@ -1,19 +1,17 @@
from datetime import datetime
from queue import Queue, Empty
from threading import Thread
from time import time
from urllib.parse import quote

from curl_cffi.requests import post

cf_clearance = ''
user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
class PhindResponse:
    class Completion:
        class Choices:
            def __init__(self, choice: dict) -> None:
                self.text = choice['text']

@@ -38,7 +36,6 @@ class PhindResponse:
            return f'''<__main__.APIResponse.Usage(\n prompt_tokens = {self.prompt_tokens},\n completion_tokens = {self.completion_tokens},\n total_tokens = {self.total_tokens})object at 0x1337>'''

    def __init__(self, response_dict: dict) -> None:
        self.response_dict = response_dict
        self.id = response_dict['id']
        self.object = response_dict['object']
@@ -55,7 +52,7 @@ class Search:
    def create(prompt: str, actualSearch: bool = True, language: str = 'en') -> dict:  # None = no search
        if user_agent == '':
            raise ValueError('user_agent must be set, refer to documentation')
        if cf_clearance == '':
            raise ValueError('cf_clearance must be set, refer to documentation')

        if not actualSearch:
@@ -92,7 +89,7 @@ class Search:
            'user-agent': user_agent
        }

        return post('https://www.phind.com/api/bing/search', headers=headers, json={
            'q': prompt,
            'userRankList': {},
            'browserLanguage': language}).json()['rawBingResults']
@@ -100,7 +97,7 @@ class Search:

class Completion:
    def create(
            model='gpt-4',
            prompt: str = '',
            results: dict = None,
            creative: bool = False,
@@ -108,31 +105,31 @@ class Completion:
            codeContext: str = '',
            language: str = 'en') -> PhindResponse:

        if user_agent == '':
            raise ValueError('user_agent must be set, refer to documentation')
        if cf_clearance == '':
            raise ValueError('cf_clearance must be set, refer to documentation')

        if results is None:
            results = Search.create(prompt, actualSearch=True)

        if len(codeContext) > 2999:
            raise ValueError('codeContext must be less than 3000 characters')

        models = {
            'gpt-4': 'expert',
            'gpt-3.5-turbo': 'intermediate',
            'gpt-3.5': 'intermediate',
        }

        json_data = {
            'question': prompt,
            'bingResults': results,  # response.json()['rawBingResults'],
            'codeContext': codeContext,
            'options': {
                'skill': models[model],
                'date': datetime.now().strftime("%d/%m/%Y"),
                'language': language,
                'detailed': detailed,
                'creative': creative
@@ -157,25 +154,26 @@ class Completion:
        }

        completion = ''
        response = post('https://www.phind.com/api/infer/answer', headers=headers, json=json_data, timeout=99999,
                        impersonate='chrome110')
        for line in response.text.split('\r\n\r\n'):
            completion += (line.replace('data: ', ''))

        return PhindResponse({
            'id': f'cmpl-1337-{int(time())}',
            'object': 'text_completion',
            'created': int(time()),
            'model': models[model],
            'choices': [{
                'text': completion,
                'index': 0,
                'logprobs': None,
                'finish_reason': 'stop'
            }],
            'usage': {
                'prompt_tokens': len(prompt),
                'completion_tokens': len(completion),
                'total_tokens': len(prompt) + len(completion)
            }
        })
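`Completion.create` above reassembles the answer by splitting the response body on blank lines and stripping the `data: ` prefix from each server-sent chunk. That parsing step can be sketched standalone, with no network involved; `join_sse_chunks` is an illustrative name:

```python
def join_sse_chunks(raw: str) -> str:
    # phind streams server-sent events separated by '\r\n\r\n'; each event
    # carries a 'data: ' prefix that is stripped before concatenation.
    completion = ''
    for line in raw.split('\r\n\r\n'):
        completion += line.replace('data: ', '')
    return completion


sample = 'data: Hello\r\n\r\ndata:  world'
print(join_sse_chunks(sample))  # Hello world
```

Note this naive `replace` also explains the README's "not getting newlines from the stream" caveat: any newline formatting carried inside the events is flattened away by the blank-line split.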
@@ -187,18 +185,18 @@ class StreamingCompletion:
    def request(model, prompt, results, creative, detailed, codeContext, language) -> None:
        models = {
            'gpt-4': 'expert',
            'gpt-3.5-turbo': 'intermediate',
            'gpt-3.5': 'intermediate',
        }

        json_data = {
            'question': prompt,
            'bingResults': results,
            'codeContext': codeContext,
            'options': {
                'skill': models[model],
                'date': datetime.now().strftime("%d/%m/%Y"),
                'language': language,
                'detailed': detailed,
                'creative': creative
@@ -223,33 +221,33 @@ class StreamingCompletion:
        }

        response = post('https://www.phind.com/api/infer/answer',
                        headers=headers, json=json_data, timeout=99999, impersonate='chrome110',
                        content_callback=StreamingCompletion.handle_stream_response)

        StreamingCompletion.stream_completed = True

    @staticmethod
    def create(
            model: str = 'gpt-4',
            prompt: str = '',
            results: dict = None,
            creative: bool = False,
            detailed: bool = False,
            codeContext: str = '',
            language: str = 'en'):

        if user_agent == '':
            raise ValueError('user_agent must be set, refer to documentation')
        if cf_clearance == '':
            raise ValueError('cf_clearance must be set, refer to documentation')

        if results is None:
            results = Search.create(prompt, actualSearch=True)

        if len(codeContext) > 2999:
            raise ValueError('codeContext must be less than 3000 characters')

        Thread(target=StreamingCompletion.request, args=[
            model, prompt, results, creative, detailed, codeContext, language]).start()

        while StreamingCompletion.stream_completed != True or not StreamingCompletion.message_queue.empty():
@@ -266,20 +264,20 @@ class StreamingCompletion:
            chunk = chunk.replace('data: ', '').replace('\r\n\r\n', '')

            yield PhindResponse({
                'id': f'cmpl-1337-{int(time())}',
                'object': 'text_completion',
                'created': int(time()),
                'model': model,
                'choices': [{
                    'text': chunk,
                    'index': 0,
                    'logprobs': None,
                    'finish_reason': 'stop'
                }],
                'usage': {
                    'prompt_tokens': len(prompt),
                    'completion_tokens': len(chunk),
                    'total_tokens': len(prompt) + len(chunk)
                }
            })
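`StreamingCompletion` hands chunks from the request thread to the consumer through a class-level queue, draining it until the producer flips `stream_completed`. A self-contained sketch of that producer/consumer pattern, with a simulated producer standing in for the HTTP content callback (all names here are illustrative):

```python
from queue import Queue, Empty
from threading import Thread


class Stream:
    # Mimics StreamingCompletion's queue-based handoff: the worker thread
    # plays the role of the curl_cffi content callback, while the consumer
    # drains the queue until the producer flips stream_completed.
    message_queue = Queue()
    stream_completed = False

    @staticmethod
    def request(chunks):
        # Producer: push each chunk, then signal completion.
        for chunk in chunks:
            Stream.message_queue.put(chunk)
        Stream.stream_completed = True

    @staticmethod
    def create(chunks):
        # Consumer: yield chunks until the producer is done AND the
        # queue is fully drained (checking only one would drop chunks
        # or exit early).
        Stream.stream_completed = False
        Thread(target=Stream.request, args=[chunks]).start()
        while not Stream.stream_completed or not Stream.message_queue.empty():
            try:
                yield Stream.message_queue.get(timeout=0.01)
            except Empty:
                pass


print(''.join(Stream.create(['Hel', 'lo'])))  # Hello
```

The two-part loop condition is the load-bearing detail: the thread may set the flag while chunks are still queued, so the consumer must keep draining after completion is signalled.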

View File

@@ -384,7 +384,7 @@ class Client:
                continue

            # update info about response
            message["text_new"] = message["text"][len(last_text):]
            last_text = message["text"]
            message_id = message["messageId"]

View File

@@ -38,7 +38,7 @@ class Emailnator:
        return self.email

    def get_message(self):
        print("Waiting for message...")

        while True:
            sleep(2)
@@ -49,6 +49,7 @@ class Emailnator:
            mail_token = loads(mail_token.text)["messageData"]

            if len(mail_token) == 2:
                print("Message received!")
                print(mail_token[1]["messageID"])
                break
@@ -63,4 +64,19 @@ class Emailnator:
        return mail_context.text

    def get_verification_code(self):
        message = self.get_message()
        code = findall(r';">(\d{6,7})</div>', message)[0]
        print(f"Verification code: {code}")
        return code

    def clear_inbox(self):
        print("Clearing inbox...")
        self.client.post(
            "https://www.emailnator.com/delete-all",
            json={"email": self.email},
        )
        print("Inbox cleared!")

    def __del__(self):
        if self.email:
            self.clear_inbox()
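`get_verification_code` above pulls a 6-7 digit code out of the message HTML with a regex keyed on the inline-styled `div` that wraps it. A standalone sketch of that extraction against a made-up HTML fragment; `extract_code` and the sample markup are illustrative:

```python
from re import findall


def extract_code(html: str) -> str:
    # The verification code sits in a div with an inline style attribute;
    # the pattern used by Emailnator.get_verification_code captures the
    # 6-7 consecutive digits between ;"> and </div>.
    return findall(r';">(\d{6,7})</div>', html)[0]


sample = '<div style="font-size:20px;">123456</div>'
print(extract_code(sample))  # 123456
```

Anchoring on `;">` ties the pattern to the provider's current markup, so a change in the email template would break it; it is a pragmatic scrape, not a stable contract.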

View File

@@ -8,3 +8,4 @@ curl_cffi
streamlit==1.21.0
selenium
fake-useragent
twocaptcha

View File

@@ -5,7 +5,6 @@ token = forefront.Account.create(logging=True)
print(token)

# get a response
for response in forefront.StreamingCompletion.create(token=token,
                                                     prompt='hello world', model='gpt-4'):
    print(response.completion.choices[0].text, end='')

View File

@@ -8,12 +8,13 @@ prompt = 'hello world'

# normal completion
result = phind.Completion.create(
    model='gpt-4',
    prompt=prompt,
    results=phind.Search.create(prompt, actualSearch=False),
    # create search (set actualSearch to False to disable internet)
    creative=False,
    detailed=False,
    codeContext='')  # up to 3000 chars of code

print(result.completion.choices[0].text)
@@ -22,11 +23,12 @@ prompt = 'who won the quatar world cup'

# help needed: not getting newlines from the stream, please submit a PR if you know how to fix this

# stream completion
for result in phind.StreamingCompletion.create(
        model='gpt-4',
        prompt=prompt,
        results=phind.Search.create(prompt, actualSearch=True),
        # create search (set actualSearch to False to disable internet)
        creative=False,
        detailed=False,
        codeContext=''):  # up to 3000 chars of code
    print(result.completion.choices[0].text, end='', flush=True)

View File

@@ -1,16 +1,16 @@
from hashlib import md5
from json import dumps
from re import findall

from tls_client import Session as TLS
from twocaptcha import TwoCaptcha

from quora import extract_formkey
from quora.mail import Emailnator

solver = TwoCaptcha('72747bf24a9d89b4dcc1b24875efd358')

class Account:
    def create(proxy: None or str = None, logging: bool = False, enable_bot_creation: bool = False):
        client = TLS(client_identifier='chrome110')
@ -24,40 +24,40 @@ class Account:
if logging: print('email', mail_address) if logging: print('email', mail_address)
client.headers = { client.headers = {
            'authority': 'poe.com',
            'accept': '*/*',
            'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
            'content-type': 'application/json',
            'origin': 'https://poe.com',
            'poe-formkey': 'null',
            'poe-tag-id': 'null',
            'poe-tchannel': 'null',
            'referer': 'https://poe.com/login',
            'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"macOS"',
            'sec-fetch-dest': 'empty',
            'sec-fetch-mode': 'cors',
            'sec-fetch-site': 'same-origin',
            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
        }
        client.headers["poe-formkey"] = extract_formkey(client.get('https://poe.com/login').text)
        client.headers["poe-tchannel"] = client.get('https://poe.com/api/settings').json()['tchannelData']['channel']

        # token = reCaptchaV3('https://www.recaptcha.net/recaptcha/enterprise/anchor?ar=1&k=6LflhEElAAAAAI_ewVwRWI9hsyV4mbZnYAslSvlG&co=aHR0cHM6Ly9wb2UuY29tOjQ0Mw..&hl=en&v=4PnKmGB9wRHh1i04o7YUICeI&size=invisible&cb=bi6ivxoskyal')
        token = solver.recaptcha(sitekey='6LflhEElAAAAAI_ewVwRWI9hsyV4mbZnYAslSvlG',
                                 url='https://poe.com/login?redirect_url=%2F',
                                 version='v3',
                                 enterprise=1,
                                 invisible=1,
                                 action='login')['code']
        payload = dumps(separators=(',', ':'), obj={
            'queryName': 'MainSignupLoginSection_sendVerificationCodeMutation_Mutation',
            'variables': {
                'emailAddress': mail_address,
                'phoneNumber': None,
                'recaptchaToken': token
            },
            'query': 'mutation MainSignupLoginSection_sendVerificationCodeMutation_Mutation(\n $emailAddress: String\n $phoneNumber: String\n $recaptchaToken: String\n) {\n sendVerificationCode(verificationReason: login, emailAddress: $emailAddress, phoneNumber: $phoneNumber, recaptchaToken: $recaptchaToken) {\n status\n errorMessage\n }\n}\n',
@ -74,17 +74,17 @@ class Account:
            print('please try using a proxy / wait for fix')

        if 'Bad Request' in response.text:
            if logging: print('bad request, retrying...', response.json())
            quit()

        if logging: print('send_code', response.json())
        mail_content = mail_client.get_message()
        mail_token = findall(r';">(\d{6,7})</div>', mail_content)[0]

        if logging: print('code', mail_token)
        payload = dumps(separators=(',', ':'), obj={
            "queryName": "SignupOrLoginWithCodeSection_signupWithVerificationCodeMutation_Mutation",
            "variables": {
                "verificationCode": str(mail_token),
@ -97,8 +97,8 @@ class Account:
        base_string = payload + client.headers["poe-formkey"] + 'WpuLMiXEKKE98j56k'
        client.headers["poe-tag-id"] = md5(base_string.encode()).hexdigest()

        response = client.post('https://poe.com/api/gql_POST', data=payload)
        if logging: print('verify_code', response.json())
Account.create(proxy='xtekky:wegwgwegwed_streaming-1@geo.iproyal.com:12321', logging=True)
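The `poe-tag-id` header computed above is just an MD5 digest of the GraphQL payload, the `poe-formkey`, and a fixed salt. A standalone sketch of that computation, using placeholder payload and formkey values (not real ones):

```python
from hashlib import md5

payload = '{"queryName":"example"}'  # placeholder GraphQL payload
formkey = '0123456789abcdef'         # placeholder poe-formkey
salt = 'WpuLMiXEKKE98j56k'           # fixed salt used in the code above

# Same construction as in Account.create: payload + formkey + salt, then MD5
base_string = payload + formkey + salt
tag_id = md5(base_string.encode()).hexdigest()
```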
@ -1,13 +1,13 @@
from time import sleep

import quora

token = quora.Account.create(proxy=None, logging=True)
print('token', token)
sleep(2)

for response in quora.StreamingCompletion.create(model='gpt-3.5-turbo',
                                                 prompt='hello world',
                                                 token=token):
    print(response.completion.choices[0].text, end="", flush=True)
@ -1,18 +1,17 @@
import quora

token = quora.Account.create(logging=True, enable_bot_creation=True)

model = quora.Model.create(
    token=token,
    model='gpt-3.5-turbo',  # or claude-instant-v1.0
    system_prompt='you are ChatGPT a large language model ...'
)

print(model.name)

for response in quora.StreamingCompletion.create(
        custom_model=model.name,
        prompt='hello world',
        token=token):
    print(response.completion.choices[0].text)
testing/sqlchat_test.py Normal file
@ -0,0 +1,6 @@
import sqlchat

for response in sqlchat.StreamCompletion.create(
        prompt='write python code to reverse a string',
        messages=[]):
    print(response.completion.choices[0].text, end='')
@ -1,7 +1,6 @@
import t3nsor

for response in t3nsor.StreamCompletion.create(
        prompt='write python code to reverse a string',
        messages=[]):
    print(response.completion.choices[0].text)
@ -2,18 +2,18 @@
import writesonic

# create account (3-4s)
account = writesonic.Account.create(logging=True)

# with logging:
# 2023-04-06 21:50:25 INFO __main__ -> register success : '{"id":"51aa0809-3053-44f7-922a...' (2s)
# 2023-04-06 21:50:25 INFO __main__ -> id : '51aa0809-3053-44f7-922a-2b85d8d07edf'
# 2023-04-06 21:50:25 INFO __main__ -> token : 'eyJhbGciOiJIUzI1NiIsInR5cCI6Ik...'
# 2023-04-06 21:50:28 INFO __main__ -> got key : '194158c4-d249-4be0-82c6-5049e869533c' (2s)

# simple completion
response = writesonic.Completion.create(
    api_key=account.key,
    prompt='hello world'
)

print(response.completion.choices[0].text)  # Hello! How may I assist you today?
@ -21,10 +21,10 @@ print(response.completion.choices[0].text) # Hello! How may I assist you today?
# conversation
response = writesonic.Completion.create(
    api_key=account.key,
    prompt='what is my name ?',
    enable_memory=True,
    history_data=[
        {
            'is_sent': True,
            'message': 'my name is Tekky'
@ -41,9 +41,9 @@ print(response.completion.choices[0].text) # Your name is Tekky.
# enable internet
response = writesonic.Completion.create(
    api_key=account.key,
    prompt='who won the quatar world cup ?',
    enable_google_results=True
)

print(response.completion.choices[0].text)  # Argentina won the 2022 FIFA World Cup tournament held in Qatar ...
@ -1,19 +1,19 @@
from json import dumps, loads
from os import getenv
from random import randint
from re import search
from urllib.parse import urlencode

from bard.typings import BardResponse
from dotenv import load_dotenv
from requests import Session

load_dotenv()

token = getenv('1psid')
proxy = getenv('proxy')
temperatures = {
    0: "Generate text strictly following known patterns, with no creativity.",
    0.1: "Produce text adhering closely to established patterns, allowing minimal creativity.",
    0.2: "Create text with modest deviations from familiar patterns, injecting a slight creative touch.",
    0.3: "Craft text with a mild level of creativity, deviating somewhat from common patterns.",
@ -23,38 +23,17 @@ temperatures = {
    0.7: "Produce text favoring creativity over typical patterns for more original results.",
    0.8: "Create text heavily focused on creativity, with limited concern for familiar patterns.",
    0.9: "Craft text with a strong emphasis on unique and inventive ideas, largely ignoring established patterns.",
    1: "Generate text with maximum creativity, disregarding any constraints of known patterns or structures."
}


class Completion:
    def create(
            prompt: str = 'hello world',
            temperature: int = None,
            conversation_id: str = '',
            response_id: str = '',
            choice_id: str = '') -> BardResponse:

        if temperature:
            prompt = f'''settings: follow these settings for your response: [temperature: {temperature} - {temperatures[temperature]}] | prompt : {prompt}'''
@ -62,54 +41,53 @@ class Completion:
        client = Session()
        client.proxies = {
            'http': f'http://{proxy}',
            'https': f'http://{proxy}'} if proxy else None
        client.headers = {
            'authority': 'bard.google.com',
            'content-type': 'application/x-www-form-urlencoded;charset=UTF-8',
            'origin': 'https://bard.google.com',
            'referer': 'https://bard.google.com/',
            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
            'x-same-domain': '1',
            'cookie': f'__Secure-1PSID={token}'
        }
        snlm0e = search(r'SNlM0e\":\"(.*?)\"',
                        client.get('https://bard.google.com/').text).group(1)
        params = urlencode({
            'bl': 'boq_assistant-bard-web-server_20230326.21_p0',
            '_reqid': randint(1111, 9999),
            'rt': 'c',
        })
        response = client.post(
            f'https://bard.google.com/_/BardChatUi/data/assistant.lamda.BardFrontendService/StreamGenerate?{params}',
            data={
                'at': snlm0e,
                'f.req': dumps([None, dumps([
                    [prompt],
                    None,
                    [conversation_id, response_id, choice_id],
                ])])
            }
        )
        chat_data = loads(response.content.splitlines()[3])[0][2]
        if not chat_data:
            print('error, retrying')
            # return the retried result instead of falling through with no data
            return Completion.create(prompt, temperature,
                                     conversation_id, response_id, choice_id)
        json_chat_data = loads(chat_data)
        results = {
            'content': json_chat_data[0][0],
            'conversation_id': json_chat_data[1][0],
            'response_id': json_chat_data[1][1],
            'factualityQueries': json_chat_data[3],
            'textQuery': json_chat_data[2][0] if json_chat_data[2] is not None else '',
            'choices': [{'id': i[0], 'content': i[1]} for i in json_chat_data[4]],
        }
        return BardResponse(results)
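The `temperatures` map is consumed by prefixing the prompt with the selected description, mirroring the f-string used in `Completion.create`. A minimal, self-contained sketch of that prefixing step (the map is trimmed to two entries for brevity):

```python
# Trimmed copy of the temperature-description map used above.
temperatures = {
    0: "Generate text strictly following known patterns, with no creativity.",
    1: "Generate text with maximum creativity, disregarding any constraints of known patterns or structures."
}

def apply_temperature(prompt: str, temperature=None) -> str:
    """Prepend the temperature instruction to the prompt, as Completion.create does."""
    if temperature is not None:
        return (f'settings: follow these settings for your response: '
                f'[temperature: {temperature} - {temperatures[temperature]}] | prompt : {prompt}')
    return prompt

result = apply_temperature('hello world', 1)
```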
@ -1,5 +1,13 @@
from typing import Dict, List, Union
class BardResponse:
    def __init__(self, json_dict: Dict[str, Union[str, List]]) -> None:
        """
        Initialize a BardResponse object.

        :param json_dict: A dictionary containing the JSON response data.
        """
        self.json = json_dict

        self.content = json_dict.get('content')
@ -7,9 +15,40 @@ class BardResponse:
        self.response_id = json_dict.get('response_id')
        self.factuality_queries = json_dict.get('factualityQueries', [])
        self.text_query = json_dict.get('textQuery', [])
        self.choices = [self.BardChoice(choice)
                        for choice in json_dict.get('choices', [])]

    def __repr__(self) -> str:
        """
        Return a string representation of the BardResponse object.

        :return: A string representation of the BardResponse object.
        """
        return f"BardResponse(conversation_id={self.conversation_id}, response_id={self.response_id}, content={self.content})"

    def filter_choices(self, keyword: str) -> List['BardChoice']:
        """
        Filter the choices based on a keyword.

        :param keyword: The keyword to filter choices by.
        :return: A list of filtered BardChoice objects.
        """
        return [choice for choice in self.choices if keyword.lower() in choice.content.lower()]
    class BardChoice:
        def __init__(self, choice_dict: Dict[str, str]) -> None:
            """
            Initialize a BardChoice object.

            :param choice_dict: A dictionary containing the choice data.
            """
            self.id = choice_dict.get('id')
            self.content = choice_dict.get('content')[0]

        def __repr__(self) -> str:
            """
            Return a string representation of the BardChoice object.

            :return: A string representation of the BardChoice object.
            """
            return f"BardChoice(id={self.id}, content={self.content})"
@ -1,110 +1,78 @@
# Import necessary libraries
import asyncio
from json import dumps, loads
from random import randint
from ssl import create_default_context
from uuid import uuid4

import websockets
from browser_cookie3 import edge
from certifi import where
from requests import get

# Set up SSL context
ssl_context = create_default_context()
ssl_context.load_verify_locations(where())
def format(msg: dict) -> str:
    """Format message as JSON string with delimiter."""
    return dumps(msg) + '\x1e'


def get_token():
    """Retrieve token from browser cookies."""
    cookies = {c.name: c.value for c in edge(domain_name='bing.com')}
    return cookies['_U']
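`get_token` pulls the `_U` cookie out of the browser's cookie jar and turns it into a name-to-value mapping. The same mapping step can be sketched with the stdlib cookie parser; the raw cookie string below is a made-up placeholder, not a real Bing token:

```python
from http.cookies import SimpleCookie

# Hypothetical raw cookie header for bing.com (placeholder values)
raw = '_U=abc123; MUID=deadbeef'

jar = SimpleCookie()
jar.load(raw)

# Same shape as the dict built in get_token(): cookie name -> value
cookies = {name: morsel.value for name, morsel in jar.items()}
token = cookies['_U']
```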
class AsyncCompletion:
    async def create(
            prompt: str = 'hello world',
            optionSets: list = [
                'deepleo',
                'enable_debug_commands',
                'disable_emoji_spoken_text',
                'enablemm',
                'h3relaxedimg'
            ],
            token: str = get_token()):
        """Create a connection to Bing AI and send the prompt."""
        # Send create request
        create = get('https://edgeservices.bing.com/edgesvc/turing/conversation/create',
                     headers={
                         'host': 'edgeservices.bing.com',
                         'authority': 'edgeservices.bing.com',
                         'cookie': f'_U={token}',
                         'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69',
                     }
                     )

        # Extract conversation data
        conversationId = create.json()['conversationId']
        clientId = create.json()['clientId']
        conversationSignature = create.json()['conversationSignature']
        # Connect to WebSocket
        wss = await websockets.connect('wss://sydney.bing.com/sydney/ChatHub', max_size=None, ssl=ssl_context,
                                       extra_headers={
                                           'accept': 'application/json',
                                           'accept-language': 'en-US,en;q=0.9',
                                           'content-type': 'application/json',
                                           'sec-ch-ua': '"Not_A Brand";v="99", Microsoft Edge";v="110", "Chromium";v="110"',
                                           'sec-ch-ua-arch': '"x86"',
                                           'sec-ch-ua-bitness': '"64"',
                                           'sec-ch-ua-full-version': '"109.0.1518.78"',
                                           'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
                                           'sec-ch-ua-mobile': '?0',
                                           'sec-ch-ua-model': "",
                                           'sec-ch-ua-platform': '"Windows"',
                                           'sec-ch-ua-platform-version': '"15.0.0"',
                                           'sec-fetch-dest': 'empty',
                                           'sec-fetch-mode': 'cors',
                                           'sec-fetch-site': 'same-origin',
                                           'x-ms-client-request-id': str(uuid4()),
                                           'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
                                           'Referer': 'https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx',
                                           'Referrer-Policy': 'origin-when-cross-origin',
                                           'x-forwarded-for': f'13.{randint(104, 107)}.{randint(0, 255)}.{randint(0, 255)}'
                                       }
                                       )
        # Send JSON protocol version
        await wss.send(format({'protocol': 'json', 'version': 1}))
        await wss.recv()
        # Define message structure
        struct = {
            'arguments': [
                {
                    'source': 'cib',
                    'optionsSets': optionSets,
                    'isStartOfSession': True,
                    'message': {
                        'author': 'user',
                        'inputMethod': 'Keyboard',
                        'text': prompt,
                        'messageType': 'Chat'
                    },
                    'conversationSignature': conversationSignature,
                    'participant': {
                        'id': clientId
                    },
                    'conversationId': conversationId
                }
            ],
            'invocationId': '0',
            'target': 'chat',
            'type': 4
        }
        # Send message
        await wss.send(format(struct))

        # Process responses
        base_string = ''
        final = False
        while not final:
            objects = str(await wss.recv()).split('\x1e')
@ -113,8 +81,9 @@ class AsyncCompletion:
                continue

            response = loads(obj)
            if response.get('type') == 1 and response['arguments'][0].get('messages', ):
                response_text = response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0].get(
                    'text')

                yield (response_text.replace(base_string, ''))
                base_string = response_text
@ -124,28 +93,16 @@ class AsyncCompletion:
        await wss.close()
async def run():
    """Run the async completion and print the result."""
    async for value in AsyncCompletion.create(
            prompt='summarize cinderella with each word beginning with a consecutive letter of the alphabet, a-z',
            optionSets=[
                "galileo",
            ]
    ):
        print(value, end='', flush=True)


asyncio.run(run())
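The `replace(base_string, '')` step in `AsyncCompletion.create` turns full-text snapshots into incremental deltas: each websocket update carries the entire text so far, and only the new suffix is yielded. The same trick in isolation, with a hard-coded sequence of snapshots:

```python
# Each "snapshot" is the full response text at some point in the stream.
snapshots = ['Hel', 'Hello', 'Hello wor', 'Hello world']

base_string = ''
deltas = []
for response_text in snapshots:
    # Strip the previously seen prefix, keeping only the new text.
    deltas.append(response_text.replace(base_string, ''))
    base_string = response_text

full_text = ''.join(deltas)
```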
@ -1,13 +1,24 @@
import requests


class Completion:
    def create(self, prompt="What is the square root of pi",
               system_prompt=("ASSUME I HAVE FULL ACCESS TO COCALC. ENCLOSE MATH IN $. "
                              "INCLUDE THE LANGUAGE DIRECTLY AFTER THE TRIPLE BACKTICKS "
                              "IN ALL MARKDOWN CODE BLOCKS. How can I do the following using CoCalc?")) -> str:
        # Initialize a session with custom headers
        session = self._initialize_session()

        # Set the data that will be submitted
        payload = self._create_payload(prompt, system_prompt)

        # Submit the request and return the results
        return self._submit_request(session, payload)

    def _initialize_session(self) -> requests.Session:
        """Initialize a session with custom headers for the request."""
        session = requests.Session()

        headers = {
            'Accept': '*/*',
            'Accept-Language': 'en-US,en;q=0.5',
@ -17,15 +28,20 @@ class Completion:
        }
        session.headers.update(headers)

        return session

    def _create_payload(self, prompt: str, system_prompt: str) -> dict:
        """Create the payload with the given prompts."""
        return {
            "input": prompt,
            "system": system_prompt,
            "tag": "next:index"
        }

    def _submit_request(self, session: requests.Session, payload: dict) -> str:
        """Submit the request to the API and return the response."""
        response = session.post(
            "https://cocalc.com/api/v2/openai/chatgpt", json=payload).json()
        return response
@ -1,8 +1,7 @@
import cocalc

# Completion.create is now an instance method, so instantiate first
response = cocalc.Completion().create(
    prompt='hello world'
)

print(response)
unfinished/easyai/main.py Normal file
@ -0,0 +1,42 @@
# Import necessary libraries
from json import loads
from os import urandom
from requests import get
# Generate a random session ID
sessionId = urandom(10).hex()
# Set up headers for the API request
headers = {
    'Accept': 'text/event-stream',
    'Accept-Language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Pragma': 'no-cache',
    'Referer': 'http://easy-ai.ink/chat',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
    'token': 'null',
}

# Main loop to interact with the AI
while True:
    # Get user input
    prompt = input('you: ')

    # Set up parameters for the API request
    params = {
        'message': prompt,
        'sessionId': sessionId
    }

    # Send request to the API and process the response
    for chunk in get('http://easy-ai.ink/easyapi/v1/chat/completions', params=params,
                     headers=headers, verify=False, stream=True).iter_lines():
        # Check if the chunk contains the 'content' field
        if b'content' in chunk:
            # Parse the JSON data and print the content
            data = loads(chunk.decode('utf-8').split('data:')[1])
            print(data['content'], end='')
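Each line of the easy-ai stream is a server-sent-events style record of the form `data:{...}`, and the loop above extracts the JSON after the `data:` prefix. The same parsing step on a single made-up chunk:

```python
from json import loads

# Hypothetical raw SSE line, as produced by iter_lines() above
chunk = b'data:{"content":"Hello there"}'

content = None
if b'content' in chunk:
    # Drop the 'data:' prefix, then parse the remaining JSON
    data = loads(chunk.decode('utf-8').split('data:')[1])
    content = data['content']
```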
@ -1,30 +1,46 @@
from json import dumps, loads

import websockets


# Define the asynchronous function to test the WebSocket connection
async def test():
    # Establish a WebSocket connection with the specified URL
    async with websockets.connect('wss://chatgpt.func.icu/conversation+ws') as wss:

        # Prepare the message payload as a JSON object
        payload = {
            'content_type': 'text',
            'engine': 'chat-gpt',
            'parts': ['hello world'],
            'options': {}
        }

        # Send the payload to the WebSocket server
        await wss.send(dumps(obj=payload, separators=(',', ':')))

        # Initialize a variable to track the end of the conversation
        ended = None

        # Continuously receive and process messages until the conversation ends
        while not ended:
            try:
                # Receive and parse the JSON response from the server
                response = await wss.recv()
                json_response = loads(response)

                # Print the entire JSON response
                print(json_response)

                # Check for the end of the conversation
                ended = json_response.get('eof')

                # If the conversation has not ended, print the received message
                if not ended:
                    print(json_response['content']['parts'][0])

            # Handle cases when the connection is closed by the server
            except websockets.ConnectionClosed:
                break
@ -1,11 +1,43 @@
# experimental, needs chat.openai.com to be loaded with cf_clearance on browser (can be closed after)
# Import required libraries
from uuid import uuid4

from browser_cookie3 import chrome
from tls_client import Session


class OpenAIChat:
    def __init__(self):
        self.client = Session(client_identifier='chrome110')
        self._load_cookies()
        self._set_headers()

    def _load_cookies(self):
        # Load cookies for the specified domain
        for cookie in chrome(domain_name='chat.openai.com'):
            self.client.cookies[cookie.name] = cookie.value

    def _set_headers(self):
        # Set headers for the client
        self.client.headers = {
            'authority': 'chat.openai.com',
            'accept': 'text/event-stream',
            'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
            'authorization': 'Bearer ' + self.session_auth()['accessToken'],
            'cache-control': 'no-cache',
            'content-type': 'application/json',
            'origin': 'https://chat.openai.com',
            'pragma': 'no-cache',
            'referer': 'https://chat.openai.com/chat',
            'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"macOS"',
            'sec-fetch-dest': 'empty',
            'sec-fetch-mode': 'cors',
            'sec-fetch-site': 'same-origin',
            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
        }

    def session_auth(self):
        headers = {
            'authority': 'chat.openai.com',
            'accept': '*/*',
@ -22,33 +54,10 @@ def session_auth(client):
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
} }
return client.get('https://chat.openai.com/api/auth/session', headers=headers).json() return self.client.get('https://chat.openai.com/api/auth/session', headers=headers).json()
client = Session(client_identifier='chrome110') def send_message(self, message):
response = self.client.post('https://chat.openai.com/backend-api/conversation', json={
for cookie in chrome(domain_name='chat.openai.com'):
client.cookies[cookie.name] = cookie.value
client.headers = {
'authority': 'chat.openai.com',
'accept': 'text/event-stream',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'authorization': 'Bearer ' + session_auth(client)['accessToken'],
'cache-control': 'no-cache',
'content-type': 'application/json',
'origin': 'https://chat.openai.com',
'pragma': 'no-cache',
'referer': 'https://chat.openai.com/chat',
'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
}
response = client.post('https://chat.openai.com/backend-api/conversation', json = {
'action': 'next', 'action': 'next',
'messages': [ 'messages': [
{ {
@ -59,7 +68,7 @@ response = client.post('https://chat.openai.com/backend-api/conversation', json
'content': { 'content': {
'content_type': 'text', 'content_type': 'text',
'parts': [ 'parts': [
'hello world', message,
], ],
}, },
}, },
@ -67,6 +76,12 @@ response = client.post('https://chat.openai.com/backend-api/conversation', json
'parent_message_id': '9b4682f7-977c-4c8a-b5e6-9713e73dfe01', 'parent_message_id': '9b4682f7-977c-4c8a-b5e6-9713e73dfe01',
'model': 'text-davinci-002-render-sha', 'model': 'text-davinci-002-render-sha',
'timezone_offset_min': -120, 'timezone_offset_min': -120,
}) })
print(response.text) return response.text
if __name__ == "__main__":
chat = OpenAIChat()
response = chat.send_message("hello world")
print(response)
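`send_message` returns `response.text` verbatim, and the `accept: text/event-stream` header means that text is a series of `data: {...}` server-sent events rather than a single JSON object. A minimal sketch of pulling the final message out of such a payload (the sample transcript and the `message.content.parts` path are illustrative assumptions, not captured API output):

```python
import json


def last_message(sse_text: str) -> str:
    """Return the message parts from the newest data: event in an SSE transcript."""
    parts = []
    for line in sse_text.splitlines():
        # skip non-data lines and the terminating sentinel
        if not line.startswith('data: ') or line == 'data: [DONE]':
            continue
        event = json.loads(line[len('data: '):])
        # each event carries the full message so far; keep the latest
        parts = event['message']['content']['parts']
    return ''.join(parts)


sample = (
    'data: {"message": {"content": {"parts": ["Hel"]}}}\n'
    'data: {"message": {"content": {"parts": ["Hello world"]}}}\n'
    'data: [DONE]\n'
)
print(last_message(sample))  # Hello world
```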

View File

@@ -1,7 +1,8 @@
import json
import re

import requests

headers = {
    'authority': 'openai.a2hosted.com',
    'accept': 'text/event-stream',
@@ -13,10 +14,12 @@ headers = {
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.0.0',
}


def create_query_param(conversation):
    encoded_conversation = json.dumps(conversation)
    return encoded_conversation.replace(" ", "%20").replace('"', '%22').replace("'", "%27")
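Hand-escaping only spaces and quotes leaves characters such as `&`, `#`, and `+` intact, which can break the query string. A sketch of an equivalent built on the standard library's `urllib.parse.quote`, which percent-encodes all reserved characters (an alternative to, not the behavior of, the function above):

```python
import json
from urllib.parse import quote


def create_query_param(conversation) -> str:
    # quote() percent-encodes every reserved character, not just spaces and quotes
    return quote(json.dumps(conversation), safe='')


print(create_query_param([{"role": "user", "content": "hi"}]))
```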
user_input = input("Enter your message: ")
data = [

View File

@@ -1,9 +1,9 @@
from json import dumps
# from mail import MailClient
from re import findall

from requests import post, get

html = get('https://developermail.com/mail/')
print(html.cookies.get('mailboxId'))
email = findall(r'mailto:(.*)">', html.text)[0]
@@ -15,9 +15,9 @@ headers = {
}

json_data = {
    'email': email,
    'password': 'T4xyt4Yn6WWQ4NC',
    'data': {},
    'gotrue_meta_security': {},
}

View File

@@ -1,6 +1,8 @@
import email

import requests


class MailClient:
    def __init__(self):

View File

@@ -30,8 +30,7 @@ json_data = {
    ],
}

response = requests.post('https://openprompt.co/api/chat2', cookies=cookies, headers=headers, json=json_data,
                         stream=True)

for chunk in response.iter_content(chunk_size=1024):
    print(chunk)

View File

@@ -1,7 +1,6 @@
access_token = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJhdXRoZW50aWNhdGVkIiwiZXhwIjoxNjgyMjk0ODcxLCJzdWIiOiI4NWNkNTNiNC1lZTUwLTRiMDQtOGJhNS0wNTUyNjk4ODliZDIiLCJlbWFpbCI6ImNsc2J5emdqcGhiQGJ1Z2Zvby5jb20iLCJwaG9uZSI6IiIsImFwcF9tZXRhZGF0YSI6eyJwcm92aWRlciI6ImVtYWlsIiwicHJvdmlkZXJzIjpbImVtYWlsIl19LCJ1c2VyX21ldGFkYXRhIjp7fSwicm9sZSI6ImF1dGhlbnRpY2F0ZWQiLCJhYWwiOiJhYWwxIiwiYW1yIjpbeyJtZXRob2QiOiJvdHAiLCJ0aW1lc3RhbXAiOjE2ODE2OTAwNzF9XSwic2Vzc2lvbl9pZCI6ImY4MTg1YTM5LTkxYzgtNGFmMy1iNzAxLTdhY2MwY2MwMGNlNSJ9.UvcTfpyIM1TdzM8ZV6UAPWfa0rgNq4AiqeD0INy6zV'
supabase_auth_token = '%5B%22eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJhdXRoZW50aWNhdGVkIiwiZXhwIjoxNjgyMjk0ODcxLCJzdWIiOiI4NWNkNTNiNC1lZTUwLTRiMDQtOGJhNS0wNTUyNjk4ODliZDIiLCJlbWFpbCI6ImNsc2J5emdqcGhiQGJ1Z2Zvby5jb20iLCJwaG9uZSI6IiIsImFwcF9tZXRhZGF0YSI6eyJwcm92aWRlciI6ImVtYWlsIiwicHJvdmlkZXJzIjpbImVtYWlsIl19LCJ1c2VyX21ldGFkYXRhIjp7fSwicm9sZSI6ImF1dGhlbnRpY2F0ZWQiLCJhYWwiOiJhYWwxIiwiYW1yIjpbeyJtZXRob2QiOiJvdHAiLCJ0aW1lc3RhbXAiOjE2ODE2OTAwNzF9XSwic2Vzc2lvbl9pZCI6ImY4MTg1YTM5LTkxYzgtNGFmMy1iNzAxLTdhY2MwY2MwMGNlNSJ9.UvcTfpyIM1TdzM8ZV6UAPWfa0rgNq4AiqeD0INy6zV8%22%2C%22_Zp8uXIA2InTDKYgo8TCqA%22%2Cnull%2Cnull%2Cnull%5D'

idk = [
    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJhdXRoZW50aWNhdGVkIiwiZXhwIjoxNjgyMjk0ODcxLCJzdWIiOiI4NWNkNTNiNC1lZTUwLTRiMDQtOGJhNS0wNTUyNjk4ODliZDIiLCJlbWFpbCI6ImNsc2J5emdqcGhiQGJ1Z2Zvby5jb20iLCJwaG9uZSI6IiIsImFwcF9tZXRhZGF0YSI6eyJwcm92aWRlciI6ImVtYWlsIiwicHJvdmlkZXJzIjpbImVtYWlsIl19LCJ1c2VyX21ldGFkYXRhIjp7fSwicm9sZSI6ImF1dGhlbnRpY2F0ZWQiLCJhYWwiOiJhYWwxIiwiYW1yIjpbeyJtZXRob2QiOiJvdHAiLCJ0aW1lc3RhbXAiOjE2ODE2OTAwNzF9XSwic2Vzc2lvbl9pZCI6ImY4MTg1YTM5LTkxYzgtNGFmMy1iNzAxLTdhY2MwY2MwMGNlNSJ9.UvcTfpyIM1TdzM8ZV6UAPWfa0rgNq4AiqeD0INy6zV8",
    "_Zp8uXIA2InTDKYgo8TCqA", None, None, None]

View File

@@ -1,6 +1,7 @@
from time import time

from requests import post

headers = {
    'authority': 'www.t3nsor.tech',
    'accept': '*/*',
@@ -19,10 +20,9 @@ headers = {
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
}


class T3nsorResponse:
    class Completion:
        class Choices:
            def __init__(self, choice: dict) -> None:
                self.text = choice['text']
@@ -47,7 +47,6 @@ class T3nsorResponse:
            return f'''<__main__.APIResponse.Usage(\n prompt_tokens = {self.prompt_tokens},\n completion_tokens = {self.completion_tokens},\n total_tokens = {self.total_tokens})object at 0x1337>'''

    def __init__(self, response_dict: dict) -> None:
        self.response_dict = response_dict
        self.id = response_dict['id']
        self.object = response_dict['object']
@@ -59,79 +58,79 @@ class T3nsorResponse:
    def json(self) -> dict:
        return self.response_dict


class Completion:
    model = {
        'model': {
            'id': 'gpt-3.5-turbo',
            'name': 'Default (GPT-3.5)'
        }
    }

    def create(
            prompt: str = 'hello world',
            messages: list = []) -> T3nsorResponse:
        response = post('https://www.t3nsor.tech/api/chat', headers=headers, json=Completion.model | {
            'messages': messages,
            'key': '',
            'prompt': prompt
        })

        return T3nsorResponse({
            'id': f'cmpl-1337-{int(time())}',
            'object': 'text_completion',
            'created': int(time()),
            'model': Completion.model,
            'choices': [{
                'text': response.text,
                'index': 0,
                'logprobs': None,
                'finish_reason': 'stop'
            }],
            'usage': {
                'prompt_chars': len(prompt),
                'completion_chars': len(response.text),
                'total_chars': len(prompt) + len(response.text)
            }
        })


class StreamCompletion:
    model = {
        'model': {
            'id': 'gpt-3.5-turbo',
            'name': 'Default (GPT-3.5)'
        }
    }

    def create(
            prompt: str = 'hello world',
            messages: list = []) -> T3nsorResponse:
        print('t3nsor api is down, this may not work, refer to another module')

        response = post('https://www.t3nsor.tech/api/chat', headers=headers, stream=True, json=Completion.model | {
            'messages': messages,
            'key': '',
            'prompt': prompt
        })

        for chunk in response.iter_content(chunk_size=2046):
            yield T3nsorResponse({
                'id': f'cmpl-1337-{int(time())}',
                'object': 'text_completion',
                'created': int(time()),
                'model': Completion.model,
                'choices': [{
                    'text': chunk.decode(),
                    'index': 0,
                    'logprobs': None,
                    'finish_reason': 'stop'
                }],
                'usage': {
                    'prompt_chars': len(prompt),
                    'completion_chars': len(chunk.decode()),
                    'total_chars': len(prompt) + len(chunk.decode())
                }
            })
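Both `create` methods build the request body with `Completion.model | {...}`, the dict union operator added in Python 3.9; older interpreters raise `TypeError` on it. A quick illustration of the merge semantics the request body relies on:

```python
model = {'model': {'id': 'gpt-3.5-turbo', 'name': 'Default (GPT-3.5)'}}
body = model | {'messages': [], 'key': '', 'prompt': 'hello world'}

# the right-hand operand wins on duplicate keys and neither input is mutated
print(sorted(body))       # ['key', 'messages', 'model', 'prompt']
print('prompt' in model)  # False
```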

View File

@@ -1,12 +1,8 @@
# asyncio.run(gptbz.test())
import requests
image = '/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAAoALQDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3+iiigDkZP+EhS4W0k1S+VntQPtEWmRsgkNwBu4ZsHYQNvTbls5BA6DS7uW6S6E0VwjQ3UsQM0Pl71DZUrydy4IAbvg8CsTx3DbHQLi4uVs9scWzdd+dsAaWI4PlfNjKjpzkDtmpoNSgbWYpLR7Ty5bq5trw/vd3nIowBxtzti53Y6fKT3z2djra56fNbv07HR1z13ZRX/jDyby0+02f9nfdmsEeHd5o/5anndwPkxjjPWuhrh9Mvra88RLqccmnOHtvLEqfaN+1r1lUcjbg4PbO4H+Cqk+hnRi9ZI29E0uC2N1eG3Am+13DITZRwuqlsYG0ZYEKCGJywwT2AtWTapcW1vcPPCiyrE5ils2SRQV+dW/ecMT/3zgj5utZtpdwL4e190e02W9xeb9vm7FOWY78/NnnJ28f3ahkgtptD8JRlbMos9s8QPnbcrEzDy/4sgDjzOMdeaSZbi23f8vmbfn6hBFuktmuWWPJWCNELNuxgbpcDj1Pbr2qJ9bMVyIZNK1JVLyr5qwB1AjUNu+Uk4bovGSRjAqCTwdoElv5B02MReT5G1HZfk8zzMcEfx81YlsJ7NJX0tolZzNK8dyZJA8jDIwd3yjcBkAHjOAM09SP3b/q36mkjiSNXAYBgCNykH8QeRWdfaw1ldSW66XqN0UgE++3iBRsvt2BiQN/8WPQZqharF9oN5osVml1NLbLqUbmUFY/L4CrgYYKy4yoGM5xjhlnc2OoeMrfULV7aQXGkExyYlErJ5oPQ/Jtye/zZ9qLgqaTba0NyzvPtizH7NcQeVM8OJ49u/acbl9VPY96s1geFjF/xOhF9m41Wfd9n8z73BO7f/Fzzt+X0q7c6mWvRY2DwSXcUsQuUff8Auo2ySflB+YqrYyQOmTyARPQmVP32kLqF1cbmsrJZkuni3rcfZ98UfzKvJJUE4JOM5wpODwDl3Meuf2rHbRatcBJXuj5iachjhUovlBmZudrNkEZ3HIOMGlhREhbS9He2a8MO6a4fzmGDMQ3zAk5yZ8DzMgj0yRuWdha2CzLawrEJpnnkx/G7HLMfc0bl3VNf5pff/kVLS8uxFHHJZ3s5Xyo2mZI4y2VBZyN44B6gDrwAcVZ069Go2EV2Le5t/MBPlXMZjkXnGGU9OlULSdbfTt8LWy5mt0J
AkK4YRLjnnODx26Z71TXULEWn/CUWDwmxeDbM4WbkCXJbaB23SnlM5PUDNF7CcObZf12OlpCcDoTz2oVlcZVgRkjIPccGo7hgsSk7ceYg+bP94elUYpamda64915GdH1SESxiTM0KjZmTZtbDHB53Y/u89eK1qw4xD9l0mIC3wLdCg/eYwHh+73x0+9znb71uUkXUSWyCiiimZhRRRQBieL5Hj8LXjxySxuNmGivFtWHzr0lbhfx69O9MvHdZpbKKWYnUluNji+VGikVFULHnkdGbjO05JHPEviyF5/DF7HGkjuQpCx2i3THDA8RNw3Tv069qR0kk0i4uFilF3bSXTwE2a+YGzIAUQnnIPByN46kbjUPc6YNKC9X+SLtjeB9Mt5ZyqzbI1lQzK5R2C/KWGAT8w6dcjHUVzemSyxeCba9e5uWfzIgxl1aOTgXPebGw5BwR3ACdalna8+0R3Kx3nk6jc2MvkjTI2MH97zDnI+4uWOSny4z2Lqxmt/hytvHHIZhFHJsj0yJnyXDEfZ87M9cjPB56ik2y4xSsu7XcnjMsejeJszXBZZrgozaihZAYwQFfGIQM8Bvu9ehrTKuJtOg3y5gKs/8ApAy2Y5B846uMj8Tz/CaqzROH1C3EchW6uHGRZIVx9nHXs4yPvN1PydBV2Lc+u3eUkCJBDtZoAFJzJna/VjgjI/h/4EaaM5PS/wDXRF+iiirOcy7RZE8RanukmKPFA6q9yHVfvg7Y+qfd5J4Y9OhrJ8Nm4FxYJNNdORaXCsJtTS4yVnAyQoG5sfxfw/dPJrUslmGt6rcymQxM0MMStahMALk4cfM65c9cBSGA7mqmi2k9t/ZZuDJJKbSdpHNjHEdzyRvhtv3G5PyjIbBJOVqDpurP5d+zGWtzeLdahZQLNK895PiV7+N/IURKQQMEqNzKAm1tucnggG4Fkhs4INNuJL145oEuHa7BcIAuWOQRkrhiAFzkkEE8rNDJPczWtnG1rG7yfapvsqESsY1AIJPP3hztbPllTjHKvpv2CWKbTUSHdJCk8cVtH+8jUFOSNpGAynOTgJgL1BNRNxf9fmWNGa3fR7U2ty9zDswJZJxMzHvlwSCc5BwccVerBZ3tLf8Atqyguvsxt/n02OyUSsxk3FsHa24bnyM4ycgE9d1WDDIz1I5BHQ471SM6i1uY8cjjSIWLyFjLbDJu1J5Mefn6HryP4snH3hRdmTS5f7T82aS2WBY5Y5LpVjX94Pn+YYzhmydw4UDB4wio/wDY8K+XLuE1qcfY1B4MWfk6DHOT/Bg4+6K1zGkkHlSoroy7WVlGCCOQRSsU5JGUrPo96EZ5p7O7mmmlubm7XFqQoYIobB2fK3Aztwe3TQvX2QKQSMyxDiQJ1dR1P8u/TvWb5bWty2m3KTXlvqMs7Ky2ieVbqVBKSEcHJL4JB3ZwfeLfcQRnTpY7mT7PLZiOdbJSkillzgA44KMScLsBBAOBkuNxu0/6epcQv9s0+LfJzauxBuVJJDRckdXPJ+YcDJH8QrTrN2sNcsxsk2LZyjd9nXaCWj439VPH3RwcZ/hFaVNGc+gUUUUyAooooAxfFVxZxeG9RS7ltVQ25ytwzbCCQBkJ82MkD5eeah0G7tYLi/sZJrKO4fUbjy4oncM/SQ5D9Ww4J25Xniiis2/eO2FNOhf1/CxmamsEGp2+nzx2CwxajYyWKN9o3KdpX+Ebd2I2287ePm973i3UdMg0W+0y4mtUkNqJPKuBJ5ewuEBYx8gbiBxz+FFFS3ZM1p01OdNN/wBaFfVtU0qHxHplx9qsSkEl2853SvIjxwjdtCZXIX7wbt05q7YJdS6nc6vYxWEtpfi2KS+bKsjQhCSWBBG4bhtAAyCcmiinF3k0RWgqdKMl1VvxZfM2s+VkWFh5nl5x9tfG/djGfK6bec468Y/irN1CeUCeHXbrTItPc3O6GN5PNltxHx0I+YKXLYB42455ooqpaIwo2lO1rE1rZjUYrcCO2Giw/Zp
7BYzKrkKu4bh8oAB2EA56HIz0u3uxL+1kbygQpQFt2fmki4GOOuOvfHbNFFPpcTu6nKFpsTU75V8oNJKXIXduOI4hk54zjHTjGO+a0KKKaM59PQxLqNNBMuoQpDFYJEfPQLISp8zcWAXIxh5CcLnOMnHQaFNKkkvtOFoli0k9xqP32Zn24LIFyM7kwRg98c5yUVL3No6xTfV2/IrxyW0vh21kQ2phaexKn97s5aErj+LPTbnj7u7+KujoopxZNZW+9/oQXdpBfWk1rcxiSGVGjdSSMhgQeRyOCRxWOtvbXU0Ol6mIHksJbea0IMoJYISGy3U5ST+JuB83uUUMVJuz121JnaL/AITOBSYPOGnyEA7/ADdvmJnH8G3IHX5s4xxmtmiihdRVFZR9AoooqjI//9k=' image = '/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAAoALQDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3+iiigDkZP+EhS4W0k1S+VntQPtEWmRsgkNwBu4ZsHYQNvTbls5BA6DS7uW6S6E0VwjQ3UsQM0Pl71DZUrydy4IAbvg8CsTx3DbHQLi4uVs9scWzdd+dsAaWI4PlfNjKjpzkDtmpoNSgbWYpLR7Ty5bq5trw/vd3nIowBxtzti53Y6fKT3z2djra56fNbv07HR1z13ZRX/jDyby0+02f9nfdmsEeHd5o/5anndwPkxjjPWuhrh9Mvra88RLqccmnOHtvLEqfaN+1r1lUcjbg4PbO4H+Cqk+hnRi9ZI29E0uC2N1eG3Am+13DITZRwuqlsYG0ZYEKCGJywwT2AtWTapcW1vcPPCiyrE5ils2SRQV+dW/ecMT/3zgj5utZtpdwL4e190e02W9xeb9vm7FOWY78/NnnJ28f3ahkgtptD8JRlbMos9s8QPnbcrEzDy/4sgDjzOMdeaSZbi23f8vmbfn6hBFuktmuWWPJWCNELNuxgbpcDj1Pbr2qJ9bMVyIZNK1JVLyr5qwB1AjUNu+Uk4bovGSRjAqCTwdoElv5B02MReT5G1HZfk8zzMcEfx81YlsJ7NJX0tolZzNK8dyZJA8jDIwd3yjcBkAHjOAM09SP3b/q36mkjiSNXAYBgCNykH8QeRWdfaw1ldSW66XqN0UgE++3iBRsvt2BiQN/8WPQZqharF9oN5osVml1NLbLqUbmUFY/L4CrgYYKy4yoGM5xjhlnc2OoeMrfULV7aQXGk
ExyYlErJ5oPQ/Jtye/zZ9qLgqaTba0NyzvPtizH7NcQeVM8OJ49u/acbl9VPY96s1geFjF/xOhF9m41Wfd9n8z73BO7f/Fzzt+X0q7c6mWvRY2DwSXcUsQuUff8Auo2ySflB+YqrYyQOmTyARPQmVP32kLqF1cbmsrJZkuni3rcfZ98UfzKvJJUE4JOM5wpODwDl3Meuf2rHbRatcBJXuj5iachjhUovlBmZudrNkEZ3HIOMGlhREhbS9He2a8MO6a4fzmGDMQ3zAk5yZ8DzMgj0yRuWdha2CzLawrEJpnnkx/G7HLMfc0bl3VNf5pff/kVLS8uxFHHJZ3s5Xyo2mZI4y2VBZyN44B6gDrwAcVZ069Go2EV2Le5t/MBPlXMZjkXnGGU9OlULSdbfTt8LWy5mt0JAkK4YRLjnnODx26Z71TXULEWn/CUWDwmxeDbM4WbkCXJbaB23SnlM5PUDNF7CcObZf12OlpCcDoTz2oVlcZVgRkjIPccGo7hgsSk7ceYg+bP94elUYpamda64915GdH1SESxiTM0KjZmTZtbDHB53Y/u89eK1qw4xD9l0mIC3wLdCg/eYwHh+73x0+9znb71uUkXUSWyCiiimZhRRRQBieL5Hj8LXjxySxuNmGivFtWHzr0lbhfx69O9MvHdZpbKKWYnUluNji+VGikVFULHnkdGbjO05JHPEviyF5/DF7HGkjuQpCx2i3THDA8RNw3Tv069qR0kk0i4uFilF3bSXTwE2a+YGzIAUQnnIPByN46kbjUPc6YNKC9X+SLtjeB9Mt5ZyqzbI1lQzK5R2C/KWGAT8w6dcjHUVzemSyxeCba9e5uWfzIgxl1aOTgXPebGw5BwR3ACdalna8+0R3Kx3nk6jc2MvkjTI2MH97zDnI+4uWOSny4z2Lqxmt/hytvHHIZhFHJsj0yJnyXDEfZ87M9cjPB56ik2y4xSsu7XcnjMsejeJszXBZZrgozaihZAYwQFfGIQM8Bvu9ehrTKuJtOg3y5gKs/8ApAy2Y5B846uMj8Tz/CaqzROH1C3EchW6uHGRZIVx9nHXs4yPvN1PydBV2Lc+u3eUkCJBDtZoAFJzJna/VjgjI/h/4EaaM5PS/wDXRF+iiirOcy7RZE8RanukmKPFA6q9yHVfvg7Y+qfd5J4Y9OhrJ8Nm4FxYJNNdORaXCsJtTS4yVnAyQoG5sfxfw/dPJrUslmGt6rcymQxM0MMStahMALk4cfM65c9cBSGA7mqmi2k9t/ZZuDJJKbSdpHNjHEdzyRvhtv3G5PyjIbBJOVqDpurP5d+zGWtzeLdahZQLNK895PiV7+N/IURKQQMEqNzKAm1tucnggG4Fkhs4INNuJL145oEuHa7BcIAuWOQRkrhiAFzkkEE8rNDJPczWtnG1rG7yfapvsqESsY1AIJPP3hztbPllTjHKvpv2CWKbTUSHdJCk8cVtH+8jUFOSNpGAynOTgJgL1BNRNxf9fmWNGa3fR7U2ty9zDswJZJxMzHvlwSCc5BwccVerBZ3tLf8Atqyguvsxt/n02OyUSsxk3FsHa24bnyM4ycgE9d1WDDIz1I5BHQ471SM6i1uY8cjjSIWLyFjLbDJu1J5Mefn6HryP4snH3hRdmTS5f7T82aS2WBY5Y5LpVjX94Pn+YYzhmydw4UDB4wio/wDY8K+XLuE1qcfY1B4MWfk6DHOT/Bg4+6K1zGkkHlSoroy7WVlGCCOQRSsU5JGUrPo96EZ5p7O7mmmlubm7XFqQoYIobB2fK3Aztwe3TQvX2QKQSMyxDiQJ1dR1P8u/TvWb5bWty2m3KTXlvqMs7Ky2ieVbqVBKSEcHJL4JB3ZwfeLfcQRnTpY7mT7PLZiOdbJSkillzgA44KMScLsBBAOBkuNxu0/6epcQv9s0+LfJzauxBuVJJDRckdXPJ+YcDJH8QrTrN2sNcsxsk2LZyjd9nXaCWj439VPH3RwcZ/hFaVNGc+gUUUUyAooooAxfFVxZxeG9
RS7ltVQ25ytwzbCCQBkJ82MkD5eeah0G7tYLi/sZJrKO4fUbjy4oncM/SQ5D9Ww4J25Xniiis2/eO2FNOhf1/CxmamsEGp2+nzx2CwxajYyWKN9o3KdpX+Ebd2I2287ePm973i3UdMg0W+0y4mtUkNqJPKuBJ5ewuEBYx8gbiBxz+FFFS3ZM1p01OdNN/wBaFfVtU0qHxHplx9qsSkEl2853SvIjxwjdtCZXIX7wbt05q7YJdS6nc6vYxWEtpfi2KS+bKsjQhCSWBBG4bhtAAyCcmiinF3k0RWgqdKMl1VvxZfM2s+VkWFh5nl5x9tfG/djGfK6bec468Y/irN1CeUCeHXbrTItPc3O6GN5PNltxHx0I+YKXLYB42455ooqpaIwo2lO1rE1rZjUYrcCO2Giw/Zp7BYzKrkKu4bh8oAB2EA56HIz0u3uxL+1kbygQpQFt2fmki4GOOuOvfHbNFFPpcTu6nKFpsTU75V8oNJKXIXduOI4hk54zjHTjGO+a0KKKaM59PQxLqNNBMuoQpDFYJEfPQLISp8zcWAXIxh5CcLnOMnHQaFNKkkvtOFoli0k9xqP32Zn24LIFyM7kwRg98c5yUVL3No6xTfV2/IrxyW0vh21kQ2phaexKn97s5aErj+LPTbnj7u7+KujoopxZNZW+9/oQXdpBfWk1rcxiSGVGjdSSMhgQeRyOCRxWOtvbXU0Ol6mIHksJbea0IMoJYISGy3U5ST+JuB83uUUMVJuz121JnaL/AITOBSYPOGnyEA7/ADdvmJnH8G3IHX5s4xxmtmiihdRVFZR9AoooqjI//9k='
response = requests.get('https://ocr.holey.cc/ncku?base64_str=%s' % image)  # .split('base64,')[1])
print(response.content)
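Interpolating the raw base64 string into the URL leaves `+`, `/`, and `=` unescaped, and those characters are significant in a query string. A sketch of letting the standard library encode the parameter instead (the short `image` value here is a stand-in for the real payload):

```python
from urllib.parse import urlencode

image = 'AAA+BBB/CCC='  # stand-in for the real base64-encoded JPEG
# urlencode() percent-encodes the reserved base64 characters automatically
url = 'https://ocr.holey.cc/ncku?' + urlencode({'base64_str': image})
print(url)  # https://ocr.holey.cc/ncku?base64_str=AAA%2BBBB%2FCCC%3D
```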

View File

@@ -1,8 +1,10 @@
from json import loads
from queue import Queue, Empty
from re import findall
from threading import Thread

from curl_cffi import requests


class Completion:
    # experimental
@@ -16,15 +18,16 @@ class Completion:
    def request():
        headers = {
            'authority': 'chatbot.theb.ai',
            'content-type': 'application/json',
            'origin': 'https://chatbot.theb.ai',
            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
        }

        requests.post('https://chatbot.theb.ai/api/chat-process', headers=headers,
                      content_callback=Completion.handle_stream_response,
                      json={
                          'prompt': 'hello world',
                          'options': {}
                      }
                      )
@@ -48,10 +51,12 @@ class Completion:
    def handle_stream_response(response):
        Completion.message_queue.put(response.decode())


def start():
    for message in Completion.create():
        yield message['delta']


if __name__ == '__main__':
    for message in start():
        print(message)
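`handle_stream_response` feeds `message_queue` from a network callback while the consumer drains it. Since `Completion.create` itself is elided by the diff, here is a hedged, self-contained reconstruction of that producer/consumer pattern with a stubbed producer thread in place of the HTTP stream:

```python
from queue import Queue, Empty
from threading import Thread

message_queue: Queue = Queue()


def producer() -> None:
    # stand-in for the network callback that would feed handle_stream_response
    for delta in ('Hel', 'lo', ' world'):
        message_queue.put(delta)
    message_queue.put(None)  # sentinel: stream finished


worker = Thread(target=producer)
worker.start()

chunks = []
while True:
    try:
        delta = message_queue.get(timeout=1)
    except Empty:  # no data within the timeout: treat the stream as stalled
        break
    if delta is None:
        break
    chunks.append(delta)
worker.join()

print(''.join(chunks))  # Hello world
```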

View File

@@ -1,6 +1,5 @@
import requests

token = requests.get('https://play.vercel.ai/openai.jpeg', headers={
    'authority': 'play.vercel.ai',
    'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
@@ -15,7 +14,7 @@ headers = {
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
}

for chunk in requests.post('https://play.vercel.ai/api/generate', headers=headers, stream=True, json={
    'prompt': 'hi',
    'model': 'openai:gpt-3.5-turbo',
    'temperature': 0.7,
@@ -25,5 +24,4 @@ for chunk in requests.post('https://play.vercel.ai/api/generate', headers=header
    'frequencyPenalty': 1,
    'presencePenalty': 1,
    'stopSequences': []}).iter_lines():
    print(chunk)
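`iter_lines()` yields `bytes` and emits empty chunks for blank keep-alive lines, so the raw `print(chunk)` output is noisy. A small generic helper for decoding such a stream (not part of the script above, just a sketch of the cleanup):

```python
def decoded_lines(chunks):
    """Decode an iter_lines()-style byte stream, skipping keep-alive blanks."""
    for chunk in chunks:
        if chunk:  # iter_lines() emits b'' for blank lines
            yield chunk.decode('utf-8')


print(list(decoded_lines([b'hello', b'', b'world'])))  # ['hello', 'world']
```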

View File

@@ -1,21 +1,25 @@
from random import choice
from time import time

from colorama import Fore, init;

from names import get_first_name, get_last_name
from requests import Session
from requests import post

init()


class logger:
    @staticmethod
    def info(string) -> print:
        import datetime
        now = datetime.datetime.now()
        return print(
            f"{Fore.CYAN}{now.strftime('%Y-%m-%d %H:%M:%S')} {Fore.BLUE}INFO {Fore.MAGENTA}__main__ -> {Fore.RESET}{string}")


class SonicResponse:
    class Completion:
        class Choices:
            def __init__(self, choice: dict) -> None:
                self.text = choice['text']
@@ -40,7 +44,6 @@ class SonicResponse:
            return f'''<__main__.APIResponse.Usage(\n prompt_tokens = {self.prompt_tokens},\n completion_tokens = {self.completion_tokens},\n total_tokens = {self.total_tokens})object at 0x1337>'''

    def __init__(self, response_dict: dict) -> None:
        self.response_dict = response_dict
        self.id = response_dict['id']
        self.object = response_dict['object']
@@ -52,22 +55,23 @@ class SonicResponse:
    def json(self) -> dict:
        return self.response_dict


class Account:
    session = Session()
    session.headers = {
        "connection": "keep-alive",
        "sec-ch-ua": "\"Not_A Brand\";v=\"99\", \"Google Chrome\";v=\"109\", \"Chromium\";v=\"109\"",
        "accept": "application/json, text/plain, */*",
        "content-type": "application/json",
        "sec-ch-ua-mobile": "?0",
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36",
        "sec-ch-ua-platform": "\"Windows\"",
        "sec-fetch-site": "same-origin",
        "sec-fetch-mode": "cors",
        "sec-fetch-dest": "empty",
        # "accept-encoding" : "gzip, deflate, br",
        "accept-language": "en-GB,en-US;q=0.9,en;q=0.8",
        "cookie": ""
    }

    @staticmethod
@@ -78,10 +82,10 @@ class Account:
        hosts = ['gmail.com', 'protonmail.com', 'proton.me', 'outlook.com']

        return {
            "email": f"{f_name.lower()}.{l_name.lower()}@{choice(hosts)}",
            "password": password,
            "confirm_password": password,
            "full_name": f'{f_name} {l_name}'
        }

    @staticmethod
@@ -90,13 +94,13 @@ class Account:
        try:
            user = Account.get_user()
            start = time()
            response = Account.session.post("https://app.writesonic.com/api/session-login", json=user | {
                "utmParams": "{}",
                "visitorId": "0",
                "locale": "en",
                "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36",
                "signInWith": "password",
                "request_type": "signup",
            })

            if logging:
@@ -105,7 +109,8 @@ class Account:
                logger.info(f"\x1b[31mtoken\x1b[0m : '{response.json()['token'][:30]}...'")

            start = time()
            response = Account.session.post("https://api.writesonic.com/v1/business/set-business-active",
                                            headers={"authorization": "Bearer " + response.json()['token']})
            key = response.json()["business"]["api_key"]
            if logging: logger.info(f"\x1b[31mgot key\x1b[0m : '{key}' ({int(time() - start)}s)")
@@ -129,30 +134,30 @@ class Completion:
               enable_memory: bool = False,
               enable_google_results: bool = False,
               history_data: list = []) -> SonicResponse:
        response = post('https://api.writesonic.com/v2/business/content/chatsonic?engine=premium',
                        headers={"X-API-KEY": api_key},
                        json={
                            "enable_memory": enable_memory,
                            "enable_google_results": enable_google_results,
                            "input_text": prompt,
                            "history_data": history_data}).json()

        return SonicResponse({
            'id': f'cmpl-premium-{int(time())}',
            'object': 'text_completion',
            'created': int(time()),
            'model': 'premium',
            'choices': [{
                'text': response['message'],
                'index': 0,
                'logprobs': None,
                'finish_reason': 'stop'
            }],
            'usage': {
                'prompt_chars': len(prompt),
                'completion_chars': len(response['message']),
                'total_chars': len(prompt) + len(response['message'])
            }
        })
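`Account.get_user` (partly elided by the hunk above) assembles a throwaway identity from random names before signing up. A stdlib-only sketch of the same idea, with small hypothetical name pools standing in for the third-party `names` package:

```python
from random import choice


def get_user(password: str = 'T4xyt4Yn6WWQ4NC') -> dict:
    first = choice(['Alice', 'Bob', 'Carol'])   # stand-in for names.get_first_name()
    last = choice(['Smith', 'Jones', 'Brown'])  # stand-in for names.get_last_name()
    hosts = ['gmail.com', 'protonmail.com', 'proton.me', 'outlook.com']
    return {
        'email': f'{first.lower()}.{last.lower()}@{choice(hosts)}',
        'password': password,
        'confirm_password': password,
        'full_name': f'{first} {last}',
    }


user = get_user()
print(user['email'])
```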