~ | g4f `v-0.1.7.2`

patch / unpatch providers
This commit is contained in:
abc 2023-10-21 00:52:19 +01:00
parent 29e2302fb2
commit ae8dae82cf
20 changed files with 195 additions and 140 deletions


@@ -2,7 +2,7 @@
 By using this repository or any code related to it, you agree to the [legal notice](./LEGAL_NOTICE.md). The author is not responsible for any copies, forks, reuploads made by other users, or anything else related to gpt4free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this Repository uses.
-- latest pypi version: [`0.1.7.1`](https://pypi.org/project/g4f/0.1.7.1)
+- latest pypi version: [`0.1.7.2`](https://pypi.org/project/g4f/0.1.7.2)
 ```sh
 pip install -U g4f
 ```
@@ -412,37 +412,34 @@ if __name__ == "__main__":
 | Website| Provider| gpt-3.5 | Stream | Async | Status | Auth |
 | ------ | ------- | ------- | --------- | --------- | ------ | ---- |
+| [www.aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [chat3.aiyunos.top](https://chat3.aiyunos.top/) | `g4f.Provider.AItianhuSpace` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [e.aiask.me](https://e.aiask.me) | `g4f.Provider.AiAsk` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [chat-gpt.org](https://chat-gpt.org/chat) | `g4f.Provider.Aichat` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [www.chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [chatgpt.ai](https://chatgpt.ai/) | `g4f.Provider.ChatgptAi` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [chat.chatgptdemo.net](https://chat.chatgptdemo.net) | `g4f.Provider.ChatgptDemo` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [chatgptlogin.ai](https://chatgptlogin.ai) | `g4f.Provider.ChatgptLogin` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [chatgptfree.ai](https://chatgptfree.ai) | `g4f.Provider.ChatgptFree` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [freegpts1.aifree.site](https://freegpts1.aifree.site/) | `g4f.Provider.FreeGpt` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [gptalk.net](https://gptalk.net) | `g4f.Provider.GPTalk` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [ai18.gptforlove.com](https://ai18.gptforlove.com) | `g4f.Provider.GptForLove` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [gptgo.ai](https://gptgo.ai) | `g4f.Provider.GptGo` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [gptgod.site](https://gptgod.site) | `g4f.Provider.GptGod` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [www.llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama2` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [noowai.com](https://noowai.com) | `g4f.Provider.NoowAi` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [opchatgpts.net](https://opchatgpts.net) | `g4f.Provider.Opchatgpts` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
 | [theb.ai](https://theb.ai) | `g4f.Provider.Theb` | ✔️ | ✔️ | ❌ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
 | [sdk.vercel.ai](https://sdk.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ✔️ | ❌ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [chat9.yqcloud.top](https://chat9.yqcloud.top/) | `g4f.Provider.Yqcloud` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [www.aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [chat.acytoo.com](https://chat.acytoo.com) | `g4f.Provider.Acytoo` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [aiservice.vercel.app](https://aiservice.vercel.app/) | `g4f.Provider.AiService` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [aibn.cc](https://aibn.cc) | `g4f.Provider.Aibn` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [ai.ls](https://ai.ls) | `g4f.Provider.Ails` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [chataigpt.org](https://chataigpt.org) | `g4f.Provider.ChatAiGpt` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
+| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
+| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
+| [chat.chatgptdemo.net](https://chat.chatgptdemo.net) | `g4f.Provider.ChatgptDemo` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [chatgptduo.com](https://chatgptduo.com) | `g4f.Provider.ChatgptDuo` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
-| [chatgptfree.ai](https://chatgptfree.ai) | `g4f.Provider.ChatgptFree` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
+| [chatgptlogin.ai](https://chatgptlogin.ai) | `g4f.Provider.ChatgptLogin` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [ava-ai-ef611.web.app](https://ava-ai-ef611.web.app) | `g4f.Provider.CodeLinkAva` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [cromicle.top](https://cromicle.top) | `g4f.Provider.Cromicle` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [chat.dfehub.com](https://chat.dfehub.com/) | `g4f.Provider.DfeHub` | ✔️ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
@@ -451,8 +448,10 @@ if __name__ == "__main__":
 | [chat9.fastgpt.me](https://chat9.fastgpt.me/) | `g4f.Provider.FastGpt` | ✔️ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [forefront.com](https://forefront.com) | `g4f.Provider.Forefront` | ✔️ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [chat.getgpt.world](https://chat.getgpt.world/) | `g4f.Provider.GetGpt` | ✔️ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
+| [gptgod.site](https://gptgod.site) | `g4f.Provider.GptGod` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [komo.ai](https://komo.ai/api/ask) | `g4f.Provider.Komo` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [ai.okmiku.com](https://ai.okmiku.com) | `g4f.Provider.MikuChat` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
+| [opchatgpts.net](https://opchatgpts.net) | `g4f.Provider.Opchatgpts` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [www.perplexity.ai](https://www.perplexity.ai) | `g4f.Provider.PerplexityAi` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [talkai.info](https://talkai.info) | `g4f.Provider.TalkAi` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [p5.v50.ltd](https://p5.v50.ltd) | `g4f.Provider.V50` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
@@ -460,6 +459,7 @@ if __name__ == "__main__":
 | [wewordle.org](https://wewordle.org) | `g4f.Provider.Wewordle` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [chat.wuguokai.xyz](https://chat.wuguokai.xyz) | `g4f.Provider.Wuguokai` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [chat.ylokh.xyz](https://chat.ylokh.xyz) | `g4f.Provider.Ylokh` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
+| [chat9.yqcloud.top](https://chat9.yqcloud.top/) | `g4f.Provider.Yqcloud` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 ### Other Models


@@ -10,7 +10,7 @@ from .base_provider import AsyncGeneratorProvider, format_prompt, get_cookies

 class AItianhu(AsyncGeneratorProvider):
     url = "https://www.aitianhu.com"
-    working = False
+    working = True
     supports_gpt_35_turbo = True

     @classmethod
@@ -23,9 +23,9 @@ class AItianhu(AsyncGeneratorProvider):
         timeout: int = 120, **kwargs) -> AsyncResult:
         if not cookies:
-            cookies = browser_cookie3.chrome(domain_name='www.aitianhu.com')
+            cookies = get_cookies(domain_name='www.aitianhu.com')
         if not cookies:
-            raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies")
+            raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies [refresh https://www.aitianhu.com on chrome]")
         data = {
             "prompt": format_prompt(messages),
@@ -68,7 +68,6 @@ class AItianhu(AsyncGeneratorProvider):
             if b"platform's risk control" in line:
                 raise RuntimeError("Platform's Risk Control")
-            print(line)
             line = json.loads(line)
             if "detail" in line:
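The hunks above replace the hard `browser_cookie3` dependency with the shared `get_cookies` helper and fail fast with a "refresh on chrome" hint when nothing is found. That gating pattern recurs across the patched providers; a minimal standalone sketch (where `load_cookies` is a hypothetical stand-in for g4f's helper, which actually reads the local browser's cookie store):

```python
# Sketch of the cookie-gating pattern used by the patched providers.
# `load_cookies` is a hypothetical stub standing in for g4f's get_cookies
# helper; the real helper queries the browser's cookie store on disk.

def load_cookies(domain_name: str) -> dict:
    # Stub: pretend only one domain has a session cookie available.
    store = {"www.aitianhu.com": {"session": "abc123"}}
    return store.get(domain_name, {})

def require_cookies(domain_name: str, cookies: dict = None) -> dict:
    """Fall back to the browser cookie store, then fail with a refresh hint."""
    if not cookies:
        cookies = load_cookies(domain_name)
    if not cookies:
        raise RuntimeError(
            f"provider requires cookies [refresh https://{domain_name} on chrome]"
        )
    return cookies
```

Explicitly passed cookies win; otherwise the browser store is consulted, and an empty result is an immediate, actionable error rather than a silent 403 later.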


@@ -1,9 +1,9 @@
 from __future__ import annotations

 import random, json

+from ..debug import logging
 from ..typing import AsyncResult, Messages
 from ..requests import StreamSession
 from .base_provider import AsyncGeneratorProvider, format_prompt, get_cookies

 domains = {
@@ -17,37 +17,37 @@ class AItianhuSpace(AsyncGeneratorProvider):
     supports_gpt_35_turbo = True

     @classmethod
-    async def create_async_generator(
-        cls,
-        model: str,
-        messages: Messages,
-        proxy: str = None,
-        domain: str = None,
-        cookies: dict = None,
-        timeout: int = 120,
-        **kwargs
-    ) -> AsyncResult:
+    async def create_async_generator(cls,
+        model: str,
+        messages: Messages,
+        proxy: str = None,
+        domain: str = None,
+        cookies: dict = None,
+        timeout: int = 10, **kwargs) -> AsyncResult:
         if not model:
             model = "gpt-3.5-turbo"
         elif not model in domains:
             raise ValueError(f"Model are not supported: {model}")
         if not domain:
             chars = 'abcdefghijklmnopqrstuvwxyz0123456789'
             rand = ''.join(random.choice(chars) for _ in range(6))
             domain = f"{rand}.{domains[model]}"
+            if logging:
+                print(f"AItianhuSpace | using domain: {domain}")
         if not cookies:
-            cookies = get_cookies(domain)
+            cookies = get_cookies('.aitianhu.space')
         if not cookies:
-            raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies")
+            raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies [refresh https://{domain} on chrome]")
         url = f'https://{domain}'
-        async with StreamSession(
-            proxies={"https": proxy},
-            cookies=cookies,
-            timeout=timeout,
-            impersonate="chrome110",
-            verify=False
-        ) as session:
+        async with StreamSession(proxies={"https": proxy},
+            cookies=cookies, timeout=timeout, impersonate="chrome110", verify=False) as session:
             data = {
                 "prompt": format_prompt(messages),
                 "options": {},


@@ -4,7 +4,8 @@ from aiohttp import ClientSession

 from ..typing import Messages
 from .base_provider import AsyncProvider, format_prompt
+from .helper import get_cookies
+from ..requests import StreamSession

 class Aichat(AsyncProvider):
     url = "https://chat-gpt.org/chat"
@@ -15,27 +16,34 @@ class Aichat(AsyncProvider):
     async def create_async(
         model: str,
         messages: Messages,
-        proxy: str = None,
-        **kwargs
-    ) -> str:
+        proxy: str = None, **kwargs) -> str:
+
+        cookies = get_cookies('chat-gpt.org') if not kwargs.get('cookies') else kwargs.get('cookies')
+        if not cookies:
+            raise RuntimeError(f"g4f.provider.Aichat requires cookies, [refresh https://chat-gpt.org on chrome]")
         headers = {
-            "authority": "chat-gpt.org",
-            "accept": "*/*",
-            "cache-control": "no-cache",
-            "content-type": "application/json",
-            "origin": "https://chat-gpt.org",
-            "pragma": "no-cache",
-            "referer": "https://chat-gpt.org/chat",
-            "sec-ch-ua-mobile": "?0",
-            "sec-ch-ua-platform": '"macOS"',
-            "sec-fetch-dest": "empty",
-            "sec-fetch-mode": "cors",
-            "sec-fetch-site": "same-origin",
-            "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36",
+            'authority': 'chat-gpt.org',
+            'accept': '*/*',
+            'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
+            'content-type': 'application/json',
+            'origin': 'https://chat-gpt.org',
+            'referer': 'https://chat-gpt.org/chat',
+            'sec-ch-ua': '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
+            'sec-ch-ua-mobile': '?0',
+            'sec-ch-ua-platform': '"macOS"',
+            'sec-fetch-dest': 'empty',
+            'sec-fetch-mode': 'cors',
+            'sec-fetch-site': 'same-origin',
+            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36',
         }
-        async with ClientSession(
-            headers=headers
-        ) as session:
+
+        async with StreamSession(headers=headers,
+            cookies=cookies,
+            timeout=6,
+            proxies={"https": proxy} if proxy else None,
+            impersonate="chrome110", verify=False) as session:
             json_data = {
                 "message": format_prompt(messages),
                 "temperature": kwargs.get('temperature', 0.5),
@@ -43,13 +51,14 @@ class Aichat(AsyncProvider):
                 "top_p": kwargs.get('top_p', 1),
                 "frequency_penalty": 0,
             }
-            async with session.post(
-                "https://chat-gpt.org/api/text",
-                proxy=proxy,
-                json=json_data
-            ) as response:
+
+            async with session.post("https://chat-gpt.org/api/text",
+                json=json_data) as response:
                 response.raise_for_status()
                 result = await response.json()
                 if not result['response']:
                     raise Exception(f"Error Response: {result}")
                 return result["message"]


@@ -10,7 +10,7 @@ from .base_provider import AsyncGeneratorProvider

 class ChatForAi(AsyncGeneratorProvider):
     url = "https://chatforai.store"
-    working = True
+    working = False
     supports_gpt_35_turbo = True

     @classmethod


@@ -10,7 +10,7 @@ from .base_provider import AsyncGeneratorProvider

 class Chatgpt4Online(AsyncGeneratorProvider):
     url = "https://chatgpt4online.org"
     supports_gpt_35_turbo = True
-    working = True
+    working = False

     @classmethod
     async def create_async_generator(
@@ -31,6 +31,7 @@ class Chatgpt4Online(AsyncGeneratorProvider):
             "newMessage": messages[-1]["content"],
             "stream": True
         }
+
         async with session.post(cls.url + "/wp-json/mwai-ui/v1/chats/submit", json=data, proxy=proxy) as response:
             response.raise_for_status()
             async for line in response.content:


@@ -10,7 +10,7 @@ from .helper import format_prompt

 class ChatgptDemo(AsyncGeneratorProvider):
     url = "https://chat.chatgptdemo.net"
     supports_gpt_35_turbo = True
-    working = True
+    working = False

     @classmethod
     async def create_async_generator(


@@ -5,6 +5,7 @@ from __future__ import annotations

 import re
 from aiohttp import ClientSession

+from ..requests import StreamSession
 from ..typing import Messages
 from .base_provider import AsyncProvider
 from .helper import format_prompt, get_cookies
@@ -13,7 +14,7 @@ from .helper import format_prompt, get_cookies
 class ChatgptFree(AsyncProvider):
     url = "https://chatgptfree.ai"
     supports_gpt_35_turbo = True
-    working = False
+    working = True
     _post_id = None
     _nonce = None
@@ -23,40 +24,50 @@ class ChatgptFree(AsyncProvider):
         model: str,
         messages: Messages,
         proxy: str = None,
+        cookies: dict = None,
         **kwargs
     ) -> str:
-        cookies = get_cookies('chatgptfree.ai')
+        if not cookies:
+            cookies = get_cookies('chatgptfree.ai')
+        if not cookies:
+            raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies [refresh https://chatgptfree.ai on chrome]")
         headers = {
-            "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0",
-            "Accept": "*/*",
-            "Accept-Language": "de,en-US;q=0.7,en;q=0.3",
-            "Accept-Encoding": "gzip, deflate, br",
-            "Origin": cls.url,
-            "Alt-Used": "chatgptfree.ai",
-            "Connection": "keep-alive",
-            "Referer": f"{cls.url}/",
-            "Sec-Fetch-Dest": "empty",
-            "Sec-Fetch-Mode": "cors",
-            "Sec-Fetch-Site": "same-origin",
-            "Pragma": "no-cache",
-            "Cache-Control": "no-cache",
-            "TE": "trailers"
+            'authority': 'chatgptfree.ai',
+            'accept': '*/*',
+            'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
+            'origin': 'https://chatgptfree.ai',
+            'referer': 'https://chatgptfree.ai/chat/',
+            'sec-ch-ua': '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
+            'sec-ch-ua-mobile': '?0',
+            'sec-ch-ua-platform': '"macOS"',
+            'sec-fetch-dest': 'empty',
+            'sec-fetch-mode': 'cors',
+            'sec-fetch-site': 'same-origin',
+            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36',
         }
-        async with ClientSession(headers=headers) as session:
+
+        async with StreamSession(headers=headers,
+                impersonate="chrome107", proxies={"https": proxy}, timeout=10) as session:
             if not cls._nonce:
-                async with session.get(f"{cls.url}/",
-                        proxy=proxy, cookies=cookies) as response:
+                async with session.get(f"{cls.url}/", cookies=cookies) as response:
                     response.raise_for_status()
                     response = await response.text()
                     result = re.search(r'data-post-id="([0-9]+)"', response)
                     if not result:
                         raise RuntimeError("No post id found")
                     cls._post_id = result.group(1)
                     result = re.search(r'data-nonce="(.*?)"', response)
                     if not result:
                         raise RuntimeError("No nonce found")
                     cls._nonce = result.group(1)
             prompt = format_prompt(messages)
             data = {
                 "_wpnonce": cls._nonce,
@@ -66,6 +77,8 @@ class ChatgptFree(AsyncProvider):
                 "message": prompt,
                 "bot_id": "0"
             }
-            async with session.post(cls.url + "/wp-admin/admin-ajax.php", data=data, proxy=proxy) as response:
+
+            async with session.post(cls.url + "/wp-admin/admin-ajax.php",
+                    data=data, cookies=cookies) as response:
                 response.raise_for_status()
                 return (await response.json())["data"]
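ChatgptFree works by scraping two WordPress AJAX parameters (`data-post-id` and `data-nonce`) out of the landing page before posting to `/wp-admin/admin-ajax.php`. The extraction step, using the same regexes as the diff, can be sketched on its own:

```python
import re

# Sketch of ChatgptFree's parameter scraping: the landing page embeds
# data-post-id and data-nonce attributes that must accompany every
# /wp-admin/admin-ajax.php request. Same regexes as in the diff above.
def extract_ajax_params(html: str) -> tuple[str, str]:
    post_id = re.search(r'data-post-id="([0-9]+)"', html)
    if not post_id:
        raise RuntimeError("No post id found")
    nonce = re.search(r'data-nonce="(.*?)"', html)
    if not nonce:
        raise RuntimeError("No nonce found")
    return post_id.group(1), nonce.group(1)
```

The class caches both values (`_post_id`, `_nonce`) so the landing page is only fetched once per process.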


@@ -13,7 +13,7 @@ from .helper import format_prompt

 class ChatgptLogin(AsyncGeneratorProvider):
     url = "https://chatgptlogin.ai"
     supports_gpt_35_turbo = True
-    working = True
+    working = False
     _user_id = None

     @classmethod


@@ -7,8 +7,7 @@ from ..requests import StreamSession
 from .base_provider import AsyncGeneratorProvider

 domains = [
-    'https://k.aifree.site',
-    'https://p.aifree.site'
+    'https://r.aifree.site'
 ]

 class FreeGpt(AsyncGeneratorProvider):


@@ -2,18 +2,17 @@
 from __future__ import annotations

-from aiohttp import ClientSession
+from ..requests import StreamSession

 from ..typing import Messages
 from .base_provider import AsyncProvider
 from .helper import get_cookies

 class GptChatly(AsyncProvider):
     url = "https://gptchatly.com"
     supports_gpt_35_turbo = True
     supports_gpt_4 = True
-    working = False
+    working = True

     @classmethod
     async def create_async(
@@ -22,9 +21,9 @@ class GptChatly(AsyncProvider):
         messages: Messages,
         proxy: str = None, cookies: dict = None, **kwargs) -> str:
+
+        cookies = get_cookies('gptchatly.com') if not cookies else cookies
         if not cookies:
-            cookies = get_cookies('gptchatly.com')
+            raise RuntimeError(f"g4f.provider.GptChatly requires cookies, [refresh https://gptchatly.com on chrome]")

         if model.startswith("gpt-4"):
             chat_url = f"{cls.url}/fetch-gpt4-response"
@@ -32,25 +31,26 @@ class GptChatly(AsyncProvider):
             chat_url = f"{cls.url}/fetch-response"
         headers = {
-            "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0",
-            "Accept": "*/*",
-            "Accept-Language": "de,en-US;q=0.7,en;q=0.3",
-            "Accept-Encoding": "gzip, deflate, br",
-            "Referer": f"{cls.url}/",
-            "Content-Type": "application/json",
-            "Origin": cls.url,
-            "Connection": "keep-alive",
-            "Sec-Fetch-Dest": "empty",
-            "Sec-Fetch-Mode": "cors",
-            "Sec-Fetch-Site": "same-origin",
-            "Pragma": "no-cache",
-            "Cache-Control": "no-cache",
-            "TE": "trailers",
+            'authority': 'gptchatly.com',
+            'accept': '*/*',
+            'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
+            'content-type': 'application/json',
+            'origin': 'https://gptchatly.com',
+            'referer': 'https://gptchatly.com/',
+            'sec-ch-ua': '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
+            'sec-ch-ua-mobile': '?0',
+            'sec-ch-ua-platform': '"macOS"',
+            'sec-fetch-dest': 'empty',
+            'sec-fetch-mode': 'cors',
+            'sec-fetch-site': 'same-origin',
+            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36',
         }
-        async with ClientSession(headers=headers) as session:
+
+        async with StreamSession(headers=headers,
+                proxies={"https": proxy}, cookies=cookies, impersonate='chrome110') as session:
             data = {
                 "past_conversations": messages
             }
-            async with session.post(chat_url, json=data, proxy=proxy) as response:
+            async with session.post(chat_url, json=data) as response:
                 response.raise_for_status()
                 return (await response.json())["chatGPTResponse"]


@@ -8,7 +8,7 @@ from .helper import format_prompt
 class GptGod(AsyncGeneratorProvider):
     url = "https://gptgod.site"
     supports_gpt_35_turbo = True
-    working = True
+    working = False
 
     @classmethod
     async def create_async_generator(
@@ -18,6 +18,7 @@ class GptGod(AsyncGeneratorProvider):
         proxy: str = None,
         **kwargs
     ) -> AsyncResult:
         headers = {
             "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0",
             "Accept": "text/event-stream",
@@ -32,6 +33,7 @@ class GptGod(AsyncGeneratorProvider):
             "Pragma": "no-cache",
             "Cache-Control": "no-cache",
         }
         async with ClientSession(headers=headers) as session:
             prompt = format_prompt(messages)
             data = {
@@ -42,6 +44,8 @@ class GptGod(AsyncGeneratorProvider):
             response.raise_for_status()
             event = None
             async for line in response.content:
+                print(line)
                 if line.startswith(b'event: '):
                     event = line[7:-1]
                 elif event == b"data" and line.startswith(b"data: "):
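For reference, the `event:`/`data:` handling kept in this hunk can be exercised on its own. The `parse_sse` helper and the sample lines below are illustrative stand-ins; the real provider iterates `response.content` from aiohttp.

```python
def parse_sse(lines):
    """Parse raw SSE byte lines into data payloads.

    Mirrors the GptGod loop: an 'event: ' line sets the current event
    name; a following 'data: ' line carries that event's payload.
    """
    event = None
    payloads = []
    for line in lines:
        if line.startswith(b'event: '):
            event = line[7:-1]            # strip prefix and trailing newline
        elif event == b"data" and line.startswith(b"data: "):
            payloads.append(line[6:-1])   # payload bytes without prefix/newline
    return payloads

sample = [b'event: data\n', b'data: {"content": "Hello"}\n']
print(parse_sse(sample))  # [b'{"content": "Hello"}']
```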

View File

@@ -30,7 +30,7 @@ models = {
 class Liaobots(AsyncGeneratorProvider):
     url = "https://liaobots.site"
-    working = True
+    working = False
     supports_gpt_35_turbo = True
     supports_gpt_4 = True
     _auth_code = None

View File

@@ -10,16 +10,15 @@ from .base_provider import AsyncGeneratorProvider
 class Opchatgpts(AsyncGeneratorProvider):
     url = "https://opchatgpts.net"
     supports_gpt_35_turbo = True
-    working = True
+    working = False
 
     @classmethod
     async def create_async_generator(
         cls,
         model: str,
         messages: Messages,
-        proxy: str = None,
-        **kwargs
-    ) -> AsyncResult:
+        proxy: str = None, **kwargs) -> AsyncResult:
         headers = {
             "User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
             "Accept" : "*/*",
@@ -58,7 +57,6 @@ class Opchatgpts(AsyncGeneratorProvider):
             elif line["type"] == "end":
                 break
-
     @classmethod
     @property
     def params(cls):
@@ -70,7 +68,6 @@ class Opchatgpts(AsyncGeneratorProvider):
         ]
         param = ", ".join([": ".join(p) for p in params])
         return f"g4f.provider.{cls.__name__} supports: ({param})"
-
 def random_string(length: int = 10):
     return ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(length))
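The `random_string` helper kept as context above is easy to sanity-check in isolation:

```python
import random
import string

def random_string(length: int = 10):
    # Lowercase letters and digits only, as in the provider module.
    return ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(length))

token = random_string(16)
print(len(token))  # 16
```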

View File

@@ -22,8 +22,6 @@ class Vercel(BaseProvider):
         stream: bool,
         proxy: str = None, **kwargs) -> CreateResult:
 
-        print(model)
-
         if not model:
             model = "gpt-3.5-turbo"

View File

@@ -9,7 +9,7 @@ from .base_provider import AsyncGeneratorProvider, format_prompt
 class Yqcloud(AsyncGeneratorProvider):
     url = "https://chat9.yqcloud.top/"
-    working = True
+    working = False
     supports_gpt_35_turbo = True
 
     @staticmethod

View File

@@ -1,11 +1,15 @@
 from __future__ import annotations
 
-import asyncio
 import sys
-from asyncio import AbstractEventLoop
-from os import path
-from ..typing import Dict, List, Messages
-import browser_cookie3
+import asyncio
+import webbrowser
+import http.cookiejar
+
+from os import path
+from asyncio import AbstractEventLoop
+from ..typing import Dict, Messages
+from browser_cookie3 import chrome, chromium, opera, opera_gx, brave, edge, vivaldi, firefox, BrowserCookieError
 
 # Change event loop policy on windows
 if sys.platform == 'win32':
@@ -39,18 +43,46 @@ def get_event_loop() -> AbstractEventLoop:
             'Use "create_async" instead of "create" function in a running event loop. Or install the "nest_asyncio" package.'
         )
 
+def init_cookies():
+    urls = [
+        'https://chat-gpt.org',
+        'https://www.aitianhu.com',
+        'https://chatgptfree.ai',
+        'https://gptchatly.com',
+        'https://bard.google.com',
+        'https://huggingface.co/chat',
+        'https://open-assistant.io/chat'
+    ]
+
+    browsers = ['google-chrome', 'chrome', 'firefox', 'safari']
+
+    def open_urls_in_browser(browser):
+        b = webbrowser.get(browser)
+        for url in urls:
+            b.open(url, new=0, autoraise=True)
+
+    for browser in browsers:
+        try:
+            open_urls_in_browser(browser)
+            break
+        except webbrowser.Error:
+            continue
+
 # Load cookies for a domain from all supported browsers.
 # Cache the results in the "_cookies" variable.
-def get_cookies(cookie_domain: str) -> Dict[str, str]:
-    if cookie_domain not in _cookies:
-        _cookies[cookie_domain] = {}
-        try:
-            for cookie in browser_cookie3.load(cookie_domain):
-                _cookies[cookie_domain][cookie.name] = cookie.value
-        except:
-            pass
-    return _cookies[cookie_domain]
+def get_cookies(domain_name=''):
+    cj = http.cookiejar.CookieJar()
+    for cookie_fn in [chrome, chromium, opera, opera_gx, brave, edge, vivaldi, firefox]:
+        try:
+            for cookie in cookie_fn(domain_name=domain_name):
+                cj.set_cookie(cookie)
+        except BrowserCookieError:
+            pass
+
+    _cookies[domain_name] = {cookie.name: cookie.value for cookie in cj}
+    return _cookies[domain_name]
 
 def format_prompt(messages: Messages, add_special_tokens=False) -> str:

View File

@@ -5,7 +5,7 @@ from .Provider import BaseProvider, RetryProvider
 from .typing import Messages, CreateResult, Union, List
 from .debug import logging
 
-version = '0.1.7.1'
+version = '0.1.7.2'
 version_check = True
 
 def check_pypi_version() -> None:
@@ -22,7 +22,8 @@ def check_pypi_version() -> None:
 def get_model_and_provider(model : Union[Model, str],
                            provider : Union[type[BaseProvider], None],
                            stream : bool,
-                           ignored : List[str] = None) -> tuple[Model, type[BaseProvider]]:
+                           ignored : List[str] = None,
+                           ignore_working: bool = False) -> tuple[Model, type[BaseProvider]]:
     if isinstance(model, str):
         if model in ModelUtils.convert:
@@ -39,7 +40,7 @@ def get_model_and_provider(model : Union[Model, str],
     if not provider:
         raise RuntimeError(f'No provider found for model: {model}')
 
-    if not provider.working:
+    if not provider.working and not ignore_working:
         raise RuntimeError(f'{provider.__name__} is not working')
 
     if not provider.supports_stream and stream:
@@ -59,9 +60,10 @@ class ChatCompletion:
                provider : Union[type[BaseProvider], None] = None,
                stream : bool = False,
                auth : Union[str, None] = None,
-               ignored : List[str] = None, **kwargs) -> Union[CreateResult, str]:
+               ignored : List[str] = None,
+               ignore_working: bool = False, **kwargs) -> Union[CreateResult, str]:
 
-        model, provider = get_model_and_provider(model, provider, stream, ignored)
+        model, provider = get_model_and_provider(model, provider, stream, ignored, ignore_working)
 
         if provider.needs_auth and not auth:
             raise ValueError(
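The new `ignore_working` flag threads from `ChatCompletion.create` into `get_model_and_provider`, where it bypasses the working check. A minimal sketch of that gating, with a hypothetical stub provider class and helper:

```python
class StubProvider:
    """Hypothetical provider flagged as not working."""
    working = False

def pick_provider(provider, ignore_working: bool = False):
    # Mirrors the patched check: a non-working provider only raises
    # when the caller has not opted out via ignore_working.
    if not provider.working and not ignore_working:
        raise RuntimeError(f'{provider.__name__} is not working')
    return provider

try:
    pick_provider(StubProvider)
except RuntimeError as error:
    print(error)  # StubProvider is not working

print(pick_provider(StubProvider, ignore_working=True).__name__)  # StubProvider
```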

View File

@@ -16,6 +16,7 @@ from .Provider import (
     GeekGpt,
     Myshell,
     FreeGpt,
+    Cromicle,
     NoowAi,
     Vercel,
     Aichat,
@@ -72,7 +73,7 @@ gpt_35_turbo = Model(
     base_provider = 'openai',
     best_provider=RetryProvider([
         ChatgptX, ChatgptDemo, GptGo, You,
-        NoowAi, GPTalk, GptForLove, Phind, ChatBase
+        NoowAi, GPTalk, GptForLove, Phind, ChatBase, Cromicle
     ])
 )

View File

@@ -11,7 +11,7 @@ with codecs.open(os.path.join(here, "README.md"), encoding="utf-8") as fh:
 with open("requirements.txt") as f:
     required = f.read().splitlines()
 
-VERSION = "0.1.7.1"
+VERSION = "0.1.7.2"
 DESCRIPTION = (
     "The official gpt4free repository | various collection of powerful language models"
 )
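Both version strings are bumped in step with the `version = '0.1.7.2'` change in `__init__.py`, which feeds `check_pypi_version`. A dotted-version comparison of the kind that check needs can be sketched with plain tuples; this is illustrative only, as the real function also fetches the latest release from PyPI.

```python
def version_tuple(v: str):
    # "0.1.7.2" -> (0, 1, 7, 2); tuples compare element-wise,
    # so "0.1.7.10" correctly sorts after "0.1.7.2".
    return tuple(int(part) for part in v.split("."))

local, latest = "0.1.7.1", "0.1.7.2"
if version_tuple(local) < version_tuple(latest):
    print(f"New g4f version: {latest} (you have {local})")
```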