Merge pull request #1875 from hlohaus/carst

Add image model list
This commit is contained in:
H Lohaus 2024-04-22 01:35:07 +02:00 committed by GitHub
commit 4b4d1f08b5
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
33 changed files with 553 additions and 312 deletions

1
.gitignore vendored

@@ -55,6 +55,7 @@ local.py
image.py
.buildozer
hardir
har_and_cookies
node_modules
models
projects/windows/g4f

147
README.md

@@ -91,7 +91,7 @@ As per the survey, here is a list of improvements to come
```sh
docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" -v ${PWD}/hardir:/app/hardir hlohaus789/g4f:latest
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" -v ${PWD}/har_and_cookies:/app/har_and_cookies hlohaus789/g4f:latest
```
3. **Access the Client:**
@@ -217,10 +217,11 @@ Access with: http://localhost:1337/v1
#### Cookies
You need cookies for BingCreateImages and the Gemini Provider.
From Bing you need the "_U" cookie and from Gemini you need the "__Secure-1PSID" cookie.
Sometimes you don't need the "__Secure-1PSID" cookie, but some other auth cookies.
You can pass the cookies in the create function or you use the `set_cookies` setter before you run G4F:
Cookies are essential for using Meta AI and Microsoft Designer to create images.
Additionally, cookies are required for the Google Gemini and WhiteRabbitNeo Provider.
From Bing, ensure you have the "_U" cookie, and from Google, all cookies starting with "__Secure-1PSID" are needed.
You can pass these cookies directly to the create function or set them using the `set_cookies` method before running G4F:
```python
from g4f.cookies import set_cookies
@@ -228,10 +229,25 @@ from g4f.cookies import set_cookies
set_cookies(".bing.com", {
"_U": "cookie value"
})
set_cookies(".google.com", {
"__Secure-1PSID": "cookie value"
})
...
```
Alternatively, you can place your .har and cookie files in the `/har_and_cookies` directory. To export a cookie file, use the EditThisCookie extension available on the Chrome Web Store: [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg).
You can also create .har files to capture cookies. If you need further assistance, refer to the next section.
```bash
python -m g4f.cli api --debug
```
```
Read .har file: ./har_and_cookies/you.com.har
Cookies added: 10 from .you.com
Read cookie file: ./har_and_cookies/google.json
Cookies added: 16 from .google.com
Starting server... [g4f v-0.0.0] (debug)
```
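The cookie files read above are browser exports. As a sketch of how such an export maps onto the per-domain shape `set_cookies` takes (assuming the common export format with `domain`, `name`, and `value` fields, as EditThisCookie produces — the field names here are an assumption, not g4f's actual loader):

```python
import json
from collections import defaultdict

def cookies_by_domain(export_json: str) -> dict:
    """Group a browser cookie export (a JSON list of objects with
    "domain", "name" and "value" fields) into {domain: {name: value}},
    the per-domain shape that set_cookies expects."""
    grouped = defaultdict(dict)
    for entry in json.loads(export_json):
        grouped[entry["domain"]][entry["name"]] = entry["value"]
    return dict(grouped)

# Example export containing a single Google cookie:
export = '[{"domain": ".google.com", "name": "__Secure-1PSID", "value": "abc"}]'
print(cookies_by_domain(export))
```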
#### .HAR File for OpenaiChat Provider
@@ -249,7 +265,7 @@ To utilize the OpenaiChat provider, a .har file is required from https://chat.op
##### Storing the .HAR File
- Place the exported .har file in the `./hardir` directory if you are using Docker. Alternatively, you can store it in any preferred location within your current working directory.
- Place the exported .har file in the `./har_and_cookies` directory if you are using Docker. Alternatively, you can store it in any preferred location within your current working directory.
Note: Ensure that your .har file is stored securely, as it may contain sensitive information.
@@ -273,13 +289,13 @@ set G4F_PROXY=http://host:port
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ❌ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌+✔️ |
| [raycast.com](https://raycast.com) | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [beta.theb.ai](https://beta.theb.ai) | `g4f.Provider.Theb` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
## Best OpenSource Models
While we wait for gpt-5, here is a list of new models that are at least better than gpt-3.5-turbo. **Some are better than gpt-4**. Expect this list to grow.
@@ -296,19 +312,24 @@ While we wait for gpt-5, here is a list of new models that are at least better t
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [chat3.aiyunos.top](https://chat3.aiyunos.top/) | `g4f.Provider.AItianhuSpace` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt-free.cc](https://www.chatgpt-free.cc) | `g4f.Provider.ChatgptNext` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [flowgpt.com](https://flowgpt.com/chat) | `g4f.Provider.FlowGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [freegptsnav.aifree.site](https://freegptsnav.aifree.site) | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [gpttalk.ru](https://gpttalk.ru) | `g4f.Provider.GptTalkRu` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [koala.sh](https://koala.sh) | `g4f.Provider.Koala` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat10.aichatos.xyz](https://chat10.aichatos.xyz) | `g4f.Provider.Aichatos` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt-free.cc](https://www.chatgpt-free.cc) | `g4f.Provider.ChatgptNext` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [f1.cnote.top](https://f1.cnote.top) | `g4f.Provider.Cnote` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [duckduckgo.com](https://duckduckgo.com/duckchat) | `g4f.Provider.DuckDuckGo` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [ecosia.org](https://www.ecosia.org) | `g4f.Provider.Ecosia` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [feedough.com](https://www.feedough.com) | `g4f.Provider.Feedough` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [flowgpt.com](https://flowgpt.com/chat) | `g4f.Provider.FlowGpt` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [freegptsnav.aifree.site](https://freegptsnav.aifree.site) | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gpttalk.ru](https://gpttalk.ru) | `g4f.Provider.GptTalkRu` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [koala.sh](https://koala.sh) | `g4f.Provider.Koala` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [app.myshell.ai](https://app.myshell.ai/chat) | `g4f.Provider.MyShell` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [perplexity.ai](https://www.perplexity.ai) | `g4f.Provider.PerplexityAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [poe.com](https://poe.com) | `g4f.Provider.Poe` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [talkai.info](https://talkai.info) | `g4f.Provider.TalkAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.vercel.ai](https://chat.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.vercel.ai](https://chat.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgpt.bestim.org](https://chatgpt.bestim.org) | `g4f.Provider.Bestim` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
@@ -328,48 +349,68 @@ While we wait for gpt-5, here is a list of new models that are at least better t
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [openchat.team](https://openchat.team) | `g4f.Provider.Aura` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [openchat.team](https://openchat.team) | `g4f.Provider.Aura` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [blackbox.ai](https://www.blackbox.ai) | `g4f.Provider.Blackbox` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [cohereforai-c4ai-command-r-plus.hf.space](https://cohereforai-c4ai-command-r-plus.hf.space) | `g4f.Provider.Cohere` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [deepinfra.com](https://deepinfra.com) | `g4f.Provider.DeepInfra` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [free.chatgpt.org.uk](https://free.chatgpt.org.uk) | `g4f.Provider.FreeChatgpt` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [free.chatgpt.org.uk](https://free.chatgpt.org.uk) | `g4f.Provider.FreeChatgpt` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [gemini.google.com](https://gemini.google.com) | `g4f.Provider.Gemini` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [ai.google.dev](https://ai.google.dev) | `g4f.Provider.GeminiPro` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [gemini-chatbot-sigma.vercel.app](https://gemini-chatbot-sigma.vercel.app) | `g4f.Provider.GeminiProChat` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [developers.sber.ru](https://developers.sber.ru/gigachat) | `g4f.Provider.GigaChat` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [console.groq.com](https://console.groq.com/playground) | `g4f.Provider.Groq` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingChat` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingFace` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama2` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [meta.ai](https://www.meta.ai) | `g4f.Provider.MetaAI` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌+✔️ |
| [openrouter.ai](https://openrouter.ai) | `g4f.Provider.OpenRouter` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [labs.perplexity.ai](https://labs.perplexity.ai) | `g4f.Provider.PerplexityLabs` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [pi.ai](https://pi.ai/talk) | `g4f.Provider.Pi` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [theb.ai](https://theb.ai) | `g4f.Provider.ThebApi` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [open-assistant.io](https://open-assistant.io/chat) | `g4f.Provider.OpenAssistant` | ❌ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ✔️ |
| [pi.ai](https://pi.ai/talk) | `g4f.Provider.Pi` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [replicate.com](https://replicate.com) | `g4f.Provider.ReplicateImage` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [theb.ai](https://theb.ai) | `g4f.Provider.ThebApi` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [whiterabbitneo.com](https://www.whiterabbitneo.com) | `g4f.Provider.WhiteRabbitNeo` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ✔️ |
### Models
| Model | Base Provider | Provider | Website |
|-----------------------------| ------------- | -------- | ------- |
| gpt-3.5-turbo | OpenAI | 5+ Providers | [openai.com](https://openai.com/) |
| gpt-4 | OpenAI | 2+ Providers | [openai.com](https://openai.com/) |
| gpt-4-turbo | OpenAI | g4f.Provider.Bing | [openai.com](https://openai.com/) |
| Llama-2-7b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-13b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-70b-chat-hf | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Meta-Llama-3-8b | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Meta-Llama-3-70b | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-34b-Instruct-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-70b-Instruct-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Mixtral-8x7B-Instruct-v0.1 | Huggingface | 4+ Providers | [huggingface.co](https://huggingface.co/) |
| Mistral-7B-Instruct-v0.1 | Huggingface | 4+ Providers | [huggingface.co](https://huggingface.co/) |
| dolphin-2.6-mixtral-8x7b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| lzlv_70b_fp16_hf | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| airoboros-70b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| airoboros-l2-70b-gpt4-1.4.1 | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| openchat_3.5 | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
| gemini | Google | g4f.Provider.Gemini | [gemini.google.com](https://gemini.google.com/) |
| gemini-pro | Google | 2+ Providers | [gemini.google.com](https://gemini.google.com/) |
| claude-v2 | Anthropic | 1+ Providers | [anthropic.com](https://www.anthropic.com/) |
| claude-3-opus | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
| claude-3-sonnet | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
| pi | Inflection | g4f.Provider.Pi | [inflection.ai](https://inflection.ai/) |
| Model | Base Provider | Provider | Website |
| ----- | ------------- | -------- | ------- |
| gpt-3.5-turbo | OpenAI | 8+ Providers | [openai.com](https://openai.com/) |
| gpt-4 | OpenAI | 2+ Providers | [openai.com](https://openai.com/) |
| gpt-4-turbo | OpenAI | g4f.Provider.Bing | [openai.com](https://openai.com/) |
| Llama-2-7b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-13b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-70b-chat-hf | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Meta-Llama-3-8b-instruct | Meta | 1+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Meta-Llama-3-70b-instruct | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-34b-Instruct-hf | Meta | g4f.Provider.HuggingChat | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-70b-Instruct-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Mixtral-8x7B-Instruct-v0.1 | Huggingface | 4+ Providers | [huggingface.co](https://huggingface.co/) |
| Mistral-7B-Instruct-v0.1 | Huggingface | 3+ Providers | [huggingface.co](https://huggingface.co/) |
| Mistral-7B-Instruct-v0.2 | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| zephyr-orpo-141b-A35b-v0.1 | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
| dolphin-2.6-mixtral-8x7b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| gemini | Google | g4f.Provider.Gemini | [gemini.google.com](https://gemini.google.com/) |
| gemini-pro | Google | 2+ Providers | [gemini.google.com](https://gemini.google.com/) |
| claude-v2 | Anthropic | 1+ Providers | [anthropic.com](https://www.anthropic.com/) |
| claude-3-opus | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
| claude-3-sonnet | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
| lzlv_70b_fp16_hf | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| airoboros-70b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| openchat_3.5 | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
| pi | Inflection | g4f.Provider.Pi | [inflection.ai](https://inflection.ai/) |
### Image and Vision Models
| Label | Provider | Image Model | Vision Model | Website |
| ----- | -------- | ----------- | ------------ | ------- |
| Microsoft Copilot in Bing | `g4f.Provider.Bing` | dall-e| gpt-4-vision | [bing.com](https://bing.com/chat) |
| DeepInfra | `g4f.Provider.DeepInfra` | stability-ai/sdxl| llava-1.5-7b-hf | [deepinfra.com](https://deepinfra.com) |
| Gemini | `g4f.Provider.Gemini` | gemini| gemini | [gemini.google.com](https://gemini.google.com) |
| Meta AI | `g4f.Provider.MetaAI` | meta| ❌ | [meta.ai](https://www.meta.ai) |
| OpenAI ChatGPT | `g4f.Provider.OpenaiChat` | dall-e| gpt-4-vision | [chat.openai.com](https://chat.openai.com) |
| Replicate | `g4f.Provider.Replicate` | stability-ai/sdxl| ❌ | [replicate.com](https://replicate.com) |
| You.com | `g4f.Provider.You` | dall-e| agent | [you.com](https://you.com) |
## 🔗 Powered by gpt4free


@@ -14,6 +14,8 @@ async def test_async(provider: ProviderType):
return False
messages = [{"role": "user", "content": "Hello Assistant!"}]
try:
if "webdriver" in provider.get_parameters():
return False
response = await asyncio.wait_for(ChatCompletion.create_async(
model=models.default,
messages=messages,
@@ -88,7 +90,7 @@ def print_models():
"huggingface": "Huggingface",
"anthropic": "Anthropic",
"inflection": "Inflection",
"meta": "Meta"
"meta": "Meta",
}
provider_urls = {
"google": "https://gemini.google.com/",
@@ -96,7 +98,7 @@ def print_models():
"huggingface": "https://huggingface.co/",
"anthropic": "https://www.anthropic.com/",
"inflection": "https://inflection.ai/",
"meta": "https://llama.meta.com/"
"meta": "https://llama.meta.com/",
}
lines = [
@@ -108,6 +110,8 @@ def print_models():
if name not in ("gpt-3.5-turbo", "gpt-4", "gpt-4-turbo"):
continue
name = re.split(r":|/", model.name)[-1]
if model.base_provider not in base_provider_names:
continue
base_provider = base_provider_names[model.base_provider]
if not isinstance(model.best_provider, BaseRetryProvider):
provider_name = f"g4f.Provider.{model.best_provider.__name__}"
@@ -121,7 +125,26 @@ def print_models():
print("\n".join(lines))
def print_image_models():
lines = [
"| Label | Provider | Image Model | Vision Model | Website |",
"| ----- | -------- | ----------- | ------------ | ------- |",
]
from g4f.gui.server.api import Api
for image_model in Api.get_image_models():
provider_url = image_model["url"]
netloc = urlparse(provider_url).netloc.replace("www.", "")
website = f"[{netloc}]({provider_url})"
label = image_model["provider"] if image_model["label"] is None else image_model["label"]
if image_model["vision_model"] is None:
image_model["vision_model"] = ""
lines.append(f'| {label} | `g4f.Provider.{image_model["provider"]}` | {image_model["image_model"]}| {image_model["vision_model"]} | {website} |')
print("\n".join(lines))
if __name__ == "__main__":
print_providers()
#print_providers()
#print("\n", "-" * 50, "\n")
#print_models()
print("\n", "-" * 50, "\n")
print_models()
print_image_models()


@@ -38,8 +38,9 @@ class Bing(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True
supports_gpt_4 = True
default_model = "Balanced"
default_vision_model = "gpt-4-vision"
models = [getattr(Tones, key) for key in Tones.__dict__ if not key.startswith("__")]
@classmethod
def create_async_generator(
cls,


@@ -13,9 +13,11 @@ from .bing.create_images import create_images, create_session, get_cookies_from_
class BingCreateImages(AsyncGeneratorProvider, ProviderModelMixin):
label = "Microsoft Designer"
parent = "Bing"
url = "https://www.bing.com/images/create"
working = True
needs_auth = True
image_models = ["dall-e"]
def __init__(self, cookies: Cookies = None, proxy: str = None) -> None:
self.cookies: Cookies = cookies


@@ -1,17 +1,22 @@
from __future__ import annotations
import requests
from ..typing import AsyncResult, Messages
from ..typing import AsyncResult, Messages, ImageType
from ..image import to_data_uri
from .needs_auth.Openai import Openai
class DeepInfra(Openai):
label = "DeepInfra"
url = "https://deepinfra.com"
working = True
needs_auth = False
has_auth = True
supports_stream = True
supports_message_history = True
default_model = 'HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1'
default_model = "meta-llama/Meta-Llama-3-70b-instruct"
default_vision_model = "llava-hf/llava-1.5-7b-hf"
model_aliases = {
'mixtral-8x22b': 'HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1'
}
@classmethod
def get_models(cls):
@@ -27,19 +32,12 @@ class DeepInfra(Openai):
model: str,
messages: Messages,
stream: bool,
image: ImageType = None,
api_base: str = "https://api.deepinfra.com/v1/openai",
temperature: float = 0.7,
max_tokens: int = 1028,
**kwargs
) -> AsyncResult:
if not '/' in model:
models = {
'mixtral-8x22b': 'HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1',
'dbrx-instruct': 'databricks/dbrx-instruct',
}
model = models.get(model, model)
headers = {
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US',
@@ -55,6 +53,19 @@ class DeepInfra(Openai):
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
}
if image is not None:
if not model:
model = cls.default_vision_model
messages[-1]["content"] = [
{
"type": "image_url",
"image_url": {"url": to_data_uri(image)}
},
{
"type": "text",
"text": messages[-1]["content"]
}
]
return super().create_async_generator(
model, messages,
stream=stream,

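The image-handling hunk above rewrites the last user message into OpenAI-style multi-part content when an image is attached. A minimal standalone sketch of that restructuring, assuming the data URI has already been produced (by `to_data_uri` in the real provider):

```python
def attach_image(messages: list, image_data_uri: str) -> list:
    # Replace the last message's plain-string content with a two-part
    # list: the image (as a data URI) followed by the original text,
    # mirroring the OpenAI-compatible vision message format.
    messages[-1]["content"] = [
        {"type": "image_url", "image_url": {"url": image_data_uri}},
        {"type": "text", "text": messages[-1]["content"]},
    ]
    return messages

msgs = [{"role": "user", "content": "What is in this picture?"}]
attach_image(msgs, "data:image/png;base64,AAAA")
```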

@@ -9,8 +9,10 @@ from ..image import ImageResponse
class DeepInfraImage(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://deepinfra.com"
parent = "DeepInfra"
working = True
default_model = 'stability-ai/sdxl'
image_models = [default_model]
@classmethod
def get_models(cls):
@@ -18,6 +20,7 @@ class DeepInfraImage(AsyncGeneratorProvider, ProviderModelMixin):
url = 'https://api.deepinfra.com/models/featured'
models = requests.get(url).json()
cls.models = [model['model_name'] for model in models if model["reported_type"] == "text-to-image"]
cls.image_models = cls.models
return cls.models
@classmethod


@@ -13,7 +13,7 @@ from ..requests import raise_for_status, DEFAULT_HEADERS
from ..image import ImageResponse, ImagePreview
from ..errors import ResponseError
from .base_provider import AsyncGeneratorProvider
from .helper import format_prompt, get_connector
from .helper import format_prompt, get_connector, format_cookies
class Sources():
def __init__(self, link_list: List[Dict[str, str]]) -> None:
@@ -48,7 +48,6 @@ class MetaAI(AsyncGeneratorProvider):
async def update_access_token(self, birthday: str = "1999-01-01"):
url = "https://www.meta.ai/api/graphql/"
payload = {
"lsd": self.lsd,
"fb_api_caller_class": "RelayModern",
@@ -90,7 +89,7 @@ class MetaAI(AsyncGeneratorProvider):
headers = {}
headers = {
'content-type': 'application/x-www-form-urlencoded',
'cookie': "; ".join([f"{k}={v}" for k, v in cookies.items()]),
'cookie': format_cookies(self.cookies),
'origin': 'https://www.meta.ai',
'referer': 'https://www.meta.ai/',
'x-asbd-id': '129477',
@@ -194,7 +193,7 @@ class MetaAI(AsyncGeneratorProvider):
**headers
}
async with self.session.post(url, headers=headers, cookies=self.cookies, data=payload) as response:
await raise_for_status(response)
await raise_for_status(response, "Fetch sources failed")
text = await response.text()
if "<h1>Something Went Wrong</h1>" in text:
raise ResponseError("Response: Something Went Wrong")
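The diff above swaps an inline `"; ".join(...)` for a `format_cookies` helper imported from `.helper`. Based on the inline expression it replaces, the helper presumably amounts to:

```python
def format_cookies(cookies: dict) -> str:
    # Serialize {name: value} pairs into a single Cookie header string,
    # matching the inline join this helper replaces in MetaAI.
    return "; ".join(f"{k}={v}" for k, v in cookies.items())

print(format_cookies({"datr": "abc", "abra_sess": "xyz"}))  # datr=abc; abra_sess=xyz
```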


@@ -6,6 +6,8 @@ from .MetaAI import MetaAI
class MetaAIAccount(MetaAI):
needs_auth = True
parent = "MetaAI"
image_models = ["meta"]
@classmethod
async def create_async_generator(

84
g4f/Provider/Replicate.py Normal file

@@ -0,0 +1,84 @@
from __future__ import annotations
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt, filter_none
from ..typing import AsyncResult, Messages
from ..requests import raise_for_status
from ..requests.aiohttp import StreamSession
from ..errors import ResponseError, MissingAuthError
class Replicate(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://replicate.com"
working = True
default_model = "meta/meta-llama-3-70b-instruct"
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
api_key: str = None,
proxy: str = None,
timeout: int = 180,
system_prompt: str = None,
max_new_tokens: int = None,
temperature: float = None,
top_p: float = None,
top_k: float = None,
stop: list = None,
extra_data: dict = {},
headers: dict = {
"accept": "application/json",
},
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
if cls.needs_auth and api_key is None:
raise MissingAuthError("api_key is missing")
if api_key is not None:
headers["Authorization"] = f"Bearer {api_key}"
api_base = "https://api.replicate.com/v1/models/"
else:
api_base = "https://replicate.com/api/models/"
async with StreamSession(
proxy=proxy,
headers=headers,
timeout=timeout
) as session:
data = {
"stream": True,
"input": {
"prompt": format_prompt(messages),
**filter_none(
system_prompt=system_prompt,
max_new_tokens=max_new_tokens,
temperature=temperature,
top_p=top_p,
top_k=top_k,
stop_sequences=",".join(stop) if stop else None
),
**extra_data
},
}
url = f"{api_base.rstrip('/')}/{model}/predictions"
async with session.post(url, json=data) as response:
message = "Model not found" if response.status == 404 else None
await raise_for_status(response, message)
result = await response.json()
if "id" not in result:
raise ResponseError(f"Invalid response: {result}")
async with session.get(result["urls"]["stream"], headers={"Accept": "text/event-stream"}) as response:
await raise_for_status(response)
event = None
async for line in response.iter_lines():
if line.startswith(b"event: "):
event = line[7:]
if event == b"done":
break
elif event == b"output":
if line.startswith(b"data: "):
new_text = line[6:].decode()
if new_text:
yield new_text
else:
yield "\n"
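The streaming loop at the end of the new provider consumes Replicate's server-sent events: an `event:` line names the frame (`output` carries text, `done` ends the stream) and the following `data:` line carries the payload. The same parse as a standalone function over an in-memory list of byte lines:

```python
def parse_replicate_stream(lines):
    # Mirror the provider's loop: track the current SSE event name,
    # stop on "done", and collect text from "output" data frames.
    chunks = []
    event = None
    for line in lines:
        if line.startswith(b"event: "):
            event = line[7:]
            if event == b"done":
                break
        elif event == b"output" and line.startswith(b"data: "):
            text = line[6:].decode()
            chunks.append(text if text else "\n")  # empty data frame -> newline
    return chunks

sample = [b"event: output", b"data: Hello", b"event: output", b"data: ", b"event: done"]
print(parse_replicate_stream(sample))  # ['Hello', '\n']
```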


@@ -11,12 +11,14 @@ from ..errors import ResponseError
class ReplicateImage(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://replicate.com"
parent = "Replicate"
working = True
default_model = 'stability-ai/sdxl'
default_versions = [
"39ed52f2a78e934b3ba6e2a89f5b1c712de7dfea535525255b1aa35c5565e08b",
"2b017d9b67edd2ee1401238df49d75da53c523f36e363881e057f5dc3ed3c5b2"
]
image_models = [default_model]
@classmethod
async def create_async_generator(


@@ -8,19 +8,22 @@ import uuid
from ..typing import AsyncResult, Messages, ImageType, Cookies
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ..image import ImageResponse, to_bytes, is_accepted_format
from ..image import ImageResponse, ImagePreview, to_bytes, is_accepted_format
from ..requests import StreamSession, FormData, raise_for_status
from .you.har_file import get_telemetry_ids
from .. import debug
class You(AsyncGeneratorProvider, ProviderModelMixin):
label = "You.com"
url = "https://you.com"
working = True
supports_gpt_35_turbo = True
supports_gpt_4 = True
default_model = "gpt-3.5-turbo"
default_vision_model = "agent"
image_models = ["dall-e"]
models = [
"gpt-3.5-turbo",
default_model,
"gpt-4",
"gpt-4-turbo",
"claude-instant",
@@ -29,7 +32,8 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
"claude-3-sonnet",
"gemini-pro",
"zephyr",
"dall-e",
default_vision_model,
*image_models
]
model_aliases = {
"claude-v2": "claude-2"
@@ -51,7 +55,7 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
chat_mode: str = "default",
**kwargs,
) -> AsyncResult:
if image is not None:
if image is not None or model == cls.default_vision_model:
chat_mode = "agent"
elif not model or model == cls.default_model:
...
@@ -62,13 +66,18 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
chat_mode = "custom"
model = cls.get_model(model)
async with StreamSession(
proxies={"all": proxy},
proxy=proxy,
impersonate="chrome",
timeout=(30, timeout)
) as session:
cookies = await cls.get_cookies(session) if chat_mode != "default" else None
upload = json.dumps([await cls.upload_file(session, cookies, to_bytes(image), image_name)]) if image else ""
upload = ""
if image is not None:
upload_file = await cls.upload_file(
session, cookies,
to_bytes(image), image_name
)
upload = json.dumps([upload_file])
headers = {
"Accept": "text/event-stream",
"Referer": f"{cls.url}/search?fromSearchBar=true&tbm=youchat",
@@ -102,11 +111,17 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
if event == "youChatToken" and event in data:
yield data[event]
elif event == "youChatUpdate" and "t" in data and data["t"] is not None:
match = re.search(r"!\[fig\]\((.+?)\)", data["t"])
if match:
yield ImageResponse(match.group(1), messages[-1]["content"])
if chat_mode == "create":
match = re.search(r"!\[(.+?)\]\((.+?)\)", data["t"])
if match:
if match.group(1) == "fig":
yield ImagePreview(match.group(2), messages[-1]["content"])
else:
yield ImageResponse(match.group(2), match.group(1))
else:
yield data["t"]
else:
yield data["t"]
yield data["t"]
@classmethod
async def upload_file(cls, client: StreamSession, cookies: Cookies, file: bytes, filename: str = None) -> dict:

View File
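The `youChatUpdate` branch above routes each streamed chunk by matching markdown image syntax: a `fig` alt text marks an in-progress preview, while any other alt text carries the prompt of the finished image. A standalone sketch of that dispatch (the helper name is illustrative, not part of the diff):

```python
import re

def parse_image_chunk(chunk: str):
    """Classify a streamed markdown chunk the way the You provider does.

    Returns ("preview", url), ("image", url, alt) or ("text", chunk).
    """
    match = re.search(r"!\[(.+?)\]\((.+?)\)", chunk)
    if match:
        if match.group(1) == "fig":
            # intermediate figure while the image is still being rendered
            return ("preview", match.group(2))
        # finished image: the alt text carries the prompt
        return ("image", match.group(2), match.group(1))
    return ("text", chunk)
```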

@@ -9,7 +9,6 @@ from .deprecated import *
from .not_working import *
from .selenium import *
from .needs_auth import *
from .unfinished import *
from .Aichatos import Aichatos
from .Aura import Aura
@@ -46,6 +45,7 @@ from .MetaAI import MetaAI
from .MetaAIAccount import MetaAIAccount
from .PerplexityLabs import PerplexityLabs
from .Pi import Pi
from .Replicate import Replicate
from .ReplicateImage import ReplicateImage
from .Vercel import Vercel
from .WhiteRabbitNeo import WhiteRabbitNeo

View File

@@ -41,6 +41,8 @@ async def create_conversation(session: StreamSession, headers: dict, tone: str)
raise RateLimitError("Response 404: Do less requests and reuse conversations")
await raise_for_status(response, "Failed to create conversation")
data = await response.json()
if not data:
raise RuntimeError('Empty response: Failed to create conversation')
conversationId = data.get('conversationId')
clientId = data.get('clientId')
conversationSignature = response.headers.get('X-Sydney-Encryptedconversationsignature')

View File

@@ -16,6 +16,7 @@ try:
except ImportError:
pass
from ... import debug
from ...typing import Messages, Cookies, ImageType, AsyncResult
from ..base_provider import AsyncGeneratorProvider
from ..helper import format_prompt, get_cookies
@@ -53,6 +54,56 @@ class Gemini(AsyncGeneratorProvider):
url = "https://gemini.google.com"
needs_auth = True
working = True
image_models = ["gemini"]
default_vision_model = "gemini"
_cookies: Cookies = None
@classmethod
async def nodriver_login(cls) -> Cookies:
try:
import nodriver as uc
except ImportError:
return
try:
from platformdirs import user_config_dir
user_data_dir = user_config_dir("g4f-nodriver")
except:
user_data_dir = None
if debug.logging:
print(f"Open nodriver with user_dir: {user_data_dir}")
browser = await uc.start(user_data_dir=user_data_dir)
page = await browser.get(f"{cls.url}/app")
await page.select("div.ql-editor.textarea", 240)
cookies = {}
for c in await page.browser.cookies.get_all():
if c.domain.endswith(".google.com"):
cookies[c.name] = c.value
await page.close()
return cookies
@classmethod
async def webdriver_login(cls, proxy: str):
driver = None
try:
driver = get_browser(proxy=proxy)
try:
driver.get(f"{cls.url}/app")
WebDriverWait(driver, 5).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea"))
)
except:
login_url = os.environ.get("G4F_LOGIN_URL")
if login_url:
yield f"Please login: [Google Gemini]({login_url})\n\n"
WebDriverWait(driver, 240).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea"))
)
cls._cookies = get_driver_cookies(driver)
except MissingRequirementsError:
pass
finally:
if driver:
driver.close()
@classmethod
async def create_async_generator(
@@ -72,47 +123,30 @@ class Gemini(AsyncGeneratorProvider):
if cookies is None:
cookies = {}
cookies["__Secure-1PSID"] = api_key
cookies = cookies if cookies else get_cookies(".google.com", False, True)
cls._cookies = cookies or cls._cookies or get_cookies(".google.com", False, True)
base_connector = get_connector(connector, proxy)
async with ClientSession(
headers=REQUEST_HEADERS,
connector=base_connector
) as session:
snlm0e = await cls.fetch_snlm0e(session, cookies) if cookies else None
snlm0e = await cls.fetch_snlm0e(session, cls._cookies) if cls._cookies else None
if not snlm0e:
driver = None
try:
driver = get_browser(proxy=proxy)
try:
driver.get(f"{cls.url}/app")
WebDriverWait(driver, 5).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea"))
)
except:
login_url = os.environ.get("G4F_LOGIN_URL")
if login_url:
yield f"Please login: [Google Gemini]({login_url})\n\n"
WebDriverWait(driver, 240).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea"))
)
cookies = get_driver_cookies(driver)
except MissingRequirementsError:
pass
finally:
if driver:
driver.close()
cls._cookies = await cls.nodriver_login()
if cls._cookies is None:
async for chunk in cls.webdriver_login(proxy):
yield chunk
if not snlm0e:
if "__Secure-1PSID" not in cookies:
if "__Secure-1PSID" not in cls._cookies:
raise MissingAuthError('Missing "__Secure-1PSID" cookie')
snlm0e = await cls.fetch_snlm0e(session, cookies)
snlm0e = await cls.fetch_snlm0e(session, cls._cookies)
if not snlm0e:
raise RuntimeError("Invalid auth. SNlM0e not found")
raise RuntimeError("Invalid cookies. SNlM0e not found")
image_url = await cls.upload_image(base_connector, to_bytes(image), image_name) if image else None
async with ClientSession(
cookies=cookies,
cookies=cls._cookies,
headers=REQUEST_HEADERS,
connector=base_connector,
) as client:

View File
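`nodriver_login` above keeps only the cookies scoped to `.google.com` from the automated browser session; reduced to a pure function over exported cookie records (sample data invented):

```python
def filter_domain_cookies(browser_cookies: list, suffix: str = ".google.com") -> dict:
    # keep only cookies whose domain matches the target suffix,
    # as nodriver_login does when harvesting the Gemini session
    return {
        c["name"]: c["value"]
        for c in browser_cookies
        if c["domain"].endswith(suffix)
    }

session_cookies = filter_domain_cookies([
    {"domain": ".google.com", "name": "__Secure-1PSID", "value": "xxx"},
    {"domain": ".youtube.com", "name": "VISITOR_INFO1_LIVE", "value": "yyy"},
])
```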

@@ -3,4 +3,6 @@ from __future__ import annotations
from .OpenaiChat import OpenaiChat
class OpenaiAccount(OpenaiChat):
needs_auth = True
needs_auth = True
parent = "OpenaiChat"
image_models = ["dall-e"]

View File

@@ -29,6 +29,7 @@ from ...requests.aiohttp import StreamSession
from ...image import to_image, to_bytes, ImageResponse, ImageRequest
from ...errors import MissingAuthError, ResponseError
from ...providers.conversation import BaseConversation
from ..helper import format_cookies
from ..openai.har_file import getArkoseAndAccessToken, NoValidHarFileError
from ... import debug
@@ -43,8 +44,14 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True
supports_system_message = True
default_model = None
default_vision_model = "gpt-4-vision"
models = ["gpt-3.5-turbo", "gpt-4", "gpt-4-gizmo"]
model_aliases = {"text-davinci-002-render-sha": "gpt-3.5-turbo", "": "gpt-3.5-turbo", "gpt-4-turbo-preview": "gpt-4"}
model_aliases = {
"text-davinci-002-render-sha": "gpt-3.5-turbo",
"": "gpt-3.5-turbo",
"gpt-4-turbo-preview": "gpt-4",
"dall-e": "gpt-4",
}
_api_key: str = None
_headers: dict = None
_cookies: Cookies = None
@@ -334,9 +341,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
Raises:
RuntimeError: If an error occurs during processing.
"""
async with StreamSession(
proxies={"all": proxy},
proxy=proxy,
impersonate="chrome",
timeout=timeout
) as session:
@@ -358,26 +364,27 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
api_key = cls._api_key = None
cls._create_request_args()
if debug.logging:
print("OpenaiChat: Load default_model failed")
print("OpenaiChat: Load default model failed")
print(f"{e.__class__.__name__}: {e}")
arkose_token = None
if cls.default_model is None:
error = None
try:
arkose_token, api_key, cookies = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies)
arkose_token, api_key, cookies, headers = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies, headers)
cls._set_api_key(api_key)
except NoValidHarFileError as e:
...
error = e
if cls._api_key is None:
await cls.nodriver_access_token()
if cls._api_key is None and cls.needs_auth:
raise e
raise error
cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
async with session.post(
f"{cls.url}/backend-anon/sentinel/chat-requirements"
if not cls._api_key else
if cls._api_key is None else
f"{cls.url}/backend-api/sentinel/chat-requirements",
json={"conversation_mode_kind": "primary_assistant"},
headers=cls._headers
@@ -393,8 +400,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
print(f'Arkose: {need_arkose} Turnstile: {data["turnstile"]["required"]}')
if need_arkose and arkose_token is None:
arkose_token, api_key, cookies = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies)
arkose_token, api_key, cookies, headers = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies, headers)
cls._set_api_key(api_key)
if arkose_token is None:
raise MissingAuthError("No arkose token found in .har file")
@@ -406,7 +413,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
print("OpenaiChat: Upload image failed")
print(f"{e.__class__.__name__}: {e}")
model = cls.get_model(model).replace("gpt-3.5-turbo", "text-davinci-002-render-sha")
model = cls.get_model(model)
model = "text-davinci-002-render-sha" if model == "gpt-3.5-turbo" else model
if conversation is None:
conversation = Conversation(conversation_id, str(uuid.uuid4()) if parent_id is None else parent_id)
else:
@@ -613,7 +621,7 @@ this.fetch = async (url, options) => {
cookies[c.name] = c.value
user_agent = await page.evaluate("window.navigator.userAgent")
await page.close()
cls._create_request_args(cookies, user_agent)
cls._create_request_args(cookies, user_agent=user_agent)
cls._set_api_key(api_key)
@classmethod
@@ -667,16 +675,12 @@ this.fetch = async (url, options) => {
"oai-language": "en-US",
}
@staticmethod
def _format_cookies(cookies: Cookies):
return "; ".join(f"{k}={v}" for k, v in cookies.items() if k != "access_token")
@classmethod
def _create_request_args(cls, cookies: Cookies = None, user_agent: str = None):
cls._headers = cls.get_default_headers()
def _create_request_args(cls, cookies: Cookies = None, headers: dict = None, user_agent: str = None):
cls._headers = cls.get_default_headers() if headers is None else headers
if user_agent is not None:
cls._headers["user-agent"] = user_agent
cls._cookies = {} if cookies is None else cookies
cls._cookies = {} if cookies is None else {k: v for k, v in cookies.items() if k != "access_token"}
cls._update_cookie_header()
@classmethod
@@ -693,7 +697,7 @@ this.fetch = async (url, options) => {
@classmethod
def _update_cookie_header(cls):
cls._headers["cookie"] = cls._format_cookies(cls._cookies)
cls._headers["cookie"] = format_cookies(cls._cookies)
class Conversation(BaseConversation):
"""

View File
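The expanded `model_aliases` table plus the two-step `get_model` rewrite mean `dall-e` requests are served by `gpt-4`, while `gpt-3.5-turbo` is still translated to its internal slug. A condensed sketch of that resolution (assuming the mixin resolves aliases with a plain dict lookup):

```python
model_aliases = {
    "text-davinci-002-render-sha": "gpt-3.5-turbo",
    "": "gpt-3.5-turbo",
    "gpt-4-turbo-preview": "gpt-4",
    "dall-e": "gpt-4",
}

def get_model(model: str) -> str:
    # first resolve the public alias, then map gpt-3.5-turbo
    # back to the slug the backend expects, as in the diff
    model = model_aliases.get(model, model)
    return "text-davinci-002-render-sha" if model == "gpt-3.5-turbo" else model
```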

@@ -1,3 +1,5 @@
from __future__ import annotations
import json
import base64
import hashlib

View File

@@ -1,3 +1,5 @@
from __future__ import annotations
import base64
import json
import os
@@ -59,17 +61,21 @@ def readHAR():
except KeyError:
continue
cookies = {c['name']: c['value'] for c in v['request']['cookies']}
headers = get_headers(v)
if not accessToken:
raise NoValidHarFileError("No accessToken found in .har files")
if not chatArks:
return None, accessToken, cookies
return chatArks.pop(), accessToken, cookies
return None, accessToken, cookies, headers
return chatArks.pop(), accessToken, cookies, headers
def get_headers(entry) -> dict:
return {h['name'].lower(): h['value'] for h in entry['request']['headers'] if h['name'].lower() not in ['content-length', 'cookie'] and not h['name'].startswith(':')}
def parseHAREntry(entry) -> arkReq:
tmpArk = arkReq(
arkURL=entry['request']['url'],
arkBx="",
arkHeader={h['name'].lower(): h['value'] for h in entry['request']['headers'] if h['name'].lower() not in ['content-length', 'cookie'] and not h['name'].startswith(':')},
arkHeader=get_headers(entry),
arkBody={p['name']: unquote(p['value']) for p in entry['request']['postData']['params'] if p['name'] not in ['rnd']},
arkCookies={c['name']: c['value'] for c in entry['request']['cookies']},
userAgent=""
@@ -123,11 +129,11 @@ def getN() -> str:
timestamp = str(int(time.time()))
return base64.b64encode(timestamp.encode()).decode()
async def getArkoseAndAccessToken(proxy: str):
async def getArkoseAndAccessToken(proxy: str) -> tuple[str, str, dict, dict]:
global chatArk, accessToken, cookies
if chatArk is None or accessToken is None:
chatArk, accessToken, cookies = readHAR()
chatArk, accessToken, cookies, headers = readHAR()
if chatArk is None:
return None, accessToken, cookies
return None, accessToken, cookies, headers
newReq = genArkReq(chatArk)
return await sendRequest(newReq, proxy), accessToken, cookies
return await sendRequest(newReq, proxy), accessToken, cookies, headers

View File
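`get_headers` factors out the header filter that `parseHAREntry` previously inlined, so `readHAR` can now also return plain request headers alongside cookies. Its behavior on a minimal HAR entry (sample data invented):

```python
def get_headers(entry) -> dict:
    # keep real request headers; drop HTTP/2 pseudo-headers (":authority", ...)
    # and values that must be recomputed per request (content-length, cookie)
    return {
        h['name'].lower(): h['value']
        for h in entry['request']['headers']
        if h['name'].lower() not in ['content-length', 'cookie']
        and not h['name'].startswith(':')
    }

entry = {"request": {"headers": [
    {"name": ":authority", "value": "chat.openai.com"},
    {"name": "Cookie", "value": "a=1"},
    {"name": "User-Agent", "value": "Mozilla/5.0"},
]}}
```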

@@ -1,78 +0,0 @@
from __future__ import annotations
import asyncio
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt, filter_none
from ...typing import AsyncResult, Messages
from ...requests import StreamSession, raise_for_status
from ...image import ImageResponse
from ...errors import ResponseError, MissingAuthError
class Replicate(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://replicate.com"
working = True
default_model = "mistralai/mixtral-8x7b-instruct-v0.1"
api_base = "https://api.replicate.com/v1/models/"
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
api_key: str = None,
proxy: str = None,
timeout: int = 180,
system_prompt: str = None,
max_new_tokens: int = None,
temperature: float = None,
top_p: float = None,
top_k: float = None,
stop: list = None,
extra_data: dict = {},
headers: dict = {},
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
if api_key is None:
raise MissingAuthError("api_key is missing")
headers["Authorization"] = f"Bearer {api_key}"
async with StreamSession(
proxies={"all": proxy},
headers=headers,
timeout=timeout
) as session:
data = {
"stream": True,
"input": {
"prompt": format_prompt(messages),
**filter_none(
system_prompt=system_prompt,
max_new_tokens=max_new_tokens,
temperature=temperature,
top_p=top_p,
top_k=top_k,
stop_sequences=",".join(stop) if stop else None
),
**extra_data
},
}
url = f"{cls.api_base.rstrip('/')}/{model}/predictions"
async with session.post(url, json=data) as response:
await raise_for_status(response)
result = await response.json()
if "id" not in result:
raise ResponseError(f"Invalid response: {result}")
async with session.get(result["urls"]["stream"], headers={"Accept": "text/event-stream"}) as response:
await raise_for_status(response)
event = None
async for line in response.iter_lines():
if line.startswith(b"event: "):
event = line[7:]
elif event == b"output":
if line.startswith(b"data: "):
yield line[6:].decode()
elif not line.startswith(b"id: "):
continue#yield "+"+line.decode()
elif event == b"done":
break

View File

@@ -4,7 +4,6 @@ import json
import os
import os.path
import random
import requests
from ...requests import StreamSession, raise_for_status
from ...errors import MissingRequirementsError
@@ -21,7 +20,8 @@ class arkReq:
self.arkCookies = arkCookies
self.userAgent = userAgent
arkPreURL = "https://telemetry.stytch.com/submit"
telemetry_url = "https://telemetry.stytch.com/submit"
public_token = "public-token-live-507a52ad-7e69-496b-aee0-1c9863c7c819"
chatArks: list = None
def readHAR():
@@ -44,7 +44,7 @@ def readHAR():
# Error: not a HAR file!
continue
for v in harFile['log']['entries']:
if arkPreURL in v['request']['url']:
if v['request']['url'] == telemetry_url:
chatArks.append(parseHAREntry(v))
if not chatArks:
raise NoValidHarFileError("No telemetry in .har files found")
@@ -62,24 +62,29 @@ def parseHAREntry(entry) -> arkReq:
return tmpArk
async def sendRequest(tmpArk: arkReq, proxy: str = None):
async with StreamSession(headers=tmpArk.arkHeaders, cookies=tmpArk.arkCookies, proxies={"all": proxy}) as session:
async with StreamSession(headers=tmpArk.arkHeaders, cookies=tmpArk.arkCookies, proxy=proxy) as session:
async with session.post(tmpArk.arkURL, data=tmpArk.arkBody) as response:
await raise_for_status(response)
return await response.text()
async def get_dfp_telemetry_id(proxy: str = None):
async def create_telemetry_id(proxy: str = None):
global chatArks
if chatArks is None:
chatArks = readHAR()
return await sendRequest(random.choice(chatArks), proxy)
async def get_telemetry_ids(proxy: str = None) -> list:
try:
return [await create_telemetry_id(proxy)]
except NoValidHarFileError as e:
if debug.logging:
print(e)
if debug.logging:
print('Getting telemetry_id for you.com with nodriver')
try:
from nodriver import start
except ImportError:
raise MissingRequirementsError('Install "nodriver" package | pip install -U nodriver')
raise MissingRequirementsError('Add .har file from you.com or install "nodriver" package | pip install -U nodriver')
try:
browser = await start()
tab = browser.main_tab
@@ -89,49 +94,11 @@ async def get_telemetry_ids(proxy: str = None) -> list:
await tab.sleep(1)
async def get_telemetry_id():
public_token = "public-token-live-507a52ad-7e69-496b-aee0-1c9863c7c819"
telemetry_url = "https://telemetry.stytch.com/submit"
return await tab.evaluate(f'this.GetTelemetryID("{public_token}", "{telemetry_url}");', await_promise=True)
return await tab.evaluate(
f'this.GetTelemetryID("{public_token}", "{telemetry_url}");',
await_promise=True
)
# for _ in range(500):
# with open("hardir/you.com_telemetry_ids.txt", "a") as f:
# f.write((await get_telemetry_id()) + "\n")
return [await get_telemetry_id() for _ in range(4)]
return [await get_telemetry_id() for _ in range(1)]
finally:
try:
await tab.close()
except Exception as e:
print(f"Error occurred while closing tab: {str(e)}")
try:
await browser.stop()
except Exception as e:
pass
headers = {
'Accept': '*/*',
'Accept-Language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'Connection': 'keep-alive',
'Content-type': 'application/x-www-form-urlencoded',
'Origin': 'https://you.com',
'Referer': 'https://you.com/',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'cross-site',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36',
'sec-ch-ua': '"Google Chrome";v="123", "Not:A-Brand";v="8", "Chromium";v="123"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
}
proxies = {
'http': proxy,
'https': proxy} if proxy else None
response = requests.post('https://telemetry.stytch.com/submit',
headers=headers, data=payload, proxies=proxies)
if '-' in response.text:
print(f'telemetry generated: {response.text}')
return (response.text)
await browser.stop()

View File

@@ -1,3 +1,5 @@
from __future__ import annotations
import logging
import json
import uvicorn
@@ -8,14 +10,19 @@ from fastapi.exceptions import RequestValidationError
from starlette.status import HTTP_422_UNPROCESSABLE_ENTITY
from fastapi.encoders import jsonable_encoder
from pydantic import BaseModel
from typing import List, Union, Optional
from typing import Union, Optional
import g4f
import g4f.debug
from g4f.client import AsyncClient
from g4f.typing import Messages
app = FastAPI()
def create_app() -> FastAPI:
app = FastAPI()
api = Api(app)
api.register_routes()
api.register_validation_exception_handler()
return app
class ChatCompletionsConfig(BaseModel):
messages: Messages
@@ -29,16 +36,19 @@ class ChatCompletionsConfig(BaseModel):
web_search: Optional[bool] = None
proxy: Optional[str] = None
list_ignored_providers: list[str] = None
def set_list_ignored_providers(ignored: list[str]):
global list_ignored_providers
list_ignored_providers = ignored
class Api:
def __init__(self, list_ignored_providers: List[str] = None) -> None:
self.list_ignored_providers = list_ignored_providers
def __init__(self, app: FastAPI) -> None:
self.app = app
self.client = AsyncClient()
def set_list_ignored_providers(self, list: list):
self.list_ignored_providers = list
def register_validation_exception_handler(self):
@app.exception_handler(RequestValidationError)
@self.app.exception_handler(RequestValidationError)
async def validation_exception_handler(request: Request, exc: RequestValidationError):
details = exc.errors()
modified_details = []
@@ -54,17 +64,17 @@ class Api:
)
def register_routes(self):
@app.get("/")
@self.app.get("/")
async def read_root():
return RedirectResponse("/v1", 302)
@app.get("/v1")
@self.app.get("/v1")
async def read_root_v1():
return HTMLResponse('g4f API: Go to '
'<a href="/v1/chat/completions">chat/completions</a> '
'or <a href="/v1/models">models</a>.')
@app.get("/v1/models")
@self.app.get("/v1/models")
async def models():
model_list = dict(
(model, g4f.models.ModelUtils.convert[model])
@@ -78,7 +88,7 @@ class Api:
} for model_id, model in model_list.items()]
return JSONResponse(model_list)
@app.get("/v1/models/{model_name}")
@self.app.get("/v1/models/{model_name}")
async def model_info(model_name: str):
try:
model_info = g4f.models.ModelUtils.convert[model_name]
@@ -91,7 +101,7 @@ class Api:
except:
return JSONResponse({"error": "The model does not exist."})
@app.post("/v1/chat/completions")
@self.app.post("/v1/chat/completions")
async def chat_completions(config: ChatCompletionsConfig, request: Request = None, provider: str = None):
try:
config.provider = provider if config.provider is None else config.provider
@@ -103,7 +113,7 @@ class Api:
config.api_key = auth_header
response = self.client.chat.completions.create(
**config.dict(exclude_none=True),
ignored=self.list_ignored_providers
ignored=list_ignored_providers
)
except Exception as e:
logging.exception(e)
@@ -125,14 +135,10 @@ class Api:
return StreamingResponse(streaming(), media_type="text/event-stream")
@app.post("/v1/completions")
@self.app.post("/v1/completions")
async def completions():
return Response(content=json.dumps({'info': 'Not working yet.'}, indent=4), media_type="application/json")
api = Api()
api.register_routes()
api.register_validation_exception_handler()
def format_exception(e: Exception, config: ChatCompletionsConfig) -> str:
last_provider = g4f.get_last_provider(True)
return json.dumps({
@@ -156,4 +162,4 @@ def run_api(
host, port = bind.split(":")
if debug:
g4f.debug.logging = True
uvicorn.run("g4f.api:app", host=host, port=int(port), workers=workers, use_colors=use_colors)
uvicorn.run("g4f.api:create_app", host=host, port=int(port), workers=workers, use_colors=use_colors, factory=True)

View File
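Moving from a module-level `api = Api()` to `create_app` plus `uvicorn.run(..., factory=True)` means every worker process builds its own app instead of importing shared module state. The pattern, reduced to the stdlib with stand-in names (no FastAPI dependency here):

```python
class App:
    """Stand-in for a FastAPI instance."""
    def __init__(self):
        self.routes = []

def create_app() -> App:
    # the factory registers routes on a fresh instance per call,
    # so multiple workers never share mutable state
    app = App()
    app.routes.append("/v1/chat/completions")
    return app

# uvicorn.run("g4f.api:create_app", factory=True) calls the factory
# once per worker process; two calls yield independent apps:
a, b = create_app(), create_app()
```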

@@ -1,7 +1,11 @@
from __future__ import annotations
import argparse
from g4f import Provider
from g4f.gui.run import gui_parser, run_gui_args
from g4f.cookies import read_cookie_files
from g4f import debug
def main():
parser = argparse.ArgumentParser(description="Run gpt4free")
@@ -10,28 +14,36 @@ def main():
api_parser.add_argument("--bind", default="0.0.0.0:1337", help="The bind string.")
api_parser.add_argument("--debug", action="store_true", help="Enable verbose logging.")
api_parser.add_argument("--workers", type=int, default=None, help="Number of workers.")
api_parser.add_argument("--disable_colors", action="store_true", help="Don't use colors.")
api_parser.add_argument("--disable-colors", action="store_true", help="Don't use colors.")
api_parser.add_argument("--ignore-cookie-files", action="store_true", help="Don't read .har and cookie files.")
api_parser.add_argument("--ignored-providers", nargs="+", choices=[provider for provider in Provider.__map__],
default=[], help="List of providers to ignore when processing request.")
subparsers.add_parser("gui", parents=[gui_parser()], add_help=False)
args = parser.parse_args()
if args.mode == "api":
import g4f.api
g4f.api.api.set_list_ignored_providers(
args.ignored_providers
)
g4f.api.run_api(
bind=args.bind,
debug=args.debug,
workers=args.workers,
use_colors=not args.disable_colors
)
run_api_args(args)
elif args.mode == "gui":
run_gui_args(args)
else:
parser.print_help()
exit(1)
def run_api_args(args):
if args.debug:
debug.logging = True
if not args.ignore_cookie_files:
read_cookie_files()
import g4f.api
g4f.api.set_list_ignored_providers(
args.ignored_providers
)
g4f.api.run_api(
bind=args.bind,
debug=args.debug,
workers=args.workers,
use_colors=not args.disable_colors
)
if __name__ == "__main__":
main()

View File

@@ -2,6 +2,7 @@ from __future__ import annotations
import os
import time
import json
try:
from platformdirs import user_config_dir
@@ -38,6 +39,7 @@ def get_cookies(domain_name: str = '', raise_requirements_error: bool = True, si
Returns:
Dict[str, str]: A dictionary of cookie names and values.
"""
global _cookies
if domain_name in _cookies:
return _cookies[domain_name]
@@ -46,6 +48,7 @@ def get_cookies(domain_name: str = '', raise_requirements_error: bool = True, si
return cookies
def set_cookies(domain_name: str, cookies: Cookies = None) -> None:
global _cookies
if cookies:
_cookies[domain_name] = cookies
elif domain_name in _cookies:
@@ -84,6 +87,61 @@ def load_cookies_from_browsers(domain_name: str, raise_requirements_error: bool
print(f"Error reading cookies from {cookie_fn.__name__} for {domain_name}: {e}")
return cookies
def read_cookie_files(dirPath: str = "./har_and_cookies"):
global _cookies
harFiles = []
cookieFiles = []
for root, dirs, files in os.walk(dirPath):
for file in files:
if file.endswith(".har"):
harFiles.append(os.path.join(root, file))
elif file.endswith(".json"):
cookieFiles.append(os.path.join(root, file))
_cookies = {}
for path in harFiles:
with open(path, 'rb') as file:
try:
harFile = json.load(file)
except json.JSONDecodeError:
# Error: not a HAR file!
continue
if debug.logging:
print("Read .har file:", path)
new_cookies = {}
for v in harFile['log']['entries']:
v_cookies = {}
for c in v['request']['cookies']:
if c['domain'] not in v_cookies:
v_cookies[c['domain']] = {}
v_cookies[c['domain']][c['name']] = c['value']
for domain, c in v_cookies.items():
_cookies[domain] = c
new_cookies[domain] = len(c)
if debug.logging:
for domain, new_values in new_cookies.items():
print(f"Cookies added: {new_values} from {domain}")
for path in cookieFiles:
with open(path, 'rb') as file:
try:
cookieFile = json.load(file)
except json.JSONDecodeError:
# Error: not a json file!
continue
if not isinstance(cookieFile, list):
continue
if debug.logging:
print("Read cookie file:", path)
new_cookies = {}
for c in cookieFile:
if isinstance(c, dict) and "domain" in c:
if c["domain"] not in new_cookies:
new_cookies[c["domain"]] = {}
new_cookies[c["domain"]][c["name"]] = c["value"]
for domain, new_values in new_cookies.items():
if debug.logging:
print(f"Cookies added: {len(new_values)} from {domain}")
_cookies[domain] = new_values
def _g4f(domain_name: str) -> list:
"""
Load cookies from the 'g4f' browser (if exists).

View File
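`read_cookie_files` accepts `.json` exports shaped like the EditThisCookie format mentioned in the README: a flat list of records with `domain`, `name`, and `value`. The grouping step as a standalone sketch (helper name is illustrative):

```python
import json

def group_cookies(cookie_list) -> dict:
    """Group a list of {"domain", "name", "value"} records by domain."""
    grouped = {}
    for c in cookie_list:
        # skip malformed entries, as read_cookie_files does
        if isinstance(c, dict) and "domain" in c:
            grouped.setdefault(c["domain"], {})[c["name"]] = c["value"]
    return grouped

exported = json.loads('''[
    {"domain": ".bing.com", "name": "_U", "value": "xxx"},
    {"domain": ".google.com", "name": "__Secure-1PSID", "value": "yyy"}
]''')
```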

@@ -12,9 +12,6 @@ def run_gui(host: str = '0.0.0.0', port: int = 8080, debug: bool = False) -> Non
if import_error is not None:
raise MissingRequirementsError(f'Install "gui" requirements | pip install -U g4f[gui]\n{import_error}')
if debug:
from g4f import debug
debug.logging = True
config = {
'host' : host,
'port' : port,

View File

@@ -5,4 +5,5 @@ def gui_parser():
parser.add_argument("-host", type=str, default="0.0.0.0", help="hostname")
parser.add_argument("-port", type=int, default=8080, help="port")
parser.add_argument("-debug", action="store_true", help="debug mode")
parser.add_argument("--ignore-cookie-files", action="store_true", help="Don't read .har and cookie files.")
return parser

View File

@@ -1,6 +1,12 @@
from .gui_parser import gui_parser
from ..cookies import read_cookie_files
import g4f.debug
def run_gui_args(args):
if args.debug:
g4f.debug.logging = True
if not args.ignore_cookie_files:
read_cookie_files()
from g4f.gui import run_gui
host = args.host
port = args.port

View File

@@ -16,7 +16,8 @@ conversations: dict[dict[str, BaseConversation]] = {}
class Api():
def get_models(self) -> list[str]:
@staticmethod
def get_models() -> list[str]:
"""
Return a list of all models.
@@ -27,7 +28,8 @@ class Api():
"""
return models._all_models
def get_provider_models(self, provider: str) -> list[dict]:
@staticmethod
def get_provider_models(provider: str) -> list[dict]:
if provider in __map__:
provider: ProviderType = __map__[provider]
if issubclass(provider, ProviderModelMixin):
@@ -40,7 +42,28 @@ class Api():
else:
return []
def get_providers(self) -> list[str]:
@staticmethod
def get_image_models() -> list[dict]:
image_models = []
for provider in __providers__:
if hasattr(provider, "image_models"):
if hasattr(provider, "get_models"):
provider.get_models()
parent = provider
if hasattr(provider, "parent"):
parent = __map__[provider.parent]
for model in provider.image_models:
image_models.append({
"provider": parent.__name__,
"url": parent.url,
"label": parent.label if hasattr(parent, "label") else None,
"image_model": model,
"vision_model": parent.default_vision_model if hasattr(parent, "default_vision_model") else None
})
return image_models
@staticmethod
def get_providers() -> list[str]:
"""
Return a list of all working providers.
"""
@@ -58,7 +81,8 @@ class Api():
if provider.working
}
def get_version(self):
@staticmethod
def get_version():
"""
Returns the current and latest version of the application.

View File
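`get_image_models` walks all providers and resolves the new `parent` attribute (e.g. `OpenaiAccount` pointing at `OpenaiChat`) so image models are listed under the parent provider's name and URL. A reduced sketch with stand-in provider classes:

```python
class OpenaiChat:
    url = "https://chat.openai.com"
    image_models = []

class OpenaiAccount(OpenaiChat):
    parent = "OpenaiChat"
    image_models = ["dall-e"]

__map__ = {"OpenaiChat": OpenaiChat, "OpenaiAccount": OpenaiAccount}

def get_image_models(providers) -> list:
    models = []
    for provider in providers:
        # attribute-based indirection: credit the parent provider when one is set
        parent = __map__[provider.parent] if hasattr(provider, "parent") else provider
        for model in provider.image_models:
            models.append({
                "provider": parent.__name__,
                "url": parent.url,
                "image_model": model,
            })
    return models
```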

@@ -31,6 +31,10 @@ class Backend_Api(Api):
'function': self.get_provider_models,
'methods': ['GET']
},
'/backend-api/v2/image_models': {
'function': self.get_image_models,
'methods': ['GET']
},
'/backend-api/v2/providers': {
'function': self.get_providers,
'methods': ['GET']

View File

@@ -86,7 +86,7 @@ def is_data_uri_an_image(data_uri: str) -> bool:
if image_format not in ALLOWED_EXTENSIONS and image_format != "svg+xml":
raise ValueError("Invalid image format (from mime file type).")
def is_accepted_format(binary_data: bytes) -> bool:
def is_accepted_format(binary_data: bytes) -> str:
"""
Checks if the given binary data represents an image with an accepted format.
@@ -241,6 +241,13 @@ def to_bytes(image: ImageType) -> bytes:
else:
return image.read()
def to_data_uri(image: ImageType) -> str:
if not isinstance(image, str):
data = to_bytes(image)
data_base64 = base64.b64encode(data).decode()
return f"data:{is_accepted_format(data)};base64,{data_base64}"
return image
class ImageResponse:
def __init__(
self,

View File
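The added `to_data_uri` pairs base64 encoding with the MIME type returned by `is_accepted_format`. A self-contained sketch that detects only PNG and JPEG magic bytes (a deliberate simplification of the real format check):

```python
import base64

def sniff_mime(data: bytes) -> str:
    # simplified magic-byte check; the real is_accepted_format covers more formats
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "image/png"
    if data.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    raise ValueError("Invalid image format (from magic code).")

def to_data_uri(data: bytes) -> str:
    # inline the bytes as a data URI, as the new helper does for non-str input
    return f"data:{sniff_mime(data)};base64,{base64.b64encode(data).decode()}"

png_stub = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8
```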

@@ -271,13 +271,13 @@ class AsyncGeneratorProvider(AsyncProvider):
raise NotImplementedError()
class ProviderModelMixin:
default_model: str
default_model: str = None
models: list[str] = []
model_aliases: dict[str, str] = {}
@classmethod
def get_models(cls) -> list[str]:
if not cls.models:
if not cls.models and cls.default_model is not None:
return [cls.default_model]
return cls.models

View File
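With `default_model` now defaulting to `None`, `get_models` only falls back to a single-element list when a provider actually declares a default, instead of returning `[None]`. The behavior in isolation:

```python
class ProviderModelMixin:
    default_model: str = None
    models: list = []
    model_aliases: dict = {}

    @classmethod
    def get_models(cls):
        # fall back to the single default model only when one is set
        if not cls.models and cls.default_model is not None:
            return [cls.default_model]
        return cls.models

class WithDefault(ProviderModelMixin):
    default_model = "gpt-3.5-turbo"

class NoModels(ProviderModelMixin):
    pass  # neither models nor a default: an empty list is returned
```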

@@ -3,7 +3,7 @@ from __future__ import annotations
import random
import string
from ..typing import Messages
from ..typing import Messages, Cookies
def format_prompt(messages: Messages, add_special_tokens=False) -> str:
"""
@@ -56,4 +56,7 @@ def filter_none(**kwargs) -> dict:
key: value
for key, value in kwargs.items()
if value is not None
}
}
def format_cookies(cookies: Cookies) -> str:
return "; ".join([f"{k}={v}" for k, v in cookies.items()])
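The shared `format_cookies` replaces the provider-local `_format_cookies`; note that the `access_token` filtering moved into `_create_request_args`, so this helper now serializes whatever mapping it is given:

```python
def format_cookies(cookies: dict) -> str:
    # serialize a cookie mapping into a single Cookie header value
    return "; ".join([f"{k}={v}" for k, v in cookies.items()])

header = format_cookies({"_U": "abc", "SRCHD": "AF=NOFORM"})
```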