![248433934-7886223b-c1d1-4260-82aa-da5741f303bb](https://github.com/xtekky/gpt4free/assets/98614666/ea012c87-76e0-496a-8ac4-e2de090cc6c9)

By using this repository or any code related to it, you agree to the [legal notice](./LEGAL_NOTICE.md). The author is not responsible for any copies, forks, reuploads made by other users, or anything else related to gpt4free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.

- Latest pypi version: [`0.1.5.7`](https://pypi.org/project/g4f/0.1.5.7):

```sh
pip install -U g4f
```

## New features

- Telegram Channel: https://t.me/g4f_official

- The g4f GUI is back! Install g4f with pip and then run:

  ```sh
  python -m g4f.gui.run
  ```

  or

  ```py
  from g4f.gui import run_gui

  run_gui()
  ```

- Run the interference API from the pypi package:

  ```sh
  python -m g4f.interference.run
  ```

  or

  ```py
  from g4f.interference import run_interference

  run_interference()
  ```

## Table of Contents

- [Getting Started](#getting-started)
- [Prerequisites](#prerequisites)
- [Setting up the project](#setting-up-the-project)
- [Install using pypi](#install-using-pypi)
- [Install using docker](#setting-up-with-docker)
- [Usage](#usage)
- [The `g4f` Package](#the-g4f-package)
- [interference openai-proxy api (use with openai python package)](#interference-openai-proxy-api-use-with-openai-python-package)
- [Providers](#models)
- [gpt-3.5 / gpt-4](#gpt-35--gpt-4)
- [Other Models](#other-models)
- [Related gpt4free projects](#related-gpt4free-projects)
- [Contribute](#contribute)
- [Contributors](#contributors)
- [Copyright](#copyright)
- [Star History](#star-history)

## Getting Started

#### Prerequisites:

1. [Download and install Python](https://www.python.org/downloads/) (version 3.10+ is recommended).

#### Setting up the project:

##### Install using pypi

```
pip install -U g4f
```

##### or

1. Clone the GitHub repository:

```
git clone https://github.com/xtekky/gpt4free.git
```

2. Navigate to the project directory:

```
cd gpt4free
```

3. (Recommended) Create a virtual environment to manage Python packages for your project:

```
python3 -m venv venv
```

4. Activate the virtual environment:
   - On Windows:
   ```
   .\venv\Scripts\activate
   ```
   - On macOS and Linux:
   ```
   source venv/bin/activate
   ```

5. Install the required Python packages from `requirements.txt`:

```
pip install -r requirements.txt
```

6. Create a `test.py` file in the root folder and start using the repo; further instructions are below:

```py
import g4f
...
```

##### Setting up with Docker:

If you have Docker installed, you can easily set up and run the project without manually installing dependencies.

1. First, ensure you have both Docker and Docker Compose installed.
   - [Install Docker](https://docs.docker.com/get-docker/)
   - [Install Docker Compose](https://docs.docker.com/compose/install/)

2. Clone the GitHub repo:

```bash
git clone https://github.com/xtekky/gpt4free.git
```

3. Navigate to the project directory:

```bash
cd gpt4free
```

4. Build the Docker image:

```bash
docker compose build
```

5. Start the service using Docker Compose:

```bash
docker compose up
```

Your server will now be running at `http://localhost:1337`. You can interact with the API or run your tests as you would normally.
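For example, a quick check from Python — a minimal sketch, assuming the dockerized server exposes the same OpenAI-compatible `/chat/completions` route as the interference API described below:

```py
import requests

# Sketch: one request against the dockerized server. This assumes it serves
# the OpenAI-compatible /chat/completions route of the interference API.
response = requests.post(
    "http://localhost:1337/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```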
To stop the Docker containers, simply run:

```bash
docker compose down
```

**Note:** When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the `docker-compose.yml` file. If you add or remove dependencies, however, you'll need to rebuild the Docker image using `docker compose build`.

## Usage

### The `g4f` Package

#### ChatCompletion

```py
import g4f

print(g4f.Provider.Ails.params)  # supported args

# Automatic selection of provider

# streamed completion
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
    stream=True,
)

for message in response:
    print(message, flush=True, end='')

# normal response
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4,  # alternative way to set the model
    messages=[{"role": "user", "content": "hi"}],
)

print(response)

# set with provider
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.DeepAi,
    messages=[{"role": "user", "content": "Hello world"}],
    stream=True,
)

for message in response:
    print(message)
```

##### Completion

```py
import g4f

allowed_models = [
    'code-davinci-002',
    'text-ada-001',
    'text-babbage-001',
    'text-curie-001',
    'text-davinci-002',
    'text-davinci-003',
]

response = g4f.Completion.create(
    model='text-davinci-003',
    prompt='say this is a test',
)

print(response)
```

##### Providers:

```py
from g4f.Provider import (
    AItianhu,
    Acytoo,
    Aichat,
    Ails,
    Bard,
    Bing,
    ChatBase,
    ChatgptAi,
    DeepAi,
    H2o,
    HuggingChat,
    OpenAssistant,
    OpenaiChat,
    Raycast,
    Theb,
    Vercel,
    Vitalentum,
    Ylokh,
    You,
    Yqcloud,
)

# Usage:
response = g4f.ChatCompletion.create(..., provider=ProviderName)
```

##### Cookies Required:

Cookies are essential for the proper functioning of some service providers. You need an active session, typically achieved by logging into your account. When running the g4f package locally, the package automatically retrieves cookies from your web browser using the `get_cookies` function. However, if you're not running it locally, you'll need to provide the cookies manually via the `cookies` parameter.

```py
import g4f

from g4f.Provider import (
    Bard,
    Bing,
    HuggingChat,
    OpenAssistant,
    OpenaiChat,
)

# Usage:
response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    provider=Bard,
    #cookies=g4f.get_cookies(".google.com"),
    cookies={"cookie_name": "value", "cookie_name2": "value2"},
    auth=True,
)
```

##### Async Support:

To enhance speed and overall performance, execute providers asynchronously. With `asyncio.gather`, the total execution time is determined by the duration of the slowest provider's execution.

```py
import g4f, asyncio

_providers = [
    g4f.Provider.Aichat,
    g4f.Provider.ChatBase,
    g4f.Provider.Bing,
    g4f.Provider.DeepAi,
    g4f.Provider.GptGo,
    g4f.Provider.You,
    g4f.Provider.Yqcloud,
]

async def run_provider(provider: g4f.Provider.BaseProvider):
    try:
        response = await g4f.ChatCompletion.create_async(
            model=g4f.models.default,
            messages=[{"role": "user", "content": "Hello"}],
            provider=provider,
        )
        print(f"{provider.__name__}:", response)
    except Exception as e:
        print(f"{provider.__name__}:", e)

async def run_all():
    calls = [
        run_provider(provider) for provider in _providers
    ]
    await asyncio.gather(*calls)

asyncio.run(run_all())
```
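If you only need the fastest answer rather than one from every provider, you can race the same request across providers instead of gathering them all. A minimal sketch using `asyncio.wait`; the providers listed are illustrative picks from the list above, and any provider with async support works:

```py
import g4f, asyncio

# Sketch: race one request across several providers and keep the first reply.
async def first_response(prompt: str) -> str:
    tasks = [
        asyncio.create_task(g4f.ChatCompletion.create_async(
            model=g4f.models.default,
            messages=[{"role": "user", "content": prompt}],
            provider=provider,
        ))
        for provider in (g4f.Provider.Bing, g4f.Provider.You, g4f.Provider.Yqcloud)
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:  # cancel the slower providers
        task.cancel()
    return done.pop().result()  # raises if the fastest task failed

print(asyncio.run(first_response("Hello")))
```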
##### Proxy Support:

All providers support specifying a proxy in the create function.

```py
import g4f

response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",
    # or socks5://user:pass@host:port
)

print("Result:", response)
```

### interference openai-proxy api (use with openai python package)

#### run interference from pypi package:

```py
from g4f.interference import run_interference

run_interference()
```

#### run interference from repo:

If you want to use the embedding function, you need a Hugging Face token. You can get one at https://huggingface.co/settings/tokens; make sure your role is set to write. If you have your token, just use it instead of the OpenAI api-key.

Get the requirements:

```sh
pip install -r etc/interference/requirements.txt
```

Run the server:

```sh
python3 -m etc.interference.app
```

```py
import openai

openai.api_key = "Empty if you don't use embeddings, otherwise your Hugging Face token"
openai.api_base = "http://localhost:1337"


def main():
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )

    if isinstance(chat_completion, dict):
        # not stream
        print(chat_completion.choices[0].message.content)
    else:
        # stream
        for token in chat_completion:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)


if __name__ == "__main__":
    main()
```
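The embedding function can be exercised the same way. A hedged sketch with the openai package: the model name is illustrative, and this assumes the proxy resolves embedding requests through the Hugging Face token you configured above:

```py
import openai

openai.api_key = "your huggingface token"  # required for embeddings
openai.api_base = "http://localhost:1337"

# Sketch: request an embedding through the interference proxy.
embedding = openai.Embedding.create(
    model="text-embedding-ada-002",  # illustrative model name
    input="a tree in the forest",
)
print(len(embedding["data"][0]["embedding"]))
```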
## Models

### gpt-3.5 / gpt-4

| Website | Provider | gpt-3.5 | gpt-4 | Streaming | Async | Status | Auth |
| ------- | -------- | ------- | ----- | --------- | ----- | ------ | ---- |
| [www.aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.acytoo.com](https://chat.acytoo.com) | `g4f.Provider.Acytoo` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat-gpt.org](https://chat-gpt.org/chat) | `g4f.Provider.Aichat` | ✔️ | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [ai.ls](https://ai.ls) | `g4f.Provider.Ails` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [www.chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt.ai](https://chatgpt.ai/) | `g4f.Provider.ChatgptAi` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [opchatgpts.net](https://opchatgpts.net) | `g4f.Provider.ChatgptLogin` | ✔️ | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [ava-ai-ef611.web.app](https://ava-ai-ef611.web.app) | `g4f.Provider.CodeLinkAva` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [deepai.org](https://deepai.org) | `g4f.Provider.DeepAi` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gptgo.ai](https://gptgo.ai) | `g4f.Provider.GptGo` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gpt-gm.h2o.ai](https://gpt-gm.h2o.ai) | `g4f.Provider.H2o` | ❌ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [huggingface.co](https://huggingface.co/chat/) | `g4f.Provider.HuggingChat` | ❌ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [opchatgpts.net](https://opchatgpts.net) | `g4f.Provider.Opchatgpts` | ✔️ | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [open-assistant.io](https://open-assistant.io/chat) | `g4f.Provider.OpenAssistant` | ❌ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [www.perplexity.ai](https://www.perplexity.ai) | `g4f.Provider.PerplexityAi` | ✔️ | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [raycast.com](https://raycast.com) | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ | ❌ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [theb.ai](https://theb.ai) | `g4f.Provider.Theb` | ✔️ | ❌ | ✔️ | ❌ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [sdk.vercel.ai](https://sdk.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ❌ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [app.vitalentum.io](https://app.vitalentum.io) | `g4f.Provider.Vitalentum` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [wewordle.org](https://wewordle.org) | `g4f.Provider.Wewordle` | ✔️ | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.ylokh.xyz](https://chat.ylokh.xyz) | `g4f.Provider.Ylokh` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat9.yqcloud.top](https://chat9.yqcloud.top/) | `g4f.Provider.Yqcloud` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [cromicle.top](https://cromicle.top) | `g4f.Provider.Cromicle` | ✔️ | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [aiservice.vercel.app](https://aiservice.vercel.app/) | `g4f.Provider.AiService` | ✔️ | ❌ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.dfehub.com](https://chat.dfehub.com/) | `g4f.Provider.DfeHub` | ✔️ | ❌ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [free.easychat.work](https://free.easychat.work) | `g4f.Provider.EasyChat` | ✔️ | ❌ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [next.eqing.tech](https://next.eqing.tech/) | `g4f.Provider.Equing` | ✔️ | ❌ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat9.fastgpt.me](https://chat9.fastgpt.me/) | `g4f.Provider.FastGpt` | ✔️ | ❌ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [forefront.com](https://forefront.com) | `g4f.Provider.Forefront` | ✔️ | ❌ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.getgpt.world](https://chat.getgpt.world/) | `g4f.Provider.GetGpt` | ✔️ | ❌ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [liaobots.com](https://liaobots.com) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [p5.v50.ltd](https://p5.v50.ltd) | `g4f.Provider.V50` | ✔️ | ❌ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.wuguokai.xyz](https://chat.wuguokai.xyz) | `g4f.Provider.Wuguokai` | ✔️ | ❌ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
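Statuses drift as upstream sites change. Each provider class also carries a `working` flag (the same flag set when creating a provider, see the Contribute section below), so you can check it at runtime. A minimal sketch, assuming `BaseProvider` is importable from `g4f.Provider`:

```py
import g4f
from g4f.Provider import BaseProvider

# Sketch: print each provider's `working` flag, mirroring the Status column.
for name in dir(g4f.Provider):
    obj = getattr(g4f.Provider, name)
    if isinstance(obj, type) and issubclass(obj, BaseProvider) and obj is not BaseProvider:
        status = "Active" if getattr(obj, "working", False) else "Inactive"
        print(f"{name}: {status}")
```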
### Other Models

| Model | Base Provider | Provider | Website |
| --------------------------------------- | ------------- | ------------------- | ------------------------------------------- |
| palm | Google | g4f.Provider.Bard | [bard.google.com](https://bard.google.com/) |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 | Huggingface | g4f.Provider.H2o | [www.h2o.ai](https://www.h2o.ai/) |
| h2ogpt-gm-oasst1-en-2048-falcon-40b-v1 | Huggingface | g4f.Provider.H2o | [www.h2o.ai](https://www.h2o.ai/) |
| h2ogpt-gm-oasst1-en-2048-open-llama-13b | Huggingface | g4f.Provider.H2o | [www.h2o.ai](https://www.h2o.ai/) |
| claude-instant-v1 | Anthropic | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| claude-v1 | Anthropic | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| claude-v2 | Anthropic | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| command-light-nightly | Cohere | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| command-nightly | Cohere | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-neox-20b | Huggingface | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| oasst-sft-1-pythia-12b | Huggingface | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| oasst-sft-4-pythia-12b-epoch-3.5 | Huggingface | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| santacoder | Huggingface | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| bloom | Huggingface | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| flan-t5-xxl | Huggingface | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| code-davinci-002 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-3.5-turbo-16k | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-3.5-turbo-16k-0613 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-4-0613 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-ada-001 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-babbage-001 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-curie-001 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-davinci-002 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-davinci-003 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| llama13b-v2-chat | Replicate | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| llama7b-v2-chat | Replicate | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
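These models go through the same `ChatCompletion` interface: pass the model name from the table together with its matching provider. A short sketch; availability of the upstream models can change at any time:

```py
import g4f

# Sketch: call one of the non-GPT models above via its listed provider.
response = g4f.ChatCompletion.create(
    model="llama13b-v2-chat",
    provider=g4f.Provider.Vercel,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```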
## Related gpt4free projects

- [gpt4free](https://github.com/xtekky/gpt4free)
- gpt4free-ts
- [Free AI API's & Potential Providers List](https://github.com/zukixa/cool-ai-stuff)
- ChatGPT-Clone
- ChatGpt Discord Bot
- LangChain gpt4free
- ChatGpt Telegram Bot
- Action Translate Readme
- Langchain Document GPT
## Contribute

#### Create Provider with AI Tool

Call the `create_provider.py` script in your terminal:

```bash
python etc/tool/create_provider.py
```

1. Enter your name for the new provider.
2. Copy & paste a cURL command from your browser developer tools.
3. Let the AI create the provider for you.
4. Customize the provider according to your needs.

#### Create Provider

0. Check out the current [list of potential providers](https://github.com/zukixa/cool-ai-stuff#ai-chat-websites), or find your own provider source!
1. Create a new file in [g4f/provider](./g4f/provider) with the name of the provider.
2. Implement a class that extends [BaseProvider](./g4f/provider/base_provider.py):

```py
from __future__ import annotations

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider


class HogeService(AsyncGeneratorProvider):
    url                   = "https://chat-gpt.com"
    supports_gpt_35_turbo = True
    working               = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        yield ""
```

3. Here, you can adjust the settings; for example, if the website supports streaming, set `supports_stream` to `True`.
4. Write code that requests the provider in `create_async_generator` and `yield` the response, _even if_ it's a one-time response. Do not hesitate to look at other providers for inspiration.
5. Add the provider name in [g4f/provider/\_\_init\_\_.py](./g4f/provider/__init__.py):

```py
from .HogeService import HogeService

__all__ = [
    "HogeService",
]
```

6. You are done! Test the provider by calling it:

```py
import g4f

response = g4f.ChatCompletion.create(
    model='gpt-3.5-turbo',
    provider=g4f.Provider.PROVIDERNAME,
    messages=[{"role": "user", "content": "test"}],
    stream=g4f.Provider.PROVIDERNAME.supports_stream,
)

for message in response:
    print(message, flush=True, end='')
```

## Contributors

A list of the contributors is available [here](https://github.com/xtekky/gpt4free/graphs/contributors).
The [`Vercel.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/Vercel.py) file contains code from [vercel-llm-api](https://github.com/ading2210/vercel-llm-api) by [@ading2210](https://github.com/ading2210), which is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl-3.0.txt).

Top 1 contributor: [@hlohaus](https://github.com/hlohaus)

## Copyright

This program is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl-3.0.txt)

```
xtekky/gpt4free: Copyright (C) 2023 xtekky

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.
```

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=xtekky/gpt4free&type=Date)](https://star-history.com/#xtekky/gpt4free&Date)