Update model configurations, provider implementations, and documentation (#2577)

* Update model configurations, provider implementations, and documentation

- Updated model names and aliases for Qwen QVQ 72B and Qwen 2 72B (@TheFirstNoob)
- Revised HuggingSpace class configuration, added default_image_model
- Added llama-3.2-70b alias for Llama 3.2 70B model in AutonomousAI
- Removed BlackboxCreateAgent class
- Added gpt-4o alias for Copilot model
- Moved api_key to Mhystical class attribute
- Added models property with default_model value for Free2GPT
- Simplified Jmuz class implementation
- Improved image generation and model handling in DeepInfra
- Standardized default models and removed aliases in Gemini
- Replaced model aliases with direct model list in GlhfChat (@TheFirstNoob)
- Removed trailing slash from image generation URL in PollinationsAI (https://github.com/xtekky/gpt4free/issues/2571)
- Updated llama and qwen model configurations
- Enhanced provider documentation and model details

* Removed the 'Yqcloud' provider from the Default group in g4f/models.py due to the error 'ResponseStatusError: Response 429: 文字过长,请删减后重试。' (text too long; please shorten and retry)

* Update docs/providers-and-models.md

* refactor(g4f/Provider/DDG.py): Add error handling and rate limiting to DDG provider

- Add custom exception classes for rate limits, timeouts, and conversation limits
- Implement rate limiting with sleep between requests (0.75s minimum delay); see the sketch after this list
- Add model validation method to check supported models
- Add proper error handling for API responses with custom exceptions
- Improve session cookie handling for conversation persistence
- Clean up User-Agent string and remove redundant code
- Add proper error propagation through async generator
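
Below is a minimal sketch of the pattern these bullets describe. The class and exception names are hypothetical illustrations, not the provider's actual API; only the 0.75s minimum delay comes from this changelog.

```python
import asyncio
import time

class RateLimitError(Exception):
    """Raised when the upstream service returns HTTP 429."""

class ModelNotSupportedError(Exception):
    """Raised when a requested model is not in the supported model list."""

class ThrottledProvider:
    models = ["gpt-4o-mini", "claude-3-haiku"]  # assumed model list
    _last_request = 0.0
    _min_delay = 0.75  # minimum delay between requests, per the changelog

    @classmethod
    def get_model(cls, model: str) -> str:
        # Stricter model validation: reject anything not explicitly listed.
        if model not in cls.models:
            raise ModelNotSupportedError(f"Model not supported: {model}")
        return model

    @staticmethod
    def raise_for_status(status: int) -> None:
        # Map API error responses to the custom exceptions.
        if status == 429:
            raise RateLimitError("Rate limit exceeded; slow down requests")

    @classmethod
    async def _throttle(cls) -> None:
        # Sleep just long enough to keep at least _min_delay between calls.
        elapsed = time.monotonic() - cls._last_request
        if elapsed < cls._min_delay:
            await asyncio.sleep(cls._min_delay - elapsed)
        cls._last_request = time.monotonic()
```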

Breaking changes:
- New custom exceptions may require updates to error handling code (see the usage sketch below)
- Rate limiting affects request timing and throughput
- Model validation is now stricter
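
A hedged usage sketch showing how calling code might adapt, continuing the hypothetical names from the sketch above:

```python
async def ask(model: str, prompt: str) -> str:
    try:
        ThrottledProvider.get_model(model)        # stricter model validation
        await ThrottledProvider._throttle()       # enforced request spacing
        ThrottledProvider.raise_for_status(200)   # would use the real HTTP status
        return f"(request for {model!r}: {prompt!r} would go here)"
    except ModelNotSupportedError as e:
        return f"unsupported model: {e}"
    except RateLimitError:
        return "rate limited; retry later"
```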

Related:
- Adds error handling similar to standard API clients
- Improves reliability and robustness of chat interactions

* Update g4f/models.py g4f/Provider/PollinationsAI.py

* Update g4f/models.py

* Restored the previously disabled, non-working DeepInfraChat provider (g4f/Provider/DeepInfraChat.py)

* Fixed a bug with Streaming Completions

* Update g4f/Provider/PollinationsAI.py

* Update g4f/Provider/Blackbox.py g4f/Provider/DDG.py

* Added another image generation model, 'ImageGeneration2', to the 'Blackbox' provider

* Update docs/providers-and-models.md

* Update g4f/models.py g4f/Provider/Blackbox.py

* Added a new OIVSCode provider with text models and vision (image upload) support

* Update docs/providers-and-models.md

* docs: add Conversation Memory class with context handling, as requested by @TheFirstNoob

* Simplified the README.md documentation and added new docs/configuration.md documentation

* Updated README.md and docs/configuration.md

* Update README.md

* Update docs/providers-and-models.md g4f/models.py g4f/Provider/PollinationsAI.py

* Added new model deepseek-r1 to Blackbox provider. @TheFirstNoob

* Fixed bugs and updated docs/providers-and-models.md, etc/unittest/client.py, g4f/models.py, and g4f/Provider/.

---------

Co-authored-by: kqlio67 <>
Co-authored-by: H Lohaus <hlohaus@users.noreply.github.com>
kqlio67 2025-01-24 02:47:57 +00:00 committed by GitHub
parent a9fde5bf88
commit 9def1aa71f
42 changed files with 1464 additions and 1769 deletions

README.md

@@ -1,12 +1,17 @@
![248433934-7886223b-c1d1-4260-82aa-da5741f303bb](https://github.com/xtekky/gpt4free/assets/98614666/ea012c87-76e0-496a-8ac4-e2de090cc6c9)
<a href="https://trendshift.io/repositories/1692" target="_blank"><img src="https://trendshift.io/api/badge/repositories/1692" alt="xtekky%2Fgpt4free | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
---
<p align="center"><strong>Written by <a href="https://github.com/xtekky">@xtekky</a></strong></p>
<p align="center">
<span style="background: linear-gradient(45deg, #12c2e9, #c471ed, #f64f59); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">
<strong>Written by <a href="https://github.com/xtekky">@xtekky</a></strong>
</span>
</p>
<div id="top"></div>
@@ -30,7 +35,6 @@ docker pull hlohaus789/g4f
## 🆕 What's New
- **For comprehensive details on new features and updates, please refer to our** [Releases](https://github.com/xtekky/gpt4free/releases) **page**
- **Installation Guide for Windows (.exe):** 💻 [Installation Guide for Windows (.exe)](#installation-guide-for-windows-exe)
- **Join our Telegram Channel:** 📨 [telegram.me/g4f_channel](https://telegram.me/g4f_channel)
- **Join our Discord Group:** 💬🆕️ [https://discord.gg/5E39JUWUFa](https://discord.gg/5E39JUWUFa)
@@ -39,166 +43,128 @@ docker pull hlohaus789/g4f
Is your site on this repository and you want to take it down? Send an email to takedown@g4f.ai with proof it is yours and it will be removed as fast as possible. To prevent reproduction please secure your API. 😉
## 🚀 GPT4Free on HuggingFace
[![HuggingSpace](https://github.com/user-attachments/assets/1d859e8a-d6fa-416f-a213-ccc26aa11e90)](https://huggingface.co/spaces/roxky/g4f)
**A proof-of-concept API package for multi-provider AI requests. It showcases features such as:**
Explore our GPT4Free project on HuggingFace Spaces by clicking the link below:
- Load balancing and request flow control.
- Seamless integration with multiple AI providers.
- Comprehensive text and image generation support.
- [Visit GPT4Free on HuggingFace](https://huggingface.co/spaces/roxky/g4f)
If you would like to create your own copy of this space, you can duplicate it using the following link:
- [Duplicate GPT4Free Space](https://huggingface.co/spaces/roxky/g4f?duplicate=true)
> Explore the [GPT4Free Space on HuggingFace](https://huggingface.co/spaces/roxky/g4f) for a hosted version, or [duplicate the Space](https://huggingface.co/spaces/roxky/g4f?duplicate=true) for personal use.
---
## 📚 Table of Contents
- [🆕 What's New](#-whats-new)
- [📚 Table of Contents](#-table-of-contents)
- [🛠️ Getting Started](#-getting-started)
- [Docker Container Guide](#docker-container-guide)
- [Installation Guide for Windows (.exe)](#installation-guide-for-windows-exe)
- [Use python](#use-python)
- [Prerequisites](#prerequisites)
- [Install using PyPI package](#install-using-pypi-package)
- [Install from source](#install-from-source)
- [Install using Docker](#install-using-docker)
- [💡 Usage](#-usage)
- [Text Generation](#text-generation)
- [Image Generation](#image-generation)
- [Web UI](#web-ui)
- [Interference API](#interference-api)
- [Local Inference](docs/local.md)
- [Configuration](#configuration)
- [Full Documentation for Python API](#full-documentation-for-python-api)
- [Requests API from G4F](docs/requests.md)
- [Client API from G4F](docs/client.md)
- [AsyncClient API from G4F](docs/async_client.md)
- [🚀 Providers and Models](docs/providers-and-models.md)
- [🔗 Powered by gpt4free](#-powered-by-gpt4free)
- [🤝 Contribute](#-contribute)
- [How do I create a new Provider?](#guide-how-do-i-create-a-new-provider)
- [How can AI help me with writing code?](#guide-how-can-ai-help-me-with-writing-code)
- [⚡ Getting Started](#-getting-started)
- [🛠 Installation](#-installation)
- [🐳 Using Docker](#-using-docker)
- [🪟 Windows Guide (.exe)](#-windows-guide-exe)
- [🐍 Python Installation](#-python-installation)
- [💡 Usage](#-usage)
- [📝 Text Generation](#-text-generation)
- [🎨 Image Generation](#-image-generation)
- [🌐 Web Interface](#-web-interface)
- [🖥️ Local Inference](docs/local.md)
- [🤖 Interference API](#-interference-api)
- [🛠️ Configuration](docs/configuration.md)
- [📱 Run on Smartphone](#-run-on-smartphone)
- [📘 Full Documentation for Python API](#-full-documentation-for-python-api)
- [🚀 Providers and Models](docs/providers-and-models.md)
- [🔗 Powered by gpt4free](#-powered-by-gpt4free)
- [🤝 Contribute](#-contribute)
- [How do I create a new Provider?](#guide-how-do-i-create-a-new-provider)
- [How can AI help me with writing code?](#guide-how-can-ai-help-me-with-writing-code)
- [🙌 Contributors](#-contributors)
- [©️ Copyright](#-copyright)
- [⭐ Star History](#-star-history)
- [📄 License](#-license)
- [⭐ Star History](#-star-history)
- [📄 License](#-license)
## 🛠️ Getting Started
---
#### Docker Container Guide
## ⚡️ Getting Started
##### Getting Started Quickly:
## 🛠 Installation
1. **Install Docker:** Begin by [downloading and installing Docker](https://docs.docker.com/get-docker/).
### 🐳 Using Docker
1. **Install Docker:** [Download and install Docker](https://docs.docker.com/get-docker/).
2. **Set Up Directories:** Before running the container, make sure the necessary data directories exist or can be created. For example, you can create and set ownership on these directories by running:
```bash
mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_images
chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_images
```
3. **Run the Docker Container:** Use the following commands to pull the latest image and start the container:
```bash
docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 \
--shm-size="2g" \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_images:/app/generated_images \
hlohaus789/g4f:latest
```
2. **Check Directories:**
4. **Running the Slim Docker Image:** Use the following command to run the Slim Docker image. This command also updates the `g4f` package at startup and installs any additional dependencies:
```bash
docker run \
-p 1337:1337 \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_images:/app/generated_images \
hlohaus789/g4f:latest-slim \
rm -r -f /app/g4f/ \
&& pip install -U g4f[slim] \
&& python -m g4f --debug
```
Before running the container, make sure the necessary data directories exist or can be created. For example, you can create and set ownership on these directories by running:
5. **Access the Client Interface:**
- **To use the included client, navigate to:** [http://localhost:8080/chat/](http://localhost:8080/chat/) or [http://localhost:1337/chat/](http://localhost:1337/chat/)
- **Or set the API base for your client to:** [http://localhost:1337/v1](http://localhost:1337/v1)
```bash
mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_images
chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_images
```
3. **Set Up the Container:**
Use the following commands to pull the latest image and start the container:
```bash
docker pull hlohaus789/g4f
docker run \
-p 8080:8080 -p 1337:1337 -p 7900:7900 \
--shm-size="2g" \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_images:/app/generated_images \
hlohaus789/g4f:latest
```
##### Running the Slim Docker Image
Use the following command to run the Slim Docker image. This command also updates the `g4f` package at startup and installs any additional dependencies:
```bash
docker run \
-p 1337:1337 \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_images:/app/generated_images \
hlohaus789/g4f:latest-slim \
rm -r -f /app/g4f/ \
&& pip install -U g4f[slim] \
&& python -m g4f --debug
```
4. **Access the Client:**
- To use the included client, navigate to: [http://localhost:8080/chat/](http://localhost:8080/chat/) or [http://localhost:1337/chat/](http://localhost:1337/chat/)
- Or set the API base for your client to: [http://localhost:1337/v1](http://localhost:1337/v1)
5. **(Optional) Provider Login:**
6. **(Optional) Provider Login:**
If required, you can access the container's desktop here: http://localhost:7900/?autoconnect=1&resize=scale&password=secret for provider login purposes.
#### Installation Guide for Windows (.exe)
---
### 🪟 Windows Guide (.exe)
To ensure the seamless operation of our application, please follow the instructions below. These steps are designed to guide you through the installation process on Windows operating systems.
### Installation Steps
**Installation Steps:**
1. **Download the Application**: Visit our [releases page](https://github.com/xtekky/gpt4free/releases/tag/0.4.0.6) and download the most recent version of the application, named `g4f.exe.zip`.
2. **File Placement**: After downloading, locate the `.zip` file in your Downloads folder. Unpack it to a directory of your choice on your system, then execute the `g4f.exe` file to run the app.
3. **Open GUI**: The app starts a web server with the GUI. Open your favorite browser and navigate to `http://localhost:8080/chat/` to access the application interface.
3. **Open GUI**: The app starts a web server with the GUI. Open your favorite browser and navigate to [http://localhost:8080/chat/](http://localhost:8080/chat/) to access the application interface.
4. **Firewall Configuration (Hotfix)**: Upon installation, it may be necessary to adjust your Windows Firewall settings to allow the application to operate correctly. To do this, access your Windows Firewall settings and allow the application.
By following these steps, you should be able to successfully install and run the application on your Windows system. If you encounter any issues during the installation process, please refer to our Issue Tracker or reach out on Discord for assistance.
---
### Learn More About the GUI
### 🐍 Python Installation
For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the **GUI Documentation**:
#### Prerequisites:
1. Install Python 3.10+ from [python.org](https://www.python.org/downloads/).
2. Install Google Chrome for certain providers.
- [GUI Documentation](docs/gui.md)
This guide includes step-by-step details on provider selection, managing conversations, using advanced features like speech recognition, and more.
---
### Use Your Smartphone
Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device:
- [Run on Smartphone Guide](docs/guides/phone.md)
---
### Use python
##### Prerequisites:
1. [Download and install Python](https://www.python.org/downloads/) (Version 3.10+ is recommended).
2. [Install Google Chrome](https://www.google.com/chrome/) for providers that require a webdriver
##### Install using PyPI package:
#### Install with PyPI:
```bash
pip install -U g4f[all]
```
How do I install only parts or disable parts?
Use partial requirements: [/docs/requirements](docs/requirements.md)
> How do I install only parts or disable parts? **Use partial requirements:** [/docs/requirements](docs/requirements.md)
##### Install from source:
#### Install from Source:
```bash
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
pip install -r requirements.txt
```
How do I load the project using git and install the project requirements?
Read this tutorial and follow it step by step: [/docs/git](docs/git.md)
> How do I load the project using git and install the project requirements? **Read this tutorial and follow it step by step:** [/docs/git](docs/git.md)
##### Install using Docker:
How do I build and run the compose image from source?
Use docker-compose: [/docs/docker](docs/docker.md)
---
## 💡 Usage
#### Text Generation
### 📝 Text Generation
```python
from g4f.client import Client
@@ -206,16 +172,15 @@ client = Client()
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello"}],
web_search = False
web_search=False
)
print(response.choices[0].message.content)
```
```
Hello! How can I assist you today?
```
#### Image Generation
### 🎨 Image Generation
```python
from g4f.client import Client
@@ -226,37 +191,27 @@ response = client.images.generate(
response_format="url"
)
image_url = response.data[0].url
print(f"Generated image URL: {image_url}")
print(f"Generated image URL: {response.data[0].url}")
```
[![Image with cat](/docs/images/cat.jpeg)](docs/client.md)
#### **Full Documentation for Python API**
- **New:**
- **Requests API from G4F:** [/docs/requests](docs/requests.md)
- **Client API from G4F:** [/docs/client](docs/client.md)
- **AsyncClient API from G4F:** [/docs/async_client](docs/async_client.md)
- **File API from G4F:** [/docs/file](docs/file.md)
- **Legacy:**
- **Legacy API with python modules:** [/docs/legacy](docs/legacy.md)
#### Web UI
**To start the web interface, type the following code in Python:**
### 🌐 Web Interface
**Run the GUI using Python:**
```python
from g4f.gui import run_gui
run_gui()
```
or execute the following command:
**Or, run via CLI:**
```bash
python -m g4f.cli gui -port 8080 -debug
```
### Interference API
> **Learn More About the GUI:** For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the [GUI Documentation](docs/gui.md). This guide includes step-by-step details on provider selection, managing conversations, using advanced features like speech recognition, and more.
---
### 🤖 Interference API
The **Interference API** enables seamless integration with OpenAI's services through G4F, allowing you to deploy efficient AI solutions.
@@ -266,99 +221,21 @@ The **Interference API** enables seamless integration with OpenAI's services thr
This API is designed for straightforward implementation and enhanced compatibility with other OpenAI integrations.
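
As an illustration, here is a minimal sketch of calling the Interference API with the official `openai` Python package, assuming a local server started as in the Docker steps above (the placeholder key is only needed because the client requires a non-empty value when `--g4f-api-key` is not set):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Interference API.
client = OpenAI(
    base_url="http://localhost:1337/v1",
    api_key="secret",  # placeholder; see the authentication docs for real keys
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```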
### Configuration
---
#### Authentication
### 📱 Run on Smartphone
Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device: [Run on Smartphone Guide](docs/guides/phone.md)
Refer to the [G4F Authentication Setup Guide](docs/authentication.md) for detailed instructions on setting up authentication.
---
#### Cookies
#### **📘 Full Documentation for Python API**
- **Client API from G4F:** [/docs/client](docs/client.md)
- **AsyncClient API from G4F:** [/docs/async_client](docs/async_client.md)
- **Requests API from G4F:** [/docs/requests](docs/requests.md)
- **File API from G4F:** [/docs/file](docs/file.md)
- **Legacy API with python modules:** [/docs/legacy](docs/legacy.md)
Cookies are essential for using Meta AI and Microsoft Designer to create images.
Additionally, cookies are required for the Google Gemini and WhiteRabbitNeo Provider.
From Bing, ensure you have the "\_U" cookie, and from Google, all cookies starting with "\_\_Secure-1PSID" are needed.
You can pass these cookies directly to the create function or set them using the `set_cookies` method before running G4F:
```python
from g4f.cookies import set_cookies
set_cookies(".bing.com", {
"_U": "cookie value"
})
set_cookies(".google.com", {
"__Secure-1PSID": "cookie value"
})
```
#### Using .har and Cookie Files
You can place `.har` and cookie files `.json` in the default `./har_and_cookies` directory. To export a cookie file, use the [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) available on the Chrome Web Store.
#### Creating .har Files to Capture Cookies
To capture cookies, you can also create `.har` files. For more details, refer to the next section.
#### Changing the Cookies Directory and Loading Cookie Files in Python
You can change the cookies directory and load cookie files in your Python environment. To set the cookies directory relative to your Python file, use the following code:
```python
import os.path
from g4f.cookies import set_cookies_dir, read_cookie_files
import g4f.debug
g4f.debug.logging = True
cookies_dir = os.path.join(os.path.dirname(__file__), "har_and_cookies")
set_cookies_dir(cookies_dir)
read_cookie_files(cookies_dir)
```
### Debug Mode
If you enable debug mode, you will see logs similar to the following:
```
Read .har file: ./har_and_cookies/you.com.har
Cookies added: 10 from .you.com
Read cookie file: ./har_and_cookies/google.json
Cookies added: 16 from .google.com
```
#### .HAR File for OpenaiChat Provider
##### Generating a .HAR File
To utilize the OpenaiChat provider, a .har file is required from https://chatgpt.com/. Follow the steps below to create a valid .har file:
1. Navigate to https://chatgpt.com/ using your preferred web browser and log in with your credentials.
2. Access the Developer Tools in your browser. This can typically be done by right-clicking the page and selecting "Inspect," or by pressing F12 or Ctrl+Shift+I (Cmd+Option+I on a Mac).
3. With the Developer Tools open, switch to the "Network" tab.
4. Reload the website to capture the loading process within the Network tab.
5. Initiate an action in the chat which can be captured in the .har file.
6. Right-click any of the network activities listed and select "Save all as HAR with content" to export the .har file.
##### Storing the .HAR File
- Place the exported .har file in the `./har_and_cookies` directory if you are using Docker. Alternatively, if you are using Python from a terminal, you can store it in a `./har_and_cookies` directory within your current working directory.
> **Note:** Ensure that your .har file is stored securely, as it may contain sensitive information.
#### Using Proxy
If you want to hide or change your IP address for the providers, you can set a proxy globally via an environment variable:
**- On macOS and Linux:**
```bash
export G4F_PROXY="http://host:port"
```
**- On Windows:**
```bash
set G4F_PROXY=http://host:port
```
---
## 🔗 Powered by gpt4free
@@ -818,6 +695,8 @@ set G4F_PROXY=http://host:port
</tbody>
</table>
## 🤝 Contribute
We welcome contributions from the community. Whether you're adding new providers or features, or simply fixing typos and making small improvements, your input is valued. Creating a pull request is all it takes; our co-pilot will handle the code review process. Once all changes have been addressed, we'll merge the pull request into the main branch and release the updates at a later time.
@@ -827,7 +706,9 @@ We welcome contributions from the community. Whether you're adding new providers
###### Guide: How can AI help me with writing code?
- **Read:** [AI Assistance Guide](docs/guides/help_me.md)
## 🙌 Contributors
## Contributors
A list of all contributors is available [here](https://github.com/xtekky/gpt4free/graphs/contributors)
<a href="https://github.com/xtekky" target="_blank"><img src="https://avatars.githubusercontent.com/u/98614666?v=4&s=45" width="45" title="xtekky"></a>
@@ -946,6 +827,7 @@ A list of all contributors is available [here](https://github.com/xtekky/gpt4fre
_Having input implies that the AI's code generation utilized it as one of many sources._
## ©️ Copyright
This program is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -967,12 +849,14 @@ You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
```
## ⭐ Star History
<a href="https://github.com/xtekky/gpt4free/stargazers">
<img width="500" alt="Star History Chart" src="https://api.star-history.com/svg?repos=xtekky/gpt4free&type=Date">
</a>
## 📄 License
<table>

docs/async_client.md

@@ -1,4 +1,6 @@
# G4F - AsyncClient API Guide
The G4F AsyncClient API is a powerful asynchronous interface for interacting with various AI models. This guide provides comprehensive information on how to use the API effectively, including setup, usage examples, best practices, and important considerations for optimal performance.
@@ -18,6 +20,9 @@ The G4F AsyncClient API is designed to be compatible with the OpenAI API, making
- [Streaming Completions](#streaming-completions)
- [Using a Vision Model](#using-a-vision-model)
- [Image Generation](#image-generation)
- [Advanced Usage](#advanced-usage)
- [Conversation Memory](#conversation-memory)
- [Search Tool Support](#search-tool-support)
- [Concurrent Tasks](#concurrent-tasks-with-asynciogather)
- [Available Models and Providers](#available-models-and-providers)
- [Error Handling and Best Practices](#error-handling-and-best-practices)
@@ -145,7 +150,7 @@ from g4f.client import AsyncClient
async def main():
client = AsyncClient()
stream = client.chat.completions.create(
stream = await client.chat.completions.create(
model="gpt-4",
messages=[
{
@@ -154,6 +159,7 @@ async def main():
}
],
stream=True,
web_search=False
)
async for chunk in stream:
@@ -163,6 +169,8 @@ async def main():
asyncio.run(main())
```
---
### Using a Vision Model
**Analyze an image and generate a description:**
```python
@@ -244,6 +252,194 @@ async def main():
asyncio.run(main())
```
---
### Creating Image Variations
**Create variations of an existing image:**
```python
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import OpenaiChat
async def main():
client = AsyncClient(image_provider=OpenaiChat)
response = await client.images.create_variation(
prompt="a white siamese cat",
image=open("docs/images/cat.jpg", "rb"),
model="dall-e-3",
# Add any other necessary parameters
)
image_url = response.data[0].url
print(f"Generated image URL: {image_url}")
asyncio.run(main())
```
---
## Advanced Usage
### Conversation Memory
To maintain a coherent conversation, it's important to store the context or history of the dialogue. This can be achieved by appending both the user's inputs and the bot's responses to a messages list. This allows the model to reference past exchanges when generating responses.
**The following example demonstrates how to implement conversation memory with the G4F client:**
```python
import asyncio
from g4f.client import AsyncClient
class Conversation:
def __init__(self):
self.client = AsyncClient()
self.history = [
{
"role": "system",
"content": "You are a helpful assistant."
}
]
def add_message(self, role, content):
self.history.append({
"role": role,
"content": content
})
async def get_response(self, user_message):
# Add user message to history
self.add_message("user", user_message)
# Get response from AI
response = await self.client.chat.completions.create(
model="gpt-4o-mini",
messages=self.history,
web_search=False
)
# Add AI response to history
assistant_response = response.choices[0].message.content
self.add_message("assistant", assistant_response)
return assistant_response
async def main():
conversation = Conversation()
print("=" * 50)
print("G4F Chat started (type 'exit' to end)".center(50))
print("=" * 50)
print("\nAI: Hello! How can I assist you today?")
while True:
user_input = input("\nYou: ")
if user_input.lower() == 'exit':
print("\nGoodbye!")
break
response = await conversation.get_response(user_input)
print("\nAI:", response)
if __name__ == "__main__":
asyncio.run(main())
```
---
## Search Tool Support
The **Search Tool Support** feature enables triggering a web search during chat completions. This is useful for retrieving real-time or specific data, offering a more flexible solution than `web_search`.
**Example Usage:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
client = AsyncClient()
tool_calls = [
{
"function": {
"arguments": {
"query": "Latest advancements in AI",
"max_results": 5,
"max_words": 2500,
"backend": "api",
"add_text": True,
"timeout": 5
},
"name": "search_tool"
},
"type": "function"
}
]
response = await client.chat.completions.create(
model="gpt-4",
messages=[
{
"role": "user",
"content": "Tell me about recent advancements in AI."
}
],
tool_calls=tool_calls
)
print(response.choices[0].message.content)
if __name__ == "__main__":
asyncio.run(main())
```
**Parameters for `search_tool`:**
- **`query`**: The search query string.
- **`max_results`**: Number of search results to retrieve.
- **`max_words`**: Maximum number of words in the response.
- **`backend`**: The backend used for search (e.g., `"api"`).
- **`add_text`**: Whether to include text snippets in the response.
- **`timeout`**: Maximum time (in seconds) for the search operation.
**Advantages of Search Tool Support:**
- Works with any provider, irrespective of `web_search` support.
- Offers more customization and control over the search process.
- Bypasses provider-specific limitations.
---
### Using a List of Providers with RetryProvider
```python
import asyncio
from g4f.client import AsyncClient
import g4f.debug
g4f.debug.logging = True
g4f.debug.version_check = False
from g4f.Provider import RetryProvider, Phind, FreeChatgpt, Liaobots
async def main():
client = AsyncClient(provider=RetryProvider([Phind, FreeChatgpt, Liaobots], shuffle=False))
response = await client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{
"role": "user",
"content": "Hello"
}
],
web_search=False
)
print(response.choices[0].message.content)
asyncio.run(main())
```
---
### Concurrent Tasks with asyncio.gather
**Execute multiple tasks concurrently:**
```python
@@ -284,9 +480,10 @@ asyncio.run(main())
```
## Available Models and Providers
The G4F AsyncClient supports a wide range of AI models and providers, allowing you to choose the best option for your specific use case. **Here's a brief overview of the available models and providers:**
The G4F AsyncClient supports a wide range of AI models and providers, allowing you to choose the best option for your specific use case.
### Models
**Here's a brief overview of the available models and providers:**
**Models**
- GPT-3.5-Turbo
- GPT-4o-Mini
- GPT-4
@@ -295,7 +492,7 @@ The G4F AsyncClient supports a wide range of AI models and providers, allowing y
- Claude (Anthropic)
- And more...
### Providers
**Providers**
- OpenAI
- Google (for Gemini)
- Anthropic
@@ -321,7 +518,9 @@ response = await client.chat.completions.create(
```
## Error Handling and Best Practices
Implementing proper error handling and following best practices is crucial when working with the G4F AsyncClient API. This ensures your application remains robust and can gracefully handle various scenarios. **Here are some key practices to follow:**
Implementing proper error handling and following best practices is crucial when working with the G4F AsyncClient API. This ensures your application remains robust and can gracefully handle various scenarios.
**Here are some key practices to follow:**
1. **Use try-except blocks to catch and handle exceptions:**
```python

docs/authentication.md

@@ -1,139 +1,266 @@
# G4F Authentication Setup Guide
# **G4F - Authentication Guide**
This documentation explains how to authenticate with G4F providers and configure GUI security. It covers API key management, cookie-based authentication, rate limiting, and GUI access controls.
This documentation explains how to set up Basic Authentication for the GUI and API key authentication for the API when running the G4F server.
---
## Prerequisites
## **Table of Contents**
1. **[Provider Authentication](#provider-authentication)**
- [Prerequisites](#prerequisites)
- [API Key Setup](#api-key-setup)
- [Synchronous Usage](#synchronous-usage)
- [Asynchronous Usage](#asynchronous-usage)
- [Multiple Providers](#multiple-providers-with-api-keys)
- [Cookie-Based Authentication](#cookie-based-authentication)
- [Rate Limiting](#rate-limiting)
- [Error Handling](#error-handling)
- [Supported Providers](#supported-providers)
2. **[GUI Authentication](#gui-authentication)**
- [Server Setup](#server-setup)
- [Browser Access](#browser-access)
- [Programmatic Access](#programmatic-access)
3. **[Best Practices](#best-practices)**
4. **[Troubleshooting](#troubleshooting)**
Before proceeding, ensure you have the following installed:
- Python 3.x
- G4F package installed (ensure it is set up and working)
- Basic knowledge of using environment variables on your operating system
---
## Steps to Set Up Authentication
## **Provider Authentication**
### 1. API Key Authentication for Both GUI and API
### **Prerequisites**
- Python 3.7+
- Installed `g4f` package:
```bash
pip install g4f
```
- API keys or cookies from providers (if required).
To secure both the GUI and the API, you'll authenticate using an API key. The API key should be injected via an environment variable and passed to both the GUI (via Basic Authentication) and the API.
---
#### Steps to Inject the API Key Using Environment Variables:
### **API Key Setup**
#### **Step 1: Set Environment Variables**
**For Linux/macOS (Terminal)**:
```bash
# Example for Anthropic
export ANTHROPIC_API_KEY="your_key_here"
1. **Set the environment variable** for your API key:
# Example for HuggingFace
export HUGGINGFACE_API_KEY="another_key_here"
```
On Linux/macOS:
**For Windows (Command Prompt)**:
```cmd
:: Example for Anthropic
set ANTHROPIC_API_KEY=your_key_here
:: Example for HuggingFace
set HUGGINGFACE_API_KEY=another_key_here
```
**For Windows (PowerShell)**:
```powershell
# Example for Anthropic
$env:ANTHROPIC_API_KEY = "your_key_here"
# Example for HuggingFace
$env:HUGGINGFACE_API_KEY = "another_key_here"
```
#### **Step 2: Initialize Client**
```python
from g4f.client import Client
# Example for Anthropic
client = Client(
provider="g4f.Provider.Anthropic",
api_key="your_key_here" # Or use os.getenv("ANTHROPIC_API_KEY")
)
```
---
### **Synchronous Usage**
```python
from g4f.client import Client
# Initialize with Anthropic
client = Client(provider="g4f.Provider.Anthropic", api_key="your_key_here")
# Simple request
response = client.chat.completions.create(
model="claude-3.5-sonnet",
messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
---
### **Asynchronous Usage**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
# Initialize with Groq
client = AsyncClient(provider="g4f.Provider.Groq", api_key="your_key_here")
response = await client.chat.completions.create(
model="mixtral-8x7b",
messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
asyncio.run(main())
```
---
### **Multiple Providers with API Keys**
```python
import os
from g4f.client import Client
# Using environment variables
providers = {
"Anthropic": os.getenv("ANTHROPIC_API_KEY"),
"Groq": os.getenv("GROQ_API_KEY")
}
for provider_name, api_key in providers.items():
client = Client(provider=f"g4f.Provider.{provider_name}", api_key=api_key)
response = client.chat.completions.create(
model="claude-3.5-sonnet",
messages=[{"role": "user", "content": f"Hello from {provider_name}!"}]
)
print(f"{provider_name}: {response.choices[0].message.content}")
```
---
### **Cookie-Based Authentication**
**For Providers Like Gemini/Bing**:
1. Open your browser and log in to the provider's website.
2. Use developer tools (F12) to copy cookies:
- Chrome/Edge: **Application** → **Cookies**
- Firefox: **Storage** → **Cookies**
```python
from g4f.Provider import Gemini
# Initialize with cookies
client = Client(
provider=Gemini,
cookies={
"__Secure-1PSID": "your_cookie_value_here",
"__Secure-1PSIDTS": "timestamp_value_here"
}
)
```
---
### **Rate Limiting**
```python
from aiolimiter import AsyncLimiter
# Limit to 5 requests per second
rate_limiter = AsyncLimiter(max_rate=5, time_period=1)
async def make_request():
async with rate_limiter:
return await client.chat.completions.create(...)
```
---
### **Error Handling**
```python
from tenacity import retry, stop_after_attempt, wait_exponential
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
def safe_request():
try:
return client.chat.completions.create(...)
except Exception as e:
print(f"Attempt failed: {str(e)}")
raise
```
---
### **Supported Providers**
| Provider | Auth Type | Example Models |
|----------------|-----------------|----------------------|
| Anthropic | API Key | `claude-3.5-sonnet` |
| Gemini | Cookies | `gemini-1.5-pro` |
| Groq | API Key | `mixtral-8x7b` |
| HuggingFace | API Key | `llama-3.1-70b` |
*Full list: [Providers and Models](providers-and-models.md)*
---
## **GUI Authentication**
### **Server Setup**
1. Create a password:
```bash
export G4F_API_KEY="your-api-key-here"
# Linux/macOS
export G4F_API_KEY="your_password_here"
# Windows (Command Prompt)
set G4F_API_KEY=your_password_here
# Windows (PowerShell)
$env:G4F_API_KEY = "your_password_here"
```
On Windows (Command Prompt):
```bash
set G4F_API_KEY="your-api-key-here"
```
On Windows (PowerShell):
```bash
$env:G4F_API_KEY="your-api-key-here"
```
Replace `your-api-key-here` with your actual API key.
2. **Run the G4F server with the API key injected**:
Use the following command to start the G4F server. The API key will be passed to both the GUI and the API:
2. Start the server:
```bash
python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY
```
- `--debug` enables debug mode for more verbose logs.
- `--port 8080` specifies the port on which the server will run (you can change this if needed).
- `--g4f-api-key` specifies the API key for both the GUI and the API.
---
#### Example:
```bash
export G4F_API_KEY="my-secret-api-key"
python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY
```
Now, both the GUI and API will require the correct API key for access.
### **Browser Access**
1. Navigate to `http://localhost:8080/chat/`.
2. Use credentials:
- **Username**: Any value (e.g., `admin`).
- **Password**: Your `G4F_API_KEY`.
---
### 2. Accessing the GUI with Basic Authentication
The GUI uses **Basic Authentication**, where the **username** can be any value, and the **password** is your API key.
#### Example:
To access the GUI, open your web browser and navigate to `http://localhost:8080/chat/`. You will be prompted for a username and password.
- **Username**: You can use any username (e.g., `user` or `admin`).
- **Password**: Enter your API key (the same key you set in the `G4F_API_KEY` environment variable).
---
### 3. Python Example for Accessing the API
To interact with the API, you can send requests by including the `g4f-api-key` in the headers. Here's an example of how to do this using the `requests` library in Python.
#### Example Code to Send a Request:
### **Programmatic Access**
```python
import requests
url = "http://localhost:8080/v1/chat/completions"
# Body of the request
body = {
"model": "your-model-name", # Replace with your model name
"provider": "your-provider", # Replace with the provider name
"messages": [
{
"role": "user",
"content": "Hello"
}
]
}
# API Key (can be set as an environment variable)
api_key = "your-api-key-here" # Replace with your actual API key
# Send the POST request
response = requests.post(url, json=body, headers={"g4f-api-key": api_key})
# Check the response
print(response.status_code)
print(response.json())
response = requests.get(
"http://localhost:8080/chat/",
auth=("admin", "your_password_here")
)
print("Success!" if response.status_code == 200 else f"Failed: {response.status_code}")
```
In this example:
- Replace `"your-api-key-here"` with your actual API key.
- `"model"` and `"provider"` should be replaced with the appropriate model and provider you're using.
- The `messages` array contains the conversation you want to send to the API.
---
#### Response:
The response will contain the output of the API request, such as the model's completion or other relevant data, which you can then process in your application.
## **Best Practices**
1. 🔒 **Never hardcode keys**
- Use `.env` files or secret managers like AWS Secrets Manager (see the sketch after this list).
2. 🔄 **Rotate keys every 90 days**
- Especially critical for production environments.
3. 📊 **Monitor API usage**
- Use tools like Prometheus/Grafana for tracking.
4. ♻️ **Retry transient errors**
- Use the `tenacity` library for robust retry logic.
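
As a sketch of the first practice, assuming the `python-dotenv` package and a `.env` file containing `ANTHROPIC_API_KEY=...`:

```python
import os
from dotenv import load_dotenv
from g4f.client import Client

load_dotenv()  # read variables from .env into the process environment

client = Client(
    provider="g4f.Provider.Anthropic",
    api_key=os.getenv("ANTHROPIC_API_KEY"),  # never hardcoded in source
)
```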
---
### 4. Testing the Setup
- **Accessing the GUI**: Open a web browser and navigate to `http://localhost:8080/chat/`. The GUI will now prompt you for a username and password. You can enter any username (e.g., `admin`), and for the password, enter the API key you set up in the environment variable.
- **Accessing the API**: Use the Python code example above to send requests to the API. Ensure the correct API key is included in the `g4f-api-key` header.
## **Troubleshooting**
| Issue | Solution |
|---------------------------|-------------------------------------------|
| **"Invalid API Key"** | 1. Verify key spelling<br>2. Regenerate key in provider dashboard |
| **"Cookie Expired"** | 1. Re-login to provider website<br>2. Update cookie values |
| **"Rate Limit Exceeded"** | 1. Implement rate limiting<br>2. Upgrade provider plan |
| **"Provider Not Found"** | 1. Check provider name spelling<br>2. Verify provider compatibility |
---
### 5. Troubleshooting
- **GUI Access Issues**: If you're unable to access the GUI, ensure that you are using the correct API key as the password.
- **API Access Issues**: If the API is rejecting requests, verify that the `G4F_API_KEY` environment variable is correctly set and passed to the server. You can also check the server logs for more detailed error messages.
---
## Summary
By following the steps above, you will have successfully set up Basic Authentication for the G4F GUI (using any username and the API key as the password) and API key authentication for the API. This ensures that only authorized users can access both the interface and make API requests.
[Return to Home](/)
**[⬆ Back to Top](#table-of-contents)** | **[Providers and Models →](providers-and-models.md)**

docs/client.md

@@ -1,3 +1,5 @@
# G4F Client API Guide
## Table of Contents
@@ -11,10 +13,12 @@
- [Usage Examples](#usage-examples)
- [Text Completions](#text-completions)
- [Streaming Completions](#streaming-completions)
- [Using a Vision Model](#using-a-vision-model)
- [Image Generation](#image-generation)
- [Creating Image Variations](#creating-image-variations)
- [Search Tool Support](#search-tool-support)
- [Advanced Usage](#advanced-usage)
- [Conversation Memory](#conversation-memory)
- [Search Tool Support](#search-tool-support)
- [Using a List of Providers with RetryProvider](#using-a-list-of-providers-with-retryprovider)
- [Using a Vision Model](#using-a-vision-model)
- [Command-line Chat Program](#command-line-chat-program)
@@ -170,78 +174,48 @@ stream = client.chat.completions.create(
}
],
stream=True,
web_search=False
)
for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content or "", end="")
```
---
## Search Tool Support
The **Search Tool Support** feature enables triggering a web search during chat completions. This is useful for retrieving real-time or specific data, offering a more flexible solution than `web_search`.
**Example Usage**:
### Using a Vision Model
**Analyze an image and generate a description:**
```python
import g4f
import requests
from g4f.client import Client
from g4f.Provider.GeminiPro import GeminiPro
client = Client()
# Initialize the GPT client with the desired provider and api key
client = Client(
api_key="your_api_key_here",
provider=GeminiPro
)
tool_calls = [
{
"function": {
"arguments": {
"query": "Latest advancements in AI",
"max_results": 5,
"max_words": 2500,
"backend": "api",
"add_text": True,
"timeout": 5
},
"name": "search_tool"
},
"type": "function"
}
]
image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).raw
# Or: image = open("docs/images/cat.jpeg", "rb")
response = client.chat.completions.create(
model="gpt-4",
model=g4f.models.default,
messages=[
{"role": "user", "content": "Tell me about recent advancements in AI."}
{
"role": "user",
"content": "What's in this image?"
}
],
tool_calls=tool_calls
image=image
# Add any other necessary parameters
)
print(response.choices[0].message.content)
```
**Parameters for `search_tool`:**
- **`query`**: The search query string.
- **`max_results`**: Number of search results to retrieve.
- **`max_words`**: Maximum number of words in the response.
- **`backend`**: The backend used for search (e.g., `"api"`).
- **`add_text`**: Whether to include text snippets in the response.
- **`timeout`**: Maximum time (in seconds) for the search operation.
**Advantages of Search Tool Support:**
- Works with any provider, irrespective of `web_search` support.
- Offers more customization and control over the search process.
- Bypasses provider-specific limitations.
### Streaming Completions
```python
stream = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Say this is a test"}],
stream=True,
)
for chunk in stream:
print(chunk.choices[0].delta.content or "", end="")
```
---
### Image Generation
@@ -310,6 +284,144 @@ print(f"Generated image URL: {image_url}")
## Advanced Usage
### Conversation Memory
To maintain a coherent conversation, it's important to store the context or history of the dialogue. This can be achieved by appending both the user's inputs and the bot's responses to a messages list. This allows the model to reference past exchanges when generating responses.
**The conversation history consists of messages with different roles:**
- `system`: Initial instructions that define the AI's behavior
- `user`: Messages from the user
- `assistant`: Responses from the AI
**The following example demonstrates how to implement conversation memory with the G4F client:**
```python
from g4f.client import Client
class Conversation:
def __init__(self):
self.client = Client()
self.history = [
{
"role": "system",
"content": "You are a helpful assistant."
}
]
def add_message(self, role, content):
self.history.append({
"role": role,
"content": content
})
def get_response(self, user_message):
# Add user message to history
self.add_message("user", user_message)
# Get response from AI
response = self.client.chat.completions.create(
model="gpt-4o-mini",
messages=self.history,
web_search=False
)
# Add AI response to history
assistant_response = response.choices[0].message.content
self.add_message("assistant", assistant_response)
return assistant_response
def main():
conversation = Conversation()
print("=" * 50)
print("G4F Chat started (type 'exit' to end)".center(50))
print("=" * 50)
print("\nAI: Hello! How can I assist you today?")
while True:
user_input = input("\nYou: ")
if user_input.lower() == 'exit':
print("\nGoodbye!")
break
response = conversation.get_response(user_input)
print("\nAI:", response)
if __name__ == "__main__":
main()
```
**Key Features:**
- Maintains conversation context through a message history
- Includes system instructions for AI behavior
- Automatically stores both user inputs and AI responses
- Simple and clean implementation using a class-based approach
**Usage Example:**
```python
conversation = Conversation()
response = conversation.get_response("Hello, how are you?")
print(response)
```
**Note:**
The conversation history grows with each interaction. For long conversations, you might want to implement a method to limit the history size or clear old messages to manage token usage.
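
A minimal sketch of one such method, assuming the message-list shape used above (`max_messages` is an arbitrary illustration):

```python
def trim_history(history: list[dict], max_messages: int = 20) -> list[dict]:
    # Keep the initial system message plus only the most recent entries,
    # so the context sent to the model stays bounded.
    return history[:1] + history[1:][-max_messages:]
```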
---
## Search Tool Support
The **Search Tool Support** feature enables triggering a web search during chat completions. This is useful for retrieving real-time or specific data, offering a more flexible solution than `web_search`.
**Example Usage**:
```python
from g4f.client import Client
client = Client()
tool_calls = [
{
"function": {
"arguments": {
"query": "Latest advancements in AI",
"max_results": 5,
"max_words": 2500,
"backend": "api",
"add_text": True,
"timeout": 5
},
"name": "search_tool"
},
"type": "function"
}
]
response = client.chat.completions.create(
model="gpt-4",
messages=[
{"role": "user", "content": "Tell me about recent advancements in AI."}
],
tool_calls=tool_calls
)
print(response.choices[0].message.content)
```
**Parameters for `search_tool`:**
- **`query`**: The search query string.
- **`max_results`**: Number of search results to retrieve.
- **`max_words`**: Maximum number of words in the response.
- **`backend`**: The backend used for search (e.g., `"api"`).
- **`add_text`**: Whether to include text snippets in the response.
- **`timeout`**: Maximum time (in seconds) for the search operation.
**Advantages of Search Tool Support:**
- Works with any provider, irrespective of `web_search` support.
- Offers more customization and control over the search process.
- Bypasses provider-specific limitations.
---
### Using a List of Providers with RetryProvider
```python
from g4f.client import Client
@@ -336,40 +448,6 @@ response = client.chat.completions.create(
print(response.choices[0].message.content)
```
### Using a Vision Model
**Analyze an image and generate a description:**
```python
import g4f
import requests
from g4f.client import Client
from g4f.Provider.GeminiPro import GeminiPro
# Initialize the GPT client with the desired provider and api key
client = Client(
api_key="your_api_key_here",
provider=GeminiPro
)
image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).raw
# Or: image = open("docs/images/cat.jpeg", "rb")
response = client.chat.completions.create(
model=g4f.models.default,
messages=[
{
"role": "user",
"content": "What's in this image?"
}
],
image=image
# Add any other necessary parameters
)
print(response.choices[0].message.content)
```
## Command-line Chat Program
**Here's an example of a simple command-line chat program using the G4F Client:**
```python

docs/configuration.md (new file)

@@ -0,0 +1,95 @@
### G4F - Configuration
## Table of Contents
- [Authentication](#authentication)
- [Cookies Configuration](#cookies-configuration)
- [HAR and Cookie Files](#har-and-cookie-files)
- [Debug Mode](#debug-mode)
- [Proxy Configuration](#proxy-configuration)
#### Authentication
Refer to the [G4F Authentication Setup Guide](authentication.md) for detailed instructions on setting up authentication.
### Cookies Configuration
Cookies are essential for using Meta AI and Microsoft Designer to create images.
Additionally, cookies are required for the Google Gemini and WhiteRabbitNeo Provider.
From Bing, ensure you have the "\_U" cookie, and from Google, all cookies starting with "\_\_Secure-1PSID" are needed.
**You can pass these cookies directly to the create function or set them using the `set_cookies` method before running G4F:**
```python
from g4f.cookies import set_cookies
set_cookies(".bing.com", {
"_U": "cookie value"
})
set_cookies(".google.com", {
"__Secure-1PSID": "cookie value"
})
```
---
### HAR and Cookie Files
**Using .har and Cookie Files**
You can place `.har` and cookie files `.json` in the default `./har_and_cookies` directory. To export a cookie file, use the [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) available on the Chrome Web Store.
**Creating .har Files to Capture Cookies**
To capture cookies, you can also create `.har` files. For more details, refer to the next section.
### Changing the Cookies Directory and Loading Cookie Files in Python
**You can change the cookies directory and load cookie files in your Python environment. To set the cookies directory relative to your Python file, use the following code:**
```python
import os.path
from g4f.cookies import set_cookies_dir, read_cookie_files
import g4f.debug
g4f.debug.logging = True
cookies_dir = os.path.join(os.path.dirname(__file__), "har_and_cookies")
set_cookies_dir(cookies_dir)
read_cookie_files(cookies_dir)
```
### Debug Mode
**If you enable debug mode, you will see logs similar to the following:**
```
Read .har file: ./har_and_cookies/you.com.har
Cookies added: 10 from .you.com
Read cookie file: ./har_and_cookies/google.json
Cookies added: 16 from .google.com
```
#### .HAR File for OpenaiChat Provider
##### Generating a .HAR File
**To utilize the OpenaiChat provider, a .har file is required from https://chatgpt.com/. Follow the steps below to create a valid .har file:**
1. Navigate to https://chatgpt.com/ using your preferred web browser and log in with your credentials.
2. Access the Developer Tools in your browser. This can typically be done by right-clicking the page and selecting "Inspect," or by pressing F12 or Ctrl+Shift+I (Cmd+Option+I on a Mac).
3. With the Developer Tools open, switch to the "Network" tab.
4. Reload the website to capture the loading process within the Network tab.
5. Initiate an action in the chat which can be captured in the .har file.
6. Right-click any of the network activities listed and select "Save all as HAR with content" to export the .har file.
##### Storing the .HAR File
- Place the exported .har file in the `./har_and_cookies` directory if you are using Docker. Alternatively, if you are using Python from a terminal, you can store it in a `./har_and_cookies` directory within your current working directory.
> **Note:** Ensure that your .har file is stored securely, as it may contain sensitive information.
### Proxy Configuration
**If you want to hide or change your IP address for the providers, you can set a proxy globally via an environment variable:**
**- On macOS and Linux:**
```bash
export G4F_PROXY="http://host:port"
```
**- On Windows:**
```bash
set G4F_PROXY=http://host:port
```
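
To route a single request through a proxy instead of setting it globally, a minimal sketch using the legacy `g4f.ChatCompletion.create` interface, assuming its `proxy` keyword (the proxy URL is a placeholder):

```python
import g4f

response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",  # placeholder proxy for this call only
)
print(response)
```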

docs/providers-and-models.md

@@ -5,11 +5,14 @@
This document provides an overview of various AI providers and models, including text generation, image generation, and vision capabilities. It aims to help users navigate the diverse landscape of AI services and choose the most suitable option for their needs.
> **Note**: See our [Authentication Guide](authentication.md) for provider authentication instructions.
## Table of Contents
- [Providers](#providers)
- [Free](#providers-free)
- [No auth required](#providers-not-needs-auth)
- [HuggingSpace](#providers-huggingspace)
- [Needs Auth](#providers-needs-auth)
- [Needs auth](#providers-needs-auth)
- [Models](#models)
- [Text Models](#text-models)
- [Image Models](#image-models)
@@ -17,90 +20,99 @@ This document provides an overview of various AI providers and models, including
---
## Providers
**Authentication types:**
- **Get API key** - Requires an API key for authentication. You need to obtain an API key from the provider's website to use their services.
- **Manual cookies** - Requires manual browser cookies setup. You need to be logged in to the provider's website to use their services.
- **Automatic cookies** - Browser cookies authentication that is automatically fetched. No manual setup needed.
- **Optional API key** - Works without authentication, but you can provide an API key for better rate limits or additional features. The service is usable without an API key.
- **API key / Cookies** - Supports both authentication methods. You can use either an API key or browser cookies for authentication.
- **No auth required** - No authentication needed. The service is publicly available without any credentials.
### Providers Free
| Website | Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
**Symbols:**
- ✔ - Feature is supported
- ❌ - Feature is not supported
- ✔ _**(n+)**_ - Number of additional models supported by the provider but not publicly listed
---
### Providers No auth required
| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[aichatfree.info](https://aichatfree.info)|`g4f.Provider.AIChatFree`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[api.airforce](https://api.airforce)|`g4f.Provider.Airforce`|`phi-2, openchat-3.5, deepseek-coder, hermes-2-dpo, hermes-2-pro, openhermes-2.5, lfm-40b, german-7b, llama-2-7b, llama-3.1-8b, llama-3.1-70b, neural-7b, zephyr-7b, evil`|`sdxl, flux-pro, flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[aiuncensored.info/ai_uncensored](https://www.aiuncensored.info/ai_uncensored)|`g4f.Provider.AIUncensored`|`hermes-3`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[amigochat.io](https://amigochat.io/chat/)|`g4f.Provider.AmigoChat`|✔|✔|❌|✔|![Error](https://img.shields.io/badge/RateLimit-f48d37)|❌|
|[autonomous.ai](https://www.autonomous.ai/anon/)|`g4f.Provider.AutonomousAI`|`llama-3.3-70b, qwen-2.5-coder-32b, hermes-3, llama-3.2-90b`|✔|❌|✔|![Error](https://img.shields.io/badge/RateLimit-f48d37)|❌|
|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.Blackbox`|`blackboxai, gpt-4, gpt-4o, gemini-1.5-pro, claude-3.5-sonnet, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama_3_1_405b, llama-3.3-70b, mixtral-7b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo`|`flux`|`blackboxai, gpt-4o, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.BlackboxCreateAgent`|`llama-3.1-70b`|`flux`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[cablyai.com](https://cablyai.com)|`g4f.Provider.CablyAI`|`cably-80b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chatglm.cn](https://chatglm.cn)|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chatgpt.com](https://chatgpt.com)|`g4f.Provider.ChatGpt`|✔|❌|❌|✔|![Error](https://img.shields.io/badge/HTTPError-f48d37)|❌|
|[chatgpt.es](https://chatgpt.es)|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chatgptt.me](https://chatgptt.me)|`g4f.Provider.ChatGptt`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[claudeson.net](https://claudeson.net)|`g4f.Provider.ClaudeSon`|`claude-3.5-sonnet`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[copilot.microsoft.com](https://copilot.microsoft.com)|`g4f.Provider.Copilot`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[darkai.foundation](https://darkai.foundation)|`g4f.Provider.DarkAI`|`gpt-3.5-turbo, gpt-4o, llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, claude-3-haiku, llama-3.1-70b, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|`g4f.Provider.Flux`|❌|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|`g4f.Provider.Free2GPT`|`mistral-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|`g4f.Provider.FreeGpt`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[gprochat.com](https://gprochat.com)|`g4f.Provider.GPROChat`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[editor.imagelabs.net](https://editor.imagelabs.net)|`g4f.Provider.ImageLabs`|❌|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[huggingface.co/spaces](https://huggingface.co/spaces)|`g4f.Provider.HuggingSpace`|`qwen-2.5-72b, qwen-2.5-72b`|`flux-dev, flux-schnell, sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[jmuz.me](https://jmuz.me)|`g4f.Provider.Jmuz`|`gpt-4o, gpt-4, gpt-4o-mini, claude-3.5-sonnet, claude-3-opus, claude-3-haiku, gemini-1.5-pro, gemini-1.5-flash, gemini-exp, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-90b, llama-3.2-11b, llama-3.3-70b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b-preview, wizardlm-2-8x22b, deepseek-2.5, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[liaobots.work](https://liaobots.work)|`g4f.Provider.Liaobots`|`grok-2, gpt-4o-mini, gpt-4o, gpt-4, o1-preview, o1-mini, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash, gemini-2.0-flash-thinking`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[mhystical.cc](https://mhystical.cc)|`g4f.Provider.Mhystical`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[labs.perplexity.ai](https://labs.perplexity.ai)|`g4f.Provider.PerplexityLabs`|`sonar-online, sonar-chat, llama-3.3-70b, llama-3.1-8b, llama-3.1-70b, lfm-40b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[pi.ai/talk](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[pizzagpt.it](https://www.pizzagpt.it)|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[pollinations.ai](https://pollinations.ai)|`g4f.Provider.PollinationsAI`|`gpt-4o, mistral-large, mistral-nemo, llama-3.3-70b, gpt-4, qwen-2-72b, qwen-2.5-coder-32b, claude-3.5-sonnet, command-r, deepseek-chat, llama-3.2-3b, evil, p1, turbo, unity, midijourney, rtist`|`flux, flux-realism, flux-cablyai, flux-anime, flux-3d, any-dark, flux-pro, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[app.prodia.com](https://app.prodia.com)|`g4f.Provider.Prodia`|❌|✔|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[rubiks.ai](https://rubiks.ai)|`g4f.Provider.RubiksAI`|`gpt-4o-mini, llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[teach-anything.com](https://www.teach-anything.com)|`g4f.Provider.TeachAnything`|`llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[you.com](https://you.com)|`g4f.Provider.You`|✔|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chat9.yqcloud.top](https://chat9.yqcloud.top)|`g4f.Provider.Yqcloud`|`gpt-4`|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[aichatfree.info](https://aichatfree.info)|No auth required|`g4f.Provider.AIChatFree`|`gemini-1.5-pro` _**(1+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[aiuncensored.info/ai_uncensored](https://www.aiuncensored.info/ai_uncensored)|Optional API key|`g4f.Provider.AIUncensored`|`hermes-3`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[autonomous.ai](https://www.autonomous.ai/anon/)|No auth required|`g4f.Provider.AutonomousAI`|`llama-3.3-70b, qwen-2.5-coder-32b, hermes-3, llama-3.2-90b, llama-3.2-70b`|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gpt-4, gpt-4o, gemini-1.5-flash, gemini-1.5-pro, claude-3.5-sonnet, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, llama-3.3-70b, mixtral-7b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo, deepseek-r1` _**(31+)**_|`flux`|`blackboxai, gpt-4o, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[cablyai.com](https://cablyai.com)|No auth required|`g4f.Provider.CablyAI`|`cably-80b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(7+)**_|❌|❌|✔|![Error](https://img.shields.io/badge/HTTPError-f48d37)|
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgptt.me](https://chatgptt.me)|No auth required|`g4f.Provider.ChatGptt`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, gpt-4o`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[darkai.foundation](https://darkai.foundation)|No auth required|`g4f.Provider.DarkAI`|`gpt-3.5-turbo, gpt-4o, llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, claude-3-haiku, llama-3.1-70b, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.1-70b, qwq-32b, wizardlm-2-8x22b, wizardlm-2-7b, qwen-2-72b, qwen-2.5-coder-32b, nemotron-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`mistral-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[gprochat.com](https://gprochat.com)|No auth required|`g4f.Provider.GPROChat`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[editor.imagelabs.net](https://editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|✔ _**(1+)**_|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b`|`flux-dev, flux-schnell, sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`claude-3-haiku, claude-3-opus, claude-3.5-sonnet, deepseek-r1, deepseek-chat, gemini-exp, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash-thinking, gpt-4, gpt-4o, gpt-4o-mini, llama-3-70b, llama-3-8b, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.2-90b, llama-3.3-70b, mixtral-8x7b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`grok-2, gpt-4o-mini, gpt-4o, gpt-4, o1-preview, o1-mini, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash, gemini-2.0-flash-thinking`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[mhystical.cc](https://mhystical.cc)|[Optional API key](https://mhystical.cc/dashboard)|`g4f.Provider.Mhystical`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini`|❌|`gpt-4o-mini`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar-online, sonar-chat, llama-3.3-70b, llama-3.1-8b, llama-3.1-70b, lfm-40b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o, mistral-large, mistral-nemo, llama-3.3-70b, gpt-4, qwen-2-72b, qwen-2.5-coder-32b, claude-3.5-sonnet, claude-3.5-haiku, command-r, deepseek-chat, llama-3.1-8b, evil, p1, unity, midijourney, rtist`|`flux, midjourney, dall-e-3, sd-turbo`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[app.prodia.com](https://app.prodia.com)|No auth required|`g4f.Provider.Prodia`|❌|✔ _**(46)**_|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[teach-anything.com](https://www.teach-anything.com)|No auth required|`g4f.Provider.TeachAnything`|`llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[you.com](https://you.com)|[Manual cookies](https://you.com)|`g4f.Provider.You`|✔|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat9.yqcloud.top](https://chat9.yqcloud.top)|No auth required|`g4f.Provider.Yqcloud`|`gpt-4`|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|
---
### Providers HuggingSpace
| Website | Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|`g4f.Provider.BlackForestLabsFlux1Dev`|❌|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|`g4f.Provider.BlackForestLabsFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|`g4f.Provider.CohereForAI`|`command-r, command-r-plus, command-r7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|`g4f.Provider.Qwen_QVQ_72B`|`qwen-2.5-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|`g4f.Provider.Qwen_Qwen_2_72B_Instruct`|`qwen-2.5-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|`g4f.Provider.StableDiffusion35Large`|❌|`sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|`g4f.Provider.VoodoohopFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
| Website | API Credentials | Provider | Text Models | Image Models | Vision Models | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabsFlux1Dev`|❌|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabsFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.CohereForAI`|`command-r, command-r-plus, command-r7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_QVQ_72B`|`qvq-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_72B_Instruct`|`qwen-2-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.StableDiffusion35Large`|❌|`sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.VoodoohopFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
---
### Providers Needs Auth
| Website | Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[console.anthropic.com](https://console.anthropic.com)|`g4f.Provider.Anthropic`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[bing.com/images/create](https://www.bing.com/images/create)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[inference.cerebras.ai](https://inference.cerebras.ai/)|`g4f.Provider.Cerebras`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[deepinfra.com](https://deepinfra.com)|`g4f.Provider.DeepInfra`|✔|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[platform.deepseek.com](https://platform.deepseek.com)|`g4f.Provider.DeepSeek`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[gemini.google.com](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini`|`gemini`|`gemini`|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[ai.google.dev](https://ai.google.dev)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|`gemini-1.5-pro`|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[github.com/copilot](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[glhf.chat](https://glhf.chat)|`g4f.Provider.GlhfChat`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[console.groq.com/playground](https://console.groq.com/playground)|`g4f.Provider.Groq`|✔|❌|✔|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[huggingface.co/chat](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, qwq-32b, nemotron-70b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[huggingface.co/chat](https://huggingface.co/chat)|`g4f.Provider.HuggingFace`|✔|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[api-inference.huggingface.co](https://api-inference.huggingface.co)|`g4f.Provider.HuggingFaceAPI`|✔|❌|✔|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[meta.ai](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[designer.microsoft.com](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[platform.openai.com](https://platform.openai.com)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[chatgpt.com](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4`|❌|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[perplexity.ai](https://www.perplexity.ai)|`g4f.Provider.PerplexityApi`|`gpt-4o, gpt-4o-mini, gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[poe.com](https://poe.com)|`g4f.Provider.Poe`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[raycast.com](https://raycast.com)|`g4f.Provider.Raycast`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[chat.reka.ai](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|❌|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[replicate.com](https://replicate.com)|`g4f.Provider.Replicate`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[beta.theb.ai](https://beta.theb.ai)|`g4f.Provider.Theb`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[beta.theb.ai](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[console.x.ai](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
| Website | API Credentials | Provider | Text Models | Image Models | Vision Models | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[console.anthropic.com](https://console.anthropic.com)|[Get API key](https://console.anthropic.com/settings/keys)|`g4f.Provider.Anthropic`|✔ _**(8+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[bing.com/images/create](https://www.bing.com/images/create)|[Manual cookies](https://www.bing.com)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[inference.cerebras.ai](https://inference.cerebras.ai/)|[Get API key](https://cloud.cerebras.ai)|`g4f.Provider.Cerebras`|✔ _**(3+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[copilot.microsoft.com](https://copilot.microsoft.com)|[Manual cookies](https://copilot.microsoft.com)|`g4f.Provider.CopilotAccount`|✔ _**(1+)**_|✔ _**(1+)**_|✔ _**(1+)**_|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com](https://deepinfra.com)|[Get API key](https://deepinfra.com/dash/api_keys)|`g4f.Provider.DeepInfra`|✔ _**(17+)**_|✔ _**(6+)**_|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[platform.deepseek.com](https://platform.deepseek.com)|[Get API key](https://platform.deepseek.com/api_keys)|`g4f.Provider.DeepSeek`|✔ _**(1+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini, gemini-1.5-flash, gemini-1.5-pro`|`gemini`|`gemini`|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[ai.google.dev](https://ai.google.dev)|[Get API key](https://aistudio.google.com/u/0/apikey)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|`gemini-1.5-pro`|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|[Manual cookies](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔ _**(3+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[github.com/copilot](https://github.com/copilot)|[Manual cookies](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔ _**(4+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[glhf.chat](https://glhf.chat)|[Get API key](https://glhf.chat/user-settings/api)|`g4f.Provider.GlhfChat`|✔ _**(22+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[console.groq.com/playground](https://console.groq.com/playground)|[Get API key](https://console.groq.com/keys)|`g4f.Provider.Groq`|✔ _**(18+)**_|❌|✔|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[huggingface.co/chat](https://huggingface.co/chat)|[Manual cookies](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, deepseek-r1, qwq-32b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev, flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[huggingface.co/chat](https://huggingface.co/chat)|[API key / Cookies](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFace`|✔ _**(47+)**_|✔ _**(9+)**_|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[api-inference.huggingface.co](https://api-inference.huggingface.co)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFaceAPI`|✔ _**(9+)**_|✔ _**(2+)**_|✔ _**(1+)**_|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAIAccount`|❌|`meta-ai`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[designer.microsoft.com](https://designer.microsoft.com)|[Manual cookies](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[platform.openai.com](https://platform.openai.com)|[Get API key](https://platform.openai.com/settings/organization/api-keys)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgpt.com](https://chatgpt.com)|[Manual cookies](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4` _**(8+)**_|✔ _**(1)**_|✔ _**(8+)**_|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[perplexity.ai](https://www.perplexity.ai)|[Get API key](https://www.perplexity.ai/settings/api)|`g4f.Provider.PerplexityApi`|✔ _**(6+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat.reka.ai](https://chat.reka.ai)|[Manual cookies](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|❌|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[replicate.com](https://replicate.com)|[Get API key](https://replicate.com/account/api-tokens)|`g4f.Provider.Replicate`|✔ _**(1+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[beta.theb.ai](https://beta.theb.ai)|[Get API key](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔ _**(21+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|[Manual cookies](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[console.x.ai](https://console.x.ai)|[Get API key](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
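For the "Manual cookies" rows above, the usual flow is to register cookies from a logged-in browser session before the first request. A sketch assuming the `g4f.cookies.set_cookies` helper; the domain and cookie names shown are placeholders for values copied from your own browser:

```python
import g4f
from g4f.client import Client
from g4f.cookies import set_cookies

# Register browser cookies for the provider's domain before the first request.
set_cookies(".google.com", {
    "__Secure-1PSID": "<cookie value from your browser>",
    "__Secure-1PSIDTS": "<cookie value from your browser>",
})

client = Client(provider=g4f.Provider.Gemini)
response = client.chat.completions.create(
    model="gemini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```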
---
## Models
@@ -119,13 +131,14 @@ This document provides an overview of various AI providers and models, including
|gigachat||1+ Providers|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|
|meta-ai|Meta|1+ Providers|[ai.meta.com](https://ai.meta.com/)|
|llama-2-7b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
|llama-3-8b|Meta Llama|1+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
|llama-3.1-8b|Meta Llama|5+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3-8b|Meta Llama|2+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
|llama-3-70b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Meta-Llama-3-70B)|
|llama-3.1-8b|Meta Llama|6+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-70b|Meta Llama|9+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-405b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.1-405B)|
|llama-3.2-1b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-1B)|
|llama-3.2-3b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-3B)|
|llama-3.2-11b|Meta Llama|3+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llama-3.2-70b|Meta Llama|1+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llama-3.2-90b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)|
|llama-3.3-70b|Meta Llama|7+ Providers|[llama.com](https://www.llama.com/)|
|mixtral-7b|Mistral AI|1+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
@@ -133,10 +146,10 @@ This document provides an overview of various AI providers and models, including
|mistral-nemo|Mistral AI|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|mistral-large|Mistral AI|1+ Providers|[mistral.ai](https://mistral.ai/news/mistral-large-2407/)|
|hermes-2-dpo|NousResearch|2+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|hermes-2-pro|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)|
|hermes-3|NousResearch|2+ Providers|[nousresearch.com](https://nousresearch.com/hermes3/)|
|phi-2|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/phi-2)|
|phi-3.5-mini|Microsoft|2+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|
|wizardlm-2-7b|Microsoft|1+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|wizardlm-2-8x22b|Microsoft|2+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|gemini|Google DeepMind|2+ Providers|[deepmind.google](http://deepmind.google/technologies/gemini/)|
|gemini-1.5-flash|Google DeepMind|5+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-1.5-pro|Google DeepMind|7+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
@@ -145,6 +158,7 @@ This document provides an overview of various AI providers and models, including
|claude-3-haiku|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
|claude-3-sonnet|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3-opus|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3.5-haiku|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|claude-3.5-sonnet|Anthropic|4+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|reka-core|Reka AI|1+ Providers|[reka.ai](https://www.reka.ai/ourmodels)|
|blackboxai|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
@@ -156,28 +170,22 @@ This document provides an overview of various AI providers and models, including
|qwen-2-72b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-72B)|
|qwen-2.5-72b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)|
|qwen-2.5-coder-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
|qwq-32b|Qwen|4+ Providers|[qwen2.org](https://qwen2.org/qwq-32b-preview/)|
|qwq-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qvq-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/QVQ-72B-Preview)|
|pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
|deepseek-chat|DeepSeek|3+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-coder|DeepSeek|1+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct)|
|wizardlm-2-8x22b|WizardLM|1+ Providers|[huggingface.co](https://huggingface.co/alpindale/WizardLM-2-8x22B)|
|openchat-3.5|OpenChat|1+ Providers|[huggingface.co](https://huggingface.co/openchat/openchat_3.5)|
|deepseek-r1|DeepSeek|1+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|grok-2|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-2)|
|sonar-online|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|sonar-chat|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|nemotron-70b|Nvidia|2+ Providers|[build.nvidia.com](https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct)|
|openhermes-2.5|Teknium|1+ Providers|[huggingface.co](https://huggingface.co/datasets/teknium/OpenHermes-2.5)|
|lfm-40b|Liquid|2+ Providers|[liquid.ai](https://www.liquid.ai/liquid-foundation-models)|
|german-7b|TheBloke|1+ Providers|[huggingface.co](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF)|
|zephyr-7b|HuggingFaceH4|1+ Providers|[huggingface.co](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)|
|neural-7b|Intel|1+ Providers|[huggingface.co](https://huggingface.co/Intel/neural-chat-7b-v3-1)|
|dbrx-instruct|Databricks|1+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|p1|PollinationsAI|1+ Providers|[]( )|
|cably-80b|CablyAI|1+ Providers|[cablyai.com](https://cablyai.com)|
|glm-4|THUDM|1+ Providers|[github.com/THUDM](https://github.com/THUDM/GLM-4)|
|evil|Evil Mode - Experimental|2+ Providers|[]( )|
|midijourney||1+ Providers|[]( )|
|turbo||1+ Providers|[]( )|
|unity||1+ Providers|[]( )|
|rtist||1+ Providers|[]( )|
@@ -185,24 +193,13 @@ This document provides an overview of various AI providers and models, including
### Image Models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|sdxl|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl)|
|sdxl-lora|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/blog/lcm_lora)|
|sd-turbo|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/sd-turbo)|
|sd-3.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/stable-diffusion-3.5-large)|
|flux|Black Forest Labs|4+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
|flux-pro|Black Forest Labs|2+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
|flux-dev|Black Forest Labs|3+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-dev)|
|flux-schnell|Black Forest Labs|2+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
|flux-realism|Flux AI|2+ Providers|[]( )|
|flux-cablyai|Flux AI|1+ Providers|[]( )|
|flux-anime|Flux AI|2+ Providers|[]( )|
|flux-3d|Flux AI|2+ Providers|[]( )|
|flux-disney|Flux AI|1+ Providers|[]( )|
|flux-pixel|Flux AI|1+ Providers|[]( )|
|flux-4o|Flux AI|1+ Providers|[]( )|
|dall-e-3|OpenAI|6+ Providers|[openai.com](https://openai.com/index/dall-e/)|
|midjourney|Midjourney|2+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|
|any-dark||2+ Providers|[]( )|
## Conclusion and Usage Tips
@@ -1,575 +0,0 @@
## Free
### AmigoChat
| Provider | `g4f.Provider.AmigoChat` |
| -------- | ---- |
| **Website** | [amigochat.io](https://amigochat.io/chat/) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-4o, gpt-4o-mini, llama-3.1-405b, mistral-nemo, gemini-flash, gemma-2b, claude-3.5-sonnet, command-r-plus, qwen-2.5-72b, grok-beta (37)|
| **Image Models (Image Generation)** | flux-realism, flux-pro, dall-e-3, flux-dev |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Blackbox AI
| Provider | `g4f.Provider.Blackbox` |
| -------- | ---- |
| **Website** | [blackbox.ai](https://www.blackbox.ai) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gpt-4, gpt-4o, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, gemini-pro, gemini-flash, claude-3.5-sonnet, blackboxai, blackboxai-pro, llama-3.3-70b, mixtral-7b, deepseek-chat, dbrx-instruct, llama-3.1-405b, qwq-32b, hermes-2-dpo (46)|
| **Image Models (Image Generation)** | flux (2)|
| **Vision (Image Upload)** | ✔️ |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Blackbox2
| Provider | `g4f.Provider.Blackbox2` |
| -------- | ---- |
| **Website** | [blackbox.ai](https://www.blackbox.ai) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | llama-3.1-70b (2)|
| **Image Models (Image Generation)** | flux |
| **Authentication** | ❌ |
| **Streaming** | ❌ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### ChatGpt
| Provider | `g4f.Provider.ChatGpt` |
| -------- | ---- |
| **Website** | [chatgpt.com](https://chatgpt.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-3.5-turbo, gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini (7)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### ChatGptEs
| Provider | `g4f.Provider.ChatGptEs` |
| -------- | ---- |
| **Website** | [chatgpt.es](https://chatgpt.es) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gpt-4, gpt-4o, gpt-4o-mini (3)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Cloudflare AI
| Provider | `g4f.Provider.Cloudflare` |
| -------- | ---- |
| **Website** | [playground.ai.cloudflare.com](https://playground.ai.cloudflare.com) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b (37)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Microsoft Copilot
| Provider | `g4f.Provider.Copilot` |
| -------- | ---- |
| **Website** | [copilot.microsoft.com](https://copilot.microsoft.com) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gpt-4 (1)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### DuckDuckGo AI Chat
| Provider | `g4f.Provider.DDG` |
| -------- | ---- |
| **Website** | [duckduckgo.com](https://duckduckgo.com/aichat) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gpt-4, gpt-4o, gpt-4o-mini, llama-3.1-70b, mixtral-8x7b, claude-3-haiku (8)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### DarkAI
| Provider | `g4f.Provider.DarkAI` |
| -------- | ---- |
| **Website** | [darkai.foundation](https://darkai.foundation/chat) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gpt-3.5-turbo, gpt-4o, llama-3.1-70b (3)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Flux (HuggingSpace)
| Provider | `g4f.Provider.Flux` |
| -------- | ---- |
| **Website** | [black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Image Models (Image Generation)** | flux-dev |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Free2GPT
| Provider | `g4f.Provider.Free2GPT` |
| -------- | ---- |
| **Website** | [chat10.free2gpt.xyz](https://chat10.free2gpt.xyz) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ✔️ |
### FreeGpt
| Provider | `g4f.Provider.FreeGpt` |
| -------- | ---- |
| **Website** | [freegptsnav.aifree.site](https://freegptsnav.aifree.site) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gemini-pro (1)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### GizAI
| Provider | `g4f.Provider.GizAI` |
| -------- | ---- |
| **Website** | [app.giz.ai](https://app.giz.ai/assistant) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gemini-flash (1)|
| **Authentication** | ❌ |
| **Streaming** | ❌ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### HuggingFace
| Provider | `g4f.Provider.HuggingFace` |
| -------- | ---- |
| **Website** | [huggingface.co](https://huggingface.co/chat) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | llama-3.2-11b, llama-3.3-70b, mistral-nemo, hermes-3, phi-3.5-mini, command-r-plus, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, nemotron-70b (11)|
| **Image Models (Image Generation)** | flux-dev |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ✔️ |
### Liaobots
| Provider | `g4f.Provider.Liaobots` |
| -------- | ---- |
| **Website** | [liaobots.site](https://liaobots.site) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-4, gpt-4o, gpt-4o-mini, o1-preview, o1-mini, gemini-pro, gemini-flash, claude-3-opus, claude-3-sonnet, claude-3.5-sonnet, grok-beta (14)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### GPT4All
| Provider | `g4f.Provider.Local` |
| -------- | ---- |
| **Website** | ❌ |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Meta AI
| Provider | `g4f.Provider.MetaAI` |
| -------- | ---- |
| **Website** | [meta.ai](https://www.meta.ai) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | meta-ai (1)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Mhystical
| Provider | `g4f.Provider.Mhystical` |
| -------- | ---- |
| **Website** | [api.mhystical.cc](https://api.mhystical.cc) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-4 (1)|
| **Authentication** | ❌ |
| **Streaming** | ❌ |
| **System message** | ❌ |
| **Message history** | ✔️ |
### Ollama
| Provider | `g4f.Provider.Ollama` |
| -------- | ---- |
| **Website** | [ollama.com](https://ollama.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### OpenAI ChatGPT
| Provider | `g4f.Provider.OpenaiChat` |
| -------- | ---- |
| **Website** | [chatgpt.com](https://chatgpt.com) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gpt-4, gpt-4o, gpt-4o-mini, o1-preview, o1-mini (8)|
| **Vision (Image Upload)** | ✔️ |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### PerplexityLabs
| Provider | `g4f.Provider.PerplexityLabs` |
| -------- | ---- |
| **Website** | [labs.perplexity.ai](https://labs.perplexity.ai) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | llama-3.1-8b, llama-3.1-70b, llama-3.3-70b, sonar-online, sonar-chat, lfm-40b (8)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Pi
| Provider | `g4f.Provider.Pi` |
| -------- | ---- |
| **Website** | [pi.ai](https://pi.ai/talk) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Pizzagpt
| Provider | `g4f.Provider.Pizzagpt` |
| -------- | ---- |
| **Website** | [pizzagpt.it](https://www.pizzagpt.it) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gpt-4o-mini (1)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Pollinations AI
| Provider | `g4f.Provider.PollinationsAI` |
| -------- | ---- |
| **Website** | [pollinations.ai](https://pollinations.ai) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-4, gpt-4o, llama-3.1-70b, mistral-nemo, mistral-large, claude-3.5-sonnet, command-r, qwen-2.5-coder-32b, p1, evil, midijourney, unity, rtist (25)|
| **Image Models (Image Generation)** | flux, flux-realism, flux-cablyai, flux-anime, flux-3d, any-dark, flux-pro, turbo, midjourney, dall-e-3 |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Prodia
| Provider | `g4f.Provider.Prodia` |
| -------- | ---- |
| **Website** | [app.prodia.com](https://app.prodia.com) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### ReplicateHome
| Provider | `g4f.Provider.ReplicateHome` |
| -------- | ---- |
| **Website** | [replicate.com](https://replicate.com) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gemma-2b (4)|
| **Image Models (Image Generation)** | sd-3, sdxl, playground-v2.5 |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Rubiks AI
| Provider | `g4f.Provider.RubiksAI` |
| -------- | ---- |
| **Website** | [rubiks.ai](https://rubiks.ai) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-4o, gpt-4o-mini, o1-mini, llama-3.1-70b, claude-3.5-sonnet, grok-beta (8)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### TeachAnything
| Provider | `g4f.Provider.TeachAnything` |
| -------- | ---- |
| **Website** | [teach-anything.com](https://www.teach-anything.com) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | llama-3.1-70b (1)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### TheB.AI
| Provider | `g4f.Provider.Theb` |
| -------- | ---- |
| **Website** | [beta.theb.ai](https://beta.theb.ai) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### You.com
| Provider | `g4f.Provider.You` |
| -------- | ---- |
| **Website** | [you.com](https://you.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, llama-3.1-70b, claude-3-opus, claude-3-sonnet, claude-3-haiku, claude-3.5-sonnet, command-r-plus, command-r (20)|
| **Authentication** | ❌ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
## Auth
### Airforce
| Provider | `g4f.Provider.Airforce` |
| -------- | ---- |
| **Website** | [llmplayground.net](https://llmplayground.net) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, o1-mini, llama-2-7b, llama-3.1-8b, llama-3.1-70b, hermes-2-dpo, hermes-2-pro, phi-2, openchat-3.5, deepseek-coder, german-7b, openhermes-2.5, lfm-40b, zephyr-7b, neural-7b, evil (40)|
| **Image Models (Image Generation)** | flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3, sdxl, flux-pro |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Microsoft Designer in Bing
| Provider | `g4f.Provider.BingCreateImages` |
| -------- | ---- |
| **Website** | [bing.com](https://www.bing.com/images/create) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Image Models (Image Generation)** | dall-e-3 |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Cerebras Inference
| Provider | `g4f.Provider.Cerebras` |
| -------- | ---- |
| **Website** | [inference.cerebras.ai](https://inference.cerebras.ai/) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | llama-3.1-8b, llama-3.1-70b (2)|
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Microsoft Copilot
| Provider | `g4f.Provider.CopilotAccount` |
| -------- | ---- |
| **Website** | [copilot.microsoft.com](https://copilot.microsoft.com) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Image Models (Image Generation)** | dall-e-3 |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### DeepInfra
| Provider | `g4f.Provider.DeepInfra` |
| -------- | ---- |
| **Website** | [deepinfra.com](https://deepinfra.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### DeepInfra Chat
| Provider | `g4f.Provider.DeepInfraChat` |
| -------- | ---- |
| **Website** | [deepinfra.com](https://deepinfra.com/chat) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | llama-3.1-8b, llama-3.1-70b, qwen-2-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b, nemotron-70b (7)|
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### DeepInfraImage
| Provider | `g4f.Provider.DeepInfraImage` |
| -------- | ---- |
| **Website** | [deepinfra.com](https://deepinfra.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Google Gemini
| Provider | `g4f.Provider.Gemini` |
| -------- | ---- |
| **Website** | [gemini.google.com](https://gemini.google.com) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | gemini-pro, gemini-flash (3)|
| **Image Models (Image Generation)** | gemini |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Google Gemini API
| Provider | `g4f.Provider.GeminiPro` |
| -------- | ---- |
| **Website** | [ai.google.dev](https://ai.google.dev) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gemini-pro, gemini-flash (4)|
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ✔️ |
### GigaChat
| Provider | `g4f.Provider.GigaChat` |
| -------- | ---- |
| **Website** | [developers.sber.ru](https://developers.sber.ru/gigachat) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | GigaChat:latest (3)|
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### GithubCopilot
| Provider | `g4f.Provider.GithubCopilot` |
| -------- | ---- |
| **Website** | [github.com](https://github.com/copilot) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-4o, o1-preview, o1-mini, claude-3.5-sonnet (4)|
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Groq
| Provider | `g4f.Provider.Groq` |
| -------- | ---- |
| **Website** | [console.groq.com](https://console.groq.com/playground) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | mixtral-8x7b (18)|
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### HuggingChat
| Provider | `g4f.Provider.HuggingChat` |
| -------- | ---- |
| **Website** | [huggingface.co](https://huggingface.co/chat) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Models** | llama-3.2-11b, llama-3.3-70b, mistral-nemo, hermes-3, phi-3.5-mini, command-r-plus, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, nemotron-70b (11)|
| **Image Models (Image Generation)** | flux-dev |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### HuggingFace (Inference API)
| Provider | `g4f.Provider.HuggingFaceAPI` |
| -------- | ---- |
| **Website** | [api-inference.huggingface.co](https://api-inference.huggingface.co) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Meta AI
| Provider | `g4f.Provider.MetaAIAccount` |
| -------- | ---- |
| **Website** | [meta.ai](https://www.meta.ai) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | meta-ai (1)|
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Microsoft Designer
| Provider | `g4f.Provider.MicrosoftDesigner` |
| -------- | ---- |
| **Website** | [designer.microsoft.com](https://designer.microsoft.com) |
| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) |
| **Image Models (Image Generation)** | dall-e-3 |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### OpenAI API
| Provider | `g4f.Provider.OpenaiAPI` |
| -------- | ---- |
| **Website** | [platform.openai.com](https://platform.openai.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### OpenAI ChatGPT
| Provider | `g4f.Provider.OpenaiAccount` |
| -------- | ---- |
| **Website** | [chatgpt.com](https://chatgpt.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-4o-mini, o1-preview, o1-mini (9)|
| **Image Models (Image Generation)** | dall-e-3 |
| **Vision (Image Upload)** | ✔️ |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Perplexity API
| Provider | `g4f.Provider.PerplexityApi` |
| -------- | ---- |
| **Website** | [perplexity.ai](https://www.perplexity.ai) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### Poe
| Provider | `g4f.Provider.Poe` |
| -------- | ---- |
| **Website** | [poe.com](https://poe.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Raycast
| Provider | `g4f.Provider.Raycast` |
| -------- | ---- |
| **Website** | [raycast.com](https://raycast.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Reka
| Provider | `g4f.Provider.Reka` |
| -------- | ---- |
| **Website** | [chat.reka.ai](https://chat.reka.ai/) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### Replicate
| Provider | `g4f.Provider.Replicate` |
| -------- | ---- |
| **Website** | [replicate.com](https://replicate.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ❌ |
### TheB.AI API
| Provider | `g4f.Provider.ThebApi` |
| -------- | ---- |
| **Website** | [theb.ai](https://theb.ai) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Models** | gpt-3.5-turbo, gpt-4, gpt-4-turbo (21)|
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ✔️ |
| **Message history** | ✔️ |
### WhiteRabbitNeo
| Provider | `g4f.Provider.WhiteRabbitNeo` |
| -------- | ---- |
| **Website** | [whiterabbitneo.com](https://www.whiterabbitneo.com) |
| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) |
| **Authentication** | ✔️ |
| **Streaming** | ✔️ |
| **System message** | ❌ |
| **Message history** | ✔️ |
--------------------------------------------------
| Label | Provider | Image Model | Vision Model | Website |
| ----- | -------- | ----------- | ------------ | ------- |
| Airforce | `g4f.Provider.Airforce` | flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3, sdxl, flux-pro | ❌ | [llmplayground.net](https://llmplayground.net) |
| AmigoChat | `g4f.Provider.AmigoChat` | flux-realism, flux-pro, dall-e-3, flux-dev | ❌ | [amigochat.io](https://amigochat.io/chat/) |
| Microsoft Designer in Bing | `g4f.Provider.BingCreateImages` | dall-e-3 | ❌ | [bing.com](https://www.bing.com/images/create) |
| Blackbox AI | `g4f.Provider.Blackbox` | flux | ✔️ | [blackbox.ai](https://www.blackbox.ai) |
| Blackbox2 | `g4f.Provider.Blackbox2` | flux | ❌ | [blackbox.ai](https://www.blackbox.ai) |
| Microsoft Copilot | `g4f.Provider.CopilotAccount` | dall-e-3 | ❌ | [copilot.microsoft.com](https://copilot.microsoft.com) |
| DeepInfraImage | `g4f.Provider.DeepInfraImage` | | ❌ | [deepinfra.com](https://deepinfra.com) |
| Flux (HuggingSpace) | `g4f.Provider.Flux` | flux-dev | ❌ | [black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space) |
| Google Gemini | `g4f.Provider.Gemini` | gemini | ❌ | [gemini.google.com](https://gemini.google.com) |
| HuggingChat | `g4f.Provider.HuggingChat` | flux-dev | ❌ | [huggingface.co](https://huggingface.co/chat) |
| HuggingFace | `g4f.Provider.HuggingFace` | flux-dev | ❌ | [huggingface.co](https://huggingface.co/chat) |
| Meta AI | `g4f.Provider.MetaAIAccount` | | ❌ | [meta.ai](https://www.meta.ai) |
| Microsoft Designer | `g4f.Provider.MicrosoftDesigner` | dall-e-3 | ❌ | [designer.microsoft.com](https://designer.microsoft.com) |
| OpenAI ChatGPT | `g4f.Provider.OpenaiAccount` | dall-e-3, gpt-4, gpt-4o | ✔️ | [chatgpt.com](https://chatgpt.com) |
| OpenAI ChatGPT | `g4f.Provider.OpenaiChat` | ❌ | ✔️ | [chatgpt.com](https://chatgpt.com) |
| Pollinations AI | `g4f.Provider.PollinationsAI` | flux, flux-realism, flux-cablyai, flux-anime, flux-3d, any-dark, flux-pro, turbo, midjourney, dall-e-3 | ❌ | [pollinations.ai](https://pollinations.ai) |
| Prodia | `g4f.Provider.Prodia` | | ❌ | [app.prodia.com](https://app.prodia.com) |
| ReplicateHome | `g4f.Provider.ReplicateHome` | sd-3, sdxl, playground-v2.5 | ❌ | [replicate.com](https://replicate.com) |
| You.com | `g4f.Provider.You` | | ❌ | [you.com](https://you.com) |
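All of the image providers above are reachable through the unified client. A minimal sketch, assuming the `Client().images.generate` interface described in the client docs (the model and prompt are illustrative):

```python
from g4f.client import Client

client = Client()  # selects a working provider for the requested model
response = client.images.generate(
    model="flux",                  # any image model from the table above
    prompt="a white siamese cat"
)
print(response.data[0].url)        # URL of the generated image
```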


@ -37,14 +37,14 @@ class AsyncTestPassModel(unittest.IsolatedAsyncioTestCase):
async def test_max_stream(self):
client = AsyncClient(provider=YieldProviderMock)
messages = [{'role': 'user', 'content': chunk} for chunk in ["How ", "are ", "you", "?"]]
response = client.chat.completions.create(messages, "Hello", stream=True)
response = await client.chat.completions.create(messages, "Hello", stream=True)
async for chunk in response:
chunk: ChatCompletionChunk = chunk
self.assertIsInstance(chunk, ChatCompletionChunk)
if chunk.choices[0].delta.content is not None:
self.assertIsInstance(chunk.choices[0].delta.content, str)
messages = [{'role': 'user', 'content': chunk} for chunk in ["You ", "You ", "Other", "?"]]
response = client.chat.completions.create(messages, "Hello", stream=True, max_tokens=2)
response = await client.chat.completions.create(messages, "Hello", stream=True, max_tokens=2)
response_list = []
async for chunk in response:
response_list.append(chunk)
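The fix shown in this test is that `AsyncClient.chat.completions.create` is now a coroutine, so it must be awaited even when `stream=True`; only the awaited result is an async iterator. A minimal sketch of the corrected call pattern (model name illustrative):

```python
import asyncio
from g4f.client import AsyncClient

async def main():
    client = AsyncClient()
    messages = [{"role": "user", "content": "How are you?"}]
    # `create` is now a coroutine: await it first, then iterate the stream.
    response = await client.chat.completions.create(messages, "gpt-4o-mini", stream=True)
    async for chunk in response:
        if chunk.choices[0].delta.content is not None:
            print(chunk.choices[0].delta.content, end="")

asyncio.run(main())
```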


@ -32,6 +32,7 @@ class AutonomousAI(AsyncGeneratorProvider, ProviderModelMixin):
"qwen-2.5-coder-32b": "qwen_coder",
"hermes-3": "hermes",
"llama-3.2-90b": "vision",
"llama-3.2-70b": "summary",
}
@classmethod


@ -1,14 +1,12 @@
from __future__ import annotations
from aiohttp import ClientSession, TCPConnector, ClientTimeout
from aiohttp import ClientSession
from pathlib import Path
import re
import json
import random
import string
from pathlib import Path
from ..typing import AsyncResult, Messages, ImagesType
from ..requests.raise_for_status import raise_for_status
@ -32,15 +30,14 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
api_endpoint = "https://www.blackbox.ai/api/chat"
working = True
needs_auth = True
supports_stream = False
supports_system_message = False
supports_stream = True
supports_system_message = True
supports_message_history = True
default_model = "blackboxai"
default_vision_model = default_model
default_image_model = 'ImageGeneration'
image_models = [default_image_model]
image_models = [default_image_model, "ImageGeneration2"]
vision_models = [default_vision_model, 'gpt-4o', 'gemini-pro', 'gemini-1.5-flash', 'llama-3.1-8b', 'llama-3.1-70b', 'llama-3.1-405b']
userSelectedModel = ['gpt-4o', 'gemini-pro', 'claude-sonnet-3.5', 'blackboxai-pro']
@ -48,12 +45,13 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
agentMode = {
'ImageGeneration': {'mode': True, 'id': "ImageGenerationLV45LJp", 'name': "Image Generation"},
#
'meta-llama/Llama-3.3-70B-Instruct-Turbo': {'mode': True, 'id': "meta-llama/Llama-3.3-70B-Instruct-Turbo", 'name': "Meta-Llama-3.3-70B-Instruct-Turbo"},
'mistralai/Mistral-7B-Instruct-v0.2': {'mode': True, 'id': "mistralai/Mistral-7B-Instruct-v0.2", 'name': "Mistral-(7B)-Instruct-v0.2"},
'deepseek-ai/deepseek-llm-67b-chat': {'mode': True, 'id': "deepseek-ai/deepseek-llm-67b-chat", 'name': "DeepSeek-LLM-Chat-(67B)"},
'databricks/dbrx-instruct': {'mode': True, 'id': "databricks/dbrx-instruct", 'name': "DBRX-Instruct"},
'Qwen/QwQ-32B-Preview': {'mode': True, 'id': "Qwen/QwQ-32B-Preview", 'name': "Qwen-QwQ-32B-Preview"},
'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO': {'mode': True, 'id': "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", 'name': "Nous-Hermes-2-Mixtral-8x7B-DPO"}
'Meta-Llama-3.3-70B-Instruct-Turbo': {'mode': True, 'id': "meta-llama/Llama-3.3-70B-Instruct-Turbo", 'name': "Meta-Llama-3.3-70B-Instruct-Turbo"},
'Mistral-(7B)-Instruct-v0.2': {'mode': True, 'id': "mistralai/Mistral-7B-Instruct-v0.2", 'name': "Mistral-(7B)-Instruct-v0.2"},
'DeepSeek-LLM-Chat-(67B)': {'mode': True, 'id': "deepseek-ai/deepseek-llm-67b-chat", 'name': "DeepSeek-LLM-Chat-(67B)"},
'DBRX-Instruct': {'mode': True, 'id': "databricks/dbrx-instruct", 'name': "DBRX-Instruct"},
'Qwen-QwQ-32B-Preview': {'mode': True, 'id': "Qwen/QwQ-32B-Preview", 'name': "Qwen-QwQ-32B-Preview"},
'Nous-Hermes-2-Mixtral-8x7B-DPO': {'mode': True, 'id': "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", 'name': "Nous-Hermes-2-Mixtral-8x7B-DPO"},
'DeepSeek-R1': {'mode': True, 'id': "deepseek-reasoner", 'name': "DeepSeek-R1"}
}
trendingAgentMode = {
@ -99,7 +97,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
'builder Agent': {'mode': True, 'id': "builder Agent"},
}
models = list(dict.fromkeys([default_model, *userSelectedModel, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))
models = list(dict.fromkeys([default_model, *userSelectedModel, *image_models, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))
model_aliases = {
### chat ###
@ -107,15 +105,17 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
"gemini-1.5-flash": "gemini-1.5-flash",
"gemini-1.5-pro": "gemini-pro",
"claude-3.5-sonnet": "claude-sonnet-3.5",
"llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"mixtral-7b": "mistralai/Mistral-7B-Instruct-v0.2",
"deepseek-chat": "deepseek-ai/deepseek-llm-67b-chat",
"dbrx-instruct": "databricks/dbrx-instruct",
"qwq-32b": "Qwen/QwQ-32B-Preview",
"hermes-2-dpo": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"llama-3.3-70b": "Meta-Llama-3.3-70B-Instruct-Turbo",
"mixtral-7b": "Mistral-(7B)-Instruct-v0.",
"deepseek-chat": "DeepSeek-LLM-Chat-(67B)",
"dbrx-instruct": "DBRX-Instruct",
"qwq-32b": "Qwen-QwQ-32B-Preview",
"hermes-2-dpo": "Nous-Hermes-2-Mixtral-8x7B-DPO",
"deepseek-r1": "DeepSeek-R1",
### image ###
"flux": "ImageGeneration",
"flux": "ImageGeneration2",
}
@classmethod
@ -215,10 +215,31 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36'
}
connector = TCPConnector(limit=10, ttl_dns_cache=300)
timeout = ClientTimeout(total=30)
async with ClientSession(headers=headers) as session:
if model == "ImageGeneration2":
prompt = messages[-1]["content"]
data = {
"query": prompt,
"agentMode": True
}
headers['content-type'] = 'text/plain;charset=UTF-8'
async with session.post(
"https://www.blackbox.ai/api/image-generator",
json=data,
proxy=proxy,
headers=headers
) as response:
await raise_for_status(response)
response_json = await response.json()
if "markdown" in response_json:
image_url_match = re.search(r'!\[.*?\]\((.*?)\)', response_json["markdown"])
if image_url_match:
image_url = image_url_match.group(1)
yield ImageResponse(images=[image_url], alt=prompt)
return
async with ClientSession(headers=headers, connector=connector, timeout=timeout) as session:
if conversation is None:
conversation = Conversation(model)
conversation.validated_value = await cls.fetch_validated()
@ -312,8 +333,14 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
yield text_to_yield
full_response = text_to_yield
if return_conversation:
conversation.message_history.append({"role": "assistant", "content": full_response})
yield conversation
if full_response:
if max_tokens and len(full_response) >= max_tokens:
reason = "length"
else:
reason = "stop"
yield FinishReason("stop")
if return_conversation:
conversation.message_history.append({"role": "assistant", "content": full_response})
yield conversation
yield FinishReason(reason)
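The new `ImageGeneration2` branch above pulls the image URL out of a markdown payload of the form `{"markdown": "![alt](url)"}`. A standalone sketch of that extraction step:

```python
import re
from typing import Optional

def extract_image_url(markdown: str) -> Optional[str]:
    """Return the first URL found in a markdown image tag, e.g. ![alt](url)."""
    match = re.search(r'!\[.*?\]\((.*?)\)', markdown)
    return match.group(1) if match else None

assert extract_image_url("![cat](https://example.com/cat.png)") == "https://example.com/cat.png"
```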


@ -1,259 +0,0 @@
from __future__ import annotations
import random
import asyncio
import re
import json
from pathlib import Path
from aiohttp import ClientSession
from typing import AsyncIterator, Optional
from ..typing import AsyncResult, Messages
from ..image import ImageResponse
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..cookies import get_cookies_dir
from .. import debug
class BlackboxCreateAgent(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://www.blackbox.ai"
api_endpoints = {
"llama-3.1-70b": "https://www.blackbox.ai/api/improve-prompt",
"flux": "https://www.blackbox.ai/api/image-generator"
}
working = True
supports_system_message = True
supports_message_history = True
default_model = 'llama-3.1-70b'
chat_models = [default_model]
image_models = ['flux']
models = [*chat_models, *image_models]
@classmethod
def _get_cache_file(cls) -> Path:
"""Returns the path to the cache file."""
dir = Path(get_cookies_dir())
dir.mkdir(exist_ok=True)
return dir / 'blackbox_create_agent.json'
@classmethod
def _load_cached_value(cls) -> str | None:
cache_file = cls._get_cache_file()
if cache_file.exists():
try:
with open(cache_file, 'r') as f:
data = json.load(f)
return data.get('validated_value')
except Exception as e:
debug.log(f"Error reading cache file: {e}")
return None
@classmethod
def _save_cached_value(cls, value: str):
cache_file = cls._get_cache_file()
try:
with open(cache_file, 'w') as f:
json.dump({'validated_value': value}, f)
except Exception as e:
debug.log(f"Error writing to cache file: {e}")
@classmethod
async def fetch_validated(cls) -> Optional[str]:
"""
Asynchronously retrieves the validated value from cache or website.
:return: The validated value or None if retrieval fails.
"""
cached_value = cls._load_cached_value()
if cached_value:
return cached_value
js_file_pattern = r'static/chunks/\d{4}-[a-fA-F0-9]+\.js'
v_pattern = r'L\s*=\s*[\'"]([0-9a-fA-F-]{36})[\'"]'
def is_valid_context(text: str) -> bool:
"""Checks if the context is valid."""
return any(char + '=' in text for char in 'abcdefghijklmnopqrstuvwxyz')
async with ClientSession() as session:
try:
async with session.get(cls.url) as response:
if response.status != 200:
debug.log("Failed to download the page.")
return cached_value
page_content = await response.text()
js_files = re.findall(js_file_pattern, page_content)
for js_file in js_files:
js_url = f"{cls.url}/_next/{js_file}"
async with session.get(js_url) as js_response:
if js_response.status == 200:
js_content = await js_response.text()
for match in re.finditer(v_pattern, js_content):
start = max(0, match.start() - 50)
end = min(len(js_content), match.end() + 50)
context = js_content[start:end]
if is_valid_context(context):
validated_value = match.group(1)
cls._save_cached_value(validated_value)
return validated_value
except Exception as e:
debug.log(f"Error while retrieving validated_value: {e}")
return cached_value
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
proxy: str = None,
prompt: str = None,
**kwargs
) -> AsyncIterator[str | ImageResponse]:
"""
Creates an async generator for text or image generation.
"""
if model in cls.chat_models:
async for text in cls._generate_text(model, messages, proxy=proxy, **kwargs):
yield text
elif model in cls.image_models:
prompt = messages[-1]['content']
async for image in cls._generate_image(model, prompt, proxy=proxy, **kwargs):
yield image
else:
raise ValueError(f"Model {model} not supported")
@classmethod
async def _generate_text(
cls,
model: str,
messages: Messages,
proxy: str = None,
max_retries: int = 3,
delay: int = 1,
max_tokens: int = None,
**kwargs
) -> AsyncIterator[str]:
headers = cls._get_headers()
for outer_attempt in range(2): # Add outer loop for retrying with a new key
validated_value = await cls.fetch_validated()
if not validated_value:
raise RuntimeError("Failed to get validated value")
async with ClientSession(headers=headers) as session:
api_endpoint = cls.api_endpoints[model]
data = {
"messages": messages,
"max_tokens": max_tokens,
"validated": validated_value
}
for attempt in range(max_retries):
try:
async with session.post(api_endpoint, json=data, proxy=proxy) as response:
response.raise_for_status()
response_data = await response.json()
if response_data.get('status') == 200 and 'prompt' in response_data:
yield response_data['prompt']
return # Successful execution
else:
raise KeyError("Invalid response format or missing 'prompt' key")
except Exception as e:
if attempt == max_retries - 1:
if outer_attempt == 0: # If this is the first attempt with this key
# Remove the cached key and try to get a new one
cls._save_cached_value("")
debug.log("Invalid key, trying to get a new one...")
break # Exit the inner loop to get a new key
else:
raise RuntimeError(f"Error after all attempts: {str(e)}")
else:
wait_time = delay * (2 ** attempt) + random.uniform(0, 1)
debug.log(f"Attempt {attempt + 1} failed. Retrying in {wait_time:.2f} seconds...")
await asyncio.sleep(wait_time)
@classmethod
async def _generate_image(
cls,
model: str,
prompt: str,
proxy: str = None,
**kwargs
) -> AsyncIterator[ImageResponse]:
headers = {
**cls._get_headers()
}
api_endpoint = cls.api_endpoints[model]
async with ClientSession(headers=headers) as session:
data = {
"query": prompt
}
async with session.post(api_endpoint, json=data, proxy=proxy) as response:
response.raise_for_status()
response_data = await response.json()
if 'markdown' in response_data:
# Extract URL from markdown format: ![](url)
image_url = re.search(r'\!\[\]\((.*?)\)', response_data['markdown'])
if image_url:
yield ImageResponse(images=[image_url.group(1)], alt=prompt)
else:
raise ValueError("Could not extract image URL from markdown")
else:
raise KeyError("'markdown' key not found in response")
@staticmethod
def _get_headers() -> dict:
return {
'accept': '*/*',
'accept-language': 'en-US,en;q=0.9',
'authorization': f'Bearer 56c8eeff9971269d7a7e625ff88e8a83a34a556003a5c87c289ebe9a3d8a3d2c',
'content-type': 'application/json',
'origin': 'https://www.blackbox.ai',
'referer': 'https://www.blackbox.ai',
'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36'
}
@classmethod
async def create_async(
cls,
model: str,
messages: Messages,
proxy: str = None,
**kwargs
) -> AsyncResult:
"""
Creates an async response for the provider.
Args:
model: The model to use
messages: The messages to process
proxy: Optional proxy to use
**kwargs: Additional arguments
Returns:
AsyncResult: The response from the provider
"""
if not model:
model = cls.default_model
if model in cls.chat_models:
async for text in cls._generate_text(model, messages, proxy=proxy, **kwargs):
return text
elif model in cls.image_models:
prompt = messages[-1]['content']
async for image in cls._generate_image(model, prompt, proxy=proxy, **kwargs):
return image
else:
raise ValueError(f"Model {model} not supported")
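Although this provider is removed, its retry loop is a generic pattern worth keeping: exponential backoff with random jitter between attempts. A minimal standalone sketch (`fetch` is a hypothetical stand-in for any request coroutine):

```python
import asyncio
import random

async def with_retries(fetch, max_retries: int = 3, delay: float = 1.0):
    """Run `fetch` with exponential backoff plus jitter, as the removed code did."""
    for attempt in range(max_retries):
        try:
            return await fetch()
        except Exception:
            if attempt == max_retries - 1:
                raise
            wait_time = delay * (2 ** attempt) + random.uniform(0, 1)
            await asyncio.sleep(wait_time)  # 1s, 2s, 4s, ... plus up to 1s of jitter
```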


@ -39,12 +39,15 @@ class Conversation(JsonConversation):
class Copilot(AbstractProvider, ProviderModelMixin):
label = "Microsoft Copilot"
url = "https://copilot.microsoft.com"
working = True
supports_stream = True
default_model = "Copilot"
models = [default_model]
model_aliases = {
"gpt-4": "Copilot",
"gpt-4": default_model,
"gpt-4o": default_model,
}
websocket_url = "wss://copilot.microsoft.com/c/api/chat?api-version=2"
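With the alias added above, requests for `gpt-4o` now resolve to the provider's single `Copilot` model. A quick sketch, assuming `ProviderModelMixin.get_model` performs the alias lookup:

```python
from g4f.Provider import Copilot

# Both aliases resolve to the default model via model_aliases.
print(Copilot.get_model("gpt-4"))   # -> "Copilot"
print(Copilot.get_model("gpt-4o"))  # -> "Copilot"
```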


@ -1,19 +1,33 @@
from __future__ import annotations
import time
from aiohttp import ClientSession, ClientTimeout
import json
import asyncio
import random
from ..typing import AsyncResult, Messages
from ..typing import AsyncResult, Messages, Cookies
from ..requests.raise_for_status import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ..providers.response import FinishReason, JsonConversation
class DuckDuckGoSearchException(Exception):
"""Base exception class for duckduckgo_search."""
class RatelimitException(DuckDuckGoSearchException):
"""Raised for rate limit exceeded errors during API requests."""
class TimeoutException(DuckDuckGoSearchException):
"""Raised for timeout errors during API requests."""
class ConversationLimitException(DuckDuckGoSearchException):
"""Raised for conversation limit during API requests to AI endpoint."""
class Conversation(JsonConversation):
vqd: str = None
message_history: Messages = []
cookies: dict = {}
def __init__(self, model: str):
self.model = model
@ -39,20 +53,40 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
"mixtral-8x7b": "mistralai/Mixtral-8x7B-Instruct-v0.1",
}
last_request_time = 0
@classmethod
def validate_model(cls, model: str) -> str:
"""Validates and returns the correct model name"""
if model in cls.model_aliases:
model = cls.model_aliases[model]
if model not in cls.models:
raise ValueError(f"Model {model} not supported. Available models: {cls.models}")
return model
@classmethod
async def sleep(cls):
"""Implements rate limiting between requests"""
now = time.time()
if cls.last_request_time > 0:
delay = max(0.0, 0.75 - (now - cls.last_request_time))
if delay > 0:
await asyncio.sleep(delay)
cls.last_request_time = now
@classmethod
async def fetch_vqd(cls, session: ClientSession, max_retries: int = 3) -> str:
"""
Fetches the required VQD token for the chat session with retries.
"""
"""Fetches the required VQD token for the chat session with retries."""
headers = {
"accept": "text/event-stream",
"content-type": "application/json",
"x-vqd-accept": "1",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36"
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
}
for attempt in range(max_retries):
try:
await cls.sleep()
async with session.get(cls.status_url, headers=headers) as response:
if response.status == 200:
vqd = response.headers.get("x-vqd-4", "")
@ -81,50 +115,92 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
messages: Messages,
proxy: str = None,
timeout: int = 30,
cookies: Cookies = None,
conversation: Conversation = None,
return_conversation: bool = False,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
async with ClientSession(timeout=ClientTimeout(total=timeout)) as session:
# Fetch VQD token
if conversation is None:
conversation = Conversation(model)
conversation.vqd = await cls.fetch_vqd(session)
conversation.message_history = [{"role": "user", "content": format_prompt(messages)}]
else:
conversation.message_history.append(messages[-1])
headers = {
"accept": "text/event-stream",
"content-type": "application/json",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36",
"x-vqd-4": conversation.vqd,
}
data = {
"model": model,
"messages": conversation.message_history,
}
async with session.post(cls.api_endpoint, json=data, headers=headers, proxy=proxy) as response:
await raise_for_status(response)
reason = None
full_message = ""
async for line in response.content:
line = line.decode("utf-8").strip()
if line.startswith("data:"):
try:
message = json.loads(line[5:].strip())
if "message" in message:
if message["message"]:
yield message["message"]
full_message += message["message"]
reason = "length"
else:
reason = "stop"
except json.JSONDecodeError:
continue
if return_conversation:
conversation.message_history.append({"role": "assistant", "content": full_message})
conversation.vqd = response.headers.get("x-vqd-4", conversation.vqd)
yield conversation
if reason is not None:
yield FinishReason(reason)
model = cls.validate_model(model)
if cookies is None and conversation is not None:
cookies = conversation.cookies
try:
async with ClientSession(timeout=ClientTimeout(total=timeout), cookies=cookies) as session:
if conversation is None:
conversation = Conversation(model)
conversation.vqd = await cls.fetch_vqd(session)
conversation.message_history = [{"role": "user", "content": format_prompt(messages)}]
else:
conversation.message_history.append(messages[-1])
headers = {
"accept": "text/event-stream",
"content-type": "application/json",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
"x-vqd-4": conversation.vqd,
}
data = {
"model": model,
"messages": conversation.message_history,
}
await cls.sleep()
try:
async with session.post(cls.api_endpoint, json=data, headers=headers, proxy=proxy) as response:
await raise_for_status(response)
reason = None
full_message = ""
async for line in response.content:
line = line.decode("utf-8").strip()
if line.startswith("data:"):
try:
message = json.loads(line[5:].strip())
if "action" in message and message["action"] == "error":
error_type = message.get("type", "")
if message.get("status") == 429:
if error_type == "ERR_CONVERSATION_LIMIT":
raise ConversationLimitException(error_type)
raise RatelimitException(error_type)
raise DuckDuckGoSearchException(error_type)
if "message" in message:
if message["message"]:
yield message["message"]
full_message += message["message"]
reason = "length"
else:
reason = "stop"
except json.JSONDecodeError:
continue
if return_conversation:
conversation.message_history.append({"role": "assistant", "content": full_message})
conversation.vqd = response.headers.get("x-vqd-4", conversation.vqd)
conversation.cookies = {
n: c.value
for n, c in session.cookie_jar.filter_cookies(cls.url).items()
}
if reason is not None:
yield FinishReason(reason)
if return_conversation:
yield conversation
except asyncio.TimeoutError as e:
raise TimeoutException(f"Request timed out: {str(e)}")
except Exception as e:
if "time" in str(e).lower():
raise TimeoutException(f"Request timed out: {str(e)}")
raise DuckDuckGoSearchException(f"Request failed: {str(e)}")
except Exception as e:
if isinstance(e, (RatelimitException, TimeoutException, ConversationLimitException)):
raise
if "time" in str(e).lower():
raise TimeoutException(f"Request timed out: {str(e)}")
raise DuckDuckGoSearchException(f"Request failed: {str(e)}")
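Two reusable pieces of the refactor above, shown outside the provider class: a minimum-interval throttle (at least 0.75 s between requests) and typed exceptions that callers can catch selectively. A minimal sketch:

```python
import asyncio
import time

class DuckDuckGoSearchException(Exception):
    """Base class, mirroring the provider's exception hierarchy."""

class RatelimitException(DuckDuckGoSearchException):
    """Raised when the API answers with HTTP 429."""

_last_request = 0.0

async def throttle(min_interval: float = 0.75) -> None:
    """Sleep just long enough to keep min_interval between consecutive requests."""
    global _last_request
    now = time.time()
    if _last_request > 0:
        delay = max(0.0, min_interval - (now - _last_request))
        if delay > 0:
            await asyncio.sleep(delay)
    _last_request = time.time()
```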


@ -3,15 +3,15 @@ from __future__ import annotations
import json
from aiohttp import ClientSession
from ...typing import AsyncResult, Messages
from ...requests.raise_for_status import raise_for_status
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..typing import AsyncResult, Messages
from ..requests.raise_for_status import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
class DeepInfraChat(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://deepinfra.com/chat"
api_endpoint = "https://api.deepinfra.com/v1/openai/chat/completions"
working = False
working = True
supports_stream = True
supports_system_message = True
supports_message_history = True
@ -24,6 +24,7 @@ class DeepInfraChat(AsyncGeneratorProvider, ProviderModelMixin):
'meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo',
'Qwen/QwQ-32B-Preview',
'microsoft/WizardLM-2-8x22B',
'microsoft/WizardLM-2-7B',
'Qwen/Qwen2.5-72B-Instruct',
'Qwen/Qwen2.5-Coder-32B-Instruct',
'nvidia/Llama-3.1-Nemotron-70B-Instruct',
@ -35,6 +36,7 @@ class DeepInfraChat(AsyncGeneratorProvider, ProviderModelMixin):
"llama-3.1-70b": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
"qwq-32b": "Qwen/QwQ-32B-Preview",
"wizardlm-2-8x22b": "microsoft/WizardLM-2-8x22B",
"wizardlm-2-7b": "microsoft/WizardLM-2-7B",
"qwen-2-72b": "Qwen/Qwen2.5-72B-Instruct",
"qwen-2.5-coder-32b": "Qwen/Qwen2.5-Coder-32B-Instruct",
"nemotron-70b": "nvidia/Llama-3.1-Nemotron-70B-Instruct",


@ -17,6 +17,7 @@ class Free2GPT(AsyncGeneratorProvider, ProviderModelMixin):
working = True
supports_message_history = True
default_model = 'mistral-7b'
models = [default_model]
@classmethod
async def create_async_generator(


@ -17,11 +17,11 @@ class Jmuz(OpenaiAPI):
default_model = "gpt-4o"
model_aliases = {
"gemini": "gemini-exp",
"gemini-1.5-pro": "gemini-pro",
"gemini-1.5-flash": "gemini-thinking",
"deepseek-chat": "deepseek-v3",
"qwq-32b": "qwq-32b-preview",
"gemini-1.5-flash": "gemini-flash",
"gemini-1.5-pro": "gemini-pro",
"gemini-2.0-flash-thinking": "gemini-thinking",
"deepseek-chat": "deepseek-v3",
}
@classmethod
@ -35,9 +35,7 @@ class Jmuz(OpenaiAPI):
cls,
model: str,
messages: Messages,
stream: bool = False,
api_key: str = None,
api_base: str = None,
stream: bool = True,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
@ -45,9 +43,9 @@ class Jmuz(OpenaiAPI):
"Authorization": f"Bearer {cls.api_key}",
"Content-Type": "application/json",
"accept": "*/*",
"cache-control": "no-cache",
"user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
}
started = False
buffer = ""
async for chunk in super().create_async_generator(


@ -19,6 +19,8 @@ class Mhystical(OpenaiAPI):
url = "https://mhystical.cc"
api_endpoint = "https://api.mhystical.cc/v1/completions"
login_url = "https://mhystical.cc/dashboard"
api_key = "mhystical"
working = True
needs_auth = False
supports_stream = False # Set to False, as streaming is not specified in ChatifyAI
@ -38,12 +40,11 @@ class Mhystical(OpenaiAPI):
model: str,
messages: Messages,
stream: bool = False,
api_key: str = "mhystical",
api_key: str = None,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
headers = {
"x-api-key": api_key,
"x-api-key": cls.api_key,
"Content-Type": "application/json",
"accept": "*/*",
"cache-control": "no-cache",

g4f/Provider/OIVSCode.py (new file)

@ -0,0 +1,101 @@
from __future__ import annotations
import json
from aiohttp import ClientSession
from ..image import to_data_uri
from ..typing import AsyncResult, Messages, ImagesType
from ..requests.raise_for_status import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ..providers.response import FinishReason
class OIVSCode(AsyncGeneratorProvider, ProviderModelMixin):
label = "OI VSCode Server"
url = "https://oi-vscode-server.onrender.com"
api_endpoint = "https://oi-vscode-server.onrender.com/v1/chat/completions"
working = True
supports_stream = True
supports_system_message = True
supports_message_history = True
default_model = "gpt-4o-mini-2024-07-18"
default_vision_model = default_model
vision_models = [default_model, "gpt-4o-mini"]
models = vision_models
model_aliases = {"gpt-4o-mini": "gpt-4o-mini-2024-07-18"}
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
stream: bool = False,
images: ImagesType = None,
proxy: str = None,
**kwargs
) -> AsyncResult:
headers = {
"accept": "*/*",
"accept-language": "en-US,en;q=0.9",
"content-type": "application/json",
"user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
}
async with ClientSession(headers=headers) as session:
if images is not None:
messages[-1]['content'] = [
{
"type": "text",
"text": messages[-1]['content']
},
*[
{
"type": "image_url",
"image_url": {
"url": to_data_uri(image)
}
}
for image, _ in images
]
]
data = {
"model": model,
"stream": stream,
"messages": messages
}
async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response:
await raise_for_status(response)
full_response = ""
if stream:
async for line in response.content:
if line:
line = line.decode()
if line.startswith("data: "):
if line.strip() == "data: [DONE]":
break
try:
data = json.loads(line[6:])
if content := data["choices"][0]["delta"].get("content"):
yield content
full_response += content
except:
continue
reason = "length" if len(full_response) > 0 else "stop"
yield FinishReason(reason)
else:
response_data = await response.json()
full_response = response_data["choices"][0]["message"]["content"]
yield full_response
reason = "length" if len(full_response) > 0 else "stop"
yield FinishReason(reason)
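A minimal usage sketch for the new provider. The `image` keyword is forwarded by the client as `images` (see the client changes later in this diff); the file path is illustrative:

```python
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import OIVSCode

async def main():
    client = AsyncClient(provider=OIVSCode)
    response = await client.chat.completions.create(
        [{"role": "user", "content": "What is in this image?"}],
        "gpt-4o-mini",                    # alias of gpt-4o-mini-2024-07-18
        image=open("example.jpg", "rb"),  # illustrative path
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```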


@ -47,11 +47,15 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
"llama-3.3-70b": "llama",
"mistral-nemo": "mistral",
#"": "karma",
#"": "sur-mistral",
"gpt-4": "searchgpt",
"claude-3.5-haiku": "claude-hybridspace",
"claude-3.5-sonnet": "claude-email",
"gpt-4": "claude",
"claude-3.5-sonnet": "sur",
"deepseek-chat": "deepseek",
"llama-3.2-3b": "llamalight",
"llama-3.1-8b": "llamalight",
### Image Models ###
"sd-turbo": "turbo",
}
text_models = []


@ -13,11 +13,9 @@ from .local import *
from .hf_space import HuggingSpace
from .AIChatFree import AIChatFree
from .Airforce import Airforce
from .AIUncensored import AIUncensored
from .AutonomousAI import AutonomousAI
from .Blackbox import Blackbox
from .BlackboxCreateAgent import BlackboxCreateAgent
from .CablyAI import CablyAI
from .ChatGLM import ChatGLM
from .ChatGpt import ChatGpt
@ -27,6 +25,7 @@ from .Cloudflare import Cloudflare
from .Copilot import Copilot
from .DarkAI import DarkAI
from .DDG import DDG
from .DeepInfraChat import DeepInfraChat
from .Free2GPT import Free2GPT
from .FreeGpt import FreeGpt
from .GizAI import GizAI
@ -35,12 +34,12 @@ from .ImageLabs import ImageLabs
from .Jmuz import Jmuz
from .Liaobots import Liaobots
from .Mhystical import Mhystical
from .OIVSCode import OIVSCode
from .PerplexityLabs import PerplexityLabs
from .Pi import Pi
from .Pizzagpt import Pizzagpt
from .PollinationsAI import PollinationsAI
from .Prodia import Prodia
from .RubiksAI import RubiksAI
from .TeachAnything import TeachAnything
from .You import You
from .Yqcloud import Yqcloud


@ -18,8 +18,8 @@ class Qwen_QVQ_72B(AsyncGeneratorProvider, ProviderModelMixin):
default_model = "qwen-qvq-72b-preview"
models = [default_model]
model_aliases = {"qvq-72b": default_model}
vision_models = models
model_aliases = {"qwq-32b": default_model}
@classmethod
async def create_async_generator(


@ -21,7 +21,7 @@ class Qwen_Qwen_2_72B_Instruct(AsyncGeneratorProvider, ProviderModelMixin):
default_model = "qwen-qwen2-72b-instruct"
models = [default_model]
model_aliases = {"qwen-2.5-72b": default_model}
model_aliases = {"qwen-2-72b": default_model}
@classmethod
async def create_async_generator(


@ -7,10 +7,10 @@ from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .BlackForestLabsFlux1Dev import BlackForestLabsFlux1Dev
from .BlackForestLabsFlux1Schnell import BlackForestLabsFlux1Schnell
from .VoodoohopFlux1Schnell import VoodoohopFlux1Schnell
from .StableDiffusion35Large import StableDiffusion35Large
from .CohereForAI import CohereForAI
from .Qwen_QVQ_72B import Qwen_QVQ_72B
from .Qwen_Qwen_2_72B_Instruct import Qwen_Qwen_2_72B_Instruct
from .StableDiffusion35Large import StableDiffusion35Large
class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://huggingface.co/spaces"
@ -18,9 +18,12 @@ class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):
working = True
default_model = BlackForestLabsFlux1Dev.default_model
default_model = Qwen_Qwen_2_72B_Instruct.default_model
default_image_model = BlackForestLabsFlux1Dev.default_model
default_vision_model = Qwen_QVQ_72B.default_model
providers = [BlackForestLabsFlux1Dev, BlackForestLabsFlux1Schnell, VoodoohopFlux1Schnell, StableDiffusion35Large, CohereForAI, Qwen_QVQ_72B, Qwen_Qwen_2_72B_Instruct]
providers = [BlackForestLabsFlux1Dev, BlackForestLabsFlux1Schnell, VoodoohopFlux1Schnell, CohereForAI, Qwen_QVQ_72B, Qwen_Qwen_2_72B_Instruct, StableDiffusion35Large]
@classmethod
def get_parameters(cls, **kwargs) -> dict:


@ -15,11 +15,11 @@ class Cerebras(OpenaiAPI):
working = True
default_model = "llama3.1-70b"
models = [
"llama3.1-70b",
default_model,
"llama3.1-8b",
"llama-3.3-70b"
]
model_aliases = {"llama-3.1-70b": "llama3.1-70b", "llama-3.1-8b": "llama3.1-8b"}
model_aliases = {"llama-3.1-70b": default_model, "llama-3.1-8b": "llama3.1-8b"}
@classmethod
async def create_async_generator(


@ -2,41 +2,59 @@ from __future__ import annotations
import requests
from ...typing import AsyncResult, Messages
from .OpenaiAPI import OpenaiAPI
from ...requests import StreamSession, raise_for_status
from ...image import ImageResponse
from .OpenaiAPI import OpenaiAPI
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
class DeepInfra(OpenaiAPI):
class DeepInfra(OpenaiAPI, AsyncGeneratorProvider, ProviderModelMixin):
label = "DeepInfra"
url = "https://deepinfra.com"
login_url = "https://deepinfra.com/dash/api_keys"
working = True
api_base = "https://api.deepinfra.com/v1/openai",
api_base = "https://api.deepinfra.com/v1/openai"
needs_auth = True
supports_stream = True
supports_message_history = True
default_model = "meta-llama/Meta-Llama-3.1-70B-Instruct"
default_image_model = ''
image_models = [default_image_model]
default_image_model = "stabilityai/sd3.5"
models = []
image_models = []
@classmethod
def get_models(cls, **kwargs):
if not cls.models:
url = 'https://api.deepinfra.com/models/featured'
models = requests.get(url).json()
cls.models = [model['model_name'] for model in models if model["type"] == "text-generation"]
cls.image_models = [model['model_name'] for model in models if model["reported_type"] == "text-to-image"]
response = requests.get(url)
models = response.json()
cls.models = []
cls.image_models = []
for model in models:
if model["type"] == "text-generation":
cls.models.append(model['model_name'])
elif model["reported_type"] == "text-to-image":
cls.image_models.append(model['model_name'])
cls.models.extend(cls.image_models)
return cls.models
@classmethod
def get_image_models(cls, **kwargs):
if not cls.image_models:
cls.get_models()
return cls.image_models
@classmethod
def create_async_generator(
cls,
model: str,
messages: Messages,
stream: bool = True,
stream: bool,
temperature: float = 0.7,
max_tokens: int = 1028,
prompt: str = None,
**kwargs
) -> AsyncResult:
headers = {
@ -47,12 +65,6 @@ class DeepInfra(OpenaiAPI):
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36',
'X-Deepinfra-Source': 'web-embed',
}
# Check if the model is an image model
if model in cls.image_models:
return cls.create_image_generator(messages[-1]["content"] if prompt is None else prompt, model, headers=headers, **kwargs)
# Text generation
return super().create_async_generator(
model, messages,
stream=stream,
@ -63,7 +75,7 @@ class DeepInfra(OpenaiAPI):
)
@classmethod
async def create_image_generator(
async def create_async_image(
cls,
prompt: str,
model: str,
@ -71,13 +83,26 @@ class DeepInfra(OpenaiAPI):
api_base: str = "https://api.deepinfra.com/v1/inference",
proxy: str = None,
timeout: int = 180,
headers: dict = None,
extra_data: dict = {},
**kwargs
) -> AsyncResult:
if api_key is not None and headers is not None:
) -> ImageResponse:
headers = {
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US',
'Connection': 'keep-alive',
'Origin': 'https://deepinfra.com',
'Referer': 'https://deepinfra.com/',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-site',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36',
'X-Deepinfra-Source': 'web-embed',
'sec-ch-ua': '"Google Chrome";v="119", "Chromium";v="119", "Not?A_Brand";v="24"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
}
if api_key is not None:
headers["Authorization"] = f"Bearer {api_key}"
async with StreamSession(
proxies={"all": proxy},
headers=headers,
@ -85,7 +110,7 @@ class DeepInfra(OpenaiAPI):
) as session:
model = cls.get_model(model)
data = {"prompt": prompt, **extra_data}
data = {"input": data} if model == cls.default_image_model else data
data = {"input": data} if model == cls.default_model else data
async with session.post(f"{api_base.rstrip('/')}/{model}", json=data) as response:
await raise_for_status(response)
data = await response.json()
@ -93,4 +118,14 @@ class DeepInfra(OpenaiAPI):
if not images:
raise RuntimeError(f"Response: {data}")
images = images[0] if len(images) == 1 else images
yield ImageResponse(images, prompt)
return ImageResponse(images, prompt)
@classmethod
async def create_async_image_generator(
cls,
model: str,
messages: Messages,
prompt: str = None,
**kwargs
) -> AsyncResult:
yield await cls.create_async_image(messages[-1]["content"] if prompt is None else prompt, model, **kwargs)
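A standalone sketch of the model-discovery step in `get_models` above: fetch the featured-models endpoint once and split the entries by reported type:

```python
import requests

def fetch_deepinfra_models(url: str = "https://api.deepinfra.com/models/featured"):
    """Return (all_models, image_models), mirroring DeepInfra.get_models()."""
    entries = requests.get(url).json()
    text_models = [e["model_name"] for e in entries if e["type"] == "text-generation"]
    image_models = [e["model_name"] for e in entries if e["reported_type"] == "text-to-image"]
    return text_models + image_models, image_models
```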


@ -60,13 +60,11 @@ class Gemini(AsyncGeneratorProvider, ProviderModelMixin):
working = True
default_model = 'gemini'
image_models = ["gemini"]
default_vision_model = "gemini"
models = ["gemini", "gemini-1.5-flash", "gemini-1.5-pro"]
model_aliases = {
"gemini-flash": "gemini-1.5-flash",
"gemini-pro": "gemini-1.5-pro",
}
default_image_model = default_model
default_vision_model = default_model
image_models = [default_image_model]
models = [default_model, "gemini-1.5-flash", "gemini-1.5-pro"]
synthesize_content_type = "audio/vnd.wav"
_cookies: Cookies = None


@ -61,7 +61,7 @@ class GigaChat(AsyncGeneratorProvider, ProviderModelMixin):
supports_stream = True
needs_auth = True
default_model = "GigaChat:latest"
models = ["GigaChat:latest", "GigaChat-Plus", "GigaChat-Pro"]
models = [default_model, "GigaChat-Plus", "GigaChat-Pro"]
@classmethod
async def create_async_generator(


@ -5,26 +5,10 @@ from .OpenaiAPI import OpenaiAPI
class GlhfChat(OpenaiAPI):
label = "GlhfChat"
url = "https://glhf.chat"
login_url = "https://glhf.chat/users/settings/api"
login_url = "https://glhf.chat/user-settings/api"
api_base = "https://glhf.chat/api/openai/v1"
working = True
model_aliases = {
'Qwen2.5-Coder-32B-Instruct': 'hf:Qwen/Qwen2.5-Coder-32B-Instruct',
'Llama-3.1-405B-Instruct': 'hf:meta-llama/Llama-3.1-405B-Instruct',
'Llama-3.1-70B-Instruct': 'hf:meta-llama/Llama-3.1-70B-Instruct',
'Llama-3.1-8B-Instruct': 'hf:meta-llama/Llama-3.1-8B-Instruct',
'Llama-3.2-3B-Instruct': 'hf:meta-llama/Llama-3.2-3B-Instruct',
'Llama-3.2-11B-Vision-Instruct': 'hf:meta-llama/Llama-3.2-11B-Vision-Instruct',
'Llama-3.2-90B-Vision-Instruct': 'hf:meta-llama/Llama-3.2-90B-Vision-Instruct',
'Qwen2.5-72B-Instruct': 'hf:Qwen/Qwen2.5-72B-Instruct',
'Llama-3.3-70B-Instruct': 'hf:meta-llama/Llama-3.3-70B-Instruct',
'gemma-2-9b-it': 'hf:google/gemma-2-9b-it',
'gemma-2-27b-it': 'hf:google/gemma-2-27b-it',
'Mistral-7B-Instruct-v0.3': 'hf:mistralai/Mistral-7B-Instruct-v0.3',
'Mixtral-8x7B-Instruct-v0.1': 'hf:mistralai/Mixtral-8x7B-Instruct-v0.1',
'Mixtral-8x22B-Instruct-v0.1': 'hf:mistralai/Mixtral-8x22B-Instruct-v0.1',
'Nous-Hermes-2-Mixtral-8x7B-DPO': 'hf:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO',
'Qwen2.5-7B-Instruct': 'hf:Qwen/Qwen2.5-7B-Instruct',
'SOLAR-10.7B-Instruct-v1.0': 'hf:upstage/SOLAR-10.7B-Instruct-v1.0',
'Llama-3.1-Nemotron-70B-Instruct-HF': 'hf:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF'
}
default_model = "hf:meta-llama/Llama-3.3-70B-Instruct"
models = ["hf:meta-llama/Llama-3.1-405B-Instruct", default_model, "hf:deepseek-ai/DeepSeek-V3", "hf:Qwen/QwQ-32B-Preview", "hf:huihui-ai/Llama-3.3-70B-Instruct-abliterated", "hf:anthracite-org/magnum-v4-12b", "hf:meta-llama/Llama-3.1-70B-Instruct", "hf:meta-llama/Llama-3.1-8B-Instruct", "hf:meta-llama/Llama-3.2-3B-Instruct", "hf:meta-llama/Llama-3.2-11B-Vision-Instruct", "hf:meta-llama/Llama-3.2-90B-Vision-Instruct", "hf:Qwen/Qwen2.5-72B-Instruct", "hf:Qwen/Qwen2.5-Coder-32B-Instruct", "hf:google/gemma-2-9b-it", "hf:google/gemma-2-27b-it", "hf:mistralai/Mistral-7B-Instruct-v0.3", "hf:mistralai/Mixtral-8x7B-Instruct-v0.1", "hf:mistralai/Mixtral-8x22B-Instruct-v0.1", "hf:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", "hf:Qwen/Qwen2.5-7B-Instruct", "hf:upstage/SOLAR-10.7B-Instruct-v1.0", "hf:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"]


@ -45,6 +45,7 @@ class HuggingChat(AsyncAuthedProvider, ProviderModelMixin):
default_model,
'meta-llama/Llama-3.3-70B-Instruct',
'CohereForAI/c4ai-command-r-plus-08-2024',
'deepseek-ai/DeepSeek-R1-Distill-Qwen-32B',
'Qwen/QwQ-32B-Preview',
'nvidia/Llama-3.1-Nemotron-70B-Instruct-HF',
'Qwen/Qwen2.5-Coder-32B-Instruct',
@ -57,6 +58,7 @@ class HuggingChat(AsyncAuthedProvider, ProviderModelMixin):
"qwen-2.5-72b": "Qwen/Qwen2.5-Coder-32B-Instruct",
"llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct",
"command-r-plus": "CohereForAI/c4ai-command-r-plus-08-2024",
"deepseek-r1": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"qwq-32b": "Qwen/QwQ-32B-Preview",
"nemotron-70b": "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"qwen-2.5-coder-32b": "Qwen/Qwen2.5-Coder-32B-Instruct",


@ -10,6 +10,7 @@ class HuggingFaceAPI(OpenaiAPI):
url = "https://api-inference.huggingface.com"
api_base = "https://api-inference.huggingface.co/v1"
working = True
default_model = "meta-llama/Llama-3.2-11B-Vision-Instruct"
default_vision_model = default_model


@ -5,8 +5,8 @@ from .OpenaiChat import OpenaiChat
class OpenaiAccount(OpenaiChat):
needs_auth = True
parent = "OpenaiChat"
image_models = ["dall-e-3", "gpt-4", "gpt-4o"]
default_vision_model = "gpt-4o"
default_image_model = "dall-e-3"
default_model = "gpt-4o"
default_vision_model = default_model
default_image_model = OpenaiChat.default_image_model
image_models = [default_model, default_image_model, "gpt-4"]
fallback_models = [*OpenaiChat.fallback_models, default_image_model]
model_aliases = {default_image_model: default_vision_model}


@ -95,7 +95,9 @@ class OpenaiChat(AsyncAuthedProvider, ProviderModelMixin):
supports_message_history = True
supports_system_message = True
default_model = "auto"
fallback_models = [default_model, "gpt-4", "gpt-4o", "gpt-4o-mini", "gpt-4o-canmore", "o1", "o1-preview", "o1-mini"]
default_image_model = "dall-e-3"
image_models = [default_image_model]
fallback_models = [default_model, "gpt-4", "gpt-4o", "gpt-4o-mini", "gpt-4o-canmore", "o1", "o1-preview", "o1-mini"] + image_models
vision_models = fallback_models
synthesize_content_type = "audio/mpeg"


@ -11,7 +11,7 @@ class PerplexityApi(OpenaiAPI):
default_model = "llama-3-sonar-large-32k-online"
models = [
"llama-3-sonar-small-32k-chat",
"llama-3-sonar-small-32k-online",
default_model,
"llama-3-sonar-large-32k-chat",
"llama-3-sonar-large-32k-online",
"llama-3-8b-instruct",


@ -13,9 +13,7 @@ class Replicate(AsyncGeneratorProvider, ProviderModelMixin):
working = True
needs_auth = True
default_model = "meta/meta-llama-3-70b-instruct"
model_aliases = {
"meta-llama/Meta-Llama-3-70B-Instruct": default_model
}
models = [default_model]
@classmethod
async def create_async_generator(


@ -1,3 +1,4 @@
from .Anthropic import Anthropic
from .BingCreateImages import BingCreateImages
from .Cerebras import Cerebras
from .CopilotAccount import CopilotAccount
@ -20,8 +21,6 @@ from .OpenaiAccount import OpenaiAccount
from .OpenaiAPI import OpenaiAPI
from .OpenaiChat import OpenaiChat
from .PerplexityApi import PerplexityApi
from .Poe import Poe
from .Raycast import Raycast
from .Reka import Reka
from .Replicate import Replicate
from .ThebApi import ThebApi


@ -5,13 +5,13 @@ import requests
from aiohttp import ClientSession
from typing import List
from ..typing import AsyncResult, Messages
from ..image import ImageResponse
from ..providers.response import FinishReason, Usage
from ..requests.raise_for_status import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ...typing import AsyncResult, Messages
from ...image import ImageResponse
from ...providers.response import FinishReason, Usage
from ...requests.raise_for_status import raise_for_status
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .. import debug
from ... import debug
def split_message(message: str, max_length: int = 1000) -> List[str]:
"""Splits the message into parts up to (max_length)."""
@ -31,7 +31,7 @@ class Airforce(AsyncGeneratorProvider, ProviderModelMixin):
api_endpoint_completions = "https://api.airforce/chat/completions"
api_endpoint_imagine2 = "https://api.airforce/imagine2"
working = True
working = False
supports_stream = True
supports_system_message = True
supports_message_history = True


@ -12,7 +12,7 @@ class Raycast(AbstractProvider):
url = "https://raycast.com"
supports_stream = True
needs_auth = True
working = True
working = False
models = [
"gpt-3.5-turbo",


@ -8,9 +8,9 @@ from urllib.parse import urlencode
from aiohttp import ClientSession
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin, Sources
from ..requests.raise_for_status import raise_for_status
from ...typing import AsyncResult, Messages
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin, Sources
from ...requests.raise_for_status import raise_for_status
class RubiksAI(AsyncGeneratorProvider, ProviderModelMixin):
label = "Rubiks AI"


@ -1,18 +1,21 @@
from .AI365VIP import AI365VIP
from .AiChatOnline import AiChatOnline
from .AiChats import AiChats
from .Airforce import Airforce
from .AmigoChat import AmigoChat
from .Aura import Aura
from .Chatgpt4o import Chatgpt4o
from .Chatgpt4Online import Chatgpt4Online
from .ChatgptFree import ChatgptFree
from .DeepInfraChat import DeepInfraChat
from .FlowGpt import FlowGpt
from .FreeNetfly import FreeNetfly
from .Koala import Koala
from .MagickPen import MagickPen
from .MyShell import MyShell
from .Poe import Poe
from .Raycast import Raycast
from .ReplicateHome import ReplicateHome
from .RobocodersAPI import RobocodersAPI
from .RubiksAI import RubiksAI
from .Theb import Theb
from .Upstage import Upstage


@ -512,7 +512,7 @@ class AsyncCompletions:
self.client: AsyncClient = client
self.provider: ProviderType = provider
def create(
async def create(
self,
messages: Messages,
model: str,
@ -529,7 +529,7 @@ class AsyncCompletions:
ignore_working: Optional[bool] = False,
ignore_stream: Optional[bool] = False,
**kwargs
) -> Awaitable[ChatCompletion]:
) -> Awaitable[ChatCompletion, AsyncIterator[ChatCompletionChunk]]:
model, provider = get_model_and_provider(
model,
self.provider if provider is None else provider,
@ -542,6 +542,7 @@ class AsyncCompletions:
kwargs["images"] = [(image, image_name)]
if ignore_stream:
kwargs["ignore_stream"] = True
response = async_iter_run_tools(
provider.get_async_create_function(),
model,
@ -555,9 +556,14 @@ class AsyncCompletions:
),
**kwargs
)
response = async_iter_response(response, stream, response_format, max_tokens, stop)
response = async_iter_append_model_and_provider(response, model, provider)
return response if stream else anext(response)
if stream:
return response
else:
return await anext(response)
def stream(
self,

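The change above turns `create` into an async factory: awaiting it yields the stream itself when `stream=True` and the final completion otherwise. A toy sketch of that control flow, with hypothetical names:

```python
from typing import AsyncIterator, Union

async def _chunks() -> AsyncIterator[str]:
    for part in ("Hel", "lo"):
        yield part

async def create(stream: bool = False) -> Union[str, AsyncIterator[str]]:
    response = _chunks()
    if stream:
        return response  # caller: `async for c in await create(stream=True)`
    # Non-streaming: drain the iterator and return the final value.
    return "".join([c async for c in response])
```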

@ -4,13 +4,11 @@ from dataclasses import dataclass
from .Provider import IterListProvider, ProviderType
from .Provider import (
### no auth required ###
AIChatFree,
Airforce,
AIUncensored,
AutonomousAI,
Blackbox,
BlackboxCreateAgent,
BingCreateImages,
CablyAI,
ChatGLM,
ChatGpt,
@ -18,30 +16,34 @@ from .Provider import (
ChatGptt,
Cloudflare,
Copilot,
CopilotAccount,
DarkAI,
DDG,
GigaChat,
Gemini,
GeminiPro,
HuggingChat,
HuggingFace,
DeepInfraChat,
HuggingSpace,
GPROChat,
Jmuz,
Liaobots,
Mhystical,
MetaAI,
MicrosoftDesigner,
OpenaiChat,
OpenaiAccount,
OIVSCode,
PerplexityLabs,
Pi,
PollinationsAI,
Reka,
RubiksAI,
TeachAnything,
Yqcloud,
### needs auth ###
BingCreateImages,
CopilotAccount,
Gemini,
GeminiPro,
GigaChat,
HuggingChat,
HuggingFace,
MetaAI,
MicrosoftDesigner,
OpenaiAccount,
OpenaiChat,
Reka,
)
@dataclass(unsafe_hash=True)
@ -74,16 +76,16 @@ default = Model(
DDG,
Blackbox,
Copilot,
DeepInfraChat,
ChatGptEs,
ChatGptt,
PollinationsAI,
Jmuz,
CablyAI,
OpenaiChat,
OIVSCode,
DarkAI,
Yqcloud,
AIUncensored,
Airforce,
OpenaiChat,
Cloudflare,
])
)
@ -104,20 +106,20 @@ gpt_35_turbo = Model(
gpt_4 = Model(
name = 'gpt-4',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, Blackbox, Jmuz, ChatGptEs, ChatGptt, PollinationsAI, Copilot, Yqcloud, OpenaiChat, Liaobots, Mhystical])
best_provider = IterListProvider([DDG, Blackbox, Jmuz, ChatGptEs, ChatGptt, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical])
)
# gpt-4o
gpt_4o = Model(
name = 'gpt-4o',
base_provider = 'OpenAI',
best_provider = IterListProvider([Blackbox, ChatGptt, Jmuz, ChatGptEs, PollinationsAI, DarkAI, ChatGpt, Liaobots, OpenaiChat])
best_provider = IterListProvider([Blackbox, ChatGptt, Jmuz, ChatGptEs, PollinationsAI, DarkAI, Copilot, ChatGpt, Liaobots, OpenaiChat])
)
gpt_4o_mini = Model(
name = 'gpt-4o-mini',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, ChatGptEs, ChatGptt, Jmuz, ChatGpt, RubiksAI, Liaobots, OpenaiChat])
best_provider = IterListProvider([DDG, ChatGptEs, ChatGptt, Jmuz, PollinationsAI, OIVSCode, ChatGpt, Liaobots, OpenaiChat])
)
# o1
@ -157,26 +159,32 @@ meta = Model(
llama_2_7b = Model(
name = "llama-2-7b",
base_provider = "Meta Llama",
best_provider = IterListProvider([Cloudflare, Airforce])
best_provider = Cloudflare
)
# llama 3
llama_3_8b = Model(
name = "llama-3-8b",
base_provider = "Meta Llama",
best_provider = Cloudflare
best_provider = IterListProvider([Jmuz, Cloudflare])
)
llama_3_70b = Model(
name = "llama-3-70b",
base_provider = "Meta Llama",
best_provider = Jmuz
)
# llama 3.1
llama_3_1_8b = Model(
name = "llama-3.1-8b",
base_provider = "Meta Llama",
best_provider = IterListProvider([Blackbox, Jmuz, Cloudflare, Airforce, PerplexityLabs])
best_provider = IterListProvider([Blackbox, DeepInfraChat, Jmuz, PollinationsAI, Cloudflare, PerplexityLabs])
)
llama_3_1_70b = Model(
name = "llama-3.1-70b",
base_provider = "Meta Llama",
best_provider = IterListProvider([DDG, Jmuz, Blackbox, BlackboxCreateAgent, TeachAnything, DarkAI, Airforce, RubiksAI, PerplexityLabs])
best_provider = IterListProvider([DDG, Jmuz, Blackbox, TeachAnything, DarkAI, PerplexityLabs])
)
llama_3_1_405b = Model(
@ -192,29 +200,29 @@ llama_3_2_1b = Model(
best_provider = Cloudflare
)
llama_3_2_3b = Model(
name = "llama-3.2-3b",
base_provider = "Meta Llama",
best_provider = PollinationsAI
)
llama_3_2_11b = Model(
name = "llama-3.2-11b",
base_provider = "Meta Llama",
best_provider = IterListProvider([Jmuz, HuggingChat, HuggingFace])
)
llama_3_2_70b = Model(
name = "llama-3.2-70b",
base_provider = "Meta Llama",
best_provider = AutonomousAI
)
llama_3_2_90b = Model(
name = "llama-3.2-90b",
base_provider = "Meta Llama",
best_provider = IterListProvider([AutonomousAI, Jmuz])
best_provider = IterListProvider([Jmuz, AutonomousAI])
)
# llama 3.3
llama_3_3_70b = Model(
name = "llama-3.3-70b",
base_provider = "Meta Llama",
best_provider = IterListProvider([Blackbox, PollinationsAI, AutonomousAI, Jmuz, HuggingChat, HuggingFace, PerplexityLabs])
best_provider = IterListProvider([Blackbox, DeepInfraChat, PollinationsAI, AutonomousAI, Jmuz, HuggingChat, HuggingFace, PerplexityLabs])
)
### Mistral ###
@ -246,13 +254,7 @@ mistral_large = Model(
hermes_2_dpo = Model(
name = "hermes-2-dpo",
base_provider = "NousResearch",
best_provider = IterListProvider([Blackbox, Airforce])
)
hermes_2_pro = Model(
name = "hermes-2-pro",
base_provider = "NousResearch",
best_provider = Airforce
best_provider = Blackbox
)
hermes_3 = Model(
@ -263,16 +265,24 @@ hermes_3 = Model(
### Microsoft ###
phi_2 = Model(
name = "phi-2",
base_provider = "Microsoft",
best_provider = Airforce
)
# phi
phi_3_5_mini = Model(
name = "phi-3.5-mini",
base_provider = "Microsoft",
best_provider = IterListProvider([HuggingChat, HuggingFace])
best_provider = HuggingChat
)
# wizardlm
wizardlm_2_7b = Model(
name = 'wizardlm-2-7b',
base_provider = 'Microsoft',
best_provider = DeepInfraChat
)
wizardlm_2_8x22b = Model(
name = 'wizardlm-2-8x22b',
base_provider = 'Microsoft',
best_provider = IterListProvider([DeepInfraChat, Jmuz])
)
### Google DeepMind ###
@ -280,7 +290,14 @@ phi_3_5_mini = Model(
gemini = Model(
name = 'gemini',
base_provider = 'Google',
best_provider = IterListProvider([Jmuz, Gemini])
best_provider = Gemini
)
# gemini-exp
gemini_exp = Model(
name = 'gemini-exp',
base_provider = 'Google',
best_provider = Jmuz
)
# gemini-1.5
@ -332,6 +349,12 @@ claude_3_opus = Model(
# claude 3.5
claude_3_5_haiku = Model(
name = 'claude-3.5-haiku',
base_provider = 'Anthropic',
best_provider = PollinationsAI
)
claude_3_5_sonnet = Model(
name = 'claude-3.5-sonnet',
base_provider = 'Anthropic',
@ -389,26 +412,33 @@ qwen_1_5_7b = Model(
qwen_2_72b = Model(
name = 'qwen-2-72b',
base_provider = 'Qwen',
best_provider = PollinationsAI
best_provider = IterListProvider([DeepInfraChat, PollinationsAI, HuggingSpace])
)
# qwen 2.5
qwen_2_5_72b = Model(
name = 'qwen-2.5-72b',
base_provider = 'Qwen',
best_provider = IterListProvider([Jmuz, HuggingSpace])
best_provider = Jmuz
)
qwen_2_5_coder_32b = Model(
name = 'qwen-2.5-coder-32b',
base_provider = 'Qwen',
best_provider = IterListProvider([Jmuz, PollinationsAI, AutonomousAI, HuggingChat])
best_provider = IterListProvider([DeepInfraChat, PollinationsAI, AutonomousAI, Jmuz, HuggingChat])
)
# qwq/qvq
qwq_32b = Model(
name = 'qwq-32b',
base_provider = 'Qwen',
best_provider = IterListProvider([Blackbox, Jmuz, HuggingSpace, HuggingChat])
best_provider = IterListProvider([Blackbox, DeepInfraChat, Jmuz, HuggingChat])
)
qvq_72b = Model(
name = 'qvq-72b',
base_provider = 'Qwen',
best_provider = HuggingSpace
)
### Inflection ###
@ -425,27 +455,12 @@ deepseek_chat = Model(
best_provider = IterListProvider([Blackbox, Jmuz, PollinationsAI])
)
deepseek_coder = Model(
name = 'deepseek-coder',
deepseek_r1 = Model(
name = 'deepseek-r1',
base_provider = 'DeepSeek',
best_provider = Airforce
best_provider = IterListProvider([Blackbox, Jmuz, HuggingChat, HuggingFace])
)
### WizardLM ###
wizardlm_2_8x22b = Model(
name = 'wizardlm-2-8x22b',
base_provider = 'WizardLM',
best_provider = Jmuz
)
### OpenChat ###
openchat_3_5 = Model(
name = 'openchat-3.5',
base_provider = 'OpenChat',
best_provider = Airforce
)
### x.ai ###
grok_2 = Model(
name = 'grok-2',
@ -470,43 +485,14 @@ sonar_chat = Model(
nemotron_70b = Model(
name = 'nemotron-70b',
base_provider = 'Nvidia',
best_provider = IterListProvider([HuggingChat, HuggingFace])
)
### Teknium ###
openhermes_2_5 = Model(
name = 'openhermes-2.5',
base_provider = 'Teknium',
best_provider = Airforce
best_provider = IterListProvider([DeepInfraChat, HuggingChat, HuggingFace])
)
### Liquid ###
lfm_40b = Model(
name = 'lfm-40b',
base_provider = 'Liquid',
best_provider = IterListProvider([Airforce, PerplexityLabs])
)
### DiscoResearch ###
german_7b = Model(
name = 'german-7b',
base_provider = 'DiscoResearch',
best_provider = Airforce
)
### HuggingFaceH4 ###
zephyr_7b = Model(
name = 'zephyr-7b',
base_provider = 'HuggingFaceH4',
best_provider = Airforce
)
### Inferless ###
neural_7b = Model(
name = 'neural-7b',
base_provider = 'Inferless',
best_provider = Airforce
best_provider = PerplexityLabs
)
### Databricks ###
@ -541,7 +527,7 @@ glm_4 = Model(
evil = Model(
name = 'evil',
base_provider = 'Evil Mode - Experimental',
best_provider = IterListProvider([PollinationsAI, Airforce])
best_provider = PollinationsAI
)
### Other ###
@ -550,11 +536,6 @@ midijourney = Model(
base_provider = 'Other',
best_provider = PollinationsAI
)
turbo = Model(
name = 'turbo',
base_provider = 'Other',
best_provider = PollinationsAI
)
unity = Model(
name = 'unity',
@ -573,11 +554,10 @@ rtist = Model(
#############
### Stability AI ###
sdxl = ImageModel(
name = 'sdxl',
sd_turbo = ImageModel(
name = 'sd-turbo',
base_provider = 'Stability AI',
best_provider = Airforce
best_provider = PollinationsAI
)
sd_3_5 = ImageModel(
@ -586,18 +566,11 @@ sd_3_5 = ImageModel(
best_provider = HuggingSpace
)
### Flux AI ###
flux = ImageModel(
name = 'flux',
base_provider = 'Flux AI',
best_provider = IterListProvider([Blackbox, BlackboxCreateAgent, PollinationsAI, Airforce])
)
flux_pro = ImageModel(
name = 'flux-pro',
base_provider = 'Flux AI',
best_provider = IterListProvider([PollinationsAI, Airforce])
best_provider = IterListProvider([Blackbox, PollinationsAI])
)
flux_dev = ImageModel(
@ -609,70 +582,21 @@ flux_dev = ImageModel(
flux_schnell = ImageModel(
name = 'flux-schnell',
base_provider = 'Flux AI',
best_provider = IterListProvider([HuggingSpace, HuggingFace])
)
flux_realism = ImageModel(
name = 'flux-realism',
base_provider = 'Flux AI',
best_provider = IterListProvider([PollinationsAI, Airforce])
)
flux_cablyai = ImageModel(
name = 'flux-cablyai',
base_provider = 'Flux AI',
best_provider = PollinationsAI
)
flux_anime = ImageModel(
name = 'flux-anime',
base_provider = 'Flux AI',
best_provider = IterListProvider([PollinationsAI, Airforce])
)
flux_3d = ImageModel(
name = 'flux-3d',
base_provider = 'Flux AI',
best_provider = IterListProvider([PollinationsAI, Airforce])
)
flux_disney = ImageModel(
name = 'flux-disney',
base_provider = 'Flux AI',
best_provider = Airforce
)
flux_pixel = ImageModel(
name = 'flux-pixel',
base_provider = 'Flux AI',
best_provider = Airforce
)
flux_4o = ImageModel(
name = 'flux-4o',
base_provider = 'Flux AI',
best_provider = Airforce
best_provider = IterListProvider([HuggingSpace, HuggingChat, HuggingFace])
)
### OpenAI ###
dall_e_3 = ImageModel(
name = 'dall-e-3',
base_provider = 'OpenAI',
best_provider = IterListProvider([Airforce, PollinationsAI, CopilotAccount, OpenaiAccount, MicrosoftDesigner, BingCreateImages])
best_provider = IterListProvider([PollinationsAI, CopilotAccount, OpenaiAccount, MicrosoftDesigner, BingCreateImages])
)
### Midjourney ###
midjourney = ImageModel(
name = 'midjourney',
base_provider = 'Midjourney',
best_provider = IterListProvider([PollinationsAI, Airforce])
)
### Other ###
any_dark = ImageModel(
name = 'any-dark',
base_provider = 'Other',
best_provider = IterListProvider([PollinationsAI, Airforce])
best_provider = PollinationsAI
)
class ModelUtils:
@ -714,6 +638,7 @@ class ModelUtils:
# llama-3
llama_3_8b.name: llama_3_8b,
llama_3_70b.name: llama_3_70b,
# llama-3.1
llama_3_1_8b.name: llama_3_1_8b,
@ -722,8 +647,8 @@ class ModelUtils:
# llama-3.2
llama_3_2_1b.name: llama_3_2_1b,
llama_3_2_3b.name: llama_3_2_3b,
llama_3_2_11b.name: llama_3_2_11b,
llama_3_2_70b.name: llama_3_2_70b,
llama_3_2_90b.name: llama_3_2_90b,
# llama-3.3
@ -737,17 +662,23 @@ class ModelUtils:
### NousResearch ###
hermes_2_dpo.name: hermes_2_dpo,
hermes_2_pro.name: hermes_2_pro,
hermes_3.name: hermes_3,
### Microsoft ###
phi_2.name: phi_2,
# phi
phi_3_5_mini.name: phi_3_5_mini,
# wizardlm
wizardlm_2_7b.name: wizardlm_2_7b,
wizardlm_2_8x22b.name: wizardlm_2_8x22b,
### Google ###
# gemini
gemini.name: gemini,
# gemini-exp
gemini_exp.name: gemini_exp,
# gemini-1.5
gemini_1_5_pro.name: gemini_1_5_pro,
gemini_1_5_flash.name: gemini_1_5_flash,
@ -764,6 +695,7 @@ class ModelUtils:
claude_3_haiku.name: claude_3_haiku,
# claude 3.5
claude_3_5_haiku.name: claude_3_5_haiku,
claude_3_5_sonnet.name: claude_3_5_sonnet,
### Reka AI ###
@ -791,17 +723,14 @@ class ModelUtils:
# qwen 2.5
qwen_2_5_72b.name: qwen_2_5_72b,
qwen_2_5_coder_32b.name: qwen_2_5_coder_32b,
# qwq/qvq
qwq_32b.name: qwq_32b,
qvq_72b.name: qvq_72b,
### Inflection ###
pi.name: pi,
### WizardLM ###
wizardlm_2_8x22b.name: wizardlm_2_8x22b,
### OpenChat ###
openchat_3_5.name: openchat_3_5,
### x.ai ###
grok_2.name: grok_2,
@ -811,26 +740,14 @@ class ModelUtils:
### DeepSeek ###
deepseek_chat.name: deepseek_chat,
deepseek_coder.name: deepseek_coder,
### TheBloke ###
german_7b.name: german_7b,
deepseek_r1.name: deepseek_r1,
### Nvidia ###
nemotron_70b.name: nemotron_70b,
### Teknium ###
openhermes_2_5.name: openhermes_2_5,
### Liquid ###
lfm_40b.name: lfm_40b,
### HuggingFaceH4 ###
zephyr_7b.name: zephyr_7b,
### Inferless ###
neural_7b.name: neural_7b,
### Databricks ###
dbrx_instruct.name: dbrx_instruct,
@ -848,7 +765,6 @@ class ModelUtils:
### Other ###
midijourney.name: midijourney,
turbo.name: turbo,
unity.name: unity,
rtist.name: rtist,
@ -857,30 +773,19 @@ class ModelUtils:
#############
### Stability AI ###
sdxl.name: sdxl,
sd_turbo.name: sd_turbo,
sd_3_5.name: sd_3_5,
### Flux AI ###
flux.name: flux,
flux_pro.name: flux_pro,
flux_dev.name: flux_dev,
flux_schnell.name: flux_schnell,
flux_realism.name: flux_realism,
flux_cablyai.name: flux_cablyai,
flux_anime.name: flux_anime,
flux_3d.name: flux_3d,
flux_disney.name: flux_disney,
flux_pixel.name: flux_pixel,
flux_4o.name: flux_4o,
### OpenAI ###
dall_e_3.name: dall_e_3,
### Midjourney ###
midjourney.name: midjourney,
### Other ###
any_dark.name: any_dark,
}
# Create a list of all models and his providers
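For reference, model entries in `g4f/models.py` follow the pattern touched throughout this diff: a hashable dataclass plus a prioritized provider fallback list. A minimal sketch with illustrative values:

```python
from g4f.models import Model
from g4f.Provider import IterListProvider, Blackbox, PollinationsAI

example = Model(
    name="example-model",
    base_provider="ExampleLab",
    best_provider=IterListProvider([Blackbox, PollinationsAI]),  # tried in order
)
```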