Move documentation

This commit is contained in:
hlohaus 2025-04-25 07:13:25 +02:00
parent 5ff7c88428
commit 80c835552d
28 changed files with 19 additions and 4474 deletions


@@ -72,12 +72,12 @@ Is your site on this repository and you want to take it down? Send an email to t
   - [📝 Text Generation](#-text-generation)
   - [🎨 Image Generation](#-image-generation)
   - [🌐 Web Interface](#-web-interface)
-  - [🖥️ Local Inference](docs/local.md)
+  - [🖥️ Local Inference](https://gpt4free.github.io/docs/local.html)
   - [🤖 Interference API](#-interference-api)
-  - [🛠️ Configuration](docs/configuration.md)
+  - [🛠️ Configuration](https://gpt4free.github.io/docs/configuration.html)
   - [📱 Run on Smartphone](#-run-on-smartphone)
   - [📘 Full Documentation for Python API](#-full-documentation-for-python-api)
-  - [🚀 Providers and Models](docs/providers-and-models.md)
+  - [🚀 Providers and Models](https://gpt4free.github.io/docs/providers-and-models.html)
   - [🔗 Powered by gpt4free](#-powered-by-gpt4free)
   - [🤝 Contribute](#-contribute)
   - [How do I create a new Provider?](#guide-how-do-i-create-a-new-provider)

@@ -155,7 +155,7 @@ By following these steps, you should be able to successfully install and run the
 pip install -U g4f[all]
 ```
-> How do I install only parts or disable parts? **Use partial requirements:** [/docs/requirements](docs/requirements.md)
+> How do I install only parts or disable parts? **Use partial requirements:** [/docs/requirements](https://gpt4free.github.io/docs/requirements.html)

 #### Install from Source:
 ```bash
@@ -164,7 +164,7 @@ cd gpt4free
 pip install -r requirements.txt
 ```
-> How do I load the project using git and install the project requirements? **Read this tutorial and follow it step by step:** [/docs/git](docs/git.md)
+> How do I load the project using git and install the project requirements? **Read this tutorial and follow it step by step:** [/docs/git](https://gpt4free.github.io/docs/git.html)

 ---

@@ -199,7 +199,7 @@ response = client.images.generate(
 print(f"Generated image URL: {response.data[0].url}")
 ```
-[![Image with cat](/docs/images/cat.jpeg)](docs/client.md)
+[![Image with cat](https://gpt4free.github.io/docs/images/cat.jpeg)](https://gpt4free.github.io/docs/client.html)

 ### 🌐 Web Interface

 **Run the GUI using Python:**
@@ -217,7 +217,7 @@ python -m g4f.cli gui --port 8080 --debug
 python -m g4f --port 8080 --debug
 ```
-> **Learn More About the GUI:** For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the [GUI Documentation](docs/gui.md). This guide includes step-by-step details on provider selection, managing conversations, using advanced features like speech recognition, and more.
+> **Learn More About the GUI:** For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the [GUI Documentation](https://gpt4free.github.io/docs/gui.html). This guide includes step-by-step details on provider selection, managing conversations, using advanced features like speech recognition, and more.

 ---

@@ -225,28 +225,28 @@ python -m g4f --port 8080 --debug
 The **Interference API** enables seamless integration with OpenAI's services through G4F, allowing you to deploy efficient AI solutions.

-- **Documentation**: [Interference API Docs](docs/interference-api.md)
+- **Documentation**: [Interference API Docs](https://gpt4free.github.io/docs/interference-api.html)
 - **Endpoint**: `http://localhost:1337/v1`
 - **Swagger UI**: Explore the OpenAPI documentation via Swagger UI at `http://localhost:1337/docs`
-- **Provider Selection**: [How to Specify a Provider?](docs/selecting_a_provider.md)
+- **Provider Selection**: [How to Specify a Provider?](https://gpt4free.github.io/docs/selecting_a_provider.html)

 This API is designed for straightforward implementation and enhanced compatibility with other OpenAI integrations.

 ---

 ### 📱 Run on Smartphone
-Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device: [Run on Smartphone Guide](docs/guides/phone.md)
+Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device: [Run on Smartphone Guide](https://gpt4free.github.io/docs/guides/phone.html)

 ---

 #### **📘 Full Documentation for Python API**
-- **Client API from G4F:** [/docs/client](docs/client.md)
-- **AsyncClient API from G4F:** [/docs/async_client](docs/async_client.md)
-- **Requests API from G4F:** [/docs/requests](docs/requests.md)
-- **File API from G4F:** [/docs/file](docs/file.md)
-- **PydanticAI and LangChain Integration for G4F:** [/docs/pydantic_ai](docs/pydantic_ai.md)
-- **Legacy API with python modules:** [/docs/legacy](docs/legacy.md)
-- **G4F - Media Documentation** [/docs/media](/docs/media.md) *(New)*
+- **Client API from G4F:** [/docs/client](https://gpt4free.github.io/docs/client.html)
+- **AsyncClient API from G4F:** [/docs/async_client](https://gpt4free.github.io/docs/async_client.html)
+- **Requests API from G4F:** [/docs/requests](https://gpt4free.github.io/docs/requests.html)
+- **File API from G4F:** [/docs/file](https://gpt4free.github.io/docs/file.html)
+- **PydanticAI and LangChain Integration for G4F:** [/docs/pydantic_ai](https://gpt4free.github.io/docs/pydantic_ai.html)
+- **Legacy API with python modules:** [/docs/legacy](https://gpt4free.github.io/docs/legacy.html)
+- **G4F - Media Documentation** [/docs/media](https://gpt4free.github.io/docs/media.html) *(New)*

 ---

@@ -714,10 +714,10 @@ Run the Web UI on your smartphone for easy access on the go. Check out the dedic
 We welcome contributions from the community. Whether you're adding new providers or features, or simply fixing typos and making small improvements, your input is valued. Creating a pull request is all it takes; our co-pilot will handle the code review process. Once all changes have been addressed, we'll merge the pull request into the main branch and release the updates at a later time.

 ###### Guide: How do I create a new Provider?
-- **Read:** [Create Provider Guide](docs/guides/create_provider.md)
+- **Read:** [Create Provider Guide](https://gpt4free.github.io/docs/guides/create_provider.html)

 ###### Guide: How can AI help me with writing code?
-- **Read:** [AI Assistance Guide](docs/guides/help_me.md)
+- **Read:** [AI Assistance Guide](https://gpt4free.github.io/docs/guides/help_me.html)


@@ -1,698 +0,0 @@
# G4F - AsyncClient API Guide
The G4F AsyncClient API is a powerful asynchronous interface for interacting with various AI models. This guide provides comprehensive information on how to use the API effectively, including setup, usage examples, best practices, and important considerations for optimal performance.
## Compatibility Note
The G4F AsyncClient API is designed to be compatible with the OpenAI API, making it easy for developers familiar with OpenAI's interface to transition to G4F.
## Table of Contents
- [Introduction](#introduction)
- [Key Features](#key-features)
- [Getting Started](#getting-started)
- [Initializing the Client](#initializing-the-client)
- [Creating Chat Completions](#creating-chat-completions)
- [Configuration](#configuration)
- [Explanation of Parameters](#explanation-of-parameters)
- [Usage Examples](#usage-examples)
- [Text Completions](#text-completions)
- [Streaming Completions](#streaming-completions)
- [Using a Vision Model](#using-a-vision-model)
- **[Transcribing Audio with Chat Completions](#transcribing-audio-with-chat-completions)** *(New Section)*
- [Image Generation](#image-generation)
- **[Video Generation](#video-generation)** *(New Section)*
- [Advanced Usage](#advanced-usage)
- [Conversation Memory](#conversation-memory)
- [Search Tool Support](#search-tool-support)
- [Concurrent Tasks](#concurrent-tasks-with-asynciogather)
- [Available Models and Providers](#available-models-and-providers)
- [Error Handling and Best Practices](#error-handling-and-best-practices)
- [Rate Limiting and API Usage](#rate-limiting-and-api-usage)
- [Conclusion](#conclusion)
## Introduction
The G4F AsyncClient API is an asynchronous version of the standard G4F Client API. It offers the same functionality as the synchronous API but with improved performance due to its asynchronous nature. This guide will walk you through the key features and usage of the G4F AsyncClient API.
## Key Features
- **Custom Providers**: Use custom providers for enhanced flexibility.
- **ChatCompletion Interface**: Interact with chat models through the ChatCompletion class.
- **Streaming Responses**: Get responses iteratively as they are received.
- **Non-Streaming Responses**: Generate complete responses in a single call.
- **Image Generation and Vision Models**: Support for image-related tasks.
## Getting Started
### Initializing the AsyncClient
**To use the G4F `AsyncClient`, create a new instance:**
```python
from g4f.client import AsyncClient
from g4f.Provider import OpenaiChat, Gemini
client = AsyncClient(
    provider=OpenaiChat,
    image_provider=Gemini,
    # Add other parameters as needed
)
```
## Creating Chat Completions
**Here's an example of creating chat completions:**
```python
response = await client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Say this is a test"
        }
    ]
    # Add other parameters as needed
)
```
**This example:**
- Asks a specific question: `Say this is a test`
- Omits optional parameters such as `temperature` and `max_tokens`, which you can pass for more control over the output
- Returns a complete (non-streaming) response, since streaming is not requested
You can adjust these parameters based on your specific needs.
### Configuration
**Configure the `AsyncClient` with additional settings:**
```python
client = AsyncClient(
    api_key="your_api_key_here",
    proxies="http://user:pass@host",
    # Add other parameters as needed
)
```
## Explanation of Parameters
**When using the G4F to create chat completions or perform related tasks, you can configure the following parameters:**
- **`model`**:
Specifies the AI model to be used for the task. Examples include `"gpt-4o"` for GPT-4 Omni or `"gpt-4o-mini"` for a lightweight version. The choice of model determines the quality and speed of the response. Always ensure the selected model is supported by the provider.
- **`messages`**:
**A list of dictionaries representing the conversation context. Each dictionary contains two keys:**
- `role`: Defines the role of the message sender, such as `"user"` (input from the user) or `"system"` (instructions to the AI).
- `content`: The actual text of the message.
**Example:**
```python
[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What day is it today?"}
]
```
- **`web_search`**:
*(Optional)* A Boolean flag indicating whether to enable internet-based search capabilities. If set to **True**, the system performs a web search using the provider's native method to retrieve up-to-date information. This is especially useful for obtaining real-time or specific details not included in the model's training data.
- **Providers Supporting** `web_search`:
- ChatGPT
- HuggingChat
- Blackbox
- RubiksAI
- **`provider`**:
*(Optional)* Specifies the backend provider for the API. Examples include `g4f.Provider.Blackbox` or `g4f.Provider.OpenaiChat`. Each provider may support a different subset of models and features, so select one that matches your requirements.
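Putting these parameters together in a single call (the specific provider and question below are illustrative choices, not requirements):

```python
import asyncio
from g4f.client import AsyncClient
import g4f.Provider

async def main():
    client = AsyncClient()
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        provider=g4f.Provider.Blackbox,  # Optional: pin one backend provider
        web_search=False,                # Optional: provider-native web search
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What day is it today?"}
        ],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```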
## Usage Examples
### Text Completions
**Generate text completions using the ChatCompletions endpoint:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
    client = AsyncClient()

    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": "Say this is a test"
            }
        ],
        web_search=False
    )

    print(response.choices[0].message.content)

asyncio.run(main())
```
### Streaming Completions
**Process responses incrementally as they are generated:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
    client = AsyncClient()

    stream = client.chat.completions.stream(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": "Say this is a test"
            }
        ],
        web_search=False
    )

    async for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")

asyncio.run(main())
```
---
### Using a Vision Model
**Analyze an image and generate a description:**
```python
import g4f
import requests
import asyncio
from g4f.client import AsyncClient
from g4f.Provider.CopilotAccount import CopilotAccount
async def main():
    client = AsyncClient(
        provider=CopilotAccount
    )

    image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).raw
    # Or: image = open("docs/images/cat.jpeg", "rb")

    response = await client.chat.completions.create(
        model=g4f.models.default,
        messages=[
            {
                "role": "user",
                "content": "What's in this image?"
            }
        ],
        image=image
    )

    print(response.choices[0].message.content)

asyncio.run(main())
```
---
### Transcribing Audio with Chat Completions
Some providers in G4F support audio inputs in chat completions, allowing you to transcribe audio files by instructing the model accordingly. This example demonstrates how to use the `AsyncClient` to transcribe an audio file asynchronously:
```python
import asyncio
from g4f.client import AsyncClient
import g4f.Provider
import g4f.models
async def main():
    client = AsyncClient(provider=g4f.Provider.PollinationsAI)  # or g4f.Provider.Microsoft_Phi_4

    with open("audio.wav", "rb") as audio_file:
        response = await client.chat.completions.create(
            model=g4f.models.default,
            messages=[{"role": "user", "content": "Transcribe this audio"}],
            media=[[audio_file, "audio.wav"]],
            modalities=["text"],
        )

    print(response.choices[0].message.content)

if __name__ == "__main__":
    asyncio.run(main())
```
#### Explanation
- **Client Initialization**: An `AsyncClient` instance is created with a provider that supports audio inputs, such as `PollinationsAI` or `Microsoft_Phi_4`.
- **File Handling**: The audio file (`audio.wav`) is opened in binary read mode (`"rb"`) using a context manager (`with` statement) to ensure proper file closure after use.
- **API Call**: The `chat.completions.create` method is called with:
- `model=g4f.models.default`: Uses the default model for the selected provider.
- `messages`: A list containing a user message instructing the model to transcribe the audio.
- `media`: A list of lists, where each inner list contains the file object and its name (`[[audio_file, "audio.wav"]]`).
- `modalities=["text"]`: Specifies that the output should be text (the transcription).
- **Response**: The transcription is extracted from `response.choices[0].message.content` and printed.
#### Notes
- **Provider Support**: Ensure the chosen provider (e.g., `PollinationsAI` or `Microsoft_Phi_4`) supports audio inputs in chat completions. Not all providers may offer this functionality.
- **File Path**: Replace `"audio.wav"` with the path to your own audio file. The file format (e.g., WAV) should be compatible with the provider.
- **Model Selection**: If `g4f.models.default` does not support audio transcription, you may need to specify a model that does (consult the provider's documentation for supported models).
This example complements the guide by showcasing how to handle audio inputs asynchronously, expanding on the multimodal capabilities of the G4F AsyncClient API.
---
### Image Generation
**The `response_format` parameter is optional and can have the following values:**
- **If not specified (default):** The image will be saved locally, and a local path will be returned (e.g., "/images/1733331238_cf9d6aa9-f606-4fea-ba4b-f06576cba309.jpg").
- **"url":** Returns a URL to the generated image.
- **"b64_json":** Returns the image as a base64-encoded JSON string.
**Generate images using a specified prompt:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
    client = AsyncClient()

    response = await client.images.generate(
        prompt="a white siamese cat",
        model="flux",
        response_format="url"
        # Add any other necessary parameters
    )

    image_url = response.data[0].url
    print(f"Generated image URL: {image_url}")

asyncio.run(main())
```
#### Base64 Response Format
```python
import asyncio
from g4f.client import AsyncClient
async def main():
    client = AsyncClient()

    response = await client.images.generate(
        prompt="a white siamese cat",
        model="flux",
        response_format="b64_json"
        # Add any other necessary parameters
    )

    base64_text = response.data[0].b64_json
    print(base64_text)

asyncio.run(main())
```
---
### Creating Image Variations
**Create variations of an existing image:**
```python
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import OpenaiChat
async def main():
    client = AsyncClient(image_provider=OpenaiChat)

    response = await client.images.create_variation(
        prompt="a white siamese cat",
        image=open("docs/images/cat.jpg", "rb"),
        model="dall-e-3",
        # Add any other necessary parameters
    )

    image_url = response.data[0].url
    print(f"Generated image URL: {image_url}")

asyncio.run(main())
```
---
### Video Generation
The G4F `AsyncClient` also supports **video generation** through supported providers like `HuggingFaceMedia`. You can retrieve the list of available video models and generate videos from prompts.
**Example: Generate a video using a prompt**
```python
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import HuggingFaceMedia
async def main():
    client = AsyncClient(
        provider=HuggingFaceMedia,
        api_key="hf_***"  # Your API key here
    )

    # Get available video models
    video_models = client.models.get_video()
    print("Available Video Models:", video_models)

    # Generate video
    result = await client.media.generate(
        model=video_models[0],
        prompt="G4F AI technology is the best in the world.",
        response_format="url"
    )

    print("Generated Video URL:", result.data[0].url)

asyncio.run(main())
```
#### Explanation
- **Client Initialization**: An `AsyncClient` is initialized using the `HuggingFaceMedia` provider with an API key.
- **Model Discovery**: `client.models.get_video()` fetches a list of supported video models.
- **Video Generation**: A prompt is submitted to generate a video using `await client.media.generate(...)`.
- **Output**: The result includes a URL to the generated video, accessed via `result.data[0].url`.
> Make sure your selected provider supports media generation and your API key has appropriate permissions.
## Advanced Usage
### Conversation Memory
To maintain a coherent conversation, it's important to store the context or history of the dialogue. This can be achieved by appending both the user's inputs and the bot's responses to a messages list. This allows the model to reference past exchanges when generating responses.
**The following example demonstrates how to implement conversation memory with the G4F:**
```python
import asyncio
from g4f.client import AsyncClient
class Conversation:
    def __init__(self):
        self.client = AsyncClient()
        self.history = [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            }
        ]

    def add_message(self, role, content):
        self.history.append({
            "role": role,
            "content": content
        })

    async def get_response(self, user_message):
        # Add user message to history
        self.add_message("user", user_message)

        # Get response from AI
        response = await self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=self.history,
            web_search=False
        )

        # Add AI response to history
        assistant_response = response.choices[0].message.content
        self.add_message("assistant", assistant_response)

        return assistant_response

async def main():
    conversation = Conversation()

    print("=" * 50)
    print("G4F Chat started (type 'exit' to end)".center(50))
    print("=" * 50)
    print("\nAI: Hello! How can I assist you today?")

    while True:
        user_input = input("\nYou: ")

        if user_input.lower() == 'exit':
            print("\nGoodbye!")
            break

        response = await conversation.get_response(user_input)
        print("\nAI:", response)

if __name__ == "__main__":
    asyncio.run(main())
```
---
## Search Tool Support
The **Search Tool Support** feature enables triggering a web search during chat completions. This is useful for retrieving real-time or specific data, offering a more flexible solution than `web_search`.
**Example Usage:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
    client = AsyncClient()

    tool_calls = [
        {
            "function": {
                "arguments": {
                    "query": "Latest advancements in AI",
                    "max_results": 5,
                    "max_words": 2500,
                    "backend": "auto",
                    "add_text": True,
                    "timeout": 5
                },
                "name": "search_tool"
            },
            "type": "function"
        }
    ]

    response = await client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": "Tell me about recent advancements in AI."
            }
        ],
        tool_calls=tool_calls
    )

    print(response.choices[0].message.content)

if __name__ == "__main__":
    asyncio.run(main())
```
**Parameters for `search_tool`:**
- **`query`**: The search query string.
- **`max_results`**: Number of search results to retrieve.
- **`max_words`**: Maximum number of words in the response.
- **`backend`**: The backend used for search (e.g., `"api"`).
- **`add_text`**: Whether to include text snippets in the response.
- **`timeout`**: Maximum time (in seconds) for the search operation.
**Advantages of Search Tool Support:**
- Works with any provider, irrespective of `web_search` support.
- Offers more customization and control over the search process.
- Bypasses provider-specific limitations.
---
### Using a List of Providers with RetryProvider
```python
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import RetryProvider, Phind, FreeChatgpt, Liaobots

import g4f.debug
g4f.debug.logging = True
g4f.debug.version_check = False

async def main():
    client = AsyncClient(provider=RetryProvider([Phind, FreeChatgpt, Liaobots], shuffle=False))

    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": "Hello"
            }
        ],
        web_search=False
    )

    print(response.choices[0].message.content)

asyncio.run(main())
```
---
### Concurrent Tasks with asyncio.gather
**Execute multiple tasks concurrently:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
    client = AsyncClient()

    task1 = client.chat.completions.create(
        model=None,
        messages=[
            {
                "role": "user",
                "content": "Say this is a test"
            }
        ]
    )

    task2 = client.images.generate(
        model="flux",
        prompt="a white siamese cat",
        response_format="url"
    )

    try:
        chat_response, image_response = await asyncio.gather(task1, task2)

        print("Chat Response:")
        print(chat_response.choices[0].message.content)

        print("\nImage Response:")
        print(image_response.data[0].url)
    except Exception as e:
        print(f"An error occurred: {e}")

asyncio.run(main())
```
## Available Models and Providers
The G4F AsyncClient supports a wide range of AI models and providers, allowing you to choose the best option for your specific use case.
**Here's a brief overview of the available models and providers:**
**Models**
- GPT-3.5-Turbo
- GPT-4o-Mini
- GPT-4
- DALL-E 3
- Gemini
- Claude (Anthropic)
- And more...
**Providers**
- OpenAI
- Google (for Gemini)
- Anthropic
- Microsoft Copilot
- Custom providers
**To use a specific model or provider, specify it when creating the client or in the API call:**
```python
client = AsyncClient(provider=g4f.Provider.OpenaiChat)

# or per request (inside an async function):
response = await client.chat.completions.create(
    model="gpt-4",
    provider=g4f.Provider.CopilotAccount,
    messages=[
        {
            "role": "user",
            "content": "Hello, world!"
        }
    ]
)
```
## Error Handling and Best Practices
Implementing proper error handling and following best practices is crucial when working with the G4F AsyncClient API. This ensures your application remains robust and can gracefully handle various scenarios.
**Here are some key practices to follow:**
1. **Use try-except blocks to catch and handle exceptions:**
```python
try:
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": "Hello, world!"
            }
        ]
    )
except Exception as e:
    print(f"An error occurred: {e}")
```
2. **Check the response status and handle different scenarios:**
```python
if response.choices:
    print(response.choices[0].message.content)
else:
    print("No response generated")
```
3. **Implement retries for transient errors:**
```python
import asyncio
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
async def make_api_call():
    # Your API call here
    pass
```
## Rate Limiting and API Usage
When working with the G4F AsyncClient API, it's important to implement rate limiting and monitor your API usage. This helps ensure fair usage, prevents overloading the service, and optimizes your application's performance. Here are some key strategies to consider:
1. **Implement rate limiting in your application:**
```python
import asyncio
from aiolimiter import AsyncLimiter

rate_limit = AsyncLimiter(max_rate=10, time_period=1)  # 10 requests per second

async def make_api_call():
    async with rate_limit:
        # Your API call here
        pass
```
2. **Monitor your API usage and implement logging:**
```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

async def make_api_call():
    try:
        response = await client.chat.completions.create(...)
        logger.info(f"API call successful. Tokens used: {response.usage.total_tokens}")
    except Exception as e:
        logger.error(f"API call failed: {e}")
```
3. **Use caching to reduce API calls for repeated queries:**
```python
from functools import lru_cache

@lru_cache(maxsize=100)
def get_cached_response(query):
    # Your (synchronous) API call here
    pass
```
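Note that `lru_cache` only caches synchronous function calls; a coroutine object cached this way cannot be awaited a second time. For the `AsyncClient`, a plain dictionary keyed by the query is a safer pattern. A minimal sketch (the helper name and cache policy are illustrative assumptions):

```python
import asyncio
from g4f.client import AsyncClient

client = AsyncClient()
_cache = {}  # Maps query -> response text (simple in-memory cache)

async def get_cached_response(query):
    # Call the API only on a cache miss; repeats are served from memory.
    if query not in _cache:
        response = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": query}],
        )
        _cache[query] = response.choices[0].message.content
    return _cache[query]

async def main():
    print(await get_cached_response("Say this is a test"))
    print(await get_cached_response("Say this is a test"))  # Cache hit, no API call

asyncio.run(main())
```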
## Conclusion
The G4F AsyncClient API provides a powerful and flexible way to interact with various AI models asynchronously. By leveraging its features and following best practices, you can build efficient and responsive applications that harness the power of AI for text generation, image analysis, and image creation.
Remember to handle errors gracefully, implement rate limiting, and monitor your API usage to ensure optimal performance and reliability in your applications.
---
[Return to Home](/)


@@ -1,273 +0,0 @@
# G4F - Authentication Guide
This documentation explains how to authenticate with G4F providers and configure GUI security. It covers API key management, cookie-based authentication, rate limiting, and GUI access controls.
---
## **Table of Contents**
1. **[Provider Authentication](#provider-authentication)**
- [Prerequisites](#prerequisites)
- [API Key Setup](#api-key-setup)
- [Synchronous Usage](#synchronous-usage)
- [Asynchronous Usage](#asynchronous-usage)
- [Multiple Providers](#multiple-providers-with-api-keys)
- [Cookie-Based Authentication](#cookie-based-authentication)
- [Rate Limiting](#rate-limiting)
- [Error Handling](#error-handling)
- [Supported Providers](#supported-providers)
2. **[GUI Authentication](#gui-authentication)**
- [Server Setup](#server-setup)
- [Browser Access](#browser-access)
- [Programmatic Access](#programmatic-access)
3. **[Best Practices](#best-practices)**
4. **[Troubleshooting](#troubleshooting)**
---
## **Provider Authentication**
### **Prerequisites**
- Python 3.7+
- Installed `g4f` package:
```bash
pip install g4f
```
- API keys or cookies from providers (if required).
---
### **API Key Setup**
#### **Step 1: Set Environment Variables**
**For Linux/macOS (Terminal)**:
```bash
# Example for Anthropic
export ANTHROPIC_API_KEY="your_key_here"
# Example for HuggingFace
export HUGGINGFACE_API_KEY="another_key_here"
```
**For Windows (Command Prompt)**:
```cmd
:: Example for Anthropic
set ANTHROPIC_API_KEY=your_key_here
:: Example for HuggingFace
set HUGGINGFACE_API_KEY=another_key_here
```
**For Windows (PowerShell)**:
```powershell
# Example for Anthropic
$env:ANTHROPIC_API_KEY = "your_key_here"
# Example for HuggingFace
$env:HUGGINGFACE_API_KEY = "another_key_here"
```
#### **Step 2: Initialize Client**
```python
from g4f.client import Client
# Example for Anthropic
client = Client(
    provider="g4f.Provider.Anthropic",
    api_key="your_key_here"  # Or use os.getenv("ANTHROPIC_API_KEY")
)
```
---
### **Synchronous Usage**
```python
from g4f.client import Client
# Initialize with Anthropic
client = Client(provider="g4f.Provider.Anthropic", api_key="your_key_here")
# Simple request
response = client.chat.completions.create(
    model="claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
---
### **Asynchronous Usage**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
    # Initialize with Groq
    client = AsyncClient(provider="g4f.Provider.Groq", api_key="your_key_here")

    response = await client.chat.completions.create(
        model="mixtral-8x7b",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
---
### **Multiple Providers with API Keys**
```python
import os
import g4f.Provider
from g4f.client import Client
# Using environment variables; pair each provider with a model it actually serves
providers = {
    "Anthropic": (os.getenv("ANTHROPIC_API_KEY"), "claude-3.5-sonnet"),
    "Groq": (os.getenv("GROQ_API_KEY"), "mixtral-8x7b")
}

for provider_name, (api_key, model) in providers.items():
    client = Client(provider=getattr(g4f.Provider, provider_name), api_key=api_key)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Hello to {provider_name}!"}]
    )
    print(f"{provider_name}: {response.choices[0].message.content}")
```
---
### **Cookie-Based Authentication**
**For Providers Like Gemini/Bing**:
1. Open your browser and log in to the provider's website.
2. Use developer tools (F12) to copy cookies:
- Chrome/Edge: **Application** → **Cookies**
- Firefox: **Storage** → **Cookies**
```python
from g4f.client import Client
from g4f.Provider import Gemini
# Using with cookies
client = Client(
    provider=Gemini,
)

response = client.chat.completions.create(
    model="",  # Default model
    messages="Hello Google",
    cookies={
        "__Secure-1PSID": "your_cookie_value_here",
        "__Secure-1PSIDTS": "your_cookie_value_here"
    }
)
print(f"Gemini: {response.choices[0].message.content}")
```
---
### **Rate Limiting**
```python
from aiolimiter import AsyncLimiter
# Limit to 5 requests per second
rate_limiter = AsyncLimiter(max_rate=5, time_period=1)
async def make_request():
    async with rate_limiter:
        return await client.chat.completions.create(...)
```
---
### **Error Handling**
```python
from tenacity import retry, stop_after_attempt, wait_exponential
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
def safe_request():
    try:
        return client.chat.completions.create(...)
    except Exception as e:
        print(f"Attempt failed: {str(e)}")
        raise
```
---
### **Supported Providers**
| Provider | Auth Type | Example Models |
|----------------|-----------------|----------------------|
| Anthropic | API Key | `claude-3.5-sonnet` |
| Gemini | Cookies | `gemini-1.5-pro` |
| Groq | API Key | `mixtral-8x7b` |
| HuggingFace | API Key | `llama-3.1-70b` |
*Full list: [Providers and Models](providers-and-models.md)*
---
## **GUI Authentication**
### **Server Setup**
1. Create a password:
```bash
# Linux/macOS
export G4F_API_KEY="your_password_here"
# Windows (Command Prompt)
set G4F_API_KEY=your_password_here
# Windows (PowerShell)
$env:G4F_API_KEY = "your_password_here"
```
2. Start the server:
```bash
python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY
```
---
### **Browser Access**
1. Navigate to `http://localhost:8080/chat/`.
2. Use credentials:
- **Username**: Any value (e.g., `admin`).
- **Password**: Your `G4F_API_KEY`.
---
### **Programmatic Access**
```python
import requests
response = requests.get(
    "http://localhost:8080/chat/",
    auth=("admin", "your_password_here")
)
print("Success!" if response.status_code == 200 else f"Failed: {response.status_code}")
```
---
## **Best Practices**
1. 🔒 **Never hardcode keys**
- Use `.env` files or secret managers like AWS Secrets Manager; see the sketch after this list.
2. 🔄 **Rotate keys every 90 days**
- Especially critical for production environments.
3. 📊 **Monitor API usage**
- Use tools like Prometheus/Grafana for tracking.
4. ♻️ **Retry transient errors**
- Use the `tenacity` library for robust retry logic.
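As a minimal sketch of the first practice, assuming the third-party `python-dotenv` package and a local `.env` file containing `ANTHROPIC_API_KEY=...`:

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv
from g4f.client import Client

load_dotenv()  # Load variables from .env into the process environment

client = Client(
    provider="g4f.Provider.Anthropic",
    api_key=os.getenv("ANTHROPIC_API_KEY")  # The key never appears in source code
)
```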
---
## **Troubleshooting**
| Issue | Solution |
|---------------------------|-------------------------------------------|
| **"Invalid API Key"** | 1. Verify key spelling<br>2. Regenerate key in provider dashboard |
| **"Cookie Expired"** | 1. Re-login to provider website<br>2. Update cookie values |
| **"Rate Limit Exceeded"** | 1. Implement rate limiting<br>2. Upgrade provider plan |
| **"Provider Not Found"** | 1. Check provider name spelling<br>2. Verify provider compatibility |
---
**[⬆ Back to Top](#table-of-contents)** | **[Providers and Models →](providers-and-models.md)**


@@ -1,497 +0,0 @@
# G4F Client API Guide
## Table of Contents
- [Introduction](#introduction)
- [Getting Started](#getting-started)
- [Switching to G4F Client](#switching-to-g4f-client)
- [Initializing the Client](#initializing-the-client)
- [Creating Chat Completions](#creating-chat-completions)
- [Configuration](#configuration)
- [Explanation of Parameters](#explanation-of-parameters)
- [Usage Examples](#usage-examples)
- [Text Completions](#text-completions)
- [Streaming Completions](#streaming-completions)
- [Using a Vision Model](#using-a-vision-model)
- [Image Generation](#image-generation)
- [Creating Image Variations](#creating-image-variations)
- [Advanced Usage](#advanced-usage)
- [Conversation Memory](#conversation-memory)
- [Search Tool Support](#search-tool-support)
- [Using a List of Providers with RetryProvider](#using-a-list-of-providers-with-retryprovider)
- [Command-line Chat Program](#command-line-chat-program)
## Introduction
Welcome to the G4F Client API, a cutting-edge tool for seamlessly integrating advanced AI capabilities into your Python applications. This guide is designed to facilitate your transition from using the OpenAI client to the G4F Client, offering enhanced features while maintaining compatibility with the existing OpenAI API.
---
## Getting Started
### Switching to G4F Client
**To begin using the G4F Client, simply update your import statement in your Python code:**
**Old Import:**
```python
from openai import OpenAI
```
**New Import:**
```python
from g4f.client import Client as OpenAI
```
The G4F Client preserves the same familiar API interface as OpenAI, ensuring a smooth transition process.
---
## Initializing the Client
To utilize the G4F Client, create a new instance. **Below is an example showcasing custom providers:**
```python
from g4f.client import Client
from g4f.Provider import BingCreateImages, OpenaiChat, Gemini
client = Client(
    provider=OpenaiChat,
    image_provider=Gemini,
    # Add any other necessary parameters
)
```
---
## Creating Chat Completions
**Here's an example of creating chat completions:**
```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Say this is a test"
        }
    ]
    # Add any other necessary parameters
)
```
**This example:**
- Asks a specific question: `Say this is a test`
- Omits optional parameters such as `temperature` and `max_tokens`, which you can pass for more control over the output
- Returns a complete (non-streaming) response, since streaming is not requested
You can adjust these parameters based on your specific needs.
## Configuration
**You can set an `api_key` for your provider in the client and define a proxy for all outgoing requests:**
```python
from g4f.client import Client
client = Client(
    api_key="your_api_key_here",
    proxies="http://user:pass@host",
    # Add any other necessary parameters
)
```
---
## Explanation of Parameters
**When using the G4F to create chat completions or perform related tasks, you can configure the following parameters:**
- **`model`**:
Specifies the AI model to be used for the task. Examples include `"gpt-4o"` for GPT-4 Omni or `"gpt-4o-mini"` for a lightweight version. The choice of model determines the quality and speed of the response. Always ensure the selected model is supported by the provider.
- **`messages`**:
**A list of dictionaries representing the conversation context. Each dictionary contains two keys:**
- `role`: Defines the role of the message sender, such as `"user"` (input from the user) or `"system"` (instructions to the AI).
- `content`: The actual text of the message.
**Example:**
```python
[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What day is it today?"}
]
```
- **`provider`**:
*(Optional)* Specifies the backend provider for the API. Examples include `g4f.Provider.Blackbox` or `g4f.Provider.OpenaiChat`. Each provider may support a different subset of models and features, so select one that matches your requirements.
- **`web_search`** (Optional):
Boolean flag indicating whether to enable internet-based search capabilities. This is useful for obtaining real-time or specific details not included in the model's training data.
#### Providers Limitation
The `web_search` argument is **limited to specific providers**, including:
- ChatGPT
- HuggingChat
- Blackbox
- RubiksAI
If your chosen provider does not support `web_search`, it will not function as expected.
**Alternative Solution:**
Instead of relying on the `web_search` argument, you can use the more versatile **Search Tool Support**, which allows for highly customizable web search operations. The search tool enables you to define parameters such as query, number of results, word limit, and timeout, offering greater control over search capabilities.
---
## Usage Examples
### Text Completions
**Generate text completions using the `ChatCompletions` endpoint:**
```python
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Say this is a test"
        }
    ],
    web_search=False
)

print(response.choices[0].message.content)
```
### Streaming Completions
**Process responses incrementally as they are generated:**
```python
from g4f.client import Client
client = Client()
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Say this is a test"
        }
    ],
    stream=True,
    web_search=False
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content or "", end="")
```
---
### Using a Vision Model
**Analyze an image and generate a description:**
```python
import g4f
import requests
from g4f.client import Client
from g4f.Provider.GeminiPro import GeminiPro
# Initialize the GPT client with the desired provider and api key
client = Client(
    api_key="your_api_key_here",
    provider=GeminiPro
)

image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).raw
# Or: image = open("docs/images/cat.jpeg", "rb")

response = client.chat.completions.create(
    model=g4f.models.default,
    messages=[
        {
            "role": "user",
            "content": "What's in this image?"
        }
    ],
    image=image
    # Add any other necessary parameters
)

print(response.choices[0].message.content)
```
---
### Image Generation
**The `response_format` parameter is optional and can have the following values:**
- **If not specified (default):** The image will be saved locally, and a local path will be returned (e.g., "/images/1733331238_cf9d6aa9-f606-4fea-ba4b-f06576cba309.jpg").
- **"url":** Returns a URL to the generated image.
- **"b64_json":** Returns the image as a base64-encoded JSON string.
**Generate images using a specified prompt:**
```python
from g4f.client import Client
client = Client()
response = client.images.generate(
    model="flux",
    prompt="a white siamese cat",
    response_format="url"
    # Add any other necessary parameters
)

image_url = response.data[0].url
print(f"Generated image URL: {image_url}")
```
#### Base64 Response Format
```python
from g4f.client import Client
client = Client()
response = client.images.generate(
    model="flux",
    prompt="a white siamese cat",
    response_format="b64_json"
    # Add any other necessary parameters
)

base64_text = response.data[0].b64_json
print(base64_text)
```
### Creating Image Variations
**Create variations of an existing image:**
```python
from g4f.client import Client
from g4f.Provider import OpenaiChat
client = Client(
    image_provider=OpenaiChat
)

response = client.images.create_variation(
    image=open("docs/images/cat.jpg", "rb"),
    model="dall-e-3",
    # Add any other necessary parameters
)

image_url = response.data[0].url
print(f"Generated image URL: {image_url}")
```
---
## Advanced Usage
### Conversation Memory
To maintain a coherent conversation, it's important to store the context or history of the dialogue. This can be achieved by appending both the user's inputs and the bot's responses to a messages list. This allows the model to reference past exchanges when generating responses.
**The conversation history consists of messages with different roles:**
- `system`: Initial instructions that define the AI's behavior
- `user`: Messages from the user
- `assistant`: Responses from the AI
**The following example demonstrates how to implement conversation memory with the G4F:**
```python
from g4f.client import Client
class Conversation:
    def __init__(self):
        self.client = Client()
        self.history = [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            }
        ]

    def add_message(self, role, content):
        self.history.append({
            "role": role,
            "content": content
        })

    def get_response(self, user_message):
        # Add user message to history
        self.add_message("user", user_message)

        # Get response from AI
        response = self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=self.history,
            web_search=False
        )

        # Add AI response to history
        assistant_response = response.choices[0].message.content
        self.add_message("assistant", assistant_response)

        return assistant_response

def main():
    conversation = Conversation()

    print("=" * 50)
    print("G4F Chat started (type 'exit' to end)".center(50))
    print("=" * 50)
    print("\nAI: Hello! How can I assist you today?")

    while True:
        user_input = input("\nYou: ")

        if user_input.lower() == 'exit':
            print("\nGoodbye!")
            break

        response = conversation.get_response(user_input)
        print("\nAI:", response)

if __name__ == "__main__":
    main()
```
**Key Features:**
- Maintains conversation context through a message history
- Includes system instructions for AI behavior
- Automatically stores both user inputs and AI responses
- Simple and clean implementation using a class-based approach
**Usage Example:**
```python
conversation = Conversation()
response = conversation.get_response("Hello, how are you?")
print(response)
```
**Note:**
The conversation history grows with each interaction. For long conversations, you might want to implement a method to limit the history size or clear old messages to manage token usage.
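A minimal sketch of such a trimming method for the `Conversation` class above (the method name and limit are illustrative assumptions):

```python
# Add to the Conversation class, and call self.trim_history() at the
# start of get_response to keep token usage bounded.
def trim_history(self, max_messages=20):
    # Keep the system prompt (index 0) plus only the newest max_messages entries.
    if len(self.history) > max_messages + 1:
        self.history = [self.history[0]] + self.history[-max_messages:]
```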
---
## Search Tool Support
The **Search Tool Support** feature enables triggering a web search during chat completions. This is useful for retrieving real-time or specific data, offering a more flexible solution than `web_search`.
**Example Usage**:
```python
from g4f.client import Client
client = Client()
tool_calls = [
    {
        "function": {
            "arguments": {
                "query": "Latest advancements in AI",
                "max_results": 5,
                "max_words": 2500,
                "backend": "auto",
                "add_text": True,
                "timeout": 5
            },
            "name": "search_tool"
        },
        "type": "function"
    }
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Tell me about recent advancements in AI."}
    ],
    tool_calls=tool_calls
)

print(response.choices[0].message.content)
```
**Parameters for `search_tool`:**
- **`query`**: The search query string.
- **`max_results`**: Number of search results to retrieve.
- **`max_words`**: Maximum number of words in the response.
- **`backend`**: The backend used for search (e.g., `"api"`).
- **`add_text`**: Whether to include text snippets in the response.
- **`timeout`**: Maximum time (in seconds) for the search operation.
**Advantages of Search Tool Support:**
- Works with any provider, irrespective of `web_search` support.
- Offers more customization and control over the search process.
- Bypasses provider-specific limitations.
---
### Using a List of Providers with RetryProvider
```python
from g4f.client import Client
from g4f.Provider import RetryProvider, Phind, FreeChatgpt, Liaobots
import g4f.debug
g4f.debug.logging = True
g4f.debug.version_check = False
client = Client(
    provider=RetryProvider([Phind, FreeChatgpt, Liaobots], shuffle=False)
)

response = client.chat.completions.create(
    model="",
    messages=[
        {
            "role": "user",
            "content": "Hello"
        }
    ]
)

print(response.choices[0].message.content)
```
## Command-line Chat Program
**Here's an example of a simple command-line chat program using the G4F Client:**
```python
import g4f
from g4f.client import Client
# Initialize the GPT client with the desired provider
client = Client()
# Initialize an empty conversation history
messages = []

while True:
    # Get user input
    user_input = input("You: ")

    # Check if the user wants to exit the chat
    if user_input.lower() == "exit":
        print("Exiting chat...")
        break  # Exit the loop to end the conversation

    # Update the conversation history with the user's message
    messages.append({"role": "user", "content": user_input})

    try:
        # Get GPT's response
        response = client.chat.completions.create(
            messages=messages,
            model=g4f.models.default,
        )

        # Extract the GPT response and print it
        gpt_response = response.choices[0].message.content
        print(f"Bot: {gpt_response}")

        # Update the conversation history with GPT's response
        messages.append({"role": "assistant", "content": gpt_response})
    except Exception as e:
        print(f"An error occurred: {e}")
This guide provides a comprehensive overview of the G4F Client API, demonstrating its versatility in handling various AI tasks, from text generation to image analysis and creation. By leveraging these features, you can build powerful and responsive applications that harness the capabilities of advanced AI models.
---
[Return to Home](/)


@@ -1,95 +0,0 @@
### G4F - Configuration
## Table of Contents
- [Authentication](#authentication)
- [Cookies Configuration](#cookies-configuration)
- [HAR and Cookie Files](#har-and-cookie-files)
- [Debug Mode](#debug-mode)
- [Proxy Configuration](#proxy-configuration)
#### Authentication
Refer to the [G4F Authentication Setup Guide](authentication.md) for detailed instructions on setting up authentication.
### Cookies Configuration
Cookies are essential for using Meta AI and Microsoft Designer to create images.
Additionally, cookies are required for the Google Gemini and WhiteRabbitNeo providers.
From Bing, ensure you have the "\_U" cookie, and from Google, all cookies starting with "\_\_Secure-1PSID" are needed.
**You can pass these cookies directly to the create function or set them using the `set_cookies` method before running G4F:**
```python
from g4f.cookies import set_cookies
set_cookies(".bing.com", {
"_U": "cookie value"
})
set_cookies(".google.com", {
"__Secure-1PSID": "cookie value"
})
```
---
### HAR and Cookie Files
**Using .har and Cookie Files**
You can place `.har` and cookie files `.json` in the default `./har_and_cookies` directory. To export a cookie file, use the [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) available on the Chrome Web Store.
**Creating .har Files to Capture Cookies**
To capture cookies, you can also create `.har` files. For more details, refer to the next section.
### Changing the Cookies Directory and Loading Cookie Files in Python
**You can change the cookies directory and load cookie files in your Python environment. To set the cookies directory relative to your Python file, use the following code:**
```python
import os.path
from g4f.cookies import set_cookies_dir, read_cookie_files
import g4f.debug
g4f.debug.logging = True
cookies_dir = os.path.join(os.path.dirname(__file__), "har_and_cookies")
set_cookies_dir(cookies_dir)
read_cookie_files(cookies_dir)
```
### Debug Mode
**If you enable debug mode, you will see logs similar to the following:**
```
Read .har file: ./har_and_cookies/you.com.har
Cookies added: 10 from .you.com
Read cookie file: ./har_and_cookies/google.json
Cookies added: 16 from .google.com
```
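Debug mode is enabled with the same flag used in the snippet above:

```python
import g4f.debug

g4f.debug.logging = True  # Prints diagnostics such as the .har and cookie files being read
```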
#### .HAR File for OpenaiChat Provider
##### Generating a .HAR File
**To utilize the OpenaiChat provider, a .har file is required from https://chatgpt.com/. Follow the steps below to create a valid .har file:**
1. Navigate to https://chatgpt.com/ using your preferred web browser and log in with your credentials.
2. Access the Developer Tools in your browser. This can typically be done by right-clicking the page and selecting "Inspect," or by pressing F12 or Ctrl+Shift+I (Cmd+Option+I on a Mac).
3. With the Developer Tools open, switch to the "Network" tab.
4. Reload the website to capture the loading process within the Network tab.
5. Initiate an action in the chat which can be captured in the .har file.
6. Right-click any of the network activities listed and select "Save all as HAR with content" to export the .har file.
##### Storing the .HAR File
- Place the exported .har file in the `./har_and_cookies` directory if you are using Docker. Alternatively, if you are using Python from a terminal, you can store it in a `./har_and_cookies` directory within your current working directory.
> **Note:** Ensure that your .har file is stored securely, as it may contain sensitive information.
### Proxy Configuration
**If you want to hide or change your IP address for the providers, you can set a proxy globally via an environment variable:**
**- On macOS and Linux:**
```bash
export G4F_PROXY="http://host:port"
```
**- On Windows:**
```cmd
set G4F_PROXY=http://host:port
```
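Alternatively, a proxy can be configured for a single client rather than globally, using the `proxies` parameter shown in the client documentation:

```python
from g4f.client import Client

client = Client(
    proxies="http://user:pass@host",  # Applies only to this client's requests
)
```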


@@ -1,128 +0,0 @@
# G4F Docker Setup
## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation and Setup](#installation-and-setup)
- [Testing the API](#testing-the-api)
- [Troubleshooting](#troubleshooting)
- [Stopping the Service](#stopping-the-service)
## Prerequisites
**Before you begin, ensure you have the following installed on your system:**
- [Docker](https://docs.docker.com/get-docker/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- Python 3.7 or higher
- pip (Python package manager)
**Note:** If you encounter issues with Docker, you can run the project directly using Python.
## Installation and Setup
### Docker Method (Recommended)
1. **Clone the Repository**
```bash
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
```
2. **Build and Run with Docker Compose**
Pull the latest image and run a container with Google Chrome support:
```bash
docker pull hlohaus789/g4f
docker-compose up -d
```
Or run the slim Docker image without Google Chrome:
```bash
docker-compose -f docker-compose-slim.yml up -d
```
3. **Access the API or the GUI**
The API server will be accessible at `http://localhost:1337`
And the GUI at this URL: `http://localhost:8080`
### Non-Docker Method
If you encounter issues with Docker, you can run the project directly using Python:
1. **Clone the Repository**
```bash
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
```
2. **Install Dependencies**
```bash
pip install -r requirements.txt
```
3. **Run the Server**
```bash
python -m g4f.api.run
```
4. **Access the API or the GUI**
The API server will be accessible at `http://localhost:1337`
And the GUI at this URL: `http://localhost:8080`
## Testing the API
**You can test the API using curl or by creating a simple Python script:**
### Using curl
```bash
curl -X POST -H "Content-Type: application/json" -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "What is the capital of France?"}]}' http://localhost:1337/v1/chat/completions
```
### Using Python
**Create a file named `test_g4f.py` with the following content:**
```python
import requests
url = "http://localhost:1337/v1/chat/completions"
body = {
"model": "gpt-4o-mini",
"stream": False,
"messages": [
{"role": "assistant", "content": "What can you do?"}
]
}
json_response = requests.post(url, json=body).json().get('choices', [])
for choice in json_response:
    print(choice.get('message', {}).get('content', ''))
```
**Run the script:**
```bash
python test_g4f.py
```
## Troubleshooting
- If you encounter issues with Docker, try running the project directly using Python as described in the Non-Docker Method.
- Ensure that you have the necessary permissions to run Docker commands. You might need to use `sudo` or add your user to the `docker` group.
- If the server doesn't start, check the logs for any error messages and ensure all dependencies are correctly installed.
**_For more detailed information on API endpoints and usage, refer to the [G4F API documentation](docs/interference-api.md)._**
## Stopping the Service
### Docker Method
**To stop the Docker containers, use the following command:**
```bash
docker-compose down
```
### Non-Docker Method
If you're running the server directly with Python, you can stop it by pressing Ctrl+C in the terminal where it's running.
---
[Return to Home](/)


@@ -1,205 +0,0 @@
## G4F - File API Documentation with Web Download and Enhanced File Support
This document details the enhanced G4F File API, allowing users to upload files, download files from web URLs, and process a wider range of file types for integration with language models.
**Key Improvements:**
* **Web URL Downloads:** Upload a `downloads.json` file to your bucket containing a list of URLs. The API will download and process these files. Example: `[{"url": "https://example.com/document.pdf"}]`
* **Expanded File Support:** Added support for additional plain text file extensions: `.txt`, `.xml`, `.json`, `.js`, `.har`, `.sh`, `.py`, `.php`, `.css`, `.yaml`, `.sql`, `.log`, `.csv`, `.twig`, `.md`. Binary file support remains for `.pdf`, `.html`, `.docx`, `.odt`, `.epub`, `.xlsx`, and `.zip`.
* **Server-Sent Events (SSE):** SSE are now used to provide asynchronous updates on file download and processing progress. This improves the user experience, particularly for large files and multiple downloads.
**API Endpoints:**
* **Upload:** `/v1/files/{bucket_id}` (POST)
* **Method:** POST
* **Path Parameters:** `bucket_id` (generated by you, for example a UUID)
* **Body:** Multipart/form-data with files OR a `downloads.json` file containing URLs.
* **Response:** JSON object with `bucket_id`, `url`, and a list of uploaded/downloaded filenames.
* **Retrieve:** `/v1/files/{bucket_id}` (GET)
* **Method:** GET
* **Path Parameters:** `bucket_id`
* **Query Parameters:**
* `delete_files`: (Optional, boolean, default `true`) Delete files after retrieval.
* `refine_chunks_with_spacy`: (Optional, boolean, default `false`) Apply spaCy-based refinement.
* **Response:** Streaming response with extracted text, separated by ``` markers. SSE updates are sent if the `Accept` header includes `text/event-stream`.
**Example Usage (Python):**
```python
import requests
import uuid
import json
def upload_and_process(files_or_urls, bucket_id=None):
if bucket_id is None:
bucket_id = str(uuid.uuid4())
if isinstance(files_or_urls, list): #URLs
files = {'files': ('downloads.json', json.dumps(files_or_urls), 'application/json')}
elif isinstance(files_or_urls, dict): #Files
files = files_or_urls
else:
raise ValueError("files_or_urls must be a list of URLs or a dictionary of files")
upload_response = requests.post(f'http://localhost:1337/v1/files/{bucket_id}', files=files)
if upload_response.status_code == 200:
upload_data = upload_response.json()
print(f"Upload successful. Bucket ID: {upload_data['bucket_id']}")
    else:
        print(f"Upload failed: {upload_response.status_code} - {upload_response.text}")
        return None
response = requests.get(f'http://localhost:1337/v1/files/{bucket_id}', stream=True, headers={'Accept': 'text/event-stream'})
for line in response.iter_lines():
if line:
line = line.decode('utf-8')
if line.startswith('data:'):
try:
data = json.loads(line[5:]) #remove data: prefix
if "action" in data:
print(f"SSE Event: {data}")
elif "error" in data:
print(f"Error: {data['error']['message']}")
else:
print(f"File data received: {data}") #Assuming it's file content
except json.JSONDecodeError as e:
print(f"Error decoding JSON: {e}")
else:
print(f"Unhandled SSE event: {line}")
response.close()
return bucket_id
# Example with URLs
urls = [{"url": "https://github.com/xtekky/gpt4free/issues"}]
bucket_id = upload_and_process(urls)
#Example with files
files = {'files': ('document.pdf', open('document.pdf', 'rb'))}
bucket_id = upload_and_process(files)
```
**Usage of Uploaded Files:**
```python
from g4f.client import Client
# Enable debug mode
import g4f.debug
g4f.debug.logging = True
client = Client()
# Upload example file
files = {'files': ('demo.docx', open('demo.docx', 'rb'))}
bucket_id = upload_and_process(files)
# Send request with file:
response = client.chat.completions.create(
[{"role": "user", "content": [
{"type": "text", "text": "Discribe this file."},
{"bucket_id": bucket_id}
]}],
)
print(response.choices[0].message.content)
```
**Example Output:**
```
This document is a demonstration of the DOCX Input plugin capabilities in the software ...
```
**Example Usage (JavaScript):**
```javascript
function uuid() {
return ([1e7]+-1e3+-4e3+-8e3+-1e11).replace(/[018]/g, c =>
(c ^ crypto.getRandomValues(new Uint8Array(1))[0] & 15 >> c / 4).toString(16)
);
}
async function upload_files_or_urls(data) {
let bucket_id = uuid(); // Use a random generated key for your bucket
let formData = new FormData();
if (typeof data === "object" && data.constructor === Array) { //URLs
const blob = new Blob([JSON.stringify(data)], { type: 'application/json' });
const file = new File([blob], 'downloads.json', { type: 'application/json' }); // Create File object
formData.append('files', file); // Append as a file
} else { //Files
Array.from(data).forEach(file => {
formData.append('files', file);
});
}
await fetch("/v1/files/" + bucket_id, {
method: 'POST',
body: formData
});
function connectToSSE(url) {
const eventSource = new EventSource(url);
eventSource.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.error) {
console.error("Error:", data.error.message);
} else if (data.action === "done") {
console.log("Files loaded successfully. Bucket ID:", bucket_id);
// Use bucket_id in your LLM prompt.
const prompt = `Use files from bucket. ${JSON.stringify({"bucket_id": bucket_id})} to answer this: ...your question...`;
// ... Send prompt to your language model ...
} else {
console.log("SSE Event:", data); // Update UI with progress as needed
}
};
eventSource.onerror = (event) => {
console.error("SSE Error:", event);
eventSource.close();
};
}
connectToSSE(`/v1/files/${bucket_id}`); //Retrieve and refine
}
// Example with URLs
const urls = [{"url": "https://github.com/xtekky/gpt4free/issues"}];
upload_files_or_urls(urls)
// Example with files (using a file input element)
const fileInput = document.getElementById('fileInput');
fileInput.addEventListener('change', () => {
upload_files_or_urls(fileInput.files);
});
```
**Integrating with `ChatCompletion`:**
To incorporate file uploads into your client applications, include the `bucket_id` in your chat completion requests as an inline content part.
```json
{
"messages": [
{
"role": "user",
"content": [
{"type": "text", "text": "Answer this question using the files in the specified bucket: ...your question..."},
{"bucket_id": "your_actual_bucket_id"}
]
}
]
}
```
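For example, a minimal sketch sending such a request with the `requests` library (assuming the API server from this guide runs on `localhost:1337` and the bucket already exists):
```python
import requests

body = {
    "model": "gpt-4o-mini",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Answer this question using the files in the specified bucket: ...your question..."},
                {"bucket_id": "your_actual_bucket_id"},
            ],
        }
    ],
}
response = requests.post("http://localhost:1337/v1/chat/completions", json=body)
print(response.json()["choices"][0]["message"]["content"])
```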
**Important Considerations:**
* **Error Handling:** Implement robust error handling in both Python and JavaScript to gracefully manage potential issues during file uploads, downloads, and API interactions.
* **Dependencies:** Ensure all required packages are installed (`pip install -U g4f[files]` for Python).
---
[Return to Home](/)

# G4F - Git Installation Guide
This guide provides step-by-step instructions for installing G4F from the source code using Git.
## Table of Contents
1. [Prerequisites](#prerequisites)
2. [Installation Steps](#installation-steps)
1. [Clone the Repository](#1-clone-the-repository)
2. [Navigate to the Project Directory](#2-navigate-to-the-project-directory)
3. [Set Up a Python Virtual Environment](#3-set-up-a-python-virtual-environment-recommended)
4. [Activate the Virtual Environment](#4-activate-the-virtual-environment)
5. [Install Dependencies](#5-install-dependencies)
6. [Verify Installation](#6-verify-installation)
3. [Usage](#usage)
4. [Troubleshooting](#troubleshooting)
5. [Additional Resources](#additional-resources)
---
## Prerequisites
Before you begin, ensure you have the following installed on your system:
- Git
- Python 3.7 or higher
- pip (Python package installer)
## Installation Steps
### 1. Clone the Repository
**Open your terminal and run the following command to clone the G4F repository:**
```bash
git clone https://github.com/xtekky/gpt4free.git
```
### 2. Navigate to the Project Directory
**Change to the project directory:**
```bash
cd gpt4free
```
### 3. Set Up a Python Virtual Environment (Recommended)
**It's best practice to use a virtual environment to manage project dependencies:**
```bash
python3 -m venv venv
```
### 4. Activate the Virtual Environment
**Activate the virtual environment based on your operating system:**
- **Windows:**
```bash
.\venv\Scripts\activate
```
- **macOS and Linux:**
```bash
source venv/bin/activate
```
### 5. Install Dependencies
**You have two options for installing dependencies:**
#### Option A: Install Minimum Requirements
**For a lightweight installation, use:**
```bash
pip install -r requirements-min.txt
```
#### Option B: Install All Packages
**For a full installation with all features, use:**
```bash
pip install -r requirements.txt
```
### 6. Verify Installation
You can now create Python scripts that use G4F. Here's a quick sanity check:
**Create a `g4f-test.py` file in the root folder and start using the repository:**
```python
import g4f

# If this import succeeds, the installation works.
# As a quick sanity check, list the providers currently marked as working:
print([provider.__name__ for provider in g4f.Provider.__providers__ if provider.working])
```
## Usage
**After installation, you can start using G4F in your Python scripts. Here's a basic example:**
```python
import g4f
# Your G4F code here
# For example:
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{
"role": "user",
"content": "Say this is a test"
}
]
# Add any other necessary parameters
)
print(response.choices[0].message.content)
```
## Troubleshooting
**If you encounter any issues during installation or usage:**
1. Ensure all prerequisites are correctly installed.
2. Check that you're in the correct directory and the virtual environment is activated.
3. Try reinstalling the dependencies.
4. Consult the [G4F documentation](https://github.com/xtekky/gpt4free) for more detailed information.
## Additional Resources
- [G4F GitHub Repository](https://github.com/xtekky/gpt4free)
- [Python Virtual Environments Guide](https://docs.python.org/3/tutorial/venv.html)
- [pip Documentation](https://pip.pypa.io/en/stable/)
---
**_For more information or support, please visit the [G4F GitHub Issues page](https://github.com/xtekky/gpt4free/issues)._**
---
[Return to Home](/)

# G4F - GUI Documentation
## Overview
The G4F GUI is a self-contained, user-friendly interface designed for interacting with multiple AI models from various providers. It allows users to generate text, code, and images effortlessly. Advanced features such as speech recognition, file uploads, conversation backup/restore, and more are included. Both the backend and frontend are fully integrated into the GUI, making setup simple and seamless.
## Features
### 1. **Multiple Providers and Models**
- **Provider/Model Selection via Dropdown**
Use the select box to choose a specific **provider/model combination**.
- **Pinning Provider/Model Combinations**
After selecting a provider and model from the dropdown, click the **pin button** to add the combination to the pinned list.
- **Remove Pinned Combinations**
Each pinned provider/model combination is displayed as a button. Clicking on the button removes it from the pinned list.
- **Send Requests to Multiple Providers**
You can pin multiple provider/model combinations and send requests to all of them simultaneously, enabling fast and comprehensive content generation.
### 2. **Text, Code, and Image Generation**
- **Text and Code Generation**
Enter prompts to generate text or code outputs.
- **Image Generation**
Provide text prompts to generate images, which are shown as thumbnails. Clicking on a thumbnail opens the image in a lightbox view.
### 3. **Gallery Functionality**
- **Image Thumbnails**
Generated images appear as small thumbnails within the conversation.
- **Lightbox View**
Clicking a thumbnail opens the image in full size, along with the prompt used to generate it.
- **Automatic Image Download**
You can enable automatic downloading of generated images through the settings.
### 4. **Conversation Management**
- **Message Reuse**
While messages cannot be edited after sending, you can copy and reuse them.
- **Message Deletion**
Individual messages or entire conversations can be deleted for a cleaner workspace.
- **Conversation List**
The left sidebar displays a list of active and past conversations for easy navigation.
- **Change Conversation Title**
By clicking the three dots next to a conversation title, you can either delete the conversation or change its title.
- **Backup and Restore Conversations**
Backup and restore all conversations and messages as a JSON file (accessible via the settings).
### 5. **Speech Recognition and Synthesis**
- **Speech Input**
Use speech recognition to input prompts by speaking instead of typing.
- **Speech Output (Text-to-Speech)**
The generated text can be read aloud using speech synthesis.
- **Custom Language Settings**
Configure the language used for speech recognition to match your preference.
### 6. **File Uploads**
- **Image Uploads**
Upload images that will be appended to your message and sent to the AI provider.
- **Text File Uploads**
Upload text files; their contents will be added to the message to provide more detailed input to the AI.
### 7. **Web Access and Settings**
- **DuckDuckGo Web Access**
Enable web access through DuckDuckGo for privacy-focused browsing.
- **Theme Toggle**
Switch between **dark mode** and **light mode** in the settings.
- **Provider Visibility**
Hide unused providers in the settings using toggle buttons.
- **Log Access**
View application logs, including error messages and debug logs, through the settings.
### 8. **Authentication**
- **Basic Authentication**
You can set a password for Basic Authentication using the `--g4f-api-key` argument when starting the web server.
### 9. **Continue Button**
- **Automatic Detection of Truncated Responses**
When using providers, responses may occasionally be cut off or truncated.
- **Continue Button**
If the GUI detects that the response ended abruptly, a **Continue** button appears directly below the truncated message. Clicking this button sends a follow-up request to the same provider and model, retrieving the rest of the message.
- **Seamless Conversation Flow**
This feature ensures that you can read complete messages without manually re-prompting.
---
## Installation
You can install the G4F GUI either as a full stack or in a lightweight version:
1. **Full Stack Installation** (includes all packages, including browser support and drivers):
```bash
pip install -U g4f[all]
```
- Installs all necessary dependencies, including browser support for web-based interactions.
2. **Slim Installation** (does not include browser drivers, suitable for headless environments):
```bash
pip install -U g4f[slim]
```
- This version is lighter, with no browser support, ideal for environments where browser interactions are not required.
---
## Setup
### 1. Setting the Environment Variable
It is **recommended** to set a `G4F_API_KEY` environment variable for authentication. You can do this as follows:
- **Linux/macOS**:
```bash
export G4F_API_KEY="your-api-key-here"
```
- **Windows**:
```bash
set G4F_API_KEY="your-api-key-here"
```
### 2. Start the GUI and Backend
Run the following command to start both the GUI and backend services based on the G4F client:
```bash
python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY
```
This starts the GUI at `http://localhost:8080` with all necessary backend components running seamlessly.
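Alternatively, you can start the GUI directly from Python. A minimal sketch using `g4f.gui.run_gui` (host and port are illustrative):
```python
from g4f.gui import run_gui

# Serve the GUI and backend on all interfaces at port 8080 with debug logging.
run_gui("0.0.0.0", 8080, debug=True)
```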
### 3. Access the GUI
Once the server is running, open your browser and navigate to:
```
http://localhost:8080/chat/
```
---
## Using the Interface
1. **Select and Manage Providers/Models**
- Use the **select box** to view the list of available providers and models.
- Select a **provider/model combination** from the dropdown.
- Click the **pin button** to add the combination to your pinned list.
- To **unpin** a combination, click the corresponding pinned button.
2. **Input a Prompt**
- Enter your prompt manually or use **speech recognition** to dictate it.
- You can also upload **images** or **text files** to include them in your prompt.
3. **Generate Content**
- Click the **Generate** button to produce the text, code, or images requested.
4. **View and Interact with Results**
- **Text/Code:** The generated response appears in the conversation window.
- **Images:** Generated images are displayed as thumbnails. Click on any thumbnail to view it in full size within the lightbox.
5. **Continue Button**
- If a response is truncated, a **Continue** button will appear under the last message. Clicking it asks the same provider to continue the response from where it ended.
6. **Manage Conversations**
- **Delete** or **rename** any conversation by clicking the three dots next to its title.
- **Backup/Restore** all your conversations as a JSON file in the settings.
---
## Gallery Functionality
- **Image Thumbnails:** All generated images are shown as thumbnails within the conversation window.
- **Lightbox View:** Clicking any thumbnail opens the image in a larger view along with the associated prompt.
- **Automatic Image Download:** Enable this feature in the settings if you want images to be saved automatically.
---
## Settings Configuration
1. **API Key**
Set your API key when starting the server by defining the `G4F_API_KEY` environment variable.
2. **Provider Visibility**
   Hide any providers you don't plan to use through the settings.
3. **Theme**
Toggle between **dark mode** and **light mode**. Disabling dark mode switches to a white theme.
4. **DuckDuckGo Access**
Optionally enable DuckDuckGo for privacy-focused web searching.
5. **Speech Recognition Language**
Configure your preferred speech recognition language.
6. **Log Access**
View logs (including error and debug messages) from the settings menu.
7. **Automatic Image Download**
Enable this to have all generated images downloaded immediately upon creation.
---
## Known Issues
1. **Gallery Loading**
Large images may take additional time to load depending on your hardware and network.
2. **Speech Recognition Accuracy**
Voice recognition may vary with microphone quality, background noise, or speech clarity.
3. **Provider Downtime**
Some AI providers may experience temporary downtime or disruptions.
---
[Return to Home](/)

#### Create Provider with AI Tool
Run the `create_provider` script in your terminal:
```bash
python -m etc.tool.create_provider
```
1. Enter a name for the new provider.
2. Copy and paste the `cURL` command from your browser developer tools.
3. Let the AI create the provider for you.
4. Customize the provider according to your needs.
#### Create Provider
1. Check out the current [list of potential providers](https://github.com/zukixa/cool-ai-stuff#ai-chat-websites), or find your own provider source!
2. Create a new file in [g4f/Provider](/g4f/Provider) with the name of the Provider.
3. Implement a class that extends [BaseProvider](/g4f/providers/base_provider.py).
```py
from __future__ import annotations
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider
class HogeService(AsyncGeneratorProvider):
url = "https://chat-gpt.com"
working = True
supports_gpt_35_turbo = True
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
proxy: str = None,
**kwargs
) -> AsyncResult:
yield ""
```
4. Here you can adjust the settings; for example, if the website supports streaming, set `supports_stream` to `True`...
5. Write code to request the provider in `create_async_generator` and `yield` the response, _even if_ it's a one-time response; don't hesitate to look at other providers for inspiration. See the sketch below.
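A minimal sketch using `aiohttp`; the endpoint path and JSON payload are placeholders, to be replaced with the real request captured from your browser's developer tools:
```py
from __future__ import annotations

import aiohttp

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider

class HogeService(AsyncGeneratorProvider):
    url = "https://chat-gpt.com"
    working = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        # Placeholder endpoint and payload: replace with the real request.
        async with aiohttp.ClientSession() as session:
            async with session.post(
                f"{cls.url}/api/chat",
                json={"model": model, "messages": messages},
                proxy=proxy,
            ) as response:
                response.raise_for_status()
                # Stream the answer chunk by chunk, even if the site
                # returns the full response at once.
                async for chunk in response.content.iter_any():
                    yield chunk.decode(errors="ignore")
```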
6. Add the Provider Import in [`g4f/Provider/__init__.py`](./g4f/Provider/__init__.py)
```py
from .HogeService import HogeService
__all__ = [
    "HogeService",
]
```
7. You are done! Test the provider by calling it:
```py
import g4f
response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', provider=g4f.Provider.PROVIDERNAME,
messages=[{"role": "user", "content": "test"}], stream=g4f.Provider.PROVIDERNAME.supports_stream)
for message in response:
print(message, flush=True, end='')
```

### Guide: How can AI help me with writing code?
🤖 Ever dreamt of writing code at lightning speed, effortlessly crafting clean, bug-free functionalities? Welcome to the age of AI-powered coding, where your imagination merges seamlessly with the machine's precision. This guide unveils 4 powerful ways AI can become your secret weapon in the coding world, saving you time, resources, and frustration. 😎
- Language Agnostic? No Problem! Rewrite Code with Ease
- Improving Written Code: Polish Your Code Like a Pro
- Demystifying the Unreadable: Deobfuscating Unreadable Code
- Custom Functions: Your AI-Powered Genie in a Bottle
With these techniques you can streamline your coding process, improve code quality, and enhance productivity.
#### 1. Rewriting Code from One Language to Another
You can leverage AI to translate code from one programming language to another. For instance, you can easily convert JavaScript code to Python using the following approach:
**Prompt:**
```
Rewrite this JavaScript code to Python code. Use the aiohttp package:
```
```javascript
async function fetch_data(url) {
const response = await fetch(url);
const data = await response.json();
return data;
}
```
**AI Response:**
```python
import aiohttp
async def fetch_data(url):
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
data = await response.json()
return data
```
#### 2. Improving Written Code
You can ask the AI to enhance your code by adding comments, docstrings, and type hints for better readability and maintainability.
**Example Usage:**
```
Improve this Python code, add Google docstrings and type hints.
Respond with the complete code and don't use placeholders.
Keep the comments:
```
```python
def calculate_area(radius):
# Calculate the area of a circle
return 3.14 * radius ** 2
```
**AI Response:**
```python
def calculate_area(radius: float) -> float:
"""Calculate the area of a circle.
Args:
radius (float): The radius of the circle.
Returns:
float: The area of the circle.
"""
return 3.14 * radius ** 2
```
#### 3. Deobfuscating Unreadable Code
AI can help in making obfuscated or unreadable code more understandable and maintainable.
**Example Usage:**
```
Deobfuscate this JavaScript code. Improve the readability:
```
```javascript
(function() {
var a = 'Hello', b = 'World';
console.log(a + ' ' + b);
})();
```
**AI Response:**
```javascript
(function() {
var greeting = 'Hello';
var subject = 'World';
console.log(greeting + ' ' + subject);
})();
```
#### 4. Writing Custom Functions
You can also ask the AI to generate custom functions or help you with specific coding issues.
**Example Usage:**
```
Write a function that returns the sum of two numbers.
```
**AI Response:**
```python
def add_numbers(a, b):
"""Add two numbers together."""
return a + b
```
These are just a few ways AI can revolutionize your coding experience. As AI technology continues to evolve, the possibilities are endless. So, embrace the future, unlock the power of AI, and watch your coding potential soar! 👷‍♂️

### Guide: Running the G4F GUI on Your Smartphone
Running Python applications on your smartphone is possible with specialized apps like Pydroid. This tutorial will walk you through the process using an Android smartphone with Pydroid. Note that the steps may vary slightly for iPhone users due to differences in app names and availability.
<p align="center">
On the first screenshot is <strong>Pydroid</strong> and on the second is the <strong>Web UI</strong> in a browser
</p>
<p align="center">
<img src="/docs/guides/phone.png" />
<img src="/docs/guides/phone2.jpeg" />
</p>
1. **Install Pydroid from the Google Play Store:**
   - Navigate to the Google Play Store and search for "Pydroid 3 - IDE for Python 3" or use the following link: [Pydroid 3 - IDE for Python 3](https://play.google.com/store/apps/details?id=ru.iiec.pydroid3).
2. **Install the Pydroid Repository Plugin:**
- To enhance functionality, install the Pydroid repository plugin. Find it on the Google Play Store or use this link: [Pydroid Repository Plugin](https://play.google.com/store/apps/details?id=ru.iiec.pydroid3.quickinstallrepo).
3. **Adjust App Settings:**
- In the app settings for Pydroid, disable power-saving mode and ensure that the option to pause when not in use is also disabled. This ensures uninterrupted operation of your Python scripts.
4. **Install Required Packages:**
- Open Pip within the Pydroid app and install these necessary packages:
```
g4f flask pillow beautifulsoup4
```
5. **Create a New Python Script:**
- Within Pydroid, create a new Python script and input the following content:
```python
from g4f.cookies import set_cookies
set_cookies(".bing.com", {
"_U": "cookie value"
})
from g4f.gui import run_gui
run_gui("0.0.0.0", 8080, debug=True)
```
Replace `"cookie value"` with your actual cookie value from Bing if you intend to create images using Bing.
6. **Execute the Script:**
- Run the script by clicking on the play button or selecting the option to execute it.
7. **Access the GUI:**
   - Wait for the server to start, and once it's running, open the GUI at the URL shown in the output: [http://localhost:8080/chat/](http://localhost:8080/chat/)
By following these steps, you can successfully run the G4F GUI on your smartphone using Pydroid, allowing you to create and interact with graphical interfaces directly from your device.

# G4F - Interference API Usage Guide
## Table of Contents
- [Introduction](#introduction)
- [Running the Interference API](#running-the-interference-api)
- [From PyPI Package](#from-pypi-package)
- [From Repository](#from-repository)
- [Using the Interference API](#using-the-interference-api)
- [Basic Usage](#basic-usage)
- [Using the OpenAI Library](#using-the-openai-library)
- [With Requests Library](#with-requests-library)
- [Selecting a Provider](#selecting-a-provider)
- [Key Points](#key-points)
- [Conclusion](#conclusion)
## Introduction
The G4F Interference API is a powerful tool that allows you to serve other OpenAI integrations using G4F (Gpt4free). It acts as a proxy, translating requests intended for the OpenAI API into requests compatible with G4F providers. This guide will walk you through the process of setting up, running, and using the Interference API effectively.
## Running the Interference API
**You can run the Interference API in two ways:** using the PyPI package or from the repository.
### From PyPI Package
**To run the Interference API directly from the G4F PyPI package, use the following Python code:**
```python
from g4f.api import run_api
run_api()
```
### From Repository
**If you prefer to run the Interference API from the cloned repository, you have two options:**
1. **Using the command line:**
```bash
g4f api
```
2. **Using Python:**
```bash
python -m g4f.api.run
```
**Once running, the API will be accessible at:** `http://localhost:1337/v1`
**(Advanced) Bind to custom port:**
```bash
python -m g4f.cli api --bind "0.0.0.0:2400"
```
## Using the Interference API
### Basic Usage
**You can interact with the Interference API using curl commands for both text and image generation:**
**For text generation:**
```bash
curl -X POST "http://localhost:1337/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"messages": [
{
"role": "user",
"content": "Hello"
}
],
"model": "gpt-4o-mini"
}'
```
**For image generation:**
1. **url:**
```bash
curl -X POST "http://localhost:1337/v1/images/generate" \
-H "Content-Type: application/json" \
-d '{
"prompt": "a white siamese cat",
"model": "flux",
"response_format": "url"
}'
```
2. **b64_json**
```bash
curl -X POST "http://localhost:1337/v1/images/generate" \
-H "Content-Type: application/json" \
-d '{
"prompt": "a white siamese cat",
"model": "flux",
"response_format": "b64_json"
}'
```
---
### Using the OpenAI Library
**To utilize the Interference API with the OpenAI Python library, you can specify the `base_url` to point to your endpoint:**
```python
from openai import OpenAI
# Initialize the OpenAI client
client = OpenAI(
api_key="secret", # Set an API key (use "secret" if your provider doesn't require one)
base_url="http://localhost:1337/v1" # Point to your local or custom API endpoint
)
# Create a chat completion request
response = client.chat.completions.create(
model="gpt-4o-mini", # Specify the model to use
messages=[{"role": "user", "content": "Write a poem about a tree"}], # Define the input message
stream=True, # Enable streaming for real-time responses
)
# Handle the response
if isinstance(response, dict):
# Non-streaming response
print(response.choices[0].message.content)
else:
# Streaming response
for token in response:
content = token.choices[0].delta.content
if content is not None:
print(content, end="", flush=True)
```
**Notes:**
- The `api_key` is required by the OpenAI Python library. If your provider does not require an API key, you can set it to `"secret"`. This value will be ignored by providers in G4F.
- Replace `"http://localhost:1337/v1"` with the appropriate URL for your custom or local inference API.
---
### With Requests Library
**You can also send requests directly to the Interference API using the `requests` library:**
```python
import requests
url = "http://localhost:1337/v1/chat/completions"
body = {
"model": "gpt-4o-mini",
"stream": False,
"messages": [
{"role": "assistant", "content": "What can you do?"}
]
}
json_response = requests.post(url, json=body).json().get('choices', [])
for choice in json_response:
print(choice.get('message', {}).get('content', ''))
```
## Selecting a Provider
**Provider Selection**: [How to Specify a Provider?](https://gpt4free.github.io/docs/selecting_a_provider.html)
Selecting the right provider is a key step in configuring the G4F Interference API to suit your needs. Refer to the guide linked above for detailed instructions on choosing and specifying a provider.
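As a quick illustration, here is a sketch assuming the Interference API accepts an optional `provider` field in the request body (the provider name is only an example):
```python
import requests

response = requests.post(
    "http://localhost:1337/v1/chat/completions",
    json={
        "model": "gpt-4o-mini",
        "provider": "PollinationsAI",  # example provider name
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```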
## Key Points
- The Interference API translates OpenAI API requests into G4F provider requests.
- It can be run from either the PyPI package or the cloned repository.
- The API supports usage with the OpenAI Python library by changing the `base_url`.
- Direct requests can be sent to the API endpoints using libraries like `requests`.
- Both text and image generation are supported.
## Conclusion
The G4F Interference API provides a seamless way to integrate G4F with existing OpenAI-based applications and tools. By following this guide, you should now be able to set up, run, and use the Interference API effectively. Whether you're using it for text generation, image creation, or as a drop-in replacement for OpenAI in your projects, the Interference API offers flexibility and power for your AI-driven applications.
---
[Return to Home](/)

### G4F - Legacy API
#### ChatCompletion
```python
import g4f
g4f.debug.logging = True # Enable debug logging
g4f.debug.version_check = False # Disable automatic version checking
print(g4f.Provider.Gemini.params) # Print supported args for Gemini
# Using an automatically selected provider for the given model
## Streamed completion
response = g4f.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "Hello"}],
stream=True,
)
for message in response:
print(message, flush=True, end='')
## Normal response
response = g4f.ChatCompletion.create(
model=g4f.models.gpt_4,
messages=[{"role": "user", "content": "Hello"}],
) # Alternative model setting
print(response)
```
##### Completion
```python
import g4f
# Models supported by g4f.Completion:
allowed_models = [
'code-davinci-002',
'text-ada-001',
'text-babbage-001',
'text-curie-001',
'text-davinci-002',
'text-davinci-003'
]
response = g4f.Completion.create(
model='text-davinci-003',
prompt='say this is a test'
)
print(response)
```
##### Providers
```python
import g4f
# Print all available providers
print([
provider.__name__
for provider in g4f.Provider.__providers__
if provider.working
])
# Execute with a specific provider
response = g4f.ChatCompletion.create(
model="gpt-3.5-turbo",
provider=g4f.Provider.Aichat,
messages=[{"role": "user", "content": "Hello"}],
stream=True,
)
for message in response:
print(message)
```
##### Image Upload & Generation
Image upload and generation are supported by three main providers:
- **Microsoft Copilot & Other GPT-4 Providers:** Utilizes Microsoft's Image Creator.
- **Google Gemini:** Available for free accounts with IP addresses outside Europe.
- **OpenaiChat with GPT-4:** Accessible for users with a Plus subscription.
```python
import g4f
# Setting up the request for image creation
response = g4f.ChatCompletion.create(
model=g4f.models.default, # Using the default model
provider=g4f.Provider.Gemini, # Specifying the provider as Gemini
messages=[{"role": "user", "content": "Create an image like this"}],
image=open("images/g4f.png", "rb"), # Image input can be a data URI, bytes, PIL Image, or IO object
image_name="g4f.png" # Optional: specifying the filename
)
# Displaying the response
print(response)
from g4f.image import ImageResponse
# Get image links from response
for chunk in g4f.ChatCompletion.create(
model=g4f.models.default, # Using the default model
provider=g4f.Provider.OpenaiChat, # Specifying the provider as OpenaiChat
messages=[{"role": "user", "content": "Create images with dogs"}],
access_token="...", # Need a access token from a plus user
stream=True,
ignore_stream=True
):
if isinstance(chunk, ImageResponse):
print(chunk.images) # Print generated image links
print(chunk.alt) # Print used prompt for image generation
```
##### Async Support
To enhance speed and overall performance, execute providers asynchronously. The total execution time will be determined by the duration of the slowest provider's execution.
```python
import g4f
import asyncio
_providers = [
g4f.Provider.Aichat,
g4f.Provider.ChatBase,
g4f.Provider.Bing,
g4f.Provider.GptGo,
g4f.Provider.You,
g4f.Provider.Yqcloud,
]
async def run_provider(provider: g4f.Provider.BaseProvider):
try:
response = await g4f.ChatCompletion.create_async(
model=g4f.models.default,
messages=[{"role": "user", "content": "Hello"}],
provider=provider,
)
print(f"{provider.__name__}:", response)
except Exception as e:
print(f"{provider.__name__}:", e)
async def run_all():
calls = [
run_provider(provider) for provider in _providers
]
await asyncio.gather(*calls)
asyncio.run(run_all())
```
##### Proxy and Timeout Support
All providers support specifying a proxy and increasing timeout in the create functions.
```python
import g4f
response = g4f.ChatCompletion.create(
model=g4f.models.default,
messages=[{"role": "user", "content": "Hello"}],
proxy="http://host:port",
# or socks5://user:pass@host:port
timeout=120, # in secs
)
print(f"Result:", response)
```
[Return to Home](/)

### G4F - Local Usage Guide
### Table of Contents
1. [Introduction](#introduction)
2. [Required Dependencies](#required-dependencies)
3. [Basic Usage Example](#basic-usage-example)
4. [Supported Models](#supported-models)
5. [Performance Considerations](#performance-considerations)
6. [Troubleshooting](#troubleshooting)
#### Introduction
This guide explains how to use g4f to run language models locally. G4F (GPT4Free) allows you to interact with various language models on your local machine, providing a flexible and private solution for natural language processing tasks.
#### Required dependencies
**Make sure to install the required dependencies by running:**
```bash
pip install g4f[local]
```
or
```bash
pip install -U gpt4all
```
#### Basic usage example
```python
from g4f.local import LocalClient
client = LocalClient()
response = client.chat.completions.create(
    model='orca-mini-3b',
    messages=[{"role": "user", "content": "hi"}],
    stream=True
)
for token in response:
print(token.choices[0].delta.content or "")
```
Upon first use, there will be a prompt asking you if you wish to download the model. If you respond with `y`, g4f will go ahead and download the model for you.
You can also manually place supported models into `./g4f/local/models/`
**You can get a list of the current supported models by running:**
```python
from g4f.local import LocalClient
client = LocalClient()
client.list_models()
```
```python
{
"mistral-7b": {
"path": "mistral-7b-openorca.gguf2.Q4_0.gguf",
"ram": "8",
"prompt": "<|im_start|>user\n%1<|im_end|>\n<|im_start|>assistant\n",
"system": "<|im_start|>system\nYou are MistralOrca, a large language model trained by Alignment Lab AI. For multi-step problems, write out your reasoning for each step.\n<|im_end|>"
},
"mistral-7b-instruct": {
"path": "mistral-7b-instruct-v0.1.Q4_0.gguf",
"ram": "8",
"prompt": "[INST] %1 [/INST]",
"system": None
},
"gpt4all-falcon": {
"path": "gpt4all-falcon-newbpe-q4_0.gguf",
"ram": "8",
"prompt": "### Instruction:\n%1\n### Response:\n",
"system": None
},
"orca-2": {
"path": "orca-2-13b.Q4_0.gguf",
"ram": "16",
"prompt": None,
"system": None
},
"wizardlm-13b": {
"path": "wizardlm-13b-v1.2.Q4_0.gguf",
"ram": "16",
"prompt": None,
"system": None
},
"nous-hermes-llama2": {
"path": "nous-hermes-llama2-13b.Q4_0.gguf",
"ram": "16",
"prompt": "### Instruction:\n%1\n### Response:\n",
"system": None
},
"gpt4all-13b-snoozy": {
"path": "gpt4all-13b-snoozy-q4_0.gguf",
"ram": "16",
"prompt": None,
"system": None
},
"mpt-7b-chat": {
"path": "mpt-7b-chat-newbpe-q4_0.gguf",
"ram": "8",
"prompt": "<|im_start|>user\n%1<|im_end|>\n<|im_start|>assistant\n",
"system": "<|im_start|>system\n- You are a helpful assistant chatbot trained by MosaicML.\n- You answer questions.\n- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.\n- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>"
},
"orca-mini-3b": {
"path": "orca-mini-3b-gguf2-q4_0.gguf",
"ram": "4",
"prompt": "### User:\n%1\n### Response:\n",
"system": "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
},
"replit-code-3b": {
"path": "replit-code-v1_5-3b-newbpe-q4_0.gguf",
"ram": "4",
"prompt": "%1",
"system": None
},
"starcoder": {
"path": "starcoder-newbpe-q4_0.gguf",
"ram": "4",
"prompt": "%1",
"system": None
},
"rift-coder-7b": {
"path": "rift-coder-v0-7b-q4_0.gguf",
"ram": "8",
"prompt": "%1",
"system": None
},
"all-MiniLM-L6-v2": {
"path": "all-MiniLM-L6-v2-f16.gguf",
"ram": "1",
"prompt": None,
"system": None
},
"mistral-7b-german": {
"path": "em_german_mistral_v01.Q4_0.gguf",
"ram": "8",
"prompt": "USER: %1 ASSISTANT: ",
"system": "Du bist ein hilfreicher Assistent. "
}
}
```
#### Performance Considerations
**When running language models locally, consider the following:**
- RAM requirements vary by model size (see the 'ram' field in the model list).
- CPU/GPU capabilities affect inference speed.
- Disk space is needed to store the model files.
#### Troubleshooting
**Common issues and solutions:**
1. **Model download fails**: Check your internet connection and try again.
2. **Out of memory error**: Choose a smaller model or increase your system's RAM.
3. **Slow inference**: Consider using a GPU or a more powerful CPU.
[Return to Home](/)

### G4F - Media Documentation
This document outlines how to use the G4F (Generative Framework) library to generate and process various media types, including audio, images, and videos.
---
### 1. **Audio Generation and Transcription**
G4F supports audio generation through providers like PollinationsAI and audio transcription using providers like Microsoft_Phi_4.
#### **Generate Audio with PollinationsAI:**
```python
import asyncio
from g4f.client import AsyncClient
import g4f.Provider
async def main():
client = AsyncClient(provider=g4f.Provider.PollinationsAI)
response = await client.chat.completions.create(
model="openai-audio",
messages=[{"role": "user", "content": "Say good day to the world"}],
audio={"voice": "alloy", "format": "mp3"},
)
response.choices[0].message.save("alloy.mp3")
asyncio.run(main())
```
#### **More examples for Generate Audio:**
```python
from g4f.client import Client
from g4f.Provider import gTTS, EdgeTTS, Gemini, PollinationsAI
client = Client(provider=PollinationsAI)
response = client.media.generate("Hello", audio={"voice": "alloy", "format": "mp3"})
response.data[0].save("openai.mp3")
client = Client(provider=PollinationsAI)
response = client.media.generate("Hello", model="hypnosis-tracy")
response.data[0].save("hypnosis.mp3")
client = Client(provider=Gemini)
response = client.media.generate("Hello", model="gemini-audio")
response.data[0].save("gemini.ogx")
client = Client(provider=EdgeTTS)
response = client.media.generate("Hello", audio={"language": "en"})
response.data[0].save("edge-tts.mp3")
# The EdgeTTS provider also supports the audio parameters `rate`, `volume` and `pitch`
client = Client(provider=gTTS)
response = client.media.generate("Hello", audio={"language": "en-US"})
response.data[0].save("google-tts.mp3")
# The gTTS provider also supports the audio parameters `tld` and `slow`
```
#### **Transcribe an Audio File:**
Some providers in G4F support audio inputs in chat completions, allowing you to transcribe audio files by instructing the model accordingly. This example demonstrates how to use the `AsyncClient` to transcribe an audio file asynchronously:
```python
import asyncio
from g4f.client import AsyncClient
import g4f.Provider
async def main():
client = AsyncClient(provider=g4f.Provider.Microsoft_Phi_4)
with open("audio.wav", "rb") as audio_file:
response = await client.chat.completions.create(
messages="Transcribe this audio",
media=[[audio_file, "audio.wav"]],
modalities=["text"],
)
print(response.choices[0].message.content)
if __name__ == "__main__":
asyncio.run(main())
```
#### Explanation
- **Client Initialization**: An `AsyncClient` instance is created with a provider that supports audio inputs, such as `PollinationsAI` or `Microsoft_Phi_4`.
- **File Handling**: The audio file (`audio.wav`) is opened in binary read mode (`"rb"`) using a context manager (`with` statement) to ensure proper file closure after use.
- **API Call**: The `chat.completions.create` method is called with:
- `messages`: Containing a user message instructing the model to transcribe the audio.
- `media`: A list of lists, where each inner list contains the file object and its name (`[[audio_file, "audio.wav"]]`).
- `modalities=["text"]`: Specifies that the output should be text (the transcription).
- **Response**: The transcription is extracted from `response.choices[0].message.content` and printed.
#### Notes
- **Provider Support**: Ensure the chosen provider (e.g., `PollinationsAI` or `Microsoft_Phi_4`) supports audio inputs in chat completions. Not all providers may offer this functionality.
- **File Path**: Replace `"audio.wav"` with the path to your own audio file. The file format (e.g., WAV) should be compatible with the provider.
- **Model Selection**: If `g4f.models.default` does not support audio transcription, you may need to specify a model that does (consult the provider's documentation for supported models).
This example complements the guide by showcasing how to handle audio inputs asynchronously, expanding on the multimodal capabilities of the G4F AsyncClient API.
---
### 2. **Image Generation**
G4F can generate images from text prompts and provides options to retrieve images as URLs or base64-encoded strings.
#### **Generate an Image:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
client = AsyncClient()
response = await client.images.generate(
prompt="a white siamese cat",
model="flux",
response_format="url",
)
image_url = response.data[0].url
print(f"Generated image URL: {image_url}")
asyncio.run(main())
```
#### **Base64 Response Format:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
client = AsyncClient()
response = await client.images.generate(
prompt="a white siamese cat",
model="flux",
response_format="b64_json",
)
base64_text = response.data[0].b64_json
print(base64_text)
asyncio.run(main())
```
#### **Image Parameters:**
- **`width`**: Defines the width of the generated image.
- **`height`**: Defines the height of the generated image.
- **`n`**: Specifies the number of images to generate.
- **`response_format`**: Specifies the format of the response:
- `"url"`: Returns the URL of the image.
- `"b64_json"`: Returns the image as a base64-encoded JSON string.
  - (Default): Saves the image locally and returns a local URL.
#### **Example with Image Parameters:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
client = AsyncClient()
response = await client.images.generate(
prompt="a white siamese cat",
model="flux",
response_format="url",
width=512,
height=512,
n=2,
)
for image in response.data:
print(f"Generated image URL: {image.url}")
asyncio.run(main())
```
---
### 3. **Creating Image Variations**
You can generate variations of an existing image using G4F.
#### **Create Image Variations:**
```python
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import OpenaiChat
async def main():
client = AsyncClient(image_provider=OpenaiChat)
response = await client.images.create_variation(
prompt="a white siamese cat",
image=open("docs/images/cat.jpg", "rb"),
model="dall-e-3",
)
image_url = response.data[0].url
print(f"Generated image URL: {image_url}")
asyncio.run(main())
```
---
### 4. **Video Generation**
G4F supports video generation through providers like HuggingFaceMedia.
#### **Generate a Video:**
```python
import os
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import HuggingFaceMedia
async def main():
client = AsyncClient(
provider=HuggingFaceMedia,
api_key=os.getenv("HF_TOKEN") # Your API key here
)
video_models = client.models.get_video()
print("Available Video Models:", video_models)
result = await client.media.generate(
model=video_models[0],
prompt="G4F AI technology is the best in the world.",
response_format="url",
)
print("Generated Video URL:", result.data[0].url)
asyncio.run(main())
```
#### **Video Parameters:**
- **`resolution`**: Specifies the resolution of the generated video. Options include:
- `"480p"` (default)
- `"720p"`
- **`aspect_ratio`**: Defines the width-to-height ratio (e.g., `"16:9"`).
- **`n`**: Specifies the number of videos to generate.
- **`response_format`**: Specifies the format of the response:
- `"url"`: Returns the URL of the video.
- `"b64_json"`: Returns the video as a base64-encoded JSON string.
  - (Default): Saves the video locally and returns a local URL.
#### **Example with Video Parameters:**
```python
import os
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import HuggingFaceMedia
async def main():
client = AsyncClient(
provider=HuggingFaceMedia,
api_key=os.getenv("HF_TOKEN") # Your API key here
)
video_models = client.models.get_video()
print("Available Video Models:", video_models)
result = await client.media.generate(
model=video_models[0],
prompt="G4F AI technology is the best in the world.",
resolution="720p",
aspect_ratio="16:9",
n=1,
response_format="url",
)
print("Generated Video URL:", result.data[0].url)
asyncio.run(main())
```
---
**Key Points:**
- **Provider Selection**: Ensure the selected provider supports the desired media generation or processing task.
- **API Keys**: Some providers require API keys for authentication.
- **Response Formats**: Use `response_format` to control the output format (URL, base64, local file).
- **Parameter Usage**: Use parameters like `width`, `height`, `resolution`, `aspect_ratio`, and `n` to customize the generated media.

# G4F - Providers and Models
This document provides an overview of various AI providers and models, including text generation, image generation, and vision capabilities. It aims to help users navigate the diverse landscape of AI services and choose the most suitable option for their needs.
> **Note**: See our [Authentication Guide](https://gpt4free.github.io/docs/authentication.html) for per-provider authentication instructions.
## Table of Contents
- [Providers](#providers)
- [No auth required](#providers-not-needs-auth)
- [HuggingFace](#providers-huggingface)
- [HuggingSpace](#providers-huggingspace)
- [Local](#providers-local)
- [MiniMax](#providers-minimax)
- [Needs auth](#providers-needs-auth)
- [Models](#models)
- [Text generation models](#text-generation-models)
- [Image generation models](#image-generation-models)
- [Conclusion and Usage Tips](#conclusion-and-usage-tips)
---
## Providers
**Authentication types:**
- **Get API key** - Requires an API key for authentication. You need to obtain an API key from the provider's website to use their services.
- **Manual cookies** - Requires manual browser cookies setup. You need to be logged in to the provider's website to use their services.
- **Automatic cookies** - Browser cookies authentication that is automatically fetched. No manual setup needed.
- **Optional API key** - Works without authentication, but you can provide an API key for better rate limits or additional features. The service is usable without an API key.
- **API key / Cookies** - Supports both authentication methods. You can use either an API key or browser cookies for authentication.
- **No auth required** - No authentication needed. The service is publicly available without any credentials.
- **No auth / HAR file** - Supports both authentication methods. The service works without authentication, but you can also use HAR file authentication for potentially enhanced features or capabilities.
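In practice, the authentication type mainly determines whether you pass an `api_key` when creating a client. A rough sketch (provider names and the key are illustrative; see the tables below for the real requirements):
```python
from g4f.client import Client
import g4f.Provider

# A "No auth required" provider works without credentials:
client = Client(provider=g4f.Provider.Pizzagpt)

# A provider with a required or optional API key accepts one, e.g.:
# client = Client(provider=g4f.Provider.HuggingFaceAPI, api_key="your-api-key")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```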
**Symbols:**
- ✔ - Feature is supported
- ❌ - Feature is not supported
- ✔ _**(n+)**_ - Number of additional models supported by the provider but not publicly listed
---
### Providers No auth required
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[playground.allenai.org](https://playground.allenai.org)|No auth required|`g4f.Provider.AllenAI`|`tulu-3-405b, olmo-2-13b, tulu-3-1-8b, tulu-3-70b, olmoe-0125, olmo-2-32b`|❌|❌|❌|`olmo-4-synthetic`|![](https://img.shields.io/badge/Active-brightgreen)|
|[ai-arta.com](https://ai-arta.com)|No auth required|`g4f.Provider.ARTA`|❌|`flux` _**(16+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[blackbox.ai](https://www.blackbox.ai)|No auth / HAR file|`g4f.Provider.Blackbox`|`blackboxai, blackboxai-pro, gpt-4o-mini, deepseek-chat, deepseek-v3, deepseek-r1, gpt-4o, o1, o3-mini, claude-3.7-sonnet, llama-3.3-70b, mixtral-small-24b, qwq-32b` _**(40+)**_|`flux`|❌|❌|`blackboxai, gpt-4o, o1, o3-mini, deepseek-v3` _**(7+)**_|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(7+)**_|❌|❌|❌|❌|![Error](https://img.shields.io/badge/HTTPError-f48d37)|
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, o1`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, llama-3.3-70b, claude-3-haiku, o3-mini, mixtral-small-24b`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-24b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b, yi-34b, qwen-2-72b, dolphin-2.6, dolphin-2.9, dbrx-instruct, airoboros-70b, lzlv-70b, wizardlm-2-7b, mixtral-8x22b, minicpm-2.5`|❌|❌|❌|`llama-3.2-90b, minicpm-2.5`|![](https://img.shields.io/badge/Active-brightgreen)|
|[dynaspark.onrender.com](https://dynaspark.onrender.com)|No auth required|`g4f.Provider.Dynaspark`|`gemini-1.5-flash, gemini-2.0-flash`|❌|❌|❌|`gemini-1.5-flash, gemini-2.0-flash`|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[glider.so](https://glider.so)|No auth required|`g4f.Provider.Glider`|`llama-3.1-70b, llama-3.1-8b, llama-3.2-3b, deepseek-r1`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[goabror.uz](https://goabror.uz)|No auth required|`g4f.Provider.Goabror`|`gpt-4`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[hailuo.ai](https://www.hailuo.ai)|No auth required|`g4f.Provider.HailuoAI`|`MiniMax` _**(1+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[editor.imagelabs.net](https://editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|`sdxl-turbo`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b, command-a`|`flux-dev, flux-schnell, sd-3.5`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`claude-3-haiku, claude-3-opus, claude-3.5-sonnet, deepseek-r1, deepseek-chat, gemini-exp, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash-thinking, gpt-4, gpt-4o, gpt-4o-mini, llama-3-70b, llama-3-8b, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.2-90b, llama-3.3-70b, mixtral-8x7b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[lambda.chat](https://lambda.chat)|No auth required|`g4f.Provider.LambdaChat`|`deepseek-v3, deepseek-r1, hermes-3, nemotron-70b, llama-3.3-70b`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`claude-3.5-sonnet, claude-3.7-sonnet, claude-3.7-sonnet-thinking, claude-3-opus, claude-3-sonnet, deepseek-r1, deepseek-v3, gemini-2.0-flash, gemini-2.0-flash-thinking, gemini-2.0-pro, gpt-4, gpt-4o, gpt-4o-mini, grok-3, grok-3-r1, o3-mini`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini, deepseek-v3`|❌|❌|❌|`gpt-4o-mini`|![](https://img.shields.io/badge/Active-brightgreen)|
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o-mini, gpt-4o, o1-mini, qwen-2.5-coder-32b, llama-3.3-70b, mistral-nemo, llama-3.1-8b, deepseek-r1, phi-4, qwq-32b, deepseek-v3, llama-3.2-11b` _**(9+)**_|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|`gpt-4o-audio` _**(3+)**_|❌|`gpt-4o, gpt-4o-mini, o1-mini, o3-mini`|![](https://img.shields.io/badge/Active-brightgreen)|
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsImage`|❌|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[teach-anything.com](https://www.teach-anything.com)|No auth required|`g4f.Provider.TeachAnything`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat.typegpt.net](https://chat.typegpt.net)|No auth required|`g4f.Provider.TypeGPT`|`gpt-3.5-turbo, o3-mini, deepseek-r1, deepseek-v3, evil, o1`|❌|❌|❌|`gpt-3.5-turbo, o3-mini`|![](https://img.shields.io/badge/Active-brightgreen)|
|[you.com](https://you.com)|[Manual cookies](https://you.com)|`g4f.Provider.You`|✔|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[websim.ai](https://websim.ai)|No auth required|`g4f.Provider.Websim`|`gemini-1.5-pro, gemini-1.5-flash`|`flux`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat9.yqcloud.top](https://chat9.yqcloud.top)|No auth required|`g4f.Provider.Yqcloud`|`gpt-4`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
---
### Providers HuggingFace
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[huggingface.co/chat](https://huggingface.co/chat)|[Manual cookies](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, deepseek-r1, qwq-32b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev, flux-schnell`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[huggingface.co/chat](https://huggingface.co/chat)|[API key / Cookies](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFace`|✔ _**(47+)**_|✔ _**(9+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[api-inference.huggingface.co](https://api-inference.huggingface.co)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFaceAPI`|✔ _**(9+)**_|✔ _**(2+)**_|❌|❌|✔ _**(1+)**_|![](https://img.shields.io/badge/Active-brightgreen)|
---
### Providers HuggingSpace
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Dev`|❌|`flux, flux-dev`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Schnell`|❌|`flux, flux-schnell`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.CohereForAI_C4AI_Command`|`command-r, command-r-plus, command-r7b`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[huggingface.co/spaces/deepseek-ai/Janus-Pro-7B](https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.DeepseekAI_Janus_Pro_7b`|✔|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[roxky-flux-1-dev.hf.space](https://roxky-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.G4F`|✔ _**(1+)**_|✔ _**(4+)**_|❌|❌|✔ _**(1+)**_|![](https://img.shields.io/badge/Active-brightgreen)|
|[microsoft-phi-4-multimodal.hf.space](https://microsoft-phi-4-multimodal.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Microsoft_Phi_4`|`phi-4`|❌|❌|❌|`phi-4`|![](https://img.shields.io/badge/Active-brightgreen)|
|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_QVQ_72B`|`qvq-72b`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[qwen-qwen2-5.hf.space](https://qwen-qwen2-5.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5`|`qwen-2.5`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[qwen-qwen2-5-1m-demo.hf.space](https://qwen-qwen2-5-1m-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5M`|`qwen-2.5-1m`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[qwen-qwen2-5-max-demo.hf.space](https://qwen-qwen2-5-max-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5_Max`|`qwen-2-5-max`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_72B`|`qwen-2-72b`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.StabilityAI_SD35Large`|❌|`sd-3.5`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Voodoohop_Flux1Schnell`|❌|`flux, flux-schnell`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
---
### Providers Local
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[]( )|No auth required|`g4f.Provider.Local`|✔|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[ollama.com](https://ollama.com)|No auth required|`g4f.Provider.Ollama`|✔|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
---
### Providers MiniMax
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[hailuo.ai/chat](https://www.hailuo.ai/chat)|[Get API key](https://intl.minimaxi.com/user-center/basic-information/interface-key)|`g4f.Provider.MiniMax`|`MiniMax` _**(1+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
---
### Providers Needs Auth
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[console.anthropic.com](https://console.anthropic.com)|[Get API key](https://console.anthropic.com/settings/keys)|`g4f.Provider.Anthropic`|✔ _**(8+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[bing.com/images/create](https://www.bing.com/images/create)|[Manual cookies](https://www.bing.com)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[cablyai.com/chat](https://cablyai.com/chat)|[Get API key](https://cablyai.com)|`g4f.Provider.CablyAI`|✔|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[inference.cerebras.ai](https://inference.cerebras.ai/)|[Get API key](https://cloud.cerebras.ai)|`g4f.Provider.Cerebras`|✔ _**(3+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[copilot.microsoft.com](https://copilot.microsoft.com)|[Manual cookies](https://copilot.microsoft.com)|`g4f.Provider.CopilotAccount`|✔ _**(1+)**_|✔ _**(1+)**_|❌|❌|✔ _**(1+)**_|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com](https://deepinfra.com)|[Get API key](https://deepinfra.com/dash/api_keys)|`g4f.Provider.DeepInfra`|✔ _**(17+)**_|✔ _**(6+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[platform.deepseek.com](https://platform.deepseek.com)|[Get API key](https://platform.deepseek.com/api_keys)|`g4f.Provider.DeepSeek`|✔ _**(1+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini-2.0`|`gemini-2.0`|❌|❌|`gemini-2.0`|![](https://img.shields.io/badge/Active-brightgreen)|
|[ai.google.dev](https://ai.google.dev)|[Get API key](https://aistudio.google.com/u/0/apikey)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|❌|❌|`gemini-1.5-pro`|![](https://img.shields.io/badge/Active-brightgreen)|
|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|[Manual cookies](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔ _**(3+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[github.com/copilot](https://github.com/copilot)|[Manual cookies](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔ _**(4+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[glhf.chat](https://glhf.chat)|[Get API key](https://glhf.chat/user-settings/api)|`g4f.Provider.GlhfChat`|✔ _**(22+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[console.groq.com/playground](https://console.groq.com/playground)|[Get API key](https://console.groq.com/keys)|`g4f.Provider.Groq`|✔ _**(18+)**_|❌|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAIAccount`|❌|`meta-ai`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[designer.microsoft.com](https://designer.microsoft.com)|[Manual cookies](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[platform.openai.com](https://platform.openai.com)|[Get API key](https://platform.openai.com/settings/organization/api-keys)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgpt.com](https://chatgpt.com)|[Manual cookies](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4` _**(8+)**_|✔ _**(1)**_|❌|❌|✔ _**(8+)**_|![](https://img.shields.io/badge/Active-brightgreen)|
|[perplexity.ai](https://www.perplexity.ai)|[Get API key](https://www.perplexity.ai/settings/api)|`g4f.Provider.PerplexityApi`|✔ _**(6+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat.reka.ai](https://chat.reka.ai)|[Manual cookies](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[replicate.com](https://replicate.com)|[Get API key](https://replicate.com/account/api-tokens)|`g4f.Provider.Replicate`|✔ _**(1+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[beta.theb.ai](https://beta.theb.ai)|[Get API key](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔ _**(21+)**_|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|[Manual cookies](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[console.x.ai](https://console.x.ai)|[Get API key](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
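
Any provider class from these tables can be pinned on the client. A minimal sketch using `PollinationsAI` from the no-auth table above (the model name and prompt are placeholders):

```python
from g4f.client import Client
from g4f.Provider import PollinationsAI

# Pin requests to a specific provider instead of relying on automatic selection
client = Client(provider=PollinationsAI)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```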
---
## Models
### Text generation models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-3.5-turbo|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/engines/gpt-3.5-turbo)|
|gpt-4|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|6+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|4+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-mini|OpenAI|1+ Providers|[openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|o3-mini|OpenAI|4+ Providers|[openai.com](https://openai.com/index/openai-o3-mini/)|
|gigachat|GigaChat|1+ Providers|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|
|meta-ai|Meta|1+ Providers|[ai.meta.com](https://ai.meta.com/)|
|llama-2-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
|llama-3-8b|Meta Llama|2+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
|llama-3-70b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Meta-Llama-3-70B)|
|llama-3.1-8b|Meta Llama|6+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-70b|Meta Llama|3+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-405b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.1-405B)|
|llama-3.2-1b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-1B)|
|llama-3.2-3b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-3B)|
|llama-3.2-11b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llama-3.2-90b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)|
|llama-3.3-70b|Meta Llama|8+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-3/)|
|mixtral-8x7b|Mistral|1+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mixtral-8x22b|Mistral|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1)|
|mistral-nemo|Mistral|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|mixtral-small-24b|Mistral|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)|
|hermes-3|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B-FP8)|
|phi-3.5-mini|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|
|phi-4|Microsoft|3+ Providers|[techcommunity.microsoft.com](https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090)|
|wizardlm-2-7b|Microsoft|1+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|wizardlm-2-8x22b|Microsoft|2+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|gemini-exp|Google DeepMind|1+ Providers|[blog.google](https://blog.google/feed/gemini-exp-1206/)|
|gemini-1.5-flash|Google DeepMind|7+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-1.5-pro|Google DeepMind|6+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemini-2.0|Google DeepMind|1+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/)|
|gemini-2.0-flash|Google DeepMind|3+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-2.0-flash-thinking|Google DeepMind|1+ Providers|[ai.google.dev](https://ai.google.dev/gemini-api/docs/thinking-mode)|
|gemini-2.0-pro|Google DeepMind|1+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|claude-3-haiku|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
|claude-3-sonnet|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3-opus|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3.5-sonnet|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|claude-3.7-sonnet|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/claude/sonnet)|
|claude-3.7-sonnet-thinking|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/claude/sonnet)|
|reka-core|Reka AI|1+ Providers|[reka.ai](https://www.reka.ai/ourmodels)|
|blackboxai|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|blackboxai-pro|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|command-r|CohereForAI|1+ Providers|[docs.cohere.com](https://docs.cohere.com/docs/command-r-plus)|
|command-r-plus|CohereForAI|2+ Providers|[docs.cohere.com](https://docs.cohere.com/docs/command-r-plus)|
|command-r7b|CohereForAI|1+ Providers|[huggingface.co](https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024)|
|command-a|CohereForAI|1+ Providers|[docs.cohere.com](https://docs.cohere.com/v2/docs/command-a)|
|qwen-1.5-7b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen1.5-7B)|
|qwen-2-72b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-72B)|
|qwen-2-vl-7b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-VL-7B)|
|qwen-2.5-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)|
|qwen-2.5-coder-32b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
|qwen-2.5-1m|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-1M-Demo)|
|qwen-2-5-max|Qwen|1+ Providers|[qwen-ai.com](https://www.qwen-ai.com/2-5-max/)|
|qwq-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qvq-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/QVQ-72B-Preview)|
|pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
|deepseek-chat|DeepSeek|2+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-v3|DeepSeek|6+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|10+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|janus-pro-7b|DeepSeek|2+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/docs/janus-pro-7b)|
|grok-3|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-3)|
|grok-3-r1|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-3)|
|sonar|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-pro|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-reasoning|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-reasoning-pro|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|r1-1776|Perplexity AI|1+ Providers|[perplexity.ai](https://www.perplexity.ai/hub/blog/open-sourcing-r1-1776)|
|nemotron-70b|Nvidia|3+ Providers|[build.nvidia.com](https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct)|
|dbrx-instruct|Databricks|1+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|glm-4|THUDM|1+ Providers|[github.com/THUDM](https://github.com/THUDM/GLM-4)|
|mini_max|MiniMax|1+ Providers|[hailuo.ai](https://www.hailuo.ai/)|
|yi-34b|01-ai|1+ Providers|[huggingface.co](https://huggingface.co/01-ai/Yi-34B-Chat)|
|dolphin-2.6|Cognitive Computations|1+ Providers|[huggingface.co](https://huggingface.co/cognitivecomputations/dolphin-2.6-mixtral-8x7b)|
|dolphin-2.9|Cognitive Computations|1+ Providers|[huggingface.co](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b)|
|airoboros-70b|DeepInfra|1+ Providers|[huggingface.co](https://huggingface.co/deepinfra/airoboros-70b)|
|lzlv-70b|Lizpreciatior|1+ Providers|[huggingface.co](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)|
|minicpm-2.5|OpenBMB|1+ Providers|[huggingface.co](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5)|
|tulu-3-1-8b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|tulu-3-70b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|tulu-3-405b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|olmo-1-7b|Ai2|1+ Providers|[allenai.org](https://allenai.org/olmo)|
|olmo-2-13b|Ai2|1+ Providers|[allenai.org](https://allenai.org/olmo)|
|olmo-2-32b|Ai2|1+ Providers|[allenai.org](https://allenai.org/olmo)|
|olmo-4-synthetic|Ai2|1+ Providers|[allenai.org](https://allenai.org/olmo)|
|lfm-40b|Liquid AI|1+ Providers|[liquid.ai](https://www.liquid.ai/liquid-foundation-models)|
|evil|Evil Mode - Experimental|2+ Providers|[]( )|
---
### Image generation models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|sdxl-turbo|Stability AI|2+ Providers|[huggingface.co](https://huggingface.co/stabilityai/sdxl-turbo)|
|sd-3.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/stable-diffusion-3.5-large)|
|flux|Black Forest Labs|5+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
|flux-pro|Black Forest Labs|1+ Providers|[huggingface.co](https://huggingface.co/enhanceaiteam/FLUX.1-Pro)|
|flux-dev|Black Forest Labs|4+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-dev)|
|flux-schnell|Black Forest Labs|4+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
|dall-e-3|OpenAI|5+ Providers|[openai.com](https://openai.com/index/dall-e/)|
|midjourney|Midjourney|1+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|
---
### Audio generation models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-4o-audio|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-audio)|
---
## Conclusion and Usage Tips
This document provides a comprehensive overview of various AI providers and models available for text generation, image generation, and vision tasks. **When choosing a provider or model, consider the following factors:**
1. **Availability**: Check the status of the provider to ensure it's currently active and accessible (see the sketch below).
2. **Model Capabilities**: Different models excel at different tasks. Choose a model that best fits your specific needs, whether it's text generation, image creation, or vision-related tasks.
3. **Authentication**: Some providers require authentication, while others don't. Consider this when selecting a provider for your project.
4. **Vision Models**: For tasks requiring image understanding or multimodal interactions, look for providers offering vision models.
Remember to stay updated with the latest developments in the AI field, as new models and providers are constantly emerging and evolving.
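
To check availability programmatically rather than from the tables, the provider classes can be inspected at runtime. A minimal sketch, assuming the provider classes expose a `working` flag and a `__providers__` listing as in recent g4f versions:

```python
import g4f.Provider

# Print every provider currently flagged as working
for provider in g4f.Provider.__providers__:
    if getattr(provider, "working", False):
        print(provider.__name__)
```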
---
[Return to Home](/)

---
# PydanticAI Integration with G4F Client
This README provides an overview of how to integrate PydanticAI with the G4F client to create an agent that interacts with a language model. With this setup, you'll be able to patch PydanticAI to use G4F models, enable debugging, and run simple agent-based interactions synchronously. However, please note that tool calls within AI requests are currently **not fully supported** in this environment.
## Requirements
Before starting, make sure you have the following Python dependencies installed:
- `g4f`: A client that interfaces with various LLMs.
- `pydantic_ai`: A module that provides integration with Pydantic-based models.
### Installation
To install these dependencies, you can use `pip`:
```bash
pip install g4f pydantic_ai
```
## Step-by-Step Setup
### 1. Patch PydanticAI to Use G4F Models
In order to use PydanticAI with G4F models, you need to apply the necessary patch to the client. This can be done by importing `patch_infer_model` from `g4f.integration.pydantic_ai`. The `api_key` parameter is optional, so if you have one, you can provide it. If not, the system will proceed without it.
```python
from g4f.integration.pydantic_ai import patch_infer_model
patch_infer_model(api_key="your_api_key_here") # Optional
```
If you don't have an API key, simply omit the `api_key` argument.
### 2. Enable Debug Logging
For troubleshooting and monitoring purposes, you may want to enable debug logging. This can be achieved by setting `g4f.debug.logging` to `True`.
```python
import g4f.debug
g4f.debug.logging = True
```
This will log detailed information about the internal processes and interactions.
### 3. Create a Simple Agent
Now you are ready to create a simple agent that can interact with the LLM. The agent is initialized with a model, and you can also define a system prompt. Here's an example where a basic agent is created with the model `g4f:Gemini:Gemini` and a simple system prompt:
```python
from pydantic_ai import Agent
# Define the agent
agent = Agent(
    'g4f:Gemini:Gemini',  # g4f:provider:model_name or g4f:model_name
    system_prompt='Be concise, reply with one sentence.',
)
```
### 4. Run the Agent Synchronously
Once the agent is set up, you can run it synchronously to interact with the LLM. The `run_sync` method sends a query to the LLM and returns the result.
```python
# Run the agent synchronously with a user query
result = agent.run_sync('Where does "hello world" come from?')
# Output the response
print(result.data)
```
In this example, the agent will send the system prompt along with the user query (`"Where does 'hello world' come from?"`) to the LLM. The LLM will process the request and return a concise answer.
### Example Output
```bash
The phrase "hello world" is commonly used in programming tutorials to demonstrate basic syntax and the concept of outputting text to the screen.
```
## Tool Calls and Limitations
**Important**: Tool calls (such as applying external functions or calling APIs within the AI request itself) are **currently not fully supported**. If your system relies on invoking specific external tools or functions during the conversation with the model, you will need to implement this functionality outside the agent's context or handle it before or after the agent's request.
For example, you can process your query or interact with external systems before passing the data to the agent.
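
For instance, an external lookup can be resolved first and its result injected into the prompt. A minimal sketch, where `get_weather` is a hypothetical stand-in for your own external call:

```python
from pydantic_ai import Agent

def get_weather(city: str) -> str:
    # Hypothetical external call, executed outside the agent
    return "12°C, light rain"

agent = Agent('g4f:Gemini:Gemini', system_prompt='Be concise, reply with one sentence.')

# Resolve the external data first, then hand it to the model
weather = get_weather("Berlin")
result = agent.run_sync(f"The weather in Berlin is {weather}. Should I take an umbrella?")
print(result.data)
```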
---
### Simple Example without `patch_infer_model`
```python
from pydantic_ai import Agent
from g4f.integration.pydantic_ai import AIModel
agent = Agent(
AIModel("gpt-4o"),
)
result = agent.run_sync('Are you gpt-4o?')
print(result.data)
```
This example shows how to initialize an agent with a specific model (`gpt-4o`) and run it synchronously.
---
### Full Example with Tool Calls:
```python
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models import ModelSettings
from g4f.integration.pydantic_ai import AIModel
from g4f.Provider import PollinationsAI
class MyModel(BaseModel):
    city: str
    country: str

agent = Agent(AIModel(
    "gpt-4o",  # Specify the provider and model
    PollinationsAI  # Use a supported provider to handle tool-based response formatting
), result_type=MyModel, model_settings=ModelSettings(temperature=0))

if __name__ == '__main__':
    result = agent.run_sync('The windy city in the US of A.')
    print(result.data)
    print(result.usage())
```
This example demonstrates the use of a custom Pydantic model (`MyModel`) to capture structured data (city and country) from the response and running the agent with specific model settings.
---
### Support for Models/Providers without Tool Call Support
For models/providers that do not fully support tool calls or lack a direct API for structured output, the `ToolSupportProvider` can be used to bridge the gap. This provider ensures that the agent properly formats the response, even when the model itself doesn't have built-in support for structured outputs. It does so by leveraging a tool list and creating a response format when only one tool is used.
### Example for Models/Providers without Tool Support (Single Tool Usage)
```python
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models import ModelSettings
from g4f.integration.pydantic_ai import AIModel
from g4f.providers.tool_support import ToolSupportProvider
from g4f import debug
debug.logging = True
# Define a custom model for structured output (e.g., city and country)
class MyModel(BaseModel):
    city: str
    country: str

# Create the agent for a model without built-in tool support (using one tool)
agent = Agent(AIModel(
    "OpenaiChat:gpt-4o",  # Specify the provider and model
    ToolSupportProvider  # Use ToolSupportProvider to handle tool-based response formatting
), result_type=MyModel, model_settings=ModelSettings(temperature=0))

if __name__ == '__main__':
    # Run the agent with a query to extract information (e.g., city and country)
    result = agent.run_sync('European city with the bear.')
    print(result.data)  # Structured output of city and country
    print(result.usage())  # Usage statistics
```
### Explanation:
- **`ToolSupportProvider` as a Bridge:** The `ToolSupportProvider` acts as a bridge between the agent and the model, ensuring that the response is formatted into a structured output, even if the model doesn't have an API that directly supports such formatting.
- For instance, if the model generates raw text or unstructured data, the `ToolSupportProvider` will convert this into the expected format (like `MyModel`), allowing the agent to process it as structured data.
- **Model Initialization:** We initialize the agent with the `OpenaiChat:gpt-4o` model, which may not have a built-in API for returning structured outputs. Instead, it relies on the `ToolSupportProvider` to format the output.
- **Custom Result Model:** We define a custom Pydantic model (`MyModel`) to capture the expected output in a structured way (e.g., `city` and `country` fields). This helps ensure that even when the model doesn't support structured data, the agent can interpret and format it.
- **Debug Logging:** The `g4f.debug.logging` is enabled to provide detailed logs for troubleshooting and monitoring the agent's execution.
### Example Output:
```bash
city='Berlin'
country='Germany'
usage={'prompt_tokens': 15, 'completion_tokens': 50}
```
### Key Points:
- **`ToolSupportProvider` Role:** The `ToolSupportProvider` ensures that the agent formats the raw or unstructured response from the model into a structured format, even if the model itself lacks built-in support for structured data.
- **Single Tool Usage:** The `ToolSupportProvider` is particularly useful when only one tool is used by the model, and it needs to format or transform the model's output into a structured response without additional tools.
### Notes:
- This approach is ideal for models that return unstructured text or data that needs to be transformed into a structured format (e.g., Pydantic models).
- The `ToolSupportProvider` bridges the gap between the model's output and the expected structured format, enabling seamless integration into workflows that require structured responses.
---
## LangChain Integration Example
For users working with LangChain, here is an example demonstrating how to integrate G4F models into a LangChain environment:
```python
from g4f.integration.langchain import ChatAI
import g4f.debug
# Enable debugging logs
g4f.debug.logging = True
llm = ChatAI(
    model="llama3-70b-8192",
    provider="Groq",
    api_key=""  # Optionally add your API key here
)

messages = [
    {"role": "user", "content": "2 🦜 2"},
    {"role": "assistant", "content": "4 🦜"},
    {"role": "user", "content": "2 🦜 3"},
    {"role": "assistant", "content": "5 🦜"},
    {"role": "user", "content": "3 🦜 4"},
]
response = llm.invoke(messages)
assert(response.content == "7 🦜")
```
This example shows how to use LangChain's `ChatAI` integration to create a conversational agent with a G4F model. The interaction takes place with the given messages and the agent processes them step-by-step to return the expected output.
---
## Conclusion
By following these steps, you have successfully integrated PydanticAI models into the G4F client, created an agent, and enabled debugging. This allows you to conduct conversations with the language model, pass system prompts, and retrieve responses synchronously.
### Notes:
- The `api_key` parameter when calling `patch_infer_model` is optional. If you don't provide it, the system will still work without an API key.
- Modify the agent's `system_prompt` to suit the nature of the conversation you wish to have.
- **Tool calls within AI requests are not fully supported** at the moment. Use the agent's basic functionality for generating responses and handle external calls separately.
For further customization and advanced use cases, refer to the G4F and PydanticAI documentation.

---
# G4F Requests API Guide
## Table of Contents
- [Introduction](#introduction)
- [Getting Started](#getting-started)
- [Installing Dependencies](#installing-dependencies)
- [Making API Requests](#making-api-requests)
- [Text Generation](#text-generation)
- [Using the Chat Completions Endpoint](#using-the-chat-completions-endpoint)
- [Streaming Text Generation](#streaming-text-generation)
- [Model Retrieval](#model-retrieval)
- [Fetching Available Models](#fetching-available-models)
- [Image Generation](#image-generation)
- [Creating Images with AI](#creating-images-with-ai)
- [Advanced Usage](#advanced-usage)
## Introduction
Welcome to the G4F Requests API Guide. This guide shows how to leverage AI capabilities directly from your Python applications using HTTP requests, taking you through the steps of setting up requests to interact with AI models for a variety of tasks, from text generation to image creation.
## Getting Started
### Installing Dependencies
Ensure you have the `requests` library installed in your environment. You can install it via `pip` if needed:
```bash
pip install requests
```
This guide provides examples on how to make API requests using Python's `requests` library, focusing on tasks such as text and image generation, as well as retrieving available models.
## Making API Requests
Before diving into specific functionalities, it's essential to understand how to structure your API requests. All endpoints assume that your server is running locally at `http://localhost`. If your server is running on a different port, adjust the URLs accordingly (e.g., `http://localhost:8000`).
## Text Generation
### Using the Chat Completions Endpoint
To generate text responses using the chat completions endpoint, follow this example:
```python
import requests
# Define the payload
payload = {
    "model": "gpt-4o",
    "temperature": 0.9,
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
}

# Send the POST request to the chat completions endpoint
response = requests.post("http://localhost/v1/chat/completions", json=payload)

# Check if the request was successful
if response.status_code == 200:
    # Print the response text
    print(response.text)
else:
    print(f"Request failed with status code {response.status_code}")
    print("Response:", response.text)
```
**Explanation:**
- This request sends a conversation context to the model, which in turn generates and returns a response.
- The `temperature` parameter controls the randomness of the output.
### Streaming Text Generation
For scenarios where you want to receive partial responses or stream data as it's generated, you can utilize the streaming capabilities of the API. Here's how you can implement streaming text generation using Python's `requests` library:
```python
import requests
import json
def fetch_response(url, model, messages):
    """
    Sends a POST request to the streaming chat completions endpoint.

    Args:
        url (str): The API endpoint URL.
        model (str): The model to use for text generation.
        messages (list): A list of message dictionaries.

    Returns:
        requests.Response: The streamed response object.
    """
    payload = {"model": model, "messages": messages, "stream": True}
    headers = {
        "Content-Type": "application/json",
        "Accept": "text/event-stream",
    }
    response = requests.post(url, headers=headers, json=payload, stream=True)
    if response.status_code != 200:
        raise Exception(
            f"Failed to send message: {response.status_code} {response.text}"
        )
    return response

def process_stream(response):
    """
    Processes the streamed response and prints the extracted messages.

    Args:
        response (requests.Response): The streamed response object.
    """
    for line in response.iter_lines():
        if line:
            line = line.decode("utf-8")
            if line == "data: [DONE]":
                print("\n\nConversation completed.")
                break
            if line.startswith("data: "):
                try:
                    data = json.loads(line[6:])
                    message = data.get("choices", [{}])[0].get("delta", {}).get("content")
                    if message:
                        print(message, end="", flush=True)
                except json.JSONDecodeError as e:
                    print(f"Error decoding JSON: {e}")
                    continue

# Define the API endpoint
chat_url = "http://localhost:8080/v1/chat/completions"

# Define the payload
model = ""
messages = [{"role": "user", "content": "Hello, how are you?"}]

try:
    # Fetch the streamed response
    response = fetch_response(chat_url, model, messages)

    # Process the streamed response
    process_stream(response)
except Exception as e:
    print(f"An error occurred: {e}")
```
**Explanation:**
- **`fetch_response` Function:**
- Sends a POST request to the streaming chat completions endpoint with the specified model and messages.
  - Sets the `stream` parameter to `True` to enable streaming.
- Raises an exception if the request fails.
- **`process_stream` Function:**
- Iterates over each line in the streamed response.
- Decodes the line and checks for the termination signal `"data: [DONE]"`.
- Parses lines that start with `"data: "` to extract the message content.
- **Main Execution:**
- Defines the API endpoint, model, and messages.
- Fetches and processes the streamed response.
- Retrieves and prints messages.
**Usage Tips:**
- Ensure your local server supports streaming.
- Adjust the `chat_url` if your local server runs on a different port or path.
- Use threading or asynchronous programming for handling streams in real-time applications.
## Model Retrieval
### Fetching Available Models
To retrieve a list of available models, you can use the following function:
```python
import requests

def fetch_models():
    """
    Retrieves the list of available models from the API.

    Returns:
        dict: A dictionary containing available models or an error message.
    """
    url = "http://localhost/v1/models/"
    try:
        response = requests.get(url)
        response.raise_for_status()  # Raise an error for HTTP issues
        return response.json()  # Parse and return the JSON response
    except Exception as e:
        return {"error": str(e)}  # Return an error message if something goes wrong

models = fetch_models()
print(models)
```
**Explanation:**
- The `fetch_models` function makes a GET request to the models endpoint.
- It handles HTTP errors and returns a parsed JSON response containing available models or an error message.
## Image Generation
### Creating Images with AI
The following function demonstrates how to generate images using a specified model:
```python
import requests

def generate_image(prompt: str, model: str = "flux-4o"):
    """
    Generates an image based on the provided text prompt.

    Args:
        prompt (str): The text prompt for image generation.
        model (str, optional): The model to use for image generation. Defaults to "flux-4o".

    Returns:
        tuple: A tuple containing the image URL, caption, and the full response.
    """
    payload = {
        "model": model,
        "temperature": 0.9,
        "prompt": prompt.replace(" ", "+"),
    }

    try:
        response = requests.post("http://localhost/v1/images/generate", json=payload)
        response.raise_for_status()
        res = response.json()

        data = res.get("data")
        if not data or not isinstance(data, list):
            raise ValueError("Invalid 'data' in response")

        image_url = data[0].get("url")
        if not image_url:
            raise ValueError("No 'url' found in response data")

        timestamp = res.get("created")
        caption = f"Prompt: {prompt}\nCreated: {timestamp}\nModel: {model}"
        return image_url, caption, res
    except Exception as e:
        return None, f"Error: {e}", None

prompt = "A tiger in a forest"
image_url, caption, res = generate_image(prompt)
print("API Response:", res)
print("Image URL:", image_url)
print("Caption:", caption)
```
**Explanation:**
- The `generate_image` function constructs a request to create an image based on a text prompt.
- It handles responses and possible errors, ensuring a URL and caption are returned if successful.
## Advanced Usage
This guide has demonstrated basic usage scenarios for the G4F Requests API. The API provides robust capabilities for integrating advanced AI into your applications. You can expand upon these examples to fit more complex workflows and tasks, ensuring your applications are built with cutting-edge AI features.
### Handling Concurrency and Asynchronous Requests
For applications requiring high performance and non-blocking operations, consider using asynchronous programming libraries such as `aiohttp` or `httpx`. Here's an example using `aiohttp`:
```python
import aiohttp
import asyncio
import json
from queue import Queue

async def fetch_response_async(url, model, messages, output_queue):
    """
    Asynchronously sends a POST request to the streaming chat completions endpoint and processes the stream.

    Args:
        url (str): The API endpoint URL.
        model (str): The model to use for text generation.
        messages (list): A list of message dictionaries.
        output_queue (Queue): A queue to store the extracted messages.
    """
    payload = {"model": model, "messages": messages, "stream": True}
    headers = {
        "Content-Type": "application/json",
        "Accept": "text/event-stream",
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(url, headers=headers, json=payload) as resp:
            if resp.status != 200:
                text = await resp.text()
                raise Exception(f"Failed to send message: {resp.status} {text}")
            async for line in resp.content:
                decoded_line = line.decode('utf-8').strip()
                if decoded_line == "data: [DONE]":
                    break
                if decoded_line.startswith("data: "):
                    try:
                        data = json.loads(decoded_line[6:])
                        message = data.get("choices", [{}])[0].get("delta", {}).get("content")
                        if message:
                            output_queue.put(message)
                    except json.JSONDecodeError:
                        continue

async def main():
    chat_url = "http://localhost/v1/chat/completions"
    model = "gpt-4o"
    messages = [{"role": "user", "content": "Hello, how are you?"}]
    output_queue = Queue()
    try:
        await fetch_response_async(chat_url, model, messages, output_queue)
        while not output_queue.empty():
            msg = output_queue.get()
            print(msg)
    except Exception as e:
        print(f"An error occurred: {e}")

# Run the asynchronous main function
asyncio.run(main())
```
**Explanation:**
- **`aiohttp` Library:** Facilitates asynchronous HTTP requests, allowing your application to handle multiple requests concurrently without blocking.
- **`fetch_response_async` Function:**
- Sends an asynchronous POST request to the streaming chat completions endpoint.
- Processes the streamed response line by line.
- Extracts messages and enqueues them into `output_queue`.
- **`main` Function:**
- Defines the API endpoint, model, and messages.
- Initializes a `Queue` to store incoming messages.
- Invokes the asynchronous fetch function and processes the messages.
**Benefits:**
- **Performance:** Handles multiple requests efficiently, reducing latency in high-throughput applications.
- **Scalability:** Easily scales with increasing demand, making it suitable for production environments.
**Note:** Ensure you have `aiohttp` installed:
```bash
pip install aiohttp
```
## Conclusion
By following this guide, you can effectively integrate the G4F Requests API into your Python applications, enabling powerful AI-driven functionalities such as text and image generation, model retrieval, and handling streaming data. Whether you're building simple scripts or complex, high-performance applications, the examples provided offer a solid foundation to harness the full potential of AI in your projects.
Feel free to customize and expand upon these examples to suit your specific needs. If you encounter any issues or have further questions, don't hesitate to seek assistance or refer to additional resources.
---
# Additional Notes
1. **Adjusting the Base URL:**
- The guide assumes your API server is accessible at `http://localhost`. If your server runs on a different port (e.g., `8000`), update the URLs accordingly:
```python
# Example for port 8000
chat_url = "http://localhost:8000/v1/chat/completions"
```
2. **Environment Variables (Optional):**
- For better flexibility and security, consider using environment variables to store your base URL and other sensitive information.
```python
import os
BASE_URL = os.getenv("API_BASE_URL", "http://localhost")
chat_url = f"{BASE_URL}/v1/chat/completions"
```
3. **Error Handling:**
- Always implement robust error handling to gracefully manage unexpected scenarios, such as network failures or invalid responses (see the sketch after this list).
4. **Security Considerations:**
- Ensure that your local API server is secured, especially if accessible over a network. Implement authentication mechanisms if necessary.
5. **Testing:**
- Utilize tools like [Postman](https://www.postman.com/) or [Insomnia](https://insomnia.rest/) for testing your API endpoints before integrating them into your code.
6. **Logging:**
- Implement logging to monitor the behavior of your applications, which is crucial for debugging and maintaining your systems.
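
A minimal sketch combining points 3 and 6, wrapping a request in error handling with standard-library logging:

```python
import logging
import requests

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("g4f-client")

payload = {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}
try:
    response = requests.post("http://localhost/v1/chat/completions", json=payload, timeout=30)
    response.raise_for_status()  # Surface HTTP errors as exceptions
    logger.info("Request succeeded")
    print(response.json()["choices"][0]["message"]["content"])
except requests.exceptions.RequestException as e:
    logger.error("Request failed: %s", e)
```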
---
[Return to Home](/)

---
### G4F - Additional Requirements
#### Introduction
You can install the requirements partially or completely, so G4F can be tailored to your needs. The following options are available:
#### Options
Install g4f with all possible dependencies:
```
pip install -U g4f[all]
```
Or install only g4f and the required packages for the OpenaiChat provider:
```
pip install -U g4f[openai]
```
Install required packages for the Interference API:
```
pip install -U g4f[api]
```
Install required packages for the Web UI:
```
pip install -U g4f[gui]
```
Install required packages for uploading / generating images:
```
pip install -U g4f[image]
```
Install required package for proxy support with aiohttp:
```
pip install -U aiohttp_socks
```
Install required package for loading cookies from browser:
```
pip install browser_cookie3
```
To disable browser support after installing all packages, uninstall the `nodriver` package:
```
pip uninstall nodriver
```
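Extras can also be combined in a single install using standard pip syntax, for example the Interference API and Web UI packages together:
```
pip install -U g4f[api,gui]
```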
---
[Return to Home](/)

---
### Selecting a Provider
**The Interference API also allows you to specify which provider(s) to use for processing requests. This is done using the `provider` parameter, which can be included alongside the `model` parameter in your API requests. Providers can be specified as a space-separated string of provider IDs.**
#### How to Specify a Provider
To select one or more providers, include the `provider` parameter in your request body. This parameter accepts a string of space-separated provider IDs. Each ID represents a specific provider available in the system.
#### Example: Getting a List of Available Providers
Use the following Python code to fetch the list of available providers:
```python
import requests

url = "http://localhost:1337/v1/providers"
response = requests.get(url, headers={"accept": "application/json"})

providers = response.json()
for provider in providers:
    print(f"ID: {provider['id']}, URL: {provider['url']}")
```
#### Example: Getting Detailed Information About a Specific Provider
Retrieve details about a specific provider, including supported models and parameters:
```python
provider_id = "HuggingChat"
url = f"http://localhost:1337/v1/providers/{provider_id}"
response = requests.get(url, headers={"accept": "application/json"})
provider_details = response.json()
print(f"Provider ID: {provider_details['id']}")
print(f"Supported Models: {provider_details['models']}")
print(f"Parameters: {provider_details['params']}")
```
#### Example: Using a Single Provider in Text Generation
Specify a single provider (`HuggingChat`) in the request body:
```python
import requests

url = "http://localhost:1337/v1/chat/completions"
payload = {
    "model": "gpt-4o-mini",
    "provider": "HuggingChat",
    "messages": [
        {"role": "user", "content": "Write a short story about a robot"}
    ]
}

response = requests.post(url, json=payload, headers={"Content-Type": "application/json"})
data = response.json()
if "choices" in data:
    for choice in data["choices"]:
        print(choice["message"]["content"])
else:
    print("No response received")
```
#### Example: Using Multiple Providers in Text Generation
Specify multiple providers by separating their IDs with a space:
```python
import requests

url = "http://localhost:1337/v1/chat/completions"
payload = {
    "model": "gpt-4o-mini",
    "provider": "HuggingChat AnotherProvider",
    "messages": [
        {"role": "user", "content": "What are the benefits of AI in education?"}
    ]
}

response = requests.post(url, json=payload, headers={"Content-Type": "application/json"})
data = response.json()
if "choices" in data:
    for choice in data["choices"]:
        print(choice["message"]["content"])
else:
    print("No response received")
```
#### Example: Using a Provider for Image Generation
You can also use the `provider` parameter for image generation:
```python
import requests

url = "http://localhost:1337/v1/images/generate"
payload = {
    "prompt": "a futuristic cityscape at sunset",
    "model": "flux",
    "provider": "HuggingSpace",
    "response_format": "url"
}

response = requests.post(url, json=payload, headers={"Content-Type": "application/json"})
data = response.json()
if "data" in data:
    for item in data["data"]:
        print(f"Image URL: {item['url']}")
else:
    print("No response received")
```
### Key Points About Providers
- **Flexibility:** Use the `provider` parameter to select one or more providers for your requests.
- **Discoverability:** Fetch available providers using the `/providers` endpoint.
- **Compatibility:** Check provider details to ensure support for the desired models and parameters.
By specifying providers in a space-separated string, you can efficiently target specific providers or combine multiple providers in a single request. This approach gives you fine-grained control over how your requests are processed.
---
[Go to Interference API Docs](https://gpt4free.github.io/docs/interference-api.html)

---
## Vision Support in Chat Completion
This documentation provides an overview of how to integrate vision support into chat completions using an API and a client. It includes examples to guide you through the process.
### Example with the API
To use vision support in chat completion with the API, follow the example below:
```python
import requests
from g4f.image import to_data_uri  # helper for converting raw images into the Data URI format used below
from g4f.requests.raise_for_status import raise_for_status

url = "http://localhost:8080/v1/chat/completions"
body = {
    "model": "",
    "provider": "Copilot",
    "messages": [
        {"role": "user", "content": "What is in this image?"}
    ],
    "images": [
        ["data:image/jpeg;base64,...", "cat.jpeg"]
    ]
}
response = requests.post(url, json=body, headers={"g4f-api-key": "secret"})
raise_for_status(response)
print(response.json())
```
In this example:
- `url` is the endpoint for the chat completion API.
- `body` contains the model, provider, messages, and images.
- `messages` is a list of message objects with roles and content.
- `images` is a list of image data in Data URI format and optional filenames.
- `response` stores the API response.
### Example with the Client
To use vision support in chat completion with the client, follow the example below:
```python
import g4f
import g4f.Provider
def chat_completion(prompt):
    client = g4f.Client(provider=g4f.Provider.Blackbox)
    images = [
        [open("docs/images/waterfall.jpeg", "rb"), "waterfall.jpeg"],
        [open("docs/images/cat.webp", "rb"), "cat.webp"]
    ]
    response = client.chat.completions.create([{"content": prompt, "role": "user"}], "", images=images)
    print(response.choices[0].message.content)

prompt = "What is in these images?"
chat_completion(prompt)
```
```
**Image 1**
* A waterfall with a rainbow
* Lush greenery surrounding the waterfall
* A stream flowing from the waterfall
**Image 2**
* A white cat with blue eyes
* A bird perched on a window sill
* Sunlight streaming through the window
```
In this example:
- `client` initializes a new client with the specified provider.
- `images` is a list of image data and optional filenames.
- `response` stores the response from the client.
- The `chat_completion` function prints the chat completion output.
### Notes
- Multiple images can be sent. Each image has two data parts: image data (in Data URI format for the API) and an optional filename.
- The client supports bytes, IO objects, and PIL images as input.
- Ensure you use a provider that supports vision and multiple images.
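
As noted above, a PIL image can be passed in place of a raw file object. A brief sketch building on the client example (assuming a provider with vision support):

```python
from PIL import Image
import g4f
import g4f.Provider

client = g4f.Client(provider=g4f.Provider.Blackbox)
# A PIL image object is accepted directly as image data
images = [[Image.open("docs/images/cat.webp"), "cat.webp"]]
response = client.chat.completions.create(
    [{"role": "user", "content": "What is in this image?"}], "", images=images
)
print(response.choices[0].message.content)
```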

---
### G4F - Webview GUI
Open the GUI in a native window of your OS. It runs on a local/static/SSL server and uses a JavaScript API.
It supports login to OpenAI Chat (via .har files), image upload, and streamed text generation.
All platforms are supported, but only Linux and Windows have been tested.
1. Install all python requirements with:
```bash
pip install g4f[webview]
```
2. *a)* Follow the **OS-specific** steps here:
[pywebview installation](https://pywebview.flowrl.com/guide/installation.html#dependencies)
2. *b)* **WebView2** on **Windows**: Our application requires the *WebView2 Runtime* to be installed on your system. If you do not have it installed, please download and install it from the [Microsoft Developer Website](https://developer.microsoft.com/en-us/microsoft-edge/webview2/). If you already have *WebView2 Runtime* installed but are encountering issues, navigate to *Installed Windows Apps*, select *WebView2*, and opt for the repair option.
3. Run the app with:
```python
from g4f.gui.webview import run_webview
run_webview(debug=True)
```
or execute the following command:
```bash
python -m g4f.gui.webview -debug
```
[Return to Home](/)