Add comprehensive documentation for G4F library

Co-authored-by: fkahdias <fkahdias@gmail.com>
Cursor Agent 2025-07-01 20:17:16 +00:00
parent 78c0d67d54
commit b0ffc9e997
5 changed files with 3404 additions and 0 deletions

API_DOCUMENTATION.md

@@ -0,0 +1,821 @@
# G4F (GPT4Free) API Documentation
## Overview
G4F (GPT4Free) is a comprehensive Python library that provides free access to various AI models through multiple providers. It supports text and image generation and offers both synchronous and asynchronous interfaces.
## Table of Contents
1. [Installation](#installation)
2. [Quick Start](#quick-start)
3. [Client API](#client-api)
4. [Legacy API](#legacy-api)
5. [Models](#models)
6. [Providers](#providers)
7. [REST API](#rest-api)
8. [CLI Interface](#cli-interface)
9. [GUI Interface](#gui-interface)
10. [Error Handling](#error-handling)
11. [Configuration](#configuration)
12. [Examples](#examples)
## Installation
### Basic Installation
```bash
pip install g4f
```
### Full Installation with All Features
```bash
pip install g4f[all]
```
### Docker Installation
```bash
docker pull hlohaus789/g4f
docker run -p 8080:8080 hlohaus789/g4f
```
## Quick Start
### Simple Text Generation
```python
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
print(response.choices[0].message.content)
```
### Image Generation
```python
from g4f.client import Client
client = Client()
response = client.images.generate(
    model="flux",
    prompt="A beautiful sunset over mountains"
)
print(f"Generated image URL: {response.data[0].url}")
```
## Client API
The Client API provides a modern, OpenAI-compatible interface for interacting with AI models.
### Client Class
#### `Client(**kwargs)`
Main client class for interacting with AI models.
**Parameters:**
- `provider` (Optional[ProviderType]): Default provider to use
- `media_provider` (Optional[ProviderType]): Provider for image/media generation
- `proxy` (Optional[str]): Proxy server URL
- `api_key` (Optional[str]): API key for authenticated providers
**Example:**
```python
from g4f.client import Client
from g4f.Provider import OpenaiChat
client = Client(
    provider=OpenaiChat,
    proxy="http://proxy.example.com:8080"
)
```
### Chat Completions
#### `client.chat.completions.create(**kwargs)`
Creates a chat completion.
**Parameters:**
- `messages` (Messages): List of message dictionaries
- `model` (str): Model name to use
- `provider` (Optional[ProviderType]): Provider override
- `stream` (Optional[bool]): Enable streaming response
- `proxy` (Optional[str]): Proxy override
- `image` (Optional[ImageType]): Image for vision models
- `response_format` (Optional[dict]): Response format specification
- `max_tokens` (Optional[int]): Maximum tokens to generate
- `stop` (Optional[Union[list[str], str]]): Stop sequences
- `api_key` (Optional[str]): API key override
**Returns:**
- `ChatCompletion`: Completion response object
**Example:**
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing"}
    ],
    max_tokens=500,
    temperature=0.7
)
print(response.choices[0].message.content)
print(f"Usage: {response.usage.total_tokens} tokens")
```
#### Streaming Example
```python
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a story"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
### Image Generation
#### `client.images.generate(**kwargs)`
Generates images from text prompts.
**Parameters:**
- `prompt` (str): Text description of the image
- `model` (Optional[str]): Image model to use
- `provider` (Optional[ProviderType]): Provider override
- `response_format` (Optional[str]): "url" or "b64_json"
- `proxy` (Optional[str]): Proxy override
**Returns:**
- `ImagesResponse`: Response containing generated images
**Example:**
```python
response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic city with flying cars",
    response_format="url"
)
for image in response.data:
    print(f"Image URL: {image.url}")
```
#### `client.images.create_variation(**kwargs)`
Creates variations of an existing image.
**Parameters:**
- `image` (ImageType): Source image (path, URL, or bytes)
- `model` (Optional[str]): Model to use
- `provider` (Optional[ProviderType]): Provider override
- `response_format` (Optional[str]): Response format
**Example:**
```python
response = client.images.create_variation(
    image="path/to/image.jpg",
    model="dall-e-3"
)
```
### Async Client
#### `AsyncClient(**kwargs)`
Asynchronous version of the Client class.
**Example:**
```python
import asyncio
from g4f.client import AsyncClient
async def main():
    client = AsyncClient()
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
#### Async Streaming Example
```python
async def stream_example():
    client = AsyncClient()
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a joke"}],
        stream=True
    )
    async for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")

asyncio.run(stream_example())
```
## Legacy API
The legacy API provides direct access to the core functionality.
### ChatCompletion
#### `g4f.ChatCompletion.create(**kwargs)`
Creates a chat completion using the legacy interface.
**Parameters:**
- `model` (Union[Model, str]): Model to use
- `messages` (Messages): Message list
- `provider` (Union[ProviderType, str, None]): Provider
- `stream` (bool): Enable streaming
- `image` (ImageType): Image for vision models
- `ignore_working` (bool): Ignore provider working status
- `ignore_stream` (bool): Ignore streaming support
**Example:**
```python
import g4f
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4o,
    messages=[{"role": "user", "content": "Hello!"}],
    provider=g4f.Provider.Copilot
)
print(response)
```
#### `g4f.ChatCompletion.create_async(**kwargs)`
Asynchronous version of create.
**Example:**
```python
import asyncio
import g4f
async def main():
    response = await g4f.ChatCompletion.create_async(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response)

asyncio.run(main())
```
## Models
### Available Models
#### Text Models
- **GPT-4 Family**: `gpt-4`, `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`
- **GPT-3.5**: `gpt-3.5-turbo`
- **Claude**: `claude-3-opus`, `claude-3-sonnet`, `claude-3-haiku`
- **Llama**: `llama-3-70b`, `llama-3-8b`, `llama-2-70b`
- **Gemini**: `gemini-pro`, `gemini-1.5-pro`
- **Others**: `mistral-7b`, `mixtral-8x7b`, `phi-4`
#### Image Models
- **DALL-E**: `dall-e-3`
- **Flux**: `flux`, `flux-dev`, `flux-schnell`
- **Stable Diffusion**: `stable-diffusion-xl`
#### Vision Models
- **GPT-4 Vision**: `gpt-4o`, `gpt-4-vision-preview`
- **Gemini Vision**: `gemini-pro-vision`
- **Claude Vision**: `claude-3-opus`, `claude-3-sonnet`
### Model Usage
```python
from g4f import models
# Use predefined model
response = client.chat.completions.create(
    model=models.gpt_4o,
    messages=[{"role": "user", "content": "Hello!"}]
)
# Or use string name
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
### Model Information
```python
from g4f.models import ModelUtils
# Get all available models
all_models = ModelUtils.convert
# Get model by name
model = ModelUtils.get_model("gpt-4o")
if model:
    print(f"Provider: {model.base_provider}")
    print(f"Best provider: {model.best_provider}")
```
## Providers
### Provider Types
#### Working Providers
- **Blackbox**: Free GPT-4 access
- **Copilot**: Microsoft Copilot integration
- **PollinationsAI**: Multi-model support
- **DeepInfraChat**: Various open-source models
- **Free2GPT**: Free GPT access
- **OpenaiChat**: Official OpenAI API
#### Authentication Required
- **OpenaiAccount**: Official OpenAI with account
- **Gemini**: Google Gemini API
- **MetaAI**: Meta's AI models
- **HuggingChat**: Hugging Face chat
### Provider Usage
```python
from g4f import Provider
# Use specific provider
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=Provider.Copilot
)
# Get provider information
print(Provider.Copilot.params)
print(Provider.Copilot.working)
```
### Custom Provider Selection
```python
from g4f.providers.retry_provider import IterListProvider
from g4f import Provider
# Create custom provider list with retry logic
custom_provider = IterListProvider([
    Provider.Copilot,
    Provider.Blackbox,
    Provider.PollinationsAI
])
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=custom_provider
)
```
## REST API
G4F provides a FastAPI-based REST API compatible with OpenAI's API.
### Starting the API Server
```bash
# Start with default settings
python -m g4f.cli api
# Start with custom port and debug
python -m g4f.cli api --port 8080 --debug
# Start with GUI
python -m g4f.cli api --gui --port 8080
```
### API Endpoints
#### Chat Completions
```
POST /v1/chat/completions
```
**Request Body:**
```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ],
  "stream": false,
  "max_tokens": 500
}
```
**Response:**
```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o-mini",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help you?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}
```
#### Image Generation
```
POST /v1/images/generations
```
**Request Body:**
```json
{
  "prompt": "A beautiful landscape",
  "model": "dall-e-3",
  "response_format": "url"
}
```
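A quick cURL sketch for this endpoint (assuming the server is running locally on the default port 1337 with no API key configured):
```bash
curl -X POST "http://localhost:1337/v1/images/generations" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A beautiful landscape", "model": "dall-e-3", "response_format": "url"}'
```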
#### Models List
```
GET /v1/models
```
**Response:**
```json
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4o",
      "object": "model",
      "created": 0,
      "owned_by": "OpenAI"
    }
  ]
}
```
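To list the models a running server exposes (same local-server assumption as above):
```bash
curl "http://localhost:1337/v1/models"
```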
### Client Usage with API
```python
import openai
# Configure client to use G4F API
client = openai.OpenAI(
    api_key="your-g4f-api-key",  # Optional
    base_url="http://localhost:1337/v1"
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
## CLI Interface
The CLI provides command-line access to G4F functionality.
### Available Commands
#### Start API Server
```bash
g4f api --port 8080 --debug
```
#### Start GUI
```bash
g4f gui --port 8080
```
#### Chat Client
```bash
g4f client --model gpt-4o --provider Copilot
```
### CLI Options
#### API Command
- `--port, -p`: Server port (default: 1337)
- `--bind`: Bind address (default: 0.0.0.0:1337)
- `--debug, -d`: Enable debug mode
- `--gui, -g`: Start with GUI
- `--model`: Default model
- `--provider`: Default provider
- `--proxy`: Proxy server URL
- `--g4f-api-key`: API authentication key
#### GUI Command
- `--port, -p`: Server port
- `--debug, -d`: Enable debug mode
- `--demo`: Enable demo mode
#### Client Command
- `--model`: Model to use
- `--provider`: Provider to use
- `--stream`: Enable streaming
- `--proxy`: Proxy server URL
### Examples
```bash
# Start API with authentication
g4f api --port 8080 --g4f-api-key "your-secret-key"
# Start GUI in demo mode
g4f gui --port 8080 --demo
# Interactive chat session
g4f client --model gpt-4o --provider Copilot --stream
```
## GUI Interface
G4F provides a web-based GUI for easy interaction.
### Starting the GUI
```python
from g4f.gui import run_gui
# Start GUI programmatically
run_gui(port=8080, debug=True)
```
Or using CLI:
```bash
g4f gui --port 8080
```
### Features
- **Chat Interface**: Interactive chat with AI models
- **Provider Selection**: Choose from available providers
- **Model Selection**: Select different AI models
- **Image Generation**: Generate images from text prompts
- **Settings**: Configure proxy, API keys, and other options
- **Conversation History**: Save and load conversations
### Accessing the GUI
Once started, access the GUI at: `http://localhost:8080/chat/`
## Error Handling
G4F provides comprehensive error handling with specific exception types.
### Exception Types
```python
from g4f.errors import (
    ProviderNotFoundError,
    ProviderNotWorkingError,
    ModelNotFoundError,
    MissingAuthError,
    PaymentRequiredError,
    RateLimitError,
    TimeoutError,
    NoMediaResponseError
)
```
### Error Handling Examples
```python
from g4f.client import Client
from g4f.errors import ProviderNotWorkingError, ModelNotFoundError
client = Client()
try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except ProviderNotWorkingError as e:
    print(f"Provider error: {e}")
except ModelNotFoundError as e:
    print(f"Model error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
```
### Retry Logic
```python
from g4f.providers.retry_provider import RetryProvider
from g4f import Provider
# Automatic retry with multiple providers
retry_provider = RetryProvider([
    Provider.Copilot,
    Provider.Blackbox,
    Provider.PollinationsAI
], max_retries=3)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=retry_provider
)
```
## Configuration
### Environment Variables
```bash
# Set default proxy
export G4F_PROXY="http://proxy.example.com:8080"
# Set debug mode
export G4F_DEBUG="true"
```
### Configuration in Code
```python
import g4f
# Enable debug logging
g4f.debug.logging = True
# Set global proxy
import os
os.environ["G4F_PROXY"] = "http://proxy.example.com:8080"
```
### Cookie Management
```python
from g4f.cookies import get_cookies, set_cookies
# Get cookies for a domain
cookies = get_cookies("chat.openai.com")
# Set cookies
set_cookies("chat.openai.com", {"session": "value"})
```
## Examples
### Advanced Chat with Vision
```python
from g4f.client import Client
import base64
client = Client()
# Read and encode image
with open("image.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{image_data}"
                    }
                }
            ]
        }
    ]
)
print(response.choices[0].message.content)
```
### Batch Processing
```python
import asyncio
from g4f.client import AsyncClient
async def process_multiple_requests():
    client = AsyncClient()
    prompts = [
        "Explain machine learning",
        "What is quantum computing?",
        "How does photosynthesis work?"
    ]
    tasks = [
        client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}]
        )
        for prompt in prompts
    ]
    responses = await asyncio.gather(*tasks)
    for i, response in enumerate(responses):
        print(f"Response {i+1}: {response.choices[0].message.content}")

asyncio.run(process_multiple_requests())
```
### Custom Provider Implementation
```python
from g4f.providers.base_provider import AsyncGeneratorProvider
from g4f.typing import AsyncResult, Messages
class CustomProvider(AsyncGeneratorProvider):
    url = "https://api.example.com"
    working = True
    supports_stream = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        **kwargs
    ) -> AsyncResult:
        # Implement your custom provider logic
        yield "Custom response from your provider"

# Use custom provider
from g4f.client import Client
client = Client(provider=CustomProvider)
response = client.chat.completions.create(
    model="custom-model",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
### Function Calling / Tools
```python
from g4f.client import Client
client = Client()
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather information",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }
]
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools
)
# Handle tool calls
if response.choices[0].message.tool_calls:
    for tool_call in response.choices[0].message.tool_calls:
        print(f"Tool: {tool_call.function.name}")
        print(f"Arguments: {tool_call.function.arguments}")
```
This documentation covers all the major public APIs, functions, and components of the G4F library. For the most up-to-date information, always refer to the official repository and documentation.

DOCUMENTATION_INDEX.md

@@ -0,0 +1,199 @@
# G4F Documentation Index
## Overview
This documentation suite provides comprehensive coverage of the G4F (GPT4Free) library, including all public APIs, functions, components, and usage examples.
## Documentation Files
### 1. [API_DOCUMENTATION.md](./API_DOCUMENTATION.md)
**Main API Documentation** - Complete reference for all public APIs and functions
**Contents:**
- Installation and quick start
- Client API (sync and async)
- Legacy API
- Models and providers
- REST API overview
- CLI and GUI interfaces
- Error handling
- Configuration options
- Comprehensive examples
### 2. [PROVIDER_DOCUMENTATION.md](./PROVIDER_DOCUMENTATION.md)
**Provider System Documentation** - Detailed guide to the provider architecture
**Contents:**
- Provider architecture and base classes
- Working providers (free and authenticated)
- Provider selection and retry logic
- Creating custom providers
- Provider parameters and configuration
- Error handling and testing
- Best practices and performance
### 3. [REST_API_DOCUMENTATION.md](./REST_API_DOCUMENTATION.md)
**REST API Reference** - Complete OpenAI-compatible API documentation
**Contents:**
- API server setup and configuration
- Authentication methods
- All endpoints with examples
- Request/response formats
- Advanced features (vision, tools, streaming)
- Error handling and status codes
- Integration examples
- Performance and scaling
- Security considerations
### 4. [EXAMPLES_AND_USAGE.md](./EXAMPLES_AND_USAGE.md)
**Examples and Usage Guide** - Practical code examples and patterns
**Contents:**
- Basic usage examples
- Advanced features (vision, functions, JSON mode)
- Provider-specific examples
- Integration patterns (async, web frameworks, LangChain)
- Error handling patterns
- Performance optimization
- Production use cases (chatbots, content generation)
## Quick Reference
### Installation
```bash
pip install g4f[all]
```
### Basic Usage
```python
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
### REST API
```bash
g4f api --port 8080
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello!"}]}'
```
## Key Features Covered
### Core Functionality
- ✅ Text generation with multiple models
- ✅ Image generation and analysis
- ✅ Streaming responses
- ✅ Function/tool calling
- ✅ Vision models with image input
- ✅ JSON response formatting
### Provider System
- ✅ 20+ working providers
- ✅ Automatic fallback and retry logic
- ✅ Custom provider development
- ✅ Authentication handling
- ✅ Provider health monitoring
### APIs and Interfaces
- ✅ Modern Client API (OpenAI-compatible)
- ✅ Legacy API for backwards compatibility
- ✅ REST API server (FastAPI-based)
- ✅ Command-line interface
- ✅ Web GUI interface
### Integration Support
- ✅ Async/await support
- ✅ LangChain integration
- ✅ OpenAI client compatibility
- ✅ Docker deployment
- ✅ Production deployment patterns
### Error Handling
- ✅ Comprehensive exception types
- ✅ Retry logic and fallback strategies
- ✅ Provider health checking
- ✅ Graceful degradation patterns
## Target Audiences
### Developers
- Quick start guides for immediate usage
- Comprehensive API reference
- Integration examples with popular frameworks
- Custom provider development guides
### System Administrators
- Deployment guides (Docker, production)
- Configuration and security options
- Monitoring and logging setup
- Performance optimization tips
### Data Scientists/Researchers
- Model comparison and selection guides
- Batch processing examples
- Provider capability matrices
- Performance benchmarking patterns
## Documentation Standards
### Code Examples
- All examples are tested and functional
- Multiple programming languages where applicable
- Clear error handling demonstrations
- Production-ready patterns
### API Reference
- Complete parameter documentation
- Request/response examples
- HTTP status codes and error types
- OpenAI compatibility notes
### Architecture Documentation
- Class hierarchies and inheritance
- Plugin/extension points
- Configuration options
- Best practices and anti-patterns
## Getting Help
### Documentation Issues
If you find any issues with the documentation:
1. Check the official repository for updates
2. Look for similar issues in the issue tracker
3. Create a detailed issue report with examples
### Code Examples
All code examples in this documentation are designed to work with the latest version of G4F. If an example doesn't work:
1. Verify your G4F version: `pip show g4f`
2. Check for any required dependencies
3. Review the error message for configuration issues
### Community Resources
- GitHub Repository: Primary source for latest updates
- Discord Community: Real-time help and discussions
- Issue Tracker: Bug reports and feature requests
## Contributing to Documentation
### Guidelines
1. Keep examples simple and focused
2. Include error handling in complex examples
3. Test all code before committing
4. Use consistent formatting and style
5. Provide context for each example
### Structure
- Start with the simplest use case
- Build complexity gradually
- Include common pitfalls and solutions
- Cross-reference related sections
This documentation is continuously updated to reflect the latest features and best practices. Always refer to the official repository for the most current information.

EXAMPLES_AND_USAGE.md

@@ -0,0 +1,828 @@
# G4F Examples and Advanced Usage Guide
## Table of Contents
1. [Basic Usage Examples](#basic-usage-examples)
2. [Advanced Features](#advanced-features)
3. [Provider-Specific Examples](#provider-specific-examples)
4. [Integration Examples](#integration-examples)
5. [Error Handling Patterns](#error-handling-patterns)
6. [Performance Optimization](#performance-optimization)
7. [Production Use Cases](#production-use-cases)
## Basic Usage Examples
### Simple Chat Completion
```python
from g4f.client import Client
client = Client()
# Basic chat
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
print(response.choices[0].message.content)
```
### Streaming Response
```python
from g4f.client import Client
client = Client()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
### Image Generation
```python
from g4f.client import Client
client = Client()
response = client.images.generate(
    model="dall-e-3",
    prompt="A beautiful sunset over mountains",
    response_format="url"
)
print(f"Generated image: {response.data[0].url}")
```
## Advanced Features
### Vision Models with Images
```python
import base64
from g4f.client import Client
client = Client()
# Read and encode image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

base64_image = encode_image("path/to/your/image.jpg")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}"
                    }
                }
            ]
        }
    ]
)
print(response.choices[0].message.content)
```
### Function Calling
```python
from g4f.client import Client
import json
client = Client()
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["location"]
            }
        }
    }
]

def get_current_weather(location, unit="fahrenheit"):
    """Mock function to get weather"""
    return f"The weather in {location} is 72°{unit[0].upper()}"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
    tools=tools
)
# Handle tool calls
message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)
        if function_name == "get_current_weather":
            weather_result = get_current_weather(**function_args)
            print(f"Weather: {weather_result}")
```
### JSON Response Format
```python
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Generate a JSON object with information about Paris, France including population, landmarks, and cuisine."
        }
    ],
    response_format={"type": "json_object"}
)
import json
data = json.loads(response.choices[0].message.content)
print(json.dumps(data, indent=2))
```
## Provider-Specific Examples
### Using Different Providers
```python
from g4f.client import Client
from g4f import Provider
client = Client()
# Use specific provider
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=Provider.Copilot
)
# Provider with custom configuration
response = client.chat.completions.create(
    model="llama-3-70b",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    provider=Provider.DeepInfraChat,
    temperature=0.7,
    max_tokens=1000
)
```
### Provider Fallback Strategy
```python
from g4f.providers.retry_provider import IterListProvider
from g4f import Provider
from g4f.client import Client
# Create fallback provider list
fallback_providers = IterListProvider([
    Provider.Copilot,
    Provider.Blackbox,
    Provider.PollinationsAI,
    Provider.DeepInfraChat
])
client = Client(provider=fallback_providers)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
### Authenticated Providers
```python
from g4f.client import Client
from g4f.Provider import OpenaiAccount
# Using OpenAI account (requires authentication setup)
client = Client(provider=OpenaiAccount)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    api_key="your-openai-api-key"  # If needed
)
```
## Integration Examples
### Async Client Usage
```python
import asyncio
from g4f.client import AsyncClient
async def async_chat_example():
    client = AsyncClient()
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    return response.choices[0].message.content

# Run async function
result = asyncio.run(async_chat_example())
print(result)
```
### Batch Processing
```python
import asyncio
from g4f.client import AsyncClient
async def process_batch_requests():
    client = AsyncClient()
    prompts = [
        "Explain machine learning",
        "What is quantum computing?",
        "How does blockchain work?",
        "What is artificial intelligence?"
    ]
    # Create tasks for concurrent processing
    tasks = [
        client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}]
        )
        for prompt in prompts
    ]
    # Execute all tasks concurrently
    responses = await asyncio.gather(*tasks, return_exceptions=True)
    # Process results
    for i, response in enumerate(responses):
        if isinstance(response, Exception):
            print(f"Error for prompt {i+1}: {response}")
        else:
            print(f"Response {i+1}: {response.choices[0].message.content[:100]}...")

asyncio.run(process_batch_requests())
```
### Web Framework Integration (FastAPI)
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from g4f.client import AsyncClient
import asyncio
app = FastAPI()
client = AsyncClient()

class ChatRequest(BaseModel):
    message: str
    model: str = "gpt-4o-mini"

class ChatResponse(BaseModel):
    response: str
    model: str

@app.post("/chat", response_model=ChatResponse)
async def chat_endpoint(request: ChatRequest):
    try:
        response = await client.chat.completions.create(
            model=request.model,
            messages=[{"role": "user", "content": request.message}]
        )
        return ChatResponse(
            response=response.choices[0].message.content,
            model=response.model
        )
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

# Run with: uvicorn main:app --reload
```
### LangChain Integration
```python
from langchain.chat_models.base import BaseChatModel
from langchain.schema import (
    AIMessage,
    BaseMessage,
    ChatGeneration,
    ChatResult,
    HumanMessage,
)
from g4f.client import Client
from typing import Any, List, Optional

class G4FChatModel(BaseChatModel):
    model_name: str = "gpt-4o-mini"

    @property
    def _llm_type(self) -> str:
        # Required by BaseChatModel
        return "g4f-chat-model"

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Any = None,
        **kwargs,
    ) -> ChatResult:
        # Convert LangChain messages to G4F format
        g4f_messages = []
        for msg in messages:
            if isinstance(msg, HumanMessage):
                g4f_messages.append({"role": "user", "content": msg.content})
            elif isinstance(msg, AIMessage):
                g4f_messages.append({"role": "assistant", "content": msg.content})
        # Create the client per call; keeping it as a pydantic field would
        # require arbitrary-type support on the model
        response = Client().chat.completions.create(
            model=self.model_name,
            messages=g4f_messages,
            stop=stop,
            **kwargs
        )
        message = AIMessage(content=response.choices[0].message.content)
        return ChatResult(generations=[ChatGeneration(message=message)])

# Usage
llm = G4FChatModel()
response = llm.invoke([HumanMessage(content="Hello!")])
print(response.content)
```
## Error Handling Patterns
### Comprehensive Error Handling
```python
from g4f.client import Client
from g4f.errors import (
    ProviderNotWorkingError,
    ModelNotFoundError,
    MissingAuthError,
    RateLimitError,
    TimeoutError
)
import time
def robust_chat_completion(message, max_retries=3, retry_delay=1):
    client = Client()
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": message}],
                timeout=30
            )
            return response.choices[0].message.content
        except ProviderNotWorkingError:
            print(f"Provider not working, attempt {attempt + 1}")
            if attempt < max_retries - 1:
                time.sleep(retry_delay)
                continue
            raise
        except ModelNotFoundError as e:
            print(f"Model not found: {e}")
            # Try with different model
            try:
                response = client.chat.completions.create(
                    model="gpt-3.5-turbo",
                    messages=[{"role": "user", "content": message}]
                )
                return response.choices[0].message.content
            except Exception:
                raise e
        except RateLimitError:
            print(f"Rate limited, waiting before retry {attempt + 1}")
            if attempt < max_retries - 1:
                time.sleep(retry_delay * 2)  # Longer wait for rate limits
                continue
            raise
        except TimeoutError:
            print(f"Timeout, attempt {attempt + 1}")
            if attempt < max_retries - 1:
                time.sleep(retry_delay)
                continue
            raise
        except Exception as e:
            print(f"Unexpected error: {e}")
            if attempt < max_retries - 1:
                time.sleep(retry_delay)
                continue
            raise
# Usage
try:
    result = robust_chat_completion("Hello, how are you?")
    print(result)
except Exception as e:
    print(f"All retry attempts failed: {e}")
```
### Provider Health Monitoring
```python
import asyncio
import time
from g4f.client import AsyncClient
from g4f import Provider
async def check_provider_health():
    client = AsyncClient()
    test_message = [{"role": "user", "content": "Hello"}]
    providers = [
        Provider.Copilot,
        Provider.Blackbox,
        Provider.PollinationsAI,
        Provider.DeepInfraChat
    ]
    health_status = {}
    for provider in providers:
        try:
            start_time = time.time()
            response = await client.chat.completions.create(
                model="gpt-4",
                messages=test_message,
                provider=provider,
                timeout=10
            )
            end_time = time.time()
            health_status[provider.__name__] = {
                "status": "healthy",
                "response_time": round(end_time - start_time, 2),
                "response_preview": response.choices[0].message.content[:50]
            }
        except Exception as e:
            health_status[provider.__name__] = {
                "status": "unhealthy",
                "error": str(e)
            }
    return health_status

# Check provider health
health = asyncio.run(check_provider_health())
for provider, status in health.items():
    print(f"{provider}: {status}")
```
## Performance Optimization
### Connection Pooling and Reuse
```python
from g4f.client import AsyncClient
import asyncio
class G4FManager:
    def __init__(self):
        self.client = AsyncClient()
        self.session_pool = {}

    async def chat_completion(self, message, model="gpt-4o-mini"):
        response = await self.client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": message}]
        )
        return response.choices[0].message.content

    async def batch_completions(self, messages, model="gpt-4o-mini", max_concurrent=5):
        semaphore = asyncio.Semaphore(max_concurrent)

        async def process_message(message):
            async with semaphore:
                return await self.chat_completion(message, model)

        tasks = [process_message(msg) for msg in messages]
        return await asyncio.gather(*tasks, return_exceptions=True)

# Usage
manager = G4FManager()
# Single completion
result = asyncio.run(manager.chat_completion("Hello!"))
print(result)
# Batch processing with concurrency control
messages = ["Hello!", "How are you?", "What's AI?", "Explain ML"]
results = asyncio.run(manager.batch_completions(messages, max_concurrent=3))
```
### Caching Responses
```python
import hashlib
import json
import time
from functools import wraps
from g4f.client import Client
class ResponseCache:
    def __init__(self, ttl=3600):  # 1 hour TTL
        self.cache = {}
        self.ttl = ttl

    def get_cache_key(self, model, messages, **kwargs):
        # Create deterministic hash of request
        cache_data = {
            "model": model,
            "messages": messages,
            **{k: v for k, v in kwargs.items() if k not in ['stream']}
        }
        return hashlib.md5(json.dumps(cache_data, sort_keys=True).encode()).hexdigest()

    def get(self, key):
        if key in self.cache:
            data, timestamp = self.cache[key]
            if time.time() - timestamp < self.ttl:
                return data
            else:
                del self.cache[key]
        return None

    def set(self, key, value):
        self.cache[key] = (value, time.time())

def cached_completion(cache_instance):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Extract model and messages for cache key; pass the remaining
            # kwargs separately so model/messages are not supplied twice
            model = kwargs.get('model', 'gpt-4o-mini')
            messages = kwargs.get('messages', [])
            extra = {k: v for k, v in kwargs.items() if k not in ('model', 'messages')}
            cache_key = cache_instance.get_cache_key(model, messages, **extra)
            # Check cache first
            cached_result = cache_instance.get(cache_key)
            if cached_result:
                print("Cache hit!")
                return cached_result
            # If not in cache, make actual request
            result = func(*args, **kwargs)
            # Cache the result
            cache_instance.set(cache_key, result)
            return result
        return wrapper
    return decorator

# Usage
cache = ResponseCache(ttl=1800)  # 30 minutes
client = Client()

@cached_completion(cache)
def get_completion(**kwargs):
    response = client.chat.completions.create(**kwargs)
    return response.choices[0].message.content

# This will hit the API
result1 = get_completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is AI?"}]
)
# This will use cache
result2 = get_completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is AI?"}]
)
```
## Production Use Cases
### Chatbot Implementation
```python
import asyncio
from datetime import datetime
from g4f.client import AsyncClient
from g4f import Provider
class Chatbot:
    def __init__(self, name="Assistant", model="gpt-4o-mini"):
        self.name = name
        self.model = model
        self.client = AsyncClient()
        self.conversation_history = []
        self.system_prompt = f"You are {name}, a helpful AI assistant."

    async def chat(self, user_message, maintain_history=True):
        # Prepare messages
        messages = [{"role": "system", "content": self.system_prompt}]
        if maintain_history:
            messages.extend(self.conversation_history)
        messages.append({"role": "user", "content": user_message})
        try:
            response = await self.client.chat.completions.create(
                model=self.model,
                messages=messages,
                provider=Provider.Copilot
            )
            assistant_response = response.choices[0].message.content
            # Update conversation history
            if maintain_history:
                self.conversation_history.append({"role": "user", "content": user_message})
                self.conversation_history.append({"role": "assistant", "content": assistant_response})
                # Keep only last 10 exchanges to manage context length
                if len(self.conversation_history) > 20:
                    self.conversation_history = self.conversation_history[-20:]
            return assistant_response
        except Exception as e:
            return f"I'm sorry, I encountered an error: {str(e)}"

    def clear_history(self):
        self.conversation_history = []

    def get_conversation_summary(self):
        return {
            "total_exchanges": len(self.conversation_history) // 2,
            "last_interaction": datetime.now().isoformat()
        }

# Usage
async def main():
    bot = Chatbot("Alex", "gpt-4o-mini")
    print("Chatbot started! Type 'quit' to exit.")
    while True:
        user_input = input("\nYou: ")
        if user_input.lower() == 'quit':
            break
        response = await bot.chat(user_input)
        print(f"{bot.name}: {response}")

# Run the chatbot
if __name__ == "__main__":
    asyncio.run(main())
```
### Content Generation Pipeline
```python
import asyncio
from g4f.client import AsyncClient
from g4f import Provider
class ContentGenerator:
    def __init__(self):
        self.client = AsyncClient()

    async def generate_blog_post(self, topic, target_length=1000):
        """Generate a complete blog post with title, outline, and content"""
        # Generate title
        title_prompt = f"Generate a compelling blog post title about: {topic}"
        title_response = await self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": title_prompt}],
            provider=Provider.Copilot
        )
        title = title_response.choices[0].message.content.strip()
        # Generate outline
        outline_prompt = f"Create a detailed outline for a blog post titled '{title}' about {topic}. Include 4-6 main sections."
        outline_response = await self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": outline_prompt}]
        )
        outline = outline_response.choices[0].message.content
        # Generate content
        content_prompt = f"""
        Write a {target_length}-word blog post with the following details:
        Title: {title}
        Topic: {topic}
        Outline: {outline}
        Make it engaging, informative, and well-structured with proper headings.
        """
        content_response = await self.client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": content_prompt}],
            max_tokens=target_length * 2  # Allow extra tokens for formatting
        )
        content = content_response.choices[0].message.content
        return {
            "title": title,
            "outline": outline,
            "content": content,
            "word_count": len(content.split())
        }

    async def generate_social_media_content(self, main_content, platforms):
        """Generate social media adaptations of main content"""
        platform_configs = {
            "twitter": {"limit": 280, "style": "concise and engaging with hashtags"},
            "linkedin": {"limit": 3000, "style": "professional and insightful"},
            "instagram": {"limit": 2200, "style": "visual and inspiring with emojis"},
            "facebook": {"limit": 63206, "style": "conversational and community-focused"}
        }
        social_content = {}
        for platform in platforms:
            if platform in platform_configs:
                config = platform_configs[platform]
                prompt = f"""
                Adapt the following content for {platform}:
                Original content: {main_content[:500]}...
                Requirements:
                - Maximum {config['limit']} characters
                - Style: {config['style']}
                - Platform: {platform}
                Create engaging {platform} post:
                """
                response = await self.client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=[{"role": "user", "content": prompt}]
                )
                social_content[platform] = response.choices[0].message.content
        return social_content

# Usage example
async def content_pipeline_example():
    generator = ContentGenerator()
    # Generate blog post
    blog_post = await generator.generate_blog_post(
        "The Future of Artificial Intelligence in Healthcare",
        target_length=1200
    )
    print(f"Title: {blog_post['title']}")
    print(f"Word count: {blog_post['word_count']}")
    print(f"Content preview: {blog_post['content'][:200]}...")
    # Generate social media adaptations
    social_content = await generator.generate_social_media_content(
        blog_post['content'],
        ['twitter', 'linkedin', 'instagram']
    )
    for platform, content in social_content.items():
        print(f"\n{platform.upper()}:")
        print(content)

asyncio.run(content_pipeline_example())
```
This comprehensive examples guide demonstrates practical usage patterns for G4F across different scenarios, from basic chat completions to complex production workflows. The examples show how to handle errors gracefully, optimize performance, and integrate G4F into larger applications.

PROVIDER_DOCUMENTATION.md

@@ -0,0 +1,670 @@
# G4F Provider Documentation
## Overview
The provider system in G4F is the core mechanism that enables access to different AI models through various endpoints and services. Each provider implements a standardized interface while handling the specifics of different AI services.
## Provider Architecture
### Base Provider Classes
#### `BaseProvider`
The abstract base class that all providers inherit from.
```python
from g4f.providers.types import BaseProvider
class BaseProvider(ABC):
    url: str = None
    working: bool = False
    supports_stream: bool = False
    supports_system_message: bool = True
    supports_message_history: bool = True
```
#### `AbstractProvider`
Provides synchronous completion functionality.
```python
from g4f.providers.base_provider import AbstractProvider
from g4f.typing import CreateResult, Messages

class MyProvider(AbstractProvider):
    @classmethod
    def create_completion(cls, model: str, messages: Messages, stream: bool, **kwargs) -> CreateResult:
        # Implementation here
        pass
```
#### `AsyncProvider`
For asynchronous single-response providers.
```python
from g4f.providers.base_provider import AsyncProvider
from g4f.typing import Messages

class MyAsyncProvider(AsyncProvider):
    @staticmethod
    async def create_async(model: str, messages: Messages, **kwargs) -> str:
        # Implementation here
        pass
```
#### `AsyncGeneratorProvider`
For asynchronous streaming providers (most common).
```python
from g4f.providers.base_provider import AsyncGeneratorProvider
from g4f.typing import AsyncResult, Messages

class MyStreamingProvider(AsyncGeneratorProvider):
    @staticmethod
    async def create_async_generator(model: str, messages: Messages, stream: bool = True, **kwargs) -> AsyncResult:
        # Implementation here
        yield "Response chunk"
```
### Provider Mixins
#### `ProviderModelMixin`
Adds model management capabilities.
```python
from g4f.providers.base_provider import AsyncGeneratorProvider, ProviderModelMixin

class MyProvider(AsyncGeneratorProvider, ProviderModelMixin):
    default_model = "gpt-4"
    models = ["gpt-4", "gpt-3.5-turbo"]
    model_aliases = {"gpt-4": "gpt-4-0613"}

    @classmethod
    def get_model(cls, model: str, **kwargs) -> str:
        return super().get_model(model, **kwargs)
```
#### `AuthFileMixin`
For providers requiring authentication with file-based credential storage.
```python
from pathlib import Path
from g4f.providers.base_provider import AsyncGeneratorProvider, AuthFileMixin

class AuthProvider(AsyncGeneratorProvider, AuthFileMixin):
    @classmethod
    def get_cache_file(cls) -> Path:
        return super().get_cache_file()
```
## Working Providers
### Free Providers (No Authentication Required)
#### Blackbox
- **URL**: `https://www.blackbox.ai`
- **Models**: GPT-4, GPT-3.5, Claude models
- **Features**: Code generation, general chat
- **Streaming**: Yes
```python
from g4f.Provider import Blackbox
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=Blackbox
)
```
#### Copilot
- **URL**: `https://copilot.microsoft.com`
- **Models**: GPT-4, GPT-4 Vision
- **Features**: Search integration, image analysis
- **Streaming**: Yes
```python
from g4f.Provider import Copilot
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Search for latest AI news"}],
    provider=Copilot,
    web_search=True
)
```
#### PollinationsAI
- **URL**: `https://pollinations.ai`
- **Models**: Multiple models including image generation
- **Features**: Text and image generation
- **Streaming**: Yes
```python
from g4f.Provider import PollinationsAI
# Text generation
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=PollinationsAI
)
# Image generation
image_response = client.images.generate(
    prompt="A beautiful landscape",
    provider=PollinationsAI
)
```
#### DeepInfraChat
- **URL**: `https://deepinfra.com`
- **Models**: Llama, Mistral, and other open-source models
- **Features**: Open-source model access
- **Streaming**: Yes
```python
from g4f.Provider import DeepInfraChat
response = client.chat.completions.create(
    model="llama-3-70b",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=DeepInfraChat
)
```
#### Free2GPT
- **URL**: Various endpoints
- **Models**: GPT-3.5, GPT-4
- **Features**: Free GPT access
- **Streaming**: No
```python
from g4f.Provider import Free2GPT
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=Free2GPT
)
```
#### LambdaChat
- **URL**: Multiple lambda endpoints
- **Models**: Various models
- **Features**: Serverless model access
- **Streaming**: Yes
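A usage sketch following the same pattern as the providers above (assuming `LambdaChat` is exported from `g4f.Provider` like the others; the model name is illustrative):
```python
from g4f.Provider import LambdaChat

response = client.chat.completions.create(
    model="llama-3-70b",  # illustrative; check the provider's model list
    messages=[{"role": "user", "content": "Hello!"}],
    provider=LambdaChat
)
```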
#### Together
- **URL**: `https://together.ai`
- **Models**: Llama, Mistral, CodeLlama models
- **Features**: Open-source model hosting
- **Streaming**: Yes
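And likewise for Together (again a sketch under the same assumption; the model name is illustrative):
```python
from g4f.Provider import Together

response = client.chat.completions.create(
    model="mixtral-8x7b",  # illustrative; check the provider's model list
    messages=[{"role": "user", "content": "Hello!"}],
    provider=Together
)
```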
### Authentication Required Providers
#### OpenaiAccount
- **URL**: `https://chat.openai.com`
- **Models**: All OpenAI models
- **Features**: Full OpenAI functionality
- **Authentication**: Session cookies or HAR files
```python
from g4f.Provider import OpenaiAccount
# Requires authentication setup
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=OpenaiAccount
)
```
#### Gemini
- **URL**: `https://gemini.google.com`
- **Models**: Gemini Pro, Gemini Vision
- **Features**: Google's AI models
- **Authentication**: Google account session
#### MetaAI
- **URL**: `https://meta.ai`
- **Models**: Llama models
- **Features**: Meta's AI assistant
- **Authentication**: Meta account session
#### HuggingChat
- **URL**: `https://huggingface.co/chat`
- **Models**: Multiple open-source models
- **Features**: Hugging Face model hub
- **Authentication**: Hugging Face account
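Usage mirrors the OpenaiAccount example above. For instance, a Gemini sketch (assuming the Google account session has already been set up, e.g. via stored cookies):
```python
from g4f.Provider import Gemini

response = client.chat.completions.create(
    model="gemini-pro",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=Gemini
)
```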
## Provider Selection and Retry Logic
### IterListProvider
Iterates through multiple providers until one succeeds.
```python
from g4f.providers.retry_provider import IterListProvider
from g4f import Provider
# Create provider list with automatic fallback
provider_list = IterListProvider([
    Provider.Copilot,
    Provider.Blackbox,
    Provider.PollinationsAI,
    Provider.DeepInfraChat
])
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=provider_list
)
```
### RetryProvider
Extends IterListProvider with configurable retry logic.
```python
from g4f.providers.retry_provider import RetryProvider
from g4f import Provider
retry_provider = RetryProvider([
    Provider.Copilot,
    Provider.Blackbox
], max_retries=3, retry_delay=1.0)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=retry_provider
)
```
### AnyProvider
Automatically selects the best available provider for a model.
```python
from g4f.providers.any_provider import AnyProvider
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    provider=AnyProvider  # Automatically selects best provider
)
```
## Creating Custom Providers
### Basic Custom Provider
```python
from g4f.providers.base_provider import AsyncGeneratorProvider, ProviderModelMixin
from g4f.typing import AsyncResult, Messages
import aiohttp
import json
class CustomProvider(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://api.example.com"
    working = True
    supports_stream = True
    default_model = "custom-model"
    models = ["custom-model", "another-model"]

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        stream: bool = True,
        **kwargs
    ) -> AsyncResult:
        model = cls.get_model(model)
        headers = {
            "Content-Type": "application/json",
            "User-Agent": "Custom G4F Provider"
        }
        data = {
            "model": model,
            "messages": messages,
            "stream": stream
        }
        async with aiohttp.ClientSession(headers=headers) as session:
            async with session.post(f"{cls.url}/chat/completions", json=data) as response:
                if stream:
                    async for line in response.content:
                        if line:
                            yield line.decode().strip()
                else:
                    result = await response.json()
                    yield result["choices"][0]["message"]["content"]
```
### Provider with Authentication
```python
from g4f.providers.base_provider import AsyncGeneratorProvider, AuthFileMixin
from g4f.errors import MissingAuthError
from g4f.typing import AsyncResult, Messages

class AuthenticatedProvider(AsyncGeneratorProvider, AuthFileMixin):
    url = "https://api.secure-example.com"
    working = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        api_key: str = None,
        **kwargs
    ) -> AsyncResult:
        if not api_key:
            raise MissingAuthError(f"API key required for {cls.__name__}")
        headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }
        # Implementation here
        yield "Authenticated response"
```
### Provider with Image Support
```python
from g4f.providers.base_provider import AsyncGeneratorProvider
from g4f.providers.create_images import CreateImagesProvider
from g4f.typing import AsyncResult, Messages

class ImageProvider(AsyncGeneratorProvider, CreateImagesProvider):
    url = "https://api.image-example.com"
    working = True
    image_models = ["image-model-1", "image-model-2"]

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        **kwargs
    ) -> AsyncResult:
        # Handle both text and image generation
        if model in cls.image_models:
            # Image generation logic
            yield cls.create_image_response(messages[-1]["content"])
        else:
            # Text generation logic
            yield "Text response"
```
## Provider Parameters
### Common Parameters
All providers support these standard parameters:
```python
response = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    provider=SomeProvider,
    # Common parameters
    stream=True,                 # Enable streaming
    proxy="http://proxy:8080",   # Proxy server
    timeout=30,                  # Request timeout
    max_tokens=1000,             # Maximum tokens
    temperature=0.7,             # Response randomness
    top_p=0.9,                   # Nucleus sampling
    stop=["stop", "end"],        # Stop sequences
    # Provider-specific parameters
    api_key="your-api-key",      # For authenticated providers
    custom_param="value"         # Provider-specific options
)
```
### Getting Provider Parameters
```python
from g4f.Provider import Copilot
# Get supported parameters
params = Copilot.get_parameters()
print(params)
# Get parameters as JSON with examples
json_params = Copilot.get_parameters(as_json=True)
print(json_params)
# Get parameter information string
print(Copilot.params)
```
## Provider Status and Health
### Checking Provider Status
```python
from g4f import Provider
# Check if provider is working
if Provider.Copilot.working:
    print("Copilot is available")
# Check streaming support
if Provider.Copilot.supports_stream:
    print("Copilot supports streaming")
# Check system message support
if Provider.Copilot.supports_system_message:
    print("Copilot supports system messages")
```
### Provider Information
```python
from g4f.Provider import ProviderUtils
# Get all providers
all_providers = ProviderUtils.convert
# Get working providers
working_providers = {
    name: provider for name, provider in all_providers.items()
    if provider.working
}
# Get providers supporting specific features
streaming_providers = {
    name: provider for name, provider in all_providers.items()
    if provider.supports_stream
}
```
## Provider Error Handling
### Common Provider Errors
```python
from g4f.errors import (
    ProviderNotFoundError,
    ProviderNotWorkingError,
    MissingAuthError,
    RateLimitError,
    PaymentRequiredError
)

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
        provider=SomeProvider
    )
except ProviderNotWorkingError:
    print("Provider is currently not working")
except MissingAuthError:
    print("Authentication required for this provider")
except RateLimitError:
    print("Rate limit exceeded")
except PaymentRequiredError:
    print("Payment or subscription required")
```
### Provider-Specific Error Handling
```python
from g4f.providers.base_provider import RaiseErrorMixin
class SafeProvider(AsyncGeneratorProvider, RaiseErrorMixin):
    @classmethod
    async def create_async_generator(cls, model, messages, **kwargs):
        try:
            # Provider implementation
            yield "response"
        except Exception as e:
            # Use built-in error handling
            cls.raise_error({"error": str(e)})
```
## Provider Testing
### Testing Custom Providers
```python
import asyncio
from g4f.client import AsyncClient
async def test_provider():
    client = AsyncClient()
    try:
        response = await client.chat.completions.create(
            model="test-model",
            messages=[{"role": "user", "content": "Test message"}],
            provider=CustomProvider
        )
        print(f"Success: {response.choices[0].message.content}")
    except Exception as e:
        print(f"Error: {e}")

asyncio.run(test_provider())
```
### Provider Performance Testing
```python
import time
import asyncio
from g4f import Provider
from g4f.client import AsyncClient

async def benchmark_provider(provider, model, message, iterations=10):
    client = AsyncClient()
    times = []
    for i in range(iterations):
        start_time = time.time()
        try:
            response = await client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": message}],
                provider=provider
            )
            end_time = time.time()
            times.append(end_time - start_time)
            print(f"Iteration {i+1}: {end_time - start_time:.2f}s")
        except Exception as e:
            print(f"Iteration {i+1}: Error - {e}")
    if times:
        avg_time = sum(times) / len(times)
        print(f"Average response time: {avg_time:.2f}s")
        print(f"Success rate: {len(times)}/{iterations}")

# Example usage
asyncio.run(benchmark_provider(
    Provider.Copilot,
    "gpt-4",
    "Hello, how are you?",
    5
))
```
## Best Practices
### 1. Provider Selection Strategy
```python
from g4f.providers.retry_provider import IterListProvider
from g4f import Provider
# Prioritize reliable providers
reliable_providers = IterListProvider([
    Provider.Copilot,          # High reliability, good features
    Provider.Blackbox,         # Good fallback
    Provider.PollinationsAI,   # Good for diverse models
    Provider.DeepInfraChat     # Open source models
])
```
### 2. Error Recovery
```python
async def robust_chat_completion(client, model, messages, max_retries=3):
    providers = [Provider.Copilot, Provider.Blackbox, Provider.PollinationsAI]
    for attempt in range(max_retries):
        for provider in providers:
            try:
                response = await client.chat.completions.create(
                    model=model,
                    messages=messages,
                    provider=provider,
                    timeout=30
                )
                return response
            except Exception as e:
                print(f"Attempt {attempt+1} with {provider.__name__} failed: {e}")
                continue
    raise Exception("All providers failed")
```
### 3. Provider Health Monitoring
```python
import time
import asyncio
from g4f import Provider
from g4f.client import AsyncClient

async def check_provider_health():
    test_message = [{"role": "user", "content": "Hello"}]
    client = AsyncClient()
    providers_to_test = [
        Provider.Copilot,
        Provider.Blackbox,
        Provider.PollinationsAI
    ]
    health_status = {}
    for provider in providers_to_test:
        try:
            start_time = time.time()
            response = await client.chat.completions.create(
                model="gpt-4",
                messages=test_message,
                provider=provider,
                timeout=10
            )
            response_time = time.time() - start_time
            health_status[provider.__name__] = {
                "status": "healthy",
                "response_time": response_time,
                "response_length": len(response.choices[0].message.content)
            }
        except Exception as e:
            health_status[provider.__name__] = {
                "status": "unhealthy",
                "error": str(e)
            }
    return health_status
```
This documentation provides a comprehensive guide to understanding and working with the G4F provider system. For the latest provider status and capabilities, always check the official repository.

REST_API_DOCUMENTATION.md

@@ -0,0 +1,886 @@
# G4F REST API Documentation
## Overview
G4F provides a FastAPI-based REST API that is fully compatible with OpenAI's API specifications. This allows you to use existing OpenAI-compatible tools and libraries with G4F's free AI providers.
## Getting Started
### Starting the API Server
#### Command Line
```bash
# Basic startup
g4f api
# Custom port and debug mode
g4f api --port 8080 --debug
# With GUI interface
g4f api --gui --port 8080
# With authentication
g4f api --g4f-api-key "your-secret-key"
# With custom provider and model defaults
g4f api --provider Copilot --model gpt-4o
# Full configuration example
g4f api \
  --port 8080 \
  --debug \
  --gui \
  --g4f-api-key "secret-key" \
  --provider Copilot \
  --model gpt-4o-mini \
  --proxy "http://proxy.example.com:8080" \
  --timeout 300
```
#### Programmatic Startup
```python
from g4f.api import run_api, AppConfig
# Configure the application
AppConfig.set_config(
g4f_api_key="your-secret-key",
provider="Copilot",
model="gpt-4o-mini",
gui=True,
timeout=300
)
# Start the server
run_api(host="0.0.0.0", port=8080, debug=True)
```
### Base URL
Once started, the API is available at:
- **Default**: `http://localhost:1337`
- **Custom port**: `http://localhost:<PORT>`
## Authentication
G4F API supports optional authentication via API keys.
### Setting Up Authentication
```bash
# Start server with authentication
g4f api --g4f-api-key "your-secret-key"
```
### Using Authentication
```python
import openai
client = openai.OpenAI(
api_key="your-secret-key",
base_url="http://localhost:1337/v1"
)
```
### HTTP Headers
```http
Authorization: Bearer your-secret-key
# OR
g4f-api-key: your-secret-key
```
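If you call the API directly rather than through an OpenAI-compatible client, either header works. A minimal sketch using `requests`:
```python
import requests

response = requests.post(
    "http://localhost:1337/v1/chat/completions",
    # Alternatively: {"Authorization": "Bearer your-secret-key"}
    headers={"g4f-api-key": "your-secret-key"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello!"}]
    }
)
print(response.json()["choices"][0]["message"]["content"])
```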
## API Endpoints
### Chat Completions
#### `POST /v1/chat/completions`
Creates a chat completion response.
**Request Body:**
```json
{
"model": "gpt-4o-mini",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
"stream": false,
"max_tokens": 1000,
"temperature": 0.7,
"top_p": 0.9,
"frequency_penalty": 0,
"presence_penalty": 0,
"stop": ["Human:", "AI:"],
"provider": "Copilot",
"proxy": "http://proxy.example.com:8080",
"response_format": {"type": "json_object"},
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get weather information",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string"}
},
"required": ["location"]
}
}
}
]
}
```
**Parameters:**
- `model` (string, required): Model to use for completion
- `messages` (array, required): List of message objects
- `stream` (boolean): Enable streaming responses
- `max_tokens` (integer): Maximum tokens to generate
- `temperature` (number): Sampling temperature (0-2)
- `top_p` (number): Nucleus sampling parameter
- `frequency_penalty` (number): Frequency penalty (-2 to 2)
- `presence_penalty` (number): Presence penalty (-2 to 2)
- `stop` (string|array): Stop sequences
- `provider` (string): Specific provider to use
- `proxy` (string): Proxy server URL
- `response_format` (object): Response format specification
- `tools` (array): Available tools/functions
**Response (Non-streaming):**
```json
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1677652288,
"model": "gpt-4o-mini",
"provider": "Copilot",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I help you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 8,
"total_tokens": 18
}
}
```
**Response (Streaming):**
```http
Content-Type: text/event-stream
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-4o-mini","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-4o-mini","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-4o-mini","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":10,"completion_tokens":8,"total_tokens":18}}
data: [DONE]
```
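Outside of an OpenAI client, the stream can be consumed by reading `data:` lines from the response body. A minimal sketch with `requests` (error handling omitted):
```python
import json
import requests

with requests.post(
    "http://localhost:1337/v1/chat/completions",
    headers={"g4f-api-key": "your-secret-key"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": True
    },
    stream=True
) as response:
    for line in response.iter_lines():
        # Server-sent events: payload lines are prefixed with "data: "
        if not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            print(delta, end="", flush=True)
```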
#### Example Usage
**cURL:**
```bash
curl -X POST "http://localhost:1337/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "g4f-api-key: your-secret-key" \
-d '{
"model": "gpt-4o-mini",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": false
}'
```
**Python:**
```python
import openai
client = openai.OpenAI(
api_key="your-secret-key",
base_url="http://localhost:1337/v1"
)
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
**JavaScript:**
```javascript
const OpenAI = require('openai');
const client = new OpenAI({
apiKey: 'your-secret-key',
baseURL: 'http://localhost:1337/v1'
});
async function main() {
const response = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello!' }]
});
console.log(response.choices[0].message.content);
}
main();
```
### Image Generation
#### `POST /v1/images/generations`
Generates images from text prompts.
**Request Body:**
```json
{
"prompt": "A beautiful sunset over mountains",
"model": "dall-e-3",
"n": 1,
"size": "1024x1024",
"response_format": "url",
"provider": "PollinationsAI"
}
```
**Parameters:**
- `prompt` (string, required): Text description of desired image
- `model` (string): Image model to use
- `n` (integer): Number of images to generate (1-4)
- `size` (string): Image dimensions
- `response_format` (string): "url" or "b64_json"
- `provider` (string): Specific provider to use
**Response:**
```json
{
"created": 1677652288,
"data": [
{
"url": "https://example.com/generated-image.jpg"
}
]
}
```
#### Example Usage
**cURL:**
```bash
curl -X POST "http://localhost:1337/v1/images/generations" \
-H "Content-Type: application/json" \
-H "g4f-api-key: your-secret-key" \
-d '{
"prompt": "A beautiful sunset",
"model": "dall-e-3",
"response_format": "url"
}'
```
**Python:**
```python
response = client.images.generate(
prompt="A beautiful sunset over mountains",
model="dall-e-3",
size="1024x1024",
response_format="url"
)
print(response.data[0].url)
```
### Models
#### `GET /v1/models`
Lists available models.
**Response:**
```json
{
"object": "list",
"data": [
{
"id": "gpt-4o",
"object": "model",
"created": 0,
"owned_by": "OpenAI",
"image": false,
"provider": false
},
{
"id": "gpt-4o-mini",
"object": "model",
"created": 0,
"owned_by": "OpenAI",
"image": false,
"provider": false
},
{
"id": "Copilot",
"object": "model",
"created": 0,
"owned_by": "Microsoft",
"image": false,
"provider": true
}
]
}
```
#### `GET /v1/models/{model_name}`
Get information about a specific model.
**Response:**
```json
{
"id": "gpt-4o",
"object": "model",
"created": 0,
"owned_by": "OpenAI"
}
```
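With the OpenAI Python client, the same listing is available via `client.models.list()`:
```python
import openai

client = openai.OpenAI(
    api_key="your-secret-key",
    base_url="http://localhost:1337/v1"
)

for model in client.models.list():
    print(model.id)
```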
### Provider-Specific Endpoints
#### `POST /api/{provider}/chat/completions`
Use a specific provider for chat completions.
**Example:**
```bash
curl -X POST "http://localhost:1337/api/Copilot/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"messages": [{"role": "user", "content": "Hello!"}]
}'
```
#### `GET /api/{provider}/models`
Get models available for a specific provider.
**Response:**
```json
{
"object": "list",
"data": [
{
"id": "gpt-4",
"object": "model",
"created": 0,
"owned_by": "Microsoft",
"image": false,
"vision": true
}
]
}
```
### Providers
#### `GET /v1/providers`
Lists all available providers.
**Response:**
```json
{
"object": "list",
"data": [
{
"provider": "Copilot",
"models": ["gpt-4", "gpt-4-vision"],
"image_models": [],
"vision_models": ["gpt-4-vision"],
"url": "https://copilot.microsoft.com",
"working": true,
"auth": false
}
]
}
```
#### `GET /v1/providers/{provider}`
Get detailed information about a specific provider.
**Response:**
```json
{
"provider": "Copilot",
"models": ["gpt-4", "gpt-4-vision"],
"image_models": [],
"vision_models": ["gpt-4-vision"],
"url": "https://copilot.microsoft.com",
"working": true,
"auth": false,
"stream": true,
"description": "Microsoft Copilot AI assistant"
}
```
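The provider endpoints are plain GET requests, so they are easy to script against. For example, to list only the providers currently marked as working:
```python
import requests

# Add the g4f-api-key header if the server requires authentication
providers = requests.get("http://localhost:1337/v1/providers").json()["data"]
for p in providers:
    if p["working"]:
        print(p["provider"], p["models"])
```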
### Audio
#### `POST /v1/audio/transcriptions`
Transcribe audio to text.
**Request:**
```bash
curl -X POST "http://localhost:1337/v1/audio/transcriptions" \
-H "g4f-api-key: your-secret-key" \
-F "file=@audio.mp3" \
-F "model=whisper-1"
```
#### `POST /v1/audio/speech`
Generate speech from text.
**Request Body:**
```json
{
"model": "tts-1",
"input": "Hello, this is a test.",
"voice": "alloy"
}
```
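Assuming the endpoint returns raw audio bytes, as the OpenAI-compatible endpoint does, the result can be written straight to a file:
```python
import requests

response = requests.post(
    "http://localhost:1337/v1/audio/speech",
    headers={"g4f-api-key": "your-secret-key"},
    json={"model": "tts-1", "input": "Hello, this is a test.", "voice": "alloy"}
)
with open("speech.mp3", "wb") as f:
    f.write(response.content)
```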
### File Upload and Media
#### `POST /v1/upload_cookies`
Upload cookie files for authentication.
**Request:**
```bash
curl -X POST "http://localhost:1337/v1/upload_cookies" \
-H "g4f-api-key: your-secret-key" \
-F "files=@cookies.json"
```
#### `GET /media/{filename}`
Access generated media files.
**Example:**
```
GET /media/generated-image-abc123.jpg
```
## Advanced Features
### Conversation Management
#### Conversation ID
Use conversation IDs to maintain context across requests:
```json
{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello!"}],
"conversation_id": "conv-abc123"
}
```
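`conversation_id` is not part of the standard OpenAI schema, so with the OpenAI Python client it can be passed through `extra_body`:
```python
import openai

client = openai.OpenAI(
    api_key="your-secret-key",
    base_url="http://localhost:1337/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_body={"conversation_id": "conv-abc123"}  # forwarded as a top-level field
)
```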
#### Provider-Specific Conversations
```bash
curl -X POST "http://localhost:1337/api/Copilot/conv-123/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"messages": [{"role": "user", "content": "Continue our conversation"}]
}'
```
### Vision Models
Send images with text for vision-capable models:
```json
{
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": [
{"type": "text", "text": "What's in this image?"},
{
"type": "image_url",
"image_url": {
"url": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQ..."
}
}
]
}
]
}
```
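A sketch that builds the data URL from a local file and sends the request with `requests` (`photo.jpg` is a placeholder path):
```python
import base64
import requests

with open("photo.jpg", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode()

response = requests.post(
    "http://localhost:1337/v1/chat/completions",
    headers={"g4f-api-key": "your-secret-key"},
    json={
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64_image}"}}
            ]
        }]
    }
)
print(response.json()["choices"][0]["message"]["content"])
```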
### Tool/Function Calling
Define and use tools in your requests:
```json
{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "What's the weather in Paris?"}],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "City name"}
},
"required": ["location"]
}
}
}
]
}
```
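When the provider supports function calling, the assistant reply carries a `tool_calls` array instead of plain text. A sketch of reading it with the OpenAI Python client, where `response` is the return value of a request like the one above (assuming OpenAI-style tool calls):
```python
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        # arguments is a JSON string, e.g. '{"location": "Paris"}'
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```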
### Custom Response Formats
#### JSON Mode
```json
{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Generate a JSON object with user info"}],
"response_format": {"type": "json_object"}
}
```
## Error Handling
### Error Response Format
```json
{
"error": {
"message": "Model not found",
"type": "model_not_found",
"code": "model_not_found"
}
}
```
### Common HTTP Status Codes
- **200**: Success
- **400**: Bad Request (invalid parameters)
- **401**: Unauthorized (missing or invalid API key)
- **403**: Forbidden (insufficient permissions)
- **404**: Not Found (model or provider not found)
- **422**: Unprocessable Entity (validation error)
- **500**: Internal Server Error
### Error Types
#### Authentication Errors
```json
{
"error": {
"message": "Invalid API key",
"type": "authentication_error",
"code": "invalid_api_key"
}
}
```
#### Model Errors
```json
{
"error": {
"message": "Model 'invalid-model' not found",
"type": "model_not_found",
"code": "model_not_found"
}
}
```
#### Provider Errors
```json
{
"error": {
"message": "Provider not working",
"type": "provider_error",
"code": "provider_not_working"
}
}
```
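When using the OpenAI Python client, these responses surface as typed exceptions. A minimal sketch:
```python
import openai

client = openai.OpenAI(
    api_key="your-secret-key",
    base_url="http://localhost:1337/v1"
)

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except openai.AuthenticationError as e:
    print("Check your API key:", e)
except openai.NotFoundError as e:
    print("Model or provider not found:", e)
except openai.APIStatusError as e:
    print(f"Server returned {e.status_code}:", e)
```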
## Configuration
### Environment Variables
```bash
# Set API configuration via environment
export G4F_PROXY="http://proxy.example.com:8080"
export G4F_API_KEY="your-secret-key"
export G4F_DEBUG="true"
```
### Runtime Configuration
```python
from g4f.api import AppConfig
# Configure at runtime
AppConfig.set_config(
g4f_api_key="secret-key",
provider="Copilot",
model="gpt-4o",
proxy="http://proxy.example.com:8080",
timeout=300,
ignored_providers=["SomeProvider"],
gui=True,
demo=False
)
```
## Integration Examples
### OpenAI Python Client
```python
import openai
client = openai.OpenAI(
api_key="g4f-api-key",
base_url="http://localhost:1337/v1"
)
# Standard usage
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello!"}]
)
# Streaming
stream = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Tell me a story"}],
stream=True
)
for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="")
```
### LangChain Integration
```python
# Requires: pip install langchain-openai
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key="g4f-api-key",
    base_url="http://localhost:1337/v1"
)
response = llm.invoke([HumanMessage(content="Hello!")])
print(response.content)
```
### Node.js Integration
```javascript
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
apiKey: "g4f-api-key",
basePath: "http://localhost:1337/v1"
});
const openai = new OpenAIApi(configuration);
async function main() {
const response = await openai.createChatCompletion({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Hello!" }]
});
console.log(response.data.choices[0].message.content);
}
main();
```
## Performance and Scaling
### Rate Limiting
G4F API doesn't implement rate limiting out of the box, but you can add it with a library such as slowapi. A minimal sketch (the route and limit are illustrative):
```python
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.get("/limited")
@limiter.limit("5/minute")  # at most 5 requests per minute per client IP
async def limited(request: Request):
    return {"ok": True}
```
### Caching
For repeated identical requests you can cache responses in-process. A minimal sketch with `functools.lru_cache`, which already hashes its arguments, so no manual hashing is needed:
```python
from functools import lru_cache
from g4f.client import Client

client = Client()

@lru_cache(maxsize=1000)
def cached_completion(model: str, prompt: str) -> str:
    # Memoized on (model, prompt); repeated prompts skip the provider call
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}])
    return response.choices[0].message.content
```
### Load Balancing
Use multiple G4F instances behind a load balancer:
```yaml
# docker-compose.yml
version: '3.8'
services:
g4f-1:
image: hlohaus789/g4f
ports:
- "1337:1337"
g4f-2:
image: hlohaus789/g4f
ports:
- "1338:1337"
nginx:
image: nginx
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
```
## Security Considerations
### API Key Management
```python
import secrets
import string

# Generate a secure API key (urlsafe base64: letters, digits, "-" and "_")
api_key = secrets.token_urlsafe(32)

# Validate API key format; note that isalnum() would reject "-" and "_"
ALLOWED_CHARS = set(string.ascii_letters + string.digits + "-_")

def is_valid_api_key(key: str) -> bool:
    return len(key) >= 32 and set(key) <= ALLOWED_CHARS
```
### Input Validation
The API automatically validates:
- Message format and structure
- Model name validity
- Parameter ranges and types
- File upload security
### CORS Configuration
```python
from fastapi.middleware.cors import CORSMiddleware

# "app" is your FastAPI instance, e.g. the one returned by g4f.api.create_app()
app.add_middleware(
CORSMiddleware,
allow_origins=["https://yourdomain.com"],
allow_credentials=True,
allow_methods=["GET", "POST"],
allow_headers=["*"],
)
```
## Monitoring and Logging
### Enable Debug Logging
```bash
g4f api --debug
```
### Custom Logging
```python
import logging
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger("g4f.api")
```
### Health Checks
```bash
# Check API health
curl http://localhost:1337/v1/models
```
## Deployment
### Docker Deployment
```dockerfile
FROM hlohaus789/g4f:latest
# Set environment variables
ENV G4F_API_KEY=your-secret-key
ENV G4F_DEBUG=false
# Expose port
EXPOSE 1337
# Start API
CMD ["python", "-m", "g4f.cli", "api", "--host", "0.0.0.0", "--port", "1337"]
```
### Production Deployment
```bash
# Install production dependencies
pip install gunicorn uvicorn uvloop
# Run with Gunicorn; create_app() builds the FastAPI application
gunicorn 'g4f.api:create_app()' \
--workers 4 \
--worker-class uvicorn.workers.UvicornWorker \
--bind 0.0.0.0:1337 \
--access-logfile - \
--error-logfile -
```
This comprehensive REST API documentation covers all aspects of using G4F's API endpoints. The API is designed to be fully compatible with OpenAI's API, making it easy to integrate with existing tools and workflows.