diff --git a/README.md b/README.md
index 8f3181ae..8d218352 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,10 @@
-
-
-
+
+
@@ -16,176 +18,222 @@ ❤️
-
-
-> Latest version:
+Live demo: https://g4f.dev | Documentation: https://g4f.dev/docs
+
 ---
-## 📚 Table of Contents
-  - [🆕 What's New](#-whats-new)
-  - [📚 Table of Contents](#-table-of-contents)
-  - [⚡ Getting Started](#-getting-started)
-    - [🛠 Installation](#-installation)
-      - [🐳 Using Docker](#-using-docker)
-      - [🪟 Windows Guide (.exe)](#-windows-guide-exe)
-      - [🐍 Python Installation](#-python-installation)
-  - [💡 Usage](#-usage)
-    - [📝 Text Generation](#-text-generation)
-    - [🎨 Image Generation](#-image-generation)
-    - [🌐 Web Interface](#-web-interface)
-    - [🖥️ Local Inference](https://github.com/gpt4free/g4f.dev/blob/main/docs/local.md)
-    - [🤖 Interference API](#-interference-api)
-    - [🛠️ Configuration](https://github.com/gpt4free/g4f.dev/blob/main/docs/configuration.md)
-    - [📱 Run on Smartphone](#-run-on-smartphone)
-    - [📘 Full Documentation for Python API](#-full-documentation-for-python-api)
-  - [🚀 Providers and Models](https://github.com/gpt4free/g4f.dev/blob/main/docs%2Fproviders-and-models.md)
-  - [🔗 Powered by gpt4free](#-powered-by-gpt4free)
-  - [🤝 Contribute](#-contribute)
-    - [How do i create a new Provider?](#guide-how-do-i-create-a-new-provider)
-    - [How can AI help me with writing code?](#guide-how-can-ai-help-me-with-writing-code)
-  - [🙌 Contributors](#-contributors)
-  - [©️ Copyright](#-copyright)
-  - [⭐ Star History](#-star-history)
-  - [📄 License](#-license)
+GPT4Free (g4f) is a community-driven project that aggregates multiple accessible providers and interfaces to make working with modern LLMs and media-generation models easier and more flexible. GPT4Free aims to offer multi-provider support, a local GUI, an OpenAI-compatible REST API, and convenient Python and JavaScript clients, all under a community-first license.
+
+This README is a consolidated guide to installing, running, and contributing to GPT4Free.
+
+Table of contents
+- [What's included](#whats-included)
+- [Quick links](#quick-links)
+- [Requirements & compatibility](#requirements--compatibility)
+- [Installation](#installation)
+  - [Docker (recommended)](#docker-recommended)
+  - [Slim Docker image](#slim-docker-image)
+  - [Windows (.exe)](#windows-exe)
+  - [Python (pip / from source / partial installs)](#python-pip--from-source--partial-installs)
+- [Running the app](#running-the-app)
+  - [GUI (web client)](#gui-web-client)
+  - [FastAPI / Interference API](#fastapi--interference-api)
+  - [CLI](#cli)
+  - [Optional provider login (desktop in container)](#optional-provider-login-desktop-in-container)
+- [Using the Python client](#using-the-python-client)
+  - [Synchronous text example](#synchronous-text-example)
+  - [Image generation example](#image-generation-example)
+  - [Async client example](#async-client-example)
+- [Using GPT4Free.js (browser JS client)](#using-gpt4freejs-browser-js-client)
+- [Providers & models (overview)](#providers--models-overview)
+- [Local inference & media](#local-inference--media)
+- [Configuration & customization](#configuration--customization)
+- [Running on smartphone](#running-on-smartphone)
+- [Interference API (OpenAI-compatible)](#interference-api-openai-compatible)
+- [Examples & common patterns](#examples--common-patterns)
+- [Contributing](#contributing)
+  - [How to create a new provider](#how-to-create-a-new-provider)
+  - [How AI can help you write code](#how-ai-can-help-you-write-code)
+- [Security, privacy & takedown policy](#security-privacy--takedown-policy)
+- [Credits, contributors & attribution](#credits-contributors--attribution)
+- [Powered-by highlights](#powered-by-highlights)
+- [Changelog & releases](#changelog--releases)
+- [Manifesto / Project principles](#manifesto--project-principles)
+- [License](#license)
+- [Contact & sponsorship](#contact--sponsorship)
+- [Appendix: Quick commands & examples](#appendix-quick-commands--examples)
 ---
-## ⚡️ Getting Started
+## What's included
+- Python client library and async client.
+- Optional local web GUI.
+- FastAPI-based OpenAI-compatible API (Interference API).
+- Official browser JS client (g4f.dev distribution).
+- Docker images (full and slim).
+- Multi-provider adapters (LLMs, media providers, local inference backends).
+- Tooling for image/audio/video generation and media persistence.
-## 🛠 Installation
+---
-### 🐳 Using Docker
-1. **Install Docker:** [Download and install Docker](https://docs.docker.com/get-docker/).
-2. **Set Up Directories:** Before running the container, make sure the necessary data directories exist or can be created. For example, you can create and set ownership on these directories by running:
-```bash
-mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
-sudo chown -R 1200:1201 ${PWD}/har_and_cookies ${PWD}/generated_media
-```
-3. **Run the Docker Container:** Use the following commands to pull the latest image and start the container (Only x64):
-```bash
-docker pull hlohaus789/g4f
-docker run -p 8080:8080 -p 7900:7900 \
-  --shm-size="2g" \
-  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
-  -v ${PWD}/generated_media:/app/generated_media \
-  hlohaus789/g4f:latest
-```
+## Quick links
+- Website & docs: https://g4f.dev | https://g4f.dev/docs
+- PyPI: https://pypi.org/project/g4f
+- Docker image: https://hub.docker.com/r/hlohaus789/g4f
+- Releases: https://github.com/xtekky/gpt4free/releases
+- Issues: https://github.com/xtekky/gpt4free/issues
+- Community: Telegram (https://telegram.me/g4f_channel) · Discord News (https://discord.gg/5E39JUWUFa) · Discord Support (https://discord.gg/qXA4Wf4Fsm)
-4. **Running the Slim Docker Image:** And use the following commands to run the Slim Docker image. This command also updates the `g4f` package at startup and installs any additional dependencies: (x64 and arm64)
+---
+
+## Requirements & compatibility
+- Python 3.10+ recommended.
+- Google Chrome/Chromium for providers using browser automation.
+- Docker for containerized deployment.
+- Works on x86_64 and arm64 (the slim image supports both).
+- Some provider adapters may require platform-specific tooling (Chrome/Chromium, etc.). Check the provider docs for details.
+
+---
+
+## Installation
+
+### Docker (recommended)
+1. Install Docker: https://docs.docker.com/get-docker/
+2. Create persistent directories:
+   - Example (Linux/macOS):
+   ```bash
+   mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
+   sudo chown -R 1200:1201 ${PWD}/har_and_cookies ${PWD}/generated_media
+   ```
+3. Pull the image:
+   ```bash
+   docker pull hlohaus789/g4f
+   ```
+4. Run the container:
+   ```bash
+   docker run -p 8080:8080 -p 7900:7900 \
+     --shm-size="2g" \
+     -v ${PWD}/har_and_cookies:/app/har_and_cookies \
+     -v ${PWD}/generated_media:/app/generated_media \
+     hlohaus789/g4f:latest
+   ```
+Notes:
+- Port 8080 serves the GUI/API; 7900 can expose a VNC-like desktop for provider logins (optional).
+- Increase `--shm-size` for heavier browser automation tasks.
+
+### Slim Docker image (x64 & arm64)
 ```bash
 mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
 chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_media
+
 docker run \
   -p 1337:8080 -p 8080:8080 \
   -v ${PWD}/har_and_cookies:/app/har_and_cookies \
   -v ${PWD}/generated_media:/app/generated_media \
   hlohaus789/g4f:latest-slim
 ```
-
-5. **Access the Client Interface:**
-   - **To use the included client, navigate to:** [http://localhost:8080/chat/](http://localhost:8080/chat/)
-   - **Or set the API base for your client to:** [http://localhost:8080/v1](http://localhost:8080/v1)
+Notes:
+- The slim image updates the `g4f` package on startup and installs additional dependencies as needed.
+- In this example, the Interference API is mapped to port 1337.
-6. **(Optional) Provider Login:**
-   If required, you can access the container's desktop here: http://localhost:7900/?autoconnect=1&resize=scale&password=secret for provider login purposes.
+### Windows Guide (.exe)
+1. Download the release artifact `g4f.exe.zip` from:
+   https://github.com/xtekky/gpt4free/releases/latest
+2. Unzip and run `g4f.exe`.
+3. Open the GUI at: http://localhost:8080/chat/
+4. If Windows Firewall blocks access, allow the application.
----
+### Python Installation (pip / from source / partial installs)
-### 🪟 Windows Guide (.exe)
-To ensure the seamless operation of our application, please follow the instructions below. These steps are designed to guide you through the installation process on Windows operating systems.
+Prerequisites:
+- Python 3.10+ (https://www.python.org/downloads/)
+- Chrome/Chromium for some providers.
-**Installation Steps:**
-1. **Download the Application**: Visit our [releases page](https://github.com/xtekky/gpt4free/releases/latest) and download the most recent version of the application, named `g4f.exe.zip`.
-2. **File Placement**: After downloading, locate the `.zip` file in your Downloads folder. Unpack it to a directory of your choice on your system, then execute the `g4f.exe` file to run the app.
-3. **Open GUI**: The app starts a web server with the GUI. Open your favorite browser and navigate to [http://localhost:8080/chat/](http://localhost:8080/chat/) to access the application interface.
-4. **Firewall Configuration (Hotfix)**: Upon installation, it may be necessary to adjust your Windows Firewall settings to allow the application to operate correctly. To do this, access your Windows Firewall settings and allow the application.
-
-By following these steps, you should be able to successfully install and run the application on your Windows system. If you encounter any issues during the installation process, please refer to our Issue Tracker or try to get contact over Discord for assistance.
-
----
-
-### 🐍 Python Installation
-
-#### Prerequisites:
-1. Install Python 3.10+ from [python.org](https://www.python.org/downloads/).
-2. Install Google Chrome for certain providers.
-
-#### Install with PyPI:
+Install from PyPI (recommended):
 ```bash
 pip install -U g4f[all]
 ```
-> How do I install only parts or do disable parts? **Use partial requirements:** [/docs/requirements](https://github.com/gpt4free/g4f.dev/blob/main/docs/requirements.md)
+Partial installs
+- To install only specific functionality, use the optional extras groups. See docs/requirements.md in the project docs.
-#### Install from Source:
+Install from source:
 ```bash
 git clone https://github.com/xtekky/gpt4free.git
 cd gpt4free
 pip install -r requirements.txt
+pip install -e .
 ```
-> How do I load the project using git and installing the project requirements? **Read this tutorial and follow it step by step:** [/docs/git](https://github.com/gpt4free/g4f.dev/blob/main/docs/git.md)
+Notes:
+- Some features require Chrome/Chromium or other tools; follow the provider-specific docs.
 ---
-## 💡 Usage
+## Running the app
-### 📝 Text Generation
+### GUI (web client)
+- Run via Python:
+```python
+from g4f.gui import run_gui
+run_gui()
+```
+- Or via CLI:
+```bash
+python -m g4f.cli gui --port 8080 --debug
+```
+- Open: http://localhost:8080/chat/
+
+### FastAPI / Interference API
+- Start the FastAPI server:
+```bash
+python -m g4f --port 8080 --debug
+```
+- With the slim Docker mapping above, the Interference API is available at `http://localhost:1337/v1`
+- Swagger UI: `http://localhost:1337/docs`
+- See the sketch after this section for calling the API with an OpenAI-compatible SDK.
+
+### CLI
+- Start the GUI server:
+```bash
+python -m g4f.cli gui --port 8080 --debug
+```
+
+### Optional provider login (desktop in container)
+- Accessible at:
+  ```
+  http://localhost:7900/?autoconnect=1&resize=scale&password=secret
+  ```
+- Useful for logging into web-based providers to obtain cookies/HAR files.
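+
+Because the Interference API is OpenAI-compatible, any OpenAI SDK can reach it by overriding the base URL. The snippet below is an illustrative sketch rather than an official example: it assumes the server started above is listening on port 8080 (use 1337 with the slim-image mapping), that the separate `openai` Python package is installed, and that no API key has been configured (the key value is a placeholder).
+```python
+from openai import OpenAI
+
+# Point the standard OpenAI client at the local GPT4Free server.
+client = OpenAI(
+    base_url="http://localhost:8080/v1",  # or http://localhost:1337/v1 with the slim mapping
+    api_key="not-needed",                 # placeholder; only needed if you configured a key
+)
+
+response = client.chat.completions.create(
+    model="gpt-4o-mini",  # example model name; availability depends on the active providers
+    messages=[{"role": "user", "content": "Say hello from the Interference API"}],
+)
+print(response.choices[0].message.content)
+```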
+
+---
+
+## Using the Python client
+
+Install:
+```bash
+pip install -U g4f[all]
+```
+
+Synchronous text example:
 ```python
 from g4f.client import Client
 client = Client()
 response = client.chat.completions.create(
     model="gpt-4o-mini",
-    messages=[{"role": "user", "content": "Hello"}],
+    messages=[{"role": "user", "content": "Hello, how are you?"}],
     web_search=False
 )
 print(response.choices[0].message.content)
 ```
+Expected:
 ```
 Hello! How can I assist you today?
 ```
-### 🎨 Image Generation
+Image generation example:
 ```python
 from g4f.client import Client
@@ -195,266 +243,179 @@ response = client.images.generate(
     prompt="a white siamese cat",
     response_format="url"
 )
-
 print(f"Generated image URL: {response.data[0].url}")
 ```
-[](https://github.com/gpt4free/g4f.dev/blob/main/docs/client.md)
-### 🧙‍♂️ Using GPT4Free.js
+Async client example:
+```python
+from g4f.client import AsyncClient
+import asyncio
-Use the **official JS client** right in the browser – no backend needed.
+async def main():
+    client = AsyncClient()
+    response = await client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=[{"role": "user", "content": "Explain quantum computing briefly"}],
+    )
+    print(response.choices[0].message.content)
-For text generation:
+asyncio.run(main())
 ```
+
+Notes:
+- See the full API reference for streaming, tool-calling patterns, and advanced options: https://g4f.dev/docs/client
+
+---
+
+## Using GPT4Free.js (browser JS client)
+Use the official JS client in the browser, no backend required.
+
+Example:
 ```html
 <script type="module">
     // Example only; see https://g4f.dev/docs for the maintained snippet.
     import Client from 'https://g4f.dev/dist/js/client.js';
     const client = new Client();
     const result = await client.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [{ role: 'user', content: 'Explain quantum computing in simple terms.' }]
     });
     console.log(result.choices[0].message.content);
 </script>
 ```
-### 🌐 Web Interface
-**Run the GUI using Python:**
-```python
-from g4f.gui import run_gui
-
-run_gui()
-```
-**Run via CLI (To start the Flask Server):**
-```bash
-python -m g4f.cli gui --port 8080 --debug
-```
-**Or, start the FastAPI Server:**
-```bash
-python -m g4f --port 8080 --debug
-```
-
-> **Learn More About the GUI:** For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the [GUI Documentation](https://github.com/gpt4free/g4f.dev/blob/main/docs/gui.md). This guide includes step-by-step details on provider selection, managing conversations, using advanced features like speech recognition, and more.
 ---
-### 🤖 Interference API
+## Providers & models (overview)
+- GPT4Free integrates many providers including (but not limited to) OpenAI-compatible endpoints, PerplexityLabs, Gemini, MetaAI, Pollinations (media), and local inference backends.
+- Model availability and behavior depend on provider capabilities. See the providers doc for the current list of supported providers and models: https://g4f.dev/docs/providers-and-models
-The **Interference API** enables seamless integration with OpenAI's services through G4F, allowing you to deploy efficient AI solutions.
-
-- **Documentation**: [Interference API Docs](https://github.com/gpt4free/g4f.dev/blob/main/docs/interference-api.md)
-- **Endpoint**: `http://localhost:1337/v1`
-- **Swagger UI**: Explore the OpenAPI documentation via Swagger UI at `http://localhost:1337/docs`
-- **Provider Selection**: [How to Specify a Provider?](https://github.com/gpt4free/g4f.dev/blob/main/docs/selecting_a_provider.md)
-
-This API is designed for straightforward implementation and enhanced compatibility with other OpenAI integrations.
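+
+As a quick illustration of pinning one of these providers from the Python client (a sketch only; the provider and model names are examples and may change, so check the providers doc before relying on them):
+```python
+from g4f.client import Client
+from g4f.Provider import PollinationsAI  # example provider; pick any class exported by g4f.Provider
+
+# Pin the provider for every request made through this client.
+client = Client(provider=PollinationsAI)
+response = client.chat.completions.create(
+    model="gpt-4o-mini",  # must be a model the chosen provider actually serves
+    messages=[{"role": "user", "content": "Hello from a pinned provider"}],
+)
+print(response.choices[0].message.content)
+```
+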
+Provider requirements may include:
+- API keys or tokens (for authenticated providers)
+- Browser cookies / HAR files for providers scraped via browser automation
+- Chrome/Chromium or headless browser tooling
+- Local model binaries and runtime (for local inference)
 ---
-### 📱 Run on Smartphone
-Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device: [Run on Smartphone Guide](https://github.com/gpt4free/g4f.dev/blob/main/docs/guides/phone.md)
+## Local inference & media
+- GPT4Free supports local inference backends. See [docs/local.md](https://github.com/gpt4free/g4f.dev/blob/main/docs/local.md) for supported runtimes and hardware guidance.
+- Media generation (image, audio, video) is supported through providers (e.g., Pollinations). See [docs/media.md](https://github.com/gpt4free/g4f.dev/blob/main/docs/media.md) for formats, options, and sample usage.
 ---
-#### **📘 Full Documentation for Python API**
-  - **Client API from G4F:** [/docs/client](https://github.com/gpt4free/g4f.dev/blob/main/docs/client.md)
-  - **AsyncClient API from G4F:** [/docs/async_client](https://github.com/gpt4free/g4f.dev/blob/main/docs/async_client.md)
-  - **Requests API from G4F:** [/docs/requests](https://github.com/gpt4free/g4f.dev/blob/main/docs/requests.md)
-  - **File API from G4F:** [/docs/file](https://github.com/gpt4free/g4f.dev/blob/main/docs/file.md)
-  - **PydanticAI and LangChain Integration for G4F:** [/docs/pydantic_ai](https://github.com/gpt4free/g4f.dev/blob/main/docs/pydantic_ai.md)
-  - **Legacy API with python modules:** [/docs/legacy](https://github.com/gpt4free/g4f.dev/blob/main/docs/legacy.md)
-  - **G4F - Media Documentation (Image, Audio and Video)** [/docs/media](https://github.com/gpt4free/g4f.dev/blob/main/docs/media.md) *(New)*
+## Configuration & customization
+- Configure via environment variables, CLI flags, or config files. See [docs/config.md](https://github.com/gpt4free/g4f.dev/blob/main/docs/config.md).
+- To reduce install size, use partial requirement groups. See [docs/requirements.md](https://github.com/gpt4free/g4f.dev/blob/main/docs/requirements.md).
+- Provider selection: learn how to set defaults and override per-request at [docs/selecting_a_provider.md](https://github.com/gpt4free/g4f.dev/blob/main/docs/selecting_a_provider.md).
+- Persistence: HAR files, cookies, and generated media persist in mapped directories (e.g., har_and_cookies, generated_media); see the sketch below.
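+
+For example, a cookie/HAR persistence setup from Python might look like this sketch (illustrative only; it assumes the `set_cookies_dir` and `read_cookie_files` helpers documented for `g4f.cookies` and the `har_and_cookies` folder created in the Docker section):
+```python
+import os.path
+from g4f.cookies import set_cookies_dir, read_cookie_files
+
+# Point g4f at the directory holding HAR files and cookies, then load them.
+cookies_dir = os.path.join(os.path.dirname(__file__), "har_and_cookies")
+set_cookies_dir(cookies_dir)
+read_cookie_files(cookies_dir)
+```
+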
 ---
-### Powered by Pollinations AI
-
-**Pollinations AI**
-
-This project is licensed under GNU_GPL_v3.0.