Mirror of https://github.com/xtekky/gpt4free.git (synced 2025-12-05 18:20:35 -08:00)

Commit ef72441e47 (parent 78c0d67d54): Add comprehensive documentation with usage guide and API reference
Co-authored-by: fkahdias <fkahdias@gmail.com>
3 changed files with 355 additions and 1 deletion
docs/API_REFERENCE.md (new file, 166 lines)
# g4f API Reference

> This document gives a **human-curated** overview of all *public* classes, functions and constants in the `g4f` package. Internal helpers (modules prefixed with an underscore or imported only for backward compatibility) are intentionally omitted.

For detailed type information, inspect the inline type hints or open the corresponding source files in your editor.

---

## Package-level exports (`import g4f`)
| Symbol | Type | Description |
| ------ | ---- | ----------- |
| `ChatCompletion` | `class` | High-level static interface for creating chat (and optionally image) completions. Mirrors the official OpenAI semantics. |
| `Model` | `dataclass` | Immutable description of a model + its preferred provider. Registered on import. |
| `ModelRegistry` | `class` | Global registry; look-up utility for `Model` instances and aliases. |
| `Client` | `class` | Synchronous convenience wrapper combining chat & image endpoints. |
| `AsyncClient` | `class` | Asynchronous variant of `Client`. |
| `get_cookies / set_cookies` | `function` | Persist and retrieve provider-specific cookies used during web-scraping. |

---

## 1. `ChatCompletion`
```python
class ChatCompletion:
    @staticmethod
    def create(
        model: Union[Model, str],
        messages: Messages,
        provider: Union[ProviderType, str, None] = None,
        stream: bool = False,
        image: ImageType | None = None,
        image_name: str | None = None,
        ignore_working: bool = False,
        ignore_stream: bool = False,
        **provider_kwargs,
    ) -> str | Iterator[ChatCompletionChunk] | ChatCompletionChunk:
        """Generate a completion. If *stream* is *True*, an iterator of chunks is returned."""

    @staticmethod
    async def create_async(...):
        """Asynchronous mirror of `create`. Returns either a coroutine (non-stream) or an async iterator (stream)."""
```
### Behaviour

1. **Automatic provider routing** – when *provider* is `None`, the best provider for the chosen *model* is selected via `models.py`.
2. **Proxies** – the `G4F_PROXY` environment variable is honoured if the `proxy` kwarg is omitted.
3. **Images** – pass a file-like object or bytes via the `image` parameter to switch to *vision* mode.
### Minimal example

```python
from g4f import ChatCompletion

answer = ChatCompletion.create(
    model="gpt-4", messages=[{"role": "user", "content": "Ping!"}]
)
print(answer)
```
---

## 2. `g4f.client` module

### `Client`

A one-stop object that contains nested service namespaces mirroring the official OpenAI Python SDK.

```python
client = g4f.client.Client(proxy="http://127.0.0.1:7890")

client.chat.completions.create(...)
client.images.generate(...)
```
Key attributes:

| Attribute | Type | Purpose |
| --------- | ---- | ------- |
| `chat.completions` | `Completions` | Synchronous chat endpoint. |
| `images` / `media` | `Images` | Image generation & variation helpers. |
| `models` | `ClientModels` | Convenience object for provider selection. |
### `AsyncClient`

Identical surface, but all methods are `async`:

```python
async_client = g4f.client.AsyncClient()
answer = await async_client.chat.completions.create(...)
```

---
## 3. `g4f.models` module

### `Model`

Dataclass with fields:

```python
name: str                   # "gpt-4o", "llama-3-8b", ...
base_provider: str          # human-readable provider family
best_provider: ProviderType | IterListProvider
```

The file ships with **hundreds** of ready-to-use model constants (e.g. `g4f.models.gpt_4`, `llama_3_70b`, `dall_e_3`). Retrieve them dynamically via:

```python
from g4f.models import ModelRegistry

print(ModelRegistry.all_models().keys())
```
### `ModelRegistry` – helper methods

* `get(name)` – resolve an alias or canonical name to a `Model` instance.
* `list_models_by_provider(provider_name)` – filter by provider (e.g. "Together").
* `validate_all_models()` – sanity-check that each registered model has a provider.
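The alias resolution performed by `get` can be illustrated with a simplified stand-in (the real registry lives in `g4f/models.py`; the field names mirror the `Model` dataclass above, everything else here is illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    base_provider: str

class ModelRegistry:
    _models: dict[str, Model] = {}
    _aliases: dict[str, str] = {}

    @classmethod
    def register(cls, model: Model, aliases: tuple[str, ...] = ()) -> None:
        cls._models[model.name] = model
        for alias in aliases:
            cls._aliases[alias] = model.name

    @classmethod
    def get(cls, name: str) -> Model:
        # Resolve an alias to its canonical name, then look the model up.
        canonical = cls._aliases.get(name, name)
        return cls._models[canonical]

ModelRegistry.register(Model("gpt-4o", "OpenAI"), aliases=("gpt4o",))
print(ModelRegistry.get("gpt4o").name)  # gpt-4o
```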
---

## 4. Error classes (`g4f.errors`)

| Error | Raised when |
| ----- | ----------- |
| `StreamNotSupportedError` | You requested `stream=True` but the selected provider lacks streaming support. |
| `NoMediaResponseError` | No image data was returned by the provider. |

All errors ultimately inherit from `Exception`.
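Because of the shared ancestor, a broad `except Exception` catches every g4f error; a minimal sketch (only the class names come from `g4f.errors`, the bodies are stand-ins):

```python
# Illustrative stand-ins for the real classes in g4f.errors.
class StreamNotSupportedError(Exception):
    """Raised when stream=True is requested but the provider cannot stream."""

class NoMediaResponseError(Exception):
    """Raised when a provider returns no image data."""

try:
    raise StreamNotSupportedError("provider lacks streaming support")
except Exception as exc:  # the common ancestor catches both error types
    print(type(exc).__name__)  # StreamNotSupportedError
```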
---

## 5. Provider ecosystem (advanced)

Providers live in the `g4f.providers` & `g4f.Provider` packages. Each provider class implements `create_function` and (optionally) `async_create_function`. When adding a new provider, follow the template in `providers/types.py`.

---

## 6. Typing aliases (`g4f.typing`)

Useful public aliases:
```python
Messages = list[dict[str, str]]
ImageType = Union[str, bytes, pathlib.Path, BinaryIO]
CreateResult = ChatCompletion | Iterator[ChatCompletionChunk]
```
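A value conforming to the `Messages` alias is just a list of role/content dictionaries, so it can be built and checked with plain Python (no g4f import needed):

```python
Messages = list[dict[str, str]]

messages: Messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Ping!"},
]

# Every entry carries at least a role and a content string.
assert all({"role", "content"} <= entry.keys() for entry in messages)
print([entry["role"] for entry in messages])  # ['system', 'user']
```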
---

## 7. CLI entry-point (`python -m g4f`)

`python -m g4f "Your prompt here" --model gpt-4o --stream` launches the minimal CLI tool defined in `g4f/__main__.py`.

---

## 8. Debug utilities

Set the `G4F_DEBUG` environment variable to enable verbose logs from the `g4f.debug` module.
---

### Notes

* **Stability** – public APIs follow semantic-versioning rules (see `g4f.version.__version__`). Minor & patch releases will not introduce breaking changes.
* **Experimental modules** (`g4f.gui`, `g4f.local`) are *not* covered by this reference and may change at any time without notice.
Modified file (`@@ -1 +1,10 @@`):

Link to [Documentation](https://github.com/gpt4free/gpt4free.github.io)

---

# Local Documentation Index

* [Usage Guide](./USAGE.md) – step-by-step examples for the most common tasks.
* [API Reference](./API_REFERENCE.md) – class & function level documentation.

The upstream hosted docs remain available at the link above, but the Markdown files in this folder are guaranteed to be in sync with the current commit.
docs/USAGE.md (new file, 179 lines)
# g4f Usage Guide

This guide provides practical, copy-paste-ready examples demonstrating the most common ways to use **g4f** in your own projects.

---

## 1. Installation

```bash
pip install g4f        # or install from source
```
> **Tip** – If you are in a PEP 668-managed environment (e.g. Debian/Ubuntu 24.04), add the `--break-system-packages` flag:
>
> ```bash
> pip install --break-system-packages g4f
> ```
---

## 2. Quick start – one-liner

```python
import g4f

response = g4f.ChatCompletion.create(
    model="gpt-4o",  # or any other supported model name
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response)  # → "Hello! How can I help you today?"
```
---

## 3. Chat completions in detail

### Synchronous API

```python
from g4f import ChatCompletion

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarise the plot of Dune in one sentence."},
]

result = ChatCompletion.create(
    model="gpt-4o-mini",          # model alias or full name
    messages=messages,
    # provider="DeepInfraChat",   # optional – override automatic routing
    stream=False                  # default: return a single string
)
print(result)                     # with stream=False the result is a plain string
```
### Streaming responses

```python
for chunk in ChatCompletion.create(
    model="gpt-4o-mini", messages=messages, stream=True
):
    print(chunk, end="", flush=True)  # each chunk is a ChatCompletionChunk
```
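When streaming, the full answer has to be assembled by the caller; a provider-independent sketch, using plain strings in place of `ChatCompletionChunk` objects:

```python
def accumulate(chunks):
    """Join streamed text chunks into the final answer."""
    parts = []
    for chunk in chunks:
        parts.append(str(chunk))  # a real chunk would expose its text content
    return "".join(parts)

# Simulated stream of partial tokens:
print(accumulate(["Du", "ne is ", "a desert epic."]))  # Dune is a desert epic.
```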
### Asynchronous API

```python
import asyncio
from g4f import ChatCompletion

async def main():
    async for chunk in ChatCompletion.create_async(
        model="gpt-4", messages=messages, stream=True
    ):
        print(chunk)

asyncio.run(main())
```
---

## 4. High-level clients

The `Client` and `AsyncClient` classes wrap chat, image and (soon) voice endpoints in a single object.

```python
import asyncio
from g4f.client import Client, AsyncClient

client = Client(proxy="http://127.0.0.1:7890")

# Chat
answer = client.chat.completions.create(
    messages="Why is the sky blue?"
)
print(answer.content)

# Images (sync)
image_resp = client.images.generate("A cyber-punk cityscape at night")
image_resp.save("cyberpunk.png")

# Asynchronous variant
async def run():
    async_client = AsyncClient()
    answer = await async_client.chat.completions.create(
        messages="List the first 5 prime numbers"
    )
    print(answer.content)

asyncio.run(run())
```
---

## 5. Image generation

```python
img = client.images.generate(
    prompt="A photo-realistic cat wearing sunglasses",
    model="dall-e-3"  # or leave blank for automatic provider selection
)

# Access the image as a Pillow object
img_pil = img.images[0]
img_pil.show()

# Or save to disk
img.save_all("output/")
```
---

## 6. Model registry utilities

```python
from g4f.models import ModelRegistry

print("All models:", ModelRegistry.all_models().keys())
print("Models from the Together provider:", ModelRegistry.list_models_by_provider("Together"))
```
---

## 7. Environment variables

* `G4F_PROXY` – default HTTP(S) proxy used when `proxy` is not supplied.
* `G4F_PROVIDER_TIMEOUT` – override the default request timeout (in seconds).
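A sketch of how `G4F_PROVIDER_TIMEOUT` could be read with a safe fallback; the variable name comes from the list above, while the helper and the default value are assumptions for illustration:

```python
import os

DEFAULT_TIMEOUT = 120  # assumed default, in seconds

def provider_timeout() -> float:
    """Read the timeout from the environment, falling back to the default."""
    raw = os.environ.get("G4F_PROVIDER_TIMEOUT")
    try:
        return float(raw) if raw is not None else DEFAULT_TIMEOUT
    except ValueError:
        return DEFAULT_TIMEOUT  # ignore malformed values rather than crash

os.environ["G4F_PROVIDER_TIMEOUT"] = "30"
print(provider_timeout())  # 30.0
```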
---

## 8. Error handling basics

```python
from g4f import ChatCompletion
from g4f.errors import StreamNotSupportedError

try:
    ChatCompletion.create(model="some-model", messages=[], stream=True)
except StreamNotSupportedError:
    print("Selected provider does not support streaming")
```
---

## 9. CLI usage

The project ships with an experimental CLI:

```bash
g4f "Translate 'Good morning' to Spanish" --model gpt-4o
```

Run `g4f --help` to see the full list of flags.
---

## 10. Next steps

* Dive into the [API reference](./API_REFERENCE.md) for every public class and function.
* Read the [Contributing guide](../CONTRIBUTING.md) if you want to add a new provider or model.