* feat: add agent CLI, new providers, and update models
- Add a new `agent` mode to the CLI, a feature-rich AI coding assistant with capabilities for file system operations, code execution, git integration, and interactive chat.
- Add new provider `OperaAria` with support for vision, streaming, and conversation history.
- Add new provider `Startnest` with support for `gpt-4o-mini`, vision, and streaming.
- Move providers `FreeGpt` and `Websim` to the `not_working` directory.
- Delete the `OIVSCodeSer2` provider.
- Rename the CLI `client` mode to `chat` and refactor its argument parsing.
- In `g4f/Provider/DeepInfraChat.py`:
- Add a new `api_endpoint` attribute.
- Extensively update and reorganize the `models` list and `model_aliases` dictionary with numerous new models.
- In `g4f/Provider/LambdaChat.py`:
- Change the `default_model` to `deepseek-v3-0324`.
- In `g4f/Provider/Together.py`:
- Update aliases for `llama-3.1-405b`, `deepseek-r1`, and `flux`.
- Add new models including `gemma-3-27b`, `gemma-3n-e4b`, and `qwen-3-32b`.
- In `g4f/models.py`:
- Add new model definitions for `aria`, `deepseek_v3_0324_turbo`, `deepseek_r1_0528_turbo`, and several `gemma` variants.
- Remove the `mixtral_8x22b` model definition.
- Update the `best_provider` lists for `default`, `default_vision`, `gpt_4o_mini`, `gemini-1.5-pro`, `gemini-1.5-flash`, and others to reflect provider changes.
- In `g4f/Provider/__init__.py`:
- Add `OperaAria` and `Startnest` to the list of imported providers.
- Remove `FreeGpt`, `OIVSCodeSer2`, and `Websim` from imports.
- In `requirements.txt`:
- Add `rich` as a new dependency for the agent CLI.
* feat: add gemma-3-4b alias to DeepInfraChat
In g4f/Provider/DeepInfraChat.py, add the gemma-3-4b alias to the model_aliases dictionary.
The new alias points to the google/gemma-3-4b-it model.
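In `model_aliases` form, the addition above is just a one-line mapping (surrounding entries of `DeepInfraChat` omitted):

```python
# The alias described above, shown in the usual model_aliases dictionary form.
model_aliases = {
    "gemma-3-4b": "google/gemma-3-4b-it",
}
```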
* feat: add OIVSCodeSer2 provider
- Create the new provider file `g4f/Provider/OIVSCodeSer2.py`.
- The provider supports its model and includes a custom generation helper method.
- Import and include the provider in `g4f/Provider/__init__.py`.
- Add the provider to the relevant model definitions and `best_provider` lists in `g4f/models.py`.
* refactor: Migrate from duckduckgo-search to ddgs library
* Replaced the `duckduckgo-search` dependency with the new `ddgs` library.
* In `g4f/tools/web_search.py`, updated the imports from `duckduckgo_search` to `ddgs` and switched the exception import to `DDGSError`.
* Modified the search function in `g4f/tools/web_search.py` to use a context manager instead of a global instance.
* Updated the search function to catch the new `DDGSError` exception.
* Updated the web search import and the installation instructions to reference `ddgs`.
* Removed unnecessary comments and simplified f-string formatting.
* Added `ddgs` to `requirements.txt`.
* Added the corresponding entries to the extras in `setup.py`.
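A minimal sketch of the migrated search path, assuming the `ddgs` package exposes a `DDGS` context manager and the `DDGSError` exception named in these notes:

```python
# Minimal sketch of the migrated search call; the DDGS class and DDGSError
# exception follow the names used in this changelog and may live in slightly
# different modules depending on the installed ddgs version.
from ddgs import DDGS
from ddgs.exceptions import DDGSError  # assumed import location

def search_text(query: str, max_results: int = 5) -> list[dict]:
    """Run a text search with a short-lived DDGS instance instead of a global one."""
    try:
        with DDGS() as ddgs:  # context manager replaces the module-level instance
            return list(ddgs.text(query, max_results=max_results))
    except DDGSError as error:
        raise RuntimeError(f"Web search failed: {error}") from error
```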
* test: Update web search tests for ddgs library and cleanup code
* In `etc/unittest/web_search.py`, updated imports from `duckduckgo_search` to `ddgs` and `DDGSError`.
* Added an import for `MissingRequirementsError` in `etc/unittest/web_search.py`.
* Modified exception handling in web search tests to catch both `DDGSError` and `MissingRequirementsError`.
* Removed temporary modification comments in `g4f/cli/agent/agent.py` and `g4f/tools/web_search.py`.
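A hedged sketch of the updated test guard; only the exception names above are taken from this changelog, the rest is illustrative:

```python
# Illustrative test guard catching both DDGSError and MissingRequirementsError,
# as described above; import paths and the placeholder search call are assumptions.
import unittest

from g4f.errors import MissingRequirementsError  # assumed import path

try:
    from ddgs.exceptions import DDGSError  # assumed import location
except ImportError:
    DDGSError = Exception  # fall back when ddgs is not installed

async def run_search(query: str) -> str:
    # Placeholder for the real web-search tool under test.
    raise MissingRequirementsError("ddgs is required")

class TestWebSearch(unittest.IsolatedAsyncioTestCase):
    async def test_search(self):
        try:
            result = await run_search("g4f")
            self.assertTrue(result)
        except (DDGSError, MissingRequirementsError):
            self.skipTest("ddgs not installed or search unavailable")
```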
* fix: remove unstable CLI feature due to critical errors
- Remove the experimental CLI coding assistant feature due to multiple stability issues and critical errors in production environments
- Delete the entire `g4f/cli/agent/` directory and all related functionality
- Remove `rich` dependency from `requirements.txt` as it was only used by the removed feature
- Remove `rich` from the `all` extras in `setup.py`
- Revert CLI mode naming from `chat` back to `client` for consistency
- Clean up argument parsing in CLI to remove references to the removed functionality
- Remove installation instructions and imports related to the unstable feature from documentation
This removal is necessary due to:
- Unpredictable behavior causing data loss risks
- Incompatibility with certain system configurations
- Security concerns with unrestricted file system access
- Excessive resource consumption in production environments
Note: This feature may be reintroduced in the future with a more stable and secure implementation that addresses the current limitations and safety concerns.
---------
Co-authored-by: kqlio67 <kqlio67@users.noreply.github.com>
- Added GithubCopilotAPI provider to g4f/Provider/needs_auth and __init__.py
- Fixed typo "GGOGLE_SID_COOKIE" to "GOOGLE_SID_COOKIE" in Gemini.py and updated all references
- Updated PollinationsAI.py:
- Refined model aliases and removed/commented unused/legacy aliases
- Updated logic for loading audio and vision models, using swap_models for alias reversals
- Adjusted get_model and model loading methods for accuracy
- Changed default model lists for text, image, and vision models
- Updated conversation title and followup labels for followups tools
- Modified save_content in g4f/cli/client.py to handle URL downloads for lists, allow cookies/headers, and remove duplicate HTTP download logic
- Added asyncio sleep after stdout writes in stream_response for smoother streaming
- Changed website.py render default to "home," adjusted chat route to accept any filename, and updated filenames used for rendering
- Updated model selection in g4f/models.py by removing PollinationsAI from best_provider and changing model provider order for specific models
- Enhanced media merging in g4f/tools/media.py to clarify comment about last user message and handle content appending for lists in render_messages
- Updated OpenaiTemplate.py to add an image_url field if media with http(s) URLs is present
- Adjusted test_provider_has_model in etc/unittest/models.py to skip providers requiring auth
- Modified `TestThinkingProcessor` unit tests in `thinking.py` to update expected results for reasoning statuses:
- Changed `"Finished"` status to `""` in `test_thinking_end`, `test_thinking_start_and_end`, and `test_chunk_with_text_after_think`.
feat: Add model aliases to HuggingFaceMedia provider
- Imported `model_aliases` from `.models` in `HuggingFaceMedia.py`.
- Added `model_aliases` attribute to `HuggingFaceMedia` provider class.
feat: Add timeout argument to CLI API parser
- Introduced `--timeout` argument in `get_api_parser()` in `cli.py` with default value of `600` seconds.
- Passed `timeout` argument to `run_api_args()` function.
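Sketched, the new flag could be wired roughly like this; only the `--timeout` name and the 600-second default come from the notes above:

```python
# Hedged sketch of the --timeout wiring; parser structure beyond the new flag
# and the run_api_args body are illustrative.
import argparse

def get_api_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Run the g4f API")
    parser.add_argument("--timeout", type=int, default=600,
                        help="Request timeout in seconds")
    return parser

def run_api_args(args: argparse.Namespace) -> None:
    print(f"Starting API with timeout={args.timeout}s")  # placeholder body

if __name__ == "__main__":
    run_api_args(get_api_parser().parse_args())
```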
- Added `--max-retries` argument to `parse_arguments()` in `commit.py` with default `MAX_RETRIES`
- Updated `generate_commit_message()` to accept a `max_retries` parameter and iterate accordingly
- Included check to raise immediately if `max_retries` is 1 within `generate_commit_message()`
- Passed `args.max_retries` when calling `generate_commit_message()` in `main()`
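A rough sketch of that retry flow, with a hypothetical `request_model()` helper standing in for the real API call:

```python
# Illustrative retry loop; MAX_RETRIES and request_model() are assumptions,
# only the --max-retries flow and the fail-fast check mirror the bullets above.
MAX_RETRIES = 3

def request_model(diff: str) -> str:
    raise NotImplementedError  # stands in for the real API call

def generate_commit_message(diff: str, max_retries: int = MAX_RETRIES) -> str:
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return request_model(diff)
        except Exception as error:
            if max_retries == 1:
                raise  # fail fast when only a single attempt is allowed
            last_error = error
    raise RuntimeError(f"All {max_retries} attempts failed") from last_error
```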
- In `g4f/Provider/har/__init__.py`, imported `ResponseError` and added check for network error to raise `ResponseError`
- In `g4f/Provider/hf_space/Qwen_Qwen_3.py`, changed default model string and updated system prompt handling to use `get_system_prompt()`
- In `g4f/Provider/needs_auth/LMArenaBeta.py`, modified callback to wait for cookie and turnstile response
- In `g4f/Provider/needs_auth/PuterJS.py`, adjusted `get_models()` to filter out certain models
- In `g4f/gui/server/api.py`, adjusted `get_model_data()` to handle models starting with "openrouter:"
- In `g4f/providers/any_provider.py`, imported and used `ResponseError`; added logic to process `model_aliases` with updated model name resolution
- Refined model name cleaning logic to handle additional patterns and replaced multiple regex patterns to better match version strings
- Updated list of providers `PROVIERS_LIST_1`, `PROVIERS_LIST_2`, `PROVIERS_LIST_3`, and their usage to include new providers and adjust filtering
- In `g4f/version.py`, added a `get_git_version()` function that retrieves the version via the `git describe` command instead of relying only on `get_github_version()`, improving robustness
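A possible shape of `get_git_version()`; the function name and the use of `git describe` come from the notes above, the flags and fallback are assumptions:

```python
# Sketch only: flags and error handling are illustrative.
import subprocess
from typing import Optional

def get_git_version() -> Optional[str]:
    try:
        output = subprocess.run(
            ["git", "describe", "--tags"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return output or None
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None  # caller can still fall back to get_github_version()
```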
* feat: add repository path support and new md2html converter tool
- Add `--repo` argument to commit.py for specifying git repository path with validation
- Add `validate_git_repository()` function to check repository existence and git status
- Add `get_repository_info()` function to extract branch and remote information
- Update `get_git_diff()` and `make_commit()` functions to accept repository path parameter
- Add Path import and repository validation in main workflow
- Enhance error messages with repository-specific guidance and context
- Update argument parser description and help text for new repository functionality
- Expand module docstring with comprehensive usage examples and feature descriptions
- Add new md2html.py tool for converting Markdown files to HTML using GitHub API
- Add template.html file with GitHub-styled CSS and responsive design
- Implement batch processing, retry logic, and rate limit handling in md2html converter
- Add comprehensive command-line interface with directory processing and custom output options
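A hedged sketch of the repository validation flow; the function names follow the bullets above, messages and details are illustrative:

```python
# Sketch of --repo validation; only the function names mirror the bullets above.
import subprocess
from pathlib import Path

def validate_git_repository(repo: str) -> Path:
    path = Path(repo).expanduser().resolve()
    if not path.is_dir():
        raise SystemExit(f"Repository path does not exist: {path}")
    result = subprocess.run(
        ["git", "-C", str(path), "rev-parse", "--is-inside-work-tree"],
        capture_output=True, text=True,
    )
    if result.returncode != 0 or result.stdout.strip() != "true":
        raise SystemExit(f"Not a git repository: {path}")
    return path

def get_git_diff(repo: Path) -> str:
    # Run the diff inside the selected repository instead of the current directory.
    return subprocess.run(
        ["git", "-C", str(repo), "diff", "--staged"],
        capture_output=True, text=True, check=True,
    ).stdout
```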
* refactor: Update provider configurations and model handling
- Removed Dynaspark provider entirely by deleting `g4f/Provider/Dynaspark.py`
- Deprecated DDG provider by moving to `not_working` directory and updating imports
- Restructured HuggingFace and MiniMax providers into `needs_auth` subpackage:
- Moved all HuggingFace provider files to `needs_auth/hf/`
- Moved MiniMax providers to `needs_auth/mini_max/`
- Updated ARTA provider:
- Expanded `model_aliases` with new tattoo styles and added aliases
- Added `get_model()` method for model resolution with list support
- Simplified Blackbox provider:
- Removed openrouter models and agentMode configurations
- Reduced model lists to core GPT variants
- Set session/subscriptionCache to None in payload
- Added model resolution to Gemini providers:
- Implemented `get_model()` in Gemini.py and GeminiPro.py
- Added alias handling with list support
- Updated model definitions in `g4f/models.py`:
- Removed references to Dynaspark and DDG providers
- Added new SDXL image models with ARTA provider
- Adjusted best_provider assignments across multiple models
- Removed Dynaspark/DDG references from provider imports and AnyProvider
- Added DDG to not_working providers in `__init__.py`
* feat: Add new models to DeepInfraChat, LambdaChat, and models.py
- Add 'deepseek-ai/DeepSeek-R1-0528' model to DeepInfraChat provider's models list
- Include alias 'deepseek-r1-0528' for DeepSeek-R1-0528 in DeepInfraChat's model_aliases
- Add 'apriel-5b-instruct' model to LambdaChat provider's models list
- Define new 'deepseek-r1-0528' model in models.py with DeepSeek base provider and DeepInfraChat as best provider
* refactor: simplify model registry and add validation
- Remove unused imports: sys, inspect, Set, Type
- Remove ModelRegistry._discovered flag and automatic discovery mechanism
- Add ModelRegistry.clear() method for resetting registry state
- Implement ModelRegistry.list_models_by_provider() for provider-based filtering
- Add ModelRegistry.validate_all_models() for configuration checks
- Remove Model._registered field and simplify registration logic
- Fix gemma_3_12b model name from empty string to 'gemma-3-12b'
- Add image model section header in model definitions
- Replace ModelUtils.convert dict with dynamic property
- Remove ModelUtils.refresh() method
- Register 'gemini' alias directly in ModelRegistry after model creation
- Remove module-level model discovery and ModelUtils.convert initialization
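Taken together, the simplified registry might look roughly like this; the method names follow the bullets above, the bodies are assumptions:

```python
# Illustrative ModelRegistry; only the method names come from the notes above.
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    base_provider: str = ""
    best_provider: object = None
    providers: list = field(default_factory=list)

class ModelRegistry:
    _models: dict = {}

    @classmethod
    def register(cls, model: Model) -> None:
        cls._models[model.name] = model

    @classmethod
    def clear(cls) -> None:
        cls._models.clear()  # reset registry state, e.g. between tests

    @classmethod
    def all_models(cls) -> dict:
        return dict(cls._models)

    @classmethod
    def list_models_by_provider(cls, provider: str) -> list:
        return [name for name, model in cls._models.items()
                if provider in model.providers]

    @classmethod
    def validate_all_models(cls) -> list:
        # Report models whose configuration looks incomplete.
        return [name for name, model in cls._models.items()
                if not model.name or model.best_provider is None]
```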
* refactor: Replace ModelUtils.convert property with class variable
- Add class variable `convert` to `ModelUtils` initialized as empty dictionary
- Replace `@property convert` method with `refresh()` class method that updates `convert`
- Remove dynamic property returning `ModelRegistry.all_models()`
- Add module-level assignment to initialize `ModelUtils.convert` with `ModelRegistry.all_models()`
- Include comment for clarity on filling the convert dictionary
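Building on the registry sketch above, the `ModelUtils` change could look like this:

```python
# Minimal sketch of the ModelUtils change; relies on the ModelRegistry sketch above.
class ModelUtils:
    # model name -> Model instance
    convert: dict = {}

    @classmethod
    def refresh(cls) -> None:
        cls.convert = ModelRegistry.all_models()

# Fill the convert dictionary once at module import time.
ModelUtils.convert = ModelRegistry.all_models()
```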
* refactor: Reorganize providers and update model configuration
- Removed unused providers from `g4f/Provider/__init__.py`: ChatGpt, Pi, Pizzagpt, PuterJS, You
- Moved LMArenaBeta provider to `needs_auth` directory with updated relative imports
- Moved Pi provider to `needs_auth` directory with updated relative imports
- Moved PuterJS provider to `needs_auth` directory with updated relative imports
- Moved You provider to `needs_auth` directory with updated relative imports
- Added LMArenaBeta, Pi, PuterJS, You to `needs_auth/__init__.py`
- Moved ChatGpt provider to `not_working` directory with updated relative imports
- Moved Pizzagpt provider to `not_working` directory with updated relative imports
- Added ChatGpt, Pizzagpt to `not_working/__init__.py`
- Updated `g4f/models.py` to remove Reka import and change reka_core model provider
- Changed reka_core model's best_provider from IterListProvider to LegacyLMArena in `g4f/models.py`
* feat: add Together provider and update model handling
- Add new provider `Together` in `g4f/Provider/Together.py` with model aliases and configuration
- Implement `get_activation_key` and `get_models` methods in `Together` provider
- Add `get_model` method to resolve aliases in `Together` and `DeepInfraChat`
- Update `DeepInfraChat` model mappings to support multiple versions
- Change "deepseek-v3" to list with two model options
- Change "deepseek-r1" to list with two model options
- Remove duplicate "deepseek-v3" entry
- Remove "mistral-small" alias
- Remove "midjourney" from `PollinationsAI.extra_image_models`
- Register `Together` provider in `g4f/Provider/__init__.py`
- Update `g4f/models.py` with new providers and models
- Add `Together` to default and default_vision provider lists
- Add `Together` as provider for multiple existing models
- Add new vision model `qwen_2_vl_72b`
- Add new text models: `qwen_2_5_7b`, `deepseek_r1_distill_qwen_1_5b`, `deepseek_r1_distill_qwen_14b`
- Add new image models: `flux_redux`, `flux_depth`, `flux_canny`, `flux_kontext_max`, `flux_dev_lora`, `flux_kontext_pro`
- Remove `pi` model definition
- Update provider assignments for multiple models to include `Together`
* refactor: Remove LegacyLMArena provider and update model best_providers
- Remove LegacyLMArena import from Provider list in models.py
- Delete LegacyLMArena from default model's best_provider IterListProvider
- Remove multiple obsolete model definitions (gpt_3_5_turbo, gpt_4_turbo, phi_3_small, etc.) that exclusively used LegacyLMArena
- Update best_provider for all remaining models to remove LegacyLMArena from IterListProvider arguments
- Replace LegacyLMArena with alternative providers in model definitions (e.g., OpenaiChat, Together, DeepInfraChat)
- Simplify model definitions by removing redundant IterListProvider wrappers for single providers
- Expand provider imports in any_provider.py to include Blackboxapi, OIVSCodeSer2, etc.
- Extend provider list in AnyProvider with additional working providers for fallback support
* refactor: Remove Blackboxapi provider
- Deleted Blackboxapi provider implementation file
- Removed Blackboxapi import from provider __init__ file
- Updated default model configuration to exclude Blackboxapi provider
- Removed Blackboxapi from llama-3.1-70b model's best_provider
- Updated any_provider to exclude Blackboxapi from provider list
* fix: add missing parameters to Together.get_model method signature
- Add api_key and api_base parameters to get_model method in Together class
- Import random module at the top of the file
- Add inline import comment for random module inside get_model method
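A hedged sketch of the resulting `get_model()`; the alias entries shown are placeholders, only the parameter names and the random choice among list-valued aliases come from the notes above:

```python
# Sketch of alias resolution with list support; alias entries are placeholders.
import random
from typing import Optional, Union

class Together:
    default_model = "llama-3.1-405b"  # placeholder default
    model_aliases: dict[str, Union[str, list]] = {
        "deepseek-r1": ["deepseek-ai/DeepSeek-R1", "deepseek-ai/DeepSeek-R1-0528"],
    }

    @classmethod
    def get_model(cls, model: str, api_key: Optional[str] = None,
                  api_base: Optional[str] = None) -> str:
        """Resolve a public model name to an upstream id, honoring aliases."""
        if not model:
            return cls.default_model
        alias = cls.model_aliases.get(model, model)
        if isinstance(alias, list):
            return random.choice(alias)  # several upstream ids map to one alias
        return alias
```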
* fix: remove broken providers and update model configurations
- Remove non-working providers: ChatGLM, DocsBot, GizAI, OIVSCodeSer5
- Fix Blackbox provider by removing userSelectedModel logic
- Update DeepInfraChat default model to 'deepseek-ai/DeepSeek-V3-0324'
- Add random model selection for DeepInfraChat aliases
- Update LambdaChat default model to 'deepseek-v3-0324' and expand model list
- Fix LegacyLMArena model loading with better error handling and caching
- Add retry logic and timeouts to LegacyLMArena streaming responses
- Improve LegacyLMArena response parsing to handle various data formats
- Update model references across g4f/models.py to remove deleted providers
- Fix AnyProvider model categorization logic for better grouping
- Add LegacyLMArena and ARTA to special provider handling in AnyProvider
- Update provider imports in __init__.py to exclude removed providers
- Add needs_auth flag to You.com and HailuoAI providers
- Fix GeminiPro get_model method signature to accept kwargs
* fix (g4f/Provider/LambdaChat.py)
* refactor: format models list in LMArenaBeta provider
- Convert single-line models array to multi-line format
- Add 11 new models (hunyuan, flux-kontext-pro, cobalt variants, etc.)
- Remove 6 models (bagel, goldmane, redsword, etc.)
- Update stephen model ID
---------
Co-authored-by: kqlio67 <kqlio67.noreply.github.com>
- Introduced a new provider class `LMArenaBeta` in `g4f/Provider/LMArenaBeta.py` with capabilities for text and image models.
- Updated `g4f/Provider/Cloudflare.py` to remove an unused import of `Cookies`.
- Modified `g4f/Provider/PollinationsAI.py` to change the condition for checking the action in the `next` command.
- Added a new provider `PuterJS` in `g4f/Provider/PuterJS.py` with various model handling and authentication logic.
- Removed the old `PuterJS` implementation from `g4f/Provider/not_working/PuterJS.py`.
- Updated `g4f/Provider/__init__.py` to include the new `LMArenaBeta` and `PuterJS` providers.
- Changed the label of `HarProvider` in `g4f/Provider/har/__init__.py` to "LMArena (Har)".
- Adjusted the model list in `g4f/Provider/openai/models.py` to ensure consistency in model definitions.
- Updated the API response handling in `g4f/providers/response.py` to calculate total tokens in the `Usage` class constructor.
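The `Usage` change above might look roughly like this; field names beyond the token counts are assumptions:

```python
# Sketch of total_tokens being derived in the constructor, as noted above.
class Usage:
    def __init__(self, prompt_tokens: int = 0, completion_tokens: int = 0, **kwargs):
        self.prompt_tokens = prompt_tokens
        self.completion_tokens = completion_tokens
        # Compute the total unless the API already supplied one.
        self.total_tokens = kwargs.get("total_tokens",
                                       prompt_tokens + completion_tokens)
```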
* feat: enhance provider support and add PuterJS provider
- Add new PuterJS provider with extensive model support and authentication handling
- Add three new OIVSCode providers (OIVSCodeSer2, OIVSCodeSer5, OIVSCodeSer0501)
- Fix Blackbox provider with improved model handling and session generation
- Update model aliases across multiple providers for consistency
- Mark DDG provider as not working
- Move TypeGPT to not_working directory
- Fix model name formatting in DeepInfraChat and other providers (qwen3 → qwen-3)
- Add get_model method to LambdaChat and other providers for better model alias handling
- Add ModelNotFoundError import to providers that need it
- Update model definitions in models.py with new providers and aliases
- Fix client/stubs.py to allow arbitrary types in ChatCompletionMessage
* Fix conflicts in g4f/Provider/needs_auth/Grok.py
* fix: update Blackbox provider default settings
- Changed the parameter to use only the passed value, without falling back to 1024
- Adjusted a default value in the request payload
* feat: add WeWordle provider with gpt-4 support
- Created new WeWordle.py provider file implementing AsyncGeneratorProvider
- Added WeWordle class with API endpoint at wewordle.org/gptapi/v1/web/turbo
- Set provider properties: working=True, needs_auth=False, supports_stream=True
- Configured default_model as 'gpt-4' with retry mechanism for API requests
- Implemented URL sanitization logic to handle malformed URLs
- Added response parsing for different JSON response formats
- Added WeWordle to Provider/__init__.py imports
- Added WeWordle to default model providers list in models.py
- Added WeWordle to gpt_4 best_provider list in models.py
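A skeleton of the provider as described; the attribute values mirror the bullets above, the import path is an assumption, and the request, retry, and URL-sanitization logic is omitted:

```python
# Skeleton only: attribute values follow the bullets above.
from g4f.providers.base_provider import AsyncGeneratorProvider  # assumed path

class WeWordle(AsyncGeneratorProvider):
    url = "https://wewordle.org"
    api_endpoint = "https://wewordle.org/gptapi/v1/web/turbo"
    working = True
    needs_auth = False
    supports_stream = True
    default_model = "gpt-4"
```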
* feat: add DocsBot provider with GPT-4o support
- Added new DocsBot.py provider file implementing AsyncGeneratorProvider and ProviderModelMixin
- Created Conversation class extending JsonConversation to track conversation state
- Implemented create_async_generator method with support for:
- Streaming and non-streaming responses
- System messages
- Message history
- Image handling via data URIs
- Conversation tracking
- Added DocsBot to Provider/__init__.py imports
- Added DocsBot to default and default_vision model providers in models.py
- Added DocsBot as a provider for gpt_4o model in models.py
- Set default_model and vision support to 'gpt-4o'
- Implemented API endpoint communication with docsbot.ai
* feat: add OpenAIFM provider and update audio model references
- Added new OpenAIFM provider in g4f/Provider/audio/OpenAIFM.py for text-to-speech functionality
- Updated PollinationsAI.py to rename "gpt-4o-audio" to "gpt-4o-mini-audio"
- Added OpenAIFM to audio provider imports in g4f/Provider/audio/__init__.py
- Modified save_response_media() in g4f/image/copy_images.py to handle source_url separately from media_url
- Added new gpt_4o_mini_tts AudioModel in g4f/models.py with OpenAIFM as best provider
- Updated ModelUtils dictionary in models.py to include both gpt_4o_mini_audio and gpt_4o_mini_tts
* fix: improve PuterJS provider and add Gemini to best providers
- Changed client_id generation in PuterJS from time-based to UUID format
- Fixed duplicate json import in PuterJS.py
- Added uuid module import in PuterJS.py
- Changed host header from "api.puter.com" to "puter.com"
- Modified error handling to use Exception instead of RateLimitError
- Added Gemini to best_provider list for gemini-2.5-flash model
- Added Gemini to best_provider list for gemini-2.5-pro model
- Fixed missing newline at end of Gemini.py file
---------
Co-authored-by: kqlio67 <kqlio67.noreply.github.com>
- Changed default model in commit.py from "gpt-4o" to "claude-3.7-sonnet"
- Fixed ARTA provider by adding proper auth token handling and form data submission
- Updated Blackbox provider to use OpenRouter models instead of premium models
- Improved DDG provider with simplified authentication and better error handling
- Updated DeepInfraChat provider with new models and aliases
- Removed non-working providers: Goabror, Jmuz, OIVSCode, AllenAI, ChatGptEs, FreeRouter, Glider
- Moved non-working providers to the not_working directory
- Added BlackboxPro provider in needs_auth directory with premium model support
- Updated Liaobots provider with new models and improved authentication
- Renamed Microsoft_Phi_4 to Microsoft_Phi_4_Multimodal for clarity
- Updated LambdaChat provider with direct API implementation instead of HuggingChat
- Updated models.py with new model definitions and provider mappings
- Removed BlackForestLabs_Flux1Schnell from HuggingSpace providers
- Updated model aliases across multiple providers for better compatibility
- Fixed Dynaspark provider endpoint URL to prevent spam detection
- Changed `generate_commit_message` return to `.strip("`").strip()` in `commit.py`
- Added new model mappings in `PollinationsAI.py`, including `gpt-4.1`, `gpt-4.1-mini`, and `deepseek-r1-distill-*`
- Removed `print` debug statement from `PollinationsAI.py` request payload
- Replaced temp file handling in `MarkItDown.py` with `get_tempfile` utility
- Added `get_tempfile` function to `files.py` for consistent tempfile creation
- Added `gpt-4.1` to `text_models` list in `models.py`
- Added `ModelNotSupportedError` to exception handling in `OpenaiChat.py`
- Updated message content creation to use `to_string()` in `OpenaiChat.py`
- Wrapped `get_model()` in try-except to ignore `ModelNotSupportedError` in `OpenaiChat.py`
- Adjusted `convert` endpoint in `api/__init__.py` to accept optional `provider` param
- Refactored `/api/markitdown` to reuse the `convert()` handler in `api/__init__.py`
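A possible shape of the `get_tempfile` helper; only the function name comes from the notes above, the signature is an assumption:

```python
# Sketch of the get_tempfile utility; signature and behavior are assumptions.
import os
import tempfile

def get_tempfile(data: bytes, suffix: str = "") -> str:
    """Write data to a named temporary file and return its path."""
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "wb") as file:
        file.write(data)
    return path
```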
- Changed `DEFAULT_MODEL` from `"o1"` to `"gpt-4o"` in `etc/tool/commit.py`
- Replaced `FALLBACK_MODELS` list with an empty list in `etc/tool/commit.py`
- Moved spinner stop logic inside content-checking block in `generate_commit_message` in `etc/tool/commit.py`
- Trimmed trailing characters "` \n" from returned content in `generate_commit_message` in `etc/tool/commit.py`
- Wrapped cache check in `g4f/gui/server/api.py` in a `try` block to catch `RuntimeError`
- Ensured fallback to `version.utils.latest_version` if cache access fails in `g4f/gui/server/api.py`
* feat: introduce AnyProvider & LM Arena, overhaul model/provider logic
- **Provider additions & removals**
- Added `Provider/LMArenaProvider.py` with full async stream implementation and vision model support
- Registered `LMArenaProvider` in `Provider/__init__.py`; removed old `hf_space/LMArenaProvider.py`
- Created `providers/any_provider.py`; registers `AnyProvider` dynamically in `Provider`
- **Provider framework enhancements**
- `providers/base_provider.py`
- Added `video_models` and `audio_models` attributes
- `providers/retry_provider.py`
- Introduced `is_content()` helper; now treats `AudioResponse` as stream content
- **Cloudflare provider refactor**
- `Provider/Cloudflare.py`
- Re‑implemented `get_models()` with `read_models()` helper, `fallback_models`, robust nodriver/curl handling and model‑name cleaning
- **Other provider tweaks**
- `Provider/Copilot.py` – removed `"reasoning"` alias and initial `setOptions` WS message
- `Provider/PollinationsAI.py` & `PollinationsImage.py`
- Converted `audio_models` from list to dict, adjusted usage checks and labels
- `Provider/hf/__init__.py` – applies `model_aliases` remap before dispatch
- `Provider/hf_space/DeepseekAI_JanusPro7b.py` – now merges media before upload
- `needs_auth/Gemini.py` – dropped obsolete Gemini model entries
- `needs_auth/GigaChat.py` – added lowercase `"gigachat"` alias
- **API & client updates**
- Replaced `ProviderUtils` with new `Provider` map usage throughout API and GUI server
- Integrated `AnyProvider` as default fallback in `g4f/client` sync & async flows
- API endpoints now return counts of providers per model and filter by `x_ignored` header
- **GUI improvements**
- Updated JS labels with emoji icons, provider ignore logic, model count display
- **Model registry**
- Renamed base model `"GigaChat:latest"` ➜ `"gigachat"` in `models.py`
- **Miscellaneous**
- Added audio/video flags to GUI provider list
- Tightened error propagation in `retry_provider.raise_exceptions`
* Fix unittests
* fix: handle None conversation when accessing provider-specific data
- Modified `AnyProvider` class in `g4f/providers/any_provider.py`
- Updated logic to check if `conversation` is not None before accessing `provider.__name__` attribute
- Wrapped `getattr(conversation, provider.__name__, None)` block in an additional `if conversation is not None` condition
- Changed `setattr(conversation, provider.__name__, chunk)` to use `chunk.get_dict()` instead of the object directly
- Ensured consistent use of `JsonConversation` when modifying or assigning `conversation` data
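Sketched as standalone helpers (the real logic lives inside the `AnyProvider` generator), the None-safe handling could read:

```python
# Hedged sketch; the JsonConversation import path and helper names are assumptions,
# while the None checks and get_dict() storage mirror the bullets above.
from g4f.providers.response import JsonConversation  # assumed import path

def load_provider_state(conversation, provider):
    """Return provider-specific state, guarding against a None conversation."""
    if conversation is not None:
        return getattr(conversation, provider.__name__, None)
    return None

def store_provider_state(conversation, provider, chunk: JsonConversation):
    """Store the chunk as a plain dict on a (possibly new) JsonConversation."""
    if conversation is None:
        conversation = JsonConversation()
    setattr(conversation, provider.__name__, chunk.get_dict())
    return conversation
```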
* feat: add provider string conversion & update IterListProvider call
- In g4f/client/__init__.py, within both Completions and AsyncCompletions, added a check to convert the provider from a string using convert_to_provider(provider) when applicable.
- In g4f/providers/any_provider.py, removed the second argument (False) from the IterListProvider constructor call in the async for loop.
---------
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
- Added command-line argument parsing with options for model selection, editing, and no-commit mode
- Implemented fallback mechanism with multiple AI models if the primary model fails
- Added spinner display to indicate progress during API calls
- Created functions to filter sensitive data from diffs before sending to API
- Added diff truncation capabilities for handling large changesets
- Implemented commit message editing in user's configured editor
- Added model listing functionality to show available AI options
- Enhanced error handling with retries and better error reporting
- Added keyboard interrupt handling for graceful termination
- Improved type annotations throughout the codebase
- Added constants for configuration parameters like retry delay and max diff size
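Illustrative helpers for the sensitive-data filtering and diff-truncation bullets above; the regex, size limit, and names are assumptions:

```python
# Sketch only: pattern, limit, and function names illustrate the bullets above.
import re

MAX_DIFF_SIZE = 20_000  # characters

SECRET_PATTERN = re.compile(r"(?i)\b(api[_-]?key|token|secret|password)\b\s*[=:]\s*\S+")

def filter_sensitive(diff: str) -> str:
    """Mask values that look like credentials before sending the diff to the API."""
    return SECRET_PATTERN.sub(r"\1=***", diff)

def truncate_diff(diff: str, limit: int = MAX_DIFF_SIZE) -> str:
    """Keep very large diffs within a size the model can handle."""
    if len(diff) <= limit:
        return diff
    return diff[:limit] + "\n... [diff truncated]"
```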
- Added "No auth / HAR file" authentication type in providers-and-models.md
- Added "Video generation" column to provider tables for future capability
- Updated model counts and provider capabilities throughout documentation
- Fixed ARTA provider with improved error handling and response validation
- Enhanced AllenAI provider with vision model support and proper image handling
- Significantly improved Blackbox provider:
- Added HAR file authentication support
- Added subscription status checking
- Added premium/demo model differentiation
- Improved session handling and error recovery
- Enhanced DDG provider with better error handling for challenges
- Improved PollinationsAI and PollinationsImage providers' model handling
- Added VideoModel class in g4f/models.py
- Added audio/video generation indicators in GUI components
- Added new Ai2 models: olmo-1-7b, olmo-2-32b, olmo-4-synthetic
- Added new commit message generation tool in etc/tool/commit.py
Update conversation body in OpenaiChat provider
Update ThinkingProcessor in run_tools
Add unittests for ThinkingProcessor
Update default headers in requests module
Add AuthFileMixin to base_provider
* New provider added(g4f/Provider/Websim.py)
* New provider added(g4f/Provider/Dynaspark.py)
* feat(g4f/gui/client/static/js/chat.v1.js): Enhance provider labeling for HuggingFace integrations
* feat(g4f/gui/server/api.py): add Hugging Face Space compatibility flag to provider data
* feat(g4f/models.py): add new providers and update model configurations
* Update g4f/Provider/__init__.py
* feat(g4f/Provider/AllenAI.py): expand model alias mappings for AllenAI provider
* feat(g4f/Provider/Blackbox.py): restructure image model handling and response processing
* feat(g4f/Provider/PollinationsAI.py): add new model aliases and streamline headers
* Update g4f/Provider/hf_space/*
* refactor(g4f/Provider/Copilot.py): update model alias mapping
* chore(g4f/models.py): update provider configurations for OpenAI models
* docs(docs/providers-and-models.md): update provider tables and model categorization
* fix(etc/examples/vision_images.py): update model and simplify client configuration
* fix(docs/providers-and-models.md): correct streaming status for GlhfChat provider
* docs(docs/providers-and-models.md): update provider capabilities and model documentation
* fix(models): update provider configurations for Mistral models
* fix(g4f/Provider/Blackbox.py): correct model alias key for Mistral variant
* feat(g4f/Provider/hf_space/CohereForAI_C4AI_Command.py): update supported model versions and aliases (closes #2802)
* fix(documentation): correct model names and provider counts (https://github.com/xtekky/gpt4free/pull/2805#issuecomment-2727489835)
* fix(g4f/models.py): correct mistral model configurations
* fix(g4f/Provider/DeepInfraChat.py): correct mixtral-small alias key
* New provider added(g4f/Provider/LambdaChat.py)
* feat(g4f/models.py): add new providers and enhance model configurations
* docs(docs/providers-and-models.md): add LambdaChat provider and update model listings
* feat(g4f/models.py): add new Liquid AI model and enhance providers
* docs(docs/providers-and-models.md): update model listings and provider counts
* feat(g4f/Provider/LambdaChat.py): add conditional reasoning processing based on model
* fix(g4f/tools/run_tools.py): handle combined thinking tags in single chunk
* New provider added(g4f/Provider/Goabror.py)
* feat(g4f/Provider/Blackbox.py): implement dynamic session management and model access control
* refactor(g4f/models.py): update provider configurations and model entries
* docs(docs/providers-and-models.md): update model listings and provider counts
---------
Co-authored-by: kqlio67 <>
* Update model configurations, provider implementations, and documentation
- Updated model names and aliases for Qwen QVQ 72B and Qwen 2 72B (@TheFirstNoob)
- Revised HuggingSpace class configuration, added default_image_model
- Added llama-3.2-70b alias for Llama 3.2 70B model in AutonomousAI
- Removed BlackboxCreateAgent class
- Added gpt-4o alias for Copilot model
- Moved api_key to Mhystical class attribute
- Added models property with default_model value for Free2GPT
- Simplified Jmuz class implementation
- Improved image generation and model handling in DeepInfra
- Standardized default models and removed aliases in Gemini
- Replaced model aliases with direct model list in GlhfChat (@TheFirstNoob)
- Removed trailing slash from image generation URL in PollinationsAI (https://github.com/xtekky/gpt4free/issues/2571)
- Updated llama and qwen model configurations
- Enhanced provider documentation and model details
* Removed the 'Yqcloud' provider from the default model in g4f/models.py due to the error 'ResponseStatusError: Response 429: 文字过长,请删减后重试。' ("The text is too long, please shorten it and try again.")
* Update docs/providers-and-models.md
* refactor(g4f/Provider/DDG.py): Add error handling and rate limiting to DDG provider
- Add custom exception classes for rate limits, timeouts, and conversation limits
- Implement rate limiting with sleep between requests (0.75s minimum delay)
- Add model validation method to check supported models
- Add proper error handling for API responses with custom exceptions
- Improve session cookie handling for conversation persistence
- Clean up User-Agent string and remove redundant code
- Add proper error propagation through async generator
Breaking changes:
- New custom exceptions may require updates to error handling code
- Rate limiting affects request timing and throughput
- Model validation is now stricter
Related:
- Adds error handling similar to standard API clients
- Improves reliability and robustness of chat interactions
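A hedged sketch of the 0.75-second throttle; class and attribute names are illustrative:

```python
# Illustrative throttle; only the 0.75 s minimum delay comes from the notes above.
import asyncio
import time

class RateLimitConversationError(Exception):
    """Raised when DDG reports that the rate or conversation limit is exhausted."""

class DDGThrottle:
    _last_request = 0.0
    _min_delay = 0.75  # minimum seconds between requests

    @classmethod
    async def wait(cls) -> None:
        elapsed = time.monotonic() - cls._last_request
        if elapsed < cls._min_delay:
            await asyncio.sleep(cls._min_delay - elapsed)
        cls._last_request = time.monotonic()
```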
* Update g4f/models.py g4f/Provider/PollinationsAI.py
* Update g4f/models.py
* Restored the provider that previously was not working and had been disabled (g4f/Provider/DeepInfraChat.py)
* Fixing a bug with Streaming Completions
* Update g4f/Provider/PollinationsAI.py
* Update g4f/Provider/Blackbox.py g4f/Provider/DDG.py
* Added another image generation model, 'ImageGeneration2', to the 'Blackbox' provider
* Update docs/providers-and-models.md
* Update g4f/models.py g4f/Provider/Blackbox.py
* Added a new OIVSCode provider with text models and vision (image upload) support
* Update docs/providers-and-models.md
* docs: add Conversation Memory class with context handling requested by @TheFirstNoob
* Simplified the README.md documentation and added new docs/configuration.md documentation
* Update README.md and docs/configuration.md
* Update README.md
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/PollinationsAI.py
* Added new model deepseek-r1 to Blackbox provider. @TheFirstNoob
* Fixed bugs and updated docs/providers-and-models.md etc/unittest/client.py g4f/models.py g4f/Provider/.
---------
Co-authored-by: kqlio67 <>
Co-authored-by: H Lohaus <hlohaus@users.noreply.github.com>
* Add Path and PathLike support when uploading images
Improve raise_for_status in special cases
Move ImageResponse to providers.response module
Improve OpenaiChat and OpenaiAccount providers
Add Sources for web_search in OpenaiChat
Add JsonConversation for import and export conversations to js
Add RequestLogin response type
Add TitleGeneration support in OpenaiChat and gui
* Improve Docker Container Guide in README.md
* Add tool calls api support, add search tool support
* Add multiple images support
* Add multiple images support in gui
* Support multiple images in legacy client and in the api
Fix some model names in provider model list
* Fix unittests
* Add vision and providers docs
* Add more contributors, add link to Swagger UI
* Update Dockerfile-slim
* Update retry_provider.py
* Add html preview to gui, fix urls in website manifest
* Missing chunks in OpenaiChat
* IterListProvider support for generating images
* Add missing get_har_files import in Copilot
* Fix typo in dall-e-3 model name
* Add image client unittests
* Add MicrosoftDesigner provider
* Import MicrosoftDesigner and add it to the model list
* Improve download of generated images, serve images in the api
Add support for conversation handling in the api
* Add original prompt to image response
* Add download images option in gui, fix loading model list in Airforce