- Removed the `cache` parameter from the `PollinationsAI` class in `PollinationsAI.py`.
- Added a `parent_message_id` parameter to the `chat_completion` method call in the `DeepSeekAPI` class in `DeepSeekAPI.py`.
- Updated the handling of `conversation` in `DeepSeekAPI.py` to yield the `conversation` object at the end of the method.
- Set `conversation.parent_id` to `chunk['message_id']` when present in the response in `DeepSeekAPI.py`.
- Adjusted the method signatures in `aiohttp.py` to remove unnecessary type hints for `ClientSession` and `None`.
- Updated the `PollinationsAI` class in `g4f/Provider/PollinationsAI.py`:
- Changed `aspect_ratio` parameter handling to conditionally use default "1:1" if not specified.
- Enhanced media handling by introducing `media` parameter in `_generate_image` method.
- Updated parameter processing in `_generate_image_async` method for `model == "gptimage"`.
- Updated `Api` class in `g4f/api/__init__.py`:
- Simplified handling of `credentials` for `config.api_key`.
- Updated `Images` class in `g4f/client/__init__.py`:
- Added `download_media` parameter to `_process_image_response` method.
- Enhanced `_process_image_response` method to conditionally download media based on `download_media` flag.
- Updated `_process_image_response` method in `Images` class in `g4f/client/__init__.py`:
- Enhanced handling of media response based on `download_media` flag.
- Updated `is_valid_media` function in `g4f/image/__init__.py`:
- Added typing annotations for clarity.
- Updated `AnyProvider` class in `g4f/providers/any_provider.py`:
- Improved handling of `api_key` dictionary to set `extra_body["api_key"]`.
- Updated `IterListProvider` class in `g4f/providers/retry_provider.py`:
- Enhanced handling of `model` and `api_key` parameters.
- Updated `BaseProvider` class in `g4f/providers/types.py`:
- Added `create_function` and `async_create_function` methods.
- Updated `BaseRetryProvider` class in `g4f/providers/types.py`:
- Enhanced handling of `model` and `api_key` parameters in provider iteration.
- Updated `PollinationsAI.py` to import `quote` and `ClientTimeout` from `aiohttp`.
- Added `MissingAuthError` to error imports in `PollinationsAI.py`.
- Included `AudioResponse` in response imports in `PollinationsAI.py`.
- Expanded `image_models` in `PollinationsAI` to include "turbo" and "gptimage".
- Changed the initialization of `image_models` to use `cls.image_models.copy()`.
- Introduced `download_media` parameter in the `_generate_image` method.
- Added a condition to set `n = 1` if the model is "gptimage".
- Renamed `get_image_url` to `get_url_with_seed` and modified its logic to return the URL as-is for the "gptimage" model.
- Updated the session creation in `_generate_image` to include a timeout.
- Enhanced error handling in the image fetching logic to yield `Reasoning` status.
- Modified the media handling in `save_response_media` to yield `AudioResponse` based on `download_media`.
- Updated `OpenAIFM.py` to include `quote` and `AudioResponse` imports.
- Added `styles` to the `OpenAIFM` class and created a method to group models.
- Introduced `download_media` parameter in the `synthesize` method of `OpenAIFM`.
- Updated the `create_app` function in `api/__init__.py` to check for `a2wsgi` availability.
- Removed unused file handling routes in `api/__init__.py`.
- Updated `secure_filename` function in `files.py` to handle encoding more robustly.
- Added `download_media` to the `RequestConfig` model in `stubs.py`.
- Updated imports to use format_media_prompt in g4f/Provider/ARTA.py, PollinationsAI.py, PollinationsImage.py, Websim.py, audio/OpenAIFM.py, hf_space/BlackForestLabs_Flux1Dev.py, hf_space/DeepseekAI_JanusPro7b.py, hf_space/G4F.py, hf_space/Microsoft_Phi_4_Multimodal.py, hf_space/StabilityAI_SD35Large.py, needs_auth/BingCreateImages.py, needs_auth/BlackboxPro.py, needs_auth/DeepInfra.py, needs_auth/Gemini.py, needs_auth/MicrosoftDesigner.py, needs_auth/OpenaiChat.py, needs_auth/hf/HuggingChat.py, needs_auth/hf/HuggingFaceInference.py, needs_auth/hf/HuggingFaceMedia.py, not_working/AllenAI.py, template/OpenaiTemplate.py, api.py, and gui/server/api.py
- Replaced calls to format_image_prompt with format_media_prompt in relevant locations
- Changed media prompt handling in various providers to ensure consistent usage of format_media_prompt
- Modified the __aenter__ and __aexit__ methods of requests/aiohttp.py to properly manage ClientSession lifecycle
- Changed the default value of `extra_body` from an empty dictionary to `None` in `ImageLabs` and `PollinationsAI` classes.
- Added a check to initialize `extra_body` to an empty dictionary if it is `None` in the `ImageLabs` class.
- Removed the `extra_image_models` list from the `PollinationsAI` class.
- Updated the way image models are combined in the `PollinationsAI` class to avoid duplicates.
- Changed the error handling for unsupported models from `ModelNotSupportedError` to `ModelNotFoundError` in multiple classes including `OpenaiChat`, `HuggingFaceAPI`, and `HuggingFaceInference`.
- Updated the `save_response_media` function to handle both string and bytes responses.
- Adjusted the handling of audio data in the `PollinationsAI` class to ensure proper processing of audio responses.
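The string/bytes handling described above might look roughly like the following sketch; the helper name and surrounding details are illustrative, not the actual `save_response_media` implementation.

```python
from pathlib import Path

def save_media(response, target: Path) -> Path:
    """Illustrative helper accepting either a string or raw bytes."""
    if isinstance(response, bytes):
        # Binary payload: write it out directly.
        target.write_bytes(response)
    elif isinstance(response, str):
        # Text payload (e.g. a URL or base64 blob): persist it as UTF-8.
        target.write_text(response, encoding="utf-8")
    else:
        raise TypeError(f"Unsupported media response type: {type(response).__name__}")
    return target
```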
* feat: add repository path support and new md2html converter tool
- Add `--repo` argument to commit.py for specifying git repository path with validation
- Add `validate_git_repository()` function to check repository existence and git status
- Add `get_repository_info()` function to extract branch and remote information
- Update `get_git_diff()` and `make_commit()` functions to accept repository path parameter
- Add Path import and repository validation in main workflow
- Enhance error messages with repository-specific guidance and context
- Update argument parser description and help text for new repository functionality
- Expand module docstring with comprehensive usage examples and feature descriptions
- Add new md2html.py tool for converting Markdown files to HTML using GitHub API
- Add template.html file with GitHub-styled CSS and responsive design
- Implement batch processing, retry logic, and rate limit handling in md2html converter
- Add comprehensive command-line interface with directory processing and custom output options
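As a rough illustration of the conversion-with-retry flow mentioned above, the sketch below posts Markdown to GitHub's public Markdown endpoint and backs off on rate-limit responses; the function name, the backoff policy, and the use of `requests` are assumptions, not the md2html.py code.

```python
import time
from typing import Optional

import requests

MARKDOWN_API = "https://api.github.com/markdown"  # GitHub's Markdown rendering endpoint

def render_markdown(text: str, token: Optional[str] = None, retries: int = 3) -> str:
    """Render Markdown to HTML via the GitHub API, backing off on rate limits."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    for attempt in range(retries):
        response = requests.post(MARKDOWN_API, json={"text": text, "mode": "gfm"}, headers=headers)
        if response.status_code in (403, 429):
            # Rate limited: wait with a simple exponential backoff, then retry.
            time.sleep(2 ** attempt)
            continue
        response.raise_for_status()
        return response.text
    raise RuntimeError("GitHub Markdown API still rate limited after retries")
```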
* refactor: Update provider configurations and model handling
- Removed Dynaspark provider entirely by deleting `g4f/Provider/Dynaspark.py`
- Deprecated DDG provider by moving to `not_working` directory and updating imports
- Restructured HuggingFace and MiniMax providers into `needs_auth` subpackage:
- Moved all HuggingFace provider files to `needs_auth/hf/`
- Moved MiniMax providers to `needs_auth/mini_max/`
- Updated ARTA provider:
- Expanded `model_aliases` with new tattoo styles and added aliases
- Added `get_model()` method for model resolution with list support
- Simplified Blackbox provider:
- Removed openrouter models and agentMode configurations
- Reduced model lists to core GPT variants
- Set session/subscriptionCache to None in payload
- Added model resolution to Gemini providers:
- Implemented `get_model()` in Gemini.py and GeminiPro.py
- Added alias handling with list support
- Updated model definitions in `g4f/models.py`:
- Removed references to Dynaspark and DDG providers
- Added new SDXL image models with ARTA provider
- Adjusted best_provider assignments across multiple models
- Removed Dynaspark/DDG references from provider imports and AnyProvider
- Added DDG to not_working providers in `__init__.py`
* feat: Add new models to DeepInfraChat, LambdaChat, and models
- Add 'deepseek-ai/DeepSeek-R1-0528' model to DeepInfraChat provider's models list
- Include alias 'deepseek-r1-0528' for DeepSeek-R1-0528 in DeepInfraChat's model_aliases
- Add 'apriel-5b-instruct' model to LambdaChat provider's models list
- Define new 'deepseek-r1-0528' model in models.py with DeepSeek base provider and DeepInfraChat as best provider
* refactor: simplify model registry and add validation
- Remove unused imports: sys, inspect, Set, Type
- Remove ModelRegistry._discovered flag and automatic discovery mechanism
- Add ModelRegistry.clear() method for resetting registry state
- Implement ModelRegistry.list_models_by_provider() for provider-based filtering
- Add ModelRegistry.validate_all_models() for configuration checks
- Remove Model._registered field and simplify registration logic
- Fix gemma_3_12b model name from empty string to 'gemma-3-12b'
- Add image model section header in model definitions
- Replace ModelUtils.convert dict with dynamic property
- Remove ModelUtils.refresh() method
- Register 'gemini' alias directly in ModelRegistry after model creation
- Remove module-level model discovery and ModelUtils.convert initialization
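A minimal sketch of what a registry with the methods listed above could look like; the `Model` fields, the `register()` entry point, and the validation rules are assumptions made for illustration (the real code keeps provider objects rather than plain strings).

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Model:
    """Minimal stand-in for g4f's Model dataclass."""
    name: str
    base_provider: str = ""
    best_provider: Optional[str] = None  # a provider name, simplified for the sketch

class ModelRegistry:
    _models: Dict[str, Model] = {}

    @classmethod
    def register(cls, model: Model) -> None:
        cls._models[model.name] = model

    @classmethod
    def all_models(cls) -> Dict[str, Model]:
        return dict(cls._models)

    @classmethod
    def clear(cls) -> None:
        """Reset registry state (useful in tests)."""
        cls._models.clear()

    @classmethod
    def list_models_by_provider(cls, provider: str) -> List[Model]:
        return [m for m in cls._models.values() if m.best_provider == provider]

    @classmethod
    def validate_all_models(cls) -> List[str]:
        """Return names of models with obviously broken configuration."""
        return [name for name, m in cls._models.items()
                if not m.name or m.best_provider is None]

# Usage
ModelRegistry.register(Model("gemma-3-12b", "Google", "DeepInfraChat"))
print(ModelRegistry.list_models_by_provider("DeepInfraChat"))
```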
* refactor: Replace ModelUtils.convert property with class variable
- Add class variable `convert` to `ModelUtils` initialized as empty dictionary
- Replace `@property convert` method with `refresh()` class method that updates `convert`
- Remove dynamic property returning `ModelRegistry.all_models()`
- Add module-level assignment to initialize `ModelUtils.convert` with `ModelRegistry.all_models()`
- Include comment for clarity on filling the convert dictionary
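A compact sketch of the class-variable-plus-`refresh()` pattern described above; the `ModelRegistry` stub only stands in for the real registry and is not the actual code.

```python
from typing import Dict

class ModelRegistry:  # minimal stand-in; see the registry sketch above
    _models: Dict[str, object] = {}

    @classmethod
    def all_models(cls) -> Dict[str, object]:
        return dict(cls._models)

class ModelUtils:
    # Class variable replacing the former dynamic `convert` property
    convert: Dict[str, object] = {}

    @classmethod
    def refresh(cls) -> None:
        """Re-fill `convert` from the registry after models change."""
        cls.convert = ModelRegistry.all_models()

# Fill the convert dictionary once at import time
ModelUtils.convert = ModelRegistry.all_models()
```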
* refactor: Reorganize providers and update model configuration
- Removed unused providers from `g4f/Provider/__init__.py`: ChatGpt, Pi, Pizzagpt, PuterJS, You
- Moved LMArenaBeta provider to `needs_auth` directory with updated relative imports
- Moved Pi provider to `needs_auth` directory with updated relative imports
- Moved PuterJS provider to `needs_auth` directory with updated relative imports
- Moved You provider to `needs_auth` directory with updated relative imports
- Added LMArenaBeta, Pi, PuterJS, You to `needs_auth/__init__.py`
- Moved ChatGpt provider to `not_working` directory with updated relative imports
- Moved Pizzagpt provider to `not_working` directory with updated relative imports
- Added ChatGpt, Pizzagpt to `not_working/__init__.py`
- Updated `g4f/models.py` to remove Reka import and change reka_core model provider
- Changed reka_core model's best_provider from IterListProvider to LegacyLMArena in `g4f/models.py`
* feat: add Together provider and update model handling
- Add new provider `Together` in `g4f/Provider/Together.py` with model aliases and configuration
- Implement `get_activation_key` and `get_models` methods in `Together` provider
- Add `get_model` method to resolve aliases in `Together` and `DeepInfraChat`
- Update `DeepInfraChat` model mappings to support multiple versions
- Change "deepseek-v3" to list with two model options
- Change "deepseek-r1" to list with two model options
- Remove duplicate "deepseek-v3" entry
- Remove "mistral-small" alias
- Remove "midjourney" from `PollinationsAI.extra_image_models`
- Register `Together` provider in `g4f/Provider/__init__.py`
- Update `g4f/models.py` with new providers and models
- Add `Together` to default and default_vision provider lists
- Add `Together` as provider for multiple existing models
- Add new vision model `qwen_2_vl_72b`
- Add new text models: `qwen_2_5_7b`, `deepseek_r1_distill_qwen_1_5b`, `deepseek_r1_distill_qwen_14b`
- Add new image models: `flux_redux`, `flux_depth`, `flux_canny`, `flux_kontext_max`, `flux_dev_lora`, `flux_kontext_pro`
- Remove `pi` model definition
- Update provider assignments for multiple models to include `Together`
* refactor: Remove LegacyLMArena provider and update model best_providers
- Remove LegacyLMArena import from Provider list in models.py
- Delete LegacyLMArena from default model's best_provider IterListProvider
- Remove multiple obsolete model definitions (gpt_3_5_turbo, gpt_4_turbo, phi_3_small, etc.) that exclusively used LegacyLMArena
- Update best_provider for all remaining models to remove LegacyLMArena from IterListProvider arguments
- Replace LegacyLMArena with alternative providers in model definitions (e.g., OpenaiChat, Together, DeepInfraChat)
- Simplify model definitions by removing redundant IterListProvider wrappers for single providers
- Expand provider imports in any_provider.py to include Blackboxapi, OIVSCodeSer2, etc.
- Extend provider list in AnyProvider with additional working providers for fallback support
* refactor: Remove Blackboxapi provider
- Deleted Blackboxapi provider implementation file
- Removed Blackboxapi import from provider __init__ file
- Updated default model configuration to exclude Blackboxapi provider
- Removed Blackboxapi from llama-3.1-70b model's best_provider
- Updated any_provider to exclude Blackboxapi from provider list
* fix: add missing parameters to Together.get_model method signature
- Add api_key and api_base parameters to get_model method in Together class
- Import random module at the top of the file
- Add inline import comment for random module inside get_model method
* fix: remove broken providers and update model configurations
- Remove non-working providers: ChatGLM, DocsBot, GizAI, OIVSCodeSer5
- Fix Blackbox provider by removing userSelectedModel logic
- Update DeepInfraChat default model to 'deepseek-ai/DeepSeek-V3-0324'
- Add random model selection for DeepInfraChat aliases
- Update LambdaChat default model to 'deepseek-v3-0324' and expand model list
- Fix LegacyLMArena model loading with better error handling and caching
- Add retry logic and timeouts to LegacyLMArena streaming responses
- Improve LegacyLMArena response parsing to handle various data formats
- Update model references across g4f/models.py to remove deleted providers
- Fix AnyProvider model categorization logic for better grouping
- Add LegacyLMArena and ARTA to special provider handling in AnyProvider
- Update provider imports in __init__.py to exclude removed providers
- Add needs_auth flag to You.com and HailuoAI providers
- Fix GeminiPro get_model method signature to accept kwargs
* fix (g4f/Provider/LambdaChat.py)
* refactor: format models list in LMArenaBeta provider
- Convert single-line models array to multi-line format
- Add 11 new models (hunyuan, flux-kontext-pro, cobalt variants, etc.)
- Remove 6 models (bagel, goldmane, redsword, etc.)
- Update stephen model ID
---------
Co-authored-by: kqlio67 <kqlio67.noreply.github.com>
- Added `login_url` attribute to `PollinationsAI` class.
- Introduced API key authentication by adding `api_key` parameter to relevant functions in `PollinationsAI` and `PollinationsImage`.
- Updated request headers to include `Authorization` header when an API key is provided.
- Modified model retrieval to include `audio_models` voices in `get_models`.
- Removed redundant audio model voice extensions from `text_models` initialization.
- Added `get_grouped_models` method in `PollinationsAI` to categorize models into groups.
- Updated async generator functions to pass API key where required.
- Adjusted image request handling to use shared headers with API key support.
- Refactored `PollinationsImage` async generator to include API key parameter.
- Introduced a new provider class `LMArenaBeta` in `g4f/Provider/LMArenaBeta.py` with capabilities for text and image models.
- Updated `g4f/Provider/Cloudflare.py` to remove an unused import of `Cookies`.
- Modified `g4f/Provider/PollinationsAI.py` to change the condition for checking the action in the `next` command.
- Added a new provider `PuterJS` in `g4f/Provider/PuterJS.py` with various model handling and authentication logic.
- Removed the old `PuterJS` implementation from `g4f/Provider/not_working/PuterJS.py`.
- Updated `g4f/Provider/__init__.py` to include the new `LMArenaBeta` and `PuterJS` providers.
- Changed the label of `HarProvider` in `g4f/Provider/har/__init__.py` to "LMArena (Har)".
- Adjusted the model list in `g4f/Provider/openai/models.py` to ensure consistency in model definitions.
- Updated the API response handling in `g4f/providers/response.py` to calculate total tokens in the `Usage` class constructor.
- Deleted multiple deprecated providers including Acytoo, AiAsk, AiService, Aibn, Aivvm, Berlin, ChatAnywhere, ChatgptDuo, CodeLinkAva, Cromicle, DfeHub, EasyChat, FakeGpt, FastGpt, Forefront, GPTalk, GeekGpt, GetGpt, H2o, Hashnode, Myshell, NoowAi, Opchatgpts, OpenAssistant, V50, Vitalentum, VoiGpt, Wewordle, Wuguokai, Ylokh, Yqcloud, and corresponding deprecated/__init__.py
- Renamed LMArenaProvider.py to LMArena.py and incorporated its functionality with enhancements, including updated model lists, aliases, comprehensive model discovery, payload building, and asynchronous generator for completions
- Removed LMArenaProvider import and added LMArena import in Provider/__init__.py
- Modified Blackbox provider:
- Removed generate_session_data method and updated generate_session to use fixed email
- Updated session payload usage in send request to use generate_session without email argument
- Added asyncMode flag set to False in session payload
- In DeepInfraChat, removed model aliases for "llama-4-maverick-17b" and "llama-4-scout-17b"
- In PollinationsAI, updated model aliases: replaced "command-r-plus-08-2024" with "command-r-plus"; added "gpt-image" and "grok-3-mini" aliases
- In LambdaChat, added "llama-3.3-70b" mapping to "llama3.3-70b-instruct"
- In hf_space:
- Deleted Qwen_QVQ_72B and Voodoohop_Flux1Schnell providers
- Updated model_aliases in Qwen_Qwen_2_5_Max to fix model alias key from "qwen-2-5-max" to "qwen-2.5-max"
- Changed model_aliases in StabilityAI_SD35Large from "stable-diffusion-3.5-large" to "sd-3.5-large"
- Removed imports of deleted providers in hf_space/__init__.py and updated defaults accordingly
- In BingCreateImages moved import to relative from .bing.create_images
- Moved bing directory into needs_auth directory and updated imports accordingly
- Changed PuterJS provider:
- Changed working flag from True to False
- Changed return_conversation default from False to True in create_async_generator
- Changed yield error messages to raising exceptions for authentication and rate limits
- Modified models.py:
- Added ModelRegistry class for dynamic registration and lookup of Model instances
- Modified Model dataclass to auto-register instances on initialization via ModelRegistry (a sketch of this pattern follows this list)
- Adjusted imports and removed PuterJS from lists of providers and best_provider assignments
- Replaced many references to PuterJS as best_provider with LMArena or IterListProvider, including core models like gpt-3.5-turbo, gpt-4, gpt-4o, the Llama series, Mistral, Hermes, Microsoft Phi, Gemini, Anthropic Claude, Cohere, Qwen, DeepSeek, and others
- Fixed aliases and model names (e.g., "qwen-2-5-max" to "qwen-2.5-max")
- Removed outdated or deprecated model definitions referencing PuterJS
- Updated HarProvider label to "LM Arena (Har)" from "LM Arena"
- Removed deprecated providers imports from Provider/__init__.py and not_working directory imports updated accordingly
- Functions and files affected: create_async_generator in the Blackbox, LMArena, PollinationsAI, LambdaChat, and PuterJS providers; model aliases and model definitions in models.py; the Provider package __init__.py files; the BingCreateImages import; plus the deletion of numerous deprecated and not_working providers.
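The auto-registration mentioned above for the `Model` dataclass could be wired through `__post_init__`, roughly as in this sketch; the field list and registry internals are assumptions, not the models.py code.

```python
from dataclasses import dataclass
from typing import Dict, Optional

class ModelRegistry:
    _models: Dict[str, "Model"] = {}

    @classmethod
    def register(cls, model: "Model") -> None:
        cls._models[model.name] = model

@dataclass
class Model:
    name: str
    base_provider: str = ""
    best_provider: Optional[object] = None

    def __post_init__(self) -> None:
        # Every instance registers itself, so separate module-level discovery is unnecessary.
        ModelRegistry.register(self)

# Defining a model is enough to make it discoverable through the registry.
gpt_4 = Model(name="gpt-4", base_provider="OpenAI")
assert "gpt-4" in ModelRegistry._models
```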
* feat: enhance provider support and add PuterJS provider
- Add new PuterJS provider with extensive model support and authentication handling
- Add three new OIVSCode providers (OIVSCodeSer2, OIVSCodeSer5, OIVSCodeSer0501)
- Fix Blackbox provider with improved model handling and session generation
- Update model aliases across multiple providers for consistency
- Mark DDG provider as not working
- Move TypeGPT to not_working directory
- Fix model name formatting in DeepInfraChat and other providers (qwen3 → qwen-3)
- Add get_model method to LambdaChat and other providers for better model alias handling
- Add ModelNotFoundError import to providers that need it
- Update model definitions in models.py with new providers and aliases
- Fix client/stubs.py to allow arbitrary types in ChatCompletionMessage
* Fix conflicts g4f/Provider/needs_auth/Grok.py
* fix: update Blackbox provider default settings
- Changed parameter to use only the passed value without fallback to 1024
- Set to instead of in request payload
* feat: add WeWordle provider with gpt-4 support
- Created new WeWordle.py provider file implementing AsyncGeneratorProvider
- Added WeWordle class with API endpoint at wewordle.org/gptapi/v1/web/turbo
- Set provider properties: working=True, needs_auth=False, supports_stream=True
- Configured default_model as 'gpt-4' with retry mechanism for API requests
- Implemented URL sanitization logic to handle malformed URLs
- Added response parsing for different JSON response formats
- Added WeWordle to Provider/__init__.py imports
- Added WeWordle to default model providers list in models.py
- Added WeWordle to gpt_4 best_provider list in models.py
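A hedged sketch of the retry and response-parsing ideas described for WeWordle; the retry mechanism, the handling of more than one JSON shape, and the endpoint URL come from the notes above, while the payload shape, field names, and backoff are assumptions.

```python
import asyncio

import aiohttp

API_URL = "https://wewordle.org/gptapi/v1/web/turbo"

async def request_with_retry(payload: dict, retries: int = 3) -> dict:
    """POST the chat payload, retrying transient failures with a short backoff."""
    async with aiohttp.ClientSession() as session:
        for attempt in range(retries):
            try:
                async with session.post(API_URL, json=payload) as response:
                    response.raise_for_status()
                    return await response.json()
            except aiohttp.ClientError:
                if attempt == retries - 1:
                    raise
                await asyncio.sleep(1 + attempt)

def extract_content(data: dict) -> str:
    """Handle a couple of JSON shapes the API might return for the assistant message."""
    message = data.get("message")
    if isinstance(message, dict):
        return message.get("content", "")
    choices = data.get("choices") or []
    if choices and isinstance(choices[0], dict):
        return choices[0].get("message", {}).get("content", "")
    return ""
```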
* feat: add DocsBot provider with GPT-4o support
- Added new DocsBot.py provider file implementing AsyncGeneratorProvider and ProviderModelMixin
- Created Conversation class extending JsonConversation to track conversation state
- Implemented create_async_generator method with support for:
- Streaming and non-streaming responses
- System messages
- Message history
- Image handling via data URIs
- Conversation tracking
- Added DocsBot to Provider/__init__.py imports
- Added DocsBot to default and default_vision model providers in models.py
- Added DocsBot as a provider for gpt_4o model in models.py
- Set default_model and vision support to 'gpt-4o'
- Implemented API endpoint communication with docsbot.ai
* feat: add OpenAIFM provider and update audio model references
- Added new OpenAIFM provider in g4f/Provider/audio/OpenAIFM.py for text-to-speech functionality
- Updated PollinationsAI.py to rename "gpt-4o-audio" to "gpt-4o-mini-audio"
- Added OpenAIFM to audio provider imports in g4f/Provider/audio/__init__.py
- Modified save_response_media() in g4f/image/copy_images.py to handle source_url separately from media_url
- Added new gpt_4o_mini_tts AudioModel in g4f/models.py with OpenAIFM as best provider
- Updated ModelUtils dictionary in models.py to include both gpt_4o_mini_audio and gpt_4o_mini_tts
* fix: improve PuterJS provider and add Gemini to best providers
- Changed client_id generation in PuterJS from time-based to UUID format
- Fixed duplicate json import in PuterJS.py
- Added uuid module import in PuterJS.py
- Changed host header from "api.puter.com" to "puter.com"
- Modified error handling to use Exception instead of RateLimitError
- Added Gemini to best_provider list for gemini-2.5-flash model
- Added Gemini to best_provider list for gemini-2.5-pro model
- Fixed missing newline at end of Gemini.py file
---------
Co-authored-by: kqlio67 <kqlio67.noreply.github.com>
- Added a try-except block around asyncio.run(nodriver_read_models()) in Cloudflare.py to catch RuntimeError and fall back to fallback_models for cls.models
- Corrected the indentation of the "followups" key in PollinationsAI.py from 43 to 44, changing it from a nested entry to a proper dictionary key
- No other code logic changed in these files
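The fallback described above follows a common pattern, sketched here with a placeholder coroutine; only the RuntimeError handling and the `fallback_models` idea come from the notes, the rest is illustrative.

```python
import asyncio

fallback_models = ["@cf/meta/llama-3.1-8b-instruct"]  # illustrative placeholder list

async def nodriver_read_models() -> list:
    # Placeholder for the real browser-based model discovery.
    return fallback_models

try:
    models = asyncio.run(nodriver_read_models())
except RuntimeError:
    # asyncio.run() raises RuntimeError when an event loop is already running
    # (e.g. inside another async context); fall back to the static list instead.
    models = fallback_models
```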
- Modified g4f/providers/response.py so that format_images_markdown returns its result directly without additional flags.
- Updated g4f/gui/server/api.py to add 'tempfiles' parameter with default empty list to '_create_response_stream' method.
- Updated API response handling to iterate over 'tempfiles' and attempt to remove each file after the response completes, with exception handling (a try-except block using logger.exception).
- Adjusted g4f/tools/files.py to fix tempfile creation: 'get_tempfile' now uses the 'suffix' parameter directly instead of splitting it.
- In g4f/tools/media.py, changed the 'render_part' function to handle the 'text' key properly: if 'part.get("text")' is present, it returns a dictionary with 'type': 'text' and 'text' set to that value.
- In PollinationsAI.py, modified get_image method to initialize responses set and manage concurrent image fetches with asyncio tasks, adding a while loop to yield responses as they complete
- Changed response index in get_image from 1 to 0 to align with zero-based indexing
- Introduced 'responses' set and 'finished' counter outside inner get_image function for proper progress tracking
- Updated gather() usage to run all get_image tasks concurrently after the loop (a sketch of this pattern follows this list)
- In __init__.py, enhanced use_aspect_ratio function: added checks if width and height are None before assigning aspect ratio-based defaults
- Assigned default width and height values for aspect ratios "1:1", "16:9", and "9:16" if not already specified in extra_body
- In copy_images.py, corrected get_filename function to convert tags to strings before joining with '+', ensuring proper filename formatting
- In response.py, refined is_content function to exclude Reasoning objects where is_thinking and token are both None
- Removed __eq__ method from Reasoning class to prevent comparison issues
- In web_search.py, simplified import by removing unused datetime and date modules
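A sketch of the responses-set-plus-counter pattern described for `get_image` above; the task bodies, timings, and polling interval are placeholders, not the PollinationsAI code.

```python
import asyncio
from typing import AsyncIterator, List, Set

async def fetch_image(url: str, responses: Set[str]) -> None:
    """Stand-in for a single image request; records its result in a shared set."""
    await asyncio.sleep(0.1)  # placeholder for the real HTTP call
    responses.add(f"![]({url})")

async def generate_images(urls: List[str]) -> AsyncIterator[str]:
    responses: Set[str] = set()
    finished = 0
    tasks = [asyncio.create_task(fetch_image(url, responses)) for url in urls]
    # Yield each result as soon as it appears instead of waiting for all tasks.
    while finished < len(tasks):
        await asyncio.sleep(0.01)  # let the tasks make progress
        while responses:
            yield responses.pop()
            finished += 1
    await asyncio.gather(*tasks)  # surface any exceptions from the tasks

async def main() -> None:
    async for markdown in generate_images(
        ["https://example.com/a.png", "https://example.com/b.png"]
    ):
        print(markdown)

asyncio.run(main())
```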
- Replaced the large GitHub project stats table in `README.md` with summaries and logos for Pollinations AI and MoneyPrinter V2
- Introduced `STATIC_URL` and `DIST_DIR` constants in new `g4f/constants.py` and used them across multiple files
- Updated `PollinationsAI.py` to support conversation title and follow-up generation using tool calls
- Modified `PollinationsAI.py` and `PollinationsImage.py` to use `STATIC_URL` for the `referrer` header
- Enhanced `PollinationsAI.stream_complete` to yield `ToolCalls`, `TitleGeneration`, and `SuggestedFollowups`
- Added `ToolCalls` handling in `client/__init__.py` to support non-stream and stream modes
- Updated `ChatCompletionDelta` model in `client/stubs.py` to support `ToolCalls`
- Modified `HarProvider` to merge `DEFAULT_HEADERS` into request headers
- Improved `OpenaiChat.py` by adding optional chaining to page evaluation expressions for robustness
- Updated `any_provider.py` to force use of `PollinationsAI` if `tools` key is present in kwargs
- Refactored `is_content` into a reusable function in `providers/response.py` and used in `retry_provider.py`
- Updated `gui/server/website.py` to use `STATIC_URL` and simplify `GPT4FREE_URL` handling
- Removed redundant constants from `version.py` and imported them from `constants.py`
- Updated error handling in g4f/Provider/DDG.py to raise ResponseError instead of yield error strings
- Replaced yield statements with raises in g4f/Provider/DDG.py for HTTP and response errors
- Added response raising in g4f/Provider/DeepInfraChat.py for image upload responses
- Included model alias validation and error raising in g4f/Provider/hf/HuggingFaceMedia.py
- Corrected model alias dictionary key in g4f/Provider/hf_space/StabilityAI_SD35Large.py
- Ensured referrer parameter default value in g4f/Provider/PollinationsImage.py
- Removed duplicate imports and adjusted get_models method in g4f/Provider/har/__init__.py
- Modified g4f/gui/server/api.py to remove unused conversation parameter in _create_response_stream
- Fixed logic to handle single exception in g4f/providers/retry_provider.py
- Added missing import of JsonConversation in g4f/providers/retry_provider.py
- Corrected stream_read_files to replace extension in return string in g4f/tools/files.py
- Fixed duplicate model entries in Blackbox provider model_aliases
- Added "meta-llama-" → "llama-" normalization to model-name cleaning in the Cloudflare provider
- Enhanced PollinationsAI provider with improved vision model detection
- Added reasoning support to PollinationsAI provider
- Fixed HuggingChat authentication to include headers and impersonate
- Removed unused max_inputs_length parameter from HuggingFaceAPI
- Renamed extra_data to extra_body for consistency across providers
- Added Puter provider with grouped model support
- Enhanced AnyProvider with grouped model display and better model organization
- Fixed model cleaning in AnyProvider to handle more model name variations
- Added api_key handling for HuggingFace providers in AnyProvider
- Added see_stream helper function to parse event streams (sketched after this list)
- Updated GUI server to handle JsonConversation properly
- Fixed aspect ratio handling in image generation functions
- Added ResponsesConfig and ClientResponse for new API endpoint
- Updated requirements to include markitdown
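The `see_stream` helper mentioned above presumably walks an event stream line by line; here is a minimal synchronous sketch of that idea (the real helper may be async and differ in detail).

```python
import json
from typing import Iterable, Iterator

def see_stream(lines: Iterable[bytes]) -> Iterator[dict]:
    """Yield JSON chunks from an SSE-style byte stream (illustrative)."""
    for raw in lines:
        line = raw.decode(errors="ignore").strip()
        if not line.startswith("data:"):
            continue  # skip comments, event names and blank keep-alive lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        try:
            yield json.loads(data)
        except json.JSONDecodeError:
            continue

# Usage with an in-memory example stream
chunks = list(see_stream([b'data: {"content": "Hel"}', b'data: {"content": "lo"}', b"data: [DONE]"]))
```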
- Fix SSL parameter in Liaobots provider (change verify_ssl to ssl=False)
- Simplify model aliases in PollinationsAI by removing list-based random selection
- Add referrer parameter to PollinationsAI provider methods
- Fix image URL generation in PollinationsAI to prevent URL length issues
- Add Gemini-2.5-flash model to Gemini provider models dictionary
- Add Gemini-2.5-pro alias in Gemini provider
- Remove try/except blocks in Provider/__init__.py for more direct imports
- Fix response_format handling in PollinationsAI provider
- Update RequestLogin handling in Gemini provider
- Changed the model alias for "gpt-4.1-nano" to be a list containing "openai-fast" and "openai-small".
- Updated the model alias for "gpt-4.1" to be a list containing "openai", "openai-large", and "openai-xlarge".
- Modified the model alias for "gpt-4.1-mini" to be a list containing "openai", "openai-roblox", and "roblox-rp".
- Changed the model alias for "deepseek-r1" to be a list containing "deepseek-reasoning-large" and "deepseek-reasoning".
- Added a new class method `get_model` to retrieve the internal model name based on user-provided model names, including handling for aliases that are lists.
- Implemented error handling in `get_model` to raise a `ModelNotFoundError` if the model is not found.
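Pulling the alias bullets above together, a `get_model` resolver with list-valued aliases and a `ModelNotFoundError` could look roughly like this; the provider class and its model list are illustrative, and the exception is stubbed rather than imported.

```python
import random


class ModelNotFoundError(Exception):
    """Stand-in for g4f's ModelNotFoundError."""


class ExampleProvider:
    default_model = "openai"
    model_aliases = {
        # A list means several upstream names back the same public alias.
        "gpt-4.1-nano": ["openai-fast", "openai-small"],
        "deepseek-r1": ["deepseek-reasoning-large", "deepseek-reasoning"],
        "gpt-4o-mini": "openai",
    }
    models = ["openai", "openai-fast", "openai-small",
              "deepseek-reasoning-large", "deepseek-reasoning"]

    @classmethod
    def get_model(cls, model: str) -> str:
        if not model:
            return cls.default_model
        if model in cls.models:
            return model
        if model in cls.model_aliases:
            alias = cls.model_aliases[model]
            # List aliases: pick one of the backing upstream models at random.
            if isinstance(alias, list):
                return random.choice(alias)
            return alias
        raise ModelNotFoundError(f"Model not found: {model}")
```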
- Changed default model in commit.py from "gpt-4o" to "claude-3.7-sonnet"
- Fixed ARTA provider by adding proper auth token handling and form data submission
- Updated Blackbox provider to use OpenRouter models instead of premium models
- Improved DDG provider with simplified authentication and better error handling
- Updated DeepInfraChat provider with new models and aliases
- Removed non-working providers: Goabror, Jmuz, OIVSCode, AllenAI, ChatGptEs, FreeRouter, Glider
- Moved non-working providers to the not_working directory
- Added BlackboxPro provider in needs_auth directory with premium model support
- Updated Liaobots provider with new models and improved authentication
- Renamed Microsoft_Phi_4 to Microsoft_Phi_4_Multimodal for clarity
- Updated LambdaChat provider with direct API implementation instead of HuggingChat
- Updated models.py with new model definitions and provider mappings
- Removed BlackForestLabs_Flux1Schnell from HuggingSpace providers
- Updated model aliases across multiple providers for better compatibility
- Fixed Dynaspark provider endpoint URL to prevent spam detection
- Changed documentation URL in README.md for detailed guidance link
- In g4f/Provider/Cloudflare.py, broadened exception handling in async argument fetching to catch all exceptions in one place and only specific exceptions in another
- In g4f/Provider/PollinationsAI.py, removed the exception raised when an unknown model is not in image_models and replaced it with pass
- In g4f/Provider/needs_auth/OpenaiChat.py, modified session post call to always use cls._headers
- Changed if-chain in OpenaiChat.py to use elif for checking element prefix "sediment://"
- Added logic to extract and yield generated images for unique "file-service://" matches in streamed responses within OpenaiChat.py
- Commented out multimodal_text image asset pointer handling in OpenaiChat.py
- In g4f/client/__init__.py resolve_media(), set media name to basename of file path using os.path.basename
- Changed the return value of `generate_commit_message` in `commit.py` to be cleaned with ``.strip("`").strip()``
- Added new model mappings in `PollinationsAI.py`, including `gpt-4.1`, `gpt-4.1-mini`, and `deepseek-r1-distill-*`
- Removed `print` debug statement from `PollinationsAI.py` request payload
- Replaced temp file handling in `MarkItDown.py` with `get_tempfile` utility
- Added `get_tempfile` function to `files.py` for consistent tempfile creation
- Added `gpt-4.1` to `text_models` list in `models.py`
- Added `ModelNotSupportedError` to exception handling in `OpenaiChat.py`
- Updated message content creation to use `to_string()` in `OpenaiChat.py`
- Wrapped `get_model()` in try-except to ignore `ModelNotSupportedError` in `OpenaiChat.py`
- Adjusted `convert` endpoint in `api/__init__.py` to accept optional `provider` param
- Refactored `/api/markitdown` to reuse the `convert()` handler in `api/__init__.py`
- Added new `/v1/audio/speech` and `/api/{path_provider}/audio/speech` endpoints in `g4f/api/__init__.py` for generating speech from text
- Introduced `AudioSpeechConfig` model in `g4f/api/stubs.py` with fields for input, model, provider, voice, instructions, and response format (see the sketch after this list)
- Updated `PollinationsAI.py` to support `modalities` in `kwargs` when checking for audio
- Set default voice for audio models in `PollinationsAI.py` if not provided in `kwargs`
- Added debug print in `PollinationsAI.py` to log request data to text API endpoint
- Extended supported FastAPI response types in `g4f/api/__init__.py` to include `FileResponse` from `starlette.responses`
- Added `BackgroundTask` to clean up generated audio files after serving in `g4f/api/__init__.py`
- Modified `AnyProvider.py` to include `EdgeTTS`, `gTTS`, and `MarkItDown` as audio providers when `audio` is in `kwargs` or `modalities`
- Created `resolve_media` helper in `g4f/client/__init__.py` to standardize media handling for audio/image input
- Replaced manual media preprocessing in `Completions`, `AsyncCompletions`, and `Images` classes with `resolve_media`
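A sketch of what the `AudioSpeechConfig` request model could look like with the fields listed above; the exact field name for the response format and the defaults are assumptions.

```python
from typing import Optional

from pydantic import BaseModel

class AudioSpeechConfig(BaseModel):
    """Request body for the speech endpoints (field names as described above; defaults are illustrative)."""
    input: str
    model: Optional[str] = None
    provider: Optional[str] = None
    voice: Optional[str] = None
    instructions: Optional[str] = None
    response_format: Optional[str] = "mp3"
```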
- Added `/docs/README.md` with a link to the documentation site
- Updated `PollinationsAI` to exclude "gemini" model from `audio_models`
- Added logic in `PollinationsAI` to expand `audio_models` with voices from `default_audio_model`
- Appended voice names to `text_models` list in `PollinationsAI` if present in `default_audio_model`
- Modified `PollinationsAI._generate_text` to inject `audio` parameters when a voice model is used
- Updated `save_response_media` call to include voice name in model list
- Changed `OpenaiChat.get_generated_image` to support both `file-service://` and `sediment://` URLs using `conversation_id`
- Modified `OpenaiChat.create_messages` to optionally pass `prompt`
- Adjusted `OpenaiChat.run` to determine `prompt` explicitly and set messages accordingly
- Updated `OpenaiChat.iter_messages_line` to handle `None` in `fields.p` safely
- Passed `prompt` and `conversation_id` to `OpenaiChat.get_generated_image` inside image parsing loop
- Fixed redirect logic in `backend_api.py` to safely handle missing `skip` query param
- Enhanced `render` function in `website.py` to support live file serving with `live` query param
- Added new route `/dist/<path:name>` to serve static files from `DIST_DIR` in `website.py`
- Adjusted `render` to include `.live` suffix in cache filename when applicable
- Modified HTML replacements in `render` to preserve local `dist/` path if `add_origion` is True
- Added new examples for `client.media.generate` with `PollinationsAI`, `EdgeTTS`, and `Gemini` in `docs/media.md`
- Modified `PollinationsAI.py` to default to `default_audio_model` when audio data is present
- Adjusted `PollinationsAI.py` to conditionally construct message list from `prompt` when media is being generated
- Rearranged `PollinationsAI.py` response handling to yield `save_response_media` after checking for non-JSON content types
- Added support in `EdgeTTS.py` to use default values for `language`, `locale`, and `format` from class attributes
- Improved voice selection logic in `EdgeTTS.py` to fallback to default locale or language when not explicitly provided
- Updated `EdgeTTS.py` to yield `AudioResponse` with `text` field included
- Modified `Gemini.py` to support `.ogx` audio generation when `model == "gemini-audio"` or `audio` is passed
- Used `format_image_prompt` in `Gemini.py` to create audio prompt and saved audio file using `synthesize`
- Appended `AudioResponse` to `Gemini.py` for audio generation flow
- Added `save()` method to `Image` class in `stubs.py` to support saving `/media/` files locally
- Changed `client/__init__.py` to fallback to `options["text"]` if `alt` is missing in `Images.create`
- Ensured `AudioResponse` in `copy_images.py` includes the `text` (prompt) field
- Added `Annotated` fallback definition in `api/__init__.py` for compatibility with older Python versions
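Such an `Annotated` fallback is typically a try/except around the import, as in this sketch; the actual shim in `api/__init__.py` may differ.

```python
try:
    from typing import Annotated  # available on Python 3.9+
except ImportError:
    class Annotated:  # crude stand-in so annotated signatures still import
        def __class_getitem__(cls, item):
            # Annotated[X, ...] degrades to plain X
            return item[0] if isinstance(item, tuple) else item
```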
* feat: introduce AnyProvider & LM Arena, overhaul model/provider logic
- **Provider additions & removals**
- Added `Provider/LMArenaProvider.py` with full async stream implementation and vision model support
- Registered `LMArenaProvider` in `Provider/__init__.py`; removed old `hf_space/LMArenaProvider.py`
- Created `providers/any_provider.py`; registers `AnyProvider` dynamically in `Provider`
- **Provider framework enhancements**
- `providers/base_provider.py`
- Added `video_models` and `audio_models` attributes
- `providers/retry_provider.py`
- Introduced `is_content()` helper; now treats `AudioResponse` as stream content
- **Cloudflare provider refactor**
- `Provider/Cloudflare.py`
- Re‑implemented `get_models()` with `read_models()` helper, `fallback_models`, robust nodriver/curl handling and model‑name cleaning
- **Other provider tweaks**
- `Provider/Copilot.py` – removed `"reasoning"` alias and initial `setOptions` WS message
- `Provider/PollinationsAI.py` & `PollinationsImage.py`
- Converted `audio_models` from list to dict, adjusted usage checks and labels
- `Provider/hf/__init__.py` – applies `model_aliases` remap before dispatch
- `Provider/hf_space/DeepseekAI_JanusPro7b.py` – now merges media before upload
- `needs_auth/Gemini.py` – dropped obsolete Gemini model entries
- `needs_auth/GigaChat.py` – added lowercase `"gigachat"` alias
- **API & client updates**
- Replaced `ProviderUtils` with new `Provider` map usage throughout API and GUI server
- Integrated `AnyProvider` as default fallback in `g4f/client` sync & async flows
- API endpoints now return counts of providers per model and filter by `x_ignored` header
- **GUI improvements**
- Updated JS labels with emoji icons, provider ignore logic, model count display
- **Model registry**
- Renamed base model `"GigaChat:latest"` ➜ `"gigachat"` in `models.py`
- **Miscellaneous**
- Added audio/video flags to GUI provider list
- Tightened error propagation in `retry_provider.raise_exceptions`
* Fix unittests
* fix: handle None conversation when accessing provider-specific data
- Modified `AnyProvider` class in `g4f/providers/any_provider.py`
- Updated logic to check if `conversation` is not None before accessing `provider.__name__` attribute
- Wrapped `getattr(conversation, provider.__name__, None)` block in an additional `if conversation is not None` condition
- Changed `setattr(conversation, provider.__name__, chunk)` to use `chunk.get_dict()` instead of the object directly
- Ensured consistent use of `JsonConversation` when modifying or assigning `conversation` data
* feat: add provider string conversion & update IterListProvider call
- In g4f/client/__init__.py, within both Completions and AsyncCompletions, added a check to convert the provider from a string using convert_to_provider(provider) when applicable.
- In g4f/providers/any_provider.py, removed the second argument (False) from the IterListProvider constructor call in the async for loop.
---------
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
- In **g4f/Provider/Cloudflare.py**:
- Added `from .helper import to_string`.
- Replaced conditional string checks with `to_string(message["content"])` for both `"content"` and elements in `"parts"`.
- In **g4f/Provider/PollinationsAI.py**:
- Removed `"o3-mini"` from the `vision_models` list.
- Updated the alias mapping dictionary by:
- Removing the `"o3-mini": "openai-reasoning"` entry.
- Removing the duplicate `"gpt-4o-mini": "searchgpt"` mapping.
- Removing the duplicate `"gemini-2.0-flash-thinking": "gemini-reasoning"` entry.
- Removing the `"qwq-32b": "qwen-reasoning"` mapping.
- Adding a new alias `"llama-4-scout": "llamascout"`.
- In **g4f/gui/client/static/css/style.css**:
- Changed the `border-left` property value from `var(--colour-4)` to `var(--media-select)`.
- In **g4f/models.py**:
- For the `"o3-mini"` model, removed `PollinationsAI` from its `best_provider` list.
- Changed the comment from `# llama 2` to `### llama 2-4 ###` and removed redundant comments for llama 3.1 and 3.2.
- Added a new model `llama_4_scout` with `base_provider` set to `"Meta Llama"` and `best_provider` as `IterListProvider([Cloudflare, PollinationsAI])`.
- For the `"qwq-32b"` model, removed `PollinationsAI` from its `best_provider` list.
- Updated the `ModelUtils` mapping to include the new `llama_4_scout` model.
- Added "No auth / HAR file" authentication type in providers-and-models.md
- Added "Video generation" column to provider tables for future capability
- Updated model counts and provider capabilities throughout documentation
- Fixed ARTA provider with improved error handling and response validation
- Enhanced AllenAI provider with vision model support and proper image handling
- Significantly improved Blackbox provider:
- Added HAR file authentication support
- Added subscription status checking
- Added premium/demo model differentiation
- Improved session handling and error recovery
- Enhanced DDG provider with better error handling for challenges
- Improved PollinationsAI and PollinationsImage providers' model handling
- Added VideoModel class in g4f/models.py
- Added audio/video generation indicators in GUI components
- Added new Ai2 models: olmo-1-7b, olmo-2-32b, olmo-4-synthetic
- Added new commit message generation tool in etc/tool/commit.py
* New provider added(g4f/Provider/Websim.py)
* New provider added(g4f/Provider/Dynaspark.py)
* feat(g4f/gui/client/static/js/chat.v1.js): Enhance provider labeling for HuggingFace integrations
* feat(g4f/gui/server/api.py): add Hugging Face Space compatibility flag to provider data
* feat(g4f/models.py): add new providers and update model configurations
* Update g4f/Provider/__init__.py
* feat(g4f/Provider/AllenAI.py): expand model alias mappings for AllenAI provider
* feat(g4f/Provider/Blackbox.py): restructure image model handling and response processing
* feat(g4f/Provider/PollinationsAI.py): add new model aliases and streamline headers
* Update g4f/Provider/hf_space/*
* refactor(g4f/Provider/Copilot.py): update model alias mapping
* chore(g4f/models.py): update provider configurations for OpenAI models
* docs(docs/providers-and-models.md): update provider tables and model categorization
* fix(etc/examples/vision_images.py): update model and simplify client configuration
* fix(docs/providers-and-models.md): correct streaming status for GlhfChat provider
* docs(docs/providers-and-models.md): update provider capabilities and model documentation
* fix(models): update provider configurations for Mistral models
* fix(g4f/Provider/Blackbox.py): correct model alias key for Mistral variant
* feat(g4f/Provider/hf_space/CohereForAI_C4AI_Command.py): update supported model versions and aliases (closes #2802)
* fix(documentation): correct model names and provider counts (https://github.com/xtekky/gpt4free/pull/2805#issuecomment-2727489835)
* fix(g4f/models.py): correct mistral model configurations
* fix(g4f/Provider/DeepInfraChat.py): correct mixtral-small alias key
* New provider added(g4f/Provider/LambdaChat.py)
* feat(g4f/models.py): add new providers and enhance model configurations
* docs(docs/providers-and-models.md): add LambdaChat provider and update model listings
* feat(g4f/models.py): add new Liquid AI model and enhance providers
* docs(docs/providers-and-models.md): update model listings and provider counts
* feat(g4f/Provider/LambdaChat.py): add conditional reasoning processing based on model
* fix(g4f/tools/run_tools.py): handle combined thinking tags in single chunk
* New provider added(g4f/Provider/Goabror.py)
* feat(g4f/Provider/Blackbox.py): implement dynamic session management and model access control
* refactor(g4f/models.py): update provider configurations and model entries
* docs(docs/providers-and-models.md): update model listings and provider counts
---------
Co-authored-by: kqlio67 <>