- Updated error handling in g4f/Provider/DDG.py to raise ResponseError instead of yielding error strings (sketched below)
- Replaced yield statements with raises in g4f/Provider/DDG.py for HTTP and response errors
- Added error raising for image upload responses in g4f/Provider/DeepInfraChat.py
- Included model alias validation and error raising in g4f/Provider/hf/HuggingFaceMedia.py
- Corrected model alias dictionary key in g4f/Provider/hf_space/StabilityAI_SD35Large.py
- Ensured the referrer parameter has a default value in g4f/Provider/PollinationsImage.py
- Removed duplicate imports and adjusted get_models method in g4f/Provider/har/__init__.py
- Modified g4f/gui/server/api.py to remove unused conversation parameter in _create_response_stream
- Fixed logic to handle single exception in g4f/providers/retry_provider.py
- Added missing import of JsonConversation in g4f/providers/retry_provider.py
- Corrected stream_read_files to replace extension in return string in g4f/tools/files.py
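The raise-instead-of-yield pattern from the first two bullets, as a minimal sketch; `ResponseError` is assumed to come from `g4f.errors`, an aiohttp-style response object is assumed, and the surrounding DDG request code is omitted:

```python
from g4f.errors import ResponseError

async def check_response(response):
    # Instead of yielding an error string into the chat stream,
    # surface HTTP failures as exceptions the retry logic can handle.
    if response.status >= 400:
        raise ResponseError(f"HTTP {response.status}: {await response.text()}")
```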
- Fixed duplicate model entries in Blackbox provider model_aliases
- Added meta-llama- → llama- prefix cleaning for model names in Cloudflare provider
- Enhanced PollinationsAI provider with improved vision model detection
- Added reasoning support to PollinationsAI provider
- Fixed HuggingChat authentication to include headers and impersonate
- Removed unused max_inputs_length parameter from HuggingFaceAPI
- Renamed extra_data to extra_body for consistency across providers
- Added Puter provider with grouped model support
- Enhanced AnyProvider with grouped model display and better model organization
- Fixed model cleaning in AnyProvider to handle more model name variations
- Added api_key handling for HuggingFace providers in AnyProvider
- Added see_stream helper function to parse event streams (sketched below)
- Updated GUI server to handle JsonConversation properly
- Fixed aspect ratio handling in image generation functions
- Added ResponsesConfig and ClientResponse for new API endpoint
- Updated requirements to include markitdown
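A hypothetical sketch of an event-stream parser like the see_stream helper mentioned above; the real signature and chunk format in g4f may differ:

```python
import json
from typing import AsyncIterator

async def see_stream(lines: AsyncIterator[bytes]) -> AsyncIterator[dict]:
    # Server-sent events: payload lines start with "data: " and the
    # stream ends with a "[DONE]" sentinel.
    async for line in lines:
        if not line.startswith(b"data: "):
            continue
        chunk = line[len(b"data: "):].strip()
        if chunk == b"[DONE]":
            break
        try:
            yield json.loads(chunk)
        except json.JSONDecodeError:
            continue
```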
- Fix SSL parameter in Liaobots provider (change verify_ssl to ssl=False)
- Simplify model aliases in PollinationsAI by removing list-based random selection
- Add referrer parameter to PollinationsAI provider methods
- Fix image URL generation in PollinationsAI to prevent URL length issues
- Add Gemini-2.5-flash model to Gemini provider models dictionary
- Add Gemini-2.5-pro alias in Gemini provider
- Remove try/except blocks in Provider/__init__.py for more direct imports
- Fix response_format handling in PollinationsAI provider
- Update RequestLogin handling in Gemini provider
- Changed the model alias for "gpt-4.1-nano" to be a list containing "openai-fast" and "openai-small".
- Updated the model alias for "gpt-4.1" to be a list containing "openai", "openai-large", and "openai-xlarge".
- Modified the model alias for "gpt-4.1-mini" to be a list containing "openai", "openai-roblox", and "roblox-rp".
- Changed the model alias for "deepseek-r1" to be a list containing "deepseek-reasoning-large" and "deepseek-reasoning".
- Added a new class method `get_model` to retrieve the internal model name based on user-provided model names, including handling for aliases that are lists.
- Implemented error handling in `get_model` to raise a `ModelNotFoundError` if the model is not found.
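How list-valued aliases might be resolved by the new `get_model` method, as a sketch: the class name and model lists are illustrative, the random selection strategy is an assumption, and `ModelNotFoundError` is assumed to come from `g4f.errors`.

```python
import random

from g4f.errors import ModelNotFoundError

class ExampleProvider:
    models = ["openai", "openai-large", "deepseek-reasoning"]
    model_aliases = {
        "gpt-4.1": ["openai", "openai-large", "openai-xlarge"],
        "deepseek-r1": ["deepseek-reasoning-large", "deepseek-reasoning"],
    }

    @classmethod
    def get_model(cls, model: str) -> str:
        # Names that are already internal model names pass through unchanged.
        if model in cls.models:
            return model
        alias = cls.model_aliases.get(model)
        if alias is None:
            raise ModelNotFoundError(f"Model not found: {model}")
        # A list-valued alias maps one public name to several backends;
        # pick one candidate (shown here as a random choice).
        if isinstance(alias, list):
            return random.choice(alias)
        return alias
```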
- Added `hashlib` import to enable hash generation.
- Introduced `generate_user_agent` method to create a dynamic user agent string.
- Modified `generate_fe_version` to accept `page_content` as a parameter and extract the feature hash.
- Added `generate_x_vqd_hash_1` method to compute a hash based on `vqd` and `fe_version`.
- Updated `create_async_generator` to use the new user agent and hash generation methods.
- Simplified message history management by using a single line to find the last user message (sketched below)
- Removed unnecessary comments and improved code readability.
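The single-line lookup of the last user message could look roughly like this:

```python
def last_user_message(messages: list) -> dict:
    # Walk the history from the end and return the most recent user turn.
    return next((m for m in reversed(messages) if m.get("role") == "user"), None)
```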
- Removed a debug print statement that was logging the payload being sent in the ARTA provider
- The removed line was printing form_data with the message "Sending payload: {form_data}"
- This change helps clean up unnecessary logging in the production code
- Mark FreeRouter provider as not working by changing working=True to working=False
- Fix line ending issues in DuckDuckGo.py and LMArenaProvider.py by ensuring proper newline at end of file
- Changed default model in commit.py from "gpt-4o" to "claude-3.7-sonnet"
- Fixed ARTA provider by adding proper auth token handling and form data submission
- Updated Blackbox provider to use OpenRouter models instead of premium models
- Improved DDG provider with simplified authentication and better error handling
- Updated DeepInfraChat provider with new models and aliases
- Removed non-working providers: Goabror, Jmuz, OIVSCode, AllenAI, ChatGptEs, FreeRouter, Glider
- Moved non-working providers to the not_working directory
- Added BlackboxPro provider in needs_auth directory with premium model support
- Updated Liaobots provider with new models and improved authentication
- Renamed Microsoft_Phi_4 to Microsoft_Phi_4_Multimodal for clarity
- Updated LambdaChat provider with direct API implementation instead of HuggingChat
- Updated models.py with new model definitions and provider mappings
- Removed BlackForestLabs_Flux1Schnell from HuggingSpace providers
- Updated model aliases across multiple providers for better compatibility
- Fixed Dynaspark provider endpoint URL to prevent spam detection
- Changed documentation URL in README.md for detailed guidance link
- In g4f/Provider/Cloudflare.py, broadened exception handling in async argument fetching to catch all exceptions in one place and only specific exceptions in another
- In g4f/Provider/PollinationsAI.py, stopped raising an exception for unknown models not in image_models and replaced it with pass
- In g4f/Provider/needs_auth/OpenaiChat.py, modified session post call to always use cls._headers
- Changed if-chain in OpenaiChat.py to use elif for checking element prefix "sediment://"
- Added logic to extract and yield generated images for unique "file-service://" matches in streamed responses within OpenaiChat.py
- Commented out multimodal_text image asset pointer handling in OpenaiChat.py
- In g4f/client/__init__.py resolve_media(), set media name to basename of file path using os.path.basename
This PR addresses an issue where some text-type updates from the process_generating stream were not being yielded, resulting in missing output content.
**Root Cause:** `update[2]` blocks with `type: "text"` were not handled unless embedded in certain paths, so standalone or final text entries were silently ignored during streaming.
**Solution:** Added explicit handling for `type: "text"` updates.
- Modified `get_models` method in `TypeGPT` class in `g4f/Provider/TypeGPT.py`
- Changed model filtering condition to exclude models starting with `-` instead of including only those starting with `+`
- Replaced `model.split("@")[0][1:]` with `model.split("@")[0].strip("+")` to extract the model name without the leading `+`
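The revised filter can be sketched as follows; the sample entries are invented for illustration:

```python
raw_models = ["+gpt-4o-mini@OpenAI", "-o1-preview@OpenAI", "deepseek-chat@DeepSeek"]

# Exclude entries disabled with a leading "-" instead of keeping only
# entries marked with a leading "+", then strip the marker and the suffix.
models = [
    model.split("@")[0].strip("+")
    for model in raw_models
    if not model.startswith("-")
]
# -> ["gpt-4o-mini", "deepseek-chat"]
```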
- Introduce `Qwen_Qwen_3` provider in `g4f/Provider/hf_space/Qwen_Qwen_3.py`
- Register Qwen_Qwen_3 in `g4f/Provider/hf_space/__init__.py` and add it to `HuggingSpace`
- Update `MarkItDown` in `g4f/Provider/audio/MarkItDown.py` to accept and forward `llm_client` and `llm_model` kwargs; add async handling for `text_content`
- Modify audio route in `g4f/api/__init__.py` to pass `llm_client` for MarkItDown and set `modalities` only for other providers
- Adjust `OpenaiChat` (needs_auth) to merge media for upload and check for media presence before requesting images
- Change `get_tempfile` in `g4f/tools/files.py` to determine suffix from file extension using `os.path.splitext`
- Refactor provider listing and model mapping in `AnyProvider.get_models()` (g4f/providers/any_provider.py) to update provider order, support the new `HarProvider`, initialize attributes, and guard against `model_aliases` being None (sketched below)
- Ensure `AnyProvider.create_async_generator` calls `get_models` before working with providers
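A minimal sketch of the `model_aliases` None-guard mentioned above; the real `AnyProvider.get_models()` additionally handles provider ordering and the new `HarProvider`:

```python
def map_model_aliases(providers: list) -> dict:
    """Collect which working providers expose each public model alias."""
    mapping: dict = {}
    for provider in providers:
        if not getattr(provider, "working", False):
            continue
        # model_aliases may be None on some providers; guard with an empty dict.
        for alias in (getattr(provider, "model_aliases", None) or {}):
            mapping.setdefault(alias, []).append(provider)
    return mapping
```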
- Replaced inline `get_args_from_nodriver` logic with a new async function `nodriver_read_models` inside `Cloudflare` class
- Added `async def nodriver_read_models()` to handle asynchronous execution of `get_args_from_nodriver` and call `read_models()`
- Moved `try/except` block for handling `RuntimeError` and `FileNotFoundError` inside the new async function
- Updated fallback assignment `cls.models = cls.fallback_models` and debug logging to be within `nodriver_read_models` exception handler
- Replaced `asyncio.run(args)` with `asyncio.run(nodriver_read_models())` to execute the new async function
- Modified logic inside `except ResponseStatusError` block in `Cloudflare` class to incorporate the new structure
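A structural sketch of the refactor described in this group of bullets, with stubbed-in placeholders for the nodriver helpers and model names (not the real Cloudflare class):

```python
import asyncio

class CloudflareSketch:
    """Structural sketch only; names and models are placeholders."""
    fallback_models = ["@cf/meta/llama-3-8b-instruct"]
    models: list = []

    @classmethod
    async def get_args_from_nodriver(cls) -> dict:
        # Stand-in for the real nodriver-based argument fetching.
        raise RuntimeError("nodriver is not available in this sketch")

    @classmethod
    async def read_models(cls, args: dict) -> None:
        cls.models = ["@cf/meta/llama-3-8b-instruct"]

    @classmethod
    def get_models(cls) -> list:
        if not cls.models:
            async def nodriver_read_models():
                # Fetch args and read models inside one coroutine so the
                # RuntimeError/FileNotFoundError fallback lives in one place.
                try:
                    args = await cls.get_args_from_nodriver()
                    await cls.read_models(args)
                except (RuntimeError, FileNotFoundError) as error:
                    print(f"Falling back to static model list: {error}")
                    cls.models = cls.fallback_models
            asyncio.run(nodriver_read_models())
        return cls.models
```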
- Changed default value of `return_conversation` from `False` to `True` in `Grok._create_completion` method in `g4f/Provider/needs_auth/Grok.py`
- Updated HTTP 200 response mapping to use `{"content": {"audio/*": {}}}` instead of `{"class": FileResponse}` in `Api.markdown_to_audio` method in `g4f/api/__init__.py`
- Added validation for `media` parameter in `MarkItDown` class to raise `ValueError` if `media` is not provided.
- Included a check for `has_markitdown` in `MarkItDown` class to raise `ImportError` if `markitdown` is not installed.
- Refactored API response structure by defining `responses` as a reusable dictionary in `g4f/api/__init__.py`.
- Updated `/v1/chat/completions` endpoint to include a new parameter `conversation_id` in the `chat_completions` function.
- Enhanced logic in `chat_completions` to set `conversation_id` from configuration or function input.
- Added a new endpoint `/api/{provider}/{conversation_id}/chat/completions` in `g4f/api/__init__.py` to handle chat completions with `conversation_id`.
- Replaced duplicate response dictionary definitions with the reusable `responses` dictionary in the `/api/{provider}/chat/completions` and `/v1/chat/completions` endpoints.
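A hedged sketch of the reusable `responses` mapping and the conversation-scoped route; the documented status codes, request payload shape, and handler bodies are placeholders rather than the actual g4f API code:

```python
from fastapi import FastAPI

app = FastAPI()

# Shared response documentation, defined once and reused by every
# chat-completions route instead of repeating the dictionary inline.
responses = {
    200: {"content": {"application/json": {}, "text/event-stream": {}}},
    401: {"description": "Unauthorized"},
    404: {"description": "Not found"},
}

@app.post("/v1/chat/completions", responses=responses)
async def chat_completions(config: dict, conversation_id: str = None):
    # conversation_id can come from the config payload, the query string,
    # or the provider-scoped route below.
    conversation_id = conversation_id or config.get("conversation_id")
    return {"conversation_id": conversation_id}

@app.post("/api/{provider}/{conversation_id}/chat/completions", responses=responses)
async def provider_chat_completions(provider: str, conversation_id: str, config: dict):
    # Delegate to the main handler with the conversation_id taken from the path.
    return await chat_completions(config, conversation_id=conversation_id)
```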
- Changed `generate_commit_message` to strip backticks and surrounding whitespace from the returned content via `` .strip("`").strip() `` in `commit.py`
- Added new model mappings in `PollinationsAI.py`, including `gpt-4.1`, `gpt-4.1-mini`, and `deepseek-r1-distill-*`
- Removed `print` debug statement from `PollinationsAI.py` request payload
- Replaced temp file handling in `MarkItDown.py` with `get_tempfile` utility
- Added `get_tempfile` function to `files.py` for consistent tempfile creation (sketched below)
- Added `gpt-4.1` to `text_models` list in `models.py`
- Added `ModelNotSupportedError` to exception handling in `OpenaiChat.py`
- Updated message content creation to use `to_string()` in `OpenaiChat.py`
- Wrapped `get_model()` in try-except to ignore `ModelNotSupportedError` in `OpenaiChat.py`
- Adjusted `convert` endpoint in `api/__init__.py` to accept optional `provider` param
- Refactored `/api/markitdown` to reuse `convert()` handler in `api/__init__.py`
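A minimal sketch of a `get_tempfile` helper that derives the suffix from the original filename via `os.path.splitext`, as referenced above; the actual signature in `g4f/tools/files.py` may differ:

```python
import os
import tempfile

def get_tempfile(file, filename: str = None) -> str:
    # Keep the original extension so downstream tools (e.g. MarkItDown)
    # can detect the file type from the suffix.
    suffix = os.path.splitext(filename)[1] if filename else ""
    with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as temp:
        temp.write(file.read())
        return temp.name
```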
- Added new `/v1/audio/speech` and `/api/{path_provider}/audio/speech` endpoints in `g4f/api/__init__.py` for generating speech from text
- Introduced `AudioSpeechConfig` model in `g4f/api/stubs.py` with fields for input, model, provider, voice, instructions, and response format
- Updated `PollinationsAI.py` to support `modalities` in `kwargs` when checking for audio
- Set default voice for audio models in `PollinationsAI.py` if not provided in `kwargs`
- Added debug print in `PollinationsAI.py` to log the request data sent to the text API endpoint
- Extended supported FastAPI response types in `g4f/api/__init__.py` to include `FileResponse` from `starlette.responses`
- Added `BackgroundTask` to clean up generated audio files after serving in `g4f/api/__init__.py`
- Modified `AnyProvider.py` to include `EdgeTTS`, `gTTS`, and `MarkItDown` as audio providers when `audio` is in `kwargs` or `modalities`
- Created `resolve_media` helper in `g4f/client/__init__.py` to standardize media handling for audio/image input (sketched below)
- Replaced manual media preprocessing in `Completions`, `AsyncCompletions`, and `Images` classes with `resolve_media`
- Added `/docs/README.md` with a link to the documentation site
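One way the `resolve_media` normalisation could look, accepting either a single `(data, name)` tuple or bare file paths and inferring the filename with `os.path.basename` when it is missing; the real helper in `g4f/client/__init__.py` handles more input shapes:

```python
import os

def resolve_media(media) -> list:
    # Accept a single (data, name) tuple as well as a list of entries.
    if isinstance(media, tuple):
        media = [media]
    resolved = []
    for item in media or []:
        data, name = item if isinstance(item, tuple) else (item, None)
        # Infer a filename from the path when none was given.
        if name is None and isinstance(data, (str, os.PathLike)):
            name = os.path.basename(data)
        resolved.append((data, name))
    return resolved
```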
- Added new MarkItDown audio provider in g4f/Provider/audio/MarkItDown.py for handling audio transcription using markitdown external module
- Included MarkItDown provider import in g4f/Provider/audio/__init__.py
- Implemented /v1/audio/transcriptions POST API endpoint with support for file upload, model selection, provider choice, and prompt in g4f/api/__init__.py
- Added TranscriptionResponseModel Pydantic schema to g4f/api/stubs.py for transcription responses
- Fixed media tuple handling in g4f/client/__init__.py to correctly unpack and assign file/name pairs in Completions, Images, and AsyncCompletions classes
- Updated g4f/Provider/LambdaChat.py to remove redundant origin attribute and simplify URL assignment
- Added handling in AnyProvider to append a provider when the model matches its provider map entry and the provider is working (g4f/providers/any_provider.py)
- Modified backend_api.py to fix web_search logic and default filter_markdown parameter extraction from query parameters
- Modified `Cloudflare` class in `Cloudflare.py` to add logic for loading `_args` from a cache file if it exists and `_args` is `None`
- Inserted code in `Cloudflare.py` to check for an existing cache file and read its JSON content into `_args` (sketched below)
- Refactored `Copilot` class in `Copilot.py` by removing `try`/`finally` block around websocket message loop
- Moved websocket close logic to the end of the message handling loop in `Copilot.py`
- Removed nested `try`/`except` block inside the websocket loop in `Copilot.py`
- Preserved original message handling structure while simplifying control flow in `Copilot.py`
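The cache-file loading added to `Cloudflare` (referenced above) could be sketched like this; the function name and argument layout are simplified for illustration:

```python
import json
import os

def load_cached_args(cache_file: str, current_args: dict = None) -> dict:
    # Reuse previously saved request arguments (cookies, headers) instead of
    # launching a browser, when a cache file from an earlier run exists.
    if current_args is None and os.path.isfile(cache_file):
        with open(cache_file, "r") as f:
            return json.load(f)
    return current_args
```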
- Updated `kwargs["media"]` assignment in the `Completions` class to use a list containing a tuple instead of a single tuple.
- Removed an unnecessary blank line in the `AsyncCompletions` class after adjusting `kwargs["media"]` values.
- Changes affect the `g4f/client/__init__.py` file, specifically the `Completions` and `AsyncCompletions` classes.
- Changed `DEFAULT_MODEL` from `"o1"` to `"gpt-4o"` in `etc/tool/commit.py`
- Replaced `FALLBACK_MODELS` list with an empty list in `etc/tool/commit.py`
- Moved spinner stop logic inside content-checking block in `generate_commit_message` in `etc/tool/commit.py`
- Trimmed trailing characters "` \n" from returned content in `generate_commit_message` in `etc/tool/commit.py`
- Wrapped cache check in `g4f/gui/server/api.py` in a `try` block to catch `RuntimeError`
- Ensured fallback to `version.utils.latest_version` if cache access fails in `g4f/gui/server/api.py`
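Roughly, the fallback described in the two bullets above; the cache lookup is stubbed out, since the real cache check in `g4f/gui/server/api.py` is more involved:

```python
from g4f import version

def read_cached_version() -> str:
    # Placeholder for the real cache lookup, which can raise RuntimeError.
    raise RuntimeError("cache unavailable")

def get_latest_version() -> str:
    try:
        return read_cached_version()
    except RuntimeError:
        # Fall back to a live lookup when cache access fails.
        return version.utils.latest_version
```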
- Added `audio_models` attribute in `EdgeTTS.py` to store available audio models.
- Defined `default_model` attribute in `gTTS.py` with `"en-US"` and added `audio_models` list.
- Introduced `default_model` in `HarProvider` within `har/__init__.py` and updated model retrieval logic.
- Modified `HuggingFaceMedia.py` to change the `label` attribute from `"HuggingFace (Image/Video Generation)"` to `"HuggingFace"`.
- Improved caching behavior in `base_provider.py` by ensuring the cache file is written only once per generation cycle.
- Removed redundant `finally` block in `base_provider.py` that was rewriting the cache file unnecessarily.
- Removed default `tool_calls` assignment from `Api.__call__` in `api.py`
- Removed `debug.error()` logging in two exception blocks in `api.py`
- Fixed malformed URL in `CohereForAI_C4AI_Command.url` by removing leading tab in `CohereForAI_C4AI_Command.py`
- Replaced incorrect `result.text_content` access with just `result` in `Backend_Api.upload_bucket` in `backend_api.py`
- Replaced `shutil.copyfile` with `os.rename`, and added fallback to `shutil.copyfile` on failure, in `Backend_Api.upload_bucket` in `backend_api.py`
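The rename-with-fallback pattern from the last bullet, roughly (paths and the helper name are illustrative):

```python
import os
import shutil

def move_upload(source: str, destination: str) -> None:
    try:
        # os.rename is fast but only works within the same filesystem.
        os.rename(source, destination)
    except OSError:
        # Fall back to copying (e.g. for cross-device moves).
        shutil.copyfile(source, destination)
```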
- **g4f/Provider/har/__init__.py**
- `get_models`/`create_async`: iterate over `(domain, harFile)` and filter with `domain in request_url`
- `read_har_files` now yields `(domain, har_data)`; fixes file variable shadowing and uses `json.load`
- remove stray `print`, add type hint for `find_str`, replace manual loops with `yield from`
- small whitespace clean-up
- **g4f/Provider/needs_auth/Grok.py**
- `ImagePreview` now passes `auth_result.cookies` and `auth_result.headers`
- **g4f/Provider/needs_auth/OpenaiChat.py**
- add `Union` import; rename/refactor `get_generated_images` → `get_generated_image`
- support `file-service://` and `sediment://` pointers; choose correct download URL
- return `ImagePreview` or `ImageResponse` accordingly and stream each image part
- propagate 422 errors, update prompt assignment and image handling paths
- **g4f/client/__init__.py**
- drop unused `ignore_working` parameter in sync/async `Completions`
- normalise `media` argument: accept single tuple, infer filename when missing, fix index loop
- `Images.create_variation` updated to use the new media logic
- **g4f/gui/server/api.py**
- expose `latest_version_cached` via `?cache=` query parameter
- **g4f/gui/server/backend_api.py**
- optional Markdown extraction via `MarkItDown`; save rendered text as `<file>.md`
- upload flow rewrites: copy to temp file, move to bucket/media dir, clean temp, store filenames
- introduce `has_markitdown` guard and improved logging/exception handling
- **g4f/tools/files.py**
- remove trailing spaces in HAR code-block header string
- **g4f/version.py**
- add `latest_version_cached` `@cached_property` for memoised version lookup