Update backend_anon_url in har_file.py for correct endpoint
Add async context manager for get_nodriver_session in requests module
Fix start-browser.sh to remove stale cookie file before launching Chrome
- **g4f/Provider/Copilot.py**
- Added `"Smart (GPT-5)"` to `models` list.
- Added `"gpt-5"` alias mapping to `"GPT-5"` in `model_aliases`.
- Introduced `mode` selection logic to support `"smart"` mode for GPT-5 models alongside existing `"reasoning"` and `"chat"` modes.
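
A minimal sketch of that mode switch; the helper name and the reasoning-model condition are illustrative, only the GPT-5 entries come from the list above:

```python
def select_mode(model: str) -> str:
    # Illustrative helper, not the actual Copilot.py code: route GPT-5 models
    # to the new "smart" mode, keep reasoning models on "reasoning",
    # everything else on "chat".
    if model in ("Smart (GPT-5)", "GPT-5", "gpt-5"):
        return "smart"
    if "think" in model.lower() or model.startswith("o1"):
        return "reasoning"
    return "chat"

print(select_mode("gpt-5"))  # -> smart
```
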
- **g4f/Provider/EasyChat.py**
- Added `get_models` class method to map `-free` models to aliases and store them in `cls.models`.
- Resolved model via `cls.get_model(model)` at start of `create_async_generator`.
- Reset `cls.captchaToken` to `None` at the beginning of `callback`.
- Wrapped main generator logic in a loop to allow retry once if `CLEAR-CAPTCHA-TOKEN` error occurs, clearing auth file and resetting args.
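
A rough sketch of that retry-once pattern (the callables are placeholders; the real code clears its auth file and resets the request arguments before retrying):

```python
async def run_with_captcha_retry(run_once, clear_auth):
    """Retry a streaming call once if the backend asks to clear the captcha token.

    `run_once` (an async-generator factory) and `clear_auth` (a cleanup callback)
    are placeholders for the provider's real internals.
    """
    for attempt in range(2):
        try:
            async for chunk in run_once():
                yield chunk
            return
        except Exception as error:
            if "CLEAR-CAPTCHA-TOKEN" in str(error) and attempt == 0:
                # e.g. reset cls.captchaToken and delete the cached auth file
                clear_auth()
                continue
            raise
```
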
- **g4f/Provider/needs_auth/OpenaiChat.py**
- Added handling for image models: detect and set `image_model` flag, use `default_model` when sending requests if image model selected, and include `"picture_v2"` in `system_hints` when applicable.
- Replaced the textarea/button detection code in the page-load sequence with `nodriver` `select` calls, sending "Hello" before clicking the send button, and added profile-button selection when the provider class requires authentication.
- **g4f/Provider/openai/models.py**
- Changed `default_image_model` from `"dall-e-3"` to `"gpt-image"`.
- Added `"gpt-5"` and `"gpt-5-thinking"` to `text_models` list.
- Added alias mapping for `"dall-e-3"` pointing to new `default_image_model`.
- Added `fallback_model = "deepseek"` to `PollinationsAI` class in `PollinationsAI.py`
- Modified `PollinationsAI._agenerate` to safely call `get_model` only if `model` is not None
- Removed unused login loop in `OpenaiChat.synthesize` method in `OpenaiChat.py`
- Replaced full CLI parser and main function implementation in `__main__.py` with import from `.main`
- Added `get_auth_result` method to `AsyncAuthedProvider` in `base_provider.py` for reusable auth retrieval (see the sketch below)
- Replaced repeated auth loading logic in `create_completion` and `create_streaming_completion` with a call to `get_auth_result` in `base_provider.py`
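
A simplified sketch of what such a shared helper can look like; the cache-file handling and return type are assumptions, not the exact `base_provider.py` code:

```python
import json
from pathlib import Path

async def get_auth_result(cache_file: Path, login) -> dict:
    # Reuse cached credentials when available; otherwise run the provider's
    # async login flow and persist the result for the next call.
    if cache_file.exists():
        return json.loads(cache_file.read_text())
    auth_result = await login()  # placeholder for the provider's on_auth_async flow
    cache_file.write_text(json.dumps(auth_result))
    return auth_result
```
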
- Added `timeout` parameter support to `LMArenaBeta._create_async_generator` and passed it to `StreamSession`
- Ensured fallback to `default_model` in `LMArenaBeta` if `model` is not provided
- Modified `OpenaiChat._create_completion` to rebuild messages excluding assistant roles if conversation ID exists
- Corrected OpenaiChat `nodriver_auth` to await `browser.get` and replaced page access with reload
- Improved `save_content` in `client.py` with robust content extraction, null checks, and logging for missing content
- Removed premature `input_text.strip()` in `stream_response` and relocated it to `run_client_args`
- Simplified and centralized markdown filtering call in `save_content`
- Replaced raw `print` logging in `__init__.py` with `debug.log` for `nodriver` URL opening message
- Added `create_function` and `async_create_function` class attributes with default implementations in `base_provider.py` for `AbstractProvider`, `AsyncProvider`, and `AsyncGeneratorProvider` (sketched below)
- Updated `get_create_function` and `get_async_create_function` methods to return these class attributes
- Replaced calls to `provider.get_create_function()` and `provider.get_async_create_function()` with direct attribute access `provider.create_function` and `provider.async_create_function` across `g4f/__init__.py`, `g4f/client/__init__.py`, `g4f/providers/retry_provider.py`, and `g4f/tools/run_tools.py`
- Removed redundant `get_create_function` and `get_async_create_function` methods from `providers/base_provider.py` and `providers/types.py`
- Ensured all provider response calls now use the class attributes for creating completions asynchronously and synchronously as needed
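
The attribute pattern, roughly (simplified; the real base classes wire both attributes to their existing `create_completion` / `create_async_generator` implementations):

```python
class AbstractProvider:
    @classmethod
    def create_completion(cls, model: str, messages: list, **kwargs):
        raise NotImplementedError

    # New: plain class attributes replace get_create_function()/get_async_create_function().
    create_function = create_completion
    # (async_create_function would point at an async wrapper; omitted in this sketch)

# Call sites change from
#   provider.get_create_function()(model, messages, stream=stream, **kwargs)
# to
#   provider.create_function(model, messages, stream=stream, **kwargs)
```
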
- Updated imports to use format_media_prompt in g4f/Provider/ARTA.py, PollinationsAI.py, PollinationsImage.py, Websim.py, audio/OpenAIFM.py, hf_space/BlackForestLabs_Flux1Dev.py, hf_space/DeepseekAI_JanusPro7b.py, hf_space/G4F.py, hf_space/Microsoft_Phi_4_Multimodal.py, hf_space/StabilityAI_SD35Large.py, needs_auth/BingCreateImages.py, needs_auth/BlackboxPro.py, needs_auth/DeepInfra.py, needs_auth/Gemini.py, needs_auth/MicrosoftDesigner.py, needs_auth/OpenaiChat.py, needs_auth/hf/HuggingChat.py, needs_auth/hf/HuggingFaceInference.py, needs_auth/hf/HuggingFaceMedia.py, not_working/AllenAI.py, template/OpenaiTemplate.py, api.py, and gui/server/api.py
- Replaced calls to format_image_prompt with format_media_prompt in relevant locations
- Changed media prompt handling in various providers to ensure consistent usage of format_media_prompt
- Modified the __aenter__ and __aexit__ methods of requests/aiohttp.py to properly manage ClientSession lifecycle
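
In outline, the lifecycle handling looks like this (a sketch assuming the wrapper owns an `aiohttp.ClientSession`, simplified relative to the real `StreamSession`):

```python
from typing import Optional
import aiohttp

class SessionWrapper:
    """Minimal sketch of the intended lifecycle, simplified from the real wrapper."""

    def __init__(self, **kwargs):
        self._kwargs = kwargs
        self._session: Optional[aiohttp.ClientSession] = None

    async def __aenter__(self) -> aiohttp.ClientSession:
        # Create the ClientSession lazily when the context is entered ...
        self._session = aiohttp.ClientSession(**self._kwargs)
        return self._session

    async def __aexit__(self, exc_type, exc, tb) -> None:
        # ... and always close it on exit, even when an error occurred.
        if self._session is not None:
            await self._session.close()
            self._session = None
```
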
- Changed the default value of `extra_body` from an empty dictionary to `None` in `ImageLabs` and `PollinationsAI` classes.
- Added a check to initialize `extra_body` to an empty dictionary if it is `None` in the `ImageLabs` class.
- Removed the `extra_image_models` list from the `PollinationsAI` class.
- Updated the way image models are combined in the `PollinationsAI` class to avoid duplicates.
- Changed the error handling for unsupported models from `ModelNotSupportedError` to `ModelNotFoundError` in multiple classes including `OpenaiChat`, `HuggingFaceAPI`, and `HuggingFaceInference`.
- Updated the `save_response_media` function to handle both string and bytes responses (see the sketch below).
- Adjusted the handling of audio data in the `PollinationsAI` class to ensure proper processing of audio responses.
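
The string/bytes handling in `save_response_media` boils down to something like the following (a sketch, not the actual signature; treating string payloads as base64 is an assumption):

```python
import base64
from pathlib import Path

def save_media(data, target: Path) -> Path:
    # Accept raw bytes as-is; strings are assumed to be base64-encoded payloads.
    if isinstance(data, str):
        data = base64.b64decode(data)
    target.write_bytes(data)
    return target
```
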
- Replaced the large GitHub project stats table in `README.md` with summaries and logos for Pollinations AI and MoneyPrinter V2
- Introduced `STATIC_URL` and `DIST_DIR` constants in new `g4f/constants.py` and used them across multiple files
- Updated `PollinationsAI.py` to support conversation title and follow-up generation using tool calls
- Modified `PollinationsAI.py` and `PollinationsImage.py` to use `STATIC_URL` for the `referrer` header
- Enhanced `PollinationsAI.stream_complete` to yield `ToolCalls`, `TitleGeneration`, and `SuggestedFollowups`
- Added `ToolCalls` handling in `client/__init__.py` to support non-stream and stream modes
- Updated `ChatCompletionDelta` model in `client/stubs.py` to support `ToolCalls`
- Modified `HarProvider` to merge `DEFAULT_HEADERS` into request headers
- Improved `OpenaiChat.py` by adding optional chaining to page evaluation expressions for robustness
- Updated `any_provider.py` to force use of `PollinationsAI` if `tools` key is present in kwargs
- Refactored `is_content` into a reusable function in `providers/response.py` and used it in `retry_provider.py` (sketched below)
- Updated `gui/server/website.py` to use `STATIC_URL` and simplify `GPT4FREE_URL` handling
- Removed redundant constants from `version.py` and imported them from `constants.py`
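
The reusable predicate looks roughly like this (the set of accepted response types is an assumption):

```python
def is_content(chunk) -> bool:
    # Sketch of the predicate: plain text and media/reasoning response objects
    # count as user-visible content, while control objects (provider info,
    # parameters, usage, ...) do not. The exact type list is an assumption.
    from g4f.providers.response import ImageResponse, AudioResponse, Reasoning
    return isinstance(chunk, (str, ImageResponse, AudioResponse, Reasoning))
```
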
- Changed documentation URL in README.md for detailed guidance link
- In g4f/Provider/Cloudflare.py, broadened exception handling in the async argument fetching: one call site now catches all exceptions, while another catches only specific ones
- In g4f/Provider/PollinationsAI.py, removed raising of exception for unknown model not in image_models and replaced it with pass
- In g4f/Provider/needs_auth/OpenaiChat.py, modified session post call to always use cls._headers
- Changed if-chain in OpenaiChat.py to use elif for checking element prefix "sediment://"
- Added logic to extract and yield generated images for unique "file-service://" matches in streamed responses within OpenaiChat.py
- Commented out multimodal_text image asset pointer handling in OpenaiChat.py
- In g4f/client/__init__.py resolve_media(), set media name to basename of file path using os.path.basename
- Introduce `Qwen_Qwen_3` provider in `g4f/Provider/hf_space/Qwen_Qwen_3.py`
- Register Qwen_Qwen_3 in `g4f/Provider/hf_space/__init__.py` and add it to `HuggingSpace`
- Update `MarkItDown` in `g4f/Provider/audio/MarkItDown.py` to accept and forward `llm_client` and `llm_model` kwargs; add async handling for `text_content`
- Modify audio route in `g4f/api/__init__.py` to pass `llm_client` for MarkItDown and set `modalities` only for other providers
- Adjust `OpenaiChat` (needs_auth) to merge media for upload and check for media presence before requesting images
- Change `get_tempfile` in `g4f/tools/files.py` to determine the suffix from the file extension using `os.path.splitext` (see the sketch after this list)
- Refactor provider listing and model mapping in `AnyProvider.get_models()` (g4f/providers/any_provider.py) to update provider order, support new `HarProvider`, initialize attributes, and guard against model_aliases being None
- Ensure `AnyProvider.create_async_generator` calls `get_models` before working with providers
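
A sketch of such a helper; the real `get_tempfile` in `files.py` may differ in signature (it likely also copies the uploaded content into the file):

```python
import os
import tempfile

def get_tempfile(uploaded_filename: str) -> str:
    # Keep the original extension (e.g. ".pdf", ".mp3") so downstream tools
    # such as MarkItDown can detect the file type from the suffix.
    suffix = os.path.splitext(uploaded_filename)[1]
    fd, path = tempfile.mkstemp(suffix=suffix)
    os.close(fd)
    return path
```
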
- Changed `generate_commit_message` return to ``.strip("`").strip()`` in `commit.py`
- Added new model mappings in `PollinationsAI.py`, including `gpt-4.1`, `gpt-4.1-mini`, and `deepseek-r1-distill-*`
- Removed `print` debug statement from `PollinationsAI.py` request payload
- Replaced temp file handling in `MarkItDown.py` with `get_tempfile` utility
- Added `get_tempfile` function to `files.py` for consistent tempfile creation
- Added `gpt-4.1` to `text_models` list in `models.py`
- Added `ModelNotSupportedError` to exception handling in `OpenaiChat.py`
- Updated message content creation to use `to_string()` in `OpenaiChat.py`
- Wrapped `get_model()` in try-except to ignore `ModelNotSupportedError` in `OpenaiChat.py`
- Adjusted `convert` endpoint in `api/__init__.py` to accept optional `provider` param
- Refactored `/api/markitdown` to reuse `convert()` handler in `api/__init__.py`
- **g4f/Provider/har/__init__.py**
- `get_models`/`create_async`: iterate over `(domain, harFile)` and filter with `domain in request_url`
- `read_har_files` now yields `(domain, har_data)`; fixes file variable shadowing and uses `json.load` (sketched below)
- remove stray `print`, add type hint for `find_str`, replace manual loops with `yield from`
- small whitespace clean-up
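
In outline, the generator now produces pairs like this (the directory argument and the way the domain is derived are placeholders):

```python
import json
import os
from typing import Iterator, Tuple

def read_har_files(har_dir: str) -> Iterator[Tuple[str, dict]]:
    # Yield (domain, parsed HAR) pairs; deriving the domain from the file name
    # is an assumption, the point is the pairing that lets callers filter
    # with `domain in request_url`.
    for name in sorted(os.listdir(har_dir)):
        if not name.endswith(".har"):
            continue
        domain = os.path.splitext(name)[0]
        with open(os.path.join(har_dir, name)) as har_file:  # distinct name, no shadowing
            yield domain, json.load(har_file)
```
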
- **g4f/Provider/needs_auth/Grok.py**
- `ImagePreview` now passes `auth_result.cookies` and `auth_result.headers`
- **g4f/Provider/needs_auth/OpenaiChat.py**
- add `Union` import; rename/refactor `get_generated_images` → `get_generated_image`
- support `file-service://` and `sediment://` pointers; choose the correct download URL (see the sketch below)
- return `ImagePreview` or `ImageResponse` accordingly and stream each image part
- propagate 422 errors, update prompt assignment and image handling paths
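
The pointer handling amounts to picking a download endpoint per scheme; a hedged outline (the URL paths below are illustrative, not verified backend routes):

```python
def choose_download_url(pointer: str, conversation_id: str) -> str:
    # Sketch only: endpoint paths are illustrative.
    file_id = pointer.split("://", 1)[1]
    if pointer.startswith("sediment://"):
        # conversation-scoped attachment download
        return f"/backend-api/conversation/{conversation_id}/attachment/{file_id}/download"
    if pointer.startswith("file-service://"):
        # plain file download
        return f"/backend-api/files/{file_id}/download"
    raise ValueError(f"Unsupported asset pointer: {pointer}")
```
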
- **g4f/client/__init__.py**
- drop unused `ignore_working` parameter in sync/async `Completions`
- normalise `media` argument: accept single tuple, infer filename when missing, fix index loop
- `Images.create_variation` updated to use the new media logic
- **g4f/gui/server/api.py**
- expose `latest_version_cached` via `?cache=` query parameter
- **g4f/gui/server/backend_api.py**
- optional Markdown extraction via `MarkItDown`; save rendered text as `<file>.md` (see the sketch below)
- upload flow rewrites: copy to temp file, move to bucket/media dir, clean temp, store filenames
- introduce `has_markitdown` guard and improved logging/exception handling
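
The optional-dependency guard follows the usual try/except-import pattern; `MarkItDown().convert(path).text_content` is the library's published API, the surrounding helper is illustrative:

```python
try:
    from markitdown import MarkItDown
    has_markitdown = True
except ImportError:
    has_markitdown = False

def extract_markdown(path: str):
    # Render the uploaded file to Markdown only when the optional dependency
    # is installed; callers save the result next to the upload as "<file>.md".
    if not has_markitdown:
        return None
    return MarkItDown().convert(path).text_content
```
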
- **g4f/tools/files.py**
- remove trailing spaces in HAR code-block header string
- **g4f/version.py**
- add `latest_version_cached` `@cached_property` for memoised version lookup
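
A `cached_property` memoises the (potentially slow) version lookup for the lifetime of the utils object; a minimal sketch with a placeholder lookup:

```python
from functools import cached_property

class VersionUtils:
    def get_latest_version(self) -> str:
        # Placeholder for the real lookup (e.g. a PyPI or GitHub release query).
        return "0.0.0"

    @cached_property
    def latest_version_cached(self) -> str:
        # Computed once per instance, then served from the cache.
        return self.get_latest_version()
```
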
- Updated `PollinationsAI` to exclude "gemini" model from `audio_models`
- Added logic in `PollinationsAI` to expand `audio_models` with voices from `default_audio_model`
- Appended voice names to `text_models` list in `PollinationsAI` if present in `default_audio_model`
- Modified `PollinationsAI._generate_text` to inject `audio` parameters when a voice model is used (see the sketch below)
- Updated `save_response_media` call to include voice name in model list
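
Injecting the audio parameters for a voice model looks roughly like this (a sketch; treat the exact model name and payload keys as assumptions):

```python
def build_audio_payload(model: str, voices: set, data: dict) -> dict:
    # Sketch: when the selected "model" is actually a voice name, switch to the
    # audio model and request audio output with that voice. The payload keys
    # follow the OpenAI-style audio schema.
    if model in voices:
        data = {
            **data,
            "model": "openai-audio",
            "modalities": ["text", "audio"],
            "audio": {"voice": model, "format": "mp3"},
        }
    return data
```
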
- Changed `OpenaiChat.get_generated_image` to support both `file-service://` and `sediment://` URLs using `conversation_id`
- Modified `OpenaiChat.create_messages` to optionally pass `prompt`
- Adjusted `OpenaiChat.run` to determine `prompt` explicitly and set messages accordingly
- Updated `OpenaiChat.iter_messages_line` to handle `None` in `fields.p` safely
- Passed `prompt` and `conversation_id` to `OpenaiChat.get_generated_image` inside image parsing loop
- Fixed redirect logic in `backend_api.py` to safely handle missing `skip` query param
- Enhanced `render` function in `website.py` to support live file serving with `live` query param
- Added new route `/dist/<path:name>` to serve static files from `DIST_DIR` in `website.py`
- Adjusted `render` to include `.live` suffix in cache filename when applicable
- Modified HTML replacements in `render` to preserve local `dist/` path if `add_origion` is True
- **g4f/Provider/needs_auth/OpenaiChat.py:**
- Increase the default timeout parameter from 180 to 360 seconds.
- Modify the regex pattern used in `re.sub` from
`(?:cite\nturn[0-9]+search|cite\nturn[0-9]+news|turn[0-9]+news|turn[0-9]+search)(\d+)`
to
`(?:cite\nturn[0-9]+|turn[0-9]+)(?:search|news|view)(\d+)`.
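
For reference, the broadened pattern now covers `search`, `news`, and `view` markers with any turn number; a quick illustration:

```python
import re

pattern = r"(?:cite\nturn[0-9]+|turn[0-9]+)(?:search|news|view)(\d+)"
sample = "Paris is the capital of France.cite\nturn3search0 See also turn3view1."
# The empty replacement only demonstrates what the pattern matches;
# the provider itself decides how the captured index is rendered.
print(re.sub(pattern, "", sample))
# -> Paris is the capital of France. See also .
```
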
- **g4f/gui/client/static/js/chat.v1.js:**
- Change the `filter_message` function to wrap the result of `text.replaceAll(...)` with `filter_message_content`, removing the chained `.replace` calls.
- In `add_message_chunk`, update the `update_message` call by passing `null` instead of the rendered reasoning.
- In `update_message`, refine the content-building logic by:
- Checking that both `reasoning_storage[message_id]` and `message_storage[message_id]` exist before concatenating rendered reasoning with the markdown-rendered message.
- Using an alternative branch to render only the reasoning when no message content is available.
- Appending an error message (if one exists) to the content and then updating `content_map.inner.innerHTML` unconditionally.
- In **FreeRouter.py**, change the `working` flag from `False` to `True`.
- In **LMArenaProvider.py**, replace the `.rstrip("▌")` call with a manual check that, if the content ends with `▌`, slices off the trailing characters.
- In **hf_space/__init__.py**, update the async generator call to pass the `media` parameter instead of `images`.
- In **OpenaiChat.py**:
- Modify the citation replacement regex to use `[0-9]+` (supporting any turn number) instead of a hardcoded `0`.
- Replace `fields.is_recipient` boolean checks with comparisons against `fields.recipient == "all"` for processing text and metadata.
- Add a new branch to process `/message/metadata/content_references` for adding source links.
- Update the conversation initialization by replacing `self.is_recipient` with setting `self.recipient` to `"all"`.
- Change the auth check from using `cls._api_key` to checking `cls.request_config.access_token`.
- In **chat.v1.js**, adjust the QR code URL assignment to use `window.conversation_id` if available, else default to `/qrcode`.
- In **raise_for_status.py**, update error handling by replacing `ResponseStatusError` with `MissingAuthError` for 403 responses detected as OpenAI Bot.
feat: add private chat mode & update token parsing
- In g4f/Provider/Blackbox.py, import PaymentRequiredError and raise it when the response equals "You have reached your request limit for the hour".
- In g4f/Provider/needs_auth/OpenaiChat.py, modify token parsing by splitting the "OpenAI-Sentinel-Proof-Token" header on "~" after the initial split.
- In g4f/gui/client/index.html, add a new "Private Conversation" button with the corresponding icon.
- In g4f/gui/client/static/js/chat.v1.js:
- Introduce the variable `privateConversation` to handle private chats.
- Update `new_conversation` to accept a private flag, setting `window.conversation_id` to null and updating the conversation title accordingly.
- Adjust `get_conversation` to return `privateConversation` when conversation_id is null.
- Revise `save_conversation` and `add_conversation` to store private conversation data when conversation_id is null.
- Modify on-load conversation handling to incorporate the updated conversation logic.
Update conversation body in OpenaiChat provider
Update ThinkingProcessor in run_tools
Add unittests for ThinkingProcessor
Update default headers in requests module
Add AuthFileMixin to base_provider
Add Reasoning to OpenaiChat provider
Check for pipeline_tag in HuggingChat providers
Add image preview in PollinationsAI
Add input of custom Model in GUI
Add new default HuggingFace provider
Add format_image_prompt and get_last_user_message helper
Add stop_browser callable to get_nodriver function
Fix content type response in images route