- **g4f/Provider/Copilot.py**
- Added `"Smart (GPT-5)"` to `models` list.
- Added `"gpt-5"` alias mapping to `"GPT-5"` in `model_aliases`.
- Introduced `mode` selection logic to support `"smart"` mode for GPT-5 models alongside existing `"reasoning"` and `"chat"` modes.
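A minimal sketch of what the mode selection might look like, assuming the decision is made from the resolved model name; the helper name `select_mode` and the condition for the reasoning branch are illustrative, not the exact code in `Copilot.py`:

```python
# Illustrative only; the function name and the models that map to
# "reasoning" are assumptions, not the exact Copilot.py implementation.
def select_mode(model: str) -> str:
    """Pick the Copilot conversation mode from the resolved model name."""
    if "gpt-5" in model.lower():
        return "smart"       # new mode for GPT-5 models
    if "think" in model.lower():
        return "reasoning"   # assumed trigger for the existing reasoning mode
    return "chat"            # default chat mode

assert select_mode("Smart (GPT-5)") == "smart"
assert select_mode("Copilot") == "chat"
```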
- **g4f/Provider/EasyChat.py**
- Added `get_models` class method to map `-free` models to aliases and store them in `cls.models`.
- Resolved model via `cls.get_model(model)` at start of `create_async_generator`.
- Reset `cls.captchaToken` to `None` at the beginning of `callback`.
  - Wrapped the main generator logic in a loop so it retries once when a `CLEAR-CAPTCHA-TOKEN` error occurs, clearing the auth file and resetting the request arguments.
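A rough sketch of that retry-once pattern; `run_request`, `clear_auth_file`, and `build_args` are hypothetical placeholders for EasyChat's internals:

```python
# Hypothetical retry-once wrapper; the callables passed in stand in for
# EasyChat's request logic, auth-file cleanup, and argument construction.
async def generate_with_retry(run_request, clear_auth_file, build_args):
    args = build_args()
    for attempt in range(2):              # at most one retry
        try:
            async for chunk in run_request(args):
                yield chunk
            return
        except Exception as error:
            if attempt == 0 and "CLEAR-CAPTCHA-TOKEN" in str(error):
                clear_auth_file()         # drop cached auth/captcha state
                args = build_args()       # rebuild request arguments
                continue                  # retry once with fresh state
            raise
```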
- **g4f/Provider/needs_auth/OpenaiChat.py**
  - Added handling for image models: detect when an image model is selected and set an `image_model` flag, fall back to `default_model` when sending the request, and include `"picture_v2"` in `system_hints` when applicable (see the sketch below).
  - Replaced the textarea/button detection code in the page-load sequence with `nodriver` `select` calls, sending "Hello" before clicking the send button, and added profile-button selection when the class requires authentication.
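A minimal sketch of the image-model branch; only the `image_model` flag, the `default_model` fallback, and the `"picture_v2"` hint come from the change above, while the model set and helper name are assumptions:

```python
# Sketch only; IMAGE_MODELS and DEFAULT_MODEL are assumed values, not the
# exact constants used in OpenaiChat.py.
IMAGE_MODELS = {"gpt-image", "dall-e-3"}
DEFAULT_MODEL = "auto"

def prepare_request(model: str) -> dict:
    image_model = model in IMAGE_MODELS
    system_hints: list = []
    if image_model:
        system_hints.append("picture_v2")
    return {
        # send the default model when an image model was selected
        "model": DEFAULT_MODEL if image_model else model,
        "system_hints": system_hints,
    }

print(prepare_request("gpt-image"))  # {'model': 'auto', 'system_hints': ['picture_v2']}
```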
- **g4f/Provider/openai/models.py**
- Changed `default_image_model` from `"dall-e-3"` to `"gpt-image"`.
- Added `"gpt-5"` and `"gpt-5-thinking"` to `text_models` list.
- Added alias mapping for `"dall-e-3"` pointing to new `default_image_model`.
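Roughly, the updated tables in `g4f/Provider/openai/models.py` take the shape below; only the entries named above come from the change, and the surrounding entries and variable names are illustrative:

```python
# Only the entries mentioned in the change log are real; surrounding
# entries and the exact variable names are illustrative.
default_image_model = "gpt-image"        # was "dall-e-3"

text_models = [
    "gpt-4o",                            # illustrative existing entry
    "gpt-5",
    "gpt-5-thinking",
]

model_aliases = {
    "dall-e-3": default_image_model,     # old default now aliases the new one
}
```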
- Introduced a new provider class `LMArenaBeta` in `g4f/Provider/LMArenaBeta.py` supporting both text and image models.
- Updated `g4f/Provider/Cloudflare.py` to remove an unused import of `Cookies`.
- Modified `g4f/Provider/PollinationsAI.py` to change the condition for checking the action in the `next` command.
- Added a new provider `PuterJS` in `g4f/Provider/PuterJS.py` with various model handling and authentication logic.
- Removed the old `PuterJS` implementation from `g4f/Provider/not_working/PuterJS.py`.
- Updated `g4f/Provider/__init__.py` to include the new `LMArenaBeta` and `PuterJS` providers.
- Changed the label of `HarProvider` in `g4f/Provider/har/__init__.py` to "LMArena (Har)".
- Adjusted the model list in `g4f/Provider/openai/models.py` to ensure consistency in model definitions.
- Updated the API response handling in `g4f/providers/response.py` to calculate total tokens in the `Usage` class constructor.
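As a sketch of the `Usage` change, assuming a simplified constructor; the real class in `g4f/providers/response.py` carries more fields:

```python
# Simplified sketch; the real Usage class has additional fields and
# this constructor signature is an assumption.
class Usage:
    def __init__(self, prompt_tokens: int = 0, completion_tokens: int = 0, **kwargs):
        self.prompt_tokens = prompt_tokens
        self.completion_tokens = completion_tokens
        # total_tokens is derived in the constructor when not supplied
        self.total_tokens = kwargs.get(
            "total_tokens", prompt_tokens + completion_tokens
        )

usage = Usage(prompt_tokens=12, completion_tokens=30)
print(usage.total_tokens)  # 42
```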
* Update PuterJS provider to set `working` to `False` and `return_conversation` to `True`
* Update `PuterJS` class to raise `RateLimitError` for rate limits and `RuntimeError` for authentication and other failures, instead of yielding error messages (see the sketch after this list)
* Remove `PuterJS` as the best provider for various models and replace with other providers
* Update `DeepSeekAPI` to include `model_aliases` for "deepseek-chat"
* Update `openai/models.py` to include "o4-mini" and "o4-mini-high" in `text_models`
* Remove `PuterJS` from the list of providers in `g4f/models.py`
* Update `ModelUtils` to remove `PuterJS` from the list of models
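The error-handling change in `PuterJS` follows the pattern sketched below; the status codes and message formats are assumptions, and only the switch from yielding error text to raising `RateLimitError`/`RuntimeError` comes from the change:

```python
# Illustrative only; the checked status codes and messages are assumptions.
from g4f.errors import RateLimitError

async def check_response(response):
    """Raise instead of yielding error text back to the caller."""
    if response.status == 429:
        raise RateLimitError(f"Rate limited: {await response.text()}")
    if response.status in (401, 403):
        raise RuntimeError(f"Authentication failed: {await response.text()}")
    if response.status >= 400:
        raise RuntimeError(f"Request failed with status {response.status}")
```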
- Changed the `generate_commit_message` return value to be cleaned with ``.strip("`").strip()`` in `commit.py`
- Added new model mappings in `PollinationsAI.py`, including `gpt-4.1`, `gpt-4.1-mini`, and `deepseek-r1-distill-*`
- Removed `print` debug statement from `PollinationsAI.py` request payload
- Replaced temp file handling in `MarkItDown.py` with `get_tempfile` utility
- Added `get_tempfile` function to `files.py` for consistent tempfile creation (see the sketch after this list)
- Added `gpt-4.1` to `text_models` list in `models.py`
- Added `ModelNotSupportedError` to exception handling in `OpenaiChat.py`
- Updated message content creation to use `to_string()` in `OpenaiChat.py`
- Wrapped `get_model()` in a try-except to ignore `ModelNotSupportedError` in `OpenaiChat.py` (see the sketch after this list)
- Adjusted `convert` endpoint in `api/__init__.py` to accept optional `provider` param
- Refactored `/api/markitdown` to reuse the `convert()` handler in `api/__init__.py`
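The tempfile helper could look roughly like this; the actual signature of `get_tempfile` in `files.py` may differ, for example by accepting a file-like object:

```python
# Hypothetical shape of the helper; the real get_tempfile may take
# different arguments.
import os
import tempfile

def get_tempfile(data: bytes, suffix: str = "") -> str:
    """Write data to a named temporary file and return its path."""
    handle, path = tempfile.mkstemp(suffix=suffix)
    try:
        with os.fdopen(handle, "wb") as file:
            file.write(data)
    except BaseException:
        os.unlink(path)   # clean up on failure
        raise
    return path           # caller is responsible for deleting the file
```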
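And the `get_model()` wrapping in `OpenaiChat.py` amounts to the pattern below; `ModelNotSupportedError` is the real g4f exception, while the `resolve_model` wrapper itself is simplified and illustrative:

```python
# Pattern sketch; the wrapper is illustrative, not the provider's code.
from g4f.errors import ModelNotSupportedError

def resolve_model(provider, model: str) -> str:
    """Resolve a model name, passing unknown names through unchanged."""
    try:
        return provider.get_model(model)
    except ModelNotSupportedError:
        # Unknown models are forwarded as-is instead of failing early.
        return model
```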
- Add Reasoning to the OpenaiChat provider
- Check for `pipeline_tag` in HuggingChat providers
- Add image preview in PollinationsAI
- Add input of a custom model in the GUI