- **g4f/Provider/Copilot.py**
- Added `"Smart (GPT-5)"` to `models` list.
- Added `"gpt-5"` alias mapping to `"GPT-5"` in `model_aliases`.
- Introduced `mode` selection logic to support `"smart"` mode for GPT-5 models alongside existing `"reasoning"` and `"chat"` modes.
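A minimal sketch of the mode selection described above; the model names come from the bullets, while the function shape and the reasoning-model check are assumptions:

```python
def select_mode(model: str) -> str:
    # "smart" for GPT-5 models, "reasoning" for reasoning models,
    # "chat" as the assumed fallback.
    if model in ("gpt-5", "GPT-5", "Smart (GPT-5)"):
        return "smart"
    if model.startswith("o1"):  # hypothetical reasoning-model check
        return "reasoning"
    return "chat"

assert select_mode("gpt-5") == "smart"
assert select_mode("gpt-4") == "chat"
```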
- **g4f/Provider/EasyChat.py**
  - Added `get_models` class method to map `-free` models to aliases and store them in `cls.models`
  - Resolved model via `cls.get_model(model)` at the start of `create_async_generator`
  - Reset `cls.captchaToken` to `None` at the beginning of `callback`
  - Wrapped the main generator logic in a loop to allow one retry if a `CLEAR-CAPTCHA-TOKEN` error occurs, clearing the auth file and resetting args
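The retry-once flow could look roughly like this; `create_stream` and `clear_auth` are hypothetical stand-ins for the provider's internals, and the error type is assumed:

```python
async def run_with_retry(create_stream, args: dict, clear_auth):
    """Retry the generator once when a CLEAR-CAPTCHA-TOKEN error occurs."""
    for attempt in range(2):  # initial try plus one retry
        try:
            async for chunk in create_stream(args):
                yield chunk
            return
        except RuntimeError as error:  # assumed error type
            if "CLEAR-CAPTCHA-TOKEN" in str(error) and attempt == 0:
                clear_auth()   # e.g. delete the cached auth file
                args.clear()   # reset cached request arguments
                continue
            raise
```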
- **g4f/Provider/needs_auth/OpenaiChat.py**
  - Added handling for image models: detect image models and set the `image_model` flag, use `default_model` when sending requests if an image model is selected, and include `"picture_v2"` in `system_hints` when applicable.
  - Replaced the textarea/button detection code in the page-load sequence with `nodriver` `select` calls, sending "Hello" before clicking the send button, and added profile button selection when the class requires auth.
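The image-model branch reduces to a few lines; everything here except the names `image_model`, `default_model`, and `"picture_v2"` is an assumption:

```python
image_models = ["gpt-image"]   # assumed contents
default_model = "auto"         # assumed default
model = "gpt-image"

image_model = model in image_models                    # detect image model
request_model = default_model if image_model else model
system_hints = ["picture_v2"] if image_model else []   # hint for image output
```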
- **g4f/Provider/openai/models.py**
  - Changed `default_image_model` from `"dall-e-3"` to `"gpt-image"`.
  - Added `"gpt-5"` and `"gpt-5-thinking"` to `text_models` list.
  - Added alias mapping for `"dall-e-3"` pointing to new `default_image_model`.
- Added `Qwen` to `g4f/Provider/__init__.py` for provider registration
- Created new Qwen provider in `g4f/Provider/Qwen.py` using `AsyncGeneratorProvider`
- Implemented conversation state via new `JsonConversation` argument
- Replaced raw `print` statements with `debug.log` for internal logging
- Introduced `get_last_user_message()` for improved prompt extraction
- Added support for `Reasoning` and `Usage` response types during SSE parsing
- Replaced manual SSE parsing with `sse_stream()` utility from `requests`
- Added `active_by_default = True` to `Qwen` and modified related headers
- Tracked message and parent IDs for contextual threading
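A loose sketch of the Qwen streaming loop described above; the SSE payload shapes are guesses, and `JsonConversation` is simplified to a stand-in:

```python
from dataclasses import dataclass

@dataclass
class JsonConversation:
    # Stand-in for g4f's JsonConversation response type.
    chat_id: str = ""
    parent_id: str = ""  # tracked for contextual threading

async def parse_events(events, conversation: JsonConversation):
    async for data in events:  # each item: one decoded SSE payload
        if "message_id" in data:
            # remember the last message so the next turn can reference it
            conversation.parent_id = data["message_id"]
        delta = data.get("delta", {})
        if delta.get("phase") == "think":        # assumed reasoning marker
            yield ("reasoning", delta.get("content", ""))
        elif delta.get("content"):
            yield ("content", delta["content"])
        if "usage" in data:
            yield ("usage", data["usage"])
```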
- Updated `Usage` class in `g4f/providers/response.py` to support `input_tokens` and `output_tokens`
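As a stand-in, the extended `Usage` might normalize both naming schemes; the real class in `g4f/providers/response.py` may differ:

```python
class Usage(dict):
    def __init__(self, **kwargs):
        # accept input_tokens/output_tokens alongside the OpenAI-style
        # prompt_tokens/completion_tokens names (mapping assumed)
        if "input_tokens" in kwargs:
            kwargs.setdefault("prompt_tokens", kwargs["input_tokens"])
        if "output_tokens" in kwargs:
            kwargs.setdefault("completion_tokens", kwargs["output_tokens"])
        super().__init__(**kwargs)

print(Usage(input_tokens=10, output_tokens=20))
```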
- Refactored Nvidia provider: removed unused attributes and set `models_needs_auth = True`
- Added `await asyncio.sleep(1)` inside captcha verification loop in `EasyChat.py` to introduce delay between checks
- Modified `Grok.py` to send "Hello" input to a selected textarea element during auth flow
- Added delay after sending keys to textarea in `Grok.py` using `await asyncio.sleep(1)`
- Added logic to select and click a submit button if present in `Grok.py` during header check loop
- All changes are within the `EasyChat` and `Grok` class definitions respectively
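The Grok interaction boils down to a short nodriver sequence; the selectors and timeout are assumptions, only the select/send_keys/sleep/click pattern is from the bullets:

```python
import asyncio
import nodriver

async def send_hello(page: nodriver.Tab):
    textarea = await page.select("textarea")
    await textarea.send_keys("Hello")
    await asyncio.sleep(1)  # delay between typing and submitting
    try:
        # click the submit button if one appears (selector assumed)
        button = await page.select("button[type=submit]", timeout=5)
        await button.click()
    except asyncio.TimeoutError:
        pass  # no submit button present
```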
- Added new `EasyChat` provider (`g4f/Provider/EasyChat.py`) with captcha handling, nodriver callback, and token caching
- Added new `GLM` provider (`g4f/Provider/GLM.py`) with model retrieval, auth token fetch, and SSE streaming support
- Updated `g4f/Provider/__init__.py` to import `EasyChat` and `GLM`
- Modified `LMArenaBeta` in `g4f/Provider/needs_auth/LMArenaBeta.py` to remove nodriver availability check and always use `get_args_from_nodriver` with callback
- Updated `HuggingFaceAPI` in `g4f/Provider/needs_auth/hf/HuggingFaceAPI.py` to use `default_model` from `models` instead of `default_llama_model` and removed commented `max_inputs_lenght` param
- Updated `HuggingFace` in `g4f/Provider/needs_auth/hf/__init__.py` to import `default_model` instead of `default_vision_model`, set `default_model` class attribute, and commented out HuggingFaceInference and image model handling logic
- Modified `OpenaiTemplate` in `g4f/Provider/template/OpenaiTemplate.py` to prefer `"name"` over `"id"` when populating `vision_models`, `models`, and `models_count`
- Enhanced `sse_stream` in `g4f/requests/__init__.py` to strip and skip empty `data:` lines, handle JSON decode errors, and raise `ValueError` on invalid JSON
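A minimal version of the hardened parsing, reading from an async iterator of text lines instead of a real response object:

```python
import json

async def sse_stream(lines):
    async for line in lines:
        if not line.startswith("data:"):
            continue
        chunk = line[len("data:"):].strip()
        if not chunk or chunk == "[DONE]":
            continue  # skip empty data: lines and the stream terminator
        try:
            yield json.loads(chunk)
        except json.JSONDecodeError as error:
            raise ValueError(f"Invalid JSON in SSE stream: {chunk!r}") from error
```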
- Added `OPENROUTER_API_KEY` and `AZURE_API_KEYS` to `example.env`.
- Updated `AZURE_DEFAULT_MODEL` to "model-router" in `example.env`.
- Added `AZURE_ROUTES` with multiple model URLs in `example.env`.
- Changed the mapping for `"phi-4-multimodal"` in `DeepInfraChat.py` to `"microsoft/Phi-4-multimodal-instruct"`.
- Added `media` parameter to `GptOss.create_completion` method and raised a `ValueError` if `media` is provided.
- Updated `model_aliases` in `any_model_map.py` to include new mappings for various models.
- Removed several model aliases from `PollinationsAI` in `any_model_map.py`.
- Added new models and updated existing models in `model_map` across various files, including `any_model_map.py` and `__init__.py`.
- Refactored `AnyModelProviderMixin` to include `model_aliases` and updated the logic for handling model aliases.
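The alias handling amounts to a dictionary lookup with a pass-through fallback; the example mappings are pulled from bullets elsewhere in this section, the method shape is assumed:

```python
class AnyModelProviderMixin:
    model_aliases = {"gpt-5": "GPT-5", "dall-e-3": "gpt-image"}

    @classmethod
    def get_model(cls, model: str) -> str:
        # resolve an alias to its canonical name, else pass through
        return cls.model_aliases.get(model, model)

assert AnyModelProviderMixin.get_model("gpt-5") == "GPT-5"
assert AnyModelProviderMixin.get_model("unknown") == "unknown"
```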
- Append "flux.1-kontext-pro" to the vision_models list in Azure.py
- Introduce the image_models list containing "flux-1.1-pro" and "flux.1-kontext-pro"
- Ensure api_endpoint is checked for null before searching for "/images/"
- No other code modifications or logic changes in this
- Updated `Azure.create_completion` to support media uploads and image generation via `/images/` endpoint
- Added `media` parameter to `Azure.create_completion` and handled image-related request formatting
- Imported `StreamSession`, `FormData`, `raise_for_status`, `get_width_height`, `to_bytes`, `save_response_media`, and `format_media_prompt` in `Azure.py`
- Modified `get_models` to load `AZURE_API_KEYS` from environment and parse it into `cls.api_keys`
- Adjusted `get_width_height` in `image/__init__.py` to return higher default resolutions for "16:9" and "9:16" aspect ratios
- Modified `save_response_media` in `image/copy_images.py` to accept optional `content_type` parameter and use it when provided
- Updated `FormData` class logic in `requests/curl_cffi.py` to define it only when `has_curl_mime` is True and raise an error otherwise
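For the `get_models` change above, loading the keys could be as simple as this; the JSON shape of `AZURE_API_KEYS` is an assumption:

```python
import json
import os

def load_azure_api_keys() -> dict:
    # e.g. AZURE_API_KEYS='{"model-router": "key-..."}'
    raw = os.environ.get("AZURE_API_KEYS", "")
    return json.loads(raw) if raw else {}
```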
- In PerplexityLabs.py, added logic to filter consecutive assistant messages and update message array accordingly
- Modified PerplexityLabs.py to change "messages" field to use the new formatted message list
- Adjusted error handling in PerplexityLabs.py to include a newline in error messages
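The consecutive-message filter might look like this; merging with a newline is an assumption, the real code could equally drop the earlier message:

```python
def merge_consecutive(messages: list[dict]) -> list[dict]:
    result: list[dict] = []
    for message in messages:
        if (result and message["role"] == "assistant"
                and result[-1]["role"] == "assistant"):
            # collapse runs of assistant messages into one entry
            result[-1]["content"] += "\n" + message["content"]
        else:
            result.append(dict(message))
    return result

history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello"},
    {"role": "assistant", "content": "How can I help?"},
]
assert len(merge_consecutive(history)) == 2
```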
- Imported `os` in BlackForestLabs_Flux1KontextDev.py and replaced the media filename assignment with the file's basename when the media name is `None`
- In Groq.py, set "active_by_default" to True for the provider
- In OpenRouter.py, added "active_by_default" as True
- In Together.py, set "active_by_default" to True
- In HuggingFaceInference.py, set "working" to False
- In models.py, changed default_model to "openai/gpt-oss-120b" instead of previous value
- In backend_api.py, added a null check in jsonify_provider_models to return 404 if response is None, and simplified get_provider_models call
- Added new provider `GptOss` in `g4f/Provider/GptOss.py` with support for async message generation via SSE
- Registered `GptOss` in `g4f/Provider/__init__.py`
- Implemented logic in `GptOss.create_async_generator` to handle both new and existing conversations with SSE streaming response handling
- Handled event types including `thread.created`, `thread.item_updated`, and `thread.updated` within `GptOss`
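Dispatch over those event types might be structured like this; the payload fields are guesses, only the three type names come from the bullet above:

```python
def handle_event(event: dict):
    event_type = event.get("type")
    if event_type == "thread.created":
        # new conversation: remember the thread id for later turns
        return ("conversation", event.get("thread", {}).get("id"))
    if event_type == "thread.item_updated":
        # incremental assistant output (field name assumed)
        return ("delta", event.get("item", {}).get("content", ""))
    if event_type == "thread.updated":
        return ("done", None)  # final thread state, nothing to stream
    return ("ignored", event_type)
```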
- Modified `read_response` in `OpenaiTemplate.py` to yield `Reasoning` objects using `reasoning_content`, falling back to `reasoning`, from `choice["delta"]`
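The fallback itself is one expression; wrapping the result in g4f's `Reasoning` type is implied by the bullet:

```python
def extract_reasoning(delta: dict):
    # prefer reasoning_content, fall back to reasoning
    return delta.get("reasoning_content") or delta.get("reasoning")

assert extract_reasoning({"reasoning": "step 1"}) == "step 1"
assert extract_reasoning({"reasoning_content": "a", "reasoning": "b"}) == "a"
```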
- Added new `GeminiCLI.py` provider under `g4f/Provider/needs_auth/` with full implementation of Gemini CLI support including OAuth2 handling, SSE streaming, tool calling, and media handling
- Registered `GeminiCLI` in `g4f/Provider/needs_auth/__init__.py`
- Modified `g4f/client/stubs.py`:
  - Removed `serialize_reasoning_content` method
  - Added inline `reasoning_content` join logic in `model_construct` override
- Updated `Azure.py`:
- Removed `"stream": False` from `model_extra_body`
- Added inline `stream = False` assignment when using `model_extra_body`
- Updated `DeepInfra.py`:
  - Added import of `DeepInfraChat`
  - Set `model_aliases` to `DeepInfraChat.model_aliases`
- In g4f/Provider/Kimi.py, added a try-except block around `raise_for_status` to catch exceptions containing "匿名聊天使用次数超过" ("anonymous chat usage limit exceeded") and raise `MissingAuthError`; also added a `yield` of `JsonConversation`.
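The error translation follows a common pattern; `MissingAuthError` is real (`g4f.errors`), while the stand-in class and call shape here are simplifications:

```python
class MissingAuthError(Exception):
    """Stand-in for g4f.errors.MissingAuthError."""

async def check_response(response, raise_for_status):
    try:
        await raise_for_status(response)
    except Exception as error:
        # "匿名聊天使用次数超过": anonymous chat usage limit exceeded
        if "匿名聊天使用次数超过" in str(error):
            raise MissingAuthError("Anonymous usage limit exceeded") from error
        raise
```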
- In g4f/Provider/PollinationsAI.py, added a `yield` statement for `Reasoning`.
- Updated get_image function in PollinationsAI to remove responses.add for the response URL and streamline response handling.
- In PollinationsAI's main loop, modified response processing to cancel outstanding tasks and re-raise on errors when the failure conditions are met, or otherwise yield `Reasoning` with status and progress labels.
- Adjusted response handling to increment the finished count and yield progress `Reasoning` only when no exception occurs.
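The cancel-or-progress loop could be sketched as follows; the label format is invented:

```python
import asyncio

async def gather_with_progress(tasks: list[asyncio.Task]):
    finished = 0
    for future in asyncio.as_completed(tasks):
        try:
            result = await future
        except Exception:
            for task in tasks:
                task.cancel()  # stop the remaining requests
            raise
        finished += 1
        # only yield progress when no exception occurred
        yield ("progress", f"{finished}/{len(tasks)} finished", result)
```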
- Changed "Inference API" to "Interference API" and updated corresponding documentation links in README.md
- Removed "o1" and "dall-e-3" entries from Copilot.py model_aliases
- Added "stream" and "extra_body" parameters with default values in Azure.py's create_async_generator method
- In CopilotAccount.py, included model_aliases with "gpt-4", "gpt-4o", "o1", and "dall-e-3"
- Updated conditional for provider comparison from "==" to "in" list in any_provider.py
- Modified g4f/api/__init__.py to set g4f_api_key from environment variable
- In backend_api.py, added "user" field to cached data with default "unknown"
- Changed logic in OpenaiTemplate.py read_response to check if "choice" exists before processing, and cleaned up indentation and conditionals in response parsing
- Removed unnecessary "stop" and "prompt" parameters from comments or unused code in OpenaiTemplate.py
- Tightened the check for "provider" comparison in any_provider.py to handle multiple providers properly
- Updated error message formatting in `get_provider_models` call within `Backend_Api` class
- Changed `MissingAuthError` handling to include exception type name in response
- Added generic `Exception` catch to handle unexpected errors with HTTP 500 response
- Modified `backend_api.py` file in `g4f/gui/server` directory
- Ensured all returned error messages use consistent structure with exception type and message
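Taken together with the 404 and 401 changes mentioned elsewhere in this section, the handler could be shaped like this; the payload structure is an assumption:

```python
from flask import jsonify

class MissingAuthError(Exception):
    """Stand-in for g4f.errors.MissingAuthError."""

def jsonify_provider_models(get_provider_models, provider: str):
    try:
        models = get_provider_models(provider)
    except MissingAuthError as error:
        message = f"{type(error).__name__}: {error}"
        return jsonify({"error": {"message": message}}), 401
    except Exception as error:
        message = f"{type(error).__name__}: {error}"
        return jsonify({"error": {"message": message}}), 500
    if models is None:
        return jsonify({"error": {"message": "Provider not found"}}), 404
    return jsonify(models)
```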
- Replaced all imports and usages of `see_stream` with `sse_stream` across:
  - `g4f/Provider/Kimi.py`
  - `g4f/Provider/hf_space/BlackForestLabs_Flux1KontextDev.py`
  - `g4f/Provider/needs_auth/PuterJS.py`
  - `g4f/Provider/template/OpenaiTemplate.py`
  - `g4f/requests/__init__.py` (renamed function `see_stream` to `sse_stream`)
- Modified `g4f/Provider/needs_auth/GeminiPro.py`:
  - Updated `default_model` from `gemini-2.5-flash-preview-04-17` to `gemini-2.5-flash`
  - Removed `gemini-2.5-flash-preview-04-17` from `fallback_models`
- Updated `etc/tool/md2html.py`:
  - Added `re` import
  - Changed `process_single_file_with_output` to check whether the output file exists
  - If it exists, uses regex to update the `<title>` and `itemprop="text">` content instead of writing the full template
  - If not, generates HTML using the template as before
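The in-place update reduces to targeted substitutions; the regex for the `itemprop` span would be analogous:

```python
import re

def update_title(html: str, new_title: str) -> str:
    # rewrite only the <title> instead of regenerating the template
    return re.sub(r"<title>.*?</title>", f"<title>{new_title}</title>",
                  html, count=1, flags=re.DOTALL)

page = "<html><head><title>Old</title></head><body></body></html>"
assert "<title>New</title>" in update_title(page, "New")
```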
- Set `working = False` in Free2GPT, Startnest, and Reka providers
- Changed `default_model` in LambdaChat from `deepseek-v3-0324` to `deepseek-r1`
- Removed `deepseek-v3` alias from LambdaChat's `model_aliases`
- In Kimi provider:
  - Replaced manual status check with `await raise_for_status(response)`
  - Set `model` field to `"k2"` in chat completion request
  - Removed unused `pass` statement
- In WeWordle provider:
  - Removed `**kwargs` from `data_payload` construction
- In Reka provider:
  - Set default value for `stream` to `True`
  - Modified `get_cookies` call to use `cache_result=False`
- In `cli/client.py`:
  - Added conditional import for `MarkItDown` with `has_markitdown` flag
  - Raised `MissingRequirementsError` if `MarkItDown` is not installed
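The guarded import follows the usual optional-dependency pattern; `MarkItDown().convert(...).text_content` is the library's real API, the call site is hypothetical:

```python
class MissingRequirementsError(Exception):
    """Stand-in for g4f.errors.MissingRequirementsError."""

try:
    from markitdown import MarkItDown  # optional dependency
    has_markitdown = True
except ImportError:
    has_markitdown = False

def read_file(path: str) -> str:
    if not has_markitdown:
        raise MissingRequirementsError('Install the "markitdown" package')
    return MarkItDown().convert(path).text_content
```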
- In `gui/server/backend_api.py`:
  - Imported `MissingAuthError`
  - Wrapped `get_provider_models` call in try-except block to return 401 if `MissingAuthError` is raised