Update backend_anon_url in har_file.py to point to the correct endpoint
Add async context manager for get_nodriver_session in requests module
Fix start-browser.sh to remove stale cookie file before launching Chrome
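For reference, a rough sketch of the `get_nodriver_session` async context manager noted above, assuming `get_nodriver` returns the browser together with the `stop_browser` callable that a later entry in this log mentions; the import path is taken from this changelog, not verified against the code:

```python
from contextlib import asynccontextmanager

# Import path is an assumption based on this changelog ("requests module"); adjust if it differs.
from g4f.requests import get_nodriver


@asynccontextmanager
async def get_nodriver_session(proxy: str = None, user_data_dir: str = "nodriver", **kwargs):
    """Open a nodriver browser, hand it to the caller, and always stop it afterwards."""
    # The (browser, stop_browser) return shape follows the "stop_browser callable" note later in this log.
    browser, stop_browser = await get_nodriver(proxy=proxy, user_data_dir=user_data_dir, **kwargs)
    try:
        yield browser
    finally:
        stop_browser()
```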
- **ApiAirforce.py**: Added `use_image_size = True` class attribute.
- **EasyChat.py**: Added `user_data_dir=None` argument when calling `get_args_from_nodriver`.
- **LMArenaBeta.py**:
- Only set `cls.share_url` from `G4F_SHARE_URL` if `cls.share_url` is `None`.
- Added `user_data_dir=None` argument when calling `get_args_from_nodriver`.
- Modified media filtering to check `url` is a `str` before `startswith("https://")`.
- **OpenaiTemplate.py**:
- Added `use_image_size = False` class attribute.
- Modified image generation payload to optionally include a `"size"` key when `use_image_size` is `True` and width/height are present (see the sketch after this list).
- Refactored `read_response` to store `content` in a variable, strip on first chunk, and yield `ToolCalls` directly from `tool_calls` variable.
- **g4f/requests/__init__.py**:
- Added `user_data_dir` parameter (default `"nodriver"`) to `get_args_from_nodriver`.
- Passed `user_data_dir` to `get_nodriver` call.
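A minimal sketch of the `use_image_size` behaviour described for `OpenaiTemplate` above; the helper name and the OpenAI-style `"WxH"` size string are illustrative assumptions:

```python
from typing import Optional

def build_image_payload(cls, prompt: str, width: Optional[int] = None,
                        height: Optional[int] = None, **extra) -> dict:
    """Build the image-generation payload, adding "size" only when the provider opts in."""
    data = {"prompt": prompt, **extra}
    # Providers that set use_image_size = True (e.g. ApiAirforce) send a "WxH" size string.
    if getattr(cls, "use_image_size", False) and width and height:
        data["size"] = f"{width}x{height}"
    return data
```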
- Added new `EasyChat` provider (`g4f/Provider/EasyChat.py`) with captcha handling, nodriver callback, and token caching
- Added new `GLM` provider (`g4f/Provider/GLM.py`) with model retrieval, auth token fetch, and SSE streaming support
- Updated `g4f/Provider/__init__.py` to import `EasyChat` and `GLM`
- Modified `LMArenaBeta` in `g4f/Provider/needs_auth/LMArenaBeta.py` to remove nodriver availability check and always use `get_args_from_nodriver` with callback
- Updated `HuggingFaceAPI` in `g4f/Provider/needs_auth/hf/HuggingFaceAPI.py` to use `default_model` from `models` instead of `default_llama_model` and removed commented `max_inputs_lenght` param
- Updated `HuggingFace` in `g4f/Provider/needs_auth/hf/__init__.py` to import `default_model` instead of `default_vision_model`, set `default_model` class attribute, and commented out HuggingFaceInference and image model handling logic
- Modified `OpenaiTemplate` in `g4f/Provider/template/OpenaiTemplate.py` to prefer `"name"` over `"id"` when populating `vision_models`, `models`, and `models_count`
- Enhanced `sse_stream` in `g4f/requests/__init__.py` to strip and skip empty `data:` lines, handle JSON decode errors, and raise `ValueError` on invalid JSON
- Replaced all imports and usages of `see_stream` with `sse_stream` across:
- `g4f/Provider/Kimi.py`
- `g4f/Provider/hf_space/BlackForestLabs_Flux1KontextDev.py`
- `g4f/Provider/needs_auth/PuterJS.py`
- `g4f/Provider/template/OpenaiTemplate.py`
- `g4f/requests/__init__.py` (renamed function `see_stream` to `sse_stream`)
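The hardened `sse_stream` described above amounts to roughly the following sketch (assuming the response object exposes an async `iter_lines()`); it is not the exact implementation:

```python
import json
from typing import AsyncIterator

async def sse_stream(response) -> AsyncIterator[dict]:
    """Parse a Server-Sent Events body into JSON chunks, skipping blanks and rejecting bad JSON."""
    async for raw in response.iter_lines():
        line = raw.decode(errors="ignore").strip() if isinstance(raw, bytes) else raw.strip()
        if not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if not data or data == "[DONE]":
            # Empty payloads are skipped; "[DONE]" simply marks the end of the useful stream.
            continue
        try:
            yield json.loads(data)
        except json.JSONDecodeError as e:
            raise ValueError(f"Invalid JSON in SSE chunk: {data[:80]!r}") from e
```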
- Modified `g4f/Provider/needs_auth/GeminiPro.py`:
- Updated `default_model` from `gemini-2.5-flash-preview-04-17` to `gemini-2.5-flash`
- Removed `gemini-2.5-flash-preview-04-17` from `fallback_models`
- Updated `etc/tool/md2html.py`:
- Added `re` import
- Changed `process_single_file_with_output` to check whether the output file already exists
- If it exists, uses a regex to update the `<title>` and `itemprop="text">` content instead of writing the full template
- If not, generates the HTML from the template as before
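The regex-based in-place update for `md2html.py` could look roughly like this; the exact patterns and the closing tag of the `itemprop="text"` container are assumptions:

```python
import re

def update_existing_html(html: str, new_title: str, new_body: str) -> str:
    """Rewrite only the <title> and the itemprop="text" content of an already generated page."""
    html = re.sub(r"<title>.*?</title>", f"<title>{new_title}</title>", html, count=1, flags=re.DOTALL)
    # Swap the rendered markdown inside the itemprop="text" container, keeping the rest of the page.
    html = re.sub(
        r'(itemprop="text">).*?(</div>)',
        lambda m: m.group(1) + new_body + m.group(2),
        html,
        count=1,
        flags=re.DOTALL,
    )
    return html
```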
- Added "transparent" to `image_models` in `PollinationsAI` and mapped it to "gptimage"
- Modified transparent flag handling in `_generate_image` call in `PollinationsAI`
- Removed unused `cls.get_models()` call from image generation method in `PollinationsAI`
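A sketch of the `"transparent"` handling described above, assuming a simple alias plus a boolean flag that is forwarded to `_generate_image`; the provider's real internals may differ:

```python
def resolve_image_model(model: str):
    """Map the pseudo-model "transparent" onto "gptimage" plus a transparency flag."""
    if model == "transparent":
        # "transparent" is listed in image_models but is really gptimage with transparency enabled.
        return "gptimage", True
    return model, False

# model, transparent = resolve_image_model(requested_model)
# ...then forward transparent=... into the _generate_image call.
```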
- Replaced `AsyncGeneratorProvider` with `AsyncAuthedProvider` in `HarProvider`
- Implemented `on_auth_async` in `HarProvider` to support browser-based auth via `nodriver`
- Replaced `create_async_generator` with `create_authed` in `HarProvider` to support `AuthResult`
- Removed custom `headers` in HAR post requests; used `auth_result.get_dict()` for `StreamSession`
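The `AuthResult`-based flow in `HarProvider` follows roughly this pattern; the import paths and base-class signatures below are assumptions based on this changelog, not the exact g4f API:

```python
# Import paths follow the project layout implied by this changelog; treat them as assumptions.
from g4f.providers.base_provider import AsyncAuthedProvider
from g4f.providers.response import AuthResult
from g4f.requests import StreamSession, get_args_from_nodriver


class HarProvider(AsyncAuthedProvider):
    url = "https://example.com"  # placeholder, not the provider's real URL

    @classmethod
    async def on_auth_async(cls, proxy: str = None, **kwargs):
        # Drive a real browser once and cache the resulting headers/cookies as an AuthResult.
        args = await get_args_from_nodriver(cls.url, proxy=proxy)
        yield AuthResult(**args)

    @classmethod
    async def create_authed(cls, model, messages, auth_result, proxy: str = None, **kwargs):
        # No hand-written headers: reuse exactly what the browser session produced.
        async with StreamSession(**auth_result.get_dict(), proxy=proxy) as session:
            ...  # POST the HAR-derived request with this session and yield the response chunks
```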
- Refactored `Video` provider to support optional search in `get_response`
- Added `search` parameter to `RequestConfig.get_response` and `Video.create_async_generator`
- Improved browser automation and element interaction logic in `Video` provider
- Extracted video request interception to collect URLs using `nodriver`
- Reduced video polling loop timeout from 600 to 300 iterations in `Video`
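The request interception used by the `Video` provider can be sketched with nodriver's CDP handlers; the URL filter below is illustrative:

```python
import nodriver as uc

def watch_video_requests(tab, urls: list):
    """Register a CDP listener that records video-looking request URLs as the page loads."""
    def on_request(event: uc.cdp.network.RequestWillBeSent):
        url = event.request.url
        if url.endswith(".mp4") or "/video/" in url:
            urls.append(url)
    # add_handler attaches the callback to the matching CDP event on this tab.
    tab.add_handler(uc.cdp.network.RequestWillBeSent, on_request)
```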
- Updated CLI `client.py` to fix handling of `conversation.conversation` assignment
- Fixed argparse config: removed `nargs='?'` and added `metavar` for `--conversation-file`
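The argparse fix amounts to something like this: dropping `nargs='?'` makes `--conversation-file` always consume a value, and `metavar` only changes the help text:

```python
import argparse

parser = argparse.ArgumentParser()
# Before (problematic): nargs='?' allowed the flag to appear without a value.
# parser.add_argument("--conversation-file", nargs="?")
# After: the flag always takes exactly one value, shown as FILE in --help.
parser.add_argument("--conversation-file", metavar="FILE",
                    help="Path used to load/save the conversation state.")
```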
- Improved image metadata extraction in API and backend when Pillow is available
- Modified `ImageResponse.__str__` to output HTML anchor/image tags with dimensions if present
- Added support for returning `target_path` from `copy_media` if `return_target` is True
- Changed default image processing size in `process_image` from 800x400 to 400x400
- Disabled RGBA-to-RGB flattening in `process_image`
- Improved `get_args_from_nodriver` to ensure proper referer and cookie handling
- Added helper `get_target_paths_and_urls` in `Api` to extract image dimensions from disk paths
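The dimension extraction behind the new helper boils down to this guarded Pillow pattern (the function shown here is illustrative, not the actual helper):

```python
def get_image_size(path: str):
    """Return (width, height) for an image on disk, or None if Pillow is unavailable."""
    try:
        from PIL import Image  # optional dependency
    except ImportError:
        return None
    try:
        with Image.open(path) as img:
            return img.size  # (width, height)
    except OSError:
        return None
```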
- Added `timeout` parameter support to `LMArenaBeta._create_async_generator` and passed it to `StreamSession`
- Ensured fallback to `default_model` in `LMArenaBeta` if `model` is not provided
- Modified `OpenaiChat._create_completion` to rebuild messages excluding assistant roles if conversation ID exists
- Corrected OpenaiChat `nodriver_auth` to await `browser.get` and replaced page access with reload
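The message rebuild in `OpenaiChat._create_completion` is roughly this filter, applied only when a conversation ID already exists so previous assistant turns are not resent (a sketch, not the exact code):

```python
def strip_assistant_turns(messages: list[dict], has_conversation_id: bool) -> list[dict]:
    """Drop assistant messages when continuing an existing server-side conversation."""
    if not has_conversation_id:
        return messages
    return [m for m in messages if m.get("role") != "assistant"]
```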
- Improved `save_content` in `client.py` with robust content extraction, null checks, and logging for missing content
- Removed premature `input_text.strip()` in `stream_response` and relocated it to `run_client_args`
- Simplified and centralized markdown filtering call in `save_content`
- Replaced raw `print` logging in `__init__.py` with `debug.log` for `nodriver` URL opening message
- Fixed duplicate model entries in Blackbox provider model_aliases
- Added `meta-llama-` to the `llama-` prefix cleanup when normalizing model names in the Cloudflare provider (see the sketch after this list)
- Enhanced PollinationsAI provider with improved vision model detection
- Added reasoning support to PollinationsAI provider
- Fixed HuggingChat authentication to include headers and impersonate
- Removed unused max_inputs_length parameter from HuggingFaceAPI
- Renamed extra_data to extra_body for consistency across providers
- Added Puter provider with grouped model support
- Enhanced AnyProvider with grouped model display and better model organization
- Fixed model cleaning in AnyProvider to handle more model name variations
- Added api_key handling for HuggingFace providers in AnyProvider
- Added see_stream helper function to parse event streams
- Updated GUI server to handle JsonConversation properly
- Fixed aspect ratio handling in image generation functions
- Added ResponsesConfig and ClientResponse for new API endpoint
- Updated requirements to include markitdown
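A sketch of the kind of model-name cleaning referenced above, covering the Cloudflare `meta-llama-` case and the broader AnyProvider normalization; the real rules are more involved:

```python
import re

def clean_model_name(name: str) -> str:
    """Normalise vendor-prefixed model ids, e.g. "meta-llama-3.1-8b" -> "llama-3.1-8b"."""
    name = name.split("/")[-1].lower()               # drop org prefixes like "meta-llama/"
    name = re.sub(r"^meta-llama-", "llama-", name)   # the Cloudflare case from this changelog
    return name
```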
feat: add __call__ in field_serializer and update body wait logic
- In g4f/client/stubs.py, add a __call__ method to the field_serializer class that returns the first positional argument.
- In g4f/requests/__init__.py, replace the wait_for call with a while loop that repeatedly evaluates "document.querySelector('body:not(.no-js)')" and sleeps until the element is found.
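Sketched with nodriver's `evaluate`, the new wait loop looks roughly like this; the selector comes straight from the note above, while the 1-second polling interval is an assumption:

```python
import asyncio

async def wait_for_body(page):
    """Poll until <body> has lost the no-js class, i.e. the page's JavaScript has run."""
    while not await page.evaluate("document.querySelector('body:not(.no-js)')"):
        await asyncio.sleep(1)  # interval is an assumption; selector is from the note above
```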
Add new default HuggingFace provider
Add format_image_prompt and get_last_user_message helper
Add stop_browser callable to get_nodriver function
Fix content type response in images route
Updates for memory with mem0
Fix asyncio import in nodriver function
Add provider specific api endpoints
Support for open settings in UI at /chat/settings
Restore the browser instance on startup errors in nodriver
Restored instances can be used as usual or to stop the browser
Add demo mode to the web UI for HuggingSpace
Add rate limit support to the web UI; simply install flask_limiter to enable it
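Enabling that rate limit is the standard Flask-Limiter setup once the package is installed; the limit string below is only an example, not the GUI's default:

```python
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
# Limits apply per client IP; "10 per minute" is an example value.
limiter = Limiter(get_remote_address, app=app, default_limits=["10 per minute"])
```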
Add a home page for the demo with Access Token input and validation
Add a stripped-down model list for the demo
Ignore encoding errors in web_search and file upload
Add no-auth access to OpenAI using nodriver
Fix "missing 1 required positional argument: 'cls'" error
Update token counting in the GUI
Fix streaming example in requests guide
Remove ChatGptEs as default model
Fix RetryProvider not retrying
Add retry and continue for DuckDuckGo provider
Add cache for Cloudflare provider
Add cache for prompts on gui home
Add scroll to bottom checkbox in gui
Improve prompts on home gui
Fix response content type in api for files
* Fix api streaming, fix AsyncClient, improve Client class, some provider fixes, update models list, fix some tests, update model list in Airforce provider, add OpenAI image generation url to api, fix reload and debug in api arguments, fix websearch in gui
* Fix Cloudflare, Pi, and AmigoChat providers
* Fix conversation support in DDG provider, Add cloudflare bypass with nodriver
* Fix unittests without curl_cffi