Commit graph

3613 commits

Author SHA1 Message Date
hlohaus
f3923f8e50 feat: add new GPT-5 support and improve captcha handling
- **g4f/Provider/Copilot.py**
  - Added `"Smart (GPT-5)"` to `models` list.
  - Added `"gpt-5"` alias mapping to `"GPT-5"` in `model_aliases`.
  - Introduced `mode` selection logic to support `"smart"` mode for GPT-5 models alongside existing `"reasoning"` and `"chat"` modes.
- **g4f/Provider/EasyChat.py**
  - Added `get_models` class method to map `-free` models to aliases and store them in `cls.models`.
  - Resolved model via `cls.get_model(model)` at start of `create_async_generator`.
  - Reset `cls.captchaToken` to `None` at the beginning of `callback`.
  - Wrapped main generator logic in a loop to allow retry once if `CLEAR-CAPTCHA-TOKEN` error occurs, clearing auth file and resetting args.
- **g4f/Provider/needs_auth/OpenaiChat.py**
  - Added handling for image models: detect and set the `image_model` flag, use `default_model` when sending requests if an image model is selected, and include `"picture_v2"` in `system_hints` when applicable.
  - Replaced textarea/button detection code in the page-load sequence with `nodriver` `select` calls, sending "Hello" before clicking the send button, and added profile button selection when the class requires auth.
- **g4f/Provider/openai/models.py**
  - Changed `default_image_model` from `"dall-e-3"` to `"gpt-image"`.
  - Added `"gpt-5"` and `"gpt-5-thinking"` to `text_models` list.
  - Added alias mapping for `"dall-e-3"` pointing to new `default_image_model`.
2025-08-09 01:33:56 +02:00
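The alias and mode handling described in the commit above can be pictured with a small standalone sketch. The mapping values follow the commit text; the helper functions and the condition used to choose between "reasoning" and "chat" are hypothetical, not Copilot.py's actual code.

```python
# Standalone sketch, not the provider implementation.
model_aliases = {"gpt-5": "GPT-5"}

def resolve_model(model: str) -> str:
    """Translate an API-style alias such as "gpt-5" into the label the backend expects."""
    return model_aliases.get(model, model)

def select_mode(model: str) -> str:
    """Use the new "smart" mode for GPT-5 models, otherwise fall back to the older modes."""
    if "gpt-5" in model.lower():
        return "smart"
    return "reasoning" if model.endswith("-reasoning") else "chat"  # assumed condition

print(resolve_model("gpt-5"), select_mode("gpt-5"))  # -> GPT-5 smart
```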
hlohaus
ba1f9bb3c3 Log requests 2025-08-08 11:44:58 +02:00
hlohaus
31fee02cce feat: add Qwen provider with conversation support and stream handling
- Added `Qwen` to `g4f/Provider/__init__.py` for provider registration
- Created new Qwen provider in `g4f/Provider/Qwen.py` using `AsyncGeneratorProvider`
- Implemented conversation state via new `JsonConversation` argument
- Replaced raw `print` statements with `debug.log` for internal logging
- Introduced `get_last_user_message()` for improved prompt extraction
- Added support for `Reasoning` and `Usage` response types during SSE parsing
- Replaced manual SSE parsing with `sse_stream()` utility from `requests`
- Added `active_by_default = True` to `Qwen` and modified related headers
- Tracked message and parent IDs for contextual threading
- Updated `Usage` class in `g4f/providers/response.py` to support `input_tokens` and `output_tokens`
- Refactored Nvidia provider: removed unused attributes and set `models_needs_auth = True`
2025-08-08 10:56:48 +02:00
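As a rough illustration of the prompt-extraction idea mentioned above, a `get_last_user_message()` style helper could look like the sketch below; this is not the g4f implementation, and the handling of list-typed content is an assumption.

```python
def get_last_user_message(messages: list) -> str:
    """Return the content of the most recent user message, or "" if there is none."""
    for message in reversed(messages):
        if message.get("role") == "user":
            content = message.get("content", "")
            if isinstance(content, list):  # content may be a list of typed parts
                return " ".join(part.get("text", "") for part in content if isinstance(part, dict))
            return content
    return ""

print(get_last_user_message([
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "What changed in the Qwen provider?"},
]))  # -> What changed in the Qwen provider?
```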
H Lohaus
95e2b697c0
Merge pull request #3119 from restartxx/feature/qwen-provider
feat: Add provider for chat.qwen.ai
2025-08-08 09:35:35 +02:00
lorriscript
608d0b140b
Feat/provider Nvidia (#3117)
* Nvidia provider, with a categorized listing of all models, working for chat LLMs

---------

Co-authored-by: grubux <lehuedelorris@gmail.com>
2025-08-08 09:35:02 +02:00
hlohaus
8100b38dea Add models_needs_auth to Azure 2025-08-08 09:29:21 +02:00
hlohaus
da324565ad Test search 2025-08-08 07:21:57 +02:00
hlohaus
66bb4ce511 Test search 2025-08-08 07:19:23 +02:00
hlohaus
41716dadd2 Test search 2025-08-08 07:18:18 +02:00
Oppon
da40335033
feat: Add provider for chat.qwen.ai 2025-08-08 00:28:02 -03:00
hlohaus
c019c2c4b6 Fix model names 2025-08-08 04:01:56 +02:00
hlohaus
f93786fc6c Update model list 2025-08-08 02:45:17 +02:00
hlohaus
2aa40bb8a8 Fix unittests 2025-08-08 02:20:19 +02:00
hlohaus
5b65101a2a feat: add delays and input actions for captcha and auth flows
- Added `await asyncio.sleep(1)` inside captcha verification loop in `EasyChat.py` to introduce delay between checks
- Modified `Grok.py` to send "Hello" input to a selected textarea element during auth flow
- Added delay after sending keys to textarea in `Grok.py` using `await asyncio.sleep(1)`
- Added logic to select and click a submit button if present in `Grok.py` during header check loop
- All changes are within the `EasyChat` and `Grok` class definitions respectively
2025-08-08 01:46:15 +02:00
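The auth-flow interaction described in the commit above can be sketched roughly as follows, assuming nodriver's `select`/`send_keys`/`click` element API; the URL, selectors, and one-second delay are illustrative only, not the actual Grok.py code.

```python
import asyncio
import nodriver

async def fill_prompt_and_submit(url: str) -> None:
    browser = await nodriver.start()
    tab = await browser.get(url)
    textarea = await tab.select("textarea")          # wait for the prompt box to appear
    await textarea.send_keys("Hello")                # type a throwaway message
    await asyncio.sleep(1)                           # pause, as the commit describes
    button = await tab.select("button[type=submit]")
    await button.click()                             # submit if the button was found

if __name__ == "__main__":
    nodriver.loop().run_until_complete(fill_prompt_and_submit("https://example.com"))
```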
hlohaus
4164cb5978 Move on_request function 2025-08-08 01:37:10 +02:00
hlohaus
b9dfe1a460 feat: add EasyChat and GLM providers, update HuggingFace and SSE parsing
- Added new `EasyChat` provider (`g4f/Provider/EasyChat.py`) with captcha handling, nodriver callback, and token caching
- Added new `GLM` provider (`g4f/Provider/GLM.py`) with model retrieval, auth token fetch, and SSE streaming support
- Updated `g4f/Provider/__init__.py` to import `EasyChat` and `GLM`
- Modified `LMArenaBeta` in `g4f/Provider/needs_auth/LMArenaBeta.py` to remove nodriver availability check and always use `get_args_from_nodriver` with callback
- Updated `HuggingFaceAPI` in `g4f/Provider/needs_auth/hf/HuggingFaceAPI.py` to use `default_model` from `models` instead of `default_llama_model` and removed commented `max_inputs_lenght` param
- Updated `HuggingFace` in `g4f/Provider/needs_auth/hf/__init__.py` to import `default_model` instead of `default_vision_model`, set `default_model` class attribute, and commented out HuggingFaceInference and image model handling logic
- Modified `OpenaiTemplate` in `g4f/Provider/template/OpenaiTemplate.py` to prefer `"name"` over `"id"` when populating `vision_models`, `models`, and `models_count`
- Enhanced `sse_stream` in `g4f/requests/__init__.py` to strip and skip empty `data:` lines, handle JSON decode errors, and raise `ValueError` on invalid JSON
2025-08-08 01:19:32 +02:00
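The hardened SSE handling described for `sse_stream` can be sketched as a plain iterator. The real function in `g4f/requests` works on a streamed HTTP response; the `[DONE]` sentinel check below is a common SSE convention and not something the commit mentions.

```python
import json

def iter_sse_json(lines):
    """Yield parsed JSON objects from "data:" lines, skipping blanks and the [DONE] marker."""
    for line in lines:
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if not payload or payload == "[DONE]":
            continue  # keep-alive or end-of-stream, nothing to parse
        try:
            yield json.loads(payload)
        except json.JSONDecodeError as error:
            raise ValueError(f"Invalid JSON in SSE stream: {payload[:80]}") from error

for chunk in iter_sse_json(['data: {"delta": "Hi"}', "data:", "data: [DONE]"]):
    print(chunk)  # -> {'delta': 'Hi'}
```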
hlohaus
f8f8251cee Add model count for FenayAI 2025-08-07 06:34:59 +02:00
hlohaus
b4c3c30766 Fix auth with api_key 2025-08-07 04:39:10 +02:00
hlohaus
b3c1d1f3b1 Check api_key for models 2025-08-07 03:02:50 +02:00
hlohaus
9563f8df3a feat: Update environment variables and modify model mappings
- Added `OPENROUTER_API_KEY` and `AZURE_API_KEYS` to `example.env`.
- Updated `AZURE_DEFAULT_MODEL` to "model-router" in `example.env`.
- Added `AZURE_ROUTES` with multiple model URLs in `example.env`.
- Changed the mapping for `"phi-4-multimodal"` in `DeepInfraChat.py` to `"microsoft/Phi-4-multimodal-instruct"`.
- Added `media` parameter to `GptOss.create_completion` method and raised a `ValueError` if `media` is provided.
- Updated `model_aliases` in `any_model_map.py` to include new mappings for various models.
- Removed several model aliases from `PollinationsAI` in `any_model_map.py`.
- Added new models and updated existing models in `model_map` across various files, including `any_model_map.py` and `__init__.py`.
- Refactored `AnyModelProviderMixin` to include `model_aliases` and updated the logic for handling model aliases.
2025-08-07 01:21:22 +02:00
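A hypothetical sketch of how the new environment variables might be consumed follows; the commit does not show the value formats, so a comma-separated key list and a JSON route map are assumed purely for illustration.

```python
import json
import os

# Assumed formats: comma-separated API keys and a JSON object mapping models to URLs.
api_keys = [key.strip() for key in os.getenv("AZURE_API_KEYS", "").split(",") if key.strip()]
default_model = os.getenv("AZURE_DEFAULT_MODEL", "model-router")
routes = json.loads(os.getenv("AZURE_ROUTES", "{}"))  # e.g. {"model-name": "https://.../endpoint"}

print(default_model, len(api_keys), list(routes))
```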
hlohaus
2c3a437c75 Add FenayAI provider 2025-08-06 22:33:01 +02:00
hlohaus
a62a1b6e71 Update models list 2025-08-06 20:54:32 +02:00
hlohaus
fe79b11070 Fix Azure image 2025-08-06 08:16:27 +02:00
hlohaus
05784da883 Fix PerplexityLabs 2025-08-06 08:06:54 +02:00
hlohaus
80deba1642 fix: add "flux.1-kontext-pro" to vision_models and define image_models
- Append "flux.1-kontext-pro" to the vision_models list in Azure.py
- Introduce the image_models list containing "flux-1.1-pro" and "flux.1-kontext-pro"
- Ensure api_endpoint is checked for null before searching for "/images/"
- No other code modifications or logic changes in this commit
2025-08-06 08:05:12 +02:00
hlohaus
00bd517f27 feat: add image generation support to Azure provider
- Updated `Azure.create_completion` to support media uploads and image generation via `/images/` endpoint
- Added `media` parameter to `Azure.create_completion` and handled image-related request formatting
- Imported `StreamSession`, `FormData`, `raise_for_status`, `get_width_height`, `to_bytes`, `save_response_media`, and `format_media_prompt` in `Azure.py`
- Modified `get_models` to load `AZURE_API_KEYS` from environment and parse it into `cls.api_keys`
- Adjusted `get_width_height` in `image/__init__.py` to return higher default resolutions for "16:9" and "9:16" aspect ratios
- Modified `save_response_media` in `image/copy_images.py` to accept optional `content_type` parameter and use it when provided
- Updated `FormData` class logic in `requests/curl_cffi.py` to define it only when `has_curl_mime` is True and raise an error otherwise
2025-08-06 07:18:48 +02:00
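The `get_width_height` change above can be sketched as a mapping from an aspect-ratio string to default dimensions, with larger values for the wide and tall ratios; the exact pixel values and the real function's signature are not given in the commit, so the numbers here are illustrative.

```python
def get_width_height(aspect_ratio: str = "1:1"):
    """Return (width, height) defaults for an aspect-ratio string."""
    if aspect_ratio == "16:9":
        return 1344, 768   # illustrative values, not the ones used in g4f
    if aspect_ratio == "9:16":
        return 768, 1344
    return 1024, 1024

print(get_width_height("9:16"))  # -> (768, 1344)
```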
hlohaus
e7a1bcdf54 fix: improve message formatting and set default activation for providers
- In PerplexityLabs.py, added logic to filter consecutive assistant messages and update message array accordingly
- Modified PerplexityLabs.py to change "messages" field to use the new formatted message list
- Adjusted error handling in PerplexityLabs.py to include a newline in error messages
- Import os in BlackForestLabs_Flux1KontextDev.py and replace media filename assignment with basename if media is None
- In Groq.py, set "active_by_default" to True for the provider
- In OpenRouter.py, added "active_by_default" as True
- In Together.py, set "active_by_default" to True
- In HuggingFaceInference.py, set "working" to False
- In models.py, changed default_model to "openai/gpt-oss-120b" instead of previous value
- In backend_api.py, added a null check in jsonify_provider_models to return 404 if response is None, and simplified get_provider_models call
2025-08-06 04:46:28 +02:00
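The "filter consecutive assistant messages" step in PerplexityLabs.py can be pictured with a small helper; whether duplicates are dropped or merged is not stated in the commit, so keeping only the most recent assistant turn is assumed here.

```python
def squash_assistant_runs(messages: list) -> list:
    """Collapse consecutive assistant messages, keeping only the most recent one."""
    result = []
    for message in messages:
        if result and message["role"] == "assistant" and result[-1]["role"] == "assistant":
            result[-1] = message  # replace the previous assistant turn
        else:
            result.append(message)
    return result

print(squash_assistant_runs([
    {"role": "assistant", "content": "draft"},
    {"role": "assistant", "content": "final"},
    {"role": "user", "content": "thanks"},
]))
```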
hlohaus
bf285b5665 feat: add GptOss provider and reasoning handling in OpenaiTemplate
- Added new provider `GptOss` in `g4f/Provider/GptOss.py` with support for async message generation via SSE
- Registered `GptOss` in `g4f/Provider/__init__.py`
- Implemented logic in `GptOss.create_async_generator` to handle both new and existing conversations with SSE streaming response handling
- Handled event types including `thread.created`, `thread.item_updated`, and `thread.updated` within `GptOss`
- Modified `read_response` in `OpenaiTemplate.py` to yield `Reasoning` objects using `reasoning_content` or fall back to `reasoning` from `choice["delta"]`
2025-08-06 01:52:21 +02:00
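A sketch of the delta handling described for `OpenaiTemplate.read_response`: prefer `reasoning_content`, fall back to `reasoning`, and pass plain text through. `Reasoning` here is a stand-in dataclass, not g4f's response class.

```python
from dataclasses import dataclass

@dataclass
class Reasoning:
    text: str

def handle_delta(delta: dict):
    """Prefer reasoning_content, fall back to reasoning, otherwise return plain content."""
    reasoning = delta.get("reasoning_content") or delta.get("reasoning")
    if reasoning:
        return Reasoning(reasoning)
    return delta.get("content", "")

print(handle_delta({"reasoning": "Comparing the two options..."}))
```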
hlohaus
7638485ccd Fix OAuth client id in GeminiCLI 2025-08-06 00:03:53 +02:00
H Lohaus
37802e7569
Merge pull request #3114 from hlohaus/5Aug
feat: add GeminiCLI provider and update related auth and config
2025-08-05 22:05:33 +02:00
hlohaus
b2d6dbd173 Fix unittests 2025-08-05 20:03:36 +02:00
hlohaus
50deda5ed5 feat: add GeminiCLI provider and update related auth and config
- Added new `GeminiCLI.py` provider under `g4f/Provider/needs_auth/` with full implementation of Gemini CLI support including OAuth2 handling, SSE streaming, tool calling, and media handling
- Registered `GeminiCLI` in `g4f/Provider/needs_auth/__init__.py`
- Modified `g4f/client/stubs.py`:
  - Removed `serialize_reasoning_content` method
  - Added inline reasoning_content join logic in `model_construct` override
- Updated `Azure.py`:
  - Removed `"stream": False` from `model_extra_body`
  - Added inline `stream = False` assignment when using `model_extra_body`
- Updated `DeepInfra.py`:
  - Added import of `DeepInfraChat`
  - Set `model_aliases` to `DeepInfraChat.model_aliases`
2025-08-05 19:31:30 +02:00
hlohaus
bf9439440e Enable free providers 2025-08-01 12:07:55 +02:00
hlohaus
9eeafff5e4 fix: improve error handling and response processing in Kimi and PollinationsAI
- In g4f/Provider/Kimi.py, added a try-except block around raise_for_status to catch exceptions containing "匿名聊天使用次数超过" and raise MissingAuthError; also included a yield statement for JsonConversation.
- In g4f/Provider/PollinationsAI.py, added a yield statement for Reasoning before the class definition.
- Updated get_image function in PollinationsAI to remove responses.add for the response URL and streamline response handling.
- In the main loop of PollinationsAI, modified response processing to handle exceptions by cancelling tasks and raising errors if conditions are met, or yielding Reasoning with status and progress labels.
- Adjusted responses handling to increment finished count and yield progress Reasoning only when no exception occurs.
2025-08-01 02:05:59 +02:00
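The Kimi error handling can be sketched as a wrapper around the status check: if the failure text contains the anonymous-usage-limit message, re-raise it as an auth error. `MissingAuthError` and the passed-in `raise_for_status` are stand-ins for the g4f helpers, not their actual definitions.

```python
class MissingAuthError(Exception):
    pass

async def checked_raise_for_status(response, raise_for_status) -> None:
    """Re-raise the anonymous-chat usage limit error as an auth error."""
    try:
        await raise_for_status(response)
    except Exception as error:
        if "匿名聊天使用次数超过" in str(error):  # "anonymous chat usage limit exceeded"
            raise MissingAuthError(str(error)) from error
        raise
```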
hlohaus
d4b46f34de fix: correct typo in API section title and update links, and adjust provider aliases
- Changed "Inference API" to "Interference API" and updated corresponding documentation links in README.md
- Removed "o1" and "dall-e-3" entries from Copilot.py model_aliases
- Added "stream" and "extra_body" parameters with default values in Azure.py's create_async_generator method
- In CopilotAccount.py, included model_aliases with "gpt-4", "gpt-4o", "o1", and "dall-e-3"
- Updated conditional for provider comparison from "==" to "in" list in any_provider.py
- Modified g4f/api/__init__.py to set g4f_api_key from environment variable
- In backend_api.py, added "user" field to cached data with default "unknown"
- Changed logic in OpenaiTemplate.py read_response to check if "choice" exists before processing, and cleaned up indentation and conditionals in response parsing
- Removed unnecessary "stop" and "prompt" parameters from comments or unused code in OpenaiTemplate.py
- Tightened the check for "provider" comparison in any_provider.py to handle multiple providers properly
2025-08-01 00:18:29 +02:00
hlohaus
f246e7cfa8 fix: improve error messaging and handling in get_provider_models
- Updated error message formatting in `get_provider_models` call within `Backend_Api` class
- Changed `MissingAuthError` handling to include exception type name in response
- Added generic `Exception` catch to handle unexpected errors with HTTP 500 response
- Modified `backend_api.py` file in `g4f/gui/server` directory
- Ensured all returned error messages use consistent structure with exception type and message
2025-07-29 20:35:21 +02:00
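Together with the 401 handling added in the 2025-07-27 commit further down, the backend error path can be sketched as a Flask route; the endpoint path, payload shape, and stand-in classes here are illustrative rather than g4f's actual backend_api code.

```python
from flask import Flask, jsonify

app = Flask(__name__)

class MissingAuthError(Exception):
    pass

def get_provider_models(provider: str) -> list:
    # Stand-in for the real lookup: pretend only "demo" works without a key.
    if provider != "demo":
        raise MissingAuthError("api_key required")
    return [{"model": "demo-model"}]

@app.route("/backend-api/v2/models/<provider>")
def provider_models(provider: str):
    try:
        return jsonify(get_provider_models(provider))
    except MissingAuthError as error:
        return jsonify({"error": {"message": f"{type(error).__name__}: {error}"}}), 401
    except Exception as error:
        return jsonify({"error": {"message": f"{type(error).__name__}: {error}"}}), 500
```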
hlohaus
499dcc0154 refactor: replace see_stream with sse_stream and update md2html output logic
- Replaced all imports and usages of `see_stream` with `sse_stream` across:
  - `g4f/Provider/Kimi.py`
  - `g4f/Provider/hf_space/BlackForestLabs_Flux1KontextDev.py`
  - `g4f/Provider/needs_auth/PuterJS.py`
  - `g4f/Provider/template/OpenaiTemplate.py`
  - `g4f/requests/__init__.py` (renamed function `see_stream` to `sse_stream`)
- Modified `g4f/Provider/needs_auth/GeminiPro.py`:
  - Updated `default_model` from `gemini-2.5-flash-preview-04-17` to `gemini-2.5-flash`
  - Removed `gemini-2.5-flash-preview-04-17` from `fallback_models`
- Updated `etc/tool/md2html.py`:
  - Added `re` import
  - Changed `process_single_file_with_output` to check if output file exists
  - If exists, uses regex to update `<title>` and `itemprop="text">` content instead of writing full template
  - If not, generates HTML using the template as before
2025-07-29 19:57:13 +02:00
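The md2html behaviour described above (rewrite only parts of an existing output file instead of re-rendering the whole template) can be sketched like this for the `<title>` element; the `itemprop="text"` body would be updated with a similar substitution, and the pattern and file handling are illustrative.

```python
import re
from pathlib import Path

def update_existing_title(path: Path, title: str) -> None:
    """Replace the <title> element of an already generated HTML file in place."""
    html = path.read_text(encoding="utf-8")
    # A callable replacement avoids backslash-escape handling in the new title text.
    html = re.sub(r"<title>.*?</title>", lambda _: f"<title>{title}</title>", html, flags=re.S)
    path.write_text(html, encoding="utf-8")
```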
hlohaus
f83c92446e fix: update provider status, models, error handling, and imports
- Set `working = False` in Free2GPT, Startnest, and Reka providers
- Changed `default_model` in LambdaChat from `deepseek-v3-0324` to `deepseek-r1`
- Removed `deepseek-v3` alias from LambdaChat's `model_aliases`
- In Kimi provider:
  - Replaced manual status check with `await raise_for_status(response)`
  - Set `model` field to `"k2"` in chat completion request
  - Removed unused `pass` statement
- In WeWordle provider:
  - Removed `**kwargs` from `data_payload` construction
- In Reka provider:
  - Set default value for `stream` to `True`
  - Modified `get_cookies` call to use `cache_result=False`
- In `cli/client.py`:
  - Added conditional import for `MarkItDown` with `has_markitdown` flag
  - Raised `MissingRequirementsError` if `MarkItDown` is not installed
- In `gui/server/backend_api.py`:
  - Imported `MissingAuthError`
  - Wrapped `get_provider_models` call in try-except block to return 401 if `MissingAuthError` is raised
2025-07-27 18:03:54 +02:00
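The conditional MarkItDown import in `cli/client.py` follows a common optional-dependency pattern; a minimal sketch is below, where `MissingRequirementsError` and the convert helper are simplified stand-ins for the g4f code.

```python
try:
    from markitdown import MarkItDown
    has_markitdown = True
except ImportError:
    has_markitdown = False

class MissingRequirementsError(Exception):
    pass

def read_document(path: str) -> str:
    """Convert a document to text, failing clearly if markitdown is not installed."""
    if not has_markitdown:
        raise MissingRequirementsError('Install the "markitdown" package to read documents')
    return MarkItDown().convert(path).text_content
```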
hlohaus
8892b00ac1 Add Kimi provider, add vision support to LMArenaBeta 2025-07-25 16:43:06 +02:00
hlohaus
91b658dbb1 Disable PenguinAI provider
Remove codegeneration.ai api_key
2025-07-21 17:35:55 +02:00
hlohaus
0b6d8e62a1 Fix Python compatibility 2025-07-19 16:13:44 +02:00
hlohaus
be46ba6025 Fix CLI client 2025-07-19 15:46:37 +02:00
Tekky
74080a817f
. 2025-07-18 14:25:59 +02:00
hlohaus
a3e02ddf39 Add asia 2025-07-18 03:33:16 +02:00
hlohaus
d3f978e095 Add asia 2025-07-18 03:23:03 +02:00
hlohaus
3161f70c29 Add flags 2025-07-18 02:58:06 +02:00
hlohaus
645b64af35 Update gemini models 2025-07-17 22:55:49 +02:00
hlohaus
46817f7bad Fix YouTube provider download 2025-07-17 19:08:11 +02:00
H Lohaus
72413043fd
Merge pull request #3088 from vuthaihoc/0.5.7-custom
keep api_key for next provider when use retry_provider
2025-07-17 08:47:20 +02:00
vuthaihoc
c510d9b86e keep api_key for next provider when use retry_provider 2025-07-17 13:33:09 +07:00