Commit graph

3 commits

hlohaus
1edd0fff17 Refactor model handling and improve timeout functionality
- Removed empty string mapping from `model_map` in `AnyModelProviderMixin`.
- Updated `clean_name` function to exclude 'chat' from version patterns.
- Added `stream_timeout` parameter to `AsyncGeneratorProvider` for more flexible timeout handling.
- Enhanced chunk yielding in `AsyncAuthedProvider` to support `stream_timeout`, allowing for better control over asynchronous responses.
2025-09-04 18:11:05 +02:00
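
The `stream_timeout` change described above comes down to bounding the wait for each chunk rather than for the whole response. A minimal sketch of that idea, assuming an asyncio-based async generator (the helper name and signature here are illustrative, not the actual g4f provider code):

```python
import asyncio
from typing import AsyncIterator, Optional

async def yield_with_timeout(
    chunks: AsyncIterator[str],
    stream_timeout: Optional[float] = None,
) -> AsyncIterator[str]:
    """Re-yield chunks, failing fast if any single chunk stalls too long (sketch)."""
    iterator = chunks.__aiter__()
    while True:
        try:
            if stream_timeout is None:
                chunk = await iterator.__anext__()
            else:
                # Bound the wait per chunk, not for the whole response.
                chunk = await asyncio.wait_for(iterator.__anext__(), stream_timeout)
        except StopAsyncIteration:
            break
        yield chunk
```

With a per-chunk bound, a stalled provider raises `asyncio.TimeoutError` (an alias of the built-in `TimeoutError` on Python 3.11+) instead of hanging the stream indefinitely.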
Oppon
bd87f40ca5 Enable reasoning model on GLM provider
2025-08-09 17:35:08 -03:00
hlohaus
b9dfe1a460 feat: add EasyChat and GLM providers, update HuggingFace and SSE parsing
- Added new `EasyChat` provider (`g4f/Provider/EasyChat.py`) with captcha handling, nodriver callback, and token caching
- Added new `GLM` provider (`g4f/Provider/GLM.py`) with model retrieval, auth token fetch, and SSE streaming support
- Updated `g4f/Provider/__init__.py` to import `EasyChat` and `GLM`
- Modified `LMArenaBeta` in `g4f/Provider/needs_auth/LMArenaBeta.py` to remove nodriver availability check and always use `get_args_from_nodriver` with callback
- Updated `HuggingFaceAPI` in `g4f/Provider/needs_auth/hf/HuggingFaceAPI.py` to use `default_model` from `models` instead of `default_llama_model` and removed commented `max_inputs_lenght` param
- Updated `HuggingFace` in `g4f/Provider/needs_auth/hf/__init__.py` to import `default_model` instead of `default_vision_model`, set `default_model` class attribute, and commented out HuggingFaceInference and image model handling logic
- Modified `OpenaiTemplate` in `g4f/Provider/template/OpenaiTemplate.py` to prefer `"name"` over `"id"` when populating `vision_models`, `models`, and `models_count`
- Enhanced `sse_stream` in `g4f/requests/__init__.py` to strip and skip empty `data:` lines, handle JSON decode errors, and raise `ValueError` on invalid JSON
2025-08-08 01:19:32 +02:00
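
The `sse_stream` hardening described in the last bullet amounts to tolerant parsing of `data:` lines: strip whitespace, skip empty payloads, and surface malformed JSON as a `ValueError`. A minimal sketch under those assumptions (not the actual `g4f/requests/__init__.py` implementation):

```python
import json
from typing import Any, AsyncIterator

async def sse_stream(lines: AsyncIterator[bytes]) -> AsyncIterator[Any]:
    """Yield parsed JSON objects from an SSE byte stream (illustrative sketch)."""
    async for raw in lines:
        line = raw.decode(errors="ignore").strip()
        if not line.startswith("data:"):
            continue  # ignore comments, event fields, and keep-alive blanks
        payload = line[len("data:"):].strip()
        if not payload:
            continue  # skip empty `data:` lines
        try:
            yield json.loads(payload)
        except json.JSONDecodeError as e:
            raise ValueError(f"Invalid JSON in SSE data: {payload[:80]!r}") from e
```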