gpt4free/g4f/Provider/hf/__init__.py
H Lohaus 0a070bdf10
feat: introduce AnyProvider & LM Arena, overhaul model/provider logic (#2925)
* feat: introduce AnyProvider & LM Arena, overhaul model/provider logic

- **Provider additions & removals**
  - Added `Provider/LMArenaProvider.py` with full async stream implementation and vision model support
  - Registered `LMArenaProvider` in `Provider/__init__.py`; removed old `hf_space/LMArenaProvider.py`
  - Created `providers/any_provider.py`; registers `AnyProvider` dynamically in `Provider`
- **Provider framework enhancements**
  - `providers/base_provider.py`
    - Added `video_models` and `audio_models` attributes
  - `providers/retry_provider.py`
    - Introduced `is_content()` helper; now treats `AudioResponse` as stream content
- **Cloudflare provider refactor**
  - `Provider/Cloudflare.py`
    - Re‑implemented `get_models()` with `read_models()` helper, `fallback_models`, robust nodriver/curl handling and model‑name cleaning
- **Other provider tweaks**
  - `Provider/Copilot.py` – removed `"reasoning"` alias and initial `setOptions` WS message
  - `Provider/PollinationsAI.py` & `PollinationsImage.py`
    - Converted `audio_models` from list to dict, adjusted usage checks and labels
  - `Provider/hf/__init__.py` – applies `model_aliases` remap before dispatch
  - `Provider/hf_space/DeepseekAI_JanusPro7b.py` – now merges media before upload
  - `needs_auth/Gemini.py` – dropped obsolete Gemini model entries
  - `needs_auth/GigaChat.py` – added lowercase `"gigachat"` alias
- **API & client updates**
  - Replaced `ProviderUtils` with new `Provider` map usage throughout API and GUI server
  - Integrated `AnyProvider` as default fallback in `g4f/client` sync & async flows
  - API endpoints now return counts of providers per model and filter by `x_ignored` header
- **GUI improvements**
  - Updated JS labels with emoji icons, provider ignore logic, model count display
- **Model registry**
  - Renamed base model `"GigaChat:latest"` ➜ `"gigachat"` in `models.py`
- **Miscellaneous**
  - Added audio/video flags to GUI provider list
  - Tightened error propagation in `retry_provider.raise_exceptions`
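The `is_content()` helper introduced in `retry_provider.py` can be sketched roughly as follows. The class names below are stand-ins (the real response types live in `g4f/providers/response.py`), and the exact signature is an assumption:

```python
class ImageResponse:
    """Stand-in for g4f's ImageResponse."""

class AudioResponse:
    """Stand-in for g4f's AudioResponse, now counted as stream content."""

def is_content(chunk) -> bool:
    # Text and media responses count as real stream content;
    # control/meta chunks (or anything else) do not.
    return isinstance(chunk, (str, ImageResponse, AudioResponse))
```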

* Fix unittests

* fix: handle None conversation when accessing provider-specific data

- Modified `AnyProvider` class in `g4f/providers/any_provider.py`
- Updated logic to check if `conversation` is not None before accessing `provider.__name__` attribute
- Wrapped `getattr(conversation, provider.__name__, None)` block in an additional `if conversation is not None` condition
- Changed `setattr(conversation, provider.__name__, chunk)` to use `chunk.get_dict()` instead of the object directly
- Ensured consistent use of `JsonConversation` when modifying or assigning `conversation` data
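A minimal sketch of the guarded pattern described above, using simplified stand-ins for `JsonConversation` and the provider class (the helper names here are illustrative, not g4f's actual API):

```python
class JsonConversation:
    """Simplified stand-in for g4f's JsonConversation."""
    def __init__(self, **data):
        self.__dict__.update(data)

    def get_dict(self) -> dict:
        return dict(self.__dict__)

class SomeProvider:
    """Hypothetical provider class; only its __name__ is used."""

def read_state(conversation, provider):
    # Guard: only access provider-specific data when a conversation exists.
    if conversation is not None:
        return getattr(conversation, provider.__name__, None)
    return None

def store_state(conversation, provider, chunk: JsonConversation):
    # Store a plain dict snapshot rather than the object itself.
    setattr(conversation, provider.__name__, chunk.get_dict())
```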

* feat: add provider string conversion & update IterListProvider call

- In `g4f/client/__init__.py`, within both `Completions` and `AsyncCompletions`, added a check to convert the provider from a string via `convert_to_provider(provider)` when applicable.
- In `g4f/providers/any_provider.py`, removed the second argument (`False`) from the `IterListProvider` constructor call in the async for loop.
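The string-conversion check amounts to the pattern below. `convert_to_provider` exists in g4f, but the registry shape and the `resolve_provider` wrapper here are assumptions for illustration:

```python
# Assumed registry mapping provider names to provider classes.
PROVIDER_MAP = {}

def convert_to_provider(name: str):
    # Look the provider up by name; unknown names are an error.
    if name not in PROVIDER_MAP:
        raise ValueError(f"Provider not found: {name}")
    return PROVIDER_MAP[name]

def resolve_provider(provider):
    # Accept either a provider class or its name as a string,
    # mirroring the check added to Completions/AsyncCompletions.
    if isinstance(provider, str):
        provider = convert_to_provider(provider)
    return provider
```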

---------

Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-04-18 14:10:51 +02:00


from __future__ import annotations

import random

from ...typing import AsyncResult, Messages
from ...providers.response import ImageResponse
from ...errors import ModelNotSupportedError, MissingAuthError
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .HuggingChat import HuggingChat
from .HuggingFaceAPI import HuggingFaceAPI
from .HuggingFaceInference import HuggingFaceInference
from .HuggingFaceMedia import HuggingFaceMedia
from .models import model_aliases, vision_models, default_vision_model
from ... import debug

class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://huggingface.co"
    login_url = "https://huggingface.co/settings/tokens"
    working = True
    supports_message_history = True

    @classmethod
    def get_models(cls, **kwargs) -> list[str]:
        if not cls.models:
            cls.models = HuggingFaceInference.get_models()
            cls.image_models = HuggingFaceInference.image_models
        return cls.models

    model_aliases = model_aliases
    vision_models = vision_models
    default_vision_model = default_vision_model

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        **kwargs
    ) -> AsyncResult:
        # Remap model aliases before dispatching to a backend.
        if model in cls.model_aliases:
            model = cls.model_aliases[model]
        # For plain text requests, randomly try the inference backend first.
        if "tools" not in kwargs and "media" not in kwargs and random.random() >= 0.5:
            try:
                is_started = False
                async for chunk in HuggingFaceInference.create_async_generator(model, messages, **kwargs):
                    if isinstance(chunk, (str, ImageResponse)):
                        is_started = True
                    yield chunk
                if is_started:
                    return
            except Exception as e:
                # Re-raise mid-stream failures; otherwise log and fall through.
                if is_started:
                    raise e
                debug.error(f"{cls.__name__} {type(e).__name__}; {e}")
        if not cls.image_models:
            cls.get_models()
        try:
            async for chunk in HuggingFaceMedia.create_async_generator(model, messages, **kwargs):
                yield chunk
            return
        except ModelNotSupportedError:
            pass
        if model in cls.image_models:
            if "api_key" not in kwargs:
                async for chunk in HuggingChat.create_async_generator(model, messages, **kwargs):
                    yield chunk
            else:
                async for chunk in HuggingFaceInference.create_async_generator(model, messages, **kwargs):
                    yield chunk
            return
        try:
            async for chunk in HuggingFaceAPI.create_async_generator(model, messages, **kwargs):
                yield chunk
        except (ModelNotSupportedError, MissingAuthError):
            async for chunk in HuggingFaceInference.create_async_generator(model, messages, **kwargs):
                yield chunk