Mirror of https://github.com/xtekky/gpt4free.git (synced 2025-12-06 02:30:41 -08:00)
refactor: remove deprecated providers, rename LMArenaProvider, update LMArena and models
- Deleted multiple deprecated providers, including Acytoo, AiAsk, AiService, Aibn, Aivvm, Berlin, ChatAnywhere, ChatgptDuo, CodeLinkAva, Cromicle, DfeHub, EasyChat, FakeGpt, FastGpt, Forefront, GPTalk, GeekGpt, GetGpt, H2o, Hashnode, Myshell, NoowAi, Opchatgpts, OpenAssistant, V50, Vitalentum, VoiGpt, Wewordle, Wuguokai, Ylokh, Yqcloud, and the corresponding deprecated/__init__.py
- Renamed LMArenaProvider.py to LMArena.py and incorporated its functionality with enhancements: updated model lists and aliases, comprehensive model discovery, payload building, and an asynchronous generator for completions (an async-generator sketch follows this list)
- Removed LMArenaProvider import and added LMArena import in Provider/__init__.py
- Modified Blackbox provider:
  - Removed the generate_session_data method and updated generate_session to use a fixed email
  - Updated the session payload in the send request to call generate_session without an email argument
  - Added an asyncMode flag, set to False, to the session payload
- In DeepInfraChat, removed model aliases for "llama-4-maverick-17b" and "llama-4-scout-17b"
- In PollinationsAI, updated model aliases: replaced "command-r-plus-08-2024" with "command-r-plus"; added "gpt-image" and "grok-3-mini" aliases
- In LambdaChat, added a "llama-3.3-70b" alias mapping to "llama3.3-70b-instruct" (the alias-resolution pattern is sketched after this list)
- In hf_space:
  - Deleted the Qwen_QVQ_72B and Voodoohop_Flux1Schnell providers
  - Fixed the model alias key in Qwen_Qwen_2_5_Max from "qwen-2-5-max" to "qwen-2.5-max"
  - Changed the model alias in StabilityAI_SD35Large from "stable-diffusion-3.5-large" to "sd-3.5-large"
  - Removed imports of the deleted providers in hf_space/__init__.py and updated defaults accordingly
- In BingCreateImages, switched to a relative import from .bing.create_images
- Moved the bing directory into the needs_auth directory and updated imports accordingly
- Changed PuterJS provider:
  - Changed the working flag from True to False
  - Changed the return_conversation default from False to True in create_async_generator
  - Changed error handling for authentication and rate limits from yielding error messages to raising exceptions (see the error-handling sketch after this list)
- Modified models.py:
  - Added a ModelRegistry class for dynamic registration and lookup of Model instances
  - Modified the Model dataclass to auto-register instances on initialization via ModelRegistry (see the registry sketch after this list)
  - Adjusted imports and removed PuterJS from provider lists and best_provider assignments
  - Replaced many best_provider references to PuterJS with LMArena or IterListProvider, including core models such as gpt-3.5-turbo, gpt-4, gpt-4o, the Llama series, Mistral, Hermes, Microsoft Phi, Gemini, Anthropic Claude, Cohere, Qwen, DeepSeek, and others
  - Fixed aliases and model names (e.g., "qwen-2-5-max" to "qwen-2.5-max")
  - Removed outdated or deprecated model definitions referencing PuterJS
- Updated the HarProvider label from "LM Arena" to "LM Arena (Har)"
- Removed deprecated provider imports from Provider/__init__.py and updated the not_working directory imports accordingly
- Functions and files affected: create_async_generator in the Blackbox, LMArena, PollinationsAI, LambdaChat, and PuterJS providers; model aliases and model definitions in models.py; the Provider package __init__.py files; the BingCreateImages import; and the deletion of numerous deprecated and not_working providers.
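To make the asynchronous-generator shape of the new LMArena provider concrete, here is a minimal, self-contained sketch. The endpoint URL, payload fields, streaming format, and the single alias entry are placeholders inferred from the description above, not LMArena's actual protocol or the provider's real code.

```python
from __future__ import annotations

from typing import AsyncIterator

import aiohttp


class LMArenaSketch:
    """Illustrative stand-in for the renamed LMArena provider (not the real class)."""

    url = "https://example.invalid/api/chat"  # placeholder endpoint
    default_model = "gpt-4o"
    model_aliases = {"qwen-2.5-max": "qwen2.5-max"}  # assumed alias -> upstream name

    @classmethod
    def get_model(cls, model: str) -> str:
        model = model or cls.default_model
        return cls.model_aliases.get(model, model)

    @classmethod
    async def create_async_generator(
        cls, model: str, messages: list[dict], **kwargs
    ) -> AsyncIterator[str]:
        payload = {
            "model": cls.get_model(model),
            "messages": messages,
            "stream": True,
        }
        async with aiohttp.ClientSession() as session:
            async with session.post(cls.url, json=payload) as response:
                response.raise_for_status()
                # Yield decoded chunks as they arrive instead of buffering
                # the whole completion.
                async for chunk in response.content.iter_any():
                    if chunk:
                        yield chunk.decode(errors="ignore")
```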
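Several bullets above adjust model_aliases dictionaries; the pattern is a plain mapping from the public model name to the name the upstream API expects. Only the LambdaChat pair is fully specified in this commit message, so the other values below are hypothetical placeholders that simply show the corrected keys.

```python
# Public model name -> provider-specific model name.
model_aliases = {
    "llama-3.3-70b": "llama3.3-70b-instruct",  # LambdaChat (from this commit)
    "qwen-2.5-max": "qwen-2.5-max-upstream",   # corrected key; real value not given here
    "sd-3.5-large": "sd-3.5-large-upstream",   # corrected key; real value not given here
}


def resolve_model(requested: str, default: str) -> str:
    """Translate a requested public name into the provider-specific one."""
    name = requested or default
    return model_aliases.get(name, name)
```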
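The PuterJS change from yielding error strings to raising exceptions can be sketched as follows; the exception class names are illustrative placeholders rather than the exact error types used in g4f.

```python
class MissingAuthError(Exception):
    """Raised when the upstream API rejects the provided credentials."""


class RateLimitError(Exception):
    """Raised when the upstream API reports too many requests."""


def raise_for_provider_error(status: int, body: str) -> None:
    # Previously the provider would `yield f"Error: {body}"`, hiding failures
    # inside the normal text stream; raising makes them explicit to callers.
    if status in (401, 403):
        raise MissingAuthError(f"Authentication failed: {body}")
    if status == 429:
        raise RateLimitError(f"Rate limit exceeded: {body}")
```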
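Finally, the ModelRegistry plus auto-registering Model dataclass described for models.py can be illustrated with a minimal sketch; field names such as base_provider and best_provider are assumptions based on the commit description, not the exact implementation.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Dict, Optional


class ModelRegistry:
    """Global lookup of Model instances by name."""

    _models: Dict[str, "Model"] = {}

    @classmethod
    def register(cls, model: "Model") -> None:
        cls._models[model.name] = model

    @classmethod
    def get(cls, name: str) -> Optional["Model"]:
        return cls._models.get(name)

    @classmethod
    def all(cls) -> Dict[str, "Model"]:
        return dict(cls._models)


@dataclass
class Model:
    name: str
    base_provider: str = ""       # assumed field name
    best_provider: object = None  # assumed field name

    def __post_init__(self) -> None:
        # Auto-register every instance so it can be looked up by name
        # without maintaining a separate mapping by hand.
        ModelRegistry.register(self)


# Defining a model is enough to make it discoverable through the registry.
gpt_4o = Model(name="gpt-4o", base_provider="OpenAI")
assert ModelRegistry.get("gpt-4o") is gpt_4o
```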
This commit: 4f2bf3048b (parent: f65617ee1e)
66 changed files with 1157 additions and 4588 deletions
g4f/models.py: 1325 changes (file diff suppressed because it is too large)