Mirror of https://github.com/xtekky/gpt4free.git, synced 2025-12-06 02:30:41 -08:00
Standardize model configurations and enhance error handling (#2818)
- feat(docs/providers-and-models.md): add TypeGPT provider and update model support
- feat(g4f/models.py): add TypeGPT provider and enhance model configurations
- refactor(g4f/Provider/hf_space/BlackForestLabs_Flux1Dev.py): update model aliases and image models definition
- refactor(g4f/Provider/hf_space/BlackForestLabs_Flux1Schnell.py): adjust model configuration and aliases
- refactor(g4f/Provider/hf_space/CohereForAI_C4AI_Command.py): update model configuration and aliases
- refactor(g4f/Provider/hf_space/DeepseekAI_JanusPro7b.py): reorganize model classification attributes
- feat(g4f/Provider/hf_space/Microsoft_Phi_4.py): add model aliases and update vision model handling
- refactor(g4f/Provider/hf_space/Qwen_QVQ_72B.py): restructure model configuration and aliases
- feat(g4f/Provider/hf_space/Qwen_Qwen_2_5M_Demo.py): add model alias support for Qwen provider
- refactor(g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py): derive model list from aliases
- refactor(g4f/Provider/hf_space/StabilityAI_SD35Large.py): adjust model list definitions using aliases
- fix(g4f/Provider/hf_space/Voodoohop_Flux1Schnell.py): correct image_models definition in Voodoohop provider
- feat(g4f/Provider/DDG.py): enhance request handling and error recovery
- feat(g4f/Provider/DeepInfraChat.py): update model configurations and request handling
- feat(g4f/Provider/TypeGPT.py): enhance TypeGPT API client with media support and async capabilities
- refactor(docs/providers-and-models.md): remove streaming column from provider tables
- Update docs/providers-and-models.md
- added(g4f/Provider/hf_space/Qwen_Qwen_2_5_Max.py): new provider
- Update g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py
- added(g4f/Provider/hf_space/Qwen_Qwen_2_5.py): new provider
- Update g4f/Provider/DeepInfraChat.py g4f/Provider/TypeGPT.py
- Update g4f/Provider/LambdaChat.py
- Update g4f/Provider/DDG.py
- Update g4f/Provider/DeepInfraChat.py
- Update g4f/Provider/TypeGPT.py
- Add audio generation model and update documentation
- Update providers-and-models documentation and include ARTA in flux best providers
- Update ARTA provider details and adjust documentation for image models
- Remove redundant text_models assignment in LambdaChat provider

Co-authored-by: kqlio67 <>
This commit is contained in: parent 3cbcbe1047, commit 97f1964bb6.
20 changed files with 657 additions and 274 deletions
# G4F - Providers and Models
This document provides an overview of various AI providers and models, including text generation, image generation, and vision capabilities. It aims to help users navigate the diverse landscape of AI services and choose the most suitable option for their needs.
> **Note**: See our [Authentication Guide](authentication.md) for provider authentication instructions.
## Table of Contents
- [Providers](#providers)
  - [No auth required](#providers-not-needs-auth)
---
### Providers No auth required
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[playground.allenai.org](https://playground.allenai.org)|No auth required|`g4f.Provider.AllenAI`|`tulu-3-405b, olmo-2-13b, tulu-3-1-8b, tulu-3-70b, olmoe-0125`|❌|❌|❌||
|[ai-arta.com](https://ai-arta.com)|No auth required|`g4f.Provider.ARTA`|❌|✔ _**(17+)**_|❌|❌||
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gpt-4o-mini, deepseek-chat, deepseek-v3, deepseek-r1, gpt-4o, o1, o3-mini, claude-3.7-sonnet` _**(40+)**_|`flux`|❌|`blackboxai, gpt-4o, o1, o3-mini, deepseek-v3` _**(7+)**_||
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|❌||
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(7+)**_|❌|❌|❌||
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|❌||
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|❌||
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, o1`|❌|❌|❌||
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, llama-3.3-70b, claude-3-haiku, o3-mini, mixtral-small-24b`|❌|❌|❌||
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-24b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b, yi-34b, qwen-2-72b, dolphin-2.6, dolphin-2.9, dbrx-instruct, airoboros-70b, lzlv-70b, wizardlm-2-7b, mixtral-8x22b, minicpm-2.5`|❌|❌|`llama-3.2-90b, minicpm-2.5`||
|[dynaspark.onrender.com](https://dynaspark.onrender.com)|No auth required|`g4f.Provider.Dynaspark`|`gemini-1.5-flash, gemini-2.0-flash`|❌|❌|`gemini-1.5-flash, gemini-2.0-flash`||
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌||
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌||
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|❌||
|[glider.so](https://glider.so)|No auth required|`g4f.Provider.Glider`|`llama-3.1-70b, llama-3.1-8b, llama-3.2-3b, deepseek-r1`|❌|❌|❌||
|[goabror.uz](https://goabror.uz)|No auth required|`g4f.Provider.Goabror`|`gpt-4`|❌|❌|❌||
|[hailuo.ai](https://www.hailuo.ai)|No auth required|`g4f.Provider.HailuoAI`|`MiniMax` _**(1+)**_|❌|❌|❌||
|[editor.imagelabs.net](https://editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|`sdxl-turbo`|❌|❌||
|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b, command-a`|`flux-dev, flux-schnell, sd-3.5`|❌|❌||
|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`claude-3-haiku, claude-3-opus, claude-3.5-sonnet, deepseek-r1, deepseek-chat, gemini-exp, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash-thinking, gpt-4, gpt-4o, gpt-4o-mini, llama-3-70b, llama-3-8b, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.2-90b, llama-3.3-70b, mixtral-8x7b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b`|❌|❌|❌||
|[lambda.chat](https://lambda.chat)|No auth required|`g4f.Provider.LambdaChat`|`deepseek-v3, deepseek-r1, hermes-3, nemotron-70b, llama-3.3-70b`|❌|❌|❌||
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`claude-3.5-sonnet, claude-3.7-sonnet, claude-3.7-sonnet-thinking, claude-3-opus, claude-3-sonnet, deepseek-r1, deepseek-v3, gemini-2.0-flash, gemini-2.0-flash-thinking, gemini-2.0-pro, gpt-4, gpt-4o, gpt-4o-mini, grok-3, grok-3-r1, o3-mini`|❌|❌|❌||
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini, deepseek-v3`|❌|❌|`gpt-4o-mini`||
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro`|❌|❌|❌||
|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|❌||
|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|❌||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o-mini, gpt-4o, o1-mini, qwen-2.5-coder-32b, llama-3.3-70b, mistral-nemo, llama-3.1-8b, deepseek-r1, phi-4` _**(9+)**_|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|`gpt-4o-audio`|`gpt-4o, gpt-4o-mini, o1-mini, o3-mini`||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsImage`|❌|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|❌|❌||
|[teach-anything.com](https://www.teach-anything.com)|No auth required|`g4f.Provider.TeachAnything`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌||
|[chat.typegpt.net](https://chat.typegpt.net)|No auth required|`g4f.Provider.TypeGPT`|`gpt-3.5-turbo, o3-mini, deepseek-r1, deepseek-v3, evil, o1`|❌|❌|`gpt-3.5-turbo, o3-mini`||
|[you.com](https://you.com)|[Manual cookies](https://you.com)|`g4f.Provider.You`|✔|✔|❌|✔||
|[websim.ai](https://websim.ai)|No auth required|`g4f.Provider.Websim`|`gemini-1.5-pro, gemini-1.5-flash`|`flux`|❌|❌||
|[chat9.yqcloud.top](https://chat9.yqcloud.top)|No auth required|`g4f.Provider.Yqcloud`|`gpt-4`|✔|❌|❌||

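The capability columns above can also be read programmatically. A minimal sketch in plain Python (the records and the `candidates` helper are hypothetical illustrations built from a few table rows, not part of g4f's API):

```python
# Each record mirrors one row of the "No auth required" table:
# the provider class name plus which capability columns are non-empty.
providers = [
    {"name": "DDG", "text": True, "image": False, "vision": False},
    {"name": "PollinationsAI", "text": True, "image": True, "vision": True},
    {"name": "ImageLabs", "text": False, "image": True, "vision": False},
    {"name": "Blackbox", "text": True, "image": True, "vision": True},
]

def candidates(need):
    """Return provider names whose row satisfies every required capability."""
    return [p["name"] for p in providers if all(p[cap] for cap in need)]

# Providers usable for image-upload chat (text + vision):
print(candidates({"text", "vision"}))  # -> ['PollinationsAI', 'Blackbox']
```

The same filtering idea extends to the full table; the returned names correspond to `g4f.Provider.<name>` classes.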
---
### Providers HuggingFace
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[huggingface.co/chat](https://huggingface.co/chat)|[Manual cookies](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, deepseek-r1, qwq-32b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev, flux-schnell`|❌|❌||
|[huggingface.co/chat](https://huggingface.co/chat)|[API key / Cookies](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFace`|✔ _**(47+)**_|✔ _**(9+)**_|❌|❌||
|[api-inference.huggingface.co](https://api-inference.huggingface.co)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFaceAPI`|✔ _**(9+)**_|✔ _**(2+)**_|❌|✔ _**(1+)**_||

---
### Providers HuggingSpace
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Dev`|❌|`flux, flux-dev`|❌|❌||
|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Schnell`|❌|`flux, flux-schnell`|❌|❌||
|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.CohereForAI_C4AI_Command`|`command-r, command-r-plus, command-r7b`|❌|❌|❌||
|[huggingface.co/spaces/deepseek-ai/Janus-Pro-7B](https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.DeepseekAI_Janus_Pro_7b`|✔|✔|❌|❌||
|[roxky-flux-1-dev.hf.space](https://roxky-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.G4F`|✔ _**(1+)**_|✔ _**(4+)**_|❌|✔ _**(1+)**_||
|[microsoft-phi-4-multimodal.hf.space](https://microsoft-phi-4-multimodal.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Microsoft_Phi_4`|`phi-4`|❌|❌|`phi-4`||
|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_QVQ_72B`|`qvq-72b`|❌|❌|❌||
|[qwen-qwen2-5.hf.space](https://qwen-qwen2-5.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5`|`qwen-2.5`|❌|❌|❌||
|[qwen-qwen2-5-1m-demo.hf.space](https://qwen-qwen2-5-1m-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5M`|`qwen-2.5-1m`|❌|❌|❌||
|[qwen-qwen2-5-max-demo.hf.space](https://qwen-qwen2-5-max-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5_Max`|`qwen-2-5-max`|❌|❌|❌||
|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_72B`|`qwen-2-72b`|❌|❌|❌||
|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.StabilityAI_SD35Large`|❌|`sd-3.5`|❌|❌||
|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Voodoohop_Flux1Schnell`|❌|`flux, flux-schnell`|❌|❌||

---
### Providers Local
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[]( )|No auth required|`g4f.Provider.Local`|✔|❌|❌|❌||
|[ollama.com](https://ollama.com)|No auth required|`g4f.Provider.Ollama`|✔|❌|❌|❌||

---
### Providers MiniMax
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[hailuo.ai/chat](https://www.hailuo.ai/chat)|[Get API key](https://intl.minimaxi.com/user-center/basic-information/interface-key)|`g4f.Provider.MiniMax`|`MiniMax` _**(1+)**_|❌|❌|❌||

---
### Providers Needs Auth
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[console.anthropic.com](https://console.anthropic.com)|[Get API key](https://console.anthropic.com/settings/keys)|`g4f.Provider.Anthropic`|✔ _**(8+)**_|❌|❌|❌||
|[bing.com/images/create](https://www.bing.com/images/create)|[Manual cookies](https://www.bing.com)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌||
|[cablyai.com/chat](https://cablyai.com/chat)|[Get API key](https://cablyai.com)|`g4f.Provider.CablyAI`|✔|✔|❌|✔||
|[inference.cerebras.ai](https://inference.cerebras.ai/)|[Get API key](https://cloud.cerebras.ai)|`g4f.Provider.Cerebras`|✔ _**(3+)**_|❌|❌|❌||
|[copilot.microsoft.com](https://copilot.microsoft.com)|[Manual cookies](https://copilot.microsoft.com)|`g4f.Provider.CopilotAccount`|✔ _**(1+)**_|✔ _**(1+)**_|❌|✔ _**(1+)**_||
|[deepinfra.com](https://deepinfra.com)|[Get API key](https://deepinfra.com/dash/api_keys)|`g4f.Provider.DeepInfra`|✔ _**(17+)**_|✔ _**(6+)**_|❌|❌||
|[platform.deepseek.com](https://platform.deepseek.com)|[Get API key](https://platform.deepseek.com/api_keys)|`g4f.Provider.DeepSeek`|✔ _**(1+)**_|❌|❌|❌||
|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini-2.0`|`gemini-2.0`|❌|`gemini-2.0`||
|[ai.google.dev](https://ai.google.dev)|[Get API key](https://aistudio.google.com/u/0/apikey)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|❌|`gemini-1.5-pro`||
|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|[Manual cookies](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔ _**(3+)**_|❌|❌|❌||
|[github.com/copilot](https://github.com/copilot)|[Manual cookies](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔ _**(4+)**_|❌|❌|❌||
|[glhf.chat](https://glhf.chat)|[Get API key](https://glhf.chat/user-settings/api)|`g4f.Provider.GlhfChat`|✔ _**(22+)**_|❌|❌|❌||
|[console.groq.com/playground](https://console.groq.com/playground)|[Get API key](https://console.groq.com/keys)|`g4f.Provider.Groq`|✔ _**(18+)**_|❌|❌|✔||
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|❌||
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAIAccount`|❌|`meta-ai`|❌|❌||
|[designer.microsoft.com](https://designer.microsoft.com)|[Manual cookies](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌||
|[platform.openai.com](https://platform.openai.com)|[Get API key](https://platform.openai.com/settings/organization/api-keys)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|❌||
|[chatgpt.com](https://chatgpt.com)|[Manual cookies](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4` _**(8+)**_|✔ _**(1)**_|❌|✔ _**(8+)**_||
|[perplexity.ai](https://www.perplexity.ai)|[Get API key](https://www.perplexity.ai/settings/api)|`g4f.Provider.PerplexityApi`|✔ _**(6+)**_|❌|❌|❌||
|[chat.reka.ai](https://chat.reka.ai)|[Manual cookies](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|✔|❌|❌||
|[replicate.com](https://replicate.com)|[Get API key](https://replicate.com/account/api-tokens)|`g4f.Provider.Replicate`|✔ _**(1+)**_|❌|❌|❌||
|[beta.theb.ai](https://beta.theb.ai)|[Get API key](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔ _**(21+)**_|❌|❌|❌||
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|[Manual cookies](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|❌||
|[console.x.ai](https://console.x.ai)|[Get API key](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|❌||

---

## Models
@@ -145,12 +143,13 @@ This document provides an overview of various AI providers and models, including

### Text generation models

| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-3.5-turbo|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/engines/gpt-3.5-turbo)|
|gpt-4|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|6+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|8+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|3+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|gpt-4o-mini|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|4+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-mini|OpenAI|1+ Providers|[openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|o3-mini|OpenAI|3+ Providers|[openai.com](https://openai.com/index/openai-o3-mini/)|
|o3-mini|OpenAI|4+ Providers|[openai.com](https://openai.com/index/openai-o3-mini/)|
|gigachat|GigaChat|1+ Providers|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|
|meta-ai|Meta|1+ Providers|[ai.meta.com](https://ai.meta.com/)|
|llama-2-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
@@ -170,7 +169,7 @@ This document provides an overview of various AI providers and models, including
|mixtral-small-24b|Mistral|2+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)|
|hermes-3|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B-FP8)|
|phi-3.5-mini|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|
|phi-4|Microsoft|2+ Providers|[techcommunity.microsoft.com](https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090)|
|phi-4|Microsoft|3+ Providers|[techcommunity.microsoft.com](https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090)|
|wizardlm-2-7b|Microsoft|1+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|wizardlm-2-8x22b|Microsoft|2+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|gemini-exp|Google DeepMind|1+ Providers|[blog.google](https://blog.google/feed/gemini-exp-1206/)|
@@ -197,13 +196,14 @@ This document provides an overview of various AI providers and models, including
|qwen-2-vl-7b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-VL-7B)|
|qwen-2.5-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)|
|qwen-2.5-coder-32b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
|qwen-2.5-1m-demo|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-1M-Demo)|
|qwen-2.5-1m|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-1M-Demo)|
|qwen-2-5-max|Qwen|1+ Providers|[qwen-ai.com](https://www.qwen-ai.com/2-5-max/)|
|qwq-32b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qvq-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/QVQ-72B-Preview)|
|pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
|deepseek-chat|DeepSeek|2+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-v3|DeepSeek|4+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|9+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-v3|DeepSeek|5+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|10+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|janus-pro-7b|DeepSeek|2+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/docs/janus-pro-7b)|
|grok-3|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-3)|
|grok-3-r1|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-3)|
@@ -228,7 +228,7 @@ This document provides an overview of various AI providers and models, including
|tulu-3-70b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|olmoe-0125|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|lfm-40b|Liquid AI|1+ Providers|[liquid.ai](https://www.liquid.ai/liquid-foundation-models)|
|evil|Evil Mode - Experimental|1+ Providers|[]( )|
|evil|Evil Mode - Experimental|2+ Providers|[]( )|
---

### Image generation models

@@ -243,14 +243,19 @@ This document provides an overview of various AI providers and models, including
|dall-e-3|OpenAI|5+ Providers|[openai.com](https://openai.com/index/dall-e/)|
|midjourney|Midjourney|1+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|

---

### Audio generation models

| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-4o-audio|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-audio)|
## Conclusion and Usage Tips

This document provides a comprehensive overview of various AI providers and models available for text generation, image generation, and vision tasks. **When choosing a provider or model, consider the following factors:**

1. **Availability**: Check the status of the provider to ensure it's currently active and accessible.
2. **Model Capabilities**: Different models excel at different tasks. Choose a model that best fits your specific needs, whether it's text generation, image creation, or vision-related tasks.
3. **Authentication**: Some providers require authentication, while others don't. Consider this when selecting a provider for your project.
4. **Streaming Support**: If real-time responses are important for your application, prioritize providers that offer streaming capabilities.
5. **Vision Models**: For tasks requiring image understanding or multimodal interactions, look for providers offering vision models.
4. **Vision Models**: For tasks requiring image understanding or multimodal interactions, look for providers offering vision models.

Remember to stay updated with the latest developments in the AI field, as new models and providers are constantly emerging and evolving.
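The selection factors above can also be applied programmatically. Below is a hypothetical sketch of filtering a capability table like the ones in this document; the table entries and the `pick` helper are illustrative, not the real g4f provider registry.

```python
# Hypothetical capability table; entries are illustrative only.
providers = {
    "PollinationsAI": {"auth": False, "streaming": True, "vision": False},
    "OpenaiChat": {"auth": True, "streaming": True, "vision": True},
    "MicrosoftDesigner": {"auth": True, "streaming": False, "vision": False},
}

def pick(providers, *, vision=False, streaming=False, allow_auth=True):
    """Return provider names matching the requested capabilities."""
    return [
        name for name, caps in providers.items()
        if (not vision or caps["vision"])
        and (not streaming or caps["streaming"])
        and (allow_auth or not caps["auth"])
    ]

print(pick(providers, vision=True))                       # vision-capable providers
print(pick(providers, streaming=True, allow_auth=False))  # no-auth streaming providers
```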
@@ -75,7 +75,7 @@ class ARTA(AsyncGeneratorProvider, ProviderModelMixin):
        "professional": "Professional",
        "black_ink": "Black Ink"
    }
    image_models = [*model_aliases.keys()]
    image_models = list(model_aliases.keys())
    models = image_models

    @classmethod
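Several providers in this change standardize on the same pattern: user-facing names live in `model_aliases`, and the public model list is derived with `list(model_aliases.keys())`. A minimal sketch of the pattern, using the Flux aliases from this diff; the `get_model` helper is an illustration of how an alias resolves to an internal id (in g4f the actual resolution is handled by the provider base class).

```python
# Alias table maps user-facing names to the internal model id.
model_aliases = {
    "flux-dev": "black-forest-labs-flux-1-dev",
    "flux": "black-forest-labs-flux-1-dev",
}
# The public model list is derived from the alias keys.
image_models = list(model_aliases.keys())
models = image_models

def get_model(name: str, default: str = "black-forest-labs-flux-1-dev") -> str:
    """Resolve a user-facing alias to the internal model id (illustrative)."""
    return model_aliases.get(name, default)

print(get_model("flux"))      # resolves the alias
print(models)                 # public names only
```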
@@ -5,19 +5,22 @@ from aiohttp import ClientSession, ClientTimeout
import json
import asyncio
import random
from yarl import URL

from ..typing import AsyncResult, Messages, Cookies
from ..requests.raise_for_status import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from .helper import format_prompt, get_last_user_message
from ..providers.response import FinishReason, JsonConversation
from ..errors import ModelNotSupportedError, ResponseStatusError, RateLimitError, TimeoutError, ConversationLimitError


class DuckDuckGoSearchException(Exception):
    """Base exception class for duckduckgo_search."""

class Conversation(JsonConversation):
    vqd: str = None
    vqd_hash_1: str = None
    message_history: Messages = []
    cookies: dict = {}
@@ -30,7 +33,7 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
    api_endpoint = "https://duckduckgo.com/duckchat/v1/chat"
    status_url = "https://duckduckgo.com/duckchat/v1/status"

    working = False
    working = True
    supports_stream = True
    supports_system_message = True
    supports_message_history = True
@@ -46,6 +49,8 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
    }

    last_request_time = 0
    max_retries = 3
    base_delay = 2

    @classmethod
    def validate_model(cls, model: str) -> str:
@@ -59,43 +64,78 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
        return model

    @classmethod
    async def sleep(cls):
    async def sleep(cls, multiplier=1.0):
        """Implements rate limiting between requests"""
        now = time.time()
        if cls.last_request_time > 0:
            delay = max(0.0, 0.75 - (now - cls.last_request_time))
            delay = max(0.0, 1.5 - (now - cls.last_request_time)) * multiplier
            if delay > 0:
                await asyncio.sleep(delay)
        cls.last_request_time = now
        cls.last_request_time = time.time()
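The throttle above waits until a minimum interval has elapsed since the previous request, scaled by a retry multiplier. A standalone sketch of the same logic, assuming the 1.5-second interval from the new code; the class name and the `return delay` value are illustrative additions for observability, not part of the provider.

```python
import asyncio
import time

class RateLimited:
    """Sketch of a minimum-interval throttle: wait until at least
    `min_interval` seconds have passed since the previous request."""
    last_request_time = 0.0
    min_interval = 1.5  # seconds, matching the provider's new value

    @classmethod
    async def sleep(cls, multiplier: float = 1.0) -> float:
        now = time.time()
        delay = 0.0
        if cls.last_request_time > 0:
            delay = max(0.0, cls.min_interval - (now - cls.last_request_time)) * multiplier
            if delay > 0:
                await asyncio.sleep(delay)
        cls.last_request_time = time.time()
        return delay

async def demo():
    first = await RateLimited.sleep()   # no prior request: no wait
    second = await RateLimited.sleep()  # immediate retry: waits up to 1.5s
    return first, second

first, second = asyncio.run(demo())
print(first, second)
```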
    @classmethod
    async def fetch_vqd(cls, session: ClientSession, max_retries: int = 3) -> str:
        """Fetches the required VQD token for the chat session with retries."""
        headers = {
            "accept": "text/event-stream",
            "content-type": "application/json",
            "x-vqd-accept": "1",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
        }

        for attempt in range(max_retries):
    async def get_default_cookies(cls, session: ClientSession) -> dict:
        """Obtains default cookies needed for API requests"""
        try:
            await cls.sleep()
            # Make initial request to get cookies
            async with session.get(cls.url) as response:
                # We also manually set required cookies
                cookies = {}
                cookies_dict = {'dcs': '1', 'dcm': '3'}

                for name, value in cookies_dict.items():
                    cookies[name] = value
                    url_obj = URL(cls.url)
                    session.cookie_jar.update_cookies({name: value}, url_obj)

                return cookies
        except Exception as e:
            return {}

    @classmethod
    async def fetch_vqd_and_hash(cls, session: ClientSession, retry_count: int = 0) -> tuple[str, str]:
        """Fetches the required VQD token and hash for the chat session with retries."""
        headers = {
            "accept": "text/event-stream",
            "accept-language": "en-US,en;q=0.9",
            "cache-control": "no-cache",
            "content-type": "application/json",
            "pragma": "no-cache",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
            "origin": "https://duckduckgo.com",
            "referer": "https://duckduckgo.com/",
            "x-vqd-accept": "1",
        }

        # Make sure we have cookies first
        if len(session.cookie_jar) == 0:
            await cls.get_default_cookies(session)

        try:
            await cls.sleep(multiplier=1.0 + retry_count * 0.5)
            async with session.get(cls.status_url, headers=headers) as response:
                await raise_for_status(response)
                vqd = response.headers.get("x-vqd-4", "")
                if vqd:
                    return vqd
                response_text = await response.text()
                raise RuntimeError(f"Failed to fetch VQD token: {response.status} {response_text}")
        except ResponseStatusError as e:
            if attempt < max_retries - 1:
                wait_time = random.uniform(1, 3) * (attempt + 1)
                await asyncio.sleep(wait_time)
            else:
                raise RuntimeError(f"Failed to fetch VQD token after {max_retries} attempts: {str(e)}")

        raise RuntimeError("Failed to fetch VQD token: Maximum retries exceeded")
                vqd = response.headers.get("x-vqd-4", "")
                vqd_hash_1 = response.headers.get("x-vqd-hash-1", "")

                if vqd and vqd_hash_1:
                    return vqd, vqd_hash_1

                if vqd and not vqd_hash_1:
                    return vqd, ""

                response_text = await response.text()
                raise RuntimeError(f"Failed to fetch VQD token and hash: {response.status} {response_text}")

        except Exception as e:
            if retry_count < cls.max_retries:
                wait_time = cls.base_delay * (2 ** retry_count) * (1 + random.random())
                await asyncio.sleep(wait_time)
                return await cls.fetch_vqd_and_hash(session, retry_count + 1)
            else:
                raise RuntimeError(f"Failed to fetch VQD token and hash after {cls.max_retries} attempts: {str(e)}")
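The retry path above computes its wait as `base_delay * (2 ** retry_count) * (1 + random.random())`: exponential backoff with multiplicative jitter, so repeated clients do not retry in lockstep. A minimal sketch of that wait computation; the helper name is an assumption for illustration, not part of the provider.

```python
import random

def backoff_delay(retry_count: int, base_delay: float = 2.0) -> float:
    """Exponential backoff with jitter, as in fetch_vqd_and_hash:
    the wait falls in [base * 2**n, 2 * base * 2**n)."""
    return base_delay * (2 ** retry_count) * (1 + random.random())

for attempt in range(3):
    print(f"retry {attempt}: wait {backoff_delay(attempt):.1f}s")
```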
    @classmethod
    async def create_async_generator(
@@ -103,40 +143,69 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
        model: str,
        messages: Messages,
        proxy: str = None,
        timeout: int = 30,
        timeout: int = 60,
        cookies: Cookies = None,
        conversation: Conversation = None,
        return_conversation: bool = False,
        **kwargs
    ) -> AsyncResult:
        model = cls.validate_model(model)
        retry_count = 0

        if cookies is None and conversation is not None:
            cookies = conversation.cookies

        while retry_count <= cls.max_retries:
            try:
                async with ClientSession(timeout=ClientTimeout(total=timeout), cookies=cookies) as session:
                session_timeout = ClientTimeout(total=timeout)
                async with ClientSession(timeout=session_timeout, cookies=cookies) as session:
                    if conversation is None:
                        # Get initial cookies if not provided
                        if not cookies:
                            await cls.get_default_cookies(session)

                        conversation = Conversation(model)
                        conversation.vqd = await cls.fetch_vqd(session)
                        vqd, vqd_hash_1 = await cls.fetch_vqd_and_hash(session)
                        conversation.vqd = vqd
                        conversation.vqd_hash_1 = vqd_hash_1
                        conversation.message_history = [{"role": "user", "content": format_prompt(messages)}]
                    else:
                        conversation.message_history.append(messages[-1])
                        last_message = get_last_user_message(messages.copy())
                        conversation.message_history.append({"role": "user", "content": last_message})

                    headers = {
                        "accept": "text/event-stream",
                        "accept-language": "en-US,en;q=0.9",
                        "content-type": "application/json",
                        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
                        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
                        "origin": "https://duckduckgo.com",
                        "referer": "https://duckduckgo.com/",
                        "x-vqd-4": conversation.vqd,
                    }

                    # Add the x-vqd-hash-1 header if available
                    if conversation.vqd_hash_1:
                        headers["x-vqd-hash-1"] = conversation.vqd_hash_1

                    data = {
                        "model": model,
                        "messages": conversation.message_history,
                    }

                    await cls.sleep()
                    await cls.sleep(multiplier=1.0 + retry_count * 0.5)
                    async with session.post(cls.api_endpoint, json=data, headers=headers, proxy=proxy) as response:
                        # Handle 429 errors specifically
                        if response.status == 429:
                            response_text = await response.text()

                            if retry_count < cls.max_retries:
                                retry_count += 1
                                wait_time = cls.base_delay * (2 ** retry_count) * (1 + random.random())
                                await asyncio.sleep(wait_time)

                                # Get fresh tokens and cookies
                                cookies = await cls.get_default_cookies(session)
                                continue
                            else:
                                raise RateLimitError(f"Rate limited after {cls.max_retries} retries")

                        await raise_for_status(response)
                        reason = None
                        full_message = ""
@@ -169,14 +238,27 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
                        if return_conversation:
                            conversation.message_history.append({"role": "assistant", "content": full_message})
                            conversation.vqd = response.headers.get("x-vqd-4", conversation.vqd)
                            conversation.vqd_hash_1 = response.headers.get("x-vqd-hash-1", conversation.vqd_hash_1)
                            conversation.cookies = {
                                n: c.value
                                for n, c in session.cookie_jar.filter_cookies(cls.url).items()
                                for n, c in session.cookie_jar.filter_cookies(URL(cls.url)).items()
                            }
                            yield conversation

                        if reason is not None:
                            yield FinishReason(reason)

                # If we got here, the request was successful
                break

            except (RateLimitError, ResponseStatusError) as e:
                if "429" in str(e) and retry_count < cls.max_retries:
                    retry_count += 1
                    wait_time = cls.base_delay * (2 ** retry_count) * (1 + random.random())
                    await asyncio.sleep(wait_time)
                else:
                    raise
            except asyncio.TimeoutError as e:
                raise TimeoutError(f"Request timed out: {str(e)}")
            except Exception as e:
                raise
@@ -1,8 +1,6 @@
from __future__ import annotations

from ..typing import AsyncResult, Messages, MediaListType
from .template import OpenaiTemplate
from ..image import to_data_uri

class DeepInfraChat(OpenaiTemplate):
    url = "https://deepinfra.com/chat"
@@ -10,8 +8,8 @@ class DeepInfraChat(OpenaiTemplate):
    working = True

    default_model = 'deepseek-ai/DeepSeek-V3'
    default_vision_model = 'meta-llama/Llama-3.2-90B-Vision-Instruct'
    vision_models = [default_vision_model, 'openbmb/MiniCPM-Llama3-V-2_5']
    default_vision_model = 'openbmb/MiniCPM-Llama3-V-2_5'
    vision_models = [default_vision_model, 'meta-llama/Llama-3.2-90B-Vision-Instruct']
    models = [
        'meta-llama/Meta-Llama-3.1-8B-Instruct',
        'meta-llama/Llama-3.3-70B-Instruct-Turbo',
@@ -59,36 +57,3 @@ class DeepInfraChat(OpenaiTemplate):
        "mixtral-8x22b": "mistralai/Mixtral-8x22B-Instruct-v0.1",
        "minicpm-2.5": "openbmb/MiniCPM-Llama3-V-2_5",
    }

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        stream: bool = True,
        top_p: float = 0.9,
        temperature: float = 0.7,
        max_tokens: int = None,
        headers: dict = {},
        **kwargs
    ) -> AsyncResult:
        headers = {
            'Accept-Language': 'en-US,en;q=0.9',
            'Origin': 'https://deepinfra.com',
            'Referer': 'https://deepinfra.com/',
            'X-Deepinfra-Source': 'web-page',
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36',
            **headers
        }

        async for chunk in super().create_async_generator(
            model,
            messages,
            headers=headers,
            stream=stream,
            top_p=top_p,
            temperature=temperature,
            max_tokens=max_tokens,
            **kwargs
        ):
            yield chunk
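The removed `create_async_generator` above built its request headers by listing provider defaults first and unpacking the caller's `**headers` last, so caller-supplied values override the defaults. A small sketch of that merge order; the `build_headers` helper and the example override are illustrative, not part of the provider.

```python
def build_headers(overrides: dict = None) -> dict:
    """Provider defaults first; caller overrides win because they
    are unpacked last (same pattern as {'...': ..., **headers})."""
    defaults = {
        'Accept-Language': 'en-US,en;q=0.9',
        'Origin': 'https://deepinfra.com',
        'Referer': 'https://deepinfra.com/',
        'X-Deepinfra-Source': 'web-page',
    }
    return {**defaults, **(overrides or {})}

h = build_headers({'Origin': 'https://example.test'})
print(h['Origin'])              # caller override wins
print(h['X-Deepinfra-Source'])  # untouched default survives
```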
@@ -14,7 +14,6 @@ class LambdaChat(HuggingChat):
    default_model = "deepseek-llama3.3-70b"
    reasoning_model = "deepseek-r1"
    image_models = []
    models = []
    fallback_models = [
        default_model,
        reasoning_model,
@@ -23,6 +22,8 @@ class LambdaChat(HuggingChat):
        "lfm-40b",
        "llama3.3-70b-instruct-fp8"
    ]
    models = fallback_models.copy()

    model_aliases = {
        "deepseek-v3": default_model,
        "hermes-3": "hermes-3-llama-3.1-405b-fp8",
@@ -5,13 +5,14 @@ from .template import OpenaiTemplate
class TypeGPT(OpenaiTemplate):
    label = "TypeGpt"
    url = "https://chat.typegpt.net"
    api_endpoint = "https://chat.typegpt.net/api/openai/typegpt/v1/chat/completions"
    api_base = "https://chat.typegpt.net/api/openai/typegpt/v1"
    working = True

    default_model = "gpt-4o-mini-2024-07-18"
    models = [
        default_model, "o1", "o3-mini", "gemini-1.5-flash", "deepseek-r1", "deepseek-v3", "gemini-pro", "evil"
    ]
    default_model = 'gpt-4o-mini-2024-07-18'
    default_vision_model = default_model
    vision_models = ['gpt-3.5-turbo', 'gpt-3.5-turbo-202201', default_vision_model, "o3-mini"]
    models = vision_models + ["deepseek-r1", "deepseek-v3", "evil", "o1"]
    model_aliases = {
        "gpt-3.5-turbo": "gpt-3.5-turbo-202201",
        "gpt-4o-mini": "gpt-4o-mini-2024-07-18",
    }
@@ -22,8 +22,8 @@ class BlackForestLabs_Flux1Dev(AsyncGeneratorProvider, ProviderModelMixin):

    default_model = 'black-forest-labs-flux-1-dev'
    default_image_model = default_model
    model_aliases = {"flux-dev": default_model, "flux": default_model}
    image_models = [default_image_model, *model_aliases.keys()]
    model_aliases = {"flux-dev": default_image_model, "flux": default_image_model}
    image_models = list(model_aliases.keys())
    models = image_models

    @classmethod
@@ -19,8 +19,8 @@ class BlackForestLabs_Flux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):

    default_model = "black-forest-labs-flux-1-schnell"
    default_image_model = default_model
    model_aliases = {"flux-schnell": default_model, "flux": default_model}
    image_models = [default_image_model, *model_aliases.keys()]
    model_aliases = {"flux-schnell": default_image_model, "flux": default_image_model}
    image_models = list(model_aliases.keys())
    models = image_models

    @classmethod
@@ -17,22 +17,16 @@ class CohereForAI_C4AI_Command(AsyncGeneratorProvider, ProviderModelMixin):
    working = True

    default_model = "command-a-03-2025"
    models = [
        default_model,
        "command-r-plus-08-2024",
        "command-r-08-2024",
        "command-r-plus",
        "command-r",
        "command-r7b-12-2024",
        "command-r7b-arabic-02-2025",
    ]
    model_aliases = {
        "command-a": "command-a-03-2025",
        "command-a": default_model,
        "command-r-plus": "command-r-plus-08-2024",
        "command-r": "command-r-08-2024",
        "command-r": "command-r",
        "command-r7b": "command-r7b-12-2024",
    }

    models = list(model_aliases.keys())

    @classmethod
    async def create_async_generator(
        cls, model: str, messages: Messages,
@@ -34,8 +34,9 @@ class DeepseekAI_JanusPro7b(AsyncGeneratorProvider, ProviderModelMixin):
    default_model = "janus-pro-7b"
    default_image_model = "janus-pro-7b-image"
    default_vision_model = default_model
    models = [default_model, default_image_model]
    image_models = [default_image_model]
    vision_models = [default_vision_model]
    models = vision_models + image_models

    @classmethod
    def run(cls, method: str, session: StreamSession, prompt: str, conversation: JsonConversation, image: dict = None, seed: int = 0):
@@ -29,7 +29,9 @@ class Microsoft_Phi_4(AsyncGeneratorProvider, ProviderModelMixin):

    default_model = "phi-4-multimodal"
    default_vision_model = default_model
    models = [default_model]
    model_aliases = {"phi-4": default_vision_model}
    vision_models = list(model_aliases.keys())
    models = vision_models

    @classmethod
    def run(cls, method: str, session: StreamSession, prompt: str, conversation: JsonConversation, media: list = None):
|
@ -18,9 +18,10 @@ class Qwen_QVQ_72B(AsyncGeneratorProvider, ProviderModelMixin):
|
|||
working = True
|
||||
|
||||
default_model = "qwen-qvq-72b-preview"
|
||||
models = [default_model]
|
||||
model_aliases = {"qvq-72b": default_model}
|
||||
vision_models = models
|
||||
default_vision_model = default_model
|
||||
model_aliases = {"qvq-72b": default_vision_model}
|
||||
vision_models = list(model_aliases.keys())
|
||||
models = vision_models
|
||||
|
||||
@classmethod
|
||||
async def create_async_generator(
|
||||
|
|
|
|||
g4f/Provider/hf_space/Qwen_Qwen_2_5.py (new file, 146 lines)
@@ -0,0 +1,146 @@
from __future__ import annotations

import aiohttp
import json
import uuid
import re

from ...typing import AsyncResult, Messages
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt
from ... import debug

class Qwen_Qwen_2_5(AsyncGeneratorProvider, ProviderModelMixin):
    label = "Qwen Qwen-2.5"
    url = "https://qwen-qwen2-5.hf.space"
    api_endpoint = "https://qwen-qwen2-5.hf.space/queue/join"

    working = True
    supports_stream = True
    supports_system_message = True
    supports_message_history = False

    default_model = "qwen-qwen2-5"
    model_aliases = {"qwen-2.5": default_model}
    models = list(model_aliases.keys())

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        def generate_session_hash():
            """Generate a unique session hash."""
            return str(uuid.uuid4()).replace('-', '')[:10]

        # Generate a unique session hash
        session_hash = generate_session_hash()

        headers_join = {
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:136.0) Gecko/20100101 Firefox/136.0',
            'Accept': '*/*',
            'Accept-Language': 'en-US,en;q=0.5',
            'Accept-Encoding': 'gzip, deflate, br, zstd',
            'Referer': f'{cls.url}/?__theme=system',
            'content-type': 'application/json',
            'Origin': cls.url,
            'Connection': 'keep-alive',
            'Sec-Fetch-Dest': 'empty',
            'Sec-Fetch-Mode': 'cors',
            'Sec-Fetch-Site': 'same-origin',
            'Pragma': 'no-cache',
            'Cache-Control': 'no-cache',
        }

        # Prepare the prompt
        system_prompt = "\n".join([message["content"] for message in messages if message["role"] == "system"])
        if not system_prompt:
            system_prompt = "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."
        messages = [message for message in messages if message["role"] != "system"]
        prompt = format_prompt(messages)

        payload_join = {
            "data": [prompt, [], system_prompt, "72B"],
            "event_data": None,
            "fn_index": 3,
            "trigger_id": 25,
            "session_hash": session_hash
        }

        async with aiohttp.ClientSession() as session:
            # Send join request
            async with session.post(cls.api_endpoint, headers=headers_join, json=payload_join) as response:
                event_id = (await response.json())['event_id']

            # Prepare data stream request
            url_data = f'{cls.url}/queue/data'

            headers_data = {
                'Accept': 'text/event-stream',
                'Accept-Language': 'en-US,en;q=0.5',
                'Referer': f'{cls.url}/?__theme=system',
                'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:136.0) Gecko/20100101 Firefox/136.0',
            }

            params_data = {
                'session_hash': session_hash
            }

            # Send data stream request
            async with session.get(url_data, headers=headers_data, params=params_data) as response:
                full_response = ""
                async for line in response.content:
                    decoded_line = line.decode('utf-8')
                    if decoded_line.startswith('data: '):
                        try:
                            json_data = json.loads(decoded_line[6:])

                            # Look for generation stages
                            if json_data.get('msg') == 'process_generating':
                                if 'output' in json_data and 'data' in json_data['output']:
                                    output_data = json_data['output']['data']
                                    if len(output_data) > 1 and len(output_data[1]) > 0:
                                        for item in output_data[1]:
                                            if isinstance(item, list) and len(item) > 1:
                                                # Extract the fragment, handling both string and dict types
                                                fragment = item[1]
                                                if isinstance(fragment, dict) and 'text' in fragment:
                                                    # For the first chunk, extract only the text part
                                                    fragment = fragment['text']
                                                else:
                                                    fragment = str(fragment)

                                                # Ignore [0, 1] type fragments and duplicates
                                                if not re.match(r'^\[.*\]$', fragment) and not full_response.endswith(fragment):
                                                    full_response += fragment
                                                    yield fragment

                            # Check for completion
                            if json_data.get('msg') == 'process_completed':
                                # Final check to ensure we get the complete response
                                if 'output' in json_data and 'data' in json_data['output']:
                                    output_data = json_data['output']['data']
                                    if len(output_data) > 1 and len(output_data[1]) > 0:
                                        # Get the final response text
                                        response_item = output_data[1][0][1]
                                        if isinstance(response_item, dict) and 'text' in response_item:
                                            final_full_response = response_item['text']
                                        else:
                                            final_full_response = str(response_item)

                                        # Clean up the final response
                                        if isinstance(final_full_response, str) and final_full_response.startswith(full_response):
                                            final_text = final_full_response[len(full_response):]
                                        else:
                                            final_text = final_full_response

                                        # Yield the remaining part of the final response
                                        if final_text and final_text != full_response:
                                            yield final_text
                                break

                        except json.JSONDecodeError:
                            debug.log("Could not parse JSON:", decoded_line)
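The stream loop above reads the Gradio queue's server-sent events line by line: anything starting with `data: ` carries a JSON payload whose `msg` field distinguishes generation progress from completion. A reduced sketch of just the line-parsing step; the `parse_sse_line` helper name is an illustration, not part of the provider.

```python
import json

def parse_sse_line(raw: bytes):
    """Decode one event-stream line; return the JSON payload of a
    'data: ' line, or None for comments/keep-alives/bad JSON."""
    decoded = raw.decode('utf-8')
    if not decoded.startswith('data: '):
        return None
    try:
        return json.loads(decoded[6:])
    except json.JSONDecodeError:
        return None

event = parse_sse_line(b'data: {"msg": "process_generating", "output": {"data": []}}')
print(event["msg"])
```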
@@ -11,8 +11,8 @@ from ...providers.response import JsonConversation, Reasoning
from ..helper import get_last_user_message
from ... import debug

class Qwen_Qwen_2_5M_Demo(AsyncGeneratorProvider, ProviderModelMixin):
    label = "Qwen Qwen-2.5M-Demo"
class Qwen_Qwen_2_5M(AsyncGeneratorProvider, ProviderModelMixin):
    label = "Qwen Qwen-2.5M"
    url = "https://qwen-qwen2-5-1m-demo.hf.space"
    api_endpoint = f"{url}/run/predict?__theme=light"

@@ -22,7 +22,8 @@ class Qwen_Qwen_2_5M_Demo(AsyncGeneratorProvider, ProviderModelMixin):
    supports_message_history = False

    default_model = "qwen-2.5-1m-demo"
    models = [default_model]
    model_aliases = {"qwen-2.5-1m": default_model}
    models = list(model_aliases.keys())

    @classmethod
    async def create_async_generator(
g4f/Provider/hf_space/Qwen_Qwen_2_5_Max.py (new file, 133 lines)
@ -0,0 +1,133 @@
from __future__ import annotations

import aiohttp
import json
import uuid
import re

from ...typing import AsyncResult, Messages
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt
from ... import debug

class Qwen_Qwen_2_5_Max(AsyncGeneratorProvider, ProviderModelMixin):
    label = "Qwen Qwen-2.5-Max"
    url = "https://qwen-qwen2-5-max-demo.hf.space"
    api_endpoint = "https://qwen-qwen2-5-max-demo.hf.space/gradio_api/queue/join?"

    working = True
    supports_stream = True
    supports_system_message = True
    supports_message_history = False

    default_model = "qwen-qwen2-5-max"
    model_aliases = {"qwen-2-5-max": default_model}
    models = list(model_aliases.keys())

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        def generate_session_hash():
            """Generate a unique session hash."""
            return str(uuid.uuid4()).replace('-', '')[:8] + str(uuid.uuid4()).replace('-', '')[:4]

        # Generate a unique session hash
        session_hash = generate_session_hash()

        headers_join = {
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:136.0) Gecko/20100101 Firefox/136.0',
            'Accept': '*/*',
            'Accept-Language': 'en-US,en;q=0.5',
            'Accept-Encoding': 'gzip, deflate, br, zstd',
            'Referer': f'{cls.url}/?__theme=system',
            'content-type': 'application/json',
            'Origin': cls.url,
            'Connection': 'keep-alive',
            'Sec-Fetch-Dest': 'empty',
            'Sec-Fetch-Mode': 'cors',
            'Sec-Fetch-Site': 'same-origin',
            'Pragma': 'no-cache',
            'Cache-Control': 'no-cache',
        }

        # Prepare the prompt
        system_prompt = "\n".join([message["content"] for message in messages if message["role"] == "system"])
        if not system_prompt:
            system_prompt = "You are a helpful assistant."
        messages = [message for message in messages if message["role"] != "system"]
        prompt = format_prompt(messages)

        payload_join = {
            "data": [prompt, [], system_prompt],
            "event_data": None,
            "fn_index": 0,
            "trigger_id": 11,
            "session_hash": session_hash
        }

        async with aiohttp.ClientSession() as session:
            # Send join request
            async with session.post(cls.api_endpoint, headers=headers_join, json=payload_join) as response:
                event_id = (await response.json())['event_id']

            # Prepare data stream request
            url_data = f'{cls.url}/gradio_api/queue/data'

            headers_data = {
                'Accept': 'text/event-stream',
                'Accept-Language': 'en-US,en;q=0.5',
                'Referer': f'{cls.url}/?__theme=system',
                'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:136.0) Gecko/20100101 Firefox/136.0',
            }

            params_data = {
                'session_hash': session_hash
            }

            # Send data stream request
            async with session.get(url_data, headers=headers_data, params=params_data) as response:
                full_response = ""
                final_full_response = ""
                async for line in response.content:
                    decoded_line = line.decode('utf-8')
                    if decoded_line.startswith('data: '):
                        try:
                            json_data = json.loads(decoded_line[6:])

                            # Look for generation stages
                            if json_data.get('msg') == 'process_generating':
                                if 'output' in json_data and 'data' in json_data['output']:
                                    output_data = json_data['output']['data']
                                    if len(output_data) > 1 and len(output_data[1]) > 0:
                                        for item in output_data[1]:
                                            if isinstance(item, list) and len(item) > 1:
                                                fragment = str(item[1])
                                                # Ignore [0, 1] type fragments and duplicates
                                                if not re.match(r'^\[.*\]$', fragment) and not full_response.endswith(fragment):
                                                    full_response += fragment
                                                    yield fragment

                            # Check for completion
                            if json_data.get('msg') == 'process_completed':
                                # Final check to ensure we get the complete response
                                if 'output' in json_data and 'data' in json_data['output']:
                                    output_data = json_data['output']['data']
                                    if len(output_data) > 1 and len(output_data[1]) > 0:
                                        final_full_response = output_data[1][0][1]

                                        # Clean up the final response
                                        if final_full_response.startswith(full_response):
                                            final_full_response = final_full_response[len(full_response):]

                                        # Yield the remaining part of the final response
                                        if final_full_response:
                                            yield final_full_response
                                break

                        except json.JSONDecodeError:
                            debug.log("Could not parse JSON:", decoded_line)
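The streaming loop in the new provider consumes a Gradio `queue/data` event stream: each payload line starts with `data: `, carries JSON, and its `msg` field distinguishes incremental `process_generating` updates from the terminal `process_completed` event. A standalone sketch of that parse-and-dispatch step (the sample payloads below are made up for illustration):

```python
import json

def parse_sse_line(line: bytes):
    """Return the decoded JSON payload of a 'data: ' SSE line, or None."""
    decoded = line.decode("utf-8")
    if not decoded.startswith("data: "):
        return None  # comments, keep-alives, and other fields are skipped
    try:
        return json.loads(decoded[6:])
    except json.JSONDecodeError:
        return None

events = [
    b": keep-alive comment",
    b'data: {"msg": "process_generating", "output": {"data": [null, [[0, "Hel"]]]}}',
    b'data: {"msg": "process_completed", "output": {"data": [null, [[0, "Hello"]]]}}',
]
for raw in events:
    payload = parse_sse_line(raw)
    if payload is None:
        continue
    if payload["msg"] == "process_generating":
        print("chunk:", payload["output"]["data"][1][0][1])   # chunk: Hel
    elif payload["msg"] == "process_completed":
        print("final:", payload["output"]["data"][1][0][1])   # final: Hello
```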
g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py

@@ -10,8 +10,8 @@ from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
 from ..helper import format_prompt
 from ... import debug

-class Qwen_Qwen_2_72B_Instruct(AsyncGeneratorProvider, ProviderModelMixin):
-    label = "Qwen Qwen-2.72B-Instruct"
+class Qwen_Qwen_2_72B(AsyncGeneratorProvider, ProviderModelMixin):
+    label = "Qwen Qwen-2.72B"
     url = "https://qwen-qwen2-72b-instruct.hf.space"
     api_endpoint = "https://qwen-qwen2-72b-instruct.hf.space/queue/join?"

@@ -21,8 +21,8 @@ class Qwen_Qwen_2_72B_Instruct(AsyncGeneratorProvider, ProviderModelMixin):
     supports_message_history = False

     default_model = "qwen-qwen2-72b-instruct"
-    models = [default_model]
+    model_aliases = {"qwen-2-72b": default_model}
+    models = list(model_aliases.keys())

     @classmethod
     async def create_async_generator(
g4f/Provider/hf_space/StabilityAI_SD35Large.py

@@ -18,9 +18,9 @@ class StabilityAI_SD35Large(AsyncGeneratorProvider, ProviderModelMixin):

     default_model = 'stabilityai-stable-diffusion-3-5-large'
     default_image_model = default_model
-    image_models = [default_model]
-    models = image_models
+    model_aliases = {"sd-3.5": default_model}
+    image_models = list(model_aliases.keys())
+    models = image_models

     @classmethod
     async def create_async_generator(
g4f/Provider/hf_space/Voodoohop_Flux1Schnell.py

@@ -20,7 +20,7 @@ class Voodoohop_Flux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):
     default_model = "voodoohop-flux-1-schnell"
     default_image_model = default_model
     model_aliases = {"flux-schnell": default_model, "flux": default_model}
-    image_models = [default_image_model, *model_aliases.keys()]
+    image_models = list(model_aliases.keys())
     models = image_models

     @classmethod
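The one-line Voodoohop fix matters because the old list mixed the internal Space id with its public aliases, so clients saw `voodoohop-flux-1-schnell` alongside `flux-schnell` and `flux`. A before/after sketch of the two expressions:

```python
default_model = "voodoohop-flux-1-schnell"
default_image_model = default_model
model_aliases = {"flux-schnell": default_model, "flux": default_model}

# Before: the internal id leaked into the public list
old_image_models = [default_image_model, *model_aliases.keys()]
# After: only the public aliases are advertised
image_models = list(model_aliases.keys())

print(old_image_models)  # ['voodoohop-flux-1-schnell', 'flux-schnell', 'flux']
print(image_models)      # ['flux-schnell', 'flux']
```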
g4f/Provider/hf_space/__init__.py

@@ -13,8 +13,10 @@ from .DeepseekAI_JanusPro7b import DeepseekAI_JanusPro7b
 from .G4F import G4F
 from .Microsoft_Phi_4 import Microsoft_Phi_4
 from .Qwen_QVQ_72B import Qwen_QVQ_72B
-from .Qwen_Qwen_2_5M_Demo import Qwen_Qwen_2_5M_Demo
-from .Qwen_Qwen_2_72B_Instruct import Qwen_Qwen_2_72B_Instruct
+from .Qwen_Qwen_2_5 import Qwen_Qwen_2_5
+from .Qwen_Qwen_2_5M import Qwen_Qwen_2_5M
+from .Qwen_Qwen_2_5_Max import Qwen_Qwen_2_5_Max
+from .Qwen_Qwen_2_72B import Qwen_Qwen_2_72B
 from .StabilityAI_SD35Large import StabilityAI_SD35Large
 from .Voodoohop_Flux1Schnell import Voodoohop_Flux1Schnell

@@ -23,7 +25,7 @@ class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):

     working = True

-    default_model = Qwen_Qwen_2_72B_Instruct.default_model
+    default_model = Qwen_Qwen_2_72B.default_model
     default_image_model = BlackForestLabs_Flux1Dev.default_model
     default_vision_model = Qwen_QVQ_72B.default_model
     providers = [

@@ -34,8 +36,10 @@ class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):
         G4F,
         Microsoft_Phi_4,
         Qwen_QVQ_72B,
-        Qwen_Qwen_2_5M_Demo,
-        Qwen_Qwen_2_72B_Instruct,
+        Qwen_Qwen_2_5,
+        Qwen_Qwen_2_5M,
+        Qwen_Qwen_2_5_Max,
+        Qwen_Qwen_2_72B,
         StabilityAI_SD35Large,
         Voodoohop_Flux1Schnell,
     ]
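`HuggingSpace` aggregates the individual Space providers and re-exports one of their defaults. A reduced sketch of that pattern, assuming only that each sub-provider class exposes `default_model` and `models` (the real class does considerably more):

```python
# Stand-in provider classes with the two attributes the pattern relies on
class Qwen_Qwen_2_72B:
    default_model = "qwen-qwen2-72b-instruct"
    models = ["qwen-2-72b"]

class StabilityAI_SD35Large:
    default_model = "stabilityai-stable-diffusion-3-5-large"
    models = ["sd-3.5"]

class HuggingSpace:
    # The aggregate's default is borrowed from one concrete provider
    default_model = Qwen_Qwen_2_72B.default_model
    providers = [Qwen_Qwen_2_72B, StabilityAI_SD35Large]

    @classmethod
    def get_models(cls):
        """Ordered union of every sub-provider's public model list."""
        models = []
        for provider in cls.providers:
            for model in provider.models:
                if model not in models:
                    models.append(model)
        return models

print(HuggingSpace.default_model)  # qwen-qwen2-72b-instruct
print(HuggingSpace.get_models())   # ['qwen-2-72b', 'sd-3.5']
```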
g4f/models.py

@@ -30,6 +30,7 @@ from .Provider import (
     Pi,
     PollinationsAI,
     PollinationsImage,
+    TypeGPT,
     TeachAnything,
     Websim,
     Yqcloud,

@@ -73,6 +74,9 @@ class Model:
 class ImageModel(Model):
     pass

+class AudioModel(Model):
+    pass
+
 class VisionModel(Model):
     pass

@@ -87,6 +91,7 @@ default = Model(
         DeepInfraChat,
         AllenAI,
         PollinationsAI,
+        TypeGPT,
         OIVSCode,
         ChatGptEs,
         Free2GPT,

@@ -105,6 +110,7 @@ default_vision = Model(
     best_provider = IterListProvider([
         Blackbox,
         OIVSCode,
+        TypeGPT,
         DeepInfraChat,
         PollinationsAI,
         Dynaspark,
@@ -117,11 +123,18 @@ default_vision = Model(
     ], shuffle=False)
 )

-###################
-### Text/Vision ###
-###################
+##########################
+### Text//Audio/Vision ###
+##########################

 ### OpenAI ###
+# gpt-3.5
+gpt_3_5_turbo = Model(
+    name = 'gpt-3.5-turbo',
+    base_provider = 'OpenAI',
+    best_provider = TypeGPT
+)
+
 # gpt-4
 gpt_4 = Model(
     name = 'gpt-4',
@@ -139,14 +152,20 @@ gpt_4o = VisionModel(
 gpt_4o_mini = Model(
     name = 'gpt-4o-mini',
     base_provider = 'OpenAI',
-    best_provider = IterListProvider([DDG, Blackbox, ChatGptEs, Jmuz, PollinationsAI, OIVSCode, Liaobots, OpenaiChat])
+    best_provider = IterListProvider([DDG, Blackbox, ChatGptEs, TypeGPT, PollinationsAI, OIVSCode, Liaobots, Jmuz, OpenaiChat])
 )

+gpt_4o_audio = AudioModel(
+    name = 'gpt-4o-audio',
+    base_provider = 'OpenAI',
+    best_provider = PollinationsAI
+)
+
 # o1
 o1 = Model(
     name = 'o1',
     base_provider = 'OpenAI',
-    best_provider = IterListProvider([Blackbox, Copilot, OpenaiAccount])
+    best_provider = IterListProvider([Blackbox, Copilot, TypeGPT, OpenaiAccount])
 )

 o1_mini = Model(
@@ -159,7 +178,7 @@ o1_mini = Model(
 o3_mini = Model(
     name = 'o3-mini',
     base_provider = 'OpenAI',
-    best_provider = IterListProvider([DDG, Blackbox, Liaobots, PollinationsAI])
+    best_provider = IterListProvider([DDG, Blackbox, TypeGPT, PollinationsAI, Liaobots])
 )

 ### GigaChat ###
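`IterListProvider` encodes the fallback policy these `best_provider` entries rely on: try each provider in order (the reordering deliberately moves `TypeGPT` ahead of `Liaobots`) and return the first success. A toy sketch of that semantics, not the real implementation:

```python
class IterListProvider:
    def __init__(self, providers, shuffle=False):
        self.providers = providers
        self.shuffle = shuffle

    def create(self, prompt):
        errors = []
        for provider in self.providers:
            try:
                return provider(prompt)   # first provider that succeeds wins
            except Exception as exc:
                errors.append(exc)        # remember failures, keep falling back
        raise RuntimeError(f"all providers failed: {errors}")

def broken(prompt):
    raise ConnectionError("provider down")

def working(prompt):
    return f"answer to {prompt!r}"

best_provider = IterListProvider([broken, working])
print(best_provider.create("hi"))  # answer to 'hi'
```

This is why provider order matters: an unreliable entry early in the list only costs a failed attempt, while a reliable one placed first short-circuits the rest.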
@@ -294,7 +313,7 @@ phi_3_5_mini = Model(
 phi_4 = Model(
     name = "phi-4",
     base_provider = "Microsoft",
-    best_provider = IterListProvider([DeepInfraChat, PollinationsAI])
+    best_provider = IterListProvider([DeepInfraChat, PollinationsAI, HuggingSpace])
 )

 # wizardlm
@@ -438,11 +457,14 @@ command_a = Model(
 )

 ### Qwen ###
+# qwen-1.5
 qwen_1_5_7b = Model(
     name = 'qwen-1.5-7b',
     base_provider = 'Qwen',
     best_provider = Cloudflare
 )

+# qwen-2
 qwen_2_72b = Model(
     name = 'qwen-2-72b',
     base_provider = 'Qwen',
@@ -453,6 +475,14 @@ qwen_2_vl_7b = VisionModel(
     base_provider = 'Qwen',
     best_provider = HuggingFaceAPI
 )

+# qwen-2.5
+qwen_2_5 = Model(
+    name = 'qwen-2.5',
+    base_provider = 'Qwen',
+    best_provider = HuggingSpace
+)
+
 qwen_2_5_72b = Model(
     name = 'qwen-2.5-72b',
     base_provider = 'Qwen',
@@ -464,7 +494,13 @@ qwen_2_5_coder_32b = Model(
     best_provider = IterListProvider([PollinationsAI, Jmuz, HuggingChat])
 )

 qwen_2_5_1m = Model(
-    name = 'qwen-2.5-1m-demo',
+    name = 'qwen-2.5-1m',
     base_provider = 'Qwen',
     best_provider = HuggingSpace
 )

+qwen_2_5_max = Model(
+    name = 'qwen-2-5-max',
+    base_provider = 'Qwen',
+    best_provider = HuggingSpace
+)
@@ -498,13 +534,13 @@ deepseek_chat = Model(
 deepseek_v3 = Model(
     name = 'deepseek-v3',
     base_provider = 'DeepSeek',
-    best_provider = IterListProvider([Blackbox, DeepInfraChat, LambdaChat, OIVSCode, Liaobots])
+    best_provider = IterListProvider([Blackbox, DeepInfraChat, LambdaChat, OIVSCode, TypeGPT, Liaobots])
 )

 deepseek_r1 = Model(
     name = 'deepseek-r1',
     base_provider = 'DeepSeek',
-    best_provider = IterListProvider([Blackbox, DeepInfraChat, Glider, LambdaChat, PollinationsAI, Jmuz, Liaobots, HuggingChat, HuggingFace])
+    best_provider = IterListProvider([Blackbox, DeepInfraChat, Glider, LambdaChat, PollinationsAI, TypeGPT, Liaobots, Jmuz, HuggingChat, HuggingFace])
 )

 janus_pro_7b = VisionModel(
@@ -667,7 +703,7 @@ lfm_40b = Model(
 evil = Model(
     name = 'evil',
     base_provider = 'Evil Mode - Experimental',
-    best_provider = PollinationsAI
+    best_provider = IterListProvider([PollinationsAI, TypeGPT])
 )
@@ -741,12 +777,16 @@ class ModelUtils:
         ############

         ### OpenAI ###
+        # gpt-3.5
+        gpt_3_5_turbo.name: gpt_3_5_turbo,
+
         # gpt-4
         gpt_4.name: gpt_4,

         # gpt-4o
         gpt_4o.name: gpt_4o,
         gpt_4o_mini.name: gpt_4o_mini,
+        gpt_4o_audio.name: gpt_4o_audio,

         # o1
         o1.name: o1,
@@ -837,12 +877,19 @@ class ModelUtils:
         gigachat.name: gigachat,

         ### Qwen ###
+        # qwen-1.5
         qwen_1_5_7b.name: qwen_1_5_7b,

+        # qwen-2
         qwen_2_72b.name: qwen_2_72b,
         qwen_2_vl_7b.name: qwen_2_vl_7b,

+        # qwen-2.5
+        qwen_2_5.name: qwen_2_5,
         qwen_2_5_72b.name: qwen_2_5_72b,
         qwen_2_5_coder_32b.name: qwen_2_5_coder_32b,
+        qwen_2_5_1m.name: qwen_2_5_1m,
+        qwen_2_5_max.name: qwen_2_5_max,

         # qwq/qvq
         qwq_32b.name: qwq_32b,
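These hunks extend the flat `name → Model` registry that `ModelUtils` keeps for resolving user-supplied model strings. A minimal sketch of the registry and its lookup side (the `convert` dict name matches the real attribute; the `get_model` helper and the trimmed `Model` dataclass are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    base_provider: str
    best_provider: object = None

qwen_2_5_1m = Model(name="qwen-2.5-1m", base_provider="Qwen")
qwen_2_5_max = Model(name="qwen-2-5-max", base_provider="Qwen")

class ModelUtils:
    # Each entry is keyed by the model's public name, as in the hunks above
    convert = {
        qwen_2_5_1m.name: qwen_2_5_1m,
        qwen_2_5_max.name: qwen_2_5_max,
    }

def get_model(name: str) -> Model:
    """Resolve a model string against the registry, with a clear error."""
    try:
        return ModelUtils.convert[name]
    except KeyError:
        raise ValueError(f"Unknown model: {name!r}") from None

print(get_model("qwen-2.5-1m").base_provider)  # Qwen
```

Keying the dict on `model.name` rather than a literal string keeps the registry consistent with each `Model` definition: renaming `'qwen-2.5-1m-demo'` to `'qwen-2.5-1m'` (as this PR does) updates the lookup key automatically.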