AI Provider and Model Updates: Adding New, Removing Deprecated, and Enhancing Functionality (#2739)

* docs(docs/providers-and-models.md): update provider listings and model information

* feat(g4f/models.py): update model configurations and expand provider support

* fix(g4f/gui/client/static/js/chat.v1.js): correct provider checkbox initialization logic

* feat(g4f/Provider/Blackbox.py): update model configurations and premium handling

* feat(g4f/Provider/ChatGLM.py): add finish reason handling and update default model

* chore(g4f/Provider/DDG.py): reorder model entries for consistency

* feat(g4f/Provider/ImageLabs.py): update default image model to sdxl-turbo

* feat(g4f/Provider/Liaobots.py): update supported model configurations and aliases

* feat(g4f/Provider/OIVSCode.py): update API endpoint and expand model support

* fix(g4f/Provider/needs_auth/CablyAI.py): enforce authentication requirement

* chore(g4f/Provider/BlackboxAPI.py): remove deprecated provider

* fix(g4f/providers/base_provider.py): improve cache validation in AsyncAuthedProvider

* Update g4f/models.py

* fix(g4f/Provider/Liaobots.py): remove deprecated Gemini model aliases

* chore(g4f/models.py): remove Grok-2 and update Gemini provider configurations

* chore(docs/providers-and-models.md): remove deprecated Grok models from provider listings

* feat(g4f/Provider/AllenAI.py): add new Ai2 Playground provider

* feat(g4f/models.py): add Ai2 models and update provider references

* feat(docs/providers-and-models.md): update providers and models documentation

* fix(g4f/models.py): update experimental model provider configuration

* fix(g4f/Provider/PollinationsImage.py): initialize image_models list and update label

* fix(g4f/Provider/PollinationsAI.py): resolve model initialization and alias conflicts

* refactor(g4f/Provider/PollinationsAI.py): improve model initialization and error handling

* refactor(g4f/Provider/PollinationsImage.py): improve model synchronization and initialization

* Update g4f/Provider/AllenAI.py

---------

Co-authored-by: kqlio67 <>
Author: kqlio67, 2025-02-24 14:53:20 +00:00 (committed by GitHub)
Parent: f23f66518b
Commit: 07a8dfdff7
15 changed files with 422 additions and 238 deletions

docs/providers-and-models.md

@ -37,14 +37,14 @@ This document provides an overview of various AI providers and models, including
### Providers No auth required
| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gpt-4, gpt-4o, o3-mini, gemini-1.5-flash, gemini-1.5-pro, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, llama-3.3-70b, mixtral-small-28b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo, deepseek-r1, gemini-2.0-flash` _**(+32)**_|`flux`|`blackboxai, gpt-4o, o3-mini, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, gemini-2.0-flash`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[api.blackbox.ai](https://api.blackbox.ai)|No auth required|`g4f.Provider.BlackboxAPI`|`deepseek-v3, deepseek-r1, deepseek-chat, mixtral-small-28b, dbrx-instruct, qwq-32b, hermes-2-dpo`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[playground.allenai.org](https://playground.allenai.org)|No auth required|`g4f.Provider.AllenAI`|`tulu-3-405b, olmo-2-13b, tulu-3-1-8b, tulu-3-70b, olmoe-0125`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gemini-1.5-flash, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, llama-3.3-70b, mixtral-small-28b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo, deepseek-v3, deepseek-r1, gemini-2.0-flash` _**(+35)**_|`flux`|`blackboxai, gpt-4o, o3-mini, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, gemini-2.0-flash`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(+7)**_|❌|❌|✔|![Error](https://img.shields.io/badge/HTTPError-f48d37)|
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, gpt-4o`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`o3-mini, gpt-4, gpt-4o-mini, claude-3-haiku, llama-3.3-70b, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, llama-3.3-70b, claude-3-haiku, o3-mini, mixtral-small-24b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-28b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b, yi-34b, qwen-2-72b, dolphin-2.6, dolphin-2.9, dbrx-instruct, airoboros-70b, lzlv-70b, wizardlm-2-7b, mixtral-8x22b, minicpm-2.5`|❌|`llama-3.2-90b, minicpm-2.5`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`mistral-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
@ -55,9 +55,9 @@ This document provides an overview of various AI providers and models, including
|[editor.imagelabs.net](https://editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|`sdxl-turbo`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b`|`flux-dev, flux-schnell, sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`claude-3-haiku, claude-3-opus, claude-3.5-sonnet, deepseek-r1, deepseek-chat, gemini-exp, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash-thinking, gpt-4, gpt-4o, gpt-4o-mini, llama-3-70b, llama-3-8b, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.2-90b, llama-3.3-70b, mixtral-8x7b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`grok-2, gpt-4o-mini, gpt-4o, gpt-4, o1-preview, o1-mini, deepseek-r1, deepseek-v3, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash, gemini-2.0-flash-thinking`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`gpt-4o-mini, gpt-4o, gpt-4, o1-preview, deepseek-r1, deepseek-v3, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-2.0-flash, gemini-2.0-flash-thinking, grok-3, grok-3-r1, o3-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[mhystical.cc](https://mhystical.cc)|[Optional API key](https://mhystical.cc/dashboard)|`g4f.Provider.Mhystical`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini`|❌|`gpt-4o-mini`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini, deepseek-v3`|❌|`gpt-4o-mini`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
@ -92,7 +92,7 @@ This document provides an overview of various AI providers and models, including
|[copilot.microsoft.com](https://copilot.microsoft.com)|[Manual cookies](https://copilot.microsoft.com)|`g4f.Provider.CopilotAccount`|✔ _**(1+)**_|✔ _**(1+)**_|✔ _**(1+)**_|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com](https://deepinfra.com)|[Get API key](https://deepinfra.com/dash/api_keys)|`g4f.Provider.DeepInfra`|✔ _**(17+)**_|✔ _**(6+)**_|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[platform.deepseek.com](https://platform.deepseek.com)|[Get API key](https://platform.deepseek.com/api_keys)|`g4f.Provider.DeepSeek`|✔ _**(1+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini, gemini-1.5-flash, gemini-1.5-pro`|`gemini`|`gemini`|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini-2.0`|`gemini-2.0`|`gemini-2.0`|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[ai.google.dev](https://ai.google.dev)|[Get API key](https://aistudio.google.com/u/0/apikey)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|`gemini-1.5-pro`|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|[Manual cookies](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔ _**(3+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[github.com/copilot](https://github.com/copilot)|[Manual cookies](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔ _**(4+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
@ -120,13 +120,12 @@ This document provides an overview of various AI providers and models, including
### Text Models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-4|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4|OpenAI|8+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|6+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|1+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-preview|OpenAI|1+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-mini|OpenAI|1+ Providers|[openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|o3-mini|OpenAI|1+ Providers|[openai.com](https://openai.com/index/openai-o3-mini/)|
|o3-mini|OpenAI|2+ Providers|[openai.com](https://openai.com/index/openai-o3-mini/)|
|gigachat|GigaChat|1+ Providers|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|
|meta-ai|Meta|1+ Providers|[ai.meta.com](https://ai.meta.com/)|
|llama-2-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
@ -140,19 +139,20 @@ This document provides an overview of various AI providers and models, including
|llama-3.2-11b|Meta Llama|3+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llama-3.2-90b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)|
|llama-3.3-70b|Meta Llama|6+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-3/)|
|mixtral-8x7b|Mistral|2+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mixtral-8x7b|Mistral|1+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mixtral-8x22b|Mistral|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1)|
|mistral-nemo|Mistral|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|mixtral-small-24b|Mistral|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)|
|mixtral-small-28b|Mistral|3+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-small-28b/)|
|hermes-2-dpo|NousResearch|2+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|hermes-2-dpo|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|phi-3.5-mini|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|
|phi-4|Microsoft|1+ Providers|[techcommunity.microsoft.com](https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090)|
|wizardlm-2-7b|Microsoft|1+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|wizardlm-2-8x22b|Microsoft|2+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|gemini|Google DeepMind|1+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/)|
|gemini-2.0|Google DeepMind|1+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/)|
|gemini-exp|Google DeepMind|1+ Providers|[blog.google](https://blog.google/feed/gemini-exp-1206/)|
|gemini-1.5-flash|Google DeepMind|5+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-1.5-pro|Google DeepMind|5+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemini-1.5-flash|Google DeepMind|3+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-1.5-pro|Google DeepMind|2+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemini-2.0-flash|Google DeepMind|4+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-2.0-flash-thinking|Google DeepMind|1+ Providers|[ai.google.dev](https://ai.google.dev/gemini-api/docs/thinking-mode)|
|claude-3-haiku|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
@ -171,19 +171,21 @@ This document provides an overview of various AI providers and models, including
|qwen-2.5-72b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)|
|qwen-2.5-coder-32b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
|qwen-2.5-1m-demo|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-1M-Demo)|
|qwq-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qwq-32b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qvq-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/QVQ-72B-Preview)|
|pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
|deepseek-chat|DeepSeek|4+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-chat|DeepSeek|3+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-v3|DeepSeek|4+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|9+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|grok-2|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-2)|
|deepseek-r1|DeepSeek|8+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|grok-3|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-3)|
|grok-3-r1|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-3)|
|sonar|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-pro|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-reasoning|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-reasoning-pro|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|r1-1776|Perplexity AI|1+ Providers|[perplexity.ai](https://www.perplexity.ai/hub/blog/open-sourcing-r1-1776)|
|nemotron-70b|Nvidia|2+ Providers|[build.nvidia.com](https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct)|
|dbrx-instruct|Databricks|3+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|dbrx-instruct|Databricks|2+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|glm-4|THUDM|1+ Providers|[github.com/THUDM](https://github.com/THUDM/GLM-4)|
|mini_max|MiniMax|1+ Providers|[hailuo.ai](https://www.hailuo.ai/)|
|yi-34b|01-ai|1+ Providers|[huggingface.co](https://huggingface.co/01-ai/Yi-34B-Chat)|
@ -192,6 +194,11 @@ This document provides an overview of various AI providers and models, including
|airoboros-70b|DeepInfra|1+ Providers|[huggingface.co](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b)|
|lzlv-70b|Lizpreciatior|1+ Providers|[huggingface.co](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b)|
|minicpm-2.5|OpenBMB|1+ Providers|[huggingface.co](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5)|
|tulu-3-405b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|olmo-2-13b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|tulu-3-1-8b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|tulu-3-70b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|olmoe-0125|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|evil|Evil Mode - Experimental|1+ Providers|[]( )|
---

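A minimal sketch of how one of the no-auth providers listed above can be called through the g4f client; the provider, model, and prompt here are illustrative choices, not part of this change:

```python
# Sketch: chat with a no-auth provider from the table above.
from g4f.client import Client
from g4f.Provider import DDG

client = Client(provider=DDG)
response = client.chat.completions.create(
    model="o3-mini",  # one of the models the DDG row advertises
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```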
g4f/Provider/AllenAI.py Normal file (176 lines)

@ -0,0 +1,176 @@
from __future__ import annotations

import json
from uuid import uuid4
from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..requests.raise_for_status import raise_for_status
from ..providers.response import FinishReason, JsonConversation
from .helper import format_prompt


class Conversation(JsonConversation):
    parent: str = None
    x_anonymous_user_id: str = None

    def __init__(self, model: str):
        super().__init__()  # Ensure parent class is initialized
        self.model = model
        self.messages = []  # Instance-specific list
        if not self.x_anonymous_user_id:
            self.x_anonymous_user_id = str(uuid4())


class AllenAI(AsyncGeneratorProvider, ProviderModelMixin):
    label = "Ai2 Playground"
    url = "https://playground.allenai.org"
    login_url = None
    api_endpoint = "https://olmo-api.allen.ai/v4/message/stream"

    working = True
    needs_auth = False
    use_nodriver = False
    supports_stream = True
    supports_system_message = False
    supports_message_history = True

    default_model = 'tulu3-405b'
    models = [
        default_model,
        'OLMo-2-1124-13B-Instruct',
        'tulu-3-1-8b',
        'Llama-3-1-Tulu-3-70B',
        'olmoe-0125'
    ]
    model_aliases = {
        "tulu-3-405b": default_model,
        "olmo-2-13b": "OLMo-2-1124-13B-Instruct",
        "tulu-3-1-8b": "tulu-3-1-8b",
        "tulu-3-70b": "Llama-3-1-Tulu-3-70B",
    }

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        host: str = "inferd",
        private: bool = True,
        parent: str = None,
        top_p: float = None,
        temperature: float = None,
        conversation: Conversation = None,
        return_conversation: bool = False,
        **kwargs
    ) -> AsyncResult:
        # Initialize or reuse the conversation
        if conversation is None:
            conversation = Conversation(model)

        # Generate a new multipart boundary for each request
        boundary = f"----WebKitFormBoundary{uuid4().hex}"

        headers = {
            "accept": "*/*",
            "accept-language": "en-US,en;q=0.9",
            "content-type": f"multipart/form-data; boundary={boundary}",
            "origin": cls.url,
            "referer": f"{cls.url}/",
            "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
            "x-anonymous-user-id": conversation.x_anonymous_user_id,
        }

        # Build the multipart form data by hand
        form_data = [
            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="model"\r\n\r\n{model}\r\n',

            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="host"\r\n\r\n{host}\r\n',

            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="content"\r\n\r\n{format_prompt(messages)}\r\n',

            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="private"\r\n\r\n{str(private).lower()}\r\n'
        ]

        # Add the parent ID if the conversation already has one
        if conversation.parent:
            form_data.append(
                f'--{boundary}\r\n'
                f'Content-Disposition: form-data; name="parent"\r\n\r\n{conversation.parent}\r\n'
            )

        # Add optional sampling parameters
        if temperature is not None:
            form_data.append(
                f'--{boundary}\r\n'
                f'Content-Disposition: form-data; name="temperature"\r\n\r\n{temperature}\r\n'
            )

        if top_p is not None:
            form_data.append(
                f'--{boundary}\r\n'
                f'Content-Disposition: form-data; name="top_p"\r\n\r\n{top_p}\r\n'
            )

        form_data.append(f'--{boundary}--\r\n')
        data = "".join(form_data).encode()

        async with ClientSession(headers=headers) as session:
            async with session.post(
                cls.api_endpoint,
                data=data,
                proxy=proxy,
            ) as response:
                await raise_for_status(response)
                current_parent = None
                full_response = ""  # Accumulates all streamed chunks

                async for chunk in response.content:
                    if not chunk:
                        continue
                    decoded = chunk.decode(errors="ignore")
                    for line in decoded.splitlines():
                        line = line.strip()
                        if not line:
                            continue

                        try:
                            data = json.loads(line)
                        except json.JSONDecodeError:
                            continue

                        if isinstance(data, dict):
                            # Track the assistant child message as the new parent ID
                            if data.get("children"):
                                for child in data["children"]:
                                    if child.get("role") == "assistant":
                                        current_parent = child.get("id")
                                        break

                            # Only process content coming from the assistant
                            if "message" in data and data.get("content"):
                                content = data["content"]
                                # Skip empty content blocks
                                if content.strip():
                                    full_response += content
                                    yield content

                            # Handle the final response
                            if data.get("final") or data.get("finish_reason") == "stop":
                                if current_parent:
                                    conversation.parent = current_parent

                                # Append the exchange to the conversation history,
                                # storing the accumulated reply rather than just the last chunk
                                conversation.messages.extend([
                                    {"role": "user", "content": format_prompt(messages)},
                                    {"role": "assistant", "content": full_response}
                                ])

                                if return_conversation:
                                    yield conversation

                                yield FinishReason("stop")
                                return

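A quick way to exercise the new provider is to consume its async generator directly; a sketch under the interface shown above (the prompt is illustrative, and the stream may also yield a Conversation when return_conversation is set):

```python
# Sketch: stream a reply from the new AllenAI provider.
import asyncio
from g4f.Provider import AllenAI
from g4f.providers.response import FinishReason

async def main():
    async for chunk in AllenAI.create_async_generator(
        model=AllenAI.default_model,  # 'tulu3-405b'
        messages=[{"role": "user", "content": "What is OLMo?"}],
    ):
        if isinstance(chunk, FinishReason):
            break  # stream finished
        print(chunk, end="", flush=True)

asyncio.run(main())
```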
g4f/Provider/Blackbox.py

@ -7,7 +7,6 @@ import random
import string
from pathlib import Path
from typing import Optional
from datetime import datetime, timezone
from ..typing import AsyncResult, Messages, ImagesType
from ..requests.raise_for_status import raise_for_status
@ -36,14 +35,16 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
supports_system_message = True
supports_message_history = True
default_model = "blackboxai"
default_model = "BLACKBOXAI"
default_vision_model = default_model
default_image_model = 'ImageGeneration'
image_models = [default_image_model]
vision_models = [default_vision_model, 'GPT-4o', 'o3-mini', 'Gemini-PRO', 'gemini-1.5-flash', 'llama-3.1-8b', 'llama-3.1-70b', 'llama-3.1-405b', 'Gemini-Flash-2.0']
userSelectedModel = ['GPT-4o', 'o3-mini', 'Gemini-PRO', 'Claude-Sonnet-3.5', 'DeepSeek-V3', 'DeepSeek-R1', 'blackboxai-pro', 'Meta-Llama-3.3-70B-Instruct-Turbo', 'Mistral-Small-24B-Instruct-2501', 'DeepSeek-LLM-Chat-(67B)', 'DBRX-Instruct', 'Qwen-QwQ-32B-Preview', 'Nous-Hermes-2-Mixtral-8x7B-DPO', 'Gemini-Flash-2.0']
premium_models = ['GPT-4o', 'o1', 'o3-mini', 'Gemini-PRO', 'Claude-Sonnet-3.5']
userSelectedModel = ['DeepSeek-V3', 'DeepSeek-R1', 'BLACKBOXAI-PRO', 'Meta-Llama-3.3-70B-Instruct-Turbo', 'Mistral-Small-24B-Instruct-2501', 'DeepSeek-LLM-Chat-(67B)', 'DBRX-Instruct', 'Qwen-QwQ-32B-Preview', 'Nous-Hermes-2-Mixtral-8x7B-DPO', 'Gemini-Flash-2.0'] + premium_models
agentMode = {
'DeepSeek-V3': {'mode': True, 'id': "deepseek-chat", 'name': "DeepSeek-V3"},
@ -58,6 +59,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
}
trendingAgentMode = {
"o1": {'mode': True, 'id': 'o1'},
"o3-mini": {'mode': True, 'id': 'o3-mini'},
"gemini-1.5-flash": {'mode': True, 'id': 'Gemini'},
"llama-3.1-8b": {'mode': True, 'id': "llama-3.1-8b"},
@ -75,9 +77,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
'PyTorch Agent': {'mode': True, 'id': "PyTorch Agent"},
'React Agent': {'mode': True, 'id': "React Agent"},
'Xcode Agent': {'mode': True, 'id': "Xcode Agent"},
'AngularJS Agent': {'mode': True, 'id': "AngularJS Agent"},
'blackboxai-pro': {'mode': True, 'id': "BLACKBOXAI-PRO"},
'repomap': {'mode': True, 'id': "repomap"},
'BLACKBOXAI-PRO': {'mode': True, 'id': "BLACKBOXAI-PRO"},
'Heroku Agent': {'mode': True, 'id': "Heroku Agent"},
'Godot Agent': {'mode': True, 'id': "Godot Agent"},
'Go Agent': {'mode': True, 'id': "Go Agent"},
@ -97,16 +97,11 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
'builder Agent': {'mode': True, 'id': "builder Agent"},
}
premium_models = ['Claude-Sonnet-3.5']
models = list(dict.fromkeys([default_model, *userSelectedModel, *image_models, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))
model_aliases = {
"gpt-4": "GPT-4o",
"gpt-4o": "GPT-4o",
"claude-3.5-sonnet": "Claude-Sonnet-3.5", # Premium
"blackboxai": "BLACKBOXAI",
"gemini-1.5-flash": "gemini-1.5-flash",
"gemini-1.5-pro": "Gemini-PRO",
"deepseek-v3": "DeepSeek-V3",
"deepseek-r1": "DeepSeek-R1",
"llama-3.3-70b": "Meta-Llama-3.3-70B-Instruct-Turbo",
@ -116,6 +111,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
"qwq-32b": "Qwen-QwQ-32B-Preview",
"hermes-2-dpo": "Nous-Hermes-2-Mixtral-8x7B-DPO",
"gemini-2.0-flash": "Gemini-Flash-2.0",
"blackboxai-pro": "BLACKBOXAI-PRO",
"flux": "ImageGeneration",
}
@ -261,9 +257,6 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
"content": msg["content"],
"role": msg["role"]
}
if msg["role"] == "assistant" and i == len(messages)-1:
current_time = datetime.now(timezone.utc).isoformat(timespec='milliseconds').replace('+00:00', 'Z')
current_msg["createdAt"] = current_time
current_messages.append(current_msg)
if images is not None:
@ -279,14 +272,6 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
"title": ""
}
# Calculate the value for expires + lastChecked
expires_iso = datetime.now(timezone.utc).isoformat(timespec='milliseconds').replace('+00:00', 'Z')
last_checked_millis = int(datetime.now().timestamp() * 1000)
# Fake data of a premium user (temporarily working)
fake_session = {"user":{"name":"John Doe","email":"john.doe@gmail.com","image":"https://lh3.googleusercontent.com/a/ACg8ocK9X7mNpQ2vR4jH3tY8wL5nB1xM6fDS9JW2kLpTn4Vy3hR2xN4m=s96-c"},"expires":expires_iso}
fake_subscriptionCache = {"status":"PREMIUM", "expiryTimestamp":None,"lastChecked":last_checked_millis}
data = {
"messages": current_messages,
"agentMode": cls.agentMode.get(model, {}) if model in cls.agentMode else {},
@ -316,10 +301,17 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
"domains": None,
"vscodeClient": False,
"codeInterpreterMode": False,
"customProfile": {"name": "", "occupation": "", "traits": [], "additionalInfo": "", "enableNewChats": False},
"session": fake_session,
"isPremium": True,
"subscriptionCache": fake_subscriptionCache,
"customProfile": {
"name": "",
"occupation": "",
"traits": [],
"additionalInfo": "",
"enableNewChats": False
},
"session": None,
"isPremium": False,
"subscriptionCache": None,
"beastMode": False,
"webSearchMode": False
}

g4f/Provider/BlackboxAPI.py (deleted file)

@ -1,75 +0,0 @@
from __future__ import annotations

from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..requests.raise_for_status import raise_for_status


class BlackboxAPI(AsyncGeneratorProvider, ProviderModelMixin):
    label = "Blackbox AI API"
    url = "https://api.blackbox.ai"
    api_endpoint = "https://api.blackbox.ai/api/chat"

    working = True
    needs_auth = False
    supports_stream = False
    supports_system_message = True
    supports_message_history = True

    default_model = 'deepseek-ai/DeepSeek-V3'
    models = [
        default_model,
        'deepseek-ai/DeepSeek-R1',
        'mistralai/Mistral-Small-24B-Instruct-2501',
        'deepseek-ai/deepseek-llm-67b-chat',
        'databricks/dbrx-instruct',
        'Qwen/QwQ-32B-Preview',
        'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO'
    ]

    model_aliases = {
        "deepseek-v3": "deepseek-ai/DeepSeek-V3",
        "deepseek-r1": "deepseek-ai/DeepSeek-R1",
        "deepseek-chat": "deepseek-ai/deepseek-llm-67b-chat",
        "mixtral-small-28b": "mistralai/Mistral-Small-24B-Instruct-2501",
        "dbrx-instruct": "databricks/dbrx-instruct",
        "qwq-32b": "Qwen/QwQ-32B-Preview",
        "hermes-2-dpo": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
    }

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        max_tokens: str = None,
        **kwargs
    ) -> AsyncResult:
        model = cls.get_model(model)

        headers = {
            "Content-Type": "application/json",
        }

        async with ClientSession(headers=headers) as session:
            data = {
                "messages": messages,
                "model": model,
                "max_tokens": max_tokens
            }
            async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response:
                await raise_for_status(response)

                async for chunk in response.content:
                    if not chunk:
                        continue
                    text = chunk.decode(errors='ignore')
                    try:
                        if text:
                            yield text
                    except Exception as e:
                        return

g4f/Provider/ChatGLM.py

@ -8,6 +8,7 @@ from aiohttp import ClientSession
from ..typing import AsyncResult, Messages
from ..requests.raise_for_status import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..providers.response import FinishReason
class ChatGLM(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://chatglm.cn"
@ -18,9 +19,8 @@ class ChatGLM(AsyncGeneratorProvider, ProviderModelMixin):
supports_system_message = False
supports_message_history = False
default_model = "all-tools-230b"
default_model = "glm-4"
models = [default_model]
model_aliases = {"glm-4": default_model}
@classmethod
async def create_async_generator(
@ -85,9 +85,13 @@ class ChatGLM(AsyncGeneratorProvider, ProviderModelMixin):
if parts:
content = parts[0].get('content', [])
if content:
text = content[0].get('text', '')[yield_text:]
text_content = content[0].get('text', '')
text = text_content[yield_text:]
if text:
yield text
yield_text += len(text)
# Yield FinishReason when status is 'finish'
if json_data.get('status') == 'finish':
yield FinishReason("stop")
except json.JSONDecodeError:
pass

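With the new finish-reason handling, a consumer can distinguish a completed stream from a truncated one. A sketch, assuming FinishReason exposes the reason it was constructed with:

```python
# Sketch: separate text chunks from the FinishReason that ChatGLM
# now yields when the upstream status is 'finish'.
import asyncio
from g4f.Provider import ChatGLM
from g4f.providers.response import FinishReason

async def main():
    async for chunk in ChatGLM.create_async_generator(
        model="glm-4",
        messages=[{"role": "user", "content": "Hello"}],
    ):
        if isinstance(chunk, FinishReason):
            print(f"\n[finished: {chunk.reason}]")
        else:
            print(chunk, end="", flush=True)

asyncio.run(main())
```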
g4f/Provider/DDG.py

@ -36,12 +36,12 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True
default_model = "gpt-4o-mini"
models = [default_model, "o3-mini", "claude-3-haiku-20240307", "meta-llama/Llama-3.3-70B-Instruct-Turbo", "mistralai/Mistral-Small-24B-Instruct-2501"]
models = [default_model, "meta-llama/Llama-3.3-70B-Instruct-Turbo", "claude-3-haiku-20240307", "o3-mini", "mistralai/Mistral-Small-24B-Instruct-2501"]
model_aliases = {
"gpt-4": "gpt-4o-mini",
"claude-3-haiku": "claude-3-haiku-20240307",
"llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"claude-3-haiku": "claude-3-haiku-20240307",
"mixtral-small-24b": "mistralai/Mistral-Small-24B-Instruct-2501",
}

g4f/Provider/ImageLabs.py

@ -18,11 +18,10 @@ class ImageLabs(AsyncGeneratorProvider, ProviderModelMixin):
supports_system_message = False
supports_message_history = False
default_model = 'general'
default_model = 'sdxl-turbo'
default_image_model = default_model
image_models = [default_image_model]
models = image_models
model_aliases = {"sdxl-turbo": default_model}
@classmethod
async def create_async_generator(

g4f/Provider/Liaobots.py

@ -18,6 +18,51 @@ models = {
"tokenLimit": 7800,
"context": "8K",
},
"grok-3": {
"id": "grok-3",
"name": "Grok-3",
"model": "Grok",
"provider": "x.ai",
"maxLength": 800000,
"tokenLimit": 200000,
"context": "200K",
},
"grok-3-r1": {
"id": "grok-3-r1",
"name": "Grok-3-Thinking",
"model": "Grok",
"provider": "x.ai",
"maxLength": 800000,
"tokenLimit": 200000,
"context": "200K",
},
"deepseek-r1": {
"id": "deepseek-r1",
"name": "DeepSeek-R1",
"model": "DeepSeek-R1",
"provider": "DeepSeek",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"deepseek-r1-distill-llama-70b": {
"id": "deepseek-r1-distill-llama-70b",
"name": "DeepSeek-R1-70B",
"model": "DeepSeek-R1-70B",
"provider": "DeepSeek",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"deepseek-v3": {
"id": "deepseek-v3",
"name": "DeepSeek-V3",
"model": "DeepSeek-V3",
"provider": "DeepSeek",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"gpt-4o-2024-11-20": {
"id": "gpt-4o-2024-11-20",
"name": "GPT-4o",
@ -36,6 +81,15 @@ models = {
"tokenLimit": 126000,
"context": "128K",
},
"o3-mini": {
"id": "o3-mini",
"name": "o3-mini",
"model": "o3",
"provider": "OpenAI",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"o1-preview-2024-09-12": {
"id": "o1-preview-2024-09-12",
"name": "o1-preview",
@ -45,51 +99,6 @@ models = {
"tokenLimit": 100000,
"context": "128K",
},
"o1-mini-2024-09-12": {
"id": "o1-mini-2024-09-12",
"name": "o1-mini",
"model": "o1",
"provider": "OpenAI",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"DeepSeek-R1-Distill-Llama-70b": {
"id": "DeepSeek-R1-Distill-Llama-70b",
"name": "DeepSeek-R1-70B",
"model": "DeepSeek-R1-70B",
"provider": "DeepSeek",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"DeepSeek-R1": {
"id": "DeepSeek-R1",
"name": "DeepSeek-R1",
"model": "DeepSeek-R1",
"provider": "DeepSeek",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"DeepSeek-V3": {
"id": "DeepSeek-V3",
"name": "DeepSeek-V3",
"model": "DeepSeek-V3",
"provider": "DeepSeek",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"grok-2": {
"id": "grok-2",
"name": "Grok-2",
"model": "Grok",
"provider": "x.ai",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "100K",
},
"claude-3-opus-20240229": {
"id": "claude-3-opus-20240229",
"name": "Claude-3-Opus",
@ -144,9 +153,9 @@ models = {
"tokenLimit": 200000,
"context": "200K",
},
"gemini-2.0-flash-exp": {
"id": "gemini-2.0-flash-exp",
"name": "Gemini-2.0-Flash-Exp",
"gemini-2.0-flash": {
"id": "gemini-2.0-flash",
"name": "Gemini-2.0-Flash",
"model": "Gemini",
"provider": "Google",
"maxLength": 4000000,
@ -162,18 +171,9 @@ models = {
"tokenLimit": 1000000,
"context": "1024K",
},
"gemini-1.5-flash-002": {
"id": "gemini-1.5-flash-002",
"name": "Gemini-1.5-Flash-1M",
"model": "Gemini",
"provider": "Google",
"maxLength": 4000000,
"tokenLimit": 1000000,
"context": "1024K",
},
"gemini-1.5-pro-002": {
"id": "gemini-1.5-pro-002",
"name": "Gemini-1.5-Pro-1M",
"gemini-2.0-pro-exp": {
"id": "gemini-2.0-pro-exp",
"name": "Gemini-2.0-Pro-Exp",
"model": "Gemini",
"provider": "Google",
"maxLength": 4000000,
@ -197,11 +197,8 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
"gpt-4": default_model,
"o1-preview": "o1-preview-2024-09-12",
"o1-mini": "o1-mini-2024-09-12",
"deepseek-r1": "DeepSeek-R1-Distill-Llama-70b",
"deepseek-r1": "DeepSeek-R1",
"deepseek-v3": "DeepSeek-V3",
"deepseek-r1": "deepseek-r1-distill-llama-70b",
"claude-3-opus": "claude-3-opus-20240229",
"claude-3.5-sonnet": "claude-3-5-sonnet-20240620",
@ -210,10 +207,7 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
"claude-3-opus": "claude-3-opus-20240229-t",
"claude-3.5-sonnet": "claude-3-5-sonnet-20241022-t",
"gemini-2.0-flash": "gemini-2.0-flash-exp",
"gemini-2.0-flash-thinking": "gemini-2.0-flash-thinking-exp",
"gemini-1.5-flash": "gemini-1.5-flash-002",
"gemini-1.5-pro": "gemini-1.5-pro-002"
}
_auth_code = ""

g4f/Provider/OIVSCode.py

@ -5,7 +5,7 @@ from .template import OpenaiTemplate
class OIVSCode(OpenaiTemplate):
label = "OI VSCode Server"
url = "https://oi-vscode-server.onrender.com"
api_base = "https://oi-vscode-server.onrender.com/v1"
api_base = "https://oi-vscode-server-2.onrender.com/v1"
working = True
needs_auth = False
@ -16,6 +16,9 @@ class OIVSCode(OpenaiTemplate):
default_model = "gpt-4o-mini-2024-07-18"
default_vision_model = default_model
vision_models = [default_model, "gpt-4o-mini"]
models = vision_models
models = vision_models + ["deepseek-ai/DeepSeek-V3"]
model_aliases = {"gpt-4o-mini": "gpt-4o-mini-2024-07-18"}
model_aliases = {
"gpt-4o-mini": "gpt-4o-mini-2024-07-18",
"deepseek-v3": "deepseek-ai/DeepSeek-V3"
}

g4f/Provider/PollinationsAI.py

@ -39,9 +39,12 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
default_model = "openai"
default_image_model = "flux"
default_vision_model = "gpt-4o"
image_models = ["flux-pro", "flux-dev", "flux-schnell", "midjourney", "dall-e-3", "turbo"]
text_models = [default_model]
image_models = [default_image_model]
extra_image_models = ["flux-pro", "flux-dev", "flux-schnell", "midjourney", "dall-e-3"]
vision_models = [default_vision_model, "gpt-4o-mini"]
extra_text_models = ["claude", "claude-email", "deepseek-reasoner", "deepseek-r1"] + vision_models
_models_loaded = False
model_aliases = {
### Text Models ###
"gpt-4o-mini": "openai",
@ -66,27 +69,51 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
### Image Models ###
"sdxl-turbo": "turbo",
}
text_models = []
@classmethod
def get_models(cls, **kwargs):
if not cls.text_models or not cls.image_models:
if not cls._models_loaded:
try:
# Update image models
image_response = requests.get("https://image.pollinations.ai/models")
image_response.raise_for_status()
new_image_models = image_response.json()
cls.image_models = list(dict.fromkeys([*cls.image_models, *new_image_models]))
# Combine models without duplicates
all_image_models = (
cls.image_models + # Already contains the default
cls.extra_image_models +
new_image_models
)
cls.image_models = list(dict.fromkeys(all_image_models))
# Update text models
text_response = requests.get("https://text.pollinations.ai/models")
text_response.raise_for_status()
original_text_models = [model.get("name") for model in text_response.json()]
original_text_models = [
model.get("name")
for model in text_response.json()
]
combined_text = cls.extra_text_models + [
# Combining text models
combined_text = (
cls.text_models + # Already contains the default
cls.extra_text_models +
[
model for model in original_text_models
if model not in cls.extra_text_models
]
)
cls.text_models = list(dict.fromkeys(combined_text))
cls._models_loaded = True
except Exception as e:
# Save default models in case of an error
if not cls.text_models:
cls.text_models = [cls.default_model]
if not cls.image_models:
cls.image_models = [cls.default_image_model]
raise RuntimeError(f"Failed to fetch models: {e}") from e
return cls.text_models + cls.image_models
@ -114,6 +141,7 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
cache: bool = False,
**kwargs
) -> AsyncResult:
cls.get_models()
if images is not None and not model:
model = cls.default_vision_model
try:

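The refactor makes model discovery lazy and cached: the two HTTP fetches happen once, and later calls serve the class-level lists. A sketch of the observable behavior (network access assumed):

```python
# Sketch: get_models() fetches remote model lists on first call,
# then returns the cached class-level lists.
from g4f.Provider import PollinationsAI

first = PollinationsAI.get_models()   # triggers the HTTP fetches
second = PollinationsAI.get_models()  # served from cache (_models_loaded)
assert first == second
print(len(first), "models available")
```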
g4f/Provider/PollinationsImage.py

@ -10,13 +10,23 @@ class PollinationsImage(PollinationsAI):
default_model = "flux"
default_vision_model = None
default_image_model = default_model
image_models = [default_image_model] # Default models
_models_loaded = False # Flag that marks whether remote model lists have been loaded
@classmethod
def get_models(cls, **kwargs):
if not cls.models:
super().get_models(**kwargs)
cls.models = cls.image_models
return cls.models
if not cls._models_loaded:
# Calling the parent method to load models
super().get_models()
# Combine models from the parent class and additional ones
all_image_models = list(dict.fromkeys(
cls.image_models +
PollinationsAI.image_models +
cls.extra_image_models
))
cls.image_models = all_image_models
cls._models_loaded = True
return cls.image_models
@classmethod
async def create_async_generator(
@ -35,6 +45,8 @@ class PollinationsImage(PollinationsAI):
safe: bool = False,
**kwargs
) -> AsyncResult:
# Refresh the model list before creating the generator
cls.get_models()
async for chunk in cls._generate_image(
model=model,
prompt=format_image_prompt(messages, prompt),

g4f/Provider/__init__.py

@ -14,8 +14,8 @@ from .hf_space import *
from .mini_max import HailuoAI, MiniMax
from .template import OpenaiTemplate, BackendApi
from .AllenAI import AllenAI
from .Blackbox import Blackbox
from .BlackboxAPI import BlackboxAPI
from .ChatGLM import ChatGLM
from .ChatGpt import ChatGpt
from .ChatGptEs import ChatGptEs

g4f/Provider/needs_auth/CablyAI.py

@ -9,7 +9,7 @@ class CablyAI(OpenaiTemplate):
api_base = "https://cablyai.com/v1"
working = True
needs_auth = False
needs_auth = True
supports_stream = True
supports_system_message = True
supports_message_history = True

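Since CablyAI now enforces authentication, callers must supply a key. A sketch, assuming the key is forwarded the way other OpenaiTemplate-based providers handle api_key (the key and model name below are placeholders):

```python
# Sketch: CablyAI now requires auth, so pass an api_key.
from g4f.client import Client
from g4f.Provider.needs_auth import CablyAI

client = Client(provider=CablyAI, api_key="YOUR_CABLYAI_KEY")  # placeholder key
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```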
g4f/models.py

@ -5,8 +5,8 @@ from dataclasses import dataclass
from .Provider import IterListProvider, ProviderType
from .Provider import (
### no auth required ###
AllenAI,
Blackbox,
BlackboxAPI,
ChatGLM,
ChatGptEs,
Cloudflare,
@ -115,14 +115,14 @@ default_vision = Model(
gpt_4 = Model(
name = 'gpt-4',
base_provider = 'OpenAI',
best_provider = IterListProvider([Blackbox, DDG, Jmuz, ChatGptEs, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical])
best_provider = IterListProvider([DDG, Jmuz, ChatGptEs, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical])
)
# gpt-4o
gpt_4o = VisionModel(
name = 'gpt-4o',
base_provider = 'OpenAI',
best_provider = IterListProvider([Blackbox, Jmuz, ChatGptEs, PollinationsAI, Copilot, Liaobots, OpenaiChat])
best_provider = IterListProvider([Jmuz, ChatGptEs, PollinationsAI, Copilot, Liaobots, OpenaiChat])
)
gpt_4o_mini = Model(
@ -144,17 +144,11 @@ o1_preview = Model(
best_provider = Liaobots
)
o1_mini = Model(
name = 'o1-mini',
base_provider = 'OpenAI',
best_provider = Liaobots
)
# o3
o3_mini = Model(
name = 'o3-mini',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, Blackbox])
best_provider = IterListProvider([DDG, Liaobots])
)
### GigaChat ###
@ -268,14 +262,14 @@ mixtral_small_24b = Model(
mixtral_small_28b = Model(
name = "mixtral-small-28b",
base_provider = "Mistral",
best_provider = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat])
best_provider = IterListProvider([Blackbox, DeepInfraChat])
)
### NousResearch ###
hermes_2_dpo = Model(
name = "hermes-2-dpo",
base_provider = "NousResearch",
best_provider = IterListProvider([Blackbox, BlackboxAPI])
best_provider = Blackbox
)
### Microsoft ###
@ -324,13 +318,13 @@ gemini_exp = Model(
gemini_1_5_flash = Model(
name = 'gemini-1.5-flash',
base_provider = 'Google DeepMind',
best_provider = IterListProvider([Blackbox, Jmuz, GeminiPro, Liaobots])
best_provider = IterListProvider([Blackbox, Jmuz, GeminiPro])
)
gemini_1_5_pro = Model(
name = 'gemini-1.5-pro',
base_provider = 'Google DeepMind',
best_provider = IterListProvider([Blackbox, Jmuz, GeminiPro, Liaobots])
best_provider = IterListProvider([Jmuz, GeminiPro])
)
# gemini-2.0
@ -449,7 +443,7 @@ qwen_2_5_1m = Model(
qwq_32b = Model(
name = 'qwq-32b',
base_provider = 'Qwen',
best_provider = IterListProvider([Blackbox, BlackboxAPI, Jmuz, HuggingChat])
best_provider = IterListProvider([Blackbox, Jmuz, HuggingChat])
)
qvq_72b = VisionModel(
name = 'qvq-72b',
@ -468,19 +462,19 @@ pi = Model(
deepseek_chat = Model(
name = 'deepseek-chat',
base_provider = 'DeepSeek',
best_provider = IterListProvider([Blackbox, BlackboxAPI, Jmuz, PollinationsAI])
best_provider = IterListProvider([Blackbox, Jmuz, PollinationsAI])
)
deepseek_v3 = Model(
name = 'deepseek-v3',
base_provider = 'DeepSeek',
best_provider = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat, Liaobots])
best_provider = IterListProvider([Blackbox, DeepInfraChat, OIVSCode, Liaobots])
)
deepseek_r1 = Model(
name = 'deepseek-r1',
base_provider = 'DeepSeek',
best_provider = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat, Glider, PollinationsAI, Jmuz, Liaobots, HuggingChat, HuggingFace])
best_provider = IterListProvider([Blackbox, DeepInfraChat, Glider, PollinationsAI, Jmuz, Liaobots, HuggingChat, HuggingFace])
)
janus_pro_7b = VisionModel(
@ -490,8 +484,14 @@ janus_pro_7b = VisionModel(
)
### x.ai ###
grok_2 = Model(
name = 'grok-2',
grok_3 = Model(
name = 'grok-3',
base_provider = 'x.ai',
best_provider = Liaobots
)
grok_3_r1 = Model(
name = 'grok-3-r1',
base_provider = 'x.ai',
best_provider = Liaobots
)
@ -521,6 +521,12 @@ sonar_reasoning_pro = Model(
best_provider = PerplexityLabs
)
r1_1776 = Model(
name = 'r1-1776',
base_provider = 'Perplexity AI',
best_provider = PerplexityLabs
)
### Nvidia ###
nemotron_70b = Model(
name = 'nemotron-70b',
@ -532,7 +538,7 @@ nemotron_70b = Model(
dbrx_instruct = Model(
name = 'dbrx-instruct',
base_provider = 'Databricks',
best_provider = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat])
best_provider = IterListProvider([Blackbox, DeepInfraChat])
)
### THUDM ###
@ -590,6 +596,37 @@ minicpm_2_5 = Model(
best_provider = DeepInfraChat
)
### Ai2 ###
tulu_3_405b = Model(
name = "tulu-3-405b",
base_provider = "Ai2",
best_provider = AllenAI
)
olmo_2_13b = Model(
name = "olmo-2-13b",
base_provider = "Ai2",
best_provider = AllenAI
)
tulu_3_1_8b = Model(
name = "tulu-3-1-8b",
base_provider = "Ai2",
best_provider = AllenAI
)
tulu_3_70b = Model(
name = "tulu-3-70b",
base_provider = "Ai2",
best_provider = AllenAI
)
olmoe_0125 = Model(
name = "olmoe-0125",
base_provider = "Ai2",
best_provider = AllenAI
)
### Uncensored AI ###
evil = Model(
name = 'evil',
@ -678,7 +715,6 @@ class ModelUtils:
# o1
o1.name: o1,
o1_preview.name: o1_preview,
o1_mini.name: o1_mini,
# o3
o3_mini.name: o3_mini,
@ -776,13 +812,14 @@ class ModelUtils:
pi.name: pi,
### x.ai ###
grok_2.name: grok_2,
grok_3.name: grok_3,
### Perplexity AI ###
sonar.name: sonar,
sonar_pro.name: sonar_pro,
sonar_reasoning.name: sonar_reasoning,
sonar_reasoning_pro.name: sonar_reasoning_pro,
r1_1776.name: r1_1776,
### DeepSeek ###
deepseek_chat.name: deepseek_chat,
@ -803,6 +840,13 @@ class ModelUtils:
lzlv_70b.name: lzlv_70b, ### Lizpreciatior ###
minicpm_2_5.name: minicpm_2_5, ### OpenBMB ###
### Ai2 ###
tulu_3_405b.name: tulu_3_405b,
olmo_2_13b.name: olmo_2_13b,
tulu_3_1_8b.name: tulu_3_1_8b,
tulu_3_70b.name: tulu_3_70b,
olmoe_0125.name: olmoe_0125,
evil.name: evil, ### Uncensored AI ###
#############
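After these registry changes, the new entries resolve through ModelUtils like any other model; a small sketch using one of the Ai2 additions above:

```python
# Sketch: look up a newly registered Ai2 model in the registry.
from g4f.models import ModelUtils

model = ModelUtils.convert["tulu-3-405b"]
print(model.name)           # tulu-3-405b
print(model.base_provider)  # Ai2
print(model.best_provider)  # AllenAI
```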