Merge pull request #2679 from kqlio67/main

Updated some providers, added new providers and added new models
H Lohaus 2025-02-03 21:32:05 +01:00 committed by GitHub
commit fff1ce4482
12 changed files with 428 additions and 203 deletions

docs/providers-and-models.md

@@ -39,17 +39,17 @@ This document provides an overview of various AI providers and models, including
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[aichatfree.info](https://aichatfree.info)|No auth required|`g4f.Provider.AIChatFree`|`gemini-1.5-pro` _**(1+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[autonomous.ai](https://www.autonomous.ai/anon/)|No auth required|`g4f.Provider.AutonomousAI`|`llama-3.3-70b, qwen-2.5-coder-32b, hermes-3, llama-3.2-90b, llama-3.3-70b, llama-3-2-70b`|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gpt-4, gpt-4o, gemini-1.5-flash, gemini-1.5-pro, claude-3.5-sonnet, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3-1-405b, llama-3.3-70b, mixtral-7b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo, deepseek-r1` _**(+31)**_|`flux`|`blackboxai, gpt-4o, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[cablyai.com](https://cablyai.com)|No auth required|`g4f.Provider.CablyAI`|`cably-80b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gemini-1.5-flash, gemini-1.5-pro, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3-1-405b, llama-3.3-70b, mixtral-small-28b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo, deepseek-r1` _**(+34)**_|`flux`|`blackboxai, gpt-4o, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[api.blackbox.ai](https://api.blackbox.ai)|No auth required|`g4f.Provider.BlackboxAPI`|`deepseek-v3, deepseek-r1, deepseek-chat, mixtral-small-28b, dbrx-instruct, qwq-32b, hermes-2-dpo`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[cablyai.com](https://cablyai.com)|Optional API key|`g4f.Provider.CablyAI`|`gpt-4o-mini, llama-3.1-8b, deepseek-v3, deepseek-r1, o3-mini-low` _**(2+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(+7)**_|❌|❌|✔|![Error](https://img.shields.io/badge/HTTPError-f48d37)|
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgptt.me](https://chatgptt.me)|No auth required|`g4f.Provider.ChatGptt`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, gpt-4o`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[darkai.foundation](https://darkai.foundation)|No auth required|`g4f.Provider.DarkAI`|`gpt-3.5-turbo, gpt-4o, llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, claude-3-haiku, llama-3.1-70b, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.1-70b, deepseek-chat, qwq-32b, wizardlm-2-8x22b, wizardlm-2-7b, qwen-2.5-72b, qwen-2.5-coder-32b, nemotron-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-28b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`mistral-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
@@ -60,7 +60,7 @@ This document provides an overview of various AI providers and models, including
|[editor.imagelabs.net](https://editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|`sdxl-turbo`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b`|`flux-dev, flux-schnell, sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`claude-3-haiku, claude-3-opus, claude-3.5-sonnet, deepseek-r1, deepseek-chat, gemini-exp, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash-thinking, gpt-4, gpt-4o, gpt-4o-mini, llama-3-70b, llama-3-8b, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.2-90b, llama-3.3-70b, mixtral-8x7b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`grok-2, gpt-4o-mini, gpt-4o, gpt-4, o1-preview, o1-mini, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash, gemini-2.0-flash-thinking`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`grok-2, gpt-4o-mini, gpt-4o, gpt-4, o1-preview, o1-mini, deepseek-r1, deepseek-v3, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash, gemini-2.0-flash-thinking`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[mhystical.cc](https://mhystical.cc)|[Optional API key](https://mhystical.cc/dashboard)|`g4f.Provider.Mhystical`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini`|❌|`gpt-4o-mini`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar-online, sonar-chat, llama-3.3-70b, llama-3.1-8b, llama-3.1-70b, lfm-40b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
@@ -124,44 +124,43 @@ This document provides an overview of various AI providers and models, including
### Text Models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-3|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-3-5-turbo)|
|gpt-3.5-turbo|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-3-5-turbo)|
|gpt-4|OpenAI|11+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|8+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|gpt-4|OpenAI|10+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|1+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-preview|OpenAI|1+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-mini|OpenAI|1+ Providers|[openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|o3-mini-low|OpenAI|1+ Providers|[openai.com](https://openai.com/index/openai-o3-mini/)|
|gigachat|GigaChat|1+ Providers|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|
|meta-ai|Meta|1+ Providers|[ai.meta.com](https://ai.meta.com/)|
|llama-2-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
|llama-3-8b|Meta Llama|2+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
|llama-3-70b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Meta-Llama-3-70B)|
|llama-3.1-8b|Meta Llama|6+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-70b|Meta Llama|6+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-8b|Meta Llama|7+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-70b|Meta Llama|5+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-405b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.1-405B)|
|llama-3.2-1b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-1B)|
|llama-3.2-3b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-3B)|
|llama-3.2-11b|Meta Llama|3+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llama-3.2-90b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)|
|llama-3.3-70b|Meta Llama|6+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-3/)|
|mixtral-7b|Mistral|1+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mixtral-8x7b|Mistral|2+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mistral-nemo|Mistral|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|hermes-2-dpo|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|mixtral-small-28b|Mistral|2+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-small-28b/)|
|hermes-2-dpo|NousResearch|2+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|phi-3.5-mini|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|
|wizardlm-2-7b|Microsoft|1+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|phi-4|Microsoft|1+ Providers|[techcommunity.microsoft.com](https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090)|
|wizardlm-2-8x22b|Microsoft|2+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|gemini|Google DeepMind|1+ Providers|[deepmind.google](http://deepmind.google/technologies/gemini/)|
|gemini-exp|Google DeepMind|1+ Providers|[blog.google](https://blog.google/feed/gemini-exp-1206/)|
|gemini-1.5-flash|Google DeepMind|5+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-1.5-pro|Google DeepMind|6+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemini-1.5-pro|Google DeepMind|5+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemini-2.0-flash|Google DeepMind|2+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-2.0-flash-thinking|Google DeepMind|1+ Providers|[ai.google.dev](https://ai.google.dev/gemini-api/docs/thinking-mode)|
|claude-3-haiku|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
|claude-3-sonnet|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3-opus|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3.5-sonnet|Anthropic|3+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|claude-3.5-sonnet|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|reka-core|Reka AI|1+ Providers|[reka.ai](https://www.reka.ai/ourmodels)|
|blackboxai|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|blackboxai-pro|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
@@ -174,26 +173,23 @@ This document provides an overview of various AI providers and models, including
|qwen-2.5-72b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)|
|qwen-2.5-coder-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
|qwen-2.5-1m-demo|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-1M-Demo)|
|qwq-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qwq-32b|Qwen|5+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qvq-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/QVQ-72B-Preview)|
|pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
|deepseek-chat|DeepSeek|3+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-v3|DeepSeek|2+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|6+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-chat|DeepSeek|4+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-v3|DeepSeek|4+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|8+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|grok-2|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-2)|
|sonar|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|sonar-pro|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|sonar-reasoning|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|nemotron-70b|Nvidia|3+ Providers|[build.nvidia.com](https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct)|
|dbrx-instruct|Databricks|1+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|p1|PollinationsAI|1+ Providers|[pollinations.ai](https://pollinations.ai/)|
|cably-80b|CablyAI|1+ Providers|[cablyai.com](https://cablyai.com)|
|dbrx-instruct|Databricks|2+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|glm-4|THUDM|1+ Providers|[github.com/THUDM](https://github.com/THUDM/GLM-4)|
|mini_max|MiniMax|1+ Providers|[hailuo.ai](https://www.hailuo.ai/)|
|evil|Evil Mode - Experimental|1+ Providers||
---
### Image Models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
@@ -207,6 +203,7 @@ This document provides an overview of various AI providers and models, including
|midjourney|Midjourney|1+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|
## Conclusion and Usage Tips
This document provides a comprehensive overview of various AI providers and models available for text generation, image generation, and vision tasks. **When choosing a provider or model, consider the following factors:**
1. **Availability**: Check the status of the provider to ensure it's currently active and accessible.
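For the availability tip, a minimal sketch of such a check (it assumes the `working` class attribute that the provider classes in the diffs below expose; iterating the package namespace this way is an illustrative choice, not a documented API):

```python
import g4f.Provider

# Print every provider class that currently reports itself as working.
for name in dir(g4f.Provider):
    provider = getattr(g4f.Provider, name)
    if getattr(provider, "working", False) is True:
        print(name)
```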

g4f/Provider/Blackbox.py

@@ -38,16 +38,17 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
default_model = "blackboxai"
default_vision_model = default_model
default_image_model = 'ImageGeneration'
image_models = [default_image_model, "ImageGeneration2"]
vision_models = [default_vision_model, 'gpt-4o', 'gemini-pro', 'deepseek-v3', 'gemini-1.5-flash', 'llama-3.1-8b', 'llama-3.1-70b', 'llama-3.1-405b']
reasoning_models = ['deepseek-r1']
userSelectedModel = ['gpt-4o', 'gemini-pro', 'claude-sonnet-3.5', 'deepseek-r1', 'deepseek-v3', 'blackboxai-pro']
image_models = [default_image_model]
vision_models = [default_vision_model, 'gpt-4o', 'o3-mini', 'gemini-pro', 'DeepSeek-V3', 'gemini-1.5-flash', 'llama-3.1-8b', 'llama-3.1-70b', 'llama-3.1-405b']
reasoning_models = ['DeepSeek-R1']
userSelectedModel = ['gpt-4o', 'o3-mini', 'claude-sonnet-3.5', 'gemini-pro', 'blackboxai-pro']
agentMode = {
'ImageGeneration': {'mode': True, 'id': "ImageGenerationLV45LJp", 'name': "Image Generation"},
'DeepSeek-V3': {'mode': True, 'id': "deepseek-chat", 'name': "DeepSeek-V3"},
'DeepSeek-R1': {'mode': True, 'id': "deepseek-reasoner", 'name': "DeepSeek-R1"},
'Meta-Llama-3.3-70B-Instruct-Turbo': {'mode': True, 'id': "meta-llama/Llama-3.3-70B-Instruct-Turbo", 'name': "Meta-Llama-3.3-70B-Instruct-Turbo"},
'Mistral-(7B)-Instruct-v0.2': {'mode': True, 'id': "mistralai/Mistral-7B-Instruct-v0.2", 'name': "Mistral-(7B)-Instruct-v0.2"},
'Mistral-Small-24B-Instruct-2501': {'mode': True, 'id': "mistralai/Mistral-Small-24B-Instruct-2501", 'name': "Mistral-Small-24B-Instruct-2501"},
'DeepSeek-LLM-Chat-(67B)': {'mode': True, 'id': "deepseek-ai/deepseek-llm-67b-chat", 'name': "DeepSeek-LLM-Chat-(67B)"},
'DBRX-Instruct': {'mode': True, 'id': "databricks/dbrx-instruct", 'name': "DBRX-Instruct"},
'Qwen-QwQ-32B-Preview': {'mode': True, 'id': "Qwen/QwQ-32B-Preview", 'name': "Qwen-QwQ-32B-Preview"},
@@ -96,12 +97,12 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
models = list(dict.fromkeys([default_model, *userSelectedModel, *reasoning_models, *image_models, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))
model_aliases = {
"gpt-4": "gpt-4o",
"gemini-1.5-flash": "gemini-1.5-flash",
"gemini-1.5-pro": "gemini-pro",
"claude-3.5-sonnet": "claude-sonnet-3.5",
"deepseek-v3": "DeepSeek-V3",
"deepseek-r1": "DeepSeek-R1",
"llama-3.3-70b": "Meta-Llama-3.3-70B-Instruct-Turbo",
"mixtral-7b": "Mistral-(7B)-Instruct-v0.2",
"mixtral-small-28b": "Mistral-Small-24B-Instruct-2501",
"deepseek-chat": "DeepSeek-LLM-Chat-(67B)",
"dbrx-instruct": "DBRX-Instruct",
"qwq-32b": "Qwen-QwQ-32B-Preview",
@@ -196,7 +197,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
}
async with ClientSession(headers=headers) as session:
if model == "ImageGeneration2":
if model == "ImageGeneration":
prompt = format_image_prompt(messages, prompt)
data = {
"query": format_image_prompt(messages, prompt),
@@ -294,32 +295,48 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
if not text_to_yield or text_to_yield.isspace():
return
if model in cls.image_models:
image_url_match = re.search(r'!\[.*?\]\((.*?)\)', text_to_yield)
if image_url_match:
image_url = image_url_match.group(1)
prompt = format_image_prompt(messages, prompt)
yield ImageResponse(images=[image_url], alt=prompt)
else:
if "Generated by BLACKBOX.AI" in text_to_yield:
conversation.validated_value = await cls.fetch_validated(force_refresh=True)
if conversation.validated_value:
data["validated"] = conversation.validated_value
async with session.post(cls.api_endpoint, json=data, proxy=proxy) as new_response:
await raise_for_status(new_response)
new_response_text = await new_response.text()
new_parts = new_response_text.split('$~~~$')
new_text = new_parts[2] if len(new_parts) >= 3 else new_response_text
if new_text and not new_text.isspace():
yield new_text
else:
if text_to_yield and not text_to_yield.isspace():
yield text_to_yield
if model in cls.reasoning_models and "\n\n\n" in text_to_yield:
think_split = text_to_yield.split("\n\n\n", 1)
if len(think_split) > 1:
think_content, answer = think_split[0].strip(), think_split[1].strip()
yield Reasoning(status=think_content)
yield answer
else:
yield text_to_yield
elif "<think>" in text_to_yield:
pre_think, rest = text_to_yield.split('<think>', 1)
think_content, post_think = rest.split('</think>', 1)
pre_think = pre_think.strip()
think_content = think_content.strip()
post_think = post_think.strip()
if pre_think:
yield pre_think
if think_content:
yield Reasoning(status=think_content)
if post_think:
yield post_think
elif "Generated by BLACKBOX.AI" in text_to_yield:
conversation.validated_value = await cls.fetch_validated(force_refresh=True)
if conversation.validated_value:
data["validated"] = conversation.validated_value
async with session.post(cls.api_endpoint, json=data, proxy=proxy) as new_response:
await raise_for_status(new_response)
new_response_text = await new_response.text()
new_parts = new_response_text.split('$~~~$')
new_text = new_parts[2] if len(new_parts) >= 3 else new_response_text
if new_text and not new_text.isspace():
yield new_text
else:
if text_to_yield and not text_to_yield.isspace():
yield text_to_yield
else:
if text_to_yield and not text_to_yield.isspace():
yield text_to_yield
if return_conversation:
conversation.message_history.append({"role": "assistant", "content": text_to_yield})
yield conversation
if return_conversation:
conversation.message_history.append({"role": "assistant", "content": text_to_yield})
yield conversation
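The branches above handle the two reasoning formats a DeepSeek-R1 response can take: a chain of thought separated from the answer by `"\n\n\n"`, or one wrapped in `<think>...</think>` tags. A standalone sketch of the same splitting logic (the `Reasoning` wrapper from the diff is replaced by a plain tuple here; names are illustrative):

```python
def split_reasoning(text: str) -> tuple[str | None, str]:
    """Return (reasoning, answer) extracted from a raw model response."""
    if "<think>" in text and "</think>" in text:
        pre, rest = text.split("<think>", 1)
        think, post = rest.split("</think>", 1)
        # Re-join any text around the tags as the visible answer.
        answer = " ".join(p for p in (pre.strip(), post.strip()) if p)
        return think.strip(), answer
    if "\n\n\n" in text:
        think, answer = text.split("\n\n\n", 1)
        return think.strip(), answer.strip()
    return None, text

print(split_reasoning("<think>recall the constant</think>Pi is about 3.14159."))
# -> ('recall the constant', 'Pi is about 3.14159.')
```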

g4f/Provider/BlackboxAPI.py (new file, 103 lines)

@@ -0,0 +1,103 @@
from __future__ import annotations
import json
from aiohttp import ClientSession
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..requests.raise_for_status import raise_for_status
from ..providers.response import Reasoning
from .helper import format_prompt
class BlackboxAPI(AsyncGeneratorProvider, ProviderModelMixin):
label = "Blackbox AI API"
url = "https://api.blackbox.ai"
api_endpoint = "https://api.blackbox.ai/api/chat"
working = True
needs_auth = False
supports_stream = False
supports_system_message = True
supports_message_history = True
default_model = 'deepseek-ai/DeepSeek-V3'
reasoning_models = ['deepseek-ai/DeepSeek-R1']
models = [
default_model,
'mistralai/Mistral-Small-24B-Instruct-2501',
'deepseek-ai/deepseek-llm-67b-chat',
'databricks/dbrx-instruct',
'Qwen/QwQ-32B-Preview',
'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO'
] + reasoning_models
model_aliases = {
"deepseek-v3": "deepseek-ai/DeepSeek-V3",
"deepseek-r1": "deepseek-ai/DeepSeek-R1",
"deepseek-chat": "deepseek-ai/deepseek-llm-67b-chat",
"mixtral-small-28b": "mistralai/Mistral-Small-24B-Instruct-2501",
"dbrx-instruct": "databricks/dbrx-instruct",
"qwq-32b": "Qwen/QwQ-32B-Preview",
"hermes-2-dpo": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
}
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
proxy: str = None,
max_tokens: int = None,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
headers = {
"Content-Type": "application/json",
}
async with ClientSession(headers=headers) as session:
data = {
"messages": messages,
"model": model,
"max_tokens": max_tokens
}
async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response:
await raise_for_status(response)
is_reasoning = False
current_reasoning = ""
async for chunk in response.content:
if not chunk:
continue
text = chunk.decode(errors='ignore')
if model in cls.reasoning_models:
if "<think>" in text:
text = text.replace("<think>", "")
is_reasoning = True
current_reasoning = text
continue
if "</think>" in text:
text = text.replace("</think>", "")
is_reasoning = False
current_reasoning += text
yield Reasoning(status=current_reasoning.strip())
current_reasoning = ""
continue
if is_reasoning:
current_reasoning += text
continue
if text:
    yield text
if is_reasoning and current_reasoning:
yield Reasoning(status=current_reasoning.strip())
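A usage sketch for the new provider (the `g4f.ChatCompletion.create` call is the library's standard entry point, not part of this file; the `deepseek-v3` alias resolves through the `model_aliases` dict above):

```python
import g4f
from g4f.Provider import BlackboxAPI

response = g4f.ChatCompletion.create(
    model="deepseek-v3",  # resolved to deepseek-ai/DeepSeek-V3 via model_aliases
    provider=BlackboxAPI,
    messages=[{"role": "user", "content": "Summarize DBRX in one sentence."}],
)
print(response)
```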

g4f/Provider/CablyAI.py

@@ -1,37 +1,166 @@
from __future__ import annotations
import json
from typing import AsyncGenerator
from aiohttp import ClientSession
from ..typing import AsyncResult, Messages
from .template import OpenaiTemplate
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..requests.raise_for_status import raise_for_status
from ..providers.response import FinishReason, Reasoning
class CablyAI(OpenaiTemplate):
class CablyAI(AsyncGeneratorProvider, ProviderModelMixin):
label = "CablyAI"
url = "https://cablyai.com"
login_url = None
needs_auth = False
api_base = "https://cablyai.com/v1"
api_endpoint = "https://cablyai.com/v1/chat/completions"
api_key = "sk-your-openai-api-key"
working = True
default_model = "Cably-80B"
models = [default_model]
model_aliases = {"cably-80b": default_model}
needs_auth = False
supports_stream = True
supports_system_message = True
supports_message_history = True
default_model = 'gpt-4o-mini'
reasoning_models = ['deepseek-r1-uncensored']
models = [
default_model,
'searchgpt',
'llama-3.1-8b-instruct',
'deepseek-v3',
'tinyswallow1.5b',
'andy-3.5',
'o3-mini-low',
] + reasoning_models
model_aliases = {
"gpt-4o-mini": "searchgpt",
"llama-3.1-8b": "llama-3.1-8b-instruct",
"deepseek-r1": "deepseek-r1-uncensored",
}
@classmethod
def create_async_generator(
async def create_async_generator(
cls,
model: str,
messages: Messages,
api_key: str = None,
stream: bool = True,
proxy: str = None,
**kwargs
) -> AsyncResult:
) -> AsyncResult:
model = cls.get_model(model)
api_key = api_key or cls.api_key
headers = {
'Accept': '*/*',
'Accept-Language': 'en-US,en;q=0.9',
'Content-Type': 'application/json',
'Origin': 'https://cablyai.com',
'Referer': 'https://cablyai.com/chat',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36'
"Accept": "*/*",
"Accept-Language": "en-US,en;q=0.9",
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
"Origin": cls.url,
"Referer": f"{cls.url}/chat",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
}
return super().create_async_generator(
model=model,
messages=messages,
headers=headers,
**kwargs
)
async with ClientSession(headers=headers) as session:
data = {
"model": model,
"messages": messages,
"stream": stream
}
async with session.post(
cls.api_endpoint,
json=data,
proxy=proxy
) as response:
await raise_for_status(response)
if stream:
reasoning_buffer = []
in_reasoning = False
async for line in response.content:
if not line:
continue
line = line.decode('utf-8').strip()
if not line.startswith("data: "):
continue
if line == "data: [DONE]":
if in_reasoning and reasoning_buffer:
yield Reasoning(status="".join(reasoning_buffer).strip())
yield FinishReason("stop")
return
try:
json_data = json.loads(line[6:])
delta = json_data["choices"][0].get("delta", {})
content = delta.get("content", "")
finish_reason = json_data["choices"][0].get("finish_reason")
if finish_reason:
if in_reasoning and reasoning_buffer:
yield Reasoning(status="".join(reasoning_buffer).strip())
yield FinishReason(finish_reason)
return
if model in cls.reasoning_models:
# Handle the opening <think> tag
if "<think>" in content:
pre, _, post = content.partition("<think>")
if pre:
yield pre
in_reasoning = True
content = post
# Handle the closing </think> tag
if "</think>" in content:
in_reasoning = False
thought, _, post = content.partition("</think>")
if thought:
reasoning_buffer.append(thought)
if reasoning_buffer:
yield Reasoning(status="".join(reasoning_buffer).strip())
reasoning_buffer.clear()
if post:
yield post
continue
# Buffer content between the tags
if in_reasoning:
reasoning_buffer.append(content)
else:
if content:
yield content
else:
if content:
yield content
except json.JSONDecodeError:
continue
except Exception:
yield FinishReason("error")
return
else:
try:
response_data = await response.json()
message = response_data["choices"][0]["message"]
content = message["content"]
if model in cls.reasoning_models and "<think>" in content:
think_start = content.find("<think>") + 7
think_end = content.find("</think>")
if think_start > 6 and think_end > 0:
reasoning = content[think_start:think_end].strip()
yield Reasoning(status=reasoning)
content = content[think_end + 8:].strip()
yield content
yield FinishReason("stop")
except Exception:
yield FinishReason("error")
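A streaming usage sketch for the reworked provider (the standard g4f entry point is assumed; with `deepseek-r1` the stream interleaves `Reasoning` items with plain text, while other models yield text only):

```python
import g4f
from g4f.Provider import CablyAI

for chunk in g4f.ChatCompletion.create(
    model="deepseek-r1",  # alias -> deepseek-r1-uncensored, see model_aliases
    provider=CablyAI,
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
):
    print(chunk, end="", flush=True)
```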

g4f/Provider/DeepInfraChat.py

@@ -10,30 +10,30 @@ class DeepInfraChat(OpenaiTemplate):
default_model = 'meta-llama/Llama-3.3-70B-Instruct-Turbo'
models = [
'meta-llama/Llama-3.3-70B-Instruct',
'meta-llama/Meta-Llama-3.1-8B-Instruct',
'meta-llama/Llama-3.2-90B-Vision-Instruct',
default_model,
'meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo',
'deepseek-ai/DeepSeek-V3',
'Qwen/QwQ-32B-Preview',
'mistralai/Mistral-Small-24B-Instruct-2501',
'deepseek-ai/DeepSeek-R1',
'deepseek-ai/DeepSeek-R1-Distill-Llama-70B',
'deepseek-ai/DeepSeek-R1-Distill-Qwen-32B',
'microsoft/phi-4',
'microsoft/WizardLM-2-8x22B',
'microsoft/WizardLM-2-7B',
'Qwen/Qwen2.5-72B-Instruct',
'Qwen/Qwen2.5-Coder-32B-Instruct',
'nvidia/Llama-3.1-Nemotron-70B-Instruct',
]
model_aliases = {
"llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct",
"llama-3.1-8b": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"llama-3.2-90b": "meta-llama/Llama-3.2-90B-Vision-Instruct",
"llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"llama-3.1-70b": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
"deepseek-v3": "deepseek-ai/DeepSeek-V3",
"qwq-32b": "Qwen/QwQ-32B-Preview",
"mixtral-small-28b": "mistralai/Mistral-Small-24B-Instruct-2501",
"deepseek-r1": "deepseek-ai/DeepSeek-R1",
"deepseek-r1": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"deepseek-r1": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"phi-4": "microsoft/phi-4",
"wizardlm-2-8x22b": "microsoft/WizardLM-2-8x22B",
"wizardlm-2-7b": "microsoft/WizardLM-2-7B",
"qwen-2.5-72b": "Qwen/Qwen2.5-72B-Instruct",
"qwen-2.5-coder-32b": "Qwen/Qwen2.5-Coder-32B-Instruct",
"nemotron-70b": "nvidia/Llama-3.1-Nemotron-70B-Instruct",
}
@classmethod
@@ -41,6 +41,10 @@ class DeepInfraChat(OpenaiTemplate):
cls,
model: str,
messages: Messages,
stream: bool = True,
top_p: float = 0.9,
temperature: float = 0.7,
max_tokens: int = None,
headers: dict = {},
**kwargs
) -> AsyncResult:
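A sketch exercising the new sampling parameters (the defaults added in the diff are `top_p=0.9` and `temperature=0.7`; the kwargs pass-through of `g4f.ChatCompletion.create` is assumed):

```python
import g4f
from g4f.Provider import DeepInfraChat

response = g4f.ChatCompletion.create(
    model="phi-4",
    provider=DeepInfraChat,
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
    temperature=0.3,   # overrides the 0.7 default added above
    top_p=0.95,
    max_tokens=128,
)
print(response)
```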

g4f/Provider/Glider.py

@@ -20,12 +20,12 @@ class Glider(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True
default_model = 'chat-llama-3-1-70b'
reasoning_models = ['deepseek-ai/DeepSeek-R1']
models = [
'chat-llama-3-1-70b',
'chat-llama-3-1-8b',
'chat-llama-3-2-3b',
] + reasoning_models
'deepseek-ai/DeepSeek-R1'
]
model_aliases = {
"llama-3.1-70b": "chat-llama-3-1-70b",
@@ -69,9 +69,6 @@ class Glider(AsyncGeneratorProvider, ProviderModelMixin):
async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response:
await raise_for_status(response)
is_reasoning = False
current_reasoning = ""
async for chunk in response.content:
if not chunk:
continue
@@ -82,34 +79,12 @@ class Glider(AsyncGeneratorProvider, ProviderModelMixin):
continue
if "[DONE]" in text:
if is_reasoning and current_reasoning:
yield Reasoning(status=current_reasoning.strip())
yield FinishReason("stop")
return
try:
json_data = json.loads(text[6:])
content = json_data["choices"][0].get("delta", {}).get("content", "")
if model in cls.reasoning_models:
if "<think>" in content:
content = content.replace("<think>", "")
is_reasoning = True
current_reasoning = content
continue
if "</think>" in content:
content = content.replace("</think>", "")
is_reasoning = False
current_reasoning += content
yield Reasoning(status=current_reasoning.strip())
current_reasoning = ""
continue
if is_reasoning:
current_reasoning += content
continue
if content:
yield content
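With the reasoning handling removed, Glider now streams raw content, so any `<think>` markup from DeepSeek-R1 presumably arrives untouched and can be post-processed client-side (for example with a splitter like the `split_reasoning` sketch shown earlier). A minimal streaming call, assuming the standard g4f entry point:

```python
import g4f
from g4f.Provider import Glider

for chunk in g4f.ChatCompletion.create(
    model="llama-3.1-70b",  # alias -> chat-llama-3-1-70b
    provider=Glider,
    messages=[{"role": "user", "content": "Name three sorting algorithms."}],
    stream=True,
):
    print(chunk, end="", flush=True)
```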

g4f/Provider/Liaobots.py

@@ -54,6 +54,33 @@ models = {
"tokenLimit": 100000,
"context": "128K",
},
"DeepSeek-R1-Distill-Llama-70b": {
"id": "DeepSeek-R1-Distill-Llama-70b",
"name": "DeepSeek-R1-70B",
"model": "DeepSeek-R1-70B",
"provider": "DeepSeek",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"DeepSeek-R1": {
"id": "DeepSeek-R1",
"name": "DeepSeek-R1",
"model": "DeepSeek-R1",
"provider": "DeepSeek",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"DeepSeek-V3": {
"id": "DeepSeek-V3",
"name": "DeepSeek-V3",
"model": "DeepSeek-V3",
"provider": "DeepSeek",
"maxLength": 400000,
"tokenLimit": 100000,
"context": "128K",
},
"grok-2": {
"id": "grok-2",
"name": "Grok-2",
@@ -172,6 +199,10 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
"o1-preview": "o1-preview-2024-09-12",
"o1-mini": "o1-mini-2024-09-12",
"deepseek-r1": "DeepSeek-R1-Distill-Llama-70b",
"deepseek-r1": "DeepSeek-R1",
"deepseek-v3": "DeepSeek-V3",
"claude-3-opus": "claude-3-opus-20240229",
"claude-3.5-sonnet": "claude-3-5-sonnet-20240620",
"claude-3.5-sonnet": "claude-3-5-sonnet-20241022",

g4f/Provider/__init__.py

@@ -15,6 +15,7 @@ from .mini_max import HailuoAI, MiniMax
from .template import OpenaiTemplate, BackendApi
from .Blackbox import Blackbox
from .BlackboxAPI import BlackboxAPI
from .CablyAI import CablyAI
from .ChatGLM import ChatGLM
from .ChatGpt import ChatGpt
@@ -22,14 +23,12 @@ from .ChatGptEs import ChatGptEs
from .ChatGptt import ChatGptt
from .Cloudflare import Cloudflare
from .Copilot import Copilot
from .DarkAI import DarkAI
from .DDG import DDG
from .DeepInfraChat import DeepInfraChat
from .Free2GPT import Free2GPT
from .FreeGpt import FreeGpt
from .GizAI import GizAI
from .Glider import Glider
from .GPROChat import GPROChat
from .ImageLabs import ImageLabs
from .Jmuz import Jmuz
from .Liaobots import Liaobots

g4f/Provider/{ → not_working}/DarkAI.py

@@ -3,15 +3,16 @@ from __future__ import annotations
import json
from aiohttp import ClientSession, ClientTimeout, StreamReader
from ..typing import AsyncResult, Messages
from ..requests.raise_for_status import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ...typing import AsyncResult, Messages
from ...requests.raise_for_status import raise_for_status
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt
class DarkAI(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://darkai.foundation/chat"
api_endpoint = "https://darkai.foundation/chat"
working = True
working = False
supports_stream = True
default_model = 'llama-3-70b'

g4f/Provider/{ → not_working}/GPROChat.py

@@ -4,15 +4,15 @@ import time
import hashlib
from aiohttp import ClientSession
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ...typing import AsyncResult, Messages
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt
class GPROChat(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://gprochat.com"
api_endpoint = "https://gprochat.com/api/generate"
working = True
working = False
supports_stream = True
supports_message_history = True
default_model = 'gemini-1.5-pro'
@@ -24,15 +24,6 @@ class GPROChat(AsyncGeneratorProvider, ProviderModelMixin):
signature = hashlib.sha256(hash_input.encode('utf-8')).hexdigest()
return signature
@classmethod
def get_model(cls, model: str) -> str:
if model in cls.models:
return model
elif model in cls.model_aliases:
return cls.model_aliases[model]
else:
return cls.default_model
@classmethod
async def create_async_generator(
cls,
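The deleted `get_model` duplicated resolution logic that `ProviderModelMixin` is assumed to provide, which is why the override could be dropped. A minimal reconstruction of that resolution order, taken from the removed code:

```python
def get_model(model: str, models: list[str], aliases: dict[str, str], default: str) -> str:
    """Exact model name first, then alias lookup, then the default."""
    if model in models:
        return model
    if model in aliases:
        return aliases[model]
    return default

assert get_model("unknown", ["gemini-1.5-pro"], {}, "gemini-1.5-pro") == "gemini-1.5-pro"
```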

g4f/Provider/not_working/__init__.py

@@ -10,8 +10,10 @@ from .Aura import Aura
from .Chatgpt4o import Chatgpt4o
from .Chatgpt4Online import Chatgpt4Online
from .ChatgptFree import ChatgptFree
from .DarkAI import DarkAI
from .FlowGpt import FlowGpt
from .FreeNetfly import FreeNetfly
from .GPROChat import GPROChat
from .Koala import Koala
from .MagickPen import MagickPen
from .MyShell import MyShell

g4f/models.py

@@ -6,18 +6,17 @@ from .Provider import IterListProvider, ProviderType
from .Provider import (
### no auth required ###
Blackbox,
BlackboxAPI,
CablyAI,
ChatGLM,
ChatGptEs,
ChatGptt,
Cloudflare,
Copilot,
DarkAI,
DDG,
DeepInfraChat,
HuggingSpace,
Glider,
GPROChat,
ImageLabs,
Jmuz,
Liaobots,
@@ -86,7 +85,6 @@ default = Model(
Jmuz,
CablyAI,
OIVSCode,
DarkAI,
OpenaiChat,
Cloudflare,
])
@@ -112,31 +110,24 @@ default_vision = Model(
###################
### OpenAI ###
# gpt-3.5
gpt_35_turbo = Model(
name = 'gpt-3.5-turbo',
base_provider = 'OpenAI',
best_provider = DarkAI
)
# gpt-4
gpt_4 = Model(
name = 'gpt-4',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, Blackbox, Jmuz, ChatGptEs, ChatGptt, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical])
best_provider = IterListProvider([DDG, Jmuz, ChatGptEs, ChatGptt, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical])
)
# gpt-4o
gpt_4o = VisionModel(
name = 'gpt-4o',
base_provider = 'OpenAI',
best_provider = IterListProvider([Blackbox, ChatGptt, Jmuz, ChatGptEs, PollinationsAI, DarkAI, Copilot, Liaobots, OpenaiChat])
best_provider = IterListProvider([ChatGptt, Jmuz, ChatGptEs, PollinationsAI, Copilot, Liaobots, OpenaiChat])
)
gpt_4o_mini = Model(
name = 'gpt-4o-mini',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, ChatGptEs, ChatGptt, Jmuz, PollinationsAI, OIVSCode, Liaobots, OpenaiChat])
best_provider = IterListProvider([DDG, ChatGptEs, ChatGptt, Jmuz, PollinationsAI, OIVSCode, CablyAI, Liaobots, OpenaiChat])
)
# o1
@@ -158,6 +149,13 @@ o1_mini = Model(
best_provider = Liaobots
)
# o3
o3_mini_low = Model(
name = 'o3-mini-low',
base_provider = 'OpenAI',
best_provider = CablyAI
)
### GigaChat ###
gigachat = Model(
name = 'GigaChat:latest',
@@ -195,13 +193,13 @@ llama_3_70b = Model(
llama_3_1_8b = Model(
name = "llama-3.1-8b",
base_provider = "Meta Llama",
best_provider = IterListProvider([Blackbox, DeepInfraChat, Glider, Jmuz, PollinationsAI, Cloudflare])
best_provider = IterListProvider([Blackbox, DeepInfraChat, Glider, Jmuz, PollinationsAI, CablyAI, Cloudflare])
)
llama_3_1_70b = Model(
name = "llama-3.1-70b",
base_provider = "Meta Llama",
best_provider = IterListProvider([DDG, Blackbox, Glider, Jmuz, TeachAnything, DarkAI])
best_provider = IterListProvider([DDG, Blackbox, Glider, Jmuz, TeachAnything])
)
llama_3_1_405b = Model(
@@ -232,7 +230,7 @@ llama_3_2_11b = VisionModel(
llama_3_2_90b = Model(
name = "llama-3.2-90b",
base_provider = "Meta Llama",
best_provider = Jmuz
best_provider = IterListProvider([DeepInfraChat, Jmuz])
)
# llama 3.3
@@ -243,12 +241,6 @@ llama_3_3_70b = Model(
)
### Mistral ###
mixtral_7b = Model(
name = "mixtral-7b",
base_provider = "Mistral",
best_provider = Blackbox
)
mixtral_8x7b = Model(
name = "mixtral-8x7b",
base_provider = "Mistral",
@@ -261,11 +253,17 @@ mistral_nemo = Model(
best_provider = IterListProvider([PollinationsAI, HuggingChat, HuggingFace])
)
mixtral_small_28b = Model(
name = "mixtral-small-28b",
base_provider = "Mistral",
best_provider = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat])
)
### NousResearch ###
hermes_2_dpo = Model(
name = "hermes-2-dpo",
base_provider = "NousResearch",
best_provider = Blackbox
best_provider = IterListProvider([Blackbox, BlackboxAPI])
)
@@ -277,13 +275,13 @@ phi_3_5_mini = Model(
best_provider = HuggingChat
)
# wizardlm
wizardlm_2_7b = Model(
name = 'wizardlm-2-7b',
base_provider = 'Microsoft',
phi_4 = Model(
name = "phi-4",
base_provider = "Microsoft",
best_provider = DeepInfraChat
)
# wizardlm
wizardlm_2_8x22b = Model(
name = 'wizardlm-2-8x22b',
base_provider = 'Microsoft',
@@ -315,7 +313,7 @@ gemini_1_5_flash = Model(
gemini_1_5_pro = Model(
name = 'gemini-1.5-pro',
base_provider = 'Google DeepMind',
best_provider = IterListProvider([Blackbox, Jmuz, GPROChat, Gemini, GeminiPro, Liaobots])
best_provider = IterListProvider([Blackbox, Jmuz, Gemini, GeminiPro, Liaobots])
)
# gemini-2.0
@@ -356,7 +354,7 @@ claude_3_opus = Model(
claude_3_5_sonnet = Model(
name = 'claude-3.5-sonnet',
base_provider = 'Anthropic',
best_provider = IterListProvider([Blackbox, Jmuz, Liaobots])
best_provider = IterListProvider([Jmuz, Liaobots])
)
### Reka AI ###
@@ -422,7 +420,7 @@ qwen_2_5_72b = Model(
qwen_2_5_coder_32b = Model(
name = 'qwen-2.5-coder-32b',
base_provider = 'Qwen',
best_provider = IterListProvider([DeepInfraChat, PollinationsAI, Jmuz, HuggingChat])
best_provider = IterListProvider([PollinationsAI, Jmuz, HuggingChat])
)
qwen_2_5_1m = Model(
name = 'qwen-2.5-1m-demo',
@@ -434,7 +432,7 @@ qwen_2_5_1m = Model(
qwq_32b = Model(
name = 'qwq-32b',
base_provider = 'Qwen',
best_provider = IterListProvider([Blackbox, DeepInfraChat, Jmuz, HuggingChat])
best_provider = IterListProvider([Blackbox, BlackboxAPI, Jmuz, HuggingChat])
)
qvq_72b = VisionModel(
name = 'qvq-72b',
@@ -453,19 +451,19 @@ pi = Model(
deepseek_chat = Model(
name = 'deepseek-chat',
base_provider = 'DeepSeek',
best_provider = IterListProvider([Blackbox, Jmuz, PollinationsAI])
best_provider = IterListProvider([Blackbox, BlackboxAPI, Jmuz, PollinationsAI])
)
deepseek_v3 = Model(
name = 'deepseek-v3',
base_provider = 'DeepSeek',
best_provider = IterListProvider([Blackbox, DeepInfraChat])
best_provider = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat, CablyAI, Liaobots])
)
deepseek_r1 = Model(
name = 'deepseek-r1',
base_provider = 'DeepSeek',
best_provider = IterListProvider([Blackbox, Glider, PollinationsAI, Jmuz, HuggingChat, HuggingFace])
best_provider = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat, Glider, PollinationsAI, Jmuz, CablyAI, Liaobots, HuggingChat, HuggingFace])
)
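A sketch of the fallback semantics behind the expanded list above: `IterListProvider` tries each provider in order until one returns a response. Here it is constructed directly rather than through the `Model` definition:

```python
import g4f
from g4f.Provider import IterListProvider, Blackbox, BlackboxAPI, DeepInfraChat

fallback = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat])
response = g4f.ChatCompletion.create(
    model="deepseek-r1",
    provider=fallback,
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
)
print(response)
```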
### x.ai ###
@@ -498,28 +496,14 @@ sonar_reasoning = Model(
nemotron_70b = Model(
name = 'nemotron-70b',
base_provider = 'Nvidia',
best_provider = IterListProvider([DeepInfraChat, HuggingChat, HuggingFace])
best_provider = IterListProvider([HuggingChat, HuggingFace])
)
### Databricks ###
dbrx_instruct = Model(
name = 'dbrx-instruct',
base_provider = 'Databricks',
best_provider = Blackbox
)
### PollinationsAI ###
p1 = Model(
name = 'p1',
base_provider = 'PollinationsAI',
best_provider = PollinationsAI
)
### CablyAI ###
cably_80b = Model(
name = 'cably-80b',
base_provider = 'CablyAI',
best_provider = CablyAI
best_provider = IterListProvider([Blackbox, BlackboxAPI])
)
### THUDM ###
@@ -614,12 +598,6 @@ class ModelUtils:
############
### OpenAI ###
# gpt-3
'gpt-3': gpt_35_turbo,
# gpt-3.5
gpt_35_turbo.name: gpt_35_turbo,
# gpt-4
gpt_4.name: gpt_4,
@@ -657,9 +635,9 @@ class ModelUtils:
llama_3_3_70b.name: llama_3_3_70b,
### Mistral ###
mixtral_7b.name: mixtral_7b,
mixtral_8x7b.name: mixtral_8x7b,
mistral_nemo.name: mistral_nemo,
mixtral_small_28b.name: mixtral_small_28b,
### NousResearch ###
hermes_2_dpo.name: hermes_2_dpo,
@@ -667,9 +645,9 @@ class ModelUtils:
### Microsoft ###
# phi
phi_3_5_mini.name: phi_3_5_mini,
phi_4.name: phi_4,
# wizardlm
wizardlm_2_7b.name: wizardlm_2_7b,
wizardlm_2_8x22b.name: wizardlm_2_8x22b,
### Google ###
@@ -735,8 +713,6 @@ class ModelUtils:
nemotron_70b.name: nemotron_70b, ### Nvidia ###
dbrx_instruct.name: dbrx_instruct, ### Databricks ###
p1.name: p1, ### PollinationsAI ###
cably_80b.name: cably_80b, ### CablyAI ###
glm_4.name: glm_4, ### THUDM ###
mini_max.name: mini_max, ## MiniMax
evil.name: evil, ### Uncensored AI ###
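The entries above populate the registry that turns model-name strings into `Model` objects; a lookup sketch (the dict is named `convert` in this module):

```python
from g4f.models import ModelUtils

model = ModelUtils.convert["deepseek-r1"]
print(model.name, model.base_provider)   # deepseek-r1 DeepSeek
print(model.best_provider)               # the IterListProvider defined above
```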