Adding New Models and Enhancing Provider Functionality (#2689)

* Adding New Models and Enhancing Provider Functionality

* fix(core): handle model errors and improve configuration

- Import ModelNotSupportedError for proper exception handling in model resolution
- Update login_url configuration to reference class URL attribute dynamically
- Remove redundant typing imports after internal module reorganization
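
A minimal sketch of the resulting pattern (illustrative only: `ModelNotSupportedError` and `get_model` exist in g4f, but the fallback policy shown here is an assumption, not the library's built-in behavior):

```python
# Sketch: resolve a model name, falling back to the provider default
# when the requested alias is not recognized.
from g4f.errors import ModelNotSupportedError
from g4f.Provider import PerplexityLabs

def resolve_model(provider, requested: str) -> str:
    try:
        return provider.get_model(requested)
    except ModelNotSupportedError:
        # Hypothetical fallback, not what g4f does internally
        return provider.default_model

print(resolve_model(PerplexityLabs, "sonar-reasoning-pro"))
```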

* feat(g4f/Provider/PerplexityLabs.py): Add new Perplexity models and update provider listings

- Update PerplexityLabs provider with expanded Sonar model family including pro/reasoning variants
- Add new text model sonar-reasoning-pro to the supported model catalog
- Standardize model naming conventions across provider documentation
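
A hedged usage sketch of the expanded catalog (assumes the synchronous g4f client API; model and provider names come from this PR):

```python
# Sketch: request one of the new Sonar variants via PerplexityLabs.
from g4f.client import Client
from g4f.Provider import PerplexityLabs

client = Client(provider=PerplexityLabs)
response = client.chat.completions.create(
    model="sonar-reasoning-pro",
    messages=[{"role": "user", "content": "Explain the Sonar model family in one paragraph."}],
)
print(response.choices[0].message.content)
```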

* feat(g4f/models.py): add Sonar Reasoning Pro model configuration

- Add the new sonar-reasoning-pro model to the Perplexity AI text models section
- Include model in ModelUtils.convert mapping with PerplexityLabs provider
- Maintain consistent configuration pattern with existing Sonar variants
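
The pattern in question, as it appears in the g4f/models.py diff further down:

```python
# From the diff below: sonar-reasoning-pro follows the same shape
# as the existing Sonar variants and is registered in ModelUtils.convert.
sonar_reasoning_pro = Model(
    name = 'sonar-reasoning-pro',
    base_provider = 'Perplexity AI',
    best_provider = PerplexityLabs
)
# In ModelUtils.convert:
#   sonar_reasoning_pro.name: sonar_reasoning_pro,
```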

* feat(docs/providers-and-models.md): update provider models and add new reasoning model

- Update PerplexityLabs text models to standardized sonar naming convention
- Add new sonar-reasoning-pro model to text models table
- Include latest Perplexity AI documentation references for new model

* docs(docs/providers-and-models.md): update AI providers documentation

- Remove deprecated chatgptt.me from no-auth providers list
- Delete redundant Auth column from HuggingSpace providers table
- Update PerplexityLabs model website URLs to sonar.perplexity.ai
- Adjust provider counts for GPT-4/GPT-4o models in text models section
- Fix inconsistent formatting in image models provider listings

* chore(g4f/models.py): remove deprecated ChatGptt provider integration

- Remove ChatGptt import from provider dependencies
- Exclude ChatGptt from default model's best_provider list
- Update gpt_4 model configuration to eliminate ChatGptt reference
- Modify gpt_4o vision model provider hierarchy
- Adjust gpt_4o_mini provider selection parameters

BREAKING CHANGE: Existing integrations using ChatGptt provider will no longer function
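
A migration sketch for affected integrations (the replacement provider is only an example; ChatGptt itself now lives under the not_working package, as the move below shows):

```python
# Before this change (no longer importable from the top-level package):
# from g4f.Provider import ChatGptt
# After: switch to any provider that is still active, for example:
from g4f.Provider import ChatGptEs
```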

* Disabled provider (moved g4f/Provider/ChatGptt.py to g4f/Provider/not_working/ChatGptt.py): blocked by Cloudflare

* fix(g4f/Provider/CablyAI.py): update API endpoints and model configurations

* docs(docs/providers-and-models.md): update model listings and provider capabilities

* feat(g4f/models.py): Add Hermes-3 model and enhance provider configs

* feat(g4f/Provider/CablyAI.py): Add free tier indicators to model aliases
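
How the free-tier indicator is resolved, mirroring the CablyAI diff below:

```python
# Aliases map public model names to upstream IDs tagged "(free)";
# get_model() strips the tag before the request is sent upstream.
model_aliases = {"hermes-3": "hermes-3-llama-3.2-3b (free)"}

upstream_id = model_aliases["hermes-3"].split(" (free)")[0]
print(upstream_id)  # -> "hermes-3-llama-3.2-3b"
```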

* refactor(g4f/tools/run_tools.py): modularize thinking chunk handling

* fix(g4f/Provider/DeepInfraChat.py): resolve duplicate keys and enhance request headers

* feat(g4f/Provider/DeepInfraChat.py): Add multimodal image support and improve model handling
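
A hedged sketch of the new image path (per the diff below, `images` takes `(image, name)` pairs and each image is inlined as a data URI on the last message; the file name here is hypothetical):

```python
# Illustrative only: call DeepInfraChat's async generator with an image.
import asyncio
from g4f.Provider import DeepInfraChat

async def describe(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    async for chunk in DeepInfraChat.create_async_generator(
        model="llama-3.2-90b",  # alias of the default vision model
        messages=[{"role": "user", "content": "Describe this image."}],
        images=[(data, path)],  # ImagesType: (image, name) pairs
    ):
        print(chunk, end="")

asyncio.run(describe("cat.jpg"))  # hypothetical local file
```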

* chore(g4f/models.py): update default vision model providers

* feat(docs/providers-and-models.md): update provider capabilities and model specifications

* Update docs/client.md

* docs(docs/providers-and-models.md): Update DeepInfraChat models documentation

* feat(g4f/Provider/DeepInfraChat.py): add new vision models and expand model aliases

* feat(g4f/models.py): update model configurations and add new providers

* feat(g4f/models.py): Update model configurations and add new AI models

---------

Co-authored-by: kqlio67 <>
Author: kqlio67, 2025-02-07 12:54:00 +00:00 (committed by GitHub)
Commit: 88e7ef98f0 (parent: 5d35b746f2)
15 changed files with 381 additions and 229 deletions

File: docs/client.md

@ -181,8 +181,8 @@ for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content or "", end="")
```
---
---
### Using a Vision Model
**Analyze an image and generate a description:**
```python
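# The example body is truncated in this page dump; below is a plausible
# reconstruction based on the surrounding client docs (image URL illustrative).
import requests
from g4f.client import Client

image = requests.get(
    "https://raw.githubusercontent.com/xtekky/gpt4free/main/docs/images/cat.jpeg",
    stream=True,
).raw
client = Client()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's in this image?"}],
    image=image,
)
print(response.choices[0].message.content)
```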

File: docs/providers-and-models.md

@ -38,18 +38,16 @@ This document provides an overview of various AI providers and models, including
| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[aichatfree.info](https://aichatfree.info)|No auth required|`g4f.Provider.AIChatFree`|`gemini-1.5-pro` _**(1+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[autonomous.ai](https://www.autonomous.ai/anon/)|No auth required|`g4f.Provider.AutonomousAI`|`llama-3.3-70b, qwen-2.5-coder-32b, hermes-3, llama-3.2-90b, llama-3.3-70b, llama-3-2-70b`|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gemini-1.5-flash, gemini-1.5-pro, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3-1-405b, llama-3.3-70b, mixtral-small-28b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo, deepseek-r1` _**(+34)**_|`flux`|`blackboxai, gpt-4o, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gpt-4, gpt-4o, o3-mini, claude-3.5-sonnet, gemini-1.5-flash, gemini-1.5-pro, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3-1-405b, llama-3.3-70b, mixtral-small-28b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo, deepseek-r1` _**(+34)**_|`flux`|`blackboxai, gpt-4o, o3-mini, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[api.blackbox.ai](https://api.blackbox.ai)|No auth required|`g4f.Provider.BlackboxAPI`|`deepseek-v3, deepseek-r1, deepseek-chat, mixtral-small-28b, dbrx-instruct, qwq-32b, hermes-2-dpo`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[cablyai.com](https://cablyai.com)|Optional API key|`g4f.Provider.CablyAI`|`gpt-4o-mini, llama-3.1-8b, deepseek-v3, deepseek-r1, o3-mini-low` _**(2+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[cablyai.com](https://cablyai.com)|Optional API key|`g4f.Provider.CablyAI`|`gpt-4o-mini, llama-3.1-8b, deepseek-v3, deepseek-r1, hermes-3, o3-mini-low, o3-mini, sonar-reasoning` _**(2+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(+7)**_|❌|❌|✔|![Error](https://img.shields.io/badge/HTTPError-f48d37)|
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chatgptt.me](https://chatgptt.me)|No auth required|`g4f.Provider.ChatGptt`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔|![Error](https://img.shields.io/badge/Active-brightgreen)|❌|
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, gpt-4o`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, claude-3-haiku, llama-3.1-70b, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-28b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-28b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b, yi-34b, qwen-2-72b, dolphin-2.6, dolphin-2.9, dbrx-instruct, airoboros-70b, lzlv-70b, wizardlm-2-7b, mixtral-8x22b, minicpm-2.5`|❌|`llama-3.2-90b, minicpm-2.5`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`mistral-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
@ -63,10 +61,10 @@ This document provides an overview of various AI providers and models, including
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`grok-2, gpt-4o-mini, gpt-4o, gpt-4, o1-preview, o1-mini, deepseek-r1, deepseek-v3, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash, gemini-2.0-flash-thinking`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[mhystical.cc](https://mhystical.cc)|[Optional API key](https://mhystical.cc/dashboard)|`g4f.Provider.Mhystical`|`gpt-4`|❌|❌|✔|![Error](https://img.shields.io/badge/Active-brightgreen)|
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini`|❌|`gpt-4o-mini`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar-online, sonar-chat, llama-3.3-70b, llama-3.1-8b, llama-3.1-70b, lfm-40b`|❌|❌|✔|![Error](https://img.shields.io/badge/Active-brightgreen)|
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro`|❌|❌|✔|![Error](https://img.shields.io/badge/Active-brightgreen)|
|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|✔|![Error](https://img.shields.io/badge/Active-brightgreen)|
|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o-mini, gpt-4o, qwen-2.5-72b, qwen-2.5-coder-32b, llama-3.3-70b, mistral-nemo, deepseek-chat, llama-3.1-8b, deepseek-r1` _**(2+)**_|`flux, flux-pro, flux-realism, flux-cablyai, flux-anime, flux-3d, midjourney, dall-e-3, sdxl-turbo`|gpt-4o, gpt-4o-mini|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o-mini, gpt-4o, qwen-2.5-72b, qwen-2.5-coder-32b, llama-3.3-70b, mistral-nemo, deepseek-chat, llama-3.1-8b, deepseek-r1, gemini-2.0-flash, gemini-2.0-flash-thinking` _**(3+)**_|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|gpt-4o, gpt-4o-mini|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[app.prodia.com](https://app.prodia.com)|No auth required|`g4f.Provider.Prodia`|❌|✔ _**(46)**_|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
|[teach-anything.com](https://www.teach-anything.com)|No auth required|`g4f.Provider.TeachAnything`|`llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
|[you.com](https://you.com)|[Manual cookies](https://you.com)|`g4f.Provider.You`|✔|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|
@ -124,12 +122,13 @@ This document provides an overview of various AI providers and models, including
### Text Models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-4|OpenAI|10+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|gpt-4o-mini|OpenAI|8+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|1+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-preview|OpenAI|1+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-mini|OpenAI|1+ Providers|[openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|o3-mini|OpenAI|2+ Providers|[openai.com](https://openai.com/index/openai-o3-mini/)|
|o3-mini-low|OpenAI|1+ Providers|[openai.com](https://openai.com/index/openai-o3-mini/)|
|gigachat|GigaChat|1+ Providers|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|
|meta-ai|Meta|1+ Providers|[ai.meta.com](https://ai.meta.com/)|
@ -145,22 +144,25 @@ This document provides an overview of various AI providers and models, including
|llama-3.2-90b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)|
|llama-3.3-70b|Meta Llama|6+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-3/)|
|mixtral-8x7b|Mistral|2+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mixtral-8x22b|Mistral|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1)|
|mistral-nemo|Mistral|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|mixtral-small-28b|Mistral|2+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-small-28b/)|
|mixtral-small-28b|Mistral|3+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-small-28b/)|
|hermes-2-dpo|NousResearch|2+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|hermes-3|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B)|
|phi-3.5-mini|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|
|phi-4|Microsoft|1+ Providers|[techcommunity.microsoft.com](https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090)|
|wizardlm-2-7b|Microsoft|1+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|wizardlm-2-8x22b|Microsoft|2+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|gemini|Google DeepMind|1+|[deepmind.google](http://deepmind.google/technologies/gemini/)|
|gemini-exp|Google DeepMind|1+ Providers|[blog.google](https://blog.google/feed/gemini-exp-1206/)|
|gemini-1.5-flash|Google DeepMind|5+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-1.5-pro|Google DeepMind|5+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemini-2.0-flash|Google DeepMind|2+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-2.0-flash|Google DeepMind|3+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-2.0-flash-thinking|Google DeepMind|1+ Providers|[ai.google.dev](https://ai.google.dev/gemini-api/docs/thinking-mode)|
|claude-3-haiku|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
|claude-3-sonnet|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3-opus|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3.5-sonnet|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|claude-3.5-sonnet|Anthropic|3+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|reka-core|Reka AI|1+ Providers|[reka.ai](https://www.reka.ai/ourmodels)|
|blackboxai|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|blackboxai-pro|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
@ -168,26 +170,33 @@ This document provides an overview of various AI providers and models, including
|command-r-plus|CohereForAI|2+ Providers|[docs.cohere.com](https://docs.cohere.com/docs/command-r-plus)|
|command-r7b|CohereForAI|1+ Providers|[huggingface.co](https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024)|
|qwen-1.5-7b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen1.5-7B)|
|qwen-2-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-72B)|
|qwen-2-72b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-72B)|
|qwen-2-vl-7b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-VL-7B)|
|qwen-2.5-72b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)|
|qwen-2.5-coder-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
|qwen-2.5-coder-32b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
|qwen-2.5-1m-demo|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-1M-Demo)|
|qwq-32b|Qwen|5+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qwq-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qvq-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/QVQ-72B-Preview)|
|pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
|deepseek-chat|DeepSeek|4+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-v3|DeepSeek|4+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|8+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-v3|DeepSeek|5+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|10+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|grok-2|x.ai|1+|[x.ai](https://x.ai/blog/grok-2)|
|sonar|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|sonar-pro|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|sonar-reasoning|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|nemotron-70b|Nvidia|3+ Providers|[build.nvidia.com](https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct)|
|dbrx-instruct|Databricks|2+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|sonar|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-pro|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-reasoning|Perplexity AI|2+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-reasoning-pro|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|nemotron-70b|Nvidia|2+ Providers|[build.nvidia.com](https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct)|
|dbrx-instruct|Databricks|3+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|glm-4|THUDM|1+ Providers|[github.com/THUDM](https://github.com/THUDM/GLM-4)|
|mini_max|MiniMax|1+ Providers|[hailuo.ai](https://www.hailuo.ai/)|
|evil|Evil Mode - Experimental|1+ Providers||
|yi-34b|01-ai|1+ Providers|[huggingface.co](https://huggingface.co/01-ai/Yi-34B-Chat)|
|dolphin-2.6|Cognitive Computations|1+ Providers|[huggingface.co](https://huggingface.co/cognitivecomputations/dolphin-2.6-mixtral-8x7b)|
|dolphin-2.9|Cognitive Computations|1+ Providers|[huggingface.co](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b)|
|airoboros-70b|DeepInfra|1+ Providers|[huggingface.co](https://huggingface.co/deepinfra/airoboros-70b)|
|lzlv-70b|Lizpreciatior|1+ Providers|[huggingface.co](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)|
|minicpm-2.5|OpenBMB|1+ Providers|[huggingface.co](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5)|
|evil|Evil Mode - Experimental|1+ Providers|[]( )|
---
### Image Models
@ -197,8 +206,8 @@ This document provides an overview of various AI providers and models, including
|sd-3.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/stable-diffusion-3.5-large)|
|flux|Black Forest Labs|3+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
|flux-pro|Black Forest Labs|1+ Providers|[huggingface.co](https://huggingface.co/enhanceaiteam/FLUX.1-Pro)|
|flux-dev|Black Forest Labs|3+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-dev)|
|flux-schnell|Black Forest Labs|3+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
|flux-dev|Black Forest Labs|4+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-dev)|
|flux-schnell|Black Forest Labs|4+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
|dall-e-3|OpenAI|5+ Providers|[openai.com](https://openai.com/index/dall-e/)|
|midjourney|Midjourney|1+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|

File: g4f/Provider/Blackbox.py

@ -15,7 +15,7 @@ from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..image import to_data_uri
from ..cookies import get_cookies_dir
from .helper import format_prompt, format_image_prompt
from ..providers.response import JsonConversation, ImageResponse, Reasoning
from ..providers.response import JsonConversation, ImageResponse
class Conversation(JsonConversation):
validated_value: str = None
@ -39,10 +39,9 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
default_vision_model = default_model
default_image_model = 'ImageGeneration'
image_models = [default_image_model]
vision_models = [default_vision_model, 'gpt-4o', 'o3-mini', 'gemini-pro', 'DeepSeek-V3', 'gemini-1.5-flash', 'llama-3.1-8b', 'llama-3.1-70b', 'llama-3.1-405b']
reasoning_models = ['DeepSeek-R1']
vision_models = [default_vision_model, 'gpt-4o', 'o3-mini', 'gemini-pro', 'gemini-1.5-flash', 'llama-3.1-8b', 'llama-3.1-70b', 'llama-3.1-405b']
userSelectedModel = ['gpt-4o', 'o3-mini', 'claude-sonnet-3.5', 'gemini-pro', 'blackboxai-pro']
userSelectedModel = ['gpt-4o', 'o3-mini', 'gemini-pro', 'claude-sonnet-3.5', 'DeepSeek-V3', 'DeepSeek-R1', 'blackboxai-pro', 'Meta-Llama-3.3-70B-Instruct-Turbo', 'Mistral-Small-24B-Instruct-2501', 'DeepSeek-LLM-Chat-(67B)', 'DBRX-Instruct', 'Qwen-QwQ-32B-Preview', 'Nous-Hermes-2-Mixtral-8x7B-DPO']
agentMode = {
'DeepSeek-V3': {'mode': True, 'id': "deepseek-chat", 'name': "DeepSeek-V3"},
@ -56,6 +55,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
}
trendingAgentMode = {
"o3-mini": {'mode': True, 'id': 'o3-mini'},
"gemini-1.5-flash": {'mode': True, 'id': 'Gemini'},
"llama-3.1-8b": {'mode': True, 'id': "llama-3.1-8b"},
'llama-3.1-70b': {'mode': True, 'id': "llama-3.1-70b"},
@ -94,9 +94,11 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
'builder Agent': {'mode': True, 'id': "builder Agent"},
}
models = list(dict.fromkeys([default_model, *userSelectedModel, *reasoning_models, *image_models, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))
models = list(dict.fromkeys([default_model, *userSelectedModel, *image_models, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))
model_aliases = {
"gpt-4": "gpt-4o",
"claude-3.5-sonnet": "claude-sonnet-3.5",
"gemini-1.5-flash": "gemini-1.5-flash",
"gemini-1.5-pro": "gemini-pro",
"deepseek-v3": "DeepSeek-V3",
@ -177,7 +179,6 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
messages: Messages,
prompt: str = None,
proxy: str = None,
web_search: bool = False,
images: ImagesType = None,
top_p: float = None,
temperature: float = None,
@ -283,60 +284,20 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
"vscodeClient": False,
"codeInterpreterMode": False,
"customProfile": {"name": "", "occupation": "", "traits": [], "additionalInfo": "", "enableNewChats": False},
"webSearchMode": web_search
"session": {"user":{"name":"John Doe","email":"john.doe@gmail.com","image":"https://lh3.googleusercontent.com/a/ACg8ocK9X7mNpQ2vR4jH3tY8wL5nB1xM6fDS9JW2kLpTn4Vy3hR2xN4m=s96-c"},"expires":datetime.now(timezone.utc).isoformat(timespec='milliseconds').replace('+00:00', 'Z'), "status": "PREMIUM"},
"webSearchMode": False
}
async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response:
await raise_for_status(response)
response_text = await response.text()
parts = response_text.split('$~~~$')
text_to_yield = parts[2] if len(parts) >= 3 else response_text
if not text_to_yield or text_to_yield.isspace():
return
if model in cls.reasoning_models and "\n\n\n" in text_to_yield:
think_split = text_to_yield.split("\n\n\n", 1)
if len(think_split) > 1:
think_content, answer = think_split[0].strip(), think_split[1].strip()
yield Reasoning(status=think_content)
yield answer
else:
yield text_to_yield
elif "<think>" in text_to_yield:
pre_think, rest = text_to_yield.split('<think>', 1)
think_content, post_think = rest.split('</think>', 1)
pre_think = pre_think.strip()
think_content = think_content.strip()
post_think = post_think.strip()
if pre_think:
yield pre_think
if think_content:
yield Reasoning(status=think_content)
if post_think:
yield post_think
elif "Generated by BLACKBOX.AI" in text_to_yield:
conversation.validated_value = await cls.fetch_validated(force_refresh=True)
if conversation.validated_value:
data["validated"] = conversation.validated_value
async with session.post(cls.api_endpoint, json=data, proxy=proxy) as new_response:
await raise_for_status(new_response)
new_response_text = await new_response.text()
new_parts = new_response_text.split('$~~~$')
new_text = new_parts[2] if len(new_parts) >= 3 else new_response_text
if new_text and not new_text.isspace():
yield new_text
else:
if text_to_yield and not text_to_yield.isspace():
yield text_to_yield
else:
if text_to_yield and not text_to_yield.isspace():
yield text_to_yield
full_response = []
async for chunk in response.content.iter_any():
if chunk:
chunk_text = chunk.decode()
full_response.append(chunk_text)
yield chunk_text
if return_conversation:
conversation.message_history.append({"role": "assistant", "content": text_to_yield})
full_response_text = ''.join(full_response)
conversation.message_history.append({"role": "assistant", "content": full_response_text})
yield conversation

File: g4f/Provider/BlackboxAPI.py

@ -5,7 +5,6 @@ from aiohttp import ClientSession
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..requests.raise_for_status import raise_for_status
from ..providers.response import Reasoning
from .helper import format_prompt
class BlackboxAPI(AsyncGeneratorProvider, ProviderModelMixin):
@ -20,15 +19,15 @@ class BlackboxAPI(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True
default_model = 'deepseek-ai/DeepSeek-V3'
reasoning_models = ['deepseek-ai/DeepSeek-R1']
models = [
default_model,
'deepseek-ai/DeepSeek-R1',
'mistralai/Mistral-Small-24B-Instruct-2501',
'deepseek-ai/deepseek-llm-67b-chat',
'databricks/dbrx-instruct',
'Qwen/QwQ-32B-Preview',
'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO'
] + reasoning_models
]
model_aliases = {
"deepseek-v3": "deepseek-ai/DeepSeek-V3",
@ -65,39 +64,14 @@ class BlackboxAPI(AsyncGeneratorProvider, ProviderModelMixin):
async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response:
await raise_for_status(response)
is_reasoning = False
current_reasoning = ""
async for chunk in response.content:
if not chunk:
continue
text = chunk.decode(errors='ignore')
if model in cls.reasoning_models:
if "<think>" in text:
text = text.replace("<think>", "")
is_reasoning = True
current_reasoning = text
continue
if "</think>" in text:
text = text.replace("</think>", "")
is_reasoning = False
current_reasoning += text
yield Reasoning(status=current_reasoning.strip())
current_reasoning = ""
continue
if is_reasoning:
current_reasoning += text
continue
try:
if text:
yield text
except Exception as e:
return
if is_reasoning and current_reasoning:
yield Reasoning(status=current_reasoning.strip())

File: g4f/Provider/CablyAI.py

@ -4,31 +4,43 @@ from ..errors import ModelNotSupportedError
from .template import OpenaiTemplate
class CablyAI(OpenaiTemplate):
label = "CablyAI"
url = "https://cablyai.com"
login_url = url
url = "https://cablyai.com/chat"
login_url = "https://cablyai.com"
api_base = "https://cablyai.com/v1"
api_key = "sk-your-openai-api-key"
working = True
needs_auth = False
supports_stream = True
supports_system_message = True
supports_message_history = True
default_model = 'gpt-4o-mini'
reasoning_models = ['deepseek-r1-uncensored']
fallback_models = [
default_model,
'searchgpt',
'llama-3.1-8b-instruct',
'deepseek-r1-uncensored',
'deepseek-r1',
'deepseek-reasoner',
'deepseek-v3',
'tinyswallow1.5b',
'andy-3.5',
'hermes-3-llama-3.2-3b',
'llama-3.1-8b-instruct',
'o3-mini',
'o3-mini-low',
] + reasoning_models
'sonar-reasoning',
'tinyswallow1.5b',
]
model_aliases = {
"gpt-4o-mini": "searchgpt",
"llama-3.1-8b": "llama-3.1-8b-instruct",
"deepseek-r1": "deepseek-r1-uncensored",
"gpt-4o-mini": "searchgpt (free)",
"deepseek-r1": "deepseek-r1-uncensored (free)",
"deepseek-r1": "deepseek-reasoner (free)",
"hermes-3": "hermes-3-llama-3.2-3b (free)",
"llama-3.1-8b": "llama-3.1-8b-instruct (free)",
"o3-mini-low": "o3-mini-low (free)",
"o3-mini": "o3-mini-low (free)",
"o3-mini": "o3-mini (free)",
}
@classmethod
@ -42,6 +54,34 @@ class CablyAI(OpenaiTemplate):
model = super().get_model(model, **kwargs)
return model.split(" (free)")[0]
except ModelNotSupportedError:
if f"{model} (free)" in cls.models:
return model
raise
@classmethod
def create_async_generator(
cls,
model: str,
messages: Messages,
api_key: str = None,
stream: bool = True,
**kwargs
) -> AsyncResult:
api_key = api_key or cls.api_key
headers = {
"Accept": "*/*",
"Accept-Language": "en-US,en;q=0.9",
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
"Origin": cls.url,
"Referer": f"{cls.url}/chat",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
}
return super().create_async_generator(
model=model,
messages=messages,
api_key=api_key,
stream=stream,
headers=headers,
**kwargs
)

File: g4f/Provider/DeepInfraChat.py

@ -1,7 +1,8 @@
from __future__ import annotations
from ..typing import AsyncResult, Messages
from ..typing import AsyncResult, Messages, ImagesType
from .template import OpenaiTemplate
from ..image import to_data_uri
class DeepInfraChat(OpenaiTemplate):
url = "https://deepinfra.com/chat"
@ -9,10 +10,12 @@ class DeepInfraChat(OpenaiTemplate):
working = True
default_model = 'meta-llama/Llama-3.3-70B-Instruct-Turbo'
default_vision_model = 'meta-llama/Llama-3.2-90B-Vision-Instruct'
vision_models = [default_vision_model, 'openbmb/MiniCPM-Llama3-V-2_5']
models = [
'meta-llama/Meta-Llama-3.1-8B-Instruct',
'meta-llama/Llama-3.2-90B-Vision-Instruct',
default_model,
'meta-llama/Llama-3.3-70B-Instruct',
'deepseek-ai/DeepSeek-V3',
'mistralai/Mistral-Small-24B-Instruct-2501',
'deepseek-ai/DeepSeek-R1',
@ -21,19 +24,38 @@ class DeepInfraChat(OpenaiTemplate):
'microsoft/phi-4',
'microsoft/WizardLM-2-8x22B',
'Qwen/Qwen2.5-72B-Instruct',
]
'01-ai/Yi-34B-Chat',
'Qwen/Qwen2-72B-Instruct',
'cognitivecomputations/dolphin-2.6-mixtral-8x7b',
'cognitivecomputations/dolphin-2.9.1-llama-3-70b',
'databricks/dbrx-instruct',
'deepinfra/airoboros-70b',
'lizpreciatior/lzlv_70b_fp16_hf',
'microsoft/WizardLM-2-7B',
'mistralai/Mixtral-8x22B-Instruct-v0.1',
] + vision_models
model_aliases = {
"llama-3.1-8b": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"llama-3.2-90b": "meta-llama/Llama-3.2-90B-Vision-Instruct",
"llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct",
"deepseek-v3": "deepseek-ai/DeepSeek-V3",
"mixtral-small-28b": "mistralai/Mistral-Small-24B-Instruct-2501",
"deepseek-r1": "deepseek-ai/DeepSeek-R1",
"deepseek-r1": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"deepseek-r1": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"deepseek-r1-distill-llama": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"deepseek-r1-distill-qwen": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"phi-4": "microsoft/phi-4",
"wizardlm-2-8x22b": "microsoft/WizardLM-2-8x22B",
"qwen-2.5-72b": "Qwen/Qwen2.5-72B-Instruct",
"yi-34b": "01-ai/Yi-34B-Chat",
"qwen-2-72b": "Qwen/Qwen2-72B-Instruct",
"dolphin-2.6": "cognitivecomputations/dolphin-2.6-mixtral-8x7b",
"dolphin-2.9": "cognitivecomputations/dolphin-2.9.1-llama-3-70b",
"dbrx-instruct": "databricks/dbrx-instruct",
"airoboros-70b": "deepinfra/airoboros-70b",
"lzlv-70b": "lizpreciatior/lzlv_70b_fp16_hf",
"wizardlm-2-7b": "microsoft/WizardLM-2-7B",
"mixtral-8x22b": "mistralai/Mixtral-8x22B-Instruct-v0.1",
"minicpm-2.5": "openbmb/MiniCPM-Llama3-V-2_5",
}
@classmethod
@ -46,6 +68,7 @@ class DeepInfraChat(OpenaiTemplate):
temperature: float = 0.7,
max_tokens: int = None,
headers: dict = {},
images: ImagesType = None,
**kwargs
) -> AsyncResult:
headers = {
@ -53,7 +76,35 @@ class DeepInfraChat(OpenaiTemplate):
'Origin': 'https://deepinfra.com',
'Referer': 'https://deepinfra.com/',
'X-Deepinfra-Source': 'web-page',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36',
**headers
}
async for chunk in super().create_async_generator(model, messages, headers=headers, **kwargs):
if images is not None:
if not model or model not in cls.models:
model = cls.default_vision_model
if messages:
last_message = messages[-1].copy()
last_message["content"] = [
*[{
"type": "image_url",
"image_url": {"url": to_data_uri(image)}
} for image, _ in images],
{
"type": "text",
"text": last_message["content"]
}
]
messages[-1] = last_message
async for chunk in super().create_async_generator(
model,
messages,
headers=headers,
stream=stream,
top_p=top_p,
temperature=temperature,
max_tokens=max_tokens,
**kwargs
):
yield chunk

File: g4f/Provider/Glider.py

@ -5,7 +5,7 @@ from aiohttp import ClientSession
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..requests.raise_for_status import raise_for_status
from ..providers.response import FinishReason, Reasoning
from ..providers.response import FinishReason
from .helper import format_prompt
class Glider(AsyncGeneratorProvider, ProviderModelMixin):

File: g4f/Provider/OIVSCode.py

@ -6,9 +6,16 @@ class OIVSCode(OpenaiTemplate):
label = "OI VSCode Server"
url = "https://oi-vscode-server.onrender.com"
api_base = "https://oi-vscode-server.onrender.com/v1"
working = True
needs_auth = False
supports_stream = True
supports_system_message = True
supports_message_history = True
default_model = "gpt-4o-mini"
default_model = "gpt-4o-mini-2024-07-18"
default_vision_model = default_model
vision_models = [default_model, "gpt-4o-mini"]
models = vision_models
model_aliases = {"gpt-4o-mini": "gpt-4o-mini-2024-07-18"}

File: g4f/Provider/PerplexityLabs.py

@ -17,9 +17,10 @@ class PerplexityLabs(AsyncGeneratorProvider, ProviderModelMixin):
default_model = "sonar-pro"
models = [
default_model,
"sonar",
default_model,
"sonar-reasoning",
"sonar-reasoning-pro",
]
@classmethod

File: g4f/Provider/PollinationsAI.py

@ -37,21 +37,11 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
# Models configuration
default_model = "openai"
default_image_model = "flux"
default_vision_model = "gpt-4o"
extra_image_models = [
"flux",
"flux-pro",
"flux-realism",
"flux-anime",
"flux-3d",
"flux-cablyai",
"turbo",
"midjourney",
"dall-e-3",
]
extra_image_models = ["flux-pro", "flux-dev", "flux-schnell", "midjourney", "dall-e-3"]
vision_models = [default_vision_model, "gpt-4o-mini"]
reasoning_models = ['deepseek-reasoner', 'deepseek-r1']
extra_text_models = ["claude", "claude-email", "p1"] + vision_models + reasoning_models
extra_text_models = ["claude", "claude-email", "deepseek-reasoner", "deepseek-r1"] + vision_models
model_aliases = {
### Text Models ###
"gpt-4o-mini": "openai",
@ -64,7 +54,6 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
"gpt-4o-mini": "rtist",
"gpt-4o": "searchgpt",
"gpt-4o-mini": "p1",
"deepseek-chat": "deepseek",
"deepseek-chat": "claude-hybridspace",
"llama-3.1-8b": "llamalight",
"gpt-4o-vision": "gpt-4o",
@ -72,33 +61,38 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
"gpt-4o-mini": "claude",
"deepseek-chat": "claude-email",
"deepseek-r1": "deepseek-reasoner",
"gemini-2.0-flash": "gemini",
"gemini-2.0-flash-thinking": "gemini-thinking",
### Image Models ###
"sdxl-turbo": "turbo",
"flux-schnell": "flux",
"flux-dev": "flux",
}
text_models = []
image_models = []
@classmethod
def get_models(cls, **kwargs):
if not cls.text_models:
url = "https://image.pollinations.ai/models"
response = requests.get(url)
raise_for_status(response)
new_image_models = response.json()
cls.extra_image_models = list(dict.fromkeys([*cls.extra_image_models, *new_image_models]))
if not cls.text_models or not cls.image_models:
image_url = "https://image.pollinations.ai/models"
image_response = requests.get(image_url)
raise_for_status(image_response)
new_image_models = image_response.json()
cls.image_models = list(dict.fromkeys([*cls.extra_image_models, *new_image_models]))
cls.extra_image_models = cls.image_models.copy()
text_url = "https://text.pollinations.ai/models"
text_response = requests.get(text_url)
raise_for_status(text_response)
original_text_models = [model.get("name") for model in text_response.json()]
url = "https://text.pollinations.ai/models"
response = requests.get(url)
raise_for_status(response)
original_text_models = [model.get("name") for model in response.json()]
combined_text = cls.extra_text_models + [
model for model in original_text_models
if model not in cls.extra_text_models
]
cls.text_models = list(dict.fromkeys(combined_text))
return cls.text_models
return cls.text_models + cls.image_models
@classmethod
async def create_async_generator(
@ -194,7 +188,7 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
yield ImagePreview(url, prompt)
async with ClientSession(headers=DEFAULT_HEADERS, connector=get_connector(proxy=proxy)) as session:
async with session.head(url) as response:
if response.status != 500: # Server is busy
if response.status != 500:
await raise_for_status(response)
yield ImageResponse(str(response.url), prompt)
@ -257,11 +251,6 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
choice = data["choices"][0]
message = choice.get("message") or choice.get("delta", {})
# Handle reasoning content
if model in cls.reasoning_models:
if "reasoning_content" in message:
yield Reasoning(status=message["reasoning_content"].strip())
if "usage" in data:
yield Usage(**data["usage"])
content = message.get("content", "")

File: g4f/Provider/__init__.py

@ -20,7 +20,6 @@ from .CablyAI import CablyAI
from .ChatGLM import ChatGLM
from .ChatGpt import ChatGpt
from .ChatGptEs import ChatGptEs
from .ChatGptt import ChatGptt
from .Cloudflare import Cloudflare
from .Copilot import Copilot
from .DDG import DDG

View file

@ -2,19 +2,18 @@ from __future__ import annotations
import os
import re
from aiohttp import ClientSession
from ..typing import AsyncResult, Messages
from ..requests.raise_for_status import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ...typing import AsyncResult, Messages
from ...requests.raise_for_status import raise_for_status
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt
class ChatGptt(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://chatgptt.me"
api_endpoint = "https://chatgptt.me/wp-admin/admin-ajax.php"
working = True
working = False
supports_stream = True
supports_system_message = True
supports_message_history = True
@ -41,10 +40,22 @@ class ChatGptt(AsyncGeneratorProvider, ProviderModelMixin):
}
async with ClientSession(headers=headers) as session:
# Get initial page content
initial_response = await session.get(cls.url)
nonce_ = re.findall(r'data-nonce="(.+?)"', await initial_response.text())[0]
post_id = re.findall(r'data-post-id="(.+?)"', await initial_response.text())[0]
await raise_for_status(initial_response)
html = await initial_response.text()
# Extract nonce and post ID with error handling
nonce_match = re.search(r'data-nonce=["\']([^"\']+)["\']', html)
post_id_match = re.search(r'data-post-id=["\']([^"\']+)["\']', html)
if not nonce_match or not post_id_match:
raise RuntimeError("Required authentication tokens not found in page HTML")
nonce_ = nonce_match.group(1)
post_id = post_id_match.group(1)
# Prepare payload with session data
payload = {
'_wpnonce': nonce_,
'post_id': post_id,
@ -57,6 +68,7 @@ class ChatGptt(AsyncGeneratorProvider, ProviderModelMixin):
'wpaicg_chat_history': None
}
# Stream the response
async with session.post(cls.api_endpoint, headers=headers, data=payload, proxy=proxy) as response:
await raise_for_status(response)
result = await response.json()

File: g4f/Provider/not_working/__init__.py

@ -10,6 +10,7 @@ from .Aura import Aura
from .Chatgpt4o import Chatgpt4o
from .Chatgpt4Online import Chatgpt4Online
from .ChatgptFree import ChatgptFree
from .ChatGptt import ChatGptt
from .DarkAI import DarkAI
from .FlowGpt import FlowGpt
from .FreeNetfly import FreeNetfly

File: g4f/models.py

@ -10,7 +10,6 @@ from .Provider import (
CablyAI,
ChatGLM,
ChatGptEs,
ChatGptt,
Cloudflare,
Copilot,
DDG,
@ -81,7 +80,6 @@ default = Model(
Copilot,
DeepInfraChat,
ChatGptEs,
ChatGptt,
PollinationsAI,
Jmuz,
CablyAI,
@ -96,6 +94,8 @@ default_vision = Model(
base_provider = "",
best_provider = IterListProvider([
Blackbox,
OIVSCode,
DeepInfraChat,
PollinationsAI,
HuggingSpace,
GeminiPro,
@ -115,20 +115,20 @@ default_vision = Model(
gpt_4 = Model(
name = 'gpt-4',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, Jmuz, ChatGptEs, ChatGptt, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical])
best_provider = IterListProvider([Blackbox, DDG, Jmuz, ChatGptEs, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical])
)
# gpt-4o
gpt_4o = VisionModel(
name = 'gpt-4o',
base_provider = 'OpenAI',
best_provider = IterListProvider([ChatGptt, Jmuz, ChatGptEs, PollinationsAI, Copilot, Liaobots, OpenaiChat])
best_provider = IterListProvider([Blackbox, Jmuz, ChatGptEs, PollinationsAI, Copilot, Liaobots, OpenaiChat])
)
gpt_4o_mini = Model(
name = 'gpt-4o-mini',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, ChatGptEs, ChatGptt, Jmuz, PollinationsAI, OIVSCode, CablyAI, Liaobots, OpenaiChat])
best_provider = IterListProvider([DDG, ChatGptEs, Jmuz, PollinationsAI, OIVSCode, CablyAI, Liaobots, OpenaiChat])
)
# o1
@ -151,6 +151,12 @@ o1_mini = Model(
)
# o3
o3_mini = Model(
name = 'o3-mini',
base_provider = 'OpenAI',
best_provider = IterListProvider([Blackbox, CablyAI])
)
o3_mini_low = Model(
name = 'o3-mini-low',
base_provider = 'OpenAI',
@ -247,6 +253,11 @@ mixtral_8x7b = Model(
base_provider = "Mistral",
best_provider = IterListProvider([DDG, Jmuz])
)
mixtral_8x22b = Model(
name = "mixtral-8x22b",
base_provider = "Mistral",
best_provider = DeepInfraChat
)
mistral_nemo = Model(
name = "mistral-nemo",
@ -267,6 +278,12 @@ hermes_2_dpo = Model(
best_provider = IterListProvider([Blackbox, BlackboxAPI])
)
hermes_3 = Model(
name = "hermes-3",
base_provider = "NousResearch",
best_provider = CablyAI
)
### Microsoft ###
# phi
@ -283,6 +300,12 @@ phi_4 = Model(
)
# wizardlm
wizardlm_2_7b = Model(
name = 'wizardlm-2-7b',
base_provider = 'Microsoft',
best_provider = DeepInfraChat
)
wizardlm_2_8x22b = Model(
name = 'wizardlm-2-8x22b',
base_provider = 'Microsoft',
@ -321,7 +344,7 @@ gemini_1_5_pro = Model(
gemini_2_0_flash = Model(
name = 'gemini-2.0-flash',
base_provider = 'Google DeepMind',
best_provider = IterListProvider([GeminiPro, Liaobots])
best_provider = IterListProvider([PollinationsAI, GeminiPro, Liaobots])
)
gemini_2_0_flash_thinking = Model(
@ -355,7 +378,7 @@ claude_3_opus = Model(
claude_3_5_sonnet = Model(
name = 'claude-3.5-sonnet',
base_provider = 'Anthropic',
best_provider = IterListProvider([Jmuz, Liaobots])
best_provider = IterListProvider([Blackbox, Jmuz, Liaobots])
)
### Reka AI ###
@ -406,7 +429,7 @@ qwen_1_5_7b = Model(
qwen_2_72b = Model(
name = 'qwen-2-72b',
base_provider = 'Qwen',
best_provider = HuggingSpace
best_provider = IterListProvider([DeepInfraChat, HuggingSpace])
)
qwen_2_vl_7b = VisionModel(
name = "qwen-2-vl-7b",
@ -416,7 +439,7 @@ qwen_2_vl_7b = VisionModel(
qwen_2_5_72b = Model(
name = 'qwen-2.5-72b',
base_provider = 'Qwen',
best_provider = IterListProvider([DeepInfraChat, PollinationsAI, Jmuz])
best_provider = IterListProvider([PollinationsAI, Jmuz])
)
qwen_2_5_coder_32b = Model(
name = 'qwen-2.5-coder-32b',
@ -490,6 +513,12 @@ sonar_pro = Model(
sonar_reasoning = Model(
name = 'sonar-reasoning',
base_provider = 'Perplexity AI',
best_provider = IterListProvider([PerplexityLabs, CablyAI])
)
sonar_reasoning_pro = Model(
name = 'sonar-reasoning-pro',
base_provider = 'Perplexity AI',
best_provider = PerplexityLabs
)
@ -504,7 +533,7 @@ nemotron_70b = Model(
dbrx_instruct = Model(
name = 'dbrx-instruct',
base_provider = 'Databricks',
best_provider = IterListProvider([Blackbox, BlackboxAPI])
best_provider = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat])
)
### THUDM ###
@ -514,13 +543,54 @@ glm_4 = Model(
best_provider = ChatGLM
)
### MiniMax
### MiniMax ###
mini_max = Model(
name = "MiniMax",
base_provider = "MiniMax",
best_provider = HailuoAI
)
### 01-ai ###
yi_34b = Model(
name = "yi-34b",
base_provider = "01-ai",
best_provider = DeepInfraChat
)
### Cognitive Computations ###
dolphin_2_6 = Model(
name = "dolphin-2.6",
base_provider = "Cognitive Computations",
best_provider = DeepInfraChat
)
dolphin_2_9 = Model(
name = "dolphin-2.9",
base_provider = "Cognitive Computations",
best_provider = DeepInfraChat
)
### DeepInfra ###
airoboros_70b = Model(
name = "airoboros-70b",
base_provider = "DeepInfra",
best_provider = DeepInfraChat
)
### Lizpreciatior ###
lzlv_70b = Model(
name = "lzlv-70b",
base_provider = "Lizpreciatior",
best_provider = DeepInfraChat
)
### OpenBMB ###
minicpm_2_5 = Model(
name = "minicpm-2.5",
base_provider = "OpenBMB",
best_provider = DeepInfraChat
)
### Uncensored AI ###
evil = Model(
name = 'evil',
@ -556,19 +626,19 @@ flux = ImageModel(
flux_pro = ImageModel(
name = 'flux-pro',
base_provider = 'Black Forest Labs',
best_provider = PollinationsImage
best_provider = PollinationsAI
)
flux_dev = ImageModel(
name = 'flux-dev',
base_provider = 'Black Forest Labs',
best_provider = IterListProvider([HuggingSpace, HuggingChat, HuggingFace])
best_provider = IterListProvider([PollinationsImage, HuggingSpace, HuggingChat, HuggingFace])
)
flux_schnell = ImageModel(
name = 'flux-schnell',
base_provider = 'Black Forest Labs',
best_provider = IterListProvider([HuggingSpace, HuggingChat, HuggingFace])
best_provider = IterListProvider([PollinationsImage, HuggingSpace, HuggingChat, HuggingFace])
)
@ -611,6 +681,10 @@ class ModelUtils:
o1_preview.name: o1_preview,
o1_mini.name: o1_mini,
# o3
o3_mini.name: o3_mini,
o3_mini_low.name: o3_mini_low,
### Meta ###
meta.name: meta,
@ -637,11 +711,13 @@ class ModelUtils:
### Mistral ###
mixtral_8x7b.name: mixtral_8x7b,
mixtral_8x22b.name: mixtral_8x22b,
mistral_nemo.name: mistral_nemo,
mixtral_small_28b.name: mixtral_small_28b,
### NousResearch ###
hermes_2_dpo.name: hermes_2_dpo,
hermes_3.name: hermes_3,
### Microsoft ###
# phi
@ -649,6 +725,7 @@ class ModelUtils:
phi_4.name: phi_4,
# wizardlm
wizardlm_2_7b.name: wizardlm_2_7b,
wizardlm_2_8x22b.name: wizardlm_2_8x22b,
### Google ###
@ -706,6 +783,7 @@ class ModelUtils:
sonar.name: sonar,
sonar_pro.name: sonar_pro,
sonar_reasoning.name: sonar_reasoning,
sonar_reasoning_pro.name: sonar_reasoning_pro,
### DeepSeek ###
deepseek_chat.name: deepseek_chat,
@ -715,7 +793,17 @@ class ModelUtils:
nemotron_70b.name: nemotron_70b, ### Nvidia ###
dbrx_instruct.name: dbrx_instruct, ### Databricks ###
glm_4.name: glm_4, ### THUDM ###
mini_max.name: mini_max, ## MiniMax
mini_max.name: mini_max, ## MiniMax ###
yi_34b.name: yi_34b, ## 01-ai ###
### Cognitive Computations ###
dolphin_2_6.name: dolphin_2_6,
dolphin_2_9.name: dolphin_2_9,
airoboros_70b.name: airoboros_70b, ### DeepInfra ###
lzlv_70b.name: lzlv_70b, ### Lizpreciatior ###
minicpm_2_5.name: minicpm_2_5, ### OpenBMB ###
evil.name: evil, ### Uncensored AI ###
#############
@ -742,11 +830,10 @@ class ModelUtils:
demo_models = {
gpt_4o.name: [gpt_4o, [PollinationsAI, Blackbox]],
gpt_4o_mini.name: [gpt_4o_mini, [PollinationsAI, CablyAI, DDG]],
deepseek_r1.name: [deepseek_r1, [PollinationsAI, HuggingFace]],
"default": [llama_3_2_11b, [HuggingFace]],
qwen_2_vl_7b.name: [qwen_2_vl_7b, [HuggingFaceAPI]],
qvq_72b.name: [qvq_72b, [HuggingSpace]],
deepseek_r1.name: [deepseek_r1, [HuggingFace]],
command_r.name: [command_r, [HuggingSpace]],
command_r_plus.name: [command_r_plus, [HuggingSpace]],
command_r7b.name: [command_r7b, [HuggingSpace]],
@ -756,7 +843,7 @@ demo_models = {
llama_3_3_70b.name: [llama_3_3_70b, [HuggingFace]],
sd_3_5.name: [sd_3_5, [HuggingSpace, HuggingFace]],
flux_dev.name: [flux_dev, [PollinationsImage, HuggingSpace, HuggingFace]],
flux_schnell.name: [flux_schnell, [PollinationsImage, HuggingFace, HuggingSpace]],
flux_schnell.name: [flux_schnell, [HuggingFace, HuggingSpace, PollinationsImage]],
}
# Create a list of all models and their providers

File: g4f/tools/run_tools.py

@ -88,6 +88,51 @@ async def async_iter_run_tools(provider: ProviderType, model: str, messages, too
async for chunk in response:
yield chunk
def process_thinking_chunk(chunk: str, start_time: float = 0) -> tuple[float, list]:
"""Process a thinking chunk and return timing and results."""
results = []
# Handle non-thinking chunk
if not start_time and "<think>" not in chunk:
return 0, [chunk]
# Handle thinking start
if "<think>" in chunk and not "`<think>`" in chunk:
before_think, *after = chunk.split("<think>", 1)
if before_think:
results.append(before_think)
results.append(Reasoning(status="🤔 Is thinking...", is_thinking="<think>"))
if after and after[0]:
results.append(Reasoning(after[0]))
return time.time(), results
# Handle thinking end
if "</think>" in chunk:
before_end, *after = chunk.split("</think>", 1)
if before_end:
results.append(Reasoning(before_end))
thinking_duration = time.time() - start_time if start_time > 0 else 0
status = f"Thought for {thinking_duration:.2f}s" if thinking_duration > 1 else "Finished"
results.append(Reasoning(status=status, is_thinking="</think>"))
if after and after[0]:
results.append(after[0])
return 0, results
# Handle ongoing thinking
if start_time:
return start_time, [Reasoning(chunk)]
return start_time, [chunk]
def iter_run_tools(
iter_callback: Callable,
model: str,
@ -149,37 +194,13 @@ def iter_run_tools(
if has_bucket and isinstance(messages[-1]["content"], str):
messages[-1]["content"] += BUCKET_INSTRUCTIONS
is_thinking = 0
thinking_start_time = 0
for chunk in iter_callback(model=model, messages=messages, provider=provider, **kwargs):
if not isinstance(chunk, str):
yield chunk
continue
if "<think>" in chunk and not "`<think>`" in chunk:
if chunk != "<think>":
chunk = chunk.split("<think>", 1)
if len(chunk) > 0 and chunk[0]:
yield chunk[0]
yield Reasoning(status="🤔 Is thinking...", is_thinking="<think>")
if chunk != "<think>":
if len(chunk) > 1 and chunk[1]:
yield Reasoning(chunk[1])
is_thinking = time.time()
else:
if "</think>" in chunk:
if chunk != "<think>":
chunk = chunk.split("</think>", 1)
if len(chunk) > 0 and chunk[0]:
yield Reasoning(chunk[0])
is_thinking = time.time() - is_thinking if is_thinking > 0 else 0
if is_thinking > 1:
yield Reasoning(status=f"Thought for {is_thinking:.2f}s", is_thinking="</think>")
else:
yield Reasoning(status=f"Finished", is_thinking="</think>")
if chunk != "<think>":
if len(chunk) > 1 and chunk[1]:
yield chunk[1]
is_thinking = 0
elif is_thinking:
yield Reasoning(chunk)
else:
yield chunk
thinking_start_time, results = process_thinking_chunk(chunk, thinking_start_time)
for result in results:
yield result
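
A quick demonstration of the extracted helper on a synthetic stream (chunk contents are made up; assumes the module above is importable):

```python
from g4f.tools.run_tools import process_thinking_chunk

start_time = 0
for chunk in ["Hello <think>weighing options", " carefully", "</think> Final answer."]:
    start_time, results = process_thinking_chunk(chunk, start_time)
    for result in results:
        # Plain text passes through; reasoning spans come back as Reasoning objects.
        print(type(result).__name__, "->", result)
```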