Mirror of https://github.com/xtekky/gpt4free.git (synced 2025-12-06 02:30:41 -08:00)
Add new providers and enhance existing provider configurations (#2805)
* New provider added (g4f/Provider/Websim.py)
* New provider added (g4f/Provider/Dynaspark.py)
* feat(g4f/gui/client/static/js/chat.v1.js): Enhance provider labeling for HuggingFace integrations
* feat(g4f/gui/server/api.py): add Hugging Face Space compatibility flag to provider data
* feat(g4f/models.py): add new providers and update model configurations
* Update g4f/Provider/__init__.py
* feat(g4f/Provider/AllenAI.py): expand model alias mappings for AllenAI provider
* feat(g4f/Provider/Blackbox.py): restructure image model handling and response processing
* feat(g4f/Provider/PollinationsAI.py): add new model aliases and streamline headers
* Update g4f/Provider/hf_space/*
* refactor(g4f/Provider/Copilot.py): update model alias mapping
* chore(g4f/models.py): update provider configurations for OpenAI models
* docs(docs/providers-and-models.md): update provider tables and model categorization
* fix(etc/examples/vision_images.py): update model and simplify client configuration
* fix(docs/providers-and-models.md): correct streaming status for GlhfChat provider
* docs(docs/providers-and-models.md): update provider capabilities and model documentation
* fix(models): update provider configurations for Mistral models
* fix(g4f/Provider/Blackbox.py): correct model alias key for Mistral variant
* feat(g4f/Provider/hf_space/CohereForAI_C4AI_Command.py): update supported model versions and aliases (close #2802)
* fix(documentation): correct model names and provider counts (https://github.com/xtekky/gpt4free/pull/2805#issuecomment-2727489835)
* fix(g4f/models.py): correct mistral model configurations
* fix(g4f/Provider/DeepInfraChat.py): correct mixtral-small alias key
* New provider added (g4f/Provider/LambdaChat.py)
* feat(g4f/models.py): add new providers and enhance model configurations
* docs(docs/providers-and-models.md): add LambdaChat provider and update model listings
* feat(g4f/models.py): add new Liquid AI model and enhance providers
* docs(docs/providers-and-models.md): update model listings and provider counts
* feat(g4f/Provider/LambdaChat.py): add conditional reasoning processing based on model
* fix(g4f/tools/run_tools.py): handle combined thinking tags in single chunk
* New provider added (g4f/Provider/Goabror.py)
* feat(g4f/Provider/Blackbox.py): implement dynamic session management and model access control
* refactor(g4f/models.py): update provider configurations and model entries
* docs(docs/providers-and-models.md): update model listings and provider counts

--------

Co-authored-by: kqlio67 <>
Parent: 9c94f9b2e9 | Commit: 52ecfb5019
28 changed files with 1183 additions and 306 deletions
@@ -17,8 +17,8 @@ This document provides an overview of various AI providers and models, including
- [MiniMax](#providers-minimax)
- [Needs auth](#providers-needs-auth)
- [Models](#models)
- [Text Models](#text-models)
- [Image Models](#image-models)
- [Text generation models](#text-generation-models)
- [Image generation models](#image-generation-models)
- [Conclusion and Usage Tips](#conclusion-and-usage-tips)

---
@@ -38,112 +38,117 @@ This document provides an overview of various AI providers and models, including

---
### Providers No auth required
| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[playground.allenai.org](https://playground.allenai.org)|No auth required|`g4f.Provider.AllenAI`|`tulu-3-405b, olmo-2-13b, tulu-3-1-8b, tulu-3-70b, olmoe-0125`|❌|❌|✔||
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gemini-1.5-flash, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3-1-405b, llama-3.3-70b, mixtral-small-28b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo, deepseek-v3, deepseek-r1, gemini-2.0-flash, gpt-4o, o1, o3-mini, gemini-1.5-pro, claude-3.7-sonnet` _**(+29)**_|`flux`|`blackboxai, gpt-4o, o1, o3-mini, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, gemini-2.0-flash, deepseek-v3`|✔||
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|✔||
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(+7)**_|❌|❌|✔||
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔||
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔||❌|
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, gpt-4o`|❌|❌|✔||
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, llama-3.3-70b, claude-3-haiku, o3-mini, mixtral-small-24b`|❌|❌|✔||
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-28b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b, yi-34b, qwen-2-72b, dolphin-2.6, dolphin-2.9, dbrx-instruct, airoboros-70b, lzlv-70b, wizardlm-2-7b, mixtral-8x22b, minicpm-2.5`|❌|`llama-3.2-90b, minicpm-2.5`|✔||
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|✔||
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|✔||
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|✔||
|[glider.so](https://glider.so)|No auth required|`g4f.Provider.Glider`|`llama-3.1-70b, llama-3.1-8b, llama-3.2-3b, deepseek-r1`|❌|❌|✔||
|[hailuo.ai](https://www.hailuo.ai)|No auth required|`g4f.Provider.HailuoAI`|`MiniMax` _**(1)**_|❌|❌|✔||
|[editor.imagelabs.net](editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|`sdxl-turbo`|❌|✔||
|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b`|`flux-dev, flux-schnell, sd-3.5`|❌|✔||
|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`claude-3-haiku, claude-3-opus, claude-3-haiku, claude-3.5-sonnet, deepseek-r1, deepseek-chat, gemini-exp, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash-thinking, gpt-4, gpt-4o, gpt-4o-mini, llama-3-70b, llama-3-8b, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.2-90b, llama-3.3-70b, mixtral-8x7b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b`|❌|❌|✔||
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`claude-3.5-sonnet, claude-3.7-sonnet, claude-3.7-sonnet-thinking, claude-3-opus, claude-3-sonnet, deepseek-r1, deepseek-v3, gemini-2.0-flash, gemini-2.0-flash-thinking, gemini-2.0-pro, gpt-4, gpt-4o, gpt-4o-mini, grok-3, grok-3-r1, o3-mini`|❌|❌|✔||
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini, deepseek-v3`|❌|`gpt-4o-mini`|✔||
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro`|❌|❌|✔||
|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|✔||
|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|✔||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o-mini, gpt-4o, o1-mini, qwen-2.5-coder-32b, llama-3.3-70b, mistral-nemo, llama-3.1-8b, deepseek-r1, phi-4` _**(6+)**_|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|gpt-4o, gpt-4o-mini, o1-mini|✔||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsImage`|❌|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|❌|✔||
|[app.prodia.com](https://app.prodia.com)|No auth required|`g4f.Provider.Prodia`|❌|✔ _**(46)**_|❌|❌||
|[teach-anything.com](https://www.teach-anything.com)|No auth required|`g4f.Provider.TeachAnything`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|✔||
|[you.com](https://you.com)|[Manual cookies](https://you.com)|`g4f.Provider.You`|✔|✔|✔|✔||
|[chat9.yqcloud.top](https://chat9.yqcloud.top)|No auth required|`g4f.Provider.Yqcloud`|`gpt-4`|✔|✔|✔||
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Streaming | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[playground.allenai.org](https://playground.allenai.org)|No auth required|`g4f.Provider.AllenAI`|`tulu-3-405b, olmo-2-13b, tulu-3-1-8b, tulu-3-70b, olmoe-0125`|❌|❌|❌|✔||
|[ai-arta.com](https://ai-arta.com)|No auth required|`g4f.Provider.ARTA`|❌|✔ _**(17+)**_|❌|❌|❌||
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gpt-4o-mini, deepseek-chat, deepseek-v3, deepseek-r1, gpt-4o, o1, o3-mini, claude-3.7-sonnet` _**(40+)**_|`flux`|❌|`blackboxai, gpt-4o, o1, o3-mini, deepseek-v3` _**(7+)**_|✔||
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|❌|✔||
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(7+)**_|❌|❌|❌|✔||
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|❌|✔||
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|❌|✔||❌|
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, o1`|❌|❌|❌|✔||
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, llama-3.3-70b, claude-3-haiku, o3-mini, mixtral-small-24b`|❌|❌|❌|✔||
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-24b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b, yi-34b, qwen-2-72b, dolphin-2.6, dolphin-2.9, dbrx-instruct, airoboros-70b, lzlv-70b, wizardlm-2-7b, mixtral-8x22b, minicpm-2.5`|❌|❌|`llama-3.2-90b, minicpm-2.5`|✔||
|[dynaspark.onrender.com](https://dynaspark.onrender.com)|No auth required|`g4f.Provider.Dynaspark`|`gemini-1.5-flash, gemini-2.0-flash`|❌|❌|`gemini-1.5-flash, gemini-2.0-flash`|✔||
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌|✔||
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌|✔||
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|❌|✔||
|[glider.so](https://glider.so)|No auth required|`g4f.Provider.Glider`|`llama-3.1-70b, llama-3.1-8b, llama-3.2-3b, deepseek-r1`|❌|❌|❌|✔||
|[goabror.uz](https://goabror.uz)|No auth required|`g4f.Provider.Goabror`|`gpt-4`|❌|❌|❌|✔||
|[hailuo.ai](https://www.hailuo.ai)|No auth required|`g4f.Provider.HailuoAI`|`MiniMax` _**(1+)**_|❌|❌|❌|✔||
|[editor.imagelabs.net](editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|`sdxl-turbo`|❌|❌|✔||
|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b, command-a`|`flux-dev, flux-schnell, sd-3.5`|❌|❌|✔||
|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`claude-3-haiku, claude-3-opus, claude-3.5-sonnet, deepseek-r1, deepseek-chat, gemini-exp, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash-thinking, gpt-4, gpt-4o, gpt-4o-mini, llama-3-70b, llama-3-8b, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.2-90b, llama-3.3-70b, mixtral-8x7b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b`|❌|❌|❌|✔||
|[lambda.chat](https://lambda.chat)|No auth required|`g4f.Provider.LambdaChat`|`deepseek-v3, deepseek-r1, hermes-3, nemotron-70b, llama-3.3-70b`|❌|❌|❌|✔||
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`claude-3.5-sonnet, claude-3.7-sonnet, claude-3.7-sonnet-thinking, claude-3-opus, claude-3-sonnet, deepseek-r1, deepseek-v3, gemini-2.0-flash, gemini-2.0-flash-thinking, gemini-2.0-pro, gpt-4, gpt-4o, gpt-4o-mini, grok-3, grok-3-r1, o3-mini`|❌|❌|❌|✔||
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini, deepseek-v3`|❌|❌|`gpt-4o-mini`|✔||
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro`|❌|❌|❌|✔||
|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|❌|✔||
|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|❌|✔||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o-mini, gpt-4o, o1-mini, qwen-2.5-coder-32b, llama-3.3-70b, mistral-nemo, llama-3.1-8b, deepseek-r1, phi-4` _**(9+)**_|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|✔ _**(1+)**_|`gpt-4o, gpt-4o-mini, o1-mini, o3-mini`|❌|✔||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsImage`|❌|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|❌|❌|✔||
|[teach-anything.com](https://www.teach-anything.com)|No auth required|`g4f.Provider.TeachAnything`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌|✔||
|[you.com](https://you.com)|[Manual cookies](https://you.com)|`g4f.Provider.You`|✔|✔|❌|✔|✔||
|[websim.ai](https://websim.ai)|No auth required|`g4f.Provider.Websim`|`gemini-1.5-pro, gemini-1.5-flash`|`flux`|❌|❌|✔||
|[chat9.yqcloud.top](https://chat9.yqcloud.top)|No auth required|`g4f.Provider.Yqcloud`|`gpt-4`|✔|❌|❌|✔||

---
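As a usage note for the tables above: any of these providers can be pinned explicitly on the client. Below is a minimal sketch, assuming `LambdaChat` is exported from `g4f.Provider` as the commit's `__init__.py` update suggests; the model string must be one the provider lists.

```python
from g4f.client import Client
from g4f.Provider import LambdaChat  # provider added in this commit; export assumed

# Pin the provider; see its row above for supported models.
client = Client(provider=LambdaChat)
response = client.chat.completions.create(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```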
### Providers HuggingFace
| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[huggingface.co/chat](https://huggingface.co/chat)|[Manual cookies](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, deepseek-r1, qwq-32b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev, flux-schnell`|❌|✔||
|[huggingface.co/chat](https://huggingface.co/chat)|[API key / Cookies](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFace`|✔ _**(47+)**_|✔ _**(9+)**_|❌|✔||
|[api-inference.huggingface.co](https://api-inference.huggingface.co)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFaceAPI`|✔ _**(9+)**_|✔ _**(2+)**_|✔ _**(1+)**_|❌||✔|

| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Streaming | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[huggingface.co/chat](https://huggingface.co/chat)|[Manual cookies](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, deepseek-r1, qwq-32b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev, flux-schnell`|❌|❌|✔||
|[huggingface.co/chat](https://huggingface.co/chat)|[API key / Cookies](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFace`|✔ _**(47+)**_|✔ _**(9+)**_|❌|❌|✔||
|[api-inference.huggingface.co](https://api-inference.huggingface.co)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFaceAPI`|✔ _**(9+)**_|✔ _**(2+)**_|❌|✔ _**(1+)**_|❌||✔|

---
### Providers HuggingSpace
| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status | Auth |
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Streaming | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabsFlux1Dev`|❌|`flux-dev`|❌|✔||
|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabsFlux1Schnell`|❌|`flux-schnell`|❌|✔||
|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.CohereForAI`|`command-r, command-r-plus, command-r7b`|❌|❌|✔||
|[roxky-flux-1-dev.hf.space](https://roxky-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|❌|❌|✔ _**(3)**_|✔||
|[huggingface.co/spaces/deepseek-ai/Janus-Pro-7B](https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Janus_Pro_7B`|✔|✔|❌|✔||
|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_QVQ_72B`|`qvq-72b`|❌|❌|✔||
|[qwen-qwen2-5-1m-demo.hf.space](https://qwen-qwen2-5-1m-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5M_Demo`|`qwen-2.5-1m-demo`|❌|❌|✔||
|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_72B_Instruct`|`qwen-2-72b`|❌|❌|✔||
|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.StableDiffusion35Large`|❌|`sd-3.5`|❌|✔||
|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.VoodoohopFlux1Schnell`|❌|`flux-schnell`|❌|✔||
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Dev`|❌|`flux-dev`|❌|❌|✔||
|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Schnell`|❌|`flux-schnell`|❌|❌|✔||
|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.CohereForAI_C4AI_Command`|`command-r, command-r-plus, command-r7b`|❌|❌|❌|✔||
|[huggingface.co/spaces/deepseek-ai/Janus-Pro-7B](https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.DeepseekAI_Janus_Pro_7b`|✔|✔|❌|❌|✔||
|[roxky-flux-1-dev.hf.space](https://roxky-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.G4F`|✔ _**(1+)**_|✔ _**(4+)**_|❌|✔ _**(1+)**_|✔||
|[microsoft-phi-4-multimodal.hf.space](https://microsoft-phi-4-multimodal.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Microsoft_Phi_4`|✔ _**(1+)**_|❌|❌|✔ _**(1+)**_|✔||
|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_QVQ_72B`|`qvq-72b`|❌|❌|❌|✔||
|[qwen-qwen2-5-1m-demo.hf.space](https://qwen-qwen2-5-1m-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5M_Demo`|`qwen-2.5-1m-demo`|❌|❌|❌|✔||
|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_72B_Instruct`|`qwen-2-72b`|❌|❌|❌|✔||
|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.StabilityAI_SD35Large`|❌|`sd-3.5`|❌|❌|✔||
|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Voodoohop_Flux1Schnell`|❌|`flux-schnell`|❌|❌|✔||
### Providers Local
| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[]( )|No auth required|`g4f.Provider.Local`|✔|❌|❌|✔||
|[ollama.com](https://ollama.com)|No auth required|`g4f.Provider.Ollama`|✔|❌|❌|✔||

| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Streaming | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[]( )|No auth required|`g4f.Provider.Local`|✔|❌|❌|❌|✔||
|[ollama.com](https://ollama.com)|No auth required|`g4f.Provider.Ollama`|✔|❌|❌|❌|✔||

---
### Providers MiniMax
| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[hailuo.ai/chat](https://www.hailuo.ai/chat)|[Get API key](https://intl.minimaxi.com/user-center/basic-information/interface-key)|`g4f.Provider.MiniMax`|`MiniMax` _**(1)**_|❌|❌|✔||

| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[hailuo.ai/chat](https://www.hailuo.ai/chat)|[Get API key](https://intl.minimaxi.com/user-center/basic-information/interface-key)|`g4f.Provider.MiniMax`|`MiniMax` _**(1)**_|❌|❌|❌|✔||

---
### Providers Needs Auth
| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[console.anthropic.com](https://console.anthropic.com)|[Get API key](https://console.anthropic.com/settings/keys)|`g4f.Provider.Anthropic`|✔ _**(8+)**_|❌|❌|✔||
|[bing.com/images/create](https://www.bing.com/images/create)|[Manual cookies](https://www.bing.com)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌||
|[cablyai.com/chat](https://cablyai.com/chat)|[Get API key](https://cablyai.com)|`g4f.Provider.CablyAI`|✔|✔|✔|✔||
|[inference.cerebras.ai](https://inference.cerebras.ai/)|[Get API key](https://cloud.cerebras.ai)|`g4f.Provider.Cerebras`|✔ _**(3+)**_|❌|❌|✔||
|[copilot.microsoft.com](https://copilot.microsoft.com)|[Manual cookies](https://copilot.microsoft.com)|`g4f.Provider.CopilotAccount`|✔ _**(1+)**_|✔ _**(1+)**_|✔ _**(1+)**_|✔||
|[deepinfra.com](https://deepinfra.com)|[Get API key](https://deepinfra.com/dash/api_keys)|`g4f.Provider.DeepInfra`|✔ _**(17+)**_|✔ _**(6+)**_|❌|✔||
|[platform.deepseek.com](https://platform.deepseek.com)|[Get API key](https://platform.deepseek.com/api_keys)|`g4f.Provider.DeepSeek`|✔ _**(1+)**_|❌|❌|❌||
|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini-2.0`|`gemini-2.0`|`gemini-2.0`|❌||
|[ai.google.dev](https://ai.google.dev)|[Get API key](https://aistudio.google.com/u/0/apikey)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|`gemini-1.5-pro`|❌||
|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|[Manual cookies](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔ _**(3+)**_|❌|❌|❌||
|[github.com/copilot](https://github.com/copilot)|[Manual cookies](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔ _**(4+)**_|❌|❌|❌||
|[glhf.chat](https://glhf.chat)|[Get API key](https://glhf.chat/user-settings/api)|`g4f.Provider.GlhfChat`|✔ _**(22+)**_|❌|❌|❌||
|[console.groq.com/playground](https://console.groq.com/playground)|[Get API key](https://console.groq.com/keys)|`g4f.Provider.Groq`|✔ _**(18+)**_|❌|✔|❌||
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|✔||✔|
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAIAccount`|❌|`meta-ai`|❌|✔||
|[designer.microsoft.com](https://designer.microsoft.com)|[Manual cookies](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌||
|[platform.openai.com](https://platform.openai.com)|[Get API key](https://platform.openai.com/settings/organization/api-keys)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|✔||
|[chatgpt.com](https://chatgpt.com)|[Manual cookies](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4` _**(8+)**_|✔ _**(1)**_|✔ _**(8+)**_|✔||
|[perplexity.ai](https://www.perplexity.ai)|[Get API key](https://www.perplexity.ai/settings/api)|`g4f.Provider.PerplexityApi`|✔ _**(6+)**_|❌|❌|✔||
|[chat.reka.ai](https://chat.reka.ai)|[Manual cookies](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|❌|✔|✔||
|[replicate.com](https://replicate.com)|[Get API key](https://replicate.com/account/api-tokens)|`g4f.Provider.Replicate`|✔ _**(1+)**_|❌|❌|✔||
|[beta.theb.ai](https://beta.theb.ai)|[Get API key](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔ _**(21+)**_|❌|❌|✔||
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|[Manual cookies](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|✔||
|[console.x.ai](https://console.x.ai)|[Get API key](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|✔||

| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Streaming | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[console.anthropic.com](https://console.anthropic.com)|[Get API key](https://console.anthropic.com/settings/keys)|`g4f.Provider.Anthropic`|✔ _**(8+)**_|❌|❌|❌|✔||
|[bing.com/images/create](https://www.bing.com/images/create)|[Manual cookies](https://www.bing.com)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌|❌||
|[cablyai.com/chat](https://cablyai.com/chat)|[Get API key](https://cablyai.com)|`g4f.Provider.CablyAI`|✔|✔|❌|✔||
|[inference.cerebras.ai](https://inference.cerebras.ai/)|[Get API key](https://cloud.cerebras.ai)|`g4f.Provider.Cerebras`|✔ _**(3+)**_|❌|❌|❌|✔||
|[copilot.microsoft.com](https://copilot.microsoft.com)|[Manual cookies](https://copilot.microsoft.com)|`g4f.Provider.CopilotAccount`|✔ _**(1+)**_|✔ _**(1+)**_|❌|✔ _**(1+)**_|✔||
|[deepinfra.com](https://deepinfra.com)|[Get API key](https://deepinfra.com/dash/api_keys)|`g4f.Provider.DeepInfra`|✔ _**(17+)**_|✔ _**(6+)**_|❌|❌|✔||
|[platform.deepseek.com](https://platform.deepseek.com)|[Get API key](https://platform.deepseek.com/api_keys)|`g4f.Provider.DeepSeek`|✔ _**(1+)**_|❌|❌|❌|❌||
|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini-2.0`|`gemini-2.0`|❌|`gemini-2.0`|❌||
|[ai.google.dev](https://ai.google.dev)|[Get API key](https://aistudio.google.com/u/0/apikey)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|❌|`gemini-1.5-pro`|❌||
|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|[Manual cookies](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔ _**(3+)**_|❌|❌|❌|❌||
|[github.com/copilot](https://github.com/copilot)|[Manual cookies](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔ _**(4+)**_|❌|❌|❌|❌||
|[glhf.chat](https://glhf.chat)|[Get API key](https://glhf.chat/user-settings/api)|`g4f.Provider.GlhfChat`|✔ _**(22+)**_|❌|❌|❌|❌||
|[console.groq.com/playground](https://console.groq.com/playground)|[Get API key](https://console.groq.com/keys)|`g4f.Provider.Groq`|✔ _**(18+)**_|❌|❌|✔|❌||
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|❌|✔||✔|
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAIAccount`|❌|`meta-ai`|❌|❌|✔||
|[designer.microsoft.com](https://designer.microsoft.com)|[Manual cookies](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌|❌||
|[platform.openai.com](https://platform.openai.com)|[Get API key](https://platform.openai.com/settings/organization/api-keys)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|❌|✔||
|[chatgpt.com](https://chatgpt.com)|[Manual cookies](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4` _**(8+)**_|✔ _**(1)**_|❌|✔ _**(8+)**_|✔||
|[perplexity.ai](https://www.perplexity.ai)|[Get API key](https://www.perplexity.ai/settings/api)|`g4f.Provider.PerplexityApi`|✔ _**(6+)**_|❌|❌|❌|✔||
|[chat.reka.ai](https://chat.reka.ai)|[Manual cookies](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|✔|❌|❌|✔||
|[replicate.com](https://replicate.com)|[Get API key](https://replicate.com/account/api-tokens)|`g4f.Provider.Replicate`|✔ _**(1+)**_|❌|❌|❌|✔||
|[beta.theb.ai](https://beta.theb.ai)|[Get API key](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔ _**(21+)**_|❌|❌|❌|✔||
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|[Manual cookies](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|❌|✔||
|[console.x.ai](https://console.x.ai)|[Get API key](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|❌|✔||

---
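For the providers in this section, credentials are passed alongside the provider. A minimal sketch with a placeholder key and an illustrative model name (the table only shows a model count for Anthropic, so the exact name here is an assumption):

```python
from g4f.client import Client
from g4f.Provider import Anthropic

# The key below is a placeholder; Anthropic requires a real API key.
client = Client(provider=Anthropic, api_key="sk-ant-...")
response = client.chat.completions.create(
    model="claude-3.5-sonnet",  # illustrative; see the provider's own model list
    messages=[{"role": "user", "content": "Hi"}],
)
print(response.choices[0].message.content)
```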
## Models

### Text Models
### Text generation models

| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-4|OpenAI|8+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|2+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|gpt-4|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|6+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|8+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|3+ Providers|[openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-mini|OpenAI|1+ Providers|[openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|o3-mini|OpenAI|3+ Providers|[openai.com](https://openai.com/index/openai-o3-mini/)|
|gigachat|GigaChat|1+ Providers|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|
@@ -162,17 +167,16 @@ This document provides an overview of various AI providers and models, including
|mixtral-8x7b|Mistral|1+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mixtral-8x22b|Mistral|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1)|
|mistral-nemo|Mistral|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|mixtral-small-24b|Mistral|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)|
|mixtral-small-28b|Mistral|2+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-small-28b/)|
|hermes-2-dpo|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|mixtral-small-24b|Mistral|2+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)|
|hermes-3|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B-FP8)|
|phi-3.5-mini|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|
|phi-4|Microsoft|2+ Providers|[techcommunity.microsoft.com](https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090)|
|wizardlm-2-7b|Microsoft|1+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|wizardlm-2-8x22b|Microsoft|2+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|gemini-2.0|Google DeepMind|1+ Providers|[deepmind.google](http://deepmind.google/technologies/gemini/)|
|gemini-exp|Google DeepMind|1+ Providers|[blog.google](https://blog.google/feed/gemini-exp-1206/)|
|gemini-1.5-flash|Google DeepMind|6+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-1.5-flash|Google DeepMind|7+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-1.5-pro|Google DeepMind|6+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemini-2.0|Google DeepMind|1+ Providers|[deepmind.google](http://deepmind.google/technologies/gemini/)|
|gemini-2.0-flash|Google DeepMind|3+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-2.0-flash-thinking|Google DeepMind|1+ Providers|[ai.google.dev](https://ai.google.dev/gemini-api/docs/thinking-mode)|
|gemini-2.0-pro|Google DeepMind|1+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash-thinking/)|
@@ -184,22 +188,22 @@ This document provides an overview of various AI providers and models, including
|claude-3.7-sonnet-thinking|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/claude/sonnet)|
|reka-core|Reka AI|1+ Providers|[reka.ai](https://www.reka.ai/ourmodels)|
|blackboxai|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|blackboxai-pro|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|command-r|CohereForAI|1+ Providers|[docs.cohere.com](https://docs.cohere.com/docs/command-r-plus)|
|command-r-plus|CohereForAI|2+ Providers|[docs.cohere.com](https://docs.cohere.com/docs/command-r-plus)|
|command-r7b|CohereForAI|1+ Providers|[huggingface.co](https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024)|
|command-a|CohereForAI|1+ Providers|[docs.cohere.com](https://docs.cohere.com/v2/docs/command-a)|
|qwen-1.5-7b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen1.5-7B)|
|qwen-2-72b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-72B)|
|qwen-2-vl-7b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-VL-7B)|
|qwen-2.5-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)|
|qwen-2.5-coder-32b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
|qwen-2.5-1m-demo|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-1M-Demo)|
|qwq-32b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qwq-32b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
|qvq-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/QVQ-72B-Preview)|
|pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
|deepseek-chat|DeepSeek|2+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-v3|DeepSeek|4+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|8+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|deepseek-r1|DeepSeek|9+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
|janus-pro-7b|DeepSeek|2+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/docs/janus-pro-7b)|
|grok-3|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-3)|
|grok-3-r1|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-3)|
@@ -208,8 +212,8 @@ This document provides an overview of various AI providers and models, including
|sonar-reasoning|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|sonar-reasoning-pro|Perplexity AI|1+ Providers|[sonar.perplexity.ai](https://sonar.perplexity.ai/)|
|r1-1776|Perplexity AI|1+ Providers|[perplexity.ai](https://www.perplexity.ai/hub/blog/open-sourcing-r1-1776)|
|nemotron-70b|Nvidia|2+ Providers|[build.nvidia.com](https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct)|
|dbrx-instruct|Databricks|2+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|nemotron-70b|Nvidia|3+ Providers|[build.nvidia.com](https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct)|
|dbrx-instruct|Databricks|1+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|glm-4|THUDM|1+ Providers|[github.com/THUDM](https://github.com/THUDM/GLM-4)|
|mini_max|MiniMax|1+ Providers|[hailuo.ai](https://www.hailuo.ai/)|
|yi-34b|01-ai|1+ Providers|[huggingface.co](https://huggingface.co/01-ai/Yi-34B-Chat)|
@@ -223,15 +227,16 @@ This document provides an overview of various AI providers and models, including
|tulu-3-1-8b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|tulu-3-70b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|olmoe-0125|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
|lfm-40b|Liquid AI|1+ Providers|[liquid.ai](https://www.liquid.ai/liquid-foundation-models)|
|evil|Evil Mode - Experimental|1+ Providers|[]( )|

---
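When no provider is pinned, a bare model string from the list above is enough; g4f resolves it to one of the providers counted in the table. A minimal sketch, using the same client API shown elsewhere in this commit:

```python
from g4f.client import Client

client = Client()
# "deepseek-r1" is listed above with 9+ providers; g4f picks an available one.
response = client.chat.completions.create(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Summarize this commit in one line."}],
)
print(response.choices[0].message.content)
```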
### Image Models
### Image generation models

| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|sdxl-turbo|Stability AI|2+ Providers|[huggingface.co](https://huggingface.co/stabilityai/sdxl-turbo)|
|sd-3.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/stable-diffusion-3.5-large)|
|flux|Black Forest Labs|3+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
|flux|Black Forest Labs|4+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
|flux-pro|Black Forest Labs|1+ Providers|[huggingface.co](https://huggingface.co/enhanceaiteam/FLUX.1-Pro)|
|flux-dev|Black Forest Labs|4+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-dev)|
|flux-schnell|Black Forest Labs|4+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
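The same client also covers image generation through an OpenAI-style call; a minimal sketch using a model from the table above (the exact prompt is illustrative):

```python
from g4f.client import Client

client = Client()
# "flux" is listed above with 4+ providers; g4f picks an available one.
response = client.images.generate(model="flux", prompt="a lighthouse at dusk")
print(response.data[0].url)
```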
@@ -239,7 +244,6 @@ This document provides an overview of various AI providers and models, including
|midjourney|Midjourney|1+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|
## Conclusion and Usage Tips
This document provides a comprehensive overview of various AI providers and models available for text generation, image generation, and vision tasks. **When choosing a provider or model, consider the following factors:**

1. **Availability**: Check the status of the provider to ensure it's currently active and accessible.
@@ -2,16 +2,13 @@ import g4f
import requests

from g4f.client import Client
from g4f.Provider.Blackbox import Blackbox

client = Client(
    provider=Blackbox
)
client = Client()

# Processing remote image
remote_image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).content
response_remote = client.chat.completions.create(
    model=g4f.models.default,
    model=g4f.models.default_vision,
    messages=[
        {"role": "user", "content": "What are on this image?"}
    ],
@@ -25,7 +22,7 @@ print("\n" + "-"*50 + "\n") # Separator
# Processing local image
local_image = open("docs/images/cat.jpeg", "rb")
response_local = client.chat.completions.create(
    model=g4f.models.default,
    model=g4f.models.default_vision,
    messages=[
        {"role": "user", "content": "What are on this image?"}
    ],
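Applied cleanly, the updated example reduces to the sketch below. The diff is truncated before the argument that actually attaches the image bytes; passing them via the client's `image` parameter is an assumption here, flagged in the comment.

```python
import g4f
import requests

from g4f.client import Client

client = Client()

# Remote image: fetch the bytes, then hand them to the default vision model.
remote_image = requests.get(
    "https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg",
    stream=True,
).content
response_remote = client.chat.completions.create(
    model=g4f.models.default_vision,
    messages=[{"role": "user", "content": "What are on this image?"}],
    image=remote_image,  # assumed kwarg; the truncated diff does not show it
)
print(response_remote.choices[0].message.content)
```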
@@ -48,6 +48,9 @@ class AllenAI(AsyncGeneratorProvider, ProviderModelMixin):
        "olmo-2-13b": "OLMo-2-1124-13B-Instruct",
        "tulu-3-1-8b": "tulu-3-1-8b",
        "tulu-3-70b": "Llama-3-1-Tulu-3-70B",
        "llama-3.1-405b": "Llama-3-1-Tulu-3-70B",
        "llama-3.1-8b": "tulu-3-1-8b",
        "llama-3.1-70b": "Llama-3-1-Tulu-3-70B",
    }

    @classmethod
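These added aliases make `llama-3.1-*` requests resolve to AllenAI's Tulu/OLMo checkpoints. The mapping is a plain dictionary lookup; a minimal standalone sketch of that resolution (a stand-in, not the actual `ProviderModelMixin` code):

```python
# Hypothetical stand-in for the alias table above.
model_aliases = {
    "llama-3.1-405b": "Llama-3-1-Tulu-3-70B",
    "llama-3.1-8b": "tulu-3-1-8b",
    "llama-3.1-70b": "Llama-3-1-Tulu-3-70B",
}

def resolve(model: str) -> str:
    # Unknown names pass through unchanged.
    return model_aliases.get(model, model)

assert resolve("llama-3.1-8b") == "tulu-3-1-8b"
assert resolve("olmo-2-13b") == "olmo-2-13b"
```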
@@ -1,6 +1,7 @@
from __future__ import annotations

from aiohttp import ClientSession
import os
import re
import json
import random
@@ -8,6 +9,7 @@ import string
import base64
from pathlib import Path
from typing import Optional
from datetime import datetime, timedelta

from ..typing import AsyncResult, Messages, ImagesType
from ..requests.raise_for_status import raise_for_status
@@ -17,6 +19,7 @@ from ..cookies import get_cookies_dir
from .helper import format_prompt, format_image_prompt
from ..providers.response import JsonConversation, ImageResponse
from ..errors import ModelNotSupportedError
from .. import debug

class Conversation(JsonConversation):
    validated_value: str = None
@@ -38,79 +41,157 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):

    default_model = "blackboxai"
    default_vision_model = default_model
    default_image_model = 'ImageGeneration'
    default_image_model = 'flux'

    # Completely free models
    fallback_models = [
        "blackboxai",
        "gpt-4o-mini",
        "GPT-4o",
        "o1",
        "o3-mini",
        "Claude-sonnet-3.7",
        "DeepSeek-V3",
        "DeepSeek-R1",
        "DeepSeek-LLM-Chat-(67B)",
        # Image models
        "flux",
        # Trending agent modes
        'Python Agent',
        'HTML Agent',
        'Builder Agent',
        'Java Agent',
        'JavaScript Agent',
        'React Agent',
        'Android Agent',
        'Flutter Agent',
        'Next.js Agent',
        'AngularJS Agent',
        'Swift Agent',
        'MongoDB Agent',
        'PyTorch Agent',
        'Xcode Agent',
        'Azure Agent',
        'Bitbucket Agent',
        'DigitalOcean Agent',
        'Docker Agent',
        'Electron Agent',
        'Erlang Agent',
        'FastAPI Agent',
        'Firebase Agent',
        'Flask Agent',
        'Git Agent',
        'Gitlab Agent',
        'Go Agent',
        'Godot Agent',
        'Google Cloud Agent',
        'Heroku Agent'
    ]
    image_models = [default_image_model]
    vision_models = [default_vision_model, 'gpt-4o', 'o1', 'o3-mini', 'gemini-pro', 'gemini-1.5-flash', 'llama-3.1-8b', 'llama-3.1-70b', 'llama-3.1-405b', 'gemini-2.0-flash', 'deepseek-v3']
    vision_models = [default_vision_model, 'GPT-4o', 'o1', 'o3-mini', 'Gemini-PRO', 'Gemini Agent', 'llama-3.1-8b Agent', 'llama-3.1-70b Agent', 'llama-3.1-405 Agent', 'Gemini-Flash-2.0', 'DeepSeek-V3']

    userSelectedModel = ['gpt-4o', 'o1', 'o3-mini', 'gemini-pro', 'claude-sonnet-3.7', 'deepseek-v3', 'deepseek-r1', 'blackboxai-pro', 'Meta-Llama-3.3-70B-Instruct-Turbo', 'Mistral-Small-24B-Instruct-2501', 'DeepSeek-LLM-Chat-(67B)', 'dbrx-instruct', 'Qwen-QwQ-32B-Preview', 'Nous-Hermes-2-Mixtral-8x7B-DPO', 'gemini-2.0-flash']
    userSelectedModel = ['GPT-4o', 'o1', 'o3-mini', 'Gemini-PRO', 'Claude-sonnet-3.7', 'DeepSeek-V3', 'DeepSeek-R1', 'Meta-Llama-3.3-70B-Instruct-Turbo', 'Mistral-Small-24B-Instruct-2501', 'DeepSeek-LLM-Chat-(67B)', 'DBRX-Instruct', 'Qwen-QwQ-32B-Preview', 'Nous-Hermes-2-Mixtral-8x7B-DPO', 'Gemini-Flash-2.0']

    # Agent mode configurations
    agentMode = {
        'deepseek-v3': {'mode': True, 'id': "deepseek-chat", 'name': "DeepSeek-V3"},
        'deepseek-r1': {'mode': True, 'id': "deepseek-reasoner", 'name': "DeepSeek-R1"},
        'GPT-4o': {'mode': True, 'id': "GPT-4o", 'name': "GPT-4o"},
        'Gemini-PRO': {'mode': True, 'id': "Gemini-PRO", 'name': "Gemini-PRO"},
        'Claude-sonnet-3.7': {'mode': True, 'id': "Claude-sonnet-3.7", 'name': "Claude-sonnet-3.7"},
        'DeepSeek-V3': {'mode': True, 'id': "deepseek-chat", 'name': "DeepSeek-V3"},
        'DeepSeek-R1': {'mode': True, 'id': "deepseek-reasoner", 'name': "DeepSeek-R1"},
        'Meta-Llama-3.3-70B-Instruct-Turbo': {'mode': True, 'id': "meta-llama/Llama-3.3-70B-Instruct-Turbo", 'name': "Meta-Llama-3.3-70B-Instruct-Turbo"},
        'Gemini-Flash-2.0': {'mode': True, 'id': "Gemini/Gemini-Flash-2.0", 'name': "Gemini-Flash-2.0"},
        'Mistral-Small-24B-Instruct-2501': {'mode': True, 'id': "mistralai/Mistral-Small-24B-Instruct-2501", 'name': "Mistral-Small-24B-Instruct-2501"},
        'DeepSeek-LLM-Chat-(67B)': {'mode': True, 'id': "deepseek-ai/deepseek-llm-67b-chat", 'name': "DeepSeek-LLM-Chat-(67B)"},
        'dbrx-instruct': {'mode': True, 'id': "databricks/dbrx-instruct", 'name': "DBRX-Instruct"},
        'DBRX-Instruct': {'mode': True, 'id': "databricks/dbrx-instruct", 'name': "DBRX-Instruct"},
        'Qwen-QwQ-32B-Preview': {'mode': True, 'id': "Qwen/QwQ-32B-Preview", 'name': "Qwen-QwQ-32B-Preview"},
        'Nous-Hermes-2-Mixtral-8x7B-DPO': {'mode': True, 'id': "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", 'name': "Nous-Hermes-2-Mixtral-8x7B-DPO"},
        'gemini-2.0-flash': {'mode': True, 'id': "Gemini/Gemini-Flash-2.0", 'name': "Gemini-Flash-2.0"},
    }
    # Trending agent modes
    trendingAgentMode = {
        "gemini-1.5-flash": {'mode': True, 'id': 'Gemini'},
        "llama-3.1-8b": {'mode': True, 'id': "llama-3.1-8b"},
        'llama-3.1-70b': {'mode': True, 'id': "llama-3.1-70b"},
        'llama-3.1-405b': {'mode': True, 'id': "llama-3.1-405"},
        'Python Agent': {'mode': True, 'id': "Python Agent"},
        'Java Agent': {'mode': True, 'id': "Java Agent"},
        'JavaScript Agent': {'mode': True, 'id': "JavaScript Agent"},
        'HTML Agent': {'mode': True, 'id': "HTML Agent"},
        'Google Cloud Agent': {'mode': True, 'id': "Google Cloud Agent"},
        'Android Developer': {'mode': True, 'id': "Android Developer"},
        'Swift Developer': {'mode': True, 'id': "Swift Developer"},
        'Next.js Agent': {'mode': True, 'id': "Next.js Agent"},
        'MongoDB Agent': {'mode': True, 'id': "MongoDB Agent"},
        'PyTorch Agent': {'mode': True, 'id': "PyTorch Agent"},
        'React Agent': {'mode': True, 'id': "React Agent"},
        'Xcode Agent': {'mode': True, 'id': "Xcode Agent"},
        'blackboxai-pro': {'mode': True, 'id': "BLACKBOXAI-PRO"},
        'Heroku Agent': {'mode': True, 'id': "Heroku Agent"},
        'Godot Agent': {'mode': True, 'id': "Godot Agent"},
        'Go Agent': {'mode': True, 'id': "Go Agent"},
        'Gitlab Agent': {'mode': True, 'id': "Gitlab Agent"},
        'Git Agent': {'mode': True, 'id': "Git Agent"},
        'Flask Agent': {'mode': True, 'id': "Flask Agent"},
        'Firebase Agent': {'mode': True, 'id': "Firebase Agent"},
        'FastAPI Agent': {'mode': True, 'id': "FastAPI Agent"},
        'Erlang Agent': {'mode': True, 'id': "Erlang Agent"},
        'Electron Agent': {'mode': True, 'id': "Electron Agent"},
        'Docker Agent': {'mode': True, 'id': "Docker Agent"},
        'DigitalOcean Agent': {'mode': True, 'id': "DigitalOcean Agent"},
        'Bitbucket Agent': {'mode': True, 'id': "Bitbucket Agent"},
        'Azure Agent': {'mode': True, 'id': "Azure Agent"},
        'Flutter Agent': {'mode': True, 'id': "Flutter Agent"},
        'Youtube Agent': {'mode': True, 'id': "Youtube Agent"},
        'builder Agent': {'mode': True, 'id': "builder Agent"},
        "Gemini Agent": {'mode': True, 'id': 'gemini'},
        "llama-3.1-405 Agent": {'mode': True, 'id': "llama-3.1-405"},
        'llama-3.1-70b Agent': {'mode': True, 'id': "llama-3.1-70b"},
        'llama-3.1-8b Agent': {'mode': True, 'id': "llama-3.1-8b"},
        'Python Agent': {'mode': True, 'id': "python"},
        'HTML Agent': {'mode': True, 'id': "html"},
        'Builder Agent': {'mode': True, 'id': "builder"},
        'Java Agent': {'mode': True, 'id': "java"},
        'JavaScript Agent': {'mode': True, 'id': "javascript"},
        'React Agent': {'mode': True, 'id': "react"},
        'Android Agent': {'mode': True, 'id': "android"},
        'Flutter Agent': {'mode': True, 'id': "flutter"},
        'Next.js Agent': {'mode': True, 'id': "next.js"},
        'AngularJS Agent': {'mode': True, 'id': "angularjs"},
        'Swift Agent': {'mode': True, 'id': "swift"},
        'MongoDB Agent': {'mode': True, 'id': "mongodb"},
        'PyTorch Agent': {'mode': True, 'id': "pytorch"},
        'Xcode Agent': {'mode': True, 'id': "xcode"},
        'Azure Agent': {'mode': True, 'id': "azure"},
        'Bitbucket Agent': {'mode': True, 'id': "bitbucket"},
        'DigitalOcean Agent': {'mode': True, 'id': "digitalocean"},
        'Docker Agent': {'mode': True, 'id': "docker"},
        'Electron Agent': {'mode': True, 'id': "electron"},
        'Erlang Agent': {'mode': True, 'id': "erlang"},
        'FastAPI Agent': {'mode': True, 'id': "fastapi"},
        'Firebase Agent': {'mode': True, 'id': "firebase"},
        'Flask Agent': {'mode': True, 'id': "flask"},
        'Git Agent': {'mode': True, 'id': "git"},
        'Gitlab Agent': {'mode': True, 'id': "gitlab"},
        'Go Agent': {'mode': True, 'id': "go"},
        'Godot Agent': {'mode': True, 'id': "godot"},
        'Google Cloud Agent': {'mode': True, 'id': "googlecloud"},
        'Heroku Agent': {'mode': True, 'id': "heroku"},
    }
    models = list(dict.fromkeys([default_model, *userSelectedModel, *image_models, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))

    model_aliases = {
        "gemini-1.5-flash": "gemini-1.5-flash",
        "gemini-1.5-pro": "gemini-pro",
        "llama-3.3-70b": "Meta-Llama-3.3-70B-Instruct-Turbo",
        "mixtral-small-28b": "Mistral-Small-24B-Instruct-2501",
        "deepseek-chat": "DeepSeek-LLM-Chat-(67B)",
        "qwq-32b": "Qwen-QwQ-32B-Preview",
        "hermes-2-dpo": "Nous-Hermes-2-Mixtral-8x7B-DPO",
        "claude-3.7-sonnet": "claude-sonnet-3.7",
        "flux": "ImageGeneration",
    }
    # Complete list of all models (for authorized users)
    _all_models = list(dict.fromkeys([
        default_model,
        *userSelectedModel,
        *image_models,
        *list(agentMode.keys()),
        *list(trendingAgentMode.keys())
    ]))

    ENCRYPTED_SESSION = "eyJ1c2VyIjogeyJuYW1lIjogIkJMQUNLQk9YIEFJIiwgImVtYWlsIjogImdpc2VsZUBibGFja2JveC5haSIsICJpbWFnZSI6ICJodHRwczovL3l0My5nb29nbGV1c2VyY29udGVudC5jb20vQjd6RVlVSzUxWnNQYmFSUFVhMF9ZbnQ1WV9URFZoTE4tVjAzdndRSHM0eF96a2g4a1psLXkxcXFxb3hoeFFzcS1wUVBHS0R0WFE9czE2MC1jLWstYzB4MDBmZmZmZmYtbm8tcmoifSwgImV4cGlyZXMiOiBudWxsfQ=="
    ENCRYPTED_SUBSCRIPTION_CACHE = "eyJzdGF0dXMiOiAiUFJFTUlVTSIsICJleHBpcnlUaW1lc3RhbXAiOiBudWxsLCAibGFzdENoZWNrZWQiOiBudWxsLCAiaXNUcmlhbFN1YnNjcmlwdGlvbiI6IHRydWV9"
    ENCRYPTED_IS_PREMIUM = "dHJ1ZQ=="
    @classmethod
    def generate_session(cls, id_length: int = 21, days_ahead: int = 365) -> dict:
        """
        Generate a dynamic session with proper ID and expiry format.

        Args:
            id_length: Length of the numeric ID (default: 21)
            days_ahead: Number of days ahead for expiry (default: 365)

        Returns:
            dict: A session dictionary with user information and expiry
        """
        # Generate numeric ID
        numeric_id = ''.join(random.choice('0123456789') for _ in range(id_length))

        # Generate future expiry date
        future_date = datetime.now() + timedelta(days=days_ahead)
        expiry = future_date.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'

        # Decode the encoded email
        encoded_email = "Z2lzZWxlQGJsYWNrYm94LmFp"  # Base64 encoded email
        email = base64.b64decode(encoded_email).decode('utf-8')

        # Generate random image ID for the new URL format
        chars = string.ascii_letters + string.digits + "-"
        random_img_id = ''.join(random.choice(chars) for _ in range(48))
        image_url = f"https://lh3.googleusercontent.com/a/ACg8oc{random_img_id}=s96-c"

        return {
            "user": {
                "name": "BLACKBOX AI",
                "email": email,
                "image": image_url,
                "id": numeric_id
            },
            "expires": expiry
        }

    @classmethod
    async def fetch_validated(cls, url: str = "https://www.blackbox.ai", force_refresh: bool = False) -> Optional[str]:
@@ -123,7 +204,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
                    if data.get('validated_value'):
                        return data['validated_value']
            except Exception as e:
                print(f"Error reading cache: {e}")
                debug.log(f"Blackbox: Error reading cache: {e}")

        js_file_pattern = r'static/chunks/\d{4}-[a-fA-F0-9]+\.js'
        uuid_pattern = r'["\']([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})["\']'
@@ -158,12 +239,12 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
                        with open(cache_file, 'w') as f:
                            json.dump({'validated_value': validated_value}, f)
                    except Exception as e:
                        print(f"Error writing cache: {e}")
                        debug.log(f"Blackbox: Error writing cache: {e}")

                    return validated_value

        except Exception as e:
            print(f"Error retrieving validated_value: {e}")
            debug.log(f"Blackbox: Error retrieving validated_value: {e}")

        return None
@@ -172,20 +253,190 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
        chars = string.ascii_letters + string.digits
        return ''.join(random.choice(chars) for _ in range(length))

    @staticmethod
    def decrypt_data(encrypted_data):
        try:
            return json.loads(base64.b64decode(encrypted_data).decode('utf-8'))
        except:
            return None
    @classmethod
    def get_models(cls) -> list:
        """
        Returns a list of available models based on authorization status.
        Authorized users get the full list of models.
        Unauthorized users only get fallback_models.
        """
        # Check if there are valid session data in HAR files
        has_premium_access = cls._check_premium_access()

        if has_premium_access:
            # For authorized users - all models
            debug.log(f"Blackbox: Returning full model list with {len(cls._all_models)} models")
            return cls._all_models
        else:
            # For demo accounts - only free models
            debug.log(f"Blackbox: Returning free model list with {len(cls.fallback_models)} models")
            return cls.fallback_models
@staticmethod
|
||||
def decrypt_bool(encrypted_data):
|
||||
@classmethod
|
||||
def _check_premium_access(cls) -> bool:
|
||||
"""
|
||||
Checks for an authorized session in HAR files.
|
||||
Returns True if a valid session is found that differs from the demo.
|
||||
"""
|
||||
try:
|
||||
return base64.b64decode(encrypted_data).decode('utf-8').lower() == 'true'
|
||||
except:
|
||||
har_dir = get_cookies_dir()
|
||||
if not os.access(har_dir, os.R_OK):
|
||||
return False
|
||||
|
||||
for root, _, files in os.walk(har_dir):
|
||||
for file in files:
|
||||
if file.endswith(".har"):
|
||||
try:
|
||||
with open(os.path.join(root, file), 'rb') as f:
|
||||
har_data = json.load(f)
|
||||
|
||||
for entry in har_data['log']['entries']:
|
||||
# Only check requests to blackbox API
|
||||
if 'blackbox.ai/api' in entry['request']['url']:
|
||||
if 'response' in entry and 'content' in entry['response']:
|
||||
content = entry['response']['content']
|
||||
if ('text' in content and
|
||||
isinstance(content['text'], str) and
|
||||
'"user"' in content['text'] and
|
||||
'"email"' in content['text']):
|
||||
|
||||
try:
|
||||
# Process request text
|
||||
text = content['text'].strip()
|
||||
if text.startswith('{') and text.endswith('}'):
|
||||
text = text.replace('\\"', '"')
|
||||
session_data = json.loads(text)
|
||||
|
||||
# Check if this is a valid session
|
||||
if (isinstance(session_data, dict) and
|
||||
'user' in session_data and
|
||||
'email' in session_data['user']):
|
||||
|
||||
# Check if this is not a demo session
|
||||
demo_session = cls.generate_session()
|
||||
if (session_data['user'].get('email') !=
|
||||
demo_session['user'].get('email')):
|
||||
# This is not a demo session, so user has premium access
|
||||
return True
|
||||
except:
|
||||
pass
|
||||
except:
|
||||
pass
|
||||
return False
|
||||
|
||||
except Exception as e:
|
||||
debug.log(f"Blackbox: Error checking premium access: {e}")
|
||||
return False
|
||||
|
||||
# Initialize models with fallback_models
|
||||
models = fallback_models
|
||||
|
||||
model_aliases = {
|
||||
"gpt-4o": "GPT-4o",
|
||||
"claude-3.7-sonnet": "Claude-sonnet-3.7",
|
||||
"deepseek-v3": "DeepSeek-V3",
|
||||
"deepseek-r1": "DeepSeek-R1",
|
||||
"deepseek-chat": "DeepSeek-LLM-Chat-(67B)",
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def generate_session(cls, id_length: int = 21, days_ahead: int = 365) -> dict:
|
||||
"""
|
||||
Generate a dynamic session with proper ID and expiry format.
|
||||
|
||||
Args:
|
||||
id_length: Length of the numeric ID (default: 21)
|
||||
days_ahead: Number of days ahead for expiry (default: 365)
|
||||
|
||||
Returns:
|
||||
dict: A session dictionary with user information and expiry
|
||||
"""
|
||||
# Generate numeric ID
|
||||
numeric_id = ''.join(random.choice('0123456789') for _ in range(id_length))
|
||||
|
||||
# Generate future expiry date
|
||||
future_date = datetime.now() + timedelta(days=days_ahead)
|
||||
expiry = future_date.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'
|
||||
|
||||
# Decode the encoded email
|
||||
encoded_email = "Z2lzZWxlQGJsYWNrYm94LmFp" # Base64 encoded email
|
||||
email = base64.b64decode(encoded_email).decode('utf-8')
|
||||
|
||||
# Generate random image ID for the new URL format
|
||||
chars = string.ascii_letters + string.digits + "-"
|
||||
random_img_id = ''.join(random.choice(chars) for _ in range(48))
|
||||
image_url = f"https://lh3.googleusercontent.com/a/ACg8oc{random_img_id}=s96-c"
|
||||
|
||||
return {
|
||||
"user": {
|
||||
"name": "BLACKBOX AI",
|
||||
"email": email,
|
||||
"image": image_url,
|
||||
"id": numeric_id
|
||||
},
|
||||
"expires": expiry
|
||||
}
|
||||
|
||||
|
||||
@classmethod
|
||||
async def fetch_validated(cls, url: str = "https://www.blackbox.ai", force_refresh: bool = False) -> Optional[str]:
|
||||
cache_file = Path(get_cookies_dir()) / 'blackbox.json'
|
||||
|
||||
if not force_refresh and cache_file.exists():
|
||||
try:
|
||||
with open(cache_file, 'r') as f:
|
||||
data = json.load(f)
|
||||
if data.get('validated_value'):
|
||||
return data['validated_value']
|
||||
except Exception as e:
|
||||
debug.log(f"Blackbox: Error reading cache: {e}")
|
||||
|
||||
js_file_pattern = r'static/chunks/\d{4}-[a-fA-F0-9]+\.js'
|
||||
uuid_pattern = r'["\']([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})["\']'
|
||||
|
||||
def is_valid_context(text: str) -> bool:
|
||||
return any(char + '=' in text for char in 'abcdefghijklmnopqrstuvwxyz')
|
||||
|
||||
async with ClientSession() as session:
|
||||
try:
|
||||
async with session.get(url) as response:
|
||||
if response.status != 200:
|
||||
return None
|
||||
|
||||
page_content = await response.text()
|
||||
js_files = re.findall(js_file_pattern, page_content)
|
||||
|
||||
for js_file in js_files:
|
||||
js_url = f"{url}/_next/{js_file}"
|
||||
async with session.get(js_url) as js_response:
|
||||
if js_response.status == 200:
|
||||
js_content = await js_response.text()
|
||||
for match in re.finditer(uuid_pattern, js_content):
|
||||
start = max(0, match.start() - 10)
|
||||
end = min(len(js_content), match.end() + 10)
|
||||
context = js_content[start:end]
|
||||
|
||||
if is_valid_context(context):
|
||||
validated_value = match.group(1)
|
||||
|
||||
cache_file.parent.mkdir(exist_ok=True)
|
||||
try:
|
||||
with open(cache_file, 'w') as f:
|
||||
json.dump({'validated_value': validated_value}, f)
|
||||
except Exception as e:
|
||||
debug.log(f"Blackbox: Error writing cache: {e}")
|
||||
|
||||
return validated_value
|
||||
|
||||
except Exception as e:
|
||||
debug.log(f"Blackbox: Error retrieving validated_value: {e}")
|
||||
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
def generate_id(cls, length: int = 7) -> str:
|
||||
chars = string.ascii_letters + string.digits
|
||||
return ''.join(random.choice(chars) for _ in range(length))
|
||||
|
||||
@classmethod
|
||||
async def create_async_generator(
|
||||
cls,
|
||||
|
|
@ -212,30 +463,6 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
|
|||
}
|
||||
|
||||
async with ClientSession(headers=headers) as session:
|
||||
if model in "ImageGeneration":
|
||||
prompt = format_image_prompt(messages, prompt)
|
||||
data = {
|
||||
"query": format_image_prompt(messages, prompt),
|
||||
"agentMode": True
|
||||
}
|
||||
headers['content-type'] = 'text/plain;charset=UTF-8'
|
||||
|
||||
async with session.post(
|
||||
"https://www.blackbox.ai/api/image-generator",
|
||||
json=data,
|
||||
proxy=proxy,
|
||||
headers=headers
|
||||
) as response:
|
||||
await raise_for_status(response)
|
||||
response_json = await response.json()
|
||||
|
||||
if "markdown" in response_json:
|
||||
image_url_match = re.search(r'!\[.*?\]\((.*?)\)', response_json["markdown"])
|
||||
if image_url_match:
|
||||
image_url = image_url_match.group(1)
|
||||
yield ImageResponse(images=[image_url], alt=format_image_prompt(messages, prompt))
|
||||
return
|
||||
|
||||
if conversation is None or not hasattr(conversation, "chat_id"):
|
||||
conversation = Conversation(model)
|
||||
conversation.validated_value = await cls.fetch_validated()
|
||||
|
|
@ -265,6 +492,71 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
|
|||
"title": ""
|
||||
}
|
||||
|
||||
# Try to get session data from HAR files
|
||||
session_data = cls.generate_session() # Default fallback
|
||||
session_found = False
|
||||
|
||||
# Look for HAR session data
|
||||
har_dir = get_cookies_dir()
|
||||
if os.access(har_dir, os.R_OK):
|
||||
for root, _, files in os.walk(har_dir):
|
||||
for file in files:
|
||||
if file.endswith(".har"):
|
||||
try:
|
||||
with open(os.path.join(root, file), 'rb') as f:
|
||||
har_data = json.load(f)
|
||||
|
||||
for entry in har_data['log']['entries']:
|
||||
# Only look at blackbox API responses
|
||||
if 'blackbox.ai/api' in entry['request']['url']:
|
||||
# Look for a response that has the right structure
|
||||
if 'response' in entry and 'content' in entry['response']:
|
||||
content = entry['response']['content']
|
||||
# Look for both regular and Google auth session formats
|
||||
if ('text' in content and
|
||||
isinstance(content['text'], str) and
|
||||
'"user"' in content['text'] and
|
||||
'"email"' in content['text'] and
|
||||
'"expires"' in content['text']):
|
||||
|
||||
try:
|
||||
# Remove any HTML or other non-JSON content
|
||||
text = content['text'].strip()
|
||||
if text.startswith('{') and text.endswith('}'):
|
||||
# Replace escaped quotes
|
||||
text = text.replace('\\"', '"')
|
||||
har_session = json.loads(text)
|
||||
|
||||
# Check if this is a valid session object (supports both regular and Google auth)
|
||||
if (isinstance(har_session, dict) and
|
||||
'user' in har_session and
|
||||
'email' in har_session['user'] and
|
||||
'expires' in har_session):
|
||||
|
||||
file_path = os.path.join(root, file)
|
||||
debug.log(f"Blackbox: Found session in HAR file")
|
||||
|
||||
session_data = har_session
|
||||
session_found = True
|
||||
break
|
||||
except json.JSONDecodeError as e:
|
||||
# Only print error for entries that truly look like session data
|
||||
if ('"user"' in content['text'] and
|
||||
'"email"' in content['text']):
|
||||
debug.log(f"Blackbox: Error parsing likely session data: {e}")
|
||||
|
||||
if session_found:
|
||||
break
|
||||
|
||||
except Exception as e:
|
||||
debug.log(f"Blackbox: Error reading HAR file: {e}")
|
||||
|
||||
if session_found:
|
||||
break
|
||||
|
||||
if session_found:
|
||||
break
|
||||
|
||||
data = {
|
||||
"messages": current_messages,
|
||||
"agentMode": cls.agentMode.get(model, {}) if model in cls.agentMode else {},
|
||||
|
|
@ -288,7 +580,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
|
|||
"mobileClient": False,
|
||||
"userSelectedModel": model if model in cls.userSelectedModel else None,
|
||||
"validated": conversation.validated_value,
|
||||
"imageGenerationMode": False,
|
||||
"imageGenerationMode": model == cls.default_image_model,
|
||||
"webSearchModePrompt": False,
|
||||
"deepSearchMode": False,
|
||||
"domains": None,
|
||||
|
|
@ -301,23 +593,57 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
|
|||
"additionalInfo": "",
|
||||
"enableNewChats": False
|
||||
},
|
||||
"session": cls.decrypt_data(cls.ENCRYPTED_SESSION),
|
||||
"isPremium": cls.decrypt_bool(cls.ENCRYPTED_IS_PREMIUM),
|
||||
"subscriptionCache": cls.decrypt_data(cls.ENCRYPTED_SUBSCRIPTION_CACHE),
|
||||
"session": session_data if session_data else cls.generate_session(),
|
||||
"isPremium": True,
|
||||
"subscriptionCache": None,
|
||||
"beastMode": False,
|
||||
"webSearchMode": False
|
||||
}
|
||||
|
||||
|
||||
# Add debugging before making the API call
|
||||
if isinstance(session_data, dict) and 'user' in session_data:
|
||||
# Генеруємо демо-сесію для порівняння
|
||||
demo_session = cls.generate_session()
|
||||
is_demo = False
|
||||
|
||||
if demo_session and isinstance(demo_session, dict) and 'user' in demo_session:
|
||||
if session_data['user'].get('email') == demo_session['user'].get('email'):
|
||||
is_demo = True
|
||||
|
||||
if is_demo:
|
||||
debug.log(f"Blackbox: Making API request with built-in Developer Premium Account")
|
||||
else:
|
||||
user_email = session_data['user'].get('email', 'unknown')
|
||||
debug.log(f"Blackbox: Making API request with HAR session email: {user_email}")
|
||||
|
||||
# Continue with the API request and async generator behavior
|
||||
async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response:
|
||||
await raise_for_status(response)
|
||||
|
||||
# Collect the full response
|
||||
full_response = []
|
||||
async for chunk in response.content.iter_any():
|
||||
if chunk:
|
||||
chunk_text = chunk.decode()
|
||||
full_response.append(chunk_text)
|
||||
yield chunk_text
|
||||
|
||||
# Only yield chunks for non-image models
|
||||
if model != cls.default_image_model:
|
||||
yield chunk_text
|
||||
|
||||
full_response_text = ''.join(full_response)
|
||||
|
||||
# For image models, check for image markdown
|
||||
if model == cls.default_image_model:
|
||||
image_url_match = re.search(r'!\[.*?\]\((.*?)\)', full_response_text)
|
||||
if image_url_match:
|
||||
image_url = image_url_match.group(1)
|
||||
yield ImageResponse(images=[image_url], alt=format_image_prompt(messages, prompt))
|
||||
return
|
||||
|
||||
# Handle conversation history once, in one place
|
||||
if return_conversation:
|
||||
full_response_text = ''.join(full_response)
|
||||
conversation.message_history.append({"role": "assistant", "content": full_response_text})
|
||||
yield conversation
|
||||
# For image models that didn't produce an image, fall back to text response
|
||||
elif model == cls.default_image_model:
|
||||
yield full_response_text
|
||||
|
|
|
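With the `ENCRYPTED_*` constants gone, model availability is now gated at runtime by the HAR scan above. A minimal sketch of the observable behavior, assuming this repo's package layout; the printed values are illustrative, not fixed:

```python
# Minimal sketch: Blackbox's premium gating, as introduced above.
from g4f.Provider import Blackbox

# get_models() consults _check_premium_access(), which scans HAR files in the
# cookies directory; without an authorized session it returns fallback_models.
print(len(Blackbox.get_models()), "models visible to this install")

# generate_session() always supplies the built-in demo session as a fallback.
session = Blackbox.generate_session()
print(session["user"]["email"], session["expires"])
```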
g4f/Provider/Copilot.py
@@ -48,7 +48,7 @@ class Copilot(AbstractProvider, ProviderModelMixin):
    models = [default_model]
    model_aliases = {
        "gpt-4": default_model,
        "gpt-4o": default_model,
        "o1": default_model,
    }

    websocket_url = "wss://copilot.microsoft.com/c/api/chat?api-version=2"
g4f/Provider/DeepInfraChat.py
@@ -41,7 +41,7 @@ class DeepInfraChat(OpenaiTemplate):
-        "llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
+        "llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct",
         "deepseek-v3": default_model,
-        "mixtral-small-28b": "mistralai/Mistral-Small-24B-Instruct-2501",
+        "mixtral-small-24b": "mistralai/Mistral-Small-24B-Instruct-2501",
-        "deepseek-r1": "deepseek-ai/DeepSeek-R1-Turbo",
+        "deepseek-r1": "deepseek-ai/DeepSeek-R1",
         "deepseek-r1-distill-llama": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
g4f/Provider/Dynaspark.py (new file)
@@ -0,0 +1,67 @@
from __future__ import annotations

import json
from aiohttp import ClientSession, FormData

from ..typing import AsyncResult, Messages, ImagesType
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..requests.raise_for_status import raise_for_status
from ..image import to_data_uri, to_bytes, is_accepted_format
from .helper import format_prompt


class Dynaspark(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://dynaspark.onrender.com"
    login_url = None
    api_endpoint = "https://dynaspark.onrender.com/generate_response"

    working = True
    needs_auth = False
    use_nodriver = True
    supports_stream = True
    supports_system_message = False
    supports_message_history = False

    default_model = 'gemini-1.5-flash'
    default_vision_model = default_model
    vision_models = [default_vision_model, 'gemini-1.5-flash-8b', 'gemini-2.0-flash', 'gemini-2.0-flash-lite']
    models = vision_models

    model_aliases = {
        "gemini-1.5-flash": "gemini-1.5-flash-8b",
        "gemini-2.0-flash": "gemini-2.0-flash-lite",
    }

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        images: ImagesType = None,
        **kwargs
    ) -> AsyncResult:
        headers = {
            'accept': '*/*',
            'accept-language': 'en-US,en;q=0.9',
            'origin': 'https://dynaspark.onrender.com',
            'referer': 'https://dynaspark.onrender.com/',
            'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36',
            'x-requested-with': 'XMLHttpRequest'
        }

        async with ClientSession(headers=headers) as session:
            form = FormData()
            form.add_field('user_input', format_prompt(messages))
            form.add_field('ai_model', model)

            if images is not None and len(images) > 0:
                image, image_name = images[0]
                image_bytes = to_bytes(image)
                form.add_field('file', image_bytes, filename=image_name, content_type=is_accepted_format(image_bytes))

            async with session.post(f"{cls.api_endpoint}", data=form, proxy=proxy) as response:
                await raise_for_status(response)
                response_text = await response.text()
                response_json = json.loads(response_text)
                yield response_json["response"]
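Dynaspark registers every model as a vision model, so a typical call attaches an image. A hypothetical usage sketch, assuming the async client API used elsewhere in this project; the file name and prompt are placeholders:

```python
# Hypothetical usage of the new Dynaspark provider (file/prompt are placeholders).
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import Dynaspark

async def main():
    client = AsyncClient(provider=Dynaspark)
    response = await client.chat.completions.create(
        model="gemini-1.5-flash",
        messages=[{"role": "user", "content": "Describe this image."}],
        image=open("example.jpg", "rb"),  # forwarded as the 'file' form field
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```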
g4f/Provider/Goabror.py (new file)
@@ -0,0 +1,49 @@
from __future__ import annotations

import json
from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..requests.raise_for_status import raise_for_status
from .helper import format_prompt


class Goabror(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://goabror.uz"
    api_endpoint = "https://goabror.uz/api/gpt.php"

    working = True

    default_model = 'gpt-4'
    models = [default_model]

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        headers = {
            'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
            'accept-language': 'en-US,en;q=0.9',
            'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36'
        }

        async with ClientSession(headers=headers) as session:
            params = {
                "user": format_prompt(messages)
            }
            async with session.get(f"{cls.api_endpoint}", params=params, proxy=proxy) as response:
                await raise_for_status(response)
                text_response = await response.text()
                try:
                    json_response = json.loads(text_response)
                    if "data" in json_response:
                        yield json_response["data"]
                    else:
                        yield text_response
                except json.JSONDecodeError:
                    yield text_response
g4f/Provider/LambdaChat.py (new file)
@@ -0,0 +1,180 @@
from __future__ import annotations

import json
import re
import uuid
from aiohttp import ClientSession, FormData

from ..typing import AsyncResult, Messages
from ..requests import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt, get_last_user_message
from ..providers.response import JsonConversation, TitleGeneration, Reasoning

class LambdaChat(AsyncGeneratorProvider, ProviderModelMixin):
    label = "Lambda Chat"
    url = "https://lambda.chat"
    conversation_url = f"{url}/conversation"

    working = True

    default_model = "deepseek-llama3.3-70b"
    reasoning_model = "deepseek-r1"
    models = [
        default_model,
        reasoning_model,
        "hermes-3-llama-3.1-405b-fp8",
        "hermes3-405b-fp8-128k",
        "llama3.1-nemotron-70b-instruct",
        "lfm-40b",
        "llama3.3-70b-instruct-fp8"
    ]
    model_aliases = {
        "deepseek-v3": default_model,
        "hermes-3": "hermes-3-llama-3.1-405b-fp8",
        "hermes-3": "hermes3-405b-fp8-128k",
        "nemotron-70b": "llama3.1-nemotron-70b-instruct",
        "llama-3.3-70b": "llama3.3-70b-instruct-fp8"
    }

    @classmethod
    async def create_async_generator(
        cls, model: str, messages: Messages,
        api_key: str = None,
        proxy: str = None,
        cookies: dict = None,
        **kwargs
    ) -> AsyncResult:
        model = cls.get_model(model)
        headers = {
            "Origin": cls.url,
            "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
            "Accept": "*/*",
            "Accept-Language": "en-US,en;q=0.9",
            "Referer": cls.url,
            "Sec-Fetch-Dest": "empty",
            "Sec-Fetch-Mode": "cors",
            "Sec-Fetch-Site": "same-origin",
            "Priority": "u=1, i",
            "Pragma": "no-cache",
            "Cache-Control": "no-cache"
        }

        # Initialize cookies if not provided
        if cookies is None:
            cookies = {
                "hf-chat": str(uuid.uuid4())  # Generate a session ID
            }

        async with ClientSession(headers=headers, cookies=cookies) as session:
            # Step 1: Create a new conversation
            data = {"model": model}
            async with session.post(cls.conversation_url, json=data, proxy=proxy) as response:
                await raise_for_status(response)
                conversation_response = await response.json()
                conversation_id = conversation_response["conversationId"]

                # Update cookies with any new ones from the response
                for cookie_name, cookie in response.cookies.items():
                    cookies[cookie_name] = cookie.value

            # Step 2: Get data for this conversation to extract message ID
            async with session.get(
                f"{cls.conversation_url}/{conversation_id}/__data.json?x-sveltekit-invalidated=11",
                proxy=proxy
            ) as response:
                await raise_for_status(response)
                response_text = await response.text()

                # Update cookies again
                for cookie_name, cookie in response.cookies.items():
                    cookies[cookie_name] = cookie.value

                try:
                    data_line = response_text.splitlines()[0]
                    data_json = json.loads(data_line)

                    # Navigate to the data section containing message info
                    message_id = None

                    # For debugging, print the JSON structure
                    if "nodes" in data_json and len(data_json["nodes"]) > 1:
                        node = data_json["nodes"][1]
                        if "data" in node:
                            data = node["data"]

                            # Try to find the system message ID
                            if len(data) > 1 and isinstance(data[1], list) and len(data[1]) > 2:
                                for item in data[1]:
                                    if isinstance(item, dict) and "id" in item:
                                        # Found a potential message ID
                                        message_id = item["id"]
                                        break

                    if not message_id:
                        # Fallback: Just search for any UUID in the response
                        uuid_pattern = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
                        uuids = re.findall(uuid_pattern, response_text)
                        if uuids:
                            message_id = uuids[0]

                    if not message_id:
                        raise ValueError("Could not find message ID in response")

                except (IndexError, KeyError, ValueError, json.JSONDecodeError) as e:
                    raise RuntimeError(f"Failed to parse conversation data: {str(e)}")

            # Step 3: Send the user message
            user_message = get_last_user_message(messages)

            # Prepare form data exactly
            form_data = FormData()
            form_data.add_field(
                "data",
                json.dumps({
                    "inputs": user_message,
                    "id": message_id,
                    "is_retry": False,
                    "is_continue": False,
                    "web_search": False,
                    "tools": []
                }),
                content_type="application/json"
            )

            async with session.post(
                f"{cls.conversation_url}/{conversation_id}",
                data=form_data,
                proxy=proxy
            ) as response:
                await raise_for_status(response)

                async for chunk in response.content:
                    if not chunk:
                        continue

                    chunk_str = chunk.decode('utf-8', errors='ignore')

                    try:
                        data = json.loads(chunk_str)
                    except json.JSONDecodeError:
                        continue

                    # Handling different types of responses
                    if data.get("type") == "stream" and "token" in data:
                        token = data["token"].replace("\u0000", "")
                        if token:
                            yield token
                    elif data.get("type") == "title":
                        yield TitleGeneration(data.get("title", ""))
                    elif data.get("type") == "reasoning" and model == cls.reasoning_model:  # Only process reasoning for reasoning_model
                        subtype = data.get("subtype")
                        token = data.get("token", "").replace("\u0000", "")
                        status = data.get("status", "")

                        if subtype == "stream" and token:
                            yield Reasoning(token=token)
                        elif subtype == "status" and status:
                            yield Reasoning(status=status)
                    elif data.get("type") == "finalAnswer":
                        break
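Because LambdaChat yields plain string tokens interleaved with `TitleGeneration` and `Reasoning` objects (reasoning only for `deepseek-r1`), a consumer has to dispatch on chunk type. A rough sketch of driving the generator directly; the prompt is a placeholder:

```python
# Rough sketch: consuming LambdaChat's mixed output stream (prompt is a placeholder).
import asyncio
from g4f.Provider import LambdaChat
from g4f.providers.response import Reasoning, TitleGeneration

async def main():
    async for chunk in LambdaChat.create_async_generator(
        model="deepseek-r1",  # the only model that emits Reasoning chunks
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
    ):
        if isinstance(chunk, Reasoning):
            print("[reasoning]", chunk)
        elif isinstance(chunk, TitleGeneration):
            print("[title]", chunk)
        else:
            print(chunk, end="")

asyncio.run(main())
```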
g4f/Provider/PollinationsAI.py
@@ -21,13 +21,6 @@ DEFAULT_HEADERS = {
    "accept": "*/*",
    'accept-language': 'en-US,en;q=0.9',
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
-    "priority": "u=1, i",
-    "sec-ch-ua": "\"Not(A:Brand\";v=\"99\", \"Google Chrome\";v=\"133\", \"Chromium\";v=\"133\"",
-    "sec-ch-ua-mobile": "?0",
-    "sec-ch-ua-platform": "\"Linux\"",
-    "sec-fetch-dest": "empty",
-    "sec-fetch-mode": "cors",
-    "sec-fetch-site": "same-site",
    "referer": "https://pollinations.ai/",
    "origin": "https://pollinations.ai",
}

@@ -71,6 +64,8 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
        "gemini-2.0": "gemini",
        "gemini-2.0-flash": "gemini",
+        "gemini-2.0-flash-thinking": "gemini-thinking",
+        "deepseek-r1": "deepseek-r1-llama",
        "gpt-4o-audio": "openai-audio",

        ### Image Models ###
        "sdxl-turbo": "turbo",
g4f/Provider/Websim.py (new file)
@@ -0,0 +1,184 @@
from __future__ import annotations

import json
import random
import string
import asyncio
from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..requests.raise_for_status import raise_for_status
from ..errors import ResponseStatusError
from ..providers.response import ImageResponse
from .helper import format_prompt, format_image_prompt


class Websim(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://websim.ai"
    login_url = None
    chat_api_endpoint = "https://websim.ai/api/v1/inference/run_chat_completion"
    image_api_endpoint = "https://websim.ai/api/v1/inference/run_image_generation"

    working = True
    needs_auth = False
    use_nodriver = False
    supports_stream = False
    supports_system_message = True
    supports_message_history = True

    default_model = 'gemini-1.5-pro'
    default_image_model = 'flux'
    image_models = [default_image_model]
    models = [default_model, 'gemini-1.5-flash'] + image_models

    @staticmethod
    def generate_project_id(for_image=False):
        """
        Generate a project ID in the appropriate format

        For chat: format like 'ke3_xh5gai3gjkmruomu'
        For image: format like 'kx0m131_rzz66qb2xoy7'
        """
        chars = string.ascii_lowercase + string.digits

        if for_image:
            first_part = ''.join(random.choices(chars, k=7))
            second_part = ''.join(random.choices(chars, k=12))
            return f"{first_part}_{second_part}"
        else:
            prefix = ''.join(random.choices(chars, k=3))
            suffix = ''.join(random.choices(chars, k=15))
            return f"{prefix}_{suffix}"

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        prompt: str = None,
        proxy: str = None,
        aspect_ratio: str = "1:1",
        project_id: str = None,
        **kwargs
    ) -> AsyncResult:
        is_image_request = model in cls.image_models

        if project_id is None:
            project_id = cls.generate_project_id(for_image=is_image_request)

        headers = {
            'accept': '*/*',
            'accept-language': 'en-US,en;q=0.9',
            'content-type': 'text/plain;charset=UTF-8',
            'origin': 'https://websim.ai',
            'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36',
            'websim-flags;': ''
        }

        if is_image_request:
            headers['referer'] = 'https://websim.ai/@ISWEARIAMNOTADDICTEDTOPILLOW/ai-image-prompt-generator'
            async for result in cls._handle_image_request(
                project_id=project_id,
                messages=messages,
                prompt=prompt,
                aspect_ratio=aspect_ratio,
                headers=headers,
                proxy=proxy,
                **kwargs
            ):
                yield result
        else:
            headers['referer'] = 'https://websim.ai/@ISWEARIAMNOTADDICTEDTOPILLOW/zelos-ai-assistant'
            async for result in cls._handle_chat_request(
                project_id=project_id,
                messages=messages,
                headers=headers,
                proxy=proxy,
                **kwargs
            ):
                yield result

    @classmethod
    async def _handle_image_request(
        cls,
        project_id: str,
        messages: Messages,
        prompt: str,
        aspect_ratio: str,
        headers: dict,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        used_prompt = format_image_prompt(messages, prompt)

        async with ClientSession(headers=headers) as session:
            data = {
                "project_id": project_id,
                "prompt": used_prompt,
                "aspect_ratio": aspect_ratio
            }
            async with session.post(f"{cls.image_api_endpoint}", json=data, proxy=proxy) as response:
                await raise_for_status(response)
                response_text = await response.text()
                response_json = json.loads(response_text)
                image_url = response_json.get("url")
                if image_url:
                    yield ImageResponse(images=[image_url], alt=used_prompt)

    @classmethod
    async def _handle_chat_request(
        cls,
        project_id: str,
        messages: Messages,
        headers: dict,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        max_retries = 3
        retry_count = 0
        last_error = None

        while retry_count < max_retries:
            try:
                async with ClientSession(headers=headers) as session:
                    data = {
                        "project_id": project_id,
                        "messages": messages
                    }
                    async with session.post(f"{cls.chat_api_endpoint}", json=data, proxy=proxy) as response:
                        if response.status == 429:
                            response_text = await response.text()
                            last_error = ResponseStatusError(f"Response {response.status}: {response_text}")
                            retry_count += 1
                            if retry_count < max_retries:
                                wait_time = 2 ** retry_count
                                await asyncio.sleep(wait_time)
                                continue
                            else:
                                raise last_error

                        await raise_for_status(response)

                        response_text = await response.text()
                        try:
                            response_json = json.loads(response_text)
                            content = response_json.get("content", "")
                            yield content.strip()
                            break
                        except json.JSONDecodeError:
                            yield response_text
                            break

            except ResponseStatusError as e:
                if "Rate limit exceeded" in str(e) and retry_count < max_retries:
                    retry_count += 1
                    wait_time = 2 ** retry_count
                    await asyncio.sleep(wait_time)
                else:
                    if retry_count >= max_retries:
                        raise e
                    else:
                        raise
            except Exception as e:
                raise
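For image models Websim returns a single `ImageResponse`, while the chat path wraps requests in the exponential-backoff loop above (waits of 2s, 4s, 8s across the three attempts). A minimal image-generation sketch, assuming direct provider use; the prompt and aspect ratio are placeholders:

```python
# Minimal sketch: image generation through the new Websim provider.
import asyncio
from g4f.Provider import Websim
from g4f.providers.response import ImageResponse

async def main():
    async for chunk in Websim.create_async_generator(
        model="flux",  # Websim's only image model
        messages=[{"role": "user", "content": "a watercolor lighthouse at dusk"}],
        aspect_ratio="16:9",  # passed through to the image API verbatim
    ):
        if isinstance(chunk, ImageResponse):
            print(chunk.images[0])  # URL returned by run_image_generation

asyncio.run(main())
```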
g4f/Provider/__init__.py
@@ -24,12 +24,15 @@ from .Cloudflare import Cloudflare
 from .Copilot import Copilot
 from .DDG import DDG
 from .DeepInfraChat import DeepInfraChat
+from .Dynaspark import Dynaspark
 from .Free2GPT import Free2GPT
 from .FreeGpt import FreeGpt
 from .GizAI import GizAI
 from .Glider import Glider
+from .Goabror import Goabror
 from .ImageLabs import ImageLabs
 from .Jmuz import Jmuz
+from .LambdaChat import LambdaChat
 from .Liaobots import Liaobots
 from .OIVSCode import OIVSCode
 from .PerplexityLabs import PerplexityLabs

@@ -39,8 +42,10 @@ from .PollinationsAI import PollinationsAI
 from .PollinationsImage import PollinationsImage
 from .TeachAnything import TeachAnything
 from .You import You
+from .Websim import Websim
 from .Yqcloud import Yqcloud

 import sys

 __modules__: list = [
g4f/Provider/hf_space/BlackForestLabs_Flux1Dev.py
@@ -9,10 +9,11 @@ from ...requests import StreamSession
 from ...errors import ResponseError
 from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
 from ..helper import format_image_prompt
-from .Janus_Pro_7B import get_zerogpu_token
+from .DeepseekAI_JanusPro7b import get_zerogpu_token
 from .raise_for_status import raise_for_status

-class BlackForestLabsFlux1Dev(AsyncGeneratorProvider, ProviderModelMixin):
+class BlackForestLabs_Flux1Dev(AsyncGeneratorProvider, ProviderModelMixin):
     label = "BlackForestLabs Flux-1-Dev"
     url = "https://black-forest-labs-flux-1-dev.hf.space"
     space = "black-forest-labs/FLUX.1-dev"
     referer = f"{url}/?__theme=light"

@@ -112,4 +113,4 @@ class BlackForestLabsFlux1Dev(AsyncGeneratorProvider, ProviderModelMixin):
                             yield ImageResponse(json_data['output']['data'][0]["url"], prompt)
                             break
                     except (json.JSONDecodeError, KeyError, TypeError) as e:
-                        raise RuntimeError(f"Failed to parse message: {chunk.decode(errors='replace')}", e)
+                        raise RuntimeError(f"Failed to parse message: {chunk.decode(errors='replace')}", e)
g4f/Provider/hf_space/BlackForestLabs_Flux1Schnell.py
@@ -10,7 +10,8 @@ from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
 from ..helper import format_image_prompt
 from .raise_for_status import raise_for_status

-class BlackForestLabsFlux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):
+class BlackForestLabs_Flux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):
     label = "BlackForestLabs Flux-1-Schnell"
     url = "https://black-forest-labs-flux-1-schnell.hf.space"
     api_endpoint = "https://black-forest-labs-flux-1-schnell.hf.space/call/infer"
g4f/Provider/hf_space/CohereForAI_C4AI_Command.py
@@ -9,21 +9,25 @@ from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
 from ..helper import format_prompt, get_last_user_message
 from ...providers.response import JsonConversation, TitleGeneration

-class CohereForAI(AsyncGeneratorProvider, ProviderModelMixin):
+class CohereForAI_C4AI_Command(AsyncGeneratorProvider, ProviderModelMixin):
     label = "CohereForAI C4AI Command"
     url = "https://cohereforai-c4ai-command.hf.space"
     conversation_url = f"{url}/conversation"

     working = True

-    default_model = "command-r-plus-08-2024"
+    default_model = "command-a-03-2025"
     models = [
         default_model,
+        "command-r-plus-08-2024",
+        "command-r-08-2024",
         "command-r-plus",
         "command-r",
         "command-r7b-12-2024",
+        "command-r7b-arabic-02-2025",
     ]
     model_aliases = {
+        "command-a": "command-a-03-2025",
         "command-r-plus": "command-r-plus-08-2024",
         "command-r": "command-r-08-2024",
         "command-r7b": "command-r7b-12-2024",

@@ -95,4 +99,4 @@ class CohereForAI(AsyncGeneratorProvider, ProviderModelMixin):
             elif data["type"] == "title":
                 yield TitleGeneration(data["title"])
             elif data["type"] == "finalAnswer":
-                break
+                break
g4f/Provider/hf_space/DeepseekAI_JanusPro7b.py
@@ -19,7 +19,8 @@ from ...errors import ResponseError
 from ... import debug
 from .raise_for_status import raise_for_status

-class Janus_Pro_7B(AsyncGeneratorProvider, ProviderModelMixin):
+class DeepseekAI_JanusPro7b(AsyncGeneratorProvider, ProviderModelMixin):
     label = "DeepseekAI Janus-Pro-7B"
     space = "deepseek-ai/Janus-Pro-7B"
     url = f"https://huggingface.co/spaces/{space}"
     api_url = "https://deepseek-ai-janus-pro-7b.hf.space"

@@ -180,4 +181,4 @@ async def get_zerogpu_token(space: str, session: StreamSession, conversation: JsonConversation):
     if "token" in response_data:
         zerogpu_token = response_data["token"]

-    return zerogpu_uuid, zerogpu_token
+    return zerogpu_uuid, zerogpu_token
g4f/Provider/hf_space/G4F.py
@@ -8,16 +8,16 @@ import asyncio
 from ...typing import AsyncResult, Messages
 from ...providers.response import ImageResponse, Reasoning, JsonConversation
 from ..helper import format_image_prompt, get_random_string
-from .Janus_Pro_7B import Janus_Pro_7B, get_zerogpu_token
-from .BlackForestLabsFlux1Dev import BlackForestLabsFlux1Dev
+from .DeepseekAI_JanusPro7b import DeepseekAI_JanusPro7b, get_zerogpu_token
+from .BlackForestLabs_Flux1Dev import BlackForestLabs_Flux1Dev
 from .raise_for_status import raise_for_status

-class FluxDev(BlackForestLabsFlux1Dev):
+class FluxDev(BlackForestLabs_Flux1Dev):
     url = "https://roxky-flux-1-dev.hf.space"
     space = "roxky/FLUX.1-dev"
     referer = f"{url}/?__theme=light"

-class G4F(Janus_Pro_7B):
+class G4F(DeepseekAI_JanusPro7b):
     label = "G4F framework"
     space = "roxky/Janus-Pro-7B"
     url = f"https://huggingface.co/spaces/roxky/g4f-space"

@@ -27,8 +27,8 @@ class G4F(Janus_Pro_7B):

     default_model = "flux"
     model_aliases = {"flux-schnell": default_model}
-    image_models = [Janus_Pro_7B.default_image_model, default_model, "flux-dev", *model_aliases.keys()]
-    models = [Janus_Pro_7B.default_model, *image_models]
+    image_models = [DeepseekAI_JanusPro7b.default_image_model, default_model, "flux-dev", *model_aliases.keys()]
+    models = [DeepseekAI_JanusPro7b.default_model, *image_models]

     @classmethod
     async def create_async_generator(

@@ -120,4 +120,4 @@ class G4F(Janus_Pro_7B):
             yield Reasoning(status=f"Generating {time.time() - started:.2f}s")
             await asyncio.sleep(0.2)
         yield await task
-        yield Reasoning(status=f"Finished {time.time() - started:.2f}s")
+        yield Reasoning(status=f"Finished {time.time() - started:.2f}s")
g4f/Provider/hf_space/Microsoft_Phi_4.py
@@ -12,10 +12,11 @@ from ...requests.raise_for_status import raise_for_status
 from ...image import to_bytes, is_accepted_format, is_data_an_wav
 from ...errors import ResponseError
 from ... import debug
-from .Janus_Pro_7B import get_zerogpu_token
+from .DeepseekAI_JanusPro7b import get_zerogpu_token
 from .raise_for_status import raise_for_status

-class Phi_4(AsyncGeneratorProvider, ProviderModelMixin):
+class Microsoft_Phi_4(AsyncGeneratorProvider, ProviderModelMixin):
     label = "Microsoft Phi-4"
     space = "microsoft/phi-4-multimodal"
     url = f"https://huggingface.co/spaces/{space}"
     api_url = "https://microsoft-phi-4-multimodal.hf.space"

@@ -158,4 +159,4 @@ class Phi_4(AsyncGeneratorProvider, ProviderModelMixin):
                     break

             except json.JSONDecodeError:
-                debug.log("Could not parse JSON:", line.decode(errors="replace"))
+                debug.log("Could not parse JSON:", line.decode(errors="replace"))
g4f/Provider/hf_space/Qwen_QVQ_72B.py
@@ -11,6 +11,7 @@ from ..helper import format_prompt, get_random_string
 from ...image import to_bytes, is_accepted_format

+
 class Qwen_QVQ_72B(AsyncGeneratorProvider, ProviderModelMixin):
     label = "Qwen QVQ-72B"
     url = "https://qwen-qvq-72b-preview.hf.space"
     api_endpoint = "/gradio_api/call/generate"
g4f/Provider/hf_space/Qwen_Qwen_2_5M_Demo.py
@@ -12,6 +12,7 @@ from ..helper import get_last_user_message
 from ... import debug

+
 class Qwen_Qwen_2_5M_Demo(AsyncGeneratorProvider, ProviderModelMixin):
     label = "Qwen Qwen-2.5M-Demo"
     url = "https://qwen-qwen2-5-1m-demo.hf.space"
     api_endpoint = f"{url}/run/predict?__theme=light"
g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py
@@ -11,6 +11,7 @@ from ..helper import format_prompt
 from ... import debug

+
 class Qwen_Qwen_2_72B_Instruct(AsyncGeneratorProvider, ProviderModelMixin):
     label = "Qwen Qwen-2.72B-Instruct"
     url = "https://qwen-qwen2-72b-instruct.hf.space"
     api_endpoint = "https://qwen-qwen2-72b-instruct.hf.space/queue/join?"
g4f/Provider/hf_space/StabilityAI_SD35Large.py
@@ -9,7 +9,8 @@ from ...errors import ResponseError
 from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
 from ..helper import format_image_prompt

-class StableDiffusion35Large(AsyncGeneratorProvider, ProviderModelMixin):
+class StabilityAI_SD35Large(AsyncGeneratorProvider, ProviderModelMixin):
     label = "StabilityAI SD-3.5-Large"
     url = "https://stabilityai-stable-diffusion-3-5-large.hf.space"
     api_endpoint = "/gradio_api/call/infer"
g4f/Provider/hf_space/Voodoohop_Flux1Schnell.py
@@ -10,7 +10,8 @@ from ...requests.raise_for_status import raise_for_status
 from ..helper import format_image_prompt
 from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin

-class VoodoohopFlux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):
+class Voodoohop_Flux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):
     label = "Voodoohop Flux-1-Schnell"
     url = "https://voodoohop-flux-1-schnell.hf.space"
     api_endpoint = "https://voodoohop-flux-1-schnell.hf.space/call/infer"
g4f/Provider/hf_space/__init__.py
@@ -6,17 +6,17 @@ from ...typing import AsyncResult, Messages, ImagesType
 from ...errors import ResponseError
 from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin

-from .BlackForestLabsFlux1Dev import BlackForestLabsFlux1Dev
-from .BlackForestLabsFlux1Schnell import BlackForestLabsFlux1Schnell
-from .VoodoohopFlux1Schnell import VoodoohopFlux1Schnell
-from .CohereForAI import CohereForAI
-from .Janus_Pro_7B import Janus_Pro_7B
-from .Phi_4 import Phi_4
+from .BlackForestLabs_Flux1Dev import BlackForestLabs_Flux1Dev
+from .BlackForestLabs_Flux1Schnell import BlackForestLabs_Flux1Schnell
+from .CohereForAI_C4AI_Command import CohereForAI_C4AI_Command
+from .DeepseekAI_JanusPro7b import DeepseekAI_JanusPro7b
+from .G4F import G4F
+from .Microsoft_Phi_4 import Microsoft_Phi_4
 from .Qwen_QVQ_72B import Qwen_QVQ_72B
 from .Qwen_Qwen_2_5M_Demo import Qwen_Qwen_2_5M_Demo
 from .Qwen_Qwen_2_72B_Instruct import Qwen_Qwen_2_72B_Instruct
-from .StableDiffusion35Large import StableDiffusion35Large
-from .G4F import G4F
+from .StabilityAI_SD35Large import StabilityAI_SD35Large
+from .Voodoohop_Flux1Schnell import Voodoohop_Flux1Schnell

 class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):
     url = "https://huggingface.co/spaces"

@@ -24,20 +24,20 @@ class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):
     working = True

     default_model = Qwen_Qwen_2_72B_Instruct.default_model
-    default_image_model = BlackForestLabsFlux1Dev.default_model
+    default_image_model = BlackForestLabs_Flux1Dev.default_model
     default_vision_model = Qwen_QVQ_72B.default_model
     providers = [
-        BlackForestLabsFlux1Dev,
-        BlackForestLabsFlux1Schnell,
-        VoodoohopFlux1Schnell,
-        CohereForAI,
-        Janus_Pro_7B,
-        Phi_4,
+        BlackForestLabs_Flux1Dev,
+        BlackForestLabs_Flux1Schnell,
+        CohereForAI_C4AI_Command,
+        DeepseekAI_JanusPro7b,
+        G4F,
+        Microsoft_Phi_4,
         Qwen_QVQ_72B,
         Qwen_Qwen_2_5M_Demo,
         Qwen_Qwen_2_72B_Instruct,
-        StableDiffusion35Large,
-        G4F
+        StabilityAI_SD35Large,
+        Voodoohop_Flux1Schnell,
     ]

     @classmethod

@@ -97,4 +97,5 @@ class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):
             raise error

 for provider in HuggingSpace.providers:
-    provider.parent = HuggingSpace.__name__
+    provider.parent = HuggingSpace.__name__
+    provider.hf_space = True
g4f/gui/client/static/js/chat.v1.js
@@ -2145,6 +2145,7 @@ async function on_api() {
         + (provider.vision ? " (Image Upload)" : "")
         + (provider.image ? " (Image Generation)" : "")
         + (provider.nodriver ? " (Browser)" : "")
+        + (provider.hf_space ? " (HuggingSpace)" : "")
         + (!provider.nodriver && provider.auth ? " (Auth)" : "");
     if (provider.parent)
         option.dataset.parent = provider.parent;
g4f/gui/server/api.py
@@ -67,6 +67,7 @@ class Api:
             "image": bool(getattr(provider, "image_models", False)),
             "vision": getattr(provider, "default_vision_model", None) is not None,
             "nodriver": getattr(provider, "use_nodriver", False),
+            "hf_space": getattr(provider, "hf_space", False),
             "auth": provider.needs_auth,
             "login_url": getattr(provider, "login_url", None),
         } for provider in __providers__ if provider.working]

@@ -245,4 +246,4 @@ class Api:
         return self._format_json("provider", provider_handler.get_dict())

 def get_error_message(exception: Exception) -> str:
-    return f"{type(exception).__name__}: {exception}"
+    return f"{type(exception).__name__}: {exception}"
g4f/models.py
@@ -13,14 +13,17 @@ from .Provider import (
     Copilot,
     DDG,
     DeepInfraChat,
+    Dynaspark,
     Free2GPT,
     FreeGpt,
     HuggingSpace,
     G4F,
-    Janus_Pro_7B,
+    DeepseekAI_JanusPro7b,
     Glider,
+    Goabror,
     ImageLabs,
     Jmuz,
+    LambdaChat,
     Liaobots,
     OIVSCode,
     PerplexityLabs,

@@ -28,6 +31,7 @@ from .Provider import (
     PollinationsAI,
     PollinationsImage,
     TeachAnything,
+    Websim,
     Yqcloud,

     ### Needs Auth ###

@@ -88,6 +92,7 @@ default = Model(
         Free2GPT,
         FreeGpt,
         Glider,
+        Dynaspark,
         OpenaiChat,
         Jmuz,
         Cloudflare,

@@ -102,6 +107,7 @@ default_vision = Model(
         OIVSCode,
         DeepInfraChat,
         PollinationsAI,
+        Dynaspark,
         HuggingSpace,
         GeminiPro,
         HuggingFaceAPI,

@@ -120,27 +126,27 @@ default_vision = Model(
 gpt_4 = Model(
     name = 'gpt-4',
     base_provider = 'OpenAI',
-    best_provider = IterListProvider([DDG, Jmuz, ChatGptEs, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots])
+    best_provider = IterListProvider([DDG, Jmuz, ChatGptEs, PollinationsAI, Yqcloud, Goabror, Copilot, OpenaiChat, Liaobots])
 )

 # gpt-4o
 gpt_4o = VisionModel(
     name = 'gpt-4o',
     base_provider = 'OpenAI',
-    best_provider = IterListProvider([Blackbox, Jmuz, ChatGptEs, PollinationsAI, Copilot, Liaobots, OpenaiChat])
+    best_provider = IterListProvider([Blackbox, Jmuz, ChatGptEs, PollinationsAI, Liaobots, OpenaiChat])
 )

 gpt_4o_mini = Model(
     name = 'gpt-4o-mini',
     base_provider = 'OpenAI',
-    best_provider = IterListProvider([DDG, ChatGptEs, Jmuz, PollinationsAI, OIVSCode, Liaobots, OpenaiChat])
+    best_provider = IterListProvider([DDG, Blackbox, ChatGptEs, Jmuz, PollinationsAI, OIVSCode, Liaobots, OpenaiChat])
 )

 # o1
 o1 = Model(
     name = 'o1',
     base_provider = 'OpenAI',
-    best_provider = IterListProvider([Blackbox, OpenaiAccount])
+    best_provider = IterListProvider([Blackbox, Copilot, OpenaiAccount])
 )

 o1_mini = Model(

@@ -193,19 +199,19 @@ llama_3_70b = Model(
 llama_3_1_8b = Model(
     name = "llama-3.1-8b",
     base_provider = "Meta Llama",
-    best_provider = IterListProvider([Blackbox, DeepInfraChat, Glider, Jmuz, PollinationsAI, Cloudflare])
+    best_provider = IterListProvider([DeepInfraChat, Glider, PollinationsAI, AllenAI, Jmuz, Cloudflare])
 )

 llama_3_1_70b = Model(
     name = "llama-3.1-70b",
     base_provider = "Meta Llama",
-    best_provider = IterListProvider([Blackbox, Glider, Jmuz])
+    best_provider = IterListProvider([Glider, AllenAI, Jmuz])
 )

 llama_3_1_405b = Model(
     name = "llama-3.1-405b",
     base_provider = "Meta Llama",
-    best_provider = IterListProvider([Blackbox, Jmuz])
+    best_provider = IterListProvider([AllenAI, Jmuz])
 )

 # llama 3.2

@@ -243,7 +249,7 @@ llama_3_2_90b = Model(
 llama_3_3_70b = Model(
     name = "llama-3.3-70b",
     base_provider = "Meta Llama",
-    best_provider = IterListProvider([DDG, Blackbox, DeepInfraChat, PollinationsAI, Jmuz, HuggingChat, HuggingFace])
+    best_provider = IterListProvider([DDG, DeepInfraChat, LambdaChat, PollinationsAI, Jmuz, HuggingChat, HuggingFace])
 )

 ### Mistral ###

@@ -267,20 +273,14 @@ mistral_nemo = Model(
 mixtral_small_24b = Model(
     name = "mixtral-small-24b",
     base_provider = "Mistral",
-    best_provider = DDG
-)
-
-mixtral_small_28b = Model(
-    name = "mixtral-small-28b",
-    base_provider = "Mistral",
-    best_provider = IterListProvider([Blackbox, DeepInfraChat])
+    best_provider = IterListProvider([DDG, DeepInfraChat])
 )

 ### NousResearch ###
-hermes_2_dpo = Model(
-    name = "hermes-2-dpo",
+hermes_3 = Model(
+    name = "hermes-3",
     base_provider = "NousResearch",
-    best_provider = Blackbox
+    best_provider = LambdaChat
 )

 ### Microsoft ###

@@ -329,20 +329,20 @@ gemini_exp = Model(
 gemini_1_5_flash = Model(
     name = 'gemini-1.5-flash',
     base_provider = 'Google DeepMind',
-    best_provider = IterListProvider([Blackbox, Free2GPT, FreeGpt, TeachAnything, Jmuz, GeminiPro])
+    best_provider = IterListProvider([Free2GPT, FreeGpt, TeachAnything, Websim, Dynaspark, Jmuz, GeminiPro])
 )

 gemini_1_5_pro = Model(
     name = 'gemini-1.5-pro',
     base_provider = 'Google DeepMind',
-    best_provider = IterListProvider([Blackbox, Free2GPT, FreeGpt, TeachAnything, Jmuz, GeminiPro])
+    best_provider = IterListProvider([Free2GPT, FreeGpt, TeachAnything, Websim, Jmuz, GeminiPro])
 )

 # gemini-2.0
 gemini_2_0_flash = Model(
     name = 'gemini-2.0-flash',
     base_provider = 'Google DeepMind',
-    best_provider = IterListProvider([Blackbox, GeminiPro, Liaobots])
+    best_provider = IterListProvider([Dynaspark, GeminiPro, Liaobots])
 )

 gemini_2_0_flash_thinking = Model(

@@ -412,12 +412,6 @@ blackboxai = Model(
     best_provider = Blackbox
 )

-blackboxai_pro = Model(
-    name = 'blackboxai-pro',
-    base_provider = 'Blackbox AI',
-    best_provider = Blackbox
-)
-
 ### CohereForAI ###
 command_r = Model(
     name = 'command-r',

@@ -437,6 +431,12 @@ command_r7b = Model(
     best_provider = HuggingSpace
 )

+command_a = Model(
+    name = 'command-a',
+    base_provider = 'CohereForAI',
+    best_provider = HuggingSpace
+)
+
 ### Qwen ###
 qwen_1_5_7b = Model(
     name = 'qwen-1.5-7b',

@@ -473,7 +473,7 @@ qwen_2_5_1m = Model(
 qwq_32b = Model(
     name = 'qwq-32b',
     base_provider = 'Qwen',
-    best_provider = IterListProvider([Blackbox, Jmuz, HuggingChat])
+    best_provider = IterListProvider([Jmuz, HuggingChat])
 )
 qvq_72b = VisionModel(
     name = 'qvq-72b',

@@ -498,19 +498,19 @@ deepseek_chat = Model(
 deepseek_v3 = Model(
     name = 'deepseek-v3',
     base_provider = 'DeepSeek',
-    best_provider = IterListProvider([Blackbox, DeepInfraChat, OIVSCode, Liaobots])
+    best_provider = IterListProvider([Blackbox, DeepInfraChat, LambdaChat, OIVSCode, Liaobots])
 )

 deepseek_r1 = Model(
     name = 'deepseek-r1',
     base_provider = 'DeepSeek',
-    best_provider = IterListProvider([Blackbox, DeepInfraChat, Glider, PollinationsAI, Jmuz, Liaobots, HuggingChat, HuggingFace])
+    best_provider = IterListProvider([Blackbox, DeepInfraChat, Glider, LambdaChat, PollinationsAI, Jmuz, Liaobots, HuggingChat, HuggingFace])
 )

 janus_pro_7b = VisionModel(
-    name = Janus_Pro_7B.default_model,
+    name = DeepseekAI_JanusPro7b.default_model,
     base_provider = 'DeepSeek',
-    best_provider = IterListProvider([Janus_Pro_7B, G4F])
+    best_provider = IterListProvider([DeepseekAI_JanusPro7b, G4F])
 )

 ### x.ai ###

@@ -561,14 +561,14 @@ r1_1776 = Model(
 nemotron_70b = Model(
     name = 'nemotron-70b',
     base_provider = 'Nvidia',
-    best_provider = IterListProvider([HuggingChat, HuggingFace])
+    best_provider = IterListProvider([LambdaChat, HuggingChat, HuggingFace])
 )

 ### Databricks ###
 dbrx_instruct = Model(
     name = 'dbrx-instruct',
     base_provider = 'Databricks',
-    best_provider = IterListProvider([Blackbox, DeepInfraChat])
+    best_provider = DeepInfraChat
 )

 ### THUDM ###

@@ -657,6 +657,12 @@ olmoe_0125 = Model(
     best_provider = AllenAI
 )

+lfm_40b = Model(
+    name = "lfm-40b",
+    base_provider = "Liquid AI",
+    best_provider = LambdaChat
+)
+
 ### Uncensored AI ###
 evil = Model(
     name = 'evil',

@@ -686,7 +692,7 @@ sd_3_5 = ImageModel(
 flux = ImageModel(
     name = 'flux',
     base_provider = 'Black Forest Labs',
-    best_provider = IterListProvider([Blackbox, PollinationsImage, HuggingSpace])
+    best_provider = IterListProvider([Blackbox, PollinationsImage, Websim, HuggingSpace])
 )

 flux_pro = ImageModel(

@@ -778,10 +784,9 @@ class ModelUtils:
         mixtral_8x22b.name: mixtral_8x22b,
         mistral_nemo.name: mistral_nemo,
         mixtral_small_24b.name: mixtral_small_24b,
-        mixtral_small_28b.name: mixtral_small_28b,

         ### NousResearch ###
-        hermes_2_dpo.name: hermes_2_dpo,
+        hermes_3.name: hermes_3,

         ### Microsoft ###
         # phi

@@ -821,12 +826,12 @@ class ModelUtils:

         ### Blackbox AI ###
         blackboxai.name: blackboxai,
-        blackboxai_pro.name: blackboxai_pro,

         ### CohereForAI ###
         command_r.name: command_r,
         command_r_plus.name: command_r_plus,
         command_r7b.name: command_r7b,
+        command_a.name: command_a,

         ### GigaChat ###
         gigachat.name: gigachat,

@@ -861,19 +866,33 @@ class ModelUtils:
         deepseek_v3.name: deepseek_v3,
         deepseek_r1.name: deepseek_r1,

-        nemotron_70b.name: nemotron_70b, ### Nvidia ###
-        dbrx_instruct.name: dbrx_instruct, ### Databricks ###
-        glm_4.name: glm_4, ### THUDM ###
-        mini_max.name: mini_max, ## MiniMax ###
-        yi_34b.name: yi_34b, ## 01-ai ###
+        ### Nvidia ###
+        nemotron_70b.name: nemotron_70b,
+
+        ### Databricks ###
+        dbrx_instruct.name: dbrx_instruct,
+
+        ### THUDM ###
+        glm_4.name: glm_4,
+
+        ## MiniMax ###
+        mini_max.name: mini_max,
+
+        ## 01-ai ###
+        yi_34b.name: yi_34b,

         ### Cognitive Computations ###
         dolphin_2_6.name: dolphin_2_6,
         dolphin_2_9.name: dolphin_2_9,

-        airoboros_70b.name: airoboros_70b, ### DeepInfra ###
-        lzlv_70b.name: lzlv_70b, ### Lizpreciatior ###
-        minicpm_2_5.name: minicpm_2_5, ### OpenBMB ###
+        ### DeepInfra ###
+        airoboros_70b.name: airoboros_70b,
+
+        ### Lizpreciatior ###
+        lzlv_70b.name: lzlv_70b,
+
+        ### OpenBMB ###
+        minicpm_2_5.name: minicpm_2_5,

         ### Ai2 ###
         tulu_3_405b.name: tulu_3_405b,

@@ -882,7 +901,11 @@ class ModelUtils:
         tulu_3_70b.name: tulu_3_70b,
         olmoe_0125.name: olmoe_0125,

-        evil.name: evil, ### Uncensored AI ###
+        ### Liquid AI ###
+        lfm_40b.name: lfm_40b,
+
+        ### Uncensored AI ###
+        evil.name: evil,

         #############
         ### Image ###
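All of the registry changes above surface through `ModelUtils.convert`, so the renamed and newly added entries resolve by plain string. A quick sketch (the names queried are ones this diff touches):

```python
# Quick sketch: the new/renamed registry entries resolve through ModelUtils.
from g4f.models import ModelUtils

for name in ("command-a", "lfm-40b", "hermes-3", "mixtral-small-24b"):
    model = ModelUtils.convert[name]
    print(f"{name}: provider={model.base_provider}, best={model.best_provider}")

# Removed entries such as "mixtral-small-28b" and "blackboxai-pro" now raise KeyError.
assert "blackboxai-pro" not in ModelUtils.convert
```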
g4f/tools/run_tools.py
@@ -156,6 +156,33 @@ class ThinkingProcessor:
         # Handle non-thinking chunk
         if not start_time and "<think>" not in chunk and "</think>" not in chunk:
             return 0, [chunk]

+        # Handle case where both opening and closing tags are in the same chunk
+        if "<think>" in chunk and "</think>" in chunk:
+            parts = chunk.split("<think>", 1)
+            before_think = parts[0]
+            thinking_and_after = parts[1]
+
+            thinking_parts = thinking_and_after.split("</think>", 1)
+            thinking_content = thinking_parts[0]
+            after_think = thinking_parts[1] if len(thinking_parts) > 1 else ""
+
+            if before_think.strip():
+                results.append(before_think)
+
+            # Only add thinking content if it contains non-whitespace
+            if thinking_content.strip():
+                results.append(Reasoning(thinking_content))
+
+            thinking_duration = time.time() - start_time if start_time > 0 else 0
+            status = f"Thought for {thinking_duration:.2f}s" if thinking_duration > 1 else "Finished"
+            results.append(Reasoning(status=status, is_thinking="</think>"))
+
+            # Important: Add the content that comes after the thinking tags
+            if after_think.strip():
+                results.append(after_think)
+
+            return 0, results

         # Handle thinking start
         if "<think>" in chunk and "`<think>`" not in chunk:

@@ -183,7 +210,8 @@ class ThinkingProcessor:
         status = f"Thought for {thinking_duration:.2f}s" if thinking_duration > 1 else "Finished"
         results.append(Reasoning(status=status, is_thinking="</think>"))

-        if after and after[0]:
+        # Make sure to handle text after the closing tag
+        if after and after[0].strip():
             results.append(after[0])

         return 0, results
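The new branch fixes the case where a provider sends `<think>...</think>` in a single chunk: previously only the streamed-tag paths ran and trailing text could be dropped. A behavioral sketch, assuming the processor keeps the `(chunk) -> (start_time, results)` shape shown in run_tools.py (the method name is taken from this file; treat it as an assumption):

```python
# Behavioral sketch for the combined-tag case handled above (method name assumed).
from g4f.tools.run_tools import ThinkingProcessor
from g4f.providers.response import Reasoning

start_time, results = ThinkingProcessor.process_thinking_chunk(
    "pre <think>step 1, step 2</think> post"
)
# With no prior start_time the duration is 0, so the status is "Finished".
# Expected shape: ["pre ", Reasoning("step 1, step 2"),
#                  Reasoning(status="Finished", is_thinking="</think>"), " post"]
assert start_time == 0
for part in results:
    print(type(part).__name__, "->", part)
```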