Adding New Models and Enhancing Provider Functionality (#2689)

* Adding New Models and Enhancing Provider Functionality

* fix(core): handle model errors and improve configuration

- Import ModelNotSupportedError for proper exception handling in model resolution
- Update login_url configuration to reference class URL attribute dynamically
- Remove redundant typing imports after internal module reorganization
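The two fixes above can be sketched as follows. This is a minimal illustration, not the module's actual code: only `ModelNotSupportedError` is named in the commit; the class, method names, and URL are hypothetical.

```python
class ModelNotSupportedError(Exception):
    """Stand-in for g4f's error type, raised when a provider lacks a model."""

class ExampleProvider:
    url = "https://example-provider.ai"  # hypothetical provider URL

    @classmethod
    def login_url(cls) -> str:
        # Reference the class URL attribute dynamically instead of
        # duplicating a hard-coded string in the configuration.
        return f"{cls.url}/login"

    @classmethod
    def get_model(cls, model: str, models: list) -> str:
        # Raise a dedicated error so callers can catch it during model resolution.
        if model not in models:
            raise ModelNotSupportedError(f"Model not supported: {model}")
        return model
```

Catching a dedicated exception type lets the resolver fall back to another provider instead of failing on a generic error.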

* feat(g4f/Provider/PerplexityLabs.py): Add new Perplexity models and update provider listings

- Update PerplexityLabs provider with expanded Sonar model family including pro/reasoning variants
- Add new text model sonar-reasoning-pro to the supported model catalog
- Standardize model naming conventions across provider documentation
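The expanded Sonar family can be pictured as a provider model list. The model names come from this commit's diff; the class layout, URL, and choice of default model are illustrative assumptions, not the provider's actual code.

```python
class PerplexityLabs:
    url = "https://labs.perplexity.ai"  # assumed provider URL
    default_model = "sonar-pro"         # assumed default

    # Sonar family after this commit, including pro/reasoning variants.
    models = [
        "sonar",
        "sonar-pro",
        "sonar-reasoning",
        "sonar-reasoning-pro",  # newly added reasoning variant
    ]
```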

* feat(g4f/models.py): add Sonar Reasoning Pro model configuration

- Add new sonar-reasoning-pro model to the Perplexity AI text models section
- Include model in ModelUtils.convert mapping with PerplexityLabs provider
- Maintain consistent configuration pattern with existing Sonar variants

* feat(docs/providers-and-models.md): update provider models and add new reasoning model

- Update PerplexityLabs text models to standardized sonar naming convention
- Add new sonar-reasoning-pro model to text models table
- Include latest Perplexity AI documentation references for new model

* docs(docs/providers-and-models.md): update AI providers documentation

- Remove deprecated chatgptt.me from no-auth providers list
- Delete redundant Auth column from HuggingSpace providers table
- Update PerplexityLabs model website URLs to sonar.perplexity.ai
- Adjust provider counts for GPT-4/GPT-4o models in text models section
- Fix inconsistent formatting in image models provider listings

* chore(g4f/models.py): remove deprecated ChatGptt provider integration

- Remove ChatGptt import from provider dependencies
- Exclude ChatGptt from default model's best_provider list
- Update gpt_4 model configuration to eliminate ChatGptt reference
- Modify gpt_4o vision model provider hierarchy
- Adjust gpt_4o_mini provider selection parameters

BREAKING CHANGE: Existing integrations using the ChatGptt provider will no longer function

* Disabled provider (moved g4f/Provider/ChatGptt.py to g4f/Provider/not_working/ChatGptt.py): problems with Cloudflare

* fix(g4f/Provider/CablyAI.py): update API endpoints and model configurations

* docs(docs/providers-and-models.md): update model listings and provider capabilities

* feat(g4f/models.py): Add Hermes-3 model and enhance provider configs

* feat(g4f/Provider/CablyAI.py): Add free tier indicators to model aliases

* refactor(g4f/tools/run_tools.py): modularize thinking chunk handling
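The run_tools refactor isolates thinking-chunk handling into its own helper. A simplified, non-streaming sketch of that kind of helper follows; the `<think>` tag names, function name, and signature are assumptions for illustration, not the module's actual API.

```python
THINK_OPEN, THINK_CLOSE = "<think>", "</think>"

def split_thinking(text: str) -> list:
    """Split a model response into (is_thinking, chunk) pairs.

    Content inside <think>...</think> is flagged as reasoning; everything
    else is the user-visible answer. An unterminated <think> block is
    treated as thinking until end of input.
    """
    parts = []
    pos = 0
    while True:
        start = text.find(THINK_OPEN, pos)
        if start == -1:
            # No more thinking blocks; emit the remaining answer text.
            if pos < len(text):
                parts.append((False, text[pos:]))
            return parts
        if start > pos:
            parts.append((False, text[pos:start]))
        end = text.find(THINK_CLOSE, start)
        if end == -1:
            # Unterminated block: the rest is thinking.
            parts.append((True, text[start + len(THINK_OPEN):]))
            return parts
        parts.append((True, text[start + len(THINK_OPEN):end]))
        pos = end + len(THINK_CLOSE)
```

Keeping this logic in one function (rather than inline in the response loop) is what makes the handling "modular": the same splitter can serve any provider that emits tagged reasoning.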

* fix(g4f/Provider/DeepInfraChat.py): resolve duplicate keys and enhance request headers

* feat(g4f/Provider/DeepInfraChat.py): Add multimodal image support and improve model handling
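Multimodal image support of this kind typically means encoding the image as a data URI inside an OpenAI-style content part. A hedged sketch, assuming the common chat-completions message convention rather than DeepInfraChat's confirmed schema; the function name is hypothetical:

```python
import base64

def build_image_message(prompt: str, image_bytes: bytes,
                        mime: str = "image/png") -> dict:
    """Build a user message carrying both text and an inline image."""
    # Embed the raw bytes as a base64 data URI so no file upload is needed.
    data_uri = f"data:{mime};base64,{base64.b64encode(image_bytes).decode()}"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": data_uri}},
        ],
    }
```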

* chore(g4f/models.py): update default vision model providers

* feat(docs/providers-and-models.md): update provider capabilities and model specifications

* Update docs/client.md

* docs(docs/providers-and-models.md): Update DeepInfraChat models documentation

* feat(g4f/Provider/DeepInfraChat.py): add new vision models and expand model aliases

* feat(g4f/models.py): update model configurations and add new providers

* feat(g4f/models.py): Update model configurations and add new AI models

---------

Co-authored-by: kqlio67 <>
Committed by kqlio67 on 2025-02-07 12:54:00 +00:00 via GitHub (GPG key ID: B5690EEEBB952194)
commit 88e7ef98f0, parent 5d35b746f2
15 changed files with 381 additions and 229 deletions


@@ -10,7 +10,6 @@ from .Provider import (
CablyAI,
ChatGLM,
ChatGptEs,
-ChatGptt,
Cloudflare,
Copilot,
DDG,
@@ -81,7 +80,6 @@ default = Model(
Copilot,
DeepInfraChat,
ChatGptEs,
-ChatGptt,
PollinationsAI,
Jmuz,
CablyAI,
@@ -96,6 +94,8 @@ default_vision = Model(
base_provider = "",
best_provider = IterListProvider([
Blackbox,
+OIVSCode,
+DeepInfraChat,
PollinationsAI,
HuggingSpace,
GeminiPro,
@@ -115,20 +115,20 @@ default_vision = Model(
gpt_4 = Model(
name = 'gpt-4',
base_provider = 'OpenAI',
-best_provider = IterListProvider([DDG, Jmuz, ChatGptEs, ChatGptt, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical])
+best_provider = IterListProvider([Blackbox, DDG, Jmuz, ChatGptEs, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical])
)
# gpt-4o
gpt_4o = VisionModel(
name = 'gpt-4o',
base_provider = 'OpenAI',
-best_provider = IterListProvider([ChatGptt, Jmuz, ChatGptEs, PollinationsAI, Copilot, Liaobots, OpenaiChat])
+best_provider = IterListProvider([Blackbox, Jmuz, ChatGptEs, PollinationsAI, Copilot, Liaobots, OpenaiChat])
)
gpt_4o_mini = Model(
name = 'gpt-4o-mini',
base_provider = 'OpenAI',
-best_provider = IterListProvider([DDG, ChatGptEs, ChatGptt, Jmuz, PollinationsAI, OIVSCode, CablyAI, Liaobots, OpenaiChat])
+best_provider = IterListProvider([DDG, ChatGptEs, Jmuz, PollinationsAI, OIVSCode, CablyAI, Liaobots, OpenaiChat])
)
# o1
@@ -151,6 +151,12 @@ o1_mini = Model(
)
# o3
+o3_mini = Model(
+name = 'o3-mini',
+base_provider = 'OpenAI',
+best_provider = IterListProvider([Blackbox, CablyAI])
+)
o3_mini_low = Model(
name = 'o3-mini-low',
base_provider = 'OpenAI',
@@ -247,6 +253,11 @@ mixtral_8x7b = Model(
base_provider = "Mistral",
best_provider = IterListProvider([DDG, Jmuz])
)
+mixtral_8x22b = Model(
+name = "mixtral-8x22b",
+base_provider = "Mistral",
+best_provider = DeepInfraChat
+)
mistral_nemo = Model(
name = "mistral-nemo",
@@ -267,6 +278,12 @@ hermes_2_dpo = Model(
best_provider = IterListProvider([Blackbox, BlackboxAPI])
)
+hermes_3 = Model(
+name = "hermes-3",
+base_provider = "NousResearch",
+best_provider = CablyAI
+)
### Microsoft ###
# phi
@@ -283,6 +300,12 @@ phi_4 = Model(
)
# wizardlm
+wizardlm_2_7b = Model(
+name = 'wizardlm-2-7b',
+base_provider = 'Microsoft',
+best_provider = DeepInfraChat
+)
wizardlm_2_8x22b = Model(
name = 'wizardlm-2-8x22b',
base_provider = 'Microsoft',
@@ -321,7 +344,7 @@ gemini_1_5_pro = Model(
gemini_2_0_flash = Model(
name = 'gemini-2.0-flash',
base_provider = 'Google DeepMind',
-best_provider = IterListProvider([GeminiPro, Liaobots])
+best_provider = IterListProvider([PollinationsAI, GeminiPro, Liaobots])
)
gemini_2_0_flash_thinking = Model(
@@ -355,7 +378,7 @@ claude_3_opus = Model(
claude_3_5_sonnet = Model(
name = 'claude-3.5-sonnet',
base_provider = 'Anthropic',
-best_provider = IterListProvider([Jmuz, Liaobots])
+best_provider = IterListProvider([Blackbox, Jmuz, Liaobots])
)
### Reka AI ###
@@ -406,7 +429,7 @@ qwen_1_5_7b = Model(
qwen_2_72b = Model(
name = 'qwen-2-72b',
base_provider = 'Qwen',
-best_provider = HuggingSpace
+best_provider = IterListProvider([DeepInfraChat, HuggingSpace])
)
qwen_2_vl_7b = VisionModel(
name = "qwen-2-vl-7b",
@@ -416,7 +439,7 @@ qwen_2_vl_7b = VisionModel(
qwen_2_5_72b = Model(
name = 'qwen-2.5-72b',
base_provider = 'Qwen',
-best_provider = IterListProvider([DeepInfraChat, PollinationsAI, Jmuz])
+best_provider = IterListProvider([PollinationsAI, Jmuz])
)
qwen_2_5_coder_32b = Model(
name = 'qwen-2.5-coder-32b',
@@ -490,6 +513,12 @@ sonar_pro = Model(
sonar_reasoning = Model(
name = 'sonar-reasoning',
base_provider = 'Perplexity AI',
best_provider = IterListProvider([PerplexityLabs, CablyAI])
)
+sonar_reasoning_pro = Model(
+name = 'sonar-reasoning-pro',
+base_provider = 'Perplexity AI',
+best_provider = PerplexityLabs
+)
@@ -504,7 +533,7 @@ nemotron_70b = Model(
dbrx_instruct = Model(
name = 'dbrx-instruct',
base_provider = 'Databricks',
-best_provider = IterListProvider([Blackbox, BlackboxAPI])
+best_provider = IterListProvider([Blackbox, BlackboxAPI, DeepInfraChat])
)
### THUDM ###
@@ -514,13 +543,54 @@ glm_4 = Model(
best_provider = ChatGLM
)
-### MiniMax
+### MiniMax ###
mini_max = Model(
name = "MiniMax",
base_provider = "MiniMax",
best_provider = HailuoAI
)
+### 01-ai ###
+yi_34b = Model(
+name = "yi-34b",
+base_provider = "01-ai",
+best_provider = DeepInfraChat
+)
+### Cognitive Computations ###
+dolphin_2_6 = Model(
+name = "dolphin-2.6",
+base_provider = "Cognitive Computations",
+best_provider = DeepInfraChat
+)
+dolphin_2_9 = Model(
+name = "dolphin-2.9",
+base_provider = "Cognitive Computations",
+best_provider = DeepInfraChat
+)
+### DeepInfra ###
+airoboros_70b = Model(
+name = "airoboros-70b",
+base_provider = "DeepInfra",
+best_provider = DeepInfraChat
+)
+### Lizpreciatior ###
+lzlv_70b = Model(
+name = "lzlv-70b",
+base_provider = "Lizpreciatior",
+best_provider = DeepInfraChat
+)
+### OpenBMB ###
+minicpm_2_5 = Model(
+name = "minicpm-2.5",
+base_provider = "OpenBMB",
+best_provider = DeepInfraChat
+)
### Uncensored AI ###
evil = Model(
name = 'evil',
@@ -556,19 +626,19 @@ flux = ImageModel(
flux_pro = ImageModel(
name = 'flux-pro',
base_provider = 'Black Forest Labs',
-best_provider = PollinationsImage
+best_provider = PollinationsAI
)
flux_dev = ImageModel(
name = 'flux-dev',
base_provider = 'Black Forest Labs',
-best_provider = IterListProvider([HuggingSpace, HuggingChat, HuggingFace])
+best_provider = IterListProvider([PollinationsImage, HuggingSpace, HuggingChat, HuggingFace])
)
flux_schnell = ImageModel(
name = 'flux-schnell',
base_provider = 'Black Forest Labs',
-best_provider = IterListProvider([HuggingSpace, HuggingChat, HuggingFace])
+best_provider = IterListProvider([PollinationsImage, HuggingSpace, HuggingChat, HuggingFace])
)
@@ -610,6 +680,10 @@ class ModelUtils:
o1.name: o1,
o1_preview.name: o1_preview,
o1_mini.name: o1_mini,
+# o3
+o3_mini.name: o3_mini,
+o3_mini_low.name: o3_mini_low,
### Meta ###
meta.name: meta,
@@ -637,11 +711,13 @@ class ModelUtils:
### Mistral ###
mixtral_8x7b.name: mixtral_8x7b,
+mixtral_8x22b.name: mixtral_8x22b,
mistral_nemo.name: mistral_nemo,
mixtral_small_28b.name: mixtral_small_28b,
### NousResearch ###
hermes_2_dpo.name: hermes_2_dpo,
+hermes_3.name: hermes_3,
### Microsoft ###
# phi
@@ -649,6 +725,7 @@ class ModelUtils:
phi_4.name: phi_4,
# wizardlm
+wizardlm_2_7b.name: wizardlm_2_7b,
wizardlm_2_8x22b.name: wizardlm_2_8x22b,
### Google ###
@@ -706,6 +783,7 @@ class ModelUtils:
sonar.name: sonar,
sonar_pro.name: sonar_pro,
sonar_reasoning.name: sonar_reasoning,
+sonar_reasoning_pro.name: sonar_reasoning_pro,
### DeepSeek ###
deepseek_chat.name: deepseek_chat,
@@ -715,7 +793,17 @@ class ModelUtils:
nemotron_70b.name: nemotron_70b, ### Nvidia ###
dbrx_instruct.name: dbrx_instruct, ### Databricks ###
glm_4.name: glm_4, ### THUDM ###
-mini_max.name: mini_max, ## MiniMax
+mini_max.name: mini_max, ## MiniMax ###
+yi_34b.name: yi_34b, ## 01-ai ###
+### Cognitive Computations ###
+dolphin_2_6.name: dolphin_2_6,
+dolphin_2_9.name: dolphin_2_9,
+airoboros_70b.name: airoboros_70b, ### DeepInfra ###
+lzlv_70b.name: lzlv_70b, ### Lizpreciatior ###
+minicpm_2_5.name: minicpm_2_5, ### OpenBMB ###
evil.name: evil, ### Uncensored AI ###
#############
@@ -742,11 +830,10 @@ class ModelUtils:
demo_models = {
gpt_4o.name: [gpt_4o, [PollinationsAI, Blackbox]],
gpt_4o_mini.name: [gpt_4o_mini, [PollinationsAI, CablyAI, DDG]],
-deepseek_r1.name: [deepseek_r1, [PollinationsAI, HuggingFace]],
"default": [llama_3_2_11b, [HuggingFace]],
qwen_2_vl_7b.name: [qwen_2_vl_7b, [HuggingFaceAPI]],
qvq_72b.name: [qvq_72b, [HuggingSpace]],
+deepseek_r1.name: [deepseek_r1, [HuggingFace]],
command_r.name: [command_r, [HuggingSpace]],
command_r_plus.name: [command_r_plus, [HuggingSpace]],
command_r7b.name: [command_r7b, [HuggingSpace]],
@@ -756,7 +843,7 @@ demo_models = {
llama_3_3_70b.name: [llama_3_3_70b, [HuggingFace]],
sd_3_5.name: [sd_3_5, [HuggingSpace, HuggingFace]],
flux_dev.name: [flux_dev, [PollinationsImage, HuggingSpace, HuggingFace]],
-flux_schnell.name: [flux_schnell, [PollinationsImage, HuggingFace, HuggingSpace]],
+flux_schnell.name: [flux_schnell, [HuggingFace, HuggingSpace, PollinationsImage]],
}
# Create a list of all models and their providers