Updating provider documentation and small fixes in providers (#2469)

* refactor(g4f/Provider/Airforce.py): improve model handling and filtering

- Add hidden_models set to exclude specific models
- Add evil alias for uncensored model handling
- Extend filtering for model-specific response tokens
- Add response buffering for streamed content
- Update model fetching with error handling
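
The filtering and buffering steps above can be sketched as follows. `HIDDEN_MODELS`, `RESPONSE_TOKENS`, and the function names are illustrative placeholders, not the actual Airforce implementation:

```python
HIDDEN_MODELS = {"llama-3.1-405b"}          # models fetched from the API but not exposed
RESPONSE_TOKENS = ("<|im_end|>", "</s>")    # hypothetical model-specific stop markers

def filter_models(fetched):
    """Drop models that should stay hidden from the public model list."""
    return [m for m in fetched if m not in HIDDEN_MODELS]

def buffer_stream(chunks, tokens=RESPONSE_TOKENS):
    """Yield streamed text while stripping stop tokens, holding back enough
    characters that a token split across two chunks is still caught."""
    hold = max(len(t) for t in tokens) - 1
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        for t in tokens:
            buffer = buffer.replace(t, "")
        if len(buffer) > hold:
            yield buffer[:-hold]
            buffer = buffer[-hold:]
    if buffer:
        yield buffer
```

Holding back `len(longest token) - 1` characters is what makes the buffering necessary at all: without it, a stop marker arriving split across two chunks would slip through.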

* refactor(g4f/Provider/Blackbox.py): improve caching and model handling

- Add caching system for validated values with file-based storage
- Rename 'flux' model to 'ImageGeneration' and update references
- Add temperature, top_p and max_tokens parameters to generator
- Simplify HTTP headers and remove redundant options
- Add model alias mapping for ImageGeneration
- Add file system utilities for cache management
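
A minimal sketch of such a file-based cache for validated values, assuming hypothetical path and key names (the real Blackbox cache layout may differ):

```python
import json
import tempfile
from pathlib import Path

# Illustrative cache location; the provider keeps its cache under g4f/Provider/.cache
CACHE_FILE = Path(tempfile.gettempdir()) / "blackbox_cache.json"

def cache_read(key, default=None):
    """Return a cached value, or the default if the cache is missing or corrupt."""
    try:
        return json.loads(CACHE_FILE.read_text()).get(key, default)
    except (FileNotFoundError, json.JSONDecodeError):
        return default

def cache_write(key, value):
    """Merge one key into the cache file, creating it if necessary."""
    data = {}
    try:
        data = json.loads(CACHE_FILE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        pass
    data[key] = value
    CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
    CACHE_FILE.write_text(json.dumps(data))
```

Treating a corrupt cache file the same as a missing one keeps the cache self-healing: the next successful validation simply rewrites it.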

* feat(g4f/Provider/RobocodersAPI.py): add caching and error handling

- Add file-based caching system for access tokens and sessions
- Add robust error handling with specific error messages
- Add automatic dialog continuation on resource limits
- Add HTML parsing with BeautifulSoup for token extraction
- Add debug logging for error tracking
- Add timeout configuration for API requests

* refactor(g4f/Provider/DarkAI.py): update DarkAI default model and aliases

- Change default model from llama-3-405b to llama-3-70b
- Remove llama-3-405b from supported models list
- Remove llama-3.1-405b from model aliases

* feat(g4f/Provider/Blackbox2.py): add image generation support

- Add image model 'flux' with dedicated API endpoint
- Refactor generator to support both text and image outputs
- Extract headers into reusable static method
- Add type hints for AsyncGenerator return type
- Split generation logic into _generate_text and _generate_image methods
- Add ImageResponse handling for image generation results

BREAKING CHANGE: create_async_generator now returns AsyncGenerator instead of AsyncResult
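
The split described above can be sketched like this; `ImageResponse` here is a stand-in for g4f's actual class, and the generator bodies are placeholders for the real API calls:

```python
import asyncio
from typing import AsyncGenerator, Union

class ImageResponse:
    """Stand-in for g4f's ImageResponse result type."""
    def __init__(self, url: str, alt: str):
        self.url, self.alt = url, alt

IMAGE_MODELS = {"flux"}

async def _generate_text(prompt: str) -> AsyncGenerator[str, None]:
    # Placeholder for the text endpoint; pretend each word is a streamed chunk
    for word in prompt.split():
        yield word + " "

async def _generate_image(prompt: str) -> AsyncGenerator[ImageResponse, None]:
    # Placeholder for the dedicated image endpoint
    yield ImageResponse("https://example.invalid/image.png", prompt)

async def create_async_generator(model: str, prompt: str) -> AsyncGenerator[Union[str, ImageResponse], None]:
    """Dispatch to the image or text path based on the requested model."""
    gen = _generate_image(prompt) if model in IMAGE_MODELS else _generate_text(prompt)
    async for item in gen:
        yield item
```

Returning `AsyncGenerator` rather than `AsyncResult` is what allows a single entry point to stream either plain text chunks or `ImageResponse` objects.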

* refactor(g4f/Provider/ChatGptEs.py): update ChatGptEs model configuration

- Update models list to include gpt-3.5-turbo
- Remove chatgpt-4o-latest from supported models
- Remove model_aliases mapping for gpt-4o

* feat(g4f/Provider/DeepInfraChat.py): add Accept-Language header support

- Add Accept-Language header for internationalization
- Maintain existing header configuration
- Improve request compatibility with language preferences
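
A minimal sketch of the header change; the surrounding header values are illustrative, only the Accept-Language addition is taken from this commit:

```python
headers = {
    "Accept": "text/event-stream",
    "Content-Type": "application/json",
    # New: advertise preferred response languages (quality-weighted per RFC 9110)
    "Accept-Language": "en-US,en;q=0.9",
}
```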

* refactor(g4f/Provider/needs_auth/Gemini.py): add ProviderModelMixin inheritance

- Add ProviderModelMixin to class inheritance
- Import ProviderModelMixin from base_provider
- Move BaseConversation import to base_provider imports

* refactor(g4f/Provider/Liaobots.py): update model details and aliases

- Add version suffix to o1 model IDs
- Update model aliases for o1-preview and o1-mini
- Standardize version format across model definitions
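
The aliasing pattern can be sketched as below; the suffixed IDs are placeholders, not necessarily the values Liaobots actually uses:

```python
# Hypothetical alias table: user-facing names map to version-suffixed provider IDs
MODEL_ALIASES = {
    "o1-preview": "o1-preview-2024-09-12",
    "o1-mini": "o1-mini-2024-09-12",
}

def resolve_model(name: str) -> str:
    """Return the versioned provider ID for an alias, or the name unchanged."""
    return MODEL_ALIASES.get(name, name)
```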

* refactor(g4f/Provider/PollinationsAI.py): enhance model support and generation

- Split generation logic into dedicated image/text methods
- Add additional text models including sur and claude
- Add width/height parameters for image generation
- Add model existence validation
- Add hasattr checks for model lists initialization
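
The `hasattr` guard and model validation can be sketched as follows; the class, attribute, and method names are illustrative, not PollinationsAI's actual API:

```python
class PollinationsAISketch:
    default_model = "gpt-4o"

    @classmethod
    def get_models(cls):
        # hasattr check: populate the class-level model lists only on first use,
        # so repeated calls don't re-fetch or re-initialize them
        if not hasattr(cls, "text_models"):
            cls.text_models = cls._fetch_text_models()
        if not hasattr(cls, "image_models"):
            cls.image_models = ["flux", "midjourney", "dall-e-3"]
        return cls.text_models + cls.image_models

    @classmethod
    def _fetch_text_models(cls):
        # Placeholder for the remote model-list request
        return ["gpt-4o", "mistral-large", "llama-3.1-70b"]

    @classmethod
    def validate_model(cls, model):
        """Model existence validation: fall back to the default on unknown names."""
        return model if model in cls.get_models() else cls.default_model
```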

* chore(gitignore): add provider cache directory

- Add g4f/Provider/.cache to gitignore patterns

* refactor(g4f/Provider/ReplicateHome.py): update model configuration

- Update default model to gemma-2b-it
- Add default_image_model configuration
- Remove llava-13b from supported models
- Simplify request headers

* feat(g4f/models.py): expand provider and model support

- Add new providers DarkAI and PollinationsAI
- Add new models for Mistral, Flux and image generation
- Update provider lists for existing models
- Add P1 and Evil models with experimental providers

BREAKING CHANGE: Remove llava-13b model support

* refactor(Airforce): Update type hint for split_message return

- Change return type of 'split_message' from 'list[str]' to 'List[str]' for consistency with import.
- Maintain overall functionality and structure of the 'Airforce' class.
- Ensure compatibility with type hinting standards in Python.

* refactor(g4f/Provider/Airforce.py): Update type hint for split_message return

- Change return type of 'split_message' from 'list[str]' to 'List[str]' for consistency with import.
- Maintain overall functionality and structure of the 'Airforce' class.
- Ensure compatibility with type hinting standards in Python.
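
The annotation in question looks like this; the function body is a hypothetical sketch, only the `split_message` name and the `List[str]` return type come from the commit:

```python
from typing import List

def split_message(message: str, max_length: int = 1000) -> List[str]:
    """Split a long message into chunks no longer than max_length."""
    return [message[i:i + max_length] for i in range(0, len(message), max_length)]
```

Using `typing.List` keeps the annotation consistent with the module's existing `typing` imports.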

* feat(g4f/Provider/RobocodersAPI.py): Add support for optional BeautifulSoup dependency

- Introduce a check for the BeautifulSoup library and handle its absence gracefully.
- Raise a clear error if BeautifulSoup is not installed, prompting the user to install it.
- Remove direct import of BeautifulSoup to avoid import errors when the library is missing.
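
The optional-dependency pattern described above, sketched generically; the error class is a stand-in for g4f's own requirements error:

```python
# Probe for the optional dependency once, without a hard import at module level
try:
    from bs4 import BeautifulSoup
    HAS_BS4 = True
except ImportError:
    HAS_BS4 = False

class MissingRequirementsError(Exception):
    """Stand-in for the error g4f raises when an optional package is absent."""

def parse_token(html: str) -> str:
    """Extract text from HTML, failing with install instructions if bs4 is missing."""
    if not HAS_BS4:
        raise MissingRequirementsError(
            'Install "beautifulsoup4" package: pip install beautifulsoup4'
        )
    return BeautifulSoup(html, "html.parser").get_text(strip=True)
```

Deferring the failure from import time to call time means users who never touch this provider are unaffected by the missing package.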

* fix: Updating provider documentation and small fixes in providers

* Disabled the provider (RobocodersAPI)

* Fix: Conflicting file g4f/models.py

* Update g4f/models.py g4f/Provider/Airforce.py

* Update docs/providers-and-models.md g4f/models.py g4f/Provider/Airforce.py g4f/Provider/PollinationsAI.py

* Update docs/providers-and-models.md

* Update .gitignore

* Update g4f/models.py

* Update g4f/Provider/PollinationsAI.py

---------

Co-authored-by: kqlio67 <>
Committed by kqlio67 on 2024-12-09 15:52:25 +00:00 via GitHub
commit bb9132bcb4 (parent 76c3683403)
33 changed files with 311 additions and 392 deletions

.gitignore

@@ -66,4 +66,3 @@ bench.py
to-reverse.txt
g4f/Provider/OpenaiChat2.py
generated_images/
g4f/Provider/.cache

docs/providers-and-models.md

@@ -1,203 +1,171 @@
# G4F - Providers and Models
This document provides an overview of various AI providers and models, including text generation, image generation, and vision capabilities. It aims to help users navigate the diverse landscape of AI services and choose the most suitable option for their needs.
## Table of Contents
- [Providers](#providers)
- [Free](#providers-free)
- [Needs Auth](#providers-needs-auth)
- [Models](#models)
- [Text Models](#text-models)
- [Image Models](#image-models)
- [Vision Models](#vision-models)
- [Providers and vision models](#providers-and-vision-models)
- [Conclusion and Usage Tips](#conclusion-and-usage-tips)
---
## Providers
### Providers Free
| Website | Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[api.airforce](https://api.airforce)|`g4f.Provider.Airforce`|`phi-2, gpt-4, gpt-4o-mini, gpt-4o, gpt-4-turbo, o1-mini, openchat-3.5, deepseek-coder, hermes-2-dpo, hermes-2-pro, openhermes-2.5, lfm-40b, german-7b, llama-2-7b, llama-3.1-70b, neural-7b, zephyr-7b, evil`|`sdxl, flux-pro, flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌+✔|
|[amigochat.io](https://amigochat.io/chat/)|`g4f.Provider.AmigoChat`|✔|✔|❌|✔|![Error](https://img.shields.io/badge/RateLimit-f48d37)|❌|
|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.Blackbox`|`blackboxai, gpt-4, gpt-4o, gemini-pro, claude-3.5-sonnet, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|`flux`|`blackboxai, gpt-4o, gemini-pro, gemini-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.Blackbox2`|`llama-3.1-70b`|`flux`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chatgpt.com](https://chatgpt.com)|`g4f.Provider.ChatGpt`|✔|❌|❌|✔|![Error](https://img.shields.io/badge/HTTPError-f48d37)|❌|
|[chatgpt.es](https://chatgpt.es)|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[copilot.microsoft.com](https://copilot.microsoft.com)|`g4f.Provider.Copilot`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[darkai.foundation](https://darkai.foundation)|`g4f.Provider.DarkAI`|`gpt-3.5-turbo, gpt-4o, llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, claude-3-haiku, llama-3.1-70b, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[deepinfra.com/chat](https://deepinfra.com/chat)|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.1-70b, qwq-32b, wizardlm-2-8x22b, qwen-2-72b, qwen-2.5-coder-32b, nemotron-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|`g4f.Provider.Flux`|❌|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|`g4f.Provider.Free2GPT`|`mistral-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|`g4f.Provider.FreeGpt`|`gemini-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|`g4f.Provider.GizAI`|`gemini-flash`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[liaobots.work](https://liaobots.work)|`g4f.Provider.Liaobots`|`grok-beta, gpt-4o-mini, gpt-4o, gpt-4, o1-preview, o1-mini, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-flash, gemini-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[mhystical.cc](https://mhystical.cc)|`g4f.Provider.Mhystical`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[labs.perplexity.ai](https://labs.perplexity.ai)|`g4f.Provider.PerplexityLabs`|`sonar-online, sonar-chat, llama-3.3-70b, llama-3.1-8b, llama-3.1-70b, lfm-40b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[pi.ai/talk](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[pizzagpt.it](https://www.pizzagpt.it)|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[pollinations.ai](https://pollinations.ai)|`g4f.Provider.PollinationsAI`|`gpt-4o, mistral-large, mistral-nemo, llama-3.1-70b, gpt-4, qwen-2.5-coder-32b, claude-3.5-sonnet, command-r, evil, p1, turbo, unity, midijourney, rtist`|`flux, flux-realism, flux-cablyai, flux-anime, flux-3d, any-dark, flux-pro, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[app.prodia.com](https://app.prodia.com)|`g4f.Provider.Prodia`|❌|✔|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[replicate.com](https://replicate.com)|`g4f.Provider.ReplicateHome`|`gemma-2b`|`sd-3, sdxl, playground-v2.5`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[rubiks.ai](https://rubiks.ai)|`g4f.Provider.RubiksAI`|`gpt-4o-mini, llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[teach-anything.com](https://www.teach-anything.com)|`g4f.Provider.TeachAnything`|`llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[you.com](https://you.com)|`g4f.Provider.You`|✔|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
---
### Providers Needs Auth
| Website | Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[ai4chat.co](https://www.ai4chat.co)|`g4f.Provider.Ai4Chat`|`gpt-4`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[aichatfree.info](https://aichatfree.info)|`g4f.Provider.AIChatFree`|`gemini-pro`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[api.airforce](https://api.airforce)|`g4f.Provider.Airforce`|`gpt-4o, gpt-4o-mini, gpt-4-turbo, llama-2-7b, llama-3.1-8b, llama-3.1-70b, hermes-2-pro, hermes-2-dpo, phi-2, deepseek-coder, openchat-3.5, openhermes-2.5, lfm-40b, german-7b, zephyr-7b, neural-7b`|`flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, sdxl`|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[aiuncensored.info](https://www.aiuncensored.info)|`g4f.Provider.AIUncensored`|✔|✔|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[allyfy.chat](https://allyfy.chat/)|`g4f.Provider.Allyfy`|`gpt-3.5-turbo`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[bing.com](https://bing.com/chat)|`g4f.Provider.Bing`|`gpt-4`|✔|`gpt-4-vision`|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌+✔|
|[bing.com/images](https://www.bing.com/images/create)|`g4f.Provider.BingCreateImages`|❌|✔|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.Blackbox`|`blackboxai, blackboxai-pro, gemini-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, gpt-4o, gemini-pro, claude-3.5-sonnet`|`flux`|`blackboxai, gemini-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, gpt-4o, gemini-pro`|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chatgot.one](https://www.chatgot.one/)|`g4f.Provider.ChatGot`|`gemini-pro`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chatgpt.com](https://chatgpt.com)|`g4f.Provider.ChatGpt`|`?`|`?`|`?`|?|![Unknown](https://img.shields.io/badge/Unknown-grey)|❌|
|[chatgpt.es](https://chatgpt.es)|`g4f.Provider.ChatGptEs`|`gpt-4o, gpt-4o-mini`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`gemma-7b, llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, phi-2, qwen-1.5-0-5b, qwen-1.5-8b, qwen-1.5-14b, qwen-1.5-7b`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[darkai.foundation/chat](https://darkai.foundation/chat)|`g4f.Provider.DarkAI`|`gpt-4o, gpt-3.5-turbo, llama-3-70b, llama-3-405b`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[duckduckgo.com](https://duckduckgo.com/duckchat/v1/chat)|`g4f.Provider.DDG`|`gpt-4o-mini, claude-3-haiku, llama-3.1-70b, mixtral-8x7b`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[deepinfra.com](https://deepinfra.com)|`g4f.Provider.DeepInfra`|✔|❌|❌|✔|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[deepinfra.com/chat](https://deepinfra.com/chat)|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.1-70b, wizardlm-2-8x22b, qwen-2-72b`|❌|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[deepinfra.com](https://deepinfra.com)|`g4f.Provider.DeepInfraImage`|❌|✔|❌|❌|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|`g4f.Provider.Free2GPT`|`mistral-7b`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|`g4f.Provider.FreeGpt`|`gemini-pro`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[gemini.google.com](https://gemini.google.com)|`g4f.Provider.Gemini`|✔|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
|[ai.google.dev](https://ai.google.dev)|`g4f.Provider.GeminiPro`|✔|❌|✔|?|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
|[app.giz.ai](https://app.giz.ai/assistant/)|`g4f.Provider.GizAI`|`gemini-flash`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[developers.sber.ru](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
|[console.groq.com/playground](https://console.groq.com/playground)|`g4f.Provider.Groq`|✔|❌|❌|?|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
|[huggingface.co/chat](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`llama-3.1-70b, command-r-plus, qwen-2-72b, llama-3.2-11b, hermes-3, mistral-nemo, phi-3.5-mini`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[huggingface.co](https://huggingface.co/chat)|`g4f.Provider.HuggingFace`|✔|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[liaobots.work](https://liaobots.work)|`g4f.Provider.Liaobots`|`gpt-3.5-turbo, gpt-4o-mini, gpt-4o, gpt-4-turbo, grok-2, grok-2-mini, claude-3-opus, claude-3-sonnet, claude-3-5-sonnet, claude-3-haiku, claude-2.1, gemini-flash, gemini-pro`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[magickpen.com](https://magickpen.com)|`g4f.Provider.MagickPen`|`gpt-4o-mini`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[meta.ai](https://www.meta.ai)|`g4f.Provider.MetaAI`|✔|✔|?|?|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
|[platform.openai.com](https://platform.openai.com/)|`g4f.Provider.Openai`|✔|❌|✔|?|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[chatgpt.com](https://chatgpt.com/)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4`|❌|✔|?|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[perplexity.ai](https://www.perplexity.ai)|`g4f.Provider.PerplexityAi`|✔|❌|❌|?|![Disabled](https://img.shields.io/badge/Disabled-red)|❌|
|[perplexity.ai](https://www.perplexity.ai)|`g4f.Provider.PerplexityApi`|✔|❌|❌|?|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[labs.perplexity.ai](https://labs.perplexity.ai)|`g4f.Provider.PerplexityLabs`|`sonar-online, sonar-chat, llama-3.1-8b, llama-3.1-70b`|❌|❌|?|![Cloudflare](https://img.shields.io/badge/Cloudflare-f48d37)|❌|
|[pi.ai/talk](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|?|![Unknown](https://img.shields.io/badge/Unknown-grey)|❌|
|[pizzagpt.it](https://www.pizzagpt.it)|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[poe.com](https://poe.com)|`g4f.Provider.Poe`|✔|❌|❌|?|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[app.prodia.com](https://app.prodia.com)|`g4f.Provider.Prodia`|❌|✔|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[raycast.com](https://raycast.com)|`g4f.Provider.Raycast`|✔|❌|❌|✔|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[chat.reka.ai](https://chat.reka.ai/)|`g4f.Provider.Reka`|✔|❌|✔|✔|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[replicate.com](https://replicate.com)|`g4f.Provider.Replicate`|✔|❌|❌|?|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[replicate.com](https://replicate.com)|`g4f.Provider.ReplicateHome`|`gemma-2b, llava-13b`|`sd-3, sdxl, playground-v2.5`|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[rubiks.ai](https://rubiks.ai)|`g4f.Provider.RubiksAI`|`llama-3.1-70b, gpt-4o-mini`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[talkai.info](https://talkai.info)|`g4f.Provider.TalkAi`|✔|❌|❌|✔|![Disabled](https://img.shields.io/badge/Disabled-red)|❌|
|[teach-anything.com](https://www.teach-anything.com)|`g4f.Provider.TeachAnything`|`llama-3.1-70b`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[beta.theb.ai](https://beta.theb.ai)|`g4f.Provider.Theb`|✔|❌|❌|✔|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[beta.theb.ai](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔|❌|❌|✔|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[console.upstage.ai/playground/chat](https://console.upstage.ai/playground/chat)|`g4f.Provider.Upstage`|`solar-pro, solar-mini`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|?|![Unknown](https://img.shields.io/badge/Unknown-grey)|✔|
|[you.com](https://you.com)|`g4f.Provider.You`|✔|✔|✔|✔|![Unknown](https://img.shields.io/badge/Unknown-grey)|❌+✔|
|[bing.com/images/create](https://www.bing.com/images/create)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[inference.cerebras.ai](https://inference.cerebras.ai/)|`g4f.Provider.Cerebras`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[deepinfra.com](https://deepinfra.com)|`g4f.Provider.DeepInfra`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[deepinfra.com](https://deepinfra.com)|`g4f.Provider.DeepInfraImage`|❌|✔|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[gemini.google.com](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini`|`gemini`|`gemini`|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[ai.google.dev](https://ai.google.dev)|`g4f.Provider.GeminiPro`|`gemini-pro`|❌|`gemini-pro`|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[github.com/copilot](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[console.groq.com/playground](https://console.groq.com/playground)|`g4f.Provider.Groq`|✔|❌|✔|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[huggingface.co/chat](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, qwq-32b, nemotron-70b, nemotron-70b, llama-3.2-11b, hermes-3, mistral-nemo, phi-3.5-mini`|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[huggingface.co/chat](https://huggingface.co/chat)|`g4f.Provider.HuggingFace`|✔|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[api-inference.huggingface.co](https://api-inference.huggingface.co)|`g4f.Provider.HuggingFaceAPI`|✔|❌|✔|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[meta.ai](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[designer.microsoft.com](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[platform.openai.com](https://platform.openai.com)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[chatgpt.com](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4, ...`|❌|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[perplexity.ai](https://www.perplexity.ai)|`g4f.Provider.PerplexityApi`|`gpt-4o, gpt-4o-mini, gpt-4, ...`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[poe.com](https://poe.com)|`g4f.Provider.Poe`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[raycast.com](https://raycast.com)|`g4f.Provider.Raycast`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[chat.reka.ai](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|❌|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[replicate.com](https://replicate.com)|`g4f.Provider.Replicate`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[beta.theb.ai](https://beta.theb.ai)|`g4f.Provider.Theb`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[beta.theb.ai](https://beta.theb.ai)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
---
## Models
### Text Models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-3.5-turbo|OpenAI|4+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-3-5-turbo)|
|gpt-4|OpenAI|6+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4-turbo|OpenAI|4+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|10+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|0+ Providers|[platform.openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-mini|OpenAI|0+ Providers|[platform.openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|llama-2-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
|llama-2-13b|Meta Llama|1+ Providers|[llama.com](https://www.llama.com/llama2/)|
|llama-3-8b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
|llama-3-70b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
|llama-3.1-8b|Meta Llama|7+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-70b|Meta Llama|14+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-405b|Meta Llama|5+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.2-1b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-1B)|
|llama-3.2-3b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/blog/llama32)|
|llama-3.2-11b|Meta Llama|3+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llama-3.2-90b|Meta Llama|2+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llamaguard-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/LlamaGuard-7b)|
|llamaguard-2-8b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B)|
|mistral-7b|Mistral AI|4+ Providers|[mistral.ai](https://mistral.ai/news/announcing-mistral-7b/)|
|mixtral-8x7b|Mistral AI|6+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mixtral-8x22b|Mistral AI|3+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-8x22b/)|
|mistral-nemo|Mistral AI|2+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|mistral-large|Mistral AI|2+ Providers|[mistral.ai](https://mistral.ai/news/mistral-large-2407/)|
|mixtral-8x7b-dpo|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|gpt_35_turbo|OpenAI|2+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-3-5-turbo)|
|gpt-4|OpenAI|8+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4-turbo|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|8+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|8+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1-preview|OpenAI|1+ Providers|[platform.openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-mini|OpenAI|2+ Providers|[platform.openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|gigachat|Sber|1+ Providers|[developers.sber.ru](https://developers.sber.ru/gigachat)|
|llama-2-7b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
|llama-3-8b|Meta Llama|1+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
|llama-3.1-8b|Meta Llama|5+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-70b|Meta Llama|12+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-405b|Meta Llama|1+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.2-11b|Meta Llama|2+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llama-3.3-70b|Meta Llama|3+ Providers|[llama.com](https://www.llama.com/)|
|mixtral-8x7b|Mistral AI|1+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mistral-nemo|Mistral AI|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|mistral-large|Mistral AI|1+ Providers|[mistral.ai](https://mistral.ai/news/mistral-large-2407/)|
|hermes-2-dpo|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|hermes-2|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)|
|yi-34b|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)|
|hermes-3|NousResearch|2+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)|
|hermes-2-pro|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)|
|hermes-3|NousResearch|1+ Providers|[nousresearch.com](https://nousresearch.com/hermes3/)|
|gemini|Google DeepMind|1+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/)|
|gemini-flash|Google DeepMind|4+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-pro|Google DeepMind|10+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemma-2b|Google|5+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2b)|
|gemma-2-9b|Google|1+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2-9b)|
|gemma-2-27b|Google|2+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2-27b)|
|gemma-7b|Google|1+ Providers|[huggingface.co](https://huggingface.co/google/gemma-7b)|
|gemma_2_27b|Google|1+ Providers|[huggingface.co](https://huggingface.co/blog/gemma2)|
|claude-2.1|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-2)|
|claude-3-haiku|Anthropic|4+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
|claude-3-sonnet|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3-opus|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3.5-sonnet|Anthropic|6+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|gemini-pro|Google DeepMind|5+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemma-2b|Google|1+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2b)|
|claude-3-haiku|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
|claude-3-sonnet|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3-opus|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3.5-sonnet|Anthropic|3+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|reka-core|Reka AI|1+ Providers|[reka.ai](https://www.reka.ai/ourmodels)|
|blackboxai|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|blackboxai-pro|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|yi-1.5-9b|01-ai|1+ Providers|[huggingface.co](https://huggingface.co/01-ai/Yi-1.5-9B)|
|phi-2|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/phi-2)|
|phi-3-medium-4k|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct)|
|phi-3.5-mini|Microsoft|2+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|
|dbrx-instruct|Databricks|1+ Providers|[huggingface.co](https://huggingface.co/databricks/dbrx-instruct)|
|command-r-plus|CohereForAI|1+ Providers|[docs.cohere.com](https://docs.cohere.com/docs/command-r-plus)|
|sparkdesk-v1.1|iFlytek|1+ Providers|[xfyun.cn](https://www.xfyun.cn/doc/spark/Guide.html)|
|command-r|CohereForAI|1+ Providers|[docs.cohere.com](https://docs.cohere.com/docs/command-r)|
|qwen|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen)|
|qwen-1.5-0.5b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen1.5-0.5B)|
|qwen-1.5-7b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen1.5-7B)|
|qwen-1.5-14b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen1.5-14B)|
|qwen-1.5-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen1.5-72B)|
|qwen-1.5-110b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen1.5-110B)|
|qwen-1.5-1.8b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen1.5-1.8B)|
|qwen-2-72b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-72B)|
|glm-3-6b|Zhipu AI|1+ Providers|[github.com/THUDM/ChatGLM3](https://github.com/THUDM/ChatGLM3)|
|glm-4-9b|Zhipu AI|1+ Providers|[github.com/THUDM/GLM-4](https://github.com/THUDM/GLM-4)|
|solar-1-mini|Upstage|1+ Providers|[upstage.ai/](https://www.upstage.ai/feed/product/solarmini-performance-report)|
|solar-10-7b|Upstage|1+ Providers|[huggingface.co](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)|
|solar-pro|Upstage|1+ Providers|[huggingface.co](https://huggingface.co/upstage/solar-pro-preview-instruct)|
|qwen-2.5-72b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)|
|qwen-2.5-coder-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
|qwq-32b|Qwen|3+ Providers|[qwen2.org](https://qwen2.org/qwq-32b-preview/)|
|pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
|deepseek-coder|DeepSeek|1+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct)|
|wizardlm-2-7b|WizardLM|1+ Providers|[huggingface.co](https://huggingface.co/dreamgen/WizardLM-2-7B)|
|wizardlm-2-8x22b|WizardLM|2+ Providers|[huggingface.co](https://huggingface.co/alpindale/WizardLM-2-8x22B)|
|sh-n-7b|Together|1+ Providers|[huggingface.co](https://huggingface.co/togethercomputer/StripedHyena-Nous-7B)|
|llava-13b|Yorickvp|1+ Providers|[huggingface.co](https://huggingface.co/liuhaotian/llava-v1.5-13b)|
|lzlv-70b|Lzlv|1+ Providers|[huggingface.co](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)|
|openchat-3.5|OpenChat|1+ Providers|[huggingface.co](https://huggingface.co/openchat/openchat_3.5)|
|openchat-3.6-8b|OpenChat|1+ Providers|[huggingface.co](https://huggingface.co/openchat/openchat-3.6-8b-20240522)|
|phind-codellama-34b-v2|Phind|1+ Providers|[huggingface.co](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2)|
|dolphin-2.9.1-llama-3-70b|Cognitive Computations|1+ Providers|[huggingface.co](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b)|
|grok-2-mini|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-2)|
|grok-2|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-2)|
|grok-beta|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-2)|
|sonar-online|Perplexity AI|2+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|sonar-chat|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
|mythomax-l2-13b|Gryphe|1+ Providers|[huggingface.co](https://huggingface.co/Gryphe/MythoMax-L2-13b)|
|cosmosrp|PawanKrd|1+ Providers|[huggingface.co](https://huggingface.co/PawanKrd/CosmosRP-8k)|
|german-7b|TheBloke|1+ Providers|[huggingface.co](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF)|
|tinyllama-1.1b|TinyLlama|1+ Providers|[huggingface.co](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)|
|cybertron-7b|TheBloke|1+ Providers|[huggingface.co](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)|
|nemotron-70b|Nvidia|3+ Providers|[build.nvidia.com](https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct)|
|openhermes-2.5|Teknium|1+ Providers|[huggingface.co](https://huggingface.co/datasets/teknium/OpenHermes-2.5)|
|lfm-40b|Liquid|2+ Providers|[liquid.ai](https://www.liquid.ai/liquid-foundation-models)|
|zephyr-7b|HuggingFaceH4|1+ Providers|[huggingface.co](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)|
|neural-7b|Inferless|1+ Providers|[huggingface.co](https://huggingface.co/Intel/neural-chat-7b-v3-1)|
|p1|PollinationsAI|1+ Providers|[]( )|
|evil|Evil Mode - Experimental|2+ Providers|[]( )|
|midijourney||1+ Providers|[]( )|
|turbo||1+ Providers|[]( )|
|unity||1+ Providers|[]( )|
|rtist||1+ Providers|[]( )|
### Image Models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|sdxl|Stability AI|2+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl)|
|sdxl-lora|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/blog/lcm_lora)|
|sdxl-turbo|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/sdxl-turbo)|
|sd-1.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/runwayml/stable-diffusion-v1-5)|
|sd-3|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3)|
|playground-v2.5|Playground AI|1+ Providers|[huggingface.co](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic)|
|flux|Black Forest Labs|4+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
|flux-pro|Black Forest Labs|2+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
|flux-realism|Flux AI|2+ Providers|[]()|
|flux-anime|Flux AI|1+ Providers|[]()|
|flux-3d|Flux AI|1+ Providers|[]()|
|flux-disney|Flux AI|1+ Providers|[]()|
|flux-pixel|Flux AI|1+ Providers|[]()|
|flux-4o|Flux AI|1+ Providers|[]()|
|flux-dev|Black Forest Labs|3+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-dev)|
|flux-cablyai|Flux AI|1+ Providers|[]( )|
|flux-schnell|Black Forest Labs|2+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
|dalle|OpenAI|1+ Providers|[openai.com](https://openai.com/index/dall-e/)|
|dalle-2|OpenAI|1+ Providers|[openai.com](https://openai.com/index/dall-e-2/)|
|emi||1+ Providers|[]()|
|any-dark||2+ Providers|[]()|
|midjourney|Midjourney|2+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|
|dall-e-3|OpenAI|5+ Providers|[openai.com](https://openai.com/index/dall-e-3/)|
### Vision Models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|gpt-4-vision|OpenAI|1+ Providers|[openai.com](https://openai.com/research/gpt-4v-system-card)|
|gemini-pro-vision|Google DeepMind|1+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/)|
|blackboxai|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
### Providers and vision models
| Provider | Base Provider | Vision Models | Status | Auth |
|----------|---------------|---------------|--------|------|
| `g4f.Provider.Blackbox` | Blackbox AI | `blackboxai, gemini-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, gpt-4o, gemini-pro` | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
## Conclusion and Usage Tips
This document provides a comprehensive overview of various AI providers and models available for text generation, image generation, and vision tasks. **When choosing a provider or model, consider the following factors:**
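Many entries above list several providers for one model via g4f's `IterListProvider`, which tries each provider in order and falls back when one fails. The idea can be sketched in plain Python (a simplified illustration, not the library's actual implementation; the toy providers and error type are made up):

```python
# Minimal sketch of ordered provider fallback, in the spirit of g4f's
# IterListProvider (illustrative only).
class ProviderError(Exception):
    pass

def complete_with_fallback(providers, prompt):
    """Try each provider in order; return the first successful reply."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as exc:
            errors.append(exc)  # remember why this provider failed
    raise ProviderError(f"all providers failed: {errors}")

# Toy providers standing in for real backends.
def flaky(prompt):
    raise ProviderError("rate limited")

def stable(prompt):
    return f"echo: {prompt}"

print(complete_with_fallback([flaky, stable], "hi"))  # echo: hi
```

In g4f itself the list members are provider classes and the errors are the library's own exception types, so treat this only as a schematic of the fallback order.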
@ -42,22 +42,24 @@ class Airforce(AsyncGeneratorProvider, ProviderModelMixin):
hidden_models = {"Flux-1.1-Pro"}
additional_models_imagine = ["flux-1.1-pro", "dall-e-3"]
additional_models_imagine = ["flux-1.1-pro", "midjourney", "dall-e-3"]
model_aliases = {
# Alias mappings for models
"gpt-4": "gpt-4o",
"openchat-3.5": "openchat-3.5-0106",
"deepseek-coder": "deepseek-coder-6.7b-instruct",
"hermes-2-dpo": "Nous-Hermes-2-Mixtral-8x7B-DPO",
"hermes-2-pro": "hermes-2-pro-mistral-7b",
"openhermes-2.5": "openhermes-2.5-mistral-7b",
"lfm-40b": "lfm-40b-moe",
"discolm-german-7b": "discolm-german-7b-v1",
"german-7b": "discolm-german-7b-v1",
"llama-2-7b": "llama-2-7b-chat-int8",
"llama-3.1-70b": "llama-3.1-70b-turbo",
"neural-7b": "neural-chat-7b-v3-1",
"zephyr-7b": "zephyr-7b-beta",
"evil": "any-uncensored",
"sdxl": "stable-diffusion-xl-lightning",
"sdxl": "stable-diffusion-xl-base",
"flux-pro": "flux-1.1-pro",
"llama-3.1-8b": "llama-3.1-8b-chat"

@ -108,7 +108,6 @@ class AmigoChat(AsyncGeneratorProvider, ProviderModelMixin):
"mythomax-13b": "Gryphe/MythoMax-L2-13b",
"mixtral-7b": "mistralai/Mistral-7B-Instruct-v0.3",
"mistral-tiny": "mistralai/mistral-tiny",
"mistral-nemo": "mistralai/mistral-nemo",
"deepseek-chat": "deepseek-ai/deepseek-llm-67b-chat",
@ -127,7 +126,6 @@ class AmigoChat(AsyncGeneratorProvider, ProviderModelMixin):
### image ###
"flux-realism": "flux-realism",
"flux-dev": "flux/dev",
}

@ -98,12 +98,12 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
models = list(dict.fromkeys([default_model, *userSelectedModel, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))
model_aliases = {
"gpt-4": "blackboxai",
### chat ###
"gpt-4": "gpt-4o",
"gpt-4o-mini": "gpt-4o",
"gpt-3.5-turbo": "blackboxai",
"gemini-flash": "gemini-1.5-flash",
"claude-3.5-sonnet": "claude-sonnet-3.5",
### image ###
"flux": "ImageGeneration",
}
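The commit message describes a "caching system for validated values with file-based storage" for this provider. A minimal version of that idea — persist a validated value as JSON on disk and reuse it on later runs — might look like the following sketch (file layout, key name, and function names are assumptions, not the provider's actual code):

```python
import json
import tempfile
from pathlib import Path

def load_cached_value(cache_file: Path):
    """Return the cached value, or None if the cache is missing or corrupt."""
    try:
        return json.loads(cache_file.read_text())["validated_value"]
    except (FileNotFoundError, KeyError, json.JSONDecodeError):
        return None

def store_cached_value(cache_file: Path, value: str) -> None:
    """Persist the value as JSON, creating parent directories as needed."""
    cache_file.parent.mkdir(parents=True, exist_ok=True)
    cache_file.write_text(json.dumps({"validated_value": value}))

cache = Path(tempfile.mkdtemp()) / "blackbox" / "validated.json"
assert load_cached_value(cache) is None   # cold start: nothing cached yet
store_cached_value(cache, "abc123")
print(load_cached_value(cache))           # abc123
```

The real provider stores its cache under g4f's cookies directory; the temporary directory here just keeps the sketch self-contained.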

View file

@ -19,7 +19,7 @@ class ChatGptEs(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True
default_model = 'gpt-4o'
models = ['gpt-3.5-turbo', 'gpt-4o', 'gpt-4o-mini']
models = ['gpt-4', 'gpt-4o', 'gpt-4o-mini']
@classmethod
def get_model(cls, model: str) -> str:
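`get_model` in these providers typically resolves a requested name through `model_aliases` and falls back to the default model. A schematic, standalone version of that pattern (the lists and aliases here are illustrative; each real provider defines its own, and the exact lookup order may differ):

```python
# Illustrative alias resolution, modeled on the ProviderModelMixin pattern.
default_model = "gpt-4o"
models = ["gpt-4o", "gpt-4o-mini"]
model_aliases = {"gpt-4": "gpt-4o"}  # requested name -> supported name

def get_model(model: str) -> str:
    """Resolve a requested model name to one the provider supports."""
    if model in models:
        return model
    if model in model_aliases:
        return model_aliases[model]
    return default_model  # unknown names fall back to the default

print(get_model("gpt-4"))       # gpt-4o (via alias)
print(get_model("mystery-9k"))  # gpt-4o (default fallback)
```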

@ -30,6 +30,7 @@ class Conversation(BaseConversation):
self.model = model
class DDG(AsyncGeneratorProvider, ProviderModelMixin):
label = "DuckDuckGo AI Chat"
url = "https://duckduckgo.com/aichat"
api_endpoint = "https://duckduckgo.com/duckchat/v1/chat"
working = True

@ -9,6 +9,7 @@ from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
class DeepInfraChat(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://deepinfra.com/chat"
api_endpoint = "https://api.deepinfra.com/v1/openai/chat/completions"
working = True
supports_stream = True
supports_system_message = True

@ -8,13 +8,14 @@ from ..image import ImageResponse, ImagePreview
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
class Flux(AsyncGeneratorProvider, ProviderModelMixin):
label = "Flux Provider"
label = "HuggingSpace (black-forest-labs-flux-1-dev)"
url = "https://black-forest-labs-flux-1-dev.hf.space"
api_endpoint = "/gradio_api/call/infer"
working = True
default_model = 'flux-dev'
models = [default_model]
image_models = [default_model]
model_aliases = {"flux-dev": "flux-1-dev"}
@classmethod
async def create_async_generator(
@ -55,4 +56,4 @@ class Flux(AsyncGeneratorProvider, ProviderModelMixin):
yield ImagePreview(url, prompt)
else:
yield ImageResponse(url, prompt)
break
break

@ -21,9 +21,11 @@ RATE_LIMIT_ERROR_MESSAGE = "当前地区当日额度已消耗完"
class FreeGpt(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://freegptsnav.aifree.site"
working = True
supports_message_history = True
supports_system_message = True
default_model = 'gemini-pro'
@classmethod

@ -10,6 +10,7 @@ from .helper import format_prompt
class GizAI(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://app.giz.ai/assistant"
api_endpoint = "https://app.giz.ai/api/data/users/inferenceServer.infer"
working = True
supports_stream = False
supports_system_message = True
@ -17,7 +18,6 @@ class GizAI(AsyncGeneratorProvider, ProviderModelMixin):
default_model = 'chat-gemini-flash'
models = [default_model]
model_aliases = {"gemini-flash": "chat-gemini-flash",}
@classmethod

@ -143,9 +143,9 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
working = True
supports_message_history = True
supports_system_message = True
default_model = "gpt-4o-2024-08-06"
models = list(models.keys())
model_aliases = {
"gpt-4o-mini": "gpt-4o-mini-free",
"gpt-4o": "gpt-4o-2024-08-06",

@ -29,6 +29,7 @@ class PerplexityLabs(AsyncGeneratorProvider, ProviderModelMixin):
"sonar-online": "sonar-small-128k-online",
"sonar-chat": "llama-3.1-sonar-large-128k-chat",
"sonar-chat": "llama-3.1-sonar-small-128k-chat",
"llama-3.3-70b": "llama-3.3-70b-instruct",
"llama-3.1-8b": "llama-3.1-8b-instruct",
"llama-3.1-70b": "llama-3.1-70b-instruct",
"lfm-40b": "/models/LiquidCloud",
@ -78,9 +79,9 @@ class PerplexityLabs(AsyncGeneratorProvider, ProviderModelMixin):
assert(await ws.receive_str())
assert(await ws.receive_str() == "6")
message_data = {
"version": "2.5",
"version": "2.13",
"source": "default",
"model": cls.get_model(model),
"model": model,
"messages": messages
}
await ws.send_str("42" + json.dumps(["perplexity_labs", message_data]))

@ -13,7 +13,7 @@ from .needs_auth.OpenaiAPI import OpenaiAPI
from .helper import format_prompt
class PollinationsAI(OpenaiAPI):
label = "Pollinations.AI"
label = "Pollinations AI"
url = "https://pollinations.ai"
working = True
@ -22,28 +22,30 @@ class PollinationsAI(OpenaiAPI):
default_model = "openai"
additional_models_image = ["unity", "midijourney", "rtist"]
additional_models_image = ["midjourney", "dall-e-3"]
additional_models_text = ["sur", "sur-mistral", "claude"]
model_aliases = {
"gpt-4o": "openai",
"mistral-nemo": "mistral",
"llama-3.1-70b": "llama", #
"gpt-3.5-turbo": "searchgpt",
"gpt-4": "searchgpt",
"gpt-3.5-turbo": "claude",
"gpt-4": "claude",
"qwen-2.5-coder-32b": "qwen-coder",
"claude-3.5-sonnet": "sur",
}
headers = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
}
@classmethod
def get_models(cls):
if not hasattr(cls, 'image_models'):
cls.image_models = []
if not cls.image_models:
url = "https://image.pollinations.ai/models"
response = requests.get(url)
response = requests.get(url, headers=cls.headers)
raise_for_status(response)
cls.image_models = response.json()
cls.image_models.extend(cls.additional_models_image)
@ -51,7 +53,7 @@ class PollinationsAI(OpenaiAPI):
cls.models = []
if not cls.models:
url = "https://text.pollinations.ai/models"
response = requests.get(url)
response = requests.get(url, headers=cls.headers)
raise_for_status(response)
cls.models = [model.get("name") for model in response.json()]
cls.models.extend(cls.image_models)
@ -94,7 +96,7 @@ class PollinationsAI(OpenaiAPI):
@classmethod
async def _generate_text(cls, model: str, messages: Messages, api_base: str, api_key: str = None, proxy: str = None, **kwargs):
if api_key is None:
async with ClientSession(connector=get_connector(proxy=proxy)) as session:
async with ClientSession(connector=get_connector(proxy=proxy), headers=cls.headers) as session:
prompt = format_prompt(messages)
async with session.get(f"https://text.pollinations.ai/{quote(prompt)}?model={quote(model)}") as response:
await raise_for_status(response)

@ -16,6 +16,7 @@ class RubiksAI(AsyncGeneratorProvider, ProviderModelMixin):
label = "Rubiks AI"
url = "https://rubiks.ai"
api_endpoint = "https://rubiks.ai/search/api/"
working = True
supports_stream = True
supports_system_message = True
@ -127,4 +128,4 @@ class RubiksAI(AsyncGeneratorProvider, ProviderModelMixin):
yield content
if web_search and sources:
yield Sources(sources)
yield Sources(sources)

@ -22,25 +22,21 @@ from .Copilot import Copilot
from .DarkAI import DarkAI
from .DDG import DDG
from .DeepInfraChat import DeepInfraChat
from .Flux import Flux
from .Free2GPT import Free2GPT
from .FreeGpt import FreeGpt
from .GizAI import GizAI
from .Liaobots import Liaobots
from .MagickPen import MagickPen
from .Mhystical import Mhystical
from .PerplexityLabs import PerplexityLabs
from .Pi import Pi
from .Pizzagpt import Pizzagpt
from .PollinationsAI import PollinationsAI
from .Prodia import Prodia
from .Reka import Reka
from .ReplicateHome import ReplicateHome
from .RobocodersAPI import RobocodersAPI
from .RubiksAI import RubiksAI
from .TeachAnything import TeachAnything
from .Upstage import Upstage
from .You import You
from .Mhystical import Mhystical
from .Flux import Flux
import sys
@ -61,4 +57,4 @@ __map__: dict[str, ProviderType] = dict([
])
class ProviderUtils:
convert: dict[str, ProviderType] = __map__
convert: dict[str, ProviderType] = __map__

@ -51,14 +51,22 @@ UPLOAD_IMAGE_HEADERS = {
}
class Gemini(AsyncGeneratorProvider, ProviderModelMixin):
label = "Google Gemini"
url = "https://gemini.google.com"
needs_auth = True
working = True
default_model = 'gemini'
image_models = ["gemini"]
default_vision_model = "gemini"
models = ["gemini", "gemini-1.5-flash", "gemini-1.5-pro"]
model_aliases = {
"gemini-flash": "gemini-1.5-flash",
"gemini-pro": "gemini-1.5-pro",
}
synthesize_content_type = "audio/vnd.wav"
_cookies: Cookies = None
_snlm0e: str = None
_sid: str = None

@ -11,14 +11,20 @@ from ...errors import MissingAuthError
from ..helper import get_connector
class GeminiPro(AsyncGeneratorProvider, ProviderModelMixin):
label = "Gemini API"
label = "Google Gemini API"
url = "https://ai.google.dev"
working = True
supports_message_history = True
needs_auth = True
default_model = "gemini-1.5-pro"
default_vision_model = default_model
models = [default_model, "gemini-pro", "gemini-1.5-flash", "gemini-1.5-flash-8b"]
model_aliases = {
"gemini-flash": "gemini-1.5-flash",
"gemini-flash": "gemini-1.5-flash-8b",
}
@classmethod
async def create_async_generator(
@ -108,4 +114,4 @@ class GeminiPro(AsyncGeneratorProvider, ProviderModelMixin):
if candidate["finishReason"] == "STOP":
yield candidate["content"]["parts"][0]["text"]
else:
yield candidate["finishReason"] + ' ' + candidate["safetyRatings"]
yield candidate["finishReason"] + ' ' + candidate["safetyRatings"]

@ -16,10 +16,12 @@ class Conversation(BaseConversation):
self.conversation_id = conversation_id
class GithubCopilot(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://copilot.microsoft.com"
url = "https://github.com/copilot"
working = True
needs_auth = True
supports_stream = True
default_model = "gpt-4o"
models = [default_model, "o1-mini", "o1-preview", "claude-3.5-sonnet"]
@ -90,4 +92,4 @@ class GithubCopilot(AsyncGeneratorProvider, ProviderModelMixin):
if line.startswith(b"data: "):
data = json.loads(line[6:])
if data.get("type") == "content":
yield data.get("body")
yield data.get("body")
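The streaming loop above keeps only server-sent-event lines carrying the `data: ` prefix and yields `body` from `content` events. Isolated from the provider, with fabricated sample payloads, that parsing step looks like:

```python
import json

def extract_content(lines):
    """Yield text chunks from SSE-style byte lines, skipping other events."""
    for line in lines:
        if line.startswith(b"data: "):
            data = json.loads(line[6:])          # strip the b"data: " prefix
            if data.get("type") == "content":    # ignore non-content events
                yield data.get("body")

# Sample payloads invented for illustration; the real event schema
# is whatever the Copilot endpoint actually emits.
sample = [
    b'data: {"type": "meta", "body": "ignored"}',
    b'data: {"type": "content", "body": "Hello"}',
    b'data: {"type": "content", "body": " world"}',
]
print("".join(extract_content(sample)))  # Hello world
```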

@ -24,16 +24,19 @@ class Conversation(BaseConversation):
class HuggingChat(AbstractProvider, ProviderModelMixin):
url = "https://huggingface.co/chat"
working = True
supports_stream = True
needs_auth = True
default_model = "Qwen/Qwen2.5-72B-Instruct"
default_image_model = "black-forest-labs/FLUX.1-dev"
image_models = [
"black-forest-labs/FLUX.1-dev"
]
models = [
default_model,
'meta-llama/Meta-Llama-3.1-70B-Instruct',
'meta-llama/Llama-3.3-70B-Instruct',
'CohereForAI/c4ai-command-r-plus-08-2024',
'Qwen/QwQ-32B-Preview',
'nvidia/Llama-3.1-Nemotron-70B-Instruct-HF',
@ -45,8 +48,9 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
*image_models
]
model_aliases = {
### Chat ###
"qwen-2.5-72b": "Qwen/Qwen2.5-72B-Instruct",
"llama-3.1-70b": "meta-llama/Meta-Llama-3.1-70B-Instruct",
"llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct",
"command-r-plus": "CohereForAI/c4ai-command-r-plus-08-2024",
"qwq-32b": "Qwen/QwQ-32B-Preview",
"nemotron-70b": "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
@ -55,6 +59,8 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
"hermes-3": "NousResearch/Hermes-3-Llama-3.1-8B",
"mistral-nemo": "mistralai/Mistral-Nemo-Instruct-2407",
"phi-3.5-mini": "microsoft/Phi-3.5-mini-instruct",
### Image ###
"flux-dev": "black-forest-labs/FLUX.1-dev",
}
@ -214,4 +220,4 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
return data[message_keys["id"]]
except (KeyError, IndexError, TypeError) as e:
raise RuntimeError(f"Failed to extract message ID: {str(e)}")
raise RuntimeError(f"Failed to extract message ID: {str(e)}")

@ -17,7 +17,7 @@ class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
working = True
supports_message_history = True
default_model = HuggingChat.default_model
default_image_model = "black-forest-labs/FLUX.1-dev"
default_image_model = HuggingChat.default_image_model
models = [*HuggingChat.models, default_image_model]
image_models = [default_image_model]
model_aliases = HuggingChat.model_aliases

@ -4,9 +4,9 @@ from .OpenaiAPI import OpenaiAPI
from .HuggingChat import HuggingChat
from ...typing import AsyncResult, Messages
class HuggingFace2(OpenaiAPI):
class HuggingFaceAPI(OpenaiAPI):
label = "HuggingFace (Inference API)"
url = "https://huggingface.co"
url = "https://api-inference.huggingface.co"
working = True
default_model = "meta-llama/Llama-3.2-11B-Vision-Instruct"
default_vision_model = default_model

@ -24,8 +24,8 @@ class Poe(AbstractProvider):
url = "https://poe.com"
working = True
needs_auth = True
supports_gpt_35_turbo = True
supports_stream = True
models = models.keys()
@classmethod
@ -113,4 +113,4 @@ if(window._message && window._message != window._last_message) {
elif chunk != "":
break
else:
time.sleep(0.1)
time.sleep(0.1)

@ -10,8 +10,6 @@ from ..base_provider import AbstractProvider
class Raycast(AbstractProvider):
url = "https://raycast.com"
supports_gpt_35_turbo = True
supports_gpt_4 = True
supports_stream = True
needs_auth = True
working = True

@ -1,10 +1,10 @@
from __future__ import annotations
import os, requests, time, json
from ..typing import CreateResult, Messages, ImageType
from .base_provider import AbstractProvider
from ..cookies import get_cookies
from ..image import to_bytes
from ...typing import CreateResult, Messages, ImageType
from ..base_provider import AbstractProvider
from ...cookies import get_cookies
from ...image import to_bytes
class Reka(AbstractProvider):
url = "https://chat.reka.ai/"
@ -145,4 +145,4 @@ class Reka(AbstractProvider):
return response.json()['accessToken']
except Exception as e:
raise ValueError(f"Failed to get access token: {e}, refresh your cookies / log in into chat.reka.ai")
raise ValueError(f"Failed to get access token: {e}, refresh your cookies / log in into chat.reka.ai")

@ -35,9 +35,8 @@ class Theb(AbstractProvider):
label = "TheB.AI"
url = "https://beta.theb.ai"
working = True
supports_gpt_35_turbo = True
supports_gpt_4 = True
supports_stream = True
models = models.keys()
@classmethod
@ -155,4 +154,4 @@ return '';
elif chunk != "":
break
else:
time.sleep(0.1)
time.sleep(0.1)

@ -11,7 +11,7 @@ from .GithubCopilot import GithubCopilot
from .Groq import Groq
from .HuggingChat import HuggingChat
from .HuggingFace import HuggingFace
from .HuggingFace2 import HuggingFace2
from .HuggingFaceAPI import HuggingFaceAPI
from .MetaAI import MetaAI
from .MetaAIAccount import MetaAIAccount
from .MicrosoftDesigner import MicrosoftDesigner
@ -21,6 +21,7 @@ from .OpenaiChat import OpenaiChat
from .PerplexityApi import PerplexityApi
from .Poe import Poe
from .Raycast import Raycast
from .Reka import Reka
from .Replicate import Replicate
from .Theb import Theb
from .ThebApi import ThebApi

@ -6,9 +6,9 @@ import time
import random
import re
import json
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ...typing import AsyncResult, Messages
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt
class MagickPen(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://magickpen.com"

@ -12,20 +12,20 @@ except ImportError:
BeautifulSoup = None
from aiohttp import ClientTimeout
from ..errors import MissingRequirementsError
from ..typing import AsyncResult, Messages
from ..cookies import get_cookies_dir
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ...errors import MissingRequirementsError
from ...typing import AsyncResult, Messages
from ...cookies import get_cookies_dir
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt
from .. import debug
from ... import debug
class RobocodersAPI(AsyncGeneratorProvider, ProviderModelMixin):
label = "API Robocoders AI"
url = "https://api.robocoders.ai/docs"
api_endpoint = "https://api.robocoders.ai/chat"
working = True
working = False
supports_message_history = True
default_model = 'GeneralCodingAgent'
agent = [default_model, "RepoAgent", "FrontEndAgent"]

@ -3,9 +3,9 @@ from __future__ import annotations
from aiohttp import ClientSession
import json
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ...typing import AsyncResult, Messages
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt
class Upstage(AsyncGeneratorProvider, ProviderModelMixin):

@ -5,10 +5,13 @@ from .AiChats import AiChats
from .AIUncensored import AIUncensored
from .Aura import Aura
from .Chatgpt4o import Chatgpt4o
from .Chatgpt4Online import Chatgpt4Online
from .ChatgptFree import ChatgptFree
from .FlowGpt import FlowGpt
from .FreeNetfly import FreeNetfly
from .GPROChat import GPROChat
from .Koala import Koala
from .MagickPen import MagickPen
from .MyShell import MyShell
from .Chatgpt4Online import Chatgpt4Online
from .RobocodersAPI import RobocodersAPI
from .Upstage import Upstage

@ -284,7 +284,7 @@
<option value="OpenaiChat">OpenAI ChatGPT</option>
<option value="Copilot">Microsoft Copilot</option>
<option value="Gemini">Google Gemini</option>
<option value="DDG">DuckDuckGo</option>
<option value="DDG">DuckDuckGo AI Chat</option>
<option disabled="disabled">----</option>
</select>
</div>
@ -302,4 +302,4 @@
<i class="fa-solid fa-bars"></i>
</div>
</body>
</html>
</html>

@ -5,7 +5,6 @@ from dataclasses import dataclass
from .Provider import IterListProvider, ProviderType
from .Provider import (
AIChatFree,
AmigoChat,
Blackbox,
Blackbox2,
BingCreateImages,
@ -17,6 +16,7 @@ from .Provider import (
DarkAI,
DDG,
DeepInfraChat,
Flux,
Free2GPT,
GigaChat,
Gemini,
@ -25,7 +25,6 @@ from .Provider import (
HuggingFace,
Liaobots,
Airforce,
MagickPen,
Mhystical,
MetaAI,
MicrosoftDesigner,
@ -39,8 +38,6 @@ from .Provider import (
ReplicateHome,
RubiksAI,
TeachAnything,
Upstage,
Flux,
)
@dataclass(unsafe_hash=True)
@ -74,7 +71,6 @@ default = Model(
Pizzagpt,
ReplicateHome,
Blackbox2,
Upstage,
Blackbox,
Free2GPT,
DeepInfraChat,
@ -82,7 +78,7 @@ default = Model(
ChatGptEs,
Cloudflare,
Mhystical,
AmigoChat,
PollinationsAI,
])
)
@ -95,20 +91,14 @@ default = Model(
gpt_35_turbo = Model(
name = 'gpt-3.5-turbo',
base_provider = 'OpenAI',
best_provider = IterListProvider([Blackbox, ChatGptEs, PollinationsAI, DarkAI])
best_provider = IterListProvider([DarkAI, ChatGpt])
)
# gpt-4
gpt_4o = Model(
name = 'gpt-4o',
gpt_4 = Model(
name = 'gpt-4',
base_provider = 'OpenAI',
best_provider = IterListProvider([Blackbox, ChatGptEs, PollinationsAI, DarkAI, ChatGpt, AmigoChat, Airforce, Liaobots, OpenaiChat])
)
gpt_4o_mini = Model(
name = 'gpt-4o-mini',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, Blackbox, ChatGptEs, Pizzagpt, ChatGpt, AmigoChat, Airforce, RubiksAI, MagickPen, Liaobots, OpenaiChat])
best_provider = IterListProvider([DDG, Blackbox, ChatGptEs, PollinationsAI, Copilot, OpenaiChat, Liaobots, Airforce])
)
gpt_4_turbo = Model(
@ -117,10 +107,17 @@ gpt_4_turbo = Model(
best_provider = Airforce
)
gpt_4 = Model(
name = 'gpt-4',
# gpt-4o
gpt_4o = Model(
name = 'gpt-4o',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, Blackbox, PollinationsAI, Copilot, OpenaiChat, Liaobots])
best_provider = IterListProvider([Blackbox, ChatGptEs, PollinationsAI, DarkAI, ChatGpt, Airforce, Liaobots, OpenaiChat])
)
gpt_4o_mini = Model(
name = 'gpt-4o-mini',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, ChatGptEs, Pizzagpt, ChatGpt, Airforce, RubiksAI, Liaobots, OpenaiChat])
)
# o1
@ -173,13 +170,13 @@ llama_3_1_8b = Model(
llama_3_1_70b = Model(
name = "llama-3.1-70b",
base_provider = "Meta Llama",
best_provider = IterListProvider([DDG, DeepInfraChat, Blackbox, Blackbox2, TeachAnything, PollinationsAI, DarkAI, Airforce, RubiksAI, HuggingChat, HuggingFace, PerplexityLabs])
best_provider = IterListProvider([DDG, DeepInfraChat, Blackbox, Blackbox2, TeachAnything, PollinationsAI, DarkAI, Airforce, RubiksAI, PerplexityLabs])
)
llama_3_1_405b = Model(
name = "llama-3.1-405b",
base_provider = "Meta Llama",
best_provider = IterListProvider([Blackbox, AmigoChat])
best_provider = Blackbox
)
# llama 3.2
@ -195,42 +192,24 @@ llama_3_2_11b = Model(
best_provider = IterListProvider([HuggingChat, HuggingFace])
)
llama_3_2_90b = Model(
name = "llama-3.2-90b",
# llama 3.3
llama_3_3_70b = Model(
name = "llama-3.3-70b",
base_provider = "Meta Llama",
best_provider = AmigoChat
)
# CodeLlama
codellama_34b = Model(
name = "codellama-34b",
base_provider = "Meta Llama",
best_provider = AmigoChat
best_provider = IterListProvider([HuggingChat, HuggingFace, PerplexityLabs])
)
### Mistral ###
mixtral_7b = Model(
name = "mixtral-7b",
base_provider = "Mistral",
best_provider = AmigoChat
)
mixtral_8x7b = Model(
name = "mixtral-8x7b",
base_provider = "Mistral",
best_provider = DDG
)
mistral_tiny = Model(
name = "mistral-tiny",
base_provider = "Mistral",
best_provider = AmigoChat
)
mistral_nemo = Model(
name = "mistral-nemo",
base_provider = "Mistral",
best_provider = IterListProvider([PollinationsAI, HuggingChat, AmigoChat, HuggingFace])
best_provider = IterListProvider([PollinationsAI, HuggingChat, HuggingFace])
)
mistral_large = Model(
@ -258,6 +237,7 @@ hermes_3 = Model(
best_provider = IterListProvider([HuggingChat, HuggingFace])
)
### Microsoft ###
phi_2 = Model(
name = "phi-2",
@ -276,13 +256,13 @@ phi_3_5_mini = Model(
gemini_pro = Model(
name = 'gemini-pro',
base_provider = 'Google DeepMind',
best_provider = IterListProvider([Blackbox, AIChatFree, GeminiPro, Liaobots])
best_provider = IterListProvider([Blackbox, AIChatFree, Gemini, GeminiPro, Liaobots])
)
gemini_flash = Model(
name = 'gemini-flash',
base_provider = 'Google DeepMind',
best_provider = IterListProvider([Blackbox, AmigoChat, Liaobots])
best_provider = IterListProvider([Blackbox, Gemini, GeminiPro, Liaobots])
)
gemini = Model(
@ -295,7 +275,7 @@ gemini = Model(
gemma_2b = Model(
name = 'gemma-2b',
base_provider = 'Google',
best_provider = IterListProvider([ReplicateHome, AmigoChat])
best_provider = ReplicateHome
)
### Anthropic ###
@ -322,13 +302,7 @@ claude_3_haiku = Model(
claude_3_5_sonnet = Model(
name = 'claude-3.5-sonnet',
base_provider = 'Anthropic',
best_provider = IterListProvider([Blackbox, PollinationsAI, AmigoChat, Liaobots])
)
claude_3_5_haiku = Model(
name = 'claude-3.5-haiku',
base_provider = 'Anthropic',
best_provider = AmigoChat
best_provider = IterListProvider([Blackbox, PollinationsAI, Liaobots])
)
### Reka AI ###
@ -355,7 +329,13 @@ blackboxai_pro = Model(
command_r_plus = Model(
name = 'command-r-plus',
base_provider = 'CohereForAI',
best_provider = IterListProvider([HuggingChat, AmigoChat])
best_provider = HuggingChat
)
command_r = Model(
name = 'command-r',
base_provider = 'CohereForAI',
best_provider = PollinationsAI
)
### Qwen ###
@ -377,7 +357,7 @@ qwen_2_72b = Model(
qwen_2_5_72b = Model(
name = 'qwen-2.5-72b',
base_provider = 'Qwen',
best_provider = IterListProvider([HuggingChat, HuggingFace])
)
qwen_2_5_coder_32b = Model(
@@ -392,20 +372,6 @@ qwq_32b = Model(
best_provider = IterListProvider([DeepInfraChat, HuggingChat, HuggingFace])
)
### Inflection ###
pi = Model(
name = 'pi',
@@ -414,12 +380,6 @@ pi = Model(
)
### DeepSeek ###
deepseek_coder = Model(
name = 'deepseek-coder',
base_provider = 'DeepSeek',
@@ -445,7 +405,7 @@ openchat_3_5 = Model(
grok_beta = Model(
name = 'grok-beta',
base_provider = 'x.ai',
best_provider = Liaobots
)
@@ -484,6 +444,14 @@ lfm_40b = Model(
best_provider = IterListProvider([Airforce, PerplexityLabs])
)
### DiscoResearch ###
german_7b = Model(
name = 'german-7b',
base_provider = 'DiscoResearch',
best_provider = Airforce
)
### HuggingFaceH4 ###
zephyr_7b = Model(
name = 'zephyr-7b',
@@ -494,38 +462,10 @@ zephyr_7b = Model(
### Inferless ###
neural_7b = Model(
name = 'neural-7b',
base_provider = 'Inferless',
best_provider = Airforce
)
### PollinationsAI ###
p1 = Model(
name = 'p1',
@@ -540,6 +480,30 @@ evil = Model(
best_provider = IterListProvider([PollinationsAI, Airforce])
)
### Other ###
midijourney = Model(
name = 'midijourney',
base_provider = 'Other',
best_provider = PollinationsAI
)
turbo = Model(
name = 'turbo',
base_provider = 'Other',
best_provider = PollinationsAI
)
unity = Model(
name = 'unity',
base_provider = 'Other',
best_provider = PollinationsAI
)
rtist = Model(
name = 'rtist',
base_provider = 'Other',
best_provider = PollinationsAI
)
#############
### Image ###
#############
@@ -582,16 +546,16 @@ flux_pro = ImageModel(
flux_dev = ImageModel(
name = 'flux-dev',
base_provider = 'Flux AI',
best_provider = IterListProvider([Flux, HuggingChat, HuggingFace])
)
flux_realism = ImageModel(
name = 'flux-realism',
base_provider = 'Flux AI',
best_provider = IterListProvider([PollinationsAI, Airforce])
)
flux_cablyai = ImageModel(
name = 'flux-cablyai',
base_provider = 'Flux AI',
best_provider = PollinationsAI
@@ -631,21 +595,14 @@ flux_4o = ImageModel(
dall_e_3 = ImageModel(
name = 'dall-e-3',
base_provider = 'OpenAI',
best_provider = IterListProvider([Airforce, PollinationsAI, CopilotAccount, OpenaiAccount, MicrosoftDesigner, BingCreateImages])
)
### Midjourney ###
midjourney = ImageModel(
name = 'midjourney',
base_provider = 'Midjourney',
best_provider = IterListProvider([PollinationsAI, Airforce])
)
### Other ###
@@ -655,24 +612,6 @@ any_dark = ImageModel(
best_provider = IterListProvider([PollinationsAI, Airforce])
)
class ModelUtils:
"""
Utility class for mapping string identifiers to Model instances.
@@ -693,11 +632,13 @@ class ModelUtils:
'gpt-3.5-turbo': gpt_35_turbo,
# gpt-4
'gpt-4': gpt_4,
'gpt-4-turbo': gpt_4_turbo,
# gpt-4o
'gpt-4o': gpt_4o,
'gpt-4o-mini': gpt_4o_mini,
# o1
'o1-preview': o1_preview,
'o1-mini': o1_mini,
@@ -719,15 +660,12 @@ class ModelUtils:
# llama-3.2
'llama-3.2-1b': llama_3_2_1b,
'llama-3.2-11b': llama_3_2_11b,
'llama-3.2-90b': llama_3_2_90b,
# CodeLlama
'codellama-34b': codellama_34b,
# llama-3.3
'llama-3.3-70b': llama_3_3_70b,
### Mistral ###
'mixtral-7b': mixtral_7b,
'mixtral-8x7b': mixtral_8x7b,
'mistral-tiny': mistral_tiny,
'mistral-nemo': mistral_nemo,
'mistral-large': mistral_large,
@@ -757,7 +695,6 @@ class ModelUtils:
# claude 3.5
'claude-3.5-sonnet': claude_3_5_sonnet,
### Reka AI ###
'reka-core': reka_core,
@@ -768,6 +705,7 @@ class ModelUtils:
### CohereForAI ###
'command-r+': command_r_plus,
'command-r': command_r,
### GigaChat ###
'gigachat': gigachat,
@@ -783,10 +721,6 @@ class ModelUtils:
'qwen-2.5-72b': qwen_2_5_72b,
'qwen-2.5-coder-32b': qwen_2_5_coder_32b,
'qwq-32b': qwq_32b,
### Inflection ###
'pi': pi,
@@ -805,9 +739,11 @@ class ModelUtils:
'sonar-chat': sonar_chat,
### DeepSeek ###
'deepseek-coder': deepseek_coder,
### DiscoResearch ###
'german-7b': german_7b,
### Nvidia ###
'nemotron-70b': nemotron_70b,
@@ -817,30 +753,24 @@ class ModelUtils:
### Liquid ###
'lfm-40b': lfm_40b,
### HuggingFaceH4 ###
'zephyr-7b': zephyr_7b,
### Inferless ###
'neural-7b': neural_7b,
### PollinationsAI ###
'p1': p1,
### Uncensored AI ###
'evil': evil,
### Other ###
'midijourney': midijourney,
'turbo': turbo,
'unity': unity,
'rtist': rtist,
#############
### Image ###
#############
@@ -866,18 +796,12 @@ class ModelUtils:
### OpenAI ###
'dall-e-3': dall_e_3,
### Midjourney ###
'midjourney': midjourney,
### Other ###
'any-dark': any_dark,
}
# Create a list of all working models
@@ -893,4 +817,4 @@ __models__ = {model.name: (model, providers) for model, providers in [
] if providers}
# Update the ModelUtils.convert with the working models
ModelUtils.convert = {model.name: model for model, _ in __models__.values()}
_all_models = list(ModelUtils.convert.keys())