* Update DDG.py: added Llama 3.3 Instruct and o3-mini
Duck.ai now supports o3-mini, and the previous Llama 3.1 70B has been replaced by Llama 3.3 70B.
* Update DDG.py: change Llama 3.3 70B Instruct ID to "meta-llama/Llama-3.3-70B-Instruct-Turbo"
Fixed typo in full model name
* Update Cerebras.py: add "deepseek-r1-distill-llama-70b" to the models list
Cerebras now also provides inference for DeepSeek-R1 distilled into Llama 3.3 70B.
* Update models.py: reflect changes in DDG provider
- Removed DDG from best providers list for Llama 3.1 70B
- Added DDG to best providers list for o3-mini and Llama 3.3 70B
* A small update to the HuggingFaceInference get_models() method
Previously, get_models() would occasionally fail with "TypeError: string indices must be integers, not 'str'" on line 31, most likely because a network error prevented the model list from loading and the method then tried to parse the failed response. The code now checks for such errors before parsing.
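A minimal sketch of that kind of defensive check, assuming a requests-based fetch; the endpoint, parameters, and names are illustrative rather than the provider's actual code:

```python
import requests
from typing import List

def get_models(url: str = "https://huggingface.co/api/models") -> List[str]:
    """Fetch a model list and validate the response before indexing into it."""
    try:
        response = requests.get(url, params={"pipeline_tag": "text-generation"}, timeout=10)
        response.raise_for_status()
        data = response.json()
    except (requests.RequestException, ValueError):
        # Network error or non-JSON body: return an empty list instead of
        # failing later with "string indices must be integers".
        return []
    # Only index with a string key when each entry really is a dict.
    return [entry["id"] for entry in data if isinstance(entry, dict) and "id" in entry]
```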
* Update BlackboxAPI.py: remove unused imports
format_prompt() and the json library are not used here, so they can be removed safely.
* Update copilot.yml
This job is failing due to an error in the JavaScript code; this commit fixes it.
* Update providers-and-models.md to reflect latest changes
Updated models list and removed providers that are currently not working.
* Adding New Models and Enhancing Provider Functionality
* fix(core): handle model errors and improve configuration
- Import ModelNotSupportedError for proper exception handling in model resolution
- Update login_url configuration to reference class URL attribute dynamically
- Remove redundant typing imports after internal module reorganization
* feat(g4f/Provider/PerplexityLabs.py): Add new Perplexity models and update provider listings
- Update PerplexityLabs provider with expanded Sonar model family including pro/reasoning variants
- Add new text models: sonar-reasoning-pro to supported model catalog
- Standardize model naming conventions across provider documentation
* feat(g4f/models.py): add Sonar Reasoning Pro model configuration
- Add new model to Perplexity AI text models section
- Include the model in the ModelUtils.convert mapping with the PerplexityLabs provider (sketched below)
- Maintain consistent configuration pattern with existing Sonar variants
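For reference, such an entry follows the pattern already used for the other Sonar variants in g4f/models.py; treat the exact values below as an illustrative sketch, not the committed code:

```python
from g4f.models import Model, ModelUtils
from g4f.Provider import PerplexityLabs

# Sonar Reasoning Pro, configured in the same style as the existing Sonar variants.
sonar_reasoning_pro = Model(
    name="sonar-reasoning-pro",
    base_provider="Perplexity AI",
    best_provider=PerplexityLabs,
)

# Register the model so lookups by name (ModelUtils.convert) resolve to it.
ModelUtils.convert["sonar-reasoning-pro"] = sonar_reasoning_pro
```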
* feat(docs/providers-and-models.md): update provider models and add new reasoning model
- Update PerplexityLabs text models to standardized sonar naming convention
- Add new sonar-reasoning-pro model to text models table
- Include latest Perplexity AI documentation references for new model
* docs(docs/providers-and-models.md): update AI providers documentation
- Remove deprecated chatgptt.me from no-auth providers list
- Delete redundant Auth column from HuggingSpace providers table
- Update PerplexityLabs model website URLs to sonar.perplexity.ai
- Adjust provider counts for GPT-4/GPT-4o models in text models section
- Fix inconsistent formatting in image models provider listings
* chore(g4f/models.py): remove deprecated ChatGptt provider integration
- Remove ChatGptt import from provider dependencies
- Exclude ChatGptt from default model's best_provider list
- Update gpt_4 model configuration to eliminate ChatGptt reference
- Modify gpt_4o vision model provider hierarchy
- Adjust gpt_4o_mini provider selection parameters
BREAKING CHANGE: Existing integrations using ChatGptt provider will no longer function
* Disabled provider (g4f/Provider/ChatGptt.py > g4f/Provider/not_working/ChatGptt.py) due to a problem with Cloudflare
* fix(g4f/Provider/CablyAI.py): update API endpoints and model configurations
* docs(docs/providers-and-models.md): update model listings and provider capabilities
* feat(g4f/models.py): Add Hermes-3 model and enhance provider configs
* feat(g4f/Provider/CablyAI.py): Add free tier indicators to model aliases
* refactor(g4f/tools/run_tools.py): modularize thinking chunk handling
* fix(g4f/Provider/DeepInfraChat.py): resolve duplicate keys and enhance request headers
* feat(g4f/Provider/DeepInfraChat.py): Add multimodal image support and improve model handling
* chore(g4f/models.py): update default vision model providers
* feat(docs/providers-and-models.md): update provider capabilities and model specifications
* Update docs/client.md
* docs(docs/providers-and-models.md): Update DeepInfraChat models documentation
* feat(g4f/Provider/DeepInfraChat.py): add new vision models and expand model aliases
* feat(g4f/models.py): update model configurations and add new providers
* feat(g4f/models.py): Update model configurations and add new AI models
---------
Co-authored-by: kqlio67 <>
* Fixed the DeepSeek-R1 model name in the vision_models list of the Blackbox.py provider
* Update g4f/Provider/Blackbox.py
* Update docs/providers-and-models.md
* Update g4f/models.py g4f/Provider/DeepInfraChat.py docs/providers-and-models.md
---------
Co-authored-by: kqlio67 <>
* Update provider capabilities and model support
- Update provider documentation with latest model support
- Remove deprecated models and update model counts
- Add new model variants and fix formatting
- Update provider class labels for better clarity
- Add support for new models including DeepSeek-R1 and sd-turbo
- Clean up unused model aliases and improve code organization
Key changes:
- Update Blackbox vision capabilities
- Remove legacy models (midijourney, unity, rtist)
- Add flux variants and update provider counts
- Set explicit provider labels
- Update model aliases and mappings
- Add new model support in multiple providers
* Update g4f/models.py
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/Blackbox.py
---------
Co-authored-by: kqlio67 <>
Add "flux"as alias in HuggingSpace providers
Choice a random space provider in HuggingSpace provider
Add "Selecting a Provider" Documentation
Update requirements list in pypi packages
Fix labels of the CablyAI and DeepInfraChat providers
* Update model configurations, provider implementations, and documentation
- Updated model names and aliases for Qwen QVQ 72B and Qwen 2 72B (@TheFirstNoob)
- Revised HuggingSpace class configuration, added default_image_model
- Added llama-3.2-70b alias for Llama 3.2 70B model in AutonomousAI
- Removed BlackboxCreateAgent class
- Added gpt-4o alias for Copilot model
- Moved api_key to Mhystical class attribute
- Added models property with default_model value for Free2GPT
- Simplified Jmuz class implementation
- Improved image generation and model handling in DeepInfra
- Standardized default models and removed aliases in Gemini
- Replaced model aliases with direct model list in GlhfChat (@TheFirstNoob)
- Removed trailing slash from image generation URL in PollinationsAI (https://github.com/xtekky/gpt4free/issues/2571)
- Updated llama and qwen model configurations
- Enhanced provider documentation and model details
* Removed the 'Yqcloud' provider from the Default model in g4f/models.py due to the error 'ResponseStatusError: Response 429: 文字过长,请删减后重试。' ("The text is too long, please shorten it and retry.")
* Update docs/providers-and-models.md
* refactor(g4f/Provider/DDG.py): Add error handling and rate limiting to DDG provider
- Add custom exception classes for rate limits, timeouts, and conversation limits
- Implement rate limiting with a sleep between requests (0.75s minimum delay; sketched below)
- Add model validation method to check supported models
- Add proper error handling for API responses with custom exceptions
- Improve session cookie handling for conversation persistence
- Clean up User-Agent string and remove redundant code
- Add proper error propagation through async generator
Breaking changes:
- New custom exceptions may require updates to error handling code
- Rate limiting affects request timing and throughput
- Model validation is now stricter
Related:
- Adds error handling similar to standard API clients
- Improves reliability and robustness of chat interactions
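A minimal sketch of the rate-limiting idea described above (a 0.75 s minimum delay between requests plus a custom exception); class and method names are illustrative, not the provider's actual code:

```python
import asyncio
import time

class RateLimitError(Exception):
    """Raised when the upstream API answers with HTTP 429."""

class RateLimiter:
    """Enforce a minimum delay between consecutive requests."""

    def __init__(self, min_delay: float = 0.75):
        self.min_delay = min_delay
        self._last_request = 0.0
        self._lock = asyncio.Lock()

    async def wait(self) -> None:
        async with self._lock:
            elapsed = time.monotonic() - self._last_request
            if elapsed < self.min_delay:
                await asyncio.sleep(self.min_delay - elapsed)
            self._last_request = time.monotonic()

# Schematic use inside the provider's async generator:
#     await limiter.wait()
#     async with session.post(url, json=payload) as response:
#         if response.status == 429:
#             raise RateLimitError("DDG rate limit reached")
```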
* Update g4f/models.py g4f/Provider/PollinationsAI.py
* Update g4f/models.py
* Restored the provider that had previously been disabled as not working (g4f/Provider/DeepInfraChat.py)
* Fixing a bug with Streaming Completions
* Update g4f/Provider/PollinationsAI.py
* Update g4f/Provider/Blackbox.py g4f/Provider/DDG.py
* Added another image-generation model, 'ImageGeneration2', to the 'Blackbox' provider
* Update docs/providers-and-models.md
* Update g4f/models.py g4f/Provider/Blackbox.py
* Added a new OIVSCode provider with text models and vision (image upload) support
* Update docs/providers-and-models.md
* docs: add Conversation Memory class with context handling requested by @TheFirstNoob
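The documented class may differ in detail; a minimal sketch of the idea (a rolling message window replayed as context with each new prompt), with hypothetical names:

```python
class ConversationMemory:
    """Keep a rolling window of chat messages so follow-up requests carry context."""

    def __init__(self, max_messages: int = 20):
        self.max_messages = max_messages
        self.messages: list = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # Drop the oldest turns once the window is full.
        if len(self.messages) > self.max_messages:
            self.messages = self.messages[-self.max_messages:]

    def as_context(self, new_prompt: str) -> list:
        """Return the stored history plus the new user prompt, ready to send to a provider."""
        return self.messages + [{"role": "user", "content": new_prompt}]
```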
* Simplified the README.md documentation and added new docs/configuration.md documentation
* Update README.md and docs/configuration.md
* Update README.md
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/PollinationsAI.py
* Added new model deepseek-r1 to Blackbox provider. @TheFirstNoob
* Fixed bugs and updated docs/providers-and-models.md etc/unittest/client.py g4f/models.py g4f/Provider/.
---------
Co-authored-by: kqlio67 <>
Co-authored-by: H Lohaus <hlohaus@users.noreply.github.com>
* Update providers, restore old providers, remove non-working providers
* Restoring the original providers
* Restore the original provider g4f/Provider/needs_auth/GeminiPro.py
* Deleted non-working providers, fixed providers
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/hf_space/CohereForAI.py
* Restore g4f/Provider/Airforce.py; update aliases in g4f/Provider/hf_space/CohereForAI.py
* Disabled provider 'g4f/Provider/ReplicateHome.py' and moved to 'g4f/Provider/not_working'
* Disabled the Pizzagpt provider due to a problem with its response
* Fix for why web_search = True didn't work
* Update docs/client.md
* Fix for why web_search = True did not work in the asynchronous and synchronous versions
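For context, the flag is passed straight through the client call in both variants; a minimal usage sketch (the model name is illustrative):

```python
import asyncio
from g4f.client import Client, AsyncClient

# Synchronous client
client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize today's tech news."}],
    web_search=True,  # the flag fixed by this change
)
print(response.choices[0].message.content)

# Asynchronous client
async def main():
    async_client = AsyncClient()
    response = await async_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarize today's tech news."}],
        web_search=True,
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```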
---------
Co-authored-by: kqlio67 <>
Add no-auth support for OpenAI using nodriver
Fix "missing 1 required positional argument: 'cls'" error
Update count tokens in GUI
Fix streaming example in requests guide
Remove ChatGptEs as default model
Remove old text_to_speech service from gui
Update gui and client readmes
Add HuggingSpaces group provider
Add provider parameter config forms to gui
* refactor(g4f/Provider/Airforce.py): improve model handling and filtering
- Add hidden_models set to exclude specific models
- Add evil alias for uncensored model handling
- Extend filtering for model-specific response tokens
- Add response buffering for streamed content
- Update model fetching with error handling
* refactor(g4f/Provider/Blackbox.py): improve caching and model handling
- Add a caching system for validated values with file-based storage (sketched below)
- Rename 'flux' model to 'ImageGeneration' and update references
- Add temperature, top_p and max_tokens parameters to generator
- Simplify HTTP headers and remove redundant options
- Add model alias mapping for ImageGeneration
- Add file system utilities for cache management
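A minimal sketch of a file-based cache of the kind described above; the paths and function names are assumptions for illustration, not the provider's actual code:

```python
import json
from pathlib import Path
from typing import Optional

CACHE_DIR = Path(__file__).parent / ".cache"   # matches the ignored provider cache directory
CACHE_FILE = CACHE_DIR / "blackbox.json"

def _load_all() -> dict:
    """Read the whole cache file, tolerating a missing or corrupted file."""
    if not CACHE_FILE.exists():
        return {}
    try:
        return json.loads(CACHE_FILE.read_text())
    except json.JSONDecodeError:
        return {}

def load_cached_value(key: str) -> Optional[str]:
    """Return a previously validated value from the cache file, if present."""
    return _load_all().get(key)

def store_cached_value(key: str, value: str) -> None:
    """Persist a validated value so later requests can skip re-validation."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    data = _load_all()
    data[key] = value
    CACHE_FILE.write_text(json.dumps(data))
```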
* feat(g4f/Provider/RobocodersAPI.py): add caching and error handling
- Add file-based caching system for access tokens and sessions
- Add robust error handling with specific error messages
- Add automatic dialog continuation on resource limits
- Add HTML parsing with BeautifulSoup for token extraction
- Add debug logging for error tracking
- Add timeout configuration for API requests
* refactor(g4f/Provider/DarkAI.py): update DarkAI default model and aliases
- Change default model from llama-3-405b to llama-3-70b
- Remove llama-3-405b from supported models list
- Remove llama-3.1-405b from model aliases
* feat(g4f/Provider/Blackbox2.py): add image generation support
- Add image model 'flux' with dedicated API endpoint
- Refactor generator to support both text and image outputs
- Extract headers into reusable static method
- Add type hints for AsyncGenerator return type
- Split generation logic into _generate_text and _generate_image methods (see the sketch below)
- Add ImageResponse handling for image generation results
BREAKING CHANGE: create_async_generator now returns AsyncGenerator instead of AsyncResult
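Schematically, the split looks roughly like this; the names mirror those in the bullets above, and the ImageResponse placeholder stands in for the project's own response class:

```python
from typing import AsyncGenerator, Union

class ImageResponse:
    """Placeholder for the project's image response type."""
    def __init__(self, urls: list, alt: str):
        self.urls, self.alt = urls, alt

async def _generate_image(prompt: str) -> ImageResponse:
    # The real method calls the dedicated 'flux' image endpoint; only the shape is shown here.
    return ImageResponse(urls=["https://example.invalid/image.png"], alt=prompt)

async def _generate_text(prompt: str):
    # The real method streams chunks from the text endpoint.
    yield f"echo: {prompt}"

async def create_async_generator(model: str, prompt: str) -> AsyncGenerator[Union[str, ImageResponse], None]:
    """Dispatch to the image or text generator depending on the requested model."""
    if model == "flux":
        yield await _generate_image(prompt)
    else:
        async for chunk in _generate_text(prompt):
            yield chunk
```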
* refactor(g4f/Provider/ChatGptEs.py): update ChatGptEs model configuration
- Update models list to include gpt-3.5-turbo
- Remove chatgpt-4o-latest from supported models
- Remove model_aliases mapping for gpt-4o
* feat(g4f/Provider/DeepInfraChat.py): add Accept-Language header support
- Add Accept-Language header for internationalization
- Maintain existing header configuration
- Improve request compatibility with language preferences
* refactor(g4f/Provider/needs_auth/Gemini.py): add ProviderModelMixin inheritance
- Add ProviderModelMixin to class inheritance
- Import ProviderModelMixin from base_provider
- Move BaseConversation import to base_provider imports
* refactor(g4f/Provider/Liaobots.py): update model details and aliases
- Add version suffix to o1 model IDs
- Update model aliases for o1-preview and o1-mini
- Standardize version format across model definitions
* refactor(g4f/Provider/PollinationsAI.py): enhance model support and generation
- Split generation logic into dedicated image/text methods
- Add additional text models including sur and claude
- Add width/height parameters for image generation
- Add model existence validation
- Add hasattr checks for model lists initialization
* chore(gitignore): add provider cache directory
- Add g4f/Provider/.cache to gitignore patterns
* refactor(g4f/Provider/ReplicateHome.py): update model configuration
- Update default model to gemma-2b-it
- Add default_image_model configuration
- Remove llava-13b from supported models
- Simplify request headers
* feat(g4f/models.py): expand provider and model support
- Add new providers DarkAI and PollinationsAI
- Add new models for Mistral, Flux and image generation
- Update provider lists for existing models
- Add P1 and Evil models with experimental providers
BREAKING CHANGE: Remove llava-13b model support
* refactor(Airforce): Update type hint for split_message return
- Change return type of 'split_message' from 'list[str]' to 'List[str]' for consistency with the import.
- Maintain overall functionality and structure of the class.
- Ensure compatibility with type hinting standards in Python.
* refactor(g4f/Provider/Airforce.py): Update type hint for split_message return
- Change return type of 'split_message' from 'list[str]' to 'List[str]' for consistency with the import (illustrated below).
- Maintain overall functionality and structure of the 'Airforce' class.
- Ensure compatibility with type hinting standards in Python.
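For illustration only (the real split_message does more than this), the annotation change amounts to:

```python
from typing import List

def split_message(message: str, max_length: int = 1000) -> List[str]:
    """Split a long message into chunks of at most max_length characters.

    Annotated with typing.List rather than the builtin list[str] so the
    signature stays consistent with the module's existing typing import.
    """
    return [message[i:i + max_length] for i in range(0, len(message), max_length)]
```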
* feat(g4f/Provider/RobocodersAPI.py): Add support for optional BeautifulSoup dependency
- Introduce a check for the BeautifulSoup library and handle its absence gracefully.
- Raise an error if BeautifulSoup is not installed, prompting the user to install it (sketched below).
- Remove direct import of BeautifulSoup to avoid import errors when the library is missing.
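A minimal sketch of the optional-dependency pattern; the project raises its own requirements error, so the plain ImportError here is only illustrative:

```python
try:
    from bs4 import BeautifulSoup
    HAS_BEAUTIFULSOUP = True
except ImportError:
    BeautifulSoup = None
    HAS_BEAUTIFULSOUP = False

def extract_token(html: str) -> str:
    """Parse a token out of an HTML page, failing with an install hint if bs4 is missing."""
    if not HAS_BEAUTIFULSOUP:
        raise ImportError('Install "beautifulsoup4" to use this provider: pip install beautifulsoup4')
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("input", {"name": "token"})  # hypothetical element holding the token
    return tag.get("value", "") if tag else ""
```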
* fix: Update provider documentation and apply small fixes in providers
* Disabled the provider (RobocodersAPI)
* Fix: Conflicting file g4f/models.py
* Update g4f/models.py g4f/Provider/Airforce.py
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/Airforce.py g4f/Provider/PollinationsAI.py
* Update docs/providers-and-models.md
* Update .gitignore
* Update g4f/models.py
* Update g4f/Provider/PollinationsAI.py
* feat(g4f/Provider/Blackbox.py): add support for additional AI models and agents
- Introduce new agent modes for Meta-Llama, Mistral, DeepSeek, DBRX, Qwen, and Nous-Hermes
- Update model aliases to include newly supported models
* Update (g4f/Provider/Blackbox.py)
* Update (g4f/Provider/Blackbox.py)
* feat(g4f/Provider/Blackbox2.py): add license key caching and validation
- Add cache file management for license key persistence
- Implement async license key extraction from JavaScript files
- Add license key validation to text generation requests
- Update type hints for async generators
- Add error handling for cache file operations
Breaking changes:
- Text generation now requires license key validation
---------
Co-authored-by: kqlio67 <>
* Add multiple images support
* Add multiple images support in gui
* Support multiple images in legacy client and in the api
Fix some model names in provider model list
* Fix unittests
* Add vision and providers docs
---------
Co-authored-by: kqlio67 <>
* refactor(g4f/Provider/Airforce.py): Enhance Airforce provider with dynamic model fetching
* refactor(g4f/Provider/Blackbox.py): Enhance Blackbox AI provider configuration and streamline code
* feat(g4f/Provider/RobocodersAPI.py): Add RobocodersAPI new async chat provider
* refactor(g4f/client/__init__.py): Improve provider handling in async_generate method
* refactor(g4f/models.py): Update provider configurations for multiple models
* refactor(g4f/Provider/Blackbox.py): Streamline model configuration and improve response handling
* feat(g4f/Provider/DDG.py): Enhance model support and improve conversation handling
* refactor(g4f/Provider/Copilot.py): Enhance Copilot provider with model support
* refactor(g4f/Provider/AmigoChat.py): update models and improve code structure
* chore(g4f/Provider/not_working/AIUncensored.): move AIUncensored to not_working directory
* chore(g4f/Provider/not_working/Allyfy.py): remove Allyfy provider
* Update (g4f/Provider/not_working/AIUncensored.py g4f/Provider/not_working/__init__.py)
* refactor(g4f/Provider/ChatGptEs.py): Implement format_prompt for message handling
* refactor(g4f/Provider/Blackbox.py): Update message formatting and improve code structure
* refactor(g4f/Provider/LLMPlayground.py): Enhance text generation and error handling
* refactor(g4f/Provider/needs_auth/PollinationsAI.py): move PollinationsAI to needs_auth directory
* refactor(g4f/Provider/Liaobots.py): Update Liaobots provider models and aliases
* feat(g4f/Provider/DeepInfraChat.py): Add new DeepInfra models and aliases
* Update (g4f/Provider/__init__.py)
* Update (g4f/models.py)
* Update g4f/models.py
* Update g4f/models.py
* Update g4f/Provider/LLMPlayground.py
* Update (g4f/models.py g4f/Provider/Airforce.py g4f/Provider/__init__.py g4f/Provider/LLMPlayground.py)
* Update g4f/Provider/__init__.py
* refactor(g4f/Provider/Airforce.py): Enhance text generation with retry and timeout
* Update g4f/Provider/AmigoChat.py g4f/Provider/__init__.py
* refactor(g4f/Provider/Blackbox.py): update model prefixes and image handling
Fixes #2445
- Update model prefixes for gpt-4o, gemini-pro, and claude-sonnet-3.5
- Add 'gpt-3.5-turbo' alias for 'blackboxai' model
- Modify image handling in create_async_generator method
- Add 'imageGenerationMode' and 'webSearchModePrompt' flags to API request
- Remove redundant 'imageBase64' field from image data structure
* New provider (g4f/Provider/Blackbox2.py)
Supports text generation with the llama-3.1-70b model
* docs(docs/async_client.md): update AsyncClient API guide with minor improvements
- Improve formatting and readability of code examples
- Add line breaks for better visual separation of sections
- Fix minor typos and inconsistencies in text
- Enhance clarity of explanations in various sections
- Remove unnecessary whitespace
* feat(docs/client.md): add response_format parameter
- Add 'response_format' parameter to image generation examples (usage sketched below)
- Specify 'url' format for standard image generation
- Include 'b64_json' format for base64 encoded image response
- Update documentation to reflect new parameter usage
- Improve code examples for clarity and consistency
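A usage sketch in the style of docs/client.md (the model name is illustrative, and availability of both formats depends on the provider):

```python
from g4f.client import Client

client = Client()

# Standard generation: request a hosted image URL.
image = client.images.generate(
    model="flux",
    prompt="a white siamese cat",
    response_format="url",
)
print(image.data[0].url)

# Base64 variant: the encoded image is returned in the response body.
image_b64 = client.images.generate(
    model="flux",
    prompt="a white siamese cat",
    response_format="b64_json",
)
print(image_b64.data[0].b64_json[:64], "...")
```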
* docs(README.md): update usage examples and add image generation
- Update text generation example to use new Client API
- Add image generation example with Client API
- Update configuration section with new cookie setting instructions
- Add response_format parameter to image generation example
- Remove outdated information and reorganize sections
- Update contributors list
* refactor(g4f/client/__init__.py): optimize image processing and response handling
- Modify _process_image_response to handle 'url' format without local saving
- Update ImagesResponse construction to include 'created' timestamp
- Simplify image processing logic for different response formats
- Improve error handling and logging for image generation
- Enhance type hints and docstrings for better code clarity
* feat(g4f/models.py): update model providers and add new models
- Add Blackbox2 to Provider imports
- Update gpt-3.5-turbo best provider to Blackbox
- Add Blackbox2 to llama-3.1-70b best providers
- Rename dalle_3 to dall_e_3 and update its best providers
- Add new models: solar_mini, openhermes_2_5, lfm_40b, zephyr_7b, neural_7b, mythomax_13b
- Update ModelUtils.convert with new models and changes
- Remove duplicate 'dalle-3' entry in ModelUtils.convert
* refactor(Airforce): improve API handling and add authentication
- Implement API key authentication with check_api_key method
- Refactor image generation to use new imagine2 endpoint
- Improve text generation with better error handling and streaming
- Update model aliases and add new image models
- Enhance content filtering for various model outputs
- Replace StreamSession with aiohttp's ClientSession for async operations
- Simplify model fetching logic and remove redundant code
- Add is_image_model method for better model type checking
- Update class attributes for better organization and clarity
* feat(g4f/Provider/HuggingChat.py): update HuggingChat model list and aliases
Requested by @TheFirstNoob
- Add 'Qwen/Qwen2.5-72B-Instruct' as the first model in the list
- Update model aliases to include 'qwen-2.5-72b'
- Reorder existing models in the list for consistency
- Remove duplicate entry for 'Qwen/Qwen2.5-72B-Instruct' in models list
* refactor(g4f/Provider/ReplicateHome.py): remove unused text models
Requested by @TheFirstNoob
- Removed the 'meta/meta-llama-3-70b-instruct' and 'mistralai/mixtral-8x7b-instruct-v0.1' text models from the list
- Updated the list to only include the remaining text and image models
- This change simplifies the model configuration and reduces the number of available models, focusing on the core text and image models provided by Replicate
* refactor(g4f/Provider/HuggingChat.py): Move HuggingChat to needs_auth directory
Requested by @TheFirstNoob
* Update (g4f/Provider/needs_auth/HuggingChat.py)
* Update g4f/models.py
* Update g4f/Provider/Airforce.py
* Update g4f/models.py g4f/Provider/needs_auth/HuggingChat.py
* Added 'Airforce' provider to the 'o1-mini' model (g4f/models.py)
* Update (g4f/Provider/Airforce.py g4f/Provider/AmigoChat.py)
* Update g4f/models.py g4f/Provider/DeepInfraChat.py g4f/Provider/Airforce.py
* Update g4f/Provider/DeepInfraChat.py
* Update (g4f/Provider/DeepInfraChat.py)
* Update g4f/Provider/Blackbox.py
* Update (docs/client.md docs/async_client.md g4f/client/__init__.py)
* Update (docs/async_client.md docs/client.md)
* Update (g4f/client/__init__.py)
---------
Co-authored-by: kqlio67 <kqlio67@users.noreply.github.com>
Co-authored-by: kqlio67 <>
Co-authored-by: H Lohaus <hlohaus@users.noreply.github.com>
* Add more contributors, add link to Swagger UI
* Update Dockerfile-slim
* Update retry_provider.py
* Add html preview to gui, fix urls in website manifest
* Missing chunks in OpenaiChat