gpt4free/g4f/Provider/OIVSCodeSer2.py
kqlio67 f4cd4890d3
feat: enhance provider support and add PuterJS provider (#2999)
* feat: enhance provider support and add PuterJS provider

- Add new PuterJS provider with extensive model support and authentication handling
- Add three new OIVSCode providers (OIVSCodeSer2, OIVSCodeSer5, OIVSCodeSer0501)
- Fix Blackbox provider with improved model handling and session generation
- Update model aliases across multiple providers for consistency
- Mark DDG provider as not working
- Move TypeGPT to not_working directory
- Fix model name formatting in DeepInfraChat and other providers (qwen3 → qwen-3)
- Add get_model method to LambdaChat and other providers for better model alias handling (see the sketch after this list)
- Add ModelNotFoundError import to providers that need it
- Update model definitions in models.py with new providers and aliases
- Fix client/stubs.py to allow arbitrary types in ChatCompletionMessage
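
A rough sketch of the alias handling referred to above: providers keep a model_aliases mapping and a get_model helper that resolves a requested name to one the provider actually serves. The class and alias entries below are illustrative, not g4f's exact implementation.

# Illustrative sketch of provider-level model alias resolution.
class ExampleProvider:
    default_model = "llama-3.1-70b"
    models = ["llama-3.1-70b", "qwen-3-32b"]
    model_aliases = {
        "qwen3": "qwen-3-32b",  # pre-rename style -> normalized "qwen-3" style
    }

    @classmethod
    def get_model(cls, model: str) -> str:
        """Resolve a requested model name to one this provider serves."""
        if not model:
            return cls.default_model
        if model in cls.models:
            return model
        if model in cls.model_aliases:
            return cls.model_aliases[model]
        # A real provider could raise ModelNotFoundError here instead of falling back.
        return cls.default_model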

* Fix conflicts g4f/Provider/needs_auth/Grok.py

* fix: update Blackbox provider default settings

- Changed  parameter to use only the passed value without fallback to 1024
- Set  to  instead of  in request payload

* feat: add WeWordle provider with gpt-4 support

- Created new WeWordle.py provider file implementing AsyncGeneratorProvider
- Added WeWordle class with API endpoint at wewordle.org/gptapi/v1/web/turbo
- Set provider properties: working=True, needs_auth=False, supports_stream=True
- Configured default_model as 'gpt-4' with retry mechanism for API requests (see the retry sketch after this list)
- Implemented URL sanitization logic to handle malformed URLs
- Added response parsing for different JSON response formats
- Added WeWordle to Provider/__init__.py imports
- Added WeWordle to default model providers list in models.py
- Added WeWordle to gpt_4 best_provider list in models.py
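
A minimal, self-contained sketch of the kind of retry loop described above, posting to the wewordle.org endpoint named in this commit. The payload contents and response shape are assumptions, not the provider's actual wire format.

import asyncio
import aiohttp

API_ENDPOINT = "https://wewordle.org/gptapi/v1/web/turbo"  # endpoint named in this commit

async def post_with_retry(payload: dict, max_retries: int = 3, delay: float = 1.0) -> dict:
    """POST the payload, retrying on network errors or non-2xx responses."""
    last_error = None
    async with aiohttp.ClientSession() as session:
        for attempt in range(max_retries):
            try:
                async with session.post(API_ENDPOINT, json=payload) as response:
                    response.raise_for_status()
                    return await response.json()
            except aiohttp.ClientError as error:
                last_error = error
                await asyncio.sleep(delay * (attempt + 1))  # simple linear backoff
    raise RuntimeError(f"Request failed after {max_retries} attempts") from last_error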

* feat: add DocsBot provider with GPT-4o support

- Added new DocsBot.py provider file implementing AsyncGeneratorProvider and ProviderModelMixin
- Created Conversation class extending JsonConversation to track conversation state
- Implemented create_async_generator method with support for:
  - Streaming and non-streaming responses
  - System messages
  - Message history
  - Image handling via data URIs (see the encoding sketch after this list)
  - Conversation tracking
- Added DocsBot to Provider/__init__.py imports
- Added DocsBot to default and default_vision model providers in models.py
- Added DocsBot as a provider for gpt_4o model in models.py
- Set default_model and vision support to 'gpt-4o'
- Implemented API endpoint communication with docsbot.ai
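
The data-URI image handling mentioned above boils down to base64-encoding the image bytes and prefixing a MIME type. A small sketch; the fallback MIME type and the commented message shape are assumptions, not DocsBot's actual payload.

import base64
import mimetypes

def to_data_uri(path: str) -> str:
    """Encode a local image file as a data: URI suitable for embedding in a request."""
    mime_type, _ = mimetypes.guess_type(path)
    mime_type = mime_type or "image/png"  # assumed fallback type
    with open(path, "rb") as image_file:
        encoded = base64.b64encode(image_file.read()).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"

# Hypothetical message shape:
# message = {"role": "user", "content": "Describe this image", "image": to_data_uri("photo.png")}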

* feat: add OpenAIFM provider and update audio model references

- Added new OpenAIFM provider in g4f/Provider/audio/OpenAIFM.py for text-to-speech functionality
- Updated PollinationsAI.py to rename "gpt-4o-audio" to "gpt-4o-mini-audio"
- Added OpenAIFM to audio provider imports in g4f/Provider/audio/__init__.py
- Modified save_response_media() in g4f/image/copy_images.py to handle source_url separately from media_url
- Added new gpt_4o_mini_tts AudioModel in g4f/models.py with OpenAIFM as best provider
- Updated ModelUtils dictionary in models.py to include both gpt_4o_mini_audio and gpt_4o_mini_tts
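
A hedged sketch of the models.py wiring described in the last two bullets; AudioModel and ModelUtils are g4f names, but the field values and registry key used here are assumptions.

from g4f.models import AudioModel, ModelUtils
from g4f.Provider.audio import OpenAIFM

gpt_4o_mini_tts = AudioModel(
    name="gpt-4o-mini-tts",
    base_provider="OpenAI",      # assumed base_provider label
    best_provider=OpenAIFM,
)

# Make the model resolvable by name, alongside gpt-4o-mini-audio.
ModelUtils.convert["gpt-4o-mini-tts"] = gpt_4o_mini_tts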

* fix: improve PuterJS provider and add Gemini to best providers

- Changed client_id generation in PuterJS from time-based to UUID format (see the sketch after this list)
- Fixed duplicate json import in PuterJS.py
- Added uuid module import in PuterJS.py
- Changed host header from "api.puter.com" to "puter.com"
- Modified error handling to use Exception instead of RateLimitError
- Added Gemini to best_provider list for gemini-2.5-flash model
- Added Gemini to best_provider list for gemini-2.5-pro model
- Fixed missing newline at end of Gemini.py file
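
The client_id change above swaps a timestamp-derived identifier for a random UUID. A sketch of the difference; the old time-based format shown is an assumed shape, not the previous PuterJS code.

import time
import uuid

# Old approach (assumed shape): identifier derived from the current time,
# which is guessable and can collide across concurrent clients.
client_id_time_based = f"client-{int(time.time() * 1000)}"

# New approach per this commit: a random UUID4 string.
client_id_uuid = str(uuid.uuid4())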

---------

Co-authored-by: kqlio67 <kqlio67.noreply.github.com>
2025-05-19 11:44:15 +02:00


from __future__ import annotations
import secrets
import string

from .template import OpenaiTemplate

class OIVSCodeSer2(OpenaiTemplate):
    label = "OI VSCode Server 2"
    url = "https://oi-vscode-server-2.onrender.com"
    api_base = "https://oi-vscode-server-2.onrender.com/v1"
    api_endpoint = "https://oi-vscode-server-2.onrender.com/v1/chat/completions"

    working = True
    needs_auth = False
    supports_stream = True
    supports_system_message = True
    supports_message_history = True

    default_model = "gpt-4o-mini"
    default_vision_model = default_model
    vision_models = [default_vision_model]
    models = vision_models

    @classmethod
    def get_headers(cls, stream: bool, api_key: str = None, headers: dict = None) -> dict:
        # Generate a random user ID similar to the JavaScript code
        userid = ''.join(secrets.choice(string.ascii_letters + string.digits) for _ in range(21))
        return {
            "Accept": "text/event-stream" if stream else "application/json",
            "Content-Type": "application/json",
            "userid": userid,
            **(
                {"Authorization": f"Bearer {api_key}"}
                if api_key else {}
            ),
            **({} if headers is None else headers)
        }
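
Since OIVSCodeSer2 subclasses OpenaiTemplate, it can be selected like any other g4f provider. A minimal usage sketch; the model name comes from default_model above and the prompt is illustrative.

from g4f.client import Client
from g4f.Provider import OIVSCodeSer2

client = Client(provider=OIVSCodeSer2)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # default_model of this provider
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)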