Commit graph

30 commits

hlohaus
4399b432c4 Refactor OpenaiChat authentication flow; replace get_nodriver with async context manager and improve error handling
Update backend_anon_url in har_file.py for correct endpoint
Add async context manager for get_nodriver_session in requests module
Fix start-browser.sh to remove stale cookie file before launching Chrome
2025-10-02 02:08:20 +02:00
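
A minimal sketch of what an async context manager around a nodriver session could look like, assuming nodriver's `start()`/`stop()` API; the actual `get_nodriver_session` helper in `g4f/requests` may take different arguments and perform extra cleanup.

```python
# Hypothetical sketch only: the real get_nodriver_session in g4f/requests
# may differ in signature, browser options, and error handling.
from contextlib import asynccontextmanager

import nodriver

@asynccontextmanager
async def get_nodriver_session(user_data_dir=None, **kwargs):
    """Start a nodriver browser and make sure it is stopped afterwards."""
    browser = await nodriver.start(user_data_dir=user_data_dir, **kwargs)
    try:
        yield browser
    finally:
        browser.stop()  # release the Chrome process even if the caller raised
```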
hlohaus
58d19785b2 Update backend_url in OpenaiChat 2025-07-15 19:51:23 +02:00
hlohaus
3ab36ebc64 feat: introduce render_messages and enhance HAR/conversation handling
- **g4f/providers/helper.py**
  - Add `render_messages()` to normalise message contents that are lists of blocks (see the sketch after this commit).

- **g4f/Provider/Blackbox.py**
  - Import `get_har_files` and `render_messages`.
  - Replace manual walk of `get_cookies_dir()` with `get_har_files()` in `_find_session_in_har`.
  - Simplify session‑parsing loop and exception logging; drop permissions check.
  - Build `current_messages` with `render_messages(messages)` instead of raw list.

- **g4f/Provider/Cloudflare.py**
  - Swap `to_string` import for `render_messages`.
  - Add `"impersonate": "chrome"` to default `_args`.
  - Construct `data["messages"]` with `render_messages(messages)` and inline `"parts"`; remove `to_string()` calls.
  - Move `cache_file` write outside inner `try` to always save arguments.

- **g4f/Provider/Copilot.py**
  - Defer `yield conversation` until after `conversation` is created when `return_conversation` is requested.

- **g4f/Provider/openai/har_file.py**
  - Break out of `os.walk` after first directory in `get_har_files()` to avoid deep traversal.

- **g4f/api/__init__.py**
  - Use `config.conversation` directly and set `return_conversation` when present.

- **g4f/client/__init__.py**
  - Pass `conversation` to both `ChatCompletionChunk.model_construct()` and `ChatCompletion.model_construct()`.

- **g4f/client/stubs.py**
  - Import `field_serializer` (with stub fallback).
  - Add serializers for `conversation` (objects and dicts) and for `content` fields.
  - Extend model constructors to accept/propagate `conversation`.

- **g4f/cookies.py**
  - Insert ".huggingface.co" into `DOMAINS` list.
  - Stop recursive directory walk in `read_cookie_files()` with early `break`.

- **g4f/gui/client/background.html**
  - Reorder error‑handling branches; reset `errorImage` in `onload`.
  - Revise `skipRefresh` logic and random image URL building.

- **g4f/gui/server/backend_api.py**
  - Add `self.match_files` cache for repeated image searches.
  - Use `safe_search` for sanitised term matching and `min` comparison.
  - Limit walk to one directory level; support deterministic random selection via `random` query param.

- **Miscellaneous**
  - Update imports where `render_messages` replaces `to_string`.
  - Ensure all modified providers iterate messages through `render_messages` for consistent formatting.
2025-04-17 07:14:34 +02:00
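
To make the `render_messages()` change above concrete, here is an illustrative sketch of such a helper, assuming OpenAI-style content blocks; the real function in `g4f/providers/helper.py` may normalise additional block types (e.g. images).

```python
# Illustrative only: not the exact implementation in g4f/providers/helper.py.
def render_messages(messages):
    """Yield messages whose list-of-blocks content is flattened to a string."""
    for message in messages:
        content = message.get("content")
        if isinstance(content, list):
            # keep the text of blocks like {"type": "text", "text": "..."}
            parts = [
                block.get("text", "")
                for block in content
                if isinstance(block, dict) and block.get("type") == "text"
            ]
            message = {**message, "content": "\n".join(parts)}
        yield message
```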
hlohaus
06546649db feat: add LM Arena provider, async‑ify Copilot & surface follow‑up suggestions
* **Provider/Blackbox.py**
  * Raise `RateLimitError` when `"You have reached your request limit for the hour"` substring is detected
* **Provider/Copilot.py**
  * Convert class to `AsyncGeneratorProvider`; rename `create_completion` → `create_async_generator` (interface sketched after this commit)
  * Swap `curl_cffi.requests.Session` for `AsyncSession`; reduce default timeout to **30 s**
  * Fully async websocket flow (`await session.ws_connect`, `await wss.send/recv/close`)
  * Emit new response types: `TitleGeneration`, `SourceLink`, aggregated `Sources`
  * Track request completion with `done` flag; collect citations in `sources` dict
* **Provider/DuckDuckGo.py**
  * Replace `duckduckgo_search.DDGS` with `duckai.DuckAI`
  * Change base class to `AbstractProvider`; drop nodriver‑based auth
* **Provider/PollinationsAI.py**
  * Re‑build text/audio model lists ensuring uniqueness; remove unused `extra_text_models`
  * Fix image seed logic (`i==1` for first retry); propagate streaming `error` field via `ResponseError`
* **Provider/hf_space**
  * **New file** `LMArenaProvider.py` implementing async queue/stream client
  * Register `LMArenaProvider` in `hf_space/__init__.py`; delete `G4F` import
* **Provider/needs_auth/CopilotAccount.py**
  * Inherit order changed to `Copilot, AsyncAuthedProvider`
  * Refactor token & cookie propagation; add `cookies_to_dict` helper
* **Provider/needs_auth/OpenaiChat.py**
  * Parse reasoning thoughts/summary; yield `Reasoning` responses
  * Tighten access‑token validation and nodriver JS evaluations (`return_by_value`)
  * Extend `Conversation` with `p` and `thoughts_summary`
* **providers/response.py**
  * Add `SourceLink` response class returning single formatted citation link
* **providers/base_provider.py**
  * Serialize `AuthResult` with custom `json.dump` to handle non‑serializable fields
  * Gracefully skip empty cache files when loading auth data
* **image/copy_images.py**
  * Ignore file extensions longer than 4 chars when inferring type
* **requests/__init__.py**
  * Use `return_by_value=True` for `navigator.userAgent` extraction
* **models.py**
  * Remove `G4F` from model provider lists; update `janus_pro_7b` best providers
* **GUI server/api.py**
  * Stream `SuggestedFollowups` to client (`"suggestions"` event)
* **GUI static assets**
  * **style.css**: bold chat title, add `.suggestions` styles, remove padding from `.chat-body`
  * **chat.v1.js**
    * Capture `suggestions` packets, render buttons, and send as quick replies
    * Re‑order finish‑reason logic; adjust token count placement and system‑prompt toggling
  * **chat-top-panel / footer** interactions updated accordingly
* **gui/client/static/js/chat.v1.js** & **css** further UI refinements (scroll handling, token counting, hide prompt toggle)
* Minor updates across multiple files to match new async interfaces and headers (`userAgent`, `raise_for_status`)
2025-04-17 01:21:58 +02:00
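
The Copilot interface change above boils down to yielding from an async generator instead of returning from a blocking call. The stub below shows only that shape; the class name and chunks are placeholders, and the curl_cffi websocket flow (`ws_connect`, `send`, `recv`, `close`) is elided.

```python
from g4f.providers.base_provider import AsyncGeneratorProvider
from g4f.typing import AsyncResult, Messages

class ExampleAsyncProvider(AsyncGeneratorProvider):  # placeholder class name
    working = True

    @classmethod
    async def create_async_generator(
        cls, model: str, messages: Messages, timeout: int = 30, **kwargs
    ) -> AsyncResult:
        # A real provider would open curl_cffi's AsyncSession here and drive the
        # websocket via `await session.ws_connect(...)` and `await wss.send/recv/close(...)`
        # as the commit describes; this stub just streams placeholder chunks.
        for chunk in ("Hello", ", ", "world"):
            yield chunk
```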
hlohaus
791b9f5c5a Add default llama 3 model 2025-02-27 18:47:48 +01:00
sobelmangentz
f02b2c502c refactor: use dependency injection for RequestConfig 2025-02-04 18:42:25 +03:30
Heiner Lohaus
2e531d227c Fix invalid escape in requests module
Add none auth with OpenAI using nodriver
Fix missing 1 required positional argument: 'cls'
Update token counting in GUI
Fix streaming example in requests guide
Remove ChatGptEs as default model
2025-01-06 23:20:29 +01:00
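
For context on the "invalid escape" fix: such warnings usually come from backslashes in non-raw string literals (often regexes). The commit does not show the exact line, so the snippet below is only a generic illustration of the technique.

```python
import re

# "\d" inside a normal string triggers "SyntaxWarning: invalid escape sequence";
# a raw string keeps the backslash literal for the regex engine.
pattern = re.compile(r"token=(\d+)")
```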
Heiner Lohaus
315a2f2595 Add streaming and system messages support in Airforce 2024-12-14 23:34:13 +01:00
Heiner Lohaus
7f8e5181f2 Fix links in Readme, update OpenaiChat provider 2024-12-07 05:06:24 +01:00
H Lohaus
79c407b939
IterListProvider support for generating images (#2441)
* IterListProvider support for generating images
* Add missing get_har_files import in Copilot
* Fix typo in dall-e-3 model name
* Add image client unittests
* Add MicrosoftDesigner provider
* Import MicrosoftDesigner and add it to the model list
2024-11-29 13:56:11 +01:00
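
A hedged usage example of what image generation over an `IterListProvider` fallback chain looks like from the client side; the provider names and model here are examples and may not be available in every environment.

```python
from g4f.client import Client
from g4f.Provider import IterListProvider, MicrosoftDesigner, PollinationsAI

# Try each image provider in turn until one succeeds.
client = Client(image_provider=IterListProvider([MicrosoftDesigner, PollinationsAI]))
response = client.images.generate(model="dall-e-3", prompt="a lighthouse at dusk")
print(response.data[0].url)
```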
Heiner Lohaus
4ae3d98df8 Sort .har files by date, filter None from result 2024-11-26 19:28:41 +01:00
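
A plausible shape of the sorting described above (illustrative only; the real logic lives in the HAR-reading helpers):

```python
import os
from glob import glob

def har_files_newest_first(directory: str) -> list:
    """Return .har paths sorted by modification date, newest first."""
    paths = glob(os.path.join(directory, "*.har"))
    return sorted(paths, key=os.path.getmtime, reverse=True)

# Callers would then drop failed parses, i.e. filter None from the result:
# results = [r for r in results if r is not None]
```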
H Lohaus
e4bfd9db5c
Improve slim docker image example, clean up OpenaiChat provider (#2397)
* Improve slim docker image example, clean up OpenaiChat provider

* Enhance event loop management for asynchronous generators

* Fix attribute "shutdown_default_executor" not found in old Python versions

* asyncio file added with all async helpers
2024-11-21 14:05:50 +01:00
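
One common way to guard the missing `shutdown_default_executor` attribute on Python versions before 3.9; this is a sketch, not necessarily the helper that landed in the new asyncio file.

```python
import asyncio

def run_and_close(coro):
    """Run a coroutine on a fresh loop, shutting down executors where supported."""
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        # loop.shutdown_default_executor() only exists on Python >= 3.9
        if hasattr(loop, "shutdown_default_executor"):
            loop.run_until_complete(loop.shutdown_default_executor())
        loop.close()
```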
Heiner Lohaus
1e2c18580c Improve reading .har file in OpenaiChat 2024-11-21 08:22:48 +01:00
Heiner Lohaus
6f2b6cccbd Add upload cookie files 2024-11-21 07:14:36 +01:00
Heiner Lohaus
8f3fbee0d8 Add show log option to gui 2024-11-19 15:26:03 +01:00
Heiner Lohaus
2a7770ea51 Add full nodriver support to OpenaiChat
Move small docker images before old images
2024-11-18 02:53:50 +01:00
Heiner Lohaus
b7a8e03220 Update docker tags in workflow for slim images,
Update read har file in OpenaiChat provider
Remove webdriver in OpenaiChat provider
Add supported_encodings in OpenaiChat
2024-11-17 19:51:26 +01:00
Richard Steininger
acc52bc014 Fix har openai access token parsing 2024-06-10 22:16:40 +02:00
Heiner Lohaus
9ddac1715f Add get/set cookies dir, hide prompt option in gui 2024-05-18 23:13:57 +02:00
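
Usage of the cookies-dir getters/setters added here, assuming the `get_cookies_dir`/`set_cookies_dir` names exposed by `g4f.cookies`:

```python
from g4f.cookies import get_cookies_dir, set_cookies_dir

set_cookies_dir("./har_and_cookies")  # directory scanned for cookie and .har files
print(get_cookies_dir())
```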
Heiner Lohaus
96e378e9e2 Fix OpenaiChat provider, improve proofofwork 2024-05-18 15:37:46 +02:00
Heiner Lohaus
b1dafc0ef7 Improve Liaobots provider, add image API support 2024-05-18 07:37:37 +02:00
Heiner Lohaus
59fcf9d2d3 Update chatgpt url, uvloop support 2024-05-15 02:27:51 +02:00
Heiner Lohaus
74c399d675 Add conversation title change in gui
Fix bug with multiple requests to a .har file in OpenaiChat
2024-04-23 17:34:42 +02:00
Heiner Lohaus
8dcef3b2a7 Improve python support 2024-04-21 22:44:29 +02:00
Heiner Lohaus
a26421bcd8 Add image model list 2024-04-21 15:15:55 +02:00
ochen1
8c7292035e
Don't give up searching for accessToken in HAR file 2024-04-08 16:07:07 -06:00
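
The idea behind "don't give up": keep scanning every HAR entry for an access token instead of aborting on the first entry that fails to parse. A generic sketch, not the exact parsing used in `har_file.py`:

```python
import json

def find_access_token(har_path: str):
    """Scan all HAR response bodies for an accessToken field."""
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f).get("log", {}).get("entries", [])
    for entry in entries:
        text = entry.get("response", {}).get("content", {}).get("text") or ""
        try:
            token = json.loads(text).get("accessToken")
        except (ValueError, AttributeError):
            continue  # not JSON or not a dict: keep searching the next entry
        if token:
            return token
    return None
```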
Heiner Lohaus
1e2cf48cba Add authless OpenaiChat 2024-04-05 21:00:35 +02:00
Heiner Lohaus
fd92918b77 Fix load .har files, add hardir to docker, add docs 2024-03-26 21:45:53 +01:00
Heiner Lohaus
95bab66dad No arkose token and .har files 2024-03-26 06:42:47 +01:00
Heiner Lohaus
347d3f92da Add .har file support for OpenaiChat
Update model list of HuggingChat
Update styles of scrollbar in gui
Fix image upload in gui
2024-03-25 21:06:51 +01:00