Update docs: Using the OpenAI Library

Add sse function to requests sessions
Small improvements in OpenaiChat and ARTA provider
This commit is contained in:
hlohaus 2025-03-22 07:32:30 +01:00
parent fa2344b031
commit 8f6efd5366
17 changed files with 291 additions and 86 deletions


@@ -8,7 +8,7 @@
 - [From Repository](#from-repository)
 - [Using the Interference API](#using-the-interference-api)
 - [Basic Usage](#basic-usage)
-- [With OpenAI Library](#with-openai-library)
+- [Using the OpenAI Library](#using-the-openai-library)
 - [With Requests Library](#with-requests-library)
 - [Selecting a Provider](#selecting-a-provider)
 - [Key Points](#key-points)
@@ -95,35 +95,45 @@ curl -X POST "http://localhost:1337/v1/images/generate" \
 }'
 ```
 ---
-### With OpenAI Library
+### Using the OpenAI Library
-**To utilize the Inference API with the OpenAI Python library, you can specify the `base_url` to point to your endpoint:**
+**You can use the Interference API with the OpenAI Python library by changing the `base_url`:**
 ```python
 from openai import OpenAI
 # Initialize the OpenAI client
 client = OpenAI(
-    api_key="secret",
-    base_url="http://localhost:1337/v1"
+    api_key="secret", # Set an API key (use "secret" if your provider doesn't require one)
+    base_url="http://localhost:1337/v1" # Point to your local or custom API endpoint
 )
 # Create a chat completion request
 response = client.chat.completions.create(
-    model="gpt-4o-mini",
-    messages=[{"role": "user", "content": "Write a poem about a tree"}],
-    stream=True,
+    model="gpt-4o-mini", # Specify the model to use
+    messages=[{"role": "user", "content": "Write a poem about a tree"}], # Define the input message
+    stream=True, # Enable streaming for real-time responses
 )
 # Handle the response
 if isinstance(response, dict):
-    # Not streaming
+    # Non-streaming response
     print(response.choices[0].message.content)
 else:
-    # Streaming
+    # Streaming response
     for token in response:
         content = token.choices[0].delta.content
         if content is not None:
             print(content, end="", flush=True)
 ```
**Notes:**
- The `api_key` is required by the OpenAI Python library. If your provider does not require an API key, you can set it to `"secret"`. This value will be ignored by providers in G4F.
- Replace `"http://localhost:1337/v1"` with the appropriate URL for your custom or local inference API.
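The streaming branch in the docs example concatenates each chunk's `delta.content` and skips `None` deltas. A minimal self-contained sketch of that pattern, using `SimpleNamespace` stubs in place of real API chunk objects (the stub data and helper names here are hypothetical, not part of the library):

```python
from types import SimpleNamespace

def fake_stream():
    # Stand-in for a streaming response: yields chunks shaped like the
    # OpenAI client's chat-completion chunks (hypothetical sample data).
    for piece in ["A tree ", "stands ", "tall.", None]:
        delta = SimpleNamespace(content=piece)
        choice = SimpleNamespace(delta=delta)
        yield SimpleNamespace(choices=[choice])

def collect(stream):
    # Same logic as the docs' streaming branch: append each chunk's
    # delta.content, skipping chunks whose content is None.
    parts = []
    for token in stream:
        content = token.choices[0].delta.content
        if content is not None:
            parts.append(content)
    return "".join(parts)

print(collect(fake_stream()))  # -> A tree stands tall.
```

The `None` check matters because the final chunk of a stream typically carries no content, only a finish signal.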
---
### With Requests Library