Mirror of https://github.com/xtekky/gpt4free.git, synced 2025-09-27 04:36:17 +08:00
Fix generating images with OpenaiChat

- Add "flux" as an alias in the HuggingSpace providers
- Choose a random space provider in the HuggingSpace provider
- Add "Selecting a Provider" documentation
- Update the requirements list in the PyPI packages
- Fix the label of the CablyAI and DeepInfraChat providers
README.md (16 changes)
@@ -34,9 +34,18 @@ docker pull hlohaus789/g4f
 ```
 
 ## 🆕 What's New
 
-- **For comprehensive details on new features and updates, please refer to our** [Releases](https://github.com/xtekky/gpt4free/releases) **page**
-- **Join our Telegram Channel:** 📨 [telegram.me/g4f_channel](https://telegram.me/g4f_channel)
-- **Join our Discord Group:** 💬🆕️ [https://discord.gg/5E39JUWUFa](https://discord.gg/5E39JUWUFa)
+- **Explore the latest features and updates**
+
+  Find comprehensive details on our [Releases Page](https://github.com/xtekky/gpt4free/releases).
+
+- **Stay updated with our Telegram Channel** 📨
+
+  Join us at [telegram.me/g4f_channel](https://telegram.me/g4f_channel).
+
+- **Get support in our Discord Community** 🤝💻
+
+  Reach out for help in our [Support Group: discord.gg/qXA4Wf4Fsm](https://discord.gg/qXA4Wf4Fsm).
+
+- **Subscribe to our Discord News Channel** 💬🆕️
+
+  Stay informed about updates via our [News Channel: discord.gg/5E39JUWUFa](https://discord.gg/5E39JUWUFa).
 
 ## 🔻 Site Takedown
@@ -218,6 +227,7 @@ The **Interference API** enables seamless integration with OpenAI's services thr
 
 - **Documentation**: [Interference API Docs](docs/interference-api.md)
 - **Endpoint**: `http://localhost:1337/v1`
 - **Swagger UI**: Explore the OpenAPI documentation via Swagger UI at `http://localhost:1337/docs`
+- **Provider Selection**: [How to Specify a Provider?](docs/selecting_a_provider.md)
 
 This API is designed for straightforward implementation and enhanced compatibility with other OpenAI integrations.
@@ -10,10 +10,10 @@
- [Basic Usage](#basic-usage)
  - [With OpenAI Library](#with-openai-library)
  - [With Requests Library](#with-requests-library)
- [Selecting a Provider](#selecting-a-provider)
- [Key Points](#key-points)
- [Conclusion](#conclusion)

## Introduction
The G4F Interference API is a powerful tool that allows you to serve other OpenAI integrations using G4F (Gpt4free). It acts as a proxy, translating requests intended for the OpenAI API into requests compatible with G4F providers. This guide will walk you through the process of setting up, running, and using the Interference API effectively.
@@ -149,6 +149,12 @@ for choice in json_response:
 ```
 
+## Selecting a Provider
+
+**Provider Selection**: [How to Specify a Provider?](docs/selecting_a_provider.md)
+
+Selecting the right provider is a key step in configuring the G4F Interference API to suit your needs. Refer to the guide linked above for detailed instructions on choosing and specifying a provider.
+
 ## Key Points
 - The Interference API translates OpenAI API requests into G4F provider requests.
 - It can be run from either the PyPI package or the cloned repository.
docs/selecting_a_provider.md (132 lines, new file)
@@ -0,0 +1,132 @@

### Selecting a Provider

**The Interference API also allows you to specify which provider(s) to use for processing requests. This is done using the `provider` parameter, which can be included alongside the `model` parameter in your API requests. Providers can be specified as a space-separated string of provider IDs.**

#### How to Specify a Provider

To select one or more providers, include the `provider` parameter in your request body. This parameter accepts a string of space-separated provider IDs. Each ID represents a specific provider available in the system.

#### Example: Getting a List of Available Providers

Use the following Python code to fetch the list of available providers:

```python
import requests

url = "http://localhost:1337/v1/providers"

response = requests.get(url, headers={"accept": "application/json"})
providers = response.json()

for provider in providers:
    print(f"ID: {provider['id']}, URL: {provider['url']}")
```
#### Example: Getting Detailed Information About a Specific Provider

Retrieve details about a specific provider, including supported models and parameters:

```python
import requests

provider_id = "HuggingChat"
url = f"http://localhost:1337/v1/providers/{provider_id}"

response = requests.get(url, headers={"accept": "application/json"})
provider_details = response.json()

print(f"Provider ID: {provider_details['id']}")
print(f"Supported Models: {provider_details['models']}")
print(f"Parameters: {provider_details['params']}")
```
#### Example: Using a Single Provider in Text Generation

Specify a single provider (`HuggingChat`) in the request body:

```python
import requests

url = "http://localhost:1337/v1/chat/completions"

payload = {
    "model": "gpt-4o-mini",
    "provider": "HuggingChat",
    "messages": [
        {"role": "user", "content": "Write a short story about a robot"}
    ]
}

response = requests.post(url, json=payload, headers={"Content-Type": "application/json"})
data = response.json()

if "choices" in data:
    for choice in data["choices"]:
        print(choice["message"]["content"])
else:
    print("No response received")
```
#### Example: Using Multiple Providers in Text Generation

Specify multiple providers by separating their IDs with a space:

```python
import requests

url = "http://localhost:1337/v1/chat/completions"

payload = {
    "model": "gpt-4o-mini",
    "provider": "HuggingChat AnotherProvider",
    "messages": [
        {"role": "user", "content": "What are the benefits of AI in education?"}
    ]
}

response = requests.post(url, json=payload, headers={"Content-Type": "application/json"})
data = response.json()

if "choices" in data:
    for choice in data["choices"]:
        print(choice["message"]["content"])
else:
    print("No response received")
```
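Since the API expects the provider list as one space-separated string, composing and parsing it client-side is a one-liner. A small illustrative sketch (the `provider_param` and `parse_provider_param` helpers are not part of G4F, and `AnotherProvider` is the same placeholder used above):

```python
def provider_param(*provider_ids: str) -> str:
    """Join provider IDs into the space-separated string the API expects."""
    return " ".join(provider_ids)

def parse_provider_param(value: str) -> list[str]:
    """Split a provider string back into individual IDs."""
    return value.split()

# Build the value for the "provider" field of the request payload:
print(provider_param("HuggingChat", "AnotherProvider"))  # HuggingChat AnotherProvider
```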
#### Example: Using a Provider for Image Generation

You can also use the `provider` parameter for image generation:

```python
import requests

url = "http://localhost:1337/v1/images/generate"

payload = {
    "prompt": "a futuristic cityscape at sunset",
    "model": "flux",
    "provider": "HuggingSpace",
    "response_format": "url"
}

response = requests.post(url, json=payload, headers={"Content-Type": "application/json"})
data = response.json()

if "data" in data:
    for item in data["data"]:
        print(f"Image URL: {item['url']}")
else:
    print("No response received")
```
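Before sending a request, the provider details fetched earlier can be used to check model support client-side. A hypothetical sketch: the dict shape below is an assumption based on the fields printed in the `/v1/providers/{provider_id}` example above (`id`, `models`, `params`), and `supports_model` is illustrative, not part of G4F:

```python
def supports_model(provider_details: dict, model: str) -> bool:
    """Return True if the provider's details list the requested model."""
    return model in provider_details.get("models", [])

# Hypothetical details dict, shaped like a /v1/providers/{id} response:
sample = {"id": "HuggingChat", "models": ["gpt-4o-mini", "command-r"], "params": []}

print(supports_model(sample, "gpt-4o-mini"))  # True
print(supports_model(sample, "flux"))         # False
```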
### Key Points About Providers
- **Flexibility:** Use the `provider` parameter to select one or more providers for your requests.
- **Discoverability:** Fetch available providers using the `/providers` endpoint.
- **Compatibility:** Check provider details to ensure support for the desired models and parameters.

By specifying providers in a space-separated string, you can efficiently target specific providers or combine multiple providers in a single request. This approach gives you fine-grained control over how your requests are processed.

---

[Go to Interference API Docs](docs/interference-api.md)
@@ -4,6 +4,7 @@ from ..typing import AsyncResult, Messages
 from .needs_auth import OpenaiAPI
 
 class CablyAI(OpenaiAPI):
+    label = __name__
     url = "https://cablyai.com"
     login_url = None
     needs_auth = False
@@ -4,6 +4,7 @@ from ..typing import AsyncResult, Messages
 from .needs_auth import OpenaiAPI
 
 class DeepInfraChat(OpenaiAPI):
+    label = __name__
     url = "https://deepinfra.com/chat"
     login_url = None
     needs_auth = False
@@ -16,9 +16,9 @@ class BlackForestLabsFlux1Dev(AsyncGeneratorProvider, ProviderModelMixin):
 
     default_model = 'black-forest-labs-flux-1-dev'
     default_image_model = default_model
-    image_models = [default_image_model]
+    model_aliases = {"flux-dev": default_model, "flux": default_model}
+    image_models = [default_image_model, *model_aliases.keys()]
     models = image_models
-    model_aliases = {"flux-dev": default_model}
 
     @classmethod
     async def create_async_generator(
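The change above maps both `flux-dev` and `flux` to the same underlying model and folds the alias names into `image_models`. A standalone sketch of how such an alias table resolves an incoming model name (the constants mirror the diff; the `resolve` helper is illustrative, not part of the provider):

```python
default_model = "black-forest-labs-flux-1-dev"
model_aliases = {"flux-dev": default_model, "flux": default_model}
# Alias names are advertised alongside the concrete model:
image_models = [default_model, *model_aliases.keys()]

def resolve(model: str) -> str:
    """Map an alias to its concrete model name; pass through unknown names."""
    return model_aliases.get(model, model)

print(resolve("flux"))      # black-forest-labs-flux-1-dev
print(resolve("flux-dev"))  # black-forest-labs-flux-1-dev
```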
@@ -17,9 +17,9 @@ class BlackForestLabsFlux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):
 
     default_model = "black-forest-labs-flux-1-schnell"
     default_image_model = default_model
-    image_models = [default_image_model]
+    model_aliases = {"flux-schnell": default_model, "flux": default_model}
+    image_models = [default_image_model, *model_aliases.keys()]
     models = image_models
-    model_aliases = {"flux-schnell": default_model}
 
     @classmethod
     async def create_async_generator(
@@ -1,7 +1,6 @@
from __future__ import annotations

import json
import uuid
from aiohttp import ClientSession, FormData

from ...typing import AsyncResult, Messages

@@ -24,12 +23,10 @@ class CohereForAI(AsyncGeneratorProvider, ProviderModelMixin):
        "command-r",
        "command-r7b-12-2024",
    ]

    model_aliases = {
        "command-r-plus": "command-r-plus-08-2024",
        "command-r": "command-r-08-2024",
        "command-r7b": "command-r7b-12-2024",
    }

    @classmethod
@@ -17,9 +17,9 @@ class VoodoohopFlux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):
 
     default_model = "voodoohop-flux-1-schnell"
     default_image_model = default_model
-    image_models = [default_image_model]
+    model_aliases = {"flux-schnell": default_model, "flux": default_model}
+    image_models = [default_image_model, *model_aliases.keys()]
     models = image_models
-    model_aliases = {"flux-schnell": default_model}
 
     @classmethod
     async def create_async_generator(
@@ -1,5 +1,7 @@
 from __future__ import annotations
 
+import random
+
 from ...typing import AsyncResult, Messages, ImagesType
 from ...errors import ResponseError
 from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
@@ -23,8 +25,6 @@ class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):
     default_vision_model = Qwen_QVQ_72B.default_model
     providers = [BlackForestLabsFlux1Dev, BlackForestLabsFlux1Schnell, VoodoohopFlux1Schnell, CohereForAI, Qwen_QVQ_72B, Qwen_Qwen_2_72B_Instruct, StableDiffusion35Large]
-
-
 
     @classmethod
     def get_parameters(cls, **kwargs) -> dict:
         parameters = {}
@@ -57,6 +57,7 @@ class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):
         if not model and images is not None:
             model = cls.default_vision_model
         is_started = False
+        random.shuffle(cls.providers)
         for provider in cls.providers:
             if model in provider.model_aliases:
                 async for chunk in provider.create_async_generator(provider.model_aliases[model], messages, **kwargs):
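The new `random.shuffle` call spreads requests across spaces: the provider list is shuffled, then the first provider whose alias table contains the requested model handles the request. A self-contained sketch of that selection logic (the dicts below are stand-ins for the real provider classes, with alias tables borrowed from the diffs above):

```python
import random

# Hypothetical stand-ins for HuggingSpace provider classes:
providers = [
    {"name": "BlackForestLabsFlux1Dev", "model_aliases": {"flux": "black-forest-labs-flux-1-dev"}},
    {"name": "BlackForestLabsFlux1Schnell", "model_aliases": {"flux": "black-forest-labs-flux-1-schnell"}},
    {"name": "CohereForAI", "model_aliases": {"command-r": "command-r-08-2024"}},
]

def pick_provider(model: str):
    """Shuffle the pool, then return the first provider supporting `model`."""
    pool = providers[:]
    random.shuffle(pool)
    for provider in pool:
        if model in provider["model_aliases"]:
            return provider
    return None

chosen = pick_provider("flux")
print(chosen["name"])  # one of the two Flux spaces, chosen at random
```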
@@ -264,7 +264,7 @@ class OpenaiChat(AsyncAuthedProvider, ProviderModelMixin):
         return messages
 
     @classmethod
-    async def get_generated_image(cls, auth_result: AuthResult, session: StreamSession, element: dict, prompt: str = None) -> ImageResponse:
+    async def get_generated_image(cls, session: StreamSession, auth_result: AuthResult, element: dict, prompt: str = None) -> ImageResponse:
         try:
             prompt = element["metadata"]["dalle"]["prompt"]
             file_id = element["asset_pointer"].split("file-service://", 1)[1]
@@ -452,7 +452,7 @@ class OpenaiChat(AsyncAuthedProvider, ProviderModelMixin):
         await raise_for_status(response)
         buffer = u""
         async for line in response.iter_lines():
-            async for chunk in cls.iter_messages_line(session, line, conversation, sources):
+            async for chunk in cls.iter_messages_line(session, auth_result, line, conversation, sources):
                 if isinstance(chunk, str):
                     chunk = chunk.replace("\ue203", "").replace("\ue204", "").replace("\ue206", "")
                     buffer += chunk
@@ -500,7 +500,7 @@ class OpenaiChat(AsyncAuthedProvider, ProviderModelMixin):
                     yield FinishReason(conversation.finish_reason)
 
     @classmethod
-    async def iter_messages_line(cls, session: StreamSession, line: bytes, fields: Conversation, sources: Sources) -> AsyncIterator:
+    async def iter_messages_line(cls, session: StreamSession, auth_result: AuthResult, line: bytes, fields: Conversation, sources: Sources) -> AsyncIterator:
         if not line.startswith(b"data: "):
             return
         elif line.startswith(b"data: [DONE]"):
@@ -546,7 +546,7 @@ class OpenaiChat(AsyncAuthedProvider, ProviderModelMixin):
                     generated_images = []
                     for element in c.get("parts"):
                         if isinstance(element, dict) and element.get("content_type") == "image_asset_pointer":
-                            image = cls.get_generated_image(session, cls._headers, element)
+                            image = cls.get_generated_image(session, auth_result, element)
                             generated_images.append(image)
                     for image_response in await asyncio.gather(*generated_images):
                         if image_response is not None:
setup.py (9 changes)
@@ -48,11 +48,11 @@ EXTRA_REQUIRE = {
    'slim': [
        "curl_cffi>=0.6.2",
        "certifi",
        "browser_cookie3",
        "duckduckgo-search>=5.0",  # internet.search
        "beautifulsoup4",  # internet.search and bing.create_images
        "aiohttp_socks",  # proxy
        "pillow",  # image
        "cairosvg",  # svg image
        "werkzeug", "flask",  # gui
        "fastapi",  # api
        "uvicorn",  # api
@@ -68,7 +68,8 @@ EXTRA_REQUIRE = {
     "webview": [
         "pywebview",
         "platformdirs",
-        "cryptography"
+        "plyer",
+        "cryptography",
     ],
     "api": [
         "loguru", "fastapi",
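One reason dependency lists like these deserve care: in Python, adjacent string literals concatenate implicitly, so a missing comma between two package names silently merges them into one bogus requirement instead of raising an error. A minimal illustration (the package names are only examples):

```python
# Missing comma between "cryptography" and "plyer":
broken = [
    "pywebview",
    "platformdirs",
    "cryptography"
    "plyer",
]
print(len(broken))  # 3, not 4
print(broken[-1])   # cryptographyplyer

# With the comma restored, both packages are listed separately:
fixed = ["pywebview", "platformdirs", "plyer", "cryptography"]
print(len(fixed))   # 4
```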
@@ -79,10 +80,10 @@ EXTRA_REQUIRE = {
         "werkzeug", "flask",
         "beautifulsoup4", "pillow",
         "duckduckgo-search>=5.0",
         "browser_cookie3",
     ],
     "search": [
-        "beautifulsoup4", "pillow",
+        "beautifulsoup4",
+        "pillow",
         "duckduckgo-search>=5.0",
     ],
     "local": [