mirror of
https://github.com/xtekky/gpt4free.git
synced 2025-10-05 08:16:58 +08:00
docs: update providers documentation and enhance support for Blackbox HAR auth
- Added "No auth / HAR file" authentication type in providers-and-models.md
- Added "Video generation" column to provider tables for future capability
- Updated model counts and provider capabilities throughout documentation
- Fixed ARTA provider with improved error handling and response validation
- Enhanced AllenAI provider with vision model support and proper image handling
- Significantly improved Blackbox provider:
  - Added HAR file authentication support
  - Added subscription status checking
  - Added premium/demo model differentiation
  - Improved session handling and error recovery
- Enhanced DDG provider with better error handling for challenges
- Improved PollinationsAI and PollinationsImage providers' model handling
- Added VideoModel class in g4f/models.py
- Added audio/video generation indicators in GUI components
- Added new Ai2 models: olmo-1-7b, olmo-2-32b, olmo-4-synthetic
- Added new commit message generation tool in etc/tool/commit.py
@@ -26,6 +26,7 @@ This document provides an overview of various AI providers and models, including
- **Optional API key** - Works without authentication, but you can provide an API key for better rate limits or additional features. The service is usable without an API key.
- **API key / Cookies** - Supports both authentication methods. You can use either an API key or browser cookies for authentication.
- **No auth required** - No authentication needed. The service is publicly available without any credentials.
- **No auth / HAR file** - Supports both authentication methods. The service works without authentication, but you can also use HAR file authentication for potentially enhanced features or capabilities.
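As a rough sketch, the four authentication types above can be thought of as deciding which credential fields (if any) a request carries. The helper below is purely illustrative — its names are not part of g4f:

```python
def auth_settings(auth_type, api_key=None, har_file=None):
    """Illustrative mapping from the auth types listed above to credentials."""
    if auth_type == "no_auth":
        return {}                      # publicly available, no credentials needed
    if auth_type == "optional_api_key":
        # works without a key; a key may improve rate limits or unlock features
        return {"api_key": api_key} if api_key else {}
    if auth_type == "api_key_or_cookies":
        # either credential form is accepted
        return {"api_key": api_key} if api_key else {"cookies": "from-browser"}
    if auth_type == "no_auth_or_har":
        # works anonymously; a HAR capture can enable enhanced capabilities
        return {"har_file": har_file} if har_file else {}
    raise ValueError(f"unknown auth type: {auth_type}")
```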
**Symbols:**
- ✔ - Feature is supported
@@ -34,108 +35,110 @@ This document provides an overview of various AI providers and models, including
---
### Providers No auth required
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[playground.allenai.org](https://playground.allenai.org)|No auth required|`g4f.Provider.AllenAI`|`tulu-3-405b, olmo-2-13b, tulu-3-1-8b, tulu-3-70b, olmoe-0125`|❌|❌|❌||
|[ai-arta.com](https://ai-arta.com)|No auth required|`g4f.Provider.ARTA`|❌|✔ _**(17+)**_|❌|❌||
|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, blackboxai-pro, gpt-4o-mini, deepseek-chat, deepseek-v3, deepseek-r1, gpt-4o, o1, o3-mini, claude-3.7-sonnet` _**(40+)**_|`flux`|❌|`blackboxai, gpt-4o, o1, o3-mini, deepseek-v3` _**(7+)**_||
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|❌||
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(7+)**_|❌|❌|❌||
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|❌||
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|❌||
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, o1`|❌|❌|❌||
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, llama-3.3-70b, claude-3-haiku, o3-mini, mixtral-small-24b`|❌|❌|❌||
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-24b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b, yi-34b, qwen-2-72b, dolphin-2.6, dolphin-2.9, dbrx-instruct, airoboros-70b, lzlv-70b, wizardlm-2-7b, mixtral-8x22b, minicpm-2.5`|❌|❌|`llama-3.2-90b, minicpm-2.5`||
|[dynaspark.onrender.com](https://dynaspark.onrender.com)|No auth required|`g4f.Provider.Dynaspark`|`gemini-1.5-flash, gemini-2.0-flash`|❌|❌|`gemini-1.5-flash, gemini-2.0-flash`||
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌||
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌||
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|❌||
|[glider.so](https://glider.so)|No auth required|`g4f.Provider.Glider`|`llama-3.1-70b, llama-3.1-8b, llama-3.2-3b, deepseek-r1`|❌|❌|❌||
|[goabror.uz](https://goabror.uz)|No auth required|`g4f.Provider.Goabror`|`gpt-4`|❌|❌|❌||
|[hailuo.ai](https://www.hailuo.ai)|No auth required|`g4f.Provider.HailuoAI`|`MiniMax` _**(1+)**_|❌|❌|❌||
|[editor.imagelabs.net](https://editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|`sdxl-turbo`|❌|❌||
|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b, command-a`|`flux-dev, flux-schnell, sd-3.5`|❌|❌||
|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`claude-3-haiku, claude-3-opus, claude-3.5-sonnet, deepseek-r1, deepseek-chat, gemini-exp, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash-thinking, gpt-4, gpt-4o, gpt-4o-mini, llama-3-70b, llama-3-8b, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.2-90b, llama-3.3-70b, mixtral-8x7b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b`|❌|❌|❌||
|[lambda.chat](https://lambda.chat)|No auth required|`g4f.Provider.LambdaChat`|`deepseek-v3, deepseek-r1, hermes-3, nemotron-70b, llama-3.3-70b`|❌|❌|❌||
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`claude-3.5-sonnet, claude-3.7-sonnet, claude-3.7-sonnet-thinking, claude-3-opus, claude-3-sonnet, deepseek-r1, deepseek-v3, gemini-2.0-flash, gemini-2.0-flash-thinking, gemini-2.0-pro, gpt-4, gpt-4o, gpt-4o-mini, grok-3, grok-3-r1, o3-mini`|❌|❌|❌||
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini, deepseek-v3`|❌|❌|`gpt-4o-mini`||
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro`|❌|❌|❌||
|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|❌||
|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|❌||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o-mini, gpt-4o, o1-mini, qwen-2.5-coder-32b, llama-3.3-70b, mistral-nemo, llama-3.1-8b, deepseek-r1, phi-4` _**(9+)**_|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|`gpt-4o-audio`|`gpt-4o, gpt-4o-mini, o1-mini, o3-mini`||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsImage`|❌|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|❌|❌||
|[teach-anything.com](https://www.teach-anything.com)|No auth required|`g4f.Provider.TeachAnything`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌||
|[chat.typegpt.net](https://chat.typegpt.net)|No auth required|`g4f.Provider.TypeGPT`|`gpt-3.5-turbo, o3-mini, deepseek-r1, deepseek-v3, evil, o1`|❌|❌|`gpt-3.5-turbo, o3-mini`||
|[you.com](https://you.com)|[Manual cookies](https://you.com)|`g4f.Provider.You`|✔|✔|❌|✔||
|[websim.ai](https://websim.ai)|No auth required|`g4f.Provider.Websim`|`gemini-1.5-pro, gemini-1.5-flash`|`flux`|❌|❌||
|[chat9.yqcloud.top](https://chat9.yqcloud.top)|No auth required|`g4f.Provider.Yqcloud`|`gpt-4`|✔|❌|❌||
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[playground.allenai.org](https://playground.allenai.org)|No auth required|`g4f.Provider.AllenAI`|`tulu-3-405b, olmo-2-13b, tulu-3-1-8b, tulu-3-70b, olmoe-0125, olmo-2-32b`|❌|❌|❌|`olmo-4-synthetic`||
|[ai-arta.com](https://ai-arta.com)|No auth required|`g4f.Provider.ARTA`|❌|`flux` _**(16+)**_|❌|❌|❌||
|[blackbox.ai](https://www.blackbox.ai)|No auth / HAR file|`g4f.Provider.Blackbox`|`blackboxai, blackboxai-pro, gpt-4o-mini, deepseek-chat, deepseek-v3, deepseek-r1, gpt-4o, o1, o3-mini, claude-3.7-sonnet, llama-3.3-70b, mixtral-small-24b, qwq-32b` _**(40+)**_|`flux`|❌|❌|`blackboxai, gpt-4o, o1, o3-mini, deepseek-v3` _**(7+)**_||
|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|❌|❌||
|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(7+)**_|❌|❌|❌|❌||
|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|❌|❌||
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|❌|❌||
|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, o1`|❌|❌|❌|❌||
|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, llama-3.3-70b, claude-3-haiku, o3-mini, mixtral-small-24b`|❌|❌|❌|❌||
|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.2-90b, llama-3.3-70b, deepseek-v3, mixtral-small-24b, deepseek-r1, phi-4, wizardlm-2-8x22b, qwen-2.5-72b, yi-34b, qwen-2-72b, dolphin-2.6, dolphin-2.9, dbrx-instruct, airoboros-70b, lzlv-70b, wizardlm-2-7b, mixtral-8x22b, minicpm-2.5`|❌|❌|❌|`llama-3.2-90b, minicpm-2.5`||
|[dynaspark.onrender.com](https://dynaspark.onrender.com)|No auth required|`g4f.Provider.Dynaspark`|`gemini-1.5-flash, gemini-2.0-flash`|❌|❌|❌|`gemini-1.5-flash, gemini-2.0-flash`||
|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌|❌||
|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌|❌||
|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|❌|❌||
|[glider.so](https://glider.so)|No auth required|`g4f.Provider.Glider`|`llama-3.1-70b, llama-3.1-8b, llama-3.2-3b, deepseek-r1`|❌|❌|❌|❌||
|[goabror.uz](https://goabror.uz)|No auth required|`g4f.Provider.Goabror`|`gpt-4`|❌|❌|❌|❌||
|[hailuo.ai](https://www.hailuo.ai)|No auth required|`g4f.Provider.HailuoAI`|`MiniMax` _**(1+)**_|❌|❌|❌|❌||
|[editor.imagelabs.net](https://editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|`sdxl-turbo`|❌|❌|❌||
|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b, command-a`|`flux-dev, flux-schnell, sd-3.5`|❌|❌|❌||
|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`claude-3-haiku, claude-3-opus, claude-3.5-sonnet, deepseek-r1, deepseek-chat, gemini-exp, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash-thinking, gpt-4, gpt-4o, gpt-4o-mini, llama-3-70b, llama-3-8b, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.2-90b, llama-3.3-70b, mixtral-8x7b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b`|❌|❌|❌|❌||
|[lambda.chat](https://lambda.chat)|No auth required|`g4f.Provider.LambdaChat`|`deepseek-v3, deepseek-r1, hermes-3, nemotron-70b, llama-3.3-70b`|❌|❌|❌|❌||
|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`claude-3.5-sonnet, claude-3.7-sonnet, claude-3.7-sonnet-thinking, claude-3-opus, claude-3-sonnet, deepseek-r1, deepseek-v3, gemini-2.0-flash, gemini-2.0-flash-thinking, gemini-2.0-pro, gpt-4, gpt-4o, gpt-4o-mini, grok-3, grok-3-r1, o3-mini`|❌|❌|❌|❌||
|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini, deepseek-v3`|❌|❌|❌|`gpt-4o-mini`||
|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro`|❌|❌|❌|❌||
|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|❌|❌||
|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|❌|❌||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o-mini, gpt-4o, o1-mini, qwen-2.5-coder-32b, llama-3.3-70b, mistral-nemo, llama-3.1-8b, deepseek-r1, phi-4, qwq-32b, deepseek-v3, llama-3.2-11b` _**(12+)**_|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|`gpt-4o-audio` _**(3+)**_|❌|`gpt-4o, gpt-4o-mini, o1-mini, o3-mini`||
|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsImage`|❌|`flux, flux-pro, flux-dev, flux-schnell, dall-e-3, sdxl-turbo`|❌|❌|❌||
|[teach-anything.com](https://www.teach-anything.com)|No auth required|`g4f.Provider.TeachAnything`|`gemini-1.5-pro, gemini-1.5-flash`|❌|❌|❌|❌||
|[chat.typegpt.net](https://chat.typegpt.net)|No auth required|`g4f.Provider.TypeGPT`|`gpt-3.5-turbo, o3-mini, deepseek-r1, deepseek-v3, evil, o1`|❌|❌|❌|`gpt-3.5-turbo, o3-mini`||
|[you.com](https://you.com)|[Manual cookies](https://you.com)|`g4f.Provider.You`|✔|✔|❌|❌|✔||
|[websim.ai](https://websim.ai)|No auth required|`g4f.Provider.Websim`|`gemini-1.5-pro, gemini-1.5-flash`|`flux`|❌|❌|❌||
|[chat9.yqcloud.top](https://chat9.yqcloud.top)|No auth required|`g4f.Provider.Yqcloud`|`gpt-4`|✔|❌|❌|❌||
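A minimal sketch of calling one of the no-auth providers above through g4f's OpenAI-style client (assuming g4f's documented `Client(provider=...)` interface; `build_request` is an illustrative helper, not part of g4f):

```python
def build_request(model, prompt):
    """Build the OpenAI-style payload that chat.completions.create expects."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def ask_pollinations(prompt):
    # Deferred imports: g4f and network access are only needed at call time.
    from g4f.client import Client
    from g4f.Provider import PollinationsAI
    client = Client(provider=PollinationsAI)   # no credentials required
    resp = client.chat.completions.create(**build_request("gpt-4o-mini", prompt))
    return resp.choices[0].message.content
```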
---
### Providers HuggingFace
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[huggingface.co/chat](https://huggingface.co/chat)|[Manual cookies](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, deepseek-r1, qwq-32b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev, flux-schnell`|❌|❌||
|[huggingface.co/chat](https://huggingface.co/chat)|[API key / Cookies](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFace`|✔ _**(47+)**_|✔ _**(9+)**_|❌|❌||
|[api-inference.huggingface.co](https://api-inference.huggingface.co)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFaceAPI`|✔ _**(9+)**_|✔ _**(2+)**_|❌|✔ _**(1+)**_||
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[huggingface.co/chat](https://huggingface.co/chat)|[Manual cookies](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, deepseek-r1, qwq-32b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev, flux-schnell`|❌|❌|❌||
|[huggingface.co/chat](https://huggingface.co/chat)|[API key / Cookies](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFace`|✔ _**(47+)**_|✔ _**(9+)**_|❌|❌|❌||
|[api-inference.huggingface.co](https://api-inference.huggingface.co)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFaceAPI`|✔ _**(9+)**_|✔ _**(2+)**_|❌|❌|✔ _**(1+)**_||
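For the API-key rows above, the same client call works once a token from [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) is supplied. A hedged sketch, assuming `Client` accepts an `api_key` argument and that `HF_TOKEN` is our own (not a g4f) convention for the environment variable:

```python
import os

def resolve_hf_token(explicit=None):
    """Prefer an explicitly passed token, else fall back to the HF_TOKEN env var."""
    return explicit or os.environ.get("HF_TOKEN")

def ask_huggingface(prompt, token=None):
    from g4f.client import Client          # deferred: only needed at call time
    from g4f.Provider import HuggingFace
    client = Client(provider=HuggingFace, api_key=resolve_hf_token(token))
    return client.chat.completions.create(
        model="qwen-2.5-72b",
        messages=[{"role": "user", "content": prompt}],
    )
```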
---
### Providers HuggingSpace
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Dev`|❌|`flux, flux-dev`|❌|❌||
|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Schnell`|❌|`flux, flux-schnell`|❌|❌||
|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.CohereForAI_C4AI_Command`|`command-r, command-r-plus, command-r7b`|❌|❌|❌||
|[huggingface.co/spaces/deepseek-ai/Janus-Pro-7B](https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.DeepseekAI_Janus_Pro_7b`|✔|✔|❌|❌||
|[roxky-flux-1-dev.hf.space](https://roxky-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.G4F`|✔ _**(1+)**_|✔ _**(4+)**_|❌|✔ _**(1+)**_||
|[microsoft-phi-4-multimodal.hf.space](https://microsoft-phi-4-multimodal.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Microsoft_Phi_4`|`phi-4`|❌|❌|`phi-4`||
|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_QVQ_72B`|`qvq-72b`|❌|❌|❌||
|[qwen-qwen2-5.hf.space](https://qwen-qwen2-5.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5`|`qwen-2.5`|❌|❌|❌||
|[qwen-qwen2-5-1m-demo.hf.space](https://qwen-qwen2-5-1m-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5M`|`qwen-2.5-1m`|❌|❌|❌||
|[qwen-qwen2-5-max-demo.hf.space](https://qwen-qwen2-5-max-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5_Max`|`qwen-2-5-max`|❌|❌|❌||
|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_72B`|`qwen-2-72b`|❌|❌|❌||
|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.StabilityAI_SD35Large`|❌|`sd-3.5`|❌|❌||
|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Voodoohop_Flux1Schnell`|❌|`flux, flux-schnell`|❌|❌||
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Dev`|❌|`flux, flux-dev`|❌|❌|❌||
|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabs_Flux1Schnell`|❌|`flux, flux-schnell`|❌|❌|❌||
|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.CohereForAI_C4AI_Command`|`command-r, command-r-plus, command-r7b`|❌|❌|❌|❌||
|[huggingface.co/spaces/deepseek-ai/Janus-Pro-7B](https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.DeepseekAI_Janus_Pro_7b`|✔|✔|❌|❌|❌||
|[roxky-flux-1-dev.hf.space](https://roxky-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.G4F`|✔ _**(1+)**_|✔ _**(4+)**_|❌|❌|✔ _**(1+)**_||
|[microsoft-phi-4-multimodal.hf.space](https://microsoft-phi-4-multimodal.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Microsoft_Phi_4`|`phi-4`|❌|❌|❌|`phi-4`||
|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_QVQ_72B`|`qvq-72b`|❌|❌|❌|❌||
|[qwen-qwen2-5.hf.space](https://qwen-qwen2-5.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5`|`qwen-2.5`|❌|❌|❌|❌||
|[qwen-qwen2-5-1m-demo.hf.space](https://qwen-qwen2-5-1m-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5M`|`qwen-2.5-1m`|❌|❌|❌|❌||
|[qwen-qwen2-5-max-demo.hf.space](https://qwen-qwen2-5-max-demo.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_5_Max`|`qwen-2-5-max`|❌|❌|❌|❌||
|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_72B`|`qwen-2-72b`|❌|❌|❌|❌||
|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.StabilityAI_SD35Large`|❌|`sd-3.5`|❌|❌|❌||
|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Voodoohop_Flux1Schnell`|❌|`flux, flux-schnell`|❌|❌|❌||
### Providers Local
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[]( )|No auth required|`g4f.Provider.Local`|✔|❌|❌|❌||
|[ollama.com](https://ollama.com)|No auth required|`g4f.Provider.Ollama`|✔|❌|❌|❌||
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[]( )|No auth required|`g4f.Provider.Local`|✔|❌|❌|❌|❌||
|[ollama.com](https://ollama.com)|No auth required|`g4f.Provider.Ollama`|✔|❌|❌|❌|❌||
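The local providers work against a model server running on your own machine. A sketch for Ollama, assuming g4f's `Ollama` provider targets the daemon's default endpoint and that a model such as `llama3` has already been pulled locally (both are assumptions, not verified here):

```python
def ollama_payload(model, prompt):
    """OpenAI-style chat payload for a locally served model."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def ask_ollama(prompt, model="llama3"):
    from g4f.client import Client      # deferred: g4f needed only at call time
    from g4f.Provider import Ollama
    client = Client(provider=Ollama)   # talks to the local Ollama daemon
    resp = client.chat.completions.create(**ollama_payload(model, prompt))
    return resp.choices[0].message.content
```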
---
### Providers MiniMax
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[hailuo.ai/chat](https://www.hailuo.ai/chat)|[Get API key](https://intl.minimaxi.com/user-center/basic-information/interface-key)|`g4f.Provider.MiniMax`|`MiniMax` _**(1+)**_|❌|❌|❌||
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[hailuo.ai/chat](https://www.hailuo.ai/chat)|[Get API key](https://intl.minimaxi.com/user-center/basic-information/interface-key)|`g4f.Provider.MiniMax`|`MiniMax` _**(1+)**_|❌|❌|❌|❌||
---
### Providers Needs Auth
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[console.anthropic.com](https://console.anthropic.com)|[Get API key](https://console.anthropic.com/settings/keys)|`g4f.Provider.Anthropic`|✔ _**(8+)**_|❌|❌|❌||
|[bing.com/images/create](https://www.bing.com/images/create)|[Manual cookies](https://www.bing.com)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌||
|[cablyai.com/chat](https://cablyai.com/chat)|[Get API key](https://cablyai.com)|`g4f.Provider.CablyAI`|✔|✔|❌|✔||
|[inference.cerebras.ai](https://inference.cerebras.ai/)|[Get API key](https://cloud.cerebras.ai)|`g4f.Provider.Cerebras`|✔ _**(3+)**_|❌|❌|❌||
|[copilot.microsoft.com](https://copilot.microsoft.com)|[Manual cookies](https://copilot.microsoft.com)|`g4f.Provider.CopilotAccount`|✔ _**(1+)**_|✔ _**(1+)**_|❌|✔ _**(1+)**_||
|[deepinfra.com](https://deepinfra.com)|[Get API key](https://deepinfra.com/dash/api_keys)|`g4f.Provider.DeepInfra`|✔ _**(17+)**_|✔ _**(6+)**_|❌|❌||
|[platform.deepseek.com](https://platform.deepseek.com)|[Get API key](https://platform.deepseek.com/api_keys)|`g4f.Provider.DeepSeek`|✔ _**(1+)**_|❌|❌|❌||
|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini-2.0`|`gemini-2.0`|❌|`gemini-2.0`||
|[ai.google.dev](https://ai.google.dev)|[Get API key](https://aistudio.google.com/u/0/apikey)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|❌|`gemini-1.5-pro`||
|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|[Manual cookies](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔ _**(3+)**_|❌|❌|❌||
|[github.com/copilot](https://github.com/copilot)|[Manual cookies](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔ _**(4+)**_|❌|❌|❌||
|[glhf.chat](https://glhf.chat)|[Get API key](https://glhf.chat/user-settings/api)|`g4f.Provider.GlhfChat`|✔ _**(22+)**_|❌|❌|❌||
|[console.groq.com/playground](https://console.groq.com/playground)|[Get API key](https://console.groq.com/keys)|`g4f.Provider.Groq`|✔ _**(18+)**_|❌|❌|✔||
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|❌||
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAIAccount`|❌|`meta-ai`|❌|❌||
|[designer.microsoft.com](https://designer.microsoft.com)|[Manual cookies](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌||
|[platform.openai.com](https://platform.openai.com)|[Get API key](https://platform.openai.com/settings/organization/api-keys)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|❌||
|[chatgpt.com](https://chatgpt.com)|[Manual cookies](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4` _**(8+)**_|✔ _**(1)**_|❌|✔ _**(8+)**_||
|[perplexity.ai](https://www.perplexity.ai)|[Get API key](https://www.perplexity.ai/settings/api)|`g4f.Provider.PerplexityApi`|✔ _**(6+)**_|❌|❌|❌||
|[chat.reka.ai](https://chat.reka.ai)|[Manual cookies](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|✔|❌|❌||
|[replicate.com](https://replicate.com)|[Get API key](https://replicate.com/account/api-tokens)|`g4f.Provider.Replicate`|✔ _**(1+)**_|❌|❌|❌||
|[beta.theb.ai](https://beta.theb.ai)|[Get API key](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔ _**(21+)**_|❌|❌|❌||
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|[Manual cookies](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|❌||
|[console.x.ai](https://console.x.ai)|[Get API key](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|❌||
| Website | API Credentials | Provider | Text generation | Image generation | Audio generation | Video generation | Vision (Image Upload) | Status |
|----------|-------------|--------------|---------------|--------|--------|------|------|------|
|[console.anthropic.com](https://console.anthropic.com)|[Get API key](https://console.anthropic.com/settings/keys)|`g4f.Provider.Anthropic`|✔ _**(8+)**_|❌|❌|❌|❌||
|[bing.com/images/create](https://www.bing.com/images/create)|[Manual cookies](https://www.bing.com)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌|❌||
|[cablyai.com/chat](https://cablyai.com/chat)|[Get API key](https://cablyai.com)|`g4f.Provider.CablyAI`|✔|✔|❌|❌|✔||
|[inference.cerebras.ai](https://inference.cerebras.ai/)|[Get API key](https://cloud.cerebras.ai)|`g4f.Provider.Cerebras`|✔ _**(3+)**_|❌|❌|❌|❌||
|[copilot.microsoft.com](https://copilot.microsoft.com)|[Manual cookies](https://copilot.microsoft.com)|`g4f.Provider.CopilotAccount`|✔ _**(1+)**_|✔ _**(1+)**_|❌|❌|✔ _**(1+)**_||
|[deepinfra.com](https://deepinfra.com)|[Get API key](https://deepinfra.com/dash/api_keys)|`g4f.Provider.DeepInfra`|✔ _**(17+)**_|✔ _**(6+)**_|❌|❌|❌||
|[platform.deepseek.com](https://platform.deepseek.com)|[Get API key](https://platform.deepseek.com/api_keys)|`g4f.Provider.DeepSeek`|✔ _**(1+)**_|❌|❌|❌|❌||
|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini-2.0`|`gemini-2.0`|❌|❌|`gemini-2.0`||
|[ai.google.dev](https://ai.google.dev)|[Get API key](https://aistudio.google.com/u/0/apikey)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|❌|❌|`gemini-1.5-pro`||
|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|[Manual cookies](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔ _**(3+)**_|❌|❌|❌|❌||
|[github.com/copilot](https://github.com/copilot)|[Manual cookies](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔ _**(4+)**_|❌|❌|❌|❌||
|[glhf.chat](https://glhf.chat)|[Get API key](https://glhf.chat/user-settings/api)|`g4f.Provider.GlhfChat`|✔ _**(22+)**_|❌|❌|❌|❌||
|[console.groq.com/playground](https://console.groq.com/playground)|[Get API key](https://console.groq.com/keys)|`g4f.Provider.Groq`|✔ _**(18+)**_|❌|❌|❌|✔||
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|❌|❌||
|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAIAccount`|❌|`meta-ai`|❌|❌|❌||
|[designer.microsoft.com](https://designer.microsoft.com)|[Manual cookies](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌|❌||
|[platform.openai.com](https://platform.openai.com)|[Get API key](https://platform.openai.com/settings/organization/api-keys)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|❌|❌||
|[chatgpt.com](https://chatgpt.com)|[Manual cookies](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4` _**(8+)**_|✔ _**(1)**_|❌|❌|✔ _**(8+)**_||
|[perplexity.ai](https://www.perplexity.ai)|[Get API key](https://www.perplexity.ai/settings/api)|`g4f.Provider.PerplexityApi`|✔ _**(6+)**_|❌|❌|❌|❌||
|[chat.reka.ai](https://chat.reka.ai)|[Manual cookies](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|✔|❌|❌|❌||
|[replicate.com](https://replicate.com)|[Get API key](https://replicate.com/account/api-tokens)|`g4f.Provider.Replicate`|✔ _**(1+)**_|❌|❌|❌|❌||
|[beta.theb.ai](https://beta.theb.ai)|[Get API key](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔ _**(21+)**_|❌|❌|❌|❌||
|[whiterabbitneo.com](https://www.whiterabbitneo.com)|[Manual cookies](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|❌|❌||
|
||||
|[console.x.ai](https://console.x.ai)|[Get API key](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|❌|❌||
|
||||
|
||||
---

## Models
@@ -160,13 +163,13 @@ This document provides an overview of various AI providers and models, including
 |llama-3.1-405b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.1-405B)|
 |llama-3.2-1b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-1B)|
 |llama-3.2-3b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-3B)|
-|llama-3.2-11b|Meta Llama|3+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
+|llama-3.2-11b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
 |llama-3.2-90b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)|
-|llama-3.3-70b|Meta Llama|7+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-3/)|
+|llama-3.3-70b|Meta Llama|8+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-3/)|
 |mixtral-8x7b|Mistral|1+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
 |mixtral-8x22b|Mistral|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1)|
 |mistral-nemo|Mistral|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
-|mixtral-small-24b|Mistral|2+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)|
+|mixtral-small-24b|Mistral|3+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)|
 |hermes-3|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B-FP8)|
 |phi-3.5-mini|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|
 |phi-4|Microsoft|3+ Providers|[techcommunity.microsoft.com](https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090)|
@@ -199,11 +202,11 @@ This document provides an overview of various AI providers and models, including
 |qwen-2.5-coder-32b|Qwen|3+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
 |qwen-2.5-1m|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-1M-Demo)|
 |qwen-2-5-max|Qwen|1+ Providers|[qwen-ai.com](https://www.qwen-ai.com/2-5-max/)|
-|qwq-32b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
+|qwq-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
 |qvq-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/QVQ-72B-Preview)|
 |pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
 |deepseek-chat|DeepSeek|2+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
-|deepseek-v3|DeepSeek|5+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
+|deepseek-v3|DeepSeek|6+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
 |deepseek-r1|DeepSeek|10+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/news/news250120)|
 |janus-pro-7b|DeepSeek|2+ Providers|[api-docs.deepseek.com](https://api-docs.deepseek.com/docs/janus-pro-7b)|
 |grok-3|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-3)|
@@ -223,11 +226,13 @@ This document provides an overview of various AI providers and models, including
 |airoboros-70b|DeepInfra|1+ Providers|[huggingface.co](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b)|
 |lzlv-70b|Lizpreciatior|1+ Providers|[huggingface.co](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b)|
 |minicpm-2.5|OpenBMB|1+ Providers|[huggingface.co](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5)|
-|tulu-3-405b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
-|olmo-2-13b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
 |tulu-3-1-8b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
 |tulu-3-70b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
 |olmoe-0125|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
+|tulu-3-405b|Ai2|1+ Providers|[allenai.org](https://allenai.org/documentation)|
+|olmo-1-7b|Ai2|1+ Providers|[allenai.org](https://allenai.org/olmo)|
+|olmo-2-13b|Ai2|1+ Providers|[allenai.org](https://allenai.org/olmo)|
+|olmo-2-32b|Ai2|1+ Providers|[allenai.org](https://allenai.org/olmo)|
+|olmo-4-synthetic|Ai2|1+ Providers|[allenai.org](https://allenai.org/olmo)|
 |lfm-40b|Liquid AI|1+ Providers|[liquid.ai](https://www.liquid.ai/liquid-foundation-models)|
 |evil|Evil Mode - Experimental|2+ Providers|[]( )|
@@ -237,7 +242,7 @@ This document provides an overview of various AI providers and models, including
 |-------|---------------|-----------|---------|
 |sdxl-turbo|Stability AI|2+ Providers|[huggingface.co](https://huggingface.co/stabilityai/sdxl-turbo)|
 |sd-3.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/stable-diffusion-3.5-large)|
-|flux|Black Forest Labs|4+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
+|flux|Black Forest Labs|5+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
 |flux-pro|Black Forest Labs|1+ Providers|[huggingface.co](https://huggingface.co/enhanceaiteam/FLUX.1-Pro)|
 |flux-dev|Black Forest Labs|4+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-dev)|
 |flux-schnell|Black Forest Labs|4+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
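The model tables above follow a fixed pipe-delimited layout (Model | Base Provider | Providers | Website). As an aside, pulling those fields out of a row can be done with plain string splitting; a minimal sketch (the function name and dict keys are illustrative, not part of the repo):

```python
def parse_model_row(row: str) -> dict:
    """Split a pipe-delimited markdown table row into named fields."""
    # Strip the outer pipes, then split on the interior ones
    cells = [cell.strip() for cell in row.strip().strip("|").split("|")]
    if len(cells) < 4:
        raise ValueError(f"Expected 4 columns, got {len(cells)}: {row!r}")
    return {
        "model": cells[0],
        "base_provider": cells[1],
        "providers": cells[2],
        "website": cells[3],
    }

row = "|qwq-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|"
parsed = parse_model_row(row)
```

This only works for rows whose cell text contains no literal `|`, which holds for every row in these tables.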
105 etc/tool/commit.py Executable file
@@ -0,0 +1,105 @@
#!/usr/bin/env python3
"""
AI Commit Message Generator using gpt4free (g4f)

This tool uses AI to generate meaningful git commit messages based on
staged changes. It analyzes the git diff and suggests appropriate commit
messages following conventional commit format.

Usage:
    python -m etc.tool.commit
"""
import subprocess
import sys
from g4f.client import Client

def get_git_diff():
    """Get the current git diff for staged changes"""
    try:
        diff_process = subprocess.run(
            ["git", "diff", "--staged"],
            capture_output=True,
            text=True
        )
        return diff_process.stdout
    except Exception as e:
        print(f"Error running git diff: {e}")
        return None

def generate_commit_message(diff_text):
    """Generate a commit message based on the git diff"""
    if not diff_text or diff_text.strip() == "":
        return "No changes staged for commit"

    client = Client()

    prompt = f"""
```
{diff_text}
```

Analyze ONLY the exact changes in this git diff and create a precise commit message.

FORMAT:
1. First line: "<type>: <summary>" (max 70 chars)
   - Type: feat, fix, docs, refactor, test, etc.
   - Summary must describe ONLY actual changes shown in the diff

2. Leave one blank line

3. Add sufficient bullet points to:
   - Describe ALL specific changes seen in the diff
   - Reference exact functions/files/components that were modified
   - Do NOT mention anything not explicitly shown in the code changes
   - Avoid general statements or assumptions not directly visible in diff
   - Include enough points to cover all significant changes (don't limit to a specific number)

IMPORTANT: Be 100% factual. Only mention code that was actually changed. Never invent or assume changes not shown in the diff. If unsure about a change's purpose, describe what changed rather than why. Output nothing except for the commit message, and don't surround it in quotes.
"""

    try:
        response = client.chat.completions.create(
            model="claude-3.7-sonnet",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content.strip()
    except Exception as e:
        print(f"Error generating commit message: {e}")
        return None

def main():
    print("Fetching git diff...")
    diff = get_git_diff()

    if diff is None:
        print("Failed to get git diff. Are you in a git repository?")
        sys.exit(1)

    if diff.strip() == "":
        print("No changes staged for commit. Stage changes with 'git add' first.")
        sys.exit(0)

    print("Generating commit message...")
    commit_message = generate_commit_message(diff)

    if commit_message:
        print("\nGenerated commit message:")
        print("-" * 50)
        print(commit_message)
        print("-" * 50)

        user_input = input("\nDo you want to use this commit message? (y/n): ")
        if user_input.lower() == 'y':
            try:
                subprocess.run(
                    ["git", "commit", "-m", commit_message],
                    check=True
                )
                print("Commit successful!")
            except subprocess.CalledProcessError as e:
                print(f"Error making commit: {e}")
    else:
        print("Failed to generate commit message.")

if __name__ == "__main__":
    main()
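The FORMAT rules the prompt dictates ("<type>: <summary>", at most 70 characters, a known type prefix) can be checked mechanically before committing; a small sketch of such a validator (this helper is hypothetical and not part of commit.py, and the accepted type set is an assumption based on the prompt's examples):

```python
import re

# Types listed in the prompt, plus a few common conventional-commit extras (assumption)
COMMIT_TYPES = {"feat", "fix", "docs", "refactor", "test", "chore", "style", "perf"}

def validate_first_line(message: str) -> bool:
    """Check the first line against '<type>: <summary>' with a 70-char cap."""
    first_line = message.splitlines()[0] if message else ""
    match = re.match(r"^([a-z]+): (.+)$", first_line)
    if match is None or len(first_line) > 70:
        return False
    return match.group(1) in COMMIT_TYPES
```

A check like this could gate the `git commit` call in `main()` so a malformed AI answer is rejected instead of committed.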
@@ -27,7 +27,8 @@ class ARTA(AsyncGeneratorProvider, ProviderModelMixin):
     default_model = "Flux"
     default_image_model = default_model
     model_aliases = {
-        "flux": "Flux",
+        default_image_model: default_image_model,
+        "flux": default_image_model,
         "medieval": "Medieval",
         "vincent_van_gogh": "Vincent Van Gogh",
         "f_dev": "F Dev",
@@ -91,7 +92,17 @@ class ARTA(AsyncGeneratorProvider, ProviderModelMixin):
         # Step 1: Generate Authentication Token
         auth_payload = {"clientType": "CLIENT_TYPE_ANDROID"}
         async with session.post(cls.auth_url, json=auth_payload, proxy=proxy) as auth_response:
-            auth_data = await auth_response.json()
+            if auth_response.status >= 400:
+                error_text = await auth_response.text()
+                raise ResponseError(f"Failed to obtain authentication token. Status: {auth_response.status}, Response: {error_text}")
+
+            try:
+                auth_data = await auth_response.json()
+            except Exception as e:
+                error_text = await auth_response.text()
+                content_type = auth_response.headers.get('Content-Type', 'unknown')
+                raise ResponseError(f"Failed to parse auth response as JSON. Content-Type: {content_type}, Error: {str(e)}, Response: {error_text}")
+
             auth_token = auth_data.get("idToken")
             #refresh_token = auth_data.get("refreshToken")
             if not auth_token:
@@ -107,7 +118,17 @@ class ARTA(AsyncGeneratorProvider, ProviderModelMixin):
             "refresh_token": refresh_token,
         }
         async with session.post(cls.token_refresh_url, data=payload, proxy=proxy) as response:
-            response_data = await response.json()
+            if response.status >= 400:
+                error_text = await response.text()
+                raise ResponseError(f"Failed to refresh token. Status: {response.status}, Response: {error_text}")
+
+            try:
+                response_data = await response.json()
+            except Exception as e:
+                error_text = await response.text()
+                content_type = response.headers.get('Content-Type', 'unknown')
+                raise ResponseError(f"Failed to parse token refresh response as JSON. Content-Type: {content_type}, Error: {str(e)}, Response: {error_text}")
+
             return response_data.get("id_token"), response_data.get("refresh_token")

     @classmethod
@@ -167,9 +188,18 @@ class ARTA(AsyncGeneratorProvider, ProviderModelMixin):
             }

             async with session.post(cls.image_generation_url, data=image_payload, headers=headers, proxy=proxy) as image_response:
-                image_data = await image_response.json()
-                record_id = image_data.get("record_id")
+                if image_response.status >= 400:
+                    error_text = await image_response.text()
+                    raise ResponseError(f"Failed to initiate image generation. Status: {image_response.status}, Response: {error_text}")
+
+                try:
+                    image_data = await image_response.json()
+                except Exception as e:
+                    error_text = await image_response.text()
+                    content_type = image_response.headers.get('Content-Type', 'unknown')
+                    raise ResponseError(f"Failed to parse response as JSON. Content-Type: {content_type}, Error: {str(e)}, Response: {error_text}")
+
+                record_id = image_data.get("record_id")
                 if not record_id:
                     raise ResponseError(f"Failed to initiate image generation: {image_data}")
@@ -180,7 +210,17 @@ class ARTA(AsyncGeneratorProvider, ProviderModelMixin):
             last_status = None
             while True:
                 async with session.get(status_url, headers=headers, proxy=proxy) as status_response:
-                    status_data = await status_response.json()
+                    if status_response.status >= 400:
+                        error_text = await status_response.text()
+                        raise ResponseError(f"Failed to check image generation status. Status: {status_response.status}, Response: {error_text}")
+
+                    try:
+                        status_data = await status_response.json()
+                    except Exception as e:
+                        error_text = await status_response.text()
+                        content_type = status_response.headers.get('Content-Type', 'unknown')
+                        raise ResponseError(f"Failed to parse status response as JSON. Content-Type: {content_type}, Error: {str(e)}, Response: {error_text}")
+
                     status = status_data.get("status")

                     if status == "DONE":
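Each ARTA request above repeats the same two-stage guard: fail fast on an HTTP error status, then wrap the JSON decode separately so a non-JSON body (e.g. an HTML error page) produces a readable error instead of a raw decode traceback. The pattern can be factored into one helper; a sketch with plain values standing in for the aiohttp response (names here are illustrative):

```python
import json

class ResponseError(Exception):
    """Stand-in for g4f's ResponseError."""

def parse_json_or_raise(status: int, body: str, content_type: str = "unknown"):
    """Mirror the two-stage check: HTTP status first, then JSON decoding."""
    if status >= 400:
        raise ResponseError(f"Request failed. Status: {status}, Response: {body}")
    try:
        return json.loads(body)
    except Exception as e:
        raise ResponseError(
            f"Failed to parse response as JSON. Content-Type: {content_type}, "
            f"Error: {e}, Response: {body}"
        )

data = parse_json_or_raise(200, '{"record_id": "abc123"}')
```

In the real provider the status and body come from the awaited aiohttp response; the control flow is the same.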
@@ -2,21 +2,23 @@ from __future__ import annotations
 import json
 from uuid import uuid4
 from aiohttp import ClientSession

-from ..typing import AsyncResult, Messages
+from ..typing import AsyncResult, Messages, MediaListType
 from ..image import to_bytes, is_accepted_format, to_data_uri
 from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
 from ..requests.raise_for_status import raise_for_status
 from ..providers.response import FinishReason, JsonConversation
-from .helper import format_prompt, get_last_user_message
+from .helper import format_prompt, get_last_user_message, format_image_prompt
+from ..tools.media import merge_media


 class Conversation(JsonConversation):
     parent: str = None
     x_anonymous_user_id: str = None

     def __init__(self, model: str):
         super().__init__()  # Ensure parent class is initialized
         self.model = model
         self.messages = []  # Instance-specific list
         self.parent = None  # Initialize parent as instance attribute
         if not self.x_anonymous_user_id:
             self.x_anonymous_user_id = str(uuid4())
@@ -35,42 +37,73 @@ class AllenAI(AsyncGeneratorProvider, ProviderModelMixin):
     supports_message_history = True

     default_model = 'tulu3-405b'
-    models = [
-        default_model,
-        'OLMo-2-1124-13B-Instruct',
-        'tulu-3-1-8b',
-        'Llama-3-1-Tulu-3-70B',
-        'olmoe-0125'
-    ]
+    default_vision_model = 'mm-olmo-uber-model-v4-synthetic'
+    vision_models = [default_vision_model]
+    # Map models to their required hosts
+    model_hosts = {
+        default_model: "inferd",
+        "OLMo-2-1124-13B-Instruct": "modal",
+        "tulu-3-1-8b": "modal",
+        "Llama-3-1-Tulu-3-70B": "modal",
+        "olmoe-0125": "modal",
+        "olmo-2-0325-32b-instruct": "modal",
+        "mm-olmo-uber-model-v4-synthetic": "modal",
+    }
+
+    models = list(model_hosts.keys())

     model_aliases = {
         "tulu-3-405b": default_model,
+        "olmo-1-7b": "olmoe-0125",
         "olmo-2-13b": "OLMo-2-1124-13B-Instruct",
+        "olmo-2-32b": "olmo-2-0325-32b-instruct",
         "tulu-3-1-8b": "tulu-3-1-8b",
         "tulu-3-70b": "Llama-3-1-Tulu-3-70B",
         "llama-3.1-405b": "tulu3-405b",
         "llama-3.1-8b": "tulu-3-1-8b",
         "llama-3.1-70b": "Llama-3-1-Tulu-3-70B",
+        "olmo-4-synthetic": "mm-olmo-uber-model-v4-synthetic",
     }

     @classmethod
     async def create_async_generator(
         cls,
         model: str,
         messages: Messages,
         proxy: str = None,
-        host: str = "inferd",
+        host: str = None,
         private: bool = True,
         top_p: float = None,
         temperature: float = None,
         conversation: Conversation = None,
         return_conversation: bool = False,
+        media: MediaListType = None,
         **kwargs
     ) -> AsyncResult:
+        actual_model = cls.get_model(model)
+
+        # Use format_image_prompt for vision models when media is provided
+        if media is not None and len(media) > 0:
+            # For vision models, use format_image_prompt
+            if actual_model in cls.vision_models:
+                prompt = format_image_prompt(messages)
+            else:
+                # For non-vision models with images, still use the last user message
+                prompt = get_last_user_message(messages)
+        else:
+            # For text-only messages, use the standard format
+            prompt = format_prompt(messages) if conversation is None else get_last_user_message(messages)
+
+        # Determine the correct host for the model
+        if host is None:
+            # Use model-specific host from model_hosts dictionary
+            host = cls.model_hosts[actual_model]

         # Initialize or update conversation
-        if conversation is None:
-            conversation = Conversation(model)
+        # For mm-olmo-uber-model-v4-synthetic, always create a new conversation
+        if conversation is None or actual_model == 'mm-olmo-uber-model-v4-synthetic':
+            conversation = Conversation(actual_model)

         # Generate new boundary for each request
         boundary = f"----WebKitFormBoundary{uuid4().hex}"
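`model_aliases` maps the public names used in the docs tables (e.g. `olmo-2-32b`) onto the backend identifiers AllenAI actually serves; resolution is a dictionary lookup that falls back to the requested name, roughly what `ProviderModelMixin.get_model` does. A simplified stand-in (this helper is illustrative, not the real mixin implementation):

```python
# A slice of AllenAI's alias table, copied from the diff above
MODEL_ALIASES = {
    "tulu-3-405b": "tulu3-405b",
    "olmo-2-32b": "olmo-2-0325-32b-instruct",
    "olmo-4-synthetic": "mm-olmo-uber-model-v4-synthetic",
}

def resolve_model(name: str, default: str = "tulu3-405b") -> str:
    """Map a public alias to its backend model id, defaulting when empty."""
    if not name:
        return default
    # Unknown names pass through unchanged, like the real get_model fallback
    return MODEL_ALIASES.get(name, name)
```

The pass-through behavior is what lets callers supply a backend id such as `olmoe-0125` directly.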
@@ -101,7 +134,7 @@ class AllenAI(AsyncGeneratorProvider, ProviderModelMixin):
         ]

         # Add parent if exists in conversation
-        if conversation.parent:
+        if hasattr(conversation, 'parent') and conversation.parent:
             form_data.append(
                 f'--{boundary}\r\n'
                 f'Content-Disposition: form-data; name="parent"\r\n\r\n{conversation.parent}\r\n'
@@ -120,8 +153,25 @@ class AllenAI(AsyncGeneratorProvider, ProviderModelMixin):
                 f'Content-Disposition: form-data; name="top_p"\r\n\r\n{top_p}\r\n'
             )

+        # Always create a new conversation when an image is attached to avoid 403 errors
+        if media is not None and len(media) > 0:
+            conversation = Conversation(actual_model)
+
+        # Add image if provided
+        if media is not None and len(media) > 0:
+            # For each image in the media list (using merge_media to handle different formats)
+            for image, image_name in merge_media(media, messages):
+                image_bytes = to_bytes(image)
+                form_data.extend([
+                    f'--{boundary}\r\n'
+                    f'Content-Disposition: form-data; name="files"; filename="{image_name}"\r\n'
+                    f'Content-Type: {is_accepted_format(image_bytes)}\r\n\r\n'
+                ])
+                form_data.append(image_bytes.decode('latin1'))
+                form_data.append('\r\n')
+
         form_data.append(f'--{boundary}--\r\n')
-        data = "".join(form_data).encode()
+        data = "".join(form_data).encode('latin1')

         async with ClientSession(headers=headers) as session:
             async with session.post(
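The form body above is assembled by hand rather than through aiohttp's `FormData`, because binary image parts are round-tripped through latin1 so the whole body can be joined as one string and encoded once. A standalone sketch of the same boundary framing for text fields (field names here are illustrative):

```python
from uuid import uuid4

def build_multipart(fields: dict, boundary: str = None) -> bytes:
    """Frame text fields into a multipart/form-data body, latin1-encoded."""
    boundary = boundary or f"----WebKitFormBoundary{uuid4().hex}"
    parts = []
    for name, value in fields.items():
        # Each part: boundary line, disposition header, blank line, value
        parts.append(
            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n{value}\r\n'
        )
    # Closing boundary has trailing dashes
    parts.append(f'--{boundary}--\r\n')
    return "".join(parts).encode("latin1")

body = build_multipart({"content": "hello", "private": "true"}, boundary="XYZ")
```

latin1 works here because it maps bytes 0-255 one-to-one, so `bytes.decode('latin1')` followed by `str.encode('latin1')` preserves arbitrary binary payloads exactly.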
@@ -164,6 +214,9 @@ class AllenAI(AsyncGeneratorProvider, ProviderModelMixin):
                     # Processing the final response
                     if data.get("final") or data.get("finish_reason") == "stop":
                         if current_parent:
+                            # Ensure the parent attribute exists before setting it
+                            if not hasattr(conversation, 'parent'):
+                                setattr(conversation, 'parent', None)
                             conversation.parent = current_parent

                         # Add a message to the story
@@ -43,18 +43,79 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
     default_vision_model = default_model
     default_image_model = 'flux'

-    # Completely free models
+    # Free models (available without subscription)
     fallback_models = [
-        "blackboxai",
+        default_model,
         "gpt-4o-mini",
         "DeepSeek-V3",
         "DeepSeek-R1",
         "Meta-Llama-3.3-70B-Instruct-Turbo",
         "Mistral-Small-24B-Instruct-2501",
         "DeepSeek-LLM-Chat-(67B)",
         "Qwen-QwQ-32B-Preview",
         # Image models
         "flux",
         # Trending agent modes
         'Python Agent',
         'HTML Agent',
         'Builder Agent',
         'Java Agent',
         'JavaScript Agent',
         'React Agent',
         'Android Agent',
         'Flutter Agent',
         'Next.js Agent',
         'AngularJS Agent',
         'Swift Agent',
         'MongoDB Agent',
         'PyTorch Agent',
         'Xcode Agent',
         'Azure Agent',
         'Bitbucket Agent',
         'DigitalOcean Agent',
         'Docker Agent',
         'Electron Agent',
         'Erlang Agent',
         'FastAPI Agent',
         'Firebase Agent',
         'Flask Agent',
         'Git Agent',
         'Gitlab Agent',
         'Go Agent',
         'Godot Agent',
         'Google Cloud Agent',
         'Heroku Agent'
     ]
+
+    # Premium models (require subscription)
+    premium_models = [
+        "GPT-4o",
+        "o1",
+        "o3-mini",
+        "Claude-sonnet-3.7",
+        "Claude-sonnet-3.5",
+        "Gemini-Flash-2.0",
+        "DBRX-Instruct",
+        "blackboxai-pro",
+        "Gemini-PRO"
+    ]
+
+    # Models available in the demo account
+    demo_models = [
+        default_model,
+        "blackboxai-pro",
+        "gpt-4o-mini",
+        "GPT-4o",
+        "o1",
+        "o3-mini",
+        "Claude-sonnet-3.7",
+        "Claude-sonnet-3.5",
+        "DeepSeek-V3",
+        "DeepSeek-R1",
+        "DeepSeek-LLM-Chat-(67B)",
+        "Meta-Llama-3.3-70B-Instruct-Turbo",
+        "Mistral-Small-24B-Instruct-2501",
+        "Qwen-QwQ-32B-Preview",
+        # Image models
+        "flux",
+        # Trending agent modes
@@ -92,13 +153,14 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
     image_models = [default_image_model]
     vision_models = [default_vision_model, 'GPT-4o', 'o1', 'o3-mini', 'Gemini-PRO', 'Gemini Agent', 'llama-3.1-8b Agent', 'llama-3.1-70b Agent', 'llama-3.1-405 Agent', 'Gemini-Flash-2.0', 'DeepSeek-V3']

-    userSelectedModel = ['GPT-4o', 'o1', 'o3-mini', 'Gemini-PRO', 'Claude-sonnet-3.7', 'DeepSeek-V3', 'DeepSeek-R1', 'Meta-Llama-3.3-70B-Instruct-Turbo', 'Mistral-Small-24B-Instruct-2501', 'DeepSeek-LLM-Chat-(67B)', 'DBRX-Instruct', 'Qwen-QwQ-32B-Preview', 'Nous-Hermes-2-Mixtral-8x7B-DPO', 'Gemini-Flash-2.0']
+    userSelectedModel = ['GPT-4o', 'o1', 'o3-mini', 'Gemini-PRO', 'Claude-sonnet-3.7', 'Claude-sonnet-3.5', 'DeepSeek-V3', 'DeepSeek-R1', 'Meta-Llama-3.3-70B-Instruct-Turbo', 'Mistral-Small-24B-Instruct-2501', 'DeepSeek-LLM-Chat-(67B)', 'DBRX-Instruct', 'Qwen-QwQ-32B-Preview', 'Nous-Hermes-2-Mixtral-8x7B-DPO', 'Gemini-Flash-2.0']

     # Agent mode configurations
     agentMode = {
         'GPT-4o': {'mode': True, 'id': "GPT-4o", 'name': "GPT-4o"},
         'Gemini-PRO': {'mode': True, 'id': "Gemini-PRO", 'name': "Gemini-PRO"},
         'Claude-sonnet-3.7': {'mode': True, 'id': "Claude-sonnet-3.7", 'name': "Claude-sonnet-3.7"},
+        'Claude-sonnet-3.5': {'mode': True, 'id': "Claude-sonnet-3.5", 'name': "Claude-sonnet-3.5"},
         'DeepSeek-V3': {'mode': True, 'id': "deepseek-chat", 'name': "DeepSeek-V3"},
         'DeepSeek-R1': {'mode': True, 'id': "deepseek-reasoner", 'name': "DeepSeek-R1"},
         'Meta-Llama-3.3-70B-Instruct-Turbo': {'mode': True, 'id': "meta-llama/Llama-3.3-70B-Instruct-Turbo", 'name': "Meta-Llama-3.3-70B-Instruct-Turbo"},
@@ -150,13 +212,174 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):

     # Complete list of all models (for authorized users)
     _all_models = list(dict.fromkeys([
         default_model,
         *userSelectedModel,
+        *fallback_models,  # Include all free models
+        *premium_models,  # Include all premium models
         *image_models,
         *list(agentMode.keys()),
         *list(trendingAgentMode.keys())
     ]))

+    # Initialize models with fallback_models
+    models = fallback_models
+
     model_aliases = {
         "gpt-4o": "GPT-4o",
         "claude-3.7-sonnet": "Claude-sonnet-3.7",
+        "claude-3.5-sonnet": "Claude-sonnet-3.5",
         "deepseek-v3": "DeepSeek-V3",
         "deepseek-r1": "DeepSeek-R1",
         "deepseek-chat": "DeepSeek-LLM-Chat-(67B)",
         "llama-3.3-70b": "Meta-Llama-3.3-70B-Instruct-Turbo",
         "mixtral-small-24b": "Mistral-Small-24B-Instruct-2501",
         "qwq-32b": "Qwen-QwQ-32B-Preview",
     }

+    @classmethod
+    async def get_models_async(cls) -> list:
+        """
+        Asynchronous version of get_models that checks subscription status.
+        Returns a list of available models based on subscription status.
+        Premium users get the full list of models.
+        Free users get fallback_models.
+        Demo accounts get demo_models.
+        """
+        # Check if there are valid session data in HAR files
+        session_data = cls._find_session_in_har_files()
+
+        if not session_data:
+            # For demo accounts - return demo models
+            debug.log(f"Blackbox: Returning demo model list with {len(cls.demo_models)} models")
+            return cls.demo_models
+
+        # Check if this is a demo session
+        demo_session = cls.generate_session()
+        is_demo = (session_data['user'].get('email') == demo_session['user'].get('email'))
+
+        if is_demo:
+            # For demo accounts - return demo models
+            debug.log(f"Blackbox: Returning demo model list with {len(cls.demo_models)} models")
+            return cls.demo_models
+
+        # For non-demo accounts, check subscription status
+        if 'user' in session_data and 'email' in session_data['user']:
+            subscription = await cls.check_subscription(session_data['user']['email'])
+            if subscription['status'] == "PREMIUM":
+                debug.log(f"Blackbox: Returning premium model list with {len(cls._all_models)} models")
+                return cls._all_models
+
+        # For free accounts - return free models
+        debug.log(f"Blackbox: Returning free model list with {len(cls.fallback_models)} models")
+        return cls.fallback_models
+
+    @classmethod
+    def get_models(cls) -> list:
+        """
+        Returns a list of available models based on authorization status.
+        Authorized users get the full list of models.
+        Free users get fallback_models.
+        Demo accounts get demo_models.
+
+        Note: This is a synchronous method that can't check subscription status,
+        so it falls back to the basic premium access check.
+        For more accurate results, use get_models_async when possible.
+        """
+        # Check if there are valid session data in HAR files
+        session_data = cls._find_session_in_har_files()
+
+        if not session_data:
+            # For demo accounts - return demo models
+            debug.log(f"Blackbox: Returning demo model list with {len(cls.demo_models)} models")
+            return cls.demo_models
+
+        # Check if this is a demo session
+        demo_session = cls.generate_session()
+        is_demo = (session_data['user'].get('email') == demo_session['user'].get('email'))
+
+        if is_demo:
+            # For demo accounts - return demo models
+            debug.log(f"Blackbox: Returning demo model list with {len(cls.demo_models)} models")
+            return cls.demo_models
+
+        # For non-demo accounts, check premium access
+        has_premium_access = cls._check_premium_access()
+
+        if has_premium_access:
+            # For premium users - all models
+            debug.log(f"Blackbox: Returning premium model list with {len(cls._all_models)} models")
+            return cls._all_models
+
+        # For free accounts - return free models
+        debug.log(f"Blackbox: Returning free model list with {len(cls.fallback_models)} models")
+        return cls.fallback_models
+
+    @classmethod
+    async def check_subscription(cls, email: str) -> dict:
+        """
+        Check subscription status for a given email using the Blackbox API.
+
+        Args:
+            email: The email to check subscription for
+
+        Returns:
+            dict: Subscription status information with keys:
+            - status: "PREMIUM" or "FREE"
+            - customerId: Customer ID if available
+            - isTrialSubscription: Whether this is a trial subscription
+        """
+        if not email:
+            return {"status": "FREE", "customerId": None, "isTrialSubscription": False}
+
+        headers = {
+            'accept': '*/*',
+            'accept-language': 'en',
+            'content-type': 'application/json',
+            'origin': 'https://www.blackbox.ai',
+            'referer': 'https://www.blackbox.ai/?ref=login-success',
+            'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36'
+        }
+
+        try:
+            async with ClientSession(headers=headers) as session:
+                async with session.post(
+                    'https://www.blackbox.ai/api/check-subscription',
+                    json={"email": email}
+                ) as response:
+                    if response.status != 200:
+                        debug.log(f"Blackbox: Subscription check failed with status {response.status}")
+                        return {"status": "FREE", "customerId": None, "isTrialSubscription": False}
+
+                    result = await response.json()
+                    status = "PREMIUM" if result.get("hasActiveSubscription", False) else "FREE"
+
+                    return {
+                        "status": status,
+                        "customerId": result.get("customerId"),
+                        "isTrialSubscription": result.get("isTrialSubscription", False)
+                    }
+        except Exception as e:
+            debug.log(f"Blackbox: Error checking subscription: {e}")
+            return {"status": "FREE", "customerId": None, "isTrialSubscription": False}
+
+    @classmethod
+    def _check_premium_access(cls) -> bool:
+        """
+        Checks for an authorized session in HAR files.
+        Returns True if a valid session is found that differs from the demo.
+        """
+        try:
+            session_data = cls._find_session_in_har_files()
+            if not session_data:
+                return False
+
+            # Check if this is not a demo session
+            demo_session = cls.generate_session()
+            if (session_data['user'].get('email') != demo_session['user'].get('email')):
+                return True
+            return False
+        except Exception as e:
+            debug.log(f"Blackbox: Error checking premium access: {e}")
+            return False
+
     @classmethod
     def generate_session(cls, id_length: int = 21, days_ahead: int = 365) -> dict:
         """
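`check_subscription` reduces the API's JSON payload to a three-key dict, and that reduction is pure and easy to isolate from the HTTP call around it. A sketch of just the mapping step (the field names follow the response handling shown above; the helper itself is illustrative, not part of the provider):

```python
def map_subscription(result: dict) -> dict:
    """Collapse a check-subscription payload into status/customerId/trial keys."""
    if not result:
        # Missing or empty payload degrades to the FREE default
        return {"status": "FREE", "customerId": None, "isTrialSubscription": False}
    status = "PREMIUM" if result.get("hasActiveSubscription", False) else "FREE"
    return {
        "status": status,
        "customerId": result.get("customerId"),
        "isTrialSubscription": result.get("isTrialSubscription", False),
    }
```

Keeping the mapping separate from the network call is what makes the FREE fallback paths in the provider (non-200 status, exception) cheap to express: they all return the same default dict.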
@@ -196,94 +419,17 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
|
||||
}
|
||||
|
||||

    @classmethod
    async def fetch_validated(cls, url: str = "https://www.blackbox.ai", force_refresh: bool = False) -> Optional[str]:
        cache_file = Path(get_cookies_dir()) / 'blackbox.json'

        if not force_refresh and cache_file.exists():
            try:
                with open(cache_file, 'r') as f:
                    data = json.load(f)
                    if data.get('validated_value'):
                        return data['validated_value']
            except Exception as e:
                debug.log(f"Blackbox: Error reading cache: {e}")

        js_file_pattern = r'static/chunks/\d{4}-[a-fA-F0-9]+\.js'
        uuid_pattern = r'["\']([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})["\']'

        def is_valid_context(text: str) -> bool:
            return any(char + '=' in text for char in 'abcdefghijklmnopqrstuvwxyz')

        async with ClientSession() as session:
            try:
                async with session.get(url) as response:
                    if response.status != 200:
                        return None

                    page_content = await response.text()
                    js_files = re.findall(js_file_pattern, page_content)

                for js_file in js_files:
                    js_url = f"{url}/_next/{js_file}"
                    async with session.get(js_url) as js_response:
                        if js_response.status == 200:
                            js_content = await js_response.text()
                            for match in re.finditer(uuid_pattern, js_content):
                                start = max(0, match.start() - 10)
                                end = min(len(js_content), match.end() + 10)
                                context = js_content[start:end]

                                if is_valid_context(context):
                                    validated_value = match.group(1)

                                    cache_file.parent.mkdir(exist_ok=True)
                                    try:
                                        with open(cache_file, 'w') as f:
                                            json.dump({'validated_value': validated_value}, f)
                                    except Exception as e:
                                        debug.log(f"Blackbox: Error writing cache: {e}")

                                    return validated_value

            except Exception as e:
                debug.log(f"Blackbox: Error retrieving validated_value: {e}")

        return None
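The UUID scan above combines a regex with a cheap context heuristic; a minimal standalone sketch of that extraction (patterns copied from the code; the JS content and the `extract_validated` name are made up for illustration):

```python
import re

uuid_pattern = r'["\']([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})["\']'

def is_valid_context(text: str) -> bool:
    # A match only counts when it sits near an assignment such as `n=` or `x=`
    return any(char + '=' in text for char in 'abcdefghijklmnopqrstuvwxyz')

def extract_validated(js_content: str) -> list:
    found = []
    for match in re.finditer(uuid_pattern, js_content):
        # Inspect a 10-character window on each side of the match
        start = max(0, match.start() - 10)
        end = min(len(js_content), match.end() + 10)
        if is_valid_context(js_content[start:end]):
            found.append(match.group(1))
    return found

# Hypothetical JS chunk content for illustration
print(extract_validated('var token="3f2a1b4c-0000-4000-8000-123456789abc";'))
# ['3f2a1b4c-0000-4000-8000-123456789abc']
```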

    @classmethod
    def generate_id(cls, length: int = 7) -> str:
        chars = string.ascii_letters + string.digits
        return ''.join(random.choice(chars) for _ in range(length))

    @classmethod
    def get_models(cls) -> list:
    def _find_session_in_har_files(cls) -> Optional[dict]:
        """
        Returns a list of available models based on authorization status.
        Authorized users get the full list of models.
        Unauthorized users only get fallback_models.
        """
        # Check if there are valid session data in HAR files
        has_premium_access = cls._check_premium_access()
        Search for valid session data in HAR files.

        if has_premium_access:
            # For authorized users - all models
            debug.log(f"Blackbox: Returning full model list with {len(cls._all_models)} models")
            return cls._all_models
        else:
            # For demo accounts - only free models
            debug.log(f"Blackbox: Returning free model list with {len(cls.fallback_models)} models")
            return cls.fallback_models

    @classmethod
    def _check_premium_access(cls) -> bool:
        """
        Checks for an authorized session in HAR files.
        Returns True if a valid session is found that differs from the demo.
        Returns:
            Optional[dict]: Session data if found, None otherwise
        """
        try:
            har_dir = get_cookies_dir()
            if not os.access(har_dir, os.R_OK):
                return False
                return None

            for root, _, files in os.walk(har_dir):
                for file in files:
@@ -293,91 +439,45 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
                            har_data = json.load(f)

                        for entry in har_data['log']['entries']:
                            # Only check requests to blackbox API
                            # Only look at blackbox API responses
                            if 'blackbox.ai/api' in entry['request']['url']:
                                # Look for a response that has the right structure
                                if 'response' in entry and 'content' in entry['response']:
                                    content = entry['response']['content']
                                    # Look for both regular and Google auth session formats
                                    if ('text' in content and
                                        isinstance(content['text'], str) and
                                        '"user"' in content['text'] and
                                        '"email"' in content['text']):
                                        '"email"' in content['text'] and
                                        '"expires"' in content['text']):

                                        try:
                                            # Process request text
                                            # Remove any HTML or other non-JSON content
                                            text = content['text'].strip()
                                            if text.startswith('{') and text.endswith('}'):
                                                # Replace escaped quotes
                                                text = text.replace('\\"', '"')
                                                session_data = json.loads(text)
                                                har_session = json.loads(text)

                                                # Check if this is a valid session
                                                if (isinstance(session_data, dict) and
                                                    'user' in session_data and
                                                    'email' in session_data['user']):
                                                # Check if this is a valid session object
                                                if (isinstance(har_session, dict) and
                                                    'user' in har_session and
                                                    'email' in har_session['user'] and
                                                    'expires' in har_session):

                                                    # Check if this is not a demo session
                                                    demo_session = cls.generate_session()
                                                    if (session_data['user'].get('email') !=
                                                        demo_session['user'].get('email')):
                                                        # This is not a demo session, so user has premium access
                                                        return True
                                        except:
                                            pass
                    except:
                        pass
        return False
                                                    debug.log(f"Blackbox: Found session in HAR file: {file}")
                                                    return har_session
                                        except json.JSONDecodeError as e:
                                            # Only print error for entries that truly look like session data
                                            if ('"user"' in content['text'] and
                                                '"email"' in content['text']):
                                                debug.log(f"Blackbox: Error parsing likely session data: {e}")
        except Exception as e:
            debug.log(f"Blackbox: Error checking premium access: {e}")
            return False

    # Initialize models with fallback_models
    models = fallback_models

    model_aliases = {
        "gpt-4o": "GPT-4o",
        "claude-3.7-sonnet": "Claude-sonnet-3.7",
        "deepseek-v3": "DeepSeek-V3",
        "deepseek-r1": "DeepSeek-R1",
        "deepseek-chat": "DeepSeek-LLM-Chat-(67B)",
    }

    @classmethod
    def generate_session(cls, id_length: int = 21, days_ahead: int = 365) -> dict:
        """
        Generate a dynamic session with proper ID and expiry format.

        Args:
            id_length: Length of the numeric ID (default: 21)
            days_ahead: Number of days ahead for expiry (default: 365)

        Returns:
            dict: A session dictionary with user information and expiry
        """
        # Generate numeric ID
        numeric_id = ''.join(random.choice('0123456789') for _ in range(id_length))

        # Generate future expiry date
        future_date = datetime.now() + timedelta(days=days_ahead)
        expiry = future_date.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'

        # Decode the encoded email
        encoded_email = "Z2lzZWxlQGJsYWNrYm94LmFp"  # Base64 encoded email
        email = base64.b64decode(encoded_email).decode('utf-8')

        # Generate random image ID for the new URL format
        chars = string.ascii_letters + string.digits + "-"
        random_img_id = ''.join(random.choice(chars) for _ in range(48))
        image_url = f"https://lh3.googleusercontent.com/a/ACg8oc{random_img_id}=s96-c"

        return {
            "user": {
                "name": "BLACKBOX AI",
                "email": email,
                "image": image_url,
                "id": numeric_id
            },
            "expires": expiry
        }
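A runnable sketch of the session-generation logic above, kept to the same ID and expiry formats (the email is a placeholder rather than the provider's decoded constant):

```python
import random
from datetime import datetime, timedelta

def generate_session(id_length: int = 21, days_ahead: int = 365) -> dict:
    # Numeric ID in the same format as the provider's sessions
    numeric_id = ''.join(random.choice('0123456789') for _ in range(id_length))
    # ISO-8601 expiry with millisecond precision and a trailing 'Z'
    future_date = datetime.now() + timedelta(days=days_ahead)
    expiry = future_date.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'
    return {
        "user": {"name": "BLACKBOX AI", "email": "user@example.com", "id": numeric_id},
        "expires": expiry,
    }

session = generate_session()
print(len(session["user"]["id"]))  # 21
```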
                    debug.log(f"Blackbox: Error reading HAR file {file}: {e}")
                    return None
        except Exception as e:
            debug.log(f"Blackbox: Error searching HAR files: {e}")
            return None

    @classmethod
    async def fetch_validated(cls, url: str = "https://www.blackbox.ai", force_refresh: bool = False) -> Optional[str]:
@@ -494,70 +594,41 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
            "title": ""
        }

        # Try to get session data from HAR files
        session_data = cls.generate_session()  # Default fallback
        session_found = False
        # Get session data - try HAR files first, fall back to generated session
        session_data = cls._find_session_in_har_files() or cls.generate_session()

        # Look for HAR session data
        har_dir = get_cookies_dir()
        if os.access(har_dir, os.R_OK):
            for root, _, files in os.walk(har_dir):
                for file in files:
                    if file.endswith(".har"):
                        try:
                            with open(os.path.join(root, file), 'rb') as f:
                                har_data = json.load(f)
        # Log which session type is being used
        demo_session = cls.generate_session()
        is_demo = (session_data['user'].get('email') == demo_session['user'].get('email'))

                            for entry in har_data['log']['entries']:
                                # Only look at blackbox API responses
                                if 'blackbox.ai/api' in entry['request']['url']:
                                    # Look for a response that has the right structure
                                    if 'response' in entry and 'content' in entry['response']:
                                        content = entry['response']['content']
                                        # Look for both regular and Google auth session formats
                                        if ('text' in content and
                                            isinstance(content['text'], str) and
                                            '"user"' in content['text'] and
                                            '"email"' in content['text'] and
                                            '"expires"' in content['text']):
        if is_demo:
            debug.log("Blackbox: Using generated demo session")
            # For demo account, set default values without checking subscription
            subscription_status = {"status": "FREE", "customerId": None, "isTrialSubscription": False}
            # Check if the requested model is in demo_models
            is_premium = model in cls.demo_models
            if not is_premium:
                debug.log(f"Blackbox: Model {model} not available in demo account, falling back to default model")
                model = cls.default_model
                is_premium = True
        else:
            debug.log(f"Blackbox: Using session from HAR file (email: {session_data['user'].get('email', 'unknown')})")
            # Only check subscription for non-demo accounts
            subscription_status = {"status": "FREE", "customerId": None, "isTrialSubscription": False}
            if session_data.get('user', {}).get('email'):
                subscription_status = await cls.check_subscription(session_data['user']['email'])
                debug.log(f"Blackbox: Subscription status for {session_data['user']['email']}: {subscription_status['status']}")

                                            try:
                                                # Remove any HTML or other non-JSON content
                                                text = content['text'].strip()
                                                if text.startswith('{') and text.endswith('}'):
                                                    # Replace escaped quotes
                                                    text = text.replace('\\"', '"')
                                                    har_session = json.loads(text)

                                                    # Check if this is a valid session object (supports both regular and Google auth)
                                                    if (isinstance(har_session, dict) and
                                                        'user' in har_session and
                                                        'email' in har_session['user'] and
                                                        'expires' in har_session):

                                                        file_path = os.path.join(root, file)
                                                        debug.log(f"Blackbox: Found session in HAR file")

                                                        session_data = har_session
                                                        session_found = True
                                                        break
                                            except json.JSONDecodeError as e:
                                                # Only print error for entries that truly look like session data
                                                if ('"user"' in content['text'] and
                                                    '"email"' in content['text']):
                                                    debug.log(f"Blackbox: Error parsing likely session data: {e}")

                                if session_found:
                                    break

                        except Exception as e:
                            debug.log(f"Blackbox: Error reading HAR file: {e}")

                    if session_found:
                        break

                if session_found:
                    break
        # Determine if user has premium access based on subscription status
        if subscription_status['status'] == "PREMIUM":
            is_premium = True
        else:
            # For free accounts, check if the requested model is in fallback_models
            is_premium = model in cls.fallback_models
            if not is_premium:
                debug.log(f"Blackbox: Model {model} not available in free account, falling back to default model")
                model = cls.default_model
                is_premium = True

        data = {
            "messages": current_messages,
@@ -595,29 +666,19 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
                "additionalInfo": "",
                "enableNewChats": False
            },
            "session": session_data if session_data else cls.generate_session(),
            "isPremium": True,
            "subscriptionCache": None,
            "session": session_data,
            "isPremium": is_premium,
            "subscriptionCache": {
                "status": subscription_status['status'],
                "customerId": subscription_status['customerId'],
                "isTrialSubscription": subscription_status['isTrialSubscription'],
                "lastChecked": int(datetime.now().timestamp() * 1000)
            },
            "beastMode": False,
            "reasoningMode": False,
            "webSearchMode": False
        }

        # Add debugging before making the API call
        if isinstance(session_data, dict) and 'user' in session_data:
            # Generate a demo session for comparison
            demo_session = cls.generate_session()
            is_demo = False

            if demo_session and isinstance(demo_session, dict) and 'user' in demo_session:
                if session_data['user'].get('email') == demo_session['user'].get('email'):
                    is_demo = True

            if is_demo:
                debug.log(f"Blackbox: Making API request with built-in Developer Premium Account")
            else:
                user_email = session_data['user'].get('email', 'unknown')
                debug.log(f"Blackbox: Making API request with HAR session email: {user_email}")

        # Continue with the API request and async generator behavior
        async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response:
            await raise_for_status(response)
@@ -26,6 +26,9 @@ except ImportError:
    class DuckDuckGoSearchException(Exception):
        """Base exception class for duckduckgo_search."""

class DuckDuckGoChallengeError(ResponseStatusError):
    """Raised when DuckDuckGo presents a challenge that needs to be solved."""

class Conversation(JsonConversation):
    vqd: str = None
    vqd_hash_1: str = None
@@ -48,15 +51,20 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
    supports_message_history = True

    default_model = "gpt-4o-mini"
    models = [default_model, "meta-llama/Llama-3.3-70B-Instruct-Turbo", "claude-3-haiku-20240307", "o3-mini", "mistralai/Mistral-Small-24B-Instruct-2501"]

    model_aliases = {
        "gpt-4": "gpt-4o-mini",
    # Model mapping from user-friendly names to API model names
    _chat_models = {
        "gpt-4": default_model,
        "gpt-4o-mini": default_model,
        "llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
        "claude-3-haiku": "claude-3-haiku-20240307",
        "o3-mini": "o3-mini",
        "mixtral-small-24b": "mistralai/Mistral-Small-24B-Instruct-2501",
    }

    # Available models (user-friendly names)
    models = list(_chat_models.keys())

    last_request_time = 0
    max_retries = 3
    base_delay = 2
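The `_chat_models` table above is resolved with a plain dict lookup, mirroring the later `cls._chat_models.get(model, model)` call; a minimal sketch (subset of the mapping; `resolve` is a hypothetical name):

```python
# User-friendly name -> API model name (subset of the table above)
_chat_models = {
    "gpt-4": "gpt-4o-mini",
    "gpt-4o-mini": "gpt-4o-mini",
    "llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "claude-3-haiku": "claude-3-haiku-20240307",
}

def resolve(model: str) -> str:
    # Unknown names fall through unchanged, as in `_chat_models.get(model, model)`
    return _chat_models.get(model, model)

print(resolve("llama-3.3-70b"))  # meta-llama/Llama-3.3-70B-Instruct-Turbo
```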
@@ -100,19 +108,52 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
    def build_x_vqd_hash_1(cls, vqd_hash_1: str, headers: dict) -> str:
        """Build the x-vqd-hash-1 header value."""
        try:
            # If we received a valid base64 string, try to decode it
            if vqd_hash_1 and len(vqd_hash_1) > 20:
                try:
                    # Try to decode and parse as JSON first
                    decoded_json = json.loads(base64.b64decode(vqd_hash_1).decode())
                    # If it's already a complete structure with meta, return it as is
                    if isinstance(decoded_json, dict) and "meta" in decoded_json:
                        return vqd_hash_1

                    # Otherwise, extract what we can from it
                    if isinstance(decoded_json, dict) and "server_hashes" in decoded_json:
                        server_hashes = decoded_json.get("server_hashes", ["1", "2"])
                    else:
                        # Fall back to parsing from string
                        decoded = base64.b64decode(vqd_hash_1).decode()
                        server_hashes = cls.parse_server_hashes(decoded)
                        dom_fingerprint = cls.parse_dom_fingerprint(decoded)
                except (json.JSONDecodeError, UnicodeDecodeError):
                    # If it's not valid JSON, try to parse it as a string
                    decoded = base64.b64decode(vqd_hash_1).decode()
                    server_hashes = cls.parse_server_hashes(decoded)
            else:
                # Default server hashes if we can't extract them
                server_hashes = ["1", "2"]

            # Generate fingerprints
            dom_fingerprint = "1000"  # Default value
            ua_fingerprint = headers.get("User-Agent", "") + headers.get("sec-ch-ua", "")
            ua_hash = cls.sha256_base64(ua_fingerprint)
            dom_hash = cls.sha256_base64(dom_fingerprint)

            # Create a challenge ID (random hex string)
            challenge_id = ''.join(random.choice('0123456789abcdef') for _ in range(40)) + 'h8jbt'

            # Build the complete structure including meta
            final_result = {
                "server_hashes": server_hashes,
                "client_hashes": [ua_hash, dom_hash],
                "signals": {},
                "meta": {
                    "v": "1",
                    "challenge_id": challenge_id,
                    "origin": "https://duckduckgo.com",
                    "stack": "Error\nat ke (https://duckduckgo.com/dist/wpm.chat.js:1:29526)\nat async dispatchServiceInitialVQD (https://duckduckgo.com/dist/wpm.chat.js:1:45076)"
                }
            }

            base64_final_result = base64.b64encode(json.dumps(final_result).encode()).decode()
            return base64_final_result
        except Exception as e:
@@ -121,13 +162,18 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):

    @classmethod
    def validate_model(cls, model: str) -> str:
        """Validates and returns the correct model name"""
        """Validates and returns the correct model name for the API"""
        if not model:
            return cls.default_model

        # Check aliases first
        if model in cls.model_aliases:
            model = cls.model_aliases[model]

        # Check if it's a valid model name
        if model not in cls.models:
            raise ModelNotSupportedError(f"Model {model} not supported. Available models: {cls.models}")

        return model

    @classmethod
@@ -147,18 +193,45 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
            await cls.sleep()
            # Make initial request to get cookies
            async with session.get(cls.url) as response:
                # We also manually set required cookies
                # Set the required cookies
                cookies = {}
                cookies_dict = {'dcs': '1', 'dcm': '3'}

                # Add any cookies from the response
                for cookie in response.cookies.values():
                    cookies[cookie.key] = cookie.value

                # Ensure our required cookies are set
                for name, value in cookies_dict.items():
                    cookies[name] = value
                    url_obj = URL(cls.url)
                    session.cookie_jar.update_cookies({name: value}, url_obj)

            # Make a second request to the status endpoint to get any additional cookies
            headers = {
                "accept": "text/event-stream",
                "accept-language": "en",
                "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
                "origin": "https://duckduckgo.com",
                "referer": "https://duckduckgo.com/",
            }

            await cls.sleep()
            async with session.get(cls.status_url, headers=headers) as status_response:
                # Add any cookies from the status response
                for cookie in status_response.cookies.values():
                    cookies[cookie.key] = cookie.value
                    url_obj = URL(cls.url)
                    session.cookie_jar.update_cookies({cookie.key: cookie.value}, url_obj)

            return cookies
        except Exception as e:
            return {}
            # Return at least the required cookies on error
            cookies = {'dcs': '1', 'dcm': '3'}
            url_obj = URL(cls.url)
            for name, value in cookies.items():
                session.cookie_jar.update_cookies({name: value}, url_obj)
            return cookies

    @classmethod
    async def fetch_fe_version(cls, session: ClientSession) -> str:
@@ -175,26 +248,38 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):

            # Extract x-fe-version components
            try:
                # Try to extract the version components
                xfe1 = content.split('__DDG_BE_VERSION__="', 1)[1].split('"', 1)[0]
                xfe2 = content.split('__DDG_FE_CHAT_HASH__="', 1)[1].split('"', 1)[0]
                cls._chat_xfe = f"{xfe1}-{xfe2}"

                # Format it like "serp_YYYYMMDD_HHMMSS_ET-hash"
                from datetime import datetime
                current_date = datetime.now().strftime("%Y%m%d_%H%M%S")
                cls._chat_xfe = f"serp_{current_date}_ET-{xfe2}"

                return cls._chat_xfe
            except Exception:
                # If extraction fails, return an empty string
                return ""
                # Fallback to a default format if extraction fails
                from datetime import datetime
                current_date = datetime.now().strftime("%Y%m%d_%H%M%S")
                cls._chat_xfe = f"serp_{current_date}_ET-78c2e87e3d286691cc21"
                return cls._chat_xfe
        except Exception:
            return ""
            # Fallback to a default format if request fails
            from datetime import datetime
            current_date = datetime.now().strftime("%Y%m%d_%H%M%S")
            cls._chat_xfe = f"serp_{current_date}_ET-78c2e87e3d286691cc21"
            return cls._chat_xfe

    @classmethod
    async def fetch_vqd_and_hash(cls, session: ClientSession, retry_count: int = 0) -> tuple[str, str]:
        """Fetches the required VQD token and hash for the chat session with retries."""
        headers = {
            "accept": "text/event-stream",
            "accept-language": "en-US,en;q=0.9",
            "accept-language": "en",
            "cache-control": "no-cache",
            "content-type": "application/json",
            "pragma": "no-cache",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
            "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
            "origin": "https://duckduckgo.com",
            "referer": "https://duckduckgo.com/",
            "x-vqd-accept": "1",
@@ -270,30 +355,44 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
                    last_message = get_last_user_message(messages.copy())
                    conversation.message_history.append({"role": "user", "content": last_message})

                # Step 4: Prepare headers - IMPORTANT: send empty x-vqd-hash-1 for the first request
                # Step 4: Prepare headers with proper x-vqd-hash-1
                headers = {
                    "accept": "text/event-stream",
                    "accept-language": "en-US,en;q=0.9",
                    "accept-language": "en",
                    "cache-control": "no-cache",
                    "content-type": "application/json",
                    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
                    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
                    "origin": "https://duckduckgo.com",
                    "referer": "https://duckduckgo.com/",
                    "sec-ch-ua": '"Chromium";v="133", "Not_A Brand";v="8"',
                    "pragma": "no-cache",
                    "priority": "u=1, i",
                    "sec-ch-ua": '"Not:A-Brand";v="24", "Chromium";v="134"',
                    "sec-ch-ua-mobile": "?0",
                    "sec-ch-ua-platform": '"Linux"',
                    "sec-fetch-dest": "empty",
                    "sec-fetch-mode": "cors",
                    "sec-fetch-site": "same-origin",
                    "x-fe-version": conversation.fe_version or cls._chat_xfe,
                    "x-vqd-4": conversation.vqd,
                    "x-vqd-hash-1": "",  # Send empty string initially
                }

                # For the first request, send an empty x-vqd-hash-1 header
                # This matches the behavior in the duckduckgo_search module
                headers["x-vqd-hash-1"] = ""

                # Step 5: Prepare the request data
                # Convert the user-friendly model name to the API model name
                api_model = cls._chat_models.get(model, model)

                data = {
                    "model": model,
                    "model": api_model,
                    "messages": conversation.message_history,
                }

                # Step 6: Send the request
                await cls.sleep(multiplier=1.0 + retry_count * 0.5)
                async with session.post(cls.api_endpoint, json=data, headers=headers, proxy=proxy) as response:
                    # Handle 429 errors specifically
                    # Handle 429 and 418 errors specifically
                    if response.status == 429:
                        response_text = await response.text()
@@ -307,7 +406,69 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
                            continue
                        else:
                            raise RateLimitError(f"Rate limited after {cls.max_retries} retries")
                    elif response.status == 418:
                        # Check if it's a challenge error
                        try:
                            response_text = await response.text()
                            try:
                                response_json = json.loads(response_text)

                                # Extract challenge data if available
                                challenge_data = None
                                if response_json.get("type") == "ERR_CHALLENGE" and "cd" in response_json:
                                    challenge_data = response_json["cd"]

                                if retry_count < cls.max_retries:
                                    retry_count += 1
                                    wait_time = cls.base_delay * (2 ** retry_count) * (1 + random.random())
                                    await asyncio.sleep(wait_time)

                                    # Reset tokens and try again with fresh session
                                    conversation = None
                                    cls._chat_xfe = ""

                                    # Get fresh cookies
                                    cookies = await cls.get_default_cookies(session)

                                    # If we have challenge data, try to use it
                                    if challenge_data and isinstance(challenge_data, dict):
                                        # Extract any useful information from challenge data
                                        # This could be used to build a better response in the future
                                        pass

                                    continue
                                else:
                                    raise DuckDuckGoChallengeError(f"Challenge error after {cls.max_retries} retries")
                            except json.JSONDecodeError:
                                # If we can't parse the JSON, assume it's a challenge error anyway
                                if retry_count < cls.max_retries:
                                    retry_count += 1
                                    wait_time = cls.base_delay * (2 ** retry_count) * (1 + random.random())
                                    await asyncio.sleep(wait_time)

                                    # Reset tokens and try again with fresh session
                                    conversation = None
                                    cls._chat_xfe = ""
                                    cookies = await cls.get_default_cookies(session)
                                    continue
                                else:
                                    raise DuckDuckGoChallengeError(f"Challenge error after {cls.max_retries} retries")
                        except Exception as e:
                            # If any other error occurs during handling, still try to recover
                            if retry_count < cls.max_retries:
                                retry_count += 1
                                wait_time = cls.base_delay * (2 ** retry_count) * (1 + random.random())
                                await asyncio.sleep(wait_time)

                                # Reset tokens and try again with fresh session
                                conversation = None
                                cls._chat_xfe = ""
                                cookies = await cls.get_default_cookies(session)
                                continue
                            else:
                                raise DuckDuckGoChallengeError(f"Challenge error after {cls.max_retries} retries: {str(e)}")

                    # For other status codes, use the standard error handler
                    await raise_for_status(response)
                    reason = None
                    full_message = ""
@@ -328,6 +489,11 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
                                if error_type == "ERR_CONVERSATION_LIMIT":
                                    raise ConversationLimitError(error_type)
                                raise RateLimitError(error_type)
                            elif message.get("status") == 418 and error_type == "ERR_CHALLENGE":
                                # Handle challenge error by refreshing tokens and retrying
                                if retry_count < cls.max_retries:
                                    # Don't raise here, let the outer exception handler retry
                                    raise DuckDuckGoChallengeError(f"Challenge detected: {error_type}")
                                raise DuckDuckGoSearchException(error_type)

                            if "message" in message:
@@ -339,15 +505,19 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
                            reason = "stop"

                    # Step 8: Update conversation with response information
                    if return_conversation:
                        conversation.message_history.append({"role": "assistant", "content": full_message})
                        # Update tokens from response headers
                    # Always update the VQD tokens from the response headers
                    conversation.vqd = response.headers.get("x-vqd-4", conversation.vqd)
                    conversation.vqd_hash_1 = response.headers.get("x-vqd-hash-1", conversation.vqd_hash_1)

                    # Update cookies
                    conversation.cookies = {
                        n: c.value
                        for n, c in session.cookie_jar.filter_cookies(URL(cls.url)).items()
                    }

                    # If requested, return the updated conversation
                    if return_conversation:
                        conversation.message_history.append({"role": "assistant", "content": full_message})
                        yield conversation

                    if reason is not None:
@@ -356,11 +526,18 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
                    # If we got here, the request was successful
                    break

            except (RateLimitError, ResponseStatusError) as e:
                if "429" in str(e) and retry_count < cls.max_retries:
            except (RateLimitError, ResponseStatusError, DuckDuckGoChallengeError) as e:
                if ("429" in str(e) or isinstance(e, DuckDuckGoChallengeError)) and retry_count < cls.max_retries:
                    retry_count += 1
                    wait_time = cls.base_delay * (2 ** retry_count) * (1 + random.random())
                    await asyncio.sleep(wait_time)

                    # For challenge errors, refresh tokens and cookies
                    if isinstance(e, DuckDuckGoChallengeError):
                        # Reset conversation to force new token acquisition
                        conversation = None
                        # Clear class cache to force refresh
                        cls._chat_xfe = ""
                else:
                    raise
            except asyncio.TimeoutError as e:
@@ -46,11 +46,13 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
    default_model = "openai"
    default_image_model = "flux"
    default_vision_model = default_model
    default_audio_model = "openai-audio"
    text_models = [default_model]
    image_models = [default_image_model]
    audio_models = [default_audio_model]
    extra_image_models = ["flux-pro", "flux-dev", "flux-schnell", "midjourney", "dall-e-3", "turbo"]
    vision_models = [default_vision_model, "gpt-4o-mini", "o3-mini", "openai", "openai-large"]
    extra_text_models = vision_models
    vision_models = ["gpt-4o-mini", "o3-mini", "openai-large"]  # Removed duplicates with default_model
    extra_text_models = []  # Will be populated with unique vision models
    _models_loaded = False
    model_aliases = {
        ### Text Models ###
@@ -68,7 +70,12 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
        "gemini-2.0": "gemini",
        "gemini-2.0-flash": "gemini",
        "gemini-2.0-flash-thinking": "gemini-thinking",
        "deepseek-r1": "deepseek-r1-llama",
        "gemini-2.0-flash-thinking": "gemini-reasoning",
        "deepseek-r1": "deepseek-reasoning-large",
        "deepseek-r1": "deepseek-reasoning",
        "deepseek-v3": "deepseek",
        "qwq-32b": "qwen-reasoning",
        "llama-3.2-11b": "llama-vision",
        "gpt-4o-audio": "openai-audio",
### Image Models ###
|
||||
@@ -86,37 +93,52 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
|
||||
else:
|
||||
new_image_models = []
|
||||
|
||||
# Combine models without duplicates
|
||||
all_image_models = (
|
||||
cls.image_models + # Already contains the default
|
||||
cls.extra_image_models +
|
||||
new_image_models
|
||||
)
|
||||
cls.image_models = list(dict.fromkeys(all_image_models))
|
||||
# Combine image models without duplicates
|
||||
all_image_models = [cls.default_image_model] # Start with default model
|
||||
|
||||
# Add extra image models if not already in the list
|
||||
for model in cls.extra_image_models + new_image_models:
|
||||
if model not in all_image_models:
|
||||
all_image_models.append(model)
|
||||
|
||||
cls.image_models = all_image_models
|
||||
|
||||
# Update of text models
|
||||
text_response = requests.get("https://text.pollinations.ai/models")
|
||||
text_response.raise_for_status()
|
||||
models = text_response.json()
|
||||
original_text_models = [
|
||||
|
||||
# Purpose of text models
|
||||
cls.text_models = [
|
||||
model.get("name")
|
||||
for model in models
|
||||
if model.get("type") == "chat"
|
||||
if "input_modalities" in model and "text" in model["input_modalities"]
|
||||
]
|
||||
|
||||
# Purpose of audio models
|
||||
cls.audio_models = {
|
||||
model.get("name"): model.get("voices")
|
||||
for model in models
|
||||
if model.get("audio")
|
||||
}
|
||||
|
||||
# Combining text models
|
||||
combined_text = (
|
||||
cls.text_models + # Already contains the default
|
||||
cls.extra_text_models +
|
||||
original_text_models +
|
||||
cls.vision_models
|
||||
)
|
||||
cls.text_models = list(dict.fromkeys(combined_text))
|
||||
# Create a set of unique text models starting with default model
|
||||
unique_text_models = {cls.default_model}
|
||||
|
||||
# Add models from vision_models
|
||||
unique_text_models.update(cls.vision_models)
|
||||
|
||||
# Add models from the API response
|
||||
for model in models:
|
||||
model_name = model.get("name")
|
||||
if model_name and "input_modalities" in model and "text" in model["input_modalities"]:
|
||||
unique_text_models.add(model_name)
|
||||
|
||||
# Convert to list and update text_models
|
||||
cls.text_models = list(unique_text_models)
|
||||
|
||||
# Update extra_text_models with unique vision models
|
||||
cls.extra_text_models = [model for model in cls.vision_models if model != cls.default_model]
|
||||
|
||||
cls._models_loaded = True
|
||||
|
||||
@@ -128,7 +150,13 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin):
|
||||
cls.image_models = [cls.default_image_model]
|
||||
debug.error(f"Failed to fetch models: {e}")
|
||||
|
||||
return cls.text_models + cls.image_models
|
||||
# Return unique models across all categories
|
||||
all_models = set(cls.text_models)
|
||||
all_models.update(cls.image_models)
|
||||
all_models.update(cls.audio_models.keys())
|
||||
result = list(all_models)
|
||||
return result
|
||||
|
||||
|
||||
@classmethod
|
||||
async def create_async_generator(
|
||||
|
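A side note on the refactor above: the old code deduplicated with `list(dict.fromkeys(...))`, which keeps first-seen order, while the new code unions names into a `set`, which does not guarantee any order. A minimal standalone sketch of the two approaches (illustrative helpers, not the provider's actual code):

```python
def dedup_ordered(*groups):
    """Merge model-name lists, dropping duplicates while keeping first-seen order.

    dict keys preserve insertion order in Python 3.7+, so dict.fromkeys acts
    as an order-preserving set."""
    merged = [name for group in groups for name in group]
    return list(dict.fromkeys(merged))

def dedup_unordered(*groups):
    """Merge model-name lists via a set; membership is correct but order is arbitrary."""
    merged = set()
    for group in groups:
        merged.update(group)
    return list(merged)

ordered = dedup_ordered(["flux"], ["flux", "gpt-4o-mini"], ["sdxl"])
# ordered == ["flux", "gpt-4o-mini", "sdxl"]
```

If the GUI relies on the default model appearing first in `get_models()`, the set-based version loses that guarantee.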
@@ -11,6 +11,7 @@ class PollinationsImage(PollinationsAI):
     default_model = "flux"
     default_vision_model = None
     default_image_model = default_model
+    audio_models = None
     image_models = [default_image_model]  # Default models
     _models_loaded = False  # Add a checkbox for synchronization
@@ -2306,7 +2306,7 @@ async function on_api() {
     models.forEach((model) => {
         let option = document.createElement("option");
         option.value = model.name;
-        option.text = model.name + (model.image ? " (Image Generation)" : "") + (model.vision ? " (Image Upload)" : "");
+        option.text = model.name + (model.image ? " (Image Generation)" : "") + (model.vision ? " (Image Upload)" : "") + (model.audio ? " (Audio Generation)" : "") + (model.video ? " (Video Generation)" : "");
         option.dataset.providers = model.providers.join(" ");
         modelSelect.appendChild(option);
         is_demo = model.demo;
@@ -2355,6 +2355,8 @@ async function on_api() {
         option.text = provider.label
             + (provider.vision ? " (Image Upload)" : "")
             + (provider.image ? " (Image Generation)" : "")
+            + (provider.audio ? " (Audio Generation)" : "")
+            + (provider.video ? " (Video Generation)" : "")
             + (provider.nodriver ? " (Browser)" : "")
             + (provider.hf_space ? " (HuggingSpace)" : "")
             + (!provider.nodriver && provider.auth ? " (Auth)" : "");
@@ -2942,7 +2944,7 @@ async function load_provider_models(provider=null) {
         let option = document.createElement('option');
         option.value = model.model;
         option.dataset.label = model.model;
-        option.text = `${model.model}${model.image ? " (Image Generation)" : ""}${model.vision ? " (Image Upload)" : ""}`;
+        option.text = `${model.model}${model.image ? " (Image Generation)" : ""}${model.audio ? " (Audio Generation)" : ""}${model.video ? " (Video Generation)" : ""}${model.vision ? " (Image Upload)" : ""}`;
         if (model.task) {
             option.text += ` (${model.task})`;
         }
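The three GUI hunks above repeat the same pattern: append one suffix per truthy capability flag. As a hedged illustration (a hypothetical helper, not code from this repository), the pattern can be expressed once as a table walk, which also keeps suffix order consistent across the model and provider dropdowns:

```python
# Hypothetical helper mirroring the GUI's label logic; names are invented here.
CAPABILITY_SUFFIXES = [
    ("image", " (Image Generation)"),
    ("vision", " (Image Upload)"),
    ("audio", " (Audio Generation)"),
    ("video", " (Video Generation)"),
]

def build_label(name: str, flags: dict) -> str:
    """Append one suffix per truthy capability flag, in a fixed order."""
    return name + "".join(suffix for key, suffix in CAPABILITY_SUFFIXES if flags.get(key))

label = build_label("flux", {"image": True, "video": True})
# label == "flux (Image Generation) (Video Generation)"
```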
@@ -30,6 +30,8 @@ class Api:
             "name": model.name,
             "image": isinstance(model, models.ImageModel),
             "vision": isinstance(model, models.VisionModel),
+            "audio": isinstance(model, models.AudioModel),
+            "video": isinstance(model, models.VideoModel),
             "providers": [
                 getattr(provider, "parent", provider.__name__)
                 for provider in providers
@@ -52,6 +54,8 @@ class Api:
             "model": model,
             "default": model == provider.default_model,
             "vision": getattr(provider, "default_vision_model", None) == model or model in getattr(provider, "vision_models", []),
+            "audio": getattr(provider, "default_audio_model", None) == model or model in getattr(provider, "audio_models", []),
+            "video": getattr(provider, "default_video_model", None) == model or model in getattr(provider, "video_models", []),
             "image": False if provider.image_models is None else model in provider.image_models,
             "task": None if not hasattr(provider, "task_mapping") else provider.task_mapping[model] if model in provider.task_mapping else None
         }
@@ -66,6 +70,8 @@ class Api:
             "label": provider.label if hasattr(provider, "label") else provider.__name__,
             "parent": getattr(provider, "parent", None),
             "image": bool(getattr(provider, "image_models", False)),
+            "audio": getattr(provider, "audio_models", None) is not None,
+            "video": getattr(provider, "video_models", None) is not None,
             "vision": getattr(provider, "default_vision_model", None) is not None,
             "nodriver": getattr(provider, "use_nodriver", False),
             "hf_space": getattr(provider, "hf_space", False),

@@ -99,6 +99,8 @@ class Backend_Api(Api):
             "name": model.name,
             "image": isinstance(model, models.ImageModel),
             "vision": isinstance(model, models.VisionModel),
+            "audio": isinstance(model, models.AudioModel),
+            "video": isinstance(model, models.VideoModel),
             "providers": [
                 getattr(provider, "parent", provider.__name__)
                 for provider in providers
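The `Api` hunks above detect capabilities with `getattr(provider, attr, None)`, so a provider class that never declares `audio_models` or `video_models` simply reports `False` instead of raising `AttributeError`. A small standalone sketch of that pattern (the class and function names here are invented for illustration):

```python
class DemoProvider:
    # Only some capability attributes are declared; the rest are simply absent.
    audio_models = {"openai-audio": ["alloy", "echo"]}

def provider_capabilities(provider) -> dict:
    """Probe optional class attributes without requiring them to exist."""
    return {
        "audio": getattr(provider, "audio_models", None) is not None,
        "video": getattr(provider, "video_models", None) is not None,
        "vision": getattr(provider, "default_vision_model", None) is not None,
    }

caps = provider_capabilities(DemoProvider)
# caps == {"audio": True, "video": False, "vision": False}
```

This keeps the provider base class free of mandatory boilerplate: new capability flags can be added to the API layer without touching every provider.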
@@ -6,6 +6,7 @@ from .Provider import IterListProvider, ProviderType
 from .Provider import (
     ### No Auth Required ###
     AllenAI,
+    ARTA,
     Blackbox,
     ChatGLM,
     ChatGptEs,

@@ -78,6 +79,9 @@ class ImageModel(Model):
 class AudioModel(Model):
     pass

+class VideoModel(Model):
+    pass
+
 class VisionModel(Model):
     pass
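`VideoModel` follows the same empty-subclass pattern as `ImageModel` and `AudioModel`: the subclass carries no behavior of its own and exists only so `isinstance` checks (as in the `Api` hunks) can classify a model. A minimal sketch of the idea, using a simplified stand-in for the real `Model` class and an invented model name:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str

# Marker subclasses: no extra fields or methods, just a distinct type.
class ImageModel(Model): pass
class AudioModel(Model): pass
class VideoModel(Model): pass  # the marker added by this commit
class VisionModel(Model): pass

def classify(model: Model) -> dict:
    """Map a model instance to capability flags via its marker type."""
    return {
        "image": isinstance(model, ImageModel),
        "audio": isinstance(model, AudioModel),
        "video": isinstance(model, VideoModel),
        "vision": isinstance(model, VisionModel),
    }

flags = classify(VideoModel(name="demo-video-model"))
# flags == {"image": False, "audio": False, "video": True, "vision": False}
```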
@@ -105,7 +109,7 @@ default = Model(
     ])
 )

-default_vision = Model(
+default_vision = VisionModel(
     name = "",
     base_provider = "",
     best_provider = IterListProvider([
@@ -115,6 +119,7 @@ default_vision = Model(
         DeepInfraChat,
         PollinationsAI,
         Dynaspark,
+        AllenAI,
         HuggingSpace,
         GeminiPro,
         HuggingFaceAPI,
@@ -263,7 +268,7 @@ llama_3_2_90b = Model(
 llama_3_3_70b = Model(
     name = "llama-3.3-70b",
     base_provider = "Meta Llama",
-    best_provider = IterListProvider([DDG, DeepInfraChat, LambdaChat, PollinationsAI, Jmuz, HuggingChat, HuggingFace])
+    best_provider = IterListProvider([Blackbox, DDG, DeepInfraChat, LambdaChat, PollinationsAI, Jmuz, HuggingChat, HuggingFace])
 )

 ### Mistral ###
@@ -287,7 +292,7 @@ mistral_nemo = Model(
 mixtral_small_24b = Model(
     name = "mixtral-small-24b",
     base_provider = "Mistral",
-    best_provider = IterListProvider([DDG, DeepInfraChat])
+    best_provider = IterListProvider([Blackbox, DDG, DeepInfraChat])
 )

 ### NousResearch ###
@@ -383,7 +388,7 @@ claude_3_haiku = Model(
 claude_3_5_sonnet = Model(
     name = 'claude-3.5-sonnet',
     base_provider = 'Anthropic',
-    best_provider = IterListProvider([Jmuz, Liaobots])
+    best_provider = IterListProvider([Blackbox, Jmuz, Liaobots])
 )

 # claude 3.7
@@ -491,7 +496,7 @@ qwen_2_5_max = Model(
 qwq_32b = Model(
     name = 'qwq-32b',
     base_provider = 'Qwen',
-    best_provider = IterListProvider([Jmuz, HuggingChat])
+    best_provider = IterListProvider([Blackbox, PollinationsAI, Jmuz, HuggingChat])
 )
 qvq_72b = VisionModel(
     name = 'qvq-72b',
@@ -516,7 +521,7 @@ deepseek_chat = Model(
 deepseek_v3 = Model(
     name = 'deepseek-v3',
     base_provider = 'DeepSeek',
-    best_provider = IterListProvider([Blackbox, DeepInfraChat, LambdaChat, OIVSCode, TypeGPT, Liaobots])
+    best_provider = IterListProvider([Blackbox, DeepInfraChat, LambdaChat, PollinationsAI, OIVSCode, TypeGPT, Liaobots])
 )

 deepseek_r1 = Model(
@@ -645,8 +650,8 @@ minicpm_2_5 = Model(
 )

 ### Ai2 ###
-tulu_3_405b = Model(
-    name = "tulu-3-405b",
+olmo_1_7b = Model(
+    name = "olmo-1-7b",
     base_provider = "Ai2",
     best_provider = AllenAI
 )
@@ -657,6 +662,18 @@ olmo_2_13b = Model(
     best_provider = AllenAI
 )

+olmo_2_32b = Model(
+    name = "olmo-2-32b",
+    base_provider = "Ai2",
+    best_provider = AllenAI
+)
+
+olmo_4_synthetic = VisionModel(
+    name = "olmo-4-synthetic",
+    base_provider = "Ai2",
+    best_provider = AllenAI
+)
+
 tulu_3_1_8b = Model(
     name = "tulu-3-1-8b",
     base_provider = "Ai2",
@@ -669,12 +686,13 @@ tulu_3_70b = Model(
     best_provider = AllenAI
 )

-olmoe_0125 = Model(
-    name = "olmoe-0125",
+tulu_3_405b = Model(
+    name = "tulu-3-405b",
     base_provider = "Ai2",
     best_provider = AllenAI
 )

 ### Liquid AI ###
 lfm_40b = Model(
     name = "lfm-40b",
     base_provider = "Liquid AI",
@@ -710,7 +728,7 @@ sd_3_5 = ImageModel(
 flux = ImageModel(
     name = 'flux',
     base_provider = 'Black Forest Labs',
-    best_provider = IterListProvider([Blackbox, PollinationsImage, Websim, HuggingSpace])
+    best_provider = IterListProvider([Blackbox, PollinationsImage, Websim, HuggingSpace, ARTA])
 )

 flux_pro = ImageModel(
@@ -922,11 +940,13 @@ class ModelUtils:
         minicpm_2_5.name: minicpm_2_5,

         ### Ai2 ###
-        tulu_3_405b.name: tulu_3_405b,
+        olmo_1_7b.name: olmo_1_7b,
         olmo_2_13b.name: olmo_2_13b,
+        olmo_2_32b.name: olmo_2_32b,
+        olmo_4_synthetic.name: olmo_4_synthetic,
         tulu_3_1_8b.name: tulu_3_1_8b,
         tulu_3_70b.name: tulu_3_70b,
-        olmoe_0125.name: olmoe_0125,
+        tulu_3_405b.name: tulu_3_405b,

         ### Liquid AI ###
         lfm_40b.name: lfm_40b,