* feat: introduce AnyProvider & LM Arena, overhaul model/provider logic
- **Provider additions & removals**
- Added `Provider/LMArenaProvider.py` with full async stream implementation and vision model support
- Registered `LMArenaProvider` in `Provider/__init__.py`; removed old `hf_space/LMArenaProvider.py`
- Created `providers/any_provider.py`; registers `AnyProvider` dynamically in `Provider`
- **Provider framework enhancements**
- `providers/base_provider.py`
- Added `video_models` and `audio_models` attributes
- `providers/retry_provider.py`
- Introduced `is_content()` helper; now treats `AudioResponse` as stream content
- **Cloudflare provider refactor**
- `Provider/Cloudflare.py`
- Re‑implemented `get_models()` with `read_models()` helper, `fallback_models`, robust nodriver/curl handling and model‑name cleaning
- **Other provider tweaks**
- `Provider/Copilot.py` – removed `"reasoning"` alias and initial `setOptions` WS message
- `Provider/PollinationsAI.py` & `PollinationsImage.py`
- Converted `audio_models` from list to dict, adjusted usage checks and labels
- `Provider/hf/__init__.py` – applies `model_aliases` remap before dispatch
- `Provider/hf_space/DeepseekAI_JanusPro7b.py` – now merges media before upload
- `needs_auth/Gemini.py` – dropped obsolete Gemini model entries
- `needs_auth/GigaChat.py` – added lowercase `"gigachat"` alias
- **API & client updates**
- Replaced `ProviderUtils` with new `Provider` map usage throughout API and GUI server
- Integrated `AnyProvider` as default fallback in `g4f/client` sync & async flows
- API endpoints now return counts of providers per model and filter by `x_ignored` header
- **GUI improvements**
- Updated JS labels with emoji icons, provider ignore logic, model count display
- **Model registry**
- Renamed base model `"GigaChat:latest"` ➜ `"gigachat"` in `models.py`
- **Miscellaneous**
- Added audio/video flags to GUI provider list
- Tightened error propagation in `retry_provider.raise_exceptions`
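The `is_content()` change in the framework-enhancements bullet above can be shown as a minimal sketch; the stand-in `AudioResponse` class and the exact checks are assumptions, not the actual `retry_provider.py` code:

```python
# Minimal sketch (assumed names): plain text and AudioResponse chunks count as
# stream content, so a provider that already produced audio is not retried.
class AudioResponse:  # stand-in for g4f.providers.response.AudioResponse
    def __init__(self, data):
        self.data = data

def is_content(chunk) -> bool:
    """Return True when a streamed chunk should count as real content."""
    if isinstance(chunk, str):
        return bool(chunk.strip())
    return isinstance(chunk, AudioResponse)
```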
* Fix unittests
* fix: handle None conversation when accessing provider-specific data
- Modified `AnyProvider` class in `g4f/providers/any_provider.py`
- Updated logic to check if `conversation` is not None before accessing `provider.__name__` attribute
- Wrapped `getattr(conversation, provider.__name__, None)` block in an additional `if conversation is not None` condition
- Changed `setattr(conversation, provider.__name__, chunk)` to use `chunk.get_dict()` instead of the object directly
- Ensured consistent use of `JsonConversation` when modifying or assigning `conversation` data
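A minimal sketch of the guarded access described above; aside from `provider.__name__` and `get_dict()`, the helper name and flow are assumptions rather than the actual `AnyProvider` code:

```python
# Sketch: store provider-specific conversation state, guarding against a
# missing conversation object and persisting a serializable dict.
def save_provider_state(conversation, provider, chunk):
    if conversation is not None:
        # Read any previously stored state for this provider.
        previous = getattr(conversation, provider.__name__, None)
        # Store the dict representation instead of the raw object.
        setattr(conversation, provider.__name__, chunk.get_dict())
        return previous
    return None
```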
* ```
feat: add provider string conversion & update IterListProvider call
- In g4f/client/__init__.py, within both Completions and AsyncCompletions, added a check to convert the provider from a string using convert_to_provider(provider) when applicable.
- In g4f/providers/any_provider.py, removed the second argument (False) from the IterListProvider constructor call in the async for loop.
```
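A minimal sketch of the string conversion added in `Completions`/`AsyncCompletions`; the import path for `convert_to_provider` is an assumption:

```python
# Sketch: accept either a provider class/instance or its name as a string.
from g4f.client.service import convert_to_provider  # assumed location

def resolve_provider(provider):
    if isinstance(provider, str):
        provider = convert_to_provider(provider)
    return provider
```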
---------
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
- In **g4f/Provider/Cloudflare.py**:
- Added `from .helper import to_string`.
- Replaced conditional string checks with `to_string(message["content"])` for both `"content"` and elements in `"parts"`.
- In **g4f/Provider/PollinationsAI.py**:
- Removed `"o3-mini"` from the `vision_models` list.
- Updated the alias mapping dictionary by:
- Removing the `"o3-mini": "openai-reasoning"` entry.
- Removing the duplicate `"gpt-4o-mini": "searchgpt"` mapping.
- Removing the duplicate `"gemini-2.0-flash-thinking": "gemini-reasoning"` entry.
- Removing the `"qwq-32b": "qwen-reasoning"` mapping.
- Adding a new alias `"llama-4-scout": "llamascout"`.
- In **g4f/gui/client/static/css/style.css**:
- Changed the `border-left` property value from `var(--colour-4)` to `var(--media-select)`.
- In **g4f/models.py**:
- For the `"o3-mini"` model, removed `PollinationsAI` from its `best_provider` list.
- Changed the comment from `# llama 2` to `### llama 2-4 ###` and removed redundant comments for llama 3.1 and 3.2.
- Added a new model `llama_4_scout` with `base_provider` set to `"Meta Llama"` and `best_provider` as `IterListProvider([Cloudflare, PollinationsAI])`.
- For the `"qwq-32b"` model, removed `PollinationsAI` from its `best_provider` list.
- Updated the `ModelUtils` mapping to include the new `llama_4_scout` model.
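A minimal sketch of what a `to_string()`-style helper can do with a message `content` field that is either a plain string or a list of parts (see the Cloudflare bullet above); the real helper in `g4f/Provider/helper.py` may handle more cases:

```python
# Sketch: flatten message content to text, ignoring non-text parts.
def to_string(value) -> str:
    if isinstance(value, str):
        return value
    if isinstance(value, dict):
        # Typed part, e.g. {"type": "text", "text": "..."}.
        return value.get("text", "")
    if isinstance(value, list):
        return "".join(to_string(part) for part in value)
    return str(value)
```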
- Added "No auth / HAR file" authentication type in providers-and-models.md
- Added "Video generation" column to provider tables for future capability
- Updated model counts and provider capabilities throughout documentation
- Fixed ARTA provider with improved error handling and response validation
- Enhanced AllenAI provider with vision model support and proper image handling
- Significantly improved Blackbox provider:
- Added HAR file authentication support
- Added subscription status checking
- Added premium/demo model differentiation
- Improved session handling and error recovery
- Enhanced DDG provider with better error handling for challenges
- Improved PollinationsAI and PollinationsImage providers' model handling
- Added VideoModel class in g4f/models.py
- Added audio/video generation indicators in GUI components
- Added new Ai2 models: olmo-1-7b, olmo-2-32b, olmo-4-synthetic
- Added new commit message generation tool in etc/tool/commit.py
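A minimal sketch of how the `VideoModel` class mentioned above could sit next to the existing `Model`/`ImageModel` pattern in `g4f/models.py`; the field names are assumptions:

```python
# Sketch modeled on the Model/ImageModel pattern (field names are assumptions).
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    base_provider: str
    best_provider: object = None

class VideoModel(Model):
    """Marker subclass so the GUI can flag video-generation capability."""
```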
* feat(docs/providers-and-models.md): add TypeGPT provider and update model support
* feat(g4f/models.py): add TypeGPT provider and enhance model configurations
* refactor(g4f/Provider/hf_space/BlackForestLabs_Flux1Dev.py): update model aliases and image models definition
* refactor(g4f/Provider/hf_space/BlackForestLabs_Flux1Schnell.py): adjust model configuration and aliases
* refactor(g4f/Provider/hf_space/CohereForAI_C4AI_Command.py): update model configuration and aliases
* refactor(g4f/Provider/hf_space/DeepseekAI_JanusPro7b.py): reorganize model classification attributes
* feat(g4f/Provider/hf_space/Microsoft_Phi_4.py): add model aliases and update vision model handling
* refactor(g4f/Provider/hf_space/Qwen_QVQ_72B.py): restructure model configuration and aliases
* feat(g4f/Provider/hf_space/Qwen_Qwen_2_5M_Demo.py): add model alias support for Qwen provider
* refactor(g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py): derive model list from aliases
* refactor(g4f/Provider/hf_space/StabilityAI_SD35Large.py): adjust model list definitions using aliases
* fix(g4f/Provider/hf_space/Voodoohop_Flux1Schnell.py): correct image_models definition in Voodoohop provider
* feat(g4f/Provider/DDG.py): enhance request handling and error recovery
* feat(g4f/Provider/DeepInfraChat.py): update model configurations and request handling
* feat(g4f/Provider/TypeGPT.py): enhance TypeGPT API client with media support and async capabilities
* refactor(docs/providers-and-models.md): remove streaming column from provider tables
* Update docs/providers-and-models.md
* added(g4f/Provider/hf_space/Qwen_Qwen_2_5_Max.py): new provider
* Update g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py
* added(g4f/Provider/hf_space/Qwen_Qwen_2_5.py): new provider
* Update g4f/Provider/DeepInfraChat.py g4f/Provider/TypeGPT.py
* Update g4f/Provider/LambdaChat.py
* Update g4f/Provider/DDG.py
* Update g4f/Provider/DeepInfraChat.py
* Update g4f/Provider/TypeGPT.py
* Add audio generation model and update documentation
* Update providers-and-models documentation and include ARTA in flux best providers
* Update ARTA provider details and adjust documentation for image models
* Remove redundant text_models assignment in LambdaChat provider
---------
Co-authored-by: kqlio67 <>
* New provider added(g4f/Provider/Websim.py)
* New provider added(g4f/Provider/Dynaspark.py)
* feat(g4f/gui/client/static/js/chat.v1.js): Enhance provider labeling for HuggingFace integrations
* feat(g4f/gui/server/api.py): add Hugging Face Space compatibility flag to provider data
* feat(g4f/models.py): add new providers and update model configurations
* Update g4f/Provider/__init__.py
* feat(g4f/Provider/AllenAI.py): expand model alias mappings for AllenAI provider
* feat(g4f/Provider/Blackbox.py): restructure image model handling and response processing
* feat(g4f/Provider/PollinationsAI.py): add new model aliases and streamline headers
* Update g4f/Provider/hf_space/*
* refactor(g4f/Provider/Copilot.py): update model alias mapping
* chore(g4f/models.py): update provider configurations for OpenAI models
* docs(docs/providers-and-models.md): update provider tables and model categorization
* fix(etc/examples/vision_images.py): update model and simplify client configuration
* fix(docs/providers-and-models.md): correct streaming status for GlhfChat provider
* docs(docs/providers-and-models.md): update provider capabilities and model documentation
* fix(models): update provider configurations for Mistral models
* fix(g4f/Provider/Blackbox.py): correct model alias key for Mistral variant
* feat(g4f/Provider/hf_space/CohereForAI_C4AI_Command.py): update supported model versions and aliases (close #2802)
* fix(documentation): correct model names and provider counts (https://github.com/xtekky/gpt4free/pull/2805#issuecomment-2727489835)
* fix(g4f/models.py): correct mistral model configurations
* fix(g4f/Provider/DeepInfraChat.py): correct mixtral-small alias key
* New provider added(g4f/Provider/LambdaChat.py)
* feat(g4f/models.py): add new providers and enhance model configurations
* docs(docs/providers-and-models.md): add LambdaChat provider and update model listings
* feat(g4f/models.py): add new Liquid AI model and enhance providers
* docs(docs/providers-and-models.md): update model listings and provider counts
* feat(g4f/Provider/LambdaChat.py): add conditional reasoning processing based on model
* fix(g4f/tools/run_tools.py): handle combined thinking tags in single chunk
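A minimal sketch of handling both thinking tags in one chunk, assuming `<think>` markers; the real `run_tools.py` logic is streaming and stateful, so this is illustrative only:

```python
# Sketch: split a single chunk containing both <think> and </think> into
# visible text and thinking text.
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_thinking(chunk: str) -> tuple[str, str]:
    thinking = "".join(THINK_RE.findall(chunk))
    visible = THINK_RE.sub("", chunk)
    return visible, thinking
```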
* New provider added(g4f/Provider/Goabror.py)
* feat(g4f/Provider/Blackbox.py): implement dynamic session management and model access control
* refactor(g4f/models.py): update provider configurations and model entries
* docs(docs/providers-and-models.md): update model listings and provider counts
---------
Co-authored-by: kqlio67 <>
* docs(docs/providers-and-models.md): update documentation structure and model listings
* refactor(g4f/debug.py): add type hints and docstrings
* refactor(g4f/tools/run_tools.py): Restructure tool handling and improve modularity
* refactor(g4f/providers/response.py): enhance type hints and code documentation
* feat(g4f/models.py): Update model providers and add new models
* feat(g4f/Provider/Blackbox.py): add encrypted session handling and model updates
* fix(g4f/Provider/ChatGptEs.py): migrate to curl_cffi for request handling and improve error resilience
* feat(g4f/Provider/DeepInfraChat.py): Update default model and add new DeepSeek variants
* feat(g4f/Provider/Free2GPT.py): add Gemini models and streamline headers
* feat(g4f/Provider/FreeGpt.py): Add support for Gemini 1.5 Flash model
* feat(g4f/Provider/Liaobots.py): Add Claude 3.7 models and update default GPT-4o
* fix(g4f/Provider/PollinationsAI.py): Correct model mappings and generation parameters
* feat(g4f/Provider/PollinationsImage.py): Add class identifier label
* chore(g4f/Provider/TeachAnything.py): Update default model and simplify model handling
* (g4f/Provider/Mhystical.py): Remove class implementation
* chore(g4f/Provider/Prodia.py > g4f/Provider/not_working/Prodia.py): mark Prodia provider as non-working
* feat(g4f/Provider/Blackbox.py): Add Claude 3.7 Sonnet model alias
* chore(g4f/models.py): Update model configurations
* fix(g4f/Provider/ChatGptEs.py): improve request reliability and nonce detection
---------
Co-authored-by: kqlio67 <>
* docs(docs/providers-and-models.md): Update provider listings and model information
* feat(g4f/models.py): update model configurations and expand provider support
* fix(g4f/gui/client/static/js/chat.v1.js): correct provider checkbox initialization logic
* feat(g4f/Provider/Blackbox.py): update model configurations and premium handling
* feat(g4f/Provider/ChatGLM.py): add finish reason handling and update default model
* chore(g4f/Provider/DDG.py): Reorder model entries for consistency
* feat(g4f/Provider/ImageLabs.py): Update default image model to sdxl-turbo
* feat(g4f/Provider/Liaobots.py): update supported model configurations and aliases
* feat(g4f/Provider/OIVSCode.py): Update API endpoint and expand model support
* fix(g4f/Provider/needs_auth/CablyAI.py): Enforce authentication requirement
* Removed the provider (g4f/Provider/BlackboxAPI.py)
* fix(g4f/providers/base_provider.py): improve cache validation in AsyncAuthedProvider
* Update g4f/models.py
* fix(g4f/Provider/Liaobots.py): remove deprecated Gemini model aliases
* chore(g4f/models.py): Remove Grok-2 and update Gemini provider configurations
* chore(docs/providers-and-models.md): Remove deprecated Grok models from provider listings
* New provider added (g4f/Provider/AllenAI.py)
* feat(g4f/models.py): Add Ai2 models and update provider references
* feat(docs/providers-and-models.md): update providers and models documentation
* fix(g4f/models.py): update experimental model provider configuration
* fix(g4f/Provider/PollinationsImage.py): Initialize image_models list and update label
* fix(g4f/Provider/PollinationsAI.py): Resolve model initialization and alias conflicts
* refactor(g4f/Provider/PollinationsAI.py): improve model initialization and error handling
* refactor(g4f/Provider/PollinationsImage.py): Improve model synchronization and initialization
* Update g4f/Provider/AllenAI.py
---------
Co-authored-by: kqlio67 <>
* Update DDG.py: added Llama 3.3 Instruct and o3-mini
Duck.ai now supports o3-mini, and the previous Llama 3.1 70B model has been replaced by Llama 3.3 70B.
* Update DDG.py: change Llama 3.3 70B Instruct ID to "meta-llama/Llama-3.3-70B-Instruct-Turbo"
Fixed typo in full model name
* Update Cerebras.py: add "deepseek-r1-distill-llama-70b" to the models list
Cerebras now also provides inference for DeepSeek-R1 distilled onto Llama 3.3 70B.
* Update models.py: reflect changes in DDG provider
- Removed DDG from best providers list for Llama 3.1 70B
- Added DDG to best providers list for o3-mini and Llama 3.3 70B
* A small update to the HuggingFaceInference get_models() method
Previously, the get_models() method would occasionally raise "TypeError: string indices must be integers, not 'str'" on line 31, most likely because a network error prevented the models list from loading and the method then tried to parse the error payload. The code is now updated to check for such errors before parsing the data.
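A minimal sketch of that defensive pattern; the URL and field names are illustrative, not the exact HuggingFaceInference code:

```python
# Sketch: validate the payload before indexing into it, and fall back to an
# empty list on network or JSON errors.
import requests

def get_models(url: str = "https://huggingface.co/api/models", timeout: int = 10) -> list:
    try:
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
        data = response.json()
    except (requests.RequestException, ValueError):
        return []
    if not isinstance(data, list):
        return []
    return [entry["id"] for entry in data if isinstance(entry, dict) and "id" in entry]
```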
* Update BlackboxAPI.py: remove unused imports
The format_prompt() helper and the json library are not used here, so they can be removed safely.
* Update copilot.yml
The job is failing due to an error in the JavaScript code; this commit attempts to fix it.
* Update providers-and-models.md to reflect latest changes
Updated models list and removed providers that are currently not working.
* Adding New Models and Enhancing Provider Functionality
* fix(core): handle model errors and improve configuration
- Import ModelNotSupportedError for proper exception handling in model resolution
- Update login_url configuration to reference class URL attribute dynamically
- Remove redundant typing imports after internal module reorganization
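A minimal sketch of model resolution with `ModelNotSupportedError` as described in the first bullet; the mapping and function names are assumptions:

```python
# Sketch: remap aliases first, then raise ModelNotSupportedError for unknown models.
from g4f.errors import ModelNotSupportedError

def resolve_model(name: str, model_aliases: dict, models: list) -> str:
    name = model_aliases.get(name, name)
    if name not in models:
        raise ModelNotSupportedError(f"Model is not supported: {name}")
    return name
```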
* feat(g4f/Provider/PerplexityLabs.py): Add new Perplexity models and update provider listings
- Update PerplexityLabs provider with expanded Sonar model family including pro/reasoning variants
- Add new text models: sonar-reasoning-pro to supported model catalog
- Standardize model naming conventions across provider documentation
* feat(g4f/models.py): add Sonar Reasoning Pro model configuration
- Add new model to Perplexity AI text models section
- Include model in ModelUtils.convert mapping with PerplexityLabs provider
- Maintain consistent configuration pattern with existing Sonar variants
* feat(docs/providers-and-models.md): update provider models and add new reasoning model
- Update PerplexityLabs text models to standardized sonar naming convention
- Add new sonar-reasoning-pro model to text models table
- Include latest Perplexity AI documentation references for new model
* docs(docs/providers-and-models.md): update AI providers documentation
- Remove deprecated chatgptt.me from no-auth providers list
- Delete redundant Auth column from HuggingSpace providers table
- Update PerplexityLabs model website URLs to sonar.perplexity.ai
- Adjust provider counts for GPT-4/GPT-4o models in text models section
- Fix inconsistent formatting in image models provider listings
* chore(g4f/models.py): remove deprecated ChatGptt provider integration
- Remove ChatGptt import from provider dependencies
- Exclude ChatGptt from default model's best_provider list
- Update gpt_4 model configuration to eliminate ChatGptt reference
- Modify gpt_4o vision model provider hierarchy
- Adjust gpt_4o_mini provider selection parameters
BREAKING CHANGE: Existing integrations using ChatGptt provider will no longer function
* Disabled provider (g4f/Provider/ChatGptt.py > g4f/Provider/not_working/ChatGptt.py): Problem with Cloudflare
* fix(g4f/Provider/CablyAI.py): update API endpoints and model configurations
* docs(docs/providers-and-models.md): update model listings and provider capabilities
* feat(g4f/models.py): Add Hermes-3 model and enhance provider configs
* feat(g4f/Provider/CablyAI.py): Add free tier indicators to model aliases
* refactor(g4f/tools/run_tools.py): modularize thinking chunk handling
* fix(g4f/Provider/DeepInfraChat.py): resolve duplicate keys and enhance request headers
* feat(g4f/Provider/DeepInfraChat.py): Add multimodal image support and improve model handling
* chore(g4f/models.py): update default vision model providers
* feat(docs/providers-and-models.md): update provider capabilities and model specifications
* Update docs/client.md
* docs(docs/providers-and-models.md): Update DeepInfraChat models documentation
* feat(g4f/Provider/DeepInfraChat.py): add new vision models and expand model aliases
* feat(g4f/models.py): update model configurations and add new providers
* feat(g4f/models.py): Update model configurations and add new AI models
---------
Co-authored-by: kqlio67 <>
Add original URL to downloaded image
Support ssl argument in StreamSession
Report Provider and Errors in RetryProvider
Support ssl argument in OpenaiTemplate
Remove model duplication in OpenaiChat
Disable ChatGpt provider and remove it from models.py
Update slim requirements
Support provider names as model names in image generation
Add model qwen-2.5-1m-demo to models.py
* Fixed the DeepSeek-R1 model name in the vision_models list of the Blackbox.py provider
* Update g4f/Provider/Blackbox.py
* Update docs/providers-and-models.md
* Update g4f/models.py g4f/Provider/DeepInfraChat.py docs/providers-and-models.md
---------
Co-authored-by: kqlio67 <>
Update demo model list
Disable upload cookies in demo
Track usage in demo mode
Add messages without asking the AI
Add hint for browser usage in provider list
Add qwen2 prompt template to HuggingFace provider
Trim automatic messages in HuggingFaceAPI
Restore the browser instance on startup errors in nodriver
Restored instances can be used as usual or to stop the browser
Add demo mode to the web UI for HuggingSpace
Add rate limit support to the web UI; simply install flask_limiter to enable it (see the sketch below)
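A minimal sketch of the flask_limiter wiring, assuming flask-limiter 3.x and an illustrative limit:

```python
# Sketch: enable rate limiting on the web UI when flask_limiter is installed.
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(get_remote_address, app=app, default_limits=["60 per minute"])
```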
Add home for demo with Access Token input and validation
Add stripped model list for demo
Ignore encoding errors in web_search and file upload
* Update provider capabilities and model support
- Update provider documentation with latest model support
- Remove deprecated models and update model counts
- Add new model variants and fix formatting
- Update provider class labels for better clarity
- Add support for new models including DeepSeek-R1 and sd-turbo
- Clean up unused model aliases and improve code organization
Key changes:
- Update Blackbox vision capabilities
- Remove legacy models (midijourney, unity, rtist)
- Add flux variants and update provider counts
- Set explicit provider labels
- Update model aliases and mappings
- Add new model support in multiple providers
* Update g4f/models.py
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/Blackbox.py
---------
Co-authored-by: kqlio67 <>
* Update model configurations, provider implementations, and documentation
- Updated model names and aliases for Qwen QVQ 72B and Qwen 2 72B (@TheFirstNoob)
- Revised HuggingSpace class configuration, added default_image_model
- Added llama-3.2-70b alias for Llama 3.2 70B model in AutonomousAI
- Removed BlackboxCreateAgent class
- Added gpt-4o alias for Copilot model
- Moved api_key to Mhystical class attribute
- Added models property with default_model value for Free2GPT
- Simplified Jmuz class implementation
- Improved image generation and model handling in DeepInfra
- Standardized default models and removed aliases in Gemini
- Replaced model aliases with direct model list in GlhfChat (@TheFirstNoob)
- Removed trailing slash from image generation URL in PollinationsAI (https://github.com/xtekky/gpt4free/issues/2571)
- Updated llama and qwen model configurations
- Enhanced provider documentation and model details
* Removed the 'Yqcloud' provider from the Default model in g4f/models.py due to the error 'ResponseStatusError: Response 429: 文字过长,请删减后重试。' ("Text too long, please shorten it and retry.")
* Update docs/providers-and-models.md
* refactor(g4f/Provider/DDG.py): Add error handling and rate limiting to DDG provider
- Add custom exception classes for rate limits, timeouts, and conversation limits
- Implement rate limiting with sleep between requests (0.75s minimum delay)
- Add model validation method to check supported models
- Add proper error handling for API responses with custom exceptions
- Improve session cookie handling for conversation persistence
- Clean up User-Agent string and remove redundant code
- Add proper error propagation through async generator
Breaking changes:
- New custom exceptions may require updates to error handling code
- Rate limiting affects request timing and throughput
- Model validation is now stricter
Related:
- Adds error handling similar to standard API clients
- Improves reliability and robustness of chat interactions
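A minimal sketch of the minimum-delay throttling from the rate-limiting bullet above (0.75 s between requests); the class and attribute names are illustrative:

```python
# Sketch: keep at least min_delay seconds between consecutive requests.
import asyncio
import time

class MinDelay:
    def __init__(self, min_delay: float = 0.75):
        self.min_delay = min_delay
        self._last = 0.0

    async def wait(self) -> None:
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_delay:
            await asyncio.sleep(self.min_delay - elapsed)
        self._last = time.monotonic()
```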
* Update g4f/models.py g4f/Provider/PollinationsAI.py
* Update g4f/models.py
* Restored provider which was not working and was disabled (g4f/Provider/DeepInfraChat.py)
* Fixing a bug with Streaming Completions
* Update g4f/Provider/PollinationsAI.py
* Update g4f/Provider/Blackbox.py g4f/Provider/DDG.py
* Added another image generation model, 'ImageGeneration2', to the 'Blackbox' provider
* Update docs/providers-and-models.md
* Update g4f/models.py g4f/Provider/Blackbox.py
* Added a new OIVSCode provider with text models and vision (image upload) support
* Update docs/providers-and-models.md
* docs: add Conversation Memory class with context handling requested by @TheFirstNoob
* Simplified the README.md documentation and added new docs/configuration.md documentation
* Update README.md docs/configuration.md
* Update README.md
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/PollinationsAI.py
* Added new model deepseek-r1 to Blackbox provider. @TheFirstNoob
* Fixed bugs and updated docs/providers-and-models.md etc/unittest/client.py g4f/models.py g4f/Provider/.
---------
Co-authored-by: kqlio67 <>
Co-authored-by: H Lohaus <hlohaus@users.noreply.github.com>
* Update providers, restore old providers, remove non-working providers
* Restoring the original providers
* Restore the original provider g4f/Provider/needs_auth/GeminiPro.py
* Deleted non-working providers, fixed providers
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/hf_space/CohereForAI.py
* Restore g4f/Provider/Airforce.py Updated alias g4f/Provider/hf_space/CohereForAI.py
* Disabled provider 'g4f/Provider/ReplicateHome.py' and moved to 'g4f/Provider/not_working'
* Disabled the provider due to a problem with the Pizzagpt response
* Fix web_search = True not working
* Update docs/client.md
* Fix web_search = True not working in the asynchronous and synchronous versions
---------
Co-authored-by: kqlio67 <>
Add "Custom Provider": Set API Url in the settings
Remove Discord link from result, add them to attr: Jmuz
Fix bug: file contents are added to the prompt
Changed response from /v1/models API
Disable Pizzagpt Provider