* feat: introduce AnyProvider & LM Arena, overhaul model/provider logic
- **Provider additions & removals**
- Added `Provider/LMArenaProvider.py` with full async stream implementation and vision model support
- Registered `LMArenaProvider` in `Provider/__init__.py`; removed old `hf_space/LMArenaProvider.py`
- Created `providers/any_provider.py`, which registers `AnyProvider` dynamically in `Provider`
- **Provider framework enhancements**
- `providers/base_provider.py`
- Added `video_models` and `audio_models` attributes
- `providers/retry_provider.py`
- Introduced an `is_content()` helper; `AudioResponse` chunks are now treated as stream content (see the sketch after this list)
- **Cloudflare provider refactor**
- `Provider/Cloudflare.py`
- Re‑implemented `get_models()` with `read_models()` helper, `fallback_models`, robust nodriver/curl handling and model‑name cleaning
- **Other provider tweaks**
- `Provider/Copilot.py` – removed `"reasoning"` alias and initial `setOptions` WS message
- `Provider/PollinationsAI.py` & `PollinationsImage.py`
- Converted `audio_models` from list to dict, adjusted usage checks and labels
- `Provider/hf/__init__.py` – applies `model_aliases` remap before dispatch
- `Provider/hf_space/DeepseekAI_JanusPro7b.py` – now merges media before upload
- `needs_auth/Gemini.py` – dropped obsolete Gemini model entries
- `needs_auth/GigaChat.py` – added lowercase `"gigachat"` alias
- **API & client updates**
- Replaced `ProviderUtils` with new `Provider` map usage throughout API and GUI server
- Integrated `AnyProvider` as default fallback in `g4f/client` sync & async flows
- API endpoints now return counts of providers per model and filter by `x_ignored` header
- **GUI improvements**
- Updated JS labels with emoji icons, provider ignore logic, model count display
- **Model registry**
- Renamed base model `"GigaChat:latest"` ➜ `"gigachat"` in `models.py`
- **Miscellaneous**
- Added audio/video flags to GUI provider list
- Tightened error propagation in `retry_provider.raise_exceptions`
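The `is_content()` change noted above is small but easy to misread; a minimal sketch of the idea follows, using stand-in classes rather than the real response types from `g4f` (this is illustrative, not the actual `retry_provider` implementation):

```python
# Minimal sketch of the is_content() idea from retry_provider (not the actual
# g4f code); the classes below stand in for the real g4f response types.
class ImageResponse: ...
class AudioResponse: ...

def is_content(chunk) -> bool:
    """Plain text and media responses count as stream content; control
    objects such as finish markers or parameter updates do not."""
    return isinstance(chunk, (str, ImageResponse, AudioResponse))
```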
* Fix unit tests
* fix: handle None conversation when accessing provider-specific data
- Modified `AnyProvider` class in `g4f/providers/any_provider.py`
- Guarded access to provider-specific data: the `getattr(conversation, provider.__name__, None)` lookup now runs only when `conversation` is not None
- Changed `setattr(conversation, provider.__name__, chunk)` to use `chunk.get_dict()` instead of the object directly
- Ensured consistent use of `JsonConversation` when modifying or assigning `conversation` data
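Taken together, the fix amounts to guarding every per-provider read and write on the conversation object; a rough sketch (helper names are illustrative, the real logic sits inline in `AnyProvider`'s streaming loop):

```python
# Illustrative only: the real logic lives inside AnyProvider, not in helpers.
def load_provider_state(conversation, provider):
    """Read provider-specific state only when a conversation exists."""
    if conversation is not None:
        return getattr(conversation, provider.__name__, None)
    return None

def store_provider_state(conversation, provider, chunk):
    """Store the chunk as plain dict data (JsonConversation-friendly)."""
    if conversation is not None:
        setattr(conversation, provider.__name__, chunk.get_dict())
```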
* ```
feat: add provider string conversion & update IterListProvider call
- In g4f/client/__init__.py, within both Completions and AsyncCompletions, added a check to convert the provider from a string using convert_to_provider(provider) when applicable.
- In g4f/providers/any_provider.py, removed the second argument (False) from the IterListProvider constructor call in the async for loop.
```
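The string-to-provider conversion described in the fenced message above boils down to a small guard before dispatch; a sketch, assuming `convert_to_provider` is importable from `g4f.client.service` (the exact import path is an assumption):

```python
def resolve_provider(provider):
    """Accept either a provider class or its name as a string."""
    if isinstance(provider, str):
        # Deferred import keeps the sketch self-contained; path assumed.
        from g4f.client.service import convert_to_provider
        provider = convert_to_provider(provider)
    return provider
```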
---------
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
- **g4f/Provider/needs_auth/OpenaiChat.py:**
- Increase the default timeout parameter from 180 to 360 seconds.
- Modify the regex pattern used in `re.sub` from `(?:cite\nturn[0-9]+search|cite\nturn[0-9]+news|turn[0-9]+news|turn[0-9]+search)(\d+)` to `(?:cite\nturn[0-9]+|turn[0-9]+)(?:search|news|view)(\d+)` (see the sketch after this list).
- **g4f/gui/client/static/js/chat.v1.js:**
- Change the `filter_message` function to wrap the result of `text.replaceAll(...)` with `filter_message_content`, removing the chained `.replace` calls.
- In `add_message_chunk`, update the `update_message` call by passing `null` instead of the rendered reasoning.
- In `update_message`, refine the content-building logic by:
- Checking that both `reasoning_storage[message_id]` and `message_storage[message_id]` exist before concatenating rendered reasoning with the markdown-rendered message.
- Using an alternative branch to render only the reasoning when no message content is available.
- Appending an error message (if one exists) to the content and then updating `content_map.inner.innerHTML` unconditionally.
- In **FreeRouter.py**, change the `working` flag from `False` to `True`.
- In **LMArenaProvider.py**, replace the `.rstrip("▌")` call with a manual check that slices off the trailing `▌` only when the content ends with it (see the sketch after this list).
- In **hf_space/__init__.py**, update the async generator call to pass the `media` parameter instead of `images`.
- In **OpenaiChat.py**:
- Modify the citation replacement regex to use `[0-9]+` (supporting any turn number) instead of a hardcoded `0`.
- Replace `fields.is_recipient` boolean checks with `fields.recipient == "all"` comparisons when processing text and metadata.
- Add a new branch to process `/message/metadata/content_references` for adding source links.
- Update the conversation initialization by replacing `self.is_recipient` with setting `self.recipient` to `"all"`.
- Change the auth check from using `cls._api_key` to checking `cls.request_config.access_token`.
- In **chat.v1.js**, adjust the QR code URL assignment to use `window.conversation_id` if available, else default to `/qrcode`.
- In **raise_for_status.py**, update error handling by replacing `ResponseStatusError` with `MissingAuthError` for 403 responses detected as OpenAI Bot.
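The updated citation-marker pattern from the OpenaiChat.py items above can be exercised in isolation. The pattern is taken verbatim from the commit text; the sample input and the bracketed replacement are invented for illustration, since the real code substitutes its own citation markup:

```python
import re

# Pattern from the commit text; sample text and replacement are invented.
CITATION = re.compile(r"(?:cite\nturn[0-9]+|turn[0-9]+)(?:search|news|view)(\d+)")

text = "Prices rose sharply cite\nturn2search7 according to the report."
print(CITATION.sub(r"[\1]", text))
# -> "Prices rose sharply [7] according to the report."
```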
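Likewise, the LMArenaProvider cursor handling referenced above reduces to slicing the trailing marker only when it is present; a minimal sketch:

```python
CURSOR = "▌"  # streaming cursor glyph appended to partial content

def strip_cursor(content: str) -> str:
    """Remove a single trailing cursor marker without touching other text."""
    if content.endswith(CURSOR):
        return content[:-len(CURSOR)]
    return content
```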
* feat: add private chat mode & update token parsing
- In g4f/Provider/Blackbox.py, import PaymentRequiredError and raise it when the response equals "You have reached your request limit for the hour" (see the sketch after this list).
- In g4f/Provider/needs_auth/OpenaiChat.py, modify token parsing by splitting the "OpenAI-Sentinel-Proof-Token" header on "~" after the initial split.
- In g4f/gui/client/index.html, add a new "Private Conversation" button with the corresponding icon.
- In g4f/gui/client/static/js/chat.v1.js:
- Introduce the variable `privateConversation` to handle private chats.
- Update `new_conversation` to accept a private flag, setting `window.conversation_id` to null and updating the conversation title accordingly.
- Adjust `get_conversation` to return `privateConversation` when conversation_id is null.
- Revise `save_conversation` and `add_conversation` to store private conversation data when conversation_id is null.
- Modify on-load conversation handling to incorporate the updated conversation logic.
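The Blackbox rate-limit change (first item in this list) is essentially a string match that escalates to a typed error; a sketch, assuming `PaymentRequiredError` is importable from `g4f.errors`:

```python
from g4f.errors import PaymentRequiredError  # import path is an assumption

RATE_LIMIT_TEXT = "You have reached your request limit for the hour"

def check_blackbox_response(text: str) -> str:
    """Surface the hourly limit as a typed error instead of plain content."""
    if text == RATE_LIMIT_TEXT:
        raise PaymentRequiredError(text)
    return text
```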
- Added "No auth / HAR file" authentication type in providers-and-models.md
- Added "Video generation" column to provider tables for future capability
- Updated model counts and provider capabilities throughout documentation
- Fixed ARTA provider with improved error handling and response validation
- Enhanced AllenAI provider with vision model support and proper image handling
- Significantly improved Blackbox provider:
- Added HAR file authentication support
- Added subscription status checking
- Added premium/demo model differentiation
- Improved session handling and error recovery
- Enhanced DDG provider with better error handling for challenges
- Improved PollinationsAI and PollinationsImage providers' model handling
- Added VideoModel class in g4f/models.py
- Added audio/video generation indicators in GUI components
- Added new Ai2 models: olmo-1-7b, olmo-2-32b, olmo-4-synthetic
- Added new commit message generation tool in etc/tool/commit.py
* New provider added (g4f/Provider/Websim.py)
* New provider added (g4f/Provider/Dynaspark.py)
* feat(g4f/gui/client/static/js/chat.v1.js): Enhance provider labeling for HuggingFace integrations
* feat(g4f/gui/server/api.py): add Hugging Face Space compatibility flag to provider data
* feat(g4f/models.py): add new providers and update model configurations
* Update g4f/Provider/__init__.py
* feat(g4f/Provider/AllenAI.py): expand model alias mappings for AllenAI provider
* feat(g4f/Provider/Blackbox.py): restructure image model handling and response processing
* feat(g4f/Provider/PollinationsAI.py): add new model aliases and streamline headers
* Update g4f/Provider/hf_space/*
* refactor(g4f/Provider/Copilot.py): update model alias mapping
* chore(g4f/models.py): update provider configurations for OpenAI models
* docs(docs/providers-and-models.md): update provider tables and model categorization
* fix(etc/examples/vision_images.py): update model and simplify client configuration
* fix(docs/providers-and-models.md): correct streaming status for GlhfChat provider
* docs(docs/providers-and-models.md): update provider capabilities and model documentation
* fix(models): update provider configurations for Mistral models
* fix(g4f/Provider/Blackbox.py): correct model alias key for Mistral variant
* feat(g4f/Provider/hf_space/CohereForAI_C4AI_Command.py): update supported model versions and aliases (closes #2802)
* fix(documentation): correct model names and provider counts (https://github.com/xtekky/gpt4free/pull/2805#issuecomment-2727489835)
* fix(g4f/models.py): correct mistral model configurations
* fix(g4f/Provider/DeepInfraChat.py): correct mixtral-small alias key
* New provider added (g4f/Provider/LambdaChat.py)
* feat(g4f/models.py): add new providers and enhance model configurations
* docs(docs/providers-and-models.md): add LambdaChat provider and update model listings
* feat(g4f/models.py): add new Liquid AI model and enhance providers
* docs(docs/providers-and-models.md): update model listings and provider counts
* feat(g4f/Provider/LambdaChat.py): add conditional reasoning processing based on model
* fix(g4f/tools/run_tools.py): handle combined `<think>` tags arriving in a single chunk (see the sketch after this list)
* New provider added (g4f/Provider/Goabror.py)
* feat(g4f/Provider/Blackbox.py): implement dynamic session management and model access control
* refactor(g4f/models.py): update provider configurations and model entries
* docs(docs/providers-and-models.md): update model listings and provider counts
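The run_tools fix flagged earlier in this list (combined thinking tags in a single chunk) comes down to splitting reasoning out when the opening and closing tags arrive together; the `<think>` tag names and the helper below are assumptions, not the actual implementation:

```python
from __future__ import annotations

def split_combined_thinking(chunk: str) -> tuple[str | None, str]:
    """Return (reasoning, visible_text); reasoning is None when no complete
    <think>...</think> pair is present in this chunk."""
    if "<think>" in chunk and "</think>" in chunk:
        before, rest = chunk.split("<think>", 1)
        reasoning, after = rest.split("</think>", 1)
        return reasoning.strip(), before + after
    return None, chunk
```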
---------
Co-authored-by: kqlio67 <>