- Fixed duplicate model entries in Blackbox provider model_aliases
- Added `meta-llama-` to `llama-` name cleaning in Cloudflare provider
- Enhanced PollinationsAI provider with improved vision model detection
- Added reasoning support to PollinationsAI provider
- Fixed HuggingChat authentication to include headers and impersonate
- Removed unused max_inputs_length parameter from HuggingFaceAPI
- Renamed extra_data to extra_body for consistency across providers
- Added Puter provider with grouped model support
- Enhanced AnyProvider with grouped model display and better model organization
- Fixed model cleaning in AnyProvider to handle more model name variations
- Added api_key handling for HuggingFace providers in AnyProvider
- Added see_stream helper function to parse event streams
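  A minimal sketch of what such an event-stream parser could look like; only the name `see_stream` comes from the entry above, the signature and behaviour here are assumptions:

  ```python
  import json
  from typing import Iterable, Iterator, Union

  def see_stream(lines: Iterable[Union[bytes, str]]) -> Iterator[dict]:
      """Parse a server-sent-events stream into JSON chunks (hypothetical signature)."""
      for line in lines:
          if isinstance(line, bytes):
              line = line.decode(errors="ignore")
          line = line.strip()
          if not line.startswith("data:"):
              continue  # ignore comments, event names and blank keep-alive lines
          data = line[len("data:"):].strip()
          if data == "[DONE]":
              break
          try:
              yield json.loads(data)
          except json.JSONDecodeError:
              continue

  # Example with a fake stream
  for chunk in see_stream([b'data: {"delta": "Hello"}', b"data: [DONE]"]):
      print(chunk["delta"])
  ```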
- Updated GUI server to handle JsonConversation properly
- Fixed aspect ratio handling in image generation functions
- Added ResponsesConfig and ClientResponse for new API endpoint
- Updated requirements to include markitdown
- Changed `DEFAULT_MODEL` from `"o1"` to `"gpt-4o"` in `etc/tool/commit.py`
- Replaced `FALLBACK_MODELS` list with an empty list in `etc/tool/commit.py`
- Moved spinner stop logic inside content-checking block in `generate_commit_message` in `etc/tool/commit.py`
- Trimmed trailing characters "` \n" from returned content in `generate_commit_message` in `etc/tool/commit.py`
- Wrapped cache check in `g4f/gui/server/api.py` in a `try` block to catch `RuntimeError`
- Ensured fallback to `version.utils.latest_version` if cache access fails in `g4f/gui/server/api.py`
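  A rough sketch of that guarded lookup, assuming the cached value is reachable as `version.utils.latest_version_cached` (only `version.utils.latest_version` is confirmed by the entry above):

  ```python
  from g4f import version

  def resolve_latest_version() -> str:
      """Prefer the cached version lookup, falling back if the cache access fails."""
      try:
          return version.utils.latest_version_cached  # assumed attribute, may raise RuntimeError
      except RuntimeError:
          return version.utils.latest_version  # uncached fallback named in the entry above
  ```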
- Removed default `tool_calls` assignment from `Api.__call__` in `api.py`
- Removed `debug.error()` logging in two exception blocks in `api.py`
- Fixed malformed URL in `CohereForAI_C4AI_Command.url` by removing leading tab in `CohereForAI_C4AI_Command.py`
- Replaced incorrect `result.text_content` access with just `result` in `Backend_Api.upload_bucket` in `backend_api.py`
- Replaced `shutil.copyfile` with `os.rename`, and added fallback to `shutil.copyfile` on failure in `Backend_Api.upload_bucket` in `backend_api.py`
- **g4f/Provider/har/__init__.py**
- `get_models`/`create_async`: iterate over `(domain, harFile)` and filter with `domain in request_url`
- `read_har_files` now yields `(domain, har_data)`; fixes file variable shadowing and uses `json.load` (sketched after this block)
- remove stray `print`, add type hint for `find_str`, replace manual loops with `yield from`
- small whitespace clean-up
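  The `read_har_files` change could look roughly like this; the directory layout and the way a domain is derived are assumptions, only the `(domain, har_data)` shape, `json.load`, and the `yield from` clean-up come from the entries above:

  ```python
  import json
  import os
  from typing import Iterator, Tuple
  from urllib.parse import urlparse

  def read_har_files(directory: str) -> Iterator[Tuple[str, dict]]:
      """Yield (domain, har_data) pairs for every .har file found in a directory."""
      for filename in os.listdir(directory):
          if not filename.endswith(".har"):
              continue
          with open(os.path.join(directory, filename), encoding="utf-8") as har_file:
              har_data = json.load(har_file)  # json.load instead of manual read + loads
          # Derive the domain from the first recorded request (illustrative only)
          first_url = har_data["log"]["entries"][0]["request"]["url"]
          yield urlparse(first_url).hostname or "", har_data

  def iter_matching_entries(directory: str, request_url: str) -> Iterator[dict]:
      """Filter HAR files with `domain in request_url`, then flatten their entries."""
      for domain, har_data in read_har_files(directory):
          if domain and domain in request_url:
              yield from har_data["log"]["entries"]
  ```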
- **g4f/Provider/needs_auth/Grok.py**
- `ImagePreview` now passes `auth_result.cookies` and `auth_result.headers`
- **g4f/Provider/needs_auth/OpenaiChat.py**
- add `Union` import; rename/refactor `get_generated_images` → `get_generated_image`
- support `file-service://` and `sediment://` pointers; choose the correct download URL (see the sketch after this block)
- return `ImagePreview` or `ImageResponse` accordingly and stream each image part
- propagate 422 errors, update prompt assignment and image handling paths
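  A small sketch of the pointer handling; only the two prefixes come from the entry, the helper name and return shape are made up for illustration:

  ```python
  from typing import Tuple

  def parse_asset_pointer(pointer: str) -> Tuple[str, str]:
      """Split an asset pointer into (scheme, file_id) so the caller can pick the matching download URL."""
      for scheme in ("file-service://", "sediment://"):
          if pointer.startswith(scheme):
              return scheme[:-3], pointer[len(scheme):]
      raise ValueError(f"Unsupported asset pointer: {pointer}")

  print(parse_asset_pointer("sediment://file_000000"))  # ('sediment', 'file_000000')
  ```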
- **g4f/client/__init__.py**
- drop unused `ignore_working` parameter in sync/async `Completions`
- normalise `media` argument: accept single tuple, infer filename when missing, fix index loop (sketched below)
- `Images.create_variation` updated to use the new media logic
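  The media normalisation could look like this sketch; the helper name is hypothetical, the accepted shapes (single tuple, missing filename) mirror the entry above:

  ```python
  import os
  from typing import List, Optional, Tuple, Union

  MediaItem = Tuple[object, Optional[str]]  # (data or path, filename)

  def normalize_media(media: Union[MediaItem, List[MediaItem], None]) -> List[MediaItem]:
      """Return a list of (data, filename) pairs, accepting a single tuple as shorthand."""
      if media is None:
          return []
      if isinstance(media, tuple):
          media = [media]  # a single (data, filename) tuple is wrapped into a list
      normalized = []
      for item in media:
          data, filename = item if isinstance(item, tuple) else (item, None)
          if filename is None and isinstance(data, str):
              filename = os.path.basename(data)  # infer the filename from a path when missing
          normalized.append((data, filename))
      return normalized

  print(normalize_media(("/tmp/cat.png", None)))  # [('/tmp/cat.png', 'cat.png')]
  ```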
- **g4f/gui/server/api.py**
- expose `latest_version_cached` via `?cache=` query parameter
- **g4f/gui/server/backend_api.py**
- optional Markdown extraction via `MarkItDown`; save rendered text as `<file>.md` (sketched below)
- upload flow rewrites: copy to temp file, move to bucket/media dir, clean temp, store filenames
- introduce `has_markitdown` guard and improved logging/exception handling
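  A rough sketch of the guarded extraction and the rename-with-fallback move; `MarkItDown().convert(...).text_content` matches the library's documented API, while the surrounding function names are assumptions:

  ```python
  import os
  import shutil

  try:
      from markitdown import MarkItDown
      has_markitdown = True
  except ImportError:
      has_markitdown = False

  def save_rendered_copy(filepath: str) -> None:
      """Store the extracted text next to the upload as <file>.md when MarkItDown is installed."""
      if not has_markitdown:
          return
      result = MarkItDown().convert(filepath)
      with open(f"{filepath}.md", "w", encoding="utf-8") as rendered:
          rendered.write(result.text_content)

  def move_upload(temp_path: str, target_path: str) -> None:
      """Prefer a cheap rename; fall back to copying when the rename fails (e.g. across filesystems)."""
      try:
          os.rename(temp_path, target_path)
      except OSError:
          shutil.copyfile(temp_path, target_path)
  ```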
- **g4f/tools/files.py**
- remove trailing spaces in HAR code-block header string
- **g4f/version.py**
- add `latest_version_cached` `@cached_property` for memoised version lookup
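  The memoised lookup is the standard `functools.cached_property` pattern; the class below is an illustrative stand-in, not g4f's actual version helper:

  ```python
  from functools import cached_property

  class VersionUtils:
      """Illustrative stand-in; the real helper performs a network lookup."""

      @property
      def latest_version(self) -> str:
          print("expensive lookup")  # in g4f this queries the package index
          return "1.0.0"

      @cached_property
      def latest_version_cached(self) -> str:
          return self.latest_version  # computed once, then memoised on the instance

  utils = VersionUtils()
  utils.latest_version_cached  # prints "expensive lookup"
  utils.latest_version_cached  # served from the cache, no second lookup
  ```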
* feat: introduce AnyProvider & LM Arena, overhaul model/provider logic
- **Provider additions & removals**
- Added `Provider/LMArenaProvider.py` with full async stream implementation and vision model support
- Registered `LMArenaProvider` in `Provider/__init__.py`; removed old `hf_space/LMArenaProvider.py`
- Created `providers/any_provider.py`; registers `AnyProvider` dynamically in `Provider`
- **Provider framework enhancements**
- `providers/base_provider.py`
- Added `video_models` and `audio_models` attributes
- `providers/retry_provider.py`
- Introduced `is_content()` helper; now treats `AudioResponse` as stream content
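  A minimal sketch of an `is_content()` style check; `AudioResponse` comes from the entry above, while the import path and the other accepted types are assumptions:

  ```python
  from g4f.providers.response import AudioResponse, ImageResponse  # import path assumed

  def is_content(chunk) -> bool:
      """Treat plain text and media responses as stream content; other chunk types are metadata/control."""
      return isinstance(chunk, (str, ImageResponse, AudioResponse))
  ```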
- **Cloudflare provider refactor**
- `Provider/Cloudflare.py`
- Re‑implemented `get_models()` with `read_models()` helper, `fallback_models`, robust nodriver/curl handling and model‑name cleaning
- **Other provider tweaks**
- `Provider/Copilot.py` – removed `"reasoning"` alias and initial `setOptions` WS message
- `Provider/PollinationsAI.py` & `PollinationsImage.py`
- Converted `audio_models` from list to dict, adjusted usage checks and labels
- `Provider/hf/__init__.py` – applies `model_aliases` remap before dispatch
- `Provider/hf_space/DeepseekAI_JanusPro7b.py` – now merges media before upload
- `needs_auth/Gemini.py` – dropped obsolete Gemini model entries
- `needs_auth/GigaChat.py` – added lowercase `"gigachat"` alias
- **API & client updates**
- Replaced `ProviderUtils` with new `Provider` map usage throughout API and GUI server
- Integrated `AnyProvider` as default fallback in `g4f/client` sync & async flows
- API endpoints now return counts of providers per model and filter by `x_ignored` header
- **GUI improvements**
- Updated JS labels with emoji icons, provider ignore logic, model count display
- **Model registry**
- Renamed base model `"GigaChat:latest"` ➜ `"gigachat"` in `models.py`
- **Miscellaneous**
- Added audio/video flags to GUI provider list
- Tightened error propagation in `retry_provider.raise_exceptions`
* Fix unittests
* fix: handle None conversation when accessing provider-specific data
- Modified `AnyProvider` class in `g4f/providers/any_provider.py`
- Updated logic to check if `conversation` is not None before accessing `provider.__name__` attribute
- Wrapped `getattr(conversation, provider.__name__, None)` block in an additional `if conversation is not None` condition
- Changed `setattr(conversation, provider.__name__, chunk)` to use `chunk.get_dict()` instead of the object directly
- Ensured consistent use of `JsonConversation` when modifying or assigning `conversation` data
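  The guarded access described in these bullets could look roughly like this; `JsonConversation` and `get_dict()` are named above, the helper functions themselves are illustrative:

  ```python
  from g4f.providers.response import JsonConversation

  def get_provider_conversation(conversation, provider):
      """Return the provider-specific sub-conversation, or None when no conversation exists yet."""
      if conversation is not None:
          return getattr(conversation, provider.__name__, None)
      return None

  def set_provider_conversation(conversation, provider, chunk: JsonConversation) -> JsonConversation:
      """Store the provider's state as a plain dict on a JsonConversation container."""
      if not isinstance(conversation, JsonConversation):
          conversation = JsonConversation()
      setattr(conversation, provider.__name__, chunk.get_dict())
      return conversation
  ```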
* feat: add provider string conversion & update IterListProvider call
- In `g4f/client/__init__.py`, within both `Completions` and `AsyncCompletions`, added a check to convert the provider from a string using `convert_to_provider(provider)` when applicable (sketched below).
- In `g4f/providers/any_provider.py`, removed the second argument (`False`) from the `IterListProvider` constructor call in the async for loop.
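  A sketch of that string-to-provider conversion; `convert_to_provider` is named in the entry, its import location is an assumption:

  ```python
  from g4f.client.service import convert_to_provider  # import location assumed

  def resolve_provider(provider):
      """Accept either a provider class or its name as a string."""
      if isinstance(provider, str):
          provider = convert_to_provider(provider)
      return provider
  ```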
---------
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
- Added "No auth / HAR file" authentication type in providers-and-models.md
- Added "Video generation" column to provider tables for future capability
- Updated model counts and provider capabilities throughout documentation
- Fixed ARTA provider with improved error handling and response validation
- Enhanced AllenAI provider with vision model support and proper image handling
- Significantly improved Blackbox provider:
- Added HAR file authentication support
- Added subscription status checking
- Added premium/demo model differentiation
- Improved session handling and error recovery
- Enhanced DDG provider with better error handling for challenges
- Improved PollinationsAI and PollinationsImage providers' model handling
- Added VideoModel class in g4f/models.py
- Added audio/video generation indicators in GUI components
- Added new Ai2 models: olmo-1-7b, olmo-2-32b, olmo-4-synthetic
- Added new commit message generation tool in etc/tool/commit.py
* New provider added (g4f/Provider/Websim.py)
* New provider added (g4f/Provider/Dynaspark.py)
* feat(g4f/gui/client/static/js/chat.v1.js): Enhance provider labeling for HuggingFace integrations
* feat(g4f/gui/server/api.py): add Hugging Face Space compatibility flag to provider data
* feat(g4f/models.py): add new providers and update model configurations
* Update g4f/Provider/__init__.py
* feat(g4f/Provider/AllenAI.py): expand model alias mappings for AllenAI provider
* feat(g4f/Provider/Blackbox.py): restructure image model handling and response processing
* feat(g4f/Provider/PollinationsAI.py): add new model aliases and streamline headers
* Update g4f/Provider/hf_space/*
* refactor(g4f/Provider/Copilot.py): update model alias mapping
* chore(g4f/models.py): update provider configurations for OpenAI models
* docs(docs/providers-and-models.md): update provider tables and model categorization
* fix(etc/examples/vision_images.py): update model and simplify client configuration
* fix(docs/providers-and-models.md): correct streaming status for GlhfChat provider
* docs(docs/providers-and-models.md): update provider capabilities and model documentation
* fix(models): update provider configurations for Mistral models
* fix(g4f/Provider/Blackbox.py): correct model alias key for Mistral variant
* feat(g4f/Provider/hf_space/CohereForAI_C4AI_Command.py): update supported model versions and aliases (closes #2802)
* fix(documentation): correct model names and provider counts (https://github.com/xtekky/gpt4free/pull/2805#issuecomment-2727489835)
* fix(g4f/models.py): correct mistral model configurations
* fix(g4f/Provider/DeepInfraChat.py): correct mixtral-small alias key
* New provider added (g4f/Provider/LambdaChat.py)
* feat(g4f/models.py): add new providers and enhance model configurations
* docs(docs/providers-and-models.md): add LambdaChat provider and update model listings
* feat(g4f/models.py): add new Liquid AI model and enhance providers
* docs(docs/providers-and-models.md): update model listings and provider counts
* feat(g4f/Provider/LambdaChat.py): add conditional reasoning processing based on model
* fix(g4f/tools/run_tools.py): handle combined thinking tags in single chunk
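  Handling an opening and closing thinking tag that arrive in the same chunk could look like this; the tag literals and return shape are assumptions:

  ```python
  from typing import Tuple

  def split_thinking(chunk: str, start_tag: str = "<think>", end_tag: str = "</think>") -> Tuple[str, str]:
      """Split a chunk that may contain a complete <think>...</think> block into (reasoning, visible_text)."""
      start = chunk.find(start_tag)
      end = chunk.find(end_tag)
      if start != -1 and end != -1 and end > start:
          reasoning = chunk[start + len(start_tag):end]
          visible = chunk[:start] + chunk[end + len(end_tag):]
          return reasoning.strip(), visible
      return "", chunk

  print(split_thinking("<think>step 1</think>The answer is 4."))  # ('step 1', 'The answer is 4.')
  ```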
* New provider added (g4f/Provider/Goabror.py)
* feat(g4f/Provider/Blackbox.py): implement dynamic session management and model access control
* refactor(g4f/models.py): update provider configurations and model entries
* docs(docs/providers-and-models.md): update model listings and provider counts
---------
Co-authored-by: kqlio67 <>
Add new default HuggingFace provider
Add format_image_prompt and get_last_user_message helper
Add stop_browser callable to get_nodriver function
Fix content type response in images route
Add original URL to downloaded image
Support ssl argument in StreamSession
Report Provider and Errors in RetryProvider
Support ssl argument in OpenaiTemplate
Remove model duplication in OpenaiChat
Disable ChatGpt provider and remove it from models.py
Update slim requirements
Support provider names as model name in Image generation
Add model qwen-2.5-1m-demo to models.py
Updates for memory with mem0
Fix asyncio import in nodriver function
Add provider specific api endpoints
Support for open settings in UI at /chat/settings
Update demo model list
Disable upload cookies in demo
Track usage in demo mode
Add messages without asking the AI
Add hint for browser usage in provider list
Add qwen2 prompt template to HuggingFace provider
Trim automatic messages in HuggingFaceAPI
Restore browser instance on startup errors in nodriver
Restored instances can be used as usual or to stop the browser
Add demo mode to web UI for HuggingSpace
Add rate limit support to web UI; simply install flask_limiter to enable it
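Optional rate limiting via Flask-Limiter could be wired up roughly like this; the route and limits are placeholders, only the soft dependency on `flask_limiter` comes from the entry above:

```python
from flask import Flask

app = Flask(__name__)

try:
    from flask_limiter import Limiter
    from flask_limiter.util import get_remote_address
    # Rate limiting only kicks in when flask_limiter is installed
    limiter = Limiter(get_remote_address, app=app, default_limits=["60 per minute"])
except ImportError:
    limiter = None

@app.route("/backend-api/v2/conversation", methods=["POST"])
def handle_conversation():
    return {"status": "ok"}  # placeholder; the real backend streams chat completions
```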
Add home for demo with Access Token input and validation
Add stripped model list for demo
Ignore encoding errors in web_search and file upload
Improve model list in HuggingSpace, PollinationsAI
Fix Image Generation in PollinationsAI
Add Image Upload in PollinationsAI
Support Usage, FinishReason, jsonMode in PollinationsAI
Add Reasoning to Web UI
Fix using provider api_keys in Web UI
Add Web Share Target to Manifest
Add target to GUI: `/chat/?prompt=your input`
Add local Font Awesome Free
Update icons to new Font Awesome Version
Add Providers to Model list in /v1/models
Refresh auth on 403 error in OpenaiChat
Add "Custom Provider": set API URL in the settings
Remove Discord link from result, add it to attr: Jmuz
Fix bug: file content is added to the prompt
Changed response from /v1/models API
Disable Pizzagpt Provider
Add no-auth support with OpenAI using nodriver
Fix missing 1 required positional argument: 'cls'
Update token counting in GUI
Fix streaming example in requests guide
Remove ChatGptEs as default model
Remove old text_to_speech service from gui
Update GUI and client readmes
Add HuggingSpaces group provider
Add providers parameters config forms to gui
* Add Path and PathLike support when uploading images
Improve raise_for_status in special cases
Move ImageResponse to providers.response module
Improve OpenaiChat and OpenaiAccount providers
Add Sources for web_search in OpenaiChat
Add JsonConversation for importing and exporting conversations to JS
Add RequestLogin response type
Add TitleGeneration support in OpenaiChat and gui
* Improve Docker Container Guide in README.md
* Add tool calls API support, add search tool support