- Introduced a new provider class `LMArenaBeta` in `g4f/Provider/LMArenaBeta.py` with capabilities for text and image models.
- Updated `g4f/Provider/Cloudflare.py` to remove an unused import of `Cookies`.
- Modified `g4f/Provider/PollinationsAI.py` to change the condition for checking the action in the `next` command.
- Added a new provider `PuterJS` in `g4f/Provider/PuterJS.py` with various model handling and authentication logic.
- Removed the old `PuterJS` implementation from `g4f/Provider/not_working/PuterJS.py`.
- Updated `g4f/Provider/__init__.py` to include the new `LMArenaBeta` and `PuterJS` providers.
- Changed the label of `HarProvider` in `g4f/Provider/har/__init__.py` to "LMArena (Har)".
- Adjusted the model list in `g4f/Provider/openai/models.py` to ensure consistency in model definitions.
- Updated the API response handling in `g4f/providers/response.py` to calculate total tokens in the `Usage` class constructor.
- Added try-except block to catch RuntimeError around asyncio.run(nodriver_read_models()) in Cloudflare.py to set cls.models to fallback_models if encountered
- Corrected the indentation of the `"followups"` key in `PollinationsAI.py` (43 → 44), changing it from a nested entry to a proper dictionary key
- No other code logic changed in these files
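The `Usage` change (computing total tokens in the constructor) could be sketched roughly as follows. The class and field names mirror the bullet above, but the body is an assumption, not the actual `g4f/providers/response.py` implementation:

```python
class Usage:
    """Token usage record; the total is derived once at construction time (sketch)."""

    def __init__(self, prompt_tokens: int = 0, completion_tokens: int = 0, **kwargs):
        self.prompt_tokens = prompt_tokens
        self.completion_tokens = completion_tokens
        # Compute the total in the constructor instead of recomputing on every access,
        # while still allowing an explicit total_tokens from the API response.
        self.total_tokens = kwargs.pop("total_tokens", prompt_tokens + completion_tokens)

    def get_dict(self) -> dict:
        return self.__dict__
```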
- Replaced the large GitHub project stats table in `README.md` with summaries and logos for Pollinations AI and MoneyPrinter V2
- Introduced `STATIC_URL` and `DIST_DIR` constants in new `g4f/constants.py` and used them across multiple files
- Updated `PollinationsAI.py` to support conversation title and follow-up generation using tool calls
- Modified `PollinationsAI.py` and `PollinationsImage.py` to use `STATIC_URL` for the `referrer` header
- Enhanced `PollinationsAI.stream_complete` to yield `ToolCalls`, `TitleGeneration`, and `SuggestedFollowups`
- Added `ToolCalls` handling in `client/__init__.py` to support non-stream and stream modes
- Updated `ChatCompletionDelta` model in `client/stubs.py` to support `ToolCalls`
- Modified `HarProvider` to merge `DEFAULT_HEADERS` into request headers
- Improved `OpenaiChat.py` by adding optional chaining to page evaluation expressions for robustness
- Updated `any_provider.py` to force use of `PollinationsAI` if `tools` key is present in kwargs
- Refactored `is_content` into a reusable function in `providers/response.py` and used in `retry_provider.py`
- Updated `gui/server/website.py` to use `STATIC_URL` and simplify `GPT4FREE_URL` handling
- Removed redundant constants from `version.py` and imported them from `constants.py`
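The `is_content` refactor mentioned above could be sketched as a small shared predicate used by the retry logic. The response types here are stand-ins for the real g4f classes, so treat this as an illustration only:

```python
# Hypothetical stand-ins for g4f's streamed response chunk types.
class AudioResponse: ...
class ImageResponse: ...
class Reasoning: ...

def is_content(chunk) -> bool:
    """Return True if the chunk counts as real stream content (sketch).

    Centralising this check lets the retry provider decide whether a
    provider actually produced output before giving up on it.
    """
    return isinstance(chunk, (str, AudioResponse, ImageResponse, Reasoning))
```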
- Fixed duplicate model entries in Blackbox provider model_aliases
- Added meta-llama- to llama- name cleaning in Cloudflare provider
- Enhanced PollinationsAI provider with improved vision model detection
- Added reasoning support to PollinationsAI provider
- Fixed HuggingChat authentication to include headers and impersonate
- Removed unused max_inputs_length parameter from HuggingFaceAPI
- Renamed extra_data to extra_body for consistency across providers
- Added Puter provider with grouped model support
- Enhanced AnyProvider with grouped model display and better model organization
- Fixed model cleaning in AnyProvider to handle more model name variations
- Added api_key handling for HuggingFace providers in AnyProvider
- Added see_stream helper function to parse event streams
- Updated GUI server to handle JsonConversation properly
- Fixed aspect ratio handling in image generation functions
- Added ResponsesConfig and ClientResponse for new API endpoint
- Updated requirements to include markitdown
- Changed documentation URL in README.md for detailed guidance link
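A helper like `see_stream` typically splits server-sent events into decoded JSON payloads. This is a generic SSE-parsing sketch, not the exact g4f signature:

```python
import json

def see_stream(lines):
    """Parse an SSE line iterator, yielding decoded JSON events (sketch)."""
    for line in lines:
        if isinstance(line, bytes):
            line = line.decode(errors="replace")
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, event names, and blank keep-alive lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # OpenAI-style end-of-stream sentinel
        yield json.loads(data)
```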
- In g4f/Provider/Cloudflare.py, broadened exception handling in async argument fetching to catch all exceptions in one place and only specific exceptions in another
- In g4f/Provider/PollinationsAI.py, removed the exception raised for unknown models not in image_models, replacing it with pass
- In g4f/Provider/needs_auth/OpenaiChat.py, modified session post call to always use cls._headers
- Changed if-chain in OpenaiChat.py to use elif for checking element prefix "sediment://"
- Added logic to extract and yield generated images for unique "file-service://" matches in streamed responses within OpenaiChat.py
- Commented out multimodal_text image asset pointer handling in OpenaiChat.py
- In g4f/client/__init__.py resolve_media(), set media name to basename of file path using os.path.basename
- Replaced inline `get_args_from_nodriver` logic with a new async function `nodriver_read_models` inside `Cloudflare` class
- Added `async def nodriver_read_models()` to handle asynchronous execution of `get_args_from_nodriver` and call `read_models()`
- Moved `try/except` block for handling `RuntimeError` and `FileNotFoundError` inside the new async function
- Updated fallback assignment `cls.models = cls.fallback_models` and debug logging to be within `nodriver_read_models` exception handler
- Replaced `asyncio.run(args)` with `asyncio.run(nodriver_read_models())` to execute the new async function
- Modified logic inside `except ResponseStatusError` block in `Cloudflare` class to incorporate the new structure
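Putting the bullets above together, the refactored wrapper might look like this sketch. The method names come from the changelog, but the bodies are assumptions (the real code drives a browser via nodriver):

```python
import asyncio

class Cloudflare:
    models = None
    fallback_models = ["llama-3-8b-instruct"]  # placeholder fallback list (assumption)

    @classmethod
    def get_models(cls):
        async def nodriver_read_models():
            try:
                # Stand-in for get_args_from_nodriver() followed by read_models().
                cls.models = await cls._read_models_from_browser()
            except (RuntimeError, FileNotFoundError) as e:
                # Fall back to the static list when no browser or event loop is available.
                print(f"Cloudflare: using fallback models ({e})")
                cls.models = cls.fallback_models

        if cls.models is None:
            asyncio.run(nodriver_read_models())
        return cls.models

    @classmethod
    async def _read_models_from_browser(cls):
        # Simulates the nodriver path being unavailable in this sketch.
        raise RuntimeError("nodriver is not available")
```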
- Modified `Cloudflare` class in `Cloudflare.py` to add logic for loading `_args` from a cache file if it exists and `_args` is `None`
- Inserted code in `Cloudflare.py` to check existence of cache file and read JSON content into `_args`
- Refactored `Copilot` class in `Copilot.py` by removing `try`/`finally` block around websocket message loop
- Moved websocket close logic to the end of the message handling loop in `Copilot.py`
- Removed nested `try`/`except` block inside the websocket loop in `Copilot.py`
- Preserved original message handling structure while simplifying control flow in `Copilot.py`
* feat: introduce AnyProvider & LM Arena, overhaul model/provider logic
- **Provider additions & removals**
- Added `Provider/LMArenaProvider.py` with full async stream implementation and vision model support
- Registered `LMArenaProvider` in `Provider/__init__.py`; removed old `hf_space/LMArenaProvider.py`
- Created `providers/any_provider.py`; registers `AnyProvider` dynamically in `Provider`
- **Provider framework enhancements**
- `providers/base_provider.py`
- Added `video_models` and `audio_models` attributes
- `providers/retry_provider.py`
- Introduced `is_content()` helper; now treats `AudioResponse` as stream content
- **Cloudflare provider refactor**
- `Provider/Cloudflare.py`
- Re‑implemented `get_models()` with `read_models()` helper, `fallback_models`, robust nodriver/curl handling and model‑name cleaning
- **Other provider tweaks**
- `Provider/Copilot.py` – removed `"reasoning"` alias and initial `setOptions` WS message
- `Provider/PollinationsAI.py` & `PollinationsImage.py`
- Converted `audio_models` from list to dict, adjusted usage checks and labels
- `Provider/hf/__init__.py` – applies `model_aliases` remap before dispatch
- `Provider/hf_space/DeepseekAI_JanusPro7b.py` – now merges media before upload
- `needs_auth/Gemini.py` – dropped obsolete Gemini model entries
- `needs_auth/GigaChat.py` – added lowercase `"gigachat"` alias
- **API & client updates**
- Replaced `ProviderUtils` with new `Provider` map usage throughout API and GUI server
- Integrated `AnyProvider` as default fallback in `g4f/client` sync & async flows
- API endpoints now return counts of providers per model and filter by `x_ignored` header
- **GUI improvements**
- Updated JS labels with emoji icons, provider ignore logic, model count display
- **Model registry**
- Renamed base model `"GigaChat:latest"` ➜ `"gigachat"` in `models.py`
- **Miscellaneous**
- Added audio/video flags to GUI provider list
- Tightened error propagation in `retry_provider.raise_exceptions`
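The `read_models()` helper with model-name cleaning mentioned in the Cloudflare refactor could be sketched like this; the prefix handling is an assumption based on the changelog, not the actual cleaning rules:

```python
def read_models(raw_models):
    """Normalise raw Cloudflare model ids into clean, de-duplicated names (sketch)."""
    seen, cleaned = set(), []
    for model in raw_models:
        name = model.split("/")[-1]                   # drop "@cf/meta/"-style path prefixes
        name = name.replace("meta-llama-", "llama-")  # unify vendor prefixes (assumption)
        if name not in seen:
            seen.add(name)
            cleaned.append(name)
    return cleaned
```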
* Fix unittests
* fix: handle None conversation when accessing provider-specific data
- Modified `AnyProvider` class in `g4f/providers/any_provider.py`
- Updated logic to check if `conversation` is not None before accessing `provider.__name__` attribute
- Wrapped `getattr(conversation, provider.__name__, None)` block in an additional `if conversation is not None` condition
- Changed `setattr(conversation, provider.__name__, chunk)` to use `chunk.get_dict()` instead of the object directly
- Ensured consistent use of `JsonConversation` when modifying or assigning `conversation` data
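The guarded access pattern above might look like this sketch, with a minimal stand-in for g4f's `JsonConversation`:

```python
class JsonConversation:
    """Minimal stand-in for g4f's JsonConversation (sketch)."""

    def get_dict(self) -> dict:
        return dict(self.__dict__)

def remember_provider_state(conversation, provider_name: str, chunk):
    """Store a provider's conversation chunk as plain data, guarding against None (sketch)."""
    if conversation is not None:
        # Persist a serialisable dict rather than the live object.
        setattr(conversation, provider_name, chunk.get_dict())
    return conversation

def read_provider_state(conversation, provider_name: str):
    if conversation is not None:
        return getattr(conversation, provider_name, None)
    return None
```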
* feat: add provider string conversion & update IterListProvider call
  - In `g4f/client/__init__.py`, within both `Completions` and `AsyncCompletions`, added a check to convert the provider from a string using `convert_to_provider(provider)` when applicable.
  - In `g4f/providers/any_provider.py`, removed the second argument (`False`) from the `IterListProvider` constructor call in the `async for` loop.
---------
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
- **g4f/providers/helper.py**
- Add `render_messages()` to normalise message contents that are lists of blocks.
- **g4f/Provider/Blackbox.py**
- Import `get_har_files` and `render_messages`.
- Replace manual walk of `get_cookies_dir()` with `get_har_files()` in `_find_session_in_har`.
- Simplify session‑parsing loop and exception logging; drop permissions check.
- Build `current_messages` with `render_messages(messages)` instead of raw list.
- **g4f/Provider/Cloudflare.py**
- Swap `to_string` import for `render_messages`.
- Add `"impersonate": "chrome"` to default `_args`.
- Construct `data["messages"]` with `render_messages(messages)` and inline `"parts"`; remove `to_string()` calls.
- Move `cache_file` write outside inner `try` to always save arguments.
- **g4f/Provider/Copilot.py**
- Defer `yield conversation` until after `conversation` is created when `return_conversation` is requested.
- **g4f/Provider/openai/har_file.py**
- Break out of `os.walk` after first directory in `get_har_files()` to avoid deep traversal.
- **g4f/api/__init__.py**
- Use `config.conversation` directly and set `return_conversation` when present.
- **g4f/client/__init__.py**
- Pass `conversation` to both `ChatCompletionChunk.model_construct()` and `ChatCompletion.model_construct()`.
- **g4f/client/stubs.py**
- Import `field_serializer` (with stub fallback).
- Add serializers for `conversation` (objects and dicts) and for `content` fields.
- Extend model constructors to accept/propagate `conversation`.
- **g4f/cookies.py**
- Insert ".huggingface.co" into `DOMAINS` list.
- Stop recursive directory walk in `read_cookie_files()` with early `break`.
- **g4f/gui/client/background.html**
- Reorder error‑handling branches; reset `errorImage` in `onload`.
- Revise `skipRefresh` logic and random image URL building.
- **g4f/gui/server/backend_api.py**
- Add `self.match_files` cache for repeated image searches.
- Use `safe_search` for sanitised term matching and `min` comparison.
- Limit walk to one directory level; support deterministic random selection via `random` query param.
- **Miscellaneous**
- Update imports where `render_messages` replaces `to_string`.
- Ensure all modified providers iterate messages through `render_messages` for consistent formatting.
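The `render_messages()` helper described above could be sketched as a generator that flattens OpenAI-style content-block lists into plain strings. The block schema assumed here (`{"type": "text", "text": ...}`) follows the common chat-completions convention, not necessarily g4f's exact internals:

```python
def render_messages(messages):
    """Normalise message contents that arrive as lists of content blocks (sketch)."""
    for message in messages:
        content = message.get("content")
        if isinstance(content, list):
            # Keep only the text blocks and join them into a single string.
            text = "\n".join(
                block.get("text", "")
                for block in content
                if isinstance(block, dict) and block.get("type") == "text"
            )
            yield {**message, "content": text}
        else:
            yield message
```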
- In **g4f/Provider/Cloudflare.py**:
- Added `from .helper import to_string`.
- Replaced conditional string checks with `to_string(message["content"])` for both `"content"` and elements in `"parts"`.
- In **g4f/Provider/PollinationsAI.py**:
- Removed `"o3-mini"` from the `vision_models` list.
- Updated the alias mapping dictionary by:
- Removing the `"o3-mini": "openai-reasoning"` entry.
- Removing the duplicate `"gpt-4o-mini": "searchgpt"` mapping.
- Removing the duplicate `"gemini-2.0-flash-thinking": "gemini-reasoning"` entry.
- Removing the `"qwq-32b": "qwen-reasoning"` mapping.
- Adding a new alias `"llama-4-scout": "llamascout"`.
- In **g4f/gui/client/static/css/style.css**:
- Changed the `border-left` property value from `var(--colour-4)` to `var(--media-select)`.
- In **g4f/models.py**:
- For the `"o3-mini"` model, removed `PollinationsAI` from its `best_provider` list.
- Changed the comment from `# llama 2` to `### llama 2-4 ###` and removed redundant comments for llama 3.1 and 3.2.
- Added a new model `llama_4_scout` with `base_provider` set to `"Meta Llama"` and `best_provider` as `IterListProvider([Cloudflare, PollinationsAI])`.
- For the `"qwq-32b"` model, removed `PollinationsAI` from its `best_provider` list.
- Updated the `ModelUtils` mapping to include the new `llama_4_scout` model.
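A `to_string()` helper of the kind referenced in the Cloudflare bullets typically coerces whatever shape the content takes into plain text. This is an illustrative sketch, not the exact g4f helper:

```python
def to_string(value) -> str:
    """Coerce message content (str, dict block, or list of blocks) to text (sketch)."""
    if isinstance(value, str):
        return value
    if isinstance(value, dict):
        return value.get("text", "")
    if isinstance(value, list):
        # Recurse so nested block lists flatten into one string.
        return "".join(to_string(item) for item in value)
    return str(value)
```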
Update conversation body in OpenaiChat provider
Update ThinkingProcessor in run_tools
Add unittests for ThinkingProcessor
Update default headers in requests module
Add AuthFileMixin to base_provider
Update demo model list
Disable upload cookies in demo
Track usage in demo mode
Add messages without asking the ai
Add hint for browser usage in provider list
Add qwen2 prompt template to HuggingFace provider
Trim automatic messages in HuggingFaceAPI
Fix RetryProviders not retrying
Add retry and continue for DuckDuckGo provider
Add cache for Cloudflare provider
Add cache for prompts on gui home
Add scroll to bottom checkbox in gui
Improve prompts on home gui
Fix response content type in api for files
- **Cloudflare Provider**: Added error handling for missing requirements when fetching arguments.
- **Copilot Provider**: Updated the prompt formatting to use a maximum length function, improving message handling.
- **PollinationsAI Provider**: Adjusted the prompt length to a maximum of 5000 characters.
- **GitHub Copilot Provider**: Updated to use `ClientSession` for better session management.
- **CSS Updates**: Enhanced the gradient styling in the GUI for a more visually appealing interface.
- **JavaScript Updates**: Added functionality to toggle search options in the chat interface.
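The prompt-length handling mentioned for the Copilot and PollinationsAI providers amounts to trimming older messages to fit a character budget. The function name and truncation strategy below are assumptions for illustration:

```python
def format_prompt_max_length(messages, max_length: int = 5000) -> str:
    """Join messages into a prompt, dropping oldest messages past max_length (sketch)."""
    prompt = "\n".join(f'{m["role"]}: {m["content"]}' for m in messages)
    if len(prompt) <= max_length:
        return prompt
    # Keep the most recent messages that fit within the budget.
    kept, budget = [], max_length
    for m in reversed(messages):
        line = f'{m["role"]}: {m["content"]}'
        if len(line) + 1 > budget:
            break
        kept.append(line)
        budget -= len(line) + 1
    return "\n".join(reversed(kept))
```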
* Fix api streaming, fix AsyncClient, improve Client class, some provider fixes, update models list, fix some tests, update model list in Airforce provider, add OpenAi image generation url to api, fix reload and debug in api arguments, fix websearch in gui
* Fix Cloudflare, Pi and AmigoChat providers
* Fix conversation support in DDG provider, Add cloudflare bypass with nodriver
* Fix unittests without curl_cffi