2322 Commits

Author SHA1 Message Date
hlohaus
b0813eb3f6 Refactor authentication cookie retrieval in HuggingChat; enhance header update logic in AppConfig and Backend_Api for improved flexibility 2025-12-23 23:18:49 +01:00
hlohaus
55e22ea02b Remove unused cf_ipcountry header and simplify user identification logic; update default browser path for Linux systems in get_nodriver function 2025-12-23 20:52:45 +01:00
hlohaus
509b5b8503 Fix conversation return logic in HuggingChat class 2025-12-23 18:08:40 +01:00
hlohaus
502c02b5f6 Update default model to gpt-5-2 and adjust text models accordingly 2025-12-22 14:00:54 +01:00
Andrea Lops
66841752ec Gemini 3 Pro Preview Model Support (#3300)
* Update GeminiCLI default model to gemini-3-pro-preview and add model mapping

* Update g4f/providers/any_model_map.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-22 13:44:30 +01:00
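The model-mapping change described in this PR boils down to an alias table that resolves user-facing names to the provider's default model. A minimal sketch of that pattern, assuming hypothetical names (`resolve` is illustrative, not the actual GeminiCLI code):

```python
# Hypothetical sketch of a model-alias mapping like the one added above.
default_model = "gemini-3-pro-preview"
model_aliases = {
    "gemini-3-pro": default_model,
}

def resolve(model: str) -> str:
    # Resolve an alias if one exists; fall back to the default for empty input
    return model_aliases.get(model, model or default_model)
```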
Ammar
ddcdcef882 update Yupp and LMArena providers (#3291)
* Handle Cloudflare WAF errors in Qwen provider

Added detection and raising of CloudflareError when the response indicates an Aliyun WAF block. This improves error handling for cases where requests are blocked by Cloudflare.

* Improve Qwen and LMArena provider authentication handling

Refactors Qwen provider to better manage authentication cookies and cache, including fallback and refresh logic for Cloudflare errors and rate limits. Adds use of AuthFileMixin to Qwen, improves argument retrieval from cache or nodriver, and ensures cookies are merged after requests. Updates LMArena to prioritize args from kwargs before reading from cache, improving flexibility and reliability in authentication.

* Improve LMArena provider recaptcha handling and error logging

Refactors the LMArena provider to better handle recaptcha token acquisition and error cases, including new async methods for recaptcha retrieval and improved error logging. Updates dependencies and imports, and enhances the raise_for_status utility to detect LMArena-specific recaptcha validation failures.

* Add image upload and caching to LMArena provider

Introduces image upload support with caching in the LMArena provider by implementing a prepare_images method. Images are uploaded, cached by hash, and attached to user messages for models supporting vision. Refactors attachment handling to use the new upload logic and improves code formatting and error handling.

* Update and expand model definitions in LMArena.py

Replaces the previous 'models' list with an updated and expanded set of model definitions, including new fields such as 'name', 'rank', and 'rankByModality'. This change adds new models, updates capabilities, and provides more detailed metadata for each model, improving model selection and feature support.

* Improve reCAPTCHA handling and set default timeout

Refactors the reCAPTCHA execution to use the enterprise.ready callback and adds error handling for token retrieval. Also sets a default timeout of 5 minutes for StreamSession if not provided.

* Update LMArena.py

* StreamSession

* Improve error logging for Qwen Cloudflare errors

Replaces a generic debug log with a more detailed error log that includes the exception message when a CloudflareError is caught in the Qwen provider. This enhances troubleshooting by providing more context in logs.

* generate ssxmod

* Improve error handling for Qwen provider responses

Adds checks for JSON error responses and raises RuntimeError when 'success' is false or a 'code' is present in the response data. Also refines HTML error detection logic in raise_for_status.

* Update fingerprint.py

* Update Yupp.py

For testing only

* Update Yupp.py

* Add Qwen bx-ua header generator and update Qwen provider

Introduces g4f/Provider/qwen/generate_ua.py for generating bx-ua headers, including AES encryption and fingerprinting logic. Updates Qwen provider to support dynamic UA/cookie handling and refactors image preparation in LMArena to handle empty media lists. Minor cleanup in cookie_generator.py and preparation for integrating bx-ua header in Qwen requests.

* Update LMArena.py

* Update LMArena.py

* Update LMArena.py

* Add user_info method to Yupp provider

Introduces a new async class method user_info to fetch and parse user details, credits, and model information from Yupp. Updates create_async_generator to yield user_info at the start of the conversation flow. Also fixes a bug in get_last_user_message call by passing a boolean for the prompt argument.

* Update Yupp.py

* Update models.py

* Update Yupp.py

* Enhance LMArena action ID handling and parsing

Refactored LMArena to dynamically extract and update action IDs from HTML/JS, replacing hardcoded values with a class-level dictionary. Added HTML parsing logic to load available actions and models, improving maintainability and adaptability to backend changes. Minor cleanup and improved code structure in Yupp and LMArena providers.

* Update LMArena.py
2025-12-22 13:43:35 +01:00
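The "uploaded, cached by hash" behavior this PR adds to the LMArena provider follows a common upload-cache pattern: key each image by its content digest so a repeated attachment never triggers a second upload. A minimal sketch under that assumption (class and method names here are hypothetical, not the provider's actual API):

```python
import hashlib

class ImageUploadCache:
    """Illustrative hash-keyed upload cache: identical image bytes
    are uploaded once and the resulting URL is reused afterwards."""

    def __init__(self):
        self._cache = {}  # sha256 hex digest -> uploaded URL

    def get_or_upload(self, data: bytes, upload) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self._cache:
            # Only invoke the (potentially expensive) upload once per unique image
            self._cache[key] = upload(data)
        return self._cache[key]
```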
H Lohaus
807b7c8b06 Merge pull request #3272 from ayu-haker/patch-2
Refactor CLI argument parsers for API and MCP modes
2025-12-12 14:51:29 +01:00
H Lohaus
9535512efe Merge pull request #3286 from michaelbrinkworth/add-ai-badger
add AI Badgr as an OpenAI-compatible backend
2025-12-12 14:50:52 +01:00
Copilot
f244ee0c83 Fix undefined names, type errors, and code quality issues (#3288)
* Initial plan

* Fix critical code issues: undefined names, unused nonlocal, and type annotations

Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>

* Fix code style issues: improve None comparisons, membership tests, and lambda usage

Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>

* Remove unused imports: clean up re and os imports from tools

Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-12-12 08:33:43 +01:00
copilot-swe-agent[bot]
2e83084718 Remove deprecated and not_working providers
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-12-11 17:42:31 +00:00
hlohaus
e2e996a7f9 Refactor usage data retrieval in Backend_Api; improve response handling for missing data and ensure cache directory creation 2025-12-11 12:03:02 +01:00
michael m
8516f167ea feat: add AI Badgr as OpenAI-compatible provider
- Add AIBadgr provider class extending OpenaiTemplate
- API endpoint: https://aibadgr.com/api/v1
- Full support for streaming, system messages, and message history
- Add example usage script in etc/examples/aibadgr.py
- Provider requires API key authentication
2025-12-11 08:26:45 +10:00
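An OpenAI-compatible provider like the one added above is usually just a declarative subclass setting a base URL and capability flags. The sketch below follows that convention but is illustrative only: it omits the real `OpenaiTemplate` base class, and the attribute set is an assumption, not the merged implementation:

```python
# Hypothetical, standalone sketch of the provider declaration; in g4f this
# class would extend OpenaiTemplate, which supplies the request logic.
class AIBadgr:
    label = "AI Badgr"
    url = "https://aibadgr.com"
    api_base = "https://aibadgr.com/api/v1"
    working = True
    needs_auth = True               # provider requires an API key
    supports_stream = True
    supports_system_message = True
    supports_message_history = True
```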
hlohaus
d90ad1d00d Refactor model alias handling in PollinationsAI; simplify logic for setting model aliases and improve video model filtering 2025-12-10 15:13:50 +01:00
hlohaus
1467d7ec44 Enhance HuggingFaceAPI to support additional provider API paths; refactor Completions and AsyncCompletions to use client.api_key and client.base_url 2025-12-10 14:19:10 +01:00
hlohaus
1032b165fe Add support for additional parameters in get_models method for BlackboxPro and GeminiPro classes 2025-12-09 19:12:58 +01:00
hlohaus
0366acbcd1 Refactor model handling in PollinationsAI and related providers; update API endpoints, enhance model loading logic, and improve error handling in get_models method 2025-12-09 18:08:35 +01:00
hlohaus
aa0aade9c4 Update provider configurations to set 'working' status to False; refactor tool handling in GeminiCLI and enhance response structure in Usage class 2025-12-08 17:13:26 +01:00
hlohaus
b8d2640753 Update HuggingChat authentication flow and improve error handling; add image and audio templates in tools 2025-12-08 11:37:00 +01:00
H Lohaus
9ada0ce84c Merge pull request #3276 from 3mora2/main
fix g4f/Provider/Qwen.py, g4f/Provider/needs_auth/OpenaiChat.py
2025-12-07 19:34:30 +01:00
hlohaus
dd08d9e488 Update PollinationsAI login URL and fix response handling in Completions class 2025-12-07 19:32:24 +01:00
hlohaus
5ca011fe43 Fix model list in GradientNetwork provider by replacing duplicate entry with "Qwen3 235B" 2025-12-07 01:18:29 +01:00
Ammar
619cf90c13 Increase response stream buffer size to 10MB
Set the response content's _high_water attribute to 10MB per line to address ValueError: Chunk too big during stream processing.
2025-12-06 18:26:27 +02:00
Ammar
6b4355e85e Merge branch 'main' of https://github.com/3mora2/gpt4free 2025-12-06 16:41:02 +02:00
Ammar
e4399bc8c0 Update OpenaiChat.py 2025-12-06 16:40:35 +02:00
Ammar
c7201723a4 Merge branch 'xtekky:main' into main 2025-12-06 16:36:30 +02:00
Ammar
e5ca022142 Add websocket media streaming for OpenaiChat
Introduces the wss_media method to stream media updates via websocket in OpenaiChat, and updates logic to yield media as it becomes available. Also adds wait_media as a fallback polling method, tracks image generation tasks in Conversation, and fixes a bug in curl_cffi.py when deleting the 'autoping' key from kwargs.
2025-12-06 16:34:48 +02:00
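The "websocket stream with polling fallback" flow this commit describes can be sketched as a push-first consumer that drops back to polling when the socket fails. The function names below (`wss_media`, `wait_media`) match the commit message, but the wiring is a hypothetical simplification, not the OpenaiChat code:

```python
import asyncio

async def stream_media(wss_media, wait_media):
    """Yield media updates from the websocket stream; if the
    connection fails, fall back to the polling method instead."""
    try:
        async for item in wss_media():
            yield item
    except ConnectionError:
        # Fallback path: poll until the media becomes available
        async for item in wait_media():
            yield item
```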
Ammar
027d486b57 Update Qwen.py 2025-12-06 09:07:23 +02:00
hlohaus
957d73a76e Add UTF-8 encoding to file writes in Backend_Api class 2025-12-05 23:34:10 +01:00
hlohaus
468dc7bd67 Add "gemini-3-pro-preview" model to GeminiCLI provider 2025-11-30 22:30:14 +01:00
hlohaus
1fd9b8d116 Refactor GradientNetwork and ItalyGPT providers; update BAAI_Ling for improved functionality and model handling 2025-11-30 11:20:29 +01:00
H Lohaus
ed84c2dc6b Merge pull request #3268 from HexyeDEV/patch-4
Add ItalyGPT provider
2025-11-30 01:54:56 +01:00
H Lohaus
cda4634d34 Merge pull request #3273 from xtekky/gemini-3
Add support for gemini-3-pro model in Gemini provider
2025-11-30 01:54:35 +01:00
H Lohaus
1b3628dfee Merge pull request #3270 from xtekky/copilot/add-ling-1t-model
Add Ling-1T model support via HuggingFace Space provider
2025-11-30 01:54:08 +01:00
ayushman bosu roy
e7a7dbbe70 Refactor CLI argument parsers for API and MCP modes
Refactor CLI argument parsing for API and MCP modes, enhancing help descriptions and maintaining backward compatibility for deprecated flags.
2025-11-30 02:53:59 +05:30
copilot-swe-agent[bot]
21113c51a6 Remove redundant continue statement for cluster message handling
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-11-29 04:39:45 +00:00
copilot-swe-agent[bot]
098b2401ea Fix response parsing: use type "reply" with data.content/reasoningContent, update models
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-11-29 04:36:25 +00:00
copilot-swe-agent[bot]
04e300d7a6 Fix code review issues in BAAI_Ling provider
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-11-29 04:35:54 +00:00
copilot-swe-agent[bot]
c364425250 Add BAAI_Ling provider for Ling-1T model
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-11-29 04:32:32 +00:00
copilot-swe-agent[bot]
f57663cbe8 Address code review: pass enable_thinking value directly, explicit skip for cluster messages
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-11-29 04:25:22 +00:00
copilot-swe-agent[bot]
da4d7d118d Use StreamSession for proper line-by-line NDJSON parsing
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-11-29 04:24:09 +00:00
copilot-swe-agent[bot]
f0ea4c5b95 Add GradientNetwork provider for chat.gradient.network
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-11-29 04:22:02 +00:00
Ammar
d76e56a66f fix error (#3265)
2025-11-29 05:15:05 +01:00
hlohaus
6be76e3e84 Add support for gemini-3-pro model in Gemini provider 2025-11-29 05:12:23 +01:00
Hexye
688640b764 Add ItalyGPT provider 2025-11-28 22:47:54 +01:00
Ammar
c32c676b5b Update OpenaiChat.py
fix error
2025-11-27 09:21:13 +02:00
H Lohaus
2e6d417d02 Update Grok.py 2025-11-26 16:19:47 +01:00
Ammar
7771cf3d43 Improve Yupp provider account handling, request timeout, and byte retrieval from URL (#3249)
* Add image caching to Yupp provider

Introduces an image cache to avoid redundant uploads in the Yupp provider. Refactors media attachment handling into a new prepare_files method, improving efficiency and code organization. Updates .gitignore to exclude .idea directory.

* Refactor Yupp stream handling and chunk processing

Improves stream segmentation in the Yupp provider by introducing buffers for target, variant, quick, thinking, and extra streams. Refactors chunk processing to better handle image-gen, quick responses, and variant outputs, and adds more robust stream ID extraction and routing logic. Yields a consolidated JsonResponse with all stream segments for downstream use.

* Handle ClientResponseError in Yupp provider

Adds specific handling for aiohttp ClientResponseError in the Yupp provider. Marks account as invalid on 500 Internal Server Error, otherwise increments error count and raises ProviderException for other errors.

* Update Yupp.py

Fix 429 'Too Many Requests' errors

* Update Yupp.py

* Improve Yupp provider account handling and request timeout

Refactored account loading to preserve account history and error counts when updating tokens. Enhanced request logic to support custom timeouts using aiohttp's ClientTimeout, allowing for more flexible timeout configuration.

* Update __init__.py

* Handle multi-line <think> and <yapp> blocks in Yupp

Added logic to capture and process multi-line <think> and <yapp class="image-gen"> blocks referenced by special IDs. Introduced block storage and extraction functions, enabling reasoning and image-gen content to be handled via references in the response stream.

* Update LMArena.py

Not Found Model error

* Refactor to use StreamSession in Qwen and Yupp providers

Replaced aiohttp.ClientSession with StreamSession in Qwen.py and Yupp.py for improved session handling. Updated exception and timeout references in Yupp.py to use aiohttp types. Improved default argument handling in StreamSession initialization.

* Update Yupp.py

* Add status parameter to get_generated_image method

Introduces a 'status' parameter to the get_generated_image method to allow passing image generation status. Updates method calls and response objects to include status in their metadata for improved tracking of image generation progress.

* Update OpenaiChat.py

* Refactor Qwen image upload and caching logic and token

Reworked the image upload flow in Qwen provider to use direct file uploads with OSS headers, added caching for uploaded images, and improved file type detection. Updated prepare_files to handle uploads via session and cache results, and added utility for generating OSS headers. Minor imports and typing adjustments included and token support.

* Refactor Qwen and Yupp providers for improved async handling

Updated Qwen provider to handle timeout via kwargs and improved type annotations. Refactored Yupp provider for better code organization, formatting, and async account rotation logic. Enhanced readability and maintainability by reordering imports, adding whitespace, and clarifying function implementations.

* Add image caching to OpenaiChat provider

Introduces an image cache mechanism to OpenaiChat for uploaded images, reducing redundant uploads and improving efficiency. Also refactors code for clarity, updates type hints, and makes minor formatting improvements throughout the file.
2025-11-26 14:02:51 +01:00
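The account handling this PR describes amounts to a rotation pool with per-account error counts: a 500 invalidates the account immediately, while other errors accumulate until a threshold is hit. A minimal sketch of that logic, with a hypothetical structure and threshold that are not the Yupp provider's actual code:

```python
class AccountPool:
    """Illustrative account rotation: a 500 marks an account invalid at
    once; other failures increment its error count until a cap is reached."""

    MAX_ERRORS = 3  # assumed threshold, for illustration only

    def __init__(self, tokens):
        self.accounts = [{"token": t, "errors": 0, "valid": True} for t in tokens]

    def report_error(self, account, status: int):
        if status == 500:
            # Internal server error: treat the account as permanently broken
            account["valid"] = False
        else:
            account["errors"] += 1
            if account["errors"] >= self.MAX_ERRORS:
                account["valid"] = False

    def next_account(self):
        for account in self.accounts:
            if account["valid"]:
                return account
        raise RuntimeError("No valid accounts left")
```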
keacwu
05c108d3f6 Improve input handling in Grok.py
Refactor input selection and submission logic for better error handling and clarity.
2025-11-21 15:19:17 +08:00
Ammar
18fda760cb Add image caching to Yupp provider (#3246)
* Add image caching to Yupp provider

Introduces an image cache to avoid redundant uploads in the Yupp provider. Refactors media attachment handling into a new prepare_files method, improving efficiency and code organization. Updates .gitignore to exclude .idea directory.

* Refactor Yupp stream handling and chunk processing

Improves stream segmentation in the Yupp provider by introducing buffers for target, variant, quick, thinking, and extra streams. Refactors chunk processing to better handle image-gen, quick responses, and variant outputs, and adds more robust stream ID extraction and routing logic. Yields a consolidated JsonResponse with all stream segments for downstream use.

* Handle ClientResponseError in Yupp provider

Adds specific handling for aiohttp ClientResponseError in the Yupp provider. Marks account as invalid on 500 Internal Server Error, otherwise increments error count and raises ProviderException for other errors.

* Update Yupp.py

Fix 429 'Too Many Requests' errors

* Update Yupp.py
2025-11-15 18:16:03 +01:00
hlohaus
fb26557dbb Fix model retrieval process in AnyModelProviderMixin
- Added error handling around model retrieval to prevent crashes when a provider fails.
- Ensured that exceptions during model fetching are logged for debugging purposes.
- Cleaned up the indentation and structure of the model retrieval logic for better readability.
2025-11-14 17:24:28 +01:00
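The fix described in this commit is the classic "wrap each provider fetch in a try/except" pattern, so one failing provider logs an error instead of crashing the whole aggregation. A sketch under that assumption (the function and provider shape are hypothetical, not the `AnyModelProviderMixin` code):

```python
import logging

def collect_models(providers):
    """Aggregate model lists across providers, logging and skipping
    any provider whose fetch raises instead of crashing the loop."""
    models = {}
    for provider in providers:
        try:
            models[provider.__name__] = provider.get_models()
        except Exception as e:
            # Log for debugging and continue with the remaining providers
            logging.debug("%s: %s", provider.__name__, e)
    return models
```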