Commit Graph

474 Commits

Author SHA1 Message Date
H Lohaus
9535512efe Merge pull request #3286 from michaelbrinkworth/add-ai-badger
Add AI Badgr as an OpenAI-compatible backend
2025-12-12 14:50:52 +01:00
Copilot
f244ee0c83 Fix undefined names, type errors, and code quality issues (#3288)
* Initial plan

* Fix critical code issues: undefined names, unused nonlocal, and type annotations

Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>

* Fix code style issues: improve None comparisons, membership tests, and lambda usage

Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>

* Remove unused imports: clean up re and os imports from tools

Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: hlohaus <983577+hlohaus@users.noreply.github.com>
2025-12-12 08:33:43 +01:00
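The Copilot commit above names three recurring style fixes: None comparisons, membership tests, and lambda usage. A minimal, hypothetical illustration of those patterns (not the actual gpt4free changes) might look like this:

```python
# Illustrative only: hypothetical snippets for the style fixes named above
# (None comparisons, membership tests, lambda usage), not the real g4f code.
from operator import itemgetter

providers = [{"name": "alpha", "score": 3}, {"name": "beta", "score": None}]

# None comparison: prefer identity checks ("is not None") over "!= None".
active = [p for p in providers if p["score"] is not None]

# Membership test: prefer "in" over chained equality comparisons.
def is_media(kind: str) -> bool:
    return kind in ("image", "audio", "video")

# Lambda usage: pass an existing callable instead of wrapping it in a lambda.
ranked = sorted(active, key=itemgetter("score"))   # instead of key=lambda p: p["score"]
names = [name.upper() for name in map(itemgetter("name"), active)]

print(ranked, names, is_media("image"))
```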
michael m
8516f167ea feat: add AI Badgr as OpenAI-compatible provider
- Add AIBadgr provider class extending OpenaiTemplate
- API endpoint: https://aibadgr.com/api/v1
- Full support for streaming, system messages, and message history
- Add example usage script in etc/examples/aibadgr.py
- Provider requires API key authentication
2025-12-11 08:26:45 +10:00
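The AI Badgr entry above only states the base URL (https://aibadgr.com/api/v1), that the provider is OpenAI-compatible, and that an API key is required. Assuming the conventional OpenAI-compatible /chat/completions path and Bearer authentication, a minimal direct call could look like the sketch below; the model name and the AIBADGR_API_KEY variable are placeholders, and the bundled etc/examples/aibadgr.py presumably goes through the g4f client rather than raw HTTP.

```python
import os
import requests

# Minimal sketch of calling an OpenAI-compatible backend such as AI Badgr.
# The base URL comes from the commit message; the /chat/completions path,
# payload shape, and AIBADGR_API_KEY variable are conventional assumptions.
API_BASE = "https://aibadgr.com/api/v1"
API_KEY = os.environ.get("AIBADGR_API_KEY", "your-api-key")

response = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "placeholder-model",  # replace with a model the provider lists
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ],
        "stream": False,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```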
hlohaus
1467d7ec44 Enhance HuggingFaceAPI to support additional provider API paths; refactor Completions and AsyncCompletions to use client.api_key and client.base_url 2025-12-10 14:19:10 +01:00
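A hedged sketch of the refactor described in the commit above: Completions no longer carries its own copies of the credentials but reads api_key and base_url from the owning client, so changing them on the client takes effect everywhere. Class and attribute names below are simplified stand-ins, not the HuggingFaceAPI code, and the default URL is a placeholder.

```python
from dataclasses import dataclass

# Hypothetical sketch: Completions reads credentials from its client instead
# of storing duplicates, so the client stays the single source of truth.
@dataclass
class Client:
    api_key: str
    base_url: str = "https://example.invalid/v1"  # placeholder default

class Completions:
    def __init__(self, client: Client):
        self.client = client

    def request_args(self, path: str) -> dict:
        # Build request arguments from the client's current settings.
        return {
            "url": f"{self.client.base_url}{path}",
            "headers": {"Authorization": f"Bearer {self.client.api_key}"},
        }

client = Client(api_key="hf_xxx")
print(Completions(client).request_args("/chat/completions")["url"])
```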
hlohaus
1032b165fe Add support for additional parameters in get_models method for BlackboxPro and GeminiPro classes 2025-12-09 19:12:58 +01:00
hlohaus
0366acbcd1 Refactor model handling in PollinationsAI and related providers; update API endpoints, enhance model loading logic, and improve error handling in get_models method 2025-12-09 18:08:35 +01:00
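The get_models improvements above follow a common pattern: fetch the provider's model list once, tolerate network or parsing failures, and fall back to a static default. A generic sketch under those assumptions (endpoint, response shape, and names are placeholders):

```python
import requests

# Hypothetical get_models pattern: fetch the model list once, tolerate
# failures, and fall back to a static default so the provider stays usable.
class ExampleProvider:
    api_models_endpoint = "https://example.invalid/api/models"  # placeholder
    default_model = "default-model"
    models: list[str] = []

    @classmethod
    def get_models(cls, timeout: int = 10, **kwargs) -> list[str]:
        if not cls.models:
            try:
                response = requests.get(cls.api_models_endpoint, timeout=timeout, **kwargs)
                response.raise_for_status()
                cls.models = [item["name"] for item in response.json()]
            except (requests.RequestException, ValueError, KeyError, TypeError):
                # Network or parsing failure: fall back to the default model.
                cls.models = [cls.default_model]
        return cls.models
```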
hlohaus
aa0aade9c4 Update provider configurations to set 'working' status to False; refactor tool handling in GeminiCLI and enhance response structure in Usage class 2025-12-08 17:13:26 +01:00
hlohaus
b8d2640753 Update HuggingChat authentication flow and improve error handling; add image and audio templates in tools 2025-12-08 11:37:00 +01:00
Ammar
6b4355e85e Merge branch 'main' of https://github.com/3mora2/gpt4free 2025-12-06 16:41:02 +02:00
Ammar
e4399bc8c0 Update OpenaiChat.py 2025-12-06 16:40:35 +02:00
Ammar
c7201723a4 Merge branch 'xtekky:main' into main 2025-12-06 16:36:30 +02:00
Ammar
e5ca022142 Add websocket media streaming for OpenaiChat
Introduces the wss_media method to stream media updates via websocket in OpenaiChat, and updates logic to yield media as it becomes available. Also adds wait_media as a fallback polling method, tracks image generation tasks in Conversation, and fixes a bug in curl_cffi.py when deleting the 'autoping' key from kwargs.
2025-12-06 16:34:48 +02:00
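A hedged sketch of the wss_media/wait_media approach described above: stream media updates over a websocket with aiohttp and, if the socket is unavailable, fall back to polling. URLs and message fields are invented for illustration and do not reflect the actual OpenaiChat payloads.

```python
import asyncio
import aiohttp

# Hypothetical websocket-first media stream with a polling fallback.
async def stream_media(session: aiohttp.ClientSession, wss_url: str, poll_url: str):
    try:
        async with session.ws_connect(wss_url, autoping=True) as ws:
            async for msg in ws:
                if msg.type == aiohttp.WSMsgType.TEXT:
                    data = msg.json()
                    if data.get("media_url"):
                        yield data["media_url"]   # yield media as it arrives
                    if data.get("done"):
                        return
                elif msg.type in (aiohttp.WSMsgType.CLOSED, aiohttp.WSMsgType.ERROR):
                    break
    except aiohttp.ClientError:
        pass  # websocket unavailable: fall through to polling

    # Fallback: poll until the media is reported as ready.
    while True:
        async with session.get(poll_url) as response:
            data = await response.json()
        if data.get("media_url"):
            yield data["media_url"]
            return
        await asyncio.sleep(2)
```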
hlohaus
468dc7bd67 Add "gemini-3-pro-preview" model to GeminiCLI provider 2025-11-30 22:30:14 +01:00
H Lohaus
cda4634d34 Merge pull request #3273 from xtekky/gemini-3
Add support for gemini-3-pro model in Gemini provider
2025-11-30 01:54:35 +01:00
Ammar
d76e56a66f fix error (#3265)
fix error
2025-11-29 05:15:05 +01:00
hlohaus
6be76e3e84 Add support for gemini-3-pro model in Gemini provider 2025-11-29 05:12:23 +01:00
Ammar
c32c676b5b Update OpenaiChat.py
fix error
2025-11-27 09:21:13 +02:00
H Lohaus
2e6d417d02 Update Grok.py 2025-11-26 16:19:47 +01:00
Ammar
7771cf3d43 Improve Yupp provider account handling, request timeouts, and fetching bytes from URLs (#3249)
* Add image caching to Yupp provider

Introduces an image cache to avoid redundant uploads in the Yupp provider. Refactors media attachment handling into a new prepare_files method, improving efficiency and code organization. Updates .gitignore to exclude .idea directory.

* Refactor Yupp stream handling and chunk processing

Improves stream segmentation in the Yupp provider by introducing buffers for target, variant, quick, thinking, and extra streams. Refactors chunk processing to better handle image-gen, quick responses, and variant outputs, and adds more robust stream ID extraction and routing logic. Yields a consolidated JsonResponse with all stream segments for downstream use.

* Handle ClientResponseError in Yupp provider

Adds specific handling for aiohttp ClientResponseError in the Yupp provider. Marks account as invalid on 500 Internal Server Error, otherwise increments error count and raises ProviderException for other errors.

* Update Yupp.py

Fix 429 'Too Many Requests' responses

* Update Yupp.py

* Improve Yupp provider account handling and request timeout

Refactored account loading to preserve account history and error counts when updating tokens. Enhanced request logic to support custom timeouts using aiohttp's ClientTimeout, allowing for more flexible timeout configuration.

* Update __init__.py

* Handle multi-line <think> and <yapp> blocks in Yupp

Added logic to capture and process multi-line <think> and <yapp class="image-gen"> blocks referenced by special IDs. Introduced block storage and extraction functions, enabling reasoning and image-gen content to be handled via references in the response stream.

* Update LMArena.py

Fix 'Model Not Found' error

* Refactor to use StreamSession in Qwen and Yupp providers

Replaced aiohttp.ClientSession with StreamSession in Qwen.py and Yupp.py for improved session handling. Updated exception and timeout references in Yupp.py to use aiohttp types. Improved default argument handling in StreamSession initialization.

* Update Yupp.py

* Add status parameter to get_generated_image method

Introduces a 'status' parameter to the get_generated_image method to allow passing image generation status. Updates method calls and response objects to include status in their metadata for improved tracking of image generation progress.

* Update OpenaiChat.py

* Refactor Qwen image upload and caching logic, and token handling

Reworked the image upload flow in Qwen provider to use direct file uploads with OSS headers, added caching for uploaded images, and improved file type detection. Updated prepare_files to handle uploads via session and cache results, and added utility for generating OSS headers. Minor import and typing adjustments included, along with token support.

* Refactor Qwen and Yupp providers for improved async handling

Updated Qwen provider to handle timeout via kwargs and improved type annotations. Refactored Yupp provider for better code organization, formatting, and async account rotation logic. Enhanced readability and maintainability by reordering imports, adding whitespace, and clarifying function implementations.

* Add image caching to OpenaiChat provider

Introduces an image cache mechanism to OpenaiChat for uploaded images, reducing redundant uploads and improving efficiency. Also refactors code for clarity, updates type hints, and makes minor formatting improvements throughout the file.
2025-11-26 14:02:51 +01:00
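The Yupp PR above describes a concrete account policy: a 500 Internal Server Error marks the account invalid, any other HTTP error only increments its error count, and requests honour a configurable timeout via aiohttp's ClientTimeout. A simplified sketch of that policy follows; the Account class and field names are assumptions, not the provider's actual structures.

```python
import aiohttp

# Hypothetical account-handling policy: invalidate on 500, count other errors,
# and apply a configurable request timeout.
class Account:
    def __init__(self, token: str):
        self.token = token
        self.valid = True
        self.error_count = 0

async def request_with_account(account: Account, url: str, payload: dict,
                               timeout: float = 120) -> dict:
    client_timeout = aiohttp.ClientTimeout(total=timeout)
    async with aiohttp.ClientSession(timeout=client_timeout) as session:
        try:
            async with session.post(
                url,
                json=payload,
                headers={"Authorization": f"Bearer {account.token}"},
            ) as response:
                response.raise_for_status()
                return await response.json()
        except aiohttp.ClientResponseError as error:
            if error.status == 500:
                account.valid = False      # drop the account entirely
            else:
                account.error_count += 1   # keep it, but remember the failure
            raise
```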
keacwu
05c108d3f6 Improve input handling in Grok.py
Refactor input selection and submission logic for better error handling and clarity.
2025-11-21 15:19:17 +08:00
hlohaus
2955729584 Fix docker build, fix temporary chat 2025-11-10 19:56:51 +01:00
hlohaus
213e04bae7 Fix LMArena provider 2025-11-10 09:30:53 +01:00
hlohaus
4d98885ec0 Refactor LMArena provider to generate unique evaluation session IDs; remove redundant assignment 2025-11-02 09:02:31 +01:00
hlohaus
af56ac0c03 Enhance MCP server tests to reflect updated tool count; improve model fetching with timeout handling in providers 2025-11-02 08:01:20 +01:00
hlohaus
2317bd5a83 Refactor MarkItDown and OpenaiChat classes for improved media handling and optional parameters; enhance is_data_an_media function to support binary/octet-stream return type for unsupported URLs. 2025-10-31 18:16:19 +01:00
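A minimal sketch of the fallback behaviour described for is_data_an_media above: guess a media type from the URL and return a generic octet-stream content type for unsupported URLs instead of rejecting them. This uses the standard mimetypes module and is an assumption about the helper's behaviour, not its actual code.

```python
import mimetypes

# Guess a media type from the URL; anything unsupported falls back to a
# generic binary content type rather than being rejected.
SUPPORTED_PREFIXES = ("image/", "audio/", "video/")

def guess_media_type(url: str) -> str:
    content_type, _ = mimetypes.guess_type(url)
    if content_type and content_type.startswith(SUPPORTED_PREFIXES):
        return content_type
    return "application/octet-stream"  # binary fallback for unsupported URLs

print(guess_media_type("https://example.invalid/picture.png"))  # image/png
print(guess_media_type("https://example.invalid/archive.xyz"))  # application/octet-stream
```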
hlohaus
da6c00e2a2 Refactor Cloudflare and LMArena providers to enhance authentication handling and improve WebSocket communication 2025-10-30 21:34:45 +01:00
Ammar
d322083305 OpenaiChat: add auth_result.headers to ImageResponse (#3211)
* Update OpenaiChat.py

---------

Co-authored-by: H Lohaus <hlohaus@users.noreply.github.com>
2025-10-18 17:33:17 +02:00
hlohaus
84105bd033 Refactor LMArena to use UUID version 7 for message IDs and add a new uuid.py module for UUID generation 2025-10-17 20:18:59 +02:00
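UUID version 7 (RFC 9562) puts a 48-bit millisecond timestamp in the most significant bits, so message IDs sort roughly by creation time. Older Python releases lack a built-in uuid7() helper, which is presumably why the commit adds its own uuid.py; a minimal generator following the RFC layout (a sketch, not the repository's module) looks like this:

```python
import os
import time
import uuid

# Minimal UUIDv7 generator: 48-bit millisecond timestamp, then version and
# variant bits, then random bits (RFC 9562 layout).
def uuid7() -> uuid.UUID:
    unix_ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)
    value = (unix_ts_ms << 80) | int.from_bytes(os.urandom(10), "big")
    value &= ~(0xF << 76)   # clear the version field...
    value |= 0x7 << 76      # ...and set version 7
    value &= ~(0x3 << 62)   # clear the variant field...
    value |= 0x2 << 62      # ...and set the RFC variant (binary 10)
    return uuid.UUID(int=value)

message_id = uuid7()
print(message_id, message_id.version)  # version reports 7
```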
hlohaus
1fb8b7e4c9 Refactor PollinationsAI model alias handling for improved string conversion; add audio_tokens field to CompletionTokenDetails; update data structure in Backend_Api for user data handling. 2025-10-04 00:48:46 +02:00
hlohaus
6b210f44f9 Refactor search and response handling; introduce CachedSearch and DDGS classes for improved web search functionality and response management. Add PlainTextResponse for handling plain text responses. Update requirements and setup for new dependencies. 2025-10-03 11:38:24 +02:00
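A hedged sketch of what a CachedSearch-style wrapper can look like: remember results for a query for a limited time so repeated searches skip the network. The perform_search method below is a placeholder for the real backend (such as the DDGS class the commit mentions), not the actual implementation.

```python
import time

# Hypothetical cached-search wrapper with a simple time-to-live.
class CachedSearch:
    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self.cache: dict[str, tuple[float, list[str]]] = {}

    def search(self, query: str) -> list[str]:
        entry = self.cache.get(query)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]                       # fresh cached result
        results = self.perform_search(query)
        self.cache[query] = (time.time(), results)
        return results

    def perform_search(self, query: str) -> list[str]:
        # Placeholder: call the real search backend (e.g. DDGS) here.
        return [f"result for {query!r}"]

print(CachedSearch().search("gpt4free"))
```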
hlohaus
e9527637b1 Refactor token details classes in stubs.py; rename and enhance token details structure for better clarity and functionality 2025-10-03 07:05:59 +02:00
hlohaus
cd42fa6d09 Refactor Copilot authentication handling; streamline cookie management and access token retrieval 2025-10-03 06:19:47 +02:00
hlohaus
edfc0e7c79 Enhance Copilot provider with cookie handling and user identity support; add Kimi provider; refactor usage tracking in run_tools. 2025-10-02 18:11:11 +02:00
hlohaus
f6309cb693 Refactor OpenaiTemplate to remove API key print statement; update model providers by removing Blackbox references; enhance error handling in raise_for_status_async to handle JSON decoding errors gracefully. 2025-10-02 03:12:36 +02:00
hlohaus
4399b432c4 Refactor OpenaiChat authentication flow; replace get_nodriver with async context manager and improve error handling
Update backend_anon_url in har_file.py for correct endpoint
Add async context manager for get_nodriver_session in requests module
Fix start-browser.sh to remove stale cookie file before launching Chrome
2025-10-02 02:08:20 +02:00
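The nodriver change above replaces a plain helper with an async context manager, so the browser session is always released even when authentication fails partway through. A generic sketch of that pattern with contextlib.asynccontextmanager follows; start_browser and stop_browser are placeholders, not the real g4f calls.

```python
import asyncio
from contextlib import asynccontextmanager

async def start_browser():
    # Placeholder for launching the real browser (e.g. via nodriver).
    return object()

async def stop_browser(browser) -> None:
    # Placeholder for shutting the browser down and cleaning up.
    await asyncio.sleep(0)

@asynccontextmanager
async def get_browser_session():
    browser = await start_browser()
    try:
        yield browser                 # hand the live session to the caller
    finally:
        await stop_browser(browser)   # always released, even on errors

async def main():
    async with get_browser_session() as browser:
        print("authenticated with", browser)

asyncio.run(main())
```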
Ammar
c938760aac fix LMArena get_models (#3189)
* fix

* Update PollinationsAI.py

* Update Qwen.py

* Update Qwen.py

* Update Qwen.py

* Update LMArena.py

* Update LMArena.py

* Update Qwen.py

* Update Qwen.py

* Update g4f/Provider/Qwen.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: H Lohaus <hlohaus@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-01 13:10:26 +02:00
hlohaus
74fcb27cbc Add OpenRouterFree API key to environment and adjust max_tokens 2025-09-22 22:32:49 +02:00
hlohaus
391e1f463e Enhance Azure provider error handling and update environment variables 2025-09-22 22:21:03 +02:00
hlohaus
bb02eeb481 Add model alias for Kimi-K2-Instruct and update best_provider references 2025-09-21 11:55:29 +02:00
Ammar
bfc7707cd5 Add media support to Qwen 2025-09-19 21:49:51 +03:00
hlohaus
f46c179359 Fix typo in organization_id attribute in Claude class 2025-09-13 21:49:57 +02:00
H Lohaus
f13a4fc715 Update LMArena.py 2025-09-13 17:10:45 +02:00
H Lohaus
ce888ad47c Update OpenRouter.py 2025-09-13 17:10:01 +02:00
hlohaus
d7375d4c6c Add Claude provider with authentication handling and update Nvidia provider 2025-09-11 20:55:31 +02:00
hlohaus
217c1e85db Enhance OpenaiTemplate and Nvidia providers with support for text/plain responses and improved error handling 2025-09-09 21:18:47 +02:00
hlohaus
7e34009fc9 Enhance LMArena and PuterJS providers with error handling and model filtering improvements 2025-09-08 09:19:10 +02:00
hlohaus
2bb58a18be Enhance LMArena provider to handle image responses and raise MissingRequirementsError for missing auth files 2025-09-06 19:02:11 +02:00
hlohaus
933fff985b Enhance PuterJS provider to track live model instances and update BrowserConfig for automation tool settings 2025-09-06 16:28:17 +02:00
hlohaus
b4ed1a55da Add more live info 2025-09-06 12:42:51 +02:00
hlohaus
9fbff6347c Refactor Gemini, GeminiCLI, GeminiPro, and QwenCode providers to streamline model handling and track live instances 2025-09-06 12:32:13 +02:00