- Add AIBadgr provider class extending OpenaiTemplate
- API endpoint: https://aibadgr.com/api/v1
- Full support for streaming, system messages, and message history
- Add example usage script in etc/examples/aibadgr.py
- Provider requires API key authentication
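A minimal usage sketch in the spirit of etc/examples/aibadgr.py, assuming the provider is exported from `g4f.Provider` and using a placeholder model id:

```python
from g4f.client import Client
from g4f.Provider import AIBadgr  # export location assumed

client = Client(provider=AIBadgr, api_key="YOUR_API_KEY")  # provider requires an API key
response = client.chat.completions.create(
    model="aibadgr-default",  # placeholder; use a model id the provider actually lists
    messages=[
        {"role": "system", "content": "You are a concise assistant."},  # system messages supported
        {"role": "user", "content": "Say hello."},
    ],
    stream=True,  # streaming supported
)
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```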
- Rename DeepInfraChat to DeepInfra across all files
- Move DeepInfra from needs_auth to main Provider directory
- Rename LMArenaBeta to LMArena throughout codebase
- Move search-related providers to new search subdirectory (GoogleSearch, SearXNG, YouTube)
- Move deprecated providers to not_working directory (Free2GPT, LegacyLMArena, PenguinAI, ImageLabs, har)
- Add new Mintlify provider with custom AI assistant implementation
- Update Anthropic provider with Claude 4 models and Opus 4.1 parameter handling
- Update Grok provider with Grok 4 models and improved streaming support
- Update GithubCopilot with expanded model list including o3-mini, o4-mini, gpt-5 previews
- Update LambdaChat default model from deepseek-r1 to deepseek-llama3.3-70b
- Update TeachAnything default model from gemini-1.5-pro to gemma
- Remove DeepInfra from needs_auth directory
- Update all model_map references from DeepInfraChat to DeepInfra
- Update all model_map references from LMArenaBeta to LMArena
- Add beta_headers support to Anthropic for special features
- Improve Mintlify provider with system prompt handling and streaming
- Update model configurations in models.py to reflect provider changes
- Added new `GeminiCLI.py` provider under `g4f/Provider/needs_auth/` with full implementation of Gemini CLI support including OAuth2 handling, SSE streaming, tool calling, and media handling
- Registered `GeminiCLI` in `g4f/Provider/needs_auth/__init__.py`
- Modified `g4f/client/stubs.py`:
- Removed `serialize_reasoning_content` method
- Added inline reasoning_content join logic in `model_construct` override
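A rough sketch of the inline join now done in the `model_construct` override, reduced to the `reasoning_content` handling only (the real override in g4f/client/stubs.py covers more fields; pydantic v2 is assumed):

```python
from typing import Optional
from pydantic import BaseModel

class ChatCompletionMessage(BaseModel):
    content: Optional[str] = None
    reasoning_content: Optional[str] = None

    @classmethod
    def model_construct(cls, **kwargs):
        reasoning = kwargs.get("reasoning_content")
        if isinstance(reasoning, list):
            # Inline join replaces the removed serialize_reasoning_content helper.
            kwargs["reasoning_content"] = "".join(str(part) for part in reasoning)
        return super().model_construct(**kwargs)
```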
- Updated `Azure.py`:
- Removed `"stream": False` from `model_extra_body`
- Added inline `stream = False` assignment when using `model_extra_body`
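In sketch form, the effect of the Azure change; the surrounding variable names are placeholders, not the provider's actual code:

```python
def apply_model_extra_body(data: dict, stream: bool, extra_body: dict = None):
    """Merge a per-model extra body and force non-streaming when it is used."""
    if extra_body:
        data.update(extra_body)
        stream = False  # replaces the old "stream": False entry inside model_extra_body
    return data, stream
```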
- Updated `DeepInfra.py`:
- Added import of `DeepInfraChat`
- Set `model_aliases` to `DeepInfraChat.model_aliases`
- Added GithubCopilotAPI provider to g4f/Provider/needs_auth and __init__.py
- Fixed typo "GGOGLE_SID_COOKIE" to "GOOGLE_SID_COOKIE" in Gemini.py and updated all references
- Updated PollinationsAI.py:
- Refined model aliases and removed/commented unused/legacy aliases
- Updated logic for loading audio and vision models, using swap_models for alias reversals
- Adjusted get_model and model loading methods for accuracy
- Changed default model lists for text, image, and vision models
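A minimal sketch of the alias-reversal idea behind `swap_models` (the name comes from the note above; its exact signature in PollinationsAI.py is an assumption):

```python
def swap_models(aliases: dict) -> dict:
    """Invert an alias map so upstream model ids resolve back to local names."""
    return {value: key for key, value in aliases.items() if isinstance(value, str)}
```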
- Updated conversation title and followup labels for the followups tool
- Modified save_content in g4f/cli/client.py to handle URL downloads for lists and allow cookies/headers, and removed the duplicate HTTP download logic
- Added asyncio sleep after stdout writes in stream_response for smoother streaming
- Changed website.py render default to "home", adjusted chat route to accept any filename, and updated filenames used for rendering
- Updated model selection in g4f/models.py by removing PollinationsAI from best_provider and changing model provider order for specific models
- Enhanced media merging in g4f/tools/media.py to clarify the comment about the last user message and to handle content appending for lists in render_messages
- Updated OpenaiTemplate.py to add an image_url field if media with http(s) URLs is present
- Adjusted test_provider_has_model in etc/unittest/models.py to skip providers requiring auth
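For the OpenaiTemplate change, the appended part follows the OpenAI-style `image_url` content format; where exactly the template inserts it is not shown here:

```python
def to_image_url_part(url: str) -> dict:
    """Build an image_url content part for media given as an http(s) URL."""
    if not url.startswith(("http://", "https://")):
        raise ValueError("only http(s) media URLs are forwarded as image_url parts")
    return {"type": "image_url", "image_url": {"url": url}}
```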
* feat: add repository path support and new md2html converter tool
- Add `--repo` argument to commit.py for specifying git repository path with validation
- Add `validate_git_repository()` function to check repository existence and git status
- Add `get_repository_info()` function to extract branch and remote information
- Update `get_git_diff()` and `make_commit()` functions to accept repository path parameter
- Add Path import and repository validation in main workflow
- Enhance error messages with repository-specific guidance and context
- Update argument parser description and help text for new repository functionality
- Expand module docstring with comprehensive usage examples and feature descriptions
- Add new md2html.py tool for converting Markdown files to HTML using GitHub API
- Add template.html file with GitHub-styled CSS and responsive design
- Implement batch processing, retry logic, and rate limit handling in md2html converter
- Add comprehensive command-line interface with directory processing and custom output options
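The converter is built on GitHub's documented Markdown rendering endpoint; below is a hedged sketch of that call with simple rate-limit retries (the retry policy is illustrative, not the tool's exact logic):

```python
import time
import requests

def md_to_html(markdown_text: str, token: str = None, retries: int = 3) -> str:
    """Render GitHub-flavored Markdown to HTML via the GitHub API."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    for attempt in range(retries):
        resp = requests.post(
            "https://api.github.com/markdown",
            json={"text": markdown_text, "mode": "gfm"},
            headers=headers,
        )
        if resp.status_code in (403, 429) and "rate limit" in resp.text.lower():
            time.sleep(2 ** attempt)  # back off, then retry
            continue
        resp.raise_for_status()
        return resp.text
    raise RuntimeError("GitHub Markdown API rate limit not cleared after retries")
```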
* refactor: Update provider configurations and model handling
- Removed Dynaspark provider entirely by deleting `g4f/Provider/Dynaspark.py`
- Deprecated DDG provider by moving to `not_working` directory and updating imports
- Restructured HuggingFace and MiniMax providers into `needs_auth` subpackage:
- Moved all HuggingFace provider files to `needs_auth/hf/`
- Moved MiniMax providers to `needs_auth/mini_max/`
- Updated ARTA provider:
- Expanded `model_aliases` with new tattoo styles and added aliases
- Added `get_model()` method for model resolution with list support (see the sketch after this list)
- Simplified Blackbox provider:
- Removed openrouter models and agentMode configurations
- Reduced model lists to core GPT variants
- Set session/subscriptionCache to None in payload
- Added model resolution to Gemini providers:
- Implemented `get_model()` in Gemini.py and GeminiPro.py
- Added alias handling with list support
- Updated model definitions in `g4f/models.py`:
- Removed references to Dynaspark and DDG providers
- Added new SDXL image models with ARTA provider
- Adjusted best_provider assignments across multiple models
- Removed Dynaspark/DDG references from provider imports and AnyProvider
- Added DDG to not_working providers in `__init__.py`
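The `get_model()` alias handling added to ARTA and the Gemini providers above follows roughly this shape; the toy class, its values, and the random fallback for list-valued aliases are illustrative assumptions:

```python
import random

class ExampleProvider:
    """Toy stand-in showing the alias-resolution shape used by the providers."""
    default_model = "base-model"
    model_aliases = {
        "fast": "base-model-mini",
        "pro": ["base-model-pro-1", "base-model-pro-2"],  # list-valued alias
    }

    @classmethod
    def get_model(cls, model: str, **kwargs) -> str:
        if not model:
            return cls.default_model
        alias = cls.model_aliases.get(model, model)
        # A list-valued alias resolves to one of its entries.
        return random.choice(alias) if isinstance(alias, list) else alias
```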
* feat: Add new models to DeepInfraChat, LambdaChat, and models
- Add 'deepseek-ai/DeepSeek-R1-0528' model to DeepInfraChat provider's models list
- Include alias 'deepseek-r1-0528' for DeepSeek-R1-0528 in DeepInfraChat's model_aliases
- Add 'apriel-5b-instruct' model to LambdaChat provider's models list
- Define new 'deepseek-r1-0528' model in models.py with DeepSeek base provider and DeepInfraChat as best provider
* refactor: simplify model registry and add validation
- Remove unused imports: sys, inspect, Set, Type
- Remove ModelRegistry._discovered flag and automatic discovery mechanism
- Add ModelRegistry.clear() method for resetting registry state
- Implement ModelRegistry.list_models_by_provider() for provider-based filtering
- Add ModelRegistry.validate_all_models() for configuration checks
- Remove Model._registered field and simplify registration logic
- Fix gemma_3_12b model name from empty string to 'gemma-3-12b'
- Add image model section header in model definitions
- Replace ModelUtils.convert dict with dynamic property
- Remove ModelUtils.refresh() method
- Register 'gemini' alias directly in ModelRegistry after model creation
- Remove module-level model discovery and ModelUtils.convert initialization
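A hedged sketch of the registry helpers named above; the internal storage (a name-to-Model dict) and what counts as a misconfigured model are assumptions:

```python
class ModelRegistry:
    _models: dict = {}  # model name -> Model instance

    @classmethod
    def clear(cls) -> None:
        """Reset registry state, e.g. between tests."""
        cls._models.clear()

    @classmethod
    def list_models_by_provider(cls, provider_name: str) -> list:
        """Return names of models whose best provider matches the given name."""
        return [name for name, model in cls._models.items()
                if getattr(model.best_provider, "__name__", None) == provider_name]

    @classmethod
    def validate_all_models(cls) -> list:
        """Return names of models with an empty name or missing provider."""
        return [name for name, model in cls._models.items()
                if not model.name or model.best_provider is None]
```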
* refactor: Replace ModelUtils.convert property with class variable
- Add class variable `convert` to `ModelUtils` initialized as empty dictionary
- Replace `@property convert` method with `refresh()` class method that updates `convert`
- Remove dynamic property returning `ModelRegistry.all_models()`
- Add module-level assignment to initialize `ModelUtils.convert` with `ModelRegistry.all_models()`
- Include comment for clarity on filling the convert dictionary
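In sketch form, the resulting shape (a tiny `ModelRegistry` stub is included only so the snippet stands alone):

```python
class ModelRegistry:
    _models: dict = {}

    @classmethod
    def all_models(cls) -> dict:
        return dict(cls._models)

class ModelUtils:
    convert: dict = {}  # model-name -> Model mapping, filled below

    @classmethod
    def refresh(cls) -> None:
        cls.convert = ModelRegistry.all_models()

# Fill the convert dictionary once at import time.
ModelUtils.convert = ModelRegistry.all_models()
```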
* refactor: Reorganize providers and update model configuration
- Removed unused providers from `g4f/Provider/__init__.py`: ChatGpt, Pi, Pizzagpt, PuterJS, You
- Moved LMArenaBeta provider to `needs_auth` directory with updated relative imports
- Moved Pi provider to `needs_auth` directory with updated relative imports
- Moved PuterJS provider to `needs_auth` directory with updated relative imports
- Moved You provider to `needs_auth` directory with updated relative imports
- Added LMArenaBeta, Pi, PuterJS, You to `needs_auth/__init__.py`
- Moved ChatGpt provider to `not_working` directory with updated relative imports
- Moved Pizzagpt provider to `not_working` directory with updated relative imports
- Added ChatGpt, Pizzagpt to `not_working/__init__.py`
- Updated `g4f/models.py` to remove Reka import and change reka_core model provider
- Changed reka_core model's best_provider from IterListProvider to LegacyLMArena in `g4f/models.py`
* feat: add Together provider and update model handling
- Add new provider `Together` in `g4f/Provider/Together.py` with model aliases and configuration
- Implement `get_activation_key` and `get_models` methods in `Together` provider
- Add `get_model` method to resolve aliases in `Together` and `DeepInfraChat`
- Update `DeepInfraChat` model mappings to support multiple versions
- Change "deepseek-v3" to list with two model options
- Change "deepseek-r1" to list with two model options
- Remove duplicate "deepseek-v3" entry
- Remove "mistral-small" alias
- Remove "midjourney" from `PollinationsAI.extra_image_models`
- Register `Together` provider in `g4f/Provider/__init__.py`
- Update `g4f/models.py` with new providers and models
- Add `Together` to default and default_vision provider lists
- Add `Together` as provider for multiple existing models
- Add new vision model `qwen_2_vl_72b`
- Add new text models: `qwen_2_5_7b`, `deepseek_r1_distill_qwen_1_5b`, `deepseek_r1_distill_qwen_14b`
- Add new image models: `flux_redux`, `flux_depth`, `flux_canny`, `flux_kontext_max`, `flux_dev_lora`, `flux_kontext_pro`
- Remove `pi` model definition
- Update provider assignments for multiple models to include `Together`
* refactor: Remove LegacyLMArena provider and update model best_providers
- Remove LegacyLMArena import from Provider list in models.py
- Delete LegacyLMArena from default model's best_provider IterListProvider
- Remove multiple obsolete model definitions (gpt_3_5_turbo, gpt_4_turbo, phi_3_small, etc.) that exclusively used LegacyLMArena
- Update best_provider for all remaining models to remove LegacyLMArena from IterListProvider arguments
- Replace LegacyLMArena with alternative providers in model definitions (e.g., OpenaiChat, Together, DeepInfraChat)
- Simplify model definitions by removing redundant IterListProvider wrappers for single providers
- Expand provider imports in any_provider.py to include Blackboxapi, OIVSCodeSer2, etc.
- Extend provider list in AnyProvider with additional working providers for fallback support
* refactor: Remove Blackboxapi provider
- Deleted Blackboxapi provider implementation file
- Removed Blackboxapi import from provider __init__ file
- Updated default model configuration to exclude Blackboxapi provider
- Removed Blackboxapi from llama-3.1-70b model's best_provider
- Updated any_provider to exclude Blackboxapi from provider list
* fix: add missing parameters to Together.get_model method signature
- Add api_key and api_base parameters to get_model method in Together class
- Import random module at the top of the file
- Add inline import comment for random module inside get_model method
* fix: remove broken providers and update model configurations
- Remove non-working providers: ChatGLM, DocsBot, GizAI, OIVSCodeSer5
- Fix Blackbox provider by removing userSelectedModel logic
- Update DeepInfraChat default model to 'deepseek-ai/DeepSeek-V3-0324'
- Add random model selection for DeepInfraChat aliases
- Update LambdaChat default model to 'deepseek-v3-0324' and expand model list
- Fix LegacyLMArena model loading with better error handling and caching
- Add retry logic and timeouts to LegacyLMArena streaming responses
- Improve LegacyLMArena response parsing to handle various data formats
- Update model references across g4f/models.py to remove deleted providers
- Fix AnyProvider model categorization logic for better grouping
- Add LegacyLMArena and ARTA to special provider handling in AnyProvider
- Update provider imports in __init__.py to exclude removed providers
- Add needs_auth flag to You.com and HailuoAI providers
- Fix GeminiPro get_model method signature to accept kwargs
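The retry and timeout handling added to LegacyLMArena streaming (noted above) follows a pattern along these lines; this generic aiohttp sketch uses illustrative delays and attempt counts, and a real implementation would also avoid re-yielding content already streamed before a timeout:

```python
import asyncio
from aiohttp import ClientSession, ClientTimeout

async def stream_with_retries(url: str, payload: dict, attempts: int = 3):
    """Yield streamed lines, retrying with backoff on timeouts."""
    timeout = ClientTimeout(total=120, sock_read=30)
    for attempt in range(attempts):
        try:
            async with ClientSession(timeout=timeout) as session:
                async with session.post(url, json=payload) as response:
                    response.raise_for_status()
                    async for line in response.content:
                        yield line.decode(errors="ignore")
            return
        except asyncio.TimeoutError:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(2 ** attempt)  # back off before retrying
```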
* fix (g4f/Provider/LambdaChat.py)
* refactor: format models list in LMArenaBeta provider
- Convert single-line models array to multi-line format
- Add 11 new models (hunyuan, flux-kontext-pro, cobalt variants, etc.)
- Remove 6 models (bagel, goldmane, redsword, etc.)
- Update stephen model ID
---------
Co-authored-by: kqlio67 <kqlio67.noreply.github.com>
- Changed default model in commit.py from "gpt-4o" to "claude-3.7-sonnet"
- Fixed ARTA provider by adding proper auth token handling and form data submission
- Updated Blackbox provider to use OpenRouter models instead of premium models
- Improved DDG provider with simplified authentication and better error handling
- Updated DeepInfraChat provider with new models and aliases
- Removed non-working providers: Goabror, Jmuz, OIVSCode, AllenAI, ChatGptEs, FreeRouter, Glider
- Moved non-working providers to the not_working directory
- Added BlackboxPro provider in needs_auth directory with premium model support
- Updated Liaobots provider with new models and improved authentication
- Renamed Microsoft_Phi_4 to Microsoft_Phi_4_Multimodal for clarity
- Updated LambdaChat provider with direct API implementation instead of HuggingChat
- Updated models.py with new model definitions and provider mappings
- Removed BlackForestLabs_Flux1Schnell from HuggingSpace providers
- Updated model aliases across multiple providers for better compatibility
- Fixed Dynaspark provider endpoint URL to prevent spam detection
- Add new FreeRouter provider (based on OpenaiTemplate)
- Add new OpenRouter provider (needs auth) to access OpenRouter.ai service
- Update CablyAI provider imports to use Messages and AsyncResult
- Add support for new Gemini models including gemini-2.5-pro-exp, gemini-2.0-flash-thinking-exp, and gemini-deep-research
- Add processing for <think> tags in Gemini provider output by replacing the <ctrl94>thought and <ctrl95> markers with <think> tags
- Update provider imports in __init__.py files to include the new providers
- Mark FreeRouter as not working initially
Add new default HuggingFace provider
Add format_image_prompt and get_last_user_message helpers
Add stop_browser callable to get_nodriver function
Fix content type response in images route
* Update model configurations, provider implementations, and documentation
- Updated model names and aliases for Qwen QVQ 72B and Qwen 2 72B (@TheFirstNoob)
- Revised HuggingSpace class configuration, added default_image_model
- Added llama-3.2-70b alias for Llama 3.2 70B model in AutonomousAI
- Removed BlackboxCreateAgent class
- Added gpt-4o alias for Copilot model
- Moved api_key to Mhystical class attribute
- Added models property with default_model value for Free2GPT
- Simplified Jmuz class implementation
- Improved image generation and model handling in DeepInfra
- Standardized default models and removed aliases in Gemini
- Replaced model aliases with direct model list in GlhfChat (@TheFirstNoob)
- Removed trailing slash from image generation URL in PollinationsAI (https://github.com/xtekky/gpt4free/issues/2571)
- Updated llama and qwen model configurations
- Enhanced provider documentation and model details
* Removed the 'Yqcloud' provider from the default model in g4f/models.py due to the error 'ResponseStatusError: Response 429: 文字过长,请删减后重试。' ("The text is too long, please shorten it and try again.")
* Update docs/providers-and-models.md
* refactor(g4f/Provider/DDG.py): Add error handling and rate limiting to DDG provider
- Add custom exception classes for rate limits, timeouts, and conversation limits
- Implement rate limiting with sleep between requests (0.75s minimum delay; see the sketch below)
- Add model validation method to check supported models
- Add proper error handling for API responses with custom exceptions
- Improve session cookie handling for conversation persistence
- Clean up User-Agent string and remove redundant code
- Add proper error propagation through async generator
Breaking changes:
- New custom exceptions may require updates to error handling code
- Rate limiting affects request timing and throughput
- Model validation is now stricter
Related:
- Adds error handling similar to standard API clients
- Improves reliability and robustness of chat interactions
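A minimal sketch of the throttling described above (0.75 s minimum delay between requests); the class and attribute names are assumptions, not DDG.py's actual code:

```python
import asyncio
import time

class RateLimitError(Exception):
    """Raised when the upstream API reports a rate limit."""

class RequestThrottle:
    min_delay = 0.75  # seconds between requests
    _last_request = 0.0

    @classmethod
    async def wait(cls) -> None:
        """Sleep just long enough to keep at least min_delay between requests."""
        elapsed = time.monotonic() - cls._last_request
        if elapsed < cls.min_delay:
            await asyncio.sleep(cls.min_delay - elapsed)
        cls._last_request = time.monotonic()
```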
* Update g4f/models.py g4f/Provider/PollinationsAI.py
* Update g4f/models.py
* Restored the previously disabled non-working provider (g4f/Provider/DeepInfraChat.py)
* Fixing a bug with Streaming Completions
* Update g4f/Provider/PollinationsAI.py
* Update g4f/Provider/Blackbox.py g4f/Provider/DDG.py
* Added another image generation model, 'ImageGeneration2', to the 'Blackbox' provider
* Update docs/providers-and-models.md
* Update g4f/models.py g4f/Provider/Blackbox.py
* Added a new OIVSCode provider with text models and vision (image upload) support
* Update docs/providers-and-models.md
* docs: add Conversation Memory class with context handling, as requested by @TheFirstNoob
* Simplified the README.md documentation and added new docs/configuration.md documentation
* Update README.md and docs/configuration.md
* Update README.md
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/PollinationsAI.py
* Added new model deepseek-r1 to Blackbox provider. @TheFirstNoob
* Fixed bugs and updated docs/providers-and-models.md etc/unittest/client.py g4f/models.py g4f/Provider/.
---------
Co-authored-by: kqlio67 <>
Co-authored-by: H Lohaus <hlohaus@users.noreply.github.com>
* Update providers, restore old providers, remove non-working providers
* Restoring the original providers
* Restore the original provider g4f/Provider/needs_auth/GeminiPro.py
* Deleted non-working providers, fixed providers
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/hf_space/CohereForAI.py
* Restore g4f/Provider/Airforce.py and update aliases in g4f/Provider/hf_space/CohereForAI.py
* Disabled provider 'g4f/Provider/ReplicateHome.py' and moved to 'g4f/Provider/not_working'
* Disabled the Pizzagpt provider due to a problem with its responses
* Fix for why web_search = True didn't work
* Update docs/client.md
* Fix for why web_search = True did not work in the asynchronous and synchronous versions
---------
Co-authored-by: kqlio67 <>
* refactor(g4f/Provider/Airforce.py): improve model handling and filtering
- Add hidden_models set to exclude specific models
- Add evil alias for uncensored model handling
- Extend filtering for model-specific response tokens
- Add response buffering for streamed content
- Update model fetching with error handling
* refactor(g4f/Provider/Blackbox.py): improve caching and model handling
- Add caching system for validated values with file-based storage
- Rename 'flux' model to 'ImageGeneration' and update references
- Add temperature, top_p and max_tokens parameters to generator
- Simplify HTTP headers and remove redundant options
- Add model alias mapping for ImageGeneration
- Add file system utilities for cache management
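A rough sketch of the file-based cache idea for validated values; the paths and key names are assumptions (the cache directory matches the `.gitignore` entry added further below):

```python
import json
from pathlib import Path

CACHE_DIR = Path(__file__).parent / ".cache"

def load_cached_value(name: str):
    """Return a previously validated value, or None if absent or corrupt."""
    cache_file = CACHE_DIR / f"{name}.json"
    if cache_file.exists():
        try:
            return json.loads(cache_file.read_text()).get("value")
        except json.JSONDecodeError:
            cache_file.unlink(missing_ok=True)  # drop a corrupt cache entry
    return None

def save_cached_value(name: str, value: str) -> None:
    CACHE_DIR.mkdir(exist_ok=True)
    (CACHE_DIR / f"{name}.json").write_text(json.dumps({"value": value}))
```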
* feat(g4f/Provider/RobocodersAPI.py): add caching and error handling
- Add file-based caching system for access tokens and sessions
- Add robust error handling with specific error messages
- Add automatic dialog continuation on resource limits
- Add HTML parsing with BeautifulSoup for token extraction
- Add debug logging for error tracking
- Add timeout configuration for API requests
* refactor(g4f/Provider/DarkAI.py): update DarkAI default model and aliases
- Change default model from llama-3-405b to llama-3-70b
- Remove llama-3-405b from supported models list
- Remove llama-3.1-405b from model aliases
* feat(g4f/Provider/Blackbox2.py): add image generation support
- Add image model 'flux' with dedicated API endpoint
- Refactor generator to support both text and image outputs
- Extract headers into reusable static method
- Add type hints for AsyncGenerator return type
- Split generation logic into _generate_text and _generate_image methods
- Add ImageResponse handling for image generation results
BREAKING CHANGE: create_async_generator now returns AsyncGenerator instead of AsyncResult
* refactor(g4f/Provider/ChatGptEs.py): update ChatGptEs model configuration
- Update models list to include gpt-3.5-turbo
- Remove chatgpt-4o-latest from supported models
- Remove model_aliases mapping for gpt-4o
* feat(g4f/Provider/DeepInfraChat.py): add Accept-Language header support
- Add Accept-Language header for internationalization
- Maintain existing header configuration
- Improve request compatibility with language preferences
* refactor(g4f/Provider/needs_auth/Gemini.py): add ProviderModelMixin inheritance
- Add ProviderModelMixin to class inheritance
- Import ProviderModelMixin from base_provider
- Move BaseConversation import to base_provider imports
* refactor(g4f/Provider/Liaobots.py): update model details and aliases
- Add version suffix to o1 model IDs
- Update model aliases for o1-preview and o1-mini
- Standardize version format across model definitions
* refactor(g4f/Provider/PollinationsAI.py): enhance model support and generation
- Split generation logic into dedicated image/text methods
- Add additional text models including sur and claude
- Add width/height parameters for image generation
- Add model existence validation
- Add hasattr checks for model lists initialization
* chore(gitignore): add provider cache directory
- Add g4f/Provider/.cache to gitignore patterns
* refactor(g4f/Provider/ReplicateHome.py): update model configuration
- Update default model to gemma-2b-it
- Add default_image_model configuration
- Remove llava-13b from supported models
- Simplify request headers
* feat(g4f/models.py): expand provider and model support
- Add new providers DarkAI and PollinationsAI
- Add new models for Mistral, Flux and image generation
- Update provider lists for existing models
- Add P1 and Evil models with experimental providers
BREAKING CHANGE: Remove llava-13b model support
* refactor(Airforce): Update type hint for split_message return
- Change return type of 'split_message' from 'list[str]' to 'List[str]' for consistency with the import.
- Maintain overall functionality and structure of the class.
- Ensure compatibility with type hinting standards in Python.
* refactor(g4f/Provider/Airforce.py): Update type hint for split_message return
- Change return type of 'split_message' from 'list[str]' to 'List[str]' for consistency with import.
- Maintain overall functionality and structure of the 'Airforce' class.
- Ensure compatibility with type hinting standards in Python.
* feat(g4f/Provider/RobocodersAPI.py): Add support for optional BeautifulSoup dependency
- Introduce a check for the BeautifulSoup library and handle its absence gracefully.
- Raise an error if BeautifulSoup is not installed, prompting the user to install it.
- Remove direct import of BeautifulSoup to avoid import errors when the library is missing.
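The optional-dependency pattern described above looks roughly like this; the selector and the error type are illustrative, not the provider's exact code:

```python
try:
    from bs4 import BeautifulSoup
    has_requirements = True
except ImportError:
    has_requirements = False

def extract_access_token(html: str) -> str:
    """Parse a token out of an HTML page, failing clearly if bs4 is missing."""
    if not has_requirements:
        raise RuntimeError("Install 'beautifulsoup4' to use this provider")
    soup = BeautifulSoup(html, "html.parser")
    element = soup.find("input", {"name": "token"})  # illustrative selector
    return element["value"] if element else ""
```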
* fix: Update provider documentation and apply small fixes in providers
* Disabled the provider (RobocodersAPI)
* Fix: Conflicting file g4f/models.py
* Update g4f/models.py g4f/Provider/Airforce.py
* Update docs/providers-and-models.md g4f/models.py g4f/Provider/Airforce.py g4f/Provider/PollinationsAI.py
* Update docs/providers-and-models.md
* Update .gitignore
* Update g4f/models.py
* Update g4f/Provider/PollinationsAI.py
---------
Co-authored-by: kqlio67 <>
* refactor(g4f/Provider/Airforce.py): Enhance Airforce provider with dynamic model fetching
* refactor(g4f/Provider/Blackbox.py): Enhance Blackbox AI provider configuration and streamline code
* feat(g4f/Provider/RobocodersAPI.py): Add RobocodersAPI new async chat provider
* refactor(g4f/client/__init__.py): Improve provider handling in async_generate method
* refactor(g4f/models.py): Update provider configurations for multiple models
* refactor(g4f/Provider/Blackbox.py): Streamline model configuration and improve response handling
* feat(g4f/Provider/DDG.py): Enhance model support and improve conversation handling
* refactor(g4f/Provider/Copilot.py): Enhance Copilot provider with model support
* refactor(g4f/Provider/AmigoChat.py): update models and improve code structure
* chore(g4f/Provider/not_working/AIUncensored.py): move AIUncensored to not_working directory
* chore(g4f/Provider/not_working/Allyfy.py): remove Allyfy provider
* Update (g4f/Provider/not_working/AIUncensored.py g4f/Provider/not_working/__init__.py)
* refactor(g4f/Provider/ChatGptEs.py): Implement format_prompt for message handling
* refactor(g4f/Provider/Blackbox.py): Update message formatting and improve code structure
* refactor(g4f/Provider/LLMPlayground.py): Enhance text generation and error handling
* refactor(g4f/Provider/needs_auth/PollinationsAI.py): move PollinationsAI to needs_auth directory
* refactor(g4f/Provider/Liaobots.py): Update Liaobots provider models and aliases
* feat(g4f/Provider/DeepInfraChat.py): Add new DeepInfra models and aliases
* Update (g4f/Provider/__init__.py)
* Update (g4f/models.py)
* g4f/models.py
* Update g4f/models.py
* Update g4f/Provider/LLMPlayground.py
* Update (g4f/models.py g4f/Provider/Airforce.py g4f/Provider/__init__.py g4f/Provider/LLMPlayground.py)
* Update g4f/Provider/__init__.py
* refactor(g4f/Provider/Airforce.py): Enhance text generation with retry and timeout
* Update g4f/Provider/AmigoChat.py g4f/Provider/__init__.py
* refactor(g4f/Provider/Blackbox.py): update model prefixes and image handling
Fixes #2445
- Update model prefixes for gpt-4o, gemini-pro, and claude-sonnet-3.5
- Add 'gpt-3.5-turbo' alias for 'blackboxai' model
- Modify image handling in create_async_generator method
- Add 'imageGenerationMode' and 'webSearchModePrompt' flags to API request
- Remove redundant 'imageBase64' field from image data structure
* New provider (g4f/Provider/Blackbox2.py)
Supports text generation with the llama-3.1-70b model
* docs(docs/async_client.md): update AsyncClient API guide with minor improvements
- Improve formatting and readability of code examples
- Add line breaks for better visual separation of sections
- Fix minor typos and inconsistencies in text
- Enhance clarity of explanations in various sections
- Remove unnecessary whitespace
* feat(docs/client.md): add response_format parameter
- Add 'response_format' parameter to image generation examples
- Specify 'url' format for standard image generation
- Include 'b64_json' format for base64 encoded image response
- Update documentation to reflect new parameter usage
- Improve code examples for clarity and consistency
* docs(README.md): update usage examples and add image generation
- Update text generation example to use new Client API
- Add image generation example with Client API
- Update configuration section with new cookie setting instructions
- Add response_format parameter to image generation example
- Remove outdated information and reorganize sections
- Update contributors list
* refactor(g4f/client/__init__.py): optimize image processing and response handling
- Modify _process_image_response to handle 'url' format without local saving
- Update ImagesResponse construction to include 'created' timestamp
- Simplify image processing logic for different response formats
- Improve error handling and logging for image generation
- Enhance type hints and docstrings for better code clarity
* feat(g4f/models.py): update model providers and add new models
- Add Blackbox2 to Provider imports
- Update gpt-3.5-turbo best provider to Blackbox
- Add Blackbox2 to llama-3.1-70b best providers
- Rename dalle_3 to dall_e_3 and update its best providers
- Add new models: solar_mini, openhermes_2_5, lfm_40b, zephyr_7b, neural_7b, mythomax_13b
- Update ModelUtils.convert with new models and changes
- Remove duplicate 'dalle-3' entry in ModelUtils.convert
* refactor(Airforce): improve API handling and add authentication
- Implement API key authentication with check_api_key method
- Refactor image generation to use new imagine2 endpoint
- Improve text generation with better error handling and streaming
- Update model aliases and add new image models
- Enhance content filtering for various model outputs
- Replace StreamSession with aiohttp's ClientSession for async operations
- Simplify model fetching logic and remove redundant code
- Add is_image_model method for better model type checking
- Update class attributes for better organization and clarity
* feat(g4f/Provider/HuggingChat.py): update HuggingChat model list and aliases
Requested by @TheFirstNoob
- Add 'Qwen/Qwen2.5-72B-Instruct' as the first model in the list
- Update model aliases to include 'qwen-2.5-72b'
- Reorder existing models in the list for consistency
- Remove duplicate entry for 'Qwen/Qwen2.5-72B-Instruct' in models list
* refactor(g4f/Provider/ReplicateHome.py): remove unused text models
Requested by @TheFirstNoob
- Removed the 'meta/meta-llama-3-70b-instruct' and 'mistralai/mixtral-8x7b-instruct-v0.1' text models from the list
- Updated the list to only include the remaining text and image models
- This change simplifies the model configuration and reduces the number of available models, focusing on the core text and image models provided by Replicate
* refactor(g4f/Provider/HuggingChat.py): Move HuggingChat to needs_auth directory
Requested by @TheFirstNoob
* Update (g4f/Provider/needs_auth/HuggingChat.py)
* Update g4f/models.py
* Update g4f/Provider/Airforce.py
* Update g4f/models.py g4f/Provider/needs_auth/HuggingChat.py
* Added 'Airforce' provider to the 'o1-mini' model (g4f/models.py)
* Update (g4f/Provider/Airforce.py g4f/Provider/AmigoChat.py)
* Update g4f/models.py g4f/Provider/DeepInfraChat.py g4f/Provider/Airforce.py
* Update g4f/Provider/DeepInfraChat.py
* Update (g4f/Provider/DeepInfraChat.py)
* Update g4f/Provider/Blackbox.py
* Update (docs/client.md docs/async_client.md g4f/client/__init__.py)
* Update (docs/async_client.md docs/client.md)
* Update (g4f/client/__init__.py)
---------
Co-authored-by: kqlio67 <kqlio67@users.noreply.github.com>
Co-authored-by: kqlio67 <>
Co-authored-by: H Lohaus <hlohaus@users.noreply.github.com>
* IterListProvider support for generating images
* Add missing get_har_files import in Copilot
* Fix typo in dall-e-3 model name
* Add image client unittests
* Add MicrosoftDesigner provider
* Import MicrosoftDesigner and add it to the model list
* Add web_search function to OpenaiChat provider
* GithubCopilot provider added; it needs an api_key
* Remove nodriver login in Gemini synthesize
* Update API: add synthesize and upload_cookies endpoints