GPT4Free (g4f)
Created by @xtekky,
maintained by @hlohaus
Support the project on GitHub Sponsors ❤️
Live demo: https://g4f.dev | Documentation: https://g4f.dev/docs
GPT4Free (g4f) is a community-driven project that aggregates multiple accessible providers and interfaces to make working with modern LLMs and media-generation models easier and more flexible. GPT4Free aims to offer multi-provider support, local GUI, OpenAI-compatible REST APIs, and convenient Python and JavaScript clients — all under a community-first license.
This README is a consolidated, improved, and complete guide to installing, running, and contributing to GPT4Free.
Table of contents
- What’s included
- Quick links
- Requirements & compatibility
- Installation
- Running the app
- Using the Python client
- Using GPT4Free.js (browser JS client)
- Providers & models (overview)
- Local inference & media
- Configuration & customization
- Running on smartphone
- Interference API (OpenAI‑compatible)
- Examples & common patterns
- Contributing
- Security, privacy & takedown policy
- Credits, contributors & attribution
- Powered-by highlights
- Changelog & releases
- Manifesto / Project principles
- License
- Contact & sponsorship
- Appendix: Quick commands & examples
What’s included
- Python client library and async client.
- Optional local web GUI.
- FastAPI-based OpenAI-compatible API (Interference API).
- Official browser JS client (g4f.dev distribution).
- Docker images (full and slim).
- Multi-provider adapters (LLMs, media providers, local inference backends).
- Tooling for image/audio/video generation and media persistence.
Quick links
- Website & docs: https://g4f.dev | https://g4f.dev/docs
- PyPI: https://pypi.org/project/g4f
- Docker image: https://hub.docker.com/r/hlohaus789/g4f
- Releases: https://github.com/xtekky/gpt4free/releases
- Issues: https://github.com/xtekky/gpt4free/issues
- Community: Telegram (https://telegram.me/g4f_channel) · Discord News (https://discord.gg/5E39JUWUFa) · Discord Support (https://discord.gg/qXA4Wf4Fsm)
Requirements & compatibility
- Python 3.10+ recommended.
- Google Chrome/Chromium for providers using browser automation.
- Docker for containerized deployment.
- Works on x86_64 and arm64 (slim image supports both).
- Some provider adapters may require platform-specific tooling (Chrome/Chromium, etc.). Check provider docs for details.
Installation
Docker (recommended)
- Install Docker: https://docs.docker.com/get-docker/
- Create persistent directories:
- Example (Linux/macOS):
mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
sudo chown -R 1200:1201 ${PWD}/har_and_cookies ${PWD}/generated_media
- Pull image:
docker pull hlohaus789/g4f
- Run container:
docker run -p 8080:8080 -p 7900:7900 \
  --shm-size="2g" \
  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
  -v ${PWD}/generated_media:/app/generated_media \
  hlohaus789/g4f:latest
Notes:
- Port 8080 serves GUI/API; 7900 can expose a VNC-like desktop for provider logins (optional).
- Increase --shm-size for heavier browser automation tasks.
Slim Docker image (x64 & arm64)
mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_media
docker run \
-p 1337:8080 -p 8080:8080 \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_media:/app/generated_media \
hlohaus789/g4f:latest-slim
Notes:
- The slim image can update the g4f package on startup and install additional dependencies as needed.
- In this example, the Interference API is mapped to 1337.
Windows Guide (.exe)
👉 Check out the Windows launcher for GPT4Free:
🔗 https://github.com/gpt4free/g4f.exe 🚀
- Download the release artifact g4f.exe.zip from: https://github.com/xtekky/gpt4free/releases/latest
- Unzip and run g4f.exe.
- Open the GUI at: http://localhost:8080/chat/
- If Windows Firewall blocks access, allow the application.
Python Installation (pip / from source / partial installs)
Prerequisites:
- Python 3.10+ (https://www.python.org/downloads/)
- Chrome/Chromium for some providers.
Install from PyPI (recommended):
pip install -U g4f[all]
Partial installs
- To install only specific functionality, use optional extras groups. See docs/requirements.md in the project docs.
Install from source:
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
pip install -r requirements.txt
pip install -e .
Notes:
- Some features require Chrome/Chromium or other tools; follow provider-specific docs.
Running the app
GUI (web client)
- Run via Python:
from g4f.gui import run_gui
run_gui()
- Or via CLI:
python -m g4f.cli gui --port 8080 --debug
FastAPI / Interference API
- Start FastAPI server:
python -m g4f --port 8080 --debug
- With the slim Docker mapping, the Interference API is available at http://localhost:1337/v1
- Swagger UI: http://localhost:1337/docs
CLI
- Start GUI server:
python -m g4f.cli gui --port 8080 --debug
MCP Server
GPT4Free now includes a Model Context Protocol (MCP) server that allows AI assistants like Claude to access web search, scraping, and image generation capabilities.
Starting the MCP server (stdio mode):
# Using g4f command
g4f mcp
# Or using Python module
python -m g4f.mcp
Starting the MCP server (HTTP mode):
# Start HTTP server on port 8765
g4f mcp --http --port 8765
# Custom host and port
g4f mcp --http --host 127.0.0.1 --port 3000
HTTP mode provides:
- POST http://localhost:8765/mcp - JSON-RPC endpoint
- GET http://localhost:8765/health - Health check
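As a hedged sketch, the JSON-RPC endpoint can be exercised from Python, assuming the server follows standard MCP JSON-RPC 2.0 framing and the requests package is installed:
import requests
# List the tools the MCP server exposes (tools/list is the standard MCP method).
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
resp = requests.post("http://localhost:8765/mcp", json=payload, timeout=30)
print(resp.json())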
Configuring with Claude Desktop:
Add to your claude_desktop_config.json:
{
"mcpServers": {
"gpt4free": {
"command": "python",
"args": ["-m", "g4f.mcp"]
}
}
}
Available MCP Tools:
- web_search - Search the web using DuckDuckGo
- web_scrape - Extract text content from web pages
- image_generation - Generate images from text prompts
For detailed MCP documentation, see g4f/mcp/README.md
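A hedged sketch of invoking one of these tools in HTTP mode, assuming the standard MCP tools/call method; the "query" argument name is an assumption, so check g4f/mcp/README.md for the actual schema:
import requests
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",  # standard MCP method for invoking a named tool
    "params": {
        "name": "web_search",
        "arguments": {"query": "gpt4free documentation"},  # argument name is an assumption
    },
}
resp = requests.post("http://localhost:8765/mcp", json=payload, timeout=60)
print(resp.json())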
Optional provider login (desktop within container)
- Accessible at: http://localhost:7900/?autoconnect=1&resize=scale&password=secret
- Useful for logging into web-based providers to obtain cookies/HAR files.
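As an alternative to logging in through the desktop, cookies can be supplied programmatically via g4f's cookie helper; a minimal sketch, where the domain and cookie name are placeholders (check each provider's docs for what it requires):
from g4f.cookies import set_cookies
# Register cookies for a provider domain; name and value below are placeholders.
set_cookies(".google.com", {
    "__Secure-1PSID": "your-cookie-value",
})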
Using the Python client
Install:
pip install -U g4f[all]
Synchronous text example:
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello, how are you?"}],
web_search=False
)
print(response.choices[0].message.content)
Expected:
Hello! How can I assist you today?
Image generation example:
from g4f.client import Client
client = Client()
response = client.images.generate(
model="flux",
prompt="a white siamese cat",
response_format="url"
)
print(f"Generated image URL: {response.data[0].url}")
Async client example:
from g4f.client import AsyncClient
import asyncio
async def main():
client = AsyncClient()
response = await client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Explain quantum computing briefly"}],
)
print(response.choices[0].message.content)
asyncio.run(main())
Notes:
- See the full API reference for streaming, tool-calling patterns, and advanced options: https://g4f.dev/docs/client. A short streaming sketch follows.
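A streaming sketch in the OpenAI-compatible style, assuming the selected provider supports streaming:
from g4f.client import Client
client = Client()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Count to five"}],
    stream=True,  # yield partial chunks instead of a single final response
)
for chunk in stream:
    # Chunks follow the OpenAI delta format; content can be None on some chunks.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")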
Using GPT4Free.js (browser JS client)
Use the official JS client in the browser—no backend required.
Example:
<script type="module">
import Client from 'https://g4f.dev/dist/js/client.js';
const client = new Client();
const result = await client.chat.completions.create({
model: 'gpt-4.1', // Or "gpt-4o", "deepseek-v3", etc.
messages: [{ role: 'user', content: 'Explain quantum computing' }]
});
console.log(result.choices[0].message.content);
</script>
Notes:
- The JS client is distributed via the g4f.dev CDN for easy usage. Review CORS considerations and usage limits.
Providers & models (overview)
- GPT4Free integrates many providers including (but not limited to) OpenAI-compatible endpoints, PerplexityLabs, Gemini, MetaAI, Pollinations (media), and local inference backends.
- Model availability and behavior depend on provider capabilities. See the providers doc for current, supported provider/model lists: https://g4f.dev/docs/providers-and-models
Provider requirements may include:
- API keys or tokens (for authenticated providers)
- Browser cookies / HAR files for providers scraped via browser automation
- Chrome/Chromium or headless browser tooling
- Local model binaries and runtime (for local inference)
Local inference & media
- GPT4Free supports local inference backends. See docs/local.md for supported runtimes and hardware guidance.
- Media generation (image, audio, video) is supported through providers (e.g., Pollinations). See docs/media.md for formats, options, and sample usage.
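Generated media is often returned as a URL (as in the image example earlier); a minimal sketch for persisting it into the mapped media directory, assuming the requests package is installed and the provider returns an absolute URL:
import requests
from g4f.client import Client
client = Client()
result = client.images.generate(
    model="flux",
    prompt="a sunset over mountains",
    response_format="url",
)
# Download the returned URL into the mapped media directory (path is an example).
with open("generated_media/sunset.png", "wb") as f:
    f.write(requests.get(result.data[0].url, timeout=60).content)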
Configuration & customization
- Configure via environment variables, CLI flags, or config files. See docs/config.md.
- To reduce install size, use partial requirement groups. See docs/requirements.md.
- Provider selection: learn how to set defaults and override per-request at docs/selecting_a_provider.md (a short sketch follows this list).
- Persistence: HAR files, cookies, and generated media persist in mapped directories (e.g., har_and_cookies, generated_media).
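A hedged sketch of pinning a default provider on the client; provider classes live under g4f.Provider, PollinationsAI is just one example, and the chosen model must be one that provider supports:
from g4f.client import Client
from g4f.Provider import PollinationsAI  # one of many provider classes
# Pin a default provider instead of relying on automatic selection.
client = Client(provider=PollinationsAI)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)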
Running on smartphone
- The web GUI is responsive and can be accessed from a phone by visiting http://<host-ip>:8080 or via a tunnel. See docs/guides/phone.md.
Interference API (OpenAI‑compatible)
- The Interference API enables OpenAI-like workflows routed through GPT4Free provider selection.
- Docs: docs/interference-api.md
- Default endpoint (slim Docker example): http://localhost:1337/v1
- Swagger UI: http://localhost:1337/docs
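Because the endpoint is OpenAI-compatible, the official openai Python package can point at it; a minimal sketch assuming the slim Docker mapping above:
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:1337/v1",  # Interference API endpoint
    api_key="secret",  # placeholder; most g4f providers need no real key
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from the Interference API"}],
)
print(response.choices[0].message.content)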
Examples & common patterns
- Streaming completions, stopping criteria, system messages, and tool-calling patterns are documented in the client docs: https://g4f.dev/docs/client
- Integrations (LangChain, PydanticAI): docs/pydantic_ai.md
- Legacy examples: docs/legacy.md
Contributing
Contributions are welcome — new providers, features, docs, and fixes are appreciated.
How to contribute:
- Fork the repository.
- Create a branch for your change.
- Run tests and linters.
- Open a Pull Request with a clear description and tests/examples if applicable.
Repository: https://github.com/xtekky/gpt4free
How to create a new provider
- Read the guide: docs/guides/create_provider.md
- Typical steps:
- Implement a provider adapter in g4f/Provider/
- Add configuration and dependency notes
- Include tests and usage examples
- Respect third-party code licenses and attribute appropriately
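A minimal, hedged skeleton of what an adapter can look like; the endpoint and JSON payload are hypothetical, so see the guide above for the real base classes and conventions:
from __future__ import annotations
from aiohttp import ClientSession
from g4f.typing import AsyncResult, Messages
from g4f.providers.base_provider import AsyncGeneratorProvider
class ExampleProvider(AsyncGeneratorProvider):
    url = "https://api.example.com"  # hypothetical endpoint
    working = True
    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs,
    ) -> AsyncResult:
        # Stream the upstream response and yield decoded text chunks.
        async with ClientSession() as session:
            async with session.post(
                f"{cls.url}/chat",
                json={"model": model, "messages": messages},  # hypothetical payload
                proxy=proxy,
            ) as response:
                response.raise_for_status()
                async for line in response.content:
                    yield line.decode(errors="ignore")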
How AI can help you write code
- See: docs/guides/help_me.md for prompt templates and workflows to accelerate development.
Security, privacy & takedown policy
- Do not store or share sensitive credentials. Use per-provider recommended security practices.
- If your site appears in the project’s links and you want it removed, send proof of ownership to takedown@g4f.ai and it will be removed promptly.
- For production, secure the server with HTTPS, authentication, and firewall rules. Limit access to provider credentials and cookie/HAR storage.
Credits, contributors & attribution
- Created by @xtekky (original author); maintained by @hlohaus.
- Full contributor graph: https://github.com/xtekky/gpt4free/graphs/contributors
- Notable code inputs and attributions:
- har_file.py - input from xqdoo00o/ChatGPT-to-API
- PerplexityLabs.py - input from nathanrchn/perplexityai
- Gemini.py - input from dsdanielpark/Gemini-API and HanaokaYuzu/Gemini-API
- MetaAI.py - inspired by meta-ai-api by Strvm
- proofofwork.py - input from missuo/FreeGPT35
Many more contributors are acknowledged in the repository.
Powered-by highlights
- Pollinations AI — generative media: https://github.com/pollinations/pollinations
- MoneyPrinter V2 — example project using GPT4Free: https://github.com/FujiwaraChoki/MoneyPrinterV2
- For a full list of projects and sites using GPT4Free, see: docs/powered-by.md
Changelog & releases
- Releases and full changelog: https://github.com/xtekky/gpt4free/releases
- Subscribe to Discord/Telegram for announcements.
Manifesto / Project principles
GPT4Free is guided by community principles:
- Open access to AI tooling and models.
- Collaboration across providers and projects.
- Opposition to monopolistic, closed systems that restrict creativity.
- Community-centered development and broad access to AI technologies.
- Promote innovation, creativity, and accessibility.
License
This program is licensed under the GNU General Public License v3.0 (GPLv3). See the full license: https://www.gnu.org/licenses/gpl-3.0.txt
Summary:
- You may redistribute and/or modify under the terms of GPLv3.
- The program is provided WITHOUT ANY WARRANTY.
Copyright notice
xtekky/gpt4free: Copyright (C) 2025 xtekky
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
Contact & sponsorship
- Maintainers: https://github.com/hlohaus
- Sponsorship: https://github.com/sponsors/hlohaus
- Issues & feature requests: https://github.com/xtekky/gpt4free/issues
- Takedown requests: takedown@g4f.ai
Appendix: Quick commands & examples
Install (pip):
pip install -U g4f[all]
Run GUI (Python):
python -m g4f.cli gui --port 8080 --debug
# or
python -c "from g4f.gui import run_gui; run_gui()"
Docker (full):
docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 7900:7900 \
--shm-size="2g" \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_media:/app/generated_media \
hlohaus789/g4f:latest
Docker (slim):
docker run -p 1337:8080 -p 8080:8080 \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_media:/app/generated_media \
hlohaus789/g4f:latest-slim
Python usage patterns:
- client.chat.completions.create(...)
- client.images.generate(...)
- Async variants via AsyncClient
Docs & deeper reading
- Full docs: https://g4f.dev/docs
- Client API docs: https://g4f.dev/docs/client
- Async client docs: https://g4f.dev/docs/async_client
- Provider guides: https://g4f.dev/docs/guides
- Local inference: https://g4f.dev/docs/local
Thank you for using and contributing to GPT4Free — together we make powerful AI tooling accessible, flexible, and community-driven.