feat: add redirectAdvisor to rtsp plugin

Author: langhuihui
Date: 2025-11-11 13:02:56 +08:00
Parent: 7e64183b05
Commit: 31d0b48774
7 changed files with 2124 additions and 5 deletions

plugin/hls/README.md (new file, 160 lines)

@@ -0,0 +1,160 @@
# HLS Plugin Guide
## Table of Contents
- [Introduction](#introduction)
- [Key Features](#key-features)
- [Configuration](#configuration)
- [Live Playback Endpoints](#live-playback-endpoints)
- [Pulling Remote HLS Streams](#pulling-remote-hls-streams)
- [Recording and VOD](#recording-and-vod)
- [Built-in Player Assets](#built-in-player-assets)
- [Low-Latency HLS (LL-HLS)](#low-latency-hls-ll-hls)
- [Troubleshooting](#troubleshooting)
## Introduction
The HLS plugin turns Monibuca into an HTTP Live Streaming (HLS) origin, providing on-the-fly segmenting, in-memory caching, recording, download, and proxy capabilities. It exposes classic `.m3u8` playlists under `/hls`, supports time-ranged VOD playback, and can pull external HLS feeds back into the platform. The plugin ships with a pre-packaged `hls.js` demo so you can verify playback instantly.
## Key Features
- **Live HLS output**: publish any stream in Monibuca and play it back at `http(s)://{host}/hls/{streamPath}.m3u8` with configurable segment length/window.
- **Adaptive in-memory cache**: keeps the latest `Window` segments in memory; the default `5s x 3` produces ~15 seconds of rolling content.
- **Remote HLS ingestion**: pull `.m3u8` feeds, repackage the TS segments, and republish them through Monibuca.
- **Recording workflows**: record HLS segments to disk, start/stop via config or REST API, and export TS/MP4/FMP4 archives.
- **Time-range download**: generate stitched TS files from historical recordings, or auto-convert MP4/FMP4 archives when TS is unavailable.
- **VOD playlists**: serve historical playlists under `/vod/{streamPath}.m3u8` with support for `start`, `end`, or `range` queries.
- **Bundled player**: access `hls.js` demos (benchmark, metrics, light UI) directly from the plugin without external hosting.
## Configuration
Basic YAML example:
```yaml
hls:
  onpub:
    transform:
      ^live/.+: 5s x 3 # fragment duration x playlist window
    record:
      ^live/.+:
        filepath: record/$0
        fragment: 1m # split TS files every minute
  pull:
    live/apple-demo:
      url: https://devstreaming-cdn.apple.com/.../prog_index.m3u8
      relaymode: mix # remux (default), relay, or mix
  onsub:
    pull:
      ^vod_hls_\d+/(.+)$: $1 # lazy pull VOD streams on demand
```
### Transform Input Format
`onpub.transform` accepts a string literal `{fragment} x {window}`. For example `5s x 3` tells the transformer to create 5-second segments and retain 3 completed segments (plus one in-progress) in-memory. Increase the window to improve DVR length; decrease it to lower latency.
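The string format can be sanity-checked before deployment; a minimal sketch (the `parseTransform` helper is illustrative, not part of the plugin):

```javascript
// Parse a "{fragment} x {window}" transform value such as "5s x 3"
// and estimate the rolling DVR depth it produces.
function parseTransform(value) {
  const m = /^(\d+)(ms|s|m)\s*x\s*(\d+)$/.exec(value.trim());
  if (!m) throw new Error(`invalid transform: ${value}`);
  const unitSeconds = { ms: 0.001, s: 1, m: 60 }[m[2]];
  const fragmentSec = Number(m[1]) * unitSeconds;
  const window = Number(m[3]);
  return { fragmentSec, window, dvrSeconds: fragmentSec * window };
}

console.log(parseTransform('5s x 3')); // { fragmentSec: 5, window: 3, dvrSeconds: 15 }
```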
### Recording Options
`onpub.record` uses the shared `config.Record` structure:
- `fragment`: TS file duration (`1m`, `10s`, etc.).
- `filepath`: directory pattern; `$0` expands to the matched stream path.
- `realtime`: `true` writes segments as they are generated; `false` caches in memory and flushes when the segment closes.
- `append`: append to existing files instead of creating new ones.
You can also trigger recordings dynamically via the HTTP API (see [Recording and VOD](#recording-and-vod)).
### Pull Parameters
`hls.pull` entries inherit from `config.Pull`:
- `url`: remote playlist URL (HTTPS recommended).
- `maxretry`, `retryinterval`: automatic retry behaviour (default 5 seconds between attempts).
- `proxy`: optional HTTP proxy.
- `header`: extra HTTP headers (cookies, tokens, user agents).
- `relaymode`: choose how TS data is handled:
  - `remux` (default): decode TS to Monibuca tracks only; segments are regenerated.
  - `relay`: cache raw TS in memory and skip remux/recording.
  - `mix`: remux for playback and keep a copy of each TS segment in memory (best for redistributing the original segments).
## Live Playback Endpoints
- `GET /hls/{streamPath}.m3u8`: rolling playlist. Add `?timeout=30s` to wait for a publisher to appear; the plugin auto-subscribes internally during the wait window.
- `GET /hls/{streamPath}/{segment}.ts`: TS segments held in memory or read from disk when available.
- `GET /hls/{resource}`: any other path (e.g. `index.html`, `hls.js`) is served from the embedded `hls.js.zip` archive.
When Monibuca listens on standard ports, `PlayAddr` entries like `http://{host}/hls/live/demo.m3u8` and `https://{host}/hls/live/demo.m3u8` are announced automatically.
## Pulling Remote HLS Streams
1. Configure a `pull` block under the `hls` section or use the unified API:
```bash
curl -X POST http://localhost:8080/api/stream/pull \
  -H "Content-Type: application/json" \
  -d '{
    "protocol": "hls",
    "streamPath": "live/apple-demo",
    "remoteURL": "https://devstreaming-cdn.apple.com/.../prog_index.m3u8"
  }'
```
2. The puller fetches the primary playlist, optionally follows variant streams, downloads the newest TS segments, and republishes them to Monibuca.
3. Choose `relaymode: mix` if you need the original TS bytes for downstream consumers (`MemoryTs` keeps a rolling window of segments).
Progress telemetry is available through the task system with step names `m3u8_fetch`, `parse`, and `ts_download`.
## Recording and VOD
### Start/Stop via REST
- `POST /hls/api/record/start/{streamPath}?fragment=30s&filePath=record/live`: starts a TS recorder and returns a numeric task ID.
- `POST /hls/api/record/stop/{id}`: stops the recorder (`id` is the value returned from the start call).
If a recorder already exists for the same `streamPath` + `filePath`, the start call returns an error; otherwise the plugin creates one and persists metadata in the configured database (if enabled).
### Time-Range Playlists
- `GET /vod/{streamPath}.m3u8?start=2024-12-01T08:00:00&end=2024-12-01T09:00:00`
- `GET /vod/{streamPath}.m3u8?range=1700000000-1700003600`
The handler looks up matching `RecordStream` rows (`ts`, `mp4`, or `fmp4`) and builds a playlist that points either to existing files or to `/mp4/download/{stream}.fmp4?id={recordID}` for fMP4 archives.
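Building the epoch-based `range` form from wall-clock times is just a Unix-seconds conversion; a small sketch (the `vodRangeURL` helper and host are illustrative):

```javascript
// Build a /vod playlist URL using the "range" (epoch seconds) query form.
function vodRangeURL(host, streamPath, startDate, endDate) {
  const toEpoch = d => Math.floor(d.getTime() / 1000);
  return `http://${host}/vod/${streamPath}.m3u8?range=${toEpoch(startDate)}-${toEpoch(endDate)}`;
}

const url = vodRangeURL(
  'localhost:8080',
  'live/demo',
  new Date(1700000000 * 1000),
  new Date(1700003600 * 1000)
);
// url === 'http://localhost:8080/vod/live/demo.m3u8?range=1700000000-1700003600'
```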
### TS Download API
`GET /hls/download/{streamPath}.ts?start=1700000000&end=1700003600`
- If TS recordings exist, the plugin stitches them together, skipping duplicate PAT/PMT packets after the first file.
- When only MP4/FMP4 recordings are available, it invokes the MP4 demuxer, converts samples to TS on the fly, and streams the result to the client.
- Responses set `Content-Type: video/mp2t` and `Content-Disposition: attachment` so browsers download the merged file.
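The de-duplication step can be illustrated on raw 188-byte TS packets. The sketch below is standalone demonstration code, not the plugin's implementation; it drops PAT packets (PID 0) from every file after the first and, for brevity, omits the PMT (whose PID must be learned from the PAT):

```javascript
const TS_PACKET = 188;

// Extract the 13-bit PID from bytes 1-2 of a TS packet header.
function pid(pkt) {
  return ((pkt[1] & 0x1f) << 8) | pkt[2];
}

// Concatenate TS files, keeping the PAT (PID 0) from the first file only.
function stitch(files) {
  const out = [];
  files.forEach((buf, idx) => {
    for (let off = 0; off + TS_PACKET <= buf.length; off += TS_PACKET) {
      const pkt = buf.subarray(off, off + TS_PACKET);
      if (idx > 0 && pid(pkt) === 0) continue; // skip repeated PAT
      out.push(pkt);
    }
  });
  return out;
}
```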
## Built-in Player Assets
Static assets bundled in `hls.js.zip` are served directly from `/hls`. Useful entry points:
- `/hls/index.html`: full-featured `hls.js` demo.
- `/hls/index-light.html`: minimal UI variant.
- `/hls/basic-usage.html`: step-by-step example demonstrating basic playback controls.
- `/hls/metrics.html`: visualise playlist latency, buffer length, and network metrics.
These pages load the local `/hls/hls.js` script, making it easy to sanity-check streams without external CDNs.
## Low-Latency HLS (LL-HLS)
The same package registers an `llhls` plugin for Low-Latency HLS output. Enable it by adding a transform:
```yaml
llhls:
  onpub:
    transform:
      ^live/.+: 1s x 7 # duration x segment count (SegmentMinDuration x SegmentCount)
```
LL-HLS exposes playlists at `http(s)://{host}/llhls/{streamPath}/index.m3u8` and keeps `SegmentMinDuration` and `SegmentCount` tuned for sub-two-second glass-to-glass latency. The muxer automatically maps H.264/H.265 video and AAC audio using `gohlslib`.
## Troubleshooting
- **Playlist stays empty**: confirm the publisher is active and that `onpub.transform` matches the stream path. Use `?timeout=30s` on the playlist request to give the server time to subscribe.
- **Segments expire too quickly**: increase the transform window (e.g. `5s x 6`) or switch pull jobs to `relaymode: mix` to preserve original TS segments longer.
- **Download returns 404**: ensure the database is enabled and recording metadata exists; the plugin relies on `RecordStream` entries to discover files.
- **Large time-range downloads stall**: the downloader streams sequentially; consider slicing the range or moving recordings to faster storage.
- **Access from browsers**: the `/hls` paths are plain HTTP GET endpoints. Configure CORS or a reverse proxy if you plan to fetch playlists from another origin.

plugin/hls/README_CN.md (new file, 158 lines)

@@ -0,0 +1,158 @@
# HLS Plugin Guide
## Table of Contents
- [Introduction](#introduction)
- [Key Features](#key-features)
- [Configuration Example](#configuration-example)
- [Live Playback Endpoints](#live-playback-endpoints)
- [Pulling Remote HLS Streams](#pulling-remote-hls-streams)
- [Recording and VOD](#recording-and-vod)
- [Built-in Player Assets](#built-in-player-assets)
- [Low-Latency HLS (LL-HLS)](#low-latency-hls-ll-hls)
- [Troubleshooting](#troubleshooting)
## Introduction
The HLS plugin gives Monibuca full HTTP Live Streaming output capabilities: live segmenting, in-memory caching, recording, download, and proxying in one package. It exposes standard `.m3u8` playlists under `/hls`, supports playing historical recordings via time-range parameters, and can pull external HLS streams back into Monibuca. A bundled `hls.js` demo page lets you verify playback directly.
## Key Features
- **Live HLS output**: any stream published to Monibuca is playable at `http(s)://{host}/hls/{streamPath}.m3u8`, with configurable segment length and window.
- **In-memory cache**: the default `5s x 3` (5-second segments, 3 completed segments retained) yields roughly 15 seconds of rolling cache; adjust as needed.
- **Remote HLS pulling**: pull an external `.m3u8`, repackage the TS segments, and republish them.
- **Recording workflow**: record TS segments, start/stop via config or the REST API, and export TS/MP4/FMP4 files.
- **Time-range download**: stitch TS from recordings; when no TS exists, MP4/FMP4 is automatically converted to TS.
- **VOD playlists**: `/vod/{streamPath}.m3u8` accepts `start`, `end`, and `range` query parameters for building VOD playlists quickly.
- **Bundled player**: ships the common `hls.js` demos (benchmark, metrics, light, etc.), so streams can be tested without extra hosting.
## Configuration Example
```yaml
hls:
  onpub:
    transform:
      ^live/.+: 5s x 3 # fragment duration x playlist window
    record:
      ^live/.+:
        filepath: record/$0
        fragment: 1m # split a TS file every minute
  pull:
    live/apple-demo:
      url: https://devstreaming-cdn.apple.com/.../prog_index.m3u8
      relaymode: mix # remux (default), relay, or mix
  onsub:
    pull:
      ^vod_hls_\d+/(.+)$: $1 # pull VOD streams on demand
```
### Transform Input Format
`onpub.transform` accepts the string `{fragment duration} x {window size}`; for example `5s x 3` produces 5-second segments and retains 3 completed segments (plus one in progress). A larger window gives a longer DVR-like buffer; a smaller one lowers startup latency.
### Recording Options
`onpub.record` reuses the global `config.Record` structure:
- `fragment`: TS file duration (`1m`, `10s`, etc.).
- `filepath`: storage directory; `$0` expands to the matched stream path.
- `realtime`: `true` writes to file in real time; `false` buffers in memory and flushes when the segment closes.
- `append`: whether to append to existing files.
Recordings can also be started and stopped dynamically via the HTTP API; see [Recording and VOD](#recording-and-vod).
### Pull Parameters
`hls.pull` uses `config.Pull`; common fields:
- `url`: remote playlist URL (HTTPS recommended).
- `maxretry`, `retryinterval`: reconnect policy (default: retry every 5 seconds).
- `proxy`: optional HTTP proxy.
- `header`: custom request headers (cookies, tokens, user agents, etc.).
- `relaymode`: how TS data is handled:
  - `remux` (default): demux the TS only and repackage it onto Monibuca's internal tracks.
  - `relay`: keep the raw TS segments and skip demuxing/recording.
  - `mix`: repackage for playback while also keeping the TS segments for downstream reuse.
## Live Playback Endpoints
- `GET /hls/{streamPath}.m3u8`: live playlist; parameters such as `?timeout=30s` make the request wait for a publisher to come online (the plugin auto-subscribes internally while waiting).
- `GET /hls/{streamPath}/{segment}.ts`: TS segments, read from the memory cache or from disk.
- `GET /hls/{resource}`: static assets from `hls.js.zip`, e.g. `index.html`, `hls.js`.
When listening on standard ports, the plugin automatically registers addresses such as `http://{host}/hls/live/demo.m3u8` and `https://{host}/hls/live/demo.m3u8` in `PlayAddr`.
## Pulling Remote HLS Streams
1. Add a `pull` entry to the configuration, or call the unified API:
```bash
curl -X POST http://localhost:8080/api/stream/pull \
  -H "Content-Type: application/json" \
  -d '{
    "protocol": "hls",
    "streamPath": "live/apple-demo",
    "remoteURL": "https://devstreaming-cdn.apple.com/.../prog_index.m3u8"
  }'
```
2. The puller fetches the master playlist, follows variant streams when necessary, downloads the newest TS segments, and republishes them in Monibuca.
3. To preserve the original TS as much as possible, set `relaymode: mix`; the plugin then maintains a rolling segment cache in `MemoryTs`.
Task progress is visible through the task system; key step names include `m3u8_fetch`, `parse`, and `ts_download`.
## Recording and VOD
### Start/Stop via REST
- `POST /hls/api/record/start/{streamPath}?fragment=30s&filePath=record/live`: starts a TS recorder and returns a numeric task ID.
- `POST /hls/api/record/stop/{id}`: stops the recorder (`id` is the value returned by the start call).
If a task already exists for the same `streamPath` + `filePath`, an error is returned; when a database is enabled, the plugin writes recording metadata to the `RecordStream` table.
### Time-Range Playlists
- `GET /vod/{streamPath}.m3u8?start=2024-12-01T08:00:00&end=2024-12-01T09:00:00`
- `GET /vod/{streamPath}.m3u8?range=1700000000-1700003600`
The handler queries the database for recordings of type `ts`, `mp4`, or `fmp4` and returns a playlist that points either to file paths or to `/mp4/download/{stream}.fmp4?id={recordID}`.
### TS Download API
`GET /hls/download/{streamPath}.ts?start=1700000000&end=1700003600`
- If TS recordings exist, multiple segments are stitched together, skipping duplicate PAT/PMT packets after the first file.
- If only MP4/FMP4 recordings exist, the plugin invokes the MP4 demuxer and converts samples to TS on the fly.
- Responses carry `Content-Type: video/mp2t` and `Content-Disposition: attachment`, so browsers download the file directly.
## Built-in Player Assets
Static files packaged in `hls.js.zip` are served directly from `/hls`; common entry points:
- `/hls/index.html`: full-featured demo.
- `/hls/index-light.html`: lightweight UI variant.
- `/hls/basic-usage.html`: getting-started example.
- `/hls/metrics.html`: visualisation of latency, buffering, and other metrics.
These pages load the local `/hls/hls.js`, so streams can be tested without an external CDN.
## Low-Latency HLS (LL-HLS)
The same package also registers an `llhls` plugin that produces low-latency playlists. Example:
```yaml
llhls:
  onpub:
    transform:
      ^live/.+: 1s x 7 # SegmentMinDuration x SegmentCount
```
LL-HLS playlists are served at `http(s)://{host}/llhls/{streamPath}/index.m3u8`; internally `gohlslib` writes H.264/H.265 video and AAC audio into low-latency segments, with a default end-to-end delay under 2 seconds.
## Troubleshooting
- **Empty playlist**: confirm the publisher is online and that the `onpub.transform` pattern matches the stream path; add `?timeout=30s` to the request to allow time for auto-subscription.
- **Segments cleaned up too quickly**: enlarge the window (e.g. `5s x 6`), or switch the pull job to `relaymode: mix` to keep the original TS segments longer.
- **Download returns 404**: make sure the database is enabled and matching `RecordStream` metadata exists; the plugin relies on the database to locate files.
- **Long time-range downloads stall**: the download path reads and writes sequentially; split the time range or use faster storage.
- **Cross-origin access from browsers**: the `/hls` endpoints are plain HTTP GET; configure CORS or a reverse proxy for cross-origin use.


@@ -472,7 +472,7 @@ func (nc *NetConnection) SendMessage(t byte, msg RtmpMessage) (err error) {
}
func (nc *NetConnection) sendChunk(mem gomem.Memory, head *ChunkHeader, headType byte) (err error) {
-	nc.SetWriteDeadline(time.Now().Add(time.Second * 5)) // write timeout: 5s
+	nc.SetWriteDeadline(time.Now().Add(time.Second * 10)) // write timeout: 10s
	head.WriteTo(headType, &nc.chunkHeaderBuf)
	defer func(reuse net.Buffers) {
		nc.sendBuffers = reuse


@@ -4,6 +4,7 @@ import (
	"fmt"
	"net"
	"strings"
	"sync"

	task "github.com/langhuihui/gotask"
	"m7s.live/v5/pkg/util"
@@ -23,10 +24,16 @@ var _ = m7s.InstallPlugin[RTSPPlugin](m7s.PluginMeta{
type RTSPPlugin struct {
	m7s.Plugin
-	UserName string             `desc:"用户名"`
-	Password string             `desc:"密码"`
-	UdpPort  util.Range[uint16] `default:"20001-30000" desc:"媒体端口范围"` // media port range
-	udpPorts chan uint16
+	UserName        string             `desc:"用户名"`
+	Password        string             `desc:"密码"`
+	UdpPort         util.Range[uint16] `default:"20001-30000" desc:"媒体端口范围"` // media port range
+	udpPorts        chan uint16
+	advisorOnce     sync.Once
+	redirectAdvisor rtspRedirectAdvisor
}
type rtspRedirectAdvisor interface {
	ShouldRedirectRTSP(streamPath, currentHost string) (string, bool)
}
func (p *RTSPPlugin) OnTCPConnect(conn *net.TCPConn) task.ITask {
@@ -52,6 +59,18 @@ func (p *RTSPPlugin) Start() (err error) {
	return
}
func (p *RTSPPlugin) findRedirectAdvisor() rtspRedirectAdvisor {
	p.advisorOnce.Do(func() {
		for plugin := range p.Server.Plugins.Range {
			if advisor, ok := plugin.GetHandler().(rtspRedirectAdvisor); ok {
				p.redirectAdvisor = advisor
				break
			}
		}
	})
	return p.redirectAdvisor
}
// initialize the UDP port pool
func (p *RTSPPlugin) initUDPPortPool() {
	if p.UdpPort.Valid() {


@@ -113,6 +113,21 @@ func (task *RTSPServer) Go() (err error) {
	if rawQuery != "" {
		streamPath += "?" + rawQuery
	}
	if advisor := task.conf.findRedirectAdvisor(); advisor != nil {
		if location, ok := advisor.ShouldRedirectRTSP(streamPath, task.URL.Host); ok {
			res := &util.Response{
				StatusCode: http.StatusFound,
				Status:     "Found",
				Header: textproto.MIMEHeader{
					"Location": {location},
				},
				Request: req,
			}
			task.Info("RTSP redirect issued", "location", location, "streamPath", streamPath)
			_ = task.WriteResponse(res)
			return nil
		}
	}
	sender.Subscriber, err = task.conf.Subscribe(task, streamPath)
	if err != nil {
		res := &util.Response{

plugin/webrtc/README.md (new file, 801 lines)

@@ -0,0 +1,801 @@
# WebRTC Plugin Guide
## Table of Contents
- [Introduction](#introduction)
- [Plugin Overview](#plugin-overview)
- [Configuration](#configuration)
- [Basic Usage](#basic-usage)
- [Publishing](#publishing)
- [Playing](#playing)
- [WHIP/WHEP Support](#whipwhep-support)
- [Acting as WHIP/WHEP Client](#acting-as-whipwhep-client)
- [Advanced Features](#advanced-features)
- [Docker Notes](#docker-notes)
- [STUN/TURN Reference](#stunturn-reference)
- [FAQ](#faq)
## Introduction
WebRTC (Web Real-Time Communication) is an open standard jointly developed by W3C and IETF for real-time audio/video communication in browsers and mobile apps. Key characteristics include:
### Highlights
1. **Peer-to-Peer Communication**: direct browser-to-browser connections reduce server load and latency.
2. **Low Latency**: UDP transport and modern codecs deliver sub-second latency.
3. **Adaptive Bitrate**: quality adjusts automatically based on network conditions.
4. **NAT Traversal**: ICE negotiation handles NAT/firewall traversal automatically.
### WebRTC Flow
1. **Signaling**: exchange SDP (Session Description Protocol) and ICE candidates via HTTP, WebSocket, etc.
2. **ICE Candidate Gathering**: collect local and remote network candidates.
3. **Connection Establishment**: use ICE to traverse NAT and set up the P2P link.
4. **Media Transport**: stream encrypted audio/video over SRTP.
### Connection Sequence Diagram
```mermaid
sequenceDiagram
participant Client as Client
participant Server as Monibuca Server
Note over Client,Server: 1. Signaling
Client->>Server: POST /webrtc/push/{streamPath}<br/>Body: SDP Offer
Server->>Server: Create PeerConnection
Server->>Server: Prepare SDP Answer
Server->>Client: HTTP 201 Created<br/>Body: SDP Answer
Note over Client,Server: 2. ICE Exchange
Client->>Client: Gather local ICE candidates
Server->>Server: Gather local ICE candidates
Client->>Server: Send ICE candidates
Server->>Client: Send ICE candidates
Note over Client,Server: 3. Connectivity Checks
Client->>Server: Attempt Host/SRFLX candidates
alt Connectivity succeeds
Client->>Server: Establish P2P connection
Server-->>Client: Confirm connection
else Continue negotiation
Client->>Server: Send additional candidates
Server-->>Client: Return negotiation result
end
Note over Client,Server: 4. Media Transport
Client->>Server: Send SRTP media
Server-->>Client: Send RTCP feedback
```
## Plugin Overview
The Monibuca WebRTC plugin is built on Pion WebRTC v4 and provides complete WebRTC publishing/playing capabilities:
- ✅ Publishing via WHIP
- ✅ Playback via WHEP
- ✅ Video codecs: H.264, H.265, AV1, VP9
- ✅ Audio codecs: Opus, PCMA, PCMU
- ✅ TCP/UDP transport
- ✅ ICE server configuration
- ✅ DataChannel fallback
- ✅ Built-in test pages
## Configuration
### Basic Configuration
Example `config.yaml` snippet:
```yaml
webrtc:
  # Optional ICE servers. See "STUN/TURN Reference" for details.
  iceservers: []
  # Listening port options:
  #   - tcp:9000 (TCP port)
  #   - udp:9000 (UDP port)
  #   - udp:10000-20000 (UDP port range)
  port: tcp:9000
  # Interval for sending PLI after video packet loss
  pli: 2s
  # Enable DataChannel fallback when codecs are unsupported
  enabledc: false
  # MimeType filter; empty means no restriction
  mimetype:
    - video/H264
    - video/H265
    - audio/PCMA
    - audio/PCMU
```
### Parameter Details
#### ICE Servers
Configure the ICE server list for negotiation. See the [STUN/TURN Reference](#stunturn-reference) section for full details and examples.
#### Port Configuration
1. **TCP Port**: suitable for restrictive firewalls.
```yaml
port: tcp:9000
```
2. **UDP Port**: lower latency.
```yaml
port: udp:9000
```
3. **UDP Range**: allocate ports per session.
```yaml
port: udp:10000-20000
```
#### PLI Interval
Duration between Picture Loss Indication (PLI) requests, default 2s.
#### DataChannel
Fallback transport for unsupported codecs (e.g., MP4A). Data is sent as FLV over DataChannel.
#### MimeType Filter
Restrict allowed codec types. Leave empty to accept all supported codecs.
#### Public IP Configuration
Required when the server is behind NAT (e.g., Docker, private network).
##### Public IP Workflow
```yaml
webrtc:
  publicip: 203.0.113.1 # IPv4 address
  publicipv6: 2001:db8::1 # Optional IPv6 address
  port: tcp:9000
  pli: 2s
  enabledc: false
```
##### Diagram
```
┌────────────────────────────────────────────────┐
│                Public Internet                 │
│                                                │
│              ┌──────────────┐                  │
│              │    Client    │                  │
│              └──────┬───────┘                  │
│                     │                          │
│   1. Obtain public address information         │
│      (via STUN/TURN if needed)                 │
│                     │                          │
└─────────────────────┼──────────────────────────┘
┌─────────────────────▼──────────────────────────┐
│                 NAT / Firewall                 │
│                                                │
│      ┌──────────────────────────────┐          │
│      │ Monibuca Server              │          │
│      │ Private IP: 192.168.1.100    │          │
│      │ PublicIP:   203.0.113.1      │          │
│      └──────────────────────────────┘          │
│                                                │
│   2. Use PublicIP when creating ICE candidates │
│   3. SDP answer contains the public address    │
│   4. Client connects via the public address    │
└────────────────────────────────────────────────┘
```
##### Notes
1. Always configure `publicip` if the server sits behind NAT.
2. Ensure the IP matches the actual public address.
3. Verify port forwarding when using Docker or reverse proxies.
4. Set both IPv4 and IPv6 if dual-stack connectivity is required.
## Basic Usage
### Start the Service
After enabling the WebRTC plugin, Monibuca exposes the following endpoints:
- `POST /webrtc/push/{streamPath}`: WHIP publish endpoint
- `POST /webrtc/play/{streamPath}`: WHEP playback endpoint
- `GET /webrtc/test/{name}`: built-in test pages
### Test Pages
- Publish test: `http://localhost:8080/webrtc/test/publish`
- Subscribe test: `http://localhost:8080/webrtc/test/subscribe`
- Screen share test: `http://localhost:8080/webrtc/test/screenshare`
## Publishing
### Using the Test Page
1. Visit `http://localhost:8080/webrtc/test/publish`.
2. Allow camera/microphone permissions.
3. Select a camera if multiple devices are available.
4. The page automatically starts WebRTC publishing.
### Custom Publishing
#### JavaScript Example
```javascript
const mediaStream = await navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true,
});

const pc = new RTCPeerConnection({
  // Configure ICE servers if needed, see STUN/TURN reference
  iceServers: [],
});

mediaStream.getTracks().forEach(track => {
  pc.addTrack(track, mediaStream);
});

const offer = await pc.createOffer();
await pc.setLocalDescription(offer);

const response = await fetch('/webrtc/push/live/test', {
  method: 'POST',
  headers: { 'Content-Type': 'application/sdp' },
  body: offer.sdp,
});

const answerSdp = await response.text();
await pc.setRemoteDescription(
  new RTCSessionDescription({ type: 'answer', sdp: answerSdp })
);
```
#### Forcing H.265
```javascript
const transceiver = pc.getTransceivers().find(
  t => t.sender.track && t.sender.track.kind === 'video'
);
if (transceiver) {
  const capabilities = RTCRtpSender.getCapabilities('video');
  const h265 = capabilities.codecs.find(
    c => c.mimeType.toLowerCase() === 'video/h265'
  );
  if (h265) {
    transceiver.setCodecPreferences([h265]);
  }
}
```
Add `?h265` to the test page URL to attempt H.265 publishing: `/webrtc/test/publish?h265`.
### Publish URL Parameters
- `streamPath`: e.g., `live/test`
- `bearer`: bearer token for authentication
Example:
```
POST /webrtc/push/live/test?bearer=token
```
### Stop Publishing
```javascript
pc.close();
```
## Playing
### Using the Test Page
1. Ensure a stream is already publishing.
2. Visit `http://localhost:8080/webrtc/test/subscribe?streamPath=live/test`.
3. The page automatically starts playback.
### Custom Playback
#### JavaScript Example
```javascript
const pc = new RTCPeerConnection({
  // Configure ICE servers if needed, see STUN/TURN reference
  iceServers: [],
});

pc.ontrack = event => {
  if (event.streams.length > 0) {
    videoElement.srcObject = event.streams[0];
    videoElement.play();
  }
};

pc.addTransceiver('video', { direction: 'recvonly' });
pc.addTransceiver('audio', { direction: 'recvonly' });

const offer = await pc.createOffer();
await pc.setLocalDescription(offer);

const response = await fetch('/webrtc/play/live/test', {
  method: 'POST',
  headers: { 'Content-Type': 'application/sdp' },
  body: offer.sdp,
});

const answerSdp = await response.text();
await pc.setRemoteDescription(
  new RTCSessionDescription({ type: 'answer', sdp: answerSdp })
);
```
### Playback URL Parameters
- `streamPath`: e.g., `live/test`
Example:
```
POST /webrtc/play/live/test
```
### Stop Playback
```javascript
pc.close();
```
## WHIP/WHEP Support
### WHIP (WebRTC-HTTP Ingestion Protocol)
#### Workflow
1. Client creates PeerConnection and Offer.
2. Client `POST /webrtc/push/{streamPath}` with SDP Offer.
3. Server returns SDP Answer (HTTP 201 Created).
4. Client sets Answer.
5. Media streaming starts.
#### Sequence Diagram
```mermaid
sequenceDiagram
participant Client as Client
participant Server as Monibuca Server
Note over Client,Server: 1. Preparation
Client->>Client: getUserMedia()
Client->>Client: New RTCPeerConnection
Client->>Client: Add tracks
Note over Client,Server: 2. Create Offer
Client->>Client: createOffer()
Client->>Client: setLocalDescription()
Client->>Client: Gather ICE candidates
Note over Client,Server: 3. Send Offer
Client->>Server: POST /webrtc/push/{streamPath}
Server->>Server: Parse Offer
Server->>Server: New PeerConnection
Server->>Server: setRemoteDescription()
Server->>Server: Publish stream
Server->>Server: Gather ICE candidates
Note over Client,Server: 4. Server Answer
Server->>Server: createAnswer()
Server->>Server: setLocalDescription()
Server->>Client: HTTP 201 + SDP Answer
Server->>Client: Location: /webrtc/api/stop/push/{streamPath}
Note over Client,Server: 5. Client Processes Answer
Client->>Client: setRemoteDescription()
Client->>Client: ICE exchange
Server->>Client: ICE exchange
Note over Client,Server: 6. Connection
Client->>Server: ICE connected
Server->>Client: ICE connected
Client->>Server: Create DataChannel (optional)
Server-->>Client: DataChannel confirmation
Note over Client,Server: 7. Streaming
Client->>Server: Send SRTP media
Server-->>Client: RTCP feedback
```
#### Client Example
```javascript
const pc = new RTCPeerConnection();
// Add local tracks, create and set an offer, then POST the SDP to
// /webrtc/push/{streamPath}; see the full example under "Publishing".
```
### WHEP (WebRTC-HTTP Egress Protocol)
#### Workflow
1. Client creates PeerConnection and Offer (recvonly tracks).
2. Client `POST /webrtc/play/{streamPath}` with SDP Offer.
3. Server returns SDP Answer.
4. Client sets Answer.
5. Media streaming starts.
#### Sequence Diagram
```mermaid
sequenceDiagram
participant Client as Client
participant Server as Monibuca Server
Note over Client,Server: 1. Preparation
Client->>Client: New RTCPeerConnection
Client->>Client: Add recvonly transceivers
Client->>Client: Listen ontrack
Note over Client,Server: 2. Create Offer
Client->>Client: createOffer()
Client->>Client: setLocalDescription()
Client->>Client: Gather ICE candidates
Note over Client,Server: 3. Send Offer
Client->>Server: POST /webrtc/play/{streamPath}
Server->>Server: Parse Offer
Server->>Server: Ensure stream exists
alt Stream missing
Server->>Client: HTTP 404 Not Found
else Stream available
Server->>Server: New PeerConnection
Server->>Server: setRemoteDescription()
Server->>Server: Subscribe to stream
Server->>Server: Add tracks
Server->>Server: Gather ICE candidates
Note over Client,Server: 4. Server Answer
Server->>Server: createAnswer()
Server->>Server: setLocalDescription()
Server->>Client: HTTP 200 OK + SDP Answer
Note over Client,Server: 5. Client Processes Answer
Client->>Client: setRemoteDescription()
Client->>Client: ICE exchange
Server->>Client: ICE exchange
Note over Client,Server: 6. Connection
Client->>Server: ICE connected
Server->>Client: ICE connected
Client->>Server: DataChannel setup (optional)
Server-->>Client: DataChannel confirmation
Note over Client,Server: 7. Streaming
Server->>Client: Send SRTP media
Client->>Client: Play media
end
```
## Acting as WHIP/WHEP Client
### Pull via WHEP
```yaml
pull:
  streams:
    - url: https://whep.example.com/play/stream1
      streamPath: live/stream1
```
### Push via WHIP
```yaml
push:
  streams:
    - url: https://whip.example.com/push/stream1
      streamPath: live/stream1
```
## Advanced Features
### Codec Support
- **Video**: H.264, H.265/HEVC, AV1, VP9
- **Audio**: Opus, PCMA (G.711 A-law), PCMU (G.711 μ-law)
### DataChannel Transport
Enable DataChannel for unsupported codecs (e.g., MP4A audio) by setting `enabledc: true`. Data is encapsulated in FLV over the DataChannel.
### NAT Traversal
Configure `publicip`/`publicipv6` when running behind NAT; see [Public IP Configuration](#public-ip-configuration).
### Multi-stream Support
Use `http://localhost:8080/webrtc/test/batchv2` to test multi-stream scenarios.
#### BatchV2 Mode
- **Signaling channel**: WebSocket endpoint `ws(s)://{host}/webrtc/batchv2` (upgrade from HTTP).
- **Initial handshake**:
1. Create an `RTCPeerConnection`, run `createOffer`/`setLocalDescription`.
2. Send `{ "type": "offer", "sdp": "..." }` over the WebSocket.
3. Server replies `{ "type": "answer", "sdp": "..." }`; call `setRemoteDescription`.
- **Common commands** (all JSON text frames):
- `getStreamList`
```json
{ "type": "getStreamList" }
```
Response example: `{ "type": "streamList", "streams": [{ "path": "live/cam1", "codec": "H264", "width": 1280, "height": 720, "fps": 25 }] }`.
- `subscribe`
```json
{
  "type": "subscribe",
  "streamList": ["live/cam1", "live/cam2"],
  "offer": "SDP..."
}
```
Server renegotiates and returns `{ "type": "answer", "sdp": "..." }`; call `setRemoteDescription` again.
- `unsubscribe` same structure as `subscribe`, with `streamList` containing the streams to remove.
- `publish`
```json
{
  "type": "publish",
  "streamPath": "live/cam3",
  "offer": "SDP..."
}
```
Server responds with a new SDP answer that must be applied client-side.
- `unpublish`
```json
{ "type": "unpublish", "streamPath": "live/cam3" }
```
Triggers renegotiation; server returns a fresh answer.
- `ping`: `{ "type": "ping" }` keeps the connection alive; server answers with `pong`.
- **Media scope**: current implementation subscribes video only (`SubAudio` disabled). Extend as needed if audio tracks are required.
- **Client helper**: `web/BatchV2Client.ts` implements the browser-side workflow; see `webrtc/test/batchv2` for a functional demo (stream list, publish, subscribe management).
- **Troubleshooting**:
- Errors arrive as `{ "type": "error", "message": "..." }`; inspect browser console/WebSocket inspector for details.
- Each `subscribe`/`publish` triggers a new SDP cycle; ensure the app performs `setLocalDescription` → send message → `setRemoteDescription` without skipping.
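The frames listed above can be wrapped in small builders so client code stays typo-free; the helper object below is my own sketch, while the message shapes follow the protocol described above:

```javascript
// Builders for BatchV2 signaling frames (sent as JSON text over the WebSocket).
const batchV2 = {
  getStreamList: () => ({ type: 'getStreamList' }),
  subscribe: (streamList, offerSdp) => ({ type: 'subscribe', streamList, offer: offerSdp }),
  unsubscribe: (streamList, offerSdp) => ({ type: 'unsubscribe', streamList, offer: offerSdp }),
  publish: (streamPath, offerSdp) => ({ type: 'publish', streamPath, offer: offerSdp }),
  unpublish: streamPath => ({ type: 'unpublish', streamPath }),
  ping: () => ({ type: 'ping' }),
};

// Usage with an open WebSocket `ws` and a local offer:
// ws.send(JSON.stringify(batchV2.subscribe(['live/cam1'], pc.localDescription.sdp)));
```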
### Connection Monitoring
```javascript
pc.oniceconnectionstatechange = () => {
  console.log('ICE State:', pc.iceConnectionState);
  // new, checking, connected, completed, failed, disconnected, closed
};

pc.onconnectionstatechange = () => {
  console.log('Connection State:', pc.connectionState);
};
```
## Docker Notes
### 1. Network Mode
Prefer `host` mode:
```bash
docker run --network host monibuca/monibuca
```
When using `bridge` mode:
- Map WebRTC ports (TCP/UDP)
- Configure correct public IP
- Ensure port mapping matches plugin config
```bash
docker run -p 8080:8080 -p 9000:9000/udp monibuca/monibuca
```
### 2. Port Mapping
**TCP mode**
```bash
docker run -p 8080:8080 -p 9000:9000/tcp monibuca/monibuca
```
**UDP mode**
```bash
docker run -p 8080:8080 -p 9000:9000/udp monibuca/monibuca
```
**UDP range**
```bash
docker run -p 8080:8080 -p 10000-20000:10000-20000/udp monibuca/monibuca
```
> Note: Mapping large UDP ranges can be tricky; prefer a single UDP port or `host` mode when possible.
### 3. Public IP
Always set `publicip` when running inside Docker (container IPs are private). Discover the host's public address with:
```bash
curl ifconfig.me
dig +short myip.opendns.com @resolver1.opendns.com
```
Example configuration:
```yaml
webrtc:
  publicip: 203.0.113.1
  port: udp:9000
```
### 4. Docker Compose Example
```yaml
version: '3.8'
services:
  monibuca:
    image: monibuca/monibuca:latest
    network_mode: host # recommended
    # Or bridge mode:
    # ports:
    #   - "8080:8080"
    #   - "9000:9000/udp"
    volumes:
      - ./config.yaml:/app/config.yaml
      - ./logs:/app/logs
    environment:
      - PUBLICIP=203.0.113.1
```
### 5. Common Docker Issues
- **Connection failures**: configure `publicip`, prefer the `host` network, verify port mapping.
- **Unstable UDP mapping**: prefer `host` mode or TCP mode (`port: tcp:9000`); inspect firewall rules.
- **Multiple instances**: assign different ports (e.g., `tcp:9000`, `tcp:9001`) and map accordingly.
### 6. Best Practices
1. Prefer `host` network for better performance.
2. Always provide `publicip`/`publicipv6` when behind NAT.
3. Switch to TCP mode if UDP mapping is problematic.
4. Monitor WebRTC logs to track connection states.
5. Configure TURN servers as a fallback to improve success rates.
## STUN/TURN Reference
### Why STUN/TURN Matters
- **STUN (Session Traversal Utilities for NAT)** helps endpoints discover public addresses and ports.
- **TURN (Traversal Using Relays around NAT)** relays media when direct connectivity fails (e.g., symmetric NAT).
- STUN is sufficient for most public/home networks; enterprise/mobile scenarios often require TURN fallback.
### Configuration Example
```yaml
webrtc:
iceservers:
- urls:
- stun:stun.l.google.com:19302
- urls:
- turn:turn.example.com:3478
username: user
credential: password
```
- `urls` accepts multiple entries, mixing `stun:`, `turn:`, `turns:` URIs.
- Rotate TURN credentials regularly; consider short-lived tokens (e.g., coturn REST API).
### Deployment Tips
1. **Deploy close to users** Lower latency boosts stability.
2. **Reserve bandwidth** TURN relays bidirectional media streams.
3. **Secure access** Protect TURN credentials with authentication or token mechanisms.
4. **Monitor usage** Track sessions, bandwidth, failure rates, and alert on anomalies.
5. **Multi-region redundancy** Provide regional STUN/TURN nodes for global coverage.
## FAQ
### 1. Connection Fails
**Problem**: WebRTC connection cannot be established.
**Solutions**:
- Verify ICE server configuration.
- Ensure firewall rules allow the configured ports.
- Try TCP mode: `port: tcp:9000`.
- Configure TURN as a relay fallback.
### 2. Video Not Displayed
**Problem**: Connection succeeds but no video is shown.
**Solutions**:
- Check browser console errors.
- Confirm the stream path is correct.
- Ensure the codec is supported.
- Test with the built-in subscribe page.
- Inspect the Network panel to confirm SDP responses and track creation.
- Run `pc.getReceivers().map(r => r.track)` in the console and verify tracks are `live`.
- Review server logs to confirm the subscriber receives video frames.
- Use `chrome://webrtc-internals` or `edge://webrtc-internals` for detailed stats (bitrate, frame rate, ICE state).
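The same counters shown in `webrtc-internals` are available programmatically via `pc.getStats()`. A small sketch (report field names per the W3C webrtc-stats spec) that reduces the raw reports to the numbers relevant to "connected but no video" — call it twice a few seconds apart and check that `framesDecoded`/`bytesReceived` are increasing:

```javascript
// Summarise inbound-rtp video entries from a pc.getStats() report.
// Written as a pure function over an array of stats objects so it
// can also be exercised outside the browser.
function summarizeInboundVideo(statsReports) {
  return statsReports
    .filter(r => r.type === 'inbound-rtp' && r.kind === 'video')
    .map(r => ({
      ssrc: r.ssrc,
      framesDecoded: r.framesDecoded ?? 0,
      bytesReceived: r.bytesReceived ?? 0,
      packetsLost: r.packetsLost ?? 0
    }));
}

// In the browser:
//   const reports = [...(await pc.getStats()).values()];
//   console.table(summarizeInboundVideo(reports));
```

Zero `framesDecoded` with growing `bytesReceived` usually points at a codec problem; zero `bytesReceived` points at ICE/transport.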
### 3. H.265 Unsupported
**Problem**: Browser lacks H.265 decoding support.
**Solutions**:
- Enable DataChannel fallback: `enabledc: true`.
- Publish H.264 instead.
- Wait for browser support (Chrome 113+ provides partial support).
### 4. CORS Issues
**Problem**: Requests are blocked due to CORS.
**Solutions**:
- Configure correct CORS headers.
- Deploy under the same origin.
- Use a reverse proxy.
### 5. Port Already in Use
**Problem**: Configured port is unavailable.
**Solutions**:
- Change the port: `port: tcp:9001`.
- Check whether other services occupy the port.
- Use a UDP range: `port: udp:10000-20000`.
### 6. Docker Connection Issues
**Problem**: WebRTC fails when running inside Docker.
**Solutions**:
- Configure `publicip` correctly.
- Prefer `host` network mode.
- Double-check port mappings (remember `/udp`).
- Ensure firewall rules allow the traffic.
- Review the [Docker Notes](#docker-notes).
### 7. PublicIP Not Working
**Problem**: Configured `publicip`, but clients still cannot connect.
**Solutions**:
- Verify the value matches the actual public address.
- Ensure port forwarding aligns with the plugin configuration.
- Test with `host` network mode.
- Inspect firewall/NAT rules.
- Check server logs to confirm ICE candidates include the public IP.
### 8. AAC Audio Not Playing
**Problem**: Audio unavailable when the source uses AAC/MP4A.
**Solutions**:
- The plugin currently supports Opus, PCMA, and PCMU.
- Options:
- Publish Opus instead of AAC.
- Enable DataChannel (`enabledc: true`) to transport FLV with AAC.
- Transcode audio to a supported codec before publishing.
## Summary
The Monibuca WebRTC plugin delivers full WHIP/WHEP functionality for low-latency real-time streaming in modern browsers. Follow the configuration, deployment, and troubleshooting guidance above to integrate WebRTC publishing and playback into your Monibuca deployment quickly.
Further reading:
- [WebRTC official site](https://webrtc.org/)
- [WHIP draft](https://datatracker.ietf.org/doc/html/draft-ietf-wish-whip)
- [WHEP draft](https://datatracker.ietf.org/doc/html/draft-murillo-whep)
966
plugin/webrtc/README_CN.md Normal file
View File
@@ -0,0 +1,966 @@
# WebRTC Plugin Guide
## Table of Contents
- [Introduction to WebRTC](#introduction-to-webrtc)
- [Plugin Overview](#plugin-overview)
- [Configuration](#configuration)
- [Basic Usage](#basic-usage)
- [Publishing (Push)](#publishing-push)
- [Playback (Pull)](#playback-pull)
- [WHIP/WHEP Protocol Support](#whipwhep-protocol-support)
- [Advanced Features](#advanced-features)
- [Docker Notes](#docker-notes)
- [STUN/TURN Reference](#stunturn-reference)
- [FAQ](#faq)
## Introduction to WebRTC
WebRTC (Web Real-Time Communication) is an open standard, jointly developed by the W3C and IETF, for real-time audio/video communication in browsers and mobile applications. Its core characteristics include:
### Key Characteristics
1. **Peer-to-peer (P2P) communication**: browsers can connect directly, reducing server load and latency
2. **Low latency**: UDP transport and optimized codecs enable millisecond-level latency
3. **Adaptive bitrate**: video quality adjusts automatically to network conditions
4. **NAT traversal**: ICE (Interactive Connectivity Establishment) handles NAT and firewalls automatically
### WebRTC Workflow
1. **Signaling**: exchange SDP (Session Description Protocol) and ICE candidates over HTTP/WebSocket or similar protocols
2. **ICE candidate gathering**: collect local and remote network address information
3. **Connection establishment**: complete NAT traversal via ICE negotiation and establish a P2P connection
4. **Media transport**: transmit audio/video encrypted with SRTP (Secure Real-time Transport Protocol)
### WebRTC Connection Sequence
```mermaid
sequenceDiagram
participant Client as Client
participant Server as Monibuca Server
Note over Client,Server: 1. Signaling
Client->>Server: POST /webrtc/push/{streamPath}<br/>Body: SDP Offer
Server->>Server: Create PeerConnection
Server->>Server: Prepare answer
Server->>Client: HTTP 201 Created<br/>Body: SDP Answer
Note over Client,Server: 2. ICE candidate exchange
Client->>Client: Gather local ICE candidates
Server->>Server: Gather local ICE candidates
Client->>Server: Send ICE candidates
Server->>Client: Send ICE candidates
Note over Client,Server: 3. Connectivity checks
Client->>Server: Connection attempts (Host/SRFLX candidates)
alt Connection succeeds
Client->>Server: Establish P2P connection
Server-->>Client: Confirm connection
else Negotiation continues
Client->>Server: Send more candidates
Server-->>Client: Return negotiation result
end
Note over Client,Server: 4. Media transport
Client->>Server: SRTP media
Server-->>Client: RTCP feedback
```
## Plugin Overview
Monibuca's WebRTC plugin is built on Pion WebRTC v4 and provides complete WebRTC publish/playback functionality. It supports:
- ✅ Publishing (WHIP protocol)
- ✅ Playback (WHEP protocol)
- ✅ Multiple video codecs (H.264, H.265, AV1, VP9)
- ✅ Multiple audio codecs (Opus, PCMA, PCMU)
- ✅ TCP/UDP transport
- ✅ ICE server configuration
- ✅ DataChannel fallback transport
- ✅ Built-in test pages
## Configuration
### Basic Configuration
Configure the WebRTC plugin in `config.yaml`:
```yaml
webrtc:
  # ICE server list (optional). For STUN/TURN setup, see "STUN/TURN Reference".
  iceservers: []
  # Listening port configuration
  # Supported formats:
  # - tcp:9000 (TCP port)
  # - udp:9000 (UDP port)
  # - udp:10000-20000 (UDP port range)
  port: tcp:9000
  # PLI request interval (request keyframes after video packet loss)
  pli: 2s
  # Enable DataChannel as a fallback transport for unsupported codecs
  enabledc: false
  # MimeType filter list (empty means no filtering)
  mimetype:
    - video/H264
    - video/H265
    - audio/PCMA
    - audio/PCMU
```
### Configuration Parameters
#### ICE Servers (ICEServers)
Configures the server list used for ICE negotiation. See the "STUN/TURN Reference" at the end of this document for details and examples.
#### Port (Port)
Three port configurations are supported:
1. **TCP port**: uses TCP transport, suitable for environments with strict firewalls
   ```yaml
   port: tcp:9000
   ```
2. **UDP port**: uses UDP transport with lower latency
   ```yaml
   port: udp:9000
   ```
3. **UDP port range**: assigns a different port to each connection
   ```yaml
   port: udp:10000-20000
   ```
#### PLI Interval (PLI)
PLI (Picture Loss Indication) requests a keyframe after video packet loss. By default a PLI request is sent every 2 seconds.
#### DataChannel (EnableDC)
When a client does not support a codec (e.g., H.265), DataChannel can be enabled as a fallback transport. Media is encapsulated as FLV and sent over the DataChannel.
#### MimeType Filter (MimeType)
Restricts the allowed codecs. If empty, no filtering is applied and all codecs are supported.
#### Public IP (PublicIP)
When the Monibuca server is deployed behind NAT (e.g., in a Docker container or on an internal network), you must configure the public IP address so clients can establish WebRTC connections.
##### How PublicIP Works
In a NAT environment, the server only sees its internal IP (e.g., `192.168.1.100`), but clients need the server's public IP to connect. The `PublicIP` setting works as follows:
1. **NAT 1:1 IP mapping**: maps the internal IP to the public IP via the `SetNAT1To1IPs` method
2. **ICE candidate generation**: uses the configured public IP instead of the internal IP when generating ICE candidates
3. **IP addresses in SDP**: the `c=` line in the SDP answer and the IPs in ICE candidates use the public IP
##### Configuration Example
```yaml
webrtc:
  # Public address configuration for the WebRTC plugin
  publicip: 203.0.113.1     # public IPv4 address
  publicipv6: 2001:db8::1   # public IPv6 address (optional)
  # Other common settings...
  port: tcp:9000
  pli: 2s
  enabledc: false
```
##### How It Works
```
┌─────────────────────────────────────────────────────────┐
│                    Public network                       │
│                                                         │
│                 ┌──────────────┐                        │
│                 │    Client    │                        │
│                 └──────┬───────┘                        │
│                        │                                │
│   1. Discover a usable public address                   │
│      (e.g., via STUN/TURN; see the reference section)   │
│                        │                                │
└────────────────────────┼────────────────────────────────┘
┌────────────────────────▼────────────────────────────────┐
│                   NAT / firewall                        │
│                                                         │
│   ┌───────────────────────────────┐                     │
│   │       Monibuca server         │                     │
│   │  Internal IP: 192.168.1.100   │                     │
│   │  Configured PublicIP:         │                     │
│   │  203.0.113.1                  │                     │
│   └───────────────────────────────┘                     │
│                                                         │
│   2. Server generates ICE candidates using PublicIP     │
│   3. SDP answer contains the public IP                  │
│   4. Client connects to the server via the public IP    │
└─────────────────────────────────────────────────────────┘
```
##### Notes
1. **Required behind NAT**: if the server sits behind NAT, a correct public IP must be configured or clients cannot connect
2. **Accuracy**: make sure the configured value is the server's actual public-facing IP
3. **Port mapping**: with Docker or port forwarding, ensure the public port is mapped correctly
4. **IPv6 support**: if the server has an IPv6 address, `publicipv6` can be configured as well
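Whether the setting took effect can be verified directly from the SDP answer the server returns. A small sketch that extracts the connection-line and candidate IPs from an SDP string — the configured public IP should appear among them:

```javascript
// Pull the c= line address and a=candidate addresses out of an SDP
// answer, so you can confirm the public IP (not only 192.168.x.x)
// is being advertised to clients.
function extractSdpIps(sdp) {
  const ips = new Set();
  for (const line of sdp.split(/\r?\n/)) {
    const conn = line.match(/^c=IN IP[46] (\S+)/);
    if (conn) ips.add(conn[1]);
    // a=candidate:<foundation> <component> <transport> <priority> <address> <port> ...
    const cand = line.match(/^a=candidate:\S+ \d+ \S+ \d+ (\S+) \d+/);
    if (cand) ips.add(cand[1]);
  }
  return [...ips];
}

// extractSdpIps(answerSdp) should include the configured public IP.
```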
## Basic Usage
### Starting the Service
With the WebRTC plugin enabled, Monibuca automatically registers the following HTTP endpoints at startup:
- `POST /webrtc/push/{streamPath}` - publish endpoint (WHIP)
- `POST /webrtc/play/{streamPath}` - playback endpoint (WHEP)
- `GET /webrtc/test/{name}` - test pages
### Test Pages
The plugin ships with built-in test pages for quick verification:
- **Publish test**: `http://localhost:8080/webrtc/test/publish`
- **Playback test**: `http://localhost:8080/webrtc/test/subscribe`
- **Screen sharing**: `http://localhost:8080/webrtc/test/screenshare`
## Publishing (Push)
### Publishing from the Test Page
1. Open `http://localhost:8080/webrtc/test/publish` in a browser
2. The browser requests camera and microphone permissions
3. Select a camera device (if more than one is available)
4. The page automatically creates a WebRTC connection and starts publishing
### Custom Publishing
#### JavaScript Example
```javascript
// Acquire the media stream
const mediaStream = await navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true
});
// Create the PeerConnection
const pc = new RTCPeerConnection({
  // For STUN/TURN setup, see "STUN/TURN Reference"
  iceServers: []
});
// Add media tracks
mediaStream.getTracks().forEach(track => {
  pc.addTrack(track, mediaStream);
});
// Create the offer
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
// Send the offer to the server
const response = await fetch('/webrtc/push/live/test', {
  method: 'POST',
  headers: { 'Content-Type': 'application/sdp' },
  body: offer.sdp
});
// Receive the answer
const answerSdp = await response.text();
await pc.setRemoteDescription(
  new RTCSessionDescription({ type: 'answer', sdp: answerSdp })
);
```
#### Publishing with H.265
If the browser supports H.265, you can select it when publishing:
```javascript
// After adding the video track, set the codec preference
const videoTransceiver = pc.getTransceivers().find(
  t => t.sender.track && t.sender.track.kind === 'video'
);
if (videoTransceiver) {
  const capabilities = RTCRtpSender.getCapabilities('video');
  const h265Codec = capabilities.codecs.find(
    c => c.mimeType.toLowerCase() === 'video/h265'
  );
  if (h265Codec) {
    videoTransceiver.setCodecPreferences([h265Codec]);
  }
}
```
On the test page, simply append the `?h265` parameter: `/webrtc/test/publish?h265`
### Publish URL Parameters
The publish endpoint supports the following URL parameters:
- `streamPath`: stream path, e.g., `live/test`
- `bearer`: Bearer token for authentication
Example:
```
POST /webrtc/push/live/test?bearer=your_token
```
### Stopping a Publish
Close the PeerConnection to stop publishing:
```javascript
pc.close();
```
## Playback (Pull)
### Playing from the Test Page
1. Make sure a stream is being published (via the publish test page or any other method)
2. Open `http://localhost:8080/webrtc/test/subscribe?streamPath=live/test` in a browser
3. The page automatically creates a WebRTC connection and starts playback
### Custom Playback
#### JavaScript Example
```javascript
// Create the PeerConnection
const pc = new RTCPeerConnection({
  // For STUN/TURN setup, see "STUN/TURN Reference"
  iceServers: []
});
// Listen for remote tracks
pc.ontrack = (event) => {
  if (event.streams.length > 0) {
    videoElement.srcObject = event.streams[0];
    videoElement.play();
  }
};
// Add receivers
pc.addTransceiver('video', { direction: 'recvonly' });
pc.addTransceiver('audio', { direction: 'recvonly' });
// Create the offer
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
// Send the offer to the server
const response = await fetch('/webrtc/play/live/test', {
  method: 'POST',
  headers: { 'Content-Type': 'application/sdp' },
  body: offer.sdp
});
// Receive the answer
const answerSdp = await response.text();
await pc.setRemoteDescription(
  new RTCSessionDescription({ type: 'answer', sdp: answerSdp })
);
```
### Playback URL Parameters
The playback endpoint supports the following URL parameters:
- `streamPath`: stream path, e.g., `live/test`
Example:
```
POST /webrtc/play/live/test
```
### Stopping Playback
Close the PeerConnection to stop playback:
```javascript
pc.close();
```
## WHIP/WHEP Protocol Support
### WHIP (WebRTC-HTTP Ingestion Protocol)
WHIP is an HTTP-based WebRTC publishing protocol, fully supported by Monibuca's WebRTC plugin.
#### WHIP Publish Flow
1. The client creates a PeerConnection and an offer
2. The client sends `POST /webrtc/push/{streamPath}` with the SDP offer as the body
3. The server returns the SDP answer (HTTP 201 Created)
4. The client applies the answer and the connection is established
5. Media transmission begins
#### WHIP Publish Sequence
```mermaid
sequenceDiagram
participant Client as Client
participant Server as Monibuca Server
Note over Client,Server: 1. Preparation
Client->>Client: Acquire media stream (getUserMedia)
Client->>Client: Create RTCPeerConnection
Client->>Client: Add media tracks (addTrack)
Note over Client,Server: 2. Create offer
Client->>Client: Create offer (createOffer)
Client->>Client: Set local description (setLocalDescription)
Client->>Client: Gather ICE candidates
Note over Client,Server: 3. Send offer to server
Client->>Server: POST /webrtc/push/{streamPath}<br/>Content-Type: application/sdp<br/>Body: SDP Offer
Server->>Server: Parse SDP offer
Server->>Server: Create PeerConnection
Server->>Server: Set remote description (setRemoteDescription)
Server->>Server: Create publisher (Publish)
Server->>Server: Gather ICE candidates
Note over Client,Server: 4. Server returns answer
Server->>Server: Create answer (createAnswer)
Server->>Server: Set local description (setLocalDescription)
Server->>Client: HTTP 201 Created<br/>Content-Type: application/sdp<br/>Body: SDP Answer
Server->>Client: Location: /webrtc/api/stop/push/{streamPath}
Note over Client,Server: 5. Client handles answer
Client->>Client: Set remote description (setRemoteDescription)
Client->>Client: ICE candidate exchange
Server->>Client: ICE candidate exchange
Note over Client,Server: 6. Connection establishment
Client->>Server: ICE connection established
Server->>Client: ICE connection established
Client->>Server: Data channel established
Server-->>Client: Data channel confirmed
Note over Client,Server: 7. Media transport
Client->>Server: Send media (SRTP)
Server-->>Client: RTCP feedback
```
#### WHIP Client Implementation
```javascript
const pc = new RTCPeerConnection();
// ... add tracks and create the offer ...
const response = await fetch('http://server:8080/webrtc/push/live/test', {
  method: 'POST',
  headers: { 'Content-Type': 'application/sdp' },
  body: offer.sdp
});
if (response.status === 201) {
  const answerSdp = await response.text();
  await pc.setRemoteDescription(
    new RTCSessionDescription({ type: 'answer', sdp: answerSdp })
  );
}
```
### WHEP (WebRTC-HTTP Egress Protocol)
WHEP is an HTTP-based WebRTC playback protocol, fully supported by Monibuca's WebRTC plugin.
#### WHEP Playback Flow
1. The client creates a PeerConnection and an offer (with recvonly transceivers)
2. The client sends `POST /webrtc/play/{streamPath}` with the SDP offer as the body
3. The server returns the SDP answer
4. The client applies the answer and the connection is established
5. Media reception begins
#### WHEP Playback Sequence
```mermaid
sequenceDiagram
participant Client as Client
participant Server as Monibuca Server
Note over Client,Server: 1. Preparation
Client->>Client: Create RTCPeerConnection
Client->>Client: Add receivers (addTransceiver recvonly)
Client->>Client: Listen for remote tracks (ontrack)
Note over Client,Server: 2. Create offer
Client->>Client: Create offer (createOffer)
Client->>Client: Set local description (setLocalDescription)
Client->>Client: Gather ICE candidates
Note over Client,Server: 3. Send offer to server
Client->>Server: POST /webrtc/play/{streamPath}<br/>Content-Type: application/sdp<br/>Body: SDP Offer
Server->>Server: Parse SDP offer
Server->>Server: Check whether the stream exists
alt Stream does not exist
Server->>Client: HTTP 404 Not Found
else Stream exists
Server->>Server: Create PeerConnection
Server->>Server: Set remote description (setRemoteDescription)
Server->>Server: Create subscriber (Subscribe)
Server->>Server: Add media tracks (AddTrack)
Server->>Server: Gather ICE candidates
Note over Client,Server: 4. Server returns answer
Server->>Server: Create answer (createAnswer)
Server->>Server: Set local description (setLocalDescription)
Server->>Client: HTTP 200 OK<br/>Content-Type: application/sdp<br/>Body: SDP Answer
Note over Client,Server: 5. Client handles answer
Client->>Client: Set remote description (setRemoteDescription)
Client->>Client: ICE candidate exchange
Server->>Client: ICE candidate exchange
Note over Client,Server: 6. Connection establishment
Client->>Server: ICE connection established
Server->>Client: ICE connection established
Client->>Server: Data channel established
Server-->>Client: Data channel confirmed
Note over Client,Server: 7. Media transport
Server->>Client: Send media (SRTP)
Client->>Client: Receive and play the media stream
end
```
#### WHEP Client Implementation
```javascript
const pc = new RTCPeerConnection();
pc.addTransceiver('video', { direction: 'recvonly' });
pc.addTransceiver('audio', { direction: 'recvonly' });
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
const response = await fetch('http://server:8080/webrtc/play/live/test', {
  method: 'POST',
  headers: { 'Content-Type': 'application/sdp' },
  body: offer.sdp
});
const answerSdp = await response.text();
await pc.setRemoteDescription(
  new RTCSessionDescription({ type: 'answer', sdp: answerSdp })
);
```
### Acting as a WHIP/WHEP Client
Monibuca's WebRTC plugin can also act as a client, pulling from or pushing to other WHIP/WHEP servers.
#### Pull Configuration (WHEP)
Configure in `config.yaml`:
```yaml
pull:
  streams:
    - url: https://whep-server.example.com/play/stream1
      streamPath: live/stream1
```
#### Push Configuration (WHIP)
Configure in `config.yaml`:
```yaml
push:
  streams:
    - url: https://whip-server.example.com/push/stream1
      streamPath: live/stream1
```
## Advanced Features
### Codec Support
The plugin supports the following codecs:
#### Video Codecs
- **H.264**: the most widely supported video codec
- **H.265/HEVC**: a more efficient video codec (requires browser support)
- **AV1**: a next-generation open-source video codec
- **VP9**: a video codec developed by Google
#### Audio Codecs
- **Opus**: a modern, high-quality audio codec
- **PCMA**: G.711 A-law, common in telephony systems
- **PCMU**: G.711 μ-law, common in telephony systems
### DataChannel Transport
When a client does not support a codec, DataChannel can be enabled as a fallback transport. Media is encapsulated as FLV and sent over the DataChannel.
Enable DataChannel:
```yaml
webrtc:
  enabledc: true
```
### NAT Traversal Configuration
If the server is deployed behind NAT, a public IP must be configured. See [Public IP (PublicIP)](#public-ip-publicip) for details.
### Docker Notes
When using the WebRTC plugin inside Docker, keep the following in mind:
#### 1. Network Mode
The `host` network mode is recommended; it avoids port-mapping issues:
```bash
docker run --network host monibuca/monibuca
```
If you must use `bridge` mode, you need to:
- Map the WebRTC port (TCP or UDP)
- Configure the correct public IP
- Verify the port mapping
```bash
# bridge mode
docker run -p 8080:8080 -p 9000:9000/udp monibuca/monibuca
```
#### 2. Port Mapping
##### TCP Mode
With TCP mode (`port: tcp:9000`), map the TCP port:
```bash
docker run -p 8080:8080 -p 9000:9000/tcp monibuca/monibuca
```
##### UDP Mode
With UDP mode (`port: udp:9000`), map the UDP port:
```bash
docker run -p 8080:8080 -p 9000:9000/udp monibuca/monibuca
```
##### UDP Port Range
With a port range (`port: udp:10000-20000`), map the entire range:
```bash
docker run -p 8080:8080 -p 10000-20000:10000-20000/udp monibuca/monibuca
```
**Note**: Docker's port-range mapping can be limited; prefer a single UDP port or `host` network mode.
#### 3. PublicIP Configuration
In a Docker environment you **must configure the public IP**, because the container's own IP is a private address.
##### Finding the Public IP
You can determine the server's public IP in several ways:
```bash
# Method 1: curl
curl ifconfig.me
# Method 2: dig
dig +short myip.opendns.com @resolver1.opendns.com
# Method 3: check the server configuration
# (e.g., in your cloud provider's console)
```
##### Configuration Example
```yaml
# config.yaml
webrtc:
  publicip: 203.0.113.1 # replace with the actual public IP
  port: udp:9000
```
#### 4. Docker Compose Example
```yaml
version: '3.8'
services:
  monibuca:
    image: monibuca/monibuca:latest
    network_mode: host # host mode recommended
    # Or bridge mode:
    # ports:
    #   - "8080:8080"
    #   - "9000:9000/udp"
    volumes:
      - ./config.yaml:/app/config.yaml
      - ./logs:/app/logs
    environment:
      - PUBLICIP=203.0.113.1 # if configured via environment variable
```
#### 5. Common Issues
##### Issue 1: Connection Fails
**Cause**: the container's IP is a private address, so clients cannot connect directly.
**Solutions**:
- Configure the correct `publicip`
- Use `host` network mode
- Verify the port mapping
##### Issue 2: UDP Port Cannot Be Mapped
**Cause**: Docker's UDP port mapping can be unstable in some setups.
**Solutions**:
- Use `host` network mode
- Use TCP mode: `port: tcp:9000`
- Check firewall rules
##### Issue 3: Multiple Instances
To run several Monibuca instances on the same server:
```yaml
# Instance 1
webrtc:
  port: tcp:9000
# Instance 2
webrtc:
  port: tcp:9001
```
Then map different ports for each:
```bash
docker run -p 8080:8080 -p 9000:9000/tcp monibuca1
docker run -p 8081:8081 -p 9001:9001/tcp monibuca2
```
#### 6. Best Practices
1. **Use `host` network mode**: avoids port-mapping issues and performs better
2. **Configure PublicIP**: ensures clients can connect
3. **Use TCP mode**: more stable in Docker environments
4. **Monitor connection state**: track WebRTC connection states via logs
5. **Configure TURN servers**: as a fallback, to improve connection success rates
### Multi-Stream Support
The plugin can carry multiple streams over a single WebRTC connection via the BatchV2 API.
Open the batch streaming page: `http://localhost:8080/webrtc/test/batchv2`
#### BatchV2 Multi-Stream Mode
- **Signaling channel**: communicates with the server's `/webrtc/batchv2` endpoint over WebSocket (HTTP → WebSocket upgrade).
- **Initial handshake**:
  1. The client creates an `RTCPeerConnection` and runs `createOffer`/`setLocalDescription`.
  2. It sends `{ "type": "offer", "sdp": "..." }` over the WebSocket.
  3. The server replies with `{ "type": "answer", "sdp": "..." }`, and the client runs `setRemoteDescription`.
- **Common commands** (all JSON text frames):
  - `getStreamList`: `{ "type": "getStreamList" }` → the server returns `{ "type": "streamList", "streams": [{ "path": "live/cam1", "codec": "H264", "width": 1280, "height": 720, "fps": 25 }, ...] }`.
  - `subscribe`: appends streams to an existing connection; message format:
    ```json
    {
      "type": "subscribe",
      "streamList": ["live/cam1", "live/cam2"],
      "offer": "SDP..."
    }
    ```
    After renegotiation the server returns `{ "type": "answer", "sdp": "..." }`; call `setRemoteDescription` again.
  - `unsubscribe`: removes the given streams; same structure as `subscribe` (`streamList` holds the streams to remove).
  - `publish`: publishes on the same connection:
    ```json
    {
      "type": "publish",
      "streamPath": "live/cam3",
      "offer": "SDP..."
    }
    ```
    Apply the returned answer as the remote description to start sending.
  - `unpublish`: `{ "type": "unpublish", "streamPath": "live/cam3" }`; the server returns a new SDP answer.
  - `ping`: `{ "type": "ping" }`; the server replies `pong` to keep the session alive.
- **Media limitation**: by default the subscribe side only pulls video tracks (`SubAudio` disabled); extend it yourself if audio is needed.
- **Client tooling**: `web/BatchV2Client.ts` provides a complete browser-side implementation; the demo page `webrtc/test/batchv2` shows multi-stream playback, publishing, and stream-list operations.
- **Troubleshooting**:
  - If the server returns `error`, the message body contains a `message` field; inspect it in the browser console or a WebSocket debugging tool.
  - Each `subscribe`/`publish` triggers a new SDP; make sure your application completes the `setLocalDescription` → send → `setRemoteDescription` negotiation cycle.
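The signaling frames above can be sketched as small builder helpers (field names exactly as documented; how you send them over the WebSocket is up to your application):

```javascript
// Builders for the BatchV2 JSON signaling frames. Each returns the
// text payload for one WebSocket message; the server replies with
// "answer", "streamList", "pong", or "error" frames.
const batchV2 = {
  offer: sdp => JSON.stringify({ type: 'offer', sdp }),
  getStreamList: () => JSON.stringify({ type: 'getStreamList' }),
  subscribe: (streamList, offer) =>
    JSON.stringify({ type: 'subscribe', streamList, offer }),
  unsubscribe: (streamList, offer) =>
    JSON.stringify({ type: 'unsubscribe', streamList, offer }),
  publish: (streamPath, offer) =>
    JSON.stringify({ type: 'publish', streamPath, offer }),
  unpublish: streamPath => JSON.stringify({ type: 'unpublish', streamPath }),
  ping: () => JSON.stringify({ type: 'ping' })
};

// e.g. ws.send(batchV2.subscribe(['live/cam1', 'live/cam2'], pc.localDescription.sdp));
```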
### Connection State Monitoring
Monitor the connection by listening to PeerConnection state events:
```javascript
pc.oniceconnectionstatechange = () => {
  console.log('ICE Connection State:', pc.iceConnectionState);
  // Possible values:
  // - new: connection created
  // - checking: connectivity checks in progress
  // - connected: connected
  // - completed: checks completed
  // - failed: connection failed
  // - disconnected: connection lost
  // - closed: connection closed
};
pc.onconnectionstatechange = () => {
  console.log('Connection State:', pc.connectionState);
};
```
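For application code it is often convenient to wrap these callbacks in a promise. A sketch that resolves when `connectionState` reaches `connected`, rejects on `failed`, and times out otherwise — it only relies on `connectionState`/`onconnectionstatechange`, so it works with any object exposing those members:

```javascript
// Resolve once the connection is up, reject on failure or timeout.
function waitForConnected(pc, timeoutMs = 10000) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error('timeout')), timeoutMs);
    const check = () => {
      if (pc.connectionState === 'connected') {
        clearTimeout(timer);
        resolve();
      } else if (pc.connectionState === 'failed') {
        clearTimeout(timer);
        reject(new Error('connection failed'));
      }
    };
    pc.onconnectionstatechange = check;
    check(); // in case the state was already reached
  });
}

// Usage: await waitForConnected(pc); then start rendering / publishing.
```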
## STUN/TURN Reference
### Why STUN/TURN Matters
- **STUN (Session Traversal Utilities for NAT)**: lets WebRTC endpoints learn their public address and port, used for hole punching and building ICE candidates.
- **TURN (Traversal Using Relays around NAT)**: relays media when neither side can traverse the NAT/firewall directly, guaranteeing connectivity.
- In most public or home networks, STUN is enough to establish a P2P connection; enterprise networks, mobile networks, and symmetric NAT scenarios need TURN as a fallback.
### Configuration Example
Configure the STUN/TURN server list in `config.yaml`:
```yaml
webrtc:
  iceservers:
    - urls:
        - stun:stun.l.google.com:19302
    - urls:
        - turn:turn.example.com:3478
      username: user
      credential: password
```
- `urls` accepts multiple entries and can mix `stun:`, `turn:`, and `turns:` URIs.
- TURN servers usually require a username and password; use long-term credentials or short-lived tokens (e.g., with [coturn](https://github.com/coturn/coturn)).
### Deployment Tips
1. **Deploy close to users**: STUN/TURN latency directly affects media quality, so deploy near your clients.
2. **Reserve bandwidth**: TURN relays bidirectional media streams, so reserve sufficient bandwidth and capacity.
3. **Secure access**: rotate TURN credentials regularly; when exposed to external users, combine them with authentication or token mechanisms.
4. **Monitor and alert**: track session counts, bandwidth, and failure rates to catch traversal problems quickly.
5. **Multi-region redundancy**: for a global audience, deploy STUN/TURN in multiple regions and pick the best node via DNS or application logic.
## FAQ
### 1. Connection Fails
**Problem**: the WebRTC connection cannot be established.
**Solutions**:
- Verify the ICE server configuration
- Ensure the firewall allows the configured ports
- Try TCP mode: `port: tcp:9000`
- Configure a TURN server as a relay
### 2. Video Not Displayed
**Problem**: the connection succeeds but no video is shown.
**Solutions**:
- Check the browser console for errors
- Confirm the stream path is correct
- Check whether the codec is supported
- Verify with the built-in test pages
- Open the browser devtools Network panel and confirm the SDP answer is returned and the media tracks are created
- Run `pc.getReceivers().map(r => r.track)` in the console and check that the remote tracks are `live`
- Check the server logs to confirm the subscriber is receiving video frames
- Open `chrome://webrtc-internals` in Chrome (or `edge://webrtc-internals` in Edge) to inspect the PeerConnection's statistics and remote stream state, and diagnose bitrate, frame rate, and ICE issues
### 3. H.265 Unsupported
**Problem**: the browser does not support H.265.
**Solutions**:
- Enable DataChannel: `enabledc: true`
- Use the H.264 codec instead
- Wait for broader browser support (Chrome 113+ already supports it)
### 4. CORS Issues
**Problem**: cross-origin requests are blocked.
**Solutions**:
- Configure CORS headers
- Deploy under the same origin
- Use a proxy server
### 5. Port Already in Use
**Problem**: the port is already occupied.
**Solutions**:
- Change the port: `port: tcp:9001`
- Check whether another service occupies the port
- Use a port range: `port: udp:10000-20000`
### 6. Docker Connection Failures
**Problem**: WebRTC connections fail in a Docker environment.
**Solutions**:
- Configure the correct `publicip`
- Use `host` network mode: `docker run --network host`
- Verify the port mapping (UDP ports need the `/udp` suffix)
- Check that the firewall allows the ports
- See the [Docker Notes](#docker-notes) section
### 7. PublicIP Not Working
**Problem**: `publicip` is configured, but clients still cannot connect.
**Solutions**:
- Confirm the value is the server's actual public-facing IP
- Check the port mapping (Docker environments)
- Test with `host` network mode
- Inspect firewall and NAT rules
- Check the server logs to confirm the IPs in the ICE candidates
### 8. AAC Audio Not Playing
**Problem**: audio is unavailable during playback; logs show that the AAC or MP4A codec is unsupported.
**Solutions**:
- The WebRTC plugin currently supports only audio codecs such as Opus, PCMA, and PCMU.
- If the source stream uses AAC (MP4A), you can:
  - Switch the encoder to Opus;
  - Enable DataChannel (`enabledc: true`) to transport FLV-encapsulated AAC audio;
  - Transcode the audio before publishing (e.g., via another plugin or an external transcoding pipeline).
## Summary
Monibuca's WebRTC plugin provides complete WebRTC publish/playback functionality with standard WHIP/WHEP support, and integrates easily into existing web applications. With proper configuration and tuning, it delivers low-latency, high-quality real-time audio/video.
Further reading:
- [WebRTC official site](https://webrtc.org/)
- [WHIP draft](https://datatracker.ietf.org/doc/html/draft-ietf-wish-whip)
- [WHEP draft](https://datatracker.ietf.org/doc/html/draft-murillo-whep)