Compare commits

...

13 Commits

Author SHA1 Message Date
pggiroro d2bd4b2c7a fix: when rtmp sequence header change,mp4 record two diffent sequence header into the same mp4 file. 2025-10-20 16:50:00 +08:00
langhuihui 6693676fe2 fix: mux ICodecCtx sync 2025-10-20 14:28:45 +08:00
langhuihui be391f9528 fix: bufreader doc 2025-10-19 08:03:11 +08:00
pggiroro 6779b88755 fix: 1.hls record failed;2.mp4 record filename use mileseconds;3.gb28181 update channels 2025-10-14 21:38:02 +08:00
langhuihui fe5d31ad08 fix: rtsp tcp read timeout 2025-10-14 20:37:05 +08:00
langhuihui a87eeb8a30 fix: update gotask version and update bufreader doc 2025-10-14 10:44:21 +08:00
langhuihui 4f301e724d doc: add bufrader doc 2025-10-13 16:46:22 +08:00
langhuihui 3e17f13731 doc: add readme to rtsp plugin 2025-10-11 14:11:49 +08:00
langhuihui 4a2b2a4f06 fix: remove xdp 2025-10-11 13:37:33 +08:00
langhuihui 3151d9c101 fix: avoid hls transform task to retry 2025-10-11 13:20:25 +08:00
langhuihui 29870fb579 feat: update gotask to 1.0.0 2025-10-11 09:34:04 +08:00
pggiroro 92fa6856b7 feat: support add pullproxy to gb device 2025-10-08 22:47:39 +08:00
pggiroro a020dc1cd2 feat: mv mp4 file to SecondaryFilePath;fix: createfile use rw mode,close file when muxer is nil 2025-10-06 23:19:59 +08:00
39 changed files with 5116 additions and 1368 deletions

692
doc/bufreader_analysis.md Normal file
View File

@@ -0,0 +1,692 @@
# BufReader: Zero-Copy Network Reading with Non-Contiguous Memory Buffers
## Table of Contents
- [1. Problem: Traditional Contiguous Memory Buffer Bottlenecks](#1-problem-traditional-contiguous-memory-buffer-bottlenecks)
- [2. Core Solution: Non-Contiguous Memory Buffer Passing Mechanism](#2-core-solution-non-contiguous-memory-buffer-passing-mechanism)
- [3. Performance Validation](#3-performance-validation)
- [4. Usage Guide](#4-usage-guide)
## TL;DR (Key Takeaways)
**Core Innovation**: Non-Contiguous Memory Buffer Passing Mechanism
- Data stored as **sliced memory blocks**, non-contiguous layout
- Pass references via **ReadRange callback**, zero-copy
- Memory blocks **reused from object pool**, avoiding allocation and GC
**Performance Data** (Streaming server, 100 concurrent streams):
```
bufio.Reader: 79 GB allocated, 134 GCs, 374.6 ns/op
BufReader: 0.6 GB allocated, 2 GCs, 30.29 ns/op
Result: 98.5% GC reduction, 11.6x throughput improvement
```
**Ideal For**: High-concurrency network servers, streaming media, long-running services
---
## 1. Problem: Traditional Contiguous Memory Buffer Bottlenecks
### 1.1 bufio.Reader's Contiguous Memory Model
The standard library `bufio.Reader` uses a **fixed-size contiguous memory buffer**:
```go
type Reader struct {
buf []byte // Single contiguous buffer (e.g., 4KB)
r, w int // Read/write pointers
}
func (b *Reader) Read(p []byte) (n int, err error) {
// Copy from contiguous buffer to target
n = copy(p, b.buf[b.r:b.w]) // Must copy
return
}
```
**Cost of Contiguous Memory**:
```
Reading 16KB data (with 4KB buffer):
Network → bufio buffer → User buffer
↓ (4KB contiguous) ↓
1st [████] → Copy to result[0:4KB]
2nd [████] → Copy to result[4KB:8KB]
3rd [████] → Copy to result[8KB:12KB]
4th [████] → Copy to result[12KB:16KB]
Total: 4 network reads + 4 memory copies
Allocates result (16KB contiguous memory)
```
### 1.2 Issues in High-Concurrency Scenarios
In streaming servers (100 concurrent connections, 30fps each):
```go
// Typical processing pattern (subscribers passed in for illustration)
func handleStream(conn net.Conn, subscribers []net.Conn) {
reader := bufio.NewReaderSize(conn, 4096)
for {
// Allocate contiguous buffer for each packet
packet := make([]byte, 1024) // Allocation 1
n, _ := reader.Read(packet) // Copy 1
// Forward to multiple subscribers
for _, sub := range subscribers {
data := make([]byte, n) // Allocations 2-N
copy(data, packet[:n]) // Copies 2-N
sub.Write(data)
}
}
}
// Performance impact:
// 100 connections × 30fps × (1 + subscribers) allocations = massive temporary memory
// Triggers frequent GC, system instability
```
**Core Problems**:
1. Must maintain contiguous memory layout → Frequent copying
2. Allocate new buffer for each packet → Massive temporary objects
3. Forwarding requires multiple copies → CPU wasted on memory operations
## 2. Core Solution: Non-Contiguous Memory Buffer Passing Mechanism
### 2.1 Design Philosophy
BufReader uses **non-contiguous memory block slices**:
```
No longer require data in contiguous memory:
1. Data scattered across multiple memory blocks (slice)
2. Each block independently managed and reused
3. Pass by reference, no data copying
```
**Core Data Structures**:
```go
type BufReader struct {
Allocator *ScalableMemoryAllocator // Object pool allocator
buf MemoryReader // Memory block slice
}
type MemoryReader struct {
Buffers [][]byte // Multiple memory blocks, non-contiguous!
Size int // Total size
Length int // Readable length
}
```
### 2.2 Non-Contiguous Memory Buffer Model
#### Contiguous vs Non-Contiguous Comparison
```
bufio.Reader (Contiguous Memory):
┌─────────────────────────────────┐
│ 4KB Fixed Buffer │
│ [Read][Available] │
└─────────────────────────────────┘
- Must copy to contiguous target buffer
- Fixed size limitation
- Read portion wastes space
BufReader (Non-Contiguous Memory):
┌──────┐ ┌──────┐ ┌────────┐ ┌──────┐
│Block1│→│Block2│→│ Block3 │→│Block4│
│ 512B │ │ 1KB │ │ 2KB │ │ 3KB │
└──────┘ └──────┘ └────────┘ └──────┘
- Directly pass reference to each block (zero-copy)
- Flexible block sizes
- Recycle immediately after processing
```
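
The same non-contiguous idea already exists on the write side of the Go standard library: `net.Buffers` holds multiple byte slices and flushes them with a single vectored write (writev) where the destination supports it, never gluing the blocks together first. A minimal illustration:

```go
package main

import (
	"net"
	"os"
)

func main() {
	// net.Buffers is a [][]byte; WriteTo flushes every block in order
	// with one vectored write when possible.
	blocks := net.Buffers{
		[]byte("block1 "),
		[]byte("block2 "),
		[]byte("block3\n"),
	}
	blocks.WriteTo(os.Stdout) // no contiguous buffer is ever assembled
}
```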
#### Memory Block Chain Workflow
```mermaid
sequenceDiagram
participant N as Network
participant P as Object Pool
participant B as BufReader.buf
participant U as User Code
N->>P: 1st read (returns 512B)
P-->>B: Block1 (512B) - from pool or new
B->>B: Buffers = [Block1]
N->>P: 2nd read (returns 1KB)
P-->>B: Block2 (1KB) - reused from pool
B->>B: Buffers = [Block1, Block2]
N->>P: 3rd read (returns 2KB)
P-->>B: Block3 (2KB)
B->>B: Buffers = [Block1, Block2, Block3]
N->>P: 4th read (returns 1KB)
P-->>B: Block4 (1KB)
B->>B: Buffers = [Block1, Block2, Block3, Block4]
U->>B: ReadRange(4096)
B->>U: yield(Block1) - pass reference
B->>U: yield(Block2) - pass reference
B->>U: yield(Block3) - pass reference
B->>U: yield(Block4[0:512])
U->>B: Processing complete
B->>P: Recycle Block1, Block2, Block3 (Block4 keeps its unread 512B)
Note over P: Memory blocks return to pool for reuse
```
### 2.3 Zero-Copy Passing: ReadRange API
**Core API**:
```go
func (r *BufReader) ReadRange(n int, yield func([]byte)) error
```
**How It Works**:
```go
// Internal implementation (simplified; assumes at least n bytes are already buffered)
func (r *BufReader) ReadRange(n int, yield func([]byte)) error {
remaining := n
// Iterate through memory block slice
for _, block := range r.buf.Buffers {
if remaining <= 0 {
break
}
if len(block) <= remaining {
// Pass entire block
yield(block) // Zero-copy: pass reference directly!
remaining -= len(block)
} else {
// Pass portion
yield(block[:remaining])
remaining = 0
}
}
// Recycle processed blocks
r.recycleFront()
return nil
}
```
**Usage Example**:
```go
// Read 4096 bytes of data
reader.ReadRange(4096, func(chunk []byte) {
// chunk is reference to original memory block
// May be called multiple times with different sized blocks
// e.g.: 512B, 1KB, 2KB, 512B
processData(chunk) // Process directly, zero-copy!
})
// Characteristics:
// - No need to allocate target buffer
// - No need to copy data
// - Each chunk automatically recycled after processing
```
### 2.4 Advantages in Real Network Scenarios
**Scenario: Read 10KB from network, each read returns 500B-2KB**
```
bufio.Reader (Contiguous Memory):
1. Read 2KB to internal buffer (contiguous)
2. Copy 2KB to user buffer ← Copy
3. Read 1.5KB to internal buffer
4. Copy 1.5KB to user buffer ← Copy
5. Read 2KB...
6. Copy 2KB... ← Copy
... Repeat ...
Total: Multiple network reads + Multiple memory copies
Must allocate 10KB contiguous buffer
BufReader (Non-Contiguous Memory):
1. Read 2KB → Block1, append to slice
2. Read 1.5KB → Block2, append to slice
3. Read 2KB → Block3, append to slice
4. Read 2KB → Block4, append to slice
5. Read 2.5KB → Block5, append to slice
6. ReadRange(10KB):
→ yield(Block1) - 2KB
→ yield(Block2) - 1.5KB
→ yield(Block3) - 2KB
→ yield(Block4) - 2KB
→ yield(Block5) - 2.5KB
Total: Multiple network reads + 0 memory copies
No contiguous memory needed, process block by block
```
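
In caller code, "process block by block" means folding the computation across whatever block boundaries the network happened to produce. A hedged sketch (the rolling-checksum fold is illustrative, not part of the API):

```go
var sum uint32
err := reader.ReadRange(10*1024, func(chunk []byte) {
	// Fold across block boundaries as they arrive;
	// no 10KB contiguous buffer is ever assembled.
	for _, b := range chunk {
		sum = sum*31 + uint32(b)
	}
})
// err and sum are then used as usual
```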
### 2.5 Real Application: Stream Forwarding
**Problem Scenario**: 100 concurrent streams, each forwarded to 10 subscribers
**Traditional Approach** (Contiguous Memory):
```go
func forwardStream_Traditional(reader *bufio.Reader, subscribers []net.Conn) {
packet := make([]byte, 4096) // Alloc 1: contiguous memory
n, _ := reader.Read(packet) // Copy 1: from bufio buffer
// Copy for each subscriber
for _, sub := range subscribers {
data := make([]byte, n) // Allocs 2-11: 10 times
copy(data, packet[:n]) // Copies 2-11: 10 times
sub.Write(data)
}
}
// Per packet: 11 allocations + 11 copies
// 100 concurrent × 30fps × 11 = 33,000 allocations/sec
```
**BufReader Approach** (Non-Contiguous Memory):
```go
func forwardStream_BufReader(reader *BufReader, subscribers []net.Conn) {
reader.ReadRange(4096, func(chunk []byte) {
// chunk is original memory block reference, may be non-contiguous
// All subscribers share the same memory block!
for _, sub := range subscribers {
sub.Write(chunk) // Send reference directly, zero-copy
}
})
}
// Per packet: 0 allocations + 0 copies
// 100 concurrent × 30fps × 0 = 0 allocations/sec
```
**Performance Comparison**:
- Allocations: 33,000/sec → 0/sec
- Memory copies: 33,000/sec → 0/sec
- GC pressure: High → Very low
### 2.6 Memory Block Lifecycle
```mermaid
stateDiagram-v2
[*] --> Get from Pool
Get from Pool --> Read Network Data
Read Network Data --> Append to Slice
Append to Slice --> Pass to User
Pass to User --> User Processing
User Processing --> Recycle to Pool
Recycle to Pool --> Get from Pool
note right of Get from Pool
Reuse existing blocks
Avoid GC
end note
note right of Pass to User
Pass reference, zero-copy
May pass to multiple subscribers
end note
note right of Recycle to Pool
Active recycling
Immediately reusable
end note
```
**Key Points**:
1. Memory blocks **circularly reused** in pool, bypassing GC
2. Pass references instead of copying data, achieving zero-copy
3. Recycle immediately after processing, minimizing memory footprint
### 2.7 Core Code Implementation
```go
// Create BufReader (named return r, so the feedData closure can reference it)
func NewBufReader(reader io.Reader) (r *BufReader) {
	r = &BufReader{
		Allocator: NewScalableMemoryAllocator(16384), // Object pool
		feedData: func() error {
			// Get a memory block from the pool, reading network data directly into it
			buf, err := r.Allocator.Read(reader, r.BufLen)
			if err != nil {
				return err
			}
			// Append to the slice (only adds a reference)
			r.buf.Buffers = append(r.buf.Buffers, buf)
			r.buf.Length += len(buf)
			return nil
		},
	}
	return
}

// Zero-copy reading
func (r *BufReader) ReadRange(n int, yield func([]byte)) error {
	for r.buf.Length < n {
		if err := r.feedData(); err != nil { // Fetch more data from the network
			return err
		}
	}
	// Pass references block by block
	for _, block := range r.buf.Buffers {
		yield(block) // Zero-copy passing
	}
	// Recycle the processed blocks
	r.recycleFront()
	return nil
}

// Recycle memory blocks back to the pool
func (r *BufReader) Recycle() {
	if r.Allocator != nil {
		r.Allocator.Recycle() // Return all blocks to the pool
	}
}
```
## 3. Performance Validation
### 3.1 Test Design
**Real Network Simulation**: each read returns a random size (64-2048 bytes), simulating real network fluctuation; a condensed sketch of this reader follows the scenario list below
**Core Test Scenarios**:
1. **Concurrent Network Connection Reading** - Simulate 100+ concurrent connections
2. **GC Pressure Test** - Demonstrate long-term running differences
3. **Streaming Server** - Real business scenario (100 streams × forwarding)
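
A condensed sketch of that jittery reader, assuming `math/rand` for the random lengths (the full version, `mockNetworkReader`, lives in `pkg/util/buf_reader_benchmark_test.go`, shown later in this diff):

```go
// jitterReader returns a random 64-2048 byte payload per Read call,
// mimicking TCP's unpredictable per-read lengths.
type jitterReader struct{ rng *rand.Rand }

func (j *jitterReader) Read(p []byte) (int, error) {
	n := 64 + j.rng.Intn(2048-64+1) // random length in [64, 2048]
	if n > len(p) {
		n = len(p)
	}
	for i := range p[:n] {
		p[i] = byte(i) // deterministic filler bytes
	}
	return n, nil
}

// usage: reader := &jitterReader{rng: rand.New(rand.NewSource(42))}
```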
### 3.2 Performance Test Results
**Test Environment**: Apple M2 Pro, Go 1.23.0
#### GC Pressure Test (Core Comparison)
| Metric | bufio.Reader | BufReader | Improvement |
|--------|-------------|-----------|-------------|
| Operation Latency | 1874 ns/op | 112.7 ns/op | **16.6x faster** |
| Allocation Count | 5,576,659 | 3,918 | **99.93% reduction** |
| Per Operation | 2 allocs/op | 0 allocs/op | **Zero allocation** |
| Throughput | 2.8M ops/s | 45.7M ops/s | **16x improvement** |
#### Streaming Server Scenario
| Metric | bufio.Reader | BufReader | Improvement |
|--------|-------------|-----------|-------------|
| Operation Latency | 374.6 ns/op | 30.29 ns/op | **12.4x faster** |
| Memory Allocation | 79,508 MB | 601 MB | **99.2% reduction** |
| **GC Runs** | **134** | **2** | **98.5% reduction** ⭐ |
| Throughput | 10.1M ops/s | 117M ops/s | **11.6x improvement** |
#### Performance Visualization
```
📊 GC Runs Comparison (Core Advantage)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
bufio.Reader ████████████████████████████████████████████████████████████████ 134 runs
BufReader █ 2 runs ← 98.5% reduction!
📊 Total Memory Allocation
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
bufio.Reader ████████████████████████████████████████████████████████████████ 79 GB
BufReader █ 0.6 GB ← 99.2% reduction!
📊 Throughput Comparison
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
bufio.Reader █████ 10.1M ops/s
BufReader ████████████████████████████████████████████████████████ 117M ops/s
```
### 3.3 Why Non-Contiguous Memory Is So Fast
**Reason 1: Zero-Copy Passing**
```go
// bufio - Must copy
buf := make([]byte, 1024)
reader.Read(buf) // Copy to contiguous memory
// BufReader - Pass reference
reader.ReadRange(1024, func(chunk []byte) {
// chunk is original memory block, no copy
})
```
**Reason 2: Memory Block Reuse**
```
bufio: Allocate → Use → GC → Reallocate → ...
BufReader: Allocate → Use → Return to pool → Reuse from pool → ...
↑ Same memory block reused repeatedly, no GC
```
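
The same reuse cycle can be sketched with the standard library's `sync.Pool`; note this is only an analogy, since BufReader uses its own `ScalableMemoryAllocator` rather than `sync.Pool`:

```go
var blockPool = sync.Pool{
	New: func() any { return make([]byte, 2048) },
}

func readOnce(r io.Reader) error {
	block := blockPool.Get().([]byte) // reuse an existing block when available
	defer blockPool.Put(block)        // return it to the pool: no GC churn
	_, err := r.Read(block)
	return err
}
```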
**Reason 3: Multi-Subscriber Sharing**
```
Traditional: 1 packet → Copy 10 times → 10 subscribers
BufReader: 1 packet → Pass reference → 10 subscribers share
↑ Only 1 memory block, all 10 subscribers reference it
```
## 4. Usage Guide
### 4.1 Basic Usage
```go
func handleConnection(conn net.Conn) {
// Create BufReader
reader := util.NewBufReader(conn)
defer reader.Recycle() // Return all blocks to pool
// Zero-copy read and process
reader.ReadRange(4096, func(chunk []byte) {
// chunk is non-contiguous memory block
// Process directly, no copy needed
processChunk(chunk)
})
}
```
### 4.2 Real-World Use Cases
**Scenario 1: Protocol Parsing**
```go
// Parse FLV packet (header + data)
func parseFLV(reader *BufReader) {
// Read packet type (1 byte)
packetType, _ := reader.ReadByte()
// Read data size (3 bytes)
dataSize, _ := reader.ReadBE32(3)
// Skip timestamp etc (7 bytes)
reader.Skip(7)
// Zero-copy read data (may span multiple non-contiguous blocks)
reader.ReadRange(int(dataSize), func(chunk []byte) {
// chunk may be complete data or partial
// Parse block by block, no need to wait for complete data
parseDataChunk(packetType, chunk)
})
}
```
**Scenario 2: High-Concurrency Forwarding**
```go
// Read from one source, forward to multiple targets
func relay(source *BufReader, targets []io.Writer) {
	source.ReadRange(8192, func(chunk []byte) {
		// All targets share the same memory block
		for _, target := range targets {
			target.Write(chunk) // Zero-copy forwarding
		}
	})
}
```
**Scenario 3: Streaming Server**
```go
// Receive RTSP stream and distribute to subscribers
type Stream struct {
reader *BufReader
subscribers []*Subscriber
}
func (s *Stream) Process() {
s.reader.ReadRange(65536, func(frame []byte) {
// frame may be part of video frame (non-contiguous)
// Send directly to all subscribers
for _, sub := range s.subscribers {
sub.WriteFrame(frame) // Shared memory, zero-copy
}
})
}
```
### 4.3 Best Practices
**✅ Correct Usage**:
```go
// 1. Always recycle resources
reader := util.NewBufReader(conn)
defer reader.Recycle()
// 2. Process directly in callback, don't save references
reader.ReadRange(1024, func(data []byte) {
processData(data) // ✅ Process immediately
})
// 3. Explicitly copy when retention needed
var saved []byte
reader.ReadRange(1024, func(data []byte) {
saved = append(saved, data...) // ✅ Explicit copy
})
```
**❌ Wrong Usage**:
```go
// ❌ Don't save references
var dangling []byte
reader.ReadRange(1024, func(data []byte) {
dangling = data // Wrong: data will be recycled
})
// dangling is now a dangling reference!
// ❌ Don't forget to recycle
reader := util.NewBufReader(conn)
// Missing defer reader.Recycle()
// Memory blocks cannot be returned to pool
```
### 4.4 Performance Optimization Tips
**Tip 1: Batch Processing**
```go
// ✅ Optimized: read multiple packets at once
reader.ReadRange(65536, func(chunk []byte) {
	// One chunk may contain multiple length-prefixed packets
	for len(chunk) >= 4 {
		size := int(binary.BigEndian.Uint32(chunk[:4]))
		if len(chunk) < 4+size {
			break // incomplete packet: see the carry-over sketch below
		}
		packet := chunk[4 : 4+size]
		processPacket(packet)
		chunk = chunk[4+size:]
	}
})
```
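
If a length-prefixed packet straddles two yielded chunks, the leftover bytes must be carried across callbacks explicitly, because each chunk is recycled once the callback returns. A hedged sketch of that carry-over, assuming `encoding/binary` and the same 4-byte prefix as above (the `pending` buffer is illustrative):

```go
var pending []byte
reader.ReadRange(65536, func(chunk []byte) {
	data := chunk
	if len(pending) > 0 {
		data = append(pending, chunk...) // explicit copy: rejoin the saved tail
		pending = nil
	}
	for len(data) >= 4 {
		size := int(binary.BigEndian.Uint32(data[:4]))
		if len(data) < 4+size {
			break // still incomplete
		}
		processPacket(data[4 : 4+size])
		data = data[4+size:]
	}
	if len(data) > 0 {
		pending = append([]byte(nil), data...) // copy the tail: chunk is recycled
	}
})
```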
**Tip 2: Choose Appropriate Block Size**
```go
// Choose based on application scenario
const (
SmallPacket = 4 << 10 // 4KB - RTSP/HTTP
MediumPacket = 16 << 10 // 16KB - Audio streams
LargePacket = 64 << 10 // 64KB - Video streams
)
reader := util.NewBufReaderWithBufLen(conn, LargePacket)
```
## 5. Summary
### Core Innovation: Non-Contiguous Memory Buffering
BufReader's core is not "better buffering" but **fundamentally changing the memory layout model**:
```
Traditional thinking: Data must be in contiguous memory
BufReader: Data can be scattered across blocks, passed by reference
Result:
✓ Zero-copy: No need to reassemble into contiguous memory
✓ Zero allocation: Memory blocks reused from object pool
✓ Zero GC pressure: No temporary objects created
```
### Key Advantages
| Feature | Implementation | Performance Impact |
|---------|---------------|-------------------|
| **Zero-Copy** | Pass memory block references | No copy overhead |
| **Zero Allocation** | Object pool reuse | 98.5% GC reduction |
| **Multi-Subscriber Sharing** | Same block referenced multiple times | 10x+ memory savings |
| **Flexible Block Sizes** | Adapt to network fluctuations | No reassembly needed |
### Ideal Use Cases
| Scenario | Recommended | Reason |
|----------|------------|---------|
| **High-concurrency network servers** | BufReader ⭐ | 98% GC reduction, 10x+ throughput |
| **Stream forwarding** | BufReader ⭐ | Zero-copy multicast, memory sharing |
| **Protocol parsers** | BufReader ⭐ | Parse block by block, no complete packet needed |
| **Long-running services** | BufReader ⭐ | Stable system, minimal GC impact |
| Simple file reading | bufio.Reader | Standard library sufficient |
### Key Points
Remember when using BufReader:
1. **Accept non-contiguous data**: Process each block via callback
2. **Don't hold references**: Data recycled after callback returns
3. **Leverage ReadRange**: This is the core zero-copy API
4. **Must call Recycle()**: Return memory blocks to pool
### Performance Data
**Streaming Server (100 concurrent streams, continuous running)**:
```
1-hour running estimation:
bufio.Reader (Contiguous Memory):
- Allocates 2.8 TB memory
- Triggers 4,800 GCs
- Frequent system pauses
BufReader (Non-Contiguous Memory):
- Allocates 21 GB memory (133x less)
- Triggers 72 GCs (67x less)
- Almost no GC impact
```
### Testing and Documentation
**Run Tests**:
```bash
sh scripts/benchmark_bufreader.sh
```
## References
- [GoMem Project](https://github.com/langhuihui/gomem) - Memory object pool implementation
- [Monibuca v5](https://m7s.live) - Streaming media server
- Test Code: `pkg/util/buf_reader_benchmark_test.go`
---
**Core Idea**: Eliminate traditional contiguous buffer copying overhead through non-contiguous memory block slices and zero-copy reference passing, achieving high-performance network data processing.

View File

@@ -0,0 +1,694 @@
# BufReader: Zero-Copy Network Reading with Non-Contiguous Memory Buffers (Chinese edition)

(This file is the Chinese-language edition of doc/bufreader_analysis.md; its table of contents, code samples, benchmark figures, and conclusions mirror the English document above section for section, so the body is not repeated here.)

8
go.mod
View File

@@ -9,7 +9,6 @@ require (
github.com/beevik/etree v1.4.1
github.com/bluenviron/gohlslib v1.4.0
github.com/c0deltin/duckdb-driver v0.1.0
github.com/cilium/ebpf v0.15.0
github.com/cloudwego/goref v0.0.0-20240724113447-685d2a9523c8
github.com/deepch/vdk v0.0.27
github.com/disintegration/imaging v1.6.2
@@ -24,7 +23,7 @@ require (
github.com/icholy/digest v1.1.0
github.com/jinzhu/copier v0.4.0
github.com/kerberos-io/onvif v1.0.0
github.com/langhuihui/gotask v0.0.0-20250926063623-e8031a3bf4d2
github.com/langhuihui/gotask v1.0.1
github.com/mark3labs/mcp-go v0.27.0
github.com/mattn/go-sqlite3 v1.14.24
github.com/mcuadros/go-defaults v1.2.0
@@ -44,7 +43,6 @@ require (
github.com/stretchr/testify v1.10.0
github.com/tencentyun/cos-go-sdk-v5 v0.7.69
github.com/valyala/fasthttp v1.61.0
github.com/vishvananda/netlink v1.1.0
github.com/yapingcat/gomedia v0.0.0-20240601043430-920523f8e5c7
golang.org/x/image v0.22.0
golang.org/x/text v0.27.0
@@ -70,6 +68,7 @@ require (
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chromedp/cdproto v0.0.0-20240202021202-6d0b6a386732 // indirect
github.com/chromedp/sysutil v1.0.0 // indirect
github.com/cilium/ebpf v0.15.0 // indirect
github.com/clbanning/mxj v1.8.4 // indirect
github.com/clbanning/mxj/v2 v2.7.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
@@ -128,7 +127,6 @@ require (
github.com/valyala/gozstd v1.21.1 // indirect
github.com/valyala/histogram v1.2.0 // indirect
github.com/valyala/quicktemplate v1.8.0 // indirect
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df // indirect
github.com/wlynxg/anet v0.0.5 // indirect
github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
@@ -157,7 +155,7 @@ require (
golang.org/x/exp v0.0.0-20240716175740-e3f259677ff7
golang.org/x/mod v0.25.0 // indirect
golang.org/x/net v0.41.0
golang.org/x/sys v0.34.0
golang.org/x/sys v0.34.0 // indirect
golang.org/x/tools v0.34.0 // indirect
gopkg.in/yaml.v3 v3.0.1
)

9
go.sum
View File

@@ -160,8 +160,8 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/langhuihui/gomem v0.0.0-20251001011839-023923cf7683 h1:lITBgMb71ad6OUU9gycsheCw9PpMbXy3/QA8T0V0dVM=
github.com/langhuihui/gomem v0.0.0-20251001011839-023923cf7683/go.mod h1:BTPq1+4YUP4i7w8VHzs5AUIdn3T5gXjIUXbxgHW9TIQ=
github.com/langhuihui/gotask v0.0.0-20250926063623-e8031a3bf4d2 h1:PmB8c9hTONwHGfsKd2JTwXttNWJvb1az5Xvv7yHGL+Y=
github.com/langhuihui/gotask v0.0.0-20250926063623-e8031a3bf4d2/go.mod h1:2zNqwV8M1pHoO0b5JC/A37oYpdtXrfL10Qof9AvR5IE=
github.com/langhuihui/gotask v1.0.1 h1:X+xETKZQ+OdRO8pNYudNdJH4yZ2QJM6ehHQVjw1i5RY=
github.com/langhuihui/gotask v1.0.1/go.mod h1:2zNqwV8M1pHoO0b5JC/A37oYpdtXrfL10Qof9AvR5IE=
github.com/ledongthuc/pdf v0.0.0-20220302134840-0c2507a12d80 h1:6Yzfa6GP0rIo/kULo2bwGEkFvCePZ3qHDDTC3/J9Swo=
github.com/ledongthuc/pdf v0.0.0-20220302134840-0c2507a12d80/go.mod h1:imJHygn/1yfhB7XSJJKlFZKl/J+dCPAknuiaGOshXAs=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
@@ -312,10 +312,6 @@ github.com/valyala/histogram v1.2.0 h1:wyYGAZZt3CpwUiIb9AU/Zbllg1llXyrtApRS815OL
github.com/valyala/histogram v1.2.0/go.mod h1:Hb4kBwb4UxsaNbbbh+RRz8ZR6pdodR57tzWUS3BUzXY=
github.com/valyala/quicktemplate v1.8.0 h1:zU0tjbIqTRgKQzFY1L42zq0qR3eh4WoQQdIdqCysW5k=
github.com/valyala/quicktemplate v1.8.0/go.mod h1:qIqW8/igXt8fdrUln5kOSb+KWMaJ4Y8QUsfd1k6L2jM=
github.com/vishvananda/netlink v1.1.0 h1:1iyaYNBLmP6L0220aDnYQpo1QEV4t4hJ+xEEhhJH8j0=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df h1:OviZH7qLw/7ZovXvuNyL3XQl8UFofeikI1NW1Gypu7k=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/wlynxg/anet v0.0.5 h1:J3VJGi1gvo0JwZ/P1/Yc/8p63SoW98B5dHkYDmpgvvU=
github.com/wlynxg/anet v0.0.5/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
@@ -344,7 +340,6 @@ golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=

View File

@@ -122,14 +122,15 @@ type (
EventName string `json:"eventName" desc:"事件名称" gorm:"type:varchar(255);comment:事件名称"`
}
Record struct {
Mode RecordMode `json:"mode" desc:"事件类型,auto=连续录像模式event=事件录像模式" gorm:"type:varchar(255);comment:事件类型,auto=连续录像模式event=事件录像模式;default:'auto'"`
Type string `desc:"录制类型"` // Recording type: mp4, flv, hls, hlsv7
FilePath string `desc:"录制文件路径"` // Recording file path
Fragment time.Duration `desc:"分片时长"` // Fragment duration
RealTime bool `desc:"是否实时录制"` // Whether to record in real time
Append bool `desc:"是否追加录制"` // Whether to append to an existing recording
Event *RecordEvent `json:"event" desc:"事件录像配置" gorm:"-"` // Event recording config
Storage map[string]any `json:"storage" desc:"存储配置" gorm:"-"` // Storage config
Mode RecordMode `json:"mode" desc:"事件类型,auto=连续录像模式event=事件录像模式" gorm:"type:varchar(255);comment:事件类型,auto=连续录像模式event=事件录像模式;default:'auto'"`
Type string `desc:"录制类型"` // Recording type: mp4, flv, hls, hlsv7
FilePath string `desc:"录制文件路径"` // Recording file path
Fragment time.Duration `desc:"分片时长"` // Fragment duration
RealTime bool `desc:"是否实时录制"` // Whether to record in real time
Append bool `desc:"是否追加录制"` // Whether to append to an existing recording
Event *RecordEvent `json:"event" desc:"事件录像配置" gorm:"-"` // Event recording config
Storage map[string]any `json:"storage" desc:"存储配置" gorm:"-"` // Storage config
SecondaryFilePath string `json:"secondaryFilePath" desc:"录制文件次级路径" gorm:"-"`
}
TransfromOutput struct {
Target string `desc:"转码目标"` // 转码目标

View File

@@ -61,9 +61,7 @@ func (A *Mpeg2Audio) Demux() (err error) {
}
func (A *Mpeg2Audio) Mux(frame *pkg.Sample) (err error) {
if A.ICodecCtx == nil {
A.ICodecCtx = frame.GetBase()
}
A.ICodecCtx = frame.GetBase()
raw := frame.Raw.(*pkg.AudioData)
aacCtx, ok := A.ICodecCtx.(*codec.AACCtx)
if ok {

View File

@@ -161,9 +161,7 @@ func (a *AnnexB) Demux() (err error) {
}
func (a *AnnexB) Mux(fromBase *pkg.Sample) (err error) {
if a.ICodecCtx == nil {
a.ICodecCtx = fromBase.GetBase()
}
a.ICodecCtx = fromBase.GetBase()
a.InitRecycleIndexes(0)
delimiter2 := codec.NALU_Delimiter2[:]
a.PushOne(delimiter2)

View File

@@ -54,7 +54,8 @@ func (s *LocalStorage) CreateFile(ctx context.Context, path string) (File, error
return nil, fmt.Errorf("failed to create directory: %w", err)
}
file, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)
// Use O_RDWR instead of O_WRONLY: some paths (e.g. MP4 writeTrailer) need to read the file contents back
file, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR|os.O_TRUNC, 0644)
if err != nil {
return nil, fmt.Errorf("failed to create file: %w", err)
}

View File

@@ -6,6 +6,7 @@ import (
"net/textproto"
"strings"
"time"
. "github.com/langhuihui/gomem"
)
@@ -18,13 +19,24 @@ type BufReader struct {
BufLen int
Mouth chan []byte
feedData func() error
timeout time.Duration
}
func (r *BufReader) SetTimeout(timeout time.Duration) {
r.timeout = timeout
}
func NewBufReaderWithBufLen(reader io.Reader, bufLen int) (r *BufReader) {
conn, _ := reader.(net.Conn)
r = &BufReader{
Allocator: NewScalableMemoryAllocator(bufLen),
BufLen: bufLen,
feedData: func() error {
if conn != nil && r.timeout > 0 {
if err := conn.SetReadDeadline(time.Now().Add(r.timeout)); err != nil {
return err
}
}
buf, err := r.Allocator.Read(reader, r.BufLen)
if err != nil {
return err
@@ -42,34 +54,6 @@ func NewBufReaderWithBufLen(reader io.Reader, bufLen int) (r *BufReader) {
return
}
// NewBufReaderWithTimeout creates a BufReader with the given read timeout
func NewBufReaderWithTimeout(conn net.Conn, timeout time.Duration) (r *BufReader) {
r = &BufReader{
Allocator: NewScalableMemoryAllocator(defaultBufSize),
BufLen: defaultBufSize,
feedData: func() error {
// Set the read deadline
if conn != nil && timeout > 0 {
if err := conn.SetReadDeadline(time.Now().Add(timeout)); err != nil {
return err
}
}
buf, err := r.Allocator.Read(conn, r.BufLen)
if err != nil {
return err
}
n := len(buf)
r.totalRead += n
r.buf.Buffers = append(r.buf.Buffers, buf)
r.buf.Size += n
r.buf.Length += n
return nil
},
}
r.buf.Memory = &Memory{}
return
}
func NewBufReaderBuffersChan(feedChan chan net.Buffers) (r *BufReader) {
r = &BufReader{
feedData: func() error {

View File

@@ -0,0 +1,408 @@
package util
import (
"bufio"
"io"
"math/rand"
"runtime"
"testing"
)
// mockNetworkReader simulates a real network data source.
//
// In real network reads, the number of bytes returned by each Read() call
// is unpredictable and depends on many factors:
// - TCP receive window size
// - network latency and bandwidth
// - OS buffer state
// - network congestion
//
// This mock reader returns a random amount of data on each call to mimic
// that behavior, keeping the benchmarks close to real workloads.
type mockNetworkReader struct {
data []byte
offset int
rng *rand.Rand
// minChunk and maxChunk bound the size of each returned chunk
minChunk int
maxChunk int
}
func (m *mockNetworkReader) Read(p []byte) (n int, err error) {
if m.offset >= len(m.data) {
m.offset = 0 // wrap around for cyclic reads
}
// Compute the maximum number of bytes this call can return
remaining := len(m.data) - m.offset
maxRead := len(p)
if remaining < maxRead {
maxRead = remaining
}
// Return a random length between minChunk and min(maxChunk, maxRead)
chunkSize := m.minChunk
if m.maxChunk > m.minChunk && maxRead > m.minChunk {
maxPossible := m.maxChunk
if maxRead < maxPossible {
maxPossible = maxRead
}
chunkSize = m.minChunk + m.rng.Intn(maxPossible-m.minChunk+1)
}
if chunkSize > maxRead {
chunkSize = maxRead
}
n = copy(p[:chunkSize], m.data[m.offset:m.offset+chunkSize])
m.offset += n
return n, nil
}
// newMockNetworkReader creates a reader that simulates a real network:
// each Read returns a random length between minChunk and maxChunk.
func newMockNetworkReader(size int, minChunk, maxChunk int) *mockNetworkReader {
data := make([]byte, size)
for i := range data {
data[i] = byte(i % 256)
}
return &mockNetworkReader{
data: data,
rng: rand.New(rand.NewSource(42)), // fixed seed for reproducibility
minChunk: minChunk,
maxChunk: maxChunk,
}
}
// newMockNetworkReaderDefault creates a mock network reader with defaults:
// each Read returns between 64 and 2048 random bytes.
func newMockNetworkReaderDefault(size int) *mockNetworkReader {
return newMockNetworkReader(size, 64, 2048)
}
// ============================================================
// Unit test: verify mockNetworkReader behavior
// ============================================================
// TestMockNetworkReader_RandomChunks verifies the random-length reads
func TestMockNetworkReader_RandomChunks(t *testing.T) {
reader := newMockNetworkReader(10000, 100, 500)
buf := make([]byte, 1000)
// Read repeatedly and verify each returned length is within range
for i := 0; i < 10; i++ {
n, err := reader.Read(buf)
if err != nil {
t.Fatalf("read failed: %v", err)
}
if n < 100 || n > 500 {
t.Errorf("read %d returned %d bytes, expected within [100, 500]", i, n)
}
}
}
// ============================================================
// Core benchmarks: simulating real network scenarios
// ============================================================
// BenchmarkConcurrentNetworkRead_Bufio models concurrent connection handling with bufio.Reader.
// The test simulates many concurrent connections continuously reading and processing network data.
// bufio.Reader allocates a new buffer per packet, producing large amounts of temporary memory.
func BenchmarkConcurrentNetworkRead_Bufio(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
// each goroutine represents one network connection
reader := bufio.NewReaderSize(newMockNetworkReaderDefault(10*1024*1024), 4096)
for pb.Next() {
// Simulate reading and processing one network packet.
// A fresh buffer is allocated every time (a common pattern in real code).
buf := make([]byte, 1024) // allocates 1KB per iteration - creates GC pressure
n, err := reader.Read(buf)
if err != nil {
b.Fatal(err)
}
// Simulate processing the data (compute a checksum)
var sum int
for i := 0; i < n; i++ {
sum += int(buf[i])
}
_ = sum
}
})
}
// BenchmarkConcurrentNetworkRead_BufReader models concurrent connection handling with BufReader.
// BufReader's zero-copy path reuses pooled memory and avoids per-packet allocation.
func BenchmarkConcurrentNetworkRead_BufReader(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
// each goroutine represents one network connection
reader := NewBufReader(newMockNetworkReaderDefault(10 * 1024 * 1024))
defer reader.Recycle()
for pb.Next() {
// Zero-copy ReadRange: no destination buffer to allocate
var sum int
err := reader.ReadRange(1024, func(data []byte) {
// Process the raw data directly, with no allocation
for _, b := range data {
sum += int(b)
}
})
if err != nil {
b.Fatal(err)
}
_ = sum
}
})
}
// BenchmarkConcurrentProtocolParsing_Bufio models concurrent protocol parsing with bufio.Reader,
// simulating a streaming server parsing packets from many concurrent streams.
func BenchmarkConcurrentProtocolParsing_Bufio(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
reader := bufio.NewReaderSize(newMockNetworkReaderDefault(10*1024*1024), 4096)
for pb.Next() {
// 读取包头4字节长度
header := make([]byte, 4) // 分配 1
_, err := io.ReadFull(reader, header)
if err != nil {
b.Fatal(err)
}
// 计算数据包大小256-1024 字节)
size := 256 + int(header[3])%768
// Read the packet body
packet := make([]byte, size) // allocation 2
_, err = io.ReadFull(reader, packet)
if err != nil {
b.Fatal(err)
}
// Simulate processing the packet
_ = packet
}
})
}
// BenchmarkConcurrentProtocolParsing_BufReader models concurrent protocol parsing with BufReader
func BenchmarkConcurrentProtocolParsing_BufReader(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
reader := NewBufReader(newMockNetworkReaderDefault(10 * 1024 * 1024))
defer reader.Recycle()
for pb.Next() {
// Read the packet header
size, err := reader.ReadBE32(4)
if err != nil {
b.Fatal(err)
}
// Compute the packet size
packetSize := 256 + int(size)%768
// Zero-copy read and process
err = reader.ReadRange(packetSize, func(data []byte) {
// Process directly, nothing to allocate
_ = data
})
if err != nil {
b.Fatal(err)
}
}
})
}
// BenchmarkHighFrequencyReads_Bufio benchmarks high-frequency small reads with bufio.Reader,
// modeling streams of small packets (e.g. a 30fps video stream).
func BenchmarkHighFrequencyReads_Bufio(b *testing.B) {
reader := bufio.NewReaderSize(newMockNetworkReaderDefault(10*1024*1024), 4096)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
// 每次读取小数据包128 字节)
buf := make([]byte, 128) // 频繁分配小对象
_, err := reader.Read(buf)
if err != nil {
b.Fatal(err)
}
_ = buf
}
}
// BenchmarkHighFrequencyReads_BufReader benchmarks high-frequency small reads with BufReader
func BenchmarkHighFrequencyReads_BufReader(b *testing.B) {
reader := NewBufReader(newMockNetworkReaderDefault(10 * 1024 * 1024))
defer reader.Recycle()
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
// Zero-copy read
err := reader.ReadRange(128, func(data []byte) {
_ = data
})
if err != nil {
b.Fatal(err)
}
}
}
// ============================================================
// GC pressure tests: show GC impact under sustained load
// ============================================================
// BenchmarkGCPressure_Bufio shows bufio.Reader's GC pressure when running continuously.
// The test generates large numbers of temporary allocations, triggering frequent GC.
func BenchmarkGCPressure_Bufio(b *testing.B) {
var beforeGC runtime.MemStats
runtime.ReadMemStats(&beforeGC)
// Simulate 10 concurrent connections continuously processing data
b.SetParallelism(10)
b.RunParallel(func(pb *testing.PB) {
reader := bufio.NewReaderSize(newMockNetworkReaderDefault(100*1024*1024), 4096)
for pb.Next() {
// Simulate handling one packet: read + process + temporary allocation
buf := make([]byte, 512) // allocate 512 bytes each time
n, err := reader.Read(buf)
if err != nil {
b.Fatal(err)
}
// Simulate data processing (which may need an extra allocation)
processed := make([]byte, n) // allocate again
copy(processed, buf[:n])
// Simulate business logic
var sum int64
for _, v := range processed {
sum += int64(v)
}
_ = sum
}
})
var afterGC runtime.MemStats
runtime.ReadMemStats(&afterGC)
// Report GC statistics
b.ReportMetric(float64(afterGC.NumGC-beforeGC.NumGC), "gc-runs")
b.ReportMetric(float64(afterGC.TotalAlloc-beforeGC.TotalAlloc)/1024/1024, "MB-alloc")
b.ReportMetric(float64(afterGC.Mallocs-beforeGC.Mallocs), "mallocs")
}
// BenchmarkGCPressure_BufReader shows how BufReader lowers GC pressure via memory reuse.
// Zero-copy plus pool reuse creates almost no temporary objects.
func BenchmarkGCPressure_BufReader(b *testing.B) {
var beforeGC runtime.MemStats
runtime.ReadMemStats(&beforeGC)
b.SetParallelism(10)
b.RunParallel(func(pb *testing.PB) {
reader := NewBufReader(newMockNetworkReaderDefault(100 * 1024 * 1024))
defer reader.Recycle()
for pb.Next() {
// Zero-copy processing, no temporary allocations
var sum int64
err := reader.ReadRange(512, func(data []byte) {
// Work directly on the original memory, no copy needed
for _, v := range data {
sum += int64(v)
}
})
if err != nil {
b.Fatal(err)
}
_ = sum
}
})
var afterGC runtime.MemStats
runtime.ReadMemStats(&afterGC)
// Report GC statistics
b.ReportMetric(float64(afterGC.NumGC-beforeGC.NumGC), "gc-runs")
b.ReportMetric(float64(afterGC.TotalAlloc-beforeGC.TotalAlloc)/1024/1024, "MB-alloc")
b.ReportMetric(float64(afterGC.Mallocs-beforeGC.Mallocs), "mallocs")
}
// BenchmarkStreamingServer_Bufio models a streaming-server scenario with bufio.Reader:
// 100 concurrent connections, each continuously reading and forwarding data.
func BenchmarkStreamingServer_Bufio(b *testing.B) {
var beforeGC runtime.MemStats
runtime.ReadMemStats(&beforeGC)
b.RunParallel(func(pb *testing.PB) {
reader := bufio.NewReaderSize(newMockNetworkReaderDefault(50*1024*1024), 8192)
frameNum := 0
for pb.Next() {
// 读取一帧数据1KB-4KB 之间变化)
frameSize := 1024 + (frameNum%3)*1024
frameNum++
frame := make([]byte, frameSize)
_, err := io.ReadFull(reader, frame)
if err != nil {
b.Fatal(err)
}
// Simulate forwarding to multiple subscribers (each needs a copy)
for i := 0; i < 3; i++ {
subscriber := make([]byte, len(frame))
copy(subscriber, frame)
_ = subscriber
}
}
})
var afterGC runtime.MemStats
runtime.ReadMemStats(&afterGC)
gcRuns := afterGC.NumGC - beforeGC.NumGC
totalAlloc := float64(afterGC.TotalAlloc-beforeGC.TotalAlloc) / 1024 / 1024
b.ReportMetric(float64(gcRuns), "gc-runs")
b.ReportMetric(totalAlloc, "MB-alloc")
}
// BenchmarkStreamingServer_BufReader models the streaming-server scenario with BufReader
func BenchmarkStreamingServer_BufReader(b *testing.B) {
var beforeGC runtime.MemStats
runtime.ReadMemStats(&beforeGC)
b.RunParallel(func(pb *testing.PB) {
reader := NewBufReader(newMockNetworkReaderDefault(50 * 1024 * 1024))
defer reader.Recycle()
for pb.Next() {
// Zero-copy read
err := reader.ReadRange(1024+1024, func(frame []byte) {
// Use the original data directly, no copy needed.
// Simulated forwarding (real code could use refcounting or shared memory).
for i := 0; i < 3; i++ {
_ = frame
}
})
if err != nil {
b.Fatal(err)
}
}
})
var afterGC runtime.MemStats
runtime.ReadMemStats(&afterGC)
gcRuns := afterGC.NumGC - beforeGC.NumGC
totalAlloc := float64(afterGC.TotalAlloc-beforeGC.TotalAlloc) / 1024 / 1024
b.ReportMetric(float64(gcRuns), "gc-runs")
b.ReportMetric(totalAlloc, "MB-alloc")
}

View File

@@ -1,935 +0,0 @@
//go:build enable_xdp
package util
import (
"fmt"
"github.com/cilium/ebpf"
"github.com/cilium/ebpf/asm"
"github.com/vishvananda/netlink"
"golang.org/x/sys/unix"
"reflect"
"syscall"
"time"
"unsafe"
)
// DefaultSocketOptions is the default SocketOptions used by an xdp.Socket created without specifying options.
var DefaultSocketOptions = SocketOptions{
NumFrames: 128,
FrameSize: 2048,
FillRingNumDescs: 64,
CompletionRingNumDescs: 64,
RxRingNumDescs: 64,
TxRingNumDescs: 64,
}
type umemRing struct {
Producer *uint32
Consumer *uint32
Descs []uint64
}
type rxTxRing struct {
Producer *uint32
Consumer *uint32
Descs []Desc
}
// A Socket is an implementation of the AF_XDP Linux socket type for reading packets from a device.
type Socket struct {
fd int
umem []byte
fillRing umemRing
rxRing rxTxRing
txRing rxTxRing
completionRing umemRing
qidconfMap *ebpf.Map
xsksMap *ebpf.Map
program *ebpf.Program
ifindex int
numTransmitted int
numFilled int
freeRXDescs, freeTXDescs []bool
options SocketOptions
rxDescs []Desc
getTXDescs, getRXDescs []Desc
}
// SocketOptions are configuration settings used to bind an XDP socket.
type SocketOptions struct {
NumFrames int
FrameSize int
FillRingNumDescs int
CompletionRingNumDescs int
RxRingNumDescs int
TxRingNumDescs int
}
// Desc represents an XDP Rx/Tx descriptor.
type Desc unix.XDPDesc
// Stats contains various counters of the XDP socket, such as numbers of
// sent/received frames.
type Stats struct {
// Filled is the number of items consumed thus far by the Linux kernel
// from the Fill ring queue.
Filled uint64
// Received is the number of items consumed thus far by the user of
// this package from the Rx ring queue.
Received uint64
// Transmitted is the number of items consumed thus far by the Linux
// kernel from the Tx ring queue.
Transmitted uint64
// Completed is the number of items consumed thus far by the user of
// this package from the Completion ring queue.
Completed uint64
// KernelStats contains the in-kernel statistics of the corresponding
// XDP socket, such as the number of invalid descriptors that were
// submitted into Fill or Tx ring queues.
KernelStats unix.XDPStatistics
}
// DefaultSocketFlags are the flags which are passed to bind(2) system call
// when the XDP socket is bound, possible values include unix.XDP_SHARED_UMEM,
// unix.XDP_COPY, unix.XDP_ZEROCOPY.
var DefaultSocketFlags uint16
// DefaultXdpFlags are the flags which are passed when the XDP program is
// attached to the network link, possible values include
// unix.XDP_FLAGS_DRV_MODE, unix.XDP_FLAGS_HW_MODE, unix.XDP_FLAGS_SKB_MODE,
// unix.XDP_FLAGS_UPDATE_IF_NOEXIST.
var DefaultXdpFlags uint32
func init() {
DefaultSocketFlags = 0
DefaultXdpFlags = 0
}
// NewSocket returns a new XDP socket attached to the network interface which
// has the given interface, and attached to the given queue on that network
// interface.
func NewSocket(Ifindex int, QueueID int, options *SocketOptions) (xsk *Socket, err error) {
if options == nil {
options = &DefaultSocketOptions
}
xsk = &Socket{fd: -1, ifindex: Ifindex, options: *options}
xsk.fd, err = syscall.Socket(unix.AF_XDP, syscall.SOCK_RAW, 0)
if err != nil {
return nil, fmt.Errorf("syscall.Socket failed: %v", err)
}
xsk.umem, err = syscall.Mmap(-1, 0, options.NumFrames*options.FrameSize,
syscall.PROT_READ|syscall.PROT_WRITE,
syscall.MAP_PRIVATE|syscall.MAP_ANONYMOUS|syscall.MAP_POPULATE)
if err != nil {
xsk.Close()
return nil, fmt.Errorf("syscall.Mmap failed: %v", err)
}
xdpUmemReg := unix.XDPUmemReg{
Addr: uint64(uintptr(unsafe.Pointer(&xsk.umem[0]))),
Len: uint64(len(xsk.umem)),
Size: uint32(options.FrameSize),
Headroom: 0,
}
var errno syscall.Errno
var rc uintptr
rc, _, errno = unix.Syscall6(syscall.SYS_SETSOCKOPT, uintptr(xsk.fd),
unix.SOL_XDP, unix.XDP_UMEM_REG,
uintptr(unsafe.Pointer(&xdpUmemReg)),
unsafe.Sizeof(xdpUmemReg), 0)
if rc != 0 {
xsk.Close()
return nil, fmt.Errorf("unix.SetsockoptUint64 XDP_UMEM_REG failed: %v", errno)
}
err = syscall.SetsockoptInt(xsk.fd, unix.SOL_XDP, unix.XDP_UMEM_FILL_RING,
options.FillRingNumDescs)
if err != nil {
xsk.Close()
return nil, fmt.Errorf("unix.SetsockoptUint64 XDP_UMEM_FILL_RING failed: %v", err)
}
err = unix.SetsockoptInt(xsk.fd, unix.SOL_XDP, unix.XDP_UMEM_COMPLETION_RING,
options.CompletionRingNumDescs)
if err != nil {
xsk.Close()
return nil, fmt.Errorf("unix.SetsockoptUint64 XDP_UMEM_COMPLETION_RING failed: %v", err)
}
var rxRing bool
if options.RxRingNumDescs > 0 {
err = unix.SetsockoptInt(xsk.fd, unix.SOL_XDP, unix.XDP_RX_RING,
options.RxRingNumDescs)
if err != nil {
xsk.Close()
return nil, fmt.Errorf("unix.SetsockoptUint64 XDP_RX_RING failed: %v", err)
}
rxRing = true
}
var txRing bool
if options.TxRingNumDescs > 0 {
err = unix.SetsockoptInt(xsk.fd, unix.SOL_XDP, unix.XDP_TX_RING,
options.TxRingNumDescs)
if err != nil {
xsk.Close()
return nil, fmt.Errorf("unix.SetsockoptUint64 XDP_TX_RING failed: %v", err)
}
txRing = true
}
if !(rxRing || txRing) {
return nil, fmt.Errorf("RxRingNumDescs and TxRingNumDescs cannot both be set to zero")
}
var offsets unix.XDPMmapOffsets
var vallen uint32
vallen = uint32(unsafe.Sizeof(offsets))
rc, _, errno = unix.Syscall6(syscall.SYS_GETSOCKOPT, uintptr(xsk.fd),
unix.SOL_XDP, unix.XDP_MMAP_OFFSETS,
uintptr(unsafe.Pointer(&offsets)),
uintptr(unsafe.Pointer(&vallen)), 0)
if rc != 0 {
xsk.Close()
return nil, fmt.Errorf("unix.Syscall6 getsockopt XDP_MMAP_OFFSETS failed: %v", errno)
}
fillRingSlice, err := syscall.Mmap(xsk.fd, unix.XDP_UMEM_PGOFF_FILL_RING,
int(offsets.Fr.Desc+uint64(options.FillRingNumDescs)*uint64(unsafe.Sizeof(uint64(0)))),
syscall.PROT_READ|syscall.PROT_WRITE,
syscall.MAP_SHARED|syscall.MAP_POPULATE)
if err != nil {
xsk.Close()
return nil, fmt.Errorf("syscall.Mmap XDP_UMEM_PGOFF_FILL_RING failed: %v", err)
}
xsk.fillRing.Producer = (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&fillRingSlice[0])) + uintptr(offsets.Fr.Producer)))
xsk.fillRing.Consumer = (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&fillRingSlice[0])) + uintptr(offsets.Fr.Consumer)))
sh := (*reflect.SliceHeader)(unsafe.Pointer(&xsk.fillRing.Descs))
sh.Data = uintptr(unsafe.Pointer(&fillRingSlice[0])) + uintptr(offsets.Fr.Desc)
sh.Len = options.FillRingNumDescs
sh.Cap = options.FillRingNumDescs
completionRingSlice, err := syscall.Mmap(xsk.fd, unix.XDP_UMEM_PGOFF_COMPLETION_RING,
int(offsets.Cr.Desc+uint64(options.CompletionRingNumDescs)*uint64(unsafe.Sizeof(uint64(0)))),
syscall.PROT_READ|syscall.PROT_WRITE,
syscall.MAP_SHARED|syscall.MAP_POPULATE)
if err != nil {
xsk.Close()
return nil, fmt.Errorf("syscall.Mmap XDP_UMEM_PGOFF_COMPLETION_RING failed: %v", err)
}
xsk.completionRing.Producer = (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&completionRingSlice[0])) + uintptr(offsets.Cr.Producer)))
xsk.completionRing.Consumer = (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&completionRingSlice[0])) + uintptr(offsets.Cr.Consumer)))
sh = (*reflect.SliceHeader)(unsafe.Pointer(&xsk.completionRing.Descs))
sh.Data = uintptr(unsafe.Pointer(&completionRingSlice[0])) + uintptr(offsets.Cr.Desc)
sh.Len = options.CompletionRingNumDescs
sh.Cap = options.CompletionRingNumDescs
if rxRing {
rxRingSlice, err := syscall.Mmap(xsk.fd, unix.XDP_PGOFF_RX_RING,
int(offsets.Rx.Desc+uint64(options.RxRingNumDescs)*uint64(unsafe.Sizeof(Desc{}))),
syscall.PROT_READ|syscall.PROT_WRITE,
syscall.MAP_SHARED|syscall.MAP_POPULATE)
if err != nil {
xsk.Close()
return nil, fmt.Errorf("syscall.Mmap XDP_PGOFF_RX_RING failed: %v", err)
}
xsk.rxRing.Producer = (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&rxRingSlice[0])) + uintptr(offsets.Rx.Producer)))
xsk.rxRing.Consumer = (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&rxRingSlice[0])) + uintptr(offsets.Rx.Consumer)))
sh = (*reflect.SliceHeader)(unsafe.Pointer(&xsk.rxRing.Descs))
sh.Data = uintptr(unsafe.Pointer(&rxRingSlice[0])) + uintptr(offsets.Rx.Desc)
sh.Len = options.RxRingNumDescs
sh.Cap = options.RxRingNumDescs
xsk.rxDescs = make([]Desc, 0, options.RxRingNumDescs)
}
if txRing {
txRingSlice, err := syscall.Mmap(xsk.fd, unix.XDP_PGOFF_TX_RING,
int(offsets.Tx.Desc+uint64(options.TxRingNumDescs)*uint64(unsafe.Sizeof(Desc{}))),
syscall.PROT_READ|syscall.PROT_WRITE,
syscall.MAP_SHARED|syscall.MAP_POPULATE)
if err != nil {
xsk.Close()
return nil, fmt.Errorf("syscall.Mmap XDP_PGOFF_TX_RING failed: %v", err)
}
xsk.txRing.Producer = (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&txRingSlice[0])) + uintptr(offsets.Tx.Producer)))
xsk.txRing.Consumer = (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&txRingSlice[0])) + uintptr(offsets.Tx.Consumer)))
sh = (*reflect.SliceHeader)(unsafe.Pointer(&xsk.txRing.Descs))
sh.Data = uintptr(unsafe.Pointer(&txRingSlice[0])) + uintptr(offsets.Tx.Desc)
sh.Len = options.TxRingNumDescs
sh.Cap = options.TxRingNumDescs
}
sa := unix.SockaddrXDP{
Flags: DefaultSocketFlags,
Ifindex: uint32(Ifindex),
QueueID: uint32(QueueID),
}
if err = unix.Bind(xsk.fd, &sa); err != nil {
xsk.Close()
return nil, fmt.Errorf("syscall.Bind SockaddrXDP failed: %v", err)
}
xsk.freeRXDescs = make([]bool, options.NumFrames)
xsk.freeTXDescs = make([]bool, options.NumFrames)
for i := range xsk.freeRXDescs {
xsk.freeRXDescs[i] = true
}
for i := range xsk.freeTXDescs {
xsk.freeTXDescs[i] = true
}
xsk.getTXDescs = make([]Desc, 0, options.CompletionRingNumDescs)
xsk.getRXDescs = make([]Desc, 0, options.FillRingNumDescs)
return xsk, nil
}
// Fill submits the given descriptors to be filled (i.e. to receive frames into).
// It returns how many descriptors were actually put onto the Fill ring queue.
// The descriptors can be acquired either by calling the GetDescs() method or
// by calling the Receive() method.
func (xsk *Socket) Fill(descs []Desc) int {
numFreeSlots := xsk.NumFreeFillSlots()
if numFreeSlots < len(descs) {
descs = descs[:numFreeSlots]
}
prod := *xsk.fillRing.Producer
for _, desc := range descs {
xsk.fillRing.Descs[prod&uint32(xsk.options.FillRingNumDescs-1)] = desc.Addr
prod++
xsk.freeRXDescs[desc.Addr/uint64(xsk.options.FrameSize)] = false
}
//fencer.SFence()
*xsk.fillRing.Producer = prod
xsk.numFilled += len(descs)
return len(descs)
}
// Receive returns the descriptors which were filled, i.e. the descriptors
// into which frames were received.
func (xsk *Socket) Receive(num int) []Desc {
numAvailable := xsk.NumReceived()
if num > int(numAvailable) {
num = int(numAvailable)
}
descs := xsk.rxDescs[:0]
cons := *xsk.rxRing.Consumer
//fencer.LFence()
for i := 0; i < num; i++ {
descs = append(descs, xsk.rxRing.Descs[cons&uint32(xsk.options.RxRingNumDescs-1)])
cons++
xsk.freeRXDescs[descs[i].Addr/uint64(xsk.options.FrameSize)] = true
}
//fencer.MFence()
*xsk.rxRing.Consumer = cons
xsk.numFilled -= len(descs)
return descs
}
// Transmit submits the given descriptors to be sent out. It returns how many
// descriptors were actually pushed onto the Tx ring queue.
// The descriptors can be acquired either by calling the GetDescs() method or
// by calling the Receive() method.
func (xsk *Socket) Transmit(descs []Desc) (numSubmitted int) {
numFreeSlots := xsk.NumFreeTxSlots()
if len(descs) > numFreeSlots {
descs = descs[:numFreeSlots]
}
prod := *xsk.txRing.Producer
for _, desc := range descs {
xsk.txRing.Descs[prod&uint32(xsk.options.TxRingNumDescs-1)] = desc
prod++
xsk.freeTXDescs[desc.Addr/uint64(xsk.options.FrameSize)] = false
}
//fencer.SFence()
*xsk.txRing.Producer = prod
xsk.numTransmitted += len(descs)
numSubmitted = len(descs)
var rc uintptr
var errno syscall.Errno
for {
rc, _, errno = unix.Syscall6(syscall.SYS_SENDTO,
uintptr(xsk.fd),
0, 0,
uintptr(unix.MSG_DONTWAIT),
0, 0)
if rc != 0 {
switch errno {
case unix.EINTR:
// try again
case unix.EAGAIN:
return
case unix.EBUSY: // "completed but not sent"
return
default:
panic(fmt.Errorf("sendto failed with rc=%d and errno=%d", rc, errno))
}
} else {
break
}
}
return
}
// FD returns the file descriptor associated with this xdp.Socket which can be
// used e.g. to do polling.
func (xsk *Socket) FD() int {
return xsk.fd
}
// Poll blocks until the kernel informs us that it has either received
// or completed (i.e. actually sent) some frames that were previously submitted
// using the Fill() or Transmit() methods.
// The numReceived return value can be used as the argument for a subsequent
// Receive() method call.
func (xsk *Socket) Poll(timeout int) (numReceived int, numCompleted int, err error) {
var events int16
if xsk.numFilled > 0 {
events |= unix.POLLIN
}
if xsk.numTransmitted > 0 {
events |= unix.POLLOUT
}
if events == 0 {
return
}
var pfds [1]unix.PollFd
pfds[0].Fd = int32(xsk.fd)
pfds[0].Events = events
for err = unix.EINTR; err == unix.EINTR; {
_, err = unix.Poll(pfds[:], timeout)
}
if err != nil {
return 0, 0, err
}
numReceived = xsk.NumReceived()
if numCompleted = xsk.NumCompleted(); numCompleted > 0 {
xsk.Complete(numCompleted)
}
return
}
// GetDescs returns up to n descriptors which are not currently in use.
// If rx is true, descriptors are returned from the first half of the UMEM, otherwise from the second half.
func (xsk *Socket) GetDescs(n int, rx bool) []Desc {
if n > cap(xsk.getRXDescs) {
n = cap(xsk.getRXDescs)
}
if !rx {
if n > cap(xsk.getTXDescs) {
n = cap(xsk.getTXDescs)
}
}
// numOfUMEMChunks := len(xsk.freeRXDescs) / 2
// if n > numOfUMEMChunks {
// n = numOfUMEMChunks
// }
descs := xsk.getRXDescs[:0]
j := 0
start := 0
end := cap(xsk.getRXDescs)
freeList := xsk.freeRXDescs
if !rx {
start = cap(xsk.getRXDescs)
end = len(xsk.freeTXDescs)
freeList = xsk.freeTXDescs
descs = xsk.getTXDescs[:0]
}
for i := start; i < end && j < n; i++ {
if freeList[i] {
descs = append(descs, Desc{
Addr: uint64(i) * uint64(xsk.options.FrameSize),
Len: uint32(xsk.options.FrameSize),
})
j++
}
}
return descs
}
// GetFrame returns the buffer containing the frame corresponding to the given
// descriptor. The returned byte slice points to the actual buffer of the
// corresponding frame, so modifying this slice modifies the frame contents.
func (xsk *Socket) GetFrame(d Desc) []byte {
return xsk.umem[d.Addr : d.Addr+uint64(d.Len)]
}
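
Taken together, GetDescs, Fill, Poll, Receive, and GetFrame form the receive path. The sketch below is a minimal blocking receive loop built only from the methods above; it assumes the Socket has already been created and bound, and it elides all error handling except Poll's:

```go
// receiveLoop is a minimal sketch of the Fill -> Poll -> Receive cycle.
// It assumes xsk was created and bound elsewhere in this package.
func receiveLoop(xsk *Socket, handle func(frame []byte)) error {
	// Hand every currently free RX descriptor to the kernel to be filled.
	xsk.Fill(xsk.GetDescs(xsk.NumFreeFillSlots(), true))
	for {
		// Block until the kernel has received frames for us.
		numRx, _, err := xsk.Poll(-1)
		if err != nil {
			return err
		}
		// Consume the filled descriptors and process each frame in place;
		// GetFrame returns a view into the UMEM, so no copy is made.
		for _, desc := range xsk.Receive(numRx) {
			handle(xsk.GetFrame(desc))
		}
		// Recycle the now-free descriptors back onto the Fill ring.
		xsk.Fill(xsk.GetDescs(xsk.NumFreeFillSlots(), true))
	}
}
```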
// Close closes and frees the resources allocated by the Socket.
func (xsk *Socket) Close() error {
allErrors := []error{}
var err error
if xsk.fd != -1 {
if err = unix.Close(xsk.fd); err != nil {
allErrors = append(allErrors, fmt.Errorf("failed to close XDP socket: %v", err))
}
xsk.fd = -1
var sh *reflect.SliceHeader
sh = (*reflect.SliceHeader)(unsafe.Pointer(&xsk.completionRing.Descs))
sh.Data = uintptr(0)
sh.Len = 0
sh.Cap = 0
sh = (*reflect.SliceHeader)(unsafe.Pointer(&xsk.txRing.Descs))
sh.Data = uintptr(0)
sh.Len = 0
sh.Cap = 0
sh = (*reflect.SliceHeader)(unsafe.Pointer(&xsk.rxRing.Descs))
sh.Data = uintptr(0)
sh.Len = 0
sh.Cap = 0
sh = (*reflect.SliceHeader)(unsafe.Pointer(&xsk.fillRing.Descs))
sh.Data = uintptr(0)
sh.Len = 0
sh.Cap = 0
}
if xsk.umem != nil {
if err := syscall.Munmap(xsk.umem); err != nil {
allErrors = append(allErrors, fmt.Errorf("failed to unmap the UMEM: %v", err))
}
xsk.umem = nil
}
if len(allErrors) > 0 {
return allErrors[0]
}
return nil
}
// Complete consumes up to n descriptors from the Completion ring queue, to
// which the kernel produces entries once it has actually transmitted a
// descriptor it got from the Tx ring queue.
// You should use this method if you are doing polling on the xdp.Socket file
// descriptor yourself, rather than using the Poll() method.
func (xsk *Socket) Complete(n int) {
cons := *xsk.completionRing.Consumer
//fencer.LFence()
for i := 0; i < n; i++ {
addr := xsk.completionRing.Descs[cons&uint32(xsk.options.CompletionRingNumDescs-1)]
cons++
xsk.freeTXDescs[addr/uint64(xsk.options.FrameSize)] = true
}
//fencer.MFence()
*xsk.completionRing.Consumer = cons
xsk.numTransmitted -= n
}
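
The transmit path mirrors the receive path: descriptors are taken from the TX half of the UMEM, filled with payload, pushed via Transmit, and reclaimed through the Completion ring. A minimal sketch, assuming each payload fits in one frame and relying on Poll to invoke Complete internally:

```go
// transmitFrames is a minimal sketch of the Transmit -> Poll -> Complete cycle.
func transmitFrames(xsk *Socket, payloads [][]byte) error {
	descs := xsk.GetDescs(len(payloads), false) // false = TX half of the UMEM
	for i := range descs {
		// Copy the payload into the frame and record its actual length.
		n := copy(xsk.GetFrame(descs[i]), payloads[i])
		descs[i].Len = uint32(n)
	}
	xsk.Transmit(descs)
	// Poll reaps the Completion ring for us (it calls Complete internally),
	// freeing the TX descriptors once the kernel has actually sent them.
	for xsk.NumTransmitted() > 0 {
		if _, _, err := xsk.Poll(-1); err != nil {
			return err
		}
	}
	return nil
}
```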
// NumFreeFillSlots returns how many free slots are available on the Fill ring
// queue, i.e. the queue to which we produce descriptors which should be filled
// by the kernel with incoming frames.
func (xsk *Socket) NumFreeFillSlots() int {
prod := *xsk.fillRing.Producer
cons := *xsk.fillRing.Consumer
max := uint32(xsk.options.FillRingNumDescs)
n := max - (prod - cons)
if n > max {
n = max
}
return int(n)
}
// NumFreeTxSlots returns how many free slots are available on the Tx ring
// queue, i.e. the queue to which we produce descriptors which should be
// transmitted by the kernel to the wire.
func (xsk *Socket) NumFreeTxSlots() int {
prod := *xsk.txRing.Producer
cons := *xsk.txRing.Consumer
max := uint32(xsk.options.TxRingNumDescs)
n := max - (prod - cons)
if n > max {
n = max
}
return int(n)
}
// NumReceived returns how many descriptors there are on the Rx ring queue
// which were produced by the kernel and which we have not yet consumed.
func (xsk *Socket) NumReceived() int {
prod := *xsk.rxRing.Producer
cons := *xsk.rxRing.Consumer
max := uint32(xsk.options.RxRingNumDescs)
n := prod - cons
if n > max {
n = max
}
return int(n)
}
// NumCompleted returns how many descriptors there are on the Completion ring
// queue which were produced by the kernel and which we have not yet consumed.
func (xsk *Socket) NumCompleted() int {
prod := *xsk.completionRing.Producer
cons := *xsk.completionRing.Consumer
max := uint32(xsk.options.CompletionRingNumDescs)
n := prod - cons
if n > max {
n = max
}
return int(n)
}
// NumFilled returns how many descriptors there are on the Fill ring
// queue which have not yet been consumed by the kernel.
// This method is useful if you're polling the xdp.Socket file descriptor
// yourself, rather than using the Poll() method - if it returns a number
// greater than zero it means you should set the unix.POLLIN flag.
func (xsk *Socket) NumFilled() int {
return xsk.numFilled
}
// NumTransmitted returns how many descriptors there are on the Tx ring queue
// which have not yet been consumed by the kernel.
// Note that even after the descriptors are consumed by the kernel from the Tx
// ring queue, it doesn't mean that they have actually been sent out on the
// wire, that can be assumed only after the descriptors have been produced by
// the kernel to the Completion ring queue.
// This method is useful if you're polling the xdp.Socket file descriptor
// yourself, rather than using the Poll() method - if it returns a number
// greater than zero it means you should set the unix.POLLOUT flag.
func (xsk *Socket) NumTransmitted() int {
return xsk.numTransmitted
}
// Stats returns various statistics for this XDP socket.
func (xsk *Socket) Stats() (Stats, error) {
var stats Stats
var size uint64
stats.Filled = uint64(*xsk.fillRing.Consumer)
stats.Received = uint64(*xsk.rxRing.Consumer)
if xsk.txRing.Consumer != nil {
stats.Transmitted = uint64(*xsk.txRing.Consumer)
}
if xsk.completionRing.Consumer != nil {
stats.Completed = uint64(*xsk.completionRing.Consumer)
}
size = uint64(unsafe.Sizeof(stats.KernelStats))
rc, _, errno := unix.Syscall6(syscall.SYS_GETSOCKOPT,
uintptr(xsk.fd),
unix.SOL_XDP, unix.XDP_STATISTICS,
uintptr(unsafe.Pointer(&stats.KernelStats)),
uintptr(unsafe.Pointer(&size)), 0)
if rc != 0 {
return stats, fmt.Errorf("getsockopt XDP_STATISTICS failed with errno %d", errno)
}
return stats, nil
}
// Program represents the necessary data structures for a simple XDP program that can filter traffic
// based on the attached rx queue.
type Program struct {
Program *ebpf.Program
Queues *ebpf.Map
Sockets *ebpf.Map
}
// Attach the XDP Program to an interface.
func (p *Program) Attach(Ifindex int) error {
if err := removeProgram(Ifindex); err != nil {
return err
}
return attachProgram(Ifindex, p.Program)
}
// Detach the XDP Program from an interface.
func (p *Program) Detach(Ifindex int) error {
return removeProgram(Ifindex)
}
// Register adds the socket file descriptor as the recipient for packets from the given queueID.
func (p *Program) Register(queueID int, fd int) error {
if err := p.Sockets.Put(uint32(queueID), uint32(fd)); err != nil {
return fmt.Errorf("failed to update xsksMap: %v", err)
}
if err := p.Queues.Put(uint32(queueID), uint32(1)); err != nil {
return fmt.Errorf("failed to update qidconfMap: %v", err)
}
return nil
}
// Unregister removes any associated mapping to sockets for the given queueID.
func (p *Program) Unregister(queueID int) error {
if err := p.Queues.Delete(uint32(queueID)); err != nil {
return err
}
if err := p.Sockets.Delete(uint32(queueID)); err != nil {
return err
}
return nil
}
// Close closes and frees the resources allocated for the Program.
func (p *Program) Close() error {
allErrors := []error{}
if p.Sockets != nil {
if err := p.Sockets.Close(); err != nil {
allErrors = append(allErrors, fmt.Errorf("failed to close xsksMap: %v", err))
}
p.Sockets = nil
}
if p.Queues != nil {
if err := p.Queues.Close(); err != nil {
allErrors = append(allErrors, fmt.Errorf("failed to close qidconfMap: %v", err))
}
p.Queues = nil
}
if p.Program != nil {
if err := p.Program.Close(); err != nil {
allErrors = append(allErrors, fmt.Errorf("failed to close XDP program: %v", err))
}
p.Program = nil
}
if len(allErrors) > 0 {
return allErrors[0]
}
return nil
}
// NewProgram returns a translation of the default eBPF XDP program found in the
// xsk_load_xdp_prog() function in <linux>/tools/lib/bpf/xsk.c:
// https://github.com/torvalds/linux/blob/master/tools/lib/bpf/xsk.c#L259
func NewProgram(maxQueueEntries int) (*Program, error) {
qidconfMap, err := ebpf.NewMap(&ebpf.MapSpec{
Name: "qidconf_map",
Type: ebpf.Array,
KeySize: uint32(unsafe.Sizeof(int32(0))),
ValueSize: uint32(unsafe.Sizeof(int32(0))),
MaxEntries: uint32(maxQueueEntries),
Flags: 0,
InnerMap: nil,
})
if err != nil {
return nil, fmt.Errorf("ebpf.NewMap qidconf_map failed (try increasing RLIMIT_MEMLOCK): %v", err)
}
xsksMap, err := ebpf.NewMap(&ebpf.MapSpec{
Name: "xsks_map",
Type: ebpf.XSKMap,
KeySize: uint32(unsafe.Sizeof(int32(0))),
ValueSize: uint32(unsafe.Sizeof(int32(0))),
MaxEntries: uint32(maxQueueEntries),
Flags: 0,
InnerMap: nil,
})
if err != nil {
return nil, fmt.Errorf("ebpf.NewMap xsks_map failed (try increasing RLIMIT_MEMLOCK): %v", err)
}
/*
This is a translation of the default eBPF XDP program found in the
xsk_load_xdp_prog() function in <linux>/tools/lib/bpf/xsk.c:
https://github.com/torvalds/linux/blob/master/tools/lib/bpf/xsk.c#L259
// This is the C-program:
// SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx)
// {
// int *qidconf, index = ctx->rx_queue_index;
//
// // A set entry here means that the corresponding queue_id
// // has an active AF_XDP socket bound to it.
// qidconf = bpf_map_lookup_elem(&qidconf_map, &index);
// if (!qidconf)
// return XDP_ABORTED;
//
// if (*qidconf)
// return bpf_redirect_map(&xsks_map, index, 0);
//
// return XDP_PASS;
// }
//
struct bpf_insn prog[] = {
// r1 = *(u32 *)(r1 + 16)
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 16), // 0
// *(u32 *)(r10 - 4) = r1
BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_1, -4), // 1
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), // 2
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), // 3
BPF_LD_MAP_FD(BPF_REG_1, xsk->qidconf_map_fd), // 4 (2 instructions)
BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), // 5
BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), // 6
BPF_MOV32_IMM(BPF_REG_0, 0), // 7
// if r1 == 0 goto +8
BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 8), // 8
BPF_MOV32_IMM(BPF_REG_0, 2), // 9
// r1 = *(u32 *)(r1 + 0)
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 0), // 10
// if r1 == 0 goto +5
BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 5), // 11
// r2 = *(u32 *)(r10 - 4)
BPF_LD_MAP_FD(BPF_REG_1, xsk->xsks_map_fd), // 12 (2 instructions)
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_10, -4), // 13
BPF_MOV32_IMM(BPF_REG_3, 0), // 14
BPF_EMIT_CALL(BPF_FUNC_redirect_map), // 15
// The jumps are to this instruction
BPF_EXIT_INSN(), // 16
};
eBPF instructions:
0: code: 97 dst_reg: 1 src_reg: 1 off: 16 imm: 0 // 0
1: code: 99 dst_reg: 10 src_reg: 1 off: -4 imm: 0 // 1
2: code: 191 dst_reg: 2 src_reg: 10 off: 0 imm: 0 // 2
3: code: 7 dst_reg: 2 src_reg: 0 off: 0 imm: -4 // 3
4: code: 24 dst_reg: 1 src_reg: 1 off: 0 imm: 4 // 4 XXX use qidconfMap.FD as IMM
5: code: 0 dst_reg: 0 src_reg: 0 off: 0 imm: 0 // part of the same instruction
6: code: 133 dst_reg: 0 src_reg: 0 off: 0 imm: 1 // 5
7: code: 191 dst_reg: 1 src_reg: 0 off: 0 imm: 0 // 6
8: code: 180 dst_reg: 0 src_reg: 0 off: 0 imm: 0 // 7
9: code: 21 dst_reg: 1 src_reg: 0 off: 8 imm: 0 // 8
10: code: 180 dst_reg: 0 src_reg: 0 off: 0 imm: 2 // 9
11: code: 97 dst_reg: 1 src_reg: 1 off: 0 imm: 0 // 10
12: code: 21 dst_reg: 1 src_reg: 0 off: 5 imm: 0 // 11
13: code: 24 dst_reg: 1 src_reg: 1 off: 0 imm: 5 // 12 XXX use xsksMap.FD as IMM
14: code: 0 dst_reg: 0 src_reg: 0 off: 0 imm: 0 // part of the same instruction
15: code: 97 dst_reg: 2 src_reg: 10 off: -4 imm: 0 // 13
16: code: 180 dst_reg: 3 src_reg: 0 off: 0 imm: 0 // 14
17: code: 133 dst_reg: 0 src_reg: 0 off: 0 imm: 51 // 15
18: code: 149 dst_reg: 0 src_reg: 0 off: 0 imm: 0 // 16
*/
program, err := ebpf.NewProgram(&ebpf.ProgramSpec{
Name: "xsk_ebpf",
Type: ebpf.XDP,
Instructions: asm.Instructions{
{OpCode: 97, Dst: 1, Src: 1, Offset: 16}, // 0: code: 97 dst_reg: 1 src_reg: 1 off: 16 imm: 0 // 0
{OpCode: 99, Dst: 10, Src: 1, Offset: -4}, // 1: code: 99 dst_reg: 10 src_reg: 1 off: -4 imm: 0 // 1
{OpCode: 191, Dst: 2, Src: 10}, // 2: code: 191 dst_reg: 2 src_reg: 10 off: 0 imm: 0 // 2
{OpCode: 7, Dst: 2, Src: 0, Offset: 0, Constant: -4}, // 3: code: 7 dst_reg: 2 src_reg: 0 off: 0 imm: -4 // 3
{OpCode: 24, Dst: 1, Src: 1, Offset: 0, Constant: int64(qidconfMap.FD())}, // 4: code: 24 dst_reg: 1 src_reg: 1 off: 0 imm: 4 // 4 XXX use qidconfMap.FD as IMM
//{ OpCode: 0 }, // 5: code: 0 dst_reg: 0 src_reg: 0 off: 0 imm: 0 // part of the same instruction
{OpCode: 133, Dst: 0, Src: 0, Constant: 1}, // 6: code: 133 dst_reg: 0 src_reg: 0 off: 0 imm: 1 // 5
{OpCode: 191, Dst: 1, Src: 0}, // 7: code: 191 dst_reg: 1 src_reg: 0 off: 0 imm: 0 // 6
{OpCode: 180, Dst: 0, Src: 0}, // 8: code: 180 dst_reg: 0 src_reg: 0 off: 0 imm: 0 // 7
{OpCode: 21, Dst: 1, Src: 0, Offset: 8}, // 9: code: 21 dst_reg: 1 src_reg: 0 off: 8 imm: 0 // 8
{OpCode: 180, Dst: 0, Src: 0, Constant: 2}, // 10: code: 180 dst_reg: 0 src_reg: 0 off: 0 imm: 2 // 9
{OpCode: 97, Dst: 1, Src: 1}, // 11: code: 97 dst_reg: 1 src_reg: 1 off: 0 imm: 0 // 10
{OpCode: 21, Dst: 1, Offset: 5}, // 12: code: 21 dst_reg: 1 src_reg: 0 off: 5 imm: 0 // 11
{OpCode: 24, Dst: 1, Src: 1, Constant: int64(xsksMap.FD())}, // 13: code: 24 dst_reg: 1 src_reg: 1 off: 0 imm: 5 // 12 XXX use xsksMap.FD as IMM
//{ OpCode: 0 }, // 14: code: 0 dst_reg: 0 src_reg: 0 off: 0 imm: 0 // part of the same instruction
{OpCode: 97, Dst: 2, Src: 10, Offset: -4}, // 15: code: 97 dst_reg: 2 src_reg: 10 off: -4 imm: 0 // 13
{OpCode: 180, Dst: 3}, // 16: code: 180 dst_reg: 3 src_reg: 0 off: 0 imm: 0 // 14
{OpCode: 133, Constant: 51}, // 17: code: 133 dst_reg: 0 src_reg: 0 off: 0 imm: 51 // 15
{OpCode: 149}, // 18: code: 149 dst_reg: 0 src_reg: 0 off: 0 imm: 0 // 16
},
License: "LGPL-2.1 or BSD-2-Clause",
KernelVersion: 0,
})
if err != nil {
return nil, fmt.Errorf("error: ebpf.NewProgram failed: %v", err)
}
return &Program{Program: program, Queues: qidconfMap, Sockets: xsksMap}, nil
}
// LoadProgram loads an external XDP program, along with its queue and socket maps;
// fname is the BPF kernel program file (.o);
// funcname is the function name in the program file;
// qidmapname is the Queues map name;
// xskmapname is the Sockets map name.
func LoadProgram(fname, funcname, qidmapname, xskmapname string) (*Program, error) {
prog := new(Program)
col, err := ebpf.LoadCollection(fname)
if err != nil {
return nil, err
}
var ok bool
if prog.Program, ok = col.Programs[funcname]; !ok {
return nil, fmt.Errorf("%v doesn't contain a function named %v", fname, funcname)
}
if prog.Queues, ok = col.Maps[qidmapname]; !ok {
return nil, fmt.Errorf("%v doesn't contain a queue map named %v", fname, qidmapname)
}
if prog.Sockets, ok = col.Maps[xskmapname]; !ok {
return nil, fmt.Errorf("%v doesn't contain a socket map named %v", fname, xskmapname)
}
return prog, nil
}
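
Whether built via NewProgram or loaded via LoadProgram, the resulting Program is wired up the same way: attach it to the interface, then register the AF_XDP socket's file descriptor for the queue it should serve. A sketch under stated assumptions (ifIndex, queueID, and sockFD come from elsewhere, e.g. Socket.FD()):

```go
// attachAndRegister is a hedged sketch of the typical Program setup sequence.
func attachAndRegister(ifIndex, queueID, sockFD, maxQueues int) (*Program, error) {
	prog, err := NewProgram(maxQueues)
	if err != nil {
		return nil, err
	}
	if err := prog.Attach(ifIndex); err != nil {
		prog.Close()
		return nil, err
	}
	// Steer packets arriving on queueID to the AF_XDP socket.
	if err := prog.Register(queueID, sockFD); err != nil {
		prog.Detach(ifIndex)
		prog.Close()
		return nil, err
	}
	return prog, nil
}
```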
// removeProgram removes an existing XDP program from the given network interface.
func removeProgram(Ifindex int) error {
var link netlink.Link
var err error
link, err = netlink.LinkByIndex(Ifindex)
if err != nil {
return err
}
if !isXdpAttached(link) {
return nil
}
if err = netlink.LinkSetXdpFd(link, -1); err != nil {
return fmt.Errorf("netlink.LinkSetXdpFd(link, -1) failed: %v", err)
}
for {
link, err = netlink.LinkByIndex(Ifindex)
if err != nil {
return err
}
if !isXdpAttached(link) {
break
}
time.Sleep(time.Second)
}
return nil
}
func isXdpAttached(link netlink.Link) bool {
return link.Attrs() != nil && link.Attrs().Xdp != nil && link.Attrs().Xdp.Attached
}
// attachProgram attaches the given XDP program to the network interface.
func attachProgram(Ifindex int, program *ebpf.Program) error {
link, err := netlink.LinkByIndex(Ifindex)
if err != nil {
return err
}
return netlink.LinkSetXdpFdWithFlags(link, program.FD(), int(DefaultXdpFlags))
}


@@ -2978,3 +2978,378 @@ func (gb *GB28181Plugin) UpdateChannel(ctx context.Context, req *pb.UpdateChanne
resp.Message = "通道信息更新成功"
return resp, nil
}
// AddChannelWithProxy adds a channel and associates it with a pull proxy
func (gb *GB28181Plugin) AddChannelWithProxy(ctx context.Context, req *pb.AddChannelWithProxyRequest) (*pb.BaseResponse, error) {
resp := &pb.BaseResponse{}
// 1. Parameter validation
if req.ChannelId == "" {
resp.Code = 400
resp.Message = "channelId cannot be empty"
return resp, nil
}
if req.Name == "" {
resp.Code = 400
resp.Message = "name cannot be empty"
return resp, nil
}
if req.StreamPath == "" {
resp.Code = 400
resp.Message = "streamPath cannot be empty"
return resp, nil
}
// 2. Check the database connection
if gb.DB == nil {
resp.Code = 500
resp.Message = "database not initialized"
return resp, nil
}
// 3. Duplicate check: verify the customChannelId does not already exist
var existingChannel gb28181.DeviceChannel
if err := gb.DB.Where("custom_channel_id = ?", req.ChannelId).First(&existingChannel).Error; err == nil {
resp.Code = 409
resp.Message = "channel ID already exists, please use another ID"
return resp, nil
} else if err != gorm.ErrRecordNotFound {
resp.Code = 500
resp.Message = fmt.Sprintf("failed to check channel ID: %v", err)
return resp, nil
}
// 4. Build the ID and related fields
channelID := req.ChannelId + "_" + req.ChannelId
deviceID := req.ChannelId
// 5. Create the DeviceChannel instance
now := time.Now().Format("2006-01-02 15:04:05")
deviceChannel := &gb28181.DeviceChannel{
ID: channelID,
DeviceId: deviceID,
ChannelId: req.ChannelId,
CustomChannelId: req.ChannelId,
Name: req.Name,
CustomName: req.Name,
Manufacturer: req.Manufacturer,
Model: req.Model,
Owner: req.Owner,
CivilCode: req.CivilCode,
Block: req.Block,
Address: req.Address,
Port: int(req.Port),
Parental: int(req.Parental),
ParentId: req.ParentId,
SafetyWay: int(req.SafetyWay),
RegisterWay: int(req.RegisterWay),
CertNum: req.CertNum,
Certifiable: int(req.Certifiable),
ErrCode: int(req.ErrCode),
EndTime: req.EndTime,
Secrecy: int(req.Secrecy),
IPAddress: req.IpAddress,
Password: req.Password,
PTZType: int(req.PtzType),
PositionType: int(req.PositionType),
RoomType: int(req.RoomType),
UseType: int(req.UseType),
SupplyLightType: int(req.SupplyLightType),
DirectionType: int(req.DirectionType),
Resolution: req.Resolution,
BusinessGroupID: req.BusinessGroupId,
DownloadSpeed: req.DownloadSpeed,
SVCSpaceSupportMod: int(req.SvcSpaceSupportMod),
SVCTimeSupportMode: int(req.SvcTimeSupportMode),
Status: gb28181.ChannelStatus(req.Status),
CreateTime: now,
StreamPath: req.StreamPath, // stream path linking to the pull proxy
}
// 6. Handle longitude/latitude: parse the strings into float64
if req.Longitude != "" {
// Use fmt.Sscanf to parse the string as a float64
var lon float64
if _, err := fmt.Sscanf(req.Longitude, "%f", &lon); err == nil {
deviceChannel.Longitude = lon
deviceChannel.GbLongitude = lon
}
}
if req.Latitude != "" {
var lat float64
if _, err := fmt.Sscanf(req.Latitude, "%f", &lat); err == nil {
deviceChannel.Latitude = lat
deviceChannel.GbLatitude = lat
}
}
// 7. Set the default status
if deviceChannel.Status == "" {
deviceChannel.Status = gb28181.ChannelOffStatus
}
// 8. Save to the database
if err := gb.DB.Create(deviceChannel).Error; err != nil {
resp.Code = 500
resp.Message = fmt.Sprintf("failed to save channel: %v", err)
return resp, nil
}
// 9. Add to the in-memory collection
channel := &Channel{
DeviceChannel: deviceChannel,
Device: nil, // virtual channel; not bound to a real GB device
Logger: gb.Logger.With("channel", channelID),
}
gb.channels.Add(channel)
// 10. Log the result
gb.Info("channel added successfully",
"channelId", req.ChannelId,
"id", channelID,
"deviceId", deviceID,
"streamPath", req.StreamPath,
"name", req.Name)
resp.Code = 0
resp.Message = "通道添加成功"
return resp, nil
}
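
For reference, the generated gateway further below maps this RPC to POST /gb28181/api/channel/add_with_proxy/{streamPath=**}. A hedged client sketch; the host/port, stream path, and field values are assumptions:

```go
package main

import (
	"net/http"
	"strings"
)

func addChannelWithProxyExample() error {
	// channelId and name are required by the handler above; streamPath is
	// taken from the URL path. The address below is an assumed deployment.
	body := strings.NewReader(`{"channelId": "34020000001320000099", "name": "lobby-cam"}`)
	resp, err := http.Post(
		"http://localhost:8080/gb28181/api/channel/add_with_proxy/live/test",
		"application/json", body)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```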
// UpdateChannelWithProxy updates channel information
func (gb *GB28181Plugin) UpdateChannelWithProxy(ctx context.Context, req *pb.UpdateChannelWithProxyRequest) (*pb.BaseResponse, error) {
resp := &pb.BaseResponse{}
// 1. Parameter validation
if req.ChannelId == "" {
resp.Code = 400
resp.Message = "channelId cannot be empty"
return resp, nil
}
// 2. Check the database connection
if gb.DB == nil {
resp.Code = 500
resp.Message = "database not initialized"
return resp, nil
}
// 3. Build the ID and look up the channel
channelID := req.ChannelId + "_" + req.ChannelId
var existingChannel gb28181.DeviceChannel
if err := gb.DB.Where("id = ?", channelID).First(&existingChannel).Error; err != nil {
if err == gorm.ErrRecordNotFound {
resp.Code = 404
resp.Message = "channel not found"
return resp, nil
}
resp.Code = 500
resp.Message = fmt.Sprintf("failed to query channel: %v", err)
return resp, nil
}
// 4. Build the update map (only non-empty fields are updated)
updates := make(map[string]interface{})
if req.StreamPath != "" {
updates["stream_path"] = req.StreamPath
}
if req.Name != "" {
updates["name"] = req.Name
updates["custom_name"] = req.Name
}
if req.Manufacturer != "" {
updates["manufacturer"] = req.Manufacturer
}
if req.Model != "" {
updates["model"] = req.Model
}
if req.Owner != "" {
updates["owner"] = req.Owner
}
if req.CivilCode != "" {
updates["civil_code"] = req.CivilCode
}
if req.Block != "" {
updates["block"] = req.Block
}
if req.Address != "" {
updates["address"] = req.Address
}
if req.Port > 0 {
updates["port"] = req.Port
}
if req.Parental >= 0 {
updates["parental"] = req.Parental
}
if req.ParentId != "" {
updates["parent_id"] = req.ParentId
}
if req.SafetyWay >= 0 {
updates["safety_way"] = req.SafetyWay
}
if req.RegisterWay >= 0 {
updates["register_way"] = req.RegisterWay
}
if req.CertNum != "" {
updates["cert_num"] = req.CertNum
}
if req.Certifiable >= 0 {
updates["certifiable"] = req.Certifiable
}
if req.ErrCode >= 0 {
updates["err_code"] = req.ErrCode
}
if req.EndTime != "" {
updates["end_time"] = req.EndTime
}
if req.Secrecy >= 0 {
updates["secrecy"] = req.Secrecy
}
if req.IpAddress != "" {
updates["ip_address"] = req.IpAddress
}
if req.Password != "" {
updates["password"] = req.Password
}
if req.PtzType >= 0 {
updates["ptz_type"] = req.PtzType
}
if req.PositionType >= 0 {
updates["position_type"] = req.PositionType
}
if req.RoomType >= 0 {
updates["room_type"] = req.RoomType
}
if req.UseType >= 0 {
updates["use_type"] = req.UseType
}
if req.SupplyLightType >= 0 {
updates["supply_light_type"] = req.SupplyLightType
}
if req.DirectionType >= 0 {
updates["direction_type"] = req.DirectionType
}
if req.Resolution != "" {
updates["resolution"] = req.Resolution
}
if req.BusinessGroupId != "" {
updates["business_group_id"] = req.BusinessGroupId
}
if req.DownloadSpeed != "" {
updates["download_speed"] = req.DownloadSpeed
}
if req.SvcSpaceSupportMod >= 0 {
updates["svc_space_support_mod"] = req.SvcSpaceSupportMod
}
if req.SvcTimeSupportMode >= 0 {
updates["svc_time_support_mode"] = req.SvcTimeSupportMode
}
if req.Status != "" {
updates["status"] = req.Status
}
if req.Longitude != "" {
var lon float64
if _, err := fmt.Sscanf(req.Longitude, "%f", &lon); err == nil {
updates["longitude"] = lon
updates["gb_longitude"] = lon
}
}
if req.Latitude != "" {
var lat float64
if _, err := fmt.Sscanf(req.Latitude, "%f", &lat); err == nil {
updates["latitude"] = lat
updates["gb_latitude"] = lat
}
}
// 5. Nothing to update
if len(updates) == 0 {
resp.Code = 400
resp.Message = "no fields to update"
return resp, nil
}
// 6. Update the database
if err := gb.DB.Model(&gb28181.DeviceChannel{}).Where("id = ?", channelID).Updates(updates).Error; err != nil {
resp.Code = 500
resp.Message = fmt.Sprintf("failed to update channel: %v", err)
return resp, nil
}
// 7. Update the in-memory channel (if present)
if channel, ok := gb.channels.Get(channelID); ok {
// Reload the latest data from the database
if err := gb.DB.Where("id = ?", channelID).First(channel.DeviceChannel).Error; err == nil {
gb.Info("in-memory channel updated", "channelId", req.ChannelId)
}
}
// 8. Log the result
gb.Info("channel updated successfully",
"channelId", req.ChannelId,
"id", channelID,
"updatedFields", len(updates))
resp.Code = 0
resp.Message = "通道更新成功"
return resp, nil
}
// DeleteChannelWithProxy deletes a channel
func (gb *GB28181Plugin) DeleteChannelWithProxy(ctx context.Context, req *pb.DeleteChannelWithProxyRequest) (*pb.BaseResponse, error) {
resp := &pb.BaseResponse{}
// 1. Parameter validation
if req.ChannelId == "" {
resp.Code = 400
resp.Message = "channelId cannot be empty"
return resp, nil
}
// 2. Check the database connection
if gb.DB == nil {
resp.Code = 500
resp.Message = "database not initialized"
return resp, nil
}
// 3. Build the ID
channelID := req.ChannelId + "_" + req.ChannelId
// 4. Check whether the channel exists
var existingChannel gb28181.DeviceChannel
if err := gb.DB.Where("id = ?", channelID).First(&existingChannel).Error; err != nil {
if err == gorm.ErrRecordNotFound {
resp.Code = 404
resp.Message = "channel not found"
return resp, nil
}
resp.Code = 500
resp.Message = fmt.Sprintf("failed to query channel: %v", err)
return resp, nil
}
// 5. Delete from the database
if err := gb.DB.Where("id = ?", channelID).Delete(&gb28181.DeviceChannel{}).Error; err != nil {
resp.Code = 500
resp.Message = fmt.Sprintf("failed to delete channel: %v", err)
return resp, nil
}
// 6. Remove from the in-memory collection
if channel, ok := gb.channels.Get(channelID); ok {
gb.channels.RemoveByKey(channel.ID)
gb.Info("channel removed from memory", "channelId", req.ChannelId)
}
// 7. Log the result
gb.Info("channel deleted successfully",
"channelId", req.ChannelId,
"id", channelID,
"streamPath", existingChannel.StreamPath)
resp.Code = 0
resp.Message = "通道删除成功"
return resp, nil
}
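
The update and delete RPCs follow the same pattern, taking channelId as a path parameter (see the gateway routes further below). A hedged sketch reusing the imports from the previous example, with an assumed host and channel ID:

```go
func updateThenDeleteChannelExample() error {
	// Update: only non-empty body fields are applied, per the handler above.
	resp, err := http.Post(
		"http://localhost:8080/gb28181/api/channel/update_with_proxy/34020000001320000099",
		"application/json", strings.NewReader(`{"name": "renamed-cam"}`))
	if err != nil {
		return err
	}
	resp.Body.Close()
	// Delete: no request body is needed.
	resp, err = http.Post(
		"http://localhost:8080/gb28181/api/channel/delete_with_proxy/34020000001320000099",
		"application/json", nil)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```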


@@ -230,6 +230,7 @@ func (c *catalogHandlerTask) Run() (err error) {
c.CustomChannelId = c.ChannelId
}
// Use Save to perform an upsert
d.Debug("ready to addOrUpdateChannel", "channel.ID is", c.ID, "channel.Status is", c.Status)
d.addOrUpdateChannel(c)
catalogReq.TotalCount++
}
@@ -631,7 +632,8 @@ func (d *Device) frontEndCmdString(cmdCode int32, parameter1 int32, parameter2 i
}
func (d *Device) addOrUpdateChannel(c gb28181.DeviceChannel) {
var resultChannel *Channel
if channel, ok := d.plugin.channels.Get(c.ID); ok {
// Channel already exists; preserve the custom fields
if channel.DeviceChannel != nil {
// Save the original custom fields
@@ -648,16 +650,18 @@ func (d *Device) addOrUpdateChannel(c gb28181.DeviceChannel) {
}
// Update the channel information
channel.DeviceChannel = &c
resultChannel = channel
d.Debug("addOrUpdateChannel, get channel from d.plugin.channels", "channel.ID is ", c.ID, "channel.Status is", c.Status)
} else {
// Create a new channel
resultChannel = &Channel{
Device: d,
Logger: d.Logger.With("channel", c.ID),
DeviceChannel: &c,
}
}
d.channels.Set(resultChannel)
d.plugin.channels.Set(resultChannel)
}
func (d *Device) GetID() string {


@@ -1,9 +1,12 @@
package plugin_gb28181pro
import (
"encoding/binary"
"errors"
"fmt"
"io"
"log/slog"
"net"
"net/http"
"os"
"regexp"
@@ -13,7 +16,10 @@ import (
"sync"
"time"
"github.com/langhuihui/gomem"
"github.com/pion/rtp"
"m7s.live/v5/pkg"
mpegps "m7s.live/v5/pkg/format/ps"
"github.com/emiago/sipgo"
"github.com/emiago/sipgo/sip"
@@ -365,9 +371,9 @@ func (gb *GB28181Plugin) checkDeviceExpire() (err error) {
// }
//})
// Load the device's channels, including channels whose deviceId or parentId equals device.DeviceId
var channels []gb28181.DeviceChannel
if err := gb.DB.Where("device_id = ? OR parent_id = ?", device.DeviceId, device.DeviceId).Find(&channels).Error; err != nil {
gb.Error("failed to load channels", "error", err, "deviceId", device.DeviceId)
continue
}
@@ -412,6 +418,26 @@ func (gb *GB28181Plugin) checkDeviceExpire() (err error) {
gb.Info("设备有效", "deviceId", device.DeviceId, "registerTime", device.RegisterTime, "expireTime", expireTime, "isExpired", isExpired, "device.Online", device.Online, "device.Status", device.Status)
}
// Query pull-proxy channels whose streamPath is not empty
var proxyChannels []gb28181.DeviceChannel
if err := gb.DB.Where("stream_path != ? AND stream_path IS NOT NULL", "").Find(&proxyChannels).Error; err != nil {
gb.Error("failed to query pull-proxy channels", "error", err)
} else if len(proxyChannels) > 0 {
gb.Info("found pull-proxy channels", "count", len(proxyChannels))
for _, c := range proxyChannels {
// Create a Channel instance
channel := &Channel{
DeviceChannel: &c,
Device: nil, // pull-proxy channels are not bound to a real GB device
Logger: gb.Logger.With("channel", c.ID, "streamPath", c.StreamPath),
}
// Add to the in-memory collection
gb.channels.Add(channel)
gb.Info("loaded pull-proxy channel", "channelId", c.ChannelId, "id", c.ID, "streamPath", c.StreamPath)
}
}
return nil
}
@@ -887,7 +913,12 @@ func (gb *GB28181Plugin) OnInvite(req *sip.Request, tx sip.ServerTransaction) {
contentType := sip.ContentTypeHeader("application/sdp")
response.AppendHeader(&contentType)
response.SetBody([]byte(strings.Join(content, "\r\n") + "\r\n"))
var ip = ""
var streamMode mrtp.StreamMode
if channel.StreamPath == "" {
ip = channel.Device.MediaIp
streamMode = channel.Device.StreamMode
}
// Create and save the SendRtpInfo for later use by the OnAck method
forwardDialog := &ForwardDialog{
gb: gb,
@@ -899,10 +930,12 @@ func (gb *GB28181Plugin) OnInvite(req *sip.Request, tx sip.ServerTransaction) {
// Initialize the ForwardConfig
ForwardConfig: mrtp.ForwardConfig{
Source: mrtp.ConnectionConfig{
//IP: util.Conditional(channel.StreamPath != "", "", channel.Device.MediaIp), // will be obtained from the SDP response in the Run method
IP: ip, // will be obtained from the SDP response in the Run method
Port: 0, // will be obtained from the SDP response in the Run method
//Mode: util.Conditional(channel.StreamPath != "", "", channel.Device.StreamMode), // default; will be updated from StreamMode in the Run method
Mode: streamMode, // default; will be updated from StreamMode in the Run method
SSRC: 0, // will be set in the Start method
},
Target: mrtp.ConnectionConfig{
IP: inviteInfo.IP,
@@ -913,7 +946,7 @@ func (gb *GB28181Plugin) OnInvite(req *sip.Request, tx sip.ServerTransaction) {
Relay: false,
},
}
forwardDialog.Logger = gb.Logger.With("ssrc", inviteInfo.SSRC, "platformid", platform.PlatformModel.ServerGBID, "deviceid", channel.ID)
gb.forwardDialogs.Set(forwardDialog)
gb.Info("OnInvite", "action", "sendRtpInfo created", "callId", req.CallID().Value())
@@ -937,17 +970,152 @@ func (gb *GB28181Plugin) OnAck(req *sip.Request, tx sip.ServerTransaction) {
if forwardDialog, ok := gb.forwardDialogs.Find(func(dialog *ForwardDialog) bool {
return dialog.platformCallId == callID
}); ok {
pullUrl := fmt.Sprintf("%s/%s", forwardDialog.channel.DeviceId, forwardDialog.channel.ChannelId)
streamPath := fmt.Sprintf("platform_%d/%s/%s", time.Now().UnixMilli(), forwardDialog.channel.DeviceId, forwardDialog.channel.ChannelId)
if forwardDialog.channel.StreamPath == "" { //为空表示是正常的GB设备
pullUrl := fmt.Sprintf("%s/%s", forwardDialog.channel.ParentId, forwardDialog.channel.ChannelId)
streamPath := fmt.Sprintf("platform_%d/%s/%s", time.Now().UnixMilli(), forwardDialog.channel.DeviceId, forwardDialog.channel.ChannelId)
// 创建配置
pullConf := config.Pull{
URL: pullUrl,
// 创建配置
pullConf := config.Pull{
URL: pullUrl,
}
// 初始化拉流任务
forwardDialog.GetPullJob().Init(forwardDialog, &gb.Plugin, streamPath, pullConf, nil)
} else { //不为空表示是个拉流代理相关联的设备,直接推送已有的流
// 异步推送PS流到上级平台
go gb.sendPSToUpstream(forwardDialog)
}
// 初始化拉流任务
forwardDialog.GetPullJob().Init(forwardDialog, &gb.Plugin, streamPath, pullConf, nil)
} else {
gb.Error("OnAck", "error", "forwardDialog not found", "callID", callID)
return
}
}
// sendPSToUpstream converts the pull-proxy stream to PS format and pushes it to the upstream platform
func (gb *GB28181Plugin) sendPSToUpstream(forwardDialog *ForwardDialog) {
streamPath := forwardDialog.channel.StreamPath
targetIP := forwardDialog.ForwardConfig.Target.IP
targetPort := forwardDialog.ForwardConfig.Target.Port
isUDP := forwardDialog.ForwardConfig.Target.Mode == mrtp.StreamModeUDP
ssrc := forwardDialog.ForwardConfig.Target.SSRC
// Subscribe to the stream, using gb as the context
suber, err := gb.Subscribe(gb, streamPath)
if err != nil {
gb.Error("sendPSToUpstream", "error", "subscribe stream failed", "err", err, "streamPath", streamPath)
return
}
var w io.WriteCloser
var writeRTP func() error
var mem gomem.RecyclableMemory
allocator := gomem.NewScalableMemoryAllocator(1 << gomem.MinPowerOf2)
mem.SetAllocator(allocator)
defer allocator.Recycle()
var headerBuf [14]byte
writeBuffer := make(net.Buffers, 1)
var totalBytesSent int
var packet rtp.Packet
packet.Version = 2
packet.SSRC = ssrc
packet.PayloadType = 96
defer func() {
gb.Info("sendPSToUpstream", "action", "complete", "total", packet.SequenceNumber, "totalBytesSent", totalBytesSent)
}()
if isUDP {
// UDP mode
conn, err := net.DialUDP("udp", nil, &net.UDPAddr{
IP: net.ParseIP(targetIP),
Port: int(targetPort),
})
if err != nil {
gb.Error("sendPSToUpstream", "error", "dial udp failed", "err", err)
return
}
w = conn
writeRTP = func() (err error) {
defer mem.Recycle()
r := mem.NewReader()
packet.Timestamp = uint32(time.Now().UnixMilli()) * 90
for r.Length > 0 {
packet.SequenceNumber += 1
buf := writeBuffer
buf[0] = headerBuf[:12]
_, err = packet.Header.MarshalTo(headerBuf[:12])
if err != nil {
return
}
r.RangeN(mrtp.MTUSize, func(b []byte) {
buf = append(buf, b)
})
n, _ := buf.WriteTo(w)
totalBytesSent += int(n)
}
return
}
} else {
// TCP mode
gb.Info("sendPSToUpstream", "action", "connect tcp", "ip", targetIP, "port", targetPort)
conn, err := net.DialTCP("tcp", nil, &net.TCPAddr{
IP: net.ParseIP(targetIP),
Port: int(targetPort),
})
if err != nil {
gb.Error("sendPSToUpstream", "error", "dial tcp failed", "err", err)
return
}
w = conn
writeRTP = func() (err error) {
defer mem.Recycle()
r := mem.NewReader()
packet.Timestamp = uint32(time.Now().UnixMilli()) * 90
// Check whether the payload needs to be split into multiple RTP packets
const maxRTPSize = 65535 - 12 // max uint16 value minus the RTP header length
for r.Length > 0 {
buf := writeBuffer
buf[0] = headerBuf[:14]
packet.SequenceNumber += 1
// Compute the payload size of the current packet
payloadSize := r.Length
if payloadSize > maxRTPSize {
payloadSize = maxRTPSize
}
// Set the TCP length field (2 bytes): RTP header length (12 bytes) + payload length
rtpPacketSize := uint16(12 + payloadSize)
binary.BigEndian.PutUint16(headerBuf[:2], rtpPacketSize)
// Marshal the RTP header
_, err = packet.Header.MarshalTo(headerBuf[2:14])
if err != nil {
return
}
// Append the payload data
r.RangeN(payloadSize, func(b []byte) {
buf = append(buf, b)
})
// Send the RTP packet
n, writeErr := buf.WriteTo(w)
if writeErr != nil {
return writeErr
}
totalBytesSent += int(n)
}
return
}
}
defer w.Close()
// Create the PS muxer
var muxer mpegps.MpegPSMuxer
muxer.Subscriber = suber
muxer.Packet = &mem
muxer.Mux(writeRTP)
gb.Info("sendPSToUpstream", "action", "stream ended", "streamPath", streamPath)
}
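
In TCP mode the function emits RFC 4571-style framing: a 2-byte big-endian length prefix followed by the RTP packet. A minimal sketch of the receiving side, using only imports already present in this file (net, io, encoding/binary); connection setup and RTP parsing are assumed to live elsewhere:

```go
// readFramedRTP is a sketch of consuming the length-prefixed RTP stream
// produced by the TCP branch of sendPSToUpstream.
func readFramedRTP(conn net.Conn, handle func(rtpPacket []byte)) error {
	var lenBuf [2]byte
	for {
		if _, err := io.ReadFull(conn, lenBuf[:]); err != nil {
			return err
		}
		size := binary.BigEndian.Uint16(lenBuf[:])
		pkt := make([]byte, size)
		if _, err := io.ReadFull(conn, pkt); err != nil {
			return err
		}
		handle(pkt) // pkt begins with the 12-byte RTP header
	}
}
```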

File diff suppressed because it is too large.

@@ -3201,6 +3201,127 @@ func local_request_Api_ReceiveAlarm_0(ctx context.Context, marshaler runtime.Mar
return msg, metadata, err
}
func request_Api_AddChannelWithProxy_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AddChannelWithProxyRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
val, ok := pathParams["streamPath"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "streamPath")
}
protoReq.StreamPath, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "streamPath", err)
}
msg, err := client.AddChannelWithProxy(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_AddChannelWithProxy_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AddChannelWithProxyRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
val, ok := pathParams["streamPath"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "streamPath")
}
protoReq.StreamPath, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "streamPath", err)
}
msg, err := server.AddChannelWithProxy(ctx, &protoReq)
return msg, metadata, err
}
func request_Api_UpdateChannelWithProxy_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq UpdateChannelWithProxyRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
val, ok := pathParams["channelId"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "channelId")
}
protoReq.ChannelId, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "channelId", err)
}
msg, err := client.UpdateChannelWithProxy(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_UpdateChannelWithProxy_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq UpdateChannelWithProxyRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
val, ok := pathParams["channelId"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "channelId")
}
protoReq.ChannelId, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "channelId", err)
}
msg, err := server.UpdateChannelWithProxy(ctx, &protoReq)
return msg, metadata, err
}
func request_Api_DeleteChannelWithProxy_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeleteChannelWithProxyRequest
metadata runtime.ServerMetadata
err error
)
io.Copy(io.Discard, req.Body)
val, ok := pathParams["channelId"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "channelId")
}
protoReq.ChannelId, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "channelId", err)
}
msg, err := client.DeleteChannelWithProxy(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_DeleteChannelWithProxy_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeleteChannelWithProxyRequest
metadata runtime.ServerMetadata
err error
)
val, ok := pathParams["channelId"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "channelId")
}
protoReq.ChannelId, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "channelId", err)
}
msg, err := server.DeleteChannelWithProxy(ctx, &protoReq)
return msg, metadata, err
}
// RegisterApiHandlerServer registers the http handlers for service Api to "mux".
// UnaryRPC :call ApiServer directly.
// StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906.
@@ -4547,6 +4668,66 @@ func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server
}
forward_Api_ReceiveAlarm_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_AddChannelWithProxy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/gb28181pro.Api/AddChannelWithProxy", runtime.WithHTTPPathPattern("/gb28181/api/channel/add_with_proxy/{streamPath=**}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_AddChannelWithProxy_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_AddChannelWithProxy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_UpdateChannelWithProxy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/gb28181pro.Api/UpdateChannelWithProxy", runtime.WithHTTPPathPattern("/gb28181/api/channel/update_with_proxy/{channelId}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_UpdateChannelWithProxy_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_UpdateChannelWithProxy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_DeleteChannelWithProxy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/gb28181pro.Api/DeleteChannelWithProxy", runtime.WithHTTPPathPattern("/gb28181/api/channel/delete_with_proxy/{channelId}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_DeleteChannelWithProxy_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_DeleteChannelWithProxy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
@@ -5726,6 +5907,57 @@ func RegisterApiHandlerClient(ctx context.Context, mux *runtime.ServeMux, client
}
forward_Api_ReceiveAlarm_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_AddChannelWithProxy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/gb28181pro.Api/AddChannelWithProxy", runtime.WithHTTPPathPattern("/gb28181/api/channel/add_with_proxy/{streamPath=**}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_AddChannelWithProxy_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_AddChannelWithProxy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_UpdateChannelWithProxy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/gb28181pro.Api/UpdateChannelWithProxy", runtime.WithHTTPPathPattern("/gb28181/api/channel/update_with_proxy/{channelId}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_UpdateChannelWithProxy_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_UpdateChannelWithProxy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_DeleteChannelWithProxy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/gb28181pro.Api/DeleteChannelWithProxy", runtime.WithHTTPPathPattern("/gb28181/api/channel/delete_with_proxy/{channelId}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_DeleteChannelWithProxy_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_DeleteChannelWithProxy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
@@ -5797,6 +6029,9 @@ var (
pattern_Api_GetGroupChannels_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"gb28181", "api", "groups", "groupId", "channels"}, ""))
pattern_Api_RemoveDevice_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 1, 0, 4, 1, 5, 4}, []string{"gb28181", "api", "device", "remove", "id"}, ""))
pattern_Api_ReceiveAlarm_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"gb28181", "api", "alarm", "receive"}, ""))
pattern_Api_AddChannelWithProxy_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 3, 0, 4, 1, 5, 4}, []string{"gb28181", "api", "channel", "add_with_proxy", "streamPath"}, ""))
pattern_Api_UpdateChannelWithProxy_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 1, 0, 4, 1, 5, 4}, []string{"gb28181", "api", "channel", "update_with_proxy", "channelId"}, ""))
pattern_Api_DeleteChannelWithProxy_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 1, 0, 4, 1, 5, 4}, []string{"gb28181", "api", "channel", "delete_with_proxy", "channelId"}, ""))
)
var (
@@ -5867,4 +6102,7 @@ var (
forward_Api_GetGroupChannels_0 = runtime.ForwardResponseMessage
forward_Api_RemoveDevice_0 = runtime.ForwardResponseMessage
forward_Api_ReceiveAlarm_0 = runtime.ForwardResponseMessage
forward_Api_AddChannelWithProxy_0 = runtime.ForwardResponseMessage
forward_Api_UpdateChannelWithProxy_0 = runtime.ForwardResponseMessage
forward_Api_DeleteChannelWithProxy_0 = runtime.ForwardResponseMessage
)


@@ -495,6 +495,29 @@ service api {
body: "*"
};
}
// Add a channel and associate it with a pull proxy
rpc AddChannelWithProxy (AddChannelWithProxyRequest) returns (BaseResponse) {
option (google.api.http) = {
post: "/gb28181/api/channel/add_with_proxy/{streamPath=**}"
body: "*"
};
}
// Update channel information
rpc UpdateChannelWithProxy (UpdateChannelWithProxyRequest) returns (BaseResponse) {
option (google.api.http) = {
post: "/gb28181/api/channel/update_with_proxy/{channelId}"
body: "*"
};
}
// Delete a channel
rpc DeleteChannelWithProxy (DeleteChannelWithProxyRequest) returns (BaseResponse) {
option (google.api.http) = {
post: "/gb28181/api/channel/delete_with_proxy/{channelId}"
};
}
}
// Request and response message definitions
@@ -1168,3 +1191,87 @@ message AlarmInfoRequest {
google.protobuf.Timestamp createAt=6; // creation time
}
// AddChannelWithProxyRequest is the request for adding a channel associated with a pull proxy
message AddChannelWithProxyRequest {
string streamPath = 1; // stream path of the pull proxy (URL path parameter)
string deviceId = 2; // GB (national standard) device ID
string channelId = 3; // channel ID
string name = 4; // channel name
string manufacturer = 5; // device manufacturer
string model = 6; // device model
string owner = 7; // device owner
string civilCode = 8; // administrative region code
string block = 9; // police district
string address = 10; // installation address
int32 port = 11; // port
int32 parental = 12; // whether the device has child devices
string parentId = 13; // parent node ID
int32 safetyWay = 14; // signaling security mode
int32 registerWay = 15; // registration method
string certNum = 16; // certificate serial number
int32 certifiable = 17; // certificate validity flag
int32 errCode = 18; // invalid-reason code
string endTime = 19; // certificate expiry time
int32 secrecy = 20; // secrecy attribute
string ipAddress = 21; // device/system IP address
string password = 22; // device password
int32 ptzType = 23; // camera type
int32 positionType = 24; // camera position type
int32 roomType = 25; // indoor/outdoor installation attribute
int32 useType = 26; // usage attribute
int32 supplyLightType = 27; // camera supplementary-light attribute
int32 directionType = 28; // camera monitoring direction attribute
string resolution = 29; // resolutions supported by the camera
string businessGroupId = 30; // ID of the business group the virtual organization belongs to
string downloadSpeed = 31; // download speed multiplier
int32 svcSpaceSupportMod = 32; // SVC spatial-domain encoding capability
int32 svcTimeSupportMode = 33; // SVC temporal-domain encoding capability
string status = 34; // device status
string longitude = 35; // longitude
string latitude = 36; // latitude
}
// UpdateChannelWithProxyRequest is the request for updating channel information
message UpdateChannelWithProxyRequest {
string channelId = 1; // channel ID (URL path parameter, required)
string streamPath = 2; // stream path of the pull proxy (optional)
string name = 3; // channel name (optional)
string manufacturer = 4; // device manufacturer (optional)
string model = 5; // device model (optional)
string owner = 6; // device owner (optional)
string civilCode = 7; // administrative region code (optional)
string block = 8; // police district (optional)
string address = 9; // installation address (optional)
int32 port = 10; // port (optional)
int32 parental = 11; // whether the device has child devices (optional)
string parentId = 12; // parent node ID (optional)
int32 safetyWay = 13; // signaling security mode (optional)
int32 registerWay = 14; // registration method (optional)
string certNum = 15; // certificate serial number (optional)
int32 certifiable = 16; // certificate validity flag (optional)
int32 errCode = 17; // invalid-reason code (optional)
string endTime = 18; // certificate expiry time (optional)
int32 secrecy = 19; // secrecy attribute (optional)
string ipAddress = 20; // device/system IP address (optional)
string password = 21; // device password (optional)
int32 ptzType = 22; // camera type (optional)
int32 positionType = 23; // camera position type (optional)
int32 roomType = 24; // indoor/outdoor installation attribute (optional)
int32 useType = 25; // usage attribute (optional)
int32 supplyLightType = 26; // camera supplementary-light attribute (optional)
int32 directionType = 27; // camera monitoring direction attribute (optional)
string resolution = 28; // resolutions supported by the camera (optional)
string businessGroupId = 29; // ID of the business group the virtual organization belongs to (optional)
string downloadSpeed = 30; // download speed multiplier (optional)
int32 svcSpaceSupportMod = 31; // SVC spatial-domain encoding capability (optional)
int32 svcTimeSupportMode = 32; // SVC temporal-domain encoding capability (optional)
string status = 33; // device status (optional)
string longitude = 34; // longitude (optional)
string latitude = 35; // latitude (optional)
}
// DeleteChannelWithProxyRequest is the request for deleting a channel
message DeleteChannelWithProxyRequest {
string channelId = 1; // channel ID (URL path parameter)
}


@@ -89,6 +89,9 @@ const (
Api_GetGroupChannels_FullMethodName = "/gb28181pro.api/GetGroupChannels"
Api_RemoveDevice_FullMethodName = "/gb28181pro.api/RemoveDevice"
Api_ReceiveAlarm_FullMethodName = "/gb28181pro.api/ReceiveAlarm"
Api_AddChannelWithProxy_FullMethodName = "/gb28181pro.api/AddChannelWithProxy"
Api_UpdateChannelWithProxy_FullMethodName = "/gb28181pro.api/UpdateChannelWithProxy"
Api_DeleteChannelWithProxy_FullMethodName = "/gb28181pro.api/DeleteChannelWithProxy"
)
// ApiClient is the client API for Api service.
@@ -229,6 +232,12 @@ type ApiClient interface {
RemoveDevice(ctx context.Context, in *RemoveDeviceRequest, opts ...grpc.CallOption) (*BaseResponse, error)
// 接收报警信息
ReceiveAlarm(ctx context.Context, in *AlarmInfoRequest, opts ...grpc.CallOption) (*BaseResponse, error)
// 添加通道并关联拉流代理
AddChannelWithProxy(ctx context.Context, in *AddChannelWithProxyRequest, opts ...grpc.CallOption) (*BaseResponse, error)
// 更新通道信息
UpdateChannelWithProxy(ctx context.Context, in *UpdateChannelWithProxyRequest, opts ...grpc.CallOption) (*BaseResponse, error)
// Delete a channel
DeleteChannelWithProxy(ctx context.Context, in *DeleteChannelWithProxyRequest, opts ...grpc.CallOption) (*BaseResponse, error)
}
type apiClient struct {
@@ -909,6 +918,36 @@ func (c *apiClient) ReceiveAlarm(ctx context.Context, in *AlarmInfoRequest, opts
return out, nil
}
func (c *apiClient) AddChannelWithProxy(ctx context.Context, in *AddChannelWithProxyRequest, opts ...grpc.CallOption) (*BaseResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(BaseResponse)
err := c.cc.Invoke(ctx, Api_AddChannelWithProxy_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *apiClient) UpdateChannelWithProxy(ctx context.Context, in *UpdateChannelWithProxyRequest, opts ...grpc.CallOption) (*BaseResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(BaseResponse)
err := c.cc.Invoke(ctx, Api_UpdateChannelWithProxy_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *apiClient) DeleteChannelWithProxy(ctx context.Context, in *DeleteChannelWithProxyRequest, opts ...grpc.CallOption) (*BaseResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(BaseResponse)
err := c.cc.Invoke(ctx, Api_DeleteChannelWithProxy_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// ApiServer is the server API for Api service.
// All implementations must embed UnimplementedApiServer
// for forward compatibility.
@@ -1047,6 +1086,12 @@ type ApiServer interface {
RemoveDevice(context.Context, *RemoveDeviceRequest) (*BaseResponse, error)
// Receive alarm information
ReceiveAlarm(context.Context, *AlarmInfoRequest) (*BaseResponse, error)
// Add a channel and associate it with a pull proxy
AddChannelWithProxy(context.Context, *AddChannelWithProxyRequest) (*BaseResponse, error)
// Update channel information
UpdateChannelWithProxy(context.Context, *UpdateChannelWithProxyRequest) (*BaseResponse, error)
// Delete a channel
DeleteChannelWithProxy(context.Context, *DeleteChannelWithProxyRequest) (*BaseResponse, error)
mustEmbedUnimplementedApiServer()
}
@@ -1258,6 +1303,15 @@ func (UnimplementedApiServer) RemoveDevice(context.Context, *RemoveDeviceRequest
func (UnimplementedApiServer) ReceiveAlarm(context.Context, *AlarmInfoRequest) (*BaseResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ReceiveAlarm not implemented")
}
func (UnimplementedApiServer) AddChannelWithProxy(context.Context, *AddChannelWithProxyRequest) (*BaseResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method AddChannelWithProxy not implemented")
}
func (UnimplementedApiServer) UpdateChannelWithProxy(context.Context, *UpdateChannelWithProxyRequest) (*BaseResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method UpdateChannelWithProxy not implemented")
}
func (UnimplementedApiServer) DeleteChannelWithProxy(context.Context, *DeleteChannelWithProxyRequest) (*BaseResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeleteChannelWithProxy not implemented")
}
func (UnimplementedApiServer) mustEmbedUnimplementedApiServer() {}
func (UnimplementedApiServer) testEmbeddedByValue() {}
@@ -2485,6 +2539,60 @@ func _Api_ReceiveAlarm_Handler(srv interface{}, ctx context.Context, dec func(in
return interceptor(ctx, in, info, handler)
}
func _Api_AddChannelWithProxy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(AddChannelWithProxyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).AddChannelWithProxy(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_AddChannelWithProxy_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).AddChannelWithProxy(ctx, req.(*AddChannelWithProxyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Api_UpdateChannelWithProxy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(UpdateChannelWithProxyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).UpdateChannelWithProxy(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_UpdateChannelWithProxy_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).UpdateChannelWithProxy(ctx, req.(*UpdateChannelWithProxyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Api_DeleteChannelWithProxy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteChannelWithProxyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).DeleteChannelWithProxy(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_DeleteChannelWithProxy_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).DeleteChannelWithProxy(ctx, req.(*DeleteChannelWithProxyRequest))
}
return interceptor(ctx, in, info, handler)
}
// Api_ServiceDesc is the grpc.ServiceDesc for Api service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
@@ -2760,6 +2868,18 @@ var Api_ServiceDesc = grpc.ServiceDesc{
MethodName: "ReceiveAlarm",
Handler: _Api_ReceiveAlarm_Handler,
},
{
MethodName: "AddChannelWithProxy",
Handler: _Api_AddChannelWithProxy_Handler,
},
{
MethodName: "UpdateChannelWithProxy",
Handler: _Api_UpdateChannelWithProxy_Handler,
},
{
MethodName: "DeleteChannelWithProxy",
Handler: _Api_DeleteChannelWithProxy_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "gb28181.proto",

View File

@@ -62,6 +62,7 @@ type DeviceChannel struct {
PTZTypeText string `json:"ptzTypeText"` // PTZ type description string
GbLongitude float64 `json:"gbLongitude"`
GbLatitude float64 `json:"gbLatitude"`
StreamPath string `json:"streamPath"` // stream path of the pull proxy
}
// SetPTZType sets the PTZ type and updates the description text

View File

@@ -134,7 +134,7 @@ func (p *Platform) Keepalive() (*sipgo.DialogClientSession, error) {
csqHeader := sip.CSeqHeader{
SeqNo: uint32(p.SN),
MethodName: "REGISTER",
MethodName: "MESSAGE",
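// GB28181 keepalive is carried in a SIP MESSAGE request, so the CSeq method must be MESSAGE rather than REGISTER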
}
p.SN++
req.AppendHeader(&csqHeader)
@@ -852,7 +852,7 @@ func (p *Platform) buildChannelItem(channel gb28181.DeviceChannel) string {
channel.RegisterWay, // use the integer value directly
channel.Secrecy, // use the integer value directly
parentID,
channel.Parental, // use the integer value directly
channel.Parental, // use the integer value directly
channel.SafetyWay) // use the integer value directly
}

View File

@@ -10,6 +10,7 @@ import (
"sync"
"time"
task "github.com/langhuihui/gotask"
"m7s.live/v5"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/format"
@@ -53,6 +54,11 @@ func (w *HLSWriter) GetTs(key string) (any, bool) {
}
func (w *HLSWriter) checkNoBodyRead() bool {
// If the stream has never been read (pure recording mode), skip the timeout check
if w.lastReadTime.IsZero() {
return false
}
// It has been played before; check whether there has been no access for 15 seconds
return time.Since(w.lastReadTime) > time.Second*15
}
@@ -94,12 +100,12 @@ func (w *HLSWriter) Run() (err error) {
return m7s.PlayBlock(subscriber, func(audio *format.Mpeg2Audio) error {
pesAudio.Pts = uint64(subscriber.AudioReader.AbsTime) * 90
if w.checkNoBodyRead() {
return ErrNoBodyRead
return errors.Join(ErrNoBodyRead, task.ErrStopByUser)
}
return pesAudio.WritePESPacket(audio.Memory, &w.ts.RecyclableMemory)
}, func(video *mpegts.VideoFrame) (err error) {
if w.checkNoBodyRead() {
return ErrNoBodyRead
return errors.Join(ErrNoBodyRead, task.ErrStopByUser)
}
vr := w.TransformJob.Subscriber.VideoReader
if vr.Value.IDR {

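Returning `errors.Join(ErrNoBodyRead, task.ErrStopByUser)` is what keeps the HLS transform task from being retried: the joined error still matches both sentinels under `errors.Is`, so the task framework can recognize the user-stop signal while callers can still identify the no-reader condition. A minimal standalone sketch of that matching behavior (the sentinel values here are stand-ins):

```go
package main

import (
	"errors"
	"fmt"
)

var (
	ErrNoBodyRead = errors.New("no body read") // stands in for the HLS sentinel
	ErrStopByUser = errors.New("stop by user") // stands in for task.ErrStopByUser
)

func main() {
	err := errors.Join(ErrNoBodyRead, ErrStopByUser)
	// Both sentinels remain matchable, so retry logic that treats
	// ErrStopByUser as "do not restart" stops instead of retrying.
	fmt.Println(errors.Is(err, ErrNoBodyRead)) // true
	fmt.Println(errors.Is(err, ErrStopByUser)) // true
}
```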
View File

@@ -1,6 +1,8 @@
package plugin_mp4
import (
"fmt"
"io"
"os"
"path/filepath"
"strings"
@@ -74,15 +76,30 @@ func (p *DeleteRecordTask) getDiskOutOfSpace(filePath string) bool {
func (p *DeleteRecordTask) deleteOldestFile() {
//When current disk usage exceeds the AutoOverWriteDiskPercent auto-overwrite threshold, automatically delete the oldest files
//For continuous recordings, delete the oldest file
// Create an array to store every conf.FilePath
// Use a map to deduplicate while storing every conf.FilePath
pathMap := make(map[string]bool)
if p.plugin.Server.Plugins.Length > 0 {
p.plugin.Server.Plugins.Range(func(plugin *m7s.Plugin) bool {
if len(plugin.GetCommonConf().OnPub.Record) > 0 {
for _, conf := range plugin.GetCommonConf().OnPub.Record {
// Process the path: strip the trailing /$0 segment and keep only the directory part
dirPath := filepath.Dir(conf.FilePath)
if _, exists := pathMap[dirPath]; !exists {
pathMap[dirPath] = true
p.Info("deleteOldestFile", "original filepath", conf.FilePath, "processed filepath", dirPath)
} else {
p.Debug("deleteOldestFile", "duplicate path ignored", "path", dirPath)
}
}
}
return true
})
}
// Convert the map into a slice
var filePaths []string
if len(p.plugin.GetCommonConf().OnPub.Record) > 0 {
for _, conf := range p.plugin.GetCommonConf().OnPub.Record {
// Process the path: strip the trailing /$0 segment and keep only the directory part
dirPath := filepath.Dir(conf.FilePath)
p.Info("deleteOldestFile", "original filepath", conf.FilePath, "processed filepath", dirPath)
filePaths = append(filePaths, dirPath)
}
for path := range pathMap {
filePaths = append(filePaths, path)
}
p.Debug("deleteOldestFile", "after get onpub.record,filePaths.length", len(filePaths))
if p.plugin.EventRecordFilePath != "" {
@@ -143,47 +160,304 @@ func (p *DeleteRecordTask) deleteOldestFile() {
}
}
type DeleteRecordTask struct {
type StorageManagementTask struct {
task.TickTask
DiskMaxPercent float64
AutoOverWriteDiskPercent float64
RecordFileExpireDays int
DB *gorm.DB
plugin *MP4Plugin
DiskMaxPercent float64
AutoOverWriteDiskPercent float64
MigrationThresholdPercent float64
RecordFileExpireDays int
DB *gorm.DB
plugin *MP4Plugin
}
// DeleteRecordTask is kept as an alias for backward compatibility
type DeleteRecordTask = StorageManagementTask
func (t *DeleteRecordTask) GetTickInterval() time.Duration {
return 1 * time.Minute
}
func (t *DeleteRecordTask) Tick(any) {
func (t *StorageManagementTask) Tick(any) {
t.Debug("StorageManagementTask", "tick started")
// Phase 1: file migration (highest priority, frees primary storage space)
t.Debug("StorageManagementTask", "phase 1: file migration")
t.migrateFiles()
// Phase 2: delete expired files
t.Debug("StorageManagementTask", "phase 2: delete expired files")
t.deleteExpiredFiles()
// Phase 3: delete the oldest files (fallback mechanism)
t.Debug("StorageManagementTask", "phase 3: delete oldest files")
t.deleteOldestFile()
t.Debug("StorageManagementTask", "tick completed")
}
// migrateFiles migrates files from primary storage to secondary storage
func (t *StorageManagementTask) migrateFiles() {
// Only run migration when a migration threshold has been configured
if t.MigrationThresholdPercent <= 0 {
t.Debug("migrateFiles", "migration disabled", "threshold not configured or set to 0")
return
}
t.Debug("migrateFiles", "starting migration check,threshold", t.MigrationThresholdPercent)
// Collect all paths that need checking (deduplicated with a map)
pathMap := make(map[string]string) // primary path -> secondary path
if t.plugin.Server.Plugins.Length > 0 {
t.plugin.Server.Plugins.Range(func(plugin *m7s.Plugin) bool {
if len(plugin.GetCommonConf().OnPub.Record) > 0 {
for _, conf := range plugin.GetCommonConf().OnPub.Record {
// Only handle record configs that have a secondary path configured
if conf.SecondaryFilePath == "" {
t.Debug("migrateFiles", "skipping path without secondary storage,path", conf.FilePath)
continue
}
primaryPath := filepath.Dir(conf.FilePath)
secondaryPath := filepath.Dir(conf.SecondaryFilePath)
// Check whether this primary path was already seen
if existingSecondary, exists := pathMap[primaryPath]; exists {
if existingSecondary != secondaryPath {
t.Warn("migrateFiles", "duplicate primary path with different secondary paths",
"primary", primaryPath,
"existing secondary", existingSecondary,
"new secondary", secondaryPath)
} else {
t.Debug("migrateFiles", "duplicate path ignored,primary", primaryPath)
}
continue
}
pathMap[primaryPath] = secondaryPath
t.Debug("migrateFiles", "added path for migration check,primary", primaryPath, "secondary", secondaryPath)
}
}
return true
})
}
if len(pathMap) == 0 {
t.Debug("migrateFiles", "no secondary paths configured", "skipping migration")
return
}
t.Debug("migrateFiles", "checking paths count", len(pathMap))
// Iterate over each primary storage path
for primaryPath, secondaryPath := range pathMap {
usage := t.getDiskUsagePercent(primaryPath)
t.Debug("migrateFiles", "checking disk usage,path", primaryPath, "usage", usage, "threshold", t.MigrationThresholdPercent)
if usage < t.MigrationThresholdPercent {
t.Debug("migrateFiles", "usage below threshold,path", primaryPath, "skipping")
continue // below the migration threshold, skip
}
t.Info("migrateFiles", "migration triggered", "primary path", primaryPath, "secondary path", secondaryPath, "usage", usage, "threshold", t.MigrationThresholdPercent)
// Find the oldest completed recordings in primary storage (storage_level=1)
var recordStreams []m7s.RecordStream
basePath := strings.Replace(primaryPath, "\\", "\\\\", -1)
searchPattern := basePath + "%"
// Migrate several files per pass for efficiency
err := t.DB.Where("record_level!='high' AND end_time IS NOT NULL AND storage_level=1").
Where("file_path LIKE ?", searchPattern).
Order("end_time ASC").
Limit(10). // migrate up to 10 files per batch
Find(&recordStreams).Error
if err != nil {
t.Error("migrateFiles", "query records error", err)
continue
}
if len(recordStreams) == 0 {
t.Debug("migrateFiles", "no files to migrate", "path", primaryPath)
continue
}
t.Info("migrateFiles", "found files to migrate", "count", len(recordStreams), "path", primaryPath)
for _, record := range recordStreams {
oldPath := record.FilePath // capture before migrateFile rewrites record.FilePath
t.Debug("migrateFiles", "migrating file", "ID", record.ID, "filepath", oldPath, "endTime", record.EndTime)
if err := t.migrateFile(&record, primaryPath); err != nil {
t.Error("migrateFiles", "migrate file error", err, "ID", record.ID, "filepath", oldPath)
} else {
t.Info("migrateFiles", "file migrated successfully", "ID", record.ID, "from", oldPath, "to", record.FilePath)
}
}
}
t.Debug("migrateFiles", "migration check completed")
}
// migrateFile migrates a single file to secondary storage
func (t *StorageManagementTask) migrateFile(record *m7s.RecordStream, primaryPath string) error {
// Resolve the secondary storage path
secondaryPath := t.getSecondaryPath(primaryPath)
if secondaryPath == "" {
t.Debug("migrateFile", "no secondary path found", "primaryPath", primaryPath)
return fmt.Errorf("no secondary path configured for %s", primaryPath)
}
// Build the target path (preserving the relative path structure)
relativePath := strings.TrimPrefix(record.FilePath, primaryPath)
relativePath = strings.TrimPrefix(relativePath, string(filepath.Separator))
targetPath := filepath.Join(secondaryPath, relativePath)
t.Debug("migrateFile", "preparing migration", "from", record.FilePath, "to", targetPath)
// Create the target directory
targetDir := filepath.Dir(targetPath)
if err := os.MkdirAll(targetDir, 0755); err != nil {
t.Error("migrateFile", "create target directory failed", err, "dir", targetDir)
return fmt.Errorf("create target directory failed: %w", err)
}
t.Debug("migrateFile", "target directory created", "dir", targetDir)
// Move the file
if err := os.Rename(record.FilePath, targetPath); err != nil {
t.Debug("migrateFile", "rename failed, trying copy", "error", err)
// If the cross-disk move fails, fall back to copy-then-delete
if err := t.copyAndRemove(record.FilePath, targetPath); err != nil {
t.Error("migrateFile", "copy and remove failed", err)
return fmt.Errorf("move file failed: %w", err)
}
t.Debug("migrateFile", "file copied and removed")
} else {
t.Debug("migrateFile", "file renamed successfully")
}
// Update the database record
oldPath := record.FilePath
record.FilePath = targetPath
record.StorageLevel = 2
if err := t.DB.Save(record).Error; err != nil {
t.Error("migrateFile", "database update failed, rolling back", err)
// If the database update fails, try to roll back the file move
if rollbackErr := os.Rename(targetPath, oldPath); rollbackErr != nil {
t.Error("migrateFile", "rollback failed", rollbackErr, "file may be in inconsistent state")
} else {
t.Debug("migrateFile", "rollback successful")
}
return fmt.Errorf("update database failed: %w", err)
}
t.Debug("migrateFile", "database updated", "storageLevel", 2, "newPath", targetPath)
return nil
}
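To make the path mapping concrete, a small standalone sketch of the same TrimPrefix/Join logic, with illustrative paths:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func main() {
	primaryPath := "/data/record"     // filepath.Dir of the record FilePath config
	secondaryPath := "/backup/record" // filepath.Dir of SecondaryFilePath
	recordFile := "/data/record/live/cam1/1760431346.mp4"

	rel := strings.TrimPrefix(recordFile, primaryPath)
	rel = strings.TrimPrefix(rel, string(filepath.Separator))
	target := filepath.Join(secondaryPath, rel)
	fmt.Println(target) // /backup/record/live/cam1/1760431346.mp4
}
```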
// copyAndRemove copies the file and then deletes the original (used for cross-disk moves)
func (t *StorageManagementTask) copyAndRemove(src, dst string) error {
t.Debug("copyAndRemove", "starting cross-disk copy", "from", src, "to", dst)
// Stat the source file
srcInfo, err := os.Stat(src)
if err != nil {
return err
}
fileSize := srcInfo.Size()
t.Debug("copyAndRemove", "source file info", "size", fileSize)
// Open the source file
srcFile, err := os.Open(src)
if err != nil {
return err
}
defer srcFile.Close()
// Create the target file
dstFile, err := os.Create(dst)
if err != nil {
return err
}
defer dstFile.Close()
// Copy the file contents
t.Debug("copyAndRemove", "copying file content")
copiedBytes, err := io.Copy(dstFile, srcFile)
if err != nil {
os.Remove(dst) // copy failed, remove the target file
t.Error("copyAndRemove", "copy failed", err, "copiedBytes", copiedBytes)
return err
}
t.Debug("copyAndRemove", "file copied", "bytes", copiedBytes)
// Sync to disk
if err := dstFile.Sync(); err != nil {
t.Error("copyAndRemove", "sync failed", err)
return err
}
t.Debug("copyAndRemove", "synced to disk, removing source file")
// Remove the source file
if err := os.Remove(src); err != nil {
t.Error("copyAndRemove", "remove source file failed", err)
return err
}
t.Debug("copyAndRemove", "source file removed successfully")
return nil
}
// getSecondaryPath returns the secondary storage path configured for a primary path
func (t *StorageManagementTask) getSecondaryPath(primaryPath string) string {
if len(t.plugin.GetCommonConf().OnPub.Record) > 0 {
for _, conf := range t.plugin.GetCommonConf().OnPub.Record {
dirPath := filepath.Dir(conf.FilePath)
if dirPath == primaryPath && conf.SecondaryFilePath != "" {
return filepath.Dir(conf.SecondaryFilePath)
}
}
}
return ""
}
// getDiskUsagePercent returns the disk usage percentage
func (t *StorageManagementTask) getDiskUsagePercent(filePath string) float64 {
exePath := filepath.Dir(filePath)
d, err := disk.Usage(exePath)
if err != nil || d == nil {
return 0
}
return d.UsedPercent
}
// deleteExpiredFiles deletes expired recording files
func (t *StorageManagementTask) deleteExpiredFiles() {
if t.RecordFileExpireDays <= 0 {
return
}
//Search the event_records table for non-event recordings (event_id = 0), compare create_time against the current time, and if older than RecordFileExpireDays, mark is_delete=1 in the database and delete the recording file from disk
var records []m7s.RecordStream
expireTime := time.Now().AddDate(0, 0, -t.RecordFileExpireDays)
t.Debug("RecordFileExpireDays is set to auto delete oldestfile", "expireTime", expireTime.Format("2006-01-02 15:04:05"))
t.Debug("deleteExpiredFiles", "expireTime", expireTime.Format("2006-01-02 15:04:05"))
err := t.DB.Find(&records, "end_time < ? AND end_time IS NOT NULL", expireTime).Error
if err == nil {
for _, record := range records {
t.Info("RecordFileExpireDays is set to auto delete oldestfile", "ID", record.ID, "create time", record.EndTime, "filepath", record.FilePath)
t.Info("deleteExpiredFiles", "ID", record.ID, "endTime", record.EndTime, "filepath", record.FilePath)
err = os.Remove(record.FilePath)
if err != nil {
// Check whether the error means the file does not exist
if os.IsNotExist(err) {
// File missing: log it but treat the deletion as successful
t.Warn("RecordFileExpireDays set to auto delete oldestfile", "file does not exist, continuing with database deletion", record.FilePath)
t.Warn("deleteExpiredFiles", "file does not exist", record.FilePath)
} else {
// Other errors: log them and continue
t.Error("RecordFileExpireDays set to auto delete oldestfile", "delete file from disk error", err)
t.Error("deleteExpiredFiles", "delete file error", err)
}
}
// Delete the database record regardless of whether the file exists
err = t.DB.Delete(&record).Error
if err != nil {
t.Error("RecordFileExpireDays set to auto delete oldestfile", "delete record from db error", err)
t.Error("deleteExpiredFiles", "delete record from db error", err)
}
}
}

View File

@@ -18,14 +18,15 @@ import (
type MP4Plugin struct {
pb.UnimplementedApiServer
m7s.Plugin
BeforeDuration time.Duration `default:"30s" desc:"事件录像提前时长不配置则默认30s"`
AfterDuration time.Duration `default:"30s" desc:"事件录像结束时长不配置则默认30s"`
RecordFileExpireDays int `desc:"录像自动删除的天数,0或未设置表示不自动删除"`
DiskMaxPercent float64 `default:"90" desc:"硬盘使用百分之上限值,超上限后触发报警,并停止当前所有磁盘写入动作。"`
AutoOverWriteDiskPercent float64 `default:"0" desc:"自动覆盖功能磁盘占用上限值,超过上限时连续录像自动删除日有录像,事件录像自动删除非重要事件录像,删除规则为删除距离当日最久日期的连续录像或非重要事件录像。"`
AutoRecovery bool `default:"false" desc:"是否自动恢复"`
ExceptionPostUrl string `desc:"第三方异常上报地址"`
EventRecordFilePath string `desc:"事件录像存放地址"`
BeforeDuration time.Duration `default:"30s" desc:"事件录像提前时长不配置则默认30s"`
AfterDuration time.Duration `default:"30s" desc:"事件录像结束时长不配置则默认30s"`
RecordFileExpireDays int `desc:"录像自动删除的天数,0或未设置表示不自动删除"`
DiskMaxPercent float64 `default:"90" desc:"硬盘使用百分之上限值,超上限后触发报警,并停止当前所有磁盘写入动作。"`
AutoOverWriteDiskPercent float64 `default:"0" desc:"自动覆盖功能磁盘占用上限值,超过上限时连续录像自动删除日有录像,事件录像自动删除非重要事件录像,删除规则为删除距离当日最久日期的连续录像或非重要事件录像。"`
MigrationThresholdPercent float64 `default:"60" desc:"开始迁移到次级存储的磁盘使用率阈值,当主存储达到此阈值时自动将文件迁移到次级存储"`
AutoRecovery bool `default:"false" desc:"是否自动恢复"`
ExceptionPostUrl string `desc:"第三方异常上报地址"`
EventRecordFilePath string `desc:"事件录像存放地址"`
}
const defaultConfig m7s.DefaultYaml = `publish:
@@ -61,13 +62,14 @@ func (p *MP4Plugin) Start() (err error) {
return
}
if p.AutoOverWriteDiskPercent > 0 {
var deleteRecordTask DeleteRecordTask
deleteRecordTask.DB = p.DB
deleteRecordTask.DiskMaxPercent = p.DiskMaxPercent
deleteRecordTask.AutoOverWriteDiskPercent = p.AutoOverWriteDiskPercent
deleteRecordTask.RecordFileExpireDays = p.RecordFileExpireDays
deleteRecordTask.plugin = p
p.AddTask(&deleteRecordTask)
var storageTask StorageManagementTask
storageTask.DB = p.DB
storageTask.DiskMaxPercent = p.DiskMaxPercent
storageTask.AutoOverWriteDiskPercent = p.AutoOverWriteDiskPercent
storageTask.MigrationThresholdPercent = p.MigrationThresholdPercent
storageTask.RecordFileExpireDays = p.RecordFileExpireDays
storageTask.plugin = p
p.AddTask(&storageTask)
}
if p.AutoRecovery {
var recoveryTask RecordRecoveryTask

View File

@@ -65,6 +65,7 @@ func (t *writeTrailerTask) Run() (err error) {
// Copy the content that precedes the mdat box
_, err = io.CopyN(temp, t.file, int64(t.muxer.mdatOffset)-BeforeMdatData)
if err != nil {
t.Error("copy file", "err", err)
return
}
for _, track := range t.muxer.Tracks {
@@ -74,6 +75,7 @@ func (t *writeTrailerTask) Run() (err error) {
}
err = t.muxer.WriteMoov(temp)
if err != nil {
t.Error("rewrite with moov", "err", err)
return
}
// Copy the mdat box
@@ -136,7 +138,10 @@ var CustomFileName = func(job *m7s.RecordJob) string {
if job.RecConf.Fragment == 0 {
return fmt.Sprintf("%s.mp4", job.RecConf.FilePath)
}
return filepath.Join(job.RecConf.FilePath, fmt.Sprintf("%d.mp4", time.Now().Unix()))
// Use a nanosecond-resolution timestamp to avoid filename collisions when several segments are cut within the same second
// Format: seconds_nanoseconds.mp4 (e.g., 1760431346_123456789.mp4)
now := time.Now()
return filepath.Join(job.RecConf.FilePath, fmt.Sprintf("%d_%09d.mp4", now.Unix(), now.Nanosecond()))
}
func (r *Recorder) createStream(start time.Time) (err error) {
@@ -148,6 +153,10 @@ func (r *Recorder) createStream(start time.Time) (err error) {
return
}
// Note: do not close the old file here; it has already been handed to writeTrailerTask
// writeTrailerTask is responsible for closing the old file
// Create the new file directly and overwrite r.file
// Get the storage instance
storage := r.RecordJob.GetStorage()
@@ -159,7 +168,8 @@ func (r *Recorder) createStream(start time.Time) (err error) {
}
} else {
// Default local-file behavior
r.file, err = os.Create(r.Event.FilePath)
// Open with OpenFile in read-write mode, because writeTrailerTask.Run() needs to read the file contents
r.file, err = os.OpenFile(r.Event.FilePath, os.O_CREATE|os.O_RDWR|os.O_TRUNC, 0644)
if err != nil {
return
}
@@ -177,11 +187,13 @@ func (r *Recorder) createStream(start time.Time) (err error) {
func (r *Recorder) Dispose() {
if r.muxer != nil {
r.writeTailer(time.Now())
}
// Close the storage writer
if r.file != nil {
r.file.Close()
// Note: closing the file is handled by writeTrailerTask.Run()
// Do not close it here, to avoid closing the file before the async task has run
} else {
// Without a muxer, the file must be closed here
if r.file != nil {
r.file.Close()
}
}
}
@@ -284,18 +296,18 @@ func (r *Recorder) Run() (err error) {
if vt == nil {
vt = sub.VideoReader.Track
switch vt.ICodecCtx.GetBase().(type) {
switch video.ICodecCtx.GetBase().(type) {
case *codec.H264Ctx:
track := r.muxer.AddTrack(box.MP4_CODEC_H264)
videoTrack = track
track.ICodecCtx = vt.ICodecCtx
track.ICodecCtx = video.ICodecCtx
case *codec.H265Ctx:
track := r.muxer.AddTrack(box.MP4_CODEC_H265)
videoTrack = track
track.ICodecCtx = vt.ICodecCtx
track.ICodecCtx = video.ICodecCtx
}
}
ctx := vt.ICodecCtx.(pkg.IVideoCodecCtx)
ctx := video.ICodecCtx.(pkg.IVideoCodecCtx)
if videoTrackCtx, ok := videoTrack.ICodecCtx.(pkg.IVideoCodecCtx); ok && videoTrackCtx != ctx {
width, height := uint32(ctx.Width()), uint32(ctx.Height())
oldWidth, oldHeight := uint32(videoTrackCtx.Width()), uint32(videoTrackCtx.Height())
@@ -310,6 +322,17 @@ func (r *Recorder) Run() (err error) {
at, vt = nil, nil
if vr := sub.VideoReader; vr != nil {
vr.ResetAbsTime()
vt = vr.Track
switch video.ICodecCtx.GetBase().(type) {
case *codec.H264Ctx:
track := r.muxer.AddTrack(box.MP4_CODEC_H264)
videoTrack = track
track.ICodecCtx = video.ICodecCtx
case *codec.H265Ctx:
track := r.muxer.AddTrack(box.MP4_CODEC_H265)
videoTrack = track
track.ICodecCtx = video.ICodecCtx
}
}
if ar := sub.AudioReader; ar != nil {
ar.ResetAbsTime()

View File

@@ -73,9 +73,14 @@ func (track *Track) makeElstBox() *EditListBox {
// elst.entrys.entrys[0].mediaRateFraction = 0
// }
//For simplicity, mediaTime used to be fixed at 0, i.e. playback was not delayed
//Use the first sample's timestamp as MediaTime to keep the timeline aligned
firstTimestamp := track.Samplelist[0].Timestamp
firstCTS := track.Samplelist[0].CTS
mediaTime := int64(firstTimestamp) + int64(firstCTS)
entrys[entryCount-1].SegmentDuration = uint64(track.Duration)
entrys[entryCount-1].MediaTime = 0
// MediaTime should be the PTS of the first sample (DTS + CTS)
entrys[entryCount-1].MediaTime = mediaTime
entrys[entryCount-1].MediaRateInteger = 0x0001
entrys[entryCount-1].MediaRateFraction = 0
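A worked example with illustrative numbers: if the first sample has DTS 3000 and a CTS offset of 1500 in the track timescale, the edit list entry should point playback at PTS 4500 instead of 0, which is what keeps the audio and video timelines aligned:

```go
firstTimestamp := int64(3000) // Samplelist[0].Timestamp (DTS), illustrative
firstCTS := int64(1500)       // Samplelist[0].CTS offset, illustrative
mediaTime := firstTimestamp + firstCTS
fmt.Println(mediaTime) // 4500: the PTS the elst entry starts playback from
```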

View File

@@ -45,9 +45,7 @@ func (v *VideoFrame) Demux() (err error) {
// Mux implements pkg.IAVFrame.
func (v *VideoFrame) Mux(sample *pkg.Sample) (err error) {
v.InitRecycleIndexes(0)
if v.ICodecCtx == nil {
v.ICodecCtx = sample.GetBase()
}
v.ICodecCtx = sample.GetBase()
switch rawData := sample.Raw.(type) {
case *pkg.Nalus:
// Determine the size of the NALU length field from the codec type

View File

@@ -76,7 +76,7 @@ func (avcc *AudioFrame) Mux(fromBase *Sample) (err error) {
avcc.InitRecycleIndexes(1)
switch c := fromBase.GetBase().(type) {
case *codec.AACCtx:
if avcc.ICodecCtx == nil {
if avcc.ICodecCtx == nil || avcc.ICodecCtx.GetBase() != c {
ctx := &AACCtx{
AACCtx: c,
}
@@ -88,9 +88,7 @@ func (avcc *AudioFrame) Mux(fromBase *Sample) (err error) {
head[0], head[1] = 0xAF, 0x01
avcc.Push(audioData.Buffers...)
default:
if avcc.ICodecCtx == nil {
avcc.ICodecCtx = c
}
avcc.ICodecCtx = c
head := avcc.NextN(1)
head[0] = byte(ParseAudioCodec(c.FourCC()))<<4 | (1 << 1)
avcc.Push(audioData.Buffers...)
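Across these Mux implementations the recurring change is the guard condition: the codec context is rebuilt not only when it is nil but also whenever the base context no longer matches, so a mid-stream sequence-header change (new SPS/PPS, new AAC config) regenerates the cached sequence frame instead of silently reusing a stale one. A runnable toy sketch of the identity-comparison guard; all types here are stand-ins:

```go
package main

import "fmt"

// baseCtx stands in for the publisher-side codec context (codec.H264Ctx etc.).
type baseCtx struct{ sps string }

// muxCtx stands in for the muxer-side context that caches a sequence header.
type muxCtx struct {
	base *baseCtx
	seq  string
}

func main() {
	v1 := &baseCtx{sps: "sps-v1"}
	v2 := &baseCtx{sps: "sps-v2"} // a new sequence header arrives mid-stream
	var cur *muxCtx
	for _, b := range []*baseCtx{v1, v1, v2} {
		if cur == nil || cur.base != b { // same guard shape as in the diffs above
			cur = &muxCtx{base: b, seq: "seqhdr(" + b.sps + ")"} // rebuild on change
		}
		fmt.Println(cur.seq)
	}
	// Output: seqhdr(sps-v1), seqhdr(sps-v1), seqhdr(sps-v2)
}
```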

View File

@@ -23,42 +23,30 @@ var rtmpPullSteps = []pkg.StepDef{
{Name: pkg.StepStreaming, Description: "Receiving media stream"},
}
func (c *Client) Start() (err error) {
var addr string
if c.direction == DIRECTION_PULL {
// Initialize progress tracking for pull operations
c.pullCtx.SetProgressStepsDefs(rtmpPullSteps)
type Client struct {
NetStream
chunkSize int
u *url.URL
}
addr = c.pullCtx.Connection.RemoteURL
err = c.pullCtx.Publish()
if err != nil {
c.pullCtx.Fail(err.Error())
return
}
func (c *Client) GetPullJob() *m7s.PullJob {
return nil
}
c.pullCtx.GoToStepConst(pkg.StepURLParsing)
} else {
addr = c.pushCtx.Connection.RemoteURL
}
func (c *Client) GetPushJob() *m7s.PushJob {
return nil
}
func (c *Client) commonStart(addr string) (err error) {
c.u, err = url.Parse(addr)
if err != nil {
if c.direction == DIRECTION_PULL {
c.pullCtx.Fail(err.Error())
}
return
}
ps := strings.Split(c.u.Path, "/")
if len(ps) < 2 {
if c.direction == DIRECTION_PULL {
c.pullCtx.Fail("illegal rtmp url")
}
return errors.New("illegal rtmp url")
}
if c.direction == DIRECTION_PULL {
c.pullCtx.GoToStepConst(pkg.StepConnection)
}
isRtmps := c.u.Scheme == "rtmps"
if strings.Count(c.u.Host, ":") == 0 {
if isRtmps {
@@ -78,72 +66,19 @@ func (c *Client) Start() (err error) {
conn, err = net.Dial("tcp", c.u.Host)
}
if err != nil {
if c.direction == DIRECTION_PULL {
c.pullCtx.Fail(err.Error())
}
return err
}
if c.direction == DIRECTION_PULL {
c.pullCtx.GoToStepConst(pkg.StepHandshake)
}
c.Init(conn)
c.SetDescription("local", conn.LocalAddr().String())
c.Info("connect")
c.WriteChunkSize = c.chunkSize
c.AppName = strings.Join(ps[1:len(ps)-1], "/")
if c.direction == DIRECTION_PULL {
c.pullCtx.GoToStepConst(pkg.StepStreaming)
}
return err
}
const (
DIRECTION_PULL = "pull"
DIRECTION_PUSH = "push"
)
type Client struct {
NetStream
chunkSize int
pullCtx m7s.PullJob
pushCtx m7s.PushJob
direction string
u *url.URL
}
func (c *Client) GetPullJob() *m7s.PullJob {
return &c.pullCtx
}
func (c *Client) GetPushJob() *m7s.PushJob {
return &c.pushCtx
}
func NewPuller(_ config.Pull) m7s.IPuller {
ret := &Client{
direction: DIRECTION_PULL,
chunkSize: 4096,
}
ret.NetConnection = &NetConnection{}
ret.SetDescription(task.OwnerTypeKey, "RTMPPuller")
return ret
}
func NewPusher() m7s.IPusher {
ret := &Client{
direction: DIRECTION_PUSH,
chunkSize: 4096,
}
ret.NetConnection = &NetConnection{}
ret.SetDescription(task.OwnerTypeKey, "RTMPPusher")
return ret
}
func (c *Client) Run() (err error) {
func (c *Client) commonRun(handler func(commander Commander) error) (err error) {
if err = c.ClientHandshake(); err != nil {
return
}
@@ -171,6 +106,7 @@ func (c *Client) Run() (err error) {
return err
}
cmd := commander.GetCommand()
c.Debug(cmd.CommandName)
switch cmd.CommandName {
case Response_Result, Response_OnStatus:
switch response := commander.(type) {
@@ -185,66 +121,149 @@ func (c *Client) Run() (err error) {
}
case *ResponseCreateStreamMessage:
c.StreamID = response.StreamId
if c.direction == DIRECTION_PULL {
m := &PlayMessage{}
m.StreamId = response.StreamId
m.TransactionId = 4
m.CommandMessage.CommandName = "play"
URL, _ := url.Parse(c.pullCtx.Connection.RemoteURL)
ps := strings.Split(URL.Path, "/")
args := URL.Query()
m.StreamName = ps[len(ps)-1]
if len(args) > 0 {
m.StreamName += "?" + args.Encode()
if handler != nil {
if err = handler(commander); err != nil {
return err
}
if c.pullCtx.Publisher != nil {
c.Writers[response.StreamId] = &struct {
m7s.PublishWriter[*AudioFrame, *VideoFrame]
*m7s.Publisher
}{Publisher: c.pullCtx.Publisher}
}
err = c.SendMessage(RTMP_MSG_AMF0_COMMAND, m)
// if response, ok := msg.MsgData.(*ResponsePlayMessage); ok {
// if response.Object["code"] == "NetStream.Play.Start" {
// } else if response.Object["level"] == Level_Error {
// return errors.New(response.Object["code"].(string))
// }
// } else {
// return errors.New("pull faild")
// }
} else {
err = c.pushCtx.Subscribe()
if err != nil {
return
}
URL, _ := url.Parse(c.pushCtx.Connection.RemoteURL)
_, streamPath, _ := strings.Cut(URL.Path, "/")
_, streamPath, _ = strings.Cut(streamPath, "/")
args := URL.Query()
if len(args) > 0 {
streamPath += "?" + args.Encode()
}
err = c.SendMessage(RTMP_MSG_AMF0_COMMAND, &PublishMessage{
CURDStreamMessage{
CommandMessage{
"publish",
1,
},
response.StreamId,
},
streamPath,
"live",
})
}
case *ResponsePublishMessage:
if response.Infomation["code"] == NetStream_Publish_Start {
c.Subscribe(c.pushCtx.Subscriber)
} else {
return errors.New(response.Infomation["code"].(string))
}
}
}
}
return
}
type Puller struct {
Client
pullCtx m7s.PullJob
}
func (p *Puller) GetPullJob() *m7s.PullJob {
return &p.pullCtx
}
func (p *Puller) Start() (err error) {
// Initialize progress tracking for pull operations
p.pullCtx.SetProgressStepsDefs(rtmpPullSteps)
addr := p.pullCtx.Connection.RemoteURL
err = p.pullCtx.Publish()
if err != nil {
p.pullCtx.Fail(err.Error())
return
}
p.pullCtx.GoToStepConst(pkg.StepURLParsing)
err = p.commonStart(addr)
if err != nil {
p.pullCtx.Fail(err.Error())
return
}
p.pullCtx.GoToStepConst(pkg.StepConnection)
p.pullCtx.GoToStepConst(pkg.StepHandshake)
p.pullCtx.GoToStepConst(pkg.StepStreaming)
return
}
func (p *Puller) Run() (err error) {
return p.commonRun(func(commander Commander) error {
switch response := commander.(type) {
case *ResponseCreateStreamMessage:
p.StreamID = response.StreamId
m := &PlayMessage{}
m.StreamId = response.StreamId
m.TransactionId = 4
m.CommandMessage.CommandName = "play"
URL, _ := url.Parse(p.pullCtx.Connection.RemoteURL)
ps := strings.Split(URL.Path, "/")
args := URL.Query()
m.StreamName = ps[len(ps)-1]
if len(args) > 0 {
m.StreamName += "?" + args.Encode()
}
if p.pullCtx.Publisher != nil {
p.Writers[response.StreamId] = &struct {
m7s.PublishWriter[*AudioFrame, *VideoFrame]
*m7s.Publisher
}{Publisher: p.pullCtx.Publisher}
}
return p.SendMessage(RTMP_MSG_AMF0_COMMAND, m)
}
return nil
})
}
type Pusher struct {
Client
pushCtx m7s.PushJob
}
func (p *Pusher) GetPushJob() *m7s.PushJob {
return &p.pushCtx
}
func (p *Pusher) Start() (err error) {
return p.commonStart(p.pushCtx.Connection.RemoteURL)
}
func (p *Pusher) Run() (err error) {
return p.commonRun(func(commander Commander) error {
switch response := commander.(type) {
case *ResponseCreateStreamMessage:
p.StreamID = response.StreamId
err = p.pushCtx.Subscribe()
if err != nil {
return err
}
URL, _ := url.Parse(p.pushCtx.Connection.RemoteURL)
_, streamPath, _ := strings.Cut(URL.Path, "/")
_, streamPath, _ = strings.Cut(streamPath, "/")
args := URL.Query()
if len(args) > 0 {
streamPath += "?" + args.Encode()
}
return p.SendMessage(RTMP_MSG_AMF0_COMMAND, &PublishMessage{
CURDStreamMessage{
CommandMessage{
"publish",
1,
},
response.StreamId,
},
streamPath,
"live",
})
case *ResponsePublishMessage:
if response.Infomation["code"] == NetStream_Publish_Start {
p.Subscribe(p.pushCtx.Subscriber)
} else {
return errors.New(response.Infomation["code"].(string))
}
}
return nil
})
}
func NewPuller(_ config.Pull) m7s.IPuller {
ret := &Puller{
Client: Client{
chunkSize: 4096,
},
}
ret.NetConnection = &NetConnection{}
ret.SetDescription(task.OwnerTypeKey, "RTMPPuller")
return ret
}
func NewPusher() m7s.IPusher {
ret := &Pusher{
Client: Client{
chunkSize: 4096,
},
}
ret.NetConnection = &NetConnection{}
ret.SetDescription(task.OwnerTypeKey, "RTMPPusher")
return ret
}
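The refactor replaces the single Client with runtime `direction` branching by two wrapper types that embed Client: each wrapper owns its job context and direction-specific Start/Run, while URL parsing, dialing, and command dispatch stay in commonStart/commonRun. Reduced to a sketch with illustrative bodies:

```go
// Sketch of the split: shared transport logic stays on Client,
// direction-specific behavior moves to the embedding wrapper types.
type Client struct{ /* shared connection state */ }

func (c *Client) commonStart(addr string) error { /* parse URL, dial, handshake */ return nil }
func (c *Client) commonRun(onCmd func() error) error { /* shared dispatch loop */ return nil }

type Puller struct{ Client } // holds the pull-side job context
type Pusher struct{ Client } // holds the push-side job context

func (p *Puller) Run() error { return p.commonRun(func() error { /* send "play" */ return nil }) }
func (p *Pusher) Run() error { return p.commonRun(func() error { /* send "publish" */ return nil }) }
```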

View File

@@ -91,7 +91,8 @@ func NewNetConnection(conn net.Conn) (ret *NetConnection) {
func (nc *NetConnection) Init(conn net.Conn) {
nc.Conn = conn
nc.BufReader = util.NewBufReaderWithTimeout(conn, 30*time.Second)
nc.BufReader = util.NewBufReader(conn)
nc.BufReader.SetTimeout(time.Second * 30)
nc.bandwidth = RTMP_MAX_CHUNK_SIZE << 3
nc.ReadChunkSize = RTMP_DEFAULT_CHUNK_SIZE
nc.WriteChunkSize = RTMP_DEFAULT_CHUNK_SIZE
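The constructor change swaps the timeout-specific constructor for the plain one plus an explicit `SetTimeout`, matching the updated BufReader API. The new idiom, as a fragment (endpoint illustrative, error handling elided):

```go
conn, _ := net.Dial("tcp", "127.0.0.1:1935") // illustrative endpoint
nc := util.NewBufReader(conn)
nc.SetTimeout(30 * time.Second) // BufReader now applies the read deadline itself
```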

View File

@@ -309,7 +309,7 @@ func (avcc *VideoFrame) Mux(fromBase *Sample) (err error) {
case *AV1Ctx:
panic(c)
case *codec.H264Ctx:
if avcc.ICodecCtx == nil {
if avcc.ICodecCtx == nil || avcc.GetBase() != c {
ctx := &H264Ctx{H264Ctx: c}
ctx.SequenceFrame.PushOne(append([]byte{0x17, 0, 0, 0, 0}, c.Record...))
ctx.SequenceFrame.BaseSample = &BaseSample{}
@@ -318,7 +318,7 @@ func (avcc *VideoFrame) Mux(fromBase *Sample) (err error) {
avcc.muxOld26x(CodecID_H264, fromBase)
case *codec.H265Ctx:
if true {
if avcc.ICodecCtx == nil {
if avcc.ICodecCtx == nil || avcc.GetBase() != c {
ctx := &H265Ctx{H265Ctx: c, Enhanced: true}
b := make(util.Buffer, len(ctx.Record)+5)
if ctx.Enhanced {

View File

@@ -273,7 +273,7 @@ func (r *AudioFrame) Mux(from *Sample) (err error) {
switch base := from.GetBase().(type) {
case *codec.AACCtx:
var c *AACCtx
if r.ICodecCtx == nil {
if r.ICodecCtx == nil || r.ICodecCtx.GetBase() != base {
c = &AACCtx{}
c.SSRC = uint32(uintptr(unsafe.Pointer(&ctx)))
c.AACCtx = base
@@ -306,7 +306,7 @@ func (r *AudioFrame) Mux(from *Sample) (err error) {
lastPacket.Header.Marker = true
return
case *codec.PCMACtx:
if r.ICodecCtx == nil {
if r.ICodecCtx == nil || r.ICodecCtx != base {
var ctx PCMACtx
ctx.SSRC = uint32(uintptr(unsafe.Pointer(&ctx)))
ctx.PCMACtx = base
@@ -317,7 +317,7 @@ func (r *AudioFrame) Mux(from *Sample) (err error) {
}
ctx = &r.ICodecCtx.(*PCMACtx).RTPCtx
case *codec.PCMUCtx:
if r.ICodecCtx == nil {
if r.ICodecCtx == nil || r.ICodecCtx != base {
var ctx PCMUCtx
ctx.SSRC = uint32(uintptr(unsafe.Pointer(&ctx)))
ctx.PCMUCtx = base

View File

@@ -258,8 +258,8 @@ func (av1 *AV1Ctx) GetInfo() string {
func (r *VideoFrame) Mux(baseFrame *Sample) error {
// Get the codec context
codecCtx := r.ICodecCtx
if codecCtx == nil {
switch base := baseFrame.GetBase().(type) {
if baseCtx := baseFrame.GetBase(); codecCtx == nil || codecCtx.GetBase() != baseCtx {
switch base := baseCtx.(type) {
case *codec.H264Ctx:
var ctx H264Ctx
ctx.H264Ctx = base

273
plugin/rtsp/README.md Normal file
View File

@@ -0,0 +1,273 @@
# RTSP Plugin
The RTSP plugin provides complete RTSP server and client functionality for Monibuca, enabling RTSP stream publishing, playback, and proxying.
## Features
- **RTSP Server**: Accept RTSP client connections for stream publishing and playback
- **RTSP Client**: Pull streams from remote RTSP sources
- **Dual Transport Modes**: Support both TCP and UDP transport protocols
- **Authentication**: Built-in username/password authentication
- **Bidirectional Proxy**: Support for both pull and push proxying
- **Standard Compliance**: Implements the RTSP protocol (RFC 2326 / RFC 7826)
## Configuration
```yaml
rtsp:
tcp:
listenaddr: :554 # RTSP server listening address
username: "" # Authentication username (optional)
password: "" # Authentication password (optional)
udpport: 20001-30000 # UDP port range for media transmission
```
### Configuration Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `tcp.listenaddr` | string | `:554` | RTSP server listening address and port |
| `username` | string | `""` | Authentication username (empty = no auth) |
| `password` | string | `""` | Authentication password |
| `udpport` | range | `20001-30000` | UDP port range for RTP/RTCP transmission |
## Usage
### Pull RTSP Stream
Pull a remote RTSP stream into Monibuca:
```yaml
rtsp:
pull:
camera1:
url: rtsp://admin:password@192.168.1.100/stream
```
Or use the unified API:
```bash
curl -X POST http://localhost:8080/api/stream/pull \
-H "Content-Type: application/json" \
-d '{
"protocol": "rtsp",
"streamPath": "camera1",
"remoteURL": "rtsp://admin:password@192.168.1.100/stream"
}'
```
### Push Stream via RTSP
Push a stream from Monibuca to a remote RTSP server:
```yaml
rtsp:
push:
camera1:
target: rtsp://192.168.1.200/live/stream
```
### Publish Stream to Monibuca RTSP Server
Use FFmpeg or other RTSP clients to publish streams:
```bash
ffmpeg -re -i input.mp4 -c copy -f rtsp rtsp://localhost:554/live/stream
```
### Play Stream from Monibuca RTSP Server
Use any RTSP client to play streams:
```bash
ffplay rtsp://localhost:554/live/stream
```
Or with VLC, OBS, or other RTSP-compatible players.
## Transport Modes
### TCP Transport (Interleaved Mode)
- More reliable, works through firewalls
- Higher latency
- Automatic fallback option
### UDP Transport
- Lower latency
- Better for local networks
- Requires open UDP port range
- RTP/RTCP use separate ports
## RTSP Methods Supported
| Method | Direction | Description |
|--------|-----------|-------------|
| OPTIONS | Both | Query supported methods |
| DESCRIBE | Pull | Get stream SDP information |
| ANNOUNCE | Push | Declare stream for publishing |
| SETUP | Both | Setup transport parameters |
| PLAY | Pull | Start playing stream |
| RECORD | Push | Start recording/publishing |
| TEARDOWN | Both | Close connection |
## Authentication
When username and password are configured, the server requires HTTP Basic Authentication:
```yaml
rtsp:
username: admin
password: secret123
```
Clients must provide credentials in the URL:
```
rtsp://admin:secret123@localhost:554/live/stream
```
## Advanced Features
### Pull Proxy
Automatically pull and cache remote RTSP streams. Configure under the `global` node:
```yaml
global:
pullproxy:
- id: 1 # Unique ID, must be > 0
name: "camera-1" # Pull proxy name
type: "rtsp" # Protocol type
streampath: "live/camera1" # Stream path in Monibuca
pullonstart: true # Auto-pull on startup
pull:
url: "rtsp://admin:password@192.168.1.100/stream"
description: "Front door camera"
- id: 2
name: "camera-2"
type: "rtsp"
streampath: "live/camera2"
pullonstart: false
pull:
url: "rtsp://admin:password@192.168.1.101/stream"
```
Or use the API to manage pull proxies dynamically (see API Endpoints section below).
### Push Proxy
Automatically push streams to remote RTSP servers. Configure under the `global` node:
```yaml
global:
pushproxy:
- id: 1 # Unique ID, must be > 0
name: "push-1" # Push proxy name
type: "rtsp" # Protocol type
streampath: "live/stream1" # Source stream path
pushonstart: true # Auto-push on startup
push:
url: "rtsp://192.168.1.200/live/stream1"
description: "Push to remote server"
```
Or use the API to manage push proxies dynamically (see API Endpoints section below).
## Compatibility
### Tested Devices/Software
- ✅ FFmpeg
- ✅ VLC Media Player
- ✅ OBS Studio
- ✅ ONVIF-compliant devices
### Known Issues
See [BAD_DEVICE.md](BAD_DEVICE.md) for devices with non-standard RTSP implementations.
## Codec Support
The plugin transparently passes through codec information. Supported codecs depend on the source and destination:
**Video**: H.264, H.265/HEVC, MPEG-4, MJPEG
**Audio**: AAC, G.711 (PCMA/PCMU), G.726, MP3, OPUS
## Performance Tips
- Use TCP transport for streams over the internet
- Use UDP transport for local network streams (lower latency)
- Adjust UDP port range based on concurrent stream count
- Enable authentication to prevent unauthorized access
- For high-concurrency scenarios, consider hardware transcoding
## Troubleshooting
### Connection Refused
- Check if port 554 requires root/administrator privileges
- Try a different port (e.g., 8554)
- Verify firewall settings
### No Video/Audio
- Check codec compatibility between source and client
- Verify SDP information in logs
- Test with VLC or FFplay to isolate issues
### UDP Packet Loss
- Increase UDP port range
- Switch to TCP transport
- Check network quality and bandwidth
## API Endpoints
### Pull Stream
Use the unified pull API with `protocol` set to `rtsp`:
```bash
POST /api/stream/pull
Content-Type: application/json
{
"protocol": "rtsp",
"streamPath": "camera1",
"remoteURL": "rtsp://admin:password@192.168.1.100/stream",
"pubAudio": true,
"pubVideo": true
}
```
**Parameters:**
- `protocol` (required): Set to `"rtsp"`
- `streamPath` (required): Local stream path in Monibuca
- `remoteURL` (required): Remote RTSP URL to pull from
- `pubAudio` (optional): Enable audio publishing
- `pubVideo` (optional): Enable video publishing
- `testMode` (optional): 0 = normal pull, 1 = pull without publishing
- Additional publish configuration options available (see GlobalPullRequest in global.proto)
### Stop Stream
```bash
POST /api/stream/stop/{streamPath}
```
### Manage Pull/Push Proxy
- `GET /api/proxy/pull/list` - List all pull proxies
- `POST /api/proxy/pull/add` - Add a pull proxy
- `POST /api/proxy/pull/update` - Update a pull proxy
- `POST /api/proxy/pull/remove/{id}` - Remove a pull proxy
- `GET /api/proxy/push/list` - List all push proxies
- `POST /api/proxy/push/add` - Add a push proxy
- `POST /api/proxy/push/update` - Update a push proxy
- `POST /api/proxy/push/remove/{id}` - Remove a push proxy
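For example, adding a pull proxy from Go over the HTTP API; the JSON field names are assumed to mirror the YAML pull-proxy configuration shown earlier, so treat the body as a sketch rather than an authoritative schema:

```go
package main

import (
	"bytes"
	"log"
	"net/http"
)

func main() {
	// Field names assumed to mirror the YAML pull-proxy config above.
	body := []byte(`{
	  "id": 1,
	  "name": "camera-1",
	  "type": "rtsp",
	  "streamPath": "live/camera1",
	  "pullOnStart": true,
	  "pull": {"url": "rtsp://admin:password@192.168.1.100/stream"}
	}`)
	resp, err := http.Post("http://localhost:8080/api/proxy/pull/add", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}
```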
## Acknowledgments
This plugin references code and implementation ideas from the excellent [go2rtc](https://github.com/AlexxIT/go2rtc) project by AlexxIT, which provides a comprehensive media server solution with advanced streaming capabilities.
## License
This plugin is part of the Monibuca project and follows the same license terms.

273
plugin/rtsp/README_CN.md Normal file
View File

@@ -0,0 +1,273 @@
# RTSP 插件
RTSP 插件为 Monibuca 提供完整的 RTSP 服务器和客户端功能,支持 RTSP 流的发布、播放和代理。
## 功能特性
- **RTSP 服务器**:接受 RTSP 客户端连接,支持流发布和播放
- **RTSP 客户端**:从远程 RTSP 源拉取流
- **双传输模式**:支持 TCP 和 UDP 两种传输协议
- **身份认证**:内置用户名/密码认证功能
- **双向代理**:支持拉流和推流代理
- **标准兼容**:实现 RTSP 协议标准 (RFC 2326/RFC 7826)
## 配置说明
```yaml
rtsp:
tcp:
listenaddr: :554 # RTSP 服务器监听地址
username: "" # 认证用户名(可选)
password: "" # 认证密码(可选)
udpport: 20001-30000 # 媒体传输的 UDP 端口范围
```
### 配置参数
| 参数 | 类型 | 默认值 | 说明 |
|------|------|--------|------|
| `tcp.listenaddr` | string | `:554` | RTSP 服务器监听地址和端口 |
| `username` | string | `""` | 认证用户名(为空表示不启用认证)|
| `password` | string | `""` | 认证密码 |
| `udpport` | range | `20001-30000` | RTP/RTCP 传输使用的 UDP 端口范围 |
## 使用方法
### 拉取 RTSP 流
将远程 RTSP 流拉取到 Monibuca
```yaml
rtsp:
pull:
camera1:
url: rtsp://admin:password@192.168.1.100/stream
```
或使用统一的 API
```bash
curl -X POST http://localhost:8080/api/stream/pull \
-H "Content-Type: application/json" \
-d '{
"protocol": "rtsp",
"streamPath": "camera1",
"remoteURL": "rtsp://admin:password@192.168.1.100/stream"
}'
```
### 通过 RTSP 推流
将 Monibuca 中的流推送到远程 RTSP 服务器:
```yaml
rtsp:
push:
camera1:
target: rtsp://192.168.1.200/live/stream
```
### 向 Monibuca RTSP 服务器发布流
使用 FFmpeg 或其他 RTSP 客户端发布流:
```bash
ffmpeg -re -i input.mp4 -c copy -f rtsp rtsp://localhost:554/live/stream
```
### 从 Monibuca RTSP 服务器播放流
使用任何 RTSP 客户端播放流:
```bash
ffplay rtsp://localhost:554/live/stream
```
或使用 VLC、OBS 等任何支持 RTSP 的播放器。
## 传输模式
### TCP 传输(交织模式)
- 更可靠,可穿透防火墙
- 延迟较高
- 自动回退选项
### UDP 传输
- 延迟更低
- 适合局域网环境
- 需要开放 UDP 端口范围
- RTP/RTCP 使用独立端口
## 支持的 RTSP 方法
| 方法 | 方向 | 说明 |
|------|------|------|
| OPTIONS | 双向 | 查询支持的方法 |
| DESCRIBE | 拉流 | 获取流的 SDP 信息 |
| ANNOUNCE | 推流 | 声明要发布的流 |
| SETUP | 双向 | 设置传输参数 |
| PLAY | 拉流 | 开始播放流 |
| RECORD | 推流 | 开始录制/发布 |
| TEARDOWN | 双向 | 关闭连接 |
## 身份认证
配置用户名和密码后,服务器将要求 HTTP 基本认证:
```yaml
rtsp:
username: admin
password: secret123
```
客户端必须在 URL 中提供凭据:
```
rtsp://admin:secret123@localhost:554/live/stream
```
## 高级功能
### 拉流代理
自动拉取和缓存远程 RTSP 流。需要在 `global` 节点下配置:
```yaml
global:
pullproxy:
- id: 1 # 唯一ID标识必须大于0
name: "camera-1" # 拉流代理名称
type: "rtsp" # 拉流协议类型
streampath: "live/camera1" # 在Monibuca中的流路径
pullonstart: true # 是否在启动时自动拉流
pull:
url: "rtsp://admin:password@192.168.1.100/stream"
description: "前门摄像头"
- id: 2
name: "camera-2"
type: "rtsp"
streampath: "live/camera2"
pullonstart: false
pull:
url: "rtsp://admin:password@192.168.1.101/stream"
```
或使用 API 动态管理拉流代理(参见下方 API 接口部分)。
### 推流代理
自动将流推送到远程 RTSP 服务器。需要在 `global` 节点下配置:
```yaml
global:
pushproxy:
- id: 1 # 唯一ID标识必须大于0
name: "push-1" # 推流代理名称
type: "rtsp" # 推流协议类型
streampath: "live/stream1" # 源流路径
pushonstart: true # 是否在启动时自动推流
push:
url: "rtsp://192.168.1.200/live/stream1"
description: "推送到远程服务器"
```
或使用 API 动态管理推流代理(参见下方 API 接口部分)。
## 兼容性
### 已测试设备/软件
- ✅ FFmpeg
- ✅ VLC Media Player
- ✅ OBS Studio
- ✅ ONVIF 兼容设备
### 已知问题
查看 [BAD_DEVICE.md](BAD_DEVICE.md) 了解非标准 RTSP 实现的设备。
## 编解码器支持
插件透明传递编解码器信息。支持的编解码器取决于源和目标:
**视频**H.264、H.265/HEVC、MPEG-4、MJPEG
**音频**AAC、G.711 (PCMA/PCMU)、G.726、MP3、OPUS
## 性能优化建议
- 互联网传输使用 TCP 模式
- 局域网传输使用 UDP 模式(延迟更低)
- 根据并发流数量调整 UDP 端口范围
- 启用身份认证防止未授权访问
- 高并发场景考虑使用硬件转码
## 故障排查
### 连接被拒绝
- 检查 554 端口是否需要 root/管理员权限
- 尝试使用其他端口(如 8554
- 检查防火墙设置
### 无视频/音频
- 检查源和客户端之间的编解码器兼容性
- 查看日志中的 SDP 信息
- 使用 VLC 或 FFplay 测试以隔离问题
### UDP 丢包
- 增加 UDP 端口范围
- 切换到 TCP 传输
- 检查网络质量和带宽
## API 接口
### 拉取流
使用统一的拉流 API`protocol` 设置为 `rtsp`
```bash
POST /api/stream/pull
Content-Type: application/json
{
"protocol": "rtsp",
"streamPath": "camera1",
"remoteURL": "rtsp://admin:password@192.168.1.100/stream",
"pubAudio": true,
"pubVideo": true
}
```
**参数说明:**
- `protocol` (必填): 设置为 `"rtsp"`
- `streamPath` (必填): Monibuca 中的本地流路径
- `remoteURL` (必填): 要拉取的远程 RTSP URL
- `pubAudio` (可选): 是否发布音频
- `pubVideo` (可选): 是否发布视频
- `testMode` (可选): 0 = 正常拉流1 = 拉流但不发布
- 其他发布配置选项可用(参见 global.proto 中的 GlobalPullRequest
### 停止流
```bash
POST /api/stream/stop/{streamPath}
```
### 管理拉流/推流代理
- `GET /api/proxy/pull/list` - 列出所有拉流代理
- `POST /api/proxy/pull/add` - 添加拉流代理
- `POST /api/proxy/pull/update` - 更新拉流代理
- `POST /api/proxy/pull/remove/{id}` - 删除拉流代理
- `GET /api/proxy/push/list` - 列出所有推流代理
- `POST /api/proxy/push/add` - 添加推流代理
- `POST /api/proxy/push/update` - 更新推流代理
- `POST /api/proxy/push/remove/{id}` - 删除推流代理
## 致谢
本插件参考了 AlexxIT 开发的优秀项目 [go2rtc](https://github.com/AlexxIT/go2rtc) 的代码和实现思路,该项目提供了一个功能全面的媒体服务器解决方案,具有先进的流媒体处理能力。
## 许可证
本插件是 Monibuca 项目的一部分,遵循相同的许可证条款。

View File

@@ -22,12 +22,14 @@ import (
const Timeout = time.Second * 10
func NewNetConnection(conn net.Conn) *NetConnection {
return &NetConnection{
c := &NetConnection{
Conn: conn,
BufReader: util.NewBufReaderWithTimeout(conn, Timeout),
BufReader: util.NewBufReader(conn),
MemoryAllocator: gomem.NewScalableMemoryAllocator(1 << 12),
UserAgent: "monibuca" + m7s.Version,
}
c.BufReader.SetTimeout(Timeout)
return c
}
type NetConnection struct {
@@ -143,6 +145,7 @@ func (c *NetConnection) Connect(remoteURL string) (err error) {
}
c.Conn = conn
c.BufReader = util.NewBufReader(conn)
c.BufReader.SetTimeout(Timeout)
c.UserAgent = "monibuca" + m7s.Version
c.Session = ""
c.Auth = util.NewAuth(rtspURL.User)
@@ -255,9 +258,7 @@ func (c *NetConnection) Receive(sendMode bool, onReceive func(byte, []byte) erro
return
}
ts := time.Now()
if err = c.Conn.SetReadDeadline(ts.Add(util.Conditional(sendMode, time.Second*60, time.Second*15))); err != nil {
return
}
var magic []byte
// we can read:
// 1. RTP interleaved: `$` + 1B channel number + 2B size

View File

@@ -378,7 +378,7 @@ func (s *Sender) Send() (err error) {
}
}()
}
s.BufReader.SetTimeout(60 * time.Second)
// Receive loop (handles messages sent by the client)
return s.NetConnection.Receive(true, nil, nil)
}

View File

@@ -152,8 +152,8 @@ func (task *RTSPServer) Go() (err error) {
Request: req,
}
// TCP transport mode
const tcpTransport = "RTP/AVP/TCP;unicast;interleaved="
// TCP transport mode - accommodate Transport values that include a mode field
const tcpTransport = "RTP/AVP/TCP"
// UDP transport mode prefix
const udpTransport = "RTP/AVP"
@@ -164,13 +164,13 @@ func (task *RTSPServer) Go() (err error) {
if sendMode {
if i := reqTrackID(req); i >= 0 {
tr = fmt.Sprintf("RTP/AVP/TCP;unicast;interleaved=%d-%d", i*2, i*2+1)
tr = fmt.Sprintf("RTP/AVP/TCP;unicast;mode=record;interleaved=%d-%d", i*2, i*2+1)
res.Header.Set("Transport", tr)
} else {
res.Status = "400 Bad Request"
}
} else {
res.Header.Set("Transport", tr[:len(tcpTransport)+3])
res.Header.Set("Transport", tr)
}
} else if strings.HasPrefix(tr, udpTransport) && strings.Contains(tr, "unicast") && strings.Contains(tr, "client_port=") {
task.Debug("into udp play")
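The effect is easiest to see in the Transport header itself: in record (push) mode the server now advertises the mode field alongside the interleaved channel pair, and in play mode it echoes the client's full Transport value rather than a truncated prefix. The record-mode reply for track 0, mirroring the fmt.Sprintf above (track ID illustrative):

```go
i := 0 // track ID, illustrative
tr := fmt.Sprintf("RTP/AVP/TCP;unicast;mode=record;interleaved=%d-%d", i*2, i*2+1)
fmt.Println(tr) // RTP/AVP/TCP;unicast;mode=record;interleaved=0-1
```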

View File

@@ -40,19 +40,20 @@ type (
Event EventRecordStream
}
RecordStream struct {
ID uint `gorm:"primarykey"`
StartTime time.Time `gorm:"default:NULL"`
EndTime time.Time `gorm:"default:NULL"`
Duration uint32 `gorm:"comment:录像时长;default:0"`
Filename string `json:"fileName" desc:"文件名" gorm:"type:varchar(255);comment:文件名"`
Type string `json:"type" desc:"录像文件类型" gorm:"type:varchar(255);comment:录像文件类型,flv,mp4,raw,fmp4,hls"`
FilePath string
StreamPath string
AudioCodec string
VideoCodec string
CreatedAt time.Time
DeletedAt gorm.DeletedAt `gorm:"index" yaml:"-"`
RecordLevel config.EventLevel `json:"eventLevel" desc:"事件级别" gorm:"type:varchar(255);comment:事件级别,high表示重要事件无法删除且表示无需自动删除,low表示非重要事件,达到自动删除时间后,自动删除;default:'low'"`
ID uint `gorm:"primarykey"`
StartTime time.Time `gorm:"default:NULL"`
EndTime time.Time `gorm:"default:NULL"`
Duration uint32 `gorm:"comment:录像时长;default:0"`
Filename string `json:"fileName" desc:"文件名" gorm:"type:varchar(255);comment:文件名"`
Type string `json:"type" desc:"录像文件类型" gorm:"type:varchar(255);comment:录像文件类型,flv,mp4,raw,fmp4,hls"`
FilePath string
StreamPath string
AudioCodec string
VideoCodec string
CreatedAt time.Time
DeletedAt gorm.DeletedAt `gorm:"index" yaml:"-"`
RecordLevel config.EventLevel `json:"eventLevel" desc:"事件级别" gorm:"type:varchar(255);comment:事件级别,high表示重要事件无法删除且表示无需自动删除,low表示非重要事件,达到自动删除时间后,自动删除;default:'low'"`
StorageLevel int `json:"storageLevel" desc:"存储级别" gorm:"comment:存储级别,1=主存储,2=次级存储;default:1"`
}
)
@@ -78,10 +79,11 @@ func (r *DefaultRecorder) CreateStream(start time.Time, customFileName func(*Rec
}
r.Event.RecordStream = RecordStream{
StartTime: start,
StreamPath: sub.StreamPath,
FilePath: filePath,
Type: recordJob.RecConf.Type,
StartTime: start,
StreamPath: sub.StreamPath,
FilePath: filePath,
Type: recordJob.RecConf.Type,
StorageLevel: 1, // defaults to primary storage
}
if sub.Publisher.HasAudioTrack() {

View File

@@ -261,6 +261,200 @@ class PacketReplayer:
return max(0, wait_time)
def replay_all_connections(self, src_ip=None, protocol=None, delay=0):
"""重放所有连接,检测到新连接时自动重连"""
self.log_with_timestamp("开始加载所有连接的数据包...")
try:
reader = PcapReader(self.pcap_file)
# Group all packets by source port
connections = defaultdict(list)
for packet in reader:
if IP not in packet or TCP not in packet:
continue
# Only collect packets sent to the target port
if packet[TCP].dport == self.target_port and Raw in packet:
src_port = packet[TCP].sport
packet_info = {
'timestamp': float(packet.time),
'payload': packet[Raw].load,
'seq': packet[TCP].seq,
'src_port': src_port
}
connections[src_port].append(packet_info)
reader.close()
if not connections:
self.log_with_timestamp("没有找到任何连接")
return
self.log_with_timestamp(f"发现 {len(connections)} 个连接")
for port, packets in sorted(connections.items()):
total_size = sum(len(p['payload']) for p in packets)
self.log_with_timestamp(f" 端口 {port}: {len(packets)} 个包, {total_size / (1024*1024):.2f} MB")
# Process all connections in chronological order
self.log_with_timestamp("\n开始按时间顺序重放所有连接...")
# Merge packets from all connections and sort them by time
all_packets = []
for src_port, packets in connections.items():
for pkt in packets:
all_packets.append(pkt)
all_packets.sort(key=lambda x: x['timestamp'])
self.log_with_timestamp(f"总共 {len(all_packets)} 个数据包")
# Replay grouped by connection
current_connection = None
connection_packets = []
for pkt in all_packets:
src_port = pkt['src_port']
# A new connection was detected
if current_connection != src_port:
# First flush the previous connection's data
if connection_packets:
self.log_with_timestamp(f"\n[连接 {current_connection}] 发送 {len(connection_packets)} 个包")
self._send_connection_packets(connection_packets)
# Start a new connection
current_connection = src_port
connection_packets = []
self.log_with_timestamp(f"\n[新连接] 端口 {src_port}")
connection_packets.append(pkt)
# Send the last connection's data
if connection_packets:
self.log_with_timestamp(f"\n[连接 {current_connection}] 发送 {len(connection_packets)} 个包")
self._send_connection_packets(connection_packets)
self.log_with_timestamp("\n所有连接重放完成")
except Exception as e:
self.log_with_timestamp(f"重放所有连接时出错: {e}")
import traceback
traceback.print_exc()
def _send_connection_packets(self, packets):
"""发送单个连接的所有数据包"""
# Reassemble the TCP stream by sequence number
packets.sort(key=lambda x: x['seq'])
tcp_segments = []
chunk_sizes = []
chunk_timestamps = []
expected_seq = packets[0]['seq']
for pkt in packets:
seq = pkt['seq']
payload = pkt['payload']
timestamp = pkt['timestamp']
if seq == expected_seq:
tcp_segments.append(payload)
chunk_sizes.append(len(payload))
chunk_timestamps.append(timestamp)
expected_seq += len(payload)
elif seq < expected_seq:
overlap = expected_seq - seq
if len(payload) > overlap:
tcp_segments.append(payload[overlap:])
chunk_sizes.append(len(payload[overlap:]))
chunk_timestamps.append(timestamp)
expected_seq = seq + len(payload)
else:
tcp_segments.append(payload)
chunk_sizes.append(len(payload))
chunk_timestamps.append(timestamp)
expected_seq = seq + len(payload)
tcp_stream = b''.join(tcp_segments)
if not tcp_stream:
self.log_with_timestamp(" 警告:没有数据可发送")
return
self.log_with_timestamp(f" 重组完成: {len(tcp_stream) / (1024*1024):.2f} MB, {len(chunk_sizes)} 个包")
# Establish a new connection
if not self.establish_tcp_connection(None):
self.log_with_timestamp(" 无法建立连接")
return
# Start the response-reader thread
self.stop_reading.clear()
reader_thread = threading.Thread(target=self.response_reader)
reader_thread.daemon = True
reader_thread.start()
# Send the data
try:
sent_bytes = 0
chunk_index = 0
replay_start_time = time.time()
first_packet_time = chunk_timestamps[0] if chunk_timestamps else 0
while sent_bytes < len(tcp_stream) and chunk_index < len(chunk_sizes):
# Compute the wait time
if chunk_index < len(chunk_timestamps):
target_time = chunk_timestamps[chunk_index] - first_packet_time
elapsed_time = time.time() - replay_start_time
wait_time = target_time - elapsed_time
if wait_time > 0:
if wait_time > 5.0:
wait_time = 5.0
time.sleep(wait_time)
current_chunk_size = chunk_sizes[chunk_index]
remaining = len(tcp_stream) - sent_bytes
current_chunk_size = min(current_chunk_size, remaining)
chunk = tcp_stream[sent_bytes:sent_bytes + current_chunk_size]
chunk_sent = 0
while chunk_sent < len(chunk):
try:
self.socket.settimeout(10)
bytes_sent = self.socket.send(chunk[chunk_sent:])
if bytes_sent == 0:
break
chunk_sent += bytes_sent
sent_bytes += bytes_sent
self.total_bytes_sent += bytes_sent
self.last_activity_time = time.time()
except socket.timeout:
continue
except socket.error:
break
if chunk_sent == len(chunk):
chunk_index += 1
else:
break
self.log_with_timestamp(f" 发送完成: {sent_bytes / (1024*1024):.2f} MB")
time.sleep(1) # brief pause
finally:
self.stop_reading.set()
if self.socket:
try:
self.socket.shutdown(socket.SHUT_RDWR)
except:
pass
self.socket.close()
self.socket = None
def replay_packets(self, src_ip=None, src_port=None, protocol=None, delay=0):
"""重放数据包采用正确的TCP流重组方式"""
# First load all packets
@@ -271,15 +465,17 @@ class PacketReplayer:
self.log_with_timestamp("没有找到可发送的数据包")
return
# Proper TCP stream reassembly - reassemble by sequence number!
self.log_with_timestamp("重组TCP流...")
# Step 1: sort by sequence number
self.data_packets.sort(key=lambda x: x['seq'])
self.log_with_timestamp(f"按序列号排序完成,共 {len(self.data_packets)} 个数据包")
# Step 2: correctly reassemble the TCP stream
# Step 2: correctly reassemble the TCP stream while preserving each packet's size and timestamp
tcp_segments = []
chunk_sizes = [] # size of each original packet
chunk_timestamps = [] # timestamp of each original packet
expected_seq = self.data_packets[0]['seq']
self.log_with_timestamp(f"开始TCP重组初始序列号: {expected_seq}")
@@ -289,10 +485,13 @@ class PacketReplayer:
for packet_info in self.data_packets:
seq = packet_info['seq']
payload = packet_info['payload']
timestamp = packet_info['timestamp']
if seq == expected_seq:
# Sequence number matches: append to the stream
tcp_segments.append(payload)
chunk_sizes.append(len(payload)) # record the original packet size
chunk_timestamps.append(timestamp) # record the timestamp
expected_seq += len(payload)
processed_packets += 1
elif seq < expected_seq:
@@ -301,6 +500,8 @@ class PacketReplayer:
if len(payload) > overlap:
                    # Strip the overlapping prefix and append the remainder
                    tcp_segments.append(payload[overlap:])
                    chunk_sizes.append(len(payload[overlap:]))  # record the size actually added
                    chunk_timestamps.append(timestamp)  # record the timestamp
expected_seq = seq + len(payload)
processed_packets += 1
else:
@@ -312,6 +513,8 @@ class PacketReplayer:
self.log_with_timestamp(f"检测到数据包间隙: {gap} 字节,从序列号 {expected_seq}{seq}")
# 跳过间隙,继续处理
tcp_segments.append(payload)
chunk_sizes.append(len(payload)) # 记录原始包大小
chunk_timestamps.append(timestamp) # 记录时间戳
expected_seq = seq + len(payload)
processed_packets += 1
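                    # Appending across a gap trades byte-stream fidelity for
                    # continuity: a hole usually means the capture dropped the
                    # packets, not the wire, but the server may still reject it.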
@@ -343,63 +546,97 @@ class PacketReplayer:
reader_thread.daemon = True
reader_thread.start()
self.log_with_timestamp(f"开始发送TCP流数据")
self.log_with_timestamp(f"开始发送TCP流数据使用动态chunk大小和原始时间间隔")
self.log_with_timestamp(f"原始数据包数量: {len(chunk_sizes)}")
if chunk_sizes:
avg_size = sum(chunk_sizes) / len(chunk_sizes)
min_size = min(chunk_sizes)
max_size = max(chunk_sizes)
self.log_with_timestamp(f"包大小统计 - 平均: {avg_size:.0f}, 最小: {min_size}, 最大: {max_size}")
if chunk_timestamps:
duration = chunk_timestamps[-1] - chunk_timestamps[0]
self.log_with_timestamp(f"抓包时长: {duration:.2f}")
try:
            # Use a smaller chunk size to keep the RTMP protocol intact
            chunk_size = 1024  # reduced to 1 KB, safer
sent_bytes = 0
chunk_count = 0
last_progress_time = time.time()
self.start_time = time.time()
            replay_start_time = time.time()  # replay start time
            first_packet_time = chunk_timestamps[0] if chunk_timestamps else 0  # timestamp of the first packet
chunk_index = 0
while sent_bytes < len(tcp_stream):
while sent_bytes < len(tcp_stream) and chunk_index < len(chunk_sizes):
current_time = time.time()
                # Report progress every 5 seconds
if current_time - last_progress_time >= 5.0:
progress = (sent_bytes / len(tcp_stream)) * 100
self.log_with_timestamp(f"发送进度: {progress:.1f}% ({sent_bytes / (1024*1024):.2f}/{len(tcp_stream) / (1024*1024):.2f} MB)")
elapsed = current_time - replay_start_time
self.log_with_timestamp(f"发送进度: {progress:.1f}% ({sent_bytes / (1024*1024):.2f}/{len(tcp_stream) / (1024*1024):.2f} MB) 已耗时: {elapsed:.1f}")
last_progress_time = current_time
                # Compute the current chunk size
                # Compute how long to wait (matching the original inter-packet intervals)
if chunk_index < len(chunk_timestamps):
target_time = chunk_timestamps[chunk_index] - first_packet_time
elapsed_time = time.time() - replay_start_time
wait_time = target_time - elapsed_time
if wait_time > 0:
                        # Cap the wait time to guard against timestamp anomalies
                        if wait_time > 5.0:
                            self.log_with_timestamp(f"Warning: wait time too long ({wait_time:.3f} s), capping at 5 s")
                            wait_time = 5.0
                        time.sleep(wait_time)
                # Use the original packet's size as the chunk size
current_chunk_size = chunk_sizes[chunk_index]
remaining = len(tcp_stream) - sent_bytes
current_chunk_size = min(chunk_size, remaining)
current_chunk_size = min(current_chunk_size, remaining)
                # Send the chunk
                # Send the chunk - make sure it is sent in full
chunk = tcp_stream[sent_bytes:sent_bytes + current_chunk_size]
chunk_sent = 0
try:
                    self.socket.settimeout(10)  # 10-second timeout
                    bytes_sent = self.socket.send(chunk)
                # Loop until the current chunk is fully sent
                while chunk_sent < len(chunk):
                    try:
                        self.socket.settimeout(10)  # 10-second timeout
                        bytes_sent = self.socket.send(chunk[chunk_sent:])
                        if bytes_sent == 0:
                            self.log_with_timestamp("Connection closed")
                    if bytes_sent == 0:
                        self.log_with_timestamp("Connection closed")
                            break
                        chunk_sent += bytes_sent
                        sent_bytes += bytes_sent
                        self.total_bytes_sent += bytes_sent
                        self.last_activity_time = time.time()
                        # Log partial sends
                        if chunk_sent < len(chunk):
                            self.log_with_timestamp(f"Partial send: {chunk_sent}/{len(chunk)} bytes, sending the rest...")
                    except socket.timeout:
                        self.log_with_timestamp(f"Chunk send timed out, retrying...")
                        continue
                    except socket.error as e:
                        self.log_with_timestamp(f"Chunk send failed: {e}")
break
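                # socket.send() may write fewer bytes than offered (a short
                # write); the new inner loop retries on chunk[chunk_sent:]
                # instead of trusting a single send() as the old code did.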
sent_bytes += bytes_sent
                # Check whether the chunk was fully sent
                if chunk_sent == len(chunk):
                    chunk_count += 1
                self.total_bytes_sent += bytes_sent
                self.last_activity_time = time.time()
                # If the chunk was not fully sent, keep sending the remainder
                if bytes_sent < len(chunk):
                    self.log_with_timestamp(f"Partial send: {bytes_sent}/{len(chunk)} bytes")
                    continue
                except socket.timeout:
                    self.log_with_timestamp(f"Chunk send timed out, continuing...")
                    continue
                except socket.error as e:
                    self.log_with_timestamp(f"Chunk send failed: {e}")
                    chunk_index += 1
                else:
                    # Not fully sent - stop
                    self.log_with_timestamp(f"Could not fully send the chunk, sent {chunk_sent}/{len(chunk)} bytes")
break
            # Gentler flow control
            if chunk_count % 50 == 0:  # sleep every 50 chunks
                time.sleep(0.005)  # 5 ms sleep
            # Removed the fixed flow-control delay; let TCP regulate itself
            self.log_with_timestamp(f"Data send complete, waiting for the server to process...")
            time.sleep(3)  # wait for the server to finish processing
            time.sleep(10)  # increased the wait to 10 seconds
            self.log_with_timestamp(f"\n=== Send complete ===")
            self.log_with_timestamp(f"Successfully sent {chunk_count} chunks")
@@ -506,6 +743,7 @@ def main():
    parser.add_argument('--protocol', choices=['tcp', 'udp'], help='Filter by protocol type')
    parser.add_argument('--auto-select', action='store_true', help='Automatically select the connection with the most data')
    parser.add_argument('--list-connections', action='store_true', help='List all connections and exit')
    parser.add_argument('--all', action='store_true', help='Push data from all connections, regardless of source port')
args = parser.parse_args()
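    # Hypothetical invocations (script and pcap names are placeholders, not from this diff):
    #   python packet_replayer.py capture.pcap --list-connections
    #   python packet_replayer.py capture.pcap --auto-select --protocol tcp
    #   python packet_replayer.py capture.pcap --all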
@@ -516,8 +754,12 @@ def main():
replayer.list_all_connections()
return
    # If --all was given, push every connection (auto-reconnect)
    if args.all:
        print("Pushing all connections from the pcap file (auto-reconnect when a new connection is detected)")
replayer.replay_all_connections(args.src_ip, args.protocol, args.delay)
    # If auto-select is enabled or no source port was given, pick the largest connection
if args.auto_select or args.src_port is None:
elif args.auto_select or args.src_port is None:
best_port = replayer.find_largest_connection(args.src_ip)
if best_port:
print(f"自动选择数据量最大的连接: 端口 {best_port}")