refactor: frame converter and mp4 track improvements

- Refactor frame converter implementation
- Update mp4 track to use ICodex
- General refactoring and code improvements

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
langhuihui
2025-08-04 09:17:12 +08:00
parent b6ee2843b0
commit 8a9fffb987
262 changed files with 20831 additions and 12141 deletions


@@ -0,0 +1,5 @@
---
description: build pb
alwaysApply: false
---
If proto files are modified and need to be compiled, use the scripts in the scripts directory to compile them.


@@ -24,7 +24,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.23.4
go-version: 1.25.0
- name: Cache Go modules
uses: actions/cache@v4

.gitignore (vendored, 3 changes)

@@ -20,3 +20,6 @@ example/default/*
!example/default/main.go
!example/default/config.yaml
shutdown.sh
!example/test/test.db
*.mp4
shutdown.bat

CLAUDE.md (new file, 199 lines)

@@ -0,0 +1,199 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Monibuca is a high-performance streaming server framework written in Go. It's designed to be a modular, scalable platform for real-time audio/video streaming with support for multiple protocols including RTMP, RTSP, HLS, WebRTC, GB28181, and more.
## Development Commands
### Building and Running
**Basic Run (with SQLite):**
```bash
cd example/default
go run -tags sqlite main.go
```
**Build Tags:**
- `sqlite` - Enable SQLite database support
- `sqliteCGO` - Enable SQLite with CGO
- `mysql` - Enable MySQL database support
- `postgres` - Enable PostgreSQL database support
- `duckdb` - Enable DuckDB database support
- `disable_rm` - Disable memory pool
- `fasthttp` - Use fasthttp instead of net/http
- `taskpanic` - Enable panics for testing
**Protocol Buffer Generation:**
```bash
# Generate all proto files
sh scripts/protoc.sh
# Generate specific plugin proto
sh scripts/protoc.sh plugin_name
```
**Release Building:**
```bash
# Uses goreleaser configuration
goreleaser build
```
**Testing:**
```bash
go test ./...
```
## Architecture Overview
### Core Components
**Server (`server.go`):** Main server instance that manages plugins, streams, and configurations. Implements the central event loop and lifecycle management.
**Plugin System (`plugin.go`):** Modular architecture where functionality is provided through plugins. Each plugin implements the `IPlugin` interface and can provide:
- Protocol handlers (RTMP, RTSP, etc.)
- Media transformers
- Pull/Push proxies
- Recording capabilities
- Custom HTTP endpoints
**Configuration System (`pkg/config/`):** Hierarchical configuration system with priority order: dynamic modifications > environment variables > config files > default YAML > global config > defaults.
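The priority chain above can be modeled as a first-match lookup over ordered sources. The sketch below is illustrative only (a simplified model of the documented precedence, not Monibuca's actual `pkg/config` API):

```go
package main

import "fmt"

// resolve returns the first value found for key, checking sources in
// priority order: dynamic modifications > environment variables >
// config files > default YAML > global config > defaults.
func resolve(key string, sources []map[string]string) (string, bool) {
	for _, src := range sources {
		if v, ok := src[key]; ok {
			return v, true
		}
	}
	return "", false
}

func main() {
	sources := []map[string]string{
		{},                    // dynamic modifications (highest priority)
		{"loglevel": "debug"}, // environment variables
		{"loglevel": "info"},  // plugin config file
		{"loglevel": "warn"},  // defaults (lowest priority)
	}
	v, _ := resolve("loglevel", sources)
	fmt.Println(v) // env value wins because no dynamic override exists
}
```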
**Task System (`pkg/task/`):** Asynchronous task management with dependency handling, lifecycle management, and graceful shutdown capabilities.
### Key Interfaces
**Publisher:** Handles incoming media streams and manages track information
**Subscriber:** Handles outgoing media streams to clients
**Puller:** Pulls streams from external sources
**Pusher:** Pushes streams to external destinations
**Transformer:** Processes/transcodes media streams
**Recorder:** Records streams to storage
### Stream Processing Flow
1. **Publisher** receives media data and creates tracks
2. **Tracks** handle audio/video data with specific codecs
3. **Subscribers** attach to publishers to receive media
4. **Transformers** can process streams between publishers and subscribers
5. **Plugins** provide protocol-specific implementations
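The flow above can be sketched as a publisher fanning frames out to attached subscribers. This is a deliberately simplified, channel-based illustration; `Frame`, `Publisher`, and `Subscriber` here are stand-ins, not Monibuca's types (which use ring buffers, see Performance Considerations):

```go
package main

import "fmt"

// Frame stands in for a media frame (cf. pkg.AVFrame).
type Frame struct {
	Sequence uint32
	Data     []byte
}

// Subscriber receives frames from the publisher it is attached to.
type Subscriber chan Frame

// Publisher holds the attached subscribers and fans frames out to them.
type Publisher struct {
	subs []Subscriber
}

func (p *Publisher) AddSubscriber(s Subscriber) { p.subs = append(p.subs, s) }

// WriteFrame delivers one frame to every attached subscriber.
func (p *Publisher) WriteFrame(f Frame) {
	for _, s := range p.subs {
		s <- f
	}
}

func main() {
	pub := &Publisher{}
	sub := make(Subscriber, 1)
	pub.AddSubscriber(sub)
	pub.WriteFrame(Frame{Sequence: 1, Data: []byte("keyframe")})
	fmt.Println((<-sub).Sequence)
}
```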
## Plugin Development
### Creating a Plugin
1. Implement the `IPlugin` interface
2. Define plugin metadata using `PluginMeta`
3. Register with `InstallPlugin[YourPluginType](meta)`
4. Optionally implement protocol-specific interfaces:
- `ITCPPlugin` for TCP servers
- `IUDPPlugin` for UDP servers
- `IQUICPlugin` for QUIC servers
- `IRegisterHandler` for HTTP endpoints
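The registration pattern in steps 1-3 can be sketched as below. `IPlugin`, `PluginMeta`, and `InstallPlugin` are modeled loosely here (the real helper takes a single type parameter; this two-parameter variant is a self-contained approximation) — consult `plugin.go` for the actual signatures:

```go
package main

import "fmt"

// IPlugin is a minimal stand-in for the real plugin interface.
type IPlugin interface {
	Start() error
}

// PluginMeta carries plugin metadata.
type PluginMeta struct {
	Name    string
	Version string
}

var registry = map[string]func() IPlugin{}

// InstallPlugin records a factory for the plugin type under its name,
// mimicking the generic registration helper.
func InstallPlugin[T any, PT interface {
	*T
	IPlugin
}](meta PluginMeta) {
	registry[meta.Name] = func() IPlugin { return PT(new(T)) }
}

// MyPlugin is a hypothetical plugin implementing the interface.
type MyPlugin struct{}

func (p *MyPlugin) Start() error { return nil }

func main() {
	InstallPlugin[MyPlugin, *MyPlugin](PluginMeta{Name: "my", Version: "0.1"})
	p := registry["my"]()
	fmt.Println(p.Start() == nil)
}
```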
### Plugin Lifecycle
1. **Init:** Configuration parsing and initialization
2. **Start:** Network listeners and task registration
3. **Run:** Active operation
4. **Dispose:** Cleanup and shutdown
## Configuration Structure
### Global Configuration
- HTTP/TCP/UDP/QUIC listeners
- Database connections (SQLite, MySQL, PostgreSQL, DuckDB)
- Authentication settings
- Admin interface settings
- Global stream alias mappings
### Plugin Configuration
Each plugin can define its own configuration structure that gets merged with global settings.
## Database Integration
Supports multiple database backends:
- **SQLite:** Default lightweight option
- **MySQL:** Production deployments
- **PostgreSQL:** Production deployments
- **DuckDB:** Analytics use cases
Automatic migration is handled for core models including users, proxies, and stream aliases.
## Protocol Support
### Built-in Plugins
- **RTMP:** Real-time messaging protocol
- **RTSP:** Real-time streaming protocol
- **HLS:** HTTP live streaming
- **WebRTC:** Web real-time communication
- **GB28181:** Chinese surveillance standard
- **FLV:** Flash video format
- **MP4:** MPEG-4 format
- **SRT:** Secure reliable transport
## Authentication & Security
- JWT-based authentication for admin interface
- Stream-level authentication with URL signing
- Role-based access control (admin/user)
- Webhook support for external auth integration
## Development Guidelines
### Code Style
- Follow existing patterns and naming conventions
- Use the task system for async operations
- Implement proper error handling and logging
- Use the configuration system for all settings
### Testing
- Unit tests should be placed alongside source files
- Integration tests can use the example configurations
- Use the mock.py script for protocol testing
### Performance Considerations
- Memory pool is enabled by default (disable with `disable_rm`)
- Zero-copy design for media data where possible
- Lock-free data structures for high concurrency
- Efficient buffer management with ring buffers
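A minimal fixed-capacity ring buffer illustrates the idea behind the media ring mentioned above; Monibuca's actual ring (the `AVFrame` ring with reader cursors and locking) is considerably more elaborate:

```go
package main

import "fmt"

// Ring is a minimal fixed-capacity ring buffer: once full, new writes
// overwrite the oldest slot, so memory use stays bounded.
type Ring[T any] struct {
	buf  []T
	head int // index of the next write
	full bool
}

func NewRing[T any](n int) *Ring[T] { return &Ring[T]{buf: make([]T, n)} }

// Write stores v, overwriting the oldest slot once the ring is full.
func (r *Ring[T]) Write(v T) {
	r.buf[r.head] = v
	r.head = (r.head + 1) % len(r.buf)
	if r.head == 0 {
		r.full = true
	}
}

// Do visits the buffered values, oldest first.
func (r *Ring[T]) Do(f func(T)) {
	start, n := 0, r.head
	if r.full {
		start, n = r.head, len(r.buf)
	}
	for i := 0; i < n; i++ {
		f(r.buf[(start+i)%len(r.buf)])
	}
}

func main() {
	r := NewRing[int](3)
	for i := 1; i <= 5; i++ {
		r.Write(i)
	}
	r.Do(func(v int) { fmt.Print(v, " ") }) // oldest first: 3 4 5
	fmt.Println()
}
```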
## Debugging
### Built-in Debug Plugin
- Performance monitoring and profiling
- Real-time metrics via Prometheus endpoint (`/api/metrics`)
- pprof integration for memory/CPU profiling
### Logging
- Structured logging with zerolog
- Configurable log levels
- Log rotation support
- Fatal crash logging
## Web Admin Interface
- Web-based admin UI served from `admin.zip`
- RESTful API for all operations
- Real-time stream monitoring
- Configuration management
- User management (when auth enabled)
## Common Issues
### Port Conflicts
- Default HTTP port: 8080
- Default gRPC port: 50051
- Check plugin-specific port configurations
### Database Connection
- Ensure proper build tags for database support
- Check DSN configuration strings
- Verify database file permissions
### Plugin Loading
- Plugins are auto-discovered from imports
- Check plugin enable/disable status
- Verify configuration merging
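Auto-discovery from imports relies on Go's blank-import plus `init()` registration pattern: importing a plugin package for its side effects runs that package's `init`, which registers the plugin. A self-contained sketch (in a real build the `init` functions live in separate plugin packages, pulled in via blank imports like `_ "m7s.live/v5/plugin/rtmp"`):

```go
package main

import "fmt"

// plugins is the registry; in Monibuca it is populated by each plugin
// package's InstallPlugin call at import time.
var plugins []string

func register(name string) { plugins = append(plugins, name) }

// These init functions stand in for per-package plugin registration;
// Go runs them in declaration order before main.
func init() { register("rtmp") }
func init() { register("flv") }

func main() {
	fmt.Println(plugins) // plugins registered at init time
}
```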


@@ -48,7 +48,7 @@ func (s *Server) initStreamAlias() {
func (s *Server) GetStreamAlias(ctx context.Context, req *emptypb.Empty) (res *pb.StreamAliasListResponse, err error) {
res = &pb.StreamAliasListResponse{}
s.Streams.Call(func() error {
s.CallOnStreamTask(func() {
for alias := range s.AliasStreams.Range {
info := &pb.StreamAlias{
StreamPath: alias.StreamPath,
@@ -62,18 +62,17 @@ func (s *Server) GetStreamAlias(ctx context.Context, req *emptypb.Empty) (res *p
}
res.Data = append(res.Data, info)
}
return nil
})
return
}
func (s *Server) SetStreamAlias(ctx context.Context, req *pb.SetStreamAliasRequest) (res *pb.SuccessResponse, err error) {
res = &pb.SuccessResponse{}
s.Streams.Call(func() error {
s.CallOnStreamTask(func() {
if req.StreamPath != "" {
u, err := url.Parse(req.StreamPath)
if err != nil {
return err
return
}
req.StreamPath = strings.TrimPrefix(u.Path, "/")
publisher, canReplace := s.Streams.Get(req.StreamPath)
@@ -159,7 +158,6 @@ func (s *Server) SetStreamAlias(ctx context.Context, req *pb.SetStreamAliasReque
}
}
}
return nil
})
return
}

api.go (94 changes)

@@ -12,6 +12,7 @@ import (
"strings"
"time"
"m7s.live/v5/pkg/config"
"m7s.live/v5/pkg/task"
myip "github.com/husanpao/ip"
@@ -25,7 +26,7 @@ import (
"gopkg.in/yaml.v3"
"m7s.live/v5/pb"
"m7s.live/v5/pkg"
"m7s.live/v5/pkg/config"
"m7s.live/v5/pkg/format"
"m7s.live/v5/pkg/util"
)
@@ -96,9 +97,8 @@ func (s *Server) api_Stream_AnnexB_(rw http.ResponseWriter, r *http.Request) {
return
}
defer reader.StopRead()
var annexb *pkg.AnnexB
var converter = pkg.NewAVFrameConvert[*pkg.AnnexB](publisher.VideoTrack.AVTrack, nil)
annexb, err = converter.ConvertFromAVFrame(&reader.Value)
var annexb format.AnnexB
err = pkg.ConvertFrameType(reader.Value.Wraps[0], &annexb)
if err != nil {
http.Error(rw, err.Error(), http.StatusInternalServerError)
return
@@ -150,6 +150,9 @@ func (s *Server) getStreamInfo(pub *Publisher) (res *pb.StreamInfoResponse, err
}
res.Data.AudioTrack.SampleRate = uint32(t.ICodecCtx.(pkg.IAudioCodecCtx).GetSampleRate())
res.Data.AudioTrack.Channels = uint32(t.ICodecCtx.(pkg.IAudioCodecCtx).GetChannels())
if pub.State == PublisherStateInit {
res.Data.State = int32(PublisherStateTrackAdded)
}
}
}
if t := pub.VideoTrack.AVTrack; t != nil {
@@ -165,6 +168,9 @@ func (s *Server) getStreamInfo(pub *Publisher) (res *pb.StreamInfoResponse, err
}
res.Data.VideoTrack.Width = uint32(t.ICodecCtx.(pkg.IVideoCodecCtx).Width())
res.Data.VideoTrack.Height = uint32(t.ICodecCtx.(pkg.IVideoCodecCtx).Height())
if pub.State == PublisherStateInit {
res.Data.State = int32(PublisherStateTrackAdded)
}
}
}
return
@@ -172,7 +178,7 @@ func (s *Server) getStreamInfo(pub *Publisher) (res *pb.StreamInfoResponse, err
func (s *Server) StreamInfo(ctx context.Context, req *pb.StreamSnapRequest) (res *pb.StreamInfoResponse, err error) {
var recordings []*pb.RecordingDetail
s.Records.SafeRange(func(record *RecordJob) bool {
s.Records.Range(func(record *RecordJob) bool {
if record.StreamPath == req.StreamPath {
recordings = append(recordings, &pb.RecordingDetail{
FilePath: record.RecConf.FilePath,
@@ -212,11 +218,13 @@ func (s *Server) TaskTree(context.Context, *emptypb.Empty) (res *pb.TaskTreeResp
StartTime: timestamppb.New(t.StartTime),
Description: m.GetDescriptions(),
StartReason: t.StartReason,
Level: uint32(t.GetLevel()),
}
if job, ok := m.(task.IJob); ok {
if blockedTask := job.Blocked(); blockedTask != nil {
res.Blocked = fillData(blockedTask)
}
res.EventLoopRunning = job.EventLoopRunning()
for t := range job.RangeSubTask {
child := fillData(t)
if child == nil {
@@ -251,7 +259,7 @@ func (s *Server) RestartTask(ctx context.Context, req *pb.RequestWithId64) (resp
func (s *Server) GetRecording(ctx context.Context, req *emptypb.Empty) (resp *pb.RecordingListResponse, err error) {
resp = &pb.RecordingListResponse{}
s.Records.SafeRange(func(record *RecordJob) bool {
s.Records.Range(func(record *RecordJob) bool {
resp.Data = append(resp.Data, &pb.Recording{
StreamPath: record.StreamPath,
StartTime: timestamppb.New(record.StartTime),
@@ -264,7 +272,7 @@ func (s *Server) GetRecording(ctx context.Context, req *emptypb.Empty) (resp *pb
}
func (s *Server) GetSubscribers(context.Context, *pb.SubscribersRequest) (res *pb.SubscribersResponse, err error) {
s.Streams.Call(func() error {
s.CallOnStreamTask(func() {
var subscribers []*pb.SubscriberSnapShot
for subscriber := range s.Subscribers.Range {
meta, _ := json.Marshal(subscriber.GetDescriptions())
@@ -303,7 +311,6 @@ func (s *Server) GetSubscribers(context.Context, *pb.SubscribersRequest) (res *p
Data: subscribers,
Total: int32(s.Subscribers.Length),
}
return nil
})
return
}
@@ -323,7 +330,8 @@ func (s *Server) AudioTrackSnap(_ context.Context, req *pb.StreamSnapRequest) (r
}
}
pub.AudioTrack.Ring.Do(func(v *pkg.AVFrame) {
if len(v.Wraps) > 0 {
if len(v.Wraps) > 0 && v.TryRLock() {
defer v.RUnlock()
var snap pb.TrackSnapShot
snap.Sequence = v.Sequence
snap.Timestamp = uint32(v.Timestamp / time.Millisecond)
@@ -333,7 +341,7 @@ func (s *Server) AudioTrackSnap(_ context.Context, req *pb.StreamSnapRequest) (r
data.RingDataSize += uint32(v.Wraps[0].GetSize())
for i, wrap := range v.Wraps {
snap.Wrap[i] = &pb.Wrap{
Timestamp: uint32(wrap.GetTimestamp() / time.Millisecond),
Timestamp: uint32(wrap.GetSample().Timestamp / time.Millisecond),
Size: uint32(wrap.GetSize()),
Data: wrap.String(),
}
@@ -374,7 +382,7 @@ func (s *Server) api_VideoTrack_SSE(rw http.ResponseWriter, r *http.Request) {
snap.KeyFrame = frame.IDR
for i, wrap := range frame.Wraps {
snap.Wrap[i] = &pb.Wrap{
Timestamp: uint32(wrap.GetTimestamp() / time.Millisecond),
Timestamp: uint32(wrap.GetSample().Timestamp / time.Millisecond),
Size: uint32(wrap.GetSize()),
Data: wrap.String(),
}
@@ -407,7 +415,7 @@ func (s *Server) api_AudioTrack_SSE(rw http.ResponseWriter, r *http.Request) {
snap.KeyFrame = frame.IDR
for i, wrap := range frame.Wraps {
snap.Wrap[i] = &pb.Wrap{
Timestamp: uint32(wrap.GetTimestamp() / time.Millisecond),
Timestamp: uint32(wrap.GetSample().Timestamp / time.Millisecond),
Size: uint32(wrap.GetSize()),
Data: wrap.String(),
}
@@ -433,7 +441,8 @@ func (s *Server) VideoTrackSnap(ctx context.Context, req *pb.StreamSnapRequest)
}
}
pub.VideoTrack.Ring.Do(func(v *pkg.AVFrame) {
if len(v.Wraps) > 0 {
if len(v.Wraps) > 0 && v.TryRLock() {
defer v.RUnlock()
var snap pb.TrackSnapShot
snap.Sequence = v.Sequence
snap.Timestamp = uint32(v.Timestamp / time.Millisecond)
@@ -443,7 +452,7 @@ func (s *Server) VideoTrackSnap(ctx context.Context, req *pb.StreamSnapRequest)
data.RingDataSize += uint32(v.Wraps[0].GetSize())
for i, wrap := range v.Wraps {
snap.Wrap[i] = &pb.Wrap{
Timestamp: uint32(wrap.GetTimestamp() / time.Millisecond),
Timestamp: uint32(wrap.GetSample().Timestamp / time.Millisecond),
Size: uint32(wrap.GetSize()),
Data: wrap.String(),
}
@@ -476,29 +485,27 @@ func (s *Server) Shutdown(ctx context.Context, req *pb.RequestWithId) (res *pb.S
}
func (s *Server) ChangeSubscribe(ctx context.Context, req *pb.ChangeSubscribeRequest) (res *pb.SuccessResponse, err error) {
s.Streams.Call(func() error {
s.CallOnStreamTask(func() {
if subscriber, ok := s.Subscribers.Get(req.Id); ok {
if pub, ok := s.Streams.Get(req.StreamPath); ok {
subscriber.Publisher.RemoveSubscriber(subscriber)
subscriber.StreamPath = req.StreamPath
pub.AddSubscriber(subscriber)
return nil
return
}
}
err = pkg.ErrNotFound
return nil
})
return &pb.SuccessResponse{}, err
}
func (s *Server) StopSubscribe(ctx context.Context, req *pb.RequestWithId) (res *pb.SuccessResponse, err error) {
s.Streams.Call(func() error {
s.CallOnStreamTask(func() {
if subscriber, ok := s.Subscribers.Get(req.Id); ok {
subscriber.Stop(errors.New("stop by api"))
} else {
err = pkg.ErrNotFound
}
return nil
})
return &pb.SuccessResponse{}, err
}
@@ -543,7 +550,7 @@ func (s *Server) StopPublish(ctx context.Context, req *pb.StreamSnapRequest) (re
// /api/stream/list
func (s *Server) StreamList(_ context.Context, req *pb.StreamListRequest) (res *pb.StreamListResponse, err error) {
recordingMap := make(map[string][]*pb.RecordingDetail)
for record := range s.Records.SafeRange {
for record := range s.Records.Range {
recordingMap[record.StreamPath] = append(recordingMap[record.StreamPath], &pb.RecordingDetail{
FilePath: record.RecConf.FilePath,
Mode: record.RecConf.Mode,
@@ -567,14 +574,46 @@ func (s *Server) StreamList(_ context.Context, req *pb.StreamListRequest) (res *
}
func (s *Server) WaitList(context.Context, *emptypb.Empty) (res *pb.StreamWaitListResponse, err error) {
s.Streams.Call(func() error {
s.CallOnStreamTask(func() {
res = &pb.StreamWaitListResponse{
List: make(map[string]int32),
}
for subs := range s.Waiting.Range {
res.List[subs.StreamPath] = int32(subs.Length)
}
return nil
})
return
}
func (s *Server) GetSubscriptionProgress(ctx context.Context, req *pb.StreamSnapRequest) (res *pb.SubscriptionProgressResponse, err error) {
s.CallOnStreamTask(func() {
if waitStream, ok := s.Waiting.Get(req.StreamPath); ok {
progress := waitStream.Progress
res = &pb.SubscriptionProgressResponse{
Code: 0,
Message: "success",
Data: &pb.SubscriptionProgressData{
CurrentStep: int32(progress.CurrentStep),
},
}
// Convert steps
for _, step := range progress.Steps {
pbStep := &pb.Step{
Name: step.Name,
Description: step.Description,
Error: step.Error,
}
if !step.StartedAt.IsZero() {
pbStep.StartedAt = timestamppb.New(step.StartedAt)
}
if !step.CompletedAt.IsZero() {
pbStep.CompletedAt = timestamppb.New(step.CompletedAt)
}
res.Data.Steps = append(res.Data.Steps, pbStep)
}
} else {
err = pkg.ErrNotFound
}
})
return
}
@@ -643,10 +682,10 @@ func (s *Server) Summary(context.Context, *emptypb.Empty) (res *pb.SummaryRespon
netWorks = append(netWorks, info)
}
res.StreamCount = int32(s.Streams.Length)
res.PullCount = int32(s.Pulls.Length)
res.PushCount = int32(s.Pushs.Length)
res.PullCount = int32(s.Pulls.Length())
res.PushCount = int32(s.Pushs.Length())
res.SubscribeCount = int32(s.Subscribers.Length)
res.RecordCount = int32(s.Records.Length)
res.RecordCount = int32(s.Records.Length())
res.TransformCount = int32(s.Transforms.Length)
res.NetWork = netWorks
s.lastSummary = res
@@ -920,7 +959,7 @@ func (s *Server) DeleteRecord(ctx context.Context, req *pb.ReqRecordDelete) (res
func (s *Server) GetTransformList(ctx context.Context, req *emptypb.Empty) (res *pb.TransformListResponse, err error) {
res = &pb.TransformListResponse{}
s.Transforms.Call(func() error {
s.Transforms.Call(func() {
for transform := range s.Transforms.Range {
info := &pb.Transform{
StreamPath: transform.StreamPath,
@@ -932,13 +971,12 @@ func (s *Server) GetTransformList(ctx context.Context, req *emptypb.Empty) (res
result, err = yaml.Marshal(transform.TransformJob.Config)
if err != nil {
s.Error("marshal transform config failed", "error", err)
return err
return
}
info.Config = string(result)
}
res.Data = append(res.Data, info)
}
return nil
})
return
}


@@ -93,7 +93,7 @@ Plugins can add global middleware using the `AddMiddleware` method to handle all
Example code:
```go
func (p *YourPlugin) OnInit() {
func (p *YourPlugin) Start() {
// Add authentication middleware
p.GetCommonConf().AddMiddleware(func(next http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {


@@ -116,7 +116,7 @@ type MyLogHandler struct {
}
// Add handler during plugin initialization
func (p *MyPlugin) OnInit() error {
func (p *MyPlugin) Start() error {
handler := &MyLogHandler{}
p.Server.LogHandler.Add(handler)
return nil


@@ -93,7 +93,7 @@ Plugins start through the `Plugin.Start` method, executing these operations in s
- Start QUIC services (if implementing IQUICPlugin interface)
4. Plugin Initialization Callback
- Call plugin's OnInit method
- Call plugin's Start method
- Handle initialization errors
5. Timer Task Setup
@@ -109,7 +109,7 @@ The startup phase is crucial for plugins to begin providing services, with all p
### 4. Stop Phase (Stop)
The plugin stop phase is implemented through the `Plugin.OnStop` method and related stop handling logic, including:
The plugin stop phase is implemented through the `Plugin.OnDispose` method and related stop handling logic, including:
1. Service Shutdown
- Stop all network services (HTTP/HTTPS/TCP/UDP/QUIC)
@@ -127,7 +127,7 @@ The plugin stop phase is implemented through the `Plugin.OnStop` method and rela
- Trigger stop event notifications
4. Callback Processing
- Call plugin's custom OnStop method
- Call plugin's custom OnDispose method
- Execute registered stop callback functions
- Handle errors during stop process
@@ -143,7 +143,7 @@ The stop phase aims to ensure plugins can safely and cleanly stop running withou
The plugin destroy phase is implemented through the `Plugin.Dispose` method, the final phase in a plugin's lifecycle, including:
1. Resource Release
- Call plugin's OnStop method for stop processing
- Call plugin's OnDispose method for stop processing
- Remove from server's plugin list
- Release all allocated system resources


@@ -93,7 +93,7 @@ func (p *YourPlugin) RegisterHandler() {
Example code:
```go
func (p *YourPlugin) OnInit() {
func (p *YourPlugin) Start() {
// Add authentication middleware
p.GetCommonConf().AddMiddleware(func(next http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {


@@ -116,7 +116,7 @@ type MyLogHandler struct {
}
// Add the handler during plugin initialization
func (p *MyPlugin) OnInit() error {
func (p *MyPlugin) Start() error {
handler := &MyLogHandler{}
p.Server.LogHandler.Add(handler)
return nil


@@ -109,7 +109,7 @@ Monibuca uses a plugin-based architecture, extending functionality through the plugin mechanism.
### 4. Stop Phase (Stop)
The plugin stop phase is implemented through the `Plugin.OnStop` method and related stop-handling logic, mainly including the following steps:
The plugin stop phase is implemented through the `Plugin.OnDispose` method and related stop-handling logic, mainly including the following steps:
1. Stop services
- Stop all network services (HTTP/HTTPS/TCP/UDP/QUIC)


@@ -10,3 +10,5 @@ cascadeclient:
onsub:
pull:
.*: m7s://$0
flv:
enable: true


@@ -9,7 +9,7 @@ transcode:
transform:
^live.+:
input:
mode: rtsp
mode: pipe
output:
- target: rtmp://localhost/trans/$0/small
conf: -loglevel debug -c:a aac -c:v h264 -vf scale=320:240


@@ -4,6 +4,8 @@ global:
loglevel: debug
admin:
enablelogin: false
debug:
enableTaskHistory: true # whether to enable task history recording
srt:
listenaddr: :6000
passphrase: foobarfoobar
@@ -51,7 +53,6 @@ mp4:
# ^live/.+:
# fragment: 10s
# filepath: record/$0
# type: fmp4
# pull:
# live/test: /Users/dexter/Movies/1744963190.mp4
onsub:
@@ -108,18 +109,6 @@ snap:
savepath: "snaps" # snapshot save path
iframeinterval: 3 # interval in frames between snapshots
querytimedelta: 3 # maximum allowed time difference when querying snapshots (seconds)
crypto:
enable: false
isstatic: false
algo: aes_ctr # encryption algorithm; supports aes_ctr and xor_c
encryptlen: 1024
secret:
key: your key
iv: your iv
onpub:
transform:
.* : $0
onvif:
enable: false
discoverinterval: 3 # device discovery interval in seconds (default 30); recommended to be larger than the rtsp plugin's reconnect interval


@@ -7,13 +7,11 @@ import (
"m7s.live/v5"
_ "m7s.live/v5/plugin/cascade"
_ "m7s.live/v5/plugin/crypto"
_ "m7s.live/v5/plugin/debug"
_ "m7s.live/v5/plugin/flv"
_ "m7s.live/v5/plugin/gb28181"
_ "m7s.live/v5/plugin/hls"
_ "m7s.live/v5/plugin/logrotate"
_ "m7s.live/v5/plugin/monitor"
_ "m7s.live/v5/plugin/mp4"
_ "m7s.live/v5/plugin/onvif"
_ "m7s.live/v5/plugin/preview"


@@ -16,7 +16,6 @@ import (
_ "m7s.live/v5/plugin/flv"
_ "m7s.live/v5/plugin/gb28181"
_ "m7s.live/v5/plugin/logrotate"
_ "m7s.live/v5/plugin/monitor"
_ "m7s.live/v5/plugin/mp4"
mp4 "m7s.live/v5/plugin/mp4/pkg"
_ "m7s.live/v5/plugin/preview"

example/test/config.yaml (new file, 2 lines)

@@ -0,0 +1,2 @@
global:
log_level: debug

example/test/main.go (new file, 38 lines)

@@ -0,0 +1,38 @@
package main
import (
"context"
"flag"
"fmt"
"m7s.live/v5"
_ "m7s.live/v5/plugin/cascade"
_ "m7s.live/v5/plugin/debug"
_ "m7s.live/v5/plugin/flv"
_ "m7s.live/v5/plugin/gb28181"
_ "m7s.live/v5/plugin/hls"
_ "m7s.live/v5/plugin/logrotate"
_ "m7s.live/v5/plugin/mp4"
_ "m7s.live/v5/plugin/onvif"
_ "m7s.live/v5/plugin/preview"
_ "m7s.live/v5/plugin/rtmp"
_ "m7s.live/v5/plugin/rtp"
_ "m7s.live/v5/plugin/rtsp"
_ "m7s.live/v5/plugin/sei"
_ "m7s.live/v5/plugin/snap"
_ "m7s.live/v5/plugin/srt"
_ "m7s.live/v5/plugin/stress"
_ "m7s.live/v5/plugin/test"
_ "m7s.live/v5/plugin/transcode"
_ "m7s.live/v5/plugin/webrtc"
_ "m7s.live/v5/plugin/webtransport"
)
func main() {
conf := flag.String("c", "config.yaml", "config file")
flag.Parse()
// ctx, _ := context.WithDeadline(context.Background(), time.Now().Add(time.Second*100))
err := m7s.Run(context.Background(), *conf)
fmt.Println(err)
}


@@ -1,126 +0,0 @@
// Copyright 2019 Asavie Technologies Ltd. All rights reserved.
//
// Use of this source code is governed by a BSD-style license
// that can be found in the LICENSE file in the root of the source
// tree.
/*
dumpframes demonstrates how to receive frames from a network link using the
github.com/asavie/xdp package. It sets up an XDP socket attached to a
particular network link and dumps all frames it receives to standard output.
*/
package main
import (
"encoding/hex"
"flag"
"fmt"
"log"
"net"
"github.com/asavie/xdp"
"github.com/asavie/xdp/examples/dumpframes/ebpf"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
)
func main() {
var linkName string
var queueID int
var protocol int64
log.SetFlags(log.Ldate | log.Ltime | log.Lmicroseconds)
flag.StringVar(&linkName, "linkname", "enp3s0", "The network link on which rebroadcast should run on.")
flag.IntVar(&queueID, "queueid", 0, "The ID of the Rx queue to which to attach to on the network link.")
flag.Int64Var(&protocol, "ip-proto", 0, "If greater than 0 and less than or equal to 255, limit xdp bpf_redirect_map to packets with the specified IP protocol number.")
flag.Parse()
interfaces, err := net.Interfaces()
if err != nil {
fmt.Printf("error: failed to fetch the list of network interfaces on the system: %v\n", err)
return
}
Ifindex := -1
for _, iface := range interfaces {
if iface.Name == linkName {
Ifindex = iface.Index
break
}
}
if Ifindex == -1 {
fmt.Printf("error: couldn't find a suitable network interface to attach to\n")
return
}
var program *xdp.Program
// Create a new XDP eBPF program and attach it to our chosen network link.
if protocol == 0 {
program, err = xdp.NewProgram(queueID + 1)
} else {
program, err = ebpf.NewIPProtoProgram(uint32(protocol), nil)
}
if err != nil {
fmt.Printf("error: failed to create xdp program: %v\n", err)
return
}
defer program.Close()
if err := program.Attach(Ifindex); err != nil {
fmt.Printf("error: failed to attach xdp program to interface: %v\n", err)
return
}
defer program.Detach(Ifindex)
// Create and initialize an XDP socket attached to our chosen network
// link.
xsk, err := xdp.NewSocket(Ifindex, queueID, nil)
if err != nil {
fmt.Printf("error: failed to create an XDP socket: %v\n", err)
return
}
// Register our XDP socket file descriptor with the eBPF program so packets can be redirected to it
if err := program.Register(queueID, xsk.FD()); err != nil {
fmt.Printf("error: failed to register socket in BPF map: %v\n", err)
return
}
defer program.Unregister(queueID)
for {
// If there are any free slots on the Fill queue...
if n := xsk.NumFreeFillSlots(); n > 0 {
// ...then fetch up to that number of not-in-use
// descriptors and push them onto the Fill ring queue
// for the kernel to fill them with the received
// frames.
xsk.Fill(xsk.GetDescs(n, true))
}
// Wait for receive - meaning the kernel has
// produced one or more descriptors filled with a received
// frame onto the Rx ring queue.
log.Printf("waiting for frame(s) to be received...")
numRx, _, err := xsk.Poll(-1)
if err != nil {
fmt.Printf("error: %v\n", err)
return
}
if numRx > 0 {
// Consume the descriptors filled with received frames
// from the Rx ring queue.
rxDescs := xsk.Receive(numRx)
// Print the received frames and also modify them
// in-place replacing the destination MAC address with
// broadcast address.
for i := 0; i < len(rxDescs); i++ {
pktData := xsk.GetFrame(rxDescs[i])
pkt := gopacket.NewPacket(pktData, layers.LayerTypeEthernet, gopacket.Default)
log.Printf("received frame:\n%s%+v", hex.Dump(pktData[:]), pkt)
}
}
}
}

go.mod (40 changes)

@@ -29,14 +29,14 @@ require (
github.com/mattn/go-sqlite3 v1.14.24
github.com/mcuadros/go-defaults v1.2.0
github.com/mozillazg/go-pinyin v0.20.0
github.com/ncruces/go-sqlite3 v0.18.1
github.com/ncruces/go-sqlite3/gormlite v0.18.0
github.com/pion/interceptor v0.1.37
github.com/pion/logging v0.2.2
github.com/ncruces/go-sqlite3 v0.27.1
github.com/ncruces/go-sqlite3/gormlite v0.24.0
github.com/pion/interceptor v0.1.40
github.com/pion/logging v0.2.4
github.com/pion/rtcp v1.2.15
github.com/pion/rtp v1.8.10
github.com/pion/sdp/v3 v3.0.9
github.com/pion/webrtc/v4 v4.0.7
github.com/pion/rtp v1.8.21
github.com/pion/sdp/v3 v3.0.15
github.com/pion/webrtc/v4 v4.1.4
github.com/quic-go/qpack v0.5.1
github.com/quic-go/quic-go v0.50.1
github.com/rs/zerolog v1.33.0
@@ -47,7 +47,7 @@ require (
github.com/vishvananda/netlink v1.1.0
github.com/yapingcat/gomedia v0.0.0-20240601043430-920523f8e5c7
golang.org/x/image v0.22.0
golang.org/x/text v0.24.0
golang.org/x/text v0.27.0
google.golang.org/genproto/googleapis/api v0.0.0-20240711142825-46eb208f015d
google.golang.org/grpc v1.65.0
google.golang.org/protobuf v1.34.2
@@ -98,15 +98,15 @@ require (
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/julianday v1.0.0 // indirect
github.com/pion/datachannel v1.5.10 // indirect
github.com/pion/dtls/v3 v3.0.4 // indirect
github.com/pion/ice/v4 v4.0.3 // indirect
github.com/pion/dtls/v3 v3.0.7 // indirect
github.com/pion/ice/v4 v4.0.10 // indirect
github.com/pion/mdns/v2 v2.0.7 // indirect
github.com/pion/randutil v0.1.0 // indirect
github.com/pion/sctp v1.8.35 // indirect
github.com/pion/srtp/v3 v3.0.4 // indirect
github.com/pion/sctp v1.8.39 // indirect
github.com/pion/srtp/v3 v3.0.7 // indirect
github.com/pion/stun/v3 v3.0.0 // indirect
github.com/pion/transport/v3 v3.0.7 // indirect
github.com/pion/turn/v4 v4.0.0 // indirect
github.com/pion/turn/v4 v4.1.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/prometheus/client_model v0.6.1 // indirect
@@ -117,7 +117,7 @@ require (
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/spf13/cast v1.7.1 // indirect
github.com/tetratelabs/wazero v1.8.0 // indirect
github.com/tetratelabs/wazero v1.9.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
@@ -131,7 +131,7 @@ require (
github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/sync v0.13.0 // indirect
golang.org/x/sync v0.16.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240711142825-46eb208f015d // indirect
)
@@ -149,11 +149,11 @@ require (
github.com/prometheus/client_golang v1.20.4
github.com/quangngotan95/go-m3u8 v0.1.0
go.uber.org/mock v0.5.0 // indirect
golang.org/x/crypto v0.37.0
golang.org/x/crypto v0.40.0
golang.org/x/exp v0.0.0-20240716175740-e3f259677ff7
golang.org/x/mod v0.19.0 // indirect
golang.org/x/net v0.39.0
golang.org/x/sys v0.32.0
golang.org/x/tools v0.23.0 // indirect
golang.org/x/mod v0.25.0 // indirect
golang.org/x/net v0.41.0
golang.org/x/sys v0.34.0
golang.org/x/tools v0.34.0 // indirect
gopkg.in/yaml.v3 v3.0.1
)

go.sum (87 changes)

@@ -189,10 +189,10 @@ github.com/mozillazg/go-pinyin v0.20.0 h1:BtR3DsxpApHfKReaPO1fCqF4pThRwH9uwvXzm+
github.com/mozillazg/go-pinyin v0.20.0/go.mod h1:iR4EnMMRXkfpFVV5FMi4FNB6wGq9NV6uDWbUuPhP4Yc=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/ncruces/go-sqlite3 v0.18.1 h1:iN8IMZV5EMxpH88NUac9vId23eTKNFUhP7jgY0EBbNc=
github.com/ncruces/go-sqlite3 v0.18.1/go.mod h1:eEOyZnW1dGTJ+zDpMuzfYamEUBtdFz5zeYhqLBtHxvM=
github.com/ncruces/go-sqlite3/gormlite v0.18.0 h1:KqP9a9wlX/Ba+yG+aeVX4pnNBNdaSO6xHdNDWzPxPnk=
github.com/ncruces/go-sqlite3/gormlite v0.18.0/go.mod h1:RXeT1hknrz3A0tBDL6IfluDHuNkHdJeImn5TBMQg9zc=
github.com/ncruces/go-sqlite3 v0.27.1 h1:suqlM7xhSyDVMV9RgX99MCPqt9mB6YOCzHZuiI36K34=
github.com/ncruces/go-sqlite3 v0.27.1/go.mod h1:gpF5s+92aw2MbDmZK0ZOnCdFlpe11BH20CTspVqri0c=
github.com/ncruces/go-sqlite3/gormlite v0.24.0 h1:81sHeq3CCdhjoqAB650n5wEdRlLO9VBvosArskcN3+c=
github.com/ncruces/go-sqlite3/gormlite v0.24.0/go.mod h1:vXfVWdBfg7qOgqQqHpzUWl9LLswD0h+8mK4oouaV2oc=
github.com/ncruces/julianday v1.0.0 h1:fH0OKwa7NWvniGQtxdJRxAgkBMolni2BjDHaWTxqt7M=
github.com/ncruces/julianday v1.0.0/go.mod h1:Dusn2KvZrrovOMJuOt0TNXL6tB7U2E8kvza5fFc9G7g=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
@@ -208,36 +208,36 @@ github.com/phsym/console-slog v0.3.1 h1:Fuzcrjr40xTc004S9Kni8XfNsk+qrptQmyR+wZw9
github.com/phsym/console-slog v0.3.1/go.mod h1:oJskjp/X6e6c0mGpfP8ELkfKUsrkDifYRAqJQgmdDS0=
github.com/pion/datachannel v1.5.10 h1:ly0Q26K1i6ZkGf42W7D4hQYR90pZwzFOjTq5AuCKk4o=
github.com/pion/datachannel v1.5.10/go.mod h1:p/jJfC9arb29W7WrxyKbepTU20CFgyx5oLo8Rs4Py/M=
github.com/pion/dtls/v3 v3.0.4 h1:44CZekewMzfrn9pmGrj5BNnTMDCFwr+6sLH+cCuLM7U=
github.com/pion/dtls/v3 v3.0.4/go.mod h1:R373CsjxWqNPf6MEkfdy3aSe9niZvL/JaKlGeFphtMg=
github.com/pion/ice/v4 v4.0.3 h1:9s5rI1WKzF5DRqhJ+Id8bls/8PzM7mau0mj1WZb4IXE=
github.com/pion/ice/v4 v4.0.3/go.mod h1:VfHy0beAZ5loDT7BmJ2LtMtC4dbawIkkkejHPRZNB3Y=
github.com/pion/interceptor v0.1.37 h1:aRA8Zpab/wE7/c0O3fh1PqY0AJI3fCSEM5lRWJVorwI=
github.com/pion/interceptor v0.1.37/go.mod h1:JzxbJ4umVTlZAf+/utHzNesY8tmRkM2lVmkS82TTj8Y=
github.com/pion/logging v0.2.2 h1:M9+AIj/+pxNsDfAT64+MAVgJO0rsyLnoJKCqf//DoeY=
github.com/pion/logging v0.2.2/go.mod h1:k0/tDVsRCX2Mb2ZEmTqNa7CWsQPc+YYCB7Q+5pahoms=
github.com/pion/dtls/v3 v3.0.7 h1:bItXtTYYhZwkPFk4t1n3Kkf5TDrfj6+4wG+CZR8uI9Q=
github.com/pion/dtls/v3 v3.0.7/go.mod h1:uDlH5VPrgOQIw59irKYkMudSFprY9IEFCqz/eTz16f8=
github.com/pion/ice/v4 v4.0.10 h1:P59w1iauC/wPk9PdY8Vjl4fOFL5B+USq1+xbDcN6gT4=
github.com/pion/ice/v4 v4.0.10/go.mod h1:y3M18aPhIxLlcO/4dn9X8LzLLSma84cx6emMSu14FGw=
github.com/pion/interceptor v0.1.40 h1:e0BjnPcGpr2CFQgKhrQisBU7V3GXK6wrfYrGYaU6Jq4=
github.com/pion/interceptor v0.1.40/go.mod h1:Z6kqH7M/FYirg3frjGJ21VLSRJGBXB/KqaTIrdqnOic=
github.com/pion/logging v0.2.4 h1:tTew+7cmQ+Mc1pTBLKH2puKsOvhm32dROumOZ655zB8=
github.com/pion/logging v0.2.4/go.mod h1:DffhXTKYdNZU+KtJ5pyQDjvOAh/GsNSyv1lbkFbe3so=
github.com/pion/mdns/v2 v2.0.7 h1:c9kM8ewCgjslaAmicYMFQIde2H9/lrZpjBkN8VwoVtM=
github.com/pion/mdns/v2 v2.0.7/go.mod h1:vAdSYNAT0Jy3Ru0zl2YiW3Rm/fJCwIeM0nToenfOJKA=
github.com/pion/randutil v0.1.0 h1:CFG1UdESneORglEsnimhUjf33Rwjubwj6xfiOXBa3mA=
github.com/pion/randutil v0.1.0/go.mod h1:XcJrSMMbbMRhASFVOlj/5hQial/Y8oH/HVo7TBZq+j8=
github.com/pion/rtcp v1.2.15 h1:LZQi2JbdipLOj4eBjK4wlVoQWfrZbh3Q6eHtWtJBZBo=
github.com/pion/rtcp v1.2.15/go.mod h1:jlGuAjHMEXwMUHK78RgX0UmEJFV4zUKOFHR7OP+D3D0=
github.com/pion/rtp v1.8.10 h1:puphjdbjPB+L+NFaVuZ5h6bt1g5q4kFIoI+r5q/g0CU=
github.com/pion/rtp v1.8.10/go.mod h1:8uMBJj32Pa1wwx8Fuv/AsFhn8jsgw+3rUC2PfoBZ8p4=
github.com/pion/sctp v1.8.35 h1:qwtKvNK1Wc5tHMIYgTDJhfZk7vATGVHhXbUDfHbYwzA=
github.com/pion/sctp v1.8.35/go.mod h1:EcXP8zCYVTRy3W9xtOF7wJm1L1aXfKRQzaM33SjQlzg=
github.com/pion/sdp/v3 v3.0.9 h1:pX++dCHoHUwq43kuwf3PyJfHlwIj4hXA7Vrifiq0IJY=
github.com/pion/sdp/v3 v3.0.9/go.mod h1:B5xmvENq5IXJimIO4zfp6LAe1fD9N+kFv+V/1lOdz8M=
github.com/pion/srtp/v3 v3.0.4 h1:2Z6vDVxzrX3UHEgrUyIGM4rRouoC7v+NiF1IHtp9B5M=
github.com/pion/srtp/v3 v3.0.4/go.mod h1:1Jx3FwDoxpRaTh1oRV8A/6G1BnFL+QI82eK4ms8EEJQ=
github.com/pion/rtp v1.8.21 h1:3yrOwmZFyUpcIosNcWRpQaU+UXIJ6yxLuJ8Bx0mw37Y=
github.com/pion/rtp v1.8.21/go.mod h1:bAu2UFKScgzyFqvUKmbvzSdPr+NGbZtv6UB2hesqXBk=
github.com/pion/sctp v1.8.39 h1:PJma40vRHa3UTO3C4MyeJDQ+KIobVYRZQZ0Nt7SjQnE=
github.com/pion/sctp v1.8.39/go.mod h1:cNiLdchXra8fHQwmIoqw0MbLLMs+f7uQ+dGMG2gWebE=
github.com/pion/sdp/v3 v3.0.15 h1:F0I1zds+K/+37ZrzdADmx2Q44OFDOPRLhPnNTaUX9hk=
github.com/pion/sdp/v3 v3.0.15/go.mod h1:88GMahN5xnScv1hIMTqLdu/cOcUkj6a9ytbncwMCq2E=
github.com/pion/srtp/v3 v3.0.7 h1:QUElw0A/FUg3MP8/KNMZB3i0m8F9XeMnTum86F7S4bs=
github.com/pion/srtp/v3 v3.0.7/go.mod h1:qvnHeqbhT7kDdB+OGB05KA/P067G3mm7XBfLaLiaNF0=
github.com/pion/stun/v3 v3.0.0 h1:4h1gwhWLWuZWOJIJR9s2ferRO+W3zA/b6ijOI6mKzUw=
github.com/pion/stun/v3 v3.0.0/go.mod h1:HvCN8txt8mwi4FBvS3EmDghW6aQJ24T+y+1TKjB5jyU=
github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
github.com/pion/turn/v4 v4.0.0 h1:qxplo3Rxa9Yg1xXDxxH8xaqcyGUtbHYw4QSCvmFWvhM=
github.com/pion/turn/v4 v4.0.0/go.mod h1:MuPDkm15nYSklKpN8vWJ9W2M0PlyQZqYt1McGuxG7mA=
github.com/pion/webrtc/v4 v4.0.7 h1:aeq78uVnFZd2umXW0O9A2VFQYuS7+BZxWetQvSp2jPo=
github.com/pion/webrtc/v4 v4.0.7/go.mod h1:oFVBBVSHU3vAEwSgnk3BuKCwAUwpDwQhko1EDwyZWbU=
github.com/pion/turn/v4 v4.1.1 h1:9UnY2HB99tpDyz3cVVZguSxcqkJ1DsTSZ+8TGruh4fc=
github.com/pion/turn/v4 v4.1.1/go.mod h1:2123tHk1O++vmjI5VSD0awT50NywDAq5A2NNNU4Jjs8=
github.com/pion/webrtc/v4 v4.1.4 h1:/gK1ACGHXQmtyVVbJFQDxNoODg4eSRiFLB7t9r9pg8M=
github.com/pion/webrtc/v4 v4.1.4/go.mod h1:Oab9npu1iZtQRMic3K3toYq5zFPvToe/QBw7dMI2ok4=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.4.0/go.mod h1:NWz/XGvpEW1FyYQ7fCx4dqYBLlfTcE+A9FLAkNKqjFE=
@@ -287,22 +287,15 @@ github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y=
github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/sunfish-shogi/bufseekio v0.0.0-20210207115823-a4185644b365/go.mod h1:dEzdXgvImkQ3WLI+0KQpmEx8T/C/ma9KeS3AfmU899I=
github.com/tetratelabs/wazero v1.8.0 h1:iEKu0d4c2Pd+QSRieYbnQC9yiFlMS9D+Jr0LsRmcF4g=
github.com/tetratelabs/wazero v1.8.0/go.mod h1:yAI0XTsMBhREkM/YDAK/zNou3GoiAce1P6+rp/wQhjs=
github.com/tetratelabs/wazero v1.9.0 h1:IcZ56OuxrtaEz8UYNRHBrUa9bYeX9oVY93KspZZBf/I=
github.com/tetratelabs/wazero v1.9.0/go.mod h1:TSbcXCfFP0L2FGkRPxHphadXPjo1T6W+CseNNY7EkjM=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
@@ -341,8 +334,8 @@ golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc=
golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/crypto v0.40.0 h1:r4x+VvoG5Fm+eJcxMaY8CQM7Lb0l1lsmjGBQ6s8BfKM=
golang.org/x/crypto v0.40.0/go.mod h1:Qr1vMER5WyS2dfPHAlsOj01wgLbsyWtFn/aY+5+ZdxY=
golang.org/x/exp v0.0.0-20240716175740-e3f259677ff7 h1:wDLEX9a7YQoKdKNQt88rtydkqDxeGaBUTnIYc3iG/mA=
golang.org/x/exp v0.0.0-20240716175740-e3f259677ff7/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/image v0.0.0-20191009234506-e7c1f5e7dbb8/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
@@ -350,17 +343,17 @@ golang.org/x/image v0.22.0 h1:UtK5yLUzilVrkjMAZAZ34DXGpASN8i8pj8g+O+yd10g=
golang.org/x/image v0.22.0/go.mod h1:9hPFhljd4zZ1GNSIZJ49sqbp45GKK9t6w+iXvGqZUz4=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.19.0 h1:fEdghXQSo20giMthA7cd28ZC+jts4amQ3YMXiP5oMQ8=
golang.org/x/mod v0.19.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -381,19 +374,19 @@ golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
golang.org/x/text v0.27.0 h1:4fGWRpyh641NLlecmyl4LOe6yDdfaYNrGb2zdfo4JV4=
golang.org/x/text v0.27.0/go.mod h1:1D28KMCvyooCX9hBiosv5Tz/+YLxj0j7XhWjpSUF7CU=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20191216052735-49a3e744a425/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.23.0 h1:SGsXPZ+2l4JsgaCKkx+FQ9YZ5XEtA1GZYuoDjenLjvg=
golang.org/x/tools v0.23.0/go.mod h1:pnu6ufv6vQkll6szChhK3C3L/ruaIv5eBeztNG8wtsI=
golang.org/x/tools v0.34.0 h1:qIpSLOxeCYGg9TrcJokLBG4KFA6d795g0xkBkiESGlo=
golang.org/x/tools v0.34.0/go.mod h1:pAP9OwEaY1CAW3HOmg3hLZC5Z0CCmzjAF2UQMSqNARg=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/api v0.0.0-20240711142825-46eb208f015d h1:kHjw/5UfflP/L5EbledDrcG4C2597RtymmGRZvHiCuY=


@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.6
// protoc v6.31.1
// protoc v5.29.3
// source: auth.proto
package pb


@@ -10,7 +10,6 @@ package pb
import (
"context"
"errors"
"io"
"net/http"
@@ -25,118 +24,116 @@ import (
)
// Suppress "imported and not used" errors
var (
_ codes.Code
_ io.Reader
_ status.Status
_ = errors.New
_ = runtime.String
_ = utilities.NewDoubleArray
_ = metadata.Join
)
var _ codes.Code
var _ io.Reader
var _ status.Status
var _ = runtime.String
var _ = utilities.NewDoubleArray
var _ = metadata.Join
func request_Auth_Login_0(ctx context.Context, marshaler runtime.Marshaler, client AuthClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq LoginRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
var protoReq LoginRequest
var metadata runtime.ServerMetadata
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && err != io.EOF {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.Login(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Auth_Login_0(ctx context.Context, marshaler runtime.Marshaler, server AuthServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq LoginRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
var protoReq LoginRequest
var metadata runtime.ServerMetadata
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && err != io.EOF {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.Login(ctx, &protoReq)
return msg, metadata, err
}
func request_Auth_Logout_0(ctx context.Context, marshaler runtime.Marshaler, client AuthClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq LogoutRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
var protoReq LogoutRequest
var metadata runtime.ServerMetadata
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && err != io.EOF {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.Logout(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Auth_Logout_0(ctx context.Context, marshaler runtime.Marshaler, server AuthServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq LogoutRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
var protoReq LogoutRequest
var metadata runtime.ServerMetadata
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && err != io.EOF {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.Logout(ctx, &protoReq)
return msg, metadata, err
}
var filter_Auth_GetUserInfo_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
var (
filter_Auth_GetUserInfo_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
)
func request_Auth_GetUserInfo_0(ctx context.Context, marshaler runtime.Marshaler, client AuthClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq UserInfoRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
var protoReq UserInfoRequest
var metadata runtime.ServerMetadata
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Auth_GetUserInfo_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.GetUserInfo(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Auth_GetUserInfo_0(ctx context.Context, marshaler runtime.Marshaler, server AuthServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq UserInfoRequest
metadata runtime.ServerMetadata
)
var protoReq UserInfoRequest
var metadata runtime.ServerMetadata
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Auth_GetUserInfo_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.GetUserInfo(ctx, &protoReq)
return msg, metadata, err
}
// RegisterAuthHandlerServer registers the http handlers for service Auth to "mux".
// UnaryRPC :call AuthServer directly.
// StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906.
// Note that using this registration option will cause many gRPC library features to stop working. Consider using RegisterAuthHandlerFromEndpoint instead.
// GRPC interceptors will not work for this type of registration. To use interceptors, you must use the "runtime.WithMiddlewares" option in the "runtime.NewServeMux" call.
func RegisterAuthHandlerServer(ctx context.Context, mux *runtime.ServeMux, server AuthServer) error {
mux.Handle(http.MethodPost, pattern_Auth_Login_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("POST", pattern_Auth_Login_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/pb.Auth/Login", runtime.WithHTTPPathPattern("/api/auth/login"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/pb.Auth/Login", runtime.WithHTTPPathPattern("/api/auth/login"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -148,15 +145,20 @@ func RegisterAuthHandlerServer(ctx context.Context, mux *runtime.ServeMux, serve
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Auth_Login_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Auth_Logout_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("POST", pattern_Auth_Logout_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/pb.Auth/Logout", runtime.WithHTTPPathPattern("/api/auth/logout"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/pb.Auth/Logout", runtime.WithHTTPPathPattern("/api/auth/logout"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -168,15 +170,20 @@ func RegisterAuthHandlerServer(ctx context.Context, mux *runtime.ServeMux, serve
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Auth_Logout_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Auth_GetUserInfo_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Auth_GetUserInfo_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/pb.Auth/GetUserInfo", runtime.WithHTTPPathPattern("/api/auth/userinfo"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/pb.Auth/GetUserInfo", runtime.WithHTTPPathPattern("/api/auth/userinfo"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -188,7 +195,9 @@ func RegisterAuthHandlerServer(ctx context.Context, mux *runtime.ServeMux, serve
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Auth_GetUserInfo_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
@@ -197,24 +206,25 @@ func RegisterAuthHandlerServer(ctx context.Context, mux *runtime.ServeMux, serve
// RegisterAuthHandlerFromEndpoint is same as RegisterAuthHandler but
// automatically dials to "endpoint" and closes the connection when "ctx" gets done.
func RegisterAuthHandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error) {
conn, err := grpc.NewClient(endpoint, opts...)
conn, err := grpc.DialContext(ctx, endpoint, opts...)
if err != nil {
return err
}
defer func() {
if err != nil {
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
return
}
go func() {
<-ctx.Done()
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
}()
}()
return RegisterAuthHandler(ctx, mux, conn)
}
@@ -228,13 +238,16 @@ func RegisterAuthHandler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.
// to "mux". The handlers forward requests to the grpc endpoint over the given implementation of "AuthClient".
// Note: the gRPC framework executes interceptors within the gRPC handler. If the passed in "AuthClient"
// doesn't go through the normal gRPC flow (creating a gRPC client etc.) then it will be up to the passed in
// "AuthClient" to call the correct interceptors. This client ignores the HTTP middlewares.
// "AuthClient" to call the correct interceptors.
func RegisterAuthHandlerClient(ctx context.Context, mux *runtime.ServeMux, client AuthClient) error {
mux.Handle(http.MethodPost, pattern_Auth_Login_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("POST", pattern_Auth_Login_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/pb.Auth/Login", runtime.WithHTTPPathPattern("/api/auth/login"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/pb.Auth/Login", runtime.WithHTTPPathPattern("/api/auth/login"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -245,13 +258,18 @@ func RegisterAuthHandlerClient(ctx context.Context, mux *runtime.ServeMux, clien
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Auth_Login_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Auth_Logout_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("POST", pattern_Auth_Logout_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/pb.Auth/Logout", runtime.WithHTTPPathPattern("/api/auth/logout"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/pb.Auth/Logout", runtime.WithHTTPPathPattern("/api/auth/logout"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -262,13 +280,18 @@ func RegisterAuthHandlerClient(ctx context.Context, mux *runtime.ServeMux, clien
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Auth_Logout_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Auth_GetUserInfo_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Auth_GetUserInfo_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/pb.Auth/GetUserInfo", runtime.WithHTTPPathPattern("/api/auth/userinfo"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/pb.Auth/GetUserInfo", runtime.WithHTTPPathPattern("/api/auth/userinfo"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -279,19 +302,26 @@ func RegisterAuthHandlerClient(ctx context.Context, mux *runtime.ServeMux, clien
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Auth_GetUserInfo_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
var (
pattern_Auth_Login_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "auth", "login"}, ""))
pattern_Auth_Logout_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "auth", "logout"}, ""))
pattern_Auth_GetUserInfo_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "auth", "userinfo"}, ""))
)
var (
forward_Auth_Login_0 = runtime.ForwardResponseMessage
forward_Auth_Logout_0 = runtime.ForwardResponseMessage
forward_Auth_GetUserInfo_0 = runtime.ForwardResponseMessage
)


@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v6.31.1
// - protoc v5.29.3
// source: auth.proto
package pb


@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.6
// protoc v6.31.1
// protoc v5.29.3
// source: global.proto
package pb
@@ -1042,6 +1042,8 @@ type TaskTreeData struct {
Blocked *TaskTreeData `protobuf:"bytes,8,opt,name=blocked,proto3" json:"blocked,omitempty"`
Pointer uint64 `protobuf:"varint,9,opt,name=pointer,proto3" json:"pointer,omitempty"`
StartReason string `protobuf:"bytes,10,opt,name=startReason,proto3" json:"startReason,omitempty"`
EventLoopRunning bool `protobuf:"varint,11,opt,name=eventLoopRunning,proto3" json:"eventLoopRunning,omitempty"`
Level uint32 `protobuf:"varint,12,opt,name=level,proto3" json:"level,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
@@ -1146,6 +1148,20 @@ func (x *TaskTreeData) GetStartReason() string {
return ""
}
func (x *TaskTreeData) GetEventLoopRunning() bool {
if x != nil {
return x.EventLoopRunning
}
return false
}
func (x *TaskTreeData) GetLevel() uint32 {
if x != nil {
return x.Level
}
return 0
}
type TaskTreeResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Code int32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
@@ -3365,7 +3381,8 @@ type UpdatePushProxyRequest struct {
PushOnStart *bool `protobuf:"varint,7,opt,name=pushOnStart,proto3,oneof" json:"pushOnStart,omitempty"` // 启动时推流
Audio *bool `protobuf:"varint,8,opt,name=audio,proto3,oneof" json:"audio,omitempty"` // 是否推音频
Description *string `protobuf:"bytes,9,opt,name=description,proto3,oneof" json:"description,omitempty"` // 设备描述
StreamPath *string `protobuf:"bytes,10,opt,name=streamPath,proto3,oneof" json:"streamPath,omitempty"` // 流路径
Rtt *uint32 `protobuf:"varint,10,opt,name=rtt,proto3,oneof" json:"rtt,omitempty"` // 平均RTT
StreamPath *string `protobuf:"bytes,11,opt,name=streamPath,proto3,oneof" json:"streamPath,omitempty"` // 流路径
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
@@ -3463,6 +3480,13 @@ func (x *UpdatePushProxyRequest) GetDescription() string {
return ""
}
func (x *UpdatePushProxyRequest) GetRtt() uint32 {
if x != nil && x.Rtt != nil {
return *x.Rtt
}
return 0
}
func (x *UpdatePushProxyRequest) GetStreamPath() string {
if x != nil && x.StreamPath != nil {
return *x.StreamPath
@@ -5334,6 +5358,194 @@ func (x *AlarmListResponse) GetData() []*AlarmInfo {
return nil
}
type Step struct {
state protoimpl.MessageState `protogen:"open.v1"`
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
Error string `protobuf:"bytes,3,opt,name=error,proto3" json:"error,omitempty"`
StartedAt *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=startedAt,proto3" json:"startedAt,omitempty"`
CompletedAt *timestamppb.Timestamp `protobuf:"bytes,5,opt,name=completedAt,proto3" json:"completedAt,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Step) Reset() {
*x = Step{}
mi := &file_global_proto_msgTypes[71]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Step) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Step) ProtoMessage() {}
func (x *Step) ProtoReflect() protoreflect.Message {
mi := &file_global_proto_msgTypes[71]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Step.ProtoReflect.Descriptor instead.
func (*Step) Descriptor() ([]byte, []int) {
return file_global_proto_rawDescGZIP(), []int{71}
}
func (x *Step) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *Step) GetDescription() string {
if x != nil {
return x.Description
}
return ""
}
func (x *Step) GetError() string {
if x != nil {
return x.Error
}
return ""
}
func (x *Step) GetStartedAt() *timestamppb.Timestamp {
if x != nil {
return x.StartedAt
}
return nil
}
func (x *Step) GetCompletedAt() *timestamppb.Timestamp {
if x != nil {
return x.CompletedAt
}
return nil
}
type SubscriptionProgressData struct {
state protoimpl.MessageState `protogen:"open.v1"`
Steps []*Step `protobuf:"bytes,1,rep,name=steps,proto3" json:"steps,omitempty"`
CurrentStep int32 `protobuf:"varint,2,opt,name=currentStep,proto3" json:"currentStep,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SubscriptionProgressData) Reset() {
*x = SubscriptionProgressData{}
mi := &file_global_proto_msgTypes[72]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SubscriptionProgressData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SubscriptionProgressData) ProtoMessage() {}
func (x *SubscriptionProgressData) ProtoReflect() protoreflect.Message {
mi := &file_global_proto_msgTypes[72]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SubscriptionProgressData.ProtoReflect.Descriptor instead.
func (*SubscriptionProgressData) Descriptor() ([]byte, []int) {
return file_global_proto_rawDescGZIP(), []int{72}
}
func (x *SubscriptionProgressData) GetSteps() []*Step {
if x != nil {
return x.Steps
}
return nil
}
func (x *SubscriptionProgressData) GetCurrentStep() int32 {
if x != nil {
return x.CurrentStep
}
return 0
}
type SubscriptionProgressResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Code int32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
Data *SubscriptionProgressData `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SubscriptionProgressResponse) Reset() {
*x = SubscriptionProgressResponse{}
mi := &file_global_proto_msgTypes[73]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SubscriptionProgressResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SubscriptionProgressResponse) ProtoMessage() {}
func (x *SubscriptionProgressResponse) ProtoReflect() protoreflect.Message {
mi := &file_global_proto_msgTypes[73]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SubscriptionProgressResponse.ProtoReflect.Descriptor instead.
func (*SubscriptionProgressResponse) Descriptor() ([]byte, []int) {
return file_global_proto_rawDescGZIP(), []int{73}
}
func (x *SubscriptionProgressResponse) GetCode() int32 {
if x != nil {
return x.Code
}
return 0
}
func (x *SubscriptionProgressResponse) GetMessage() string {
if x != nil {
return x.Message
}
return ""
}
func (x *SubscriptionProgressResponse) GetData() *SubscriptionProgressData {
if x != nil {
return x.Data
}
return nil
}
var File_global_proto protoreflect.FileDescriptor
const file_global_proto_rawDesc = "" +
@@ -5430,7 +5642,7 @@ const file_global_proto_rawDesc = "" +
"\x0fSysInfoResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\x05R\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12'\n" +
"\x04data\x18\x03 \x01(\v2\x13.global.SysInfoDataR\x04data\"\xbf\x03\n" +
"\x04data\x18\x03 \x01(\v2\x13.global.SysInfoDataR\x04data\"\x81\x04\n" +
"\fTaskTreeData\x12\x0e\n" +
"\x02id\x18\x01 \x01(\rR\x02id\x12\x12\n" +
"\x04type\x18\x02 \x01(\rR\x04type\x12\x14\n" +
@@ -5442,7 +5654,9 @@ const file_global_proto_rawDesc = "" +
"\ablocked\x18\b \x01(\v2\x14.global.TaskTreeDataR\ablocked\x12\x18\n" +
"\apointer\x18\t \x01(\x04R\apointer\x12 \n" +
"\vstartReason\x18\n" +
" \x01(\tR\vstartReason\x1a>\n" +
" \x01(\tR\vstartReason\x12*\n" +
"\x10eventLoopRunning\x18\v \x01(\bR\x10eventLoopRunning\x12\x14\n" +
"\x05level\x18\f \x01(\rR\x05level\x1a>\n" +
"\x10DescriptionEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
"\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"j\n" +
@@ -5699,7 +5913,7 @@ const file_global_proto_rawDesc = "" +
"\x03rtt\x18\f \x01(\rR\x03rtt\x12\x1e\n" +
"\n" +
"streamPath\x18\r \x01(\tR\n" +
"streamPath\"\xb4\x03\n" +
"streamPath\"\xd3\x03\n" +
"\x16UpdatePushProxyRequest\x12\x0e\n" +
"\x02ID\x18\x01 \x01(\rR\x02ID\x12\x1f\n" +
"\bparentID\x18\x02 \x01(\rH\x00R\bparentID\x88\x01\x01\x12\x17\n" +
@@ -5709,10 +5923,11 @@ const file_global_proto_rawDesc = "" +
"\apushURL\x18\x06 \x01(\tH\x04R\apushURL\x88\x01\x01\x12%\n" +
"\vpushOnStart\x18\a \x01(\bH\x05R\vpushOnStart\x88\x01\x01\x12\x19\n" +
"\x05audio\x18\b \x01(\bH\x06R\x05audio\x88\x01\x01\x12%\n" +
"\vdescription\x18\t \x01(\tH\aR\vdescription\x88\x01\x01\x12#\n" +
"\vdescription\x18\t \x01(\tH\aR\vdescription\x88\x01\x01\x12\x15\n" +
"\x03rtt\x18\n" +
" \x01(\rH\bR\x03rtt\x88\x01\x01\x12#\n" +
"\n" +
"streamPath\x18\n" +
" \x01(\tH\bR\n" +
"streamPath\x18\v \x01(\tH\tR\n" +
"streamPath\x88\x01\x01B\v\n" +
"\t_parentIDB\a\n" +
"\x05_nameB\a\n" +
@@ -5722,7 +5937,8 @@ const file_global_proto_rawDesc = "" +
"\b_pushURLB\x0e\n" +
"\f_pushOnStartB\b\n" +
"\x06_audioB\x0e\n" +
"\f_descriptionB\r\n" +
"\f_descriptionB\x06\n" +
"\x04_rttB\r\n" +
"\v_streamPath\"p\n" +
"\x15PushProxyListResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\x05R\x04code\x12\x18\n" +
@@ -5913,7 +6129,20 @@ const file_global_proto_rawDesc = "" +
"\x05total\x18\x03 \x01(\x05R\x05total\x12\x18\n" +
"\apageNum\x18\x04 \x01(\x05R\apageNum\x12\x1a\n" +
"\bpageSize\x18\x05 \x01(\x05R\bpageSize\x12%\n" +
"\x04data\x18\x06 \x03(\v2\x11.global.AlarmInfoR\x04data2\xab#\n" +
"\x04data\x18\x06 \x03(\v2\x11.global.AlarmInfoR\x04data\"\xca\x01\n" +
"\x04Step\x12\x12\n" +
"\x04name\x18\x01 \x01(\tR\x04name\x12 \n" +
"\vdescription\x18\x02 \x01(\tR\vdescription\x12\x14\n" +
"\x05error\x18\x03 \x01(\tR\x05error\x128\n" +
"\tstartedAt\x18\x04 \x01(\v2\x1a.google.protobuf.TimestampR\tstartedAt\x12<\n" +
"\vcompletedAt\x18\x05 \x01(\v2\x1a.google.protobuf.TimestampR\vcompletedAt\"`\n" +
"\x18SubscriptionProgressData\x12\"\n" +
"\x05steps\x18\x01 \x03(\v2\f.global.StepR\x05steps\x12 \n" +
"\vcurrentStep\x18\x02 \x01(\x05R\vcurrentStep\"\x82\x01\n" +
"\x1cSubscriptionProgressResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\x05R\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x124\n" +
"\x04data\x18\x03 \x01(\v2 .global.SubscriptionProgressDataR\x04data2\xb6$\n" +
"\x03api\x12P\n" +
"\aSysInfo\x12\x16.google.protobuf.Empty\x1a\x17.global.SysInfoResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/api/sysinfo\x12i\n" +
"\x0fDisabledPlugins\x12\x16.google.protobuf.Empty\x1a\x1f.global.DisabledPluginsResponse\"\x1d\x82\xd3\xe4\x93\x02\x17\x12\x15/api/plugins/disabled\x12P\n" +
@@ -5960,7 +6189,8 @@ const file_global_proto_rawDesc = "" +
"\x12GetEventRecordList\x12\x15.global.ReqRecordList\x1a\x1f.global.EventRecordResponseList\"5\x82\xd3\xe4\x93\x02/\x12-/api/record/{type}/event/list/{streamPath=**}\x12i\n" +
"\x10GetRecordCatalog\x12\x18.global.ReqRecordCatalog\x1a\x17.global.ResponseCatalog\"\"\x82\xd3\xe4\x93\x02\x1c\x12\x1a/api/record/{type}/catalog\x12u\n" +
"\fDeleteRecord\x12\x17.global.ReqRecordDelete\x1a\x16.global.ResponseDelete\"4\x82\xd3\xe4\x93\x02.:\x01*\")/api/record/{type}/delete/{streamPath=**}\x12\\\n" +
"\fGetAlarmList\x12\x18.global.AlarmListRequest\x1a\x19.global.AlarmListResponse\"\x17\x82\xd3\xe4\x93\x02\x11\x12\x0f/api/alarm/listB\x10Z\x0em7s.live/v5/pbb\x06proto3"
"\fGetAlarmList\x12\x18.global.AlarmListRequest\x1a\x19.global.AlarmListResponse\"\x17\x82\xd3\xe4\x93\x02\x11\x12\x0f/api/alarm/list\x12\x88\x01\n" +
"\x17GetSubscriptionProgress\x12\x19.global.StreamSnapRequest\x1a$.global.SubscriptionProgressResponse\",\x82\xd3\xe4\x93\x02&\x12$/api/stream/progress/{streamPath=**}B\x10Z\x0em7s.live/v5/pbb\x06proto3"
var (
file_global_proto_rawDescOnce sync.Once
@@ -5974,7 +6204,7 @@ func file_global_proto_rawDescGZIP() []byte {
return file_global_proto_rawDescData
}
var file_global_proto_msgTypes = make([]protoimpl.MessageInfo, 78)
var file_global_proto_msgTypes = make([]protoimpl.MessageInfo, 81)
var file_global_proto_goTypes = []any{
(*DisabledPluginsResponse)(nil), // 0: global.DisabledPluginsResponse
(*GetConfigRequest)(nil), // 1: global.GetConfigRequest
@@ -6047,176 +6277,185 @@ var file_global_proto_goTypes = []any{
(*AlarmInfo)(nil), // 68: global.AlarmInfo
(*AlarmListRequest)(nil), // 69: global.AlarmListRequest
(*AlarmListResponse)(nil), // 70: global.AlarmListResponse
nil, // 71: global.Formily.PropertiesEntry
nil, // 72: global.Formily.ComponentPropsEntry
nil, // 73: global.FormilyResponse.PropertiesEntry
nil, // 74: global.PluginInfo.DescriptionEntry
nil, // 75: global.TaskTreeData.DescriptionEntry
nil, // 76: global.StreamWaitListResponse.ListEntry
nil, // 77: global.TrackSnapShotData.ReaderEntry
(*timestamppb.Timestamp)(nil), // 78: google.protobuf.Timestamp
(*durationpb.Duration)(nil), // 79: google.protobuf.Duration
(*anypb.Any)(nil), // 80: google.protobuf.Any
(*emptypb.Empty)(nil), // 81: google.protobuf.Empty
(*Step)(nil), // 71: global.Step
(*SubscriptionProgressData)(nil), // 72: global.SubscriptionProgressData
(*SubscriptionProgressResponse)(nil), // 73: global.SubscriptionProgressResponse
nil, // 74: global.Formily.PropertiesEntry
nil, // 75: global.Formily.ComponentPropsEntry
nil, // 76: global.FormilyResponse.PropertiesEntry
nil, // 77: global.PluginInfo.DescriptionEntry
nil, // 78: global.TaskTreeData.DescriptionEntry
nil, // 79: global.StreamWaitListResponse.ListEntry
nil, // 80: global.TrackSnapShotData.ReaderEntry
(*timestamppb.Timestamp)(nil), // 81: google.protobuf.Timestamp
(*durationpb.Duration)(nil), // 82: google.protobuf.Duration
(*anypb.Any)(nil), // 83: google.protobuf.Any
(*emptypb.Empty)(nil), // 84: google.protobuf.Empty
}
var file_global_proto_depIdxs = []int32{
12, // 0: global.DisabledPluginsResponse.data:type_name -> global.PluginInfo
71, // 1: global.Formily.properties:type_name -> global.Formily.PropertiesEntry
72, // 2: global.Formily.componentProps:type_name -> global.Formily.ComponentPropsEntry
73, // 3: global.FormilyResponse.properties:type_name -> global.FormilyResponse.PropertiesEntry
74, // 1: global.Formily.properties:type_name -> global.Formily.PropertiesEntry
75, // 2: global.Formily.componentProps:type_name -> global.Formily.ComponentPropsEntry
76, // 3: global.FormilyResponse.properties:type_name -> global.FormilyResponse.PropertiesEntry
4, // 4: global.GetConfigResponse.data:type_name -> global.ConfigData
10, // 5: global.SummaryResponse.memory:type_name -> global.Usage
10, // 6: global.SummaryResponse.hardDisk:type_name -> global.Usage
9, // 7: global.SummaryResponse.netWork:type_name -> global.NetWorkInfo
74, // 8: global.PluginInfo.description:type_name -> global.PluginInfo.DescriptionEntry
78, // 9: global.SysInfoData.startTime:type_name -> google.protobuf.Timestamp
77, // 8: global.PluginInfo.description:type_name -> global.PluginInfo.DescriptionEntry
81, // 9: global.SysInfoData.startTime:type_name -> google.protobuf.Timestamp
12, // 10: global.SysInfoData.plugins:type_name -> global.PluginInfo
13, // 11: global.SysInfoResponse.data:type_name -> global.SysInfoData
78, // 12: global.TaskTreeData.startTime:type_name -> google.protobuf.Timestamp
75, // 13: global.TaskTreeData.description:type_name -> global.TaskTreeData.DescriptionEntry
81, // 12: global.TaskTreeData.startTime:type_name -> google.protobuf.Timestamp
78, // 13: global.TaskTreeData.description:type_name -> global.TaskTreeData.DescriptionEntry
15, // 14: global.TaskTreeData.children:type_name -> global.TaskTreeData
15, // 15: global.TaskTreeData.blocked:type_name -> global.TaskTreeData
15, // 16: global.TaskTreeResponse.data:type_name -> global.TaskTreeData
22, // 17: global.StreamListResponse.data:type_name -> global.StreamInfo
76, // 18: global.StreamWaitListResponse.list:type_name -> global.StreamWaitListResponse.ListEntry
79, // 18: global.StreamWaitListResponse.list:type_name -> global.StreamWaitListResponse.ListEntry
22, // 19: global.StreamInfoResponse.data:type_name -> global.StreamInfo
28, // 20: global.StreamInfo.audioTrack:type_name -> global.AudioTrackInfo
31, // 21: global.StreamInfo.videoTrack:type_name -> global.VideoTrackInfo
78, // 22: global.StreamInfo.startTime:type_name -> google.protobuf.Timestamp
79, // 23: global.StreamInfo.bufferTime:type_name -> google.protobuf.Duration
81, // 22: global.StreamInfo.startTime:type_name -> google.protobuf.Timestamp
82, // 23: global.StreamInfo.bufferTime:type_name -> google.protobuf.Duration
23, // 24: global.StreamInfo.recording:type_name -> global.RecordingDetail
79, // 25: global.RecordingDetail.fragment:type_name -> google.protobuf.Duration
78, // 26: global.TrackSnapShot.writeTime:type_name -> google.protobuf.Timestamp
82, // 25: global.RecordingDetail.fragment:type_name -> google.protobuf.Duration
81, // 26: global.TrackSnapShot.writeTime:type_name -> google.protobuf.Timestamp
24, // 27: global.TrackSnapShot.wrap:type_name -> global.Wrap
26, // 28: global.MemoryBlockGroup.list:type_name -> global.MemoryBlock
25, // 29: global.TrackSnapShotData.ring:type_name -> global.TrackSnapShot
77, // 30: global.TrackSnapShotData.reader:type_name -> global.TrackSnapShotData.ReaderEntry
80, // 30: global.TrackSnapShotData.reader:type_name -> global.TrackSnapShotData.ReaderEntry
27, // 31: global.TrackSnapShotData.memory:type_name -> global.MemoryBlockGroup
29, // 32: global.TrackSnapShotResponse.data:type_name -> global.TrackSnapShotData
78, // 33: global.SubscriberSnapShot.startTime:type_name -> google.protobuf.Timestamp
81, // 33: global.SubscriberSnapShot.startTime:type_name -> google.protobuf.Timestamp
37, // 34: global.SubscriberSnapShot.audioReader:type_name -> global.RingReaderSnapShot
37, // 35: global.SubscriberSnapShot.videoReader:type_name -> global.RingReaderSnapShot
79, // 36: global.SubscriberSnapShot.bufferTime:type_name -> google.protobuf.Duration
82, // 36: global.SubscriberSnapShot.bufferTime:type_name -> google.protobuf.Duration
38, // 37: global.SubscribersResponse.data:type_name -> global.SubscriberSnapShot
41, // 38: global.PullProxyListResponse.data:type_name -> global.PullProxyInfo
78, // 39: global.PullProxyInfo.createTime:type_name -> google.protobuf.Timestamp
78, // 40: global.PullProxyInfo.updateTime:type_name -> google.protobuf.Timestamp
79, // 41: global.PullProxyInfo.recordFragment:type_name -> google.protobuf.Duration
79, // 42: global.UpdatePullProxyRequest.recordFragment:type_name -> google.protobuf.Duration
78, // 43: global.PushProxyInfo.createTime:type_name -> google.protobuf.Timestamp
78, // 44: global.PushProxyInfo.updateTime:type_name -> google.protobuf.Timestamp
81, // 39: global.PullProxyInfo.createTime:type_name -> google.protobuf.Timestamp
81, // 40: global.PullProxyInfo.updateTime:type_name -> google.protobuf.Timestamp
82, // 41: global.PullProxyInfo.recordFragment:type_name -> google.protobuf.Duration
82, // 42: global.UpdatePullProxyRequest.recordFragment:type_name -> google.protobuf.Duration
81, // 43: global.PushProxyInfo.createTime:type_name -> google.protobuf.Timestamp
81, // 44: global.PushProxyInfo.updateTime:type_name -> google.protobuf.Timestamp
43, // 45: global.PushProxyListResponse.data:type_name -> global.PushProxyInfo
47, // 46: global.StreamAliasListResponse.data:type_name -> global.StreamAlias
78, // 47: global.Recording.startTime:type_name -> google.protobuf.Timestamp
81, // 47: global.Recording.startTime:type_name -> google.protobuf.Timestamp
51, // 48: global.RecordingListResponse.data:type_name -> global.Recording
78, // 49: global.PushInfo.startTime:type_name -> google.protobuf.Timestamp
81, // 49: global.PushInfo.startTime:type_name -> google.protobuf.Timestamp
53, // 50: global.PushListResponse.data:type_name -> global.PushInfo
56, // 51: global.TransformListResponse.data:type_name -> global.Transform
78, // 52: global.RecordFile.startTime:type_name -> google.protobuf.Timestamp
78, // 53: global.RecordFile.endTime:type_name -> google.protobuf.Timestamp
78, // 54: global.EventRecordFile.startTime:type_name -> google.protobuf.Timestamp
78, // 55: global.EventRecordFile.endTime:type_name -> google.protobuf.Timestamp
81, // 52: global.RecordFile.startTime:type_name -> google.protobuf.Timestamp
81, // 53: global.RecordFile.endTime:type_name -> google.protobuf.Timestamp
81, // 54: global.EventRecordFile.startTime:type_name -> google.protobuf.Timestamp
81, // 55: global.EventRecordFile.endTime:type_name -> google.protobuf.Timestamp
59, // 56: global.RecordResponseList.data:type_name -> global.RecordFile
60, // 57: global.EventRecordResponseList.data:type_name -> global.EventRecordFile
78, // 58: global.Catalog.startTime:type_name -> google.protobuf.Timestamp
78, // 59: global.Catalog.endTime:type_name -> google.protobuf.Timestamp
81, // 58: global.Catalog.startTime:type_name -> google.protobuf.Timestamp
81, // 59: global.Catalog.endTime:type_name -> google.protobuf.Timestamp
63, // 60: global.ResponseCatalog.data:type_name -> global.Catalog
59, // 61: global.ResponseDelete.data:type_name -> global.RecordFile
78, // 62: global.AlarmInfo.createdAt:type_name -> google.protobuf.Timestamp
78, // 63: global.AlarmInfo.updatedAt:type_name -> google.protobuf.Timestamp
81, // 62: global.AlarmInfo.createdAt:type_name -> google.protobuf.Timestamp
81, // 63: global.AlarmInfo.updatedAt:type_name -> google.protobuf.Timestamp
68, // 64: global.AlarmListResponse.data:type_name -> global.AlarmInfo
2, // 65: global.Formily.PropertiesEntry.value:type_name -> global.Formily
80, // 66: global.Formily.ComponentPropsEntry.value:type_name -> google.protobuf.Any
2, // 67: global.FormilyResponse.PropertiesEntry.value:type_name -> global.Formily
81, // 68: global.api.SysInfo:input_type -> google.protobuf.Empty
81, // 69: global.api.DisabledPlugins:input_type -> google.protobuf.Empty
81, // 70: global.api.Summary:input_type -> google.protobuf.Empty
33, // 71: global.api.Shutdown:input_type -> global.RequestWithId
33, // 72: global.api.Restart:input_type -> global.RequestWithId
81, // 73: global.api.TaskTree:input_type -> google.protobuf.Empty
34, // 74: global.api.StopTask:input_type -> global.RequestWithId64
34, // 75: global.api.RestartTask:input_type -> global.RequestWithId64
17, // 76: global.api.StreamList:input_type -> global.StreamListRequest
81, // 77: global.api.WaitList:input_type -> google.protobuf.Empty
20, // 78: global.api.StreamInfo:input_type -> global.StreamSnapRequest
20, // 79: global.api.PauseStream:input_type -> global.StreamSnapRequest
20, // 80: global.api.ResumeStream:input_type -> global.StreamSnapRequest
49, // 81: global.api.SetStreamSpeed:input_type -> global.SetStreamSpeedRequest
50, // 82: global.api.SeekStream:input_type -> global.SeekStreamRequest
36, // 83: global.api.GetSubscribers:input_type -> global.SubscribersRequest
20, // 84: global.api.AudioTrackSnap:input_type -> global.StreamSnapRequest
20, // 85: global.api.VideoTrackSnap:input_type -> global.StreamSnapRequest
35, // 86: global.api.ChangeSubscribe:input_type -> global.ChangeSubscribeRequest
81, // 87: global.api.GetStreamAlias:input_type -> google.protobuf.Empty
46, // 88: global.api.SetStreamAlias:input_type -> global.SetStreamAliasRequest
20, // 89: global.api.StopPublish:input_type -> global.StreamSnapRequest
33, // 90: global.api.StopSubscribe:input_type -> global.RequestWithId
81, // 91: global.api.GetConfigFile:input_type -> google.protobuf.Empty
7, // 92: global.api.UpdateConfigFile:input_type -> global.UpdateConfigFileRequest
1, // 93: global.api.GetConfig:input_type -> global.GetConfigRequest
1, // 94: global.api.GetFormily:input_type -> global.GetConfigRequest
81, // 95: global.api.GetPullProxyList:input_type -> google.protobuf.Empty
41, // 96: global.api.AddPullProxy:input_type -> global.PullProxyInfo
33, // 97: global.api.RemovePullProxy:input_type -> global.RequestWithId
42, // 98: global.api.UpdatePullProxy:input_type -> global.UpdatePullProxyRequest
81, // 99: global.api.GetPushProxyList:input_type -> google.protobuf.Empty
43, // 100: global.api.AddPushProxy:input_type -> global.PushProxyInfo
33, // 101: global.api.RemovePushProxy:input_type -> global.RequestWithId
44, // 102: global.api.UpdatePushProxy:input_type -> global.UpdatePushProxyRequest
81, // 103: global.api.GetRecording:input_type -> google.protobuf.Empty
81, // 104: global.api.GetTransformList:input_type -> google.protobuf.Empty
58, // 105: global.api.GetRecordList:input_type -> global.ReqRecordList
58, // 106: global.api.GetEventRecordList:input_type -> global.ReqRecordList
67, // 107: global.api.GetRecordCatalog:input_type -> global.ReqRecordCatalog
65, // 108: global.api.DeleteRecord:input_type -> global.ReqRecordDelete
69, // 109: global.api.GetAlarmList:input_type -> global.AlarmListRequest
14, // 110: global.api.SysInfo:output_type -> global.SysInfoResponse
0, // 111: global.api.DisabledPlugins:output_type -> global.DisabledPluginsResponse
11, // 112: global.api.Summary:output_type -> global.SummaryResponse
32, // 113: global.api.Shutdown:output_type -> global.SuccessResponse
32, // 114: global.api.Restart:output_type -> global.SuccessResponse
16, // 115: global.api.TaskTree:output_type -> global.TaskTreeResponse
32, // 116: global.api.StopTask:output_type -> global.SuccessResponse
32, // 117: global.api.RestartTask:output_type -> global.SuccessResponse
18, // 118: global.api.StreamList:output_type -> global.StreamListResponse
19, // 119: global.api.WaitList:output_type -> global.StreamWaitListResponse
21, // 120: global.api.StreamInfo:output_type -> global.StreamInfoResponse
32, // 121: global.api.PauseStream:output_type -> global.SuccessResponse
32, // 122: global.api.ResumeStream:output_type -> global.SuccessResponse
32, // 123: global.api.SetStreamSpeed:output_type -> global.SuccessResponse
32, // 124: global.api.SeekStream:output_type -> global.SuccessResponse
39, // 125: global.api.GetSubscribers:output_type -> global.SubscribersResponse
30, // 126: global.api.AudioTrackSnap:output_type -> global.TrackSnapShotResponse
30, // 127: global.api.VideoTrackSnap:output_type -> global.TrackSnapShotResponse
32, // 128: global.api.ChangeSubscribe:output_type -> global.SuccessResponse
48, // 129: global.api.GetStreamAlias:output_type -> global.StreamAliasListResponse
32, // 130: global.api.SetStreamAlias:output_type -> global.SuccessResponse
32, // 131: global.api.StopPublish:output_type -> global.SuccessResponse
32, // 132: global.api.StopSubscribe:output_type -> global.SuccessResponse
5, // 133: global.api.GetConfigFile:output_type -> global.GetConfigFileResponse
32, // 134: global.api.UpdateConfigFile:output_type -> global.SuccessResponse
6, // 135: global.api.GetConfig:output_type -> global.GetConfigResponse
6, // 136: global.api.GetFormily:output_type -> global.GetConfigResponse
40, // 137: global.api.GetPullProxyList:output_type -> global.PullProxyListResponse
32, // 138: global.api.AddPullProxy:output_type -> global.SuccessResponse
32, // 139: global.api.RemovePullProxy:output_type -> global.SuccessResponse
32, // 140: global.api.UpdatePullProxy:output_type -> global.SuccessResponse
45, // 141: global.api.GetPushProxyList:output_type -> global.PushProxyListResponse
32, // 142: global.api.AddPushProxy:output_type -> global.SuccessResponse
32, // 143: global.api.RemovePushProxy:output_type -> global.SuccessResponse
32, // 144: global.api.UpdatePushProxy:output_type -> global.SuccessResponse
52, // 145: global.api.GetRecording:output_type -> global.RecordingListResponse
57, // 146: global.api.GetTransformList:output_type -> global.TransformListResponse
61, // 147: global.api.GetRecordList:output_type -> global.RecordResponseList
62, // 148: global.api.GetEventRecordList:output_type -> global.EventRecordResponseList
64, // 149: global.api.GetRecordCatalog:output_type -> global.ResponseCatalog
66, // 150: global.api.DeleteRecord:output_type -> global.ResponseDelete
70, // 151: global.api.GetAlarmList:output_type -> global.AlarmListResponse
110, // [110:152] is the sub-list for method output_type
68, // [68:110] is the sub-list for method input_type
68, // [68:68] is the sub-list for extension type_name
68, // [68:68] is the sub-list for extension extendee
0, // [0:68] is the sub-list for field type_name
81, // 65: global.Step.startedAt:type_name -> google.protobuf.Timestamp
81, // 66: global.Step.completedAt:type_name -> google.protobuf.Timestamp
71, // 67: global.SubscriptionProgressData.steps:type_name -> global.Step
72, // 68: global.SubscriptionProgressResponse.data:type_name -> global.SubscriptionProgressData
2, // 69: global.Formily.PropertiesEntry.value:type_name -> global.Formily
83, // 70: global.Formily.ComponentPropsEntry.value:type_name -> google.protobuf.Any
2, // 71: global.FormilyResponse.PropertiesEntry.value:type_name -> global.Formily
84, // 72: global.api.SysInfo:input_type -> google.protobuf.Empty
84, // 73: global.api.DisabledPlugins:input_type -> google.protobuf.Empty
84, // 74: global.api.Summary:input_type -> google.protobuf.Empty
33, // 75: global.api.Shutdown:input_type -> global.RequestWithId
33, // 76: global.api.Restart:input_type -> global.RequestWithId
84, // 77: global.api.TaskTree:input_type -> google.protobuf.Empty
34, // 78: global.api.StopTask:input_type -> global.RequestWithId64
34, // 79: global.api.RestartTask:input_type -> global.RequestWithId64
17, // 80: global.api.StreamList:input_type -> global.StreamListRequest
84, // 81: global.api.WaitList:input_type -> google.protobuf.Empty
20, // 82: global.api.StreamInfo:input_type -> global.StreamSnapRequest
20, // 83: global.api.PauseStream:input_type -> global.StreamSnapRequest
20, // 84: global.api.ResumeStream:input_type -> global.StreamSnapRequest
49, // 85: global.api.SetStreamSpeed:input_type -> global.SetStreamSpeedRequest
50, // 86: global.api.SeekStream:input_type -> global.SeekStreamRequest
36, // 87: global.api.GetSubscribers:input_type -> global.SubscribersRequest
20, // 88: global.api.AudioTrackSnap:input_type -> global.StreamSnapRequest
20, // 89: global.api.VideoTrackSnap:input_type -> global.StreamSnapRequest
35, // 90: global.api.ChangeSubscribe:input_type -> global.ChangeSubscribeRequest
84, // 91: global.api.GetStreamAlias:input_type -> google.protobuf.Empty
46, // 92: global.api.SetStreamAlias:input_type -> global.SetStreamAliasRequest
20, // 93: global.api.StopPublish:input_type -> global.StreamSnapRequest
33, // 94: global.api.StopSubscribe:input_type -> global.RequestWithId
84, // 95: global.api.GetConfigFile:input_type -> google.protobuf.Empty
7, // 96: global.api.UpdateConfigFile:input_type -> global.UpdateConfigFileRequest
1, // 97: global.api.GetConfig:input_type -> global.GetConfigRequest
1, // 98: global.api.GetFormily:input_type -> global.GetConfigRequest
84, // 99: global.api.GetPullProxyList:input_type -> google.protobuf.Empty
41, // 100: global.api.AddPullProxy:input_type -> global.PullProxyInfo
33, // 101: global.api.RemovePullProxy:input_type -> global.RequestWithId
42, // 102: global.api.UpdatePullProxy:input_type -> global.UpdatePullProxyRequest
84, // 103: global.api.GetPushProxyList:input_type -> google.protobuf.Empty
43, // 104: global.api.AddPushProxy:input_type -> global.PushProxyInfo
33, // 105: global.api.RemovePushProxy:input_type -> global.RequestWithId
44, // 106: global.api.UpdatePushProxy:input_type -> global.UpdatePushProxyRequest
84, // 107: global.api.GetRecording:input_type -> google.protobuf.Empty
84, // 108: global.api.GetTransformList:input_type -> google.protobuf.Empty
58, // 109: global.api.GetRecordList:input_type -> global.ReqRecordList
58, // 110: global.api.GetEventRecordList:input_type -> global.ReqRecordList
67, // 111: global.api.GetRecordCatalog:input_type -> global.ReqRecordCatalog
65, // 112: global.api.DeleteRecord:input_type -> global.ReqRecordDelete
69, // 113: global.api.GetAlarmList:input_type -> global.AlarmListRequest
20, // 114: global.api.GetSubscriptionProgress:input_type -> global.StreamSnapRequest
14, // 115: global.api.SysInfo:output_type -> global.SysInfoResponse
0, // 116: global.api.DisabledPlugins:output_type -> global.DisabledPluginsResponse
11, // 117: global.api.Summary:output_type -> global.SummaryResponse
32, // 118: global.api.Shutdown:output_type -> global.SuccessResponse
32, // 119: global.api.Restart:output_type -> global.SuccessResponse
16, // 120: global.api.TaskTree:output_type -> global.TaskTreeResponse
32, // 121: global.api.StopTask:output_type -> global.SuccessResponse
32, // 122: global.api.RestartTask:output_type -> global.SuccessResponse
18, // 123: global.api.StreamList:output_type -> global.StreamListResponse
19, // 124: global.api.WaitList:output_type -> global.StreamWaitListResponse
21, // 125: global.api.StreamInfo:output_type -> global.StreamInfoResponse
32, // 126: global.api.PauseStream:output_type -> global.SuccessResponse
32, // 127: global.api.ResumeStream:output_type -> global.SuccessResponse
32, // 128: global.api.SetStreamSpeed:output_type -> global.SuccessResponse
32, // 129: global.api.SeekStream:output_type -> global.SuccessResponse
39, // 130: global.api.GetSubscribers:output_type -> global.SubscribersResponse
30, // 131: global.api.AudioTrackSnap:output_type -> global.TrackSnapShotResponse
30, // 132: global.api.VideoTrackSnap:output_type -> global.TrackSnapShotResponse
32, // 133: global.api.ChangeSubscribe:output_type -> global.SuccessResponse
48, // 134: global.api.GetStreamAlias:output_type -> global.StreamAliasListResponse
32, // 135: global.api.SetStreamAlias:output_type -> global.SuccessResponse
32, // 136: global.api.StopPublish:output_type -> global.SuccessResponse
32, // 137: global.api.StopSubscribe:output_type -> global.SuccessResponse
5, // 138: global.api.GetConfigFile:output_type -> global.GetConfigFileResponse
32, // 139: global.api.UpdateConfigFile:output_type -> global.SuccessResponse
6, // 140: global.api.GetConfig:output_type -> global.GetConfigResponse
6, // 141: global.api.GetFormily:output_type -> global.GetConfigResponse
40, // 142: global.api.GetPullProxyList:output_type -> global.PullProxyListResponse
32, // 143: global.api.AddPullProxy:output_type -> global.SuccessResponse
32, // 144: global.api.RemovePullProxy:output_type -> global.SuccessResponse
32, // 145: global.api.UpdatePullProxy:output_type -> global.SuccessResponse
45, // 146: global.api.GetPushProxyList:output_type -> global.PushProxyListResponse
32, // 147: global.api.AddPushProxy:output_type -> global.SuccessResponse
32, // 148: global.api.RemovePushProxy:output_type -> global.SuccessResponse
32, // 149: global.api.UpdatePushProxy:output_type -> global.SuccessResponse
52, // 150: global.api.GetRecording:output_type -> global.RecordingListResponse
57, // 151: global.api.GetTransformList:output_type -> global.TransformListResponse
61, // 152: global.api.GetRecordList:output_type -> global.RecordResponseList
62, // 153: global.api.GetEventRecordList:output_type -> global.EventRecordResponseList
64, // 154: global.api.GetRecordCatalog:output_type -> global.ResponseCatalog
66, // 155: global.api.DeleteRecord:output_type -> global.ResponseDelete
70, // 156: global.api.GetAlarmList:output_type -> global.AlarmListResponse
73, // 157: global.api.GetSubscriptionProgress:output_type -> global.SubscriptionProgressResponse
115, // [115:158] is the sub-list for method output_type
72, // [72:115] is the sub-list for method input_type
72, // [72:72] is the sub-list for extension type_name
72, // [72:72] is the sub-list for extension extendee
0, // [0:72] is the sub-list for field type_name
}
func init() { file_global_proto_init() }
@@ -6232,7 +6471,7 @@ func file_global_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_global_proto_rawDesc), len(file_global_proto_rawDesc)),
NumEnums: 0,
NumMessages: 78,
NumMessages: 81,
NumExtensions: 0,
NumServices: 1,
},

File diff suppressed because it is too large


@@ -250,6 +250,11 @@ service api {
get: "/api/alarm/list"
};
}
rpc GetSubscriptionProgress (StreamSnapRequest) returns (SubscriptionProgressResponse) {
option (google.api.http) = {
get: "/api/stream/progress/{streamPath=**}"
};
}
}
message DisabledPluginsResponse {
@@ -366,6 +371,8 @@ message TaskTreeData {
TaskTreeData blocked = 8;
uint64 pointer = 9;
string startReason = 10;
bool eventLoopRunning = 11;
uint32 level = 12;
}
message TaskTreeResponse {
@@ -811,3 +818,22 @@ message AlarmListResponse {
int32 pageSize = 5;
repeated AlarmInfo data = 6;
}
message Step {
string name = 1;
string description = 2;
string error = 3;
google.protobuf.Timestamp startedAt = 4;
google.protobuf.Timestamp completedAt = 5;
}
message SubscriptionProgressData {
repeated Step steps = 1;
int32 currentStep = 2;
}
message SubscriptionProgressResponse {
int32 code = 1;
string message = 2;
SubscriptionProgressData data = 3;
}


@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v6.31.1
// - protoc v5.29.3
// source: global.proto
package pb
@@ -62,6 +62,7 @@ const (
Api_GetRecordCatalog_FullMethodName = "/global.api/GetRecordCatalog"
Api_DeleteRecord_FullMethodName = "/global.api/DeleteRecord"
Api_GetAlarmList_FullMethodName = "/global.api/GetAlarmList"
Api_GetSubscriptionProgress_FullMethodName = "/global.api/GetSubscriptionProgress"
)
// ApiClient is the client API for Api service.
@@ -110,6 +111,7 @@ type ApiClient interface {
GetRecordCatalog(ctx context.Context, in *ReqRecordCatalog, opts ...grpc.CallOption) (*ResponseCatalog, error)
DeleteRecord(ctx context.Context, in *ReqRecordDelete, opts ...grpc.CallOption) (*ResponseDelete, error)
GetAlarmList(ctx context.Context, in *AlarmListRequest, opts ...grpc.CallOption) (*AlarmListResponse, error)
GetSubscriptionProgress(ctx context.Context, in *StreamSnapRequest, opts ...grpc.CallOption) (*SubscriptionProgressResponse, error)
}
type apiClient struct {
@@ -540,6 +542,16 @@ func (c *apiClient) GetAlarmList(ctx context.Context, in *AlarmListRequest, opts
return out, nil
}
func (c *apiClient) GetSubscriptionProgress(ctx context.Context, in *StreamSnapRequest, opts ...grpc.CallOption) (*SubscriptionProgressResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(SubscriptionProgressResponse)
err := c.cc.Invoke(ctx, Api_GetSubscriptionProgress_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// ApiServer is the server API for Api service.
// All implementations must embed UnimplementedApiServer
// for forward compatibility.
@@ -586,6 +598,7 @@ type ApiServer interface {
GetRecordCatalog(context.Context, *ReqRecordCatalog) (*ResponseCatalog, error)
DeleteRecord(context.Context, *ReqRecordDelete) (*ResponseDelete, error)
GetAlarmList(context.Context, *AlarmListRequest) (*AlarmListResponse, error)
GetSubscriptionProgress(context.Context, *StreamSnapRequest) (*SubscriptionProgressResponse, error)
mustEmbedUnimplementedApiServer()
}
@@ -722,6 +735,9 @@ func (UnimplementedApiServer) DeleteRecord(context.Context, *ReqRecordDelete) (*
func (UnimplementedApiServer) GetAlarmList(context.Context, *AlarmListRequest) (*AlarmListResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetAlarmList not implemented")
}
func (UnimplementedApiServer) GetSubscriptionProgress(context.Context, *StreamSnapRequest) (*SubscriptionProgressResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetSubscriptionProgress not implemented")
}
func (UnimplementedApiServer) mustEmbedUnimplementedApiServer() {}
func (UnimplementedApiServer) testEmbeddedByValue() {}
@@ -1499,6 +1515,24 @@ func _Api_GetAlarmList_Handler(srv interface{}, ctx context.Context, dec func(in
return interceptor(ctx, in, info, handler)
}
func _Api_GetSubscriptionProgress_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(StreamSnapRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).GetSubscriptionProgress(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_GetSubscriptionProgress_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).GetSubscriptionProgress(ctx, req.(*StreamSnapRequest))
}
return interceptor(ctx, in, info, handler)
}
// Api_ServiceDesc is the grpc.ServiceDesc for Api service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
@@ -1674,6 +1708,10 @@ var Api_ServiceDesc = grpc.ServiceDesc{
MethodName: "GetAlarmList",
Handler: _Api_GetAlarmList_Handler,
},
{
MethodName: "GetSubscriptionProgress",
Handler: _Api_GetSubscriptionProgress_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "global.proto",


@@ -1,90 +0,0 @@
package pkg
import (
"bytes"
"fmt"
"io"
"time"
"github.com/deepch/vdk/codec/aacparser"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
)
var _ IAVFrame = (*ADTS)(nil)
type ADTS struct {
DTS time.Duration
util.RecyclableMemory
}
func (A *ADTS) Parse(track *AVTrack) (err error) {
if track.ICodecCtx == nil {
var ctx = &codec.AACCtx{}
var reader = A.NewReader()
var adts []byte
adts, err = reader.ReadBytes(7)
if err != nil {
return err
}
var hdrlen, framelen, samples int
ctx.Config, hdrlen, framelen, samples, err = aacparser.ParseADTSHeader(adts)
if err != nil {
return err
}
b := &bytes.Buffer{}
aacparser.WriteMPEG4AudioConfig(b, ctx.Config)
ctx.ConfigBytes = b.Bytes()
track.ICodecCtx = ctx
track.Info("ADTS", "hdrlen", hdrlen, "framelen", framelen, "samples", samples)
}
track.Value.Raw, err = A.Demux(track.ICodecCtx)
return
}
func (A *ADTS) ConvertCtx(ctx codec.ICodecCtx) (codec.ICodecCtx, IAVFrame, error) {
return ctx.GetBase(), nil, nil
}
func (A *ADTS) Demux(ctx codec.ICodecCtx) (any, error) {
var reader = A.NewReader()
err := reader.Skip(7)
var mem util.Memory
reader.Range(mem.AppendOne)
return mem, err
}
func (A *ADTS) Mux(ctx codec.ICodecCtx, frame *AVFrame) {
A.InitRecycleIndexes(1)
A.DTS = frame.Timestamp * 90 / time.Millisecond
aacCtx, ok := ctx.GetBase().(*codec.AACCtx)
if !ok {
A.Append(frame.Raw.(util.Memory).Buffers...)
return
}
adts := A.NextN(7)
raw := frame.Raw.(util.Memory)
aacparser.FillADTSHeader(adts, aacCtx.Config, raw.Size/aacCtx.GetSampleSize(), raw.Size)
A.Append(raw.Buffers...)
}
func (A *ADTS) GetTimestamp() time.Duration {
return A.DTS * time.Millisecond / 90
}
func (A *ADTS) GetCTS() time.Duration {
return 0
}
func (A *ADTS) GetSize() int {
return A.Size
}
func (A *ADTS) String() string {
return fmt.Sprintf("ADTS{size:%d}", A.Size)
}
func (A *ADTS) Dump(b byte, writer io.Writer) {
//TODO implement me
panic("implement me")
}

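The removed ADTS wrapper above leaned on `aacparser.ParseADTSHeader` for the fixed 7-byte header. For reference, a standalone sketch of the header layout it parsed (CRC-less form; `parseADTS` and its field names are illustrative, not the vdk API):

```go
package main

import (
	"errors"
	"fmt"
)

// adtsHeader holds the fields this sketch extracts from a 7-byte ADTS header.
type adtsHeader struct {
	Profile     int // AAC profile (0 = Main, 1 = LC, ...)
	SampleRate  int
	Channels    int
	FrameLength int // total frame length including the 7-byte header
}

var sampleRates = [...]int{96000, 88200, 64000, 48000, 44100, 32000,
	24000, 22050, 16000, 12000, 11025, 8000}

// parseADTS decodes the fixed 7-byte ADTS header.
func parseADTS(b []byte) (h adtsHeader, err error) {
	if len(b) < 7 {
		return h, errors.New("short header")
	}
	// 12-bit syncword must be 0xFFF
	if b[0] != 0xFF || b[1]&0xF0 != 0xF0 {
		return h, errors.New("bad syncword")
	}
	h.Profile = int(b[2] >> 6)
	srIdx := int(b[2] >> 2 & 0x0F)
	if srIdx >= len(sampleRates) {
		return h, errors.New("bad sample-rate index")
	}
	h.SampleRate = sampleRates[srIdx]
	h.Channels = int(b[2]&0x01)<<2 | int(b[3]>>6)
	// 13-bit frame length spans bytes 3..5
	h.FrameLength = int(b[3]&0x03)<<11 | int(b[4])<<3 | int(b[5]>>5)
	return h, nil
}

func main() {
	// AAC-LC, 44.1 kHz, stereo, frame length 371 (bytes chosen for illustration)
	hdr := []byte{0xFF, 0xF1, 0x50, 0x80, 0x2E, 0x7F, 0xFC}
	h, err := parseADTS(hdr)
	fmt.Println(h, err) // {1 44100 2 371} <nil>
}
```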

@@ -1,182 +0,0 @@
package pkg
import (
"encoding/binary"
"fmt"
"io"
"time"
"github.com/deepch/vdk/codec/h264parser"
"github.com/deepch/vdk/codec/h265parser"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
)
var _ IAVFrame = (*AnnexB)(nil)
type AnnexB struct {
Hevc bool
PTS time.Duration
DTS time.Duration
util.RecyclableMemory
}
func (a *AnnexB) Dump(t byte, w io.Writer) {
m := a.GetAllocator().Borrow(4 + a.Size)
binary.BigEndian.PutUint32(m, uint32(a.Size))
a.CopyTo(m[4:])
w.Write(m)
}
// DecodeConfig implements pkg.IAVFrame.
func (a *AnnexB) ConvertCtx(ctx codec.ICodecCtx) (codec.ICodecCtx, IAVFrame, error) {
return ctx.GetBase(), nil, nil
}
// GetSize implements pkg.IAVFrame.
func (a *AnnexB) GetSize() int {
return a.Size
}
func (a *AnnexB) GetTimestamp() time.Duration {
return a.DTS * time.Millisecond / 90
}
func (a *AnnexB) GetCTS() time.Duration {
return (a.PTS - a.DTS) * time.Millisecond / 90
}
// Parse implements pkg.IAVFrame.
func (a *AnnexB) Parse(t *AVTrack) (err error) {
if a.Hevc {
if t.ICodecCtx == nil {
t.ICodecCtx = &codec.H265Ctx{}
}
} else {
if t.ICodecCtx == nil {
t.ICodecCtx = &codec.H264Ctx{}
}
}
if t.Value.Raw, err = a.Demux(t.ICodecCtx); err != nil {
return
}
for _, nalu := range t.Value.Raw.(Nalus) {
if a.Hevc {
ctx := t.ICodecCtx.(*codec.H265Ctx)
switch codec.ParseH265NALUType(nalu.Buffers[0][0]) {
case h265parser.NAL_UNIT_VPS:
ctx.RecordInfo.VPS = [][]byte{nalu.ToBytes()}
case h265parser.NAL_UNIT_SPS:
ctx.RecordInfo.SPS = [][]byte{nalu.ToBytes()}
case h265parser.NAL_UNIT_PPS:
ctx.RecordInfo.PPS = [][]byte{nalu.ToBytes()}
ctx.CodecData, err = h265parser.NewCodecDataFromVPSAndSPSAndPPS(ctx.VPS(), ctx.SPS(), ctx.PPS())
case h265parser.NAL_UNIT_CODED_SLICE_BLA_W_LP,
h265parser.NAL_UNIT_CODED_SLICE_BLA_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_BLA_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_IDR_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_IDR_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_CRA:
t.Value.IDR = true
}
} else {
ctx := t.ICodecCtx.(*codec.H264Ctx)
switch codec.ParseH264NALUType(nalu.Buffers[0][0]) {
case codec.NALU_SPS:
ctx.RecordInfo.SPS = [][]byte{nalu.ToBytes()}
if len(ctx.RecordInfo.PPS) > 0 {
ctx.CodecData, err = h264parser.NewCodecDataFromSPSAndPPS(ctx.SPS(), ctx.PPS())
}
case codec.NALU_PPS:
ctx.RecordInfo.PPS = [][]byte{nalu.ToBytes()}
if len(ctx.RecordInfo.SPS) > 0 {
ctx.CodecData, err = h264parser.NewCodecDataFromSPSAndPPS(ctx.SPS(), ctx.PPS())
}
case codec.NALU_IDR_Picture:
t.Value.IDR = true
}
}
}
return
}
// String implements pkg.IAVFrame.
func (a *AnnexB) String() string {
return fmt.Sprintf("%d %d", a.DTS, a.Memory.Size)
}
// Demux implements pkg.IAVFrame.
func (a *AnnexB) Demux(codecCtx codec.ICodecCtx) (ret any, err error) {
var nalus Nalus
var lastFourBytes [4]byte
var b byte
var shallow util.Memory
shallow.Append(a.Buffers...)
reader := shallow.NewReader()
gotNalu := func() {
var nalu util.Memory
for buf := range reader.ClipFront {
nalu.AppendOne(buf)
}
nalus = append(nalus, nalu)
}
for {
b, err = reader.ReadByte()
if err == nil {
copy(lastFourBytes[:], lastFourBytes[1:])
lastFourBytes[3] = b
var startCode = 0
if lastFourBytes == codec.NALU_Delimiter2 {
startCode = 4
} else if [3]byte(lastFourBytes[1:]) == codec.NALU_Delimiter1 {
startCode = 3
}
if startCode > 0 && reader.Offset() >= 3 {
if reader.Offset() == 3 {
startCode = 3
}
reader.Unread(startCode)
if reader.Offset() > 0 {
gotNalu()
}
reader.Skip(startCode)
for range reader.ClipFront {
}
}
} else if err == io.EOF {
if reader.Offset() > 0 {
gotNalu()
}
err = nil
break
}
}
ret = nalus
return
}
func (a *AnnexB) Mux(codecCtx codec.ICodecCtx, frame *AVFrame) {
a.DTS = frame.Timestamp * 90 / time.Millisecond
a.PTS = a.DTS + frame.CTS*90/time.Millisecond
a.InitRecycleIndexes(0)
delimiter2 := codec.NALU_Delimiter2[:]
a.AppendOne(delimiter2)
if frame.IDR {
switch ctx := codecCtx.(type) {
case *codec.H264Ctx:
a.Append(ctx.SPS(), delimiter2, ctx.PPS(), delimiter2)
case *codec.H265Ctx:
a.Append(ctx.SPS(), delimiter2, ctx.PPS(), delimiter2, ctx.VPS(), delimiter2)
}
}
for i, nalu := range frame.Raw.(Nalus) {
if i > 0 {
a.AppendOne(codec.NALU_Delimiter1[:])
}
a.Append(nalu.Buffers...)
}
}

pkg/annexb_reader.go (new file, 219 lines)

@@ -0,0 +1,219 @@
package pkg
import (
"fmt"
"m7s.live/v5/pkg/util"
)
// AnnexBReader is a reader specialized for AnnexB-formatted data.
// Modeled after MemoryReader; it supports reads that span slices and dynamic data management.
type AnnexBReader struct {
util.Memory // multi-segment memory holding the data
Length, offset0, offset1 int // readable length and current read position
}
// AppendBuffer appends a single data buffer
func (r *AnnexBReader) AppendBuffer(buf []byte) {
r.PushOne(buf)
r.Length += len(buf)
}
// ClipFront trims data that has already been read, releasing it for reuse
func (r *AnnexBReader) ClipFront() {
readOffset := r.Size - r.Length
if readOffset == 0 {
return
}
// Drop buffers that have been fully read (memory is not recycled)
if r.offset0 > 0 {
r.Buffers = r.Buffers[r.offset0:]
r.Size -= readOffset
r.offset0 = 0
}
// Handle a partially read buffer (memory is not recycled)
if r.offset1 > 0 && len(r.Buffers) > 0 {
buf := r.Buffers[0]
r.Buffers[0] = buf[r.offset1:]
r.Size -= r.offset1
r.offset1 = 0
}
}
// FindStartCode searches for a NALU start code and returns its position and length
func (r *AnnexBReader) FindStartCode() (pos int, startCodeLen int, found bool) {
if r.Length < 3 {
return 0, 0, false
}
// Check for a start code byte by byte
for i := 0; i <= r.Length-3; i++ {
// Check the 4-byte start code first
if i <= r.Length-4 {
if r.getByteAt(i) == 0x00 && r.getByteAt(i+1) == 0x00 &&
r.getByteAt(i+2) == 0x00 && r.getByteAt(i+3) == 0x01 {
return i, 4, true
}
}
// Check the 3-byte start code (making sure it is not part of a 4-byte start code)
if r.getByteAt(i) == 0x00 && r.getByteAt(i+1) == 0x00 && r.getByteAt(i+2) == 0x01 {
// Make sure this is not part of a 4-byte start code
if i == 0 || r.getByteAt(i-1) != 0x00 {
return i, 3, true
}
}
}
return 0, 0, false
}
// getByteAt returns the byte at the given position without advancing the read position
func (r *AnnexBReader) getByteAt(pos int) byte {
if pos >= r.Length {
return 0
}
// Work out which buffer the position falls in, and the offset within that buffer
currentPos := 0
bufferIndex := r.offset0
bufferOffset := r.offset1
for bufferIndex < len(r.Buffers) {
buf := r.Buffers[bufferIndex]
available := len(buf) - bufferOffset
if currentPos+available > pos {
// The target position is within the current buffer
return buf[bufferOffset+(pos-currentPos)]
}
currentPos += available
bufferIndex++
bufferOffset = 0
}
return 0
}
type InvalidDataError struct {
util.Memory
}
func (e InvalidDataError) Error() string {
return fmt.Sprintf("% 02X", e.ToBytes())
}
// ReadNALU reads one complete NALU.
// withStart receives the memory segments including the start code;
// withoutStart receives the memory segments excluding the start code.
// Either may be nil, meaning the caller does not need that form of the data.
func (r *AnnexBReader) ReadNALU(withStart, withoutStart *util.Memory) error {
r.ClipFront()
// Locate the first start code
firstPos, startCodeLen, found := r.FindStartCode()
if !found {
return nil
}
// Report any invalid data that precedes the start code
if firstPos > 0 {
var invalidData util.Memory
var reader util.MemoryReader
reader.Memory = &r.Memory
reader.RangeN(firstPos, invalidData.PushOne)
return InvalidDataError{invalidData}
}
// To find the next start code, temporarily skip past the current one
saveOffset0, saveOffset1, saveLength := r.offset0, r.offset1, r.Length
r.forward(startCodeLen)
nextPosAfterStart, _, nextFound := r.FindStartCode()
// Restore the position to the beginning of the start code
r.offset0, r.offset1, r.Length = saveOffset0, saveOffset1, saveLength
if !nextFound {
return nil
}
// Read and fill the outputs, advancing the read position to the end of the NALU (without consuming the next start code)
remaining := startCodeLen + nextPosAfterStart
// Prefix to skip for withoutStart (i.e. the start code length)
skipForWithout := startCodeLen
for remaining > 0 && r.offset0 < len(r.Buffers) {
buf := r.getCurrentBuffer()
readLen := len(buf)
if readLen > remaining {
readLen = remaining
}
segment := buf[:readLen]
if withStart != nil {
withStart.PushOne(segment)
}
if withoutStart != nil {
if skipForWithout >= readLen {
// This segment is entirely start code; skip it
skipForWithout -= readLen
} else {
// Skip only the start-code prefix and push the remainder into withoutStart
withoutStart.PushOne(segment[skipForWithout:])
skipForWithout = 0
}
}
if readLen == len(buf) {
r.skipCurrentBuffer()
} else {
r.forward(readLen)
}
remaining -= readLen
}
return nil
}
// getCurrentBuffer returns the buffer at the current read position
func (r *AnnexBReader) getCurrentBuffer() []byte {
if r.offset0 >= len(r.Buffers) {
return nil
}
return r.Buffers[r.offset0][r.offset1:]
}
// forward advances the read position
func (r *AnnexBReader) forward(n int) {
if n <= 0 || r.Length <= 0 {
return
}
if n > r.Length { // defensive: never exceed the remaining length
n = r.Length
}
r.Length -= n
for n > 0 && r.offset0 < len(r.Buffers) {
cur := r.Buffers[r.offset0]
remain := len(cur) - r.offset1
if n < remain { // still within the current buffer
r.offset1 += n
n = 0
return
}
// Consume the rest of the current buffer and move to the start of the next one
n -= remain
r.offset0++
r.offset1 = 0
}
}
// skipCurrentBuffer skips the rest of the current buffer
func (r *AnnexBReader) skipCurrentBuffer() {
if r.offset0 < len(r.Buffers) {
curBufLen := len(r.Buffers[r.offset0]) - r.offset1
r.Length -= curBufLen
r.offset0++
r.offset1 = 0
}
}

pkg/annexb_reader_test.go (new file, 173 lines)

@@ -0,0 +1,173 @@
package pkg
import (
"bytes"
_ "embed"
"math/rand"
"testing"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
)
func bytesFromMemory(m util.Memory) []byte {
if m.Size == 0 {
return nil
}
out := make([]byte, 0, m.Size)
for _, b := range m.Buffers {
out = append(out, b...)
}
return out
}
func TestAnnexBReader_ReadNALU_Basic(t *testing.T) {
var reader AnnexBReader
// 3 NALUs, using 4-byte, 3-byte, and 4-byte start codes respectively
expected1 := []byte{0x67, 0x42, 0x00, 0x1E}
expected2 := []byte{0x68, 0xCE, 0x3C, 0x80}
expected3 := []byte{0x65, 0x88, 0x84, 0x00}
buf := append([]byte{0x00, 0x00, 0x00, 0x01}, expected1...)
buf = append(buf, append([]byte{0x00, 0x00, 0x01}, expected2...)...)
buf = append(buf, append([]byte{0x00, 0x00, 0x00, 0x01}, expected3...)...)
reader.AppendBuffer(append(buf, codec.NALU_Delimiter2[:]...))
// Read and verify the 3 NALUs (start codes excluded)
var n util.Memory
if err := reader.ReadNALU(nil, &n); err != nil {
t.Fatalf("read nalu 1: %v", err)
}
if !bytes.Equal(bytesFromMemory(n), expected1) {
t.Fatalf("nalu1 mismatch")
}
n = util.Memory{}
if err := reader.ReadNALU(nil, &n); err != nil {
t.Fatalf("read nalu 2: %v", err)
}
if !bytes.Equal(bytesFromMemory(n), expected2) {
t.Fatalf("nalu2 mismatch")
}
n = util.Memory{}
if err := reader.ReadNALU(nil, &n); err != nil {
t.Fatalf("read nalu 3: %v", err)
}
if !bytes.Equal(bytesFromMemory(n), expected3) {
t.Fatalf("nalu3 mismatch")
}
// A further read finds no more complete NALUs: nil error, with only the trailing 4-byte delimiter left unread
if err := reader.ReadNALU(nil, &n); err != nil {
t.Fatalf("expected nil error when no more nalu, got: %v", err)
}
if reader.Length != 4 {
t.Fatalf("expected remaining length 4 (trailing delimiter), got %d", reader.Length)
}
}
func TestAnnexBReader_AppendBuffer_MultiChunk_Random(t *testing.T) {
var reader AnnexBReader
rng := rand.New(rand.NewSource(1)) // fixed seed for reproducibility
// Generate random NALUs (payload only) and build AnnexB data with random 3/4-byte start codes
numNALU := 12
expectedPayloads := make([][]byte, 0, numNALU)
fullStream := make([]byte, 0, 1024)
for i := 0; i < numNALU; i++ {
payloadLen := 1 + rng.Intn(32)
payload := make([]byte, payloadLen)
for j := 0; j < payloadLen; j++ {
payload[j] = byte(rng.Intn(256))
}
expectedPayloads = append(expectedPayloads, payload)
if rng.Intn(2) == 0 {
fullStream = append(fullStream, 0x00, 0x00, 0x01)
} else {
fullStream = append(fullStream, 0x00, 0x00, 0x00, 0x01)
}
fullStream = append(fullStream, payload...)
}
fullStream = append(fullStream, codec.NALU_Delimiter2[:]...) // append a trailing start code so the last NALU can be read
// Randomly slice the stream into chunks and AppendBuffer them
for i := 0; i < len(fullStream); {
// Each chunk is 1..7 bytes (or the remaining length)
maxStep := 7
remain := len(fullStream) - i
step := 1 + rng.Intn(maxStep)
if step > remain {
step = remain
}
reader.AppendBuffer(fullStream[i : i+step])
i += step
}
// Read in order and verify
for idx, expected := range expectedPayloads {
var n util.Memory
if err := reader.ReadNALU(nil, &n); err != nil {
t.Fatalf("read nalu %d: %v", idx+1, err)
}
got := bytesFromMemory(n)
if !bytes.Equal(got, expected) {
t.Fatalf("nalu %d mismatch: expected %d bytes, got %d bytes", idx+1, len(expected), len(got))
}
}
// No more NALUs
var n util.Memory
if err := reader.ReadNALU(nil, &n); err != nil {
t.Fatalf("expected nil error when no more nalu, got: %v", err)
}
}
// Test a start code split across two buffers (e.g. 00 00 | 00 01)
func TestAnnexBReader_StartCodeAcrossBuffers(t *testing.T) {
var reader AnnexBReader
// Build a 4-byte start code split across two segments, followed by a short payload
reader.AppendBuffer([]byte{0x00, 0x00})
reader.AppendBuffer([]byte{0x00})
reader.AppendBuffer([]byte{0x01, 0x11, 0x22, 0x33}) // payload: 11 22 33
reader.AppendBuffer(codec.NALU_Delimiter2[:])
var n util.Memory
if err := reader.ReadNALU(nil, &n); err != nil {
t.Fatalf("read nalu: %v", err)
}
got := bytesFromMemory(n)
expected := []byte{0x11, 0x22, 0x33}
if !bytes.Equal(got, expected) {
t.Fatalf("payload mismatch: expected %v got %v", expected, got)
}
}
//go:embed test.h264
var annexbH264Sample []byte
var clipSizesH264 = [...]int{7823, 7157, 5137, 6268, 5958, 4573, 5661, 5589, 3917, 5207, 5347, 4111, 4755, 5199, 3761, 5014, 4981, 3736, 5075, 4889, 3739, 4701, 4655, 3471, 4086, 4428, 3309, 4388, 28, 8, 63974, 63976, 37544, 4945, 6525, 6974, 4874, 6317, 6141, 4455, 5833, 4105, 5407, 5479, 3741, 5142, 4939, 3745, 4945, 4857, 3518, 4624, 4930, 3649, 4846, 5020, 3293, 4588, 4571, 3430, 4844, 4822, 21223, 8461, 7188, 4882, 6108, 5870, 4432, 5389, 5466, 3726}
func TestAnnexBReader_EmbeddedAnnexB_H264(t *testing.T) {
var reader AnnexBReader
offset := 0
for _, size := range clipSizesH264 {
reader.AppendBuffer(annexbH264Sample[offset : offset+size])
offset += size
var nalu util.Memory
if err := reader.ReadNALU(nil, &nalu); err != nil {
t.Fatalf("read nalu: %v", err)
} else {
t.Logf("read nalu: %d bytes", nalu.Size)
if nalu.Size > 0 {
tryH264Type := codec.ParseH264NALUType(nalu.Buffers[0][0])
t.Logf("tryH264Type: %d", tryH264Type)
}
}
}
}


@@ -174,7 +174,9 @@ func (r *AVRingReader) ReadFrame(conf *config.Subscribe) (err error) {
r.Delay = r.Track.LastValue.Sequence - r.Value.Sequence
// fmt.Println(r.Delay)
if r.Track.ICodecCtx != nil {
if r.Logger.Enabled(context.TODO(), task.TraceLevel) {
r.Log(context.TODO(), task.TraceLevel, r.Track.FourCC().String(), "ts", r.Value.Timestamp, "delay", r.Delay, "bps", r.BPS)
}
} else {
r.Warn("no codec")
}


@@ -1,8 +1,6 @@
package pkg
import (
"io"
"net"
"sync"
"time"
@@ -27,21 +25,28 @@ type (
}
// Source -> Parse -> Demux -> (ConvertCtx) -> Mux(GetAllocator) -> Recycle
IAVFrame interface {
GetAllocator() *util.ScalableMemoryAllocator
SetAllocator(*util.ScalableMemoryAllocator)
Parse(*AVTrack) error // get codec info, idr
ConvertCtx(codec.ICodecCtx) (codec.ICodecCtx, IAVFrame, error) // convert codec from source stream
Demux(codec.ICodecCtx) (any, error) // demux to raw format
Mux(codec.ICodecCtx, *AVFrame) // mux from raw format
GetTimestamp() time.Duration
GetCTS() time.Duration
GetSample() *Sample
GetSize() int
CheckCodecChange() error
Demux() error // demux to raw format
Mux(*Sample) error // mux from the original format
Recycle()
String() string
Dump(byte, io.Writer)
}
Nalus []util.Memory
ISequenceCodecCtx[T any] interface {
GetSequenceFrame() T
}
BaseSample struct {
Raw IRaw // raw format; intermediate representation used for conversion
IDR bool
TS0, Timestamp, CTS time.Duration // original TS, corrected TS, and composition time stamp
}
Sample struct {
codec.ICodecCtx
util.RecyclableMemory
*BaseSample
}
Nalus = util.ReuseArray[util.Memory]
AudioData = util.Memory
@@ -49,36 +54,130 @@ type (
AVFrame struct {
DataFrame
IDR bool
Timestamp time.Duration // 绝对时间戳
CTS time.Duration // composition time stamp
*Sample
Wraps []IAVFrame // wrapped container formats
}
IRaw interface {
util.Resetter
Count() int
}
AVRing = util.Ring[AVFrame]
DataFrame struct {
sync.RWMutex
discard bool
Sequence uint32 // sequence number within a Track
WriteTime time.Time // write time; can be used to order two frames
Raw any // raw format
}
)
func (frame *AVFrame) Clone() {
func (sample *Sample) GetSize() int {
return sample.Size
}
func (sample *Sample) GetSample() *Sample {
return sample
}
func (sample *Sample) CheckCodecChange() (err error) {
return
}
func (sample *Sample) Demux() error {
return nil
}
func (sample *Sample) Mux(from *Sample) error {
sample.ICodecCtx = from.GetBase()
return nil
}
func ConvertFrameType(from, to IAVFrame) (err error) {
fromSample, toSample := from.GetSample(), to.GetSample()
if !fromSample.HasRaw() {
if err = from.Demux(); err != nil {
return
}
}
toSample.SetAllocator(fromSample.GetAllocator())
toSample.BaseSample = fromSample.BaseSample
return to.Mux(fromSample)
}
func (b *BaseSample) HasRaw() bool {
return b.Raw != nil && b.Raw.Count() > 0
}
// 90 kHz clock conversions
func (b *BaseSample) GetDTS() time.Duration {
return b.Timestamp * 90 / time.Millisecond
}
func (b *BaseSample) GetPTS() time.Duration {
return (b.Timestamp + b.CTS) * 90 / time.Millisecond
}
func (b *BaseSample) SetDTS(dts time.Duration) {
b.Timestamp = dts * time.Millisecond / 90
}
func (b *BaseSample) SetPTS(pts time.Duration) {
b.CTS = pts*time.Millisecond/90 - b.Timestamp
}
func (b *BaseSample) SetTS32(ts uint32) {
b.Timestamp = time.Duration(ts) * time.Millisecond
}
func (b *BaseSample) GetTS32() uint32 {
return uint32(b.Timestamp / time.Millisecond)
}
func (b *BaseSample) SetCTS32(ts uint32) {
b.CTS = time.Duration(ts) * time.Millisecond
}
func (b *BaseSample) GetCTS32() uint32 {
return uint32(b.CTS / time.Millisecond)
}
func (b *BaseSample) GetNalus() *util.ReuseArray[util.Memory] {
if b.Raw == nil {
b.Raw = &Nalus{}
}
return b.Raw.(*util.ReuseArray[util.Memory])
}
func (b *BaseSample) GetAudioData() *AudioData {
if b.Raw == nil {
b.Raw = &AudioData{}
}
return b.Raw.(*AudioData)
}
func (b *BaseSample) ParseAVCC(reader *util.MemoryReader, naluSizeLen int) error {
array := b.GetNalus()
for reader.Length > 0 {
l, err := reader.ReadBE(naluSizeLen)
if err != nil {
return err
}
reader.RangeN(int(l), array.GetNextPointer().PushOne)
}
return nil
}
func (frame *AVFrame) Reset() {
frame.Timestamp = 0
frame.IDR = false
frame.CTS = 0
frame.Raw = nil
if len(frame.Wraps) > 0 {
for _, wrap := range frame.Wraps {
wrap.Recycle()
}
frame.Wraps = frame.Wraps[:0]
frame.BaseSample.IDR = false
frame.BaseSample.TS0 = 0
frame.BaseSample.Timestamp = 0
frame.BaseSample.CTS = 0
if frame.Raw != nil {
frame.Raw.Reset()
}
}
}
@@ -87,11 +186,6 @@ func (frame *AVFrame) Discard() {
frame.Reset()
}
func (frame *AVFrame) Demux(codecCtx codec.ICodecCtx) (err error) {
frame.Raw, err = frame.Wraps[0].Demux(codecCtx)
return
}
func (df *DataFrame) StartWrite() (success bool) {
if df.discard {
return
@@ -108,31 +202,6 @@ func (df *DataFrame) Ready() {
df.Unlock()
}
func (nalus *Nalus) H264Type() codec.H264NALUType {
return codec.ParseH264NALUType((*nalus)[0].Buffers[0][0])
}
func (nalus *Nalus) H265Type() codec.H265NALUType {
return codec.ParseH265NALUType((*nalus)[0].Buffers[0][0])
}
func (nalus *Nalus) Append(bytes []byte) {
*nalus = append(*nalus, util.Memory{Buffers: net.Buffers{bytes}, Size: len(bytes)})
}
func (nalus *Nalus) ParseAVCC(reader *util.MemoryReader, naluSizeLen int) error {
for reader.Length > 0 {
l, err := reader.ReadBE(naluSizeLen)
if err != nil {
return err
}
var mem util.Memory
reader.RangeN(int(l), mem.AppendOne)
*nalus = append(*nalus, mem)
}
return nil
}
func (obus *OBUs) ParseAVCC(reader *util.MemoryReader) error {
var obuHeader av1.OBUHeader
startLen := reader.Length
@@ -157,7 +226,15 @@ func (obus *OBUs) ParseAVCC(reader *util.MemoryReader) error {
if err != nil {
return err
}
(*AudioData)(obus).AppendOne(obu)
(*AudioData)(obus).PushOne(obu)
}
return nil
}
func (obus *OBUs) Reset() {
((*util.Memory)(obus)).Reset()
}
func (obus *OBUs) Count() int {
return (*util.Memory)(obus).Count()
}

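`BaseSample.ParseAVCC` above walks length-prefixed NALUs through a `MemoryReader`. A flat-buffer sketch of the same AVCC framing (`parseAVCC` is a hypothetical helper, not the package API; the prefix width comes from `lengthSizeMinusOne+1` in the decoder configuration record):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// parseAVCC splits an AVCC payload into NALUs, each preceded by a
// naluSizeLen-byte big-endian length.
func parseAVCC(b []byte, naluSizeLen int) (nalus [][]byte, err error) {
	for len(b) > 0 {
		if len(b) < naluSizeLen {
			return nil, errors.New("truncated length prefix")
		}
		// read the big-endian length, left-padded into 8 bytes
		var tmp [8]byte
		copy(tmp[8-naluSizeLen:], b[:naluSizeLen])
		l := int(binary.BigEndian.Uint64(tmp[:]))
		b = b[naluSizeLen:]
		if l > len(b) {
			return nil, errors.New("truncated NALU")
		}
		nalus = append(nalus, b[:l])
		b = b[l:]
	}
	return
}

func main() {
	// two NALUs with 4-byte length prefixes
	payload := []byte{0, 0, 0, 2, 0x67, 0x42, 0, 0, 0, 1, 0x68}
	nalus, err := parseAVCC(payload, 4)
	fmt.Println(len(nalus), err) // 2 <nil>
}
```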

@@ -1,74 +0,0 @@
package pkg
import (
"reflect"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
)
type AVFrameConvert[T IAVFrame] struct {
FromTrack, ToTrack *AVTrack
lastFromCodecCtx codec.ICodecCtx
}
func NewAVFrameConvert[T IAVFrame](fromTrack *AVTrack, toTrack *AVTrack) *AVFrameConvert[T] {
ret := &AVFrameConvert[T]{}
ret.FromTrack = fromTrack
ret.ToTrack = toTrack
if ret.FromTrack == nil {
ret.FromTrack = &AVTrack{
RingWriter: &RingWriter{
Ring: util.NewRing[AVFrame](1),
},
}
}
if ret.ToTrack == nil {
ret.ToTrack = &AVTrack{
RingWriter: &RingWriter{
Ring: util.NewRing[AVFrame](1),
},
}
var to T
ret.ToTrack.FrameType = reflect.TypeOf(to).Elem()
}
return ret
}
func (c *AVFrameConvert[T]) ConvertFromAVFrame(avFrame *AVFrame) (to T, err error) {
to = reflect.New(c.ToTrack.FrameType).Interface().(T)
if c.ToTrack.ICodecCtx == nil {
if c.ToTrack.ICodecCtx, c.ToTrack.SequenceFrame, err = to.ConvertCtx(c.FromTrack.ICodecCtx); err != nil {
return
}
}
if err = avFrame.Demux(c.FromTrack.ICodecCtx); err != nil {
return
}
to.SetAllocator(avFrame.Wraps[0].GetAllocator())
to.Mux(c.ToTrack.ICodecCtx, avFrame)
return
}
func (c *AVFrameConvert[T]) Convert(frame IAVFrame) (to T, err error) {
to = reflect.New(c.ToTrack.FrameType).Interface().(T)
// Not From Publisher
if c.FromTrack.LastValue == nil {
err = frame.Parse(c.FromTrack)
if err != nil {
return
}
}
if c.ToTrack.ICodecCtx == nil || c.lastFromCodecCtx != c.FromTrack.ICodecCtx {
if c.ToTrack.ICodecCtx, c.ToTrack.SequenceFrame, err = to.ConvertCtx(c.FromTrack.ICodecCtx); err != nil {
return
}
}
c.lastFromCodecCtx = c.FromTrack.ICodecCtx
if c.FromTrack.Value.Raw, err = frame.Demux(c.FromTrack.ICodecCtx); err != nil {
return
}
to.SetAllocator(frame.GetAllocator())
to.Mux(c.ToTrack.ICodecCtx, &c.FromTrack.Value)
return
}


@@ -27,6 +27,32 @@ type (
}
)
func NewAACCtxFromRecord(record []byte) (ret *AACCtx, err error) {
ret = &AACCtx{}
ret.CodecData, err = aacparser.NewCodecDataFromMPEG4AudioConfigBytes(record)
return
}
func NewPCMACtx() *PCMACtx {
return &PCMACtx{
AudioCtx: AudioCtx{
SampleRate: 90000,
Channels: 1,
SampleSize: 16,
},
}
}
func NewPCMUCtx() *PCMUCtx {
return &PCMUCtx{
AudioCtx: AudioCtx{
SampleRate: 90000,
Channels: 1,
SampleSize: 16,
},
}
}
func (ctx *AudioCtx) GetRecord() []byte {
return []byte{}
}


@@ -112,6 +112,12 @@ type (
}
)
func NewH264CtxFromRecord(record []byte) (ret *H264Ctx, err error) {
ret = &H264Ctx{}
ret.CodecData, err = h264parser.NewCodecDataFromAVCDecoderConfRecord(record)
return
}
func (*H264Ctx) FourCC() FourCC {
return FourCC_H264
}


@@ -24,6 +24,15 @@ type (
}
)
func NewH265CtxFromRecord(record []byte) (ret *H265Ctx, err error) {
ret = &H265Ctx{}
ret.CodecData, err = h265parser.NewCodecDataFromAVCDecoderConfRecord(record)
if err == nil {
ret.RecordInfo.LengthSizeMinusOne = 3
}
return
}
func (ctx *H265Ctx) GetInfo() string {
return fmt.Sprintf("fps: %d, resolution: %s", ctx.FPS(), ctx.Resolution())
}

pkg/codec/h26x.go (new file, 25 lines)

@@ -0,0 +1,25 @@
package codec
type H26XCtx struct {
VPS, SPS, PPS []byte
}
func (ctx *H26XCtx) FourCC() (f FourCC) {
return
}
func (ctx *H26XCtx) GetInfo() string {
return ""
}
func (ctx *H26XCtx) GetBase() ICodecCtx {
return ctx
}
func (ctx *H26XCtx) GetRecord() []byte {
return nil
}
func (ctx *H26XCtx) String() string {
return ""
}


@@ -36,6 +36,22 @@ type Config struct {
var (
durationType = reflect.TypeOf(time.Duration(0))
regexpType = reflect.TypeOf(Regexp{})
basicTypes = []reflect.Kind{
reflect.Bool,
reflect.Int,
reflect.Int8,
reflect.Int16,
reflect.Int32,
reflect.Int64,
reflect.Uint,
reflect.Uint8,
reflect.Uint16,
reflect.Uint32,
reflect.Uint64,
reflect.Float32,
reflect.Float64,
reflect.String,
}
)
func (config *Config) Range(f func(key string, value Config)) {
@@ -99,29 +115,29 @@ func (config *Config) Parse(s any, prefix ...string) {
if t.Kind() == reflect.Pointer {
t, v = t.Elem(), v.Elem()
}
isStruct := t.Kind() == reflect.Struct && t != regexpType
if isStruct {
defaults.SetDefaults(v.Addr().Interface())
}
config.Ptr = v
if !v.IsValid() {
fmt.Println("parse to ", prefix, config.name, s, "is not valid")
return
}
config.Default = v.Interface()
if l := len(prefix); l > 0 { // 读取环境变量
name := strings.ToLower(prefix[l-1])
if tag := config.tag.Get("default"); tag != "" {
_, isUnmarshaler := v.Addr().Interface().(yaml.Unmarshaler)
tag := config.tag.Get("default")
if tag != "" && isUnmarshaler {
v.Set(config.assign(name, tag))
config.Default = v.Interface()
}
if envValue := os.Getenv(strings.Join(prefix, "_")); envValue != "" {
v.Set(config.assign(name, envValue))
config.Env = v.Interface()
}
}
if t.Kind() == reflect.Struct && t != regexpType {
config.Default = v.Interface()
if isStruct {
for i, j := 0, t.NumField(); i < j; i++ {
ft, fv := t.Field(i), v.Field(i)
@@ -315,16 +331,18 @@ func (config *Config) GetMap() map[string]any {
var regexPureNumber = regexp.MustCompile(`^\d+$`)
func (config *Config) assign(k string, v any) (target reflect.Value) {
ft := config.Ptr.Type()
func unmarshal(ft reflect.Type, v any) (target reflect.Value) {
source := reflect.ValueOf(v)
for _, t := range basicTypes {
if source.Kind() == t && ft.Kind() == t {
return source
}
}
switch ft {
case durationType:
target = reflect.New(ft).Elem()
if source.Type() == durationType {
target.Set(source)
return source
} else if source.IsZero() || !source.IsValid() {
target.SetInt(0)
} else {
@@ -332,7 +350,7 @@ func (config *Config) assign(k string, v any) (target reflect.Value) {
if d, err := time.ParseDuration(timeStr); err == nil && !regexPureNumber.MatchString(timeStr) {
target.SetInt(int64(d))
} else {
slog.Error("invalid duration value please add unit (s,m,h,d)eg: 100ms, 10s, 4m, 1h", "key", k, "value", source)
slog.Error("invalid duration value please add unit (s,m,h,d)eg: 100ms, 10s, 4m, 1h", "value", timeStr)
os.Exit(1)
}
}
@@ -341,58 +359,69 @@ func (config *Config) assign(k string, v any) (target reflect.Value) {
regexpStr := source.String()
target.Set(reflect.ValueOf(Regexp{regexp.MustCompile(regexpStr)}))
default:
if ft.Kind() == reflect.Map {
target = reflect.MakeMap(ft)
switch ft.Kind() {
case reflect.Struct:
newStruct := reflect.New(ft)
defaults.SetDefaults(newStruct.Interface())
if value, ok := v.(map[string]any); ok {
for i := 0; i < ft.NumField(); i++ {
key := strings.ToLower(ft.Field(i).Name)
if vv, ok := value[key]; ok {
newStruct.Elem().Field(i).Set(unmarshal(ft.Field(i).Type, vv))
}
}
} else {
newStruct.Elem().Field(0).Set(unmarshal(ft.Field(0).Type, v))
}
return newStruct.Elem()
case reflect.Map:
if v != nil {
tmpStruct := reflect.StructOf([]reflect.StructField{
{
Name: "Key",
Type: ft.Key(),
},
})
tmpValue := reflect.New(tmpStruct)
target = reflect.MakeMap(ft)
for k, v := range v.(map[string]any) {
_ = yaml.Unmarshal([]byte(fmt.Sprintf("key: %s", k)), tmpValue.Interface())
var value reflect.Value
if ft.Elem().Kind() == reflect.Struct {
value = reflect.New(ft.Elem())
defaults.SetDefaults(value.Interface())
if reflect.TypeOf(v).Kind() != reflect.Map {
value.Elem().Field(0).Set(reflect.ValueOf(v))
} else {
out, _ := yaml.Marshal(v)
_ = yaml.Unmarshal(out, value.Interface())
}
value = value.Elem()
} else {
value = reflect.ValueOf(v)
}
target.SetMapIndex(tmpValue.Elem().Field(0), value)
target.SetMapIndex(unmarshal(ft.Key(), k), unmarshal(ft.Elem(), v))
}
}
} else {
tmpStruct := reflect.StructOf([]reflect.StructField{
{
Name: strings.ToUpper(k),
Type: ft,
},
})
tmpValue := reflect.New(tmpStruct)
case reflect.Slice:
if v != nil {
s := v.([]any)
target = reflect.MakeSlice(ft, len(s), len(s))
for i, v := range s {
target.Index(i).Set(unmarshal(ft.Elem(), v))
}
}
default:
if v != nil {
var out []byte
var err error
if vv, ok := v.(string); ok {
out = []byte(fmt.Sprintf("%s: %s", k, vv))
out = []byte(fmt.Sprintf("%s: %s", "value", vv))
} else {
out, _ = yaml.Marshal(map[string]any{k: v})
out, err = yaml.Marshal(map[string]any{"value": v})
if err != nil {
panic(err)
}
_ = yaml.Unmarshal(out, tmpValue.Interface())
}
target = tmpValue.Elem().Field(0)
tmpValue := reflect.New(reflect.StructOf([]reflect.StructField{
{
Name: "Value",
Type: ft,
},
}))
err = yaml.Unmarshal(out, tmpValue.Interface())
if err != nil {
panic(err)
}
return tmpValue.Elem().Field(0)
}
}
}
return
}
func (config *Config) assign(k string, v any) reflect.Value {
return unmarshal(config.Ptr.Type(), v)
}
func Parse(target any, conf map[string]any) {
var c Config
c.Parse(target)

View File

@@ -49,6 +49,7 @@ func (task *ListenQuicWork) Start() (err error) {
task.Error("listen quic error", err)
return
}
task.OnStop(task.Listener.Close)
task.Info("listen quic on", task.ListenAddr)
return
}
@@ -63,7 +64,3 @@ func (task *ListenQuicWork) Go() error {
task.AddTask(subTask)
}
}
func (task *ListenQuicWork) Dispose() {
_ = task.Listener.Close()
}

View File

@@ -18,6 +18,7 @@ const (
RecordModeAuto RecordMode = "auto"
RecordModeEvent RecordMode = "event"
RecordModeTest RecordMode = "test"
HookOnServerKeepAlive HookType = "server_keep_alive"
HookOnPublishStart HookType = "publish_start"
@@ -70,7 +71,7 @@ type (
IdleTimeout time.Duration `desc:"空闲(无订阅)超时"` // idle (no subscriber) timeout
PauseTimeout time.Duration `default:"30s" desc:"暂停超时时间"` // pause timeout
BufferTime time.Duration `desc:"缓冲时长0代表取最近关键帧"` // buffer duration (seconds); 0 means start from the latest keyframe
Speed float64 `default:"1" desc:"发送速率"` // send rate; 0 means unlimited
Speed float64 `desc:"发送速率"` // send rate; 0 means unlimited
Scale float64 `default:"1" desc:"缩放倍数"` // scale factor
MaxFPS int `default:"60" desc:"最大FPS"` // maximum FPS
Key string `desc:"发布鉴权key"` // publish auth key
@@ -97,7 +98,7 @@ type (
Pull struct {
URL string `desc:"拉流地址"`
Loop int `desc:"拉流循环次数,-1:无限循环"` // pull loop count; -1 means loop forever
MaxRetry int `default:"-1" desc:"断开后自动重试次数,0:不重试,-1:无限重试"` // auto re-pull after disconnect; 0 disables, -1 retries forever, >0 caps the retry count
MaxRetry int `desc:"断开后自动重试次数,0:不重试,-1:无限重试"` // auto re-pull after disconnect; 0 disables, -1 retries forever, >0 caps the retry count
RetryInterval time.Duration `default:"5s" desc:"重试间隔"` // retry interval
Proxy string `desc:"代理地址"` // proxy address
Header HTTPValues
@@ -124,6 +125,7 @@ type (
Type string `desc:"录制类型"` // record type: mp4, flv, hls, hlsv7
FilePath string `desc:"录制文件路径"` // record file path
Fragment time.Duration `desc:"分片时长"` // fragment duration
RealTime bool `desc:"是否实时录制"` // whether to record in real time
Append bool `desc:"是否追加录制"` // whether to append to the recording
Event *RecordEvent `json:"event" desc:"事件录像配置" gorm:"-"` // event recording configuration
}

View File

@@ -4,6 +4,7 @@ import "errors"
var (
ErrNotFound = errors.New("not found")
ErrDisposed = errors.New("disposed")
ErrDisabled = errors.New("disabled")
ErrStreamExist = errors.New("stream exist")
ErrRecordExists = errors.New("record exists")

pkg/format/adts.go Normal file
View File

@@ -0,0 +1,82 @@
package format
import (
"bytes"
"fmt"
"github.com/deepch/vdk/codec/aacparser"
"m7s.live/v5/pkg"
"m7s.live/v5/pkg/codec"
)
var _ pkg.IAVFrame = (*Mpeg2Audio)(nil)
type Mpeg2Audio struct {
pkg.Sample
}
func (A *Mpeg2Audio) CheckCodecChange() (err error) {
old := A.ICodecCtx
if old == nil || old.FourCC().Is(codec.FourCC_MP4A) {
var reader = A.NewReader()
var adts []byte
adts, err = reader.ReadBytes(7)
if err != nil {
return
}
var conf aacparser.MPEG4AudioConfig
conf, _, _, _, err = aacparser.ParseADTSHeader(adts)
if err != nil {
return
}
b := &bytes.Buffer{}
aacparser.WriteMPEG4AudioConfig(b, conf)
// adopt a new codec context when the extracted AudioSpecificConfig differs
if old == nil || !bytes.Equal(b.Bytes(), old.GetRecord()) {
var ctx = &codec.AACCtx{}
ctx.ConfigBytes = b.Bytes()
A.ICodecCtx = ctx
}
}
return
}
func (A *Mpeg2Audio) Demux() (err error) {
var reader = A.NewReader()
mem := A.GetAudioData()
if A.ICodecCtx.FourCC().Is(codec.FourCC_MP4A) {
err = reader.Skip(7)
if err != nil {
return
}
}
reader.Range(mem.PushOne)
return
}
func (A *Mpeg2Audio) Mux(frame *pkg.Sample) (err error) {
if A.ICodecCtx == nil {
A.ICodecCtx = frame.GetBase()
}
raw := frame.Raw.(*pkg.AudioData)
aacCtx, ok := A.ICodecCtx.(*codec.AACCtx)
if ok {
A.InitRecycleIndexes(1)
adts := A.NextN(7)
aacparser.FillADTSHeader(adts, aacCtx.Config, raw.Size/aacCtx.GetSampleSize(), raw.Size)
} else {
A.InitRecycleIndexes(0)
}
A.Push(raw.Buffers...)
return
}
func (A *Mpeg2Audio) String() string {
return fmt.Sprintf("ADTS{size:%d}", A.Size)
}

pkg/format/annexb.go Normal file
View File

@@ -0,0 +1,290 @@
package format
import (
"bytes"
"fmt"
"io"
"slices"
"github.com/deepch/vdk/codec/h264parser"
"github.com/deepch/vdk/codec/h265parser"
"m7s.live/v5/pkg"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
)
type AnnexB struct {
pkg.Sample
}
func (a *AnnexB) CheckCodecChange() (err error) {
if !a.HasRaw() || a.ICodecCtx == nil {
err = a.Demux()
if err != nil {
return
}
}
if a.ICodecCtx == nil {
return pkg.ErrSkip
}
var vps, sps, pps []byte
a.IDR = false
for nalu := range a.Raw.(*pkg.Nalus).RangePoint {
if a.FourCC() == codec.FourCC_H265 {
switch codec.ParseH265NALUType(nalu.Buffers[0][0]) {
case h265parser.NAL_UNIT_VPS:
vps = nalu.ToBytes()
case h265parser.NAL_UNIT_SPS:
sps = nalu.ToBytes()
case h265parser.NAL_UNIT_PPS:
pps = nalu.ToBytes()
case h265parser.NAL_UNIT_CODED_SLICE_BLA_W_LP,
h265parser.NAL_UNIT_CODED_SLICE_BLA_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_BLA_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_IDR_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_IDR_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_CRA:
a.IDR = true
}
} else {
switch codec.ParseH264NALUType(nalu.Buffers[0][0]) {
case codec.NALU_SPS:
sps = nalu.ToBytes()
case codec.NALU_PPS:
pps = nalu.ToBytes()
case codec.NALU_IDR_Picture:
a.IDR = true
}
}
}
if a.FourCC() == codec.FourCC_H265 {
if vps != nil && sps != nil && pps != nil {
var codecData h265parser.CodecData
codecData, err = h265parser.NewCodecDataFromVPSAndSPSAndPPS(vps, sps, pps)
if err != nil {
return
}
if !bytes.Equal(codecData.Record, a.ICodecCtx.(*codec.H265Ctx).Record) {
a.ICodecCtx = &codec.H265Ctx{
CodecData: codecData,
}
}
}
if a.ICodecCtx.(*codec.H265Ctx).Record == nil {
err = pkg.ErrSkip
}
} else {
if sps != nil && pps != nil {
var codecData h264parser.CodecData
codecData, err = h264parser.NewCodecDataFromSPSAndPPS(sps, pps)
if err != nil {
return
}
if !bytes.Equal(codecData.Record, a.ICodecCtx.(*codec.H264Ctx).Record) {
a.ICodecCtx = &codec.H264Ctx{
CodecData: codecData,
}
}
}
if a.ICodecCtx.(*codec.H264Ctx).Record == nil {
err = pkg.ErrSkip
}
}
return
}
// String implements pkg.IAVFrame.
func (a *AnnexB) String() string {
return fmt.Sprintf("%d %d", a.Timestamp, a.Memory.Size)
}
// Demux implements pkg.IAVFrame.
func (a *AnnexB) Demux() (err error) {
nalus := a.GetNalus()
var lastFourBytes [4]byte
var b byte
var shallow util.Memory
shallow.Push(a.Buffers...)
reader := shallow.NewReader()
gotNalu := func() {
nalu := nalus.GetNextPointer()
for buf := range reader.ClipFront {
nalu.PushOne(buf)
}
if a.ICodecCtx == nil {
naluType := codec.ParseH264NALUType(nalu.Buffers[0][0])
switch naluType {
case codec.NALU_Non_IDR_Picture,
codec.NALU_IDR_Picture,
codec.NALU_SEI,
codec.NALU_SPS,
codec.NALU_PPS,
codec.NALU_Access_Unit_Delimiter:
a.ICodecCtx = &codec.H264Ctx{}
}
}
}
for {
b, err = reader.ReadByte()
if err == nil {
copy(lastFourBytes[:], lastFourBytes[1:])
lastFourBytes[3] = b
var startCode = 0
if lastFourBytes == codec.NALU_Delimiter2 {
startCode = 4
} else if [3]byte(lastFourBytes[1:]) == codec.NALU_Delimiter1 {
startCode = 3
}
if startCode > 0 && reader.Offset() >= 3 {
if reader.Offset() == 3 {
startCode = 3
}
reader.Unread(startCode)
if reader.Offset() > 0 {
gotNalu()
}
reader.Skip(startCode)
for range reader.ClipFront {
}
}
} else if err == io.EOF {
if reader.Offset() > 0 {
gotNalu()
}
err = nil
break
}
}
return
}
func (a *AnnexB) Mux(fromBase *pkg.Sample) (err error) {
if a.ICodecCtx == nil {
a.ICodecCtx = fromBase.GetBase()
}
a.InitRecycleIndexes(0)
delimiter2 := codec.NALU_Delimiter2[:]
a.PushOne(delimiter2)
if fromBase.IDR {
switch ctx := fromBase.GetBase().(type) {
case *codec.H264Ctx:
a.Push(ctx.SPS(), delimiter2, ctx.PPS(), delimiter2)
case *codec.H265Ctx:
a.Push(ctx.SPS(), delimiter2, ctx.PPS(), delimiter2, ctx.VPS(), delimiter2)
}
}
for i, nalu := range *fromBase.Raw.(*pkg.Nalus) {
if i > 0 {
a.PushOne(codec.NALU_Delimiter1[:])
}
a.Push(nalu.Buffers...)
}
return
}
func (a *AnnexB) Parse(reader *pkg.AnnexBReader) (hasFrame bool, err error) {
nalus := a.BaseSample.GetNalus()
for !hasFrame {
nalu := nalus.GetNextPointer()
reader.ReadNALU(&a.Memory, nalu)
if nalu.Size == 0 {
nalus.Reduce()
return
}
tryH264Type := codec.ParseH264NALUType(nalu.Buffers[0][0])
h265Type := codec.ParseH265NALUType(nalu.Buffers[0][0])
if a.ICodecCtx == nil {
a.ICodecCtx = &codec.H26XCtx{}
}
switch ctx := a.ICodecCtx.(type) {
case *codec.H26XCtx:
if tryH264Type == codec.NALU_SPS {
ctx.SPS = nalu.ToBytes()
nalus.Reduce()
a.Recycle()
} else if tryH264Type == codec.NALU_PPS {
ctx.PPS = nalu.ToBytes()
nalus.Reduce()
a.Recycle()
} else if h265Type == h265parser.NAL_UNIT_VPS {
ctx.VPS = nalu.ToBytes()
nalus.Reduce()
a.Recycle()
} else if h265Type == h265parser.NAL_UNIT_SPS {
ctx.SPS = nalu.ToBytes()
nalus.Reduce()
a.Recycle()
} else if h265Type == h265parser.NAL_UNIT_PPS {
ctx.PPS = nalu.ToBytes()
nalus.Reduce()
a.Recycle()
} else {
if ctx.SPS != nil && ctx.PPS != nil && tryH264Type == codec.NALU_IDR_Picture {
var codecData h264parser.CodecData
codecData, err = h264parser.NewCodecDataFromSPSAndPPS(ctx.SPS, ctx.PPS)
if err != nil {
return
}
a.ICodecCtx = &codec.H264Ctx{
CodecData: codecData,
}
*nalus = slices.Insert(*nalus, 0, util.NewMemory(ctx.SPS), util.NewMemory(ctx.PPS))
delimiter2 := codec.NALU_Delimiter2[:]
a.Buffers = slices.Insert(a.Buffers, 0, delimiter2, ctx.SPS, delimiter2, ctx.PPS)
a.Size += 8 + len(ctx.SPS) + len(ctx.PPS)
} else if ctx.VPS != nil && ctx.SPS != nil && ctx.PPS != nil && h265Type == h265parser.NAL_UNIT_CODED_SLICE_IDR_W_RADL {
var codecData h265parser.CodecData
codecData, err = h265parser.NewCodecDataFromVPSAndSPSAndPPS(ctx.VPS, ctx.SPS, ctx.PPS)
if err != nil {
return
}
a.ICodecCtx = &codec.H265Ctx{
CodecData: codecData,
}
*nalus = slices.Insert(*nalus, 0, util.NewMemory(ctx.VPS), util.NewMemory(ctx.SPS), util.NewMemory(ctx.PPS))
delimiter2 := codec.NALU_Delimiter2[:]
a.Buffers = slices.Insert(a.Buffers, 0, delimiter2, ctx.VPS, delimiter2, ctx.SPS, delimiter2, ctx.PPS)
a.Size += 24 + len(ctx.VPS) + len(ctx.SPS) + len(ctx.PPS)
} else {
nalus.Reduce()
a.Recycle()
}
}
case *codec.H264Ctx:
switch tryH264Type {
case codec.NALU_IDR_Picture:
a.IDR = true
hasFrame = true
case codec.NALU_Non_IDR_Picture:
a.IDR = false
hasFrame = true
}
case *codec.H265Ctx:
switch h265Type {
case h265parser.NAL_UNIT_CODED_SLICE_BLA_W_LP,
h265parser.NAL_UNIT_CODED_SLICE_BLA_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_BLA_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_IDR_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_IDR_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_CRA:
a.IDR = true
hasFrame = true
case h265parser.NAL_UNIT_CODED_SLICE_TRAIL_N,
h265parser.NAL_UNIT_CODED_SLICE_TRAIL_R,
h265parser.NAL_UNIT_CODED_SLICE_TSA_N,
h265parser.NAL_UNIT_CODED_SLICE_TSA_R,
h265parser.NAL_UNIT_CODED_SLICE_STSA_N,
h265parser.NAL_UNIT_CODED_SLICE_STSA_R,
h265parser.NAL_UNIT_CODED_SLICE_RADL_N,
h265parser.NAL_UNIT_CODED_SLICE_RADL_R,
h265parser.NAL_UNIT_CODED_SLICE_RASL_N,
h265parser.NAL_UNIT_CODED_SLICE_RASL_R:
a.IDR = false
hasFrame = true
}
}
}
return
}

pkg/format/ps/mpegps.go Normal file
View File

@@ -0,0 +1,309 @@
package mpegps
import (
"errors"
"fmt"
"io"
"time"
"m7s.live/v5"
"m7s.live/v5/pkg"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/format"
"m7s.live/v5/pkg/util"
mpegts "m7s.live/v5/pkg/format/ts"
)
const (
StartCodePS = 0x000001ba
StartCodeSYS = 0x000001bb
StartCodeMAP = 0x000001bc
StartCodePadding = 0x000001be
StartCodeVideo = 0x000001e0
StartCodeVideo1 = 0x000001e1
StartCodeVideo2 = 0x000001e2
StartCodeAudio = 0x000001c0
PrivateStreamCode = 0x000001bd
MEPGProgramEndCode = 0x000001b9
)
// PS pack header constants
const (
PSPackHeaderSize = 14 // PS pack header basic size
PSSystemHeaderSize = 18 // PS system header basic size
PSMHeaderSize = 12 // PS map header basic size
PESHeaderMinSize = 9 // PES header minimum size
MaxPESPayloadSize = 0xFFEB // 0xFFFF - 20 (leaves room for the PES header fields)
)
type MpegPsDemuxer struct {
stAudio, stVideo byte
Publisher *m7s.Publisher
Allocator *util.ScalableMemoryAllocator
writer m7s.PublishWriter[*format.Mpeg2Audio, *format.AnnexB]
}
func (s *MpegPsDemuxer) Feed(reader *util.BufReader) (err error) {
writer := &s.writer
var payload util.Memory
var pesHeader mpegts.MpegPESHeader
var lastVideoPts, lastAudioPts uint64
var annexbReader pkg.AnnexBReader
for {
code, err := reader.ReadBE32(4)
if err != nil {
return err
}
switch code {
case StartCodePS:
var psl byte
if err = reader.Skip(9); err != nil {
return err
}
psl, err = reader.ReadByte()
if err != nil {
return err
}
psl &= 0x07
if err = reader.Skip(int(psl)); err != nil {
return err
}
case StartCodeVideo:
payload, err = s.ReadPayload(reader)
if err != nil {
return err
}
if !s.Publisher.PubVideo {
continue
}
if writer.PublishVideoWriter == nil {
writer.PublishVideoWriter = m7s.NewPublishVideoWriter[*format.AnnexB](s.Publisher, s.Allocator)
switch s.stVideo {
case mpegts.STREAM_TYPE_H264:
writer.VideoFrame.ICodecCtx = &codec.H264Ctx{}
case mpegts.STREAM_TYPE_H265:
writer.VideoFrame.ICodecCtx = &codec.H265Ctx{}
}
}
pes := writer.VideoFrame
reader := payload.NewReader()
pesHeader, err = mpegts.ReadPESHeader(&io.LimitedReader{R: &reader, N: int64(payload.Size)})
if err != nil {
return errors.Join(err, fmt.Errorf("failed to read PES header"))
}
if pesHeader.Pts != 0 && pesHeader.Pts != lastVideoPts {
if pes.Size > 0 {
err = writer.NextVideo()
if err != nil {
return errors.Join(err, fmt.Errorf("failed to get next video frame"))
}
pes = writer.VideoFrame
}
pes.SetDTS(time.Duration(pesHeader.Dts))
pes.SetPTS(time.Duration(pesHeader.Pts))
lastVideoPts = pesHeader.Pts
}
annexb := s.Allocator.Malloc(reader.Length)
reader.Read(annexb)
annexbReader.AppendBuffer(annexb)
_, err = pes.Parse(&annexbReader)
if err != nil {
return errors.Join(err, fmt.Errorf("failed to parse annexb"))
}
case StartCodeAudio:
payload, err = s.ReadPayload(reader)
if err != nil {
return errors.Join(err, fmt.Errorf("failed to read audio payload"))
}
if s.stAudio == 0 || !s.Publisher.PubAudio {
continue
}
if writer.PublishAudioWriter == nil {
writer.PublishAudioWriter = m7s.NewPublishAudioWriter[*format.Mpeg2Audio](s.Publisher, s.Allocator)
switch s.stAudio {
case mpegts.STREAM_TYPE_AAC:
writer.AudioFrame.ICodecCtx = &codec.AACCtx{}
case mpegts.STREAM_TYPE_G711A:
writer.AudioFrame.ICodecCtx = codec.NewPCMACtx()
case mpegts.STREAM_TYPE_G711U:
writer.AudioFrame.ICodecCtx = codec.NewPCMUCtx()
}
}
pes := writer.AudioFrame
reader := payload.NewReader()
pesHeader, err = mpegts.ReadPESHeader(&io.LimitedReader{R: &reader, N: int64(payload.Size)})
if err != nil {
return errors.Join(err, fmt.Errorf("failed to read PES header"))
}
if pesHeader.Pts != 0 && pesHeader.Pts != lastAudioPts {
if pes.Size > 0 {
err = writer.NextAudio()
if err != nil {
return errors.Join(err, fmt.Errorf("failed to get next audio frame"))
}
pes = writer.AudioFrame
}
pes.SetDTS(time.Duration(pesHeader.Pts))
pes.SetPTS(time.Duration(pesHeader.Pts))
lastAudioPts = pesHeader.Pts
}
reader.Range(func(buf []byte) {
copy(pes.NextN(len(buf)), buf)
})
// reader.Range(pes.PushOne)
case StartCodeMAP:
var psm util.Memory
psm, err = s.ReadPayload(reader)
if err != nil {
return errors.Join(err, fmt.Errorf("failed to read program stream map"))
}
err = s.decProgramStreamMap(psm)
if err != nil {
return errors.Join(err, fmt.Errorf("failed to decode program stream map"))
}
default:
payloadlen, err := reader.ReadBE(2)
if err != nil {
return errors.Join(err, fmt.Errorf("failed to read payload length"))
}
reader.Skip(payloadlen)
}
}
}
func (s *MpegPsDemuxer) ReadPayload(reader *util.BufReader) (payload util.Memory, err error) {
payloadlen, err := reader.ReadBE(2)
if err != nil {
return
}
return reader.ReadBytes(payloadlen)
}
func (s *MpegPsDemuxer) decProgramStreamMap(psm util.Memory) (err error) {
var programStreamInfoLen, programStreamMapLen, elementaryStreamInfoLength uint32
var streamType, elementaryStreamID byte
reader := psm.NewReader()
reader.Skip(2)
programStreamInfoLen, err = reader.ReadBE(2)
reader.Skip(int(programStreamInfoLen))
programStreamMapLen, err = reader.ReadBE(2)
for programStreamMapLen > 0 {
streamType, err = reader.ReadByte()
elementaryStreamID, err = reader.ReadByte()
if elementaryStreamID >= 0xe0 && elementaryStreamID <= 0xef {
s.stVideo = streamType
} else if elementaryStreamID >= 0xc0 && elementaryStreamID <= 0xdf {
s.stAudio = streamType
}
elementaryStreamInfoLength, err = reader.ReadBE(2)
reader.Skip(int(elementaryStreamInfoLength))
programStreamMapLen -= 4 + elementaryStreamInfoLength
}
return nil
}
type MpegPSMuxer struct {
*m7s.Subscriber
Packet *util.RecyclableMemory
}
func (muxer *MpegPSMuxer) Mux(onPacket func() error) {
var pesAudio, pesVideo *MpegpsPESFrame
puber := muxer.Publisher
var elementary_stream_map_length uint16
if puber.HasAudioTrack() {
elementary_stream_map_length += 4
pesAudio = &MpegpsPESFrame{}
pesAudio.StreamID = mpegts.STREAM_ID_AUDIO
switch puber.AudioTrack.ICodecCtx.FourCC() {
case codec.FourCC_ALAW:
pesAudio.StreamType = mpegts.STREAM_TYPE_G711A
case codec.FourCC_ULAW:
pesAudio.StreamType = mpegts.STREAM_TYPE_G711U
case codec.FourCC_MP4A:
pesAudio.StreamType = mpegts.STREAM_TYPE_AAC
}
}
if puber.HasVideoTrack() {
elementary_stream_map_length += 4
pesVideo = &MpegpsPESFrame{}
pesVideo.StreamID = mpegts.STREAM_ID_VIDEO
switch puber.VideoTrack.ICodecCtx.FourCC() {
case codec.FourCC_H264:
pesVideo.StreamType = mpegts.STREAM_TYPE_H264
case codec.FourCC_H265:
pesVideo.StreamType = mpegts.STREAM_TYPE_H265
}
}
var outputBuffer util.Buffer = muxer.Packet.NextN(PSPackHeaderSize + PSMHeaderSize + int(elementary_stream_map_length))
outputBuffer.Reset()
MuxPSHeader(&outputBuffer)
// System Header - describes the stream buffer requirements
// outputBuffer.WriteUint32(StartCodeSYS)
// outputBuffer.WriteByte(0x00) // header_length high
// outputBuffer.WriteByte(0x0C) // header_length low (12 bytes)
// outputBuffer.WriteByte(0x80) // marker + rate_bound[21..15]
// outputBuffer.WriteByte(0x62) // rate_bound[14..8]
// outputBuffer.WriteByte(0x4E) // rate_bound[7..1] + marker
// outputBuffer.WriteByte(0x01) // audio_bound + fixed_flag + CSPS_flag + system_audio_lock_flag + system_video_lock_flag + marker
// outputBuffer.WriteByte(0x01) // video_bound + packet_rate_restriction_flag + reserved
// outputBuffer.WriteByte(frame.StreamId) // stream_id
// outputBuffer.WriteByte(0xC0) // '11' + P-STD_buffer_bound_scale
// outputBuffer.WriteByte(0x20) // P-STD_buffer_size_bound low
// outputBuffer.WriteByte(0x00) // P-STD_buffer_size_bound high
// outputBuffer.WriteByte(0x00)
// outputBuffer.WriteByte(0x00)
// outputBuffer.WriteByte(0x00)
// PSM Header - program stream map, declares the stream types
outputBuffer.WriteUint32(StartCodeMAP)
outputBuffer.WriteUint16(uint16(PSMHeaderSize) + elementary_stream_map_length - 6) // psm_length
outputBuffer.WriteByte(0xE0) // current_next_indicator + reserved + psm_version
outputBuffer.WriteByte(0xFF) // reserved + marker
outputBuffer.WriteUint16(0) // program_stream_info_length
outputBuffer.WriteUint16(elementary_stream_map_length)
if pesAudio != nil {
outputBuffer.WriteByte(pesAudio.StreamType) // stream_type
outputBuffer.WriteByte(pesAudio.StreamID) // elementary_stream_id
outputBuffer.WriteUint16(0) // elementary_stream_info_length
}
if pesVideo != nil {
outputBuffer.WriteByte(pesVideo.StreamType) // stream_type
outputBuffer.WriteByte(pesVideo.StreamID) // elementary_stream_id
outputBuffer.WriteUint16(0) // elementary_stream_info_length
}
onPacket()
m7s.PlayBlock(muxer.Subscriber, func(audio *format.Mpeg2Audio) error {
pesAudio.Pts = uint64(audio.GetPTS())
pesAudio.WritePESPacket(audio.Memory, muxer.Packet)
return onPacket()
}, func(video *format.AnnexB) error {
pesVideo.Pts = uint64(video.GetPTS())
pesVideo.Dts = uint64(video.GetDTS())
pesVideo.WritePESPacket(video.Memory, muxer.Packet)
return onPacket()
})
}
func MuxPSHeader(outputBuffer *util.Buffer) {
// Write the PS pack header - per the MPEG-2 program stream standard
// Pack start code: 0x000001BA
outputBuffer.WriteUint32(StartCodePS)
// SCR (System Clock Reference) field - follows the ps-muxer.go implementation
scr := uint64(time.Now().UnixMilli()) * 90
outputBuffer.WriteByte(0x44 | byte((scr>>30)&0x07)) // '01' + SCR[32..30]
outputBuffer.WriteByte(byte((scr >> 22) & 0xFF)) // SCR[29..22]
outputBuffer.WriteByte(0x04 | byte((scr>>20)&0x03)) // marker + SCR[21..20]
outputBuffer.WriteByte(byte((scr >> 12) & 0xFF)) // SCR[19..12]
outputBuffer.WriteByte(0x04 | byte((scr>>10)&0x03)) // marker + SCR[11..10]
outputBuffer.WriteByte(byte((scr >> 2) & 0xFF)) // SCR[9..2]
outputBuffer.WriteByte(0x04 | byte(scr&0x03)) // marker + SCR[1..0]
outputBuffer.WriteByte(0x01) // SCR_ext + marker
outputBuffer.WriteByte(0x89) // program_mux_rate high
outputBuffer.WriteByte(0xC8) // program_mux_rate low + markers + reserved + stuffing_length(0)
}

View File

@@ -0,0 +1,853 @@
package mpegps
import (
"bytes"
"io"
"testing"
"m7s.live/v5/pkg/util"
)
func min(a, b int) int {
if a < b {
return a
}
return b
}
func TestMpegPSConstants(t *testing.T) {
// Test that PS constants are properly defined
t.Run("Constants", func(t *testing.T) {
if StartCodePS != 0x000001ba {
t.Errorf("Expected StartCodePS %x, got %x", 0x000001ba, StartCodePS)
}
if PSPackHeaderSize != 14 {
t.Errorf("Expected PSPackHeaderSize %d, got %d", 14, PSPackHeaderSize)
}
if MaxPESPayloadSize != 0xFFEB {
t.Errorf("Expected MaxPESPayloadSize %x, got %x", 0xFFEB, MaxPESPayloadSize)
}
})
}
func TestMuxPSHeader(t *testing.T) {
// Test PS header generation
t.Run("PSHeader", func(t *testing.T) {
// Create a buffer for testing - initialize with length 0 to allow appending
buffer := make([]byte, 0, PSPackHeaderSize)
utilBuffer := util.Buffer(buffer)
// Call MuxPSHeader
MuxPSHeader(&utilBuffer)
// Check the buffer length
if len(utilBuffer) != PSPackHeaderSize {
t.Errorf("Expected buffer length %d, got %d", PSPackHeaderSize, len(utilBuffer))
}
// Check PS start code (first 4 bytes should be 0x00 0x00 0x01 0xBA)
expectedStartCode := []byte{0x00, 0x00, 0x01, 0xBA}
if !bytes.Equal(utilBuffer[:4], expectedStartCode) {
t.Errorf("Expected PS start code %x, got %x", expectedStartCode, utilBuffer[:4])
}
t.Logf("PS Header: %x", utilBuffer)
t.Logf("Buffer length: %d", len(utilBuffer))
})
}
func TestMpegpsPESFrame(t *testing.T) {
// Test MpegpsPESFrame basic functionality
t.Run("PESFrame", func(t *testing.T) {
// Create PES frame
pesFrame := &MpegpsPESFrame{
StreamType: 0x1B, // H.264
}
pesFrame.Pts = 90000 // 1 second in 90kHz clock
pesFrame.Dts = 90000
// Test basic properties
if pesFrame.StreamType != 0x1B {
t.Errorf("Expected stream type 0x1B, got %x", pesFrame.StreamType)
}
if pesFrame.Pts != 90000 {
t.Errorf("Expected PTS %d, got %d", 90000, pesFrame.Pts)
}
if pesFrame.Dts != 90000 {
t.Errorf("Expected DTS %d, got %d", 90000, pesFrame.Dts)
}
t.Logf("PES Frame: StreamType=%x, PTS=%d, DTS=%d", pesFrame.StreamType, pesFrame.Pts, pesFrame.Dts)
})
}
func TestReadPayload(t *testing.T) {
// Test ReadPayload functionality
t.Run("ReadPayload", func(t *testing.T) {
// Create test data with payload length and payload
testData := []byte{
0x00, 0x05, // Payload length = 5 bytes
0x01, 0x02, 0x03, 0x04, 0x05, // Payload data
}
demuxer := &MpegPsDemuxer{}
reader := util.NewBufReader(bytes.NewReader(testData))
payload, err := demuxer.ReadPayload(reader)
if err != nil {
t.Fatalf("ReadPayload failed: %v", err)
}
if payload.Size != 5 {
t.Errorf("Expected payload size 5, got %d", payload.Size)
}
expectedPayload := []byte{0x01, 0x02, 0x03, 0x04, 0x05}
if !bytes.Equal(payload.ToBytes(), expectedPayload) {
t.Errorf("Expected payload %x, got %x", expectedPayload, payload.ToBytes())
}
t.Logf("ReadPayload successful: %x", payload.ToBytes())
})
}
func TestMpegPSMuxerBasic(t *testing.T) {
// Test MpegPSMuxer basic functionality
t.Run("MuxBasic", func(t *testing.T) {
// Test basic PS header generation without PlayBlock
// This focuses on testing the header generation logic
var outputBuffer util.Buffer = make([]byte, 0, 1024)
outputBuffer.Reset()
// Test PS header generation
MuxPSHeader(&outputBuffer)
// Add stuffing bytes as expected by the demuxer
// The demuxer expects: 9 bytes + 1 stuffing length byte + stuffing bytes
stuffingLength := byte(0x00) // No stuffing bytes
outputBuffer.WriteByte(stuffingLength)
// Verify PS header contains expected start code
if len(outputBuffer) != PSPackHeaderSize+1 {
t.Errorf("Expected PS header size %d, got %d", PSPackHeaderSize+1, len(outputBuffer))
}
// Check for PS start code
if !bytes.Contains(outputBuffer, []byte{0x00, 0x00, 0x01, 0xBA}) {
t.Error("PS header does not contain PS start code")
}
t.Logf("PS Header: %x", outputBuffer)
t.Logf("PS Header size: %d bytes", len(outputBuffer))
// Test PSM header generation
var pesAudio, pesVideo *MpegpsPESFrame
var elementary_stream_map_length uint16
// Simulate audio stream
hasAudio := true
if hasAudio {
elementary_stream_map_length += 4
pesAudio = &MpegpsPESFrame{}
pesAudio.StreamID = 0xC0 // MPEG audio
pesAudio.StreamType = 0x0F // AAC
}
// Simulate video stream
hasVideo := true
if hasVideo {
elementary_stream_map_length += 4
pesVideo = &MpegpsPESFrame{}
pesVideo.StreamID = 0xE0 // MPEG video
pesVideo.StreamType = 0x1B // H.264
}
// Create PSM header with proper payload length
psmData := make([]byte, 0, PSMHeaderSize+int(elementary_stream_map_length))
psmBuffer := util.Buffer(psmData)
psmBuffer.Reset()
// Write PSM start code
psmBuffer.WriteUint32(StartCodeMAP)
psmLength := uint16(PSMHeaderSize + int(elementary_stream_map_length) - 6)
psmBuffer.WriteUint16(psmLength) // psm_length
psmBuffer.WriteByte(0xE0) // current_next_indicator + reserved + psm_version
psmBuffer.WriteByte(0xFF) // reserved + marker
psmBuffer.WriteUint16(0) // program_stream_info_length
psmBuffer.WriteUint16(elementary_stream_map_length)
if pesAudio != nil {
psmBuffer.WriteByte(pesAudio.StreamType) // stream_type
psmBuffer.WriteByte(pesAudio.StreamID) // elementary_stream_id
psmBuffer.WriteUint16(0) // elementary_stream_info_length
}
if pesVideo != nil {
psmBuffer.WriteByte(pesVideo.StreamType) // stream_type
psmBuffer.WriteByte(pesVideo.StreamID) // elementary_stream_id
psmBuffer.WriteUint16(0) // elementary_stream_info_length
}
// Verify PSM header
if len(psmBuffer) != PSMHeaderSize+int(elementary_stream_map_length) {
t.Errorf("Expected PSM size %d, got %d", PSMHeaderSize+int(elementary_stream_map_length), len(psmBuffer))
}
// Check for PSM start code
if !bytes.Contains(psmBuffer, []byte{0x00, 0x00, 0x01, 0xBC}) {
t.Error("PSM header does not contain PSM start code")
}
t.Logf("PSM Header: %x", psmBuffer)
t.Logf("PSM Header size: %d bytes", len(psmBuffer))
// Test ReadPayload function directly
t.Run("ReadPayload", func(t *testing.T) {
// Create test payload data
testPayload := []byte{0x01, 0x02, 0x03, 0x04, 0x05}
// Create a packet with length prefix
packetData := make([]byte, 0, 2+len(testPayload))
packetData = append(packetData, byte(len(testPayload)>>8), byte(len(testPayload)))
packetData = append(packetData, testPayload...)
reader := util.NewBufReader(bytes.NewReader(packetData))
demuxer := &MpegPsDemuxer{}
// Test ReadPayload function
payload, err := demuxer.ReadPayload(reader)
if err != nil {
t.Fatalf("ReadPayload failed: %v", err)
}
if payload.Size != len(testPayload) {
t.Errorf("Expected payload size %d, got %d", len(testPayload), payload.Size)
}
if !bytes.Equal(payload.ToBytes(), testPayload) {
t.Errorf("Expected payload %x, got %x", testPayload, payload.ToBytes())
}
t.Logf("ReadPayload test passed: %x", payload.ToBytes())
})
// Test basic demuxing with PS header only
t.Run("PSHeader", func(t *testing.T) {
// Create a simple test that just verifies the PS header structure
// without trying to demux it (which expects more data)
if len(outputBuffer) < 4 {
t.Errorf("PS header too short: %d bytes", len(outputBuffer))
}
// Check that it starts with the correct start code
if !bytes.HasPrefix(outputBuffer, []byte{0x00, 0x00, 0x01, 0xBA}) {
t.Errorf("PS header does not start with correct start code: %x", outputBuffer[:4])
}
t.Logf("PS header structure test passed")
})
t.Logf("Basic mux/demux test completed successfully")
})
// Test basic PES packet generation without PlayBlock
t.Run("PESGeneration", func(t *testing.T) {
// Create a test that simulates PES packet generation
// without requiring a full subscriber setup
// Create test payload
testPayload := make([]byte, 5000)
for i := range testPayload {
testPayload[i] = byte(i % 256)
}
// Create PES frame
pesFrame := &MpegpsPESFrame{
StreamType: 0x1B, // H.264
}
pesFrame.Pts = 90000
pesFrame.Dts = 90000
// Create allocator for testing
allocator := util.NewScalableMemoryAllocator(1024*1024)
packet := util.NewRecyclableMemory(allocator)
// Write PES packet
err := pesFrame.WritePESPacket(util.NewMemory(testPayload), &packet)
if err != nil {
t.Fatalf("WritePESPacket failed: %v", err)
}
// Verify packet was written
packetData := packet.ToBytes()
if len(packetData) == 0 {
t.Fatal("No data was written to packet")
}
t.Logf("PES packet generated: %d bytes", len(packetData))
t.Logf("Packet data (first 64 bytes): %x", packetData[:min(64, len(packetData))])
// Verify PS header is present
if !bytes.Contains(packetData, []byte{0x00, 0x00, 0x01, 0xBA}) {
t.Error("PES packet does not contain PS start code")
}
// Test reading back the packet
reader := util.NewBufReader(bytes.NewReader(packetData))
// Skip PS header
code, err := reader.ReadBE32(4)
if err != nil {
t.Fatalf("Failed to read start code: %v", err)
}
if code != StartCodePS {
t.Errorf("Expected PS start code %x, got %x", StartCodePS, code)
}
// Skip PS header
if err = reader.Skip(9); err != nil {
t.Fatalf("Failed to skip PS header: %v", err)
}
psl, err := reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read stuffing length: %v", err)
}
psl &= 0x07
if err = reader.Skip(int(psl)); err != nil {
t.Fatalf("Failed to skip stuffing bytes: %v", err)
}
// Read PES packets directly by parsing the PES structure
totalPayloadSize := 0
packetCount := 0
for reader.Buffered() > 0 {
// Read PES packet start code (0x00000100 + stream_id)
pesStartCode, err := reader.ReadBE32(4)
if err != nil {
if err == io.EOF {
break
}
t.Fatalf("Failed to read PES start code: %v", err)
}
// Check if it's a PES packet (starts with 0x000001)
if pesStartCode&0xFFFFFF00 != 0x00000100 {
t.Errorf("Invalid PES start code: %x", pesStartCode)
break
}
// // streamID := byte(pesStartCode & 0xFF)
t.Logf("PES packet %d: stream_id=0x%02x", packetCount+1, pesStartCode&0xFF)
// Read PES packet length
pesLength, err := reader.ReadBE(2)
if err != nil {
t.Fatalf("Failed to read PES length: %v", err)
}
// Read PES header
// Skip the first byte (flags)
_, err = reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES flags1: %v", err)
}
// Skip the second byte (flags)
_, err = reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES flags2: %v", err)
}
// Read header data length
headerDataLength, err := reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES header data length: %v", err)
}
// Skip header data
if err = reader.Skip(int(headerDataLength)); err != nil {
t.Fatalf("Failed to skip PES header data: %v", err)
}
// Calculate payload size
payloadSize := pesLength - 3 - int(headerDataLength) // 3 = flags1 + flags2 + headerDataLength
if payloadSize > 0 {
// Read payload data
payload, err := reader.ReadBytes(payloadSize)
if err != nil {
t.Fatalf("Failed to read PES payload: %v", err)
}
totalPayloadSize += payload.Size
t.Logf("PES packet %d: %d bytes payload", packetCount+1, payload.Size)
}
packetCount++
}
// Verify total payload size matches
if totalPayloadSize != len(testPayload) {
t.Errorf("Expected total payload size %d, got %d", len(testPayload), totalPayloadSize)
}
t.Logf("PES generation test completed successfully: %d packets, total %d bytes", packetCount, totalPayloadSize)
})
}
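The reader loops above all recover the ES payload size from the 16-bit PES_packet_length field by subtracting the two flag bytes and the header-data-length byte. A minimal sketch of that accounting (the function names are illustrative, not from the codebase):

```go
package main

import "fmt"

// pesPacketLength computes the value of the 16-bit PES_packet_length
// field for a given ES payload: the field counts every byte after
// itself, i.e. 2 flag bytes + 1 header-data-length byte + header data
// + payload.
func pesPacketLength(payloadSize, headerDataLen int) int {
	return 3 + headerDataLen + payloadSize
}

// payloadSize inverts that calculation, as the reader side does.
func payloadSize(pesLength, headerDataLen int) int {
	return pesLength - 3 - headerDataLen
}

func main() {
	l := pesPacketLength(1000, 10) // PTS+DTS present -> 10 header data bytes
	fmt.Println(l, payloadSize(l, 10)) // prints 1013 1000
}
```

Note that a PES_packet_length of 0 means "unbounded" and is only legal for video in transport streams, so this arithmetic applies to the bounded case the tests exercise.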
func TestPESPacketWriteRead(t *testing.T) {
// Test PES packet writing and reading functionality
t.Run("PESWriteRead", func(t *testing.T) {
// Create test payload data
testPayload := make([]byte, 1000)
for i := range testPayload {
testPayload[i] = byte(i % 256)
}
// Create PES frame
pesFrame := &MpegpsPESFrame{
StreamType: 0x1B, // H.264
}
pesFrame.Pts = 90000 // 1 second in 90kHz clock
pesFrame.Dts = 90000
// Create allocator for testing
allocator := util.NewScalableMemoryAllocator(1024)
packet := util.NewRecyclableMemory(allocator)
// Write PES packet
err := pesFrame.WritePESPacket(util.NewMemory(testPayload), &packet)
if err != nil {
t.Fatalf("WritePESPacket failed: %v", err)
}
// Verify that packet was written
packetData := packet.ToBytes()
if len(packetData) == 0 {
t.Fatal("No data was written to packet")
}
t.Logf("PES packet written: %d bytes", len(packetData))
t.Logf("Packet data (first 64 bytes): %x", packetData[:min(64, len(packetData))])
// Verify PS header is present
if !bytes.Contains(packetData, []byte{0x00, 0x00, 0x01, 0xBA}) {
t.Error("PES packet does not contain PS start code")
}
// Now test reading the PES packet back
reader := util.NewBufReader(bytes.NewReader(packetData))
// Read and process the PS header
code, err := reader.ReadBE32(4)
if err != nil {
t.Fatalf("Failed to read start code: %v", err)
}
if code != StartCodePS {
t.Errorf("Expected PS start code %x, got %x", StartCodePS, code)
}
// Skip PS header (9 bytes + stuffing length)
if err = reader.Skip(9); err != nil {
t.Fatalf("Failed to skip PS header: %v", err)
}
psl, err := reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read stuffing length: %v", err)
}
psl &= 0x07
if err = reader.Skip(int(psl)); err != nil {
t.Fatalf("Failed to skip stuffing bytes: %v", err)
}
// Read PES packet directly by parsing the PES structure
totalPayloadSize := 0
packetCount := 0
for reader.Buffered() > 0 {
// Read PES packet start code (0x00000100 + stream_id)
pesStartCode, err := reader.ReadBE32(4)
if err != nil {
if err == io.EOF {
break
}
t.Fatalf("Failed to read PES start code: %v", err)
}
// Check if it's a PES packet (starts with 0x000001)
if pesStartCode&0xFFFFFF00 != 0x00000100 {
t.Errorf("Invalid PES start code: %x", pesStartCode)
break
}
// streamID := byte(pesStartCode & 0xFF)
t.Logf("PES packet %d: stream_id=0x%02x", packetCount+1, pesStartCode&0xFF)
// Read PES packet length
pesLength, err := reader.ReadBE(2)
if err != nil {
t.Fatalf("Failed to read PES length: %v", err)
}
// Read PES header
// Skip the first byte (flags)
_, err = reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES flags1: %v", err)
}
// Skip the second byte (flags)
_, err = reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES flags2: %v", err)
}
// Read header data length
headerDataLength, err := reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES header data length: %v", err)
}
// Skip header data
if err = reader.Skip(int(headerDataLength)); err != nil {
t.Fatalf("Failed to skip PES header data: %v", err)
}
// Calculate payload size
payloadSize := pesLength - 3 - int(headerDataLength) // 3 = flags1 + flags2 + headerDataLength
if payloadSize > 0 {
// Read payload data
payload, err := reader.ReadBytes(payloadSize)
if err != nil {
t.Fatalf("Failed to read PES payload: %v", err)
}
totalPayloadSize += payload.Size
t.Logf("PES packet %d: %d bytes payload", packetCount+1, payload.Size)
}
packetCount++
}
t.Logf("PES payload read: %d bytes", totalPayloadSize)
// Verify payload size
if totalPayloadSize != len(testPayload) {
t.Errorf("Expected payload size %d, got %d", len(testPayload), totalPayloadSize)
}
// Note: We can't easily verify the content because the payload is fragmented across multiple PES packets
// But we can verify the total size is correct
t.Logf("PES packet write-read test completed successfully")
})
}
func TestLargePESPacket(t *testing.T) {
// Test large PES packet handling (payload > 65535 bytes)
t.Run("LargePESPacket", func(t *testing.T) {
// Create large test payload (exceeds 65535 bytes)
largePayload := make([]byte, 70000) // 70KB payload
for i := range largePayload {
largePayload[i] = byte(i % 256)
}
// Create PES frame
pesFrame := &MpegpsPESFrame{
StreamType: 0x1B, // H.264
}
pesFrame.Pts = 180000 // 2 seconds in 90kHz clock
pesFrame.Dts = 180000
// Create allocator for testing
allocator := util.NewScalableMemoryAllocator(1024*1024) // 1MB allocator
packet := util.NewRecyclableMemory(allocator)
// Write large PES packet
t.Logf("Writing large PES packet with %d bytes payload", len(largePayload))
err := pesFrame.WritePESPacket(util.NewMemory(largePayload), &packet)
if err != nil {
t.Fatalf("WritePESPacket failed for large payload: %v", err)
}
// Verify that packet was written
packetData := packet.ToBytes()
if len(packetData) == 0 {
t.Fatal("No data was written to packet")
}
t.Logf("Large PES packet written: %d bytes", len(packetData))
// Verify PS header is present
if !bytes.Contains(packetData, []byte{0x00, 0x00, 0x01, 0xBA}) {
t.Error("Large PES packet does not contain PS start code")
}
// Count number of PES packets (should be multiple due to size limitation)
pesCount := 0
reader := util.NewBufReader(bytes.NewReader(packetData))
// Skip PS header
code, err := reader.ReadBE32(4)
if err != nil {
t.Fatalf("Failed to read start code: %v", err)
}
if code != StartCodePS {
t.Errorf("Expected PS start code %x, got %x", StartCodePS, code)
}
// Skip PS header
if err = reader.Skip(9); err != nil {
t.Fatalf("Failed to skip PS header: %v", err)
}
psl, err := reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read stuffing length: %v", err)
}
psl &= 0x07
if err = reader.Skip(int(psl)); err != nil {
t.Fatalf("Failed to skip stuffing bytes: %v", err)
}
// Read and count PES packets
totalPayloadSize := 0
for reader.Buffered() > 0 {
// Read PES packet start code (0x00000100 + stream_id)
pesStartCode, err := reader.ReadBE32(4)
if err != nil {
if err == io.EOF {
break
}
t.Fatalf("Failed to read PES start code: %v", err)
}
// Check if it's a PES packet (starts with 0x000001)
if pesStartCode&0xFFFFFF00 != 0x00000100 {
t.Errorf("Invalid PES start code: %x", pesStartCode)
break
}
// streamID := byte(pesStartCode & 0xFF)
// Read PES packet length
pesLength, err := reader.ReadBE(2)
if err != nil {
t.Fatalf("Failed to read PES length: %v", err)
}
// Read PES header
// Skip the first byte (flags)
_, err = reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES flags1: %v", err)
}
// Skip the second byte (flags)
_, err = reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES flags2: %v", err)
}
// Read header data length
headerDataLength, err := reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES header data length: %v", err)
}
// Skip header data
if err = reader.Skip(int(headerDataLength)); err != nil {
t.Fatalf("Failed to skip PES header data: %v", err)
}
// Calculate payload size
payloadSize := pesLength - 3 - int(headerDataLength) // 3 = flags1 + flags2 + headerDataLength
if payloadSize > 0 {
// Read payload data
payload, err := reader.ReadBytes(payloadSize)
if err != nil {
t.Fatalf("Failed to read PES payload: %v", err)
}
totalPayloadSize += payload.Size
t.Logf("PES packet %d: %d bytes payload", pesCount+1, payload.Size)
}
pesCount++
}
// Verify that we got multiple PES packets
if pesCount < 2 {
t.Errorf("Expected multiple PES packets for large payload, got %d", pesCount)
}
// Verify total payload size
if totalPayloadSize != len(largePayload) {
t.Errorf("Expected total payload size %d, got %d", len(largePayload), totalPayloadSize)
}
// Verify individual PES packet sizes don't exceed maximum
maxPacketSize := MaxPESPayloadSize + PESHeaderMinSize
if pesCount == 1 && len(packetData) > maxPacketSize {
t.Errorf("Single PES packet exceeds maximum size: %d > %d", len(packetData), maxPacketSize)
}
t.Logf("Large PES packet test completed successfully: %d packets, total %d bytes", pesCount, totalPayloadSize)
})
}
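The multi-packet behaviour this test relies on follows from PES_packet_length being a 16-bit field: WritePESPacket has to split any payload larger than the per-packet limit. A sketch of the split, with maxPayload standing in for MaxPESPayloadSize (whose exact value is assumed here):

```go
package main

import "fmt"

// splitPayload returns the per-PES chunk sizes a payload is carried
// in, mirroring the min(remaining, MaxPESPayloadSize) loop in
// WritePESPacket.
func splitPayload(total, maxPayload int) (chunks []int) {
	for total > 0 {
		n := total
		if n > maxPayload {
			n = maxPayload
		}
		chunks = append(chunks, n)
		total -= n
	}
	return
}

func main() {
	fmt.Println(splitPayload(70000, 65000)) // prints [65000 5000]
}
```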
func TestPESPacketBoundaryConditions(t *testing.T) {
// Test PES packet boundary conditions
t.Run("BoundaryConditions", func(t *testing.T) {
testCases := []struct {
name string
payloadSize int
}{
{"EmptyPayload", 0},
{"SmallPayload", 1},
{"ExactBoundary", MaxPESPayloadSize},
{"JustOverBoundary", MaxPESPayloadSize + 1},
{"MultipleBoundary", MaxPESPayloadSize * 2 + 100},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create test payload
testPayload := make([]byte, tc.payloadSize)
for i := range testPayload {
testPayload[i] = byte(i % 256)
}
// Create PES frame
pesFrame := &MpegpsPESFrame{
StreamType: 0x1B, // H.264
}
pesFrame.Pts = uint64(tc.payloadSize) * 90 // Use payload size as PTS
pesFrame.Dts = uint64(tc.payloadSize) * 90
// Create allocator for testing
allocator := util.NewScalableMemoryAllocator(1024*1024)
packet := util.NewRecyclableMemory(allocator)
// Write PES packet
err := pesFrame.WritePESPacket(util.NewMemory(testPayload), &packet)
if err != nil {
t.Fatalf("WritePESPacket failed: %v", err)
}
// Verify that packet was written
packetData := packet.ToBytes()
if len(packetData) == 0 && tc.payloadSize > 0 {
t.Fatal("No data was written to packet for non-empty payload")
}
t.Logf("%s: %d bytes payload -> %d bytes packet", tc.name, tc.payloadSize, len(packetData))
// For non-empty payloads, verify we can read them back
if tc.payloadSize > 0 {
reader := util.NewBufReader(bytes.NewReader(packetData))
// Skip PS header
code, err := reader.ReadBE32(4)
if err != nil {
t.Fatalf("Failed to read start code: %v", err)
}
if code != StartCodePS {
t.Errorf("Expected PS start code %x, got %x", StartCodePS, code)
}
// Skip PS header
if err = reader.Skip(9); err != nil {
t.Fatalf("Failed to skip PS header: %v", err)
}
psl, err := reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read stuffing length: %v", err)
}
psl &= 0x07
if err = reader.Skip(int(psl)); err != nil {
t.Fatalf("Failed to skip stuffing bytes: %v", err)
}
// Read PES packets
totalPayloadSize := 0
packetCount := 0
for reader.Buffered() > 0 {
// Read PES packet start code (0x00000100 + stream_id)
pesStartCode, err := reader.ReadBE32(4)
if err != nil {
if err == io.EOF {
break
}
t.Fatalf("Failed to read PES start code: %v", err)
}
// Check if it's a PES packet (starts with 0x000001)
if pesStartCode&0xFFFFFF00 != 0x00000100 {
t.Errorf("Invalid PES start code: %x", pesStartCode)
break
}
// streamID := byte(pesStartCode & 0xFF)
// Read PES packet length
pesLength, err := reader.ReadBE(2)
if err != nil {
t.Fatalf("Failed to read PES length: %v", err)
}
// Read PES header
// Skip the first byte (flags)
_, err = reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES flags1: %v", err)
}
// Skip the second byte (flags)
_, err = reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES flags2: %v", err)
}
// Read header data length
headerDataLength, err := reader.ReadByte()
if err != nil {
t.Fatalf("Failed to read PES header data length: %v", err)
}
// Skip header data
if err = reader.Skip(int(headerDataLength)); err != nil {
t.Fatalf("Failed to skip PES header data: %v", err)
}
// Calculate payload size
payloadSize := pesLength - 3 - int(headerDataLength) // 3 = flags1 + flags2 + headerDataLength
if payloadSize > 0 {
// Read payload data
payload, err := reader.ReadBytes(payloadSize)
if err != nil {
t.Fatalf("Failed to read PES payload: %v", err)
}
totalPayloadSize += payload.Size
}
packetCount++
}
// Verify total payload size matches
if totalPayloadSize != tc.payloadSize {
t.Errorf("Expected total payload size %d, got %d", tc.payloadSize, totalPayloadSize)
}
t.Logf("%s: Successfully read back %d PES packets", tc.name, packetCount)
}
})
}
})
}

pkg/format/ps/pes.go Normal file

@@ -0,0 +1,35 @@
package mpegps
import (
mpegts "m7s.live/v5/pkg/format/ts"
"m7s.live/v5/pkg/util"
)
type MpegpsPESFrame struct {
StreamType byte // Stream type (e.g., video, audio)
mpegts.MpegPESHeader
}
func (frame *MpegpsPESFrame) WritePESPacket(payload util.Memory, allocator *util.RecyclableMemory) (err error) {
frame.DataAlignmentIndicator = 1
pesReader := payload.NewReader()
var outputMemory util.Buffer = allocator.NextN(PSPackHeaderSize)
outputMemory.Reset()
MuxPSHeader(&outputMemory)
for pesReader.Length > 0 {
currentPESPayload := min(pesReader.Length, MaxPESPayloadSize)
var pesHeadItem util.Buffer
pesHeadItem, err = frame.WritePESHeader(currentPESPayload)
if err != nil {
return
}
copy(allocator.NextN(pesHeadItem.Len()), pesHeadItem)
// allocate the output buffer
outputMemory = allocator.NextN(currentPESPayload)
pesReader.Read(outputMemory)
frame.DataAlignmentIndicator = 0
}
return nil
}
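A PS stream produced this way starts with the pack header start code 0x000001BA (the byte sequence the tests above look for), and each PES packet starts with the 24-bit prefix 0x000001 followed by the stream_id. A self-contained sketch of the two start codes (standard MPEG-PS values, independent of this package):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// psPackStartCode is the MPEG-PS pack header start code.
var psPackStartCode = []byte{0x00, 0x00, 0x01, 0xBA}

// pesStartCode builds the 4-byte start of a PES packet: the 24-bit
// prefix 0x000001 followed by the stream_id.
func pesStartCode(streamID byte) []byte {
	b := make([]byte, 4)
	binary.BigEndian.PutUint32(b, 0x00000100|uint32(streamID))
	return b
}

func main() {
	fmt.Printf("pack: % x\n", psPackStartCode)
	fmt.Printf("pes:  % x\n", pesStartCode(0xE0)) // 0xE0 = first video stream id
}
```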

pkg/format/raw.go Normal file

@@ -0,0 +1,131 @@
package format
import (
"bytes"
"fmt"
"github.com/deepch/vdk/codec/h264parser"
"github.com/deepch/vdk/codec/h265parser"
"m7s.live/v5/pkg"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
)
var _ pkg.IAVFrame = (*RawAudio)(nil)
type RawAudio struct {
pkg.Sample
}
func (r *RawAudio) GetSize() int {
return r.Raw.(*util.Memory).Size
}
func (r *RawAudio) Demux() error {
r.Raw = &r.Memory
return nil
}
func (r *RawAudio) Mux(from *pkg.Sample) (err error) {
r.InitRecycleIndexes(0)
r.Memory = *from.Raw.(*util.Memory)
r.ICodecCtx = from.GetBase()
return
}
func (r *RawAudio) String() string {
return fmt.Sprintf("RawAudio{FourCC: %s, Timestamp: %s, Size: %d}", r.FourCC(), r.Timestamp, r.Size)
}
var _ pkg.IAVFrame = (*H26xFrame)(nil)
type H26xFrame struct {
pkg.Sample
}
func (h *H26xFrame) CheckCodecChange() (err error) {
if h.ICodecCtx == nil {
return pkg.ErrUnsupportCodec
}
var hasVideoFrame bool
switch ctx := h.GetBase().(type) {
case *codec.H264Ctx:
var sps, pps []byte
for nalu := range h.Raw.(*pkg.Nalus).RangePoint {
switch codec.ParseH264NALUType(nalu.Buffers[0][0]) {
case codec.NALU_SPS:
sps = nalu.ToBytes()
case codec.NALU_PPS:
pps = nalu.ToBytes()
case codec.NALU_IDR_Picture:
h.IDR = true
case codec.NALU_Non_IDR_Picture:
hasVideoFrame = true
}
}
if sps != nil && pps != nil {
var codecData h264parser.CodecData
codecData, err = h264parser.NewCodecDataFromSPSAndPPS(sps, pps)
if err != nil {
return
}
if !bytes.Equal(codecData.Record, ctx.Record) {
h.ICodecCtx = &codec.H264Ctx{
CodecData: codecData,
}
}
}
case *codec.H265Ctx:
var vps, sps, pps []byte
for nalu := range h.Raw.(*pkg.Nalus).RangePoint {
switch codec.ParseH265NALUType(nalu.Buffers[0][0]) {
case h265parser.NAL_UNIT_VPS:
vps = nalu.ToBytes()
case h265parser.NAL_UNIT_SPS:
sps = nalu.ToBytes()
case h265parser.NAL_UNIT_PPS:
pps = nalu.ToBytes()
case h265parser.NAL_UNIT_CODED_SLICE_BLA_W_LP,
h265parser.NAL_UNIT_CODED_SLICE_BLA_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_BLA_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_IDR_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_IDR_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_CRA:
h.IDR = true
case 1, 2, 3, 4, 5, 6, 7, 8, 9: // non-IRAP coded slice NALU types (TRAIL_R..RASL_R)
hasVideoFrame = true
}
}
if vps != nil && sps != nil && pps != nil {
var codecData h265parser.CodecData
codecData, err = h265parser.NewCodecDataFromVPSAndSPSAndPPS(vps, sps, pps)
if err != nil {
return
}
if !bytes.Equal(codecData.Record, ctx.Record) {
h.ICodecCtx = &codec.H265Ctx{
CodecData: codecData,
}
}
}
}
// Return ErrSkip if no video frames are present (only metadata NALUs)
if !hasVideoFrame && !h.IDR {
return pkg.ErrSkip
}
return
}
func (r *H26xFrame) GetSize() (ret int) {
switch raw := r.Raw.(type) {
case *pkg.Nalus:
for nalu := range raw.RangePoint {
ret += nalu.Size
}
}
return
}
func (h *H26xFrame) String() string {
return fmt.Sprintf("H26xFrame{FourCC: %s, Timestamp: %s, CTS: %s}", h.FourCC(), h.Timestamp, h.CTS)
}
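CheckCodecChange dispatches on the NALU type taken from the first payload byte, and the bit layout differs between the two codecs: H.264 keeps a 5-bit type in the low bits, H.265 a 6-bit type in bits 1..6. A self-contained sketch of the extraction (not the codec package's actual helpers):

```go
package main

import "fmt"

// h264NALUType extracts the 5-bit nal_unit_type from the first NALU
// byte (SPS=7, PPS=8, IDR=5, non-IDR=1).
func h264NALUType(b byte) byte { return b & 0x1F }

// h265NALUType extracts the 6-bit nal_unit_type from bits 1..6 of the
// first NALU byte (VPS=32, SPS=33, PPS=34).
func h265NALUType(b byte) byte { return (b >> 1) & 0x3F }

func main() {
	fmt.Println(h264NALUType(0x67), h264NALUType(0x65)) // prints 7 5 (SPS, IDR)
	fmt.Println(h265NALUType(0x40), h265NALUType(0x42)) // prints 32 33 (VPS, SPS)
}
```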


@@ -4,7 +4,11 @@ import (
"bytes"
"errors"
"io"
"io/ioutil"
"time"
"m7s.live/v5"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/format"
"m7s.live/v5/pkg/util"
//"sync"
)
@@ -103,20 +107,14 @@ const (
type MpegTsStream struct {
PAT MpegTsPAT // PAT table info
PMT MpegTsPMT // PMT table info
PESBuffer map[uint16]*MpegTsPESPacket
PESChan chan *MpegTsPESPacket
Publisher *m7s.Publisher
Allocator *util.ScalableMemoryAllocator
writer m7s.PublishWriter[*format.Mpeg2Audio, *VideoFrame]
audioPID, videoPID, pmtPID uint16
tsPacket [TS_PACKET_SIZE]byte
}
// ios13818-1-CN.pdf 33/165
//
// TS
//
// Packet == Header + Payload == 188 bytes
type MpegTsPacket struct {
Header MpegTsHeader
Payload []byte
}
// The first 32 bits are the TS packet header, which describes the attributes of this packet
type MpegTsHeader struct {
@@ -185,25 +183,6 @@ type MpegTsDescriptor struct {
Data []byte
}
func ReadTsPacket(r io.Reader) (packet MpegTsPacket, err error) {
lr := &io.LimitedReader{R: r, N: TS_PACKET_SIZE}
// header
packet.Header, err = ReadTsHeader(lr)
if err != nil {
return
}
// payload
packet.Payload = make([]byte, lr.N)
_, err = lr.Read(packet.Payload)
if err != nil {
return
}
return
}
func ReadTsHeader(r io.Reader) (header MpegTsHeader, err error) {
var h uint32
@@ -365,7 +344,7 @@ func ReadTsHeader(r io.Reader) (header MpegTsHeader, err error) {
// Discard is an io.Writer; every Write call on it succeeds unconditionally
// without recording the copied data.
// It is used to drain data that must be read but not stored.
if _, err = io.CopyN(ioutil.Discard, lr, int64(lr.N)); err != nil {
if _, err = io.CopyN(io.Discard, lr, int64(lr.N)); err != nil {
return
}
}
@@ -440,138 +419,96 @@ func WriteTsHeader(w io.Writer, header MpegTsHeader) (written int, err error) {
return
}
//
//func (s *MpegTsStream) TestWrite(fileName string) error {
//
// if fileName != "" {
// file, err := os.Create(fileName)
// if err != nil {
// panic(err)
// }
// defer file.Close()
//
// patTsHeader := []byte{0x47, 0x40, 0x00, 0x10}
//
// if err := WritePATPacket(file, patTsHeader, *s.pat); err != nil {
// panic(err)
// }
//
// // TODO:这里的pid应该是由PAT给的
// pmtTsHeader := []byte{0x47, 0x41, 0x00, 0x10}
//
// if err := WritePMTPacket(file, pmtTsHeader, *s.pmt); err != nil {
// panic(err)
// }
// }
//
// var videoFrame int
// var audioFrame int
// for {
// tsPesPkt, ok := <-s.TsPesPktChan
// if !ok {
// fmt.Println("frame index, video , audio :", videoFrame, audioFrame)
// break
// }
//
// if tsPesPkt.PesPkt.Header.StreamID == STREAM_ID_AUDIO {
// audioFrame++
// }
//
// if tsPesPkt.PesPkt.Header.StreamID == STREAM_ID_VIDEO {
// println(tsPesPkt.PesPkt.Header.Pts)
// videoFrame++
// }
//
// fmt.Sprintf("%s", tsPesPkt)
//
// // if err := WritePESPacket(file, tsPesPkt.TsPkt.Header, tsPesPkt.PesPkt); err != nil {
// // return err
// // }
//
// }
//
// return nil
//}
func (s *MpegTsStream) ReadPAT(packet *MpegTsPacket, pr io.Reader) (err error) {
// 首先找到PID==0x00的TS包(PAT)
if PID_PAT == packet.Header.Pid {
if len(packet.Payload) == 188 {
pr = &util.Crc32Reader{R: pr, Crc32: 0xffffffff}
}
// Header + PSI + Paylod
s.PAT, err = ReadPAT(pr)
}
return
}
func (s *MpegTsStream) ReadPMT(packet *MpegTsPacket, pr io.Reader) (err error) {
// 在读取PAT中已经将所有频道节目信息(PMT_PID)保存了起来
// 接着读取所有TS包里面的PID,找出PID==PMT_PID的TS包,就是PMT表
for _, v := range s.PAT.Program {
if v.ProgramMapPID == packet.Header.Pid {
if len(packet.Payload) == 188 {
pr = &util.Crc32Reader{R: pr, Crc32: 0xffffffff}
}
// Header + PSI + Paylod
s.PMT, err = ReadPMT(pr)
}
}
return
}
func (s *MpegTsStream) Feed(ts io.Reader) (err error) {
writer := &s.writer
var reader bytes.Reader
var lr io.LimitedReader
lr.R = &reader
var tsHeader MpegTsHeader
tsData := make([]byte, TS_PACKET_SIZE)
for {
_, err = io.ReadFull(ts, tsData)
var pesHeader MpegPESHeader
for !s.Publisher.IsStopped() {
_, err = io.ReadFull(ts, s.tsPacket[:])
if err == io.EOF {
// 文件结尾 把最后面的数据发出去
for _, pesPkt := range s.PESBuffer {
if pesPkt != nil {
s.PESChan <- pesPkt
}
}
return nil
} else if err != nil {
return
}
reader.Reset(tsData)
reader.Reset(s.tsPacket[:])
lr.N = TS_PACKET_SIZE
if tsHeader, err = ReadTsHeader(&lr); err != nil {
return
}
if tsHeader.Pid == PID_PAT {
switch tsHeader.Pid {
case PID_PAT:
if s.PAT, err = ReadPAT(&lr); err != nil {
return
}
s.pmtPID = s.PAT.Program[0].ProgramMapPID
continue
case s.pmtPID:
if len(s.PMT.Stream) != 0 {
continue
}
if len(s.PMT.Stream) == 0 {
for _, v := range s.PAT.Program {
if v.ProgramMapPID == tsHeader.Pid {
if s.PMT, err = ReadPMT(&lr); err != nil {
return
}
for _, v := range s.PMT.Stream {
s.PESBuffer[v.ElementaryPID] = nil
for _, pmt := range s.PMT.Stream {
switch pmt.StreamType {
case STREAM_TYPE_H265:
s.videoPID = pmt.ElementaryPID
writer.PublishVideoWriter = m7s.NewPublishVideoWriter[*VideoFrame](s.Publisher, s.Allocator)
writer.VideoFrame.ICodecCtx = &codec.H265Ctx{}
case STREAM_TYPE_H264:
s.videoPID = pmt.ElementaryPID
writer.PublishVideoWriter = m7s.NewPublishVideoWriter[*VideoFrame](s.Publisher, s.Allocator)
writer.VideoFrame.ICodecCtx = &codec.H264Ctx{}
case STREAM_TYPE_AAC:
s.audioPID = pmt.ElementaryPID
writer.PublishAudioWriter = m7s.NewPublishAudioWriter[*format.Mpeg2Audio](s.Publisher, s.Allocator)
writer.AudioFrame.ICodecCtx = &codec.AACCtx{}
case STREAM_TYPE_G711A:
s.audioPID = pmt.ElementaryPID
writer.PublishAudioWriter = m7s.NewPublishAudioWriter[*format.Mpeg2Audio](s.Publisher, s.Allocator)
writer.AudioFrame.ICodecCtx = codec.NewPCMACtx()
case STREAM_TYPE_G711U:
s.audioPID = pmt.ElementaryPID
writer.PublishAudioWriter = m7s.NewPublishAudioWriter[*format.Mpeg2Audio](s.Publisher, s.Allocator)
writer.AudioFrame.ICodecCtx = codec.NewPCMUCtx()
}
}
continue
}
} else if pesPkt, ok := s.PESBuffer[tsHeader.Pid]; ok {
case s.audioPID:
if tsHeader.PayloadUnitStartIndicator == 1 {
if pesPkt != nil {
s.PESChan <- pesPkt
}
pesPkt = &MpegTsPESPacket{}
s.PESBuffer[tsHeader.Pid] = pesPkt
if pesPkt.Header, err = ReadPESHeader(&lr); err != nil {
if pesHeader, err = ReadPESHeader0(&lr); err != nil {
return
}
if !s.Publisher.PubAudio {
continue
}
io.Copy(&pesPkt.Payload, &lr)
if writer.AudioFrame.Size > 0 {
if err = writer.NextAudio(); err != nil {
continue
}
}
writer.AudioFrame.SetDTS(time.Duration(pesHeader.Pts))
}
lr.Read(writer.AudioFrame.NextN(int(lr.N)))
case s.videoPID:
if tsHeader.PayloadUnitStartIndicator == 1 {
if pesHeader, err = ReadPESHeader0(&lr); err != nil {
return
}
if !s.Publisher.PubVideo {
continue
}
if writer.VideoFrame.Size > 0 {
if err = writer.NextVideo(); err != nil {
continue
}
}
writer.VideoFrame.SetDTS(time.Duration(pesHeader.Dts))
writer.VideoFrame.SetPTS(time.Duration(pesHeader.Pts))
}
lr.Read(writer.VideoFrame.NextN(int(lr.N)))
}
}
return
}
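Feed consumes fixed 188-byte packets and dispatches on the PID and the payload_unit_start_indicator carried in the 4-byte TS header. A sketch of that header layout as defined by ISO/IEC 13818-1 (an independent helper, not the package's ReadTsHeader):

```go
package main

import "fmt"

// tsHeader holds the fields of the fixed 4-byte TS header that the
// Feed loop relies on.
type tsHeader struct {
	pusi bool   // payload_unit_start_indicator
	pid  uint16 // 13-bit packet identifier
	cc   byte   // 4-bit continuity counter
}

// parseTSHeader decodes the fixed header of a 188-byte TS packet
// (sync byte 0x47 assumed already verified).
func parseTSHeader(b [4]byte) tsHeader {
	return tsHeader{
		pusi: b[1]&0x40 != 0,
		pid:  uint16(b[1]&0x1F)<<8 | uint16(b[2]),
		cc:   b[3] & 0x0F,
	}
}

func main() {
	h := parseTSHeader([4]byte{0x47, 0x40, 0x11, 0x17})
	fmt.Println(h.pusi, h.pid, h.cc) // prints true 17 7
}
```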


@@ -2,39 +2,19 @@ package mpegts
import (
"errors"
"fmt"
"io"
"m7s.live/v5/pkg/util"
"net"
)
// ios13818-1-CN.pdf 45/166
//
// PES
//
// Every transport stream and program stream is logically constructed from PES packets
type MpegTsPesStream struct {
TsPkt MpegTsPacket
PesPkt MpegTsPESPacket
}
// PES -- Packetized Elementary Streams: an ES is packetized into PES packets, the structure used to carry ES data
// 1110 xxxx -> video stream (0xE0)
// 110x xxxx -> audio stream (0xC0)
type MpegTsPESPacket struct {
Header MpegTsPESHeader
Payload util.Buffer //从TS包中读取的数据
Buffers net.Buffers //用于写TS包
}
type MpegTsPESHeader struct {
PacketStartCodePrefix uint32 // 24 bits 同跟随它的 stream_id 一起组成标识包起始端的包起始码.packet_start_code_prefix 为比特串"0000 0000 0000 0000 0000 0001"(0x000001)
type MpegPESHeader struct {
header [32]byte
StreamID byte // 8 bits; identifies the type and number of the elementary stream as defined in Table 2-22. In a transport stream, stream_id may be set to any valid value that accurately describes the elementary stream type; the actual type is specified in the program-specific information (see 2.4.4)
PesPacketLength uint16 // 16 bits; the number of bytes in the PES packet following this field. 0 means the length is neither specified nor bounded, and is only allowed for PES packets whose payload consists of bytes from a video elementary stream carried in transport stream packets
MpegTsOptionalPESHeader
PayloadLength uint64 // 这个不是标准文档里面的字段,是自己添加的,方便计算
}
// Optional PES header = MpegTsOptionalPESHeader + m stuffing bytes (0xFF, m * 8 bits)
@@ -102,20 +82,32 @@ type MpegtsPESFrame struct {
Pid uint16
IsKeyFrame bool
ContinuityCounter byte
ProgramClockReferenceBase uint64
MpegPESHeader
}
func ReadPESHeader(r io.Reader) (header MpegTsPESHeader, err error) {
var flags uint8
var length uint
func CreatePESWriters() (pesAudio, pesVideo MpegtsPESFrame) {
pesAudio, pesVideo = MpegtsPESFrame{
Pid: PID_AUDIO,
}, MpegtsPESFrame{
Pid: PID_VIDEO,
}
pesAudio.DataAlignmentIndicator = 1
pesVideo.DataAlignmentIndicator = 1
pesAudio.StreamID = STREAM_ID_AUDIO
pesVideo.StreamID = STREAM_ID_VIDEO
return
}
func ReadPESHeader0(r *io.LimitedReader) (header MpegPESHeader, err error) {
var length uint
var packetStartCodePrefix uint32
// packetStartCodePrefix(24) (0x000001)
header.PacketStartCodePrefix, err = util.ReadByteToUint24(r, true)
packetStartCodePrefix, err = util.ReadByteToUint24(r, true)
if err != nil {
return
}
if header.PacketStartCodePrefix != 0x0000001 {
if packetStartCodePrefix != 0x0000001 {
err = errors.New("read PacketStartCodePrefix is not 0x0000001")
return
}
@@ -141,18 +133,27 @@ func ReadPESHeader(r io.Reader) (header MpegTsPESHeader, err error) {
if length == 0 {
length = 1 << 31
}
var header1 MpegPESHeader
header1, err = ReadPESHeader(r)
if err == nil {
if header.PesPacketLength == 0 {
header1.PesPacketLength = uint16(r.N)
}
header1.StreamID = header.StreamID
return header1, nil
}
return
}
// lrPacket 和 lrHeader 位置指针是在同一位置的
lrPacket := &io.LimitedReader{R: r, N: int64(length)}
lrHeader := lrPacket
func ReadPESHeader(lrPacket *io.LimitedReader) (header MpegPESHeader, err error) {
var flags uint8
// constTen(2)
// pes_ScramblingControl(2)
// pes_Priority(1)
// dataAlignmentIndicator(1)
// copyright(1)
// originalOrCopy(1)
flags, err = util.ReadByteToUint8(lrHeader)
flags, err = util.ReadByteToUint8(lrPacket)
if err != nil {
return
}
@@ -171,7 +172,7 @@ func ReadPESHeader(r io.Reader) (header MpegTsPESHeader, err error) {
// additionalCopyInfoFlag(1)
// pes_CRCFlag(1)
// pes_ExtensionFlag(1)
flags, err = util.ReadByteToUint8(lrHeader)
flags, err = util.ReadByteToUint8(lrPacket)
if err != nil {
return
}
@@ -185,14 +186,14 @@ func ReadPESHeader(r io.Reader) (header MpegTsPESHeader, err error) {
header.PesExtensionFlag = flags & 0x01
// pes_HeaderDataLength(8)
header.PesHeaderDataLength, err = util.ReadByteToUint8(lrHeader)
header.PesHeaderDataLength, err = util.ReadByteToUint8(lrPacket)
if err != nil {
return
}
length = uint(header.PesHeaderDataLength)
length := uint(header.PesHeaderDataLength)
lrHeader = &io.LimitedReader{R: lrHeader, N: int64(length)}
lrHeader := &io.LimitedReader{R: lrPacket, N: int64(length)}
// 00 -> neither PTS nor DTS present in the PES header
// 10 -> PTS present in the PES header
@@ -219,6 +220,8 @@ func ReadPESHeader(r io.Reader) (header MpegTsPESHeader, err error) {
}
header.Dts = util.GetPtsDts(dts)
} else {
header.Dts = header.Pts
}
// reserved(2) + escr_Base1(3) + marker_bit(1) +
@@ -336,48 +339,31 @@ func ReadPESHeader(r io.Reader) (header MpegTsPESHeader, err error) {
}
}
// 2的16次方,16个字节
if lrPacket.N < 65536 {
// 这里得到的其实是负载长度,因为已经偏移过了Header部分.
//header.pes_PacketLength = uint16(lrPacket.N)
header.PayloadLength = uint64(lrPacket.N)
}
return
}
func WritePESHeader(w io.Writer, header MpegTsPESHeader) (written int, err error) {
if header.PacketStartCodePrefix != 0x0000001 {
err = errors.New("write PacketStartCodePrefix is not 0x0000001")
return
func (header *MpegPESHeader) WritePESHeader(esSize int) (w util.Buffer, err error) {
if header.DataAlignmentIndicator == 1 {
if header.Pts == header.Dts {
header.PtsDtsFlags = 0x80
header.PesHeaderDataLength = 5
} else {
header.PtsDtsFlags = 0xC0
header.PesHeaderDataLength = 10
}
// packetStartCodePrefix(24) (0x000001)
if err = util.WriteUint24ToByte(w, header.PacketStartCodePrefix, true); err != nil {
return
} else {
header.PtsDtsFlags = 0
header.PesHeaderDataLength = 0
}
written += 3
// streamID(8)
if err = util.WriteUint8ToByte(w, header.StreamID); err != nil {
return
pktLength := esSize + int(header.PesHeaderDataLength) + 3
if pktLength > 0xffff {
pktLength = 0
}
header.PesPacketLength = uint16(pktLength)
written += 1
// pes_PacketLength(16)
// PES包长度可能为0,这个时候,需要自己去算
// 0 <= len <= 65535
if err = util.WriteUint16ToByte(w, header.PesPacketLength, true); err != nil {
return
}
//fmt.Println("Length :", payloadLength)
//fmt.Println("PES Packet Length :", header.pes_PacketLength)
written += 2
w = header.header[:0]
w.WriteUint32(0x00000100 | uint32(header.StreamID))
w.WriteUint16(header.PesPacketLength)
// constTen(2)
// pes_ScramblingControl(2)
// pes_Priority(1)
@@ -385,18 +371,9 @@ func WritePESHeader(w io.Writer, header MpegTsPESHeader) (written int, err error
// copyright(1)
// originalOrCopy(1)
// 1000 0001
if header.ConstTen != 0x80 {
err = errors.New("pes header ConstTen != 0x80")
return
}
flags := header.ConstTen | header.PesScramblingControl | header.PesPriority | header.DataAlignmentIndicator | header.Copyright | header.OriginalOrCopy
if err = util.WriteUint8ToByte(w, flags); err != nil {
return
}
written += 1
flags := 0x80 | header.PesScramblingControl | header.PesPriority | header.DataAlignmentIndicator | header.Copyright | header.OriginalOrCopy
w.WriteByte(flags)
// pts_dts_Flags(2)
// escr_Flag(1)
// es_RateFlag(1)
@@ -405,19 +382,8 @@ func WritePESHeader(w io.Writer, header MpegTsPESHeader) (written int, err error
// pes_CRCFlag(1)
// pes_ExtensionFlag(1)
sevenFlags := header.PtsDtsFlags | header.EscrFlag | header.EsRateFlag | header.DsmTrickModeFlag | header.AdditionalCopyInfoFlag | header.PesCRCFlag | header.PesExtensionFlag
if err = util.WriteUint8ToByte(w, sevenFlags); err != nil {
return
}
written += 1
// pes_HeaderDataLength(8)
if err = util.WriteUint8ToByte(w, header.PesHeaderDataLength); err != nil {
return
}
written += 1
w.WriteByte(sevenFlags)
w.WriteByte(header.PesHeaderDataLength)
// PtsDtsFlags == 192 (11), 128 (10), 64 (01, forbidden), 0 (00)
if header.PtsDtsFlags&0x80 != 0 {
// Both PTS and DTS present (11); otherwise PTS only (10)
@@ -425,30 +391,121 @@ func WritePESHeader(w io.Writer, header MpegTsPESHeader) (written int, err error
// 11: PTS and DTS
// PTS(33) + 4 + 3
pts := util.PutPtsDts(header.Pts) | 3<<36
if err = util.WriteUint40ToByte(w, pts, true); err != nil {
if err = util.WriteUint40ToByte(&w, pts, true); err != nil {
return
}
written += 5
// DTS(33) + 4 + 3
dts := util.PutPtsDts(header.Dts) | 1<<36
if err = util.WriteUint40ToByte(w, dts, true); err != nil {
if err = util.WriteUint40ToByte(&w, dts, true); err != nil {
return
}
written += 5
} else {
// 10: PTS only
// PTS(33) + 4 + 3
pts := util.PutPtsDts(header.Pts) | 2<<36
if err = util.WriteUint40ToByte(w, pts, true); err != nil {
if err = util.WriteUint40ToByte(&w, pts, true); err != nil {
return
}
}
}
return
}
written += 5
func (frame *MpegtsPESFrame) WritePESPacket(payload util.Memory, allocator *util.RecyclableMemory) (err error) {
var pesHeadItem util.Buffer
pesHeadItem, err = frame.WritePESHeader(payload.Size)
if err != nil {
return
}
pesBuffers := util.NewMemory(pesHeadItem)
payload.Range(pesBuffers.PushOne)
pesPktLength := int64(pesBuffers.Size)
pesReader := pesBuffers.NewReader()
var tsHeaderLength int
for i := 0; pesPktLength > 0; i++ {
var buffer util.Buffer = allocator.NextN(TS_PACKET_SIZE)
bwTsHeader := &buffer
bwTsHeader.Reset()
tsHeader := MpegTsHeader{
SyncByte: 0x47,
TransportErrorIndicator: 0,
PayloadUnitStartIndicator: 0,
TransportPriority: 0,
Pid: frame.Pid,
TransportScramblingControl: 0,
AdaptionFieldControl: 1,
ContinuityCounter: frame.ContinuityCounter,
}
frame.ContinuityCounter++
frame.ContinuityCounter = frame.ContinuityCounter % 16
// At the start of each frame, include an adaptation field when a PCR is carried
if i == 0 {
tsHeader.PayloadUnitStartIndicator = 1
// When PCRFlag is 1, include the adaptation field
if frame.IsKeyFrame {
tsHeader.AdaptionFieldControl = 0x03
tsHeader.AdaptationFieldLength = 7
tsHeader.PCRFlag = 1
tsHeader.RandomAccessIndicator = 1
tsHeader.ProgramClockReferenceBase = frame.Pts
}
}
// At the end of each frame: when fewer than 188 bytes remain, include the adaptation field
if pesPktLength < TS_PACKET_SIZE-4 {
var tsStuffingLength uint8
tsHeader.AdaptionFieldControl = 0x03
tsHeader.AdaptationFieldLength = uint8(TS_PACKET_SIZE - 4 - 1 - pesPktLength)
// TODO: if the first TS packet is also the last one, does this case need special handling?
// MpegTsHeader takes at least 6 bytes (first 4 bytes + AdaptationFieldLength (1 byte) + 3 indicators and 5 flags (1 byte))
if tsHeader.AdaptationFieldLength >= 1 {
tsStuffingLength = tsHeader.AdaptationFieldLength - 1
} else {
tsStuffingLength = 0
}
tsHeaderLength, err = WriteTsHeader(bwTsHeader, tsHeader)
if err != nil {
return
}
if tsStuffingLength > 0 {
if _, err = bwTsHeader.Write(Stuffing[:tsStuffingLength]); err != nil {
return
}
}
tsHeaderLength += int(tsStuffingLength)
} else {
tsHeaderLength, err = WriteTsHeader(bwTsHeader, tsHeader)
if err != nil {
return
}
}
tsPayloadLength := TS_PACKET_SIZE - tsHeaderLength
//fmt.Println("tsPayloadLength :", tsPayloadLength)
// The PES packet shrinks here on each iteration
written, _ := io.CopyN(bwTsHeader, &pesReader, int64(tsPayloadLength))
pesPktLength -= written
tsPktByteLen := bwTsHeader.Len()
if tsPktByteLen != TS_PACKET_SIZE {
err = fmt.Errorf("TS_PACKET_SIZE != 188, packet size=%d", tsPktByteLen)
return
}
}
return nil
}
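The final-packet padding above can be checked in isolation. Below is a minimal sketch of the same arithmetic (188-byte packet, 4-byte TS header, 1-byte adaptation_field_length, one flags byte inside the field); `stuffingFor` is a hypothetical helper name, not part of the codebase:

```go
package main

import "fmt"

const tsPacketSize = 188 // every MPEG-TS packet is exactly 188 bytes

// stuffingFor mirrors the padding math in WritePESPacket: when fewer than
// 184 payload bytes remain, an adaptation field fills the gap. Its length
// byte covers one AF flags byte plus any 0xFF stuffing bytes.
func stuffingFor(remaining int) (afLength, stuffing int) {
	afLength = tsPacketSize - 4 - 1 - remaining // 4-byte TS header + 1-byte AF length field
	if afLength >= 1 {
		stuffing = afLength - 1 // one byte of the field holds the AF flags
	}
	return
}

func main() {
	fmt.Println(stuffingFor(100)) // 83-byte adaptation field, 82 stuffing bytes
	fmt.Println(stuffingFor(183)) // exactly fits once the AF length byte is counted
}
```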

pkg/format/ts/video.go Normal file

@@ -0,0 +1,20 @@
package mpegts
import (
"m7s.live/v5/pkg"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/format"
)
type VideoFrame struct {
format.AnnexB
}
func (a *VideoFrame) Mux(fromBase *pkg.Sample) (err error) {
if fromBase.GetBase().FourCC().Is(codec.FourCC_H265) {
a.PushOne(codec.AudNalu)
} else {
a.PushOne(codec.NALU_AUD_BYTE)
}
return a.AnnexB.Mux(fromBase)
}


@@ -1,236 +0,0 @@
package pkg
import (
"fmt"
"io"
"time"
"github.com/deepch/vdk/codec/aacparser"
"github.com/deepch/vdk/codec/h264parser"
"github.com/deepch/vdk/codec/h265parser"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
)
var _ IAVFrame = (*RawAudio)(nil)
type RawAudio struct {
codec.FourCC
Timestamp time.Duration
util.RecyclableMemory
}
func (r *RawAudio) Parse(track *AVTrack) (err error) {
if track.ICodecCtx == nil {
switch r.FourCC {
case codec.FourCC_MP4A:
ctx := &codec.AACCtx{}
ctx.CodecData, err = aacparser.NewCodecDataFromMPEG4AudioConfigBytes(r.ToBytes())
track.ICodecCtx = ctx
case codec.FourCC_ALAW:
track.ICodecCtx = &codec.PCMACtx{
AudioCtx: codec.AudioCtx{
SampleRate: 8000,
Channels: 1,
SampleSize: 8,
},
}
case codec.FourCC_ULAW:
track.ICodecCtx = &codec.PCMUCtx{
AudioCtx: codec.AudioCtx{
SampleRate: 8000,
Channels: 1,
SampleSize: 8,
},
}
}
}
return
}
func (r *RawAudio) ConvertCtx(ctx codec.ICodecCtx) (codec.ICodecCtx, IAVFrame, error) {
c := ctx.GetBase()
if c.FourCC().Is(codec.FourCC_MP4A) {
seq := &RawAudio{
FourCC: codec.FourCC_MP4A,
Timestamp: r.Timestamp,
}
seq.SetAllocator(r.GetAllocator())
seq.Memory.Append(c.GetRecord())
return c, seq, nil
}
return c, nil, nil
}
func (r *RawAudio) Demux(ctx codec.ICodecCtx) (any, error) {
return r.Memory, nil
}
func (r *RawAudio) Mux(ctx codec.ICodecCtx, frame *AVFrame) {
r.InitRecycleIndexes(0)
r.FourCC = ctx.FourCC()
r.Memory = frame.Raw.(util.Memory)
r.Timestamp = frame.Timestamp
}
func (r *RawAudio) GetTimestamp() time.Duration {
return r.Timestamp
}
func (r *RawAudio) GetCTS() time.Duration {
return 0
}
func (r *RawAudio) GetSize() int {
return r.Size
}
func (r *RawAudio) String() string {
return fmt.Sprintf("RawAudio{FourCC: %s, Timestamp: %s, Size: %d}", r.FourCC, r.Timestamp, r.Size)
}
func (r *RawAudio) Dump(b byte, writer io.Writer) {
//TODO implement me
panic("implement me")
}
var _ IAVFrame = (*H26xFrame)(nil)
type H26xFrame struct {
codec.FourCC
Timestamp time.Duration
CTS time.Duration
Nalus
util.RecyclableMemory
}
func (h *H26xFrame) Parse(track *AVTrack) (err error) {
var hasVideoFrame bool
switch h.FourCC {
case codec.FourCC_H264:
var ctx *codec.H264Ctx
if track.ICodecCtx != nil {
ctx = track.ICodecCtx.GetBase().(*codec.H264Ctx)
}
for _, nalu := range h.Nalus {
switch codec.ParseH264NALUType(nalu.Buffers[0][0]) {
case h264parser.NALU_SPS:
ctx = &codec.H264Ctx{}
track.ICodecCtx = ctx
ctx.RecordInfo.SPS = [][]byte{nalu.ToBytes()}
if ctx.SPSInfo, err = h264parser.ParseSPS(ctx.SPS()); err != nil {
return
}
case h264parser.NALU_PPS:
ctx.RecordInfo.PPS = [][]byte{nalu.ToBytes()}
ctx.CodecData, err = h264parser.NewCodecDataFromSPSAndPPS(ctx.SPS(), ctx.PPS())
if err != nil {
return
}
case codec.NALU_IDR_Picture:
track.Value.IDR = true
hasVideoFrame = true
case codec.NALU_Non_IDR_Picture:
hasVideoFrame = true
}
}
case codec.FourCC_H265:
var ctx *codec.H265Ctx
if track.ICodecCtx != nil {
ctx = track.ICodecCtx.GetBase().(*codec.H265Ctx)
}
for _, nalu := range h.Nalus {
switch codec.ParseH265NALUType(nalu.Buffers[0][0]) {
case h265parser.NAL_UNIT_VPS:
ctx = &codec.H265Ctx{}
ctx.RecordInfo.VPS = [][]byte{nalu.ToBytes()}
track.ICodecCtx = ctx
case h265parser.NAL_UNIT_SPS:
ctx.RecordInfo.SPS = [][]byte{nalu.ToBytes()}
if ctx.SPSInfo, err = h265parser.ParseSPS(ctx.SPS()); err != nil {
return
}
case h265parser.NAL_UNIT_PPS:
ctx.RecordInfo.PPS = [][]byte{nalu.ToBytes()}
ctx.CodecData, err = h265parser.NewCodecDataFromVPSAndSPSAndPPS(ctx.VPS(), ctx.SPS(), ctx.PPS())
case h265parser.NAL_UNIT_CODED_SLICE_BLA_W_LP,
h265parser.NAL_UNIT_CODED_SLICE_BLA_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_BLA_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_IDR_W_RADL,
h265parser.NAL_UNIT_CODED_SLICE_IDR_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_CRA:
track.Value.IDR = true
hasVideoFrame = true
case 0, 1, 2, 3, 4, 5, 6, 7, 8, 9:
hasVideoFrame = true
}
}
}
// Return ErrSkip if no video frames are present (only metadata NALUs)
if !hasVideoFrame {
return ErrSkip
}
return
}
func (h *H26xFrame) ConvertCtx(ctx codec.ICodecCtx) (codec.ICodecCtx, IAVFrame, error) {
switch c := ctx.GetBase().(type) {
case *codec.H264Ctx:
return c, &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory(c.SPS()),
util.NewMemory(c.PPS()),
},
}, nil
case *codec.H265Ctx:
return c, &H26xFrame{
FourCC: codec.FourCC_H265,
Nalus: []util.Memory{
util.NewMemory(c.VPS()),
util.NewMemory(c.SPS()),
util.NewMemory(c.PPS()),
},
}, nil
}
return ctx.GetBase(), nil, nil
}
func (h *H26xFrame) Demux(ctx codec.ICodecCtx) (any, error) {
return h.Nalus, nil
}
func (h *H26xFrame) Mux(ctx codec.ICodecCtx, frame *AVFrame) {
h.FourCC = ctx.FourCC()
h.Nalus = frame.Raw.(Nalus)
h.Timestamp = frame.Timestamp
h.CTS = frame.CTS
}
func (h *H26xFrame) GetTimestamp() time.Duration {
return h.Timestamp
}
func (h *H26xFrame) GetCTS() time.Duration {
return h.CTS
}
func (h *H26xFrame) GetSize() int {
var size int
for _, nalu := range h.Nalus {
size += nalu.Size
}
return size
}
func (h *H26xFrame) String() string {
return fmt.Sprintf("H26xFrame{FourCC: %s, Timestamp: %s, CTS: %s}", h.FourCC, h.Timestamp, h.CTS)
}
func (h *H26xFrame) Dump(b byte, writer io.Writer) {
//TODO implement me
panic("implement me")
}


@@ -1,157 +0,0 @@
package pkg
import (
"testing"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
)
func TestH26xFrame_Parse_VideoFrameDetection(t *testing.T) {
// Test H264 IDR Picture (should not skip)
t.Run("H264_IDR_Picture", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x65}), // IDR Picture NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H264 IDR frame to not be skipped, but got ErrSkip")
}
if !track.Value.IDR {
t.Error("Expected IDR flag to be set for H264 IDR frame")
}
})
// Test H264 Non-IDR Picture (should not skip)
t.Run("H264_Non_IDR_Picture", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x21}), // Non-IDR Picture NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H264 Non-IDR frame to not be skipped, but got ErrSkip")
}
})
// Test H264 metadata only (should skip)
t.Run("H264_SPS_Only", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x67}), // SPS NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err != ErrSkip {
t.Errorf("Expected H264 SPS-only frame to be skipped, but got: %v", err)
}
})
// Test H264 PPS only (should skip)
t.Run("H264_PPS_Only", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x68}), // PPS NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err != ErrSkip {
t.Errorf("Expected H264 PPS-only frame to be skipped, but got: %v", err)
}
})
// Test H265 IDR slice (should not skip)
t.Run("H265_IDR_Slice", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H265,
Nalus: []util.Memory{
util.NewMemory([]byte{0x4E, 0x01}), // placeholder; replaced below with the correct IDR_W_RADL byte (19 << 1 = 0x26)
},
}
track := &AVTrack{}
// Let's use the correct byte pattern for H265 IDR slice
// NAL_UNIT_CODED_SLICE_IDR_W_RADL = 19
// H265 header: (type << 1) | layer_id_bit
idrSliceByte := byte(19 << 1) // 19 * 2 = 38 = 0x26
frame.Nalus[0] = util.NewMemory([]byte{idrSliceByte})
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H265 IDR slice to not be skipped, but got ErrSkip")
}
if !track.Value.IDR {
t.Error("Expected IDR flag to be set for H265 IDR slice")
}
})
// Test H265 metadata only (should skip)
t.Run("H265_VPS_Only", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H265,
Nalus: []util.Memory{
util.NewMemory([]byte{0x40, 0x01}), // VPS NALU type (32 << 1 = 64 = 0x40)
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err != ErrSkip {
t.Errorf("Expected H265 VPS-only frame to be skipped, but got: %v", err)
}
})
// Test mixed H264 frame with SPS and IDR (should not skip)
t.Run("H264_Mixed_SPS_And_IDR", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x67}), // SPS NALU type
util.NewMemory([]byte{0x65}), // IDR Picture NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H264 mixed SPS+IDR frame to not be skipped, but got ErrSkip")
}
if !track.Value.IDR {
t.Error("Expected IDR flag to be set for H264 mixed frame with IDR")
}
})
// Test mixed H265 frame with VPS and IDR (should not skip)
t.Run("H265_Mixed_VPS_And_IDR", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H265,
Nalus: []util.Memory{
util.NewMemory([]byte{0x40, 0x01}), // VPS NALU type (32 << 1)
util.NewMemory([]byte{0x4C, 0x01}), // placeholder; corrected below to the IDR_W_RADL byte (19 << 1 = 0x26)
},
}
track := &AVTrack{}
// Fix the IDR slice byte for H265
idrSliceByte := byte(19 << 1) // NAL_UNIT_CODED_SLICE_IDR_W_RADL = 19
frame.Nalus[1] = util.NewMemory([]byte{idrSliceByte, 0x01})
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H265 mixed VPS+IDR frame to not be skipped, but got ErrSkip")
}
if !track.Value.IDR {
t.Error("Expected IDR flag to be set for H265 mixed frame with IDR")
}
})
}


@@ -3,6 +3,7 @@ package pkg
import (
"log/slog"
"sync"
"sync/atomic"
"time"
"m7s.live/v5/pkg/task"
@@ -21,6 +22,7 @@ type RingWriter struct {
Size int
LastValue *AVFrame
SLogger *slog.Logger
status atomic.Int32 // 0: init, 1: writing, 2: disposed
}
func NewRingWriter(sizeRange util.Range[int]) (rb *RingWriter) {
@@ -90,7 +92,9 @@ func (rb *RingWriter) reduce(size int) {
func (rb *RingWriter) Dispose() {
rb.SLogger.Debug("dispose")
rb.Value.Ready()
if rb.status.Add(-1) == -1 { // normal dispose
rb.Value.Unlock()
}
}
func (rb *RingWriter) GetIDR() *util.Ring[AVFrame] {
@@ -185,6 +189,53 @@ func (rb *RingWriter) Step() (normal bool) {
rb.LastValue = &rb.Value
nextSeq := rb.LastValue.Sequence + 1
/*
sequenceDiagram
autonumber
participant Caller as Caller
participant RW as RingWriter
participant Val as AVFrame.Value
Note over RW: status initial = 0 (idle)
Caller->>RW: Step()
activate RW
RW->>RW: status.Add(1) (0→1)
alt entered writing (result == 1)
Note over RW: writing
RW->>Val: StartWrite()
RW->>Val: Reset()
opt Dispose during write
Caller->>RW: Dispose()
RW->>RW: status.Add(-1) (1→0)
end
RW->>RW: status.Add(-1) at end of Step
alt returns 0 (write completed)
RW->>Val: Ready()
else returns -1 (disposed during write)
RW->>Val: Unlock()
end
else not entered
Note over RW: Step aborted (already disposed/busy)
end
deactivate RW
Caller->>RW: Dispose()
activate RW
RW->>RW: status.Add(-1)
alt returns -1 (idle dispose)
RW->>Val: Unlock()
else returns 0 (dispose during write)
Note over RW: Unlock will occur at Step end (no Ready)
end
deactivate RW
Note over RW: States: -1 (disposed), 0 (idle), 1 (writing)
*/
if rb.status.Add(1) == 1 {
if normal = next.Value.StartWrite(); normal {
next.Value.Reset()
rb.Ring = next
@@ -197,6 +248,11 @@ func (rb *RingWriter) Step() (normal bool) {
}
}
rb.Value.Sequence = nextSeq
if rb.status.Add(-1) == 0 {
rb.LastValue.Ready()
} else {
rb.Value.Unlock()
}
}
return
}


@@ -5,6 +5,8 @@ import (
"log/slog"
"testing"
"time"
"m7s.live/v5/pkg/util"
)
func TestRing(t *testing.T) {
@@ -13,7 +15,7 @@ func TestRing(t *testing.T) {
ctx, _ := context.WithTimeout(context.Background(), time.Second*5)
go t.Run("writer", func(t *testing.T) {
for i := 0; ctx.Err() == nil; i++ {
w.Value.Raw = i
w.Value.Raw = &util.Memory{}
normal := w.Step()
t.Log("write", i, normal)
time.Sleep(time.Millisecond * 50)
@@ -76,7 +78,7 @@ func BenchmarkRing(b *testing.B) {
ctx, _ := context.WithTimeout(context.Background(), time.Second*5)
go func() {
for i := 0; ctx.Err() == nil; i++ {
w.Value.Raw = i
w.Value.Raw = &util.Memory{}
w.Step()
time.Sleep(time.Millisecond * 50)
}

pkg/steps.go Normal file

@@ -0,0 +1,21 @@
package pkg
// StepName is a typed alias for all workflow step identifiers.
type StepName string
// StepDef defines a step with typed name and description.
type StepDef struct {
Name StepName
Description string
}
// Standard, cross-plugin step name constants for pull/publish workflows.
// Plugin-specific step names should be defined in their respective plugin packages.
const (
StepPublish StepName = "publish"
StepURLParsing StepName = "url_parsing"
StepConnection StepName = "connection"
StepHandshake StepName = "handshake"
StepParsing StepName = "parsing"
StepStreaming StepName = "streaming"
)

pkg/task/README.md Normal file

@@ -0,0 +1,59 @@
# Task System Overview
# Starting a Task
A task is started by calling the parent task's AddTask; it enters a queue awaiting startup, the parent's EventLoop receives the child task, and then calls the child's Start method to start it.
## EventLoop Initialization
To save resources, the EventLoop does not create a goroutine while it has no child tasks; it waits until a child task arrives, and if that child is itself an empty Job (no Start, Run, or Go), it still does not create a goroutine.
## Stopping the EventLoop
To save resources, the goroutine should exit when the EventLoop has no pending child tasks. The EventLoop exits in the following cases:
1. There are no pending tasks and no active children, and the parent task's keepalive() returns false
2. The EventLoop's status has been set to the stopped state (-1)
# Stopping a Task
## Stopping a task explicitly
Calling a task's Stop method stops it; the parent's eventLoop detects the context cancellation signal and then runs the task's dispose to destroy it.
## Unexpected task exit
When a task's Run returns an error, or its context is cancelled, the task exits; the remaining flow is the same as an explicit stop.
## Parent task stop
When a parent task stops and is destroyed, its children are handled in the following steps:
### Steps
1. **Set the EventLoop status to stopped**: call `stop()` to set status = -1, preventing further children from being added
2. **Activate the EventLoop to process remaining tasks**: call `active()`, which can still process remaining children when the status is -1
3. **Stop all children**: call every child task's Stop method
4. **Wait for child disposal**: wait for the EventLoop to finish disposing all children
### Design notes
- The EventLoop's `active()` method may be called while the status is -1, ensuring remaining children are processed correctly
- A mutex protects state transitions to avoid race conditions
- Stop first, then process the remainder, so no new children can be added
## Race-Condition Handling
To keep the task system thread-safe, the following measures are taken:
### State management
- A `sync.RWMutex` protects EventLoop state transitions
- `add()` checks the state under a read lock, preventing new tasks after the stop
- `stop()` sets the state under a write lock, ensuring atomicity
### EventLoop lifecycle
- The EventLoop starts a new goroutine only when its status transitions from 0 (ready) to 1 (running)
- Even when the status is -1 (stopped), `active()` can still be called to process remaining tasks
- A `hasPending` flag guarded by a mutex tracks pending tasks, avoiding frequent checks of the channel length
### Adding tasks
- Adding a task checks the EventLoop state and returns `ErrDisposed` if it has already stopped
- `pendingMux` protects the `hasPending` flag to avoid race conditions


@@ -1,34 +0,0 @@
package task
type CallBackTask struct {
Task
startHandler func() error
disposeHandler func()
}
func (t *CallBackTask) GetTaskType() TaskType {
return TASK_TYPE_CALL
}
func (t *CallBackTask) Start() error {
return t.startHandler()
}
func (t *CallBackTask) Dispose() {
if t.disposeHandler != nil {
t.disposeHandler()
}
}
func CreateTaskByCallBack(start func() error, dispose func()) *CallBackTask {
var task CallBackTask
task.startHandler = func() error {
err := start()
if err == nil && dispose == nil {
err = ErrTaskComplete
}
return err
}
task.disposeHandler = dispose
return &task
}


@@ -42,6 +42,9 @@ func (t *TickTask) GetTickInterval() time.Duration {
func (t *TickTask) Start() (err error) {
t.Ticker = time.NewTicker(t.handler.(ITickTask).GetTickInterval())
t.SignalChan = t.Ticker.C
t.OnStop(func() {
t.Ticker.Reset(time.Millisecond)
})
return
}

pkg/task/event_loop.go Normal file

@@ -0,0 +1,167 @@
package task
import (
"errors"
"reflect"
"runtime/debug"
"slices"
"sync"
"sync/atomic"
)
type Singleton[T comparable] struct {
instance atomic.Value
mux sync.Mutex
}
func (s *Singleton[T]) Load() T {
return s.instance.Load().(T)
}
func (s *Singleton[T]) Get(newF func() T) T {
ch := s.instance.Load() //fast
if ch == nil { // slow
s.mux.Lock()
defer s.mux.Unlock()
if ch = s.instance.Load(); ch == nil {
ch = newF()
s.instance.Store(ch)
}
}
return ch.(T)
}
type EventLoop struct {
cases []reflect.SelectCase
children []ITask
addSub Singleton[chan any]
running atomic.Bool
}
func (e *EventLoop) getInput() chan any {
return e.addSub.Get(func() chan any {
return make(chan any, 20)
})
}
func (e *EventLoop) active(mt *Job) {
if mt.parent != nil {
mt.parent.eventLoop.active(mt.parent)
}
if e.running.CompareAndSwap(false, true) {
go e.run(mt)
}
}
func (e *EventLoop) add(mt *Job, sub any) (err error) {
shouldActive := true
switch sub.(type) {
case TaskStarter, TaskBlock, TaskGo:
case IJob:
shouldActive = false
}
select {
case e.getInput() <- sub:
if shouldActive || mt.IsStopped() {
e.active(mt)
}
return nil
default:
return ErrTooManyChildren
}
}
func (e *EventLoop) run(mt *Job) {
mt.Debug("event loop start", "jobId", mt.GetTaskID(), "type", mt.GetOwnerType())
ch := e.getInput()
e.cases = []reflect.SelectCase{{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(ch)}}
defer func() {
err := recover()
if err != nil {
mt.Error("job panic", "err", err, "stack", string(debug.Stack()))
if !ThrowPanic {
mt.Stop(errors.Join(err.(error), ErrPanic))
} else {
panic(err)
}
}
mt.Debug("event loop exit", "jobId", mt.GetTaskID(), "type", mt.GetOwnerType())
if !mt.handler.keepalive() {
if mt.blocked != nil {
mt.Stop(errors.Join(mt.blocked.StopReason(), ErrAutoStop))
} else {
mt.Stop(ErrAutoStop)
}
}
mt.blocked = nil
}()
// Main event loop - only exit when no more events AND no children
for {
if len(ch) == 0 && len(e.children) == 0 {
if e.running.CompareAndSwap(true, false) {
if len(ch) > 0 { // if add before running set to false
e.active(mt)
}
return
}
}
mt.blocked = nil
if chosen, rev, ok := reflect.Select(e.cases); chosen == 0 {
if !ok {
mt.Debug("job addSub channel closed, exiting", "taskId", mt.GetTaskID())
mt.Stop(ErrAutoStop)
return
}
switch v := rev.Interface().(type) {
case func():
v()
case ITask:
if len(e.cases) >= 65535 {
mt.Warn("task children too many, may cause performance issue", "count", len(e.cases), "taskId", mt.GetTaskID(), "taskType", mt.GetTaskType(), "ownerType", mt.GetOwnerType())
v.Stop(ErrTooManyChildren)
continue
}
if mt.blocked = v; v.start() {
e.cases = append(e.cases, reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(v.GetSignal())})
e.children = append(e.children, v)
mt.onChildStart(v)
} else {
mt.removeChild(v)
}
}
} else {
taskIndex := chosen - 1
child := e.children[taskIndex]
mt.blocked = child
switch tt := mt.blocked.(type) {
case IChannelTask:
if tt.IsStopped() {
switch ttt := tt.(type) {
case ITickTask:
ttt.GetTicker().Stop()
}
mt.onChildDispose(child)
mt.removeChild(child)
e.children = slices.Delete(e.children, taskIndex, taskIndex+1)
e.cases = slices.Delete(e.cases, chosen, chosen+1)
} else {
tt.Tick(rev.Interface())
}
default:
if !ok {
if mt.onChildDispose(child); child.checkRetry(child.StopReason()) {
if child.reset(); child.start() {
e.cases[chosen].Chan = reflect.ValueOf(child.GetSignal())
mt.onChildStart(child)
continue
}
}
mt.removeChild(child)
e.children = slices.Delete(e.children, taskIndex, taskIndex+1)
e.cases = slices.Delete(e.cases, chosen, chosen+1)
}
}
}
}
}


@@ -2,13 +2,9 @@ package task
import (
"context"
"errors"
"fmt"
"log/slog"
"reflect"
"runtime"
"runtime/debug"
"slices"
"strings"
"sync"
"sync/atomic"
@@ -32,15 +28,12 @@ func GetNextTaskID() uint32 {
// Job include tasks
type Job struct {
Task
cases []reflect.SelectCase
addSub chan ITask
children []ITask
lazyRun sync.Once
eventLoopLock sync.Mutex
childrenDisposed chan struct{}
children sync.Map
descendantsDisposeListeners []func(ITask)
descendantsStartListeners []func(ITask)
blocked ITask
eventLoop EventLoop
Size atomic.Int32
}
func (*Job) GetTaskType() TaskType {
@@ -55,19 +48,18 @@ func (mt *Job) Blocked() ITask {
return mt.blocked
}
func (mt *Job) waitChildrenDispose() {
blocked := mt.blocked
defer func() {
// Ignore: a race is possible during task shutdown, so child tasks may already have been released when the parent closes.
if err := recover(); err != nil {
mt.Debug("waitChildrenDispose panic", "err", err)
}
mt.addSub <- nil
<-mt.childrenDisposed
}()
if blocked != nil {
blocked.Stop(mt.StopReason())
func (mt *Job) EventLoopRunning() bool {
return mt.eventLoop.running.Load()
}
func (mt *Job) waitChildrenDispose(stopReason error) {
mt.eventLoop.active(mt)
mt.children.Range(func(key, value any) bool {
child := value.(ITask)
child.Stop(stopReason)
child.WaitStopped()
return true
})
}
func (mt *Job) OnDescendantsDispose(listener func(ITask)) {
@@ -84,12 +76,21 @@ func (mt *Job) onDescendantsDispose(descendants ITask) {
}
func (mt *Job) onChildDispose(child ITask) {
if child.GetTaskType() != TASK_TYPE_CALL || child.GetOwnerType() != "CallBack" {
mt.onDescendantsDispose(child)
}
child.dispose()
}
func (mt *Job) removeChild(child ITask) {
value, loaded := mt.children.LoadAndDelete(child.getKey())
if loaded {
if value != child {
panic("remove child")
}
remains := mt.Size.Add(-1)
mt.Debug("remove child", "id", child.GetTaskID(), "remains", remains)
}
}
func (mt *Job) OnDescendantsStart(listener func(ITask)) {
mt.descendantsStartListeners = append(mt.descendantsStartListeners, listener)
}
@@ -104,24 +105,24 @@ func (mt *Job) onDescendantsStart(descendants ITask) {
}
func (mt *Job) onChildStart(child ITask) {
if child.GetTaskType() != TASK_TYPE_CALL || child.GetOwnerType() != "CallBack" {
mt.onDescendantsStart(child)
}
}
func (mt *Job) RangeSubTask(callback func(task ITask) bool) {
for _, task := range mt.children {
callback(task)
}
mt.children.Range(func(key, value any) bool {
callback(value.(ITask))
return true
})
}
func (mt *Job) AddDependTask(t ITask, opt ...any) (task *Task) {
mt.Depend(t)
t.Using(mt)
opt = append(opt, 1)
return mt.AddTask(t, opt...)
}
func (mt *Job) AddTask(t ITask, opt ...any) (task *Task) {
if task = t.GetTask(); t != task.handler { // first add
func (mt *Job) initContext(task *Task, opt ...any) {
callDepth := 2
for _, o := range opt {
switch v := o.(type) {
case context.Context:
@@ -132,36 +133,15 @@ func (mt *Job) AddTask(t ITask, opt ...any) (task *Task) {
task.retry = v
case *slog.Logger:
task.Logger = v
case int:
callDepth += v
}
}
task.parent = mt
task.handler = t
switch t.(type) {
case TaskStarter, TaskBlock, TaskGo:
// need start now
case IJob:
// lazy start
return
}
}
_, file, line, ok := runtime.Caller(1)
_, file, line, ok := runtime.Caller(callDepth)
if ok {
task.StartReason = fmt.Sprintf("%s:%d", strings.TrimPrefix(file, sourceFilePathPrefix), line)
}
mt.lazyRun.Do(func() {
if mt.eventLoopLock.TryLock() {
defer mt.eventLoopLock.Unlock()
if mt.parent != nil && mt.Context == nil {
mt.parent.AddTask(mt.handler) // second add, lazy start
}
mt.childrenDisposed = make(chan struct{})
mt.addSub = make(chan ITask, 20)
go mt.run()
}
})
if task.Context == nil {
task.parent = mt
if task.parentCtx == nil {
task.parentCtx = mt.Context
}
@@ -172,98 +152,51 @@ func (mt *Job) AddTask(t ITask, opt ...any) (task *Task) {
task.Context, task.CancelCauseFunc = context.WithCancelCause(task.parentCtx)
task.startup = util.NewPromise(task.Context)
task.shutdown = util.NewPromise(context.Background())
task.handler = t
if task.Logger == nil {
task.Logger = mt.Logger
}
}
func (mt *Job) AddTask(t ITask, opt ...any) (task *Task) {
task = t.GetTask()
task.handler = t
mt.initContext(task, opt...)
if mt.IsStopped() {
task.startup.Reject(mt.StopReason())
return
}
if len(mt.addSub) > 10 {
mt.Warn("task wait list too many", "count", len(mt.addSub), "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType(), "parent", mt.GetOwnerType())
}
mt.addSub <- t
actual, loaded := mt.children.LoadOrStore(t.getKey(), t)
if loaded {
task.startup.Reject(ExistTaskError{
Task: actual.(ITask),
})
return
}
func (mt *Job) Call(callback func() error, args ...any) {
mt.Post(callback, args...).WaitStarted()
}
func (mt *Job) Post(callback func() error, args ...any) *Task {
task := CreateTaskByCallBack(callback, nil)
if len(args) > 0 {
task.SetDescription(OwnerTypeKey, args[0])
}
return mt.AddTask(task)
}
func (mt *Job) run() {
mt.cases = []reflect.SelectCase{{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(mt.addSub)}}
var err error
defer func() {
err := recover()
if err != nil {
mt.Error("job panic", "err", err, "stack", string(debug.Stack()))
if !ThrowPanic {
mt.Stop(errors.Join(err.(error), ErrPanic))
} else {
panic(err)
mt.children.Delete(t.getKey())
task.startup.Reject(err)
}
}
stopReason := mt.StopReason()
for _, task := range mt.children {
task.Stop(stopReason)
mt.onChildDispose(task)
}
mt.children = nil
close(mt.childrenDisposed)
}()
for {
mt.blocked = nil
if chosen, rev, ok := reflect.Select(mt.cases); chosen == 0 {
if rev.IsNil() {
mt.Debug("job addSub channel closed, exiting", "taskId", mt.GetTaskID())
if err = mt.eventLoop.add(mt, t); err != nil {
return
}
if mt.blocked = rev.Interface().(ITask); mt.blocked.start() {
mt.children = append(mt.children, mt.blocked)
mt.cases = append(mt.cases, reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(mt.blocked.GetSignal())})
mt.onChildStart(mt.blocked)
if mt.IsStopped() {
err = mt.StopReason()
return
}
} else {
taskIndex := chosen - 1
mt.blocked = mt.children[taskIndex]
switch tt := mt.blocked.(type) {
case IChannelTask:
if tt.IsStopped() {
switch ttt := tt.(type) {
case ITickTask:
ttt.GetTicker().Stop()
}
mt.onChildDispose(mt.blocked)
mt.children = slices.Delete(mt.children, taskIndex, taskIndex+1)
mt.cases = slices.Delete(mt.cases, chosen, chosen+1)
} else {
tt.Tick(rev.Interface())
}
default:
if !ok {
if mt.onChildDispose(mt.blocked); mt.blocked.checkRetry(mt.blocked.StopReason()) {
if mt.blocked.reset(); mt.blocked.start() {
mt.cases[chosen].Chan = reflect.ValueOf(mt.blocked.GetSignal())
mt.onChildStart(mt.blocked)
continue
}
}
mt.children = slices.Delete(mt.children, taskIndex, taskIndex+1)
mt.cases = slices.Delete(mt.cases, chosen, chosen+1)
}
}
}
if !mt.handler.keepalive() && len(mt.children) == 0 {
mt.Stop(ErrAutoStop)
remains := mt.Size.Add(1)
mt.Debug("child added", "id", task.ID, "remains", remains)
return
}
func (mt *Job) Call(callback func()) {
if mt.Size.Load() <= 0 {
callback()
return
}
ctx, cancel := context.WithCancel(mt)
_ = mt.eventLoop.add(mt, func() { callback(); cancel() })
<-ctx.Done()
}


@@ -2,12 +2,21 @@ package task
import (
"errors"
"fmt"
. "m7s.live/v5/pkg/util"
)
var ErrExist = errors.New("exist")
type ExistTaskError struct {
Task ITask
}
func (e ExistTaskError) Error() string {
return fmt.Sprintf("%v exist", e.Task.getKey())
}
type ManagerItem[K comparable] interface {
ITask
GetKey() K
@@ -30,15 +39,25 @@ func (m *Manager[K, T]) Add(ctx T, opt ...any) *Task {
m.Remove(ctx)
m.Debug("remove", "key", ctx.GetKey(), "count", m.Length)
})
opt = append(opt, 1)
return m.AddTask(ctx, opt...)
}
func (m *Manager[K, T]) SafeHas(key K) (ok bool) {
if m.L == nil {
m.Call(func() {
ok = m.Collection.Has(key)
})
return ok
}
return m.Collection.Has(key)
}
// SafeGet fetches an element from other goroutines, guarding against concurrent access
func (m *Manager[K, T]) SafeGet(key K) (item T, ok bool) {
if m.L == nil {
m.Call(func() error {
m.Call(func() {
item, ok = m.Collection.Get(key)
return nil
})
} else {
item, ok = m.Collection.Get(key)
@@ -49,9 +68,8 @@ func (m *Manager[K, T]) SafeGet(key K) (item T, ok bool) {
// SafeRange iterates over elements from other goroutines, guarding against concurrent access
func (m *Manager[K, T]) SafeRange(f func(T) bool) {
if m.L == nil {
m.Call(func() error {
m.Call(func() {
m.Collection.Range(f)
return nil
})
} else {
m.Collection.Range(f)
@@ -61,9 +79,8 @@ func (m *Manager[K, T]) SafeRange(f func(T) bool) {
// SafeFind looks up an element from other goroutines, guarding against concurrent access
func (m *Manager[K, T]) SafeFind(f func(T) bool) (item T, ok bool) {
if m.L == nil {
m.Call(func() error {
m.Call(func() {
item, ok = m.Collection.Find(f)
return nil
})
} else {
item, ok = m.Collection.Find(f)


@@ -22,15 +22,20 @@ func (o *OSSignal) Start() error {
signalChan := make(chan os.Signal, 1)
signal.Notify(signalChan, syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)
o.SignalChan = signalChan
o.OnStop(func() {
signal.Stop(signalChan)
close(signalChan)
})
return nil
}
func (o *OSSignal) Tick(any) {
println("OSSignal Tick")
go o.root.Shutdown()
}
type RootManager[K comparable, T ManagerItem[K]] struct {
Manager[K, T]
WorkCollection[K, T]
}
func (m *RootManager[K, T]) Init() {


@@ -4,6 +4,7 @@ import (
"context"
"errors"
"fmt"
"io"
"log/slog"
"maps"
"reflect"
@@ -26,8 +27,11 @@ var (
ErrStopByUser = errors.New("stop by user")
ErrRestart = errors.New("restart")
ErrTaskComplete = errors.New("complete")
ErrTimeout = errors.New("timeout")
ErrExit = errors.New("exit")
ErrPanic = errors.New("panic")
ErrTooManyChildren = errors.New("too many children in job")
ErrDisposed = errors.New("disposed")
)
const (
@@ -45,7 +49,6 @@ const (
TASK_TYPE_JOB
TASK_TYPE_Work
TASK_TYPE_CHANNEL
TASK_TYPE_CALL
)
type (
@@ -71,14 +74,15 @@ type (
SetDescription(key string, value any)
SetDescriptions(value Description)
SetRetry(maxRetry int, retryInterval time.Duration)
Depend(ITask)
Using(resource ...any)
OnStop(any)
OnStart(func())
OnBeforeDispose(func())
OnDispose(func())
GetState() TaskState
GetLevel() byte
WaitStopped() error
WaitStarted() error
getKey() any
}
IJob interface {
ITask
@@ -88,8 +92,8 @@ type (
OnDescendantsDispose(func(ITask))
OnDescendantsStart(func(ITask))
Blocked() ITask
Call(func() error, ...any)
Post(func() error, ...any) *Task
EventLoopRunning() bool
Call(func())
}
IChannelTask interface {
ITask
@@ -123,7 +127,10 @@ type (
context.CancelCauseFunc
handler ITask
retry RetryConfig
afterStartListeners, beforeDisposeListeners, afterDisposeListeners []func()
afterStartListeners, afterDisposeListeners []func()
closeOnStop []any
resources []any
stopOnce sync.Once
description sync.Map
startup, shutdown *util.Promise
parent *Job
@@ -183,12 +190,19 @@ func (task *Task) GetKey() uint32 {
return task.ID
}
func (task *Task) getKey() any {
return reflect.ValueOf(task.handler).MethodByName("GetKey").Call(nil)[0].Interface()
}
func (task *Task) WaitStarted() error {
if task.startup == nil {
return nil
}
return task.startup.Await()
}
func (task *Task) WaitStopped() (err error) {
err = task.startup.Await()
err = task.WaitStarted()
if err != nil {
return err
}
@@ -229,33 +243,50 @@ func (task *Task) Stop(err error) {
task.Error("task stop with nil error", "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType(), "parent", task.GetParent().GetOwnerType())
panic("task stop with nil error")
}
if task.CancelCauseFunc != nil {
if tt := task.handler.GetTaskType(); tt != TASK_TYPE_CALL {
_, file, line, _ := runtime.Caller(1)
task.Debug("task stop", "caller", fmt.Sprintf("%s:%d", strings.TrimPrefix(file, sourceFilePathPrefix), line), "reason", err, "elapsed", time.Since(task.StartTime), "taskId", task.ID, "taskType", tt, "ownerType", task.GetOwnerType())
task.stopOnce.Do(func() {
if task.CancelCauseFunc != nil {
msg := "task stop"
if task.startup.IsRejected() {
msg = "task start failed"
}
task.Debug(msg, "caller", fmt.Sprintf("%s:%d", strings.TrimPrefix(file, sourceFilePathPrefix), line), "reason", err, "elapsed", time.Since(task.StartTime), "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType())
task.CancelCauseFunc(err)
}
task.stop()
})
}
func (task *Task) Depend(t ITask) {
t.OnDispose(func() {
task.Stop(t.StopReason())
})
func (task *Task) stop() {
for _, resource := range task.closeOnStop {
switch v := resource.(type) {
case func():
v()
case func() error:
v()
case ITask:
v.Stop(task.StopReason())
}
}
task.closeOnStop = task.closeOnStop[:0]
}
func (task *Task) OnStart(listener func()) {
task.afterStartListeners = append(task.afterStartListeners, listener)
}
func (task *Task) OnBeforeDispose(listener func()) {
task.beforeDisposeListeners = append(task.beforeDisposeListeners, listener)
}
func (task *Task) OnDispose(listener func()) {
task.afterDisposeListeners = append(task.afterDisposeListeners, listener)
}
func (task *Task) Using(resource ...any) {
task.resources = append(task.resources, resource...)
}
func (task *Task) OnStop(resource any) {
task.closeOnStop = append(task.closeOnStop, resource)
}
func (task *Task) GetSignal() any {
return task.Done()
}
@@ -300,9 +331,7 @@ func (task *Task) start() bool {
}
for {
task.StartTime = time.Now()
if tt := task.handler.GetTaskType(); tt != TASK_TYPE_CALL {
task.Debug("task start", "taskId", task.ID, "taskType", tt, "ownerType", task.GetOwnerType(), "reason", task.StartReason)
}
task.Debug("task start", "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType(), "reason", task.StartReason)
task.state = TASK_STATE_STARTING
if v, ok := task.handler.(TaskStarter); ok {
err = v.Start()
@@ -350,6 +379,7 @@ func (task *Task) start() bool {
}
func (task *Task) reset() {
task.stopOnce = sync.Once{}
task.Context, task.CancelCauseFunc = context.WithCancelCause(task.parentCtx)
task.shutdown = util.NewPromise(context.Background())
task.startup = util.NewPromise(task.Context)
@@ -363,6 +393,10 @@ func (task *Task) GetDescriptions() map[string]string {
})
}
func (task *Task) GetDescription(key string) (any, bool) {
return task.description.Load(key)
}
func (task *Task) SetDescription(key string, value any) {
task.description.Store(key, value)
}
@@ -380,41 +414,41 @@ func (task *Task) SetDescriptions(value Description) {
func (task *Task) dispose() {
taskType, ownerType := task.handler.GetTaskType(), task.GetOwnerType()
if task.state < TASK_STATE_STARTED {
if taskType != TASK_TYPE_CALL {
task.Debug("task dispose canceled", "taskId", task.ID, "taskType", taskType, "ownerType", ownerType, "state", task.state)
}
return
}
reason := task.StopReason()
task.state = TASK_STATE_DISPOSING
if taskType != TASK_TYPE_CALL {
yargs := []any{"reason", reason, "taskId", task.ID, "taskType", taskType, "ownerType", ownerType}
task.Debug("task dispose", yargs...)
defer task.Debug("task disposed", yargs...)
}
befores := len(task.beforeDisposeListeners)
for i, listener := range task.beforeDisposeListeners {
task.SetDescription("disposeProcess", fmt.Sprintf("b:%d/%d", i, befores))
listener()
}
if job, ok := task.handler.(IJob); ok {
mt := job.getJob()
task.SetDescription("disposeProcess", "wait children")
mt.eventLoopLock.Lock()
if mt.addSub != nil {
mt.waitChildrenDispose()
mt.lazyRun = sync.Once{}
}
mt.eventLoopLock.Unlock()
mt.waitChildrenDispose(reason)
}
task.SetDescription("disposeProcess", "self")
if v, ok := task.handler.(TaskDisposal); ok {
v.Dispose()
}
task.shutdown.Fulfill(reason)
afters := len(task.afterDisposeListeners)
task.SetDescription("disposeProcess", "resources")
task.stopOnce.Do(task.stop)
for _, resource := range task.resources {
switch v := resource.(type) {
case func():
v()
case ITask:
v.Stop(task.StopReason())
case util.Recyclable:
v.Recycle()
case io.Closer:
v.Close()
}
}
task.resources = task.resources[:0]
for i, listener := range task.afterDisposeListeners {
task.SetDescription("disposeProcess", fmt.Sprintf("a:%d/%d", i, afters))
task.SetDescription("disposeProcess", fmt.Sprintf("a:%d/%d", i, len(task.afterDisposeListeners)))
listener()
}
task.SetDescription("disposeProcess", "done")
@@ -482,3 +516,25 @@ func (task *Task) Error(msg string, args ...any) {
func (task *Task) TraceEnabled() bool {
return task.Logger.Enabled(task.Context, TraceLevel)
}
func (task *Task) RunTask(t ITask, opt ...any) (err error) {
tt := t.GetTask()
tt.handler = t
mt := task.parent
if job, ok := task.handler.(IJob); ok {
mt = job.getJob()
}
mt.initContext(tt, opt...)
if mt.IsStopped() {
err = mt.StopReason()
task.startup.Reject(err)
return
}
task.OnStop(t)
started := tt.start()
<-tt.Done()
if started {
tt.dispose()
}
return tt.StopReason()
}
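The `stop` and `dispose` paths above release registered resources by switching on each resource's dynamic type. A minimal standalone sketch of that dispatch pattern (the names here are illustrative, not the framework's API):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// releaseAll mirrors the type switch used by Task.stop and Task.dispose:
// plain funcs are invoked, closers are closed, unknown types are skipped.
func releaseAll(resources []any) (released int) {
	for _, r := range resources {
		switch v := r.(type) {
		case func():
			v()
			released++
		case func() error:
			_ = v() // the error is intentionally dropped, as in the diff
			released++
		case io.Closer:
			_ = v.Close()
			released++
		}
	}
	return
}

func main() {
	reader := io.NopCloser(strings.NewReader("x"))
	n := releaseAll([]any{
		func() { fmt.Println("func() ran") },
		func() error { return nil },
		reader,
		42, // unknown types are skipped
	})
	fmt.Println("released:", n) // released: 3
}
```

This keeps registration cheap (`OnStop`/`Using` just append to a slice) and defers the type decision to teardown time.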

View File

@@ -24,9 +24,12 @@ func Test_AddTask_AddsTaskSuccessfully(t *testing.T) {
var task Task
root.AddTask(&task)
_ = task.WaitStarted()
if len(root.children) != 1 {
t.Errorf("expected 1 child task, got %d", len(root.children))
root.RangeSubTask(func(t ITask) bool {
if t.GetTaskID() == task.GetTaskID() {
return false
}
return true
})
}
type retryDemoTask struct {
@@ -51,9 +54,9 @@ func Test_RetryTask(t *testing.T) {
func Test_Call_ExecutesCallback(t *testing.T) {
called := false
root.Call(func() error {
root.Call(func() {
called = true
return nil
return
})
if !called {
t.Errorf("expected callback to be called")
@@ -162,6 +165,24 @@ func Test_StartFail(t *testing.T) {
}
}
func Test_Block(t *testing.T) {
var task Task
block := make(chan struct{})
var job Job
task.OnStart(func() {
task.OnStop(func() {
close(block)
})
<-block
})
time.AfterFunc(time.Second*2, func() {
job.Stop(ErrTaskComplete)
})
root.AddTask(&job)
job.AddTask(&task)
job.WaitStopped()
}
//
//type DemoTask struct {
// Task

View File

@@ -11,3 +11,57 @@ func (m *Work) keepalive() bool {
func (*Work) GetTaskType() TaskType {
return TASK_TYPE_Work
}
type WorkCollection[K comparable, T interface {
ITask
GetKey() K
}] struct {
Work
}
func (c *WorkCollection[K, T]) Find(f func(T) bool) (item T, ok bool) {
c.RangeSubTask(func(task ITask) bool {
if v, _ok := task.(T); _ok && f(v) {
item = v
ok = true
return false
}
return true
})
return
}
func (c *WorkCollection[K, T]) Get(key K) (item T, ok bool) {
var value any
value, ok = c.children.Load(key)
if ok {
item, ok = value.(T)
}
return
}
func (c *WorkCollection[K, T]) Range(f func(T) bool) {
c.RangeSubTask(func(task ITask) bool {
if v, ok := task.(T); ok && !f(v) {
return false
}
return true
})
}
func (c *WorkCollection[K, T]) Has(key K) (ok bool) {
_, ok = c.children.Load(key)
return
}
func (c *WorkCollection[K, T]) ToList() (list []T) {
c.Range(func(t T) bool {
list = append(list, t)
return true
})
return
}
func (c *WorkCollection[K, T]) Length() int {
return int(c.Size.Load())
}
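`WorkCollection` constrains its element type to tasks that expose a comparable key, then builds `Find`/`Range`/`ToList` on top of child iteration. A simplified, slice-backed stand-in showing the same generic pattern (types here are illustrative, not the real task tree):

```go
package main

import "fmt"

// Keyed mirrors the constraint WorkCollection places on its element type:
// any item that exposes a comparable key.
type Keyed[K comparable] interface{ GetKey() K }

// collection is a simplified stand-in for WorkCollection, illustrating
// the Find / ToList pattern from the diff without the task machinery.
type collection[K comparable, T Keyed[K]] struct{ items []T }

func (c *collection[K, T]) Find(f func(T) bool) (item T, ok bool) {
	for _, v := range c.items {
		if f(v) {
			return v, true
		}
	}
	return
}

func (c *collection[K, T]) ToList() []T { return append([]T(nil), c.items...) }

type stream struct{ path string }

func (s stream) GetKey() string { return s.path }

func main() {
	c := collection[string, stream]{items: []stream{{"live/a"}, {"live/b"}}}
	if s, ok := c.Find(func(s stream) bool { return s.path == "live/b" }); ok {
		fmt.Println("found:", s.GetKey()) // found: live/b
	}
	fmt.Println("count:", len(c.ToList())) // count: 2
}
```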

BIN
pkg/test.h264 Normal file

Binary file not shown.

View File

@@ -51,13 +51,11 @@ type (
LastDropLevelChange time.Time
DropFrameLevel int // 0: no drop, 1: drop P-frame, 2: drop all
}
AVTrack struct {
Track
*RingWriter
codec.ICodecCtx
Allocator *util.ScalableMemoryAllocator
SequenceFrame IAVFrame
WrapIndex int
TsTamer
SpeedController
@@ -71,11 +69,13 @@ func NewAVTrack(args ...any) (t *AVTrack) {
switch v := arg.(type) {
case IAVFrame:
t.FrameType = reflect.TypeOf(v)
t.Allocator = v.GetAllocator()
sample := v.GetSample()
t.Allocator = sample.GetAllocator()
t.ICodecCtx = sample.ICodecCtx
case reflect.Type:
t.FrameType = v
case *slog.Logger:
t.Logger = v
t.Logger = v.With("frameType", t.FrameType.String())
case *AVTrack:
t.Logger = v.Logger.With("subtrack", t.FrameType.String())
t.RingWriter = v.RingWriter
@@ -118,9 +118,25 @@ func (t *AVTrack) AddBytesIn(n int) {
}
}
func (t *AVTrack) AcceptFrame(data IAVFrame) {
func (t *AVTrack) FixTimestamp(data *Sample, scale float64) {
t.AddBytesIn(data.Size)
data.Timestamp = t.Tame(data.Timestamp, t.FPS, scale)
}
func (t *AVTrack) NewFrame(avFrame *AVFrame) (frame IAVFrame) {
frame = reflect.New(t.FrameType.Elem()).Interface().(IAVFrame)
if avFrame.Sample == nil {
avFrame.Sample = frame.GetSample()
}
if avFrame.BaseSample == nil {
avFrame.BaseSample = &BaseSample{}
}
frame.GetSample().BaseSample = avFrame.BaseSample
return
}
func (t *AVTrack) AcceptFrame() {
t.acceptFrameCount++
t.Value.Wraps = append(t.Value.Wraps, data)
}
func (t *AVTrack) changeDropFrameLevel(newLevel int) {
@@ -230,23 +246,28 @@ func (t *AVTrack) AddPausedTime(d time.Duration) {
t.pausedTime += d
}
func (s *SpeedController) speedControl(speed float64, ts time.Duration) {
if speed != s.speed || s.beginTime.IsZero() {
s.speed = speed
s.beginTime = time.Now()
s.beginTimestamp = ts
s.pausedTime = 0
func (t *AVTrack) speedControl(speed float64, ts time.Duration) {
if speed != t.speed || t.beginTime.IsZero() {
t.speed = speed
t.beginTime = time.Now()
t.beginTimestamp = ts
t.pausedTime = 0
} else {
elapsed := time.Since(s.beginTime) - s.pausedTime
elapsed := time.Since(t.beginTime) - t.pausedTime
if speed == 0 {
s.Delta = ts - elapsed
t.Delta = ts - elapsed
if t.Logger.Enabled(t.ready, task.TraceLevel) {
t.Trace("speed 0", "ts", ts, "elapsed", elapsed, "delta", t.Delta)
}
return
}
should := time.Duration(float64(ts-s.beginTimestamp) / speed)
s.Delta = should - elapsed
// fmt.Println(speed, elapsed, should, s.Delta)
if s.Delta > threshold {
time.Sleep(min(s.Delta, time.Millisecond*500))
should := time.Duration(float64(ts-t.beginTimestamp) / speed)
t.Delta = should - elapsed
if t.Delta > threshold {
if t.Logger.Enabled(t.ready, task.TraceLevel) {
t.Trace("speed control", "speed", speed, "elapsed", elapsed, "should", should, "delta", t.Delta)
}
time.Sleep(min(t.Delta, time.Millisecond*500))
}
}
}

63
pkg/util/buddy_disable.go Normal file
View File

@@ -0,0 +1,63 @@
//go:build !enable_buddy
package util
import (
"sync"
"unsafe"
)
var pool0, pool1, pool2 sync.Pool
func init() {
pool0.New = func() any {
ret := createMemoryAllocator(defaultBufSize)
ret.recycle = func() {
pool0.Put(ret)
}
return ret
}
pool1.New = func() any {
ret := createMemoryAllocator(1 << MinPowerOf2)
ret.recycle = func() {
pool1.Put(ret)
}
return ret
}
pool2.New = func() any {
ret := createMemoryAllocator(1 << (MinPowerOf2 + 2))
ret.recycle = func() {
pool2.Put(ret)
}
return ret
}
}
func createMemoryAllocator(size int) *MemoryAllocator {
memory := make([]byte, size)
ret := &MemoryAllocator{
allocator: NewAllocator(size),
Size: size,
memory: memory,
start: int64(uintptr(unsafe.Pointer(&memory[0]))),
}
ret.allocator.Init(size)
return ret
}
func GetMemoryAllocator(size int) (ret *MemoryAllocator) {
switch size {
case defaultBufSize:
ret = pool0.Get().(*MemoryAllocator)
ret.allocator.Init(size)
case 1 << MinPowerOf2:
ret = pool1.Get().(*MemoryAllocator)
ret.allocator.Init(size)
case 1 << (MinPowerOf2 + 2):
ret = pool2.Get().(*MemoryAllocator)
ret.allocator.Init(size)
default:
ret = createMemoryAllocator(size)
}
return
}
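The `buddy_disable` pools above use a closure trick: each pool's `New` captures the freshly created allocator and installs a `recycle` callback that puts it back into the same pool. A self-contained sketch of that pattern (names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// poolItem carries its own recycle func, mirroring how MemoryAllocator's
// recycle field returns it to the pool it came from.
type poolItem struct {
	buf     []byte
	recycle func()
}

// newPool builds a sync.Pool whose New closure captures the item and
// installs a recycle callback targeting that same pool.
func newPool(size int) *sync.Pool {
	var p sync.Pool
	p.New = func() any {
		item := &poolItem{buf: make([]byte, size)}
		item.recycle = func() { p.Put(item) }
		return item
	}
	return &p
}

func main() {
	pool := newPool(1024)
	item := pool.Get().(*poolItem)
	fmt.Println("buf size:", len(item.buf)) // buf size: 1024
	item.recycle()                          // hands the buffer back instead of dropping it
}
```

The caller never needs to know which pool (or whether any pool) owns the allocator; it just calls `recycle`.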

44
pkg/util/buddy_enable.go Normal file
View File

@@ -0,0 +1,44 @@
//go:build enable_buddy
package util
import "unsafe"
func createMemoryAllocator(size int, buddy *Buddy, offset int) *MemoryAllocator {
ret := &MemoryAllocator{
allocator: NewAllocator(size),
Size: size,
memory: buddy.memoryPool[offset : offset+size],
start: buddy.poolStart + int64(offset),
recycle: func() {
buddy.Free(offset >> MinPowerOf2)
},
}
ret.allocator.Init(size)
return ret
}
func GetMemoryAllocator(size int) (ret *MemoryAllocator) {
if size < BuddySize {
requiredSize := size >> MinPowerOf2
// Loop until a buddy with free space is obtained from the pool
for {
buddy := GetBuddy()
defer PutBuddy(buddy)
offset, err := buddy.Alloc(requiredSize)
if err == nil {
// Allocation succeeded; use this buddy
return createMemoryAllocator(size, buddy, offset<<MinPowerOf2)
}
}
}
// No pooled buddy can satisfy the request, or the size is too large; fall back to system memory
memory := make([]byte, size)
start := int64(uintptr(unsafe.Pointer(&memory[0])))
return &MemoryAllocator{
allocator: NewAllocator(size),
Size: size,
memory: memory,
start: start,
}
}

View File

@@ -4,7 +4,6 @@ import (
"io"
"net"
"net/textproto"
"os"
"strings"
)
@@ -15,8 +14,8 @@ type BufReader struct {
buf MemoryReader
totalRead int
BufLen int
Mouth chan []byte
feedData func() error
Dump *os.File
}
func NewBufReaderWithBufLen(reader io.Reader, bufLen int) (r *BufReader) {
@@ -62,8 +61,10 @@ func NewBufReaderBuffersChan(feedChan chan net.Buffers) (r *BufReader) {
return
}
func NewBufReaderChan(feedChan chan []byte) (r *BufReader) {
func NewBufReaderChan(bufferSize int) (r *BufReader) {
feedChan := make(chan []byte, bufferSize)
r = &BufReader{
Mouth: feedChan,
feedData: func() error {
data, ok := <-feedChan
if !ok {
@@ -81,6 +82,15 @@ func NewBufReaderChan(feedChan chan []byte) (r *BufReader) {
return
}
func (r *BufReader) Feed(data []byte) bool {
select {
case r.Mouth <- data:
return true
default:
return false
}
}
func NewBufReader(reader io.Reader) (r *BufReader) {
return NewBufReaderWithBufLen(reader, defaultBufSize)
}
@@ -90,6 +100,9 @@ func (r *BufReader) Recycle() {
if r.Allocator != nil {
r.Allocator.Recycle()
}
if r.Mouth != nil {
close(r.Mouth)
}
}
func (r *BufReader) Buffered() int {
@@ -176,9 +189,6 @@ func (r *BufReader) ReadRange(n int, yield func([]byte)) (err error) {
func (r *BufReader) Read(to []byte) (n int, err error) {
n = len(to)
err = r.ReadNto(n, to)
if r.Dump != nil {
r.Dump.Write(to)
}
return
}
@@ -199,7 +209,7 @@ func (r *BufReader) ReadString(n int) (s string, err error) {
}
func (r *BufReader) ReadBytes(n int) (mem Memory, err error) {
err = r.ReadRange(n, mem.AppendOne)
err = r.ReadRange(n, mem.PushOne)
return
}

View File

@@ -24,7 +24,7 @@ func TestReadBytesTo(t *testing.T) {
s := RandomString(100)
t.Logf("s:%s", s)
var m Memory
m.AppendOne([]byte(s))
m.PushOne([]byte(s))
r := m.NewReader()
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
var total []byte
@@ -34,7 +34,7 @@ func TestReadBytesTo(t *testing.T) {
continue
}
buf := make([]byte, i)
n := r.ReadBytesTo(buf)
n, _ := r.Read(buf)
t.Logf("n:%d buf:%s", n, string(buf))
total = append(total, buf[:n]...)
if n == 0 {

View File

@@ -101,23 +101,6 @@ func (c *Collection[K, T]) RemoveByKey(key K) bool {
return false
}
// func (c *Collection[K, T]) GetOrCreate(key K) (item T, find bool) {
// if c.L != nil {
// c.L.Lock()
// defer c.L.Unlock()
// }
// if c.m != nil {
// item, find = c.m[key]
// return item, find
// }
// for _, item = range c.Items {
// if item.GetKey() == key {
// return item, true
// }
// }
// item = reflect.New(reflect.TypeOf(item).Elem()).Interface().(T)
// return
// }
func (c *Collection[K, T]) Has(key K) bool {
_, ok := c.Get(key)
return ok
@@ -169,10 +152,6 @@ func (c *Collection[K, T]) Search(f func(T) bool) func(yield func(item T) bool)
}
}
func (c *Collection[K, T]) GetKey() K {
return c.Items[0].GetKey()
}
func (c *Collection[K, T]) Clear() {
if c.L != nil {
c.L.Lock()

View File

@@ -0,0 +1,60 @@
package util
import (
"io"
"net"
"net/http"
"time"
"github.com/gobwas/ws/wsutil"
)
type HTTP_WS_Writer struct {
io.Writer
Conn net.Conn
ContentType string
WriteTimeout time.Duration
IsWebSocket bool
buffer []byte
}
func (m *HTTP_WS_Writer) Write(p []byte) (n int, err error) {
if m.IsWebSocket {
m.buffer = append(m.buffer, p...)
return len(p), nil
}
if m.Conn != nil && m.WriteTimeout > 0 {
m.Conn.SetWriteDeadline(time.Now().Add(m.WriteTimeout))
}
return m.Writer.Write(p)
}
func (m *HTTP_WS_Writer) Flush() (err error) {
if m.IsWebSocket {
if m.WriteTimeout > 0 {
m.Conn.SetWriteDeadline(time.Now().Add(m.WriteTimeout))
}
err = wsutil.WriteServerBinary(m.Conn, m.buffer)
m.buffer = m.buffer[:0]
}
return
}
func (m *HTTP_WS_Writer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if m.Conn == nil {
w.Header().Set("Transfer-Encoding", "chunked")
w.Header().Set("Content-Type", m.ContentType)
w.WriteHeader(http.StatusOK)
if hijacker, ok := w.(http.Hijacker); ok && m.WriteTimeout > 0 {
m.Conn, _, _ = hijacker.Hijack()
m.Conn.SetWriteDeadline(time.Now().Add(m.WriteTimeout))
m.Writer = m.Conn
} else {
m.Writer = w
w.(http.Flusher).Flush()
}
} else {
m.IsWebSocket = true
m.Writer = m.Conn
}
}

View File

@@ -16,6 +16,10 @@ type ReadWriteSeekCloser interface {
io.Closer
}
type Recyclable interface {
Recycle()
}
type Object = map[string]any
func Conditional[T any](cond bool, t, f T) T {
@@ -70,3 +74,59 @@ func Exist(filename string) bool {
_, err := os.Stat(filename)
return err == nil || os.IsExist(err)
}
type ReuseArray[T any] []T
func (s *ReuseArray[T]) GetNextPointer() (r *T) {
ss := *s
l := len(ss)
if cap(ss) > l {
ss = ss[:l+1]
} else {
var new T
ss = append(ss, new)
}
*s = ss
r = &((ss)[l])
if resetter, ok := any(r).(Resetter); ok {
resetter.Reset()
}
return r
}
func (s ReuseArray[T]) RangePoint(f func(yield *T) bool) {
for i := range len(s) {
if !f(&s[i]) {
return
}
}
}
func (s *ReuseArray[T]) Reset() {
*s = (*s)[:0]
}
func (s *ReuseArray[T]) Reduce() ReuseArray[T] {
ss := *s
ss = ss[:len(ss)-1]
*s = ss
return ss
}
func (s *ReuseArray[T]) Remove(item *T) bool {
for i := range *s {
if &(*s)[i] == item {
*s = append((*s)[:i], (*s)[i+1:]...)
return true
}
}
return false
}
func (s *ReuseArray[T]) Count() int {
return len(*s)
}
type Resetter interface {
Reset()
}
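`ReuseArray` avoids reallocating elements across `Reset` cycles: `GetNextPointer` extends into spare capacity before appending, and invokes `Reset` on the element if it implements `Resetter`. A runnable sketch using the type as defined in the diff (trimmed to the relevant methods):

```go
package main

import "fmt"

// ReuseArray reuses the backing array across Reset calls:
// GetNextPointer extends into spare capacity before allocating.
type ReuseArray[T any] []T

type Resetter interface{ Reset() }

func (s *ReuseArray[T]) GetNextPointer() *T {
	ss := *s
	l := len(ss)
	if cap(ss) > l {
		ss = ss[:l+1] // reuse an existing slot
	} else {
		var zero T
		ss = append(ss, zero)
	}
	*s = ss
	r := &ss[l]
	if resetter, ok := any(r).(Resetter); ok {
		resetter.Reset() // clear stale state from the reused slot
	}
	return r
}

func (s *ReuseArray[T]) Reset() { *s = (*s)[:0] }

func main() {
	var a ReuseArray[int]
	*a.GetNextPointer() = 7
	*a.GetNextPointer() = 8
	fmt.Println(len(a), a[0], a[1]) // 2 7 8
	a.Reset()
	p := a.GetNextPointer()
	fmt.Println(len(a), *p) // 1 7  (stale value survives: int has no Resetter)
}
```

The last line shows why the `Resetter` hook matters: element types that hold state must implement `Reset` or they will see leftovers from the previous cycle.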

View File

@@ -1,7 +1,110 @@
package util
import (
"io"
"net"
"slices"
)
const (
MaxBlockSize = 1 << 22
BuddySize = MaxBlockSize << 7
MinPowerOf2 = 10
)
type Memory struct {
Size int
Buffers [][]byte
}
func NewMemory(buf []byte) Memory {
return Memory{
Buffers: net.Buffers{buf},
Size: len(buf),
}
}
func (m *Memory) WriteTo(w io.Writer) (n int64, err error) {
copy := net.Buffers(slices.Clone(m.Buffers))
return copy.WriteTo(w)
}
func (m *Memory) Reset() {
m.Buffers = m.Buffers[:0]
m.Size = 0
}
func (m *Memory) UpdateBuffer(index int, buf []byte) {
if index < 0 {
index = len(m.Buffers) + index
}
m.Size += len(buf) - len(m.Buffers[index])
m.Buffers[index] = buf
}
func (m *Memory) CopyFrom(b *Memory) {
buf := make([]byte, b.Size)
b.CopyTo(buf)
m.PushOne(buf)
}
func (m *Memory) Equal(b *Memory) bool {
if m.Size != b.Size || len(m.Buffers) != len(b.Buffers) {
return false
}
for i, buf := range m.Buffers {
if !slices.Equal(buf, b.Buffers[i]) {
return false
}
}
return true
}
func (m *Memory) CopyTo(buf []byte) {
for _, b := range m.Buffers {
l := len(b)
copy(buf, b)
buf = buf[l:]
}
}
func (m *Memory) ToBytes() []byte {
buf := make([]byte, m.Size)
m.CopyTo(buf)
return buf
}
func (m *Memory) PushOne(b []byte) {
m.Buffers = append(m.Buffers, b)
m.Size += len(b)
}
func (m *Memory) Push(b ...[]byte) {
m.Buffers = append(m.Buffers, b...)
for _, level0 := range b {
m.Size += len(level0)
}
}
func (m *Memory) Append(mm Memory) *Memory {
m.Buffers = append(m.Buffers, mm.Buffers...)
m.Size += mm.Size
return m
}
func (m *Memory) Count() int {
return len(m.Buffers)
}
func (m *Memory) Range(yield func([]byte)) {
for i := range m.Count() {
yield(m.Buffers[i])
}
}
func (m *Memory) NewReader() MemoryReader {
return MemoryReader{
Memory: m,
Length: m.Size,
}
}
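`Memory` gathers non-contiguous byte slices while tracking their total size; note that `WriteTo` clones the buffer list because `net.Buffers.WriteTo` consumes the slice it writes from. A runnable sketch of that behavior (trimmed to the relevant methods):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net"
	"slices"
)

// Memory collects byte slices without copying and tracks the total size.
type Memory struct {
	Size    int
	Buffers [][]byte
}

func (m *Memory) PushOne(b []byte) {
	m.Buffers = append(m.Buffers, b)
	m.Size += len(b)
}

func (m *Memory) ToBytes() []byte {
	buf := make([]byte, 0, m.Size)
	for _, b := range m.Buffers {
		buf = append(buf, b...)
	}
	return buf
}

// WriteTo clones the buffer list first: net.Buffers.WriteTo advances
// (and empties) the slice it is called on, which would otherwise
// destroy m.Buffers.
func (m *Memory) WriteTo(w io.Writer) (int64, error) {
	cp := net.Buffers(slices.Clone(m.Buffers))
	return cp.WriteTo(w)
}

func main() {
	var m Memory
	m.PushOne([]byte("hello "))
	m.PushOne([]byte("world"))
	var sink bytes.Buffer
	n, _ := m.WriteTo(&sink)
	fmt.Println(m.Size, n, sink.String()) // 11 11 hello world
	fmt.Println(string(m.ToBytes()))      // buffers still intact: hello world
}
```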

View File

@@ -2,93 +2,23 @@ package util
import (
"io"
"net"
"slices"
)
type Memory struct {
Size int
net.Buffers
}
type MemoryReader struct {
*Memory
Length int
offset0 int
offset1 int
Length, offset0, offset1 int
}
func NewReadableBuffersFromBytes(b ...[]byte) *MemoryReader {
func NewReadableBuffersFromBytes(b ...[]byte) MemoryReader {
buf := &Memory{Buffers: b}
for _, level0 := range b {
buf.Size += len(level0)
}
return &MemoryReader{Memory: buf, Length: buf.Size}
return MemoryReader{Memory: buf, Length: buf.Size}
}
func NewMemory(buf []byte) Memory {
return Memory{
Buffers: net.Buffers{buf},
Size: len(buf),
}
}
func (m *Memory) UpdateBuffer(index int, buf []byte) {
if index < 0 {
index = len(m.Buffers) + index
}
m.Size = len(buf) - len(m.Buffers[index])
m.Buffers[index] = buf
}
func (m *Memory) CopyFrom(b *Memory) {
buf := make([]byte, b.Size)
b.CopyTo(buf)
m.AppendOne(buf)
}
func (m *Memory) CopyTo(buf []byte) {
for _, b := range m.Buffers {
l := len(b)
copy(buf, b)
buf = buf[l:]
}
}
func (m *Memory) ToBytes() []byte {
buf := make([]byte, m.Size)
m.CopyTo(buf)
return buf
}
func (m *Memory) AppendOne(b []byte) {
m.Buffers = append(m.Buffers, b)
m.Size += len(b)
}
func (m *Memory) Append(b ...[]byte) {
m.Buffers = append(m.Buffers, b...)
for _, level0 := range b {
m.Size += len(level0)
}
}
func (m *Memory) Count() int {
return len(m.Buffers)
}
func (m *Memory) Range(yield func([]byte)) {
for i := range m.Count() {
yield(m.Buffers[i])
}
}
func (m *Memory) NewReader() *MemoryReader {
var reader MemoryReader
reader.Memory = m
reader.Length = m.Size
return &reader
}
var _ io.Reader = (*MemoryReader)(nil)
func (r *MemoryReader) Offset() int {
return r.Size - r.Length
@@ -108,9 +38,9 @@ func (r *MemoryReader) MoveToEnd() {
r.Length = 0
}
func (r *MemoryReader) ReadBytesTo(buf []byte) (actual int) {
func (r *MemoryReader) Read(buf []byte) (actual int, err error) {
if r.Length == 0 {
return 0
return 0, io.EOF
}
n := len(buf)
curBuf := r.GetCurrent()
@@ -142,6 +72,7 @@ func (r *MemoryReader) ReadBytesTo(buf []byte) (actual int) {
actual += curBufLen
r.skipBuf()
if r.Length == 0 && n > 0 {
err = io.EOF
return
}
}
@@ -204,6 +135,9 @@ func (r *MemoryReader) getCurrentBufLen() int {
return len(r.Memory.Buffers[r.offset0]) - r.offset1
}
func (r *MemoryReader) Skip(n int) error {
if n <= 0 {
return nil
}
if n > r.Length {
return io.EOF
}
@@ -248,8 +182,8 @@ func (r *MemoryReader) ReadBytes(n int) ([]byte, error) {
return nil, io.EOF
}
b := make([]byte, n)
actual := r.ReadBytesTo(b)
return b[:actual], nil
actual, err := r.Read(b)
return b[:actual], err
}
func (r *MemoryReader) ReadBE(n int) (num uint32, err error) {

View File

@@ -22,13 +22,13 @@ func NewPromiseWithTimeout(ctx context.Context, timeout time.Duration) *Promise
p := &Promise{}
p.Context, p.CancelCauseFunc = context.WithCancelCause(ctx)
p.timer = time.AfterFunc(timeout, func() {
p.CancelCauseFunc(ErrTimeout)
p.CancelCauseFunc(errTimeout)
})
return p
}
var ErrResolve = errors.New("promise resolved")
var ErrTimeout = errors.New("promise timeout")
var errTimeout = errors.New("promise timeout")
func (p *Promise) Resolve() {
p.Fulfill(nil)
@@ -47,6 +47,10 @@ func (p *Promise) Await() (err error) {
return
}
func (p *Promise) IsRejected() bool {
return context.Cause(p.Context) != ErrResolve
}
func (p *Promise) Fulfill(err error) {
if p.timer != nil {
p.timer.Stop()

View File

@@ -4,12 +4,26 @@ package util
import (
"io"
"slices"
)
type RecyclableMemory struct {
Memory
}
func NewRecyclableMemory(allocator *ScalableMemoryAllocator) RecyclableMemory {
return RecyclableMemory{}
}
func (r *RecyclableMemory) Clone() RecyclableMemory {
return RecyclableMemory{
Memory: Memory{
Buffers: slices.Clone(r.Buffers),
Size: r.Size,
},
}
}
func (r *RecyclableMemory) InitRecycleIndexes(max int) {
}

View File

@@ -15,9 +15,15 @@ type RecyclableMemory struct {
recycleIndexes []int
}
func NewRecyclableMemory(allocator *ScalableMemoryAllocator) RecyclableMemory {
return RecyclableMemory{allocator: allocator}
}
func (r *RecyclableMemory) InitRecycleIndexes(max int) {
if r.recycleIndexes == nil {
r.recycleIndexes = make([]int, 0, max)
}
}
func (r *RecyclableMemory) GetAllocator() *ScalableMemoryAllocator {
return r.allocator
@@ -28,7 +34,7 @@ func (r *RecyclableMemory) NextN(size int) (memory []byte) {
if r.recycleIndexes != nil {
r.recycleIndexes = append(r.recycleIndexes, r.Count())
}
r.AppendOne(memory)
r.PushOne(memory)
return
}
@@ -36,7 +42,7 @@ func (r *RecyclableMemory) AddRecycleBytes(b []byte) {
if r.recycleIndexes != nil {
r.recycleIndexes = append(r.recycleIndexes, r.Count())
}
r.AppendOne(b)
r.PushOne(b)
}
func (r *RecyclableMemory) SetAllocator(allocator *ScalableMemoryAllocator) {
@@ -54,6 +60,7 @@ func (r *RecyclableMemory) Recycle() {
r.allocator.Free(buf)
}
}
r.Reset()
}
type MemoryAllocator struct {
@@ -61,54 +68,14 @@ type MemoryAllocator struct {
start int64
memory []byte
Size int
buddy *Buddy
}
// createMemoryAllocator creates and initializes a MemoryAllocator
func createMemoryAllocator(size int, buddy *Buddy, offset int) *MemoryAllocator {
ret := &MemoryAllocator{
allocator: NewAllocator(size),
buddy: buddy,
Size: size,
memory: buddy.memoryPool[offset : offset+size],
start: buddy.poolStart + int64(offset),
}
ret.allocator.Init(size)
return ret
}
func GetMemoryAllocator(size int) (ret *MemoryAllocator) {
if size < BuddySize {
requiredSize := size >> MinPowerOf2
// Loop until a buddy with free space is obtained from the pool
for {
buddy := GetBuddy()
offset, err := buddy.Alloc(requiredSize)
PutBuddy(buddy)
if err == nil {
// Allocation succeeded; use this buddy
return createMemoryAllocator(size, buddy, offset<<MinPowerOf2)
}
}
}
// No pooled buddy can satisfy the request, or the size is too large; fall back to system memory
memory := make([]byte, size)
start := int64(uintptr(unsafe.Pointer(&memory[0])))
return &MemoryAllocator{
allocator: NewAllocator(size),
Size: size,
memory: memory,
start: start,
}
recycle func()
}
func (ma *MemoryAllocator) Recycle() {
ma.allocator.Recycle()
if ma.buddy != nil {
_ = ma.buddy.Free(int((ma.buddy.poolStart - ma.start) >> MinPowerOf2))
ma.buddy = nil
if ma.recycle != nil {
ma.recycle()
}
ma.memory = nil
}
func (ma *MemoryAllocator) Find(size int) (memory []byte) {

142
plugin.go
View File

@@ -6,6 +6,7 @@ import (
"crypto/md5"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"net"
"net/http"
@@ -65,8 +66,6 @@ type (
IPlugin interface {
task.IJob
OnInit() error
OnStop()
Pull(string, config.Pull, *config.Publish) (*PullJob, error)
Push(string, config.Push, *config.Subscribe)
Transform(*Publisher, config.Transform)
@@ -163,27 +162,46 @@ func (plugin *PluginMeta) Init(s *Server, userConfig map[string]any) (p *Plugin)
return
}
}
if err := s.AddTask(instance).WaitStarted(); err != nil {
if err = s.AddTask(instance).WaitStarted(); err != nil {
p.disable(instance.StopReason().Error())
return
}
if err = p.listen(); err != nil {
p.Stop(err)
p.disable(err.Error())
return
}
if p.Meta.ServiceDesc != nil && s.grpcServer != nil {
s.grpcServer.RegisterService(p.Meta.ServiceDesc, p.handler)
if p.Meta.RegisterGRPCHandler != nil {
if err = p.Meta.RegisterGRPCHandler(p.Context, s.config.HTTP.GetGRPCMux(), s.grpcClientConn); err != nil {
p.Stop(err)
p.disable(fmt.Sprintf("grpc %v", err))
return
} else {
p.Info("grpc handler registered")
}
}
}
if p.config.Hook != nil {
if hook, ok := p.config.Hook[config.HookOnServerKeepAlive]; ok && hook.Interval > 0 {
p.AddTask(&ServerKeepAliveTask{plugin: p})
}
}
var handlers map[string]http.HandlerFunc
if v, ok := instance.(IRegisterHandler); ok {
handlers = v.RegisterHandler()
}
p.registerHandler(handlers)
p.OnDispose(func() {
s.Plugins.Remove(p)
})
s.Plugins.Add(p)
return
}
// InstallPlugin installs a plugin
func InstallPlugin[C iPlugin](options ...any) error {
var meta PluginMeta
for _, option := range options {
if m, ok := option.(PluginMeta); ok {
meta = m
}
}
func InstallPlugin[C iPlugin](meta PluginMeta) error {
var c *C
meta.Type = reflect.TypeOf(c).Elem()
if meta.Name == "" {
@@ -198,30 +216,6 @@ func InstallPlugin[C iPlugin](options ...any) error {
meta.Version = "dev"
}
}
for _, option := range options {
switch v := option.(type) {
case OnExitHandler:
meta.OnExit = v
case DefaultYaml:
meta.DefaultYaml = v
case PullerFactory:
meta.NewPuller = v
case PusherFactory:
meta.NewPusher = v
case RecorderFactory:
meta.NewRecorder = v
case TransformerFactory:
meta.NewTransformer = v
case AuthPublisher:
meta.OnAuthPub = v
case AuthSubscriber:
meta.OnAuthSub = v
case *grpc.ServiceDesc:
meta.ServiceDesc = v
case func(context.Context, *gatewayRuntime.ServeMux, *grpc.ClientConn) error:
meta.RegisterGRPCHandler = v
}
}
plugins = append(plugins, meta)
return nil
}
@@ -281,40 +275,6 @@ func (p *Plugin) disable(reason string) {
p.Server.disabledPlugins = append(p.Server.disabledPlugins, p)
}
func (p *Plugin) Start() (err error) {
s := p.Server
s.AddTask(&webHookQueueTask)
if err = p.listen(); err != nil {
return
}
if err = p.handler.OnInit(); err != nil {
return
}
if p.Meta.ServiceDesc != nil && s.grpcServer != nil {
s.grpcServer.RegisterService(p.Meta.ServiceDesc, p.handler)
if p.Meta.RegisterGRPCHandler != nil {
if err = p.Meta.RegisterGRPCHandler(p.Context, s.config.HTTP.GetGRPCMux(), s.grpcClientConn); err != nil {
p.disable(fmt.Sprintf("grpc %v", err))
return
} else {
p.Info("grpc handler registered")
}
}
}
if p.config.Hook != nil {
if hook, ok := p.config.Hook[config.HookOnServerKeepAlive]; ok && hook.Interval > 0 {
p.AddTask(&ServerKeepAliveTask{plugin: p})
}
}
return
}
func (p *Plugin) Dispose() {
p.handler.OnStop()
p.Server.Plugins.Remove(p)
}
func (p *Plugin) listen() (err error) {
httpConf := &p.config.HTTP
@@ -374,14 +334,6 @@ func (p *Plugin) listen() (err error) {
return
}
func (p *Plugin) OnInit() error {
return nil
}
func (p *Plugin) OnStop() {
}
type WebHookQueueTask struct {
task.Work
}
@@ -596,7 +548,11 @@ func (p *Plugin) OnSubscribe(streamPath string, args url.Values) {
if p.Meta.NewPuller != nil && reg.MatchString(streamPath) {
conf.Args = config.HTTPValues(args)
conf.URL = reg.Replace(streamPath, conf.URL)
p.handler.Pull(streamPath, conf, nil)
if job, err := p.handler.Pull(streamPath, conf, nil); err == nil {
if w, ok := p.Server.Waiting.Get(streamPath); ok {
job.Progress = &w.Progress
}
}
}
}
@@ -620,7 +576,17 @@ func (p *Plugin) OnSubscribe(streamPath string, args url.Values) {
}
func (p *Plugin) PublishWithConfig(ctx context.Context, streamPath string, conf config.Publish) (publisher *Publisher, err error) {
publisher = createPublisher(p, streamPath, conf)
publisher = &Publisher{Publish: conf}
publisher.Type = conf.PubType
publisher.ID = task.GetNextTaskID()
publisher.Plugin = p
if conf.PublishTimeout > 0 {
publisher.TimeoutTimer = time.NewTimer(conf.PublishTimeout)
} else {
publisher.TimeoutTimer = time.NewTimer(time.Hour * 24 * 365)
}
publisher.Logger = p.Logger.With("streamPath", streamPath, "pId", publisher.ID)
publisher.Init(streamPath, &publisher.Publish)
if p.config.EnableAuth && publisher.Type == PublishTypeServer {
onAuthPub := p.Meta.OnAuthPub
if onAuthPub == nil {
@@ -638,7 +604,8 @@ func (p *Plugin) PublishWithConfig(ctx context.Context, streamPath string, conf
}
}
}
err = p.Server.Streams.AddTask(publisher, ctx).WaitStarted()
for {
err = p.Server.Streams.Add(publisher, ctx).WaitStarted()
if err == nil {
if sender, webhook := p.getHookSender(config.HookOnPublishEnd); sender != nil {
publisher.OnDispose(func() {
@@ -659,8 +626,18 @@ func (p *Plugin) PublishWithConfig(ctx context.Context, streamPath string, conf
}
sender(webhook, alarmInfo)
}
}
return
} else if oldStream := new(task.ExistTaskError); errors.As(err, oldStream) {
if conf.KickExist {
publisher.takeOver(oldStream.Task.(*Publisher))
oldStream.Task.WaitStopped()
} else {
return nil, ErrStreamExist
}
} else {
return
}
}
}
func (p *Plugin) Publish(ctx context.Context, streamPath string) (publisher *Publisher, err error) {
@@ -743,14 +720,13 @@ func (p *Plugin) Push(streamPath string, conf config.Push, subConf *config.Subsc
func (p *Plugin) Record(pub *Publisher, conf config.Record, subConf *config.Subscribe) *RecordJob {
recorder := p.Meta.NewRecorder(conf)
job := recorder.GetRecordJob().Init(recorder, p, pub.StreamPath, conf, subConf)
job.Depend(pub)
pub.Using(job)
return job
}
func (p *Plugin) Transform(pub *Publisher, conf config.Transform) {
transformer := p.Meta.NewTransformer()
job := transformer.GetTransformJob().Init(transformer, p, pub, conf)
job.Depend(pub)
pub.Using(transformer.GetTransformJob().Init(transformer, p, pub, conf))
}
func (p *Plugin) registerHandler(handlers map[string]http.HandlerFunc) {

View File

@@ -6,6 +6,12 @@
- Visual Studio Code
- Goland
- Cursor
- CodeBuddy
- Trae
- Qoder
- Claude Code
- Kiro
- Windsurf
### Install gRPC
```shell
@@ -53,14 +59,16 @@ Example:
const defaultConfig = m7s.DefaultYaml(`tcp:
listenaddr: :5554`)
var _ = m7s.InstallPlugin[MyPlugin](defaultConfig)
var _ = m7s.InstallPlugin[MyPlugin](m7s.PluginMeta{
DefaultYaml: defaultConfig,
})
```
## 3. Implement Event Callbacks (Optional)
### Initialization Callback
```go
func (config *MyPlugin) OnInit() (err error) {
func (config *MyPlugin) Start() (err error) {
// Initialize things
return
}
@@ -121,22 +129,25 @@ func (config *MyPlugin) test1(rw http.ResponseWriter, r *http.Request) {
Push client needs to implement IPusher interface and pass the creation method to InstallPlugin.
```go
type Pusher struct {
pullCtx m7s.PullJob
task.Task
pushJob m7s.PushJob
}
func (c *Pusher) GetPullJob() *m7s.PullJob {
return &c.pullCtx
func (c *Pusher) GetPushJob() *m7s.PushJob {
return &c.pushJob
}
func NewPusher(_ config.Push) m7s.IPusher {
return &Pusher{}
}
var _ = m7s.InstallPlugin[MyPlugin](NewPusher)
var _ = m7s.InstallPlugin[MyPlugin](m7s.PluginMeta{
NewPusher: NewPusher,
})
```
### Implement Pull Client
Pull client needs to implement IPuller interface and pass the creation method to InstallPlugin.
The following Puller inherits from m7s.HTTPFilePuller for basic file and HTTP pulling:
The following Puller inherits from m7s.HTTPFilePuller for basic file and HTTP pulling. You need to override the Start method for specific pulling logic:
```go
type Puller struct {
m7s.HTTPFilePuller
@@ -145,7 +156,9 @@ type Puller struct {
func NewPuller(_ config.Pull) m7s.IPuller {
return &Puller{}
}
var _ = m7s.InstallPlugin[MyPlugin](NewPuller)
var _ = m7s.InstallPlugin[MyPlugin](m7s.PluginMeta{
NewPuller: NewPuller,
})
```
## 6. Implement gRPC Service
@@ -226,7 +239,10 @@ import (
"m7s.live/v5/plugin/myplugin/pb"
)
var _ = m7s.InstallPlugin[MyPlugin](&pb.Api_ServiceDesc, pb.RegisterApiHandler)
var _ = m7s.InstallPlugin[MyPlugin](m7s.PluginMeta{
ServiceDesc: &pb.Api_ServiceDesc,
RegisterGRPCHandler: pb.RegisterApiHandler,
})
type MyPlugin struct {
pb.UnimplementedApiServer
@@ -247,43 +263,72 @@ Accessible via GET request to `/myplugin/api/test1`
## 7. Publishing Streams
```go
publisher, err = p.Publish(streamPath, connectInfo)
publisher, err := p.Publish(ctx, streamPath)
```
The last two parameters are optional.
Both the `ctx` and `streamPath` parameters are required.
After obtaining the `publisher`, you can publish audio/video data using `publisher.WriteAudio` and `publisher.WriteVideo`.
### Writing Audio/Video Data
The old `WriteAudio` and `WriteVideo` methods have been replaced with a more structured writer pattern using generics:
#### **Create Writers**
```go
// Audio writer
audioWriter := m7s.NewPublishAudioWriter[*AudioFrame](publisher, allocator)
// Video writer
videoWriter := m7s.NewPublishVideoWriter[*VideoFrame](publisher, allocator)
// Combined audio/video writer
writer := m7s.NewPublisherWriter[*AudioFrame, *VideoFrame](publisher, allocator)
```
#### **Write Frames**
```go
// Set timestamp and write audio frame
writer.AudioFrame.SetTS32(timestamp)
err := writer.NextAudio()
// Set timestamp and write video frame
writer.VideoFrame.SetTS32(timestamp)
err := writer.NextVideo()
```
#### **Write Custom Data**
```go
// For custom data frames
err := publisher.WriteData(data IDataFrame)
```
### Define Audio/Video Data
If existing audio/video data formats don't meet your needs, you can define custom formats by implementing this interface:
```go
IAVFrame interface {
GetAllocator() *util.ScalableMemoryAllocator
SetAllocator(*util.ScalableMemoryAllocator)
Parse(*AVTrack) error
ConvertCtx(codec.ICodecCtx) (codec.ICodecCtx, IAVFrame, error)
Demux(codec.ICodecCtx) (any, error)
Mux(codec.ICodecCtx, *AVFrame)
GetTimestamp() time.Duration
GetCTS() time.Duration
GetSample() *Sample
GetSize() int
CheckCodecChange() error
Demux() error // demux to raw format
Mux(*Sample) error // mux from origin format
Recycle()
String() string
Dump(byte, io.Writer)
}
```
> Define separate types for audio and video
- GetAllocator/SetAllocator: Automatically implemented when embedding RecyclableMemory
- Parse: Identifies key frames, sequence frames, and other important information
- ConvertCtx: Called when protocol conversion is needed
- Demux: Called when audio/video data needs to be demuxed
- Mux: Called when audio/video data needs to be muxed
- Recycle: Automatically implemented when embedding RecyclableMemory
- String: Prints audio/video data information
The methods serve the following purposes:
- GetSample: Gets the Sample object containing codec context and raw data
- GetSize: Gets the size of audio/video data
- GetTimestamp: Gets the timestamp in nanoseconds
- GetCTS: Gets the Composition Time Stamp in nanoseconds (PTS = DTS+CTS)
- Dump: Prints binary audio/video data
- CheckCodecChange: Checks if the codec has changed
- Demux: Demuxes audio/video data to raw format for use by other formats
- Mux: Muxes from original format to custom audio/video data format
- Recycle: Recycles resources, automatically implemented when embedding RecyclableMemory
- String: Prints audio/video data information
### Memory Management
The new pattern includes built-in memory management:
- `util.ScalableMemoryAllocator` - For efficient memory allocation
- Frame recycling through `Recycle()` method
- Automatic memory pool management
## 8. Subscribing to Streams
```go
@@ -293,7 +338,245 @@ go m7s.PlayBlock(suber, handleAudio, handleVideo)
```
Note that handleAudio and handleVideo are callback functions you need to implement. They take an audio/video format type as input and return an error. If the error is not nil, the subscription is terminated.
## 9. Prometheus Integration
## 9. Working with H26xFrame for Raw Stream Data
### 9.1 Understanding H26xFrame Structure
The `H26xFrame` struct is used for handling H.264/H.265 raw stream data:
```go
type H26xFrame struct {
pkg.Sample
}
```
Key characteristics:
- Inherits from `pkg.Sample` - contains codec context, memory management, and timing
- Uses `Raw.(*pkg.Nalus)` to store NALU (Network Abstraction Layer Unit) data
- Supports both H.264 (AVC) and H.265 (HEVC) formats
- Uses efficient memory allocators for zero-copy operations
### 9.2 Creating H26xFrame for Publishing
```go
import (
"m7s.live/v5"
"m7s.live/v5/pkg/format"
"m7s.live/v5/pkg/util"
"time"
)
// Create publisher with H26xFrame support
func publishRawH264Stream(streamPath string, h264Frames [][]byte) error {
// Get publisher
publisher, err := p.Publish(streamPath)
if err != nil {
return err
}
// Create memory allocator
allocator := util.NewScalableMemoryAllocator(1 << util.MinPowerOf2)
defer allocator.Recycle()
// Create writer for H26xFrame
writer := m7s.NewPublisherWriter[*format.RawAudio, *format.H26xFrame](publisher, allocator)
// Set up H264 codec context
writer.VideoFrame.ICodecCtx = &format.H264{}
// Publish multiple frames
// Note: This is a demonstration of multi-frame writing. In actual scenarios,
// frames should be written gradually as they are received from the video source.
startTime := time.Now()
for i, frameData := range h264Frames {
// Create H26xFrame for each frame
frame := writer.VideoFrame
// Set timestamp with proper interval
frame.Timestamp = startTime.Add(time.Duration(i) * time.Second / 30) // 30 FPS
// Write NALU data
nalus := frame.GetNalus()
// assumes frameData contains a single NALU; otherwise loop over the code below
p := nalus.GetNextPointer()
mem := frame.NextN(len(frameData))
copy(mem, frameData)
p.PushOne(mem)
// Publish frame
if err := writer.NextVideo(); err != nil {
return err
}
}
return nil
}
// Example usage with continuous streaming
func continuousH264Publishing(streamPath string, frameSource <-chan []byte, stopChan <-chan struct{}) error {
// Get publisher
publisher, err := p.Publish(streamPath)
if err != nil {
return err
}
defer publisher.Dispose()
// Create memory allocator
allocator := util.NewScalableMemoryAllocator(1 << util.MinPowerOf2)
defer allocator.Recycle()
// Create writer for H26xFrame
writer := m7s.NewPublisherWriter[*format.RawAudio, *format.H26xFrame](publisher, allocator)
// Set up H264 codec context
writer.VideoFrame.ICodecCtx = &format.H264{}
startTime := time.Now()
frameCount := 0
for {
select {
case frameData := <-frameSource:
// Create H26xFrame for each frame
frame := writer.VideoFrame
// Set timestamp with proper interval
frame.Timestamp = startTime.Add(time.Duration(frameCount) * time.Second / 30) // 30 FPS
// Write NALU data
nalus := frame.GetNalus()
p := nalus.GetNextPointer()
mem := frame.NextN(len(frameData))
copy(mem, frameData)
p.PushOne(mem)
// Publish frame
if err := writer.NextVideo(); err != nil {
return err
}
frameCount++
case <-stopChan:
// Stop publishing
return nil
}
}
}
```
### 9.3 Processing H26xFrame (Transform Pattern)
```go
type MyTransform struct {
m7s.DefaultTransformer
Writer *m7s.PublishWriter[*format.RawAudio, *format.H26xFrame]
}
func (t *MyTransform) Go() {
defer t.Dispose()
for video := range t.Video {
if err := t.processH26xFrame(video); err != nil {
t.Error("process frame failed", "error", err)
break
}
}
}
func (t *MyTransform) processH26xFrame(video *format.H26xFrame) error {
// Copy frame metadata
copyVideo := t.Writer.VideoFrame
copyVideo.ICodecCtx = video.ICodecCtx
*copyVideo.BaseSample = *video.BaseSample
nalus := copyVideo.GetNalus()
// Process each NALU unit
for nalu := range video.Raw.(*pkg.Nalus).RangePoint {
p := nalus.GetNextPointer()
mem := copyVideo.NextN(nalu.Size)
nalu.CopyTo(mem)
// Example: Filter or modify specific NALU types
if video.FourCC() == codec.FourCC_H264 {
switch codec.ParseH264NALUType(mem[0]) {
case codec.NALU_IDR_Picture, codec.NALU_Non_IDR_Picture:
// Process video frame NALUs
// Example: Apply transformations, filters, etc.
case codec.NALU_SPS, codec.NALU_PPS:
// Process parameter set NALUs
}
} else if video.FourCC() == codec.FourCC_H265 {
switch codec.ParseH265NALUType(mem[0]) {
case h265parser.NAL_UNIT_CODED_SLICE_IDR_W_RADL:
// Process H.265 IDR frames
}
}
// Push processed NALU
p.PushOne(mem)
}
return t.Writer.NextVideo()
}
```
### 9.4 Common NALU Types for H.264/H.265
#### H.264 NALU Types
```go
const (
NALU_Non_IDR_Picture = 1 // Non-IDR picture (P-frames)
NALU_IDR_Picture = 5 // IDR picture (I-frames)
NALU_SEI = 6 // Supplemental enhancement information
NALU_SPS = 7 // Sequence parameter set
NALU_PPS = 8 // Picture parameter set
)
// Parse NALU type from first byte
naluType := codec.ParseH264NALUType(mem[0])
```
#### H.265 NALU Types
```go
// Parse H.265 NALU type from first byte
naluType := codec.ParseH265NALUType(mem[0])
```
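Both parse helpers reduce to bit operations on the first NALU byte: the H.264 type is the low 5 bits, and the H.265 type occupies bits 1-6 of the first header byte. A standalone sketch (the library's `ParseH264NALUType`/`ParseH265NALUType` may return typed constants rather than raw bytes; the helper names below are illustrative):

```go
package main

import "fmt"

// h264NALUType extracts the low 5 bits of the first NALU byte
// (the top 3 bits are forbidden_zero_bit and nal_ref_idc).
func h264NALUType(b byte) byte { return b & 0x1F }

// h265NALUType extracts bits 1..6 of the first byte of the
// two-byte H.265 NAL unit header (bit 7 is forbidden_zero_bit).
func h265NALUType(b byte) byte { return (b >> 1) & 0x3F }

func main() {
	fmt.Println(h264NALUType(0x65)) // 5: IDR picture
	fmt.Println(h264NALUType(0x67)) // 7: SPS
	fmt.Println(h265NALUType(0x40)) // 32: VPS
}
```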
### 9.5 Memory Management Best Practices
```go
// Use memory allocators for efficient operations
allocator := util.NewScalableMemoryAllocator(1 << 20) // 1MB initial size
defer allocator.Recycle()
// When processing multiple frames, reuse the same allocator
writer := m7s.NewPublisherWriter[*format.RawAudio, *format.H26xFrame](publisher, allocator)
```
### 9.6 Error Handling and Validation
```go
func processFrame(video *format.H26xFrame) error {
// Check codec changes
if err := video.CheckCodecChange(); err != nil {
return err
}
// Validate frame data
if video.Raw == nil {
return fmt.Errorf("empty frame data")
}
// Process NALUs safely
nalus, ok := video.Raw.(*pkg.Nalus)
if !ok {
return fmt.Errorf("invalid NALUs format")
}
// Process frame...
return nil
}
```
## 10. Prometheus Integration
Just implement the Collector interface, and the system will automatically collect metrics from all plugins:
```go
func (p *MyPlugin) Describe(ch chan<- *prometheus.Desc) {


@@ -6,6 +6,13 @@
- Visual Studio Code
- Goland
- Cursor
- CodeBuddy
- Trae
- Qoder
- Claude Code
- Kiro
- Windsurf
### Install gRPC
```shell
$ go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
@@ -51,12 +58,14 @@ type MyPlugin struct {
const defaultConfig = m7s.DefaultYaml(`tcp:
listenaddr: :5554`)
var _ = m7s.InstallPlugin[MyPlugin](defaultConfig)
var _ = m7s.InstallPlugin[MyPlugin](m7s.PluginMeta{
DefaultYaml: defaultConfig,
})
```
## 3. Implement Event Callbacks (Optional)
### Initialization Callback
```go
func (config *MyPlugin) OnInit() (err error) {
func (config *MyPlugin) Start() (err error) {
// Initialize things
return
}
@@ -113,26 +122,29 @@ func (config *MyPlugin) test1(rw http.ResponseWriter, r *http.Request) {
## 5. Implement Push/Pull Clients
### Implement Push Client
A push client implements an IPusher, then passes the IPusher creation method to InstallPlugin.
The push client must implement the IPusher interface, then pass the creation method to InstallPlugin.
```go
type Pusher struct {
pullCtx m7s.PullJob
task.Task
pushJob m7s.PushJob
}
func (c *Pusher) GetPullJob() *m7s.PullJob {
return &c.pullCtx
func (c *Pusher) GetPushJob() *m7s.PushJob {
return &c.pushJob
}
func NewPusher(_ config.Push) m7s.IPusher {
return &Pusher{}
}
var _ = m7s.InstallPlugin[MyPlugin](NewPusher)
var _ = m7s.InstallPlugin[MyPlugin](m7s.PluginMeta{
NewPusher: NewPusher,
})
```
### Implement Pull Client
A pull client implements an IPuller, then passes the IPuller creation method to InstallPlugin.
The following Puller inherits from m7s.HTTPFilePuller for basic file and HTTP pulling; override the Run method for specific pulling logic.
The pull client must implement the IPuller interface, then pass the creation method to InstallPlugin.
The following Puller inherits from m7s.HTTPFilePuller for basic file and HTTP pulling; override the Start method for specific pulling logic.
```go
type Puller struct {
m7s.HTTPFilePuller
@@ -141,7 +153,9 @@ type Puller struct {
func NewPuller(_ config.Pull) m7s.IPuller {
return &Puller{}
}
var _ = m7s.InstallPlugin[MyPlugin](NewPuller)
var _ = m7s.InstallPlugin[MyPlugin](m7s.PluginMeta{
NewPuller: NewPuller,
})
```
## 6. Implement gRPC Service
@@ -221,7 +235,10 @@ import (
"m7s.live/v5/plugin/myplugin/pb"
)
var _ = m7s.InstallPlugin[MyPlugin](&pb.Api_ServiceDesc, pb.RegisterApiHandler)
var _ = m7s.InstallPlugin[MyPlugin](m7s.PluginMeta{
ServiceDesc: &pb.Api_ServiceDesc,
RegisterGRPCHandler: pb.RegisterApiHandler,
})
type MyPlugin struct {
pb.UnimplementedApiServer
@@ -239,51 +256,78 @@ func (config *MyPlugin) API_test1(rw http.ResponseWriter, r *http.Request) {
```
Then the `API_test1` method can be invoked via a GET request to `/myplugin/api/test1`.
## 5. Publishing Streams
## 7. Publishing Streams
```go
publisher, err = p.Publish(streamPath, connectInfo)
publisher, err := p.Publish(ctx, streamPath)
```
Both the `ctx` and `streamPath` parameters are required.
### Writing Audio/Video Data
The old `WriteAudio` and `WriteVideo` methods have been replaced with a more structured, generics-based writer pattern:
#### **Create Writers**
```go
// Audio writer
audioWriter := m7s.NewPublishAudioWriter[*AudioFrame](publisher, allocator)
// Video writer
videoWriter := m7s.NewPublishVideoWriter[*VideoFrame](publisher, allocator)
// Combined audio/video writer
writer := m7s.NewPublisherWriter[*AudioFrame, *VideoFrame](publisher, allocator)
```
#### **Write Frames**
```go
// Set timestamp and write audio frame
writer.AudioFrame.SetTS32(timestamp)
err := writer.NextAudio()
// Set timestamp and write video frame
writer.VideoFrame.SetTS32(timestamp)
err := writer.NextVideo()
```
#### **Write Custom Data**
```go
// For custom data frames
err := publisher.WriteData(data IDataFrame)
```
The last two parameters are optional.
After obtaining the `publisher`, you can publish audio/video data by calling `publisher.WriteAudio` and `publisher.WriteVideo`.
### Define Audio/Video Data
If existing audio/video data formats don't meet your needs, you can define custom formats.
If existing audio/video data formats don't meet your needs, you can define custom formats.
They must satisfy the format-conversion requirements by implementing the following interface:
```go
IAVFrame interface {
GetAllocator() *util.ScalableMemoryAllocator
SetAllocator(*util.ScalableMemoryAllocator)
Parse(*AVTrack) error // get codec info, idr
ConvertCtx(codec.ICodecCtx) (codec.ICodecCtx, IAVFrame, error) // convert codec from source stream
Demux(codec.ICodecCtx) (any, error) // demux to raw format
Mux(codec.ICodecCtx, *AVFrame) // mux from raw format
GetTimestamp() time.Duration
GetCTS() time.Duration
GetSample() *Sample
GetSize() int
CheckCodecChange() error
Demux() error // demux to raw format
Mux(*Sample) error // mux from origin format
Recycle()
String() string
Dump(byte, io.Writer)
}
```
> Define separate types for audio and video
Here the `Parse` method parses audio/video data, `ConvertCtx` converts the codec context between formats, `Demux` demuxes the data, `Mux` muxes the data, and `Recycle` recycles resources.
- GetAllocator: Gets the memory allocator (automatically implemented when embedding RecyclableMemory)
- SetAllocator: Sets the memory allocator (automatically implemented when embedding RecyclableMemory)
- Parse: Mainly identifies key frames, sequence frames, and other important information from the data
- ConvertCtx: Called when protocol conversion is needed; receives the original codec context and returns a new one (the custom format's context)
- Demux: Called when audio/video data needs to be demuxed; receives the codec context and returns the demuxed data for other formats to mux
- Mux: Called when audio/video data needs to be muxed; receives the codec context and demuxed data to produce the custom format
- Recycle: Automatically implemented when embedding RecyclableMemory; no manual implementation needed
- String: Prints audio/video data information
The methods serve the following purposes:
- GetSample: Gets the Sample object containing the codec context and raw data
- GetSize: Gets the size of the audio/video data
- GetTimestamp: Gets the timestamp in nanoseconds
- GetCTS: Gets the Composition Time Stamp in nanoseconds (PTS = DTS + CTS)
- Dump: Prints the binary audio/video data
- CheckCodecChange: Checks whether the codec has changed
- Demux: Demuxes audio/video data to raw format for other formats to use
- Mux: Muxes from the original format into the custom audio/video format
- Recycle: Recycles resources; automatically implemented when embedding RecyclableMemory
- String: Prints audio/video data information
### 6. Subscribing to Streams
### Memory Management
The new pattern includes built-in memory management:
- `util.ScalableMemoryAllocator` - For efficient memory allocation
- Frame recycling through the `Recycle()` method
- Automatic memory pool management
## 8. Subscribing to Streams
```go
var suber *m7s.Subscriber
suber, err = p.Subscribe(ctx,streamPath)
@@ -292,7 +336,244 @@ go m7s.PlayBlock(suber, handleAudio, handleVideo)
Note that handleAudio and handleVideo are callbacks for processing audio/video data that you implement yourself.
Each takes the audio/video format type you want to receive and returns an error; if the error is not nil, the subscription is terminated.
## 7. Prometheus Integration
## 9. Working with H26xFrame for Raw Stream Data
### 9.1 Understanding the H26xFrame Structure
The `H26xFrame` struct is used for handling H.264/H.265 raw stream data:
```go
type H26xFrame struct {
pkg.Sample
}
```
Key characteristics:
- Inherits from `pkg.Sample` - contains codec context, memory management, and timing information
- Uses `Raw.(*pkg.Nalus)` to store NALU (Network Abstraction Layer Unit) data
- Supports both H.264 (AVC) and H.265 (HEVC) formats
- Uses efficient memory allocators for zero-copy operations
### 9.2 Creating H26xFrame for Publishing
```go
import (
"m7s.live/v5"
"m7s.live/v5/pkg/format"
"m7s.live/v5/pkg/util"
"time"
)
// Create a publisher with H26xFrame support - multi-frame publishing
func publishRawH264Stream(streamPath string, h264Frames [][]byte) error {
// Get publisher
publisher, err := p.Publish(streamPath)
if err != nil {
return err
}
// Create memory allocator
allocator := util.NewScalableMemoryAllocator(1 << util.MinPowerOf2)
defer allocator.Recycle()
// Create writer for H26xFrame
writer := m7s.NewPublisherWriter[*format.RawAudio, *format.H26xFrame](publisher, allocator)
// Set up H264 codec context
writer.VideoFrame.ICodecCtx = &format.H264{}
// Publish multiple frames
// Note: this only demonstrates batch writing; in practice frames are written incrementally, one as each arrives from the video source
startTime := time.Now()
for i, frameData := range h264Frames {
// Create H26xFrame for each frame
frame := writer.VideoFrame
// Set timestamp with proper interval
frame.Timestamp = startTime.Add(time.Duration(i) * time.Second / 30) // 30 FPS
// Write NALU data
nalus := frame.GetNalus()
// Assumes frameData contains a single NALU; otherwise loop over the code below
p := nalus.GetNextPointer()
mem := frame.NextN(len(frameData))
copy(mem, frameData)
p.PushOne(mem)
// Publish frame
if err := writer.NextVideo(); err != nil {
return err
}
}
return nil
}
// Example of continuous stream publishing
func continuousH264Publishing(streamPath string, frameSource <-chan []byte, stopChan <-chan struct{}) error {
// Get publisher
publisher, err := p.Publish(streamPath)
if err != nil {
return err
}
defer publisher.Dispose()
// Create memory allocator
allocator := util.NewScalableMemoryAllocator(1 << util.MinPowerOf2)
defer allocator.Recycle()
// Create writer for H26xFrame
writer := m7s.NewPublisherWriter[*format.RawAudio, *format.H26xFrame](publisher, allocator)
// Set up H264 codec context
writer.VideoFrame.ICodecCtx = &format.H264{}
startTime := time.Now()
frameCount := 0
for {
select {
case frameData := <-frameSource:
// Create H26xFrame for each frame
frame := writer.VideoFrame
// Set timestamp with proper interval
frame.Timestamp = startTime.Add(time.Duration(frameCount) * time.Second / 30) // 30 FPS
// Write NALU data
nalus := frame.GetNalus()
p := nalus.GetNextPointer()
mem := frame.NextN(len(frameData))
copy(mem, frameData)
p.PushOne(mem)
// Publish frame
if err := writer.NextVideo(); err != nil {
return err
}
frameCount++
case <-stopChan:
// Stop publishing
return nil
}
}
}
```
### 9.3 Processing H26xFrame (Transform Pattern)
```go
type MyTransform struct {
m7s.DefaultTransformer
Writer *m7s.PublishWriter[*format.RawAudio, *format.H26xFrame]
}
func (t *MyTransform) Go() {
defer t.Dispose()
for video := range t.Video {
if err := t.processH26xFrame(video); err != nil {
t.Error("process frame failed", "error", err)
break
}
}
}
func (t *MyTransform) processH26xFrame(video *format.H26xFrame) error {
// Copy frame metadata
copyVideo := t.Writer.VideoFrame
copyVideo.ICodecCtx = video.ICodecCtx
*copyVideo.BaseSample = *video.BaseSample
nalus := copyVideo.GetNalus()
// Process each NALU unit
for nalu := range video.Raw.(*pkg.Nalus).RangePoint {
p := nalus.GetNextPointer()
mem := copyVideo.NextN(nalu.Size)
nalu.CopyTo(mem)
// Example: Filter or modify specific NALU types
if video.FourCC() == codec.FourCC_H264 {
switch codec.ParseH264NALUType(mem[0]) {
case codec.NALU_IDR_Picture, codec.NALU_Non_IDR_Picture:
// Process video frame NALUs
// Example: apply transformations, filters, etc.
case codec.NALU_SPS, codec.NALU_PPS:
// Process parameter set NALUs
}
} else if video.FourCC() == codec.FourCC_H265 {
switch codec.ParseH265NALUType(mem[0]) {
case h265parser.NAL_UNIT_CODED_SLICE_IDR_W_RADL:
// Process H.265 IDR frames
}
}
// Push the processed NALU
p.PushOne(mem)
}
return t.Writer.NextVideo()
}
```
### 9.4 Common NALU Types for H.264/H.265
#### H.264 NALU Types
```go
const (
NALU_Non_IDR_Picture = 1 // Non-IDR picture (P-frames)
NALU_IDR_Picture = 5 // IDR picture (I-frames)
NALU_SEI = 6 // Supplemental enhancement information
NALU_SPS = 7 // Sequence parameter set
NALU_PPS = 8 // Picture parameter set
)
// Parse NALU type from the first byte
naluType := codec.ParseH264NALUType(mem[0])
```
#### H.265 NALU Types
```go
// Parse H.265 NALU type from the first byte
naluType := codec.ParseH265NALUType(mem[0])
```
### 9.5 Memory Management Best Practices
```go
// Use a memory allocator for efficient operations
allocator := util.NewScalableMemoryAllocator(1 << 20) // 1MB initial size
defer allocator.Recycle()
// Reuse the same allocator when processing multiple frames
writer := m7s.NewPublisherWriter[*format.RawAudio, *format.H26xFrame](publisher, allocator)
```
### 9.6 Error Handling and Validation
```go
func processFrame(video *format.H26xFrame) error {
// Check for codec changes
if err := video.CheckCodecChange(); err != nil {
return err
}
// Validate frame data
if video.Raw == nil {
return fmt.Errorf("empty frame data")
}
// Process NALUs safely
nalus, ok := video.Raw.(*pkg.Nalus)
if !ok {
return fmt.Errorf("invalid NALUs format")
}
// Process frame...
return nil
}
```
## 10. Prometheus Integration
Just implement the Collector interface, and the system will automatically collect metrics from all plugins:
```go
func (p *MyPlugin) Describe(ch chan<- *prometheus.Desc) {
@@ -303,4 +584,41 @@ func (p *MyPlugin) Collect(ch chan<- prometheus.Metric) {
}
## Plugin Merge Notes
### Monitor Plugin Merged into Debug Plugin
Starting with v5, the Monitor plugin's functionality has been merged into the Debug plugin. This merge simplifies the plugin structure and provides a more unified debugging and monitoring experience.
#### Functional Changes
- All Monitor plugin functionality is now accessible through the Debug plugin
- Task monitoring API paths changed from `/monitor/api/*` to `/debug/api/monitor/*`
- Data models and database structure remain unchanged
- Session Task monitoring logic is fully migrated to the Debug plugin
#### Usage
APIs previously accessed through the Monitor plugin should now be accessed through the Debug plugin:
```
# Old paths
GET /monitor/api/session/list
GET /monitor/api/search/task/{sessionId}
# New paths
GET /debug/api/monitor/session/list
GET /debug/api/monitor/task/{sessionId}
```
#### Configuration Changes
The Monitor plugin no longer needs separate configuration; just configure the Debug plugin, which automatically initializes the monitoring features.
```yaml
debug:
enable: true
# other debug config options
```


@@ -19,10 +19,12 @@ type CascadeClientPlugin struct {
AutoPush bool `desc:"自动推流到上级"` // auto-push to the upstream server
Server string `desc:"上级服务器"` // TODO: support multiple servers
Secret string `desc:"连接秘钥"` // connection secret
conn quic.Connection
client *CascadeClient
}
var _ = m7s.InstallPlugin[CascadeClientPlugin]()
var _ = m7s.InstallPlugin[CascadeClientPlugin](m7s.PluginMeta{
NewPuller: cascade.NewCascadePuller,
})
type CascadeClient struct {
task.Work
@@ -79,7 +81,7 @@ func (task *CascadeClient) Run() (err error) {
return
}
func (c *CascadeClientPlugin) OnInit() (err error) {
func (c *CascadeClientPlugin) Start() (err error) {
if c.Secret == "" && c.Server == "" {
return nil
}
@@ -88,12 +90,13 @@ func (c *CascadeClientPlugin) OnInit() (err error) {
}
connectTask.SetRetry(-1, time.Second)
c.AddTask(&connectTask)
c.client = &connectTask
return
}
func (c *CascadeClientPlugin) Pull(streamPath string, conf config.Pull, pub *config.Publish) (job *m7s.PullJob, err error) {
puller := &cascade.Puller{
Connection: c.conn,
Connection: c.client.Connection,
}
job = puller.GetPullJob()
job.Init(puller, &c.Plugin, streamPath, conf, pub)


@@ -5,6 +5,7 @@ import (
"github.com/quic-go/quic-go"
"m7s.live/v5"
"m7s.live/v5/pkg/config"
flv "m7s.live/v5/plugin/flv/pkg"
)
@@ -17,7 +18,7 @@ func (p *Puller) GetPullJob() *m7s.PullJob {
return &p.PullJob
}
func NewCascadePuller() m7s.IPuller {
func NewCascadePuller(config.Pull) m7s.IPuller {
return &Puller{}
}


@@ -29,7 +29,7 @@ type CascadeServerPlugin struct {
clients util.Collection[uint, *cascade.Instance]
}
func (c *CascadeServerPlugin) OnInit() (err error) {
func (c *CascadeServerPlugin) Start() (err error) {
if c.GetCommonConf().Quic.ListenAddr == "" {
return pkg.ErrNotListen
}
@@ -50,8 +50,12 @@ func (c *CascadeServerPlugin) OnInit() (err error) {
return
}
var _ = m7s.InstallPlugin[CascadeServerPlugin](m7s.DefaultYaml(`quic:
listenaddr: :44944`), &pb.Server_ServiceDesc, pb.RegisterServerHandler)
var _ = m7s.InstallPlugin[CascadeServerPlugin](m7s.PluginMeta{
DefaultYaml: `quic:
listenaddr: :44944`,
ServiceDesc: &pb.Server_ServiceDesc,
RegisterGRPCHandler: pb.RegisterServerHandler,
})
type CascadeServer struct {
task.Work


@@ -22,7 +22,7 @@ var _ = m7s.InstallPlugin[CrontabPlugin](m7s.PluginMeta{
RegisterGRPCHandler: pb.RegisterApiHandler,
})
func (ct *CrontabPlugin) OnInit() (err error) {
func (ct *CrontabPlugin) Start() (err error) {
if ct.DB == nil {
ct.Error("DB is nil")
} else {


@@ -1,71 +0,0 @@
# Monibuca Crypto Plugin
This plugin provides video stream encryption, supporting multiple encryption algorithms with either static or dynamic keys.
## Configuration
Add the following configuration to config.yaml:
```yaml
crypto:
isStatic: false # whether to use a static key
algo: "aes_ctr" # encryption algorithm: aes_ctr, xor_s, or xor_c
encryptLen: 1024 # number of bytes to encrypt
secret:
key: "your key" # encryption key
iv: "your iv" # encryption IV (only required for aes_ctr and xor_c)
onpub:
transform:
.* : $0 # regex selecting which streams to encrypt; here, all streams
```
### Encryption Algorithms
1. `aes_ctr`: AES-CTR mode encryption
- key length: 32 bytes
- iv length: 16 bytes
2. `xor_s`: simple XOR encryption
- key length: 32 bytes
- no iv required
3. `xor_c`: complex XOR encryption
- key length: 32 bytes
- iv length: 16 bytes
## Key Retrieval
### API
API for retrieving the encryption key:
```
GET /crypto?stream={streamPath}
```
Parameters:
- stream: stream path
Example response:
```text
{key}.{iv}
```
The returned key and iv are encoded with raw-std (unpadded) base64.
### Key Generation Rules
1. Static key mode (isStatic: true)
- Uses the key and iv from the config file directly
2. Dynamic key mode (isStatic: false)
- key = md5(configured key + stream path)
- iv = first 16 bytes of md5(stream path)
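Assuming the derived values are hex-encoded so they meet the 32-byte key and 16-byte iv length requirements stated above (the plugin's actual encoding may differ), the dynamic-key rules can be sketched as:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// deriveKeyIV sketches the dynamic key rules. Assumption: both values
// are hex-encoded, giving a 32-character key (md5 digest) and the
// first 16 characters of the stream path's md5 as the iv.
func deriveKeyIV(secret, streamPath string) (key, iv string) {
	k := md5.Sum([]byte(secret + streamPath))
	key = hex.EncodeToString(k[:]) // 32 hex characters
	i := md5.Sum([]byte(streamPath))
	iv = hex.EncodeToString(i[:])[:16] // first 16 characters
	return
}

func main() {
	key, iv := deriveKeyIV("mysecret", "live/test")
	fmt.Println(len(key), len(iv)) // lengths match the 32/16 requirements
}
```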
## Notes
1. Only the key data portion of video frames is encrypted; NALU header information is preserved
2. When using dynamic keys, make sure a valid secret.key is set in the config file
3. When using the AES-CTR or XOR-C algorithm, both key and iv must be configured
4. Dynamic key mode is recommended in production for better security


@@ -1,43 +0,0 @@
package plugin_crypto
import (
"encoding/base64"
"fmt"
"net/http"
cryptopkg "m7s.live/v5/plugin/crypto/pkg"
)
func (p *CryptoPlugin) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Set CORS headers
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET, POST")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
w.Header().Set("Content-Type", "application/json")
// Get the stream parameter
stream := r.URL.Query().Get("stream")
if stream == "" {
http.Error(w, "stream parameter is required", http.StatusBadRequest)
return
}
// Check whether the stream exists
if !p.Server.Streams.Has(stream) {
http.Error(w, "stream not found", http.StatusNotFound)
return
}
keyConf, err := cryptopkg.ValidateAndCreateKey(p.IsStatic, p.Algo, p.Secret.Key, p.Secret.Iv, stream)
if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
// cryptor, err := method.GetCryptor(p.Algo, keyConf)
// if err != nil {
// http.Error(w, err.Error(), http.StatusBadRequest)
// return
// }
// w.Write([]byte(cryptor.GetKey()))
w.Write([]byte(fmt.Sprintf("%s.%s", base64.RawStdEncoding.EncodeToString([]byte(keyConf.Key)), base64.RawStdEncoding.EncodeToString([]byte(keyConf.Iv)))))
}


@@ -1,44 +0,0 @@
package plugin_crypto
import (
m7s "m7s.live/v5"
crypto "m7s.live/v5/plugin/crypto/pkg"
)
var _ = m7s.InstallPlugin[CryptoPlugin](crypto.NewTransform)
type CryptoPlugin struct {
m7s.Plugin
IsStatic bool `desc:"是否静态密钥" default:"false"` // whether to use a static key
Algo string `desc:"加密算法" default:"aes_ctr"` // encryption algorithm
EncryptLen int `desc:"加密字节长度" default:"1024"` // number of bytes to encrypt
Secret struct {
Key string `desc:"加密密钥" default:"your key"` // encryption key
Iv string `desc:"加密向量" default:"your iv"` // encryption IV
} `desc:"密钥配置"`
}
// OnInit is the callback invoked when the plugin initializes
func (p *CryptoPlugin) OnInit() (err error) {
// Initialize the global config
crypto.GlobalConfig = crypto.Config{
IsStatic: p.IsStatic,
Algo: p.Algo,
EncryptLen: p.EncryptLen,
Secret: struct {
Key string `desc:"加密密钥" default:"your key"`
Iv string `desc:"加密向量" default:"your iv"`
}{
Key: p.Secret.Key,
Iv: p.Secret.Iv,
},
}
p.Info("crypto config initialized",
"algo", p.Algo,
"isStatic", p.IsStatic,
"encryptLen", p.EncryptLen,
)
return nil
}


@@ -1,31 +0,0 @@
package crypto
import (
"encoding/base64"
"io"
"net/http"
"strings"
"testing"
)
func TestGetKey(t *testing.T) {
stream := "/hdl/live/test0.flv"
host := "http://localhost:8080/crypto/?stream="
r, err := http.DefaultClient.Get(host + stream)
if err != nil {
t.Error("get", err)
return
}
b, err := io.ReadAll(r.Body)
if err != nil {
t.Error("read", err)
return
}
b64 := strings.Split(string(b), ".")
key, err := base64.RawStdEncoding.DecodeString(b64[0])
t.Log("key", key, err)
iv, err := base64.RawStdEncoding.DecodeString(b64[1])
t.Log("iv", iv, err)
}


@@ -1,99 +0,0 @@
package method
import (
"bytes"
"crypto/aes"
"crypto/cipher"
"encoding/base64"
"errors"
)
// Encryption process:
// 1. Pad the data with PKCS7: when the last block is short by n bytes, append n bytes each with value n.
// 2. Encrypt the data with AES in CBC mode.
// 3. Base64-encode the ciphertext to obtain a string.
// Decryption is the reverse process.
// AesCryptor encrypts using AES in CBC mode
type AesCryptor struct {
key []byte
}
func newAesCbc(cfg Key) (ICryptor, error) {
var cryptor *AesCryptor
if cfg.Key == "" {
return nil, errors.New("aes cryptor config no key")
} else {
cryptor = &AesCryptor{key: []byte(cfg.Key)}
}
return cryptor, nil
}
func init() {
RegisterCryptor("aes_cbc", newAesCbc)
}
func (c *AesCryptor) Encrypt(origin []byte) ([]byte, error) {
// Create the cipher instance
block, err := aes.NewCipher(c.key)
if err != nil {
return nil, err
}
// Get the cipher block size
blockSize := block.BlockSize()
// Pad the data
encryptBytes := pkcs7Padding(origin, blockSize)
// Initialize the slice that receives the encrypted data
crypted := make([]byte, len(encryptBytes))
// Use CBC encryption mode
blockMode := cipher.NewCBCEncrypter(block, c.key[:blockSize])
// Perform the encryption
blockMode.CryptBlocks(crypted, encryptBytes)
return crypted, nil
}
func (c *AesCryptor) Decrypt(encrypted []byte) ([]byte, error) {
// Create the cipher instance
block, err := aes.NewCipher(c.key)
if err != nil {
return nil, err
}
// Get the block size
blockSize := block.BlockSize()
// Use CBC decryption mode
blockMode := cipher.NewCBCDecrypter(block, c.key[:blockSize])
// Initialize the slice that receives the decrypted data
crypted := make([]byte, len(encrypted))
// Perform the decryption
blockMode.CryptBlocks(crypted, encrypted)
// Remove the padding
crypted, err = pkcs7UnPadding(crypted)
if err != nil {
return nil, err
}
return crypted, nil
}
func (c *AesCryptor) GetKey() string {
return base64.RawStdEncoding.EncodeToString(c.key)
}
// pkcs7Padding pads the data
func pkcs7Padding(data []byte, blockSize int) []byte {
// Compute how many bytes are missing: at least 1, at most blockSize
padding := blockSize - len(data)%blockSize
// Append the padding: repeat []byte{byte(padding)} padding times
padText := bytes.Repeat([]byte{byte(padding)}, padding)
return append(data, padText...)
}
// pkcs7UnPadding reverses the padding
func pkcs7UnPadding(data []byte) ([]byte, error) {
length := len(data)
if length == 0 {
return nil, errors.New("invalid encrypted string")
}
// Get the padding count
unPadding := int(data[length-1])
return data[:(length - unPadding)], nil
}
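The PKCS#7 scheme described above can be checked in isolation; a standalone round-trip sketch that re-states the two helpers outside the package:

```go
package main

import (
	"bytes"
	"fmt"
)

// pad appends 1..blockSize bytes, each holding the padding length.
func pad(data []byte, blockSize int) []byte {
	n := blockSize - len(data)%blockSize
	return append(data, bytes.Repeat([]byte{byte(n)}, n)...)
}

// unpad reads the padding length from the last byte and trims it.
func unpad(data []byte) []byte {
	n := int(data[len(data)-1])
	return data[:len(data)-n]
}

func main() {
	msg := []byte("hello") // 5 bytes -> 11 bytes of padding for a 16-byte block
	padded := pad(msg, 16)
	fmt.Println(len(padded), padded[len(padded)-1])
	fmt.Println(string(unpad(padded)))
}
```

Note that, like the original `pkcs7UnPadding`, a production version should validate the padding byte range before trimming.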


@@ -1,61 +0,0 @@
package method
import (
"crypto/aes"
"crypto/cipher"
"encoding/base64"
"errors"
"fmt"
)
type AesCtrCryptor struct {
key []byte
iv []byte
}
func newAesCtr(cfg Key) (ICryptor, error) {
var cryptor *AesCtrCryptor
if cfg.Key == "" || cfg.Iv == "" {
return nil, errors.New("aes ctr cryptor config no key")
}
cryptor = &AesCtrCryptor{key: []byte(cfg.Key), iv: []byte(cfg.Iv)}
return cryptor, nil
}
func init() {
RegisterCryptor("aes_ctr", newAesCtr)
}
func (c *AesCtrCryptor) Encrypt(origin []byte) ([]byte, error) {
block, err := aes.NewCipher(c.key)
if err != nil {
return nil, err
}
aesCtr := cipher.NewCTR(block, c.iv)
// Encrypt the plaintext
ciphertext := make([]byte, len(origin))
aesCtr.XORKeyStream(ciphertext, origin)
return ciphertext, nil
}
func (c *AesCtrCryptor) Decrypt(encrypted []byte) ([]byte, error) {
block, err := aes.NewCipher(c.key)
if err != nil {
return nil, err
}
aesCtr := cipher.NewCTR(block, c.iv)
// Decrypt the ciphertext
plaintext := make([]byte, len(encrypted))
aesCtr.XORKeyStream(plaintext, encrypted)
return plaintext, nil
}
func (c *AesCtrCryptor) GetKey() string {
return fmt.Sprintf("%s.%s", base64.RawStdEncoding.EncodeToString(c.key), base64.RawStdEncoding.EncodeToString(c.iv))
}

Some files were not shown because too many files have changed in this diff.