Compare commits

...

37 Commits

Author SHA1 Message Date
langhuihui
4fe1472117 refactor: init plugin failed do not register http handle 2025-06-11 13:57:45 +08:00
langhuihui
a8b3a644c3 feat: record recover 2025-06-10 20:16:39 +08:00
pggiroro
4f0a097dac feat: crontab support plan with streampath in database 2025-06-08 21:01:36 +08:00
pggiroro
4df3de00af fix: gb28181 subscriber and invite sdp 2025-06-08 10:40:17 +08:00
langhuihui
9c16905f28 feat: add env check to debug plugin 2025-06-07 21:07:28 +08:00
pggiroro
0470f78ed7 fix: register to up platform change cseq when need password, get deviceinfo do not update device name when name is not nil in db,return error when DB is nil in Oninit 2025-06-06 22:45:50 +08:00
pggiroro
7282f1f44d fix: add platform from config.yaml,add example into default/config.yaml 2025-06-06 09:03:58 +08:00
pggiroro
67186cd669 fix: subscribe stream before start mp4 record 2025-06-06 09:03:58 +08:00
pggiroro
09e9761083 feat: Added the association feature between plan and streampath, which has not been tested yet. 2025-06-06 09:03:58 +08:00
langhuihui
4acdc19beb feat: add duration to record 2025-06-05 23:51:33 +08:00
langhuihui
80e19726d4 fix: use safeGet instead of Call and get
feat: multi buddy support
2025-06-05 20:33:59 +08:00
langhuihui
8ff14931fe feat: disable replay protection on tcp webrtc 2025-06-04 23:02:24 +08:00
pggiroro
9c7dc7e628 fix: modify gb.Logger.With 2025-06-04 20:39:49 +08:00
pggiroro
75791fe93f feat: gb28181 support add platform and platform channel from config.yaml 2025-06-04 20:36:48 +08:00
langhuihui
cf218215ff fix: tcp read block 2025-06-04 14:13:28 +08:00
langhuihui
dbf820b845 feat: download flv format from mp4 record file 2025-06-03 17:20:58 +08:00
langhuihui
86b9969954 feat: config support more format 2025-06-03 09:06:43 +08:00
langhuihui
b3143e8c14 fix: mp4 download 2025-06-02 22:31:25 +08:00
langhuihui
7f859e6139 fix: mp4 recovery 2025-06-02 21:12:02 +08:00
pggiroro
6eb2941087 fix: use task.Manager to resolve register handler 2025-06-02 20:09:22 +08:00
pggiroro
e8b4cea007 fix: plan.length is 168 2025-06-02 20:09:22 +08:00
pggiroro
3949773e63 fix: update config.yaml add comment about autoinvite,mediaip,sipip 2025-06-02 20:09:22 +08:00
langhuihui
d67279a404 feat: add raw check no frame 2025-05-30 14:01:18 +08:00
langhuihui
043c62f38f feat: add loop read mp4 2025-05-29 20:25:26 +08:00
pggiroro
acf9f0c677 fix: gb28181 make invite sdp mediaip or sipip correct;linux remove viaheader in sip request 2025-05-28 09:22:34 +08:00
langhuihui
49d1e7c784 feat: add s3 plugin 2025-05-28 08:40:53 +08:00
langhuihui
40bc7d4675 feat: add writerBuffer config to tcp 2025-05-27 16:56:01 +08:00
langhuihui
5aa8503aeb feat: add pull testMode 2025-05-27 10:43:34 +08:00
langhuihui
09175f0255 fix: use total instead of totalCount 2025-05-26 16:04:20 +08:00
pggiroro
dd1a398ca2 feat: gb28181 support play sub stream 2025-05-25 21:33:14 +08:00
pggiroro
50cdfad931 fix: d.conn.NetConnection.Conn maybe nil 2025-05-25 21:33:14 +08:00
langhuihui
6df793a8fb feat: add more format for sei api 2025-05-23 17:18:43 +08:00
langhuihui
74c948d0c3 fix: rtsp memory leak 2025-05-23 10:02:36 +08:00
pggiroro
80ad1044e3 fix: gb28181 register too fast will start too many task 2025-05-22 22:56:41 +08:00
langhuihui
47884b6880 fix: rtmp timestamp start with 1 2025-05-22 22:52:21 +08:00
langhuihui
a38ddd68aa feat: add tcp dump to docker 2025-05-22 20:34:50 +08:00
banshan
a2bc3d94c1 fix: rtsp no audio or video flag 2025-05-22 20:10:17 +08:00
110 changed files with 8243 additions and 3492 deletions

View File

@@ -11,6 +11,9 @@ COPY monibuca_arm64 ./monibuca_arm64
COPY admin.zip ./admin.zip
# Install tcpdump
RUN apt-get update && apt-get install -y tcpdump && rm -rf /var/lib/apt/lists/*
# Copy the configuration file from the build context
COPY example/default/config.yaml /etc/monibuca/config.yaml

111
RELEASE_NOTES_5.0.x_CN.md Normal file
View File

@@ -0,0 +1,111 @@
# Monibuca v5.0.x Release Notes
## v5.0.2 (2025-06-05)
### 🎉 New Features
#### Core
- **Lower latency** - Disabled replay protection for WebRTC over TCP, reducing latency
- **Configuration system enhancements** - More configuration formats are accepted (keys may contain `-`, `_`, and uppercase letters), improving configuration flexibility
- **Raw-data check** - Added a check for raw data that contains no frames, improving data-processing stability
- **MP4 loop playback** - MP4 files can be read in a loop (via the `loop` option under the pull config)
- **S3 plugin** - Added an S3 storage plugin for cloud-storage integration
- **TCP read/write buffer configuration** - Added read/write buffer size options for TCP connections, improving throughput under high concurrency
- **Pull test mode** - Added a pull test mode (pull without publishing), useful for debugging and testing
- **SEI API format extension** - The SEI API now supports more data formats
- **Hook extension** - Added more hook callback points for better extensibility
- **Crontab plugin** - Added a crontab scheduled-task plugin
- **Server packet capture** - Added server-side packet capture (invokes `tcpdump`) for TCP and UDP; see the API docs at [tcpdump](https://api.monibuca.com/api-301117332)
#### GB28181 Enhancements
- **Platform configuration** - GB28181 now supports adding platforms and platform channels from config.yaml
- **Sub-stream playback** - GB28181 sub-streams can now be played
- **SDP optimization** - Improved handling of mediaip and sipip in the invite SDP
- **Local port persistence** - Fixed saving the GB28181 local port to the database
#### MP4 Enhancements
- **FLV download** - FLV can be downloaded from MP4 recording files
- **Download fixes** - Fixed issues with MP4 download
- **Recovery fix** - Fixed MP4 recovery
### 🐛 Bug Fixes
#### Networking
- **TCP read blocking** - Fixed a TCP read blocking issue by adding a read timeout
- **RTSP memory leak** - Fixed a memory leak in the RTSP protocol
- **RTSP audio/video flags** - Fixed RTSP streams missing the audio or video flag
#### GB28181
- **Task management** - Use task.Manager to resolve the register handler
- **Plan length** - Fixed plan.length to be 168
- **Registration rate** - Fixed GB28181 registering too fast and starting too many tasks
- **Contact information** - Fixed GB28181 obtaining incorrect contact information
#### RTMP
- **Timestamp handling** - Fixed RTMP timestamps jumping at the start
### 🛠️ Improvements
#### Docker
- **tcpdump** - Added the tcpdump network-diagnostics tool to the Docker image
#### Linux
- **SIP request optimization** - Removed the Via header from SIP requests on Linux
### 👥 Contributors
- langhuihui
- pggiroro
- banshan
---
## v5.0.1 (2025-05-21)
### 🎉 New Features
#### WebRTC Enhancements
- **H265 support** - Added H265 support to WebRTC, improving video quality and compression efficiency
#### GB28181 Enhancements
- **Subscription extensions** - The GB28181 module can now subscribe to alarm, mobile position, and catalog information
- **Notify requests** - Notify requests are now accepted, improving interaction with devices
#### Docker Optimizations
- **FFmpeg integration** - Added FFmpeg to the Docker image for more audio/video processing scenarios
- **Multi-architecture support** - Added multi-architecture Docker builds
### 🐛 Bug Fixes
#### Docker
- **Build issues** - Fixed several problems in the Docker build
- **Build optimization** - Streamlined the Docker build for better build efficiency
#### RTMP
- **Timestamp handling** - Fixed the first RTMP type-3 chunk needing an added timestamp
#### GB28181
- **Path matching** - Fixed the regular-expression matching of playback stream paths in the GB28181 module
#### MP4
- **stsz box** - Fixed the sample size in the stsz box
- **G711 audio** - Fixed reading G711 audio when pulling MP4 files
- **H265 parsing** - Fixed H265 MP4 file parsing
### 🛠️ Improvements
#### Code Quality
- **Error handling** - Added a maxcount error-handling mechanism
- **Documentation** - Updated the README and go.mod configuration
#### Build System
- **ARM** - Reduced JavaScript code to optimize the ARM Docker build
- **Build tags** - Removed unnecessary build tags from Docker
### 📦 Other Updates
- **MCP** - Updated Model Context Protocol related functionality
- **Dependencies** - Updated project dependencies and module configuration
### 👥 Contributors
- langhuihui
---

146
api.go
View File

@@ -7,7 +7,6 @@ import (
"net/http"
"net/url"
"os"
"path/filepath"
"reflect"
"runtime"
"strings"
@@ -181,19 +180,17 @@ func (s *Server) getStreamInfo(pub *Publisher) (res *pb.StreamInfoResponse, err
func (s *Server) StreamInfo(ctx context.Context, req *pb.StreamSnapRequest) (res *pb.StreamInfoResponse, err error) {
var recordings []*pb.RecordingDetail
s.Records.Call(func() error {
for record := range s.Records.Range {
if record.StreamPath == req.StreamPath {
recordings = append(recordings, &pb.RecordingDetail{
FilePath: record.RecConf.FilePath,
Mode: record.Mode,
Fragment: durationpb.New(record.RecConf.Fragment),
Append: record.RecConf.Append,
PluginName: record.Plugin.Meta.Name,
})
}
s.Records.SafeRange(func(record *RecordJob) bool {
if record.StreamPath == req.StreamPath {
recordings = append(recordings, &pb.RecordingDetail{
FilePath: record.RecConf.FilePath,
Mode: record.Mode,
Fragment: durationpb.New(record.RecConf.Fragment),
Append: record.RecConf.Append,
PluginName: record.Plugin.Meta.Name,
})
}
return nil
return true
})
if pub, ok := s.Streams.SafeGet(req.StreamPath); ok {
res, err = s.getStreamInfo(pub)
@@ -261,17 +258,15 @@ func (s *Server) RestartTask(ctx context.Context, req *pb.RequestWithId64) (resp
}
func (s *Server) GetRecording(ctx context.Context, req *emptypb.Empty) (resp *pb.RecordingListResponse, err error) {
s.Records.Call(func() error {
resp = &pb.RecordingListResponse{}
for record := range s.Records.Range {
resp.Data = append(resp.Data, &pb.Recording{
StreamPath: record.StreamPath,
StartTime: timestamppb.New(record.StartTime),
Type: reflect.TypeOf(record.recorder).String(),
Pointer: uint64(record.GetTaskPointer()),
})
}
return nil
resp = &pb.RecordingListResponse{}
s.Records.SafeRange(func(record *RecordJob) bool {
resp.Data = append(resp.Data, &pb.Recording{
StreamPath: record.StreamPath,
StartTime: timestamppb.New(record.StartTime),
Type: reflect.TypeOf(record.recorder).String(),
Pointer: uint64(record.GetTaskPointer()),
})
return true
})
return
}
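
The two hunks above replace the `Records.Call` + `Range` pattern with `SafeRange`, whose callback returns `true` to keep iterating and `false` to stop. A minimal, self-contained sketch of that contract follows; it uses a toy collection, not the actual m7s `util.Collection` type, so the field names and locking strategy here are assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// Toy stand-in for the collection behind s.Records / s.Streams; it only
// illustrates the SafeRange contract used in the diff above.
type SafeCollection[T any] struct {
	mu    sync.RWMutex
	items []T
}

func (c *SafeCollection[T]) Add(item T) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items = append(c.items, item)
}

// SafeRange iterates under the read lock and stops when fn returns false,
// matching the `return true` convention in the refactored handlers.
func (c *SafeCollection[T]) SafeRange(fn func(T) bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	for _, item := range c.items {
		if !fn(item) {
			return
		}
	}
}

type RecordJob struct{ StreamPath, FilePath string }

func main() {
	var records SafeCollection[*RecordJob]
	records.Add(&RecordJob{StreamPath: "live/test", FilePath: "record/live/test.mp4"})

	var matched []*RecordJob
	records.SafeRange(func(r *RecordJob) bool {
		if r.StreamPath == "live/test" {
			matched = append(matched, r)
		}
		return true // keep iterating
	})
	fmt.Println(len(matched))
}
```

The apparent benefit is that read-only iteration no longer has to be funneled through the task's `Call` queue just to stay thread-safe.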
@@ -491,7 +486,7 @@ func (s *Server) Shutdown(ctx context.Context, req *pb.RequestWithId) (res *pb.S
func (s *Server) ChangeSubscribe(ctx context.Context, req *pb.ChangeSubscribeRequest) (res *pb.SuccessResponse, err error) {
s.Streams.Call(func() error {
if subscriber, ok := s.Subscribers.Get(req.Id); ok {
if pub, ok := s.Streams.SafeGet(req.StreamPath); ok {
if pub, ok := s.Streams.Get(req.StreamPath); ok {
subscriber.Publisher.RemoveSubscriber(subscriber)
subscriber.StreamPath = req.StreamPath
pub.AddSubscriber(subscriber)
@@ -517,86 +512,65 @@ func (s *Server) StopSubscribe(ctx context.Context, req *pb.RequestWithId) (res
}
func (s *Server) PauseStream(ctx context.Context, req *pb.StreamSnapRequest) (res *pb.SuccessResponse, err error) {
s.Streams.Call(func() error {
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Pause()
}
return nil
})
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Pause()
}
return &pb.SuccessResponse{}, err
}
func (s *Server) ResumeStream(ctx context.Context, req *pb.StreamSnapRequest) (res *pb.SuccessResponse, err error) {
s.Streams.Call(func() error {
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Resume()
}
return nil
})
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Resume()
}
return &pb.SuccessResponse{}, err
}
func (s *Server) SetStreamSpeed(ctx context.Context, req *pb.SetStreamSpeedRequest) (res *pb.SuccessResponse, err error) {
s.Streams.Call(func() error {
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Speed = float64(req.Speed)
s.Scale = float64(req.Speed)
s.Info("set stream speed", "speed", req.Speed)
}
return nil
})
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Speed = float64(req.Speed)
s.Scale = float64(req.Speed)
s.Info("set stream speed", "speed", req.Speed)
}
return &pb.SuccessResponse{}, err
}
func (s *Server) SeekStream(ctx context.Context, req *pb.SeekStreamRequest) (res *pb.SuccessResponse, err error) {
s.Streams.Call(func() error {
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Seek(time.Unix(int64(req.TimeStamp), 0))
}
return nil
})
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Seek(time.Unix(int64(req.TimeStamp), 0))
}
return &pb.SuccessResponse{}, err
}
func (s *Server) StopPublish(ctx context.Context, req *pb.StreamSnapRequest) (res *pb.SuccessResponse, err error) {
s.Streams.Call(func() error {
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Stop(task.ErrStopByUser)
}
return nil
})
if s, ok := s.Streams.SafeGet(req.StreamPath); ok {
s.Stop(task.ErrStopByUser)
}
return &pb.SuccessResponse{}, err
}
// /api/stream/list
func (s *Server) StreamList(_ context.Context, req *pb.StreamListRequest) (res *pb.StreamListResponse, err error) {
recordingMap := make(map[string][]*pb.RecordingDetail)
s.Records.Call(func() error {
for record := range s.Records.Range {
recordingMap[record.StreamPath] = append(recordingMap[record.StreamPath], &pb.RecordingDetail{
FilePath: record.RecConf.FilePath,
Mode: record.Mode,
Fragment: durationpb.New(record.RecConf.Fragment),
Append: record.RecConf.Append,
PluginName: record.Plugin.Meta.Name,
Pointer: uint64(record.GetTaskPointer()),
})
for record := range s.Records.SafeRange {
recordingMap[record.StreamPath] = append(recordingMap[record.StreamPath], &pb.RecordingDetail{
FilePath: record.RecConf.FilePath,
Mode: record.Mode,
Fragment: durationpb.New(record.RecConf.Fragment),
Append: record.RecConf.Append,
PluginName: record.Plugin.Meta.Name,
Pointer: uint64(record.GetTaskPointer()),
})
}
var streams []*pb.StreamInfo
for publisher := range s.Streams.SafeRange {
info, err := s.getStreamInfo(publisher)
if err != nil {
continue
}
return nil
})
s.Streams.Call(func() error {
var streams []*pb.StreamInfo
for publisher := range s.Streams.Range {
info, err := s.getStreamInfo(publisher)
if err != nil {
continue
}
info.Data.Recording = recordingMap[info.Data.Path]
streams = append(streams, info.Data)
}
res = &pb.StreamListResponse{Data: streams, Total: int32(s.Streams.Length), PageNum: req.PageNum, PageSize: req.PageSize}
return nil
})
info.Data.Recording = recordingMap[info.Data.Path]
streams = append(streams, info.Data)
}
res = &pb.StreamListResponse{Data: streams, Total: int32(s.Streams.Length), PageNum: req.PageNum, PageSize: req.PageSize}
return
}
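
The handlers above (PauseStream, ResumeStream, SetStreamSpeed, SeekStream, StopPublish) now call `SafeGet` directly instead of wrapping the lookup in `Streams.Call`. A minimal sketch of what such a lock-protected getter could look like, again using a toy type rather than the real m7s collection:

```go
package main

import (
	"fmt"
	"sync"
)

// Minimal stand-in for the SafeGet used above: the lookup itself is
// lock-protected, so the surrounding Call wrapper becomes unnecessary.
type SafeMap[K comparable, V any] struct {
	mu sync.RWMutex
	m  map[K]V
}

func (s *SafeMap[K, V]) Set(k K, v V) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.m == nil {
		s.m = make(map[K]V)
	}
	s.m[k] = v
}

func (s *SafeMap[K, V]) SafeGet(k K) (v V, ok bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok = s.m[k]
	return
}

func main() {
	var streams SafeMap[string, string]
	streams.Set("live/test", "publisher-1")
	if pub, ok := streams.SafeGet("live/test"); ok {
		fmt.Println("found", pub)
	}
}
```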
@@ -718,7 +692,7 @@ func (s *Server) GetConfigFile(_ context.Context, req *emptypb.Empty) (res *pb.G
func (s *Server) UpdateConfigFile(_ context.Context, req *pb.UpdateConfigFileRequest) (res *pb.SuccessResponse, err error) {
if s.configFileContent != nil {
s.configFileContent = []byte(req.Content)
os.WriteFile(filepath.Join(ExecDir, s.conf.(string)), s.configFileContent, 0644)
os.WriteFile(s.configFilePath, s.configFileContent, 0644)
res = &pb.SuccessResponse{}
} else {
err = pkg.ErrNotFound
@@ -808,9 +782,9 @@ func (s *Server) GetRecordList(ctx context.Context, req *pb.ReqRecordList) (resp
return
}
resp = &pb.ResponseList{
TotalCount: uint32(totalCount),
PageNum: req.PageNum,
PageSize: req.PageSize,
Total: uint32(totalCount),
PageNum: req.PageNum,
PageSize: req.PageSize,
}
for _, recordFile := range result {
resp.Data = append(resp.Data, &pb.RecordFile{

View File

@@ -8,20 +8,40 @@ srt:
listenaddr: :6000
passphrase: foobarfoobar
gb28181:
enable: false
autoinvite: false
mediaip: 192.168.1.21 # media receiving IP
sipip: 192.168.1.21 # SIP signaling IP
enable: false # whether to enable the GB28181 protocol
autoinvite: false # false is recommended; when enabled, devices are automatically invited to push streams
mediaip: 192.168.1.21 # media receiving IP: use the public IP on the internet, the NIC IP on a LAN, never 127.0.0.1
sipip: 192.168.1.21 # SIP signaling IP: always use the local NIC IP, whether public or LAN, never 127.0.0.1
sip:
listenaddr:
- udp::5060
# pull:
# live/test: dump/34020000001320000001
onsub:
pull:
^\d{20}/\d{20}$: $0
^gb_\d+/(.+)$: $1
# .* : $0
platforms:
- enable: false # whether to enable this platform
name: "测试平台" # platform name
servergbid: "34020000002000000002" # upstream platform GB ID
servergbdomain: "3402000000" # upstream platform GB domain
serverip: 192.168.1.106 # upstream platform IP
serverport: 5061 # upstream platform port
devicegbid: "34020000002000000001" # local device GB ID
deviceip: 192.168.1.106 # local device IP
deviceport: 5060 # local device port
username: "34020000002000000001" # SIP account
password: "123456" # SIP password
expires: 3600 # registration validity, in seconds
keeptimeout: 60 # keepalive timeout, in seconds
civilCode: "340200" # administrative division code
manufacturer: "Monibuca" # device manufacturer
model: "GB28181" # device model
address: "江苏南京" # device address
register_way: 1
platformchannels:
- platformservergbid: "34020000002000000002" # upstream platform GB ID
channeldbid: "34020000001110000003_34020000001320000005" # channel DB ID, formatted as deviceID_channelID
mp4:
# enable: false
# publish:
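
The onsub.pull section maps a subscribed stream path to a pull source with regular expressions: each key is matched against the path, and the value is an expansion template ($0 is the whole match, $1 the first capture group). A rough Go sketch of that substitution, assuming the plugin applies rules in this spirit (its real matching code is not shown in this diff):

```go
package main

import (
	"fmt"
	"regexp"
)

// resolvePull tries each rule against the subscribed stream path and expands
// the template with the captured groups.
func resolvePull(rules map[string]string, streamPath string) (string, bool) {
	for pattern, tmpl := range rules {
		re := regexp.MustCompile(pattern)
		if m := re.FindStringSubmatchIndex(streamPath); m != nil {
			return string(re.ExpandString(nil, tmpl, streamPath, m)), true
		}
	}
	return "", false
}

func main() {
	rules := map[string]string{
		`^\d{20}/\d{20}$`: "$0",
		`^gb_\d+/(.+)$`:   "$1",
	}
	fmt.Println(resolvePull(rules, "34020000001110000001/34020000001320000001")) // whole path
	fmt.Println(resolvePull(rules, "gb_1/34020000001320000001"))                 // first capture group only
}
```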

3
go.mod
View File

@@ -6,6 +6,7 @@ require (
github.com/IOTechSystems/onvif v1.2.0
github.com/VictoriaMetrics/VictoriaMetrics v1.102.0
github.com/asavie/xdp v0.3.3
github.com/aws/aws-sdk-go v1.55.7
github.com/beevik/etree v1.4.1
github.com/bluenviron/gohlslib v1.4.0
github.com/c0deltin/duckdb-driver v0.1.0
@@ -84,6 +85,7 @@ require (
github.com/jackc/puddle/v2 v2.2.1 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
@@ -142,7 +144,6 @@ require (
github.com/google/pprof v0.0.0-20240409012703-83162a5b38cd // indirect
github.com/gorilla/websocket v1.5.1
github.com/ianlancetaylor/demangle v0.0.0-20240912202439-0a2b6291aafd
github.com/mark3labs/mcp-go v0.27.0
github.com/onsi/ginkgo/v2 v2.9.5 // indirect
github.com/phsym/console-slog v0.3.1
github.com/prometheus/client_golang v1.20.4

6
go.sum
View File

@@ -25,6 +25,8 @@ github.com/asticode/go-astikit v0.30.0 h1:DkBkRQRIxYcknlaU7W7ksNfn4gMFsB0tqMJflx
github.com/asticode/go-astikit v0.30.0/go.mod h1:h4ly7idim1tNhaVkdVBeXQZEE3L0xblP7fCWbgwipF0=
github.com/asticode/go-astits v1.13.0 h1:XOgkaadfZODnyZRR5Y0/DWkA9vrkLLPLeeOvDwfKZ1c=
github.com/asticode/go-astits v1.13.0/go.mod h1:QSHmknZ51pf6KJdHKZHJTLlMegIrhega3LPWz3ND/iI=
github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE=
github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/beevik/etree v1.4.1 h1:PmQJDDYahBGNKDcpdX8uPy1xRCwoCGVUiW669MEirVI=
github.com/beevik/etree v1.4.1/go.mod h1:gPNJNaBGVZ9AwsidazFZyygnd+0pAU38N4D+WemwKNs=
github.com/benburkert/openpgp v0.0.0-20160410205803-c2471f86866c h1:8XZeJrs4+ZYhJeJ2aZxADI2tGADS15AzIF8MQ8XAhT4=
@@ -139,6 +141,10 @@ github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.5
// protoc v5.28.3
// protoc-gen-go v1.36.6
// protoc v5.29.3
// source: auth.proto
package pb
@@ -440,64 +440,39 @@ func (x *UserInfoResponse) GetData() *UserInfo {
var File_auth_proto protoreflect.FileDescriptor
var file_auth_proto_rawDesc = string([]byte{
0x0a, 0x0a, 0x61, 0x75, 0x74, 0x68, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x02, 0x70, 0x62,
0x1a, 0x1c, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e,
0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x46,
0x0a, 0x0c, 0x4c, 0x6f, 0x67, 0x69, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1a,
0x0a, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
0x52, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x61,
0x73, 0x73, 0x77, 0x6f, 0x72, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x70, 0x61,
0x73, 0x73, 0x77, 0x6f, 0x72, 0x64, 0x22, 0x4e, 0x0a, 0x0c, 0x4c, 0x6f, 0x67, 0x69, 0x6e, 0x53,
0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x18,
0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x12, 0x28, 0x0a, 0x08,
0x75, 0x73, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0c,
0x2e, 0x70, 0x62, 0x2e, 0x55, 0x73, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x08, 0x75, 0x73,
0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x22, 0x63, 0x0a, 0x0d, 0x4c, 0x6f, 0x67, 0x69, 0x6e, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x18,
0x01, 0x20, 0x01, 0x28, 0x05, 0x52, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d,
0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x6d, 0x65,
0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x24, 0x0a, 0x04, 0x64, 0x61, 0x74, 0x61, 0x18, 0x03, 0x20,
0x01, 0x28, 0x0b, 0x32, 0x10, 0x2e, 0x70, 0x62, 0x2e, 0x4c, 0x6f, 0x67, 0x69, 0x6e, 0x53, 0x75,
0x63, 0x63, 0x65, 0x73, 0x73, 0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x22, 0x25, 0x0a, 0x0d, 0x4c,
0x6f, 0x67, 0x6f, 0x75, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x14, 0x0a, 0x05,
0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x74, 0x6f, 0x6b,
0x65, 0x6e, 0x22, 0x3e, 0x0a, 0x0e, 0x4c, 0x6f, 0x67, 0x6f, 0x75, 0x74, 0x52, 0x65, 0x73, 0x70,
0x6f, 0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x18, 0x01, 0x20, 0x01,
0x28, 0x05, 0x52, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x65, 0x73, 0x73,
0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61,
0x67, 0x65, 0x22, 0x27, 0x0a, 0x0f, 0x55, 0x73, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x65,
0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x18, 0x01,
0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x22, 0x45, 0x0a, 0x08, 0x55,
0x73, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x1a, 0x0a, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e,
0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e,
0x61, 0x6d, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x65, 0x78, 0x70, 0x69, 0x72, 0x65, 0x73, 0x5f, 0x61,
0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x03, 0x52, 0x09, 0x65, 0x78, 0x70, 0x69, 0x72, 0x65, 0x73,
0x41, 0x74, 0x22, 0x62, 0x0a, 0x10, 0x55, 0x73, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x18, 0x01,
0x20, 0x01, 0x28, 0x05, 0x52, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x65,
0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x6d, 0x65, 0x73,
0x73, 0x61, 0x67, 0x65, 0x12, 0x20, 0x0a, 0x04, 0x64, 0x61, 0x74, 0x61, 0x18, 0x03, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x70, 0x62, 0x2e, 0x55, 0x73, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f,
0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x32, 0xf4, 0x01, 0x0a, 0x04, 0x41, 0x75, 0x74, 0x68, 0x12,
0x48, 0x0a, 0x05, 0x4c, 0x6f, 0x67, 0x69, 0x6e, 0x12, 0x10, 0x2e, 0x70, 0x62, 0x2e, 0x4c, 0x6f,
0x67, 0x69, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x11, 0x2e, 0x70, 0x62, 0x2e,
0x4c, 0x6f, 0x67, 0x69, 0x6e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x1a, 0x82,
0xd3, 0xe4, 0x93, 0x02, 0x14, 0x3a, 0x01, 0x2a, 0x22, 0x0f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61,
0x75, 0x74, 0x68, 0x2f, 0x6c, 0x6f, 0x67, 0x69, 0x6e, 0x12, 0x4c, 0x0a, 0x06, 0x4c, 0x6f, 0x67,
0x6f, 0x75, 0x74, 0x12, 0x11, 0x2e, 0x70, 0x62, 0x2e, 0x4c, 0x6f, 0x67, 0x6f, 0x75, 0x74, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x12, 0x2e, 0x70, 0x62, 0x2e, 0x4c, 0x6f, 0x67, 0x6f,
0x75, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x1b, 0x82, 0xd3, 0xe4, 0x93,
0x02, 0x15, 0x3a, 0x01, 0x2a, 0x22, 0x10, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x75, 0x74, 0x68,
0x2f, 0x6c, 0x6f, 0x67, 0x6f, 0x75, 0x74, 0x12, 0x54, 0x0a, 0x0b, 0x47, 0x65, 0x74, 0x55, 0x73,
0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x13, 0x2e, 0x70, 0x62, 0x2e, 0x55, 0x73, 0x65, 0x72,
0x49, 0x6e, 0x66, 0x6f, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x14, 0x2e, 0x70, 0x62,
0x2e, 0x55, 0x73, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
0x65, 0x22, 0x1a, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x14, 0x12, 0x12, 0x2f, 0x61, 0x70, 0x69, 0x2f,
0x61, 0x75, 0x74, 0x68, 0x2f, 0x75, 0x73, 0x65, 0x72, 0x69, 0x6e, 0x66, 0x6f, 0x42, 0x10, 0x5a,
0x0e, 0x6d, 0x37, 0x73, 0x2e, 0x6c, 0x69, 0x76, 0x65, 0x2f, 0x76, 0x35, 0x2f, 0x70, 0x62, 0x62,
0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
})
const file_auth_proto_rawDesc = "" +
"\n" +
"\n" +
"auth.proto\x12\x02pb\x1a\x1cgoogle/api/annotations.proto\"F\n" +
"\fLoginRequest\x12\x1a\n" +
"\busername\x18\x01 \x01(\tR\busername\x12\x1a\n" +
"\bpassword\x18\x02 \x01(\tR\bpassword\"N\n" +
"\fLoginSuccess\x12\x14\n" +
"\x05token\x18\x01 \x01(\tR\x05token\x12(\n" +
"\buserInfo\x18\x02 \x01(\v2\f.pb.UserInfoR\buserInfo\"c\n" +
"\rLoginResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\x05R\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12$\n" +
"\x04data\x18\x03 \x01(\v2\x10.pb.LoginSuccessR\x04data\"%\n" +
"\rLogoutRequest\x12\x14\n" +
"\x05token\x18\x01 \x01(\tR\x05token\">\n" +
"\x0eLogoutResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\x05R\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\"'\n" +
"\x0fUserInfoRequest\x12\x14\n" +
"\x05token\x18\x01 \x01(\tR\x05token\"E\n" +
"\bUserInfo\x12\x1a\n" +
"\busername\x18\x01 \x01(\tR\busername\x12\x1d\n" +
"\n" +
"expires_at\x18\x02 \x01(\x03R\texpiresAt\"b\n" +
"\x10UserInfoResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\x05R\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12 \n" +
"\x04data\x18\x03 \x01(\v2\f.pb.UserInfoR\x04data2\xf4\x01\n" +
"\x04Auth\x12H\n" +
"\x05Login\x12\x10.pb.LoginRequest\x1a\x11.pb.LoginResponse\"\x1a\x82\xd3\xe4\x93\x02\x14:\x01*\"\x0f/api/auth/login\x12L\n" +
"\x06Logout\x12\x11.pb.LogoutRequest\x1a\x12.pb.LogoutResponse\"\x1b\x82\xd3\xe4\x93\x02\x15:\x01*\"\x10/api/auth/logout\x12T\n" +
"\vGetUserInfo\x12\x13.pb.UserInfoRequest\x1a\x14.pb.UserInfoResponse\"\x1a\x82\xd3\xe4\x93\x02\x14\x12\x12/api/auth/userinfoB\x10Z\x0em7s.live/v5/pbb\x06proto3"
var (
file_auth_proto_rawDescOnce sync.Once

View File

@@ -123,7 +123,6 @@ func local_request_Auth_GetUserInfo_0(ctx context.Context, marshaler runtime.Mar
// UnaryRPC :call AuthServer directly.
// StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906.
// Note that using this registration option will cause many gRPC library features to stop working. Consider using RegisterAuthHandlerFromEndpoint instead.
// GRPC interceptors will not work for this type of registration. To use interceptors, you must use the "runtime.WithMiddlewares" option in the "runtime.NewServeMux" call.
func RegisterAuthHandlerServer(ctx context.Context, mux *runtime.ServeMux, server AuthServer) error {
mux.Handle("POST", pattern_Auth_Login_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
@@ -207,21 +206,21 @@ func RegisterAuthHandlerServer(ctx context.Context, mux *runtime.ServeMux, serve
// RegisterAuthHandlerFromEndpoint is same as RegisterAuthHandler but
// automatically dials to "endpoint" and closes the connection when "ctx" gets done.
func RegisterAuthHandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error) {
conn, err := grpc.NewClient(endpoint, opts...)
conn, err := grpc.DialContext(ctx, endpoint, opts...)
if err != nil {
return err
}
defer func() {
if err != nil {
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
return
}
go func() {
<-ctx.Done()
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
}()
}()
@@ -239,7 +238,7 @@ func RegisterAuthHandler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.
// to "mux". The handlers forward requests to the grpc endpoint over the given implementation of "AuthClient".
// Note: the gRPC framework executes interceptors within the gRPC handler. If the passed in "AuthClient"
// doesn't go through the normal gRPC flow (creating a gRPC client etc.) then it will be up to the passed in
// "AuthClient" to call the correct interceptors. This client ignores the HTTP middlewares.
// "AuthClient" to call the correct interceptors.
func RegisterAuthHandlerClient(ctx context.Context, mux *runtime.ServeMux, client AuthClient) error {
mux.Handle("POST", pattern_Auth_Login_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v5.28.3
// - protoc v5.29.3
// source: auth.proto
package pb

File diff suppressed because it is too large

View File

@@ -1844,7 +1844,6 @@ func local_request_Api_DeleteRecord_0(ctx context.Context, marshaler runtime.Mar
// UnaryRPC :call ApiServer directly.
// StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906.
// Note that using this registration option will cause many gRPC library features to stop working. Consider using RegisterApiHandlerFromEndpoint instead.
// GRPC interceptors will not work for this type of registration. To use interceptors, you must use the "runtime.WithMiddlewares" option in the "runtime.NewServeMux" call.
func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server ApiServer) error {
mux.Handle("GET", pattern_Api_SysInfo_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
@@ -2953,21 +2952,21 @@ func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server
// RegisterApiHandlerFromEndpoint is same as RegisterApiHandler but
// automatically dials to "endpoint" and closes the connection when "ctx" gets done.
func RegisterApiHandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error) {
conn, err := grpc.NewClient(endpoint, opts...)
conn, err := grpc.DialContext(ctx, endpoint, opts...)
if err != nil {
return err
}
defer func() {
if err != nil {
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
return
}
go func() {
<-ctx.Done()
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
}()
}()
@@ -2985,7 +2984,7 @@ func RegisterApiHandler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.C
// to "mux". The handlers forward requests to the grpc endpoint over the given implementation of "ApiClient".
// Note: the gRPC framework executes interceptors within the gRPC handler. If the passed in "ApiClient"
// doesn't go through the normal gRPC flow (creating a gRPC client etc.) then it will be up to the passed in
// "ApiClient" to call the correct interceptors. This client ignores the HTTP middlewares.
// "ApiClient" to call the correct interceptors.
func RegisterApiHandlerClient(ctx context.Context, mux *runtime.ServeMux, client ApiClient) error {
mux.Handle("GET", pattern_Api_SysInfo_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {

View File

@@ -683,7 +683,7 @@ message RecordFile {
message ResponseList {
int32 code = 1;
string message = 2;
uint32 totalCount = 3;
uint32 total = 3;
uint32 pageNum = 4;
uint32 pageSize = 5;
repeated RecordFile data = 6;

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v5.28.3
// - protoc v5.29.3
// source: global.proto
package pb

View File

@@ -41,5 +41,11 @@ func (h265 *H265Ctx) GetRecord() []byte {
}
func (h265 *H265Ctx) String() string {
return fmt.Sprintf("hvc1.%02X%02X%02X", h265.RecordInfo.AVCProfileIndication, h265.RecordInfo.ProfileCompatibility, h265.RecordInfo.AVCLevelIndication)
// HEVC standard codec-string layout: hvc1.profile.compatibility.level.constraints
profile := h265.RecordInfo.AVCProfileIndication
compatibility := h265.RecordInfo.ProfileCompatibility
level := h265.RecordInfo.AVCLevelIndication
// Simplified implementation that approximates the HEVC format using the available fields
return fmt.Sprintf("hvc1.%d.%X.L%d.00", profile, compatibility, level)
}
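
For reference, the simplified formatter above produces strings like the one below. The values are illustrative only (Main profile, compatibility flags 0x6, level 3.1); a fully standard RFC 6381 HEVC codec string additionally encodes the profile space, tier, and constraint bytes.

```go
package main

import "fmt"

func main() {
	// Hypothetical values: Main profile (1), compatibility flags 0x6, level 3.1 (93).
	profile, compatibility, level := uint8(1), uint8(0x6), uint8(93)
	// Mirrors the simplified formatter in H265Ctx.String().
	fmt.Printf("hvc1.%d.%X.L%d.00\n", profile, compatibility, level) // hvc1.1.6.L93.00
}
```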

View File

@@ -208,6 +208,9 @@ func (config *Config) ParseUserFile(conf map[string]any) {
}
config.File = conf
for k, v := range conf {
k = strings.ReplaceAll(k, "-", "")
k = strings.ReplaceAll(k, "_", "")
k = strings.ToLower(k)
if config.Has(k) {
if prop := config.Get(k); prop.props != nil {
if v != nil {
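
This hunk makes ParseUserFile normalize user-supplied keys before lookup, which is the "config support more format" feature in the release notes. A standalone sketch of just the normalization step (the surrounding config plumbing is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeKey strips dashes and underscores and lower-cases the key, so
// several spellings resolve to the same internal property name.
func normalizeKey(k string) string {
	k = strings.ReplaceAll(k, "-", "")
	k = strings.ReplaceAll(k, "_", "")
	return strings.ToLower(k)
}

func main() {
	for _, k := range []string{"writeBuffer", "write-buffer", "write_buffer", "WRITE_BUFFER"} {
		fmt.Println(k, "->", normalizeKey(k)) // all map to "writebuffer"
	}
}
```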

View File

@@ -40,6 +40,8 @@ type TCP struct {
KeyFile string `desc:"私钥文件"`
ListenNum int `desc:"同时并行监听数量0为CPU核心数量"` //同时并行监听数量0为CPU核心数量
NoDelay bool `desc:"是否禁用Nagle算法"` //是否禁用Nagle算法
WriteBuffer int `desc:"写缓冲区大小"` //写缓冲区大小
ReadBuffer int `desc:"读缓冲区大小"` //读缓冲区大小
KeepAlive bool `desc:"是否启用KeepAlive"` //是否启用KeepAlive
AutoListen bool `default:"true" desc:"是否自动监听"`
}
@@ -141,6 +143,18 @@ func (task *ListenTCPWork) listen(handler TCPHandler) {
if !task.NoDelay {
tcpConn.SetNoDelay(false)
}
if task.WriteBuffer > 0 {
if err := tcpConn.SetWriteBuffer(task.WriteBuffer); err != nil {
task.Error("failed to set write buffer", "error", err)
continue
}
}
if task.ReadBuffer > 0 {
if err := tcpConn.SetReadBuffer(task.ReadBuffer); err != nil {
task.Error("failed to set read buffer", "error", err)
continue
}
}
tempDelay = 0
subTask := handler(tcpConn)
task.AddTask(subTask)
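
Outside the framework, the new options boil down to SetReadBuffer/SetWriteBuffer on the accepted *net.TCPConn, exactly as in the listener hunk above. A self-contained illustration with example sizes (1 MiB each; these are not values recommended by the project):

```go
package main

import (
	"log"
	"net"
)

// tune applies the optional read/write socket buffer sizes, mirroring the
// checks added to the TCP listener in this change.
func tune(conn *net.TCPConn, readBuf, writeBuf int) error {
	if readBuf > 0 {
		if err := conn.SetReadBuffer(readBuf); err != nil {
			return err
		}
	}
	if writeBuf > 0 {
		if err := conn.SetWriteBuffer(writeBuf); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	go net.Dial("tcp", ln.Addr().String()) // self-connect so Accept returns
	c, err := ln.Accept()
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	if err := tune(c.(*net.TCPConn), 1<<20, 1<<20); err != nil { // 1 MiB each
		log.Fatal(err)
	}
	log.Println("buffers configured")
}
```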

View File

@@ -69,11 +69,13 @@ type (
HTTPValues map[string][]string
Pull struct {
URL string `desc:"拉流地址"`
Loop int `desc:"拉流循环次数,-1:无限循环"` // 拉流循环次数,-1 表示无限循环
MaxRetry int `default:"-1" desc:"断开后自动重试次数,0:不重试,-1:无限重试"` // 断开后自动重拉,0 表示不自动重拉,-1 表示无限重拉高于0 的数代表最大重拉次数
RetryInterval time.Duration `default:"5s" desc:"重试间隔"` // 重试间隔
Proxy string `desc:"代理地址"` // 代理地址
Header HTTPValues
Args HTTPValues `gorm:"-:all"` // 拉流参数
Args HTTPValues `gorm:"-:all"` // 拉流参数
TestMode int `desc:"测试模式,0:关闭,1:只拉流不发布"` // 测试模式
}
Push struct {
URL string `desc:"推送地址"` // 推送地址
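
The new Loop field appears to follow the same counter convention the Pull config already uses for MaxRetry: -1 means unlimited, 0 means none, and a positive value is a cap. The sketch below only illustrates that convention; it is not the plugin's actual control flow.

```go
package main

import "fmt"

// shouldContinue interprets a -1 / 0 / positive counter in the style of
// Pull.Loop and Pull.MaxRetry (assumption based on the field descriptions).
func shouldContinue(remaining *int) bool {
	switch {
	case *remaining < 0: // -1: loop/retry forever
		return true
	case *remaining == 0:
		return false
	default:
		*remaining--
		return true
	}
}

func main() {
	loop := 2 // e.g. a hypothetical pull loop count of 2
	plays := 1
	for shouldContinue(&loop) {
		plays++
	}
	fmt.Println("played", plays, "times") // 3: the initial play plus two loops
}
```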

View File

@@ -2,13 +2,14 @@ package pkg
import (
"fmt"
"io"
"time"
"github.com/deepch/vdk/codec/aacparser"
"github.com/deepch/vdk/codec/h264parser"
"github.com/deepch/vdk/codec/h265parser"
"io"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
"time"
)
var _ IAVFrame = (*RawAudio)(nil)
@@ -104,6 +105,8 @@ type H26xFrame struct {
}
func (h *H26xFrame) Parse(track *AVTrack) (err error) {
var hasVideoFrame bool
switch h.FourCC {
case codec.FourCC_H264:
var ctx *codec.H264Ctx
@@ -127,6 +130,9 @@ func (h *H26xFrame) Parse(track *AVTrack) (err error) {
}
case codec.NALU_IDR_Picture:
track.Value.IDR = true
hasVideoFrame = true
case codec.NALU_Non_IDR_Picture:
hasVideoFrame = true
}
}
case codec.FourCC_H265:
@@ -155,9 +161,18 @@ func (h *H26xFrame) Parse(track *AVTrack) (err error) {
h265parser.NAL_UNIT_CODED_SLICE_IDR_N_LP,
h265parser.NAL_UNIT_CODED_SLICE_CRA:
track.Value.IDR = true
hasVideoFrame = true
case 0, 1, 2, 3, 4, 5, 6, 7, 8, 9:
hasVideoFrame = true
}
}
}
// Return ErrSkip if no video frames are present (only metadata NALUs)
if !hasVideoFrame {
return ErrSkip
}
return
}
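
The new hasVideoFrame check hinges on the NALU type encoded in the first byte of each unit: the low 5 bits for H.264, bits 1 to 6 for H.265. A quick helper showing the byte values that the new test file below also relies on:

```go
package main

import "fmt"

// h264NALUType extracts the type from the low 5 bits of the first NALU byte.
func h264NALUType(b byte) byte { return b & 0x1F }

// h265NALUType extracts the type from bits 1-6 of the first NALU byte.
func h265NALUType(b byte) byte { return (b >> 1) & 0x3F }

func main() {
	fmt.Println(h264NALUType(0x67)) // 7  = SPS         -> metadata only, frame skipped with ErrSkip
	fmt.Println(h264NALUType(0x65)) // 5  = IDR picture -> video frame present
	fmt.Println(h265NALUType(0x40)) // 32 = VPS         -> metadata only
	fmt.Println(h265NALUType(0x26)) // 19 = IDR_W_RADL  -> video frame present
}
```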

157
pkg/raw_test.go Normal file
View File

@@ -0,0 +1,157 @@
package pkg
import (
"testing"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
)
func TestH26xFrame_Parse_VideoFrameDetection(t *testing.T) {
// Test H264 IDR Picture (should not skip)
t.Run("H264_IDR_Picture", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x65}), // IDR Picture NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H264 IDR frame to not be skipped, but got ErrSkip")
}
if !track.Value.IDR {
t.Error("Expected IDR flag to be set for H264 IDR frame")
}
})
// Test H264 Non-IDR Picture (should not skip)
t.Run("H264_Non_IDR_Picture", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x21}), // Non-IDR Picture NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H264 Non-IDR frame to not be skipped, but got ErrSkip")
}
})
// Test H264 metadata only (should skip)
t.Run("H264_SPS_Only", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x67}), // SPS NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err != ErrSkip {
t.Errorf("Expected H264 SPS-only frame to be skipped, but got: %v", err)
}
})
// Test H264 PPS only (should skip)
t.Run("H264_PPS_Only", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x68}), // PPS NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err != ErrSkip {
t.Errorf("Expected H264 PPS-only frame to be skipped, but got: %v", err)
}
})
// Test H265 IDR slice (should not skip)
t.Run("H265_IDR_Slice", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H265,
Nalus: []util.Memory{
util.NewMemory([]byte{0x4E, 0x01}), // placeholder value; the correct IDR_W_RADL header byte (19 << 1 = 0x26) is set below
// Using NAL_UNIT_CODED_SLICE_IDR_W_RADL which should be type 19
},
}
track := &AVTrack{}
// Let's use the correct byte pattern for H265 IDR slice
// NAL_UNIT_CODED_SLICE_IDR_W_RADL = 19
// H265 header: (type << 1) | layer_id_bit
idrSliceByte := byte(19 << 1) // 19 * 2 = 38 = 0x26
frame.Nalus[0] = util.NewMemory([]byte{idrSliceByte})
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H265 IDR slice to not be skipped, but got ErrSkip")
}
if !track.Value.IDR {
t.Error("Expected IDR flag to be set for H265 IDR slice")
}
})
// Test H265 metadata only (should skip)
t.Run("H265_VPS_Only", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H265,
Nalus: []util.Memory{
util.NewMemory([]byte{0x40, 0x01}), // VPS NALU type (32 << 1 = 64 = 0x40)
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err != ErrSkip {
t.Errorf("Expected H265 VPS-only frame to be skipped, but got: %v", err)
}
})
// Test mixed H264 frame with SPS and IDR (should not skip)
t.Run("H264_Mixed_SPS_And_IDR", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H264,
Nalus: []util.Memory{
util.NewMemory([]byte{0x67}), // SPS NALU type
util.NewMemory([]byte{0x65}), // IDR Picture NALU type
},
}
track := &AVTrack{}
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H264 mixed SPS+IDR frame to not be skipped, but got ErrSkip")
}
if !track.Value.IDR {
t.Error("Expected IDR flag to be set for H264 mixed frame with IDR")
}
})
// Test mixed H265 frame with VPS and IDR (should not skip)
t.Run("H265_Mixed_VPS_And_IDR", func(t *testing.T) {
frame := &H26xFrame{
FourCC: codec.FourCC_H265,
Nalus: []util.Memory{
util.NewMemory([]byte{0x40, 0x01}), // VPS NALU type (32 << 1)
util.NewMemory([]byte{0x4C, 0x01}), // placeholder value; the correct IDR_W_RADL header byte (19 << 1 = 0x26) is set below
},
}
track := &AVTrack{}
// Fix the IDR slice byte for H265
idrSliceByte := byte(19 << 1) // NAL_UNIT_CODED_SLICE_IDR_W_RADL = 19
frame.Nalus[1] = util.NewMemory([]byte{idrSliceByte, 0x01})
err := frame.Parse(track)
if err == ErrSkip {
t.Error("Expected H265 mixed VPS+IDR frame to not be skipped, but got ErrSkip")
}
if !track.Value.IDR {
t.Error("Expected IDR flag to be set for H265 mixed frame with IDR")
}
})
}

View File

@@ -56,6 +56,7 @@ func (mt *Job) Blocked() ITask {
}
func (mt *Job) waitChildrenDispose() {
blocked := mt.blocked
defer func() {
// Ignore the panic: a race can occur during task shutdown, where child tasks may already have been released when the parent task closes.
if err := recover(); err != nil {
@@ -64,7 +65,7 @@ func (mt *Job) waitChildrenDispose() {
mt.addSub <- nil
<-mt.childrenDisposed
}()
if blocked := mt.blocked; blocked != nil {
if blocked != nil {
blocked.Stop(mt.StopReason())
}
}
@@ -181,9 +182,7 @@ func (mt *Job) AddTask(t ITask, opt ...any) (task *Task) {
return
}
if len(mt.addSub) > 10 {
if mt.Logger != nil {
mt.Warn("task wait list too many", "count", len(mt.addSub), "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType(), "parent", mt.GetOwnerType())
}
mt.Warn("task wait list too many", "count", len(mt.addSub), "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType(), "parent", mt.GetOwnerType())
}
mt.addSub <- t
return
@@ -206,9 +205,7 @@ func (mt *Job) run() {
defer func() {
err := recover()
if err != nil {
if mt.Logger != nil {
mt.Logger.Error("job panic", "err", err, "stack", string(debug.Stack()))
}
mt.Error("job panic", "err", err, "stack", string(debug.Stack()))
if !ThrowPanic {
mt.Stop(errors.Join(err.(error), ErrPanic))
} else {
@@ -227,6 +224,7 @@ func (mt *Job) run() {
mt.blocked = nil
if chosen, rev, ok := reflect.Select(mt.cases); chosen == 0 {
if rev.IsNil() {
mt.Debug("job addSub channel closed, exiting", "taskId", mt.GetTaskID())
return
}
if mt.blocked = rev.Interface().(ITask); mt.blocked.start() {

View File

@@ -24,15 +24,11 @@ func (m *Manager[K, T]) Add(ctx T, opt ...any) *Task {
ctx.Stop(ErrExist)
return
}
if m.Logger != nil {
m.Logger.Debug("add", "key", ctx.GetKey(), "count", m.Length)
}
m.Debug("add", "key", ctx.GetKey(), "count", m.Length)
})
ctx.OnDispose(func() {
m.Remove(ctx)
if m.Logger != nil {
m.Logger.Debug("remove", "key", ctx.GetKey(), "count", m.Length)
}
m.Debug("remove", "key", ctx.GetKey(), "count", m.Length)
})
return m.AddTask(ctx, opt...)
}

View File

@@ -7,6 +7,7 @@ import (
"log/slog"
"maps"
"reflect"
"runtime"
"runtime/debug"
"strings"
"sync"
@@ -117,7 +118,7 @@ type (
ID uint32
StartTime time.Time
StartReason string
*slog.Logger
Logger *slog.Logger
context.Context
context.CancelCauseFunc
handler ITask
@@ -198,7 +199,11 @@ func (task *Task) WaitStopped() (err error) {
}
func (task *Task) Trace(msg string, fields ...any) {
task.Log(task.Context, TraceLevel, msg, fields...)
if task.Logger == nil {
slog.Default().Log(task.Context, TraceLevel, msg, fields...)
return
}
task.Logger.Log(task.Context, TraceLevel, msg, fields...)
}
func (task *Task) IsStopped() bool {
@@ -225,8 +230,9 @@ func (task *Task) Stop(err error) {
panic("task stop with nil error")
}
if task.CancelCauseFunc != nil {
if tt := task.handler.GetTaskType(); task.Logger != nil && tt != TASK_TYPE_CALL {
task.Debug("task stop", "reason", err, "elapsed", time.Since(task.StartTime), "taskId", task.ID, "taskType", tt, "ownerType", task.GetOwnerType())
if tt := task.handler.GetTaskType(); tt != TASK_TYPE_CALL {
_, file, line, _ := runtime.Caller(1)
task.Debug("task stop", "caller", fmt.Sprintf("%s:%d", strings.TrimPrefix(file, sourceFilePathPrefix), line), "reason", err, "elapsed", time.Since(task.StartTime), "taskId", task.ID, "taskType", tt, "ownerType", task.GetOwnerType())
}
task.CancelCauseFunc(err)
}
@@ -264,12 +270,10 @@ func (task *Task) checkRetry(err error) bool {
if task.retry.MaxRetry < 0 || task.retry.RetryCount < task.retry.MaxRetry {
task.retry.RetryCount++
task.SetDescription("retryCount", task.retry.RetryCount)
if task.Logger != nil {
if task.retry.MaxRetry < 0 {
task.Warn(fmt.Sprintf("retry %d/∞", task.retry.RetryCount), "taskId", task.ID)
} else {
task.Warn(fmt.Sprintf("retry %d/%d", task.retry.RetryCount, task.retry.MaxRetry), "taskId", task.ID)
}
if task.retry.MaxRetry < 0 {
task.Warn(fmt.Sprintf("retry %d/∞", task.retry.RetryCount), "taskId", task.ID)
} else {
task.Warn(fmt.Sprintf("retry %d/%d", task.retry.RetryCount, task.retry.MaxRetry), "taskId", task.ID)
}
if delta := time.Since(task.StartTime); delta < task.retry.RetryInterval {
time.Sleep(task.retry.RetryInterval - delta)
@@ -277,9 +281,7 @@ func (task *Task) checkRetry(err error) bool {
return true
} else {
if task.retry.MaxRetry > 0 {
if task.Logger != nil {
task.Warn(fmt.Sprintf("max retry %d failed", task.retry.MaxRetry))
}
task.Warn(fmt.Sprintf("max retry %d failed", task.retry.MaxRetry))
return false
}
}
@@ -292,15 +294,13 @@ func (task *Task) start() bool {
defer func() {
if r := recover(); r != nil {
err = errors.New(fmt.Sprint(r))
if task.Logger != nil {
task.Error("panic", "error", err, "stack", string(debug.Stack()))
}
task.Error("panic", "error", err, "stack", string(debug.Stack()))
}
}()
}
for {
task.StartTime = time.Now()
if tt := task.handler.GetTaskType(); task.Logger != nil && tt != TASK_TYPE_CALL {
if tt := task.handler.GetTaskType(); tt != TASK_TYPE_CALL {
task.Debug("task start", "taskId", task.ID, "taskType", tt, "ownerType", task.GetOwnerType(), "reason", task.StartReason)
}
task.state = TASK_STATE_STARTING
@@ -322,9 +322,7 @@ func (task *Task) start() bool {
task.ResetRetryCount()
if runHandler, ok := task.handler.(TaskBlock); ok {
task.state = TASK_STATE_RUNNING
if task.Logger != nil {
task.Debug("task run", "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType())
}
task.Debug("task run", "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType())
err = runHandler.Run()
if err == nil {
err = ErrTaskComplete
@@ -335,9 +333,7 @@ func (task *Task) start() bool {
if err == nil {
if goHandler, ok := task.handler.(TaskGo); ok {
task.state = TASK_STATE_GOING
if task.Logger != nil {
task.Debug("task go", "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType())
}
task.Debug("task go", "taskId", task.ID, "taskType", task.GetTaskType(), "ownerType", task.GetOwnerType())
go task.run(goHandler.Go)
}
return true
@@ -384,19 +380,17 @@ func (task *Task) SetDescriptions(value Description) {
func (task *Task) dispose() {
taskType, ownerType := task.handler.GetTaskType(), task.GetOwnerType()
if task.state < TASK_STATE_STARTED {
if task.Logger != nil && taskType != TASK_TYPE_CALL {
if taskType != TASK_TYPE_CALL {
task.Debug("task dispose canceled", "taskId", task.ID, "taskType", taskType, "ownerType", ownerType, "state", task.state)
}
return
}
reason := task.StopReason()
task.state = TASK_STATE_DISPOSING
if task.Logger != nil {
if taskType != TASK_TYPE_CALL {
yargs := []any{"reason", reason, "taskId", task.ID, "taskType", taskType, "ownerType", ownerType}
task.Debug("task dispose", yargs...)
defer task.Debug("task disposed", yargs...)
}
if taskType != TASK_TYPE_CALL {
yargs := []any{"reason", reason, "taskId", task.ID, "taskType", taskType, "ownerType", ownerType}
task.Debug("task dispose", yargs...)
defer task.Debug("task disposed", yargs...)
}
befores := len(task.beforeDisposeListeners)
for i, listener := range task.beforeDisposeListeners {
@@ -441,9 +435,7 @@ func (task *Task) run(handler func() error) {
if !ThrowPanic {
if r := recover(); r != nil {
err = errors.New(fmt.Sprint(r))
if task.Logger != nil {
task.Error("panic", "error", err, "stack", string(debug.Stack()))
}
task.Error("panic", "error", err, "stack", string(debug.Stack()))
}
}
if err == nil {
@@ -454,3 +446,39 @@ func (task *Task) run(handler func() error) {
}()
err = handler()
}
func (task *Task) Debug(msg string, args ...any) {
if task.Logger == nil {
slog.Default().Debug(msg, args...)
return
}
task.Logger.Debug(msg, args...)
}
func (task *Task) Info(msg string, args ...any) {
if task.Logger == nil {
slog.Default().Info(msg, args...)
return
}
task.Logger.Info(msg, args...)
}
func (task *Task) Warn(msg string, args ...any) {
if task.Logger == nil {
slog.Default().Warn(msg, args...)
return
}
task.Logger.Warn(msg, args...)
}
func (task *Task) Error(msg string, args ...any) {
if task.Logger == nil {
slog.Default().Error(msg, args...)
return
}
task.Logger.Error(msg, args...)
}
func (task *Task) TraceEnabled() bool {
return task.Logger.Enabled(task.Context, TraceLevel)
}

View File

@@ -2,33 +2,55 @@ package util
import (
"errors"
"sync"
"unsafe"
)
type Buddy struct {
size int
longests []int
size int
longests [BuddySize>>(MinPowerOf2-1) - 1]int
memoryPool [BuddySize]byte
poolStart int64
lock sync.Mutex // protects concurrent access to the longests array
}
var (
InValidParameterErr = errors.New("buddy: invalid parameter")
NotFoundErr = errors.New("buddy: can't find block")
buddyPool = sync.Pool{
New: func() interface{} {
return NewBuddy()
},
}
)
// GetBuddy fetches a Buddy instance from the pool
func GetBuddy() *Buddy {
buddy := buddyPool.Get().(*Buddy)
return buddy
}
// PutBuddy returns a Buddy instance to the pool
func PutBuddy(b *Buddy) {
buddyPool.Put(b)
}
// NewBuddy creates a buddy instance.
// If the parameter isn't valid, return the nil and error as well
func NewBuddy(size int) *Buddy {
if !isPowerOf2(size) {
size = fixSize(size)
func NewBuddy() *Buddy {
size := BuddySize >> MinPowerOf2
ret := &Buddy{
size: size,
}
nodeCount := 2*size - 1
longests := make([]int, nodeCount)
for nodeSize, i := 2*size, 0; i < nodeCount; i++ {
for nodeSize, i := 2*size, 0; i < len(ret.longests); i++ {
if isPowerOf2(i + 1) {
nodeSize /= 2
}
longests[i] = nodeSize
ret.longests[i] = nodeSize
}
return &Buddy{size, longests}
ret.poolStart = int64(uintptr(unsafe.Pointer(&ret.memoryPool[0])))
return ret
}
// Alloc find a unused block according to the size
@@ -42,6 +64,8 @@ func (b *Buddy) Alloc(size int) (offset int, err error) {
if !isPowerOf2(size) {
size = fixSize(size)
}
b.lock.Lock()
defer b.lock.Unlock()
if size > b.longests[0] {
err = NotFoundErr
return
@@ -70,6 +94,8 @@ func (b *Buddy) Free(offset int) error {
if offset < 0 || offset >= b.size {
return InValidParameterErr
}
b.lock.Lock()
defer b.lock.Unlock()
nodeSize := 1
index := offset + b.size - 1
for ; b.longests[index] != 0; index = parent(index) {
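
Based only on the API visible in this hunk (GetBuddy/PutBuddy plus the existing Alloc/Free, with sizes expressed in units of 1&lt;&lt;MinPowerOf2 bytes, as GetMemoryAllocator's shifts suggest), a round-trip through the pooled allocator could look like the sketch below.

```go
package main

import (
	"fmt"

	"m7s.live/v5/pkg/util"
)

func main() {
	b := util.GetBuddy()       // take a pooled buddy instance
	offset, err := b.Alloc(4)  // request 4 allocation units
	if err != nil {
		fmt.Println("alloc failed:", err)
		return
	}
	fmt.Println("allocated at unit offset", offset)
	if err := b.Free(offset); err != nil {
		fmt.Println("free failed:", err)
	}
	util.PutBuddy(b) // return the instance to the pool once done with it
}
```

Note that GetMemoryAllocator in the companion hunk calls PutBuddy immediately after a successful Alloc while the block is still in use; whether that is intended sharing or a hazard is not answered by this diff.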

View File

@@ -3,11 +3,9 @@
package util
import (
"container/list"
"fmt"
"io"
"slices"
"sync"
"unsafe"
)
@@ -58,53 +56,59 @@ func (r *RecyclableMemory) Recycle() {
}
}
var (
memoryPool [BuddySize]byte
buddy = NewBuddy(BuddySize >> MinPowerOf2)
lock sync.Mutex
poolStart = int64(uintptr(unsafe.Pointer(&memoryPool[0])))
blockPool = list.New()
//EnableCheckSize bool = false
)
type MemoryAllocator struct {
allocator *Allocator
start int64
memory []byte
Size int
buddy *Buddy
}
// createMemoryAllocator creates and initializes a MemoryAllocator
func createMemoryAllocator(size int, buddy *Buddy, offset int) *MemoryAllocator {
ret := &MemoryAllocator{
allocator: NewAllocator(size),
buddy: buddy,
Size: size,
memory: buddy.memoryPool[offset : offset+size],
start: buddy.poolStart + int64(offset),
}
ret.allocator.Init(size)
return ret
}
func GetMemoryAllocator(size int) (ret *MemoryAllocator) {
lock.Lock()
offset, err := buddy.Alloc(size >> MinPowerOf2)
if blockPool.Len() > 0 {
ret = blockPool.Remove(blockPool.Front()).(*MemoryAllocator)
} else {
ret = &MemoryAllocator{
allocator: NewAllocator(size),
if size < BuddySize {
requiredSize := size >> MinPowerOf2
// Keep trying to obtain a usable buddy from the pool
for {
buddy := GetBuddy()
offset, err := buddy.Alloc(requiredSize)
PutBuddy(buddy)
if err == nil {
// Allocation succeeded; use this buddy
return createMemoryAllocator(size, buddy, offset<<MinPowerOf2)
}
}
}
lock.Unlock()
ret.Size = size
ret.allocator.Init(size)
if err != nil {
ret.memory = make([]byte, size)
ret.start = int64(uintptr(unsafe.Pointer(&ret.memory[0])))
return
// No pooled buddy can satisfy the request, or the size is too large; fall back to system memory
memory := make([]byte, size)
start := int64(uintptr(unsafe.Pointer(&memory[0])))
return &MemoryAllocator{
allocator: NewAllocator(size),
Size: size,
memory: memory,
start: start,
}
offset = offset << MinPowerOf2
ret.memory = memoryPool[offset : offset+size]
ret.start = poolStart + int64(offset)
return
}
func (ma *MemoryAllocator) Recycle() {
ma.allocator.Recycle()
lock.Lock()
blockPool.PushBack(ma)
_ = buddy.Free(int((poolStart - ma.start) >> MinPowerOf2))
if ma.buddy != nil {
_ = ma.buddy.Free(int((ma.buddy.poolStart - ma.start) >> MinPowerOf2))
ma.buddy = nil
}
ma.memory = nil
lock.Unlock()
}
func (ma *MemoryAllocator) Find(size int) (memory []byte) {

View File

@@ -133,24 +133,9 @@ func (plugin *PluginMeta) Init(s *Server, userConfig map[string]any) (p *Plugin)
finalConfig, _ := yaml.Marshal(p.Config.GetMap())
p.Logger.Handler().(*MultiLogHandler).SetLevel(ParseLevel(p.config.LogLevel))
p.Debug("config", "detail", string(finalConfig))
if s.DisableAll {
p.Disabled = true
}
if userConfig["enable"] == false {
p.Disabled = true
} else if userConfig["enable"] == true {
p.Disabled = false
}
if p.Disabled {
if userConfig["enable"] == false || (s.DisableAll && userConfig["enable"] != true) {
p.disable("config")
p.Warn("plugin disabled")
return
} else {
var handlers map[string]http.HandlerFunc
if v, ok := instance.(IRegisterHandler); ok {
handlers = v.RegisterHandler()
}
p.registerHandler(handlers)
}
p.Info("init", "version", plugin.Version)
var err error
@@ -172,7 +157,16 @@ func (plugin *PluginMeta) Init(s *Server, userConfig map[string]any) (p *Plugin)
return
}
}
s.AddTask(instance)
if err := s.AddTask(instance).WaitStarted(); err != nil {
p.disable(s.StopReason().Error())
return
}
var handlers map[string]http.HandlerFunc
if v, ok := instance.(IRegisterHandler); ok {
handlers = v.RegisterHandler()
}
p.registerHandler(handlers)
s.Plugins.Add(p)
return
}
@@ -277,11 +271,19 @@ func (p *Plugin) GetPublicIP(netcardIP string) string {
func (p *Plugin) disable(reason string) {
p.Disabled = true
p.SetDescription("disableReason", reason)
p.Warn("plugin disabled")
p.Server.disabledPlugins = append(p.Server.disabledPlugins, p)
}
func (p *Plugin) Start() (err error) {
s := p.Server
if err = p.listen(); err != nil {
return
}
if err = p.handler.OnInit(); err != nil {
return
}
if p.Meta.ServiceDesc != nil && s.grpcServer != nil {
s.grpcServer.RegisterService(p.Meta.ServiceDesc, p.handler)
if p.Meta.RegisterGRPCHandler != nil {
@@ -293,15 +295,6 @@ func (p *Plugin) Start() (err error) {
}
}
}
s.Plugins.Add(p)
if err = p.listen(); err != nil {
p.disable(fmt.Sprintf("listen %v", err))
return
}
if err = p.handler.OnInit(); err != nil {
p.disable(fmt.Sprintf("init %v", err))
return
}
if p.config.Hook != nil {
if hook, ok := p.config.Hook[config.HookOnServerKeepAlive]; ok && hook.Interval > 0 {
p.AddTask(&ServerKeepAliveTask{plugin: p})
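
This reordering is the "init plugin failed do not register http handle" refactor from the commit list: HTTP routes from IRegisterHandler are now registered only after the plugin task passes WaitStarted without error, so a failed OnInit no longer leaves dangling handlers. A hypothetical plugin sketch of the RegisterHandler shape involved (the route path is made up for illustration):

```go
package demo

import "net/http"

// DemoPlugin is a hypothetical plugin illustrating the IRegisterHandler
// contract referenced in the diff above.
type DemoPlugin struct{}

// RegisterHandler matches the map[string]http.HandlerFunc shape consumed by
// p.registerHandler; the path key format here is an assumption.
func (d *DemoPlugin) RegisterHandler() map[string]http.HandlerFunc {
	return map[string]http.HandlerFunc{
		"/demo/ping": func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("pong"))
		},
	}
}
```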

View File

@@ -44,7 +44,7 @@ func (ct *CrontabPlugin) List(ctx context.Context, req *cronpb.ReqPlanList) (*cr
data = append(data, &cronpb.Plan{
Id: uint32(plan.ID),
Name: plan.Name,
Enable: plan.Enabled,
Enable: plan.Enable,
CreateTime: timestamppb.New(plan.CreatedAt),
UpdateTime: timestamppb.New(plan.UpdatedAt),
Plan: plan.Plan,
@@ -94,9 +94,9 @@ func (ct *CrontabPlugin) Add(ctx context.Context, req *cronpb.Plan) (*cronpb.Res
}
plan := &pkg.RecordPlan{
Name: req.Name,
Plan: req.Plan,
Enabled: req.Enable,
Name: req.Name,
Plan: req.Plan,
Enable: req.Enable,
}
if err := ct.DB.Create(plan).Error; err != nil {
@@ -209,3 +209,218 @@ func (ct *CrontabPlugin) Remove(ctx context.Context, req *cronpb.DeleteRequest)
Message: "success",
}, nil
}
func (ct *CrontabPlugin) ListRecordPlanStreams(ctx context.Context, req *cronpb.ReqPlanStreamList) (*cronpb.RecordPlanStreamResponseList, error) {
if req.PageNum < 1 {
req.PageNum = 1
}
if req.PageSize < 1 {
req.PageSize = 10
}
var total int64
var streams []pkg.RecordPlanStream
model := &pkg.RecordPlanStream{}
// Build the query conditions
query := ct.DB.Model(model).
Scopes(
pkg.ScopeRecordPlanID(uint(req.PlanId)),
pkg.ScopeStreamPathLike(req.StreamPath),
pkg.ScopeOrderByCreatedAtDesc(),
)
result := query.Count(&total)
if result.Error != nil {
return &cronpb.RecordPlanStreamResponseList{
Code: 500,
Message: result.Error.Error(),
}, nil
}
offset := (req.PageNum - 1) * req.PageSize
result = query.Offset(int(offset)).Limit(int(req.PageSize)).Find(&streams)
if result.Error != nil {
return &cronpb.RecordPlanStreamResponseList{
Code: 500,
Message: result.Error.Error(),
}, nil
}
data := make([]*cronpb.PlanStream, 0, len(streams))
for _, stream := range streams {
data = append(data, &cronpb.PlanStream{
PlanId: uint32(stream.PlanID),
StreamPath: stream.StreamPath,
Fragment: stream.Fragment,
FilePath: stream.FilePath,
CreatedAt: timestamppb.New(stream.CreatedAt),
UpdatedAt: timestamppb.New(stream.UpdatedAt),
Enable: stream.Enable,
})
}
return &cronpb.RecordPlanStreamResponseList{
Code: 0,
Message: "success",
TotalCount: uint32(total),
PageNum: req.PageNum,
PageSize: req.PageSize,
Data: data,
}, nil
}
func (ct *CrontabPlugin) AddRecordPlanStream(ctx context.Context, req *cronpb.PlanStream) (*cronpb.Response, error) {
if req.PlanId == 0 {
return &cronpb.Response{
Code: 400,
Message: "record_plan_id is required",
}, nil
}
if strings.TrimSpace(req.StreamPath) == "" {
return &cronpb.Response{
Code: 400,
Message: "stream_path is required",
}, nil
}
// Check that the record plan exists
var plan pkg.RecordPlan
if err := ct.DB.First(&plan, req.PlanId).Error; err != nil {
return &cronpb.Response{
Code: 404,
Message: "record plan not found",
}, nil
}
// Check whether an identical record already exists
var count int64
searchModel := pkg.RecordPlanStream{
PlanID: uint(req.PlanId),
StreamPath: req.StreamPath,
}
if err := ct.DB.Model(&searchModel).Where(&searchModel).Count(&count).Error; err != nil {
return &cronpb.Response{
Code: 500,
Message: err.Error(),
}, nil
}
if count > 0 {
return &cronpb.Response{
Code: 400,
Message: "record already exists",
}, nil
}
stream := &pkg.RecordPlanStream{
PlanID: uint(req.PlanId),
StreamPath: req.StreamPath,
Fragment: req.Fragment,
FilePath: req.FilePath,
}
if err := ct.DB.Create(stream).Error; err != nil {
return &cronpb.Response{
Code: 500,
Message: err.Error(),
}, nil
}
return &cronpb.Response{
Code: 0,
Message: "success",
}, nil
}
func (ct *CrontabPlugin) UpdateRecordPlanStream(ctx context.Context, req *cronpb.PlanStream) (*cronpb.Response, error) {
if req.PlanId == 0 {
return &cronpb.Response{
Code: 400,
Message: "record_plan_id is required",
}, nil
}
if strings.TrimSpace(req.StreamPath) == "" {
return &cronpb.Response{
Code: 400,
Message: "stream_path is required",
}, nil
}
// Check that the record exists
var existingStream pkg.RecordPlanStream
searchModel := pkg.RecordPlanStream{
PlanID: uint(req.PlanId),
StreamPath: req.StreamPath,
}
if err := ct.DB.Where(&searchModel).First(&existingStream).Error; err != nil {
return &cronpb.Response{
Code: 404,
Message: "record not found",
}, nil
}
// Update the record
existingStream.Fragment = req.Fragment
existingStream.FilePath = req.FilePath
if req.Enable != existingStream.Enable {
existingStream.Enable = req.Enable
}
if err := ct.DB.Save(&existingStream).Error; err != nil {
return &cronpb.Response{
Code: 500,
Message: err.Error(),
}, nil
}
return &cronpb.Response{
Code: 0,
Message: "success",
}, nil
}
func (ct *CrontabPlugin) RemoveRecordPlanStream(ctx context.Context, req *cronpb.DeletePlanStreamRequest) (*cronpb.Response, error) {
if req.PlanId == 0 {
return &cronpb.Response{
Code: 400,
Message: "record_plan_id is required",
}, nil
}
if strings.TrimSpace(req.StreamPath) == "" {
return &cronpb.Response{
Code: 400,
Message: "stream_path is required",
}, nil
}
// check that the record exists
var existingStream pkg.RecordPlanStream
searchModel := pkg.RecordPlanStream{
PlanID: uint(req.PlanId),
StreamPath: req.StreamPath,
}
if err := ct.DB.Where(&searchModel).First(&existingStream).Error; err != nil {
return &cronpb.Response{
Code: 404,
Message: "record not found",
}, nil
}
// perform the delete
if err := ct.DB.Delete(&existingStream).Error; err != nil {
return &cronpb.Response{
Code: 500,
Message: err.Error(),
}, nil
}
return &cronpb.Response{
Code: 0,
Message: "success",
}, nil
}
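For quick reference, a hedged sketch of exercising the new planstream gateway routes from Go. The host, port, any route prefix the server may add, and the field values are assumptions for illustration; only the /planstream/api/* paths come from the HTTP bindings added in this change.

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Add a stream to plan 1 (field names follow the proto JSON names).
	body := bytes.NewBufferString(`{"planId":1,"streamPath":"live/test","fragment":"10s","filePath":"record/live/test"}`)
	resp, err := http.Post("http://localhost:8080/planstream/api/add", "application/json", body)
	if err != nil {
		fmt.Println("add failed:", err)
		return
	}
	resp.Body.Close()

	// Page through the streams bound to plan 1.
	resp, err = http.Get("http://localhost:8080/planstream/api/list?pageNum=1&pageSize=10&planId=1")
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("list status:", resp.Status)
}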

View File

@@ -36,7 +36,7 @@ func (r *Crontab) Tick(any) {
// query all enabled record plans
var plans []pkg.RecordPlan
model := pkg.RecordPlan{
Enabled: true,
Enable: true,
}
if err := r.ctp.DB.Where(&model).Find(&plans).Error; err != nil {
r.Error("查询录制计划失败:", err)
@@ -45,8 +45,8 @@ func (r *Crontab) Tick(any) {
// iterate over all plans
for _, plan := range plans {
if len(plan.Plan) != 144 {
r.Error("录制计划格式错误plan长度应为144位:", plan.Name)
if len(plan.Plan) != 168 {
r.Error("录制计划格式错误plan长度应为168位:", plan.Name)
continue
}
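A hypothetical helper to illustrate the new length check: 168 plausibly reads as 7 days × 24 hours, one slot per hour of the week. The '1'/'0' encoding and the indexing below are assumptions not shown in this diff; it also assumes the time package is already imported in this file.

func planAllowsNow(plan string, now time.Time) bool {
	if len(plan) != 168 {
		return false // malformed plan, mirrors the check above
	}
	idx := int(now.Weekday())*24 + now.Hour() // assumed slot layout: day-major, hour-minor
	return plan[idx] == '1'                   // assumed encoding: '1' means record in this hour
}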

View File

@@ -22,7 +22,7 @@ func (ct *CrontabPlugin) OnInit() (err error) {
if ct.DB == nil {
ct.Error("DB is nil")
} else {
err = ct.DB.AutoMigrate(&pkg.RecordPlan{})
err = ct.DB.AutoMigrate(&pkg.RecordPlan{}, &pkg.RecordPlanStream{})
if err != nil {
return fmt.Errorf("auto migrate tables error: %v", err)
}

View File

@@ -339,6 +339,311 @@ func (x *Response) GetMessage() string {
return ""
}
// RecordPlanStream related message definitions
type PlanStream struct {
state protoimpl.MessageState `protogen:"open.v1"`
PlanId uint32 `protobuf:"varint,1,opt,name=planId,proto3" json:"planId,omitempty"`
StreamPath string `protobuf:"bytes,2,opt,name=stream_path,json=streamPath,proto3" json:"stream_path,omitempty"`
Fragment string `protobuf:"bytes,3,opt,name=fragment,proto3" json:"fragment,omitempty"`
FilePath string `protobuf:"bytes,4,opt,name=filePath,proto3" json:"filePath,omitempty"`
RecordType string `protobuf:"bytes,5,opt,name=record_type,json=recordType,proto3" json:"record_type,omitempty"` // record type, e.g. "mp4", "flv"
CreatedAt *timestamppb.Timestamp `protobuf:"bytes,6,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
UpdatedAt *timestamppb.Timestamp `protobuf:"bytes,7,opt,name=updated_at,json=updatedAt,proto3" json:"updated_at,omitempty"`
Enable bool `protobuf:"varint,8,opt,name=enable,proto3" json:"enable,omitempty"` // whether this record stream is enabled
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *PlanStream) Reset() {
*x = PlanStream{}
mi := &file_crontab_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *PlanStream) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PlanStream) ProtoMessage() {}
func (x *PlanStream) ProtoReflect() protoreflect.Message {
mi := &file_crontab_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PlanStream.ProtoReflect.Descriptor instead.
func (*PlanStream) Descriptor() ([]byte, []int) {
return file_crontab_proto_rawDescGZIP(), []int{5}
}
func (x *PlanStream) GetPlanId() uint32 {
if x != nil {
return x.PlanId
}
return 0
}
func (x *PlanStream) GetStreamPath() string {
if x != nil {
return x.StreamPath
}
return ""
}
func (x *PlanStream) GetFragment() string {
if x != nil {
return x.Fragment
}
return ""
}
func (x *PlanStream) GetFilePath() string {
if x != nil {
return x.FilePath
}
return ""
}
func (x *PlanStream) GetRecordType() string {
if x != nil {
return x.RecordType
}
return ""
}
func (x *PlanStream) GetCreatedAt() *timestamppb.Timestamp {
if x != nil {
return x.CreatedAt
}
return nil
}
func (x *PlanStream) GetUpdatedAt() *timestamppb.Timestamp {
if x != nil {
return x.UpdatedAt
}
return nil
}
func (x *PlanStream) GetEnable() bool {
if x != nil {
return x.Enable
}
return false
}
type ReqPlanStreamList struct {
state protoimpl.MessageState `protogen:"open.v1"`
PageNum uint32 `protobuf:"varint,1,opt,name=pageNum,proto3" json:"pageNum,omitempty"`
PageSize uint32 `protobuf:"varint,2,opt,name=pageSize,proto3" json:"pageSize,omitempty"`
PlanId uint32 `protobuf:"varint,3,opt,name=planId,proto3" json:"planId,omitempty"` // optional: filter by record plan ID
StreamPath string `protobuf:"bytes,4,opt,name=stream_path,json=streamPath,proto3" json:"stream_path,omitempty"` // optional: filter by stream path
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ReqPlanStreamList) Reset() {
*x = ReqPlanStreamList{}
mi := &file_crontab_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ReqPlanStreamList) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ReqPlanStreamList) ProtoMessage() {}
func (x *ReqPlanStreamList) ProtoReflect() protoreflect.Message {
mi := &file_crontab_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ReqPlanStreamList.ProtoReflect.Descriptor instead.
func (*ReqPlanStreamList) Descriptor() ([]byte, []int) {
return file_crontab_proto_rawDescGZIP(), []int{6}
}
func (x *ReqPlanStreamList) GetPageNum() uint32 {
if x != nil {
return x.PageNum
}
return 0
}
func (x *ReqPlanStreamList) GetPageSize() uint32 {
if x != nil {
return x.PageSize
}
return 0
}
func (x *ReqPlanStreamList) GetPlanId() uint32 {
if x != nil {
return x.PlanId
}
return 0
}
func (x *ReqPlanStreamList) GetStreamPath() string {
if x != nil {
return x.StreamPath
}
return ""
}
type RecordPlanStreamResponseList struct {
state protoimpl.MessageState `protogen:"open.v1"`
Code int32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
TotalCount uint32 `protobuf:"varint,3,opt,name=totalCount,proto3" json:"totalCount,omitempty"`
PageNum uint32 `protobuf:"varint,4,opt,name=pageNum,proto3" json:"pageNum,omitempty"`
PageSize uint32 `protobuf:"varint,5,opt,name=pageSize,proto3" json:"pageSize,omitempty"`
Data []*PlanStream `protobuf:"bytes,6,rep,name=data,proto3" json:"data,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *RecordPlanStreamResponseList) Reset() {
*x = RecordPlanStreamResponseList{}
mi := &file_crontab_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RecordPlanStreamResponseList) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RecordPlanStreamResponseList) ProtoMessage() {}
func (x *RecordPlanStreamResponseList) ProtoReflect() protoreflect.Message {
mi := &file_crontab_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RecordPlanStreamResponseList.ProtoReflect.Descriptor instead.
func (*RecordPlanStreamResponseList) Descriptor() ([]byte, []int) {
return file_crontab_proto_rawDescGZIP(), []int{7}
}
func (x *RecordPlanStreamResponseList) GetCode() int32 {
if x != nil {
return x.Code
}
return 0
}
func (x *RecordPlanStreamResponseList) GetMessage() string {
if x != nil {
return x.Message
}
return ""
}
func (x *RecordPlanStreamResponseList) GetTotalCount() uint32 {
if x != nil {
return x.TotalCount
}
return 0
}
func (x *RecordPlanStreamResponseList) GetPageNum() uint32 {
if x != nil {
return x.PageNum
}
return 0
}
func (x *RecordPlanStreamResponseList) GetPageSize() uint32 {
if x != nil {
return x.PageSize
}
return 0
}
func (x *RecordPlanStreamResponseList) GetData() []*PlanStream {
if x != nil {
return x.Data
}
return nil
}
type DeletePlanStreamRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
PlanId uint32 `protobuf:"varint,1,opt,name=planId,proto3" json:"planId,omitempty"`
StreamPath string `protobuf:"bytes,2,opt,name=streamPath,proto3" json:"streamPath,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeletePlanStreamRequest) Reset() {
*x = DeletePlanStreamRequest{}
mi := &file_crontab_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeletePlanStreamRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeletePlanStreamRequest) ProtoMessage() {}
func (x *DeletePlanStreamRequest) ProtoReflect() protoreflect.Message {
mi := &file_crontab_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeletePlanStreamRequest.ProtoReflect.Descriptor instead.
func (*DeletePlanStreamRequest) Descriptor() ([]byte, []int) {
return file_crontab_proto_rawDescGZIP(), []int{8}
}
func (x *DeletePlanStreamRequest) GetPlanId() uint32 {
if x != nil {
return x.PlanId
}
return 0
}
func (x *DeletePlanStreamRequest) GetStreamPath() string {
if x != nil {
return x.StreamPath
}
return ""
}
var File_crontab_proto protoreflect.FileDescriptor
const file_crontab_proto_rawDesc = "" +
@@ -371,12 +676,50 @@ const file_crontab_proto_rawDesc = "" +
"\x02id\x18\x01 \x01(\rR\x02id\"8\n" +
"\bResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\x05R\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage2\xbe\x02\n" +
"\amessage\x18\x02 \x01(\tR\amessage\"\xac\x02\n" +
"\n" +
"PlanStream\x12\x16\n" +
"\x06planId\x18\x01 \x01(\rR\x06planId\x12\x1f\n" +
"\vstream_path\x18\x02 \x01(\tR\n" +
"streamPath\x12\x1a\n" +
"\bfragment\x18\x03 \x01(\tR\bfragment\x12\x1a\n" +
"\bfilePath\x18\x04 \x01(\tR\bfilePath\x12\x1f\n" +
"\vrecord_type\x18\x05 \x01(\tR\n" +
"recordType\x129\n" +
"\n" +
"created_at\x18\x06 \x01(\v2\x1a.google.protobuf.TimestampR\tcreatedAt\x129\n" +
"\n" +
"updated_at\x18\a \x01(\v2\x1a.google.protobuf.TimestampR\tupdatedAt\x12\x16\n" +
"\x06enable\x18\b \x01(\bR\x06enable\"\x82\x01\n" +
"\x11ReqPlanStreamList\x12\x18\n" +
"\apageNum\x18\x01 \x01(\rR\apageNum\x12\x1a\n" +
"\bpageSize\x18\x02 \x01(\rR\bpageSize\x12\x16\n" +
"\x06planId\x18\x03 \x01(\rR\x06planId\x12\x1f\n" +
"\vstream_path\x18\x04 \x01(\tR\n" +
"streamPath\"\xcb\x01\n" +
"\x1cRecordPlanStreamResponseList\x12\x12\n" +
"\x04code\x18\x01 \x01(\x05R\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12\x1e\n" +
"\n" +
"totalCount\x18\x03 \x01(\rR\n" +
"totalCount\x12\x18\n" +
"\apageNum\x18\x04 \x01(\rR\apageNum\x12\x1a\n" +
"\bpageSize\x18\x05 \x01(\rR\bpageSize\x12'\n" +
"\x04data\x18\x06 \x03(\v2\x13.crontab.PlanStreamR\x04data\"Q\n" +
"\x17DeletePlanStreamRequest\x12\x16\n" +
"\x06planId\x18\x01 \x01(\rR\x06planId\x12\x1e\n" +
"\n" +
"streamPath\x18\x02 \x01(\tR\n" +
"streamPath2\x88\x06\n" +
"\x03api\x12O\n" +
"\x04List\x12\x14.crontab.ReqPlanList\x1a\x19.crontab.PlanResponseList\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/plan/api/list\x12A\n" +
"\x03Add\x12\r.crontab.Plan\x1a\x11.crontab.Response\"\x18\x82\xd3\xe4\x93\x02\x12:\x01*\"\r/plan/api/add\x12L\n" +
"\x06Update\x12\r.crontab.Plan\x1a\x11.crontab.Response\" \x82\xd3\xe4\x93\x02\x1a:\x01*\"\x15/plan/api/update/{id}\x12U\n" +
"\x06Remove\x12\x16.crontab.DeleteRequest\x1a\x11.crontab.Response\" \x82\xd3\xe4\x93\x02\x1a:\x01*\"\x15/plan/api/remove/{id}B\x1fZ\x1dm7s.live/v5/plugin/crontab/pbb\x06proto3"
"\x06Remove\x12\x16.crontab.DeleteRequest\x1a\x11.crontab.Response\" \x82\xd3\xe4\x93\x02\x1a:\x01*\"\x15/plan/api/remove/{id}\x12x\n" +
"\x15ListRecordPlanStreams\x12\x1a.crontab.ReqPlanStreamList\x1a%.crontab.RecordPlanStreamResponseList\"\x1c\x82\xd3\xe4\x93\x02\x16\x12\x14/planstream/api/list\x12]\n" +
"\x13AddRecordPlanStream\x12\x13.crontab.PlanStream\x1a\x11.crontab.Response\"\x1e\x82\xd3\xe4\x93\x02\x18:\x01*\"\x13/planstream/api/add\x12c\n" +
"\x16UpdateRecordPlanStream\x12\x13.crontab.PlanStream\x1a\x11.crontab.Response\"!\x82\xd3\xe4\x93\x02\x1b:\x01*\"\x16/planstream/api/update\x12\x89\x01\n" +
"\x16RemoveRecordPlanStream\x12 .crontab.DeletePlanStreamRequest\x1a\x11.crontab.Response\":\x82\xd3\xe4\x93\x024:\x01*\"//planstream/api/remove/{planId}/{streamPath=**}B\x1fZ\x1dm7s.live/v5/plugin/crontab/pbb\x06proto3"
var (
file_crontab_proto_rawDescOnce sync.Once
@@ -390,32 +733,47 @@ func file_crontab_proto_rawDescGZIP() []byte {
return file_crontab_proto_rawDescData
}
var file_crontab_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
var file_crontab_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_crontab_proto_goTypes = []any{
(*PlanResponseList)(nil), // 0: crontab.PlanResponseList
(*Plan)(nil), // 1: crontab.Plan
(*ReqPlanList)(nil), // 2: crontab.ReqPlanList
(*DeleteRequest)(nil), // 3: crontab.DeleteRequest
(*Response)(nil), // 4: crontab.Response
(*timestamppb.Timestamp)(nil), // 5: google.protobuf.Timestamp
(*PlanResponseList)(nil), // 0: crontab.PlanResponseList
(*Plan)(nil), // 1: crontab.Plan
(*ReqPlanList)(nil), // 2: crontab.ReqPlanList
(*DeleteRequest)(nil), // 3: crontab.DeleteRequest
(*Response)(nil), // 4: crontab.Response
(*PlanStream)(nil), // 5: crontab.PlanStream
(*ReqPlanStreamList)(nil), // 6: crontab.ReqPlanStreamList
(*RecordPlanStreamResponseList)(nil), // 7: crontab.RecordPlanStreamResponseList
(*DeletePlanStreamRequest)(nil), // 8: crontab.DeletePlanStreamRequest
(*timestamppb.Timestamp)(nil), // 9: google.protobuf.Timestamp
}
var file_crontab_proto_depIdxs = []int32{
1, // 0: crontab.PlanResponseList.data:type_name -> crontab.Plan
5, // 1: crontab.Plan.createTime:type_name -> google.protobuf.Timestamp
5, // 2: crontab.Plan.updateTime:type_name -> google.protobuf.Timestamp
2, // 3: crontab.api.List:input_type -> crontab.ReqPlanList
1, // 4: crontab.api.Add:input_type -> crontab.Plan
1, // 5: crontab.api.Update:input_type -> crontab.Plan
3, // 6: crontab.api.Remove:input_type -> crontab.DeleteRequest
0, // 7: crontab.api.List:output_type -> crontab.PlanResponseList
4, // 8: crontab.api.Add:output_type -> crontab.Response
4, // 9: crontab.api.Update:output_type -> crontab.Response
4, // 10: crontab.api.Remove:output_type -> crontab.Response
7, // [7:11] is the sub-list for method output_type
3, // [3:7] is the sub-list for method input_type
3, // [3:3] is the sub-list for extension type_name
3, // [3:3] is the sub-list for extension extendee
0, // [0:3] is the sub-list for field type_name
1, // 0: crontab.PlanResponseList.data:type_name -> crontab.Plan
9, // 1: crontab.Plan.createTime:type_name -> google.protobuf.Timestamp
9, // 2: crontab.Plan.updateTime:type_name -> google.protobuf.Timestamp
9, // 3: crontab.PlanStream.created_at:type_name -> google.protobuf.Timestamp
9, // 4: crontab.PlanStream.updated_at:type_name -> google.protobuf.Timestamp
5, // 5: crontab.RecordPlanStreamResponseList.data:type_name -> crontab.PlanStream
2, // 6: crontab.api.List:input_type -> crontab.ReqPlanList
1, // 7: crontab.api.Add:input_type -> crontab.Plan
1, // 8: crontab.api.Update:input_type -> crontab.Plan
3, // 9: crontab.api.Remove:input_type -> crontab.DeleteRequest
6, // 10: crontab.api.ListRecordPlanStreams:input_type -> crontab.ReqPlanStreamList
5, // 11: crontab.api.AddRecordPlanStream:input_type -> crontab.PlanStream
5, // 12: crontab.api.UpdateRecordPlanStream:input_type -> crontab.PlanStream
8, // 13: crontab.api.RemoveRecordPlanStream:input_type -> crontab.DeletePlanStreamRequest
0, // 14: crontab.api.List:output_type -> crontab.PlanResponseList
4, // 15: crontab.api.Add:output_type -> crontab.Response
4, // 16: crontab.api.Update:output_type -> crontab.Response
4, // 17: crontab.api.Remove:output_type -> crontab.Response
7, // 18: crontab.api.ListRecordPlanStreams:output_type -> crontab.RecordPlanStreamResponseList
4, // 19: crontab.api.AddRecordPlanStream:output_type -> crontab.Response
4, // 20: crontab.api.UpdateRecordPlanStream:output_type -> crontab.Response
4, // 21: crontab.api.RemoveRecordPlanStream:output_type -> crontab.Response
14, // [14:22] is the sub-list for method output_type
6, // [6:14] is the sub-list for method input_type
6, // [6:6] is the sub-list for extension type_name
6, // [6:6] is the sub-list for extension extendee
0, // [0:6] is the sub-list for field type_name
}
func init() { file_crontab_proto_init() }
@@ -429,7 +787,7 @@ func file_crontab_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_crontab_proto_rawDesc), len(file_crontab_proto_rawDesc)),
NumEnums: 0,
NumMessages: 5,
NumMessages: 9,
NumExtensions: 0,
NumServices: 1,
},

View File

@@ -176,6 +176,145 @@ func local_request_Api_Remove_0(ctx context.Context, marshaler runtime.Marshaler
return msg, metadata, err
}
var filter_Api_ListRecordPlanStreams_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_Api_ListRecordPlanStreams_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ReqPlanStreamList
metadata runtime.ServerMetadata
)
io.Copy(io.Discard, req.Body)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_ListRecordPlanStreams_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.ListRecordPlanStreams(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_ListRecordPlanStreams_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ReqPlanStreamList
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_ListRecordPlanStreams_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.ListRecordPlanStreams(ctx, &protoReq)
return msg, metadata, err
}
func request_Api_AddRecordPlanStream_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq PlanStream
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.AddRecordPlanStream(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_AddRecordPlanStream_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq PlanStream
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.AddRecordPlanStream(ctx, &protoReq)
return msg, metadata, err
}
func request_Api_UpdateRecordPlanStream_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq PlanStream
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.UpdateRecordPlanStream(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_UpdateRecordPlanStream_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq PlanStream
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.UpdateRecordPlanStream(ctx, &protoReq)
return msg, metadata, err
}
func request_Api_RemoveRecordPlanStream_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeletePlanStreamRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
val, ok := pathParams["planId"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "planId")
}
protoReq.PlanId, err = runtime.Uint32(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "planId", err)
}
val, ok = pathParams["streamPath"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "streamPath")
}
protoReq.StreamPath, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "streamPath", err)
}
msg, err := client.RemoveRecordPlanStream(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_RemoveRecordPlanStream_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeletePlanStreamRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
val, ok := pathParams["planId"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "planId")
}
protoReq.PlanId, err = runtime.Uint32(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "planId", err)
}
val, ok = pathParams["streamPath"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "streamPath")
}
protoReq.StreamPath, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "streamPath", err)
}
msg, err := server.RemoveRecordPlanStream(ctx, &protoReq)
return msg, metadata, err
}
// RegisterApiHandlerServer registers the http handlers for service Api to "mux".
// UnaryRPC :call ApiServer directly.
// StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906.
@@ -262,6 +401,86 @@ func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server
}
forward_Api_Remove_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Api_ListRecordPlanStreams_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/crontab.Api/ListRecordPlanStreams", runtime.WithHTTPPathPattern("/planstream/api/list"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_ListRecordPlanStreams_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_ListRecordPlanStreams_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_AddRecordPlanStream_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/crontab.Api/AddRecordPlanStream", runtime.WithHTTPPathPattern("/planstream/api/add"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_AddRecordPlanStream_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_AddRecordPlanStream_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_UpdateRecordPlanStream_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/crontab.Api/UpdateRecordPlanStream", runtime.WithHTTPPathPattern("/planstream/api/update"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_UpdateRecordPlanStream_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_UpdateRecordPlanStream_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_RemoveRecordPlanStream_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/crontab.Api/RemoveRecordPlanStream", runtime.WithHTTPPathPattern("/planstream/api/remove/{planId}/{streamPath=**}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_RemoveRecordPlanStream_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_RemoveRecordPlanStream_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
@@ -370,19 +589,95 @@ func RegisterApiHandlerClient(ctx context.Context, mux *runtime.ServeMux, client
}
forward_Api_Remove_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Api_ListRecordPlanStreams_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/crontab.Api/ListRecordPlanStreams", runtime.WithHTTPPathPattern("/planstream/api/list"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_ListRecordPlanStreams_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_ListRecordPlanStreams_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_AddRecordPlanStream_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/crontab.Api/AddRecordPlanStream", runtime.WithHTTPPathPattern("/planstream/api/add"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_AddRecordPlanStream_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_AddRecordPlanStream_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_UpdateRecordPlanStream_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/crontab.Api/UpdateRecordPlanStream", runtime.WithHTTPPathPattern("/planstream/api/update"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_UpdateRecordPlanStream_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_UpdateRecordPlanStream_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_Api_RemoveRecordPlanStream_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/crontab.Api/RemoveRecordPlanStream", runtime.WithHTTPPathPattern("/planstream/api/remove/{planId}/{streamPath=**}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_RemoveRecordPlanStream_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_RemoveRecordPlanStream_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
var (
pattern_Api_List_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"plan", "api", "list"}, ""))
pattern_Api_Add_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"plan", "api", "add"}, ""))
pattern_Api_Update_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"plan", "api", "update", "id"}, ""))
pattern_Api_Remove_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"plan", "api", "remove", "id"}, ""))
pattern_Api_List_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"plan", "api", "list"}, ""))
pattern_Api_Add_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"plan", "api", "add"}, ""))
pattern_Api_Update_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"plan", "api", "update", "id"}, ""))
pattern_Api_Remove_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"plan", "api", "remove", "id"}, ""))
pattern_Api_ListRecordPlanStreams_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"planstream", "api", "list"}, ""))
pattern_Api_AddRecordPlanStream_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"planstream", "api", "add"}, ""))
pattern_Api_UpdateRecordPlanStream_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"planstream", "api", "update"}, ""))
pattern_Api_RemoveRecordPlanStream_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 3, 0, 4, 1, 5, 4}, []string{"planstream", "api", "remove", "planId", "streamPath"}, ""))
)
var (
forward_Api_List_0 = runtime.ForwardResponseMessage
forward_Api_Add_0 = runtime.ForwardResponseMessage
forward_Api_Update_0 = runtime.ForwardResponseMessage
forward_Api_Remove_0 = runtime.ForwardResponseMessage
forward_Api_List_0 = runtime.ForwardResponseMessage
forward_Api_Add_0 = runtime.ForwardResponseMessage
forward_Api_Update_0 = runtime.ForwardResponseMessage
forward_Api_Remove_0 = runtime.ForwardResponseMessage
forward_Api_ListRecordPlanStreams_0 = runtime.ForwardResponseMessage
forward_Api_AddRecordPlanStream_0 = runtime.ForwardResponseMessage
forward_Api_UpdateRecordPlanStream_0 = runtime.ForwardResponseMessage
forward_Api_RemoveRecordPlanStream_0 = runtime.ForwardResponseMessage
)

View File

@@ -28,6 +28,31 @@ service api {
body: "*"
};
}
// RecordPlanStream related APIs
rpc ListRecordPlanStreams (ReqPlanStreamList) returns (RecordPlanStreamResponseList) {
option (google.api.http) = {
get: "/planstream/api/list"
};
}
rpc AddRecordPlanStream (PlanStream) returns (Response) {
option (google.api.http) = {
post: "/planstream/api/add"
body: "*"
};
}
rpc UpdateRecordPlanStream (PlanStream) returns (Response) {
option (google.api.http) = {
post: "/planstream/api/update"
body: "*"
};
}
rpc RemoveRecordPlanStream (DeletePlanStreamRequest) returns (Response) {
option (google.api.http) = {
post: "/planstream/api/remove/{planId}/{streamPath=**}"
body: "*"
};
}
}
message PlanResponseList {
@@ -60,4 +85,37 @@ message DeleteRequest {
message Response {
int32 code = 1;
string message = 2;
}
// RecordPlanStream related message definitions
message PlanStream {
uint32 planId = 1;
string stream_path = 2;
string fragment = 3;
string filePath = 4;
string record_type = 5; // record type, e.g. "mp4", "flv"
google.protobuf.Timestamp created_at = 6;
google.protobuf.Timestamp updated_at = 7;
bool enable = 8; // whether this record stream is enabled
}
message ReqPlanStreamList {
uint32 pageNum = 1;
uint32 pageSize = 2;
uint32 planId = 3; // optional: filter by record plan ID
string stream_path = 4; // optional: filter by stream path
}
message RecordPlanStreamResponseList {
int32 code = 1;
string message = 2;
uint32 totalCount = 3;
uint32 pageNum = 4;
uint32 pageSize = 5;
repeated PlanStream data = 6;
}
message DeletePlanStreamRequest {
uint32 planId = 1;
string streamPath = 2;
}

View File

@@ -19,10 +19,14 @@ import (
const _ = grpc.SupportPackageIsVersion9
const (
Api_List_FullMethodName = "/crontab.api/List"
Api_Add_FullMethodName = "/crontab.api/Add"
Api_Update_FullMethodName = "/crontab.api/Update"
Api_Remove_FullMethodName = "/crontab.api/Remove"
Api_List_FullMethodName = "/crontab.api/List"
Api_Add_FullMethodName = "/crontab.api/Add"
Api_Update_FullMethodName = "/crontab.api/Update"
Api_Remove_FullMethodName = "/crontab.api/Remove"
Api_ListRecordPlanStreams_FullMethodName = "/crontab.api/ListRecordPlanStreams"
Api_AddRecordPlanStream_FullMethodName = "/crontab.api/AddRecordPlanStream"
Api_UpdateRecordPlanStream_FullMethodName = "/crontab.api/UpdateRecordPlanStream"
Api_RemoveRecordPlanStream_FullMethodName = "/crontab.api/RemoveRecordPlanStream"
)
// ApiClient is the client API for Api service.
@@ -33,6 +37,11 @@ type ApiClient interface {
Add(ctx context.Context, in *Plan, opts ...grpc.CallOption) (*Response, error)
Update(ctx context.Context, in *Plan, opts ...grpc.CallOption) (*Response, error)
Remove(ctx context.Context, in *DeleteRequest, opts ...grpc.CallOption) (*Response, error)
// RecordPlanStream related APIs
ListRecordPlanStreams(ctx context.Context, in *ReqPlanStreamList, opts ...grpc.CallOption) (*RecordPlanStreamResponseList, error)
AddRecordPlanStream(ctx context.Context, in *PlanStream, opts ...grpc.CallOption) (*Response, error)
UpdateRecordPlanStream(ctx context.Context, in *PlanStream, opts ...grpc.CallOption) (*Response, error)
RemoveRecordPlanStream(ctx context.Context, in *DeletePlanStreamRequest, opts ...grpc.CallOption) (*Response, error)
}
type apiClient struct {
@@ -83,6 +92,46 @@ func (c *apiClient) Remove(ctx context.Context, in *DeleteRequest, opts ...grpc.
return out, nil
}
func (c *apiClient) ListRecordPlanStreams(ctx context.Context, in *ReqPlanStreamList, opts ...grpc.CallOption) (*RecordPlanStreamResponseList, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(RecordPlanStreamResponseList)
err := c.cc.Invoke(ctx, Api_ListRecordPlanStreams_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *apiClient) AddRecordPlanStream(ctx context.Context, in *PlanStream, opts ...grpc.CallOption) (*Response, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(Response)
err := c.cc.Invoke(ctx, Api_AddRecordPlanStream_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *apiClient) UpdateRecordPlanStream(ctx context.Context, in *PlanStream, opts ...grpc.CallOption) (*Response, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(Response)
err := c.cc.Invoke(ctx, Api_UpdateRecordPlanStream_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *apiClient) RemoveRecordPlanStream(ctx context.Context, in *DeletePlanStreamRequest, opts ...grpc.CallOption) (*Response, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(Response)
err := c.cc.Invoke(ctx, Api_RemoveRecordPlanStream_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// ApiServer is the server API for Api service.
// All implementations must embed UnimplementedApiServer
// for forward compatibility.
@@ -91,6 +140,11 @@ type ApiServer interface {
Add(context.Context, *Plan) (*Response, error)
Update(context.Context, *Plan) (*Response, error)
Remove(context.Context, *DeleteRequest) (*Response, error)
// RecordPlanStream related APIs
ListRecordPlanStreams(context.Context, *ReqPlanStreamList) (*RecordPlanStreamResponseList, error)
AddRecordPlanStream(context.Context, *PlanStream) (*Response, error)
UpdateRecordPlanStream(context.Context, *PlanStream) (*Response, error)
RemoveRecordPlanStream(context.Context, *DeletePlanStreamRequest) (*Response, error)
mustEmbedUnimplementedApiServer()
}
@@ -113,6 +167,18 @@ func (UnimplementedApiServer) Update(context.Context, *Plan) (*Response, error)
func (UnimplementedApiServer) Remove(context.Context, *DeleteRequest) (*Response, error) {
return nil, status.Errorf(codes.Unimplemented, "method Remove not implemented")
}
func (UnimplementedApiServer) ListRecordPlanStreams(context.Context, *ReqPlanStreamList) (*RecordPlanStreamResponseList, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListRecordPlanStreams not implemented")
}
func (UnimplementedApiServer) AddRecordPlanStream(context.Context, *PlanStream) (*Response, error) {
return nil, status.Errorf(codes.Unimplemented, "method AddRecordPlanStream not implemented")
}
func (UnimplementedApiServer) UpdateRecordPlanStream(context.Context, *PlanStream) (*Response, error) {
return nil, status.Errorf(codes.Unimplemented, "method UpdateRecordPlanStream not implemented")
}
func (UnimplementedApiServer) RemoveRecordPlanStream(context.Context, *DeletePlanStreamRequest) (*Response, error) {
return nil, status.Errorf(codes.Unimplemented, "method RemoveRecordPlanStream not implemented")
}
func (UnimplementedApiServer) mustEmbedUnimplementedApiServer() {}
func (UnimplementedApiServer) testEmbeddedByValue() {}
@@ -206,6 +272,78 @@ func _Api_Remove_Handler(srv interface{}, ctx context.Context, dec func(interfac
return interceptor(ctx, in, info, handler)
}
func _Api_ListRecordPlanStreams_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ReqPlanStreamList)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).ListRecordPlanStreams(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_ListRecordPlanStreams_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).ListRecordPlanStreams(ctx, req.(*ReqPlanStreamList))
}
return interceptor(ctx, in, info, handler)
}
func _Api_AddRecordPlanStream_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlanStream)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).AddRecordPlanStream(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_AddRecordPlanStream_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).AddRecordPlanStream(ctx, req.(*PlanStream))
}
return interceptor(ctx, in, info, handler)
}
func _Api_UpdateRecordPlanStream_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlanStream)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).UpdateRecordPlanStream(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_UpdateRecordPlanStream_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).UpdateRecordPlanStream(ctx, req.(*PlanStream))
}
return interceptor(ctx, in, info, handler)
}
func _Api_RemoveRecordPlanStream_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeletePlanStreamRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).RemoveRecordPlanStream(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_RemoveRecordPlanStream_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).RemoveRecordPlanStream(ctx, req.(*DeletePlanStreamRequest))
}
return interceptor(ctx, in, info, handler)
}
// Api_ServiceDesc is the grpc.ServiceDesc for Api service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
@@ -229,6 +367,22 @@ var Api_ServiceDesc = grpc.ServiceDesc{
MethodName: "Remove",
Handler: _Api_Remove_Handler,
},
{
MethodName: "ListRecordPlanStreams",
Handler: _Api_ListRecordPlanStreams_Handler,
},
{
MethodName: "AddRecordPlanStream",
Handler: _Api_AddRecordPlanStream_Handler,
},
{
MethodName: "UpdateRecordPlanStream",
Handler: _Api_UpdateRecordPlanStream_Handler,
},
{
MethodName: "RemoveRecordPlanStream",
Handler: _Api_RemoveRecordPlanStream_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "crontab.proto",

View File

@@ -1,13 +0,0 @@
package pkg
import (
"gorm.io/gorm"
)
// RecordPlan is the record plan model
type RecordPlan struct {
gorm.Model
Name string `json:"name" gorm:"default:''"`
Plan string `json:"plan" gorm:"type:text"`
Enabled bool `json:"enabled" gorm:"default:true"`
}

View File

@@ -0,0 +1,13 @@
package pkg
import (
"gorm.io/gorm"
)
// RecordPlan is the record plan model
type RecordPlan struct {
gorm.Model
Name string `json:"name" gorm:"default:''"`
Plan string `json:"plan" gorm:"type:text"`
Enable bool `json:"enable" gorm:"default:false"` // whether enabled
}

View File

@@ -0,0 +1,51 @@
package pkg
import (
"gorm.io/gorm"
"time"
)
// RecordPlanStream is the record plan stream model
type RecordPlanStream struct {
PlanID uint `json:"plan_id" gorm:"primaryKey;type:bigint;not null"` // record plan ID
StreamPath string `json:"stream_path" gorm:"primaryKey;type:varchar(255)"`
Fragment string `json:"fragment" gorm:"type:text"`
FilePath string `json:"file_path" gorm:"type:varchar(255)"`
CreatedAt time.Time
UpdatedAt time.Time
DeletedAt gorm.DeletedAt `gorm:"index"`
Enable bool `json:"enable" gorm:"default:false"` // whether enabled
RecordType string `json:"record_type" gorm:"type:varchar(255)"`
}
// TableName sets the table name
func (RecordPlanStream) TableName() string {
return "record_plans_streams"
}
// ScopeStreamPathLike fuzzy-matches StreamPath
func ScopeStreamPathLike(streamPath string) func(db *gorm.DB) *gorm.DB {
return func(db *gorm.DB) *gorm.DB {
if streamPath != "" {
return db.Where("record_plans_streams.stream_path LIKE ?", "%"+streamPath+"%")
}
return db
}
}
// ScopeOrderByCreatedAtDesc orders by created_at descending
func ScopeOrderByCreatedAtDesc() func(db *gorm.DB) *gorm.DB {
return func(db *gorm.DB) *gorm.DB {
return db.Order("record_plans_streams.created_at DESC")
}
}
// ScopeRecordPlanID filters by record plan ID
func ScopeRecordPlanID(recordPlanID uint) func(db *gorm.DB) *gorm.DB {
return func(db *gorm.DB) *gorm.DB {
if recordPlanID > 0 {
return db.Where(&RecordPlanStream{PlanID: recordPlanID})
}
return db
}
}
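A hedged sketch, written as if in the same package, of how the three scopes above compose into one paginated query; the db handle and the filter values are placeholders.

func listExample(db *gorm.DB) (streams []RecordPlanStream, err error) {
	err = db.Model(&RecordPlanStream{}).
		Scopes(
			ScopeRecordPlanID(1),         // keep only plan 1
			ScopeStreamPathLike("live/"), // stream_path LIKE %live/%
			ScopeOrderByCreatedAtDesc(),  // newest first
		).
		Offset(0).Limit(10).
		Find(&streams).Error
	return
}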

View File

@@ -15,6 +15,7 @@ import (
"github.com/gorilla/websocket"
"github.com/shirou/gopsutil/v4/cpu"
"github.com/shirou/gopsutil/v4/process"
"m7s.live/v5/pkg/task"
)
//go:embed static/*
@@ -40,8 +41,17 @@ type consumer struct {
}
type server struct {
task.TickTask
consumers []consumer
consumersMutex sync.RWMutex
data DataStorage
lastPause uint32
dataMutex sync.RWMutex
lastConsumerID uint
upgrader websocket.Upgrader
prevSysTime float64
prevUserTime float64
myProcess *process.Process
}
type SimplePair struct {
@@ -75,99 +85,91 @@ const (
maxCount int = 86400
)
var (
data DataStorage
lastPause uint32
mutex sync.RWMutex
lastConsumerID uint
s server
upgrader = websocket.Upgrader{
func (s *server) Start() error {
var err error
s.myProcess, err = process.NewProcess(int32(os.Getpid()))
if err != nil {
log.Printf("Failed to get process: %v", err)
}
// initialize the WebSocket upgrader
s.upgrader = websocket.Upgrader{
ReadBufferSize: 1024,
WriteBufferSize: 1024,
}
prevSysTime float64
prevUserTime float64
myProcess *process.Process
)
func init() {
myProcess, _ = process.NewProcess(int32(os.Getpid()))
// preallocate arrays in data, helps save on reallocations caused by append()
// when maxCount is large
data.BytesAllocated = make([]SimplePair, 0, maxCount)
data.GcPauses = make([]SimplePair, 0, maxCount)
data.CPUUsage = make([]CPUPair, 0, maxCount)
data.Pprof = make([]PprofPair, 0, maxCount)
go s.gatherData()
s.data.BytesAllocated = make([]SimplePair, 0, maxCount)
s.data.GcPauses = make([]SimplePair, 0, maxCount)
s.data.CPUUsage = make([]CPUPair, 0, maxCount)
s.data.Pprof = make([]PprofPair, 0, maxCount)
return s.TickTask.Start()
}
func (s *server) gatherData() {
timer := time.Tick(time.Second)
func (s *server) GetTickInterval() time.Duration {
return time.Second
}
for now := range timer {
nowUnix := now.Unix()
func (s *server) Tick(any) {
now := time.Now()
nowUnix := now.Unix()
var ms runtime.MemStats
runtime.ReadMemStats(&ms)
var ms runtime.MemStats
runtime.ReadMemStats(&ms)
u := update{
Ts: nowUnix * 1000,
Block: pprof.Lookup("block").Count(),
Goroutine: pprof.Lookup("goroutine").Count(),
Heap: pprof.Lookup("heap").Count(),
Mutex: pprof.Lookup("mutex").Count(),
Threadcreate: pprof.Lookup("threadcreate").Count(),
}
data.Pprof = append(data.Pprof, PprofPair{
uint64(nowUnix) * 1000,
u.Block,
u.Goroutine,
u.Heap,
u.Mutex,
u.Threadcreate,
})
cpuTimes, err := myProcess.Times()
if err != nil {
cpuTimes = &cpu.TimesStat{}
}
if prevUserTime != 0 {
u.CPUUser = cpuTimes.User - prevUserTime
u.CPUSys = cpuTimes.System - prevSysTime
data.CPUUsage = append(data.CPUUsage, CPUPair{uint64(nowUnix) * 1000, u.CPUUser, u.CPUSys})
}
prevUserTime = cpuTimes.User
prevSysTime = cpuTimes.System
mutex.Lock()
bytesAllocated := ms.Alloc
u.BytesAllocated = bytesAllocated
data.BytesAllocated = append(data.BytesAllocated, SimplePair{uint64(nowUnix) * 1000, bytesAllocated})
if lastPause == 0 || lastPause != ms.NumGC {
gcPause := ms.PauseNs[(ms.NumGC+255)%256]
u.GcPause = gcPause
data.GcPauses = append(data.GcPauses, SimplePair{uint64(nowUnix) * 1000, gcPause})
lastPause = ms.NumGC
}
if len(data.BytesAllocated) > maxCount {
data.BytesAllocated = data.BytesAllocated[len(data.BytesAllocated)-maxCount:]
}
if len(data.GcPauses) > maxCount {
data.GcPauses = data.GcPauses[len(data.GcPauses)-maxCount:]
}
mutex.Unlock()
s.sendToConsumers(u)
u := update{
Ts: nowUnix * 1000,
Block: pprof.Lookup("block").Count(),
Goroutine: pprof.Lookup("goroutine").Count(),
Heap: pprof.Lookup("heap").Count(),
Mutex: pprof.Lookup("mutex").Count(),
Threadcreate: pprof.Lookup("threadcreate").Count(),
}
s.data.Pprof = append(s.data.Pprof, PprofPair{
uint64(nowUnix) * 1000,
u.Block,
u.Goroutine,
u.Heap,
u.Mutex,
u.Threadcreate,
})
cpuTimes, err := s.myProcess.Times()
if err != nil {
cpuTimes = &cpu.TimesStat{}
}
if s.prevUserTime != 0 {
u.CPUUser = cpuTimes.User - s.prevUserTime
u.CPUSys = cpuTimes.System - s.prevSysTime
s.data.CPUUsage = append(s.data.CPUUsage, CPUPair{uint64(nowUnix) * 1000, u.CPUUser, u.CPUSys})
}
s.prevUserTime = cpuTimes.User
s.prevSysTime = cpuTimes.System
s.dataMutex.Lock()
bytesAllocated := ms.Alloc
u.BytesAllocated = bytesAllocated
s.data.BytesAllocated = append(s.data.BytesAllocated, SimplePair{uint64(nowUnix) * 1000, bytesAllocated})
if s.lastPause == 0 || s.lastPause != ms.NumGC {
gcPause := ms.PauseNs[(ms.NumGC+255)%256]
u.GcPause = gcPause
s.data.GcPauses = append(s.data.GcPauses, SimplePair{uint64(nowUnix) * 1000, gcPause})
s.lastPause = ms.NumGC
}
if len(s.data.BytesAllocated) > maxCount {
s.data.BytesAllocated = s.data.BytesAllocated[len(s.data.BytesAllocated)-maxCount:]
}
if len(s.data.GcPauses) > maxCount {
s.data.GcPauses = s.data.GcPauses[len(s.data.GcPauses)-maxCount:]
}
s.dataMutex.Unlock()
s.sendToConsumers(u)
}
func (s *server) sendToConsumers(u update) {
@@ -203,10 +205,10 @@ func (s *server) addConsumer() consumer {
s.consumersMutex.Lock()
defer s.consumersMutex.Unlock()
lastConsumerID++
s.lastConsumerID++
c := consumer{
id: lastConsumerID,
id: s.lastConsumerID,
c: make(chan update),
}
@@ -221,7 +223,7 @@ func (s *server) dataFeedHandler(w http.ResponseWriter, r *http.Request) {
lastPong time.Time
)
conn, err := upgrader.Upgrade(w, r, nil)
conn, err := s.upgrader.Upgrade(w, r, nil)
if err != nil {
log.Println(err)
return
@@ -268,9 +270,9 @@ func (s *server) dataFeedHandler(w http.ResponseWriter, r *http.Request) {
}
}
func dataHandler(w http.ResponseWriter, r *http.Request) {
mutex.RLock()
defer mutex.RUnlock()
func (s *server) dataHandler(w http.ResponseWriter, r *http.Request) {
s.dataMutex.RLock()
defer s.dataMutex.RUnlock()
if e := r.ParseForm(); e != nil {
log.Print("error parsing form")
@@ -284,7 +286,7 @@ func dataHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
encoder := json.NewEncoder(w)
encoder.Encode(data)
encoder.Encode(s.data)
fmt.Fprint(w, ")")
}
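A minimal sketch of the task.TickTask pattern the chart server now follows: embed the task, override GetTickInterval, do the periodic work in Tick, and forward Start to the embedded task. The sampler type and its counter are illustrative only, and it assumes the same task, time and log imports used above.

type sampler struct {
	task.TickTask
	count int
}

func (s *sampler) Start() error {
	// one-time setup would go here
	return s.TickTask.Start()
}

func (s *sampler) GetTickInterval() time.Duration { return time.Second }

func (s *sampler) Tick(any) {
	s.count++
	log.Println("tick", s.count)
}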

plugin/debug/envcheck.go (new file, 219 lines)
View File

@@ -0,0 +1,219 @@
package plugin_debug
import (
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"net/url"
"time"
"google.golang.org/protobuf/types/known/timestamppb"
"gopkg.in/yaml.v3"
"m7s.live/v5/pb"
"m7s.live/v5/pkg/util"
)
type EnvCheckResult struct {
Message string `json:"message"`
Type string `json:"type"` // info, success, error, complete
}
// custom system-info response struct, used for JSON parsing
type SysInfoResponseJSON struct {
Code int32 `json:"code"`
Message string `json:"message"`
Data struct {
StartTime string `json:"startTime"`
LocalIP string `json:"localIP"`
PublicIP string `json:"publicIP"`
Version string `json:"version"`
GoVersion string `json:"goVersion"`
OS string `json:"os"`
Arch string `json:"arch"`
CPUs int32 `json:"cpus"`
Plugins []struct {
Name string `json:"name"`
PushAddr []string `json:"pushAddr"`
PlayAddr []string `json:"playAddr"`
Description map[string]string `json:"description"`
} `json:"plugins"`
} `json:"data"`
}
// plugin configuration response struct
type PluginConfigResponse struct {
Code int32 `json:"code"`
Message string `json:"message"`
Data struct {
File string `json:"file"`
Modified string `json:"modified"`
Merged string `json:"merged"`
} `json:"data"`
}
// TCP configuration struct
type TCPConfig struct {
ListenAddr string `yaml:"listenaddr"`
ListenAddrTLS string `yaml:"listenaddrtls"`
}
// plugin configuration struct
type PluginConfig struct {
TCP TCPConfig `yaml:"tcp"`
}
func (p *DebugPlugin) EnvCheck(w http.ResponseWriter, r *http.Request) {
// Get target URL from query parameter
targetURL := r.URL.Query().Get("target")
if targetURL == "" {
r.URL.Path = "/static/envcheck.html"
staticFSHandler.ServeHTTP(w, r)
return
}
// Create SSE connection
util.NewSSE(w, r.Context(), func(sse *util.SSE) {
// Function to send SSE messages
sendMessage := func(message string, msgType string) {
result := EnvCheckResult{
Message: message,
Type: msgType,
}
sse.WriteJSON(result)
}
// Parse target URL
_, err := url.Parse(targetURL)
if err != nil {
sendMessage(fmt.Sprintf("Invalid URL: %v", err), "error")
return
}
// Check if we can connect to the target server
sendMessage(fmt.Sprintf("Checking connection to %s...", targetURL), "info")
// Get system info from target server
resp, err := http.Get(fmt.Sprintf("%s/api/sysinfo", targetURL))
if err != nil {
sendMessage(fmt.Sprintf("Failed to connect to target server: %v", err), "error")
return
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
sendMessage(fmt.Sprintf("Target server returned status code: %d", resp.StatusCode), "error")
return
}
// Read and parse system info
body, err := io.ReadAll(resp.Body)
if err != nil {
sendMessage(fmt.Sprintf("Failed to read response: %v", err), "error")
return
}
var sysInfoJSON SysInfoResponseJSON
if err := json.Unmarshal(body, &sysInfoJSON); err != nil {
sendMessage(fmt.Sprintf("Failed to parse system info: %v", err), "error")
return
}
// Convert JSON response to protobuf response
sysInfo := &pb.SysInfoResponse{
Code: sysInfoJSON.Code,
Message: sysInfoJSON.Message,
Data: &pb.SysInfoData{
LocalIP: sysInfoJSON.Data.LocalIP,
PublicIP: sysInfoJSON.Data.PublicIP,
Version: sysInfoJSON.Data.Version,
GoVersion: sysInfoJSON.Data.GoVersion,
Os: sysInfoJSON.Data.OS,
Arch: sysInfoJSON.Data.Arch,
Cpus: sysInfoJSON.Data.CPUs,
},
}
// Parse start time
if startTime, err := time.Parse(time.RFC3339, sysInfoJSON.Data.StartTime); err == nil {
sysInfo.Data.StartTime = timestamppb.New(startTime)
}
// Convert plugins
for _, pluginJSON := range sysInfoJSON.Data.Plugins {
plugin := &pb.PluginInfo{
Name: pluginJSON.Name,
PushAddr: pluginJSON.PushAddr,
PlayAddr: pluginJSON.PlayAddr,
Description: pluginJSON.Description,
}
sysInfo.Data.Plugins = append(sysInfo.Data.Plugins, plugin)
}
// Check each plugin's configuration
for _, plugin := range sysInfo.Data.Plugins {
// Get plugin configuration
configResp, err := http.Get(fmt.Sprintf("%s/api/config/get/%s", targetURL, plugin.Name))
if err != nil {
sendMessage(fmt.Sprintf("Failed to get configuration for plugin %s: %v", plugin.Name, err), "error")
continue
}
defer configResp.Body.Close()
if configResp.StatusCode != http.StatusOK {
sendMessage(fmt.Sprintf("Failed to get configuration for plugin %s: status code %d", plugin.Name, configResp.StatusCode), "error")
continue
}
var configRespJSON PluginConfigResponse
if err := json.NewDecoder(configResp.Body).Decode(&configRespJSON); err != nil {
sendMessage(fmt.Sprintf("Failed to parse configuration for plugin %s: %v", plugin.Name, err), "error")
continue
}
// Parse YAML configuration
var config PluginConfig
if err := yaml.Unmarshal([]byte(configRespJSON.Data.Merged), &config); err != nil {
sendMessage(fmt.Sprintf("Failed to parse YAML configuration for plugin %s: %v", plugin.Name, err), "error")
continue
}
// Check TCP configuration
if config.TCP.ListenAddr != "" {
host, port, err := net.SplitHostPort(config.TCP.ListenAddr)
if err != nil {
sendMessage(fmt.Sprintf("Invalid listenaddr format for plugin %s: %v", plugin.Name, err), "error")
} else {
sendMessage(fmt.Sprintf("Checking TCP listenaddr %s for plugin %s...", config.TCP.ListenAddr, plugin.Name), "info")
// Try to establish TCP connection
conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%s", host, port), 5*time.Second)
if err != nil {
sendMessage(fmt.Sprintf("TCP listenaddr %s for plugin %s is not accessible: %v", config.TCP.ListenAddr, plugin.Name, err), "error")
} else {
conn.Close()
sendMessage(fmt.Sprintf("TCP listenaddr %s for plugin %s is accessible", config.TCP.ListenAddr, plugin.Name), "success")
}
}
}
if config.TCP.ListenAddrTLS != "" {
host, port, err := net.SplitHostPort(config.TCP.ListenAddrTLS)
if err != nil {
sendMessage(fmt.Sprintf("Invalid listenaddrtls format for plugin %s: %v", plugin.Name, err), "error")
} else {
sendMessage(fmt.Sprintf("Checking TCP TLS listenaddr %s for plugin %s...", config.TCP.ListenAddrTLS, plugin.Name), "info")
// Try to establish TCP connection
conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%s", host, port), 5*time.Second)
if err != nil {
sendMessage(fmt.Sprintf("TCP TLS listenaddr %s for plugin %s is not accessible: %v", config.TCP.ListenAddrTLS, plugin.Name, err), "error")
} else {
conn.Close()
sendMessage(fmt.Sprintf("TCP TLS listenaddr %s for plugin %s is accessible", config.TCP.ListenAddrTLS, plugin.Name), "success")
}
}
}
}
sendMessage("Environment check completed", "complete")
})
}
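For reference, a minimal client-side sketch of consuming this SSE endpoint from Go instead of from the envcheck.html page shown further below. The host and port are assumptions; the /debug/envcheck path and the target query parameter follow the handler and HTML page in this diff.
package main
import (
	"bufio"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)
func main() {
	// Hypothetical addresses: the debug plugin host and the server to be checked.
	endpoint := "http://localhost:8080/debug/envcheck?target=" + url.QueryEscape("http://192.168.1.100:8080")
	resp, err := http.Get(endpoint)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		// Each SSE event carries one EnvCheckResult encoded as JSON on a "data:" line.
		if strings.HasPrefix(line, "data:") {
			fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "data:")))
		}
	}
}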

View File

@@ -7,9 +7,11 @@ import (
"net/http"
"net/http/pprof"
"os"
"os/exec" // 新增导入
"runtime"
runtimePPROF "runtime/pprof"
"sort"
"strconv"
"strings"
"sync"
"time"
@@ -32,13 +34,13 @@ type DebugPlugin struct {
m7s.Plugin
ProfileDuration time.Duration `default:"10s" desc:"profile持续时间"`
Profile string `desc:"采集profile存储文件"`
ChartPeriod time.Duration `default:"1s" desc:"图表更新周期"`
Grfout string `default:"grf.out" desc:"grf输出文件"`
EnableChart bool `default:"true" desc:"是否启用图表功能"`
// cache fields
cpuProfileData *profile.Profile // cached CPU profile data
cpuProfileOnce sync.Once // ensure the profile is collected only once
cpuProfileLock sync.Mutex // protect the cached data
chartServer server
}
type WriteToFile struct {
@@ -70,6 +72,10 @@ func (p *DebugPlugin) OnInit() error {
p.Info("cpu profile done")
}()
}
if p.EnableChart {
p.AddTask(&p.chartServer)
}
return nil
}
@@ -98,11 +104,11 @@ func (p *DebugPlugin) Charts_(w http.ResponseWriter, r *http.Request) {
}
func (p *DebugPlugin) Charts_data(w http.ResponseWriter, r *http.Request) {
dataHandler(w, r)
p.chartServer.dataHandler(w, r)
}
func (p *DebugPlugin) Charts_datafeed(w http.ResponseWriter, r *http.Request) {
s.dataFeedHandler(w, r)
p.chartServer.dataFeedHandler(w, r)
}
func (p *DebugPlugin) Grf(w http.ResponseWriter, r *http.Request) {
@@ -193,7 +199,7 @@ func (p *DebugPlugin) GetHeap(ctx context.Context, empty *emptypb.Empty) (*pb.He
obj.Size += size
totalSize += size
// Build reference relationships
for i := 1; i < len(sample.Location); i++ {
loc := sample.Location[i]
if len(loc.Line) == 0 || loc.Line[0].Function == nil {
@@ -443,3 +449,42 @@ func (p *DebugPlugin) GetHeapGraph(ctx context.Context, empty *emptypb.Empty) (*
Data: dot,
}, nil
}
func (p *DebugPlugin) API_TcpDump(rw http.ResponseWriter, r *http.Request) {
query := r.URL.Query()
args := []string{"-W", "1"}
if query.Get("interface") != "" {
args = append(args, "-i", query.Get("interface"))
}
if query.Get("filter") != "" {
args = append(args, query.Get("filter"))
}
if query.Get("extra_args") != "" {
args = append(args, strings.Fields(query.Get("extra_args"))...)
}
if query.Get("duration") == "" {
http.Error(rw, "duration is required", http.StatusBadRequest)
return
}
rw.Header().Set("Content-Type", "text/plain")
rw.Header().Set("Cache-Control", "no-cache")
rw.Header().Set("Content-Disposition", "attachment; filename=tcpdump.txt")
cmd := exec.CommandContext(p, "tcpdump", args...)
p.Info("starting tcpdump", "args", strings.Join(cmd.Args, " "))
cmd.Stdout = rw
cmd.Stderr = os.Stderr // redirect tcpdump's error output to standard error
err := cmd.Start()
if err != nil {
http.Error(rw, fmt.Sprintf("failed to start tcpdump: %v", err), http.StatusInternalServerError)
return
}
duration, err := strconv.Atoi(query.Get("duration"))
if err != nil {
http.Error(rw, "invalid duration", http.StatusBadRequest)
return
}
<-time.After(time.Duration(duration) * time.Second)
if err := cmd.Process.Kill(); err != nil {
p.Error("failed to kill tcpdump process", "error", err)
}
}
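A sketch of how this capture endpoint might be driven from Go. The /debug/api/tcpdump route is an assumption about how API_ handlers are registered, and the interface and filter values are placeholders; the duration parameter is required by the handler above.
package main
import (
	"io"
	"net/http"
	"net/url"
	"os"
)
func main() {
	q := url.Values{}
	q.Set("interface", "eth0")       // hypothetical capture interface
	q.Set("filter", "tcp port 1935") // hypothetical BPF filter
	q.Set("duration", "10")          // seconds; the handler kills tcpdump afterwards
	// Assumed route; adjust to however the debug plugin exposes API_TcpDump.
	resp, err := http.Get("http://localhost:8080/debug/api/tcpdump?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, err := os.Create("tcpdump.txt")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	io.Copy(out, resp.Body) // stream the capture output until the server stops it
}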

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.0
// protoc v5.29.1
// protoc-gen-go v1.36.6
// protoc v5.29.3
// source: debug.proto
package pb
@@ -14,6 +14,7 @@ import (
_ "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@@ -1007,176 +1008,107 @@ func (x *RuntimeStats) GetBlockingTimeNs() uint64 {
var File_debug_proto protoreflect.FileDescriptor
var file_debug_proto_rawDesc = []byte{
0x0a, 0x0b, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x05, 0x64,
0x65, 0x62, 0x75, 0x67, 0x1a, 0x1c, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x61, 0x70, 0x69,
0x2f, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x1a, 0x1b, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70, 0x74, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a,
0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66,
0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x22, 0x42, 0x0a, 0x0a, 0x43, 0x70, 0x75, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x18,
0x0a, 0x07, 0x72, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52,
0x07, 0x72, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x12, 0x1a, 0x0a, 0x08, 0x64, 0x75, 0x72, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x08, 0x64, 0x75, 0x72, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x22, 0x94, 0x01, 0x0a, 0x0a, 0x48, 0x65, 0x61, 0x70, 0x4f, 0x62, 0x6a,
0x65, 0x63, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28,
0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74,
0x18, 0x02, 0x20, 0x01, 0x28, 0x03, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x12, 0x0a,
0x04, 0x73, 0x69, 0x7a, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x52, 0x04, 0x73, 0x69, 0x7a,
0x65, 0x12, 0x1a, 0x0a, 0x08, 0x73, 0x69, 0x7a, 0x65, 0x50, 0x65, 0x72, 0x63, 0x18, 0x04, 0x20,
0x01, 0x28, 0x01, 0x52, 0x08, 0x73, 0x69, 0x7a, 0x65, 0x50, 0x65, 0x72, 0x63, 0x12, 0x18, 0x0a,
0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07,
0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x72, 0x65, 0x66, 0x73, 0x18,
0x06, 0x20, 0x03, 0x28, 0x09, 0x52, 0x04, 0x72, 0x65, 0x66, 0x73, 0x22, 0xc7, 0x02, 0x0a, 0x09,
0x48, 0x65, 0x61, 0x70, 0x53, 0x74, 0x61, 0x74, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x61, 0x6c, 0x6c,
0x6f, 0x63, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x61, 0x6c, 0x6c, 0x6f, 0x63, 0x12,
0x1e, 0x0a, 0x0a, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x41, 0x6c, 0x6c, 0x6f, 0x63, 0x18, 0x02, 0x20,
0x01, 0x28, 0x04, 0x52, 0x0a, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x41, 0x6c, 0x6c, 0x6f, 0x63, 0x12,
0x10, 0x0a, 0x03, 0x73, 0x79, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x03, 0x73, 0x79,
0x73, 0x12, 0x14, 0x0a, 0x05, 0x6e, 0x75, 0x6d, 0x47, 0x43, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0d,
0x52, 0x05, 0x6e, 0x75, 0x6d, 0x47, 0x43, 0x12, 0x1c, 0x0a, 0x09, 0x68, 0x65, 0x61, 0x70, 0x41,
0x6c, 0x6c, 0x6f, 0x63, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x68, 0x65, 0x61, 0x70,
0x41, 0x6c, 0x6c, 0x6f, 0x63, 0x12, 0x18, 0x0a, 0x07, 0x68, 0x65, 0x61, 0x70, 0x53, 0x79, 0x73,
0x18, 0x06, 0x20, 0x01, 0x28, 0x04, 0x52, 0x07, 0x68, 0x65, 0x61, 0x70, 0x53, 0x79, 0x73, 0x12,
0x1a, 0x0a, 0x08, 0x68, 0x65, 0x61, 0x70, 0x49, 0x64, 0x6c, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28,
0x04, 0x52, 0x08, 0x68, 0x65, 0x61, 0x70, 0x49, 0x64, 0x6c, 0x65, 0x12, 0x1c, 0x0a, 0x09, 0x68,
0x65, 0x61, 0x70, 0x49, 0x6e, 0x75, 0x73, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09,
0x68, 0x65, 0x61, 0x70, 0x49, 0x6e, 0x75, 0x73, 0x65, 0x12, 0x22, 0x0a, 0x0c, 0x68, 0x65, 0x61,
0x70, 0x52, 0x65, 0x6c, 0x65, 0x61, 0x73, 0x65, 0x64, 0x18, 0x09, 0x20, 0x01, 0x28, 0x04, 0x52,
0x0c, 0x68, 0x65, 0x61, 0x70, 0x52, 0x65, 0x6c, 0x65, 0x61, 0x73, 0x65, 0x64, 0x12, 0x20, 0x0a,
0x0b, 0x68, 0x65, 0x61, 0x70, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x18, 0x0a, 0x20, 0x01,
0x28, 0x04, 0x52, 0x0b, 0x68, 0x65, 0x61, 0x70, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x12,
0x24, 0x0a, 0x0d, 0x67, 0x63, 0x43, 0x50, 0x55, 0x46, 0x72, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e,
0x18, 0x0b, 0x20, 0x01, 0x28, 0x01, 0x52, 0x0d, 0x67, 0x63, 0x43, 0x50, 0x55, 0x46, 0x72, 0x61,
0x63, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0x86, 0x01, 0x0a, 0x08, 0x48, 0x65, 0x61, 0x70, 0x44, 0x61,
0x74, 0x61, 0x12, 0x26, 0x0a, 0x05, 0x73, 0x74, 0x61, 0x74, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x10, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2e, 0x48, 0x65, 0x61, 0x70, 0x53, 0x74,
0x61, 0x74, 0x73, 0x52, 0x05, 0x73, 0x74, 0x61, 0x74, 0x73, 0x12, 0x2b, 0x0a, 0x07, 0x6f, 0x62,
0x6a, 0x65, 0x63, 0x74, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x64, 0x65,
0x62, 0x75, 0x67, 0x2e, 0x48, 0x65, 0x61, 0x70, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x52, 0x07,
0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x12, 0x25, 0x0a, 0x05, 0x65, 0x64, 0x67, 0x65, 0x73,
0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0f, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2e, 0x48,
0x65, 0x61, 0x70, 0x45, 0x64, 0x67, 0x65, 0x52, 0x05, 0x65, 0x64, 0x67, 0x65, 0x73, 0x22, 0x4c,
0x0a, 0x08, 0x48, 0x65, 0x61, 0x70, 0x45, 0x64, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x66, 0x72,
0x6f, 0x6d, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x66, 0x72, 0x6f, 0x6d, 0x12, 0x0e,
0x0a, 0x02, 0x74, 0x6f, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x74, 0x6f, 0x12, 0x1c,
0x0a, 0x09, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28,
0x09, 0x52, 0x09, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x22, 0x61, 0x0a, 0x0c,
0x48, 0x65, 0x61, 0x70, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04,
0x63, 0x6f, 0x64, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x04, 0x63, 0x6f, 0x64, 0x65,
0x12, 0x18, 0x0a, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28,
0x09, 0x52, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x23, 0x0a, 0x04, 0x64, 0x61,
0x74, 0x61, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0f, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67,
0x2e, 0x48, 0x65, 0x61, 0x70, 0x44, 0x61, 0x74, 0x61, 0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x22,
0x55, 0x0a, 0x11, 0x48, 0x65, 0x61, 0x70, 0x47, 0x72, 0x61, 0x70, 0x68, 0x52, 0x65, 0x73, 0x70,
0x6f, 0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x18, 0x01, 0x20, 0x01,
0x28, 0x0d, 0x52, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x65, 0x73, 0x73,
0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61,
0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x64, 0x61, 0x74, 0x61, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09,
0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x22, 0x54, 0x0a, 0x10, 0x43, 0x70, 0x75, 0x47, 0x72, 0x61,
0x70, 0x68, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x63, 0x6f,
0x64, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x12, 0x18,
0x0a, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52,
0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x64, 0x61, 0x74, 0x61,
0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x22, 0x5f, 0x0a, 0x0b,
0x43, 0x70, 0x75, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x63,
0x6f, 0x64, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x12,
0x18, 0x0a, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09,
0x52, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x22, 0x0a, 0x04, 0x64, 0x61, 0x74,
0x61, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2e,
0x43, 0x70, 0x75, 0x44, 0x61, 0x74, 0x61, 0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x22, 0xc5, 0x02,
0x0a, 0x07, 0x43, 0x70, 0x75, 0x44, 0x61, 0x74, 0x61, 0x12, 0x29, 0x0a, 0x11, 0x74, 0x6f, 0x74,
0x61, 0x6c, 0x5f, 0x63, 0x70, 0x75, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x6e, 0x73, 0x18, 0x01,
0x20, 0x01, 0x28, 0x04, 0x52, 0x0e, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x43, 0x70, 0x75, 0x54, 0x69,
0x6d, 0x65, 0x4e, 0x73, 0x12, 0x30, 0x0a, 0x14, 0x73, 0x61, 0x6d, 0x70, 0x6c, 0x69, 0x6e, 0x67,
0x5f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x76, 0x61, 0x6c, 0x5f, 0x6e, 0x73, 0x18, 0x02, 0x20, 0x01,
0x28, 0x04, 0x52, 0x12, 0x73, 0x61, 0x6d, 0x70, 0x6c, 0x69, 0x6e, 0x67, 0x49, 0x6e, 0x74, 0x65,
0x72, 0x76, 0x61, 0x6c, 0x4e, 0x73, 0x12, 0x34, 0x0a, 0x09, 0x66, 0x75, 0x6e, 0x63, 0x74, 0x69,
0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x16, 0x2e, 0x64, 0x65, 0x62, 0x75,
0x67, 0x2e, 0x46, 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x66, 0x69, 0x6c,
0x65, 0x52, 0x09, 0x66, 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x37, 0x0a, 0x0a,
0x67, 0x6f, 0x72, 0x6f, 0x75, 0x74, 0x69, 0x6e, 0x65, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b,
0x32, 0x17, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2e, 0x47, 0x6f, 0x72, 0x6f, 0x75, 0x74, 0x69,
0x6e, 0x65, 0x50, 0x72, 0x6f, 0x66, 0x69, 0x6c, 0x65, 0x52, 0x0a, 0x67, 0x6f, 0x72, 0x6f, 0x75,
0x74, 0x69, 0x6e, 0x65, 0x73, 0x12, 0x34, 0x0a, 0x0c, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x5f,
0x63, 0x61, 0x6c, 0x6c, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x64, 0x65,
0x62, 0x75, 0x67, 0x2e, 0x53, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x43, 0x61, 0x6c, 0x6c, 0x52, 0x0b,
0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x43, 0x61, 0x6c, 0x6c, 0x73, 0x12, 0x38, 0x0a, 0x0d, 0x72,
0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x73, 0x18, 0x06, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x13, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2e, 0x52, 0x75, 0x6e, 0x74, 0x69,
0x6d, 0x65, 0x53, 0x74, 0x61, 0x74, 0x73, 0x52, 0x0c, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65,
0x53, 0x74, 0x61, 0x74, 0x73, 0x22, 0xbf, 0x01, 0x0a, 0x0f, 0x46, 0x75, 0x6e, 0x63, 0x74, 0x69,
0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x66, 0x69, 0x6c, 0x65, 0x12, 0x23, 0x0a, 0x0d, 0x66, 0x75, 0x6e,
0x63, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
0x52, 0x0c, 0x66, 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x1e,
0x0a, 0x0b, 0x63, 0x70, 0x75, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x6e, 0x73, 0x18, 0x02, 0x20,
0x01, 0x28, 0x04, 0x52, 0x09, 0x63, 0x70, 0x75, 0x54, 0x69, 0x6d, 0x65, 0x4e, 0x73, 0x12, 0x29,
0x0a, 0x10, 0x69, 0x6e, 0x76, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x63, 0x6f, 0x75,
0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0f, 0x69, 0x6e, 0x76, 0x6f, 0x63, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x63, 0x61, 0x6c,
0x6c, 0x5f, 0x73, 0x74, 0x61, 0x63, 0x6b, 0x18, 0x04, 0x20, 0x03, 0x28, 0x09, 0x52, 0x09, 0x63,
0x61, 0x6c, 0x6c, 0x53, 0x74, 0x61, 0x63, 0x6b, 0x12, 0x1d, 0x0a, 0x0a, 0x69, 0x73, 0x5f, 0x69,
0x6e, 0x6c, 0x69, 0x6e, 0x65, 0x64, 0x18, 0x05, 0x20, 0x01, 0x28, 0x08, 0x52, 0x09, 0x69, 0x73,
0x49, 0x6e, 0x6c, 0x69, 0x6e, 0x65, 0x64, 0x22, 0x77, 0x0a, 0x10, 0x47, 0x6f, 0x72, 0x6f, 0x75,
0x74, 0x69, 0x6e, 0x65, 0x50, 0x72, 0x6f, 0x66, 0x69, 0x6c, 0x65, 0x12, 0x0e, 0x0a, 0x02, 0x69,
0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x02, 0x69, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x73,
0x74, 0x61, 0x74, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x73, 0x74, 0x61, 0x74,
0x65, 0x12, 0x1e, 0x0a, 0x0b, 0x63, 0x70, 0x75, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x6e, 0x73,
0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x63, 0x70, 0x75, 0x54, 0x69, 0x6d, 0x65, 0x4e,
0x73, 0x12, 0x1d, 0x0a, 0x0a, 0x63, 0x61, 0x6c, 0x6c, 0x5f, 0x73, 0x74, 0x61, 0x63, 0x6b, 0x18,
0x04, 0x20, 0x03, 0x28, 0x09, 0x52, 0x09, 0x63, 0x61, 0x6c, 0x6c, 0x53, 0x74, 0x61, 0x63, 0x6b,
0x22, 0x56, 0x0a, 0x0a, 0x53, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x43, 0x61, 0x6c, 0x6c, 0x12, 0x12,
0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61,
0x6d, 0x65, 0x12, 0x1e, 0x0a, 0x0b, 0x63, 0x70, 0x75, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x6e,
0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x63, 0x70, 0x75, 0x54, 0x69, 0x6d, 0x65,
0x4e, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28,
0x04, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0xa4, 0x01, 0x0a, 0x0c, 0x52, 0x75, 0x6e,
0x74, 0x69, 0x6d, 0x65, 0x53, 0x74, 0x61, 0x74, 0x73, 0x12, 0x26, 0x0a, 0x0f, 0x67, 0x63, 0x5f,
0x63, 0x70, 0x75, 0x5f, 0x66, 0x72, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01,
0x28, 0x01, 0x52, 0x0d, 0x67, 0x63, 0x43, 0x70, 0x75, 0x46, 0x72, 0x61, 0x63, 0x74, 0x69, 0x6f,
0x6e, 0x12, 0x19, 0x0a, 0x08, 0x67, 0x63, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x02, 0x20,
0x01, 0x28, 0x04, 0x52, 0x07, 0x67, 0x63, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x27, 0x0a, 0x10,
0x67, 0x63, 0x5f, 0x70, 0x61, 0x75, 0x73, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x6e, 0x73,
0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0d, 0x67, 0x63, 0x50, 0x61, 0x75, 0x73, 0x65, 0x54,
0x69, 0x6d, 0x65, 0x4e, 0x73, 0x12, 0x28, 0x0a, 0x10, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x69, 0x6e,
0x67, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x6e, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x52,
0x0e, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x69, 0x6e, 0x67, 0x54, 0x69, 0x6d, 0x65, 0x4e, 0x73, 0x32,
0xd9, 0x02, 0x0a, 0x03, 0x61, 0x70, 0x69, 0x12, 0x4f, 0x0a, 0x07, 0x47, 0x65, 0x74, 0x48, 0x65,
0x61, 0x70, 0x12, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x1a, 0x13, 0x2e, 0x64, 0x65, 0x62,
0x75, 0x67, 0x2e, 0x48, 0x65, 0x61, 0x70, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22,
0x17, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x11, 0x12, 0x0f, 0x2f, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2f,
0x61, 0x70, 0x69, 0x2f, 0x68, 0x65, 0x61, 0x70, 0x12, 0x5f, 0x0a, 0x0c, 0x47, 0x65, 0x74, 0x48,
0x65, 0x61, 0x70, 0x47, 0x72, 0x61, 0x70, 0x68, 0x12, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79,
0x1a, 0x18, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2e, 0x48, 0x65, 0x61, 0x70, 0x47, 0x72, 0x61,
0x70, 0x68, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x1d, 0x82, 0xd3, 0xe4, 0x93,
0x02, 0x17, 0x12, 0x15, 0x2f, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x68,
0x65, 0x61, 0x70, 0x2f, 0x67, 0x72, 0x61, 0x70, 0x68, 0x12, 0x57, 0x0a, 0x0b, 0x47, 0x65, 0x74,
0x43, 0x70, 0x75, 0x47, 0x72, 0x61, 0x70, 0x68, 0x12, 0x11, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67,
0x2e, 0x43, 0x70, 0x75, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x17, 0x2e, 0x64, 0x65,
0x62, 0x75, 0x67, 0x2e, 0x43, 0x70, 0x75, 0x47, 0x72, 0x61, 0x70, 0x68, 0x52, 0x65, 0x73, 0x70,
0x6f, 0x6e, 0x73, 0x65, 0x22, 0x1c, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x16, 0x12, 0x14, 0x2f, 0x64,
0x65, 0x62, 0x75, 0x67, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x63, 0x70, 0x75, 0x2f, 0x67, 0x72, 0x61,
0x70, 0x68, 0x12, 0x47, 0x0a, 0x06, 0x47, 0x65, 0x74, 0x43, 0x70, 0x75, 0x12, 0x11, 0x2e, 0x64,
0x65, 0x62, 0x75, 0x67, 0x2e, 0x43, 0x70, 0x75, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
0x12, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2e, 0x43, 0x70, 0x75, 0x52, 0x65, 0x73, 0x70, 0x6f,
0x6e, 0x73, 0x65, 0x22, 0x16, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x10, 0x12, 0x0e, 0x2f, 0x64, 0x65,
0x62, 0x75, 0x67, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x63, 0x70, 0x75, 0x42, 0x1d, 0x5a, 0x1b, 0x6d,
0x37, 0x73, 0x2e, 0x6c, 0x69, 0x76, 0x65, 0x2f, 0x76, 0x35, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69,
0x6e, 0x2f, 0x64, 0x65, 0x62, 0x75, 0x67, 0x2f, 0x70, 0x62, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
}
const file_debug_proto_rawDesc = "" +
"\n" +
"\vdebug.proto\x12\x05debug\x1a\x1cgoogle/api/annotations.proto\x1a\x1bgoogle/protobuf/empty.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"B\n" +
"\n" +
"CpuRequest\x12\x18\n" +
"\arefresh\x18\x01 \x01(\bR\arefresh\x12\x1a\n" +
"\bduration\x18\x02 \x01(\rR\bduration\"\x94\x01\n" +
"\n" +
"HeapObject\x12\x12\n" +
"\x04type\x18\x01 \x01(\tR\x04type\x12\x14\n" +
"\x05count\x18\x02 \x01(\x03R\x05count\x12\x12\n" +
"\x04size\x18\x03 \x01(\x03R\x04size\x12\x1a\n" +
"\bsizePerc\x18\x04 \x01(\x01R\bsizePerc\x12\x18\n" +
"\aaddress\x18\x05 \x01(\tR\aaddress\x12\x12\n" +
"\x04refs\x18\x06 \x03(\tR\x04refs\"\xc7\x02\n" +
"\tHeapStats\x12\x14\n" +
"\x05alloc\x18\x01 \x01(\x04R\x05alloc\x12\x1e\n" +
"\n" +
"totalAlloc\x18\x02 \x01(\x04R\n" +
"totalAlloc\x12\x10\n" +
"\x03sys\x18\x03 \x01(\x04R\x03sys\x12\x14\n" +
"\x05numGC\x18\x04 \x01(\rR\x05numGC\x12\x1c\n" +
"\theapAlloc\x18\x05 \x01(\x04R\theapAlloc\x12\x18\n" +
"\aheapSys\x18\x06 \x01(\x04R\aheapSys\x12\x1a\n" +
"\bheapIdle\x18\a \x01(\x04R\bheapIdle\x12\x1c\n" +
"\theapInuse\x18\b \x01(\x04R\theapInuse\x12\"\n" +
"\fheapReleased\x18\t \x01(\x04R\fheapReleased\x12 \n" +
"\vheapObjects\x18\n" +
" \x01(\x04R\vheapObjects\x12$\n" +
"\rgcCPUFraction\x18\v \x01(\x01R\rgcCPUFraction\"\x86\x01\n" +
"\bHeapData\x12&\n" +
"\x05stats\x18\x01 \x01(\v2\x10.debug.HeapStatsR\x05stats\x12+\n" +
"\aobjects\x18\x02 \x03(\v2\x11.debug.HeapObjectR\aobjects\x12%\n" +
"\x05edges\x18\x03 \x03(\v2\x0f.debug.HeapEdgeR\x05edges\"L\n" +
"\bHeapEdge\x12\x12\n" +
"\x04from\x18\x01 \x01(\tR\x04from\x12\x0e\n" +
"\x02to\x18\x02 \x01(\tR\x02to\x12\x1c\n" +
"\tfieldName\x18\x03 \x01(\tR\tfieldName\"a\n" +
"\fHeapResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\rR\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12#\n" +
"\x04data\x18\x03 \x01(\v2\x0f.debug.HeapDataR\x04data\"U\n" +
"\x11HeapGraphResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\rR\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12\x12\n" +
"\x04data\x18\x03 \x01(\tR\x04data\"T\n" +
"\x10CpuGraphResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\rR\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12\x12\n" +
"\x04data\x18\x03 \x01(\tR\x04data\"_\n" +
"\vCpuResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\rR\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12\"\n" +
"\x04data\x18\x03 \x01(\v2\x0e.debug.CpuDataR\x04data\"\xc5\x02\n" +
"\aCpuData\x12)\n" +
"\x11total_cpu_time_ns\x18\x01 \x01(\x04R\x0etotalCpuTimeNs\x120\n" +
"\x14sampling_interval_ns\x18\x02 \x01(\x04R\x12samplingIntervalNs\x124\n" +
"\tfunctions\x18\x03 \x03(\v2\x16.debug.FunctionProfileR\tfunctions\x127\n" +
"\n" +
"goroutines\x18\x04 \x03(\v2\x17.debug.GoroutineProfileR\n" +
"goroutines\x124\n" +
"\fsystem_calls\x18\x05 \x03(\v2\x11.debug.SystemCallR\vsystemCalls\x128\n" +
"\rruntime_stats\x18\x06 \x01(\v2\x13.debug.RuntimeStatsR\fruntimeStats\"\xbf\x01\n" +
"\x0fFunctionProfile\x12#\n" +
"\rfunction_name\x18\x01 \x01(\tR\ffunctionName\x12\x1e\n" +
"\vcpu_time_ns\x18\x02 \x01(\x04R\tcpuTimeNs\x12)\n" +
"\x10invocation_count\x18\x03 \x01(\x04R\x0finvocationCount\x12\x1d\n" +
"\n" +
"call_stack\x18\x04 \x03(\tR\tcallStack\x12\x1d\n" +
"\n" +
"is_inlined\x18\x05 \x01(\bR\tisInlined\"w\n" +
"\x10GoroutineProfile\x12\x0e\n" +
"\x02id\x18\x01 \x01(\x04R\x02id\x12\x14\n" +
"\x05state\x18\x02 \x01(\tR\x05state\x12\x1e\n" +
"\vcpu_time_ns\x18\x03 \x01(\x04R\tcpuTimeNs\x12\x1d\n" +
"\n" +
"call_stack\x18\x04 \x03(\tR\tcallStack\"V\n" +
"\n" +
"SystemCall\x12\x12\n" +
"\x04name\x18\x01 \x01(\tR\x04name\x12\x1e\n" +
"\vcpu_time_ns\x18\x02 \x01(\x04R\tcpuTimeNs\x12\x14\n" +
"\x05count\x18\x03 \x01(\x04R\x05count\"\xa4\x01\n" +
"\fRuntimeStats\x12&\n" +
"\x0fgc_cpu_fraction\x18\x01 \x01(\x01R\rgcCpuFraction\x12\x19\n" +
"\bgc_count\x18\x02 \x01(\x04R\agcCount\x12'\n" +
"\x10gc_pause_time_ns\x18\x03 \x01(\x04R\rgcPauseTimeNs\x12(\n" +
"\x10blocking_time_ns\x18\x04 \x01(\x04R\x0eblockingTimeNs2\xd9\x02\n" +
"\x03api\x12O\n" +
"\aGetHeap\x12\x16.google.protobuf.Empty\x1a\x13.debug.HeapResponse\"\x17\x82\xd3\xe4\x93\x02\x11\x12\x0f/debug/api/heap\x12_\n" +
"\fGetHeapGraph\x12\x16.google.protobuf.Empty\x1a\x18.debug.HeapGraphResponse\"\x1d\x82\xd3\xe4\x93\x02\x17\x12\x15/debug/api/heap/graph\x12W\n" +
"\vGetCpuGraph\x12\x11.debug.CpuRequest\x1a\x17.debug.CpuGraphResponse\"\x1c\x82\xd3\xe4\x93\x02\x16\x12\x14/debug/api/cpu/graph\x12G\n" +
"\x06GetCpu\x12\x11.debug.CpuRequest\x1a\x12.debug.CpuResponse\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/debug/api/cpuB\x1dZ\x1bm7s.live/v5/plugin/debug/pbb\x06proto3"
var (
file_debug_proto_rawDescOnce sync.Once
file_debug_proto_rawDescData = file_debug_proto_rawDesc
file_debug_proto_rawDescData []byte
)
func file_debug_proto_rawDescGZIP() []byte {
file_debug_proto_rawDescOnce.Do(func() {
file_debug_proto_rawDescData = protoimpl.X.CompressGZIP(file_debug_proto_rawDescData)
file_debug_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_debug_proto_rawDesc), len(file_debug_proto_rawDesc)))
})
return file_debug_proto_rawDescData
}
@@ -1233,7 +1165,7 @@ func file_debug_proto_init() {
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_debug_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_debug_proto_rawDesc), len(file_debug_proto_rawDesc)),
NumEnums: 0,
NumMessages: 14,
NumExtensions: 0,
@@ -1244,7 +1176,6 @@ func file_debug_proto_init() {
MessageInfos: file_debug_proto_msgTypes,
}.Build()
File_debug_proto = out.File
file_debug_proto_rawDesc = nil
file_debug_proto_goTypes = nil
file_debug_proto_depIdxs = nil
}

View File

@@ -10,7 +10,6 @@ package pb
import (
"context"
"errors"
"io"
"net/http"
@@ -26,129 +25,136 @@ import (
)
// Suppress "imported and not used" errors
var (
_ codes.Code
_ io.Reader
_ status.Status
_ = errors.New
_ = runtime.String
_ = utilities.NewDoubleArray
_ = metadata.Join
)
var _ codes.Code
var _ io.Reader
var _ status.Status
var _ = runtime.String
var _ = utilities.NewDoubleArray
var _ = metadata.Join
func request_Api_GetHeap_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq emptypb.Empty
metadata runtime.ServerMetadata
)
var protoReq emptypb.Empty
var metadata runtime.ServerMetadata
msg, err := client.GetHeap(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_GetHeap_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq emptypb.Empty
metadata runtime.ServerMetadata
)
var protoReq emptypb.Empty
var metadata runtime.ServerMetadata
msg, err := server.GetHeap(ctx, &protoReq)
return msg, metadata, err
}
func request_Api_GetHeapGraph_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq emptypb.Empty
metadata runtime.ServerMetadata
)
var protoReq emptypb.Empty
var metadata runtime.ServerMetadata
msg, err := client.GetHeapGraph(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_GetHeapGraph_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq emptypb.Empty
metadata runtime.ServerMetadata
)
var protoReq emptypb.Empty
var metadata runtime.ServerMetadata
msg, err := server.GetHeapGraph(ctx, &protoReq)
return msg, metadata, err
}
var filter_Api_GetCpuGraph_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
var (
filter_Api_GetCpuGraph_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
)
func request_Api_GetCpuGraph_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CpuRequest
metadata runtime.ServerMetadata
)
var protoReq CpuRequest
var metadata runtime.ServerMetadata
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_GetCpuGraph_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.GetCpuGraph(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_GetCpuGraph_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CpuRequest
metadata runtime.ServerMetadata
)
var protoReq CpuRequest
var metadata runtime.ServerMetadata
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_GetCpuGraph_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.GetCpuGraph(ctx, &protoReq)
return msg, metadata, err
}
var filter_Api_GetCpu_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
var (
filter_Api_GetCpu_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
)
func request_Api_GetCpu_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CpuRequest
metadata runtime.ServerMetadata
)
var protoReq CpuRequest
var metadata runtime.ServerMetadata
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_GetCpu_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.GetCpu(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_GetCpu_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CpuRequest
metadata runtime.ServerMetadata
)
var protoReq CpuRequest
var metadata runtime.ServerMetadata
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_GetCpu_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.GetCpu(ctx, &protoReq)
return msg, metadata, err
}
// RegisterApiHandlerServer registers the http handlers for service Api to "mux".
// UnaryRPC :call ApiServer directly.
// StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906.
// Note that using this registration option will cause many gRPC library features to stop working. Consider using RegisterApiHandlerFromEndpoint instead.
// GRPC interceptors will not work for this type of registration. To use interceptors, you must use the "runtime.WithMiddlewares" option in the "runtime.NewServeMux" call.
func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server ApiServer) error {
mux.Handle(http.MethodGet, pattern_Api_GetHeap_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Api_GetHeap_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/debug.Api/GetHeap", runtime.WithHTTPPathPattern("/debug/api/heap"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/debug.Api/GetHeap", runtime.WithHTTPPathPattern("/debug/api/heap"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -160,15 +166,20 @@ func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_GetHeap_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Api_GetHeapGraph_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Api_GetHeapGraph_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/debug.Api/GetHeapGraph", runtime.WithHTTPPathPattern("/debug/api/heap/graph"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/debug.Api/GetHeapGraph", runtime.WithHTTPPathPattern("/debug/api/heap/graph"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -180,15 +191,20 @@ func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_GetHeapGraph_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Api_GetCpuGraph_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Api_GetCpuGraph_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/debug.Api/GetCpuGraph", runtime.WithHTTPPathPattern("/debug/api/cpu/graph"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/debug.Api/GetCpuGraph", runtime.WithHTTPPathPattern("/debug/api/cpu/graph"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -200,15 +216,20 @@ func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_GetCpuGraph_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Api_GetCpu_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Api_GetCpu_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/debug.Api/GetCpu", runtime.WithHTTPPathPattern("/debug/api/cpu"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/debug.Api/GetCpu", runtime.WithHTTPPathPattern("/debug/api/cpu"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -220,7 +241,9 @@ func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_GetCpu_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
@@ -229,24 +252,25 @@ func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server
// RegisterApiHandlerFromEndpoint is same as RegisterApiHandler but
// automatically dials to "endpoint" and closes the connection when "ctx" gets done.
func RegisterApiHandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error) {
conn, err := grpc.NewClient(endpoint, opts...)
conn, err := grpc.DialContext(ctx, endpoint, opts...)
if err != nil {
return err
}
defer func() {
if err != nil {
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
return
}
go func() {
<-ctx.Done()
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
}()
}()
return RegisterApiHandler(ctx, mux, conn)
}
@@ -260,13 +284,16 @@ func RegisterApiHandler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.C
// to "mux". The handlers forward requests to the grpc endpoint over the given implementation of "ApiClient".
// Note: the gRPC framework executes interceptors within the gRPC handler. If the passed in "ApiClient"
// doesn't go through the normal gRPC flow (creating a gRPC client etc.) then it will be up to the passed in
// "ApiClient" to call the correct interceptors. This client ignores the HTTP middlewares.
// "ApiClient" to call the correct interceptors.
func RegisterApiHandlerClient(ctx context.Context, mux *runtime.ServeMux, client ApiClient) error {
mux.Handle(http.MethodGet, pattern_Api_GetHeap_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Api_GetHeap_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/debug.Api/GetHeap", runtime.WithHTTPPathPattern("/debug/api/heap"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/debug.Api/GetHeap", runtime.WithHTTPPathPattern("/debug/api/heap"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -277,13 +304,18 @@ func RegisterApiHandlerClient(ctx context.Context, mux *runtime.ServeMux, client
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_GetHeap_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Api_GetHeapGraph_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Api_GetHeapGraph_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/debug.Api/GetHeapGraph", runtime.WithHTTPPathPattern("/debug/api/heap/graph"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/debug.Api/GetHeapGraph", runtime.WithHTTPPathPattern("/debug/api/heap/graph"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -294,13 +326,18 @@ func RegisterApiHandlerClient(ctx context.Context, mux *runtime.ServeMux, client
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_GetHeapGraph_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Api_GetCpuGraph_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Api_GetCpuGraph_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/debug.Api/GetCpuGraph", runtime.WithHTTPPathPattern("/debug/api/cpu/graph"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/debug.Api/GetCpuGraph", runtime.WithHTTPPathPattern("/debug/api/cpu/graph"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -311,13 +348,18 @@ func RegisterApiHandlerClient(ctx context.Context, mux *runtime.ServeMux, client
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_GetCpuGraph_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_Api_GetCpu_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_Api_GetCpu_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/debug.Api/GetCpu", runtime.WithHTTPPathPattern("/debug/api/cpu"))
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/debug.Api/GetCpu", runtime.WithHTTPPathPattern("/debug/api/cpu"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
@@ -328,21 +370,30 @@ func RegisterApiHandlerClient(ctx context.Context, mux *runtime.ServeMux, client
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_GetCpu_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
var (
pattern_Api_GetHeap_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"debug", "api", "heap"}, ""))
pattern_Api_GetHeap_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"debug", "api", "heap"}, ""))
pattern_Api_GetHeapGraph_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"debug", "api", "heap", "graph"}, ""))
pattern_Api_GetCpuGraph_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"debug", "api", "cpu", "graph"}, ""))
pattern_Api_GetCpu_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"debug", "api", "cpu"}, ""))
pattern_Api_GetCpuGraph_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"debug", "api", "cpu", "graph"}, ""))
pattern_Api_GetCpu_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"debug", "api", "cpu"}, ""))
)
var (
forward_Api_GetHeap_0 = runtime.ForwardResponseMessage
forward_Api_GetHeap_0 = runtime.ForwardResponseMessage
forward_Api_GetHeapGraph_0 = runtime.ForwardResponseMessage
forward_Api_GetCpuGraph_0 = runtime.ForwardResponseMessage
forward_Api_GetCpu_0 = runtime.ForwardResponseMessage
forward_Api_GetCpuGraph_0 = runtime.ForwardResponseMessage
forward_Api_GetCpu_0 = runtime.ForwardResponseMessage
)

View File

@@ -132,4 +132,4 @@ message RuntimeStats {
uint64 gc_count = 2; // number of garbage collections
uint64 gc_pause_time_ns = 3; // GC pause time (nanoseconds)
uint64 blocking_time_ns = 4; // blocking time (nanoseconds)
}
}

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v5.29.1
// - protoc v5.29.3
// source: debug.proto
package pb

View File

@@ -0,0 +1,122 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Environment Check</title>
<style>
body {
font-family: Arial, sans-serif;
margin: 20px;
background-color: #f5f5f5;
}
.container {
max-width: 800px;
margin: 0 auto;
background-color: white;
padding: 20px;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
.input-group {
margin-bottom: 20px;
}
input[type="text"] {
padding: 8px;
width: 300px;
margin-right: 10px;
}
button {
padding: 8px 16px;
background-color: #4CAF50;
color: white;
border: none;
border-radius: 4px;
cursor: pointer;
}
button:hover {
background-color: #45a049;
}
#log {
background-color: #f8f9fa;
border: 1px solid #ddd;
padding: 10px;
height: 400px;
overflow-y: auto;
font-family: monospace;
white-space: pre-wrap;
}
.success {
color: #28a745;
}
.error {
color: #dc3545;
}
.info {
color: #17a2b8;
}
</style>
</head>
<body>
<div class="container">
<h1>Environment Check</h1>
<div class="input-group">
<input type="text" id="targetUrl" placeholder="Enter target URL (e.g., http://192.168.1.100:8080)">
<button onclick="startCheck()">Start Check</button>
</div>
<div id="log"></div>
</div>
<script>
function appendLog(message, type = 'info') {
const log = document.getElementById('log');
const entry = document.createElement('div');
entry.className = type;
entry.textContent = message;
log.appendChild(entry);
log.scrollTop = log.scrollHeight;
}
function startCheck() {
const targetUrl = document.getElementById('targetUrl').value;
if (!targetUrl) {
appendLog('Please enter a target URL', 'error');
return;
}
// Clear previous log
document.getElementById('log').innerHTML = '';
appendLog('Starting environment check...');
// Create SSE connection
const eventSource = new EventSource(`/debug/envcheck?target=${encodeURIComponent(targetUrl)}`);
eventSource.onmessage = function (event) {
const data = JSON.parse(event.data);
appendLog(data.message, data.type);
if (data.type === 'complete') {
eventSource.close();
}
};
eventSource.onerror = function (error) {
appendLog('Connection error occurred', 'error');
eventSource.close();
};
}
</script>
</body>
</html>

View File

@@ -1,24 +1,12 @@
package plugin_flv
import (
"bufio"
"context"
"encoding/binary"
"io"
"io/fs"
"net/http"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"google.golang.org/protobuf/types/known/emptypb"
"m7s.live/v5/pb"
"m7s.live/v5/pkg/util"
flvpb "m7s.live/v5/plugin/flv/pb"
flv "m7s.live/v5/plugin/flv/pkg"
rtmp "m7s.live/v5/plugin/rtmp/pkg"
)
func (p *FLVPlugin) List(ctx context.Context, req *flvpb.ReqRecordList) (resp *pb.ResponseList, err error) {
@@ -52,248 +40,49 @@ func (p *FLVPlugin) Delete(ctx context.Context, req *flvpb.ReqRecordDelete) (res
}
func (plugin *FLVPlugin) Download_(w http.ResponseWriter, r *http.Request) {
streamPath := strings.TrimSuffix(strings.TrimPrefix(r.URL.Path, "/download/"), ".flv")
singleFile := filepath.Join(plugin.Path, streamPath+".flv")
startTime, endTime, err := util.TimeRangeQueryParse(r.URL.Query())
// Parse request parameters
params, err := plugin.parseRequestParams(r)
if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
timeRange := endTime.Sub(startTime)
plugin.Info("download", "stream", streamPath, "start", startTime, "end", endTime)
dir := filepath.Join(plugin.Path, streamPath)
if util.Exist(singleFile) {
} else if util.Exist(dir) {
var fileList []fs.FileInfo
var found bool
var startOffsetTime time.Duration
err = filepath.Walk(dir, func(path string, info fs.FileInfo, err error) error {
if info.IsDir() || !strings.HasSuffix(info.Name(), ".flv") {
return nil
}
modTime := info.ModTime()
//tmp, _ := strconv.Atoi(strings.TrimSuffix(info.Name(), ".flv"))
//fileStartTime := time.Unix(tmp, 10)
if !found {
if modTime.After(startTime) {
found = true
//fmt.Println(path, modTime, startTime, found)
} else {
fileList = []fs.FileInfo{info}
startOffsetTime = startTime.Sub(modTime)
//fmt.Println(path, modTime, startTime, found)
return nil
}
}
if modTime.After(endTime) {
return fs.ErrInvalid
}
fileList = append(fileList, info)
return nil
})
if !found {
http.NotFound(w, r)
return
}
plugin.Info("download", "stream", params.streamPath, "start", params.startTime, "end", params.endTime)
w.Header().Set("Content-Type", "video/x-flv")
w.Header().Set("Content-Disposition", "attachment")
var writer io.Writer = w
flvHead := make([]byte, 9+4)
tagHead := make(util.Buffer, 11)
var contentLength uint64
// Query recording records from the database
recordStreams, err := plugin.queryRecordStreams(params)
if err != nil {
plugin.Error("Failed to query record streams", "err", err)
http.Error(w, "Database query failed", http.StatusInternalServerError)
return
}
var amf *rtmp.AMF
var metaData rtmp.EcmaArray
initMetaData := func(reader io.Reader, dataLen uint32) {
data := make([]byte, dataLen+4)
_, err = io.ReadFull(reader, data)
amf = &rtmp.AMF{
Buffer: util.Buffer(data[1+2+len("onMetaData") : len(data)-4]),
}
var obj any
obj, err = amf.Unmarshal()
metaData = obj.(rtmp.EcmaArray)
}
var filepositions []uint64
var times []float64
for pass := 0; pass < 2; pass++ {
offsetTime := startOffsetTime
var offsetTimestamp, lastTimestamp uint32
var init, seqAudioWritten, seqVideoWritten bool
if pass == 1 {
metaData["keyframes"] = map[string]any{
"filepositions": filepositions,
"times": times,
}
amf.Marshals("onMetaData", metaData)
offsetDelta := amf.Len() + 15
offset := offsetDelta + len(flvHead)
contentLength += uint64(offset)
metaData["duration"] = timeRange.Seconds()
metaData["filesize"] = contentLength
for i := range filepositions {
filepositions[i] += uint64(offset)
}
metaData["keyframes"] = map[string]any{
"filepositions": filepositions,
"times": times,
}
amf.Reset()
amf.Marshals("onMetaData", metaData)
plugin.Info("start download", "metaData", metaData)
w.Header().Set("Content-Length", strconv.FormatInt(int64(contentLength), 10))
w.WriteHeader(http.StatusOK)
}
if offsetTime == 0 {
init = true
} else {
offsetTimestamp = -uint32(offsetTime.Milliseconds())
}
for i, info := range fileList {
if r.Context().Err() != nil {
return
}
filePath := filepath.Join(dir, info.Name())
plugin.Debug("read", "file", filePath)
file, err := os.Open(filePath)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
reader := bufio.NewReader(file)
if i == 0 {
_, err = io.ReadFull(reader, flvHead)
if pass == 1 {
// write the FLV header for the first file
_, err = writer.Write(flvHead)
tagHead[0] = flv.FLV_TAG_TYPE_SCRIPT
l := amf.Len()
tagHead[1] = byte(l >> 16)
tagHead[2] = byte(l >> 8)
tagHead[3] = byte(l)
flv.PutFlvTimestamp(tagHead, 0)
writer.Write(tagHead)
writer.Write(amf.Buffer)
l += 11
binary.BigEndian.PutUint32(tagHead[:4], uint32(l))
writer.Write(tagHead[:4])
}
} else {
// skip the header of subsequent files
_, err = reader.Discard(13)
if !init {
offsetTime = 0
offsetTimestamp = 0
}
}
for err == nil {
_, err = io.ReadFull(reader, tagHead)
if err != nil {
break
}
tmp := tagHead
t := tmp.ReadByte()
dataLen := tmp.ReadUint24()
lastTimestamp = tmp.ReadUint24() | uint32(tmp.ReadByte())<<24
//fmt.Println(lastTimestamp, tagHead)
if init {
if t == flv.FLV_TAG_TYPE_SCRIPT {
if pass == 0 {
initMetaData(reader, dataLen)
} else {
_, err = reader.Discard(int(dataLen) + 4)
}
} else {
lastTimestamp += offsetTimestamp
if lastTimestamp >= uint32(timeRange.Milliseconds()) {
break
}
if pass == 0 {
data := make([]byte, dataLen+4)
_, err = io.ReadFull(reader, data)
frameType := (data[0] >> 4) & 0b0111
idr := frameType == 1 || frameType == 4
if idr {
filepositions = append(filepositions, contentLength)
times = append(times, float64(lastTimestamp)/1000)
}
contentLength += uint64(11 + dataLen + 4)
} else {
//fmt.Println("write", lastTimestamp)
flv.PutFlvTimestamp(tagHead, lastTimestamp)
_, err = writer.Write(tagHead)
_, err = io.CopyN(writer, reader, int64(dataLen+4))
}
}
continue
}
switch t {
case flv.FLV_TAG_TYPE_SCRIPT:
if pass == 0 {
initMetaData(reader, dataLen)
} else {
_, err = reader.Discard(int(dataLen) + 4)
}
case flv.FLV_TAG_TYPE_AUDIO:
if !seqAudioWritten {
if pass == 0 {
contentLength += uint64(11 + dataLen + 4)
_, err = reader.Discard(int(dataLen) + 4)
} else {
flv.PutFlvTimestamp(tagHead, 0)
_, err = writer.Write(tagHead)
_, err = io.CopyN(writer, reader, int64(dataLen+4))
}
seqAudioWritten = true
} else {
_, err = reader.Discard(int(dataLen) + 4)
}
case flv.FLV_TAG_TYPE_VIDEO:
if !seqVideoWritten {
if pass == 0 {
contentLength += uint64(11 + dataLen + 4)
_, err = reader.Discard(int(dataLen) + 4)
} else {
flv.PutFlvTimestamp(tagHead, 0)
_, err = writer.Write(tagHead)
_, err = io.CopyN(writer, reader, int64(dataLen+4))
}
seqVideoWritten = true
} else {
if lastTimestamp >= uint32(offsetTime.Milliseconds()) {
data := make([]byte, dataLen+4)
_, err = io.ReadFull(reader, data)
frameType := (data[0] >> 4) & 0b0111
idr := frameType == 1 || frameType == 4
if idr {
init = true
plugin.Debug("init", "lastTimestamp", lastTimestamp)
if pass == 0 {
filepositions = append(filepositions, contentLength)
times = append(times, float64(lastTimestamp)/1000)
contentLength += uint64(11 + dataLen + 4)
} else {
flv.PutFlvTimestamp(tagHead, 0)
_, err = writer.Write(tagHead)
_, err = writer.Write(data)
}
}
} else {
_, err = reader.Discard(int(dataLen) + 4)
}
}
}
}
offsetTimestamp = lastTimestamp
err = file.Close()
}
}
plugin.Info("end download")
} else {
// Build the file info list
fileInfoList, found := plugin.buildFileInfoList(recordStreams, params.startTime, params.endTime)
if !found || len(fileInfoList) == 0 {
plugin.Warn("No records found", "stream", params.streamPath, "start", params.startTime, "end", params.endTime)
http.NotFound(w, r)
return
}
// Choose processing based on the record type
if plugin.hasOnlyMp4Records(fileInfoList) {
// Filter MP4 files and convert them to FLV
mp4FileList := plugin.filterMp4Files(fileInfoList)
if len(mp4FileList) == 0 {
plugin.Warn("No valid MP4 files after filtering", "stream", params.streamPath)
http.NotFound(w, r)
return
}
plugin.processMp4ToFlv(w, r, mp4FileList, params)
} else {
// Filter FLV files and process them
flvFileList := plugin.filterFlvFiles(fileInfoList)
if len(flvFileList) == 0 {
plugin.Warn("No valid FLV files after filtering", "stream", params.streamPath)
http.NotFound(w, r)
return
}
plugin.processFlvFiles(w, r, flvFileList, params)
}
}

plugin/flv/download.go (new file, 640 lines)
View File

@@ -0,0 +1,640 @@
package plugin_flv
import (
"bufio"
"encoding/binary"
"fmt"
"io"
"net/http"
"os"
"strconv"
"strings"
"time"
m7s "m7s.live/v5"
codec "m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/util"
flv "m7s.live/v5/plugin/flv/pkg"
mp4 "m7s.live/v5/plugin/mp4/pkg"
"m7s.live/v5/plugin/mp4/pkg/box"
rtmp "m7s.live/v5/plugin/rtmp/pkg"
)
// requestParams holds the parsed request parameters
type requestParams struct {
streamPath string
startTime time.Time
endTime time.Time
timeRange time.Duration
}
// fileInfo holds information about a single record file
type fileInfo struct {
filePath string
startTime time.Time
endTime time.Time
startOffsetTime time.Duration
recordType string // "flv" or "mp4"
}
// parseRequestParams parses the request parameters
func (plugin *FLVPlugin) parseRequestParams(r *http.Request) (*requestParams, error) {
// Extract the stream path from the URL path by removing the "/download/" prefix and the ".flv" suffix
streamPath := strings.TrimSuffix(strings.TrimPrefix(r.URL.Path, "/download/"), ".flv")
// Parse the time range (start and end) from the URL query parameters
startTime, endTime, err := util.TimeRangeQueryParse(r.URL.Query())
if err != nil {
return nil, err
}
return &requestParams{
streamPath: streamPath,
startTime: startTime,
endTime: endTime,
timeRange: endTime.Sub(startTime),
}, nil
}
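A standalone illustration of the path handling above: the "/download/" prefix and ".flv" suffix are stripped to recover the stream path. The example URL is hypothetical, and the query-parameter names accepted by util.TimeRangeQueryParse are not shown here.
package main
import (
	"fmt"
	"strings"
)
func main() {
	urlPath := "/download/live/camera1.flv" // hypothetical request path
	streamPath := strings.TrimSuffix(strings.TrimPrefix(urlPath, "/download/"), ".flv")
	fmt.Println(streamPath) // live/camera1
}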
// queryRecordStreams queries recording records from the database
func (plugin *FLVPlugin) queryRecordStreams(params *requestParams) ([]m7s.RecordStream, error) {
// Check that the database is available
if plugin.DB == nil {
return nil, fmt.Errorf("database not available")
}
var recordStreams []m7s.RecordStream
// Query FLV records first
query := plugin.DB.Model(&m7s.RecordStream{}).Where("stream_path = ? AND type = ?", params.streamPath, "flv")
// Add the time-range condition
if !params.startTime.IsZero() && !params.endTime.IsZero() {
query = query.Where("(start_time <= ? AND end_time >= ?) OR (start_time >= ? AND start_time <= ?)",
params.endTime, params.startTime, params.startTime, params.endTime)
}
err := query.Order("start_time ASC").Find(&recordStreams).Error
if err != nil {
return nil, err
}
// If no FLV records were found, fall back to MP4 records
if len(recordStreams) == 0 {
query = plugin.DB.Model(&m7s.RecordStream{}).Where("stream_path = ? AND type IN (?)", params.streamPath, []string{"mp4", "fmp4"})
if !params.startTime.IsZero() && !params.endTime.IsZero() {
query = query.Where("(start_time <= ? AND end_time >= ?) OR (start_time >= ? AND start_time <= ?)",
params.endTime, params.startTime, params.startTime, params.endTime)
}
err = query.Order("start_time ASC").Find(&recordStreams).Error
if err != nil {
return nil, err
}
}
return recordStreams, nil
}
// buildFileInfoList builds the file info list
func (plugin *FLVPlugin) buildFileInfoList(recordStreams []m7s.RecordStream, startTime, endTime time.Time) ([]*fileInfo, bool) {
var fileInfoList []*fileInfo
var found bool
for _, record := range recordStreams {
// Check that the file exists
if !util.Exist(record.FilePath) {
plugin.Warn("Record file not found", "filePath", record.FilePath)
continue
}
var startOffsetTime time.Duration
recordStartTime := record.StartTime
recordEndTime := record.EndTime
// Compute the offset time within this file
if startTime.After(recordStartTime) {
startOffsetTime = startTime.Sub(recordStartTime)
}
// Skip records outside the requested time range
if recordEndTime.Before(startTime) || recordStartTime.After(endTime) {
continue
}
fileInfoList = append(fileInfoList, &fileInfo{
filePath: record.FilePath,
startTime: recordStartTime,
endTime: recordEndTime,
startOffsetTime: startOffsetTime,
recordType: record.Type,
})
found = true
}
return fileInfoList, found
}
// hasOnlyMp4Records reports whether only MP4 records are present
func (plugin *FLVPlugin) hasOnlyMp4Records(fileInfoList []*fileInfo) bool {
if len(fileInfoList) == 0 {
return false
}
for _, info := range fileInfoList {
if info.recordType == "flv" {
return false
}
}
return true
}
// filterFlvFiles keeps only the FLV files
func (plugin *FLVPlugin) filterFlvFiles(fileInfoList []*fileInfo) []*fileInfo {
var filteredList []*fileInfo
for _, info := range fileInfoList {
if info.recordType == "flv" {
filteredList = append(filteredList, info)
}
}
plugin.Debug("FLV files filtered", "original", len(fileInfoList), "filtered", len(filteredList))
return filteredList
}
// filterMp4Files keeps only the MP4 files
func (plugin *FLVPlugin) filterMp4Files(fileInfoList []*fileInfo) []*fileInfo {
var filteredList []*fileInfo
for _, info := range fileInfoList {
if info.recordType == "mp4" || info.recordType == "fmp4" {
filteredList = append(filteredList, info)
}
}
plugin.Debug("MP4 files filtered", "original", len(fileInfoList), "filtered", len(filteredList))
return filteredList
}
// processMp4ToFlv converts MP4 records to FLV output
func (plugin *FLVPlugin) processMp4ToFlv(w http.ResponseWriter, r *http.Request, fileInfoList []*fileInfo, params *requestParams) {
plugin.Info("Converting MP4 records to FLV", "count", len(fileInfoList))
// Set the HTTP response headers
w.Header().Set("Content-Type", "video/x-flv")
w.Header().Set("Content-Disposition", "attachment")
// Build the MP4 stream list
var mp4Streams []m7s.RecordStream
for _, info := range fileInfoList {
mp4Streams = append(mp4Streams, m7s.RecordStream{
FilePath: info.filePath,
StartTime: info.startTime,
EndTime: info.endTime,
Type: info.recordType,
})
}
// Create a DemuxerRange to demux the MP4 files
demuxer := &mp4.DemuxerRange{
StartTime: params.startTime,
EndTime: params.endTime,
Streams: mp4Streams,
}
// Create the FLV writer state
flvWriter := &flvMp4Writer{
FlvWriter: flv.NewFlvWriter(w),
plugin: plugin,
hasWritten: false,
}
// Set the callbacks
demuxer.OnVideoExtraData = flvWriter.onVideoExtraData
demuxer.OnAudioExtraData = flvWriter.onAudioExtraData
demuxer.OnVideoSample = flvWriter.onVideoSample
demuxer.OnAudioSample = flvWriter.onAudioSample
// Run the demuxing and conversion
err := demuxer.Demux(r.Context())
if err != nil {
plugin.Error("MP4 to FLV conversion failed", "err", err)
if !flvWriter.hasWritten {
http.Error(w, "Conversion failed", http.StatusInternalServerError)
}
return
}
plugin.Info("MP4 to FLV conversion completed")
}
type ExtraDataInfo struct {
CodecType box.MP4_CODEC_TYPE
Data []byte
}
// flvMp4Writer handles writing the MP4-to-FLV conversion
type flvMp4Writer struct {
*flv.FlvWriter
plugin *FLVPlugin
audioExtra, videoExtra *ExtraDataInfo
hasWritten bool // whether the FLV header has been written
ts int64 // current timestamp
tsOffset int64 // timestamp offset used for seamless multi-file playback
}
// writeFlvHeader writes the FLV file header
func (w *flvMp4Writer) writeFlvHeader() error {
if w.hasWritten {
return nil
}
// Use FlvWriter's WriteHeader method
err := w.FlvWriter.WriteHeader(w.audioExtra != nil, w.videoExtra != nil) // audio/video presence flags
if err != nil {
return err
}
w.hasWritten = true
if w.videoExtra != nil {
w.onVideoExtraData(w.videoExtra.CodecType, w.videoExtra.Data)
}
if w.audioExtra != nil {
w.onAudioExtraData(w.audioExtra.CodecType, w.audioExtra.Data)
}
return nil
}
// onVideoExtraData handles the video sequence header
func (w *flvMp4Writer) onVideoExtraData(codecType box.MP4_CODEC_TYPE, data []byte) error {
if !w.hasWritten {
w.videoExtra = &ExtraDataInfo{
CodecType: codecType,
Data: data,
}
return nil
}
switch codecType {
case box.MP4_CODEC_H264:
return w.WriteTag(flv.FLV_TAG_TYPE_VIDEO, uint32(w.ts), uint32(len(data)+5), []byte{(1 << 4) | 7, 0, 0, 0, 0}, data)
case box.MP4_CODEC_H265:
return w.WriteTag(flv.FLV_TAG_TYPE_VIDEO, uint32(w.ts), uint32(len(data)+5), []byte{0b1001_0000 | rtmp.PacketTypeSequenceStart, codec.FourCC_H265[0], codec.FourCC_H265[1], codec.FourCC_H265[2], codec.FourCC_H265[3]}, data)
default:
return fmt.Errorf("unsupported video codec: %v", codecType)
}
}
// onAudioExtraData handles the audio sequence header
func (w *flvMp4Writer) onAudioExtraData(codecType box.MP4_CODEC_TYPE, data []byte) error {
if !w.hasWritten {
w.audioExtra = &ExtraDataInfo{
CodecType: codecType,
Data: data,
}
return nil
}
var flvCodec byte
switch codecType {
case box.MP4_CODEC_AAC:
flvCodec = 10 // AAC
case box.MP4_CODEC_G711A:
flvCodec = 7 // G.711 A-law
case box.MP4_CODEC_G711U:
flvCodec = 8 // G.711 μ-law
default:
return fmt.Errorf("unsupported audio codec: %v", codecType)
}
// Build the FLV audio tag - sequence header
if flvCodec == 10 { // AAC needs a two-byte header
return w.WriteTag(flv.FLV_TAG_TYPE_AUDIO, uint32(w.ts), uint32(len(data)+2), []byte{(flvCodec << 4) | (3 << 2) | (1 << 1) | 1, 0}, data)
} else {
return w.WriteTag(flv.FLV_TAG_TYPE_AUDIO, uint32(w.ts), uint32(len(data)+1), []byte{(flvCodec << 4) | (3 << 2) | (1 << 1) | 1}, data)
}
}
// onVideoSample handles a video sample
func (w *flvMp4Writer) onVideoSample(codecType box.MP4_CODEC_TYPE, sample box.Sample) error {
if !w.hasWritten {
if err := w.writeFlvHeader(); err != nil {
return err
}
}
// Compute the adjusted timestamp
w.ts = int64(sample.Timestamp) + w.tsOffset
timestamp := uint32(w.ts)
switch codecType {
case box.MP4_CODEC_H264:
frameType := byte(2) // P frame
if sample.KeyFrame {
frameType = 1 // I frame
}
return w.WriteTag(flv.FLV_TAG_TYPE_VIDEO, timestamp, uint32(len(sample.Data)+5), []byte{(frameType << 4) | 7, 1, byte(sample.CTS >> 16), byte(sample.CTS >> 8), byte(sample.CTS)}, sample.Data)
case box.MP4_CODEC_H265:
// Enhanced RTMP format for H.265
var b0 byte = 0b1010_0000 // P-frame marker
if sample.KeyFrame {
b0 = 0b1001_0000 // keyframe marker
}
if sample.CTS == 0 {
// When CTS is 0, use PacketTypeCodedFramesX with a 5-byte header
return w.WriteTag(flv.FLV_TAG_TYPE_VIDEO, timestamp, uint32(len(sample.Data)+5), []byte{b0 | rtmp.PacketTypeCodedFramesX, codec.FourCC_H265[0], codec.FourCC_H265[1], codec.FourCC_H265[2], codec.FourCC_H265[3]}, sample.Data)
} else {
// When CTS is non-zero, use PacketTypeCodedFrames with an 8-byte header that carries the CTS
return w.WriteTag(flv.FLV_TAG_TYPE_VIDEO, timestamp, uint32(len(sample.Data)+8), []byte{b0 | rtmp.PacketTypeCodedFrames, codec.FourCC_H265[0], codec.FourCC_H265[1], codec.FourCC_H265[2], codec.FourCC_H265[3], byte(sample.CTS >> 16), byte(sample.CTS >> 8), byte(sample.CTS)}, sample.Data)
}
default:
return fmt.Errorf("unsupported video codec: %v", codecType)
}
}
// onAudioSample handles an audio sample
func (w *flvMp4Writer) onAudioSample(codec box.MP4_CODEC_TYPE, sample box.Sample) error {
if !w.hasWritten {
if err := w.writeFlvHeader(); err != nil {
return err
}
}
// Compute the adjusted timestamp
w.ts = int64(sample.Timestamp) + w.tsOffset
timestamp := uint32(w.ts)
var flvCodec byte
switch codec {
case box.MP4_CODEC_AAC:
flvCodec = 10 // AAC
case box.MP4_CODEC_G711A:
flvCodec = 7 // G.711 A-law
case box.MP4_CODEC_G711U:
flvCodec = 8 // G.711 μ-law
default:
return fmt.Errorf("unsupported audio codec: %v", codec)
}
// Build the FLV audio tag - audio frame
if flvCodec == 10 { // AAC needs a two-byte header
return w.WriteTag(flv.FLV_TAG_TYPE_AUDIO, timestamp, uint32(len(sample.Data)+2), []byte{(flvCodec << 4) | (3 << 2) | (1 << 1) | 1, 1}, sample.Data)
} else {
// Non-AAC codecs (e.g. G.711) only need a one-byte header
return w.WriteTag(flv.FLV_TAG_TYPE_AUDIO, timestamp, uint32(len(sample.Data)+1), []byte{(flvCodec << 4) | (3 << 2) | (1 << 1) | 1}, sample.Data)
}
}
// processFlvFiles handles native FLV files
func (plugin *FLVPlugin) processFlvFiles(w http.ResponseWriter, r *http.Request, fileInfoList []*fileInfo, params *requestParams) {
plugin.Info("Processing FLV files", "count", len(fileInfoList))
// Set the HTTP response headers
w.Header().Set("Content-Type", "video/x-flv")
w.Header().Set("Content-Disposition", "attachment")
var writer io.Writer = w
flvHead := make([]byte, 9+4)
tagHead := make(util.Buffer, 11)
var contentLength uint64
var startOffsetTime time.Duration
// Compute the offset time of the first file
if len(fileInfoList) > 0 {
startOffsetTime = fileInfoList[0].startOffsetTime
}
var amf *rtmp.AMF
var metaData rtmp.EcmaArray
initMetaData := func(reader io.Reader, dataLen uint32) {
data := make([]byte, dataLen+4)
_, err := io.ReadFull(reader, data)
if err != nil {
return
}
amf = &rtmp.AMF{
Buffer: util.Buffer(data[1+2+len("onMetaData") : len(data)-4]),
}
var obj any
obj, err = amf.Unmarshal()
if err == nil {
metaData = obj.(rtmp.EcmaArray)
}
}
var filepositions []uint64
var times []float64
// Two passes: the first computes sizes, the second writes the data
for pass := 0; pass < 2; pass++ {
offsetTime := startOffsetTime
var offsetTimestamp, lastTimestamp uint32
var init, seqAudioWritten, seqVideoWritten bool
if pass == 1 {
// On the second pass, prepare the metadata before writing
metaData["keyframes"] = map[string]any{
"filepositions": filepositions,
"times": times,
}
amf.Marshals("onMetaData", metaData)
offsetDelta := amf.Len() + 15
offset := offsetDelta + len(flvHead)
contentLength += uint64(offset)
metaData["duration"] = params.timeRange.Seconds()
metaData["filesize"] = contentLength
for i := range filepositions {
filepositions[i] += uint64(offset)
}
metaData["keyframes"] = map[string]any{
"filepositions": filepositions,
"times": times,
}
amf.Reset()
amf.Marshals("onMetaData", metaData)
plugin.Info("start download", "metaData", metaData)
w.Header().Set("Content-Length", strconv.FormatInt(int64(contentLength), 10))
w.WriteHeader(http.StatusOK)
}
if offsetTime == 0 {
init = true
} else {
offsetTimestamp = -uint32(offsetTime.Milliseconds())
}
for i, info := range fileInfoList {
if r.Context().Err() != nil {
return
}
plugin.Debug("Processing file", "path", info.filePath)
file, err := os.Open(info.filePath)
if err != nil {
plugin.Error("Failed to open file", "path", info.filePath, "err", err)
if pass == 1 {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
return
}
reader := bufio.NewReader(file)
if i == 0 {
_, err = io.ReadFull(reader, flvHead)
if err != nil {
file.Close()
if pass == 1 {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
return
}
if pass == 1 {
// Write the header for the first file only
_, err = writer.Write(flvHead)
if err != nil {
file.Close()
return
}
tagHead[0] = flv.FLV_TAG_TYPE_SCRIPT
l := amf.Len()
tagHead[1] = byte(l >> 16)
tagHead[2] = byte(l >> 8)
tagHead[3] = byte(l)
flv.PutFlvTimestamp(tagHead, 0)
writer.Write(tagHead)
writer.Write(amf.Buffer)
l += 11
binary.BigEndian.PutUint32(tagHead[:4], uint32(l))
writer.Write(tagHead[:4])
}
} else {
// Skip the header of subsequent files
_, err = reader.Discard(13)
if err != nil {
file.Close()
continue
}
if !init {
offsetTime = 0
offsetTimestamp = 0
}
}
// Process the FLV tags
for err == nil {
_, err = io.ReadFull(reader, tagHead)
if err != nil {
break
}
tmp := tagHead
t := tmp.ReadByte()
dataLen := tmp.ReadUint24()
lastTimestamp = tmp.ReadUint24() | uint32(tmp.ReadByte())<<24
if init {
if t == flv.FLV_TAG_TYPE_SCRIPT {
if pass == 0 {
initMetaData(reader, dataLen)
} else {
_, err = reader.Discard(int(dataLen) + 4)
}
} else {
lastTimestamp += offsetTimestamp
if lastTimestamp >= uint32(params.timeRange.Milliseconds()) {
break
}
if pass == 0 {
data := make([]byte, dataLen+4)
_, err = io.ReadFull(reader, data)
if err == nil {
frameType := (data[0] >> 4) & 0b0111
idr := frameType == 1 || frameType == 4
if idr {
filepositions = append(filepositions, contentLength)
times = append(times, float64(lastTimestamp)/1000)
}
contentLength += uint64(11 + dataLen + 4)
}
} else {
flv.PutFlvTimestamp(tagHead, lastTimestamp)
_, err = writer.Write(tagHead)
if err == nil {
_, err = io.CopyN(writer, reader, int64(dataLen+4))
}
}
}
continue
}
switch t {
case flv.FLV_TAG_TYPE_SCRIPT:
if pass == 0 {
initMetaData(reader, dataLen)
} else {
_, err = reader.Discard(int(dataLen) + 4)
}
case flv.FLV_TAG_TYPE_AUDIO:
if !seqAudioWritten {
if pass == 0 {
contentLength += uint64(11 + dataLen + 4)
_, err = reader.Discard(int(dataLen) + 4)
} else {
flv.PutFlvTimestamp(tagHead, 0)
_, err = writer.Write(tagHead)
if err == nil {
_, err = io.CopyN(writer, reader, int64(dataLen+4))
}
}
seqAudioWritten = true
} else {
_, err = reader.Discard(int(dataLen) + 4)
}
case flv.FLV_TAG_TYPE_VIDEO:
if !seqVideoWritten {
if pass == 0 {
contentLength += uint64(11 + dataLen + 4)
_, err = reader.Discard(int(dataLen) + 4)
} else {
flv.PutFlvTimestamp(tagHead, 0)
_, err = writer.Write(tagHead)
if err == nil {
_, err = io.CopyN(writer, reader, int64(dataLen+4))
}
}
seqVideoWritten = true
} else {
if lastTimestamp >= uint32(offsetTime.Milliseconds()) {
data := make([]byte, dataLen+4)
_, err = io.ReadFull(reader, data)
if err == nil {
frameType := (data[0] >> 4) & 0b0111
idr := frameType == 1 || frameType == 4
if idr {
init = true
plugin.Debug("init", "lastTimestamp", lastTimestamp)
if pass == 0 {
filepositions = append(filepositions, contentLength)
times = append(times, float64(lastTimestamp)/1000)
contentLength += uint64(11 + dataLen + 4)
} else {
flv.PutFlvTimestamp(tagHead, 0)
_, err = writer.Write(tagHead)
if err == nil {
_, err = writer.Write(data)
}
}
}
}
} else {
_, err = reader.Discard(int(dataLen) + 4)
}
}
}
}
offsetTimestamp = lastTimestamp
file.Close()
}
}
plugin.Info("FLV download completed")
}
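A note on the two-pass scheme above: pass 0 measures contentLength and collects keyframe offsets before the onMetaData script tag exists, so pass 1 has to shift every recorded file position by the FLV header plus the rebuilt script tag (amf.Len() + 15 bytes of tag header and PreviousTagSize). A small arithmetic sketch with hypothetical numbers:

package main

import "fmt"

func main() {
    const flvHeadLen = 9 + 4            // FLV file header plus the first PreviousTagSize
    metaPayloadLen := 700               // hypothetical AMF payload length after Marshals("onMetaData", ...)
    scriptTagLen := metaPayloadLen + 15 // 11-byte tag header + 4-byte PreviousTagSize

    // Keyframe index gathered during pass 0, measured from the first media tag.
    filepositions := []uint64{0, 120000, 250000}
    times := []float64{0, 4.0, 8.0}

    offset := uint64(flvHeadLen + scriptTagLen)
    for i := range filepositions {
        filepositions[i] += offset
    }
    fmt.Println(filepositions, times) // positions a seeking player can jump to directly
}
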


@@ -2,6 +2,7 @@ package flv
import (
"errors"
"io"
"m7s.live/v5"
"m7s.live/v5/pkg/util"
@@ -15,6 +16,10 @@ type Puller struct {
func (p *Puller) Run() (err error) {
reader := util.NewBufReader(p.ReadCloser)
publisher := p.PullJob.Publisher
if publisher == nil {
io.Copy(io.Discard, p.ReadCloser)
return
}
var hasAudio, hasVideo bool
var absTS uint32
var head util.Memory

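The guard added above returns early when the pull job has no publisher; draining the reader first keeps the remote peer from blocking on a half-read response. A minimal sketch of that pattern, outside the Puller type and with a hypothetical run helper:

package main

import (
    "io"
    "strings"
)

// run sketches the early-return path: with no publisher attached, discard the
// remaining FLV bytes instead of demuxing them, then return cleanly.
func run(rc io.ReadCloser, publisherAttached bool) error {
    defer rc.Close()
    if !publisherAttached {
        _, err := io.Copy(io.Discard, rc)
        return err
    }
    // ... normal FLV demuxing would continue here ...
    return nil
}

func main() {
    _ = run(io.NopCloser(strings.NewReader("FLV...")), false)
}
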

@@ -9,6 +9,7 @@ import (
"time"
m7s "m7s.live/v5"
"m7s.live/v5/pkg"
"m7s.live/v5/pkg/config"
"m7s.live/v5/pkg/task"
"m7s.live/v5/pkg/util"
@@ -47,6 +48,9 @@ func (p *RecordReader) Dispose() {
func (p *RecordReader) Run() (err error) {
pullJob := &p.PullJob
publisher := pullJob.Publisher
if publisher == nil {
return pkg.ErrDisabled
}
allocator := util.NewScalableMemoryAllocator(1 << 10)
var tagHeader [11]byte
var ts int64
@@ -60,6 +64,7 @@ func (p *RecordReader) Run() (err error) {
publisher.OnGetPosition = func() time.Time {
return realTime
}
for loop := 0; loop < p.Loop; loop++ {
nextStream:
for i, stream := range p.Streams {
@@ -85,15 +90,15 @@ func (p *RecordReader) Run() (err error) {
err = head.NewReader().ReadByteTo(&flvHead[0], &flvHead[1], &flvHead[2], &version, &flag)
hasAudio := (flag & 0x04) != 0
hasVideo := (flag & 0x01) != 0
if err != nil {
return
}
if !hasAudio {
publisher.NoAudio()
}
if !hasVideo {
publisher.NoVideo()
}
if err != nil {
return
}
if flvHead != [3]byte{'F', 'L', 'V'} {
return errors.New("not flv file")
}
@@ -194,7 +199,7 @@ func (p *RecordReader) Run() (err error) {
}
}
} else {
publisher.Info("script", name, obj)
p.Info("script", name, obj)
}
default:
err = fmt.Errorf("unknown tag type: %d", t)

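The reordering above makes sure the read error is checked before the audio/video flags are acted on. For reference, those flags live in byte 4 of the 9-byte FLV file header (bit 2 = audio present, bit 0 = video present); a small self-contained sketch of that check:

package main

import (
    "errors"
    "fmt"
)

// parseFlvHeader inspects the 9-byte FLV file header: the "FLV" signature,
// a version byte, the flags byte, and a 4-byte header size.
func parseFlvHeader(head []byte) (hasAudio, hasVideo bool, err error) {
    if len(head) < 9 || head[0] != 'F' || head[1] != 'L' || head[2] != 'V' {
        return false, false, errors.New("not flv file")
    }
    flag := head[4]
    return flag&0x04 != 0, flag&0x01 != 0, nil
}

func main() {
    hasAudio, hasVideo, err := parseFlvHeader([]byte{'F', 'L', 'V', 1, 0x05, 0, 0, 0, 9})
    fmt.Println(hasAudio, hasVideo, err) // true true <nil>
}
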

@@ -3,6 +3,7 @@ package plugin_gb28181pro
import (
"context"
"fmt"
"gorm.io/gorm"
"net/http"
"net/url"
"os"
@@ -86,7 +87,8 @@ func (gb *GB28181Plugin) List(ctx context.Context, req *pb.GetDevicesRequest) (*
for _, c := range channels {
pbChannels = append(pbChannels, &pb.Channel{
DeviceId: c.ChannelID,
ParentId: c.ParentID,
ParentId: c.DeviceID,
ChannelId: c.ChannelID,
Name: c.Name,
Manufacturer: c.Manufacturer,
Model: c.Model,
@@ -432,10 +434,10 @@ func (gb *GB28181Plugin) SyncDevice(ctx context.Context, req *pb.SyncDeviceReque
if !ok && gb.DB != nil {
// If the device is not in memory and a database is available, query the database
var device Device
if err := gb.DB.Where("id = ?", req.DeviceId).First(&device).Error; err == nil {
if err := gb.DB.Where("device_id = ?", req.DeviceId).First(&device).Error; err == nil {
d = &device
// Restore the device's required runtime fields
d.Logger = gb.With("id", req.DeviceId)
d.Logger = gb.Logger.With("deviceid", req.DeviceId)
d.channels.L = new(sync.RWMutex)
d.plugin = gb
@@ -611,35 +613,47 @@ func (gb *GB28181Plugin) UpdateDevice(ctx context.Context, req *pb.Device) (*pb.
// If catalog subscription is enabled, create and start the catalog subscription task
if d.Online {
if d.CatalogSubscribeTask != nil {
if d.SubscribeCatalog > 0 {
if d.SubscribeCatalog > 0 {
if d.CatalogSubscribeTask != nil {
d.CatalogSubscribeTask.Ticker.Reset(time.Second * time.Duration(d.SubscribeCatalog))
d.CatalogSubscribeTask.Tick(nil)
} else {
catalogSubTask := NewCatalogSubscribeTask(d)
d.AddTask(catalogSubTask)
d.CatalogSubscribeTask.Tick(nil)
}
d.CatalogSubscribeTask.Tick(nil)
} else {
catalogSubTask := NewCatalogSubscribeTask(d)
d.AddTask(catalogSubTask)
d.CatalogSubscribeTask.Tick(nil)
if d.CatalogSubscribeTask != nil {
d.CatalogSubscribeTask.Stop(fmt.Errorf("catalog subscription disabled"))
}
}
if d.PositionSubscribeTask != nil {
if d.SubscribePosition > 0 {
if d.SubscribePosition > 0 {
if d.PositionSubscribeTask != nil {
d.PositionSubscribeTask.Ticker.Reset(time.Second * time.Duration(d.SubscribePosition))
d.PositionSubscribeTask.Tick(nil)
} else {
positionSubTask := NewPositionSubscribeTask(d)
d.AddTask(positionSubTask)
d.PositionSubscribeTask.Tick(nil)
}
d.PositionSubscribeTask.Tick(nil)
} else {
positionSubTask := NewPositionSubscribeTask(d)
d.AddTask(positionSubTask)
d.PositionSubscribeTask.Tick(nil)
if d.PositionSubscribeTask != nil {
d.PositionSubscribeTask.Stop(fmt.Errorf("position subscription disabled"))
}
}
if d.AlarmSubscribeTask != nil {
if d.SubscribeAlarm > 0 {
if d.SubscribeAlarm > 0 {
if d.AlarmSubscribeTask != nil {
d.AlarmSubscribeTask.Ticker.Reset(time.Second * time.Duration(d.SubscribeAlarm))
d.AlarmSubscribeTask.Tick(nil)
} else {
alarmSubTask := NewAlarmSubscribeTask(d)
d.AddTask(alarmSubTask)
d.AlarmSubscribeTask.Tick(nil)
}
d.AlarmSubscribeTask.Tick(nil)
} else {
alarmSubTask := NewAlarmSubscribeTask(d)
d.AddTask(alarmSubTask)
d.AlarmSubscribeTask.Tick(nil)
if d.AlarmSubscribeTask != nil {
d.AlarmSubscribeTask.Stop(fmt.Errorf("alarm subscription disabled"))
}
}
}
} else {
@@ -1142,7 +1156,7 @@ func (gb *GB28181Plugin) QueryRecord(ctx context.Context, req *pb.QueryRecordReq
return resp, nil
}
channel, ok := device.channels.Get(req.ChannelId)
channel, ok := device.channels.Get(req.DeviceId + "_" + req.ChannelId)
if !ok {
resp.Code = 404
resp.Message = "channel not found"
@@ -1271,32 +1285,36 @@ func (gb *GB28181Plugin) TestSip(ctx context.Context, req *pb.TestSipRequest) (*
// Create a temporary device for testing
device := &Device{
DeviceId: "34020000002000000001",
SipIp: "192.168.1.17",
SipIp: "192.168.1.106",
Port: 5060,
IP: "192.168.1.102",
StreamMode: "TCP-PASSIVE",
}
//From: <sip:41010500002000000001@4101050000>;tag=4183af2ecc934758ad393dfe588f2dfd
// Initialize the device's SIP-related fields
device.fromHDR = sip.FromHeader{
Address: sip.Uri{
User: gb.Serial,
Host: gb.Realm,
User: "41010500002000000001",
Host: "4101050000",
},
Params: sip.NewParams(),
}
device.fromHDR.Params.Add("tag", sip.GenerateTagN(16))
device.fromHDR.Params.Add("tag", "4183af2ecc934758ad393dfe588f2dfd")
//Contact: <sip:41010500002000000001@192.168.1.106:5060>
device.contactHDR = sip.ContactHeader{
Address: sip.Uri{
User: gb.Serial,
Host: device.SipIp,
Port: device.Port,
User: "41010500002000000001",
Host: "192.168.1.106",
Port: 5060,
},
}
//Request-Line: INVITE sip:34020000001320000006@192.168.1.102:5060 SIP/2.0
// Method: INVITE
// Request-URI: sip:34020000001320000006@192.168.1.102:5060
// [Resent Packet: False]
// Initialize the SIP client
device.client, _ = sipgo.NewClient(gb.ua, sipgo.WithClientLogger(zerolog.New(os.Stdout)), sipgo.WithClientHostname(device.SipIp))
device.client, _ = sipgo.NewClient(gb.ua, sipgo.WithClientLogger(zerolog.New(os.Stdout)), sipgo.WithClientHostname("192.168.1.106"))
if device.client == nil {
resp.Code = 500
resp.Message = "failed to create sip client"
@@ -1321,11 +1339,11 @@ func (gb *GB28181Plugin) TestSip(ctx context.Context, req *pb.TestSipRequest) (*
// Build the SDP message body
sdpInfo := []string{
"v=0",
fmt.Sprintf("o=%s 0 0 IN IP4 %s", "34020000001320000004", device.SipIp),
fmt.Sprintf("o=%s 0 0 IN IP4 %s", "34020000001320000102", "192.168.1.106"),
"s=Play",
"c=IN IP4 " + device.SipIp,
"c=IN IP4 192.168.1.106",
"t=0 0",
"m=video 43970 TCP/RTP/AVP 96 97 98 99",
"m=video 40940 TCP/RTP/AVP 96 97 98 99",
"a=recvonly",
"a=rtpmap:96 PS/90000",
"a=rtpmap:98 H264/90000",
@@ -1333,36 +1351,40 @@ func (gb *GB28181Plugin) TestSip(ctx context.Context, req *pb.TestSipRequest) (*
"a=rtpmap:99 H265/90000",
"a=setup:passive",
"a=connection:new",
"y=0200005507",
"y=0105006213",
}
// Set the required headers
contentTypeHeader := sip.ContentTypeHeader("APPLICATION/SDP")
subjectHeader := sip.NewHeader("Subject", "34020000001320000006:0200005507,34020000002000000001:0")
//Subject: 34020000001320000006:0105006213,41010500002000000001:0
subjectHeader := sip.NewHeader("Subject", "34020000001320000006:0105006213,41010500002000000001:0")
//To: <sip:34020000001320000006@192.168.1.102:5060>
toHeader := sip.ToHeader{
Address: sip.Uri{
User: "34020000001320000006",
Host: device.IP,
Port: device.Port,
Host: "192.168.1.102",
Port: 5060,
},
}
userAgentHeader := sip.NewHeader("User-Agent", "WVP-Pro v2.7.3.20241218")
//Via: SIP/2.0/UDP 192.168.1.106:5060;branch=z9hG4bK9279674404;rport
viaHeader := sip.ViaHeader{
ProtocolName: "SIP",
ProtocolVersion: "2.0",
Transport: "UDP",
Host: device.SipIp,
Port: device.Port,
Host: "192.168.1.106",
Port: 5060,
Params: sip.HeaderParams(sip.NewParams()),
}
viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
viaHeader.Params.Add("branch", "z9hG4bK9279674404").Add("rport", "")
csqHeader := sip.CSeqHeader{
SeqNo: 13,
SeqNo: 3,
MethodName: "INVITE",
}
maxforward := sip.MaxForwardsHeader(70)
contentLengthHeader := sip.ContentLengthHeader(286)
//contentLengthHeader := sip.ContentLengthHeader(288)
request.AppendHeader(&contentTypeHeader)
request.AppendHeader(subjectHeader)
request.AppendHeader(&toHeader)
@@ -1374,7 +1396,7 @@ func (gb *GB28181Plugin) TestSip(ctx context.Context, req *pb.TestSipRequest) (*
// Create the session and send the request
dialogClientCache := sipgo.NewDialogClientCache(device.client, device.contactHDR)
session, err := dialogClientCache.Invite(gb, recipient, request.Body(), &csqHeader, &device.fromHDR, &toHeader, &viaHeader, &maxforward, userAgentHeader, &device.contactHDR, subjectHeader, &contentTypeHeader, &contentLengthHeader)
session, err := dialogClientCache.Invite(gb, recipient, request.Body(), &csqHeader, &device.fromHDR, &toHeader, &maxforward, userAgentHeader, &device.contactHDR, subjectHeader, &contentTypeHeader)
if err != nil {
resp.Code = 500
resp.Message = fmt.Sprintf("发送INVITE请求失败: %v", err)
@@ -1532,6 +1554,13 @@ func (gb *GB28181Plugin) AddPlatformChannel(ctx context.Context, req *pb.AddPlat
resp.Message = fmt.Sprintf("提交事务失败: %v", err)
return resp, nil
}
if platform, ok := gb.platforms.Get(req.PlatformId); ok {
for _, channelId := range req.ChannelIds {
if channel, ok := gb.channels.Get(channelId); ok {
platform.channels.Set(channel)
}
}
}
resp.Code = 0
resp.Message = "success"
@@ -1592,7 +1621,7 @@ func (gb *GB28181Plugin) Recording(ctx context.Context, req *pb.RecordingRequest
}
// 从device.channels中查找实际通道
_, ok = actualDevice.channels.Get(result.ChannelID)
_, ok = actualDevice.channels.Get(result.DeviceID + "_" + result.ChannelID)
if !ok {
resp.Code = 404
resp.Message = "实际通道未找到"
@@ -1625,7 +1654,7 @@ func (gb *GB28181Plugin) Recording(ctx context.Context, req *pb.RecordingRequest
}
// 检查通道是否存在
_, ok = device.channels.Get(req.ChannelId)
_, ok = device.channels.Get(req.DeviceId + "_" + req.ChannelId)
if !ok {
resp.Code = 404
resp.Message = "通道未找到"
@@ -1711,7 +1740,7 @@ func (gb *GB28181Plugin) GetSnap(ctx context.Context, req *pb.GetSnapRequest) (*
}
// 从device.channels中查找实际通道
_, ok = actualDevice.channels.Get(result.ChannelID)
_, ok = actualDevice.channels.Get(result.DeviceID + "_" + result.ChannelID)
if !ok {
resp.Code = 404
resp.Message = "实际通道未找到"
@@ -1755,7 +1784,7 @@ func (gb *GB28181Plugin) GetSnap(ctx context.Context, req *pb.GetSnapRequest) (*
}
// 检查通道是否存在
_, ok = device.channels.Get(req.ChannelId)
_, ok = device.channels.Get(req.DeviceId + "_" + req.ChannelId)
if !ok {
resp.Code = 404
resp.Message = "通道未找到"
@@ -2460,12 +2489,9 @@ func (gb *GB28181Plugin) PlaybackPause(ctx context.Context, req *pb.PlaybackPaus
resp.Message = fmt.Sprintf("发送暂停请求失败: %v", err)
return resp, nil
}
gb.Server.Streams.Call(func() error {
if s, ok := gb.Server.Streams.Get(req.StreamPath); ok {
s.Pause()
}
return nil
})
if s, ok := gb.Server.Streams.SafeGet(req.StreamPath); ok {
s.Pause()
}
gb.Info("暂停回放",
"streampath", req.StreamPath)
@@ -2514,12 +2540,9 @@ func (gb *GB28181Plugin) PlaybackResume(ctx context.Context, req *pb.PlaybackRes
resp.Message = fmt.Sprintf("发送恢复请求失败: %v", err)
return resp, nil
}
gb.Server.Streams.Call(func() error {
if s, ok := gb.Server.Streams.Get(req.StreamPath); ok {
s.Resume()
}
return nil
})
if s, ok := gb.Server.Streams.SafeGet(req.StreamPath); ok {
s.Resume()
}
gb.Info("恢复回放",
"streampath", req.StreamPath)
@@ -2587,14 +2610,11 @@ func (gb *GB28181Plugin) PlaybackSpeed(ctx context.Context, req *pb.PlaybackSpee
// 发送请求
_, err := dialog.session.TransactionRequest(ctx, request)
gb.Server.Streams.Call(func() error {
if s, ok := gb.Server.Streams.Get(req.StreamPath); ok {
s.Speed = float64(req.Speed)
s.Scale = float64(req.Speed)
s.Info("set stream speed", "speed", req.Speed)
}
return nil
})
if s, ok := gb.Server.Streams.SafeGet(req.StreamPath); ok {
s.Speed = float64(req.Speed)
s.Scale = float64(req.Speed)
s.Info("set stream speed", "speed", req.Speed)
}
if err != nil {
resp.Code = 500
resp.Message = fmt.Sprintf("发送倍速请求失败: %v", err)
@@ -2818,60 +2838,23 @@ func (gb *GB28181Plugin) RemoveDevice(ctx context.Context, req *pb.RemoveDeviceR
return resp, nil
}
// 检查数据库连接
if gb.DB == nil {
resp.Code = 500
resp.Message = "数据库未初始化"
return resp, nil
}
// 开启事务
tx := gb.DB.Begin()
// 先从数据库中查找设备
var dbDevice Device
if err := tx.Where(&Device{DeviceId: req.Id}).First(&dbDevice).Error; err != nil {
tx.Rollback()
resp.Code = 404
resp.Message = fmt.Sprintf("设备不存在: %v", err)
return resp, nil
}
// Look up the device in memory using the DeviceId from the database
if device, ok := gb.devices.Get(dbDevice.DeviceId); ok {
if device, ok := gb.devices.Get(req.Id); ok {
device.DeletedAt = gorm.DeletedAt{Time: time.Now(), Valid: true}
device.channels.Range(func(channel *Channel) bool {
channel.DeletedAt = gorm.DeletedAt{Time: time.Now(), Valid: true}
return true
})
// Stop the device's related tasks
device.Stop(fmt.Errorf("device removed"))
device.WaitStopped()
// device.Stop() triggers Dispose(), which already removes the device from gb.devices
}
// 删除设备关联的所有通道
if err := tx.Where(&gb28181.DeviceChannel{DeviceID: dbDevice.DeviceId}).Delete(&gb28181.DeviceChannel{}).Error; err != nil {
tx.Rollback()
resp.Code = 500
resp.Message = fmt.Sprintf("删除设备通道失败: %v", err)
} else {
resp.Code = 404
resp.Message = "设备未找到"
return resp, nil
}
// 删除设备
if err := tx.Delete(&dbDevice).Error; err != nil {
tx.Rollback()
resp.Code = 500
resp.Message = fmt.Sprintf("删除设备失败: %v", err)
return resp, nil
}
// 提交事务
if err := tx.Commit().Error; err != nil {
resp.Code = 500
resp.Message = fmt.Sprintf("提交事务失败: %v", err)
return resp, nil
}
gb.Info("删除设备成功",
"deviceId", dbDevice.DeviceId,
"deviceName", dbDevice.Name)
resp.Code = 0
resp.Message = "success"
return resp, nil

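Several of the lookups above change from the channel ID alone to deviceId + "_" + channelId, matching the new Channel.GetKey that returns the database ID. A hypothetical helper illustrating why the composite key matters:

package main

import "fmt"

// channelKey is a hypothetical helper showing the composite key convention:
// channels are indexed by "<deviceID>_<channelID>", so identical channel IDs
// reported by different devices no longer overwrite each other.
func channelKey(deviceID, channelID string) string {
    return deviceID + "_" + channelID
}

func main() {
    channels := map[string]string{}
    channels[channelKey("34020000001110000001", "34020000001320000006")] = "camera on device 1"
    channels[channelKey("34020000001110000002", "34020000001320000006")] = "camera on device 2"
    fmt.Println(len(channels)) // 2: no collision despite identical channel IDs
}
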

@@ -51,11 +51,11 @@ type Channel struct {
RecordReqs util.Collection[int, *RecordRequest]
PresetReqs util.Collection[int, *PresetRequest] // 预置位请求集合
*slog.Logger
gb28181.DeviceChannel
*gb28181.DeviceChannel
}
func (c *Channel) GetKey() string {
return c.ChannelID
return c.ID
}
type PullProxy struct {
@@ -75,7 +75,7 @@ func (p *PullProxy) Start() error {
streamPaths := strings.Split(p.GetStreamPath(), "/")
deviceId, channelId := streamPaths[0], streamPaths[1]
if device, ok := p.Plugin.GetHandler().(*GB28181Plugin).devices.Get(deviceId); ok {
if _, ok := device.channels.Get(channelId); ok {
if _, ok := device.channels.Get(deviceId + "_" + channelId); ok {
p.ChangeStatus(m7s.PullProxyStatusOnline)
}
}


@@ -92,18 +92,16 @@ func (d *Device) TableName() string {
func (d *Device) Dispose() {
if d.plugin.DB != nil {
d.plugin.DB.Save(d)
if d.channels.Length > 0 {
d.channels.Range(func(channel *Channel) bool {
d.plugin.DB.Save(channel.DeviceChannel)
//d.plugin.DB.Model(&gb28181.DeviceChannel{}).Where("device_id = ? AND device_db_id = ?", channel.DeviceId, d.ID).Updates(channel.DeviceChannel)
return true
})
} else {
// 如果没有通道,则直接更新通道状态为 OFF
d.plugin.DB.Model(&gb28181.DeviceChannel{}).Where("device_id = ?", d.ID).Update("status", "OFF")
}
d.plugin.DB.Save(d)
}
d.plugin.devices.RemoveByKey(d.DeviceId)
}
func (d *Device) GetKey() string {
@@ -140,7 +138,7 @@ func (r *CatalogRequest) IsComplete(channelsLength int) bool {
}
func (d *Device) onMessage(req *sip.Request, tx sip.ServerTransaction, msg *gb28181.Message) (err error) {
d.Debug("into onMessage,deviceid is ", d.DeviceId)
d.plugin.Debug("into onMessage,deviceid is ", d.DeviceId)
source := req.Source()
hostname, portStr, _ := net.SplitHostPort(source)
port, _ := strconv.Atoi(portStr)
@@ -230,7 +228,7 @@ func (d *Device) onMessage(req *sip.Request, tx sip.ServerTransaction, msg *gb28
d.catalogReqs.RemoveByKey(msg.SN)
}
case "RecordInfo":
if channel, ok := d.channels.Get(msg.DeviceID); ok {
if channel, ok := d.channels.Get(d.DeviceId + "_" + msg.DeviceID); ok {
if req, ok := channel.RecordReqs.Get(msg.SN); ok {
// 添加响应并检查是否完成
if req.AddResponse(*msg) {
@@ -239,7 +237,7 @@ func (d *Device) onMessage(req *sip.Request, tx sip.ServerTransaction, msg *gb28
}
}
case "PresetQuery":
if channel, ok := d.channels.Get(msg.DeviceID); ok {
if channel, ok := d.channels.Get(d.DeviceId + "_" + msg.DeviceID); ok {
if req, ok := channel.PresetReqs.Get(msg.SN); ok {
// 添加预置位响应
req.Response = msg.PresetList.Item
@@ -325,7 +323,10 @@ func (d *Device) onMessage(req *sip.Request, tx sip.ServerTransaction, msg *gb28
}
case "DeviceInfo":
// 主设备信息
d.Name = msg.DeviceName
d.Info("DeviceInfo message", "body", req.Body(), "d.Name", d.Name, "d.DeviceId", d.DeviceId, "msg.DeviceName", msg.DeviceName)
if d.Name == "" && msg.DeviceName != "" {
d.Name = msg.DeviceName
}
d.Manufacturer = msg.Manufacturer
d.Model = msg.Model
d.Firmware = msg.Firmware
@@ -616,15 +617,16 @@ func (d *Device) frontEndCmdString(cmdCode int32, parameter1 int32, parameter2 i
}
func (d *Device) addOrUpdateChannel(c gb28181.DeviceChannel) {
if channel, ok := d.channels.Get(c.ChannelID); ok {
channel.DeviceChannel = c
if channel, ok := d.channels.Get(c.ID); ok {
channel.DeviceChannel = &c
} else {
channel = &Channel{
Device: d,
Logger: d.Logger.With("channel", c.ChannelID),
DeviceChannel: c,
Logger: d.Logger.With("channel", c.ID),
DeviceChannel: &c,
}
d.channels.Set(channel)
d.plugin.channels.Set(channel.DeviceChannel)
}
}


@@ -30,10 +30,25 @@ type Dialog struct {
StreamMode string // 数据流传输模式UDP:udp传输/TCP-ACTIVEtcp主动模式/TCP-PASSIVEtcp被动模式
targetIP string // 目标设备的IP地址
targetPort int // 目标设备的端口
/**
Sub-stream configuration. The default format is:
stream=stream:0;stream=stream:1
GB28181-2022:
stream=streamnumber:0;stream=streamnumber:1
Dahua:
stream=streamprofile:0;stream=streamprofile:1
Mercury, TP-Link:
stream=streamMode:main;stream=streamMode:sub
*/
stream string
}
func (d *Dialog) GetCallID() string {
return d.session.InviteRequest.CallID().Value()
if d.session != nil && d.session.InviteRequest != nil && d.session.InviteRequest.CallID() != nil {
return d.session.InviteRequest.CallID().Value()
} else {
return ""
}
}
func (d *Dialog) GetPullJob() *m7s.PullJob {
@@ -72,7 +87,7 @@ func (d *Dialog) Start() (err error) {
var device *Device
if deviceTmp, ok := d.gb.devices.Get(deviceId); ok {
device = deviceTmp
if channel, ok := deviceTmp.channels.Get(channelId); ok {
if channel, ok := deviceTmp.channels.Get(deviceId + "_" + channelId); ok {
d.Channel = channel
d.StreamMode = device.StreamMode
} else {
@@ -99,14 +114,14 @@ func (d *Dialog) Start() (err error) {
// 构建 SDP 内容
sdpInfo := []string{
"v=0",
fmt.Sprintf("o=%s 0 0 IN IP4 %s", channelId, device.MediaIp),
fmt.Sprintf("o=%s 0 0 IN IP4 %s", channelId, device.SipIp),
fmt.Sprintf("s=%s", util.Conditional(d.IsLive(), "Play", "Playback")), // 根据是否有时间参数决定
}
// In non-live mode, add the u= line, keeping it between s= and c=
//if !d.IsLive() {
sdpInfo = append(sdpInfo, fmt.Sprintf("u=%s:0", channelId))
//}
if !d.IsLive() {
sdpInfo = append(sdpInfo, fmt.Sprintf("u=%s:0", channelId))
}
// Add the c= line
sdpInfo = append(sdpInfo, "c=IN IP4 "+device.MediaIp)
@@ -115,7 +130,7 @@ func (d *Dialog) Start() (err error) {
if !d.IsLive() {
startTime, endTime, err := util.TimeRangeQueryParse(url.Values{"start": []string{d.start}, "end": []string{d.end}})
if err != nil {
d.Stop(errors.New("parse end time error"))
return errors.New("parse end time error")
}
sdpInfo = append(sdpInfo, fmt.Sprintf("t=%d %d", startTime.Unix(), endTime.Unix()))
} else {
@@ -135,6 +150,10 @@ func (d *Dialog) Start() (err error) {
sdpInfo = append(sdpInfo, mediaLine)
sdpInfo = append(sdpInfo, "a=recvonly")
if d.stream != "" {
sdpInfo = append(sdpInfo, "a="+d.stream)
}
sdpInfo = append(sdpInfo, "a=rtpmap:96 PS/90000")
// Add the setup and connection attributes according to the transport mode
switch strings.ToUpper(device.StreamMode) {
@@ -149,14 +168,13 @@ func (d *Dialog) Start() (err error) {
"a=connection:new",
)
case "UDP":
d.Stop(errors.New("do not support udp mode"))
return errors.New("do not support udp mode")
default:
sdpInfo = append(sdpInfo,
"a=setup:passive",
"a=connection:new",
)
}
sdpInfo = append(sdpInfo, "a=rtpmap:96 PS/90000")
// Add the SSRC
sdpInfo = append(sdpInfo, fmt.Sprintf("y=%s", ssrc))
@@ -224,7 +242,7 @@ func (d *Dialog) Start() (err error) {
//}
// Finally, add the Content-Length header
if err != nil {
d.gb.Error("invite error", err)
return errors.New("dialog invite error" + err.Error())
}
return
}
@@ -233,9 +251,8 @@ func (d *Dialog) Run() (err error) {
d.Channel.Info("before WaitAnswer")
err = d.session.WaitAnswer(d.gb, sipgo.AnswerOptions{})
d.Channel.Info("after WaitAnswer")
d.gb.Error(" WaitAnswer error", err)
if err != nil {
return
return errors.New("wait answer error" + err.Error())
}
inviteResponseBody := string(d.session.InviteResponse.Body())
d.Channel.Info("inviteResponse", "body", inviteResponseBody)
@@ -295,13 +312,15 @@ func (d *Dialog) GetKey() uint32 {
func (d *Dialog) Dispose() {
d.gb.tcpPorts <- d.MediaPort
err := d.session.Bye(d)
if err != nil {
d.Error("dialog bye bye err", err)
}
err = d.session.Close()
if err != nil {
d.Error("dialog close session err", err)
if d.session != nil {
err := d.session.Bye(d)
if err != nil {
d.Error("dialog bye bye err", err)
}
err = d.session.Close()
if err != nil {
d.Error("dialog close session err", err)
}
}
d.gb.dialogs.Remove(d)
}

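The stream field documented above ends up as an extra a= attribute in the INVITE SDP, placed between a=recvonly and a=rtpmap. A self-contained sketch of that assembly, using plain strings rather than the plugin's sipgo types; the IPs, ports, and SSRC are hypothetical:

package main

import (
    "fmt"
    "strings"
)

// buildInviteSDP sketches the SDP assembly: the optional vendor-specific
// sub-stream selector (streamnumber / streamprofile / streamMode) is emitted
// as an "a=" line only when configured.
func buildInviteSDP(channelID, mediaIP string, port int, stream, ssrc string) string {
    lines := []string{
        "v=0",
        fmt.Sprintf("o=%s 0 0 IN IP4 %s", channelID, mediaIP),
        "s=Play",
        "c=IN IP4 " + mediaIP,
        "t=0 0",
        fmt.Sprintf("m=video %d TCP/RTP/AVP 96", port),
        "a=recvonly",
    }
    if stream != "" {
        lines = append(lines, "a="+stream)
    }
    lines = append(lines,
        "a=rtpmap:96 PS/90000",
        "a=setup:passive",
        "a=connection:new",
        "y="+ssrc,
    )
    return strings.Join(lines, "\r\n") + "\r\n"
}

func main() {
    fmt.Print(buildInviteSDP("34020000001320000006", "192.168.1.100", 10000, "stream=streamMode:sub", "0100000001"))
}
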

@@ -70,7 +70,7 @@ func (d *ForwardDialog) Start() (err error) {
var device *Device
if deviceTmp, ok := d.gb.devices.Get(deviceId); ok {
device = deviceTmp
if channel, ok := deviceTmp.channels.Get(channelId); ok {
if channel, ok := deviceTmp.channels.Get(deviceId + "_" + channelId); ok {
d.channel = channel
} else {
return fmt.Errorf("channel %s not found", channelId)

(File diff suppressed because it is too large.)


@@ -300,3 +300,7 @@ func (d *DeviceChannel) appendInfoContent(content *string) {
*content += " <SVCTimeSupportMode>" + strconv.Itoa(d.SVCTimeSupportMode) + "</SVCTimeSupportMode>\n"
}
}
func (d *DeviceChannel) GetKey() string {
return d.ID
}


@@ -22,9 +22,7 @@ package gb28181
import (
"fmt"
"log/slog"
"net"
"os"
"strconv"
"strings"
"sync"
@@ -61,8 +59,7 @@ type RTPForwarder struct {
SendInterval time.Duration // 发送间隔,可用于限流
lastSendTime time.Time // 上次发送时间
stopChan chan struct{} // 停止信号通道
*slog.Logger
StreamMode string // 数据流传输模式UDP:udp传输/TCP-ACTIVEtcp主动模式/TCP-PASSIVEtcp被动模式
StreamMode string // 数据流传输模式UDP:udp传输/TCP-ACTIVEtcp主动模式/TCP-PASSIVEtcp被动模式
}
// NewRTPForwarder 创建一个新的RTP转发器
@@ -71,7 +68,6 @@ func NewRTPForwarder() *RTPForwarder {
FeedChan: make(chan []byte, 2000), // 增加缓冲区大小,减少丢包风险
SendInterval: time.Millisecond * 0, // 默认不限制发送间隔,最大速度转发
stopChan: make(chan struct{}),
Logger: slog.New(slog.NewTextHandler(os.Stdout, nil)),
}
ret.bufferPool = sync.Pool{
@@ -90,7 +86,7 @@ func (p *RTPForwarder) ReadRTP(rtpBuf util.Buffer) (err error) {
return
}
if p.Enabled(p, task.TraceLevel) {
if p.TraceEnabled() {
p.Trace("rtp", "len", rtpBuf.Len(), "seq", p.SequenceNumber, "payloadType", p.PayloadType, "ssrc", p.SSRC)
}
@@ -347,7 +343,7 @@ func (p *RTPForwarder) Demux() {
}
p.lastSendTime = time.Now()
if p.Enabled(p, task.TraceLevel) && p.ForwardCount%1000 == 0 {
if p.TraceEnabled() && p.ForwardCount%1000 == 0 {
p.Trace("forward rtp packet", "count", p.ForwardCount, "TCP", p.TCP, "TCPActive", p.TCPActive)
}
}


@@ -66,3 +66,8 @@ type PlatformChannel struct {
func (*PlatformChannel) TableName() string {
return "gb28181_platform_channel"
}
func (p *PlatformChannel) GetKey() string {
return p.PlatformServerGBID + "_" + p.ChannelDBID
}


@@ -9,43 +9,44 @@ import (
// PlatformModel contains the platform's basic information, SIP service configuration, device information, authentication details, and so on.
// It is used to store and manage all parameters of a GB28181 platform.
type PlatformModel struct {
Enable bool `gorm:"column:enable" json:"enable"` // Enable表示该平台配置是否启用
Name string `gorm:"column:name;omitempty" json:"name"` // Name表示平台的名称
ServerGBID string `gorm:"primaryKey;column:server_gb_id;omitempty" json:"serverGBId"` // ServerGBID表示SIP服务器的国标编码
ServerGBDomain string `gorm:"column:server_gb_domain;omitempty" json:"serverGBDomain"` // ServerGBDomain表示SIP服务器的国标域
ServerIP string `gorm:"column:server_ip;omitempty" json:"serverIp"` // ServerIP表示SIP服务器的IP地址
ServerPort int `gorm:"column:server_port;omitempty" json:"serverPort"` // ServerPort表示SIP服务器的端口号
DeviceGBID string `gorm:"column:device_gb_id;omitempty" json:"deviceGBId"` // DeviceGBID表示设备的国标编号
DeviceIP string `gorm:"column:device_ip;omitempty" json:"deviceIp"` // DeviceIP表示设备的IP地址
DevicePort int `gorm:"column:device_port;omitempty" json:"devicePort"` // DevicePort表示设备的端口号
Username string `gorm:"column:username;omitempty" json:"username"` // Username表示SIP认证的用户名默认使用设备国标编号
Password string `gorm:"column:password;omitempty" json:"password"` // Password表示SIP认证的密码
Expires int `gorm:"column:expires;omitempty" json:"expires"` // Expires表示注册的过期时间单位为秒
KeepTimeout int `gorm:"column:keep_timeout;omitempty" json:"keepTimeout"` // KeepTimeout表示心跳超时时间单位为秒
Transport string `gorm:"column:transport;omitempty" json:"transport"` // Transport表示传输协议类型
CharacterSet string `gorm:"column:character_set;omitempty" json:"characterSet"` // CharacterSet表示字符集编码
PTZ bool `gorm:"column:ptz" json:"ptz"` // PTZ表示是否允许云台控制
RTCP bool `gorm:"column:rtcp" json:"rtcp"` // RTCP表示是否启用RTCP流保活
Status bool `gorm:"column:status" json:"status"` // Status表示平台当前的在线状态
ChannelCount int `gorm:"column:channel_count;omitempty" json:"channelCount"` // ChannelCount表示通道数量
CatalogSubscribe bool `gorm:"column:catalog_subscribe" json:"catalogSubscribe"` // CatalogSubscribe表示是否已订阅目录信息
AlarmSubscribe bool `gorm:"column:alarm_subscribe" json:"alarmSubscribe"` // AlarmSubscribe表示是否已订阅报警信息
MobilePositionSubscribe bool `gorm:"column:mobile_position_subscribe" json:"mobilePositionSubscribe"` // MobilePositionSubscribe表示是否已订阅移动位置信息
CatalogGroup int `gorm:"column:catalog_group;omitempty" json:"catalogGroup"` // CatalogGroup表示目录分组大小每次向上级发送通道数量
UpdateTime string `gorm:"column:update_time;omitempty" json:"updateTime"` // UpdateTime表示最后更新时间
CreateTime string `gorm:"column:create_time;omitempty" json:"createTime"` // CreateTime表示创建时间
AsMessageChannel bool `gorm:"column:as_message_channel" json:"asMessageChannel"` // AsMessageChannel表示是否作为消息通道使用
SendStreamIP string `gorm:"column:send_stream_ip;omitempty" json:"sendStreamIp"` // SendStreamIP表示点播回复200OK时使用的IP地址
AutoPushChannel bool `gorm:"column:auto_push_channel" json:"autoPushChannel"` // AutoPushChannel表示是否自动推送通道变化
CatalogWithPlatform int `gorm:"column:catalog_with_platform;omitempty" json:"catalogWithPlatform"` // CatalogWithPlatform表示目录信息是否包含平台信息(0:关闭,1:打开)
CatalogWithGroup int `gorm:"column:catalog_with_group;omitempty" json:"catalogWithGroup"` // CatalogWithGroup表示目录信息是否包含分组信息(0:关闭,1:打开)
CatalogWithRegion int `gorm:"column:catalog_with_region;omitempty" json:"catalogWithRegion"` // CatalogWithRegion表示目录信息是否包含行政区划(0:关闭,1:打开)
CivilCode string `gorm:"column:civil_code;omitempty" json:"civilCode"` // CivilCode表示行政区划代码
Manufacturer string `gorm:"column:manufacturer;omitempty" json:"manufacturer"` // Manufacturer表示平台厂商
Model string `gorm:"column:model;omitempty" json:"model"` // Model表示平台型号
Address string `gorm:"column:address;omitempty" json:"address"` // Address表示平台安装地址
RegisterWay int `gorm:"column:register_way;omitempty" json:"registerWay"` // RegisterWay表示注册方式(1:标准认证注册,2:口令认证,3:数字证书双向认证,4:数字证书单向认证)
Secrecy int `gorm:"column:secrecy;omitempty" json:"secrecy"` // Secrecy表示保密属性(0:不涉密,1:涉密)
Enable bool `gorm:"column:enable" json:"enable"` // Enable表示该平台配置是否启用
Name string `gorm:"column:name;omitempty" json:"name"` // Name表示平台的名称
ServerGBID string `gorm:"primaryKey;column:server_gb_id;omitempty" json:"serverGBId"` // ServerGBID表示SIP服务器的国标编码
ServerGBDomain string `gorm:"column:server_gb_domain;omitempty" json:"serverGBDomain"` // ServerGBDomain表示SIP服务器的国标域
ServerIP string `gorm:"column:server_ip;omitempty" json:"serverIp"` // ServerIP表示SIP服务器的IP地址
ServerPort int `gorm:"column:server_port;omitempty" json:"serverPort"` // ServerPort表示SIP服务器的端口号
DeviceGBID string `gorm:"column:device_gb_id;omitempty" json:"deviceGBId"` // DeviceGBID表示设备的国标编号
DeviceIP string `gorm:"column:device_ip;omitempty" json:"deviceIp"` // DeviceIP表示设备的IP地址
DevicePort int `gorm:"column:device_port;omitempty" json:"devicePort"` // DevicePort表示设备的端口号
Username string `gorm:"column:username;omitempty" json:"username"` // Username表示SIP认证的用户名默认使用设备国标编号
Password string `gorm:"column:password;omitempty" json:"password"` // Password表示SIP认证的密码
Expires int `gorm:"column:expires;omitempty" json:"expires"` // Expires表示注册的过期时间单位为秒
KeepTimeout int `gorm:"column:keep_timeout;omitempty" json:"keepTimeout"` // KeepTimeout表示心跳超时时间单位为秒
Transport string `gorm:"column:transport;omitempty" json:"transport"` // Transport表示传输协议类型
CharacterSet string `gorm:"column:character_set;omitempty" json:"characterSet"` // CharacterSet表示字符集编码
PTZ bool `gorm:"column:ptz" json:"ptz"` // PTZ表示是否允许云台控制
RTCP bool `gorm:"column:rtcp" json:"rtcp"` // RTCP表示是否启用RTCP流保活
Status bool `gorm:"column:status" json:"status"` // Status表示平台当前的在线状态
ChannelCount int `gorm:"column:channel_count;omitempty" json:"channelCount"` // ChannelCount表示通道数量
CatalogSubscribe bool `gorm:"column:catalog_subscribe" json:"catalogSubscribe"` // CatalogSubscribe表示是否已订阅目录信息
AlarmSubscribe bool `gorm:"column:alarm_subscribe" json:"alarmSubscribe"` // AlarmSubscribe表示是否已订阅报警信息
MobilePositionSubscribe bool `gorm:"column:mobile_position_subscribe" json:"mobilePositionSubscribe"` // MobilePositionSubscribe表示是否已订阅移动位置信息
CatalogGroup int `gorm:"column:catalog_group;omitempty" json:"catalogGroup"` // CatalogGroup表示目录分组大小每次向上级发送通道数量
UpdateTime string `gorm:"column:update_time;omitempty" json:"updateTime"` // UpdateTime表示最后更新时间
CreateTime string `gorm:"column:create_time;omitempty" json:"createTime"` // CreateTime表示创建时间
AsMessageChannel bool `gorm:"column:as_message_channel" json:"asMessageChannel"` // AsMessageChannel表示是否作为消息通道使用
SendStreamIP string `gorm:"column:send_stream_ip;omitempty" json:"sendStreamIp"` // SendStreamIP表示点播回复200OK时使用的IP地址
AutoPushChannel bool `gorm:"column:auto_push_channel" json:"autoPushChannel"` // AutoPushChannel表示是否自动推送通道变化
CatalogWithPlatform int `gorm:"column:catalog_with_platform;omitempty" json:"catalogWithPlatform"` // CatalogWithPlatform表示目录信息是否包含平台信息(0:关闭,1:打开)
CatalogWithGroup int `gorm:"column:catalog_with_group;omitempty" json:"catalogWithGroup"` // CatalogWithGroup表示目录信息是否包含分组信息(0:关闭,1:打开)
CatalogWithRegion int `gorm:"column:catalog_with_region;omitempty" json:"catalogWithRegion"` // CatalogWithRegion表示目录信息是否包含行政区划(0:关闭,1:打开)
CivilCode string `gorm:"column:civil_code;omitempty" json:"civilCode"` // CivilCode表示行政区划代码
Manufacturer string `gorm:"column:manufacturer;omitempty" json:"manufacturer"` // Manufacturer表示平台厂商
Model string `gorm:"column:model;omitempty" json:"model"` // Model表示平台型号
Address string `gorm:"column:address;omitempty" json:"address"` // Address表示平台安装地址
RegisterWay int `gorm:"column:register_way;omitempty" json:"registerWay"` // RegisterWay表示注册方式(1:标准认证注册,2:口令认证,3:数字证书双向认证,4:数字证书单向认证)
Secrecy int `gorm:"column:secrecy;omitempty" json:"secrecy"` // Secrecy表示保密属性(0:不涉密,1:涉密)
PlatformChannels []*PlatformChannel `gorm:"-:all"`
}
// TableName 指定数据库表名


@@ -148,12 +148,18 @@ func (p *Receiver) ReadRTP(rtp util.Buffer) (err error) {
return
}
if lastSeq == 0 || p.SequenceNumber == lastSeq+1 {
if p.Enabled(p, task.TraceLevel) {
if p.TraceEnabled() {
p.Trace("rtp", "len", rtp.Len(), "seq", p.SequenceNumber, "payloadType", p.PayloadType, "ssrc", p.SSRC)
}
copyData := make([]byte, len(p.Payload))
copy(copyData, p.Payload)
p.FeedChan <- copyData
select {
case p.FeedChan <- copyData:
// data delivered successfully
case <-p.Done():
// the task has stopped; return an error
return task.ErrTaskComplete
}
return
}
return ErrRTPReceiveLost

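The select added above stops ReadRTP from blocking forever on a full FeedChan once the task is done. A minimal sketch of the same pattern, with a context standing in for the task's Done() channel and a stand-in error value:

package main

import (
    "context"
    "errors"
    "fmt"
)

var errTaskComplete = errors.New("task complete") // stand-in for task.ErrTaskComplete

// feed copies the payload and hands it to the consumer, but gives up as soon
// as the surrounding task is cancelled, so a stalled consumer cannot block
// the RTP read loop indefinitely.
func feed(ctx context.Context, ch chan<- []byte, payload []byte) error {
    buf := make([]byte, len(payload))
    copy(buf, payload)
    select {
    case ch <- buf:
        return nil
    case <-ctx.Done():
        return errTaskComplete
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    ch := make(chan []byte) // unbuffered: the consumer is "stalled"
    cancel()
    fmt.Println(feed(ctx, ch, []byte{0x80, 0x60})) // task complete
}
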

@@ -3,6 +3,7 @@ package plugin_gb28181pro
import (
"context"
"fmt"
"m7s.live/v5/pkg/util"
"net/http"
"strconv"
"strings"
@@ -40,6 +41,7 @@ type Platform struct {
plugin *GB28181Plugin
ctx context.Context
unRegister bool
channels util.Collection[string, *gb28181.DeviceChannel] `gorm:"-:all"`
}
func NewPlatform(pm *gb28181.PlatformModel, plugin *GB28181Plugin, unRegister bool) *Platform {
@@ -49,7 +51,7 @@ func NewPlatform(pm *gb28181.PlatformModel, plugin *GB28181Plugin, unRegister bo
unRegister: unRegister,
}
p.ctx = context.Background()
client, err := sipgo.NewClient(p.plugin.ua, sipgo.WithClientHostname(p.PlatformModel.DeviceIP), sipgo.WithClientPort(p.PlatformModel.DevicePort))
client, err := sipgo.NewClient(p.plugin.ua, sipgo.WithClientHostname(p.PlatformModel.DeviceIP))
if err != nil {
p.Error("failed to create sip client: %v", err)
}
@@ -155,16 +157,16 @@ func (p *Platform) Keepalive() (*sipgo.DialogClientSession, error) {
}
req.AppendHeader(&toHeader)
viaHeader := sip.ViaHeader{
ProtocolName: "SIP",
ProtocolVersion: "2.0",
Transport: p.PlatformModel.Transport,
Host: p.PlatformModel.DeviceIP,
Port: p.PlatformModel.DevicePort,
Params: sip.NewParams(),
}
viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
req.AppendHeader(&viaHeader)
//viaHeader := sip.ViaHeader{
// ProtocolName: "SIP",
// ProtocolVersion: "2.0",
// Transport: p.PlatformModel.Transport,
// Host: p.PlatformModel.DeviceIP,
// Port: p.PlatformModel.DevicePort,
// Params: sip.NewParams(),
//}
//viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
//req.AppendHeader(&viaHeader)
req.SetBody(gb28181.BuildKeepAliveXML(p.SN, p.PlatformModel.DeviceGBID))
p.SN++
@@ -240,16 +242,16 @@ func (p *Platform) Register(isUnregister bool) error {
req.AppendHeader(&toHeader)
// 添加Via头部
viaHeader := sip.ViaHeader{
ProtocolName: "SIP",
ProtocolVersion: "2.0",
Transport: p.PlatformModel.Transport,
Host: p.PlatformModel.DeviceIP,
Port: p.PlatformModel.DevicePort,
Params: sip.NewParams(),
}
viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
req.AppendHeader(&viaHeader)
//viaHeader := sip.ViaHeader{
// ProtocolName: "SIP",
// ProtocolVersion: "2.0",
// Transport: p.PlatformModel.Transport,
// Host: p.PlatformModel.DeviceIP,
// Port: p.PlatformModel.DevicePort,
// Params: sip.NewParams(),
//}
//viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
//req.AppendHeader(&viaHeader)
req.AppendHeader(&p.MaxForwardsHDR)
@@ -333,6 +335,8 @@ func (p *Platform) Register(isUnregister bool) error {
newReq := req.Clone()
newReq.RemoveHeader("Via") // 必须由传输层重新生成
newReq.AppendHeader(sip.NewHeader("Authorization", cred.String()))
newReq.CSeq().SeqNo = uint32(p.SN) // bump the CSeq sequence number
p.SN++
// Send the authenticated request
tx, err = p.Client.TransactionRequest(p.ctx, newReq, sipgo.ClientRequestAddVia)
@@ -457,14 +461,17 @@ func (p *Platform) handleCatalog(req *sip.Request, tx sip.ServerTransaction, msg
// 查询通道列表
var channels []gb28181.DeviceChannel
if p.plugin.DB != nil {
if err := p.plugin.DB.Table("gb28181_channel gc").
Select(`gc.*`).
Joins("left join gb28181_platform_channel gpc on gc.id=gpc.channel_db_id").
Where("gpc.platform_server_gb_id = ? and gc.status='ON'", p.PlatformModel.ServerGBID).
Find(&channels).Error; err != nil {
return fmt.Errorf("query channels error: %v", err)
}
//if p.plugin.DB != nil {
// if err := p.plugin.DB.Table("gb28181_channel gc").
// Select(`gc.*`).
// Joins("left join gb28181_platform_channel gpc on gc.id=gpc.channel_db_id").
// Where("gpc.platform_server_gb_id = ? and gc.status='ON'", p.PlatformModel.ServerGBID).
// Find(&channels).Error; err != nil {
// return fmt.Errorf("query channels error: %v", err)
// }
//}
for channel := range p.channels.Range {
channels = append(channels, *channel)
}
// Send the catalog response regardless of whether there are channels
@@ -506,16 +513,16 @@ func (p *Platform) sendCatalogResponse(req *sip.Request, sn string, fromTag stri
request.AppendHeader(&toHeader)
// 添加Via头部
viaHeader := sip.ViaHeader{
ProtocolName: "SIP",
ProtocolVersion: "2.0",
Transport: p.PlatformModel.Transport,
Host: p.PlatformModel.DeviceIP,
Port: p.PlatformModel.DevicePort,
Params: sip.NewParams(),
}
viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
request.AppendHeader(&viaHeader)
//viaHeader := sip.ViaHeader{
// ProtocolName: "SIP",
// ProtocolVersion: "2.0",
// Transport: p.PlatformModel.Transport,
// Host: p.PlatformModel.DeviceIP,
// Port: p.PlatformModel.DevicePort,
// Params: sip.NewParams(),
//}
//viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
//request.AppendHeader(&viaHeader)
request.SetTransport(req.Transport())
contentTypeHeader := sip.ContentTypeHeader("Application/MANSCDP+xml")
@@ -526,7 +533,7 @@ func (p *Platform) sendCatalogResponse(req *sip.Request, sn string, fromTag stri
<Response>
<CmdType>Catalog</CmdType>
<SN>%s</SN>
<DeviceId>%s</DeviceId>
<DeviceID>%s</DeviceID>
<SumNum>0</SumNum>
<DeviceList Num="0">
</DeviceList>
@@ -648,16 +655,16 @@ func (p *Platform) sendCatalogResponse(req *sip.Request, sn string, fromTag stri
request.AppendHeader(&toHeader)
// 添加Via头部
viaHeader := sip.ViaHeader{
ProtocolName: "SIP",
ProtocolVersion: "2.0",
Transport: p.PlatformModel.Transport,
Host: p.PlatformModel.DeviceIP,
Port: p.PlatformModel.DevicePort,
Params: sip.NewParams(),
}
viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
request.AppendHeader(&viaHeader)
//viaHeader := sip.ViaHeader{
// ProtocolName: "SIP",
// ProtocolVersion: "2.0",
// Transport: p.PlatformModel.Transport,
// Host: p.PlatformModel.DeviceIP,
// Port: p.PlatformModel.DevicePort,
// Params: sip.NewParams(),
//}
//viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
//request.AppendHeader(&viaHeader)
request.SetTransport(req.Transport())
contentTypeHeader := sip.ContentTypeHeader("Application/MANSCDP+xml")
@@ -669,7 +676,7 @@ func (p *Platform) sendCatalogResponse(req *sip.Request, sn string, fromTag stri
<Response>
<CmdType>Catalog</CmdType>
<SN>%s</SN>
<DeviceId>%s</DeviceId>
<DeviceID>%s</DeviceID>
<SumNum>%d</SumNum>
<DeviceList Num="1">
%s
@@ -807,7 +814,7 @@ func (p *Platform) buildChannelItem(channel gb28181.DeviceChannel) string {
}
return fmt.Sprintf(`<Item>
<DeviceId>%s</DeviceId>
<DeviceID>%s</DeviceID>
<Name>%s</Name>
<Manufacturer>%s</Manufacturer>
<Model>%s</Model>
@@ -882,16 +889,16 @@ func (p *Platform) handleDeviceControl(req *sip.Request, tx sip.ServerTransactio
request.AppendHeader(&toHeader)
// 添加Via头部
viaHeader := sip.ViaHeader{
ProtocolName: "SIP",
ProtocolVersion: "2.0",
Transport: device.Transport,
Host: device.SipIp,
Port: device.LocalPort,
Params: sip.NewParams(),
}
viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
request.AppendHeader(&viaHeader)
//viaHeader := sip.ViaHeader{
// ProtocolName: "SIP",
// ProtocolVersion: "2.0",
// Transport: device.Transport,
// Host: device.SipIp,
// Port: device.LocalPort,
// Params: sip.NewParams(),
//}
//viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
//request.AppendHeader(&viaHeader)
// 设置Content-Type
contentTypeHeader := sip.ContentTypeHeader("Application/MANSCDP+xml")
@@ -988,16 +995,16 @@ func (p *Platform) sendDeviceStatusResponse(req *sip.Request, device *Device, sn
request.AppendHeader(&toHeader)
// 添加Via头部
viaHeader := sip.ViaHeader{
ProtocolName: "SIP",
ProtocolVersion: "2.0",
Transport: p.PlatformModel.Transport,
Host: p.PlatformModel.DeviceIP,
Port: p.PlatformModel.DevicePort,
Params: sip.NewParams(),
}
viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
request.AppendHeader(&viaHeader)
//viaHeader := sip.ViaHeader{
// ProtocolName: "SIP",
// ProtocolVersion: "2.0",
// Transport: p.PlatformModel.Transport,
// Host: p.PlatformModel.DeviceIP,
// Port: p.PlatformModel.DevicePort,
// Params: sip.NewParams(),
//}
//viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
//request.AppendHeader(&viaHeader)
// 设置Content-Type
contentTypeHeader := sip.ContentTypeHeader("Application/MANSCDP+xml")
@@ -1037,7 +1044,7 @@ func (p *Platform) sendDeviceStatusResponse(req *sip.Request, device *Device, sn
<Response>
<CmdType>DeviceStatus</CmdType>
<SN>%s</SN>
<DeviceId>%s</DeviceId>
<DeviceID>%s</DeviceID>
<Result>OK</Result>
<Online>%s</Online>
<Status>%s</Status>
@@ -1136,16 +1143,16 @@ func (p *Platform) sendDeviceInfoResponse(req *sip.Request, device *Device, sn s
}
request.AppendHeader(&toHeader)
// 添加Via头部
viaHeader := sip.ViaHeader{
ProtocolName: "SIP",
ProtocolVersion: "2.0",
Transport: p.PlatformModel.Transport,
Host: p.PlatformModel.DeviceIP,
Port: p.PlatformModel.DevicePort,
Params: sip.NewParams(),
}
viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
request.AppendHeader(&viaHeader)
//viaHeader := sip.ViaHeader{
// ProtocolName: "SIP",
// ProtocolVersion: "2.0",
// Transport: p.PlatformModel.Transport,
// Host: p.PlatformModel.DeviceIP,
// Port: p.PlatformModel.DevicePort,
// Params: sip.NewParams(),
//}
//viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
//request.AppendHeader(&viaHeader)
contentTypeHeader := sip.ContentTypeHeader("Application/MANSCDP+xml")
request.AppendHeader(&contentTypeHeader)
@@ -1157,7 +1164,7 @@ func (p *Platform) sendDeviceInfoResponse(req *sip.Request, device *Device, sn s
<Response>
<CmdType>DeviceInfo</CmdType>
<SN>%s</SN>
<DeviceId>%s</DeviceId>
<DeviceID>%s</DeviceID>
<Result>OK</Result>
<DeviceName>%s</DeviceName>
<Manufacturer>%s</Manufacturer>
@@ -1171,7 +1178,7 @@ func (p *Platform) sendDeviceInfoResponse(req *sip.Request, device *Device, sn s
<Response>
<CmdType>DeviceInfo</CmdType>
<SN>%s</SN>
<DeviceId>%s</DeviceId>
<DeviceID>%s</DeviceID>
<Result>OK</Result>
<DeviceName>%s</DeviceName>
<Manufacturer>%s</Manufacturer>
@@ -1340,16 +1347,16 @@ func (p *Platform) handlePresetQuery(req *sip.Request, tx sip.ServerTransaction,
request.AppendHeader(&toHeader)
// 添加Via头部
viaHeader := sip.ViaHeader{
ProtocolName: "SIP",
ProtocolVersion: "2.0",
Transport: device.Transport,
Host: device.SipIp,
Port: device.LocalPort,
Params: sip.NewParams(),
}
viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
request.AppendHeader(&viaHeader)
//viaHeader := sip.ViaHeader{
// ProtocolName: "SIP",
// ProtocolVersion: "2.0",
// Transport: device.Transport,
// Host: device.SipIp,
// Port: device.LocalPort,
// Params: sip.NewParams(),
//}
//viaHeader.Params.Add("branch", sip.GenerateBranchN(16)).Add("rport", "")
//request.AppendHeader(&viaHeader)
// 设置Content-Type
contentTypeHeader := sip.ContentTypeHeader("Application/MANSCDP+xml")

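Two recurring fixes in this file are commenting out the hand-built Via header (the transport layer regenerates it) and changing the MANSCDP element from <DeviceId> to <DeviceID>, which upper platforms expect. A small sketch of the corrected response body; the helper name and surrounding wrapper are illustrative only, not the plugin's API:

package main

import "fmt"

// buildCatalogBody is a hypothetical helper showing the corrected element
// casing: the GB28181 MANSCDP responses use <DeviceID>, not <DeviceId>.
func buildCatalogBody(sn, deviceID string, num int, items string) string {
    return fmt.Sprintf(`<Response>
<CmdType>Catalog</CmdType>
<SN>%s</SN>
<DeviceID>%s</DeviceID>
<SumNum>%d</SumNum>
<DeviceList Num="%d">
%s</DeviceList>
</Response>`, sn, deviceID, num, num, items)
}

func main() {
    fmt.Println(buildCatalogBody("17430", "34020000002000000001", 0, ""))
}
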

@@ -17,7 +17,7 @@ func (gb *GB28181Plugin) RecordInfoQuery(deviceID string, channelID string, star
return nil, fmt.Errorf("device not found: %s", deviceID)
}
channel, ok := device.channels.Get(channelID)
channel, ok := device.channels.Get(deviceID + "_" + channelID)
if !ok {
return nil, fmt.Errorf("channel not found: %s", channelID)
}


@@ -0,0 +1,487 @@
package plugin_gb28181pro
import (
"errors"
"fmt"
"net"
"os"
"strconv"
"sync"
"time"
"github.com/emiago/sipgo"
"github.com/emiago/sipgo/sip"
myip "github.com/husanpao/ip"
"github.com/icholy/digest"
"github.com/rs/zerolog"
"gorm.io/gorm"
"m7s.live/v5"
"m7s.live/v5/pkg/task"
"m7s.live/v5/pkg/util"
)
type DeviceRegisterQueueTask struct {
task.Work
deviceId string
}
func (queueTask *DeviceRegisterQueueTask) GetKey() string {
return queueTask.deviceId
}
type registerHandlerTask struct {
task.Task
gb *GB28181Plugin
req *sip.Request
tx sip.ServerTransaction
}
// getDevicePassword returns the device-specific password, falling back to the plugin default
func (task *registerHandlerTask) getDevicePassword(device *Device) string {
if device != nil && device.Password != "" {
return device.Password
}
return task.gb.Password
}
func (task *registerHandlerTask) Run() (err error) {
var password string
var device *Device
var recover = false
from := task.req.From()
if from == nil || from.Address.User == "" {
task.gb.Error("OnRegister", "error", "no user")
return
}
isUnregister := false
deviceid := from.Address.User
if existingDevice, exists := task.gb.devices.Get(deviceid); exists && existingDevice != nil {
device = existingDevice
recover = true
} else {
// try to load the device record from the database
device = &Device{DeviceId: deviceid}
if task.gb.DB != nil {
if err := task.gb.DB.First(device, Device{DeviceId: deviceid}).Error; err != nil {
if !errors.Is(err, gorm.ErrRecordNotFound) {
task.gb.Error("OnRegister", "error", err)
}
}
}
}
// resolve the device password
password = task.getDevicePassword(device)
exp := task.req.GetHeader("Expires")
if exp == nil {
task.gb.Error("OnRegister", "error", "no expires")
return
}
expSec, err := strconv.ParseInt(exp.Value(), 10, 32)
if err != nil {
task.gb.Error("OnRegister", "error", err.Error())
return
}
if expSec == 0 {
isUnregister = true
}
// password authentication is required
if password != "" {
h := task.req.GetHeader("Authorization")
if h == nil {
// generate the digest challenge
nonce := fmt.Sprintf("%d", time.Now().UnixMicro())
chal := digest.Challenge{
Realm: task.gb.Realm,
Nonce: nonce,
Opaque: "monibuca",
Algorithm: "MD5",
QOP: []string{"auth"},
}
res := sip.NewResponseFromRequest(task.req, sip.StatusUnauthorized, "Unauthorized", nil)
res.AppendHeader(sip.NewHeader("WWW-Authenticate", chal.String()))
task.gb.Debug("sending auth challenge", "nonce", nonce, "realm", task.gb.Realm)
if err = task.tx.Respond(res); err != nil {
task.gb.Error("respond Unauthorized", "error", err.Error())
}
return
}
// parse the credentials from the Authorization header
cred, err := digest.ParseCredentials(h.Value())
if err != nil {
task.gb.Error("parsing credentials failed", "error", err.Error())
if err = task.tx.Respond(sip.NewResponseFromRequest(task.req, sip.StatusUnauthorized, "Bad credentials", nil)); err != nil {
task.gb.Error("respond Bad credentials", "error", err.Error())
}
return err
}
task.gb.Debug("received auth info",
"username", cred.Username,
"realm", cred.Realm,
"nonce", cred.Nonce,
"uri", cred.URI,
"qop", cred.QOP,
"nc", cred.Nc,
"cnonce", cred.Cnonce,
"response", cred.Response)
// the device ID must be used as the username
if cred.Username != deviceid {
task.gb.Error("username mismatch", "expected", deviceid, "got", cred.Username)
if err = task.tx.Respond(sip.NewResponseFromRequest(task.req, sip.StatusForbidden, "Invalid username", nil)); err != nil {
task.gb.Error("respond Invalid username", "error", err.Error())
}
return err
}
// compute the expected digest response
opts := digest.Options{
Method: "REGISTER",
URI: cred.URI,
Username: deviceid,
Password: password,
Cnonce: cred.Cnonce,
Count: int(cred.Nc),
}
digCred, err := digest.Digest(&digest.Challenge{
Realm: cred.Realm,
Nonce: cred.Nonce,
Opaque: cred.Opaque,
Algorithm: cred.Algorithm,
QOP: []string{cred.QOP},
}, opts)
if err != nil {
task.gb.Error("calculating digest failed", "error", err.Error())
if err = task.tx.Respond(sip.NewResponseFromRequest(task.req, sip.StatusUnauthorized, "Bad credentials", nil)); err != nil {
task.gb.Error("respond Bad credentials", "error", err.Error())
}
return err
}
task.gb.Debug("calculated response info",
"username", opts.Username,
"uri", opts.URI,
"qop", cred.QOP,
"nc", cred.Nc,
"cnonce", opts.Cnonce,
"count", opts.Count,
"response", digCred.Response)
// compare with the response sent by the device
if cred.Response != digCred.Response {
task.gb.Error("response mismatch",
"expected", digCred.Response,
"got", cred.Response,
"method", opts.Method,
"uri", opts.URI,
"username", opts.Username)
if err = task.tx.Respond(sip.NewResponseFromRequest(task.req, sip.StatusUnauthorized, "Invalid credentials", nil)); err != nil {
task.gb.Error("respond Invalid credentials", "error", err.Error())
}
return err
}
task.gb.Debug("auth successful", "username", deviceid)
}
response := sip.NewResponseFromRequest(task.req, sip.StatusOK, "OK", nil)
response.AppendHeader(sip.NewHeader("Expires", fmt.Sprintf("%d", expSec)))
response.AppendHeader(sip.NewHeader("Date", time.Now().Local().Format(util.LocalTimeFormat)))
response.AppendHeader(sip.NewHeader("Server", "M7S/"+m7s.Version))
response.AppendHeader(sip.NewHeader("Allow", "INVITE,ACK,CANCEL,BYE,NOTIFY,OPTIONS,PRACK,UPDATE,REFER"))
//hostname, portStr, _ := net.SplitHostPort(req.Source())
//port, _ := strconv.Atoi(portStr)
//response.AppendHeader(&sip.ContactHeader{
// Address: sip.Uri{
// User: deviceid,
// Host: hostname,
// Port: port,
// },
//})
if err = task.tx.Respond(response); err != nil {
task.gb.Error("respond OK", "error", err.Error())
}
if isUnregister { // unregister: take the device offline
if d, ok := task.gb.devices.Get(deviceid); ok {
d.Online = false
d.Status = DeviceOfflineStatus
if task.gb.DB != nil {
// update the device state
var dbDevice Device
if err := task.gb.DB.First(&dbDevice, Device{DeviceId: deviceid}).Error; err == nil {
d.ID = dbDevice.ID
}
d.channels.Range(func(channel *Channel) bool {
channel.Status = "OFF"
return true
})
}
d.Stop(errors.New("unregister"))
}
} else {
if recover {
task.gb.Info("into recoverdevice", "deviceId", device.DeviceId)
device.Status = DeviceOnlineStatus
task.RecoverDevice(device, task.req)
} else {
var newDevice *Device
if device == nil {
newDevice = &Device{DeviceId: deviceid}
} else {
newDevice = device
}
task.gb.Info("into StoreDevice", "deviceId", from)
task.StoreDevice(deviceid, task.req, newDevice)
}
}
task.gb.Info("registerHandlerTask start end", "deviceid", deviceid, "expires", expSec, "isUnregister", isUnregister)
return nil
}
func (task *registerHandlerTask) RecoverDevice(d *Device, req *sip.Request) {
from := req.From()
source := req.Source()
desc := req.Destination()
myIP, myPortStr, _ := net.SplitHostPort(desc)
sourceIP, sourcePortStr, _ := net.SplitHostPort(source)
sourcePort, _ := strconv.Atoi(sourcePortStr)
myPort, _ := strconv.Atoi(myPortStr)
// if the device IP is a private address, prefer the LAN IP
myIPParse := net.ParseIP(myIP)
sourceIPParse := net.ParseIP(sourceIP)
// prefer the LAN IP
myLanIP := myip.InternalIPv4()
myWanIP := myip.ExternalIPv4()
task.gb.Info("Start RecoverDevice", "source", source, "desc", desc, "myLanIP", myLanIP, "myWanIP", myWanIP)
// map the destination and source addresses onto LAN/WAN IPs
if sourceIPParse != nil { // only when the source IP is valid
if myIPParse == nil { // the destination address is a domain name
if sourceIPParse.IsPrivate() { // the source is a private IP
myWanIP = myLanIP // use the LAN IP as the WAN IP
}
} else { // the destination address is an IP
if sourceIPParse.IsPrivate() { // the source is a private IP
myLanIP, myWanIP = myIP, myIP // use the destination IP for both LAN and WAN
}
}
}
if task.gb.MediaIP != "" {
myWanIP = task.gb.MediaIP
}
if task.gb.SipIP != "" {
myLanIP = task.gb.SipIP
}
// set the Recipient
d.Recipient = sip.Uri{
Host: sourceIP,
Port: sourcePort,
User: from.Address.User,
}
// set the Contact header
d.contactHDR = sip.ContactHeader{
Address: sip.Uri{
User: task.gb.Serial,
Host: myIP,
Port: myPort,
},
}
d.SipIp = myLanIP
d.StartTime = time.Now()
d.IP = sourceIP
d.Port = sourcePort
d.HostAddress = d.IP + ":" + sourcePortStr
d.Status = DeviceOnlineStatus
d.UpdateTime = time.Now()
d.RegisterTime = time.Now()
d.Online = true
d.client, _ = sipgo.NewClient(task.gb.ua, sipgo.WithClientLogger(zerolog.New(os.Stdout)), sipgo.WithClientHostname(d.SipIp))
d.channels.L = new(sync.RWMutex)
d.catalogReqs.L = new(sync.RWMutex)
d.plugin = task.gb
d.plugin.Info("RecoverDevice", "source", source, "desc", desc, "device.SipIp", myLanIP, "device.WanIP", myWanIP, "recipient", req.Recipient, "myPort", myPort)
if task.gb.DB != nil {
//var existing Device
//if err := gb.DB.First(&existing, Device{DeviceId: d.DeviceId}).Error; err == nil {
// d.ID = existing.ID // 保持原有的自增ID
// gb.Info("RecoverDevice", "type", "更新设备", "deviceId", d.DeviceId)
//} else {
// gb.Info("RecoverDevice", "type", "新增设备", "deviceId", d.DeviceId)
//}
task.gb.DB.Save(d)
}
return
}
func (task *registerHandlerTask) StoreDevice(deviceid string, req *sip.Request, d *Device) {
task.gb.Debug("deviceid is ", deviceid, "req.via() is ", req.Via(), "req.Source() is ", req.Source())
source := req.Source()
sourceIP, sourcePortStr, _ := net.SplitHostPort(source)
sourcePort, _ := strconv.Atoi(sourcePortStr)
desc := req.Destination()
myIP, myPortStr, _ := net.SplitHostPort(desc)
myPort, _ := strconv.Atoi(myPortStr)
exp := req.GetHeader("Expires")
if exp == nil {
task.gb.Error("OnRegister", "error", "no expires")
return
}
expSec, err := strconv.ParseInt(exp.Value(), 10, 32)
if err != nil {
task.gb.Error("OnRegister", "error", err.Error())
return
}
// if myPort is not one of the configured sipPorts, fall back to sipPorts[0]
if len(task.gb.sipPorts) > 0 {
portFound := false
for _, port := range task.gb.sipPorts {
if port == myPort {
portFound = true
break
}
}
if !portFound {
myPort = task.gb.sipPorts[0]
task.gb.Debug("StoreDevice", "使用默认端口替换", myPort)
}
}
// if the device IP is a private address, prefer the LAN IP
myIPParse := net.ParseIP(myIP)
sourceIPParse := net.ParseIP(sourceIP)
// prefer the LAN IP
myLanIP := myip.InternalIPv4()
myWanIP := myip.ExternalIPv4()
task.gb.Info("Start StoreDevice", "source", source, "desc", desc, "myLanIP", myLanIP, "myWanIP", myWanIP)
// map the destination and source addresses onto LAN/WAN IPs
if sourceIPParse != nil { // only when the source IP is valid
if myIPParse == nil { // the destination address is a domain name
if sourceIPParse.IsPrivate() { // the source is a private IP
myWanIP = myLanIP // use the LAN IP as the WAN IP
}
} else { // the destination address is an IP
if sourceIPParse.IsPrivate() { // the source is a private IP
myLanIP, myWanIP = myIP, myIP // use the destination IP for both LAN and WAN
}
}
}
if task.gb.MediaIP != "" {
myWanIP = task.gb.MediaIP
}
if task.gb.SipIP != "" {
myLanIP = task.gb.SipIP
}
now := time.Now()
d.CreateTime = now
d.UpdateTime = now
d.RegisterTime = now
d.KeepaliveTime = now
d.Status = DeviceOnlineStatus
d.Online = true
d.StreamMode = "TCP-PASSIVE" // 默认UDP传输
d.Charset = "GB2312" // 默认GB2312字符集
d.GeoCoordSys = "WGS84" // 默认WGS84坐标系
d.Transport = req.Transport() // 传输协议
d.IP = sourceIP
d.Port = sourcePort
d.HostAddress = sourceIP + ":" + sourcePortStr
d.SipIp = myLanIP
d.MediaIp = myWanIP
d.Expires = int(expSec)
d.eventChan = make(chan any, 10)
d.Recipient = sip.Uri{
Host: sourceIP,
Port: sourcePort,
User: deviceid,
}
d.contactHDR = sip.ContactHeader{
Address: sip.Uri{
User: task.gb.Serial,
Host: myWanIP,
Port: myPort,
},
}
d.fromHDR = sip.FromHeader{
Address: sip.Uri{
User: task.gb.Serial,
Host: myWanIP,
Port: myPort,
},
Params: sip.NewParams(),
}
d.plugin = task.gb
d.LocalPort = myPort
d.Logger = task.gb.Logger.With("deviceid", deviceid)
d.fromHDR.Params.Add("tag", sip.GenerateTagN(16))
d.client, _ = sipgo.NewClient(task.gb.ua, sipgo.WithClientLogger(zerolog.New(os.Stdout)), sipgo.WithClientHostname(d.SipIp))
d.channels.L = new(sync.RWMutex)
d.catalogReqs.L = new(sync.RWMutex)
d.Info("StoreDevice", "source", source, "desc", desc, "device.SipIp", myLanIP, "device.WanIP", myWanIP, "req.Recipient", req.Recipient, "myPort", myPort, "d.Recipient", d.Recipient)
// derive a uint32 task ID from the device ID using a simple 31-based rolling hash
var hash uint32
for i := 0; i < len(d.DeviceId); i++ {
ch := d.DeviceId[i]
hash = hash*31 + uint32(ch)
}
d.Task.ID = hash
d.OnStart(func() {
task.gb.devices.Set(d)
d.channels.OnAdd(func(c *Channel) {
if absDevice, ok := task.gb.Server.PullProxies.Find(func(absDevice m7s.IPullProxy) bool {
conf := absDevice.GetConfig()
return conf.Type == "gb28181" && conf.URL == fmt.Sprintf("%s/%s", d.DeviceId, c.ChannelID)
}); ok {
c.PullProxyTask = absDevice.(*PullProxy)
absDevice.ChangeStatus(m7s.PullProxyStatusOnline)
}
})
})
d.OnDispose(func() {
d.Status = DeviceOfflineStatus
if task.gb.devices.RemoveByKey(d.DeviceId) {
for c := range d.channels.Range {
if c.PullProxyTask != nil {
c.PullProxyTask.ChangeStatus(m7s.PullProxyStatusOffline)
}
}
}
})
task.gb.AddTask(d).WaitStarted()
if task.gb.DB != nil {
var existing Device
if err := task.gb.DB.First(&existing, Device{DeviceId: d.DeviceId}).Error; err == nil {
d.ID = existing.ID // keep the original auto-increment ID
task.gb.DB.Omit("create_time").Save(d)
task.gb.Info("StoreDevice", "type", "update device", "deviceId", d.DeviceId)
} else {
task.gb.DB.Save(d)
task.gb.Info("StoreDevice", "type", "new device", "deviceId", d.DeviceId)
}
}
return
}
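A minimal, self-contained sketch of the challenge/verify round-trip performed in Run above, reusing the same github.com/icholy/digest calls that this file imports; the realm, device ID, password and nonce values are illustrative only:

package main

import (
	"fmt"

	"github.com/icholy/digest"
)

func main() {
	// server side: the challenge that Run puts into WWW-Authenticate
	chal := &digest.Challenge{
		Realm:     "3402000000",
		Nonce:     "1718000000000000",
		Opaque:    "monibuca",
		Algorithm: "MD5",
		QOP:       []string{"auth"},
	}
	// client side: what a device would compute for its REGISTER
	opts := digest.Options{
		Method:   "REGISTER",
		URI:      "sip:34020000002000000001@3402000000",
		Username: "34020000001320000001",
		Password: "admin123",
		Cnonce:   "abcdef",
		Count:    1,
	}
	cred, err := digest.Digest(chal, opts)
	if err != nil {
		panic(err)
	}
	// server side again: recompute with the stored password and compare,
	// which is the check Run performs before accepting the registration
	expected, _ := digest.Digest(chal, opts)
	fmt.Println("digest match:", cred.Response == expected.Response)
}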

Submodule plugin/gridb deleted from e0f8dbad92

View File

@@ -104,9 +104,8 @@ func (config *HLSPlugin) vod(w http.ResponseWriter, r *http.Request) {
playlist.Init()
for _, record := range records {
duration := record.EndTime.Sub(record.StartTime).Seconds()
playlist.WriteInf(hls.PlaylistInf{
Duration: duration,
Duration: float64(record.Duration) / 1000,
URL: fmt.Sprintf("/mp4/download/%s.fmp4?id=%d", streamPath, record.ID),
Title: record.StartTime.Format(time.RFC3339),
})
@@ -128,9 +127,8 @@ func (config *HLSPlugin) vod(w http.ResponseWriter, r *http.Request) {
playlist.Init()
for _, record := range records {
duration := record.EndTime.Sub(record.StartTime).Seconds()
playlist.WriteInf(hls.PlaylistInf{
Duration: duration,
Duration: float64(record.Duration) / 1000,
URL: record.FilePath,
})
}
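The playlist entries now take their duration from the stored Duration field instead of EndTime minus StartTime. A small standalone sketch of the conversion, assuming Duration is kept in milliseconds as the recorder change further below suggests:

package main

import "fmt"

// writeInf prints one VOD playlist entry; the duration comes from the stored
// Duration field (milliseconds) rather than EndTime-StartTime, which can
// drift when a recording is cut short.
func writeInf(durationMs int64, url string) {
	fmt.Printf("#EXTINF:%.3f,\n%s\n", float64(durationMs)/1000, url)
}

func main() {
	writeInf(6040, "/mp4/download/live/test.fmp4?id=42")
}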

View File

@@ -103,71 +103,105 @@ func (p *MP4Plugin) downloadSingleFile(stream *m7s.RecordStream, flag mp4.Flag,
}
}
// download handles MP4 file download requests.
// Two modes are supported:
// 1. single-file download: a specific recording selected by the id parameter
// 2. time-range merge download: multiple recordings merged by time range
func (p *MP4Plugin) download(w http.ResponseWriter, r *http.Request) {
// make sure the database is available
if p.DB == nil {
http.Error(w, pkg.ErrNoDB.Error(), http.StatusInternalServerError)
return
}
// respond with the MP4 video content type
w.Header().Set("Content-Type", "video/mp4")
// extract the stream path and detect the fragmented-MP4 suffix
streamPath := r.PathValue("streamPath")
var flag mp4.Flag
if strings.HasSuffix(streamPath, ".fmp4") {
// fragmented MP4
flag = mp4.FLAG_FRAGMENT
streamPath = strings.TrimSuffix(streamPath, ".fmp4")
} else {
// regular MP4
streamPath = strings.TrimSuffix(streamPath, ".mp4")
}
query := r.URL.Query()
var streams []m7s.RecordStream
// single-file download
if id := query.Get("id"); id != "" {
// set the download file name
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%s_%s.mp4", streamPath, id))
// look up the recording with the given ID
p.DB.Find(&streams, "id=? AND stream_path=?", id, streamPath)
if len(streams) == 0 {
http.Error(w, "record not found", http.StatusNotFound)
return
}
// stream the single file
p.downloadSingleFile(&streams[0], flag, w, r)
return
}
// merge multiple MP4 files
// time-range merge download
// parse the time-range parameters
startTime, endTime, err := util.TimeRangeQueryParse(query)
if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
p.Info("download", "streamPath", streamPath, "start", startTime, "end", endTime)
// name the merged download after the stream path and time range
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%s_%s_%s.mp4", streamPath, startTime.Format("20060102150405"), endTime.Format("20060102150405")))
// query the recordings that overlap the requested time range
queryRecord := m7s.RecordStream{
Mode: m7s.RecordModeAuto,
Type: "mp4",
}
p.DB.Where(&queryRecord).Find(&streams, "end_time>? AND start_time<? AND stream_path=?", startTime, endTime, streamPath)
// create the MP4 muxer
muxer := mp4.NewMuxer(flag)
ftyp := muxer.CreateFTYPBox()
n := ftyp.Size()
muxer.CurrentOffset = int64(n)
var lastTs, tsOffset int64
var parts []*ContentPart
sampleOffset := muxer.CurrentOffset + mp4.BeforeMdatData
mdatOffset := sampleOffset
var audioTrack, videoTrack *mp4.Track
var file *os.File
var moov box.IBox
streamCount := len(streams)
// bookkeeping
var lastTs, tsOffset int64 // timestamp offsets used to keep time continuous across merged files
var parts []*ContentPart // list of content parts
sampleOffset := muxer.CurrentOffset + mp4.BeforeMdatData // offset of the sample data
mdatOffset := sampleOffset // offset of the media data
var audioTrack, videoTrack *mp4.Track // audio and video tracks
var file *os.File // file currently being processed
var moov box.IBox // MOOV box holding the metadata
streamCount := len(streams) // number of recordings
// Track ExtraData history for each track,
// used to handle codec parameter changes between recordings
type TrackHistory struct {
Track *mp4.Track
ExtraData []byte
}
var audioHistory, videoHistory []TrackHistory
// addAudioTrack registers an audio track on the muxer
addAudioTrack := func(track *mp4.Track) {
t := muxer.AddTrack(track.Cid)
t.ExtraData = track.ExtraData
t.SampleSize = track.SampleSize
t.SampleRate = track.SampleRate
t.ChannelCount = track.ChannelCount
// inherit the sample list from the previous audio track, if any
if len(audioHistory) > 0 {
t.Samplelist = audioHistory[len(audioHistory)-1].Track.Samplelist
}
@@ -175,11 +209,13 @@ func (p *MP4Plugin) download(w http.ResponseWriter, r *http.Request) {
audioHistory = append(audioHistory, TrackHistory{Track: t, ExtraData: track.ExtraData})
}
// addVideoTrack registers a video track on the muxer
addVideoTrack := func(track *mp4.Track) {
t := muxer.AddTrack(track.Cid)
t.ExtraData = track.ExtraData
t.Width = track.Width
t.Height = track.Height
// inherit the sample list from the previous video track, if any
if len(videoHistory) > 0 {
t.Samplelist = videoHistory[len(videoHistory)-1].Track.Samplelist
}
@@ -187,6 +223,7 @@ func (p *MP4Plugin) download(w http.ResponseWriter, r *http.Request) {
videoHistory = append(videoHistory, TrackHistory{Track: t, ExtraData: track.ExtraData})
}
// addTrack adds a track, reusing existing tracks when codec parameters repeat
addTrack := func(track *mp4.Track) {
var lastAudioTrack, lastVideoTrack *TrackHistory
if len(audioHistory) > 0 {
@@ -195,105 +232,150 @@ func (p *MP4Plugin) download(w http.ResponseWriter, r *http.Request) {
if len(videoHistory) > 0 {
lastVideoTrack = &videoHistory[len(videoHistory)-1]
}
if track.Cid.IsAudio() {
if lastAudioTrack == nil {
// first audio track
addAudioTrack(track)
} else if !bytes.Equal(lastAudioTrack.ExtraData, track.ExtraData) {
// audio codec parameters changed; check whether a track with the same parameters already exists
for _, history := range audioHistory {
if bytes.Equal(history.ExtraData, track.ExtraData) {
// reuse the track with identical parameters
audioTrack = history.Track
audioTrack.Samplelist = audioHistory[len(audioHistory)-1].Track.Samplelist
return
}
}
// create a new audio track
addAudioTrack(track)
}
} else if track.Cid.IsVideo() {
if lastVideoTrack == nil {
// first video track
addVideoTrack(track)
} else if !bytes.Equal(lastVideoTrack.ExtraData, track.ExtraData) {
// video codec parameters changed; check whether a track with the same parameters already exists
for _, history := range videoHistory {
if bytes.Equal(history.ExtraData, track.ExtraData) {
// reuse the track with identical parameters
videoTrack = history.Track
videoTrack.Samplelist = videoHistory[len(videoHistory)-1].Track.Samplelist
return
}
}
// create a new video track
addVideoTrack(track)
}
}
}
// process each recording file in order
for i, stream := range streams {
tsOffset = lastTs
tsOffset = lastTs // carry over the timestamp offset
// open the recording file
file, err = os.Open(stream.FilePath)
if err != nil {
return
}
p.Info("read", "file", file.Name())
// create a demuxer and parse the file
demuxer := mp4.NewDemuxer(file)
err = demuxer.Demux()
if err != nil {
return
}
trackCount := len(demuxer.Tracks)
// handle track information
if i == 0 || flag == mp4.FLAG_FRAGMENT {
// first file, or fragmented mode: add all tracks
for _, track := range demuxer.Tracks {
addTrack(track)
}
}
// check whether the number of tracks changed
if trackCount != len(muxer.Tracks) {
if flag == mp4.FLAG_FRAGMENT {
// regenerate the MOOV box in fragmented mode
moov = muxer.MakeMoov()
}
}
// apply the start-time offset (first file only)
if i == 0 {
startTimestamp := startTime.Sub(stream.StartTime).Milliseconds()
var startSample *box.Sample
if startSample, err = demuxer.SeekTime(uint64(startTimestamp)); err != nil {
tsOffset = 0
continue
if startTimestamp > 0 {
// the requested start is later than the file start, so seek to that point
var startSample *box.Sample
if startSample, err = demuxer.SeekTime(uint64(startTimestamp)); err != nil {
continue
}
tsOffset = -int64(startSample.Timestamp)
}
tsOffset = -int64(startSample.Timestamp)
}
var part *ContentPart
// iterate over every sample
for track, sample := range demuxer.RangeSample {
// stop past the end time (last file only)
if i == streamCount-1 && int64(sample.Timestamp) > endTime.Sub(stream.StartTime).Milliseconds() {
break
}
// start a new content part
if part == nil {
part = &ContentPart{
File: file,
Start: sample.Offset,
}
}
// compute the adjusted timestamp
lastTs = int64(sample.Timestamp + uint32(tsOffset))
fixSample := *sample
fixSample.Timestamp += uint32(tsOffset)
if flag == 0 {
// regular MP4 mode
fixSample.Offset = sampleOffset + (fixSample.Offset - part.Start)
part.Size += sample.Size
// append the sample to its track
if track.Cid.IsAudio() {
audioTrack.AddSampleEntry(fixSample)
} else if track.Cid.IsVideo() {
videoTrack.AddSampleEntry(fixSample)
}
} else {
// fragmented MP4 mode
// read the sample data
part.Seek(sample.Offset, io.SeekStart)
fixSample.Data = make([]byte, sample.Size)
part.Read(fixSample.Data)
// create a fragment
var moof, mdat box.IBox
if track.Cid.IsAudio() {
moof, mdat = muxer.CreateFlagment(audioTrack, fixSample)
} else if track.Cid.IsVideo() {
moof, mdat = muxer.CreateFlagment(videoTrack, fixSample)
}
// attach the fragment to the content part
if moof != nil {
part.boxies = append(part.boxies, moof, mdat)
part.Size += int(moof.Size() + mdat.Size())
}
}
}
// advance the offset and record the part
if part != nil {
sampleOffset += int64(part.Size)
parts = append(parts, part)
@@ -301,14 +383,21 @@ func (p *MP4Plugin) download(w http.ResponseWriter, r *http.Request) {
}
if flag == 0 {
// regular MP4 mode: emit a complete MP4 file
moovSize := muxer.MakeMoov().Size()
dataSize := uint64(sampleOffset - mdatOffset)
// set the Content-Length
w.Header().Set("Content-Length", fmt.Sprintf("%d", uint64(sampleOffset)+moovSize))
// shift the sample offsets to make room for the MOOV box
for _, track := range muxer.Tracks {
for i := range track.Samplelist {
track.Samplelist[i].Offset += int64(moovSize)
}
}
// create the MDAT box
mdatBox := box.CreateBaseBox(box.TypeMDAT, dataSize+box.BasicBoxLen)
var freeBox *box.FreeBox
@@ -318,11 +407,13 @@ func (p *MP4Plugin) download(w http.ResponseWriter, r *http.Request) {
var written, totalWritten int64
// write the file header: FTYP, MOOV, FREE and the MDAT header
totalWritten, err = box.WriteTo(w, ftyp, muxer.MakeMoov(), freeBox, mdatBox)
if err != nil {
return
}
// copy the data of every content part
for _, part := range parts {
part.Seek(part.Start, io.SeekStart)
written, err = io.CopyN(w, part.File, int64(part.Size))
@@ -333,15 +424,21 @@ func (p *MP4Plugin) download(w http.ResponseWriter, r *http.Request) {
part.Close()
}
} else {
// fragmented MP4 mode: emit fMP4 output
var children []box.IBox
var totalSize uint64
// add the file header and every fragment
children = append(children, ftyp, moov)
totalSize += uint64(ftyp.Size() + moov.Size())
for _, part := range parts {
totalSize += uint64(part.Size)
children = append(children, part.boxies...)
part.Close()
}
// set the Content-Length and write the data
w.Header().Set("Content-Length", fmt.Sprintf("%d", totalSize))
_, err = box.WriteTo(w, children...)
if err != nil {
@@ -361,26 +458,34 @@ func (p *MP4Plugin) StartRecord(ctx context.Context, req *mp4pb.ReqStartRecord)
filePath = req.FilePath
}
res = &mp4pb.ResponseStartRecord{}
p.Server.Records.Call(func() error {
_, recordExists = p.Server.Records.Find(func(job *m7s.RecordJob) bool {
return job.StreamPath == req.StreamPath && job.RecConf.FilePath == req.FilePath
})
return nil
_, recordExists = p.Server.Records.SafeFind(func(job *m7s.RecordJob) bool {
return job.StreamPath == req.StreamPath && job.RecConf.FilePath == req.FilePath
})
if recordExists {
err = pkg.ErrRecordExists
return
}
recordConf := config.Record{
Append: false,
Fragment: fragment,
FilePath: filePath,
}
if stream, ok := p.Server.Streams.SafeGet(req.StreamPath); ok {
recordConf := config.Record{
Append: false,
Fragment: fragment,
FilePath: filePath,
}
job := p.Record(stream, recordConf, nil)
res.Data = uint64(uintptr(unsafe.Pointer(job.GetTask())))
} else {
err = pkg.ErrNotFound
sub, err := p.Subscribe(ctx, req.StreamPath)
if err == nil && sub != nil {
if stream, ok := p.Server.Streams.SafeGet(req.StreamPath); ok {
job := p.Record(stream, recordConf, nil)
res.Data = uint64(uintptr(unsafe.Pointer(job.GetTask())))
} else {
err = pkg.ErrNotFound
}
} else {
err = pkg.ErrNotFound
}
}
return
}
@@ -388,19 +493,16 @@ func (p *MP4Plugin) StartRecord(ctx context.Context, req *mp4pb.ReqStartRecord)
func (p *MP4Plugin) StopRecord(ctx context.Context, req *mp4pb.ReqStopRecord) (res *mp4pb.ResponseStopRecord, err error) {
res = &mp4pb.ResponseStopRecord{}
var recordJob *m7s.RecordJob
p.Server.Records.Call(func() error {
recordJob, _ = p.Server.Records.Find(func(job *m7s.RecordJob) bool {
return job.StreamPath == req.StreamPath
})
if recordJob != nil {
t := recordJob.GetTask()
if t != nil {
res.Data = uint64(uintptr(unsafe.Pointer(t)))
t.Stop(task.ErrStopByUser)
}
}
return nil
recordJob, _ = p.Server.Records.SafeFind(func(job *m7s.RecordJob) bool {
return job.StreamPath == req.StreamPath
})
if recordJob != nil {
t := recordJob.GetTask()
if t != nil {
res.Data = uint64(uintptr(unsafe.Pointer(t)))
t.Stop(task.ErrStopByUser)
}
}
return
}
@@ -422,11 +524,8 @@ func (p *MP4Plugin) EventStart(ctx context.Context, req *mp4pb.ReqEventRecord) (
}
//recorder := p.Meta.Recorder(config.Record{})
var tmpJob *m7s.RecordJob
p.Server.Records.Call(func() error {
tmpJob, _ = p.Server.Records.Find(func(job *m7s.RecordJob) bool {
return job.StreamPath == req.StreamPath
})
return nil
tmpJob, _ = p.Server.Records.SafeFind(func(job *m7s.RecordJob) bool {
return job.StreamPath == req.StreamPath
})
if tmpJob == nil { // no recording in progress (no automatic recording), so start a normal event recording
if stream, ok := p.Server.Streams.SafeGet(req.StreamPath); ok {

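A hedged sketch of the revised StartRecord flow shown above: SafeFind replaces the Call/Find pair, and when the stream is not yet published the plugin subscribes first (triggering any on-demand pull) before starting the record job. The helper name and parameters are illustrative; only calls visible in this diff are used, so it compiles within the plugin package rather than standalone:

package plugin_mp4

import (
	"context"

	"m7s.live/v5"
	"m7s.live/v5/pkg"
	"m7s.live/v5/pkg/config"
)

func startOrAttachRecord(ctx context.Context, p *MP4Plugin, streamPath, filePath string) error {
	// refuse duplicate jobs for the same stream/path
	if _, exists := p.Server.Records.SafeFind(func(job *m7s.RecordJob) bool {
		return job.StreamPath == streamPath && job.RecConf.FilePath == filePath
	}); exists {
		return pkg.ErrRecordExists
	}
	recordConf := config.Record{FilePath: filePath}
	stream, ok := p.Server.Streams.SafeGet(streamPath)
	if !ok {
		// not published yet: subscribe first, then look the stream up again
		if _, err := p.Subscribe(ctx, streamPath); err != nil {
			return pkg.ErrNotFound
		}
		if stream, ok = p.Server.Streams.SafeGet(streamPath); !ok {
			return pkg.ErrNotFound
		}
	}
	p.Record(stream, recordConf, nil)
	return nil
}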
View File

@@ -37,9 +37,27 @@
| | | | | | sbgp | | sample-to-group |
| | | | | | sgpd | | sample group description |
| | | | | | subs | | sub-sample information |
| | | udta | | | | | user-data (track level)<br>轨道级别的用户数据容器 |
| | | | cprt | | | | copyright etc.<br>版权信息 |
| | | | titl | | | | title<br>标题 |
| | | | auth | | | | author<br>作者 |
| | mvex | | | | | | movie extends box |
| | | mehd | | | | | movie extends header box |
| | | trex | | | | ✓ | track extends defaults |
| | udta | | | | | | user-data (movie level)<br>电影级别的用户数据容器 |
| | | cprt | | | | | copyright etc.<br>版权信息 |
| | | titl | | | | | title<br>标题 |
| | | auth | | | | | author<br>作者 |
| | | albm | | | | | album<br>专辑 |
| | | yrrc | | | | | year<br>年份 |
| | | rtng | | | | | rating<br>评级 |
| | | clsf | | | | | classification<br>分类 |
| | | kywd | | | | | keywords<br>关键词 |
| | | loci | | | | | location information<br>位置信息 |
| | | dscp | | | | | description<br>描述 |
| | | perf | | | | | performer<br>表演者 |
| | | gnre | | | | | genre<br>类型 |
| | | meta | | | | | metadata atom<br>元数据原子 |
| | ipmc | | | | | | IPMP Control Box |
| moof | | | | | | | movie fragment |
| | mfhd | | | | | ✓ | movie fragment header |
@@ -54,8 +72,10 @@
| mdat | | | | | | | media data container |
| free | | | | | | | free space |
| skip | | | | | | | free space |
| | udta | | | | | | user-data |
| | | cprt | | | | | copyright etc. |
| udta | | | | | | | user-data (file level)<br>文件级别的用户数据容器 |
| | cprt | | | | | | copyright etc.<br>版权信息 |
| | titl | | | | | | title<br>标题 |
| | auth | | | | | | author<br>作者 |
| meta | | | | | | | metadata |
| | hdlr | | | | | ✓ | handler, declares the metadata (handler) type |
| | dinf | | | | | | data information box, container |

View File

@@ -343,7 +343,25 @@ var (
TypeAUXV = f("auxv")
TypeHINT = f("hint")
TypeUDTA = f("udta")
TypeM7SP = f("m7sp") // Custom box type for M7S StreamPath
// Common metadata box types
TypeTITL = f("©nam") // Title
TypeART = f("©ART") // Artist/Author
TypeALB = f("©alb") // Album
TypeDAY = f("©day") // Date/Year
TypeCMT = f("©cmt") // Comment/Description
TypeGEN = f("©gen") // Genre
TypeCPRT = f("cprt") // Copyright
TypeENCO = f("©too") // Encoder/Tool
TypeWRT = f("©wrt") // Writer/Composer
TypePRD = f("©prd") // Producer
TypePRF = f("©prf") // Performer
TypeGRP = f("©grp") // Grouping
TypeLYR = f("©lyr") // Lyrics
TypeKEYW = f("keyw") // Keywords
TypeLOCI = f("loci") // Location Information
TypeRTNG = f("rtng") // Rating
TypeMETA_CUST = f("----") // Custom metadata (iTunes-style)
)
// aligned(8) class Box (unsigned int(32) boxtype, optional unsigned int(8)[16] extended_type) {

View File

@@ -0,0 +1,334 @@
package box
import (
"encoding/binary"
"io"
"time"
)
// Metadata holds various metadata information for MP4
type Metadata struct {
Title string // title
Artist string // artist / author
Album string // album
Date string // date
Comment string // comment / description
Genre string // genre
Copyright string // copyright notice
Encoder string // encoder
Writer string // writer / composer
Producer string // producer
Performer string // performer
Grouping string // grouping
Lyrics string // lyrics
Keywords string // keywords
Location string // location information
Rating uint8 // rating (0-5)
Custom map[string]string // custom key-value pairs
}
// Text Data Box - for storing text metadata
type TextDataBox struct {
FullBox
Text string
}
// Metadata Data Box - for storing binary metadata with type indicator
type MetadataDataBox struct {
FullBox
DataType uint32 // Data type indicator
Country uint32 // Country code
Language uint32 // Language code
Data []byte // Actual data
}
// Copyright Box
type CopyrightBox struct {
FullBox
Language [3]byte
Notice string
}
// Custom Metadata Box (iTunes-style ---- box)
type CustomMetadataBox struct {
BaseBox
Mean string // Mean (namespace)
Name string // Name (key)
Data []byte // Data
}
// Create functions
func CreateTextDataBox(boxType BoxType, text string) *TextDataBox {
return &TextDataBox{
FullBox: FullBox{
BaseBox: BaseBox{
typ: boxType,
size: uint32(FullBoxLen + len(text)),
},
Version: 0,
Flags: [3]byte{0, 0, 0},
},
Text: text,
}
}
func CreateMetadataDataBox(dataType uint32, data []byte) *MetadataDataBox {
return &MetadataDataBox{
FullBox: FullBox{
BaseBox: BaseBox{
typ: f("data"),
size: uint32(FullBoxLen + 8 + len(data)), // 8 bytes: 4-byte data type indicator + 4-byte country code
},
Version: 0,
Flags: [3]byte{0, 0, 0},
},
DataType: dataType,
Country: 0,
Language: 0,
Data: data,
}
}
func CreateCopyrightBox(language [3]byte, notice string) *CopyrightBox {
return &CopyrightBox{
FullBox: FullBox{
BaseBox: BaseBox{
typ: TypeCPRT,
size: uint32(FullBoxLen + 3 + 1 + len(notice)), // 3 for language, 1 for null terminator
},
Version: 0,
Flags: [3]byte{0, 0, 0},
},
Language: language,
Notice: notice,
}
}
func CreateCustomMetadataBox(mean, name string, data []byte) *CustomMetadataBox {
size := uint32(BasicBoxLen + 4 + len(mean) + 4 + len(name) + len(data))
return &CustomMetadataBox{
BaseBox: BaseBox{
typ: TypeMETA_CUST,
size: size,
},
Mean: mean,
Name: name,
Data: data,
}
}
// WriteTo methods
func (box *TextDataBox) WriteTo(w io.Writer) (n int64, err error) {
nn, err := w.Write([]byte(box.Text))
return int64(nn), err
}
func (box *MetadataDataBox) WriteTo(w io.Writer) (n int64, err error) {
var tmp [8]byte
binary.BigEndian.PutUint32(tmp[0:4], box.DataType)
binary.BigEndian.PutUint32(tmp[4:8], box.Country)
// Language field is implicit zero
nn, err := w.Write(tmp[:8])
if err != nil {
return int64(nn), err
}
n = int64(nn)
nn, err = w.Write(box.Data)
return n + int64(nn), err
}
func (box *CopyrightBox) WriteTo(w io.Writer) (n int64, err error) {
// Write language code
nn, err := w.Write(box.Language[:])
if err != nil {
return int64(nn), err
}
n = int64(nn)
// Write notice + null terminator
nn, err = w.Write([]byte(box.Notice + "\x00"))
return n + int64(nn), err
}
func (box *CustomMetadataBox) WriteTo(w io.Writer) (n int64, err error) {
var tmp [4]byte
// Write mean length + mean
binary.BigEndian.PutUint32(tmp[:], uint32(len(box.Mean)))
nn, err := w.Write(tmp[:])
if err != nil {
return int64(nn), err
}
n = int64(nn)
nn, err = w.Write([]byte(box.Mean))
if err != nil {
return n + int64(nn), err
}
n += int64(nn)
// Write name length + name
binary.BigEndian.PutUint32(tmp[:], uint32(len(box.Name)))
nn, err = w.Write(tmp[:])
if err != nil {
return n + int64(nn), err
}
n += int64(nn)
nn, err = w.Write([]byte(box.Name))
if err != nil {
return n + int64(nn), err
}
n += int64(nn)
// Write data
nn, err = w.Write(box.Data)
return n + int64(nn), err
}
// Unmarshal methods
func (box *TextDataBox) Unmarshal(buf []byte) (IBox, error) {
box.Text = string(buf)
return box, nil
}
func (box *MetadataDataBox) Unmarshal(buf []byte) (IBox, error) {
if len(buf) < 8 {
return nil, io.ErrShortBuffer
}
box.DataType = binary.BigEndian.Uint32(buf[0:4])
box.Country = binary.BigEndian.Uint32(buf[4:8])
box.Data = buf[8:]
return box, nil
}
func (box *CopyrightBox) Unmarshal(buf []byte) (IBox, error) {
if len(buf) < 4 {
return nil, io.ErrShortBuffer
}
copy(box.Language[:], buf[0:3])
// Find null terminator
for i := 3; i < len(buf); i++ {
if buf[i] == 0 {
box.Notice = string(buf[3:i])
break
}
}
if box.Notice == "" && len(buf) > 3 {
box.Notice = string(buf[3:])
}
return box, nil
}
func (box *CustomMetadataBox) Unmarshal(buf []byte) (IBox, error) {
if len(buf) < 8 {
return nil, io.ErrShortBuffer
}
offset := 0
// Read mean length + mean
meanLen := binary.BigEndian.Uint32(buf[offset:])
offset += 4
if offset+int(meanLen) > len(buf) {
return nil, io.ErrShortBuffer
}
box.Mean = string(buf[offset : offset+int(meanLen)])
offset += int(meanLen)
// Read name length + name
if offset+4 > len(buf) {
return nil, io.ErrShortBuffer
}
nameLen := binary.BigEndian.Uint32(buf[offset:])
offset += 4
if offset+int(nameLen) > len(buf) {
return nil, io.ErrShortBuffer
}
box.Name = string(buf[offset : offset+int(nameLen)])
offset += int(nameLen)
// Read remaining data
box.Data = buf[offset:]
return box, nil
}
// Create metadata entries from Metadata struct
func CreateMetadataEntries(metadata *Metadata) []IBox {
var entries []IBox
// Standard text metadata
if metadata.Title != "" {
entries = append(entries, CreateTextDataBox(TypeTITL, metadata.Title))
}
if metadata.Artist != "" {
entries = append(entries, CreateTextDataBox(TypeART, metadata.Artist))
}
if metadata.Album != "" {
entries = append(entries, CreateTextDataBox(TypeALB, metadata.Album))
}
if metadata.Date != "" {
entries = append(entries, CreateTextDataBox(TypeDAY, metadata.Date))
}
if metadata.Comment != "" {
entries = append(entries, CreateTextDataBox(TypeCMT, metadata.Comment))
}
if metadata.Genre != "" {
entries = append(entries, CreateTextDataBox(TypeGEN, metadata.Genre))
}
if metadata.Encoder != "" {
entries = append(entries, CreateTextDataBox(TypeENCO, metadata.Encoder))
}
if metadata.Writer != "" {
entries = append(entries, CreateTextDataBox(TypeWRT, metadata.Writer))
}
if metadata.Producer != "" {
entries = append(entries, CreateTextDataBox(TypePRD, metadata.Producer))
}
if metadata.Performer != "" {
entries = append(entries, CreateTextDataBox(TypePRF, metadata.Performer))
}
if metadata.Grouping != "" {
entries = append(entries, CreateTextDataBox(TypeGRP, metadata.Grouping))
}
if metadata.Lyrics != "" {
entries = append(entries, CreateTextDataBox(TypeLYR, metadata.Lyrics))
}
if metadata.Keywords != "" {
entries = append(entries, CreateTextDataBox(TypeKEYW, metadata.Keywords))
}
if metadata.Location != "" {
entries = append(entries, CreateTextDataBox(TypeLOCI, metadata.Location))
}
// Copyright (special format)
if metadata.Copyright != "" {
entries = append(entries, CreateCopyrightBox([3]byte{'u', 'n', 'd'}, metadata.Copyright))
}
// Custom metadata
for key, value := range metadata.Custom {
entries = append(entries, CreateCustomMetadataBox("live.m7s.custom", key, []byte(value)))
}
return entries
}
// Helper function to create current date string
func GetCurrentDateString() string {
return time.Now().Format("2006-01-02")
}
func init() {
RegisterBox[*TextDataBox](TypeTITL, TypeART, TypeALB, TypeDAY, TypeCMT, TypeGEN, TypeENCO, TypeWRT, TypePRD, TypePRF, TypeGRP, TypeLYR, TypeKEYW, TypeLOCI, TypeRTNG)
RegisterBox[*MetadataDataBox](f("data"))
RegisterBox[*CopyrightBox](TypeCPRT)
RegisterBox[*CustomMetadataBox](TypeMETA_CUST)
}
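A hypothetical usage sketch for the new metadata helpers above: build a Metadata value, expand it into boxes with CreateMetadataEntries, wrap them in a user-data box, and serialize with the package's box writer. The values are examples, and the import path is the one this file already uses:

package main

import (
	"bytes"
	"fmt"

	"m7s.live/v5/plugin/mp4/pkg/box"
)

func main() {
	meta := &box.Metadata{
		Title:     "camera-01",
		Artist:    "M7S Live",
		Copyright: "(c) 2025 example",
		Custom:    map[string]string{"streamPath": "live/camera-01"},
	}
	// standard text entries plus one iTunes-style ---- box for the custom key
	udta := box.CreateUserDataBox(box.CreateMetadataEntries(meta)...)
	var buf bytes.Buffer
	n, err := box.WriteTo(&buf, udta)
	fmt.Println("bytes written:", n, "err:", err)
}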

View File

@@ -12,12 +12,6 @@ type UserDataBox struct {
Entries []IBox
}
// Custom metadata box for storing stream path
type StreamPathBox struct {
FullBox
StreamPath string
}
// Create a new User Data Box
func CreateUserDataBox(entries ...IBox) *UserDataBox {
size := uint32(BasicBoxLen)
@@ -33,21 +27,6 @@ func CreateUserDataBox(entries ...IBox) *UserDataBox {
}
}
// Create a new StreamPath Box
func CreateStreamPathBox(streamPath string) *StreamPathBox {
return &StreamPathBox{
FullBox: FullBox{
BaseBox: BaseBox{
typ: TypeM7SP, // Custom box type for M7S StreamPath
size: uint32(FullBoxLen + len(streamPath)),
},
Version: 0,
Flags: [3]byte{0, 0, 0},
},
StreamPath: streamPath,
}
}
// WriteTo writes the UserDataBox to the given writer
func (box *UserDataBox) WriteTo(w io.Writer) (n int64, err error) {
return WriteTo(w, box.Entries...)
@@ -69,19 +48,6 @@ func (box *UserDataBox) Unmarshal(buf []byte) (IBox, error) {
return box, nil
}
// WriteTo writes the StreamPathBox to the given writer
func (box *StreamPathBox) WriteTo(w io.Writer) (n int64, err error) {
nn, err := w.Write([]byte(box.StreamPath))
return int64(nn), err
}
// Unmarshal parses the given buffer into a StreamPathBox
func (box *StreamPathBox) Unmarshal(buf []byte) (IBox, error) {
box.StreamPath = string(buf)
return box, nil
}
func init() {
RegisterBox[*UserDataBox](TypeUDTA)
RegisterBox[*StreamPathBox](TypeM7SP)
}

View File

@@ -0,0 +1,121 @@
package mp4
import (
"context"
"os"
"time"
"m7s.live/v5"
"m7s.live/v5/plugin/mp4/pkg/box"
)
type DemuxerRange struct {
StartTime, EndTime time.Time
Streams []m7s.RecordStream
OnAudioExtraData func(codec box.MP4_CODEC_TYPE, data []byte) error
OnVideoExtraData func(codec box.MP4_CODEC_TYPE, data []byte) error
OnAudioSample func(codec box.MP4_CODEC_TYPE, sample box.Sample) error
OnVideoSample func(codec box.MP4_CODEC_TYPE, sample box.Sample) error
}
func (d *DemuxerRange) Demux(ctx context.Context) error {
var ts, tsOffset int64
for _, stream := range d.Streams {
// skip recordings that do not overlap the requested time range
if stream.EndTime.Before(d.StartTime) || stream.StartTime.After(d.EndTime) {
continue
}
tsOffset = ts
file, err := os.Open(stream.FilePath)
if err != nil {
continue
}
defer file.Close()
demuxer := NewDemuxer(file)
if err = demuxer.Demux(); err != nil {
return err
}
// emit each track's extra data (sequence headers)
for _, track := range demuxer.Tracks {
switch track.Cid {
case box.MP4_CODEC_H264, box.MP4_CODEC_H265:
if d.OnVideoExtraData != nil {
err := d.OnVideoExtraData(track.Cid, track.ExtraData)
if err != nil {
return err
}
}
case box.MP4_CODEC_AAC, box.MP4_CODEC_G711A, box.MP4_CODEC_G711U:
if d.OnAudioExtraData != nil {
err := d.OnAudioExtraData(track.Cid, track.ExtraData)
if err != nil {
return err
}
}
}
}
// compute the starting timestamp offset
if !d.StartTime.IsZero() {
startTimestamp := d.StartTime.Sub(stream.StartTime).Milliseconds()
if startTimestamp < 0 {
startTimestamp = 0
}
if startSample, err := demuxer.SeekTime(uint64(startTimestamp)); err == nil {
tsOffset = -int64(startSample.Timestamp)
} else {
tsOffset = 0
}
}
// read and dispatch the samples
for track, sample := range demuxer.ReadSample {
if ctx.Err() != nil {
return context.Cause(ctx)
}
// stop once the sample passes the end time
sampleTime := stream.StartTime.Add(time.Duration(sample.Timestamp) * time.Millisecond)
if !d.EndTime.IsZero() && sampleTime.After(d.EndTime) {
break
}
// locate and slice the sample data inside mdat
sampleOffset := int(sample.Offset) - int(demuxer.mdatOffset)
if sampleOffset < 0 || sampleOffset+sample.Size > len(demuxer.mdat.Data) {
continue
}
sample.Data = demuxer.mdat.Data[sampleOffset : sampleOffset+sample.Size]
// rebase the timestamp
if int64(sample.Timestamp)+tsOffset < 0 {
ts = 0
} else {
ts = int64(sample.Timestamp + uint32(tsOffset))
}
sample.Timestamp = uint32(ts)
// dispatch to the audio or video callback by codec
switch track.Cid {
case box.MP4_CODEC_H264, box.MP4_CODEC_H265:
if d.OnVideoSample != nil {
err := d.OnVideoSample(track.Cid, sample)
if err != nil {
return err
}
}
case box.MP4_CODEC_AAC, box.MP4_CODEC_G711A, box.MP4_CODEC_G711U:
if d.OnAudioSample != nil {
err := d.OnAudioSample(track.Cid, sample)
if err != nil {
return err
}
}
}
}
}
return nil
}
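An illustrative, same-package sketch of driving the new DemuxerRange through its callbacks; the function name and the print statements are placeholders, and the records slice would come from the usual RecordStream query:

package mp4

import (
	"context"
	"fmt"
	"time"

	"m7s.live/v5"
	"m7s.live/v5/plugin/mp4/pkg/box"
)

// replayLastHour feeds the last hour of the given recordings through
// DemuxerRange and just prints what the callbacks receive.
func replayLastHour(records []m7s.RecordStream) error {
	end := time.Now()
	d := &DemuxerRange{
		StartTime: end.Add(-time.Hour),
		EndTime:   end,
		Streams:   records,
		OnVideoExtraData: func(codec box.MP4_CODEC_TYPE, data []byte) error {
			fmt.Println("video sequence header,", len(data), "bytes")
			return nil
		},
		OnVideoSample: func(codec box.MP4_CODEC_TYPE, sample box.Sample) error {
			// timestamps arrive already rebased so they stay monotonic across files
			fmt.Println("video sample at", sample.Timestamp, "ms,", sample.Size, "bytes")
			return nil
		},
	}
	return d.Demux(context.Background())
}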

View File

@@ -152,7 +152,7 @@ func (d *Demuxer) Demux() (err error) {
switch entry.Type() {
case TypeAVC1:
track.Cid = MP4_CODEC_H264
case TypeHVC1:
case TypeHVC1, TypeHEV1:
track.Cid = MP4_CODEC_H265
}
track.Width = uint32(entry.Width)
@@ -393,8 +393,8 @@ func (d *Demuxer) ReadSample(yield func(*Track, Sample) bool) {
whichTrack = track
whichTracki = i
} else {
dts1 := minTsSample.Timestamp * uint32(d.moov.MVHD.Timescale) / uint32(whichTrack.Timescale)
dts2 := track.Samplelist[idx].Timestamp * uint32(d.moov.MVHD.Timescale) / uint32(track.Timescale)
dts1 := uint64(minTsSample.Timestamp) * uint64(d.moov.MVHD.Timescale) / uint64(whichTrack.Timescale)
dts2 := uint64(track.Samplelist[idx].Timestamp) * uint64(d.moov.MVHD.Timescale) / uint64(track.Timescale)
if dts1 > dts2 {
minTsSample = track.Samplelist[idx]
whichTrack = track

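The widening to uint64 matters because the old uint32 product overflows quickly. A standalone demo of the wrap, assuming a 1000-tick movie timescale and a typical 90 kHz video track:

package main

import "fmt"

func main() {
	const mvhdTimescale = 1000    // movie (MVHD) timescale, milliseconds
	const trackTimescale = 90000  // typical 90 kHz video track timescale
	ts := uint32(48 * trackTimescale) // a sample roughly 48 s into the file

	// the 32-bit product wraps past 2^32, so the track comparison goes wrong
	fmt.Println(ts * mvhdTimescale / trackTimescale) // 278 (wrong)
	// widened to 64 bits, as in the fix above
	fmt.Println(uint64(ts) * uint64(mvhdTimescale) / uint64(trackTimescale)) // 48000 (ms)
}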
View File

@@ -29,7 +29,8 @@ type (
moov IBox
mdatOffset uint64
mdatSize uint64
StreamPath string // Added to store the stream path
StreamPath string // Added to store the stream path
Metadata *Metadata // metadata support
}
)
@@ -52,6 +53,7 @@ func NewMuxer(flag Flag) *Muxer {
Tracks: make(map[uint32]*Track),
Flag: flag,
fragDuration: 2000,
Metadata: &Metadata{Custom: make(map[string]string)},
}
}
@@ -59,6 +61,8 @@ func NewMuxer(flag Flag) *Muxer {
func NewMuxerWithStreamPath(flag Flag, streamPath string) *Muxer {
muxer := NewMuxer(flag)
muxer.StreamPath = streamPath
muxer.Metadata.Producer = "M7S Live"
muxer.Metadata.Album = streamPath
return muxer
}
@@ -232,10 +236,10 @@ func (m *Muxer) MakeMoov() IBox {
children = append(children, m.makeMvex())
}
// Add user data box with stream path if available
if m.StreamPath != "" {
streamPathBox := CreateStreamPathBox(m.StreamPath)
udta := CreateUserDataBox(streamPathBox)
// Add user data box with metadata if available
metadataEntries := CreateMetadataEntries(m.Metadata)
if len(metadataEntries) > 0 {
udta := CreateUserDataBox(metadataEntries...)
children = append(children, udta)
}
@@ -365,3 +369,82 @@ func (m *Muxer) WriteTrailer(file *os.File) (err error) {
func (m *Muxer) SetFragmentDuration(duration uint32) {
m.fragDuration = duration
}
// SetMetadata sets the metadata for the MP4 file
func (m *Muxer) SetMetadata(metadata *Metadata) {
m.Metadata = metadata
if metadata.Custom == nil {
metadata.Custom = make(map[string]string)
}
}
// SetTitle sets the title metadata
func (m *Muxer) SetTitle(title string) {
m.Metadata.Title = title
}
// SetArtist sets the artist/author metadata
func (m *Muxer) SetArtist(artist string) {
m.Metadata.Artist = artist
}
// SetAlbum sets the album metadata
func (m *Muxer) SetAlbum(album string) {
m.Metadata.Album = album
}
// SetComment sets the comment/description metadata
func (m *Muxer) SetComment(comment string) {
m.Metadata.Comment = comment
}
// SetGenre sets the genre metadata
func (m *Muxer) SetGenre(genre string) {
m.Metadata.Genre = genre
}
// SetCopyright sets the copyright metadata
func (m *Muxer) SetCopyright(copyright string) {
m.Metadata.Copyright = copyright
}
// SetEncoder sets the encoder metadata
func (m *Muxer) SetEncoder(encoder string) {
m.Metadata.Encoder = encoder
}
// SetDate sets the date metadata (format: YYYY-MM-DD)
func (m *Muxer) SetDate(date string) {
m.Metadata.Date = date
}
// SetCurrentDate sets the date metadata to current date
func (m *Muxer) SetCurrentDate() {
m.Metadata.Date = GetCurrentDateString()
}
// AddCustomMetadata adds custom key-value metadata
func (m *Muxer) AddCustomMetadata(key, value string) {
if m.Metadata.Custom == nil {
m.Metadata.Custom = make(map[string]string)
}
m.Metadata.Custom[key] = value
}
// SetKeywords sets the keywords metadata
func (m *Muxer) SetKeywords(keywords string) {
m.Metadata.Keywords = keywords
}
// SetLocation sets the location metadata
func (m *Muxer) SetLocation(location string) {
m.Metadata.Location = location
}
// SetRating sets the rating metadata (0-5)
func (m *Muxer) SetRating(rating uint8) {
if rating > 5 {
rating = 5
}
m.Metadata.Rating = rating
}
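A hypothetical sketch of annotating a recording with the new setters; the string values are examples, and flag 0 selects the regular (non-fragmented) layout as elsewhere in this diff:

package mp4

// newAnnotatedMuxer shows the metadata setters in use.
func newAnnotatedMuxer(streamPath string) *Muxer {
	m := NewMuxerWithStreamPath(0, streamPath) // Album and Producer are pre-filled from the stream path
	m.SetTitle(streamPath)
	m.SetEncoder("m7s mp4 plugin")
	m.SetCurrentDate()
	m.AddCustomMetadata("streamPath", streamPath)
	return m
}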

View File

@@ -20,6 +20,10 @@ type HTTPReader struct {
func (p *HTTPReader) Run() (err error) {
pullJob := &p.PullJob
publisher := pullJob.Publisher
if publisher == nil {
io.Copy(io.Discard, p.ReadCloser)
return
}
allocator := util.NewScalableMemoryAllocator(1 << 10)
var demuxer *Demuxer
defer allocator.Recycle()
@@ -36,12 +40,12 @@ func (p *HTTPReader) Run() (err error) {
}
publisher.OnSeek = func(seekTime time.Time) {
p.Stop(errors.New("seek"))
pullJob.Args.Set(util.StartKey, seekTime.Local().Format(util.LocalTimeFormat))
pullJob.Connection.Args.Set(util.StartKey, seekTime.Local().Format(util.LocalTimeFormat))
newHTTPReader := &HTTPReader{}
pullJob.AddTask(newHTTPReader)
}
if pullJob.Args.Get(util.StartKey) != "" {
seekTime, _ := time.Parse(util.LocalTimeFormat, pullJob.Args.Get(util.StartKey))
if pullJob.Connection.Args.Get(util.StartKey) != "" {
seekTime, _ := time.Parse(util.LocalTimeFormat, pullJob.Connection.Args.Get(util.StartKey))
demuxer.SeekTime(uint64(seekTime.UnixMilli()))
}
for _, track := range demuxer.Tracks {
@@ -63,70 +67,92 @@ func (p *HTTPReader) Run() (err error) {
err = publisher.WriteAudio(&sequence)
}
}
// track the maximum timestamp so each loop can add a cumulative offset
var maxTimestamp uint64
for track, sample := range demuxer.ReadSample {
if p.IsStopped() {
break
timestamp := uint64(sample.Timestamp) * 1000 / uint64(track.Timescale)
if timestamp > maxTimestamp {
maxTimestamp = timestamp
}
if _, err = demuxer.reader.Seek(sample.Offset, io.SeekStart); err != nil {
return
}
sample.Data = allocator.Malloc(sample.Size)
if _, err = io.ReadFull(demuxer.reader, sample.Data); err != nil {
allocator.Free(sample.Data)
return
}
switch track.Cid {
case box.MP4_CODEC_H264:
var videoFrame rtmp.RTMPVideo
videoFrame.SetAllocator(allocator)
videoFrame.CTS = sample.CTS
videoFrame.Timestamp = sample.Timestamp * 1000 / track.Timescale
videoFrame.AppendOne([]byte{util.Conditional[byte](sample.KeyFrame, 0x17, 0x27), 0x01, byte(videoFrame.CTS >> 24), byte(videoFrame.CTS >> 8), byte(videoFrame.CTS)})
videoFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteVideo(&videoFrame)
case box.MP4_CODEC_H265:
var videoFrame rtmp.RTMPVideo
videoFrame.SetAllocator(allocator)
videoFrame.CTS = uint32(sample.CTS)
videoFrame.Timestamp = sample.Timestamp * 1000 / track.Timescale
var head []byte
var b0 byte = 0b1010_0000
if sample.KeyFrame {
b0 = 0b1001_0000
}
var timestampOffset uint64
loop := p.PullJob.Loop
for {
demuxer.ReadSampleIdx = make([]uint32, len(demuxer.Tracks))
for track, sample := range demuxer.ReadSample {
if p.IsStopped() {
return
}
if videoFrame.CTS == 0 {
head = videoFrame.NextN(5)
head[0] = b0 | rtmp.PacketTypeCodedFramesX
} else {
head = videoFrame.NextN(8)
head[0] = b0 | rtmp.PacketTypeCodedFrames
util.PutBE(head[5:8], videoFrame.CTS) // cts
if _, err = demuxer.reader.Seek(sample.Offset, io.SeekStart); err != nil {
return
}
sample.Data = allocator.Malloc(sample.Size)
if _, err = io.ReadFull(demuxer.reader, sample.Data); err != nil {
allocator.Free(sample.Data)
return
}
switch track.Cid {
case box.MP4_CODEC_H264:
var videoFrame rtmp.RTMPVideo
videoFrame.SetAllocator(allocator)
videoFrame.CTS = sample.CTS
videoFrame.Timestamp = uint32(uint64(sample.Timestamp)*1000/uint64(track.Timescale) + timestampOffset)
videoFrame.AppendOne([]byte{util.Conditional[byte](sample.KeyFrame, 0x17, 0x27), 0x01, byte(videoFrame.CTS >> 24), byte(videoFrame.CTS >> 8), byte(videoFrame.CTS)})
videoFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteVideo(&videoFrame)
case box.MP4_CODEC_H265:
var videoFrame rtmp.RTMPVideo
videoFrame.SetAllocator(allocator)
videoFrame.CTS = uint32(sample.CTS)
videoFrame.Timestamp = uint32(uint64(sample.Timestamp)*1000/uint64(track.Timescale) + timestampOffset)
var head []byte
var b0 byte = 0b1010_0000
if sample.KeyFrame {
b0 = 0b1001_0000
}
if videoFrame.CTS == 0 {
head = videoFrame.NextN(5)
head[0] = b0 | rtmp.PacketTypeCodedFramesX
} else {
head = videoFrame.NextN(8)
head[0] = b0 | rtmp.PacketTypeCodedFrames
util.PutBE(head[5:8], videoFrame.CTS) // cts
}
copy(head[1:], codec.FourCC_H265[:])
videoFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteVideo(&videoFrame)
case box.MP4_CODEC_AAC:
var audioFrame rtmp.RTMPAudio
audioFrame.SetAllocator(allocator)
audioFrame.Timestamp = uint32(uint64(sample.Timestamp)*1000/uint64(track.Timescale) + timestampOffset)
audioFrame.AppendOne([]byte{0xaf, 0x01})
audioFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteAudio(&audioFrame)
case box.MP4_CODEC_G711A:
var audioFrame rtmp.RTMPAudio
audioFrame.SetAllocator(allocator)
audioFrame.Timestamp = uint32(uint64(sample.Timestamp)*1000/uint64(track.Timescale) + timestampOffset)
audioFrame.AppendOne([]byte{0x72})
audioFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteAudio(&audioFrame)
case box.MP4_CODEC_G711U:
var audioFrame rtmp.RTMPAudio
audioFrame.SetAllocator(allocator)
audioFrame.Timestamp = uint32(uint64(sample.Timestamp)*1000/uint64(track.Timescale) + timestampOffset)
audioFrame.AppendOne([]byte{0x82})
audioFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteAudio(&audioFrame)
}
copy(head[1:], codec.FourCC_H265[:])
videoFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteVideo(&videoFrame)
case box.MP4_CODEC_AAC:
var audioFrame rtmp.RTMPAudio
audioFrame.SetAllocator(allocator)
audioFrame.Timestamp = sample.Timestamp * 1000 / track.Timescale
audioFrame.AppendOne([]byte{0xaf, 0x01})
audioFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteAudio(&audioFrame)
case box.MP4_CODEC_G711A:
var audioFrame rtmp.RTMPAudio
audioFrame.SetAllocator(allocator)
audioFrame.Timestamp = sample.Timestamp * 1000 / track.Timescale
audioFrame.AppendOne([]byte{0x72})
audioFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteAudio(&audioFrame)
case box.MP4_CODEC_G711U:
var audioFrame rtmp.RTMPAudio
audioFrame.SetAllocator(allocator)
audioFrame.Timestamp = sample.Timestamp * 1000 / track.Timescale
audioFrame.AppendOne([]byte{0x82})
audioFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteAudio(&audioFrame)
}
if loop >= 0 {
loop--
if loop == -1 {
break
}
}
// after each pass, grow the offset so the next loop's timestamps keep increasing
timestampOffset += maxTimestamp + 1
}
return
}
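A standalone sketch of the loop-playback rebasing used above: after each pass the offset jumps past the largest timestamp already published, so output timestamps stay monotonic. The sample values are made up:

package main

import "fmt"

func main() {
	samples := []uint64{0, 40, 80, 120} // sample timestamps within one file, in ms
	var timestampOffset, maxTimestamp uint64
	for loop := 0; loop < 2; loop++ {
		for _, ts := range samples {
			out := ts + timestampOffset
			if out > maxTimestamp {
				maxTimestamp = out
			}
			fmt.Println("publish at", out, "ms")
		}
		// after each pass, jump past everything already published
		timestampOffset = maxTimestamp + 1
	}
}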

View File

@@ -1,11 +1,11 @@
package mp4
import (
"os"
"strings"
"time"
m7s "m7s.live/v5"
"m7s.live/v5/pkg"
"m7s.live/v5/pkg/codec"
"m7s.live/v5/pkg/config"
"m7s.live/v5/pkg/task"
@@ -39,152 +39,159 @@ func NewPuller(conf config.Pull) m7s.IPuller {
func (p *RecordReader) Run() (err error) {
pullJob := &p.PullJob
publisher := pullJob.Publisher
// allocator := util.NewScalableMemoryAllocator(1 << 10)
var ts, tsOffset int64
if publisher == nil {
return pkg.ErrDisabled
}
var realTime time.Time
// defer allocator.Recycle()
publisher.OnGetPosition = func() time.Time {
return realTime
}
for loop := 0; loop < p.Loop; loop++ {
nextStream:
for i, stream := range p.Streams {
tsOffset = ts
if p.File != nil {
p.File.Close()
// simplified timestamp bookkeeping
var ts int64 // current timestamp
var tsOffset int64 // timestamp offset
// reusable DemuxerRange instance
demuxerRange := &DemuxerRange{}
// audio/video extra-data (sequence header) callbacks
demuxerRange.OnVideoExtraData = func(codecType box.MP4_CODEC_TYPE, data []byte) error {
switch codecType {
case box.MP4_CODEC_H264:
var sequence rtmp.RTMPVideo
sequence.Append([]byte{0x17, 0x00, 0x00, 0x00, 0x00}, data)
err = publisher.WriteVideo(&sequence)
case box.MP4_CODEC_H265:
var sequence rtmp.RTMPVideo
sequence.Append([]byte{0b1001_0000 | rtmp.PacketTypeSequenceStart}, codec.FourCC_H265[:], data)
err = publisher.WriteVideo(&sequence)
}
return err
}
demuxerRange.OnAudioExtraData = func(codecType box.MP4_CODEC_TYPE, data []byte) error {
if codecType == box.MP4_CODEC_AAC {
var sequence rtmp.RTMPAudio
sequence.Append([]byte{0xaf, 0x00}, data)
err = publisher.WriteAudio(&sequence)
}
return err
}
// video sample callback
demuxerRange.OnVideoSample = func(codecType box.MP4_CODEC_TYPE, sample box.Sample) error {
if publisher.Paused != nil {
publisher.Paused.Await()
}
// check whether a seek was requested
if needSeek, seekErr := p.CheckSeek(); seekErr != nil {
return seekErr
} else if needSeek {
return pkg.ErrSkip
}
// simplified timestamp handling
if int64(sample.Timestamp)+tsOffset < 0 {
ts = 0
} else {
ts = int64(sample.Timestamp) + tsOffset
}
// update the wall-clock position
realTime = time.Now() // could be replaced by a more precise calculation if needed
// handle the video frame according to its codec
switch codecType {
case box.MP4_CODEC_H264:
var videoFrame rtmp.RTMPVideo
videoFrame.CTS = sample.CTS
videoFrame.Timestamp = uint32(ts)
videoFrame.Append([]byte{util.Conditional[byte](sample.KeyFrame, 0x17, 0x27), 0x01, byte(videoFrame.CTS >> 24), byte(videoFrame.CTS >> 8), byte(videoFrame.CTS)}, sample.Data)
err = publisher.WriteVideo(&videoFrame)
case box.MP4_CODEC_H265:
var videoFrame rtmp.RTMPVideo
videoFrame.CTS = sample.CTS
videoFrame.Timestamp = uint32(ts)
var head []byte
var b0 byte = 0b1010_0000
if sample.KeyFrame {
b0 = 0b1001_0000
}
p.File, err = os.Open(stream.FilePath)
if err != nil {
if videoFrame.CTS == 0 {
head = videoFrame.NextN(5)
head[0] = b0 | rtmp.PacketTypeCodedFramesX
} else {
head = videoFrame.NextN(8)
head[0] = b0 | rtmp.PacketTypeCodedFrames
util.PutBE(head[5:8], videoFrame.CTS) // cts
}
copy(head[1:], codec.FourCC_H265[:])
videoFrame.AppendOne(sample.Data)
err = publisher.WriteVideo(&videoFrame)
}
return err
}
// audio sample callback
demuxerRange.OnAudioSample = func(codecType box.MP4_CODEC_TYPE, sample box.Sample) error {
if publisher.Paused != nil {
publisher.Paused.Await()
}
// check whether a seek was requested
if needSeek, seekErr := p.CheckSeek(); seekErr != nil {
return seekErr
} else if needSeek {
return pkg.ErrSkip
}
// simplified timestamp handling
if int64(sample.Timestamp)+tsOffset < 0 {
ts = 0
} else {
ts = int64(sample.Timestamp) + tsOffset
}
// handle the audio frame according to its codec
switch codecType {
case box.MP4_CODEC_AAC:
var audioFrame rtmp.RTMPAudio
audioFrame.Timestamp = uint32(ts)
audioFrame.Append([]byte{0xaf, 0x01}, sample.Data)
err = publisher.WriteAudio(&audioFrame)
case box.MP4_CODEC_G711A:
var audioFrame rtmp.RTMPAudio
audioFrame.Timestamp = uint32(ts)
audioFrame.Append([]byte{0x72}, sample.Data)
err = publisher.WriteAudio(&audioFrame)
case box.MP4_CODEC_G711U:
var audioFrame rtmp.RTMPAudio
audioFrame.Timestamp = uint32(ts)
audioFrame.Append([]byte{0x82}, sample.Data)
err = publisher.WriteAudio(&audioFrame)
}
return err
}
for loop := 0; loop < p.Loop; loop++ {
// update the timestamp offset each loop so timestamps stay continuous
tsOffset = ts
demuxerRange.StartTime = p.PullStartTime
if !p.PullEndTime.IsZero() {
demuxerRange.EndTime = p.PullEndTime
} else if p.MaxTS > 0 {
demuxerRange.EndTime = p.PullStartTime.Add(time.Duration(p.MaxTS) * time.Millisecond)
} else {
demuxerRange.EndTime = time.Now()
}
if err = demuxerRange.Demux(p.Context); err != nil {
if err == pkg.ErrSkip {
loop--
continue
}
p.demuxer = NewDemuxer(p.File)
if err = p.demuxer.Demux(); err != nil {
return
}
for _, track := range p.demuxer.Tracks {
switch track.Cid {
case box.MP4_CODEC_H264:
var sequence rtmp.RTMPVideo
// sequence.SetAllocator(allocator)
sequence.Append([]byte{0x17, 0x00, 0x00, 0x00, 0x00}, track.ExtraData)
err = publisher.WriteVideo(&sequence)
case box.MP4_CODEC_H265:
var sequence rtmp.RTMPVideo
// sequence.SetAllocator(allocator)
sequence.Append([]byte{0b1001_0000 | rtmp.PacketTypeSequenceStart}, codec.FourCC_H265[:], track.ExtraData)
err = publisher.WriteVideo(&sequence)
case box.MP4_CODEC_AAC:
var sequence rtmp.RTMPAudio
// sequence.SetAllocator(allocator)
sequence.Append([]byte{0xaf, 0x00}, track.ExtraData)
err = publisher.WriteAudio(&sequence)
}
}
if i == 0 {
startTimestamp := p.PullStartTime.Sub(stream.StartTime).Milliseconds()
if startTimestamp < 0 {
startTimestamp = 0
}
var startSample *box.Sample
if startSample, err = p.demuxer.SeekTime(uint64(startTimestamp)); err != nil {
tsOffset = 0
continue
}
tsOffset = -int64(startSample.Timestamp)
}
for track, sample := range p.demuxer.ReadSample {
if p.IsStopped() {
return p.StopReason()
}
if publisher.Paused != nil {
publisher.Paused.Await()
}
if needSeek, err := p.CheckSeek(); err != nil {
continue
} else if needSeek {
goto nextStream
}
// if _, err = p.demuxer.reader.Seek(sample.Offset, io.SeekStart); err != nil {
// return
// }
sampleOffset := int(sample.Offset) - int(p.demuxer.mdatOffset)
if sampleOffset < 0 || sampleOffset+sample.Size > len(p.demuxer.mdat.Data) {
return
}
sample.Data = p.demuxer.mdat.Data[sampleOffset : sampleOffset+sample.Size]
// sample.Data = allocator.Malloc(sample.Size)
// if _, err = io.ReadFull(p.demuxer.reader, sample.Data); err != nil {
// allocator.Free(sample.Data)
// return
// }
if int64(sample.Timestamp)+tsOffset < 0 {
ts = 0
} else {
ts = int64(sample.Timestamp + uint32(tsOffset))
}
realTime = stream.StartTime.Add(time.Duration(sample.Timestamp) * time.Millisecond)
if p.MaxTS > 0 && ts > p.MaxTS {
return
}
switch track.Cid {
case box.MP4_CODEC_H264:
var videoFrame rtmp.RTMPVideo
// videoFrame.SetAllocator(allocator)
videoFrame.CTS = sample.CTS
videoFrame.Timestamp = uint32(ts)
videoFrame.Append([]byte{util.Conditional[byte](sample.KeyFrame, 0x17, 0x27), 0x01, byte(videoFrame.CTS >> 24), byte(videoFrame.CTS >> 8), byte(videoFrame.CTS)}, sample.Data)
// videoFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteVideo(&videoFrame)
case box.MP4_CODEC_H265:
var videoFrame rtmp.RTMPVideo
// videoFrame.SetAllocator(allocator)
videoFrame.CTS = sample.CTS
videoFrame.Timestamp = uint32(ts)
var head []byte
var b0 byte = 0b1010_0000
if sample.KeyFrame {
b0 = 0b1001_0000
}
if videoFrame.CTS == 0 {
head = videoFrame.NextN(5)
head[0] = b0 | rtmp.PacketTypeCodedFramesX
} else {
head = videoFrame.NextN(8)
head[0] = b0 | rtmp.PacketTypeCodedFrames
util.PutBE(head[5:8], videoFrame.CTS) // cts
}
copy(head[1:], codec.FourCC_H265[:])
videoFrame.AppendOne(sample.Data)
// videoFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteVideo(&videoFrame)
case box.MP4_CODEC_AAC:
var audioFrame rtmp.RTMPAudio
// audioFrame.SetAllocator(allocator)
audioFrame.Timestamp = uint32(ts)
audioFrame.Append([]byte{0xaf, 0x01}, sample.Data)
// audioFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteAudio(&audioFrame)
case box.MP4_CODEC_G711A:
var audioFrame rtmp.RTMPAudio
// audioFrame.SetAllocator(allocator)
audioFrame.Timestamp = uint32(ts)
audioFrame.Append([]byte{0x72}, sample.Data)
// audioFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteAudio(&audioFrame)
case box.MP4_CODEC_G711U:
var audioFrame rtmp.RTMPAudio
// audioFrame.SetAllocator(allocator)
audioFrame.Timestamp = uint32(ts)
audioFrame.Append([]byte{0x82}, sample.Data)
// audioFrame.AddRecycleBytes(sample.Data)
err = publisher.WriteAudio(&audioFrame)
}
}
return err
}
}
return
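Aside (not part of the commit): the seek path above negates the first sample's timestamp and clamps the result at zero so playback begins at the requested point. A minimal standalone sketch of that rebasing, with illustrative names:

// Standalone sketch of the timestamp rebasing done by the pull loop above.
// startTS is the timestamp of the sample returned by SeekTime.
func rebaseTimestamp(sampleTS, startTS uint64) int64 {
    ts := int64(sampleTS) - int64(startTS) // tsOffset = -startTS
    if ts < 0 {
        return 0
    }
    return ts
}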

View File

@@ -269,6 +269,7 @@ func (r *Recorder) Run() (err error) {
}
return m7s.PlayBlock(sub, func(audio *pkg.RawAudio) error {
r.stream.Duration = sub.AudioReader.AbsTime
if sub.VideoReader == nil {
if recordJob.AfterDuration != 0 {
err := checkEventRecordStop(sub.VideoReader.AbsTime)
@@ -313,6 +314,7 @@ func (r *Recorder) Run() (err error) {
Timestamp: uint32(dts),
})
}, func(video *rtmp.RTMPVideo) error {
r.stream.Duration = sub.VideoReader.AbsTime
if sub.VideoReader.Value.IDR {
if recordJob.AfterDuration != 0 {
err := checkEventRecordStop(sub.VideoReader.AbsTime)

View File

@@ -1,6 +1,7 @@
package plugin_mp4
import (
"fmt"
"os"
"path/filepath"
"strings"
@@ -15,24 +16,22 @@ import (
// RecordRecoveryTask recovers database records from recording files
type RecordRecoveryTask struct {
task.TickTask
task.Task
DB *gorm.DB
plugin *MP4Plugin
}
// GetTickInterval sets the task execution interval
func (t *RecordRecoveryTask) GetTickInterval() time.Duration {
return 24 * time.Hour // run once a day by default
// RecoveryStats holds the recovery statistics
type RecoveryStats struct {
TotalFiles int
SuccessCount int
FailureCount int
SkippedCount int
Errors []error
}
// Tick runs the task
func (t *RecordRecoveryTask) Tick(any) {
t.Info("Starting record recovery task")
t.recoverRecordsFromFiles()
}
// recoverRecordsFromFiles recovers record entries from the file system
func (t *RecordRecoveryTask) recoverRecordsFromFiles() {
// Start recovers record entries from the file system
func (t *RecordRecoveryTask) Start() error {
// collect all configured record directories
var recordDirs []string
if len(t.plugin.GetCommonConf().OnPub.Record) > 0 {
@@ -46,20 +45,60 @@ func (t *RecordRecoveryTask) recoverRecordsFromFiles() {
recordDirs = append(recordDirs, dirPath)
}
// walk every record directory
for _, dir := range recordDirs {
t.scanDirectory(dir)
if len(recordDirs) == 0 {
t.Info("No record directories configured, skipping recovery")
return nil
}
stats := &RecoveryStats{}
// walk every record directory, collecting all errors instead of stopping at the first one
for _, dir := range recordDirs {
dirStats, err := t.scanDirectory(dir)
if dirStats != nil {
stats.TotalFiles += dirStats.TotalFiles
stats.SuccessCount += dirStats.SuccessCount
stats.FailureCount += dirStats.FailureCount
stats.SkippedCount += dirStats.SkippedCount
stats.Errors = append(stats.Errors, dirStats.Errors...)
}
if err != nil {
stats.Errors = append(stats.Errors, fmt.Errorf("failed to scan directory %s: %w", dir, err))
}
}
// log the statistics
t.Info("Recovery completed",
"totalFiles", stats.TotalFiles,
"success", stats.SuccessCount,
"failed", stats.FailureCount,
"skipped", stats.SkippedCount,
"errors", len(stats.Errors))
// if any errors occurred, return a summary error
if len(stats.Errors) > 0 {
var errorMsgs []string
for _, err := range stats.Errors {
errorMsgs = append(errorMsgs, err.Error())
}
return fmt.Errorf("recovery completed with %d errors: %s", len(stats.Errors), strings.Join(errorMsgs, "; "))
}
return nil
}
// scanDirectory scans a directory for MP4 files
func (t *RecordRecoveryTask) scanDirectory(dir string) {
func (t *RecordRecoveryTask) scanDirectory(dir string) (*RecoveryStats, error) {
t.Info("Scanning directory for MP4 files", "directory", dir)
stats := &RecoveryStats{}
// walk the directory recursively
err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
if err != nil {
t.Error("Error accessing path", "path", path, "error", err)
stats.Errors = append(stats.Errors, fmt.Errorf("failed to access path %s: %w", path, err))
return nil // keep walking
}
@@ -73,33 +112,50 @@ func (t *RecordRecoveryTask) scanDirectory(dir string) {
return nil
}
stats.TotalFiles++
// check whether the file already has a record
var count int64
t.DB.Model(&m7s.RecordStream{}).Where("file_path = ?", path).Count(&count)
if err := t.DB.Model(&m7s.RecordStream{}).Where("file_path = ?", path).Count(&count).Error; err != nil {
t.Error("Failed to check existing record", "file", path, "error", err)
stats.FailureCount++
stats.Errors = append(stats.Errors, fmt.Errorf("failed to check existing record for %s: %w", path, err))
return nil
}
if count > 0 {
// record already exists, skip it
stats.SkippedCount++
return nil
}
// parse the MP4 file and create a record
t.recoverRecordFromFile(path)
if err := t.recoverRecordFromFile(path); err != nil {
stats.FailureCount++
stats.Errors = append(stats.Errors, fmt.Errorf("failed to recover record from %s: %w", path, err))
} else {
stats.SuccessCount++
}
return nil
})
if err != nil {
t.Error("Error walking directory", "directory", dir, "error", err)
return stats, fmt.Errorf("failed to walk directory %s: %w", dir, err)
}
return stats, nil
}
// recoverRecordFromFile recovers a record from an MP4 file
func (t *RecordRecoveryTask) recoverRecordFromFile(filePath string) {
func (t *RecordRecoveryTask) recoverRecordFromFile(filePath string) error {
t.Info("Recovering record from file", "file", filePath)
// open the file
file, err := os.Open(filePath)
if err != nil {
t.Error("Failed to open MP4 file", "file", filePath, "error", err)
return
return fmt.Errorf("failed to open MP4 file %s: %w", filePath, err)
}
defer file.Close()
@@ -108,14 +164,14 @@ func (t *RecordRecoveryTask) recoverRecordFromFile(filePath string) {
err = demuxer.Demux()
if err != nil {
t.Error("Failed to demux MP4 file", "file", filePath, "error", err)
return
return fmt.Errorf("failed to demux MP4 file %s: %w", filePath, err)
}
// extract the file info
fileInfo, err := file.Stat()
if err != nil {
t.Error("Failed to get file info", "file", filePath, "error", err)
return
return fmt.Errorf("failed to get file info for %s: %w", filePath, err)
}
// try to extract the stream path from the MP4 file; if it is missing, infer it from the file name and path
@@ -151,10 +207,11 @@ func (t *RecordRecoveryTask) recoverRecordFromFile(filePath string) {
err = t.DB.Create(&record).Error
if err != nil {
t.Error("Failed to save record to database", "file", filePath, "error", err)
return
return fmt.Errorf("failed to save record to database for %s: %w", filePath, err)
}
t.Info("Successfully recovered record", "file", filePath, "streamPath", streamPath)
return nil
}
// extractStreamPathFromMP4 extracts the stream path from an MP4 file
@@ -163,8 +220,8 @@ func extractStreamPathFromMP4(demuxer *mp4.Demuxer) string {
moov := demuxer.GetMoovBox()
if moov != nil && moov.UDTA != nil {
for _, entry := range moov.UDTA.Entries {
if streamPathBox, ok := entry.(*box.StreamPathBox); ok {
return streamPathBox.StreamPath
if entry.Type() == box.TypeALB {
return entry.(*box.TextDataBox).Text
}
}
}
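Illustrative aside (not in the commit): scanDirectory follows a walk-and-aggregate pattern, collecting per-file errors and continuing rather than aborting. A minimal standard-library sketch of the same idea, assuming only the fmt, os, path/filepath and strings imports:

// Count .mp4 files under dir, keep going on per-file failures,
// and hand back every error at the end.
func countMP4Files(dir string) (total int, errs []error) {
    walkErr := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            errs = append(errs, fmt.Errorf("access %s: %w", path, err))
            return nil // keep walking
        }
        if !info.IsDir() && strings.EqualFold(filepath.Ext(path), ".mp4") {
            total++
        }
        return nil
    })
    if walkErr != nil {
        errs = append(errs, walkErr)
    }
    return
}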

View File

@@ -40,7 +40,7 @@ type RTMPServer struct {
func (p *RTMPPlugin) OnTCPConnect(conn *net.TCPConn) task.ITask {
ret := &RTMPServer{conf: p}
ret.Init(conn)
ret.Logger = p.With("remote", conn.RemoteAddr().String())
ret.Logger = p.Logger.With("remote", conn.RemoteAddr().String())
return ret
}

View File

@@ -29,7 +29,7 @@ func (c *Client) Start() (err error) {
return
}
ps := strings.Split(c.u.Path, "/")
if len(ps) < 3 {
if len(ps) < 2 {
return errors.New("illegal rtmp url")
}
isRtmps := c.u.Scheme == "rtmps"
@@ -158,7 +158,9 @@ func (c *Client) Run() (err error) {
if len(args) > 0 {
m.StreamName += "?" + args.Encode()
}
c.Receivers[response.StreamId] = c.pullCtx.Publisher
if c.pullCtx.Publisher != nil {
c.Receivers[response.StreamId] = c.pullCtx.Publisher
}
err = c.SendMessage(RTMP_MSG_AMF0_COMMAND, m)
// if response, ok := msg.MsgData.(*ResponsePlayMessage); ok {
// if response.Object["code"] == "NetStream.Play.Start" {

View File

@@ -79,10 +79,10 @@ func (nc *NetConnection) Handshake(checkC2 bool) (err error) {
if len(C1) != C1S1_SIZE {
return errors.New("C1 Error")
}
var ts int
util.GetBE(C1[4:8], &ts)
var zero int
util.GetBE(C1[4:8], &zero)
if ts == 0 {
if zero == 0 {
return nc.simple_handshake(C1, checkC2)
}
@@ -92,12 +92,26 @@ func (nc *NetConnection) Handshake(checkC2 bool) (err error) {
func (nc *NetConnection) ClientHandshake() (err error) {
C0C1 := nc.mediaDataPool.NextN(C1S1_SIZE + 1)
defer nc.mediaDataPool.Recycle()
// build C0
C0C1[0] = RTMP_HANDSHAKE_VERSION
// build C1 in the simple handshake format
C1 := C0C1[1:]
// Time (4 bytes): current timestamp
util.PutBE(C1[0:4], time.Now().Unix()&0xFFFFFFFF)
// Zero (4 bytes): must be 0 to force the simple handshake
util.PutBE(C1[4:8], 0)
// Random data (1528 bytes): fill with random bytes
for i := 8; i < C1S1_SIZE; i++ {
C1[i] = byte(rand.Int() % 256)
}
if _, err = nc.Write(C0C1); err == nil {
// read S0 S1
if _, err = io.ReadFull(nc.Conn, C0C1); err == nil {
if C0C1[0] != RTMP_HANDSHAKE_VERSION {
err = errors.New("S1 C1 Error")
err = errors.New("S0 Error")
// C2
} else if _, err = nc.Write(C0C1[1:]); err == nil {
_, err = io.ReadFull(nc.Conn, C0C1[1:]) // S2
@@ -222,13 +236,7 @@ func clientScheme(C1 []byte, schem int) (scheme int, challenge []byte, digest []
return 0, nil, nil, false, err
}
// ok
if bytes.Compare(digest, tmp_Hash) == 0 {
ok = true
} else {
ok = false
}
ok = bytes.Equal(digest, tmp_Hash)
// challenge scheme
challenge = C1[key_offset : key_offset+C1S1_KEY_DATA_SIZE]
scheme = schem
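As an aside (not part of the diff): the simple handshake sends C0 (one version byte) followed by C1 (4-byte time, 4 zero bytes, 1528 random bytes). A standalone sketch of that layout using encoding/binary, crypto/rand and time instead of the NetConnection pool helpers above:

// Build C0+C1 for the simple RTMP handshake.
func buildC0C1() ([]byte, error) {
    buf := make([]byte, 1+1536) // C0 (1 byte) + C1 (4 time + 4 zero + 1528 random)
    buf[0] = 0x03               // C0: RTMP protocol version
    binary.BigEndian.PutUint32(buf[1:5], uint32(time.Now().Unix())) // C1 time field
    // buf[5:9] stays zero, which signals the simple (non-digest) handshake
    if _, err := rand.Read(buf[9:]); err != nil { // C1 random payload
        return nil, err
    }
    return buf, nil
}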

View File

@@ -5,6 +5,7 @@ import (
"net"
"runtime"
"sync/atomic"
"time"
"m7s.live/v5"
"m7s.live/v5/pkg/task"
@@ -128,6 +129,7 @@ func (nc *NetConnection) ResponseCreateStream(tid uint64, streamID uint32) error
// }
func (nc *NetConnection) readChunk() (msg *Chunk, err error) {
nc.SetReadDeadline(time.Now().Add(time.Second * 5)) // set a 5-second read deadline
head, err := nc.ReadByte()
if err != nil {
return nil, err
@@ -313,6 +315,9 @@ func (nc *NetConnection) RecvMessage() (msg *Chunk, err error) {
}
}
}
if nc.IsStopped() {
err = nc.StopReason()
}
}
return
}
@@ -344,6 +349,7 @@ func (nc *NetConnection) SendMessage(t byte, msg RtmpMessage) (err error) {
if sid, ok := msg.(HaveStreamID); ok {
head.MessageStreamID = sid.GetStreamID()
}
nc.SetWriteDeadline(time.Now().Add(time.Second * 5)) // set a 5-second write deadline
return nc.sendChunk(net.Buffers{nc.tmpBuf}, head, RTMP_CHUNK_HEAD_12)
}
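Aside (not in the commit): the new deadlines keep a silent peer from blocking a chunk read or write indefinitely. A minimal sketch of the same pattern using only net and time:

// Read with an upper bound on how long the peer may stay silent.
func readWithDeadline(c net.Conn, buf []byte, d time.Duration) (int, error) {
    if err := c.SetReadDeadline(time.Now().Add(d)); err != nil {
        return 0, err
    }
    return c.Read(buf) // returns a timeout error if nothing arrives within d
}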

View File

@@ -53,7 +53,7 @@ func (av *Sender) SendFrame(frame *RTMPData) (err error) {
// from here on, audio/video data is sent directly, without a full chunk header (Chunk Basic Header(1) + Chunk Message Header(7))
// when Chunk Type is 0 (i.e. Chunk12),
if av.lastAbs == 0 {
av.SetTimestamp(frame.Timestamp)
av.SetTimestamp(1)
err = av.sendChunk(frame.Memory.Buffers, &av.ChunkHeader, RTMP_CHUNK_HEAD_12)
} else {
av.SetTimestamp(frame.Timestamp - av.lastAbs)

View File

@@ -50,8 +50,25 @@ func (avcc *RTMPVideo) filterH264(naluSizeLen int) {
naluBuffer = append(naluBuffer, b)
})
badType := codec.ParseH264NALUType(naluBuffer[0][0])
// replaces the previous badType logging: decode and print the SliceType instead
if badType == 5 { // NALU type 5: coded slice of an IDR picture
naluData := bytes.Join(naluBuffer, nil) // the bytes package is already imported
if len(naluData) > 0 {
// h264parser is imported as "github.com/deepch/vdk/codec/h264parser"
// the first value returned by ParseSliceHeaderFromNALU is the SliceType
sliceType, err := h264parser.ParseSliceHeaderFromNALU(naluData)
if err == nil {
println("Decoded SliceType:", sliceType.String())
} else {
println("Error parsing H.264 slice header:", err.Error())
}
} else {
println("NALU data is empty, cannot parse H.264 slice header.")
}
}
switch badType {
case 5, 6, 1:
case 5, 6, 7, 8, 1, 2, 3, 4:
afterFilter.Append(lenBuffer...)
afterFilter.Append(naluBuffer...)
default:
@@ -137,17 +154,17 @@ func (avcc *RTMPVideo) Parse(t *AVTrack) (err error) {
err = parseSequence()
return
case PacketTypeCodedFrames:
switch ctx := t.ICodecCtx.(type) {
switch t.ICodecCtx.(type) {
case *H265Ctx:
if avcc.CTS, err = reader.ReadBE(3); err != nil {
return err
}
avcc.filterH265(int(ctx.RecordInfo.LengthSizeMinusOne) + 1)
// avcc.filterH265(int(ctx.RecordInfo.LengthSizeMinusOne) + 1)
case *AV1Ctx:
// return avcc.parseAV1(reader)
}
case PacketTypeCodedFramesX:
avcc.filterH265(int(t.ICodecCtx.(*H265Ctx).RecordInfo.LengthSizeMinusOne) + 1)
// avcc.filterH265(int(t.ICodecCtx.(*H265Ctx).RecordInfo.LengthSizeMinusOne) + 1)
}
} else {
b0, err = reader.ReadByte() //sequence frame flag
@@ -168,15 +185,15 @@ func (avcc *RTMPVideo) Parse(t *AVTrack) (err error) {
return
}
} else {
switch ctx := t.ICodecCtx.(type) {
case *codec.H264Ctx:
avcc.filterH264(int(ctx.RecordInfo.LengthSizeMinusOne) + 1)
case *H265Ctx:
avcc.filterH265(int(ctx.RecordInfo.LengthSizeMinusOne) + 1)
}
if avcc.Size <= 5 {
return ErrSkip
}
// switch ctx := t.ICodecCtx.(type) {
// case *codec.H264Ctx:
// avcc.filterH264(int(ctx.RecordInfo.LengthSizeMinusOne) + 1)
// case *H265Ctx:
// avcc.filterH265(int(ctx.RecordInfo.LengthSizeMinusOne) + 1)
// }
// if avcc.Size <= 5 {
// return ErrSkip
// }
}
}
return
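For reference (not part of the diff): the widened whitelist (1-8) keeps every slice NALU plus SEI, SPS and PPS. A standalone sketch of walking AVCC length-prefixed NALUs and reading the type from the low 5 bits of the first byte:

// Collect the H.264 NALU types found in an AVCC payload.
func naluTypes(avcc []byte, lenSize int) (types []byte) {
    for len(avcc) >= lenSize {
        size := 0
        for i := 0; i < lenSize; i++ { // big-endian length prefix
            size = size<<8 | int(avcc[i])
        }
        avcc = avcc[lenSize:]
        if size <= 0 || size > len(avcc) {
            break
        }
        types = append(types, avcc[0]&0x1F) // 1-5 = slices (5 = IDR), 6 = SEI, 7 = SPS, 8 = PPS
        avcc = avcc[size:]
    }
    return
}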

View File

@@ -29,7 +29,7 @@ type RTSPPlugin struct {
func (p *RTSPPlugin) OnTCPConnect(conn *net.TCPConn) task.ITask {
ret := &RTSPServer{NetConnection: NewNetConnection(conn), conf: p}
ret.Logger = p.With("remote", conn.RemoteAddr().String())
ret.Logger = p.Logger.With("remote", conn.RemoteAddr().String())
return ret
}

View File

@@ -395,18 +395,9 @@ func (c *NetConnection) Receive(sendMode bool, onReceive func(byte, []byte) erro
// if the callback returned an error, check whether it is a discard error
needToFree = (err != pkg.ErrDiscard)
}
continue
}
} else if onRTCP != nil { // odd channels carry RTCP data
err := onRTCP(channelID, buf)
if err == nil {
// a nil return means the callback took ownership of the memory
needToFree = false
} else {
// if the callback returned an error, check whether it is a discard error
needToFree = (err != pkg.ErrDiscard)
}
continue
onRTCP(channelID, buf) // handle the RTCP data and free the memory promptly
}
// free the memory if needed
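Aside (not part of the commit): the even/odd check above relies on the interleaved-channel convention where media i uses channel 2*i for RTP and 2*i+1 for its RTCP. A trivial sketch of that mapping:

// Map an interleaved channel ID back to its media index and RTP/RTCP role.
func channelKind(channelID byte) (mediaIndex int, isRTCP bool) {
    return int(channelID) >> 1, channelID&1 == 1
}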

View File

@@ -47,7 +47,7 @@ func (d *RTSPPullProxy) Start() (err error) {
}
func (d *RTSPPullProxy) Dispose() {
if d.conn.NetConnection != nil {
if d.conn.NetConnection != nil && d.conn.NetConnection.Conn != nil {
_ = d.conn.Teardown()
d.conn.NetConnection.Dispose()
d.conn.NetConnection = nil

View File

@@ -369,6 +369,7 @@ func (s *Sender) Send() (err error) {
func (r *Receiver) SetMedia(medias []*Media) (err error) {
r.AudioChannelID = -1
r.VideoChannelID = -1
var hasAudio, hasVideo bool // new flags
for i, media := range medias {
if codec := media.Codecs[0]; codec.IsAudio() {
r.AudioCodecParameters = &webrtc.RTPCodecParameters{
@@ -382,6 +383,7 @@ func (r *Receiver) SetMedia(medias []*Media) (err error) {
PayloadType: webrtc.PayloadType(codec.PayloadType),
}
r.AudioChannelID = i << 1
hasAudio = true // mark that audio was found
} else if codec.IsVideo() {
r.VideoChannelID = i << 1
r.VideoCodecParameters = &webrtc.RTPCodecParameters{
@@ -394,10 +396,23 @@ func (r *Receiver) SetMedia(medias []*Media) (err error) {
},
PayloadType: webrtc.PayloadType(codec.PayloadType),
}
hasVideo = true // mark that video was found
} else {
r.Stream.Warn("media kind not support", "kind", codec.Kind())
}
}
// after the loop, if the Publisher exists and a media kind was not found, call NoAudio/NoVideo
if r.Publisher != nil {
if !hasAudio {
r.Publisher.NoAudio()
r.Stream.Info("SDP does not contain audio, calling Publisher.NoAudio()")
}
if !hasVideo {
r.Publisher.NoVideo()
r.Stream.Info("SDP does not contain video, calling Publisher.NoVideo()")
}
}
return
}
@@ -432,7 +447,7 @@ func (r *Receiver) Receive() (err error) {
},
}
return r.NetConnection.Receive(false, func(channelID byte, buf []byte) error {
if r.Publisher.Paused != nil {
if r.Publisher != nil && r.Publisher.Paused != nil {
r.Stream.Pause()
r.Publisher.Paused.Await()
r.Stream.Play()
@@ -456,6 +471,9 @@ func (r *Receiver) Receive() (err error) {
return err
}
}
if r.Publisher == nil {
return pkg.ErrMuted
}
switch int(channelID) {
case r.AudioChannelID:
if !r.PubAudio {
@@ -546,8 +564,6 @@ func (r *Receiver) Receive() (err error) {
videoFrame.SetAllocator(r.MemoryAllocator)
return pkg.ErrDiscard
}
default:
}
return pkg.ErrUnsupportCodec
}, func(channelID byte, buf []byte) error {

View File

@@ -46,7 +46,7 @@ func (task *RTSPServer) Go() (err error) {
if task.URL == nil {
task.URL = req.URL
task.Logger = task.With("url", task.URL.String())
task.Logger = task.Logger.With("url", task.URL.String())
task.UserAgent = req.Header.Get("User-Agent")
task.Info("connect", "userAgent", task.UserAgent)
}

196
plugin/s3/api.go Normal file
View File

@@ -0,0 +1,196 @@
package plugin_s3
import (
"bytes"
"context"
"fmt"
"net/http"
"strings"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/s3"
"google.golang.org/protobuf/types/known/emptypb"
gpb "m7s.live/v5/pb"
"m7s.live/v5/plugin/s3/pb"
)
// Upload implements the gRPC Upload method
func (p *S3Plugin) Upload(ctx context.Context, req *pb.UploadRequest) (*pb.UploadResponse, error) {
if req.Filename == "" {
return nil, fmt.Errorf("filename is required")
}
if len(req.Content) == 0 {
return nil, fmt.Errorf("content is required")
}
bucket := req.Bucket
if bucket == "" {
bucket = p.Bucket
}
// Generate S3 key
key := req.Filename
if !strings.HasPrefix(key, "/") {
key = "/" + key
}
// Determine content type
contentType := req.ContentType
if contentType == "" {
contentType = http.DetectContentType(req.Content)
}
// Upload to S3
input := &s3.PutObjectInput{
Bucket: aws.String(bucket),
Key: aws.String(key),
Body: bytes.NewReader(req.Content),
ContentLength: aws.Int64(int64(len(req.Content))),
ContentType: aws.String(contentType),
}
result, err := p.s3Client.PutObjectWithContext(ctx, input)
if err != nil {
p.Error("Failed to upload file to S3", "error", err, "key", key, "bucket", bucket)
return nil, fmt.Errorf("failed to upload file: %v", err)
}
// Generate public URL
url := fmt.Sprintf("%s/%s%s", p.getEndpointURL(), bucket, key)
p.Info("File uploaded successfully", "key", key, "bucket", bucket, "size", len(req.Content))
return &pb.UploadResponse{
Code: 0,
Message: "Upload successful",
Data: &pb.UploadData{
Key: key,
Url: url,
Size: int64(len(req.Content)),
Etag: aws.StringValue(result.ETag),
},
}, nil
}
// List implements the gRPC List method
func (p *S3Plugin) List(ctx context.Context, req *pb.ListRequest) (*pb.ListResponse, error) {
bucket := req.Bucket
if bucket == "" {
bucket = p.Bucket
}
input := &s3.ListObjectsInput{
Bucket: aws.String(bucket),
}
if req.Prefix != "" {
input.Prefix = aws.String(req.Prefix)
}
if req.MaxKeys > 0 {
input.MaxKeys = aws.Int64(int64(req.MaxKeys))
}
if req.Marker != "" {
input.Marker = aws.String(req.Marker)
}
result, err := p.s3Client.ListObjectsWithContext(ctx, input)
if err != nil {
p.Error("Failed to list objects from S3", "error", err, "bucket", bucket)
return nil, fmt.Errorf("failed to list objects: %v", err)
}
var objects []*pb.S3Object
for _, obj := range result.Contents {
objects = append(objects, &pb.S3Object{
Key: aws.StringValue(obj.Key),
Size: aws.Int64Value(obj.Size),
LastModified: obj.LastModified.Format(time.RFC3339),
Etag: aws.StringValue(obj.ETag),
StorageClass: aws.StringValue(obj.StorageClass),
})
}
var nextMarker string
if result.NextMarker != nil {
nextMarker = aws.StringValue(result.NextMarker)
}
p.Info("Listed objects successfully", "bucket", bucket, "count", len(objects))
return &pb.ListResponse{
Code: 0,
Message: "List successful",
Data: &pb.ListData{
Objects: objects,
IsTruncated: aws.BoolValue(result.IsTruncated),
NextMarker: nextMarker,
},
}, nil
}
// Delete implements the gRPC Delete method
func (p *S3Plugin) Delete(ctx context.Context, req *pb.DeleteRequest) (*gpb.SuccessResponse, error) {
if req.Key == "" {
return nil, fmt.Errorf("key is required")
}
bucket := req.Bucket
if bucket == "" {
bucket = p.Bucket
}
input := &s3.DeleteObjectInput{
Bucket: aws.String(bucket),
Key: aws.String(req.Key),
}
_, err := p.s3Client.DeleteObjectWithContext(ctx, input)
if err != nil {
p.Error("Failed to delete object from S3", "error", err, "key", req.Key, "bucket", bucket)
return nil, fmt.Errorf("failed to delete object: %v", err)
}
p.Info("Object deleted successfully", "key", req.Key, "bucket", bucket)
return &gpb.SuccessResponse{
Code: 0,
Message: "Delete successful",
}, nil
}
// CheckConnection implements the gRPC CheckConnection method
func (p *S3Plugin) CheckConnection(ctx context.Context, req *emptypb.Empty) (*pb.ConnectionResponse, error) {
// Test connection by listing buckets
_, err := p.s3Client.ListBucketsWithContext(ctx, &s3.ListBucketsInput{})
connected := err == nil
message := "Connection successful"
if err != nil {
message = fmt.Sprintf("Connection failed: %v", err)
p.Error("S3 connection check failed", "error", err)
} else {
p.Info("S3 connection check successful")
}
return &pb.ConnectionResponse{
Code: 0,
Message: message,
Data: &pb.ConnectionData{
Connected: connected,
Endpoint: p.Endpoint,
Region: p.Region,
UseSsl: p.UseSSL,
Bucket: p.Bucket,
},
}, nil
}
// Helper method to get endpoint URL
func (p *S3Plugin) getEndpointURL() string {
protocol := "http"
if p.UseSSL {
protocol = "https"
}
return fmt.Sprintf("%s://%s", protocol, p.Endpoint)
}
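Illustrative aside (not in the commit): the generated gateway later in this diff maps Upload to POST /s3/api/upload. A hypothetical HTTP caller; the host is a placeholder, and the base64 body encoding follows grpc-gateway's protojson rules for bytes fields:

// Upload a file through the plugin's HTTP route (host is an assumption).
func uploadViaHTTP(host, name string, data []byte) error {
    body, err := json.Marshal(map[string]string{
        "filename":    name,
        "content":     base64.StdEncoding.EncodeToString(data), // bytes field -> base64 in JSON
        "contentType": http.DetectContentType(data),
    })
    if err != nil {
        return err
    }
    resp, err := http.Post(host+"/s3/api/upload", "application/json", bytes.NewReader(body))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("upload failed: %s", resp.Status)
    }
    return nil
}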

128
plugin/s3/index.go Normal file
View File

@@ -0,0 +1,128 @@
package plugin_s3
import (
"fmt"
"os"
"strings"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
"m7s.live/v5"
"m7s.live/v5/plugin/s3/pb"
)
type S3Plugin struct {
pb.UnimplementedApiServer
m7s.Plugin
Endpoint string `desc:"S3 service endpoint, such as MinIO address"`
Region string `default:"us-east-1" desc:"AWS region"`
AccessKeyID string `desc:"S3 access key ID"`
SecretAccessKey string `desc:"S3 secret access key"`
Bucket string `desc:"S3 bucket name"`
PathPrefix string `desc:"file path prefix"`
ForcePathStyle bool `desc:"force path style (required for MinIO)"`
UseSSL bool `default:"true" desc:"whether to use SSL"`
Auto bool `desc:"whether to automatically upload recorded files"`
Timeout int `default:"30" desc:"upload timeout in seconds"`
s3Client *s3.S3
}
var _ = m7s.InstallPlugin[S3Plugin](&pb.Api_ServiceDesc, pb.RegisterApiHandler)
func (p *S3Plugin) OnInit() error {
// Set default configuration
if p.Region == "" {
p.Region = "us-east-1"
}
if p.Timeout == 0 {
p.Timeout = 30
}
// Create AWS session configuration
config := &aws.Config{
Region: aws.String(p.Region),
Credentials: credentials.NewStaticCredentials(p.AccessKeyID, p.SecretAccessKey, ""),
S3ForcePathStyle: aws.Bool(p.ForcePathStyle),
}
// Set endpoint if provided (for MinIO or other S3-compatible services)
if p.Endpoint != "" {
protocol := "http"
if p.UseSSL {
protocol = "https"
}
endpoint := p.Endpoint
if !strings.HasPrefix(endpoint, "http") {
endpoint = protocol + "://" + endpoint
}
config.Endpoint = aws.String(endpoint)
config.DisableSSL = aws.Bool(!p.UseSSL)
}
// Create AWS session
sess, err := session.NewSession(config)
if err != nil {
return fmt.Errorf("failed to create AWS session: %v", err)
}
// Create S3 client
p.s3Client = s3.New(sess)
// Test connection
if err := p.testConnection(); err != nil {
return fmt.Errorf("S3 connection test failed: %v", err)
}
p.Info("S3 plugin initialized successfully")
return nil
}
// testConnection tests the S3 connection
func (p *S3Plugin) testConnection() error {
// Try to list buckets to test connection
_, err := p.s3Client.ListBuckets(&s3.ListBucketsInput{})
if err != nil {
return err
}
p.Info("S3 connection test successful")
return nil
}
// uploadFile uploads a file to S3
func (p *S3Plugin) uploadFile(filePath, objectKey string) error {
file, err := os.Open(filePath)
if err != nil {
return err
}
defer file.Close()
fileInfo, err := file.Stat()
if err != nil {
return err
}
// Add path prefix if configured
if p.PathPrefix != "" {
objectKey = strings.TrimSuffix(p.PathPrefix, "/") + "/" + objectKey
}
// Upload file to S3
input := &s3.PutObjectInput{
Bucket: aws.String(p.Bucket),
Key: aws.String(objectKey),
Body: file,
ContentLength: aws.Int64(fileInfo.Size()),
ContentType: aws.String("application/octet-stream"),
}
_, err = p.s3Client.PutObject(input)
if err != nil {
return err
}
p.Info("File uploaded successfully", "objectKey", objectKey, "size", fileInfo.Size())
return nil
}

803
plugin/s3/pb/s3.pb.go Normal file
View File

@@ -0,0 +1,803 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.6
// protoc v5.29.3
// source: s3.proto
package pb
import (
_ "google.golang.org/genproto/googleapis/api/annotations"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
emptypb "google.golang.org/protobuf/types/known/emptypb"
pb "m7s.live/v5/pb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type UploadRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Filename string `protobuf:"bytes,1,opt,name=filename,proto3" json:"filename,omitempty"` // File name
Content []byte `protobuf:"bytes,2,opt,name=content,proto3" json:"content,omitempty"` // File content
ContentType string `protobuf:"bytes,3,opt,name=content_type,json=contentType,proto3" json:"content_type,omitempty"` // MIME type
Bucket string `protobuf:"bytes,4,opt,name=bucket,proto3" json:"bucket,omitempty"` // Bucket name (optional, uses default if empty)
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *UploadRequest) Reset() {
*x = UploadRequest{}
mi := &file_s3_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *UploadRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*UploadRequest) ProtoMessage() {}
func (x *UploadRequest) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use UploadRequest.ProtoReflect.Descriptor instead.
func (*UploadRequest) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{0}
}
func (x *UploadRequest) GetFilename() string {
if x != nil {
return x.Filename
}
return ""
}
func (x *UploadRequest) GetContent() []byte {
if x != nil {
return x.Content
}
return nil
}
func (x *UploadRequest) GetContentType() string {
if x != nil {
return x.ContentType
}
return ""
}
func (x *UploadRequest) GetBucket() string {
if x != nil {
return x.Bucket
}
return ""
}
type UploadResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
Data *UploadData `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *UploadResponse) Reset() {
*x = UploadResponse{}
mi := &file_s3_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *UploadResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*UploadResponse) ProtoMessage() {}
func (x *UploadResponse) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use UploadResponse.ProtoReflect.Descriptor instead.
func (*UploadResponse) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{1}
}
func (x *UploadResponse) GetCode() uint32 {
if x != nil {
return x.Code
}
return 0
}
func (x *UploadResponse) GetMessage() string {
if x != nil {
return x.Message
}
return ""
}
func (x *UploadResponse) GetData() *UploadData {
if x != nil {
return x.Data
}
return nil
}
type UploadData struct {
state protoimpl.MessageState `protogen:"open.v1"`
Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` // S3 object key
Url string `protobuf:"bytes,2,opt,name=url,proto3" json:"url,omitempty"` // Public URL
Size int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"` // File size in bytes
Etag string `protobuf:"bytes,4,opt,name=etag,proto3" json:"etag,omitempty"` // ETag from S3
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *UploadData) Reset() {
*x = UploadData{}
mi := &file_s3_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *UploadData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*UploadData) ProtoMessage() {}
func (x *UploadData) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use UploadData.ProtoReflect.Descriptor instead.
func (*UploadData) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{2}
}
func (x *UploadData) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
func (x *UploadData) GetUrl() string {
if x != nil {
return x.Url
}
return ""
}
func (x *UploadData) GetSize() int64 {
if x != nil {
return x.Size
}
return 0
}
func (x *UploadData) GetEtag() string {
if x != nil {
return x.Etag
}
return ""
}
type ListRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Prefix string `protobuf:"bytes,1,opt,name=prefix,proto3" json:"prefix,omitempty"` // Prefix filter
MaxKeys int32 `protobuf:"varint,2,opt,name=max_keys,json=maxKeys,proto3" json:"max_keys,omitempty"` // Maximum number of keys to return
Marker string `protobuf:"bytes,3,opt,name=marker,proto3" json:"marker,omitempty"` // Pagination marker
Bucket string `protobuf:"bytes,4,opt,name=bucket,proto3" json:"bucket,omitempty"` // Bucket name (optional, uses default if empty)
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListRequest) Reset() {
*x = ListRequest{}
mi := &file_s3_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListRequest) ProtoMessage() {}
func (x *ListRequest) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListRequest.ProtoReflect.Descriptor instead.
func (*ListRequest) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{3}
}
func (x *ListRequest) GetPrefix() string {
if x != nil {
return x.Prefix
}
return ""
}
func (x *ListRequest) GetMaxKeys() int32 {
if x != nil {
return x.MaxKeys
}
return 0
}
func (x *ListRequest) GetMarker() string {
if x != nil {
return x.Marker
}
return ""
}
func (x *ListRequest) GetBucket() string {
if x != nil {
return x.Bucket
}
return ""
}
type ListResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
Data *ListData `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListResponse) Reset() {
*x = ListResponse{}
mi := &file_s3_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListResponse) ProtoMessage() {}
func (x *ListResponse) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListResponse.ProtoReflect.Descriptor instead.
func (*ListResponse) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{4}
}
func (x *ListResponse) GetCode() uint32 {
if x != nil {
return x.Code
}
return 0
}
func (x *ListResponse) GetMessage() string {
if x != nil {
return x.Message
}
return ""
}
func (x *ListResponse) GetData() *ListData {
if x != nil {
return x.Data
}
return nil
}
type ListData struct {
state protoimpl.MessageState `protogen:"open.v1"`
Objects []*S3Object `protobuf:"bytes,1,rep,name=objects,proto3" json:"objects,omitempty"`
IsTruncated bool `protobuf:"varint,2,opt,name=is_truncated,json=isTruncated,proto3" json:"is_truncated,omitempty"` // Whether there are more results
NextMarker string `protobuf:"bytes,3,opt,name=next_marker,json=nextMarker,proto3" json:"next_marker,omitempty"` // Next pagination marker
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListData) Reset() {
*x = ListData{}
mi := &file_s3_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListData) ProtoMessage() {}
func (x *ListData) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListData.ProtoReflect.Descriptor instead.
func (*ListData) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{5}
}
func (x *ListData) GetObjects() []*S3Object {
if x != nil {
return x.Objects
}
return nil
}
func (x *ListData) GetIsTruncated() bool {
if x != nil {
return x.IsTruncated
}
return false
}
func (x *ListData) GetNextMarker() string {
if x != nil {
return x.NextMarker
}
return ""
}
type S3Object struct {
state protoimpl.MessageState `protogen:"open.v1"`
Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` // Object key
Size int64 `protobuf:"varint,2,opt,name=size,proto3" json:"size,omitempty"` // Object size in bytes
LastModified string `protobuf:"bytes,3,opt,name=last_modified,json=lastModified,proto3" json:"last_modified,omitempty"` // Last modified timestamp
Etag string `protobuf:"bytes,4,opt,name=etag,proto3" json:"etag,omitempty"` // ETag
StorageClass string `protobuf:"bytes,5,opt,name=storage_class,json=storageClass,proto3" json:"storage_class,omitempty"` // Storage class
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *S3Object) Reset() {
*x = S3Object{}
mi := &file_s3_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *S3Object) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*S3Object) ProtoMessage() {}
func (x *S3Object) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use S3Object.ProtoReflect.Descriptor instead.
func (*S3Object) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{6}
}
func (x *S3Object) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
func (x *S3Object) GetSize() int64 {
if x != nil {
return x.Size
}
return 0
}
func (x *S3Object) GetLastModified() string {
if x != nil {
return x.LastModified
}
return ""
}
func (x *S3Object) GetEtag() string {
if x != nil {
return x.Etag
}
return ""
}
func (x *S3Object) GetStorageClass() string {
if x != nil {
return x.StorageClass
}
return ""
}
type DeleteRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` // Object key to delete
Bucket string `protobuf:"bytes,2,opt,name=bucket,proto3" json:"bucket,omitempty"` // Bucket name (optional, uses default if empty)
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteRequest) Reset() {
*x = DeleteRequest{}
mi := &file_s3_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteRequest) ProtoMessage() {}
func (x *DeleteRequest) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteRequest.ProtoReflect.Descriptor instead.
func (*DeleteRequest) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{7}
}
func (x *DeleteRequest) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
func (x *DeleteRequest) GetBucket() string {
if x != nil {
return x.Bucket
}
return ""
}
type ConnectionResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
Data *ConnectionData `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ConnectionResponse) Reset() {
*x = ConnectionResponse{}
mi := &file_s3_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ConnectionResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ConnectionResponse) ProtoMessage() {}
func (x *ConnectionResponse) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ConnectionResponse.ProtoReflect.Descriptor instead.
func (*ConnectionResponse) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{8}
}
func (x *ConnectionResponse) GetCode() uint32 {
if x != nil {
return x.Code
}
return 0
}
func (x *ConnectionResponse) GetMessage() string {
if x != nil {
return x.Message
}
return ""
}
func (x *ConnectionResponse) GetData() *ConnectionData {
if x != nil {
return x.Data
}
return nil
}
type ConnectionData struct {
state protoimpl.MessageState `protogen:"open.v1"`
Connected bool `protobuf:"varint,1,opt,name=connected,proto3" json:"connected,omitempty"` // Whether connection is successful
Endpoint string `protobuf:"bytes,2,opt,name=endpoint,proto3" json:"endpoint,omitempty"` // S3 endpoint
Region string `protobuf:"bytes,3,opt,name=region,proto3" json:"region,omitempty"` // AWS region
UseSsl bool `protobuf:"varint,4,opt,name=use_ssl,json=useSsl,proto3" json:"use_ssl,omitempty"` // Whether SSL is enabled
Bucket string `protobuf:"bytes,5,opt,name=bucket,proto3" json:"bucket,omitempty"` // Default bucket name
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ConnectionData) Reset() {
*x = ConnectionData{}
mi := &file_s3_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ConnectionData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ConnectionData) ProtoMessage() {}
func (x *ConnectionData) ProtoReflect() protoreflect.Message {
mi := &file_s3_proto_msgTypes[9]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ConnectionData.ProtoReflect.Descriptor instead.
func (*ConnectionData) Descriptor() ([]byte, []int) {
return file_s3_proto_rawDescGZIP(), []int{9}
}
func (x *ConnectionData) GetConnected() bool {
if x != nil {
return x.Connected
}
return false
}
func (x *ConnectionData) GetEndpoint() string {
if x != nil {
return x.Endpoint
}
return ""
}
func (x *ConnectionData) GetRegion() string {
if x != nil {
return x.Region
}
return ""
}
func (x *ConnectionData) GetUseSsl() bool {
if x != nil {
return x.UseSsl
}
return false
}
func (x *ConnectionData) GetBucket() string {
if x != nil {
return x.Bucket
}
return ""
}
var File_s3_proto protoreflect.FileDescriptor
const file_s3_proto_rawDesc = "" +
"\n" +
"\bs3.proto\x12\x02s3\x1a\x1cgoogle/api/annotations.proto\x1a\x1bgoogle/protobuf/empty.proto\x1a\fglobal.proto\"\x80\x01\n" +
"\rUploadRequest\x12\x1a\n" +
"\bfilename\x18\x01 \x01(\tR\bfilename\x12\x18\n" +
"\acontent\x18\x02 \x01(\fR\acontent\x12!\n" +
"\fcontent_type\x18\x03 \x01(\tR\vcontentType\x12\x16\n" +
"\x06bucket\x18\x04 \x01(\tR\x06bucket\"b\n" +
"\x0eUploadResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\rR\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12\"\n" +
"\x04data\x18\x03 \x01(\v2\x0e.s3.UploadDataR\x04data\"X\n" +
"\n" +
"UploadData\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x10\n" +
"\x03url\x18\x02 \x01(\tR\x03url\x12\x12\n" +
"\x04size\x18\x03 \x01(\x03R\x04size\x12\x12\n" +
"\x04etag\x18\x04 \x01(\tR\x04etag\"p\n" +
"\vListRequest\x12\x16\n" +
"\x06prefix\x18\x01 \x01(\tR\x06prefix\x12\x19\n" +
"\bmax_keys\x18\x02 \x01(\x05R\amaxKeys\x12\x16\n" +
"\x06marker\x18\x03 \x01(\tR\x06marker\x12\x16\n" +
"\x06bucket\x18\x04 \x01(\tR\x06bucket\"^\n" +
"\fListResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\rR\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12 \n" +
"\x04data\x18\x03 \x01(\v2\f.s3.ListDataR\x04data\"v\n" +
"\bListData\x12&\n" +
"\aobjects\x18\x01 \x03(\v2\f.s3.S3ObjectR\aobjects\x12!\n" +
"\fis_truncated\x18\x02 \x01(\bR\visTruncated\x12\x1f\n" +
"\vnext_marker\x18\x03 \x01(\tR\n" +
"nextMarker\"\x8e\x01\n" +
"\bS3Object\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x12\n" +
"\x04size\x18\x02 \x01(\x03R\x04size\x12#\n" +
"\rlast_modified\x18\x03 \x01(\tR\flastModified\x12\x12\n" +
"\x04etag\x18\x04 \x01(\tR\x04etag\x12#\n" +
"\rstorage_class\x18\x05 \x01(\tR\fstorageClass\"9\n" +
"\rDeleteRequest\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x16\n" +
"\x06bucket\x18\x02 \x01(\tR\x06bucket\"j\n" +
"\x12ConnectionResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\rR\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12&\n" +
"\x04data\x18\x03 \x01(\v2\x12.s3.ConnectionDataR\x04data\"\x93\x01\n" +
"\x0eConnectionData\x12\x1c\n" +
"\tconnected\x18\x01 \x01(\bR\tconnected\x12\x1a\n" +
"\bendpoint\x18\x02 \x01(\tR\bendpoint\x12\x16\n" +
"\x06region\x18\x03 \x01(\tR\x06region\x12\x17\n" +
"\ause_ssl\x18\x04 \x01(\bR\x06useSsl\x12\x16\n" +
"\x06bucket\x18\x05 \x01(\tR\x06bucket2\xc8\x02\n" +
"\x03api\x12J\n" +
"\x06Upload\x12\x11.s3.UploadRequest\x1a\x12.s3.UploadResponse\"\x19\x82\xd3\xe4\x93\x02\x13:\x01*\"\x0e/s3/api/upload\x12?\n" +
"\x04List\x12\x0f.s3.ListRequest\x1a\x10.s3.ListResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/s3/api/list\x12U\n" +
"\x06Delete\x12\x11.s3.DeleteRequest\x1a\x17.global.SuccessResponse\"\x1f\x82\xd3\xe4\x93\x02\x19*\x17/s3/api/delete/{key=**}\x12]\n" +
"\x0fCheckConnection\x12\x16.google.protobuf.Empty\x1a\x16.s3.ConnectionResponse\"\x1a\x82\xd3\xe4\x93\x02\x14\x12\x12/s3/api/connectionB\x1aZ\x18m7s.live/v5/plugin/s3/pbb\x06proto3"
var (
file_s3_proto_rawDescOnce sync.Once
file_s3_proto_rawDescData []byte
)
func file_s3_proto_rawDescGZIP() []byte {
file_s3_proto_rawDescOnce.Do(func() {
file_s3_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_s3_proto_rawDesc), len(file_s3_proto_rawDesc)))
})
return file_s3_proto_rawDescData
}
var file_s3_proto_msgTypes = make([]protoimpl.MessageInfo, 10)
var file_s3_proto_goTypes = []any{
(*UploadRequest)(nil), // 0: s3.UploadRequest
(*UploadResponse)(nil), // 1: s3.UploadResponse
(*UploadData)(nil), // 2: s3.UploadData
(*ListRequest)(nil), // 3: s3.ListRequest
(*ListResponse)(nil), // 4: s3.ListResponse
(*ListData)(nil), // 5: s3.ListData
(*S3Object)(nil), // 6: s3.S3Object
(*DeleteRequest)(nil), // 7: s3.DeleteRequest
(*ConnectionResponse)(nil), // 8: s3.ConnectionResponse
(*ConnectionData)(nil), // 9: s3.ConnectionData
(*emptypb.Empty)(nil), // 10: google.protobuf.Empty
(*pb.SuccessResponse)(nil), // 11: global.SuccessResponse
}
var file_s3_proto_depIdxs = []int32{
2, // 0: s3.UploadResponse.data:type_name -> s3.UploadData
5, // 1: s3.ListResponse.data:type_name -> s3.ListData
6, // 2: s3.ListData.objects:type_name -> s3.S3Object
9, // 3: s3.ConnectionResponse.data:type_name -> s3.ConnectionData
0, // 4: s3.api.Upload:input_type -> s3.UploadRequest
3, // 5: s3.api.List:input_type -> s3.ListRequest
7, // 6: s3.api.Delete:input_type -> s3.DeleteRequest
10, // 7: s3.api.CheckConnection:input_type -> google.protobuf.Empty
1, // 8: s3.api.Upload:output_type -> s3.UploadResponse
4, // 9: s3.api.List:output_type -> s3.ListResponse
11, // 10: s3.api.Delete:output_type -> global.SuccessResponse
8, // 11: s3.api.CheckConnection:output_type -> s3.ConnectionResponse
8, // [8:12] is the sub-list for method output_type
4, // [4:8] is the sub-list for method input_type
4, // [4:4] is the sub-list for extension type_name
4, // [4:4] is the sub-list for extension extendee
0, // [0:4] is the sub-list for field type_name
}
func init() { file_s3_proto_init() }
func file_s3_proto_init() {
if File_s3_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_s3_proto_rawDesc), len(file_s3_proto_rawDesc)),
NumEnums: 0,
NumMessages: 10,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_s3_proto_goTypes,
DependencyIndexes: file_s3_proto_depIdxs,
MessageInfos: file_s3_proto_msgTypes,
}.Build()
File_s3_proto = out.File
file_s3_proto_goTypes = nil
file_s3_proto_depIdxs = nil
}

441
plugin/s3/pb/s3.pb.gw.go Normal file
View File

@@ -0,0 +1,441 @@
// Code generated by protoc-gen-grpc-gateway. DO NOT EDIT.
// source: s3.proto
/*
Package pb is a reverse proxy.
It translates gRPC into RESTful JSON APIs.
*/
package pb
import (
"context"
"io"
"net/http"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/emptypb"
)
// Suppress "imported and not used" errors
var _ codes.Code
var _ io.Reader
var _ status.Status
var _ = runtime.String
var _ = utilities.NewDoubleArray
var _ = metadata.Join
func request_Api_Upload_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq UploadRequest
var metadata runtime.ServerMetadata
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && err != io.EOF {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.Upload(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_Upload_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq UploadRequest
var metadata runtime.ServerMetadata
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && err != io.EOF {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.Upload(ctx, &protoReq)
return msg, metadata, err
}
var (
filter_Api_List_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
)
func request_Api_List_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ListRequest
var metadata runtime.ServerMetadata
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_List_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.List(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_List_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ListRequest
var metadata runtime.ServerMetadata
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_List_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.List(ctx, &protoReq)
return msg, metadata, err
}
var (
filter_Api_Delete_0 = &utilities.DoubleArray{Encoding: map[string]int{"key": 0}, Base: []int{1, 1, 0}, Check: []int{0, 1, 2}}
)
func request_Api_Delete_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq DeleteRequest
var metadata runtime.ServerMetadata
var (
val string
ok bool
err error
_ = err
)
val, ok = pathParams["key"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "key")
}
protoReq.Key, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "key", err)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_Delete_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.Delete(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_Delete_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq DeleteRequest
var metadata runtime.ServerMetadata
var (
val string
ok bool
err error
_ = err
)
val, ok = pathParams["key"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "key")
}
protoReq.Key, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "key", err)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_Api_Delete_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.Delete(ctx, &protoReq)
return msg, metadata, err
}
func request_Api_CheckConnection_0(ctx context.Context, marshaler runtime.Marshaler, client ApiClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq emptypb.Empty
var metadata runtime.ServerMetadata
msg, err := client.CheckConnection(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_Api_CheckConnection_0(ctx context.Context, marshaler runtime.Marshaler, server ApiServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq emptypb.Empty
var metadata runtime.ServerMetadata
msg, err := server.CheckConnection(ctx, &protoReq)
return msg, metadata, err
}
// RegisterApiHandlerServer registers the http handlers for service Api to "mux".
// UnaryRPC :call ApiServer directly.
// StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906.
// Note that using this registration option will cause many gRPC library features to stop working. Consider using RegisterApiHandlerFromEndpoint instead.
func RegisterApiHandlerServer(ctx context.Context, mux *runtime.ServeMux, server ApiServer) error {
mux.Handle("POST", pattern_Api_Upload_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/s3.Api/Upload", runtime.WithHTTPPathPattern("/s3/api/upload"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_Upload_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_Upload_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("GET", pattern_Api_List_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/s3.Api/List", runtime.WithHTTPPathPattern("/s3/api/list"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_List_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_List_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("DELETE", pattern_Api_Delete_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/s3.Api/Delete", runtime.WithHTTPPathPattern("/s3/api/delete/{key=**}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_Delete_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_Delete_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("GET", pattern_Api_CheckConnection_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateIncomingContext(ctx, mux, req, "/s3.Api/CheckConnection", runtime.WithHTTPPathPattern("/s3/api/connection"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_Api_CheckConnection_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_CheckConnection_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
// RegisterApiHandlerFromEndpoint is same as RegisterApiHandler but
// automatically dials to "endpoint" and closes the connection when "ctx" gets done.
func RegisterApiHandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error) {
conn, err := grpc.DialContext(ctx, endpoint, opts...)
if err != nil {
return err
}
defer func() {
if err != nil {
if cerr := conn.Close(); cerr != nil {
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
return
}
go func() {
<-ctx.Done()
if cerr := conn.Close(); cerr != nil {
grpclog.Infof("Failed to close conn to %s: %v", endpoint, cerr)
}
}()
}()
return RegisterApiHandler(ctx, mux, conn)
}
// RegisterApiHandler registers the http handlers for service Api to "mux".
// The handlers forward requests to the grpc endpoint over "conn".
func RegisterApiHandler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.ClientConn) error {
return RegisterApiHandlerClient(ctx, mux, NewApiClient(conn))
}
// RegisterApiHandlerClient registers the http handlers for service Api
// to "mux". The handlers forward requests to the grpc endpoint over the given implementation of "ApiClient".
// Note: the gRPC framework executes interceptors within the gRPC handler. If the passed in "ApiClient"
// doesn't go through the normal gRPC flow (creating a gRPC client etc.) then it will be up to the passed in
// "ApiClient" to call the correct interceptors.
func RegisterApiHandlerClient(ctx context.Context, mux *runtime.ServeMux, client ApiClient) error {
mux.Handle("POST", pattern_Api_Upload_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/s3.Api/Upload", runtime.WithHTTPPathPattern("/s3/api/upload"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_Upload_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_Upload_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("GET", pattern_Api_List_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/s3.Api/List", runtime.WithHTTPPathPattern("/s3/api/list"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_List_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_List_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("DELETE", pattern_Api_Delete_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/s3.Api/Delete", runtime.WithHTTPPathPattern("/s3/api/delete/{key=**}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_Delete_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_Delete_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("GET", pattern_Api_CheckConnection_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
var err error
var annotatedContext context.Context
annotatedContext, err = runtime.AnnotateContext(ctx, mux, req, "/s3.Api/CheckConnection", runtime.WithHTTPPathPattern("/s3/api/connection"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_Api_CheckConnection_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_Api_CheckConnection_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
var (
pattern_Api_Upload_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"s3", "api", "upload"}, ""))
pattern_Api_List_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"s3", "api", "list"}, ""))
pattern_Api_Delete_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 3, 0, 4, 1, 5, 3}, []string{"s3", "api", "delete", "key"}, ""))
pattern_Api_CheckConnection_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"s3", "api", "connection"}, ""))
)
var (
forward_Api_Upload_0 = runtime.ForwardResponseMessage
forward_Api_List_0 = runtime.ForwardResponseMessage
forward_Api_Delete_0 = runtime.ForwardResponseMessage
forward_Api_CheckConnection_0 = runtime.ForwardResponseMessage
)
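
The generated gateway above only becomes reachable once it is mounted on an HTTP mux; in Monibuca the plugin framework does this through InstallPlugin (see the stress plugin's index.go further down). As a rough standalone sketch only — the gRPC address 127.0.0.1:50051 and listen port :8080 are placeholders, not values from this diff — the wiring would look like this:

package main

import (
	"context"
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "m7s.live/v5/plugin/s3/pb"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	mux := runtime.NewServeMux()
	opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
	// Dials the gRPC endpoint and mounts /s3/api/upload, /s3/api/list,
	// /s3/api/delete/{key} and /s3/api/connection on the mux.
	if err := pb.RegisterApiHandlerFromEndpoint(ctx, mux, "127.0.0.1:50051", opts); err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.ListenAndServe(":8080", mux))
}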

96
plugin/s3/pb/s3.proto Normal file
View File

@@ -0,0 +1,96 @@
syntax = "proto3";
import "google/api/annotations.proto";
import "google/protobuf/empty.proto";
import "global.proto";
package s3;
option go_package="m7s.live/v5/plugin/s3/pb";
service api {
rpc Upload (UploadRequest) returns (UploadResponse) {
option (google.api.http) = {
post: "/s3/api/upload"
body: "*"
};
}
rpc List (ListRequest) returns (ListResponse) {
option (google.api.http) = {
get: "/s3/api/list"
};
}
rpc Delete (DeleteRequest) returns (global.SuccessResponse) {
option (google.api.http) = {
delete: "/s3/api/delete/{key=**}"
};
}
rpc CheckConnection (google.protobuf.Empty) returns (ConnectionResponse) {
option (google.api.http) = {
get: "/s3/api/connection"
};
}
}
message UploadRequest {
string filename = 1; // File name
bytes content = 2; // File content
string content_type = 3; // MIME type
string bucket = 4; // Bucket name (optional, uses default if empty)
}
message UploadResponse {
uint32 code = 1;
string message = 2;
UploadData data = 3;
}
message UploadData {
string key = 1; // S3 object key
string url = 2; // Public URL
int64 size = 3; // File size in bytes
string etag = 4; // ETag from S3
}
message ListRequest {
string prefix = 1; // Prefix filter
int32 max_keys = 2; // Maximum number of keys to return
string marker = 3; // Pagination marker
string bucket = 4; // Bucket name (optional, uses default if empty)
}
message ListResponse {
uint32 code = 1;
string message = 2;
ListData data = 3;
}
message ListData {
repeated S3Object objects = 1;
bool is_truncated = 2; // Whether there are more results
string next_marker = 3; // Next pagination marker
}
message S3Object {
string key = 1; // Object key
int64 size = 2; // Object size in bytes
string last_modified = 3; // Last modified timestamp
string etag = 4; // ETag
string storage_class = 5; // Storage class
}
message DeleteRequest {
string key = 1; // Object key to delete
string bucket = 2; // Bucket name (optional, uses default if empty)
}
message ConnectionResponse {
uint32 code = 1;
string message = 2;
ConnectionData data = 3;
}
message ConnectionData {
bool connected = 1; // Whether connection is successful
string endpoint = 2; // S3 endpoint
string region = 3; // AWS region
bool use_ssl = 4; // Whether SSL is enabled
string bucket = 5; // Default bucket name
}
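
A hedged client-side sketch of the service defined above, using the generated Go client from plugin/s3/pb. The dial address, file path and prefix are placeholders; leaving bucket empty falls back to the plugin's default bucket, as the field comments state.

package main

import (
	"context"
	"log"
	"os"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/protobuf/types/known/emptypb"

	pb "m7s.live/v5/plugin/s3/pb"
)

func main() {
	conn, err := grpc.NewClient("127.0.0.1:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := pb.NewApiClient(conn)
	ctx := context.Background()

	// Probe connectivity before uploading anything.
	if info, err := client.CheckConnection(ctx, &emptypb.Empty{}); err == nil {
		log.Printf("connected=%v endpoint=%s", info.GetData().GetConnected(), info.GetData().GetEndpoint())
	}

	content, _ := os.ReadFile("record/live/test.mp4") // placeholder path
	up, err := client.Upload(ctx, &pb.UploadRequest{
		Filename:    "record/live/test.mp4",
		Content:     content,
		ContentType: "video/mp4", // bucket omitted: default bucket is used
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uploaded key=%s url=%s", up.GetData().GetKey(), up.GetData().GetUrl())

	// List objects under the same prefix, then delete the uploaded one.
	if ls, err := client.List(ctx, &pb.ListRequest{Prefix: "record/", MaxKeys: 100}); err == nil {
		log.Printf("objects=%d truncated=%v", len(ls.GetData().GetObjects()), ls.GetData().GetIsTruncated())
	}
	if _, err := client.Delete(ctx, &pb.DeleteRequest{Key: up.GetData().GetKey()}); err != nil {
		log.Fatal(err)
	}
}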

237
plugin/s3/pb/s3_grpc.pb.go Normal file
View File

@@ -0,0 +1,237 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v5.29.3
// source: s3.proto
package pb
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
emptypb "google.golang.org/protobuf/types/known/emptypb"
pb "m7s.live/v5/pb"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
Api_Upload_FullMethodName = "/s3.api/Upload"
Api_List_FullMethodName = "/s3.api/List"
Api_Delete_FullMethodName = "/s3.api/Delete"
Api_CheckConnection_FullMethodName = "/s3.api/CheckConnection"
)
// ApiClient is the client API for Api service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type ApiClient interface {
Upload(ctx context.Context, in *UploadRequest, opts ...grpc.CallOption) (*UploadResponse, error)
List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error)
Delete(ctx context.Context, in *DeleteRequest, opts ...grpc.CallOption) (*pb.SuccessResponse, error)
CheckConnection(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*ConnectionResponse, error)
}
type apiClient struct {
cc grpc.ClientConnInterface
}
func NewApiClient(cc grpc.ClientConnInterface) ApiClient {
return &apiClient{cc}
}
func (c *apiClient) Upload(ctx context.Context, in *UploadRequest, opts ...grpc.CallOption) (*UploadResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(UploadResponse)
err := c.cc.Invoke(ctx, Api_Upload_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *apiClient) List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListResponse)
err := c.cc.Invoke(ctx, Api_List_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *apiClient) Delete(ctx context.Context, in *DeleteRequest, opts ...grpc.CallOption) (*pb.SuccessResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(pb.SuccessResponse)
err := c.cc.Invoke(ctx, Api_Delete_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *apiClient) CheckConnection(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*ConnectionResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ConnectionResponse)
err := c.cc.Invoke(ctx, Api_CheckConnection_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// ApiServer is the server API for Api service.
// All implementations must embed UnimplementedApiServer
// for forward compatibility.
type ApiServer interface {
Upload(context.Context, *UploadRequest) (*UploadResponse, error)
List(context.Context, *ListRequest) (*ListResponse, error)
Delete(context.Context, *DeleteRequest) (*pb.SuccessResponse, error)
CheckConnection(context.Context, *emptypb.Empty) (*ConnectionResponse, error)
mustEmbedUnimplementedApiServer()
}
// UnimplementedApiServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedApiServer struct{}
func (UnimplementedApiServer) Upload(context.Context, *UploadRequest) (*UploadResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Upload not implemented")
}
func (UnimplementedApiServer) List(context.Context, *ListRequest) (*ListResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method List not implemented")
}
func (UnimplementedApiServer) Delete(context.Context, *DeleteRequest) (*pb.SuccessResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Delete not implemented")
}
func (UnimplementedApiServer) CheckConnection(context.Context, *emptypb.Empty) (*ConnectionResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method CheckConnection not implemented")
}
func (UnimplementedApiServer) mustEmbedUnimplementedApiServer() {}
func (UnimplementedApiServer) testEmbeddedByValue() {}
// UnsafeApiServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to ApiServer will
// result in compilation errors.
type UnsafeApiServer interface {
mustEmbedUnimplementedApiServer()
}
func RegisterApiServer(s grpc.ServiceRegistrar, srv ApiServer) {
// If the following call panics, it indicates UnimplementedApiServer was

// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&Api_ServiceDesc, srv)
}
func _Api_Upload_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(UploadRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).Upload(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_Upload_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).Upload(ctx, req.(*UploadRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Api_List_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).List(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_List_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).List(ctx, req.(*ListRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Api_Delete_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).Delete(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_Delete_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).Delete(ctx, req.(*DeleteRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Api_CheckConnection_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(emptypb.Empty)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApiServer).CheckConnection(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Api_CheckConnection_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).CheckConnection(ctx, req.(*emptypb.Empty))
}
return interceptor(ctx, in, info, handler)
}
// Api_ServiceDesc is the grpc.ServiceDesc for Api service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var Api_ServiceDesc = grpc.ServiceDesc{
ServiceName: "s3.api",
HandlerType: (*ApiServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "Upload",
Handler: _Api_Upload_Handler,
},
{
MethodName: "List",
Handler: _Api_List_Handler,
},
{
MethodName: "Delete",
Handler: _Api_Delete_Handler,
},
{
MethodName: "CheckConnection",
Handler: _Api_CheckConnection_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "s3.proto",
}
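
The service descriptor above is what a concrete implementation registers against. A hedged server-side sketch follows; the S3Service type, its package, and its single implemented method are illustrative only (real code would call into an S3/MinIO SDK):

package s3 // hypothetical package for illustration

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/protobuf/types/known/emptypb"

	pb "m7s.live/v5/plugin/s3/pb"
)

// S3Service embeds UnimplementedApiServer by value, so any method left
// unimplemented returns codes.Unimplemented instead of panicking.
type S3Service struct {
	pb.UnimplementedApiServer
}

func (s *S3Service) CheckConnection(ctx context.Context, _ *emptypb.Empty) (*pb.ConnectionResponse, error) {
	// A real implementation would probe the configured S3 endpoint here.
	return &pb.ConnectionResponse{
		Code:    0,
		Message: "ok",
		Data:    &pb.ConnectionData{Connected: true},
	}, nil
}

func register(s grpc.ServiceRegistrar) {
	pb.RegisterApiServer(s, &S3Service{})
}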

View File

@@ -2,6 +2,7 @@ package plugin_sei
import (
"context"
"encoding/base64"
"errors"
globalPB "m7s.live/v5/pb"
@@ -38,8 +39,15 @@ func (conf *SEIPlugin) Insert(ctx context.Context, req *pb.InsertRequest) (*glob
}).WaitStarted()
}
t := req.Type
transformer.AddSEI(byte(t), req.Data)
var data []byte
switch req.Format {
case "json", "string":
data = []byte(req.Data)
case "base64":
data, err = base64.StdEncoding.DecodeString(req.Data)
}
transformer.AddSEI(byte(t), data)
err = transformer.WaitStarted()
if err != nil {
return nil, err
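
The handler above now dispatches on req.Format before handing the payload to AddSEI. A hedged caller-side sketch using the generated client from plugin/sei/pb; the address, stream path and payload are placeholders, and type 5 is the H.264 "user data unregistered" SEI payload type:

package main

import (
	"context"
	"encoding/base64"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "m7s.live/v5/plugin/sei/pb"
)

func main() {
	conn, err := grpc.NewClient("127.0.0.1:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// data travels as a string now; format tells the server how to decode it
	// ("json"/"string" pass through, "base64" is decoded before insertion).
	_, err = pb.NewApiClient(conn).Insert(context.Background(), &pb.InsertRequest{
		StreamPath: "live/test",
		Type:       5,
		Format:     "base64",
		Data:       base64.StdEncoding.EncodeToString([]byte(`{"ts":1718000000}`)),
	})
	if err != nil {
		log.Fatal(err)
	}
}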

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.28.1
// protoc v3.19.1
// protoc-gen-go v1.36.6
// protoc v5.29.3
// source: sei.proto
package pb
@@ -13,6 +13,7 @@ import (
pb "m7s.live/v5/pb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@@ -23,23 +24,21 @@ const (
)
type InsertRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
StreamPath string `protobuf:"bytes,1,opt,name=streamPath,proto3" json:"streamPath,omitempty"`
Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
Type uint32 `protobuf:"varint,3,opt,name=type,proto3" json:"type,omitempty"`
TargetStreamPath string `protobuf:"bytes,4,opt,name=targetStreamPath,proto3" json:"targetStreamPath,omitempty"`
state protoimpl.MessageState `protogen:"open.v1"`
StreamPath string `protobuf:"bytes,1,opt,name=streamPath,proto3" json:"streamPath,omitempty"`
Data string `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
Type uint32 `protobuf:"varint,3,opt,name=type,proto3" json:"type,omitempty"`
TargetStreamPath string `protobuf:"bytes,4,opt,name=targetStreamPath,proto3" json:"targetStreamPath,omitempty"`
Format string `protobuf:"bytes,5,opt,name=format,proto3" json:"format,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *InsertRequest) Reset() {
*x = InsertRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_sei_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_sei_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *InsertRequest) String() string {
@@ -50,7 +49,7 @@ func (*InsertRequest) ProtoMessage() {}
func (x *InsertRequest) ProtoReflect() protoreflect.Message {
mi := &file_sei_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -72,11 +71,11 @@ func (x *InsertRequest) GetStreamPath() string {
return ""
}
func (x *InsertRequest) GetData() []byte {
func (x *InsertRequest) GetData() string {
if x != nil {
return x.Data
}
return nil
return ""
}
func (x *InsertRequest) GetType() uint32 {
@@ -93,47 +92,43 @@ func (x *InsertRequest) GetTargetStreamPath() string {
return ""
}
func (x *InsertRequest) GetFormat() string {
if x != nil {
return x.Format
}
return ""
}
var File_sei_proto protoreflect.FileDescriptor
var file_sei_proto_rawDesc = []byte{
0x0a, 0x09, 0x73, 0x65, 0x69, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x03, 0x73, 0x65, 0x69,
0x1a, 0x1c, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e,
0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x0c,
0x67, 0x6c, 0x6f, 0x62, 0x61, 0x6c, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x83, 0x01, 0x0a,
0x0d, 0x49, 0x6e, 0x73, 0x65, 0x72, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1e,
0x0a, 0x0a, 0x73, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x50, 0x61, 0x74, 0x68, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x0a, 0x73, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x50, 0x61, 0x74, 0x68, 0x12, 0x12,
0x0a, 0x04, 0x64, 0x61, 0x74, 0x61, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x04, 0x64, 0x61,
0x74, 0x61, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d,
0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x2a, 0x0a, 0x10, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74,
0x53, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x50, 0x61, 0x74, 0x68, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09,
0x52, 0x10, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x53, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x50, 0x61,
0x74, 0x68, 0x32, 0x6b, 0x0a, 0x03, 0x61, 0x70, 0x69, 0x12, 0x64, 0x0a, 0x06, 0x69, 0x6e, 0x73,
0x65, 0x72, 0x74, 0x12, 0x12, 0x2e, 0x73, 0x65, 0x69, 0x2e, 0x49, 0x6e, 0x73, 0x65, 0x72, 0x74,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x17, 0x2e, 0x67, 0x6c, 0x6f, 0x62, 0x61, 0x6c,
0x2e, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x22, 0x2d, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x27, 0x22, 0x1f, 0x2f, 0x73, 0x65, 0x69, 0x2f, 0x61,
0x70, 0x69, 0x2f, 0x69, 0x6e, 0x73, 0x65, 0x72, 0x74, 0x2f, 0x7b, 0x73, 0x74, 0x72, 0x65, 0x61,
0x6d, 0x50, 0x61, 0x74, 0x68, 0x3d, 0x2a, 0x2a, 0x7d, 0x3a, 0x04, 0x64, 0x61, 0x74, 0x61, 0x42,
0x1f, 0x5a, 0x1d, 0x6d, 0x37, 0x73, 0x2e, 0x6c, 0x69, 0x76, 0x65, 0x2f, 0x6d, 0x37, 0x73, 0x2f,
0x76, 0x35, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x73, 0x65, 0x69, 0x2f, 0x70, 0x62,
0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
const file_sei_proto_rawDesc = "" +
"\n" +
"\tsei.proto\x12\x03sei\x1a\x1cgoogle/api/annotations.proto\x1a\fglobal.proto\"\x9b\x01\n" +
"\rInsertRequest\x12\x1e\n" +
"\n" +
"streamPath\x18\x01 \x01(\tR\n" +
"streamPath\x12\x12\n" +
"\x04data\x18\x02 \x01(\tR\x04data\x12\x12\n" +
"\x04type\x18\x03 \x01(\rR\x04type\x12*\n" +
"\x10targetStreamPath\x18\x04 \x01(\tR\x10targetStreamPath\x12\x16\n" +
"\x06format\x18\x05 \x01(\tR\x06format2k\n" +
"\x03api\x12d\n" +
"\x06insert\x12\x12.sei.InsertRequest\x1a\x17.global.SuccessResponse\"-\x82\xd3\xe4\x93\x02':\x04data\"\x1f/sei/api/insert/{streamPath=**}B\x1bZ\x19m7s.live/v5/plugin/sei/pbb\x06proto3"
var (
file_sei_proto_rawDescOnce sync.Once
file_sei_proto_rawDescData = file_sei_proto_rawDesc
file_sei_proto_rawDescData []byte
)
func file_sei_proto_rawDescGZIP() []byte {
file_sei_proto_rawDescOnce.Do(func() {
file_sei_proto_rawDescData = protoimpl.X.CompressGZIP(file_sei_proto_rawDescData)
file_sei_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_sei_proto_rawDesc), len(file_sei_proto_rawDesc)))
})
return file_sei_proto_rawDescData
}
var file_sei_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_sei_proto_goTypes = []interface{}{
var file_sei_proto_goTypes = []any{
(*InsertRequest)(nil), // 0: sei.InsertRequest
(*pb.SuccessResponse)(nil), // 1: global.SuccessResponse
}
@@ -152,25 +147,11 @@ func file_sei_proto_init() {
if File_sei_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_sei_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*InsertRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_sei_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_sei_proto_rawDesc), len(file_sei_proto_rawDesc)),
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
@@ -181,7 +162,6 @@ func file_sei_proto_init() {
MessageInfos: file_sei_proto_msgTypes,
}.Build()
File_sei_proto = out.File
file_sei_proto_rawDesc = nil
file_sei_proto_goTypes = nil
file_sei_proto_depIdxs = nil
}

View File

@@ -16,7 +16,8 @@ service api {
message InsertRequest {
string streamPath = 1;
bytes data = 2;
string data = 2;
uint32 type = 3;
string targetStreamPath = 4;
string format = 5;
}

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.2.0
// - protoc v3.19.1
// - protoc-gen-go-grpc v1.5.1
// - protoc v5.29.3
// source: sei.proto
package pb
@@ -16,8 +16,12 @@ import (
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
Api_Insert_FullMethodName = "/sei.api/insert"
)
// ApiClient is the client API for Api service.
//
@@ -35,8 +39,9 @@ func NewApiClient(cc grpc.ClientConnInterface) ApiClient {
}
func (c *apiClient) Insert(ctx context.Context, in *InsertRequest, opts ...grpc.CallOption) (*pb.SuccessResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(pb.SuccessResponse)
err := c.cc.Invoke(ctx, "/sei.api/insert", in, out, opts...)
err := c.cc.Invoke(ctx, Api_Insert_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
@@ -45,20 +50,24 @@ func (c *apiClient) Insert(ctx context.Context, in *InsertRequest, opts ...grpc.
// ApiServer is the server API for Api service.
// All implementations must embed UnimplementedApiServer
// for forward compatibility
// for forward compatibility.
type ApiServer interface {
Insert(context.Context, *InsertRequest) (*pb.SuccessResponse, error)
mustEmbedUnimplementedApiServer()
}
// UnimplementedApiServer must be embedded to have forward compatible implementations.
type UnimplementedApiServer struct {
}
// UnimplementedApiServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedApiServer struct{}
func (UnimplementedApiServer) Insert(context.Context, *InsertRequest) (*pb.SuccessResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Insert not implemented")
}
func (UnimplementedApiServer) mustEmbedUnimplementedApiServer() {}
func (UnimplementedApiServer) testEmbeddedByValue() {}
// UnsafeApiServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to ApiServer will
@@ -68,6 +77,13 @@ type UnsafeApiServer interface {
}
func RegisterApiServer(s grpc.ServiceRegistrar, srv ApiServer) {
// If the following call panics, it indicates UnimplementedApiServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&Api_ServiceDesc, srv)
}
@@ -81,7 +97,7 @@ func _Api_Insert_Handler(srv interface{}, ctx context.Context, dec func(interfac
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/sei.api/insert",
FullMethod: Api_Insert_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApiServer).Insert(ctx, req.(*InsertRequest))

View File

@@ -3,6 +3,7 @@ package plugin_stress
import (
"context"
"fmt"
"slices"
"strings"
"github.com/mcuadros/go-defaults"
@@ -16,15 +17,17 @@ import (
mp4 "m7s.live/v5/plugin/mp4/pkg"
rtmp "m7s.live/v5/plugin/rtmp/pkg"
rtsp "m7s.live/v5/plugin/rtsp/pkg"
srt "m7s.live/v5/plugin/srt/pkg"
"m7s.live/v5/plugin/stress/pb"
)
func (r *StressPlugin) pull(count int, url string, puller m7s.PullerFactory) (err error) {
func (r *StressPlugin) pull(count int, url string, testMode int32, puller m7s.PullerFactory) (err error) {
hasPlaceholder := strings.Contains(url, "%d")
if i := r.pullers.Length; count > i {
for j := i; j < count; j++ {
conf := config.Pull{}
defaults.SetDefaults(&conf)
conf.TestMode = int(testMode)
if hasPlaceholder {
conf.URL = fmt.Sprintf(url, j)
} else {
@@ -35,15 +38,18 @@ func (r *StressPlugin) pull(count int, url string, puller m7s.PullerFactory) (er
if err = ctx.WaitStarted(); err != nil {
return
}
r.pullers.AddUnique(ctx)
ctx.OnDispose(func() {
r.pullers.Remove(ctx)
})
if r.pullers.AddUnique(ctx) {
ctx.OnDispose(func() {
r.pullers.Remove(ctx)
})
} else {
ctx.Stop(task.ErrExist)
}
}
} else if count < i {
clone := slices.Clone(r.pullers.Items)
for j := i; j > count; j-- {
r.pullers.Items[j-1].Stop(task.ErrStopByUser)
r.pullers.Remove(r.pullers.Items[j-1])
clone[j-1].Stop(task.ErrStopByUser)
}
}
return
@@ -59,15 +65,18 @@ func (r *StressPlugin) push(count int, streamPath, url string, pusher m7s.Pusher
if err = ctx.WaitStarted(); err != nil {
return
}
r.pushers.AddUnique(ctx)
ctx.OnDispose(func() {
r.pushers.Remove(ctx)
})
if r.pushers.AddUnique(ctx) {
ctx.OnDispose(func() {
r.pushers.Remove(ctx)
})
} else {
ctx.Stop(task.ErrExist)
}
}
} else if count < i {
clone := slices.Clone(r.pushers.Items)
for j := i; j > count; j-- {
r.pushers.Items[j-1].Stop(task.ErrStopByUser)
r.pushers.Remove(r.pushers.Items[j-1])
clone[j-1].Stop(task.ErrStopByUser)
}
}
return
@@ -80,6 +89,8 @@ func (r *StressPlugin) StartPush(ctx context.Context, req *pb.PushRequest) (res
pusher = rtmp.NewPusher
case "rtsp":
pusher = rtsp.NewPusher
case "srt":
pusher = srt.NewPusher
default:
return nil, fmt.Errorf("unsupport protocol %s", req.Protocol)
}
@@ -93,6 +104,8 @@ func (r *StressPlugin) StartPull(ctx context.Context, req *pb.PullRequest) (res
puller = rtmp.NewPuller
case "rtsp":
puller = rtsp.NewPuller
case "srt":
puller = srt.NewPuller
case "flv":
puller = flv.NewPuller
case "mp4":
@@ -100,28 +113,28 @@ func (r *StressPlugin) StartPull(ctx context.Context, req *pb.PullRequest) (res
default:
return nil, fmt.Errorf("unsupport protocol %s", req.Protocol)
}
return &gpb.SuccessResponse{}, r.pull(int(req.PullCount), req.RemoteURL, puller)
return &gpb.SuccessResponse{}, r.pull(int(req.PullCount), req.RemoteURL, req.TestMode, puller)
}
func (r *StressPlugin) StopPush(ctx context.Context, req *emptypb.Empty) (res *gpb.SuccessResponse, err error) {
for pusher := range r.pushers.Range {
for _, pusher := range slices.Clone(r.pushers.Items) {
pusher.Stop(task.ErrStopByUser)
}
r.pushers.Clear()
return &gpb.SuccessResponse{}, nil
}
func (r *StressPlugin) StopPull(ctx context.Context, req *emptypb.Empty) (res *gpb.SuccessResponse, err error) {
for puller := range r.pullers.Range {
for _, puller := range slices.Clone(r.pullers.Items) {
puller.Stop(task.ErrStopByUser)
}
r.pullers.Clear()
return &gpb.SuccessResponse{}, nil
}
func (r *StressPlugin) GetCount(ctx context.Context, req *emptypb.Empty) (res *pb.CountResponse, err error) {
return &pb.CountResponse{
PullCount: uint32(r.pullers.Length),
PushCount: uint32(r.pushers.Length),
Data: &pb.CountResponseData{
PullCount: uint32(r.pullers.Length),
PushCount: uint32(r.pushers.Length),
},
}, nil
}
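
StartPull now threads req.TestMode into each puller's config. A hedged sketch of driving it over the HTTP gateway route declared in stress.proto (POST /stress/api/pull/{protocol}/{pullCount} with the remaining fields in the body); the host, port, URL and count are placeholders, and testMode=1 means "pull without publish" per the proto comment:

package main

import (
	"bytes"
	"log"
	"net/http"
)

func main() {
	// Start 100 RTMP pullers against the same URL, without republishing.
	body := []byte(`{"remoteURL":"rtmp://127.0.0.1/live/test","testMode":1}`)
	resp, err := http.Post("http://127.0.0.1:8080/stress/api/pull/rtmp/100",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("StartPull:", resp.Status)
}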

View File

@@ -1,8 +1,6 @@
package plugin_stress
import (
"sync"
"m7s.live/v5"
"m7s.live/v5/pkg/util"
"m7s.live/v5/plugin/stress/pb"
@@ -18,7 +16,5 @@ type StressPlugin struct {
var _ = m7s.InstallPlugin[StressPlugin](&pb.Api_ServiceDesc, pb.RegisterApiHandler)
func (r *StressPlugin) OnInit() error {
r.pushers.L = &sync.RWMutex{}
r.pullers.L = &sync.RWMutex{}
return nil
}

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.28.1
// protoc v3.19.1
// protoc-gen-go v1.36.6
// protoc v5.29.3
// source: stress.proto
package pb
@@ -14,6 +14,7 @@ import (
pb "m7s.live/v5/pb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@@ -23,22 +24,72 @@ const (
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type CountResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
type CountResponseData struct {
state protoimpl.MessageState `protogen:"open.v1"`
PushCount uint32 `protobuf:"varint,1,opt,name=pushCount,proto3" json:"pushCount,omitempty"`
PullCount uint32 `protobuf:"varint,2,opt,name=pullCount,proto3" json:"pullCount,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
PushCount uint32 `protobuf:"varint,1,opt,name=pushCount,proto3" json:"pushCount,omitempty"`
PullCount uint32 `protobuf:"varint,2,opt,name=pullCount,proto3" json:"pullCount,omitempty"`
func (x *CountResponseData) Reset() {
*x = CountResponseData{}
mi := &file_stress_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CountResponseData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CountResponseData) ProtoMessage() {}
func (x *CountResponseData) ProtoReflect() protoreflect.Message {
mi := &file_stress_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CountResponseData.ProtoReflect.Descriptor instead.
func (*CountResponseData) Descriptor() ([]byte, []int) {
return file_stress_proto_rawDescGZIP(), []int{0}
}
func (x *CountResponseData) GetPushCount() uint32 {
if x != nil {
return x.PushCount
}
return 0
}
func (x *CountResponseData) GetPullCount() uint32 {
if x != nil {
return x.PullCount
}
return 0
}
type CountResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
Data *CountResponseData `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CountResponse) Reset() {
*x = CountResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_stress_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_stress_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CountResponse) String() string {
@@ -48,8 +99,8 @@ func (x *CountResponse) String() string {
func (*CountResponse) ProtoMessage() {}
func (x *CountResponse) ProtoReflect() protoreflect.Message {
mi := &file_stress_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
mi := &file_stress_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -61,41 +112,45 @@ func (x *CountResponse) ProtoReflect() protoreflect.Message {
// Deprecated: Use CountResponse.ProtoReflect.Descriptor instead.
func (*CountResponse) Descriptor() ([]byte, []int) {
return file_stress_proto_rawDescGZIP(), []int{0}
return file_stress_proto_rawDescGZIP(), []int{1}
}
func (x *CountResponse) GetPushCount() uint32 {
func (x *CountResponse) GetCode() uint32 {
if x != nil {
return x.PushCount
return x.Code
}
return 0
}
func (x *CountResponse) GetPullCount() uint32 {
func (x *CountResponse) GetMessage() string {
if x != nil {
return x.PullCount
return x.Message
}
return 0
return ""
}
func (x *CountResponse) GetData() *CountResponseData {
if x != nil {
return x.Data
}
return nil
}
type PushRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
StreamPath string `protobuf:"bytes,1,opt,name=streamPath,proto3" json:"streamPath,omitempty"`
Protocol string `protobuf:"bytes,2,opt,name=protocol,proto3" json:"protocol,omitempty"`
RemoteURL string `protobuf:"bytes,3,opt,name=remoteURL,proto3" json:"remoteURL,omitempty"`
PushCount int32 `protobuf:"varint,4,opt,name=pushCount,proto3" json:"pushCount,omitempty"`
unknownFields protoimpl.UnknownFields
StreamPath string `protobuf:"bytes,1,opt,name=streamPath,proto3" json:"streamPath,omitempty"`
Protocol string `protobuf:"bytes,2,opt,name=protocol,proto3" json:"protocol,omitempty"`
RemoteURL string `protobuf:"bytes,3,opt,name=remoteURL,proto3" json:"remoteURL,omitempty"`
PushCount int32 `protobuf:"varint,4,opt,name=pushCount,proto3" json:"pushCount,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *PushRequest) Reset() {
*x = PushRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_stress_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_stress_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *PushRequest) String() string {
@@ -105,8 +160,8 @@ func (x *PushRequest) String() string {
func (*PushRequest) ProtoMessage() {}
func (x *PushRequest) ProtoReflect() protoreflect.Message {
mi := &file_stress_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
mi := &file_stress_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -118,7 +173,7 @@ func (x *PushRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use PushRequest.ProtoReflect.Descriptor instead.
func (*PushRequest) Descriptor() ([]byte, []int) {
return file_stress_proto_rawDescGZIP(), []int{1}
return file_stress_proto_rawDescGZIP(), []int{2}
}
func (x *PushRequest) GetStreamPath() string {
@@ -150,22 +205,20 @@ func (x *PushRequest) GetPushCount() int32 {
}
type PullRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
RemoteURL string `protobuf:"bytes,1,opt,name=remoteURL,proto3" json:"remoteURL,omitempty"`
Protocol string `protobuf:"bytes,2,opt,name=protocol,proto3" json:"protocol,omitempty"`
PullCount int32 `protobuf:"varint,3,opt,name=pullCount,proto3" json:"pullCount,omitempty"`
TestMode int32 `protobuf:"varint,4,opt,name=testMode,proto3" json:"testMode,omitempty"` // 0: pull, 1: pull without publish
unknownFields protoimpl.UnknownFields
RemoteURL string `protobuf:"bytes,1,opt,name=remoteURL,proto3" json:"remoteURL,omitempty"`
Protocol string `protobuf:"bytes,2,opt,name=protocol,proto3" json:"protocol,omitempty"`
PullCount int32 `protobuf:"varint,3,opt,name=pullCount,proto3" json:"pullCount,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *PullRequest) Reset() {
*x = PullRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_stress_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_stress_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *PullRequest) String() string {
@@ -175,8 +228,8 @@ func (x *PullRequest) String() string {
func (*PullRequest) ProtoMessage() {}
func (x *PullRequest) ProtoReflect() protoreflect.Message {
mi := &file_stress_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
mi := &file_stress_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -188,7 +241,7 @@ func (x *PullRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use PullRequest.ProtoReflect.Descriptor instead.
func (*PullRequest) Descriptor() ([]byte, []int) {
return file_stress_proto_rawDescGZIP(), []int{2}
return file_stress_proto_rawDescGZIP(), []int{3}
}
func (x *PullRequest) GetRemoteURL() string {
@@ -212,107 +265,82 @@ func (x *PullRequest) GetPullCount() int32 {
return 0
}
func (x *PullRequest) GetTestMode() int32 {
if x != nil {
return x.TestMode
}
return 0
}
var File_stress_proto protoreflect.FileDescriptor
var file_stress_proto_rawDesc = []byte{
0x0a, 0x0c, 0x73, 0x74, 0x72, 0x65, 0x73, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x06,
0x73, 0x74, 0x72, 0x65, 0x73, 0x73, 0x1a, 0x1c, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x61,
0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1b, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70, 0x74, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x1a, 0x0c, 0x67, 0x6c, 0x6f, 0x62, 0x61, 0x6c, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22,
0x4b, 0x0a, 0x0d, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x1c, 0x0a, 0x09, 0x70, 0x75, 0x73, 0x68, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0d, 0x52, 0x09, 0x70, 0x75, 0x73, 0x68, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x1c,
0x0a, 0x09, 0x70, 0x75, 0x6c, 0x6c, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28,
0x0d, 0x52, 0x09, 0x70, 0x75, 0x6c, 0x6c, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0x85, 0x01, 0x0a,
0x0b, 0x50, 0x75, 0x73, 0x68, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1e, 0x0a, 0x0a,
0x73, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x50, 0x61, 0x74, 0x68, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
0x52, 0x0a, 0x73, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x50, 0x61, 0x74, 0x68, 0x12, 0x1a, 0x0a, 0x08,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63, 0x6f, 0x6c, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63, 0x6f, 0x6c, 0x12, 0x1c, 0x0a, 0x09, 0x72, 0x65, 0x6d, 0x6f,
0x74, 0x65, 0x55, 0x52, 0x4c, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x72, 0x65, 0x6d,
0x6f, 0x74, 0x65, 0x55, 0x52, 0x4c, 0x12, 0x1c, 0x0a, 0x09, 0x70, 0x75, 0x73, 0x68, 0x43, 0x6f,
0x75, 0x6e, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x09, 0x70, 0x75, 0x73, 0x68, 0x43,
0x6f, 0x75, 0x6e, 0x74, 0x22, 0x65, 0x0a, 0x0b, 0x50, 0x75, 0x6c, 0x6c, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x12, 0x1c, 0x0a, 0x09, 0x72, 0x65, 0x6d, 0x6f, 0x74, 0x65, 0x55, 0x52, 0x4c,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x72, 0x65, 0x6d, 0x6f, 0x74, 0x65, 0x55, 0x52,
0x4c, 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63, 0x6f, 0x6c, 0x18, 0x02, 0x20,
0x01, 0x28, 0x09, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63, 0x6f, 0x6c, 0x12, 0x1c, 0x0a,
0x09, 0x70, 0x75, 0x6c, 0x6c, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x05,
0x52, 0x09, 0x70, 0x75, 0x6c, 0x6c, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x32, 0xf1, 0x03, 0x0a, 0x03,
0x61, 0x70, 0x69, 0x12, 0x6d, 0x0a, 0x09, 0x53, 0x74, 0x61, 0x72, 0x74, 0x50, 0x75, 0x73, 0x68,
0x12, 0x13, 0x2e, 0x73, 0x74, 0x72, 0x65, 0x73, 0x73, 0x2e, 0x50, 0x75, 0x73, 0x68, 0x52, 0x65,
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x17, 0x2e, 0x67, 0x6c, 0x6f, 0x62, 0x61, 0x6c, 0x2e, 0x53,
0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x32,
0x82, 0xd3, 0xe4, 0x93, 0x02, 0x2c, 0x22, 0x27, 0x2f, 0x73, 0x74, 0x72, 0x65, 0x73, 0x73, 0x2f,
0x61, 0x70, 0x69, 0x2f, 0x70, 0x75, 0x73, 0x68, 0x2f, 0x7b, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63,
0x6f, 0x6c, 0x7d, 0x2f, 0x7b, 0x70, 0x75, 0x73, 0x68, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x7d, 0x3a,
0x01, 0x2a, 0x12, 0x6d, 0x0a, 0x09, 0x53, 0x74, 0x61, 0x72, 0x74, 0x50, 0x75, 0x6c, 0x6c, 0x12,
0x13, 0x2e, 0x73, 0x74, 0x72, 0x65, 0x73, 0x73, 0x2e, 0x50, 0x75, 0x6c, 0x6c, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x1a, 0x17, 0x2e, 0x67, 0x6c, 0x6f, 0x62, 0x61, 0x6c, 0x2e, 0x53, 0x75,
0x63, 0x63, 0x65, 0x73, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x32, 0x82,
0xd3, 0xe4, 0x93, 0x02, 0x2c, 0x22, 0x27, 0x2f, 0x73, 0x74, 0x72, 0x65, 0x73, 0x73, 0x2f, 0x61,
0x70, 0x69, 0x2f, 0x70, 0x75, 0x6c, 0x6c, 0x2f, 0x7b, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63, 0x6f,
0x6c, 0x7d, 0x2f, 0x7b, 0x70, 0x75, 0x6c, 0x6c, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x7d, 0x3a, 0x01,
0x2a, 0x12, 0x54, 0x0a, 0x08, 0x47, 0x65, 0x74, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x16, 0x2e,
0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e,
0x45, 0x6d, 0x70, 0x74, 0x79, 0x1a, 0x15, 0x2e, 0x73, 0x74, 0x72, 0x65, 0x73, 0x73, 0x2e, 0x43,
0x6f, 0x75, 0x6e, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x19, 0x82, 0xd3,
0xe4, 0x93, 0x02, 0x13, 0x12, 0x11, 0x2f, 0x73, 0x74, 0x72, 0x65, 0x73, 0x73, 0x2f, 0x61, 0x70,
0x69, 0x2f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x5a, 0x0a, 0x08, 0x53, 0x74, 0x6f, 0x70, 0x50,
0x75, 0x73, 0x68, 0x12, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x1a, 0x17, 0x2e, 0x67, 0x6c,
0x6f, 0x62, 0x61, 0x6c, 0x2e, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x52, 0x65, 0x73, 0x70,
0x6f, 0x6e, 0x73, 0x65, 0x22, 0x1d, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x17, 0x22, 0x15, 0x2f, 0x73,
0x74, 0x72, 0x65, 0x73, 0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x73, 0x74, 0x6f, 0x70, 0x2f, 0x70,
0x75, 0x73, 0x68, 0x12, 0x5a, 0x0a, 0x08, 0x53, 0x74, 0x6f, 0x70, 0x50, 0x75, 0x6c, 0x6c, 0x12,
0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x1a, 0x17, 0x2e, 0x67, 0x6c, 0x6f, 0x62, 0x61, 0x6c,
0x2e, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x22, 0x1d, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x17, 0x22, 0x15, 0x2f, 0x73, 0x74, 0x72, 0x65, 0x73,
0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x73, 0x74, 0x6f, 0x70, 0x2f, 0x70, 0x75, 0x6c, 0x6c, 0x42,
0x1e, 0x5a, 0x1c, 0x6d, 0x37, 0x73, 0x2e, 0x6c, 0x69, 0x76, 0x65, 0x2f, 0x76, 0x35, 0x2f, 0x70,
0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x73, 0x74, 0x72, 0x65, 0x73, 0x73, 0x2f, 0x70, 0x62, 0x62,
0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
const file_stress_proto_rawDesc = "" +
"\n" +
"\fstress.proto\x12\x06stress\x1a\x1cgoogle/api/annotations.proto\x1a\x1bgoogle/protobuf/empty.proto\x1a\fglobal.proto\"O\n" +
"\x11CountResponseData\x12\x1c\n" +
"\tpushCount\x18\x01 \x01(\rR\tpushCount\x12\x1c\n" +
"\tpullCount\x18\x02 \x01(\rR\tpullCount\"l\n" +
"\rCountResponse\x12\x12\n" +
"\x04code\x18\x01 \x01(\rR\x04code\x12\x18\n" +
"\amessage\x18\x02 \x01(\tR\amessage\x12-\n" +
"\x04data\x18\x03 \x01(\v2\x19.stress.CountResponseDataR\x04data\"\x85\x01\n" +
"\vPushRequest\x12\x1e\n" +
"\n" +
"streamPath\x18\x01 \x01(\tR\n" +
"streamPath\x12\x1a\n" +
"\bprotocol\x18\x02 \x01(\tR\bprotocol\x12\x1c\n" +
"\tremoteURL\x18\x03 \x01(\tR\tremoteURL\x12\x1c\n" +
"\tpushCount\x18\x04 \x01(\x05R\tpushCount\"\x81\x01\n" +
"\vPullRequest\x12\x1c\n" +
"\tremoteURL\x18\x01 \x01(\tR\tremoteURL\x12\x1a\n" +
"\bprotocol\x18\x02 \x01(\tR\bprotocol\x12\x1c\n" +
"\tpullCount\x18\x03 \x01(\x05R\tpullCount\x12\x1a\n" +
"\btestMode\x18\x04 \x01(\x05R\btestMode2\xf1\x03\n" +
"\x03api\x12m\n" +
"\tStartPush\x12\x13.stress.PushRequest\x1a\x17.global.SuccessResponse\"2\x82\xd3\xe4\x93\x02,:\x01*\"'/stress/api/push/{protocol}/{pushCount}\x12m\n" +
"\tStartPull\x12\x13.stress.PullRequest\x1a\x17.global.SuccessResponse\"2\x82\xd3\xe4\x93\x02,:\x01*\"'/stress/api/pull/{protocol}/{pullCount}\x12T\n" +
"\bGetCount\x12\x16.google.protobuf.Empty\x1a\x15.stress.CountResponse\"\x19\x82\xd3\xe4\x93\x02\x13\x12\x11/stress/api/count\x12Z\n" +
"\bStopPush\x12\x16.google.protobuf.Empty\x1a\x17.global.SuccessResponse\"\x1d\x82\xd3\xe4\x93\x02\x17\"\x15/stress/api/stop/push\x12Z\n" +
"\bStopPull\x12\x16.google.protobuf.Empty\x1a\x17.global.SuccessResponse\"\x1d\x82\xd3\xe4\x93\x02\x17\"\x15/stress/api/stop/pullB\x1eZ\x1cm7s.live/v5/plugin/stress/pbb\x06proto3"
var (
file_stress_proto_rawDescOnce sync.Once
file_stress_proto_rawDescData = file_stress_proto_rawDesc
file_stress_proto_rawDescData []byte
)
func file_stress_proto_rawDescGZIP() []byte {
file_stress_proto_rawDescOnce.Do(func() {
file_stress_proto_rawDescData = protoimpl.X.CompressGZIP(file_stress_proto_rawDescData)
file_stress_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_stress_proto_rawDesc), len(file_stress_proto_rawDesc)))
})
return file_stress_proto_rawDescData
}
var file_stress_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
var file_stress_proto_goTypes = []interface{}{
(*CountResponse)(nil), // 0: stress.CountResponse
(*PushRequest)(nil), // 1: stress.PushRequest
(*PullRequest)(nil), // 2: stress.PullRequest
(*emptypb.Empty)(nil), // 3: google.protobuf.Empty
(*pb.SuccessResponse)(nil), // 4: global.SuccessResponse
var file_stress_proto_msgTypes = make([]protoimpl.MessageInfo, 4)
var file_stress_proto_goTypes = []any{
(*CountResponseData)(nil), // 0: stress.CountResponseData
(*CountResponse)(nil), // 1: stress.CountResponse
(*PushRequest)(nil), // 2: stress.PushRequest
(*PullRequest)(nil), // 3: stress.PullRequest
(*emptypb.Empty)(nil), // 4: google.protobuf.Empty
(*pb.SuccessResponse)(nil), // 5: global.SuccessResponse
}
var file_stress_proto_depIdxs = []int32{
1, // 0: stress.api.StartPush:input_type -> stress.PushRequest
2, // 1: stress.api.StartPull:input_type -> stress.PullRequest
3, // 2: stress.api.GetCount:input_type -> google.protobuf.Empty
3, // 3: stress.api.StopPush:input_type -> google.protobuf.Empty
3, // 4: stress.api.StopPull:input_type -> google.protobuf.Empty
4, // 5: stress.api.StartPush:output_type -> global.SuccessResponse
4, // 6: stress.api.StartPull:output_type -> global.SuccessResponse
0, // 7: stress.api.GetCount:output_type -> stress.CountResponse
4, // 8: stress.api.StopPush:output_type -> global.SuccessResponse
4, // 9: stress.api.StopPull:output_type -> global.SuccessResponse
5, // [5:10] is the sub-list for method output_type
0, // [0:5] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
0, // 0: stress.CountResponse.data:type_name -> stress.CountResponseData
2, // 1: stress.api.StartPush:input_type -> stress.PushRequest
3, // 2: stress.api.StartPull:input_type -> stress.PullRequest
4, // 3: stress.api.GetCount:input_type -> google.protobuf.Empty
4, // 4: stress.api.StopPush:input_type -> google.protobuf.Empty
4, // 5: stress.api.StopPull:input_type -> google.protobuf.Empty
5, // 6: stress.api.StartPush:output_type -> global.SuccessResponse
5, // 7: stress.api.StartPull:output_type -> global.SuccessResponse
1, // 8: stress.api.GetCount:output_type -> stress.CountResponse
5, // 9: stress.api.StopPush:output_type -> global.SuccessResponse
5, // 10: stress.api.StopPull:output_type -> global.SuccessResponse
6, // [6:11] is the sub-list for method output_type
1, // [1:6] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_stress_proto_init() }
@@ -320,51 +348,13 @@ func file_stress_proto_init() {
if File_stress_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_stress_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CountResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_stress_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PushRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_stress_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PullRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_stress_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_stress_proto_rawDesc), len(file_stress_proto_rawDesc)),
NumEnums: 0,
NumMessages: 3,
NumMessages: 4,
NumExtensions: 0,
NumServices: 1,
},
@@ -373,7 +363,6 @@ func file_stress_proto_init() {
MessageInfos: file_stress_proto_msgTypes,
}.Build()
File_stress_proto = out.File
file_stress_proto_rawDesc = nil
file_stress_proto_goTypes = nil
file_stress_proto_depIdxs = nil
}

View File

@@ -35,11 +35,17 @@ service api {
}
}
message CountResponse {
message CountResponseData {
uint32 pushCount = 1;
uint32 pullCount = 2;
}
message CountResponse {
uint32 code = 1;
string message = 2;
CountResponseData data = 3;
}
message PushRequest {
string streamPath = 1;
string protocol = 2;
@@ -51,4 +57,5 @@ message PullRequest {
string remoteURL = 1;
string protocol = 2;
int32 pullCount = 3;
int32 testMode = 4; // 0: pull, 1: pull without publish
}
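
With the envelope change above, GET /stress/api/count now wraps the counters in a data object alongside code and message. A hedged sketch of consuming it; the gateway address is a placeholder and the JSON keys follow the proto field names:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type countResponse struct {
	Code    uint32 `json:"code"`
	Message string `json:"message"`
	Data    struct {
		PushCount uint32 `json:"pushCount"`
		PullCount uint32 `json:"pullCount"`
	} `json:"data"`
}

func main() {
	resp, err := http.Get("http://127.0.0.1:8080/stress/api/count")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var cr countResponse
	if err := json.NewDecoder(resp.Body).Decode(&cr); err != nil {
		log.Fatal(err)
	}
	log.Printf("pushers=%d pullers=%d", cr.Data.PushCount, cr.Data.PullCount)
}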

Some files were not shown because too many files have changed in this diff.