support MPEG-TS over unix sockets (#4388) (#4389) (#4828)

This commit is contained in:
Alessandro Ros
2025-08-08 18:03:38 +02:00
committed by GitHub
parent db02a08a51
commit 7feff1d1dc
9 changed files with 586 additions and 294 deletions


@@ -29,7 +29,7 @@ Live streams can be published to the server with:
|[RTMP clients](#rtmp-clients)|RTMP, RTMPS, Enhanced RTMP|AV1, VP9, H265, H264|Opus, MPEG-4 Audio (AAC), MPEG-1/2 Audio (MP3), AC-3, G711 (PCMA, PCMU), LPCM|
|[RTMP cameras and servers](#rtmp-cameras-and-servers)|RTMP, RTMPS, Enhanced RTMP|AV1, VP9, H265, H264|Opus, MPEG-4 Audio (AAC), MPEG-1/2 Audio (MP3), AC-3, G711 (PCMA, PCMU), LPCM|
|[HLS cameras and servers](#hls-cameras-and-servers)|Low-Latency HLS, MP4-based HLS, legacy HLS|AV1, VP9, [H265](#supported-browsers-1), H264|Opus, MPEG-4 Audio (AAC)|
|[UDP/MPEG-TS](#udpmpeg-ts)|Unicast, broadcast, multicast|H265, H264, MPEG-4 Video (H263, Xvid), MPEG-1/2 Video|Opus, MPEG-4 Audio (AAC), MPEG-1/2 Audio (MP3), AC-3|
|[MPEG-TS](#mpeg-ts)|MPEG-TS over UDP, MPEG-TS over Unix socket|H265, H264, MPEG-4 Video (H263, Xvid), MPEG-1/2 Video|Opus, MPEG-4 Audio (AAC), MPEG-1/2 Audio (MP3), AC-3|
|[Raspberry Pi Cameras](#raspberry-pi-cameras)||H264||
Live streams can be read from the server with:
@@ -54,7 +54,7 @@ Live streams be recorded and played back with:
* Publish live streams to the server
* Read live streams from the server
* Streams are automatically converted from one protocol to another
* Serve multiple streams at once in separate paths
* Serve several streams at once in separate paths
* Record streams to disk
* Playback recorded streams
* Authenticate users
@@ -101,7 +101,7 @@ _rtsp-simple-server_ has been rebranded as _MediaMTX_. The reason is pretty obvi
* [RTMP clients](#rtmp-clients)
* [RTMP cameras and servers](#rtmp-cameras-and-servers)
* [HLS cameras and servers](#hls-cameras-and-servers)
* [UDP/MPEG-TS](#udpmpeg-ts)
* [MPEG-TS](#mpeg-ts)
* [Read from the server](#read-from-the-server)
* [By software](#by-software-1)
* [FFmpeg](#ffmpeg-1)
@@ -275,13 +275,13 @@ Otherwise, [compile the server from source](#openwrt-1).
#### FFmpeg
FFmpeg can publish a stream to the server in multiple ways (SRT client, SRT server, RTSP client, RTMP client, UDP/MPEG-TS, WebRTC with WHIP). The recommended one consists in publishing as a [RTSP client](#rtsp-clients):
FFmpeg can publish a stream to the server in several ways (SRT client, SRT server, RTSP client, RTMP client, MPEG-TS over UDP, MPEG-TS over Unix sockets, WebRTC with WHIP). The recommended way is to publish as an [RTSP client](#rtsp-clients):
```sh
ffmpeg -re -stream_loop -1 -i file.ts -c copy -f rtsp rtsp://localhost:8554/mystream
```
The RTSP protocol supports multiple underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)). You can set the transport protocol by using the `rtsp_transport` flag, for instance, in order to use TCP:
The RTSP protocol supports several underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)). You can set the transport protocol by using the `rtsp_transport` flag, for instance, in order to use TCP:
```sh
ffmpeg -re -stream_loop -1 -i file.ts -c copy -f rtsp -rtsp_transport tcp rtsp://localhost:8554/mystream
@@ -291,7 +291,7 @@ The resulting stream is available in path `/mystream`.
#### GStreamer
GStreamer can publish a stream to the server in multiple ways (SRT client, SRT server, RTSP client, RTMP client, UDP/MPEG-TS, WebRTC with WHIP). The recommended one consists in publishing as a [RTSP client](#rtsp-clients):
GStreamer can publish a stream to the server in several ways (SRT client, SRT server, RTSP client, RTMP client, MPEG-TS over UDP, WebRTC with WHIP). The recommended way is to publish as an [RTSP client](#rtsp-clients):
```sh
gst-launch-1.0 rtspclientsink name=s location=rtsp://localhost:8554/mystream \
@@ -307,7 +307,7 @@ gst-launch-1.0 filesrc location=file.mp4 ! qtdemux name=d \
d.video_0 ! rtspclientsink location=rtsp://localhost:8554/mystream
```
The RTSP protocol supports multiple underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)). You can set the transport protocol by using the `protocols` flag:
The RTSP protocol supports several underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)). You can set the transport protocol by using the `protocols` flag:
```sh
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux name=d \
@@ -335,7 +335,7 @@ gst-launch-1.0 videotestsrc \
#### OBS Studio
OBS Studio can publish to the server in multiple ways (SRT client, RTMP client, WebRTC client). The recommended one consists in publishing as a [RTMP client](#rtmp-clients). In `Settings -> Stream` (or in the Auto-configuration Wizard), use the following parameters:
OBS Studio can publish to the server in several ways (SRT client, RTMP client, WebRTC client). The recommended way is to publish as an [RTMP client](#rtmp-clients). In `Settings -> Stream` (or in the Auto-configuration Wizard), use the following parameters:
* Service: `Custom...`
* Server: `rtmp://localhost/mystream`
@@ -819,7 +819,7 @@ Known clients that can publish with RTSP are [FFmpeg](#ffmpeg), [GStreamer](#gst
#### RTSP cameras and servers
Most IP cameras expose their video stream by using a RTSP server that is embedded into the camera itself. In particular, cameras that are compliant with ONVIF profile S or T meet this requirement. You can use _MediaMTX_ to connect to one or multiple existing RTSP servers and read their video streams:
Most IP cameras expose their video stream by using an RTSP server that is embedded into the camera itself. In particular, cameras that are compliant with ONVIF profile S or T meet this requirement. You can use _MediaMTX_ to connect to one or several existing RTSP servers and read their video streams:
```yml
paths:
@@ -861,7 +861,7 @@ Known clients that can publish with RTMP are [FFmpeg](#ffmpeg), [GStreamer](#gst
#### RTMP cameras and servers
You can use _MediaMTX_ to connect to one or multiple existing RTMP servers and read their video streams:
You can use _MediaMTX_ to connect to one or several existing RTMP servers and read their video streams:
```yml
paths:
@@ -874,7 +874,7 @@ The resulting stream is available in path `/proxied`.
#### HLS cameras and servers
HLS is a streaming protocol that works by splitting streams into segments, and by serving these segments and a playlist with the HTTP protocol. You can use _MediaMTX_ to connect to one or multiple existing HLS servers and read their video streams:
HLS is a streaming protocol that works by splitting streams into segments, and by serving these segments and a playlist with the HTTP protocol. You can use _MediaMTX_ to connect to one or several existing HLS servers and read their video streams:
```yml
paths:
@@ -885,9 +885,21 @@ paths:
The resulting stream is available in path `/proxied`.
#### UDP/MPEG-TS
#### MPEG-TS
The server supports ingesting UDP/MPEG-TS packets (i.e. MPEG-TS packets sent with UDP). Packets can be unicast, broadcast or multicast. For instance, you can generate a multicast UDP/MPEG-TS stream with GStreamer:
The server supports ingesting MPEG-TS streams, delivered in several ways (UDP packets or Unix sockets).
In order to read an MPEG-TS stream over UDP, edit `mediamtx.yml` and replace everything inside section `paths` with the following content:
```yml
paths:
mypath:
source: udp+mpegts://238.0.0.1:1234
```
Here, `238.0.0.1` is the IP address on which the server listens for packets; in this case, a multicast address.
You can generate a UDP multicast MPEG-TS stream with GStreamer:
```sh
gst-launch-1.0 -v mpegtsmux name=mux alignment=1 ! udpsink host=238.0.0.1 port=1234 \
@@ -903,22 +915,14 @@ ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 \
-f mpegts udp://238.0.0.1:1234?pkt_size=1316
```
Edit `mediamtx.yml` and replace everything inside section `paths` with the following content:
```yml
paths:
mypath:
source: udp://238.0.0.1:1234
```
The resulting stream is available in path `/mypath`.
If the listening IP is a multicast IP, _MediaMTX_ listens for incoming multicast packets on the default interface picked by the operating system. It is possible to specify this interface manually by using the `interface` parameter:
If the listening IP is a multicast IP, _MediaMTX_ will listen for incoming packets on the default multicast interface, picked by the operating system. It is possible to specify the interface manually by using the `interface` parameter:
```yml
paths:
mypath:
source: udp://238.0.0.1:1234?interface=eth0
source: udp+mpegts://238.0.0.1:1234?interface=eth0
```
It is possible to restrict who can send packets by using the `source` parameter:
@@ -926,10 +930,26 @@ It is possible to restrict who can send packets by using the `source` parameter:
```yml
paths:
mypath:
source: udp://0.0.0.0:1234?source=192.168.3.5
source: udp+mpegts://0.0.0.0:1234?source=192.168.3.5
```
Known clients that can publish with UDP/MPEG-TS are [FFmpeg](#ffmpeg) and [GStreamer](#gstreamer).
Known clients that can publish with MPEG-TS over UDP are [FFmpeg](#ffmpeg) and [GStreamer](#gstreamer).
Unix sockets are more efficient than UDP packets and can be used as a transport by specifying the `unix+mpegts` scheme:
```yml
paths:
mypath:
source: unix+mpegts:///tmp/socket.sock
```
FFmpeg can generate such streams:
```sh
ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -b:v 600k \
-f mpegts unix:/tmp/socket.sock
```
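Any client can feed the socket, since the server simply reads raw MPEG-TS from it. A minimal Go sketch, assuming the server is running and listening on `/tmp/socket.sock` (file and socket paths are placeholders), with a helper that verifies 188-byte packet alignment via the 0x47 sync byte:

```go
package main

import (
	"fmt"
	"io"
	"net"
	"os"
)

const tsPacketSize = 188

// isAlignedTS reports whether a buffer starts on MPEG-TS packet
// boundaries: every 188-byte packet must begin with sync byte 0x47.
func isAlignedTS(b []byte) bool {
	if len(b) == 0 || len(b)%tsPacketSize != 0 {
		return false
	}
	for i := 0; i < len(b); i += tsPacketSize {
		if b[i] != 0x47 {
			return false
		}
	}
	return true
}

// streamTS copies an existing .ts file into the server's Unix socket.
func streamTS(tsPath, socketPath string) (int64, error) {
	f, err := os.Open(tsPath)
	if err != nil {
		return 0, err
	}
	defer f.Close()
	conn, err := net.Dial("unix", socketPath)
	if err != nil {
		return 0, err
	}
	defer conn.Close()
	return io.Copy(conn, f)
}

func main() {
	// synthetic aligned buffer: two TS packets with correct sync bytes
	buf := make([]byte, 2*tsPacketSize)
	buf[0], buf[tsPacketSize] = 0x47, 0x47
	fmt.Println(isAlignedTS(buf))       // true
	fmt.Println(isAlignedTS(buf[:100])) // false

	// e.g. streamTS("file.ts", "/tmp/socket.sock") while MediaMTX runs
}
```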
## Read from the server
@@ -937,13 +957,13 @@ Known clients that can publish with UDP/MPEG-TS are [FFmpeg](#ffmpeg) and [GStre
#### FFmpeg
FFmpeg can read a stream from the server in multiple ways (RTSP, RTMP, HLS, WebRTC with WHEP, SRT). The recommended one consists in reading with [RTSP](#rtsp):
FFmpeg can read a stream from the server in several ways (RTSP, RTMP, HLS, WebRTC with WHEP, SRT). The recommended way is to read with [RTSP](#rtsp):
```sh
ffmpeg -i rtsp://localhost:8554/mystream -c copy output.mp4
```
The RTSP protocol supports multiple underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)). You can set the transport protocol by using the `rtsp_transport` flag:
The RTSP protocol supports several underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)). You can set the transport protocol by using the `rtsp_transport` flag:
```sh
ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/mystream -c copy output.mp4
@@ -951,13 +971,13 @@ ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/mystream -c copy output.mp4
#### GStreamer
GStreamer can read a stream from the server in multiple ways (RTSP, RTMP, HLS, WebRTC with WHEP, SRT). The recommended one consists in reading with [RTSP](#rtsp):
GStreamer can read a stream from the server in several ways (RTSP, RTMP, HLS, WebRTC with WHEP, SRT). The recommended way is to read with [RTSP](#rtsp):
```sh
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/mystream latency=0 ! decodebin ! autovideosink
```
The RTSP protocol supports multiple underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)). You can change the transport protocol by using the `protocols` flag:
The RTSP protocol supports several underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)). You can change the transport protocol by using the `protocols` flag:
```sh
gst-launch-1.0 rtspsrc protocols=tcp location=rtsp://127.0.0.1:8554/mystream latency=0 ! decodebin ! autovideosink
@@ -997,13 +1017,13 @@ audio-caps="application/x-rtp,media=audio,encoding-name=OPUS,payload=111,clock-r
#### VLC
VLC can read a stream from the server in multiple ways (RTSP, RTMP, HLS, SRT). The recommended one consists in reading with [RTSP](#rtsp):
VLC can read a stream from the server in several ways (RTSP, RTMP, HLS, SRT). The recommended way is to read with [RTSP](#rtsp):
```sh
vlc --network-caching=50 rtsp://localhost:8554/mystream
```
The RTSP protocol supports multiple underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)).
The RTSP protocol supports several underlying transport protocols, each with its own characteristics (see [RTSP-specific features](#rtsp-specific-features)).
In order to use the TCP transport protocol, use the `--rtsp_tcp` flag:
@@ -1164,7 +1184,7 @@ In the _Hierarchy_ window, find or create a scene. Inside the scene, add a _Canv
#### Web browsers
Web browsers can read a stream from the server in multiple ways (WebRTC or HLS).
Web browsers can read a stream from the server in several ways (WebRTC or HLS).
You can read a stream by using the [WebRTC protocol](#webrtc-1) by visiting the web page:
@@ -2334,7 +2354,7 @@ When using WHIP or WHEP to establish a WebRTC connection, there are several ways
If the server is hosted inside a container or is behind a NAT, additional configuration is required in order to allow the two WebRTC parts (server and client) to establish a connection.
Make sure that `webrtcAdditionalHosts` includes your public IPs, that are IPs that can be used by clients to reach the server. If clients are on the same LAN as the server, add the LAN address of the server. If clients are coming from the internet, add the public IP address of the server, or alternatively a DNS name, if you have one. You can add multiple values to support all scenarios:
Make sure that `webrtcAdditionalHosts` includes your public IPs, that is, IPs that clients can use to reach the server. If clients are on the same LAN as the server, add the LAN address of the server. If clients are coming from the internet, add the public IP address of the server, or alternatively a DNS name, if you have one. You can add several values to support all scenarios:
```yml
webrtcAdditionalHosts: [192.168.x.x, 1.2.3.4, my-dns.example.org, ...]
@@ -2476,7 +2496,7 @@ rtsps://localhost:8322/mystream
#### Corrupted frames
In some scenarios, when publishing or reading from the server with RTSP, frames can get corrupted. This can be caused by multiple reasons:
In some scenarios, when publishing or reading from the server with RTSP, frames can get corrupted. This can happen for several reasons:
* the write queue of the server is too small and can't keep up with the stream throughput. A solution consists in increasing its size:


@@ -422,6 +422,14 @@ func (pconf *Path) validate(
return fmt.Errorf("'%s' is not a valid UDP URL", pconf.Source)
}
case strings.HasPrefix(pconf.Source, "udp+mpegts://"):
_, _, err := net.SplitHostPort(pconf.Source[len("udp+mpegts://"):])
if err != nil {
return fmt.Errorf("'%s' is not a valid UDP+MPEGTS URL", pconf.Source)
}
case strings.HasPrefix(pconf.Source, "unix+mpegts://"):
case strings.HasPrefix(pconf.Source, "srt://"):
_, err := gourl.Parse(pconf.Source)
if err != nil {
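The new validation branches can be exercised in isolation. A self-contained sketch of the same scheme checks (the function name `validateMPEGTSSource` is hypothetical; the real code lives inside `Path.validate`):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// validateMPEGTSSource mirrors the validation above (sketch):
// udp+mpegts:// URLs must contain host:port, while unix+mpegts://
// paths are accepted as-is and checked when the listener is created.
func validateMPEGTSSource(source string) error {
	switch {
	case strings.HasPrefix(source, "udp+mpegts://"):
		if _, _, err := net.SplitHostPort(source[len("udp+mpegts://"):]); err != nil {
			return fmt.Errorf("'%s' is not a valid UDP+MPEGTS URL", source)
		}
	case strings.HasPrefix(source, "unix+mpegts://"):
		// no further validation here
	default:
		return fmt.Errorf("unsupported scheme")
	}
	return nil
}

func main() {
	fmt.Println(validateMPEGTSSource("udp+mpegts://238.0.0.1:1234")) // <nil>
	fmt.Println(validateMPEGTSSource("udp+mpegts://missing-port"))   // non-nil error
}
```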


@@ -12,11 +12,11 @@ import (
"github.com/bluenviron/mediamtx/internal/defs"
"github.com/bluenviron/mediamtx/internal/logger"
sshls "github.com/bluenviron/mediamtx/internal/staticsources/hls"
ssmpegts "github.com/bluenviron/mediamtx/internal/staticsources/mpegts"
ssrpicamera "github.com/bluenviron/mediamtx/internal/staticsources/rpicamera"
ssrtmp "github.com/bluenviron/mediamtx/internal/staticsources/rtmp"
ssrtsp "github.com/bluenviron/mediamtx/internal/staticsources/rtsp"
sssrt "github.com/bluenviron/mediamtx/internal/staticsources/srt"
ssudp "github.com/bluenviron/mediamtx/internal/staticsources/udp"
sswebrtc "github.com/bluenviron/mediamtx/internal/staticsources/webrtc"
"github.com/bluenviron/mediamtx/internal/stream"
)
@@ -117,8 +117,10 @@ func (s *Handler) Initialize() {
Parent: s,
}
case strings.HasPrefix(s.Conf.Source, "udp://"):
s.instance = &ssudp.Source{
case strings.HasPrefix(s.Conf.Source, "udp://") ||
strings.HasPrefix(s.Conf.Source, "udp+mpegts://") ||
strings.HasPrefix(s.Conf.Source, "unix+mpegts://"):
s.instance = &ssmpegts.Source{
ReadTimeout: s.ReadTimeout,
Parent: s,
}


@@ -0,0 +1,142 @@
// Package mpegts contains the MPEG-TS static source.
package mpegts
import (
"fmt"
"net"
"net/url"
"time"
"github.com/bluenviron/gortsplib/v4/pkg/description"
"github.com/bluenviron/mediamtx/internal/conf"
"github.com/bluenviron/mediamtx/internal/counterdumper"
"github.com/bluenviron/mediamtx/internal/defs"
"github.com/bluenviron/mediamtx/internal/logger"
"github.com/bluenviron/mediamtx/internal/protocols/mpegts"
"github.com/bluenviron/mediamtx/internal/stream"
)
type parent interface {
logger.Writer
SetReady(req defs.PathSourceStaticSetReadyReq) defs.PathSourceStaticSetReadyRes
SetNotReady(req defs.PathSourceStaticSetNotReadyReq)
}
// Source is a MPEG-TS static source.
type Source struct {
ReadTimeout conf.Duration
Parent parent
}
// Log implements logger.Writer.
func (s *Source) Log(level logger.Level, format string, args ...interface{}) {
s.Parent.Log(level, "[MPEG-TS source] "+format, args...)
}
// Run implements StaticSource.
func (s *Source) Run(params defs.StaticSourceRunParams) error {
s.Log(logger.Debug, "connecting")
u, err := url.Parse(params.ResolvedSource)
if err != nil {
return err
}
q := u.Query()
var nc net.Conn
switch u.Scheme {
case "unix+mpegts":
nc, err = createUnix(u)
if err != nil {
return err
}
default:
nc, err = createUDP(u.Host, q)
if err != nil {
return err
}
}
readerErr := make(chan error)
go func() {
readerErr <- s.runReader(nc)
}()
select {
case err = <-readerErr:
nc.Close()
return err
case <-params.Context.Done():
nc.Close()
<-readerErr
return fmt.Errorf("terminated")
}
}
func (s *Source) runReader(nc net.Conn) error {
nc.SetReadDeadline(time.Now().Add(time.Duration(s.ReadTimeout)))
mr := &mpegts.EnhancedReader{R: nc}
err := mr.Initialize()
if err != nil {
return err
}
decodeErrors := &counterdumper.CounterDumper{
OnReport: func(val uint64) {
s.Log(logger.Warn, "%d decode %s",
val,
func() string {
if val == 1 {
return "error"
}
return "errors"
}())
},
}
decodeErrors.Start()
defer decodeErrors.Stop()
mr.OnDecodeError(func(_ error) {
decodeErrors.Increase()
})
var stream *stream.Stream
medias, err := mpegts.ToStream(mr, &stream, s)
if err != nil {
return err
}
res := s.Parent.SetReady(defs.PathSourceStaticSetReadyReq{
Desc: &description.Session{Medias: medias},
GenerateRTPPackets: true,
})
if res.Err != nil {
return res.Err
}
defer s.Parent.SetNotReady(defs.PathSourceStaticSetNotReadyReq{})
stream = res.Stream
for {
nc.SetReadDeadline(time.Now().Add(time.Duration(s.ReadTimeout)))
err = mr.Read()
if err != nil {
return err
}
}
}
// APISourceDescribe implements StaticSource.
func (*Source) APISourceDescribe() defs.APIPathSourceOrReader {
return defs.APIPathSourceOrReader{
Type: "udpSource",
ID: "",
}
}


@@ -1,9 +1,10 @@
package udp
package mpegts
import (
"bufio"
"context"
"net"
"os"
"path/filepath"
"testing"
"time"
@@ -29,7 +30,7 @@ func multicastCapableInterface(t *testing.T) string {
return ""
}
func TestSource(t *testing.T) {
func TestSourceUDP(t *testing.T) {
for _, ca := range []string{
"unicast",
"multicast",
@@ -41,16 +42,16 @@ func TestSource(t *testing.T) {
switch ca {
case "unicast":
src = "udp://127.0.0.1:9001"
src = "udp+mpegts://127.0.0.1:9001"
case "multicast":
src = "udp://238.0.0.1:9001"
src = "udp+mpegts://238.0.0.1:9001"
case "multicast with interface":
src = "udp://238.0.0.1:9001?interface=" + multicastCapableInterface(t)
src = "udp+mpegts://238.0.0.1:9001?interface=" + multicastCapableInterface(t)
case "unicast with source":
src = "udp://127.0.0.1:9001?source=127.0.1.1"
src = "udp+mpegts://127.0.0.1:9001?source=127.0.1.1"
}
p := &test.StaticSourceParent{}
@@ -112,8 +113,7 @@ func TestSource(t *testing.T) {
Codec: &mpegts.CodecH264{},
}
bw := bufio.NewWriter(conn)
w := &mpegts.Writer{W: bw, Tracks: []*mpegts.Track{track}}
w := &mpegts.Writer{W: conn, Tracks: []*mpegts.Track{track}}
err = w.Initialize()
require.NoError(t, err)
@@ -127,10 +127,77 @@ func TestSource(t *testing.T) {
}})
require.NoError(t, err)
err = bw.Flush()
require.NoError(t, err)
<-p.Unit
})
}
}
func TestSourceUnixSocket(t *testing.T) {
for _, ca := range []string{
"relative",
"absolute",
} {
t.Run(ca, func(t *testing.T) {
var pa string
if ca == "relative" {
pa = "test.sock"
} else {
pa = filepath.Join(os.TempDir(), "test.sock")
}
func() {
p := &test.StaticSourceParent{}
p.Initialize()
defer p.Close()
so := &Source{
ReadTimeout: conf.Duration(10 * time.Second),
Parent: p,
}
done := make(chan struct{})
defer func() { <-done }()
ctx, ctxCancel := context.WithCancel(context.Background())
defer ctxCancel()
go func() {
so.Run(defs.StaticSourceRunParams{ //nolint:errcheck
Context: ctx,
ResolvedSource: "unix+mpegts://" + pa,
Conf: &conf.Path{},
})
close(done)
}()
time.Sleep(50 * time.Millisecond)
_, err := os.Stat(pa)
require.NoError(t, err)
conn, err := net.Dial("unix", pa)
require.NoError(t, err)
track := &mpegts.Track{
Codec: &mpegts.CodecH264{},
}
w := &mpegts.Writer{W: conn, Tracks: []*mpegts.Track{track}}
err = w.Initialize()
require.NoError(t, err)
err = w.WriteH264(track, 0, 0, [][]byte{{ // IDR
5, 1,
}})
require.NoError(t, err)
conn.Close() // trigger a flush
<-p.Unit
}()
_, err := os.Stat(pa)
require.Error(t, err)
})
}
}


@@ -0,0 +1,160 @@
package mpegts
import (
"fmt"
"net"
"net/url"
"time"
"github.com/bluenviron/gortsplib/v4/pkg/multicast"
"github.com/bluenviron/mediamtx/internal/restrictnetwork"
)
const (
// same size as GStreamer's rtspsrc
udpKernelReadBufferSize = 0x80000
)
type udpConn struct {
pc net.PacketConn
sourceIP net.IP
}
func (r *udpConn) Close() error {
return r.pc.Close()
}
func (r *udpConn) Read(p []byte) (int, error) {
for {
n, addr, err := r.pc.ReadFrom(p)
if r.sourceIP != nil && addr != nil && !addr.(*net.UDPAddr).IP.Equal(r.sourceIP) {
continue
}
return n, err
}
}
func (r *udpConn) Write(_ []byte) (int, error) {
panic("unimplemented")
}
func (r *udpConn) LocalAddr() net.Addr {
panic("unimplemented")
}
func (r *udpConn) RemoteAddr() net.Addr {
panic("unimplemented")
}
func (r *udpConn) SetDeadline(_ time.Time) error {
panic("unimplemented")
}
func (r *udpConn) SetReadDeadline(t time.Time) error {
return r.pc.SetReadDeadline(t)
}
func (r *udpConn) SetWriteDeadline(_ time.Time) error {
panic("unimplemented")
}
func defaultInterfaceForMulticast(multicastAddr *net.UDPAddr) (*net.Interface, error) {
conn, err := net.Dial("udp4", multicastAddr.String())
if err != nil {
return nil, err
}
localAddr := conn.LocalAddr().(*net.UDPAddr)
conn.Close()
interfaces, err := net.Interfaces()
if err != nil {
return nil, err
}
for _, iface := range interfaces {
var addrs []net.Addr
addrs, err = iface.Addrs()
if err != nil {
continue
}
for _, addr := range addrs {
var ip net.IP
switch v := addr.(type) {
case *net.IPNet:
ip = v.IP
case *net.IPAddr:
ip = v.IP
}
if ip != nil && ip.Equal(localAddr.IP) {
return &iface, nil
}
}
}
return nil, fmt.Errorf("could not find any interface for using multicast address %s", multicastAddr)
}
type packetConn interface {
net.PacketConn
SetReadBuffer(int) error
}
func createUDP(host string, q url.Values) (net.Conn, error) {
var sourceIP net.IP
if src := q.Get("source"); src != "" {
sourceIP = net.ParseIP(src)
if sourceIP == nil {
return nil, fmt.Errorf("invalid source IP")
}
}
addr, err := net.ResolveUDPAddr("udp", host)
if err != nil {
return nil, err
}
var pc packetConn
if ip4 := addr.IP.To4(); ip4 != nil && addr.IP.IsMulticast() {
var intf *net.Interface
if intfName := q.Get("interface"); intfName != "" {
intf, err = net.InterfaceByName(intfName)
if err != nil {
return nil, err
}
} else {
intf, err = defaultInterfaceForMulticast(addr)
if err != nil {
return nil, err
}
}
pc, err = multicast.NewSingleConn(intf, addr.String(), net.ListenPacket)
if err != nil {
return nil, err
}
} else {
var tmp net.PacketConn
tmp, err = net.ListenPacket(restrictnetwork.Restrict("udp", addr.String()))
if err != nil {
return nil, err
}
pc = tmp.(*net.UDPConn)
}
err = pc.SetReadBuffer(udpKernelReadBufferSize)
if err != nil {
pc.Close()
return nil, err
}
return &udpConn{pc: pc, sourceIP: sourceIP}, nil
}


@@ -0,0 +1,136 @@
package mpegts
import (
"fmt"
"net"
"net/url"
"os"
"sync"
"time"
)
type unixConn struct {
l net.Listener
c net.Conn
mutex sync.Mutex
closed bool
deadline time.Time
}
func (r *unixConn) Close() error {
r.mutex.Lock()
defer r.mutex.Unlock()
r.closed = true
r.l.Close()
if r.c != nil {
r.c.Close()
}
return nil
}
func (r *unixConn) acceptWithDeadline() (net.Conn, error) {
done := make(chan struct{})
defer func() { <-done }()
terminate := make(chan struct{})
defer close(terminate)
go func() {
defer close(done)
select {
case <-time.After(time.Until(r.deadline)):
r.l.Close()
case <-terminate:
return
}
}()
c, err := r.l.Accept()
if err != nil {
if time.Now().After(r.deadline) {
return nil, fmt.Errorf("deadline exceeded")
}
return nil, err
}
return c, nil
}
func (r *unixConn) setConn(c net.Conn) error {
r.mutex.Lock()
defer r.mutex.Unlock()
if r.closed {
return fmt.Errorf("closed")
}
r.c = c
return nil
}
func (r *unixConn) Read(p []byte) (int, error) {
if r.c == nil {
c, err := r.acceptWithDeadline()
if err != nil {
return 0, err
}
err = r.setConn(c)
if err != nil {
return 0, err
}
}
r.c.SetReadDeadline(r.deadline)
return r.c.Read(p)
}
func (r *unixConn) Write(_ []byte) (int, error) {
panic("unimplemented")
}
func (r *unixConn) LocalAddr() net.Addr {
panic("unimplemented")
}
func (r *unixConn) RemoteAddr() net.Addr {
panic("unimplemented")
}
func (r *unixConn) SetDeadline(_ time.Time) error {
panic("unimplemented")
}
func (r *unixConn) SetReadDeadline(t time.Time) error {
r.deadline = t
return nil
}
func (r *unixConn) SetWriteDeadline(_ time.Time) error {
panic("unimplemented")
}
func createUnix(u *url.URL) (net.Conn, error) {
var pa string
if u.Path != "" {
pa = u.Path
} else {
pa = u.Host
}
if pa == "" {
return nil, fmt.Errorf("invalid unix path")
}
os.Remove(pa)
socket, err := net.Listen("unix", pa)
if err != nil {
return nil, err
}
return &unixConn{l: socket}, nil
}


@@ -1,244 +0,0 @@
// Package udp contains the UDP static source.
package udp
import (
"fmt"
"net"
"net/url"
"time"
"github.com/bluenviron/gortsplib/v4/pkg/description"
"github.com/bluenviron/gortsplib/v4/pkg/multicast"
"github.com/bluenviron/mediamtx/internal/conf"
"github.com/bluenviron/mediamtx/internal/counterdumper"
"github.com/bluenviron/mediamtx/internal/defs"
"github.com/bluenviron/mediamtx/internal/logger"
"github.com/bluenviron/mediamtx/internal/protocols/mpegts"
"github.com/bluenviron/mediamtx/internal/restrictnetwork"
"github.com/bluenviron/mediamtx/internal/stream"
)
const (
// same size as GStreamer's rtspsrc
udpKernelReadBufferSize = 0x80000
)
func defaultInterfaceForMulticast(multicastAddr *net.UDPAddr) (*net.Interface, error) {
conn, err := net.Dial("udp4", multicastAddr.String())
if err != nil {
return nil, err
}
localAddr := conn.LocalAddr().(*net.UDPAddr)
conn.Close()
interfaces, err := net.Interfaces()
if err != nil {
return nil, err
}
for _, iface := range interfaces {
var addrs []net.Addr
addrs, err = iface.Addrs()
if err != nil {
continue
}
for _, addr := range addrs {
var ip net.IP
switch v := addr.(type) {
case *net.IPNet:
ip = v.IP
case *net.IPAddr:
ip = v.IP
}
if ip != nil && ip.Equal(localAddr.IP) {
return &iface, nil
}
}
}
return nil, fmt.Errorf("could not find any interface for using multicast address %s", multicastAddr)
}
type packetConnReader struct {
pc net.PacketConn
sourceIP net.IP
}
func (r *packetConnReader) Read(p []byte) (int, error) {
for {
n, addr, err := r.pc.ReadFrom(p)
if r.sourceIP != nil && addr != nil && !addr.(*net.UDPAddr).IP.Equal(r.sourceIP) {
continue
}
return n, err
}
}
type packetConn interface {
net.PacketConn
SetReadBuffer(int) error
}
type parent interface {
logger.Writer
SetReady(req defs.PathSourceStaticSetReadyReq) defs.PathSourceStaticSetReadyRes
SetNotReady(req defs.PathSourceStaticSetNotReadyReq)
}
// Source is a UDP static source.
type Source struct {
ReadTimeout conf.Duration
Parent parent
}
// Log implements logger.Writer.
func (s *Source) Log(level logger.Level, format string, args ...interface{}) {
s.Parent.Log(level, "[UDP source] "+format, args...)
}
// Run implements StaticSource.
func (s *Source) Run(params defs.StaticSourceRunParams) error {
s.Log(logger.Debug, "connecting")
u, err := url.Parse(params.ResolvedSource)
if err != nil {
return err
}
q := u.Query()
var sourceIP net.IP
if src := q.Get("source"); src != "" {
sourceIP = net.ParseIP(src)
if sourceIP == nil {
return fmt.Errorf("invalid source IP")
}
}
addr, err := net.ResolveUDPAddr("udp", u.Host)
if err != nil {
return err
}
var pc packetConn
if ip4 := addr.IP.To4(); ip4 != nil && addr.IP.IsMulticast() {
var intf *net.Interface
if intfName := q.Get("interface"); intfName != "" {
intf, err = net.InterfaceByName(intfName)
if err != nil {
return err
}
} else {
intf, err = defaultInterfaceForMulticast(addr)
if err != nil {
return err
}
}
pc, err = multicast.NewSingleConn(intf, addr.String(), net.ListenPacket)
if err != nil {
return err
}
} else {
var tmp net.PacketConn
tmp, err = net.ListenPacket(restrictnetwork.Restrict("udp", addr.String()))
if err != nil {
return err
}
pc = tmp.(*net.UDPConn)
}
defer pc.Close()
err = pc.SetReadBuffer(udpKernelReadBufferSize)
if err != nil {
return err
}
readerErr := make(chan error)
go func() {
readerErr <- s.runReader(pc, sourceIP)
}()
select {
case err = <-readerErr:
return err
case <-params.Context.Done():
pc.Close()
<-readerErr
return fmt.Errorf("terminated")
}
}
func (s *Source) runReader(pc net.PacketConn, sourceIP net.IP) error {
pc.SetReadDeadline(time.Now().Add(time.Duration(s.ReadTimeout)))
pcr := &packetConnReader{pc: pc, sourceIP: sourceIP}
r := &mpegts.EnhancedReader{R: pcr}
err := r.Initialize()
if err != nil {
return err
}
decodeErrors := &counterdumper.CounterDumper{
OnReport: func(val uint64) {
s.Log(logger.Warn, "%d decode %s",
val,
func() string {
if val == 1 {
return "error"
}
return "errors"
}())
},
}
decodeErrors.Start()
defer decodeErrors.Stop()
r.OnDecodeError(func(_ error) {
decodeErrors.Increase()
})
var stream *stream.Stream
medias, err := mpegts.ToStream(r, &stream, s)
if err != nil {
return err
}
res := s.Parent.SetReady(defs.PathSourceStaticSetReadyReq{
Desc: &description.Session{Medias: medias},
GenerateRTPPackets: true,
})
if res.Err != nil {
return res.Err
}
defer s.Parent.SetNotReady(defs.PathSourceStaticSetNotReadyReq{})
stream = res.Stream
for {
pc.SetReadDeadline(time.Now().Add(time.Duration(s.ReadTimeout)))
err = r.Read()
if err != nil {
return err
}
}
}
// APISourceDescribe implements StaticSource.
func (*Source) APISourceDescribe() defs.APIPathSourceOrReader {
return defs.APIPathSourceOrReader{
Type: "udpSource",
ID: "",
}
}


@@ -438,7 +438,8 @@ pathDefaults:
# * rtmps://existing-url -> the stream is pulled from another RTMP server / camera with RTMPS
# * http://existing-url/stream.m3u8 -> the stream is pulled from another HLS server / camera
# * https://existing-url/stream.m3u8 -> the stream is pulled from another HLS server / camera with HTTPS
# * udp://ip:port -> the stream is pulled with UDP, by listening on the specified IP and port
# * udp+mpegts://ip:port -> the stream is pulled via MPEG-TS over UDP, by listening on the specified address
# * unix+mpegts://socket -> the stream is pulled via MPEG-TS over a Unix socket, created at the specified path
# * srt://existing-url -> the stream is pulled from another SRT server / camera
# * whep://existing-url -> the stream is pulled from another WebRTC server / camera
# * wheps://existing-url -> the stream is pulled from another WebRTC server / camera with HTTPS