86 Commits

Author SHA1 Message Date
Ingo Oppermann
4a12b0293f Use configured logging target 2023-01-03 11:54:48 +01:00
Ingo Oppermann
f472fe150f Merge branch 'dev' into logging 2023-01-03 11:45:50 +01:00
Ingo Oppermann
1bbb7a9c1f Use config locations for import and ffmigrate 2023-01-03 11:45:10 +01:00
Ingo Oppermann
37e00407cc Allow to define a logging target 2023-01-03 11:28:57 +01:00
Ingo Oppermann
17c9f6ef13 Test different standard locations for config file
If no path is given in the environment variable CORE_CONFIGFILE, different
standard locations will be probed:
- os.UserConfigDir() + /datarhei-core/config.js
- os.UserHomeDir() + /.config/datarhei-core/config.js
- ./config/config.js
If the config.js doesn't exist in any of these locations, it is
assumed to be at ./config/config.js
2023-01-03 07:55:55 +01:00
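
A minimal sketch of the probing logic this commit describes, assuming a helper like the cfgstore.Location function that appears in the diffs further down (the body here is illustrative, not the actual implementation):

```go
package store

import (
	"os"
	"path/filepath"
)

// Location returns the path to the config file. A non-empty path (e.g. from
// CORE_CONFIGFILE) wins; otherwise the standard locations are probed, and if
// none of them exists, ./config/config.js is assumed.
func Location(path string) string {
	if len(path) != 0 {
		return path
	}

	candidates := []string{}
	if dir, err := os.UserConfigDir(); err == nil {
		candidates = append(candidates, filepath.Join(dir, "datarhei-core", "config.js"))
	}
	if dir, err := os.UserHomeDir(); err == nil {
		candidates = append(candidates, filepath.Join(dir, ".config", "datarhei-core", "config.js"))
	}
	candidates = append(candidates, "./config/config.js")

	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			return p
		}
	}

	// No config file found anywhere: assume the default location.
	return "./config/config.js"
}
```
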
Ingo Oppermann
ff6b0d9584 Require go1.19 for tests 2023-01-03 07:05:00 +01:00
Ingo Oppermann
378a3cd9cf Allow to set a soft memory limit for the binary itself
The setting debug.memory_limit_mbytes should not be used in conjunction
with debug.force_gc because the memory limit influences the garbage
collector.
2023-01-02 11:58:54 +01:00
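
For context, Go 1.19's debug.SetMemoryLimit is what backs such a setting; a hedged sketch of the wiring (the actual hookup appears in the api.go diff below):

```go
package memlimit

import (
	"math"
	"runtime/debug"
)

// applyMemoryLimit is illustrative. A positive value sets a soft limit in
// MBytes; the GC works harder as the heap approaches it, which is why
// combining it with a forced-GC interval (debug.force_gc) is discouraged.
func applyMemoryLimit(limitMBytes int64) {
	if limitMBytes > 0 {
		debug.SetMemoryLimit(limitMBytes * 1024 * 1024)
	} else {
		debug.SetMemoryLimit(math.MaxInt64) // effectively no limit
	}
}
```
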
Ingo Oppermann
992b04d180 Allow alternative syntax for auth0 tenants as environment variable 2023-01-02 11:39:58 +01:00
Ingo Oppermann
391681447e Fix MustDir config type to create directory 2023-01-02 10:54:29 +01:00
Ingo Oppermann
59aa6af767 Allow partial process config updates 2023-01-02 07:20:39 +01:00
Ingo Oppermann
c44fb30a84 Fix check for at least one process input and output 2023-01-02 06:57:02 +01:00
Ingo Oppermann
0cd8be130c Remove letsdebug module
This module has a dependency on a module that requires cgo, which is a no-go.
2022-12-31 17:46:46 +01:00
Ingo Oppermann
65a617c2af Fix modifying DTS in RTMP packets (datarhei/restreamer#487, datarhei/restreamer#367) 2022-12-29 10:43:15 +01:00
Ingo Oppermann
8a1dc59a81 Set a default of 20ms for internal SRT latency 2022-12-27 13:46:02 +01:00
Ingo Oppermann
ee2a188be8 Allow defaults for template parameter 2022-12-27 13:41:07 +01:00
Ingo Oppermann
1a9ef8b7c9 Add Let's Debug auto TLS error diagnostic 2022-12-27 10:26:49 +01:00
Ingo Oppermann
d0262cc887 Add logging for service 2022-12-27 09:47:59 +01:00
Ingo Oppermann
18be75d013 Use new streamid format for {srt} placeholder 2022-11-22 21:25:54 +01:00
Jan Stabenow
cae5f4c973 Fix rpi build (removes armv6) 2022-11-09 15:54:58 +01:00
Jan Stabenow
b26f59fd9e Mod bump v16.11.0 2022-11-09 15:13:11 +01:00
Ingo Oppermann
0d74eeab8e Fix trying to create a backup if there's no DB 2022-11-09 13:20:34 +01:00
Ingo Oppermann
6f36f1aa51 Set new FFmpeg version in process config during migration 2022-11-09 11:35:47 +01:00
Ingo Oppermann
2936bf1e80 Fix build for ffmigrate 2022-11-09 10:46:02 +01:00
Ingo Oppermann
9ad19fbdd6 Fix reading partial config
If the config on the disk doesn't have all fields, then the missing
fields are now populated with their defaults.
2022-11-08 14:44:47 +01:00
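
One common way to get this behavior, shown here as a hedged sketch (types and defaults are illustrative): initialize the struct with its defaults first, since json.Unmarshal only overwrites fields that are actually present in the file.

```go
package config

import (
	"encoding/json"
	"os"
)

type Config struct {
	Address  string `json:"address"`
	MaxLines int    `json:"max_lines"`
}

func load(path string) (Config, error) {
	// Start from the defaults ...
	cfg := Config{Address: ":8080", MaxLines: 1000}

	data, err := os.ReadFile(path)
	if err != nil {
		return cfg, err
	}

	// ... and let the file override only the fields it contains;
	// missing fields keep their default values.
	err = json.Unmarshal(data, &cfg)
	return cfg, err
}
```
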
Jan Stabenow
3c9f4b10b4 Mod updates changelog 2022-11-08 01:28:28 +01:00
Ingo Oppermann
886dc7d81a Bump version to 16.11.0 2022-11-07 12:26:15 +01:00
Jan Stabenow
490e2a03ff Mod updates image tags 2022-11-04 12:43:12 +01:00
Ingo Oppermann
8b307e4181 Use the SRT default config 2022-11-04 11:56:51 +01:00
Ingo Oppermann
c0d7a7e80a Add ffmigrate tool to run.sh 2022-11-02 22:07:38 +01:00
Ingo Oppermann
dfc81ac38f Add ffmpeg migration tool, annotate process config with ffmpeg version constraint 2022-11-02 22:02:39 +01:00
Ingo Oppermann
4cc82dd333 Update dependencies 2022-10-28 17:24:57 +02:00
Ingo Oppermann
4334105f95 Fix wrong status code (#6) 2022-10-28 11:10:16 +02:00
Ingo Oppermann
35c5c9f077 Add alternative streamid format for SRT
The streamid format that starts with #!: is recommended in the SRT
specs, but it usually causes trouble where the use of such characters
is limited. Some hardware devices will not accept such streamids.

The alternative format is simpler and has the form
[resource](,token:[token])?(,mode:[mode])?

token and mode are optional. mode can have the values "publish" or
"request". If mode is not provided, a value of "request" is
assumed.
2022-10-25 14:00:27 +02:00
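
A hedged sketch of a parser for the alternative format (names are illustrative, not the core's actual code):

```go
package streamid

import "strings"

type StreamID struct {
	Resource string
	Token    string
	Mode     string
}

// parseStreamID parses "[resource](,token:[token])?(,mode:[mode])?",
// e.g. "mystream,token:abc,mode:publish".
func parseStreamID(s string) StreamID {
	id := StreamID{Mode: "request"} // mode defaults to "request"

	parts := strings.Split(s, ",")
	id.Resource = parts[0]

	for _, p := range parts[1:] {
		switch {
		case strings.HasPrefix(p, "token:"):
			id.Token = strings.TrimPrefix(p, "token:")
		case strings.HasPrefix(p, "mode:"):
			id.Mode = strings.TrimPrefix(p, "mode:") // "publish" or "request"
		}
	}

	return id
}
```
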
Ingo Oppermann
07e2898857 Expose more SRT connection statistics 2022-10-24 15:25:14 +02:00
Ingo Oppermann
f746e581ae Add version annotation to API methods 2022-10-13 20:54:52 +02:00
Ingo Oppermann
3da25c0d91 Fix stale detection with progress patch 2022-10-13 12:20:26 +02:00
Ingo Oppermann
05a2268662 Reset process stats when stopped 2022-10-13 10:57:17 +02:00
Ingo Oppermann
6ef334331b Fix accumulating total sessions 2022-10-10 18:40:45 +02:00
Ingo Oppermann
8314f71402 Fix widget session data 2022-10-10 16:55:43 +02:00
Ingo Oppermann
4d4e70571e Fix proper version handling for uploading a new config 2022-10-10 16:19:45 +02:00
Ingo Oppermann
f896c1a9ac Fix datarhei/restreamer#425 2022-10-10 14:54:35 +02:00
Jan Stabenow
eb57fb5e70 Mod updates build env. 2022-09-30 15:03:21 +02:00
Ingo Oppermann
eeec59f8b1 Fix last minor version bump to patch version bump 2022-09-30 13:58:21 +02:00
Ingo Oppermann
56ff5b1c60 Update changelog 2022-09-30 12:43:37 +02:00
Ingo Oppermann
33bd7bd384 Set default email address 2022-09-30 12:25:01 +02:00
Ingo Oppermann
22f1fb2d97 Bump version to 16.11.0 2022-09-30 12:13:38 +02:00
Ingo Oppermann
fe2e9d375c Use LE production CA, allow to configure an email address 2022-09-30 12:12:36 +02:00
Ingo Oppermann
bbcf0ab1b1 Fix double slashes in RTMP URL 2022-09-30 09:25:29 +02:00
Ingo Oppermann
a114f426d4 Update changelog 2022-09-29 10:52:42 +02:00
Ingo Oppermann
fcdceab99d Merge branch 'dev' of github.com:datarhei/core into dev 2022-09-29 10:46:01 +02:00
Ingo Oppermann
54dd24a5c0 Fix API metadata endpoint responses 2022-09-29 10:44:21 +02:00
Jan Stabenow
8af8cc9301 Mod exposes ports 2022-09-29 10:10:05 +02:00
Ingo Oppermann
6288b620df Use pool for buffer 2022-09-28 14:53:58 +02:00
Ingo Oppermann
7d38416239 Update changelog 2022-09-23 10:08:13 +02:00
Ingo Oppermann
bc7faf9364 Replace x/crypto/acme/autocert with caddyserver/certmagic 2022-09-23 10:05:48 +02:00
Ingo Oppermann
1ebf1f7f29 Write header only if a valid return code is available 2022-09-15 13:43:48 +02:00
Ingo Oppermann
1511b950ae Update joy4 dependency to fix the increased RTMP client compatibility 2022-09-14 19:34:35 +02:00
Ingo Oppermann
2e560b635d Update joy4 dependency for increased RTMP client compatibility 2022-09-14 14:52:41 +02:00
Ingo Oppermann
ff3aa3a635 Add vulnerability check 2022-09-09 16:40:15 +02:00
Ingo Oppermann
673f9d3835 Add init command 2022-09-09 15:10:29 +02:00
Ingo Oppermann
3b0a19e18a Allow to only compress responses that have a minimum length 2022-09-08 19:16:44 +02:00
Ingo Oppermann
c522de043d Upgrade dependencies 2022-09-08 15:39:56 +02:00
Ingo Oppermann
f1d71c202b Fix HLS streaming and cleanup on diskfs 2022-09-08 15:00:09 +02:00
Ingo Oppermann
ed36f45f5f Update changelog 2022-09-08 14:54:48 +02:00
Ingo Oppermann
285ef79716 Add /v3/metrics (get) endpoint to list all known metrics 2022-09-08 13:50:53 +02:00
Ingo Oppermann
2d754b4212 Log HTTP request and response body sizes 2022-09-07 13:53:26 +02:00
Ingo Oppermann
5cb0592854 Exclude .m3u8 and .mpd files from disk cache by default 2022-08-26 11:35:56 +03:00
Ingo Oppermann
f1141d1ad9 Fix assigning cleanup rules for diskfs 2022-08-26 08:17:17 +03:00
Ingo Oppermann
6ee565b3c9 Fix correct output of purge_on_delete value 2022-08-26 07:56:29 +03:00
Ingo Oppermann
e675eccd50 Update changelog 2022-08-22 10:13:57 +03:00
Ingo Oppermann
45fa1c4498 Fix intersection of search results 2022-08-19 12:37:53 +03:00
Ingo Oppermann
f60d09963c Add RegistryReader interface for read-only registry 2022-08-19 11:46:30 +03:00
Ingo Oppermann
9cd132650e Use path without app as session reference 2022-08-19 11:24:44 +03:00
Ingo Oppermann
0febae3242 Return number of purged files 2022-08-18 12:00:37 +03:00
Ingo Oppermann
6802830c62 Don't use deprecated functions from io/ioutil 2022-08-18 10:27:33 +03:00
Ingo Oppermann
5bd04817cc Fix wrong path for swagger definition 2022-08-18 10:13:00 +03:00
Ingo Oppermann
1ab09adc69 Untrack test binary 2022-08-17 16:20:10 +03:00
Ingo Oppermann
50deaef4d3 Wait for process to exit when stopping
If a process has some cleanup with purge-on-delete defined, the purge
has to wait until the process has actually exited. Otherwise it may
happen that the process got the signal and files are purged while the
process is still writing some files in order to exit cleanly. This
would leave artefacts behind on the filesystem.
2022-08-17 15:13:17 +03:00
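
The ordering matters: stop, wait for the real exit, then purge. An illustrative sketch (not the restream package's actual API):

```go
package process

import (
	"os"
	"os/exec"
	"time"
)

func stopAndPurge(cmd *exec.Cmd, purge func()) error {
	// Ask the process to stop gracefully.
	if err := cmd.Process.Signal(os.Interrupt); err != nil {
		return err
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Purging before the process has exited could delete files it is
	// still writing during its own cleanup, leaving artefacts behind.
	select {
	case <-done:
	case <-time.After(30 * time.Second): // hypothetical hard timeout
		cmd.Process.Kill()
		<-done
	}

	purge()
	return nil
}
```
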
Ingo Oppermann
e4463953b6 Upgrade datarhei/gosrt 2022-08-17 11:07:31 +03:00
Ingo Oppermann
20a743c594 Upgrade datarhei/gosrt 2022-08-17 10:01:04 +03:00
Ingo Oppermann
3e7b1751d5 Add process id and reference glob pattern matching
For the API endpoint /v3/process, two new query parameters are introduced
in order to list only processes that match a pattern for the id and the
reference: idpattern and refpattern. The pattern is a glob pattern. If
patterns for both are given, the results will be intersected. If you use
other query parameters such as id or reference, they will be applied
after the result of the pattern matching.
2022-08-17 07:55:44 +03:00
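
A minimal sketch of the described filtering, assuming glob semantics as in Go's path.Match; matching against both patterns yields the intersection:

```go
package process

import "path"

type Process struct {
	ID        string
	Reference string
}

func filterByPattern(procs []Process, idpattern, refpattern string) []Process {
	out := []Process{}
	for _, p := range procs {
		if idpattern != "" {
			if ok, _ := path.Match(idpattern, p.ID); !ok {
				continue
			}
		}
		if refpattern != "" {
			if ok, _ := path.Match(refpattern, p.Reference); !ok {
				continue
			}
		}
		// Kept only if it matches both given patterns (intersection);
		// other query parameters (id, reference) are applied afterwards.
		out = append(out, p)
	}
	return out
}
```
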
Ingo Oppermann
11c3fce812 Fix injecting commit, branch, and build info 2022-08-02 20:38:28 +02:00
Ingo Oppermann
b376fdc87d Add compiler and arch to log output 2022-08-02 20:37:47 +02:00
Ingo Oppermann
273ca0abbc Add cache block list for extensions not to cache 2022-08-02 19:10:28 +02:00
Ingo Oppermann
6af226aea7 Fix swagger endpoint IDs 2022-07-29 11:24:22 +02:00
Ingo Oppermann
542653d3e2 Update RTMP server (datarhei/restreamer#385) 2022-07-28 20:31:17 +02:00
1198 changed files with 112170 additions and 28961 deletions


@@ -62,7 +62,7 @@ jobs:
build-args: |
CORE_IMAGE=datarhei/base:${{ env.OS_NAME }}-core-${{ env.OS_VERSION }}-${{ env.CORE_VERSION }}
FFMPEG_IMAGE=datarhei/base:${{ env.OS_NAME }}-ffmpeg-rpi-${{ env.OS_VERSION }}-${{ env.FFMPEG_VERSION }}
platforms: linux/arm/v7,linux/arm/v6,linux/arm64
platforms: linux/arm/v7,linux/arm64
push: true
tags: |
datarhei/core:rpi-${{ env.CORE_VERSION }}


@@ -3,20 +3,20 @@ name: tests
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 2
- uses: actions/setup-go@v2
with:
go-version: '1.18'
- name: Run coverage
run: go test -coverprofile=coverage.out -covermode=atomic -v ./...
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
flags: unit-linux
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 2
- uses: actions/setup-go@v2
with:
go-version: "1.19"
- name: Run coverage
run: go test -coverprofile=coverage.out -covermode=atomic -v ./...
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage.out
flags: unit-linux


@@ -1,5 +1,5 @@
# CORE ALPINE BASE IMAGE
OS_NAME=alpine
OS_VERSION=3.15
GOLANG_IMAGE=golang:1.18.4-alpine3.15
CORE_VERSION=16.9.1
OS_VERSION=3.16
GOLANG_IMAGE=golang:1.19.3-alpine3.16
CORE_VERSION=16.11.0


@@ -1,3 +1,3 @@
# CORE NVIDIA CUDA BUNDLE
FFMPEG_VERSION=4.4.2
CUDA_VERSION=11.4.2
FFMPEG_VERSION=5.1.2
CUDA_VERSION=11.7.1


@@ -1,2 +1,2 @@
# CORE BUNDLE
FFMPEG_VERSION=4.4.2
FFMPEG_VERSION=5.1.2


@@ -1,2 +1,2 @@
# CORE RASPBERRY-PI BUNDLE
FFMPEG_VERSION=4.4.2
FFMPEG_VERSION=5.1.2


@@ -1,2 +1,2 @@
# CORE BUNDLE
FFMPEG_VERSION=4.4.2
FFMPEG_VERSION=5.1.2


@@ -1,5 +1,5 @@
# CORE UBUNTU BASE IMAGE
OS_NAME=ubuntu
OS_VERSION=20.04
GOLANG_IMAGE=golang:1.18.4-alpine3.15
CORE_VERSION=16.9.1
GOLANG_IMAGE=golang:1.19.3-alpine3.16
CORE_VERSION=16.11.0

.gitignore

@@ -2,6 +2,7 @@
.env
/core*
/import*
/ffmigrate*
/data/**
/test/**
.vscode


@@ -1,6 +1,43 @@
# Core
#### Core v16.9.0 > v16.9.1
### Core v16.10.1 > v16.11.0
- Add FFmpeg 4.4 to FFmpeg 5.1 migration tool
- Add alternative SRT streamid
- Mod bump FFmpeg to v5.1.2 (datarhei/core:tag bundles)
- Fix crash with custom SSL certificates ([restreamer/#425](https://github.com/datarhei/restreamer/issues/425))
- Fix proper version handling for config
- Fix widget session data
- Fix resetting process stats when process stopped
- Fix stale FFmpeg process detection for streams with only audio
- Fix wrong return status code ([#6](https://github.com/datarhei/core/issues/6))
- Fix use SRT defaults for key material exchange
### Core v16.10.0 > v16.10.1
- Add email address in TLS config for Let's Encrypt
- Fix use of Let's Encrypt production CA
### Core v16.9.1 > v16.10.0
- Add HLS session middleware to diskfs
- Add /v3/metrics (get) endpoint to list all known metrics
- Add logging HTTP request and response body sizes
- Add process id and reference glob pattern matching
- Add cache block list for extensions not to cache
- Mod exclude .m3u8 and .mpd files from disk cache by default
- Mod replaces x/crypto/acme/autocert with caddyserver/certmagic
- Mod exposes ports (Docker desktop)
- Fix assigning cleanup rules for diskfs
- Fix wrong path for swagger definition
- Fix process cleanup on delete, remove empty directories from disk
- Fix SRT blocking port on restart (upgrade datarhei/gosrt)
- Fix RTMP communication (Blackmagic Web Presenter, thx 235 MEDIA)
- Fix RTMP communication (Blackmagic ATEM Mini, [#385](https://github.com/datarhei/restreamer/issues/385))
- Fix injecting commit, branch, and build info
- Fix API metadata endpoints responses
#### Core v16.9.0 > v16.9.1
- Fix v1 import app
- Fix race condition


@@ -1,23 +1,25 @@
ARG GOLANG_IMAGE=golang:1.18.4-alpine3.15
ARG GOLANG_IMAGE=golang:1.19.3-alpine3.16
ARG BUILD_IMAGE=alpine:3.15
ARG BUILD_IMAGE=alpine:3.16
FROM $GOLANG_IMAGE as builder
COPY . /dist/core
RUN apk add \
git \
make && \
git \
make && \
cd /dist/core && \
go version && \
make release_linux && \
make import_linux
make import_linux && \
make ffmigrate_linux
FROM $BUILD_IMAGE
COPY --from=builder /dist/core/core /core/bin/core
COPY --from=builder /dist/core/import /core/bin/import
COPY --from=builder /dist/core/ffmigrate /core/bin/ffmigrate
COPY --from=builder /dist/core/mime.types /core/mime.types
COPY --from=builder /dist/core/run.sh /core/bin/run.sh


@@ -14,6 +14,12 @@ ENV CORE_CONFIGFILE=/core/config/config.json
ENV CORE_STORAGE_DISK_DIR=/core/data
ENV CORE_DB_DIR=/core/config
EXPOSE 8080/tcp
EXPOSE 8181/tcp
EXPOSE 1935/tcp
EXPOSE 1936/tcp
EXPOSE 6000/udp
VOLUME ["/core/data", "/core/config"]
ENTRYPOINT ["/core/bin/run.sh"]
WORKDIR /core


@@ -1,8 +1,8 @@
FROM golang:1.18.3-alpine3.15
FROM golang:1.19.3-alpine3.16
RUN apk add alpine-sdk
COPY . /dist/core
RUN cd /dist/core && \
go test -coverprofile=coverage.out -covermode=atomic -v ./...
go test -coverprofile=coverage.out -covermode=atomic -v ./...


@@ -6,6 +6,13 @@ BINSUFFIX := $(shell if [ "${GOOS}" -a "${GOARCH}" ]; then echo "-${GOOS}-${GOAR
all: build
## init: Install required apps
init:
go install honnef.co/go/tools/cmd/staticcheck@latest
go install github.com/swaggo/swag/cmd/swag@latest
go install github.com/99designs/gqlgen@latest
go install golang.org/x/vuln/cmd/govulncheck@latest
## build: Build core (default)
build:
CGO_ENABLED=${CGO_ENABLED} GOOS=${GOOS} GOARCH=${GOARCH} go build -o core${BINSUFFIX}
@@ -34,6 +41,10 @@ vet:
fmt:
go fmt ./...
## vulncheck: Check for known vulnerabilities in dependencies
vulncheck:
govulncheck ./...
## update: Update dependencies
update:
go get -u
@@ -64,6 +75,14 @@ import:
import_linux:
cd app/import && CGO_ENABLED=0 GOOS=linux GOARCH=${OSARCH} go build -o ../../import -ldflags="-s -w"
## ffmigrate: Build ffmpeg migration binary
ffmigrate:
cd app/ffmigrate && CGO_ENABLED=${CGO_ENABLED} GOOS=${GOOS} GOARCH=${GOARCH} go build -o ../../ffmigrate -ldflags="-s -w"
# github workflow workaround
ffmigrate_linux:
cd app/ffmigrate && CGO_ENABLED=0 GOOS=linux GOARCH=${OSARCH} go build -o ../../ffmigrate -ldflags="-s -w"
## coverage: Generate code coverage analysis
coverage:
go test -race -coverprofile test/cover.out ./...
@@ -75,17 +94,17 @@ commit: vet fmt lint test build
## release: Build a release binary of core
release:
CGO_ENABLED=${CGO_ENABLED} GOOS=${GOOS} GOARCH=${GOARCH} go build -o core -ldflags="-s -w -X github.com/datarhei/core/app.Commit=$(COMMIT) -X github.com/datarhei/core/app.Branch=$(BRANCH) -X github.com/datarhei/core/app.Build=$(BUILD)"
CGO_ENABLED=${CGO_ENABLED} GOOS=${GOOS} GOARCH=${GOARCH} go build -o core -ldflags="-s -w -X github.com/datarhei/core/v16/app.Commit=$(COMMIT) -X github.com/datarhei/core/v16/app.Branch=$(BRANCH) -X github.com/datarhei/core/v16/app.Build=$(BUILD)"
# github workflow workaround
release_linux:
CGO_ENABLED=0 GOOS=linux GOARCH=${OSARCH} go build -o core -ldflags="-s -w -X github.com/datarhei/core/app.Commit=$(COMMIT) -X github.com/datarhei/core/app.Branch=$(BRANCH) -X github.com/datarhei/core/app.Build=$(BUILD)"
CGO_ENABLED=0 GOOS=linux GOARCH=${OSARCH} go build -o core -ldflags="-s -w -X github.com/datarhei/core/v16/app.Commit=$(COMMIT) -X github.com/datarhei/core/v16/app.Branch=$(BRANCH) -X github.com/datarhei/core/v16/app.Build=$(BUILD)"
## docker: Build standard Docker image
docker:
docker build -t core:$(SHORTCOMMIT) .
.PHONY: help build swagger test vet fmt vendor commit coverage lint release import update
.PHONY: help init build swagger test vet fmt vulncheck vendor commit coverage lint release import ffmigrate update
## help: Show all commands
help: Makefile


@@ -1,7 +1,8 @@
# Core
The cloud-native audio/video processing API.
[![License: MIT](https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg)]([https://opensource.org/licenses/MI](https://www.apache.org/licenses/LICENSE-2.0))
[![License: MIT](https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg)](<[https://opensource.org/licenses/MI](https://www.apache.org/licenses/LICENSE-2.0)>)
[![CodeQL](https://github.com/datarhei/core/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/datarhei/core/actions/workflows/codeql-analysis.yml)
[![tests](https://github.com/datarhei/core/actions/workflows/go-tests.yml/badge.svg)](https://github.com/datarhei/core/actions/workflows/go-tests.yml)
[![codecov](https://codecov.io/gh/datarhei/core/branch/main/graph/badge.svg?token=90YMPZRAFK)](https://codecov.io/gh/datarhei/core)
@@ -119,7 +120,8 @@ The currently known environment variables (but not all will be respected) are:
| CORE_STORAGE_DISK_CACHE_MAXSIZEMBYTES | `0` | Max. allowed cache size, 0 for unlimited. |
| CORE_STORAGE_DISK_CACHE_TTLSECONDS | `300` | Seconds to keep files in cache. |
| CORE_STORAGE_DISK_CACHE_MAXFILESIZEMBYTES | `1` | Max. file size to put in cache. |
| CORE_STORAGE_DISK_CACHE_TYPES | (not set) | List of file extensions to cache (space-separated, e.g. ".html .js"), empty for all. |
| CORE_STORAGE_DISK_CACHE_TYPES_ALLOW | (not set) | List of file extensions to cache (space-separated, e.g. ".html .js"), empty for all. |
| CORE_STORAGE_DISK_CACHE_TYPES_BLOCK | (not set) | List of file extensions not to cache (space-separated, e.g. ".m3u8 .mpd"), empty for none. |
| CORE_STORAGE_MEMORY_AUTH_ENABLE | `true` | Enable basic auth for PUT,POST, and DELETE on /memfs. |
| CORE_STORAGE_MEMORY_AUTH_USERNAME | (not set) | Username for Basic-Auth of `/memfs`. Required if auth is enabled. |
| CORE_STORAGE_MEMORY_AUTH_PASSWORD | (not set) | Password for Basic-Auth of `/memfs`. Required if auth is enabled. |
@@ -180,7 +182,7 @@ All other values will be filled with default values and persisted on disk. The e
```
{
"version": 1,
"version": 3,
"id": "[will be generated if not given]",
"name": "[will be generated if not given]",
"address": ":8080",
@@ -238,7 +240,10 @@ All other values will be filled with default values and persisted on disk. The e
"max_size_mbytes": 0,
"ttl_seconds": 300,
"max_file_size_mbytes": 1,
"types": []
"types": {
"allow": [],
"block": []
}
}
},
"memory": {


@@ -5,11 +5,12 @@ import (
"crypto/tls"
"fmt"
"io"
"io/ioutil"
golog "log"
"math"
gonet "net"
gohttp "net/http"
"net/url"
"os"
"path/filepath"
"runtime/debug"
"sync"
@@ -17,6 +18,8 @@ import (
"github.com/datarhei/core/v16/app"
"github.com/datarhei/core/v16/config"
configstore "github.com/datarhei/core/v16/config/store"
configvars "github.com/datarhei/core/v16/config/vars"
"github.com/datarhei/core/v16/ffmpeg"
"github.com/datarhei/core/v16/http"
"github.com/datarhei/core/v16/http/cache"
@@ -37,7 +40,8 @@ import (
"github.com/datarhei/core/v16/srt"
"github.com/datarhei/core/v16/update"
"golang.org/x/crypto/acme/autocert"
"github.com/caddyserver/certmagic"
"go.uber.org/zap"
)
// The API interface is the implementation for the restreamer API.
@@ -97,7 +101,7 @@ type api struct {
config struct {
path string
store config.Store
store configstore.Store
config *config.Config
}
@@ -120,7 +124,7 @@ func New(configpath string, logwriter io.Writer) (API, error) {
a.log.writer = logwriter
if a.log.writer == nil {
a.log.writer = ioutil.Discard
a.log.writer = io.Discard
}
a.errorChan = make(chan error, 1)
@@ -144,9 +148,14 @@ func (a *api) Reload() error {
a.errorChan = make(chan error, 1)
}
logger := log.New("Core").WithOutput(log.NewConsoleWriter(a.log.writer, log.Lwarn, true))
logger := log.New("Core").WithOutput(
log.NewLevelWriter(
log.NewConsoleWriter(a.log.writer, true),
log.Lwarn,
),
)
store, err := config.NewJSONStore(a.config.path, func() {
store, err := configstore.NewJSON(a.config.path, func() {
a.errorChan <- ErrConfigReload
})
if err != nil {
@@ -154,16 +163,11 @@ func (a *api) Reload() error {
}
cfg := store.Get()
if err := cfg.Migrate(); err == nil {
store.Set(cfg)
} else {
return err
}
cfg.Merge()
if len(cfg.Host.Name) == 0 && cfg.Host.Auto {
cfg.SetPublicIPs()
cfg.Host.Name = net.GetPublicIPs(5 * time.Second)
}
cfg.Validate(false)
@@ -185,37 +189,62 @@ func (a *api) Reload() error {
break
}
buffer := log.NewBufferWriter(loglevel, cfg.Log.MaxLines)
buffer := log.NewBufferWriter(cfg.Log.MaxLines)
var writer log.Writer
logger = logger.WithOutput(log.NewLevelRewriter(
log.NewMultiWriter(
log.NewTopicWriter(
log.NewConsoleWriter(a.log.writer, loglevel, true),
cfg.Log.Topics,
),
buffer,
),
[]log.LevelRewriteRule{
// FFmpeg annoyance, move all warnings about unathorized access to memfs from ffmpeg to debug level
// ts=2022-04-28T07:24:27Z level=WARN component="HTTP" address=":8080" client="::1" latency_ms=0 method="PUT" path="/memfs/00a10a69-416a-4cd5-9d4f-6d88ed3dd7f5_0917.ts" proto="HTTP/1.1" size_bytes=65 status=401 status_text="Unauthorized" user_agent="Lavf/58.76.100"
{
Level: log.Ldebug,
Component: "HTTP",
Match: map[string]string{
"client": "^(::1|127.0.0.1)$",
"method": "^(PUT|POST|DELETE)$",
"status_text": "^Unauthorized$",
"user_agent": "^Lavf/",
if cfg.Log.Target.Output == "stdout" {
writer = log.NewConsoleWriter(
os.Stdout,
true,
)
} else if cfg.Log.Target.Output == "file" {
writer = log.NewFileWriter(
cfg.Log.Target.Path,
log.NewJSONFormatter(),
)
} else {
writer = log.NewConsoleWriter(
os.Stderr,
true,
)
}
logger = logger.WithOutput(
log.NewLevelWriter(
log.NewLevelRewriter(
log.NewMultiWriter(
log.NewTopicWriter(
writer,
cfg.Log.Topics,
),
buffer,
),
[]log.LevelRewriteRule{
// FFmpeg annoyance, move all warnings about unathorized access to memfs from ffmpeg to debug level
// ts=2022-04-28T07:24:27Z level=WARN component="HTTP" address=":8080" client="::1" latency_ms=0 method="PUT" path="/memfs/00a10a69-416a-4cd5-9d4f-6d88ed3dd7f5_0917.ts" proto="HTTP/1.1" size_bytes=65 status=401 status_text="Unauthorized" user_agent="Lavf/58.76.100"
{
Level: log.Ldebug,
Component: "HTTP",
Match: map[string]string{
"client": "^(::1|127.0.0.1)$",
"method": "^(PUT|POST|DELETE)$",
"status_text": "^Unauthorized$",
"user_agent": "^Lavf/",
},
},
},
},
},
))
),
loglevel,
),
)
logfields := log.Fields{
"application": app.Name,
"version": app.Version.String(),
"repository": "https://github.com/datarhei/core",
"license": "Apache License Version 2.0",
"arch": app.Arch,
"compiler": app.Compiler,
}
if len(app.Commit) != 0 && len(app.Branch) != 0 {
@@ -229,8 +258,10 @@ func (a *api) Reload() error {
logger.Info().WithFields(logfields).Log("")
logger.Info().WithField("path", a.config.path).Log("Read config file")
configlogger := logger.WithComponent("Config")
cfg.Messages(func(level string, v config.Variable, message string) {
cfg.Messages(func(level string, v configvars.Variable, message string) {
configlogger = configlogger.WithFields(log.Fields{
"variable": v.Name,
"value": v.Value,
@@ -366,11 +397,6 @@ func (a *api) start() error {
a.sessions = sessions
}
store := store.NewJSONStore(store.JSONConfig{
Dir: cfg.DB.Dir,
Logger: a.log.logger.core.WithComponent("ProcessStore"),
})
diskfs, err := fs.NewDiskFilesystem(fs.DiskConfig{
Dir: cfg.Storage.Disk.Dir,
Size: cfg.Storage.Disk.Size * 1024 * 1024,
@@ -450,36 +476,49 @@ func (a *api) start() error {
a.replacer = replace.New()
{
a.replacer.RegisterTemplate("diskfs", a.diskfs.Base())
a.replacer.RegisterTemplate("memfs", a.memfs.Base())
a.replacer.RegisterTemplate("diskfs", a.diskfs.Base(), nil)
a.replacer.RegisterTemplate("memfs", a.memfs.Base(), nil)
host, port, _ := gonet.SplitHostPort(cfg.RTMP.Address)
if len(host) == 0 {
host = "localhost"
}
template := "rtmp://" + host + ":" + port + cfg.RTMP.App + "/{name}"
template := "rtmp://" + host + ":" + port
if cfg.RTMP.App != "/" {
template += cfg.RTMP.App
}
template += "/{name}"
if len(cfg.RTMP.Token) != 0 {
template += "?token=" + cfg.RTMP.Token
}
a.replacer.RegisterTemplate("rtmp", template)
a.replacer.RegisterTemplate("rtmp", template, nil)
host, port, _ = gonet.SplitHostPort(cfg.SRT.Address)
if len(host) == 0 {
host = "localhost"
}
template = "srt://" + host + ":" + port + "?mode=caller&transtype=live&streamid=#!:m={mode},r={name}"
template = "srt://" + host + ":" + port + "?mode=caller&transtype=live&latency={latency}&streamid={name},mode:{mode}"
if len(cfg.SRT.Token) != 0 {
template += ",token=" + cfg.SRT.Token
template += ",token:" + cfg.SRT.Token
}
if len(cfg.SRT.Passphrase) != 0 {
template += "&passphrase=" + cfg.SRT.Passphrase
}
a.replacer.RegisterTemplate("srt", template)
a.replacer.RegisterTemplate("srt", template, map[string]string{
"latency": "20000", // 20 milliseconds, FFmpeg requires microseconds
})
}
store := store.NewJSONStore(store.JSONConfig{
Filepath: cfg.DB.Dir + "/db.json",
FFVersion: a.ffmpeg.Skills().FFmpeg.Version,
Logger: a.log.logger.core.WithComponent("ProcessStore"),
})
restream, err := restream.New(restream.Config{
ID: cfg.ID,
Name: cfg.Name,
@@ -631,11 +670,12 @@ func (a *api) start() error {
if cfg.Storage.Disk.Cache.Enable {
diskCache, err := cache.NewLRUCache(cache.LRUConfig{
TTL: time.Duration(cfg.Storage.Disk.Cache.TTL) * time.Second,
MaxSize: cfg.Storage.Disk.Cache.Size * 1024 * 1024,
MaxFileSize: cfg.Storage.Disk.Cache.FileSize * 1024 * 1024,
Extensions: cfg.Storage.Disk.Cache.Types,
Logger: a.log.logger.core.WithComponent("HTTPCache"),
TTL: time.Duration(cfg.Storage.Disk.Cache.TTL) * time.Second,
MaxSize: cfg.Storage.Disk.Cache.Size * 1024 * 1024,
MaxFileSize: cfg.Storage.Disk.Cache.FileSize * 1024 * 1024,
AllowExtensions: cfg.Storage.Disk.Cache.Types.Allow,
BlockExtensions: cfg.Storage.Disk.Cache.Types.Block,
Logger: a.log.logger.core.WithComponent("HTTPCache"),
})
if err != nil {
@@ -645,69 +685,105 @@ func (a *api) start() error {
a.cache = diskCache
}
var autocertManager *autocert.Manager
var autocertManager *certmagic.Config
if cfg.TLS.Enable && cfg.TLS.Auto {
if len(cfg.Host.Name) == 0 {
return fmt.Errorf("at least one host must be provided in host.name or RS_HOST_NAME")
}
autocertManager = &autocert.Manager{
Prompt: autocert.AcceptTOS,
HostPolicy: autocert.HostWhitelist(cfg.Host.Name...),
Cache: autocert.DirCache(cfg.DB.Dir + "/cert"),
}
// Start temporary http server on configured port
tempserver := &gohttp.Server{
Addr: cfg.Address,
Handler: autocertManager.HTTPHandler(gohttp.HandlerFunc(func(w gohttp.ResponseWriter, r *gohttp.Request) {
w.WriteHeader(gohttp.StatusNotFound)
})),
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
MaxHeaderBytes: 1 << 20,
}
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
tempserver.ListenAndServe()
wg.Done()
}()
var certerror bool
// For each domain, get the certificate
for _, host := range cfg.Host.Name {
logger := a.log.logger.core.WithComponent("Let's Encrypt").WithField("host", host)
logger.Info().Log("Acquiring certificate ...")
_, err := autocertManager.GetCertificate(&tls.ClientHelloInfo{
ServerName: host,
})
if err != nil {
logger.Error().WithField("error", err).Log("Failed to acquire certificate")
certerror = true
break
if cfg.TLS.Enable {
if cfg.TLS.Auto {
if len(cfg.Host.Name) == 0 {
return fmt.Errorf("at least one host must be provided in host.name or CORE_HOST_NAME")
}
logger.Info().Log("Successfully acquired certificate")
}
certmagic.Default.Storage = &certmagic.FileStorage{
Path: cfg.DB.Dir + "/cert",
}
certmagic.Default.DefaultServerName = cfg.Host.Name[0]
certmagic.Default.Logger = zap.NewNop()
// Shut down the temporary http server
tempserver.Close()
certmagic.DefaultACME.Agreed = true
certmagic.DefaultACME.Email = cfg.TLS.Email
certmagic.DefaultACME.CA = certmagic.LetsEncryptProductionCA
certmagic.DefaultACME.DisableHTTPChallenge = false
certmagic.DefaultACME.DisableTLSALPNChallenge = true
certmagic.DefaultACME.Logger = zap.NewNop()
wg.Wait()
magic := certmagic.NewDefault()
acme := certmagic.NewACMEIssuer(magic, certmagic.DefaultACME)
acme.Logger = zap.NewNop()
if certerror {
a.log.logger.core.Warn().Log("Continuing with disabled TLS")
autocertManager = nil
cfg.TLS.Enable = false
magic.Issuers = []certmagic.Issuer{acme}
magic.Logger = zap.NewNop()
autocertManager = magic
// Start temporary http server on configured port
tempserver := &gohttp.Server{
Addr: cfg.Address,
Handler: acme.HTTPChallengeHandler(gohttp.HandlerFunc(func(w gohttp.ResponseWriter, r *gohttp.Request) {
w.WriteHeader(gohttp.StatusNotFound)
})),
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
MaxHeaderBytes: 1 << 20,
}
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
tempserver.ListenAndServe()
wg.Done()
}()
var certerror bool
// For each domain, get the certificate
for _, host := range cfg.Host.Name {
logger := a.log.logger.core.WithComponent("Let's Encrypt").WithField("host", host)
logger.Info().Log("Acquiring certificate ...")
ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(5*time.Minute))
err := autocertManager.ManageSync(ctx, []string{host})
cancel()
if err != nil {
logger.Error().WithField("error", err).Log("Failed to acquire certificate")
certerror = true
/*
problems, err := letsdebug.Check(host, letsdebug.HTTP01)
if err != nil {
logger.Error().WithField("error", err).Log("Failed to debug certificate acquisition")
}
for _, p := range problems {
logger.Error().WithFields(log.Fields{
"name": p.Name,
"detail": p.Detail,
}).Log(p.Explanation)
}
*/
break
}
logger.Info().Log("Successfully acquired certificate")
}
// Shut down the temporary http server
tempserver.Close()
wg.Wait()
if certerror {
a.log.logger.core.Warn().Log("Continuing with disabled TLS")
autocertManager = nil
cfg.TLS.Enable = false
} else {
cfg.TLS.CertFile = ""
cfg.TLS.KeyFile = ""
}
} else {
cfg.TLS.CertFile = ""
cfg.TLS.KeyFile = ""
a.log.logger.core.Info().Log("Enabling TLS with cert and key files")
}
}
@@ -723,14 +799,15 @@ func (a *api) start() error {
Collector: a.sessions.Collector("rtmp"),
}
if autocertManager != nil && cfg.RTMP.EnableTLS {
config.TLSConfig = &tls.Config{
GetCertificate: autocertManager.GetCertificate,
}
if cfg.RTMP.EnableTLS {
config.Logger = config.Logger.WithComponent("RTMP/S")
a.log.logger.rtmps = a.log.logger.core.WithComponent("RTMPS").WithField("address", cfg.RTMP.AddressTLS)
if autocertManager != nil {
config.TLSConfig = &tls.Config{
GetCertificate: autocertManager.GetCertificate,
}
}
}
rtmpserver, err := rtmp.New(config)
@@ -902,7 +979,8 @@ func (a *api) start() error {
GetCertificate: autocertManager.GetCertificate,
}
a.sidecarserver.Handler = autocertManager.HTTPHandler(sidecarserverhandler)
acme := autocertManager.Issuers[0].(*certmagic.ACMEIssuer)
a.sidecarserver.Handler = acme.HTTPChallengeHandler(sidecarserverhandler)
}
wgStart.Add(1)
@@ -1073,6 +1151,12 @@ func (a *api) start() error {
}(ctx)
}
if cfg.Debug.MemoryLimit > 0 {
debug.SetMemoryLimit(cfg.Debug.MemoryLimit * 1024 * 1024)
} else {
debug.SetMemoryLimit(math.MaxInt64)
}
// Start the restream processes
restream.Start()
@@ -1242,4 +1326,6 @@ func (a *api) Destroy() {
a.memfs.DeleteAll()
a.memfs = nil
}
a.log.logger.core.Close()
}

app/ffmigrate/main.go (new file)

@@ -0,0 +1,196 @@
package main
import (
"fmt"
"os"
"regexp"
cfgstore "github.com/datarhei/core/v16/config/store"
cfgvars "github.com/datarhei/core/v16/config/vars"
"github.com/datarhei/core/v16/ffmpeg"
"github.com/datarhei/core/v16/io/file"
"github.com/datarhei/core/v16/log"
"github.com/datarhei/core/v16/restream/store"
"github.com/Masterminds/semver/v3"
_ "github.com/joho/godotenv/autoload"
)
func main() {
logger := log.New("Migration").WithOutput(
log.NewLevelWriter(
log.NewConsoleWriter(os.Stderr, true),
log.Linfo,
),
).WithFields(log.Fields{
"from": "ffmpeg4",
"to": "ffmpeg5",
})
configfile := cfgstore.Location(os.Getenv("CORE_CONFIGFILE"))
configstore, err := cfgstore.NewJSON(configfile, nil)
if err != nil {
logger.Error().WithError(err).Log("Loading configuration failed")
os.Exit(1)
}
if err := doMigration(logger, configstore); err != nil {
os.Exit(1)
}
}
func doMigration(logger log.Logger, configstore cfgstore.Store) error {
if logger == nil {
logger = log.New("")
}
cfg := configstore.Get()
// Merging the persisted config with the environment variables
cfg.Merge()
cfg.Validate(false)
if cfg.HasErrors() {
logger.Error().Log("The configuration contains errors")
messages := []string{}
cfg.Messages(func(level string, v cfgvars.Variable, message string) {
if level == "error" {
logger.Error().WithFields(log.Fields{
"variable": v.Name,
"value": v.Value,
"env": v.EnvName,
"description": v.Description,
}).Log(message)
messages = append(messages, v.Name+": "+message)
}
})
return fmt.Errorf("the configuration contains errors: %v", messages)
}
var writer log.Writer
if cfg.Log.Target.Output == "stdout" {
writer = log.NewConsoleWriter(
os.Stdout,
true,
)
} else if cfg.Log.Target.Output == "file" {
writer = log.NewFileWriter(
cfg.Log.Target.Path,
log.NewJSONFormatter(),
)
} else {
writer = log.NewConsoleWriter(
os.Stderr,
true,
)
}
logger = logger.WithOutput(writer)
ff, err := ffmpeg.New(ffmpeg.Config{
Binary: cfg.FFmpeg.Binary,
})
if err != nil {
logger.Error().WithError(err).Log("Loading FFmpeg binary failed")
return fmt.Errorf("loading FFmpeg binary failed: %w", err)
}
version, err := semver.NewVersion(ff.Skills().FFmpeg.Version)
if err != nil {
logger.Error().WithError(err).Log("Parsing FFmpeg version failed")
return fmt.Errorf("parsing FFmpeg version failed: %w", err)
}
// The current FFmpeg version is 4. Nothing to do.
if version.Major() == 4 {
return nil
}
if version.Major() != 5 {
err := fmt.Errorf("unknown FFmpeg version found: %d", version.Major())
logger.Error().WithError(err).Log("Unsupported FFmpeg version found")
return fmt.Errorf("unsupported FFmpeg version found: %w", err)
}
// Check if there's a DB file
dbFilepath := cfg.DB.Dir + "/db.json"
if _, err = os.Stat(dbFilepath); err != nil {
// There's no DB to backup
logger.Info().WithField("db", dbFilepath).Log("Database not found. Migration not required")
return nil
}
// Check if we already have a backup
backupFilepath := cfg.DB.Dir + "/db_ff4.json"
if _, err = os.Stat(backupFilepath); err == nil {
// Yes, we have a backup. The migration already happened
logger.Info().WithField("backup", backupFilepath).Log("Migration already done")
return nil
}
// Create a backup
if err := file.Copy(dbFilepath, backupFilepath); err != nil {
logger.Error().WithError(err).Log("Creating backup file failed")
return fmt.Errorf("creating backup file failed: %w", err)
}
logger.Info().WithField("backup", backupFilepath).Log("Backup created")
// Load the existing DB
datastore := store.NewJSONStore(store.JSONConfig{
Filepath: cfg.DB.Dir + "/db.json",
})
data, err := datastore.Load()
if err != nil {
logger.Error().WithError(err).Log("Loading database failed")
return fmt.Errorf("loading database failed: %w", err)
}
logger.Info().Log("Migrating processes ...")
// Migrate the processes to version 5
// Only this happens:
// - for RTSP inputs, replace -stimeout with -timeout
reRTSP := regexp.MustCompile(`^rtsps?://`)
for id, p := range data.Process {
logger.Info().WithField("processid", p.ID).Log("")
for index, input := range p.Config.Input {
if !reRTSP.MatchString(input.Address) {
continue
}
for i, o := range input.Options {
if o != "-stimeout" {
continue
}
input.Options[i] = "-timeout"
}
p.Config.Input[index] = input
}
p.Config.FFVersion = version.String()
data.Process[id] = p
}
logger.Info().Log("Migrating processes done")
// Store the modified DB
if err := datastore.Store(data); err != nil {
logger.Error().WithError(err).Log("Storing database failed")
return fmt.Errorf("storing database failed: %w", err)
}
logger.Info().Log("Completed")
return nil
}


@@ -6,7 +6,6 @@ package main
import (
gojson "encoding/json"
"fmt"
"io/ioutil"
"math"
"net/url"
"os"
@@ -503,7 +502,7 @@ func importV1(path string, cfg importConfig) (store.StoreData, error) {
r := store.NewStoreData()
jsondata, err := ioutil.ReadFile(path)
jsondata, err := os.ReadFile(path)
if err != nil {
return r, fmt.Errorf("failed to read data from %s: %w", path, err)
}


@@ -2,7 +2,6 @@ package main
import (
gojson "encoding/json"
"io/ioutil"
"os"
"testing"
@@ -51,7 +50,7 @@ func testV1Import(t *testing.T, v1Fixture, v4Fixture string, config importConfig
require.Equal(t, nil, err)
// Read the wanted result
wantdatav4, err := ioutil.ReadFile(v4Fixture)
wantdatav4, err := os.ReadFile(v4Fixture)
require.Equal(t, nil, err)
var wantv4 store.StoreData


@@ -4,7 +4,8 @@ import (
"fmt"
"os"
"github.com/datarhei/core/v16/config"
cfgstore "github.com/datarhei/core/v16/config/store"
cfgvars "github.com/datarhei/core/v16/config/vars"
"github.com/datarhei/core/v16/log"
"github.com/datarhei/core/v16/restream/store"
@@ -12,9 +13,16 @@ import (
)
func main() {
logger := log.New("Import").WithOutput(log.NewConsoleWriter(os.Stderr, log.Linfo, true)).WithField("version", "v1")
logger := log.New("Import").WithOutput(
log.NewLevelWriter(
log.NewConsoleWriter(os.Stderr, true),
log.Linfo,
),
).WithField("version", "v1")
configstore, err := config.NewJSONStore(os.Getenv("CORE_CONFIGFILE"), nil)
configfile := cfgstore.Location(os.Getenv("CORE_CONFIGFILE"))
configstore, err := cfgstore.NewJSON(configfile, nil)
if err != nil {
logger.Error().WithError(err).Log("Loading configuration failed")
os.Exit(1)
@@ -25,15 +33,12 @@ func main() {
}
}
func doImport(logger log.Logger, configstore config.Store) error {
func doImport(logger log.Logger, configstore cfgstore.Store) error {
if logger == nil {
logger = log.New("")
}
logger.Info().Log("Database import")
cfg := configstore.Get()
cfg.Migrate()
// Merging the persisted config with the environment variables
cfg.Merge()
@@ -42,7 +47,7 @@ func doImport(logger log.Logger, configstore config.Store) error {
if cfg.HasErrors() {
logger.Error().Log("The configuration contains errors")
messages := []string{}
cfg.Messages(func(level string, v config.Variable, message string) {
cfg.Messages(func(level string, v cfgvars.Variable, message string) {
if level == "error" {
logger.Error().WithFields(log.Fields{
"variable": v.Name,
@@ -58,6 +63,27 @@ func doImport(logger log.Logger, configstore config.Store) error {
return fmt.Errorf("the configuration contains errors: %v", messages)
}
var writer log.Writer
if cfg.Log.Target.Output == "stdout" {
writer = log.NewConsoleWriter(
os.Stdout,
true,
)
} else if cfg.Log.Target.Output == "file" {
writer = log.NewFileWriter(
cfg.Log.Target.Path,
log.NewJSONFormatter(),
)
} else {
writer = log.NewConsoleWriter(
os.Stderr,
true,
)
}
logger = logger.WithOutput(writer)
logger.Info().Log("Checking for database ...")
// Check if there's a v1.json from the old Restreamer
@@ -80,7 +106,7 @@ func doImport(logger log.Logger, configstore config.Store) error {
// Load an existing DB
datastore := store.NewJSONStore(store.JSONConfig{
Dir: cfg.DB.Dir,
Filepath: cfg.DB.Dir + "/db.json",
})
data, err := datastore.Load()
@@ -117,7 +143,6 @@ func doImport(logger log.Logger, configstore config.Store) error {
// Get the unmerged config for persisting
cfg = configstore.Get()
cfg.Migrate()
// Add static routes to mimic the old URLs
cfg.Router.Routes["/hls/live.stream.m3u8"] = "/memfs/" + importConfig.id + ".m3u8"


@@ -3,16 +3,14 @@ package main
import (
"testing"
"github.com/datarhei/core/v16/config"
"github.com/datarhei/core/v16/config/store"
"github.com/stretchr/testify/require"
)
func TestImport(t *testing.T) {
configstore := config.NewDummyStore()
configstore := store.NewDummy()
cfg := configstore.Get()
cfg.Version = 1
cfg.Migrate()
err := configstore.Set(cfg)
require.NoError(t, err)


@@ -29,8 +29,8 @@ func (v versionInfo) MinorString() string {
// Version of the app
var Version = versionInfo{
Major: 16,
Minor: 9,
Patch: 1,
Minor: 11,
Patch: 0,
}
// Commit is the git commit the app is build from. It should be filled in during compilation


@@ -3,226 +3,72 @@ package config
import (
"context"
"fmt"
"net"
"os"
"strconv"
"strings"
"time"
"github.com/datarhei/core/v16/math/rand"
haikunator "github.com/atrox/haikunatorgo/v2"
"github.com/datarhei/core/v16/config/copy"
"github.com/datarhei/core/v16/config/value"
"github.com/datarhei/core/v16/config/vars"
"github.com/datarhei/core/v16/math/rand"
"github.com/google/uuid"
)
const version int64 = 2
/*
type Config interface {
// Merge merges the values of the known environment variables into the configuration
Merge()
type variable struct {
value value // The actual value
defVal string // The default value in string representation
name string // A name for this value
envName string // The environment variable that corresponds to this value
envAltNames []string // Alternative environment variable names
description string // A desriptions for this value
required bool // Whether a non-empty value is required
disguise bool // Whether the value should be disguised if printed
merged bool // Whether this value has been replaced by its corresponding environment variable
}
// Validate validates the current state of the Config for completeness and sanity. Errors are
// written to the log. Use resetLogs to indicate to reset the logs prior validation.
Validate(resetLogs bool)
type Variable struct {
Value string
Name string
EnvName string
Description string
Merged bool
}
// Messages calls for each log entry the provided callback. The level has the values 'error', 'warn', or 'info'.
// The name is the name of the configuration value, e.g. 'api.auth.enable'. The message is the log message.
Messages(logger func(level string, v vars.Variable, message string))
type message struct {
message string // The log message
variable Variable // The config field this message refers to
level string // The loglevel for this message
}
// HasErrors returns whether there are some error messages in the log.
HasErrors() bool
type Auth0Tenant struct {
Domain string `json:"domain"`
Audience string `json:"audience"`
ClientID string `json:"clientid"`
Users []string `json:"users"`
}
// Overrides returns a list of configuration value names that have been overriden by an environment variable.
Overrides() []string
// Data is the actual configuration data for the app
type Data struct {
CreatedAt time.Time `json:"created_at"`
LoadedAt time.Time `json:"-"`
UpdatedAt time.Time `json:"-"`
Version int64 `json:"version" jsonschema:"minimum=1,maximum=1"`
ID string `json:"id"`
Name string `json:"name"`
Address string `json:"address"`
CheckForUpdates bool `json:"update_check"`
Log struct {
Level string `json:"level" enums:"debug,info,warn,error,silent" jsonschema:"enum=debug,enum=info,enum=warn,enum=error,enum=silent"`
Topics []string `json:"topics"`
MaxLines int `json:"max_lines"`
} `json:"log"`
DB struct {
Dir string `json:"dir"`
} `json:"db"`
Host struct {
Name []string `json:"name"`
Auto bool `json:"auto"`
} `json:"host"`
API struct {
ReadOnly bool `json:"read_only"`
Access struct {
HTTP struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"http"`
HTTPS struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"https"`
} `json:"access"`
Auth struct {
Enable bool `json:"enable"`
DisableLocalhost bool `json:"disable_localhost"`
Username string `json:"username"`
Password string `json:"password"`
JWT struct {
Secret string `json:"secret"`
} `json:"jwt"`
Auth0 struct {
Enable bool `json:"enable"`
Tenants []Auth0Tenant `json:"tenants"`
} `json:"auth0"`
} `json:"auth"`
} `json:"api"`
TLS struct {
Address string `json:"address"`
Enable bool `json:"enable"`
Auto bool `json:"auto"`
CertFile string `json:"cert_file"`
KeyFile string `json:"key_file"`
} `json:"tls"`
Storage struct {
Disk struct {
Dir string `json:"dir"`
Size int64 `json:"max_size_mbytes"`
Cache struct {
Enable bool `json:"enable"`
Size uint64 `json:"max_size_mbytes"`
TTL int64 `json:"ttl_seconds"`
FileSize uint64 `json:"max_file_size_mbytes"`
Types []string `json:"types"`
} `json:"cache"`
} `json:"disk"`
Memory struct {
Auth struct {
Enable bool `json:"enable"`
Username string `json:"username"`
Password string `json:"password"`
} `json:"auth"`
Size int64 `json:"max_size_mbytes"`
Purge bool `json:"purge"`
} `json:"memory"`
CORS struct {
Origins []string `json:"origins"`
} `json:"cors"`
MimeTypes string `json:"mimetypes_file"`
} `json:"storage"`
RTMP struct {
Enable bool `json:"enable"`
EnableTLS bool `json:"enable_tls"`
Address string `json:"address"`
AddressTLS string `json:"address_tls"`
App string `json:"app"`
Token string `json:"token"`
} `json:"rtmp"`
SRT struct {
Enable bool `json:"enable"`
Address string `json:"address"`
Passphrase string `json:"passphrase"`
Token string `json:"token"`
Log struct {
Enable bool `json:"enable"`
Topics []string `json:"topics"`
} `json:"log"`
} `json:"srt"`
FFmpeg struct {
Binary string `json:"binary"`
MaxProcesses int64 `json:"max_processes"`
Access struct {
Input struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"input"`
Output struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"output"`
} `json:"access"`
Log struct {
MaxLines int `json:"max_lines"`
MaxHistory int `json:"max_history"`
} `json:"log"`
} `json:"ffmpeg"`
Playout struct {
Enable bool `json:"enable"`
MinPort int `json:"min_port"`
MaxPort int `json:"max_port"`
} `json:"playout"`
Debug struct {
Profiling bool `json:"profiling"`
ForceGC int `json:"force_gc"`
} `json:"debug"`
Metrics struct {
Enable bool `json:"enable"`
EnablePrometheus bool `json:"enable_prometheus"`
Range int64 `json:"range_sec"` // seconds
Interval int64 `json:"interval_sec"` // seconds
} `json:"metrics"`
Sessions struct {
Enable bool `json:"enable"`
IPIgnoreList []string `json:"ip_ignorelist"`
SessionTimeout int `json:"session_timeout_sec"`
Persist bool `json:"persist"`
PersistInterval int `json:"persist_interval_sec"`
MaxBitrate uint64 `json:"max_bitrate_mbit"`
MaxSessions uint64 `json:"max_sessions"`
} `json:"sessions"`
Service struct {
Enable bool `json:"enable"`
Token string `json:"token"`
URL string `json:"url"`
} `json:"service"`
Router struct {
BlockedPrefixes []string `json:"blocked_prefixes"`
Routes map[string]string `json:"routes"`
UIPath string `json:"ui_path"`
} `json:"router"`
Get(name string) (string, error)
Set(name, val string) error
}
*/
const version int64 = 3
// Make sure that the config.Config interface is satisfied
//var _ config.Config = &Config{}
// Config is a wrapper for Data
type Config struct {
vars []*variable
logs []message
vars vars.Variables
Data
}
// New returns a Config which is initialized with its default values
func New() *Config {
data := &Config{}
config := &Config{}
data.init()
config.init()
return data
return config
}
func (d *Config) Get(name string) (string, error) {
return d.vars.Get(name)
}
func (d *Config) Set(name, val string) error {
return d.vars.Set(name, val)
}
// NewConfigFrom returns a clone of a Config
func NewConfigFrom(d *Config) *Config {
func (d *Config) Clone() *Config {
data := New()
data.CreatedAt = d.CreatedAt
@@ -251,312 +97,204 @@ func NewConfigFrom(d *Config) *Config {
data.Service = d.Service
data.Router = d.Router
data.Log.Topics = copyStringSlice(d.Log.Topics)
data.Log.Topics = copy.Slice(d.Log.Topics)
data.Host.Name = copyStringSlice(d.Host.Name)
data.Host.Name = copy.Slice(d.Host.Name)
data.API.Access.HTTP.Allow = copyStringSlice(d.API.Access.HTTP.Allow)
data.API.Access.HTTP.Block = copyStringSlice(d.API.Access.HTTP.Block)
data.API.Access.HTTPS.Allow = copyStringSlice(d.API.Access.HTTPS.Allow)
data.API.Access.HTTPS.Block = copyStringSlice(d.API.Access.HTTPS.Block)
data.API.Access.HTTP.Allow = copy.Slice(d.API.Access.HTTP.Allow)
data.API.Access.HTTP.Block = copy.Slice(d.API.Access.HTTP.Block)
data.API.Access.HTTPS.Allow = copy.Slice(d.API.Access.HTTPS.Allow)
data.API.Access.HTTPS.Block = copy.Slice(d.API.Access.HTTPS.Block)
data.API.Auth.Auth0.Tenants = copyTenantSlice(d.API.Auth.Auth0.Tenants)
data.API.Auth.Auth0.Tenants = copy.TenantSlice(d.API.Auth.Auth0.Tenants)
data.Storage.CORS.Origins = copyStringSlice(d.Storage.CORS.Origins)
data.Storage.CORS.Origins = copy.Slice(d.Storage.CORS.Origins)
data.Storage.Disk.Cache.Types.Allow = copy.Slice(d.Storage.Disk.Cache.Types.Allow)
data.Storage.Disk.Cache.Types.Block = copy.Slice(d.Storage.Disk.Cache.Types.Block)
data.FFmpeg.Access.Input.Allow = copyStringSlice(d.FFmpeg.Access.Input.Allow)
data.FFmpeg.Access.Input.Block = copyStringSlice(d.FFmpeg.Access.Input.Block)
data.FFmpeg.Access.Output.Allow = copyStringSlice(d.FFmpeg.Access.Output.Allow)
data.FFmpeg.Access.Output.Block = copyStringSlice(d.FFmpeg.Access.Output.Block)
data.FFmpeg.Access.Input.Allow = copy.Slice(d.FFmpeg.Access.Input.Allow)
data.FFmpeg.Access.Input.Block = copy.Slice(d.FFmpeg.Access.Input.Block)
data.FFmpeg.Access.Output.Allow = copy.Slice(d.FFmpeg.Access.Output.Allow)
data.FFmpeg.Access.Output.Block = copy.Slice(d.FFmpeg.Access.Output.Block)
data.Sessions.IPIgnoreList = copyStringSlice(d.Sessions.IPIgnoreList)
data.Sessions.IPIgnoreList = copy.Slice(d.Sessions.IPIgnoreList)
data.SRT.Log.Topics = copyStringSlice(d.SRT.Log.Topics)
data.SRT.Log.Topics = copy.Slice(d.SRT.Log.Topics)
data.Router.BlockedPrefixes = copyStringSlice(d.Router.BlockedPrefixes)
data.Router.Routes = copyStringMap(d.Router.Routes)
data.Router.BlockedPrefixes = copy.Slice(d.Router.BlockedPrefixes)
data.Router.Routes = copy.StringMap(d.Router.Routes)
for i, v := range d.vars {
data.vars[i].merged = v.merged
}
data.vars.Transfer(&d.vars)
return data
}
func (d *Config) init() {
d.val(newInt64Value(&d.Version, version), "version", "", nil, "Configuration file layout version", true, false)
d.val(newTimeValue(&d.CreatedAt, time.Now()), "created_at", "", nil, "Configuration file creation time", false, false)
d.val(newStringValue(&d.ID, uuid.New().String()), "id", "CORE_ID", nil, "ID for this instance", true, false)
d.val(newStringValue(&d.Name, haikunator.New().Haikunate()), "name", "CORE_NAME", nil, "A human readable name for this instance", false, false)
d.val(newAddressValue(&d.Address, ":8080"), "address", "CORE_ADDRESS", nil, "HTTP listening address", false, false)
d.val(newBoolValue(&d.CheckForUpdates, true), "update_check", "CORE_UPDATE_CHECK", nil, "Check for updates and send anonymized data", false, false)
d.vars.Register(value.NewInt64(&d.Version, version), "version", "", nil, "Configuration file layout version", true, false)
d.vars.Register(value.NewTime(&d.CreatedAt, time.Now()), "created_at", "", nil, "Configuration file creation time", false, false)
d.vars.Register(value.NewString(&d.ID, uuid.New().String()), "id", "CORE_ID", nil, "ID for this instance", true, false)
d.vars.Register(value.NewString(&d.Name, haikunator.New().Haikunate()), "name", "CORE_NAME", nil, "A human readable name for this instance", false, false)
d.vars.Register(value.NewAddress(&d.Address, ":8080"), "address", "CORE_ADDRESS", nil, "HTTP listening address", false, false)
d.vars.Register(value.NewBool(&d.CheckForUpdates, true), "update_check", "CORE_UPDATE_CHECK", nil, "Check for updates and send anonymized data", false, false)
// Log
d.val(newStringValue(&d.Log.Level, "info"), "log.level", "CORE_LOG_LEVEL", nil, "Loglevel: silent, error, warn, info, debug", false, false)
d.val(newStringListValue(&d.Log.Topics, []string{}, ","), "log.topics", "CORE_LOG_TOPICS", nil, "Show only selected log topics", false, false)
d.val(newIntValue(&d.Log.MaxLines, 1000), "log.max_lines", "CORE_LOG_MAXLINES", nil, "Number of latest log lines to keep in memory", false, false)
d.vars.Register(value.NewString(&d.Log.Level, "info"), "log.level", "CORE_LOG_LEVEL", nil, "Loglevel: silent, error, warn, info, debug", false, false)
d.vars.Register(value.NewStringList(&d.Log.Topics, []string{}, ","), "log.topics", "CORE_LOG_TOPICS", nil, "Show only selected log topics", false, false)
d.vars.Register(value.NewInt(&d.Log.MaxLines, 1000), "log.max_lines", "CORE_LOG_MAXLINES", nil, "Number of latest log lines to keep in memory", false, false)
d.vars.Register(value.NewString(&d.Log.Target.Output, "stderr"), "log.target.output", "CORE_LOG_TARGET_OUTPUT", nil, "Where to write the logs to: stdout, stderr, file", false, false)
d.vars.Register(value.NewString(&d.Log.Target.Path, ""), "log.target.path", "CORE_LOG_TARGET_PATH", nil, "Path to log file if output is 'file'", false, false)
// DB
d.val(newMustDirValue(&d.DB.Dir, "./config"), "db.dir", "CORE_DB_DIR", nil, "Directory for holding the operational data", false, false)
d.vars.Register(value.NewMustDir(&d.DB.Dir, "./config"), "db.dir", "CORE_DB_DIR", nil, "Directory for holding the operational data", false, false)
// Host
d.val(newStringListValue(&d.Host.Name, []string{}, ","), "host.name", "CORE_HOST_NAME", nil, "Comma separated list of public host/domain names or IPs", false, false)
d.val(newBoolValue(&d.Host.Auto, true), "host.auto", "CORE_HOST_AUTO", nil, "Enable detection of public IP addresses", false, false)
d.vars.Register(value.NewStringList(&d.Host.Name, []string{}, ","), "host.name", "CORE_HOST_NAME", nil, "Comma separated list of public host/domain names or IPs", false, false)
d.vars.Register(value.NewBool(&d.Host.Auto, true), "host.auto", "CORE_HOST_AUTO", nil, "Enable detection of public IP addresses", false, false)
// API
d.val(newBoolValue(&d.API.ReadOnly, false), "api.read_only", "CORE_API_READ_ONLY", nil, "Allow only ready only access to the API", false, false)
d.val(newCIDRListValue(&d.API.Access.HTTP.Allow, []string{}, ","), "api.access.http.allow", "CORE_API_ACCESS_HTTP_ALLOW", nil, "List of IPs in CIDR notation (HTTP traffic)", false, false)
d.val(newCIDRListValue(&d.API.Access.HTTP.Block, []string{}, ","), "api.access.http.block", "CORE_API_ACCESS_HTTP_BLOCK", nil, "List of IPs in CIDR notation (HTTP traffic)", false, false)
d.val(newCIDRListValue(&d.API.Access.HTTPS.Allow, []string{}, ","), "api.access.https.allow", "CORE_API_ACCESS_HTTPS_ALLOW", nil, "List of IPs in CIDR notation (HTTPS traffic)", false, false)
d.val(newCIDRListValue(&d.API.Access.HTTPS.Block, []string{}, ","), "api.access.https.block", "CORE_API_ACCESS_HTTPS_BLOCK", nil, "List of IPs in CIDR notation (HTTPS traffic)", false, false)
d.val(newBoolValue(&d.API.Auth.Enable, false), "api.auth.enable", "CORE_API_AUTH_ENABLE", nil, "Enable authentication for all clients", false, false)
d.val(newBoolValue(&d.API.Auth.DisableLocalhost, false), "api.auth.disable_localhost", "CORE_API_AUTH_DISABLE_LOCALHOST", nil, "Disable authentication for clients from localhost", false, false)
d.val(newStringValue(&d.API.Auth.Username, ""), "api.auth.username", "CORE_API_AUTH_USERNAME", []string{"RS_USERNAME"}, "Username", false, false)
d.val(newStringValue(&d.API.Auth.Password, ""), "api.auth.password", "CORE_API_AUTH_PASSWORD", []string{"RS_PASSWORD"}, "Password", false, true)
d.vars.Register(value.NewBool(&d.API.ReadOnly, false), "api.read_only", "CORE_API_READ_ONLY", nil, "Allow only ready only access to the API", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTP.Allow, []string{}, ","), "api.access.http.allow", "CORE_API_ACCESS_HTTP_ALLOW", nil, "List of IPs in CIDR notation (HTTP traffic)", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTP.Block, []string{}, ","), "api.access.http.block", "CORE_API_ACCESS_HTTP_BLOCK", nil, "List of IPs in CIDR notation (HTTP traffic)", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTPS.Allow, []string{}, ","), "api.access.https.allow", "CORE_API_ACCESS_HTTPS_ALLOW", nil, "List of IPs in CIDR notation (HTTPS traffic)", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTPS.Block, []string{}, ","), "api.access.https.block", "CORE_API_ACCESS_HTTPS_BLOCK", nil, "List of IPs in CIDR notation (HTTPS traffic)", false, false)
d.vars.Register(value.NewBool(&d.API.Auth.Enable, false), "api.auth.enable", "CORE_API_AUTH_ENABLE", nil, "Enable authentication for all clients", false, false)
d.vars.Register(value.NewBool(&d.API.Auth.DisableLocalhost, false), "api.auth.disable_localhost", "CORE_API_AUTH_DISABLE_LOCALHOST", nil, "Disable authentication for clients from localhost", false, false)
d.vars.Register(value.NewString(&d.API.Auth.Username, ""), "api.auth.username", "CORE_API_AUTH_USERNAME", []string{"RS_USERNAME"}, "Username", false, false)
d.vars.Register(value.NewString(&d.API.Auth.Password, ""), "api.auth.password", "CORE_API_AUTH_PASSWORD", []string{"RS_PASSWORD"}, "Password", false, true)
// Auth JWT
d.val(newStringValue(&d.API.Auth.JWT.Secret, rand.String(32)), "api.auth.jwt.secret", "CORE_API_AUTH_JWT_SECRET", nil, "JWT secret; leave empty to generate a random value", false, true)
d.vars.Register(value.NewString(&d.API.Auth.JWT.Secret, rand.String(32)), "api.auth.jwt.secret", "CORE_API_AUTH_JWT_SECRET", nil, "JWT secret; leave empty to generate a random value", false, true)
// Auth Auth0
d.val(newBoolValue(&d.API.Auth.Auth0.Enable, false), "api.auth.auth0.enable", "CORE_API_AUTH_AUTH0_ENABLE", nil, "Enable Auth0", false, false)
d.val(newTenantListValue(&d.API.Auth.Auth0.Tenants, []Auth0Tenant{}, ","), "api.auth.auth0.tenants", "CORE_API_AUTH_AUTH0_TENANTS", nil, "List of Auth0 tenants", false, false)
d.vars.Register(value.NewBool(&d.API.Auth.Auth0.Enable, false), "api.auth.auth0.enable", "CORE_API_AUTH_AUTH0_ENABLE", nil, "Enable Auth0", false, false)
d.vars.Register(value.NewTenantList(&d.API.Auth.Auth0.Tenants, []value.Auth0Tenant{}, ","), "api.auth.auth0.tenants", "CORE_API_AUTH_AUTH0_TENANTS", nil, "List of Auth0 tenants", false, false)
// TLS
d.val(newAddressValue(&d.TLS.Address, ":8181"), "tls.address", "CORE_TLS_ADDRESS", nil, "HTTPS listening address", false, false)
d.val(newBoolValue(&d.TLS.Enable, false), "tls.enable", "CORE_TLS_ENABLE", nil, "Enable HTTPS", false, false)
d.val(newBoolValue(&d.TLS.Auto, false), "tls.auto", "CORE_TLS_AUTO", nil, "Enable Let's Encrypt certificate", false, false)
d.val(newFileValue(&d.TLS.CertFile, ""), "tls.cert_file", "CORE_TLS_CERTFILE", nil, "Path to certificate file in PEM format", false, false)
d.val(newFileValue(&d.TLS.KeyFile, ""), "tls.key_file", "CORE_TLS_KEYFILE", nil, "Path to key file in PEM format", false, false)
d.vars.Register(value.NewAddress(&d.TLS.Address, ":8181"), "tls.address", "CORE_TLS_ADDRESS", nil, "HTTPS listening address", false, false)
d.vars.Register(value.NewBool(&d.TLS.Enable, false), "tls.enable", "CORE_TLS_ENABLE", nil, "Enable HTTPS", false, false)
d.vars.Register(value.NewBool(&d.TLS.Auto, false), "tls.auto", "CORE_TLS_AUTO", nil, "Enable Let's Encrypt certificate", false, false)
d.vars.Register(value.NewEmail(&d.TLS.Email, "cert@datarhei.com"), "tls.email", "CORE_TLS_EMAIL", nil, "Email for Let's Encrypt registration", false, false)
d.vars.Register(value.NewFile(&d.TLS.CertFile, ""), "tls.cert_file", "CORE_TLS_CERTFILE", nil, "Path to certificate file in PEM format", false, false)
d.vars.Register(value.NewFile(&d.TLS.KeyFile, ""), "tls.key_file", "CORE_TLS_KEYFILE", nil, "Path to key file in PEM format", false, false)
// Storage
d.val(newFileValue(&d.Storage.MimeTypes, "./mime.types"), "storage.mimetypes_file", "CORE_STORAGE_MIMETYPES_FILE", []string{"CORE_MIMETYPES_FILE"}, "Path to file with mime-types", false, false)
d.vars.Register(value.NewFile(&d.Storage.MimeTypes, "./mime.types"), "storage.mimetypes_file", "CORE_STORAGE_MIMETYPES_FILE", []string{"CORE_MIMETYPES_FILE"}, "Path to file with mime-types", false, false)
// Storage (Disk)
d.val(newMustDirValue(&d.Storage.Disk.Dir, "./data"), "storage.disk.dir", "CORE_STORAGE_DISK_DIR", nil, "Directory on disk, exposed on /", false, false)
d.val(newInt64Value(&d.Storage.Disk.Size, 0), "storage.disk.max_size_mbytes", "CORE_STORAGE_DISK_MAXSIZEMBYTES", nil, "Max. allowed megabytes for storage.disk.dir, 0 for unlimited", false, false)
d.val(newBoolValue(&d.Storage.Disk.Cache.Enable, true), "storage.disk.cache.enable", "CORE_STORAGE_DISK_CACHE_ENABLE", nil, "Enable cache for /", false, false)
d.val(newUint64Value(&d.Storage.Disk.Cache.Size, 0), "storage.disk.cache.max_size_mbytes", "CORE_STORAGE_DISK_CACHE_MAXSIZEMBYTES", nil, "Max. allowed cache size, 0 for unlimited", false, false)
d.val(newInt64Value(&d.Storage.Disk.Cache.TTL, 300), "storage.disk.cache.ttl_seconds", "CORE_STORAGE_DISK_CACHE_TTLSECONDS", nil, "Seconds to keep files in cache", false, false)
d.val(newUint64Value(&d.Storage.Disk.Cache.FileSize, 1), "storage.disk.cache.max_file_size_mbytes", "CORE_STORAGE_DISK_CACHE_MAXFILESIZEMBYTES", nil, "Max. file size to put in cache", false, false)
d.val(newStringListValue(&d.Storage.Disk.Cache.Types, []string{}, " "), "storage.disk.cache.types", "CORE_STORAGE_DISK_CACHE_TYPES", nil, "File extensions to cache, empty for all", false, false)
d.vars.Register(value.NewMustDir(&d.Storage.Disk.Dir, "./data"), "storage.disk.dir", "CORE_STORAGE_DISK_DIR", nil, "Directory on disk, exposed on /", false, false)
d.vars.Register(value.NewInt64(&d.Storage.Disk.Size, 0), "storage.disk.max_size_mbytes", "CORE_STORAGE_DISK_MAXSIZEMBYTES", nil, "Max. allowed megabytes for storage.disk.dir, 0 for unlimited", false, false)
d.vars.Register(value.NewBool(&d.Storage.Disk.Cache.Enable, true), "storage.disk.cache.enable", "CORE_STORAGE_DISK_CACHE_ENABLE", nil, "Enable cache for /", false, false)
d.vars.Register(value.NewUint64(&d.Storage.Disk.Cache.Size, 0), "storage.disk.cache.max_size_mbytes", "CORE_STORAGE_DISK_CACHE_MAXSIZEMBYTES", nil, "Max. allowed cache size, 0 for unlimited", false, false)
d.vars.Register(value.NewInt64(&d.Storage.Disk.Cache.TTL, 300), "storage.disk.cache.ttl_seconds", "CORE_STORAGE_DISK_CACHE_TTLSECONDS", nil, "Seconds to keep files in cache", false, false)
d.vars.Register(value.NewUint64(&d.Storage.Disk.Cache.FileSize, 1), "storage.disk.cache.max_file_size_mbytes", "CORE_STORAGE_DISK_CACHE_MAXFILESIZEMBYTES", nil, "Max. file size to put in cache", false, false)
d.vars.Register(value.NewStringList(&d.Storage.Disk.Cache.Types.Allow, []string{}, " "), "storage.disk.cache.type.allow", "CORE_STORAGE_DISK_CACHE_TYPES_ALLOW", []string{"CORE_STORAGE_DISK_CACHE_TYPES"}, "File extensions to cache, empty for all", false, false)
d.vars.Register(value.NewStringList(&d.Storage.Disk.Cache.Types.Block, []string{".m3u8", ".mpd"}, " "), "storage.disk.cache.type.block", "CORE_STORAGE_DISK_CACHE_TYPES_BLOCK", nil, "File extensions not to cache, empty for none", false, false)
// Storage (Memory)
d.val(newBoolValue(&d.Storage.Memory.Auth.Enable, true), "storage.memory.auth.enable", "CORE_STORAGE_MEMORY_AUTH_ENABLE", nil, "Enable basic auth for PUT, POST, and DELETE on /memfs", false, false)
d.val(newStringValue(&d.Storage.Memory.Auth.Username, "admin"), "storage.memory.auth.username", "CORE_STORAGE_MEMORY_AUTH_USERNAME", nil, "Username for Basic-Auth of /memfs", false, false)
d.val(newStringValue(&d.Storage.Memory.Auth.Password, rand.StringAlphanumeric(18)), "storage.memory.auth.password", "CORE_STORAGE_MEMORY_AUTH_PASSWORD", nil, "Password for Basic-Auth of /memfs", false, true)
d.val(newInt64Value(&d.Storage.Memory.Size, 0), "storage.memory.max_size_mbytes", "CORE_STORAGE_MEMORY_MAXSIZEMBYTES", nil, "Max. allowed megabytes for /memfs, 0 for unlimited", false, false)
d.val(newBoolValue(&d.Storage.Memory.Purge, false), "storage.memory.purge", "CORE_STORAGE_MEMORY_PURGE", nil, "Automatically remove the oldest files if /memfs is full", false, false)
d.vars.Register(value.NewBool(&d.Storage.Memory.Auth.Enable, true), "storage.memory.auth.enable", "CORE_STORAGE_MEMORY_AUTH_ENABLE", nil, "Enable basic auth for PUT, POST, and DELETE on /memfs", false, false)
d.vars.Register(value.NewString(&d.Storage.Memory.Auth.Username, "admin"), "storage.memory.auth.username", "CORE_STORAGE_MEMORY_AUTH_USERNAME", nil, "Username for Basic-Auth of /memfs", false, false)
d.vars.Register(value.NewString(&d.Storage.Memory.Auth.Password, rand.StringAlphanumeric(18)), "storage.memory.auth.password", "CORE_STORAGE_MEMORY_AUTH_PASSWORD", nil, "Password for Basic-Auth of /memfs", false, true)
d.vars.Register(value.NewInt64(&d.Storage.Memory.Size, 0), "storage.memory.max_size_mbytes", "CORE_STORAGE_MEMORY_MAXSIZEMBYTES", nil, "Max. allowed megabytes for /memfs, 0 for unlimited", false, false)
d.vars.Register(value.NewBool(&d.Storage.Memory.Purge, false), "storage.memory.purge", "CORE_STORAGE_MEMORY_PURGE", nil, "Automatically remove the oldest files if /memfs is full", false, false)
// Storage (CORS)
d.val(newCORSOriginsValue(&d.Storage.CORS.Origins, []string{"*"}, ","), "storage.cors.origins", "CORE_STORAGE_CORS_ORIGINS", nil, "Allowed CORS origins for /memfs and /data", false, false)
d.vars.Register(value.NewCORSOrigins(&d.Storage.CORS.Origins, []string{"*"}, ","), "storage.cors.origins", "CORE_STORAGE_CORS_ORIGINS", nil, "Allowed CORS origins for /memfs and /data", false, false)
// RTMP
d.val(newBoolValue(&d.RTMP.Enable, false), "rtmp.enable", "CORE_RTMP_ENABLE", nil, "Enable RTMP server", false, false)
d.val(newBoolValue(&d.RTMP.EnableTLS, false), "rtmp.enable_tls", "CORE_RTMP_ENABLE_TLS", nil, "Enable RTMPS server instead of RTMP", false, false)
d.val(newAddressValue(&d.RTMP.Address, ":1935"), "rtmp.address", "CORE_RTMP_ADDRESS", nil, "RTMP server listen address", false, false)
d.val(newAddressValue(&d.RTMP.AddressTLS, ":1936"), "rtmp.address_tls", "CORE_RTMP_ADDRESS_TLS", nil, "RTMPS server listen address", false, false)
d.val(newAbsolutePathValue(&d.RTMP.App, "/"), "rtmp.app", "CORE_RTMP_APP", nil, "RTMP app for publishing", false, false)
d.val(newStringValue(&d.RTMP.Token, ""), "rtmp.token", "CORE_RTMP_TOKEN", nil, "RTMP token for publishing and playing", false, true)
d.vars.Register(value.NewBool(&d.RTMP.Enable, false), "rtmp.enable", "CORE_RTMP_ENABLE", nil, "Enable RTMP server", false, false)
d.vars.Register(value.NewBool(&d.RTMP.EnableTLS, false), "rtmp.enable_tls", "CORE_RTMP_ENABLE_TLS", nil, "Enable RTMPS server instead of RTMP", false, false)
d.vars.Register(value.NewAddress(&d.RTMP.Address, ":1935"), "rtmp.address", "CORE_RTMP_ADDRESS", nil, "RTMP server listen address", false, false)
d.vars.Register(value.NewAddress(&d.RTMP.AddressTLS, ":1936"), "rtmp.address_tls", "CORE_RTMP_ADDRESS_TLS", nil, "RTMPS server listen address", false, false)
d.vars.Register(value.NewAbsolutePath(&d.RTMP.App, "/"), "rtmp.app", "CORE_RTMP_APP", nil, "RTMP app for publishing", false, false)
d.vars.Register(value.NewString(&d.RTMP.Token, ""), "rtmp.token", "CORE_RTMP_TOKEN", nil, "RTMP token for publishing and playing", false, true)
// SRT
d.val(newBoolValue(&d.SRT.Enable, false), "srt.enable", "CORE_SRT_ENABLE", nil, "Enable SRT server", false, false)
d.val(newAddressValue(&d.SRT.Address, ":6000"), "srt.address", "CORE_SRT_ADDRESS", nil, "SRT server listen address", false, false)
d.val(newStringValue(&d.SRT.Passphrase, ""), "srt.passphrase", "CORE_SRT_PASSPHRASE", nil, "SRT encryption passphrase", false, true)
d.val(newStringValue(&d.SRT.Token, ""), "srt.token", "CORE_SRT_TOKEN", nil, "SRT token for publishing and playing", false, true)
d.val(newBoolValue(&d.SRT.Log.Enable, false), "srt.log.enable", "CORE_SRT_LOG_ENABLE", nil, "Enable SRT server logging", false, false)
d.val(newStringListValue(&d.SRT.Log.Topics, []string{}, ","), "srt.log.topics", "CORE_SRT_LOG_TOPICS", nil, "List of topics to log", false, false)
d.vars.Register(value.NewBool(&d.SRT.Enable, false), "srt.enable", "CORE_SRT_ENABLE", nil, "Enable SRT server", false, false)
d.vars.Register(value.NewAddress(&d.SRT.Address, ":6000"), "srt.address", "CORE_SRT_ADDRESS", nil, "SRT server listen address", false, false)
d.vars.Register(value.NewString(&d.SRT.Passphrase, ""), "srt.passphrase", "CORE_SRT_PASSPHRASE", nil, "SRT encryption passphrase", false, true)
d.vars.Register(value.NewString(&d.SRT.Token, ""), "srt.token", "CORE_SRT_TOKEN", nil, "SRT token for publishing and playing", false, true)
d.vars.Register(value.NewBool(&d.SRT.Log.Enable, false), "srt.log.enable", "CORE_SRT_LOG_ENABLE", nil, "Enable SRT server logging", false, false)
d.vars.Register(value.NewStringList(&d.SRT.Log.Topics, []string{}, ","), "srt.log.topics", "CORE_SRT_LOG_TOPICS", nil, "List of topics to log", false, false)
// FFmpeg
d.val(newExecValue(&d.FFmpeg.Binary, "ffmpeg"), "ffmpeg.binary", "CORE_FFMPEG_BINARY", nil, "Path to ffmpeg binary", true, false)
d.val(newInt64Value(&d.FFmpeg.MaxProcesses, 0), "ffmpeg.max_processes", "CORE_FFMPEG_MAXPROCESSES", nil, "Max. allowed simultaneously running ffmpeg instances, 0 for unlimited", false, false)
d.val(newStringListValue(&d.FFmpeg.Access.Input.Allow, []string{}, " "), "ffmpeg.access.input.allow", "CORE_FFMPEG_ACCESS_INPUT_ALLOW", nil, "List of allowed expressions to match against the input addresses", false, false)
d.val(newStringListValue(&d.FFmpeg.Access.Input.Block, []string{}, " "), "ffmpeg.access.input.block", "CORE_FFMPEG_ACCESS_INPUT_BLOCK", nil, "List of blocked expressions to match against the input addresses", false, false)
d.val(newStringListValue(&d.FFmpeg.Access.Output.Allow, []string{}, " "), "ffmpeg.access.output.allow", "CORE_FFMPEG_ACCESS_OUTPUT_ALLOW", nil, "List of allowed expressions to match against the output addresses", false, false)
d.val(newStringListValue(&d.FFmpeg.Access.Output.Block, []string{}, " "), "ffmpeg.access.output.block", "CORE_FFMPEG_ACCESS_OUTPUT_BLOCK", nil, "List of blocked expressions to match against the output addresses", false, false)
d.val(newIntValue(&d.FFmpeg.Log.MaxLines, 50), "ffmpeg.log.max_lines", "CORE_FFMPEG_LOG_MAXLINES", nil, "Number of latest log lines to keep for each process", false, false)
d.val(newIntValue(&d.FFmpeg.Log.MaxHistory, 3), "ffmpeg.log.max_history", "CORE_FFMPEG_LOG_MAXHISTORY", nil, "Number of latest logs to keep for each process", false, false)
d.vars.Register(value.NewExec(&d.FFmpeg.Binary, "ffmpeg"), "ffmpeg.binary", "CORE_FFMPEG_BINARY", nil, "Path to ffmpeg binary", true, false)
d.vars.Register(value.NewInt64(&d.FFmpeg.MaxProcesses, 0), "ffmpeg.max_processes", "CORE_FFMPEG_MAXPROCESSES", nil, "Max. allowed simultaneously running ffmpeg instances, 0 for unlimited", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Input.Allow, []string{}, " "), "ffmpeg.access.input.allow", "CORE_FFMPEG_ACCESS_INPUT_ALLOW", nil, "List of allowed expressions to match against the input addresses", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Input.Block, []string{}, " "), "ffmpeg.access.input.block", "CORE_FFMPEG_ACCESS_INPUT_BLOCK", nil, "List of blocked expressions to match against the input addresses", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Output.Allow, []string{}, " "), "ffmpeg.access.output.allow", "CORE_FFMPEG_ACCESS_OUTPUT_ALLOW", nil, "List of allowed expressions to match against the output addresses", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Output.Block, []string{}, " "), "ffmpeg.access.output.block", "CORE_FFMPEG_ACCESS_OUTPUT_BLOCK", nil, "List of blocked expressions to match against the output addresses", false, false)
d.vars.Register(value.NewInt(&d.FFmpeg.Log.MaxLines, 50), "ffmpeg.log.max_lines", "CORE_FFMPEG_LOG_MAXLINES", nil, "Number of latest log lines to keep for each process", false, false)
d.vars.Register(value.NewInt(&d.FFmpeg.Log.MaxHistory, 3), "ffmpeg.log.max_history", "CORE_FFMPEG_LOG_MAXHISTORY", nil, "Number of latest logs to keep for each process", false, false)
// Playout
d.val(newBoolValue(&d.Playout.Enable, false), "playout.enable", "CORE_PLAYOUT_ENABLE", nil, "Enable playout proxy where available", false, false)
d.val(newPortValue(&d.Playout.MinPort, 0), "playout.min_port", "CORE_PLAYOUT_MINPORT", nil, "Min. playout server port", false, false)
d.val(newPortValue(&d.Playout.MaxPort, 0), "playout.max_port", "CORE_PLAYOUT_MAXPORT", nil, "Max. playout server port", false, false)
d.vars.Register(value.NewBool(&d.Playout.Enable, false), "playout.enable", "CORE_PLAYOUT_ENABLE", nil, "Enable playout proxy where available", false, false)
d.vars.Register(value.NewPort(&d.Playout.MinPort, 0), "playout.min_port", "CORE_PLAYOUT_MINPORT", nil, "Min. playout server port", false, false)
d.vars.Register(value.NewPort(&d.Playout.MaxPort, 0), "playout.max_port", "CORE_PLAYOUT_MAXPORT", nil, "Max. playout server port", false, false)
// Debug
d.val(newBoolValue(&d.Debug.Profiling, false), "debug.profiling", "CORE_DEBUG_PROFILING", nil, "Enable profiling endpoint on /profiling", false, false)
d.val(newIntValue(&d.Debug.ForceGC, 0), "debug.force_gc", "CORE_DEBUG_FORCEGC", nil, "Number of seconds between forcing GC to return memory to the OS", false, false)
d.vars.Register(value.NewBool(&d.Debug.Profiling, false), "debug.profiling", "CORE_DEBUG_PROFILING", nil, "Enable profiling endpoint on /profiling", false, false)
d.vars.Register(value.NewInt(&d.Debug.ForceGC, 0), "debug.force_gc", "CORE_DEBUG_FORCEGC", nil, "Number of seconds between forcing GC to return memory to the OS", false, false)
d.vars.Register(value.NewInt64(&d.Debug.MemoryLimit, 0), "debug.memory_limit_mbytes", "CORE_DEBUG_MEMORY_LIMIT_MBYTES", nil, "Impose a soft memory limit for the core, in megabytes", false, false)
// Metrics
d.val(newBoolValue(&d.Metrics.Enable, false), "metrics.enable", "CORE_METRICS_ENABLE", nil, "Enable collecting historic metrics data", false, false)
d.val(newBoolValue(&d.Metrics.EnablePrometheus, false), "metrics.enable_prometheus", "CORE_METRICS_ENABLE_PROMETHEUS", nil, "Enable prometheus endpoint /metrics", false, false)
d.val(newInt64Value(&d.Metrics.Range, 300), "metrics.range_seconds", "CORE_METRICS_RANGE_SECONDS", nil, "Seconds to keep history data", false, false)
d.val(newInt64Value(&d.Metrics.Interval, 2), "metrics.interval_seconds", "CORE_METRICS_INTERVAL_SECONDS", nil, "Interval for collecting metrics", false, false)
d.vars.Register(value.NewBool(&d.Metrics.Enable, false), "metrics.enable", "CORE_METRICS_ENABLE", nil, "Enable collecting historic metrics data", false, false)
d.vars.Register(value.NewBool(&d.Metrics.EnablePrometheus, false), "metrics.enable_prometheus", "CORE_METRICS_ENABLE_PROMETHEUS", nil, "Enable prometheus endpoint /metrics", false, false)
d.vars.Register(value.NewInt64(&d.Metrics.Range, 300), "metrics.range_seconds", "CORE_METRICS_RANGE_SECONDS", nil, "Seconds to keep history data", false, false)
d.vars.Register(value.NewInt64(&d.Metrics.Interval, 2), "metrics.interval_seconds", "CORE_METRICS_INTERVAL_SECONDS", nil, "Interval for collecting metrics", false, false)
// Sessions
d.val(newBoolValue(&d.Sessions.Enable, true), "sessions.enable", "CORE_SESSIONS_ENABLE", nil, "Enable collecting HLS session stats for /memfs", false, false)
d.val(newCIDRListValue(&d.Sessions.IPIgnoreList, []string{"127.0.0.1/32", "::1/128"}, ","), "sessions.ip_ignorelist", "CORE_SESSIONS_IP_IGNORELIST", nil, "List of IP ranges in CIDR notation to ignore", false, false)
d.val(newIntValue(&d.Sessions.SessionTimeout, 30), "sessions.session_timeout_sec", "CORE_SESSIONS_SESSION_TIMEOUT_SEC", nil, "Timeout for an idle session", false, false)
d.val(newBoolValue(&d.Sessions.Persist, false), "sessions.persist", "CORE_SESSIONS_PERSIST", nil, "Whether to persist session history. Will be stored as sessions.json in db.dir", false, false)
d.val(newIntValue(&d.Sessions.PersistInterval, 300), "sessions.persist_interval_sec", "CORE_SESSIONS_PERSIST_INTERVAL_SEC", nil, "Interval in seconds in which to persist the current session history", false, false)
d.val(newUint64Value(&d.Sessions.MaxBitrate, 0), "sessions.max_bitrate_mbit", "CORE_SESSIONS_MAXBITRATE_MBIT", nil, "Max. allowed outgoing bitrate in mbit/s, 0 for unlimited", false, false)
d.val(newUint64Value(&d.Sessions.MaxSessions, 0), "sessions.max_sessions", "CORE_SESSIONS_MAXSESSIONS", nil, "Max. allowed number of simultaneous sessions, 0 for unlimited", false, false)
d.vars.Register(value.NewBool(&d.Sessions.Enable, true), "sessions.enable", "CORE_SESSIONS_ENABLE", nil, "Enable collecting HLS session stats for /memfs", false, false)
d.vars.Register(value.NewCIDRList(&d.Sessions.IPIgnoreList, []string{"127.0.0.1/32", "::1/128"}, ","), "sessions.ip_ignorelist", "CORE_SESSIONS_IP_IGNORELIST", nil, "List of IP ranges in CIDR notation to ignore", false, false)
d.vars.Register(value.NewInt(&d.Sessions.SessionTimeout, 30), "sessions.session_timeout_sec", "CORE_SESSIONS_SESSION_TIMEOUT_SEC", nil, "Timeout for an idle session", false, false)
d.vars.Register(value.NewBool(&d.Sessions.Persist, false), "sessions.persist", "CORE_SESSIONS_PERSIST", nil, "Whether to persist session history. Will be stored as sessions.json in db.dir", false, false)
d.vars.Register(value.NewInt(&d.Sessions.PersistInterval, 300), "sessions.persist_interval_sec", "CORE_SESSIONS_PERSIST_INTERVAL_SEC", nil, "Interval in seconds in which to persist the current session history", false, false)
d.vars.Register(value.NewUint64(&d.Sessions.MaxBitrate, 0), "sessions.max_bitrate_mbit", "CORE_SESSIONS_MAXBITRATE_MBIT", nil, "Max. allowed outgoing bitrate in mbit/s, 0 for unlimited", false, false)
d.vars.Register(value.NewUint64(&d.Sessions.MaxSessions, 0), "sessions.max_sessions", "CORE_SESSIONS_MAXSESSIONS", nil, "Max. allowed number of simultaneous sessions, 0 for unlimited", false, false)
// Service
d.val(newBoolValue(&d.Service.Enable, false), "service.enable", "CORE_SERVICE_ENABLE", nil, "Enable connecting to the Restreamer Service", false, false)
d.val(newStringValue(&d.Service.Token, ""), "service.token", "CORE_SERVICE_TOKEN", nil, "Restreamer Service account token", false, true)
d.val(newURLValue(&d.Service.URL, "https://service.datarhei.com"), "service.url", "CORE_SERVICE_URL", nil, "URL of the Restreamer Service", false, false)
d.vars.Register(value.NewBool(&d.Service.Enable, false), "service.enable", "CORE_SERVICE_ENABLE", nil, "Enable connecting to the Restreamer Service", false, false)
d.vars.Register(value.NewString(&d.Service.Token, ""), "service.token", "CORE_SERVICE_TOKEN", nil, "Restreamer Service account token", false, true)
d.vars.Register(value.NewURL(&d.Service.URL, "https://service.datarhei.com"), "service.url", "CORE_SERVICE_URL", nil, "URL of the Restreamer Service", false, false)
// Router
d.val(newStringListValue(&d.Router.BlockedPrefixes, []string{"/api"}, ","), "router.blocked_prefixes", "CORE_ROUTER_BLOCKED_PREFIXES", nil, "List of path prefixes that can't be routed", false, false)
d.val(newStringMapStringValue(&d.Router.Routes, nil), "router.routes", "CORE_ROUTER_ROUTES", nil, "List of route mappings", false, false)
d.val(newDirValue(&d.Router.UIPath, ""), "router.ui_path", "CORE_ROUTER_UI_PATH", nil, "Path to a directory holding UI files mounted as /ui", false, false)
d.vars.Register(value.NewStringList(&d.Router.BlockedPrefixes, []string{"/api"}, ","), "router.blocked_prefixes", "CORE_ROUTER_BLOCKED_PREFIXES", nil, "List of path prefixes that can't be routed", false, false)
d.vars.Register(value.NewStringMapString(&d.Router.Routes, nil), "router.routes", "CORE_ROUTER_ROUTES", nil, "List of route mappings", false, false)
d.vars.Register(value.NewDir(&d.Router.UIPath, ""), "router.ui_path", "CORE_ROUTER_UI_PATH", nil, "Path to a directory holding UI files mounted as /ui", false, false)
}
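For orientation, here is a minimal stand-alone sketch of how an environment override reaches a registered value. The value interface below is a stripped-down stand-in for the repo's value types, not the real interface.

package main

import (
	"fmt"
	"os"
)

// value is a minimal stand-in: anything that can be set from a string
// and printed back. The real value types carry more behavior.
type value interface {
	Set(string) error
	String() string
}

type stringValue struct{ p *string }

func (s stringValue) Set(v string) error { *s.p = v; return nil }
func (s stringValue) String() string     { return *s.p }

func main() {
	username := "admin"
	var v value = stringValue{&username}

	// Simulate CORE_API_AUTH_USERNAME being set in the environment.
	os.Setenv("CORE_API_AUTH_USERNAME", "operator")

	if envval, ok := os.LookupEnv("CORE_API_AUTH_USERNAME"); ok {
		v.Set(envval) // Merge() does this for every registered variable
	}

	fmt.Println(v.String()) // operator
}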
func (d *Config) val(val value, name, envName string, envAltNames []string, description string, required, disguise bool) {
d.vars = append(d.vars, &variable{
value: val,
defVal: val.String(),
name: name,
envName: envName,
envAltNames: envAltNames,
description: description,
required: required,
disguise: disguise,
})
}
func (d *Config) log(level string, v *variable, format string, args ...interface{}) {
variable := Variable{
Value: v.value.String(),
Name: v.name,
EnvName: v.envName,
Description: v.description,
Merged: v.merged,
}
if v.disguise {
variable.Value = "***"
}
l := message{
message: fmt.Sprintf(format, args...),
variable: variable,
level: level,
}
d.logs = append(d.logs, l)
}
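A tiny sketch of the masking rule applied above, assuming only that disguised values (passwords, tokens, JWT secrets) must never reach the log verbatim:

package main

import "fmt"

// mask mirrors the disguise branch in log(): secret values are
// replaced with a fixed placeholder before logging.
func mask(val string, disguise bool) string {
	if disguise {
		return "***"
	}
	return val
}

func main() {
	fmt.Println(mask("s3cret", true))  // ***
	fmt.Println(mask(":8080", false))  // :8080
}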
// Merge merges the values of the known environment variables into the configuration
func (d *Config) Merge() {
for _, v := range d.vars {
if len(v.envName) == 0 {
continue
}
var envval string
var ok bool
envval, ok = os.LookupEnv(v.envName)
if !ok {
foundAltName := false
for _, envName := range v.envAltNames {
envval, ok = os.LookupEnv(envName)
if ok {
foundAltName = true
d.log("warn", v, "deprecated name, please use %s", v.envName)
break
}
}
if !foundAltName {
continue
}
}
err := v.value.Set(envval)
if err != nil {
d.log("error", v, "%s", err.Error())
}
v.merged = true
}
}
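The alternative-name fallback can be isolated into a helper. This is a stdlib-only sketch of the same lookup order, with lookupWithAlts as a hypothetical name:

package main

import (
	"fmt"
	"os"
)

// lookupWithAlts mirrors Merge()'s fallback: the primary name wins,
// otherwise each deprecated alternative is tried in order. The matched
// name is returned so a deprecation warning can name the alias.
func lookupWithAlts(name string, alts []string) (string, string, bool) {
	if v, ok := os.LookupEnv(name); ok {
		return v, name, true
	}
	for _, alt := range alts {
		if v, ok := os.LookupEnv(alt); ok {
			return v, alt, true
		}
	}
	return "", "", false
}

func main() {
	os.Setenv("RS_USERNAME", "legacy-admin") // deprecated alias from above
	v, matched, _ := lookupWithAlts("CORE_API_AUTH_USERNAME", []string{"RS_USERNAME"})
	fmt.Printf("%s (via %s)\n", v, matched) // legacy-admin (via RS_USERNAME)
}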
// Migrate will migrate some settings, depending on the version it finds. Migrations
// only go upwards, i.e. from a lower version to a higher version.
func (d *Config) Migrate() error {
if d.Version == 1 {
if !strings.HasPrefix(d.RTMP.App, "/") {
d.RTMP.App = "/" + d.RTMP.App
}
if d.RTMP.EnableTLS {
d.RTMP.Enable = true
d.RTMP.AddressTLS = d.RTMP.Address
host, sport, err := net.SplitHostPort(d.RTMP.Address)
if err != nil {
return fmt.Errorf("migrating rtmp.address to rtmp.address_tls failed: %w", err)
}
port, err := strconv.Atoi(sport)
if err != nil {
return fmt.Errorf("migrating rtmp.address to rtmp.address_tls failed: %w", err)
}
d.RTMP.Address = net.JoinHostPort(host, strconv.Itoa(port-1))
}
d.Version = 2
}
return nil
}
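The port arithmetic of the v1-to-v2 RTMP migration, extracted into a runnable stdlib sketch (derivePlainAddress is a hypothetical helper name):

package main

import (
	"fmt"
	"net"
	"strconv"
)

// The v1 -> v2 migration keeps the configured address as rtmp.address_tls
// and derives a plain-RTMP address one port below it, e.g. ":1936" -> ":1935".
func derivePlainAddress(addrTLS string) (string, error) {
	host, sport, err := net.SplitHostPort(addrTLS)
	if err != nil {
		return "", err
	}
	port, err := strconv.Atoi(sport)
	if err != nil {
		return "", err
	}
	return net.JoinHostPort(host, strconv.Itoa(port-1)), nil
}

func main() {
	addr, _ := derivePlainAddress(":1936")
	fmt.Println(addr) // :1935
}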
// Validate validates the current state of the Config for completeness and sanity. Errors are
// written to the log. Use resetLogs to reset the logs prior to validation.
func (d *Config) Validate(resetLogs bool) {
if resetLogs {
d.logs = nil
d.vars.ResetLogs()
}
if d.Version != version {
d.log("error", d.findVariable("version"), "unknown configuration layout version (found version %d, expecting version %d)", d.Version, version)
d.vars.Log("error", "version", "unknown configuration layout version (found version %d, expecting version %d)", d.Version, version)
return
}
for _, v := range d.vars {
d.log("info", v, "%s", "")
err := v.value.Validate()
if err != nil {
d.log("error", v, "%s", err.Error())
}
if v.required && v.value.IsEmpty() {
d.log("error", v, "a value is required")
}
}
d.vars.Validate()
// Individual sanity checks
// If HTTP Auth is enabled, check that the username and password are set
if d.API.Auth.Enable {
if len(d.API.Auth.Username) == 0 || len(d.API.Auth.Password) == 0 {
d.log("error", d.findVariable("api.auth.enable"), "api.auth.username and api.auth.password must be set")
d.vars.Log("error", "api.auth.enable", "api.auth.username and api.auth.password must be set")
}
}
// If Auth0 is enabled, check that domain, audience, and clientid are set
if d.API.Auth.Auth0.Enable {
if len(d.API.Auth.Auth0.Tenants) == 0 {
d.log("error", d.findVariable("api.auth.auth0.enable"), "at least one tenants must be set")
d.vars.Log("error", "api.auth.auth0.enable", "at least one tenants must be set")
}
for i, t := range d.API.Auth.Auth0.Tenants {
if len(t.Domain) == 0 || len(t.Audience) == 0 || len(t.ClientID) == 0 {
d.log("error", d.findVariable("api.auth.auth0.tenants"), "domain, audience, and clientid must be set (tenant %d)", i)
d.vars.Log("error", "api.auth.auth0.tenants", "domain, audience, and clientid must be set (tenant %d)", i)
}
}
}
@@ -564,14 +302,14 @@ func (d *Config) Validate(resetLogs bool) {
// If TLS is enabled and Let's Encrypt is disabled, require certfile and keyfile
if d.TLS.Enable && !d.TLS.Auto {
if len(d.TLS.CertFile) == 0 || len(d.TLS.KeyFile) == 0 {
d.log("error", d.findVariable("tls.enable"), "tls.certfile and tls.keyfile must be set")
d.vars.Log("error", "tls.enable", "tls.certfile and tls.keyfile must be set")
}
}
// If TLS and Let's Encrypt certificate is enabled, we require a public hostname
if d.TLS.Enable && d.TLS.Auto {
if len(d.Host.Name) == 0 {
d.log("error", d.findVariable("host.name"), "a hostname must be set in order to get an automatic TLS certificate")
d.vars.Log("error", "host.name", "a hostname must be set in order to get an automatic TLS certificate")
} else {
r := &net.Resolver{
PreferGo: true,
@@ -581,7 +319,7 @@ func (d *Config) Validate(resetLogs bool) {
for _, host := range d.Host.Name {
// Don't lookup IP addresses
if ip := net.ParseIP(host); ip != nil {
d.log("error", d.findVariable("host.name"), "only host names are allowed if automatic TLS is enabled, but found IP address: %s", host)
d.vars.Log("error", "host.name", "only host names are allowed if automatic TLS is enabled, but found IP address: %s", host)
}
// Lookup host name with a timeout
@@ -589,7 +327,7 @@ func (d *Config) Validate(resetLogs bool) {
_, err := r.LookupHost(ctx, host)
if err != nil {
d.log("error", d.findVariable("host.name"), "the host '%s' can't be resolved and will not work with automatic TLS", host)
d.vars.Log("error", "host.name", "the host '%s' can't be resolved and will not work with automatic TLS", host)
}
cancel()
@@ -597,27 +335,34 @@ func (d *Config) Validate(resetLogs bool) {
}
}
// If TLS and Let's Encrypt certificate is enabled, we require a non-empty email address
if d.TLS.Enable && d.TLS.Auto {
if len(d.TLS.Email) == 0 {
d.vars.SetDefault("tls.email")
}
}
// If TLS for RTMP is enabled, TLS must be enabled
if d.RTMP.EnableTLS {
if !d.RTMP.Enable {
d.log("error", d.findVariable("rtmp.enable"), "RTMP server must be enabled if RTMPS server is enabled")
d.vars.Log("error", "rtmp.enable", "RTMP server must be enabled if RTMPS server is enabled")
}
if !d.TLS.Enable {
d.log("error", d.findVariable("rtmp.enable_tls"), "RTMPS server can only be enabled if TLS is enabled")
d.vars.Log("error", "rtmp.enable_tls", "RTMPS server can only be enabled if TLS is enabled")
}
if len(d.RTMP.AddressTLS) == 0 {
d.log("error", d.findVariable("rtmp.address_tls"), "RTMPS server address must be set")
d.vars.Log("error", "rtmp.address_tls", "RTMPS server address must be set")
}
if d.RTMP.Enable && d.RTMP.Address == d.RTMP.AddressTLS {
d.log("error", d.findVariable("rtmp.address"), "The RTMP and RTMPS server can't listen on the same address")
d.vars.Log("error", "rtmp.address", "The RTMP and RTMPS server can't listen on the same address")
}
}
// If CORE_MEMFS_USERNAME and CORE_MEMFS_PASSWORD are set, automatically activate/deactivate Basic-Auth for memfs
if d.findVariable("storage.memory.auth.username").merged && d.findVariable("storage.memory.auth.password").merged {
if d.vars.IsMerged("storage.memory.auth.username") && d.vars.IsMerged("storage.memory.auth.password") {
d.Storage.Memory.Auth.Enable = true
if len(d.Storage.Memory.Auth.Username) == 0 && len(d.Storage.Memory.Auth.Password) == 0 {
@@ -628,121 +373,76 @@ func (d *Config) Validate(resetLogs bool) {
// If Basic-Auth for memfs is enabled, check that the username and password are set
if d.Storage.Memory.Auth.Enable {
if len(d.Storage.Memory.Auth.Username) == 0 || len(d.Storage.Memory.Auth.Password) == 0 {
d.log("error", d.findVariable("storage.memory.auth.enable"), "storage.memory.auth.username and storage.memory.auth.password must be set")
d.vars.Log("error", "storage.memory.auth.enable", "storage.memory.auth.username and storage.memory.auth.password must be set")
}
}
// If playout is enabled, check that the port range is sane
if d.Playout.Enable {
if d.Playout.MinPort >= d.Playout.MaxPort {
d.log("error", d.findVariable("playout.min_port"), "must be bigger than playout.max_port")
d.vars.Log("error", "playout.min_port", "must be bigger than playout.max_port")
}
}
// If cache is enabled, a valid TTL has to be set to a useful value
if d.Storage.Disk.Cache.Enable && d.Storage.Disk.Cache.TTL < 0 {
d.log("error", d.findVariable("storage.disk.cache.ttl_seconds"), "must be equal or greater than 0")
d.vars.Log("error", "storage.disk.cache.ttl_seconds", "must be equal or greater than 0")
}
// If the stats are enabled, the session timeout has to be set to a useful value
if d.Sessions.Enable && d.Sessions.SessionTimeout < 1 {
d.log("error", d.findVariable("stats.session_timeout_sec"), "must be equal or greater than 1")
d.vars.Log("error", "stats.session_timeout_sec", "must be equal or greater than 1")
}
// If the stats and their persistence are enabled, the persist interval has to be set to a useful value
if d.Sessions.Enable && d.Sessions.PersistInterval < 0 {
d.log("error", d.findVariable("stats.persist_interval_sec"), "must be at equal or greater than 0")
d.vars.Log("error", "stats.persist_interval_sec", "must be at equal or greater than 0")
}
// If the service is enabled, the token and endpoint have to be defined
if d.Service.Enable {
if len(d.Service.Token) == 0 {
d.log("error", d.findVariable("service.token"), "must be non-empty")
d.vars.Log("error", "service.token", "must be non-empty")
}
if len(d.Service.URL) == 0 {
d.log("error", d.findVariable("service.url"), "must be non-empty")
d.vars.Log("error", "service.url", "must be non-empty")
}
}
// If historic metrics are enabled, the timerange and interval have to be valid
if d.Metrics.Enable {
if d.Metrics.Range <= 0 {
d.log("error", d.findVariable("metrics.range"), "must be greater 0")
d.vars.Log("error", "metrics.range", "must be greater 0")
}
if d.Metrics.Interval <= 0 {
d.log("error", d.findVariable("metrics.interval"), "must be greater 0")
d.vars.Log("error", "metrics.interval", "must be greater 0")
}
if d.Metrics.Interval > d.Metrics.Range {
d.log("error", d.findVariable("metrics.interval"), "must be smaller than the range")
d.vars.Log("error", "metrics.interval", "must be smaller than the range")
}
}
}
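The host.name check for automatic TLS boils down to two tests: reject IP literals, and require that names resolve within a deadline. A stdlib sketch, with resolvable as a hypothetical name and a 3-second timeout chosen for illustration:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// resolvable mirrors the host.name validation: IP literals are rejected
// for automatic TLS, and host names must resolve within a short timeout.
func resolvable(host string) error {
	if ip := net.ParseIP(host); ip != nil {
		return fmt.Errorf("IP address not allowed with automatic TLS: %s", host)
	}
	r := &net.Resolver{PreferGo: true}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	if _, err := r.LookupHost(ctx, host); err != nil {
		return fmt.Errorf("host '%s' can't be resolved: %w", host, err)
	}
	return nil
}

func main() {
	fmt.Println(resolvable("localhost"))
	fmt.Println(resolvable("127.0.0.1"))
}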
func (d *Config) findVariable(name string) *variable {
for _, v := range d.vars {
if v.name == name {
return v
}
}
return nil
// Merge merges the values of the known environment variables into the configuration
func (d *Config) Merge() {
d.vars.Merge()
}
// Messages calls the provided callback for each log entry. The level has the values 'error', 'warn', or 'info'.
// The name is the name of the configuration value, e.g. 'api.auth.enable'. The message is the log message.
func (d *Config) Messages(logger func(level string, v Variable, message string)) {
for _, l := range d.logs {
logger(l.level, l.variable, l.message)
}
func (d *Config) Messages(logger func(level string, v vars.Variable, message string)) {
d.vars.Messages(logger)
}
// HasErrors returns whether there are some error messages in the log.
func (d *Config) HasErrors() bool {
for _, l := range d.logs {
if l.level == "error" {
return true
}
}
return false
return d.vars.HasErrors()
}
// Overrides returns a list of configuration value names that have been overridden by an environment variable.
func (d *Config) Overrides() []string {
overrides := []string{}
for _, v := range d.vars {
if v.merged {
overrides = append(overrides, v.name)
}
}
return overrides
}
func copyStringSlice(src []string) []string {
dst := make([]string, len(src))
copy(dst, src)
return dst
}
func copyStringMap(src map[string]string) map[string]string {
dst := make(map[string]string)
for k, v := range src {
dst[k] = v
}
return dst
}
func copyTenantSlice(src []Auth0Tenant) []Auth0Tenant {
dst := make([]Auth0Tenant, len(src))
copy(dst, src)
return dst
return d.vars.Overrides()
}


@@ -3,7 +3,7 @@ package config
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestConfigCopy(t *testing.T) {
@@ -12,44 +12,41 @@ func TestConfigCopy(t *testing.T) {
config1.Version = 42
config1.DB.Dir = "foo"
val1 := config1.findVariable("version")
val2 := config1.findVariable("db.dir")
val3 := config1.findVariable("host.name")
val1, _ := config1.Get("version")
val2, _ := config1.Get("db.dir")
val3, _ := config1.Get("host.name")
assert.Equal(t, "42", val1.value.String())
assert.Equal(t, nil, val1.value.Validate())
assert.Equal(t, false, val1.value.IsEmpty())
require.Equal(t, "42", val1)
require.Equal(t, "foo", val2)
require.Equal(t, "(empty)", val3)
assert.Equal(t, "foo", val2.value.String())
assert.Equal(t, "(empty)", val3.value.String())
config1.Set("host.name", "foo.com")
val3, _ = config1.Get("host.name")
require.Equal(t, "foo.com", val3)
val3.value.Set("foo.com")
config2 := config1.Clone()
assert.Equal(t, "foo.com", val3.value.String())
require.Equal(t, int64(42), config2.Version)
require.Equal(t, "foo", config2.DB.Dir)
require.Equal(t, []string{"foo.com"}, config2.Host.Name)
config2 := NewConfigFrom(config1)
config1.Set("version", "77")
assert.Equal(t, int64(42), config2.Version)
assert.Equal(t, "foo", config2.DB.Dir)
assert.Equal(t, []string{"foo.com"}, config2.Host.Name)
require.Equal(t, int64(77), config1.Version)
require.Equal(t, int64(42), config2.Version)
val1.value.Set("77")
config1.Set("db.dir", "bar")
assert.Equal(t, int64(77), config1.Version)
assert.Equal(t, int64(42), config2.Version)
val2.value.Set("bar")
assert.Equal(t, "bar", config1.DB.Dir)
assert.Equal(t, "foo", config2.DB.Dir)
require.Equal(t, "bar", config1.DB.Dir)
require.Equal(t, "foo", config2.DB.Dir)
config2.DB.Dir = "baz"
assert.Equal(t, "bar", config1.DB.Dir)
assert.Equal(t, "baz", config2.DB.Dir)
require.Equal(t, "bar", config1.DB.Dir)
require.Equal(t, "baz", config2.DB.Dir)
config1.Host.Name[0] = "bar.com"
assert.Equal(t, []string{"bar.com"}, config1.Host.Name)
assert.Equal(t, []string{"foo.com"}, config2.Host.Name)
require.Equal(t, []string{"bar.com"}, config1.Host.Name)
require.Equal(t, []string{"foo.com"}, config2.Host.Name)
}

config/copy/copy.go Normal file

@@ -0,0 +1,30 @@
package copy
import "github.com/datarhei/core/v16/config/value"
func StringMap(src map[string]string) map[string]string {
dst := make(map[string]string)
for k, v := range src {
dst[k] = v
}
return dst
}
func TenantSlice(src []value.Auth0Tenant) []value.Auth0Tenant {
dst := Slice(src)
for i, t := range src {
dst[i].Users = Slice(t.Users)
}
return dst
}
func Slice[T any](src []T) []T {
dst := make([]T, len(src))
copy(dst, src)
return dst
}
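One caveat worth spelling out: Slice copies only the top level, so nested slices still alias the source, which is exactly why TenantSlice re-copies each tenant's Users. A quick demonstration:

package main

import "fmt"

type tenant struct {
	Domain string
	Users  []string
}

// Slice copies the outer slice only; element fields that are themselves
// slices keep pointing at the same backing arrays.
func Slice[T any](src []T) []T {
	dst := make([]T, len(src))
	copy(dst, src)
	return dst
}

func main() {
	src := []tenant{{Domain: "a.example", Users: []string{"u1"}}}
	shallow := Slice(src)
	shallow[0].Users[0] = "changed"
	fmt.Println(src[0].Users[0]) // "changed": the inner slice was shared
}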

config/data.go Normal file

@@ -0,0 +1,340 @@
package config
import (
"time"
"github.com/datarhei/core/v16/config/copy"
v2 "github.com/datarhei/core/v16/config/v2"
"github.com/datarhei/core/v16/config/value"
)
// Data is the actual configuration data for the app
type Data struct {
CreatedAt time.Time `json:"created_at"`
LoadedAt time.Time `json:"-"`
UpdatedAt time.Time `json:"-"`
Version int64 `json:"version" jsonschema:"minimum=3,maximum=3"`
ID string `json:"id"`
Name string `json:"name"`
Address string `json:"address"`
CheckForUpdates bool `json:"update_check"`
Log struct {
Level string `json:"level" enums:"debug,info,warn,error,silent" jsonschema:"enum=debug,enum=info,enum=warn,enum=error,enum=silent"`
Topics []string `json:"topics"`
MaxLines int `json:"max_lines"`
Target struct {
Output string `json:"name"`
Path string `json:"path"`
} `json:"target"` // discard, stderr, stdout, file:/path/to/file.log
} `json:"log"`
DB struct {
Dir string `json:"dir"`
} `json:"db"`
Host struct {
Name []string `json:"name"`
Auto bool `json:"auto"`
} `json:"host"`
API struct {
ReadOnly bool `json:"read_only"`
Access struct {
HTTP struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"http"`
HTTPS struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"https"`
} `json:"access"`
Auth struct {
Enable bool `json:"enable"`
DisableLocalhost bool `json:"disable_localhost"`
Username string `json:"username"`
Password string `json:"password"`
JWT struct {
Secret string `json:"secret"`
} `json:"jwt"`
Auth0 struct {
Enable bool `json:"enable"`
Tenants []value.Auth0Tenant `json:"tenants"`
} `json:"auth0"`
} `json:"auth"`
} `json:"api"`
TLS struct {
Address string `json:"address"`
Enable bool `json:"enable"`
Auto bool `json:"auto"`
Email string `json:"email"`
CertFile string `json:"cert_file"`
KeyFile string `json:"key_file"`
} `json:"tls"`
Storage struct {
Disk struct {
Dir string `json:"dir"`
Size int64 `json:"max_size_mbytes"`
Cache struct {
Enable bool `json:"enable"`
Size uint64 `json:"max_size_mbytes"`
TTL int64 `json:"ttl_seconds"`
FileSize uint64 `json:"max_file_size_mbytes"`
Types struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"types"`
} `json:"cache"`
} `json:"disk"`
Memory struct {
Auth struct {
Enable bool `json:"enable"`
Username string `json:"username"`
Password string `json:"password"`
} `json:"auth"`
Size int64 `json:"max_size_mbytes"`
Purge bool `json:"purge"`
} `json:"memory"`
CORS struct {
Origins []string `json:"origins"`
} `json:"cors"`
MimeTypes string `json:"mimetypes_file"`
} `json:"storage"`
RTMP struct {
Enable bool `json:"enable"`
EnableTLS bool `json:"enable_tls"`
Address string `json:"address"`
AddressTLS string `json:"address_tls"`
App string `json:"app"`
Token string `json:"token"`
} `json:"rtmp"`
SRT struct {
Enable bool `json:"enable"`
Address string `json:"address"`
Passphrase string `json:"passphrase"`
Token string `json:"token"`
Log struct {
Enable bool `json:"enable"`
Topics []string `json:"topics"`
} `json:"log"`
} `json:"srt"`
FFmpeg struct {
Binary string `json:"binary"`
MaxProcesses int64 `json:"max_processes"`
Access struct {
Input struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"input"`
Output struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"output"`
} `json:"access"`
Log struct {
MaxLines int `json:"max_lines"`
MaxHistory int `json:"max_history"`
} `json:"log"`
} `json:"ffmpeg"`
Playout struct {
Enable bool `json:"enable"`
MinPort int `json:"min_port"`
MaxPort int `json:"max_port"`
} `json:"playout"`
Debug struct {
Profiling bool `json:"profiling"`
ForceGC int `json:"force_gc"`
MemoryLimit int64 `json:"memory_limit_mbytes"`
} `json:"debug"`
Metrics struct {
Enable bool `json:"enable"`
EnablePrometheus bool `json:"enable_prometheus"`
Range int64 `json:"range_sec"` // seconds
Interval int64 `json:"interval_sec"` // seconds
} `json:"metrics"`
Sessions struct {
Enable bool `json:"enable"`
IPIgnoreList []string `json:"ip_ignorelist"`
SessionTimeout int `json:"session_timeout_sec"`
Persist bool `json:"persist"`
PersistInterval int `json:"persist_interval_sec"`
MaxBitrate uint64 `json:"max_bitrate_mbit"`
MaxSessions uint64 `json:"max_sessions"`
} `json:"sessions"`
Service struct {
Enable bool `json:"enable"`
Token string `json:"token"`
URL string `json:"url"`
} `json:"service"`
Router struct {
BlockedPrefixes []string `json:"blocked_prefixes"`
Routes map[string]string `json:"routes"`
UIPath string `json:"ui_path"`
} `json:"router"`
}
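Note the Log.Target block: the Output field serializes under the JSON key "name", per its struct tag. A minimal mirror of that block shows the resulting shape:

package main

import (
	"encoding/json"
	"fmt"
)

// target is a minimal mirror of Log.Target above. Output marshals
// under the key "name", matching the struct tag.
type target struct {
	Output string `json:"name"`
	Path   string `json:"path"`
}

func main() {
	b, _ := json.MarshalIndent(target{Output: "file", Path: "/var/log/core.log"}, "", "  ")
	fmt.Println(string(b))
	// {
	//   "name": "file",
	//   "path": "/var/log/core.log"
	// }
}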
func UpgradeV2ToV3(d *v2.Data) (*Data, error) {
cfg := New()
return MergeV2toV3(&cfg.Data, d)
}
func MergeV2toV3(data *Data, d *v2.Data) (*Data, error) {
data.CreatedAt = d.CreatedAt
data.LoadedAt = d.LoadedAt
data.UpdatedAt = d.UpdatedAt
data.ID = d.ID
data.Name = d.Name
data.Address = d.Address
data.CheckForUpdates = d.CheckForUpdates
data.DB = d.DB
data.Host = d.Host
data.API = d.API
data.RTMP = d.RTMP
data.SRT = d.SRT
data.FFmpeg = d.FFmpeg
data.Playout = d.Playout
data.Metrics = d.Metrics
data.Sessions = d.Sessions
data.Service = d.Service
data.Router = d.Router
data.Host.Name = copy.Slice(d.Host.Name)
data.API.Access.HTTP.Allow = copy.Slice(d.API.Access.HTTP.Allow)
data.API.Access.HTTP.Block = copy.Slice(d.API.Access.HTTP.Block)
data.API.Access.HTTPS.Allow = copy.Slice(d.API.Access.HTTPS.Allow)
data.API.Access.HTTPS.Block = copy.Slice(d.API.Access.HTTPS.Block)
data.API.Auth.Auth0.Tenants = copy.TenantSlice(d.API.Auth.Auth0.Tenants)
data.Storage.CORS.Origins = copy.Slice(d.Storage.CORS.Origins)
data.FFmpeg.Access.Input.Allow = copy.Slice(d.FFmpeg.Access.Input.Allow)
data.FFmpeg.Access.Input.Block = copy.Slice(d.FFmpeg.Access.Input.Block)
data.FFmpeg.Access.Output.Allow = copy.Slice(d.FFmpeg.Access.Output.Allow)
data.FFmpeg.Access.Output.Block = copy.Slice(d.FFmpeg.Access.Output.Block)
data.Sessions.IPIgnoreList = copy.Slice(d.Sessions.IPIgnoreList)
data.SRT.Log.Topics = copy.Slice(d.SRT.Log.Topics)
data.Router.BlockedPrefixes = copy.Slice(d.Router.BlockedPrefixes)
data.Router.Routes = copy.StringMap(d.Router.Routes)
data.Storage.MimeTypes = d.Storage.MimeTypes
data.Storage.CORS = d.Storage.CORS
data.Storage.CORS.Origins = copy.Slice(d.Storage.CORS.Origins)
data.Storage.Memory = d.Storage.Memory
// Actual changes
data.Log.Level = d.Log.Level
data.Log.Topics = copy.Slice(d.Log.Topics)
data.Log.MaxLines = d.Log.MaxLines
data.Log.Target.Output = "stderr"
data.Log.Target.Path = ""
data.Debug.Profiling = d.Debug.Profiling
data.Debug.ForceGC = d.Debug.ForceGC
data.Debug.MemoryLimit = 0
data.TLS.Enable = d.TLS.Enable
data.TLS.Address = d.TLS.Address
data.TLS.Auto = d.TLS.Auto
data.TLS.CertFile = d.TLS.CertFile
data.TLS.KeyFile = d.TLS.KeyFile
data.Storage.Disk.Dir = d.Storage.Disk.Dir
data.Storage.Disk.Size = d.Storage.Disk.Size
data.Storage.Disk.Cache.Enable = d.Storage.Disk.Cache.Enable
data.Storage.Disk.Cache.Size = d.Storage.Disk.Cache.Size
data.Storage.Disk.Cache.FileSize = d.Storage.Disk.Cache.FileSize
data.Storage.Disk.Cache.TTL = d.Storage.Disk.Cache.TTL
data.Storage.Disk.Cache.Types.Allow = copy.Slice(d.Storage.Disk.Cache.Types)
data.Version = 3
return data, nil
}
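The only structural change in the cache section is the split of v2's flat type list into allow/block. A small sketch of that mapping (upgradeTypes is a hypothetical name; in the real upgrade the block defaults come from New()):

package main

import "fmt"

// v2 kept one flat list of cacheable extensions; v3 splits it into
// allow/block. Upgrading maps the old list to Allow and leaves Block
// to whatever the v3 defaults provide (".m3u8" and ".mpd" above).
type cacheTypesV3 struct {
	Allow []string
	Block []string
}

func upgradeTypes(v2Types []string) cacheTypesV3 {
	allow := make([]string, len(v2Types))
	copy(allow, v2Types)
	return cacheTypesV3{Allow: allow}
}

func main() {
	fmt.Printf("%+v\n", upgradeTypes([]string{".ts"})) // {Allow:[.ts] Block:[]}
}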
func DowngradeV3toV2(d *Data) (*v2.Data, error) {
data := &v2.Data{}
data.CreatedAt = d.CreatedAt
data.LoadedAt = d.LoadedAt
data.UpdatedAt = d.UpdatedAt
data.ID = d.ID
data.Name = d.Name
data.Address = d.Address
data.CheckForUpdates = d.CheckForUpdates
data.DB = d.DB
data.Host = d.Host
data.API = d.API
data.RTMP = d.RTMP
data.SRT = d.SRT
data.FFmpeg = d.FFmpeg
data.Playout = d.Playout
data.Metrics = d.Metrics
data.Sessions = d.Sessions
data.Service = d.Service
data.Router = d.Router
data.Host.Name = copy.Slice(d.Host.Name)
data.API.Access.HTTP.Allow = copy.Slice(d.API.Access.HTTP.Allow)
data.API.Access.HTTP.Block = copy.Slice(d.API.Access.HTTP.Block)
data.API.Access.HTTPS.Allow = copy.Slice(d.API.Access.HTTPS.Allow)
data.API.Access.HTTPS.Block = copy.Slice(d.API.Access.HTTPS.Block)
data.API.Auth.Auth0.Tenants = copy.TenantSlice(d.API.Auth.Auth0.Tenants)
data.Storage.CORS.Origins = copy.Slice(d.Storage.CORS.Origins)
data.FFmpeg.Access.Input.Allow = copy.Slice(d.FFmpeg.Access.Input.Allow)
data.FFmpeg.Access.Input.Block = copy.Slice(d.FFmpeg.Access.Input.Block)
data.FFmpeg.Access.Output.Allow = copy.Slice(d.FFmpeg.Access.Output.Allow)
data.FFmpeg.Access.Output.Block = copy.Slice(d.FFmpeg.Access.Output.Block)
data.Sessions.IPIgnoreList = copy.Slice(d.Sessions.IPIgnoreList)
data.SRT.Log.Topics = copy.Slice(d.SRT.Log.Topics)
data.Router.BlockedPrefixes = copy.Slice(d.Router.BlockedPrefixes)
data.Router.Routes = copy.StringMap(d.Router.Routes)
// Actual changes
data.Log.Level = d.Log.Level
data.Log.Topics = copy.Slice(d.Log.Topics)
data.Log.MaxLines = d.Log.MaxLines
data.Debug.Profiling = d.Debug.Profiling
data.Debug.ForceGC = d.Debug.ForceGC
data.TLS.Enable = d.TLS.Enable
data.TLS.Address = d.TLS.Address
data.TLS.Auto = d.TLS.Auto
data.TLS.CertFile = d.TLS.CertFile
data.TLS.KeyFile = d.TLS.KeyFile
data.Storage.MimeTypes = d.Storage.MimeTypes
data.Storage.CORS = d.Storage.CORS
data.Storage.CORS.Origins = copy.Slice(d.Storage.CORS.Origins)
data.Storage.Memory = d.Storage.Memory
data.Storage.Disk.Dir = d.Storage.Disk.Dir
data.Storage.Disk.Size = d.Storage.Disk.Size
data.Storage.Disk.Cache.Enable = d.Storage.Disk.Cache.Enable
data.Storage.Disk.Cache.Size = d.Storage.Disk.Cache.Size
data.Storage.Disk.Cache.FileSize = d.Storage.Disk.Cache.FileSize
data.Storage.Disk.Cache.TTL = d.Storage.Disk.Cache.TTL
data.Storage.Disk.Cache.Types = copy.Slice(d.Storage.Disk.Cache.Types.Allow)
data.Version = 2
return data, nil
}
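The reverse direction is lossy: only the Allow list maps back to v2's flat list, so the block list is dropped on a v3 -> v2 -> v3 round trip. A sketch (downgradeTypes is a hypothetical name):

package main

import "fmt"

type typesV3 struct {
	Allow []string
	Block []string
}

// downgradeTypes flattens the v3 pair back to v2's single list.
func downgradeTypes(t typesV3) []string {
	flat := make([]string, len(t.Allow))
	copy(flat, t.Allow)
	return flat // Block has no v2 equivalent and is dropped
}

func main() {
	v3 := typesV3{Allow: []string{".ts"}, Block: []string{".m3u8", ".mpd"}}
	fmt.Println(downgradeTypes(v3)) // [.ts]
}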


@@ -1,71 +0,0 @@
package config
import (
"io/ioutil"
"net/http"
"sync"
"time"
)
// SetPublicIPs will try to figure out the public IPs (v4 and v6)
// we're running on. There's a timeout of max. 5 seconds to do it.
// If it fails, the IPs will simply not be set.
func (d *Config) SetPublicIPs() {
var wg sync.WaitGroup
ipv4 := ""
ipv6 := ""
wg.Add(2)
go func() {
defer wg.Done()
ipv4 = doRequest("https://api.ipify.org")
}()
go func() {
defer wg.Done()
ipv6 = doRequest("https://api6.ipify.org")
}()
wg.Wait()
if len(ipv4) != 0 {
d.Host.Name = append(d.Host.Name, ipv4)
}
if len(ipv6) != 0 && ipv4 != ipv6 {
d.Host.Name = append(d.Host.Name, ipv6)
}
}
func doRequest(url string) string {
client := &http.Client{
Timeout: 5 * time.Second,
}
req, err := http.NewRequest("GET", url, nil)
if err != nil {
return ""
}
resp, err := client.Do(req)
if err != nil {
return ""
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return ""
}
if resp.StatusCode != 200 {
return ""
}
return string(body)
}
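The same lookup can also be bounded with a context deadline instead of a client-level timeout; a sketch using io.ReadAll in place of the deprecated ioutil.ReadAll, with the same fail-soft behavior (any error yields ""):

package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// publicIP fetches the caller's public IP from the given service,
// returning "" on any error, as doRequest does above.
func publicIP(ctx context.Context, url string) string {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return ""
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return ""
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return ""
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return ""
	}
	return string(body)
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	fmt.Println(publicIP(ctx, "https://api.ipify.org"))
}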


@@ -1,17 +1,21 @@
package config
package store
import "fmt"
import (
"fmt"
"github.com/datarhei/core/v16/config"
)
type dummyStore struct {
current *Config
active *Config
current *config.Config
active *config.Config
}
// NewDummyStore returns a store that returns the default config
func NewDummyStore() Store {
func NewDummy() Store {
s := &dummyStore{}
cfg := New()
cfg := config.New()
cfg.DB.Dir = "."
cfg.FFmpeg.Binary = "true"
@@ -20,7 +24,7 @@ func NewDummyStore() Store {
s.current = cfg
cfg = New()
cfg = config.New()
cfg.DB.Dir = "."
cfg.FFmpeg.Binary = "true"
@@ -32,48 +36,34 @@ func NewDummyStore() Store {
return s
}
func (c *dummyStore) Get() *Config {
cfg := New()
cfg.DB.Dir = "."
cfg.FFmpeg.Binary = "true"
cfg.Storage.Disk.Dir = "."
cfg.Storage.MimeTypes = ""
return cfg
func (c *dummyStore) Get() *config.Config {
return c.current.Clone()
}
func (c *dummyStore) Set(d *Config) error {
func (c *dummyStore) Set(d *config.Config) error {
d.Validate(true)
if d.HasErrors() {
return fmt.Errorf("configuration data has errors after validation")
}
c.current = NewConfigFrom(d)
c.current = d.Clone()
return nil
}
func (c *dummyStore) GetActive() *Config {
cfg := New()
cfg.DB.Dir = "."
cfg.FFmpeg.Binary = "true"
cfg.Storage.Disk.Dir = "."
cfg.Storage.MimeTypes = ""
return cfg
func (c *dummyStore) GetActive() *config.Config {
return c.active.Clone()
}
func (c *dummyStore) SetActive(d *Config) error {
func (c *dummyStore) SetActive(d *config.Config) error {
d.Validate(true)
if d.HasErrors() {
return fmt.Errorf("configuration data has errors after validation")
}
c.active = NewConfigFrom(d)
c.active = d.Clone()
return nil
}
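Because Get and GetActive return clones, callers can't mutate the stored config in place. A hypothetical test sketch, assuming the store package lives at config/store:

package store_test

import (
	"testing"

	"github.com/datarhei/core/v16/config/store" // assumed import path
)

// Clones keep the stored config isolated from callers.
func TestDummyStoreIsolation(t *testing.T) {
	s := store.NewDummy()

	cfg := s.Get()
	cfg.DB.Dir = "/tmp/elsewhere" // mutate the returned clone

	if s.Get().DB.Dir == "/tmp/elsewhere" {
		t.Fatal("Get must return an isolated clone")
	}
}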


@@ -0,0 +1,138 @@
{
"created_at": "2022-11-08T12:01:22.533279+01:00",
"version": 1,
"id": "c5ea4473-2f84-417c-a0c6-35746bfc9fc9",
"name": "cool-breeze-4646",
"address": ":8080",
"update_check": true,
"log": {
"level": "info",
"topics": [],
"max_lines": 1000
},
"db": {
"dir": "./config"
},
"host": {
"name": [],
"auto": true
},
"api": {
"read_only": false,
"access": {
"http": {
"allow": [],
"block": []
},
"https": {
"allow": [],
"block": []
}
},
"auth": {
"enable": false,
"disable_localhost": false,
"username": "",
"password": "",
"jwt": {
"secret": "L(*C[:uuHzL.]Fzpk$q=fa@PO=Z;j;56"
},
"auth0": {
"enable": false,
"tenants": []
}
}
},
"tls": {
"address": ":8181",
"enable": false,
"auto": false,
"cert_file": "",
"key_file": ""
},
"storage": {
"disk": {
"dir": "./data",
"max_size_mbytes": 0,
"cache": {
"enable": true,
"max_size_mbytes": 0,
"ttl_seconds": 300,
"max_file_size_mbytes": 1,
"types": []
}
},
"memory": {
"auth": {
"enable": true,
"username": "admin",
"password": "dcFsZVGwVFkv1bE8Rl"
},
"max_size_mbytes": 0,
"purge": false
},
"cors": {
"origins": [
"*"
]
},
"mimetypes_file": "./mime.types"
},
"ffmpeg": {
"binary": "ffmpeg",
"max_processes": 0,
"access": {
"input": {
"allow": [],
"block": []
},
"output": {
"allow": [],
"block": []
}
},
"log": {
"max_lines": 50,
"max_history": 3
}
},
"playout": {
"enable": false,
"min_port": 0,
"max_port": 0
},
"debug": {
"profiling": false,
"force_gc": 0
},
"metrics": {
"enable": false,
"enable_prometheus": false,
"range_sec": 300,
"interval_sec": 2
},
"sessions": {
"enable": true,
"ip_ignorelist": [
"127.0.0.1/32",
"::1/128"
],
"session_timeout_sec": 30,
"persist": false,
"persist_interval_sec": 300,
"max_bitrate_mbit": 0,
"max_sessions": 0
},
"service": {
"enable": false,
"token": "",
"url": "https://service.datarhei.com"
},
"router": {
"blocked_prefixes": [
"/api"
],
"routes": {},
"ui_path": ""
}
}


@@ -0,0 +1,163 @@
{
"created_at": "2022-11-08T13:34:47.498911+01:00",
"version": 3,
"id": "c5ea4473-2f84-417c-a0c6-35746bfc9fc9",
"name": "cool-breeze-4646",
"address": ":8080",
"update_check": true,
"log": {
"level": "info",
"topics": [],
"max_lines": 1000
},
"db": {
"dir": "./config"
},
"host": {
"name": [],
"auto": true
},
"api": {
"read_only": false,
"access": {
"http": {
"allow": [],
"block": []
},
"https": {
"allow": [],
"block": []
}
},
"auth": {
"enable": false,
"disable_localhost": false,
"username": "",
"password": "",
"jwt": {
"secret": "L(*C[:uuHzL.]Fzpk$q=fa@PO=Z;j;56"
},
"auth0": {
"enable": false,
"tenants": []
}
}
},
"tls": {
"address": ":8181",
"enable": false,
"auto": false,
"email": "cert@datarhei.com",
"cert_file": "",
"key_file": ""
},
"storage": {
"disk": {
"dir": "./data",
"max_size_mbytes": 0,
"cache": {
"enable": true,
"max_size_mbytes": 0,
"ttl_seconds": 300,
"max_file_size_mbytes": 1,
"types": {
"allow": [],
"block": [
".m3u8",
".mpd"
]
}
}
},
"memory": {
"auth": {
"enable": true,
"username": "admin",
"password": "dcFsZVGwVFkv1bE8Rl"
},
"max_size_mbytes": 0,
"purge": false
},
"cors": {
"origins": [
"*"
]
},
"mimetypes_file": "./mime.types"
},
"rtmp": {
"enable": false,
"enable_tls": false,
"address": ":1935",
"address_tls": ":1936",
"app": "/",
"token": ""
},
"srt": {
"enable": false,
"address": ":6000",
"passphrase": "",
"token": "",
"log": {
"enable": false,
"topics": []
}
},
"ffmpeg": {
"binary": "ffmpeg",
"max_processes": 0,
"access": {
"input": {
"allow": [],
"block": []
},
"output": {
"allow": [],
"block": []
}
},
"log": {
"max_lines": 50,
"max_history": 3
}
},
"playout": {
"enable": false,
"min_port": 0,
"max_port": 0
},
"debug": {
"profiling": false,
"force_gc": 0
},
"metrics": {
"enable": false,
"enable_prometheus": false,
"range_sec": 300,
"interval_sec": 2
},
"sessions": {
"enable": true,
"ip_ignorelist": [
"127.0.0.1/32",
"::1/128"
],
"session_timeout_sec": 30,
"persist": false,
"persist_interval_sec": 300,
"max_bitrate_mbit": 0,
"max_sessions": 0
},
"service": {
"enable": false,
"token": "",
"url": "https://service.datarhei.com"
},
"router": {
"blocked_prefixes": [
"/api"
],
"routes": {},
"ui_path": ""
}
}


@@ -0,0 +1,140 @@
{
"created_at": "2022-11-08T11:54:44.224213+01:00",
"version": 2,
"id": "3bddc061-e534-4315-ab56-95b48c050ec9",
"name": "super-frog-1715",
"address": ":8080",
"update_check": true,
"log": {
"level": "info",
"topics": [],
"max_lines": 1000
},
"db": {
"dir": "./config"
},
"host": {
"name": [],
"auto": true
},
"api": {
"read_only": false,
"access": {
"http": {
"allow": [],
"block": []
},
"https": {
"allow": [],
"block": []
}
},
"auth": {
"enable": false,
"disable_localhost": false,
"username": "",
"password": "",
"jwt": {
"secret": "u4+N,UDq]jGxGbbQLQN[!jcMsa\u0026weIJW"
},
"auth0": {
"enable": false,
"tenants": []
}
}
},
"tls": {
"address": ":8181",
"enable": false,
"auto": false,
"cert_file": "",
"key_file": ""
},
"storage": {
"disk": {
"dir": "./data",
"max_size_mbytes": 0,
"cache": {
"enable": true,
"max_size_mbytes": 0,
"ttl_seconds": 300,
"max_file_size_mbytes": 1,
"types": [
".ts"
]
}
},
"memory": {
"auth": {
"enable": true,
"username": "admin",
"password": "DsAKRUg9wmOk4qpvvy"
},
"max_size_mbytes": 0,
"purge": false
},
"cors": {
"origins": [
"*"
]
},
"mimetypes_file": "./mime.types"
},
"ffmpeg": {
"binary": "ffmpeg",
"max_processes": 0,
"access": {
"input": {
"allow": [],
"block": []
},
"output": {
"allow": [],
"block": []
}
},
"log": {
"max_lines": 50,
"max_history": 3
}
},
"playout": {
"enable": false,
"min_port": 0,
"max_port": 0
},
"debug": {
"profiling": false,
"force_gc": 0
},
"metrics": {
"enable": false,
"enable_prometheus": false,
"range_sec": 300,
"interval_sec": 2
},
"sessions": {
"enable": true,
"ip_ignorelist": [
"127.0.0.1/32",
"::1/128"
],
"session_timeout_sec": 30,
"persist": false,
"persist_interval_sec": 300,
"max_bitrate_mbit": 0,
"max_sessions": 0
},
"service": {
"enable": false,
"token": "",
"url": "https://service.datarhei.com"
},
"router": {
"blocked_prefixes": [
"/api"
],
"routes": {},
"ui_path": ""
}
}


@@ -0,0 +1,165 @@
{
"created_at": "2022-11-08T11:54:44.224213+01:00",
"version": 3,
"id": "3bddc061-e534-4315-ab56-95b48c050ec9",
"name": "super-frog-1715",
"address": ":8080",
"update_check": true,
"log": {
"level": "info",
"topics": [],
"max_lines": 1000
},
"db": {
"dir": "./config"
},
"host": {
"name": [],
"auto": true
},
"api": {
"read_only": false,
"access": {
"http": {
"allow": [],
"block": []
},
"https": {
"allow": [],
"block": []
}
},
"auth": {
"enable": false,
"disable_localhost": false,
"username": "",
"password": "",
"jwt": {
"secret": "u4+N,UDq]jGxGbbQLQN[!jcMsa\u0026weIJW"
},
"auth0": {
"enable": false,
"tenants": []
}
}
},
"tls": {
"address": ":8181",
"enable": false,
"auto": false,
"cert_file": "",
"key_file": "",
"email": "cert@datarhei.com"
},
"storage": {
"disk": {
"dir": "./data",
"max_size_mbytes": 0,
"cache": {
"enable": true,
"max_size_mbytes": 0,
"ttl_seconds": 300,
"max_file_size_mbytes": 1,
"types": {
"allow": [
".ts"
],
"block": [
".m3u8",
".mpd"
]
}
}
},
"memory": {
"auth": {
"enable": true,
"username": "admin",
"password": "DsAKRUg9wmOk4qpvvy"
},
"max_size_mbytes": 0,
"purge": false
},
"cors": {
"origins": [
"*"
]
},
"mimetypes_file": "./mime.types"
},
"rtmp": {
"enable": false,
"enable_tls": false,
"address": ":1935",
"address_tls": ":1936",
"app": "/",
"token": ""
},
"srt": {
"enable": false,
"address": ":6000",
"passphrase": "",
"token": "",
"log": {
"enable": false,
"topics": []
}
},
"ffmpeg": {
"binary": "ffmpeg",
"max_processes": 0,
"access": {
"input": {
"allow": [],
"block": []
},
"output": {
"allow": [],
"block": []
}
},
"log": {
"max_lines": 50,
"max_history": 3
}
},
"playout": {
"enable": false,
"min_port": 0,
"max_port": 0
},
"debug": {
"profiling": false,
"force_gc": 0
},
"metrics": {
"enable": false,
"enable_prometheus": false,
"range_sec": 300,
"interval_sec": 2
},
"sessions": {
"enable": true,
"ip_ignorelist": [
"127.0.0.1/32",
"::1/128"
],
"session_timeout_sec": 30,
"persist": false,
"persist_interval_sec": 300,
"max_bitrate_mbit": 0,
"max_sessions": 0
},
"service": {
"enable": false,
"token": "",
"url": "https://service.datarhei.com"
},
"router": {
"blocked_prefixes": [
"/api"
],
"routes": {},
"ui_path": ""
}
}


@@ -1,13 +1,15 @@
package config
package store
import (
gojson "encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"time"
"github.com/datarhei/core/v16/config"
v1 "github.com/datarhei/core/v16/config/v1"
v2 "github.com/datarhei/core/v16/config/v2"
"github.com/datarhei/core/v16/encoding/json"
"github.com/datarhei/core/v16/io/file"
)
@@ -15,7 +17,7 @@ import (
type jsonStore struct {
path string
data map[string]*Config
data map[string]*config.Config
reloadFn func()
}
@@ -24,14 +26,14 @@ type jsonStore struct {
// back to the path. The returned error will be nil if everything went fine.
// If the path doesn't exist, a default JSON config file will be written to that path.
// The returned ConfigStore can be used to retrieve or write the config.
func NewJSONStore(path string, reloadFn func()) (Store, error) {
func NewJSON(path string, reloadFn func()) (Store, error) {
c := &jsonStore{
path: path,
data: make(map[string]*Config),
data: make(map[string]*config.Config),
reloadFn: reloadFn,
}
c.data["base"] = New()
c.data["base"] = config.New()
if err := c.load(c.data["base"]); err != nil {
return nil, fmt.Errorf("failed to read JSON from '%s': %w", path, err)
@@ -44,16 +46,16 @@ func NewJSONStore(path string, reloadFn func()) (Store, error) {
return c, nil
}
func (c *jsonStore) Get() *Config {
return NewConfigFrom(c.data["base"])
func (c *jsonStore) Get() *config.Config {
return c.data["base"].Clone()
}
func (c *jsonStore) Set(d *Config) error {
func (c *jsonStore) Set(d *config.Config) error {
if d.HasErrors() {
return fmt.Errorf("configuration data has errors after validation")
}
data := NewConfigFrom(d)
data := d.Clone()
data.CreatedAt = time.Now()
@@ -68,26 +70,26 @@ func (c *jsonStore) Set(d *Config) error {
return nil
}
func (c *jsonStore) GetActive() *Config {
func (c *jsonStore) GetActive() *config.Config {
if x, ok := c.data["merged"]; ok {
return NewConfigFrom(x)
return x.Clone()
}
if x, ok := c.data["base"]; ok {
return NewConfigFrom(x)
return x.Clone()
}
return nil
}
func (c *jsonStore) SetActive(d *Config) error {
func (c *jsonStore) SetActive(d *config.Config) error {
d.Validate(true)
if d.HasErrors() {
return fmt.Errorf("configuration data has errors after validation")
}
c.data["merged"] = NewConfigFrom(d)
c.data["merged"] = d.Clone()
return nil
}
@@ -102,7 +104,7 @@ func (c *jsonStore) Reload() error {
return nil
}
func (c *jsonStore) load(data *Config) error {
func (c *jsonStore) load(cfg *config.Config) error {
if len(c.path) == 0 {
return nil
}
@@ -111,22 +113,29 @@ func (c *jsonStore) load(data *Config) error {
return nil
}
jsondata, err := ioutil.ReadFile(c.path)
jsondata, err := os.ReadFile(c.path)
if err != nil {
return err
}
if err = gojson.Unmarshal(jsondata, data); err != nil {
return json.FormatError(jsondata, err)
if len(jsondata) == 0 {
return nil
}
data.LoadedAt = time.Now()
data.UpdatedAt = data.LoadedAt
data, err := migrate(jsondata)
if err != nil {
return err
}
cfg.Data = *data
cfg.LoadedAt = time.Now()
cfg.UpdatedAt = cfg.LoadedAt
return nil
}
func (c *jsonStore) store(data *Config) error {
func (c *jsonStore) store(data *config.Config) error {
data.CreatedAt = time.Now()
if len(c.path) == 0 {
@@ -140,7 +149,7 @@ func (c *jsonStore) store(data *Config) error {
dir, filename := filepath.Split(c.path)
tmpfile, err := ioutil.TempFile(dir, filename)
tmpfile, err := os.CreateTemp(dir, filename)
if err != nil {
return err
}
@@ -161,3 +170,55 @@ func (c *jsonStore) store(data *Config) error {
return nil
}
func migrate(jsondata []byte) (*config.Data, error) {
data := &config.Data{}
version := DataVersion{}
if err := gojson.Unmarshal(jsondata, &version); err != nil {
return nil, json.FormatError(jsondata, err)
}
if version.Version == 1 {
dataV1 := &v1.New().Data
if err := gojson.Unmarshal(jsondata, dataV1); err != nil {
return nil, json.FormatError(jsondata, err)
}
dataV2, err := v2.UpgradeV1ToV2(dataV1)
if err != nil {
return nil, err
}
dataV3, err := config.UpgradeV2ToV3(dataV2)
if err != nil {
return nil, err
}
data = dataV3
} else if version.Version == 2 {
dataV2 := &v2.New().Data
if err := gojson.Unmarshal(jsondata, dataV2); err != nil {
return nil, json.FormatError(jsondata, err)
}
dataV3, err := config.UpgradeV2ToV3(dataV2)
if err != nil {
return nil, err
}
data = dataV3
} else if version.Version == 3 {
dataV3 := &config.New().Data
if err := gojson.Unmarshal(jsondata, dataV3); err != nil {
return nil, json.FormatError(jsondata, err)
}
data = dataV3
}
return data, nil
}
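Taken together, NewJSON and migrate mean that a v1 or v2 config.js on disk is upgraded transparently on load. A minimal usage sketch (the path and the reload callback are placeholders, not part of the package):

package main

import (
	"log"

	"github.com/datarhei/core/v16/config/store"
)

func main() {
	s, err := store.NewJSON("./config/config.js", func() {
		log.Println("configuration reloaded")
	})
	if err != nil {
		log.Fatalln(err)
	}

	cfg := s.Get() // returns a clone; edits don't touch the store
	cfg.Name = "my-core"

	cfg.Validate(true) // Set refuses configs that carry validation errors
	if err := s.Set(cfg); err != nil {
		log.Fatalln(err)
	}
}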

50
config/store/json_test.go Normal file

@@ -0,0 +1,50 @@
package store
import (
"encoding/json"
"os"
"testing"
"time"
"github.com/datarhei/core/v16/config"
"github.com/stretchr/testify/require"
)
func TestMigrationV1ToV3(t *testing.T) {
jsondatav1, err := os.ReadFile("./fixtures/config_v1.json")
require.NoError(t, err)
jsondatav3, err := os.ReadFile("./fixtures/config_v1_v3.json")
require.NoError(t, err)
datav3 := config.New()
json.Unmarshal(jsondatav3, datav3)
data, err := migrate(jsondatav1)
require.NoError(t, err)
datav3.Data.CreatedAt = time.Time{}
data.CreatedAt = time.Time{}
require.Equal(t, datav3.Data, *data)
}
func TestMigrationV2ToV3(t *testing.T) {
jsondatav2, err := os.ReadFile("./fixtures/config_v2.json")
require.NoError(t, err)
jsondatav3, err := os.ReadFile("./fixtures/config_v2_v3.json")
require.NoError(t, err)
datav3 := config.New()
json.Unmarshal(jsondatav3, datav3)
data, err := migrate(jsondatav2)
require.NoError(t, err)
datav3.Data.CreatedAt = time.Time{}
data.CreatedAt = time.Time{}
require.Equal(t, datav3.Data, *data)
}

53
config/store/location.go Normal file

@@ -0,0 +1,53 @@
package store
import (
"os"
"path"
)
// Location returns the path to the config file. If no path is provided,
// different standard locations will be probed:
// - os.UserConfigDir() + /datarhei-core/config.js
// - os.UserHomeDir() + /.config/datarhei-core/config.js
// - ./config/config.js
// If the config doesn't exist in any of these locations, it will be assumed
// at ./config/config.js
func Location(filepath string) string {
configfile := filepath
if len(configfile) != 0 {
return configfile
}
locations := []string{}
if dir, err := os.UserConfigDir(); err == nil {
locations = append(locations, dir+"/datarhei-core/config.js")
}
if dir, err := os.UserHomeDir(); err == nil {
locations = append(locations, dir+"/.config/datarhei-core/config.js")
}
locations = append(locations, "./config/config.js")
for _, path := range locations {
info, err := os.Stat(path)
if err != nil {
continue
}
if info.IsDir() {
continue
}
configfile = path
}
if len(configfile) == 0 {
configfile = "./config/config.js"
}
os.MkdirAll(path.Dir(configfile), 0740)
return configfile
}
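A sketch of how a caller would combine this with an environment variable (the variable name here is illustrative; note that Location also creates the parent directory of the result):

package main

import (
	"fmt"
	"os"

	"github.com/datarhei/core/v16/config/store"
)

func main() {
	// An empty value falls through to the probed standard locations.
	path := store.Location(os.Getenv("CORE_CONFIGFILE"))
	fmt.Println("using config at", path)
}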


@@ -1,23 +1,29 @@
package config
package store
import "github.com/datarhei/core/v16/config"
// Store is a store for the configuration data.
type Store interface {
// Get the current configuration.
Get() *Config
Get() *config.Config
// Set a new configuration for persistence.
Set(data *Config) error
Set(data *config.Config) error
// GetActive returns the configuration that has been set as
// active before, otherwise it returns nil.
GetActive() *Config
GetActive() *config.Config
// SetActive will keep the given configuration
// as active in memory. It can be retrieved later with GetActive()
SetActive(data *Config) error
SetActive(data *config.Config) error
// Reload will reload the stored configuration. It has to make sure
// that all affected components will receive their potentially
// changed configuration.
Reload() error
}
type DataVersion struct {
Version int64 `json:"version"`
}
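DataVersion exists only for the first, cheap decoding pass in migrate: it pulls out the version field so the matching layout struct can be chosen before the full unmarshal. A self-contained sketch of the pattern:

package main

import (
	"encoding/json"
	"fmt"
)

type DataVersion struct {
	Version int64 `json:"version"`
}

func sniffVersion(jsondata []byte) (int64, error) {
	v := DataVersion{}
	// Unknown fields are ignored, so this works on any layout version.
	if err := json.Unmarshal(jsondata, &v); err != nil {
		return 0, err
	}
	return v.Version, nil
}

func main() {
	ver, _ := sniffVersion([]byte(`{"version": 3, "name": "super-frog-1715"}`))
	fmt.Println(ver) // 3
}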


@@ -1,807 +0,0 @@
package config
import (
"encoding/base64"
"encoding/json"
"fmt"
"net"
"net/url"
"os"
"os/exec"
"path/filepath"
"regexp"
"strconv"
"strings"
"time"
"github.com/datarhei/core/v16/http/cors"
)
type value interface {
// String returns a string representation of the value.
String() string
// Set a new value for the value. Returns an
// error if the given string representation can't
// be transformed to the value. Returns nil
// if the new value has been set.
Set(string) error
// Validate the value. The returned error will
// indicate what is wrong with the current value.
// Returns nil if the value is OK.
Validate() error
// IsEmpty returns whether the value represents an empty
// representation for that value.
IsEmpty() bool
}
// string
type stringValue string
func newStringValue(p *string, val string) *stringValue {
*p = val
return (*stringValue)(p)
}
func (s *stringValue) Set(val string) error {
*s = stringValue(val)
return nil
}
func (s *stringValue) String() string {
return string(*s)
}
func (s *stringValue) Validate() error {
return nil
}
func (s *stringValue) IsEmpty() bool {
return len(string(*s)) == 0
}
// address (host?:port)
type addressValue string
func newAddressValue(p *string, val string) *addressValue {
*p = val
return (*addressValue)(p)
}
func (s *addressValue) Set(val string) error {
// Check if the new value is only a port number
re := regexp.MustCompile("^[0-9]+$")
if re.MatchString(val) {
val = ":" + val
}
*s = addressValue(val)
return nil
}
func (s *addressValue) String() string {
return string(*s)
}
func (s *addressValue) Validate() error {
_, port, err := net.SplitHostPort(string(*s))
if err != nil {
return err
}
re := regexp.MustCompile("^[0-9]+$")
if !re.MatchString(port) {
return fmt.Errorf("the port must be numerical")
}
return nil
}
func (s *addressValue) IsEmpty() bool {
return s.Validate() != nil
}
// array of strings
type stringListValue struct {
p *[]string
separator string
}
func newStringListValue(p *[]string, val []string, separator string) *stringListValue {
v := &stringListValue{
p: p,
separator: separator,
}
*p = val
return v
}
func (s *stringListValue) Set(val string) error {
list := []string{}
for _, elm := range strings.Split(val, s.separator) {
elm = strings.TrimSpace(elm)
if len(elm) != 0 {
list = append(list, elm)
}
}
*s.p = list
return nil
}
func (s *stringListValue) String() string {
if s.IsEmpty() {
return "(empty)"
}
return strings.Join(*s.p, s.separator)
}
func (s *stringListValue) Validate() error {
return nil
}
func (s *stringListValue) IsEmpty() bool {
return len(*s.p) == 0
}
// array of auth0 tenants
type tenantListValue struct {
p *[]Auth0Tenant
separator string
}
func newTenantListValue(p *[]Auth0Tenant, val []Auth0Tenant, separator string) *tenantListValue {
v := &tenantListValue{
p: p,
separator: separator,
}
*p = val
return v
}
func (s *tenantListValue) Set(val string) error {
list := []Auth0Tenant{}
for i, elm := range strings.Split(val, s.separator) {
data, err := base64.StdEncoding.DecodeString(elm)
if err != nil {
return fmt.Errorf("invalid base64 encoding of tenant %d: %w", i, err)
}
t := Auth0Tenant{}
if err := json.Unmarshal(data, &t); err != nil {
return fmt.Errorf("invalid JSON in tenant %d: %w", i, err)
}
list = append(list, t)
}
*s.p = list
return nil
}
func (s *tenantListValue) String() string {
if s.IsEmpty() {
return "(empty)"
}
list := []string{}
for _, t := range *s.p {
list = append(list, fmt.Sprintf("%s (%d users)", t.Domain, len(t.Users)))
}
return strings.Join(list, ",")
}
func (s *tenantListValue) Validate() error {
for i, t := range *s.p {
if len(t.Domain) == 0 {
return fmt.Errorf("the domain for tenant %d is missing", i)
}
if len(t.Audience) == 0 {
return fmt.Errorf("the audience for tenant %d is missing", i)
}
}
return nil
}
func (s *tenantListValue) IsEmpty() bool {
return len(*s.p) == 0
}
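For reference, such a tenant list is produced by base64-encoding one JSON object per tenant and joining them with the separator. A sketch (the JSON field names follow the lowercase keys used in the validation messages and are an assumption, not a spec):

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// Illustrative stand-in for the package's Auth0Tenant type.
type tenant struct {
	Domain   string `json:"domain"`
	Audience string `json:"audience"`
	ClientID string `json:"clientid"`
}

func main() {
	tenants := []tenant{
		{Domain: "example.eu.auth0.com", Audience: "https://api.example.com/", ClientID: "abc123"},
	}

	encoded := []string{}
	for _, t := range tenants {
		data, _ := json.Marshal(t)
		encoded = append(encoded, base64.StdEncoding.EncodeToString(data))
	}

	// Suitable as input for tenantListValue.Set with separator ",".
	fmt.Println(strings.Join(encoded, ","))
}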
// map of strings to strings
type stringMapStringValue struct {
p *map[string]string
}
func newStringMapStringValue(p *map[string]string, val map[string]string) *stringMapStringValue {
v := &stringMapStringValue{
p: p,
}
if *p == nil {
*p = make(map[string]string)
}
if val != nil {
*p = val
}
return v
}
func (s *stringMapStringValue) Set(val string) error {
mappings := make(map[string]string)
for _, elm := range strings.Split(val, " ") {
elm = strings.TrimSpace(elm)
if len(elm) == 0 {
continue
}
mapping := strings.SplitN(elm, ":", 2)
mappings[mapping[0]] = mapping[1]
}
*s.p = mappings
return nil
}
func (s *stringMapStringValue) String() string {
if s.IsEmpty() {
return "(empty)"
}
mappings := make([]string, len(*s.p))
i := 0
for k, v := range *s.p {
mappings[i] = k + ":" + v
i++
}
return strings.Join(mappings, " ")
}
func (s *stringMapStringValue) Validate() error {
return nil
}
func (s *stringMapStringValue) IsEmpty() bool {
return len(*s.p) == 0
}
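The accepted input is a space-separated list of key:value pairs; this is the format in which router route mappings arrive from the environment. A quick in-package illustration (test-style, since the type is unexported; assumes the testing and require imports used elsewhere in the repo):

func TestStringMapStringValueFormat(t *testing.T) {
	var routes map[string]string
	v := newStringMapStringValue(&routes, nil)
	require.NoError(t, v.Set("/old:/new /ui:/path/to/ui"))
	require.Equal(t, "/new", routes["/old"])
	require.Equal(t, "/path/to/ui", routes["/ui"])
}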
// array of IP addresses in CIDR notation
type cidrListValue struct {
p *[]string
separator string
}
func newCIDRListValue(p *[]string, val []string, separator string) *cidrListValue {
v := &cidrListValue{
p: p,
separator: separator,
}
*p = val
return v
}
func (s *cidrListValue) Set(val string) error {
list := []string{}
for _, elm := range strings.Split(val, s.separator) {
elm = strings.TrimSpace(elm)
if len(elm) != 0 {
list = append(list, elm)
}
}
*s.p = list
return nil
}
func (s *cidrListValue) String() string {
if s.IsEmpty() {
return "(empty)"
}
return strings.Join(*s.p, s.separator)
}
func (s *cidrListValue) Validate() error {
for _, cidr := range *s.p {
_, _, err := net.ParseCIDR(cidr)
if err != nil {
return err
}
}
return nil
}
func (s *cidrListValue) IsEmpty() bool {
return len(*s.p) == 0
}
// array of origins for CORS
type corsOriginsValue struct {
p *[]string
separator string
}
func newCORSOriginsValue(p *[]string, val []string, separator string) *corsOriginsValue {
v := &corsOriginsValue{
p: p,
separator: separator,
}
*p = val
return v
}
func (s *corsOriginsValue) Set(val string) error {
list := []string{}
for _, elm := range strings.Split(val, s.separator) {
elm = strings.TrimSpace(elm)
if len(elm) != 0 {
list = append(list, elm)
}
}
*s.p = list
return nil
}
func (s *corsOriginsValue) String() string {
if s.IsEmpty() {
return "(empty)"
}
return strings.Join(*s.p, s.separator)
}
func (s *corsOriginsValue) Validate() error {
return cors.Validate(*s.p)
}
func (s *corsOriginsValue) IsEmpty() bool {
return len(*s.p) == 0
}
// boolean
type boolValue bool
func newBoolValue(p *bool, val bool) *boolValue {
*p = val
return (*boolValue)(p)
}
func (b *boolValue) Set(val string) error {
v, err := strconv.ParseBool(val)
if err != nil {
return err
}
*b = boolValue(v)
return nil
}
func (b *boolValue) String() string {
return strconv.FormatBool(bool(*b))
}
func (b *boolValue) Validate() error {
return nil
}
func (b *boolValue) IsEmpty() bool {
return !bool(*b)
}
// int
type intValue int
func newIntValue(p *int, val int) *intValue {
*p = val
return (*intValue)(p)
}
func (i *intValue) Set(val string) error {
v, err := strconv.Atoi(val)
if err != nil {
return err
}
*i = intValue(v)
return nil
}
func (i *intValue) String() string {
return strconv.Itoa(int(*i))
}
func (i *intValue) Validate() error {
return nil
}
func (i *intValue) IsEmpty() bool {
return int(*i) == 0
}
// int64
type int64Value int64
func newInt64Value(p *int64, val int64) *int64Value {
*p = val
return (*int64Value)(p)
}
func (u *int64Value) Set(val string) error {
v, err := strconv.ParseInt(val, 0, 64)
if err != nil {
return err
}
*u = int64Value(v)
return nil
}
func (u *int64Value) String() string {
return strconv.FormatInt(int64(*u), 10)
}
func (u *int64Value) Validate() error {
return nil
}
func (u *int64Value) IsEmpty() bool {
return int64(*u) == 0
}
// uint64
type uint64Value uint64
func newUint64Value(p *uint64, val uint64) *uint64Value {
*p = val
return (*uint64Value)(p)
}
func (u *uint64Value) Set(val string) error {
v, err := strconv.ParseUint(val, 0, 64)
if err != nil {
return err
}
*u = uint64Value(v)
return nil
}
func (u *uint64Value) String() string {
return strconv.FormatUint(uint64(*u), 10)
}
func (u *uint64Value) Validate() error {
return nil
}
func (u *uint64Value) IsEmpty() bool {
return uint64(*u) == 0
}
// network port
type portValue int
func newPortValue(p *int, val int) *portValue {
*p = val
return (*portValue)(p)
}
func (i *portValue) Set(val string) error {
v, err := strconv.Atoi(val)
if err != nil {
return err
}
*i = portValue(v)
return nil
}
func (i *portValue) String() string {
return strconv.Itoa(int(*i))
}
func (i *portValue) Validate() error {
val := int(*i)
if val < 0 || val >= (1<<16) {
return fmt.Errorf("%d is not in the range of [0, %d]", val, 1<<16-1)
}
return nil
}
func (i *portValue) IsEmpty() bool {
return int(*i) == 0
}
// must directory
type mustDirValue string
func newMustDirValue(p *string, val string) *mustDirValue {
*p = val
return (*mustDirValue)(p)
}
func (u *mustDirValue) Set(val string) error {
*u = mustDirValue(val)
return nil
}
func (u *mustDirValue) String() string {
return string(*u)
}
func (u *mustDirValue) Validate() error {
val := string(*u)
if len(strings.TrimSpace(val)) == 0 {
return fmt.Errorf("path name must not be empty")
}
finfo, err := os.Stat(val)
if err != nil {
return fmt.Errorf("%s does not exist", val)
}
if !finfo.IsDir() {
return fmt.Errorf("%s is not a directory", val)
}
return nil
}
func (u *mustDirValue) IsEmpty() bool {
return len(string(*u)) == 0
}
// directory
type dirValue string
func newDirValue(p *string, val string) *dirValue {
*p = val
return (*dirValue)(p)
}
func (u *dirValue) Set(val string) error {
*u = dirValue(val)
return nil
}
func (u *dirValue) String() string {
return string(*u)
}
func (u *dirValue) Validate() error {
val := string(*u)
if len(strings.TrimSpace(val)) == 0 {
return nil
}
finfo, err := os.Stat(val)
if err != nil {
return fmt.Errorf("%s does not exist", val)
}
if !finfo.IsDir() {
return fmt.Errorf("%s is not a directory", val)
}
return nil
}
func (u *dirValue) IsEmpty() bool {
return len(string(*u)) == 0
}
// executable
type execValue string
func newExecValue(p *string, val string) *execValue {
*p = val
return (*execValue)(p)
}
func (u *execValue) Set(val string) error {
*u = execValue(val)
return nil
}
func (u *execValue) String() string {
return string(*u)
}
func (u *execValue) Validate() error {
val := string(*u)
_, err := exec.LookPath(val)
if err != nil {
return fmt.Errorf("%s not found or is not executable", val)
}
return nil
}
func (u *execValue) IsEmpty() bool {
return len(string(*u)) == 0
}
// regular file
type fileValue string
func newFileValue(p *string, val string) *fileValue {
*p = val
return (*fileValue)(p)
}
func (u *fileValue) Set(val string) error {
*u = fileValue(val)
return nil
}
func (u *fileValue) String() string {
return string(*u)
}
func (u *fileValue) Validate() error {
val := string(*u)
if len(val) == 0 {
return nil
}
finfo, err := os.Stat(val)
if err != nil {
return fmt.Errorf("%s does not exist", val)
}
if !finfo.Mode().IsRegular() {
return fmt.Errorf("%s is not a regular file", val)
}
return nil
}
func (u *fileValue) IsEmpty() bool {
return len(string(*u)) == 0
}
// time
type timeValue time.Time
func newTimeValue(p *time.Time, val time.Time) *timeValue {
*p = val
return (*timeValue)(p)
}
func (u *timeValue) Set(val string) error {
v, err := time.Parse(time.RFC3339, val)
if err != nil {
return err
}
*u = timeValue(v)
return nil
}
func (u *timeValue) String() string {
v := time.Time(*u)
return v.Format(time.RFC3339)
}
func (u *timeValue) Validate() error {
return nil
}
func (u *timeValue) IsEmpty() bool {
v := time.Time(*u)
return v.IsZero()
}
// url
type urlValue string
func newURLValue(p *string, val string) *urlValue {
*p = val
return (*urlValue)(p)
}
func (u *urlValue) Set(val string) error {
*u = urlValue(val)
return nil
}
func (u *urlValue) String() string {
return string(*u)
}
func (u *urlValue) Validate() error {
val := string(*u)
if len(val) == 0 {
return nil
}
URL, err := url.Parse(val)
if err != nil {
return fmt.Errorf("%s is not a valid URL", val)
}
if len(URL.Scheme) == 0 || len(URL.Host) == 0 {
return fmt.Errorf("%s is not a valid URL", val)
}
return nil
}
func (u *urlValue) IsEmpty() bool {
return len(string(*u)) == 0
}
// absolute path
type absolutePathValue string
func newAbsolutePathValue(p *string, val string) *absolutePathValue {
*p = filepath.Clean(val)
return (*absolutePathValue)(p)
}
func (s *absolutePathValue) Set(val string) error {
*s = absolutePathValue(filepath.Clean(val))
return nil
}
func (s *absolutePathValue) String() string {
return string(*s)
}
func (s *absolutePathValue) Validate() error {
path := string(*s)
if !filepath.IsAbs(path) {
return fmt.Errorf("%s is not an absolute path", path)
}
return nil
}
func (s *absolutePathValue) IsEmpty() bool {
return len(string(*s)) == 0
}


@@ -1,58 +0,0 @@
package config
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestIntValue(t *testing.T) {
var i int
ivar := newIntValue(&i, 11)
assert.Equal(t, "11", ivar.String())
assert.Equal(t, nil, ivar.Validate())
assert.Equal(t, false, ivar.IsEmpty())
i = 42
assert.Equal(t, "42", ivar.String())
assert.Equal(t, nil, ivar.Validate())
assert.Equal(t, false, ivar.IsEmpty())
ivar.Set("77")
assert.Equal(t, int(77), i)
}
type testdata struct {
value1 int
value2 int
}
func TestCopyStruct(t *testing.T) {
data1 := testdata{}
newIntValue(&data1.value1, 1)
newIntValue(&data1.value2, 2)
assert.Equal(t, int(1), data1.value1)
assert.Equal(t, int(2), data1.value2)
data2 := testdata{}
val21 := newIntValue(&data2.value1, 3)
val22 := newIntValue(&data2.value2, 4)
assert.Equal(t, int(3), data2.value1)
assert.Equal(t, int(4), data2.value2)
data2 = data1
assert.Equal(t, int(1), data2.value1)
assert.Equal(t, int(2), data2.value2)
assert.Equal(t, "1", val21.String())
assert.Equal(t, "2", val22.String())
}

397
config/v1/config.go Normal file

@@ -0,0 +1,397 @@
package v1
import (
"context"
"net"
"time"
"github.com/datarhei/core/v16/config/copy"
"github.com/datarhei/core/v16/config/value"
"github.com/datarhei/core/v16/config/vars"
"github.com/datarhei/core/v16/math/rand"
haikunator "github.com/atrox/haikunatorgo/v2"
"github.com/google/uuid"
)
const version int64 = 1
// Make sure that the config.Config interface is satisfied
//var _ config.Config = &Config{}
// Config is a wrapper for Data
type Config struct {
vars vars.Variables
Data
}
// New returns a Config which is initialized with its default values
func New() *Config {
cfg := &Config{}
cfg.init()
return cfg
}
func (d *Config) Get(name string) (string, error) {
return d.vars.Get(name)
}
func (d *Config) Set(name, val string) error {
return d.vars.Set(name, val)
}
// Clone returns a deep copy of the Config
func (d *Config) Clone() *Config {
data := New()
data.CreatedAt = d.CreatedAt
data.LoadedAt = d.LoadedAt
data.UpdatedAt = d.UpdatedAt
data.Version = d.Version
data.ID = d.ID
data.Name = d.Name
data.Address = d.Address
data.CheckForUpdates = d.CheckForUpdates
data.Log = d.Log
data.DB = d.DB
data.Host = d.Host
data.API = d.API
data.TLS = d.TLS
data.Storage = d.Storage
data.RTMP = d.RTMP
data.SRT = d.SRT
data.FFmpeg = d.FFmpeg
data.Playout = d.Playout
data.Debug = d.Debug
data.Metrics = d.Metrics
data.Sessions = d.Sessions
data.Service = d.Service
data.Router = d.Router
data.Log.Topics = copy.Slice(d.Log.Topics)
data.Host.Name = copy.Slice(d.Host.Name)
data.API.Access.HTTP.Allow = copy.Slice(d.API.Access.HTTP.Allow)
data.API.Access.HTTP.Block = copy.Slice(d.API.Access.HTTP.Block)
data.API.Access.HTTPS.Allow = copy.Slice(d.API.Access.HTTPS.Allow)
data.API.Access.HTTPS.Block = copy.Slice(d.API.Access.HTTPS.Block)
data.API.Auth.Auth0.Tenants = copy.TenantSlice(d.API.Auth.Auth0.Tenants)
data.Storage.CORS.Origins = copy.Slice(d.Storage.CORS.Origins)
data.Storage.Disk.Cache.Types = copy.Slice(d.Storage.Disk.Cache.Types)
data.FFmpeg.Access.Input.Allow = copy.Slice(d.FFmpeg.Access.Input.Allow)
data.FFmpeg.Access.Input.Block = copy.Slice(d.FFmpeg.Access.Input.Block)
data.FFmpeg.Access.Output.Allow = copy.Slice(d.FFmpeg.Access.Output.Allow)
data.FFmpeg.Access.Output.Block = copy.Slice(d.FFmpeg.Access.Output.Block)
data.Sessions.IPIgnoreList = copy.Slice(d.Sessions.IPIgnoreList)
data.SRT.Log.Topics = copy.Slice(d.SRT.Log.Topics)
data.Router.BlockedPrefixes = copy.Slice(d.Router.BlockedPrefixes)
data.Router.Routes = copy.StringMap(d.Router.Routes)
data.vars.Transfer(&d.vars)
return data
}
func (d *Config) init() {
d.vars.Register(value.NewInt64(&d.Version, version), "version", "", nil, "Configuration file layout version", true, false)
d.vars.Register(value.NewTime(&d.CreatedAt, time.Now()), "created_at", "", nil, "Configuration file creation time", false, false)
d.vars.Register(value.NewString(&d.ID, uuid.New().String()), "id", "CORE_ID", nil, "ID for this instance", true, false)
d.vars.Register(value.NewString(&d.Name, haikunator.New().Haikunate()), "name", "CORE_NAME", nil, "A human readable name for this instance", false, false)
d.vars.Register(value.NewAddress(&d.Address, ":8080"), "address", "CORE_ADDRESS", nil, "HTTP listening address", false, false)
d.vars.Register(value.NewBool(&d.CheckForUpdates, true), "update_check", "CORE_UPDATE_CHECK", nil, "Check for updates and send anonymized data", false, false)
// Log
d.vars.Register(value.NewString(&d.Log.Level, "info"), "log.level", "CORE_LOG_LEVEL", nil, "Loglevel: silent, error, warn, info, debug", false, false)
d.vars.Register(value.NewStringList(&d.Log.Topics, []string{}, ","), "log.topics", "CORE_LOG_TOPICS", nil, "Show only selected log topics", false, false)
d.vars.Register(value.NewInt(&d.Log.MaxLines, 1000), "log.max_lines", "CORE_LOG_MAXLINES", nil, "Number of latest log lines to keep in memory", false, false)
// DB
d.vars.Register(value.NewMustDir(&d.DB.Dir, "./config"), "db.dir", "CORE_DB_DIR", nil, "Directory for holding the operational data", false, false)
// Host
d.vars.Register(value.NewStringList(&d.Host.Name, []string{}, ","), "host.name", "CORE_HOST_NAME", nil, "Comma separated list of public host/domain names or IPs", false, false)
d.vars.Register(value.NewBool(&d.Host.Auto, true), "host.auto", "CORE_HOST_AUTO", nil, "Enable detection of public IP addresses", false, false)
// API
d.vars.Register(value.NewBool(&d.API.ReadOnly, false), "api.read_only", "CORE_API_READ_ONLY", nil, "Allow only read-only access to the API", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTP.Allow, []string{}, ","), "api.access.http.allow", "CORE_API_ACCESS_HTTP_ALLOW", nil, "List of IPs in CIDR notation (HTTP traffic)", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTP.Block, []string{}, ","), "api.access.http.block", "CORE_API_ACCESS_HTTP_BLOCK", nil, "List of IPs in CIDR notation (HTTP traffic)", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTPS.Allow, []string{}, ","), "api.access.https.allow", "CORE_API_ACCESS_HTTPS_ALLOW", nil, "List of IPs in CIDR notation (HTTPS traffic)", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTPS.Block, []string{}, ","), "api.access.https.block", "CORE_API_ACCESS_HTTPS_BLOCK", nil, "List of IPs in CIDR notation (HTTPS traffic)", false, false)
d.vars.Register(value.NewBool(&d.API.Auth.Enable, false), "api.auth.enable", "CORE_API_AUTH_ENABLE", nil, "Enable authentication for all clients", false, false)
d.vars.Register(value.NewBool(&d.API.Auth.DisableLocalhost, false), "api.auth.disable_localhost", "CORE_API_AUTH_DISABLE_LOCALHOST", nil, "Disable authentication for clients from localhost", false, false)
d.vars.Register(value.NewString(&d.API.Auth.Username, ""), "api.auth.username", "CORE_API_AUTH_USERNAME", []string{"RS_USERNAME"}, "Username", false, false)
d.vars.Register(value.NewString(&d.API.Auth.Password, ""), "api.auth.password", "CORE_API_AUTH_PASSWORD", []string{"RS_PASSWORD"}, "Password", false, true)
// Auth JWT
d.vars.Register(value.NewString(&d.API.Auth.JWT.Secret, rand.String(32)), "api.auth.jwt.secret", "CORE_API_AUTH_JWT_SECRET", nil, "JWT secret, leave empty for generating a random value", false, true)
// Auth Auth0
d.vars.Register(value.NewBool(&d.API.Auth.Auth0.Enable, false), "api.auth.auth0.enable", "CORE_API_AUTH_AUTH0_ENABLE", nil, "Enable Auth0", false, false)
d.vars.Register(value.NewTenantList(&d.API.Auth.Auth0.Tenants, []value.Auth0Tenant{}, ","), "api.auth.auth0.tenants", "CORE_API_AUTH_AUTH0_TENANTS", nil, "List of Auth0 tenants", false, false)
// TLS
d.vars.Register(value.NewAddress(&d.TLS.Address, ":8181"), "tls.address", "CORE_TLS_ADDRESS", nil, "HTTPS listening address", false, false)
d.vars.Register(value.NewBool(&d.TLS.Enable, false), "tls.enable", "CORE_TLS_ENABLE", nil, "Enable HTTPS", false, false)
d.vars.Register(value.NewBool(&d.TLS.Auto, false), "tls.auto", "CORE_TLS_AUTO", nil, "Enable Let's Encrypt certificate", false, false)
d.vars.Register(value.NewFile(&d.TLS.CertFile, ""), "tls.cert_file", "CORE_TLS_CERTFILE", nil, "Path to certificate file in PEM format", false, false)
d.vars.Register(value.NewFile(&d.TLS.KeyFile, ""), "tls.key_file", "CORE_TLS_KEYFILE", nil, "Path to key file in PEM format", false, false)
// Storage
d.vars.Register(value.NewFile(&d.Storage.MimeTypes, "./mime.types"), "storage.mimetypes_file", "CORE_STORAGE_MIMETYPES_FILE", []string{"CORE_MIMETYPES_FILE"}, "Path to file with mime-types", false, false)
// Storage (Disk)
d.vars.Register(value.NewMustDir(&d.Storage.Disk.Dir, "./data"), "storage.disk.dir", "CORE_STORAGE_DISK_DIR", nil, "Directory on disk, exposed on /", false, false)
d.vars.Register(value.NewInt64(&d.Storage.Disk.Size, 0), "storage.disk.max_size_mbytes", "CORE_STORAGE_DISK_MAXSIZEMBYTES", nil, "Max. allowed megabytes for storage.disk.dir, 0 for unlimited", false, false)
d.vars.Register(value.NewBool(&d.Storage.Disk.Cache.Enable, true), "storage.disk.cache.enable", "CORE_STORAGE_DISK_CACHE_ENABLE", nil, "Enable cache for /", false, false)
d.vars.Register(value.NewUint64(&d.Storage.Disk.Cache.Size, 0), "storage.disk.cache.max_size_mbytes", "CORE_STORAGE_DISK_CACHE_MAXSIZEMBYTES", nil, "Max. allowed cache size, 0 for unlimited", false, false)
d.vars.Register(value.NewInt64(&d.Storage.Disk.Cache.TTL, 300), "storage.disk.cache.ttl_seconds", "CORE_STORAGE_DISK_CACHE_TTLSECONDS", nil, "Seconds to keep files in cache", false, false)
d.vars.Register(value.NewUint64(&d.Storage.Disk.Cache.FileSize, 1), "storage.disk.cache.max_file_size_mbytes", "CORE_STORAGE_DISK_CACHE_MAXFILESIZEMBYTES", nil, "Max. file size to put in cache", false, false)
d.vars.Register(value.NewStringList(&d.Storage.Disk.Cache.Types, []string{}, " "), "storage.disk.cache.types", "CORE_STORAGE_DISK_CACHE_TYPES_ALLOW", []string{"CORE_STORAGE_DISK_CACHE_TYPES"}, "File extensions to cache, empty for all", false, false)
// Storage (Memory)
d.vars.Register(value.NewBool(&d.Storage.Memory.Auth.Enable, true), "storage.memory.auth.enable", "CORE_STORAGE_MEMORY_AUTH_ENABLE", nil, "Enable basic auth for PUT, POST, and DELETE on /memfs", false, false)
d.vars.Register(value.NewString(&d.Storage.Memory.Auth.Username, "admin"), "storage.memory.auth.username", "CORE_STORAGE_MEMORY_AUTH_USERNAME", nil, "Username for Basic-Auth of /memfs", false, false)
d.vars.Register(value.NewString(&d.Storage.Memory.Auth.Password, rand.StringAlphanumeric(18)), "storage.memory.auth.password", "CORE_STORAGE_MEMORY_AUTH_PASSWORD", nil, "Password for Basic-Auth of /memfs", false, true)
d.vars.Register(value.NewInt64(&d.Storage.Memory.Size, 0), "storage.memory.max_size_mbytes", "CORE_STORAGE_MEMORY_MAXSIZEMBYTES", nil, "Max. allowed megabytes for /memfs, 0 for unlimited", false, false)
d.vars.Register(value.NewBool(&d.Storage.Memory.Purge, false), "storage.memory.purge", "CORE_STORAGE_MEMORY_PURGE", nil, "Automatically remove the oldest files if /memfs is full", false, false)
// Storage (CORS)
d.vars.Register(value.NewCORSOrigins(&d.Storage.CORS.Origins, []string{"*"}, ","), "storage.cors.origins", "CORE_STORAGE_CORS_ORIGINS", nil, "Allowed CORS origins for /memfs and /data", false, false)
// RTMP
d.vars.Register(value.NewBool(&d.RTMP.Enable, false), "rtmp.enable", "CORE_RTMP_ENABLE", nil, "Enable RTMP server", false, false)
d.vars.Register(value.NewBool(&d.RTMP.EnableTLS, false), "rtmp.enable_tls", "CORE_RTMP_ENABLE_TLS", nil, "Enable RTMPS server instead of RTMP", false, false)
d.vars.Register(value.NewAddress(&d.RTMP.Address, ":1935"), "rtmp.address", "CORE_RTMP_ADDRESS", nil, "RTMP server listen address", false, false)
d.vars.Register(value.NewAbsolutePath(&d.RTMP.App, "/"), "rtmp.app", "CORE_RTMP_APP", nil, "RTMP app for publishing", false, false)
d.vars.Register(value.NewString(&d.RTMP.Token, ""), "rtmp.token", "CORE_RTMP_TOKEN", nil, "RTMP token for publishing and playing", false, true)
// SRT
d.vars.Register(value.NewBool(&d.SRT.Enable, false), "srt.enable", "CORE_SRT_ENABLE", nil, "Enable SRT server", false, false)
d.vars.Register(value.NewAddress(&d.SRT.Address, ":6000"), "srt.address", "CORE_SRT_ADDRESS", nil, "SRT server listen address", false, false)
d.vars.Register(value.NewString(&d.SRT.Passphrase, ""), "srt.passphrase", "CORE_SRT_PASSPHRASE", nil, "SRT encryption passphrase", false, true)
d.vars.Register(value.NewString(&d.SRT.Token, ""), "srt.token", "CORE_SRT_TOKEN", nil, "SRT token for publishing and playing", false, true)
d.vars.Register(value.NewBool(&d.SRT.Log.Enable, false), "srt.log.enable", "CORE_SRT_LOG_ENABLE", nil, "Enable SRT server logging", false, false)
d.vars.Register(value.NewStringList(&d.SRT.Log.Topics, []string{}, ","), "srt.log.topics", "CORE_SRT_LOG_TOPICS", nil, "List of topics to log", false, false)
// FFmpeg
d.vars.Register(value.NewExec(&d.FFmpeg.Binary, "ffmpeg"), "ffmpeg.binary", "CORE_FFMPEG_BINARY", nil, "Path to ffmpeg binary", true, false)
d.vars.Register(value.NewInt64(&d.FFmpeg.MaxProcesses, 0), "ffmpeg.max_processes", "CORE_FFMPEG_MAXPROCESSES", nil, "Max. allowed simultaneously running ffmpeg instances, 0 for unlimited", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Input.Allow, []string{}, " "), "ffmpeg.access.input.allow", "CORE_FFMPEG_ACCESS_INPUT_ALLOW", nil, "List of allowed expressions to match against the input addresses", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Input.Block, []string{}, " "), "ffmpeg.access.input.block", "CORE_FFMPEG_ACCESS_INPUT_BLOCK", nil, "List of blocked expressions to match against the input addresses", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Output.Allow, []string{}, " "), "ffmpeg.access.output.allow", "CORE_FFMPEG_ACCESS_OUTPUT_ALLOW", nil, "List of allowed expressions to match against the output addresses", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Output.Block, []string{}, " "), "ffmpeg.access.output.block", "CORE_FFMPEG_ACCESS_OUTPUT_BLOCK", nil, "List of blocked expressions to match against the output addresses", false, false)
d.vars.Register(value.NewInt(&d.FFmpeg.Log.MaxLines, 50), "ffmpeg.log.max_lines", "CORE_FFMPEG_LOG_MAXLINES", nil, "Number of latest log lines to keep for each process", false, false)
d.vars.Register(value.NewInt(&d.FFmpeg.Log.MaxHistory, 3), "ffmpeg.log.max_history", "CORE_FFMPEG_LOG_MAXHISTORY", nil, "Number of latest logs to keep for each process", false, false)
// Playout
d.vars.Register(value.NewBool(&d.Playout.Enable, false), "playout.enable", "CORE_PLAYOUT_ENABLE", nil, "Enable playout proxy where available", false, false)
d.vars.Register(value.NewPort(&d.Playout.MinPort, 0), "playout.min_port", "CORE_PLAYOUT_MINPORT", nil, "Min. playout server port", false, false)
d.vars.Register(value.NewPort(&d.Playout.MaxPort, 0), "playout.max_port", "CORE_PLAYOUT_MAXPORT", nil, "Max. playout server port", false, false)
// Debug
d.vars.Register(value.NewBool(&d.Debug.Profiling, false), "debug.profiling", "CORE_DEBUG_PROFILING", nil, "Enable profiling endpoint on /profiling", false, false)
d.vars.Register(value.NewInt(&d.Debug.ForceGC, 0), "debug.force_gc", "CORE_DEBUG_FORCEGC", nil, "Number of seconds between forcing GC to return memory to the OS", false, false)
// Metrics
d.vars.Register(value.NewBool(&d.Metrics.Enable, false), "metrics.enable", "CORE_METRICS_ENABLE", nil, "Enable collecting historic metrics data", false, false)
d.vars.Register(value.NewBool(&d.Metrics.EnablePrometheus, false), "metrics.enable_prometheus", "CORE_METRICS_ENABLE_PROMETHEUS", nil, "Enable prometheus endpoint /metrics", false, false)
d.vars.Register(value.NewInt64(&d.Metrics.Range, 300), "metrics.range_seconds", "CORE_METRICS_RANGE_SECONDS", nil, "Seconds to keep history data", false, false)
d.vars.Register(value.NewInt64(&d.Metrics.Interval, 2), "metrics.interval_seconds", "CORE_METRICS_INTERVAL_SECONDS", nil, "Interval for collecting metrics", false, false)
// Sessions
d.vars.Register(value.NewBool(&d.Sessions.Enable, true), "sessions.enable", "CORE_SESSIONS_ENABLE", nil, "Enable collecting HLS session stats for /memfs", false, false)
d.vars.Register(value.NewCIDRList(&d.Sessions.IPIgnoreList, []string{"127.0.0.1/32", "::1/128"}, ","), "sessions.ip_ignorelist", "CORE_SESSIONS_IP_IGNORELIST", nil, "List of IP ranges in CIDR notation to ignore", false, false)
d.vars.Register(value.NewInt(&d.Sessions.SessionTimeout, 30), "sessions.session_timeout_sec", "CORE_SESSIONS_SESSION_TIMEOUT_SEC", nil, "Timeout for an idle session", false, false)
d.vars.Register(value.NewBool(&d.Sessions.Persist, false), "sessions.persist", "CORE_SESSIONS_PERSIST", nil, "Whether to persist session history. Will be stored as sessions.json in db.dir", false, false)
d.vars.Register(value.NewInt(&d.Sessions.PersistInterval, 300), "sessions.persist_interval_sec", "CORE_SESSIONS_PERSIST_INTERVAL_SEC", nil, "Interval in seconds in which to persist the current session history", false, false)
d.vars.Register(value.NewUint64(&d.Sessions.MaxBitrate, 0), "sessions.max_bitrate_mbit", "CORE_SESSIONS_MAXBITRATE_MBIT", nil, "Max. allowed outgoing bitrate in mbit/s, 0 for unlimited", false, false)
d.vars.Register(value.NewUint64(&d.Sessions.MaxSessions, 0), "sessions.max_sessions", "CORE_SESSIONS_MAXSESSIONS", nil, "Max. allowed number of simultaneous sessions, 0 for unlimited", false, false)
// Service
d.vars.Register(value.NewBool(&d.Service.Enable, false), "service.enable", "CORE_SERVICE_ENABLE", nil, "Enable connecting to the Restreamer Service", false, false)
d.vars.Register(value.NewString(&d.Service.Token, ""), "service.token", "CORE_SERVICE_TOKEN", nil, "Restreamer Service account token", false, true)
d.vars.Register(value.NewURL(&d.Service.URL, "https://service.datarhei.com"), "service.url", "CORE_SERVICE_URL", nil, "URL of the Restreamer Service", false, false)
// Router
d.vars.Register(value.NewStringList(&d.Router.BlockedPrefixes, []string{"/api"}, ","), "router.blocked_prefixes", "CORE_ROUTER_BLOCKED_PREFIXES", nil, "List of path prefixes that can't be routed", false, false)
d.vars.Register(value.NewStringMapString(&d.Router.Routes, nil), "router.routes", "CORE_ROUTER_ROUTES", nil, "List of route mappings", false, false)
d.vars.Register(value.NewDir(&d.Router.UIPath, ""), "router.ui_path", "CORE_ROUTER_UI_PATH", nil, "Path to a directory holding UI files mounted as /ui", false, false)
}
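Every Register call above binds a typed value wrapper to a dot-separated name, an environment variable, optional alternate variable names, a description, and required/secret flags. The registered name is what Get and Set operate on; a minimal sketch:

package main

import (
	"fmt"

	v1 "github.com/datarhei/core/v16/config/v1"
)

func main() {
	cfg := v1.New()

	// Names are the dot-separated paths registered in init().
	if err := cfg.Set("log.level", "debug"); err != nil {
		panic(err)
	}

	level, _ := cfg.Get("log.level")
	fmt.Println(level) // debug
}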
// Validate validates the current state of the Config for completeness and sanity. Errors are
// written to the log. Use resetLogs to reset the logs prior to validation.
func (d *Config) Validate(resetLogs bool) {
if resetLogs {
d.vars.ResetLogs()
}
if d.Version != version {
d.vars.Log("error", "version", "unknown configuration layout version (found version %d, expecting version %d)", d.Version, version)
return
}
d.vars.Validate()
// Individual sanity checks
// If HTTP Auth is enabled, check that the username and password are set
if d.API.Auth.Enable {
if len(d.API.Auth.Username) == 0 || len(d.API.Auth.Password) == 0 {
d.vars.Log("error", "api.auth.enable", "api.auth.username and api.auth.password must be set")
}
}
// If Auth0 is enabled, check that domain, audience, and clientid are set
if d.API.Auth.Auth0.Enable {
if len(d.API.Auth.Auth0.Tenants) == 0 {
d.vars.Log("error", "api.auth.auth0.enable", "at least one tenants must be set")
}
for i, t := range d.API.Auth.Auth0.Tenants {
if len(t.Domain) == 0 || len(t.Audience) == 0 || len(t.ClientID) == 0 {
d.vars.Log("error", "api.auth.auth0.tenants", "domain, audience, and clientid must be set (tenant %d)", i)
}
}
}
// If TLS is enabled and Let's Encrypt is disabled, require certfile and keyfile
if d.TLS.Enable && !d.TLS.Auto {
if len(d.TLS.CertFile) == 0 || len(d.TLS.KeyFile) == 0 {
d.vars.Log("error", "tls.enable", "tls.certfile and tls.keyfile must be set")
}
}
// If TLS and Let's Encrypt certificate is enabled, we require a public hostname
if d.TLS.Enable && d.TLS.Auto {
if len(d.Host.Name) == 0 {
d.vars.Log("error", "host.name", "a hostname must be set in order to get an automatic TLS certificate")
} else {
r := &net.Resolver{
PreferGo: true,
StrictErrors: true,
}
for _, host := range d.Host.Name {
// Don't lookup IP addresses
if ip := net.ParseIP(host); ip != nil {
d.vars.Log("error", "host.name", "only host names are allowed if automatic TLS is enabled, but found IP address: %s", host)
}
// Lookup host name with a timeout
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
_, err := r.LookupHost(ctx, host)
if err != nil {
d.vars.Log("error", "host.name", "the host '%s' can't be resolved and will not work with automatic TLS", host)
}
cancel()
}
}
}
// If TLS for RTMP is enabled, TLS must be enabled
if d.RTMP.EnableTLS {
if !d.RTMP.Enable {
d.vars.Log("error", "rtmp.enable", "RTMP server must be enabled if RTMPS server is enabled")
}
if !d.TLS.Enable {
d.vars.Log("error", "rtmp.enable_tls", "RTMPS server can only be enabled if TLS is enabled")
}
}
// If CORE_MEMFS_USERNAME and CORE_MEMFS_PASSWORD are set, automatically activate/deactivate Basic-Auth for memfs
if d.vars.IsMerged("storage.memory.auth.username") && d.vars.IsMerged("storage.memory.auth.password") {
d.Storage.Memory.Auth.Enable = true
if len(d.Storage.Memory.Auth.Username) == 0 && len(d.Storage.Memory.Auth.Password) == 0 {
d.Storage.Memory.Auth.Enable = false
}
}
// If Basic-Auth for memfs is enabled, check that the username and password are set
if d.Storage.Memory.Auth.Enable {
if len(d.Storage.Memory.Auth.Username) == 0 || len(d.Storage.Memory.Auth.Password) == 0 {
d.vars.Log("error", "storage.memory.auth.enable", "storage.memory.auth.username and storage.memory.auth.password must be set")
}
}
// If playout is enabled, check that the port range is sane
if d.Playout.Enable {
if d.Playout.MinPort >= d.Playout.MaxPort {
d.vars.Log("error", "playout.min_port", "must be bigger than playout.max_port")
}
}
// If cache is enabled, a valid TTL has to be set to a useful value
if d.Storage.Disk.Cache.Enable && d.Storage.Disk.Cache.TTL < 0 {
d.vars.Log("error", "storage.disk.cache.ttl_seconds", "must be equal or greater than 0")
}
// If the stats are enabled, the session timeout has to be set to a useful value
if d.Sessions.Enable && d.Sessions.SessionTimeout < 1 {
d.vars.Log("error", "stats.session_timeout_sec", "must be equal or greater than 1")
}
// If the stats and their persistence are enabled, the persist interval has to be set to a useful value
if d.Sessions.Enable && d.Sessions.PersistInterval < 0 {
d.vars.Log("error", "stats.persist_interval_sec", "must be at equal or greater than 0")
}
// If the service is enabled, the token and endpoint have to be defined
if d.Service.Enable {
if len(d.Service.Token) == 0 {
d.vars.Log("error", "service.token", "must be non-empty")
}
if len(d.Service.URL) == 0 {
d.vars.Log("error", "service.url", "must be non-empty")
}
}
// If historic metrics are enabled, the timerange and interval have to be valid
if d.Metrics.Enable {
if d.Metrics.Range <= 0 {
d.vars.Log("error", "metrics.range", "must be greater 0")
}
if d.Metrics.Interval <= 0 {
d.vars.Log("error", "metrics.interval", "must be greater 0")
}
if d.Metrics.Interval > d.Metrics.Range {
d.vars.Log("error", "metrics.interval", "must be smaller than the range")
}
}
}
func (d *Config) Merge() {
d.vars.Merge()
}
func (d *Config) Messages(logger func(level string, v vars.Variable, message string)) {
d.vars.Messages(logger)
}
func (d *Config) HasErrors() bool {
return d.vars.HasErrors()
}
func (d *Config) Overrides() []string {
return d.vars.Overrides()
}
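Validate never returns an error itself; findings accumulate on the registered variables and are read back through Messages and HasErrors. A sketch of the round trip:

package main

import (
	"fmt"

	v1 "github.com/datarhei/core/v16/config/v1"
	"github.com/datarhei/core/v16/config/vars"
)

func main() {
	cfg := v1.New()
	cfg.API.Auth.Enable = true // but no username/password set

	cfg.Validate(true)

	cfg.Messages(func(level string, v vars.Variable, message string) {
		fmt.Println(level, message)
	})

	fmt.Println(cfg.HasErrors()) // true
}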

158
config/v1/data.go Normal file

@@ -0,0 +1,158 @@
package v1
import (
"time"
"github.com/datarhei/core/v16/config/value"
)
type Data struct {
CreatedAt time.Time `json:"created_at"`
LoadedAt time.Time `json:"-"`
UpdatedAt time.Time `json:"-"`
Version int64 `json:"version" jsonschema:"minimum=1,maximum=1"`
ID string `json:"id"`
Name string `json:"name"`
Address string `json:"address"`
CheckForUpdates bool `json:"update_check"`
Log struct {
Level string `json:"level" enums:"debug,info,warn,error,silent" jsonschema:"enum=debug,enum=info,enum=warn,enum=error,enum=silent"`
Topics []string `json:"topics"`
MaxLines int `json:"max_lines"`
} `json:"log"`
DB struct {
Dir string `json:"dir"`
} `json:"db"`
Host struct {
Name []string `json:"name"`
Auto bool `json:"auto"`
} `json:"host"`
API struct {
ReadOnly bool `json:"read_only"`
Access struct {
HTTP struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"http"`
HTTPS struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"https"`
} `json:"access"`
Auth struct {
Enable bool `json:"enable"`
DisableLocalhost bool `json:"disable_localhost"`
Username string `json:"username"`
Password string `json:"password"`
JWT struct {
Secret string `json:"secret"`
} `json:"jwt"`
Auth0 struct {
Enable bool `json:"enable"`
Tenants []value.Auth0Tenant `json:"tenants"`
} `json:"auth0"`
} `json:"auth"`
} `json:"api"`
TLS struct {
Address string `json:"address"`
Enable bool `json:"enable"`
Auto bool `json:"auto"`
CertFile string `json:"cert_file"`
KeyFile string `json:"key_file"`
} `json:"tls"`
Storage struct {
Disk struct {
Dir string `json:"dir"`
Size int64 `json:"max_size_mbytes"`
Cache struct {
Enable bool `json:"enable"`
Size uint64 `json:"max_size_mbytes"`
TTL int64 `json:"ttl_seconds"`
FileSize uint64 `json:"max_file_size_mbytes"`
Types []string `json:"types"`
} `json:"cache"`
} `json:"disk"`
Memory struct {
Auth struct {
Enable bool `json:"enable"`
Username string `json:"username"`
Password string `json:"password"`
} `json:"auth"`
Size int64 `json:"max_size_mbytes"`
Purge bool `json:"purge"`
} `json:"memory"`
CORS struct {
Origins []string `json:"origins"`
} `json:"cors"`
MimeTypes string `json:"mimetypes_file"`
} `json:"storage"`
RTMP struct {
Enable bool `json:"enable"`
EnableTLS bool `json:"enable_tls"`
Address string `json:"address"`
App string `json:"app"`
Token string `json:"token"`
} `json:"rtmp"`
SRT struct {
Enable bool `json:"enable"`
Address string `json:"address"`
Passphrase string `json:"passphrase"`
Token string `json:"token"`
Log struct {
Enable bool `json:"enable"`
Topics []string `json:"topics"`
} `json:"log"`
} `json:"srt"`
FFmpeg struct {
Binary string `json:"binary"`
MaxProcesses int64 `json:"max_processes"`
Access struct {
Input struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"input"`
Output struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"output"`
} `json:"access"`
Log struct {
MaxLines int `json:"max_lines"`
MaxHistory int `json:"max_history"`
} `json:"log"`
} `json:"ffmpeg"`
Playout struct {
Enable bool `json:"enable"`
MinPort int `json:"min_port"`
MaxPort int `json:"max_port"`
} `json:"playout"`
Debug struct {
Profiling bool `json:"profiling"`
ForceGC int `json:"force_gc"`
} `json:"debug"`
Metrics struct {
Enable bool `json:"enable"`
EnablePrometheus bool `json:"enable_prometheus"`
Range int64 `json:"range_sec"` // seconds
Interval int64 `json:"interval_sec"` // seconds
} `json:"metrics"`
Sessions struct {
Enable bool `json:"enable"`
IPIgnoreList []string `json:"ip_ignorelist"`
SessionTimeout int `json:"session_timeout_sec"`
Persist bool `json:"persist"`
PersistInterval int `json:"persist_interval_sec"`
MaxBitrate uint64 `json:"max_bitrate_mbit"`
MaxSessions uint64 `json:"max_sessions"`
} `json:"sessions"`
Service struct {
Enable bool `json:"enable"`
Token string `json:"token"`
URL string `json:"url"`
} `json:"service"`
Router struct {
BlockedPrefixes []string `json:"blocked_prefixes"`
Routes map[string]string `json:"routes"`
UIPath string `json:"ui_path"`
} `json:"router"`
}

398
config/v2/config.go Normal file

@@ -0,0 +1,398 @@
package v2
import (
"context"
"net"
"time"
"github.com/datarhei/core/v16/config/copy"
"github.com/datarhei/core/v16/config/value"
"github.com/datarhei/core/v16/config/vars"
"github.com/datarhei/core/v16/math/rand"
haikunator "github.com/atrox/haikunatorgo/v2"
"github.com/google/uuid"
)
const version int64 = 2
// Make sure that the config.Config interface is satisfied
//var _ config.Config = &Config{}
// Config is a wrapper for Data
type Config struct {
vars vars.Variables
Data
}
// New returns a Config which is initialized with its default values
func New() *Config {
cfg := &Config{}
cfg.init()
return cfg
}
func (d *Config) Get(name string) (string, error) {
return d.vars.Get(name)
}
func (d *Config) Set(name, val string) error {
return d.vars.Set(name, val)
}
// Clone returns a deep copy of the Config
func (d *Config) Clone() *Config {
data := New()
data.CreatedAt = d.CreatedAt
data.LoadedAt = d.LoadedAt
data.UpdatedAt = d.UpdatedAt
data.Version = d.Version
data.ID = d.ID
data.Name = d.Name
data.Address = d.Address
data.CheckForUpdates = d.CheckForUpdates
data.Log = d.Log
data.DB = d.DB
data.Host = d.Host
data.API = d.API
data.TLS = d.TLS
data.Storage = d.Storage
data.RTMP = d.RTMP
data.SRT = d.SRT
data.FFmpeg = d.FFmpeg
data.Playout = d.Playout
data.Debug = d.Debug
data.Metrics = d.Metrics
data.Sessions = d.Sessions
data.Service = d.Service
data.Router = d.Router
data.Log.Topics = copy.Slice(d.Log.Topics)
data.Host.Name = copy.Slice(d.Host.Name)
data.API.Access.HTTP.Allow = copy.Slice(d.API.Access.HTTP.Allow)
data.API.Access.HTTP.Block = copy.Slice(d.API.Access.HTTP.Block)
data.API.Access.HTTPS.Allow = copy.Slice(d.API.Access.HTTPS.Allow)
data.API.Access.HTTPS.Block = copy.Slice(d.API.Access.HTTPS.Block)
data.API.Auth.Auth0.Tenants = copy.TenantSlice(d.API.Auth.Auth0.Tenants)
data.Storage.CORS.Origins = copy.Slice(d.Storage.CORS.Origins)
data.Storage.Disk.Cache.Types = copy.Slice(d.Storage.Disk.Cache.Types)
data.FFmpeg.Access.Input.Allow = copy.Slice(d.FFmpeg.Access.Input.Allow)
data.FFmpeg.Access.Input.Block = copy.Slice(d.FFmpeg.Access.Input.Block)
data.FFmpeg.Access.Output.Allow = copy.Slice(d.FFmpeg.Access.Output.Allow)
data.FFmpeg.Access.Output.Block = copy.Slice(d.FFmpeg.Access.Output.Block)
data.Sessions.IPIgnoreList = copy.Slice(d.Sessions.IPIgnoreList)
data.SRT.Log.Topics = copy.Slice(d.SRT.Log.Topics)
data.Router.BlockedPrefixes = copy.Slice(d.Router.BlockedPrefixes)
data.Router.Routes = copy.StringMap(d.Router.Routes)
data.vars.Transfer(&d.vars)
return data
}
func (d *Config) init() {
d.vars.Register(value.NewInt64(&d.Version, version), "version", "", nil, "Configuration file layout version", true, false)
d.vars.Register(value.NewTime(&d.CreatedAt, time.Now()), "created_at", "", nil, "Configuration file creation time", false, false)
d.vars.Register(value.NewString(&d.ID, uuid.New().String()), "id", "CORE_ID", nil, "ID for this instance", true, false)
d.vars.Register(value.NewString(&d.Name, haikunator.New().Haikunate()), "name", "CORE_NAME", nil, "A human readable name for this instance", false, false)
d.vars.Register(value.NewAddress(&d.Address, ":8080"), "address", "CORE_ADDRESS", nil, "HTTP listening address", false, false)
d.vars.Register(value.NewBool(&d.CheckForUpdates, true), "update_check", "CORE_UPDATE_CHECK", nil, "Check for updates and send anonymized data", false, false)
// Log
d.vars.Register(value.NewString(&d.Log.Level, "info"), "log.level", "CORE_LOG_LEVEL", nil, "Loglevel: silent, error, warn, info, debug", false, false)
d.vars.Register(value.NewStringList(&d.Log.Topics, []string{}, ","), "log.topics", "CORE_LOG_TOPICS", nil, "Show only selected log topics", false, false)
d.vars.Register(value.NewInt(&d.Log.MaxLines, 1000), "log.max_lines", "CORE_LOG_MAXLINES", nil, "Number of latest log lines to keep in memory", false, false)
// DB
d.vars.Register(value.NewMustDir(&d.DB.Dir, "./config"), "db.dir", "CORE_DB_DIR", nil, "Directory for holding the operational data", false, false)
// Host
d.vars.Register(value.NewStringList(&d.Host.Name, []string{}, ","), "host.name", "CORE_HOST_NAME", nil, "Comma separated list of public host/domain names or IPs", false, false)
d.vars.Register(value.NewBool(&d.Host.Auto, true), "host.auto", "CORE_HOST_AUTO", nil, "Enable detection of public IP addresses", false, false)
// API
d.vars.Register(value.NewBool(&d.API.ReadOnly, false), "api.read_only", "CORE_API_READ_ONLY", nil, "Allow only read-only access to the API", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTP.Allow, []string{}, ","), "api.access.http.allow", "CORE_API_ACCESS_HTTP_ALLOW", nil, "List of IPs in CIDR notation (HTTP traffic)", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTP.Block, []string{}, ","), "api.access.http.block", "CORE_API_ACCESS_HTTP_BLOCK", nil, "List of IPs in CIDR notation (HTTP traffic)", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTPS.Allow, []string{}, ","), "api.access.https.allow", "CORE_API_ACCESS_HTTPS_ALLOW", nil, "List of IPs in CIDR notation (HTTPS traffic)", false, false)
d.vars.Register(value.NewCIDRList(&d.API.Access.HTTPS.Block, []string{}, ","), "api.access.https.block", "CORE_API_ACCESS_HTTPS_BLOCK", nil, "List of IPs in CIDR notation (HTTPS traffic)", false, false)
d.vars.Register(value.NewBool(&d.API.Auth.Enable, false), "api.auth.enable", "CORE_API_AUTH_ENABLE", nil, "Enable authentication for all clients", false, false)
d.vars.Register(value.NewBool(&d.API.Auth.DisableLocalhost, false), "api.auth.disable_localhost", "CORE_API_AUTH_DISABLE_LOCALHOST", nil, "Disable authentication for clients from localhost", false, false)
d.vars.Register(value.NewString(&d.API.Auth.Username, ""), "api.auth.username", "CORE_API_AUTH_USERNAME", []string{"RS_USERNAME"}, "Username", false, false)
d.vars.Register(value.NewString(&d.API.Auth.Password, ""), "api.auth.password", "CORE_API_AUTH_PASSWORD", []string{"RS_PASSWORD"}, "Password", false, true)
// Auth JWT
d.vars.Register(value.NewString(&d.API.Auth.JWT.Secret, rand.String(32)), "api.auth.jwt.secret", "CORE_API_AUTH_JWT_SECRET", nil, "JWT secret, leave empty for generating a random value", false, true)
// Auth Auth0
d.vars.Register(value.NewBool(&d.API.Auth.Auth0.Enable, false), "api.auth.auth0.enable", "CORE_API_AUTH_AUTH0_ENABLE", nil, "Enable Auth0", false, false)
d.vars.Register(value.NewTenantList(&d.API.Auth.Auth0.Tenants, []value.Auth0Tenant{}, ","), "api.auth.auth0.tenants", "CORE_API_AUTH_AUTH0_TENANTS", nil, "List of Auth0 tenants", false, false)
// TLS
d.vars.Register(value.NewAddress(&d.TLS.Address, ":8181"), "tls.address", "CORE_TLS_ADDRESS", nil, "HTTPS listening address", false, false)
d.vars.Register(value.NewBool(&d.TLS.Enable, false), "tls.enable", "CORE_TLS_ENABLE", nil, "Enable HTTPS", false, false)
d.vars.Register(value.NewBool(&d.TLS.Auto, false), "tls.auto", "CORE_TLS_AUTO", nil, "Enable Let's Encrypt certificate", false, false)
d.vars.Register(value.NewFile(&d.TLS.CertFile, ""), "tls.cert_file", "CORE_TLS_CERTFILE", nil, "Path to certificate file in PEM format", false, false)
d.vars.Register(value.NewFile(&d.TLS.KeyFile, ""), "tls.key_file", "CORE_TLS_KEYFILE", nil, "Path to key file in PEM format", false, false)
// Storage
d.vars.Register(value.NewFile(&d.Storage.MimeTypes, "./mime.types"), "storage.mimetypes_file", "CORE_STORAGE_MIMETYPES_FILE", []string{"CORE_MIMETYPES_FILE"}, "Path to file with mime-types", false, false)
// Storage (Disk)
d.vars.Register(value.NewMustDir(&d.Storage.Disk.Dir, "./data"), "storage.disk.dir", "CORE_STORAGE_DISK_DIR", nil, "Directory on disk, exposed on /", false, false)
d.vars.Register(value.NewInt64(&d.Storage.Disk.Size, 0), "storage.disk.max_size_mbytes", "CORE_STORAGE_DISK_MAXSIZEMBYTES", nil, "Max. allowed megabytes for storage.disk.dir, 0 for unlimited", false, false)
d.vars.Register(value.NewBool(&d.Storage.Disk.Cache.Enable, true), "storage.disk.cache.enable", "CORE_STORAGE_DISK_CACHE_ENABLE", nil, "Enable cache for /", false, false)
d.vars.Register(value.NewUint64(&d.Storage.Disk.Cache.Size, 0), "storage.disk.cache.max_size_mbytes", "CORE_STORAGE_DISK_CACHE_MAXSIZEMBYTES", nil, "Max. allowed cache size, 0 for unlimited", false, false)
d.vars.Register(value.NewInt64(&d.Storage.Disk.Cache.TTL, 300), "storage.disk.cache.ttl_seconds", "CORE_STORAGE_DISK_CACHE_TTLSECONDS", nil, "Seconds to keep files in cache", false, false)
d.vars.Register(value.NewUint64(&d.Storage.Disk.Cache.FileSize, 1), "storage.disk.cache.max_file_size_mbytes", "CORE_STORAGE_DISK_CACHE_MAXFILESIZEMBYTES", nil, "Max. file size to put in cache", false, false)
d.vars.Register(value.NewStringList(&d.Storage.Disk.Cache.Types, []string{}, " "), "storage.disk.cache.types", "CORE_STORAGE_DISK_CACHE_TYPES_ALLOW", []string{"CORE_STORAGE_DISK_CACHE_TYPES"}, "File extensions to cache, empty for all", false, false)
// Storage (Memory)
d.vars.Register(value.NewBool(&d.Storage.Memory.Auth.Enable, true), "storage.memory.auth.enable", "CORE_STORAGE_MEMORY_AUTH_ENABLE", nil, "Enable basic auth for PUT, POST, and DELETE on /memfs", false, false)
d.vars.Register(value.NewString(&d.Storage.Memory.Auth.Username, "admin"), "storage.memory.auth.username", "CORE_STORAGE_MEMORY_AUTH_USERNAME", nil, "Username for Basic-Auth of /memfs", false, false)
d.vars.Register(value.NewString(&d.Storage.Memory.Auth.Password, rand.StringAlphanumeric(18)), "storage.memory.auth.password", "CORE_STORAGE_MEMORY_AUTH_PASSWORD", nil, "Password for Basic-Auth of /memfs", false, true)
d.vars.Register(value.NewInt64(&d.Storage.Memory.Size, 0), "storage.memory.max_size_mbytes", "CORE_STORAGE_MEMORY_MAXSIZEMBYTES", nil, "Max. allowed megabytes for /memfs, 0 for unlimited", false, false)
d.vars.Register(value.NewBool(&d.Storage.Memory.Purge, false), "storage.memory.purge", "CORE_STORAGE_MEMORY_PURGE", nil, "Automatically remove the oldest files if /memfs is full", false, false)
// Storage (CORS)
d.vars.Register(value.NewCORSOrigins(&d.Storage.CORS.Origins, []string{"*"}, ","), "storage.cors.origins", "CORE_STORAGE_CORS_ORIGINS", nil, "Allowed CORS origins for /memfs and /data", false, false)
// RTMP
d.vars.Register(value.NewBool(&d.RTMP.Enable, false), "rtmp.enable", "CORE_RTMP_ENABLE", nil, "Enable RTMP server", false, false)
d.vars.Register(value.NewBool(&d.RTMP.EnableTLS, false), "rtmp.enable_tls", "CORE_RTMP_ENABLE_TLS", nil, "Enable RTMPS server instead of RTMP", false, false)
d.vars.Register(value.NewAddress(&d.RTMP.Address, ":1935"), "rtmp.address", "CORE_RTMP_ADDRESS", nil, "RTMP server listen address", false, false)
d.vars.Register(value.NewAddress(&d.RTMP.AddressTLS, ":1936"), "rtmp.address_tls", "CORE_RTMP_ADDRESS_TLS", nil, "RTMPS server listen address", false, false)
d.vars.Register(value.NewAbsolutePath(&d.RTMP.App, "/"), "rtmp.app", "CORE_RTMP_APP", nil, "RTMP app for publishing", false, false)
d.vars.Register(value.NewString(&d.RTMP.Token, ""), "rtmp.token", "CORE_RTMP_TOKEN", nil, "RTMP token for publishing and playing", false, true)
// SRT
d.vars.Register(value.NewBool(&d.SRT.Enable, false), "srt.enable", "CORE_SRT_ENABLE", nil, "Enable SRT server", false, false)
d.vars.Register(value.NewAddress(&d.SRT.Address, ":6000"), "srt.address", "CORE_SRT_ADDRESS", nil, "SRT server listen address", false, false)
d.vars.Register(value.NewString(&d.SRT.Passphrase, ""), "srt.passphrase", "CORE_SRT_PASSPHRASE", nil, "SRT encryption passphrase", false, true)
d.vars.Register(value.NewString(&d.SRT.Token, ""), "srt.token", "CORE_SRT_TOKEN", nil, "SRT token for publishing and playing", false, true)
d.vars.Register(value.NewBool(&d.SRT.Log.Enable, false), "srt.log.enable", "CORE_SRT_LOG_ENABLE", nil, "Enable SRT server logging", false, false)
d.vars.Register(value.NewStringList(&d.SRT.Log.Topics, []string{}, ","), "srt.log.topics", "CORE_SRT_LOG_TOPICS", nil, "List of topics to log", false, false)
// FFmpeg
d.vars.Register(value.NewExec(&d.FFmpeg.Binary, "ffmpeg"), "ffmpeg.binary", "CORE_FFMPEG_BINARY", nil, "Path to ffmpeg binary", true, false)
d.vars.Register(value.NewInt64(&d.FFmpeg.MaxProcesses, 0), "ffmpeg.max_processes", "CORE_FFMPEG_MAXPROCESSES", nil, "Max. allowed simultaneously running ffmpeg instances, 0 for unlimited", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Input.Allow, []string{}, " "), "ffmpeg.access.input.allow", "CORE_FFMPEG_ACCESS_INPUT_ALLOW", nil, "List of allowed expressions to match against the input addresses", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Input.Block, []string{}, " "), "ffmpeg.access.input.block", "CORE_FFMPEG_ACCESS_INPUT_BLOCK", nil, "List of blocked expressions to match against the input addresses", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Output.Allow, []string{}, " "), "ffmpeg.access.output.allow", "CORE_FFMPEG_ACCESS_OUTPUT_ALLOW", nil, "List of allowed expressions to match against the output addresses", false, false)
d.vars.Register(value.NewStringList(&d.FFmpeg.Access.Output.Block, []string{}, " "), "ffmpeg.access.output.block", "CORE_FFMPEG_ACCESS_OUTPUT_BLOCK", nil, "List of blocked expressions to match against the output addresses", false, false)
d.vars.Register(value.NewInt(&d.FFmpeg.Log.MaxLines, 50), "ffmpeg.log.max_lines", "CORE_FFMPEG_LOG_MAXLINES", nil, "Number of latest log lines to keep for each process", false, false)
d.vars.Register(value.NewInt(&d.FFmpeg.Log.MaxHistory, 3), "ffmpeg.log.max_history", "CORE_FFMPEG_LOG_MAXHISTORY", nil, "Number of latest logs to keep for each process", false, false)
// Playout
d.vars.Register(value.NewBool(&d.Playout.Enable, false), "playout.enable", "CORE_PLAYOUT_ENABLE", nil, "Enable playout proxy where available", false, false)
d.vars.Register(value.NewPort(&d.Playout.MinPort, 0), "playout.min_port", "CORE_PLAYOUT_MINPORT", nil, "Min. playout server port", false, false)
d.vars.Register(value.NewPort(&d.Playout.MaxPort, 0), "playout.max_port", "CORE_PLAYOUT_MAXPORT", nil, "Max. playout server port", false, false)
// Debug
d.vars.Register(value.NewBool(&d.Debug.Profiling, false), "debug.profiling", "CORE_DEBUG_PROFILING", nil, "Enable profiling endpoint on /profiling", false, false)
d.vars.Register(value.NewInt(&d.Debug.ForceGC, 0), "debug.force_gc", "CORE_DEBUG_FORCEGC", nil, "Number of seconds between forcing GC to return memory to the OS", false, false)
// Metrics
d.vars.Register(value.NewBool(&d.Metrics.Enable, false), "metrics.enable", "CORE_METRICS_ENABLE", nil, "Enable collecting historic metrics data", false, false)
d.vars.Register(value.NewBool(&d.Metrics.EnablePrometheus, false), "metrics.enable_prometheus", "CORE_METRICS_ENABLE_PROMETHEUS", nil, "Enable prometheus endpoint /metrics", false, false)
d.vars.Register(value.NewInt64(&d.Metrics.Range, 300), "metrics.range_seconds", "CORE_METRICS_RANGE_SECONDS", nil, "Seconds to keep history data", false, false)
d.vars.Register(value.NewInt64(&d.Metrics.Interval, 2), "metrics.interval_seconds", "CORE_METRICS_INTERVAL_SECONDS", nil, "Interval for collecting metrics", false, false)
// Sessions
d.vars.Register(value.NewBool(&d.Sessions.Enable, true), "sessions.enable", "CORE_SESSIONS_ENABLE", nil, "Enable collecting HLS session stats for /memfs", false, false)
d.vars.Register(value.NewCIDRList(&d.Sessions.IPIgnoreList, []string{"127.0.0.1/32", "::1/128"}, ","), "sessions.ip_ignorelist", "CORE_SESSIONS_IP_IGNORELIST", nil, "List of IP ranges in CIDR notation to ignore", false, false)
d.vars.Register(value.NewInt(&d.Sessions.SessionTimeout, 30), "sessions.session_timeout_sec", "CORE_SESSIONS_SESSION_TIMEOUT_SEC", nil, "Timeout for an idle session", false, false)
d.vars.Register(value.NewBool(&d.Sessions.Persist, false), "sessions.persist", "CORE_SESSIONS_PERSIST", nil, "Whether to persist session history. Will be stored as sessions.json in db.dir", false, false)
d.vars.Register(value.NewInt(&d.Sessions.PersistInterval, 300), "sessions.persist_interval_sec", "CORE_SESSIONS_PERSIST_INTERVAL_SEC", nil, "Interval in seconds in which to persist the current session history", false, false)
d.vars.Register(value.NewUint64(&d.Sessions.MaxBitrate, 0), "sessions.max_bitrate_mbit", "CORE_SESSIONS_MAXBITRATE_MBIT", nil, "Max. allowed outgoing bitrate in mbit/s, 0 for unlimited", false, false)
d.vars.Register(value.NewUint64(&d.Sessions.MaxSessions, 0), "sessions.max_sessions", "CORE_SESSIONS_MAXSESSIONS", nil, "Max. allowed number of simultaneous sessions, 0 for unlimited", false, false)
// Service
d.vars.Register(value.NewBool(&d.Service.Enable, false), "service.enable", "CORE_SERVICE_ENABLE", nil, "Enable connecting to the Restreamer Service", false, false)
d.vars.Register(value.NewString(&d.Service.Token, ""), "service.token", "CORE_SERVICE_TOKEN", nil, "Restreamer Service account token", false, true)
d.vars.Register(value.NewURL(&d.Service.URL, "https://service.datarhei.com"), "service.url", "CORE_SERVICE_URL", nil, "URL of the Restreamer Service", false, false)
// Router
d.vars.Register(value.NewStringList(&d.Router.BlockedPrefixes, []string{"/api"}, ","), "router.blocked_prefixes", "CORE_ROUTER_BLOCKED_PREFIXES", nil, "List of path prefixes that can't be routed", false, false)
d.vars.Register(value.NewStringMapString(&d.Router.Routes, nil), "router.routes", "CORE_ROUTER_ROUTES", nil, "List of route mappings", false, false)
d.vars.Register(value.NewDir(&d.Router.UIPath, ""), "router.ui_path", "CORE_ROUTER_UI_PATH", nil, "Path to a directory holding UI files mounted as /ui", false, false)
}
// Validate validates the current state of the Config for completeness and sanity. Errors are
// written to the log. Set resetLogs to reset the collected logs prior to validation.
func (d *Config) Validate(resetLogs bool) {
if resetLogs {
d.vars.ResetLogs()
}
if d.Version != version {
d.vars.Log("error", "version", "unknown configuration layout version (found version %d, expecting version %d)", d.Version, version)
return
}
d.vars.Validate()
// Individual sanity checks
// If HTTP Auth is enabled, check that the username and password are set
if d.API.Auth.Enable {
if len(d.API.Auth.Username) == 0 || len(d.API.Auth.Password) == 0 {
d.vars.Log("error", "api.auth.enable", "api.auth.username and api.auth.password must be set")
}
}
// If Auth0 is enabled, check that domain, audience, and clientid are set
if d.API.Auth.Auth0.Enable {
if len(d.API.Auth.Auth0.Tenants) == 0 {
d.vars.Log("error", "api.auth.auth0.enable", "at least one tenants must be set")
}
for i, t := range d.API.Auth.Auth0.Tenants {
if len(t.Domain) == 0 || len(t.Audience) == 0 || len(t.ClientID) == 0 {
d.vars.Log("error", "api.auth.auth0.tenants", "domain, audience, and clientid must be set (tenant %d)", i)
}
}
}
// If TLS is enabled and Let's Encrypt is disabled, require certfile and keyfile
if d.TLS.Enable && !d.TLS.Auto {
if len(d.TLS.CertFile) == 0 || len(d.TLS.KeyFile) == 0 {
d.vars.Log("error", "tls.enable", "tls.certfile and tls.keyfile must be set")
}
}
// If TLS with automatic Let's Encrypt certificates is enabled, a public hostname is required
if d.TLS.Enable && d.TLS.Auto {
if len(d.Host.Name) == 0 {
d.vars.Log("error", "host.name", "a hostname must be set in order to get an automatic TLS certificate")
} else {
r := &net.Resolver{
PreferGo: true,
StrictErrors: true,
}
for _, host := range d.Host.Name {
// Don't lookup IP addresses
if ip := net.ParseIP(host); ip != nil {
d.vars.Log("error", "host.name", "only host names are allowed if automatic TLS is enabled, but found IP address: %s", host)
}
// Lookup host name with a timeout
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
_, err := r.LookupHost(ctx, host)
if err != nil {
d.vars.Log("error", "host.name", "the host '%s' can't be resolved and will not work with automatic TLS", host)
}
cancel()
}
}
}
// If TLS for RTMP is enabled, TLS must be enabled
if d.RTMP.EnableTLS {
if !d.RTMP.Enable {
d.vars.Log("error", "rtmp.enable", "RTMP server must be enabled if RTMPS server is enabled")
}
if !d.TLS.Enable {
d.vars.Log("error", "rtmp.enable_tls", "RTMPS server can only be enabled if TLS is enabled")
}
}
// If storage.memory.auth.username and storage.memory.auth.password are set from the environment, automatically activate/deactivate Basic-Auth for memfs
if d.vars.IsMerged("storage.memory.auth.username") && d.vars.IsMerged("storage.memory.auth.password") {
d.Storage.Memory.Auth.Enable = true
if len(d.Storage.Memory.Auth.Username) == 0 && len(d.Storage.Memory.Auth.Password) == 0 {
d.Storage.Memory.Auth.Enable = false
}
}
// If Basic-Auth for memfs is enabled, check that the username and password are set
if d.Storage.Memory.Auth.Enable {
if len(d.Storage.Memory.Auth.Username) == 0 || len(d.Storage.Memory.Auth.Password) == 0 {
d.vars.Log("error", "storage.memory.auth.enable", "storage.memory.auth.username and storage.memory.auth.password must be set")
}
}
// If playout is enabled, check that the port range is sane
if d.Playout.Enable {
if d.Playout.MinPort >= d.Playout.MaxPort {
d.vars.Log("error", "playout.min_port", "must be bigger than playout.max_port")
}
}
// If cache is enabled, a valid TTL has to be set to a useful value
if d.Storage.Disk.Cache.Enable && d.Storage.Disk.Cache.TTL < 0 {
d.vars.Log("error", "storage.disk.cache.ttl_seconds", "must be equal or greater than 0")
}
// If sessions are enabled, the session timeout has to be set to a useful value
if d.Sessions.Enable && d.Sessions.SessionTimeout < 1 {
d.vars.Log("error", "sessions.session_timeout_sec", "must be equal to or greater than 1")
}
// If sessions and their persistence are enabled, the persist interval has to be set to a useful value
if d.Sessions.Enable && d.Sessions.PersistInterval < 0 {
d.vars.Log("error", "sessions.persist_interval_sec", "must be equal to or greater than 0")
}
// If the service is enabled, the token and endpoint have to be defined
if d.Service.Enable {
if len(d.Service.Token) == 0 {
d.vars.Log("error", "service.token", "must be non-empty")
}
if len(d.Service.URL) == 0 {
d.vars.Log("error", "service.url", "must be non-empty")
}
}
// If historic metrics are enabled, the timerange and interval have to be valid
if d.Metrics.Enable {
if d.Metrics.Range <= 0 {
d.vars.Log("error", "metrics.range", "must be greater 0")
}
if d.Metrics.Interval <= 0 {
d.vars.Log("error", "metrics.interval", "must be greater 0")
}
if d.Metrics.Interval > d.Metrics.Range {
d.vars.Log("error", "metrics.interval", "must be smaller than the range")
}
}
}
func (d *Config) Merge() {
d.vars.Merge()
}
func (d *Config) Messages(logger func(level string, v vars.Variable, message string)) {
d.vars.Messages(logger)
}
func (d *Config) HasErrors() bool {
return d.vars.HasErrors()
}
func (d *Config) Overrides() []string {
return d.vars.Overrides()
}
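Taken together, the intended lifecycle is: construct a Config, overlay the environment, validate, then drain the collected messages. A minimal sketch, assuming the package exposes a zero-argument New() constructor (the constructor itself is not part of this excerpt):

// Sketch only: config.New() is an assumption; Merge/Validate/Messages/HasErrors are shown above.
cfg := config.New()
cfg.Merge()        // overlay CORE_* environment variables onto the loaded values
cfg.Validate(true) // reset the logs, then run all sanity checks
cfg.Messages(func(level string, v vars.Variable, message string) {
    fmt.Printf("%-5s %s (%s): %s\n", level, v.Name, v.EnvName, message)
})
if cfg.HasErrors() {
    os.Exit(1) // refuse to start with an invalid configuration
}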

config/v2/data.go Normal file

@@ -0,0 +1,319 @@
package v2
import (
"fmt"
"net"
"strconv"
"strings"
"time"
"github.com/datarhei/core/v16/config/copy"
v1 "github.com/datarhei/core/v16/config/v1"
"github.com/datarhei/core/v16/config/value"
)
type Data struct {
CreatedAt time.Time `json:"created_at"`
LoadedAt time.Time `json:"-"`
UpdatedAt time.Time `json:"-"`
Version int64 `json:"version" jsonschema:"minimum=2,maximum=2"`
ID string `json:"id"`
Name string `json:"name"`
Address string `json:"address"`
CheckForUpdates bool `json:"update_check"`
Log struct {
Level string `json:"level" enums:"debug,info,warn,error,silent" jsonschema:"enum=debug,enum=info,enum=warn,enum=error,enum=silent"`
Topics []string `json:"topics"`
MaxLines int `json:"max_lines"`
} `json:"log"`
DB struct {
Dir string `json:"dir"`
} `json:"db"`
Host struct {
Name []string `json:"name"`
Auto bool `json:"auto"`
} `json:"host"`
API struct {
ReadOnly bool `json:"read_only"`
Access struct {
HTTP struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"http"`
HTTPS struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"https"`
} `json:"access"`
Auth struct {
Enable bool `json:"enable"`
DisableLocalhost bool `json:"disable_localhost"`
Username string `json:"username"`
Password string `json:"password"`
JWT struct {
Secret string `json:"secret"`
} `json:"jwt"`
Auth0 struct {
Enable bool `json:"enable"`
Tenants []value.Auth0Tenant `json:"tenants"`
} `json:"auth0"`
} `json:"auth"`
} `json:"api"`
TLS struct {
Address string `json:"address"`
Enable bool `json:"enable"`
Auto bool `json:"auto"`
CertFile string `json:"cert_file"`
KeyFile string `json:"key_file"`
} `json:"tls"`
Storage struct {
Disk struct {
Dir string `json:"dir"`
Size int64 `json:"max_size_mbytes"`
Cache struct {
Enable bool `json:"enable"`
Size uint64 `json:"max_size_mbytes"`
TTL int64 `json:"ttl_seconds"`
FileSize uint64 `json:"max_file_size_mbytes"`
Types []string `json:"types"`
} `json:"cache"`
} `json:"disk"`
Memory struct {
Auth struct {
Enable bool `json:"enable"`
Username string `json:"username"`
Password string `json:"password"`
} `json:"auth"`
Size int64 `json:"max_size_mbytes"`
Purge bool `json:"purge"`
} `json:"memory"`
CORS struct {
Origins []string `json:"origins"`
} `json:"cors"`
MimeTypes string `json:"mimetypes_file"`
} `json:"storage"`
RTMP struct {
Enable bool `json:"enable"`
EnableTLS bool `json:"enable_tls"`
Address string `json:"address"`
AddressTLS string `json:"address_tls"`
App string `json:"app"`
Token string `json:"token"`
} `json:"rtmp"`
SRT struct {
Enable bool `json:"enable"`
Address string `json:"address"`
Passphrase string `json:"passphrase"`
Token string `json:"token"`
Log struct {
Enable bool `json:"enable"`
Topics []string `json:"topics"`
} `json:"log"`
} `json:"srt"`
FFmpeg struct {
Binary string `json:"binary"`
MaxProcesses int64 `json:"max_processes"`
Access struct {
Input struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"input"`
Output struct {
Allow []string `json:"allow"`
Block []string `json:"block"`
} `json:"output"`
} `json:"access"`
Log struct {
MaxLines int `json:"max_lines"`
MaxHistory int `json:"max_history"`
} `json:"log"`
} `json:"ffmpeg"`
Playout struct {
Enable bool `json:"enable"`
MinPort int `json:"min_port"`
MaxPort int `json:"max_port"`
} `json:"playout"`
Debug struct {
Profiling bool `json:"profiling"`
ForceGC int `json:"force_gc"`
} `json:"debug"`
Metrics struct {
Enable bool `json:"enable"`
EnablePrometheus bool `json:"enable_prometheus"`
Range int64 `json:"range_sec"` // seconds
Interval int64 `json:"interval_sec"` // seconds
} `json:"metrics"`
Sessions struct {
Enable bool `json:"enable"`
IPIgnoreList []string `json:"ip_ignorelist"`
SessionTimeout int `json:"session_timeout_sec"`
Persist bool `json:"persist"`
PersistInterval int `json:"persist_interval_sec"`
MaxBitrate uint64 `json:"max_bitrate_mbit"`
MaxSessions uint64 `json:"max_sessions"`
} `json:"sessions"`
Service struct {
Enable bool `json:"enable"`
Token string `json:"token"`
URL string `json:"url"`
} `json:"service"`
Router struct {
BlockedPrefixes []string `json:"blocked_prefixes"`
Routes map[string]string `json:"routes"`
UIPath string `json:"ui_path"`
} `json:"router"`
}
func UpgradeV1ToV2(d *v1.Data) (*Data, error) {
cfg := New()
return MergeV1ToV2(&cfg.Data, d)
}
// MergeV1ToV2 merges the v1 configuration data into the v2 layout. Migrations
// only go upwards, i.e. from a lower version to a higher version.
func MergeV1ToV2(data *Data, d *v1.Data) (*Data, error) {
data.CreatedAt = d.CreatedAt
data.LoadedAt = d.LoadedAt
data.UpdatedAt = d.UpdatedAt
data.ID = d.ID
data.Name = d.Name
data.Address = d.Address
data.CheckForUpdates = d.CheckForUpdates
data.Log = d.Log
data.DB = d.DB
data.Host = d.Host
data.API = d.API
data.TLS = d.TLS
data.Storage = d.Storage
data.SRT = d.SRT
data.FFmpeg = d.FFmpeg
data.Playout = d.Playout
data.Debug = d.Debug
data.Metrics = d.Metrics
data.Sessions = d.Sessions
data.Service = d.Service
data.Router = d.Router
data.Log.Topics = copy.Slice(d.Log.Topics)
data.Host.Name = copy.Slice(d.Host.Name)
data.API.Access.HTTP.Allow = copy.Slice(d.API.Access.HTTP.Allow)
data.API.Access.HTTP.Block = copy.Slice(d.API.Access.HTTP.Block)
data.API.Access.HTTPS.Allow = copy.Slice(d.API.Access.HTTPS.Allow)
data.API.Access.HTTPS.Block = copy.Slice(d.API.Access.HTTPS.Block)
data.API.Auth.Auth0.Tenants = copy.TenantSlice(d.API.Auth.Auth0.Tenants)
data.Storage.CORS.Origins = copy.Slice(d.Storage.CORS.Origins)
data.FFmpeg.Access.Input.Allow = copy.Slice(d.FFmpeg.Access.Input.Allow)
data.FFmpeg.Access.Input.Block = copy.Slice(d.FFmpeg.Access.Input.Block)
data.FFmpeg.Access.Output.Allow = copy.Slice(d.FFmpeg.Access.Output.Allow)
data.FFmpeg.Access.Output.Block = copy.Slice(d.FFmpeg.Access.Output.Block)
data.Sessions.IPIgnoreList = copy.Slice(d.Sessions.IPIgnoreList)
data.SRT.Log.Topics = copy.Slice(d.SRT.Log.Topics)
data.Router.BlockedPrefixes = copy.Slice(d.Router.BlockedPrefixes)
data.Router.Routes = copy.StringMap(d.Router.Routes)
// Actual changes
data.RTMP.Enable = d.RTMP.Enable
data.RTMP.EnableTLS = d.RTMP.EnableTLS
data.RTMP.Address = d.RTMP.Address
data.RTMP.App = d.RTMP.App
data.RTMP.Token = d.RTMP.Token
if !strings.HasPrefix(data.RTMP.App, "/") {
data.RTMP.App = "/" + data.RTMP.App
}
if d.RTMP.EnableTLS {
data.RTMP.Enable = true
data.RTMP.AddressTLS = data.RTMP.Address
host, sport, err := net.SplitHostPort(data.RTMP.Address)
if err != nil {
return nil, fmt.Errorf("migrating rtmp.address to rtmp.address_tls failed: %w", err)
}
port, err := strconv.Atoi(sport)
if err != nil {
return nil, fmt.Errorf("migrating rtmp.address to rtmp.address_tls failed: %w", err)
}
data.RTMP.Address = net.JoinHostPort(host, strconv.Itoa(port-1))
}
data.Version = 2
return data, nil
}
func DowngradeV2toV1(d *Data) (*v1.Data, error) {
data := &v1.Data{}
data.CreatedAt = d.CreatedAt
data.LoadedAt = d.LoadedAt
data.UpdatedAt = d.UpdatedAt
data.ID = d.ID
data.Name = d.Name
data.Address = d.Address
data.CheckForUpdates = d.CheckForUpdates
data.Log = d.Log
data.DB = d.DB
data.Host = d.Host
data.API = d.API
data.TLS = d.TLS
data.Storage = d.Storage
data.SRT = d.SRT
data.FFmpeg = d.FFmpeg
data.Playout = d.Playout
data.Debug = d.Debug
data.Metrics = d.Metrics
data.Sessions = d.Sessions
data.Service = d.Service
data.Router = d.Router
data.Log.Topics = copy.Slice(d.Log.Topics)
data.Host.Name = copy.Slice(d.Host.Name)
data.API.Access.HTTP.Allow = copy.Slice(d.API.Access.HTTP.Allow)
data.API.Access.HTTP.Block = copy.Slice(d.API.Access.HTTP.Block)
data.API.Access.HTTPS.Allow = copy.Slice(d.API.Access.HTTPS.Allow)
data.API.Access.HTTPS.Block = copy.Slice(d.API.Access.HTTPS.Block)
data.API.Auth.Auth0.Tenants = copy.TenantSlice(d.API.Auth.Auth0.Tenants)
data.Storage.CORS.Origins = copy.Slice(d.Storage.CORS.Origins)
data.FFmpeg.Access.Input.Allow = copy.Slice(d.FFmpeg.Access.Input.Allow)
data.FFmpeg.Access.Input.Block = copy.Slice(d.FFmpeg.Access.Input.Block)
data.FFmpeg.Access.Output.Allow = copy.Slice(d.FFmpeg.Access.Output.Allow)
data.FFmpeg.Access.Output.Block = copy.Slice(d.FFmpeg.Access.Output.Block)
data.Sessions.IPIgnoreList = copy.Slice(d.Sessions.IPIgnoreList)
data.SRT.Log.Topics = copy.Slice(d.SRT.Log.Topics)
data.Router.BlockedPrefixes = copy.Slice(d.Router.BlockedPrefixes)
data.Router.Routes = copy.StringMap(d.Router.Routes)
// Actual changes
data.RTMP.Enable = d.RTMP.Enable
data.RTMP.EnableTLS = d.RTMP.EnableTLS
data.RTMP.Address = d.RTMP.Address
data.RTMP.App = d.RTMP.App
data.RTMP.Token = d.RTMP.Token
data.Version = 1
return data, nil
}
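To illustrate the RTMP-specific changes above, this is what the upgrade produces for a v1 config that served RTMPS on :1935 (a sketch with values inferred from MergeV1ToV2; error handling elided):

old := &v1.Data{}
old.RTMP.EnableTLS = true
old.RTMP.Address = ":1935"
old.RTMP.App = "live"

cfg, err := UpgradeV1ToV2(old)
if err == nil {
    // cfg.RTMP.Enable     == true    (forced on because TLS was enabled)
    // cfg.RTMP.AddressTLS == ":1935" (inherits the old listen address)
    // cfg.RTMP.Address    == ":1934" (old port minus one)
    // cfg.RTMP.App        == "/live" (leading slash enforced)
}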

config/value/auth0.go Normal file

@@ -0,0 +1,126 @@
package value
import (
"encoding/base64"
"encoding/json"
"fmt"
"net/url"
"strings"
)
// auth0 tenant
type Auth0Tenant struct {
Domain string `json:"domain"`
Audience string `json:"audience"`
ClientID string `json:"clientid"`
Users []string `json:"users"`
}
func (a *Auth0Tenant) String() string {
u := url.URL{
Scheme: "auth0",
Host: a.Domain,
}
if len(a.ClientID) != 0 {
u.User = url.User(a.ClientID)
}
q := url.Values{}
q.Set("aud", a.Audience)
for _, user := range a.Users {
q.Add("user", user)
}
u.RawQuery = q.Encode()
return u.String()
}
type TenantList struct {
p *[]Auth0Tenant
separator string
}
func NewTenantList(p *[]Auth0Tenant, val []Auth0Tenant, separator string) *TenantList {
v := &TenantList{
p: p,
separator: separator,
}
*p = val
return v
}
// Set accepts a tenant list in two formats:
// - a separator separated list of base64 encoded Auth0Tenant JSON objects
// - a separator separated list of Auth0Tenant in URL representation: auth0://[clientid]@[domain]?aud=[audience]&user=...&user=...
func (s *TenantList) Set(val string) error {
list := []Auth0Tenant{}
for i, elm := range strings.Split(val, s.separator) {
t := Auth0Tenant{}
if strings.HasPrefix(elm, "auth0://") {
data, err := url.Parse(elm)
if err != nil {
return fmt.Errorf("invalid url encoding of tenant %d: %w", i, err)
}
t.Domain = data.Host
t.ClientID = data.User.Username()
t.Audience = data.Query().Get("aud")
t.Users = data.Query()["user"]
} else {
data, err := base64.StdEncoding.DecodeString(elm)
if err != nil {
return fmt.Errorf("invalid base64 encoding of tenant %d: %w", i, err)
}
if err := json.Unmarshal(data, &t); err != nil {
return fmt.Errorf("invalid JSON in tenant %d: %w", i, err)
}
}
list = append(list, t)
}
*s.p = list
return nil
}
func (s *TenantList) String() string {
if s.IsEmpty() {
return "(empty)"
}
list := []string{}
for _, t := range *s.p {
list = append(list, t.String())
}
return strings.Join(list, s.separator)
}
func (s *TenantList) Validate() error {
for i, t := range *s.p {
if len(t.Domain) == 0 {
return fmt.Errorf("the domain for tenant %d is missing", i)
}
if len(t.Audience) == 0 {
return fmt.Errorf("the audience for tenant %d is missing", i)
}
}
return nil
}
func (s *TenantList) IsEmpty() bool {
return len(*s.p) == 0
}
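Both encodings accepted by Set can be produced from an Auth0Tenant value, e.g. for CORE_API_AUTH_AUTH0_TENANTS; a sketch (domain, audience, and client ID are placeholders):

t := Auth0Tenant{
    Domain:   "example.eu.auth0.com",
    Audience: "https://example.com/core",
    ClientID: "abc123",
}

// URL representation:
fmt.Println(t.String())
// auth0://abc123@example.eu.auth0.com?aud=https%3A%2F%2Fexample.com%2Fcore

// Equivalent base64-encoded JSON representation:
raw, _ := json.Marshal(t)
fmt.Println(base64.StdEncoding.EncodeToString(raw))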


@@ -0,0 +1,43 @@
package value
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestAuth0Value(t *testing.T) {
tenants := []Auth0Tenant{}
v := NewTenantList(&tenants, nil, " ")
require.Equal(t, "(empty)", v.String())
v.Set("auth0://clientid@domain?aud=audience&user=user1&user=user2 auth0://domain2?aud=audience2&user=user3")
require.Equal(t, []Auth0Tenant{
{
Domain: "domain",
ClientID: "clientid",
Audience: "audience",
Users: []string{"user1", "user2"},
},
{
Domain: "domain2",
Audience: "audience2",
Users: []string{"user3"},
},
}, tenants)
require.Equal(t, "auth0://clientid@domain?aud=audience&user=user1&user=user2 auth0://domain2?aud=audience2&user=user3", v.String())
require.NoError(t, v.Validate())
v.Set("eyJkb21haW4iOiJkYXRhcmhlaS5ldS5hdXRoMC5jb20iLCJhdWRpZW5jZSI6Imh0dHBzOi8vZGF0YXJoZWkuY29tL2NvcmUiLCJ1c2VycyI6WyJhdXRoMHx4eHgiXX0=")
require.Equal(t, []Auth0Tenant{
{
Domain: "datarhei.eu.auth0.com",
ClientID: "",
Audience: "https://datarhei.com/core",
Users: []string{"auth0|xxx"},
},
}, tenants)
require.Equal(t, "auth0://datarhei.eu.auth0.com?aud=https%3A%2F%2Fdatarhei.com%2Fcore&user=auth0%7Cxxx", v.String())
require.NoError(t, v.Validate())
}

config/value/network.go Normal file

@@ -0,0 +1,277 @@
package value
import (
"fmt"
"net"
"net/mail"
"net/url"
"regexp"
"strconv"
"strings"
"github.com/datarhei/core/v16/http/cors"
)
// address (host?:port)
type Address string
func NewAddress(p *string, val string) *Address {
*p = val
return (*Address)(p)
}
func (s *Address) Set(val string) error {
// Check if the new value is only a port number
re := regexp.MustCompile("^[0-9]+$")
if re.MatchString(val) {
val = ":" + val
}
*s = Address(val)
return nil
}
func (s *Address) String() string {
return string(*s)
}
func (s *Address) Validate() error {
_, port, err := net.SplitHostPort(string(*s))
if err != nil {
return err
}
re := regexp.MustCompile("^[0-9]+$")
if !re.MatchString(port) {
return fmt.Errorf("the port must be numerical")
}
return nil
}
func (s *Address) IsEmpty() bool {
return s.Validate() != nil
}
// array of CIDR notation IP addresses
type CIDRList struct {
p *[]string
separator string
}
func NewCIDRList(p *[]string, val []string, separator string) *CIDRList {
v := &CIDRList{
p: p,
separator: separator,
}
*p = val
return v
}
func (s *CIDRList) Set(val string) error {
list := []string{}
for _, elm := range strings.Split(val, s.separator) {
elm = strings.TrimSpace(elm)
if len(elm) != 0 {
list = append(list, elm)
}
}
*s.p = list
return nil
}
func (s *CIDRList) String() string {
if s.IsEmpty() {
return "(empty)"
}
return strings.Join(*s.p, s.separator)
}
func (s *CIDRList) Validate() error {
for _, cidr := range *s.p {
_, _, err := net.ParseCIDR(cidr)
if err != nil {
return err
}
}
return nil
}
func (s *CIDRList) IsEmpty() bool {
return len(*s.p) == 0
}
// array of origins for CORS
type CORSOrigins struct {
p *[]string
separator string
}
func NewCORSOrigins(p *[]string, val []string, separator string) *CORSOrigins {
v := &CORSOrigins{
p: p,
separator: separator,
}
*p = val
return v
}
func (s *CORSOrigins) Set(val string) error {
list := []string{}
for _, elm := range strings.Split(val, s.separator) {
elm = strings.TrimSpace(elm)
if len(elm) != 0 {
list = append(list, elm)
}
}
*s.p = list
return nil
}
func (s *CORSOrigins) String() string {
if s.IsEmpty() {
return "(empty)"
}
return strings.Join(*s.p, s.separator)
}
func (s *CORSOrigins) Validate() error {
return cors.Validate(*s.p)
}
func (s *CORSOrigins) IsEmpty() bool {
return len(*s.p) == 0
}
// network port
type Port int
func NewPort(p *int, val int) *Port {
*p = val
return (*Port)(p)
}
func (i *Port) Set(val string) error {
v, err := strconv.Atoi(val)
if err != nil {
return err
}
*i = Port(v)
return nil
}
func (i *Port) String() string {
return strconv.Itoa(int(*i))
}
func (i *Port) Validate() error {
val := int(*i)
if val < 0 || val >= (1<<16) {
return fmt.Errorf("%d is not in the range of [0, %d]", val, 1<<16-1)
}
return nil
}
func (i *Port) IsEmpty() bool {
return int(*i) == 0
}
// url
type URL string
func NewURL(p *string, val string) *URL {
*p = val
return (*URL)(p)
}
func (u *URL) Set(val string) error {
*u = URL(val)
return nil
}
func (u *URL) String() string {
return string(*u)
}
func (u *URL) Validate() error {
val := string(*u)
if len(val) == 0 {
return nil
}
URL, err := url.Parse(val)
if err != nil {
return fmt.Errorf("%s is not a valid URL", val)
}
if len(URL.Scheme) == 0 || len(URL.Host) == 0 {
return fmt.Errorf("%s is not a valid URL", val)
}
return nil
}
func (u *URL) IsEmpty() bool {
return len(string(*u)) == 0
}
// email address
type Email string
func NewEmail(p *string, val string) *Email {
*p = val
return (*Email)(p)
}
func (s *Email) Set(val string) error {
addr, err := mail.ParseAddress(val)
if err != nil {
return err
}
*s = Email(addr.Address)
return nil
}
func (s *Email) String() string {
return string(*s)
}
func (s *Email) Validate() error {
if len(s.String()) == 0 {
return nil
}
_, err := mail.ParseAddress(s.String())
return err
}
func (s *Email) IsEmpty() bool {
return len(string(*s)) == 0
}
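A short sketch of how these network values behave in practice, using Address and CIDRList as defined above:

var addr string
a := NewAddress(&addr, ":8080")
_ = a.Set("9090")         // a bare port number is normalized to ":9090"
fmt.Println(a.Validate()) // nil: the host may be empty, the port must be numeric
_ = a.Set("localhost")    // Set never fails ...
fmt.Println(a.Validate()) // ... but Validate rejects the missing port

var ranges []string
c := NewCIDRList(&ranges, nil, ",")
_ = c.Set("127.0.0.1/32, ::1/128") // elements are split and trimmed
fmt.Println(c.Validate())          // nil; a plain IP without /prefix would fail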

config/value/os.go Normal file

@@ -0,0 +1,206 @@
package value
import (
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
)
// must directory
type MustDir string
func NewMustDir(p *string, val string) *MustDir {
*p = val
return (*MustDir)(p)
}
func (u *MustDir) Set(val string) error {
*u = MustDir(val)
return nil
}
func (u *MustDir) String() string {
return string(*u)
}
func (u *MustDir) Validate() error {
val := string(*u)
if len(strings.TrimSpace(val)) == 0 {
return fmt.Errorf("path name must not be empty")
}
if err := os.MkdirAll(val, 0750); err != nil {
return fmt.Errorf("%s can't be created (%w)", val, err)
}
finfo, err := os.Stat(val)
if err != nil {
return fmt.Errorf("%s does not exist", val)
}
if !finfo.IsDir() {
return fmt.Errorf("%s is not a directory", val)
}
return nil
}
func (u *MustDir) IsEmpty() bool {
return len(string(*u)) == 0
}
// directory
type Dir string
func NewDir(p *string, val string) *Dir {
*p = val
return (*Dir)(p)
}
func (u *Dir) Set(val string) error {
*u = Dir(val)
return nil
}
func (u *Dir) String() string {
return string(*u)
}
func (u *Dir) Validate() error {
val := string(*u)
if len(strings.TrimSpace(val)) == 0 {
return nil
}
finfo, err := os.Stat(val)
if err != nil {
return fmt.Errorf("%s does not exist", val)
}
if !finfo.IsDir() {
return fmt.Errorf("%s is not a directory", val)
}
return nil
}
func (u *Dir) IsEmpty() bool {
return len(string(*u)) == 0
}
// executable
type Exec string
func NewExec(p *string, val string) *Exec {
*p = val
return (*Exec)(p)
}
func (u *Exec) Set(val string) error {
*u = Exec(val)
return nil
}
func (u *Exec) String() string {
return string(*u)
}
func (u *Exec) Validate() error {
val := string(*u)
_, err := exec.LookPath(val)
if err != nil {
return fmt.Errorf("%s not found or is not executable", val)
}
return nil
}
func (u *Exec) IsEmpty() bool {
return len(string(*u)) == 0
}
// regular file
type File string
func NewFile(p *string, val string) *File {
*p = val
return (*File)(p)
}
func (u *File) Set(val string) error {
*u = File(val)
return nil
}
func (u *File) String() string {
return string(*u)
}
func (u *File) Validate() error {
val := string(*u)
if len(val) == 0 {
return nil
}
finfo, err := os.Stat(val)
if err != nil {
return fmt.Errorf("%s does not exist", val)
}
if !finfo.Mode().IsRegular() {
return fmt.Errorf("%s is not a regular file", val)
}
return nil
}
func (u *File) IsEmpty() bool {
return len(string(*u)) == 0
}
// absolute path
type AbsolutePath string
func NewAbsolutePath(p *string, val string) *AbsolutePath {
*p = filepath.Clean(val)
return (*AbsolutePath)(p)
}
func (s *AbsolutePath) Set(val string) error {
*s = AbsolutePath(filepath.Clean(val))
return nil
}
func (s *AbsolutePath) String() string {
return string(*s)
}
func (s *AbsolutePath) Validate() error {
path := string(*s)
if !filepath.IsAbs(path) {
return fmt.Errorf("%s is not an absolute path", path)
}
return nil
}
func (s *AbsolutePath) IsEmpty() bool {
return len(string(*s)) == 0
}
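Note that MustDir is the one value type with a side effect: validating it creates the directory if it is missing. A sketch:

var dir string
d := NewMustDir(&dir, "./config")

// Validate creates ./config with mode 0750 if it does not exist yet and
// then checks that the resulting path really is a directory.
if err := d.Validate(); err != nil {
    // the path could not be created, or it exists but is not a directory
}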

config/value/primitives.go Normal file

@@ -0,0 +1,271 @@
package value
import (
"fmt"
"strconv"
"strings"
)
// string
type String string
func NewString(p *string, val string) *String {
*p = val
return (*String)(p)
}
func (s *String) Set(val string) error {
*s = String(val)
return nil
}
func (s *String) String() string {
return string(*s)
}
func (s *String) Validate() error {
return nil
}
func (s *String) IsEmpty() bool {
return len(string(*s)) == 0
}
// array of strings
type StringList struct {
p *[]string
separator string
}
func NewStringList(p *[]string, val []string, separator string) *StringList {
v := &StringList{
p: p,
separator: separator,
}
*p = val
return v
}
func (s *StringList) Set(val string) error {
list := []string{}
for _, elm := range strings.Split(val, s.separator) {
elm = strings.TrimSpace(elm)
if len(elm) != 0 {
list = append(list, elm)
}
}
*s.p = list
return nil
}
func (s *StringList) String() string {
if s.IsEmpty() {
return "(empty)"
}
return strings.Join(*s.p, s.separator)
}
func (s *StringList) Validate() error {
return nil
}
func (s *StringList) IsEmpty() bool {
return len(*s.p) == 0
}
// map of strings to strings
type StringMapString struct {
p *map[string]string
}
func NewStringMapString(p *map[string]string, val map[string]string) *StringMapString {
v := &StringMapString{
p: p,
}
if *p == nil {
*p = make(map[string]string)
}
if val != nil {
*p = val
}
return v
}
func (s *StringMapString) Set(val string) error {
mappings := make(map[string]string)
for _, elm := range strings.Split(val, " ") {
elm = strings.TrimSpace(elm)
if len(elm) == 0 {
continue
}
mapping := strings.SplitN(elm, ":", 2)
// Reject entries without a ":", which would otherwise panic on mapping[1]
if len(mapping) != 2 {
return fmt.Errorf("invalid mapping %q, expected key:value", elm)
}
mappings[mapping[0]] = mapping[1]
}
*s.p = mappings
return nil
}
func (s *StringMapString) String() string {
if s.IsEmpty() {
return "(empty)"
}
mappings := make([]string, len(*s.p))
i := 0
for k, v := range *s.p {
mappings[i] = k + ":" + v
i++
}
return strings.Join(mappings, " ")
}
func (s *StringMapString) Validate() error {
return nil
}
func (s *StringMapString) IsEmpty() bool {
return len(*s.p) == 0
}
// boolean
type Bool bool
func NewBool(p *bool, val bool) *Bool {
*p = val
return (*Bool)(p)
}
func (b *Bool) Set(val string) error {
v, err := strconv.ParseBool(val)
if err != nil {
return err
}
*b = Bool(v)
return nil
}
func (b *Bool) String() string {
return strconv.FormatBool(bool(*b))
}
func (b *Bool) Validate() error {
return nil
}
func (b *Bool) IsEmpty() bool {
return !bool(*b)
}
// int
type Int int
func NewInt(p *int, val int) *Int {
*p = val
return (*Int)(p)
}
func (i *Int) Set(val string) error {
v, err := strconv.Atoi(val)
if err != nil {
return err
}
*i = Int(v)
return nil
}
func (i *Int) String() string {
return strconv.Itoa(int(*i))
}
func (i *Int) Validate() error {
return nil
}
func (i *Int) IsEmpty() bool {
return int(*i) == 0
}
// int64
type Int64 int64
func NewInt64(p *int64, val int64) *Int64 {
*p = val
return (*Int64)(p)
}
func (u *Int64) Set(val string) error {
v, err := strconv.ParseInt(val, 0, 64)
if err != nil {
return err
}
*u = Int64(v)
return nil
}
func (u *Int64) String() string {
return strconv.FormatInt(int64(*u), 10)
}
func (u *Int64) Validate() error {
return nil
}
func (u *Int64) IsEmpty() bool {
return int64(*u) == 0
}
// uint64
type Uint64 uint64
func NewUint64(p *uint64, val uint64) *Uint64 {
*p = val
return (*Uint64)(p)
}
func (u *Uint64) Set(val string) error {
v, err := strconv.ParseUint(val, 0, 64)
if err != nil {
return err
}
*u = Uint64(v)
return nil
}
func (u *Uint64) String() string {
return strconv.FormatUint(uint64(*u), 10)
}
func (u *Uint64) Validate() error {
return nil
}
func (u *Uint64) IsEmpty() bool {
return uint64(*u) == 0
}
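For example, router.routes (a StringMapString) is parsed from space-separated key:value pairs, splitting only on the first colon so values may themselves contain colons:

var routes map[string]string
m := NewStringMapString(&routes, nil)

_ = m.Set("/foo.mp4:/bar.mp4 /ui:/path/to/ui")
// routes == map[string]string{"/foo.mp4": "/bar.mp4", "/ui": "/path/to/ui"}
fmt.Println(m.String())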

config/value/time.go Normal file

@@ -0,0 +1,36 @@
package value
import "time"
// time
type Time time.Time
func NewTime(p *time.Time, val time.Time) *Time {
*p = val
return (*Time)(p)
}
func (u *Time) Set(val string) error {
v, err := time.Parse(time.RFC3339, val)
if err != nil {
return err
}
*u = Time(v)
return nil
}
func (u *Time) String() string {
v := time.Time(*u)
return v.Format(time.RFC3339)
}
func (u *Time) Validate() error {
return nil
}
func (u *Time) IsEmpty() bool {
v := time.Time(*u)
return v.IsZero()
}

config/value/value.go Normal file

@@ -0,0 +1,21 @@
package value
type Value interface {
// String returns a string representation of the value.
String() string
// Set a new value for the value. Returns an
// error if the given string representation can't
// be transformed to the value. Returns nil
// if the new value has been set.
Set(string) error
// Validate the value. The returned error will
// indicate what is wrong with the current value.
// Returns nil if the value is OK.
Validate() error
// IsEmpty returns whether the value represents an empty
// representation for that value.
IsEmpty() bool
}
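Every config type above satisfies this interface, so adding a new one only requires these four methods. A hypothetical Seconds type as a sketch (illustration only, not part of this changeset; assumes "fmt" and "strconv" are imported):

type Seconds int

func NewSeconds(p *int, val int) *Seconds {
    *p = val
    return (*Seconds)(p)
}

func (s *Seconds) Set(val string) error {
    v, err := strconv.Atoi(val)
    if err != nil {
        return err
    }
    *s = Seconds(v)
    return nil
}

func (s *Seconds) String() string { return strconv.Itoa(int(*s)) }

func (s *Seconds) Validate() error {
    if int(*s) < 0 {
        return fmt.Errorf("must not be negative")
    }
    return nil
}

func (s *Seconds) IsEmpty() bool { return int(*s) == 0 }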


@@ -0,0 +1,58 @@
package value
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestIntValue(t *testing.T) {
var i int
ivar := NewInt(&i, 11)
require.Equal(t, "11", ivar.String())
require.Equal(t, nil, ivar.Validate())
require.Equal(t, false, ivar.IsEmpty())
i = 42
require.Equal(t, "42", ivar.String())
require.Equal(t, nil, ivar.Validate())
require.Equal(t, false, ivar.IsEmpty())
ivar.Set("77")
require.Equal(t, int(77), i)
}
type testdata struct {
value1 int
value2 int
}
func TestCopyStruct(t *testing.T) {
data1 := testdata{}
NewInt(&data1.value1, 1)
NewInt(&data1.value2, 2)
require.Equal(t, int(1), data1.value1)
require.Equal(t, int(2), data1.value2)
data2 := testdata{}
val21 := NewInt(&data2.value1, 3)
val22 := NewInt(&data2.value2, 4)
require.Equal(t, int(3), data2.value1)
require.Equal(t, int(4), data2.value2)
data2 = data1
require.Equal(t, int(1), data2.value1)
require.Equal(t, int(2), data2.value2)
require.Equal(t, "1", val21.String())
require.Equal(t, "2", val22.String())
}

config/vars/vars.go Normal file

@@ -0,0 +1,216 @@
package vars
import (
"fmt"
"os"
"github.com/datarhei/core/v16/config/value"
)
type variable struct {
value value.Value // The actual value
defVal string // The default value in string representation
name string // A name for this value
envName string // The environment variable that corresponds to this value
envAltNames []string // Alternative environment variable names
description string // A description for this value
required bool // Whether a non-empty value is required
disguise bool // Whether the value should be disguised if printed
merged bool // Whether this value has been replaced by its corresponding environment variable
}
type Variable struct {
Value string
Name string
EnvName string
Description string
Merged bool
}
type message struct {
message string // The log message
variable Variable // The config field this message refers to
level string // The loglevel for this message
}
type Variables struct {
vars []*variable
logs []message
}
func (vs *Variables) Register(val value.Value, name, envName string, envAltNames []string, description string, required, disguise bool) {
vs.vars = append(vs.vars, &variable{
value: val,
defVal: val.String(),
name: name,
envName: envName,
envAltNames: envAltNames,
description: description,
required: required,
disguise: disguise,
})
}
func (vs *Variables) Transfer(vss *Variables) {
for _, v := range vs.vars {
if vss.IsMerged(v.name) {
v.merged = true
}
}
}
func (vs *Variables) SetDefault(name string) {
v := vs.findVariable(name)
if v == nil {
return
}
v.value.Set(v.defVal)
}
func (vs *Variables) Get(name string) (string, error) {
v := vs.findVariable(name)
if v == nil {
return "", fmt.Errorf("variable not found")
}
return v.value.String(), nil
}
func (vs *Variables) Set(name, val string) error {
v := vs.findVariable(name)
if v == nil {
return fmt.Errorf("variable not found")
}
return v.value.Set(val)
}
func (vs *Variables) Log(level, name string, format string, args ...interface{}) {
v := vs.findVariable(name)
if v == nil {
return
}
variable := Variable{
Value: v.value.String(),
Name: v.name,
EnvName: v.envName,
Description: v.description,
Merged: v.merged,
}
if v.disguise {
variable.Value = "***"
}
l := message{
message: fmt.Sprintf(format, args...),
variable: variable,
level: level,
}
vs.logs = append(vs.logs, l)
}
func (vs *Variables) Merge() {
for _, v := range vs.vars {
if len(v.envName) == 0 {
continue
}
var envval string
var ok bool
envval, ok = os.LookupEnv(v.envName)
if !ok {
foundAltName := false
for _, envName := range v.envAltNames {
envval, ok = os.LookupEnv(envName)
if ok {
foundAltName = true
vs.Log("warn", v.name, "deprecated name, please use %s", v.envName)
break
}
}
if !foundAltName {
continue
}
}
err := v.value.Set(envval)
if err != nil {
vs.Log("error", v.name, "%s", err.Error())
}
v.merged = true
}
}
func (vs *Variables) IsMerged(name string) bool {
v := vs.findVariable(name)
if v == nil {
return false
}
return v.merged
}
func (vs *Variables) Validate() {
for _, v := range vs.vars {
vs.Log("info", v.name, "%s", "")
err := v.value.Validate()
if err != nil {
vs.Log("error", v.name, "%s", err.Error())
}
if v.required && v.value.IsEmpty() {
vs.Log("error", v.name, "a value is required")
}
}
}
func (vs *Variables) ResetLogs() {
vs.logs = nil
}
func (vs *Variables) Messages(logger func(level string, v Variable, message string)) {
for _, l := range vs.logs {
logger(l.level, l.variable, l.message)
}
}
func (vs *Variables) HasErrors() bool {
for _, l := range vs.logs {
if l.level == "error" {
return true
}
}
return false
}
func (vs *Variables) Overrides() []string {
overrides := []string{}
for _, v := range vs.vars {
if v.merged {
overrides = append(overrides, v.name)
}
}
return overrides
}
func (vs *Variables) findVariable(name string) *variable {
for _, v := range vs.vars {
if v.name == name {
return v
}
}
return nil
}
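Putting the pieces together, a sketch of how a variable is registered, overridden through a deprecated alternative environment name, and reported (mirroring the registration style used in the config package):

v := vars.Variables{}

username := ""
v.Register(value.NewString(&username, "admin"),
    "api.auth.username", "CORE_API_AUTH_USERNAME", []string{"RS_USERNAME"},
    "Username", false, false)

os.Setenv("RS_USERNAME", "operator") // only the deprecated alternative is set
v.Merge()                            // sets the value and logs a deprecation warning
v.Validate()

fmt.Println(username)      // "operator"
fmt.Println(v.Overrides()) // [api.auth.username]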

config/vars/vars_test.go Normal file

@@ -0,0 +1,40 @@
package vars
import (
"testing"
"github.com/datarhei/core/v16/config/value"
"github.com/stretchr/testify/require"
)
func TestVars(t *testing.T) {
v1 := Variables{}
s := ""
v1.Register(value.NewString(&s, "foobar"), "string", "", nil, "a string", false, false)
require.Equal(t, "foobar", s)
x, _ := v1.Get("string")
require.Equal(t, "foobar", x)
v := v1.findVariable("string")
v.value.Set("barfoo")
require.Equal(t, "barfoo", s)
x, _ = v1.Get("string")
require.Equal(t, "barfoo", x)
v1.Set("string", "foobaz")
require.Equal(t, "foobaz", s)
x, _ = v1.Get("string")
require.Equal(t, "foobaz", x)
v1.SetDefault("string")
require.Equal(t, "foobar", s)
x, _ = v1.Get("string")
require.Equal(t, "foobar", x)
}


@@ -62,7 +62,7 @@ const docTemplate = `{
"operationId": "graph-playground",
"responses": {
"200": {
"description": ""
"description": "OK"
}
}
}
@@ -220,6 +220,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Retrieve the currently active Restreamer configuration",
"operationId": "config-3-get",
"responses": {
@@ -244,6 +247,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Update the current Restreamer configuration",
"operationId": "config-3-set",
"parameters": [
@@ -290,6 +296,9 @@ const docTemplate = `{
"produces": [
"text/plain"
],
"tags": [
"v16.7.2"
],
"summary": "Reload the currently active configuration",
"operationId": "config-3-reload",
"responses": {
@@ -302,7 +311,7 @@ const docTemplate = `{
}
}
},
"/api/v3/fs/disk/": {
"/api/v3/fs/disk": {
"get": {
"security": [
{
@@ -313,6 +322,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List all files on the filesystem",
"operationId": "diskfs-3-list-files",
"parameters": [
@@ -360,6 +372,9 @@ const docTemplate = `{
"application/data",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Fetch a file from the filesystem",
"operationId": "diskfs-3-get-file",
"parameters": [
@@ -406,6 +421,9 @@ const docTemplate = `{
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add a file to the filesystem",
"operationId": "diskfs-3-put-file",
"parameters": [
@@ -460,6 +478,9 @@ const docTemplate = `{
"produces": [
"text/plain"
],
"tags": [
"v16.7.2"
],
"summary": "Remove a file from the filesystem",
"operationId": "diskfs-3-delete-file",
"parameters": [
@@ -487,7 +508,7 @@ const docTemplate = `{
}
}
},
"/api/v3/fs/mem/": {
"/api/v3/fs/mem": {
"get": {
"security": [
{
@@ -498,6 +519,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List all files on the memory filesystem",
"operationId": "memfs-3-list-files",
"parameters": [
@@ -545,8 +569,11 @@ const docTemplate = `{
"application/data",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Fetch a file from the memory filesystem",
"operationId": "memfs-3-get-file-api",
"operationId": "memfs-3-get-file",
"parameters": [
{
"type": "string",
@@ -591,8 +618,11 @@ const docTemplate = `{
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add a file to the memory filesystem",
"operationId": "memfs-3-put-file-api",
"operationId": "memfs-3-put-file",
"parameters": [
{
"type": "string",
@@ -645,8 +675,11 @@ const docTemplate = `{
"produces": [
"text/plain"
],
"tags": [
"v16.7.2"
],
"summary": "Remove a file from the memory filesystem",
"operationId": "memfs-delete-file-api",
"operationId": "memfs-3-delete-file",
"parameters": [
{
"type": "string",
@@ -685,6 +718,9 @@ const docTemplate = `{
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Create a link to a file in the memory filesystem",
"operationId": "memfs-3-patch",
"parameters": [
@@ -732,6 +768,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Application log",
"operationId": "log-3",
"parameters": [
@@ -766,6 +805,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Retrieve JSON metadata from a key",
"operationId": "metadata-3-get",
"parameters": [
@@ -806,6 +848,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add JSON metadata under the given key",
"operationId": "metadata-3-set",
"parameters": [
@@ -839,6 +884,33 @@ const docTemplate = `{
}
},
"/api/v3/metrics": {
"get": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "List all known metrics with their description and labels",
"produces": [
"application/json"
],
"tags": [
"v16.10.0"
],
"summary": "List all known metrics with their description and labels",
"operationId": "metrics-3-describe",
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/api.MetricsDescription"
}
}
}
}
},
"post": {
"security": [
{
@@ -852,6 +924,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Query the collected metrics",
"operationId": "metrics-3-metrics",
"parameters": [
@@ -892,26 +967,41 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List all known processes",
"operationId": "restream-3-get-all",
"operationId": "process-3-get-all",
"parameters": [
{
"type": "string",
"description": "Comma separated list of fields (config, state, report, metadata) that will be part of the output. If empty, all fields will be part of the output",
"description": "Comma separated list of fields (config, state, report, metadata) that will be part of the output. If empty, all fields will be part of the output.",
"name": "filter",
"in": "query"
},
{
"type": "string",
"description": "Return only these process that have this reference value. Overrides a list of IDs. If empty, the reference will be ignored",
"description": "Return only these process that have this reference value. If empty, the reference will be ignored.",
"name": "reference",
"in": "query"
},
{
"type": "string",
"description": "Comma separated list of process ids to list",
"description": "Comma separated list of process ids to list. Overrides the reference. If empty all IDs will be returned.",
"name": "id",
"in": "query"
},
{
"type": "string",
"description": "Glob pattern for process IDs. If empty all IDs will be returned. Intersected with results from refpattern.",
"name": "idpattern",
"in": "query"
},
{
"type": "string",
"description": "Glob pattern for process references. If empty all IDs will be returned. Intersected with results from idpattern.",
"name": "refpattern",
"in": "query"
}
],
"responses": {
@@ -939,8 +1029,11 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add a new process",
"operationId": "restream-3-add",
"operationId": "process-3-add",
"parameters": [
{
"description": "Process config",
@@ -979,8 +1072,11 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List a process by its ID",
"operationId": "restream-3-get",
"operationId": "process-3-get",
"parameters": [
{
"type": "string",
@@ -1017,15 +1113,18 @@ const docTemplate = `{
"ApiKeyAuth": []
}
],
"description": "Replace an existing process. This is a shortcut for DELETE+POST.",
"description": "Replace an existing process.",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Replace an existing process",
"operationId": "restream-3-update",
"operationId": "process-3-update",
"parameters": [
{
"type": "string",
@@ -1075,8 +1174,11 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Delete a process by its ID",
"operationId": "restream-3-delete",
"operationId": "process-3-delete",
"parameters": [
{
"type": "string",
@@ -1116,8 +1218,11 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Issue a command to a process",
"operationId": "restream-3-command",
"operationId": "process-3-command",
"parameters": [
{
"type": "string",
@@ -1169,8 +1274,11 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the configuration of a process",
"operationId": "restream-3-get-config",
"operationId": "process-3-get-config",
"parameters": [
{
"type": "string",
@@ -1213,8 +1321,11 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Retrieve JSON metadata stored with a process under a key",
"operationId": "restream-3-get-process-metadata",
"operationId": "process-3-get-process-metadata",
"parameters": [
{
"type": "string",
@@ -1260,8 +1371,11 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add JSON metadata with a process under the given key",
"operationId": "restream-3-set-process-metadata",
"operationId": "process-3-set-process-metadata",
"parameters": [
{
"type": "string",
@@ -1317,8 +1431,11 @@ const docTemplate = `{
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Encode the errorframe",
"operationId": "restream-3-playout-errorframencode",
"operationId": "process-3-playout-errorframencode",
"parameters": [
{
"type": "string",
@@ -1372,8 +1489,11 @@ const docTemplate = `{
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Upload an error frame",
"operationId": "restream-3-playout-errorframe",
"operationId": "process-3-playout-errorframe",
"parameters": [
{
"type": "string",
@@ -1444,8 +1564,11 @@ const docTemplate = `{
"image/png",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the last keyframe",
"operationId": "restream-3-playout-keyframe",
"operationId": "process-3-playout-keyframe",
"parameters": [
{
"type": "string",
@@ -1502,8 +1625,11 @@ const docTemplate = `{
"produces": [
"text/plain"
],
"tags": [
"v16.7.2"
],
"summary": "Close the current input stream",
"operationId": "restream-3-playout-reopen-input",
"operationId": "process-3-playout-reopen-input",
"parameters": [
{
"type": "string",
@@ -1553,8 +1679,11 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the current playout status",
"operationId": "restream-3-playout-status",
"operationId": "process-3-playout-status",
"parameters": [
{
"type": "string",
@@ -1608,8 +1737,11 @@ const docTemplate = `{
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Switch to a new stream",
"operationId": "restream-3-playout-stream",
"operationId": "process-3-playout-stream",
"parameters": [
{
"type": "string",
@@ -1664,12 +1796,15 @@ const docTemplate = `{
"ApiKeyAuth": []
}
],
"description": "Probe an existing process to get a detailed stream information on the inputs",
"description": "Probe an existing process to get a detailed stream information on the inputs.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Probe a process",
"operationId": "restream-3-probe",
"operationId": "process-3-probe",
"parameters": [
{
"type": "string",
@@ -1696,12 +1831,15 @@ const docTemplate = `{
"ApiKeyAuth": []
}
],
"description": "Get the logs and the log history of a process",
"description": "Get the logs and the log history of a process.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the logs of a process",
"operationId": "restream-3-get-report",
"operationId": "process-3-get-report",
"parameters": [
{
"type": "string",
@@ -1740,12 +1878,15 @@ const docTemplate = `{
"ApiKeyAuth": []
}
],
"description": "Get the state and progress data of a process",
"description": "Get the state and progress data of a process.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the state of a process",
"operationId": "restream-3-get-state",
"operationId": "process-3-get-state",
"parameters": [
{
"type": "string",
@@ -1784,10 +1925,13 @@ const docTemplate = `{
"ApiKeyAuth": []
}
],
"description": "List all currently publishing RTMP streams",
"description": "List all currently publishing RTMP streams.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List all publishing RTMP streams",
"operationId": "rtmp-3-list-channels",
"responses": {
@@ -1810,10 +1954,13 @@ const docTemplate = `{
"ApiKeyAuth": []
}
],
"description": "Get a summary of all active and past sessions of the given collector",
"description": "Get a summary of all active and past sessions of the given collector.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get a summary of all active and past sessions",
"operationId": "session-3-summary",
"parameters": [
@@ -1841,10 +1988,13 @@ const docTemplate = `{
"ApiKeyAuth": []
}
],
"description": "Get a minimal summary of all active sessions (i.e. number of sessions, bandwidth)",
"description": "Get a minimal summary of all active sessions (i.e. number of sessions, bandwidth).",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get a minimal summary of all active sessions",
"operationId": "session-3-current",
"parameters": [
@@ -1872,10 +2022,13 @@ const docTemplate = `{
"ApiKeyAuth": []
}
],
"description": "List all detected FFmpeg capabilities",
"description": "List all detected FFmpeg capabilities.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "FFmpeg capabilities",
"operationId": "skills-3",
"responses": {
@@ -1895,10 +2048,13 @@ const docTemplate = `{
"ApiKeyAuth": []
}
],
"description": "Refresh the available FFmpeg capabilities",
"description": "Refresh the available FFmpeg capabilities.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Refresh FFmpeg capabilities",
"operationId": "skills-3-reload",
"responses": {
@@ -1922,6 +2078,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.9.0"
],
"summary": "List all publishing SRT treams",
"operationId": "srt-3-list-channels",
"responses": {
@@ -1943,6 +2102,9 @@ const docTemplate = `{
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Fetch minimal statistics about a process",
"operationId": "widget-3-get",
"parameters": [
@@ -2391,7 +2553,7 @@ const docTemplate = `{
"tenants": {
"type": "array",
"items": {
"$ref": "#/definitions/config.Auth0Tenant"
"$ref": "#/definitions/value.Auth0Tenant"
}
}
}
@@ -2609,6 +2771,9 @@ const docTemplate = `{
"address": {
"type": "string"
},
"address_tls": {
"type": "string"
},
"app": {
"type": "string"
},
@@ -2730,9 +2895,20 @@ const docTemplate = `{
"type": "integer"
},
"types": {
"type": "array",
"items": {
"type": "string"
"type": "object",
"properties": {
"allow": {
"type": "array",
"items": {
"type": "string"
}
},
"block": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
}
@@ -2787,6 +2963,9 @@ const docTemplate = `{
"cert_file": {
"type": "string"
},
"email": {
"type": "string"
},
"enable": {
"type": "boolean"
},
@@ -2900,6 +3079,23 @@ const docTemplate = `{
}
}
},
"api.MetricsDescription": {
"type": "object",
"properties": {
"description": {
"type": "string"
},
"labels": {
"type": "array",
"items": {
"type": "string"
}
},
"name": {
"type": "string"
}
}
},
"api.MetricsQuery": {
"type": "object",
"properties": {
@@ -3641,7 +3837,7 @@ const docTemplate = `{
"description": "The total number of received KM (Key Material) control packets",
"type": "integer"
},
"recv_loss__bytes": {
"recv_loss_bytes": {
"description": "Same as pktRcvLoss, but expressed in bytes, including payload and all the headers (IP, TCP, SRT), bytes for the presently missing (either reordered or lost) packets' payloads are estimated based on the average packet size",
"type": "integer"
},
@@ -3749,7 +3945,7 @@ const docTemplate = `{
"description": "The total number of retransmitted packets sent by the SRT sender",
"type": "integer"
},
"sent_unique__bytes": {
"sent_unique_bytes": {
"description": "Same as pktSentUnique, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)",
"type": "integer"
},
@@ -3985,7 +4181,7 @@ const docTemplate = `{
"tenants": {
"type": "array",
"items": {
"$ref": "#/definitions/config.Auth0Tenant"
"$ref": "#/definitions/value.Auth0Tenant"
}
}
}
@@ -4203,6 +4399,9 @@ const docTemplate = `{
"address": {
"type": "string"
},
"address_tls": {
"type": "string"
},
"app": {
"type": "string"
},
@@ -4324,9 +4523,20 @@ const docTemplate = `{
"type": "integer"
},
"types": {
"type": "array",
"items": {
"type": "string"
"type": "object",
"properties": {
"allow": {
"type": "array",
"items": {
"type": "string"
}
},
"block": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
}
@@ -4381,6 +4591,9 @@ const docTemplate = `{
"cert_file": {
"type": "string"
},
"email": {
"type": "string"
},
"enable": {
"type": "boolean"
},
@@ -4660,7 +4873,7 @@ const docTemplate = `{
}
}
},
"config.Auth0Tenant": {
"value.Auth0Tenant": {
"type": "object",
"properties": {
"audience": {


@@ -54,7 +54,7 @@
"operationId": "graph-playground",
"responses": {
"200": {
"description": ""
"description": "OK"
}
}
}
@@ -212,6 +212,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Retrieve the currently active Restreamer configuration",
"operationId": "config-3-get",
"responses": {
@@ -236,6 +239,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Update the current Restreamer configuration",
"operationId": "config-3-set",
"parameters": [
@@ -282,6 +288,9 @@
"produces": [
"text/plain"
],
"tags": [
"v16.7.2"
],
"summary": "Reload the currently active configuration",
"operationId": "config-3-reload",
"responses": {
@@ -294,7 +303,7 @@
}
}
},
"/api/v3/fs/disk/": {
"/api/v3/fs/disk": {
"get": {
"security": [
{
@@ -305,6 +314,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List all files on the filesystem",
"operationId": "diskfs-3-list-files",
"parameters": [
@@ -352,6 +364,9 @@
"application/data",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Fetch a file from the filesystem",
"operationId": "diskfs-3-get-file",
"parameters": [
@@ -398,6 +413,9 @@
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add a file to the filesystem",
"operationId": "diskfs-3-put-file",
"parameters": [
@@ -452,6 +470,9 @@
"produces": [
"text/plain"
],
"tags": [
"v16.7.2"
],
"summary": "Remove a file from the filesystem",
"operationId": "diskfs-3-delete-file",
"parameters": [
@@ -479,7 +500,7 @@
}
}
},
"/api/v3/fs/mem/": {
"/api/v3/fs/mem": {
"get": {
"security": [
{
@@ -490,6 +511,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List all files on the memory filesystem",
"operationId": "memfs-3-list-files",
"parameters": [
@@ -537,8 +561,11 @@
"application/data",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Fetch a file from the memory filesystem",
"operationId": "memfs-3-get-file-api",
"operationId": "memfs-3-get-file",
"parameters": [
{
"type": "string",
@@ -583,8 +610,11 @@
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add a file to the memory filesystem",
"operationId": "memfs-3-put-file-api",
"operationId": "memfs-3-put-file",
"parameters": [
{
"type": "string",
@@ -637,8 +667,11 @@
"produces": [
"text/plain"
],
"tags": [
"v16.7.2"
],
"summary": "Remove a file from the memory filesystem",
"operationId": "memfs-delete-file-api",
"operationId": "memfs-3-delete-file",
"parameters": [
{
"type": "string",
@@ -677,6 +710,9 @@
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Create a link to a file in the memory filesystem",
"operationId": "memfs-3-patch",
"parameters": [
@@ -724,6 +760,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Application log",
"operationId": "log-3",
"parameters": [
@@ -758,6 +797,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Retrieve JSON metadata from a key",
"operationId": "metadata-3-get",
"parameters": [
@@ -798,6 +840,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add JSON metadata under the given key",
"operationId": "metadata-3-set",
"parameters": [
@@ -831,6 +876,33 @@
}
},
"/api/v3/metrics": {
"get": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "List all known metrics with their description and labels",
"produces": [
"application/json"
],
"tags": [
"v16.10.0"
],
"summary": "List all known metrics with their description and labels",
"operationId": "metrics-3-describe",
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/api.MetricsDescription"
}
}
}
}
},
"post": {
"security": [
{
@@ -844,6 +916,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Query the collected metrics",
"operationId": "metrics-3-metrics",
"parameters": [
@@ -884,26 +959,41 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List all known processes",
"operationId": "restream-3-get-all",
"operationId": "process-3-get-all",
"parameters": [
{
"type": "string",
"description": "Comma separated list of fields (config, state, report, metadata) that will be part of the output. If empty, all fields will be part of the output",
"description": "Comma separated list of fields (config, state, report, metadata) that will be part of the output. If empty, all fields will be part of the output.",
"name": "filter",
"in": "query"
},
{
"type": "string",
"description": "Return only these process that have this reference value. Overrides a list of IDs. If empty, the reference will be ignored",
"description": "Return only these process that have this reference value. If empty, the reference will be ignored.",
"name": "reference",
"in": "query"
},
{
"type": "string",
"description": "Comma separated list of process ids to list",
"description": "Comma separated list of process ids to list. Overrides the reference. If empty all IDs will be returned.",
"name": "id",
"in": "query"
},
{
"type": "string",
"description": "Glob pattern for process IDs. If empty all IDs will be returned. Intersected with results from refpattern.",
"name": "idpattern",
"in": "query"
},
{
"type": "string",
"description": "Glob pattern for process references. If empty all IDs will be returned. Intersected with results from idpattern.",
"name": "refpattern",
"in": "query"
}
],
"responses": {
@@ -931,8 +1021,11 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add a new process",
"operationId": "restream-3-add",
"operationId": "process-3-add",
"parameters": [
{
"description": "Process config",
@@ -971,8 +1064,11 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List a process by its ID",
"operationId": "restream-3-get",
"operationId": "process-3-get",
"parameters": [
{
"type": "string",
@@ -1009,15 +1105,18 @@
"ApiKeyAuth": []
}
],
"description": "Replace an existing process. This is a shortcut for DELETE+POST.",
"description": "Replace an existing process.",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Replace an existing process",
"operationId": "restream-3-update",
"operationId": "process-3-update",
"parameters": [
{
"type": "string",
@@ -1067,8 +1166,11 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Delete a process by its ID",
"operationId": "restream-3-delete",
"operationId": "process-3-delete",
"parameters": [
{
"type": "string",
@@ -1108,8 +1210,11 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Issue a command to a process",
"operationId": "restream-3-command",
"operationId": "process-3-command",
"parameters": [
{
"type": "string",
@@ -1161,8 +1266,11 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the configuration of a process",
"operationId": "restream-3-get-config",
"operationId": "process-3-get-config",
"parameters": [
{
"type": "string",
@@ -1205,8 +1313,11 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Retrieve JSON metadata stored with a process under a key",
"operationId": "restream-3-get-process-metadata",
"operationId": "process-3-get-process-metadata",
"parameters": [
{
"type": "string",
@@ -1252,8 +1363,11 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Add JSON metadata with a process under the given key",
"operationId": "restream-3-set-process-metadata",
"operationId": "process-3-set-process-metadata",
"parameters": [
{
"type": "string",
@@ -1309,8 +1423,11 @@
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Encode the errorframe",
"operationId": "restream-3-playout-errorframencode",
"operationId": "process-3-playout-errorframencode",
"parameters": [
{
"type": "string",
@@ -1364,8 +1481,11 @@
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Upload an error frame",
"operationId": "restream-3-playout-errorframe",
"operationId": "process-3-playout-errorframe",
"parameters": [
{
"type": "string",
@@ -1436,8 +1556,11 @@
"image/png",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the last keyframe",
"operationId": "restream-3-playout-keyframe",
"operationId": "process-3-playout-keyframe",
"parameters": [
{
"type": "string",
@@ -1494,8 +1617,11 @@
"produces": [
"text/plain"
],
"tags": [
"v16.7.2"
],
"summary": "Close the current input stream",
"operationId": "restream-3-playout-reopen-input",
"operationId": "process-3-playout-reopen-input",
"parameters": [
{
"type": "string",
@@ -1545,8 +1671,11 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the current playout status",
"operationId": "restream-3-playout-status",
"operationId": "process-3-playout-status",
"parameters": [
{
"type": "string",
@@ -1600,8 +1729,11 @@
"text/plain",
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Switch to a new stream",
"operationId": "restream-3-playout-stream",
"operationId": "process-3-playout-stream",
"parameters": [
{
"type": "string",
@@ -1656,12 +1788,15 @@
"ApiKeyAuth": []
}
],
"description": "Probe an existing process to get a detailed stream information on the inputs",
"description": "Probe an existing process to get a detailed stream information on the inputs.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Probe a process",
"operationId": "restream-3-probe",
"operationId": "process-3-probe",
"parameters": [
{
"type": "string",
@@ -1688,12 +1823,15 @@
"ApiKeyAuth": []
}
],
"description": "Get the logs and the log history of a process",
"description": "Get the logs and the log history of a process.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the logs of a process",
"operationId": "restream-3-get-report",
"operationId": "process-3-get-report",
"parameters": [
{
"type": "string",
@@ -1732,12 +1870,15 @@
"ApiKeyAuth": []
}
],
"description": "Get the state and progress data of a process",
"description": "Get the state and progress data of a process.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get the state of a process",
"operationId": "restream-3-get-state",
"operationId": "process-3-get-state",
"parameters": [
{
"type": "string",
@@ -1776,10 +1917,13 @@
"ApiKeyAuth": []
}
],
"description": "List all currently publishing RTMP streams",
"description": "List all currently publishing RTMP streams.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "List all publishing RTMP streams",
"operationId": "rtmp-3-list-channels",
"responses": {
@@ -1802,10 +1946,13 @@
"ApiKeyAuth": []
}
],
"description": "Get a summary of all active and past sessions of the given collector",
"description": "Get a summary of all active and past sessions of the given collector.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get a summary of all active and past sessions",
"operationId": "session-3-summary",
"parameters": [
@@ -1833,10 +1980,13 @@
"ApiKeyAuth": []
}
],
"description": "Get a minimal summary of all active sessions (i.e. number of sessions, bandwidth)",
"description": "Get a minimal summary of all active sessions (i.e. number of sessions, bandwidth).",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Get a minimal summary of all active sessions",
"operationId": "session-3-current",
"parameters": [
@@ -1864,10 +2014,13 @@
"ApiKeyAuth": []
}
],
"description": "List all detected FFmpeg capabilities",
"description": "List all detected FFmpeg capabilities.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "FFmpeg capabilities",
"operationId": "skills-3",
"responses": {
@@ -1887,10 +2040,13 @@
"ApiKeyAuth": []
}
],
"description": "Refresh the available FFmpeg capabilities",
"description": "Refresh the available FFmpeg capabilities.",
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Refresh FFmpeg capabilities",
"operationId": "skills-3-reload",
"responses": {
@@ -1914,6 +2070,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.9.0"
],
"summary": "List all publishing SRT treams",
"operationId": "srt-3-list-channels",
"responses": {
@@ -1935,6 +2094,9 @@
"produces": [
"application/json"
],
"tags": [
"v16.7.2"
],
"summary": "Fetch minimal statistics about a process",
"operationId": "widget-3-get",
"parameters": [
@@ -2383,7 +2545,7 @@
"tenants": {
"type": "array",
"items": {
"$ref": "#/definitions/config.Auth0Tenant"
"$ref": "#/definitions/value.Auth0Tenant"
}
}
}
@@ -2601,6 +2763,9 @@
"address": {
"type": "string"
},
"address_tls": {
"type": "string"
},
"app": {
"type": "string"
},
@@ -2722,9 +2887,20 @@
"type": "integer"
},
"types": {
"type": "array",
"items": {
"type": "string"
"type": "object",
"properties": {
"allow": {
"type": "array",
"items": {
"type": "string"
}
},
"block": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
}
@@ -2779,6 +2955,9 @@
"cert_file": {
"type": "string"
},
"email": {
"type": "string"
},
"enable": {
"type": "boolean"
},
@@ -2892,6 +3071,23 @@
}
}
},
"api.MetricsDescription": {
"type": "object",
"properties": {
"description": {
"type": "string"
},
"labels": {
"type": "array",
"items": {
"type": "string"
}
},
"name": {
"type": "string"
}
}
},
"api.MetricsQuery": {
"type": "object",
"properties": {
@@ -3633,7 +3829,7 @@
"description": "The total number of received KM (Key Material) control packets",
"type": "integer"
},
"recv_loss__bytes": {
"recv_loss_bytes": {
"description": "Same as pktRcvLoss, but expressed in bytes, including payload and all the headers (IP, TCP, SRT), bytes for the presently missing (either reordered or lost) packets' payloads are estimated based on the average packet size",
"type": "integer"
},
@@ -3741,7 +3937,7 @@
"description": "The total number of retransmitted packets sent by the SRT sender",
"type": "integer"
},
"sent_unique__bytes": {
"sent_unique_bytes": {
"description": "Same as pktSentUnique, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)",
"type": "integer"
},
@@ -3977,7 +4173,7 @@
"tenants": {
"type": "array",
"items": {
"$ref": "#/definitions/config.Auth0Tenant"
"$ref": "#/definitions/value.Auth0Tenant"
}
}
}
@@ -4195,6 +4391,9 @@
"address": {
"type": "string"
},
"address_tls": {
"type": "string"
},
"app": {
"type": "string"
},
@@ -4316,9 +4515,20 @@
"type": "integer"
},
"types": {
"type": "array",
"items": {
"type": "string"
"type": "object",
"properties": {
"allow": {
"type": "array",
"items": {
"type": "string"
}
},
"block": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
}
@@ -4373,6 +4583,9 @@
"cert_file": {
"type": "string"
},
"email": {
"type": "string"
},
"enable": {
"type": "boolean"
},
@@ -4652,7 +4865,7 @@
}
}
},
"config.Auth0Tenant": {
"value.Auth0Tenant": {
"type": "object",
"properties": {
"audience": {


@@ -122,7 +122,7 @@ definitions:
type: boolean
tenants:
items:
$ref: '#/definitions/config.Auth0Tenant'
$ref: '#/definitions/value.Auth0Tenant'
type: array
type: object
disable_localhost:
@@ -264,6 +264,8 @@ definitions:
properties:
address:
type: string
address_tls:
type: string
app:
type: string
enable:
@@ -343,9 +345,16 @@ definitions:
ttl_seconds:
type: integer
types:
items:
type: string
type: array
properties:
allow:
items:
type: string
type: array
block:
items:
type: string
type: array
type: object
type: object
dir:
type: string
@@ -379,6 +388,8 @@ definitions:
type: boolean
cert_file:
type: string
email:
type: string
enable:
type: boolean
key_file:
@@ -453,6 +464,17 @@ definitions:
- password
- username
type: object
api.MetricsDescription:
properties:
description:
type: string
labels:
items:
type: string
type: array
name:
type: string
type: object
api.MetricsQuery:
properties:
interval_sec:
@@ -957,7 +979,7 @@ definitions:
recv_km_pkt:
description: The total number of received KM (Key Material) control packets
type: integer
recv_loss__bytes:
recv_loss_bytes:
description: Same as pktRcvLoss, but expressed in bytes, including payload
and all the headers (IP, TCP, SRT), bytes for the presently missing (either
reordered or lost) packets' payloads are estimated based on the average
@@ -1066,7 +1088,7 @@ definitions:
sent_retrans_pkt:
description: The total number of retransmitted packets sent by the SRT sender
type: integer
sent_unique__bytes:
sent_unique_bytes:
description: Same as pktSentUnique, but expressed in bytes, including payload
and all the headers (IP, TCP, SRT)
type: integer
@@ -1225,7 +1247,7 @@ definitions:
type: boolean
tenants:
items:
$ref: '#/definitions/config.Auth0Tenant'
$ref: '#/definitions/value.Auth0Tenant'
type: array
type: object
disable_localhost:
@@ -1367,6 +1389,8 @@ definitions:
properties:
address:
type: string
address_tls:
type: string
app:
type: string
enable:
@@ -1446,9 +1470,16 @@ definitions:
ttl_seconds:
type: integer
types:
items:
type: string
type: array
properties:
allow:
items:
type: string
type: array
block:
items:
type: string
type: array
type: object
type: object
dir:
type: string
@@ -1482,6 +1513,8 @@ definitions:
type: boolean
cert_file:
type: string
email:
type: string
enable:
type: boolean
key_file:
@@ -1662,7 +1695,7 @@ definitions:
uptime:
type: integer
type: object
config.Auth0Tenant:
value.Auth0Tenant:
properties:
audience:
type: string
@@ -1738,7 +1771,7 @@ paths:
- text/html
responses:
"200":
description: ""
description: OK
security:
- ApiKeyAuth: []
summary: Load GraphQL playground
@@ -1847,6 +1880,8 @@ paths:
security:
- ApiKeyAuth: []
summary: Retrieve the currently active Restreamer configuration
tags:
- v16.7.2
put:
consumes:
- application/json
@@ -1878,6 +1913,8 @@ paths:
security:
- ApiKeyAuth: []
summary: Update the current Restreamer configuration
tags:
- v16.7.2
/api/v3/config/reload:
get:
description: Reload the currently active configuration. This will trigger a
@@ -1893,7 +1930,9 @@ paths:
security:
- ApiKeyAuth: []
summary: Reload the currently active configuration
/api/v3/fs/disk/:
tags:
- v16.7.2
/api/v3/fs/disk:
get:
description: List all files on the filesystem. The listing can be ordered by
name, size, or date of last modification in ascending or descending order.
@@ -1923,6 +1962,8 @@ paths:
security:
- ApiKeyAuth: []
summary: List all files on the filesystem
tags:
- v16.7.2
/api/v3/fs/disk/{path}:
delete:
description: Remove a file from the filesystem
@@ -1947,6 +1988,8 @@ paths:
security:
- ApiKeyAuth: []
summary: Remove a file from the filesystem
tags:
- v16.7.2
get:
description: Fetch a file from the filesystem. The contents of that file are
returned.
@@ -1976,6 +2019,8 @@ paths:
security:
- ApiKeyAuth: []
summary: Fetch a file from the filesystem
tags:
- v16.7.2
put:
consumes:
- application/data
@@ -2014,7 +2059,9 @@ paths:
security:
- ApiKeyAuth: []
summary: Add a file to the filesystem
/api/v3/fs/mem/:
tags:
- v16.7.2
/api/v3/fs/mem:
get:
description: List all files on the memory filesystem. The listing can be ordered
by name, size, or date of last modification in ascending or descending order.
@@ -2044,10 +2091,12 @@ paths:
security:
- ApiKeyAuth: []
summary: List all files on the memory filesystem
tags:
- v16.7.2
/api/v3/fs/mem/{path}:
delete:
description: Remove a file from the memory filesystem
operationId: memfs-delete-file-api
operationId: memfs-3-delete-file
parameters:
- description: Path to file
in: path
@@ -2068,9 +2117,11 @@ paths:
security:
- ApiKeyAuth: []
summary: Remove a file from the memory filesystem
tags:
- v16.7.2
get:
description: Fetch a file from the memory filesystem
operationId: memfs-3-get-file-api
operationId: memfs-3-get-file
parameters:
- description: Path to file
in: path
@@ -2096,6 +2147,8 @@ paths:
security:
- ApiKeyAuth: []
summary: Fetch a file from the memory filesystem
tags:
- v16.7.2
patch:
consumes:
- application/data
@@ -2129,11 +2182,13 @@ paths:
security:
- ApiKeyAuth: []
summary: Create a link to a file in the memory filesystem
tags:
- v16.7.2
put:
consumes:
- application/data
description: Writes or overwrites a file on the memory filesystem
operationId: memfs-3-put-file-api
operationId: memfs-3-put-file
parameters:
- description: Path to file
in: path
@@ -2167,6 +2222,8 @@ paths:
security:
- ApiKeyAuth: []
summary: Add a file to the memory filesystem
tags:
- v16.7.2
/api/v3/log:
get:
description: Get the last log lines of the Restreamer application
@@ -2188,6 +2245,8 @@ paths:
security:
- ApiKeyAuth: []
summary: Application log
tags:
- v16.7.2
/api/v3/metadata/{key}:
get:
description: Retrieve the previously stored JSON metadata under the given key.
@@ -2216,6 +2275,8 @@ paths:
security:
- ApiKeyAuth: []
summary: Retrieve JSON metadata from a key
tags:
- v16.7.2
put:
description: Add arbitrary JSON metadata under the given key. If the key exists,
all already stored metadata with this key will be overwritten. If the key
@@ -2245,7 +2306,26 @@ paths:
security:
- ApiKeyAuth: []
summary: Add JSON metadata under the given key
tags:
- v16.7.2
/api/v3/metrics:
get:
description: List all known metrics with their description and labels
operationId: metrics-3-describe
produces:
- application/json
responses:
"200":
description: OK
schema:
items:
$ref: '#/definitions/api.MetricsDescription'
type: array
security:
- ApiKeyAuth: []
summary: List all known metrics with their description and labels
tags:
- v16.10.0
post:
consumes:
- application/json
@@ -2272,27 +2352,40 @@ paths:
security:
- ApiKeyAuth: []
summary: Query the collected metrics
tags:
- v16.7.2
/api/v3/process:
get:
description: List all known processes. Use the query parameter to filter the
listed processes.
operationId: restream-3-get-all
operationId: process-3-get-all
parameters:
- description: Comma separated list of fields (config, state, report, metadata)
that will be part of the output. If empty, all fields will be part of the
output
output.
in: query
name: filter
type: string
- description: Return only these process that have this reference value. Overrides
a list of IDs. If empty, the reference will be ignored
- description: Return only these processes that have this reference value. If
empty, the reference will be ignored.
in: query
name: reference
type: string
- description: Comma separated list of process ids to list
- description: Comma separated list of process ids to list. Overrides the reference.
If empty all IDs will be returned.
in: query
name: id
type: string
- description: Glob pattern for process IDs. If empty all IDs will be returned.
Intersected with results from refpattern.
in: query
name: idpattern
type: string
- description: Glob pattern for process references. If empty all IDs will be
returned. Intersected with results from idpattern.
in: query
name: refpattern
type: string
produces:
- application/json
responses:
@@ -2305,11 +2398,13 @@ paths:
security:
- ApiKeyAuth: []
summary: List all known processes
tags:
- v16.7.2
post:
consumes:
- application/json
description: Add a new FFmpeg process
operationId: restream-3-add
operationId: process-3-add
parameters:
- description: Process config
in: body
@@ -2331,10 +2426,12 @@ paths:
security:
- ApiKeyAuth: []
summary: Add a new process
tags:
- v16.7.2
/api/v3/process/{id}:
delete:
description: Delete a process by its ID
operationId: restream-3-delete
operationId: process-3-delete
parameters:
- description: Process ID
in: path
@@ -2355,10 +2452,12 @@ paths:
security:
- ApiKeyAuth: []
summary: Delete a process by its ID
tags:
- v16.7.2
get:
description: List a process by its ID. Use the filter parameter to specifiy
the level of detail of the output.
operationId: restream-3-get
operationId: process-3-get
parameters:
- description: Process ID
in: path
@@ -2384,11 +2483,13 @@ paths:
security:
- ApiKeyAuth: []
summary: List a process by its ID
tags:
- v16.7.2
put:
consumes:
- application/json
description: Replace an existing process. This is a shortcut for DELETE+POST.
operationId: restream-3-update
description: Replace an existing process.
operationId: process-3-update
parameters:
- description: Process ID
in: path
@@ -2419,12 +2520,14 @@ paths:
security:
- ApiKeyAuth: []
summary: Replace an existing process
tags:
- v16.7.2
/api/v3/process/{id}/command:
put:
consumes:
- application/json
description: 'Issue a command to a process: start, stop, reload, restart'
operationId: restream-3-command
operationId: process-3-command
parameters:
- description: Process ID
in: path
@@ -2455,11 +2558,13 @@ paths:
security:
- ApiKeyAuth: []
summary: Issue a command to a process
tags:
- v16.7.2
/api/v3/process/{id}/config:
get:
description: Get the configuration of a process. This is the configuration as
provided by Add or Update.
operationId: restream-3-get-config
operationId: process-3-get-config
parameters:
- description: Process ID
in: path
@@ -2484,11 +2589,13 @@ paths:
security:
- ApiKeyAuth: []
summary: Get the configuration of a process
tags:
- v16.7.2
/api/v3/process/{id}/metadata/{key}:
get:
description: Retrieve the previously stored JSON metadata under the given key.
If the key is empty, all metadata will be returned.
operationId: restream-3-get-process-metadata
operationId: process-3-get-process-metadata
parameters:
- description: Process ID
in: path
@@ -2517,11 +2624,13 @@ paths:
security:
- ApiKeyAuth: []
summary: Retrieve JSON metadata stored with a process under a key
tags:
- v16.7.2
put:
description: Add arbitrary JSON metadata under the given key. If the key exists,
all already stored metadata with this key will be overwritten. If the key
doesn't exist, it will be created.
operationId: restream-3-set-process-metadata
operationId: process-3-set-process-metadata
parameters:
- description: Process ID
in: path
@@ -2556,12 +2665,14 @@ paths:
security:
- ApiKeyAuth: []
summary: Add JSON metadata with a process under the given key
tags:
- v16.7.2
/api/v3/process/{id}/playout/{inputid}/errorframe/{name}:
post:
consumes:
- application/octet-stream
description: Upload an error frame which will be encoded immediately
operationId: restream-3-playout-errorframe
operationId: process-3-playout-errorframe
parameters:
- description: Process ID
in: path
@@ -2605,10 +2716,12 @@ paths:
security:
- ApiKeyAuth: []
summary: Upload an error frame
tags:
- v16.7.2
/api/v3/process/{id}/playout/{inputid}/errorframe/encode:
get:
description: Immediately encode the errorframe (if available and looping)
operationId: restream-3-playout-errorframencode
operationId: process-3-playout-errorframencode
parameters:
- description: Process ID
in: path
@@ -2639,11 +2752,13 @@ paths:
security:
- ApiKeyAuth: []
summary: Encode the errorframe
tags:
- v16.7.2
/api/v3/process/{id}/playout/{inputid}/keyframe/{name}:
get:
description: Get the last keyframe of an input of a process. The extension of
the name determines the return type.
operationId: restream-3-playout-keyframe
operationId: process-3-playout-keyframe
parameters:
- description: Process ID
in: path
@@ -2680,11 +2795,13 @@ paths:
security:
- ApiKeyAuth: []
summary: Get the last keyframe
tags:
- v16.7.2
/api/v3/process/{id}/playout/{inputid}/reopen:
get:
description: Close the current input stream such that it will be automatically
re-opened
operationId: restream-3-playout-reopen-input
operationId: process-3-playout-reopen-input
parameters:
- description: Process ID
in: path
@@ -2714,10 +2831,12 @@ paths:
security:
- ApiKeyAuth: []
summary: Close the current input stream
tags:
- v16.7.2
/api/v3/process/{id}/playout/{inputid}/status:
get:
description: Get the current playout status of an input of a process
operationId: restream-3-playout-status
operationId: process-3-playout-status
parameters:
- description: Process ID
in: path
@@ -2747,13 +2866,15 @@ paths:
security:
- ApiKeyAuth: []
summary: Get the current playout status
tags:
- v16.7.2
/api/v3/process/{id}/playout/{inputid}/stream:
put:
consumes:
- text/plain
description: Replace the current stream with the one from the given URL. The
switch will only happen if the stream parameters match.
operationId: restream-3-playout-stream
operationId: process-3-playout-stream
parameters:
- description: Process ID
in: path
@@ -2790,11 +2911,13 @@ paths:
security:
- ApiKeyAuth: []
summary: Switch to a new stream
tags:
- v16.7.2
/api/v3/process/{id}/probe:
get:
description: Probe an existing process to get a detailed stream information
on the inputs
operationId: restream-3-probe
on the inputs.
operationId: process-3-probe
parameters:
- description: Process ID
in: path
@@ -2811,10 +2934,12 @@ paths:
security:
- ApiKeyAuth: []
summary: Probe a process
tags:
- v16.7.2
/api/v3/process/{id}/report:
get:
description: Get the logs and the log history of a process
operationId: restream-3-get-report
description: Get the logs and the log history of a process.
operationId: process-3-get-report
parameters:
- description: Process ID
in: path
@@ -2839,10 +2964,12 @@ paths:
security:
- ApiKeyAuth: []
summary: Get the logs of a process
tags:
- v16.7.2
/api/v3/process/{id}/state:
get:
description: Get the state and progress data of a process
operationId: restream-3-get-state
description: Get the state and progress data of a process.
operationId: process-3-get-state
parameters:
- description: Process ID
in: path
@@ -2867,9 +2994,11 @@ paths:
security:
- ApiKeyAuth: []
summary: Get the state of a process
tags:
- v16.7.2
/api/v3/rtmp:
get:
description: List all currently publishing RTMP streams
description: List all currently publishing RTMP streams.
operationId: rtmp-3-list-channels
produces:
- application/json
@@ -2883,9 +3012,11 @@ paths:
security:
- ApiKeyAuth: []
summary: List all publishing RTMP streams
tags:
- v16.7.2
/api/v3/session:
get:
description: Get a summary of all active and past sessions of the given collector
description: Get a summary of all active and past sessions of the given collector.
operationId: session-3-summary
parameters:
- description: Comma separated list of collectors
@@ -2902,10 +3033,12 @@ paths:
security:
- ApiKeyAuth: []
summary: Get a summary of all active and past sessions
tags:
- v16.7.2
/api/v3/session/active:
get:
description: Get a minimal summary of all active sessions (i.e. number of sessions,
bandwidth)
bandwidth).
operationId: session-3-current
parameters:
- description: Comma separated list of collectors
@@ -2922,9 +3055,11 @@ paths:
security:
- ApiKeyAuth: []
summary: Get a minimal summary of all active sessions
tags:
- v16.7.2
/api/v3/skills:
get:
description: List all detected FFmpeg capabilities
description: List all detected FFmpeg capabilities.
operationId: skills-3
produces:
- application/json
@@ -2936,9 +3071,11 @@ paths:
security:
- ApiKeyAuth: []
summary: FFmpeg capabilities
tags:
- v16.7.2
/api/v3/skills/reload:
get:
description: Refresh the available FFmpeg capabilities
description: Refresh the available FFmpeg capabilities.
operationId: skills-3-reload
produces:
- application/json
@@ -2950,6 +3087,8 @@ paths:
security:
- ApiKeyAuth: []
summary: Refresh FFmpeg capabilities
tags:
- v16.7.2
/api/v3/srt:
get:
description: List all currently publishing SRT streams. This endpoint is EXPERIMENTAL
@@ -2967,6 +3106,8 @@ paths:
security:
- ApiKeyAuth: []
summary: List all publishing SRT streams
tags:
- v16.9.0
/api/v3/widget/process/{id}:
get:
description: Fetch minimal statistics about a process, which is not protected
@@ -2990,6 +3131,8 @@ paths:
schema:
$ref: '#/definitions/api.Error'
summary: Fetch minimal statistics about a process
tags:
- v16.7.2
/memfs/{path}:
delete:
description: Remove a file from the memory filesystem


@@ -379,13 +379,12 @@ func (p *parser) Parse(line string) uint64 {
	}
	// Calculate if any of the processed frames stalled.
	// If one number of frames in an output is the same as
	// before, then pFrames becomes 0.
	var pFrames uint64 = 0
	pFrames = p.stats.main.diff.frame
	// If the number of frames in an output is the same as before, then pFrames becomes 0.
	pFrames := p.stats.main.diff.frame
	if isFFmpegProgress {
		// Only consider the outputs
		pFrames = 1
		for i := range p.stats.output {
			pFrames *= p.stats.output[i].diff.frame
		}
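
The refactor above keeps the parser's stall check intact: every output's frame diff is multiplied into pFrames, so the product collapses to 0 as soon as any single output made no progress. A minimal, standalone sketch of that product-of-diffs idea (hypothetical names, not the parser's actual types):

package main

import "fmt"

// stalled reports whether any output made no progress since the last
// progress line: if any per-output frame diff is 0, the whole product
// becomes 0.
func stalled(frameDiffs []uint64) bool {
	pFrames := uint64(1)
	for _, d := range frameDiffs {
		pFrames *= d
	}
	return pFrames == 0
}

func main() {
	fmt.Println(stalled([]uint64{25, 25, 25})) // false: all outputs progressed
	fmt.Println(stalled([]uint64{25, 0, 25}))  // true: one output stalled
}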


@@ -3,7 +3,7 @@ package skills
import (
	"bufio"
	"bytes"
	"io/ioutil"
	"os"
	"regexp"
)
@@ -16,14 +16,14 @@ type alsaCard struct {
func DevicesALSA() ([]HWDevice, error) {
	devices := []HWDevice{}
	content, err := ioutil.ReadFile("/proc/asound/cards")
	content, err := os.ReadFile("/proc/asound/cards")
	if err != nil {
		return devices, err
	}
	cards := parseALSACards(content)
	content, err = ioutil.ReadFile("/proc/asound/devices")
	content, err = os.ReadFile("/proc/asound/devices")
	if err != nil {
		return devices, err
	}
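
This hunk is the mechanical ioutil.ReadFile to os.ReadFile migration; os.ReadFile has been the identical-signature replacement for the deprecated ioutil function since Go 1.16. A self-contained sketch of the pattern:

package main

import (
	"fmt"
	"os"
)

func main() {
	// os.ReadFile is a drop-in replacement for the deprecated
	// ioutil.ReadFile: same signature, same behavior.
	content, err := os.ReadFile("/proc/asound/cards")
	if err != nil {
		fmt.Println("cannot read ALSA cards:", err)
		return
	}
	fmt.Printf("read %d bytes\n", len(content))
}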

glob/glob.go (new file)

@@ -0,0 +1,14 @@
package glob

import (
	"github.com/gobwas/glob"
)

func Match(pattern, name string, separators ...rune) (bool, error) {
	g, err := glob.Compile(pattern, separators...)
	if err != nil {
		return false, err
	}

	return g.Match(name), nil
}
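
The new glob package is a thin wrapper around github.com/gobwas/glob and backs the idpattern/refpattern query parameters added to the process listing above. A usage sketch (the process IDs here are made up for illustration):

package main

import (
	"fmt"

	"github.com/datarhei/core/v16/glob"
)

func main() {
	// Hypothetical process IDs; the idpattern/refpattern query
	// parameters match IDs and references against patterns like this.
	ids := []string{"restreamer-ui:ingest:abc", "restreamer-ui:egress:xyz"}

	for _, id := range ids {
		ok, err := glob.Match("restreamer-ui:ingest:*", id)
		if err != nil {
			fmt.Println("bad pattern:", err)
			return
		}
		fmt.Println(id, "->", ok)
	}
}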

go.mod

@@ -3,28 +3,31 @@ module github.com/datarhei/core/v16
go 1.18
require (
github.com/99designs/gqlgen v0.17.12
github.com/99designs/gqlgen v0.17.20
github.com/Masterminds/semver/v3 v3.1.1
github.com/atrox/haikunatorgo/v2 v2.0.1
github.com/datarhei/gosrt v0.1.2
github.com/datarhei/joy4 v0.0.0-20210125162555-2102a8289cce
github.com/go-playground/validator/v10 v10.11.0
github.com/caddyserver/certmagic v0.17.2
github.com/datarhei/gosrt v0.3.1
github.com/datarhei/joy4 v0.0.0-20220914170649-23c70d207759
github.com/go-playground/validator/v10 v10.11.1
github.com/gobwas/glob v0.2.3
github.com/golang-jwt/jwt/v4 v4.4.2
github.com/google/uuid v1.3.0
github.com/invopop/jsonschema v0.4.0
github.com/joho/godotenv v1.4.0
github.com/labstack/echo/v4 v4.7.2
github.com/labstack/echo/v4 v4.9.1
github.com/lithammer/shortuuid/v4 v4.0.0
github.com/mattn/go-isatty v0.0.14
github.com/mattn/go-isatty v0.0.16
github.com/prep/average v0.0.0-20200506183628-d26c465f48c3
github.com/prometheus/client_golang v1.12.2
github.com/shirou/gopsutil/v3 v3.22.6
github.com/stretchr/testify v1.7.5
github.com/swaggo/echo-swagger v1.3.3
github.com/swaggo/swag v1.8.3
github.com/vektah/gqlparser/v2 v2.4.6
github.com/prometheus/client_golang v1.13.1
github.com/shirou/gopsutil/v3 v3.22.10
github.com/stretchr/testify v1.8.1
github.com/swaggo/echo-swagger v1.3.5
github.com/swaggo/swag v1.8.7
github.com/vektah/gqlparser/v2 v2.5.1
github.com/xeipuuv/gojsonschema v1.2.0
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4
go.uber.org/zap v1.23.0
golang.org/x/mod v0.6.0
)
require (
@@ -38,8 +41,8 @@ require (
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/spec v0.20.6 // indirect
github.com/go-openapi/swag v0.21.1 // indirect
github.com/go-openapi/spec v0.20.7 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/go-playground/locales v0.14.0 // indirect
github.com/go-playground/universal-translator v0.18.0 // indirect
github.com/golang-jwt/jwt v3.2.2+incompatible // indirect
@@ -48,36 +51,41 @@ require (
github.com/hashicorp/golang-lru v0.5.4 // indirect
github.com/iancoleman/orderedmap v0.2.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/labstack/gommon v0.3.1 // indirect
github.com/klauspost/cpuid/v2 v2.1.2 // indirect
github.com/labstack/gommon v0.4.0 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/lufia/plan9stats v0.0.0-20220517141722-cf486979b281 // indirect
github.com/libdns/libdns v0.2.1 // indirect
github.com/lufia/plan9stats v0.0.0-20220913051719-115f729f3c8c // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/matryer/moq v0.2.7 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mholt/acmez v1.0.4 // indirect
github.com/miekg/dns v1.1.50 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20220216144756-c35f1ee13d7c // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.35.0 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/swaggo/files v0.0.0-20220610200504-28940afbdbfe // indirect
github.com/swaggo/files v0.0.0-20220728132757-551d4a08d97a // indirect
github.com/tklauser/go-sysconf v0.3.10 // indirect
github.com/tklauser/numcpus v0.5.0 // indirect
github.com/urfave/cli/v2 v2.8.1 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/fasttemplate v1.2.1 // indirect
github.com/valyala/fasttemplate v1.2.2 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
github.com/yusufpapurcu/wmi v1.2.2 // indirect
golang.org/x/net v0.0.0-20220706163947-c90051bbdb60 // indirect
golang.org/x/sys v0.0.0-20220708085239-5a0f0661e09d // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20220609170525-579cf78fd858 // indirect
golang.org/x/tools v0.1.11 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
go.uber.org/atomic v1.10.0 // indirect
go.uber.org/multierr v1.8.0 // indirect
golang.org/x/crypto v0.1.0 // indirect
golang.org/x/net v0.1.0 // indirect
golang.org/x/sys v0.1.0 // indirect
golang.org/x/text v0.4.0 // indirect
golang.org/x/time v0.1.0 // indirect
golang.org/x/tools v0.2.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
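
Among the new direct dependencies, github.com/caddyserver/certmagic v0.17.2 (with mholt/acmez and libdns/libdns pulled in as indirect companions) presumably backs the automatic Let's Encrypt handling that the new TLS email config field feeds into. For orientation only, a minimal standalone certmagic sketch, not the core's actual wiring; the domain and email are placeholders:

package main

import (
	"fmt"
	"net/http"

	"github.com/caddyserver/certmagic"
)

func main() {
	// The ACME account email; placeholder value.
	certmagic.DefaultACME.Email = "admin@example.com"

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello over auto-TLS")
	})

	// certmagic.HTTPS obtains and renews certificates automatically
	// and serves the handler on ports 80/443.
	if err := certmagic.HTTPS([]string{"example.com"}, mux); err != nil {
		panic(err)
	}
}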

go.sum

@@ -31,13 +31,15 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/99designs/gqlgen v0.17.12 h1:lH/H5dTYCY5eLNRKXeq22l0wFMavpOnN6v9GAIw+fxY=
github.com/99designs/gqlgen v0.17.12/go.mod h1:w1brbeOdqVyNJI553BGwtwdVcYu1LKeYE1opLWN9RgQ=
github.com/99designs/gqlgen v0.17.20 h1:O7WzccIhKB1dm+7g6dhQcULINftfiLSBg2l/mwbpJMw=
github.com/99designs/gqlgen v0.17.20/go.mod h1:Mja2HI23kWT1VRH09hvWshFgOzKswpO20o4ScpJIES4=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.1.0/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/KyleBanks/depth v1.2.1 h1:5h8fQADFrWtarTdtDudMmGsC7GPbOAu6RVB3ffsVFHc=
github.com/KyleBanks/depth v1.2.1/go.mod h1:jzSb9d0L43HxTQfT+oSA1EEp2q+ne2uh6XgeJcm8brE=
github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/agiledragon/gomonkey/v2 v2.3.1/go.mod h1:ap1AmDzcVOAz1YpeJ3TCzIgstoaWLA6jbbgxfB4w2iY=
@@ -55,12 +57,16 @@ github.com/arbovm/levenshtein v0.0.0-20160628152529-48b4e1c0c4d0 h1:jfIu9sQUG6Ig
github.com/arbovm/levenshtein v0.0.0-20160628152529-48b4e1c0c4d0/go.mod h1:t2tdKJDJF9BV14lnkjHmOQgcvEKgtqs5a1N3LNdJhGE=
github.com/atrox/haikunatorgo/v2 v2.0.1 h1:FCVx2KL2YvZtI1rI9WeEHxeLRrKGr0Dd4wfCJiUXupc=
github.com/atrox/haikunatorgo/v2 v2.0.1/go.mod h1:BBQmx2o+1Z5poziaHRgddAZKOpijwfKdAmMnSYlFK70=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/benburkert/openpgp v0.0.0-20160410205803-c2471f86866c h1:8XZeJrs4+ZYhJeJ2aZxADI2tGADS15AzIF8MQ8XAhT4=
github.com/benburkert/openpgp v0.0.0-20160410205803-c2471f86866c/go.mod h1:x1vxHcL/9AVzuk5HOloOEPrtJY0MaalYr78afXZ+pWI=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/caddyserver/certmagic v0.17.2 h1:o30seC1T/dBqBCNNGNHWwj2i5/I/FMjBbTAhjADP3nE=
github.com/caddyserver/certmagic v0.17.2/go.mod h1:ouWUuC490GOLJzkyN35eXfV8bSbwMwSf4bdhkIxtdQE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
@@ -74,10 +80,10 @@ github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:ma
github.com/cpuguy83/go-md2man/v2 v2.0.1 h1:r/myEWzV9lfsM1tFLgDyu0atFtJ1fXn261LKYj/3DxU=
github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/datarhei/gosrt v0.1.2 h1:rGOP2Xkbi52z4tLzBwCBw2TKt7BrfTO2LmEVY+yWf1M=
github.com/datarhei/gosrt v0.1.2/go.mod h1:IftDbZGIIC9OvQO5on5ZpU0iB/JX/PFOqGXORbwHYQM=
github.com/datarhei/joy4 v0.0.0-20210125162555-2102a8289cce h1:bg/OE9GfGK6d/XbqiMq8YaGQzw1Ul3Y3qiGMzU1G4HQ=
github.com/datarhei/joy4 v0.0.0-20210125162555-2102a8289cce/go.mod h1:Jcw/6jZDQQmPx8A7INEkXmuEF7E9jjBbSTfVSLwmiQw=
github.com/datarhei/gosrt v0.3.1 h1:9A75hIvnY74IUFyeguqYXh1lsGF8Qt8fjxJS2Ewr12Q=
github.com/datarhei/gosrt v0.3.1/go.mod h1:M2nl2WPrawncUc1FtUBK6gZX4tpZRC7FqL8NjOdBZV0=
github.com/datarhei/joy4 v0.0.0-20220914170649-23c70d207759 h1:h8NyekuQSDvLIsZVTV172m5/RVArXkEM/cnHaUzszQU=
github.com/datarhei/joy4 v0.0.0-20220914170649-23c70d207759/go.mod h1:Jcw/6jZDQQmPx8A7INEkXmuEF7E9jjBbSTfVSLwmiQw=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -108,21 +114,23 @@ github.com/go-openapi/jsonreference v0.19.6/go.mod h1:diGHMEHg2IqXZGKxqyvWdfWU/a
github.com/go-openapi/jsonreference v0.20.0 h1:MYlu0sBgChmCfJxxUKZ8g1cPWFOB37YSZqewK7OKeyA=
github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo=
github.com/go-openapi/spec v0.20.4/go.mod h1:faYFR1CvsJZ0mNsmsphTMSoRrNV3TEDoAM7FOEWeq8I=
github.com/go-openapi/spec v0.20.6 h1:ich1RQ3WDbfoeTqTAb+5EIxNmpKVJZWBNah9RAT0jIQ=
github.com/go-openapi/spec v0.20.6/go.mod h1:2OpW+JddWPrpXSCIX8eOx7lZ5iyuWj3RYR6VaaBKcWA=
github.com/go-openapi/spec v0.20.7 h1:1Rlu/ZrOCCob0n+JKKJAWhNWMPW8bOZRg8FJaY+0SKI=
github.com/go-openapi/spec v0.20.7/go.mod h1:2OpW+JddWPrpXSCIX8eOx7lZ5iyuWj3RYR6VaaBKcWA=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.15/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag v0.21.1 h1:wm0rhTb5z7qpJRHBdPOMuY4QjVUMbF6/kwoYeRAOrKU=
github.com/go-openapi/swag v0.21.1/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag v0.22.3 h1:yMBqmnQ0gyZvEb/+KzuWZOXgllrXT4SADYbvDaXHv/g=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-playground/assert/v2 v2.0.1 h1:MsBgLAaY856+nPRTKrp3/OZK38U/wa0CcBYNjji3q3A=
github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.0 h1:u50s323jtVGugKlcYeyzC0etD1HifMjqmJqb8WugfUU=
github.com/go-playground/locales v0.14.0/go.mod h1:sawfccIbzZTqEDETgFXqTho0QybSa7l++s0DH+LDiLs=
github.com/go-playground/universal-translator v0.18.0 h1:82dyy6p4OuJq4/CByFNOn/jYrnRPArHwAcmLoJZxyho=
github.com/go-playground/universal-translator v0.18.0/go.mod h1:UvRDBj+xPUEGrFYl+lu/H90nyDXpg0fqeB/AQUGNTVA=
github.com/go-playground/validator/v10 v10.11.0 h1:0W+xRM511GY47Yy3bZUbJVitCNg2BOGlCyvTqsp/xIw=
github.com/go-playground/validator/v10 v10.11.0/go.mod h1:i+3WkQ1FvaUjjxh1kSvIA4dMGDBiPU55YFDl0WbKdWU=
github.com/go-playground/validator/v10 v10.11.1 h1:prmOlTVv+YjZjmRmNSF3VmspqJIxJWXmqUsHwfTRRkQ=
github.com/go-playground/validator/v10 v10.11.1/go.mod h1:i+3WkQ1FvaUjjxh1kSvIA4dMGDBiPU55YFDl0WbKdWU=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang-jwt/jwt v3.2.2+incompatible h1:IfV12K8xAKAnZqdXVzCZ+TOjboZ2keLg81eXfW3O+oY=
github.com/golang-jwt/jwt v3.2.2+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
@@ -168,8 +176,8 @@ github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
@@ -214,6 +222,8 @@ github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7V
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kevinmbeaulieu/eq-go v1.0.0/go.mod h1:G3S8ajA56gKBZm4UB9AOyoOS37JO3roToPzKNM8dtdM=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/cpuid/v2 v2.1.2 h1:XhdX4fqAJUA0yj+kUwMavO0hHrSPAecYdYf1ZmxHvak=
github.com/klauspost/cpuid/v2 v2.1.2/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
@@ -225,32 +235,41 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/labstack/echo/v4 v4.7.2 h1:Kv2/p8OaQ+M6Ex4eGimg9b9e6icoxA42JSlOR3msKtI=
github.com/labstack/echo/v4 v4.7.2/go.mod h1:xkCDAdFCIf8jsFQ5NnbK7oqaF/yU1A1X20Ltm0OvSks=
github.com/labstack/gommon v0.3.1 h1:OomWaJXm7xR6L1HmEtGyQf26TEn7V6X88mktX9kee9o=
github.com/labstack/echo/v4 v4.9.0/go.mod h1:xkCDAdFCIf8jsFQ5NnbK7oqaF/yU1A1X20Ltm0OvSks=
github.com/labstack/echo/v4 v4.9.1 h1:GliPYSpzGKlyOhqIbG8nmHBo3i1saKWFOgh41AN3b+Y=
github.com/labstack/echo/v4 v4.9.1/go.mod h1:Pop5HLc+xoc4qhTZ1ip6C0RtP7Z+4VzRLWZZFKqbbjo=
github.com/labstack/gommon v0.3.1/go.mod h1:uW6kP17uPlLJsD3ijUYn3/M5bAxtlZhMI6m3MFxTMTM=
github.com/labstack/gommon v0.4.0 h1:y7cvthEAEbU0yHOf4axH8ZG2NH8knB9iNSoTO8dyIk8=
github.com/labstack/gommon v0.4.0/go.mod h1:uW6kP17uPlLJsD3ijUYn3/M5bAxtlZhMI6m3MFxTMTM=
github.com/leodido/go-urn v1.2.1 h1:BqpAaACuzVSgi/VLzGZIobT2z4v53pjosyNd9Yv6n/w=
github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
github.com/libdns/libdns v0.2.1 h1:Wu59T7wSHRgtA0cfxC+n1c/e+O3upJGWytknkmFEDis=
github.com/libdns/libdns v0.2.1/go.mod h1:yQCXzk1lEZmmCPa857bnk4TsOiqYasqpyOEeSObbb40=
github.com/lithammer/shortuuid/v4 v4.0.0 h1:QRbbVkfgNippHOS8PXDkti4NaWeyYfcBTHtw7k08o4c=
github.com/lithammer/shortuuid/v4 v4.0.0/go.mod h1:Zs8puNcrvf2rV9rTH51ZLLcj7ZXqQI3lv67aw4KiB1Y=
github.com/logrusorgru/aurora/v3 v3.0.0/go.mod h1:vsR12bk5grlLvLXAYrBsb5Oc/N+LxAlxggSjiwMnCUc=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/lufia/plan9stats v0.0.0-20220517141722-cf486979b281 h1:aczX6NMOtt6L4YT0fQvKkDK6LZEtdOso9sUH89V1+P0=
github.com/lufia/plan9stats v0.0.0-20220517141722-cf486979b281/go.mod h1:lc+czkgO/8F7puNki5jk8QyujbfK1LOT7Wl0ON2hxyk=
github.com/lufia/plan9stats v0.0.0-20220913051719-115f729f3c8c h1:VtwQ41oftZwlMnOEbMWQtSEUgU64U4s+GHk7hZK+jtY=
github.com/lufia/plan9stats v0.0.0-20220913051719-115f729f3c8c/go.mod h1:JKx41uQRwqlTZabZc+kILPrO/3jlKnQ2Z8b7YiVw5cE=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/matryer/moq v0.2.7 h1:RtpiPUM8L7ZSCbSwK+QcZH/E9tgqAkFjKQxsRs25b4w=
github.com/matryer/moq v0.2.7/go.mod h1:kITsx543GOENm48TUAQyJ9+SAvFSr7iGQXPoth/VUBk=
github.com/mattn/go-colorable v0.1.11/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-colorable v0.1.12 h1:jF+Du6AlPIjs2BiUiQlKOX0rt3SujHxPnksPKZbaA40=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/mattn/go-isatty v0.0.16 h1:bq3VjFmv/sOjHtdEhmkEV4x1AJtvUvOJ2PFAZ5+peKQ=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/mholt/acmez v1.0.4 h1:N3cE4Pek+dSolbsofIkAYz6H1d3pE+2G0os7QHslf80=
github.com/mholt/acmez v1.0.4/go.mod h1:qFGLZ4u+ehWINeJZjzPlsnjJBCPAADWTcIqE/7DAYQY=
github.com/miekg/dns v1.1.50 h1:DQUfb9uc6smULcREF09Uc+/Gd46YWqJd5DbpPE9xkcA=
github.com/miekg/dns v1.1.50/go.mod h1:e3IlAVfNqAllflbibAZEWOXOQ+Ynzk/dDozDxY7XnME=
github.com/mitchellh/mapstructure v1.3.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
@@ -270,6 +289,7 @@ github.com/otiai10/mint v1.3.3/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.6.0/go.mod h1:qBsxPvzyUincmltOk6iyRVxHYg4adc0OFOv72ZdLa18=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -284,25 +304,27 @@ github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5Fsn
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.12.2 h1:51L9cDoUHVrXx4zWYlcLQIZ+d+VXHgqnYKkIuq4g/34=
github.com/prometheus/client_golang v1.12.2/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.13.1 h1:3gMjIY2+/hzmqhtUC/aQNYldJA6DtH3CgQvwS+02K1c=
github.com/prometheus/client_golang v1.13.1/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.35.0 h1:Eyr+Pw2VymWejHqCugNaQXkAi6KayVNxaHeu6khmFBE=
github.com/prometheus/common v0.35.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/common v0.37.0 h1:ccBbHCgIiT9uSoFY0vX8H3zsNR5eLt17/RQLUvn8pXE=
github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.7.3 h1:4jVXhlkAyzOScmCkXBTOLRLTz8EeU+eyjrwB/EPq0VU=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.8.0 h1:FCbCCtXNOY3UtUuHUYaghJg4y7Fd14rXifAYUAtL9R8=
@@ -312,8 +334,8 @@ github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shirou/gopsutil/v3 v3.22.6 h1:FnHOFOh+cYAM0C30P+zysPISzlknLC5Z1G4EAElznfQ=
github.com/shirou/gopsutil/v3 v3.22.6/go.mod h1:EdIubSnZhbAvBS1yJ7Xi+AShB/hxwLHOMz4MCYz7yMs=
github.com/shirou/gopsutil/v3 v3.22.10 h1:4KMHdfBRYXGF9skjDWiL4RA2N+E8dRdodU/bOZpPoVg=
github.com/shirou/gopsutil/v3 v3.22.10/go.mod h1:QNza6r4YQoydyCfo6rH0blGfKahgibh4dQmV5xdFkQk=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
@@ -323,6 +345,7 @@ github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.3.1-0.20190311161405-34c6fa2dc709/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
@@ -330,16 +353,16 @@ github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81P
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.7.5 h1:s5PTfem8p8EbKQOctVV53k6jCJt3UX4IEJzwh+C324Q=
github.com/stretchr/testify v1.7.5/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/swaggo/echo-swagger v1.3.3 h1:Fx8kQ8IcIIEL3ZE20wzvcT8gFnPo/4U+fsnS3I1wvCw=
github.com/swaggo/echo-swagger v1.3.3/go.mod h1:vbKcEBeJgOexLuPcsdZhrRAV508fsE79xaKIqmvse98=
github.com/swaggo/files v0.0.0-20220610200504-28940afbdbfe h1:K8pHPVoTgxFJt1lXuIzzOX7zZhZFldJQK/CgKx9BFIc=
github.com/swaggo/files v0.0.0-20220610200504-28940afbdbfe/go.mod h1:lKJPbtWzJ9JhsTN1k1gZgleJWY/cqq0psdoMmaThG3w=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/swaggo/echo-swagger v1.3.5 h1:kCx1wvX5AKhjI6Ykt48l3PTsfL9UD40ZROOx/tYzWyY=
github.com/swaggo/echo-swagger v1.3.5/go.mod h1:3IMHd2Z8KftdWFEEjGmv6QpWj370LwMCOfovuh7vF34=
github.com/swaggo/files v0.0.0-20220728132757-551d4a08d97a h1:kAe4YSu0O0UFn1DowNo2MY5p6xzqtJ/wQ7LZynSvGaY=
github.com/swaggo/files v0.0.0-20220728132757-551d4a08d97a/go.mod h1:lKJPbtWzJ9JhsTN1k1gZgleJWY/cqq0psdoMmaThG3w=
github.com/swaggo/swag v1.8.1/go.mod h1:ugemnJsPZm/kRwFUnzBlbHRd0JY9zE1M4F+uy2pAaPQ=
github.com/swaggo/swag v1.8.3 h1:3pZSSCQ//gAH88lfmxM3Cd1+JCsxV8Md6f36b9hrZ5s=
github.com/swaggo/swag v1.8.3/go.mod h1:jMLeXOOmYyjk8PvHTsXBdrubsNd9gUJTTCzL5iBnseg=
github.com/swaggo/swag v1.8.7 h1:2K9ivTD3teEO+2fXV6zrZKDqk5IuU2aJtBDo8U7omWU=
github.com/swaggo/swag v1.8.7/go.mod h1:ezQVUUhly8dludpVk+/PuwJWvLLanB13ygV5Pr9enSk=
github.com/tklauser/go-sysconf v0.3.10 h1:IJ1AZGZRWbY8T5Vfk04D9WOA5WSejdflXxP03OUqALw=
github.com/tklauser/go-sysconf v0.3.10/go.mod h1:C8XykCvCb+Gn0oNCWPIlcb0RuglQTYaQ2hGm7jmxEFk=
github.com/tklauser/numcpus v0.4.0/go.mod h1:1+UI3pD8NW14VMwdgJNJ1ESk2UnwhAnz5hMwiKKqXCQ=
@@ -350,10 +373,11 @@ github.com/urfave/cli/v2 v2.8.1 h1:CGuYNZF9IKZY/rfBe3lJpccSoIY1ytfvmgQT90cNOl4=
github.com/urfave/cli/v2 v2.8.1/go.mod h1:Z41J9TPoffeoqP0Iza0YbAhGvymRdZAd2uPmZ5JxRdY=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v1.2.1 h1:TVEnxayobAdVkhQfrfes2IzOB6o+z4roRkPF52WA1u4=
github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/vektah/gqlparser/v2 v2.4.6 h1:Yjzp66g6oVq93Jihbi0qhGnf/6zIWjcm8H6gA27zstE=
github.com/vektah/gqlparser/v2 v2.4.6/go.mod h1:flJWIR04IMQPGz+BXLrORkrARBxv/rtyIAFvd/MceW0=
github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=
github.com/valyala/fasttemplate v1.2.2/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/vektah/gqlparser/v2 v2.5.1 h1:ZGu+bquAY23jsxDRcYpWjttRZrUz07LbiY77gUOHcr4=
github.com/vektah/gqlparser/v2 v2.5.1/go.mod h1:mPgqFBu/woKTVYWyNk8cO3kh4S/f4aRFZrvOnp3hmCs=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
@@ -366,8 +390,10 @@ github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673/go.mod h1:N3UwUGtsr
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yusufpapurcu/wmi v1.2.2 h1:KBNDSne4vP5mbSWnJbO+51IMOXJB67QiYCSBrubbPRg=
github.com/yusufpapurcu/wmi v1.2.2/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
@@ -375,6 +401,17 @@ go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.10.0 h1:9qC72Qh0+3MqyJbAn8YU5xVq1frD8bn3JtD2oXtafVQ=
go.uber.org/atomic v1.10.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/goleak v1.1.11 h1:wy28qYRKZgnJTxGxvye5/wgWr1EKjmUDGYox5mGlRlI=
go.uber.org/goleak v1.1.11/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.8.0 h1:dg6GjLku4EH+249NNmoIciG9N/jURbDG+pFlTkhzIC8=
go.uber.org/multierr v1.8.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95ak=
go.uber.org/zap v1.21.0/go.mod h1:wjWOCqI0f2ZZrJF/UufIOkiC8ii6tm1iqIsLo76RfJw=
go.uber.org/zap v1.23.0 h1:OjGQ5KQDEUawVHxNwQgPpiypGHOxo2mNZsOqTak4fFY=
go.uber.org/zap v1.23.0/go.mod h1:D+nX8jyLsMHMYrln8A0rJjFt/T/9/bGgIhAqxv5URuY=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
@@ -385,9 +422,8 @@ golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5y
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220411220226-7b82a4e95df4/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220525230936-793ad666bf5e/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d h1:sK3txAijHtOK88l68nt020reeT1ZdKLIYetKl95FzVY=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -419,10 +455,10 @@ golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzB
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.6.0 h1:b9gGHsz9/HhJ3HF5DHQytPpuwocVTChQJK3AvoLRD5I=
golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -452,16 +488,20 @@ golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81R
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758/go.mod h1:72T/g9IO56b78aLF+1Kcs5dz7/ng1VjMUvfKvpfy+jM=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220706163947-c90051bbdb60 h1:8NSylCMxLW4JvserAndSgFL7aPli6A68yf0bYFTcWCM=
golang.org/x/net v0.0.0-20220706163947-c90051bbdb60/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220630215102-69896b714898/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -479,6 +519,8 @@ golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 h1:uVc8UZUe6tr40fFVnUP5Oj+veunVezqYl9z7DYw9xzw=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -514,8 +556,10 @@ golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -530,24 +574,29 @@ golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220422013727-9388b58f7150/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220708085239-5a0f0661e09d h1:/m5NbqQelATgoSPVC2Z23sR4kVNokFwDDyWh/3rGY+I=
golang.org/x/sys v0.0.0-20220708085239-5a0f0661e09d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20220609170525-579cf78fd858 h1:Dpdu/EMxGMFgq0CeYMh4fazTD2vtlZRYE7wyynxJb9U=
golang.org/x/time v0.0.0-20220609170525-579cf78fd858/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.1.0 h1:xYY+Bajn2a7VBmTM5GikTmnK8ZuX8YgnQCqZpbBNtmA=
golang.org/x/time v0.1.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@@ -589,11 +638,13 @@ golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roY
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.6-0.20210726203631-07bc1bf47fb2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo=
golang.org/x/tools v0.1.9/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/tools v0.1.11 h1:loJ25fNOEhSXfHrpoGj91eCUThwdNX6u24rO1xnNteY=
golang.org/x/tools v0.1.11/go.mod h1:SgwaegtQh8clINPpECJMqnxLv9I09HLqnW3RMqW0CA4=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.2.0 h1:G6AHpWxTMGY1KyEYoAQ5WTtIekUUvDNjan3ugu60JvE=
golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -673,8 +724,9 @@ google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGj
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscLw=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

View File

@@ -4,8 +4,16 @@ import (
"time"
"github.com/datarhei/core/v16/config"
v1config "github.com/datarhei/core/v16/config/v1"
v2config "github.com/datarhei/core/v16/config/v2"
)
// ConfigVersion is used to unmarshal only the version field in order
// to find out which SetConfig type should be used.
type ConfigVersion struct {
Version int64 `json:"version"`
}
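This two-pass decoding works because encoding/json silently ignores JSON keys that have no matching struct field. A minimal in-package sketch, assuming a handler already holds the raw request body:

body := []byte(`{"version": 2, "address": ":8080"}`)

v := ConfigVersion{}
if err := json.Unmarshal(body, &v); err != nil {
	// malformed JSON; reject the request
}
// Only v.Version (here 2) is set; every other key is skipped, so the
// same body can be unmarshalled a second time into the matching
// SetConfig type.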
// ConfigData embeds config.Data
type ConfigData struct {
config.Data
@@ -22,11 +30,68 @@ type Config struct {
Overrides []string `json:"overrides"`
}
type SetConfigV1 struct {
v1config.Data
}
// NewSetConfigV1 creates a new SetConfigV1 from the current
// config by downgrading it.
func NewSetConfigV1(cfg *config.Config) SetConfigV1 {
v2data, _ := config.DowngradeV3toV2(&cfg.Data)
v1data, _ := v2config.DowngradeV2toV1(v2data)
data := SetConfigV1{
Data: *v1data,
}
return data
}
// MergeTo merges the v1 config into the current config.
func (s *SetConfigV1) MergeTo(cfg *config.Config) {
v2data, _ := config.DowngradeV3toV2(&cfg.Data)
v2config.MergeV1ToV2(v2data, &s.Data)
config.MergeV2toV3(&cfg.Data, v2data)
}
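Taken together with NewSetConfigV1, the v1 path is a downgrade-and-merge roundtrip; a sketch of the flow as the Set handler below uses it (error handling elided):

// 1. Downgrade the current v3 config into a fully populated v1 struct.
setV1 := NewSetConfigV1(cfg)
// 2. Overlay the client's (possibly partial) v1 JSON on top of it.
_ = json.Unmarshal(body, &setV1)
// 3. Upgrade v1 -> v2 -> v3 and write the result back into cfg.
setV1.MergeTo(cfg)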
type SetConfigV2 struct {
v2config.Data
}
// NewSetConfigV2 creates a new SetConfigV2 from the current
// config by downgrading it.
func NewSetConfigV2(cfg *config.Config) SetConfigV2 {
v2data, _ := config.DowngradeV3toV2(&cfg.Data)
data := SetConfigV2{
Data: *v2data,
}
return data
}
// MergeTo merges the v2 config into the current config.
func (s *SetConfigV2) MergeTo(cfg *config.Config) {
config.MergeV2toV3(&cfg.Data, &s.Data)
}
// SetConfig embeds config.Data. It is used to send a new config to the server.
type SetConfig struct {
config.Data
}
// NewSetConfig converts a config.Config into a SetConfig in order to prepopulate
// a SetConfig with the current values. The uploaded config can have missing fields that
// will be filled with the current values after unmarshalling the JSON.
func NewSetConfig(cfg *config.Config) SetConfig {
data := SetConfig{
cfg.Data,
}
return data
}
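The prepopulation relies on encoding/json overwriting only those struct fields that actually appear in the JSON document. A self-contained sketch of that behaviour (the sample type and values are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

type sample struct {
	Name string `json:"name"`
	Port int    `json:"port"`
}

func main() {
	s := sample{Name: "current", Port: 8080} // prefilled with current values
	_ = json.Unmarshal([]byte(`{"port": 9090}`), &s)
	fmt.Println(s.Name, s.Port) // prints: current 9090
}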
// MergeTo merges a sent config into a config.Config
func (rscfg *SetConfig) MergeTo(cfg *config.Config) {
cfg.ID = rscfg.ID
@@ -51,18 +116,7 @@ func (rscfg *SetConfig) MergeTo(cfg *config.Config) {
cfg.Router = rscfg.Router
}
// NewSetConfig converts a config.Config into a RestreamerSetConfig in order to prepopulate
// a RestreamerSetConfig with the current values. The uploaded config can have missing fields that
// will be filled with the current values after unmarshalling the JSON.
func NewSetConfig(cfg *config.Config) SetConfig {
data := SetConfig{
cfg.Data,
}
return data
}
// Unmarshal converts a config.Config to a RestreamerConfig.
// Unmarshal converts a config.Config to a Config.
func (c *Config) Unmarshal(cfg *config.Config) {
if cfg == nil {
return

View File

@@ -7,6 +7,12 @@ import (
"github.com/datarhei/core/v16/monitor"
)
type MetricsDescription struct {
Name string `json:"name"`
Description string `json:"description"`
Labels []string `json:"labels"`
}
type MetricsQueryMetric struct {
Name string `json:"name"`
Labels map[string]string `json:"labels"`

View File

@@ -175,9 +175,10 @@ func (cfg *ProcessConfig) Unmarshal(c *app.Config) {
for _, c := range x.Cleanup {
io.Cleanup = append(io.Cleanup, ProcessConfigIOCleanup{
Pattern: c.Pattern,
MaxFiles: c.MaxFiles,
MaxFileAge: c.MaxFileAge,
Pattern: c.Pattern,
MaxFiles: c.MaxFiles,
MaxFileAge: c.MaxFileAge,
PurgeOnDelete: c.PurgeOnDelete,
})
}

View File

@@ -33,9 +33,9 @@ type SRTStatistics struct {
ByteSent uint64 `json:"sent_bytes"` // Same as pktSent, but expressed in bytes, including payload and all the headers (IP, UDP, SRT)
ByteRecv uint64 `json:"recv_bytes"` // Same as pktRecv, but expressed in bytes, including payload and all the headers (IP, UDP, SRT)
ByteSentUnique uint64 `json:"sent_unique__bytes"` // Same as pktSentUnique, but expressed in bytes, including payload and all the headers (IP, UDP, SRT)
ByteSentUnique uint64 `json:"sent_unique_bytes"` // Same as pktSentUnique, but expressed in bytes, including payload and all the headers (IP, UDP, SRT)
ByteRecvUnique uint64 `json:"recv_unique_bytes"` // Same as pktRecvUnique, but expressed in bytes, including payload and all the headers (IP, UDP, SRT)
ByteRcvLoss uint64 `json:"recv_loss__bytes"` // Same as pktRcvLoss, but expressed in bytes, including payload and all the headers (IP, UDP, SRT), bytes for the presently missing (either reordered or lost) packets' payloads are estimated based on the average packet size
ByteRcvLoss uint64 `json:"recv_loss_bytes"` // Same as pktRcvLoss, but expressed in bytes, including payload and all the headers (IP, UDP, SRT), bytes for the presently missing (either reordered or lost) packets' payloads are estimated based on the average packet size
ByteRetrans uint64 `json:"sent_retrans_bytes"` // Same as pktRetrans, but expressed in bytes, including payload and all the headers (IP, UDP, SRT)
ByteSndDrop uint64 `json:"send_drop_bytes"` // Same as pktSndDrop, but expressed in bytes, including payload and all the headers (IP, UDP, SRT)
ByteRcvDrop uint64 `json:"recv_drop_bytes"` // Same as pktRcvDrop, but expressed in bytes, including payload and all the headers (IP, UDP, SRT)
@@ -68,34 +68,54 @@ type SRTStatistics struct {
func (s *SRTStatistics) Unmarshal(ss *gosrt.Statistics) {
s.MsTimeStamp = ss.MsTimeStamp
s.PktSent = ss.PktSent
s.PktRecv = ss.PktRecv
s.PktSentUnique = ss.PktSentUnique
s.PktRecvUnique = ss.PktRecvUnique
s.PktSndLoss = ss.PktSndLoss
s.PktRcvLoss = ss.PktRcvLoss
s.PktRetrans = ss.PktRetrans
s.PktRcvRetrans = ss.PktRcvRetrans
s.PktSentACK = ss.PktSentACK
s.PktRecvACK = ss.PktRecvACK
s.PktSentNAK = ss.PktSentNAK
s.PktRecvNAK = ss.PktRecvNAK
s.PktSentKM = ss.PktSentKM
s.PktRecvKM = ss.PktRecvKM
s.UsSndDuration = ss.UsSndDuration
s.PktSndDrop = ss.PktSndDrop
s.PktRcvDrop = ss.PktRcvDrop
s.PktRcvUndecrypt = ss.PktRcvUndecrypt
s.PktSent = ss.Accumulated.PktSent
s.PktRecv = ss.Accumulated.PktRecv
s.PktSentUnique = ss.Accumulated.PktSentUnique
s.PktRecvUnique = ss.Accumulated.PktRecvUnique
s.PktSndLoss = ss.Accumulated.PktSendLoss
s.PktRcvLoss = ss.Accumulated.PktRecvLoss
s.PktRetrans = ss.Accumulated.PktRetrans
s.PktRcvRetrans = ss.Accumulated.PktRecvRetrans
s.PktSentACK = ss.Accumulated.PktSentACK
s.PktRecvACK = ss.Accumulated.PktRecvACK
s.PktSentNAK = ss.Accumulated.PktSentNAK
s.PktRecvNAK = ss.Accumulated.PktRecvNAK
s.PktSentKM = ss.Accumulated.PktSentKM
s.PktRecvKM = ss.Accumulated.PktRecvKM
s.UsSndDuration = ss.Accumulated.UsSndDuration
s.PktSndDrop = ss.Accumulated.PktSendDrop
s.PktRcvDrop = ss.Accumulated.PktRecvDrop
s.PktRcvUndecrypt = ss.Accumulated.PktRecvUndecrypt
s.ByteSent = ss.ByteSent
s.ByteRecv = ss.ByteRecv
s.ByteSentUnique = ss.ByteSentUnique
s.ByteRecvUnique = ss.ByteRecvUnique
s.ByteRcvLoss = ss.ByteRcvLoss
s.ByteRetrans = ss.ByteRetrans
s.ByteSndDrop = ss.ByteSndDrop
s.ByteRcvDrop = ss.ByteRcvDrop
s.ByteRcvUndecrypt = ss.ByteRcvUndecrypt
s.ByteSent = ss.Accumulated.ByteSent
s.ByteRecv = ss.Accumulated.ByteRecv
s.ByteSentUnique = ss.Accumulated.ByteSentUnique
s.ByteRecvUnique = ss.Accumulated.ByteRecvUnique
s.ByteRcvLoss = ss.Accumulated.ByteRecvLoss
s.ByteRetrans = ss.Accumulated.ByteRetrans
s.ByteSndDrop = ss.Accumulated.ByteSendDrop
s.ByteRcvDrop = ss.Accumulated.ByteRecvDrop
s.ByteRcvUndecrypt = ss.Accumulated.ByteRecvUndecrypt
s.UsPktSndPeriod = ss.Instantaneous.UsPktSendPeriod
s.PktFlowWindow = ss.Instantaneous.PktFlowWindow
s.PktFlightSize = ss.Instantaneous.PktFlightSize
s.MsRTT = ss.Instantaneous.MsRTT
s.MbpsBandwidth = ss.Instantaneous.MbpsLinkCapacity
s.ByteAvailSndBuf = ss.Instantaneous.ByteAvailSendBuf
s.ByteAvailRcvBuf = ss.Instantaneous.ByteAvailRecvBuf
s.MbpsMaxBW = ss.Instantaneous.MbpsMaxBW
s.ByteMSS = ss.Instantaneous.ByteMSS
s.PktSndBuf = ss.Instantaneous.PktSendBuf
s.ByteSndBuf = ss.Instantaneous.ByteSendBuf
s.MsSndBuf = ss.Instantaneous.MsSendBuf
s.MsSndTsbPdDelay = ss.Instantaneous.MsSendTsbPdDelay
s.PktRcvBuf = ss.Instantaneous.PktRecvBuf
s.ByteRcvBuf = ss.Instantaneous.ByteRecvBuf
s.MsRcvBuf = ss.Instantaneous.MsRecvBuf
s.MsRcvTsbPdDelay = ss.Instantaneous.MsRecvTsbPdDelay
s.PktReorderTolerance = ss.Instantaneous.PktReorderTolerance
s.PktRcvAvgBelatedTime = ss.Instantaneous.PktRecvAvgBelatedTime
}
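The mapping above reflects how the gosrt Statistics value splits lifetime counters (Accumulated) from point-in-time gauges (Instantaneous); given a filled statistics value ss, however it was obtained:

total := ss.Accumulated.ByteSent // bytes sent since the connection started
rtt := ss.Instantaneous.MsRTT    // current round-trip time estimate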
type SRTLog struct {

http/cache/lru.go (vendored, 59 changed lines)
View File

@@ -11,23 +11,25 @@ import (
// LRUConfig is the configuration for a new LRU cache
type LRUConfig struct {
TTL time.Duration // For how long the object should stay in cache
MaxSize uint64 // Max. size of the cache, 0 for unlimited, bytes
MaxFileSize uint64 // Max. file size allowed to put in cache, 0 for unlimited, bytes
Extensions []string // List of file extension allowed to cache, empty list for all files
Logger log.Logger
TTL time.Duration // For how long the object should stay in cache
MaxSize uint64 // Max. size of the cache, 0 for unlimited, bytes
MaxFileSize uint64 // Max. file size allowed to put in cache, 0 for unlimited, bytes
AllowExtensions []string // List of file extensions allowed to cache, empty list for all files
BlockExtensions []string // List of file extensions not allowed to cache, empty list for none
Logger log.Logger
}
type lrucache struct {
ttl time.Duration
maxSize uint64
maxFileSize uint64
extensions []string
objects map[string]*list.Element
list *list.List
size uint64
lock sync.Mutex
logger log.Logger
ttl time.Duration
maxSize uint64
maxFileSize uint64
allowExtensions []string
blockExtensions []string
objects map[string]*list.Element
list *list.List
size uint64
lock sync.Mutex
logger log.Logger
}
type value struct {
@@ -53,11 +55,14 @@ func NewLRUCache(config LRUConfig) (Cacher, error) {
}
if cache.logger == nil {
cache.logger = log.New("HTTPCache")
cache.logger = log.New("")
}
cache.extensions = make([]string, len(config.Extensions))
copy(cache.extensions, config.Extensions)
cache.allowExtensions = make([]string, len(config.AllowExtensions))
copy(cache.allowExtensions, config.AllowExtensions)
cache.blockExtensions = make([]string, len(config.BlockExtensions))
copy(cache.blockExtensions, config.BlockExtensions)
return cache, nil
}
@@ -199,19 +204,27 @@ func (c *lrucache) TTL() time.Duration {
}
func (c *lrucache) IsExtensionCacheable(extension string) bool {
if len(c.extensions) == 0 {
if len(c.allowExtensions) == 0 && len(c.blockExtensions) == 0 {
return true
}
cacheable := false
for _, e := range c.extensions {
for _, e := range c.blockExtensions {
if extension == e {
cacheable = true
break
return false
}
}
return cacheable
if len(c.allowExtensions) == 0 {
return true
}
for _, e := range c.allowExtensions {
if extension == e {
return true
}
}
return false
}
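The resulting precedence, spelled out under assumed inputs (the extension strings are illustrative):

// allow=[]        block=[]        -> every extension is cacheable
// allow=[]        block=[".m3u8"] -> everything except ".m3u8"
// allow=[".html"] block=[".m3u8"] -> only ".html"
// allow=[".ts"]   block=[".ts"]   -> ".ts" is rejected (block wins)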
func (c *lrucache) IsSizeCacheable(size uint64) bool {

View File

@@ -8,11 +8,12 @@ import (
)
var defaultConfig = LRUConfig{
TTL: time.Hour,
MaxSize: 128,
MaxFileSize: 0,
Extensions: []string{".html", ".js", ".jpg"},
Logger: nil,
TTL: time.Hour,
MaxSize: 128,
MaxFileSize: 0,
AllowExtensions: []string{".html", ".js", ".jpg"},
BlockExtensions: []string{".m3u8"},
Logger: nil,
}
func getCache(t *testing.T) *lrucache {
@@ -27,8 +28,6 @@ func TestNew(t *testing.T) {
TTL: time.Hour,
MaxSize: 128,
MaxFileSize: 129,
Extensions: []string{},
Logger: nil,
})
require.NotEqual(t, nil, err)
@@ -36,8 +35,6 @@ func TestNew(t *testing.T) {
TTL: time.Hour,
MaxSize: 0,
MaxFileSize: 129,
Extensions: []string{},
Logger: nil,
})
require.Equal(t, nil, err)
@@ -45,8 +42,6 @@ func TestNew(t *testing.T) {
TTL: time.Hour,
MaxSize: 128,
MaxFileSize: 127,
Extensions: []string{},
Logger: nil,
})
require.Equal(t, nil, err)
}
@@ -144,7 +139,7 @@ func TestLRU(t *testing.T) {
require.NotEqual(t, nil, data)
}
func TestExtension(t *testing.T) {
func TestAllowExtension(t *testing.T) {
cache := getCache(t)
r := cache.IsExtensionCacheable(".html")
@@ -154,6 +149,17 @@ func TestExtension(t *testing.T) {
require.Equal(t, false, r)
}
func TestBlockExtension(t *testing.T) {
cache := getCache(t)
cache.allowExtensions = []string{}
r := cache.IsExtensionCacheable(".html")
require.Equal(t, true, r)
r = cache.IsExtensionCacheable(".m3u8")
require.Equal(t, false, r)
}
func TestSize(t *testing.T) {
cache := getCache(t)

View File

@@ -12,7 +12,7 @@ import (
func (r *queryResolver) Log(ctx context.Context) ([]string, error) {
if r.LogBuffer == nil {
r.LogBuffer = log.NewBufferWriter(log.Lsilent, 1)
r.LogBuffer = log.NewBufferWriter(1)
}
events := r.LogBuffer.Events()

View File

@@ -10,7 +10,7 @@ import (
)
func (r *queryResolver) Processes(ctx context.Context) ([]*models.Process, error) {
ids := r.Restream.GetProcessIDs()
ids := r.Restream.GetProcessIDs("", "")
procs := []*models.Process{}

View File

@@ -2,7 +2,7 @@ package resolver
import (
"bytes"
"io/ioutil"
"io"
"net/http"
"time"
@@ -79,7 +79,7 @@ func (r *queryResolver) playoutRequest(method, addr, path, contentType string, d
defer response.Body.Close()
// Read the whole response
data, err = ioutil.ReadAll(response.Body)
data, err = io.ReadAll(response.Body)
if err != nil {
return nil, err
}

View File

@@ -1,11 +1,13 @@
package api
import (
"io"
"net/http"
"github.com/datarhei/core/v16/config"
cfgstore "github.com/datarhei/core/v16/config/store"
cfgvars "github.com/datarhei/core/v16/config/vars"
"github.com/datarhei/core/v16/encoding/json"
"github.com/datarhei/core/v16/http/api"
"github.com/datarhei/core/v16/http/handler/util"
"github.com/labstack/echo/v4"
)
@@ -13,11 +15,11 @@ import (
// The ConfigHandler type provides handler functions for reading and manipulating
// the current config.
type ConfigHandler struct {
store config.Store
store cfgstore.Store
}
// NewConfig return a new Config type. You have to provide a valid config store.
func NewConfig(store config.Store) *ConfigHandler {
func NewConfig(store cfgstore.Store) *ConfigHandler {
return &ConfigHandler{
store: store,
}
@@ -26,6 +28,7 @@ func NewConfig(store config.Store) *ConfigHandler {
// Get returns the currently active Restreamer configuration
// @Summary Retrieve the currently active Restreamer configuration
// @Description Retrieve the currently active Restreamer configuration
// @Tags v16.7.2
// @ID config-3-get
// @Produce json
// @Success 200 {object} api.Config
@@ -43,6 +46,7 @@ func (p *ConfigHandler) Get(c echo.Context) error {
// Set will set the given configuration as new active configuration
// @Summary Update the current Restreamer configuration
// @Description Update the current Restreamer configuration by providing a complete or partial configuration. Fields that are not provided will not be changed.
// @Tags v16.7.2
// @ID config-3-set
// @Accept json
// @Produce json
@@ -53,25 +57,73 @@ func (p *ConfigHandler) Get(c echo.Context) error {
// @Security ApiKeyAuth
// @Router /api/v3/config [put]
func (p *ConfigHandler) Set(c echo.Context) error {
cfg := p.store.Get()
version := api.ConfigVersion{}
// Set the current config as default config value. This will
// allow to set a partial config without destroying the other
// values.
setConfig := api.NewSetConfig(cfg)
req := c.Request()
if err := util.ShouldBindJSON(c, &setConfig); err != nil {
body, err := io.ReadAll(req.Body)
if err != nil {
return api.Err(http.StatusBadRequest, "Invalid JSON", "%s", err)
}
// Merge it into the current config
setConfig.MergeTo(cfg)
if err := json.Unmarshal(body, &version); err != nil {
return api.Err(http.StatusBadRequest, "Invalid JSON", "%s", json.FormatError(body, err))
}
cfg := p.store.Get()
// For each version, use the current config as the default value. This
// allows setting a partial config without destroying the other values.
if version.Version == 1 {
// Downgrade to v1 in order to have a populated v1 config
v1SetConfig := api.NewSetConfigV1(cfg)
if err := json.Unmarshal(body, &v1SetConfig); err != nil {
return api.Err(http.StatusBadRequest, "Invalid JSON", "%s", json.FormatError(body, err))
}
if err := c.Validate(v1SetConfig); err != nil {
return api.Err(http.StatusBadRequest, "Invalid JSON", "%s", err)
}
// Merge it into the current config
v1SetConfig.MergeTo(cfg)
} else if version.Version == 2 {
// Downgrade to v2 in order to have a populated v2 config
v2SetConfig := api.NewSetConfigV2(cfg)
if err := json.Unmarshal(body, &v2SetConfig); err != nil {
return api.Err(http.StatusBadRequest, "Invalid JSON", "%s", json.FormatError(body, err))
}
if err := c.Validate(v2SetConfig); err != nil {
return api.Err(http.StatusBadRequest, "Invalid JSON", "%s", err)
}
// Merge it into the current config
v2SetConfig.MergeTo(cfg)
} else if version.Version == 3 {
v3SetConfig := api.NewSetConfig(cfg)
if err := json.Unmarshal(body, &v3SetConfig); err != nil {
return api.Err(http.StatusBadRequest, "Invalid JSON", "%s", json.FormatError(body, err))
}
if err := c.Validate(v3SetConfig); err != nil {
return api.Err(http.StatusBadRequest, "Invalid JSON", "%s", err)
}
// Merge it into the current config
v3SetConfig.MergeTo(cfg)
} else {
return api.Err(http.StatusBadRequest, "Invalid config version", "version %d", version.Version)
}
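// The version field thus selects the schema for the entire body. Hedged
// example payloads for PUT /api/v3/config; every key besides "version"
// is illustrative (the v1 "types" shape is taken from the tests below):
//
//	{"version": 1, "storage": {"disk": {"cache": {"types": [".html"]}}}}
//	{"version": 3, "storage": {"disk": {"cache": {"types": {"allow": [".html"], "block": [".m3u8"]}}}}}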
// Now we make a copy of the config and merge it with the environment
// variables. If this merged configuration is valid, we will store the
// un-merged one to disk.
mergedConfig := config.NewConfigFrom(cfg)
mergedConfig := cfg.Clone()
mergedConfig.Merge()
// Validate the new merged config
@@ -79,7 +131,7 @@ func (p *ConfigHandler) Set(c echo.Context) error {
if mergedConfig.HasErrors() {
errors := make(map[string][]string)
mergedConfig.Messages(func(level string, v config.Variable, message string) {
mergedConfig.Messages(func(level string, v cfgvars.Variable, message string) {
if level != "error" {
return
}
@@ -106,6 +158,7 @@ func (p *ConfigHandler) Set(c echo.Context) error {
// Reload will reload the currently active configuration
// @Summary Reload the currently active configuration
// @Description Reload the currently active configuration. This will trigger a restart of the Restreamer.
// @Tags v16.7.2
// @ID config-3-reload
// @Produce plain
// @Success 200 {string} string "OK"

View File

@@ -7,25 +7,28 @@ import (
"testing"
"github.com/datarhei/core/v16/config"
"github.com/datarhei/core/v16/config/store"
v1 "github.com/datarhei/core/v16/config/v1"
"github.com/datarhei/core/v16/http/mock"
"github.com/labstack/echo/v4"
"github.com/stretchr/testify/require"
)
func getDummyConfigRouter() *echo.Echo {
func getDummyConfigRouter() (*echo.Echo, store.Store) {
router := mock.DummyEcho()
config := config.NewDummyStore()
config := store.NewDummy()
handler := NewConfig(config)
router.Add("GET", "/", handler.Get)
router.Add("PUT", "/", handler.Set)
return router
return router, config
}
func TestConfigGet(t *testing.T) {
router := getDummyConfigRouter()
router, _ := getDummyConfigRouter()
mock.Request(t, http.StatusOK, router, "GET", "/", nil)
@@ -33,7 +36,7 @@ func TestConfigGet(t *testing.T) {
}
func TestConfigSetConflict(t *testing.T) {
router := getDummyConfigRouter()
router, _ := getDummyConfigRouter()
var data bytes.Buffer
@@ -44,18 +47,86 @@ func TestConfigSetConflict(t *testing.T) {
}
func TestConfigSet(t *testing.T) {
router := getDummyConfigRouter()
router, store := getDummyConfigRouter()
storedcfg := store.Get()
require.Equal(t, []string{}, storedcfg.Host.Name)
var data bytes.Buffer
encoder := json.NewEncoder(&data)
// Setting a new v3 config
cfg := config.New()
cfg.FFmpeg.Binary = "true"
cfg.DB.Dir = "."
cfg.Storage.Disk.Dir = "."
cfg.Storage.MimeTypes = ""
cfg.Storage.Disk.Cache.Types.Allow = []string{".aaa"}
cfg.Storage.Disk.Cache.Types.Block = []string{".zzz"}
cfg.Host.Name = []string{"foobar.com"}
encoder := json.NewEncoder(&data)
encoder.Encode(cfg)
mock.Request(t, http.StatusOK, router, "PUT", "/", &data)
storedcfg = store.Get()
require.Equal(t, []string{"foobar.com"}, storedcfg.Host.Name)
require.Equal(t, []string{".aaa"}, cfg.Storage.Disk.Cache.Types.Allow)
require.Equal(t, []string{".zzz"}, cfg.Storage.Disk.Cache.Types.Block)
require.Equal(t, "cert@datarhei.com", cfg.TLS.Email)
// Setting a complete v1 config
cfgv1 := v1.New()
cfgv1.FFmpeg.Binary = "true"
cfgv1.DB.Dir = "."
cfgv1.Storage.Disk.Dir = "."
cfgv1.Storage.MimeTypes = ""
cfgv1.Storage.Disk.Cache.Types = []string{".bbb"}
cfgv1.Host.Name = []string{"foobar.com"}
data.Reset()
encoder.Encode(cfgv1)
mock.Request(t, http.StatusOK, router, "PUT", "/", &data)
storedcfg = store.Get()
require.Equal(t, []string{"foobar.com"}, storedcfg.Host.Name)
require.Equal(t, []string{".bbb"}, storedcfg.Storage.Disk.Cache.Types.Allow)
require.Equal(t, []string{".zzz"}, storedcfg.Storage.Disk.Cache.Types.Block)
require.Equal(t, "cert@datarhei.com", cfg.TLS.Email)
// Setting a partial v1 config
type customconfig struct {
Version int `json:"version"`
Storage struct {
Disk struct {
Cache struct {
Types []string `json:"types"`
} `json:"cache"`
} `json:"disk"`
} `json:"storage"`
}
customcfg := customconfig{
Version: 1,
}
customcfg.Storage.Disk.Cache.Types = []string{".ccc"}
data.Reset()
encoder.Encode(customcfg)
mock.Request(t, http.StatusOK, router, "PUT", "/", &data)
storedcfg = store.Get()
require.Equal(t, []string{"foobar.com"}, storedcfg.Host.Name)
require.Equal(t, []string{".ccc"}, storedcfg.Storage.Disk.Cache.Types.Allow)
require.Equal(t, []string{".zzz"}, storedcfg.Storage.Disk.Cache.Types.Block)
require.Equal(t, "cert@datarhei.com", cfg.TLS.Email)
}

View File

@@ -34,6 +34,7 @@ func NewDiskFS(fs fs.Filesystem, cache cache.Cacher) *DiskFSHandler {
// GetFile returns the file at the given path
// @Summary Fetch a file from the filesystem
// @Description Fetch a file from the filesystem. The contents of that file are returned.
// @Tags v16.7.2
// @ID diskfs-3-get-file
// @Produce application/data
// @Produce json
@@ -86,6 +87,7 @@ func (h *DiskFSHandler) GetFile(c echo.Context) error {
// PutFile adds or overwrites a file at the given path
// @Summary Add a file to the filesystem
// @Description Writes or overwrites a file on the filesystem
// @Tags v16.7.2
// @ID diskfs-3-put-file
// @Accept application/data
// @Produce text/plain
@@ -125,6 +127,7 @@ func (h *DiskFSHandler) PutFile(c echo.Context) error {
// DeleteFile removes a file from the filesystem
// @Summary Remove a file from the filesystem
// @Description Remove a file from the filesystem
// @Tags v16.7.2
// @ID diskfs-3-delete-file
// @Produce text/plain
// @Param path path string true "Path to file"
@@ -153,6 +156,7 @@ func (h *DiskFSHandler) DeleteFile(c echo.Context) error {
// ListFiles lists all files on the filesystem
// @Summary List all files on the filesystem
// @Description List all files on the filesystem. The listing can be ordered by name, size, or date of last modification in ascending or descending order.
// @Tags v16.7.2
// @ID diskfs-3-list-files
// @Produce json
// @Param glob query string false "glob pattern for file names"
@@ -160,7 +164,7 @@ func (h *DiskFSHandler) DeleteFile(c echo.Context) error {
// @Param order query string false "asc, desc"
// @Success 200 {array} api.FileInfo
// @Security ApiKeyAuth
// @Router /api/v3/fs/disk/ [get]
// @Router /api/v3/fs/disk [get]
func (h *DiskFSHandler) ListFiles(c echo.Context) error {
pattern := util.DefaultQuery(c, "glob", "")
sortby := util.DefaultQuery(c, "sort", "none")
@@ -193,14 +197,18 @@ func (h *DiskFSHandler) ListFiles(c echo.Context) error {
sort.Slice(files, sortFunc)
var fileinfos []api.FileInfo = make([]api.FileInfo, len(files))
fileinfos := []api.FileInfo{}
for i, f := range files {
fileinfos[i] = api.FileInfo{
for _, f := range files {
if f.IsDir() {
continue
}
fileinfos = append(fileinfos, api.FileInfo{
Name: f.Name(),
Size: f.Size(),
LastMod: f.ModTime().Unix(),
}
})
}
return c.JSON(http.StatusOK, fileinfos)

View File

@@ -22,7 +22,7 @@ func NewLog(buffer log.BufferWriter) *LogHandler {
}
if l.buffer == nil {
l.buffer = log.NewBufferWriter(log.Lsilent, 1)
l.buffer = log.NewBufferWriter(1)
}
return l
@@ -31,6 +31,7 @@ func NewLog(buffer log.BufferWriter) *LogHandler {
// Log returns the last log lines of the Restreamer application
// @Summary Application log
// @Description Get the last log lines of the Restreamer application
// @Tags v16.7.2
// @ID log-3
// @Param format query string false "Format of the list of log events (*console, raw)"
// @Produce json

View File

@@ -1,7 +1,7 @@
package api
import (
"io/ioutil"
"io"
"net/http"
"net/url"
"sort"
@@ -31,7 +31,8 @@ func NewMemFS(fs fs.Filesystem) *MemFSHandler {
// GetFileAPI returns the file at the given path
// @Summary Fetch a file from the memory filesystem
// @Description Fetch a file from the memory filesystem
// @ID memfs-3-get-file-api
// @Tags v16.7.2
// @ID memfs-3-get-file
// @Produce application/data
// @Produce json
// @Param path path string true "Path to file"
@@ -47,7 +48,8 @@ func (h *MemFSHandler) GetFile(c echo.Context) error {
// PutFileAPI adds or overwrites a file at the given path
// @Summary Add a file to the memory filesystem
// @Description Writes or overwrites a file on the memory filesystem
// @ID memfs-3-put-file-api
// @Tags v16.7.2
// @ID memfs-3-put-file
// @Accept application/data
// @Produce text/plain
// @Produce json
@@ -65,7 +67,8 @@ func (h *MemFSHandler) PutFile(c echo.Context) error {
// DeleteFileAPI removes a file from the filesystem
// @Summary Remove a file from the memory filesystem
// @Description Remove a file from the memory filesystem
// @ID memfs-delete-file-api
// @Tags v16.7.2
// @ID memfs-3-delete-file
// @Produce text/plain
// @Param path path string true "Path to file"
// @Success 200 {string} string
@@ -79,6 +82,7 @@ func (h *MemFSHandler) DeleteFile(c echo.Context) error {
// PatchFile creates a symbolic link to a file in the filesystem
// @Summary Create a link to a file in the memory filesystem
// @Description Create a link to a file in the memory filesystem. The file linked to has to exist.
// @Tags v16.7.2
// @ID memfs-3-patch
// @Accept application/data
// @Produce text/plain
@@ -96,7 +100,7 @@ func (h *MemFSHandler) PatchFile(c echo.Context) error {
req := c.Request()
body, err := ioutil.ReadAll(req.Body)
body, err := io.ReadAll(req.Body)
if err != nil {
return api.Err(http.StatusBadRequest, "Failed reading request body", "%s", err)
}
@@ -118,6 +122,7 @@ func (h *MemFSHandler) PatchFile(c echo.Context) error {
// ListFiles lists all files on the filesystem
// @Summary List all files on the memory filesystem
// @Description List all files on the memory filesystem. The listing can be ordered by name, size, or date of last modification in ascending or descending order.
// @Tags v16.7.2
// @ID memfs-3-list-files
// @Produce json
// @Param glob query string false "glob pattern for file names"
@@ -125,7 +130,7 @@ func (h *MemFSHandler) PatchFile(c echo.Context) error {
// @Param order query string false "asc, desc"
// @Success 200 {array} api.FileInfo
// @Security ApiKeyAuth
// @Router /api/v3/fs/mem/ [get]
// @Router /api/v3/fs/mem [get]
func (h *MemFSHandler) ListFiles(c echo.Context) error {
pattern := util.DefaultQuery(c, "glob", "")
sortby := util.DefaultQuery(c, "sort", "none")

View File

@@ -2,6 +2,7 @@ package api
import (
"net/http"
"sort"
"time"
"github.com/datarhei/core/v16/http/api"
@@ -28,9 +29,39 @@ func NewMetrics(config MetricsConfig) *MetricsHandler {
}
}
// Describe the known metrics
// @Summary List all known metrics with their description and labels
// @Description List all known metrics with their description and labels
// @Tags v16.10.0
// @ID metrics-3-describe
// @Produce json
// @Success 200 {array} api.MetricsDescription
// @Security ApiKeyAuth
// @Router /api/v3/metrics [get]
func (r *MetricsHandler) Describe(c echo.Context) error {
response := []api.MetricsDescription{}
descriptors := r.metrics.Describe()
for _, d := range descriptors {
response = append(response, api.MetricsDescription{
Name: d.Name(),
Description: d.Description(),
Labels: d.Labels(),
})
}
sort.Slice(response, func(i, j int) bool {
return response[i].Name < response[j].Name
})
return c.JSON(http.StatusOK, response)
}
// Query the collected metrics
// @Summary Query the collected metrics
// @Description Query the collected metrics
// @Tags v16.7.2
// @ID metrics-3-metrics
// @Accept json
// @Produce json

View File

@@ -3,7 +3,7 @@ package api
import (
"bytes"
"encoding/json"
"io/ioutil"
"io"
"net/http"
"strings"
"time"
@@ -31,7 +31,8 @@ func NewPlayout(restream restream.Restreamer) *PlayoutHandler {
// Status return the current playout status
// @Summary Get the current playout status
// @Description Get the current playout status of an input of a process
// @ID restream-3-playout-status
// @Tags v16.7.2
// @ID process-3-playout-status
// @Produce json
// @Param id path string true "Process ID"
// @Param inputid path string true "Process Input ID"
@@ -59,7 +60,7 @@ func (h *PlayoutHandler) Status(c echo.Context) error {
defer response.Body.Close()
// Read the whole response
data, err := ioutil.ReadAll(response.Body)
data, err := io.ReadAll(response.Body)
if err != nil {
return api.Err(http.StatusInternalServerError, "", "%s", err)
}
@@ -84,7 +85,8 @@ func (h *PlayoutHandler) Status(c echo.Context) error {
// Keyframe returns the last keyframe
// @Summary Get the last keyframe
// @Description Get the last keyframe of an input of a process. The extension of the name determines the return type.
// @ID restream-3-playout-keyframe
// @Tags v16.7.2
// @ID process-3-playout-keyframe
// @Produce image/jpeg
// @Produce image/png
// @Produce json
@@ -122,7 +124,7 @@ func (h *PlayoutHandler) Keyframe(c echo.Context) error {
defer response.Body.Close()
// Read the whole response
data, err := ioutil.ReadAll(response.Body)
data, err := io.ReadAll(response.Body)
if err != nil {
return api.Err(http.StatusInternalServerError, "", "%s", err)
}
@@ -133,7 +135,8 @@ func (h *PlayoutHandler) Keyframe(c echo.Context) error {
// EncodeErrorframe encodes the errorframe
// @Summary Encode the errorframe
// @Description Immediately encode the errorframe (if available and looping)
// @ID restream-3-playout-errorframencode
// @Tags v16.7.2
// @ID process-3-playout-errorframencode
// @Produce text/plain
// @Produce json
// @Param id path string true "Process ID"
@@ -162,7 +165,7 @@ func (h *PlayoutHandler) EncodeErrorframe(c echo.Context) error {
defer response.Body.Close()
// Read the whole response
data, err := ioutil.ReadAll(response.Body)
data, err := io.ReadAll(response.Body)
if err != nil {
return api.Err(http.StatusInternalServerError, "", "%s", err)
}
@@ -173,7 +176,8 @@ func (h *PlayoutHandler) EncodeErrorframe(c echo.Context) error {
// SetErrorframe sets an errorframe
// @Summary Upload an error frame
// @Description Upload an error frame which will be encoded immediately
// @ID restream-3-playout-errorframe
// @Tags v16.7.2
// @ID process-3-playout-errorframe
// @Produce text/plain
// @Produce json
// @Accept application/octet-stream
@@ -195,7 +199,7 @@ func (h *PlayoutHandler) SetErrorframe(c echo.Context) error {
return api.Err(http.StatusNotFound, "Unknown process or input", "%s", err)
}
data, err := ioutil.ReadAll(c.Request().Body)
data, err := io.ReadAll(c.Request().Body)
if err != nil {
return api.Err(http.StatusBadRequest, "Failed to read request body", "%s", err)
}
@@ -210,7 +214,7 @@ func (h *PlayoutHandler) SetErrorframe(c echo.Context) error {
defer response.Body.Close()
// Read the whole response
data, err = ioutil.ReadAll(response.Body)
data, err = io.ReadAll(response.Body)
if err != nil {
return api.Err(http.StatusInternalServerError, "", "%s", err)
}
@@ -221,7 +225,8 @@ func (h *PlayoutHandler) SetErrorframe(c echo.Context) error {
// ReopenInput closes the current input stream
// @Summary Close the current input stream
// @Description Close the current input stream such that it will be automatically re-opened
// @ID restream-3-playout-reopen-input
// @Tags v16.7.2
// @ID process-3-playout-reopen-input
// @Produce plain
// @Param id path string true "Process ID"
// @Param inputid path string true "Process Input ID"
@@ -249,7 +254,7 @@ func (h *PlayoutHandler) ReopenInput(c echo.Context) error {
defer response.Body.Close()
// Read the whole response
data, err := ioutil.ReadAll(response.Body)
data, err := io.ReadAll(response.Body)
if err != nil {
return api.Err(http.StatusInternalServerError, "", "%s", err)
}
@@ -260,7 +265,8 @@ func (h *PlayoutHandler) ReopenInput(c echo.Context) error {
// SetStream replaces the current stream
// @Summary Switch to a new stream
// @Description Replace the current stream with the one from the given URL. The switch will only happen if the stream parameters match.
// @ID restream-3-playout-stream
// @Tags v16.7.2
// @ID process-3-playout-stream
// @Produce text/plain
// @Produce json
// @Accept text/plain
@@ -281,7 +287,7 @@ func (h *PlayoutHandler) SetStream(c echo.Context) error {
return api.Err(http.StatusNotFound, "Unknown process or input", "%s", err)
}
data, err := ioutil.ReadAll(c.Request().Body)
data, err := io.ReadAll(c.Request().Body)
if err != nil {
return api.Err(http.StatusBadRequest, "Failed to read request body", "%s", err)
}
@@ -296,7 +302,7 @@ func (h *PlayoutHandler) SetStream(c echo.Context) error {
defer response.Body.Close()
// Read the whole response
data, err = ioutil.ReadAll(response.Body)
data, err = io.ReadAll(response.Body)
if err != nil {
return api.Err(http.StatusInternalServerError, "", "%s", err)
}

View File

@@ -27,7 +27,8 @@ func NewRestream(restream restream.Restreamer) *RestreamHandler {
// Add adds a new process
// @Summary Add a new process
// @Description Add a new FFmpeg process
// @ID restream-3-add
// @Tags v16.7.2
// @ID process-3-add
// @Accept json
// @Produce json
// @Param config body api.ProcessConfig true "Process config"
@@ -50,7 +51,7 @@ func (h *RestreamHandler) Add(c echo.Context) error {
return api.Err(http.StatusBadRequest, "Unsupported process type", "Supported process types are: ffmpeg")
}
if len(process.Input) == 0 && len(process.Output) == 0 {
if len(process.Input) == 0 || len(process.Output) == 0 {
return api.Err(http.StatusBadRequest, "At least one input and one output need to be defined")
}
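The corrected guard matters: with &&, a config that had inputs but no outputs (or vice versa) still passed validation; with ||, at least one of each is required. A minimal sketch of the fixed check:

package main

import "fmt"

// validateIO mirrors the corrected guard from Add: a process needs
// at least one input AND at least one output. The old && condition
// only rejected configs where both lists were empty.
func validateIO(inputs, outputs []string) error {
	if len(inputs) == 0 || len(outputs) == 0 {
		return fmt.Errorf("at least one input and one output need to be defined")
	}
	return nil
}

func main() {
	fmt.Println(validateIO([]string{"rtmp://in"}, nil))             // rejected: no output
	fmt.Println(validateIO([]string{"rtmp://in"}, []string{"out"})) // <nil>
}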
@@ -68,11 +69,14 @@ func (h *RestreamHandler) Add(c echo.Context) error {
// GetAll returns all known processes
// @Summary List all known processes
// @Description List all known processes. Use the query parameter to filter the listed processes.
// @ID restream-3-get-all
// @Tags v16.7.2
// @ID process-3-get-all
// @Produce json
// @Param filter query string false "Comma separated list of fields (config, state, report, metadata) that will be part of the output. If empty, all fields will be part of the output"
// @Param reference query string false "Return only these process that have this reference value. Overrides a list of IDs. If empty, the reference will be ignored"
// @Param id query string false "Comma separated list of process ids to list"
// @Param filter query string false "Comma separated list of fields (config, state, report, metadata) that will be part of the output. If empty, all fields will be part of the output."
// @Param reference query string false "Return only those processes that have this reference value. If empty, the reference will be ignored."
// @Param id query string false "Comma separated list of process ids to list. Overrides the reference. If empty, all IDs will be returned."
// @Param idpattern query string false "Glob pattern for process IDs. If empty, all IDs will be returned. Intersected with results from refpattern."
// @Param refpattern query string false "Glob pattern for process references. If empty, all IDs will be returned. Intersected with results from idpattern."
// @Success 200 {array} api.Process
// @Security ApiKeyAuth
// @Router /api/v3/process [get]
@@ -82,8 +86,10 @@ func (h *RestreamHandler) GetAll(c echo.Context) error {
wantids := strings.FieldsFunc(util.DefaultQuery(c, "id", ""), func(r rune) bool {
return r == rune(',')
})
idpattern := util.DefaultQuery(c, "idpattern", "")
refpattern := util.DefaultQuery(c, "refpattern", "")
ids := h.restream.GetProcessIDs()
ids := h.restream.GetProcessIDs(idpattern, refpattern)
processes := []api.Process{}
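GetProcessIDs now receives the two glob patterns, so the filtering happens in the restreamer rather than in the handler. A rough sketch of such pattern-intersection filtering, using the standard library's path.Match as a stand-in for the project's glob package (filterIDs and its inputs are hypothetical):

package main

import (
	"fmt"
	"path"
)

// filterIDs is a hypothetical stand-in for GetProcessIDs(idpattern, refpattern):
// an empty pattern matches everything, and both patterns must match,
// i.e. the results are intersected, as the swagger docs describe.
func filterIDs(procs map[string]string, idpattern, refpattern string) []string {
	ids := []string{}
	for id, ref := range procs {
		if idpattern != "" {
			if ok, _ := path.Match(idpattern, id); !ok {
				continue
			}
		}
		if refpattern != "" {
			if ok, _ := path.Match(refpattern, ref); !ok {
				continue
			}
		}
		ids = append(ids, id)
	}
	return ids
}

func main() {
	procs := map[string]string{"cam1": "studio", "cam2": "studio", "mix": "master"}
	fmt.Println(filterIDs(procs, "cam*", "")) // [cam1 cam2] (map order not guaranteed)
}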
@@ -114,7 +120,8 @@ func (h *RestreamHandler) GetAll(c echo.Context) error {
// Get returns the process with the given ID
// @Summary List a process by its ID
// @Description List a process by its ID. Use the filter parameter to specify the level of detail of the output.
// @ID restream-3-get
// @Tags v16.7.2
// @ID process-3-get
// @Produce json
// @Param id path string true "Process ID"
// @Param filter query string false "Comma separated list of fields (config, state, report, metadata) to be part of the output. If empty, all fields will be part of the output"
@@ -137,7 +144,8 @@ func (h *RestreamHandler) Get(c echo.Context) error {
// Delete deletes the process with the given ID
// @Summary Delete a process by its ID
// @Description Delete a process by its ID
// @ID restream-3-delete
// @Tags v16.7.2
// @ID process-3-delete
// @Produce json
// @Param id path string true "Process ID"
// @Success 200 {string} string
@@ -160,8 +168,9 @@ func (h *RestreamHandler) Delete(c echo.Context) error {
// Update replaces an existing process
// @Summary Replace an existing process
// @Description Replace an existing process. This is a shortcut for DELETE+POST.
// @ID restream-3-update
// @Description Replace an existing process.
// @Tags v16.7.2
// @ID process-3-update
// @Accept json
// @Produce json
// @Param id path string true "Process ID"
@@ -180,6 +189,14 @@ func (h *RestreamHandler) Update(c echo.Context) error {
Autostart: true,
}
current, err := h.restream.GetProcess(id)
if err != nil {
return api.Err(http.StatusNotFound, "Process not found", "%s", id)
}
// Prefill the config with the current values
process.Unmarshal(current.Config)
if err := util.ShouldBindJSON(c, &process); err != nil {
return api.Err(http.StatusBadRequest, "Invalid JSON", "%s", err)
}
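This prefill-then-bind sequence is what enables partial process config updates: the stored config is unmarshaled into the request struct first, and the JSON body then overwrites only the fields it actually contains. The same pattern in isolation:

package main

import (
	"encoding/json"
	"fmt"
)

type processConfig struct {
	ID        string `json:"id"`
	Reference string `json:"reference"`
	Autostart bool   `json:"autostart"`
}

func main() {
	// Prefill with the currently stored config ...
	cfg := processConfig{ID: "p1", Reference: "studio", Autostart: true}

	// ... then bind the request body on top. Only fields present in the
	// JSON overwrite the prefilled values, which makes the update partial.
	body := []byte(`{"reference":"master"}`)
	if err := json.Unmarshal(body, &cfg); err != nil {
		panic(err)
	}

	fmt.Printf("%+v\n", cfg) // {ID:p1 Reference:master Autostart:true}
}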
@@ -202,7 +219,8 @@ func (h *RestreamHandler) Update(c echo.Context) error {
// Command issues a command to a process
// @Summary Issue a command to a process
// @Description Issue a command to a process: start, stop, reload, restart
// @ID restream-3-command
// @Tags v16.7.2
// @ID process-3-command
// @Accept json
// @Produce json
// @Param id path string true "Process ID"
@@ -244,7 +262,8 @@ func (h *RestreamHandler) Command(c echo.Context) error {
// GetConfig returns the configuration of a process
// @Summary Get the configuration of a process
// @Description Get the configuration of a process. This is the configuration as provided by Add or Update.
// @ID restream-3-get-config
// @Tags v16.7.2
// @ID process-3-get-config
// @Produce json
// @Param id path string true "Process ID"
// @Success 200 {object} api.ProcessConfig
@@ -268,8 +287,9 @@ func (h *RestreamHandler) GetConfig(c echo.Context) error {
// GetState returns the current state of a process
// @Summary Get the state of a process
// @Description Get the state and progress data of a process
// @ID restream-3-get-state
// @Description Get the state and progress data of a process.
// @Tags v16.7.2
// @ID process-3-get-state
// @Produce json
// @Param id path string true "Process ID"
// @Success 200 {object} api.ProcessState
@@ -293,8 +313,9 @@ func (h *RestreamHandler) GetState(c echo.Context) error {
// GetReport return the current log and the log history of a process
// @Summary Get the logs of a process
// @Description Get the logs and the log history of a process
// @ID restream-3-get-report
// @Description Get the logs and the log history of a process.
// @Tags v16.7.2
// @ID process-3-get-report
// @Produce json
// @Param id path string true "Process ID"
// @Success 200 {object} api.ProcessReport
@@ -318,8 +339,9 @@ func (h *RestreamHandler) GetReport(c echo.Context) error {
// Probe probes a process
// @Summary Probe a process
// @Description Probe an existing process to get a detailed stream information on the inputs
// @ID restream-3-probe
// @Description Probe an existing process to get detailed stream information on the inputs.
// @Tags v16.7.2
// @ID process-3-probe
// @Produce json
// @Param id path string true "Process ID"
// @Success 200 {object} api.Probe
@@ -338,7 +360,8 @@ func (h *RestreamHandler) Probe(c echo.Context) error {
// Skills returns the detected FFmpeg capabilities
// @Summary FFmpeg capabilities
// @Description List all detected FFmpeg capabilities
// @Description List all detected FFmpeg capabilities.
// @Tags v16.7.2
// @ID skills-3
// @Produce json
// @Success 200 {object} api.Skills
@@ -355,7 +378,8 @@ func (h *RestreamHandler) Skills(c echo.Context) error {
// ReloadSkills will refresh the FFmpeg capabilities
// @Summary Refresh FFmpeg capabilities
// @Description Refresh the available FFmpeg capabilities
// @Description Refresh the available FFmpeg capabilities.
// @Tags v16.7.2
// @ID skills-3-reload
// @Produce json
// @Success 200 {object} api.Skills
@@ -374,7 +398,8 @@ func (h *RestreamHandler) ReloadSkills(c echo.Context) error {
// GetProcessMetadata returns the metadata stored with a process
// @Summary Retrieve JSON metadata stored with a process under a key
// @Description Retrieve the previously stored JSON metadata under the given key. If the key is empty, all metadata will be returned.
// @ID restream-3-get-process-metadata
// @Tags v16.7.2
// @ID process-3-get-process-metadata
// @Produce json
// @Param id path string true "Process ID"
// @Param key path string true "Key for data store"
@@ -398,7 +423,8 @@ func (h *RestreamHandler) GetProcessMetadata(c echo.Context) error {
// SetProcessMetadata stores metadata with a process
// @Summary Add JSON metadata with a process under the given key
// @Description Add arbitrary JSON metadata under the given key. If the key exists, all already stored metadata with this key will be overwritten. If the key doesn't exist, it will be created.
// @ID restream-3-set-process-metadata
// @Tags v16.7.2
// @ID process-3-set-process-metadata
// @Produce json
// @Param id path string true "Process ID"
// @Param key path string true "Key for data store"
@@ -432,6 +458,7 @@ func (h *RestreamHandler) SetProcessMetadata(c echo.Context) error {
// GetMetadata returns the metadata stored with the Restreamer
// @Summary Retrieve JSON metadata from a key
// @Description Retrieve the previously stored JSON metadata under the given key. If the key is empty, all metadata will be returned.
// @Tags v16.7.2
// @ID metadata-3-get
// @Produce json
// @Param key path string true "Key for data store"
@@ -454,6 +481,7 @@ func (h *RestreamHandler) GetMetadata(c echo.Context) error {
// SetMetadata stores metadata with the Restreamer
// @Summary Add JSON metadata under the given key
// @Description Add arbitrary JSON metadata under the given key. If the key exists, all already stored metadata with this key will be overwritten. If the key doesn't exist, it will be created.
// @Tags v16.7.2
// @ID metadata-3-set
// @Produce json
// @Param key path string true "Key for data store"

View File

@@ -23,7 +23,8 @@ func NewRTMP(rtmp rtmp.Server) *RTMPHandler {
// ListChannels lists all currently publishing RTMP streams
// @Summary List all publishing RTMP streams
// @Description List all currently publishing RTMP streams
// @Description List all currently publishing RTMP streams.
// @Tags v16.7.2
// @ID rtmp-3-list-channels
// @Produce json
// @Success 200 {array} api.RTMPChannel

View File

@@ -13,11 +13,11 @@ import (
// The SessionHandler type provides handlers to retrieve session information
type SessionHandler struct {
registry session.Registry
registry session.RegistryReader
}
// NewSession returns a new Session type. You have to provide a session registry.
func NewSession(registry session.Registry) *SessionHandler {
func NewSession(registry session.RegistryReader) *SessionHandler {
return &SessionHandler{
registry: registry,
}
@@ -25,7 +25,8 @@ func NewSession(registry session.Registry) *SessionHandler {
// Summary returns a summary of all active and past sessions
// @Summary Get a summary of all active and past sessions
// @Description Get a summary of all active and past sessions of the given collector
// @Description Get a summary of all active and past sessions of the given collector.
// @Tags v16.7.2
// @ID session-3-summary
// @Produce json
// @Security ApiKeyAuth
@@ -49,7 +50,8 @@ func (s *SessionHandler) Summary(c echo.Context) error {
// Active returns a list of active sessions
// @Summary Get a minimal summary of all active sessions
// @Description Get a minimal summary of all active sessions (i.e. number of sessions, bandwidth)
// @Description Get a minimal summary of all active sessions (i.e. number of sessions, bandwidth).
// @Tags v16.7.2
// @ID session-3-current
// @Produce json
// @Security ApiKeyAuth

View File

@@ -24,6 +24,7 @@ func NewSRT(srt srt.Server) *SRTHandler {
// ListChannels lists all currently publishing SRT streams
// @Summary List all publishing SRT streams
// @Description List all currently publishing SRT streams. This endpoint is EXPERIMENTAL and may change in future.
// @Tags v16.9.0
// @ID srt-3-list-channels
// @Produce json
// @Success 200 {array} api.SRTChannels

View File

@@ -2,6 +2,7 @@ package api
import (
"net/http"
"strings"
"github.com/datarhei/core/v16/http/api"
"github.com/datarhei/core/v16/http/handler/util"
@@ -13,13 +14,13 @@ import (
type WidgetConfig struct {
Restream restream.Restreamer
Registry session.Registry
Registry session.RegistryReader
}
// The WidgetHandler type provides handlers for the widget API
type WidgetHandler struct {
restream restream.Restreamer
registry session.Registry
registry session.RegistryReader
}
// NewWidget returns a new Widget type
@@ -33,6 +34,7 @@ func NewWidget(config WidgetConfig) *WidgetHandler {
// Get returns minimal public statistics about a process
// @Summary Fetch minimal statistics about a process
// @Description Fetch minimal statistics about a process, which is not protected by any auth.
// @Tags v16.7.2
// @ID widget-3-get
// @Produce json
// @Param id path string true "ID of a process"
@@ -73,13 +75,19 @@ func (w *WidgetHandler) Get(c echo.Context) error {
summary := collector.Summary()
for _, session := range summary.Active {
if session.Reference == process.Reference {
data.CurrentSessions++
if !strings.HasPrefix(session.Reference, process.Reference) {
continue
}
data.CurrentSessions++
}
if s, ok := summary.Summary.References[process.Reference]; ok {
data.TotalSessions = s.TotalSessions
for reference, s := range summary.Summary.References {
if !strings.HasPrefix(reference, process.Reference) {
continue
}
data.TotalSessions += s.TotalSessions
}
return c.JSON(http.StatusOK, data)
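The widget now matches session references by prefix instead of strict equality and sums the totals over all matching references. The aggregation in isolation (the "studio:…" reference values are made up for illustration):

package main

import (
	"fmt"
	"strings"
)

func main() {
	processRef := "studio"
	totals := map[string]uint64{"studio:hls": 10, "studio:rtmp": 4, "master": 7}

	var totalSessions uint64
	for reference, n := range totals {
		// Prefix match instead of equality: sub-references of the
		// process reference are included in the widget's totals.
		if !strings.HasPrefix(reference, processRef) {
			continue
		}
		totalSessions += n
	}
	fmt.Println(totalSessions) // 14
}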

View File

@@ -2,7 +2,7 @@ package util
import (
"fmt"
"io/ioutil"
"io"
"net/url"
"strings"
@@ -24,7 +24,7 @@ func ShouldBindJSONValidation(c echo.Context, obj interface{}, validate bool) er
return fmt.Errorf("request doesn't contain JSON content")
}
body, err := ioutil.ReadAll(req.Body)
body, err := io.ReadAll(req.Body)
if err != nil {
return err
}

View File

@@ -6,7 +6,7 @@ import (
"crypto"
"encoding/json"
"errors"
"io/ioutil"
"io"
"net/http"
"sync"
"time"
@@ -324,7 +324,7 @@ func (j *jwksImpl) refresh() (err error) {
// Read the raw JWKs from the body of the response.
var jwksBytes []byte
if jwksBytes, err = ioutil.ReadAll(resp.Body); err != nil {
if jwksBytes, err = io.ReadAll(resp.Body); err != nil {
return err
}

View File

@@ -1,65 +0,0 @@
// Package bodysize is an echo middleware that fixes the final number of body bytes sent on the wire
package bodysize
import (
"net/http"
"github.com/labstack/echo/v4"
"github.com/labstack/echo/v4/middleware"
)
type Config struct {
Skipper middleware.Skipper
}
var DefaultConfig = Config{
Skipper: middleware.DefaultSkipper,
}
func New() echo.MiddlewareFunc {
return NewWithConfig(DefaultConfig)
}
// New return a new bodysize middleware handler
func NewWithConfig(config Config) echo.MiddlewareFunc {
if config.Skipper == nil {
config.Skipper = DefaultConfig.Skipper
}
return func(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
if config.Skipper(c) {
return next(c)
}
res := c.Response()
writer := res.Writer
w := &fakeWriter{
ResponseWriter: res.Writer,
}
res.Writer = w
defer func() {
res.Writer = writer
res.Size = w.size
}()
return next(c)
}
}
}
type fakeWriter struct {
http.ResponseWriter
size int64
}
func (w *fakeWriter) Write(body []byte) (int, error) {
n, err := w.ResponseWriter.Write(body)
w.size += int64(n)
return n, err
}

View File

@@ -57,31 +57,18 @@ func NewWithConfig(config Config) echo.MiddlewareFunc {
if req.Method != "GET" {
res.Header().Set("X-Cache", "SKIP ONLYGET")
if err := next(c); err != nil {
c.Error(err)
}
return nil
return next(c)
}
res.Header().Set("Cache-Control", fmt.Sprintf("max-age=%.0f", config.Cache.TTL().Seconds()))
key := strings.TrimPrefix(req.URL.Path, config.Prefix)
if !config.Cache.IsExtensionCacheable(path.Ext(req.URL.Path)) {
res.Header().Set("X-Cache", "SKIP EXT")
if err := next(c); err != nil {
c.Error(err)
}
return nil
return next(c)
}
if obj, expireIn, _ := config.Cache.Get(key); obj == nil {
// cache miss
writer := res.Writer
w := &cacheWriter{
@@ -105,6 +92,7 @@ func NewWithConfig(config Config) echo.MiddlewareFunc {
if res.Status != 200 {
res.Header().Set("X-Cache", "SKIP NOTOK")
res.Writer.WriteHeader(res.Status)
return nil
}
@@ -112,6 +100,7 @@ func NewWithConfig(config Config) echo.MiddlewareFunc {
if !config.Cache.IsSizeCacheable(size) {
res.Header().Set("X-Cache", "SKIP TOOBIG")
res.Writer.WriteHeader(res.Status)
return nil
}
@@ -123,11 +112,13 @@ func NewWithConfig(config Config) echo.MiddlewareFunc {
if err := config.Cache.Put(key, o, size); err != nil {
res.Header().Set("X-Cache", "SKIP TOOBIG")
res.Writer.WriteHeader(res.Status)
return nil
}
res.Header().Set("Cache-Control", fmt.Sprintf("max-age=%.0f", expireIn.Seconds()))
res.Header().Set("X-Cache", "MISS")
res.Writer.WriteHeader(res.Status)
} else {
// cache hit
o := obj.(*cacheObject)
@@ -190,7 +181,5 @@ func (w *cacheWriter) WriteHeader(code int) {
}
func (w *cacheWriter) Write(body []byte) (int, error) {
n, err := w.body.Write(body)
return n, err
return w.body.Write(body)
}

http/middleware/cache/cache_test.go vendored Normal file
View File

@@ -0,0 +1,100 @@
package cache
import (
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/datarhei/core/v16/http/cache"
"github.com/labstack/echo/v4"
"github.com/stretchr/testify/require"
)
func TestCache(t *testing.T) {
c, err := cache.NewLRUCache(cache.LRUConfig{
TTL: 300 * time.Second,
MaxSize: 0,
MaxFileSize: 16,
AllowExtensions: []string{".js"},
BlockExtensions: []string{".ts"},
Logger: nil,
})
require.NoError(t, err)
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/found.js", nil)
rec := httptest.NewRecorder()
ctx := e.NewContext(req, rec)
handler := NewWithConfig(Config{
Cache: c,
})(func(c echo.Context) error {
if c.Request().URL.Path == "/found.js" {
c.Response().Write([]byte("test"))
} else if c.Request().URL.Path == "/toobig.js" {
c.Response().Write([]byte("testtesttesttesttest"))
} else if c.Request().URL.Path == "/blocked.ts" {
c.Response().Write([]byte("blocked"))
}
c.Response().WriteHeader(http.StatusNotFound)
return nil
})
handler(ctx)
require.Equal(t, "test", rec.Body.String())
require.Equal(t, 200, rec.Result().StatusCode)
require.Equal(t, "MISS", rec.Result().Header.Get("x-cache"))
rec = httptest.NewRecorder()
ctx = e.NewContext(req, rec)
handler(ctx)
require.Equal(t, "test", rec.Body.String())
require.Equal(t, 200, rec.Result().StatusCode)
require.Equal(t, "HIT", rec.Result().Header.Get("x-cache")[:3])
req = httptest.NewRequest(http.MethodGet, "/notfound.js", nil)
rec = httptest.NewRecorder()
ctx = e.NewContext(req, rec)
handler(ctx)
require.Equal(t, 404, rec.Result().StatusCode)
require.Equal(t, "SKIP NOTOK", rec.Result().Header.Get("x-cache"))
req = httptest.NewRequest(http.MethodGet, "/toobig.js", nil)
rec = httptest.NewRecorder()
ctx = e.NewContext(req, rec)
handler(ctx)
require.Equal(t, "testtesttesttesttest", rec.Body.String())
require.Equal(t, 200, rec.Result().StatusCode)
require.Equal(t, "SKIP TOOBIG", rec.Result().Header.Get("x-cache"))
req = httptest.NewRequest(http.MethodGet, "/blocked.ts", nil)
rec = httptest.NewRecorder()
ctx = e.NewContext(req, rec)
handler(ctx)
require.Equal(t, "blocked", rec.Body.String())
require.Equal(t, 200, rec.Result().StatusCode)
require.Equal(t, "SKIP EXT", rec.Result().Header.Get("x-cache"))
req = httptest.NewRequest(http.MethodPost, "/found.js", nil)
rec = httptest.NewRecorder()
ctx = e.NewContext(req, rec)
handler(ctx)
require.Equal(t, "test", rec.Body.String())
require.Equal(t, 200, rec.Result().StatusCode)
require.Equal(t, "SKIP ONLYGET", rec.Result().Header.Get("x-cache"))
}

View File

@@ -2,9 +2,9 @@ package gzip
import (
"bufio"
"bytes"
"compress/gzip"
"io"
"io/ioutil"
"net"
"net/http"
"strings"
@@ -26,15 +26,17 @@ type Config struct {
// Length threshold before gzip compression
// is used. Optional. Default value 0
MinLength int
// Content-Types to compress. Empty for all
// files. Optional. Default value "text/plain" and "text/html"
ContentTypes []string
}
type gzipResponseWriter struct {
io.Writer
http.ResponseWriter
wroteHeader bool
wroteBody bool
minLength int
minLengthExceeded bool
buffer *bytes.Buffer
code int
}
const gzipScheme = "gzip"
@@ -48,10 +50,32 @@ const (
// DefaultConfig is the default Gzip middleware config.
var DefaultConfig = Config{
Skipper: middleware.DefaultSkipper,
Level: -1,
MinLength: 0,
ContentTypes: []string{"text/plain", "text/html"},
Skipper: middleware.DefaultSkipper,
Level: DefaultCompression,
MinLength: 0,
}
// ContentTypeSkipper returns a Skipper based on the list of content types
// that should be compressed. If the list is empty, all responses will be
// compressed.
func ContentTypeSkipper(contentTypes []string) middleware.Skipper {
return func(c echo.Context) bool {
// If no allowed content types are given, compress all
if len(contentTypes) == 0 {
return false
}
// Iterate through the allowed content types and don't skip if the content type matches
responseContentType := c.Response().Header().Get(echo.HeaderContentType)
for _, contentType := range contentTypes {
if strings.Contains(responseContentType, contentType) {
return false
}
}
return true
}
}
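Moving the content-type check into a Skipper keeps the middleware core generic and lets callers compose the filter. A minimal usage sketch, assuming standard echo wiring: ContentTypeSkipper(nil) compresses every content type, while a non-empty list only takes effect if the response Content-Type is already set when the middleware consults it (server.go arranges that via the mime middleware):

package main

import (
	"net/http"

	"github.com/labstack/echo/v4"

	mwgzip "github.com/datarhei/core/v16/http/middleware/gzip"
)

func main() {
	e := echo.New()

	// Compress all responses of at least 1000 bytes at the fastest level,
	// mirroring the configuration used in server.go.
	e.Use(mwgzip.NewWithConfig(mwgzip.Config{
		Level:     mwgzip.BestSpeed,
		MinLength: 1000,
		Skipper:   mwgzip.ContentTypeSkipper(nil),
	}))

	e.GET("/", func(c echo.Context) error {
		return c.String(http.StatusOK, "hello")
	})

	e.Logger.Fatal(e.Start(":8080"))
}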
// New returns a middleware which compresses HTTP response using gzip compression
@@ -76,11 +100,8 @@ func NewWithConfig(config Config) echo.MiddlewareFunc {
config.MinLength = DefaultConfig.MinLength
}
if config.ContentTypes == nil {
config.ContentTypes = DefaultConfig.ContentTypes
}
pool := gzipPool(config)
bpool := bufferPool()
return func(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
@@ -90,8 +111,8 @@ func NewWithConfig(config Config) echo.MiddlewareFunc {
res := c.Response()
res.Header().Add(echo.HeaderVary, echo.HeaderAcceptEncoding)
if shouldCompress(c, config.ContentTypes) {
res.Header().Set(echo.HeaderContentEncoding, gzipScheme) // Issue #806
if strings.Contains(c.Request().Header.Get(echo.HeaderAcceptEncoding), gzipScheme) {
i := pool.Get()
w, ok := i.(*gzip.Writer)
if !ok {
@@ -99,8 +120,14 @@ func NewWithConfig(config Config) echo.MiddlewareFunc {
}
rw := res.Writer
w.Reset(rw)
buf := bpool.Get().(*bytes.Buffer)
buf.Reset()
grw := &gzipResponseWriter{Writer: w, ResponseWriter: rw, minLength: config.MinLength, buffer: buf}
defer func() {
if res.Size == 0 {
if !grw.wroteBody {
if res.Header().Get(echo.HeaderContentEncoding) == gzipScheme {
res.Header().Del(echo.HeaderContentEncoding)
}
@@ -108,50 +135,39 @@ func NewWithConfig(config Config) echo.MiddlewareFunc {
// nothing is written to body or error is returned.
// See issue #424, #407.
res.Writer = rw
w.Reset(ioutil.Discard)
w.Reset(io.Discard)
} else if !grw.minLengthExceeded {
// If the minimum content length hasn't exceeded, write the uncompressed response
res.Writer = rw
if grw.wroteHeader {
grw.ResponseWriter.WriteHeader(grw.code)
}
grw.buffer.WriteTo(rw)
w.Reset(io.Discard)
}
w.Close()
bpool.Put(buf)
pool.Put(w)
}()
grw := &gzipResponseWriter{Writer: w, ResponseWriter: rw}
res.Writer = grw
}
return next(c)
}
}
}
func shouldCompress(c echo.Context, contentTypes []string) bool {
if !strings.Contains(c.Request().Header.Get(echo.HeaderAcceptEncoding), gzipScheme) ||
strings.Contains(c.Request().Header.Get("Connection"), "Upgrade") ||
strings.Contains(c.Request().Header.Get(echo.HeaderContentType), "text/event-stream") {
return false
}
// If no allowed content types are given, compress all
if len(contentTypes) == 0 {
return true
}
// Iterate through the allowed content types and return true if the content type matches
responseContentType := c.Response().Header().Get(echo.HeaderContentType)
for _, contentType := range contentTypes {
if strings.Contains(responseContentType, contentType) {
return true
}
}
return false
}
func (w *gzipResponseWriter) WriteHeader(code int) {
if code == http.StatusNoContent { // Issue #489
w.ResponseWriter.Header().Del(echo.HeaderContentEncoding)
}
w.Header().Del(echo.HeaderContentLength) // Issue #444
w.ResponseWriter.WriteHeader(code)
w.wroteHeader = true
// Delay writing of the header until we know if we'll actually compress the response
w.code = code
}
func (w *gzipResponseWriter) Write(b []byte) (int, error) {
@@ -159,10 +175,41 @@ func (w *gzipResponseWriter) Write(b []byte) (int, error) {
w.Header().Set(echo.HeaderContentType, http.DetectContentType(b))
}
w.wroteBody = true
if !w.minLengthExceeded {
n, err := w.buffer.Write(b)
if w.buffer.Len() >= w.minLength {
w.minLengthExceeded = true
// The minimum length is exceeded, add Content-Encoding header and write the header
w.Header().Set(echo.HeaderContentEncoding, gzipScheme) // Issue #806
if w.wroteHeader {
w.ResponseWriter.WriteHeader(w.code)
}
return w.Writer.Write(w.buffer.Bytes())
} else {
return n, err
}
}
return w.Writer.Write(b)
}
func (w *gzipResponseWriter) Flush() {
if !w.minLengthExceeded {
// Enforce compression
w.minLengthExceeded = true
w.Header().Set(echo.HeaderContentEncoding, gzipScheme) // Issue #806
if w.wroteHeader {
w.ResponseWriter.WriteHeader(w.code)
}
w.Writer.Write(w.buffer.Bytes())
}
w.Writer.(*gzip.Writer).Flush()
if flusher, ok := w.ResponseWriter.(http.Flusher); ok {
flusher.Flush()
@@ -183,7 +230,7 @@ func (w *gzipResponseWriter) Push(target string, opts *http.PushOptions) error {
func gzipPool(config Config) sync.Pool {
return sync.Pool{
New: func() interface{} {
w, err := gzip.NewWriterLevel(ioutil.Discard, config.Level)
w, err := gzip.NewWriterLevel(io.Discard, config.Level)
if err != nil {
return err
}
@@ -191,3 +238,12 @@ func gzipPool(config Config) sync.Pool {
},
}
}
func bufferPool() sync.Pool {
return sync.Pool{
New: func() interface{} {
b := &bytes.Buffer{}
return b
},
}
}

View File

@@ -0,0 +1,240 @@
package gzip
import (
"bytes"
"compress/gzip"
"io"
"net/http"
"net/http/httptest"
"os"
"testing"
"github.com/labstack/echo/v4"
"github.com/stretchr/testify/assert"
)
func TestGzip(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
// Skip if no Accept-Encoding header
h := New()(func(c echo.Context) error {
c.Response().Write([]byte("test")) // For Content-Type sniffing
return nil
})
h(c)
assert := assert.New(t)
assert.Equal("test", rec.Body.String())
// Gzip
req = httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
rec = httptest.NewRecorder()
c = e.NewContext(req, rec)
h(c)
assert.Equal(gzipScheme, rec.Header().Get(echo.HeaderContentEncoding))
assert.Contains(rec.Header().Get(echo.HeaderContentType), echo.MIMETextPlain)
r, err := gzip.NewReader(rec.Body)
if assert.NoError(err) {
buf := new(bytes.Buffer)
defer r.Close()
buf.ReadFrom(r)
assert.Equal("test", buf.String())
}
chunkBuf := make([]byte, 5)
// Gzip chunked
req = httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
rec = httptest.NewRecorder()
c = e.NewContext(req, rec)
New()(func(c echo.Context) error {
c.Response().Header().Set("Content-Type", "text/event-stream")
c.Response().Header().Set("Transfer-Encoding", "chunked")
// Write and flush the first part of the data
c.Response().Write([]byte("test\n"))
c.Response().Flush()
// Read the first part of the data
assert.True(rec.Flushed)
assert.Equal(gzipScheme, rec.Header().Get(echo.HeaderContentEncoding))
r.Reset(rec.Body)
_, err = io.ReadFull(r, chunkBuf)
assert.NoError(err)
assert.Equal("test\n", string(chunkBuf))
// Write and flush the second part of the data
c.Response().Write([]byte("test\n"))
c.Response().Flush()
_, err = io.ReadFull(r, chunkBuf)
assert.NoError(err)
assert.Equal("test\n", string(chunkBuf))
// Write the final part of the data and return
c.Response().Write([]byte("test"))
return nil
})(c)
buf := new(bytes.Buffer)
defer r.Close()
buf.ReadFrom(r)
assert.Equal("test", buf.String())
}
func TestGzipWithMinLength(t *testing.T) {
e := echo.New()
// Invalid level
e.Use(NewWithConfig(Config{MinLength: 5}))
e.GET("/", func(c echo.Context) error {
c.Response().Write([]byte("test"))
return nil
})
e.GET("/foobar", func(c echo.Context) error {
c.Response().Write([]byte("foobar"))
return nil
})
req := httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
rec := httptest.NewRecorder()
e.ServeHTTP(rec, req)
assert.Equal(t, "", rec.Header().Get(echo.HeaderContentEncoding))
assert.Contains(t, rec.Body.String(), "test")
req = httptest.NewRequest(http.MethodGet, "/foobar", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
rec = httptest.NewRecorder()
e.ServeHTTP(rec, req)
assert.Equal(t, gzipScheme, rec.Header().Get(echo.HeaderContentEncoding))
r, err := gzip.NewReader(rec.Body)
if assert.NoError(t, err) {
buf := new(bytes.Buffer)
defer r.Close()
buf.ReadFrom(r)
assert.Equal(t, "foobar", buf.String())
}
}
func TestGzipNoContent(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
h := New()(func(c echo.Context) error {
return c.NoContent(http.StatusNoContent)
})
if assert.NoError(t, h(c)) {
assert.Empty(t, rec.Header().Get(echo.HeaderContentEncoding))
assert.Empty(t, rec.Header().Get(echo.HeaderContentType))
assert.Equal(t, 0, len(rec.Body.Bytes()))
}
}
func TestGzipEmpty(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
h := New()(func(c echo.Context) error {
return c.String(http.StatusOK, "")
})
if assert.NoError(t, h(c)) {
assert.Equal(t, gzipScheme, rec.Header().Get(echo.HeaderContentEncoding))
assert.Equal(t, "text/plain; charset=UTF-8", rec.Header().Get(echo.HeaderContentType))
r, err := gzip.NewReader(rec.Body)
if assert.NoError(t, err) {
var buf bytes.Buffer
buf.ReadFrom(r)
assert.Equal(t, "", buf.String())
}
}
}
func TestGzipErrorReturned(t *testing.T) {
e := echo.New()
e.Use(New())
e.GET("/", func(c echo.Context) error {
return echo.ErrNotFound
})
req := httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
rec := httptest.NewRecorder()
e.ServeHTTP(rec, req)
assert.Equal(t, http.StatusNotFound, rec.Code)
assert.Empty(t, rec.Header().Get(echo.HeaderContentEncoding))
}
func TestGzipErrorReturnedInvalidConfig(t *testing.T) {
e := echo.New()
// Invalid level
e.Use(NewWithConfig(Config{Level: 12}))
e.GET("/", func(c echo.Context) error {
c.Response().Write([]byte("test"))
return nil
})
req := httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
rec := httptest.NewRecorder()
e.ServeHTTP(rec, req)
assert.Equal(t, http.StatusInternalServerError, rec.Code)
assert.Contains(t, rec.Body.String(), "gzip")
}
// Issue #806
func TestGzipWithStatic(t *testing.T) {
e := echo.New()
e.Use(New())
e.Static("/test", "./")
req := httptest.NewRequest(http.MethodGet, "/test/gzip.go", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
rec := httptest.NewRecorder()
e.ServeHTTP(rec, req)
assert.Equal(t, http.StatusOK, rec.Code)
// Data is written out in chunks when Content-Length == "", so only
// validate the content length if it's not set.
if cl := rec.Header().Get("Content-Length"); cl != "" {
assert.Equal(t, cl, rec.Body.Len())
}
r, err := gzip.NewReader(rec.Body)
if assert.NoError(t, err) {
defer r.Close()
want, err := os.ReadFile("./gzip.go")
if assert.NoError(t, err) {
buf := new(bytes.Buffer)
buf.ReadFrom(r)
assert.Equal(t, want, buf.Bytes())
}
}
}
func BenchmarkGzip(b *testing.B) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set(echo.HeaderAcceptEncoding, gzipScheme)
h := New()(func(c echo.Context) error {
c.Response().Write([]byte("test")) // For Content-Type sniffing
return nil
})
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Gzip
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
h(c)
}
}

View File

@@ -0,0 +1,164 @@
package hlsrewrite
import (
"bufio"
"bytes"
"net/http"
"strings"
"github.com/labstack/echo/v4"
"github.com/labstack/echo/v4/middleware"
)
type HLSRewriteConfig struct {
// Skipper defines a function to skip middleware.
Skipper middleware.Skipper
PathPrefix string
}
var DefaultHLSRewriteConfig = HLSRewriteConfig{
Skipper: func(c echo.Context) bool {
req := c.Request()
return !strings.HasSuffix(req.URL.Path, ".m3u8")
},
PathPrefix: "",
}
// NewHLSRewrite returns a new HLS rewrite middleware with default config
func NewHLSRewrite() echo.MiddlewareFunc {
return NewHLSRewriteWithConfig(DefaultHLSRewriteConfig)
}
type hlsrewrite struct {
pathPrefix string
}
func NewHLSRewriteWithConfig(config HLSRewriteConfig) echo.MiddlewareFunc {
if config.Skipper == nil {
config.Skipper = DefaultHLSRewriteConfig.Skipper
}
pathPrefix := config.PathPrefix
if len(pathPrefix) != 0 {
if !strings.HasSuffix(pathPrefix, "/") {
pathPrefix += "/"
}
}
hls := hlsrewrite{
pathPrefix: pathPrefix,
}
return func(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
if config.Skipper(c) {
return next(c)
}
req := c.Request()
if req.Method == "GET" || req.Method == "HEAD" {
return hls.rewrite(c, next)
}
return next(c)
}
}
}
func (h *hlsrewrite) rewrite(c echo.Context, next echo.HandlerFunc) error {
req := c.Request()
res := c.Response()
path := req.URL.Path
isM3U8 := strings.HasSuffix(path, ".m3u8")
rewrite := false
if isM3U8 {
rewrite = true
}
var rewriter *hlsRewriter
// Keep the current writer for later
writer := res.Writer
if rewrite {
// Put the session rewriter in the middle. This will collect
// the data that we need to rewrite.
rewriter = &hlsRewriter{
ResponseWriter: res.Writer,
}
res.Writer = rewriter
}
if err := next(c); err != nil {
c.Error(err)
}
// Restore the original writer
res.Writer = writer
if rewrite {
if res.Status != 200 {
res.Write(rewriter.buffer.Bytes())
return nil
}
// Rewrite the data before sending it to the client
rewriter.rewrite(h.pathPrefix)
res.Header().Set("Cache-Control", "private")
res.Write(rewriter.buffer.Bytes())
}
return nil
}
type hlsRewriter struct {
http.ResponseWriter
buffer bytes.Buffer
}
func (g *hlsRewriter) Write(data []byte) (int, error) {
// Write the data into internal buffer for later rewrite
w, err := g.buffer.Write(data)
return w, err
}
func (g *hlsRewriter) rewrite(pathPrefix string) {
var buffer bytes.Buffer
// Find all URLS in the .m3u8 and add the session ID to the query string
scanner := bufio.NewScanner(&g.buffer)
for scanner.Scan() {
line := scanner.Text()
// Write empty lines unmodified
if len(line) == 0 {
buffer.WriteString(line + "\n")
continue
}
// Write comments unmodified
if strings.HasPrefix(line, "#") {
buffer.WriteString(line + "\n")
continue
}
// Rewrite
line = strings.TrimPrefix(line, pathPrefix)
buffer.WriteString(line + "\n")
}
if err := scanner.Err(); err != nil {
return
}
g.buffer = buffer
}
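The rewriter buffers the whole playlist, passes comments and empty lines through untouched, and strips the configured path prefix from every URI line, so the on-disk base directory never leaks into the playlist. A self-contained sketch of the same transformation (the "/core/data/" prefix is made up; in server.go the prefix is the disk filesystem's base directory):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"strings"
)

// rewrite mirrors the hlsRewriter above: comments and empty lines pass
// through untouched, segment/playlist URIs lose the path prefix.
func rewrite(playlist, pathPrefix string) string {
	var out bytes.Buffer
	scanner := bufio.NewScanner(strings.NewReader(playlist))
	for scanner.Scan() {
		line := scanner.Text()
		if len(line) == 0 || strings.HasPrefix(line, "#") {
			out.WriteString(line + "\n")
			continue
		}
		out.WriteString(strings.TrimPrefix(line, pathPrefix) + "\n")
	}
	return out.String()
}

func main() {
	playlist := "#EXTM3U\n#EXTINF:2.000,\n/core/data/live/stream_0.ts\n"
	fmt.Print(rewrite(playlist, "/core/data/"))
	// #EXTM3U
	// #EXTINF:2.000,
	// live/stream_0.ts
}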

View File

@@ -2,6 +2,7 @@
package log
import (
"io"
"net/http"
"time"
@@ -45,40 +46,92 @@ func NewWithConfig(config Config) echo.MiddlewareFunc {
start := time.Now()
req := c.Request()
var reader io.ReadCloser
r := &sizeReadCloser{}
if req.Body != nil {
reader = req.Body
r.ReadCloser = req.Body
req.Body = r
}
res := c.Response()
writer := res.Writer
w := &sizeWriter{
ResponseWriter: res.Writer,
}
res.Writer = w
path := req.URL.Path
raw := req.URL.RawQuery
if err := next(c); err != nil {
c.Error(err)
}
defer func() {
res.Writer = writer
req.Body = reader
latency := time.Since(start)
latency := time.Since(start)
if raw != "" {
path = path + "?" + raw
}
if raw != "" {
path = path + "?" + raw
}
logger := config.Logger.WithFields(log.Fields{
"client": c.RealIP(),
"method": req.Method,
"path": path,
"proto": req.Proto,
"status": res.Status,
"status_text": http.StatusText(res.Status),
"size_bytes": res.Size,
"latency_ms": latency.Milliseconds(),
"user_agent": req.Header.Get("User-Agent"),
})
logger := config.Logger.WithFields(log.Fields{
"client": c.RealIP(),
"method": req.Method,
"path": path,
"proto": req.Proto,
"status": res.Status,
"status_text": http.StatusText(res.Status),
"tx_size_bytes": w.size,
"rx_size_bytes": r.size,
"latency_ms": latency.Milliseconds(),
"user_agent": req.Header.Get("User-Agent"),
})
if res.Status >= 400 {
logger.Warn().Log("")
}
if res.Status >= 400 {
logger.Warn().Log("")
}
logger.Debug().Log("")
logger.Debug().Log("")
}()
return nil
return next(c)
}
}
}
type sizeWriter struct {
http.ResponseWriter
size int64
}
func (w *sizeWriter) Write(body []byte) (int, error) {
n, err := w.ResponseWriter.Write(body)
w.size += int64(n)
return n, err
}
type sizeReadCloser struct {
io.ReadCloser
size int64
}
func (r *sizeReadCloser) Read(p []byte) (int, error) {
n, err := r.ReadCloser.Read(p)
r.size += int64(n)
return n, err
}
func (r *sizeReadCloser) Close() error {
err := r.ReadCloser.Close()
return err
}

View File

@@ -51,7 +51,7 @@ type hls struct {
// NewHLS returns a new HLS session middleware
func NewHLSWithConfig(config HLSConfig) echo.MiddlewareFunc {
if config.Skipper == nil {
config.Skipper = DefaultHTTPConfig.Skipper
config.Skipper = DefaultHLSConfig.Skipper
}
if config.EgressCollector == nil {

View File

@@ -5,9 +5,9 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strings"
"testing"
@@ -57,7 +57,7 @@ func DummyEcho() *echo.Echo {
router.HideBanner = true
router.HidePort = true
router.HTTPErrorHandler = errorhandler.HTTPErrorHandler
router.Logger.SetOutput(ioutil.Discard)
router.Logger.SetOutput(io.Discard)
router.Validator = validator.New()
return router
@@ -89,7 +89,7 @@ func CheckResponse(t *testing.T, res *http.Response) *Response {
Code: res.StatusCode,
}
body, err := ioutil.ReadAll(res.Body)
body, err := io.ReadAll(res.Body)
require.Equal(t, nil, err)
if strings.Contains(res.Header.Get("Content-Type"), "application/json") {
@@ -122,7 +122,7 @@ func Validate(t *testing.T, datatype, data interface{}) bool {
}
func Read(t *testing.T, path string) io.Reader {
data, err := ioutil.ReadFile(path)
data, err := os.ReadFile(path)
require.Equal(t, nil, err)
return bytes.NewReader(data)

View File

@@ -32,7 +32,7 @@ import (
"net/http"
"strings"
"github.com/datarhei/core/v16/config"
cfgstore "github.com/datarhei/core/v16/config/store"
"github.com/datarhei/core/v16/http/cache"
"github.com/datarhei/core/v16/http/errorhandler"
"github.com/datarhei/core/v16/http/graph/resolver"
@@ -51,10 +51,10 @@ import (
"github.com/datarhei/core/v16/session"
"github.com/datarhei/core/v16/srt"
mwbodysize "github.com/datarhei/core/v16/http/middleware/bodysize"
mwcache "github.com/datarhei/core/v16/http/middleware/cache"
mwcors "github.com/datarhei/core/v16/http/middleware/cors"
mwgzip "github.com/datarhei/core/v16/http/middleware/gzip"
mwhlsrewrite "github.com/datarhei/core/v16/http/middleware/hlsrewrite"
mwiplimit "github.com/datarhei/core/v16/http/middleware/iplimit"
mwlog "github.com/datarhei/core/v16/http/middleware/log"
mwmime "github.com/datarhei/core/v16/http/middleware/mime"
@@ -87,9 +87,9 @@ type Config struct {
RTMP rtmp.Server
SRT srt.Server
JWT jwt.JWT
Config config.Store
Config cfgstore.Store
Cache cache.Cacher
Sessions session.Registry
Sessions session.RegistryReader
Router router.Router
ReadOnly bool
}
@@ -145,6 +145,7 @@ type server struct {
cors echo.MiddlewareFunc
cache echo.MiddlewareFunc
session echo.MiddlewareFunc
hlsrewrite echo.MiddlewareFunc
}
memfs struct {
@@ -185,6 +186,10 @@ func NewServer(config Config) (Server, error) {
config.Cache,
)
s.middleware.hlsrewrite = mwhlsrewrite.NewHLSRewriteWithConfig(mwhlsrewrite.HLSRewriteConfig{
PathPrefix: config.DiskFS.Base(),
})
s.memfs.enableAuth = config.MemFS.EnableAuth
s.memfs.username = config.MemFS.Username
s.memfs.password = config.MemFS.Password
@@ -341,7 +346,6 @@ func NewServer(config Config) (Server, error) {
return nil
},
}))
s.router.Use(mwbodysize.New())
s.router.Use(mwsession.NewHTTPWithConfig(mwsession.HTTPConfig{
Collector: config.Sessions.Collector("http"),
}))
@@ -405,9 +409,9 @@ func (s *server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
func (s *server) setRoutes() {
gzipMiddleware := mwgzip.NewWithConfig(mwgzip.Config{
Level: mwgzip.BestSpeed,
MinLength: 1000,
ContentTypes: []string{""},
Level: mwgzip.BestSpeed,
MinLength: 1000,
Skipper: mwgzip.ContentTypeSkipper(nil),
})
// API router group
@@ -440,13 +444,17 @@ func (s *server) setRoutes() {
DefaultContentType: "text/html",
}))
fs.Use(mwgzip.NewWithConfig(mwgzip.Config{
Level: mwgzip.BestSpeed,
MinLength: 1000,
ContentTypes: s.gzip.mimetypes,
Level: mwgzip.BestSpeed,
MinLength: 1000,
Skipper: mwgzip.ContentTypeSkipper(s.gzip.mimetypes),
}))
if s.middleware.cache != nil {
fs.Use(s.middleware.cache)
}
fs.Use(s.middleware.hlsrewrite)
if s.middleware.session != nil {
fs.Use(s.middleware.session)
}
fs.GET("", s.handler.diskfs.GetFile)
fs.HEAD("", s.handler.diskfs.GetFile)
@@ -459,9 +467,9 @@ func (s *server) setRoutes() {
DefaultContentType: "application/data",
}))
memfs.Use(mwgzip.NewWithConfig(mwgzip.Config{
Level: mwgzip.BestSpeed,
MinLength: 1000,
ContentTypes: s.gzip.mimetypes,
Level: mwgzip.BestSpeed,
MinLength: 1000,
Skipper: mwgzip.ContentTypeSkipper(s.gzip.mimetypes),
}))
if s.middleware.session != nil {
memfs.Use(s.middleware.session)
@@ -642,6 +650,7 @@ func (s *server) setRoutesV3(v3 *echo.Group) {
// v3 Log
v3.GET("/log", s.v3handler.log.Log)
// v3 Resources
// v3 Metrics
v3.GET("/metrics", s.v3handler.resources.Describe)
v3.POST("/metrics", s.v3handler.resources.Metrics)
}

internal/.gitignore vendored
View File

@@ -1,3 +1,4 @@
testhelper/ignoresigint/ignoresigint
testhelper/sigint/sigint
testhelper/sigintwait/sigintwait
testhelper/ffmpeg/ffmpeg

View File

@@ -0,0 +1,18 @@
package main
import (
"os"
"os/signal"
"time"
)
func main() {
// Wait for an interrupt signal to gracefully shut down the app
quit := make(chan os.Signal, 1)
signal.Notify(quit, os.Interrupt)
<-quit
time.Sleep(3 * time.Second)
os.Exit(255)
}

View File

@@ -17,6 +17,18 @@ func Rename(src, dst string) error {
}
// If renaming the file fails, copy the data
Copy(src, dst)
if err := os.Remove(src); err != nil {
os.Remove(dst)
return fmt.Errorf("failed to remove source file: %w", err)
}
return nil
}
// Copy copies a file from src to dst.
func Copy(src, dst string) error {
source, err := os.Open(src)
if err != nil {
return fmt.Errorf("failed to open source file: %w", err)
@@ -37,10 +49,5 @@ func Rename(src, dst string) error {
source.Close()
if err := os.Remove(src); err != nil {
os.Remove(dst)
return fmt.Errorf("failed to remove source file: %w", err)
}
return nil
}
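After this split, Rename first tries os.Rename and only falls back to Copy plus removing the source, which covers renames across filesystem boundaries. A compact sketch of the combined fallback; note that the hunk above discards Copy's return value, while the sketch checks it:

package fs

import (
	"fmt"
	"io"
	"os"
)

// rename sketches the fallback: try os.Rename first; if that fails
// (typically because src and dst are on different filesystems),
// copy the data and remove the source.
func rename(src, dst string) error {
	if err := os.Rename(src, dst); err == nil {
		return nil
	}

	if err := copyFile(src, dst); err != nil {
		return err
	}

	if err := os.Remove(src); err != nil {
		os.Remove(dst)
		return fmt.Errorf("failed to remove source file: %w", err)
	}
	return nil
}

// copyFile copies the contents of src into a newly created dst.
func copyFile(src, dst string) error {
	source, err := os.Open(src)
	if err != nil {
		return fmt.Errorf("failed to open source file: %w", err)
	}
	defer source.Close()

	destination, err := os.Create(dst)
	if err != nil {
		return fmt.Errorf("failed to create destination file: %w", err)
	}
	defer destination.Close()

	if _, err := io.Copy(destination, source); err != nil {
		return fmt.Errorf("failed to copy data: %w", err)
	}
	return nil
}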

View File

@@ -8,6 +8,7 @@ import (
"strings"
"time"
"github.com/datarhei/core/v16/glob"
"github.com/datarhei/core/v16/log"
)
@@ -290,13 +291,21 @@ func (fs *diskFilesystem) List(pattern string) []FileInfo {
files := []FileInfo{}
fs.walk(func(path string, info os.FileInfo) {
if path == fs.dir {
return
}
name := strings.TrimPrefix(path, fs.dir)
if name[0] != os.PathSeparator {
name = string(os.PathSeparator) + name
}
if info.IsDir() {
name += "/"
}
if len(pattern) != 0 {
if ok, _ := filepath.Match(pattern, name); !ok {
if ok, _ := glob.Match(pattern, name, '/'); !ok {
return
}
}
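Switching List from filepath.Match to glob.Match with '/' as a separator (and giving directories a trailing slash) makes the patterns separator-aware. The standard library's path.Match shows the same separator behavior and serves as a stand-in here:

package main

import (
	"fmt"
	"path"
)

func main() {
	// Stand-in for glob.Match(pattern, name, '/'): with a separator-aware
	// matcher, "*" does not cross "/" boundaries, so "/memfs/*.ts" matches
	// files directly below /memfs but not in subdirectories.
	fmt.Println(path.Match("/memfs/*.ts", "/memfs/chunk_0.ts"))      // true <nil>
	fmt.Println(path.Match("/memfs/*.ts", "/memfs/live/chunk_0.ts")) // false <nil>
}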
@@ -318,6 +327,7 @@ func (fs *diskFilesystem) walk(walkfn func(path string, info os.FileInfo)) {
}
if info.IsDir() {
walkfn(path, info)
return nil
}

View File

@@ -4,11 +4,11 @@ import (
"bytes"
"fmt"
"io"
"path/filepath"
"sort"
"sync"
"time"
"github.com/datarhei/core/v16/glob"
"github.com/datarhei/core/v16/log"
)
@@ -427,7 +427,7 @@ func (fs *memFilesystem) List(pattern string) []FileInfo {
for _, file := range fs.files {
if len(pattern) != 0 {
if ok, _ := filepath.Match(pattern, file.name); !ok {
if ok, _ := glob.Match(pattern, file.name, '/'); !ok {
continue
}
}

View File

@@ -14,28 +14,29 @@ import (
type Level uint
const (
Lsilent Level = 0
Lerror Level = 1
Lwarn Level = 2
Linfo Level = 3
Ldebug Level = 4
Lsilent Level = 0b0000
Lerror Level = 0b0001
Lwarn Level = 0b0010
Linfo Level = 0b0100
Ldebug Level = 0b1000
)
// String returns a string representing the log level.
func (level Level) String() string {
names := []string{
"SILENT",
"ERROR",
"WARN",
"INFO",
"DEBUG",
}
if level > Ldebug {
switch level {
case Lsilent:
return "SILENT"
case Lerror:
return "ERROR"
case Lwarn:
return "WARN"
case Linfo:
return "INFO"
case Ldebug:
return "DEBUG"
default:
return `¯\_(ツ)_/¯`
}
return names[level]
}
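The levels are now distinct bits rather than positions on a scale, which is what lets a writer be configured with an arbitrary combination of levels. A sketch of the kind of mask test this enables (whether the new LevelWriter filters by mask or by threshold is not visible in this diff):

package main

import "fmt"

type Level uint

const (
	Lsilent Level = 0b0000
	Lerror  Level = 0b0001
	Lwarn   Level = 0b0010
	Linfo   Level = 0b0100
	Ldebug  Level = 0b1000
)

// accepts is a sketch of what bit-valued levels make possible: a writer
// configured with an arbitrary combination of levels, e.g. only errors
// and debug lines.
func accepts(mask, level Level) bool {
	return mask&level != 0
}

func main() {
	mask := Lerror | Ldebug
	fmt.Println(accepts(mask, Lerror)) // true
	fmt.Println(accepts(mask, Linfo))  // false
}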
func (level *Level) MarshalJSON() ([]byte, error) {
@@ -97,6 +98,9 @@ type Logger interface {
// Write implements the io.Writer interface such that it can be used in e.g. the
// the log/Logger facility. Messages will be printed with debug level.
Write(p []byte) (int, error)
// Close closes the underlying writer.
Close()
}
// logger is an implementation of the Logger interface.
@@ -184,6 +188,10 @@ func (l *logger) Write(p []byte) (int, error) {
return newEvent(l).Write(p)
}
func (l *logger) Close() {
l.output.Close()
}
type Event struct {
logger *logger
@@ -352,12 +360,6 @@ func (l *Event) Write(p []byte) (int, error) {
return len(p), nil
}
type Eventx struct {
Time time.Time `json:"ts"`
Level Level `json:"level"`
Component string `json:"component"`
Reference string `json:"ref"`
Message string `json:"message"`
Caller string `json:"caller"`
Detail interface{} `json:"detail"`
func (l *Event) Close() {
l.logger.Close()
}

View File

@@ -5,25 +5,25 @@ import (
"bytes"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestLoglevelNames(t *testing.T) {
assert.Equal(t, "DEBUG", Ldebug.String())
assert.Equal(t, "ERROR", Lerror.String())
assert.Equal(t, "WARN", Lwarn.String())
assert.Equal(t, "INFO", Linfo.String())
assert.Equal(t, `SILENT`, Lsilent.String())
require.Equal(t, "DEBUG", Ldebug.String())
require.Equal(t, "ERROR", Lerror.String())
require.Equal(t, "WARN", Lwarn.String())
require.Equal(t, "INFO", Linfo.String())
require.Equal(t, `SILENT`, Lsilent.String())
}
func TestLogColorToNotTTY(t *testing.T) {
var buffer bytes.Buffer
writer := bufio.NewWriter(&buffer)
w := NewConsoleWriter(writer, Linfo, true).(*syncWriter)
w := NewLevelWriter(NewConsoleWriter(writer, true), Linfo).(*levelWriter).writer.(*syncWriter)
formatter := w.writer.(*consoleWriter).formatter.(*consoleFormatter)
assert.NotEqual(t, true, formatter.color, "Color should not be used on a buffer logger")
require.NotEqual(t, true, formatter.color, "Color should not be used on a buffer logger")
}
func TestLogContext(t *testing.T) {
@@ -31,7 +31,7 @@ func TestLogContext(t *testing.T) {
var buffer bytes.Buffer
writer := bufio.NewWriter(&buffer)
logger := New("component").WithOutput(NewConsoleWriter(writer, Ldebug, false))
logger := New("component").WithOutput(NewLevelWriter(NewConsoleWriter(writer, false), Ldebug))
logger.Debug().Log("debug")
logger.Info().Log("info")
@@ -53,19 +53,19 @@ func TestLogContext(t *testing.T) {
lenWithoutCtx := buffer.Len()
buffer.Reset()
assert.Greater(t, lenWithCtx, lenWithoutCtx, "Log line length without context is not shorter than with context")
require.Greater(t, lenWithCtx, lenWithoutCtx, "Log line length without context is not shorter than with context")
}
func TestLogClone(t *testing.T) {
var buffer bytes.Buffer
writer := bufio.NewWriter(&buffer)
logger := New("test").WithOutput(NewConsoleWriter(writer, Linfo, false))
logger := New("test").WithOutput(NewLevelWriter(NewConsoleWriter(writer, false), Linfo))
logger.Info().Log("info")
writer.Flush()
assert.Contains(t, buffer.String(), `component="test"`)
require.Contains(t, buffer.String(), `component="test"`)
buffer.Reset()
@@ -74,33 +74,33 @@ func TestLogClone(t *testing.T) {
logger2.Info().Log("info")
writer.Flush()
assert.Contains(t, buffer.String(), `component="tset"`)
require.Contains(t, buffer.String(), `component="tset"`)
}
func TestLogSilent(t *testing.T) {
var buffer bytes.Buffer
writer := bufio.NewWriter(&buffer)
logger := New("test").WithOutput(NewConsoleWriter(writer, Lsilent, false))
logger := New("test").WithOutput(NewLevelWriter(NewConsoleWriter(writer, false), Lsilent))
logger.Debug().Log("debug")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
logger.Info().Log("info")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
logger.Warn().Log("warn")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
logger.Error().Log("error")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
}
@@ -108,26 +108,26 @@ func TestLogDebug(t *testing.T) {
var buffer bytes.Buffer
writer := bufio.NewWriter(&buffer)
logger := New("test").WithOutput(NewConsoleWriter(writer, Ldebug, false))
logger := New("test").WithOutput(NewLevelWriter(NewConsoleWriter(writer, false), Ldebug))
logger.Debug().Log("debug")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
logger.Info().Log("info")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
logger.Warn().Log("warn")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
logger.Error().Log("error")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
}
@@ -135,26 +135,26 @@ func TestLogInfo(t *testing.T) {
var buffer bytes.Buffer
writer := bufio.NewWriter(&buffer)
logger := New("test").WithOutput(NewConsoleWriter(writer, Linfo, false))
logger := New("test").WithOutput(NewLevelWriter(NewConsoleWriter(writer, false), Linfo))
logger.Debug().Log("debug")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
logger.Info().Log("info")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
logger.Warn().Log("warn")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
logger.Error().Log("error")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
}
@@ -162,26 +162,26 @@ func TestLogWarn(t *testing.T) {
var buffer bytes.Buffer
writer := bufio.NewWriter(&buffer)
logger := New("test").WithOutput(NewConsoleWriter(writer, Lwarn, false))
logger := New("test").WithOutput(NewLevelWriter(NewConsoleWriter(writer, false), Lwarn))
logger.Debug().Log("debug")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
logger.Info().Log("info")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
logger.Warn().Log("warn")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
logger.Error().Log("error")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
}
@@ -189,25 +189,25 @@ func TestLogError(t *testing.T) {
var buffer bytes.Buffer
writer := bufio.NewWriter(&buffer)
logger := New("test").WithOutput(NewConsoleWriter(writer, Lerror, false))
logger := New("test").WithOutput(NewLevelWriter(NewConsoleWriter(writer, false), Lerror))
logger.Debug().Log("debug")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
logger.Info().Log("info")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
logger.Warn().Log("warn")
writer.Flush()
assert.Equal(t, 0, buffer.Len(), "Buffer should be empty")
require.Equal(t, 0, buffer.Len(), "Buffer should be empty")
buffer.Reset()
logger.Error().Log("error")
writer.Flush()
assert.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
require.NotEqual(t, 0, buffer.Len(), "Buffer should not be empty")
buffer.Reset()
}

Some files were not shown because too many files have changed in this diff.