mirror of https://github.com/asticode/go-astiencoder.git
synced 2025-09-27 03:28:10 +08:00

Cleanup

.gitignore (vendored) | 2
@@ -1,4 +1,2 @@
.DS_Store
Thumbs.db
.idea/
tmp
Makefile | 22
@@ -1,22 +0,0 @@
libs = $(addprefix -l,$(subst .a,,$(subst tmp/lib/lib,,$(wildcard tmp/lib/*.a))))
env = CGO_CFLAGS="-I$(CURDIR)/tmp/include" CGO_LDFLAGS="-L$(CURDIR)/tmp/lib $(libs)" PKG_CONFIG_PATH="$(CURDIR)/tmp/lib/pkgconfig"

example:
	$(env) go run ./astiencoder -j examples/$(example).json

build:
	$(env) go build -o $(GOPATH)/bin/astiencoder ./astiencoder

server:
	$(env) go run ./astiencoder

test:
	$(env) go test -cover -v ./...

install-ffmpeg:
	mkdir -p tmp/src
	git clone https://github.com/FFmpeg/FFmpeg tmp/src/ffmpeg
	cd tmp/src/ffmpeg && git checkout n4.4.1
	cd tmp/src/ffmpeg && ./configure --prefix=../.. $(configure)
	cd tmp/src/ffmpeg && make
	cd tmp/src/ffmpeg && make install
README.md | 177
@@ -1,177 +0,0 @@
`astiencoder` is an open source video encoder written in Go and based on [ffmpeg](https://github.com/FFmpeg/FFmpeg) C bindings.

Right now this project has only been tested with FFmpeg n4.4.1.

You need Go >= 1.13.

# Why use this project when I can use the `ffmpeg` binary?

In most cases you won't need this project as the `ffmpeg` binary is pretty awesome.

However, this project could be useful to you if you're looking to:

1. understand how the video encoding process works
1. integrate your video encoder in a Go ecosystem
1. visualize in real time, record or replay your encoding workflow and node statuses and stats
1. use native Go subtitle libraries like [astisub](https://github.com/asticode/go-astisub)
1. build your own video encoder and take control of its workflow

# How is this project structured?

## The encoder framework

At the root of the project, package `astiencoder` provides the framework to build [Workflows](workflow.go). Workflows are made of [Nodes](node.go). Nodes can start/pause/continue/stop any kind of work.

It also provides a [Server](server.go) that exposes the UI.

All internal [Events](event.go) can be handled with the proper `EventHandler`.

## The libav wrapper

In folder `libav`, package `astilibav` provides the proper nodes to use the `ffmpeg` C bindings with the encoder:

- [Opener](libav/opener.go)
- [Demuxer](libav/demuxer.go)
- [Decoder](libav/decoder.go)
- [Filterer](libav/filterer.go)
- [Encoder](libav/encoder.go)
- [Muxer](libav/muxer.go)
- [PktDumper](libav/pkt_dumper.go)

At this point, the way you connect those nodes is up to you, since they implement 2 main interfaces:

```go
type PktHandler interface {
	HandlePkt(p *PktHandlerPayload)
}

type FrameHandler interface {
	HandleFrame(p *FrameHandlerPayload)
}
```
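
As an illustration, a custom node only needs to satisfy one of these interfaces. The sketch below is hypothetical: `PktHandlerPayload` is simplified to a bare struct, whereas the real one lives in `astilibav`. It counts packets and forwards them to an optional downstream node:

```go
package main

import "fmt"

// PktHandlerPayload is a simplified stand-in for astilibav's payload type.
type PktHandlerPayload struct{ Data []byte }

// PktHandler mirrors the interface shown above.
type PktHandler interface {
	HandlePkt(p *PktHandlerPayload)
}

// pktCounter is a toy node: it counts packets and forwards them downstream.
type pktCounter struct {
	count int
	next  PktHandler // optional downstream node
}

func (c *pktCounter) HandlePkt(p *PktHandlerPayload) {
	c.count++
	if c.next != nil {
		c.next.HandlePkt(p)
	}
}

func main() {
	c := &pktCounter{}
	for i := 0; i < 3; i++ {
		c.HandlePkt(&PktHandlerPayload{Data: []byte{byte(i)}})
	}
	fmt.Println(c.count) // prints 3
}
```

In a real workflow the node would receive `*astilibav.PktHandlerPayload` and would be wired up with the `Connect`/`ConnectForStream` methods of the upstream node.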

## The out-of-the-box encoder

In folder `astiencoder`, package `main` provides an out-of-the-box encoder using both packages `astiencoder` and `astilibav`.

It creates workflows based on [Jobs](astiencoder/job.go).

It's a good place to start digging if you're looking to implement your own workflow builder.

# How do I install this project?

## FFmpeg

In order to use the `ffmpeg` C bindings, you need to install ffmpeg. To do so, run the following command:

```
$ make install-ffmpeg
```

In some cases, you'll need to enable/disable libs explicitly. To do so, use the `configure` placeholder. For instance, this will install `libx264` as well:

```
$ make install-ffmpeg configure="--enable-libx264 --enable-gpl"
```

## Astiencoder

Simply run the following command:

```
$ go get github.com/asticode/go-astiencoder/...
```
# How can I run the out-of-the-box encoder?

## Modes

The out-of-the-box encoder has 2 modes:

- by default, it will spawn the server and wait for a new workflow to be added manually
- when provided with the `-j` flag, it will open the provided JSON-formatted job, transform it into a workflow, execute it and exit once everything is done

To run the default mode, simply run the following command:

```
$ make server
```

## Web UI

If you set up the server correctly, you can open the Web UI in order to see your nodes' stats.

### What do those stats mean?

All nodes use the same stats:

- Incoming rate: the number of incoming objects received per second. This is either packets per second (`pps`) or frames per second (`fps`).
- Listen ratio: the percentage of time spent waiting for a new incoming object
- Dispatch ratio: the percentage of time spent waiting for all children to be available to process the output object
- Work ratio: the percentage of time spent doing some actual work

That way you can monitor the efficiency of your workflow and see which nodes need work.

# How can I run examples?

Examples are located in the `examples` folder and consist of JSON-formatted jobs.

If you want to run a specific example, run the following command:

```
$ make example=<name of the example>
```

File outputs will be written to the `examples/tmp` folder.

**WARNING**: for the following examples you will need specific ffmpeg libs enabled. Again, in order to do so, use the `configure` placeholder as mentioned [here](#ffmpeg):

- encode: `--enable-libx264 --enable-gpl`

# How can I build my own workflow?

I'd recommend getting inspiration from the out-of-the-box encoder's [workflow builder](astiencoder/workflow.go).

# Which ffmpeg C bindings is this project using and why?

Right now this project is using [these bindings](https://github.com/asticode/goav).

Here's why:

1) [the base project](https://github.com/giorgisio/goav) is not maintained anymore
2) [this project](https://github.com/targodan/ffgopeg) is a hard fork of #1 but has gone a very different route
3) [this project](https://github.com/selfmodify/goav) is a fork of #1 but I'm experiencing lots of panics with it
4) [this project](https://github.com/amarburg/goav) is the best fork of #1 even though the last commit is not recent
5) [this project](https://github.com/ioblank/goav) is a fork of #4 with interesting additional commits
6) [this project](https://github.com/koropets/goav) is a fork of #4 with interesting additional commits
7) [this project](https://github.com/alon-ne/goav) has a very nice set of examples

Therefore I've forked #4, added useful commits from other forks and removed deprecated functions so that it works properly with FFmpeg n4.4.1.

# I'm not familiar with CGO, which flags am I supposed to provide?

If you're using `make`, you don't need to add CGO flags manually: it does it for you.

Otherwise you need to provide the proper `CGO_CFLAGS`, `CGO_LDFLAGS` and `PKG_CONFIG_PATH` environment variables when running your Go code.

Let's say the absolute path to your current dir is `/path/to`; here are their respective values:

- `CGO_CFLAGS`: -I/path/to/vendor_c/include
- `CGO_LDFLAGS`: -L/path/to/vendor_c/lib
- `PKG_CONFIG_PATH`: /path/to/vendor_c/lib/pkgconfig
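
For example, assuming your project lives in `/path/to` (an illustrative path), the variables can be exported before building. This mirrors the Makefile's `env` variable, minus the computed `-l` lib flags:

```shell
# Illustrative paths; adjust /path/to to your actual project directory.
export CGO_CFLAGS="-I/path/to/vendor_c/include"
export CGO_LDFLAGS="-L/path/to/vendor_c/lib"
export PKG_CONFIG_PATH="/path/to/vendor_c/lib/pkgconfig"
env | grep -E 'CGO_|PKG_CONFIG'
```

With these exported, `go build ./astiencoder` or `go run ./astiencoder` will pick up the flags.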

# Features and roadmap

- [x] copy (remux)
- [x] mjpeg (thumbnails)
- [x] basic encode (h264 + aac)
- [x] stats
- [x] web ui
- [ ] proper tests
- [ ] packaging (dash + hls + smooth)
- [ ] add plugin in [snickers](https://github.com/snickers/snickers/tree/master/encoders)
- [ ] many others :D

# Contribute

Contributions are more than welcome! Simply fork this project, make changes in a specific branch such as `patch-1`, and submit a PR.
@@ -1,49 +0,0 @@
package main

import (
	"flag"
	"fmt"

	"github.com/BurntSushi/toml"
)

var (
	configPath = flag.String("c", "", "the config path")
)

type Configuration struct {
	Encoder *ConfigurationEncoder `toml:"encoder"`
}

type ConfigurationEncoder struct {
	Exec   ConfigurationExec   `toml:"exec"`
	Server ConfigurationServer `toml:"server"`
}

type ConfigurationExec struct {
	StopWhenWorkflowsAreStopped bool `toml:"stop_when_workflows_are_stopped"`
}

type ConfigurationServer struct {
	Addr string `toml:"addr"`
}

func newConfiguration() (c Configuration, err error) {
	// Global
	c = Configuration{
		Encoder: &ConfigurationEncoder{
			Server: ConfigurationServer{
				Addr: "127.0.0.1:4000",
			},
		},
	}

	// Local
	if *configPath != "" {
		if _, err = toml.DecodeFile(*configPath, &c); err != nil {
			err = fmt.Errorf("main: toml decoding %s failed: %w", *configPath, err)
			return
		}
	}
	return
}
@@ -1,50 +0,0 @@
package main

import (
	"sync"

	"github.com/asticode/go-astiencoder"
	"github.com/asticode/go-astikit"
)

type encoder struct {
	c         *ConfigurationEncoder
	eh        *astiencoder.EventHandler
	m         *sync.Mutex
	s         *astiencoder.Stater
	w         *astikit.Worker
	ws        *astiencoder.Server
	wsStarted map[string]bool
}

func newEncoder(c *ConfigurationEncoder, eh *astiencoder.EventHandler, ws *astiencoder.Server, l astikit.StdLogger, s *astiencoder.Stater) (e *encoder) {
	e = &encoder{
		c:         c,
		eh:        eh,
		m:         &sync.Mutex{},
		s:         s,
		w:         astikit.NewWorker(astikit.WorkerOptions{Logger: l}),
		ws:        ws,
		wsStarted: make(map[string]bool),
	}
	e.adaptEventHandler(e.eh)
	return
}

func (e *encoder) adaptEventHandler(h *astiencoder.EventHandler) {
	h.AddForEventName(astiencoder.EventNameWorkflowStarted, func(evt astiencoder.Event) bool {
		e.m.Lock()
		defer e.m.Unlock()
		e.wsStarted[evt.Target.(*astiencoder.Workflow).Name()] = true
		return false
	})
	h.AddForEventName(astiencoder.EventNameWorkflowStopped, func(evt astiencoder.Event) bool {
		e.m.Lock()
		defer e.m.Unlock()
		delete(e.wsStarted, evt.Target.(*astiencoder.Workflow).Name())
		if e.c.Exec.StopWhenWorkflowsAreStopped && len(e.wsStarted) == 0 {
			e.w.Stop()
		}
		return false
	})
}
@@ -1,94 +0,0 @@
package main

import (
	"github.com/asticode/go-astiav"
	"github.com/asticode/go-astikit"
)

// Job represents a job
type Job struct {
	Inputs     map[string]JobInput     `json:"inputs"`
	Operations map[string]JobOperation `json:"operations"`
	Outputs    map[string]JobOutput    `json:"outputs"`
}

// JobInput represents a job input
type JobInput struct {
	Dict        string `json:"dict"`
	EmulateRate bool   `json:"emulate_rate"`
	URL         string `json:"url"`
}

// Job output types
const (
	// The packet data is dumped directly to the url without any mux
	JobOutputTypePktDump = "pkt_dump"
)

// JobOutput represents a job output
type JobOutput struct {
	Format string `json:"format,omitempty"`
	// Possible values are "default" and "pkt_dump"
	Type string `json:"type,omitempty"`
	URL  string `json:"url"`
}

// Job operation codecs
const (
	JobOperationCodecCopy = "copy"
)

// JobOperation represents a job operation
// This can usually be compared to an encoding
// Refrain from indicating all options in the dict and use other attributes instead
type JobOperation struct {
	BitRate *int64 `json:"bit_rate,omitempty"`
	// Possible values are "copy" and all libav codec names.
	Codec string `json:"codec,omitempty"`
	Dict  string `json:"dict,omitempty"`
	// Frame rate is a per-operation value since we may have different frame rate operations for a similar output
	FrameRate *astikit.Rational    `json:"frame_rate,omitempty"`
	GopSize   *int                 `json:"gop_size,omitempty"`
	Height    *int                 `json:"height,omitempty"`
	Inputs    []JobOperationInput  `json:"inputs"`
	Outputs   []JobOperationOutput `json:"outputs"`
	ThreadCount *int `json:"thread_count,omitempty"`
	// Since frame rate is a per-operation value, time base is as well
	TimeBase *astikit.Rational `json:"time_base,omitempty"`
	Width    *int              `json:"width,omitempty"`
}

type JobMediaType astiav.MediaType

func (t *JobMediaType) UnmarshalText(b []byte) error {
	switch string(b) {
	case "audio":
		*t = JobMediaType(astiav.MediaTypeAudio)
	case "subtitle":
		*t = JobMediaType(astiav.MediaTypeSubtitle)
	case "video":
		*t = JobMediaType(astiav.MediaTypeVideo)
	default:
		*t = JobMediaType(astiav.MediaTypeUnknown)
	}
	return nil
}

func (t JobMediaType) MediaType() astiav.MediaType {
	return astiav.MediaType(t)
}

// JobOperationInput represents a job operation input
// TODO Add start, end and duration (use seek?)
type JobOperationInput struct {
	// Possible values are "audio", "subtitle" and "video"
	MediaType *JobMediaType `json:"media_type,omitempty"`
	Name      string        `json:"name"`
	PID       *int          `json:"pid,omitempty"`
}

// JobOperationOutput represents a job operation output
type JobOperationOutput struct {
	Name string `json:"name"`
	PID  *int   `json:"pid,omitempty"`
}
@@ -1,94 +0,0 @@
package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/asticode/go-astiav"
	"github.com/asticode/go-astiencoder"
	astilibav "github.com/asticode/go-astiencoder/libav"
	"github.com/asticode/go-astikit"
)

// Flags
var (
	job = flag.String("j", "", "the path to the job in JSON format")
)

func main() {
	// Parse flags
	flag.Parse()

	// Create logger
	l := log.New(log.Writer(), log.Prefix(), log.Flags())

	// Create configuration
	c, err := newConfiguration()
	if err != nil {
		l.Fatal(fmt.Errorf("main: creating configuration failed: %w", err))
	}

	// Create event handler
	eh := astiencoder.NewEventHandler()

	// Create workflow server
	ws := astiencoder.NewServer(astiencoder.ServerOptions{Logger: l})
	ws.EventHandlerAdapter(eh)

	// Create stater
	s := astiencoder.NewStater(time.Second, eh)

	// Create encoder
	e := newEncoder(c.Encoder, eh, ws, l, s)

	// Log event handler
	defer eh.Log(astiencoder.EventHandlerLogOptions{
		Adapters: []astiencoder.EventHandlerLogAdapter{
			astilibav.EventHandlerLogAdapter(astilibav.EventHandlerLogAdapterOptions{LogLevel: astiav.LogLevelInfo}),
		},
		Logger: l,
	}).Start(e.w.Context()).Close()

	// Handle signals
	e.w.HandleSignals()

	// Job has been provided
	if len(*job) > 0 {
		// Open file
		var f *os.File
		if f, err = os.Open(*job); err != nil {
			l.Fatal(fmt.Errorf("main: opening %s failed: %w", *job, err))
		}

		// Unmarshal
		var j Job
		if err = json.NewDecoder(f).Decode(&j); err != nil {
			l.Fatal(fmt.Errorf("main: unmarshaling %s into %+v failed: %w", *job, j, err))
		}

		// Add workflow
		var w *astiencoder.Workflow
		var cl *astikit.Closer
		if w, cl, err = addWorkflow("default", j, e); err != nil {
			l.Fatal(fmt.Errorf("main: adding default workflow failed: %w", err))
		}
		defer cl.Close()

		// Make sure the worker stops when the workflow is stopped
		c.Encoder.Exec.StopWhenWorkflowsAreStopped = true

		// Start stater
		go s.Start(e.w.Context())
		defer s.Stop()

		// Start workflow
		w.Start()
	}

	// Wait
	e.w.Wait()
}
@@ -1,131 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/sha1"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/asticode/go-astiav"
|
||||
"github.com/asticode/go-astiencoder"
|
||||
)
|
||||
|
||||
func init() {
|
||||
astiav.SetLogLevel(astiav.LogLevelError)
|
||||
}
|
||||
|
||||
func testJob(t *testing.T, jobPath string, assertPaths func(j Job) map[string]string) {
|
||||
// Create logger
|
||||
l := log.New(log.Writer(), log.Prefix(), log.Flags())
|
||||
|
||||
// Create event handler
|
||||
eh := astiencoder.NewEventHandler()
|
||||
|
||||
// Create workflow server
|
||||
ws := astiencoder.NewServer(astiencoder.ServerOptions{Logger: l})
|
||||
|
||||
// Create stater
|
||||
s := astiencoder.NewStater(2*time.Second, eh)
|
||||
|
||||
// Create encoder
|
||||
cfg := &ConfigurationEncoder{}
|
||||
cfg.Exec.StopWhenWorkflowsAreStopped = true
|
||||
e := newEncoder(cfg, eh, ws, l, s)
|
||||
|
||||
// Open job
|
||||
j, err := openJob(jobPath)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
|
||||
// Add workflow
|
||||
w, _, err := addWorkflow("test", j, e)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
|
||||
// Start workflow
|
||||
w.Start()
|
||||
|
||||
// Wait
|
||||
e.w.Wait()
|
||||
|
||||
// Check expected paths
|
||||
for expectedPath, actualPath := range assertPaths(j) {
|
||||
assertFilesEqual(expectedPath, actualPath, t)
|
||||
}
|
||||
}
|
||||
|
||||
func openJob(path string) (j Job, err error) {
|
||||
// Open
|
||||
var f *os.File
|
||||
if f, err = os.Open(path); err != nil {
|
||||
err = fmt.Errorf("opening %s failed: %w", path, err)
|
||||
return
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
// Unmarshal job
|
||||
if err = json.NewDecoder(f).Decode(&j); err != nil {
|
||||
err = fmt.Errorf("unmarshaling %s failed: %w", path, err)
|
||||
return
|
||||
}
|
||||
|
||||
// Update paths
|
||||
for k, v := range j.Inputs {
|
||||
v.URL = "../" + v.URL
|
||||
j.Inputs[k] = v
|
||||
}
|
||||
for k, v := range j.Outputs {
|
||||
v.URL = "../" + v.URL
|
||||
j.Outputs[k] = v
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func assertFilesEqual(expected, actual string, t *testing.T) {
|
||||
// Expected hash
|
||||
expectedHash, err := hashFileContent(expected)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
|
||||
// Actual hash
|
||||
actualHash, err := hashFileContent(actual)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
|
||||
// Compare hash
|
||||
if !bytes.Equal(expectedHash, actualHash) {
|
||||
t.Errorf("expected hash (%s) != actual hash (%s)", expectedHash, actualHash)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
func hashFileContent(path string) (hash []byte, err error) {
|
||||
// Read
|
||||
var b []byte
|
||||
if b, err = ioutil.ReadFile(path); err != nil {
|
||||
err = fmt.Errorf("reading file %s failed: %w", path, err)
|
||||
return
|
||||
}
|
||||
|
||||
// Hash
|
||||
h := sha1.New()
|
||||
if _, err = h.Write(b); err != nil {
|
||||
err = fmt.Errorf("hashing content of file %s failed: %w", path, err)
|
||||
return
|
||||
}
|
||||
hash = h.Sum(nil)
|
||||
return
|
||||
}
|
BIN astiencoder/testdata/copy.mp4 (vendored): Binary file not shown.
BIN astiencoder/testdata/default-0-0-1.jpeg (vendored): Binary file not shown. (Before: 582 B)
BIN astiencoder/testdata/default-0-1-2.jpeg (vendored): Binary file not shown. (Before: 1.9 KiB)
BIN astiencoder/testdata/default-0-2-3.jpeg (vendored): Binary file not shown. (Before: 5.4 KiB)
BIN astiencoder/testdata/default-0-3-4.jpeg (vendored): Binary file not shown. (Before: 6.7 KiB)
BIN astiencoder/testdata/default-0-4-5.jpeg (vendored): Binary file not shown. (Before: 8.7 KiB)
BIN astiencoder/testdata/encode.mp4 (vendored): Binary file not shown.
@@ -1,473 +0,0 @@
package main

import (
	"errors"
	"fmt"
	"strings"

	"github.com/asticode/go-astiav"
	"github.com/asticode/go-astiencoder"
	astilibav "github.com/asticode/go-astiencoder/libav"
	"github.com/asticode/go-astikit"
)

func addWorkflow(name string, j Job, e *encoder) (w *astiencoder.Workflow, c *astikit.Closer, err error) {
	// Create closer
	c = astikit.NewCloser()

	// Create workflow
	w = astiencoder.NewWorkflow(e.w.Context(), name, e.eh, e.w.NewTask, c, e.s)

	// Add default stats
	if err = w.AddDefaultStats(); err != nil {
		err = fmt.Errorf("main: adding default stats failed: %w", err)
		return
	}

	// Build workflow
	b := newBuilder()
	if err = b.buildWorkflow(j, w, e.eh, c, e.s); err != nil {
		err = fmt.Errorf("main: building workflow failed: %w", err)
		return
	}

	// Update workflow server
	e.ws.SetWorkflow(w)
	return
}

type builder struct{}

func newBuilder() *builder {
	return &builder{}
}

type openedInput struct {
	c JobInput
	d *astilibav.Demuxer
}

type openedOutput struct {
	c JobOutput
	m *astilibav.Muxer
}

type buildData struct {
	c        *astikit.Closer
	decoders map[*astilibav.Demuxer]map[*astilibav.Stream]*astilibav.Decoder
	eh       *astiencoder.EventHandler
	inputs   map[string]openedInput
	outputs  map[string]openedOutput
	s        *astiencoder.Stater
	w        *astiencoder.Workflow
}

func newBuildData(w *astiencoder.Workflow, eh *astiencoder.EventHandler, c *astikit.Closer, s *astiencoder.Stater) *buildData {
	return &buildData{
		c:        c,
		eh:       eh,
		decoders: make(map[*astilibav.Demuxer]map[*astilibav.Stream]*astilibav.Decoder),
		s:        s,
		w:        w,
	}
}

func (b *builder) buildWorkflow(j Job, w *astiencoder.Workflow, eh *astiencoder.EventHandler, c *astikit.Closer, s *astiencoder.Stater) (err error) {
	// Create build data
	bd := newBuildData(w, eh, c, s)

	// No inputs
	if len(j.Inputs) == 0 {
		err = errors.New("main: no inputs provided")
		return
	}

	// Open inputs
	if bd.inputs, err = b.openInputs(j, bd); err != nil {
		err = fmt.Errorf("main: opening inputs failed: %w", err)
		return
	}

	// No outputs
	if len(j.Outputs) == 0 {
		err = errors.New("main: no outputs provided")
		return
	}

	// Open outputs
	if bd.outputs, err = b.openOutputs(j, bd); err != nil {
		err = fmt.Errorf("main: opening outputs failed: %w", err)
		return
	}

	// No operations
	if len(j.Operations) == 0 {
		err = errors.New("main: no operations provided")
		return
	}

	// Loop through operations
	for n, o := range j.Operations {
		// Add operation to workflow
		if err = b.addOperationToWorkflow(n, o, bd); err != nil {
			err = fmt.Errorf("main: adding operation %s with conf %+v to workflow failed: %w", n, o, err)
			return
		}
	}
	return
}

func (b *builder) openInputs(j Job, bd *buildData) (is map[string]openedInput, err error) {
	// Loop through inputs
	is = make(map[string]openedInput)
	for n, cfg := range j.Inputs {
		// Create demuxer
		var d *astilibav.Demuxer
		if d, err = astilibav.NewDemuxer(astilibav.DemuxerOptions{
			Dictionary:  astilibav.NewDefaultDictionary(cfg.Dict),
			EmulateRate: astilibav.DemuxerEmulateRateOptions{Enabled: cfg.EmulateRate},
			URL:         cfg.URL,
		}, bd.eh, bd.c, bd.s); err != nil {
			err = fmt.Errorf("main: creating demuxer failed: %w", err)
			return
		}

		// Index
		is[n] = openedInput{
			c: cfg,
			d: d,
		}
	}
	return
}

func (b *builder) openOutputs(j Job, bd *buildData) (os map[string]openedOutput, err error) {
	// Loop through outputs
	os = make(map[string]openedOutput)
	for n, cfg := range j.Outputs {
		// Create output
		oo := openedOutput{
			c: cfg,
		}

		// Switch on type
		switch cfg.Type {
		case JobOutputTypePktDump:
			// This is a per-operation and per-input value since we may want to index the path by input name
			// The writer is created afterwards
		default:
			// Create muxer
			if oo.m, err = astilibav.NewMuxer(astilibav.MuxerOptions{
				FormatName: cfg.Format,
				URL:        cfg.URL,
			}, bd.eh, bd.c, bd.s); err != nil {
				err = fmt.Errorf("main: creating muxer failed: %w", err)
				return
			}
		}

		// Index
		os[n] = oo
	}
	return
}

type operationInput struct {
	c JobOperationInput
	o openedInput
}

type operationOutput struct {
	c JobOperationOutput
	o openedOutput
}

func (b *builder) addOperationToWorkflow(name string, o JobOperation, bd *buildData) (err error) {
	// Get operation inputs and outputs
	var ois []operationInput
	var oos []operationOutput
	if ois, oos, err = b.operationInputsOutputs(o, bd); err != nil {
		err = fmt.Errorf("main: getting inputs and outputs of operation %s failed: %w", name, err)
		return
	}

	// Loop through inputs
	for _, i := range ois {
		// Loop through streams
		for _, is := range i.o.d.Streams() {
			// Only process a specific PID
			if i.c.PID != nil && is.ID != *i.c.PID {
				continue
			}

			// Only process a specific media type
			if i.c.MediaType != nil && is.Ctx.MediaType != i.c.MediaType.MediaType() {
				continue
			}

			// Add demuxer as root node of the workflow
			bd.w.AddChild(i.o.d)

			// In case of copy we only want to connect the demuxer to the muxer
			if o.Codec == JobOperationCodecCopy {
				// Loop through outputs
				for _, o := range oos {
					// Clone stream
					var os *astiav.Stream
					if os, err = astilibav.CloneStream(is, o.o.m.FormatContext()); err != nil {
						err = fmt.Errorf("main: cloning stream 0x%x(%d) of %s failed: %w", is.ID, is.ID, i.c.Name, err)
						return
					}

					// Create muxer handler
					h := o.o.m.NewPktHandler(os)

					// Connect demuxer to handler
					i.o.d.ConnectForStream(h, is)
				}
				continue
			}

			// Create decoder
			var d *astilibav.Decoder
			if d, err = b.createDecoder(bd, i, is); err != nil {
				err = fmt.Errorf("main: creating decoder for stream 0x%x(%d) of input %s failed: %w", is.ID, is.ID, i.c.Name, err)
				return
			}

			// Create output ctx
			outCtx := b.operationOutputCtx(o, d.OutputCtx(), oos)

			// Create filterer
			var f *astilibav.Filterer
			if f, err = b.createFilterer(bd, outCtx, d); err != nil {
				err = fmt.Errorf("main: creating filterer for stream 0x%x(%d) of input %s failed: %w", is.ID, is.ID, i.c.Name, err)
				return
			}

			// Create encoder
			var e *astilibav.Encoder
			if e, err = astilibav.NewEncoder(astilibav.EncoderOptions{Ctx: outCtx}, bd.eh, bd.c, bd.s); err != nil {
				err = fmt.Errorf("main: creating encoder for stream 0x%x(%d) of input %s failed: %w", is.ID, is.ID, i.c.Name, err)
				return
			}

			// Connect demuxer or filterer to encoder
			if f != nil {
				d.Connect(f)
				f.Connect(e)
			} else {
				d.Connect(e)
			}

			// Loop through outputs
			for _, o := range oos {
				// Switch on type
				var h astilibav.PktHandler
				switch o.o.c.Type {
				case JobOutputTypePktDump:
					// Create pkt dumper
					if h, err = astilibav.NewPktDumper(astilibav.PktDumperOptions{
						Data:    map[string]interface{}{"input": i.c.Name},
						Handler: astilibav.PktDumpFile,
						Pattern: o.o.c.URL,
					}, bd.eh, bd.c, bd.s); err != nil {
						err = fmt.Errorf("main: creating pkt dumper for output %s with conf %+v failed: %w", o.c.Name, o.c, err)
						return
					}
				default:
					// Add stream
					var os *astiav.Stream
					if os, err = e.AddStream(o.o.m.FormatContext()); err != nil {
						err = fmt.Errorf("main: adding stream for stream 0x%x(%d) of %s and output %s failed: %w", is.ID, is.ID, i.c.Name, o.c.Name, err)
						return
					}

					// Create muxer handler
					h = o.o.m.NewPktHandler(os)
				}

				// Connect encoder to handler
				e.Connect(h)
			}
		}
	}
	return
}

func (b *builder) operationInputsOutputs(o JobOperation, bd *buildData) (is []operationInput, os []operationOutput, err error) {
	// No inputs
	if len(o.Inputs) == 0 {
		err = errors.New("main: no operation inputs provided")
		return
	}

	// Loop through inputs
	for _, pi := range o.Inputs {
		// Retrieve opened input
		i, ok := bd.inputs[pi.Name]
		if !ok {
			err = fmt.Errorf("main: opened input %s not found", pi.Name)
			return
		}

		// Append input
		is = append(is, operationInput{
			c: pi,
			o: i,
		})
	}

	// No outputs
	if len(o.Outputs) == 0 {
		err = errors.New("main: no operation outputs provided")
		return
	}

	// Loop through outputs
	for _, po := range o.Outputs {
		// Retrieve opened output
		o, ok := bd.outputs[po.Name]
		if !ok {
			err = fmt.Errorf("main: opened output %s not found", po.Name)
			return
		}

		// Append output
		os = append(os, operationOutput{
			c: po,
			o: o,
		})
	}
	return
}

func (b *builder) operationOutputCtx(o JobOperation, inCtx astilibav.Context, oos []operationOutput) (outCtx astilibav.Context) {
	// Default output ctx is input ctx
	outCtx = inCtx

	// Set codec name
	outCtx.CodecName = o.Codec

	// Set frame rate
	if o.FrameRate != nil {
		outCtx.FrameRate = astiav.NewRational(o.FrameRate.Num(), o.FrameRate.Den())
	}

	// Set time base
	if o.TimeBase != nil {
		outCtx.TimeBase = astiav.NewRational(o.TimeBase.Num(), o.TimeBase.Den())
	} else {
		outCtx.TimeBase = astiav.NewRational(outCtx.FrameRate.Den(), outCtx.FrameRate.Num())
	}

	// Set pixel format
	if o.Codec == "mjpeg" {
		outCtx.PixelFormat = astiav.PixelFormatYuvj420P
	}

	// Set height
	if o.Height != nil {
		outCtx.Height = *o.Height
	}

	// Set width
	if o.Width != nil {
		outCtx.Width = *o.Width
	}

	// Set bit rate
	if o.BitRate != nil {
		outCtx.BitRate = *o.BitRate
	}

	// Set gop size
	if o.GopSize != nil {
		outCtx.GopSize = *o.GopSize
	}

	// Set thread count
	outCtx.ThreadCount = o.ThreadCount

	// Set dict
	outCtx.Dictionary = astilibav.NewDefaultDictionary(o.Dict)

	// TODO Add audio options

	// Set global header
	if oos[0].o.m != nil {
		outCtx.GlobalHeader = oos[0].o.m.FormatContext().OutputFormat().Flags().Has(astiav.IOFormatFlagGlobalheader)
	}
	return
}

func (b *builder) createDecoder(bd *buildData, i operationInput, is *astilibav.Stream) (d *astilibav.Decoder, err error) {
	// Get decoder
	var okD, okS bool
	if _, okD = bd.decoders[i.o.d]; okD {
		d, okS = bd.decoders[i.o.d][is]
	} else {
		bd.decoders[i.o.d] = make(map[*astilibav.Stream]*astilibav.Decoder)
	}

	// Decoder doesn't exist
	if !okD || !okS {
		// Create decoder
		if d, err = astilibav.NewDecoder(astilibav.DecoderOptions{
|
||||
CodecParameters: is.CodecParameters,
|
||||
OutputCtx: is.Ctx,
|
||||
}, bd.eh, bd.c, bd.s); err != nil {
|
||||
err = fmt.Errorf("main: creating decoder for stream 0x%x(%d) of %s failed: %w", is.ID, is.ID, i.c.Name, err)
|
||||
return
|
||||
}
|
||||
|
||||
// Connect demuxer
|
||||
i.o.d.ConnectForStream(d, is)
|
||||
|
||||
// Index decoder
|
||||
bd.decoders[i.o.d][is] = d
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (b *builder) createFilterer(bd *buildData, outCtx astilibav.Context, n astiencoder.Node) (f *astilibav.Filterer, err error) {
|
||||
// Create filters
|
||||
var filters []string
|
||||
|
||||
// Get in ctx
|
||||
inCtx := n.(astilibav.OutputContexter).OutputCtx()
|
||||
|
||||
// Switch on media type
|
||||
switch inCtx.MediaType {
|
||||
case astiav.MediaTypeVideo:
|
||||
// Frame rate
|
||||
// TODO Use select if inFramerate > outFramerate
|
||||
if inCtx.FrameRate.Den() > 0 && outCtx.FrameRate.Den() > 0 && inCtx.FrameRate.Num()/inCtx.FrameRate.Den() != outCtx.FrameRate.Num()/outCtx.FrameRate.Den() {
|
||||
filters = append(filters, fmt.Sprintf("minterpolate='fps=%d/%d'", outCtx.FrameRate.Num(), outCtx.FrameRate.Den()))
|
||||
}
|
||||
|
||||
// Scale
|
||||
if inCtx.Height != outCtx.Height || inCtx.Width != outCtx.Width {
|
||||
filters = append(filters, fmt.Sprintf("scale='w=%d:h=%d'", outCtx.Width, outCtx.Height))
|
||||
}
|
||||
}
|
||||
|
||||
// There are filters
|
||||
if len(filters) > 0 {
|
||||
// Create filterer options
|
||||
fo := astilibav.FiltererOptions{
|
||||
Content: strings.Join(filters, ","),
|
||||
Inputs: map[string]astiencoder.Node{
|
||||
"in": n,
|
||||
},
|
||||
OutputCtx: outCtx,
|
||||
}
|
||||
|
||||
// Create filterer
|
||||
if f, err = astilibav.NewFilterer(fo, bd.eh, bd.c, bd.s); err != nil {
|
||||
err = fmt.Errorf("main: creating filterer with filters %+v failed: %w", filters, err)
|
||||
return
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
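The `createFilterer` function above only builds a filter graph when the output context actually differs from the input: `minterpolate` when the frame rates diverge, `scale` when the dimensions do, joined with commas into a single FFmpeg filter string. A standalone sketch of that assembly logic (the `buildFilters` helper is hypothetical; only the `minterpolate` and `scale` filter names come from the code above):

```go
package main

import (
	"fmt"
	"strings"
)

// buildFilters mirrors the video branch of createFilterer: append a
// minterpolate filter when frame rates differ, a scale filter when
// dimensions differ, and join everything into one filter string.
func buildFilters(inFPS, outFPS, inW, inH, outW, outH int) string {
	var filters []string
	if inFPS != outFPS {
		filters = append(filters, fmt.Sprintf("minterpolate='fps=%d/1'", outFPS))
	}
	if inW != outW || inH != outH {
		filters = append(filters, fmt.Sprintf("scale='w=%d:h=%d'", outW, outH))
	}
	return strings.Join(filters, ",")
}

func main() {
	// 1280x720@30 -> 640x360@25 needs both filters.
	fmt.Println(buildFilters(30, 25, 1280, 720, 640, 360))
}
```

When input and output contexts match, the string is empty and `createFilterer` returns a nil filterer, so no filter node is inserted into the graph.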
@@ -1,33 +0,0 @@
package main

import (
	"testing"
)

func TestCopy(t *testing.T) {
	testJob(t, "../examples/copy.json", func(j Job) map[string]string {
		return map[string]string{
			"../examples/tmp/copy.mp4": "testdata/copy.mp4",
		}
	})
}

func TestMJpeg(t *testing.T) {
	testJob(t, "../examples/mjpeg.json", func(j Job) (o map[string]string) {
		return map[string]string{
			"../examples/tmp/default-0-0-1.jpeg": "testdata/default-0-0-1.jpeg",
			"../examples/tmp/default-0-1-2.jpeg": "testdata/default-0-1-2.jpeg",
			"../examples/tmp/default-0-2-3.jpeg": "testdata/default-0-2-3.jpeg",
			"../examples/tmp/default-0-3-4.jpeg": "testdata/default-0-3-4.jpeg",
			"../examples/tmp/default-0-4-5.jpeg": "testdata/default-0-4-5.jpeg",
		}
	})
}

func TestEncode(t *testing.T) {
	testJob(t, "../examples/encode.json", func(j Job) (o map[string]string) {
		return map[string]string{
			"../examples/tmp/encode.mp4": "testdata/encode.mp4",
		}
	})
}
Binary file not shown.
Before: Size 205 KiB
@@ -1,27 +0,0 @@
{
  "inputs": {
    "default": {
      "url": "examples/sample.mp4"
    }
  },
  "outputs": {
    "default": {
      "url": "examples/tmp/copy.mp4"
    }
  },
  "operations": {
    "default": {
      "codec": "copy",
      "inputs": [
        {
          "name": "default"
        }
      ],
      "outputs": [
        {
          "name": "default"
        }
      ]
    }
  }
}
@@ -1,45 +0,0 @@
{
  "inputs": {
    "default": {
      "url": "examples/sample.mp4"
    }
  },
  "outputs": {
    "default": {
      "url": "examples/tmp/encode.mp4"
    }
  },
  "operations": {
    "audio": {
      "codec": "aac",
      "inputs": [
        {
          "media_type": "audio",
          "name": "default"
        }
      ],
      "outputs": [
        {
          "name": "default"
        }
      ]
    },
    "video": {
      "codec": "libx264",
      "dict": "profile=baseline",
      "height": 360,
      "inputs": [
        {
          "media_type": "video",
          "name": "default"
        }
      ],
      "outputs": [
        {
          "name": "default"
        }
      ],
      "width": 640
    }
  }
}
@@ -1,30 +0,0 @@
{
  "inputs": {
    "default": {
      "url": "examples/sample.mp4"
    }
  },
  "outputs": {
    "default": {
      "type": "pkt_dump",
      "url": "examples/tmp/{{.input}}-{{.stream_idx}}-{{.pts}}-{{.count}}.jpeg"
    }
  },
  "operations": {
    "default": {
      "codec": "mjpeg",
      "frame_rate": "1",
      "inputs": [
        {
          "media_type": "video",
          "name": "default"
        }
      ],
      "outputs": [
        {
          "name": "default"
        }
      ]
    }
  }
}
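The three example jobs above share one layout: named `inputs` and `outputs` (each with a `url`, and optionally a `type` such as `pkt_dump`), plus named `operations` that reference them by name and set a `codec`. A minimal sketch of decoding such a job (the `job` struct and `parseJob` helper are hypothetical simplifications; the real astiencoder `Job` type carries more fields):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// job is a hypothetical, minimal mirror of the layout shared by the
// example files above; the real astiencoder Job type may differ.
type job struct {
	Inputs map[string]struct {
		URL string `json:"url"`
	} `json:"inputs"`
	Outputs map[string]struct {
		Type string `json:"type"`
		URL  string `json:"url"`
	} `json:"outputs"`
	Operations map[string]struct {
		Codec string `json:"codec"`
	} `json:"operations"`
}

// parseJob unmarshals a job description from raw JSON bytes.
func parseJob(raw []byte) (j job, err error) {
	err = json.Unmarshal(raw, &j)
	return
}

func main() {
	raw := []byte(`{"inputs":{"default":{"url":"examples/sample.mp4"}},` +
		`"outputs":{"default":{"url":"examples/tmp/copy.mp4"}},` +
		`"operations":{"default":{"codec":"copy"}}}`)
	j, err := parseJob(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(j.Inputs["default"].URL, j.Operations["default"].Codec)
}
```

Because inputs and outputs are maps keyed by name, an operation's `"name": "default"` references resolve with a plain map lookup, which is exactly what `operationInputsOutputs` does against `bd.inputs` and `bd.outputs` in the builder code above.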
Binary file not shown.
4 examples/tmp/.gitignore vendored
@@ -1,4 +0,0 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore
1 go.mod
@@ -3,7 +3,6 @@ module github.com/asticode/go-astiencoder
go 1.13

require (
	github.com/BurntSushi/toml v0.3.1
	github.com/asticode/go-astiav v0.6.0
	github.com/asticode/go-astikit v0.35.0
	github.com/shirou/gopsutil/v3 v3.21.10
2 go.sum
@@ -1,5 +1,3 @@
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/StackExchange/wmi v1.2.1 h1:VIkavFPXSjcnS+O8yTq7NI32k0R5Aj+v39y29VYDOSA=
github.com/StackExchange/wmi v1.2.1/go.mod h1:rcmrprowKIVzvc+NUiLncP2uuArMWLCbu9SBzvHz7e8=
github.com/asticode/go-astiav v0.6.0 h1:OEizrERY5Aj+H8X+479v9Lu6ipdYgSZyxU79hCM5JpY=