diff --git a/VideoStitch/README.md b/VideoStitch/README.md
new file mode 100644
index 0000000..46082dd
--- /dev/null
+++ b/VideoStitch/README.md
@@ -0,0 +1,168 @@
+# Multi-Channel Video Stitching
+
+## 1. Introduction
+This sample uses OpenCV to stitch 4 video streams into one. The end-to-end pipeline is shown below:
+![flow](flow.png)
+## 2. Use Cases
+Multi-channel video stitching merges several small videos with overlapping regions into a single wide-view video, overcoming the limited field of view of a single camera. It is widely used in intelligent surveillance, virtual reality, and similar fields.
+## 3. Directory Layout
+The project is named VideoStitch and is organized as follows:
+```
+VideoStitch
+|---- src          // source files
+| |---- main.cpp
+| |---- stitch.cpp
+| |---- util.h
+|---- run.sh       // run script
+|---- test.sh      // test script
+|---- README.md
+|---- build.sh     // build script
+|---- flow.png
+```
+## 4. Dependencies
+### 4.1 Versions
+| Software | Version |
+| :--------: | :------: |
+|Ubuntu|18.04.1 LTS|
+|gcc|7.5.0|
+|C++|11|
+|opencv|4.5.2|
+|opencv_contrib|4.5.2|
+
+### 4.2 Installing opencv and opencv_contrib on the 200DK
+Because network setup on the 200DK can be troublesome and compilation there is slow, you can build and install OpenCV on another machine of the same architecture and then copy the result to the 200DK.
+
+#### 4.2.1 Building opencv and opencv_contrib on an aarch64 machine
+a. Install the dependencies that may be required:
+```
+sudo apt-get update
+sudo apt-get install build-essential
+sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
+sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
+```
+b. Download and unpack the opencv and opencv_contrib sources:
+```
+wget https://github.com/opencv/opencv/archive/4.5.2.zip -O opencv-4.5.2.zip
+wget https://github.com/opencv/opencv_contrib/archive/refs/tags/4.5.2.zip -O opencv_contrib-4.5.2.zip
+unzip opencv-4.5.2.zip
+unzip opencv_contrib-4.5.2.zip
+```
+c. Build and install:
+```
+mkdir opencv-4.5.2/build && cd opencv-4.5.2/build
+cmake -D CMAKE_BUILD_TYPE=Release -D BUILD_opencv_world=ON -D OPENCV_DOWNLOAD_MIRROR_ID=gitcode -D OPENCV_ENABLE_NONFREE=ON -D BUILD_TIFF=ON -D OPENCV_GENERATE_PKGCONFIG=ON -D CMAKE_INSTALL_PREFIX=xxx/opencv_install -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.5.2/modules ..
+# CMAKE_INSTALL_PREFIX sets the install prefix; adjust it as needed
+# OPENCV_EXTRA_MODULES_PATH is the path to the modules directory inside the unpacked opencv_contrib-4.5.2; adjust it as needed
+make -j20
+make install
+```
+>If some files fail to download during the build, you can fetch them (models and other cached files) manually from OBS and place them under opencv-4.5.2/.cache.
+>Download link: https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/ascend_community_projects/VideoStitch/cache.zip
+#### 4.2.2 Copying the installed opencv (including opencv_contrib) to the 200DK
+a. Install the dependencies that may be required:
+```
+sudo apt-get update
+sudo apt-get install build-essential
+sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
+sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
+```
+Note: the dependency versions must match those used at build time; otherwise the shared libraries may fail to link correctly.
+
+b. Upload the install directory (the opencv_install directory set by CMAKE_INSTALL_PREFIX) to any location on the 200DK.
+Note: to avoid breaking the symbolic links under lib, pack the directory into an archive before transferring it:
+```
+# pack
+tar -zcvf opencv_install.tar.gz opencv_install/
+# unpack
+tar -zxvf opencv_install.tar.gz
+```
+c. Configure environment variables.
+
+On the 200DK, open the pc file opencv_install/lib/pkgconfig/opencv4.pc in the target directory and change prefix to the current opencv_install path.
+Append the following environment variables to the end of /etc/profile:
+```
+export PKG_CONFIG_PATH=xxx/opencv_install/lib/pkgconfig:$PKG_CONFIG_PATH
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:xxx/opencv_install/lib
+```
+Reload the configuration:
+```
+source /etc/profile
+```
+
+## 5. Data Preparation
+### 5.1 Constraints
+- Input videos must be MP4 files with a resolution of 1920*1080.
+- The input must be 4 synchronized 1080p videos from fixed camera positions, arranged roughly as a 2x2 grid.
+- The videos must share small overlapping regions, which serve as the registration reference for stitching.
+- The top-left video is the reference and must be passed as the first input.
+
+### 5.2 Preparing the Data
+Run the following commands to create a "data" directory and place the 4 videos to stitch in it:
+```
+cd ${VideoStitch root directory}
+mkdir data
+```
+Note: to craft simple test data by hand, use a video editor to crop, rotate, and scale a high-resolution video at a 16:9 aspect ratio, then export four mutually overlapping MP4 videos at 1920*1080 as the sample input.
+
+## 6. Build and Run
+### 6.1 Build
+Run the following commands to build; a binary named "main" is generated in ${VideoStitch root directory}:
+```
+cd ${VideoStitch root directory}
+bash build.sh
+```
+
+### 6.2 Run
+Edit "run.sh" as follows:
+```
+ VIDEO0="${path to top-left video}" # top-left video, used as the stitching reference
+ VIDEO1="${path to top-right video}"
+ VIDEO2="${path to bottom-left video}"
+ VIDEO3="${path to bottom-right video}"
+```
+Save the script, then run the following command to stitch the videos; an "output.avi" video at 3840*2160 is generated in the current directory:
+```
+bash run.sh
+```
+Notes:
+
+1. Because this step writes out a video and the 200DK has limited memory, keep test inputs under 2 minutes to avoid freezing the system.
+
+2. The sample stitches both rotated/scaled and unmodified videos well.
+
+Script parameters:
+```
+ VIDEO0-VIDEO3: paths of the 4 videos to stitch;
+ FRAMES: number of frames to stitch; "0" means stitch every frame of the videos;
+ VIDEO_GLAG: whether to save the stitched video; "0" runs the stitching only without writing a video, "1" runs the stitching and writes the video;
+ MINHESSIAN: threshold of the feature-extraction algorithm, default "2000"; if the videos have few distinctive feature points or small overlaps, reduce it; valid range (0, 10000);
+```
+
+## 7. Performance Test
+
+Edit "test.sh" as follows (see the parameter description in section 6.2):
+```
+ VIDEO0="${path to top-left video}" # top-left video, used as the stitching reference
+ VIDEO1="${path to top-right video}"
+ VIDEO2="${path to bottom-left video}"
+ VIDEO3="${path to bottom-right video}"
+ FRAMES=40 # adjust as needed
+```
+Save the script, then run the following command; the performance results are printed to the console:
+```
+bash test.sh
+```
+Measured results:
+| Frames stitched | Average time per frame |
+| :--------: | :------: |
+|40|0.558s|
+|100|0.561s|
+
+## 8 Q&A
+### 8.1 "Failed to read video" when running the script
+> The video path is probably wrong.
+### 8.2 Black areas in the output video
+>Black edges or regions are expected: the stitched video is smaller than the canvas, which is filled with (0,0,0) by default.
+
+### 8.3 "OpenCV(4.5.2) xxx/opencv-4.5.2/modules/core/src/matrix_wrap.cpp:1667: error: (-215:Assertion failed) !fixedSize() in function 'release'" during stitching
+>Stitching probably failed because the reference frame was specified incorrectly or too few shared feature points were found in the overlapping regions. Check the input videos or reduce the MINHESSIAN parameter.
\ No newline at end of file
diff --git a/VideoStitch/build.sh b/VideoStitch/build.sh
new file mode 100644
index 0000000..909114b
--- /dev/null
+++ b/VideoStitch/build.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+g++ -std=c++11 -pthread -o main src/* `pkg-config opencv4 --cflags --libs`
\ No newline at end of file
diff --git a/VideoStitch/flow.png b/VideoStitch/flow.png
new file mode 100644
index 0000000..fedf337
Binary files /dev/null and b/VideoStitch/flow.png differ
diff --git a/VideoStitch/run.sh b/VideoStitch/run.sh
new file mode 100644
index 0000000..b560743
--- /dev/null
+++ b/VideoStitch/run.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+# Copyright(C) 2022. Huawei Technologies Co.,Ltd. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Paths of the 4 input videos
+VIDEO0="" # top-left video, used as the stitching reference
+VIDEO1=""
+VIDEO2=""
+VIDEO3=""
+
+# Number of frames to stitch; 0 means stitch every frame of the videos
+FRAMES=0
+
+# Whether to save the result: 0 runs the stitching only without writing a video; 1 stitches and writes the video
+VIDEO_GLAG=1
+
+# Feature-point extraction threshold; if the overlapping regions have weak texture or the videos are blurry, reduce it; valid range (0,10000)
+MINHESSIAN=2000
+
+./main $FRAMES $VIDEO_GLAG $MINHESSIAN $VIDEO0 $VIDEO1 $VIDEO2 $VIDEO3
+echo "Stitch finish!"
\ No newline at end of file
diff --git a/VideoStitch/src/main.cpp b/VideoStitch/src/main.cpp
new file mode 100644
index 0000000..a36f1c8
--- /dev/null
+++ b/VideoStitch/src/main.cpp
@@ -0,0 +1,94 @@
+/*
+ * Copyright(C) 2022. Huawei Technologies Co.,Ltd. All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <unistd.h>
+#include <cstdlib>
+#include <ctime>
+#include <iostream>
+#include <vector>
+#include "util.h"
+
+using namespace std;
+using namespace cv;
+
+void MainStitch(vector<string> &videos, int frameNums, bool writeVideo, int minHessian) {
+    // Initialization
+    Stitch stitch(writeVideo, minHessian);
+    bool ret = stitch.Init(videos);
+    if (!ret) {
+        cout << "Initialization failed!" << endl;
+        return;
+    }
+    if ((frameNums < 1) || (frameNums >= stitch.TotalFrames())) {
+        frameNums = stitch.TotalFrames() - 1;
+    }
+    cout << "Initialization succeeded!" << endl;
+    // Stitch
+    clock_t sumBegin = clock();
+    stitch.Stitching(frameNums);
+    clock_t sumFinish = clock();
+    if (!writeVideo) {
+        cout << "Average end-to-end time: " << float(sumFinish - sumBegin) / CLOCKS_PER_SEC / frameNums << " seconds" << endl;
+    }
+}
+
+int main(int argc, char* argv[]) {
+    int videoNum = 4;
+    vector<string> videos(videoNum);
+    int frames = 0;
+    bool writeVideo = false;
+    int argNum = 8;
+    int minHessian = 0;
+    int minHessianLimit = 10000;
+    if (argc < argNum) {
+        cout << "missing parameter!" << endl;
+        return 0;
+    }
+    int argIndex = 1;
+    frames = atoi(argv[argIndex]);
+    if (atoi(argv[++argIndex])) {
+        writeVideo = true;
+    }
+    minHessian = atoi(argv[++argIndex]);
+    int videoIndex = 0;
+    videos[videoIndex++] = argv[++argIndex];
+    videos[videoIndex++] = argv[++argIndex];
+    videos[videoIndex++] = argv[++argIndex];
+    videos[videoIndex++] = argv[++argIndex];
+
+    if (minHessian <= 0 || minHessian >= minHessianLimit) {
+        cout << "minHessian must be in range (0,10000), but got " << minHessian << endl;
+        return -1;
+    }
+    for (int i = 0; i < videoNum; i++) {
+        if (access(videos[i].c_str(), F_OK) == -1) {
+            cout << "File " << videos[i] << " does not exist.\n";
+            return -1;
+        }
+        string suffix_str = videos[i].substr(videos[i].find_last_of('.') + 1);
+        if (suffix_str != "mp4" && suffix_str != "MP4") {
+            cout << "File " << videos[i] << " isn't MP4.\n";
+            return -1;
+        }
+    }
+    MainStitch(videos, frames, writeVideo, minHessian);
+    return 0;
+}
\ No newline at end of file
diff --git a/VideoStitch/src/stitch.cpp b/VideoStitch/src/stitch.cpp
new file mode 100644
index 0000000..de5a871
--- /dev/null
+++ b/VideoStitch/src/stitch.cpp
@@ -0,0 +1,390 @@
+/*
+ * Copyright(C) 2022. Huawei Technologies Co.,Ltd. All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <cmath>
+#include <cstring>
+#include <iostream>
+#include <string>
+#include <thread>
+#include <vector>
+#include <opencv2/xfeatures2d.hpp>
+#include "util.h"
+
+using namespace std;
+using namespace cv;
+
+namespace {
+const int WIDTH = 1920;
+const int HEIGHT = 1080;
+const int WIDTHALL = 3840;
+const int HEIGHTALL = 2160;
+const int VIDEONUM = 4;
+const int index0 = 0;
+const int index1 = 1;
+const int index2 = 2;
+const int index3 = 3;
+}
+// Compute the perspective-transform coordinate mapping
+void getMapping(Mat H, vector<int> &newIndices, vector<int> &orgIndices) {
+    int num = HEIGHT * WIDTH;
+    int depth = 3;
+    Mat old_xy = Mat::ones(num, depth, CV_32F);
+    Mat xyz;
+
+    for (int j = 0; j < HEIGHT; j++) {
+        for (int i = 0; i < WIDTH; i++) {
+            old_xy.at<float>(j * WIDTH + i, index0) = (float)i;
+            old_xy.at<float>(j * WIDTH + i, index1) = (float)j;
+        }
+    }
+    Mat H_t, H_32f;
+    transpose(H, H_t);
+    H_t.convertTo(H_32f, CV_32F);
+    xyz = old_xy * H_32f;
+    for (int i = 0; i < num; i++) {
+        int x = ceil(xyz.at<float>(i, index0) / xyz.at<float>(i, index2));
+        int y = ceil(xyz.at<float>(i, index1) / xyz.at<float>(i, index2));
+        if (x > 0 && y > 0 && x < WIDTHALL && y < HEIGHTALL) {
+            newIndices.emplace_back(y);
+            newIndices.emplace_back(x);
+            orgIndices.emplace_back(int(old_xy.at<float>(i, index1)));
+            orgIndices.emplace_back(int(old_xy.at<float>(i, index0)));
+        }
+    }
+}
+
+// Read a single frame
+void ReadFrame(VideoCapture &cap, Mat &frame) {
+    if (!cap.read(frame)) {
+        cout << "Failed to read frame" << endl;
+    }
+}
+
+void WarpCopy(Mat &src, Mat &warp, int *goalIndices, int *srcIndices, int *goalIndicesEnd) {
+    while (goalIndices < goalIndicesEnd) {
+        // Read the coordinates into locals first: incrementing the same
+        // pointer twice inside one argument list is unsequenced (UB)
+        int dstY = *goalIndices++;
+        int dstX = *goalIndices++;
+        int srcY = *srcIndices++;
+        int srcX = *srcIndices++;
+        warp.at<Vec3b>(dstY, dstX) = src.at<Vec3b>(srcY, srcX);
+    }
+}
+
+// Initialization
+bool Stitch::Init(vector<string> &videos) {
+    for (int i = 0; i < VIDEONUM; i++) {
+        caps_.emplace_back();
+        caps_[i].open(videos[i]);
+        if (!caps_[i].isOpened()) {
+            cout << "Failed to read video, check the video path!" << endl;
+            return false;
+        }
+        cout << videos[i] << " opened successfully!" << endl;
+        int width = caps_[i].get(CAP_PROP_FRAME_WIDTH);
+        int height = caps_[i].get(CAP_PROP_FRAME_HEIGHT);
+        cout << "Video width: " << width << endl;
+        cout << "Video height: " << height << endl;
+        cout << "Total frames: " << caps_[i].get(CAP_PROP_FRAME_COUNT) << endl;
+        cout << "FPS: " << caps_[i].get(CAP_PROP_FPS) << endl;
+        if (width != WIDTH || height != HEIGHT) {
+            cout << "The input videos' resolution must be 1080P, but got " << width << '*' << height << endl;
+            return false;
+        }
+    }
+    totalFrames_ = caps_[index0].get(CAP_PROP_FRAME_COUNT);
+
+    if (writeFlag_) {
+        double fps = caps_[index0].get(CAP_PROP_FPS);
+        writer_ = VideoWriter("./output.avi", VideoWriter::fourcc('x', '2', '6', '4'), fps, Size(WIDTHALL, HEIGHTALL), true);
+        if (writer_.isOpened()) {
+            cout << "writer_ is opened!" << endl;
+        } else {
+            cout << "writer_ is not opened!"
+                 << endl;
+        }
+    }
+    GetTransformationH();
+    return true;
+}
+
+Stitch::~Stitch() {
+    for (int i = 0; i < VIDEONUM; i++) {
+        caps_[i].release();
+    }
+    writer_.release();
+}
+
+// Compute the transformation matrix between two images
+void CalTransformationH(Mat &img0, Mat &img1, vector<Mat> &Hs, int minHessian = 2000) {
+    Mat gray0, gray1;
+    std::vector<KeyPoint> ipts0, ipts1;
+    Mat desp0, desp1;
+    cvtColor(img0, gray0, COLOR_BGR2GRAY);
+    cvtColor(img1, gray1, COLOR_BGR2GRAY);
+    // minHessian is the Hessian threshold of the SURF detector
+    Ptr<xfeatures2d::SURF> surf = xfeatures2d::SURF::create(minHessian);
+    // Detect keypoints and compute their descriptors
+    surf->detectAndCompute(gray0, Mat(), ipts0, desp0);
+    surf->detectAndCompute(gray1, Mat(), ipts1, desp1);
+    // Match the feature points
+    vector<vector<DMatch>> matchPoints;
+    FlannBasedMatcher matcher;
+    vector<Mat> train_disc(1, desp1);
+    matcher.add(train_disc);
+    matcher.train();
+    matcher.knnMatch(desp0, matchPoints, index2); // k nearest neighbors, sorted by distance
+    vector<DMatch> good_matches;
+    for (size_t i = 0; i < matchPoints.size(); i++) {
+        if (matchPoints[i][index0].distance < 0.4f * matchPoints[i][index1].distance) {
+            good_matches.push_back(matchPoints[i][index0]);
+        }
+    }
+    vector<Point2f> ip0;
+    vector<Point2f> ip1;
+    // Collect the keypoints of the good matches
+    for (unsigned int i = 0; i < good_matches.size(); ++i) {
+        ip1.push_back(ipts1[good_matches[i].trainIdx].pt);
+        ip0.push_back(ipts0[good_matches[i].queryIdx].pt);
+    }
+    Hs.push_back(findHomography(ip1, ip0, RANSAC)); // estimate the perspective transform
+}
+
+// Compute the image mappings from the first frames
+bool Stitch::GetTransformationH() {
+    Mat warp = Mat(HEIGHTALL, WIDTHALL, CV_8UC3);
+    vector<Mat> frame(VIDEONUM);
+    vector<Mat> Hs;
+    vector<int> newIndices;
+    vector<int> orgIndices;
+
+    for (int i = 0; i < VIDEONUM; i++) {
+        ReadFrame(caps_[i], frame[i]);
+    }
+
+    // Copy channel 0 onto warp
+    frame[index0].copyTo(warp(Rect(0, 0, frame[index0].cols, frame[index0].rows)));
+
+    // Compute the coordinate mapping for channel 1
+    CalTransformationH(frame[index0], frame[index1], Hs, minHessian_);
+    getMapping(Hs[index0], newIndices, orgIndices);
+    goalIndices_[index0] = new int[newIndices.size()];
+    memcpy(goalIndices_[index0], &newIndices[index0], newIndices.size() * sizeof(int));
+    srcIndices_[index0] = new int[orgIndices.size()];
+    memcpy(srcIndices_[index0], &orgIndices[index0], orgIndices.size() * sizeof(int));
+    goalIndicesEnd_[index0] = &(goalIndices_[index0][newIndices.size() - 1]);
+
+    // Copy channel 1 into place
+    WarpCopy(frame[index1], warp, goalIndices_[index0], srcIndices_[index0], goalIndicesEnd_[index0]);
+
+    // Compute the coordinate mapping for channel 2
+    CalTransformationH(warp, frame[index2], Hs, minHessian_);
+    newIndices.clear();
+    orgIndices.clear();
+    getMapping(Hs[index1], newIndices, orgIndices);
+    goalIndices_[index1] = new int[newIndices.size()];
+    memcpy(goalIndices_[index1], &newIndices[index0], newIndices.size() * sizeof(int));
+    srcIndices_[index1] = new int[orgIndices.size()];
+    memcpy(srcIndices_[index1], &orgIndices[index0], orgIndices.size() * sizeof(int));
+    goalIndicesEnd_[index1] = &(goalIndices_[index1][newIndices.size() - 1]);
+
+    // Copy channel 2 into place
+    WarpCopy(frame[index2], warp, goalIndices_[index1], srcIndices_[index1], goalIndicesEnd_[index1]);
+
+    // Compute the coordinate mapping for channel 3
+    CalTransformationH(warp, frame[index3], Hs, minHessian_);
+    newIndices.clear();
+    orgIndices.clear();
+    getMapping(Hs[index2], newIndices, orgIndices);
+    goalIndices_[index2] = new int[newIndices.size()];
+    memcpy(goalIndices_[index2], &newIndices[index0], newIndices.size() * sizeof(int));
+    srcIndices_[index2] = new int[orgIndices.size()];
+    memcpy(srcIndices_[index2], &orgIndices[index0], orgIndices.size() * sizeof(int));
+    goalIndicesEnd_[index2] = &(goalIndices_[index2][newIndices.size() - 1]);
+
+    // Copy channel 3 into place
+    WarpCopy(frame[index3], warp, goalIndices_[index2], srcIndices_[index2], goalIndicesEnd_[index2]);
+
+    // Save the stitched frame
+    if (writeFlag_) {
+        writer_.write(warp);
+    }
+    return true;
+}
+
+// Reader thread for channel 0 (reads straight into the canvas)
+void Stitch::ReadWarpped(int frameNums, VideoCapture &cap) {
+    for (int i = 0; i < frameNums; i++) {
+        Mat warp = Mat(HEIGHTALL, WIDTHALL, CV_8UC3);
+        if (!cap.read(warp(Rect(0, 0, WIDTH, HEIGHT)))) {
+            cout << "Failed to read frame" << endl;
+            return;
+        }
+        {
+            unique_lock<mutex> lk(mtx_);
+            warps_.push(warp);
+            if (warps_.size() == maxSize_) {
+                cvRead0_.wait(lk);
+            }
+        }
+        if (warps_.size() == 1) {
+            cvStitch0_.notify_one();
+        }
+    }
+}
+
+// Reader thread for channels 1-3
+void Stitch::Read(int frameNums, VideoCapture &cap, int i) {
+    for (int j = 0; j < frameNums; j++) {
+        Mat frame;
+        ReadFrame(cap, frame);
+        switch (i) {
+            case index1: {
+                {
+                    unique_lock<mutex> lk(mtx_);
+                    frames1_.push(frame);
+                    if (frames1_.size() == maxSize_) {
+                        cvRead1_.wait(lk);
+                    }
+                }
+                if (frames1_.size() == 1) {
+                    cvStitch1_.notify_one();
+                }
+                break;
+            }
+            case index2: {
+                {
+                    unique_lock<mutex> lk(mtx_);
+                    frames2_.push(frame);
+                    if (frames2_.size() == maxSize_) {
+                        cvRead2_.wait(lk);
+                    }
+                }
+                if (frames2_.size() == 1) {
+                    cvStitch2_.notify_one();
+                }
+                break;
+            }
+            case index3: {
+                {
+                    unique_lock<mutex> lk(mtx_);
+                    frames3_.push(frame);
+                    if (frames3_.size() == maxSize_) {
+                        cvRead3_.wait(lk);
+                    }
+                }
+                if (frames3_.size() == 1) {
+                    cvStitch3_.notify_one();
+                }
+                break;
+            }
+            default:
+                cout << "please check input 'i', got " << i << endl;
+        }
+    }
+}
+
+// Stitching thread
+void Stitch::StitchAll(int frameNums) {
+    Mat frame, warp;
+    for (int i = 0; i < frameNums; i++) {
+        // channel 0
+        {
+            unique_lock<mutex> lk(mtx_);
+            if (warps_.size() == 0) {
+                cvStitch0_.wait(lk);
+            }
+            warp = warps_.front();
+            warps_.pop();
+        }
+        if (warps_.size() == (maxSize_ - 1)) {
+            cvRead0_.notify_one();
+        }
+
+        // channel 1
+        {
+            unique_lock<mutex> lk(mtx_);
+            if (frames1_.size() == 0) {
+                cvStitch1_.wait(lk);
+            }
+            frame = frames1_.front();
+            frames1_.pop();
+        }
+        if (frames1_.size() == (maxSize_ - 1)) {
+            cvRead1_.notify_one();
+        }
+        WarpCopy(frame, warp, goalIndices_[index0], srcIndices_[index0], goalIndicesEnd_[index0]);
+
+        // channel 2
+        {
+            unique_lock<mutex> lk(mtx_);
+            if (frames2_.size() == 0) {
+                cvStitch2_.wait(lk);
+            }
+            frame = frames2_.front();
+            frames2_.pop();
+        }
+        if (frames2_.size() == (maxSize_ - 1)) {
+            cvRead2_.notify_one();
+        }
+        WarpCopy(frame, warp, goalIndices_[index1], srcIndices_[index1], goalIndicesEnd_[index1]);
+
+        // channel 3
+        {
+            unique_lock<mutex> lk(mtx_);
+            if (frames3_.size() == 0) {
+                cvStitch3_.wait(lk);
+            }
+            frame = frames3_.front();
+            frames3_.pop();
+        }
+        if (frames3_.size() == (maxSize_ - 1)) {
+            cvRead3_.notify_one();
+        }
+        WarpCopy(frame, warp, goalIndices_[index2], srcIndices_[index2], goalIndicesEnd_[index2]);
+
+        // save
+        if (writeFlag_) {
+            writer_.write(warp);
+        }
+    }
+}
+
+// Top-level scheduling
+bool Stitch::Stitching(int frameNums) {
+    thread read0 = thread(&Stitch::ReadWarpped, this, frameNums, ref(caps_[index0]));
+    thread read1 = thread(&Stitch::Read, this, frameNums, ref(caps_[index1]), index1);
+    thread read2 = thread(&Stitch::Read, this, frameNums, ref(caps_[index2]), index2);
+    thread read3 = thread(&Stitch::Read, this, frameNums, ref(caps_[index3]), index3);
+    thread stitchAll = thread(&Stitch::StitchAll, this, frameNums);
+    if (read0.joinable()) {
+        read0.join();
+    }
+    if (read1.joinable()) {
+        read1.join();
+    }
+    if (read2.joinable()) {
+        read2.join();
+    }
+    if (read3.joinable()) {
+        read3.join();
+    }
+    if (stitchAll.joinable()) {
+        stitchAll.join();
+    }
+    return true;
+}
\ No newline at end of file
diff --git a/VideoStitch/src/util.h b/VideoStitch/src/util.h
new file mode 100644
index 0000000..4a3c30d
--- /dev/null
+++ b/VideoStitch/src/util.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright(C) 2022. Huawei Technologies Co.,Ltd. All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef UTIL_H
+#define UTIL_H
+#include <condition_variable>
+#include <mutex>
+#include <queue>
+#include <string>
+#include <thread>
+#include <vector>
+#include <opencv2/opencv.hpp>
+
+
+class Stitch
+{
+public:
+    explicit Stitch(bool write_video = false, int minHessian = 2000, int maxSize = 2) :
+        writeFlag_(write_video), maxSize_(maxSize), minHessian_(minHessian) {}
+    ~Stitch();
+    bool Init(std::vector<std::string> &videos);
+    int TotalFrames() {
+        return totalFrames_;
+    }
+    bool Stitching(int frameNums = 0);
+
+private:
+    bool GetTransformationH();
+    void ReadWarpped(int frameNums, cv::VideoCapture &cap);
+    void Read(int frameNums, cv::VideoCapture &cap, int i);
+    void StitchAll(int frameNums);
+
+    bool writeFlag_;
+    int maxSize_;
+    int totalFrames_;
+    int minHessian_;
+    cv::VideoWriter writer_;
+    std::vector<cv::VideoCapture> caps_;
+    int *goalIndices_[3];
+    int *srcIndices_[3];
+    int *goalIndicesEnd_[3];
+
+    std::queue<cv::Mat> warps_;
+    std::queue<cv::Mat> frames1_;
+    std::queue<cv::Mat> frames2_;
+    std::queue<cv::Mat> frames3_;
+    std::mutex mtx_;
+    std::condition_variable cvRead0_, cvRead1_, cvRead2_, cvRead3_, cvStitch0_, cvStitch1_, cvStitch2_, cvStitch3_;
+};
+
+#endif
\ No newline at end of file
diff --git a/VideoStitch/test.sh b/VideoStitch/test.sh
new file mode 100644
index 0000000..34592b2
--- /dev/null
+++ b/VideoStitch/test.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+# Copyright(C) 2022. Huawei Technologies Co.,Ltd. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Paths of the 4 input videos
+VIDEO0="" # top-left video, used as the stitching reference
+VIDEO1=""
+VIDEO2=""
+VIDEO3=""
+
+# Number of frames to stitch; 0 means stitch every frame of the videos
+FRAMES=40
+
+# Whether to save the result: 0 runs the stitching only without writing a video; 1 stitches and writes the video
+VIDEO_GLAG=0
+
+# Feature-point extraction threshold; if the overlapping regions have weak texture or the videos are blurry, reduce it; valid range (0,10000)
+MINHESSIAN=2000
+
+./main $FRAMES $VIDEO_GLAG $MINHESSIAN $VIDEO0 $VIDEO1 $VIDEO2 $VIDEO3
+echo "Stitch finish!"
\ No newline at end of file