feat: audio room

feat: show room type
feat: update doc
fix: socket default host port err
fix: log err
https://blog.iamtsm.cn
2023-08-16 20:25:51 +08:00
parent 58921d2212
commit 4d34d51f40
24 changed files with 997 additions and 330 deletions

README.md

@@ -6,7 +6,6 @@
[![](https://img.shields.io/badge/deployment-private-yellow)](https://github.com/iamtsm/tl-rtc-file/)
[![](https://img.shields.io/badge/platform-unlimited-coral)](https://github.com/iamtsm/tl-rtc-file/)
<p align="center">
<a href="https://im.iamtsm.cn/file" target="_blank">Demo</a>
<a href="https://hub.docker.com/u/iamtsm" target="_blank">DockerHub</a>
@@ -16,171 +15,125 @@
## Table of Contents
- [Background](#background)
- [Advantages](#advantages)
- [Pre-deployment notes](#pre-deployment-notes)
- [Self-deployment](#self-deployment)
- [Installing the environment](#installing-the-environment)
- [Starting the services](#starting-the-services)
- [Docker deployment](#docker-deployment)
- [One-click Docker script](#one-click-docker-script)
- [docker-compose startup](#docker-compose-startup)
- [Self-built image startup](#self-built-image-startup)
- [Other deployment methods](#other-deployment-methods)
- [Configuring the database (optional)](#configuring-the-database-optional)
- [Admin panel (optional)](#admin-panel-optional)
- [WeCom notifications (optional)](#wecom-notifications-optional)
- [OSS cloud storage (optional)](#oss-cloud-storage-optional)
- [Chat-GPT (optional)](#chat-gpt-optional)
- [Configuring turnserver (optional on LAN, required on WAN)](#configuring-turnserver-optional-on-lan-required-on-wan)
- [Overview diagram](#overview-diagram)
- [License](#license)
- [Disclaimer](#disclaimer)
## Background
Organized from my 2020 graduation project: transferring files in the browser with WebRTC, with support for very large files.
## Advantages
Chunked transfer, cross-terminal and platform-independent, easy to use, unthrottled on intranets (over 70 MB/s observed on a LAN), private deployment, multi-file drag-and-drop sending, and in-browser file preview. Many extra features are included: local screen recording, remote screen sharing (low latency), remote audio/video calls (low latency), live streaming (low latency), password-protected rooms, OSS cloud storage, relay server settings, WebRTC detection, WebRTC statistics, text transfer (group and private chat), public chat, remote whiteboard, AI chat box, a rich admin panel, real-time execution logs, bot alert notifications, and more.
## Pre-deployment notes
Whether you deploy manually, with Docker, or with the provided scripts, edit the relevant settings in `tlrtcfile.env` first, then run the steps below. Any later configuration change requires a service restart.
You can also keep the default configuration, but it only works for testing on localhost; other machines cannot reach or use the service. To deploy on a server for LAN or public-network users, you must configure `tlrtcfile.env` accordingly.
## Self-deployment
#### Installing the environment
Install Node.js 14.21.x or later together with npm, then run the following in the project directory:
```
cd svr/
npm install
```
For the first run, execute the following once:
```
npm run build:pro
```
If you want to develop or modify the frontend pages, run this instead; otherwise skip it:
```
npm run build:dev
```
#### Starting the services
Start the two services below; pick one of the two modes. The only difference is that HTTPS is required for the audio/video call, live streaming, and screen sharing features; everything else works either way.
After starting in HTTP mode, open `http://your_machine_ip:9092`.
- Start the API service and the socket service:
```
npm run http-api
npm run http-socket
```
Or start in HTTPS mode and open `https://your_machine_ip:9092`.
- Start the API service and the socket service:
```
npm run https-api
npm run https-socket
```
## Docker deployment
Both `official images` and `self-built images` are supported. With the official images there are two startup options: the `docker script` and `docker-compose`.
Compared with `self-deployment`, there are two differences in operation/configuration:
- [x] the Docker environment enables the database and the coturn service by default
- [x] the Docker environment mounts the coturn configuration and the base configuration (tlrtcfile.env)
Because coturn and MySQL are built in, their configuration files (their locations are listed in docker-compose.yml) must also be edited before startup.
#### One-click Docker script
After editing `tlrtcfile.env` as needed (the defaults also work), enter the `bin/` directory and run the `auto-pull-and-start-docker.sh` script:
```
chmod +x ./auto-pull-and-start-docker.sh
./auto-pull-and-start-docker.sh
```
#### docker-compose startup
After editing `tlrtcfile.env` as needed (the defaults also work), run the command matching your `Docker Compose` version from the project root:
- For `Docker Compose V1`:
```
docker-compose --profile=http up -d
```
- For `Docker Compose V2`:
```
docker compose --profile=http up -d
```
#### Self-built image startup
After editing `tlrtcfile.env` as needed (the defaults also work), enter the `docker/` directory and run the command matching your `Docker Compose` version:
- For `Docker Compose V1`:
```
docker-compose -f docker-compose-build-code.yml up -d
```
- For `Docker Compose V2`:
```
docker compose -f docker-compose-build-code.yml up -d
```
@@ -191,32 +144,36 @@ docker compose -f docker-compose-build-code.yml up -d
After downloading the project, enter the `bin/` directory, pick the script for your system, and run it directly; it automatically checks the environment, installs dependencies, and starts the services.
**Note: you can edit the tlrtcfile.env configuration before running. If you start with the defaults and change the configuration later, both services must be restarted for the change to take effect**: run the `stop script` first, then run the `auto script` again.
#### Ubuntu auto script (e.g. ubuntu16)
- If the scripts are not executable, run the following first:
```
chmod +x ./ubuntu16/*.sh
```
- To use `http`, run this script:
```
cd ubuntu16/
./auto-check-install-http.sh
```
- Or, to use `https`, run this script:
```
./auto-check-install-https.sh
```
- Stop-services script:
```
./auto-stop.sh
```
#### Windows auto script
- To use `http`, run this script:
```
windows/auto-check-install-http.bat
```
- Or, to use `https`, run this script:
```
windows/auto-check-install-https.bat
```
@@ -226,6 +183,68 @@ windows/auto-check-install-https.bat
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/898TLE?referralCode=iamtsm)
## Other configuration
#### Configuring the database (optional)
Install MySQL yourself, create a database named `webchat`, then edit the database settings in `tlrtcfile.env`.
#### WeCom notifications (optional)
To receive access notifications and error alerts, create a bot in WeCom (WeChat Work); each bot has a key. Edit the WeCom notification settings in `tlrtcfile.env`.
#### OSS cloud storage (optional)
Seafile storage is currently supported; Alibaba Cloud, Tencent Cloud, Qiniu Cloud, and self-hosted storage are planned. Edit the OSS settings in `tlrtcfile.env`.
#### Chat-GPT (optional)
The OpenAI API is integrated with a built-in chat box. Edit the OpenAI settings in `tlrtcfile.env`.
#### Admin panel (optional)
Prerequisite: the database must be enabled.
Edit the admin-panel settings in `tlrtcfile.env`. After startup, enter the configured room number and password to open the admin panel.
#### Configuring turnserver (optional on LAN, required on WAN)
There are two ways to issue TURN credentials: fixed credentials (recommended) and time-limited credentials. **Pick one.** The example below uses Ubuntu.
Install coturn:
```
sudo apt-get install coturn
```
Time-limited credential mode configuration file: `docker/coturn/turnserver-with-secret-user.conf`
- Edit these fields in the configuration file:
```
listening-device, listening-ip, external-ip, static-auth-secret, realm
```
- Start turnserver:
```
turnserver -c /full/path/to/conf/turn/turnserver-with-secret-user.conf
```
Fixed credential mode configuration file: `docker/coturn/turnserver-with-fixed-user.conf`
- Edit these fields in the configuration file:
```
listening-device, listening-ip, external-ip, user, realm
```
- Generate a user:
```
turnadmin -a -u <username> -p <password> -r <realm from the configuration file>
```
- Start turnserver:
```
turnserver -c /full/path/to/docker/coturn/turnserver-with-fixed-user.conf
```
After coturn is deployed, fill in the WebRTC-related settings in the corresponding `tlrtcfile.env` configuration.
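For reference, these are the WebRTC relay keys in `tlrtcfile.env` (values shown are the file's shipped defaults; the comments translate the file's Chinese comments):

```
## webrtc stun relay server address
tl_rtc_file_webrtc_stun_host=
## webrtc turn relay server address
tl_rtc_file_webrtc_turn_host=
## webrtc turn relay username
tl_rtc_file_webrtc_turn_username=tlrtcfile
## webrtc turn relay password
tl_rtc_file_webrtc_turn_credential=tlrtcfile
## webrtc turn relay secret
tl_rtc_file_webrtc_turn_secret=tlrtcfile
## webrtc turn relay account expiry (milliseconds)
tl_rtc_file_webrtc_turn_expire=86400000
```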
## Overview diagram
![image](doc/tl-rtc-file-tool.jpg)


@@ -1,4 +1,4 @@
# tl-rtc-file-tool (tl webrtc file tools)
[![](https://img.shields.io/badge/webrtc-p2p-blue)](https://webrtc.org.cn/)
[![](https://img.shields.io/badge/code-simple-green)](https://github.com/iamtsm/tl-rtc-file/)
@@ -7,283 +7,232 @@
[![](https://img.shields.io/badge/platform-unlimited-coral)](https://github.com/iamtsm/tl-rtc-file/)
<p align="center">
<a href="https://im.iamtsm.cn/file" target="_blank">Demo</a>
<a href="https://hub.docker.com/u/iamtsm" target="_blank">DockerHub</a>
<a href="https://github.com/tl-open-source/tl-rtc-file/blob/master/doc/README_EN.md" target="_blank">EN-DOC</a> QQ Group:
<a href="https://jq.qq.com/?_wv=1027&k=TKCwMBjN" target="_blank">624214498</a>
</p>
## Table of Contents
- [Background](#background)
- [Advantages](#advantages)
- [Pre-deployment Considerations](#pre-deployment-considerations)
- [Self-Deployment](#self-deployment)
- [Installing Dependencies](#installing-dependencies)
- [Starting the Service](#starting-the-service)
- [Docker Deployment](#docker-deployment)
- [One-Click Docker Script](#one-click-docker-script)
- [Using docker-compose](#using-docker-compose)
- [Self-Building and Starting the Image](#self-building-and-starting-the-image)
- [Other Deployment Methods](#other-deployment-methods)
- [Configuring the Database (Optional)](#configuring-the-database-optional)
- [Admin Panel (Optional)](#admin-panel-optional)
- [WeChat Notifications (Optional)](#wechat-notifications-optional)
- [OSS Cloud Storage (Optional)](#oss-cloud-storage-optional)
- [Chat-GPT (Optional)](#chat-gpt-optional)
- [Configuring turnserver (Optional for LAN, Required for WAN)](#configuring-turnserver-optional-for-lan-required-for-wan)
- [Overview Diagram](#overview-diagram)
- [License](#license)
- [Disclaimer](#disclaimer)
## Background
This project was developed from a 2020 graduation project. It transfers files in the browser using WebRTC and supports very large files.
## Advantages
Chunked transmission, cross-device and platform-independent, easy to use, no speed limit on the local network (over 70 MB/s observed on a LAN), supports private deployment, multi-file drag-and-drop sending, and web file preview. Many additional features are included, such as local screen recording, remote screen sharing (low latency), remote audio and video calls (low latency), live streaming (low latency), password-protected rooms, OSS cloud storage, relay service settings, WebRTC detection, WebRTC statistics, text transmission (group chat, private chat), public chat, remote whiteboard, AI chatbox, a feature-rich admin panel, real-time execution log display, robot alert notifications, and more.
## Pre-deployment Considerations
Whether you use self-deployment, Docker deployment, or other script deployments, modify the corresponding configurations in `tlrtcfile.env` before performing the steps below. Any later configuration change requires a service restart.
You can also use the default configuration without modifications, but it is only suitable for testing on localhost; other machines cannot reach or use the service. If you deploy on a server for LAN or public-network users, you must configure `tlrtcfile.env` accordingly.
## Self-Deployment
#### Installing Dependencies
Install Node.js 14.21.x or above, and npm. Then, navigate to the project directory and run the following command:
```
cd svr/
npm install
```
For the first run, execute the following command:
```
npm run build:pro
```
If you need to develop or modify the frontend pages, use this command. If not, you can skip this step:
```
npm run build:dev
```
#### Starting the Service
Start the following two services; choose one mode. The only difference is that HTTPS mode is required for audio/video calls, live streaming, and screen sharing; other features are unaffected.
After starting in HTTP mode, access the service at `http://your_machine_ip:9092`.
- Start the API and socket services:
```
npm run http-api
npm run http-socket
```
Or, start in HTTPS mode and access the service at `https://your_machine_ip:9092`.
- Start the API and socket services:
```
npm run https-api
npm run https-socket
```
The websocket endpoint the page connects to is configured in `tlrtcfile.env`:
```
## Websocket server port
tl_rtc_file_socket_port=8444
## Websocket server address
## "domain or ip:port or domain:port"
tl_rtc_file_socket_host=127.0.0.1
```
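Because `tl_rtc_file_socket_host` may be either a bare domain/IP or already carry a port ("domain or ip:port or domain:port"), a client has to normalize the value before connecting. A hypothetical sketch of that normalization (not the project's actual code; the function name is illustrative):

```javascript
// Build a websocket URL from the configured host, appending the
// default port only when the host does not already include one.
function socketUrl(host, port, secure) {
  const hasPort = /:\d+$/.test(host);
  const scheme = secure ? 'wss' : 'ws';
  return scheme + '://' + (hasPort ? host : host + ':' + port);
}

console.log(socketUrl('127.0.0.1', '8444', false));       // ws://127.0.0.1:8444
console.log(socketUrl('example.com:9000', '8444', true)); // wss://example.com:9000
```

This is one plausible reading of the commit's "fix: socket default host port err": only fall back to the default port when the host omits one.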
## Docker Deployment
Currently, both `official images` and `self-built images` are supported. For official images, there are two ways to operate: `docker script startup` and `docker-compose startup`.
## Configure Database (Non-Essential Steps)
Modify the database-related configurations in `tlrtcfile.env`:
```
## Enable database
tl_rtc_file_db_open=false
## Database address
tl_rtc_file_db_mysql_host=mysql
## Database port
tl_rtc_file_db_mysql_port=3306
## Database name
tl_rtc_file_db_mysql_dbName=webchat
## Database username
tl_rtc_file_db_mysql_user=tlrtcfile
## Database password
tl_rtc_file_db_mysql_password=tlrtcfile
```
## Admin Panel (Non-Essential Steps)
Prerequisite: Database configuration must be enabled.
Modify the admin panel-related configurations in `tlrtcfile.env`. After starting, enter the configured room number and password to access the admin panel:
```
## Admin panel room number
tl_rtc_file_manage_room=tlrtcfile
## Admin panel password
tl_rtc_file_manage_password=tlrtcfile
```
## WeChat Work Notification (Non-Essential Steps)
Modify the WeChat Work notification-related configurations in `tlrtcfile.env`:
```
## WeChat Work notification switch
tl_rtc_file_notify_open=false
## WeChat Work notification robot KEY, normal notifications, comma-separated if multiple keys
tl_rtc_file_notify_qiwei_normal=
## WeChat Work notification robot KEY, error notifications, comma-separated if multiple keys
tl_rtc_file_notify_qiwei_error=
```
## OSS Cloud Storage (Non-Essential Steps)
Modify the OSS storage-related configurations in `tlrtcfile.env`:
```
## oss-seafile storage repository ID
tl_rtc_file_oss_seafile_repoid=
## oss-seafile address
tl_rtc_file_oss_seafile_host=
## oss-seafile username
tl_rtc_file_oss_seafile_username=
## oss-seafile password
tl_rtc_file_oss_seafile_password=
## oss-alyun storage accessKey
tl_rtc_file_oss_alyun_AccessKey=
## oss-alyun storage SecretKey
tl_rtc_file_oss_alyun_Secretkey=
## oss-alyun storage bucket
tl_rtc_file_oss_alyun_bucket=
## oss-txyun storage accessKey
tl_rtc_file_oss_txyun_AccessKey=
## oss-txyun storage SecretKey
tl_rtc_file_oss_txyun_Secretkey=
## oss-txyun storage bucket
tl_rtc_file_oss_txyun_bucket=
## oss-qiniuyun storage accessKey
tl_rtc_file_oss_qiniuyun_AccessKey=
## oss-qiniuyun storage SecretKey
tl_rtc_file_oss_qiniuyun_Secretkey=
## oss-qiniuyun storage bucket
tl_rtc_file_oss_qiniuyun_bucket=
```
## Chat-GPT (Non-Essential Steps)
Modify the OpenAI-related configurations in `tlrtcfile.env`:
```
## openai-key, comma-separated if multiple keys
tl_rtc_file_openai_keys=
```
## Configure Turn Server (LAN Non-Essential Steps, Internet Essential Steps)
There are two ways to generate TURN server credentials: fixed credentials (recommended) and time-limited credentials. **Choose one method.**
Example for Ubuntu:
- Install coturn: `sudo apt-get install coturn`
Time-limited credentials: `docker/coturn/turnserver-with-secret-user.conf`
1. Modify the fields `listening-device`, `listening-ip`, `external-ip`, `static-auth-secret`, and `realm`.
2. Start turnserver:
`turnserver -c /path/to/conf/turn/turnserver-with-secret-user.conf`
Fixed credentials: `docker/coturn/turnserver-with-fixed-user.conf`
1. Modify the fields `listening-device`, `listening-ip`, `external-ip`, `user`, and `realm`.
2. Generate users:
`turnadmin -a -u username -p password -r realm_from_config_file`
3. Start turnserver:
`turnserver -c /path/to/docker/coturn/turnserver-with-fixed-user.conf`
After deploying coturn, set up the WebRTC-related information in the corresponding `tlrtcfile.env` configuration:
```
## webrtc-stun relay service address
tl_rtc_file_webrtc_stun_host=
## webrtc-turn relay service address
tl_rtc_file_webrtc_turn_host=
## webrtc relay service username
tl_rtc_file_webrtc_turn_username=tlrtcfile
## webrtc relay service password
tl_rtc_file_webrtc_turn_credential=tlrtcfile
## webrtc relay service secret
tl_rtc_file_webrtc_turn_secret=tlrtcfile
## webrtc relay service account expiration time (milliseconds)
tl_rtc_file_webrtc_turn_expire=86400000
```
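These values ultimately map onto the browser's standard `RTCPeerConnection` configuration. A hypothetical sketch of that mapping (the helper name and the hostnames are illustrative, not this project's code):

```javascript
// Sketch: build the standard WebRTC iceServers list from the
// stun/turn env values documented above.
function buildIceServers(env) {
  return [
    { urls: 'stun:' + env.tl_rtc_file_webrtc_stun_host },
    {
      urls: 'turn:' + env.tl_rtc_file_webrtc_turn_host,
      username: env.tl_rtc_file_webrtc_turn_username,
      credential: env.tl_rtc_file_webrtc_turn_credential,
    },
  ];
}

// In the browser this would be used as:
//   new RTCPeerConnection({ iceServers: buildIceServers(env) })
const servers = buildIceServers({
  tl_rtc_file_webrtc_stun_host: 'stun.example.com:3478',
  tl_rtc_file_webrtc_turn_host: 'turn.example.com:3478',
  tl_rtc_file_webrtc_turn_username: 'tlrtcfile',
  tl_rtc_file_webrtc_turn_credential: 'tlrtcfile',
});
console.log(servers);
```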
Unlike self-deployment on a server/computer, the Docker environment by default starts the database and coturn services, requiring minimal additional steps for setup.
#### One-Click Docker Script
After modifying the `tlrtcfile.env` configuration as needed (or using the default configuration), navigate to the `bin/` directory and execute the `auto-pull-and-start-docker.sh` script:
```
chmod +x ./auto-pull-and-start-docker.sh
./auto-pull-and-start-docker.sh
```
#### Using docker-compose
After modifying the `tlrtcfile.env` configuration as needed (or using the default configuration), execute the corresponding command for your `Docker Compose` version in the main directory:
- For `Docker Compose V1`:
```
docker-compose --profile=http up -d
```
- For `Docker Compose V2`:
```
docker compose --profile=http up -d
```
#### Self-Building and Starting the Image
After modifying the `tlrtcfile.env` configuration file as needed (or using the default configuration), navigate to the `docker/` directory and execute the corresponding command for your `Docker Compose` version:
- For `Docker Compose V1`:
```
docker-compose -f docker-compose-build-code.yml up -d
```
- For `Docker Compose V2`:
```
docker compose -f docker-compose-build-code.yml up -d
```
## Other Deployment Methods
In addition to the manual installation, Docker official images, and self-built Docker images, there are other methods such as automatic scripts and one-click deployments on hosting platforms.
After downloading the project, navigate to the `bin/` directory and choose the appropriate system script to execute. It will automatically detect the environment, install dependencies, and start the service.
**Note: You can edit the tlrtcfile.env configuration before executing the script. If you start with the default configuration and change it later, both services must be restarted for the changes to take effect. To do this, execute the `Stop Services` script and then run the `Automatic Script` again.**
#### Automatic script for Ubuntu (e.g., Ubuntu 16)
- If the script doesn't have execution permission, run the following command:
```
chmod +x ./ubuntu16/*.sh
```
- If using HTTP, execute this script:
```
cd ubuntu16/
./auto-check-install-http.sh
```
- If using HTTPS, execute this script:
```
./auto-check-install-https.sh
```
- To stop the service:
```
./auto-stop.sh
```
#### Automatic script for Windows
- If using HTTP, execute this script:
```
windows/auto-check-install-http.bat
```
- If using HTTPS, execute this script:
```
windows/auto-check-install-https.bat
```
#### One-Click Deployment on Zeabur Platform
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/898TLE?referralCode=iamtsm)
## Other Configuration Options
#### Configuring the Database (Optional)
You need to install MySQL database manually, create a database named `webchat`, and then modify the database-related configurations in `tlrtcfile.env`.
#### Admin Panel (Optional)
Prerequisite: Database configuration must be enabled.
Modify the admin panel-related configurations in `tlrtcfile.env`. After starting, enter the configured room number and password to access the admin panel.
#### WeChat Notifications (Optional)
If you want access notifications and error alerts, create a bot in WeChat Work; each bot has a key. Modify the WeChat notification configurations in `tlrtcfile.env`.
#### OSS Cloud Storage (Optional)
The project currently supports Seafile storage integration, and future updates will include support for Alibaba Cloud, Tencent Cloud, Qiniu Cloud, and self-hosted server storage methods. Modify the OSS storage configurations in `tlrtcfile.env`.
#### Chat-GPT (Optional)
Integrated with the OpenAI API, this project includes a chat dialog. Modify the OpenAI configurations in `tlrtcfile.env`.
#### Configuring turnserver (Optional for LAN, Required for WAN)
There are two ways to generate TURN server credentials: fixed credentials (recommended) and time-limited credentials. Choose one method. The following example uses Ubuntu.
Install coturn:
```
sudo apt-get install coturn
```
For time-limited credentials, modify the configuration file `docker/coturn/turnserver-with-secret-user.conf`.
- Modify the fields in the configuration file:
```
listening-device, listening-ip, external-ip, static-auth-secret, realm
```
- Start the turnserver:
```
turnserver -c /path/to/conf/turn/turnserver-with-secret-user.conf
```
For fixed credentials, modify the configuration file `docker/coturn/turnserver-with-fixed-user.conf`.
- Modify the fields in the configuration file:
```
listening-device, listening-ip, external-ip, user, realm
```
- Generate a user:
```
turnadmin -a -u username -p password -r realm_in_config_file
```
- Start the turnserver:
```
turnserver -c /path/to/docker/coturn/turnserver-with-fixed-user.conf
```
After setting up coturn, configure the WebRTC-related information in the corresponding `tlrtcfile.env` configuration.
## Overview Diagram
![image](tl-rtc-file-tool.jpg)


@@ -1,5 +1,5 @@
{
"version": "10.4.3",
"socket": {
"port": "请到 tlrtcfile.env 中进行配置",
"host": "请到 tlrtcfile.env 中进行配置"


@@ -1203,4 +1203,19 @@ body {
/** 765px and up */
@media screen and (min-width: 765px) {
}
/** Audio call animation override */
@keyframes layui-scale-spring {
0% {
transform: scale(.3);
}
80% {
opacity: .8;
transform: scale(0.8);
}
100% {
opacity: 0;
transform: scale(0.6);
}
}


@@ -98,6 +98,10 @@
v-show="isLiveShare && owner">
{{lang.living}}: {{liveShareTimes < 10 ? '0' + liveShareTimes :
liveShareTimes}}{{lang.second}} </b>
<b style="transition: color 0.8s;margin-right: 5px;" id="audioShareTimes"
v-show="isAudioShare && owner">
{{lang.audioing}}: {{audioShareTimes < 10 ? '0' + audioShareTimes :
audioShareTimes}}{{lang.second}} </b>
<b style="transition: color 0.8s;margin-right: 5px;" id="screenTimes"></b>
</a>
</div>
@@ -208,7 +212,10 @@
</svg>
<b class="tl-rtc-file-tool-title" :class="clientWidth < 450 ? 'tl-rtc-file-tool-title-mobile' : ''">{{lang.chat_comm}}</b>
<span v-show="receiveChatCommList.length > 0" class="layui-badge tl-rtc-file-msg-dot"
style="right: 2px; top: 2px; width: 7px; height: 7px;position: relative;"></span>
<!-- <svg class="icon tl-rtc-file-msg-dot" aria-hidden="true" style="width: 20px;height: 20px;">
<use xlink:href="#icon-rtc-file-remenhot"></use> -->
</svg>
</div>
</div>
<div class="layui-col-xs3 swiper-slide" @click="startScreenShare"
@@ -256,6 +263,21 @@
<b class="tl-rtc-file-tool-title" :class="clientWidth < 450 ? 'tl-rtc-file-tool-title-mobile' : ''">{{lang.start_live}}</b>
</div>
</div>
<div class="layui-col-xs3 swiper-slide" @click="startAudioShare"
:class="switchData.openAudioShare ? '':'tl-rtc-file-tool-disabled'">
<div class="tl-rtc-file-tool " v-if="isAudioShare" :class="clientWidth < 450 ? 'tl-rtc-file-tool-mobile' : ''">
<svg id="audioShareIcon" class="icon" aria-hidden="true" style="width: 18px;height: 18px;">
<use xlink:href="#icon-rtc-file-guaduandianhua"></use>
</svg>
<b class="tl-rtc-file-tool-title" :class="clientWidth < 450 ? 'tl-rtc-file-tool-title-mobile' : ''">{{lang.start_audio}}</b>
</div>
<div class="tl-rtc-file-tool" v-else :class="clientWidth < 450 ? 'tl-rtc-file-tool-mobile' : ''">
<svg id="audioShareIcon" class="icon" aria-hidden="true" style="width: 18px;height: 18px;">
<use xlink:href="#icon-rtc-file-yuyin"></use>
</svg>
<b class="tl-rtc-file-tool-title" :class="clientWidth < 450 ? 'tl-rtc-file-tool-title-mobile' : ''">{{lang.start_audio}}</b>
</div>
</div>
<div class="layui-col-xs3 swiper-slide" @click="getCodeFile"
:class="switchData.openGetCodeFile ? '':'tl-rtc-file-tool-disabled'">
<div class="tl-rtc-file-tool" :class="clientWidth < 450 ? 'tl-rtc-file-tool-mobile' : ''">
@@ -315,8 +337,8 @@
</svg>
<div class="tl-rtc-file-user-body-left">
<b class="tl-rtc-file-user-body-left-nick">
<b v-show="owner" style="color: #51d788;">【{{roomTypeName}}】-</b>
<b v-show="owner" style="color: #7375e9;">【{{lang.owner}}】-</b>【{{lang.self}}】- {{nickName}}
</b>
<b class="tl-rtc-file-user-body-left-id">
{{socketId}}
@@ -1137,6 +1159,51 @@
</div>
</div>
</div>
<!-- Audio call -->
<div id="audioMask" class="tl-rtc-file-mask-media-list" :style="{left: mediaAudioMaskHeightNum + '%'}">
<div class="layui-col-sm2" style="width: 100%;">
<div class="layui-card">
<div class="layui-card-header" style="text-align: left;">
{{lang.audio_sharing}}
</div>
</div>
<div class="layui-card">
<div :style="{height: logsHeight + 80 +'px',overflowY: 'auto'}">
<div class="layui-card-body">
<div class="tl-rtc-file-mask-media-container" id="mediaAudioRoomList">
<div class="tl-rtc-file-mask-media-video">
<video v-show="false" id="selfMediaShareAudio"
preload="auto" autoplay="autoplay" x-webkit-airplay="true"
playsinline ="true" webkit-playsinline ="true" x5-video-player-type="h5"
x5-video-player-fullscreen="true" x5-video-orientation="portrait"></video>
<svg v-show="isAudioEnabled" class="icon layui-anim layui-anim-scaleSpring layui-anim-loop" aria-hidden="true" style="width: 100%;height: 100%;animation-duration:.7s;max-width:50%;color:cadetblue;">
<use xlink:href="#icon-rtc-file-shengboyuyinxiaoxi"></use>
</svg>
<svg v-show="!isAudioEnabled" class="icon" aria-hidden="true" style="width: 100%;height: 100%;max-width:50%">
<use xlink:href="#icon-rtc-file-shengboyuyinxiaoxi"></use>
</svg>
</div>
<div class="tl-rtc-file-mask-media-video-tool">
<div class="tl-rtc-file-mask-media-video-tool-item" @click="changeShareStream('audio','audio')">
<svg class="icon" aria-hidden="true" style="width: 18px;height: 18px;">
<use v-if="isAudioEnabled" xlink:href="#icon-rtc-file-maikefeng-XDY"></use>
<use v-else xlink:href="#icon-rtc-file-guanbimaikefeng"></use>
</svg>
</div>
<div class="tl-rtc-file-mask-media-video-tool-item" @click="startAudioShare()">
<svg class="icon" aria-hidden="true" style="width: 18px;height: 18px;">
<use xlink:href="#icon-rtc-file-guaduandianhua"></use>
</svg>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<script type="text/javascript" src="/js/language.min.js"></script>
@@ -1144,6 +1211,7 @@
<script type="text/javascript" src="/js/draw.min.js"></script>
<script type="text/javascript" src="/js/videoShare.min.js"></script>
<script type="text/javascript" src="/js/liveShare.min.js"></script>
<script type="text/javascript" src="/js/audioShare.min.js"></script>
<script type="text/javascript" src="/js/screen.min.js"></script>
<script type="text/javascript" src="/js/screenShare.min.js"></script>
<script type="text/javascript" src="/js/index.min.js"></script>

svr/res/js/audioShare.js (new file)

@@ -0,0 +1,192 @@
// --------------------------- //
// -- audioShare.js -- //
// -- version : 1.0.0 -- //
// -- date : 2023-08-15 -- //
// --------------------------- //
var audioShare = new Vue({
el: '#audioShareApp',
data: function () {
return {
stream: null,
times: 0,
interverlId: 0,
track: null,
}
},
methods: {
getMediaPlay: function (constraints) {
let media = null;
let defaultConstraints = {
// 音频轨道
audio:true,
// 视频轨道
video: false
};
if(constraints){
defaultConstraints = constraints
}
if(window.navigator.mediaDevices && window.navigator.mediaDevices.getUserMedia){
media = window.navigator.mediaDevices.getUserMedia(defaultConstraints);
} else if (window.navigator.mozGetUserMedia) {
media = window.navigator.mozGetUserMedia(defaultConstraints);
} else if (window.navigator.getUserMedia) {
media = window.navigator.getUserMedia(defaultConstraints)
} else if (window.navigator.webkitGetUserMedia) {
media = new Promise((resolve, reject) => {
window.navigator.webkitGetUserMedia(defaultConstraints, (res) => {
resolve(res)
}, (err) => {
reject(err)
});
})
}
return media
},
startAudioShare: async function (callback) {
let that = this;
let msgData = {
"Requested device not found" : "没有检测到麦克风"
}
let msg = "获取设备权限失败";
if (this.stream == null) {
try {
this.stream = await this.getMediaPlay();
} catch (error) {
console.log(error)
msg = msgData[error.message] || msg
}
}
if (this.stream == null) {
if (window.layer) {
layer.msg(msg)
}
window.Bus.$emit("changeAudioShareState", false)
callback && callback()
return;
}
const video = document.querySelector("#selfMediaShareAudio");
video.addEventListener('loadedmetadata', function() {
window.Bus.$emit("addSysLogs", "loadedmetadata")
// ios 微信浏览器兼容问题
video.play();
document.addEventListener('WeixinJSBridgeReady', function () {
window.Bus.$emit("addSysLogs", "loadedmetadata WeixinJSBridgeReady")
video.play();
}, false);
});
document.addEventListener('WeixinJSBridgeReady', function () {
window.Bus.$emit("addSysLogs", "WeixinJSBridgeReady")
video.play();
}, false);
video.srcObject = this.stream;
video.play();
//计算时间
this.interverlId = setInterval(() => {
that.times += 1;
window.Bus.$emit("changeAudioShareTimes", that.times)
$("#audioShareIcon").css("color", "#fb0404")
$("#audioShareTimes").css("color", "#fb0404")
setTimeout(() => {
$("#audioShareIcon").css("color", "#000000")
$("#audioShareTimes").css("color", "#000000")
}, 500)
}, 1000);
if (window.layer) {
layer.msg("开始语音连麦,再次点击按钮即可挂断")
}
this.stream.getTracks().forEach(function (track) {
that.track = track;
callback && callback(track, that.stream)
});
},
stopAudioShare: function () {
if (this.stream) {
this.stream.getTracks().forEach(track => track.stop());
}
clearInterval(this.interverlId);
window.Bus.$emit("changeAudioShareTimes", 0);
if (window.layer) {
layer.msg("语音连麦结束,本次连麦时长 " + this.times + "秒")
}
setTimeout(() => {
$("#audioShareIcon").css("color", "#000000")
}, 1000);
this.stream = null;
this.times = 0;
return;
},
changeAudioShareDevice: async function ({kind, rtcConns, callback}) {
//重新获取流
let newStream = null;
try{
newStream = await this.getMediaPlay();
}catch(e){
console.log("changeAudioShareDevice error! ", e)
}
//获取流/权限失败
if(newStream === null){
callback && callback(false)
return;
}
if(kind === 'audio'){
newStream.getAudioTracks()[0].enabled = true;
if(rtcConns){//远程track替换
for(let id in rtcConns){
const senders = rtcConns[id].getSenders();
const sender = senders.find((sender) => (sender.track ? sender.track.kind === 'audio' : false));
if(!sender){
console.error("changeAudioShareDevice find sender error! ");
return
}
sender.replaceTrack(newStream.getAudioTracks()[0]);
}
}
}
const video = document.querySelector("#selfMediaShareAudio");
video.addEventListener('loadedmetadata', function() {
// ios 微信浏览器兼容问题
window.Bus.$emit("addSysLogs", "loadedmetadata")
video.play();
document.addEventListener('WeixinJSBridgeReady', function () {
window.Bus.$emit("addSysLogs", "loadedmetadata WeixinJSBridgeReady")
video.play();
}, false);
});
document.addEventListener('WeixinJSBridgeReady', function () {
window.Bus.$emit("addSysLogs", "WeixinJSBridgeReady")
video.play();
}, false);
//替换本地音频流
this.stream = new MediaStream([newStream.getAudioTracks()[0]]);
video.srcObject = this.stream;
video.play();
},
getAudioShareTrackAndStream: function (callback) {
callback(this.track, this.stream)
},
},
mounted: async function () {
window.Bus.$on("startAudioShare", this.startAudioShare);
window.Bus.$on("stopAudioShare", this.stopAudioShare);
window.Bus.$on("getAudioShareTrackAndStream", this.getAudioShareTrackAndStream);
window.Bus.$on("changeAudioShareDevice", this.changeAudioShareDevice);
}
})
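
The `getMediaPlay` fallback above wraps the callback-style legacy `webkitGetUserMedia` in a Promise so that all four branches return something awaitable. A minimal standalone sketch of that wrapping pattern (`legacyGetUserMedia` here is a hypothetical stand-in for the browser API, so the sketch runs outside a browser):

```javascript
// Generic pattern: wrap a callback-style getUserMedia into a Promise,
// as done for the webkitGetUserMedia branch of getMediaPlay above.
// `legacyGetUserMedia` is a hypothetical stand-in for the browser API.
function promisifyGetUserMedia(legacyGetUserMedia) {
    return function (constraints) {
        return new Promise((resolve, reject) => {
            legacyGetUserMedia(constraints, (stream) => {
                resolve(stream)
            }, (err) => {
                reject(err)
            });
        });
    };
}
```

Modern code can rely on `navigator.mediaDevices.getUserMedia`, which already returns a Promise; the wrapper only matters for the legacy branches.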

View File

@@ -482,6 +482,23 @@ window.tlrtcfile = {
window.addEventListener('keyup', function (event) {
callback("keyup", event)
})
},
getRoomTypeZh: function (type){
if(type === 'file'){
return "文件房间"
}else if(type === 'live'){
return "直播房间"
}else if(type === 'video'){
return "音视频房间"
}else if(type === 'screen'){
return "屏幕共享房间"
}else if(type === 'password'){
return "密码房间"
}else if(type === 'audio'){
return "语音连麦房间"
}else{
return "未知类型房间"
}
},
scrollToBottom: function (dom, duration, timeout) {
let start = dom.scrollTop;

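The if/else chain in `getRoomTypeZh` can equivalently be expressed as a lookup table with a default, which keeps the type-to-label mapping in one place (a sketch, not a required refactor):

```javascript
// Lookup-table form of getRoomTypeZh: same labels, same default.
const ROOM_TYPE_ZH = {
    file: "文件房间",
    live: "直播房间",
    video: "音视频房间",
    screen: "屏幕共享房间",
    password: "密码房间",
    audio: "语音连麦房间",
};

function getRoomTypeZh(type) {
    return ROOM_TYPE_ZH[type] || "未知类型房间";
}
```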
View File

@@ -46,6 +46,8 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
videoShareTimes: 0, //当前音视频时间
isLiveShare: false, //是否在直播中
liveShareTimes: 0, //当前直播时间
isAudioShare: false, //是否在语音连麦中
audioShareTimes: 0, //当前语音连麦时间
isPasswordRoom: false, //是否在密码房中
isAiAnswering: false, //是否ai正在回答中
switchDataGet: false, // 是否已经拿到配置开关数据
@@ -77,6 +79,7 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
mediaVideoMaskHeightNum: -150, // 用于控制音视频展示
mediaScreenMaskHeightNum: -150, // 用于控制屏幕共享展示
mediaLiveMaskHeightNum: -150, // 用于控制直播展示
mediaAudioMaskHeightNum: -150, // 用于控制语音连麦展示
logsHeight: 0, // 日志栏目展示高度
sendFileRecoderHeight : 0, // 发送文件展示列表高度
@@ -190,6 +193,9 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
}
}
},
roomTypeName: function(){
return window.tlrtcfile.getRoomTypeZh(this.roomType)
}
},
watch: {
isAiAnswering: function (newV, oldV) {
@@ -290,6 +296,12 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
resolve(stream)
});
});
}else if(type === 'audio'){
stream = await new Promise((resolve, reject) => {
stream = window.Bus.$emit("getAudioShareTrackAndStream", (track, stream) => {
resolve(stream)
});
});
}else if(type === 'live'){
stream = await new Promise((resolve, reject) => {
stream = window.Bus.$emit("getLiveShareTrackAndStream", (track, stream) => {
@@ -1379,6 +1391,11 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
this.addUserLogs(this.lang.in_living)
return
}
if (this.isAudioShare) {
layer.msg(this.lang.in_audioing)
this.addUserLogs(this.lang.in_audioing)
return
}
if (this.isVideoShare) {
window.Bus.$emit("stopVideoShare")
this.isVideoShare = !this.isVideoShare;
@@ -1438,6 +1455,11 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
this.addUserLogs(this.lang.in_living)
return
}
if (this.isAudioShare) {
layer.msg(this.lang.in_audioing)
this.addUserLogs(this.lang.in_audioing)
return
}
if (this.isScreenShare) {
window.Bus.$emit("stopScreenShare")
this.isScreenShare = !this.isScreenShare;
@@ -1497,6 +1519,11 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
this.addUserLogs(this.lang.in_sharing_screen)
return
}
if (this.isAudioShare) {
layer.msg(this.lang.in_audioing)
this.addUserLogs(this.lang.in_audioing)
return
}
if (this.isLiveShare) {
window.Bus.$emit("stopLiveShare")
this.isLiveShare = !this.isLiveShare;
@@ -1569,6 +1596,70 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
});
}
},
// 创建/加入语音连麦房间
startAudioShare: function () {
if (!this.switchData.openAudioShare) {
layer.msg(this.lang.feature_close)
this.addUserLogs(this.lang.feature_close)
return
}
if (this.isVideoShare) {
layer.msg(this.lang.in_videoing)
this.addUserLogs(this.lang.in_videoing)
return
}
if (this.isLiveShare) {
layer.msg(this.lang.in_living)
this.addUserLogs(this.lang.in_living)
return
}
if (this.isScreenShare) {
layer.msg(this.lang.in_sharing_screen)
this.addUserLogs(this.lang.in_sharing_screen)
return
}
if (this.isAudioShare) {
window.Bus.$emit("stopAudioShare")
this.isAudioShare = !this.isAudioShare;
this.addUserLogs(this.lang.end_audio_sharing);
return
}
if (this.isJoined) {
layer.msg(this.lang.please_exit_then_join_screen)
this.addUserLogs(this.lang.please_exit_then_join_screen)
return
}
let that = this;
if(that.isShareJoin){ //分享进入
that.createMediaRoom("audio");
that.socket.emit('message', {
emitType: "startAudioShare",
room: that.roomId,
to : that.socketId
});
that.clickMediaAudio();
that.isAudioShare = !that.isAudioShare;
that.addUserLogs(that.lang.start_audio_sharing);
}else{
layer.prompt({
formType: 0,
title: this.lang.please_enter_audio_sharing_room_num,
}, function (value, index, elem) {
that.roomId = value;
that.createMediaRoom("audio");
layer.close(index)
that.socket.emit('message', {
emitType: "startAudioShare",
room: that.roomId,
to : that.socketId
});
that.clickMediaAudio();
that.isAudioShare = !that.isAudioShare;
that.addUserLogs(that.lang.start_audio_sharing);
});
}
},
// 打开画笔
openRemoteDraw : function(){
let that = this;
@@ -2128,13 +2219,15 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
layer.close(index)
that.openRoomInput = true;
that.isShareJoin = true;
if(typeArgs && ['screen','live','video','audio'].includes(typeArgs)){
if(typeArgs === 'screen'){
that.startScreenShare();
}else if(typeArgs === 'live'){
that.startLiveShare();
}else if(typeArgs === 'video'){
that.startVideoShare();
}else if(typeArgs === 'audio'){
that.startAudioShare();
}
}else{
that.createFileRoom();
@@ -2299,6 +2392,23 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
document.getElementById("iamtsm").style.marginLeft = "0";
}
},
clickMediaAudio : function(){
this.showMedia = !this.showMedia;
this.touchResize();
if (this.showMedia) {
this.addUserLogs(this.lang.expand_audio);
if(this.clientWidth < 500){
document.getElementById("iamtsm").style.marginLeft = "0";
}else{
document.getElementById("iamtsm").style.marginLeft = "50%";
}
this.mediaAudioMaskHeightNum = 0;
} else {
this.addUserLogs(this.lang.collapse_audio);
this.mediaAudioMaskHeightNum = -150;
document.getElementById("iamtsm").style.marginLeft = "0";
}
},
typeInArr: function(arr, type, name = ""){
if(type === ''){
let fileTail = name.split(".").pop()
@@ -2563,11 +2673,36 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
//远程媒体流处理
mediaTrackHandler: function(event, id){
let that = this;
let video = null;
if(event.track.kind === 'audio'){
// audio-track事件除了语音连麦房间之外其他都可以跳过因为音视频/直播/屏幕共享他们的音频流都并入了video-track了
if(that.roomType !== 'audio'){
return
}
//连麦房间,只有音频数据
$(`#mediaAudioRoomList`).append(`
<div class="tl-rtc-file-mask-media-video">
<video id="otherMediaAudioShare${id}" preload="auto" autoplay="autoplay" x-webkit-airplay="true" playsinline ="true" webkit-playsinline ="true" x5-video-player-type="h5" x5-video-player-fullscreen="true" x5-video-orientation="portrait" ></video>
<svg id="otherMediaAudioShareAudioOpenAnimSvg${id}" class="icon layui-anim layui-anim-scaleSpring layui-anim-loop" aria-hidden="true" style="width: auto; height: auto; position: absolute;animation-duration:.7s;max-width:50%;color:cadetblue;">
<use xlink:href="#icon-rtc-file-shengboyuyinxiaoxi"></use>
</svg>
<svg id="otherMediaAudioShareAudioCloseAnimSvg${id}" class="icon" aria-hidden="true" style="width: auto; height: auto; position: absolute;display:none;max-width:50%">
<use xlink:href="#icon-rtc-file-shengboyuyinxiaoxi"></use>
</svg>
<svg id="otherMediaAudioShareAudioOpenSvg${id}" class="tl-rtc-file-mask-media-video-other-audio" aria-hidden="true">
<use xlink:href="#icon-rtc-file-maikefeng-XDY"></use>
</svg>
<svg id="otherMediaAudioShareAudioCloseSvg${id}" class="tl-rtc-file-mask-media-video-other-audio" aria-hidden="true" style="display:none;">
<use xlink:href="#icon-rtc-file-guanbimaikefeng"></use>
</svg>
</div>
`);
video = document.querySelector(`#otherMediaAudioShare${id}`);
}
if(this.roomType === 'video'){
$(`#mediaVideoRoomList`).append(`
<div class="tl-rtc-file-mask-media-video">
@@ -3088,14 +3223,14 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
return item.id !== id;
})
if(['live','video','screen','audio'].includes(this.roomType)){
//主播异常关闭直播,观众页面强制刷新
if(this.roomType === 'live' && removeIsOwner){
window.location.reload()
}
//多人音视频/多人屏幕共享有人异常退出移除对应的video标签
if(this.roomType === 'video' || this.roomType === 'screen'){
$(`#otherMediaVideoShare${id}`).parent().remove();
}
if(this.roomType === 'audio'){
$(`#otherMediaAudioShare${id}`).parent().remove();
}
}
@@ -3189,6 +3324,9 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
if(data.type === 'video'){
window.Bus.$emit("startVideoShare", that.videoConstraints);
}
if(data.type === 'audio'){
window.Bus.$emit("startAudioShare");
}
if(data.type === 'live'){
window.Bus.$emit("startLiveShare", {
liveShareMode : that.liveShareMode
@@ -3213,7 +3351,7 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
await new Promise(resolve => {
if(data.type === 'screen'){
window.Bus.$emit("startScreenShare", (track, stream) => {
//其他人将数据流添加到通道中, 此时需要addTrack因为后面会有offer收集然后进行answer ...等后续操作
otherRtcConnect.addTrack(track, stream);
resolve()
@@ -3224,6 +3362,12 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
otherRtcConnect.addTrack(track, stream);
resolve()
});
}else if(data.type === 'audio'){
window.Bus.$emit("startAudioShare", (track, stream) => {
//其他人将数据流添加到通道中, 此时需要addTrack因为后面会有offer收集然后进行answer ...等后续操作
otherRtcConnect.addTrack(track, stream);
resolve()
});
}else{
resolve();
}
@@ -3272,6 +3416,14 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
rtcConnect.addTrack(track, stream);
});
}
if (data.type === 'audio') {
//比如多人语音连麦后面加入的连接已经在created事件中addTrack了
//所以这个地方主要是通知当前连接之前的那一些连接进行addTrack以便于当前连接能收到
window.Bus.$emit("getAudioShareTrackAndStream", (track, stream) => {
rtcConnect.addTrack(track, stream);
});
}
if (data.type === 'live') {
//比如直播,后面加入的都是观众,所以每个观众加入的时候,都会通知一下所有人可以添加媒体流到通道了(这里就是只有房主有媒体流数据)
@@ -3279,6 +3431,7 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
rtcConnect.addTrack(track, stream);
});
}
that.addPopup({
title : that.lang.join_room,
msg : data.nickName + that.lang.join_room
@@ -3585,6 +3738,14 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
document.querySelector(`#otherMediaLiveShareAudioOpenSvg${data.from}`).style.display = data.isAudioEnabled ? 'block' : 'none';
document.querySelector(`#otherMediaLiveShareAudioCloseSvg${data.from}`).style.display = data.isAudioEnabled ? 'none' : 'block';
}
}else if(data.type === 'audio'){
if(data.kind === 'audio'){
document.querySelector(`#otherMediaAudioShareAudioOpenSvg${data.from}`).style.display = data.isAudioEnabled ? 'block' : 'none';
document.querySelector(`#otherMediaAudioShareAudioCloseSvg${data.from}`).style.display = data.isAudioEnabled ? 'none' : 'block';
document.querySelector(`#otherMediaAudioShareAudioOpenAnimSvg${data.from}`).style.display = data.isAudioEnabled ? 'block' : 'none';
document.querySelector(`#otherMediaAudioShareAudioCloseAnimSvg${data.from}`).style.display = data.isAudioEnabled ? 'none' : 'block';
}
}
});
@@ -3610,6 +3771,15 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
}
});
//退出语音连麦
this.socket.on('stopAudioShare', function (data) {
if (data.id === that.socketId) {
that.clickMediaAudio();
} else {
$(`#otherMediaAudioShare${data.id}`).parent().remove();
}
});
//ai对话
this.socket.on('openaiAnswer', function (data) {
that.isAiAnswering = false
@@ -3863,6 +4033,24 @@ axios.get("/api/comm/initData?turn="+useTurn, {}).then((initData) => {
}
this.liveShareTimes = res
})
window.Bus.$on("changeAudioShareTimes", (res) => {
if (res === 0) {
this.socket.emit('message', {
emitType: "stopAudioShare",
id: this.socketId,
room: this.roomId,
cost: this.audioShareTimes,
owner : this.owner,
});
}
this.audioShareTimes = res
})
window.Bus.$on("changeAudioShareState", (res) => {
if(!res){//状态失败,收起面板
this.clickMediaAudio();
}
this.isAudioShare = res
})
window.Bus.$on("sendChatingComm", (res) => {
this.sendChatingComm()
})
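
Several branches above (e.g. `getAudioShareTrackAndStream`) turn a callback-style event-bus request into an awaitable value by wrapping `window.Bus.$emit` in a Promise. A self-contained sketch of that request/response pattern, with a tiny stand-in bus (`Bus` here is illustrative, not the project's Vue instance):

```javascript
// Minimal callback-style event bus, standing in for the Vue bus.
const Bus = {
    handlers: {},
    $on(evt, fn) {
        (this.handlers[evt] = this.handlers[evt] || []).push(fn);
    },
    $emit(evt, ...args) {
        (this.handlers[evt] || []).forEach((fn) => fn(...args));
    },
};

// Responder side (like audioShare.js): replies through the callback argument.
Bus.$on("getAudioShareTrackAndStream", (callback) => {
    callback("fake-track", "fake-stream");
});

// Requester side (like index.js): wrap the emit in a Promise to await the reply.
async function requestAudioStream() {
    return await new Promise((resolve) => {
        Bus.$emit("getAudioShareTrackAndStream", (track, stream) => {
            resolve(stream);
        });
    });
}
```

This only settles when some handler is registered and eventually invokes the callback; otherwise the Promise hangs, which is one reason the share flows guard state flags (`isAudioShare` etc.) before awaiting.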

View File

@@ -8,6 +8,14 @@
const local_lang = {
"en": {
"audioing" : "Voice chatting",
"audio_sharing" : "Audio sharing",
"expand_audio" : "Expand audio panel",
"collapse_audio" : "Collapse audio panel",
"start_audio_sharing" : "Start audio sharing",
"end_audio_sharing" : "End audio sharing",
"in_audioing" : "In a voice chat",
"start_audio" : "Start voice chat",
"audience" : "Audience",
"webrtc_check_init" : "Webrtc check init",
"webrtc_check_init_done" : "Webrtc check init done",
@@ -184,6 +192,7 @@ const local_lang = {
"please_enter_password": "Please enter the password room password",
"please_enter_right_code": "Please enter the correct pickup code",
"please_enter_room_num": "Please fill in the room number first",
"please_enter_audio_sharing_room_num" : "Please enter the audio sharing room number",
"please_enter_screen_sharing_room_num": "Please enter the screen sharing room number",
"please_enter_video_call_room_num": "Please enter the audio and video call room number",
"please_exit_then_join_live": "Please exit the room first and then enter the live room",
@@ -329,6 +338,14 @@ const local_lang = {
"webrtc_ice_state" : "webrtc state"
},
"zh": {
"audioing" : "语音中",
"audio_sharing" : "语音连麦",
"expand_audio" : "展开音频面板",
"collapse_audio" : "收起音频面板",
"start_audio_sharing" : "开始语音连麦",
"end_audio_sharing" : "结束语音连麦",
"in_audioing" : "正在语音中",
"start_audio" : "语音连麦",
"audience" : "观众",
"webrtc_check_init" : "Webrtc检测初始化",
"webrtc_check_init_done" : "Webrtc检测初始化完成",
@@ -515,6 +532,7 @@ const local_lang = {
"please_enter_password": "请输入密码房间密码",
"please_enter_right_code": "请输入正确的取件码",
"please_enter_room_num": "请先填写房间号",
"please_enter_audio_sharing_room_num" : "请输入语音连麦房间号",
"please_enter_screen_sharing_room_num": "请输入屏幕共享房间号",
"please_enter_video_call_room_num": "请输入音视频通话房间号",
"please_exit_then_join_live": "请先退出房间,然后进入直播间",

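The `local_lang` table above maps a language code (`en`/`zh`) to key/label pairs. A hedged sketch of how such a table is typically resolved, with a fallback language and a fallback to the raw key (the `t` helper is illustrative, not the project's actual API):

```javascript
// Illustrative lookup over a local_lang-style table: fall back to zh for
// unknown languages, and to the raw key when a translation is missing.
const local_lang = {
    en: { in_audioing: "In a voice chat" },
    zh: { in_audioing: "正在语音中" },
};

function t(langMode, key) {
    const table = local_lang[langMode] || local_lang.zh;
    return table[key] !== undefined ? table[key] : key;
}
```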
View File

@@ -179,7 +179,7 @@ var videoShare = new Vue({
try{
newStream = await this.getMediaPlay(constraints);
}catch(e){
console.log("changeVideoShareMediaTrackAndStream error! ", e)
}
//获取流/权限失败

View File

@@ -80,6 +80,11 @@ async function getSettingPageHtml(data) {
<input type="checkbox" name="openLiveShare" title="开启直播功能" lay-skin="primary">
</div>
</div>
<div class="layui-form-item">
<div class="layui-input-block">
<input type="checkbox" name="openAudioShare" title="开启语音连麦" lay-skin="primary">
</div>
</div>
<div class="layui-form-item">
<div class="layui-input-block">
<input type="checkbox" name="openRemoteDraw" title="开启远程画笔" lay-skin="primary">

View File

@@ -161,6 +161,35 @@ function sendStopVideoShareNotify(data) {
}
/**
* 发送开始语音连麦通知
* @param {*} data
*/
function sendStartAudioShareNotify(data) {
let notifyMsg = `## <font color='info'>文件传输通知</font> - <font color="warning">${data.title}</font>` +
` - <font color="comment">${data.room}</font>\n` +
`当前时间: ${utils.formateDateTime(new Date(), "yyyy-MM-dd hh:mm:ss")}\n` +
`访问IP: ${data.ip}\n` +
`访问设备: ${data.userAgent}\n`;
notify.requestMsg(notifyMsg)
}
/**
* 发送停止语音连麦通知
* @param {*} data
*/
function sendStopAudioShareNotify(data) {
let notifyMsg = `## <font color='info'>文件传输通知</font> - <font color="warning">${data.title}</font>` +
` - <font color="comment">${data.room}</font>\n` +
`连麦时长: ${data.cost}\n` +
`当前时间: ${utils.formateDateTime(new Date(), "yyyy-MM-dd hh:mm:ss")}\n` +
`访问IP: ${data.ip}\n` +
`访问设备: ${data.userAgent}\n`;
notify.requestMsg(notifyMsg)
}
/**
* 发送开始直播通知
* @param {*} data
@@ -461,6 +490,8 @@ module.exports = {
sendStartVideoShareNotify,
sendStartLiveShareNotify,
sendStopLiveShareNotify,
sendStartAudioShareNotify,
sendStopAudioShareNotify,
sendChatingRoomNotify,
sendCodeFileNotify,
sendFileDoneNotify,

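`sendStopAudioShareNotify` interpolates the room, call duration, IP, and user agent into a WeCom-style markdown string. The shape of the rendered message can be checked in isolation; here `formateDateTime` is replaced by a fixed timestamp argument for illustration:

```javascript
// Builds the same markdown body as sendStopAudioShareNotify, minus the
// date helper (a fixed `now` string stands in for formateDateTime).
function buildStopAudioShareMsg(data, now) {
    return `## <font color='info'>文件传输通知</font> - <font color="warning">${data.title}</font>` +
        ` - <font color="comment">${data.room}</font>\n` +
        `连麦时长: ${data.cost}\n` +
        `当前时间: ${now}\n` +
        `访问IP: ${data.ip}\n` +
        `访问设备: ${data.userAgent}\n`;
}
```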
View File

@@ -28,7 +28,7 @@ function initData(req, res) {
ip = "127.0.0.1"
}
let wsHost = conf.socket.host || ip + conf.socket.port;
let data = {
version : conf.version,

View File

@@ -15,6 +15,7 @@ const defaultSwitchData = {
openGetCodeFile: true,
openVideoShare: true,
openLiveShare: true,
openAudioShare: true,
openRemoteDraw: true,
openPasswordRoom: true,
openScreenShare: true,

View File

@@ -69,6 +69,10 @@ let rtcServerMessageEvent = {
startVideoShare : "startVideoShare",
//结束音视频
stopVideoShare : "stopVideoShare",
//开始语音连麦
startAudioShare : "startAudioShare",
//结束语音连麦
stopAudioShare : "stopAudioShare",
//开始直播
startLiveShare : "startLiveShare",
//结束直播

View File

@@ -37,7 +37,7 @@ async function userCreateAndJoin(io, socket, tables, dbClient, data){
langMode = 'zh'
}
if(['file', 'screen', 'video', 'password', 'live', 'audio'].indexOf(type) === -1){
type = 'file'
}
@@ -114,8 +114,8 @@ async function userCreateAndJoin(io, socket, tables, dbClient, data){
return
}
//流媒体房间只允许3个人同时在线
if((type === 'screen' || type === 'video' || type === 'audio') && numClients >= 3){
socket.emit(rtcClientEvent.tips, {
room : data.room,
to : socket.id,
@@ -235,6 +235,8 @@ function getRoomTypeZh(type){
return "屏幕共享"
}else if(type === 'password'){
return "密码"
}else if(type === 'audio'){
return "语音连麦"
}else{
return "未知类型"
}

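The capacity rule above caps streaming rooms (`screen`, `video`, `audio`) at three concurrent members while leaving other room types unlimited; the check in `userCreateAndJoin` can be sketched as a small predicate:

```javascript
// Predicate form of the capacity check in userCreateAndJoin:
// streaming rooms allow at most 3 concurrent clients, other types are uncapped.
const STREAM_ROOM_TYPES = ["screen", "video", "audio"];

function canJoinRoom(type, numClients) {
    if (STREAM_ROOM_TYPES.includes(type) && numClients >= 3) {
        return false;
    }
    return true;
}
```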
View File

@@ -15,6 +15,8 @@ let rtcEventOpName = {
"stopScreenShare": "停止屏幕共享",
"startVideoShare": "开始音视频通话",
"stopVideoShare": "停止音视频通话",
"startAudioShare": "开始语音连麦",
"stopAudioShare": "停止语音连麦",
"startLiveShare": "开启直播",
"stopLiveShare": "关闭直播",
"startRemoteDraw": "开启远程画笔",
@@ -137,6 +139,26 @@ async function message(io, socket, tables, dbClient, data){
})
}
if (emitType === rtcServerMessageEvent.startAudioShare) {
bussinessNotify.sendStartAudioShareNotify({
title: rtcEventOpName.startAudioShare,
userAgent: userAgent,
ip: ip,
room: data.room
})
}
if (emitType === rtcServerMessageEvent.stopAudioShare) {
bussinessNotify.sendStopAudioShareNotify({
title: rtcEventOpName.stopAudioShare,
cost: data.cost,
userAgent: userAgent,
ip: ip,
room: data.room
})
}
if (emitType === rtcServerMessageEvent.startLiveShare) {
bussinessNotify.sendStartLiveShareNotify({
title: rtcEventOpName.startLiveShare,

View File

@@ -54,6 +54,30 @@
<div class="content unicode" style="display: block;">
<ul class="icon_lists dib-box">
<li class="dib">
<span class="icon iconfont">&#xe677;</span>
<div class="name">HOT</div>
<div class="code-name">&amp;#xe677;</div>
</li>
<li class="dib">
<span class="icon iconfont">&#xe624;</span>
<div class="name">热门hot</div>
<div class="code-name">&amp;#xe624;</div>
</li>
<li class="dib">
<span class="icon iconfont">&#xe8c4;</span>
<div class="name">214声波、语音消息</div>
<div class="code-name">&amp;#xe8c4;</div>
</li>
<li class="dib">
<span class="icon iconfont">&#xe813;</span>
<div class="name">语音</div>
<div class="code-name">&amp;#xe813;</div>
</li>
<li class="dib">
<span class="icon iconfont">&#xe872;</span>
<div class="name">翻转镜头</div>
@@ -666,9 +690,9 @@
<pre><code class="language-css"
>@font-face {
font-family: 'iconfont';
src: url('iconfont.woff2?t=1692112288946') format('woff2'),
url('iconfont.woff?t=1692112288946') format('woff'),
url('iconfont.ttf?t=1692112288946') format('truetype');
}
</code></pre>
<h3 id="-iconfont-">第二步:定义使用 iconfont 的样式</h3>
@@ -694,6 +718,42 @@
<div class="content font-class">
<ul class="icon_lists dib-box">
<li class="dib">
<span class="icon iconfont icon-rtc-file-hot1"></span>
<div class="name">
HOT
</div>
<div class="code-name">.icon-rtc-file-hot1
</div>
</li>
<li class="dib">
<span class="icon iconfont icon-rtc-file-remenhot"></span>
<div class="name">
热门hot
</div>
<div class="code-name">.icon-rtc-file-remenhot
</div>
</li>
<li class="dib">
<span class="icon iconfont icon-rtc-file-shengboyuyinxiaoxi"></span>
<div class="name">
214声波、语音消息
</div>
<div class="code-name">.icon-rtc-file-shengboyuyinxiaoxi
</div>
</li>
<li class="dib">
<span class="icon iconfont icon-rtc-file-yuyin"></span>
<div class="name">
语音
</div>
<div class="code-name">.icon-rtc-file-yuyin
</div>
</li>
<li class="dib">
<span class="icon iconfont icon-rtc-file-fanzhuanjingtou"></span>
<div class="name">
@@ -1612,6 +1672,38 @@
<div class="content symbol">
<ul class="icon_lists dib-box">
<li class="dib">
<svg class="icon svg-icon" aria-hidden="true">
<use xlink:href="#icon-rtc-file-hot1"></use>
</svg>
<div class="name">HOT</div>
<div class="code-name">#icon-rtc-file-hot1</div>
</li>
<li class="dib">
<svg class="icon svg-icon" aria-hidden="true">
<use xlink:href="#icon-rtc-file-remenhot"></use>
</svg>
<div class="name">热门hot</div>
<div class="code-name">#icon-rtc-file-remenhot</div>
</li>
<li class="dib">
<svg class="icon svg-icon" aria-hidden="true">
<use xlink:href="#icon-rtc-file-shengboyuyinxiaoxi"></use>
</svg>
<div class="name">214声波、语音消息</div>
<div class="code-name">#icon-rtc-file-shengboyuyinxiaoxi</div>
</li>
<li class="dib">
<svg class="icon svg-icon" aria-hidden="true">
<use xlink:href="#icon-rtc-file-yuyin"></use>
</svg>
<div class="name">语音</div>
<div class="code-name">#icon-rtc-file-yuyin</div>
</li>
<li class="dib">
<svg class="icon svg-icon" aria-hidden="true">
<use xlink:href="#icon-rtc-file-fanzhuanjingtou"></use>

View File

@@ -1,8 +1,8 @@
@font-face {
font-family: "iconfont"; /* Project id 4147343 */
src: url('iconfont.woff2?t=1692112288946') format('woff2'),
url('iconfont.woff?t=1692112288946') format('woff'),
url('iconfont.ttf?t=1692112288946') format('truetype');
}
.iconfont {
@@ -13,6 +13,22 @@
-moz-osx-font-smoothing: grayscale;
}
.icon-rtc-file-hot1:before {
content: "\e677";
}
.icon-rtc-file-remenhot:before {
content: "\e624";
}
.icon-rtc-file-shengboyuyinxiaoxi:before {
content: "\e8c4";
}
.icon-rtc-file-yuyin:before {
content: "\e813";
}
.icon-rtc-file-fanzhuanjingtou:before {
content: "\e872";
}

File diff suppressed because one or more lines are too long

View File

@@ -5,6 +5,34 @@
"css_prefix_text": "icon-rtc-file-",
"description": "",
"glyphs": [
{
"icon_id": "867298",
"name": "HOT",
"font_class": "hot1",
"unicode": "e677",
"unicode_decimal": 58999
},
{
"icon_id": "10305614",
"name": "热门hot",
"font_class": "remenhot",
"unicode": "e624",
"unicode_decimal": 58916
},
{
"icon_id": "1727449",
"name": "214声波、语音消息",
"font_class": "shengboyuyinxiaoxi",
"unicode": "e8c4",
"unicode_decimal": 59588
},
{
"icon_id": "17605450",
"name": "语音",
"font_class": "yuyin",
"unicode": "e813",
"unicode_decimal": 59411
},
{
"icon_id": "2076200",
"name": "翻转镜头",