Compare commits


58 Commits

Author SHA1 Message Date
sijie.sun
d0a3a40a0f fix bugs
add timeout for wss try_accept

public server should show stats

use default values for flags

bump version to 2.0.0
2024-09-29 17:49:14 +08:00
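As an illustrative aside (not the actual EasyTier code): the first item, a timeout for the WSS try_accept, amounts to bounding the accept/handshake future with tokio::time::timeout so a stalled client cannot hold the listener task forever. The `try_accept` future and the 10-second limit below are assumptions.

```rust
use std::time::Duration;
use tokio::time::timeout;

/// Minimal sketch: bound a (hypothetical) WSS handshake/accept future with a timeout.
async fn accept_with_timeout<F, T>(try_accept: F) -> anyhow::Result<T>
where
    F: std::future::Future<Output = anyhow::Result<T>>,
{
    // If the handshake does not finish within 10 seconds, give up on this connection
    // instead of letting a silent client stall the accept loop.
    match timeout(Duration::from_secs(10), try_accept).await {
        Ok(res) => res, // handshake finished (Ok or Err)
        Err(_elapsed) => anyhow::bail!("wss try_accept timed out"),
    }
}
```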
sijie.sun
ff5ee8a05e support forwarding foreign network packets between peers 2024-09-29 10:31:29 +08:00
Hs_Yeah
a50bcf3087 Fix IP address display in the status page of GUI
Signed-off-by: Hs_Yeah <bYeahq@gmail.com>
2024-09-27 15:58:02 +08:00
sijie.sun
e0b364d3e2 use ubuntu 24.04 apt source
GitHub Actions upgraded ubuntu-latest to 24.04

https://github.com/actions/runner-images/pull/10687
2024-09-27 11:05:52 +08:00
sijie.sun
2496cf51c3 fix connection loss when traffic is huge 2024-09-26 23:49:01 +08:00
sijie.sun
7b4a01e7fb fix ring buffer stuck when using multi thread runtime 2024-09-26 14:34:33 +08:00
Hs_Yeah
3f9a1d8f2e Get dev_name from the global_ctx of each instance 2024-09-24 16:52:38 +08:00
Hs_Yeah
0b927bcc91 Add TUN device name setting support to easytier-gui 2024-09-24 16:52:38 +08:00
Hs_Yeah
92397bf7b6 Set Category of the TUN device's network profile to 1 in Windows Registry 2024-09-24 14:23:42 +08:00
sijie.sun
d1e2e1db2b fix ospf foreign network info version 2024-09-23 13:42:25 +08:00
sijie.sun
783ba50c9e add cli command for global foreign network info 2024-09-23 00:03:57 +08:00
sijie.sun
aca9a0e35b use ospf route to propagate foreign network info 2024-09-22 22:12:18 +08:00
liyang
fb8d262554 Fix spelling errors 2024-09-22 20:58:37 +08:00
sijie.sun
bd60cfc2a0 add feature flag to ospf route 2024-09-21 20:54:19 +08:00
sijie.sun
06afd221d5 make ping smarter 2024-09-21 18:00:52 +08:00
sijie.sun
0171fb35a4 fix OSS upload 2024-09-21 00:24:58 +08:00
Jiangqiu Shen
99c47813c3 add the options to enable latency first or not
In the old behavior the flag is never set explicitly, so it is generated with its default value on the first read; according to the Default implementation of Flags, latency_first therefore defaults to true.

So the Vue code also initializes latency-first to true.
2024-09-19 20:09:17 +08:00
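A minimal sketch of the behavior described above, assuming a `Flags` struct whose `Default` sets `latency_first` to true (the real struct has more fields; this is not the actual EasyTier code):

```rust
/// Illustrative only: a config that never sets the flag picks up the default on first read.
#[derive(Debug, Clone)]
struct Flags {
    latency_first: bool,
}

impl Default for Flags {
    fn default() -> Self {
        // Per the commit message, the default for latency_first is true.
        Flags { latency_first: true }
    }
}

fn main() {
    // A config without explicit flags falls back to the defaults, which is why the
    // GUI (Vue) side must also initialize its latency-first toggle to true.
    let flags = Flags::default();
    assert!(flags.latency_first);
}
```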
sijie.sun
82f5dfd569 show nodes version correctly 2024-09-18 23:15:08 +08:00
sijie.sun
6d7edcd486 fix connection failure after setting up one of the sockets fails 2024-09-18 23:15:08 +08:00
M2kar
9f273dc887 modify compile command (#333)
* modify compile command

* fix(README.md): compile from git

* Update README_CN.md
2024-09-18 21:57:25 +08:00
Jiangqiu Shen
ac9cfa5040 make cli parse code more ergonomic by removing some copies and unwraps (#347)
1. remove some unnecessary string copies in the CLI parsing code
2. turn some member functions into non-member functions to avoid taking a self reference
3. use if let Some(..) instead of if xxx.is_some() to avoid copying and unwrapping (see the sketch after this entry)
2024-09-18 21:57:12 +08:00
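A hedged before/after sketch of point 3 (illustrative names only, not the actual CLI code):

```rust
// Old pattern: check, then clone and unwrap — an extra copy plus a panic path.
fn format_name_old(name: &Option<String>) -> String {
    if name.is_some() {
        name.clone().unwrap()
    } else {
        "unknown".to_string()
    }
}

// New pattern: `if let Some(..)` borrows the inner value directly — no Option clone, no unwrap.
fn format_name_new(name: &Option<String>) -> String {
    if let Some(n) = name {
        n.clone()
    } else {
        "unknown".to_string()
    }
}
```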
Sijie.Sun
1b03223537 use customized rpc implementation, remove Tarpc & Tonic (#348)
This patch removes the Tarpc & Tonic gRPC dependencies and implements a customized RPC framework, which can be used by the peer RPC and the CLI interface.

The web config server can also use this RPC framework.

Moreover, the public server logic is rewritten to use the OSPF route to implement public-server-based networking, which makes a public server mesh possible.
2024-09-18 21:55:28 +08:00
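A hypothetical sketch of what a transport-agnostic RPC service can look like; the real framework's types are not shown on this page, so the trait below (and the use of the async-trait crate) is purely an assumption:

```rust
use async_trait::async_trait;

/// Illustrative only: a service that deals in serialized bytes can sit behind peer RPC,
/// the CLI portal, or a web config server, independent of the transport underneath.
#[async_trait]
trait RpcService: Send + Sync {
    /// Name used to route requests to this service.
    fn service_name(&self) -> &'static str;

    /// Dispatch a serialized request (e.g. a protobuf message) and return a serialized response.
    async fn handle(&self, method: &str, request: &[u8]) -> anyhow::Result<Vec<u8>>;
}
```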
m1m1sha
0467b0a3dc Merge pull request #342 from EasyTier/ci/issue-template
🐎 ci: Modify Text
2024-09-15 22:39:11 +08:00
m1m1sha
ba75167238 🐎 ci: Modify Text 2024-09-15 22:38:06 +08:00
m1m1sha
51e7daa26f Merge pull request #341 from EasyTier/ci/github-issue-template
🐎 ci: github issue template
2024-09-15 22:30:49 +08:00
m1m1sha
2ff653cc6f 🐎 ci: github issue template 2024-09-15 22:28:55 +08:00
m1m1sha
cfe4d080d5 🐞 fix: GUI relay display error (#335) 2024-09-14 11:41:38 +08:00
M2kar
9b28ecde8e fix compile error due to rust version format (#332) 2024-09-14 11:40:46 +08:00
Sijie.Sun
096ed39d23 fix udp proxy disconn unexpectedly (#321) 2024-09-11 23:46:26 +08:00
m1m1sha
6ea3adcef8 feat: show version & local node (#318)
*  feat: version

Add version information display; incompatible with lower versions

* 🎈 perf: unknown

Show "unknown" when there is no version number to display

*  feat: Display local nodes

Display local nodes; incompatible with lower versions
2024-09-11 15:58:13 +08:00
m1m1sha
4342be29d7 Perf/front page (#316)
* 🐳 chore: dependencies

* 🐞 fix: minor style issues

fixed background white patches in dark mode
fixed the line height of the status label, which resulted in a bloated appearance

* 🌈 style: lint

*  feat: about
2024-09-11 09:13:00 +08:00
Sijie.Sun
1609c97574 fix panic when wireguard tunnel encounter udp recv error (#299) 2024-09-02 09:37:34 +08:00
Sijie.Sun
f07b3ee9c6 fix punching task leak (#298)
the punching task creator doesn't check whether a task is already
running, and may create many punching tasks for the same peer node.

this patch also improves hole punching by checking for hole-punch packets
even if the punch RPC fails.
2024-08-31 14:37:34 +08:00
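An illustrative sketch of the missing guard, assuming a DashMap keyed by peer id (`PeerId` and the manager type are made-up names, not EasyTier's API):

```rust
use dashmap::DashMap;
use tokio::task::JoinHandle;

type PeerId = u32;

struct PunchTaskManager {
    running: DashMap<PeerId, JoinHandle<()>>,
}

impl PunchTaskManager {
    fn try_start(&self, peer: PeerId) {
        // Drop finished tasks so the same peer can be punched again later.
        self.running.retain(|_, handle| !handle.is_finished());
        if self.running.contains_key(&peer) {
            return; // a punching task for this peer is already running — don't create another
        }
        let handle = tokio::spawn(async move {
            // ... hole-punching work for `peer` would go here ...
            let _ = peer;
        });
        self.running.insert(peer, handle);
    }
}
```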
Sijie.Sun
2058dbc470 fix wg client hang after some time (#297)
the wg portal doesn't detect client disconnects, so messages pile up in the queue and the
entire peer packet processing pipeline hangs.
2024-08-31 12:44:12 +08:00
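A hedged sketch of one general way to avoid that kind of stall — a bounded tokio channel fed with `try_send`, so a consumer that never reads cannot block the producers (names are assumptions, not the actual portal code):

```rust
use tokio::sync::mpsc;

/// Illustrative only: forward a packet to a per-client queue without awaiting.
fn forward_packet(tx: &mpsc::Sender<Vec<u8>>, packet: Vec<u8>) {
    match tx.try_send(packet) {
        Ok(()) => {}
        Err(mpsc::error::TrySendError::Full(_)) => {
            // Queue overstocked: the client is likely gone. Drop the packet (or tear down
            // the session) rather than awaiting and stalling every other peer's pipeline.
        }
        Err(mpsc::error::TrySendError::Closed(_)) => {
            // Receiver dropped — the session is already closed.
        }
    }
}
```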
3RDNature
6964fb71fc Add a setting "disable_udp_hole_punch" to disable UDP hole punch function (#291)
It tentatively solves #289.

Co-authored-by: 3rdnature <root@natureblog.net>
2024-08-29 11:34:30 +08:00
Jiangqiu Shen
a8bb4ee7e5 Update Cargo.toml (#290)
fix compile error mentioned in #286
2024-08-29 09:06:48 +08:00
严浩
3fcd74ce4e fix: Different network methods server URL display (#283)
Co-authored-by: 严浩 <i@oo1.dev>
2024-08-27 10:09:46 +08:00
Sijie.Sun
2b7ff0efc5 bump version to v1.2.3 and update readme (#280) 2024-08-25 13:45:18 +08:00
Sijie.Sun
5833541a6e set correct route cost for peers relayed by public server (#279) 2024-08-25 12:27:00 +08:00
Sijie.Sun
54c6418f97 only add necessary connections to alive urls (#277)
too many alive connections may cause high CPU usage and lagged broadcast
receives.
2024-08-25 11:12:01 +08:00
Sijie.Sun
fc9aac42b4 fix release.yml, just skip zipping gui & mobile artifacts (#276) 2024-08-25 00:45:14 +08:00
Sijie.Sun
89b43684d8 add complete support for FreeBSD (#275)
add TUN, WebSocket & WireGuard support on FreeBSD
2024-08-25 00:44:45 +08:00
Sunakier
31b26222d3 feat: support multi-service management (#251)
feat: support config (only config file now)
2024-08-24 10:17:57 +08:00
Mrered Cio
e4df03053e add MacOS Homebrew installation method (#273) 2024-08-24 10:13:30 +08:00
Sijie.Sun
833e7eca22 add command to show local node info (#271) 2024-08-23 11:50:11 +08:00
Sijie.Sun
b7d85ad2ff update rust-i18n to v3.1.2 (#269) 2024-08-21 11:00:13 +08:00
Sijie.Sun
8793560e12 fix i18n, revert rust-i18n to v3.0.1 (#267) 2024-08-20 00:38:59 +08:00
Dingxuan Jiang
58e0e48d59 easytier-gui: prevent multiple instances (#265)
* easytier-gui: prevent multiple instances
* ignore single instance for Android and iOS
2024-08-19 12:25:36 +08:00
sijie.sun
ad4cbbea6d fix socks5 access local virtual ip 2024-08-17 23:52:05 +08:00
sijie.sun
db660ee3b1 add test for socks5 server 2024-08-17 21:39:19 +08:00
sijie.sun
ae54a872ce support socks5 proxy
usage: --socks5 12345

creates a SOCKS5 server on port 12345, which SOCKS5 clients can use to access the
virtual network.
2024-08-17 13:17:38 +08:00
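For illustration, a minimal raw SOCKS5 CONNECT through such a proxy, following the standard SOCKS5 wire format (no auth, IPv4 target); the target 10.144.144.2:80 is only an example:

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Connect to the SOCKS5 server started with `--socks5 12345`.
    let mut s = TcpStream::connect("127.0.0.1:12345")?;

    // Greeting: version 5, one auth method offered, 0x00 = no authentication.
    s.write_all(&[0x05, 0x01, 0x00])?;
    let mut reply = [0u8; 2];
    s.read_exact(&mut reply)?;
    assert_eq!(reply, [0x05, 0x00], "proxy refused the no-auth method");

    // CONNECT request: ver 5, cmd 1 (connect), reserved 0, atyp 1 (IPv4),
    // then the 4-byte address and 2-byte big-endian port of the virtual-network target.
    let target = [10u8, 144, 144, 2];
    let port: u16 = 80;
    let mut req = vec![0x05, 0x01, 0x00, 0x01];
    req.extend_from_slice(&target);
    req.extend_from_slice(&port.to_be_bytes());
    s.write_all(&req)?;

    // Reply: 10 bytes for an IPv4 bind address; byte 1 == 0x00 means success.
    let mut resp = [0u8; 10];
    s.read_exact(&mut resp)?;
    assert_eq!(resp[1], 0x00, "connect failed");

    // From here `s` is a tunnel to 10.144.144.2:80 inside the virtual network.
    Ok(())
}
```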
sijie.sun
2aa686f7ad use autostart plugin and hide window when autostart 2024-08-17 02:15:15 +08:00
sijie.sun
ce10bf5e60 update tauri to rc2 2024-08-17 02:15:15 +08:00
Sijie.Sun
28ae9c447a Update Dockerfile to fix timezone 2024-08-17 00:00:32 +08:00
sijie.sun
ff6da9bbec also set up panic handler on gui
this helps collect GUI crash info.
2024-08-15 23:00:04 +08:00
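A minimal sketch of such a panic hook, assuming crash info is appended to a log file (the file name is an assumption; this is not the actual GUI code):

```rust
use std::fs::OpenOptions;
use std::io::Write;

fn setup_panic_handler() {
    std::panic::set_hook(Box::new(|info| {
        // Capture a backtrace and append the panic message so crashes can be collected later.
        let backtrace = std::backtrace::Backtrace::force_capture();
        if let Ok(mut f) = OpenOptions::new()
            .create(true)
            .append(true)
            .open("easytier-gui-panic.log")
        {
            let _ = writeln!(f, "panic: {info}\n{backtrace}");
        }
    }));
}
```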
sijie.sun
198c239399 set ipv6 mtu on windows
windows uses different MTUs for ipv4 / ipv6; we should set both.
2024-08-15 22:59:48 +08:00
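One illustrative way to set both MTUs on Windows is via netsh (interface name and MTU value below are assumptions; EasyTier itself may configure this through the Windows API rather than shelling out):

```rust
use std::process::Command;

/// Sketch: Windows keeps separate MTU settings per address family, so set both.
fn set_both_mtus(ifname: &str, mtu: u32) -> std::io::Result<()> {
    let mtu_arg = format!("mtu={mtu}");
    for family in ["ipv4", "ipv6"] {
        let status = Command::new("netsh")
            .args(["interface", family, "set", "subinterface", ifname])
            .arg(&mtu_arg)
            .arg("store=persistent")
            .status()?;
        if !status.success() {
            eprintln!("failed to set {family} MTU on {ifname}");
        }
    }
    Ok(())
}
```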
sijie.sun
0fbbea963f forward foreign peer events to unbounded channel
if some events are lost, foreign peer info may become inconsistent.
2024-08-15 08:03:50 +08:00
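A hedged sketch of the trade-off: an unbounded tokio channel never drops events (at the cost of unbounded memory under backlog), so the consumer always sees every foreign-peer event in order. The event format below is made up for illustration:

```rust
use tokio::sync::mpsc;

fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();

    // Producers can always enqueue; send only fails if the receiver was dropped.
    tx.send("peer-added: net-abc/node-1".to_string()).unwrap();
    tx.send("peer-removed: net-abc/node-2".to_string()).unwrap();

    // Consumer drains events in order and applies them to the foreign peer table.
    while let Ok(event) = rx.try_recv() {
        println!("apply event: {event}");
    }
}
```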
sijie.sun
51165c54f5 smoltcp listener should bind multiple times
if smoltcp binds only once on a tcp socket, it can accept exactly
one SYN packet per round. other SYN packets will be dropped and the
client will receive an RST packet.
2024-08-13 23:01:34 +08:00
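A minimal sketch of the idea against smoltcp's TCP socket API (module paths and features vary between smoltcp versions): keep a pool of sockets all listening on the same port, so several SYNs arriving in one poll round each find a socket in the LISTEN state. Pool size and port are assumptions.

```rust
use smoltcp::iface::SocketSet;
use smoltcp::socket::tcp;

fn make_listen_pool(port: u16, pool_size: usize) -> SocketSet<'static> {
    let mut sockets = SocketSet::new(vec![]);
    for _ in 0..pool_size {
        let rx = tcp::SocketBuffer::new(vec![0u8; 4096]);
        let tx = tcp::SocketBuffer::new(vec![0u8; 4096]);
        let mut sock = tcp::Socket::new(rx, tx);
        // Each socket in LISTEN state accepts exactly one connection, so binding several
        // sockets to the same local endpoint lets more than one SYN succeed per round
        // instead of the extras being answered with RST.
        sock.listen(port).expect("listen failed");
        sockets.add(sock);
    }
    sockets
}
```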
139 changed files with 12115 additions and 6030 deletions

.github/ISSUE_TEMPLATE/bug_report.yml (new file, 53 lines)
View File

@@ -0,0 +1,53 @@
# Copyright 2024-present Easytier Programme within The Commons Conservancy
# SPDX-License-Identifier: Apache-2.0
name: 🐞 问题报告 / Bug Report
title: '[bug] '
description: 报告一个问题 / Report a bug
labels: ['type: bug', 'status: needs triage']
body:
- type: markdown
attributes:
value: |
## 在提交问题之前 / First of all
1. 请先搜索有关此问题的 [现有问题](https://github.com/EasyTier/EasyTier/issues?q=is%3Aissue)。
1. Please search for [existing issues](https://github.com/EasyTier/EasyTier/issues?q=is%3Aissue) about this problem first.
2. 请确保所使用的 Easytier 版本都是最新的。
2. Make sure that all Easytier versions are up-to-date.
3. 请确保这是 EasyTier 的问题,而不是你正在使用的其他内容引起的问题。
3. Make sure it's an issue with EasyTier and not something else you are using.
4. 请记得遵守我们的社区准则并保持友好态度。
4. Remember to follow our community guidelines and be friendly.
- type: textarea
id: description
attributes:
label: 描述问题 / Describe the bug
description: 对 bug 的明确描述。如果条件允许,请包括屏幕截图。 / A clear description of what the bug is. Include screenshots if applicable.
placeholder: 问题描述 / Bug description
validations:
required: true
- type: textarea
id: reproduction
attributes:
label: 重现步骤 / Reproduction
description: 能够重现行为的步骤或指向能够复现的存储库链接。 / A link to a reproduction repo or steps to reproduce the behaviour.
placeholder: |
请提供一个最小化的复现示例或复现步骤,请参考这个指南 https://stackoverflow.com/help/minimal-reproducible-example
Please provide a minimal reproduction or steps to reproduce, see this guide https://stackoverflow.com/help/minimal-reproducible-example
为什么需要重现(问题)?请参阅这篇文章 https://antfu.me/posts/why-reproductions-are-required
Why reproduction is required? see this article https://antfu.me/posts/why-reproductions-are-required
- type: textarea
id: expected-behavior
attributes:
label: 预期结果 / Expected behavior
description: 清楚地描述您期望发生的事情。 / A clear description of what you expected to happen.
- type: textarea
id: context
attributes:
label: 额外上下文 / Additional context
description: 在这里添加关于问题的任何其他上下文。 / Add any other context about the problem here.

View File

@@ -0,0 +1,38 @@
# Copyright 2024-present Easytier Programme within The Commons Conservancy
# SPDX-License-Identifier: Apache-2.0
name: 💡 新功能请求 / Feature Request
title: '[feat] '
description: 提出一个想法 / Suggest an idea
labels: ['type: feature request']
body:
- type: textarea
id: problem
attributes:
label: 描述问题 / Describe the problem
description: 明确描述此功能将解决的问题 / A clear description of the problem this feature would solve
placeholder: "我总是在...感觉困惑 / I'm always frustrated when..."
validations:
required: true
- type: textarea
id: solution
attributes:
label: "描述您想要的解决方案 / Describe the solution you'd like"
description: 明确说明您希望做出的改变 / A clear description of what change you would like
placeholder: '我希望... / I would like to...'
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: 替代方案 / Alternatives considered
description: "您考虑过的任何替代解决方案 / Any alternative solutions you've considered"
- type: textarea
id: context
attributes:
label: 额外上下文 / Additional context
description: 在此处添加有关问题的任何其他上下文。 / Add any other context about the problem here.

View File

@@ -18,9 +18,13 @@ RUN mkdir -p /tmp/output; \
FROM alpine:latest
RUN apk add --no-cache tzdata
WORKDIR /app
COPY --from=builder --chmod=755 /tmp/output/* /usr/local/bin
# users can use "-e TZ=xxx" to adjust it
ENV TZ Asia/Shanghai
# tcp
EXPOSE 11010/tcp
# udp

View File

@@ -2,7 +2,7 @@ name: EasyTier Core
on:
push:
branches: ["develop", "main"]
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
@@ -20,14 +20,16 @@ jobs:
runs-on: ubuntu-latest
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
# do not skip push on branch starts with releases/
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/core.yml", ".github/workflows/install_rust.sh"]'
build:
strategy:
@@ -70,6 +72,11 @@ jobs:
OS: windows-latest
ARTIFACT_NAME: windows-x86_64
- TARGET: x86_64-unknown-freebsd
OS: ubuntu-latest
ARTIFACT_NAME: freebsd-13.2-x86_64
BSD_VERSION: 13.2
runs-on: ${{ matrix.OS }}
env:
NAME: easytier
@@ -81,9 +88,9 @@ jobs:
steps:
- uses: actions/checkout@v3
- uses: actions/setup-node@v4
with:
node-version: 21
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- name: Cargo cache
uses: actions/cache@v4
@@ -93,9 +100,6 @@ jobs:
./target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install rust target
run: bash ./.github/workflows/install_rust.sh
- name: Setup protoc
uses: arduino/setup-protoc@v2
with:
@@ -103,13 +107,52 @@ jobs:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Build Core & Cli
if: ${{ ! endsWith(matrix.TARGET, 'freebsd') }}
run: |
bash ./.github/workflows/install_rust.sh
if [[ $OS =~ ^ubuntu.*$ && $TARGET =~ ^mips.*$ ]]; then
cargo +nightly build -r --verbose --target $TARGET -Z build-std=std,panic_abort --no-default-features --features mips
else
cargo build --release --verbose --target $TARGET
fi
# Copied and slightly modified from @lmq8267 (https://github.com/lmq8267)
- name: Build Core & Cli (X86_64 FreeBSD)
uses: cross-platform-actions/action@v0.23.0
if: ${{ endsWith(matrix.TARGET, 'freebsd') }}
env:
TARGET: ${{ matrix.TARGET }}
with:
operating_system: freebsd
environment_variables: TARGET
architecture: x86-64
version: ${{ matrix.BSD_VERSION }}
shell: bash
memory: 5G
cpu_count: 4
run: |
uname -a
echo $SHELL
pwd
ls -lah
whoami
env | sort
sudo pkg install -y git protobuf
curl --proto 'https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source $HOME/.cargo/env
rustup set auto-self-update disable
rustup install 1.77
rustup default 1.77
export CC=clang
export CXX=clang++
export CARGO_TERM_COLOR=always
cargo build --release --verbose --target $TARGET
- name: Install UPX
if: ${{ matrix.OS != 'macos-latest' }}
uses: crazy-max/ghaction-upx@v3
@@ -132,7 +175,7 @@ jobs:
TAG=$GITHUB_SHA
fi
if [[ $OS =~ ^ubuntu.*$ ]]; then
if [[ $OS =~ ^ubuntu.*$ && ! $TARGET =~ ^.*freebsd$ ]]; then
upx --lzma --best ./target/$TARGET/release/easytier-core"$SUFFIX"
upx --lzma --best ./target/$TARGET/release/easytier-cli"$SUFFIX"
fi
@@ -159,7 +202,7 @@ jobs:
endpoint: ${{ secrets.ALIYUN_OSS_ENDPOINT }}
bucket: ${{ secrets.ALIYUN_OSS_BUCKET }}
local-path: ./artifacts/
remote-path: /easytier-releases/${{ github.sha }}/
remote-path: /easytier-releases/${{env.GIT_DESC}}/easytier-${{ matrix.ARTIFACT_NAME }}
no-delete-remote-files: true
retry: 5
core-result:

View File

@@ -2,7 +2,7 @@ name: EasyTier GUI
on:
push:
branches: ["develop", "main"]
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
@@ -20,14 +20,15 @@ jobs:
runs-on: ubuntu-latest
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", ".github/workflows/gui.yml", ".github/workflows/install_rust.sh"]'
build-gui:
strategy:
@@ -69,6 +70,10 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- uses: actions/setup-node@v4
with:
node-version: 21
@@ -118,33 +123,31 @@ jobs:
if: ${{ matrix.TARGET == 'aarch64-unknown-linux-musl' }}
run: |
# see https://tauri.app/v1/guides/building/linux/
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy main restricted" | sudo tee /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble main restricted" | sudo tee /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ noble-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ noble-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ noble-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-security multiverse" | sudo tee -a /etc/apt/sources.list
sudo dpkg --add-architecture arm64
sudo apt-get update && sudo apt-get upgrade -y
sudo apt install gcc-aarch64-linux-gnu
sudo apt install libwebkit2gtk-4.1-dev:arm64
sudo apt install libssl-dev:arm64
sudo apt install -f -o Dpkg::Options::="--force-overwrite" libwebkit2gtk-4.1-dev:arm64 libssl-dev:arm64 gcc-aarch64-linux-gnu
echo "PKG_CONFIG_SYSROOT_DIR=/usr/aarch64-linux-gnu/" >> "$GITHUB_ENV"
echo "PKG_CONFIG_PATH=/usr/lib/aarch64-linux-gnu/pkgconfig/" >> "$GITHUB_ENV"
@@ -197,7 +200,7 @@ jobs:
endpoint: ${{ secrets.ALIYUN_OSS_ENDPOINT }}
bucket: ${{ secrets.ALIYUN_OSS_BUCKET }}
local-path: ./artifacts/
remote-path: /easytier-releases/${{ github.sha }}/gui
remote-path: /easytier-releases/${{env.GIT_DESC}}/easytier-gui-${{ matrix.ARTIFACT_NAME }}
no-delete-remote-files: true
retry: 5
gui-result:

View File

@@ -2,7 +2,7 @@ name: EasyTier Mobile
on:
push:
branches: ["develop", "main"]
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
@@ -20,14 +20,15 @@ jobs:
runs-on: ubuntu-latest
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", "tauri-plugin-vpnservice/**", ".github/workflows/mobile.yml", ".github/workflows/install_rust.sh"]'
build-mobile:
strategy:
@@ -48,6 +49,10 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- uses: actions/setup-java@v4
with:
distribution: 'oracle'
@@ -150,7 +155,7 @@ jobs:
endpoint: ${{ secrets.ALIYUN_OSS_ENDPOINT }}
bucket: ${{ secrets.ALIYUN_OSS_BUCKET }}
local-path: ./artifacts/
remote-path: /easytier-releases/${{ github.sha }}/mobile
remote-path: /easytier-releases/${{env.GIT_DESC}}/easytier-gui-${{ matrix.ARTIFACT_NAME }}
no-delete-remote-files: true
retry: 5
mobile-result:

View File

@@ -19,9 +19,9 @@ on:
default: 10322498555
required: true
version:
description: 'version for this release'
description: 'Version for this release'
type: string
default: 'v1.2.2'
default: 'v2.0.0'
required: true
make_latest:
description: 'Mark this release as latest'
@@ -55,7 +55,7 @@ jobs:
github_token: ${{secrets.GITHUB_TOKEN}}
run_id: ${{ inputs.gui_run_id }}
repo: EasyTier/EasyTier
path: release_assets
path: release_assets_nozip
- name: Download GUI Artifact
uses: dawidd6/action-download-artifact@v6
@@ -63,17 +63,20 @@ jobs:
github_token: ${{secrets.GITHUB_TOKEN}}
run_id: ${{ inputs.mobile_run_id }}
repo: EasyTier/EasyTier
path: release_assets
path: release_assets_nozip
- name: Zip release assets
env:
VERSION: ${{ inputs.version }}
run: |
mkdir zipped_assets
find release_assets_nozip -type f -exec mv {} zipped_assets \;
ls -l -R ./zipped_assets
cd release_assets
ls -l -R ./
chmod -R 755 .
mkdir ../zipped_assets
for x in `ls`; do
zip ../zipped_assets/$x-${VERSION}.zip $x/*;
done

Cargo.lock (generated; 1223 lines changed)

File diff suppressed because it is too large

View File

@@ -10,4 +10,3 @@ panic = "unwind"
panic = "abort"
lto = true
codegen-units = 1
strip = true

View File

@@ -11,7 +11,7 @@
}
],
"settings": {
"eslint.experimental.useFlatConfig": true,
"eslint.useFlatConfig": true,
"prettier.enable": false,
"editor.formatOnSave": false,
"editor.codeActionsOnSave": {

README.md (370 lines changed)
View File

@@ -1,28 +1,28 @@
# EasyTier
[![GitHub](https://img.shields.io/github/license/EasyTier/EasyTier)](https://github.com/EasyTier/EasyTier/blob/main/LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/EasyTier/EasyTier)](https://github.com/EasyTier/EasyTier/commits/main)
[![GitHub issues](https://img.shields.io/github/issues/EasyTier/EasyTier)](https://github.com/EasyTier/EasyTier/issues)
[![GitHub Core Actions](https://github.com/EasyTier/EasyTier/actions/workflows/core.yml/badge.svg)](https://github.com/EasyTier/EasyTier/actions/workflows/core.yml)
[![GitHub GUI Actions](https://github.com/EasyTier/EasyTier/actions/workflows/gui.yml/badge.svg)](https://github.com/EasyTier/EasyTier/actions/workflows/gui.yml)
# EasyTier
[![GitHub](https://img.shields.io/github/license/EasyTier/EasyTier)](https://github.com/EasyTier/EasyTier/blob/main/LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/EasyTier/EasyTier)](https://github.com/EasyTier/EasyTier/commits/main)
[![GitHub issues](https://img.shields.io/github/issues/EasyTier/EasyTier)](https://github.com/EasyTier/EasyTier/issues)
[![GitHub Core Actions](https://github.com/EasyTier/EasyTier/actions/workflows/core.yml/badge.svg)](https://github.com/EasyTier/EasyTier/actions/workflows/core.yml)
[![GitHub GUI Actions](https://github.com/EasyTier/EasyTier/actions/workflows/gui.yml/badge.svg)](https://github.com/EasyTier/EasyTier/actions/workflows/gui.yml)
[简体中文](/README_CN.md) | [English](/README.md)
**Please visit the [EasyTier Official Website](https://www.easytier.top/en/) to view the full documentation.**
**Please visit the [EasyTier Official Website](https://www.easytier.top/en/) to view the full documentation.**
EasyTier is a simple, safe and decentralized VPN networking solution implemented with the Rust language and Tokio framework.
EasyTier is a simple, safe and decentralized VPN networking solution implemented with the Rust language and Tokio framework.
<p align="center">
<img src="assets/image-5.png" width="300">
<img src="assets/image-4.png" width="300">
</p>
## Features
## Features
- **Decentralized**: No need to rely on centralized services, nodes are equal and independent.
- **Safe**: Use WireGuard protocol to encrypt data.
- **High Performance**: Full-link zero-copy, with performance comparable to mainstream networking software.
- **Cross-platform**: Supports MacOS/Linux/Windows, will support IOS and Android in the future. The executable file is statically linked, making deployment simple.
- **Cross-platform**: Supports MacOS/Linux/Windows/Android, will support IOS in the future. The executable file is statically linked, making deployment simple.
- **Networking without public IP**: Supports networking using shared public nodes, refer to [Configuration Guide](#Networking-without-public-IP)
- **NAT traversal**: Supports UDP-based NAT traversal, able to establish stable connections even in complex network environments.
- **Subnet Proxy (Point-to-Network)**: Nodes can expose accessible network segments as proxies to the VPN subnet, allowing other nodes to access these subnets through the node.
@@ -32,170 +32,195 @@
- **IPv6 Support**: Supports networking using IPv6.
- **Multiple Protocol Types**: Supports communication between nodes using protocols such as WebSocket and QUIC.
## Installation
1. **Download the precompiled binary file**
Visit the [GitHub Release page](https://github.com/EasyTier/EasyTier/releases) to download the binary file suitable for your operating system. Release includes both command-line programs and GUI programs in the compressed package.
2. **Install via crates.io**
## Installation
1. **Download the precompiled binary file**
Visit the [GitHub Release page](https://github.com/EasyTier/EasyTier/releases) to download the binary file suitable for your operating system. Release includes both command-line programs and GUI programs in the compressed package.
2. **Install via crates.io**
```sh
cargo install easytier
```
3. **Install from source code**
3. **Install from source code**
```sh
cargo install --git https://github.com/EasyTier/EasyTier.git
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
```
4. **Install by Docker Compose**
4. **Install by Docker Compose**
Please visit the [EasyTier Official Website](https://www.easytier.top/en/) to view the full documentation.
Please visit the [EasyTier Official Website](https://www.easytier.top/en/) to view the full documentation.
5. **Install by script (For Linux Only)**
5. **Install by script (For Linux Only)**
```sh
wget -O /tmp/easytier.sh "https://raw.githubusercontent.com/EasyTier/EasyTier/main/script/easytier.sh" && bash /tmp/easytier.sh install
wget -O /tmp/easytier.sh "https://raw.githubusercontent.com/EasyTier/EasyTier/main/script/install.sh" && bash /tmp/easytier.sh install
```
You can also uninstall/update Easytier by the command "uninstall" or "update" of this script
## Quick Start
6. **Install by Homebrew (For MacOS Only)**
```sh
brew tap brewforge/chinese
brew install --cask easytier
```
## Quick Start
> The following text only describes the use of the command-line tool; the GUI program can be configured by referring to the following concepts.
Make sure EasyTier is installed according to the [Installation Guide](#Installation), and both easytier-core and easytier-cli commands are available.
### Two-node Networking
Assuming the network topology of the two nodes is as follows
```mermaid
flowchart LR
subgraph Node A IP 22.1.1.1
nodea[EasyTier\n10.144.144.1]
end
subgraph Node B
nodeb[EasyTier\n10.144.144.2]
end
nodea <-----> nodeb
```
1. Execute on Node A:
> The following text only describes the use of the command-line tool; the GUI program can be configured by referring to the following concepts.
Make sure EasyTier is installed according to the [Installation Guide](#Installation), and both easytier-core and easytier-cli commands are available.
### Two-node Networking
Assuming the network topology of the two nodes is as follows
```mermaid
flowchart LR
subgraph Node A IP 22.1.1.1
nodea[EasyTier\n10.144.144.1]
end
subgraph Node B
nodeb[EasyTier\n10.144.144.2]
end
nodea <-----> nodeb
```
1. Execute on Node A:
```sh
sudo easytier-core --ipv4 10.144.144.1
```
Successful execution of the command will print the following.
![alt text](/assets/image-2.png)
2. Execute on Node B
2. Execute on Node B
```sh
sudo easytier-core --ipv4 10.144.144.2 --peers udp://22.1.1.1:11010
```
3. Test Connectivity
3. Test Connectivity
The two nodes should connect successfully and be able to communicate within the virtual subnet
```sh
ping 10.144.144.2
```
Use easytier-cli to view node information in the subnet
```sh
easytier-cli peer
```
![alt text](/assets/image.png)
```sh
easytier-cli route
```
![alt text](/assets/image-1.png)
```sh
easytier-cli node
```
![alt text](assets/image-10.png)
---
### Multi-node Networking
Based on the two-node networking example just now, if more nodes need to join the virtual network, you can use the following command.
```
sudo easytier-core --ipv4 10.144.144.2 --peers udp://22.1.1.1:11010
```
The `--peers` parameter can fill in the listening address of any node already in the virtual network.
---
### Subnet Proxy (Point-to-Network) Configuration
Assuming the network topology is as follows, Node B wants to share its accessible subnet 10.1.1.0/24 with other nodes.
```mermaid
flowchart LR
subgraph Node A IP 22.1.1.1
nodea[EasyTier\n10.144.144.1]
end
subgraph Node B
nodeb[EasyTier\n10.144.144.2]
end
id1[[10.1.1.0/24]]
nodea <--> nodeb <-.-> id1
```
Then the startup parameters for Node B's easytier are (new -n parameter)
```sh
sudo easytier-core --ipv4 10.144.144.2 -n 10.1.1.0/24
```
Subnet proxy information will automatically sync to each node in the virtual network, and each node will automatically configure the corresponding route. Node A can check whether the subnet proxy is effective through the following command.
1. Check whether the routing information has been synchronized, the proxy_cidrs column shows the proxied subnets.
### Multi-node Networking
Based on the two-node networking example just now, if more nodes need to join the virtual network, you can use the following command.
```sh
sudo easytier-core --ipv4 10.144.144.2 --peers udp://22.1.1.1:11010
```
The `--peers` parameter can fill in the listening address of any node already in the virtual network.
---
### Subnet Proxy (Point-to-Network) Configuration
Assuming the network topology is as follows, Node B wants to share its accessible subnet 10.1.1.0/24 with other nodes.
```mermaid
flowchart LR
subgraph Node A IP 22.1.1.1
nodea[EasyTier\n10.144.144.1]
end
subgraph Node B
nodeb[EasyTier\n10.144.144.2]
end
id1[[10.1.1.0/24]]
nodea <--> nodeb <-.-> id1
```
Then the startup parameters for Node B's easytier are (new -n parameter)
```sh
sudo easytier-core --ipv4 10.144.144.2 -n 10.1.1.0/24
```
Subnet proxy information will automatically sync to each node in the virtual network, and each node will automatically configure the corresponding route. Node A can check whether the subnet proxy is effective through the following command.
1. Check whether the routing information has been synchronized, the proxy_cidrs column shows the proxied subnets.
```sh
easytier-cli route
```
![alt text](/assets/image-3.png)
![alt text](/assets/image-3.png)
2. Test whether Node A can access nodes under the proxied subnet
```sh
ping 10.1.1.2
```
---
### Networking without Public IP
EasyTier supports networking using shared public nodes. The currently deployed shared public node is ``tcp://easytier.public.kkrainbow.top:11010``.
When using shared nodes, each node entering the network needs to provide the same ``--network-name`` and ``--network-secret`` parameters as the unique identifier of the network.
Taking two nodes as an example, Node A executes:
```sh
sudo easytier-core -i 10.144.144.1 --network-name abc --network-secret abc -e tcp://easytier.public.kkrainbow.top:11010
```
Node B executes
```sh
sudo easytier-core --ipv4 10.144.144.2 --network-name abc --network-secret abc -e tcp://easytier.public.kkrainbow.top:11010
```
After the command is successfully executed, Node A can access Node B through the virtual IP 10.144.144.2.
### Use EasyTier with WireGuard Client
EasyTier can be used as a WireGuard server to allow any device with WireGuard client installed to access the EasyTier network. For platforms currently unsupported by EasyTier (such as iOS, Android, etc.), this method can be used to connect to the EasyTier network.
---
### Networking without Public IP
EasyTier supports networking using shared public nodes. The currently deployed shared public node is ``tcp://easytier.public.kkrainbow.top:11010``.
When using shared nodes, each node entering the network needs to provide the same ``--network-name`` and ``--network-secret`` parameters as the unique identifier of the network.
Taking two nodes as an example, Node A executes:
```sh
sudo easytier-core -i 10.144.144.1 --network-name abc --network-secret abc -e tcp://easytier.public.kkrainbow.top:11010
```
Node B executes
```sh
sudo easytier-core --ipv4 10.144.144.2 --network-name abc --network-secret abc -e tcp://easytier.public.kkrainbow.top:11010
```
After the command is successfully executed, Node A can access Node B through the virtual IP 10.144.144.2.
### Use EasyTier with WireGuard Client
EasyTier can be used as a WireGuard server to allow any device with WireGuard client installed to access the EasyTier network. For platforms currently unsupported by EasyTier (such as iOS, Android, etc.), this method can be used to connect to the EasyTier network.
Assuming the network topology is as follows:
@@ -221,14 +246,14 @@ To enable an iPhone to access the EasyTier network through Node A, the following
Include the --vpn-portal parameter in the easytier-core command on Node A to specify the port that the WireGuard service listens on and the subnet used by the WireGuard network.
```
```sh
# The following parameters mean: listen on port 0.0.0.0:11013, and use the 10.14.14.0/24 subnet for WireGuard
sudo easytier-core --ipv4 10.144.144.1 --vpn-portal wg://0.0.0.0:11013/10.14.14.0/24
```
After successfully starting easytier-core, use easytier-cli to obtain the WireGuard client configuration.
```
```sh
$> easytier-cli vpn-portal
portal_name: wireguard
@@ -252,45 +277,44 @@ connected_clients:
Before using the Client Config, you need to modify the Interface Address and Peer Endpoint to the client's IP and the IP of the EasyTier node, respectively. Import the configuration file into the WireGuard client to access the EasyTier network.
# Self-Hosted Public Server
### Self-Hosted Public Server
Each node can act as a relay node for other users' networks. Simply start EasyTier without any parameters.
### Configurations
You can use ``easytier-core --help`` to view all configuration items
# Roadmap
- [ ] Improve documentation and user guides.
- [ ] Support features such as encryption, TCP hole punching, etc.
- [ ] Support Android, IOS and other mobile platforms.
- [ ] Support Web configuration management.
# Community and Contribution
We welcome and encourage community contributions! If you want to get involved, please submit a [GitHub PR](https://github.com/EasyTier/EasyTier/pulls). Detailed contribution guidelines can be found in [CONTRIBUTING.md](https://github.com/EasyTier/EasyTier/blob/main/CONTRIBUTING.md).
# Related Projects and Resources
- [ZeroTier](https://www.zerotier.com/): A global virtual network for connecting devices.
- [TailScale](https://tailscale.com/): A VPN solution aimed at simplifying network configuration.
- [vpncloud](https://github.com/dswd/vpncloud): A P2P Mesh VPN
- [Candy](https://github.com/lanthora/candy): A reliable, low-latency, and anti-censorship virtual private network
# License
EasyTier is released under the [Apache License 2.0](https://github.com/EasyTier/EasyTier/blob/main/LICENSE).
# Contact
- Ask questions or report problems: [GitHub Issues](https://github.com/EasyTier/EasyTier/issues)
- Discussion and exchange: [GitHub Discussions](https://github.com/EasyTier/EasyTier/discussions)
- Telegram: https://t.me/easytier
- QQ Group: 949700262
# Sponsor
### Configurations
You can use ``easytier-core --help`` to view all configuration items
## Roadmap
- [ ] Improve documentation and user guides.
- [ ] Support features such as encryption, TCP hole punching, etc.
- [ ] Support iOS.
- [ ] Support Web configuration management.
## Community and Contribution
We welcome and encourage community contributions! If you want to get involved, please submit a [GitHub PR](https://github.com/EasyTier/EasyTier/pulls). Detailed contribution guidelines can be found in [CONTRIBUTING.md](https://github.com/EasyTier/EasyTier/blob/main/CONTRIBUTING.md).
## Related Projects and Resources
- [ZeroTier](https://www.zerotier.com/): A global virtual network for connecting devices.
- [TailScale](https://tailscale.com/): A VPN solution aimed at simplifying network configuration.
- [vpncloud](https://github.com/dswd/vpncloud): A P2P Mesh VPN
- [Candy](https://github.com/lanthora/candy): A reliable, low-latency, and anti-censorship virtual private network
## License
EasyTier is released under the [Apache License 2.0](https://github.com/EasyTier/EasyTier/blob/main/LICENSE).
## Contact
- Ask questions or report problems: [GitHub Issues](https://github.com/EasyTier/EasyTier/issues)
- Discussion and exchange: [GitHub Discussions](https://github.com/EasyTier/EasyTier/discussions)
- Telegram: https://t.me/easytier
- QQ Group: 949700262
## Sponsor
<img src="assets/image-8.png" width="300">
<img src="assets/image-9.png" width="300">
<img src="assets/image-9.png" width="300">

View File

@@ -22,7 +22,7 @@
- **去中心化**:无需依赖中心化服务,节点平等且独立。
- **安全**:支持利用 WireGuard 加密通信,也支持 AES-GCM 加密保护中转流量。
- **高性能**:全链路零拷贝,性能与主流组网软件相当。
- **跨平台**:支持 MacOS/Linux/Windows未来将支持 IOS 和 Android。可执行文件静态链接,部署简单。
- **跨平台**:支持 MacOS/Linux/Windows/Android,未来将支持 IOS。可执行文件静态链接部署简单。
- **无公网 IP 组网**:支持利用共享的公网节点组网,可参考 [配置指南](#无公网IP组网)
- **NAT 穿透**:支持基于 UDP 的 NAT 穿透,即使在复杂的网络环境下也能建立稳定的连接。
- **子网代理(点对网)**:节点可以将可访问的网段作为代理暴露给 VPN 子网,允许其他节点通过该节点访问这些子网。
@@ -39,27 +39,35 @@
访问 [GitHub Release 页面](https://github.com/EasyTier/EasyTier/releases) 下载适用于您操作系统的二进制文件。Release 压缩包中同时包含命令行程序和图形界面程序。
2. **通过 crates.io 安装**
```sh
cargo install easytier
```
```sh
cargo install easytier
```
3. **通过源码安装**
```sh
cargo install --git https://github.com/EasyTier/EasyTier.git
```
```sh
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
```
4. **通过Docker Compose安装**
请访问 [EasyTier 官网](https://www.easytier.top/) 以查看完整的文档。
请访问 [EasyTier 官网](https://www.easytier.top/) 以查看完整的文档。
5. **使用一键脚本安装 (仅适用于 Linux)**
```sh
wget -O /tmp/easytier.sh "https://raw.githubusercontent.com/EasyTier/EasyTier/main/script/easytier.sh" && bash /tmp/easytier.sh install
```
使用本脚本安装的 Easytier 可以使用脚本的 uninstall/update 对其卸载/升级
```sh
wget -O /tmp/easytier.sh "https://raw.githubusercontent.com/EasyTier/EasyTier/main/script/install.sh" && bash /tmp/easytier.sh install
```
使用本脚本安装的 Easytier 可以使用脚本的 uninstall/update 对其卸载/升级
6. **使用 Homebrew 安装 (仅适用于 MacOS)**
```sh
brew tap brewforge/chinese
brew install --cask easytier
```
## 快速开始
@@ -87,34 +95,48 @@ nodea <-----> nodeb
```
1. 在节点 A 上执行:
```sh
sudo easytier-core --ipv4 10.144.144.1
```
命令执行成功会有如下打印。
![alt text](/assets/image-2.png)
```sh
sudo easytier-core --ipv4 10.144.144.1
```
命令执行成功会有如下打印。
![alt text](/assets/image-2.png)
2. 在节点 B 执行
```sh
sudo easytier-core --ipv4 10.144.144.2 --peers udp://22.1.1.1:11010
```
```sh
sudo easytier-core --ipv4 10.144.144.2 --peers udp://22.1.1.1:11010
```
3. 测试联通性
两个节点应成功连接并能够在虚拟子网内通信
```sh
ping 10.144.144.2
```
两个节点应成功连接并能够在虚拟子网内通信
使用 easytier-cli 查看子网中的节点信息
```sh
easytier-cli peer
```
![alt text](/assets/image.png)
```sh
easytier-cli route
```
![alt text](/assets/image-1.png)
```sh
ping 10.144.144.2
```
使用 easytier-cli 查看子网中的节点信息
```sh
easytier-cli peer
```
![alt text](/assets/image.png)
```sh
easytier-cli route
```
![alt text](/assets/image-1.png)
```sh
easytier-cli node
```
![alt text](assets/image-10.png)
---
@@ -122,11 +144,11 @@ nodea <-----> nodeb
基于刚才的双节点组网例子,如果有更多的节点需要加入虚拟网络,可以使用如下命令。
```
```sh
sudo easytier-core --ipv4 10.144.144.2 --peers udp://22.1.1.1:11010
```
其中 `--peers ` 参数可以填写任意一个已经在虚拟网络中的节点的监听地址。
其中 `--peers` 参数可以填写任意一个已经在虚拟网络中的节点的监听地址。
---
@@ -161,16 +183,17 @@ sudo easytier-core --ipv4 10.144.144.2 -n 10.1.1.0/24
1. 检查路由信息是否已经同步proxy_cidrs 列展示了被代理的子网。
```sh
easytier-cli route
```
![alt text](/assets/image-3.png)
```sh
easytier-cli route
```
![alt text](/assets/image-3.png)
2. 测试节点 A 是否可访问被代理子网下的节点
```sh
ping 10.1.1.2
```
```sh
ping 10.1.1.2
```
---
@@ -224,14 +247,14 @@ ios <-.-> nodea <--> nodeb <-.-> id1
在节点 A 的 easytier-core 命令中,加入 --vpn-portal 参数,指定 WireGuard 服务监听的端口,以及 WireGuard 网络使用的网段。
```
```sh
# 以下参数的含义为: 监听 0.0.0.0:11013 端口WireGuard 使用 10.14.14.0/24 网段
sudo easytier-core --ipv4 10.144.144.1 --vpn-portal wg://0.0.0.0:11013/10.14.14.0/24
```
easytier-core 启动成功后,使用 easytier-cli 获取 WireGuard Client 的配置。
```
```sh
$> easytier-cli vpn-portal
portal_name: wireguard
@@ -265,37 +288,36 @@ connected_clients:
可使用 ``easytier-core --help`` 查看全部配置项
# 路线图
## 路线图
- [ ] 完善文档和用户指南。
- [ ] 支持 TCP 打洞等特性。
- [ ] 支持 Android、IOS 等移动平台
- [ ] 支持 iOS
- [ ] 支持 Web 配置管理。
# 社区和贡献
## 社区和贡献
我们欢迎并鼓励社区贡献!如果你想参与进来,请提交 [GitHub PR](https://github.com/EasyTier/EasyTier/pulls)。详细的贡献指南可以在 [CONTRIBUTING.md](https://github.com/EasyTier/EasyTier/blob/main/CONTRIBUTING.md) 中找到。
# 相关项目和资源
## 相关项目和资源
- [ZeroTier](https://www.zerotier.com/): 一个全球虚拟网络,用于连接设备。
- [TailScale](https://tailscale.com/): 一个旨在简化网络配置的 VPN 解决方案。
- [vpncloud](https://github.com/dswd/vpncloud): 一个 P2P Mesh VPN
- [Candy](https://github.com/lanthora/candy): 可靠、低延迟、抗审查的虚拟专用网络
# 许可证
## 许可证
EasyTier 根据 [Apache License 2.0](https://github.com/EasyTier/EasyTier/blob/main/LICENSE) 许可证发布。
# 联系方式
## 联系方式
- 提问或报告问题:[GitHub Issues](https://github.com/EasyTier/EasyTier/issues)
- 讨论和交流:[GitHub Discussions](https://github.com/EasyTier/EasyTier/discussions)
- QQ 群: 949700262
- Telegram: https://t.me/easytier
# 赞助
## 赞助
<img src="assets/image-8.png" width="300">
<img src="assets/image-9.png" width="300">

assets/image-10.png (new binary file, 16 KiB; not shown)

View File

@@ -14,6 +14,11 @@ npm install -g pnpm
### For Desktop (Win/Mac/Linux)
```
cd ../tauri-plugin-vpnservice
pnpm install
pnpm build
cd ../easytier-gui
pnpm install
pnpm tauri build
```
@@ -34,7 +39,6 @@ rustup target add aarch64-linux-android
install java 20
```
Java version depend on gradle version specified in (easytier-gui\src-tauri\gen\android\build.gradle.kts)
See [Gradle compatibility matrix](https://docs.gradle.org/current/userguide/compatibility.html) for detail .
@@ -43,4 +47,4 @@ See [Gradle compatibility matrix](https://docs.gradle.org/current/userguide/comp
pnpm install
pnpm tauri android init
pnpm tauri android build
```
```

View File

@@ -13,6 +13,7 @@ proxy_cidrs: 子网代理CIDR
enable_vpn_portal: 启用VPN门户
vpn_portal_listen_port: 监听端口
vpn_portal_client_network: 客户端子网
dev_name: TUN接口名称
advanced_settings: 高级设置
basic_settings: 基础设置
listener_urls: 监听地址
@@ -45,11 +46,13 @@ enable_auto_launch: 开启开机自启
exit: 退出
chips_placeholder: 例如: {0}, 按回车添加
hostname_placeholder: '留空默认为主机名: {0}'
dev_name_placeholder: 注意当多个网络同时使用相同的TUN接口名称时将会在设置TUN的IP时产生冲突留空以自动生成随机名称
off_text: 点击关闭
on_text: 点击开启
show_config: 显示配置
close: 关闭
use_latency_first: 延迟优先模式
my_node_info: 当前节点信息
peer_count: 已连接
upload: 上传
@@ -66,6 +69,10 @@ upload_bytes: 上传
download_bytes: 下载
loss_rate: 丢包率
status:
version: 内核版本
local: 本机
run_network: 运行网络
stop_network: 停止网络
network_running: 运行中
@@ -75,3 +82,12 @@ dhcp_experimental_warning: 实验性警告使用DHCP时如果组网环境中
tray:
show: 显示 / 隐藏
exit: 退出
about:
title: 关于
version: 版本
author: 作者
homepage: 主页
license: 许可证
description: 一个简单、安全、去中心化的内网穿透 VPN 组网方案,使用 Rust 语言和 Tokio 框架实现。
check_update: 检查更新

View File

@@ -13,6 +13,7 @@ proxy_cidrs: Subnet Proxy CIDRs
enable_vpn_portal: Enable VPN Portal
vpn_portal_listen_port: VPN Portal Listen Port
vpn_portal_client_network: Client Sub Network
dev_name: TUN interface name
advanced_settings: Advanced Settings
basic_settings: Basic Settings
listener_urls: Listener URLs
@@ -43,9 +44,10 @@ logging_copy_dir: Copy Log Path
disable_auto_launch: Disable Launch on Reboot
enable_auto_launch: Enable Launch on Reboot
exit: Exit
use_latency_first: Latency First Mode
chips_placeholder: 'e.g: {0}, press Enter to add'
hostname_placeholder: 'Leave blank and default to host name: {0}'
dev_name_placeholder: 'Note: When multiple networks use the same TUN interface name at the same time, there will be a conflict when setting the TUN''s IP. Leave blank to automatically generate a random name.'
off_text: Press to disable
on_text: Press to enable
show_config: Show Config
@@ -66,6 +68,10 @@ upload_bytes: Upload
download_bytes: Download
loss_rate: Loss Rate
status:
version: Version
local: Local
run_network: Run Network
stop_network: Stop Network
network_running: running
@@ -75,3 +81,12 @@ dhcp_experimental_warning: Experimental warning! if there is an IP conflict in t
tray:
show: Show / Hide
exit: Exit
about:
title: About
version: Version
author: Author
homepage: Homepage
license: License
description: 'EasyTier is a simple, safe and decentralized VPN networking solution implemented with the Rust language and Tokio framework.'
check_update: Check Update

View File

@@ -1,7 +1,7 @@
{
"name": "easytier-gui",
"type": "module",
"version": "1.2.2",
"version": "2.0.0",
"private": true,
"scripts": {
"dev": "vite",
@@ -12,49 +12,50 @@
"lint:fix": "eslint . --ignore-pattern src-tauri --fix"
},
"dependencies": {
"@primevue/themes": "^4.0.4",
"@tauri-apps/plugin-clipboard-manager": "2.0.0-rc.0",
"@tauri-apps/plugin-os": "2.0.0-rc.0",
"@tauri-apps/plugin-process": "2.0.0-rc.0",
"@tauri-apps/plugin-shell": "2.0.0-rc.0",
"@primevue/themes": "^4.0.5",
"@tauri-apps/plugin-autostart": "2.0.0-rc.1",
"@tauri-apps/plugin-clipboard-manager": "2.0.0-rc.1",
"@tauri-apps/plugin-os": "2.0.0-rc.1",
"@tauri-apps/plugin-process": "2.0.0-rc.1",
"@tauri-apps/plugin-shell": "2.0.0-rc.1",
"aura": "link:@primevue/themes/aura",
"pinia": "^2.2.1",
"ip-num": "1.5.1",
"pinia": "^2.2.2",
"primeflex": "^3.3.1",
"primeicons": "^7.0.0",
"primevue": "^4.0.4",
"primevue": "^4.0.5",
"tauri-plugin-vpnservice-api": "link:../tauri-plugin-vpnservice",
"vue": "^3.4.36",
"vue-i18n": "^9.13.1",
"vue": "^3.5.3",
"vue-i18n": "^10.0.0",
"vue-router": "^4.4.3"
},
"devDependencies": {
"@antfu/eslint-config": "^2.24.1",
"@intlify/unplugin-vue-i18n": "^4.0.0",
"@primevue/auto-import-resolver": "^4.0.4",
"@sveltejs/vite-plugin-svelte": "^3.1.1",
"@antfu/eslint-config": "^3.5.0",
"@intlify/unplugin-vue-i18n": "^5.0.0",
"@primevue/auto-import-resolver": "^4.0.5",
"@tauri-apps/api": "2.0.0-rc.0",
"@tauri-apps/cli": "2.0.0-rc.1",
"@types/node": "^20.14.14",
"@types/uuid": "^9.0.8",
"@vitejs/plugin-vue": "^5.1.2",
"@vue-macros/volar": "^0.19.1",
"@tauri-apps/cli": "2.0.0-rc.3",
"@types/node": "^22.5.4",
"@types/uuid": "^10.0.0",
"@vitejs/plugin-vue": "^5.1.3",
"@vue-macros/volar": "^0.29.1",
"autoprefixer": "^10.4.20",
"eslint": "^9.8.0",
"eslint": "^9.10.0",
"eslint-plugin-format": "^0.1.2",
"internal-ip": "^8.0.0",
"postcss": "^8.4.41",
"tailwindcss": "^3.4.7",
"typescript": "^5.5.4",
"unplugin-auto-import": "^0.17.8",
"unplugin-vue-components": "^0.27.3",
"unplugin-vue-macros": "^2.11.4",
"postcss": "^8.4.45",
"tailwindcss": "^3.4.10",
"typescript": "^5.6.2",
"unplugin-auto-import": "^0.18.2",
"unplugin-vue-components": "^0.27.4",
"unplugin-vue-macros": "^2.11.11",
"unplugin-vue-markdown": "^0.26.2",
"unplugin-vue-router": "^0.8.8",
"uuid": "^9.0.1",
"vite": "^5.3.5",
"vite-plugin-vue-devtools": "^7.3.7",
"unplugin-vue-router": "^0.10.8",
"uuid": "^10.0.0",
"vite": "^5.4.3",
"vite-plugin-vue-devtools": "^7.4.4",
"vite-plugin-vue-layouts": "^0.11.0",
"vue-i18n": "^9.13.1",
"vue-tsc": "^2.0.29"
"vue-i18n": "^10.0.0",
"vue-tsc": "^2.1.6"
}
}
}

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
[package]
name = "easytier-gui"
version = "1.2.2"
version = "2.0.0"
description = "EasyTier GUI"
authors = ["you"]
edition = "2021"
@@ -35,7 +35,6 @@ dashmap = "6.0"
privilege = "0.3"
gethostname = "0.5"
auto-launch = "0.5.0"
dunce = "1.0.4"
tauri-plugin-shell = "2.0.0-rc"
@@ -44,8 +43,12 @@ tauri-plugin-clipboard-manager = "2.0.0-rc"
tauri-plugin-positioner = { version = "2.0.0-rc", features = ["tray-icon"] }
tauri-plugin-vpnservice = { path = "../../tauri-plugin-vpnservice" }
tauri-plugin-os = "2.0.0-rc"
tauri-plugin-autostart = "2.0.0-rc"
[features]
# This feature is used for production builds or when a dev server is not specified, DO NOT REMOVE!!
custom-protocol = ["tauri/custom-protocol"]
[target.'cfg(not(any(target_os = "android", target_os = "ios")))'.dependencies]
tauri-plugin-single-instance = "2.0.0-rc.0"

View File

@@ -1,34 +1,3 @@
fn main() {
if !cfg!(debug_assertions) && cfg!(target_os = "windows") {
let mut windows = tauri_build::WindowsAttributes::new();
windows = windows.app_manifest(
r#"
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<dependency>
<dependentAssembly>
<assemblyIdentity
type="win32"
name="Microsoft.Windows.Common-Controls"
version="6.0.0.0"
processorArchitecture="*"
publicKeyToken="6595b64144ccf1df"
language="*"
/>
</dependentAssembly>
</dependency>
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
<security>
<requestedPrivileges>
<requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
</requestedPrivileges>
</security>
</trustInfo>
</assembly>
"#,
);
tauri_build::try_build(tauri_build::Attributes::new().windows_attributes(windows))
.expect("failed to run build script");
} else {
tauri_build::build();
}
}
fn main() {
tauri_build::build();
}

View File

@@ -44,6 +44,10 @@
"os:allow-arch",
"os:allow-hostname",
"os:allow-platform",
"os:allow-locale"
"os:allow-locale",
"autostart:default",
"autostart:allow-disable",
"autostart:allow-enable",
"autostart:allow-is-enabled"
]
}

View File

@@ -4,12 +4,10 @@
use std::collections::BTreeMap;
use anyhow::Context;
#[cfg(not(target_os = "android"))]
use auto_launch::AutoLaunchBuilder;
use dashmap::DashMap;
use easytier::{
common::config::{
ConfigLoader, FileLoggerConfig, NetworkIdentity, PeerConfig, TomlConfigLoader,
ConfigLoader, FileLoggerConfig, Flags, NetworkIdentity, PeerConfig, TomlConfigLoader,
VpnPortalConfig,
},
launcher::{NetworkInstance, NetworkInstanceRunningInfo},
@@ -19,6 +17,7 @@ use serde::{Deserialize, Serialize};
use tauri::Manager as _;
pub const AUTOSTART_ARG: &str = "--autostart";
#[derive(Deserialize, Serialize, PartialEq, Debug)]
enum NetworkingMethod {
@@ -61,6 +60,9 @@ struct NetworkConfig {
listener_urls: Vec<String>,
rpc_port: i32,
latency_first: bool,
dev_name: String,
}
impl NetworkConfig {
@@ -137,7 +139,7 @@ impl NetworkConfig {
}
cfg.set_rpc_portal(
format!("127.0.0.1:{}", self.rpc_port)
format!("0.0.0.0:{}", self.rpc_port)
.parse()
.with_context(|| format!("failed to parse rpc portal port: {}", self.rpc_port))?,
);
@@ -161,7 +163,10 @@ impl NetworkConfig {
})?,
});
}
let mut flags = Flags::default();
flags.latency_first = self.latency_first;
flags.dev_name = self.dev_name.clone();
cfg.set_flags(flags);
Ok(cfg)
}
}
@@ -172,6 +177,18 @@ static INSTANCE_MAP: once_cell::sync::Lazy<DashMap<String, NetworkInstance>> =
static mut LOGGER_LEVEL_SENDER: once_cell::sync::Lazy<Option<NewFilterSender>> =
once_cell::sync::Lazy::new(Default::default);
#[tauri::command]
fn easytier_version() -> Result<String, String> {
Ok(easytier::VERSION.to_string())
}
#[tauri::command]
fn is_autostart() -> Result<bool, String> {
let args: Vec<String> = std::env::args().collect();
println!("{:?}", args);
Ok(args.contains(&AUTOSTART_ARG.to_owned()))
}
// Learn more about Tauri commands at https://tauri.app/v1/guides/features/command
#[tauri::command]
fn parse_network_config(cfg: NetworkConfig) -> Result<String, String> {
@@ -224,11 +241,6 @@ fn get_os_hostname() -> Result<String, String> {
Ok(gethostname::gethostname().to_string_lossy().to_string())
}
#[tauri::command]
fn set_auto_launch_status(app_handle: tauri::AppHandle, enable: bool) -> Result<bool, String> {
Ok(init_launch(&app_handle, enable).map_err(|e| e.to_string())?)
}
#[tauri::command]
fn set_logging_level(level: String) -> Result<(), String> {
let sender = unsafe { LOGGER_LEVEL_SENDER.as_ref().unwrap() };
@@ -262,82 +274,19 @@ fn check_sudo() -> bool {
use std::env::current_exe;
let is_elevated = privilege::user::privileged();
if !is_elevated {
let Ok(my_exe) = current_exe() else {
let Ok(exe) = current_exe() else {
return true;
};
let mut elevated_cmd = privilege::runas::Command::new(my_exe);
let _ = elevated_cmd.force_prompt(true).gui(true).run();
let args: Vec<String> = std::env::args().collect();
let mut elevated_cmd = privilege::runas::Command::new(exe);
if args.contains(&AUTOSTART_ARG.to_owned()) {
elevated_cmd.arg(AUTOSTART_ARG);
}
let _ = elevated_cmd.force_prompt(true).hide(true).gui(true).run();
}
is_elevated
}
#[cfg(target_os = "android")]
pub fn init_launch(_app_handle: &tauri::AppHandle, _enable: bool) -> Result<bool, anyhow::Error> {
Ok(false)
}
/// init the auto launch
#[cfg(not(target_os = "android"))]
pub fn init_launch(_app_handle: &tauri::AppHandle, enable: bool) -> Result<bool, anyhow::Error> {
use std::env::current_exe;
let app_exe = current_exe()?;
let app_exe = dunce::canonicalize(app_exe)?;
let app_name = app_exe
.file_stem()
.and_then(|f| f.to_str())
.ok_or(anyhow::anyhow!("failed to get file stem"))?;
let app_path = app_exe
.as_os_str()
.to_str()
.ok_or(anyhow::anyhow!("failed to get app_path"))?
.to_string();
#[cfg(target_os = "windows")]
let app_path = format!("\"{app_path}\"");
// use the /Applications/easytier-gui.app
#[cfg(target_os = "macos")]
let app_path = (|| -> Option<String> {
let path = std::path::PathBuf::from(&app_path);
let path = path.parent()?.parent()?.parent()?;
let extension = path.extension()?.to_str()?;
match extension == "app" {
true => Some(path.as_os_str().to_str()?.to_string()),
false => None,
}
})()
.unwrap_or(app_path);
#[cfg(target_os = "linux")]
let app_path = {
let appimage = _app_handle.env().appimage;
appimage
.and_then(|p| p.to_str().map(|s| s.to_string()))
.unwrap_or(app_path)
};
let auto = AutoLaunchBuilder::new()
.set_app_name(app_name)
.set_app_path(&app_path)
.build()
.with_context(|| "failed to build auto launch")?;
if enable && !auto.is_enabled().unwrap_or(false) {
// avoid registering the login item repeatedly
let _ = auto.disable();
auto.enable()
.with_context(|| "failed to enable auto launch")?
} else if !enable {
let _ = auto.disable();
}
let enabled = auto.is_enabled()?;
Ok(enabled)
}
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
#[cfg(not(target_os = "android"))]
@@ -345,12 +294,41 @@ pub fn run() {
use std::process;
process::exit(0);
}
tauri::Builder::default()
#[cfg(not(target_os = "android"))]
utils::setup_panic_handler();
let mut builder = tauri::Builder::default();
#[cfg(not(target_os = "android"))]
{
use tauri_plugin_autostart::MacosLauncher;
builder = builder.plugin(tauri_plugin_autostart::init(
MacosLauncher::LaunchAgent,
Some(vec![AUTOSTART_ARG]),
));
}
#[cfg(not(any(target_os = "android", target_os = "ios")))]
{
builder = builder.plugin(tauri_plugin_single_instance::init(|app, _args, _cwd| {
app.webview_windows()
.values()
.next()
.expect("Sorry, no window found")
.set_focus()
.expect("Can't Bring Window to Focus");
}));
}
builder = builder
.plugin(tauri_plugin_os::init())
.plugin(tauri_plugin_clipboard_manager::init())
.plugin(tauri_plugin_process::init())
.plugin(tauri_plugin_shell::init())
.plugin(tauri_plugin_vpnservice::init())
.plugin(tauri_plugin_vpnservice::init());
builder
.setup(|app| {
// for logging config
let Ok(log_dir) = app.path().app_log_dir() else {
@@ -382,7 +360,7 @@ pub fn run() {
toggle_window_visibility(app);
}
})
.icon(tauri::image::Image::from_bytes(include_bytes!(
.icon(tauri::image::Image::from_bytes(include_bytes!(
"../icons/icon.png"
))?)
.icon_as_template(false)
@@ -396,9 +374,10 @@ pub fn run() {
retain_network_instance,
collect_network_infos,
get_os_hostname,
set_auto_launch_status,
set_logging_level,
set_tun_fd
set_tun_fd,
is_autostart,
easytier_version
])
.on_window_event(|_win, event| match event {
#[cfg(not(target_os = "android"))]

View File

@@ -17,7 +17,7 @@
"createUpdaterArtifacts": false
},
"productName": "easytier-gui",
"version": "1.2.2",
"version": "2.0.0",
"identifier": "com.kkrainbow.easytier",
"plugins": {},
"app": {

View File

@@ -24,11 +24,13 @@ declare global {
const getActivePinia: typeof import('pinia')['getActivePinia']
const getCurrentInstance: typeof import('vue')['getCurrentInstance']
const getCurrentScope: typeof import('vue')['getCurrentScope']
const getEasytierVersion: typeof import('./composables/network')['getEasytierVersion']
const getOsHostname: typeof import('./composables/network')['getOsHostname']
const h: typeof import('vue')['h']
const initMobileService: typeof import('./composables/mobile_vpn')['initMobileService']
const initMobileVpnService: typeof import('./composables/mobile_vpn')['initMobileVpnService']
const inject: typeof import('vue')['inject']
const isAutostart: typeof import('./composables/network')['isAutostart']
const isProxy: typeof import('vue')['isProxy']
const isReactive: typeof import('vue')['isReactive']
const isReadonly: typeof import('vue')['isReadonly']
@@ -43,8 +45,8 @@ declare global {
const nextTick: typeof import('vue')['nextTick']
const onActivated: typeof import('vue')['onActivated']
const onBeforeMount: typeof import('vue')['onBeforeMount']
const onBeforeRouteLeave: typeof import('vue-router/auto')['onBeforeRouteLeave']
const onBeforeRouteUpdate: typeof import('vue-router/auto')['onBeforeRouteUpdate']
const onBeforeRouteLeave: typeof import('vue-router')['onBeforeRouteLeave']
const onBeforeRouteUpdate: typeof import('vue-router')['onBeforeRouteUpdate']
const onBeforeUnmount: typeof import('vue')['onBeforeUnmount']
const onBeforeUpdate: typeof import('vue')['onBeforeUpdate']
const onDeactivated: typeof import('vue')['onDeactivated']
@@ -89,8 +91,8 @@ declare global {
const useI18n: typeof import('vue-i18n')['useI18n']
const useLink: typeof import('vue-router/auto')['useLink']
const useNetworkStore: typeof import('./stores/network')['useNetworkStore']
const useRoute: typeof import('vue-router/auto')['useRoute']
const useRouter: typeof import('vue-router/auto')['useRouter']
const useRoute: typeof import('vue-router')['useRoute']
const useRouter: typeof import('vue-router')['useRouter']
const useSlots: typeof import('vue')['useSlots']
const useTray: typeof import('./composables/tray')['useTray']
const watch: typeof import('vue')['watch']
@@ -120,22 +122,22 @@ declare module 'vue' {
readonly customRef: UnwrapRef<typeof import('vue')['customRef']>
readonly defineAsyncComponent: UnwrapRef<typeof import('vue')['defineAsyncComponent']>
readonly defineComponent: UnwrapRef<typeof import('vue')['defineComponent']>
readonly definePage: UnwrapRef<typeof import('unplugin-vue-router/runtime')['definePage']>
readonly defineStore: UnwrapRef<typeof import('pinia')['defineStore']>
readonly effectScope: UnwrapRef<typeof import('vue')['effectScope']>
readonly generateMenuItem: UnwrapRef<typeof import('./composables/tray')['generateMenuItem']>
readonly getActivePinia: UnwrapRef<typeof import('pinia')['getActivePinia']>
readonly getCurrentInstance: UnwrapRef<typeof import('vue')['getCurrentInstance']>
readonly getCurrentScope: UnwrapRef<typeof import('vue')['getCurrentScope']>
readonly getEasytierVersion: UnwrapRef<typeof import('./composables/network')['getEasytierVersion']>
readonly getOsHostname: UnwrapRef<typeof import('./composables/network')['getOsHostname']>
readonly h: UnwrapRef<typeof import('vue')['h']>
readonly initMobileVpnService: UnwrapRef<typeof import('./composables/mobile_vpn')['initMobileVpnService']>
readonly inject: UnwrapRef<typeof import('vue')['inject']>
readonly isAutostart: UnwrapRef<typeof import('./composables/network')['isAutostart']>
readonly isProxy: UnwrapRef<typeof import('vue')['isProxy']>
readonly isReactive: UnwrapRef<typeof import('vue')['isReactive']>
readonly isReadonly: UnwrapRef<typeof import('vue')['isReadonly']>
readonly isRef: UnwrapRef<typeof import('vue')['isRef']>
readonly loadRunningInstanceIdsFromLocalStorage: UnwrapRef<typeof import('./stores/network')['loadRunningInstanceIdsFromLocalStorage']>
readonly mapActions: UnwrapRef<typeof import('pinia')['mapActions']>
readonly mapGetters: UnwrapRef<typeof import('pinia')['mapGetters']>
readonly mapState: UnwrapRef<typeof import('pinia')['mapState']>
@@ -145,8 +147,8 @@ declare module 'vue' {
readonly nextTick: UnwrapRef<typeof import('vue')['nextTick']>
readonly onActivated: UnwrapRef<typeof import('vue')['onActivated']>
readonly onBeforeMount: UnwrapRef<typeof import('vue')['onBeforeMount']>
readonly onBeforeRouteLeave: UnwrapRef<typeof import('vue-router/auto')['onBeforeRouteLeave']>
readonly onBeforeRouteUpdate: UnwrapRef<typeof import('vue-router/auto')['onBeforeRouteUpdate']>
readonly onBeforeRouteLeave: UnwrapRef<typeof import('vue-router')['onBeforeRouteLeave']>
readonly onBeforeRouteUpdate: UnwrapRef<typeof import('vue-router')['onBeforeRouteUpdate']>
readonly onBeforeUnmount: UnwrapRef<typeof import('vue')['onBeforeUnmount']>
readonly onBeforeUpdate: UnwrapRef<typeof import('vue')['onBeforeUpdate']>
readonly onDeactivated: UnwrapRef<typeof import('vue')['onDeactivated']>
@@ -168,7 +170,6 @@ declare module 'vue' {
readonly retainNetworkInstance: UnwrapRef<typeof import('./composables/network')['retainNetworkInstance']>
readonly runNetworkInstance: UnwrapRef<typeof import('./composables/network')['runNetworkInstance']>
readonly setActivePinia: UnwrapRef<typeof import('pinia')['setActivePinia']>
readonly setAutoLaunchStatus: UnwrapRef<typeof import('./composables/network')['setAutoLaunchStatus']>
readonly setLoggingLevel: UnwrapRef<typeof import('./composables/network')['setLoggingLevel']>
readonly setMapStoreSuffix: UnwrapRef<typeof import('pinia')['setMapStoreSuffix']>
readonly setTrayMenu: UnwrapRef<typeof import('./composables/tray')['setTrayMenu']>
@@ -191,8 +192,8 @@ declare module 'vue' {
readonly useI18n: UnwrapRef<typeof import('vue-i18n')['useI18n']>
readonly useLink: UnwrapRef<typeof import('vue-router/auto')['useLink']>
readonly useNetworkStore: UnwrapRef<typeof import('./stores/network')['useNetworkStore']>
readonly useRoute: UnwrapRef<typeof import('vue-router/auto')['useRoute']>
readonly useRouter: UnwrapRef<typeof import('vue-router/auto')['useRouter']>
readonly useRoute: UnwrapRef<typeof import('vue-router')['useRoute']>
readonly useRouter: UnwrapRef<typeof import('vue-router')['useRouter']>
readonly useSlots: UnwrapRef<typeof import('vue')['useSlots']>
readonly useTray: UnwrapRef<typeof import('./composables/tray')['useTray']>
readonly watch: UnwrapRef<typeof import('vue')['watch']>

View File

@@ -0,0 +1,27 @@
<script setup lang="ts">
import { getEasytierVersion } from '~/composables/network'
const { t } = useI18n()
const etVersion = ref('')
onMounted(async () => {
etVersion.value = await getEasytierVersion()
})
</script>
<template>
<Card>
<template #title>
Easytier - {{ t('about.version') }}: {{ etVersion }}
</template>
<template #content>
<p class="mb-1">
{{ t('about.description') }}
</p>
</template>
</Card>
</template>
<style scoped lang="postcss">
</style>

View File

@@ -1,11 +1,10 @@
<script setup lang="ts">
import InputGroup from 'primevue/inputgroup'
import InputGroupAddon from 'primevue/inputgroupaddon'
import { getOsHostname } from '~/composables/network'
import { NetworkingMethod } from '~/types/network'
const { t } = useI18n()
import { ping } from 'tauri-plugin-vpnservice-api'
import { getOsHostname } from '~/composables/network'
import { NetworkingMethod } from '~/types/network'
const props = defineProps<{
configInvalid?: boolean
@@ -14,6 +13,8 @@ const props = defineProps<{
defineEmits(['runNetwork'])
const { t } = useI18n()
const networking_methods = ref([
{ value: NetworkingMethod.PublicServer, label: () => t('public_server') },
{ value: NetworkingMethod.Manual, label: () => t('manual') },
@@ -32,24 +33,26 @@ const curNetwork = computed(() => {
return networkStore.curNetwork
})
const protos:{ [proto: string] : number; } = {'tcp': 11010, 'udp': 11010, 'wg':11011, 'ws': 11011, 'wss': 11012}
const protos: { [proto: string]: number } = { tcp: 11010, udp: 11010, wg: 11011, ws: 11011, wss: 11012 }
function searchUrlSuggestions(e: { query: string }): string[] {
const query = e.query
let ret = []
const ret = []
// if query match "^\w+:.*", then no proto prefix
if (query.match(/^\w+:.*/)) {
// if query is a valid url, then add to suggestions
try {
new URL(query)
ret.push(query)
} catch (e) {}
} else {
for (let proto in protos) {
let item = proto + '://' + query
}
catch (e) {}
}
else {
for (const proto in protos) {
let item = `${proto}://${query}`
// if query match ":\d+$", then no port suffix
if (!query.match(/:\d+$/)) {
item += ':' + protos[proto]
item += `:${protos[proto]}`
}
ret.push(item)
}
@@ -58,45 +61,45 @@ function searchUrlSuggestions(e: { query: string }): string[] {
return ret
}
const publicServerSuggestions = ref([''])
const searchPresetPublicServers = (e: { query: string }) => {
const presetPublicServers = [
'tcp://easytier.public.kkrainbow.top:11010',
]
function searchPresetPublicServers(e: { query: string }) {
const presetPublicServers = [
'tcp://easytier.public.kkrainbow.top:11010',
]
let query = e.query
// if query is sub string of presetPublicServers, add to suggestions
let ret = presetPublicServers.filter((item) => item.includes(query))
// add additional suggestions
if (query.length > 0) {
ret = ret.concat(searchUrlSuggestions(e))
}
const query = e.query
// if query is sub string of presetPublicServers, add to suggestions
let ret = presetPublicServers.filter(item => item.includes(query))
// add additional suggestions
if (query.length > 0) {
ret = ret.concat(searchUrlSuggestions(e))
}
publicServerSuggestions.value = ret
publicServerSuggestions.value = ret
}
const peerSuggestions = ref([''])
const searchPeerSuggestions = (e: { query: string }) => {
function searchPeerSuggestions(e: { query: string }) {
peerSuggestions.value = searchUrlSuggestions(e)
}
const listenerSuggestions = ref([''])
const searchListenerSuggestiong = (e: { query: string }) => {
let ret = []
function searchListenerSuggestiong(e: { query: string }) {
const ret = []
for (let proto in protos) {
let item = proto + '://0.0.0.0:';
for (const proto in protos) {
let item = `${proto}://0.0.0.0:`
// if query is a number, use it as port
if (e.query.match(/^\d+$/)) {
item += e.query
} else {
}
else {
item += protos[proto]
}
if (item.includes(e.query)) {
ret.push(item)
}
@@ -112,7 +115,7 @@ const searchListenerSuggestiong = (e: { query: string }) => {
function validateHostname() {
if (curNetwork.value.hostname) {
// eslint no-useless-escape
let name = curNetwork.value.hostname!.replaceAll(/[^\u4E00-\u9FA5a-zA-Z0-9\-]*/g, '')
let name = curNetwork.value.hostname!.replaceAll(/[^\u4E00-\u9FA5a-z0-9\-]*/gi, '')
if (name.length > 32)
name = name.substring(0, 32)
@@ -132,7 +135,7 @@ onMounted(async () => {
<template>
<div class="flex flex-column h-full">
<div class="flex flex-column">
<div class="w-10/12 self-center ">
<div class="w-10/12 self-center mb-3">
<Message severity="warn">
{{ t('dhcp_experimental_warning') }}
</Message>
@@ -151,8 +154,10 @@ onMounted(async () => {
</label>
</div>
<InputGroup>
<InputText id="virtual_ip" v-model="curNetwork.virtual_ipv4" :disabled="curNetwork.dhcp"
aria-describedby="virtual_ipv4-help" />
<InputText
id="virtual_ip" v-model="curNetwork.virtual_ipv4" :disabled="curNetwork.dhcp"
aria-describedby="virtual_ipv4-help"
/>
<InputGroupAddon>
<span>/24</span>
</InputGroupAddon>
@@ -167,23 +172,29 @@ onMounted(async () => {
</div>
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="network_secret">{{ t('network_secret') }}</label>
<InputText id="network_secret" v-model="curNetwork.network_secret"
aria-describedby=" network_secret-help" />
<InputText
id="network_secret" v-model="curNetwork.network_secret"
aria-describedby="network_secret-help"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="nm">{{ t('networking_method') }}</label>
<SelectButton v-model="curNetwork.networking_method" :options="networking_methods" :option-label="(v) => v.label()" option-value="value"></SelectButton>
<SelectButton v-model="curNetwork.networking_method" :options="networking_methods" :option-label="(v) => v.label()" option-value="value" />
<div class="items-center flex flex-row p-fluid gap-x-1">
<AutoComplete v-if="curNetwork.networking_method === NetworkingMethod.Manual" id="chips"
<AutoComplete
v-if="curNetwork.networking_method === NetworkingMethod.Manual" id="chips"
v-model="curNetwork.peer_urls" :placeholder="t('chips_placeholder', ['tcp://8.8.8.8:11010'])"
class="grow" multiple fluid :suggestions="peerSuggestions" @complete="searchPeerSuggestions"/>
class="grow" multiple fluid :suggestions="peerSuggestions" @complete="searchPeerSuggestions"
/>
<AutoComplete v-if="curNetwork.networking_method === NetworkingMethod.PublicServer" :suggestions="publicServerSuggestions"
:virtualScrollerOptions="{ itemSize: 38 }" class="grow" dropdown @complete="searchPresetPublicServers" :completeOnFocus="true"
v-model="curNetwork.public_server_url"/>
<AutoComplete
v-if="curNetwork.networking_method === NetworkingMethod.PublicServer" v-model="curNetwork.public_server_url"
:suggestions="publicServerSuggestions" :virtual-scroller-options="{ itemSize: 38 }" class="grow" dropdown :complete-on-focus="true"
@complete="searchPresetPublicServers"
/>
</div>
</div>
</div>
@@ -194,67 +205,102 @@ onMounted(async () => {
<Panel :header="t('advanced_settings')" toggleable collapsed>
<div class="flex flex-column gap-y-2">
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<div class="flex align-items-center">
<Checkbox v-model="curNetwork.latency_first" input-id="use_latency_first" :binary="true" />
<label for="use_latency_first" class="ml-2"> {{ t('use_latency_first') }} </label>
</div>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="hostname">{{ t('hostname') }}</label>
<InputText id="hostname" v-model="curNetwork.hostname" aria-describedby="hostname-help" :format="true"
:placeholder="t('hostname_placeholder', [osHostname])" @blur="validateHostname" />
<InputText
id="hostname" v-model="curNetwork.hostname" aria-describedby="hostname-help" :format="true"
:placeholder="t('hostname_placeholder', [osHostname])" @blur="validateHostname"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap w-full">
<div class="flex flex-column gap-2 grow p-fluid">
<label for="username">{{ t('proxy_cidrs') }}</label>
<Chips id="chips" v-model="curNetwork.proxy_cidrs"
:placeholder="t('chips_placeholder', ['10.0.0.0/24'])" separator=" " class="w-full" />
<Chips
id="chips" v-model="curNetwork.proxy_cidrs"
:placeholder="t('chips_placeholder', ['10.0.0.0/24'])" separator=" " class="w-full"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap ">
<div class="flex flex-column gap-2 grow">
<label for="username">VPN Portal</label>
<ToggleButton v-model="curNetwork.enable_vpn_portal" on-icon="pi pi-check" off-icon="pi pi-times"
:on-label="t('off_text')" :off-label="t('on_text')" class="w-48"/>
<div class="items-center flex flex-row gap-x-4" v-if="curNetwork.enable_vpn_portal">
<div class="min-w-64">
<InputGroup>
<InputText v-model="curNetwork.vpn_portal_client_network_addr"
:placeholder="t('vpn_portal_client_network')" />
<InputGroupAddon>
<span>/{{ curNetwork.vpn_portal_client_network_len }}</span>
</InputGroupAddon>
</InputGroup>
<ToggleButton
v-model="curNetwork.enable_vpn_portal" on-icon="pi pi-check" off-icon="pi pi-times"
:on-label="t('off_text')" :off-label="t('on_text')" class="w-48"
/>
<div v-if="curNetwork.enable_vpn_portal" class="items-center flex flex-row gap-x-4">
<div class="min-w-64">
<InputGroup>
<InputText
v-model="curNetwork.vpn_portal_client_network_addr"
:placeholder="t('vpn_portal_client_network')"
/>
<InputGroupAddon>
<span>/{{ curNetwork.vpn_portal_client_network_len }}</span>
</InputGroupAddon>
</InputGroup>
<InputNumber v-model="curNetwork.vpn_portal_listen_port" :allow-empty="false"
:format="false" :min="0" :max="65535" class="w-8" fluid/>
</div>
<InputNumber
v-model="curNetwork.vpn_portal_listen_port" :allow-empty="false"
:format="false" :min="0" :max="65535" class="w-8" fluid
/>
</div>
</div>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 grow p-fluid">
<label for="listener_urls">{{ t('listener_urls') }}</label>
<AutoComplete id="listener_urls" :suggestions="listenerSuggestions"
class="w-full" dropdown @complete="searchListenerSuggestiong" :completeOnFocus="true"
:placeholder="t('chips_placeholder', ['tcp://1.1.1.1:11010'])"
v-model="curNetwork.listener_urls" multiple/>
<AutoComplete
id="listener_urls" v-model="curNetwork.listener_urls"
:suggestions="listenerSuggestions" class="w-full" dropdown :complete-on-focus="true"
:placeholder="t('chips_placeholder', ['tcp://1.1.1.1:11010'])"
multiple @complete="searchListenerSuggestiong"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="rpc_port">{{ t('rpc_port') }}</label>
<InputNumber id="rpc_port" v-model="curNetwork.rpc_port" aria-describedby="username-help"
:format="false" :min="0" :max="65535" />
<InputNumber
id="rpc_port" v-model="curNetwork.rpc_port" aria-describedby="rpc_port-help"
:format="false" :min="0" :max="65535"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="dev_name">{{ t('dev_name') }}</label>
<InputText
id="dev_name" v-model="curNetwork.dev_name" aria-describedby="dev_name-help" :format="true"
:placeholder="t('dev_name_placeholder')"
/>
</div>
</div>
</div>
</Panel>
<div class="flex pt-4 justify-content-center">
<Button :label="t('run_network')" icon="pi pi-arrow-right" icon-pos="right" :disabled="configInvalid"
@click="$emit('runNetwork', curNetwork)" />
<Button
:label="t('run_network')" icon="pi pi-arrow-right" icon-pos="right" :disabled="configInvalid"
@click="$emit('runNetwork', curNetwork)"
/>
</div>
</div>
</div>

View File

@@ -1,11 +1,13 @@
<script setup lang="ts">
import type { NodeInfo } from '~/types/network'
const { t } = useI18n()
import { IPv4, IPv6 } from 'ip-num/IPNumber'
import type { NodeInfo, PeerRoutePair } from '~/types/network'
const props = defineProps<{
instanceId?: string
}>()
const { t } = useI18n()
const networkStore = useNetworkStore()
const curNetwork = computed(() => {
@@ -24,8 +26,16 @@ const curNetworkInst = computed(() => {
})
const peerRouteInfos = computed(() => {
if (curNetworkInst.value)
return curNetworkInst.value.detail?.peer_route_pairs || []
if (curNetworkInst.value) {
const my_node_info = curNetworkInst.value.detail?.my_node_info
return [{
route: {
ipv4_addr: my_node_info?.virtual_ipv4,
hostname: my_node_info?.hostname,
version: my_node_info?.version,
},
}, ...(curNetworkInst.value.detail?.peer_route_pairs || [])]
}
return []
})
@@ -33,8 +43,9 @@ const peerRouteInfos = computed(() => {
function routeCost(info: any) {
if (info.route) {
const cost = info.route.cost
return cost === 1 ? 'p2p' : `relay(${cost})`
return cost ? cost === 1 ? 'p2p' : `relay(${cost})` : t('status.local')
}
return '?'
}
@@ -73,29 +84,33 @@ function humanFileSize(bytes: number, si = false, dp = 1) {
return `${bytes.toFixed(dp)} ${units[u]}`
}
function latencyMs(info: any) {
function latencyMs(info: PeerRoutePair) {
let lat_us_sum = statsCommon(info, 'stats.latency_us')
if (lat_us_sum === undefined)
return ''
lat_us_sum = lat_us_sum / 1000 / info.peer.conns.length
lat_us_sum = lat_us_sum / 1000 / info.peer!.conns.length
return `${lat_us_sum % 1 > 0 ? Math.round(lat_us_sum) + 1 : Math.round(lat_us_sum)}ms`
}
function txBytes(info: any) {
function txBytes(info: PeerRoutePair) {
const tx = statsCommon(info, 'stats.tx_bytes')
return tx ? humanFileSize(tx) : ''
}
function rxBytes(info: any) {
function rxBytes(info: PeerRoutePair) {
const rx = statsCommon(info, 'stats.rx_bytes')
return rx ? humanFileSize(rx) : ''
}
function lossRate(info: any) {
function lossRate(info: PeerRoutePair) {
const lossRate = statsCommon(info, 'loss_rate')
return lossRate !== undefined ? `${Math.round(lossRate * 100)}%` : ''
}
function version(info: PeerRoutePair) {
return info.route.version === '' ? 'unknown' : info.route.version
}
const myNodeInfo = computed(() => {
if (!curNetworkInst.value)
return {} as NodeInfo
@@ -117,8 +132,16 @@ const myNodeInfoChips = computed(() => {
if (!my_node_info)
return chips
// virtual ipv4
// TUN Device Name
const dev_name = curNetworkInst.value.detail?.dev_name
if (dev_name) {
chips.push({
label: `TUN Device Name: ${dev_name}`,
icon: '',
} as Chip)
}
// virtual ipv4
chips.push({
label: `Virtual IPv4: ${my_node_info.virtual_ipv4}`,
icon: '',
@@ -128,7 +151,7 @@ const myNodeInfoChips = computed(() => {
const local_ipv4s = my_node_info.ips?.interface_ipv4s
for (const [idx, ip] of local_ipv4s?.entries()) {
chips.push({
label: `Local IPv4 ${idx}: ${ip}`,
label: `Local IPv4 ${idx}: ${IPv4.fromNumber(ip.addr)}`,
icon: '',
} as Chip)
}
@@ -137,7 +160,11 @@ const myNodeInfoChips = computed(() => {
const local_ipv6s = my_node_info.ips?.interface_ipv6s
for (const [idx, ip] of local_ipv6s?.entries()) {
chips.push({
label: `Local IPv6 ${idx}: ${ip}`,
label: `Local IPv6 ${idx}: ${IPv6.fromBigInt((BigInt(ip.part1) << BigInt(96))
+ (BigInt(ip.part2) << BigInt(64))
+ (BigInt(ip.part3) << BigInt(32))
+ BigInt(ip.part4),
)}`,
icon: '',
} as Chip)
}
@@ -146,7 +173,19 @@ const myNodeInfoChips = computed(() => {
const public_ip = my_node_info.ips?.public_ipv4
if (public_ip) {
chips.push({
label: `Public IP: ${public_ip}`,
label: `Public IP: ${IPv4.fromNumber(public_ip.addr)}`,
icon: '',
} as Chip)
}
const public_ipv6 = my_node_info.ips?.public_ipv6
if (public_ipv6) {
chips.push({
label: `Public IPv6: ${IPv6.fromBigInt((BigInt(public_ipv6.part1) << BigInt(96))
+ (BigInt(public_ipv6.part2) << BigInt(64))
+ (BigInt(public_ipv6.part3) << BigInt(32))
+ BigInt(public_ipv6.part4),
)}`,
icon: '',
} as Chip)
}
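The chips above rebuild printable addresses from the packed Ipv4Addr/Ipv6Addr structs the backend now reports (see the new interfaces in types/network.ts further down). A minimal TypeScript sketch of that conversion, using the same ip-num calls already imported in this component; the helper names ipv4ToString and ipv6ToString are hypothetical and not part of the patch:
import { IPv4, IPv6 } from 'ip-num/IPNumber'
import type { Ipv4Addr, Ipv6Addr } from '~/types/network'
// u32 -> dotted quad, e.g. { addr: 3232235777 } -> '192.168.1.1'
function ipv4ToString(ip: Ipv4Addr): string {
  return IPv4.fromNumber(ip.addr).toString()
}
// four u32 parts, most significant first, packed into one 128-bit value
function ipv6ToString(ip: Ipv6Addr): string {
  const value = (BigInt(ip.part1) << BigInt(96))
    + (BigInt(ip.part2) << BigInt(64))
    + (BigInt(ip.part3) << BigInt(32))
    + BigInt(ip.part4)
  return IPv6.fromBigInt(value).toString()
}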
@@ -373,6 +412,7 @@ function showEventLogs() {
<Column :field="txBytes" style="width: 80px;" :header="t('upload_bytes')" />
<Column :field="rxBytes" style="width: 80px;" :header="t('download_bytes')" />
<Column :field="lossRate" style="width: 100px;" :header="t('loss_rate')" />
<Column :field="version" style="width: 100px;" :header="t('status.version')" />
</DataTable>
</template>
</Card>

View File

@@ -1,183 +1,184 @@
import { addPluginListener } from '@tauri-apps/api/core';
import { prepare_vpn, start_vpn, stop_vpn } from 'tauri-plugin-vpnservice-api';
import { Route } from '~/types/network';
import { addPluginListener } from '@tauri-apps/api/core'
import { prepare_vpn, start_vpn, stop_vpn } from 'tauri-plugin-vpnservice-api'
import type { Route } from '~/types/network'
const networkStore = useNetworkStore()
interface vpnStatus {
running: boolean
ipv4Addr: string | null | undefined
ipv4Cidr: number | null | undefined
routes: string[]
running: boolean
ipv4Addr: string | null | undefined
ipv4Cidr: number | null | undefined
routes: string[]
}
var curVpnStatus: vpnStatus = {
running: false,
ipv4Addr: undefined,
ipv4Cidr: undefined,
routes: []
const curVpnStatus: vpnStatus = {
running: false,
ipv4Addr: undefined,
ipv4Cidr: undefined,
routes: [],
}
async function waitVpnStatus(target_status: boolean, timeout_sec: number) {
let start_time = Date.now()
while (curVpnStatus.running !== target_status) {
if (Date.now() - start_time > timeout_sec * 1000) {
throw new Error('wait vpn status timeout')
}
await new Promise(r => setTimeout(r, 50))
const start_time = Date.now()
while (curVpnStatus.running !== target_status) {
if (Date.now() - start_time > timeout_sec * 1000) {
throw new Error('wait vpn status timeout')
}
await new Promise(r => setTimeout(r, 50))
}
}
async function doStopVpn() {
if (!curVpnStatus.running) {
return
}
console.log('stop vpn')
let stop_ret = await stop_vpn()
console.log('stop vpn', JSON.stringify((stop_ret)))
await waitVpnStatus(false, 3)
if (!curVpnStatus.running) {
return
}
console.log('stop vpn')
const stop_ret = await stop_vpn()
console.log('stop vpn', JSON.stringify((stop_ret)))
await waitVpnStatus(false, 3)
curVpnStatus.ipv4Addr = undefined
curVpnStatus.routes = []
curVpnStatus.ipv4Addr = undefined
curVpnStatus.routes = []
}
async function doStartVpn(ipv4Addr: string, cidr: number, routes: string[]) {
if (curVpnStatus.running) {
return
}
if (curVpnStatus.running) {
return
}
console.log('start vpn')
let start_ret = await start_vpn({
"ipv4Addr": ipv4Addr + '/' + cidr,
"routes": routes,
"disallowedApplications": ["com.kkrainbow.easytier"],
"mtu": 1300,
});
if (start_ret?.errorMsg?.length) {
throw new Error(start_ret.errorMsg)
}
await waitVpnStatus(true, 3)
console.log('start vpn')
const start_ret = await start_vpn({
ipv4Addr: `${ipv4Addr}/${cidr}`,
routes,
disallowedApplications: ['com.kkrainbow.easytier'],
mtu: 1300,
})
if (start_ret?.errorMsg?.length) {
throw new Error(start_ret.errorMsg)
}
await waitVpnStatus(true, 3)
curVpnStatus.ipv4Addr = ipv4Addr
curVpnStatus.routes = routes
curVpnStatus.ipv4Addr = ipv4Addr
curVpnStatus.routes = routes
}
async function onVpnServiceStart(payload: any) {
console.log('vpn service start', JSON.stringify(payload))
curVpnStatus.running = true
if (payload.fd) {
setTunFd(networkStore.networkInstanceIds[0], payload.fd)
}
console.log('vpn service start', JSON.stringify(payload))
curVpnStatus.running = true
if (payload.fd) {
setTunFd(networkStore.networkInstanceIds[0], payload.fd)
}
}
async function onVpnServiceStop(payload: any) {
console.log('vpn service stop', JSON.stringify(payload))
curVpnStatus.running = false
console.log('vpn service stop', JSON.stringify(payload))
curVpnStatus.running = false
}
async function registerVpnServiceListener() {
console.log('register vpn service listener')
await addPluginListener(
'vpnservice',
'vpn_service_start',
onVpnServiceStart
)
console.log('register vpn service listener')
await addPluginListener(
'vpnservice',
'vpn_service_start',
onVpnServiceStart,
)
await addPluginListener(
'vpnservice',
'vpn_service_stop',
onVpnServiceStop
)
await addPluginListener(
'vpnservice',
'vpn_service_stop',
onVpnServiceStop,
)
}
function getRoutesForVpn(routes: Route[]): string[] {
if (!routes) {
return []
}
if (!routes) {
return []
}
let ret = []
for (let r of routes) {
for (let cidr of r.proxy_cidrs) {
if (cidr.indexOf('/') === -1) {
cidr += '/32'
}
ret.push(cidr)
}
const ret = []
for (const r of routes) {
for (let cidr of r.proxy_cidrs) {
if (!cidr.includes('/')) {
cidr += '/32'
}
ret.push(cidr)
}
}
// sort and dedup
return Array.from(new Set(ret)).sort()
// sort and dedup
return Array.from(new Set(ret)).sort()
}
async function onNetworkInstanceChange() {
let insts = networkStore.networkInstanceIds
if (!insts) {
await doStopVpn()
return
const insts = networkStore.networkInstanceIds
if (!insts) {
await doStopVpn()
return
}
const curNetworkInfo = networkStore.networkInfos[insts[0]]
if (!curNetworkInfo || curNetworkInfo?.error_msg?.length) {
await doStopVpn()
return
}
const virtual_ip = curNetworkInfo?.node_info?.virtual_ipv4
if (!virtual_ip || !virtual_ip.length) {
await doStopVpn()
return
}
const routes = getRoutesForVpn(curNetworkInfo?.routes)
const ipChanged = virtual_ip !== curVpnStatus.ipv4Addr
const routesChanged = JSON.stringify(routes) !== JSON.stringify(curVpnStatus.routes)
if (ipChanged || routesChanged) {
console.log('virtual ip changed', JSON.stringify(curVpnStatus), virtual_ip)
try {
await doStopVpn()
}
catch (e) {
console.error(e)
}
const curNetworkInfo = networkStore.networkInfos[insts[0]]
if (!curNetworkInfo || curNetworkInfo?.error_msg?.length) {
await doStopVpn()
return
try {
await doStartVpn(virtual_ip, 24, routes)
}
const virtual_ip = curNetworkInfo?.node_info?.virtual_ipv4
if (!virtual_ip || !virtual_ip.length) {
await doStopVpn()
return
}
const routes = getRoutesForVpn(curNetworkInfo?.routes)
var ipChanged = virtual_ip !== curVpnStatus.ipv4Addr
var routesChanged = JSON.stringify(routes) !== JSON.stringify(curVpnStatus.routes)
if (ipChanged || routesChanged) {
console.log('virtual ip changed', JSON.stringify(curVpnStatus), virtual_ip)
try {
await doStopVpn()
} catch (e) {
console.error(e)
}
try {
await doStartVpn(virtual_ip, 24, routes)
} catch (e) {
console.error("start vpn failed, clear all network insts.", e)
networkStore.clearNetworkInstances()
await retainNetworkInstance(networkStore.networkInstanceIds)
}
return
catch (e) {
console.error('start vpn failed, clear all network insts.', e)
networkStore.clearNetworkInstances()
await retainNetworkInstance(networkStore.networkInstanceIds)
}
}
}
async function watchNetworkInstance() {
var subscribe_running = false
networkStore.$subscribe(async () => {
if (subscribe_running) {
return
}
subscribe_running = true
try {
await onNetworkInstanceChange()
} catch (_) {
}
subscribe_running = false
})
let subscribe_running = false
networkStore.$subscribe(async () => {
if (subscribe_running) {
return
}
subscribe_running = true
try {
await onNetworkInstanceChange()
}
catch (_) {
}
subscribe_running = false
})
}
export async function initMobileVpnService() {
await registerVpnServiceListener()
await watchNetworkInstance()
await registerVpnServiceListener()
await watchNetworkInstance()
}
export async function prepareVpnService() {
console.log('prepare vpn')
let prepare_ret = await prepare_vpn()
console.log('prepare vpn', JSON.stringify((prepare_ret)))
if (prepare_ret?.errorMsg?.length) {
throw new Error(prepare_ret.errorMsg)
}
console.log('prepare vpn')
const prepare_ret = await prepare_vpn()
console.log('prepare vpn', JSON.stringify((prepare_ret)))
if (prepare_ret?.errorMsg?.length) {
throw new Error(prepare_ret.errorMsg)
}
}

View File

@@ -1,4 +1,4 @@
import { invoke } from "@tauri-apps/api/core"
import { invoke } from '@tauri-apps/api/core'
import type { NetworkConfig, NetworkInstanceRunningInfo } from '~/types/network'
@@ -22,8 +22,8 @@ export async function getOsHostname() {
return await invoke<string>('get_os_hostname')
}
export async function setAutoLaunchStatus(enable: boolean) {
return await invoke<boolean>('set_auto_launch_status', { enable })
export async function isAutostart() {
return await invoke<boolean>('is_autostart')
}
export async function setLoggingLevel(level: string) {
@@ -33,3 +33,7 @@ export async function setLoggingLevel(level: string) {
export async function setTunFd(instanceId: string, fd: number) {
return await invoke('set_tun_fd', { instanceId, fd })
}
export async function getEasytierVersion() {
return await invoke<string>('easytier_version')
}

View File

@@ -1,6 +1,6 @@
import { getCurrentWindow } from '@tauri-apps/api/window'
import { Menu, MenuItem, PredefinedMenuItem } from '@tauri-apps/api/menu'
import { TrayIcon } from '@tauri-apps/api/tray'
import { getCurrentWindow } from '@tauri-apps/api/window'
import pkg from '~/../package.json'
const DEFAULT_TRAY_NAME = 'main'
@@ -8,14 +8,15 @@ const DEFAULT_TRAY_NAME = 'main'
async function toggleVisibility() {
if (await getCurrentWindow().isVisible()) {
await getCurrentWindow().hide()
} else {
}
else {
await getCurrentWindow().show()
await getCurrentWindow().setFocus()
}
}
export async function useTray(init: boolean = false) {
let tray;
let tray
try {
tray = await TrayIcon.getById(DEFAULT_TRAY_NAME)
if (!tray) {
@@ -29,17 +30,18 @@ export async function useTray(init: boolean = false) {
}),
action: async () => {
toggleVisibility()
}
},
})
}
} catch (error) {
}
catch (error) {
console.warn('Error while creating tray icon:', error)
return null
}
if (init) {
tray.setTooltip(`EasyTier\n${pkg.version}`)
tray.setMenuOnLeftClick(false);
tray.setMenuOnLeftClick(false)
tray.setMenu(await Menu.new({
id: 'main',
items: await generateMenuItem(),
@@ -59,7 +61,7 @@ export async function generateMenuItem() {
export async function MenuItemExit(text: string) {
return await PredefinedMenuItem.new({
text: text,
text,
item: 'Quit',
})
}
@@ -69,14 +71,15 @@ export async function MenuItemShow(text: string) {
id: 'show',
text,
action: async () => {
await toggleVisibility();
await toggleVisibility()
},
})
}
export async function setTrayMenu(items: (MenuItem | PredefinedMenuItem)[] | undefined = undefined) {
const tray = await useTray()
if (!tray) return
if (!tray)
return
const menu = await Menu.new({
id: 'main',
items: items || await generateMenuItem(),
@@ -86,15 +89,17 @@ export async function setTrayMenu(items: (MenuItem | PredefinedMenuItem)[] | und
export async function setTrayRunState(isRunning: boolean = false) {
const tray = await useTray()
if (!tray) return
if (!tray)
return
tray.setIcon(isRunning ? 'icons/icon-inactive.ico' : 'icons/icon.ico')
}
export async function setTrayTooltip(tooltip: string) {
if (tooltip) {
const tray = await useTray()
if (!tray) return
if (!tray)
return
tray.setTooltip(`EasyTier\n${pkg.version}\n${tooltip}`)
tray.setTitle(`EasyTier\n${pkg.version}\n${tooltip}`)
}
}
}

View File

@@ -1,16 +1,16 @@
import { setupLayouts } from 'virtual:generated-layouts'
import { createRouter, createWebHistory } from 'vue-router/auto'
import Aura from '@primevue/themes/aura'
import PrimeVue from 'primevue/config'
import ToastService from 'primevue/toastservice'
import App from '~/App.vue'
import { createRouter, createWebHistory } from 'vue-router/auto'
import { routes } from 'vue-router/auto-routes'
import App from '~/App.vue'
import { i18n, loadLanguageAsync } from '~/modules/i18n'
import { getAutoLaunchStatusAsync, loadAutoLaunchStatusAsync } from './modules/auto_launch'
import '~/styles.css'
import Aura from '@primevue/themes/aura'
import 'primeicons/primeicons.css'
import 'primeflex/primeflex.css'
import { i18n, loadLanguageAsync } from '~/modules/i18n'
import { loadAutoLaunchStatusAsync, getAutoLaunchStatusAsync } from './modules/auto_launch'
if (import.meta.env.PROD) {
document.addEventListener('keydown', (event) => {
@@ -18,8 +18,9 @@ if (import.meta.env.PROD) {
event.key === 'F5'
|| (event.ctrlKey && event.key === 'r')
|| (event.metaKey && event.key === 'r')
)
) {
event.preventDefault()
}
})
document.addEventListener('contextmenu', (event) => {
@@ -35,7 +36,7 @@ async function main() {
const router = createRouter({
history: createWebHistory(),
extendRoutes: routes => setupLayouts(routes),
routes,
})
app.use(router)
@@ -45,11 +46,12 @@ async function main() {
theme: {
preset: Aura,
options: {
prefix: 'p',
darkModeSelector: 'system',
cssLayer: false
}
}})
prefix: 'p',
darkModeSelector: 'system',
cssLayer: false,
},
},
})
app.use(ToastService)
app.mount('#app')
}

View File

@@ -1,16 +1,17 @@
import { setAutoLaunchStatus } from "~/composables/network"
import { disable, enable, isEnabled } from '@tauri-apps/plugin-autostart'
export async function loadAutoLaunchStatusAsync(enable: boolean): Promise<boolean> {
try {
const ret = await setAutoLaunchStatus(enable)
localStorage.setItem('auto_launch', JSON.stringify(ret))
return ret
} catch (e) {
console.error(e)
return false
}
export async function loadAutoLaunchStatusAsync(target_enable: boolean): Promise<boolean> {
try {
target_enable ? await enable() : await disable()
localStorage.setItem('auto_launch', JSON.stringify(await isEnabled()))
return isEnabled()
}
catch (e) {
console.error(e)
return false
}
}
export function getAutoLaunchStatusAsync(): boolean {
return localStorage.getItem('auto_launch') === 'true'
return localStorage.getItem('auto_launch') === 'true'
}
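For reference, a hedged usage sketch of the rewritten module: the settings-menu call site is not part of this diff, so the handler below is hypothetical, but the imported functions and their signatures are the ones defined above.
import { getAutoLaunchStatusAsync, loadAutoLaunchStatusAsync } from '~/modules/auto_launch'
// flip the persisted auto-launch flag and ask the autostart plugin to match it
async function toggleAutoLaunch(): Promise<boolean> {
  const next = !getAutoLaunchStatusAsync()
  return await loadAutoLaunchStatusAsync(next)
}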

View File

@@ -1,5 +1,5 @@
import type { Locale } from 'vue-i18n'
import { createI18n } from 'vue-i18n'
import type { Locale } from 'vue-i18n'
// Import i18n resources
// https://vitejs.dev/guide/features.html#glob-import

View File

@@ -1,24 +1,25 @@
<script setup lang="ts">
import { useToast } from 'primevue/usetoast'
import { exit } from '@tauri-apps/plugin-process';
import Config from '~/components/Config.vue'
import Status from '~/components/Status.vue'
import type { NetworkConfig } from '~/types/network'
import { loadLanguageAsync } from '~/modules/i18n'
import { getAutoLaunchStatusAsync as getAutoLaunchStatus, loadAutoLaunchStatusAsync } from '~/modules/auto_launch'
import { loadRunningInstanceIdsFromLocalStorage } from '~/stores/network'
import { setLoggingLevel } from '~/composables/network'
import TieredMenu from 'primevue/tieredmenu'
import { open } from '@tauri-apps/plugin-shell';
import { appLogDir } from '@tauri-apps/api/path'
import { writeText } from '@tauri-apps/plugin-clipboard-manager';
import { useTray } from '~/composables/tray';
import { type } from '@tauri-apps/plugin-os';
import { getCurrentWindow } from '@tauri-apps/api/window'
import { writeText } from '@tauri-apps/plugin-clipboard-manager'
import { type } from '@tauri-apps/plugin-os'
import { exit } from '@tauri-apps/plugin-process'
import { open } from '@tauri-apps/plugin-shell'
import TieredMenu from 'primevue/tieredmenu'
import { useToast } from 'primevue/usetoast'
import Config from '~/components/Config.vue'
import Status from '~/components/Status.vue'
import { isAutostart, setLoggingLevel } from '~/composables/network'
import { useTray } from '~/composables/tray'
import { getAutoLaunchStatusAsync as getAutoLaunchStatus, loadAutoLaunchStatusAsync } from '~/modules/auto_launch'
import { loadLanguageAsync } from '~/modules/i18n'
import { type NetworkConfig, NetworkingMethod } from '~/types/network'
const { t, locale } = useI18n()
const visible = ref(false)
const aboutVisible = ref(false)
const tomlConfig = ref('')
useTray(true)
@@ -71,7 +72,6 @@ function addNewNetwork() {
networkStore.$subscribe(async () => {
networkStore.saveToLocalStorage()
networkStore.saveRunningInstanceIdsToLocalStorage()
try {
await parseNetworkConfig(networkStore.curNetwork)
messageBarSeverity.value = Severity.None
@@ -86,7 +86,8 @@ async function runNetworkCb(cfg: NetworkConfig, cb: () => void) {
if (type() === 'android') {
await prepareVpnService()
networkStore.clearNetworkInstances()
} else {
}
else {
networkStore.removeNetworkInstance(cfg.instance_id)
}
@@ -95,6 +96,7 @@ async function runNetworkCb(cfg: NetworkConfig, cb: () => void) {
try {
await runNetworkInstance(cfg)
networkStore.addAutoStartInstId(cfg.instance_id)
}
catch (e: any) {
// console.error(e)
@@ -109,6 +111,7 @@ async function stopNetworkCb(cfg: NetworkConfig, cb: () => void) {
cb()
networkStore.removeNetworkInstance(cfg.instance_id)
await retainNetworkInstance(networkStore.networkInstanceIds)
networkStore.removeAutoStartInstId(cfg.instance_id)
}
async function updateNetworkInfos() {
@@ -120,10 +123,13 @@ onMounted(async () => {
intervalId = window.setInterval(async () => {
await updateNetworkInfos()
}, 500)
await setTrayMenu([
await MenuItemExit(t('tray.exit')),
await MenuItemShow(t('tray.show'))
])
window.setTimeout(async () => {
await setTrayMenu([
await MenuItemExit(t('tray.exit')),
await MenuItemShow(t('tray.show')),
])
}, 1000)
})
onUnmounted(() => clearInterval(intervalId))
@@ -142,7 +148,7 @@ const setting_menu_items = ref([
await loadLanguageAsync((locale.value === 'en' ? 'cn' : 'en'))
await setTrayMenu([
await MenuItemExit(t('tray.exit')),
await MenuItemShow(t('tray.show'))
await MenuItemShow(t('tray.show')),
])
},
},
@@ -158,10 +164,10 @@ const setting_menu_items = ref([
icon: 'pi pi-file',
items: (function () {
const levels = ['off', 'warn', 'info', 'debug', 'trace']
let items = []
for (let level of levels) {
const items = []
for (const level of levels) {
items.push({
label: () => t("logging_level_" + level) + (current_log_level === level ? ' ✓' : ''),
label: () => t(`logging_level_${level}`) + (current_log_level === level ? ' ✓' : ''),
command: async () => {
current_log_level = level
await setLoggingLevel(level)
@@ -175,7 +181,7 @@ const setting_menu_items = ref([
label: () => t('logging_open_dir'),
icon: 'pi pi-folder-open',
command: async () => {
console.log("open log dir", await appLogDir())
console.log('open log dir', await appLogDir())
await open(await appLogDir())
},
})
@@ -187,7 +193,14 @@ const setting_menu_items = ref([
},
})
return items
})()
})(),
},
{
label: () => t('about.title'),
icon: 'pi pi-at',
command: async () => {
aboutVisible.value = true
},
},
{
label: () => t('exit'),
@@ -202,18 +215,22 @@ function toggle_setting_menu(event: any) {
setting_menu.value.toggle(event)
}
onMounted(async () => {
onBeforeMount(async () => {
networkStore.loadFromLocalStorage()
if (getAutoLaunchStatus()) {
let prev_running_ids = loadRunningInstanceIdsFromLocalStorage()
for (let id of prev_running_ids) {
let cfg = networkStore.networkList.find((item) => item.instance_id === id)
if (type() !== 'android' && getAutoLaunchStatus() && await isAutostart()) {
getCurrentWindow().hide()
const autoStartIds = networkStore.autoStartInstIds
for (const id of autoStartIds) {
const cfg = networkStore.networkList.find(item => item.instance_id === id)
if (cfg) {
networkStore.addNetworkInstance(cfg.instance_id)
await runNetworkInstance(cfg)
}
}
}
})
onMounted(async () => {
if (type() === 'android') {
await initMobileVpnService()
}
@@ -222,7 +239,6 @@ onMounted(async () => {
function isRunning(id: string) {
return networkStore.networkInstanceIds.includes(id)
}
</script>
<script lang="ts">
@@ -237,11 +253,15 @@ function isRunning(id: string) {
</ScrollPanel>
</Panel>
<Divider />
<div class="flex justify-content-end gap-2">
<div class="flex gap-2 justify-content-end">
<Button type="button" :label="t('close')" @click="visible = false" />
</div>
</Dialog>
<Dialog v-model:visible="aboutVisible" modal :header="t('about.title')" :style="{ width: '70%' }">
<About />
</Dialog>
<div>
<Toolbar>
<template #start>
@@ -252,30 +272,45 @@ function isRunning(id: string) {
<template #center>
<div class="min-w-40">
<Dropdown v-model="networkStore.curNetwork" :options="networkStore.networkList" :highlight-on-select="false"
:placeholder="t('select_network')" class="w-full">
<Dropdown
v-model="networkStore.curNetwork" :options="networkStore.networkList" :highlight-on-select="false"
:placeholder="t('select_network')" class="w-full"
>
<template #value="slotProps">
<div class="flex items-start content-center">
<div class="mr-3 flex-column">
<span>{{ slotProps.value.network_name }}</span>
</div>
<Tag class="my-auto" :severity="isRunning(slotProps.value.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.value.instance_id) ? 'network_running' : 'network_stopped')" />
<Tag
class="my-auto leading-3" :severity="isRunning(slotProps.value.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.value.instance_id) ? 'network_running' : 'network_stopped')"
/>
</div>
</template>
<template #option="slotProps">
<div class="flex flex-col items-start content-center">
<div class="flex flex-col items-start content-center max-w-full">
<div class="flex">
<div class="mr-3">
{{ t('network_name') }}: {{ slotProps.option.network_name }}
</div>
<Tag class="my-auto" :severity="isRunning(slotProps.option.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.option.instance_id) ? 'network_running' : 'network_stopped')" />
<Tag
class="my-auto leading-3"
:severity="isRunning(slotProps.option.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.option.instance_id) ? 'network_running' : 'network_stopped')"
/>
</div>
<div>{{ slotProps.option.public_server_url }}</div>
<div
v-if="isRunning(slotProps.option.instance_id) && networkStore.instances[slotProps.option.instance_id].detail && (networkStore.instances[slotProps.option.instance_id].detail?.my_node_info.virtual_ipv4 !== '')">
{{ networkStore.instances[slotProps.option.instance_id].detail
v-if="slotProps.option.networking_method !== NetworkingMethod.Standalone"
class="max-w-full overflow-hidden text-ellipsis"
>
{{ slotProps.option.networking_method === NetworkingMethod.Manual
? slotProps.option.peer_urls.join(', ')
: slotProps.option.public_server_url }}
</div>
<div
v-if="isRunning(slotProps.option.instance_id) && networkStore.instances[slotProps.option.instance_id].detail && (networkStore.instances[slotProps.option.instance_id].detail?.my_node_info.virtual_ipv4 !== '')"
>
{{ networkStore.instances[slotProps.option.instance_id].detail
? networkStore.instances[slotProps.option.instance_id].detail?.my_node_info.virtual_ipv4 : '' }}
</div>
</div>
@@ -285,8 +320,10 @@ function isRunning(id: string) {
</template>
<template #end>
<Button icon="pi pi-cog" severity="secondary" aria-haspopup="true" :label="t('settings')"
aria-controls="overlay_setting_menu" @click="toggle_setting_menu" />
<Button
icon="pi pi-cog" severity="secondary" aria-haspopup="true" :label="t('settings')"
aria-controls="overlay_setting_menu" @click="toggle_setting_menu"
/>
<TieredMenu id="overlay_setting_menu" ref="setting_menu" :model="setting_menu_items" :popup="true" />
</template>
</Toolbar>
@@ -295,21 +332,29 @@ function isRunning(id: string) {
<Panel class="h-full overflow-y-auto">
<Stepper :value="activeStep">
<StepList value="1">
<Step value="1">{{ t('config_network') }}</Step>
<Step value="2">{{ t('running') }}</Step>
<Step value="1">
{{ t('config_network') }}
</Step>
<Step value="2">
{{ t('running') }}
</Step>
</StepList>
<StepPanels value="1">
<StepPanel v-slot="{ activateCallback = (s: string) => { } } = {}" value="1">
<Config :instance-id="networkStore.curNetworkId" :config-invalid="messageBarSeverity !== Severity.None"
@run-network="runNetworkCb($event, () => activateCallback('2'))" />
<Config
:instance-id="networkStore.curNetworkId" :config-invalid="messageBarSeverity !== Severity.None"
@run-network="runNetworkCb($event, () => activateCallback('2'))"
/>
</StepPanel>
<StepPanel v-slot="{ activateCallback = (s: string) => { } } = {}" value="2">
<div class="flex flex-column">
<Status :instance-id="networkStore.curNetworkId" />
</div>
<div class="flex pt-4 justify-content-center">
<Button :label="t('stop_network')" severity="danger" icon="pi pi-arrow-left"
@click="stopNetworkCb(networkStore.curNetwork, () => activateCallback('1'))" />
<Button
:label="t('stop_network')" severity="danger" icon="pi pi-arrow-left"
@click="stopNetworkCb(networkStore.curNetwork, () => activateCallback('1'))"
/>
</div>
</StepPanel>
</StepPanels>
@@ -349,6 +394,10 @@ body {
margin: 0;
}
.p-select-overlay {
max-width: calc(100% - 2rem);
}
/*
.p-tabview-panel {

View File

@@ -14,6 +14,8 @@ export const useNetworkStore = defineStore('networkStore', {
instances: {} as Record<string, NetworkInstance>,
networkInfos: {} as Record<string, NetworkInstanceRunningInfo>,
autoStartInstIds: [] as string[],
}
},
@@ -74,7 +76,6 @@ export const useNetworkStore = defineStore('networkStore', {
this.instances[instanceId].error_msg = info.error_msg || ''
this.instances[instanceId].detail = info
}
this.saveRunningInstanceIdsToLocalStorage()
},
loadFromLocalStorage() {
@@ -92,27 +93,44 @@ export const useNetworkStore = defineStore('networkStore', {
this.networkList = networkList
this.curNetwork = this.networkList[0]
this.loadAutoStartInstIdsFromLocalStorage()
},
saveToLocalStorage() {
localStorage.setItem('networkList', JSON.stringify(this.networkList))
},
saveRunningInstanceIdsToLocalStorage() {
let instance_ids = Object.keys(this.instances).filter((instanceId) => this.instances[instanceId].running)
localStorage.setItem('runningInstanceIds', JSON.stringify(instance_ids))
}
saveAutoStartInstIdsToLocalStorage() {
localStorage.setItem('autoStartInstIds', JSON.stringify(this.autoStartInstIds))
},
loadAutoStartInstIdsFromLocalStorage() {
try {
this.autoStartInstIds = JSON.parse(localStorage.getItem('autoStartInstIds') || '[]')
}
catch (e) {
console.error(e)
this.autoStartInstIds = []
}
},
addAutoStartInstId(instanceId: string) {
if (!this.autoStartInstIds.includes(instanceId)) {
this.autoStartInstIds.push(instanceId)
}
this.saveAutoStartInstIdsToLocalStorage()
},
removeAutoStartInstId(instanceId: string) {
const idx = this.autoStartInstIds.indexOf(instanceId)
if (idx !== -1) {
this.autoStartInstIds.splice(idx, 1)
}
this.saveAutoStartInstIdsToLocalStorage()
},
},
})
if (import.meta.hot)
import.meta.hot.accept(acceptHMRUpdate(useNetworkStore as any, import.meta.hot))
export function loadRunningInstanceIdsFromLocalStorage(): string[] {
try {
return JSON.parse(localStorage.getItem('runningInstanceIds') || '[]')
} catch (e) {
console.error(e)
return []
}
}

View File

@@ -16,7 +16,6 @@
font-weight: 400;
color: #0f0f0f;
background-color: white;
font-synthesis: none;
text-rendering: optimizeLegibility;

View File

@@ -12,7 +12,7 @@ declare module 'vue-router/auto-routes' {
ParamValueOneOrMore,
ParamValueZeroOrMore,
ParamValueZeroOrOne,
} from 'unplugin-vue-router/types'
} from 'vue-router'
/**
* Route name map generated by unplugin-vue-router

View File

@@ -31,6 +31,9 @@ export interface NetworkConfig {
listener_urls: string[]
rpc_port: number
latency_first: boolean
dev_name: string
}
export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
@@ -62,6 +65,8 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
'wg://0.0.0.0:11011',
],
rpc_port: 0,
latency_first: true,
dev_name: '',
}
}
@@ -75,6 +80,7 @@ export interface NetworkInstance {
}
export interface NetworkInstanceRunningInfo {
dev_name: string
my_node_info: NodeInfo
events: Record<string, any>
node_info: NodeInfo
@@ -85,13 +91,26 @@ export interface NetworkInstanceRunningInfo {
error_msg?: string
}
export interface Ipv4Addr {
addr: number
}
export interface Ipv6Addr {
part1: number
part2: number
part3: number
part4: number
}
export interface NodeInfo {
virtual_ipv4: string
hostname: string
version: string
ips: {
public_ipv4: string
interface_ipv4s: string[]
public_ipv6: string
interface_ipv6s: string[]
public_ipv4: Ipv4Addr
interface_ipv4s: Ipv4Addr[]
public_ipv6: Ipv6Addr
interface_ipv6s: Ipv6Addr[]
listeners: {
serialization: string
scheme_end: number
@@ -125,6 +144,7 @@ export interface Route {
hostname: string
stun_info?: StunInfo
inst_id: string
version: string
}
export interface PeerInfo {

View File

@@ -1,19 +1,19 @@
import path from 'node:path'
import { defineConfig } from 'vite'
import Vue from '@vitejs/plugin-vue'
import Layouts from 'vite-plugin-vue-layouts'
import Components from 'unplugin-vue-components/vite'
import AutoImport from 'unplugin-auto-import/vite'
import VueMacros from 'unplugin-vue-macros/vite'
import process from 'node:process'
import VueI18n from '@intlify/unplugin-vue-i18n/vite'
import VueDevTools from 'vite-plugin-vue-devtools'
import VueRouter from 'unplugin-vue-router/vite'
import { PrimeVueResolver } from '@primevue/auto-import-resolver'
import Vue from '@vitejs/plugin-vue'
import { internalIpV4Sync } from 'internal-ip'
import AutoImport from 'unplugin-auto-import/vite'
import Components from 'unplugin-vue-components/vite'
import VueMacros from 'unplugin-vue-macros/vite'
import { VueRouterAutoImports } from 'unplugin-vue-router'
import { PrimeVueResolver } from '@primevue/auto-import-resolver';
import { svelte } from '@sveltejs/vite-plugin-svelte';
import { internalIpV4Sync } from 'internal-ip';
import VueRouter from 'unplugin-vue-router/vite'
import { defineConfig } from 'vite'
import VueDevTools from 'vite-plugin-vue-devtools'
import Layouts from 'vite-plugin-vue-layouts'
const host = process.env.TAURI_DEV_HOST;
const host = process.env.TAURI_DEV_HOST
// https://vitejs.dev/config/
export default defineConfig(async () => ({
@@ -23,7 +23,6 @@ export default defineConfig(async () => ({
},
},
plugins: [
svelte(),
VueMacros({
plugins: {
vue: Vue({
@@ -100,10 +99,10 @@ export default defineConfig(async () => ({
},
hmr: host
? {
protocol: 'ws',
host: internalIpV4Sync(),
port: 1430,
}
protocol: 'ws',
host: internalIpV4Sync(),
port: 1430,
}
: undefined,
},
}))

View File

@@ -3,12 +3,12 @@ name = "easytier"
description = "A full meshed p2p VPN, connecting all your devices in one network with one command."
homepage = "https://github.com/EasyTier/EasyTier"
repository = "https://github.com/EasyTier/EasyTier"
version = "1.2.2"
version = "2.0.0"
edition = "2021"
authors = ["kkrainbow"]
keywords = ["vpn", "p2p", "network", "easytier"]
categories = ["network-programming", "command-line-utilities"]
rust-version = "1.75"
rust-version = "1.77.0"
license-file = "LICENSE"
readme = "README.md"
@@ -29,6 +29,8 @@ path = "src/lib.rs"
test = false
[dependencies]
git-version = "0.3.9"
tracing = { version = "0.1", features = ["log"] }
tracing-subscriber = { version = "0.3", features = [
"env-filter",
@@ -49,7 +51,7 @@ futures = { version = "0.3", features = ["bilock", "unstable"] }
tokio = { version = "1", features = ["full"] }
tokio-stream = "0.1"
tokio-util = { version = "0.7.9", features = ["codec", "net"] }
tokio-util = { version = "0.7.9", features = ["codec", "net", "io"] }
async-stream = "0.3.5"
async-trait = "0.1.74"
@@ -101,14 +103,10 @@ uuid = { version = "1.5.0", features = [
crossbeam-queue = "0.3"
once_cell = "1.18.0"
# for packet
postcard = { "version" = "1.0.8", features = ["alloc"] }
# for rpc
tonic = "0.12"
prost = "0.13"
prost-types = "0.13"
anyhow = "1.0"
tarpc = { version = "0.32", features = ["tokio1", "serde1"] }
url = { version = "2.5", features = ["serde"] }
percent-encoding = "2.3.1"
@@ -127,6 +125,7 @@ rand = "0.8.5"
serde = { version = "1.0", features = ["derive"] }
pnet = { version = "0.35.0", features = ["serde"] }
serde_json = "1"
clap = { version = "4.4.8", features = [
"string",
@@ -142,7 +141,10 @@ network-interface = "2.0"
# for ospf route
petgraph = "0.6.5"
boringtun = { package = "boringtun-easytier", version = "0.6.0", optional = true } # for encryption
# for wireguard
boringtun = { package = "boringtun-easytier", version = "0.6.1", optional = true }
# for encryption
ring = { version = "0.17", optional = true }
bitflags = "2.5"
aes-gcm = { version = "0.10.3", optional = true }
@@ -177,6 +179,9 @@ wildmatch = "2.3.4"
rust-i18n = "3"
sys-locale = "0.3"
ringbuf = "0.4.5"
async-ringbuf = "0.3.1"
[target.'cfg(windows)'.dependencies]
windows-sys = { version = "0.52", features = [
"Win32_Networking_WinSock",
@@ -191,6 +196,8 @@ winreg = "0.52"
tonic-build = "0.12"
globwalk = "0.8.1"
regex = "1"
prost-build = "0.13.2"
rpc_build = { path = "src/proto/rpc_build" }
[target.'cfg(windows)'.build-dependencies]
reqwest = { version = "0.11", features = ["blocking"] }
@@ -200,13 +207,15 @@ zip = "0.6.6"
[dev-dependencies]
serial_test = "3.0.0"
rstest = "0.18.2"
futures-util = "0.3.30"
[target.'cfg(target_os = "linux")'.dev-dependencies]
defguard_wireguard_rs = "0.4.2"
tokio-socks = "0.5.2"
[features]
default = ["wireguard", "mimalloc", "websocket", "smoltcp", "tun"]
default = ["wireguard", "mimalloc", "websocket", "smoltcp", "tun", "socks5"]
full = [
"quic",
"websocket",
@@ -215,9 +224,9 @@ full = [
"aes-gcm",
"smoltcp",
"tun",
"socks5",
]
mips = ["aes-gcm", "mimalloc", "wireguard", "tun", "smoltcp"]
bsd = ["aes-gcm", "mimalloc", "smoltcp"]
mips = ["aes-gcm", "mimalloc", "wireguard", "tun", "smoltcp", "socks5"]
wireguard = ["dep:boringtun", "dep:ring"]
quic = ["dep:quinn", "dep:rustls", "dep:rcgen"]
mimalloc = ["dep:mimalloc-rust"]
@@ -231,3 +240,4 @@ websocket = [
"dep:rcgen",
]
smoltcp = ["dep:smoltcp", "dep:parking_lot"]
socks5 = ["dep:smoltcp"]

View File

@@ -129,14 +129,35 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
#[cfg(target_os = "windows")]
WindowsBuild::check_for_win();
tonic_build::configure()
.type_attribute(".", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute("cli.DirectConnectedPeerInfo", "#[derive(Hash)]")
.type_attribute("cli.PeerInfoForGlobalMap", "#[derive(Hash)]")
let proto_files = [
"src/proto/peer_rpc.proto",
"src/proto/common.proto",
"src/proto/error.proto",
"src/proto/tests.proto",
"src/proto/cli.proto",
];
for proto_file in &proto_files {
println!("cargo:rerun-if-changed={}", proto_file);
}
prost_build::Config::new()
.type_attribute(".common", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".error", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".cli", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(
"peer_rpc.GetIpListResponse",
"#[derive(serde::Serialize, serde::Deserialize)]",
)
.type_attribute("peer_rpc.DirectConnectedPeerInfo", "#[derive(Hash)]")
.type_attribute("peer_rpc.PeerInfoForGlobalMap", "#[derive(Hash)]")
.type_attribute("peer_rpc.ForeignNetworkRouteInfoKey", "#[derive(Hash, Eq)]")
.type_attribute("common.RpcDescriptor", "#[derive(Hash, Eq)]")
.service_generator(Box::new(rpc_build::ServiceGenerator::new()))
.btree_map(&["."])
.compile(&["proto/cli.proto"], &["proto/"])
.compile_protos(&proto_files, &["src/proto/"])
.unwrap();
// tonic_build::compile_protos("proto/cli.proto")?;
check_locale();
Ok(())
}

View File

@@ -108,6 +108,12 @@ core_clap:
disable_p2p:
en: "disable p2p communication, will only relay packets with peers specified by --peers"
zh-CN: "禁用P2P通信只通过--peers指定的节点转发数据包"
disable_udp_hole_punching:
en: "disable udp hole punching"
zh-CN: "禁用UDP打洞功能"
relay_all_peer_rpc:
en: "relay all peer rpc packets, even if the peer is not in the relay network whitelist. this can help peers not in relay network whitelist to establish p2p connection."
zh-CN: "转发所有对等节点的RPC数据包即使对等节点不在转发网络白名单中。这可以帮助白名单外网络中的对等节点建立P2P连接。"
zh-CN: "转发所有对等节点的RPC数据包即使对等节点不在转发网络白名单中。这可以帮助白名单外网络中的对等节点建立P2P连接。"
socks5:
en: "enable socks5 server, allow socks5 client to access virtual network. format: <port>, e.g.: 1080"
zh-CN: "启用 socks5 服务器,允许 socks5 客户端访问虚拟网络. 格式: <端口>例如1080"

View File

@@ -64,12 +64,15 @@ pub trait ConfigLoader: Send + Sync {
fn get_routes(&self) -> Option<Vec<cidr::Ipv4Cidr>>;
fn set_routes(&self, routes: Option<Vec<cidr::Ipv4Cidr>>);
fn get_socks5_portal(&self) -> Option<url::Url>;
fn set_socks5_portal(&self, addr: Option<url::Url>);
fn dump(&self) -> String;
}
pub type NetworkSecretDigest = [u8; 32];
#[derive(Debug, Clone, Deserialize, Serialize, Default)]
#[derive(Debug, Clone, Deserialize, Serialize, Default, Eq, Hash)]
pub struct NetworkIdentity {
pub network_name: String,
pub network_secret: Option<String>,
@@ -175,6 +178,8 @@ pub struct Flags {
pub disable_p2p: bool,
#[derivative(Default(value = "false"))]
pub relay_all_peer_rpc: bool,
#[derivative(Default(value = "false"))]
pub disable_udp_hole_punching: bool,
}
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
@@ -201,7 +206,12 @@ struct Config {
routes: Option<Vec<cidr::Ipv4Cidr>>,
flags: Option<Flags>,
socks5_proxy: Option<url::Url>,
flags: Option<serde_json::Map<String, serde_json::Value>>,
#[serde(skip)]
flags_struct: Option<Flags>,
}
#[derive(Debug, Clone)]
@@ -217,13 +227,15 @@ impl Default for TomlConfigLoader {
impl TomlConfigLoader {
pub fn new_from_str(config_str: &str) -> Result<Self, anyhow::Error> {
let config = toml::de::from_str::<Config>(config_str).with_context(|| {
let mut config = toml::de::from_str::<Config>(config_str).with_context(|| {
format!(
"failed to parse config file: {}\n{}",
config_str, config_str
)
})?;
config.flags_struct = Some(Self::gen_flags(config.flags.clone().unwrap_or_default()));
Ok(TomlConfigLoader {
config: Arc::new(Mutex::new(config)),
})
@@ -241,6 +253,28 @@ impl TomlConfigLoader {
Ok(ret)
}
fn gen_flags(mut flags_hashmap: serde_json::Map<String, serde_json::Value>) -> Flags {
let default_flags_json = serde_json::to_string(&Flags::default()).unwrap();
let default_flags_hashmap =
serde_json::from_str::<serde_json::Map<String, serde_json::Value>>(&default_flags_json)
.unwrap();
tracing::debug!("default_flags_hashmap: {:?}", default_flags_hashmap);
let mut merged_hashmap = serde_json::Map::new();
for (key, value) in default_flags_hashmap {
if let Some(v) = flags_hashmap.remove(&key) {
merged_hashmap.insert(key, v);
} else {
merged_hashmap.insert(key, value);
}
}
tracing::debug!("merged_hashmap: {:?}", merged_hashmap);
serde_json::from_value(serde_json::Value::Object(merged_hashmap)).unwrap()
}
}
impl ConfigLoader for TomlConfigLoader {
@@ -467,13 +501,13 @@ impl ConfigLoader for TomlConfigLoader {
self.config
.lock()
.unwrap()
.flags
.flags_struct
.clone()
.unwrap_or_default()
}
fn set_flags(&self, flags: Flags) {
self.config.lock().unwrap().flags = Some(flags);
self.config.lock().unwrap().flags_struct = Some(flags);
}
fn get_exit_nodes(&self) -> Vec<Ipv4Addr> {
@@ -500,6 +534,14 @@ impl ConfigLoader for TomlConfigLoader {
fn set_routes(&self, routes: Option<Vec<cidr::Ipv4Cidr>>) {
self.config.lock().unwrap().routes = routes;
}
fn get_socks5_portal(&self) -> Option<url::Url> {
self.config.lock().unwrap().socks5_proxy.clone()
}
fn set_socks5_portal(&self, addr: Option<url::Url>) {
self.config.lock().unwrap().socks5_proxy = addr;
}
}
#[cfg(test)]
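Below is a minimal, hedged sketch (not part of the diff) of how the new flag merging behaves: keys present under the [flags] table override the defaults, and absent keys fall back to Flags::default(). It assumes the remaining Config fields are optional, that the loader is reachable as easytier::common::config, and that the crate is consumed as a library; exact paths may differ in the actual tree.

use easytier::common::config::{ConfigLoader, TomlConfigLoader}; // assumed module path

fn main() -> Result<(), anyhow::Error> {
    // Only one flag is set by the user; the rest should come from Flags::default().
    let toml = r#"
[flags]
disable_udp_hole_punching = true
"#;
    let loader = TomlConfigLoader::new_from_str(toml)?;
    let flags = loader.get_flags();
    assert!(flags.disable_udp_hole_punching); // taken from the user's [flags] table
    assert!(!flags.disable_p2p);              // filled in from Flags::default()
    assert!(!flags.relay_all_peer_rpc);       // filled in from Flags::default()
    Ok(())
}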

View File

@@ -21,4 +21,13 @@ macro_rules! set_global_var {
define_global_var!(MANUAL_CONNECTOR_RECONNECT_INTERVAL_MS, u64, 1000);
define_global_var!(OSPF_UPDATE_MY_GLOBAL_FOREIGN_NETWORK_INTERVAL_SEC, u64, 10);
pub const UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID: u32 = 2;
pub const EASYTIER_VERSION: &str = git_version::git_version!(
args = ["--abbrev=8", "--always", "--dirty=~"],
prefix = concat!(env!("CARGO_PKG_VERSION"), "-"),
suffix = "",
fallback = env!("CARGO_PKG_VERSION")
);
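For illustration only (hypothetical hash values): with the arguments above, the constant renders the crate version followed by a short git description, e.g. 2.0.0-1a2b3c4d, with a trailing ~ appended when the working tree is dirty, and falls back to the plain CARGO_PKG_VERSION (2.0.0) when git metadata is unavailable.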

View File

@@ -31,8 +31,6 @@ pub enum Error {
// RpcListenError(String),
#[error("Rpc connect error: {0}")]
RpcConnectError(String),
#[error("Rpc error: {0}")]
RpcClientError(#[from] tarpc::client::RpcError),
#[error("Timeout error: {0}")]
Timeout(#[from] tokio::time::error::Elapsed),
#[error("url in blacklist")]

View File

@@ -4,7 +4,8 @@ use std::{
sync::{Arc, Mutex},
};
use crate::rpc::PeerConnInfo;
use crate::proto::cli::PeerConnInfo;
use crate::proto::common::PeerFeatureFlag;
use crossbeam::atomic::AtomicCell;
use super::{
@@ -68,6 +69,8 @@ pub struct GlobalCtx {
enable_exit_node: bool,
no_tun: bool,
feature_flags: AtomicCell<PeerFeatureFlag>,
}
impl std::fmt::Debug for GlobalCtx {
@@ -91,7 +94,7 @@ impl GlobalCtx {
let net_ns = NetNS::new(config_fs.get_netns());
let hostname = config_fs.get_hostname();
let (event_bus, _) = tokio::sync::broadcast::channel(100);
let (event_bus, _) = tokio::sync::broadcast::channel(1024);
let stun_info_collection = Arc::new(StunInfoCollector::new_with_default_servers());
@@ -119,6 +122,8 @@ impl GlobalCtx {
enable_exit_node,
no_tun,
feature_flags: AtomicCell::new(PeerFeatureFlag::default()),
}
}
@@ -179,6 +184,10 @@ impl GlobalCtx {
self.config.get_network_identity()
}
pub fn get_network_name(&self) -> String {
self.get_network_identity().network_name
}
pub fn get_ip_collector(&self) -> Arc<IPCollector> {
self.ip_collector.clone()
}
@@ -191,7 +200,6 @@ impl GlobalCtx {
self.stun_info_collection.as_ref()
}
#[cfg(test)]
pub fn replace_stun_info_collector(&self, collector: Box<dyn StunInfoCollectorTrait>) {
// force replace the stun_info_collection without mut and drop the old one
let ptr = &self.stun_info_collection as *const Box<dyn StunInfoCollectorTrait>;
@@ -219,6 +227,10 @@ impl GlobalCtx {
self.config.get_flags()
}
pub fn set_flags(&self, flags: Flags) {
self.config.set_flags(flags);
}
pub fn get_128_key(&self) -> [u8; 16] {
let mut key = [0u8; 16];
let secret = self
@@ -243,6 +255,14 @@ impl GlobalCtx {
pub fn no_tun(&self) -> bool {
self.no_tun
}
pub fn get_feature_flags(&self) -> PeerFeatureFlag {
self.feature_flags.load()
}
pub fn set_feature_flags(&self, flags: PeerFeatureFlag) {
self.feature_flags.store(flags);
}
}
#[cfg(test)]

View File

@@ -384,6 +384,10 @@ impl IfConfiguerTrait for WindowsIfConfiger {
}
async fn set_mtu(&self, name: &str, mtu: u32) -> Result<(), Error> {
let _ = run_shell_cmd(
format!("netsh interface ipv6 set subinterface {} mtu={}", name, mtu).as_str(),
)
.await;
run_shell_cmd(
format!("netsh interface ipv4 set subinterface {} mtu={}", name, mtu).as_str(),
)
@@ -395,7 +399,7 @@ pub struct DummyIfConfiger {}
#[async_trait]
impl IfConfiguerTrait for DummyIfConfiger {}
#[cfg(target_os = "macos")]
#[cfg(any(target_os = "macos", target_os = "freebsd"))]
pub type IfConfiger = MacIfConfiger;
#[cfg(target_os = "linux")]
@@ -404,5 +408,10 @@ pub type IfConfiger = LinuxIfConfiger;
#[cfg(target_os = "windows")]
pub type IfConfiger = WindowsIfConfiger;
#[cfg(not(any(target_os = "macos", target_os = "linux", target_os = "windows")))]
#[cfg(not(any(
target_os = "macos",
target_os = "linux",
target_os = "windows",
target_os = "freebsd"
)))]
pub type IfConfiger = DummyIfConfiger;

View File

@@ -14,6 +14,7 @@ pub mod global_ctx;
pub mod ifcfg;
pub mod netns;
pub mod network;
pub mod scoped_task;
pub mod stun;
pub mod stun_codec_ext;

View File

@@ -1,12 +1,13 @@
use std::{net::IpAddr, ops::Deref, sync::Arc};
use crate::rpc::peer::GetIpListResponse;
use pnet::datalink::NetworkInterface;
use tokio::{
sync::{Mutex, RwLock},
task::JoinSet,
};
use crate::proto::peer_rpc::GetIpListResponse;
use super::{netns::NetNS, stun::StunInfoCollectorTrait};
pub const CACHED_IP_LIST_TIMEOUT_SEC: u64 = 60;
@@ -60,7 +61,9 @@ impl InterfaceFilter {
#[cfg(any(target_os = "macos", target_os = "freebsd"))]
impl InterfaceFilter {
async fn is_interface_physical(interface_name: &str) -> bool {
#[cfg(target_os = "macos")]
async fn is_interface_physical(&self) -> bool {
let interface_name = &self.iface.name;
let output = tokio::process::Command::new("networksetup")
.args(&["-listallhardwareports"])
.output()
@@ -87,11 +90,17 @@ impl InterfaceFilter {
false
}
#[cfg(target_os = "freebsd")]
async fn is_interface_physical(&self) -> bool {
// if the mac addr is not zero, treat it as a physical interface
self.iface.mac.map(|mac| !mac.is_zero()).unwrap_or(false)
}
async fn filter_iface(&self) -> bool {
!self.iface.is_point_to_point()
&& !self.iface.is_loopback()
&& self.iface.is_up()
&& Self::is_interface_physical(&self.iface.name).await
&& self.is_interface_physical().await
}
}
@@ -155,7 +164,7 @@ pub struct IPCollector {
impl IPCollector {
pub fn new<T: StunInfoCollectorTrait + 'static>(net_ns: NetNS, stun_info_collector: T) -> Self {
Self {
cached_ip_list: Arc::new(RwLock::new(GetIpListResponse::new())),
cached_ip_list: Arc::new(RwLock::new(GetIpListResponse::default())),
collect_ip_task: Mutex::new(JoinSet::new()),
net_ns,
stun_info_collector: Arc::new(Box::new(stun_info_collector)),
@@ -187,14 +196,18 @@ impl IPCollector {
let Ok(ip_addr) = ip.parse::<IpAddr>() else {
continue;
};
if ip_addr.is_ipv4() {
cached_ip_list.write().await.public_ipv4 = ip.clone();
} else {
cached_ip_list.write().await.public_ipv6 = ip.clone();
match ip_addr {
IpAddr::V4(v) => {
cached_ip_list.write().await.public_ipv4 = Some(v.into())
}
IpAddr::V6(v) => {
cached_ip_list.write().await.public_ipv6 = Some(v.into())
}
}
}
let sleep_sec = if !cached_ip_list.read().await.public_ipv4.is_empty() {
let sleep_sec = if !cached_ip_list.read().await.public_ipv4.is_none() {
CACHED_IP_LIST_TIMEOUT_SEC
} else {
3
@@ -228,7 +241,7 @@ impl IPCollector {
#[tracing::instrument(skip(net_ns))]
async fn do_collect_local_ip_addrs(net_ns: NetNS) -> GetIpListResponse {
let mut ret = crate::rpc::peer::GetIpListResponse::new();
let mut ret = GetIpListResponse::default();
let ifaces = Self::collect_interfaces(net_ns.clone()).await;
let _g = net_ns.guard();
@@ -238,25 +251,28 @@ impl IPCollector {
if ip.is_loopback() || ip.is_multicast() {
continue;
}
if ip.is_ipv4() {
ret.interface_ipv4s.push(ip.to_string());
} else if ip.is_ipv6() {
ret.interface_ipv6s.push(ip.to_string());
match ip {
std::net::IpAddr::V4(v4) => {
ret.interface_ipv4s.push(v4.into());
}
std::net::IpAddr::V6(v6) => {
ret.interface_ipv6s.push(v6.into());
}
}
}
}
if let Ok(v4_addr) = local_ipv4().await {
tracing::trace!("got local ipv4: {}", v4_addr);
if !ret.interface_ipv4s.contains(&v4_addr.to_string()) {
ret.interface_ipv4s.push(v4_addr.to_string());
if !ret.interface_ipv4s.contains(&v4_addr.into()) {
ret.interface_ipv4s.push(v4_addr.into());
}
}
if let Ok(v6_addr) = local_ipv6().await {
tracing::trace!("got local ipv6: {}", v6_addr);
if !ret.interface_ipv6s.contains(&v6_addr.to_string()) {
ret.interface_ipv6s.push(v6_addr.to_string());
if !ret.interface_ipv6s.contains(&v6_addr.into()) {
ret.interface_ipv6s.push(v6_addr.into());
}
}

View File

@@ -0,0 +1,134 @@
//! This module provides a wrapper type around Tokio's `JoinHandle`: `ScopedTask`, which aborts the task when it is dropped.
//! A `ScopedTask` can still be awaited to join the child task, and abort-on-drop will still trigger while it is being awaited.
//!
//! For example, if task A has spawned task B but is doing something else, and task B is waiting for task C to join,
//! aborting A will also abort both B and C.
use std::future::Future;
use std::ops::Deref;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::task::JoinHandle;
#[derive(Debug)]
pub struct ScopedTask<T> {
inner: JoinHandle<T>,
}
impl<T> Drop for ScopedTask<T> {
fn drop(&mut self) {
self.inner.abort()
}
}
impl<T> Future for ScopedTask<T> {
type Output = <JoinHandle<T> as Future>::Output;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
Pin::new(&mut self.inner).poll(cx)
}
}
impl<T> From<JoinHandle<T>> for ScopedTask<T> {
fn from(inner: JoinHandle<T>) -> Self {
Self { inner }
}
}
impl<T> Deref for ScopedTask<T> {
type Target = JoinHandle<T>;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
#[cfg(test)]
mod tests {
use super::ScopedTask;
use futures_util::future::pending;
use std::sync::{Arc, RwLock};
use tokio::task::yield_now;
struct Sentry(Arc<RwLock<bool>>);
impl Drop for Sentry {
fn drop(&mut self) {
*self.0.write().unwrap() = true
}
}
#[tokio::test]
async fn drop_while_not_waiting_for_join() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let task = ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}));
yield_now().await;
assert!(!*dropped.read().unwrap());
drop(task);
yield_now().await;
assert!(*dropped.read().unwrap());
}
#[tokio::test]
async fn drop_while_waiting_for_join() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let handle = tokio::spawn(async move {
ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}))
.await
.unwrap()
});
yield_now().await;
assert!(!*dropped.read().unwrap());
handle.abort();
yield_now().await;
assert!(*dropped.read().unwrap());
}
#[tokio::test]
async fn no_drop_only_join() {
assert_eq!(
ScopedTask::from(tokio::spawn(async {
yield_now().await;
5
}))
.await
.unwrap(),
5
)
}
#[tokio::test]
async fn manually_abort_before_drop() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let task = ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}));
yield_now().await;
assert!(!*dropped.read().unwrap());
task.abort();
yield_now().await;
assert!(*dropped.read().unwrap());
}
#[tokio::test]
async fn manually_abort_then_join() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let task = ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}));
yield_now().await;
assert!(!*dropped.read().unwrap());
task.abort();
yield_now().await;
assert!(task.await.is_err());
}
}
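A short sketch (not part of the diff) of the abort chain described in the module docs: aborting A tears down B and C as well. It assumes the module is exported as easytier::common::scoped_task and that tokio's full feature set is available, as in the Cargo.toml diff above.

use easytier::common::scoped_task::ScopedTask; // assumed path
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Task A spawns task B; B owns task C through a ScopedTask and waits for it.
    let task_a = tokio::spawn(async {
        let task_b = ScopedTask::from(tokio::spawn(async {
            // Task C never completes on its own.
            let task_c = ScopedTask::from(tokio::spawn(std::future::pending::<()>()));
            let _ = task_c.await; // B waits for C to join
        }));
        let _ = task_b.await; // A waits for B to join
    });

    tokio::time::sleep(Duration::from_millis(10)).await;
    // Aborting A drops B's ScopedTask, which aborts B; dropping B's future in turn
    // drops C's ScopedTask, so C is aborted as well.
    task_a.abort();
    assert!(task_a.await.unwrap_err().is_cancelled());
}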

View File

@@ -1,9 +1,10 @@
use std::collections::BTreeSet;
use std::net::{IpAddr, SocketAddr};
use std::sync::atomic::AtomicBool;
use std::sync::{Arc, RwLock};
use std::time::{Duration, Instant};
use crate::rpc::{NatType, StunInfo};
use crate::proto::common::{NatType, StunInfo};
use anyhow::Context;
use chrono::Local;
use crossbeam::atomic::AtomicCell;
@@ -161,7 +162,7 @@ impl StunClient {
continue;
};
tracing::debug!(b = ?&udp_buf[..len], ?tids, ?remote_addr, ?stun_host, "recv stun response, msg: {:#?}", msg);
tracing::trace!(b = ?&udp_buf[..len], ?tids, ?remote_addr, ?stun_host, "recv stun response, msg: {:#?}", msg);
if msg.class() != MessageClass::SuccessResponse
|| msg.method() != BINDING
@@ -216,7 +217,7 @@ impl StunClient {
changed_addr
}
#[tracing::instrument(ret, err, level = Level::DEBUG)]
#[tracing::instrument(ret, level = Level::TRACE)]
pub async fn bind_request(
self,
change_ip: bool,
@@ -243,7 +244,7 @@ impl StunClient {
.encode_into_bytes(message.clone())
.with_context(|| "encode stun message")?;
tids.push(tid as u128);
tracing::debug!(?message, ?msg, tid, "send stun request");
tracing::trace!(?message, ?msg, tid, "send stun request");
self.socket
.send_to(msg.as_slice().into(), &stun_host)
.await?;
@@ -276,7 +277,7 @@ impl StunClient {
latency_us: now.elapsed().as_micros() as u32,
};
tracing::debug!(
tracing::trace!(
?stun_host,
?recv_addr,
?changed_socket_addr,
@@ -303,14 +304,14 @@ impl StunClientBuilder {
task_set.spawn(
async move {
let mut buf = [0; 1620];
tracing::info!("start stun packet listener");
tracing::trace!("start stun packet listener");
loop {
let Ok((len, addr)) = udp_clone.recv_from(&mut buf).await else {
tracing::error!("udp recv_from error");
break;
};
let data = buf[..len].to_vec();
tracing::debug!(?addr, ?data, "recv udp stun packet");
tracing::trace!(?addr, ?data, "recv udp stun packet");
let _ = stun_packet_sender_clone.send(StunPacket { data, addr });
}
}
@@ -552,12 +553,15 @@ pub struct StunInfoCollector {
udp_nat_test_result: Arc<RwLock<Option<UdpNatTypeDetectResult>>>,
nat_test_result_time: Arc<AtomicCell<chrono::DateTime<Local>>>,
redetect_notify: Arc<tokio::sync::Notify>,
tasks: JoinSet<()>,
tasks: std::sync::Mutex<JoinSet<()>>,
started: AtomicBool,
}
#[async_trait::async_trait]
impl StunInfoCollectorTrait for StunInfoCollector {
fn get_stun_info(&self) -> StunInfo {
self.start_stun_routine();
let Some(result) = self.udp_nat_test_result.read().unwrap().clone() else {
return Default::default();
};
@@ -572,6 +576,8 @@ impl StunInfoCollectorTrait for StunInfoCollector {
}
async fn get_udp_port_mapping(&self, local_port: u16) -> Result<SocketAddr, Error> {
self.start_stun_routine();
let stun_servers = self
.udp_nat_test_result
.read()
@@ -605,17 +611,14 @@ impl StunInfoCollectorTrait for StunInfoCollector {
impl StunInfoCollector {
pub fn new(stun_servers: Vec<String>) -> Self {
let mut ret = Self {
Self {
stun_servers: Arc::new(RwLock::new(stun_servers)),
udp_nat_test_result: Arc::new(RwLock::new(None)),
nat_test_result_time: Arc::new(AtomicCell::new(Local::now())),
redetect_notify: Arc::new(tokio::sync::Notify::new()),
tasks: JoinSet::new(),
};
ret.start_stun_routine();
ret
tasks: std::sync::Mutex::new(JoinSet::new()),
started: AtomicBool::new(false),
}
}
pub fn new_with_default_servers() -> Self {
@@ -648,12 +651,18 @@ impl StunInfoCollector {
.collect()
}
fn start_stun_routine(&mut self) {
fn start_stun_routine(&self) {
if self.started.load(std::sync::atomic::Ordering::Relaxed) {
return;
}
self.started
.store(true, std::sync::atomic::Ordering::Relaxed);
let stun_servers = self.stun_servers.clone();
let udp_nat_test_result = self.udp_nat_test_result.clone();
let udp_test_time = self.nat_test_result_time.clone();
let redetect_notify = self.redetect_notify.clone();
self.tasks.spawn(async move {
self.tasks.lock().unwrap().spawn(async move {
loop {
let servers = stun_servers.read().unwrap().clone();
// use the first three and randomly choose one from the rest
@@ -712,6 +721,31 @@ impl StunInfoCollector {
}
}
pub struct MockStunInfoCollector {
pub udp_nat_type: NatType,
}
#[async_trait::async_trait]
impl StunInfoCollectorTrait for MockStunInfoCollector {
fn get_stun_info(&self) -> StunInfo {
StunInfo {
udp_nat_type: self.udp_nat_type as i32,
tcp_nat_type: NatType::Unknown as i32,
last_update_time: std::time::Instant::now().elapsed().as_secs() as i64,
min_port: 100,
max_port: 200,
..Default::default()
}
}
async fn get_udp_port_mapping(&self, mut port: u16) -> Result<std::net::SocketAddr, Error> {
if port == 0 {
port = 40144;
}
Ok(format!("127.0.0.1:{}", port).parse().unwrap())
}
}
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -4,10 +4,21 @@ use std::{net::SocketAddr, sync::Arc};
use crate::{
common::{error::Error, global_ctx::ArcGlobalCtx, PeerId},
peers::{peer_manager::PeerManager, peer_rpc::PeerRpcManager},
peers::{
peer_manager::PeerManager, peer_rpc::PeerRpcManager,
peer_rpc_service::DirectConnectorManagerRpcServer,
},
proto::{
peer_rpc::{
DirectConnectorRpc, DirectConnectorRpcClientFactory, DirectConnectorRpcServer,
GetIpListRequest, GetIpListResponse,
},
rpc_types::controller::BaseController,
},
};
use crate::rpc::{peer::GetIpListResponse, PeerConnInfo};
use crate::proto::cli::PeerConnInfo;
use anyhow::Context;
use tokio::{task::JoinSet, time::timeout};
use tracing::Instrument;
use url::Host;
@@ -17,11 +28,6 @@ use super::create_connector_by_url;
pub const DIRECT_CONNECTOR_SERVICE_ID: u32 = 1;
pub const DIRECT_CONNECTOR_BLACKLIST_TIMEOUT_SEC: u64 = 300;
#[tarpc::service]
pub trait DirectConnectorRpc {
async fn get_ip_list() -> GetIpListResponse;
}
#[async_trait::async_trait]
pub trait PeerManagerForDirectConnector {
async fn list_peers(&self) -> Vec<PeerId>;
@@ -35,7 +41,10 @@ impl PeerManagerForDirectConnector for PeerManager {
let mut ret = vec![];
let routes = self.list_routes().await;
for r in routes.iter() {
for r in routes
.iter()
.filter(|r| r.feature_flag.map(|r| !r.is_public_server).unwrap_or(true))
{
ret.push(r.peer_id);
}
@@ -51,27 +60,6 @@ impl PeerManagerForDirectConnector for PeerManager {
}
}
#[derive(Clone)]
struct DirectConnectorManagerRpcServer {
// TODO: this only cache for one src peer, should make it global
global_ctx: ArcGlobalCtx,
}
#[tarpc::server]
impl DirectConnectorRpc for DirectConnectorManagerRpcServer {
async fn get_ip_list(self, _: tarpc::context::Context) -> GetIpListResponse {
let mut ret = self.global_ctx.get_ip_collector().collect_ip_addrs().await;
ret.listeners = self.global_ctx.get_running_listeners();
ret
}
}
impl DirectConnectorManagerRpcServer {
pub fn new(global_ctx: ArcGlobalCtx) -> Self {
Self { global_ctx }
}
}
#[derive(Hash, Eq, PartialEq, Clone)]
struct DstBlackListItem(PeerId, String);
@@ -130,10 +118,17 @@ impl DirectConnectorManager {
}
pub fn run_as_server(&mut self) {
self.data.peer_manager.get_peer_rpc_mgr().run_service(
DIRECT_CONNECTOR_SERVICE_ID,
DirectConnectorManagerRpcServer::new(self.global_ctx.clone()).serve(),
);
self.data
.peer_manager
.get_peer_rpc_mgr()
.rpc_server()
.registry()
.register(
DirectConnectorRpcServer::new(DirectConnectorManagerRpcServer::new(
self.global_ctx.clone(),
)),
&self.data.global_ctx.get_network_name(),
);
}
pub fn run_as_client(&mut self) {
@@ -238,7 +233,8 @@ impl DirectConnectorManager {
let enable_ipv6 = data.global_ctx.get_flags().enable_ipv6;
let available_listeners = ip_list
.listeners
.iter()
.into_iter()
.map(Into::<url::Url>::into)
.filter_map(|l| if l.scheme() != "ring" { Some(l) } else { None })
.filter(|l| l.port().is_some() && l.host().is_some())
.filter(|l| {
@@ -268,7 +264,7 @@ impl DirectConnectorManager {
Some(SocketAddr::V4(_)) => {
ip_list.interface_ipv4s.iter().for_each(|ip| {
let mut addr = (*listener).clone();
if addr.set_host(Some(ip.as_str())).is_ok() {
if addr.set_host(Some(ip.to_string().as_str())).is_ok() {
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
@@ -277,19 +273,27 @@ impl DirectConnectorManager {
}
});
let mut addr = (*listener).clone();
if addr.set_host(Some(ip_list.public_ipv4.as_str())).is_ok() {
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
if let Some(public_ipv4) = ip_list.public_ipv4 {
let mut addr = (*listener).clone();
if addr
.set_host(Some(public_ipv4.to_string().as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
}
}
}
Some(SocketAddr::V6(_)) => {
ip_list.interface_ipv6s.iter().for_each(|ip| {
let mut addr = (*listener).clone();
if addr.set_host(Some(format!("[{}]", ip).as_str())).is_ok() {
if addr
.set_host(Some(format!("[{}]", ip.to_string()).as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
@@ -298,16 +302,18 @@ impl DirectConnectorManager {
}
});
let mut addr = (*listener).clone();
if addr
.set_host(Some(format!("[{}]", ip_list.public_ipv6).as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
if let Some(public_ipv6) = ip_list.public_ipv6 {
let mut addr = (*listener).clone();
if addr
.set_host(Some(format!("[{}]", public_ipv6.to_string()).as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
}
}
}
p => {
@@ -351,16 +357,21 @@ impl DirectConnectorManager {
tracing::trace!("try direct connect to peer: {}", dst_peer_id);
let ip_list = peer_manager
let rpc_stub = peer_manager
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, dst_peer_id, |c| async {
let client =
DirectConnectorRpcClient::new(tarpc::client::Config::default(), c).spawn();
let ip_list = client.get_ip_list(tarpc::context::current()).await;
tracing::info!(ip_list = ?ip_list, dst_peer_id = ?dst_peer_id, "got ip list");
ip_list
})
.await?;
.rpc_client()
.scoped_client::<DirectConnectorRpcClientFactory<BaseController>>(
peer_manager.my_peer_id(),
dst_peer_id,
data.global_ctx.get_network_name(),
);
let ip_list = rpc_stub
.get_ip_list(BaseController {}, GetIpListRequest {})
.await
.with_context(|| format!("get ip list from peer {}", dst_peer_id))?;
tracing::info!(ip_list = ?ip_list, dst_peer_id = ?dst_peer_id, "got ip list");
Self::do_try_direct_connect_internal(data, dst_peer_id, ip_list).await
}
@@ -380,7 +391,7 @@ mod tests {
connect_peer_manager, create_mock_peer_manager, wait_route_appear,
wait_route_appear_with_cost,
},
rpc::peer::GetIpListResponse,
proto::peer_rpc::GetIpListResponse,
};
#[rstest::rstest]
@@ -436,12 +447,14 @@ mod tests {
p_a.get_global_ctx(),
p_a.clone(),
));
let mut ip_list = GetIpListResponse::new();
let mut ip_list = GetIpListResponse::default();
ip_list
.listeners
.push("tcp://127.0.0.1:10222".parse().unwrap());
ip_list.interface_ipv4s.push("127.0.0.1".to_string());
ip_list
.interface_ipv4s
.push("127.0.0.1".parse::<std::net::Ipv4Addr>().unwrap().into());
DirectConnectorManager::do_try_direct_connect_internal(data.clone(), 1, ip_list.clone())
.await

View File

@@ -11,7 +11,12 @@ use tokio::{
use crate::{
common::PeerId,
peers::peer_conn::PeerConnId,
rpc as easytier_rpc,
proto::{
cli::{
ConnectorManageAction, ListConnectorResponse, ManageConnectorResponse, PeerConnInfo,
},
rpc_types::{self, controller::BaseController},
},
tunnel::{IpVersion, TunnelConnector},
};
@@ -23,9 +28,9 @@ use crate::{
},
connector::set_bind_addr_for_peer_connector,
peers::peer_manager::PeerManager,
rpc::{
connector_manage_rpc_server::ConnectorManageRpc, Connector, ConnectorStatus,
ListConnectorRequest, ManageConnectorRequest,
proto::cli::{
Connector, ConnectorManageRpc, ConnectorStatus, ListConnectorRequest,
ManageConnectorRequest,
},
use_global_var,
};
@@ -46,7 +51,7 @@ struct ConnectorManagerData {
connectors: ConnectorMap,
reconnecting: DashSet<String>,
peer_manager: Arc<PeerManager>,
alive_conn_urls: Arc<Mutex<BTreeSet<String>>>,
alive_conn_urls: Arc<DashSet<String>>,
// user removed connector urls
removed_conn_urls: Arc<DashSet<String>>,
net_ns: NetNS,
@@ -71,7 +76,7 @@ impl ManualConnectorManager {
connectors,
reconnecting: DashSet::new(),
peer_manager,
alive_conn_urls: Arc::new(Mutex::new(BTreeSet::new())),
alive_conn_urls: Arc::new(DashSet::new()),
removed_conn_urls: Arc::new(DashSet::new()),
net_ns: global_ctx.net_ns.clone(),
global_ctx,
@@ -80,7 +85,11 @@ impl ManualConnectorManager {
};
ret.tasks
.spawn(Self::conn_mgr_routine(ret.data.clone(), event_subscriber));
.spawn(Self::conn_mgr_reconn_routine(ret.data.clone()));
ret.tasks.spawn(Self::conn_mgr_handle_event_routine(
ret.data.clone(),
event_subscriber,
));
ret
}
@@ -101,12 +110,18 @@ impl ManualConnectorManager {
Ok(())
}
pub async fn remove_connector(&self, url: &str) -> Result<(), Error> {
pub async fn remove_connector(&self, url: url::Url) -> Result<(), Error> {
tracing::info!("remove_connector: {}", url);
if !self.list_connectors().await.iter().any(|x| x.url == url) {
let url = url.into();
if !self
.list_connectors()
.await
.iter()
.any(|x| x.url.as_ref() == Some(&url))
{
return Err(Error::NotFound);
}
self.data.removed_conn_urls.insert(url.into());
self.data.removed_conn_urls.insert(url.to_string());
Ok(())
}
@@ -133,7 +148,7 @@ impl ManualConnectorManager {
ret.insert(
0,
Connector {
url: conn_url,
url: Some(conn_url.parse().unwrap()),
status: status.into(),
},
);
@@ -150,7 +165,7 @@ impl ManualConnectorManager {
ret.insert(
0,
Connector {
url: conn_url,
url: Some(conn_url.parse().unwrap()),
status: ConnectorStatus::Connecting.into(),
},
);
@@ -159,10 +174,17 @@ impl ManualConnectorManager {
ret
}
async fn conn_mgr_routine(
async fn conn_mgr_handle_event_routine(
data: Arc<ConnectorManagerData>,
mut event_recv: Receiver<GlobalCtxEvent>,
) {
loop {
let event = event_recv.recv().await.expect("event_recv got error");
Self::handle_event(&event, &data).await;
}
}
async fn conn_mgr_reconn_routine(data: Arc<ConnectorManagerData>) {
tracing::warn!("conn_mgr_routine started");
let mut reconn_interval = tokio::time::interval(std::time::Duration::from_millis(
use_global_var!(MANUAL_CONNECTOR_RECONNECT_INTERVAL_MS),
@@ -171,15 +193,6 @@ impl ManualConnectorManager {
loop {
tokio::select! {
event = event_recv.recv() => {
if let Ok(event) = event {
Self::handle_event(&event, data.clone()).await;
} else {
tracing::warn!(?event, "event_recv got error");
panic!("event_recv got error, err: {:?}", event);
}
}
_ = reconn_interval.tick() => {
let dead_urls = Self::collect_dead_conns(data.clone()).await;
if dead_urls.is_empty() {
@@ -210,17 +223,24 @@ impl ManualConnectorManager {
}
}
async fn handle_event(event: &GlobalCtxEvent, data: Arc<ConnectorManagerData>) {
async fn handle_event(event: &GlobalCtxEvent, data: &ConnectorManagerData) {
let need_add_alive = |conn_info: &PeerConnInfo| conn_info.is_client;
match event {
GlobalCtxEvent::PeerConnAdded(conn_info) => {
if !need_add_alive(conn_info) {
return;
}
let addr = conn_info.tunnel.as_ref().unwrap().remote_addr.clone();
data.alive_conn_urls.lock().await.insert(addr);
data.alive_conn_urls.insert(addr.unwrap().to_string());
tracing::warn!("peer conn added: {:?}", conn_info);
}
GlobalCtxEvent::PeerConnRemoved(conn_info) => {
if !need_add_alive(conn_info) {
return;
}
let addr = conn_info.tunnel.as_ref().unwrap().remote_addr.clone();
data.alive_conn_urls.lock().await.remove(&addr);
data.alive_conn_urls.remove(&addr.unwrap().to_string());
tracing::warn!("peer conn removed: {:?}", conn_info);
}
@@ -252,13 +272,18 @@ impl ManualConnectorManager {
async fn collect_dead_conns(data: Arc<ConnectorManagerData>) -> BTreeSet<String> {
Self::handle_remove_connector(data.clone());
let curr_alive = data.alive_conn_urls.lock().await.clone();
let all_urls: BTreeSet<String> = data
.connectors
.iter()
.map(|x| x.key().clone().into())
.collect();
&all_urls - &curr_alive
let mut ret = BTreeSet::new();
for url in all_urls.iter() {
if !data.alive_conn_urls.contains(url) {
ret.insert(url.clone());
}
}
ret
}
async fn conn_reconnect_with_ip_version(
@@ -289,7 +314,7 @@ impl ManualConnectorManager {
tracing::info!("reconnect get tunnel succ: {:?}", tunnel);
assert_eq!(
dead_url,
tunnel.info().unwrap().remote_addr,
tunnel.info().unwrap().remote_addr.unwrap().to_string(),
"info: {:?}",
tunnel.info()
);
@@ -371,45 +396,43 @@ impl ManualConnectorManager {
}
}
#[derive(Clone)]
pub struct ConnectorManagerRpcService(pub Arc<ManualConnectorManager>);
#[tonic::async_trait]
#[async_trait::async_trait]
impl ConnectorManageRpc for ConnectorManagerRpcService {
type Controller = BaseController;
async fn list_connector(
&self,
_request: tonic::Request<ListConnectorRequest>,
) -> Result<tonic::Response<easytier_rpc::ListConnectorResponse>, tonic::Status> {
let mut ret = easytier_rpc::ListConnectorResponse::default();
_: BaseController,
_request: ListConnectorRequest,
) -> Result<ListConnectorResponse, rpc_types::error::Error> {
let mut ret = ListConnectorResponse::default();
let connectors = self.0.list_connectors().await;
ret.connectors = connectors;
Ok(tonic::Response::new(ret))
Ok(ret)
}
async fn manage_connector(
&self,
request: tonic::Request<ManageConnectorRequest>,
) -> Result<tonic::Response<easytier_rpc::ManageConnectorResponse>, tonic::Status> {
let req = request.into_inner();
let url = url::Url::parse(&req.url)
.map_err(|_| tonic::Status::invalid_argument("invalid url"))?;
if req.action == easytier_rpc::ConnectorManageAction::Remove as i32 {
self.0.remove_connector(url.path()).await.map_err(|e| {
tonic::Status::invalid_argument(format!("remove connector failed: {:?}", e))
})?;
return Ok(tonic::Response::new(
easytier_rpc::ManageConnectorResponse::default(),
));
_: BaseController,
req: ManageConnectorRequest,
) -> Result<ManageConnectorResponse, rpc_types::error::Error> {
let url: url::Url = req.url.ok_or(anyhow::anyhow!("url is empty"))?.into();
if req.action == ConnectorManageAction::Remove as i32 {
self.0
.remove_connector(url.clone())
.await
.with_context(|| format!("remove connector failed: {:?}", url))?;
return Ok(ManageConnectorResponse::default());
} else {
self.0
.add_connector_by_url(url.as_str())
.await
.map_err(|e| {
tonic::Status::invalid_argument(format!("add connector failed: {:?}", e))
})?;
.with_context(|| format!("add connector failed: {:?}", url))?;
}
Ok(tonic::Response::new(
easytier_rpc::ManageConnectorResponse::default(),
))
Ok(ManageConnectorResponse::default())
}
}

View File

@@ -32,14 +32,14 @@ async fn set_bind_addr_for_peer_connector(
if is_ipv4 {
let mut bind_addrs = vec![];
for ipv4 in ips.interface_ipv4s {
let socket_addr = SocketAddrV4::new(ipv4.parse().unwrap(), 0).into();
let socket_addr = SocketAddrV4::new(ipv4.into(), 0).into();
bind_addrs.push(socket_addr);
}
connector.set_bind_addrs(bind_addrs);
} else {
let mut bind_addrs = vec![];
for ipv6 in ips.interface_ipv6s {
let socket_addr = SocketAddrV6::new(ipv6.parse().unwrap(), 0, 0, 0).into();
let socket_addr = SocketAddrV6::new(ipv6.into(), 0, 0, 0).into();
bind_addrs.push(socket_addr);
}
connector.set_bind_addrs(bind_addrs);

View File

@@ -5,6 +5,7 @@ use std::{
Arc,
},
time::Duration,
u16,
};
use anyhow::Context;
@@ -21,12 +22,20 @@ use zerocopy::FromBytes;
use crate::{
common::{
constants, error::Error, global_ctx::ArcGlobalCtx, join_joinset_background, netns::NetNS,
stun::StunInfoCollectorTrait, PeerId,
error::Error, global_ctx::ArcGlobalCtx, join_joinset_background, netns::NetNS,
scoped_task::ScopedTask, stun::StunInfoCollectorTrait, PeerId,
},
defer,
peers::peer_manager::PeerManager,
rpc::NatType,
proto::{
common::NatType,
peer_rpc::{
TryPunchHoleRequest, TryPunchHoleResponse, TryPunchSymmetricRequest,
TryPunchSymmetricResponse, UdpHolePunchRpc, UdpHolePunchRpcClientFactory,
UdpHolePunchRpcServer,
},
rpc_types::{self, controller::BaseController},
},
tunnel::{
common::setup_sokcet2,
packet_def::{UDPTunnelHeader, UdpPacketType, UDP_TUNNEL_HEADER_SIZE},
@@ -186,21 +195,6 @@ impl std::fmt::Debug for UdpSocketArray {
}
}
#[tarpc::service]
pub trait UdpHolePunchService {
async fn try_punch_hole(local_mapped_addr: SocketAddr) -> Option<SocketAddr>;
async fn try_punch_symmetric(
listener_addr: SocketAddr,
port: u16,
public_ips: Vec<Ipv4Addr>,
min_port: u16,
max_port: u16,
transaction_id: u32,
round: u32,
last_port_index: usize,
) -> Option<usize>;
}
#[derive(Debug)]
struct UdpHolePunchListener {
socket: Arc<UdpSocket>,
@@ -324,23 +318,34 @@ impl UdpHolePunchConnectorData {
}
#[derive(Clone)]
struct UdpHolePunchRpcServer {
struct UdpHolePunchRpcService {
data: Arc<UdpHolePunchConnectorData>,
tasks: Arc<std::sync::Mutex<JoinSet<()>>>,
}
#[tarpc::server]
impl UdpHolePunchService for UdpHolePunchRpcServer {
#[async_trait::async_trait]
impl UdpHolePunchRpc for UdpHolePunchRpcService {
type Controller = BaseController;
#[tracing::instrument(skip(self))]
async fn try_punch_hole(
self,
_: tarpc::context::Context,
local_mapped_addr: SocketAddr,
) -> Option<SocketAddr> {
&self,
_: BaseController,
request: TryPunchHoleRequest,
) -> Result<TryPunchHoleResponse, rpc_types::error::Error> {
let local_mapped_addr = request.local_mapped_addr.ok_or(anyhow::anyhow!(
"try_punch_hole request missing local_mapped_addr"
))?;
let local_mapped_addr = std::net::SocketAddr::from(local_mapped_addr);
// local mapped addr will be unspecified if peer is symmetric
let peer_is_symmetric = local_mapped_addr.ip().is_unspecified();
let (socket, mapped_addr) = self.select_listener(peer_is_symmetric).await?;
let (socket, mapped_addr) =
self.select_listener(peer_is_symmetric)
.await
.ok_or(anyhow::anyhow!(
"failed to select listener for hole punching"
))?;
tracing::warn!(?local_mapped_addr, ?mapped_addr, "start hole punching");
if !peer_is_symmetric {
@@ -380,32 +385,48 @@ impl UdpHolePunchService for UdpHolePunchRpcServer {
}
}
Some(mapped_addr)
Ok(TryPunchHoleResponse {
remote_mapped_addr: Some(mapped_addr.into()),
})
}
#[instrument(skip(self))]
async fn try_punch_symmetric(
self,
_: tarpc::context::Context,
listener_addr: SocketAddr,
port: u16,
public_ips: Vec<Ipv4Addr>,
mut min_port: u16,
mut max_port: u16,
transaction_id: u32,
round: u32,
last_port_index: usize,
) -> Option<usize> {
&self,
_: BaseController,
request: TryPunchSymmetricRequest,
) -> Result<TryPunchSymmetricResponse, rpc_types::error::Error> {
let listener_addr = request.listener_addr.ok_or(anyhow::anyhow!(
"try_punch_symmetric request missing listener_addr"
))?;
let listener_addr = std::net::SocketAddr::from(listener_addr);
let port = request.port as u16;
let public_ips = request
.public_ips
.into_iter()
.map(|ip| std::net::Ipv4Addr::from(ip))
.collect::<Vec<_>>();
let mut min_port = request.min_port as u16;
let mut max_port = request.max_port as u16;
let transaction_id = request.transaction_id;
let round = request.round;
let last_port_index = request.last_port_index as usize;
tracing::info!("try_punch_symmetric start");
let punch_predictablely = self.data.punch_predicablely.load(Ordering::Relaxed);
let punch_randomly = self.data.punch_randomly.load(Ordering::Relaxed);
let total_port_count = self.data.shuffled_port_vec.len();
let listener = self.find_listener(&listener_addr).await?;
let listener = self
.find_listener(&listener_addr)
.await
.ok_or(anyhow::anyhow!(
"try_punch_symmetric failed to find listener"
))?;
let ip_count = public_ips.len();
if ip_count == 0 {
tracing::warn!("try_punch_symmetric got zero len public ip");
return None;
return Err(anyhow::anyhow!("try_punch_symmetric got zero len public ip").into());
}
min_port = std::cmp::max(1, min_port);
@@ -417,12 +438,12 @@ impl UdpHolePunchService for UdpHolePunchRpcServer {
}
// send max k1 packets if we are predicting the dst port
let max_k1 = 180;
let max_k1 = 60;
// send max k2 packets if we are sending to a random port
let max_k2 = rand::thread_rng().gen_range(600..800);
// this means the NAT is allocating ports in a predictable way
if max_port.abs_diff(min_port) <= max_k1 && round <= 6 && punch_predictablely {
if max_port.abs_diff(min_port) <= 3 * max_k1 && round <= 6 && punch_predictablely {
let (min_port, max_port) = {
// rounds begin from 0. if the round is even, we guess ports in increasing order
let port_delta = (max_k1 as u32) / ip_count as u32;
@@ -447,7 +468,7 @@ impl UdpHolePunchService for UdpHolePunchRpcServer {
&ports,
)
.await
.ok()?;
.with_context(|| "failed to send symmetric hole punch packet predict")?;
}
if punch_randomly {
@@ -461,20 +482,22 @@ impl UdpHolePunchService for UdpHolePunchRpcServer {
&self.data.shuffled_port_vec[start..end],
)
.await
.ok()?;
.with_context(|| "failed to send symmetric hole punch packet randomly")?;
return if end >= self.data.shuffled_port_vec.len() {
Some(1)
Ok(TryPunchSymmetricResponse { last_port_index: 1 })
} else {
Some(end)
Ok(TryPunchSymmetricResponse {
last_port_index: end as u32,
})
};
}
return Some(1);
return Ok(TryPunchSymmetricResponse { last_port_index: 1 });
}
}
impl UdpHolePunchRpcServer {
impl UdpHolePunchRpcService {
pub fn new(data: Arc<UdpHolePunchConnectorData>) -> Self {
let tasks = Arc::new(std::sync::Mutex::new(JoinSet::new()));
join_joinset_background(tasks.clone(), "UdpHolePunchRpcServer".to_owned());
@@ -593,10 +616,15 @@ impl UdpHolePunchConnector {
}
pub async fn run_as_server(&mut self) -> Result<(), Error> {
self.data.peer_mgr.get_peer_rpc_mgr().run_service(
constants::UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID,
UdpHolePunchRpcServer::new(self.data.clone()).serve(),
);
self.data
.peer_mgr
.get_peer_rpc_mgr()
.rpc_server()
.registry()
.register(
UdpHolePunchRpcServer::new(UdpHolePunchRpcService::new(self.data.clone())),
&self.data.global_ctx.get_network_name(),
);
Ok(())
}
@@ -605,6 +633,9 @@ impl UdpHolePunchConnector {
if self.data.global_ctx.get_flags().disable_p2p {
return Ok(());
}
if self.data.global_ctx.get_flags().disable_udp_hole_punching {
return Ok(());
}
self.run_as_client().await?;
self.run_as_server().await?;
@@ -733,26 +764,26 @@ impl UdpHolePunchConnector {
.with_context(|| "failed to get udp port mapping")?;
// client -> server: tell the server the mapped port; the server will return the mapped address of its listening port.
let Some(remote_mapped_addr) = data
let rpc_stub = data
.peer_mgr
.get_peer_rpc_mgr()
.do_client_rpc_scoped(
constants::UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID,
.rpc_client()
.scoped_client::<UdpHolePunchRpcClientFactory<BaseController>>(
data.peer_mgr.my_peer_id(),
dst_peer_id,
|c| async {
let client =
UdpHolePunchServiceClient::new(tarpc::client::Config::default(), c).spawn();
let remote_mapped_addr = client
.try_punch_hole(tarpc::context::current(), local_mapped_addr)
.await;
tracing::info!(?remote_mapped_addr, ?dst_peer_id, "got remote mapped addr");
remote_mapped_addr
data.global_ctx.get_network_name(),
);
let remote_mapped_addr = rpc_stub
.try_punch_hole(
BaseController {},
TryPunchHoleRequest {
local_mapped_addr: Some(local_mapped_addr.into()),
},
)
.await?
else {
return Err(anyhow::anyhow!("failed to get remote mapped addr"));
};
.remote_mapped_addr
.ok_or(anyhow::anyhow!("failed to get remote mapped addr"))?;
// server: will send some punching responses, 10 packets in total.
// client: use the socket to create UdpTunnel with UdpTunnelConnector
@@ -766,9 +797,11 @@ impl UdpHolePunchConnector {
setup_sokcet2(&socket2_socket, &local_socket_addr)?;
let socket = Arc::new(UdpSocket::from_std(socket2_socket.into())?);
Ok(Self::try_connect_with_socket(socket, remote_mapped_addr)
.await
.with_context(|| "UdpTunnelConnector failed to connect remote")?)
Ok(
Self::try_connect_with_socket(socket, remote_mapped_addr.into())
.await
.with_context(|| "UdpTunnelConnector failed to connect remote")?,
)
}
#[tracing::instrument(err(level = Level::ERROR))]
@@ -780,30 +813,28 @@ impl UdpHolePunchConnector {
return Err(anyhow::anyhow!("udp array not started"));
};
let Some(remote_mapped_addr) = data
let rpc_stub = data
.peer_mgr
.get_peer_rpc_mgr()
.do_client_rpc_scoped(
constants::UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID,
.rpc_client()
.scoped_client::<UdpHolePunchRpcClientFactory<BaseController>>(
data.peer_mgr.my_peer_id(),
dst_peer_id,
|c| async {
let client =
UdpHolePunchServiceClient::new(tarpc::client::Config::default(), c).spawn();
let remote_mapped_addr = client
.try_punch_hole(tarpc::context::current(), "0.0.0.0:0".parse().unwrap())
.await;
tracing::debug!(
?remote_mapped_addr,
?dst_peer_id,
"hole punching symmetric got remote mapped addr"
);
remote_mapped_addr
data.global_ctx.get_network_name(),
);
let local_mapped_addr: SocketAddr = "0.0.0.0:0".parse().unwrap();
let remote_mapped_addr = rpc_stub
.try_punch_hole(
BaseController {},
TryPunchHoleRequest {
local_mapped_addr: Some(local_mapped_addr.into()),
},
)
.await?
else {
return Err(anyhow::anyhow!("failed to get remote mapped addr"));
};
.remote_mapped_addr
.ok_or(anyhow::anyhow!("failed to get remote mapped addr"))?
.into();
// try direct connect first
if data.try_direct_connect.load(Ordering::Relaxed) {
@@ -846,41 +877,38 @@ impl UdpHolePunchConnector {
return Err(anyhow::anyhow!("failed to get public ips"));
}
let mut last_port_idx = 0;
let mut last_port_idx = rand::thread_rng().gen_range(0..data.shuffled_port_vec.len());
for round in 0..30 {
let Some(next_last_port_idx) = data
.peer_mgr
.get_peer_rpc_mgr()
.do_client_rpc_scoped(
constants::UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID,
dst_peer_id,
|c| async {
let client =
UdpHolePunchServiceClient::new(tarpc::client::Config::default(), c)
.spawn();
let last_port_idx = client
.try_punch_symmetric(
tarpc::context::current(),
remote_mapped_addr,
port,
public_ips.clone(),
stun_info.min_port as u16,
stun_info.max_port as u16,
tid,
round,
last_port_idx,
)
.await;
tracing::info!(?last_port_idx, ?dst_peer_id, "punch symmetric return");
last_port_idx
for round in 0..5 {
let ret = rpc_stub
.try_punch_symmetric(
BaseController {},
TryPunchSymmetricRequest {
listener_addr: Some(remote_mapped_addr.into()),
port: port as u32,
public_ips: public_ips.clone().into_iter().map(|x| x.into()).collect(),
min_port: stun_info.min_port as u32,
max_port: stun_info.max_port as u32,
transaction_id: tid,
round,
last_port_index: last_port_idx as u32,
},
)
.await?
else {
return Err(anyhow::anyhow!("failed to get remote mapped addr"));
.await;
tracing::info!(?ret, "punch symmetric return");
let next_last_port_idx = match ret {
Ok(s) => s.last_port_index as usize,
Err(err) => {
tracing::error!(?err, "failed to get remote mapped addr");
rand::thread_rng().gen_range(0..data.shuffled_port_vec.len())
}
};
// wait for some time to increase the chance of receiving a hole punching packet
tokio::time::sleep(Duration::from_secs(2)).await;
// no matter what the result is, we should check if we received any hole punching packet
while let Some(socket) = udp_array.try_fetch_punched_socket(tid) {
if let Ok(tunnel) = Self::try_connect_with_socket(socket, remote_mapped_addr).await
{
@@ -898,8 +926,8 @@ impl UdpHolePunchConnector {
data: Arc<UdpHolePunchConnectorData>,
peer_id: PeerId,
) -> Result<(), anyhow::Error> {
const MAX_BACKOFF_TIME: u64 = 600;
let mut backoff_time = vec![15, 15, 30, 30, 60, 120, 300, MAX_BACKOFF_TIME];
const MAX_BACKOFF_TIME: u64 = 300;
let mut backoff_time = vec![15, 15, 30, 30, 60, 120, 180, MAX_BACKOFF_TIME];
let my_nat_type = data.my_nat_type();
loop {
@@ -939,7 +967,7 @@ impl UdpHolePunchConnector {
async fn main_loop(data: Arc<UdpHolePunchConnectorData>) {
type JoinTaskRet = Result<(), anyhow::Error>;
type JoinTask = tokio::task::JoinHandle<JoinTaskRet>;
type JoinTask = ScopedTask<JoinTaskRet>;
let punching_task = Arc::new(DashMap::<(PeerId, NatType), JoinTask>::new());
let mut last_my_nat_type = NatType::Unknown;
@@ -975,23 +1003,27 @@ impl UdpHolePunchConnector {
last_my_nat_type = my_nat_type;
if !peers_to_connect.is_empty() {
let my_nat_type = data.my_nat_type();
if my_nat_type == NatType::Symmetric || my_nat_type == NatType::SymUdpFirewall {
let mut udp_array = data.udp_array.lock().await;
if udp_array.is_none() {
*udp_array = Some(Arc::new(UdpSocketArray::new(
data.udp_array_size.load(Ordering::Relaxed),
data.global_ctx.net_ns.clone(),
)));
}
let udp_array = udp_array.as_ref().unwrap();
udp_array.start().await.unwrap();
}
for item in peers_to_connect {
if punching_task.contains_key(&item) {
continue;
}
let my_nat_type = data.my_nat_type();
if my_nat_type == NatType::Symmetric || my_nat_type == NatType::SymUdpFirewall {
let mut udp_array = data.udp_array.lock().await;
if udp_array.is_none() {
*udp_array = Some(Arc::new(UdpSocketArray::new(
data.udp_array_size.load(Ordering::Relaxed),
data.global_ctx.net_ns.clone(),
)));
}
let udp_array = udp_array.as_ref().unwrap();
udp_array.start().await.unwrap();
}
punching_task.insert(
item,
tokio::spawn(Self::peer_punching_task(data.clone(), item.0)),
tokio::spawn(Self::peer_punching_task(data.clone(), item.0)).into(),
);
}
} else if punching_task.is_empty() {
@@ -1011,11 +1043,11 @@ pub mod tests {
use tokio::net::UdpSocket;
use crate::rpc::{NatType, StunInfo};
use crate::common::stun::MockStunInfoCollector;
use crate::proto::common::NatType;
use crate::tunnel::common::tests::wait_for_condition;
use crate::{
common::{error::Error, stun::StunInfoCollectorTrait},
connector::udp_hole_punch::UdpHolePunchConnector,
peers::{
peer_manager::PeerManager,
@@ -1026,31 +1058,6 @@ pub mod tests {
},
};
struct MockStunInfoCollector {
udp_nat_type: NatType,
}
#[async_trait::async_trait]
impl StunInfoCollectorTrait for MockStunInfoCollector {
fn get_stun_info(&self) -> StunInfo {
StunInfo {
udp_nat_type: self.udp_nat_type as i32,
tcp_nat_type: NatType::Unknown as i32,
last_update_time: std::time::Instant::now().elapsed().as_secs() as i64,
min_port: 100,
max_port: 200,
..Default::default()
}
}
async fn get_udp_port_mapping(&self, mut port: u16) -> Result<std::net::SocketAddr, Error> {
if port == 0 {
port = 40144;
}
Ok(format!("127.0.0.1:{}", port).parse().unwrap())
}
}
pub fn replace_stun_info_collector(peer_mgr: Arc<PeerManager>, udp_nat_type: NatType) {
let collector = Box::new(MockStunInfoCollector { udp_nat_type });
peer_mgr
@@ -1170,9 +1177,9 @@ pub mod tests {
let udp_self = Arc::new(UdpSocket::bind("0.0.0.0:40144").await.unwrap());
let udp_inc = Arc::new(UdpSocket::bind("0.0.0.0:40147").await.unwrap());
let udp_inc2 = Arc::new(UdpSocket::bind("0.0.0.0:40400").await.unwrap());
let udp_inc2 = Arc::new(UdpSocket::bind("0.0.0.0:40200").await.unwrap());
let udp_dec = Arc::new(UdpSocket::bind("0.0.0.0:40140").await.unwrap());
let udp_dec2 = Arc::new(UdpSocket::bind("0.0.0.0:40350").await.unwrap());
let udp_dec2 = Arc::new(UdpSocket::bind("0.0.0.0:40050").await.unwrap());
let udps = vec![udp_self, udp_inc, udp_inc2, udp_dec, udp_dec2];
let counter = Arc::new(AtomicU32::new(0));
@@ -1183,7 +1190,7 @@ pub mod tests {
tokio::spawn(async move {
let mut buf = [0u8; 1024];
let (len, addr) = udp.recv_from(&mut buf).await.unwrap();
println!("{:?} {:?}", len, addr);
println!("{:?} {:?} {:?}", len, addr, udp.local_addr());
counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
});
}

View File

@@ -1,26 +1,29 @@
#![allow(dead_code)]
use std::{net::SocketAddr, time::Duration, vec};
use std::{net::SocketAddr, sync::Mutex, time::Duration, vec};
use anyhow::{Context, Ok};
use clap::{command, Args, Parser, Subcommand};
use common::stun::StunInfoCollectorTrait;
use rpc::vpn_portal_rpc_client::VpnPortalRpcClient;
use proto::{
common::NatType,
peer_rpc::{GetGlobalPeerMapRequest, PeerCenterRpc, PeerCenterRpcClientFactory},
rpc_impl::standalone::StandAloneClient,
rpc_types::controller::BaseController,
};
use tokio::time::timeout;
use tunnel::tcp::TcpTunnelConnector;
use utils::{list_peer_route_pair, PeerRoutePair};
mod arch;
mod common;
mod rpc;
mod proto;
mod tunnel;
mod utils;
use crate::{
common::stun::StunInfoCollector,
rpc::{
connector_manage_rpc_client::ConnectorManageRpcClient,
peer_center_rpc_client::PeerCenterRpcClient, peer_manage_rpc_client::PeerManageRpcClient,
*,
},
proto::cli::*,
utils::{cost_to_str, float_to_str},
};
use humansize::format_size;
@@ -48,6 +51,7 @@ enum SubCommand {
Route(RouteArgs),
PeerCenter,
VpnPortal,
Node(NodeArgs),
}
#[derive(Args, Debug)]
@@ -68,6 +72,7 @@ enum PeerSubCommand {
Remove,
List(PeerListArgs),
ListForeign,
ListGlobalForeign,
}
#[derive(Args, Debug)]
@@ -101,56 +106,88 @@ enum ConnectorSubCommand {
List,
}
#[derive(thiserror::Error, Debug)]
enum Error {
#[error("tonic transport error")]
TonicTransportError(#[from] tonic::transport::Error),
#[error("tonic rpc error")]
TonicRpcError(#[from] tonic::Status),
#[derive(Subcommand, Debug)]
enum NodeSubCommand {
Info,
Config,
}
#[derive(Args, Debug)]
struct NodeArgs {
#[command(subcommand)]
sub_command: Option<NodeSubCommand>,
}
type Error = anyhow::Error;
struct CommandHandler {
addr: String,
client: Mutex<RpcClient>,
verbose: bool,
}
type RpcClient = StandAloneClient<TcpTunnelConnector>;
impl CommandHandler {
async fn get_peer_manager_client(
&self,
) -> Result<PeerManageRpcClient<tonic::transport::Channel>, Error> {
Ok(PeerManageRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn PeerManageRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<PeerManageRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get peer manager client")?)
}
async fn get_connector_manager_client(
&self,
) -> Result<ConnectorManageRpcClient<tonic::transport::Channel>, Error> {
Ok(ConnectorManageRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn ConnectorManageRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<ConnectorManageRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get connector manager client")?)
}
async fn get_peer_center_client(
&self,
) -> Result<PeerCenterRpcClient<tonic::transport::Channel>, Error> {
Ok(PeerCenterRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn PeerCenterRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<PeerCenterRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get peer center client")?)
}
async fn get_vpn_portal_client(
&self,
) -> Result<VpnPortalRpcClient<tonic::transport::Channel>, Error> {
Ok(VpnPortalRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn VpnPortalRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<VpnPortalRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get vpn portal client")?)
}
async fn list_peers(&self) -> Result<ListPeerResponse, Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(ListPeerRequest::default());
let response = client.list_peer(request).await?;
Ok(response.into_inner())
let client = self.get_peer_manager_client().await?;
let request = ListPeerRequest::default();
let response = client.list_peer(BaseController {}, request).await?;
Ok(response)
}
async fn list_routes(&self) -> Result<ListRouteResponse, Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(ListRouteRequest::default());
let response = client.list_route(request).await?;
Ok(response.into_inner())
let client = self.get_peer_manager_client().await?;
let request = ListRouteRequest::default();
let response = client.list_route(BaseController {}, request).await?;
Ok(response)
}
async fn list_peer_route_pair(&self) -> Result<Vec<PeerRoutePair>, Error> {
@@ -182,6 +219,7 @@ impl CommandHandler {
tunnel_proto: String,
nat_type: String,
id: String,
version: String,
}
impl From<PeerRoutePair> for PeerTableItem {
@@ -197,6 +235,33 @@ impl CommandHandler {
tunnel_proto: p.get_conn_protos().unwrap_or(vec![]).join(",").to_string(),
nat_type: p.get_udp_nat_type(),
id: p.route.peer_id.to_string(),
version: if p.route.version.is_empty() {
"unknown".to_string()
} else {
p.route.version.to_string()
},
}
}
}
impl From<NodeInfo> for PeerTableItem {
fn from(p: NodeInfo) -> Self {
PeerTableItem {
ipv4: p.ipv4_addr.clone(),
hostname: p.hostname.clone(),
cost: "Local".to_string(),
lat_ms: "-".to_string(),
loss_rate: "-".to_string(),
rx_bytes: "-".to_string(),
tx_bytes: "-".to_string(),
tunnel_proto: "-".to_string(),
nat_type: if let Some(info) = p.stun_info {
info.udp_nat_type().as_str_name().to_string()
} else {
"Unknown".to_string()
},
id: p.peer_id.to_string(),
version: p.version,
}
}
}
@@ -208,6 +273,14 @@ impl CommandHandler {
return Ok(());
}
let client = self.get_peer_manager_client().await?;
let node_info = client
.show_node_info(BaseController {}, ShowNodeInfoRequest::default())
.await?
.node_info
.ok_or(anyhow::anyhow!("node info not found"))?;
items.push(node_info.into());
for p in peer_routes {
items.push(p.into());
}
@@ -221,18 +294,20 @@ impl CommandHandler {
}
async fn handle_route_dump(&self) -> Result<(), Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(DumpRouteRequest::default());
let response = client.dump_route(request).await?;
println!("response: {}", response.into_inner().result);
let client = self.get_peer_manager_client().await?;
let request = DumpRouteRequest::default();
let response = client.dump_route(BaseController {}, request).await?;
println!("response: {}", response.result);
Ok(())
}
async fn handle_foreign_network_list(&self) -> Result<(), Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(ListForeignNetworkRequest::default());
let response = client.list_foreign_network(request).await?;
let network_map = response.into_inner();
let client = self.get_peer_manager_client().await?;
let request = ListForeignNetworkRequest::default();
let response = client
.list_foreign_network(BaseController {}, request)
.await?;
let network_map = response;
if self.verbose {
println!("{:#?}", network_map);
return Ok(());
@@ -251,7 +326,7 @@ impl CommandHandler {
"remote_addr: {}, rx_bytes: {}, tx_bytes: {}, latency_us: {}",
conn.tunnel
.as_ref()
.map(|t| t.remote_addr.clone())
.map(|t| t.remote_addr.clone().unwrap_or_default())
.unwrap_or_default(),
conn.stats.as_ref().map(|s| s.rx_bytes).unwrap_or_default(),
conn.stats.as_ref().map(|s| s.tx_bytes).unwrap_or_default(),
@@ -268,6 +343,30 @@ impl CommandHandler {
Ok(())
}
async fn handle_global_foreign_network_list(&self) -> Result<(), Error> {
let client = self.get_peer_manager_client().await?;
let request = ListGlobalForeignNetworkRequest::default();
let response = client
.list_global_foreign_network(BaseController {}, request)
.await?;
if self.verbose {
println!("{:#?}", response);
return Ok(());
}
for (k, v) in response.foreign_networks.iter() {
println!("Peer ID: {}", k);
for n in v.foreign_networks.iter() {
println!(
" Network Name: {}, Last Updated: {}, Version: {}, PeerIds: {:?}",
n.network_name, n.last_updated, n.version, n.peer_ids
);
}
}
Ok(())
}
async fn handle_route_list(&self) -> Result<(), Error> {
#[derive(tabled::Tabled)]
struct RouteTableItem {
@@ -278,9 +377,27 @@ impl CommandHandler {
next_hop_hostname: String,
next_hop_lat: f64,
cost: i32,
version: String,
}
let mut items: Vec<RouteTableItem> = vec![];
let client = self.get_peer_manager_client().await?;
let node_info = client
.show_node_info(BaseController {}, ShowNodeInfoRequest::default())
.await?
.node_info
.ok_or(anyhow::anyhow!("node info not found"))?;
items.push(RouteTableItem {
ipv4: node_info.ipv4_addr.clone(),
hostname: node_info.hostname.clone(),
proxy_cidrs: node_info.proxy_cidrs.join(", "),
next_hop_ipv4: "-".to_string(),
next_hop_hostname: "Local".to_string(),
next_hop_lat: 0.0,
cost: 0,
version: node_info.version.clone(),
});
let peer_routes = self.list_peer_route_pair().await?;
for p in peer_routes.iter() {
let Some(next_hop_pair) = peer_routes
@@ -299,6 +416,11 @@ impl CommandHandler {
next_hop_hostname: "".to_string(),
next_hop_lat: next_hop_pair.get_latency_ms().unwrap_or(0.0),
cost: p.route.cost,
version: if p.route.version.is_empty() {
"unknown".to_string()
} else {
p.route.version.to_string()
},
});
} else {
items.push(RouteTableItem {
@@ -309,6 +431,11 @@ impl CommandHandler {
next_hop_hostname: next_hop_pair.route.hostname.clone(),
next_hop_lat: next_hop_pair.get_latency_ms().unwrap_or(0.0),
cost: p.route.cost,
version: if p.route.version.is_empty() {
"unknown".to_string()
} else {
p.route.version.to_string()
},
});
}
}
@@ -322,10 +449,10 @@ impl CommandHandler {
}
async fn handle_connector_list(&self) -> Result<(), Error> {
let mut client = self.get_connector_manager_client().await?;
let request = tonic::Request::new(ListConnectorRequest::default());
let response = client.list_connector(request).await?;
println!("response: {:#?}", response.into_inner());
let client = self.get_connector_manager_client().await?;
let request = ListConnectorRequest::default();
let response = client.list_connector(BaseController {}, request).await?;
println!("response: {:#?}", response);
Ok(())
}
}
@@ -334,8 +461,13 @@ impl CommandHandler {
#[tracing::instrument]
async fn main() -> Result<(), Error> {
let cli = Cli::parse();
let client = RpcClient::new(TcpTunnelConnector::new(
format!("tcp://{}:{}", cli.rpc_portal.ip(), cli.rpc_portal.port())
.parse()
.unwrap(),
));
let handler = CommandHandler {
addr: format!("http://{}:{}", cli.rpc_portal.ip(), cli.rpc_portal.port()),
client: Mutex::new(client),
verbose: cli.verbose,
};
@@ -357,6 +489,9 @@ async fn main() -> Result<(), Error> {
Some(PeerSubCommand::ListForeign) => {
handler.handle_foreign_network_list().await?;
}
Some(PeerSubCommand::ListGlobalForeign) => {
handler.handle_global_foreign_network_list().await?;
}
None => {
handler.handle_peer_list(&peer_args).await?;
}
@@ -395,11 +530,10 @@ async fn main() -> Result<(), Error> {
.unwrap();
}
SubCommand::PeerCenter => {
let mut peer_center_client = handler.get_peer_center_client().await?;
let peer_center_client = handler.get_peer_center_client().await?;
let resp = peer_center_client
.get_global_peer_map(GetGlobalPeerMapRequest::default())
.await?
.into_inner();
.get_global_peer_map(BaseController {}, GetGlobalPeerMapRequest::default())
.await?;
#[derive(tabled::Tabled)]
struct PeerCenterTableItem {
@@ -429,11 +563,10 @@ async fn main() -> Result<(), Error> {
);
}
SubCommand::VpnPortal => {
let mut vpn_portal_client = handler.get_vpn_portal_client().await?;
let vpn_portal_client = handler.get_vpn_portal_client().await?;
let resp = vpn_portal_client
.get_vpn_portal_info(GetVpnPortalInfoRequest::default())
.get_vpn_portal_info(BaseController {}, GetVpnPortalInfoRequest::default())
.await?
.into_inner()
.vpn_portal_info
.unwrap_or_default();
println!("portal_name: {}", resp.vpn_type);
@@ -447,6 +580,44 @@ async fn main() -> Result<(), Error> {
);
println!("connected_clients:\n{:#?}", resp.connected_clients);
}
SubCommand::Node(sub_cmd) => {
let client = handler.get_peer_manager_client().await?;
let node_info = client
.show_node_info(BaseController {}, ShowNodeInfoRequest::default())
.await?
.node_info
.ok_or(anyhow::anyhow!("node info not found"))?;
match sub_cmd.sub_command {
Some(NodeSubCommand::Info) | None => {
let stun_info = node_info.stun_info.clone().unwrap_or_default();
let mut builder = tabled::builder::Builder::default();
builder.push_record(vec!["Virtual IP", node_info.ipv4_addr.as_str()]);
builder.push_record(vec!["Hostname", node_info.hostname.as_str()]);
builder.push_record(vec![
"Proxy CIDRs",
node_info.proxy_cidrs.join(", ").as_str(),
]);
builder.push_record(vec!["Peer ID", node_info.peer_id.to_string().as_str()]);
builder.push_record(vec!["Public IP", stun_info.public_ip.join(", ").as_str()]);
builder.push_record(vec![
"UDP Stun Type",
format!("{:?}", stun_info.udp_nat_type()).as_str(),
]);
for (idx, l) in node_info.listeners.iter().enumerate() {
if l.starts_with("ring") {
continue;
}
builder.push_record(vec![format!("Listener {}", idx).as_str(), l]);
}
println!("{}", builder.build().with(Style::modern()).to_string());
}
Some(NodeSubCommand::Config) => {
println!("{}", node_info.config);
}
}
}
}
Ok(())

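For reference, here is a minimal sketch of the new CLI call convention introduced by this patch (one shared RPC client over a TCP tunnel, scoped per-service clients, and an explicit BaseController per call). It assumes the generated client traits behave as used in the diff above; the helper function itself is illustrative and not part of the patch.

async fn dump_routes_example(handler: &CommandHandler) -> Result<(), Error> {
    // A single TCP tunnel to the rpc_portal is shared by the CommandHandler; each
    // service is obtained as a scoped client instead of opening a tonic channel per call.
    let client = handler.get_peer_manager_client().await?;
    // Every RPC now takes a BaseController plus the plain request struct and returns
    // the response directly (no tonic::Request wrapping or into_inner()).
    let response = client
        .dump_route(BaseController {}, DumpRouteRequest::default())
        .await?;
    println!("response: {}", response.result);
    Ok(())
}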

@@ -4,8 +4,6 @@
mod tests;
use std::{
backtrace,
io::Write as _,
net::{Ipv4Addr, SocketAddr},
path::PathBuf,
};
@@ -23,7 +21,7 @@ mod gateway;
mod instance;
mod peer_center;
mod peers;
mod rpc;
mod proto;
mod tunnel;
mod utils;
mod vpn_portal;
@@ -33,6 +31,7 @@ use common::config::{
};
use instance::instance::Instance;
use tokio::net::TcpSocket;
use utils::setup_panic_handler;
use crate::{
common::{
@@ -267,26 +266,39 @@ struct Cli {
)]
disable_p2p: bool,
#[arg(
long,
help = t!("core_clap.disable_udp_hole_punching").to_string(),
default_value = "false"
)]
disable_udp_hole_punching: bool,
#[arg(
long,
help = t!("core_clap.relay_all_peer_rpc").to_string(),
default_value = "false"
)]
relay_all_peer_rpc: bool,
#[cfg(feature = "socks5")]
#[arg(
long,
help = t!("core_clap.socks5").to_string()
)]
socks5: Option<u16>,
}
rust_i18n::i18n!("locales");
rust_i18n::i18n!("locales", fallback = "en");
impl Cli {
fn parse_listeners(&self) -> Vec<String> {
println!("parsing listeners: {:?}", self.listeners);
fn parse_listeners(no_listener: bool, listeners: Vec<String>) -> Vec<String> {
let proto_port_offset = vec![("tcp", 0), ("udp", 0), ("wg", 1), ("ws", 1), ("wss", 2)];
if self.no_listener || self.listeners.is_empty() {
if no_listener || listeners.is_empty() {
return vec![];
}
let origin_listners = self.listeners.clone();
let origin_listners = listeners;
let mut listeners: Vec<String> = Vec::new();
if origin_listners.len() == 1 {
if let Ok(port) = origin_listners[0].parse::<u16>() {
@@ -327,12 +339,12 @@ impl Cli {
}
fn check_tcp_available(port: u16) -> Option<SocketAddr> {
let s = format!("127.0.0.1:{}", port).parse::<SocketAddr>().unwrap();
let s = format!("0.0.0.0:{}", port).parse::<SocketAddr>().unwrap();
TcpSocket::new_v4().unwrap().bind(s).map(|_| s).ok()
}
fn parse_rpc_portal(&self) -> SocketAddr {
if let Ok(port) = self.rpc_portal.parse::<u16>() {
fn parse_rpc_portal(rpc_portal: String) -> SocketAddr {
if let Ok(port) = rpc_portal.parse::<u16>() {
if port == 0 {
// check tcp 15888 first
for i in 15888..15900 {
@@ -340,12 +352,12 @@ impl Cli {
return s;
}
}
return "127.0.0.1:0".parse().unwrap();
return "0.0.0.0:0".parse().unwrap();
}
return format!("127.0.0.1:{}", port).parse().unwrap();
return format!("0.0.0.0:{}", port).parse().unwrap();
}
self.rpc_portal.parse().unwrap()
rpc_portal.parse().unwrap()
}
}
@@ -363,14 +375,9 @@ impl From<Cli> for TomlConfigLoader {
let cfg = TomlConfigLoader::default();
cfg.set_inst_name(cli.instance_name.clone());
cfg.set_hostname(cli.hostname);
cfg.set_hostname(cli.hostname.clone());
cfg.set_network_identity(NetworkIdentity::new(
cli.network_name.clone(),
cli.network_secret.clone(),
));
cfg.set_network_identity(NetworkIdentity::new(cli.network_name, cli.network_secret));
cfg.set_dhcp(cli.dhcp);
@@ -395,7 +402,7 @@ impl From<Cli> for TomlConfigLoader {
);
cfg.set_listeners(
cli.parse_listeners()
Cli::parse_listeners(cli.no_listener, cli.listeners)
.into_iter()
.map(|s| s.parse().unwrap())
.collect(),
@@ -409,21 +416,15 @@ impl From<Cli> for TomlConfigLoader {
);
}
cfg.set_rpc_portal(cli.parse_rpc_portal());
cfg.set_rpc_portal(Cli::parse_rpc_portal(cli.rpc_portal));
if cli.external_node.is_some() {
if let Some(external_nodes) = cli.external_node {
let mut old_peers = cfg.get_peers();
old_peers.push(PeerConfig {
uri: cli
.external_node
.clone()
.unwrap()
uri: external_nodes
.parse()
.with_context(|| {
format!(
"failed to parse external node uri: {}",
cli.external_node.unwrap()
)
format!("failed to parse external node uri: {}", external_nodes)
})
.unwrap(),
});
@@ -432,7 +433,7 @@ impl From<Cli> for TomlConfigLoader {
if cli.console_log_level.is_some() {
cfg.set_console_logger_config(ConsoleLoggerConfig {
level: cli.console_log_level.clone(),
level: cli.console_log_level,
});
}
@@ -444,18 +445,12 @@ impl From<Cli> for TomlConfigLoader {
});
}
if cli.vpn_portal.is_some() {
let url: url::Url = cli
.vpn_portal
.clone()
.unwrap()
cfg.set_inst_name(cli.instance_name);
if let Some(vpn_portal) = cli.vpn_portal {
let url: url::Url = vpn_portal
.parse()
.with_context(|| {
format!(
"failed to parse vpn portal url: {}",
cli.vpn_portal.unwrap()
)
})
.with_context(|| format!("failed to parse vpn portal url: {}", vpn_portal))
.unwrap();
cfg.set_vpn_portal_config(VpnPortalConfig {
client_cidr: url.path()[1..]
@@ -476,11 +471,9 @@ impl From<Cli> for TomlConfigLoader {
});
}
if cli.manual_routes.is_some() {
if let Some(manual_routes) = cli.manual_routes {
cfg.set_routes(Some(
cli.manual_routes
.clone()
.unwrap()
manual_routes
.iter()
.map(|s| {
s.parse()
@@ -491,6 +484,15 @@ impl From<Cli> for TomlConfigLoader {
));
}
#[cfg(feature = "socks5")]
if let Some(socks5_proxy) = cli.socks5 {
cfg.set_socks5_portal(Some(
format!("socks5://0.0.0.0:{}", socks5_proxy)
.parse()
.unwrap(),
));
}
let mut f = cfg.get_flags();
if cli.default_protocol.is_some() {
f.default_protocol = cli.default_protocol.as_ref().unwrap().clone();
@@ -526,23 +528,13 @@ fn print_event(msg: String) {
);
}
fn peer_conn_info_to_string(p: crate::rpc::PeerConnInfo) -> String {
fn peer_conn_info_to_string(p: crate::proto::cli::PeerConnInfo) -> String {
format!(
"my_peer_id: {}, dst_peer_id: {}, tunnel_info: {:?}",
p.my_peer_id, p.peer_id, p.tunnel
)
}
fn setup_panic_handler() {
std::panic::set_hook(Box::new(|info| {
let backtrace = backtrace::Backtrace::force_capture();
println!("panic occurred: {:?}", info);
let _ = std::fs::File::create("easytier-panic.log")
.and_then(|mut f| f.write_all(format!("{:?}\n{:#?}", info, backtrace).as_bytes()));
std::process::exit(1);
}));
}
#[tracing::instrument]
pub async fn async_main(cli: Cli) {
let cfg: TomlConfigLoader = cli.into();

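As a quick illustration of the refactored helpers, the snippet below shows roughly how the new associated function Cli::parse_rpc_portal behaves, based on the branches visible in this diff. The zero-port branch probes local TCP ports, so its result depends on the machine; the snippet is illustrative and not part of the patch.

fn rpc_portal_examples() {
    use std::net::SocketAddr;
    // A bare port now binds on all interfaces instead of loopback.
    assert_eq!(
        Cli::parse_rpc_portal("15888".to_string()),
        "0.0.0.0:15888".parse::<SocketAddr>().unwrap()
    );
    // A full socket address is passed through unchanged.
    assert_eq!(
        Cli::parse_rpc_portal("127.0.0.1:15888".to_string()),
        "127.0.0.1:15888".parse::<SocketAddr>().unwrap()
    );
    // "0" probes TCP ports 15888..15900 for a free one and falls back to 0.0.0.0:0.
    let _auto = Cli::parse_rpc_portal("0".to_string());
}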

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2021 Jonathan Dizdarevic
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1 @@
Code is modified from https://github.com/dizda/fast-socks5


@@ -0,0 +1,314 @@
//! Fast SOCKS5 client/server implementation written in Rust async/.await (with tokio).
//!
//! This library is maintained by [anyip.io](https://anyip.io/), a residential and mobile socks5 proxy provider.
//!
//! ## Features
//!
//! - An `async`/`.await` [SOCKS5](https://tools.ietf.org/html/rfc1928) implementation.
//! - An `async`/`.await` [SOCKS4 Client](https://www.openssh.com/txt/socks4.protocol) implementation.
//! - An `async`/`.await` [SOCKS4a Client](https://www.openssh.com/txt/socks4a.protocol) implementation.
//! - No **unsafe** code
//! - Built on-top of `tokio` library
//! - Ultra lightweight and scalable
//! - No system dependencies
//! - Cross-platform
//! - Authentication methods:
//! - No-Auth method
//! - Username/Password auth method
//! - Custom auth methods can be implemented via the Authentication Trait
//! - Credentials returned on authentication success
//! - All SOCKS5 RFC errors (replies) should be mapped
//! - `AsyncRead + AsyncWrite` traits are implemented on Socks5Stream & Socks5Socket
//! - `IPv4`, `IPv6`, and `Domains` types are supported
//! - Config helper for Socks5Server
//! - Helpers to run a Socks5Server à la *"std's TcpStream"* via `incoming.next().await`
//! - Examples come with real cases commands scenarios
//! - Can disable `DNS resolving`
//! - Can skip the authentication/handshake process, which will directly handle the command request (useful to avoid needless round-trips in an already-authenticated environment)
//! - Can disable command execution (useful if you just want to forward the request to a different server)
//!
//!
//! ## Install
//!
//! Open in [crates.io](https://crates.io/crates/fast-socks5).
//!
//!
//! ## Examples
//!
//! Please check [`examples`](https://github.com/dizda/fast-socks5/tree/master/examples) directory.
#![forbid(unsafe_code)]
pub mod server;
pub mod util;
use anyhow::Context;
use std::fmt;
use std::io;
use thiserror::Error;
use util::target_addr::read_address;
use util::target_addr::TargetAddr;
use util::target_addr::ToTargetAddr;
use tokio::io::AsyncReadExt;
use tracing::error;
use crate::read_exact;
#[rustfmt::skip]
pub mod consts {
pub const SOCKS5_VERSION: u8 = 0x05;
pub const SOCKS5_AUTH_METHOD_NONE: u8 = 0x00;
pub const SOCKS5_AUTH_METHOD_GSSAPI: u8 = 0x01;
pub const SOCKS5_AUTH_METHOD_PASSWORD: u8 = 0x02;
pub const SOCKS5_AUTH_METHOD_NOT_ACCEPTABLE: u8 = 0xff;
pub const SOCKS5_CMD_TCP_CONNECT: u8 = 0x01;
pub const SOCKS5_CMD_TCP_BIND: u8 = 0x02;
pub const SOCKS5_CMD_UDP_ASSOCIATE: u8 = 0x03;
pub const SOCKS5_ADDR_TYPE_IPV4: u8 = 0x01;
pub const SOCKS5_ADDR_TYPE_DOMAIN_NAME: u8 = 0x03;
pub const SOCKS5_ADDR_TYPE_IPV6: u8 = 0x04;
pub const SOCKS5_REPLY_SUCCEEDED: u8 = 0x00;
pub const SOCKS5_REPLY_GENERAL_FAILURE: u8 = 0x01;
pub const SOCKS5_REPLY_CONNECTION_NOT_ALLOWED: u8 = 0x02;
pub const SOCKS5_REPLY_NETWORK_UNREACHABLE: u8 = 0x03;
pub const SOCKS5_REPLY_HOST_UNREACHABLE: u8 = 0x04;
pub const SOCKS5_REPLY_CONNECTION_REFUSED: u8 = 0x05;
pub const SOCKS5_REPLY_TTL_EXPIRED: u8 = 0x06;
pub const SOCKS5_REPLY_COMMAND_NOT_SUPPORTED: u8 = 0x07;
pub const SOCKS5_REPLY_ADDRESS_TYPE_NOT_SUPPORTED: u8 = 0x08;
}
#[derive(Debug, PartialEq)]
pub enum Socks5Command {
TCPConnect,
TCPBind,
UDPAssociate,
}
#[allow(dead_code)]
impl Socks5Command {
#[inline]
#[rustfmt::skip]
fn as_u8(&self) -> u8 {
match self {
Socks5Command::TCPConnect => consts::SOCKS5_CMD_TCP_CONNECT,
Socks5Command::TCPBind => consts::SOCKS5_CMD_TCP_BIND,
Socks5Command::UDPAssociate => consts::SOCKS5_CMD_UDP_ASSOCIATE,
}
}
#[inline]
#[rustfmt::skip]
fn from_u8(code: u8) -> Option<Socks5Command> {
match code {
consts::SOCKS5_CMD_TCP_CONNECT => Some(Socks5Command::TCPConnect),
consts::SOCKS5_CMD_TCP_BIND => Some(Socks5Command::TCPBind),
consts::SOCKS5_CMD_UDP_ASSOCIATE => Some(Socks5Command::UDPAssociate),
_ => None,
}
}
}
#[derive(Debug, PartialEq)]
pub enum AuthenticationMethod {
None,
Password { username: String, password: String },
}
impl AuthenticationMethod {
#[inline]
#[rustfmt::skip]
fn as_u8(&self) -> u8 {
match self {
AuthenticationMethod::None => consts::SOCKS5_AUTH_METHOD_NONE,
AuthenticationMethod::Password {..} =>
consts::SOCKS5_AUTH_METHOD_PASSWORD
}
}
#[inline]
#[rustfmt::skip]
fn from_u8(code: u8) -> Option<AuthenticationMethod> {
match code {
consts::SOCKS5_AUTH_METHOD_NONE => Some(AuthenticationMethod::None),
consts::SOCKS5_AUTH_METHOD_PASSWORD => Some(AuthenticationMethod::Password { username: "test".to_string(), password: "test".to_string()}),
_ => None,
}
}
}
impl fmt::Display for AuthenticationMethod {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
AuthenticationMethod::None => f.write_str("AuthenticationMethod::None"),
AuthenticationMethod::Password { .. } => f.write_str("AuthenticationMethod::Password"),
}
}
}
//impl Vec<AuthenticationMethod> {
// pub fn as_bytes(&self) -> &[u8] {
// self.iter().map(|l| l.as_u8()).collect()
// }
//}
//
//impl From<&[AuthenticationMethod]> for &[u8] {
// fn from(_: Vec<AuthenticationMethod>) -> Self {
// &[0x00]
// }
//}
#[derive(Error, Debug)]
pub enum SocksError {
#[error("i/o error: {0}")]
Io(#[from] io::Error),
#[error("the data for key `{0}` is not available")]
Redaction(String),
#[error("invalid header (expected {expected:?}, found {found:?})")]
InvalidHeader { expected: String, found: String },
#[error("Auth method unacceptable `{0:?}`.")]
AuthMethodUnacceptable(Vec<u8>),
#[error("Unsupported SOCKS version `{0}`.")]
UnsupportedSocksVersion(u8),
#[error("Domain exceeded max sequence length")]
ExceededMaxDomainLen(usize),
#[error("Authentication failed `{0}`")]
AuthenticationFailed(String),
#[error("Authentication rejected `{0}`")]
AuthenticationRejected(String),
#[error("Error with reply: {0}.")]
ReplyError(#[from] ReplyError),
#[error("Argument input error: `{0}`.")]
ArgumentInputError(&'static str),
// #[error("Other: `{0}`.")]
#[error(transparent)]
Other(#[from] anyhow::Error),
}
pub type Result<T, E = SocksError> = core::result::Result<T, E>;
/// SOCKS5 reply code
#[derive(Error, Debug, Copy, Clone)]
pub enum ReplyError {
#[error("Succeeded")]
Succeeded,
#[error("General failure")]
GeneralFailure,
#[error("Connection not allowed by ruleset")]
ConnectionNotAllowed,
#[error("Network unreachable")]
NetworkUnreachable,
#[error("Host unreachable")]
HostUnreachable,
#[error("Connection refused")]
ConnectionRefused,
#[error("Connection timeout")]
ConnectionTimeout,
#[error("TTL expired")]
TtlExpired,
#[error("Command not supported")]
CommandNotSupported,
#[error("Address type not supported")]
AddressTypeNotSupported,
// OtherReply(u8),
}
impl ReplyError {
#[inline]
#[rustfmt::skip]
pub fn as_u8(self) -> u8 {
match self {
ReplyError::Succeeded => consts::SOCKS5_REPLY_SUCCEEDED,
ReplyError::GeneralFailure => consts::SOCKS5_REPLY_GENERAL_FAILURE,
ReplyError::ConnectionNotAllowed => consts::SOCKS5_REPLY_CONNECTION_NOT_ALLOWED,
ReplyError::NetworkUnreachable => consts::SOCKS5_REPLY_NETWORK_UNREACHABLE,
ReplyError::HostUnreachable => consts::SOCKS5_REPLY_HOST_UNREACHABLE,
ReplyError::ConnectionRefused => consts::SOCKS5_REPLY_CONNECTION_REFUSED,
ReplyError::ConnectionTimeout => consts::SOCKS5_REPLY_TTL_EXPIRED,
ReplyError::TtlExpired => consts::SOCKS5_REPLY_TTL_EXPIRED,
ReplyError::CommandNotSupported => consts::SOCKS5_REPLY_COMMAND_NOT_SUPPORTED,
ReplyError::AddressTypeNotSupported => consts::SOCKS5_REPLY_ADDRESS_TYPE_NOT_SUPPORTED,
// ReplyError::OtherReply(c) => c,
}
}
#[inline]
#[rustfmt::skip]
pub fn from_u8(code: u8) -> ReplyError {
match code {
consts::SOCKS5_REPLY_SUCCEEDED => ReplyError::Succeeded,
consts::SOCKS5_REPLY_GENERAL_FAILURE => ReplyError::GeneralFailure,
consts::SOCKS5_REPLY_CONNECTION_NOT_ALLOWED => ReplyError::ConnectionNotAllowed,
consts::SOCKS5_REPLY_NETWORK_UNREACHABLE => ReplyError::NetworkUnreachable,
consts::SOCKS5_REPLY_HOST_UNREACHABLE => ReplyError::HostUnreachable,
consts::SOCKS5_REPLY_CONNECTION_REFUSED => ReplyError::ConnectionRefused,
consts::SOCKS5_REPLY_TTL_EXPIRED => ReplyError::TtlExpired,
consts::SOCKS5_REPLY_COMMAND_NOT_SUPPORTED => ReplyError::CommandNotSupported,
consts::SOCKS5_REPLY_ADDRESS_TYPE_NOT_SUPPORTED => ReplyError::AddressTypeNotSupported,
// _ => ReplyError::OtherReply(code),
_ => unreachable!("ReplyError code unsupported."),
}
}
}
/// Generate UDP header
///
/// # UDP Request header structure.
/// ```text
/// +----+------+------+----------+----------+----------+
/// |RSV | FRAG | ATYP | DST.ADDR | DST.PORT | DATA |
/// +----+------+------+----------+----------+----------+
/// | 2 | 1 | 1 | Variable | 2 | Variable |
/// +----+------+------+----------+----------+----------+
///
/// The fields in the UDP request header are:
///
/// o RSV Reserved X'0000'
/// o FRAG Current fragment number
/// o ATYP address type of following addresses:
/// o IP V4 address: X'01'
/// o DOMAINNAME: X'03'
/// o IP V6 address: X'04'
/// o DST.ADDR desired destination address
/// o DST.PORT desired destination port
/// o DATA user data
/// ```
pub fn new_udp_header<T: ToTargetAddr>(target_addr: T) -> Result<Vec<u8>> {
let mut header = vec![
0, 0, // RSV
0, // FRAG
];
header.append(&mut target_addr.to_target_addr()?.to_be_bytes()?);
Ok(header)
}
/// Parse data from UDP client on raw buffer, return (frag, target_addr, payload).
pub async fn parse_udp_request<'a>(mut req: &'a [u8]) -> Result<(u8, TargetAddr, &'a [u8])> {
let rsv = read_exact!(req, [0u8; 2]).context("Malformed request")?;
if !rsv.eq(&[0u8; 2]) {
return Err(ReplyError::GeneralFailure.into());
}
let [frag, atyp] = read_exact!(req, [0u8; 2]).context("Malformed request")?;
let target_addr = read_address(&mut req, atyp).await.map_err(|e| {
// print explicit error
error!("{:#}", e);
// then convert it to a reply
ReplyError::AddressTypeNotSupported
})?;
Ok((frag, target_addr, req))
}

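A small round-trip sketch of the UDP helpers above (new_udp_header / parse_udp_request), assuming a tokio test runtime and the items of this module in scope; it only illustrates the header layout documented above and is not part of the patch.

#[cfg(test)]
mod udp_header_example {
    use super::{new_udp_header, parse_udp_request};

    #[tokio::test]
    async fn round_trip() {
        // Build a SOCKS5 UDP request header for 127.0.0.1:8080 and append a payload.
        let mut datagram = new_udp_header(("127.0.0.1", 8080u16)).unwrap();
        datagram.extend_from_slice(b"hello");

        // Parsing returns the fragment number, the target address and the raw payload.
        let (frag, target, payload) = parse_udp_request(&datagram).await.unwrap();
        assert_eq!(frag, 0);
        assert_eq!(target.to_string(), "127.0.0.1:8080");
        assert_eq!(payload, &b"hello"[..]);
    }
}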

@@ -0,0 +1,842 @@
use super::new_udp_header;
use super::parse_udp_request;
use super::read_exact;
use super::util::stream::tcp_connect_with_timeout;
use super::util::target_addr::{read_address, TargetAddr};
use super::Socks5Command;
use super::{consts, AuthenticationMethod, ReplyError, Result, SocksError};
use anyhow::Context;
use std::io;
use std::net::IpAddr;
use std::net::Ipv4Addr;
use std::net::{SocketAddr, ToSocketAddrs as StdToSocketAddrs};
use std::ops::Deref;
use std::pin::Pin;
use std::sync::Arc;
use std::task::Poll;
use tokio::io::AsyncReadExt;
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt};
use tokio::net::TcpStream;
use tokio::net::UdpSocket;
use tokio::try_join;
use tracing::{debug, error, info, trace};
#[derive(Clone)]
pub struct Config<A: Authentication = DenyAuthentication> {
/// Timeout of the command request
request_timeout: u64,
/// Avoid useless roundtrips if we don't need the Authentication layer
skip_auth: bool,
/// Enable dns-resolving
dns_resolve: bool,
/// Enable command execution
execute_command: bool,
/// Enable UDP support
allow_udp: bool,
/// For some complex scenarios, we may want to accept either a Username/Password configuration
/// or IP whitelisting, in case the client sends only 1-2 auth methods (no auth) rather than 3 (with auth)
allow_no_auth: bool,
/// Contains the authentication implementation the user is validated against
auth: Option<Arc<A>>,
}
impl<A: Authentication> Default for Config<A> {
fn default() -> Self {
Config {
request_timeout: 10,
skip_auth: false,
dns_resolve: true,
execute_command: true,
allow_udp: false,
allow_no_auth: false,
auth: None,
}
}
}
/// Use this trait to handle a custom authentication on your end.
#[async_trait::async_trait]
pub trait Authentication: Send + Sync {
type Item;
async fn authenticate(&self, credentials: Option<(String, String)>) -> Option<Self::Item>;
}
/// Basic user/pass auth method provided.
pub struct SimpleUserPassword {
pub username: String,
pub password: String,
}
/// The struct returned when the user has successfully authenticated
pub struct AuthSucceeded {
pub username: String,
}
/// This is an example of authenticating via simple credentials.
/// If the auth succeeds, we return the username it authenticated with, for further use.
#[async_trait::async_trait]
impl Authentication for SimpleUserPassword {
type Item = AuthSucceeded;
async fn authenticate(&self, credentials: Option<(String, String)>) -> Option<Self::Item> {
if let Some((username, password)) = credentials {
// Client has supplied credentials
if username == self.username && password == self.password {
// Some() will allow the authentication and the credentials
// will be forwarded to the socket
Some(AuthSucceeded { username })
} else {
// Credentials incorrect, we deny the auth
None
}
} else {
// The client hasn't supplied any credentials, which only happens
// when `Config::allow_no_auth()` is set as `true`
None
}
}
}
/// This will simply return Option::None, which denies the authentication
#[derive(Copy, Clone, Default)]
pub struct DenyAuthentication {}
#[async_trait::async_trait]
impl Authentication for DenyAuthentication {
type Item = ();
async fn authenticate(&self, _credentials: Option<(String, String)>) -> Option<Self::Item> {
None
}
}
/// While this one will always allow the user in.
#[derive(Copy, Clone, Default)]
pub struct AcceptAuthentication {}
#[async_trait::async_trait]
impl Authentication for AcceptAuthentication {
type Item = ();
async fn authenticate(&self, _credentials: Option<(String, String)>) -> Option<Self::Item> {
Some(())
}
}
impl<A: Authentication> Config<A> {
/// How long to wait before the request times out.
pub fn set_request_timeout(&mut self, n: u64) -> &mut Self {
self.request_timeout = n;
self
}
/// Skip the entire auth/handshake part, which means the server will directly wait for
/// the command request.
pub fn set_skip_auth(&mut self, value: bool) -> &mut Self {
self.skip_auth = value;
self.auth = None;
self
}
/// Enable authentication.
/// The 'static lifetime for Authentication lets us avoid `dyn Authentication`
/// and set the Arc before calling the function.
pub fn with_authentication<T: Authentication + 'static>(self, authentication: T) -> Config<T> {
Config {
request_timeout: self.request_timeout,
skip_auth: self.skip_auth,
dns_resolve: self.dns_resolve,
execute_command: self.execute_command,
allow_udp: self.allow_udp,
allow_no_auth: self.allow_no_auth,
auth: Some(Arc::new(authentication)),
}
}
/// For some complex scenarios, we may want to accept either a Username/Password configuration
/// or IP whitelisting, in case the client sends only 2 auth methods rather than 3 (with auth)
pub fn set_allow_no_auth(&mut self, value: bool) -> &mut Self {
self.allow_no_auth = value;
self
}
/// Set whether or not to execute commands
pub fn set_execute_command(&mut self, value: bool) -> &mut Self {
self.execute_command = value;
self
}
/// Whether the server will perform DNS resolution
pub fn set_dns_resolve(&mut self, value: bool) -> &mut Self {
self.dns_resolve = value;
self
}
/// Set whether or not to allow udp traffic
pub fn set_udp_support(&mut self, value: bool) -> &mut Self {
self.allow_udp = value;
self
}
}
#[async_trait::async_trait]
pub trait AsyncTcpConnector {
type S: AsyncRead + AsyncWrite + Unpin + Send + Sync;
async fn tcp_connect(&self, addr: SocketAddr, timeout_s: u64) -> Result<Self::S>;
}
pub struct DefaultTcpConnector {}
#[async_trait::async_trait]
impl AsyncTcpConnector for DefaultTcpConnector {
type S = TcpStream;
async fn tcp_connect(&self, addr: SocketAddr, timeout_s: u64) -> Result<TcpStream> {
tcp_connect_with_timeout(addr, timeout_s).await
}
}
/// Wraps a TcpStream and contains the SOCKS5 protocol implementation.
pub struct Socks5Socket<T: AsyncRead + AsyncWrite + Unpin, A: Authentication, C: AsyncTcpConnector>
{
inner: T,
config: Arc<Config<A>>,
auth: AuthenticationMethod,
target_addr: Option<TargetAddr>,
cmd: Option<Socks5Command>,
/// Socket address which will be used in the reply message.
reply_ip: Option<IpAddr>,
/// If the client has been authenticated, that's where we store his credentials
/// to be accessed from the socket
credentials: Option<A::Item>,
tcp_connector: C,
}
impl<T: AsyncRead + AsyncWrite + Unpin, A: Authentication, C: AsyncTcpConnector>
Socks5Socket<T, A, C>
{
pub fn new(socket: T, config: Arc<Config<A>>, tcp_connector: C) -> Self {
Socks5Socket {
inner: socket,
config,
auth: AuthenticationMethod::None,
target_addr: None,
cmd: None,
reply_ip: None,
credentials: None,
tcp_connector,
}
}
/// Set the bind IP address in Socks5Reply.
///
/// Only the inner socket owner knows the correct reply bind addr, so this field is left
/// for the owner to populate. For strict clients, users can use this function to set the
/// correct IP address.
///
/// Most popular SOCKS5 clients [1] [2] ignore BND.ADDR and BND.PORT in the reply to the
/// CONNECT command, but this field can be useful for other commands, such as UDP ASSOCIATE.
///
/// [1]: https://github.com/chromium/chromium/blob/bd2c7a8b65ec42d806277dd30f138a673dec233a/net/socket/socks5_client_socket.cc#L481
/// [2]: https://github.com/curl/curl/blob/d15692ebbad5e9cfb871b0f7f51a73e43762cee2/lib/socks.c#L978
pub fn set_reply_ip(&mut self, addr: IpAddr) {
self.reply_ip = Some(addr);
}
/// Process the client's SOCKS requests.
/// This is the entry point where a whole request is processed.
pub async fn upgrade_to_socks5(mut self) -> Result<Socks5Socket<T, A, C>> {
trace!("upgrading to socks5...");
// Handshake
if !self.config.skip_auth {
let methods = self.get_methods().await?;
let auth_method = self.can_accept_method(methods).await?;
if self.config.auth.is_some() {
let credentials = self.authenticate(auth_method).await?;
self.credentials = Some(credentials);
}
} else {
debug!("skipping auth");
}
match self.request().await {
Ok(_) => {}
Err(SocksError::ReplyError(e)) => {
// If a reply error has been returned, we send it to the client
self.reply_error(&e).await?;
return Err(e.into()); // propagate the error to end this connection's task
}
// if any other errors has been detected, we simply end connection's task
Err(d) => return Err(d),
};
Ok(self)
}
/// Consumes the `Socks5Socket`, returning the wrapped stream.
pub fn into_inner(self) -> T {
self.inner
}
/// Read the authentication method provided by the client.
/// A client sends a list of methods that it supports; it could send
///
/// - 0: Non auth
/// - 2: Auth with username/password
///
/// The server then chooses to use one of these,
/// or denies the handshake (and thus the connection).
///
/// # Examples
/// ```text
/// {SOCKS Version, methods-length}
/// eg. (non-auth) {5, 2}
/// eg. (auth) {5, 3}
/// ```
///
async fn get_methods(&mut self) -> Result<Vec<u8>> {
trace!("Socks5Socket: get_methods()");
// read the first 2 bytes, which contain the SOCKS version and the methods len()
let [version, methods_len] =
read_exact!(self.inner, [0u8; 2]).context("Can't read methods")?;
debug!(
"Handshake headers: [version: {version}, methods len: {len}]",
version = version,
len = methods_len,
);
if version != consts::SOCKS5_VERSION {
return Err(SocksError::UnsupportedSocksVersion(version));
}
// {METHODS available from the client}
// eg. (non-auth) {0, 1}
// eg. (auth) {0, 1, 2}
let methods = read_exact!(self.inner, vec![0u8; methods_len as usize])
.context("Can't get methods.")?;
debug!("methods supported sent by the client: {:?}", &methods);
// Return methods available
Ok(methods)
}
/// Decide whether or not to accept the authentication method.
/// Don't forget that the methods list sent by the client contains one or more methods.
///
/// # Request
///
/// The client sends an array of 3 entries: [0, 1, 2]
/// ```text
/// {SOCKS Version, Authentication chosen}
/// eg. (non-auth) {5, 0}
/// eg. (GSSAPI) {5, 1}
/// eg. (auth) {5, 2}
/// ```
///
/// # Response
/// ```text
/// eg. (accept non-auth) {5, 0x00}
/// eg. (non-acceptable) {5, 0xff}
/// ```
///
async fn can_accept_method(&mut self, client_methods: Vec<u8>) -> Result<u8> {
let method_supported;
if let Some(_auth) = self.config.auth.as_ref() {
if client_methods.contains(&consts::SOCKS5_AUTH_METHOD_PASSWORD) {
// can auth with password
method_supported = consts::SOCKS5_AUTH_METHOD_PASSWORD;
} else {
// client hasn't provided a password
if self.config.allow_no_auth {
// but we allow no auth, for ip whitelisting
method_supported = consts::SOCKS5_AUTH_METHOD_NONE;
} else {
// we don't allow no auth, so we deny the entry
debug!("Don't support this auth method, reply with (0xff)");
self.inner
.write_all(&[
consts::SOCKS5_VERSION,
consts::SOCKS5_AUTH_METHOD_NOT_ACCEPTABLE,
])
.await
.context("Can't reply with method not acceptable.")?;
return Err(SocksError::AuthMethodUnacceptable(client_methods));
}
}
} else {
method_supported = consts::SOCKS5_AUTH_METHOD_NONE;
}
debug!(
"Reply with method {} ({})",
AuthenticationMethod::from_u8(method_supported).context("Method not supported")?,
method_supported
);
self.inner
.write(&[consts::SOCKS5_VERSION, method_supported])
.await
.context("Can't reply with method auth-none")?;
Ok(method_supported)
}
async fn read_username_password(socket: &mut T) -> Result<(String, String)> {
trace!("Socks5Socket: authenticate()");
let [version, user_len] = read_exact!(socket, [0u8; 2]).context("Can't read user len")?;
debug!(
"Auth: [version: {version}, user len: {len}]",
version = version,
len = user_len,
);
if user_len < 1 {
return Err(SocksError::AuthenticationFailed(format!(
"Username malformed ({} chars)",
user_len
)));
}
let username =
read_exact!(socket, vec![0u8; user_len as usize]).context("Can't get username.")?;
debug!("username bytes: {:?}", &username);
let [pass_len] = read_exact!(socket, [0u8; 1]).context("Can't read pass len")?;
debug!("Auth: [pass len: {len}]", len = pass_len,);
if pass_len < 1 {
return Err(SocksError::AuthenticationFailed(format!(
"Password malformed ({} chars)",
pass_len
)));
}
let password =
read_exact!(socket, vec![0u8; pass_len as usize]).context("Can't get password.")?;
debug!("password bytes: {:?}", &password);
let username = String::from_utf8(username).context("Failed to convert username")?;
let password = String::from_utf8(password).context("Failed to convert password")?;
Ok((username, password))
}
/// Only called if
/// - this server has the `Authentication` trait implemented,
/// - and the client supports authentication via username/password,
/// - or the client doesn't send authentication, but we let the trait decide when `allow_no_auth()` is set to `true`.
async fn authenticate(&mut self, auth_method: u8) -> Result<A::Item> {
let credentials = if auth_method == consts::SOCKS5_AUTH_METHOD_PASSWORD {
let credentials = Self::read_username_password(&mut self.inner).await?;
Some(credentials)
} else {
// the client hasn't provided any credentials, the function auth.authenticate()
// will then check None, according to other parameters provided by the trait
// such as IP, etc.
None
};
let auth = self.config.auth.as_ref().context("No auth module")?;
if let Some(credentials) = auth.authenticate(credentials).await {
if auth_method == consts::SOCKS5_AUTH_METHOD_PASSWORD {
// only the password method expects a response to be written at this point
self.inner
.write_all(&[1, consts::SOCKS5_REPLY_SUCCEEDED])
.await
.context("Can't reply auth success")?;
}
info!("User logged in successfully.");
return Ok(credentials);
} else {
self.inner
.write_all(&[1, consts::SOCKS5_AUTH_METHOD_NOT_ACCEPTABLE])
.await
.context("Can't reply with auth method not acceptable.")?;
return Err(SocksError::AuthenticationRejected(format!(
"Authentication, rejected."
)));
}
}
/// Wrapper that principally covers ReplyError types for both the read and execute request functions.
async fn request(&mut self) -> Result<()> {
self.read_command().await?;
if self.config.dns_resolve {
self.resolve_dns().await?;
} else {
debug!("Domain won't be resolved because `dns_resolve`'s config has been turned off.")
}
if self.config.execute_command {
self.execute_command().await?;
}
Ok(())
}
/// Reply error to the client with the reply code according to the RFC.
async fn reply_error(&mut self, error: &ReplyError) -> Result<()> {
let reply = new_reply(error, "0.0.0.0:0".parse().unwrap());
debug!("reply error to be written: {:?}", &reply);
self.inner
.write(&reply)
.await
.context("Can't write the reply!")?;
self.inner.flush().await.context("Can't flush the reply!")?;
Ok(())
}
/// Read the request sent by the client: the command, the address type, and the
/// destination address and port.
///
/// # Request
/// ```text
/// +----+-----+-------+------+----------+----------+
/// |VER | CMD | RSV | ATYP | DST.ADDR | DST.PORT |
/// +----+-----+-------+------+----------+----------+
/// | 1 | 1 | 1 | 1 | Variable | 2 |
/// +----+-----+-------+------+----------+----------+
/// ```
///
/// If the request is correct, it should return a [`SocketAddr`].
///
async fn read_command(&mut self) -> Result<()> {
let [version, cmd, rsv, address_type] =
read_exact!(self.inner, [0u8; 4]).context("Malformed request")?;
debug!(
"Request: [version: {version}, command: {cmd}, rev: {rsv}, address_type: {address_type}]",
version = version,
cmd = cmd,
rsv = rsv,
address_type = address_type,
);
if version != consts::SOCKS5_VERSION {
return Err(SocksError::UnsupportedSocksVersion(version));
}
match Socks5Command::from_u8(cmd) {
None => return Err(ReplyError::CommandNotSupported.into()),
Some(cmd) => match cmd {
Socks5Command::TCPConnect => {
self.cmd = Some(cmd);
}
Socks5Command::UDPAssociate => {
if !self.config.allow_udp {
return Err(ReplyError::CommandNotSupported.into());
}
self.cmd = Some(cmd);
}
Socks5Command::TCPBind => return Err(ReplyError::CommandNotSupported.into()),
},
}
// Guess address type
let target_addr = read_address(&mut self.inner, address_type)
.await
.map_err(|e| {
// print explicit error
error!("{:#}", e);
// then convert it to a reply
ReplyError::AddressTypeNotSupported
})?;
self.target_addr = Some(target_addr);
debug!("Request target is {}", self.target_addr.as_ref().unwrap());
Ok(())
}
/// This function is public; it can be called manually at your own discretion
/// if the config flag has been turned off: `Config::dns_resolve == false`.
pub async fn resolve_dns(&mut self) -> Result<()> {
trace!("resolving dns");
if let Some(target_addr) = self.target_addr.take() {
// decide whether we have to resolve DNS or not
self.target_addr = match target_addr {
TargetAddr::Domain(_, _) => Some(target_addr.resolve_dns().await?),
TargetAddr::Ip(_) => Some(target_addr),
};
}
Ok(())
}
/// Execute the socks5 command that the client wants.
async fn execute_command(&mut self) -> Result<()> {
match &self.cmd {
None => Err(ReplyError::CommandNotSupported.into()),
Some(cmd) => match cmd {
Socks5Command::TCPBind => Err(ReplyError::CommandNotSupported.into()),
Socks5Command::TCPConnect => return self.execute_command_connect().await,
Socks5Command::UDPAssociate => {
if self.config.allow_udp {
return self.execute_command_udp_assoc().await;
} else {
Err(ReplyError::CommandNotSupported.into())
}
}
},
}
}
/// Connect to the target address that the client wants,
/// then forward the data between them (client <=> target address).
async fn execute_command_connect(&mut self) -> Result<()> {
// async-std's ToSocketAddrs doesn't support external trait implementations
// @see https://github.com/async-rs/async-std/issues/539
let addr = self
.target_addr
.as_ref()
.context("target_addr empty")?
.to_socket_addrs()?
.next()
.context("unreachable")?;
// TCP connect with timeout, to avoid a memory leak for connections that take forever
let outbound = self
.tcp_connector
.tcp_connect(addr, self.config.request_timeout)
.await?;
debug!("Connected to remote destination");
self.inner
.write(&new_reply(
&ReplyError::Succeeded,
SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 0),
))
.await
.context("Can't write successful reply")?;
self.inner.flush().await.context("Can't flush the reply!")?;
debug!("Wrote success");
transfer(&mut self.inner, outbound).await
}
/// Bind to a random UDP port, wait for the traffic from
/// the client, and then forward the data to the remote addr.
async fn execute_command_udp_assoc(&mut self) -> Result<()> {
// The DST.ADDR and DST.PORT fields contain the address and port that
// the client expects to use to send UDP datagrams on for the
// association. The server MAY use this information to limit access
// to the association.
// @see Page 6, https://datatracker.ietf.org/doc/html/rfc1928.
//
// We do NOT limit the access from the client currently in this implementation.
let _not_used = self.target_addr.as_ref();
// Listen with UDP6 socket, so the client can connect to it with either
// IPv4 or IPv6.
let peer_sock = UdpSocket::bind("[::]:0").await?;
// Respect the pre-populated reply IP address.
self.inner
.write(&new_reply(
&ReplyError::Succeeded,
SocketAddr::new(
self.reply_ip.context("invalid reply ip")?,
peer_sock.local_addr()?.port(),
),
))
.await
.context("Can't write successful reply")?;
debug!("Wrote success");
transfer_udp(peer_sock).await?;
Ok(())
}
pub fn target_addr(&self) -> Option<&TargetAddr> {
self.target_addr.as_ref()
}
pub fn auth(&self) -> &AuthenticationMethod {
&self.auth
}
pub fn cmd(&self) -> &Option<Socks5Command> {
&self.cmd
}
/// Borrow the credentials the user has authenticated with
pub fn get_credentials(&self) -> Option<&<<A as Authentication>::Item as Deref>::Target>
where
<A as Authentication>::Item: Deref,
{
self.credentials.as_deref()
}
/// Take the credentials the user has authenticated with
pub fn take_credentials(&mut self) -> Option<A::Item> {
self.credentials.take()
}
pub fn tcp_connector(&self) -> &C {
&self.tcp_connector
}
}
/// Copy data between two peers.
/// Two different generic parameters are used, because they could be different structs with the same traits.
async fn transfer<I, O>(mut inbound: I, mut outbound: O) -> Result<()>
where
I: AsyncRead + AsyncWrite + Unpin,
O: AsyncRead + AsyncWrite + Unpin,
{
match tokio::io::copy_bidirectional(&mut inbound, &mut outbound).await {
Ok(res) => info!("transfer closed ({}, {})", res.0, res.1),
Err(err) => error!("transfer error: {:?}", err),
};
Ok(())
}
async fn handle_udp_request(inbound: &UdpSocket, outbound: &UdpSocket) -> Result<()> {
let mut buf = vec![0u8; 0x10000];
loop {
let (size, client_addr) = inbound.recv_from(&mut buf).await?;
debug!("Server recieve udp from {}", client_addr);
inbound.connect(client_addr).await?;
let (frag, target_addr, data) = parse_udp_request(&buf[..size]).await?;
if frag != 0 {
debug!("Discard UDP frag packets sliently.");
return Ok(());
}
debug!("Server forward to packet to {}", target_addr);
let mut target_addr = target_addr
.to_socket_addrs()?
.next()
.context("unreachable")?;
target_addr.set_ip(match target_addr.ip() {
std::net::IpAddr::V4(v4) => std::net::IpAddr::V6(v4.to_ipv6_mapped()),
v6 @ std::net::IpAddr::V6(_) => v6,
});
outbound.send_to(data, target_addr).await?;
}
}
async fn handle_udp_response(inbound: &UdpSocket, outbound: &UdpSocket) -> Result<()> {
let mut buf = vec![0u8; 0x10000];
loop {
let (size, remote_addr) = outbound.recv_from(&mut buf).await?;
debug!("Recieve packet from {}", remote_addr);
let mut data = new_udp_header(remote_addr)?;
data.extend_from_slice(&buf[..size]);
inbound.send(&data).await?;
}
}
async fn transfer_udp(inbound: UdpSocket) -> Result<()> {
let outbound = UdpSocket::bind("[::]:0").await?;
let req_fut = handle_udp_request(&inbound, &outbound);
let res_fut = handle_udp_response(&inbound, &outbound);
match try_join!(req_fut, res_fut) {
Ok(_) => {}
Err(error) => return Err(error),
}
Ok(())
}
// Fixes the issue "cannot borrow data in dereference of `Pin<&mut >` as mutable"
//
// cf. https://users.rust-lang.org/t/take-in-impl-future-cannot-borrow-data-in-a-dereference-of-pin/52042
impl<T, A: Authentication, S: AsyncTcpConnector> Unpin for Socks5Socket<T, A, S> where
T: AsyncRead + AsyncWrite + Unpin
{
}
/// Allow us to read directly from the struct
impl<T, A: Authentication, S: AsyncTcpConnector> AsyncRead for Socks5Socket<T, A, S>
where
T: AsyncRead + AsyncWrite + Unpin,
{
fn poll_read(
mut self: Pin<&mut Self>,
context: &mut std::task::Context,
buf: &mut tokio::io::ReadBuf<'_>,
) -> Poll<std::io::Result<()>> {
Pin::new(&mut self.inner).poll_read(context, buf)
}
}
/// Allow us to write directly into the struct
impl<T, A: Authentication, S: AsyncTcpConnector> AsyncWrite for Socks5Socket<T, A, S>
where
T: AsyncRead + AsyncWrite + Unpin,
{
fn poll_write(
mut self: Pin<&mut Self>,
context: &mut std::task::Context,
buf: &[u8],
) -> Poll<io::Result<usize>> {
Pin::new(&mut self.inner).poll_write(context, buf)
}
fn poll_flush(
mut self: Pin<&mut Self>,
context: &mut std::task::Context,
) -> Poll<io::Result<()>> {
Pin::new(&mut self.inner).poll_flush(context)
}
fn poll_shutdown(
mut self: Pin<&mut Self>,
context: &mut std::task::Context,
) -> Poll<io::Result<()>> {
Pin::new(&mut self.inner).poll_shutdown(context)
}
}
/// Generate reply code according to the RFC.
fn new_reply(error: &ReplyError, sock_addr: SocketAddr) -> Vec<u8> {
let (addr_type, mut ip_oct, mut port) = match sock_addr {
SocketAddr::V4(sock) => (
consts::SOCKS5_ADDR_TYPE_IPV4,
sock.ip().octets().to_vec(),
sock.port().to_be_bytes().to_vec(),
),
SocketAddr::V6(sock) => (
consts::SOCKS5_ADDR_TYPE_IPV6,
sock.ip().octets().to_vec(),
sock.port().to_be_bytes().to_vec(),
),
};
let mut reply = vec![
consts::SOCKS5_VERSION,
error.as_u8(), // transform the error into byte code
0x00, // reserved
addr_type, // address type (ipv4, v6, domain)
];
reply.append(&mut ip_oct);
reply.append(&mut port);
reply
}

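A minimal configuration sketch for the embedded SOCKS5 server above, assuming Config, DenyAuthentication and SimpleUserPassword are in scope as declared in this file; the values are illustrative and not what EasyTier itself uses.

fn example_config() -> std::sync::Arc<Config<SimpleUserPassword>> {
    // Start from the defaults declared above: 10 s request timeout, DNS resolving on,
    // command execution on, UDP off, no authentication module attached.
    let mut config = Config::<DenyAuthentication>::default();
    config.set_request_timeout(5); // give up on slow CONNECT targets after 5 seconds
    config.set_udp_support(true); // allow the UDP ASSOCIATE command
    // Attaching username/password auth swaps the authentication type parameter.
    let config = config.with_authentication(SimpleUserPassword {
        username: "user".to_string(),
        password: "pass".to_string(),
    });
    std::sync::Arc::new(config)
}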

@@ -0,0 +1,2 @@
pub mod stream;
pub mod target_addr;


@@ -0,0 +1,65 @@
use std::time::Duration;
use tokio::io::ErrorKind as IOErrorKind;
use tokio::net::{TcpStream, ToSocketAddrs};
use tokio::time::timeout;
use crate::gateway::fast_socks5::{ReplyError, Result};
/// Easily destructure byte buffers by naming each field:
///
/// # Examples (before)
///
/// ```ignore
/// let mut buf = [0u8; 2];
/// stream.read_exact(&mut buf).await?;
/// let [version, method_len] = buf;
///
/// assert_eq!(version, 0x05);
/// ```
///
/// # Examples (after)
///
/// ```ignore
/// let [version, method_len] = read_exact!(stream, [0u8; 2]);
///
/// assert_eq!(version, 0x05);
/// ```
#[macro_export]
macro_rules! read_exact {
($stream: expr, $array: expr) => {{
let mut x = $array;
// $stream
// .read_exact(&mut x)
// .await
// .map_err(|_| io_err("lol"))?;
$stream.read_exact(&mut x).await.map(|_| x)
}};
}
pub async fn tcp_connect_with_timeout<T>(addr: T, request_timeout_s: u64) -> Result<TcpStream>
where
T: ToSocketAddrs,
{
let fut = tcp_connect(addr);
match timeout(Duration::from_secs(request_timeout_s), fut).await {
Ok(result) => result,
Err(_) => Err(ReplyError::ConnectionTimeout.into()),
}
}
pub async fn tcp_connect<T>(addr: T) -> Result<TcpStream>
where
T: ToSocketAddrs,
{
match TcpStream::connect(addr).await {
Ok(o) => Ok(o),
Err(e) => match e.kind() {
// Match other TCP errors with ReplyError
IOErrorKind::ConnectionRefused => Err(ReplyError::ConnectionRefused.into()),
IOErrorKind::ConnectionAborted => Err(ReplyError::ConnectionNotAllowed.into()),
IOErrorKind::ConnectionReset => Err(ReplyError::ConnectionNotAllowed.into()),
IOErrorKind::NotConnected => Err(ReplyError::NetworkUnreachable.into()),
_ => Err(e.into()), // #[error("General failure")] ?
},
}
}

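A short usage sketch for tcp_connect_with_timeout above, assuming a tokio runtime; refused or unreachable targets are already mapped to SOCKS reply errors by tcp_connect, so the caller only forwards the error. Illustrative only.

async fn probe(addr: &str) {
    match tcp_connect_with_timeout(addr, 3).await {
        Ok(stream) => println!("connected, peer addr: {:?}", stream.peer_addr()),
        Err(e) => println!("connect failed: {}", e),
    }
}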

@@ -0,0 +1,244 @@
use crate::gateway::fast_socks5::consts;
use crate::gateway::fast_socks5::consts::SOCKS5_ADDR_TYPE_IPV4;
use crate::gateway::fast_socks5::SocksError;
use crate::read_exact;
use anyhow::Context;
use std::fmt;
use std::io;
use std::net::{Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6};
use std::vec::IntoIter;
use thiserror::Error;
use tokio::io::{AsyncRead, AsyncReadExt};
use tokio::net::lookup_host;
use tracing::{debug, error};
/// SOCKS5 address parsing errors
#[derive(Error, Debug)]
pub enum AddrError {
#[error("DNS Resolution failed")]
DNSResolutionFailed,
#[error("Can't read IPv4")]
IPv4Unreadable,
#[error("Can't read IPv6")]
IPv6Unreadable,
#[error("Can't read port number")]
PortNumberUnreadable,
#[error("Can't read domain len")]
DomainLenUnreadable,
#[error("Can't read Domain content")]
DomainContentUnreadable,
#[error("Malformed UTF-8")]
Utf8,
#[error("Unknown address type")]
IncorrectAddressType,
#[error("{0}")]
Custom(String),
}
/// A description of a connection target.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum TargetAddr {
/// Connect to an IP address.
Ip(SocketAddr),
/// Connect to a fully qualified domain name.
///
/// The domain name will be passed along to the proxy server and DNS lookup
/// will happen there.
Domain(String, u16),
}
impl TargetAddr {
pub async fn resolve_dns(self) -> anyhow::Result<TargetAddr> {
match self {
TargetAddr::Ip(ip) => Ok(TargetAddr::Ip(ip)),
TargetAddr::Domain(domain, port) => {
debug!("Attempt to DNS resolve the domain {}...", &domain);
let socket_addr = lookup_host((&domain[..], port))
.await
.context(AddrError::DNSResolutionFailed)?
.next()
.ok_or(AddrError::Custom(
"Can't fetch DNS to the domain.".to_string(),
))?;
debug!("domain name resolved to {}", socket_addr);
// has been converted to an ip
Ok(TargetAddr::Ip(socket_addr))
}
}
}
pub fn is_ip(&self) -> bool {
match self {
TargetAddr::Ip(_) => true,
_ => false,
}
}
pub fn is_domain(&self) -> bool {
!self.is_ip()
}
pub fn to_be_bytes(&self) -> anyhow::Result<Vec<u8>> {
let mut buf = vec![];
match self {
TargetAddr::Ip(SocketAddr::V4(addr)) => {
debug!("TargetAddr::IpV4");
buf.extend_from_slice(&[SOCKS5_ADDR_TYPE_IPV4]);
debug!("addr ip {:?}", (*addr.ip()).octets());
buf.extend_from_slice(&(addr.ip()).octets()); // ip
buf.extend_from_slice(&addr.port().to_be_bytes()); // port
}
TargetAddr::Ip(SocketAddr::V6(addr)) => {
debug!("TargetAddr::IpV6");
buf.extend_from_slice(&[consts::SOCKS5_ADDR_TYPE_IPV6]);
debug!("addr ip {:?}", (*addr.ip()).octets());
buf.extend_from_slice(&(addr.ip()).octets()); // ip
buf.extend_from_slice(&addr.port().to_be_bytes()); // port
}
TargetAddr::Domain(ref domain, port) => {
debug!("TargetAddr::Domain");
if domain.len() > u8::max_value() as usize {
return Err(SocksError::ExceededMaxDomainLen(domain.len()).into());
}
buf.extend_from_slice(&[consts::SOCKS5_ADDR_TYPE_DOMAIN_NAME, domain.len() as u8]);
buf.extend_from_slice(domain.as_bytes()); // domain content
buf.extend_from_slice(&port.to_be_bytes());
// port content (.to_be_bytes() convert from u16 to u8 type)
}
}
Ok(buf)
}
}
// async-std's ToSocketAddrs doesn't support external trait implementations
// @see https://github.com/async-rs/async-std/issues/539
impl std::net::ToSocketAddrs for TargetAddr {
type Iter = IntoIter<SocketAddr>;
fn to_socket_addrs(&self) -> io::Result<IntoIter<SocketAddr>> {
match *self {
TargetAddr::Ip(addr) => Ok(vec![addr].into_iter()),
TargetAddr::Domain(_, _) => Err(io::Error::new(
io::ErrorKind::Other,
"Domain name has to be explicitly resolved, please use TargetAddr::resolve_dns().",
)),
}
}
}
impl fmt::Display for TargetAddr {
#[inline]
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
TargetAddr::Ip(ref addr) => write!(f, "{}", addr),
TargetAddr::Domain(ref addr, ref port) => write!(f, "{}:{}", addr, port),
}
}
}
/// A trait for objects that can be converted to `TargetAddr`.
pub trait ToTargetAddr {
/// Converts the value of `self` to a `TargetAddr`.
fn to_target_addr(&self) -> io::Result<TargetAddr>;
}
impl<'a> ToTargetAddr for (&'a str, u16) {
fn to_target_addr(&self) -> io::Result<TargetAddr> {
// try to parse as an IP first
if let Ok(addr) = self.0.parse::<Ipv4Addr>() {
return (addr, self.1).to_target_addr();
}
if let Ok(addr) = self.0.parse::<Ipv6Addr>() {
return (addr, self.1).to_target_addr();
}
Ok(TargetAddr::Domain(self.0.to_owned(), self.1))
}
}
impl ToTargetAddr for SocketAddr {
fn to_target_addr(&self) -> io::Result<TargetAddr> {
Ok(TargetAddr::Ip(*self))
}
}
impl ToTargetAddr for SocketAddrV4 {
fn to_target_addr(&self) -> io::Result<TargetAddr> {
SocketAddr::V4(*self).to_target_addr()
}
}
impl ToTargetAddr for SocketAddrV6 {
fn to_target_addr(&self) -> io::Result<TargetAddr> {
SocketAddr::V6(*self).to_target_addr()
}
}
impl ToTargetAddr for (Ipv4Addr, u16) {
fn to_target_addr(&self) -> io::Result<TargetAddr> {
SocketAddrV4::new(self.0, self.1).to_target_addr()
}
}
impl ToTargetAddr for (Ipv6Addr, u16) {
fn to_target_addr(&self) -> io::Result<TargetAddr> {
SocketAddrV6::new(self.0, self.1, 0, 0).to_target_addr()
}
}
#[derive(Debug)]
pub enum Addr {
V4([u8; 4]),
V6([u8; 16]),
Domain(String), // Vec<[u8]> or Box<[u8]> or String ?
}
/// This function is used by the client & the server
pub async fn read_address<T: AsyncRead + Unpin>(
stream: &mut T,
atyp: u8,
) -> anyhow::Result<TargetAddr> {
let addr = match atyp {
consts::SOCKS5_ADDR_TYPE_IPV4 => {
debug!("Address type `IPv4`");
Addr::V4(read_exact!(stream, [0u8; 4]).context(AddrError::IPv4Unreadable)?)
}
consts::SOCKS5_ADDR_TYPE_IPV6 => {
debug!("Address type `IPv6`");
Addr::V6(read_exact!(stream, [0u8; 16]).context(AddrError::IPv6Unreadable)?)
}
consts::SOCKS5_ADDR_TYPE_DOMAIN_NAME => {
debug!("Address type `domain`");
let len = read_exact!(stream, [0]).context(AddrError::DomainLenUnreadable)?[0];
let domain = read_exact!(stream, vec![0u8; len as usize])
.context(AddrError::DomainContentUnreadable)?;
// make sure the bytes are correct utf8 string
let domain = String::from_utf8(domain).context(AddrError::Utf8)?;
Addr::Domain(domain)
}
_ => return Err(anyhow::anyhow!(AddrError::IncorrectAddressType)),
};
// Find port number
let port = read_exact!(stream, [0u8; 2]).context(AddrError::PortNumberUnreadable)?;
// Convert (u8 * 2) into u16
let port = (port[0] as u16) << 8 | port[1] as u16;
// Merge ADDRESS + PORT into a TargetAddr
let addr: TargetAddr = match addr {
Addr::V4([a, b, c, d]) => (Ipv4Addr::new(a, b, c, d), port).to_target_addr()?,
Addr::V6(x) => (Ipv6Addr::from(x), port).to_target_addr()?,
Addr::Domain(domain) => TargetAddr::Domain(domain, port),
};
Ok(addr)
}

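A brief sketch of the conversions above, assuming ToTargetAddr and the other items of this module are in scope; for illustration only.

fn target_addr_examples() -> anyhow::Result<()> {
    // An IP literal converts straight to TargetAddr::Ip ...
    let ip = ("127.0.0.1", 8080u16).to_target_addr()?;
    assert!(ip.is_ip());
    // ... while anything else stays a Domain and must be resolved later via resolve_dns().
    let domain = ("example.org", 443u16).to_target_addr()?;
    assert!(domain.is_domain());
    // Serialization follows the SOCKS5 address encoding: ATYP, address, then port.
    assert_eq!(ip.to_be_bytes()?, vec![1u8, 127, 0, 0, 1, 0x1f, 0x90]);
    Ok(())
}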

@@ -9,6 +9,12 @@ pub mod tcp_proxy;
#[cfg(feature = "smoltcp")]
pub mod tokio_smoltcp;
pub mod udp_proxy;
#[cfg(feature = "socks5")]
pub mod fast_socks5;
#[cfg(feature = "socks5")]
pub mod socks5;
#[derive(Debug)]
struct CidrSet {
global_ctx: ArcGlobalCtx,


@@ -0,0 +1,416 @@
use std::{
net::{IpAddr, Ipv4Addr, SocketAddr},
sync::Arc,
time::Duration,
};
use crate::{
gateway::{
fast_socks5::{
server::{
AcceptAuthentication, AsyncTcpConnector, Config, SimpleUserPassword, Socks5Socket,
},
util::stream::tcp_connect_with_timeout,
},
tokio_smoltcp::TcpStream,
},
tunnel::packet_def::PacketType,
};
use anyhow::Context;
use dashmap::DashSet;
use pnet::packet::{ip::IpNextHeaderProtocols, ipv4::Ipv4Packet, tcp::TcpPacket, Packet};
use tokio::{
io::{AsyncRead, AsyncWrite},
select,
};
use tokio::{
net::TcpListener,
sync::{mpsc, Mutex},
task::JoinSet,
time::timeout,
};
use crate::{
common::{error::Error, global_ctx::GlobalCtx},
gateway::tokio_smoltcp::{channel_device, Net, NetConfig},
peers::{peer_manager::PeerManager, PeerPacketFilter},
tunnel::packet_def::ZCPacket,
};
enum SocksTcpStream {
TcpStream(tokio::net::TcpStream),
SmolTcpStream(TcpStream),
}
impl AsyncRead for SocksTcpStream {
fn poll_read(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> std::task::Poll<std::io::Result<()>> {
match self.get_mut() {
SocksTcpStream::TcpStream(ref mut stream) => {
std::pin::Pin::new(stream).poll_read(cx, buf)
}
SocksTcpStream::SmolTcpStream(ref mut stream) => {
std::pin::Pin::new(stream).poll_read(cx, buf)
}
}
}
}
impl AsyncWrite for SocksTcpStream {
fn poll_write(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &[u8],
) -> std::task::Poll<Result<usize, std::io::Error>> {
match self.get_mut() {
SocksTcpStream::TcpStream(ref mut stream) => {
std::pin::Pin::new(stream).poll_write(cx, buf)
}
SocksTcpStream::SmolTcpStream(ref mut stream) => {
std::pin::Pin::new(stream).poll_write(cx, buf)
}
}
}
fn poll_flush(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
match self.get_mut() {
SocksTcpStream::TcpStream(ref mut stream) => std::pin::Pin::new(stream).poll_flush(cx),
SocksTcpStream::SmolTcpStream(ref mut stream) => {
std::pin::Pin::new(stream).poll_flush(cx)
}
}
}
fn poll_shutdown(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
match self.get_mut() {
SocksTcpStream::TcpStream(ref mut stream) => {
std::pin::Pin::new(stream).poll_shutdown(cx)
}
SocksTcpStream::SmolTcpStream(ref mut stream) => {
std::pin::Pin::new(stream).poll_shutdown(cx)
}
}
}
}
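// Note (editorial, not part of this diff): `SocksTcpStream` is a plain enum dispatcher; the
// `Pin::new(...)` delegation is sound because both inner stream types are `Unpin`, so no
// pin-projection machinery is needed.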
#[derive(Debug, Eq, PartialEq, Hash, Clone)]
struct Socks5Entry {
src: SocketAddr,
dst: SocketAddr,
}
type Socks5EntrySet = Arc<DashSet<Socks5Entry>>;
struct Socks5ServerNet {
ipv4_addr: Ipv4Addr,
auth: Option<SimpleUserPassword>,
smoltcp_net: Arc<Net>,
forward_tasks: Arc<std::sync::Mutex<JoinSet<()>>>,
entries: Socks5EntrySet,
}
impl Socks5ServerNet {
pub fn new(
ipv4_addr: Ipv4Addr,
auth: Option<SimpleUserPassword>,
peer_manager: Arc<PeerManager>,
packet_recv: Arc<Mutex<mpsc::Receiver<ZCPacket>>>,
entries: Socks5EntrySet,
) -> Self {
let mut forward_tasks = JoinSet::new();
let mut cap = smoltcp::phy::DeviceCapabilities::default();
cap.max_transmission_unit = 1280;
cap.medium = smoltcp::phy::Medium::Ip;
let (dev, stack_sink, mut stack_stream) = channel_device::ChannelDevice::new(cap);
let packet_recv = packet_recv.clone();
forward_tasks.spawn(async move {
let mut smoltcp_stack_receiver = packet_recv.lock().await;
while let Some(packet) = smoltcp_stack_receiver.recv().await {
tracing::trace!(?packet, "received packet from peer, sending to smoltcp stack");
if let Err(e) = stack_sink.send(Ok(packet.payload().to_vec())).await {
tracing::error!("send to smoltcp stack failed: {:?}", e);
}
}
tracing::error!("smoltcp stack sink exited");
panic!("smoltcp stack sink exited");
});
forward_tasks.spawn(async move {
while let Some(data) = stack_stream.recv().await {
tracing::trace!(
?data,
"receive from smoltcp stack and send to peer mgr packet"
);
let Some(ipv4) = Ipv4Packet::new(&data) else {
tracing::error!(?data, "smoltcp stack stream got a non-IPv4 packet");
continue;
};
let dst = ipv4.get_destination();
let packet = ZCPacket::new_with_payload(&data);
if let Err(e) = peer_manager.send_msg_ipv4(packet, dst).await {
tracing::error!("send to peer failed in smoltcp sender: {:?}", e);
}
}
tracing::error!("smoltcp stack stream exited");
panic!("smoltcp stack stream exited");
});
let interface_config = smoltcp::iface::Config::new(smoltcp::wire::HardwareAddress::Ip);
let net = Net::new(
dev,
NetConfig::new(
interface_config,
format!("{}/24", ipv4_addr).parse().unwrap(),
vec![format!("{}", ipv4_addr).parse().unwrap()],
),
);
Self {
ipv4_addr,
auth,
smoltcp_net: Arc::new(net),
forward_tasks: Arc::new(std::sync::Mutex::new(forward_tasks)),
entries,
}
}
fn handle_tcp_stream(&self, stream: tokio::net::TcpStream) {
let mut config = Config::<AcceptAuthentication>::default();
config.set_request_timeout(10);
config.set_skip_auth(false);
config.set_allow_no_auth(true);
struct SmolTcpConnector(
Arc<Net>,
Socks5EntrySet,
std::sync::Mutex<Option<Socks5Entry>>,
);
#[async_trait::async_trait]
impl AsyncTcpConnector for SmolTcpConnector {
type S = SocksTcpStream;
async fn tcp_connect(
&self,
addr: SocketAddr,
timeout_s: u64,
) -> crate::gateway::fast_socks5::Result<SocksTcpStream> {
let local_addr = self.0.get_address();
let port = self.0.get_port();
let entry = Socks5Entry {
src: SocketAddr::new(local_addr, port),
dst: addr,
};
*self.2.lock().unwrap() = Some(entry.clone());
self.1.insert(entry);
if addr.ip() == local_addr {
let modified_addr =
SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), addr.port());
Ok(SocksTcpStream::TcpStream(
tcp_connect_with_timeout(modified_addr, timeout_s).await?,
))
} else {
let remote_socket = timeout(
Duration::from_secs(timeout_s),
self.0.tcp_connect(addr, port),
)
.await
.with_context(|| "connect to remote timeout")?;
Ok(SocksTcpStream::SmolTcpStream(remote_socket.map_err(
|e| super::fast_socks5::SocksError::Other(e.into()),
)?))
}
}
}
impl Drop for SmolTcpConnector {
fn drop(&mut self) {
if let Some(entry) = self.2.lock().unwrap().take() {
self.1.remove(&entry);
}
}
}
let socket = Socks5Socket::new(
stream,
Arc::new(config),
SmolTcpConnector(
self.smoltcp_net.clone(),
self.entries.clone(),
std::sync::Mutex::new(None),
),
);
self.forward_tasks.lock().unwrap().spawn(async move {
match socket.upgrade_to_socks5().await {
Ok(_) => {
tracing::info!("socks5 handle success");
}
Err(e) => {
tracing::error!("socks5 handshake failed: {:?}", e);
}
};
});
}
}
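// Note (editorial, not part of this diff): `SmolTcpConnector` records every (src, dst) pair so
// the packet filter below can steer matching return traffic into the smoltcp stack, and it
// special-cases destinations equal to the local virtual IP by connecting to 127.0.0.1 via the
// kernel stack instead of smoltcp.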
pub struct Socks5Server {
global_ctx: Arc<GlobalCtx>,
peer_manager: Arc<PeerManager>,
auth: Option<SimpleUserPassword>,
tasks: Arc<Mutex<JoinSet<()>>>,
packet_sender: mpsc::Sender<ZCPacket>,
packet_recv: Arc<Mutex<mpsc::Receiver<ZCPacket>>>,
net: Arc<Mutex<Option<Socks5ServerNet>>>,
entries: Socks5EntrySet,
}
#[async_trait::async_trait]
impl PeerPacketFilter for Socks5Server {
async fn try_process_packet_from_peer(&self, packet: ZCPacket) -> Option<ZCPacket> {
let hdr = packet.peer_manager_header().unwrap();
if hdr.packet_type != PacketType::Data as u8 {
return Some(packet);
};
let payload_bytes = packet.payload();
let ipv4 = Ipv4Packet::new(payload_bytes).unwrap();
if ipv4.get_version() != 4 || ipv4.get_next_level_protocol() != IpNextHeaderProtocols::Tcp {
return Some(packet);
}
let tcp_packet = TcpPacket::new(ipv4.payload()).unwrap();
let entry = Socks5Entry {
dst: SocketAddr::new(ipv4.get_source().into(), tcp_packet.get_source()),
src: SocketAddr::new(ipv4.get_destination().into(), tcp_packet.get_destination()),
};
if !self.entries.contains(&entry) {
return Some(packet);
}
let _ = self.packet_sender.try_send(packet).ok();
return None;
}
}
impl Socks5Server {
pub fn new(
global_ctx: Arc<GlobalCtx>,
peer_manager: Arc<PeerManager>,
auth: Option<SimpleUserPassword>,
) -> Arc<Self> {
let (packet_sender, packet_recv) = mpsc::channel(1024);
Arc::new(Self {
global_ctx,
peer_manager,
auth,
tasks: Arc::new(Mutex::new(JoinSet::new())),
packet_recv: Arc::new(Mutex::new(packet_recv)),
packet_sender,
net: Arc::new(Mutex::new(None)),
entries: Arc::new(DashSet::new()),
})
}
async fn run_net_update_task(self: &Arc<Self>) {
let net = self.net.clone();
let global_ctx = self.global_ctx.clone();
let peer_manager = self.peer_manager.clone();
let packet_recv = self.packet_recv.clone();
let entries = self.entries.clone();
self.tasks.lock().await.spawn(async move {
let mut prev_ipv4 = None;
loop {
let mut event_recv = global_ctx.subscribe();
let cur_ipv4 = global_ctx.get_ipv4();
if prev_ipv4 != cur_ipv4 {
prev_ipv4 = cur_ipv4;
entries.clear();
if cur_ipv4.is_none() {
let _ = net.lock().await.take();
} else {
net.lock().await.replace(Socks5ServerNet::new(
cur_ipv4.unwrap(),
None,
peer_manager.clone(),
packet_recv.clone(),
entries.clone(),
));
}
}
select! {
_ = event_recv.recv() => {}
_ = tokio::time::sleep(Duration::from_secs(120)) => {}
}
}
});
}
pub async fn run(self: &Arc<Self>) -> Result<(), Error> {
let Some(proxy_url) = self.global_ctx.config.get_socks5_portal() else {
return Ok(());
};
let bind_addr = format!(
"{}:{}",
proxy_url.host_str().unwrap(),
proxy_url.port().unwrap()
);
let listener = {
let _g = self.global_ctx.net_ns.guard();
TcpListener::bind(bind_addr.parse::<SocketAddr>().unwrap()).await?
};
self.peer_manager
.add_packet_process_pipeline(Box::new(self.clone()))
.await;
self.run_net_update_task().await;
let net = self.net.clone();
self.tasks.lock().await.spawn(async move {
loop {
match listener.accept().await {
Ok((socket, _addr)) => {
tracing::info!("accept a new connection, {:?}", socket);
if let Some(net) = net.lock().await.as_ref() {
net.handle_tcp_stream(socket);
}
}
Err(err) => tracing::error!("accept error = {:?}", err),
}
}
});
Ok(())
}
}
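// Note (editorial, not part of this diff): `run()` is a no-op unless the config supplies a
// socks5 portal URL; only its host and port are used for the bind address (e.g. a
// hypothetical `socks5://0.0.0.0:1080` would listen on port 1080), and the listener is
// created inside the net-ns guard so it binds within the instance's network namespace.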

View File

@@ -97,10 +97,55 @@ impl ProxyTcpStream {
}
}
#[cfg(feature = "smoltcp")]
struct SmolTcpListener {
listener_task: JoinSet<()>,
listen_count: usize,
stream_rx: mpsc::UnboundedReceiver<Result<(tokio_smoltcp::TcpStream, SocketAddr)>>,
}
#[cfg(feature = "smoltcp")]
impl SmolTcpListener {
pub async fn new(net: Arc<Mutex<Option<Net>>>, listen_count: usize) -> Self {
let mut tasks = JoinSet::new();
let (tx, rx) = mpsc::unbounded_channel();
let locked_net = net.lock().await;
for _ in 0..listen_count {
let mut tcp = locked_net
.as_ref()
.unwrap()
.tcp_bind("0.0.0.0:8899".parse().unwrap())
.await
.unwrap();
let tx = tx.clone();
tasks.spawn(async move {
loop {
tx.send(tcp.accept().await.map_err(|e| {
anyhow::anyhow!("smol tcp listener accept failed: {:?}", e).into()
}))
.unwrap();
}
});
}
Self {
listener_task: tasks,
listen_count,
stream_rx: rx,
}
}
pub async fn accept(&mut self) -> Result<(tokio_smoltcp::TcpStream, SocketAddr)> {
self.stream_rx.recv().await.unwrap()
}
}
enum ProxyTcpListener {
KernelTcpListener(TcpListener),
#[cfg(feature = "smoltcp")]
SmolTcpListener(tokio_smoltcp::TcpListener),
SmolTcpListener(SmolTcpListener),
}
impl ProxyTcpListener {
@@ -375,8 +420,8 @@ impl TcpProxy {
),
);
net.set_any_ip(true);
let tcp = net.tcp_bind("0.0.0.0:8899".parse().unwrap()).await?;
self.smoltcp_net.lock().await.replace(net);
let tcp = SmolTcpListener::new(self.smoltcp_net.clone(), 64).await;
self.enable_smoltcp
.store(true, std::sync::atomic::Ordering::Relaxed);

View File

@@ -4,7 +4,7 @@
use std::{
io,
net::{Ipv4Addr, Ipv6Addr, SocketAddr},
net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr},
sync::{
atomic::{AtomicU16, Ordering},
Arc,
@@ -134,7 +134,10 @@ impl Net {
fut,
)
}
fn get_port(&self) -> u16 {
pub fn get_address(&self) -> IpAddr {
self.ip_addr.address().into()
}
pub fn get_port(&self) -> u16 {
self.from_port
.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
Some(if x > 60000 { 10000 } else { x + 1 })
@@ -147,10 +150,10 @@ impl Net {
TcpListener::new(self.reactor.clone(), addr.into()).await
}
/// Opens a TCP connection to a remote host.
pub async fn tcp_connect(&self, addr: SocketAddr) -> io::Result<TcpStream> {
pub async fn tcp_connect(&self, addr: SocketAddr, local_port: u16) -> io::Result<TcpStream> {
TcpStream::connect(
self.reactor.clone(),
(self.ip_addr.address(), self.get_port()).into(),
(self.ip_addr.address(), local_port).into(),
addr.into(),
)
.await

View File

@@ -4,6 +4,7 @@ use std::{
time::Duration,
};
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
use pnet::packet::{
ip::IpNextHeaderProtocols,
@@ -11,12 +12,10 @@ use pnet::packet::{
udp::{self, MutableUdpPacket},
Packet,
};
use tachyonix::{channel, Receiver, Sender, TrySendError};
use tokio::{
net::UdpSocket,
sync::{
mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender},
Mutex,
},
sync::Mutex,
task::{JoinHandle, JoinSet},
time::timeout,
};
@@ -49,6 +48,7 @@ struct UdpNatEntry {
forward_task: Mutex<Option<JoinHandle<()>>>,
stopped: AtomicBool,
start_time: std::time::Instant,
last_active_time: AtomicCell<std::time::Instant>,
}
impl UdpNatEntry {
@@ -72,6 +72,7 @@ impl UdpNatEntry {
forward_task: Mutex::new(None),
stopped: AtomicBool::new(false),
start_time: std::time::Instant::now(),
last_active_time: AtomicCell::new(std::time::Instant::now()),
})
}
@@ -82,7 +83,7 @@ impl UdpNatEntry {
async fn compose_ipv4_packet(
self: &Arc<Self>,
packet_sender: &mut UnboundedSender<ZCPacket>,
packet_sender: &mut Sender<ZCPacket>,
buf: &mut [u8],
src_v4: &SocketAddrV4,
payload_len: usize,
@@ -119,11 +120,13 @@ impl UdpNatEntry {
p.fill_peer_manager_hdr(self.my_peer_id, self.src_peer_id, PacketType::Data as u8);
p.mut_peer_manager_header().unwrap().set_no_proxy(true);
if let Err(e) = packet_sender.send(p) {
tracing::error!("send icmp packet to peer failed: {:?}, may exiting..", e);
return Err(Error::AnyhowError(e.into()));
match packet_sender.try_send(p) {
Err(TrySendError::Closed(e)) => {
tracing::error!("send icmp packet to peer failed: {:?}, may exiting..", e);
Err(Error::Unknown)
}
_ => Ok(()),
}
Ok(())
},
)?;
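// Note (editorial, not part of this diff): with the bounded tachyonix channel, only a closed
// channel is treated as an error here; a full queue silently drops the packet, which leans on
// UDP's lossy semantics instead of blocking the forward task.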
@@ -132,7 +135,7 @@ impl UdpNatEntry {
async fn forward_task(
self: Arc<Self>,
mut packet_sender: UnboundedSender<ZCPacket>,
mut packet_sender: Sender<ZCPacket>,
virtual_ipv4: Ipv4Addr,
) {
let mut buf = [0u8; 65536];
@@ -141,7 +144,7 @@ impl UdpNatEntry {
loop {
let (len, src_socket) = match timeout(
Duration::from_secs(30),
Duration::from_secs(120),
self.socket.recv_from(&mut udp_body),
)
.await
@@ -167,6 +170,8 @@ impl UdpNatEntry {
continue;
};
self.mark_active();
if src_v4.ip().is_loopback() {
src_v4.set_ip(virtual_ipv4);
}
@@ -177,7 +182,7 @@ impl UdpNatEntry {
&mut buf,
&src_v4,
len,
1200,
1256,
ip_id,
)
.await
@@ -189,6 +194,14 @@ impl UdpNatEntry {
self.stop();
}
fn mark_active(&self) {
self.last_active_time.store(std::time::Instant::now());
}
fn is_active(&self) -> bool {
self.last_active_time.load().elapsed().as_secs() < 180
}
}
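// Note (editorial, not part of this diff): with these helpers a NAT entry counts as active if
// it saw traffic within the last 180 seconds; the cleanup task below (running every 15
// seconds) now retains entries by `is_active()` instead of expiring them a fixed 120 seconds
// after creation.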
#[derive(Debug)]
@@ -200,8 +213,8 @@ pub struct UdpProxy {
nat_table: Arc<DashMap<UdpNatKey, Arc<UdpNatEntry>>>,
sender: UnboundedSender<ZCPacket>,
receiver: Mutex<Option<UnboundedReceiver<ZCPacket>>>,
sender: Sender<ZCPacket>,
receiver: Mutex<Option<Receiver<ZCPacket>>>,
tasks: Mutex<JoinSet<()>>,
@@ -287,6 +300,8 @@ impl UdpProxy {
)));
}
nat_entry.mark_active();
// TODO: should this be async?
let dst_socket = if Some(ipv4.get_destination()) == self.global_ctx.get_ipv4() {
format!("127.0.0.1:{}", udp_packet.get_destination())
@@ -335,7 +350,7 @@ impl UdpProxy {
peer_manager: Arc<PeerManager>,
) -> Result<Arc<Self>, Error> {
let cidr_set = CidrSet::new(global_ctx.clone());
let (sender, receiver) = unbounded_channel();
let (sender, receiver) = channel(64);
let ret = Self {
global_ctx,
peer_manager,
@@ -360,7 +375,7 @@ impl UdpProxy {
loop {
tokio::time::sleep(Duration::from_secs(15)).await;
nat_table.retain(|_, v| {
if v.start_time.elapsed().as_secs() > 120 {
if !v.is_active() {
tracing::info!(?v, "udp nat table entry removed");
v.stop();
false
@@ -383,7 +398,7 @@ impl UdpProxy {
let mut receiver = self.receiver.lock().await.take().unwrap();
let peer_manager = self.peer_manager.clone();
self.tasks.lock().await.spawn(async move {
while let Some(msg) = receiver.recv().await {
while let Ok(msg) = receiver.recv().await {
let to_peer_id: PeerId = msg.peer_manager_header().unwrap().to_peer_id.get();
tracing::trace!(?msg, ?to_peer_id, "udp nat packet response send");
let ret = peer_manager.send_msg(msg, to_peer_id).await;

View File

@@ -8,8 +8,6 @@ use anyhow::Context;
use cidr::Ipv4Inet;
use tokio::{sync::Mutex, task::JoinSet};
use tonic::transport::server::TcpIncoming;
use tonic::transport::Server;
use crate::common::config::ConfigLoader;
use crate::common::error::Error;
@@ -26,12 +24,20 @@ use crate::peers::peer_conn::PeerConnId;
use crate::peers::peer_manager::{PeerManager, RouteAlgoType};
use crate::peers::rpc_service::PeerManagerRpcService;
use crate::peers::PacketRecvChanReceiver;
use crate::rpc::vpn_portal_rpc_server::VpnPortalRpc;
use crate::rpc::{GetVpnPortalInfoRequest, GetVpnPortalInfoResponse, VpnPortalInfo};
use crate::proto::cli::VpnPortalRpc;
use crate::proto::cli::{GetVpnPortalInfoRequest, GetVpnPortalInfoResponse, VpnPortalInfo};
use crate::proto::peer_rpc::PeerCenterRpcServer;
use crate::proto::rpc_impl::standalone::StandAloneServer;
use crate::proto::rpc_types;
use crate::proto::rpc_types::controller::BaseController;
use crate::tunnel::tcp::TcpTunnelListener;
use crate::vpn_portal::{self, VpnPortal};
use super::listeners::ListenerManager;
#[cfg(feature = "socks5")]
use crate::gateway::socks5::Socks5Server;
#[derive(Clone)]
struct IpProxy {
tcp_proxy: Arc<TcpProxy>,
@@ -101,8 +107,6 @@ pub struct Instance {
nic_ctx: ArcNicCtx,
tasks: JoinSet<()>,
peer_packet_receiver: Arc<Mutex<PacketRecvChanReceiver>>,
peer_manager: Arc<PeerManager>,
listener_manager: Arc<Mutex<ListenerManager<PeerManager>>>,
@@ -116,6 +120,11 @@ pub struct Instance {
vpn_portal: Arc<Mutex<Box<dyn VpnPortal>>>,
#[cfg(feature = "socks5")]
socks5_server: Arc<Socks5Server>,
rpc_server: Option<StandAloneServer<TcpTunnelListener>>,
global_ctx: ArcGlobalCtx,
}
@@ -161,6 +170,15 @@ impl Instance {
#[cfg(not(feature = "wireguard"))]
let vpn_portal_inst = vpn_portal::NullVpnPortal;
#[cfg(feature = "socks5")]
let socks5_server = Socks5Server::new(global_ctx.clone(), peer_manager.clone(), None);
let rpc_server = global_ctx.config.get_rpc_portal().and_then(|s| {
Some(StandAloneServer::new(TcpTunnelListener::new(
format!("tcp://{}", s).parse().unwrap(),
)))
});
Instance {
inst_name: global_ctx.inst_name.clone(),
id,
@@ -168,7 +186,6 @@ impl Instance {
peer_packet_receiver: Arc::new(Mutex::new(peer_packet_receiver)),
nic_ctx: Arc::new(Mutex::new(None)),
tasks: JoinSet::new(),
peer_manager,
listener_manager,
conn_manager,
@@ -181,6 +198,11 @@ impl Instance {
vpn_portal: Arc::new(Mutex::new(Box::new(vpn_portal_inst))),
#[cfg(feature = "socks5")]
socks5_server,
rpc_server,
global_ctx,
}
}
@@ -363,7 +385,7 @@ impl Instance {
self.check_dhcp_ip_conflict();
}
self.run_rpc_server()?;
self.run_rpc_server().await?;
// run after the tun device is created, so the listener can bind to it (may be required on Windows 10)
self.ip_proxy = Some(IpProxy::new(
@@ -387,6 +409,9 @@ impl Instance {
self.run_vpn_portal().await?;
}
#[cfg(feature = "socks5")]
self.socks5_server.run().await?;
Ok(())
}
@@ -426,11 +451,8 @@ impl Instance {
Ok(())
}
pub async fn wait(&mut self) {
while let Some(ret) = self.tasks.join_next().await {
tracing::info!("task finished: {:?}", ret);
ret.unwrap();
}
pub async fn wait(&self) {
self.peer_manager.wait().await;
}
pub fn id(&self) -> uuid::Uuid {
@@ -441,24 +463,28 @@ impl Instance {
self.peer_manager.my_peer_id()
}
fn get_vpn_portal_rpc_service(&self) -> impl VpnPortalRpc {
fn get_vpn_portal_rpc_service(&self) -> impl VpnPortalRpc<Controller = BaseController> + Clone {
#[derive(Clone)]
struct VpnPortalRpcService {
peer_mgr: Weak<PeerManager>,
vpn_portal: Weak<Mutex<Box<dyn VpnPortal>>>,
}
#[tonic::async_trait]
#[async_trait::async_trait]
impl VpnPortalRpc for VpnPortalRpcService {
type Controller = BaseController;
async fn get_vpn_portal_info(
&self,
_request: tonic::Request<GetVpnPortalInfoRequest>,
) -> Result<tonic::Response<GetVpnPortalInfoResponse>, tonic::Status> {
_: BaseController,
_request: GetVpnPortalInfoRequest,
) -> Result<GetVpnPortalInfoResponse, rpc_types::error::Error> {
let Some(vpn_portal) = self.vpn_portal.upgrade() else {
return Err(tonic::Status::unavailable("vpn portal not available"));
return Err(anyhow::anyhow!("vpn portal not available").into());
};
let Some(peer_mgr) = self.peer_mgr.upgrade() else {
return Err(tonic::Status::unavailable("peer manager not available"));
return Err(anyhow::anyhow!("peer manager not available").into());
};
let vpn_portal = vpn_portal.lock().await;
@@ -470,7 +496,7 @@ impl Instance {
}),
};
Ok(tonic::Response::new(ret))
Ok(ret)
}
}
@@ -480,46 +506,36 @@ impl Instance {
}
}
fn run_rpc_server(&mut self) -> Result<(), Error> {
let Some(addr) = self.global_ctx.config.get_rpc_portal() else {
async fn run_rpc_server(&mut self) -> Result<(), Error> {
let Some(_) = self.global_ctx.config.get_rpc_portal() else {
tracing::info!("rpc server not enabled, because rpc_portal is not set.");
return Ok(());
};
use crate::proto::cli::*;
let peer_mgr = self.peer_manager.clone();
let conn_manager = self.conn_manager.clone();
let net_ns = self.global_ctx.net_ns.clone();
let peer_center = self.peer_center.clone();
let vpn_portal_rpc = self.get_vpn_portal_rpc_service();
let incoming = TcpIncoming::new(addr, true, None)
.map_err(|e| anyhow::anyhow!("create rpc server failed. addr: {}, err: {}", addr, e))?;
self.tasks.spawn(async move {
let _g = net_ns.guard();
Server::builder()
.add_service(
crate::rpc::peer_manage_rpc_server::PeerManageRpcServer::new(
PeerManagerRpcService::new(peer_mgr),
),
)
.add_service(
crate::rpc::connector_manage_rpc_server::ConnectorManageRpcServer::new(
ConnectorManagerRpcService(conn_manager.clone()),
),
)
.add_service(
crate::rpc::peer_center_rpc_server::PeerCenterRpcServer::new(
peer_center.get_rpc_service(),
),
)
.add_service(crate::rpc::vpn_portal_rpc_server::VpnPortalRpcServer::new(
vpn_portal_rpc,
))
.serve_with_incoming(incoming)
.await
.with_context(|| format!("rpc server failed. addr: {}", addr))
.unwrap();
});
Ok(())
let s = self.rpc_server.as_mut().unwrap();
s.registry().register(
PeerManageRpcServer::new(PeerManagerRpcService::new(peer_mgr)),
"",
);
s.registry().register(
ConnectorManageRpcServer::new(ConnectorManagerRpcService(conn_manager)),
"",
);
s.registry()
.register(PeerCenterRpcServer::new(peer_center.get_rpc_service()), "");
s.registry()
.register(VpnPortalRpcServer::new(vpn_portal_rpc), "");
let _g = self.global_ctx.net_ns.guard();
Ok(s.serve().await.with_context(|| "rpc server start failed")?)
}
pub fn get_global_ctx(&self) -> ArcGlobalCtx {

View File

@@ -159,8 +159,16 @@ impl<H: TunnelHandlerForListener + Send + Sync + 'static + Debug> ListenerManage
let tunnel_info = ret.info().unwrap();
global_ctx.issue_event(GlobalCtxEvent::ConnectionAccepted(
tunnel_info.local_addr.clone(),
tunnel_info.remote_addr.clone(),
tunnel_info
.local_addr
.clone()
.unwrap_or_default()
.to_string(),
tunnel_info
.remote_addr
.clone()
.unwrap_or_default()
.to_string(),
));
tracing::info!(ret = ?ret, "conn accepted");
let peer_manager = peer_manager.clone();
@@ -169,8 +177,8 @@ impl<H: TunnelHandlerForListener + Send + Sync + 'static + Debug> ListenerManage
let server_ret = peer_manager.handle_tunnel(ret).await;
if let Err(e) = &server_ret {
global_ctx.issue_event(GlobalCtxEvent::ConnectionError(
tunnel_info.local_addr,
tunnel_info.remote_addr,
tunnel_info.local_addr.unwrap_or_default().to_string(),
tunnel_info.remote_addr.unwrap_or_default().to_string(),
e.to_string(),
));
tracing::error!(error = ?e, "handle conn error");

View File

@@ -1,7 +1,5 @@
pub mod instance;
pub mod listeners;
#[cfg(feature = "tun")]
pub mod tun_codec;
#[cfg(feature = "tun")]
pub mod virtual_nic;

View File

@@ -1,179 +0,0 @@
use std::io;
use byteorder::{NativeEndian, NetworkEndian, WriteBytesExt};
use tokio_util::bytes::{BufMut, Bytes, BytesMut};
use tokio_util::codec::{Decoder, Encoder};
/// A packet protocol IP version
#[derive(Debug, Clone, Copy, Default)]
enum PacketProtocol {
#[default]
IPv4,
IPv6,
Other(u8),
}
// Note: the protocol in the packet information header is platform dependent.
impl PacketProtocol {
#[cfg(any(target_os = "linux", target_os = "android"))]
fn into_pi_field(self) -> Result<u16, io::Error> {
use nix::libc;
match self {
PacketProtocol::IPv4 => Ok(libc::ETH_P_IP as u16),
PacketProtocol::IPv6 => Ok(libc::ETH_P_IPV6 as u16),
PacketProtocol::Other(_) => Err(io::Error::new(
io::ErrorKind::Other,
"neither an IPv4 nor IPv6 packet",
)),
}
}
#[cfg(any(target_os = "macos", target_os = "ios"))]
fn into_pi_field(self) -> Result<u16, io::Error> {
use nix::libc;
match self {
PacketProtocol::IPv4 => Ok(libc::PF_INET as u16),
PacketProtocol::IPv6 => Ok(libc::PF_INET6 as u16),
PacketProtocol::Other(_) => Err(io::Error::new(
io::ErrorKind::Other,
"neither an IPv4 nor IPv6 packet",
)),
}
}
#[cfg(target_os = "windows")]
fn into_pi_field(self) -> Result<u16, io::Error> {
unimplemented!()
}
}
#[derive(Debug)]
pub enum TunPacketBuffer {
Bytes(Bytes),
BytesMut(BytesMut),
}
impl From<TunPacketBuffer> for Bytes {
fn from(buf: TunPacketBuffer) -> Self {
match buf {
TunPacketBuffer::Bytes(bytes) => bytes,
TunPacketBuffer::BytesMut(bytes) => bytes.freeze(),
}
}
}
impl AsRef<[u8]> for TunPacketBuffer {
fn as_ref(&self) -> &[u8] {
match self {
TunPacketBuffer::Bytes(bytes) => bytes.as_ref(),
TunPacketBuffer::BytesMut(bytes) => bytes.as_ref(),
}
}
}
/// A Tun Packet to be sent or received on the TUN interface.
#[derive(Debug)]
pub struct TunPacket(PacketProtocol, TunPacketBuffer);
/// Infer the protocol based on the first nibble in the packet buffer.
fn infer_proto(buf: &[u8]) -> PacketProtocol {
match buf[0] >> 4 {
4 => PacketProtocol::IPv4,
6 => PacketProtocol::IPv6,
p => PacketProtocol::Other(p),
}
}
impl TunPacket {
/// Create a new `TunPacket` based on a byte slice.
pub fn new(buffer: TunPacketBuffer) -> TunPacket {
let proto = infer_proto(buffer.as_ref());
TunPacket(proto, buffer)
}
/// Return this packet's bytes.
pub fn get_bytes(&self) -> &[u8] {
match &self.1 {
TunPacketBuffer::Bytes(bytes) => bytes.as_ref(),
TunPacketBuffer::BytesMut(bytes) => bytes.as_ref(),
}
}
pub fn into_bytes(self) -> Bytes {
match self.1 {
TunPacketBuffer::Bytes(bytes) => bytes,
TunPacketBuffer::BytesMut(bytes) => bytes.freeze(),
}
}
pub fn into_bytes_mut(self) -> BytesMut {
match self.1 {
TunPacketBuffer::Bytes(_) => panic!("cannot into_bytes_mut from bytes"),
TunPacketBuffer::BytesMut(bytes) => bytes,
}
}
}
/// A TunPacket Encoder/Decoder.
pub struct TunPacketCodec(bool, i32);
impl TunPacketCodec {
/// Create a new `TunPacketCodec` specifying whether the underlying
/// tunnel Device has enabled the packet information header.
pub fn new(pi: bool, mtu: i32) -> TunPacketCodec {
TunPacketCodec(pi, mtu)
}
}
impl Decoder for TunPacketCodec {
type Item = TunPacket;
type Error = io::Error;
fn decode(&mut self, buf: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
if buf.is_empty() {
return Ok(None);
}
let mut pkt = buf.split_to(buf.len());
// reserve enough space for the next packet
if self.0 {
buf.reserve(self.1 as usize + 4);
} else {
buf.reserve(self.1 as usize);
}
// if the packet information is enabled we have to ignore the first 4 bytes
if self.0 {
let _ = pkt.split_to(4);
}
let proto = infer_proto(pkt.as_ref());
Ok(Some(TunPacket(proto, TunPacketBuffer::BytesMut(pkt))))
}
}
impl Encoder<TunPacket> for TunPacketCodec {
type Error = io::Error;
fn encode(&mut self, item: TunPacket, dst: &mut BytesMut) -> Result<(), Self::Error> {
dst.reserve(item.get_bytes().len() + 4);
match item {
TunPacket(proto, bytes) if self.0 => {
// build the packet information header comprising of 2 u16
// fields: flags and protocol.
let mut buf = Vec::<u8>::with_capacity(4);
// flags is always 0
buf.write_u16::<NativeEndian>(0)?;
// write the protocol as network byte order
buf.write_u16::<NetworkEndian>(proto.into_pi_field()?)?;
dst.put_slice(&buf);
dst.put(Bytes::from(bytes));
}
TunPacket(_, bytes) => dst.put(Bytes::from(bytes)),
}
Ok(())
}
}

View File

@@ -119,7 +119,7 @@ impl PacketProtocol {
}
}
#[cfg(any(target_os = "macos", target_os = "ios"))]
#[cfg(any(target_os = "macos", target_os = "ios", target_os = "freebsd"))]
fn into_pi_field(self) -> Result<u16, io::Error> {
use nix::libc;
match self {
@@ -242,8 +242,9 @@ pub struct VirtualNic {
ifname: Option<String>,
ifcfg: Box<dyn IfConfiguerTrait + Send + Sync + 'static>,
}
#[cfg(target_os = "windows")]
pub fn checkreg(dev_name:&str) -> io::Result<()> {
pub fn checkreg(dev_name: &str) -> io::Result<()> {
use winreg::{enums::HKEY_LOCAL_MACHINE, enums::KEY_ALL_ACCESS, RegKey};
let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
let profiles_key = hklm.open_subkey_with_flags(
@@ -262,7 +263,9 @@ pub fn checkreg(dev_name:&str) -> io::Result<()> {
// check if ProfileName contains "et_" or matches the device name
match subkey.get_value::<String, _>("ProfileName") {
Ok(profile_name) => {
if profile_name.contains("et_") || (!dev_name.is_empty() && dev_name == profile_name) {
if profile_name.contains("et_")
|| (!dev_name.is_empty() && dev_name == profile_name)
{
keys_to_delete.push(subkey_name);
}
}
@@ -280,7 +283,9 @@ pub fn checkreg(dev_name:&str) -> io::Result<()> {
// check if Description contains "et_" or matches the device name
match subkey.get_value::<String, _>("Description") {
Ok(profile_name) => {
if profile_name.contains("et_") || (!dev_name.is_empty() && dev_name == profile_name) {
if profile_name.contains("et_")
|| (!dev_name.is_empty() && dev_name == profile_name)
{
keys_to_delete_unmanaged.push(subkey_name);
}
}
@@ -334,7 +339,7 @@ impl VirtualNic {
}
}
#[cfg(target_os = "macos")]
#[cfg(any(target_os = "macos"))]
config.platform_config(|config| {
// disable packet information so we can process the header ourselves; see the tun2 impl for more details
config.packet_information(false);
@@ -348,20 +353,26 @@ impl VirtualNic {
Ok(_) => tracing::trace!("delete successful!"),
Err(e) => tracing::error!("An error occurred: {}", e),
}
use rand::distributions::Distribution as _;
let c = crate::arch::windows::interface_count()?;
let mut rng = rand::thread_rng();
let s: String = rand::distributions::Alphanumeric
.sample_iter(&mut rng)
.take(4)
.map(char::from)
.collect::<String>()
.to_lowercase();
if !dev_name.is_empty() {
config.tun_name(format!("{}", dev_name));
} else {
config.tun_name(format!("et_{}_{}", c, s));
use rand::distributions::Distribution as _;
let c = crate::arch::windows::interface_count()?;
let mut rng = rand::thread_rng();
let s: String = rand::distributions::Alphanumeric
.sample_iter(&mut rng)
.take(4)
.map(char::from)
.collect::<String>()
.to_lowercase();
let random_dev_name = format!("et_{}_{}", c, s);
config.tun_name(random_dev_name.clone());
let mut flags = self.global_ctx.get_flags();
flags.dev_name = random_dev_name.clone();
self.global_ctx.set_flags(flags);
}
config.platform_config(|config| {
@@ -480,6 +491,39 @@ impl VirtualNic {
}
}
#[cfg(target_os = "windows")]
pub fn reg_change_catrgory_in_profile(dev_name: &str) -> io::Result<()> {
use winreg::{enums::HKEY_LOCAL_MACHINE, enums::KEY_ALL_ACCESS, RegKey};
let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
let profiles_key = hklm.open_subkey_with_flags(
"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\NetworkList\\Profiles",
KEY_ALL_ACCESS,
)?;
for subkey_name in profiles_key.enum_keys().filter_map(Result::ok) {
let subkey = profiles_key.open_subkey_with_flags(&subkey_name, KEY_ALL_ACCESS)?;
match subkey.get_value::<String, _>("ProfileName") {
Ok(profile_name) => {
if !dev_name.is_empty() && dev_name == profile_name
{
match subkey.set_value("Category", &1u32) {
Ok(_) => tracing::trace!("Successfully set Category in registry"),
Err(e) => tracing::error!("Failed to set Category in registry: {}", e),
}
}
}
Err(e) => {
tracing::error!(
"Failed to read ProfileName for subkey {}: {}",
subkey_name,
e
);
}
}
}
Ok(())
}
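// Note (editorial, not part of this diff): under the NetworkList\Profiles registry key,
// `Category` 1 marks the profile as Private (0 = Public, 2 = Domain), which typically lets
// Windows apply the less restrictive firewall profile to the TUN adapter.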
pub struct NicCtx {
global_ctx: ArcGlobalCtx,
peer_mgr: Weak<PeerManager>,
@@ -509,7 +553,8 @@ impl NicCtx {
nic.link_up().await?;
nic.remove_ip(None).await?;
nic.add_ip(ipv4_addr, 24).await?;
if cfg!(target_os = "macos") {
#[cfg(any(target_os = "macos", target_os = "freebsd"))]
{
nic.add_route(ipv4_addr, 24).await?;
}
Ok(())
@@ -553,6 +598,7 @@ impl NicCtx {
}
Self::do_forward_nic_to_peers_ipv4(ret.unwrap(), mgr.as_ref()).await;
}
panic!("nic stream closed");
});
Ok(())
@@ -573,6 +619,7 @@ impl NicCtx {
tracing::error!(?ret, "do_forward_tunnel_to_nic sink error");
}
}
panic!("peer packet receiver closed");
});
}
@@ -668,6 +715,13 @@ impl NicCtx {
let mut nic = self.nic.lock().await;
match nic.create_dev().await {
Ok(ret) => {
#[cfg(target_os = "windows")]
{
let dev_name = self.global_ctx.get_flags().dev_name;
let _ = reg_change_catrgory_in_profile(&dev_name);
}
self.global_ctx
.issue_event(GlobalCtxEvent::TunDeviceReady(nic.ifname().to_string()));
ret

View File

@@ -6,14 +6,16 @@ use std::{
use crate::{
common::{
config::{ConfigLoader, TomlConfigLoader},
constants::EASYTIER_VERSION,
global_ctx::GlobalCtxEvent,
stun::StunInfoCollectorTrait,
},
instance::instance::Instance,
peers::rpc_service::PeerManagerRpcService,
rpc::{
cli::{PeerInfo, Route, StunInfo},
peer::GetIpListResponse,
proto::{
cli::{PeerInfo, Route},
common::StunInfo,
peer_rpc::GetIpListResponse,
},
utils::{list_peer_route_pair, PeerRoutePair},
};
@@ -24,6 +26,8 @@ use tokio::task::JoinSet;
#[derive(Default, Clone, Debug, Serialize, Deserialize)]
pub struct MyNodeInfo {
pub virtual_ipv4: String,
pub hostname: String,
pub version: String,
pub ips: GetIpListResponse,
pub stun_info: StunInfo,
pub listeners: Vec<String>,
@@ -37,6 +41,7 @@ struct EasyTierData {
routes: Arc<RwLock<Vec<Route>>>,
peers: Arc<RwLock<Vec<PeerInfo>>>,
tun_fd: Arc<RwLock<Option<i32>>>,
tun_dev_name: Arc<RwLock<String>>,
}
pub struct EasyTierLauncher {
@@ -132,11 +137,17 @@ impl EasyTierLauncher {
let vpn_portal = instance.get_vpn_portal_inst();
tasks.spawn(async move {
loop {
// Update TUN Device Name
*data_c.tun_dev_name.write().unwrap() = global_ctx_c.get_flags().dev_name.clone();
let node_info = MyNodeInfo {
virtual_ipv4: global_ctx_c
.get_ipv4()
.map(|x| x.to_string())
.unwrap_or_default(),
hostname: global_ctx_c.get_hostname(),
version: EASYTIER_VERSION.to_string(),
ips: global_ctx_c.get_ip_collector().collect_ip_addrs().await,
stun_info: global_ctx_c.get_stun_info_collector().get_stun_info(),
listeners: global_ctx_c
@@ -229,6 +240,10 @@ impl EasyTierLauncher {
.load(std::sync::atomic::Ordering::Relaxed)
}
pub fn get_dev_name(&self) -> String {
self.data.tun_dev_name.read().unwrap().clone()
}
pub fn get_events(&self) -> Vec<(DateTime<Local>, GlobalCtxEvent)> {
let events = self.data.events.read().unwrap();
events.iter().cloned().collect()
@@ -261,6 +276,7 @@ impl Drop for EasyTierLauncher {
#[derive(Deserialize, Serialize, Debug)]
pub struct NetworkInstanceRunningInfo {
pub dev_name: String,
pub my_node_info: MyNodeInfo,
pub events: Vec<(DateTime<Local>, GlobalCtxEvent)>,
pub node_info: MyNodeInfo,
@@ -300,6 +316,7 @@ impl NetworkInstance {
let peer_route_pairs = list_peer_route_pair(peers.clone(), routes.clone());
Some(NetworkInstanceRunningInfo {
dev_name: launcher.get_dev_name(),
my_node_info: launcher.get_node_info(),
events: launcher.get_events(),
node_info: launcher.get_node_info(),

View File

@@ -6,10 +6,12 @@ mod gateway;
mod instance;
mod peer_center;
mod peers;
mod proto;
mod vpn_portal;
pub mod common;
pub mod launcher;
pub mod rpc;
pub mod tunnel;
pub mod utils;
pub const VERSION: &str = common::constants::EASYTIER_VERSION;

View File

@@ -1,7 +1,7 @@
use std::{
collections::BTreeSet,
sync::Arc,
time::{Duration, Instant, SystemTime},
time::{Duration, Instant},
};
use crossbeam::atomic::AtomicCell;
@@ -18,14 +18,17 @@ use crate::{
route_trait::{RouteCostCalculator, RouteCostCalculatorInterface},
rpc_service::PeerManagerRpcService,
},
rpc::{GetGlobalPeerMapRequest, GetGlobalPeerMapResponse},
proto::{
peer_rpc::{
GetGlobalPeerMapRequest, GetGlobalPeerMapResponse, GlobalPeerMap, PeerCenterRpc,
PeerCenterRpcClientFactory, PeerCenterRpcServer, PeerInfoForGlobalMap,
ReportPeersRequest, ReportPeersResponse,
},
rpc_types::{self, controller::BaseController},
},
};
use super::{
server::PeerCenterServer,
service::{GlobalPeerMap, PeerCenterService, PeerCenterServiceClient, PeerInfoForGlobalMap},
Digest, Error,
};
use super::{server::PeerCenterServer, Digest, Error};
struct PeerCenterBase {
peer_mgr: Arc<PeerManager>,
@@ -44,11 +47,14 @@ struct PeridicJobCtx<T> {
impl PeerCenterBase {
pub async fn init(&self) -> Result<(), Error> {
self.peer_mgr.get_peer_rpc_mgr().run_service(
SERVICE_ID,
PeerCenterServer::new(self.peer_mgr.my_peer_id()).serve(),
);
self.peer_mgr
.get_peer_rpc_mgr()
.rpc_server()
.registry()
.register(
PeerCenterRpcServer::new(PeerCenterServer::new(self.peer_mgr.my_peer_id())),
&self.peer_mgr.get_global_ctx().get_network_name(),
);
Ok(())
}
@@ -59,7 +65,10 @@ impl PeerCenterBase {
}
// find the peer with the smallest id.
let mut min_peer = peer_mgr.my_peer_id();
for peer in peers.iter() {
for peer in peers
.iter()
.filter(|r| r.feature_flag.map(|r| !r.is_public_server).unwrap_or(true))
{
let peer_id = peer.peer_id;
if peer_id < min_peer {
min_peer = peer_id;
@@ -70,11 +79,17 @@ impl PeerCenterBase {
async fn init_periodic_job<
T: Send + Sync + 'static + Clone,
Fut: Future<Output = Result<u32, tarpc::client::RpcError>> + Send + 'static,
Fut: Future<Output = Result<u32, rpc_types::error::Error>> + Send + 'static,
>(
&self,
job_ctx: T,
job_fn: (impl Fn(PeerCenterServiceClient, Arc<PeridicJobCtx<T>>) -> Fut + Send + Sync + 'static),
job_fn: (impl Fn(
Box<dyn PeerCenterRpc<Controller = BaseController> + Send>,
Arc<PeridicJobCtx<T>>,
) -> Fut
+ Send
+ Sync
+ 'static),
) -> () {
let my_peer_id = self.peer_mgr.my_peer_id();
let peer_mgr = self.peer_mgr.clone();
@@ -96,14 +111,14 @@ impl PeerCenterBase {
tracing::trace!(?center_peer, "run periodic job");
let rpc_mgr = peer_mgr.get_peer_rpc_mgr();
let _g = lock.lock().await;
let ret = rpc_mgr
.do_client_rpc_scoped(SERVICE_ID, center_peer, |c| async {
let client =
PeerCenterServiceClient::new(tarpc::client::Config::default(), c)
.spawn();
job_fn(client, ctx.clone()).await
})
.await;
let stub = rpc_mgr
.rpc_client()
.scoped_client::<PeerCenterRpcClientFactory<BaseController>>(
my_peer_id,
center_peer,
peer_mgr.get_global_ctx().get_network_name(),
);
let ret = job_fn(stub, ctx.clone()).await;
drop(_g);
let Ok(sleep_time_ms) = ret else {
@@ -130,25 +145,34 @@ impl PeerCenterBase {
}
}
#[derive(Clone)]
pub struct PeerCenterInstanceService {
global_peer_map: Arc<RwLock<GlobalPeerMap>>,
global_peer_map_digest: Arc<AtomicCell<Digest>>,
}
#[tonic::async_trait]
impl crate::rpc::cli::peer_center_rpc_server::PeerCenterRpc for PeerCenterInstanceService {
#[async_trait::async_trait]
impl PeerCenterRpc for PeerCenterInstanceService {
type Controller = BaseController;
async fn get_global_peer_map(
&self,
_request: tonic::Request<GetGlobalPeerMapRequest>,
) -> Result<tonic::Response<GetGlobalPeerMapResponse>, tonic::Status> {
let global_peer_map = self.global_peer_map.read().unwrap().clone();
Ok(tonic::Response::new(GetGlobalPeerMapResponse {
global_peer_map: global_peer_map
.map
.into_iter()
.map(|(k, v)| (k, v))
.collect(),
}))
_: BaseController,
_: GetGlobalPeerMapRequest,
) -> Result<GetGlobalPeerMapResponse, rpc_types::error::Error> {
let global_peer_map = self.global_peer_map.read().unwrap();
Ok(GetGlobalPeerMapResponse {
global_peer_map: global_peer_map.map.clone(),
digest: Some(self.global_peer_map_digest.load()),
})
}
async fn report_peers(
&self,
_: BaseController,
_req: ReportPeersRequest,
) -> Result<ReportPeersResponse, rpc_types::error::Error> {
Err(anyhow::anyhow!("not implemented").into())
}
}
@@ -166,7 +190,7 @@ impl PeerCenterInstance {
PeerCenterInstance {
peer_mgr: peer_mgr.clone(),
client: Arc::new(PeerCenterBase::new(peer_mgr.clone())),
global_peer_map: Arc::new(RwLock::new(GlobalPeerMap::new())),
global_peer_map: Arc::new(RwLock::new(GlobalPeerMap::default())),
global_peer_map_digest: Arc::new(AtomicCell::new(Digest::default())),
global_peer_map_update_time: Arc::new(AtomicCell::new(Instant::now())),
}
@@ -193,35 +217,38 @@ impl PeerCenterInstance {
self.client
.init_periodic_job(ctx, |client, ctx| async move {
let mut rpc_ctx = tarpc::context::current();
rpc_ctx.deadline = SystemTime::now() + Duration::from_secs(3);
if ctx
.job_ctx
.global_peer_map_update_time
.load()
.elapsed()
.as_secs()
> 60
> 120
{
ctx.job_ctx.global_peer_map_digest.store(Digest::default());
}
let ret = client
.get_global_peer_map(rpc_ctx, ctx.job_ctx.global_peer_map_digest.load())
.await?;
.get_global_peer_map(
BaseController {},
GetGlobalPeerMapRequest {
digest: ctx.job_ctx.global_peer_map_digest.load(),
},
)
.await;
let Ok(resp) = ret else {
tracing::error!(
"get global info from center server got error result: {:?}",
ret
);
return Ok(1000);
return Ok(10000);
};
let Some(resp) = resp else {
return Ok(5000);
};
if resp == GetGlobalPeerMapResponse::default() {
// digest match, no need to update
return Ok(15000);
}
tracing::info!(
"get global info from center server: {:?}, digest: {:?}",
@@ -229,13 +256,17 @@ impl PeerCenterInstance {
resp.digest
);
*ctx.job_ctx.global_peer_map.write().unwrap() = resp.global_peer_map;
ctx.job_ctx.global_peer_map_digest.store(resp.digest);
*ctx.job_ctx.global_peer_map.write().unwrap() = GlobalPeerMap {
map: resp.global_peer_map,
};
ctx.job_ctx
.global_peer_map_digest
.store(resp.digest.unwrap_or_default());
ctx.job_ctx
.global_peer_map_update_time
.store(Instant::now());
Ok(5000)
Ok(15000)
})
.await;
}
@@ -274,12 +305,15 @@ impl PeerCenterInstance {
return Ok(5000);
}
let mut rpc_ctx = tarpc::context::current();
rpc_ctx.deadline = SystemTime::now() + Duration::from_secs(3);
let ret = client
.report_peers(rpc_ctx, my_node_id.clone(), peers)
.await?;
.report_peers(
BaseController {},
ReportPeersRequest {
my_peer_id: my_node_id,
peer_infos: Some(peers),
},
)
.await;
if ret.is_ok() {
ctx.job_ctx.last_center_peer.store(ctx.center_peer.load());
@@ -311,15 +345,22 @@ impl PeerCenterInstance {
global_peer_map_update_time: Arc<AtomicCell<Instant>>,
}
impl RouteCostCalculatorInterface for RouteCostCalculatorImpl {
fn calculate_cost(&self, src: PeerId, dst: PeerId) -> i32 {
let ret = self
.global_peer_map_clone
impl RouteCostCalculatorImpl {
fn directed_cost(&self, src: PeerId, dst: PeerId) -> Option<i32> {
self.global_peer_map_clone
.map
.get(&src)
.and_then(|src_peer_info| src_peer_info.direct_peers.get(&dst))
.and_then(|info| Some(info.latency_ms));
ret.unwrap_or(80)
.and_then(|info| Some(info.latency_ms))
}
}
impl RouteCostCalculatorInterface for RouteCostCalculatorImpl {
fn calculate_cost(&self, src: PeerId, dst: PeerId) -> i32 {
if let Some(cost) = self.directed_cost(src, dst) {
return cost;
}
self.directed_cost(dst, src).unwrap_or(100)
}
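// Note (editorial, not part of this diff): the cost lookup now tries src->dst first and falls
// back to dst->src, presumably because the global map may only record a link in one
// direction; the default cost when neither direction is known was raised from 80 to 100.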
fn begin_update(&mut self) {
@@ -339,7 +380,7 @@ impl PeerCenterInstance {
Box::new(RouteCostCalculatorImpl {
global_peer_map: self.global_peer_map.clone(),
global_peer_map_clone: GlobalPeerMap::new(),
global_peer_map_clone: GlobalPeerMap::default(),
last_update_time: AtomicCell::new(
self.global_peer_map_update_time.load() - Duration::from_secs(1),
),
@@ -395,7 +436,7 @@ mod tests {
false
}
},
Duration::from_secs(10),
Duration::from_secs(20),
)
.await;
@@ -404,7 +445,7 @@ mod tests {
let rpc_service = pc.get_rpc_service();
wait_for_condition(
|| async { rpc_service.global_peer_map.read().unwrap().map.len() == 3 },
Duration::from_secs(10),
Duration::from_secs(20),
)
.await;

View File

@@ -5,9 +5,13 @@
// the peer center is not guaranteed to be stable and can change when peers enter or leave.
// it's used to reduce the cost of exchanging info between peers.
use std::collections::BTreeMap;
use crate::proto::cli::PeerInfo;
use crate::proto::peer_rpc::{DirectConnectedPeerInfo, PeerInfoForGlobalMap};
pub mod instance;
mod server;
mod service;
#[derive(thiserror::Error, Debug, serde::Deserialize, serde::Serialize)]
pub enum Error {
@@ -18,3 +22,29 @@ pub enum Error {
}
pub type Digest = u64;
impl From<Vec<PeerInfo>> for PeerInfoForGlobalMap {
fn from(peers: Vec<PeerInfo>) -> Self {
let mut peer_map = BTreeMap::new();
for peer in peers {
let Some(min_lat) = peer
.conns
.iter()
.map(|conn| conn.stats.as_ref().unwrap().latency_us)
.min()
else {
continue;
};
let dp_info = DirectConnectedPeerInfo {
latency_ms: std::cmp::max(1, (min_lat as u32 / 1000) as i32),
};
// BTreeMap keeps peer ids ordered, so the hash result is stable
peer_map.insert(peer.peer_id, dp_info);
}
PeerInfoForGlobalMap {
direct_peers: peer_map,
}
}
}
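// Note (editorial, not part of this diff): the conversion keeps the best (minimum) latency
// across a peer's connections, converted to whole milliseconds and clamped to at least 1 so a
// direct link never reports zero cost; e.g. 1_500 us -> 1 ms and 250 us -> 1 ms.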

View File

@@ -7,15 +7,22 @@ use std::{
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
use once_cell::sync::Lazy;
use tokio::{task::JoinSet};
use tokio::task::JoinSet;
use crate::{common::PeerId, rpc::DirectConnectedPeerInfo};
use super::{
service::{GetGlobalPeerMapResponse, GlobalPeerMap, PeerCenterService, PeerInfoForGlobalMap},
Digest, Error,
use crate::{
common::PeerId,
proto::{
peer_rpc::{
DirectConnectedPeerInfo, GetGlobalPeerMapRequest, GetGlobalPeerMapResponse,
GlobalPeerMap, PeerCenterRpc, PeerInfoForGlobalMap, ReportPeersRequest,
ReportPeersResponse,
},
rpc_types::{self, controller::BaseController},
},
};
use super::Digest;
#[derive(Debug, Clone, PartialEq, PartialOrd, Ord, Eq, Hash)]
pub(crate) struct SrcDstPeerPair {
src: PeerId,
@@ -95,15 +102,19 @@ impl PeerCenterServer {
}
}
#[tarpc::server]
impl PeerCenterService for PeerCenterServer {
#[async_trait::async_trait]
impl PeerCenterRpc for PeerCenterServer {
type Controller = BaseController;
#[tracing::instrument()]
async fn report_peers(
self,
_: tarpc::context::Context,
my_peer_id: PeerId,
peers: PeerInfoForGlobalMap,
) -> Result<(), Error> {
&self,
_: BaseController,
req: ReportPeersRequest,
) -> Result<ReportPeersResponse, rpc_types::error::Error> {
let my_peer_id = req.my_peer_id;
let peers = req.peer_infos.unwrap_or_default();
tracing::debug!("receive report_peers");
let data = get_global_data(self.my_node_id);
@@ -125,20 +136,23 @@ impl PeerCenterService for PeerCenterServer {
data.digest
.store(PeerCenterServer::calc_global_digest(self.my_node_id));
Ok(())
Ok(ReportPeersResponse::default())
}
#[tracing::instrument()]
async fn get_global_peer_map(
self,
_: tarpc::context::Context,
digest: Digest,
) -> Result<Option<GetGlobalPeerMapResponse>, Error> {
&self,
_: BaseController,
req: GetGlobalPeerMapRequest,
) -> Result<GetGlobalPeerMapResponse, rpc_types::error::Error> {
let digest = req.digest;
let data = get_global_data(self.my_node_id);
if digest == data.digest.load() && digest != 0 {
return Ok(None);
return Ok(GetGlobalPeerMapResponse::default());
}
let mut global_peer_map = GlobalPeerMap::new();
let mut global_peer_map = GlobalPeerMap::default();
for item in data.global_peer_map.iter() {
let (pair, entry) = item.pair();
global_peer_map
@@ -151,9 +165,9 @@ impl PeerCenterService for PeerCenterServer {
.insert(pair.dst, entry.info.clone());
}
Ok(Some(GetGlobalPeerMapResponse {
global_peer_map,
digest: data.digest.load(),
}))
Ok(GetGlobalPeerMapResponse {
global_peer_map: global_peer_map.map,
digest: Some(data.digest.load()),
})
}
}

View File

@@ -1,64 +0,0 @@
use std::collections::BTreeMap;
use crate::{common::PeerId, rpc::DirectConnectedPeerInfo};
use super::{Digest, Error};
use crate::rpc::PeerInfo;
pub type PeerInfoForGlobalMap = crate::rpc::cli::PeerInfoForGlobalMap;
impl From<Vec<PeerInfo>> for PeerInfoForGlobalMap {
fn from(peers: Vec<PeerInfo>) -> Self {
let mut peer_map = BTreeMap::new();
for peer in peers {
let Some(min_lat) = peer
.conns
.iter()
.map(|conn| conn.stats.as_ref().unwrap().latency_us)
.min()
else {
continue;
};
let dp_info = DirectConnectedPeerInfo {
latency_ms: std::cmp::max(1, (min_lat as u32 / 1000) as i32),
};
// sort conn info so hash result is stable
peer_map.insert(peer.peer_id, dp_info);
}
PeerInfoForGlobalMap {
direct_peers: peer_map,
}
}
}
// a global peer topology map, peers can use it to find optimal path to other peers
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct GlobalPeerMap {
pub map: BTreeMap<PeerId, PeerInfoForGlobalMap>,
}
impl GlobalPeerMap {
pub fn new() -> Self {
GlobalPeerMap {
map: BTreeMap::new(),
}
}
}
#[derive(Debug, Clone, serde::Deserialize, serde::Serialize)]
pub struct GetGlobalPeerMapResponse {
pub global_peer_map: GlobalPeerMap,
pub digest: Digest,
}
#[tarpc::service]
pub trait PeerCenterService {
// report center server which peer is directly connected to me
// digest is a hash of current peer map, if digest not match, we need to transfer the whole map
async fn report_peers(my_peer_id: PeerId, peers: PeerInfoForGlobalMap) -> Result<(), Error>;
async fn get_global_peer_map(digest: Digest)
-> Result<Option<GetGlobalPeerMapResponse>, Error>;
}

View File

@@ -1,27 +1,11 @@
use std::{
sync::Arc,
time::{Duration, SystemTime},
};
use dashmap::DashMap;
use tokio::{sync::Mutex, task::JoinSet};
use std::sync::{Arc, Mutex};
use crate::{
common::{
error::Error,
global_ctx::{ArcGlobalCtx, NetworkIdentity},
PeerId,
},
common::{error::Error, global_ctx::ArcGlobalCtx, scoped_task::ScopedTask, PeerId},
tunnel::packet_def::ZCPacket,
};
use super::{
foreign_network_manager::{ForeignNetworkServiceClient, FOREIGN_NETWORK_SERVICE_ID},
peer_conn::PeerConn,
peer_map::PeerMap,
peer_rpc::PeerRpcManager,
PacketRecvChan,
};
use super::{peer_conn::PeerConn, peer_map::PeerMap, peer_rpc::PeerRpcManager, PacketRecvChan};
pub struct ForeignNetworkClient {
global_ctx: ArcGlobalCtx,
@@ -29,9 +13,7 @@ pub struct ForeignNetworkClient {
my_peer_id: PeerId,
peer_map: Arc<PeerMap>,
next_hop: Arc<DashMap<PeerId, PeerId>>,
tasks: Mutex<JoinSet<()>>,
task: Mutex<Option<ScopedTask<()>>>,
}
impl ForeignNetworkClient {
@@ -46,17 +28,13 @@ impl ForeignNetworkClient {
global_ctx.clone(),
my_peer_id,
));
let next_hop = Arc::new(DashMap::new());
Self {
global_ctx,
peer_rpc,
my_peer_id,
peer_map,
next_hop,
tasks: Mutex::new(JoinSet::new()),
task: Mutex::new(None),
}
}
@@ -65,91 +43,19 @@ impl ForeignNetworkClient {
self.peer_map.add_new_peer_conn(peer_conn).await
}
async fn collect_next_hop_in_foreign_network_task(
network_identity: NetworkIdentity,
peer_map: Arc<PeerMap>,
peer_rpc: Arc<PeerRpcManager>,
next_hop: Arc<DashMap<PeerId, PeerId>>,
) {
loop {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
peer_map.clean_peer_without_conn().await;
let new_next_hop = Self::collect_next_hop_in_foreign_network(
network_identity.clone(),
peer_map.clone(),
peer_rpc.clone(),
)
.await;
next_hop.clear();
for (k, v) in new_next_hop.into_iter() {
next_hop.insert(k, v);
}
}
}
async fn collect_next_hop_in_foreign_network(
network_identity: NetworkIdentity,
peer_map: Arc<PeerMap>,
peer_rpc: Arc<PeerRpcManager>,
) -> DashMap<PeerId, PeerId> {
let peers = peer_map.list_peers().await;
let mut tasks = JoinSet::new();
if !peers.is_empty() {
tracing::warn!(?peers, my_peer_id = ?peer_rpc.my_peer_id(), "collect next hop in foreign network");
}
for peer in peers {
let peer_rpc = peer_rpc.clone();
let network_identity = network_identity.clone();
tasks.spawn(async move {
let Ok(Some(peers_in_foreign)) = peer_rpc
.do_client_rpc_scoped(FOREIGN_NETWORK_SERVICE_ID, peer, |c| async {
let c =
ForeignNetworkServiceClient::new(tarpc::client::Config::default(), c)
.spawn();
let mut rpc_ctx = tarpc::context::current();
rpc_ctx.deadline = SystemTime::now() + Duration::from_secs(2);
let ret = c.list_network_peers(rpc_ctx, network_identity).await;
ret
})
.await
else {
return (peer, vec![]);
};
(peer, peers_in_foreign)
});
}
let new_next_hop = DashMap::new();
while let Some(join_ret) = tasks.join_next().await {
let Ok((gateway, peer_ids)) = join_ret else {
tracing::error!(?join_ret, "collect next hop in foreign network failed");
continue;
};
for ret in peer_ids {
new_next_hop.insert(ret, gateway);
}
}
new_next_hop
}
pub fn has_next_hop(&self, peer_id: PeerId) -> bool {
self.get_next_hop(peer_id).is_some()
}
pub fn is_peer_public_node(&self, peer_id: &PeerId) -> bool {
self.peer_map.has_peer(*peer_id)
pub async fn list_public_peers(&self) -> Vec<PeerId> {
self.peer_map.list_peers().await
}
pub fn get_next_hop(&self, peer_id: PeerId) -> Option<PeerId> {
if self.peer_map.has_peer(peer_id) {
return Some(peer_id.clone());
}
self.next_hop.get(&peer_id).map(|v| v.clone())
None
}
pub async fn send_msg(&self, msg: ZCPacket, peer_id: PeerId) -> Result<(), Error> {
@@ -162,40 +68,32 @@ impl ForeignNetworkClient {
?next_hop,
"foreign network client send msg failed"
);
} else {
tracing::info!(
?peer_id,
?next_hop,
"foreign network client send msg success"
);
}
return ret;
}
Err(Error::RouteError(Some("no next hop".to_string())))
}
pub fn list_foreign_peers(&self) -> Vec<PeerId> {
let mut peers = vec![];
for item in self.next_hop.iter() {
if item.key() != &self.my_peer_id {
peers.push(item.key().clone());
}
}
peers
}
pub async fn run(&self) {
self.tasks
.lock()
.await
.spawn(Self::collect_next_hop_in_foreign_network_task(
self.global_ctx.get_network_identity(),
self.peer_map.clone(),
self.peer_rpc.clone(),
self.next_hop.clone(),
));
}
pub fn get_next_hop_table(&self) -> DashMap<PeerId, PeerId> {
let next_hop = DashMap::new();
for item in self.next_hop.iter() {
next_hop.insert(item.key().clone(), item.value().clone());
}
next_hop
let peer_map = Arc::downgrade(&self.peer_map);
*self.task.lock().unwrap() = Some(
tokio::spawn(async move {
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
let Some(peer_map) = peer_map.upgrade() else {
break;
};
peer_map.clean_peer_without_conn().await;
}
})
.into(),
);
}
pub fn get_peer_map(&self) -> Arc<PeerMap> {

File diff suppressed because it is too large

View File

@@ -5,8 +5,8 @@ pub mod peer_conn_ping;
pub mod peer_manager;
pub mod peer_map;
pub mod peer_ospf_route;
pub mod peer_rip_route;
pub mod peer_rpc;
pub mod peer_rpc_service;
pub mod route_trait;
pub mod rpc_service;

View File

@@ -11,7 +11,7 @@ use super::{
peer_conn::{PeerConn, PeerConnId},
PacketRecvChan,
};
use crate::rpc::PeerConnInfo;
use crate::proto::cli::PeerConnInfo;
use crate::{
common::{
error::Error,

View File

@@ -8,7 +8,7 @@ use std::{
},
};
use futures::{SinkExt, StreamExt, TryFutureExt};
use futures::{StreamExt, TryFutureExt};
use prost::Message;
@@ -18,23 +18,26 @@ use tokio::{
time::{timeout, Duration},
};
use tokio_util::sync::PollSender;
use tracing::Instrument;
use zerocopy::AsBytes;
use crate::{
common::{
config::{NetworkIdentity, NetworkSecretDigest},
defer,
error::Error,
global_ctx::ArcGlobalCtx,
PeerId,
},
rpc::{HandshakeRequest, PeerConnInfo, PeerConnStats, TunnelInfo},
tunnel::packet_def::PacketType,
proto::{
cli::{PeerConnInfo, PeerConnStats},
common::TunnelInfo,
peer_rpc::HandshakeRequest,
},
tunnel::{
filter::{StatsRecorderTunnelFilter, TunnelFilter, TunnelWithFilter},
mpsc::{MpscTunnel, MpscTunnelSender},
packet_def::ZCPacket,
packet_def::{PacketType, ZCPacket},
stats::{Throughput, WindowLatency},
Tunnel, TunnelError, ZCPacketStream,
},
@@ -61,6 +64,7 @@ pub struct PeerConn {
tasks: JoinSet<Result<(), TunnelError>>,
info: Option<HandshakeRequest>,
is_client: Option<bool>,
close_event_sender: Option<mpsc::Sender<PeerConnId>>,
@@ -99,7 +103,9 @@ impl PeerConn {
my_peer_id,
global_ctx,
tunnel: Arc::new(Mutex::new(Box::new(mpsc_tunnel))),
tunnel: Arc::new(Mutex::new(Box::new(defer::Defer::new(move || {
mpsc_tunnel.close()
})))),
sink,
recv: Arc::new(Mutex::new(Some(recv))),
tunnel_info,
@@ -107,6 +113,7 @@ impl PeerConn {
tasks: JoinSet::new(),
info: None,
is_client: None,
close_event_sender: None,
ctrl_resp_sender: ctrl_sender,
@@ -215,6 +222,7 @@ impl PeerConn {
let rsp = self.wait_handshake_loop().await?;
tracing::info!("handshake request: {:?}", rsp);
self.info = Some(rsp);
self.is_client = Some(false);
self.send_handshake().await?;
Ok(())
}
@@ -226,6 +234,7 @@ impl PeerConn {
let rsp = self.wait_handshake_loop().await?;
tracing::info!("handshake response: {:?}", rsp);
self.info = Some(rsp);
self.is_client = Some(true);
Ok(())
}
@@ -236,7 +245,7 @@ impl PeerConn {
pub async fn start_recv_loop(&mut self, packet_recv_chan: PacketRecvChan) {
let mut stream = self.recv.lock().await.take().unwrap();
let sink = self.sink.clone();
let mut sender = PollSender::new(packet_recv_chan.clone());
let sender = packet_recv_chan.clone();
let close_event_sender = self.close_event_sender.clone().unwrap();
let conn_id = self.conn_id;
let ctrl_sender = self.ctrl_resp_sender.clone();
@@ -273,7 +282,9 @@ impl PeerConn {
tracing::error!(?e, "peer conn send ctrl resp error");
}
} else {
if sender.send(zc_packet).await.is_err() {
if zc_packet.is_lossy() {
let _ = sender.try_send(zc_packet);
} else if sender.send(zc_packet).await.is_err() {
break;
}
}
@@ -302,6 +313,7 @@ impl PeerConn {
self.ctrl_resp_sender.clone(),
self.latency_stats.clone(),
self.loss_rate_stats.clone(),
self.throughput.clone(),
);
let close_event_sender = self.close_event_sender.clone().unwrap();
@@ -359,14 +371,17 @@ impl PeerConn {
}
pub fn get_conn_info(&self) -> PeerConnInfo {
let info = self.info.as_ref().unwrap();
PeerConnInfo {
conn_id: self.conn_id.to_string(),
my_peer_id: self.my_peer_id,
peer_id: self.get_peer_id(),
features: self.info.as_ref().unwrap().features.clone(),
features: info.features.clone(),
tunnel: self.tunnel_info.clone(),
stats: Some(self.get_stats()),
loss_rate: (f64::from(self.loss_rate_stats.load(Ordering::Relaxed)) / 100.0) as f32,
is_client: self.is_client.unwrap_or_default(),
network_name: info.network_name.clone(),
}
}
}
@@ -378,6 +393,7 @@ mod tests {
use super::*;
use crate::common::global_ctx::tests::get_mock_global_ctx;
use crate::common::new_peer_id;
use crate::common::scoped_task::ScopedTask;
use crate::tunnel::filter::tests::DropSendTunnelFilter;
use crate::tunnel::filter::PacketRecorderTunnelFilter;
use crate::tunnel::ring::create_ring_tunnel_pair;
@@ -419,13 +435,25 @@ mod tests {
assert_eq!(c_peer.get_network_identity(), NetworkIdentity::default());
}
async fn peer_conn_pingpong_test_common(drop_start: u32, drop_end: u32, conn_closed: bool) {
async fn peer_conn_pingpong_test_common(
drop_start: u32,
drop_end: u32,
conn_closed: bool,
drop_both: bool,
) {
let (c, s) = create_ring_tunnel_pair();
// dropping 1-3 packets should not affect pingpong
let c_recorder = Arc::new(DropSendTunnelFilter::new(drop_start, drop_end));
let c = TunnelWithFilter::new(c, c_recorder.clone());
let s = if drop_both {
let s_recorder = Arc::new(DropSendTunnelFilter::new(drop_start, drop_end));
Box::new(TunnelWithFilter::new(s, s_recorder.clone()))
} else {
s
};
let c_peer_id = new_peer_id();
let s_peer_id = new_peer_id();
@@ -452,7 +480,15 @@ mod tests {
.start_recv_loop(tokio::sync::mpsc::channel(200).0)
.await;
// wait 5s, conn should not be disconnected
let throughput = c_peer.throughput.clone();
let _t = ScopedTask::from(tokio::spawn(async move {
// if not dropping on both sides, mock some rx traffic for the client peer to exercise the pinger
while !drop_both {
tokio::time::sleep(Duration::from_millis(100)).await;
throughput.record_rx_bytes(3);
}
}));
tokio::time::sleep(Duration::from_secs(15)).await;
if conn_closed {
@@ -463,9 +499,18 @@ mod tests {
}
#[tokio::test]
async fn peer_conn_pingpong_timeout() {
peer_conn_pingpong_test_common(3, 5, false).await;
peer_conn_pingpong_test_common(5, 12, true).await;
async fn peer_conn_pingpong_timeout_not_close() {
peer_conn_pingpong_test_common(3, 5, false, false).await;
}
#[tokio::test]
async fn peer_conn_pingpong_oneside_timeout() {
peer_conn_pingpong_test_common(4, 12, false, false).await;
}
#[tokio::test]
async fn peer_conn_pingpong_bothside_timeout() {
peer_conn_pingpong_test_common(4, 12, true, true).await;
}
#[tokio::test]


@@ -6,18 +6,98 @@ use std::{
time::Duration,
};
use tokio::{sync::broadcast, task::JoinSet, time::timeout};
use rand::{thread_rng, Rng};
use tokio::{
sync::broadcast,
task::JoinSet,
time::{timeout, Interval},
};
use crate::{
common::{error::Error, PeerId},
tunnel::{
mpsc::MpscTunnelSender,
packet_def::{PacketType, ZCPacket},
stats::WindowLatency,
stats::{Throughput, WindowLatency},
TunnelError,
},
};
struct PingIntervalController {
throughput: Arc<Throughput>,
loss_rate_20: Arc<WindowLatency>,
interval: Interval,
logic_time: u64,
last_send_logic_time: u64,
backoff_idx: i32,
max_backoff_idx: i32,
last_throughput: Throughput,
}
impl PingIntervalController {
fn new(throughput: Arc<Throughput>, loss_rate_20: Arc<WindowLatency>) -> Self {
let last_throughput = *throughput;
Self {
throughput,
loss_rate_20,
interval: tokio::time::interval(Duration::from_secs(1)),
logic_time: 0,
last_send_logic_time: 0,
backoff_idx: 0,
max_backoff_idx: 5,
last_throughput,
}
}
async fn tick(&mut self) {
self.interval.tick().await;
self.logic_time += 1;
}
fn tx_increase(&self) -> bool {
self.throughput.tx_packets() > self.last_throughput.tx_packets()
}
fn rx_increase(&self) -> bool {
self.throughput.rx_packets() > self.last_throughput.rx_packets()
}
fn should_send_ping(&mut self) -> bool {
if self.loss_rate_20.get_latency_us::<f64>() > 0.0 {
self.backoff_idx = 0;
} else if self.tx_increase()
&& !self.rx_increase()
&& self.logic_time - self.last_send_logic_time > 2
{
// if tx increases but rx does not, we should pingpong more frequently
self.backoff_idx = 0;
}
self.last_throughput = *self.throughput;
if (self.logic_time - self.last_send_logic_time) < (1 << self.backoff_idx) {
return false;
}
self.backoff_idx = std::cmp::min(self.backoff_idx + 1, self.max_backoff_idx);
// add some jitter so the two peers do not pingpong at exactly the same time
if self.backoff_idx > self.max_backoff_idx - 2 && thread_rng().gen_bool(0.2) {
self.backoff_idx -= 1;
}
self.last_send_logic_time = self.logic_time;
return true;
}
}
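As a rough standalone sketch (not from this patch) of the resulting ping schedule: with max_backoff_idx = 5 the gap between pings grows 1, 2, 4, 8, 16, 32 ticks and then stays at 32; the jitter near the maximum and the resets on loss or one-way traffic are omitted here.

// backoff_gaps(5, 100) returns [1, 2, 4, 8, 16, 32, 32]
fn backoff_gaps(max_backoff_idx: u32, ticks: u64) -> Vec<u64> {
    let (mut backoff_idx, mut last_send, mut gaps) = (0u32, 0u64, Vec::new());
    for t in 1..=ticks {
        if t - last_send >= (1u64 << backoff_idx) {
            gaps.push(t - last_send);
            backoff_idx = (backoff_idx + 1).min(max_backoff_idx);
            last_send = t;
        }
    }
    gaps
}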
pub struct PeerConnPinger {
my_peer_id: PeerId,
peer_id: PeerId,
@@ -25,6 +105,7 @@ pub struct PeerConnPinger {
ctrl_sender: broadcast::Sender<ZCPacket>,
latency_stats: Arc<WindowLatency>,
loss_rate_stats: Arc<AtomicU32>,
throughput_stats: Arc<Throughput>,
tasks: JoinSet<Result<(), TunnelError>>,
}
@@ -45,6 +126,7 @@ impl PeerConnPinger {
ctrl_sender: broadcast::Sender<ZCPacket>,
latency_stats: Arc<WindowLatency>,
loss_rate_stats: Arc<AtomicU32>,
throughput_stats: Arc<Throughput>,
) -> Self {
Self {
my_peer_id,
@@ -54,6 +136,7 @@ impl PeerConnPinger {
latency_stats,
ctrl_sender,
loss_rate_stats,
throughput_stats,
}
}
@@ -125,17 +208,23 @@ impl PeerConnPinger {
let (ping_res_sender, mut ping_res_receiver) = tokio::sync::mpsc::channel(100);
// one with 1% precision
let loss_rate_stats_1 = WindowLatency::new(100);
// one with 20% precision, so we can fast fail this conn.
let loss_rate_stats_20 = Arc::new(WindowLatency::new(5));
let stopped = Arc::new(AtomicU32::new(0));
// spawn pingpong tasks, paced by the interval controller
let mut pingpong_tasks = JoinSet::new();
let ctrl_resp_sender = self.ctrl_sender.clone();
let stopped_clone = stopped.clone();
let mut controller =
PingIntervalController::new(self.throughput_stats.clone(), loss_rate_stats_20.clone());
self.tasks.spawn(async move {
let mut req_seq = 0;
loop {
let receiver = ctrl_resp_sender.subscribe();
let ping_res_sender = ping_res_sender.clone();
controller.tick().await;
if stopped_clone.load(Ordering::Relaxed) != 0 {
return Ok(());
@@ -145,7 +234,13 @@ impl PeerConnPinger {
pingpong_tasks.join_next().await;
}
if !controller.should_send_ping() {
continue;
}
let mut sink = sink.clone();
let receiver = ctrl_resp_sender.subscribe();
let ping_res_sender = ping_res_sender.clone();
pingpong_tasks.spawn(async move {
let mut receiver = receiver.resubscribe();
let pingpong_once_ret = Self::do_pingpong_once(
@@ -163,16 +258,12 @@ impl PeerConnPinger {
});
req_seq = req_seq.wrapping_add(1);
tokio::time::sleep(Duration::from_millis(1000)).await;
}
});
// one with 1% precision
let loss_rate_stats_1 = WindowLatency::new(100);
// one with 20% precision, so we can fast fail this conn.
let loss_rate_stats_20 = WindowLatency::new(5);
let mut counter: u64 = 0;
let throughput = self.throughput_stats.clone();
let mut last_rx_packets = throughput.rx_packets();
while let Some(ret) = ping_res_receiver.recv().await {
counter += 1;
@@ -199,16 +290,29 @@ impl PeerConnPinger {
);
if (counter > 5 && loss_rate_20 > 0.74) || (counter > 150 && loss_rate_1 > 0.20) {
tracing::warn!(
?ret,
?self,
?loss_rate_1,
?loss_rate_20,
"pingpong loss rate too high, closing"
);
break;
let current_rx_packets = throughput.rx_packets();
let need_close = if last_rx_packets != current_rx_packets {
// if we received some packets from the peer, relax the close condition
counter > 50 && loss_rate_1 > 0.5
} else {
true
};
if need_close {
tracing::warn!(
?ret,
?self,
?loss_rate_1,
?loss_rate_20,
?last_rx_packets,
?current_rx_packets,
"pingpong loss rate too high, closing"
);
break;
}
}
last_rx_packets = throughput.rx_packets();
self.loss_rate_stats
.store((loss_rate_1 * 100.0) as u32, Ordering::Relaxed);
}
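A standalone restatement of the close condition above (using the window sizes from this hunk, not code from the patch): the 5-sample window has 20% granularity, so 0.74 is only exceeded when at least 4 of the last 5 pings were lost, while the 100-sample window needs roughly 20 losses out of the last 100 pings before 0.20 is exceeded.

fn should_fast_fail(counter: u64, lost_of_last_5: u32, lost_of_last_100: u32) -> bool {
    let loss_rate_20 = lost_of_last_5 as f64 / 5.0; // 20% granularity, reacts fast
    let loss_rate_1 = lost_of_last_100 as f64 / 100.0; // 1% granularity, reacts slowly
    (counter > 5 && loss_rate_20 > 0.74) || (counter > 150 && loss_rate_1 > 0.20)
}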


@@ -2,12 +2,13 @@ use std::{
fmt::Debug,
net::Ipv4Addr,
sync::{Arc, Weak},
time::SystemTime,
};
use anyhow::Context;
use async_trait::async_trait;
use futures::StreamExt;
use dashmap::DashMap;
use tokio::{
sync::{
@@ -16,17 +17,28 @@ use tokio::{
},
task::JoinSet,
};
use tokio_stream::wrappers::ReceiverStream;
use tokio_util::bytes::Bytes;
use crate::{
common::{error::Error, global_ctx::ArcGlobalCtx, PeerId},
common::{
constants::EASYTIER_VERSION,
error::Error,
global_ctx::{ArcGlobalCtx, NetworkIdentity},
stun::StunInfoCollectorTrait,
PeerId,
},
peers::{
peer_conn::PeerConn,
peer_rpc::PeerRpcManagerTransport,
route_trait::{NextHopPolicy, RouteInterface},
route_trait::{ForeignNetworkRouteInfoMap, NextHopPolicy, RouteInterface},
PeerPacketFilter,
},
proto::{
cli::{
self, list_global_foreign_network_response::OneForeignNetwork,
ListGlobalForeignNetworkResponse,
},
peer_rpc::{ForeignNetworkRouteInfoEntry, ForeignNetworkRouteInfoKey},
},
tunnel::{
self,
packet_def::{PacketType, ZCPacket},
@@ -37,11 +49,10 @@ use crate::{
use super::{
encrypt::{Encryptor, NullCipher},
foreign_network_client::ForeignNetworkClient,
foreign_network_manager::ForeignNetworkManager,
foreign_network_manager::{ForeignNetworkManager, GlobalForeignNetworkAccessor},
peer_conn::PeerConnId,
peer_map::PeerMap,
peer_ospf_route::PeerRoute,
peer_rip_route::BasicRoute,
peer_rpc::PeerRpcManager,
route_trait::{ArcRoute, Route},
BoxNicPacketFilter, BoxPeerPacketFilter, PacketRecvChanReceiver,
@@ -75,7 +86,15 @@ impl PeerRpcManagerTransport for RpcTransport {
.ok_or(Error::Unknown)?;
let peers = self.peers.upgrade().ok_or(Error::Unknown)?;
if let Some(gateway_id) = peers
if foreign_peers.has_next_hop(dst_peer_id) {
// do not encrypt data sent to the public server
tracing::debug!(
?dst_peer_id,
?self.my_peer_id,
"failed to send msg to peer, try foreign network",
);
foreign_peers.send_msg(msg, dst_peer_id).await
} else if let Some(gateway_id) = peers
.get_gateway_peer_id(dst_peer_id, NextHopPolicy::LeastHop)
.await
{
@@ -88,20 +107,11 @@ impl PeerRpcManagerTransport for RpcTransport {
self.encryptor
.encrypt(&mut msg)
.with_context(|| "encrypt failed")?;
peers.send_msg_directly(msg, gateway_id).await
} else if foreign_peers.has_next_hop(dst_peer_id) {
if !foreign_peers.is_peer_public_node(&dst_peer_id) {
// do not encrypt for msg sending to public node
self.encryptor
.encrypt(&mut msg)
.with_context(|| "encrypt failed")?;
if peers.has_peer(gateway_id) {
peers.send_msg_directly(msg, gateway_id).await
} else {
foreign_peers.send_msg(msg, gateway_id).await
}
tracing::debug!(
?dst_peer_id,
?self.my_peer_id,
"failed to send msg to peer, try foreign network",
);
foreign_peers.send_msg(msg, dst_peer_id).await
} else {
Err(Error::RouteError(Some(format!(
"peermgr RpcTransport no route for dst_peer_id: {}",
@@ -120,13 +130,11 @@ impl PeerRpcManagerTransport for RpcTransport {
}
pub enum RouteAlgoType {
Rip,
Ospf,
None,
}
enum RouteAlgoInst {
Rip(Arc<BasicRoute>),
Ospf(Arc<PeerRoute>),
None,
}
@@ -217,9 +225,6 @@ impl PeerManager {
let peer_rpc_mgr = Arc::new(PeerRpcManager::new(rpc_tspt.clone()));
let route_algo_inst = match route_algo {
RouteAlgoType::Rip => {
RouteAlgoInst::Rip(Arc::new(BasicRoute::new(my_peer_id, global_ctx.clone())))
}
RouteAlgoType::Ospf => RouteAlgoInst::Ospf(PeerRoute::new(
my_peer_id,
global_ctx.clone(),
@@ -232,6 +237,7 @@ impl PeerManager {
my_peer_id,
global_ctx.clone(),
packet_send.clone(),
Self::build_foreign_network_manager_accessor(&peers),
));
let foreign_network_client = Arc::new(ForeignNetworkClient::new(
global_ctx.clone(),
@@ -270,6 +276,34 @@ impl PeerManager {
}
}
fn build_foreign_network_manager_accessor(
peer_map: &Arc<PeerMap>,
) -> Box<dyn GlobalForeignNetworkAccessor> {
struct T {
peer_map: Weak<PeerMap>,
}
#[async_trait::async_trait]
impl GlobalForeignNetworkAccessor for T {
async fn list_global_foreign_peer(
&self,
network_identity: &NetworkIdentity,
) -> Vec<PeerId> {
let Some(peer_map) = self.peer_map.upgrade() else {
return vec![];
};
peer_map
.list_peers_own_foreign_network(network_identity)
.await
}
}
Box::new(T {
peer_map: Arc::downgrade(peer_map),
})
}
async fn add_new_peer_conn(&self, peer_conn: PeerConn) -> Result<(), Error> {
if self.global_ctx.get_network_identity() != peer_conn.get_network_identity() {
return Err(Error::SecretKeyError(
@@ -325,20 +359,85 @@ impl PeerManager {
Ok(())
}
async fn try_handle_foreign_network_packet(
packet: ZCPacket,
my_peer_id: PeerId,
peer_map: &PeerMap,
foreign_network_mgr: &ForeignNetworkManager,
) -> Result<(), ZCPacket> {
let pm_header = packet.peer_manager_header().unwrap();
if pm_header.packet_type != PacketType::ForeignNetworkPacket as u8 {
return Err(packet);
}
let from_peer_id = pm_header.from_peer_id.get();
let to_peer_id = pm_header.to_peer_id.get();
let foreign_hdr = packet.foreign_network_hdr().unwrap();
let foreign_network_name = foreign_hdr.get_network_name(packet.payload());
let foreign_peer_id = foreign_hdr.get_dst_peer_id();
if to_peer_id == my_peer_id {
// packet sent from another peer to me; extract the inner packet and forward it
if let Err(e) = foreign_network_mgr
.send_msg_to_peer(
&foreign_network_name,
foreign_peer_id,
packet.foreign_network_packet(),
)
.await
{
tracing::debug!(
?e,
?foreign_network_name,
?foreign_peer_id,
"foreign network mgr send_msg_to_peer failed"
);
}
Ok(())
} else if from_peer_id == my_peer_id {
// packet was generated by the foreign network mgr and should be forwarded to the other peer
if let Err(e) = peer_map
.send_msg(packet, to_peer_id, NextHopPolicy::LeastHop)
.await
{
tracing::debug!(
?e,
?to_peer_id,
"send_msg_directly failed when forward local generated foreign network packet"
);
}
Ok(())
} else {
// target is not me, forward it
Err(packet)
}
}
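The three branches above reduce to a small decision table; a hypothetical sketch (not from this patch, using plain u32 in place of PeerId):

enum ForeignAction {
    DeliverLocal, // to_peer_id == me: hand the inner packet to the local ForeignNetworkManager
    ForwardOut,   // from_peer_id == me: locally generated, forward towards to_peer_id via the peer map
    PassThrough,  // neither: not for this node, let the normal relay path handle it
}

fn classify(my_id: u32, from: u32, to: u32) -> ForeignAction {
    if to == my_id {
        ForeignAction::DeliverLocal
    } else if from == my_id {
        ForeignAction::ForwardOut
    } else {
        ForeignAction::PassThrough
    }
}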
async fn start_peer_recv(&self) {
let mut recv = ReceiverStream::new(self.packet_recv.lock().await.take().unwrap());
let mut recv = self.packet_recv.lock().await.take().unwrap();
let my_peer_id = self.my_peer_id;
let peers = self.peers.clone();
let pipe_line = self.peer_packet_process_pipeline.clone();
let foreign_client = self.foreign_network_client.clone();
let foreign_mgr = self.foreign_network_manager.clone();
let encryptor = self.encryptor.clone();
self.tasks.lock().await.spawn(async move {
tracing::trace!("start_peer_recv");
while let Some(mut ret) = recv.next().await {
while let Some(ret) = recv.recv().await {
let Err(mut ret) =
Self::try_handle_foreign_network_packet(ret, my_peer_id, &peers, &foreign_mgr)
.await
else {
continue;
};
let Some(hdr) = ret.mut_peer_manager_header() else {
tracing::warn!(?ret, "invalid packet, skip");
continue;
};
tracing::trace!(?hdr, "peer recv a packet...");
let from_peer_id = hdr.from_peer_id.get();
let to_peer_id = hdr.to_peer_id.get();
@@ -438,7 +537,10 @@ impl PeerManager {
impl PeerPacketFilter for PeerRpcPacketProcessor {
async fn try_process_packet_from_peer(&self, packet: ZCPacket) -> Option<ZCPacket> {
let hdr = packet.peer_manager_header().unwrap();
if hdr.packet_type == PacketType::TaRpc as u8 {
if hdr.packet_type == PacketType::TaRpc as u8
|| hdr.packet_type == PacketType::RpcReq as u8
|| hdr.packet_type == PacketType::RpcResp as u8
{
self.peer_rpc_tspt_sender.send(packet).unwrap();
None
} else {
@@ -464,6 +566,7 @@ impl PeerManager {
my_peer_id: PeerId,
peers: Weak<PeerMap>,
foreign_network_client: Weak<ForeignNetworkClient>,
foreign_network_manager: Weak<ForeignNetworkManager>,
}
#[async_trait]
@@ -477,36 +580,45 @@ impl PeerManager {
return vec![];
};
let mut peers = foreign_client.list_foreign_peers();
let mut peers = foreign_client.list_public_peers().await;
peers.extend(peer_map.list_peers_with_conn().await);
peers
}
async fn send_route_packet(
&self,
msg: Bytes,
_route_id: u8,
dst_peer_id: PeerId,
) -> Result<(), Error> {
let foreign_client = self
.foreign_network_client
.upgrade()
.ok_or(Error::Unknown)?;
let peer_map = self.peers.upgrade().ok_or(Error::Unknown)?;
let mut zc_packet = ZCPacket::new_with_payload(&msg);
zc_packet.fill_peer_manager_hdr(
self.my_peer_id,
dst_peer_id,
PacketType::Route as u8,
);
if foreign_client.has_next_hop(dst_peer_id) {
foreign_client.send_msg(zc_packet, dst_peer_id).await
} else {
peer_map.send_msg_directly(zc_packet, dst_peer_id).await
}
}
fn my_peer_id(&self) -> PeerId {
self.my_peer_id
}
async fn list_foreign_networks(&self) -> ForeignNetworkRouteInfoMap {
let ret = DashMap::new();
let Some(foreign_mgr) = self.foreign_network_manager.upgrade() else {
return ret;
};
let networks = foreign_mgr.list_foreign_networks().await;
for (network_name, info) in networks.foreign_networks.iter() {
if info.peers.is_empty() {
continue;
}
let last_update = foreign_mgr
.get_foreign_network_last_update(network_name)
.unwrap_or(SystemTime::now());
ret.insert(
ForeignNetworkRouteInfoKey {
peer_id: self.my_peer_id,
network_name: network_name.clone(),
},
ForeignNetworkRouteInfoEntry {
foreign_peer_ids: info.peers.iter().map(|x| x.peer_id).collect(),
last_update: Some(last_update.into()),
version: 0,
network_secret_digest: info.network_secret_digest.clone(),
},
);
}
ret
}
}
let my_peer_id = self.my_peer_id;
@@ -515,6 +627,7 @@ impl PeerManager {
my_peer_id,
peers: Arc::downgrade(&self.peers),
foreign_network_client: Arc::downgrade(&self.foreign_network_client),
foreign_network_manager: Arc::downgrade(&self.foreign_network_manager),
}))
.await
.unwrap();
@@ -525,13 +638,12 @@ impl PeerManager {
pub fn get_route(&self) -> Box<dyn Route + Send + Sync + 'static> {
match &self.route_algo_inst {
RouteAlgoInst::Rip(route) => Box::new(route.clone()),
RouteAlgoInst::Ospf(route) => Box::new(route.clone()),
RouteAlgoInst::None => panic!("no route"),
}
}
pub async fn list_routes(&self) -> Vec<crate::rpc::Route> {
pub async fn list_routes(&self) -> Vec<cli::Route> {
self.get_route().list_routes().await
}
@@ -539,6 +651,28 @@ impl PeerManager {
self.get_route().dump().await
}
pub async fn list_global_foreign_network(&self) -> ListGlobalForeignNetworkResponse {
let mut resp = ListGlobalForeignNetworkResponse::default();
let ret = self.get_route().list_foreign_network_info().await;
for info in ret.infos.iter() {
let entry = resp
.foreign_networks
.entry(info.key.as_ref().unwrap().peer_id)
.or_insert_with(|| Default::default());
let mut f = OneForeignNetwork::default();
f.network_name = info.key.as_ref().unwrap().network_name.clone();
f.peer_ids
.extend(info.value.as_ref().unwrap().foreign_peer_ids.iter());
f.last_updated = format!("{}", info.value.as_ref().unwrap().last_update.unwrap());
f.version = info.value.as_ref().unwrap().version;
entry.foreign_networks.push(f);
}
resp
}
async fn run_nic_packet_process_pipeline(&self, data: &mut ZCPacket) {
for pipeline in self.nic_packet_process_pipeline.read().await.iter().rev() {
pipeline.try_process_packet_from_nic(data).await;
@@ -649,13 +783,23 @@ impl PeerManager {
.get_gateway_peer_id(*peer_id, next_hop_policy.clone())
.await
{
if let Err(e) = self.peers.send_msg_directly(msg, gateway).await {
errs.push(e);
}
} else if self.foreign_network_client.has_next_hop(*peer_id) {
if let Err(e) = self.foreign_network_client.send_msg(msg, *peer_id).await {
errs.push(e);
if self.peers.has_peer(gateway) {
if let Err(e) = self.peers.send_msg_directly(msg, gateway).await {
errs.push(e);
}
} else if self.foreign_network_client.has_next_hop(gateway) {
if let Err(e) = self.foreign_network_client.send_msg(msg, gateway).await {
errs.push(e);
}
} else {
tracing::warn!(
?gateway,
?peer_id,
"cannot send msg to peer through gateway"
);
}
} else {
tracing::debug!(?peer_id, "no gateway for peer");
}
}
@@ -686,14 +830,12 @@ impl PeerManager {
.await
.replace(Arc::downgrade(&self.foreign_network_client));
self.foreign_network_manager.run().await;
self.foreign_network_client.run().await;
}
pub async fn run(&self) -> Result<(), Error> {
match &self.route_algo_inst {
RouteAlgoInst::Ospf(route) => self.add_route(route.clone()).await,
RouteAlgoInst::Rip(route) => self.add_route(route.clone()).await,
RouteAlgoInst::None => {}
};
@@ -732,13 +874,6 @@ impl PeerManager {
self.nic_channel.clone()
}
pub fn get_basic_route(&self) -> Arc<BasicRoute> {
match &self.route_algo_inst {
RouteAlgoInst::Rip(route) => route.clone(),
_ => panic!("not rip route"),
}
}
pub fn get_foreign_network_manager(&self) -> Arc<ForeignNetworkManager> {
self.foreign_network_manager.clone()
}
@@ -746,6 +881,41 @@ impl PeerManager {
pub fn get_foreign_network_client(&self) -> Arc<ForeignNetworkClient> {
self.foreign_network_client.clone()
}
pub fn get_my_info(&self) -> cli::NodeInfo {
cli::NodeInfo {
peer_id: self.my_peer_id,
ipv4_addr: self
.global_ctx
.get_ipv4()
.map(|x| x.to_string())
.unwrap_or_default(),
proxy_cidrs: self
.global_ctx
.get_proxy_cidrs()
.into_iter()
.map(|x| x.to_string())
.collect(),
hostname: self.global_ctx.get_hostname(),
stun_info: Some(self.global_ctx.get_stun_info_collector().get_stun_info()),
inst_id: self.global_ctx.get_id().to_string(),
listeners: self
.global_ctx
.get_running_listeners()
.iter()
.map(|x| x.to_string())
.collect(),
config: self.global_ctx.config.dump(),
version: EASYTIER_VERSION.to_string(),
feature_flag: Some(self.global_ctx.get_feature_flags()),
}
}
pub async fn wait(&self) {
while !self.tasks.lock().await.is_empty() {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
}
}
}
#[cfg(test)]
@@ -761,12 +931,11 @@ mod tests {
instance::listeners::get_listener_by_url,
peers::{
peer_manager::RouteAlgoType,
peer_rpc::tests::{MockService, TestRpcService, TestRpcServiceClient},
peer_rpc::tests::register_service,
tests::{connect_peer_manager, wait_route_appear},
},
rpc::NatType,
tunnel::common::tests::wait_for_condition,
tunnel::{TunnelConnector, TunnelListener},
proto::common::NatType,
tunnel::{common::tests::wait_for_condition, TunnelConnector, TunnelListener},
};
use super::PeerManager;
@@ -829,25 +998,18 @@ mod tests {
#[values("tcp", "udp", "wg", "quic")] proto1: &str,
#[values("tcp", "udp", "wg", "quic")] proto2: &str,
) {
use crate::proto::{
rpc_impl::RpcController,
tests::{GreetingClientFactory, SayHelloRequest},
};
let peer_mgr_a = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
peer_mgr_a.get_peer_rpc_mgr().run_service(
100,
MockService {
prefix: "hello a".to_owned(),
}
.serve(),
);
register_service(&peer_mgr_a.peer_rpc_mgr, "", 0, "hello a");
let peer_mgr_b = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
let peer_mgr_c = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
peer_mgr_c.get_peer_rpc_mgr().run_service(
100,
MockService {
prefix: "hello c".to_owned(),
}
.serve(),
);
register_service(&peer_mgr_c.peer_rpc_mgr, "", 0, "hello c");
let mut listener1 = get_listener_by_url(
&format!("{}://0.0.0.0:31013", proto1).parse().unwrap(),
@@ -885,16 +1047,26 @@ mod tests {
.await
.unwrap();
let ret = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(100, peer_mgr_c.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), "abc".to_owned()).await;
ret
})
let stub = peer_mgr_a
.peer_rpc_mgr
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id,
peer_mgr_c.my_peer_id,
"".to_string(),
);
let ret = stub
.say_hello(
RpcController {},
SayHelloRequest {
name: "abc".to_string(),
},
)
.await
.unwrap();
assert_eq!(ret, "hello c abc");
assert_eq!(ret.greeting, "hello c abc!");
}
#[tokio::test]


@@ -7,12 +7,11 @@ use tokio::sync::RwLock;
use crate::{
common::{
error::Error,
global_ctx::{ArcGlobalCtx, GlobalCtxEvent},
global_ctx::{ArcGlobalCtx, GlobalCtxEvent, NetworkIdentity},
PeerId,
},
rpc::PeerConnInfo,
tunnel::packet_def::ZCPacket,
tunnel::TunnelError,
proto::cli::PeerConnInfo,
tunnel::{packet_def::ZCPacket, TunnelError},
};
use super::{
@@ -66,7 +65,7 @@ impl PeerMap {
}
pub fn has_peer(&self, peer_id: PeerId) -> bool {
self.peer_map.contains_key(&peer_id)
peer_id == self.my_peer_id || self.peer_map.contains_key(&peer_id)
}
pub async fn send_msg_directly(&self, msg: ZCPacket, dst_peer_id: PeerId) -> Result<(), Error> {
@@ -113,16 +112,28 @@ impl PeerMap {
.get_next_hop_with_policy(dst_peer_id, policy.clone())
.await
{
// for foreign network, gateway_peer_id may not connect to me
if self.has_peer(gateway_peer_id) {
return Some(gateway_peer_id);
}
// NOTICE: for a foreign network, gateway_peer_id may not be connected to me
return Some(gateway_peer_id);
}
}
None
}
pub async fn list_peers_own_foreign_network(
&self,
network_identity: &NetworkIdentity,
) -> Vec<PeerId> {
let mut ret = Vec::new();
for route in self.routes.read().await.iter() {
let peers = route
.list_peers_own_foreign_network(&network_identity)
.await;
ret.extend(peers);
}
ret
}
pub async fn send_msg(
&self,
msg: ZCPacket,
@@ -240,3 +251,13 @@ impl PeerMap {
route_map
}
}
impl Drop for PeerMap {
fn drop(&mut self) {
tracing::debug!(
self.my_peer_id,
network = ?self.global_ctx.get_network_identity(),
"PeerMap is dropped"
);
}
}

File diff suppressed because it is too large


@@ -1,753 +0,0 @@
use std::{
net::Ipv4Addr,
sync::{atomic::AtomicU32, Arc},
time::{Duration, Instant},
};
use async_trait::async_trait;
use dashmap::DashMap;
use tokio::{
sync::{Mutex, RwLock},
task::JoinSet,
};
use tokio_util::bytes::Bytes;
use tracing::Instrument;
use crate::{
common::{error::Error, global_ctx::ArcGlobalCtx, stun::StunInfoCollectorTrait, PeerId},
peers::route_trait::{Route, RouteInterfaceBox},
rpc::{NatType, StunInfo},
tunnel::packet_def::{PacketType, ZCPacket},
};
use super::PeerPacketFilter;
const SEND_ROUTE_PERIOD_SEC: u64 = 60;
const SEND_ROUTE_FAST_REPLY_SEC: u64 = 5;
const ROUTE_EXPIRED_SEC: u64 = 70;
type Version = u32;
#[derive(serde::Deserialize, serde::Serialize, Clone, Debug, PartialEq)]
// Derives can be passed through to the generated type:
pub struct SyncPeerInfo {
// means next hop in route table.
pub peer_id: PeerId,
pub cost: u32,
pub ipv4_addr: Option<Ipv4Addr>,
pub proxy_cidrs: Vec<String>,
pub hostname: Option<String>,
pub udp_stun_info: i8,
}
impl SyncPeerInfo {
pub fn new_self(from_peer: PeerId, global_ctx: &ArcGlobalCtx) -> Self {
SyncPeerInfo {
peer_id: from_peer,
cost: 0,
ipv4_addr: global_ctx.get_ipv4(),
proxy_cidrs: global_ctx
.get_proxy_cidrs()
.iter()
.map(|x| x.to_string())
.chain(global_ctx.get_vpn_portal_cidr().map(|x| x.to_string()))
.collect(),
hostname: Some(global_ctx.get_hostname()),
udp_stun_info: global_ctx
.get_stun_info_collector()
.get_stun_info()
.udp_nat_type as i8,
}
}
pub fn clone_for_route_table(&self, next_hop: PeerId, cost: u32, from: &Self) -> Self {
SyncPeerInfo {
peer_id: next_hop,
cost,
ipv4_addr: from.ipv4_addr.clone(),
proxy_cidrs: from.proxy_cidrs.clone(),
hostname: from.hostname.clone(),
udp_stun_info: from.udp_stun_info,
}
}
}
#[derive(serde::Deserialize, serde::Serialize, Clone, Debug)]
pub struct SyncPeer {
pub myself: SyncPeerInfo,
pub neighbors: Vec<SyncPeerInfo>,
// the route table version of myself
pub version: Version,
// the route table version of peer that we have received last time
pub peer_version: Option<Version>,
// if we do not have latest peer version, need_reply is true
pub need_reply: bool,
}
impl SyncPeer {
pub fn new(
from_peer: PeerId,
_to_peer: PeerId,
neighbors: Vec<SyncPeerInfo>,
global_ctx: ArcGlobalCtx,
version: Version,
peer_version: Option<Version>,
need_reply: bool,
) -> Self {
SyncPeer {
myself: SyncPeerInfo::new_self(from_peer, &global_ctx),
neighbors,
version,
peer_version,
need_reply,
}
}
}
#[derive(Debug)]
struct SyncPeerFromRemote {
packet: SyncPeer,
last_update: std::time::Instant,
}
type SyncPeerFromRemoteMap = Arc<DashMap<PeerId, SyncPeerFromRemote>>;
#[derive(Debug)]
struct RouteTable {
route_info: DashMap<PeerId, SyncPeerInfo>,
ipv4_peer_id_map: DashMap<Ipv4Addr, PeerId>,
cidr_peer_id_map: DashMap<cidr::IpCidr, PeerId>,
}
impl RouteTable {
fn new() -> Self {
RouteTable {
route_info: DashMap::new(),
ipv4_peer_id_map: DashMap::new(),
cidr_peer_id_map: DashMap::new(),
}
}
fn copy_from(&self, other: &Self) {
self.route_info.clear();
for item in other.route_info.iter() {
let (k, v) = item.pair();
self.route_info.insert(*k, v.clone());
}
self.ipv4_peer_id_map.clear();
for item in other.ipv4_peer_id_map.iter() {
let (k, v) = item.pair();
self.ipv4_peer_id_map.insert(*k, *v);
}
self.cidr_peer_id_map.clear();
for item in other.cidr_peer_id_map.iter() {
let (k, v) = item.pair();
self.cidr_peer_id_map.insert(*k, *v);
}
}
}
#[derive(Debug, Clone)]
struct RouteVersion(Arc<AtomicU32>);
impl RouteVersion {
fn new() -> Self {
// RouteVersion(Arc::new(AtomicU32::new(rand::random())))
RouteVersion(Arc::new(AtomicU32::new(0)))
}
fn get(&self) -> Version {
self.0.load(std::sync::atomic::Ordering::Relaxed)
}
fn inc(&self) {
self.0.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
}
pub struct BasicRoute {
my_peer_id: PeerId,
global_ctx: ArcGlobalCtx,
interface: Arc<Mutex<Option<RouteInterfaceBox>>>,
route_table: Arc<RouteTable>,
sync_peer_from_remote: SyncPeerFromRemoteMap,
tasks: Mutex<JoinSet<()>>,
need_sync_notifier: Arc<tokio::sync::Notify>,
version: RouteVersion,
myself: Arc<RwLock<SyncPeerInfo>>,
last_send_time_map: Arc<DashMap<PeerId, (Version, Option<Version>, Instant)>>,
}
impl BasicRoute {
pub fn new(my_peer_id: PeerId, global_ctx: ArcGlobalCtx) -> Self {
BasicRoute {
my_peer_id,
global_ctx: global_ctx.clone(),
interface: Arc::new(Mutex::new(None)),
route_table: Arc::new(RouteTable::new()),
sync_peer_from_remote: Arc::new(DashMap::new()),
tasks: Mutex::new(JoinSet::new()),
need_sync_notifier: Arc::new(tokio::sync::Notify::new()),
version: RouteVersion::new(),
myself: Arc::new(RwLock::new(SyncPeerInfo::new_self(
my_peer_id.into(),
&global_ctx,
))),
last_send_time_map: Arc::new(DashMap::new()),
}
}
fn update_route_table(
my_id: PeerId,
sync_peer_reqs: SyncPeerFromRemoteMap,
route_table: Arc<RouteTable>,
) {
tracing::trace!(my_id = ?my_id, route_table = ?route_table, "update route table");
let new_route_table = Arc::new(RouteTable::new());
for item in sync_peer_reqs.iter() {
Self::update_route_table_with_req(my_id, &item.value().packet, new_route_table.clone());
}
route_table.copy_from(&new_route_table);
}
async fn update_myself(
my_peer_id: PeerId,
myself: &Arc<RwLock<SyncPeerInfo>>,
global_ctx: &ArcGlobalCtx,
) -> bool {
let new_myself = SyncPeerInfo::new_self(my_peer_id, &global_ctx);
if *myself.read().await != new_myself {
*myself.write().await = new_myself;
true
} else {
false
}
}
fn update_route_table_with_req(my_id: PeerId, packet: &SyncPeer, route_table: Arc<RouteTable>) {
let peer_id = packet.myself.peer_id.clone();
let update = |cost: u32, peer_info: &SyncPeerInfo| {
let node_id: PeerId = peer_info.peer_id.into();
let ret = route_table
.route_info
.entry(node_id.clone().into())
.and_modify(|info| {
if info.cost > cost {
*info = info.clone_for_route_table(peer_id, cost, &peer_info);
}
})
.or_insert(
peer_info
.clone()
.clone_for_route_table(peer_id, cost, &peer_info),
)
.value()
.clone();
if ret.cost > 6 {
tracing::error!(
"cost too large: {}, may lost connection, remove it",
ret.cost
);
route_table.route_info.remove(&node_id);
}
tracing::trace!(
"update route info, to: {:?}, gateway: {:?}, cost: {}, peer: {:?}",
node_id,
peer_id,
cost,
&peer_info
);
if let Some(ipv4) = peer_info.ipv4_addr {
route_table
.ipv4_peer_id_map
.insert(ipv4.clone(), node_id.clone().into());
}
for cidr in peer_info.proxy_cidrs.iter() {
let cidr: cidr::IpCidr = cidr.parse().unwrap();
route_table
.cidr_peer_id_map
.insert(cidr, node_id.clone().into());
}
};
for neighbor in packet.neighbors.iter() {
if neighbor.peer_id == my_id {
continue;
}
update(neighbor.cost + 1, &neighbor);
tracing::trace!("route info: {:?}", neighbor);
}
// add the sender peer to route info
update(1, &packet.myself);
tracing::trace!("my_id: {:?}, current route table: {:?}", my_id, route_table);
}
async fn send_sync_peer_request(
interface: &RouteInterfaceBox,
my_peer_id: PeerId,
global_ctx: ArcGlobalCtx,
peer_id: PeerId,
route_table: Arc<RouteTable>,
my_version: Version,
peer_version: Option<Version>,
need_reply: bool,
) -> Result<(), Error> {
let mut route_info_copy: Vec<SyncPeerInfo> = Vec::new();
// copy the route info
for item in route_table.route_info.iter() {
let (k, v) = item.pair();
route_info_copy.push(v.clone().clone_for_route_table(*k, v.cost, &v));
}
let msg = SyncPeer::new(
my_peer_id,
peer_id,
route_info_copy,
global_ctx,
my_version,
peer_version,
need_reply,
);
// TODO: this may exceed the MTU of the tunnel
interface
.send_route_packet(postcard::to_allocvec(&msg).unwrap().into(), 1, peer_id)
.await
}
async fn sync_peer_periodically(&self) {
let route_table = self.route_table.clone();
let global_ctx = self.global_ctx.clone();
let my_peer_id = self.my_peer_id.clone();
let interface = self.interface.clone();
let notifier = self.need_sync_notifier.clone();
let sync_peer_from_remote = self.sync_peer_from_remote.clone();
let myself = self.myself.clone();
let version = self.version.clone();
let last_send_time_map = self.last_send_time_map.clone();
self.tasks.lock().await.spawn(
async move {
loop {
if Self::update_myself(my_peer_id,&myself, &global_ctx).await {
version.inc();
tracing::info!(
my_id = ?my_peer_id,
version = version.get(),
"update route table version when myself changed"
);
}
let lockd_interface = interface.lock().await;
let interface = lockd_interface.as_ref().unwrap();
let last_send_time_map_new = DashMap::new();
let peers = interface.list_peers().await;
for peer in peers.iter() {
let last_send_time = last_send_time_map.get(peer).map(|v| *v).unwrap_or((0, None, Instant::now() - Duration::from_secs(3600)));
let my_version_peer_saved = sync_peer_from_remote.get(peer).and_then(|v| v.packet.peer_version);
let peer_have_latest_version = my_version_peer_saved == Some(version.get());
if peer_have_latest_version && last_send_time.2.elapsed().as_secs() < SEND_ROUTE_PERIOD_SEC {
last_send_time_map_new.insert(*peer, last_send_time);
continue;
}
tracing::trace!(
my_id = ?my_peer_id,
dst_peer_id = ?peer,
version = version.get(),
?my_version_peer_saved,
last_send_version = ?last_send_time.0,
last_send_peer_version = ?last_send_time.1,
last_send_elapse = ?last_send_time.2.elapsed().as_secs(),
"need send route info"
);
let peer_version_we_saved = sync_peer_from_remote.get(&peer).and_then(|v| Some(v.packet.version));
last_send_time_map_new.insert(*peer, (version.get(), peer_version_we_saved, Instant::now()));
let ret = Self::send_sync_peer_request(
interface,
my_peer_id.clone(),
global_ctx.clone(),
*peer,
route_table.clone(),
version.get(),
peer_version_we_saved,
!peer_have_latest_version,
)
.await;
match &ret {
Ok(_) => {
tracing::trace!("send sync peer request to peer: {}", peer);
}
Err(Error::PeerNoConnectionError(_)) => {
tracing::trace!("peer {} no connection", peer);
}
Err(e) => {
tracing::error!(
"send sync peer request to peer: {} error: {:?}",
peer,
e
);
}
};
}
last_send_time_map.clear();
for item in last_send_time_map_new.iter() {
let (k, v) = item.pair();
last_send_time_map.insert(*k, *v);
}
tokio::select! {
_ = notifier.notified() => {
tracing::trace!("sync peer request triggered by notifier");
}
_ = tokio::time::sleep(Duration::from_secs(1)) => {
tracing::trace!("sync peer request triggered by timeout");
}
}
}
}
.instrument(
tracing::info_span!("sync_peer_periodically", my_id = ?self.my_peer_id, global_ctx = ?self.global_ctx),
),
);
}
async fn check_expired_sync_peer_from_remote(&self) {
let route_table = self.route_table.clone();
let my_peer_id = self.my_peer_id.clone();
let sync_peer_from_remote = self.sync_peer_from_remote.clone();
let notifier = self.need_sync_notifier.clone();
let interface = self.interface.clone();
let version = self.version.clone();
self.tasks.lock().await.spawn(async move {
loop {
let mut need_update_route = false;
let now = std::time::Instant::now();
let mut need_remove = Vec::new();
let connected_peers = interface.lock().await.as_ref().unwrap().list_peers().await;
for item in sync_peer_from_remote.iter() {
let (k, v) = item.pair();
if now.duration_since(v.last_update).as_secs() > ROUTE_EXPIRED_SEC
|| !connected_peers.contains(k)
{
need_update_route = true;
need_remove.insert(0, k.clone());
}
}
for k in need_remove.iter() {
tracing::warn!("remove expired sync peer: {:?}", k);
sync_peer_from_remote.remove(k);
}
if need_update_route {
Self::update_route_table(
my_peer_id,
sync_peer_from_remote.clone(),
route_table.clone(),
);
version.inc();
tracing::info!(
my_id = ?my_peer_id,
version = version.get(),
"update route table when check expired peer"
);
notifier.notify_one();
}
tokio::time::sleep(Duration::from_secs(1)).await;
}
});
}
fn get_peer_id_for_proxy(&self, ipv4: &Ipv4Addr) -> Option<PeerId> {
let ipv4 = std::net::IpAddr::V4(*ipv4);
for item in self.route_table.cidr_peer_id_map.iter() {
let (k, v) = item.pair();
if k.contains(&ipv4) {
return Some(*v);
}
}
None
}
#[tracing::instrument(skip(self, packet), fields(my_id = ?self.my_peer_id, ctx = ?self.global_ctx))]
async fn handle_route_packet(&self, src_peer_id: PeerId, packet: Bytes) {
let packet = postcard::from_bytes::<SyncPeer>(&packet).unwrap();
let p = &packet;
let mut updated = true;
assert_eq!(packet.myself.peer_id, src_peer_id);
self.sync_peer_from_remote
.entry(packet.myself.peer_id.into())
.and_modify(|v| {
if v.packet.myself == p.myself && v.packet.neighbors == p.neighbors {
updated = false;
} else {
v.packet = p.clone();
}
v.packet.version = p.version;
v.packet.peer_version = p.peer_version;
v.last_update = std::time::Instant::now();
})
.or_insert(SyncPeerFromRemote {
packet: p.clone(),
last_update: std::time::Instant::now(),
});
if updated {
Self::update_route_table(
self.my_peer_id.clone(),
self.sync_peer_from_remote.clone(),
self.route_table.clone(),
);
self.version.inc();
tracing::info!(
my_id = ?self.my_peer_id,
?p,
version = self.version.get(),
"update route table when receive route packet"
);
}
if packet.need_reply {
self.last_send_time_map
.entry(packet.myself.peer_id.into())
.and_modify(|v| {
const FAST_REPLY_DURATION: u64 =
SEND_ROUTE_PERIOD_SEC - SEND_ROUTE_FAST_REPLY_SEC;
if v.0 != self.version.get() || v.1 != Some(p.version) {
v.2 = Instant::now() - Duration::from_secs(3600);
} else if v.2.elapsed().as_secs() < FAST_REPLY_DURATION {
// do not send same version route info too frequently
v.2 = Instant::now() - Duration::from_secs(FAST_REPLY_DURATION);
}
});
}
if updated || packet.need_reply {
self.need_sync_notifier.notify_one();
}
}
}
#[async_trait]
impl Route for BasicRoute {
async fn open(&self, interface: RouteInterfaceBox) -> Result<u8, ()> {
*self.interface.lock().await = Some(interface);
self.sync_peer_periodically().await;
self.check_expired_sync_peer_from_remote().await;
Ok(1)
}
async fn close(&self) {}
async fn get_next_hop(&self, dst_peer_id: PeerId) -> Option<PeerId> {
match self.route_table.route_info.get(&dst_peer_id) {
Some(info) => {
return Some(info.peer_id.clone().into());
}
None => {
tracing::error!("no route info for dst_peer_id: {}", dst_peer_id);
return None;
}
}
}
async fn list_routes(&self) -> Vec<crate::rpc::Route> {
let mut routes = Vec::new();
let parse_route_info = |real_peer_id: PeerId, route_info: &SyncPeerInfo| {
let mut route = crate::rpc::Route::default();
route.ipv4_addr = if let Some(ipv4_addr) = route_info.ipv4_addr {
ipv4_addr.to_string()
} else {
"".to_string()
};
route.peer_id = real_peer_id;
route.next_hop_peer_id = route_info.peer_id;
route.cost = route_info.cost as i32;
route.proxy_cidrs = route_info.proxy_cidrs.clone();
route.hostname = route_info.hostname.clone().unwrap_or_default();
let mut stun_info = StunInfo::default();
if let Ok(udp_nat_type) = NatType::try_from(route_info.udp_stun_info as i32) {
stun_info.set_udp_nat_type(udp_nat_type);
}
route.stun_info = Some(stun_info);
route
};
self.route_table.route_info.iter().for_each(|item| {
routes.push(parse_route_info(*item.key(), item.value()));
});
routes
}
async fn get_peer_id_by_ipv4(&self, ipv4_addr: &Ipv4Addr) -> Option<PeerId> {
if let Some(peer_id) = self.route_table.ipv4_peer_id_map.get(ipv4_addr) {
return Some(*peer_id);
}
if let Some(peer_id) = self.get_peer_id_for_proxy(ipv4_addr) {
return Some(peer_id);
}
tracing::info!("no peer id for ipv4: {}", ipv4_addr);
return None;
}
}
#[async_trait::async_trait]
impl PeerPacketFilter for BasicRoute {
async fn try_process_packet_from_peer(&self, packet: ZCPacket) -> Option<ZCPacket> {
let hdr = packet.peer_manager_header().unwrap();
if hdr.packet_type == PacketType::Route as u8 {
let b = packet.payload().to_vec();
self.handle_route_packet(hdr.from_peer_id.get(), b.into())
.await;
None
} else {
Some(packet)
}
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use crate::{
common::{global_ctx::tests::get_mock_global_ctx, PeerId},
connector::udp_hole_punch::tests::replace_stun_info_collector,
peers::{
peer_manager::{PeerManager, RouteAlgoType},
peer_rip_route::Version,
tests::{connect_peer_manager, wait_route_appear},
},
rpc::NatType,
};
async fn create_mock_pmgr() -> Arc<PeerManager> {
let (s, _r) = tokio::sync::mpsc::channel(1000);
let peer_mgr = Arc::new(PeerManager::new(
RouteAlgoType::Rip,
get_mock_global_ctx(),
s,
));
replace_stun_info_collector(peer_mgr.clone(), NatType::Unknown);
peer_mgr.run().await.unwrap();
peer_mgr
}
#[tokio::test]
async fn test_rip_route() {
let peer_mgr_a = create_mock_pmgr().await;
let peer_mgr_b = create_mock_pmgr().await;
let peer_mgr_c = create_mock_pmgr().await;
connect_peer_manager(peer_mgr_a.clone(), peer_mgr_b.clone()).await;
connect_peer_manager(peer_mgr_b.clone(), peer_mgr_c.clone()).await;
wait_route_appear(peer_mgr_a.clone(), peer_mgr_b.clone())
.await
.unwrap();
wait_route_appear(peer_mgr_a.clone(), peer_mgr_c.clone())
.await
.unwrap();
let mgrs = vec![peer_mgr_a.clone(), peer_mgr_b.clone(), peer_mgr_c.clone()];
tokio::time::sleep(tokio::time::Duration::from_secs(4)).await;
let check_version = |version: Version, peer_id: PeerId, mgrs: &Vec<Arc<PeerManager>>| {
for mgr in mgrs.iter() {
tracing::warn!(
"check version: {:?}, {:?}, {:?}, {:?}",
version,
peer_id,
mgr,
mgr.get_basic_route().sync_peer_from_remote
);
assert_eq!(
version,
mgr.get_basic_route()
.sync_peer_from_remote
.get(&peer_id)
.unwrap()
.packet
.version,
);
assert_eq!(
mgr.get_basic_route()
.sync_peer_from_remote
.get(&peer_id)
.unwrap()
.packet
.peer_version
.unwrap(),
mgr.get_basic_route().version.get()
);
}
};
let check_sanity = || {
// check peer version in other peer mgr are correct.
check_version(
peer_mgr_b.get_basic_route().version.get(),
peer_mgr_b.my_peer_id(),
&vec![peer_mgr_a.clone(), peer_mgr_c.clone()],
);
check_version(
peer_mgr_a.get_basic_route().version.get(),
peer_mgr_a.my_peer_id(),
&vec![peer_mgr_b.clone()],
);
check_version(
peer_mgr_c.get_basic_route().version.get(),
peer_mgr_c.my_peer_id(),
&vec![peer_mgr_b.clone()],
);
};
check_sanity();
let versions = mgrs
.iter()
.map(|x| x.get_basic_route().version.get())
.collect::<Vec<_>>();
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
let versions2 = mgrs
.iter()
.map(|x| x.get_basic_route().version.get())
.collect::<Vec<_>>();
assert_eq!(versions, versions2);
check_sanity();
assert!(peer_mgr_a.get_basic_route().version.get() <= 3);
assert!(peer_mgr_b.get_basic_route().version.get() <= 6);
assert!(peer_mgr_c.get_basic_route().version.get() <= 3);
}
}


@@ -1,27 +1,11 @@
use std::{
sync::{
atomic::{AtomicBool, AtomicU32, Ordering},
Arc,
},
time::Instant,
};
use std::sync::{Arc, Mutex};
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
use futures::{SinkExt, StreamExt};
use prost::Message;
use tarpc::{server::Channel, transport::channel::UnboundedChannel};
use tokio::{
sync::mpsc::{self, UnboundedSender},
task::JoinSet,
};
use tracing::Instrument;
use futures::StreamExt;
use tokio::task::JoinSet;
use crate::{
common::{error::Error, PeerId},
rpc::TaRpcPacket,
proto::rpc_impl,
tunnel::packet_def::{PacketType, ZCPacket},
};
@@ -38,33 +22,13 @@ pub trait PeerRpcManagerTransport: Send + Sync + 'static {
async fn recv(&self) -> Result<ZCPacket, Error>;
}
type PacketSender = UnboundedSender<ZCPacket>;
struct PeerRpcEndPoint {
peer_id: PeerId,
packet_sender: PacketSender,
create_time: AtomicCell<Instant>,
finished: Arc<AtomicBool>,
tasks: JoinSet<()>,
}
type PeerRpcEndPointCreator =
Box<dyn Fn(PeerId, PeerRpcTransactId) -> PeerRpcEndPoint + Send + Sync + 'static>;
#[derive(Hash, Eq, PartialEq, Clone)]
struct PeerRpcClientCtxKey(PeerId, PeerRpcServiceId, PeerRpcTransactId);
// handle rpc request from one peer
pub struct PeerRpcManager {
service_map: Arc<DashMap<PeerRpcServiceId, PacketSender>>,
tasks: JoinSet<()>,
tspt: Arc<Box<dyn PeerRpcManagerTransport>>,
rpc_client: rpc_impl::client::Client,
rpc_server: rpc_impl::server::Server,
service_registry: Arc<DashMap<PeerRpcServiceId, PeerRpcEndPointCreator>>,
peer_rpc_endpoints: Arc<DashMap<PeerRpcClientCtxKey, PeerRpcEndPoint>>,
client_resp_receivers: Arc<DashMap<PeerRpcClientCtxKey, PacketSender>>,
transact_id: AtomicU32,
tasks: Arc<Mutex<JoinSet<()>>>,
}
impl std::fmt::Debug for PeerRpcManager {
@@ -75,470 +39,82 @@ impl std::fmt::Debug for PeerRpcManager {
}
}
struct PacketMerger {
first_piece: Option<TaRpcPacket>,
pieces: Vec<TaRpcPacket>,
}
impl PacketMerger {
fn new() -> Self {
Self {
first_piece: None,
pieces: Vec::new(),
}
}
fn try_merge_pieces(&self) -> Option<TaRpcPacket> {
if self.first_piece.is_none() || self.pieces.is_empty() {
return None;
}
for p in &self.pieces {
// some piece is missing
if p.total_pieces == 0 {
return None;
}
}
// all pieces are received
let mut content = Vec::new();
for p in &self.pieces {
content.extend_from_slice(&p.content);
}
let mut tmpl_packet = self.first_piece.as_ref().unwrap().clone();
tmpl_packet.total_pieces = 1;
tmpl_packet.piece_idx = 0;
tmpl_packet.content = content;
Some(tmpl_packet)
}
fn feed(
&mut self,
packet: ZCPacket,
expected_tid: Option<PeerRpcTransactId>,
) -> Result<Option<TaRpcPacket>, Error> {
let payload = packet.payload();
let rpc_packet =
TaRpcPacket::decode(payload).map_err(|e| Error::MessageDecodeError(e.to_string()))?;
if expected_tid.is_some() && rpc_packet.transact_id != expected_tid.unwrap() {
return Ok(None);
}
let total_pieces = rpc_packet.total_pieces;
let piece_idx = rpc_packet.piece_idx;
// for compatibility with old version
if total_pieces == 0 && piece_idx == 0 {
return Ok(Some(rpc_packet));
}
if total_pieces > 100 || total_pieces == 0 {
return Err(Error::MessageDecodeError(format!(
"total_pieces is invalid: {}",
total_pieces
)));
}
if piece_idx >= total_pieces {
return Err(Error::MessageDecodeError(
"piece_idx >= total_pieces".to_owned(),
));
}
if self.first_piece.is_none()
|| self.first_piece.as_ref().unwrap().transact_id != rpc_packet.transact_id
|| self.first_piece.as_ref().unwrap().from_peer != rpc_packet.from_peer
{
self.first_piece = Some(rpc_packet.clone());
self.pieces.clear();
}
self.pieces
.resize(total_pieces as usize, Default::default());
self.pieces[piece_idx as usize] = rpc_packet;
Ok(self.try_merge_pieces())
}
}
impl PeerRpcManager {
pub fn new(tspt: impl PeerRpcManagerTransport) -> Self {
Self {
service_map: Arc::new(DashMap::new()),
tasks: JoinSet::new(),
tspt: Arc::new(Box::new(tspt)),
rpc_client: rpc_impl::client::Client::new(),
rpc_server: rpc_impl::server::Server::new(),
service_registry: Arc::new(DashMap::new()),
peer_rpc_endpoints: Arc::new(DashMap::new()),
client_resp_receivers: Arc::new(DashMap::new()),
transact_id: AtomicU32::new(0),
tasks: Arc::new(Mutex::new(JoinSet::new())),
}
}
pub fn run_service<S, Req>(self: &Self, service_id: PeerRpcServiceId, s: S) -> ()
where
S: tarpc::server::Serve<Req> + Clone + Send + Sync + 'static,
Req: Send + 'static + serde::Serialize + for<'a> serde::Deserialize<'a>,
S::Resp:
Send + std::fmt::Debug + 'static + serde::Serialize + for<'a> serde::Deserialize<'a>,
S::Fut: Send + 'static,
{
let tspt = self.tspt.clone();
let creator = Box::new(move |peer_id: PeerId, transact_id: PeerRpcTransactId| {
let mut tasks = JoinSet::new();
let (packet_sender, mut packet_receiver) = mpsc::unbounded_channel();
let (mut client_transport, server_transport) = tarpc::transport::channel::unbounded();
let server = tarpc::server::BaseChannel::with_defaults(server_transport);
let finished = Arc::new(AtomicBool::new(false));
let my_peer_id_clone = tspt.my_peer_id();
let peer_id_clone = peer_id.clone();
let o = server.execute(s.clone());
tasks.spawn(o);
let tspt = tspt.clone();
let finished_clone = finished.clone();
tasks.spawn(async move {
let mut packet_merger = PacketMerger::new();
loop {
tokio::select! {
Some(resp) = client_transport.next() => {
tracing::debug!(resp = ?resp, ?transact_id, ?peer_id, "server recv packet from service provider");
if resp.is_err() {
tracing::warn!(err = ?resp.err(),
"[PEER RPC MGR] client_transport in server side got channel error, ignore it.");
continue;
}
let resp = resp.unwrap();
let serialized_resp = postcard::to_allocvec(&resp);
if serialized_resp.is_err() {
tracing::error!(error = ?serialized_resp.err(), "serialize resp failed");
continue;
}
let msgs = Self::build_rpc_packet(
tspt.my_peer_id(),
peer_id,
service_id,
transact_id,
false,
serialized_resp.as_ref().unwrap(),
);
for msg in msgs {
if let Err(e) = tspt.send(msg, peer_id).await {
tracing::error!(error = ?e, peer_id = ?peer_id, service_id = ?service_id, "send resp to peer failed");
break;
}
}
finished_clone.store(true, Ordering::Relaxed);
}
Some(packet) = packet_receiver.recv() => {
tracing::trace!("recv packet from peer, packet: {:?}", packet);
let info = match packet_merger.feed(packet, None) {
Err(e) => {
tracing::error!(error = ?e, "feed packet to merger failed");
continue;
},
Ok(None) => {
continue;
},
Ok(Some(info)) => {
info
}
};
assert_eq!(info.service_id, service_id);
assert_eq!(info.from_peer, peer_id);
assert_eq!(info.transact_id, transact_id);
let decoded_ret = postcard::from_bytes(&info.content.as_slice());
if let Err(e) = decoded_ret {
tracing::error!(error = ?e, "decode rpc packet failed");
continue;
}
let decoded: tarpc::ClientMessage<Req> = decoded_ret.unwrap();
if let Err(e) = client_transport.send(decoded).await {
tracing::error!(error = ?e, "send to req to client transport failed");
}
}
else => {
tracing::warn!("[PEER RPC MGR] service runner destroy, peer_id: {}, service_id: {}", peer_id, service_id);
}
}
}
}.instrument(tracing::info_span!("service_runner", my_id = ?my_peer_id_clone, peer_id = ?peer_id_clone, service_id = ?service_id)));
tracing::info!(
"[PEER RPC MGR] create new service endpoint for peer {}, service {}",
peer_id,
service_id
);
return PeerRpcEndPoint {
peer_id,
packet_sender,
create_time: AtomicCell::new(Instant::now()),
finished,
tasks,
};
// let resp = client_transport.next().await;
});
if let Some(_) = self.service_registry.insert(service_id, creator) {
panic!(
"[PEER RPC MGR] service {} is already registered",
service_id
);
}
tracing::info!(
"[PEER RPC MGR] register service {} succeed, my_node_id {}",
service_id,
self.tspt.my_peer_id()
)
}
fn parse_rpc_packet(packet: &ZCPacket) -> Result<TaRpcPacket, Error> {
let payload = packet.payload();
TaRpcPacket::decode(payload).map_err(|e| Error::MessageDecodeError(e.to_string()))
}
fn build_rpc_packet(
from_peer: PeerId,
to_peer: PeerId,
service_id: PeerRpcServiceId,
transact_id: PeerRpcTransactId,
is_req: bool,
content: &Vec<u8>,
) -> Vec<ZCPacket> {
let mut ret = Vec::new();
let content_mtu = RPC_PACKET_CONTENT_MTU;
let total_pieces = (content.len() + content_mtu - 1) / content_mtu;
let mut cur_offset = 0;
while cur_offset < content.len() {
let mut cur_len = content_mtu;
if cur_offset + cur_len > content.len() {
cur_len = content.len() - cur_offset;
}
let mut cur_content = Vec::new();
cur_content.extend_from_slice(&content[cur_offset..cur_offset + cur_len]);
let cur_packet = TaRpcPacket {
from_peer,
to_peer,
service_id,
transact_id,
is_req,
total_pieces: total_pieces as u32,
piece_idx: (cur_offset / content_mtu) as u32,
content: cur_content,
};
cur_offset += cur_len;
let mut buf = Vec::new();
cur_packet.encode(&mut buf).unwrap();
let mut zc_packet = ZCPacket::new_with_payload(&buf);
zc_packet.fill_peer_manager_hdr(from_peer, to_peer, PacketType::TaRpc as u8);
ret.push(zc_packet);
}
ret
}
pub fn run(&self) {
self.rpc_client.run();
self.rpc_server.run();
let (server_tx, mut server_rx) = (
self.rpc_server.get_transport_sink(),
self.rpc_server.get_transport_stream(),
);
let (client_tx, mut client_rx) = (
self.rpc_client.get_transport_sink(),
self.rpc_client.get_transport_stream(),
);
let tspt = self.tspt.clone();
let service_registry = self.service_registry.clone();
let peer_rpc_endpoints = self.peer_rpc_endpoints.clone();
let client_resp_receivers = self.client_resp_receivers.clone();
tokio::spawn(async move {
self.tasks.lock().unwrap().spawn(async move {
loop {
let packet = tokio::select! {
Some(Ok(packet)) = server_rx.next() => {
tracing::trace!(?packet, "recv rpc packet from server");
packet
}
Some(Ok(packet)) = client_rx.next() => {
tracing::trace!(?packet, "recv rpc packet from client");
packet
}
else => {
tracing::warn!("rpc transport read aborted, exiting");
break;
}
};
let dst_peer_id = packet.peer_manager_header().unwrap().to_peer_id.into();
if let Err(e) = tspt.send(packet, dst_peer_id).await {
tracing::error!(error = ?e, dst_peer_id = ?dst_peer_id, "send to peer failed");
}
}
});
let tspt = self.tspt.clone();
self.tasks.lock().unwrap().spawn(async move {
loop {
let Ok(o) = tspt.recv().await else {
tracing::warn!("peer rpc transport read aborted, exiting");
break;
};
let info = Self::parse_rpc_packet(&o).unwrap();
tracing::debug!(?info, "recv rpc packet from peer");
if info.is_req {
if !service_registry.contains_key(&info.service_id) {
tracing::warn!(
"service {} not found, my_node_id: {}",
info.service_id,
tspt.my_peer_id()
);
continue;
}
let endpoint = peer_rpc_endpoints
.entry(PeerRpcClientCtxKey(
info.from_peer,
info.service_id,
info.transact_id,
))
.or_insert_with(|| {
service_registry.get(&info.service_id).unwrap()(
info.from_peer,
info.transact_id,
)
});
endpoint.packet_sender.send(o).unwrap();
} else {
if let Some(a) = client_resp_receivers.get(&PeerRpcClientCtxKey(
info.from_peer,
info.service_id,
info.transact_id,
)) {
tracing::trace!("recv resp: {:?}", info);
if let Err(e) = a.send(o) {
tracing::error!(error = ?e, "send resp to client failed");
}
} else {
tracing::warn!("client resp receiver not found, info: {:?}", info);
}
if o.peer_manager_header().unwrap().packet_type == PacketType::RpcReq as u8 {
server_tx.send(o).await.unwrap();
continue;
} else if o.peer_manager_header().unwrap().packet_type == PacketType::RpcResp as u8
{
client_tx.send(o).await.unwrap();
continue;
}
}
});
let peer_rpc_endpoints = self.peer_rpc_endpoints.clone();
tokio::spawn(async move {
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
peer_rpc_endpoints.retain(|_, v| {
v.create_time.load().elapsed().as_secs() < 30
&& !v.finished.load(Ordering::Relaxed)
});
}
});
}
#[tracing::instrument(skip(f))]
pub async fn do_client_rpc_scoped<Resp, Req, RpcRet, Fut>(
&self,
service_id: PeerRpcServiceId,
dst_peer_id: PeerId,
f: impl FnOnce(UnboundedChannel<Resp, Req>) -> Fut,
) -> RpcRet
where
Resp: serde::Serialize
+ for<'a> serde::Deserialize<'a>
+ Send
+ Sync
+ std::fmt::Debug
+ 'static,
Req: serde::Serialize
+ for<'a> serde::Deserialize<'a>
+ Send
+ Sync
+ std::fmt::Debug
+ 'static,
Fut: std::future::Future<Output = RpcRet>,
{
let mut tasks = JoinSet::new();
let (packet_sender, mut packet_receiver) = mpsc::unbounded_channel();
pub fn rpc_client(&self) -> &rpc_impl::client::Client {
&self.rpc_client
}
let (client_transport, server_transport) =
tarpc::transport::channel::unbounded::<Resp, Req>();
let (mut server_s, mut server_r) = server_transport.split();
let transact_id = self
.transact_id
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let tspt = self.tspt.clone();
tasks.spawn(async move {
while let Some(a) = server_r.next().await {
if a.is_err() {
tracing::error!(error = ?a.err(), "channel error");
continue;
}
let req = postcard::to_allocvec(&a.unwrap());
if req.is_err() {
tracing::error!(error = ?req.err(), "bincode serialize failed");
continue;
}
let packets = Self::build_rpc_packet(
tspt.my_peer_id(),
dst_peer_id,
service_id,
transact_id,
true,
req.as_ref().unwrap(),
);
tracing::debug!(?packets, ?req, ?transact_id, "client send rpc packet to peer");
for packet in packets {
if let Err(e) = tspt.send(packet, dst_peer_id).await {
tracing::error!(error = ?e, dst_peer_id = ?dst_peer_id, "send to peer failed");
break;
}
}
}
tracing::warn!("[PEER RPC MGR] server trasport read aborted");
});
tasks.spawn(async move {
let mut packet_merger = PacketMerger::new();
while let Some(packet) = packet_receiver.recv().await {
tracing::trace!("tunnel recv: {:?}", packet);
let info = match packet_merger.feed(packet, Some(transact_id)) {
Err(e) => {
tracing::error!(error = ?e, "feed packet to merger failed");
continue;
}
Ok(None) => {
continue;
}
Ok(Some(info)) => info,
};
let decoded = postcard::from_bytes(&info.content.as_slice());
tracing::debug!(?info, ?decoded, "client recv rpc packet from peer");
assert_eq!(info.transact_id, transact_id);
if let Err(e) = decoded {
tracing::error!(error = ?e, "decode rpc packet failed");
continue;
}
if let Err(e) = server_s.send(decoded.unwrap()).await {
tracing::error!(error = ?e, "send to rpc server channel failed");
}
}
tracing::warn!("[PEER RPC MGR] server packet read aborted");
});
let key = PeerRpcClientCtxKey(dst_peer_id, service_id, transact_id);
let _insert_ret = self
.client_resp_receivers
.insert(key.clone(), packet_sender);
let ret = f(client_transport).await;
self.client_resp_receivers.remove(&key);
ret
pub fn rpc_server(&self) -> &rpc_impl::server::Server {
&self.rpc_server
}
pub fn my_peer_id(&self) -> PeerId {
@@ -546,9 +122,15 @@ impl PeerRpcManager {
}
}
impl Drop for PeerRpcManager {
fn drop(&mut self) {
tracing::debug!("PeerRpcManager drop, my_peer_id: {:?}", self.my_peer_id());
}
}
#[cfg(test)]
pub mod tests {
use std::{pin::Pin, sync::Arc, time::Duration};
use std::{pin::Pin, sync::Arc};
use futures::{SinkExt, StreamExt};
use tokio::sync::Mutex;
@@ -559,31 +141,18 @@ pub mod tests {
peer_rpc::PeerRpcManager,
tests::{connect_peer_manager, create_mock_peer_manager, wait_route_appear},
},
proto::{
rpc_impl::RpcController,
tests::{GreetingClientFactory, GreetingServer, GreetingService, SayHelloRequest},
},
tunnel::{
common::tests::wait_for_condition, packet_def::ZCPacket, ring::create_ring_tunnel_pair,
Tunnel, ZCPacketSink, ZCPacketStream,
packet_def::ZCPacket, ring::create_ring_tunnel_pair, Tunnel, ZCPacketSink,
ZCPacketStream,
},
};
use super::PeerRpcManagerTransport;
#[tarpc::service]
pub trait TestRpcService {
async fn hello(s: String) -> String;
}
#[derive(Clone)]
pub struct MockService {
pub prefix: String,
}
#[tarpc::server]
impl TestRpcService for MockService {
async fn hello(self, _: tarpc::context::Context, s: String) -> String {
format!("{} {}", self.prefix, s)
}
}
fn random_string(len: usize) -> String {
use rand::distributions::Alphanumeric;
use rand::Rng;
@@ -595,6 +164,16 @@ pub mod tests {
String::from_utf8(s).unwrap()
}
pub fn register_service(rpc_mgr: &PeerRpcManager, domain: &str, delay_ms: u64, prefix: &str) {
rpc_mgr.rpc_server().registry().register(
GreetingServer::new(GreetingService {
delay_ms,
prefix: prefix.to_string(),
}),
domain,
);
}
#[tokio::test]
async fn peer_rpc_basic_test() {
struct MockTransport {
@@ -630,10 +209,7 @@ pub mod tests {
my_peer_id: new_peer_id(),
});
server_rpc_mgr.run();
let s = MockService {
prefix: "hello".to_owned(),
};
server_rpc_mgr.run_service(1, s.serve());
register_service(&server_rpc_mgr, "test", 0, "Hello");
let client_rpc_mgr = PeerRpcManager::new(MockTransport {
sink: Arc::new(Mutex::new(stsr)),
@@ -642,35 +218,27 @@ pub mod tests {
});
client_rpc_mgr.run();
let stub = client_rpc_mgr
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(1, 1, "test".to_string());
let msg = random_string(8192);
let ret = client_rpc_mgr
.do_client_rpc_scoped(1, server_rpc_mgr.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
println!("ret: {:?}", ret);
assert_eq!(ret.unwrap(), format!("hello {}", msg));
assert_eq!(ret.greeting, format!("Hello {}!", msg));
let msg = random_string(10);
let ret = client_rpc_mgr
.do_client_rpc_scoped(1, server_rpc_mgr.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
println!("ret: {:?}", ret);
assert_eq!(ret.unwrap(), format!("hello {}", msg));
wait_for_condition(
|| async { server_rpc_mgr.peer_rpc_endpoints.is_empty() },
Duration::from_secs(10),
)
.await;
assert_eq!(ret.greeting, format!("Hello {}!", msg));
}
#[tokio::test]
@@ -680,6 +248,7 @@ pub mod tests {
let peer_mgr_c = create_mock_peer_manager().await;
connect_peer_manager(peer_mgr_a.clone(), peer_mgr_b.clone()).await;
connect_peer_manager(peer_mgr_b.clone(), peer_mgr_c.clone()).await;
wait_route_appear(peer_mgr_a.clone(), peer_mgr_b.clone())
.await
.unwrap();
@@ -699,51 +268,42 @@ pub mod tests {
peer_mgr_b.my_peer_id()
);
let s = MockService {
prefix: "hello".to_owned(),
};
peer_mgr_b.get_peer_rpc_mgr().run_service(1, s.serve());
register_service(&peer_mgr_b.get_peer_rpc_mgr(), "test", 0, "Hello");
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
let stub = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
println!("ip_list: {:?}", ip_list);
assert_eq!(ip_list.unwrap(), format!("hello {}", msg));
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id(),
peer_mgr_b.my_peer_id(),
"test".to_string(),
);
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
// call again
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
println!("ip_list: {:?}", ip_list);
assert_eq!(ip_list.unwrap(), format!("hello {}", msg));
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_c
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
println!("ip_list: {:?}", ip_list);
assert_eq!(ip_list.unwrap(), format!("hello {}", msg));
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
}
#[tokio::test]
async fn test_multi_service_with_peer_manager() {
async fn test_multi_domain_with_peer_manager() {
let peer_mgr_a = create_mock_peer_manager().await;
let peer_mgr_b = create_mock_peer_manager().await;
connect_peer_manager(peer_mgr_a.clone(), peer_mgr_b.clone()).await;
@@ -757,42 +317,37 @@ pub mod tests {
peer_mgr_b.my_peer_id()
);
let s = MockService {
prefix: "hello_a".to_owned(),
};
peer_mgr_b.get_peer_rpc_mgr().run_service(1, s.serve());
let b = MockService {
prefix: "hello_b".to_owned(),
};
peer_mgr_b.get_peer_rpc_mgr().run_service(2, b.serve());
register_service(&peer_mgr_b.get_peer_rpc_mgr(), "test1", 0, "Hello");
register_service(&peer_mgr_b.get_peer_rpc_mgr(), "test2", 20000, "Hello2");
let stub1 = peer_mgr_a
.get_peer_rpc_mgr()
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id(),
peer_mgr_b.my_peer_id(),
"test1".to_string(),
);
let stub2 = peer_mgr_a
.get_peer_rpc_mgr()
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id(),
peer_mgr_b.my_peer_id(),
"test2".to_string(),
);
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
let ret = stub1
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
let ret = stub2
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await;
assert_eq!(ip_list.unwrap(), format!("hello_a {}", msg));
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(2, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
assert_eq!(ip_list.unwrap(), format!("hello_b {}", msg));
wait_for_condition(
|| async { peer_mgr_b.get_peer_rpc_mgr().peer_rpc_endpoints.is_empty() },
Duration::from_secs(10),
)
.await;
assert!(ret.is_err() && ret.unwrap_err().to_string().contains("Timeout"));
}
}


@@ -0,0 +1,39 @@
use crate::{
common::global_ctx::ArcGlobalCtx,
proto::{
peer_rpc::{DirectConnectorRpc, GetIpListRequest, GetIpListResponse},
rpc_types::{self, controller::BaseController},
},
};
#[derive(Clone)]
pub struct DirectConnectorManagerRpcServer {
// TODO: this only caches for one src peer; should make it global
global_ctx: ArcGlobalCtx,
}
#[async_trait::async_trait]
impl DirectConnectorRpc for DirectConnectorManagerRpcServer {
type Controller = BaseController;
async fn get_ip_list(
&self,
_: BaseController,
_: GetIpListRequest,
) -> rpc_types::error::Result<GetIpListResponse> {
let mut ret = self.global_ctx.get_ip_collector().collect_ip_addrs().await;
ret.listeners = self
.global_ctx
.get_running_listeners()
.into_iter()
.map(Into::into)
.collect();
Ok(ret)
}
}
impl DirectConnectorManagerRpcServer {
pub fn new(global_ctx: ArcGlobalCtx) -> Self {
Self { global_ctx }
}
}
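For readers unfamiliar with the new rpc framework: the generated service traits used above are ordinary #[async_trait] traits with an associated Controller type, and a server is just a struct that implements one. A minimal, self-contained sketch of the same pattern follows; every name in it is a hypothetical stand-in, not the generated API.

// Self-contained sketch of the server-side pattern above; all names are
// hypothetical stand-ins for the prost/rpc-generated types, not the real API.
use async_trait::async_trait;

struct BaseController;                                  // stand-in controller type
struct GetIpListRequest;                                // stand-in request message
struct GetIpListResponse { listeners: Vec<String> }     // stand-in response message

#[async_trait]
trait DirectConnectorRpcLike {
    type Controller;
    async fn get_ip_list(
        &self,
        ctrl: Self::Controller,
        req: GetIpListRequest,
    ) -> Result<GetIpListResponse, String>;
}

struct MyServer;

#[async_trait]
impl DirectConnectorRpcLike for MyServer {
    type Controller = BaseController;
    async fn get_ip_list(
        &self,
        _: BaseController,
        _: GetIpListRequest,
    ) -> Result<GetIpListResponse, String> {
        // A real implementation would collect addresses from the global context.
        Ok(GetIpListResponse { listeners: vec!["tcp://0.0.0.0:11010".to_string()] })
    }
}

#[tokio::main]
async fn main() {
    let resp = MyServer.get_ip_list(BaseController, GetIpListRequest).await.unwrap();
    println!("{:?}", resp.listeners);
}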


@@ -1,9 +1,13 @@
use std::{net::Ipv4Addr, sync::Arc};
use async_trait::async_trait;
use tokio_util::bytes::Bytes;
use dashmap::DashMap;
use crate::common::{error::Error, PeerId};
use crate::{
common::{global_ctx::NetworkIdentity, PeerId},
proto::peer_rpc::{
ForeignNetworkRouteInfoEntry, ForeignNetworkRouteInfoKey, RouteForeignNetworkInfos,
},
};
#[derive(Clone, Debug)]
pub enum NextHopPolicy {
@@ -17,16 +21,16 @@ impl Default for NextHopPolicy {
}
}
#[async_trait]
pub type ForeignNetworkRouteInfoMap =
DashMap<ForeignNetworkRouteInfoKey, ForeignNetworkRouteInfoEntry>;
#[async_trait::async_trait]
pub trait RouteInterface {
async fn list_peers(&self) -> Vec<PeerId>;
async fn send_route_packet(
&self,
msg: Bytes,
route_id: u8,
dst_peer_id: PeerId,
) -> Result<(), Error>;
fn my_peer_id(&self) -> PeerId;
async fn list_foreign_networks(&self) -> ForeignNetworkRouteInfoMap {
DashMap::new()
}
}
pub type RouteInterfaceBox = Box<dyn RouteInterface + Send + Sync>;
@@ -56,7 +60,7 @@ impl RouteCostCalculatorInterface for DefaultRouteCostCalculator {}
pub type RouteCostCalculator = Box<dyn RouteCostCalculatorInterface>;
#[async_trait]
#[async_trait::async_trait]
#[auto_impl::auto_impl(Box, Arc)]
pub trait Route {
async fn open(&self, interface: RouteInterfaceBox) -> Result<u8, ()>;
@@ -71,12 +75,23 @@ pub trait Route {
self.get_next_hop(peer_id).await
}
async fn list_routes(&self) -> Vec<crate::rpc::Route>;
async fn list_routes(&self) -> Vec<crate::proto::cli::Route>;
async fn get_peer_id_by_ipv4(&self, _ipv4: &Ipv4Addr) -> Option<PeerId> {
None
}
async fn list_peers_own_foreign_network(
&self,
_network_identity: &NetworkIdentity,
) -> Vec<PeerId> {
vec![]
}
async fn list_foreign_network_info(&self) -> RouteForeignNetworkInfos {
Default::default()
}
async fn set_route_cost_fn(&self, _cost_fn: RouteCostCalculator) {}
async fn dump(&self) -> String {


@@ -1,14 +1,18 @@
use std::sync::Arc;
use crate::rpc::{
cli::PeerInfo, peer_manage_rpc_server::PeerManageRpc, DumpRouteRequest, DumpRouteResponse,
ListForeignNetworkRequest, ListForeignNetworkResponse, ListPeerRequest, ListPeerResponse,
ListRouteRequest, ListRouteResponse,
use crate::proto::{
cli::{
DumpRouteRequest, DumpRouteResponse, ListForeignNetworkRequest, ListForeignNetworkResponse,
ListGlobalForeignNetworkRequest, ListGlobalForeignNetworkResponse, ListPeerRequest,
ListPeerResponse, ListRouteRequest, ListRouteResponse, PeerInfo, PeerManageRpc,
ShowNodeInfoRequest, ShowNodeInfoResponse,
},
rpc_types::{self, controller::BaseController},
};
use tonic::{Request, Response, Status};
use super::peer_manager::PeerManager;
#[derive(Clone)]
pub struct PeerManagerRpcService {
peer_manager: Arc<PeerManager>,
}
@@ -19,7 +23,15 @@ impl PeerManagerRpcService {
}
pub async fn list_peers(&self) -> Vec<PeerInfo> {
let peers = self.peer_manager.get_peer_map().list_peers().await;
let mut peers = self.peer_manager.get_peer_map().list_peers().await;
peers.extend(
self.peer_manager
.get_foreign_network_client()
.get_peer_map()
.list_peers()
.await
.iter(),
);
let mut peer_infos = Vec::new();
for peer in peers {
let mut peer_info = PeerInfo::default();
@@ -27,6 +39,14 @@ impl PeerManagerRpcService {
if let Some(conns) = self.peer_manager.get_peer_map().list_peer_conns(peer).await {
peer_info.conns = conns;
} else if let Some(conns) = self
.peer_manager
.get_foreign_network_client()
.get_peer_map()
.list_peer_conns(peer)
.await
{
peer_info.conns = conns;
}
peer_infos.push(peer_info);
@@ -36,12 +56,14 @@ impl PeerManagerRpcService {
}
}
#[tonic::async_trait]
#[async_trait::async_trait]
impl PeerManageRpc for PeerManagerRpcService {
type Controller = BaseController;
async fn list_peer(
&self,
_request: Request<ListPeerRequest>, // Accept request of type HelloRequest
) -> Result<Response<ListPeerResponse>, Status> {
_: BaseController,
_request: ListPeerRequest,
) -> Result<ListPeerResponse, rpc_types::error::Error> {
let mut reply = ListPeerResponse::default();
let peers = self.list_peers().await;
@@ -49,36 +71,57 @@ impl PeerManageRpc for PeerManagerRpcService {
reply.peer_infos.push(peer);
}
Ok(Response::new(reply))
Ok(reply)
}
async fn list_route(
&self,
_request: Request<ListRouteRequest>, // Accept request of type HelloRequest
) -> Result<Response<ListRouteResponse>, Status> {
_: BaseController,
_request: ListRouteRequest,
) -> Result<ListRouteResponse, rpc_types::error::Error> {
let mut reply = ListRouteResponse::default();
reply.routes = self.peer_manager.list_routes().await;
Ok(Response::new(reply))
Ok(reply)
}
async fn dump_route(
&self,
_request: Request<DumpRouteRequest>, // Accept request of type HelloRequest
) -> Result<Response<DumpRouteResponse>, Status> {
_: BaseController,
_request: DumpRouteRequest,
) -> Result<DumpRouteResponse, rpc_types::error::Error> {
let mut reply = DumpRouteResponse::default();
reply.result = self.peer_manager.dump_route().await;
Ok(Response::new(reply))
Ok(reply)
}
async fn list_foreign_network(
&self,
_request: Request<ListForeignNetworkRequest>, // Accept request of type HelloRequest
) -> Result<Response<ListForeignNetworkResponse>, Status> {
_: BaseController,
_request: ListForeignNetworkRequest,
) -> Result<ListForeignNetworkResponse, rpc_types::error::Error> {
let reply = self
.peer_manager
.get_foreign_network_manager()
.list_foreign_networks()
.await;
Ok(Response::new(reply))
Ok(reply)
}
async fn list_global_foreign_network(
&self,
_: BaseController,
_request: ListGlobalForeignNetworkRequest,
) -> Result<ListGlobalForeignNetworkResponse, rpc_types::error::Error> {
Ok(self.peer_manager.list_global_foreign_network().await)
}
async fn show_node_info(
&self,
_: BaseController,
_request: ShowNodeInfoRequest,
) -> Result<ShowNodeInfoResponse, rpc_types::error::Error> {
Ok(ShowNodeInfoResponse {
node_info: Some(self.peer_manager.get_my_info()),
})
}
}


@@ -1,4 +1,7 @@
syntax = "proto3";
import "common.proto";
package cli;
message Status {
@@ -16,20 +19,16 @@ message PeerConnStats {
uint64 latency_us = 5;
}
message TunnelInfo {
string tunnel_type = 1;
string local_addr = 2;
string remote_addr = 3;
}
message PeerConnInfo {
string conn_id = 1;
uint32 my_peer_id = 2;
uint32 peer_id = 3;
repeated string features = 4;
TunnelInfo tunnel = 5;
common.TunnelInfo tunnel = 5;
PeerConnStats stats = 6;
float loss_rate = 7;
bool is_client = 8;
string network_name = 9;
}
message PeerInfo {
@@ -39,27 +38,9 @@ message PeerInfo {
message ListPeerRequest {}
message ListPeerResponse { repeated PeerInfo peer_infos = 1; }
enum NatType {
// has NAT; but own a single public IP, port is not changed
Unknown = 0;
OpenInternet = 1;
NoPAT = 2;
FullCone = 3;
Restricted = 4;
PortRestricted = 5;
Symmetric = 6;
SymUdpFirewall = 7;
}
message StunInfo {
NatType udp_nat_type = 1;
NatType tcp_nat_type = 2;
int64 last_update_time = 3;
repeated string public_ip = 4;
uint32 min_port = 5;
uint32 max_port = 6;
message ListPeerResponse {
repeated PeerInfo peer_infos = 1;
NodeInfo my_info = 2;
}
message Route {
@@ -69,10 +50,29 @@ message Route {
int32 cost = 4;
repeated string proxy_cidrs = 5;
string hostname = 6;
StunInfo stun_info = 7;
common.StunInfo stun_info = 7;
string inst_id = 8;
string version = 9;
common.PeerFeatureFlag feature_flag = 10;
}
message NodeInfo {
uint32 peer_id = 1;
string ipv4_addr = 2;
repeated string proxy_cidrs = 3;
string hostname = 4;
common.StunInfo stun_info = 5;
string inst_id = 6;
repeated string listeners = 7;
string config = 8;
string version = 9;
common.PeerFeatureFlag feature_flag = 10;
}
message ShowNodeInfoRequest {}
message ShowNodeInfoResponse { NodeInfo node_info = 1; }
message ListRouteRequest {}
message ListRouteResponse { repeated Route routes = 1; }
@@ -83,18 +83,41 @@ message DumpRouteResponse { string result = 1; }
message ListForeignNetworkRequest {}
message ForeignNetworkEntryPb { repeated PeerInfo peers = 1; }
message ForeignNetworkEntryPb {
repeated PeerInfo peers = 1;
bytes network_secret_digest = 2;
}
message ListForeignNetworkResponse {
// foreign networks on the local node
map<string, ForeignNetworkEntryPb> foreign_networks = 1;
}
message ListGlobalForeignNetworkRequest {}
message ListGlobalForeignNetworkResponse {
// foreign networks across the entire network
message OneForeignNetwork {
string network_name = 1;
repeated uint32 peer_ids = 2;
string last_updated = 3;
uint32 version = 4;
}
message ForeignNetworks { repeated OneForeignNetwork foreign_networks = 1; }
map<uint32, ForeignNetworks> foreign_networks = 1;
}
service PeerManageRpc {
rpc ListPeer(ListPeerRequest) returns (ListPeerResponse);
rpc ListRoute(ListRouteRequest) returns (ListRouteResponse);
rpc DumpRoute(DumpRouteRequest) returns (DumpRouteResponse);
rpc ListForeignNetwork(ListForeignNetworkRequest)
returns (ListForeignNetworkResponse);
rpc ListGlobalForeignNetwork(ListGlobalForeignNetworkRequest)
returns (ListGlobalForeignNetworkResponse);
rpc ShowNodeInfo(ShowNodeInfoRequest) returns (ShowNodeInfoResponse);
}
enum ConnectorStatus {
@@ -104,7 +127,7 @@ enum ConnectorStatus {
}
message Connector {
string url = 1;
common.Url url = 1;
ConnectorStatus status = 2;
}
@@ -119,7 +142,7 @@ enum ConnectorManageAction {
message ManageConnectorRequest {
ConnectorManageAction action = 1;
string url = 2;
common.Url url = 2;
}
message ManageConnectorResponse {}
@@ -129,23 +152,6 @@ service ConnectorManageRpc {
rpc ManageConnector(ManageConnectorRequest) returns (ManageConnectorResponse);
}
message DirectConnectedPeerInfo { int32 latency_ms = 1; }
message PeerInfoForGlobalMap {
map<uint32, DirectConnectedPeerInfo> direct_peers = 1;
}
message GetGlobalPeerMapRequest {}
message GetGlobalPeerMapResponse {
map<uint32, PeerInfoForGlobalMap> global_peer_map = 1;
}
service PeerCenterRpc {
rpc GetGlobalPeerMap(GetGlobalPeerMapRequest)
returns (GetGlobalPeerMapResponse);
}
message VpnPortalInfo {
string vpn_type = 1;
string client_config = 2;
@@ -159,24 +165,3 @@ service VpnPortalRpc {
rpc GetVpnPortalInfo(GetVpnPortalInfoRequest)
returns (GetVpnPortalInfoResponse);
}
message HandshakeRequest {
uint32 magic = 1;
uint32 my_peer_id = 2;
uint32 version = 3;
repeated string features = 4;
string network_name = 5;
bytes network_secret_digrest = 6;
}
message TaRpcPacket {
uint32 from_peer = 1;
uint32 to_peer = 2;
uint32 service_id = 3;
uint32 transact_id = 4;
bool is_req = 5;
bytes content = 6;
uint32 total_pieces = 7;
uint32 piece_idx = 8;
}


@@ -0,0 +1 @@
include!(concat!(env!("OUT_DIR"), "/cli.rs"));


@@ -0,0 +1,99 @@
syntax = "proto3";
import "error.proto";
package common;
message RpcDescriptor {
// allow the same service to be registered multiple times under different domains
string domain_name = 1;
string proto_name = 2;
string service_name = 3;
uint32 method_index = 4;
}
message RpcRequest {
RpcDescriptor descriptor = 1;
bytes request = 2;
int32 timeout_ms = 3;
}
message RpcResponse {
bytes response = 1;
error.Error error = 2;
uint64 runtime_us = 3;
}
message RpcPacket {
uint32 from_peer = 1;
uint32 to_peer = 2;
int64 transaction_id = 3;
RpcDescriptor descriptor = 4;
bytes body = 5;
bool is_request = 6;
uint32 total_pieces = 7;
uint32 piece_idx = 8;
int32 trace_id = 9;
}
message UUID {
uint64 high = 1;
uint64 low = 2;
}
enum NatType {
Unknown = 0;
OpenInternet = 1;
// has NAT, but owns a single public IP; the port is not changed
NoPAT = 2;
FullCone = 3;
Restricted = 4;
PortRestricted = 5;
Symmetric = 6;
SymUdpFirewall = 7;
}
message Ipv4Addr { uint32 addr = 1; }
message Ipv6Addr {
uint32 part1 = 1;
uint32 part2 = 2;
uint32 part3 = 3;
uint32 part4 = 4;
}
message Url { string url = 1; }
message SocketAddr {
oneof ip {
Ipv4Addr ipv4 = 1;
Ipv6Addr ipv6 = 2;
};
uint32 port = 3;
}
message TunnelInfo {
string tunnel_type = 1;
common.Url local_addr = 2;
common.Url remote_addr = 3;
}
message StunInfo {
NatType udp_nat_type = 1;
NatType tcp_nat_type = 2;
int64 last_update_time = 3;
repeated string public_ip = 4;
uint32 min_port = 5;
uint32 max_port = 6;
}
message PeerFeatureFlag {
bool is_public_server = 1;
bool no_relay_data = 2;
}
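The total_pieces / piece_idx fields in RpcPacket let a large serialized RpcRequest be split across several frames and reassembled on the receiving side (this is what the PacketMerger in the peer rpc manager does, keyed by transaction_id). A minimal, self-contained sketch of the splitting step, using a hypothetical mirror struct rather than the generated prost type and an assumed chunk size:

// Hypothetical mirror of a few RpcPacket fields from common.proto; the real type
// is generated by prost and carries more fields (descriptor, trace_id, ...).
#[derive(Debug)]
struct RpcPacketPiece {
    transaction_id: i64,
    is_request: bool,
    total_pieces: u32,
    piece_idx: u32,
    body: Vec<u8>,
}

// Split a serialized request body into fixed-size pieces (chunk size is an assumption).
fn split_into_pieces(transaction_id: i64, body: &[u8], chunk: usize) -> Vec<RpcPacketPiece> {
    let total = body.chunks(chunk).count() as u32;
    body.chunks(chunk)
        .enumerate()
        .map(|(idx, part)| RpcPacketPiece {
            transaction_id,
            is_request: true,
            total_pieces: total,
            piece_idx: idx as u32,
            body: part.to_vec(),
        })
        .collect()
}

fn main() {
    let pieces = split_into_pieces(1, &[0u8; 5000], 2048);
    assert_eq!(pieces.len(), 3);
    assert!(pieces.iter().all(|p| p.total_pieces == 3));
    println!("{} pieces", pieces.len());
}

The receiving side buffers pieces per transaction_id until every piece_idx up to total_pieces has arrived, then decodes the concatenated body.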


@@ -0,0 +1,137 @@
use std::{fmt::Display, str::FromStr};
include!(concat!(env!("OUT_DIR"), "/common.rs"));
impl From<uuid::Uuid> for Uuid {
fn from(uuid: uuid::Uuid) -> Self {
let (high, low) = uuid.as_u64_pair();
Uuid { low, high }
}
}
impl From<Uuid> for uuid::Uuid {
fn from(uuid: Uuid) -> Self {
uuid::Uuid::from_u64_pair(uuid.high, uuid.low)
}
}
impl Display for Uuid {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", uuid::Uuid::from(self.clone()))
}
}
impl From<std::net::Ipv4Addr> for Ipv4Addr {
fn from(value: std::net::Ipv4Addr) -> Self {
Self {
addr: u32::from_be_bytes(value.octets()),
}
}
}
impl From<Ipv4Addr> for std::net::Ipv4Addr {
fn from(value: Ipv4Addr) -> Self {
std::net::Ipv4Addr::from(value.addr)
}
}
impl ToString for Ipv4Addr {
fn to_string(&self) -> String {
std::net::Ipv4Addr::from(self.addr).to_string()
}
}
impl From<std::net::Ipv6Addr> for Ipv6Addr {
fn from(value: std::net::Ipv6Addr) -> Self {
let b = value.octets();
Self {
part1: u32::from_be_bytes([b[0], b[1], b[2], b[3]]),
part2: u32::from_be_bytes([b[4], b[5], b[6], b[7]]),
part3: u32::from_be_bytes([b[8], b[9], b[10], b[11]]),
part4: u32::from_be_bytes([b[12], b[13], b[14], b[15]]),
}
}
}
impl From<Ipv6Addr> for std::net::Ipv6Addr {
fn from(value: Ipv6Addr) -> Self {
let part1 = value.part1.to_be_bytes();
let part2 = value.part2.to_be_bytes();
let part3 = value.part3.to_be_bytes();
let part4 = value.part4.to_be_bytes();
std::net::Ipv6Addr::from([
part1[0], part1[1], part1[2], part1[3],
part2[0], part2[1], part2[2], part2[3],
part3[0], part3[1], part3[2], part3[3],
part4[0], part4[1], part4[2], part4[3]
])
}
}
impl ToString for Ipv6Addr {
fn to_string(&self) -> String {
std::net::Ipv6Addr::from(self.clone()).to_string()
}
}
impl From<url::Url> for Url {
fn from(value: url::Url) -> Self {
Url {
url: value.to_string(),
}
}
}
impl From<Url> for url::Url {
fn from(value: Url) -> Self {
url::Url::parse(&value.url).unwrap()
}
}
impl FromStr for Url {
type Err = url::ParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(Url {
url: s.parse::<url::Url>()?.to_string(),
})
}
}
impl Display for Url {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.url)
}
}
impl From<std::net::SocketAddr> for SocketAddr {
fn from(value: std::net::SocketAddr) -> Self {
match value {
std::net::SocketAddr::V4(v4) => SocketAddr {
ip: Some(socket_addr::Ip::Ipv4(v4.ip().clone().into())),
port: v4.port() as u32,
},
std::net::SocketAddr::V6(v6) => SocketAddr {
ip: Some(socket_addr::Ip::Ipv6(v6.ip().clone().into())),
port: v6.port() as u32,
},
}
}
}
impl From<SocketAddr> for std::net::SocketAddr {
fn from(value: SocketAddr) -> Self {
match value.ip.unwrap() {
socket_addr::Ip::Ipv4(ip) => std::net::SocketAddr::V4(std::net::SocketAddrV4::new(
std::net::Ipv4Addr::from(ip),
value.port as u16,
)),
socket_addr::Ip::Ipv6(ip) => std::net::SocketAddr::V6(std::net::SocketAddrV6::new(
std::net::Ipv6Addr::from(ip),
value.port as u16,
0,
0,
)),
}
}
}
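As a sanity check on the encoding above: the proto Ipv4Addr wrapper stores the four octets as a single big-endian u32, which round-trips cleanly through std::net::Ipv4Addr. A small self-contained illustration using only std types (not the generated ones):

// Illustrates the big-endian u32 packing used by the proto Ipv4Addr wrapper above.
fn main() {
    let std_ip = std::net::Ipv4Addr::new(10, 126, 126, 1);
    // Pack: same expression as the From<std::net::Ipv4Addr> impl above.
    let packed: u32 = u32::from_be_bytes(std_ip.octets());
    assert_eq!(packed, 0x0a7e_7e01);
    // Unpack: std::net::Ipv4Addr::from(u32) also interprets the value as big-endian.
    assert_eq!(std::net::Ipv4Addr::from(packed), std_ip);
    println!("{} <-> 0x{:08x}", std_ip, packed);
}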


@@ -0,0 +1,34 @@
syntax = "proto3";
package error;
message OtherError { string error_message = 1; }
message InvalidMethodIndex {
string service_name = 1;
uint32 method_index = 2;
}
message InvalidService { string service_name = 1; }
message ProstDecodeError {}
message ProstEncodeError {}
message ExecuteError { string error_message = 1; }
message MalformatRpcPacket { string error_message = 1; }
message Timeout { string error_message = 1; }
message Error {
oneof error {
OtherError other_error = 1;
InvalidMethodIndex invalid_method_index = 2;
InvalidService invalid_service = 3;
ProstDecodeError prost_decode_error = 4;
ProstEncodeError prost_encode_error = 5;
ExecuteError execute_error = 6;
MalformatRpcPacket malformat_rpc_packet = 7;
Timeout timeout = 8;
}
}


@@ -0,0 +1,84 @@
use prost::DecodeError;
use super::rpc_types;
include!(concat!(env!("OUT_DIR"), "/error.rs"));
impl From<&rpc_types::error::Error> for Error {
fn from(e: &rpc_types::error::Error) -> Self {
use super::error::error::Error as ProtoError;
match e {
rpc_types::error::Error::ExecutionError(e) => Self {
error: Some(ProtoError::ExecuteError(ExecuteError {
error_message: e.to_string(),
})),
},
rpc_types::error::Error::DecodeError(_) => Self {
error: Some(ProtoError::ProstDecodeError(ProstDecodeError {})),
},
rpc_types::error::Error::EncodeError(_) => Self {
error: Some(ProtoError::ProstEncodeError(ProstEncodeError {})),
},
rpc_types::error::Error::InvalidMethodIndex(m, s) => Self {
error: Some(ProtoError::InvalidMethodIndex(InvalidMethodIndex {
method_index: *m as u32,
service_name: s.to_string(),
})),
},
rpc_types::error::Error::InvalidServiceKey(s, _) => Self {
error: Some(ProtoError::InvalidService(InvalidService {
service_name: s.to_string(),
})),
},
rpc_types::error::Error::MalformatRpcPacket(e) => Self {
error: Some(ProtoError::MalformatRpcPacket(MalformatRpcPacket {
error_message: e.to_string(),
})),
},
rpc_types::error::Error::Timeout(e) => Self {
error: Some(ProtoError::Timeout(Timeout {
error_message: e.to_string(),
})),
},
#[allow(unreachable_patterns)]
e => Self {
error: Some(ProtoError::OtherError(OtherError {
error_message: e.to_string(),
})),
},
}
}
}
impl From<&Error> for rpc_types::error::Error {
fn from(e: &Error) -> Self {
use super::error::error::Error as ProtoError;
match &e.error {
Some(ProtoError::ExecuteError(e)) => {
Self::ExecutionError(anyhow::anyhow!(e.error_message.clone()))
}
Some(ProtoError::ProstDecodeError(_)) => {
Self::DecodeError(DecodeError::new("decode error"))
}
Some(ProtoError::ProstEncodeError(_)) => {
Self::DecodeError(DecodeError::new("encode error"))
}
Some(ProtoError::InvalidMethodIndex(e)) => {
Self::InvalidMethodIndex(e.method_index as u8, e.service_name.clone())
}
Some(ProtoError::InvalidService(e)) => {
Self::InvalidServiceKey(e.service_name.clone(), "".to_string())
}
Some(ProtoError::MalformatRpcPacket(e)) => {
Self::MalformatRpcPacket(e.error_message.clone())
}
Some(ProtoError::Timeout(e)) => {
Self::ExecutionError(anyhow::anyhow!(e.error_message.clone()))
}
Some(ProtoError::OtherError(e)) => {
Self::ExecutionError(anyhow::anyhow!(e.error_message.clone()))
}
None => Self::ExecutionError(anyhow::anyhow!("unknown error {:?}", e)),
}
}
}

Some files were not shown because too many files have changed in this diff.