3.18.4 virtual private network

This commit is contained in:
TenderIronh
2024-07-14 11:00:43 +08:00
parent e7b696c474
commit 9dda148232
62 changed files with 3846 additions and 899 deletions

.gitignore vendored
View File

@@ -15,4 +15,9 @@ libs/
 openp2p.app.jks
 openp2p.aar
 openp2p-sources.jar
-build.gradle
+wintun/
+wintun.dll
+.vscode/
+app/.idea/
+*_debug_bin*
+cmd/openp2p

View File

@@ -19,9 +19,9 @@
 [查看详细](#安全性)
 ### 4. 轻量
-文件大小2MB+,运行内存2MB+,全部在应用层实现,没有虚拟网卡,没有内核程序
+文件大小2MB+,运行内存2MB+,它可以仅跑在应用层,或者配合wintun驱动使用组网功能
 ### 5. 跨平台
-因为轻量,所以很容易支持各个平台。支持主流的操作系统(Windows,Linux,MacOS)和主流的cpu架构(386、amd64、arm、arm64、mipsle、mipsle64、mips、mips64)
+因为轻量,所以很容易支持各个平台。支持主流的操作系统(Windows,Linux,MacOS)和主流的cpu架构(386、amd64、arm、arm64、mipsle、mipsle64、mips、mips64、s390x、ppc64le)
 ### 6. 高效
 P2P直连可以让你的设备跑满带宽。不论你的设备在任何网络环境,无论NAT1-4(Cone或Symmetric),UDP或TCP打洞,UPNP,IPv6都支持。依靠Quic协议优秀的拥塞算法,能在糟糕的网络环境获得高带宽低延时。
@@ -128,13 +128,15 @@ CGO_ENABLED=0 env GOOS=linux GOARCH=amd64 go build -o openp2p --ldflags '-s -w '
 6. 客户端提供WebUI
 7. ~~支持自有服务器,开源服务器程序~~(100%)
 8. 共享节点调度模型优化,对不同的运营商优化
-9. 方便二次开发,提供API和lib
+9. ~~方便二次开发,提供API和lib~~(100%)
 10. ~~应用层支持UDP协议,实现很简单,但UDP应用较少,暂不急~~(100%)
-11. 底层通信支持KCP协议,目前仅支持Quic;KCP专门对延时优化,被游戏加速器广泛使用,可以牺牲一定的带宽降低延时
+11. ~~底层通信支持KCP协议,目前仅支持Quic;KCP专门对延时优化,被游戏加速器广泛使用,可以牺牲一定的带宽降低延时~~(100%)
 12. ~~支持Android系统,让旧手机焕发青春,变成移动网关~~(100%)
-13. 支持Windows网上邻居共享文件
+13. ~~支持Windows网上邻居共享文件~~(100%)
-14. 内网直连优化,用处不大,估计就用户测试时用到
+14. ~~内网直连优化~~(100%)
 15. ~~支持UPNP~~(100%)
+16. ~~支持Android~~(100%)
+17. 支持IOS
 远期计划:
 1. 利用区块链技术去中心化,让共享设备的用户有收益,从而促进更多用户共享,达到正向闭环。

View File

@@ -19,10 +19,10 @@ The code is open source, the P2P tunnel uses TLS1.3+AES double encryption, and t
 [details](#Safety)
 ### 4. Lightweight
-2MB+ filesize, 2MB+ memory. It runs at appllication layer, no vitrual NIC, no kernel driver.
+2MB+ filesize, 2MB+ memory. It can run purely at the application layer, or use the wintun driver for SDWAN.
 ### 5. Cross-platform
-Benefit from lightweight, it easily supports most of major OS, like Windows, Linux, MacOS, also most of CPU architecture, like 386, amd64, arm, arm64, mipsle, mipsle64, mips, mips64.
+Benefiting from its light weight, it easily supports most major OSes (Windows, Linux, MacOS) and most CPU architectures: 386, amd64, arm, arm64, mipsle, mipsle64, mips, mips64, s390x, ppc64le.
 ### 6. Efficient
 P2P direct connection lets your devices make good use of bandwidth. Your device can be connected in any network environments, even supports NAT1-4 (Cone or Symmetric), UDP or TCP punching, UPNP, IPv6. Relying on the excellent congestion algorithm of the Quic protocol, high bandwidth and low latency can be obtained in a bad network environment.
@@ -136,14 +136,15 @@ Short-Term:
 6. Provide WebUI on client side.
 7. ~~Support private server, open source server program.~~(100%)
 8. Optimize our share scheduling model for different network operators.
-9. Provide REST APIs and libary for secondary development.
+9. ~~Provide REST APIs and library for secondary development.~~(100%)
 10. ~~Support UDP at application layer, it is easy to implement but not urgent due to only a few applicaitons using UDP protocol.~~(100%)
-11. Support KCP protocol underlay, currently support Quic only. KCP focus on delay optimization,which has been widely used as game accelerator,it can sacrifice part of bandwidth to reduce timelag.
+11. ~~Support KCP protocol underlay; currently only Quic is supported. KCP focuses on latency optimization, is widely used by game accelerators, and can trade some bandwidth for lower lag.~~(100%)
 12. ~~Support Android platform, let the phones to be mobile gateway.~~(100%)
-13. Support SMB Windows neighborhood.
+13. ~~Support SMB Windows neighborhood.~~(100%)
-14. Direct connection on intranet, for testing.
+14. ~~Direct connection on intranet, for testing.~~(100%)
 15. ~~Support UPNP.~~(100%)
+16. ~~Support Android.~~(100%)
+17. Support iOS.
 Long-Term:
 1. Use blockchain technology to decentralize, so that users who share equipment have benefits, thereby promoting more users to share, and achieving a positive closed loop.

View File

@@ -43,7 +43,7 @@ nohup ./openp2p -d -node OFFICEPC1 -token TOKEN &
 ```
 {
   "network": {
-    "Node": "hhd1207-222",
+    "Node": "YOUR-NODE-NAME",
     "Token": "TOKEN",
     "ShareBandwidth": 0,
     "ServerHost": "api.openp2p.cn",
@@ -87,7 +87,8 @@ systemctl stop firewalld.service
 systemctl start firewalld.service
 firewall-cmd --state
 ```
+## 停止
+TODO: windows linux macos
 ## 卸载
 ```
 ./openp2p uninstall

View File

@@ -46,7 +46,7 @@ Configuration example
 ```
 {
   "network": {
-    "Node": "hhd1207-222",
+    "Node": "YOUR-NODE-NAME",
     "Token": "TOKEN",
     "ShareBandwidth": 0,
     "ServerHost": "api.openp2p.cn",
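The `network` section of the configuration example above can be loaded in Go roughly like this. This is a minimal sketch: the `NetworkConfig` struct and `parseNetwork` function are hypothetical names chosen for illustration, not openp2p's actual types; the field names come from the JSON shown.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NetworkConfig mirrors the "network" object from the config example.
// Hypothetical type for illustration, not the project's real API.
type NetworkConfig struct {
	Node           string `json:"Node"`
	Token          string `json:"Token"`
	ShareBandwidth int    `json:"ShareBandwidth"`
	ServerHost     string `json:"ServerHost"`
}

// parseNetwork unwraps the top-level "network" object and returns it.
func parseNetwork(raw []byte) (NetworkConfig, error) {
	var cfg struct {
		Network NetworkConfig `json:"network"`
	}
	err := json.Unmarshal(raw, &cfg)
	return cfg.Network, err
}

func main() {
	raw := []byte(`{"network":{"Node":"YOUR-NODE-NAME","Token":"TOKEN","ShareBandwidth":0,"ServerHost":"api.openp2p.cn"}}`)
	nc, err := parseNetwork(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(nc.Node, nc.ServerHost) // YOUR-NODE-NAME api.openp2p.cn
}
```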

View File

@@ -1,10 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="RunConfigurationProducerService">
-    <option name="ignoredProducers">
-      <set>
-        <option value="com.android.tools.idea.compose.preview.runconfiguration.ComposePreviewRunConfigurationProducer" />
-      </set>
-    </option>
-  </component>
-</project>

View File

@@ -1,7 +1,11 @@
 ## Build
+Depends on OpenJDK 11, Gradle 8.1.3, NDK 21.
 ```
+cd core
+go install golang.org/x/mobile/cmd/gomobile@latest
+gomobile init
 go get -v golang.org/x/mobile/bind
-cd core
 gomobile bind -target android -v
 if [[ $? -ne 0 ]]; then
 echo "build error"

View File

@@ -12,17 +12,18 @@ android {
             keyPassword 'YOUR-PASSWORD'
         }
     }
-    compileSdkVersion 30
+    compileSdkVersion 31
     buildToolsVersion "30.0.3"
     defaultConfig {
         applicationId "cn.openp2p"
         minSdkVersion 16
-        targetSdkVersion 30
+        targetSdkVersion 31
         versionCode 1
         versionName "2718281828"
         testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
+        namespace "cn.openp2p"
     }
     buildTypes {

View File

@@ -1,12 +1,11 @@
 <?xml version="1.0" encoding="utf-8"?>
-<manifest xmlns:android="http://schemas.android.com/apk/res/android"
-    package="cn.openp2p">
+<manifest xmlns:android="http://schemas.android.com/apk/res/android">
     <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
     <uses-permission android:name="android.permission.INTERNET" />
     <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
     <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
+    <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
     <application
         android:allowBackup="true"
@@ -27,13 +26,21 @@
         <activity
             android:name=".ui.login.LoginActivity"
-            android:label="@string/app_name">
+            android:label="@string/app_name"
+            android:exported="true">
             <intent-filter>
                 <action android:name="android.intent.action.MAIN" />
                 <category android:name="android.intent.category.LAUNCHER" />
             </intent-filter>
         </activity>
+        <receiver android:name="BootReceiver"
+            android:exported="true"
+            android:enabled="true">
+            <intent-filter>
+                <action android:name="android.intent.action.BOOT_COMPLETED"/>
+            </intent-filter>
+        </receiver>
     </application>
 </manifest>

View File

@@ -0,0 +1,27 @@
+package cn.openp2p
+import android.content.BroadcastReceiver
+import android.content.Context
+import android.content.Intent
+import android.net.VpnService
+import android.os.Build
+import android.os.Bundle
+import android.util.Log
+import cn.openp2p.ui.login.LoginActivity
+class BootReceiver : BroadcastReceiver() {
+    override fun onReceive(context: Context, intent: Intent) {
+        // Logger.log("pp onReceive "+intent.action.toString())
+        Log.i("onReceive", "start " + intent.action.toString())
+        // if (Intent.ACTION_BOOT_COMPLETED == intent.action) {
+        //     Log.i("onReceive","match "+intent.action.toString())
+        //     VpnService.prepare(context)
+        //     val intent = Intent(context, OpenP2PService::class.java)
+        //     if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
+        //         context.startForegroundService(intent)
+        //     } else {
+        //         context.startService(intent)
+        //     }
+        // }
+        Log.i("onReceive", "end " + intent.action.toString())
+    }
+}

View File

@@ -4,9 +4,12 @@ import android.app.*
 import android.content.Context
 import android.content.Intent
 import android.graphics.Color
+import java.io.IOException
+import android.net.VpnService
 import android.os.Binder
 import android.os.Build
 import android.os.IBinder
+import android.os.ParcelFileDescriptor
 import android.util.Log
 import androidx.annotation.RequiresApi
 import androidx.core.app.NotificationCompat
@@ -14,9 +17,22 @@ import cn.openp2p.ui.login.LoginActivity
 import kotlinx.coroutines.GlobalScope
 import kotlinx.coroutines.launch
 import openp2p.Openp2p
+import java.io.FileInputStream
+import java.io.FileOutputStream
+import java.nio.ByteBuffer
+import kotlinx.coroutines.*
+import org.json.JSONObject
+data class Node(val name: String, val ip: String, val resource: String? = null)
+data class Network(
+    val id: Long,
+    val name: String,
+    val gateway: String,
+    val Nodes: List<Node>
+)
-class OpenP2PService : Service() {
+class OpenP2PService : VpnService() {
     companion object {
         private val LOG_TAG = OpenP2PService::class.simpleName
     }
@@ -29,10 +45,12 @@ class OpenP2PService : Service() {
     private lateinit var network: openp2p.P2PNetwork
     private lateinit var mToken: String
     private var running: Boolean = true
+    private var sdwanRunning: Boolean = false
+    private var vpnInterface: ParcelFileDescriptor? = null
+    private var sdwanJob: Job? = null
     override fun onCreate() {
         Log.i(LOG_TAG, "onCreate - Thread ID = " + Thread.currentThread().id)
-        var channelId: String? = null
-        channelId = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
+        var channelId = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
             createNotificationChannel("kim.hsl", "ForegroundService")
         } else {
             ""
@@ -41,7 +59,7 @@ class OpenP2PService : Service() {
         val pendingIntent = PendingIntent.getActivity(
             this, 0,
-            notificationIntent, 0
+            notificationIntent, PendingIntent.FLAG_IMMUTABLE
         )
         val notification = channelId?.let {
@@ -54,7 +72,7 @@ class OpenP2PService : Service() {
         startForeground(1337, notification)
         super.onCreate()
+        refreshSDWAN()
     }
     override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
@@ -62,23 +80,172 @@
             LOG_TAG,
             "onStartCommand - startId = " + startId + ", Thread ID = " + Thread.currentThread().id
         )
+        startOpenP2P(null)
         return super.onStartCommand(intent, flags, startId)
     }
     override fun onBind(p0: Intent?): IBinder? {
         val token = p0?.getStringExtra("token")
-        Log.i(LOG_TAG, "onBind - Thread ID = " + Thread.currentThread().id + token)
+        Log.i(LOG_TAG, "onBind token=$token")
+        startOpenP2P(token)
+        return binder
+    }
+    private fun startOpenP2P(token: String?): Boolean {
+        if (sdwanRunning) {
+            return true
+        }
+        Log.i(LOG_TAG, "startOpenP2P - Thread ID = " + Thread.currentThread().id + token)
+        val oldToken = Openp2p.getToken(getExternalFilesDir(null).toString())
+        Log.i(LOG_TAG, "startOpenP2P oldtoken=$oldToken newtoken=$token")
+        if (oldToken == "0" && token == null) {
+            return false
+        }
+        sdwanRunning = true
+        // runSDWAN()
         GlobalScope.launch {
-            network = Openp2p.runAsModule(getExternalFilesDir(null).toString(), token, 0, 1)
+            network = Openp2p.runAsModule(
+                getExternalFilesDir(null).toString(),
+                token,
+                0,
+                1
+            ) // /storage/emulated/0/Android/data/cn.openp2p/files/
             val isConnect = network.connect(30000) // ms
-            Log.i(OpenP2PService.LOG_TAG, "login result: " + isConnect.toString());
+            Log.i(LOG_TAG, "login result: " + isConnect.toString());
             do {
                 Thread.sleep(1000)
-            }while(network.connect(30000)&&running)
+            } while (network.connect(30000) && running)
             stopSelf()
         }
-        return binder
+        return false
+    }
+    private fun refreshSDWAN() {
+        GlobalScope.launch {
+            Log.i(OpenP2PService.LOG_TAG, "refreshSDWAN start");
+            while (true) {
+                Log.i(OpenP2PService.LOG_TAG, "waiting new sdwan config");
+                val buf = ByteArray(4096)
+                val buffLen = Openp2p.getAndroidSDWANConfig(buf)
+                Log.i(OpenP2PService.LOG_TAG, "closing running sdwan instance");
+                sdwanRunning = false
+                vpnInterface?.close()
+                vpnInterface = null
+                Thread.sleep(10000)
+                runSDWAN(buf.copyOfRange(0, buffLen.toInt()))
+            }
+            Log.i(OpenP2PService.LOG_TAG, "refreshSDWAN end");
+        }
+    }
+    private suspend fun readTunLoop() {
+        val inputStream = FileInputStream(vpnInterface?.fileDescriptor).channel
+        if (inputStream == null) {
+            Log.i(OpenP2PService.LOG_TAG, "open FileInputStream error: ");
+            return
+        }
+        Log.d(LOG_TAG, "read tun loop start")
+        val buffer = ByteBuffer.allocate(4096)
+        val byteArrayRead = ByteArray(4096)
+        while (sdwanRunning) {
+            buffer.clear()
+            val readBytes = inputStream.read(buffer)
+            if (readBytes <= 0) {
+                // Log.i(OpenP2PService.LOG_TAG, "inputStream.read error: ")
+                delay(1)
+                continue
+            }
+            buffer.flip()
+            buffer.get(byteArrayRead, 0, readBytes)
+            Log.i(OpenP2PService.LOG_TAG, String.format("Openp2p.androidRead: %d", readBytes))
+            Openp2p.androidRead(byteArrayRead, readBytes.toLong())
+        }
+        Log.d(LOG_TAG, "read tun loop end")
+    }
+    private fun runSDWAN(buf: ByteArray) {
+        sdwanRunning = true
+        sdwanJob = GlobalScope.launch(context = Dispatchers.IO) {
+            Log.i(OpenP2PService.LOG_TAG, "runSDWAN start:${buf.decodeToString()}");
+            try {
+                var builder = Builder()
+                val jsonObject = JSONObject(buf.decodeToString())
+                val id = jsonObject.getLong("id")
+                val name = jsonObject.getString("name")
+                val gateway = jsonObject.getString("gateway")
+                val nodesArray = jsonObject.getJSONArray("Nodes")
+                val nodesList = mutableListOf<JSONObject>()
+                for (i in 0 until nodesArray.length()) {
+                    nodesList.add(nodesArray.getJSONObject(i))
+                }
+                val myNodeName = Openp2p.getAndroidNodeName()
+                Log.i(OpenP2PService.LOG_TAG, "getAndroidNodeName:${myNodeName}");
+                val nodeList = nodesList.map {
+                    val nodeName = it.getString("name")
+                    val nodeIp = it.getString("ip")
+                    if (nodeName == myNodeName) {
+                        builder.addAddress(nodeIp, 24)
+                    }
+                    val nodeResource = if (it.has("resource")) it.getString("resource") else null
+                    val parts = nodeResource?.split("/")
+                    if (parts?.size == 2) {
+                        val ipAddress = parts[0]
+                        val subnetMask = parts[1]
+                        builder.addRoute(ipAddress, subnetMask.toInt())
+                        Log.i(OpenP2PService.LOG_TAG, "sdwan addRoute:${ipAddress},${subnetMask.toInt()}");
+                    }
+                    Node(nodeName, nodeIp, nodeResource)
+                }
+                val network = Network(id, name, gateway, nodeList)
+                println(network)
+                Log.i(OpenP2PService.LOG_TAG, "onBind");
+                builder.addDnsServer("8.8.8.8")
+                builder.addRoute("10.2.3.0", 24)
+                // builder.addRoute("0.0.0.0", 0);
+                builder.setSession(LOG_TAG!!)
+                builder.setMtu(1420)
+                vpnInterface = builder.establish()
+                if (vpnInterface == null) {
+                    Log.e(OpenP2PService.LOG_TAG, "start vpnservice error: ");
+                }
+                val outputStream = FileOutputStream(vpnInterface?.fileDescriptor).channel
+                if (outputStream == null) {
+                    Log.e(OpenP2PService.LOG_TAG, "open FileOutputStream error: ");
+                    return@launch
+                }
+                val byteArrayWrite = ByteArray(4096)
+                launch {
+                    readTunLoop()
+                }
+                Log.d(LOG_TAG, "write tun loop start")
+                while (sdwanRunning) {
+                    val len = Openp2p.androidWrite(byteArrayWrite)
+                    Log.i(OpenP2PService.LOG_TAG, String.format("Openp2p.androidWrite: %d", len));
+                    val writeBytes = outputStream?.write(ByteBuffer.wrap(byteArrayWrite))
+                    if (writeBytes != null && writeBytes <= 0) {
+                        Log.i(OpenP2PService.LOG_TAG, "outputStream?.write error: ");
+                        continue
+                    }
+                }
+                outputStream.close()
+                // Close the VPN interface
+                vpnInterface?.close()
+                // Null the reference to release resources
+                vpnInterface = null
+                Log.d(LOG_TAG, "write tun loop end")
+            } catch (e: Exception) {
+                // Catch and log any exception
+                Log.e("VPN Connection", "exception: ${e.message}")
+            }
+            Log.i(OpenP2PService.LOG_TAG, "runSDWAN end");
+        }
+    }
     }
     override fun onDestroy() {
@@ -93,12 +260,13 @@
     }
     fun isConnected(): Boolean {
         if (!::network.isInitialized) return false
-        return network?.connect(1000)
+        return network.connect(1000)
     }
     fun stop() {
         running = false
         stopSelf()
+        Openp2p.stop()
     }
     @RequiresApi(Build.VERSION_CODES.O)
     private fun createNotificationChannel(channelId: String, channelName: String): String? {

View File

@@ -0,0 +1,45 @@
+package cn.openp2p
+import android.content.*
+import java.io.File
+import java.io.FileWriter
+import java.text.SimpleDateFormat
+import java.util.*
+import android.app.*
+import android.content.Context
+import android.content.Intent
+import android.graphics.Color
+import java.io.IOException
+import android.net.VpnService
+import android.os.Binder
+import android.os.Build
+import android.os.IBinder
+import android.os.ParcelFileDescriptor
+import android.util.Log
+import androidx.annotation.RequiresApi
+import androidx.core.app.NotificationCompat
+import cn.openp2p.ui.login.LoginActivity
+import kotlinx.coroutines.GlobalScope
+import kotlinx.coroutines.launch
+import openp2p.Openp2p
+import java.io.FileInputStream
+import java.io.FileOutputStream
+import java.nio.ByteBuffer
+import kotlinx.coroutines.*
+import org.json.JSONObject
+object Logger {
+    private val logFile: File = File("app.log")
+    fun log(message: String) {
+        val timestamp = SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.getDefault()).format(Date())
+        val logMessage = "$timestamp: $message\n"
+        try {
+            val fileWriter = FileWriter(logFile, true)
+            fileWriter.append(logMessage)
+            fileWriter.close()
+        } catch (e: Exception) {
+            e.printStackTrace()
+        }
+    }
+}

View File

@@ -5,6 +5,7 @@ import android.app.Activity
 import android.app.ActivityManager
 import android.app.Notification
 import android.app.PendingIntent
+import android.content.BroadcastReceiver
 import android.content.ComponentName
 import android.content.Context
 import android.content.Intent
@@ -25,6 +26,7 @@ import androidx.annotation.StringRes
 import androidx.appcompat.app.AppCompatActivity
 import androidx.lifecycle.Observer
 import androidx.lifecycle.ViewModelProvider
+import cn.openp2p.Logger
 import cn.openp2p.OpenP2PService
 import cn.openp2p.R
 import cn.openp2p.databinding.ActivityLoginBinding
@@ -51,6 +53,14 @@ class LoginActivity : AppCompatActivity() {
     private lateinit var loginViewModel: LoginViewModel
     private lateinit var binding: ActivityLoginBinding
     private lateinit var mService: OpenP2PService
+    @RequiresApi(Build.VERSION_CODES.O)
+    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
+        super.onActivityResult(requestCode, resultCode, data)
+        if (requestCode == 0 && resultCode == Activity.RESULT_OK) {
+            startService(Intent(this, OpenP2PService::class.java))
+        }
+    }
     @RequiresApi(Build.VERSION_CODES.O)
     override fun onCreate(savedInstanceState: Bundle?) {
         super.onCreate(savedInstanceState)
@@ -78,22 +88,18 @@
                 token.error = getString(loginState.passwordError)
             }
         })
-        val intent1 = VpnService.prepare(this) ?: return
-        loginViewModel.loginResult.observe(this@LoginActivity, Observer {
-            val loginResult = it ?: return@Observer
+        openp2pLog.setText(R.string.phone_setting)
+        val intent = VpnService.prepare(this)
+        if (intent != null)
+        {
-            loading.visibility = View.GONE
-            if (loginResult.error != null) {
-                showLoginFailed(loginResult.error)
+            Log.i("openp2p", "VpnService.prepare need permission");
+            startActivityForResult(intent, 0)
         }
-            if (loginResult.success != null) {
-                updateUiWithUser(loginResult.success)
+        else {
+            Log.i("openp2p", "VpnService.prepare ready");
+            onActivityResult(0, Activity.RESULT_OK, null)
         }
-            setResult(Activity.RESULT_OK)
-            //Complete and destroy login activity once successful
-            finish()
-        })
         profile.setOnClickListener {
             val url = "https://console.openp2p.cn/profile"
@@ -110,17 +116,23 @@
         }
         openp2pLog.setText(R.string.phone_setting)
-        token.setText(Openp2p.getToken(getExternalFilesDir(null).toString()))
         login.setOnClickListener {
             if (login.text.toString() == "退出") {
                 // val intent = Intent(this@LoginActivity, OpenP2PService::class.java)
                 // stopService(intent)
                 Log.i(LOG_TAG, "quit")
                 mService.stop()
-                unbindService(connection)
                 val intent = Intent(this@LoginActivity, OpenP2PService::class.java)
                 stopService(intent)
+                // Unbind the service
+                unbindService(connection)
+                // Finish the current Activity
+                finish() // or use finishAffinity() to exit the whole app
                 exitAPP()
+                // finishAffinity()
             }
             login.setText("退出")
@@ -139,13 +151,20 @@
                     if (isConnect) {
                         onlineState.setText("在线")
                     } else {
-                        onlineState.setText("离线")
+                        onlineState.setText("正在登录")
                     }
                 }
             } while (true)
         }
     }
+        val tokenText = Openp2p.getToken(getExternalFilesDir(null).toString())
+        token.setText(tokenText.toString())
+        // Check token length and automatically click login if length > 10
+        if (tokenText.length > 10) {
+            // Logger.log("performClick ")
+            login.performClick()
+        }
     }
 }
 @RequiresApi(Build.VERSION_CODES.LOLLIPOP)

View File

@@ -13,28 +13,27 @@
     <EditText
         android:id="@+id/token"
-        android:layout_width="225dp"
-        android:layout_height="46dp"
+        android:layout_width="250dp"
+        android:layout_height="45dp"
         android:hint="Token"
         android:imeActionLabel="@string/action_sign_in_short"
         android:imeOptions="actionDone"
         android:selectAllOnFocus="true"
         app:layout_constraintStart_toStartOf="parent"
-        app:layout_constraintTop_toTopOf="parent"
-        />
+        app:layout_constraintTop_toTopOf="parent" />
     <Button
         android:id="@+id/login"
-        android:layout_width="wrap_content"
-        android:layout_height="wrap_content"
+        android:layout_width="85dp"
+        android:layout_height="45dp"
         android:layout_gravity="start"
         android:layout_marginStart="24dp"
         android:layout_marginLeft="24dp"
         android:enabled="false"
         android:text="@string/action_sign_in"
+        app:layout_constraintEnd_toEndOf="parent"
         app:layout_constraintStart_toEndOf="@+id/token"
-        app:layout_constraintTop_toTopOf="parent"
-        tools:layout_editor_absoluteY="-2dp" />
+        app:layout_constraintTop_toTopOf="parent" />
@@ -58,28 +57,36 @@
         android:id="@+id/openp2pLog"
         android:layout_width="359dp"
         android:layout_height="548dp"
+        android:layout_marginTop="24dp"
         android:ems="10"
-        android:inputType="none"
-        android:textIsSelectable="true"
         android:focusable="false"
+        android:inputType="none"
         android:text="Name"
+        android:textIsSelectable="true"
+        app:layout_constraintEnd_toEndOf="parent"
         app:layout_constraintStart_toStartOf="parent"
         app:layout_constraintTop_toBottomOf="@+id/onlineState" />
     <Button
         android:id="@+id/profile"
-        android:layout_width="wrap_content"
-        android:layout_height="wrap_content"
+        android:layout_width="250dp"
+        android:layout_height="45dp"
+        android:layout_marginTop="8dp"
         android:text="打开控制台查看Token"
         app:layout_constraintStart_toStartOf="parent"
         app:layout_constraintTop_toBottomOf="@+id/token" />
     <EditText
         android:id="@+id/onlineState"
-        android:layout_width="113dp"
+        android:layout_width="85dp"
         android:layout_height="45dp"
+        android:layout_marginStart="24dp"
+        android:layout_marginLeft="24dp"
+        android:layout_marginTop="8dp"
         android:ems="10"
         android:inputType="textPersonName"
         android:text="未登录"
+        app:layout_constraintEnd_toEndOf="parent"
         app:layout_constraintStart_toEndOf="@+id/profile"
         app:layout_constraintTop_toBottomOf="@+id/login" />

View File

@@ -10,6 +10,13 @@
     <string name="invalid_password">Token可以在 https://console.openp2p.cn/profile 获得</string>
     <string name="login_failed">"Login failed"</string>
     <string name="phone_setting">"安卓系统默认设置的"杀后台进程"会导致 OpenP2P 在后台运行一会后,被系统杀死进程,导致您的体验受到影响。您可以通过以下方式修改几个设置,解决此问题:
+华为鸿蒙:
+1. 允许应用后台运行:进入设置 → 搜索进入 应用启动管理 → 关闭 OpenP2P 的 自动管理 开关 → 在弹框中勾选 允许后台活动
+2. 避免应用被电池优化程序清理:进入设置 → 搜索进入电池优化 → 不允许 → 选择所有应用 → 找到无法后台运行的应用 → 设置为不允许
+3. 关闭省电模式:进入设置 → 电池 → 关闭 省电模式 开关
+4. 保持设备网络连接:进入设置 → 电池 → 更多电池设置 → 开启 休眠时始终保持网络连接 开关。
+5. 给后台运行的应用加锁:打开应用后 → 进入多任务界面 → 下拉选中的卡片进行加锁 → 然后点击清理图标清理其他不经常使用的应用
+6. 设置开发人员选项中相关开关:进入设置 → 搜索进入 开发人员选项 → 找到 不保留活动 开关后关闭 → 并在 后台进程限制 选择 标准限制
 华为手机:
 进入"设置",搜索并进入"电池优化"界面,选中 OpenP2P 程序,不允许系统对其进行电池优化;

View File

@@ -1,12 +1,14 @@
 // Top-level build file where you can add configuration options common to all sub-projects/modules.
 buildscript {
-    ext.kotlin_version = "1.5.21"
+    ext.kotlin_version = "1.8.20"
     repositories {
         google()
         mavenCentral()
     }
     dependencies {
-        classpath "com.android.tools.build:gradle:4.2.1"
+        classpath "com.android.tools.build:gradle:8.1.3"
         classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
         // NOTE: Do not place your application dependencies here; they belong

View File

@@ -1,6 +1,6 @@
-#Sat Oct 22 21:46:24 CST 2022
+#Tue Dec 05 16:04:08 CST 2023
 distributionBase=GRADLE_USER_HOME
-distributionUrl=https\://services.gradle.org/distributions/gradle-6.7.1-bin.zip
 distributionPath=wrapper/dists
-zipStorePath=wrapper/dists
+distributionUrl=https\://services.gradle.org/distributions/gradle-8.2-bin.zip
 zipStoreBase=GRADLE_USER_HOME
+zipStorePath=wrapper/dists

View File

@@ -1,9 +1,9 @@
 package main

 import (
-	core "openp2p/core"
+	op "openp2p/core"
 )

 func main() {
-	core.Run()
+	op.Run()
 }

View File

@@ -5,8 +5,10 @@ import (
"crypto/aes" "crypto/aes"
"crypto/cipher" "crypto/cipher"
"crypto/tls" "crypto/tls"
"encoding/binary"
"encoding/json" "encoding/json"
"fmt" "fmt"
"math/big"
"math/rand" "math/rand"
"net" "net"
"net/http" "net/http"
@@ -197,8 +199,12 @@ func parseMajorVer(ver string) int {
return 0 return 0
} }
func IsIPv6(address string) bool { func IsIPv6(ipStr string) bool {
return strings.Count(address, ":") >= 2 ip := net.ParseIP(ipStr)
if ip == nil {
return false
}
return ip.To16() != nil && ip.To4() == nil
} }
var letters = []byte("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-") var letters = []byte("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-")
@@ -210,3 +216,67 @@ func randStr(n int) string {
} }
return string(b) return string(b)
} }
func execCommand(commandPath string, wait bool, arg ...string) (err error) {
command := exec.Command(commandPath, arg...)
err = command.Start()
if err != nil {
return
}
if wait {
err = command.Wait()
}
return
}
func sanitizeFileName(fileName string) string {
validFileName := fileName
invalidChars := []string{"\\", "/", ":", "*", "?", "\"", "<", ">", "|"}
for _, char := range invalidChars {
validFileName = strings.ReplaceAll(validFileName, char, " ")
}
return validFileName
}
func prettyJson(s interface{}) string {
jsonData, err := json.MarshalIndent(s, "", " ")
if err != nil {
fmt.Println("Error marshalling JSON:", err)
return ""
}
return string(jsonData)
}
func inetAtoN(ipstr string) (uint32, error) { // support both ipnet or single ip
i, _, err := net.ParseCIDR(ipstr)
if err != nil {
i = net.ParseIP(ipstr)
if i == nil {
return 0, err
}
}
ret := big.NewInt(0)
ret.SetBytes(i.To4())
return uint32(ret.Int64()), nil
}
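The `inetAtoN` helper above accepts either a CIDR or a plain IPv4 address. One subtlety worth noting: `net.ParseCIDR` returns the address part unmasked, so `"10.2.3.4/24"` converts to the same value as `"10.2.3.4"`. A standalone sketch of the same logic:

```go
package main

import (
	"fmt"
	"math/big"
	"net"
)

// Same logic as the diff's inetAtoN: try CIDR first, then a plain IP.
// net.ParseCIDR returns the address part (not the masked network), so
// "10.2.3.4/24" yields the same uint32 as "10.2.3.4".
func inetAtoN(ipstr string) (uint32, error) {
	i, _, err := net.ParseCIDR(ipstr)
	if err != nil {
		i = net.ParseIP(ipstr)
		if i == nil {
			return 0, err
		}
	}
	ret := big.NewInt(0)
	ret.SetBytes(i.To4())
	return uint32(ret.Int64()), nil
}

func main() {
	a, _ := inetAtoN("10.2.3.4")
	b, _ := inetAtoN("10.2.3.4/24")
	fmt.Println(a, b) // both 167904004 (0x0A020304)
}
```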
func calculateChecksum(data []byte) uint16 {
length := len(data)
sum := uint32(0)
// Calculate the sum of 16-bit words
for i := 0; i < length-1; i += 2 {
sum += uint32(binary.BigEndian.Uint16(data[i : i+2]))
}
// Add the last byte (if odd length)
if length%2 != 0 {
sum += uint32(data[length-1])
}
// Fold 32-bit sum to 16 bits
sum = (sum >> 16) + (sum & 0xffff)
sum += (sum >> 16)
return uint16(^sum)
}
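`calculateChecksum` above is the standard one's-complement Internet checksum (RFC 1071, the same algorithm used for IP/ICMP headers). Its defining property: once the computed checksum is stored back into the packet, summing the whole packet again yields 0. A self-contained check of that property:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Same one's-complement sum as in the diff (RFC 1071 Internet checksum).
func calculateChecksum(data []byte) uint16 {
	length := len(data)
	sum := uint32(0)
	// Sum all 16-bit big-endian words.
	for i := 0; i < length-1; i += 2 {
		sum += uint32(binary.BigEndian.Uint16(data[i : i+2]))
	}
	// Add the trailing byte for odd-length input.
	if length%2 != 0 {
		sum += uint32(data[length-1])
	}
	// Fold carries back into the low 16 bits, then complement.
	sum = (sum >> 16) + (sum & 0xffff)
	sum += (sum >> 16)
	return uint16(^sum)
}

func main() {
	// Dummy 8-byte header with a zeroed checksum field at offset 2.
	hdr := []byte{0x45, 0x00, 0x00, 0x00, 0x12, 0x34, 0x56, 0x78}
	binary.BigEndian.PutUint16(hdr[2:], calculateChecksum(hdr))
	// Re-summing a packet that carries a valid checksum yields 0.
	fmt.Println(calculateChecksum(hdr)) // 0
}
```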


@@ -94,3 +94,23 @@ func TestParseMajorVer(t *testing.T) {
assertParseMajorVer(t, "3.0.0", 3)
}
func TestIsIPv6(t *testing.T) {
tests := []struct {
ipStr string
want bool
}{
{"2001:0db8:85a3:0000:0000:8a2e:0370:7334", true}, // valid IPv6 address
{"2001:db8::2:1", true}, // valid IPv6 address
{"192.168.1.1", false}, // not IPv6; this is IPv4
{"2001:db8::G:1", false}, // invalid IPv6; contains an illegal character
// more test cases can be added
}
for _, tt := range tests {
got := IsIPv6(tt.ipStr)
if got != tt.want {
t.Errorf("IsIPv6(%s) = %v, want %v", tt.ipStr, got, tt.want)
}
}
}
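The motivation for the `IsIPv6` rewrite tested above: the old colon-count heuristic accepted any string containing two colons, while the parser-based version only accepts strings that actually parse as IPv6. A side-by-side sketch of both versions:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// Old heuristic from the removed code: two or more colons means IPv6.
func isIPv6Old(address string) bool {
	return strings.Count(address, ":") >= 2
}

// New check from the diff: parse first, then require a 16-byte form
// with no 4-byte (IPv4) form.
func isIPv6New(ipStr string) bool {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return false
	}
	return ip.To16() != nil && ip.To4() == nil
}

func main() {
	// "not:an:ip" and "[::1]:8080" fool the old heuristic but not the parser.
	for _, s := range []string{"2001:db8::1", "192.168.1.1", "not:an:ip", "[::1]:8080"} {
		fmt.Printf("%-16s old=%v new=%v\n", s, isIPv6Old(s), isIPv6New(s))
	}
}
```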


@@ -3,10 +3,9 @@ package openp2p
import (
"encoding/json"
"flag"
-"fmt"
-"io/ioutil"
"os"
"strconv"
+"strings"
"sync"
"time"
)
@@ -17,6 +16,8 @@ type AppConfig struct {
// required
AppName string
Protocol string
UnderlayProtocol string
PunchPriority int // bitwise DisableTCP|DisableUDP|TCPFirst 0:tcp and udp both enable, udp first
Whitelist string
SrcPort int
PeerNode string
@@ -24,11 +25,13 @@ type AppConfig struct {
DstHost string
PeerUser string
RelayNode string
ForceRelay int // default:0 disable;1 enable
Enabled int // default:1
// runtime info
peerVersion string
peerToken uint64
peerNatType int
peerLanIP string
hasIPv4 int
peerIPv6 string
hasUPNPorNATPMP int
@@ -45,16 +48,85 @@ type AppConfig struct {
isUnderlayServer int
}
-func (c *AppConfig) ID() string {
-return fmt.Sprintf("%s%d", c.Protocol, c.SrcPort)
-}
+const (
+PunchPriorityTCPFirst = 1
+PunchPriorityUDPDisable = 1 << 1
+PunchPriorityTCPDisable = 1 << 2
+)
+func (c *AppConfig) ID() uint64 {
+if c.SrcPort == 0 { // memapp
+return NodeNameToID(c.PeerNode)
+}
+if c.Protocol == "tcp" {
+return uint64(c.SrcPort) * 10
+}
+return uint64(c.SrcPort)*10 + 1
+}
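The new `ID()` packs the protocol into the last decimal digit of the source port, so a TCP app and a UDP app on the same port map to distinct `uint64` keys. A minimal model of the port branch (the `NodeNameToID` hash used for memapps with `SrcPort == 0` lives elsewhere in the code base and is omitted here):

```go
package main

import "fmt"

// Sketch of the diff's port-based AppConfig.ID() branch:
// tcp -> port*10, udp -> port*10+1.
func appID(protocol string, srcPort int) uint64 {
	if protocol == "tcp" {
		return uint64(srcPort) * 10
	}
	return uint64(srcPort)*10 + 1 // udp
}

func main() {
	// tcp:8080 and udp:8080 get different numeric IDs.
	fmt.Println(appID("tcp", 8080), appID("udp", 8080)) // 80800 80801
}
```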
type Config struct {
Network NetworkConfig `json:"network"`
Apps []*AppConfig `json:"apps"`
LogLevel int
daemonMode bool
mtx sync.Mutex
sdwanMtx sync.Mutex
sdwan SDWANInfo
delNodes []SDWANNode
addNodes []SDWANNode
}
func (c *Config) getSDWAN() SDWANInfo {
c.sdwanMtx.Lock()
defer c.sdwanMtx.Unlock()
return c.sdwan
}
func (c *Config) getDelNodes() []SDWANNode {
c.sdwanMtx.Lock()
defer c.sdwanMtx.Unlock()
return c.delNodes
}
func (c *Config) getAddNodes() []SDWANNode {
c.sdwanMtx.Lock()
defer c.sdwanMtx.Unlock()
return c.addNodes
}
func (c *Config) setSDWAN(s SDWANInfo) {
c.sdwanMtx.Lock()
defer c.sdwanMtx.Unlock()
// get old-new
c.delNodes = []SDWANNode{}
for _, oldNode := range c.sdwan.Nodes {
isDeleted := true
for _, newNode := range s.Nodes {
if oldNode.Name == newNode.Name && oldNode.IP == newNode.IP && oldNode.Resource == newNode.Resource && c.sdwan.Mode == s.Mode && c.sdwan.CentralNode == s.CentralNode {
isDeleted = false
break
}
}
if isDeleted {
c.delNodes = append(c.delNodes, oldNode)
}
}
// get new-old
c.addNodes = []SDWANNode{}
for _, newNode := range s.Nodes {
isNew := true
for _, oldNode := range c.sdwan.Nodes {
if oldNode.Name == newNode.Name && oldNode.IP == newNode.IP && oldNode.Resource == newNode.Resource && c.sdwan.Mode == s.Mode && c.sdwan.CentralNode == s.CentralNode {
isNew = false
break
}
}
if isNew {
c.addNodes = append(c.addNodes, newNode)
}
}
c.sdwan = s
}
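`setSDWAN` computes two set differences by brute-force comparison: `delNodes` = old − new and `addNodes` = new − old, so a node whose IP or Resource changed appears in both lists (delete the stale entry, then add the updated one), which is what `core/config_test.go` exercises. A simplified sketch with a two-field stand-in for `SDWANNode`:

```go
package main

import "fmt"

// Simplified stand-in for SDWANNode; the real struct has more fields
// (Name, IP, Resource) and the real code also compares Mode/CentralNode.
type node struct{ Name, IP string }

// diff returns (old minus cur, cur minus old).
func diff(old, cur []node) (del, add []node) {
	for _, o := range old {
		found := false
		for _, n := range cur {
			if o == n {
				found = true
				break
			}
		}
		if !found {
			del = append(del, o)
		}
	}
	for _, n := range cur {
		found := false
		for _, o := range old {
			if o == n {
				found = true
				break
			}
		}
		if !found {
			add = append(add, n)
		}
	}
	return
}

func main() {
	old := []node{{"a", "10.2.3.1"}, {"b", "10.2.3.2"}}
	cur := []node{{"a", "10.2.3.1"}, {"b", "10.2.3.22"}} // b's IP changed
	del, add := diff(old, cur)
	fmt.Println(del, add) // b shows up in both lists
}
```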
func (c *Config) switchApp(app AppConfig, enabled int) {
@@ -70,28 +142,72 @@ func (c *Config) switchApp(app AppConfig, enabled int) {
}
c.save()
}
-func (c *Config) retryApp(peerNode string) {
-c.mtx.Lock()
-defer c.mtx.Unlock()
-for i := 0; i < len(c.Apps); i++ {
-if c.Apps[i].PeerNode == peerNode {
-c.Apps[i].retryNum = 0
-c.Apps[i].nextRetryTime = time.Now()
-}
-}
-}
+func (c *Config) retryApp(peerNode string) {
+GNetwork.apps.Range(func(id, i interface{}) bool {
+app := i.(*p2pApp)
+if app.config.PeerNode == peerNode {
+gLog.Println(LvDEBUG, "retry app ", peerNode)
+app.config.retryNum = 0
+app.config.nextRetryTime = time.Now()
+app.retryRelayNum = 0
+app.nextRetryRelayTime = time.Now()
+app.hbMtx.Lock()
+app.hbTimeRelay = time.Now().Add(-TunnelHeartbeatTime * 3)
+app.hbMtx.Unlock()
+}
+if app.config.RelayNode == peerNode {
+gLog.Println(LvDEBUG, "retry app ", peerNode)
+app.retryRelayNum = 0
+app.nextRetryRelayTime = time.Now()
+app.hbMtx.Lock()
+app.hbTimeRelay = time.Now().Add(-TunnelHeartbeatTime * 3)
+app.hbMtx.Unlock()
+}
+return true
+})
+}
func (c *Config) retryAllApp() {
GNetwork.apps.Range(func(id, i interface{}) bool {
app := i.(*p2pApp)
gLog.Println(LvDEBUG, "retry app ", app.config.PeerNode)
app.config.retryNum = 0
app.config.nextRetryTime = time.Now()
app.retryRelayNum = 0
app.nextRetryRelayTime = time.Now()
app.hbMtx.Lock()
defer app.hbMtx.Unlock()
app.hbTimeRelay = time.Now().Add(-TunnelHeartbeatTime * 3)
return true
})
}
func (c *Config) retryAllMemApp() {
GNetwork.apps.Range(func(id, i interface{}) bool {
app := i.(*p2pApp)
if app.config.SrcPort != 0 {
return true
}
gLog.Println(LvDEBUG, "retry app ", app.config.PeerNode)
app.config.retryNum = 0
app.config.nextRetryTime = time.Now()
app.retryRelayNum = 0
app.nextRetryRelayTime = time.Now()
app.hbMtx.Lock()
defer app.hbMtx.Unlock()
app.hbTimeRelay = time.Now().Add(-TunnelHeartbeatTime * 3)
return true
})
} }
func (c *Config) add(app AppConfig, override bool) {
c.mtx.Lock()
defer c.mtx.Unlock()
defer c.save()
if app.SrcPort == 0 || app.DstPort == 0 {
gLog.Println(LvERROR, "invalid app ", app)
return
}
if override {
for i := 0; i < len(c.Apps); i++ {
-if c.Apps[i].Protocol == app.Protocol && c.Apps[i].SrcPort == app.SrcPort {
+if c.Apps[i].PeerNode == app.PeerNode && c.Apps[i].Protocol == app.Protocol && c.Apps[i].SrcPort == app.SrcPort {
c.Apps[i] = &app // override it
return
}
@@ -101,15 +217,26 @@ func (c *Config) add(app AppConfig, override bool) {
}
func (c *Config) delete(app AppConfig) {
if app.SrcPort == 0 || app.DstPort == 0 {
return
}
c.mtx.Lock()
defer c.mtx.Unlock()
defer c.save()
for i := 0; i < len(c.Apps); i++ {
got := false
if app.SrcPort != 0 { // normal p2papp
if c.Apps[i].Protocol == app.Protocol && c.Apps[i].SrcPort == app.SrcPort {
got = true
}
} else { // memapp
if c.Apps[i].PeerNode == app.PeerNode {
got = true
}
}
if got {
if i == len(c.Apps)-1 {
c.Apps = c.Apps[:i]
} else {
c.Apps = append(c.Apps[:i], c.Apps[i+1:]...)
}
return
}
}
}
@@ -120,12 +247,22 @@ func (c *Config) save() {
// c.mtx.Lock()
// defer c.mtx.Unlock() // internal call
data, _ := json.MarshalIndent(c, "", "  ")
-err := ioutil.WriteFile("config.json", data, 0644)
+err := os.WriteFile("config.json", data, 0644)
if err != nil {
gLog.Println(LvERROR, "save config.json error:", err)
}
}
func (c *Config) saveCache() {
// c.mtx.Lock()
// defer c.mtx.Unlock() // internal call
data, _ := json.MarshalIndent(c, "", " ")
err := os.WriteFile("config.json0", data, 0644)
if err != nil {
gLog.Println(LvERROR, "save config.json0 error:", err)
}
}
func init() {
gConf.LogLevel = int(LvINFO)
gConf.Network.ShareBandwidth = 10
@@ -137,14 +274,36 @@ func init() {
func (c *Config) load() error {
c.mtx.Lock()
defer c.mtx.Unlock()
-data, err := ioutil.ReadFile("config.json")
+data, err := os.ReadFile("config.json")
if err != nil {
-// gLog.Println(LevelERROR, "read config.json error:", err)
-return err
+return c.loadCache()
}
err = json.Unmarshal(data, &c)
if err != nil {
gLog.Println(LvERROR, "parse config.json error:", err)
// try cache
return c.loadCache()
}
// load ok. cache it
var filteredApps []*AppConfig // filter memapp
for _, app := range c.Apps {
if app.SrcPort != 0 {
filteredApps = append(filteredApps, app)
}
}
c.Apps = filteredApps
c.saveCache()
return err
}
func (c *Config) loadCache() error {
data, err := os.ReadFile("config.json0")
if err != nil {
return err
}
err = json.Unmarshal(data, &c)
if err != nil {
gLog.Println(LvERROR, "parse config.json0 error:", err)
}
return err
}
@@ -169,6 +328,15 @@ func (c *Config) setNode(node string) {
defer c.mtx.Unlock()
defer c.save()
c.Network.Node = node
c.Network.nodeID = NodeNameToID(c.Network.Node)
}
func (c *Config) nodeID() uint64 {
c.mtx.Lock()
defer c.mtx.Unlock()
if c.Network.nodeID == 0 {
c.Network.nodeID = NodeNameToID(c.Network.Node)
}
return c.Network.nodeID
} }
func (c *Config) setShareBandwidth(bw int) {
c.mtx.Lock()
@@ -191,6 +359,7 @@ type NetworkConfig struct {
// local info
Token uint64
Node string
nodeID uint64
User string
localIP string
mac string
@@ -209,7 +378,7 @@ type NetworkConfig struct {
TCPPort int
}
-func parseParams(subCommand string) {
+func parseParams(subCommand string, cmd string) {
fset := flag.NewFlagSet(subCommand, flag.ExitOnError)
serverHost := fset.String("serverhost", "api.openp2p.cn", "server host ")
serverPort := fset.Int("serverport", WsPort, "server port ")
@@ -223,6 +392,8 @@ func parseParams(subCommand string) {
srcPort := fset.Int("srcport", 0, "source port ")
tcpPort := fset.Int("tcpport", 0, "tcp port for upnp or publicip")
protocol := fset.String("protocol", "tcp", "tcp or udp")
underlayProtocol := fset.String("underlay_protocol", "quic", "quic or kcp")
punchPriority := fset.Int("punch_priority", 0, "bitwise DisableTCP|DisableUDP|TCPFirst; 0: both TCP and UDP enabled, UDP first")
appName := fset.String("appname", "", "app name")
relayNode := fset.String("relaynode", "", "relaynode")
shareBandwidth := fset.Int("sharebandwidth", 10, "N mbps share bandwidth limit, private network no limit")
@@ -230,12 +401,20 @@ func parseParams(subCommand string) {
notVerbose := fset.Bool("nv", false, "not log console")
newconfig := fset.Bool("newconfig", false, "not load existing config.json")
logLevel := fset.Int("loglevel", 1, "0:debug 1:info 2:warn 3:error")
maxLogSize := fset.Int("maxlogsize", 1024*1024, "default 1MB")
if cmd == "" {
if subCommand == "" { // no subcommand
fset.Parse(os.Args[1:])
} else {
fset.Parse(os.Args[2:])
}
} else {
gLog.Println(LvINFO, "cmd=", cmd)
args := strings.Split(cmd, " ")
fset.Parse(args)
}
gLog.setMaxSize(int64(*maxLogSize))
config := AppConfig{Enabled: 1}
config.PeerNode = *peerNode
config.DstHost = *dstIP
@@ -243,12 +422,14 @@ func parseParams(subCommand string) {
config.DstPort = *dstPort
config.SrcPort = *srcPort
config.Protocol = *protocol
config.UnderlayProtocol = *underlayProtocol
config.PunchPriority = *punchPriority
config.AppName = *appName
config.RelayNode = *relayNode
if !*newconfig {
gConf.load() // load old config. otherwise will clear all apps
}
if config.SrcPort != 0 { // filter memapp
gConf.add(config, true)
}
// gConf.mtx.Lock() // when calling this func it's single-thread no lock
@@ -259,7 +440,7 @@ func parseParams(subCommand string) {
gConf.Network.ShareBandwidth = *shareBandwidth
}
if f.Name == "node" {
-gConf.Network.Node = *node
+gConf.setNode(*node)
}
if f.Name == "serverhost" {
gConf.Network.ServerHost = *serverHost
@@ -279,19 +460,19 @@ func parseParams(subCommand string) {
gConf.Network.ServerHost = *serverHost
}
if *node != "" {
-gConf.Network.Node = *node
+gConf.setNode(*node)
} else {
envNode := os.Getenv("OPENP2P_NODE")
if envNode != "" {
-gConf.Network.Node = envNode
+gConf.setNode(envNode)
}
if gConf.Network.Node == "" { // if node name not set. use os.Hostname
-gConf.Network.Node = defaultNodeName()
+gConf.setNode(defaultNodeName())
}
}
if gConf.Network.TCPPort == 0 {
if *tcpPort == 0 {
-p := int(nodeNameToID(gConf.Network.Node)%15000 + 50000)
+p := int(gConf.nodeID()%15000 + 50000)
tcpPort = &p
}
gConf.Network.TCPPort = *tcpPort

core/config_test.go Normal file

@@ -0,0 +1,178 @@
package openp2p
import (
"encoding/json"
"testing"
)
func TestSetSDWAN_ChangeNode(t *testing.T) {
conf := Config{}
sdwanInfo := SDWANInfo{}
sdwanStr := `{"id":1312667996276071700,"name":"network1","gateway":"10.2.3.254/24","mode":"fullmesh","centralNode":"n1-stable","enable":1,"Nodes":[{"name":"222-debug","ip":"10.2.3.13"},{"name":"222stable","ip":"10.2.3.222"},{"name":"5800-debug","ip":"10.2.3.56"},{"name":"Mate60pro","ip":"10.2.3.60"},{"name":"Mymatepad2023","ip":"10.2.3.23"},{"name":"n1-stable","ip":"10.2.3.29","resource":"192.168.3.0/24"},{"name":"tony-stable","ip":"10.2.3.4","resource":"10.1.0.0/16"}]}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo)
if len(conf.getDelNodes()) > 0 {
t.Errorf("getDelNodes error")
return
}
if len(conf.getAddNodes()) != 7 {
t.Errorf("getAddNodes error")
return
}
sdwanInfo2 := SDWANInfo{}
sdwanStr = `{"id":1312667996276071700,"name":"network1","gateway":"10.2.3.254/24","mode":"fullmesh","centralNode":"n1-stable","enable":1,"Nodes":[{"name":"222-debug","ip":"10.2.3.13"},{"name":"222stable","ip":"10.2.3.222"},{"name":"5800-debug","ip":"10.2.3.56"},{"name":"Mate60pro","ip":"10.2.3.60"},{"name":"Mymatepad2023","ip":"10.2.3.23"},{"name":"n1-stable","ip":"10.2.3.29","resource":"192.168.3.0/24"}]}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo2); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo2)
diff := conf.getDelNodes()
if len(diff) != 1 || diff[0].IP != "10.2.3.4" {
t.Errorf("getDelNodes error")
return
}
sdwanInfo3 := SDWANInfo{}
sdwanStr = `{"id":1312667996276071700,"name":"network1","gateway":"10.2.3.254/24","mode":"fullmesh","centralNode":"n1-stable","enable":1,"Nodes":[{"name":"222-debug","ip":"10.2.3.13"},{"name":"222stable","ip":"10.2.3.222"},{"name":"5800-debug","ip":"10.2.3.56"},{"name":"Mymatepad2023","ip":"10.2.3.23"},{"name":"n1-stable","ip":"10.2.3.29","resource":"192.168.3.0/24"}]}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo3); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo3)
diff = conf.getDelNodes()
if len(diff) != 1 || diff[0].IP != "10.2.3.60" {
t.Errorf("getDelNodes error")
return
}
// add new node
sdwanInfo4 := SDWANInfo{}
sdwanStr = `{"id":1312667996276071700,"name":"network1","gateway":"10.2.3.254/24","mode":"fullmesh","centralNode":"n1-stable","enable":1,"Nodes":[{"name":"222-debug","ip":"10.2.3.13"},{"name":"222stable","ip":"10.2.3.222"},{"name":"5800-debug","ip":"10.2.3.56"},{"name":"Mate60pro","ip":"10.2.3.60"},{"name":"Mymatepad2023","ip":"10.2.3.23"},{"name":"n1-stable","ip":"10.2.3.29","resource":"192.168.3.0/24"}]}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo4); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo4)
diff = conf.getDelNodes()
if len(diff) > 0 {
t.Errorf("getDelNodes error")
return
}
diff = conf.getAddNodes()
if len(diff) != 1 || diff[0].IP != "10.2.3.60" {
t.Errorf("getAddNodes error")
return
}
}
func TestSetSDWAN_ChangeNodeIP(t *testing.T) {
conf := Config{}
sdwanInfo := SDWANInfo{}
sdwanStr := `{"id":1312667996276071700,"name":"network1","gateway":"10.2.3.254/24","mode":"fullmesh","centralNode":"n1-stable","enable":1,"Nodes":[{"name":"222-debug","ip":"10.2.3.13"},{"name":"222stable","ip":"10.2.3.222"},{"name":"5800-debug","ip":"10.2.3.56"},{"name":"Mate60pro","ip":"10.2.3.60"},{"name":"Mymatepad2023","ip":"10.2.3.23"},{"name":"n1-stable","ip":"10.2.3.29","resource":"192.168.3.0/24"},{"name":"tony-stable","ip":"10.2.3.4","resource":"10.1.0.0/16"}]}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo)
if len(conf.getDelNodes()) > 0 {
t.Errorf("getDelNodes error")
return
}
sdwanInfo2 := SDWANInfo{}
sdwanStr = `{"id":1312667996276071700,"name":"network1","gateway":"10.2.3.254/24","mode":"fullmesh","centralNode":"n1-stable","enable":1,"Nodes":[{"name":"222-debug","ip":"10.2.3.13"},{"name":"222stable","ip":"10.2.3.222"},{"name":"5800-debug","ip":"10.2.3.56"},{"name":"Mate60pro","ip":"10.2.3.60"},{"name":"Mymatepad2023","ip":"10.2.3.23"},{"name":"n1-stable","ip":"10.2.3.29","resource":"192.168.3.0/24"},{"name":"tony-stable","ip":"10.2.3.44","resource":"10.1.0.0/16"}]}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo2); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo2)
diff := conf.getDelNodes()
if len(diff) != 1 || diff[0].IP != "10.2.3.4" {
t.Errorf("getDelNodes error")
return
}
diff = conf.getAddNodes()
if len(diff) != 1 || diff[0].IP != "10.2.3.44" {
t.Errorf("getAddNodes error")
return
}
}
func TestSetSDWAN_ClearAll(t *testing.T) {
conf := Config{}
sdwanInfo := SDWANInfo{}
sdwanStr := `{"id":1312667996276071700,"name":"network1","gateway":"10.2.3.254/24","mode":"fullmesh","centralNode":"n1-stable","enable":1,"Nodes":[{"name":"222-debug","ip":"10.2.3.13"},{"name":"222stable","ip":"10.2.3.222"},{"name":"5800-debug","ip":"10.2.3.56"},{"name":"Mate60pro","ip":"10.2.3.60"},{"name":"Mymatepad2023","ip":"10.2.3.23"},{"name":"n1-stable","ip":"10.2.3.29","resource":"192.168.3.0/24"},{"name":"tony-stable","ip":"10.2.3.4","resource":"10.1.0.0/16"}]}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo)
if len(conf.getDelNodes()) > 0 {
t.Errorf("getDelNodes error")
return
}
sdwanInfo2 := SDWANInfo{}
sdwanStr = `{"Nodes":null}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo2); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo2)
diff := conf.getDelNodes()
if len(diff) != 7 {
t.Errorf("getDelNodes error")
return
}
diff = conf.getAddNodes()
if len(diff) != 0 {
t.Errorf("getAddNodes error")
return
}
}
func TestSetSDWAN_ChangeNodeResource(t *testing.T) {
conf := Config{}
sdwanInfo := SDWANInfo{}
sdwanStr := `{"id":1312667996276071700,"name":"network1","gateway":"10.2.3.254/24","mode":"fullmesh","centralNode":"n1-stable","enable":1,"Nodes":[{"name":"222-debug","ip":"10.2.3.13"},{"name":"222stable","ip":"10.2.3.222"},{"name":"5800-debug","ip":"10.2.3.56"},{"name":"Mate60pro","ip":"10.2.3.60"},{"name":"Mymatepad2023","ip":"10.2.3.23"},{"name":"n1-stable","ip":"10.2.3.29","resource":"192.168.3.0/24"},{"name":"tony-stable","ip":"10.2.3.4","resource":"10.1.0.0/16"}]}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo)
if len(conf.getDelNodes()) > 0 {
t.Errorf("getDelNodes error")
return
}
sdwanInfo2 := SDWANInfo{}
sdwanStr = `{"id":1312667996276071700,"name":"network1","gateway":"10.2.3.254/24","mode":"fullmesh","centralNode":"n1-stable","enable":1,"Nodes":[{"name":"222-debug","ip":"10.2.3.13"},{"name":"222stable","ip":"10.2.3.222"},{"name":"5800-debug","ip":"10.2.3.56"},{"name":"Mate60pro","ip":"10.2.3.60"},{"name":"Mymatepad2023","ip":"10.2.3.23"},{"name":"n1-stable","ip":"10.2.3.29","resource":"192.168.3.0/24"},{"name":"tony-stable","ip":"10.2.3.4","resource":"10.11.0.0/16"}]}`
if err := json.Unmarshal([]byte(sdwanStr), &sdwanInfo2); err != nil {
t.Errorf("unmarshal error")
return
}
conf.setSDWAN(sdwanInfo2)
diff := conf.getDelNodes()
if len(diff) != 1 || diff[0].IP != "10.2.3.4" {
t.Errorf("getDelNodes error")
return
}
diff = conf.getAddNodes()
if len(diff) != 1 || diff[0].Resource != "10.11.0.0/16" {
t.Errorf("getAddNodes error")
return
}
}
func TestInetAtoN(t *testing.T) {
ipa, _ := inetAtoN("121.5.147.4")
t.Log(ipa)
ipa, _ = inetAtoN("121.5.147.4/32")
t.Log(ipa)
}


@@ -25,4 +25,8 @@ var (
ErrMsgChannelNotFound = errors.New("message channel not found")
ErrRelayTunnelNotFound = errors.New("relay tunnel not found")
ErrSymmetricLimit = errors.New("symmetric limit")
ErrForceRelay = errors.New("force relay")
ErrPeerConnectRelay = errors.New("peer connect relayNode error")
ErrBuildTunnelBusy = errors.New("build tunnel busy")
ErrMemAppTunnelNotFound = errors.New("memapp tunnel not found")
)


@@ -8,12 +8,13 @@ import (
"os"
"path/filepath"
"reflect"
+"runtime"
"time"
"github.com/openp2p-cn/totp"
)
-func handlePush(pn *P2PNetwork, subType uint16, msg []byte) error {
+func handlePush(subType uint16, msg []byte) error {
pushHead := PushHeader{}
err := binary.Read(bytes.NewReader(msg[openP2PHeaderSize:openP2PHeaderSize+PushHeaderSize]), binary.LittleEndian, &pushHead)
if err != nil {
@@ -22,7 +23,7 @@ func handlePush(pn *P2PNetwork, subType uint16, msg []byte) error {
gLog.Printf(LvDEBUG, "handle push msg type:%d, push header:%+v", subType, pushHead)
switch subType {
case MsgPushConnectReq:
-err = handleConnectReq(pn, subType, msg)
+err = handleConnectReq(msg)
case MsgPushRsp:
rsp := PushRsp{}
if err = json.Unmarshal(msg[openP2PHeaderSize:], &rsp); err != nil {
@@ -44,15 +45,77 @@ func handlePush(pn *P2PNetwork, subType uint16, msg []byte) error {
config.PeerNode = req.RelayName
config.peerToken = req.RelayToken
go func(r AddRelayTunnelReq) {
-t, errDt := pn.addDirectTunnel(config, 0)
+t, errDt := GNetwork.addDirectTunnel(config, 0)
if errDt == nil {
// notify peer relay ready
msg := TunnelMsg{ID: t.id}
-pn.push(r.From, MsgPushAddRelayTunnelRsp, msg)
+GNetwork.push(r.From, MsgPushAddRelayTunnelRsp, msg)
+appConfig := config
+appConfig.PeerNode = req.From
} else {
-pn.push(r.From, MsgPushAddRelayTunnelRsp, "error") // compatible with old version client, trigger unmarshal error
+gLog.Printf(LvERROR, "addDirectTunnel error:%s", errDt)
+GNetwork.push(r.From, MsgPushAddRelayTunnelRsp, "error") // compatible with old version client, trigger unmarshal error
}
}(req)
case MsgPushServerSideSaveMemApp:
req := ServerSideSaveMemApp{}
if err = json.Unmarshal(msg[openP2PHeaderSize+PushHeaderSize:], &req); err != nil {
gLog.Printf(LvERROR, "wrong %v:%s", reflect.TypeOf(req), err)
return err
}
gLog.Println(LvDEBUG, "handle MsgPushServerSideSaveMemApp:", prettyJson(req))
var existTunnel *P2PTunnel
i, ok := GNetwork.allTunnels.Load(req.TunnelID)
if !ok {
time.Sleep(time.Millisecond * 100)
i, ok = GNetwork.allTunnels.Load(req.TunnelID) // retry: MsgPushServerSideSaveMemApp can arrive before the P2PTunnel is stored
if !ok {
gLog.Println(LvERROR, "handle MsgPushServerSideSaveMemApp error:", ErrMemAppTunnelNotFound)
return ErrMemAppTunnelNotFound
}
}
existTunnel = i.(*P2PTunnel)
peerID := NodeNameToID(req.From)
existApp, appok := GNetwork.apps.Load(peerID)
if appok {
app := existApp.(*p2pApp)
app.config.AppName = fmt.Sprintf("%d", peerID)
app.id = req.AppID
app.setRelayTunnelID(req.RelayTunnelID)
app.relayMode = req.RelayMode
app.hbTimeRelay = time.Now()
if req.RelayTunnelID == 0 {
app.setDirectTunnel(existTunnel)
} else {
app.setRelayTunnel(existTunnel)
}
gLog.Println(LvDEBUG, "find existing memapp, update it")
} else {
appConfig := existTunnel.config
appConfig.SrcPort = 0
appConfig.Protocol = ""
appConfig.AppName = fmt.Sprintf("%d", peerID)
appConfig.PeerNode = req.From
app := p2pApp{
id: req.AppID,
config: appConfig,
relayMode: req.RelayMode,
running: true,
hbTimeRelay: time.Now(),
}
if req.RelayTunnelID == 0 {
app.setDirectTunnel(existTunnel)
} else {
app.setRelayTunnel(existTunnel)
app.setRelayTunnelID(req.RelayTunnelID)
}
if req.RelayTunnelID != 0 {
app.relayNode = req.Node
}
GNetwork.apps.Store(NodeNameToID(req.From), &app)
}
return nil
case MsgPushAPPKey:
req := APPKeySync{}
if err = json.Unmarshal(msg[openP2PHeaderSize+PushHeaderSize:], &req); err != nil {
@@ -62,7 +125,7 @@ func handlePush(pn *P2PNetwork, subType uint16, msg []byte) error {
SaveKey(req.AppID, req.AppKey)
case MsgPushUpdate:
gLog.Println(LvINFO, "MsgPushUpdate")
-err := update(pn.config.ServerHost, pn.config.ServerPort)
+err := update(gConf.Network.ServerHost, gConf.Network.ServerPort)
if err == nil {
os.Exit(0)
}
@@ -72,11 +135,15 @@ func handlePush(pn *P2PNetwork, subType uint16, msg []byte) error {
os.Exit(0)
return err
case MsgPushReportApps:
-err = handleReportApps(pn, subType, msg)
+err = handleReportApps()
+case MsgPushReportMemApps:
+err = handleReportMemApps()
case MsgPushReportLog:
-err = handleLog(pn, subType, msg)
+err = handleLog(msg)
+case MsgPushReportGoroutine:
+err = handleReportGoroutine()
case MsgPushEditApp:
-err = handleEditApp(pn, subType, msg)
+err = handleEditApp(msg)
case MsgPushEditNode:
gLog.Println(LvINFO, "MsgPushEditNode")
req := EditNode{}
@@ -99,7 +166,7 @@ func handlePush(pn *P2PNetwork, subType uint16, msg []byte) error {
gConf.switchApp(config, app.Enabled)
if app.Enabled == 0 {
// disable APP
-pn.DeleteApp(config)
+GNetwork.DeleteApp(config)
}
case MsgPushDstNodeOnline:
gLog.Println(LvINFO, "MsgPushDstNodeOnline")
@@ -111,7 +178,7 @@ func handlePush(pn *P2PNetwork, subType uint16, msg []byte) error {
gLog.Println(LvINFO, "retry peerNode ", req.Node)
gConf.retryApp(req.Node)
default:
-i, ok := pn.msgMap.Load(pushHead.From)
+i, ok := GNetwork.msgMap.Load(pushHead.From)
if !ok {
return ErrMsgChannelNotFound
}
@@ -121,7 +188,7 @@ func handlePush(pn *P2PNetwork, subType uint16, msg []byte) error {
return err
}
-func handleEditApp(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
+func handleEditApp(msg []byte) (err error) {
gLog.Println(LvINFO, "MsgPushEditApp")
newApp := AppInfo{}
if err = json.Unmarshal(msg[openP2PHeaderSize:], &newApp); err != nil {
@@ -137,18 +204,24 @@ func handleEditApp(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
oldConf.PeerNode = newApp.PeerNode
oldConf.DstHost = newApp.DstHost
oldConf.DstPort = newApp.DstPort
+if newApp.Protocol0 != "" && newApp.SrcPort0 != 0 { // not edit
gConf.delete(oldConf)
+}
// AddApp
newConf := oldConf
newConf.Protocol = newApp.Protocol
newConf.SrcPort = newApp.SrcPort
+newConf.RelayNode = newApp.SpecRelayNode
+newConf.PunchPriority = newApp.PunchPriority
gConf.add(newConf, false)
-pn.DeleteApp(oldConf) // DeleteApp may cost some times, execute at the end
+if newApp.Protocol0 != "" && newApp.SrcPort0 != 0 { // not edit
+GNetwork.DeleteApp(oldConf) // DeleteApp may cost some time, execute at the end
+}
return nil
}
-func handleConnectReq(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
+func handleConnectReq(msg []byte) (err error) {
req := PushConnectReq{}
if err = json.Unmarshal(msg[openP2PHeaderSize+PushHeaderSize:], &req); err != nil {
gLog.Printf(LvERROR, "wrong %v:%s", reflect.TypeOf(req), err)
@@ -156,21 +229,21 @@ func handleConnectReq(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
} }
gLog.Printf(LvDEBUG, "%s is connecting...", req.From) gLog.Printf(LvDEBUG, "%s is connecting...", req.From)
gLog.Println(LvDEBUG, "push connect response to ", req.From) gLog.Println(LvDEBUG, "push connect response to ", req.From)
if compareVersion(req.Version, LeastSupportVersion) == LESS { if compareVersion(req.Version, LeastSupportVersion) < 0 {
gLog.Println(LvERROR, ErrVersionNotCompatible.Error(), ":", req.From) gLog.Println(LvERROR, ErrVersionNotCompatible.Error(), ":", req.From)
rsp := PushConnectRsp{ rsp := PushConnectRsp{
Error: 10, Error: 10,
Detail: ErrVersionNotCompatible.Error(), Detail: ErrVersionNotCompatible.Error(),
To: req.From, To: req.From,
From: pn.config.Node, From: gConf.Network.Node,
} }
pn.push(req.From, MsgPushConnectRsp, rsp) GNetwork.push(req.From, MsgPushConnectRsp, rsp)
return ErrVersionNotCompatible return ErrVersionNotCompatible
} }
// verify totp token or token // verify totp token or token
t := totp.TOTP{Step: totp.RelayTOTPStep} t := totp.TOTP{Step: totp.RelayTOTPStep}
if t.Verify(req.Token, pn.config.Token, time.Now().Unix()-pn.dt/int64(time.Second)) { // localTs may behind, auto adjust ts if t.Verify(req.Token, gConf.Network.Token, time.Now().Unix()-GNetwork.dt/int64(time.Second)) { // localTs may behind, auto adjust ts
gLog.Printf(LvINFO, "Access Granted\n") gLog.Printf(LvINFO, "Access Granted")
config := AppConfig{} config := AppConfig{}
config.peerNatType = req.NatType config.peerNatType = req.NatType
config.peerConeNatPort = req.ConeNatPort config.peerConeNatPort = req.ConeNatPort
@@ -183,52 +256,74 @@ func handleConnectReq(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
config.hasUPNPorNATPMP = req.HasUPNPorNATPMP config.hasUPNPorNATPMP = req.HasUPNPorNATPMP
config.linkMode = req.LinkMode config.linkMode = req.LinkMode
config.isUnderlayServer = req.IsUnderlayServer config.isUnderlayServer = req.IsUnderlayServer
config.UnderlayProtocol = req.UnderlayProtocol
// share relay node will limit bandwidth // share relay node will limit bandwidth
if req.Token != pn.config.Token { if req.Token != gConf.Network.Token {
gLog.Printf(LvINFO, "set share bandwidth %d mbps", pn.config.ShareBandwidth) gLog.Printf(LvINFO, "set share bandwidth %d mbps", gConf.Network.ShareBandwidth)
config.shareBandwidth = pn.config.ShareBandwidth config.shareBandwidth = gConf.Network.ShareBandwidth
} }
// go pn.AddTunnel(config, req.ID) // go GNetwork.AddTunnel(config, req.ID)
go pn.addDirectTunnel(config, req.ID) go func() {
GNetwork.addDirectTunnel(config, req.ID)
}()
return nil return nil
} }
gLog.Println(LvERROR, "Access Denied:", req.From) gLog.Println(LvERROR, "Access Denied:", req.From)
rsp := PushConnectRsp{ rsp := PushConnectRsp{
Error: 1, Error: 1,
Detail: fmt.Sprintf("connect to %s error: Access Denied", pn.config.Node), Detail: fmt.Sprintf("connect to %s error: Access Denied", gConf.Network.Node),
To: req.From, To: req.From,
From: pn.config.Node, From: gConf.Network.Node,
} }
return pn.push(req.From, MsgPushConnectRsp, rsp) return GNetwork.push(req.From, MsgPushConnectRsp, rsp)
} }
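The `t.Verify(req.Token, ..., time.Now().Unix()-GNetwork.dt/int64(time.Second))` call above accepts a time-based token while compensating for clock drift between peers. The `totp` package itself is not part of this diff; a minimal drift-tolerant verifier in the same spirit — `step`, `code`, and `verify` are illustrative names, not the project's API — might look like:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

const step = 30 // seconds per time window; an assumption standing in for totp.RelayTOTPStep

// code derives a token for one time window from a shared secret:
// HMAC over the window counter, truncated to 64 bits.
func code(secret string, ts int64) uint64 {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], uint64(ts/step))
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write(buf[:])
	return binary.BigEndian.Uint64(mac.Sum(nil)[:8])
}

// verify accepts the current, previous, and next window, which is what
// makes small clock drift between the two peers tolerable.
func verify(token uint64, secret string, ts int64) bool {
	for _, d := range []int64{0, -step, step} {
		if code(secret, ts+d) == token {
			return true
		}
	}
	return false
}

func main() {
	tok := code("my-token", 1000)
	fmt.Println(verify(tok, "my-token", 1000)) // same window: true
	fmt.Println(verify(tok, "my-token", 1020)) // one window of drift: true
	fmt.Println(verify(tok, "wrong", 1000))    // wrong secret: false
}
```

The extra `-GNetwork.dt` term in the diff plays the same role as the ±1-window loop here: it shifts the local timestamp toward the peer's clock before deriving the window counter.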
-func handleReportApps(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
+func handleReportApps() (err error) {
 	gLog.Println(LvINFO, "MsgPushReportApps")
 	req := ReportApps{}
 	gConf.mtx.Lock()
 	defer gConf.mtx.Unlock()
 	for _, config := range gConf.Apps {
 		appActive := 0
 		relayNode := ""
+		specRelayNode := ""
 		relayMode := ""
 		linkMode := LinkModeUDPPunch
-		i, ok := pn.apps.Load(config.ID())
+		var connectTime string
+		var retryTime string
+		var app *p2pApp
+		i, ok := GNetwork.apps.Load(config.ID())
 		if ok {
-			app := i.(*p2pApp)
+			app = i.(*p2pApp)
 			if app.isActive() {
 				appActive = 1
 			}
+			if app.config.SrcPort == 0 { // memapp
+				continue
+			}
+			specRelayNode = app.config.RelayNode
+			if !app.isDirect() { // TODO: should always report relay node for app edit
 				relayNode = app.relayNode
 				relayMode = app.relayMode
-			linkMode = app.tunnel.linkModeWeb
+			}
+			if app.Tunnel() != nil {
+				linkMode = app.Tunnel().linkModeWeb
+			}
+			retryTime = app.RetryTime().Local().Format("2006-01-02T15:04:05-0700")
+			connectTime = app.ConnectTime().Local().Format("2006-01-02T15:04:05-0700")
 		}
 		appInfo := AppInfo{
 			AppName:       config.AppName,
 			Error:         config.errMsg,
 			Protocol:      config.Protocol,
+			PunchPriority: config.PunchPriority,
 			Whitelist:     config.Whitelist,
 			SrcPort:       config.SrcPort,
 			RelayNode:     relayNode,
+			SpecRelayNode: specRelayNode,
 			RelayMode:     relayMode,
 			LinkMode:      linkMode,
 			PeerNode:      config.PeerNode,
@@ -237,17 +332,77 @@ func handleReportApps(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
 			PeerUser:    config.PeerUser,
 			PeerIP:      config.peerIP,
 			PeerNatType: config.peerNatType,
-			RetryTime:   config.retryTime.Local().Format("2006-01-02T15:04:05-0700"),
+			RetryTime:   retryTime,
-			ConnectTime: config.connectTime.Local().Format("2006-01-02T15:04:05-0700"),
+			ConnectTime: connectTime,
 			IsActive:    appActive,
 			Enabled:     config.Enabled,
 		}
 		req.Apps = append(req.Apps, appInfo)
 	}
-	return pn.write(MsgReport, MsgReportApps, &req)
+	return GNetwork.write(MsgReport, MsgReportApps, &req)
 }
-func handleLog(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
+func handleReportMemApps() (err error) {
+	gLog.Println(LvINFO, "handleReportMemApps")
+	req := ReportApps{}
+	gConf.mtx.Lock()
+	defer gConf.mtx.Unlock()
+	GNetwork.sdwan.sysRoute.Range(func(key, value interface{}) bool {
+		node := value.(*sdwanNode)
+		appActive := 0
+		relayMode := ""
+		var connectTime string
+		var retryTime string
+		i, ok := GNetwork.apps.Load(node.id)
+		var app *p2pApp
+		if ok {
+			app = i.(*p2pApp)
+			if app.isActive() {
+				appActive = 1
+			}
+			if !app.isDirect() {
+				relayMode = app.relayMode
+			}
+			retryTime = app.RetryTime().Local().Format("2006-01-02T15:04:05-0700")
+			connectTime = app.ConnectTime().Local().Format("2006-01-02T15:04:05-0700")
+		}
+		appInfo := AppInfo{
+			RelayMode: relayMode,
+			PeerNode:  node.name,
+			IsActive:  appActive,
+			Enabled:   1,
+		}
+		if app != nil {
+			appInfo.AppName = app.config.AppName
+			appInfo.Error = app.config.errMsg
+			appInfo.Protocol = app.config.Protocol
+			appInfo.Whitelist = app.config.Whitelist
+			appInfo.SrcPort = app.config.SrcPort
+			if !app.isDirect() {
+				appInfo.RelayNode = app.relayNode
+			}
+			if app.Tunnel() != nil {
+				appInfo.LinkMode = app.Tunnel().linkModeWeb
+			}
+			appInfo.DstHost = app.config.DstHost
+			appInfo.DstPort = app.config.DstPort
+			appInfo.PeerUser = app.config.PeerUser
+			appInfo.PeerIP = app.config.peerIP
+			appInfo.PeerNatType = app.config.peerNatType
+			appInfo.RetryTime = retryTime
+			appInfo.ConnectTime = connectTime
+		}
+		req.Apps = append(req.Apps, appInfo)
+		return true
+	})
+	gLog.Println(LvDEBUG, "handleReportMemApps res:", prettyJson(req))
+	return GNetwork.write(MsgReport, MsgReportMemApps, &req)
+}
+func handleLog(msg []byte) (err error) {
 	gLog.Println(LvDEBUG, "MsgPushReportLog")
 	const defaultLen = 1024 * 128
 	const maxLen = 1024 * 1024
@@ -258,6 +413,8 @@ func handleLog(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
 	}
 	if req.FileName == "" {
 		req.FileName = "openp2p.log"
+	} else {
+		req.FileName = sanitizeFileName(req.FileName)
 	}
 	f, err := os.Open(filepath.Join("log", req.FileName))
 	if err != nil {
@@ -292,5 +449,12 @@ func handleLog(pn *P2PNetwork, subType uint16, msg []byte) (err error) {
 	rsp.FileName = req.FileName
 	rsp.Total = fi.Size()
 	rsp.Len = req.Len
-	return pn.write(MsgReport, MsgPushReportLog, &rsp)
+	return GNetwork.write(MsgReport, MsgPushReportLog, &rsp)
+}
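The new `sanitizeFileName` call above guards the `filepath.Join("log", req.FileName)` that follows, but its body is not shown in this hunk. Assuming its job is to keep remotely requested log downloads inside the `log/` directory, a plausible sketch (the function name comes from the diff; the body is a guess) is:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// sanitizeFileName strips any directory components so a request such as
// "../../etc/passwd" cannot escape the log directory via path traversal.
func sanitizeFileName(name string) string {
	base := filepath.Base(filepath.Clean(name))
	if base == "." || base == ".." || base == "/" {
		return "openp2p.log" // fall back to the default log name
	}
	return base
}

func main() {
	fmt.Println(sanitizeFileName("../../etc/passwd")) // passwd
	fmt.Println(sanitizeFileName(".."))               // openp2p.log
	fmt.Println(sanitizeFileName("openp2p.log"))      // openp2p.log
}
```

Without such a check, `filepath.Join("log", "../secret")` would resolve outside the intended directory, which is why the diff only sanitizes user-supplied names and leaves the default untouched.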
+func handleReportGoroutine() (err error) {
+	gLog.Println(LvDEBUG, "handleReportGoroutine")
+	buf := make([]byte, 1024*128)
+	stackLen := runtime.Stack(buf, true)
+	return GNetwork.write(MsgReport, MsgPushReportLog, string(buf[:stackLen]))
 }


@@ -3,6 +3,7 @@ package openp2p
 import (
 	"bytes"
 	"encoding/binary"
+	"encoding/json"
 	"fmt"
 	"math/rand"
 	"net"
@@ -10,7 +11,7 @@ import (
 )
 func handshakeC2C(t *P2PTunnel) (err error) {
-	gLog.Printf(LvDEBUG, "handshakeC2C %s:%d:%d to %s:%d", t.pn.config.Node, t.coneLocalPort, t.coneNatPort, t.config.peerIP, t.config.peerConeNatPort)
+	gLog.Printf(LvDEBUG, "handshakeC2C %s:%d:%d to %s:%d", gConf.Network.Node, t.coneLocalPort, t.coneNatPort, t.config.peerIP, t.config.peerConeNatPort)
 	defer gLog.Printf(LvDEBUG, "handshakeC2C end")
 	conn, err := net.ListenUDP("udp", t.la)
 	if err != nil {
@@ -22,13 +23,22 @@ func handshakeC2C(t *P2PTunnel) (err error) {
 		gLog.Println(LvDEBUG, "handshakeC2C write MsgPunchHandshake error:", err)
 		return err
 	}
-	ra, head, _, _, err := UDPRead(conn, HandshakeTimeout)
+	ra, head, buff, _, err := UDPRead(conn, HandshakeTimeout)
 	if err != nil {
 		gLog.Println(LvDEBUG, "handshakeC2C read MsgPunchHandshake error:", err)
 		return err
 	}
 	t.ra, _ = net.ResolveUDPAddr("udp", ra.String())
-	if head.MainType == MsgP2P && head.SubType == MsgPunchHandshake {
+	var tunnelID uint64
+	if len(buff) > openP2PHeaderSize {
+		req := P2PHandshakeReq{}
+		if err := json.Unmarshal(buff[openP2PHeaderSize:openP2PHeaderSize+int(head.DataLen)], &req); err == nil {
+			tunnelID = req.ID
+		}
+	} else { // compatible with old version
+		tunnelID = t.id
+	}
+	if head.MainType == MsgP2P && head.SubType == MsgPunchHandshake && tunnelID == t.id {
 		gLog.Printf(LvDEBUG, "read %d handshake ", t.id)
 		UDPWrite(conn, t.ra, MsgP2P, MsgPunchHandshakeAck, P2PHandshakeReq{ID: t.id})
 		_, head, _, _, err = UDPRead(conn, HandshakeTimeout)
@@ -37,7 +47,7 @@ func handshakeC2C(t *P2PTunnel) (err error) {
 			return err
 		}
 	}
-	if head.MainType == MsgP2P && head.SubType == MsgPunchHandshakeAck {
+	if head.MainType == MsgP2P && head.SubType == MsgPunchHandshakeAck && tunnelID == t.id {
 		gLog.Printf(LvDEBUG, "read %d handshake ack ", t.id)
 		_, err = UDPWrite(conn, t.ra, MsgP2P, MsgPunchHandshakeAck, P2PHandshakeReq{ID: t.id})
 		if err != nil {
@@ -52,6 +62,11 @@ func handshakeC2C(t *P2PTunnel) (err error) {
 func handshakeC2S(t *P2PTunnel) error {
 	gLog.Printf(LvDEBUG, "handshakeC2S start")
 	defer gLog.Printf(LvDEBUG, "handshakeC2S end")
+	if !buildTunnelMtx.TryLock() {
+		// time.Sleep(time.Second * 3)
+		return ErrBuildTunnelBusy
+	}
+	defer buildTunnelMtx.Unlock()
 	startTime := time.Now()
 	r := rand.New(rand.NewSource(time.Now().UnixNano()))
 	randPorts := r.Perm(65532)
@@ -84,30 +99,48 @@ func handshakeC2S(t *P2PTunnel) error {
 		return err
 	}
 	// read response of the punching hole ok port
-	result := make([]byte, 1024)
+	buff := make([]byte, 1024)
-	_, dst, err := conn.ReadFrom(result)
+	_, dst, err := conn.ReadFrom(buff)
 	if err != nil {
 		gLog.Println(LvERROR, "handshakeC2S wait timeout")
 		return err
 	}
 	head := &openP2PHeader{}
-	err = binary.Read(bytes.NewReader(result[:openP2PHeaderSize]), binary.LittleEndian, head)
+	err = binary.Read(bytes.NewReader(buff[:openP2PHeaderSize]), binary.LittleEndian, head)
 	if err != nil {
 		gLog.Println(LvERROR, "parse p2pheader error:", err)
 		return err
 	}
 	t.ra, _ = net.ResolveUDPAddr("udp", dst.String())
-	if head.MainType == MsgP2P && head.SubType == MsgPunchHandshake {
+	var tunnelID uint64
+	if len(buff) > openP2PHeaderSize {
+		req := P2PHandshakeReq{}
+		if err := json.Unmarshal(buff[openP2PHeaderSize:openP2PHeaderSize+int(head.DataLen)], &req); err == nil {
+			tunnelID = req.ID
+		}
+	} else { // compatible with old version
+		tunnelID = t.id
+	}
+	if head.MainType == MsgP2P && head.SubType == MsgPunchHandshake && tunnelID == t.id {
 		gLog.Printf(LvDEBUG, "handshakeC2S read %d handshake ", t.id)
 		UDPWrite(conn, t.ra, MsgP2P, MsgPunchHandshakeAck, P2PHandshakeReq{ID: t.id})
 		for {
-			_, head, _, _, err = UDPRead(conn, HandshakeTimeout)
+			_, head, buff, _, err = UDPRead(conn, HandshakeTimeout)
 			if err != nil {
 				gLog.Println(LvDEBUG, "handshakeC2S handshake error")
 				return err
 			}
+			var tunnelID uint64
+			if len(buff) > openP2PHeaderSize {
+				req := P2PHandshakeReq{}
+				if err := json.Unmarshal(buff[openP2PHeaderSize:openP2PHeaderSize+int(head.DataLen)], &req); err == nil {
+					tunnelID = req.ID
+				}
+			} else { // compatible with old version
+				tunnelID = t.id
+			}
 			// waiting ack
-			if head.MainType == MsgP2P && head.SubType == MsgPunchHandshakeAck {
+			if head.MainType == MsgP2P && head.SubType == MsgPunchHandshakeAck && tunnelID == t.id {
 				break
 			}
 		}
@@ -126,6 +159,11 @@ func handshakeC2S(t *P2PTunnel) error {
 func handshakeS2C(t *P2PTunnel) error {
 	gLog.Printf(LvDEBUG, "handshakeS2C start")
 	defer gLog.Printf(LvDEBUG, "handshakeS2C end")
+	if !buildTunnelMtx.TryLock() {
+		// time.Sleep(time.Second * 3)
+		return ErrBuildTunnelBusy
+	}
+	defer buildTunnelMtx.Unlock()
 	startTime := time.Now()
 	gotCh := make(chan *net.UDPAddr, 5)
 	// sequentially send udp handshake, do not send in parallel
@@ -141,7 +179,7 @@ func handshakeS2C(t *P2PTunnel) error {
 		}
 		defer conn.Close()
 		UDPWrite(conn, t.ra, MsgP2P, MsgPunchHandshake, P2PHandshakeReq{ID: t.id})
-		_, head, _, _, err := UDPRead(conn, HandshakeTimeout)
+		_, head, buff, _, err := UDPRead(conn, HandshakeTimeout)
 		if err != nil {
 			// gLog.Println(LevelDEBUG, "one of the handshake error:", err)
 			return err
@@ -149,18 +187,35 @@ func handshakeS2C(t *P2PTunnel) error {
 		if gotIt {
 			return nil
 		}
+		var tunnelID uint64
+		if len(buff) >= openP2PHeaderSize+8 {
+			req := P2PHandshakeReq{}
+			if err := json.Unmarshal(buff[openP2PHeaderSize:openP2PHeaderSize+int(head.DataLen)], &req); err == nil {
+				tunnelID = req.ID
+			}
+		} else { // compatible with old version
+			tunnelID = t.id
+		}
-		if head.MainType == MsgP2P && head.SubType == MsgPunchHandshake {
+		if head.MainType == MsgP2P && head.SubType == MsgPunchHandshake && tunnelID == t.id {
 			gLog.Printf(LvDEBUG, "handshakeS2C read %d handshake ", t.id)
 			UDPWrite(conn, t.ra, MsgP2P, MsgPunchHandshakeAck, P2PHandshakeReq{ID: t.id})
-			// may read sereral MsgPunchHandshake
+			// may read several MsgPunchHandshake
 			for {
-				_, head, _, _, err = UDPRead(conn, HandshakeTimeout)
+				_, head, buff, _, err = UDPRead(conn, HandshakeTimeout)
 				if err != nil {
 					gLog.Println(LvDEBUG, "handshakeS2C handshake error")
 					return err
 				}
-				if head.MainType == MsgP2P && head.SubType == MsgPunchHandshakeAck {
+				if len(buff) > openP2PHeaderSize {
+					req := P2PHandshakeReq{}
+					if err := json.Unmarshal(buff[openP2PHeaderSize:openP2PHeaderSize+int(head.DataLen)], &req); err == nil {
+						tunnelID = req.ID
+					}
+				} else { // compatible with old version
+					tunnelID = t.id
+				}
+				if head.MainType == MsgP2P && head.SubType == MsgPunchHandshakeAck && tunnelID == t.id {
 					break
 				} else {
 					gLog.Println(LvDEBUG, "handshakeS2C read msg but not MsgPunchHandshakeAck")
@@ -181,14 +236,14 @@ func handshakeS2C(t *P2PTunnel) error {
 		}(t)
 	}
 	gLog.Printf(LvDEBUG, "send symmetric handshake end")
-	if compareVersion(t.config.peerVersion, SymmetricSimultaneouslySendVersion) == LESS { // compatible with old client
+	if compareVersion(t.config.peerVersion, SymmetricSimultaneouslySendVersion) < 0 { // compatible with old client
 		gLog.Println(LvDEBUG, "handshakeS2C ready, notify peer connect")
 		t.pn.push(t.config.PeerNode, MsgPushHandshakeStart, TunnelMsg{ID: t.id})
 	}
 	select {
 	case <-time.After(HandshakeTimeout):
-		return fmt.Errorf("wait handshake failed")
+		return fmt.Errorf("wait handshake timeout")
 	case la := <-gotCh:
 		t.la = la
 		gLog.Println(LvDEBUG, "symmetric handshake ok", la)


@@ -10,11 +10,6 @@ import (
 	"time"
 )
-// examples:
-// listen:
-// ./openp2p install -node hhd1207-222 -token YOUR-TOKEN -sharebandwidth 0
-// listen and build p2papp:
-// ./openp2p install -node hhd1207-222 -token YOUR-TOKEN -sharebandwidth 0 -peernode hhdhome-n1 -dstip 127.0.0.1 -dstport 50022 -protocol tcp -srcport 22
 func install() {
 	gLog.Println(LvINFO, "openp2p start. version: ", OpenP2PVersion)
 	gLog.Println(LvINFO, "Contact: QQ group 16947733, Email openp2p.cn@gmail.com")
@@ -35,7 +30,7 @@ func install() {
 	uninstall()
 	// save config file
-	parseParams("install")
+	parseParams("install", "")
 	targetPath := filepath.Join(defaultInstallPath, defaultBinName)
 	d := daemon{}
 	// copy files

core/iptables.go Normal file

@@ -0,0 +1,74 @@
package openp2p
import (
"log"
"os/exec"
"runtime"
)
func allowTunForward() {
if runtime.GOOS != "linux" { // only support Linux
return
}
exec.Command("sh", "-c", `iptables -t filter -D FORWARD -i optun -j ACCEPT`).Run()
exec.Command("sh", "-c", `iptables -t filter -D FORWARD -o optun -j ACCEPT`).Run()
err := exec.Command("sh", "-c", `iptables -t filter -I FORWARD -i optun -j ACCEPT`).Run()
if err != nil {
		log.Println("allow forward in error:", err)
}
err = exec.Command("sh", "-c", `iptables -t filter -I FORWARD -o optun -j ACCEPT`).Run()
if err != nil {
		log.Println("allow forward out error:", err)
}
}
func clearSNATRule() {
if runtime.GOOS != "linux" {
return
}
execCommand("iptables", true, "-t", "nat", "-D", "POSTROUTING", "-j", "OPSDWAN")
execCommand("iptables", true, "-t", "nat", "-F", "OPSDWAN")
execCommand("iptables", true, "-t", "nat", "-X", "OPSDWAN")
}
func initSNATRule(localNet string) {
if runtime.GOOS != "linux" {
return
}
clearSNATRule()
err := execCommand("iptables", true, "-t", "nat", "-N", "OPSDWAN")
if err != nil {
log.Println("iptables new sdwan chain error:", err)
return
}
err = execCommand("iptables", true, "-t", "nat", "-A", "POSTROUTING", "-j", "OPSDWAN")
if err != nil {
log.Println("iptables append postrouting error:", err)
return
}
err = execCommand("iptables", true, "-t", "nat", "-A", "OPSDWAN",
"-o", "optun", "!", "-s", localNet, "-j", "MASQUERADE")
if err != nil {
log.Println("add optun snat error:", err)
return
}
err = execCommand("iptables", true, "-t", "nat", "-A", "OPSDWAN", "!", "-o", "optun",
"-s", localNet, "-j", "MASQUERADE")
if err != nil {
log.Println("add optun snat error:", err)
return
}
}
func addSNATRule(target string) {
if runtime.GOOS != "linux" {
return
}
err := execCommand("iptables", true, "-t", "nat", "-A", "OPSDWAN", "!", "-o", "optun",
"-s", target, "-j", "MASQUERADE")
if err != nil {
log.Println("iptables add optun snat error:", err)
return
}
}
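The SNAT functions above call an `execCommand` helper defined elsewhere in the project, so its exact signature and the meaning of its boolean parameter are not visible in this diff. One plausible shape, assuming the flag selects whether to wait for the command to finish (an assumption, not taken from the source):

```go
package main

import (
	"fmt"
	"os/exec"
)

// execCommand runs a command; when wait is true it blocks until the
// command exits and returns its exit error, otherwise it only starts it.
// The semantics of the bool are a guess for illustration.
func execCommand(name string, wait bool, args ...string) error {
	cmd := exec.Command(name, args...)
	if !wait {
		return cmd.Start()
	}
	return cmd.Run()
}

func main() {
	fmt.Println(execCommand("sh", true, "-c", "exit 0"))        // <nil>
	fmt.Println(execCommand("sh", true, "-c", "exit 1") != nil) // true
}
```

Note the order of operations in `initSNATRule`: the `OPSDWAN` chain is created and hooked into `POSTROUTING` before any MASQUERADE rules are appended, and `clearSNATRule` deliberately ignores errors because the chain may not exist yet on first run.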


@@ -17,9 +17,18 @@ type IPTree struct {
 	tree    *avltree.Tree
 	treeMtx sync.RWMutex
 }
+type IPTreeValue struct {
+	maxIP uint32
+	v     interface{}
+}
+
+// TODO: deal with intersecting ranges
+func (iptree *IPTree) DelIntIP(minIP uint32, maxIP uint32) {
+	iptree.tree.Remove(minIP)
+}
 // add 120k cost 0.5s
-func (iptree *IPTree) AddIntIP(minIP uint32, maxIP uint32) bool {
+func (iptree *IPTree) AddIntIP(minIP uint32, maxIP uint32, v interface{}) bool {
 	if minIP > maxIP {
 		return false
 	}
@@ -32,15 +41,15 @@ func (iptree *IPTree) AddIntIP(minIP uint32, maxIP uint32) bool {
 		if cur == nil {
 			break
 		}
-		curMaxIP := cur.Value.(uint32)
+		tv := cur.Value.(*IPTreeValue)
 		curMinIP := cur.Key.(uint32)
 		// newNode all in existNode, treat as inserted.
-		if newMinIP >= curMinIP && newMaxIP <= curMaxIP {
+		if newMinIP >= curMinIP && newMaxIP <= tv.maxIP {
 			return true
 		}
 		// has no intersection
-		if newMinIP > curMaxIP {
+		if newMinIP > tv.maxIP {
 			cur = cur.Children[1]
 			continue
 		}
@@ -53,27 +62,35 @@ func (iptree *IPTree) AddIntIP(minIP uint32, maxIP uint32) bool {
 		if curMinIP < newMinIP {
 			newMinIP = curMinIP
 		}
-		if curMaxIP > newMaxIP {
+		if tv.maxIP > newMaxIP {
-			newMaxIP = curMaxIP
+			newMaxIP = tv.maxIP
 		}
 		cur = iptree.tree.Root
 	}
 	// put in the tree
-	iptree.tree.Put(newMinIP, newMaxIP)
+	iptree.tree.Put(newMinIP, &IPTreeValue{newMaxIP, v})
 	return true
 }
-func (iptree *IPTree) Add(minIPStr string, maxIPStr string) bool {
+func (iptree *IPTree) Add(minIPStr string, maxIPStr string, v interface{}) bool {
 	var minIP, maxIP uint32
 	binary.Read(bytes.NewBuffer(net.ParseIP(minIPStr).To4()), binary.BigEndian, &minIP)
 	binary.Read(bytes.NewBuffer(net.ParseIP(maxIPStr).To4()), binary.BigEndian, &maxIP)
-	return iptree.AddIntIP(minIP, maxIP)
+	return iptree.AddIntIP(minIP, maxIP, v)
+}
+
+func (iptree *IPTree) Del(minIPStr string, maxIPStr string) {
+	var minIP, maxIP uint32
+	binary.Read(bytes.NewBuffer(net.ParseIP(minIPStr).To4()), binary.BigEndian, &minIP)
+	binary.Read(bytes.NewBuffer(net.ParseIP(maxIPStr).To4()), binary.BigEndian, &maxIP)
+	iptree.DelIntIP(minIP, maxIP)
 }
 func (iptree *IPTree) Contains(ipStr string) bool {
 	var ip uint32
 	binary.Read(bytes.NewBuffer(net.ParseIP(ipStr).To4()), binary.BigEndian, &ip)
-	return iptree.ContainsInt(ip)
+	_, ok := iptree.Load(ip)
+	return ok
 }
 func IsLocalhost(ipStr string) bool {
@@ -83,26 +100,26 @@ func IsLocalhost(ipStr string) bool {
 	return false
 }
-func (iptree *IPTree) ContainsInt(ip uint32) bool {
+func (iptree *IPTree) Load(ip uint32) (interface{}, bool) {
 	iptree.treeMtx.RLock()
 	defer iptree.treeMtx.RUnlock()
 	if iptree.tree == nil {
-		return false
+		return nil, false
 	}
 	n := iptree.tree.Root
 	for n != nil {
-		curMaxIP := n.Value.(uint32)
+		tv := n.Value.(*IPTreeValue)
 		curMinIP := n.Key.(uint32)
 		switch {
-		case ip >= curMinIP && ip <= curMaxIP: // hit
+		case ip >= curMinIP && ip <= tv.maxIP: // hit
-			return true
+			return tv.v, true
 		case ip < curMinIP:
 			n = n.Children[0]
 		default:
 			n = n.Children[1]
 		}
 	}
-	return false
+	return nil, false
 }
 func (iptree *IPTree) Size() int {
@@ -142,12 +159,12 @@ func NewIPTree(ips string) *IPTree {
 			}
 			minIP := ipNet.IP.Mask(ipNet.Mask).String()
 			maxIP := calculateMaxIP(ipNet).String()
-			iptree.Add(minIP, maxIP)
+			iptree.Add(minIP, maxIP, nil)
 		} else if strings.Contains(ip, "-") { // x.x.x.x-y.y.y.y
 			minAndMax := strings.Split(ip, "-")
-			iptree.Add(minAndMax[0], minAndMax[1])
+			iptree.Add(minAndMax[0], minAndMax[1], nil)
 		} else { // single ip
-			iptree.Add(ip, ip)
+			iptree.Add(ip, ip, nil)
 		}
 	}
 	return iptree
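The `AddIntIP` merge loop above repeatedly absorbs every range that intersects the new one, widening `newMinIP`/`newMaxIP` each time, before inserting a single merged node. The same merge-on-insert semantics can be shown with a simplified slice-based stand-in (`RangeSet` is illustrative; the real type is AVL-backed for O(log n) lookups):

```go
package main

import "fmt"

// Range is a closed interval of IPv4 addresses as uint32.
type Range struct{ Min, Max uint32 }

// RangeSet keeps non-overlapping ranges, merging on insert like IPTree.
type RangeSet struct{ rs []Range }

func (s *RangeSet) Add(min, max uint32) bool {
	if min > max {
		return false // mirrors AddIntIP's inverted-segment check
	}
	var out []Range
	for _, r := range s.rs {
		if max < r.Min || min > r.Max { // no intersection: keep as-is
			out = append(out, r)
			continue
		}
		// overlap: widen the new range to absorb the existing one
		if r.Min < min {
			min = r.Min
		}
		if r.Max > max {
			max = r.Max
		}
	}
	s.rs = append(out, Range{min, max})
	return true
}

func (s *RangeSet) Contains(ip uint32) bool {
	for _, r := range s.rs {
		if ip >= r.Min && ip <= r.Max {
			return true
		}
	}
	return false
}

func main() {
	s := &RangeSet{}
	s.Add(10, 20)
	s.Add(15, 30) // overlaps [10,20] -> merged into [10,30]
	fmt.Println(len(s.rs), s.Contains(25), s.Contains(31)) // 1 true false
}
```

Because overlapping inserts always collapse into one node, a lookup can stop at the first range whose `Min` is not above the query — which is exactly why the commit's new `Load` can return a single attached value per hit.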


@@ -41,14 +41,14 @@ func TestAllInputFormat(t *testing.T) {
func TestSingleIP(t *testing.T) { func TestSingleIP(t *testing.T) {
iptree := NewIPTree("") iptree := NewIPTree("")
iptree.Add("219.137.185.70", "219.137.185.70") iptree.Add("219.137.185.70", "219.137.185.70", nil)
wrapTestContains(t, iptree, "219.137.185.70", true) wrapTestContains(t, iptree, "219.137.185.70", true)
wrapTestContains(t, iptree, "219.137.185.71", false) wrapTestContains(t, iptree, "219.137.185.71", false)
} }
func TestWrongSegment(t *testing.T) { func TestWrongSegment(t *testing.T) {
iptree := NewIPTree("") iptree := NewIPTree("")
inserted := iptree.Add("87.251.75.0", "82.251.75.255") inserted := iptree.Add("87.251.75.0", "82.251.75.255", nil)
if inserted { if inserted {
t.Errorf("TestWrongSegment failed\n") t.Errorf("TestWrongSegment failed\n")
} }
@@ -57,20 +57,20 @@ func TestWrongSegment(t *testing.T) {
func TestSegment2(t *testing.T) { func TestSegment2(t *testing.T) {
iptree := NewIPTree("") iptree := NewIPTree("")
iptree.Clear() iptree.Clear()
iptree.Add("10.1.5.50", "10.1.5.100") iptree.Add("10.1.5.50", "10.1.5.100", nil)
iptree.Add("10.1.1.50", "10.1.1.100") iptree.Add("10.1.1.50", "10.1.1.100", nil)
iptree.Add("10.1.2.50", "10.1.2.100") iptree.Add("10.1.2.50", "10.1.2.100", nil)
iptree.Add("10.1.6.50", "10.1.6.100") iptree.Add("10.1.6.50", "10.1.6.100", nil)
iptree.Add("10.1.7.50", "10.1.7.100") iptree.Add("10.1.7.50", "10.1.7.100", nil)
iptree.Add("10.1.3.50", "10.1.3.100") iptree.Add("10.1.3.50", "10.1.3.100", nil)
iptree.Add("10.1.1.1", "10.1.1.10") // no interset iptree.Add("10.1.1.1", "10.1.1.10", nil) // no interset
iptree.Add("10.1.1.200", "10.1.1.250") // no interset iptree.Add("10.1.1.200", "10.1.1.250", nil) // no interset
iptree.Print() iptree.Print()
iptree.Add("10.1.1.80", "10.1.1.90") // all in iptree.Add("10.1.1.80", "10.1.1.90", nil) // all in
iptree.Add("10.1.1.40", "10.1.1.60") // interset iptree.Add("10.1.1.40", "10.1.1.60", nil) // interset
iptree.Print() iptree.Print()
iptree.Add("10.1.1.90", "10.1.1.110") // interset iptree.Add("10.1.1.90", "10.1.1.110", nil) // interset
iptree.Print() iptree.Print()
t.Logf("ipTree size:%d\n", iptree.Size()) t.Logf("ipTree size:%d\n", iptree.Size())
wrapTestContains(t, iptree, "10.1.1.40", true) wrapTestContains(t, iptree, "10.1.1.40", true)
@@ -87,7 +87,7 @@ func TestSegment2(t *testing.T) {
wrapTestContains(t, iptree, "10.1.100.30", false) wrapTestContains(t, iptree, "10.1.100.30", false)
wrapTestContains(t, iptree, "10.1.200.30", false) wrapTestContains(t, iptree, "10.1.200.30", false)
iptree.Add("10.0.0.0", "10.255.255.255") // will merge all segment iptree.Add("10.0.0.0", "10.255.255.255", nil) // will merge all segment
iptree.Print() iptree.Print()
if iptree.Size() != 1 { if iptree.Size() != 1 {
t.Errorf("merge ip segment error\n") t.Errorf("merge ip segment error\n")
@@ -98,17 +98,17 @@ func TestSegment2(t *testing.T) {
func BenchmarkBuildipTree20k(t *testing.B) { func BenchmarkBuildipTree20k(t *testing.B) {
iptree := NewIPTree("") iptree := NewIPTree("")
iptree.Clear() iptree.Clear()
iptree.Add("10.1.5.50", "10.1.5.100") iptree.Add("10.1.5.50", "10.1.5.100", nil)
iptree.Add("10.1.1.50", "10.1.1.100") iptree.Add("10.1.1.50", "10.1.1.100", nil)
iptree.Add("10.1.2.50", "10.1.2.100") iptree.Add("10.1.2.50", "10.1.2.100", nil)
iptree.Add("10.1.6.50", "10.1.6.100") iptree.Add("10.1.6.50", "10.1.6.100", nil)
iptree.Add("10.1.7.50", "10.1.7.100") iptree.Add("10.1.7.50", "10.1.7.100", nil)
iptree.Add("10.1.3.50", "10.1.3.100") iptree.Add("10.1.3.50", "10.1.3.100", nil)
iptree.Add("10.1.1.1", "10.1.1.10") // no interset iptree.Add("10.1.1.1", "10.1.1.10", nil) // no interset
iptree.Add("10.1.1.200", "10.1.1.250") // no interset iptree.Add("10.1.1.200", "10.1.1.250", nil) // no interset
iptree.Add("10.1.1.80", "10.1.1.90") // all in iptree.Add("10.1.1.80", "10.1.1.90", nil) // all in
iptree.Add("10.1.1.40", "10.1.1.60") // interset iptree.Add("10.1.1.40", "10.1.1.60", nil) // interset
iptree.Add("10.1.1.90", "10.1.1.110") // interset iptree.Add("10.1.1.90", "10.1.1.110", nil) // interset
var minIP uint32 var minIP uint32
binary.Read(bytes.NewBuffer(net.ParseIP("10.1.1.1").To4()), binary.BigEndian, &minIP) binary.Read(bytes.NewBuffer(net.ParseIP("10.1.1.1").To4()), binary.BigEndian, &minIP)
@@ -116,13 +116,13 @@ func BenchmarkBuildipTree20k(t *testing.B) {
nodeNum := uint32(10000 * 1) nodeNum := uint32(10000 * 1)
gap := uint32(10) gap := uint32(10)
for i := minIP; i < minIP+nodeNum*gap; i += gap { for i := minIP; i < minIP+nodeNum*gap; i += gap {
iptree.AddIntIP(i, i) iptree.AddIntIP(i, i, nil)
// t.Logf("ipTree size:%d\n", iptree.Size()) // t.Logf("ipTree size:%d\n", iptree.Size())
} }
binary.Read(bytes.NewBuffer(net.ParseIP("100.1.1.1").To4()), binary.BigEndian, &minIP) binary.Read(bytes.NewBuffer(net.ParseIP("100.1.1.1").To4()), binary.BigEndian, &minIP)
// insert 100k block ip segment // insert 100k block ip segment
for i := minIP; i < minIP+nodeNum*gap; i += gap { for i := minIP; i < minIP+nodeNum*gap; i += gap {
iptree.AddIntIP(i, i+5) iptree.AddIntIP(i, i+5, nil)
} }
t.Logf("ipTree size:%d\n", iptree.Size()) t.Logf("ipTree size:%d\n", iptree.Size())
iptree.Clear() iptree.Clear()
@@ -132,17 +132,17 @@ func BenchmarkQuery(t *testing.B) {
	ts := time.Now()
	iptree := NewIPTree("")
	iptree.Clear()
-	iptree.Add("10.1.5.50", "10.1.5.100")
-	iptree.Add("10.1.1.50", "10.1.1.100")
-	iptree.Add("10.1.2.50", "10.1.2.100")
-	iptree.Add("10.1.6.50", "10.1.6.100")
-	iptree.Add("10.1.7.50", "10.1.7.100")
-	iptree.Add("10.1.3.50", "10.1.3.100")
-	iptree.Add("10.1.1.1", "10.1.1.10")    // no intersect
-	iptree.Add("10.1.1.200", "10.1.1.250") // no intersect
-	iptree.Add("10.1.1.80", "10.1.1.90")   // all in
-	iptree.Add("10.1.1.40", "10.1.1.60")   // intersect
-	iptree.Add("10.1.1.90", "10.1.1.110")  // intersect
+	iptree.Add("10.1.5.50", "10.1.5.100", nil)
+	iptree.Add("10.1.1.50", "10.1.1.100", nil)
+	iptree.Add("10.1.2.50", "10.1.2.100", nil)
+	iptree.Add("10.1.6.50", "10.1.6.100", nil)
+	iptree.Add("10.1.7.50", "10.1.7.100", nil)
+	iptree.Add("10.1.3.50", "10.1.3.100", nil)
+	iptree.Add("10.1.1.1", "10.1.1.10", nil)    // no intersect
+	iptree.Add("10.1.1.200", "10.1.1.250", nil) // no intersect
+	iptree.Add("10.1.1.80", "10.1.1.90", nil)   // all in
+	iptree.Add("10.1.1.40", "10.1.1.60", nil)   // intersect
+	iptree.Add("10.1.1.90", "10.1.1.110", nil)  // intersect
	var minIP uint32
	binary.Read(bytes.NewBuffer(net.ParseIP("10.1.1.1").To4()), binary.BigEndian, &minIP)
@@ -150,20 +150,20 @@ func BenchmarkQuery(t *testing.B) {
	nodeNum := uint32(10000 * 1000)
	gap := uint32(10)
	for i := minIP; i < minIP+nodeNum*gap; i += gap {
-		iptree.AddIntIP(i, i)
+		iptree.AddIntIP(i, i, nil)
		// t.Logf("ipTree size:%d\n", iptree.Size())
	}
	binary.Read(bytes.NewBuffer(net.ParseIP("100.1.1.1").To4()), binary.BigEndian, &minIP)
	// insert 100k block ip segment
	for i := minIP; i < minIP+nodeNum*gap; i += gap {
-		iptree.AddIntIP(i, i+5)
+		iptree.AddIntIP(i, i+5, nil)
	}
	t.Logf("ipTree size:%d cost:%dms\n", iptree.Size(), time.Since(ts)/time.Millisecond)
	ts = time.Now()
	// t.ResetTimer()
	queryNum := 100 * 10000
	for i := 0; i < queryNum; i++ {
-		iptree.ContainsInt(minIP + uint32(i))
+		iptree.Load(minIP + uint32(i))
	wrapBenchmarkContains(t, iptree, "10.1.5.55", true)
	wrapBenchmarkContains(t, iptree, "10.1.1.1", true)
	wrapBenchmarkContains(t, iptree, "10.1.5.200", false)

@@ -13,6 +13,7 @@ type LogLevel int
var gLog *logger

const (
+	LvDev   LogLevel = -1
	LvDEBUG LogLevel = iota
	LvINFO
	LvWARN
@@ -32,12 +33,13 @@ func init() {
	loglevel[LvINFO] = "INFO"
	loglevel[LvWARN] = "WARN"
	loglevel[LvERROR] = "ERROR"
+	loglevel[LvDev] = "Dev"
}

const (
-	LogFile = 1 << iota
-	LogConsole
+	LogFile    = 1
+	LogConsole = 1 << 1
)

type logger struct {
@@ -92,6 +94,13 @@ func (l *logger) setLevel(level LogLevel) {
	defer l.mtx.Unlock()
	l.level = level
}

+func (l *logger) setMaxSize(size int64) {
+	l.mtx.Lock()
+	defer l.mtx.Unlock()
+	l.maxLogSize = size
+}
+
func (l *logger) setMode(mode int) {
	l.mtx.Lock()
	defer l.mtx.Unlock()
@@ -139,10 +148,10 @@ func (l *logger) Printf(level LogLevel, format string, params ...interface{}) {
	}
	pidAndLevel := []interface{}{l.pid, loglevel[level]}
	params = append(pidAndLevel, params...)
-	if l.mode & LogFile != 0 {
+	if l.mode&LogFile != 0 {
		l.loggers[0].Printf("%d %s "+format+l.lineEnding, params...)
	}
-	if l.mode & LogConsole != 0 {
+	if l.mode&LogConsole != 0 {
		l.stdLogger.Printf("%d %s "+format+l.lineEnding, params...)
	}
}
@@ -156,10 +165,10 @@ func (l *logger) Println(level LogLevel, params ...interface{}) {
	pidAndLevel := []interface{}{l.pid, " ", loglevel[level], " "}
	params = append(pidAndLevel, params...)
	params = append(params, l.lineEnding)
-	if l.mode & LogFile != 0 {
+	if l.mode&LogFile != 0 {
		l.loggers[0].Print(params...)
	}
-	if l.mode & LogConsole != 0 {
+	if l.mode&LogConsole != 0 {
		l.stdLogger.Print(params...)
	}
}


@@ -88,28 +88,30 @@ func natTest(serverHost string, serverPort int, localPort int) (publicIP string,
	return natRsp.IP, natRsp.Port, nil
}

-func getNATType(host string, udp1 int, udp2 int) (publicIP string, NATType int, hasIPvr int, hasUPNPorNATPMP int, err error) {
+func getNATType(host string, udp1 int, udp2 int) (publicIP string, NATType int, err error) {
	// the random local port may be in use by another process
	localPort := int(rand.Uint32()%15000 + 50000)
-	echoPort := gConf.Network.TCPPort
	ip1, port1, err := natTest(host, udp1, localPort)
	if err != nil {
-		return "", 0, 0, 0, err
+		return "", 0, err
	}
-	hasIPv4, hasUPNPorNATPMP := publicIPTest(ip1, echoPort)
	_, port2, err := natTest(host, udp2, localPort) // 2nd NAT test; no need to test the public IP again
	gLog.Printf(LvDEBUG, "local port:%d nat port:%d", localPort, port2)
	if err != nil {
-		return "", 0, hasIPv4, hasUPNPorNATPMP, err
+		return "", 0, err
	}
	natType := NATSymmetric
	if port1 == port2 {
		natType = NATCone
	}
-	return ip1, natType, hasIPv4, hasUPNPorNATPMP, nil
+	return ip1, natType, nil
}

func publicIPTest(publicIP string, echoPort int) (hasPublicIP int, hasUPNPorNATPMP int) {
+	if publicIP == "" || echoPort == 0 {
+		return
+	}
	var echoConn *net.UDPConn
	gLog.Println(LvDEBUG, "echo server start")
	var err error


@@ -9,6 +9,8 @@ import (
	"time"
)

+var GNetwork *P2PNetwork
+
func Run() {
	rand.Seed(time.Now().UnixNano())
	baseDir := filepath.Dir(os.Args[0])
@@ -29,7 +31,7 @@ func Run() {
	} else {
		installByFilename()
	}
-	parseParams("")
+	parseParams("", "")
	gLog.Println(LvINFO, "openp2p start. version: ", OpenP2PVersion)
	gLog.Println(LvINFO, "Contact: QQ group 16947733, Email openp2p.cn@gmail.com")
@@ -45,8 +47,8 @@ func Run() {
	if err != nil {
		gLog.Println(LvINFO, "setRLimit error:", err)
	}
-	network := P2PNetworkInstance(&gConf.Network)
-	if ok := network.Connect(30000); !ok {
+	GNetwork = P2PNetworkInstance()
+	if ok := GNetwork.Connect(30000); !ok {
		gLog.Println(LvERROR, "P2PNetwork login error")
		return
	}
@@ -55,34 +57,57 @@ func Run() {
	<-forever
}

-var network *P2PNetwork
-
// for Android app
// gomobile does not support uint64 exported to Java
func RunAsModule(baseDir string, token string, bw int, logLevel int) *P2PNetwork {
	rand.Seed(time.Now().UnixNano())
	os.Chdir(baseDir) // for system service
-	gLog = NewLogger(baseDir, ProductName, LvDEBUG, 1024*1024, LogFile|LogConsole)
-	parseParams("")
+	gLog = NewLogger(baseDir, ProductName, LvINFO, 1024*1024, LogFile|LogConsole)
+	parseParams("", "")
	n, err := strconv.ParseUint(token, 10, 64)
-	if err == nil {
+	if err == nil && n > 0 {
		gConf.setToken(n)
	}
-	gLog.setLevel(LogLevel(logLevel))
+	if n <= 0 && gConf.Network.Token == 0 { // no token provided
+		return nil
+	}
+	// gLog.setLevel(LogLevel(logLevel))
	gConf.setShareBandwidth(bw)
	gLog.Println(LvINFO, "openp2p start. version: ", OpenP2PVersion)
	gLog.Println(LvINFO, "Contact: QQ group 16947733, Email openp2p.cn@gmail.com")
	gLog.Println(LvINFO, &gConf)
-	network = P2PNetworkInstance(&gConf.Network)
-	if ok := network.Connect(30000); !ok {
+	GNetwork = P2PNetworkInstance()
+	if ok := GNetwork.Connect(30000); !ok {
		gLog.Println(LvERROR, "P2PNetwork login error")
		return nil
	}
	// gLog.Println(LvINFO, "waiting for connection...")
-	return network
+	return GNetwork
}

+func RunCmd(cmd string) {
+	rand.Seed(time.Now().UnixNano())
+	baseDir := filepath.Dir(os.Args[0])
+	os.Chdir(baseDir) // for system service
+	gLog = NewLogger(baseDir, ProductName, LvINFO, 1024*1024, LogFile|LogConsole)
+	parseParams("", cmd)
+	setFirewall()
+	err := setRLimit()
+	if err != nil {
+		gLog.Println(LvINFO, "setRLimit error:", err)
+	}
+	GNetwork = P2PNetworkInstance()
+	if ok := GNetwork.Connect(30000); !ok {
+		gLog.Println(LvERROR, "P2PNetwork login error")
+		return
+	}
+	forever := make(chan bool)
+	<-forever
+}
+
func GetToken(baseDir string) string {
@@ -90,3 +115,7 @@ func GetToken(baseDir string) string {
	gConf.load()
	return fmt.Sprintf("%d", gConf.Network.Token)
}
+
+func Stop() {
+	os.Exit(0)
+}

core/optun.go (new file, 20 lines)

@@ -0,0 +1,20 @@
package openp2p
import (
"github.com/openp2p-cn/wireguard-go/tun"
)
var AndroidSDWANConfig chan []byte
type optun struct {
tunName string
dev tun.Device
}
func (t *optun) Stop() error {
t.dev.Close()
return nil
}
func init() {
AndroidSDWANConfig = make(chan []byte, 1)
}

core/optun_android.go (new file, 85 lines)

@@ -0,0 +1,85 @@
// optun_android.go
//go:build android
// +build android
package openp2p
import (
"net"
)
const (
tunIfaceName = "optun"
PIHeaderSize = 0
)
var AndroidReadTun chan []byte // TODO: multi channel
var AndroidWriteTun chan []byte
func (t *optun) Start(localAddr string, detail *SDWANInfo) error {
return nil
}
func (t *optun) Read(bufs [][]byte, sizes []int, offset int) (n int, err error) {
bufs[0] = <-AndroidReadTun
sizes[0] = len(bufs[0])
return 1, nil
}
func (t *optun) Write(bufs [][]byte, offset int) (int, error) {
AndroidWriteTun <- bufs[0]
return len(bufs[0]), nil
}
func AndroidRead(data []byte, len int) {
head := PacketHeader{}
parseHeader(data, &head)
gLog.Printf(LvDev, "AndroidRead tun dst ip=%s,len=%d", net.IP{byte(head.dst >> 24), byte(head.dst >> 16), byte(head.dst >> 8), byte(head.dst)}.String(), len)
buf := make([]byte, len)
copy(buf, data)
AndroidReadTun <- buf
}
func AndroidWrite(buf []byte) int {
p := <-AndroidWriteTun
copy(buf, p)
return len(p)
}
func GetAndroidSDWANConfig(buf []byte) int {
p := <-AndroidSDWANConfig
copy(buf, p)
gLog.Printf(LvINFO, "AndroidSDWANConfig=%s", p)
return len(p)
}
func GetAndroidNodeName() string {
gLog.Printf(LvINFO, "GetAndroidNodeName=%s", gConf.Network.Node)
return gConf.Network.Node
}
func setTunAddr(ifname, localAddr, remoteAddr string, wintun interface{}) error {
// TODO:
return nil
}
func addRoute(dst, gw, ifname string) error {
// TODO:
return nil
}
func delRoute(dst, gw string) error {
// TODO:
return nil
}
func delRoutesByGateway(gateway string) error {
// TODO:
return nil
}
func init() {
AndroidReadTun = make(chan []byte, 1000)
AndroidWriteTun = make(chan []byte, 1000)
}

core/optun_darwin.go (new file, 87 lines)

@@ -0,0 +1,87 @@
package openp2p
import (
"fmt"
"net"
"os/exec"
"strings"
"github.com/openp2p-cn/wireguard-go/tun"
)
const (
tunIfaceName = "utun"
PIHeaderSize = 4 // utun has no IFF_NO_PI
)
func (t *optun) Start(localAddr string, detail *SDWANInfo) error {
var err error
t.tunName = tunIfaceName
t.dev, err = tun.CreateTUN(t.tunName, 1420)
if err != nil {
return err
}
t.tunName, _ = t.dev.Name()
return nil
}
func (t *optun) Read(bufs [][]byte, sizes []int, offset int) (n int, err error) {
return t.dev.Read(bufs, sizes, offset)
}
func (t *optun) Write(bufs [][]byte, offset int) (int, error) {
return t.dev.Write(bufs, offset)
}
func setTunAddr(ifname, localAddr, remoteAddr string, wintun interface{}) error {
li, _, err := net.ParseCIDR(localAddr)
if err != nil {
return fmt.Errorf("parse local addr fail:%s", err)
}
ri, _, err := net.ParseCIDR(remoteAddr)
if err != nil {
return fmt.Errorf("parse remote addr fail:%s", err)
}
err = exec.Command("ifconfig", ifname, "inet", li.String(), ri.String(), "up").Run()
return err
}
func addRoute(dst, gw, ifname string) error {
err := exec.Command("route", "add", dst, gw).Run()
return err
}
func delRoute(dst, gw string) error {
err := exec.Command("route", "delete", dst, gw).Run()
return err
}
func delRoutesByGateway(gateway string) error {
cmd := exec.Command("netstat", "-rn")
output, err := cmd.Output()
if err != nil {
return err
}
lines := strings.Split(string(output), "\n")
for _, line := range lines {
if !strings.Contains(line, gateway) {
continue
}
fields := strings.Fields(line)
if len(fields) >= 7 && fields[0] == "default" && fields[len(fields)-1] == gateway {
delCmd := exec.Command("route", "delete", "default", gateway)
err := delCmd.Run()
if err != nil {
return err
}
fmt.Printf("Delete route ok: %s %s\n", "default", gateway)
}
}
return nil
}
func addTunAddr(localAddr, remoteAddr string) error {
return nil
}
func delTunAddr(localAddr, remoteAddr string) error {
return nil
}

core/optun_linux.go (new file, 133 lines)

@@ -0,0 +1,133 @@
//go:build !android
// +build !android
// optun_linux.go
package openp2p
import (
"fmt"
"net"
"os/exec"
"strings"
"github.com/openp2p-cn/wireguard-go/tun"
"github.com/vishvananda/netlink"
)
const (
tunIfaceName = "optun"
PIHeaderSize = 0
)
var previousIP = ""
func (t *optun) Start(localAddr string, detail *SDWANInfo) error {
var err error
t.tunName = tunIfaceName
t.dev, err = tun.CreateTUN(t.tunName, 1420)
if err != nil {
return err
}
return nil
}
func (t *optun) Read(bufs [][]byte, sizes []int, offset int) (n int, err error) {
return t.dev.Read(bufs, sizes, offset)
}
func (t *optun) Write(bufs [][]byte, offset int) (int, error) {
return t.dev.Write(bufs, offset)
}
func setTunAddr(ifname, localAddr, remoteAddr string, wintun interface{}) error {
ifce, err := netlink.LinkByName(ifname)
if err != nil {
return err
}
netlink.LinkSetMTU(ifce, 1375)
netlink.LinkSetTxQLen(ifce, 100)
netlink.LinkSetUp(ifce)
ln, err := netlink.ParseIPNet(localAddr)
if err != nil {
return err
}
ln.Mask = net.CIDRMask(32, 32)
rn, err := netlink.ParseIPNet(remoteAddr)
if err != nil {
return err
}
rn.Mask = net.CIDRMask(32, 32)
addr := &netlink.Addr{
IPNet: ln,
Peer: rn,
}
if previousIP != "" {
lnDel, err := netlink.ParseIPNet(previousIP)
if err != nil {
return err
}
lnDel.Mask = net.CIDRMask(32, 32)
addrDel := &netlink.Addr{
IPNet: lnDel,
Peer: rn,
}
netlink.AddrDel(ifce, addrDel)
}
previousIP = localAddr
return netlink.AddrAdd(ifce, addr)
}
func addRoute(dst, gw, ifname string) error {
_, networkid, err := net.ParseCIDR(dst)
if err != nil {
return err
}
ipGW := net.ParseIP(gw)
if ipGW == nil {
return fmt.Errorf("parse gateway %s failed", gw)
}
route := &netlink.Route{
Dst: networkid,
Gw: ipGW,
}
return netlink.RouteAdd(route)
}
func delRoute(dst, gw string) error {
_, networkid, err := net.ParseCIDR(dst)
if err != nil {
return err
}
route := &netlink.Route{
Dst: networkid,
}
return netlink.RouteDel(route)
}
func delRoutesByGateway(gateway string) error {
cmd := exec.Command("route", "-n")
output, err := cmd.Output()
if err != nil {
return err
}
lines := strings.Split(string(output), "\n")
for _, line := range lines {
if !strings.Contains(line, gateway) {
continue
}
fields := strings.Fields(line)
if len(fields) >= 8 && fields[1] == "0.0.0.0" && fields[7] == gateway {
delCmd := exec.Command("route", "del", "-net", fields[0], "gw", gateway)
err := delCmd.Run()
if err != nil {
return err
}
fmt.Printf("Delete route ok: %s %s %s\n", fields[0], fields[1], gateway)
}
}
return nil
}

core/optun_other.go (new file, 42 lines)

@@ -0,0 +1,42 @@
//go:build !linux && !windows && !darwin
// +build !linux,!windows,!darwin
package openp2p
import "github.com/openp2p-cn/wireguard-go/tun"
const (
tunIfaceName = "optun"
PIHeaderSize = 0
)
func (t *optun) Start(localAddr string, detail *SDWANInfo) error {
var err error
t.tunName = tunIfaceName
t.dev, err = tun.CreateTUN(t.tunName, 1420)
if err != nil {
return err
}
err = setTunAddr(t.tunName, localAddr, detail.Gateway, t.dev)
if err != nil {
return err
}
return nil
}
func addRoute(dst, gw, ifname string) error {
return nil
}
func delRoute(dst, gw string) error {
return nil
}
func addTunAddr(localAddr, remoteAddr string) error {
return nil
}
func delTunAddr(localAddr, remoteAddr string) error {
return nil
}

core/optun_windows.go (new file, 142 lines)

@@ -0,0 +1,142 @@
package openp2p
import (
"fmt"
"net"
"net/netip"
"os"
"os/exec"
"path/filepath"
"runtime"
"strconv"
"strings"
"github.com/openp2p-cn/wireguard-go/tun"
"golang.org/x/sys/windows"
"golang.zx2c4.com/wireguard/windows/tunnel/winipcfg"
)
const (
tunIfaceName = "optun"
PIHeaderSize = 0
)
func (t *optun) Start(localAddr string, detail *SDWANInfo) error {
// check wintun.dll
tmpFile := filepath.Dir(os.Args[0]) + "/wintun.dll"
fs, err := os.Stat(tmpFile)
if err != nil || fs.Size() == 0 {
url := fmt.Sprintf("https://openp2p.cn/download/v1/latest/wintun/%s/wintun.dll", runtime.GOARCH)
err = downloadFile(url, "", tmpFile)
if err != nil {
os.Remove(tmpFile)
return err
}
}
t.tunName = tunIfaceName
uuid := &windows.GUID{
Data1: 0xf411e821,
Data2: 0xb310,
Data3: 0x4567,
Data4: [8]byte{0x80, 0x42, 0x83, 0x7e, 0xf4, 0x56, 0xce, 0x13},
}
t.dev, err = tun.CreateTUNWithRequestedGUID(t.tunName, uuid, 1420)
if err != nil { // retry
t.dev, err = tun.CreateTUNWithRequestedGUID(t.tunName, uuid, 1420)
}
if err != nil {
return err
}
return nil
}
func (t *optun) Read(bufs [][]byte, sizes []int, offset int) (n int, err error) {
return t.dev.Read(bufs, sizes, offset)
}
func (t *optun) Write(bufs [][]byte, offset int) (int, error) {
return t.dev.Write(bufs, offset)
}
func setTunAddr(ifname, localAddr, remoteAddr string, wintun interface{}) error {
nativeTunDevice := wintun.(*tun.NativeTun)
link := winipcfg.LUID(nativeTunDevice.LUID())
ip, err := netip.ParsePrefix(localAddr)
if err != nil {
gLog.Printf(LvERROR, "ParsePrefix error:%s, luid:%d,localAddr:%s", err, nativeTunDevice.LUID(), localAddr)
return err
}
err = link.SetIPAddresses([]netip.Prefix{ip})
if err != nil {
gLog.Printf(LvERROR, "SetIPAddresses error:%s, netip.Prefix:%+v", err, []netip.Prefix{ip})
return err
}
return nil
}
func addRoute(dst, gw, ifname string) error {
_, dstNet, err := net.ParseCIDR(dst)
if err != nil {
return err
}
i, err := net.InterfaceByName(ifname)
if err != nil {
return err
}
params := make([]string, 0)
params = append(params, "add")
params = append(params, dstNet.IP.String())
params = append(params, "mask")
params = append(params, net.IP(dstNet.Mask).String())
params = append(params, gw)
params = append(params, "if")
params = append(params, strconv.Itoa(i.Index))
// gLogger.Println(LevelINFO, "windows add route params:", params)
execCommand("route", true, params...)
return nil
}
func delRoute(dst, gw string) error {
_, dstNet, err := net.ParseCIDR(dst)
if err != nil {
return err
}
params := make([]string, 0)
params = append(params, "delete")
params = append(params, dstNet.IP.String())
params = append(params, "mask")
params = append(params, net.IP(dstNet.Mask).String())
params = append(params, gw)
// gLogger.Println(LevelINFO, "windows delete route params:", params)
execCommand("route", true, params...)
return nil
}
func delRoutesByGateway(gateway string) error {
cmd := exec.Command("route", "print", "-4")
output, err := cmd.Output()
if err != nil {
return err
}
lines := strings.Split(string(output), "\n")
for _, line := range lines {
if !strings.Contains(line, gateway) {
continue
}
fields := strings.Fields(line)
if len(fields) >= 5 {
cmd := exec.Command("route", "delete", fields[0], "mask", fields[1], gateway)
err := cmd.Run()
if err != nil {
fmt.Println("Delete route error:", err)
}
fmt.Printf("Delete route ok: %s %s %s\n", fields[0], fields[1], gateway)
}
}
return nil
}


@@ -23,15 +23,16 @@ func (e *DeadlineExceededError) Temporary() bool { return true }

// implement io.Writer
type overlayConn struct {
-	tunnel      *P2PTunnel
+	tunnel      *P2PTunnel // TODO: del
+	app         *p2pApp
	connTCP     net.Conn
	id          uint64
	rtid        uint64
	running     bool
	isClient    bool
-	appID       uint64
-	appKey      uint64
-	appKeyBytes []byte
+	appID       uint64 // TODO: del
+	appKey      uint64 // TODO: del
+	appKeyBytes []byte // TODO: del
	// for udp
	connUDP    *net.UDPConn
	remoteAddr net.Addr
@@ -65,15 +66,16 @@ func (oConn *overlayConn) run() {
		payload, _ = encryptBytes(oConn.appKeyBytes, encryptData, readBuff[:dataLen], dataLen)
	}
	writeBytes := append(tunnelHead.Bytes(), payload...)
+	// TODO: app.write
	if oConn.rtid == 0 {
		oConn.tunnel.conn.WriteBytes(MsgP2P, MsgOverlayData, writeBytes)
-		gLog.Printf(LvDEBUG, "write overlay data to tid:%d,oid:%d bodylen=%d", oConn.tunnel.id, oConn.id, len(writeBytes))
+		gLog.Printf(LvDev, "write overlay data to tid:%d,oid:%d bodylen=%d", oConn.tunnel.id, oConn.id, len(writeBytes))
	} else {
		// write relay data
		all := append(relayHead.Bytes(), encodeHeader(MsgP2P, MsgOverlayData, uint32(len(writeBytes)))...)
		all = append(all, writeBytes...)
		oConn.tunnel.conn.WriteBytes(MsgP2P, MsgRelayData, all)
-		gLog.Printf(LvDEBUG, "write relay data to tid:%d,rtid:%d,oid:%d bodylen=%d", oConn.tunnel.id, oConn.rtid, oConn.id, len(writeBytes))
+		gLog.Printf(LvDev, "write relay data to tid:%d,rtid:%d,oid:%d bodylen=%d", oConn.tunnel.id, oConn.rtid, oConn.id, len(writeBytes))
	}
}
if oConn.connTCP != nil {
@@ -85,14 +87,7 @@ func (oConn *overlayConn) run() {
	oConn.tunnel.overlayConns.Delete(oConn.id)
	// notify peer disconnect
	req := OverlayDisconnectReq{ID: oConn.id}
-	if oConn.rtid == 0 {
-		oConn.tunnel.conn.WriteMessage(MsgP2P, MsgOverlayDisconnectReq, &req)
-	} else {
-		// write relay data
-		msg, _ := newMessage(MsgP2P, MsgOverlayDisconnectReq, &req)
-		msgWithHead := append(relayHead.Bytes(), msg...)
-		oConn.tunnel.conn.WriteBytes(MsgP2P, MsgRelayData, msgWithHead)
-	}
+	oConn.tunnel.WriteMessage(oConn.rtid, MsgP2P, MsgOverlayDisconnectReq, &req)
}

func (oConn *overlayConn) Read(reuseBuff []byte) (buff []byte, dataLen int, err error) {
@@ -163,11 +158,11 @@ func (oConn *overlayConn) Close() (err error) {
	oConn.running = false
	if oConn.connTCP != nil {
		oConn.connTCP.Close()
-		oConn.connTCP = nil
+		// oConn.connTCP = nil
	}
	if oConn.connUDP != nil {
		oConn.connUDP.Close()
-		oConn.connUDP = nil
+		// oConn.connUDP = nil
	}
	return nil
}


@@ -16,36 +16,367 @@ type p2pApp struct {
	config      AppConfig
	listener    net.Listener
	listenerUDP *net.UDPConn
-	tunnel      *P2PTunnel
-	iptree      *IPTree
+	directTunnel *P2PTunnel
+	relayTunnel  *P2PTunnel
+	tunnelMtx    sync.Mutex
+	iptree       *IPTree // for whitelist
	rtid      uint64 // relay tunnelID
	relayNode string
-	relayMode string
-	hbTime    time.Time
+	relayMode   string // public/private
+	hbTimeRelay time.Time
	hbMtx   sync.Mutex
	running bool
	id      uint64
-	key     uint64
+	key     uint64 // aes
	wg      sync.WaitGroup
relayHead *bytes.Buffer
once sync.Once
// for relayTunnel
retryRelayNum int
retryRelayTime time.Time
nextRetryRelayTime time.Time
errMsg string
connectTime time.Time
}
func (app *p2pApp) Tunnel() *P2PTunnel {
app.tunnelMtx.Lock()
defer app.tunnelMtx.Unlock()
if app.directTunnel != nil {
return app.directTunnel
}
return app.relayTunnel
}
func (app *p2pApp) DirectTunnel() *P2PTunnel {
app.tunnelMtx.Lock()
defer app.tunnelMtx.Unlock()
return app.directTunnel
}
func (app *p2pApp) setDirectTunnel(t *P2PTunnel) {
app.tunnelMtx.Lock()
defer app.tunnelMtx.Unlock()
app.directTunnel = t
}
func (app *p2pApp) RelayTunnel() *P2PTunnel {
app.tunnelMtx.Lock()
defer app.tunnelMtx.Unlock()
return app.relayTunnel
}
func (app *p2pApp) setRelayTunnel(t *P2PTunnel) {
app.tunnelMtx.Lock()
defer app.tunnelMtx.Unlock()
app.relayTunnel = t
}
func (app *p2pApp) isDirect() bool {
return app.directTunnel != nil
}
func (app *p2pApp) RelayTunnelID() uint64 {
if app.isDirect() {
return 0
}
return app.rtid
}
func (app *p2pApp) ConnectTime() time.Time {
if app.isDirect() {
return app.config.connectTime
}
return app.connectTime
}
func (app *p2pApp) RetryTime() time.Time {
if app.isDirect() {
return app.config.retryTime
}
return app.retryRelayTime
}
func (app *p2pApp) checkP2PTunnel() error {
for app.running {
app.checkDirectTunnel()
app.checkRelayTunnel()
time.Sleep(time.Second * 3)
}
return nil
}
func (app *p2pApp) directRetryLimit() int {
if app.config.peerIP == gConf.Network.publicIP && compareVersion(app.config.peerVersion, SupportIntranetVersion) >= 0 {
return retryLimit
}
if IsIPv6(app.config.peerIPv6) && IsIPv6(gConf.IPv6()) {
return retryLimit
}
if app.config.hasIPv4 == 1 || gConf.Network.hasIPv4 == 1 || app.config.hasUPNPorNATPMP == 1 || gConf.Network.hasUPNPorNATPMP == 1 {
return retryLimit
}
if gConf.Network.natType == NATCone && app.config.peerNatType == NATCone {
return retryLimit
}
if app.config.peerNatType == NATSymmetric && gConf.Network.natType == NATSymmetric {
return 0
}
return retryLimit / 10 // c2s or s2c
}
func (app *p2pApp) checkDirectTunnel() error {
if app.config.ForceRelay == 1 && app.config.RelayNode != app.config.PeerNode {
return nil
}
if app.DirectTunnel() != nil && app.DirectTunnel().isActive() {
return nil
}
if app.config.nextRetryTime.After(time.Now()) || app.config.Enabled == 0 || app.config.retryNum >= app.directRetryLimit() {
return nil
}
if time.Now().Add(-time.Minute * 15).After(app.config.retryTime) { // run normally 15min, reset retrynum
app.config.retryNum = 1
}
if app.config.retryNum > 0 { // don't log a reconnect message on the first attempt
gLog.Printf(LvINFO, "detect app %s appid:%d disconnect, reconnecting the %d times...", app.config.PeerNode, app.id, app.config.retryNum)
}
app.config.retryNum++
app.config.retryTime = time.Now()
app.config.nextRetryTime = time.Now().Add(retryInterval)
app.config.connectTime = time.Now()
err := app.buildDirectTunnel()
if err != nil {
app.config.errMsg = err.Error()
if err == ErrPeerOffline && app.config.retryNum > 2 { // stop retry, waiting for online
app.config.retryNum = retryLimit
gLog.Printf(LvINFO, " %s offline, it will auto reconnect when peer node online", app.config.PeerNode)
}
if err == ErrBuildTunnelBusy {
app.config.retryNum--
}
}
if app.Tunnel() != nil {
app.once.Do(func() {
go app.listen()
// memapp also need
go app.relayHeartbeatLoop()
})
}
return nil
}
func (app *p2pApp) buildDirectTunnel() error {
relayNode := ""
peerNatType := NATUnknown
peerIP := ""
errMsg := ""
var t *P2PTunnel
var err error
pn := GNetwork
initErr := pn.requestPeerInfo(&app.config)
if initErr != nil {
gLog.Printf(LvERROR, "%s init error:%s", app.config.PeerNode, initErr)
return initErr
}
t, err = pn.addDirectTunnel(app.config, 0)
if t != nil {
peerNatType = t.config.peerNatType
peerIP = t.config.peerIP
}
if err != nil {
errMsg = err.Error()
}
req := ReportConnect{
Error: errMsg,
Protocol: app.config.Protocol,
SrcPort: app.config.SrcPort,
NatType: gConf.Network.natType,
PeerNode: app.config.PeerNode,
DstPort: app.config.DstPort,
DstHost: app.config.DstHost,
PeerNatType: peerNatType,
PeerIP: peerIP,
ShareBandwidth: gConf.Network.ShareBandwidth,
RelayNode: relayNode,
Version: OpenP2PVersion,
}
pn.write(MsgReport, MsgReportConnect, &req)
if err != nil {
return err
}
// if rtid != 0 || t.conn.Protocol() == "tcp" {
// sync appkey
if t == nil {
return err
}
syncKeyReq := APPKeySync{
AppID: app.id,
AppKey: app.key,
}
gLog.Printf(LvDEBUG, "sync appkey direct to %s", app.config.PeerNode)
pn.push(app.config.PeerNode, MsgPushAPPKey, &syncKeyReq)
app.setDirectTunnel(t)
// if memapp notify peer addmemapp
if app.config.SrcPort == 0 {
req := ServerSideSaveMemApp{From: gConf.Network.Node, Node: gConf.Network.Node, TunnelID: t.id, RelayTunnelID: 0, AppID: app.id}
pn.push(app.config.PeerNode, MsgPushServerSideSaveMemApp, &req)
gLog.Printf(LvDEBUG, "push %s ServerSideSaveMemApp: %s", app.config.PeerNode, prettyJson(req))
}
gLog.Printf(LvDEBUG, "%s use tunnel %d", app.config.AppName, t.id)
return nil
}
func (app *p2pApp) checkRelayTunnel() error {
// if app.config.ForceRelay == 1 && (gConf.sdwan.CentralNode == app.config.PeerNode && compareVersion(app.config.peerVersion, SupportDualTunnelVersion) < 0) {
if app.config.SrcPort == 0 && (gConf.sdwan.CentralNode == app.config.PeerNode || gConf.sdwan.CentralNode == gConf.Network.Node) { // memapp central node not build relay tunnel
return nil
}
app.hbMtx.Lock()
if app.RelayTunnel() != nil && time.Now().Before(app.hbTimeRelay.Add(TunnelHeartbeatTime*2)) { // must check app.hbtime instead of relayTunnel
app.hbMtx.Unlock()
return nil
}
app.hbMtx.Unlock()
if app.nextRetryRelayTime.After(time.Now()) || app.config.Enabled == 0 || app.retryRelayNum >= retryLimit {
return nil
}
if time.Now().Add(-time.Minute * 15).After(app.retryRelayTime) { // run normally 15min, reset retrynum
app.retryRelayNum = 1
}
if app.retryRelayNum > 0 { // don't log a reconnect message on the first attempt
gLog.Printf(LvINFO, "detect app %s appid:%d relay disconnect, reconnecting the %d times...", app.config.PeerNode, app.id, app.retryRelayNum)
}
app.setRelayTunnel(nil) // reset relayTunnel
app.retryRelayNum++
app.retryRelayTime = time.Now()
app.nextRetryRelayTime = time.Now().Add(retryInterval)
app.connectTime = time.Now()
err := app.buildRelayTunnel()
if err != nil {
app.errMsg = err.Error()
if err == ErrPeerOffline && app.retryRelayNum > 2 { // stop retry, waiting for online
app.retryRelayNum = retryLimit
gLog.Printf(LvINFO, " %s offline, it will auto reconnect when peer node online", app.config.PeerNode)
}
}
if app.Tunnel() != nil {
app.once.Do(func() {
go app.listen()
// memapp also need
go app.relayHeartbeatLoop()
})
}
return nil
}
func (app *p2pApp) buildRelayTunnel() error {
var rtid uint64
relayNode := ""
relayMode := ""
peerNatType := NATUnknown
peerIP := ""
errMsg := ""
var t *P2PTunnel
var err error
pn := GNetwork
config := app.config
initErr := pn.requestPeerInfo(&config)
if initErr != nil {
gLog.Printf(LvERROR, "%s init error:%s", config.PeerNode, initErr)
return initErr
}
t, rtid, relayMode, err = pn.addRelayTunnel(config)
if t != nil {
relayNode = t.config.PeerNode
}
if err != nil {
errMsg = err.Error()
}
req := ReportConnect{
Error: errMsg,
Protocol: config.Protocol,
SrcPort: config.SrcPort,
NatType: gConf.Network.natType,
PeerNode: config.PeerNode,
DstPort: config.DstPort,
DstHost: config.DstHost,
PeerNatType: peerNatType,
PeerIP: peerIP,
ShareBandwidth: gConf.Network.ShareBandwidth,
RelayNode: relayNode,
Version: OpenP2PVersion,
}
pn.write(MsgReport, MsgReportConnect, &req)
if err != nil {
return err
}
// if rtid != 0 || t.conn.Protocol() == "tcp" {
// sync appkey
syncKeyReq := APPKeySync{
AppID: app.id,
AppKey: app.key,
}
gLog.Printf(LvDEBUG, "sync appkey relay to %s", config.PeerNode)
pn.push(config.PeerNode, MsgPushAPPKey, &syncKeyReq)
app.setRelayTunnelID(rtid)
app.setRelayTunnel(t)
app.relayNode = relayNode
app.relayMode = relayMode
app.hbTimeRelay = time.Now()
// if memapp notify peer addmemapp
if config.SrcPort == 0 {
req := ServerSideSaveMemApp{From: gConf.Network.Node, Node: relayNode, TunnelID: rtid, RelayTunnelID: t.id, AppID: app.id, RelayMode: relayMode}
pn.push(config.PeerNode, MsgPushServerSideSaveMemApp, &req)
gLog.Printf(LvDEBUG, "push %s relay ServerSideSaveMemApp: %s", config.PeerNode, prettyJson(req))
}
gLog.Printf(LvDEBUG, "%s use tunnel %d", app.config.AppName, t.id)
return nil
}
func (app *p2pApp) buildOfficialTunnel() error {
return nil
}
// cache relayHead, refresh when rtid change
func (app *p2pApp) RelayHead() *bytes.Buffer {
if app.relayHead == nil {
app.relayHead = new(bytes.Buffer)
binary.Write(app.relayHead, binary.LittleEndian, app.rtid)
}
return app.relayHead
}
func (app *p2pApp) setRelayTunnelID(rtid uint64) {
app.rtid = rtid
app.relayHead = new(bytes.Buffer)
binary.Write(app.relayHead, binary.LittleEndian, app.rtid)
 }

 func (app *p2pApp) isActive() bool {
-	if app.tunnel == nil {
+	if app.Tunnel() == nil {
+		// gLog.Printf(LvDEBUG, "isActive app.tunnel==nil")
 		return false
 	}
-	if app.rtid == 0 { // direct mode app heartbeat equals to tunnel heartbeat
-		return app.tunnel.isActive()
+	if app.isDirect() { // direct mode app heartbeat equals to tunnel heartbeat
+		return app.Tunnel().isActive()
 	}
 	// relay mode calc app heartbeat
 	app.hbMtx.Lock()
 	defer app.hbMtx.Unlock()
-	return time.Now().Before(app.hbTime.Add(TunnelIdleTimeout))
+	res := time.Now().Before(app.hbTimeRelay.Add(TunnelHeartbeatTime * 2))
+	// if !res {
+	// 	gLog.Printf(LvDEBUG, "%d app isActive false. peer=%s", app.id, app.config.PeerNode)
+	// }
+	return res
 }

 func (app *p2pApp) updateHeartbeat() {
 	app.hbMtx.Lock()
 	defer app.hbMtx.Unlock()
-	app.hbTime = time.Now()
+	app.hbTimeRelay = time.Now()
 }

 func (app *p2pApp) listenTCP() error {
@@ -61,6 +392,7 @@ func (app *p2pApp) listenTCP() error {
 		gLog.Printf(LvERROR, "listen error:%s", err)
 		return err
 	}
+	defer app.listener.Close()
 	for app.running {
 		conn, err := app.listener.Accept()
 		if err != nil {
@@ -69,6 +401,11 @@ func (app *p2pApp) listenTCP() error {
 			}
 			break
 		}
+		if app.Tunnel() == nil {
+			gLog.Printf(LvDEBUG, "srcPort=%d, app.Tunnel()==nil, not ready", app.config.SrcPort)
+			time.Sleep(time.Second)
+			continue
+		}
 		// check white list
 		if app.config.Whitelist != "" {
 			remoteIP := conn.RemoteAddr().(*net.TCPAddr).IP.String()
@@ -79,15 +416,18 @@ func (app *p2pApp) listenTCP() error {
 			}
 		}
 		oConn := overlayConn{
-			tunnel:   app.tunnel,
+			tunnel:   app.Tunnel(),
+			app:      app,
 			connTCP:  conn,
 			id:       rand.Uint64(),
 			isClient: true,
-			rtid:     app.rtid,
 			appID:    app.id,
 			appKey:   app.key,
 			running:  true,
 		}
+		if !app.isDirect() {
+			oConn.rtid = app.rtid
+		}
 		// pre-calc key bytes for encrypt
 		if oConn.appKey != 0 {
 			encryptKey := make([]byte, AESKeySize)
@@ -95,26 +435,20 @@ func (app *p2pApp) listenTCP() error {
 			binary.LittleEndian.PutUint64(encryptKey[8:], oConn.appKey)
 			oConn.appKeyBytes = encryptKey
 		}
-		app.tunnel.overlayConns.Store(oConn.id, &oConn)
+		app.Tunnel().overlayConns.Store(oConn.id, &oConn)
 		gLog.Printf(LvDEBUG, "Accept TCP overlayID:%d, %s", oConn.id, oConn.connTCP.RemoteAddr())
 		// tell peer connect
 		req := OverlayConnectReq{ID: oConn.id,
-			Token:    app.tunnel.pn.config.Token,
+			Token:    gConf.Network.Token,
 			DstIP:    app.config.DstHost,
 			DstPort:  app.config.DstPort,
 			Protocol: app.config.Protocol,
 			AppID:    app.id,
 		}
-		if app.rtid == 0 {
-			app.tunnel.conn.WriteMessage(MsgP2P, MsgOverlayConnectReq, &req)
-		} else {
-			req.RelayTunnelID = app.tunnel.id
-			relayHead := new(bytes.Buffer)
-			binary.Write(relayHead, binary.LittleEndian, app.rtid)
-			msg, _ := newMessage(MsgP2P, MsgOverlayConnectReq, &req)
-			msgWithHead := append(relayHead.Bytes(), msg...)
-			app.tunnel.conn.WriteBytes(MsgP2P, MsgRelayData, msgWithHead)
+		if !app.isDirect() {
+			req.RelayTunnelID = app.Tunnel().id
 		}
+		app.Tunnel().WriteMessage(app.RelayTunnelID(), MsgP2P, MsgOverlayConnectReq, &req)
 		// TODO: wait OverlayConnectRsp instead of sleep
 		time.Sleep(time.Second) // waiting remote node connection ok
 		go oConn.run()
@@ -131,6 +465,7 @@ func (app *p2pApp) listenUDP() error {
 		gLog.Printf(LvERROR, "listen error:%s", err)
 		return err
 	}
+	defer app.listenerUDP.Close()
 	buffer := make([]byte, 64*1024+PaddingSize)
 	udpID := make([]byte, 8)
 	for {
@@ -144,6 +479,11 @@ func (app *p2pApp) listenUDP() error {
 				break
 			}
 		} else {
+			if app.Tunnel() == nil {
+				gLog.Printf(LvDEBUG, "srcPort=%d, app.Tunnel()==nil, not ready", app.config.SrcPort)
+				time.Sleep(time.Second)
+				continue
+			}
 			dupData := bytes.Buffer{} // should uses memory pool
 			dupData.Write(buffer[:len+PaddingSize])
 			// load from app.tunnel.overlayConns by remoteAddr error, new udp connection
@@ -157,20 +497,22 @@ func (app *p2pApp) listenUDP() error {
 			udpID[4] = byte(port)
 			udpID[5] = byte(port >> 8)
 			id := binary.LittleEndian.Uint64(udpID) // convert remoteIP:port to uint64
-			s, ok := app.tunnel.overlayConns.Load(id)
+			s, ok := app.Tunnel().overlayConns.Load(id)
 			if !ok {
 				oConn := overlayConn{
-					tunnel:     app.tunnel,
+					tunnel:     app.Tunnel(),
 					connUDP:    app.listenerUDP,
 					remoteAddr: remoteAddr,
 					udpData:    make(chan []byte, 1000),
 					id:         id,
 					isClient:   true,
-					rtid:       app.rtid,
 					appID:      app.id,
 					appKey:     app.key,
 					running:    true,
 				}
+				if !app.isDirect() {
+					oConn.rtid = app.rtid
+				}
 				// calc key bytes for encrypt
 				if oConn.appKey != 0 {
 					encryptKey := make([]byte, AESKeySize)
@@ -178,26 +520,20 @@ func (app *p2pApp) listenUDP() error {
 					binary.LittleEndian.PutUint64(encryptKey[8:], oConn.appKey)
 					oConn.appKeyBytes = encryptKey
 				}
-				app.tunnel.overlayConns.Store(oConn.id, &oConn)
+				app.Tunnel().overlayConns.Store(oConn.id, &oConn)
 				gLog.Printf(LvDEBUG, "Accept UDP overlayID:%d", oConn.id)
 				// tell peer connect
 				req := OverlayConnectReq{ID: oConn.id,
-					Token:    app.tunnel.pn.config.Token,
+					Token:    gConf.Network.Token,
 					DstIP:    app.config.DstHost,
 					DstPort:  app.config.DstPort,
 					Protocol: app.config.Protocol,
 					AppID:    app.id,
 				}
-				if app.rtid == 0 {
-					app.tunnel.conn.WriteMessage(MsgP2P, MsgOverlayConnectReq, &req)
-				} else {
-					req.RelayTunnelID = app.tunnel.id
-					relayHead := new(bytes.Buffer)
-					binary.Write(relayHead, binary.LittleEndian, app.rtid)
-					msg, _ := newMessage(MsgP2P, MsgOverlayConnectReq, &req)
-					msgWithHead := append(relayHead.Bytes(), msg...)
-					app.tunnel.conn.WriteBytes(MsgP2P, MsgRelayData, msgWithHead)
+				if !app.isDirect() {
+					req.RelayTunnelID = app.Tunnel().id
 				}
+				app.Tunnel().WriteMessage(app.RelayTunnelID(), MsgP2P, MsgOverlayConnectReq, &req)
 				// TODO: wait OverlayConnectRsp instead of sleep
 				time.Sleep(time.Second) // waiting remote node connection ok
 				go oConn.run()
@@ -216,15 +552,14 @@ func (app *p2pApp) listenUDP() error {
 }

 func (app *p2pApp) listen() error {
+	if app.config.SrcPort == 0 {
+		return nil
+	}
 	gLog.Printf(LvINFO, "LISTEN ON PORT %s:%d START", app.config.Protocol, app.config.SrcPort)
 	defer gLog.Printf(LvINFO, "LISTEN ON PORT %s:%d END", app.config.Protocol, app.config.SrcPort)
 	app.wg.Add(1)
 	defer app.wg.Done()
-	app.running = true
-	if app.rtid != 0 {
-		go app.relayHeartbeatLoop()
-	}
-	for app.tunnel.isRuning() {
+	for app.running {
 		if app.config.Protocol == "udp" {
 			app.listenUDP()
 		} else {
@@ -246,8 +581,11 @@ func (app *p2pApp) close() {
 	if app.listenerUDP != nil {
 		app.listenerUDP.Close()
 	}
-	if app.tunnel != nil {
-		app.tunnel.closeOverlayConns(app.id)
+	if app.DirectTunnel() != nil {
+		app.DirectTunnel().closeOverlayConns(app.id)
+	}
+	if app.RelayTunnel() != nil {
+		app.RelayTunnel().closeOverlayConns(app.id)
 	}
 	app.wg.Wait()
 }
@@ -256,21 +594,23 @@ func (app *p2pApp) close() {

 func (app *p2pApp) relayHeartbeatLoop() {
 	app.wg.Add(1)
 	defer app.wg.Done()
-	gLog.Printf(LvDEBUG, "relayHeartbeat to rtid:%d start", app.rtid)
-	defer gLog.Printf(LvDEBUG, "relayHeartbeat to rtid%d end", app.rtid)
-	relayHead := new(bytes.Buffer)
-	binary.Write(relayHead, binary.LittleEndian, app.rtid)
-	req := RelayHeartbeat{RelayTunnelID: app.tunnel.id,
-		AppID: app.id}
-	msg, _ := newMessage(MsgP2P, MsgRelayHeartbeat, &req)
-	msgWithHead := append(relayHead.Bytes(), msg...)
-	for app.tunnel.isRuning() && app.running {
-		err := app.tunnel.conn.WriteBytes(MsgP2P, MsgRelayData, msgWithHead)
+	gLog.Printf(LvDEBUG, "%s appid:%d relayHeartbeat to rtid:%d start", app.config.PeerNode, app.id, app.rtid)
+	defer gLog.Printf(LvDEBUG, "%s appid:%d relayHeartbeat to rtid%d end", app.config.PeerNode, app.id, app.rtid)
+
+	for app.running {
+		if app.RelayTunnel() == nil || !app.RelayTunnel().isRuning() {
+			time.Sleep(TunnelHeartbeatTime)
+			continue
+		}
+		req := RelayHeartbeat{From: gConf.Network.Node, RelayTunnelID: app.RelayTunnel().id,
+			AppID: app.id}
+		err := app.RelayTunnel().WriteMessage(app.rtid, MsgP2P, MsgRelayHeartbeat, &req)
 		if err != nil {
-			gLog.Printf(LvERROR, "%d app write relay tunnel heartbeat error %s", app.rtid, err)
+			gLog.Printf(LvERROR, "%s appid:%d rtid:%d write relay tunnel heartbeat error %s", app.config.PeerNode, app.id, app.rtid, err)
 			return
 		}
-		gLog.Printf(LvDEBUG, "%d app write relay tunnel heartbeat ok", app.rtid)
+		// TODO: debug relay heartbeat
+		gLog.Printf(LvDEBUG, "%s appid:%d rtid:%d write relay tunnel heartbeat ok", app.config.PeerNode, app.id, app.rtid)
 		time.Sleep(TunnelHeartbeatTime)
 	}
 }

@@ -22,13 +22,14 @@ import (
 var (
 	v4l      *v4Listener
 	instance *P2PNetwork
-	once           sync.Once
+	onceP2PNetwork sync.Once
 	onceV4Listener sync.Once
 )

 const (
 	retryLimit    = 20
 	retryInterval = 10 * time.Second
+	DefaultLoginMaxDelaySeconds = 60
 )

 // golang not support float64 const
@@ -37,6 +38,11 @@ var (
 	ma5 float64 = 1.0 / 5
 )

+type NodeData struct {
+	NodeID uint64
+	Data   []byte
+}
+
 type P2PNetwork struct {
 	conn    *websocket.Conn
 	online  bool
@@ -44,18 +50,23 @@ type P2PNetwork struct {
 	restartCh   chan bool
 	wgReconnect sync.WaitGroup
 	writeMtx    sync.Mutex
+	reqGatewayMtx sync.Mutex
 	hbTime      time.Time
 	// for sync server time
 	t1     int64 // nanoSeconds
+	preRtt int64 // nanoSeconds
 	dt     int64 // client faster then server dt nanoSeconds
 	ddtma  int64
 	ddt    int64 // differential of dt
 	msgMap sync.Map //key: nodeID
 	// msgMap map[uint64]chan pushMsg //key: nodeID
-	config     NetworkConfig
-	allTunnels sync.Map
-	apps       sync.Map //key: protocol+srcport; value: p2pApp
+	allTunnels sync.Map // key: tid
+	apps       sync.Map //key: config.ID(); value: *p2pApp
 	limiter    *SpeedLimiter
+	nodeData      chan *NodeData
+	sdwan         *p2pSDWAN
+	tunnelCloseCh chan *P2PTunnel
+	loginMaxDelaySeconds int
 }

 type msgCtx struct {
@@ -63,26 +74,27 @@ type msgCtx struct {
 	ts   time.Time
 }

-func P2PNetworkInstance(config *NetworkConfig) *P2PNetwork {
+func P2PNetworkInstance() *P2PNetwork {
 	if instance == nil {
-		once.Do(func() {
+		onceP2PNetwork.Do(func() {
 			instance = &P2PNetwork{
-				restartCh: make(chan bool, 2),
+				restartCh:     make(chan bool, 1),
+				tunnelCloseCh: make(chan *P2PTunnel, 100),
+				nodeData:      make(chan *NodeData, 10000),
 				online:    false,
 				running:   true,
-				limiter:   newSpeedLimiter(config.ShareBandwidth*1024*1024/8, 1),
+				limiter:   newSpeedLimiter(gConf.Network.ShareBandwidth*1024*1024/8, 1),
 				dt:        0,
 				ddt:       0,
+				loginMaxDelaySeconds: DefaultLoginMaxDelaySeconds,
 			}
-			instance.msgMap.Store(uint64(0), make(chan msgCtx)) // for gateway
-			if config != nil {
-				instance.config = *config
-			}
+			instance.msgMap.Store(uint64(0), make(chan msgCtx, 50)) // for gateway
+			instance.StartSDWAN()
 			instance.init()
 			go instance.run()
 			go func() {
 				for {
-					instance.refreshIPv6(false)
+					instance.refreshIPv6()
 					time.Sleep(time.Hour)
 				}
 			}()
@@ -96,23 +108,47 @@ func (pn *P2PNetwork) run() {
 	heartbeatTimer := time.NewTicker(NetworkHeartbeatTime)
 	pn.t1 = time.Now().UnixNano()
 	pn.write(MsgHeartbeat, 0, "")
-	for pn.running {
+	for {
 		select {
 		case <-heartbeatTimer.C:
 			pn.t1 = time.Now().UnixNano()
 			pn.write(MsgHeartbeat, 0, "")

 		case <-pn.restartCh:
+			gLog.Printf(LvDEBUG, "got restart channel")
 			pn.online = false
 			pn.wgReconnect.Wait() // wait read/autorunapp goroutine end
-			time.Sleep(ClientAPITimeout)
+			delay := ClientAPITimeout + time.Duration(rand.Int()%pn.loginMaxDelaySeconds)*time.Second
+			time.Sleep(delay)
 			err := pn.init()
 			if err != nil {
 				gLog.Println(LvERROR, "P2PNetwork init error:", err)
 			}
+			gConf.retryAllApp()
+
+		case t := <-pn.tunnelCloseCh:
+			gLog.Printf(LvDEBUG, "got tunnelCloseCh %s", t.config.PeerNode)
+			pn.apps.Range(func(id, i interface{}) bool {
+				app := i.(*p2pApp)
+				if app.DirectTunnel() == t {
+					app.setDirectTunnel(nil)
+				}
+				if app.RelayTunnel() == t {
+					app.setRelayTunnel(nil)
+				}
+				return true
+			})
 		}
 	}
 }
func (pn *P2PNetwork) NotifyTunnelClose(t *P2PTunnel) bool {
select {
case pn.tunnelCloseCh <- t:
return true
default:
}
return false
}
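NotifyTunnelClose above uses a `select` with a `default` branch so a tunnel teardown never blocks on a full channel; the event is simply dropped when the buffer is full. The pattern in isolation (generic sketch with hypothetical names):

```go
package main

import "fmt"

// tryNotify sends v on ch without blocking; it reports false when the
// channel buffer is full and the event is dropped, mirroring NotifyTunnelClose.
func tryNotify(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(tryNotify(ch, 1)) // buffer has room: true
	fmt.Println(tryNotify(ch, 2)) // buffer full, event dropped: false
}
```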
 func (pn *P2PNetwork) Connect(timeout int) bool {
 	// waiting for heartbeat
 	for i := 0; i < (timeout / 1000); i++ {
@@ -128,42 +164,22 @@ func (pn *P2PNetwork) runAll() {
 	gConf.mtx.Lock() // lock for copy gConf.Apps and the modification of config(it's pointer)
 	defer gConf.mtx.Unlock()

 	allApps := gConf.Apps // read a copy, other thread will modify the gConf.Apps
 	for _, config := range allApps {
-		if config.nextRetryTime.After(time.Now()) || config.Enabled == 0 || config.retryNum >= retryLimit {
-			continue
-		}
 		if config.AppName == "" {
-			config.AppName = config.ID()
+			config.AppName = fmt.Sprintf("%d", config.ID())
 		}
-		if i, ok := pn.apps.Load(config.ID()); ok {
-			if app := i.(*p2pApp); app.isActive() {
-				continue
-			}
-			pn.DeleteApp(*config)
+		if config.Enabled == 0 {
 			continue
 		}
-
-		if config.retryNum > 0 { // first time not show reconnect log
-			gLog.Printf(LvINFO, "detect app %s disconnect, reconnecting the %d times...", config.AppName, config.retryNum)
-			if time.Now().Add(-time.Minute * 15).After(config.retryTime) { // run normally 15min, reset retrynum
-				config.retryNum = 0
-			}
-		}
-		config.retryNum++
-		config.retryTime = time.Now()
-		config.nextRetryTime = time.Now().Add(retryInterval)
-		config.connectTime = time.Now()
-		config.peerToken = pn.config.Token
+		if _, ok := pn.apps.Load(config.ID()); ok {
+			continue
+		}
+		config.peerToken = gConf.Network.Token
 		gConf.mtx.Unlock() // AddApp will take a period of time, let outside modify gConf
-		err := pn.AddApp(*config)
+		pn.AddApp(*config)
 		gConf.mtx.Lock()
-		if err != nil {
-			config.errMsg = err.Error()
-			if err == ErrPeerOffline { // stop retry, waiting for online
-				config.retryNum = retryLimit
-				gLog.Printf(LvINFO, " %s offline, it will auto reconnect when peer node online", config.PeerNode)
-			}
-		}
 	}
 }
@@ -181,11 +197,29 @@ func (pn *P2PNetwork) autorunApp() {

 func (pn *P2PNetwork) addRelayTunnel(config AppConfig) (*P2PTunnel, uint64, string, error) {
 	gLog.Printf(LvINFO, "addRelayTunnel to %s start", config.PeerNode)
 	defer gLog.Printf(LvINFO, "addRelayTunnel to %s end", config.PeerNode)
-	relayConfig := config
+	relayConfig := AppConfig{
+		PeerNode:  config.RelayNode,
+		peerToken: config.peerToken}
 	relayMode := "private"
-	if config.RelayNode == "" {
+	if relayConfig.PeerNode == "" {
+		// find existing relay tunnel
+		pn.apps.Range(func(id, i interface{}) bool {
+			app := i.(*p2pApp)
+			if app.config.PeerNode != config.PeerNode {
+				return true
+			}
+			if app.RelayTunnel() == nil {
+				return true
+			}
+			relayConfig.PeerNode = app.RelayTunnel().config.PeerNode
+			gLog.Printf(LvDEBUG, "found existing relay tunnel %s", relayConfig.PeerNode)
+			return false
+		})
+	}
+	if relayConfig.PeerNode == "" { // request relay node
+		pn.reqGatewayMtx.Lock()
 		pn.write(MsgRelay, MsgRelayNodeReq, &RelayNodeReq{config.PeerNode})
 		head, body := pn.read("", MsgRelay, MsgRelayNodeRsp, ClientAPITimeout)
+		pn.reqGatewayMtx.Unlock()
 		if head == nil {
 			return nil, 0, "", errors.New("read MsgRelayNodeRsp error")
 		}
@@ -197,13 +231,13 @@ func (pn *P2PNetwork) addRelayTunnel(config AppConfig) (*P2PTunnel, uint64, stri
 			gLog.Printf(LvERROR, "MsgRelayNodeReq error")
 			return nil, 0, "", errors.New("MsgRelayNodeReq error")
 		}
-		gLog.Printf(LvINFO, "got relay node:%s", rsp.RelayName)
+		gLog.Printf(LvDEBUG, "got relay node:%s", rsp.RelayName)
 		relayConfig.PeerNode = rsp.RelayName
 		relayConfig.peerToken = rsp.RelayToken
 		relayMode = rsp.Mode
-	} else {
-		relayConfig.PeerNode = config.RelayNode
 	}
 	///
 	t, err := pn.addDirectTunnel(relayConfig, 0)
@@ -213,11 +247,13 @@ func (pn *P2PNetwork) addRelayTunnel(config AppConfig) (*P2PTunnel, uint64, stri
 	}
 	// notify peer addRelayTunnel
 	req := AddRelayTunnelReq{
-		From:       pn.config.Node,
+		From:       gConf.Network.Node,
 		RelayName:  relayConfig.PeerNode,
 		RelayToken: relayConfig.peerToken,
+		RelayMode:  relayMode,
+		RelayTunnelID: t.id,
 	}
-	gLog.Printf(LvINFO, "push relay %s---------%s", config.PeerNode, relayConfig.PeerNode)
+	gLog.Printf(LvDEBUG, "push %s the relay node(%s)", config.PeerNode, relayConfig.PeerNode)
 	pn.push(config.PeerNode, MsgPushAddRelayTunnelReq, &req)

 	// wait relay ready
@@ -228,7 +264,8 @@ func (pn *P2PNetwork) addRelayTunnel(config AppConfig) (*P2PTunnel, uint64, stri
 	}
 	rspID := TunnelMsg{}
 	if err = json.Unmarshal(body, &rspID); err != nil {
-		return nil, 0, "", errors.New("peer connect relayNode error")
+		gLog.Println(LvDEBUG, ErrPeerConnectRelay)
+		return nil, 0, "", ErrPeerConnectRelay
 	}
 	return t, rspID.ID, relayMode, err
 }
@@ -240,88 +277,35 @@ func (pn *P2PNetwork) AddApp(config AppConfig) error {
 	if !pn.online {
 		return errors.New("P2PNetwork offline")
 	}
+	if _, ok := pn.msgMap.Load(NodeNameToID(config.PeerNode)); !ok {
+		pn.msgMap.Store(NodeNameToID(config.PeerNode), make(chan msgCtx, 50))
+	}
 	// check if app already exist?
-	appExist := false
-	_, ok := pn.apps.Load(config.ID())
-	if ok {
-		appExist = true
-	}
-	if appExist {
+	if _, ok := pn.apps.Load(config.ID()); ok {
 		return errors.New("P2PApp already exist")
 	}
-	appID := rand.Uint64()
-	appKey := uint64(0)
-	var rtid uint64
-	relayNode := ""
-	relayMode := ""
-	peerNatType := NATUnknown
-	peerIP := ""
-	errMsg := ""
-	t, err := pn.addDirectTunnel(config, 0)
-	if t != nil {
-		peerNatType = t.config.peerNatType
-		peerIP = t.config.peerIP
-	}
-	if err != nil && err == ErrorHandshake {
-		gLog.Println(LvERROR, "direct connect failed, try to relay")
-		t, rtid, relayMode, err = pn.addRelayTunnel(config)
-		if t != nil {
-			relayNode = t.config.PeerNode
-		}
-	}
-	if err != nil {
-		errMsg = err.Error()
-	}
-	req := ReportConnect{
-		Error:          errMsg,
-		Protocol:       config.Protocol,
-		SrcPort:        config.SrcPort,
-		NatType:        pn.config.natType,
-		PeerNode:       config.PeerNode,
-		DstPort:        config.DstPort,
-		DstHost:        config.DstHost,
-		PeerNatType:    peerNatType,
-		PeerIP:         peerIP,
-		ShareBandwidth: pn.config.ShareBandwidth,
-		RelayNode:      relayNode,
-		Version:        OpenP2PVersion,
-	}
-	pn.write(MsgReport, MsgReportConnect, &req)
-	if err != nil {
-		return err
-	}
-	if rtid != 0 || t.conn.Protocol() == "tcp" {
-		// sync appkey
-		appKey = rand.Uint64()
-		req := APPKeySync{
-			AppID:  appID,
-			AppKey: appKey,
-		}
-		gLog.Printf(LvDEBUG, "sync appkey to %s", config.PeerNode)
-		pn.push(config.PeerNode, MsgPushAPPKey, &req)
-	}
 	app := p2pApp{
-		id:        appID,
-		key:       appKey,
-		tunnel:    t,
+		// tunnel: t,
+		id:     rand.Uint64(),
+		key:    rand.Uint64(),
 		config: config,
 		iptree: NewIPTree(config.Whitelist),
-		rtid:      rtid,
-		relayNode: relayNode,
-		relayMode: relayMode,
-		hbTime:    time.Now()}
-	pn.apps.Store(config.ID(), &app)
-	gLog.Printf(LvDEBUG, "%s use tunnel %d", app.config.AppName, app.tunnel.id)
-	if err == nil {
-		go app.listen()
-	}
-	return err
+		running:     true,
+		hbTimeRelay: time.Now(),
+	}
+	if _, ok := pn.msgMap.Load(NodeNameToID(config.PeerNode)); !ok {
+		pn.msgMap.Store(NodeNameToID(config.PeerNode), make(chan msgCtx, 50))
+	}
+	pn.apps.Store(config.ID(), &app)
+	gLog.Printf(LvDEBUG, "Store app %d", config.ID())
+	go app.checkP2PTunnel()
+	return nil
 }
 func (pn *P2PNetwork) DeleteApp(config AppConfig) {
-	gLog.Printf(LvINFO, "DeleteApp %s%d start", config.Protocol, config.SrcPort)
-	defer gLog.Printf(LvINFO, "DeleteApp %s%d end", config.Protocol, config.SrcPort)
+	gLog.Printf(LvINFO, "DeleteApp %s to %s:%s:%d start", config.AppName, config.PeerNode, config.DstHost, config.DstPort)
+	defer gLog.Printf(LvINFO, "DeleteApp %s to %s:%s:%d end", config.AppName, config.PeerNode, config.DstHost, config.DstPort)
 	// close the apps of this config
 	i, ok := pn.apps.Load(config.ID())
 	if ok {
@@ -332,16 +316,17 @@ func (pn *P2PNetwork) DeleteApp(config AppConfig) {
 	}
 }

-func (pn *P2PNetwork) findTunnel(config *AppConfig) (t *P2PTunnel) {
+func (pn *P2PNetwork) findTunnel(peerNode string) (t *P2PTunnel) {
+	t = nil
 	// find existing tunnel to peer
 	pn.allTunnels.Range(func(id, i interface{}) bool {
 		tmpt := i.(*P2PTunnel)
-		if tmpt.config.PeerNode == config.PeerNode {
-			gLog.Println(LvINFO, "tunnel already exist ", config.PeerNode)
+		if tmpt.config.PeerNode == peerNode {
+			gLog.Println(LvINFO, "tunnel already exist ", peerNode)
 			isActive := tmpt.checkActive()
 			// inactive, close it
 			if !isActive {
-				gLog.Println(LvINFO, "but it's not active, close it ", config.PeerNode)
+				gLog.Println(LvINFO, "but it's not active, close it ", peerNode)
 				tmpt.close()
 			} else {
 				t = tmpt
@@ -362,8 +347,8 @@ func (pn *P2PNetwork) addDirectTunnel(config AppConfig, tid uint64) (t *P2PTunne
 		tid = rand.Uint64()
 		isClient = true
 	}
-	if _, ok := pn.msgMap.Load(nodeNameToID(config.PeerNode)); !ok {
-		pn.msgMap.Store(nodeNameToID(config.PeerNode), make(chan msgCtx, 50))
+	if _, ok := pn.msgMap.Load(NodeNameToID(config.PeerNode)); !ok {
+		pn.msgMap.Store(NodeNameToID(config.PeerNode), make(chan msgCtx, 50))
 	}

 	// server side
@@ -375,10 +360,21 @@ func (pn *P2PNetwork) addDirectTunnel(config AppConfig, tid uint64) (t *P2PTunne
 	// peer info
 	initErr := pn.requestPeerInfo(&config)
 	if initErr != nil {
-		gLog.Println(LvERROR, "init error:", initErr)
+		gLog.Printf(LvERROR, "%s init error:%s", config.PeerNode, initErr)
 		return nil, initErr
 	}
+	gLog.Printf(LvDEBUG, "config.peerNode=%s,config.peerVersion=%s,config.peerIP=%s,config.peerLanIP=%s,gConf.Network.publicIP=%s,config.peerIPv6=%s,config.hasIPv4=%d,config.hasUPNPorNATPMP=%d,gConf.Network.hasIPv4=%d,gConf.Network.hasUPNPorNATPMP=%d,config.peerNatType=%d,gConf.Network.natType=%d,",
+		config.PeerNode, config.peerVersion, config.peerIP, config.peerLanIP, gConf.Network.publicIP, config.peerIPv6, config.hasIPv4, config.hasUPNPorNATPMP, gConf.Network.hasIPv4, gConf.Network.hasUPNPorNATPMP, config.peerNatType, gConf.Network.natType)
+	// try Intranet
+	if config.peerIP == gConf.Network.publicIP && compareVersion(config.peerVersion, SupportIntranetVersion) >= 0 { // old version client has no peerLanIP
+		gLog.Println(LvINFO, "try Intranet")
+		config.linkMode = LinkModeIntranet
+		config.isUnderlayServer = 0
+		if t, err = pn.newTunnel(config, tid, isClient); err == nil {
+			return t, nil
+		}
+	}
 	// try TCP6
 	if IsIPv6(config.peerIPv6) && IsIPv6(gConf.IPv6()) {
 		gLog.Println(LvINFO, "try TCP6")
@@ -389,28 +385,51 @@ func (pn *P2PNetwork) addDirectTunnel(config AppConfig, tid uint64) (t *P2PTunne
} }
} }
// TODO: try UDP6 // try UDP6? maybe no
// try TCP4 // try TCP4
if config.hasIPv4 == 1 || pn.config.hasIPv4 == 1 || config.hasUPNPorNATPMP == 1 || pn.config.hasUPNPorNATPMP == 1 { if config.hasIPv4 == 1 || gConf.Network.hasIPv4 == 1 || config.hasUPNPorNATPMP == 1 || gConf.Network.hasUPNPorNATPMP == 1 {
gLog.Println(LvINFO, "try TCP4") gLog.Println(LvINFO, "try TCP4")
config.linkMode = LinkModeTCP4 config.linkMode = LinkModeTCP4
if config.hasIPv4 == 1 || config.hasUPNPorNATPMP == 1 { if gConf.Network.hasIPv4 == 1 || gConf.Network.hasUPNPorNATPMP == 1 {
config.isUnderlayServer = 0
} else {
config.isUnderlayServer = 1 config.isUnderlayServer = 1
} else {
config.isUnderlayServer = 0
} }
if t, err = pn.newTunnel(config, tid, isClient); err == nil { if t, err = pn.newTunnel(config, tid, isClient); err == nil {
return t, nil return t, nil
} else if config.hasIPv4 == 1 || config.hasUPNPorNATPMP == 1 { // peer has ipv4 no punching
return nil, ErrConnectPublicV4
} }
} }
// TODO: try UDP4 // try UDP4? maybe no
var primaryPunchFunc func() (*P2PTunnel, error)
var secondaryPunchFunc func() (*P2PTunnel, error)
funcUDP := func() (t *P2PTunnel, err error) {
if config.PunchPriority&PunchPriorityUDPDisable != 0 {
return
}
// try UDPPunch
for i := 0; i < Cone2ConeUDPPunchMaxRetry; i++ { // when both 2 nats has restrict firewall, simultaneous punching needs to be very precise, it takes a few tries
if config.peerNatType == NATCone || gConf.Network.natType == NATCone {
gLog.Println(LvINFO, "try UDP4 Punch")
config.linkMode = LinkModeUDPPunch
config.isUnderlayServer = 0
if t, err = pn.newTunnel(config, tid, isClient); err == nil {
return t, nil
}
}
if !(config.peerNatType == NATCone && gConf.Network.natType == NATCone) { // not cone2cone, no more try
break
}
}
return
}
funcTCP := func() (t *P2PTunnel, err error) {
if config.PunchPriority&PunchPriorityTCPDisable != 0 {
return
}
// try TCPPunch // try TCPPunch
for i := 0; i < Cone2ConeTCPPunchMaxRetry; i++ { // when both 2 nats has restrict firewall, simultaneous punching needs to be very precise, it takes a few tries for i := 0; i < Cone2ConeTCPPunchMaxRetry; i++ { // when both 2 nats has restrict firewall, simultaneous punching needs to be very precise, it takes a few tries
if config.peerNatType == NATCone && pn.config.natType == NATCone { if config.peerNatType == NATCone || gConf.Network.natType == NATCone {
gLog.Println(LvINFO, "try TCP4 Punch") gLog.Println(LvINFO, "try TCP4 Punch")
config.linkMode = LinkModeTCPPunch config.linkMode = LinkModeTCPPunch
config.isUnderlayServer = 0 config.isUnderlayServer = 0
@@ -420,27 +439,29 @@ func (pn *P2PNetwork) addDirectTunnel(config AppConfig, tid uint64) (t *P2PTunne
} }
} }
} }
return
}
	if config.PunchPriority&PunchPriorityTCPFirst != 0 {
		primaryPunchFunc = funcTCP
		secondaryPunchFunc = funcUDP
	} else {
		primaryPunchFunc = funcUDP
		secondaryPunchFunc = funcTCP
	}
if t, err = primaryPunchFunc(); t != nil && err == nil {
return t, err
}
if t, err = secondaryPunchFunc(); t != nil && err == nil {
return t, err
}
	// TODO: s2s won't return err
	return nil, err
}
func (pn *P2PNetwork) newTunnel(config AppConfig, tid uint64, isClient bool) (t *P2PTunnel, err error) {
	if isClient { // only client side find existing tunnel
		if existTunnel := pn.findTunnel(config.PeerNode); existTunnel != nil {
			return existTunnel, nil
		}
	}
@@ -448,6 +469,8 @@ func (pn *P2PNetwork) newTunnel(config AppConfig, tid uint64, isClient bool) (t
	t = &P2PTunnel{pn: pn,
		config: config,
		id:     tid,
writeData: make(chan []byte, WriteDataChanSize),
writeDataSmall: make(chan []byte, WriteDataChanSize/30),
	}
	t.initPort()
	if isClient {
@@ -467,43 +490,49 @@ func (pn *P2PNetwork) newTunnel(config AppConfig, tid uint64, isClient bool) (t
return return
} }
func (pn *P2PNetwork) init() error {
	gLog.Println(LvINFO, "P2PNetwork init start")
	defer gLog.Println(LvINFO, "P2PNetwork init end")
	pn.wgReconnect.Add(1)
	defer pn.wgReconnect.Done()
	var err error
	for {
		// detect nat type
		gConf.Network.publicIP, gConf.Network.natType, err = getNATType(gConf.Network.ServerHost, gConf.Network.UDPPort1, gConf.Network.UDPPort2)
		if err != nil {
			gLog.Println(LvDEBUG, "detect NAT type error:", err)
			break
		}
		if gConf.Network.hasIPv4 == 0 && gConf.Network.hasUPNPorNATPMP == 0 { // if already has ipv4 or upnp no need test again
			gConf.Network.hasIPv4, gConf.Network.hasUPNPorNATPMP = publicIPTest(gConf.Network.publicIP, gConf.Network.TCPPort)
		}
// for testcase
if strings.Contains(gConf.Network.Node, "openp2pS2STest") {
gConf.Network.natType = NATSymmetric
gConf.Network.hasIPv4 = 0
gConf.Network.hasUPNPorNATPMP = 0
gLog.Println(LvINFO, "openp2pS2STest debug")
}
if strings.Contains(gConf.Network.Node, "openp2pC2CTest") {
gConf.Network.natType = NATCone
gConf.Network.hasIPv4 = 0
gConf.Network.hasUPNPorNATPMP = 0
gLog.Println(LvINFO, "openp2pC2CTest debug")
}
if gConf.Network.hasIPv4 == 1 || gConf.Network.hasUPNPorNATPMP == 1 {
			onceV4Listener.Do(func() {
				v4l = &v4Listener{port: gConf.Network.TCPPort}
				go v4l.start()
			})
		}
		gLog.Printf(LvINFO, "hasIPv4:%d, UPNP:%d, NAT type:%d, publicIP:%s", gConf.Network.hasIPv4, gConf.Network.hasUPNPorNATPMP, gConf.Network.natType, gConf.Network.publicIP)
		gatewayURL := fmt.Sprintf("%s:%d", gConf.Network.ServerHost, gConf.Network.ServerPort)
		uri := "/api/v1/login"
		caCertPool, errCert := x509.SystemCertPool()
		if errCert != nil {
			gLog.Println(LvERROR, "Failed to load system root CAs:", errCert)
		} else {
			caCertPool = x509.NewCertPool()
		}
@@ -516,11 +545,11 @@ func (pn *P2PNetwork) init() error {
		websocket.DefaultDialer.HandshakeTimeout = ClientAPITimeout
		u := url.URL{Scheme: "wss", Host: gatewayURL, Path: uri}
		q := u.Query()
		q.Add("node", gConf.Network.Node)
		q.Add("token", fmt.Sprintf("%d", gConf.Network.Token))
		q.Add("version", OpenP2PVersion)
		q.Add("nattype", fmt.Sprintf("%d", gConf.Network.natType))
		q.Add("sharebandwidth", fmt.Sprintf("%d", gConf.Network.ShareBandwidth))
		u.RawQuery = q.Encode()
		var ws *websocket.Conn
		ws, _, err = websocket.DefaultDialer.Dial(u.String(), nil)
@@ -528,25 +557,26 @@ func (pn *P2PNetwork) init() error {
			gLog.Println(LvERROR, "Dial error:", err)
			break
		}
		pn.running = true
		pn.online = true
		pn.conn = ws
		localAddr := strings.Split(ws.LocalAddr().String(), ":")
		if len(localAddr) == 2 {
			gConf.Network.localIP = localAddr[0]
		} else {
			err = errors.New("get local ip failed")
			break
		}
		go pn.readLoop()
		gConf.Network.mac = getmac(gConf.Network.localIP)
		gConf.Network.os = getOsName()
		go func() {
			req := ReportBasic{
				Mac:             gConf.Network.mac,
				LanIP:           gConf.Network.localIP,
				OS:              gConf.Network.os,
				HasIPv4:         gConf.Network.hasIPv4,
				HasUPNPorNATPMP: gConf.Network.hasUPNPorNATPMP,
				Version:         OpenP2PVersion,
			}
			rsp := netInfo()
@@ -557,24 +587,25 @@ func (pn *P2PNetwork) init() error {
			}
				req.NetInfo = *rsp
			} else {
				pn.refreshIPv6()
			}
			req.IPv6 = gConf.IPv6()
			pn.write(MsgReport, MsgReportBasic, &req)
		}()
		go pn.autorunApp()
		pn.write(MsgSDWAN, MsgSDWANInfoReq, nil)
		gLog.Println(LvDEBUG, "P2PNetwork init ok")
		break
	}
	if err != nil {
		// init failed, retry
		pn.close()
		gLog.Println(LvERROR, "P2PNetwork init error:", err)
	}
	return err
}
func (pn *P2PNetwork) handleMessage(msg []byte) {
	head := openP2PHeader{}
	err := binary.Read(bytes.NewReader(msg[:openP2PHeaderSize]), binary.LittleEndian, &head)
	if err != nil {
@@ -593,38 +624,46 @@ func (pn *P2PNetwork) handleMessage(t int, msg []byte) {
			gLog.Printf(LvERROR, "login error:%d, detail:%s", rsp.Error, rsp.Detail)
			pn.running = false
		} else {
			gConf.Network.Token = rsp.Token
			gConf.Network.User = rsp.User
			gConf.setToken(rsp.Token)
			gConf.setUser(rsp.User)
			if len(rsp.Node) >= MinNodeNameLen {
				gConf.setNode(rsp.Node)
			}
			if rsp.LoginMaxDelay > 0 {
				pn.loginMaxDelaySeconds = rsp.LoginMaxDelay
			}
			gLog.Printf(LvINFO, "login ok. user=%s,node=%s", rsp.User, rsp.Node)
		}
	case MsgHeartbeat:
		gLog.Printf(LvDev, "P2PNetwork heartbeat ok")
		pn.hbTime = time.Now()
		rtt := pn.hbTime.UnixNano() - pn.t1
		if rtt > int64(PunchTsDelay) || (pn.preRtt > 0 && rtt > pn.preRtt*5) {
			gLog.Printf(LvINFO, "rtt=%d too large ignore", rtt)
			return // invalid hb rsp
		}
		pn.preRtt = rtt
		t2 := int64(binary.LittleEndian.Uint64(msg[openP2PHeaderSize : openP2PHeaderSize+8]))
		thisdt := pn.t1 + rtt/2 - t2
		newdt := thisdt
		if pn.dt != 0 {
			ddt := thisdt - pn.dt
			pn.ddt = ddt
			if pn.ddtma == 0 {
				pn.ddtma = pn.ddt
			} else {
				pn.ddtma = int64(float64(pn.ddtma)*(1-ma10) + float64(pn.ddt)*ma10) // avoid int64 overflow
				newdt = pn.dt + pn.ddtma
			}
		}
		pn.dt = newdt
		gLog.Printf(LvDEBUG, "synctime thisdt=%dms dt=%dms ddt=%dns ddtma=%dns rtt=%dms ", thisdt/int64(time.Millisecond), pn.dt/int64(time.Millisecond), pn.ddt, pn.ddtma, rtt/int64(time.Millisecond))
	case MsgPush:
		handlePush(head.SubType, msg)
	case MsgSDWAN:
		handleSDWAN(head.SubType, msg)
	default:
		i, ok := pn.msgMap.Load(uint64(0))
		if ok {
@@ -642,14 +681,13 @@ func (pn *P2PNetwork) readLoop() {
	defer pn.wgReconnect.Done()
	for pn.running {
		pn.conn.SetReadDeadline(time.Now().Add(NetworkHeartbeatTime + 10*time.Second))
		_, msg, err := pn.conn.ReadMessage()
		if err != nil {
			gLog.Printf(LvERROR, "P2PNetwork read error:%s", err)
			pn.close()
			break
		}
		pn.handleMessage(msg)
	}
	gLog.Printf(LvDEBUG, "P2PNetwork readLoop end")
}
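The heartbeat handler estimates the client-server clock offset from one RTT sample (`dt = t1 + rtt/2 - t2`) and then follows the drift of that offset with an exponential moving average weighted by `ma10`. A self-contained sketch of that smoothing step, assuming `ma10 = 0.1` (the constant's actual value lives elsewhere in the project):

```go
package main

import "fmt"

const ma10 = 0.1 // EMA weight for the newest drift sample (assumed value)

// clockSync tracks the smoothed offset between the local and server clocks.
type clockSync struct {
	dt, ddt, ddtma int64 // current offset, last drift, smoothed drift (ns)
}

// update takes a fresh raw offset sample and returns the smoothed offset.
func (c *clockSync) update(thisdt int64) int64 {
	newdt := thisdt
	if c.dt != 0 {
		c.ddt = thisdt - c.dt
		if c.ddtma == 0 {
			c.ddtma = c.ddt // first drift sample seeds the average
		} else {
			// float math avoids int64 overflow when averaging nanoseconds
			c.ddtma = int64(float64(c.ddtma)*(1-ma10) + float64(c.ddt)*ma10)
			newdt = c.dt + c.ddtma // follow the average drift, not the noisy sample
		}
	}
	c.dt = newdt
	return newdt
}

func main() {
	var c clockSync
	for _, sample := range []int64{1000, 1100, 1090, 1120} {
		fmt.Println(c.update(sample))
	}
}
```

After the first two samples seed the state, a noisy third sample (1090) is pulled toward the averaged drift rather than adopted directly.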
@@ -666,7 +704,7 @@ func (pn *P2PNetwork) write(mainType uint16, subType uint16, packet interface{})
	defer pn.writeMtx.Unlock()
	if err = pn.conn.WriteMessage(websocket.BinaryMessage, msg); err != nil {
		gLog.Printf(LvERROR, "write msgType %d,%d error:%s", mainType, subType, err)
		pn.close()
	}
	return err
}
@@ -674,7 +712,6 @@ func (pn *P2PNetwork) write(mainType uint16, subType uint16, packet interface{})
func (pn *P2PNetwork) relay(to uint64, body []byte) error {
	i, ok := pn.allTunnels.Load(to)
	if !ok {
		return ErrRelayTunnelNotFound
	}
	tunnel := i.(*P2PTunnel)
@@ -694,8 +731,8 @@ func (pn *P2PNetwork) push(to string, subType uint16, packet interface{}) error
		return errors.New("client offline")
	}
	pushHead := PushHeader{}
	pushHead.From = gConf.nodeID()
	pushHead.To = NodeNameToID(to)
	pushHeadBuf := new(bytes.Buffer)
	err := binary.Write(pushHeadBuf, binary.LittleEndian, pushHead)
	if err != nil {
@@ -712,27 +749,41 @@ func (pn *P2PNetwork) push(to string, subType uint16, packet interface{}) error
	defer pn.writeMtx.Unlock()
	if err = pn.conn.WriteMessage(websocket.BinaryMessage, pushMsg); err != nil {
		gLog.Printf(LvERROR, "push to %s error:%s", to, err)
		pn.close()
	}
	return err
}
func (pn *P2PNetwork) close() {
if pn.running {
if pn.conn != nil {
pn.conn.Close()
}
pn.running = false
}
select {
case pn.restartCh <- true:
default:
}
}
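The new `close()` notifies the reconnect loop with a non-blocking send: `select` with a `default` case drops the signal when one is already pending, so repeated failures can neither block the caller nor queue up redundant restarts. A standalone sketch of the pattern:

```go
package main

import "fmt"

// notify performs a non-blocking send: it returns true if the signal was
// delivered, false if a signal is already pending and this one was dropped.
func notify(ch chan bool) bool {
	select {
	case ch <- true:
		return true
	default:
		return false
	}
}

func main() {
	restartCh := make(chan bool, 1) // capacity 1: at most one pending restart
	fmt.Println(notify(restartCh))  // true: signal queued
	fmt.Println(notify(restartCh))  // false: already pending, dropped
	<-restartCh                     // the reconnect loop drains the signal
	fmt.Println(notify(restartCh))  // true: channel is free again
}
```

The buffer of one plus the `default` branch together guarantee the sender never blocks, which is why `close()` is safe to call from the read loop, the write path, and error handlers alike.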
func (pn *P2PNetwork) read(node string, mainType uint16, subType uint16, timeout time.Duration) (head *openP2PHeader, body []byte) {
	var nodeID uint64
	if node == "" {
		nodeID = 0
	} else {
		nodeID = NodeNameToID(node)
	}
	i, ok := pn.msgMap.Load(nodeID)
	if !ok {
		return
	}
	ch := i.(chan msgCtx)
	for {
		select {
		case <-time.After(timeout):
			gLog.Printf(LvERROR, "read msg error %d:%d timeout", mainType, subType)
			return
		case msg := <-ch:
			head = &openP2PHeader{}
@@ -742,11 +793,11 @@ func (pn *P2PNetwork) read(node string, mainType uint16, subType uint16, timeout
				break
			}
			if time.Since(msg.ts) > ReadMsgTimeout {
				gLog.Printf(LvDEBUG, "read msg error expired %d:%d", head.MainType, head.SubType)
				continue
			}
			if head.MainType != mainType || head.SubType != subType {
				gLog.Printf(LvDEBUG, "read msg error type %d:%d, requeue it", head.MainType, head.SubType)
				ch <- msg
				time.Sleep(time.Second)
				continue
@@ -764,22 +815,18 @@ func (pn *P2PNetwork) read(node string, mainType uint16, subType uint16, timeout
func (pn *P2PNetwork) updateAppHeartbeat(appID uint64) {
	pn.apps.Range(func(id, i interface{}) bool {
		app := i.(*p2pApp)
		if app.id == appID {
			app.updateHeartbeat()
		}
		return true
	})
}

// ipv6 will expired need to refresh.
func (pn *P2PNetwork) refreshIPv6() {
	for i := 0; i < 2; i++ {
		client := &http.Client{Timeout: time.Second * 10}
		r, err := client.Get("http://ipv6.ddnspod.com/")
		if err != nil {
			gLog.Println(LvDEBUG, "refreshIPv6 error:", err)
			continue
		}
@@ -791,7 +838,9 @@ func (pn *P2PNetwork) refreshIPv6(force bool) {
			gLog.Println(LvINFO, "refreshIPv6 error:", err, n)
			continue
		}
		if IsIPv6(string(buf[:n])) {
			gConf.setIPv6(string(buf[:n]))
		}
		break
	}
@@ -799,9 +848,13 @@ func (pn *P2PNetwork) refreshIPv6(force bool) {
func (pn *P2PNetwork) requestPeerInfo(config *AppConfig) error {
	// request peer info
	// TODO: multi-thread issue
	pn.reqGatewayMtx.Lock()
	pn.write(MsgQuery, MsgQueryPeerInfoReq, &QueryPeerInfoReq{config.peerToken, config.PeerNode})
	head, body := pn.read("", MsgQuery, MsgQueryPeerInfoRsp, ClientAPITimeout)
	pn.reqGatewayMtx.Unlock()
	if head == nil {
		return ErrNetwork // network error, should not be ErrPeerOffline
	}
	rsp := QueryPeerInfoRsp{}
@@ -811,10 +864,11 @@ func (pn *P2PNetwork) requestPeerInfo(config *AppConfig) error {
	if rsp.Online == 0 {
		return ErrPeerOffline
	}
	if compareVersion(rsp.Version, LeastSupportVersion) < 0 {
		return ErrVersionNotCompatible
	}
	config.peerVersion = rsp.Version
	config.peerLanIP = rsp.LanIP
	config.hasIPv4 = rsp.HasIPv4
	config.peerIP = rsp.IPv4
	config.peerIPv6 = rsp.IPv6
@@ -823,3 +877,102 @@ func (pn *P2PNetwork) requestPeerInfo(config *AppConfig) error {
	///
	return nil
}
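The diff changes the `compareVersion` call sites from an equality check against a `LESS` constant to an ordinary `< 0` comparison, the usual comparator convention (negative, zero, positive). A minimal dotted-version comparator with that convention; this implementation is an illustrative assumption, not the project's own:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compareVersion compares dotted numeric versions field by field:
// negative if a < b, zero if equal, positive if a > b.
func compareVersion(a, b string) int {
	as, bs := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(as) || i < len(bs); i++ {
		var x, y int
		if i < len(as) {
			x, _ = strconv.Atoi(as[i]) // missing or non-numeric fields count as 0
		}
		if i < len(bs) {
			y, _ = strconv.Atoi(bs[i])
		}
		if x != y {
			return x - y
		}
	}
	return 0
}

func main() {
	fmt.Println(compareVersion("3.18.4", "3.9.0") > 0) // true: fields compare numerically, 18 > 9
	fmt.Println(compareVersion("3.6", "3.6.0") == 0)   // true: missing fields count as 0
	fmt.Println(compareVersion("2.9", "3.0") < 0)      // true
}
```

Comparing fields numerically rather than lexically is the important detail: a string comparison would wrongly rank "3.18.4" below "3.9.0".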
func (pn *P2PNetwork) StartSDWAN() {
// request peer info
pn.sdwan = &p2pSDWAN{}
}
func (pn *P2PNetwork) ConnectNode(node string) error {
if gConf.nodeID() < NodeNameToID(node) {
return errors.New("only the bigger nodeid connect")
}
peerNodeID := fmt.Sprintf("%d", NodeNameToID(node))
config := AppConfig{Enabled: 1}
config.AppName = peerNodeID
config.SrcPort = 0
config.PeerNode = node
sdwan := gConf.getSDWAN()
config.PunchPriority = int(sdwan.PunchPriority)
if node != sdwan.CentralNode && gConf.Network.Node != sdwan.CentralNode { // neither is centralnode
config.RelayNode = sdwan.CentralNode
config.ForceRelay = int(sdwan.ForceRelay)
if sdwan.Mode == SDWANModeCentral {
config.ForceRelay = 1
}
}
gConf.add(config, true)
return nil
}
func (pn *P2PNetwork) WriteNode(nodeID uint64, buff []byte) error {
i, ok := pn.apps.Load(nodeID)
if !ok {
return errors.New("peer not found")
}
var err error
app := i.(*p2pApp)
if app.Tunnel() == nil {
return errors.New("peer tunnel nil")
}
// TODO: move to app.write
gLog.Printf(LvDev, "%d tunnel write node data bodylen=%d, relay=%t", app.Tunnel().id, len(buff), !app.isDirect())
if app.isDirect() { // direct
app.Tunnel().asyncWriteNodeData(MsgP2P, MsgNodeData, buff)
} else { // relay
fromNodeIDHead := new(bytes.Buffer)
binary.Write(fromNodeIDHead, binary.LittleEndian, gConf.nodeID())
all := app.RelayHead().Bytes()
all = append(all, encodeHeader(MsgP2P, MsgRelayNodeData, uint32(len(buff)+overlayHeaderSize))...)
all = append(all, fromNodeIDHead.Bytes()...)
all = append(all, buff...)
app.Tunnel().asyncWriteNodeData(MsgP2P, MsgRelayData, all)
}
return err
}
func (pn *P2PNetwork) WriteBroadcast(buff []byte) error {
///
pn.apps.Range(func(id, i interface{}) bool {
// newDestIP := net.ParseIP("10.2.3.2")
// copy(buff[16:20], newDestIP.To4())
// binary.BigEndian.PutUint16(buff[10:12], 0) // set checksum=0 for calc checksum
// ipChecksum := calculateChecksum(buff[0:20])
// binary.BigEndian.PutUint16(buff[10:12], ipChecksum)
// binary.BigEndian.PutUint16(buff[26:28], 0x082e)
app := i.(*p2pApp)
if app.Tunnel() == nil {
return true
}
if app.config.SrcPort != 0 { // normal portmap app
return true
}
if app.config.peerIP == gConf.Network.publicIP { // mostly in a lan
return true
}
if app.isDirect() { // direct
app.Tunnel().conn.WriteBytes(MsgP2P, MsgNodeData, buff)
} else { // relay
fromNodeIDHead := new(bytes.Buffer)
binary.Write(fromNodeIDHead, binary.LittleEndian, gConf.nodeID())
all := app.RelayHead().Bytes()
all = append(all, encodeHeader(MsgP2P, MsgRelayNodeData, uint32(len(buff)+overlayHeaderSize))...)
all = append(all, fromNodeIDHead.Bytes()...)
all = append(all, buff...)
app.Tunnel().conn.WriteBytes(MsgP2P, MsgRelayData, all)
}
return true
})
return nil
}
func (pn *P2PNetwork) ReadNode(tm time.Duration) *NodeData {
select {
case nd := <-pn.nodeData:
return nd
case <-time.After(tm):
}
return nil
}


@@ -10,9 +10,14 @@ import (
	"net"
	"reflect"
	"sync"
	"sync/atomic"
	"time"
)
const WriteDataChanSize int = 3000
var buildTunnelMtx sync.Mutex
type P2PTunnel struct {
	pn   *P2PNetwork
	conn underlay
@@ -22,7 +27,7 @@ type P2PTunnel struct {
	la           *net.UDPAddr // local hole address
	ra           *net.UDPAddr // remote hole address
	overlayConns sync.Map     // both TCP and UDP
	id           uint64       // client side alloc rand.uint64 = server side
	running      bool
	runMtx       sync.Mutex
	tunnelServer bool // different from underlayServer
@@ -30,28 +35,30 @@ type P2PTunnel struct {
	coneNatPort int
	linkModeWeb string // use config.linkmode
	punchTs     uint64
writeData chan []byte
writeDataSmall chan []byte
}

func (t *P2PTunnel) initPort() {
	t.running = true
	localPort := int(rand.Uint32()%15000 + 50000) // if the process has bug, will add many upnp port. use specify p2p port by param
	if t.config.linkMode == LinkModeTCP6 || t.config.linkMode == LinkModeTCP4 || t.config.linkMode == LinkModeIntranet {
		t.coneLocalPort = gConf.Network.TCPPort
		t.coneNatPort = gConf.Network.TCPPort // symmetric doesn't need coneNatPort
	}
	if t.config.linkMode == LinkModeUDPPunch {
		// prepare one random cone hole manually
		_, natPort, _ := natTest(gConf.Network.ServerHost, gConf.Network.UDPPort1, localPort)
		t.coneLocalPort = localPort
		t.coneNatPort = natPort
	}
	if t.config.linkMode == LinkModeTCPPunch {
		// prepare one random cone hole by system automatically
		_, natPort, localPort2 := natTCP(gConf.Network.ServerHost, IfconfigPort1)
		t.coneLocalPort = localPort2
		t.coneNatPort = natPort
	}
	t.la = &net.UDPAddr{IP: net.ParseIP(gConf.Network.localIP), Port: t.coneLocalPort}
	gLog.Printf(LvDEBUG, "prepare punching port %d:%d", t.coneLocalPort, t.coneNatPort)
}
@@ -61,21 +68,22 @@ func (t *P2PTunnel) connect() error {
	appKey := uint64(0)
	req := PushConnectReq{
		Token:            t.config.peerToken,
		From:             gConf.Network.Node,
		FromIP:           gConf.Network.publicIP,
		ConeNatPort:      t.coneNatPort,
		NatType:          gConf.Network.natType,
		HasIPv4:          gConf.Network.hasIPv4,
		IPv6:             gConf.IPv6(),
		HasUPNPorNATPMP:  gConf.Network.hasUPNPorNATPMP,
		ID:               t.id,
		AppKey:           appKey,
		Version:          OpenP2PVersion,
		LinkMode:         t.config.linkMode,
		IsUnderlayServer: t.config.isUnderlayServer ^ 1, // peer
		UnderlayProtocol: t.config.UnderlayProtocol,
	}
	if req.Token == 0 { // no relay token
		req.Token = gConf.Network.Token
	}
	t.pn.push(t.config.PeerNode, MsgPushConnectReq, req)
	head, body := t.pn.read(t.config.PeerNode, MsgPush, MsgPushConnectRsp, UnderlayConnectTimeout*3)
@@ -102,7 +110,6 @@ func (t *P2PTunnel) connect() error {
	err := t.start()
	if err != nil {
		gLog.Println(LvERROR, "handshake error:", err)
	}
	return err
}
@@ -125,7 +132,11 @@ func (t *P2PTunnel) isActive() bool {
	}
	t.hbMtx.Lock()
	defer t.hbMtx.Unlock()
	res := time.Now().Before(t.hbTime.Add(TunnelHeartbeatTime * 2))
	if !res {
		gLog.Printf(LvDEBUG, "%d tunnel isActive false", t.id)
	}
	return res
}
func (t *P2PTunnel) checkActive() bool {
@@ -150,8 +161,16 @@ func (t *P2PTunnel) checkActive() bool {
// call when user delete tunnel
func (t *P2PTunnel) close() {
t.pn.NotifyTunnelClose(t)
if !t.running {
return
}
	t.setRun(false)
	if t.conn != nil {
		t.conn.Close()
	}
	t.pn.allTunnels.Delete(t.id)
	gLog.Printf(LvINFO, "%d p2ptunnel close %s ", t.id, t.config.PeerNode)
}
func (t *P2PTunnel) start() error {
@@ -176,23 +195,26 @@ func (t *P2PTunnel) handshake() error {
			return err
		}
	}
	if compareVersion(t.config.peerVersion, SyncServerTimeVersion) < 0 {
		gLog.Printf(LvDEBUG, "peer version %s less than %s", t.config.peerVersion, SyncServerTimeVersion)
	} else {
		ts := time.Duration(int64(t.punchTs) + t.pn.dt + t.pn.ddtma*int64(time.Since(t.pn.hbTime)+PunchTsDelay)/int64(NetworkHeartbeatTime) - time.Now().UnixNano())
		if ts > PunchTsDelay || ts < 0 {
			ts = PunchTsDelay
		}
		gLog.Printf(LvDEBUG, "sleep %d ms", ts/time.Millisecond)
		time.Sleep(ts)
	}
	gLog.Println(LvDEBUG, "handshake to ", t.config.PeerNode)
	var err error
	if gConf.Network.natType == NATCone && t.config.peerNatType == NATCone {
		err = handshakeC2C(t)
	} else if t.config.peerNatType == NATSymmetric && gConf.Network.natType == NATSymmetric {
		err = ErrorS2S
		t.close()
	} else if t.config.peerNatType == NATSymmetric && gConf.Network.natType == NATCone {
		err = handshakeC2S(t)
	} else if t.config.peerNatType == NATCone && gConf.Network.natType == NATSymmetric {
		err = handshakeS2C(t)
	} else {
		return errors.New("unknown error")
@@ -212,9 +234,15 @@ func (t *P2PTunnel) connectUnderlay() (err error) {
	case LinkModeTCP4:
		t.conn, err = t.connectUnderlayTCP()
	case LinkModeTCPPunch:
if gConf.Network.natType == NATSymmetric || t.config.peerNatType == NATSymmetric {
t.conn, err = t.connectUnderlayTCPSymmetric()
} else {
t.conn, err = t.connectUnderlayTCP()
}
case LinkModeIntranet:
		t.conn, err = t.connectUnderlayTCP()
	case LinkModeUDPPunch:
		t.conn, err = t.connectUnderlayUDP()
	}
	if err != nil {
@@ -225,59 +253,70 @@ func (t *P2PTunnel) connectUnderlay() (err error) {
	}
	t.setRun(true)
	go t.readLoop()
	go t.writeLoop()
	return nil
}
func (t *P2PTunnel) connectUnderlayUDP() (c underlay, err error) {
	gLog.Printf(LvDEBUG, "connectUnderlayUDP %s start ", t.config.PeerNode)
	defer gLog.Printf(LvDEBUG, "connectUnderlayUDP %s end ", t.config.PeerNode)
	var ul underlay
	underlayProtocol := t.config.UnderlayProtocol
	if underlayProtocol == "" {
		underlayProtocol = "quic"
	}
	if t.config.isUnderlayServer == 1 {
		time.Sleep(time.Millisecond * 10) // punching udp port will need some times in some env
		go t.pn.push(t.config.PeerNode, MsgPushUnderlayConnect, nil)
		if t.config.UnderlayProtocol == "kcp" {
			ul, err = listenKCP(t.la.String(), TunnelIdleTimeout)
		} else {
			ul, err = listenQuic(t.la.String(), TunnelIdleTimeout)
		}
		if err != nil {
			gLog.Printf(LvINFO, "listen %s error:%s", underlayProtocol, err)
			return nil, err
		}
		_, buff, err := ul.ReadBuffer()
		if err != nil {
			ul.Close()
			return nil, fmt.Errorf("read start msg error:%s", err)
		}
		if buff != nil {
			gLog.Println(LvDEBUG, string(buff))
		}
		ul.WriteBytes(MsgP2P, MsgTunnelHandshakeAck, []byte("OpenP2P,hello2"))
		gLog.Printf(LvDEBUG, "%s connection ok", underlayProtocol)
		return ul, nil
	}
//else //else
conn, e := net.ListenUDP("udp", t.la) conn, errL := net.ListenUDP("udp", t.la)
if e != nil { if errL != nil {
time.Sleep(time.Millisecond * 10) time.Sleep(time.Millisecond * 10)
conn, e = net.ListenUDP("udp", t.la) conn, errL = net.ListenUDP("udp", t.la)
if e != nil { if errL != nil {
return nil, fmt.Errorf("quic listen error:%s", e) return nil, fmt.Errorf("%s listen error:%s", underlayProtocol, errL)
} }
} }
t.pn.read(t.config.PeerNode, MsgPush, MsgPushUnderlayConnect, ReadMsgTimeout) t.pn.read(t.config.PeerNode, MsgPush, MsgPushUnderlayConnect, ReadMsgTimeout)
gLog.Println(LvDEBUG, "quic dial to ", t.ra.String()) gLog.Printf(LvDEBUG, "%s dial to %s", underlayProtocol, t.ra.String())
ul, e = dialQuic(conn, t.ra, TunnelIdleTimeout) if t.config.UnderlayProtocol == "kcp" {
if e != nil { ul, errL = dialKCP(conn, t.ra, TunnelIdleTimeout)
return nil, fmt.Errorf("quic dial to %s error:%s", t.ra.String(), e) } else {
ul, errL = dialQuic(conn, t.ra, TunnelIdleTimeout)
}
if errL != nil {
return nil, fmt.Errorf("%s dial to %s error:%s", underlayProtocol, t.ra.String(), errL)
} }
handshakeBegin := time.Now() handshakeBegin := time.Now()
ul.WriteBytes(MsgP2P, MsgTunnelHandshake, []byte("OpenP2P,hello")) ul.WriteBytes(MsgP2P, MsgTunnelHandshake, []byte("OpenP2P,hello"))
_, buff, err := ul.ReadBuffer() _, buff, err := ul.ReadBuffer() // TODO: kcp need timeout
if e != nil { if err != nil {
ul.listener.Close() ul.Close()
return nil, fmt.Errorf("read MsgTunnelHandshake error:%s", err) return nil, fmt.Errorf("read MsgTunnelHandshake error:%s", err)
} }
if buff != nil { if buff != nil {
@@ -285,22 +324,30 @@ func (t *P2PTunnel) connectUnderlayQuic() (c underlay, err error) {
} }
gLog.Println(LvINFO, "rtt=", time.Since(handshakeBegin)) gLog.Println(LvINFO, "rtt=", time.Since(handshakeBegin))
gLog.Println(LvDEBUG, "quic connection ok") gLog.Printf(LvINFO, "%s connection ok", underlayProtocol)
t.linkModeWeb = LinkModeUDPPunch t.linkModeWeb = LinkModeUDPPunch
return ul, nil return ul, nil
} }
// websocket
func (t *P2PTunnel) connectUnderlayTCP() (c underlay, err error) {
gLog.Printf(LvDEBUG, "connectUnderlayTCP %s start ", t.config.PeerNode)
defer gLog.Printf(LvDEBUG, "connectUnderlayTCP %s end ", t.config.PeerNode)
var ul *underlayTCP
peerIP := t.config.peerIP
if t.config.linkMode == LinkModeIntranet {
peerIP = t.config.peerLanIP
}
// server side
if t.config.isUnderlayServer == 1 {
ul, err = listenTCP(peerIP, t.config.peerConeNatPort, t.coneLocalPort, t.config.linkMode, t)
if err != nil {
return nil, fmt.Errorf("listen TCP error:%s", err)
}
gLog.Println(LvINFO, "TCP connection ok")
t.linkModeWeb = LinkModeIPv4
if t.config.linkMode == LinkModeIntranet {
t.linkModeWeb = LinkModeIntranet
}
return ul, nil
}
@@ -308,15 +355,18 @@ func (t *P2PTunnel) connectUnderlayTCP() (c underlay, err error) {
if t.config.linkMode == LinkModeTCP4 {
t.pn.read(t.config.PeerNode, MsgPush, MsgPushUnderlayConnect, ReadMsgTimeout)
} else { //tcp punch should sleep for punch the same time
if compareVersion(t.config.peerVersion, SyncServerTimeVersion) < 0 {
gLog.Printf(LvDEBUG, "peer version %s less than %s", t.config.peerVersion, SyncServerTimeVersion)
} else {
ts := time.Duration(int64(t.punchTs) + t.pn.dt + t.pn.ddtma*int64(time.Since(t.pn.hbTime)+PunchTsDelay)/int64(NetworkHeartbeatTime) - time.Now().UnixNano())
if ts > PunchTsDelay || ts < 0 {
ts = PunchTsDelay
}
gLog.Printf(LvDEBUG, "sleep %d ms", ts/time.Millisecond)
time.Sleep(ts)
}
}
ul, err = dialTCP(peerIP, t.config.peerConeNatPort, t.coneLocalPort, t.config.linkMode)
if err != nil {
return nil, fmt.Errorf("TCP dial to %s:%d error:%s", t.config.peerIP, t.config.peerConeNatPort, err)
}
@@ -335,12 +385,109 @@ func (t *P2PTunnel) connectUnderlayTCP() (c underlay, err error) {
gLog.Println(LvINFO, "rtt=", time.Since(handshakeBegin))
gLog.Println(LvINFO, "TCP connection ok")
t.linkModeWeb = LinkModeIPv4
if t.config.linkMode == LinkModeIntranet {
t.linkModeWeb = LinkModeIntranet
}
return ul, nil
}
func (t *P2PTunnel) connectUnderlayTCPSymmetric() (c underlay, err error) {
gLog.Printf(LvDEBUG, "connectUnderlayTCPSymmetric %s start ", t.config.PeerNode)
defer gLog.Printf(LvDEBUG, "connectUnderlayTCPSymmetric %s end ", t.config.PeerNode)
ts := time.Duration(int64(t.punchTs) + t.pn.dt + t.pn.ddtma*int64(time.Since(t.pn.hbTime)+PunchTsDelay)/int64(NetworkHeartbeatTime) - time.Now().UnixNano())
if ts > PunchTsDelay || ts < 0 {
ts = PunchTsDelay
}
gLog.Printf(LvDEBUG, "sleep %d ms", ts/time.Millisecond)
time.Sleep(ts)
startTime := time.Now()
t.linkModeWeb = LinkModeTCPPunch
gotCh := make(chan *underlayTCP, 1)
var wg sync.WaitGroup
var success atomic.Int32
if t.config.peerNatType == NATSymmetric { // c2s
randPorts := rand.Perm(65532)
for i := 0; i < SymmetricHandshakeNum; i++ {
wg.Add(1)
go func(port int) {
defer wg.Done()
ul, err := dialTCP(t.config.peerIP, port, t.coneLocalPort, LinkModeTCPPunch)
if err != nil {
return
}
if !success.CompareAndSwap(0, 1) {
ul.Close() // only cone side close
return
}
err = ul.WriteMessage(MsgP2P, MsgPunchHandshakeAck, P2PHandshakeReq{ID: t.id})
if err != nil {
ul.Close()
return
}
_, buff, err := ul.ReadBuffer()
if err != nil {
gLog.Printf(LvERROR, "utcp.ReadBuffer error:%s", err)
return
}
req := P2PHandshakeReq{}
if err = json.Unmarshal(buff, &req); err != nil {
return
}
if req.ID != t.id {
return
}
gLog.Printf(LvINFO, "handshakeS2C TCP ok. cost %dms", time.Since(startTime)/time.Millisecond)
gotCh <- ul
close(gotCh)
}(randPorts[i] + 2)
}
} else { // s2c
for i := 0; i < SymmetricHandshakeNum; i++ {
wg.Add(1)
go func() {
defer wg.Done()
ul, err := dialTCP(t.config.peerIP, t.config.peerConeNatPort, 0, LinkModeTCPPunch)
if err != nil {
return
}
_, buff, err := ul.ReadBuffer()
if err != nil {
gLog.Printf(LvERROR, "utcp.ReadBuffer error:%s", err)
return
}
req := P2PHandshakeReq{}
if err = json.Unmarshal(buff, &req); err != nil {
return
}
if req.ID != t.id {
return
}
err = ul.WriteMessage(MsgP2P, MsgPunchHandshakeAck, P2PHandshakeReq{ID: t.id})
if err != nil {
ul.Close()
return
}
if success.CompareAndSwap(0, 1) {
gotCh <- ul
close(gotCh)
}
}()
}
}
select {
case <-time.After(HandshakeTimeout):
return nil, fmt.Errorf("wait tcp handshake timeout")
case ul := <-gotCh:
return ul, nil
}
}
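`connectUnderlayTCPSymmetric` fires hundreds of concurrent dials and must keep exactly one winning connection, which it does with an `atomic.Int32` CompareAndSwap. A minimal, self-contained sketch of that first-winner pattern (simplified: workers succeed immediately instead of dialing):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// firstWinner launches n workers; exactly one claims the result via
// CompareAndSwap, mirroring how the symmetric punch keeps only the first
// successful TCP connection and lets the losers clean up after themselves.
func firstWinner(n int) int {
	got := make(chan int, 1)
	var success atomic.Int32
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			if !success.CompareAndSwap(0, 1) {
				return // a sibling already won; the real code closes its conn here
			}
			got <- id
		}(i)
	}
	wg.Wait()
	return <-got
}

func main() {
	fmt.Println("winner:", firstWinner(100))
}
```

The buffered channel of size 1 plus the CAS guarantees `got` is written exactly once, so the receive never blocks and no goroutine leaks.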
func (t *P2PTunnel) connectUnderlayTCP6() (c underlay, err error) {
gLog.Printf(LvDEBUG, "connectUnderlayTCP6 %s start ", t.config.PeerNode)
defer gLog.Printf(LvDEBUG, "connectUnderlayTCP6 %s end ", t.config.PeerNode)
var ul *underlayTCP6
if t.config.isUnderlayServer == 1 {
t.pn.push(t.config.PeerNode, MsgPushUnderlayConnect, nil)
@@ -350,7 +497,6 @@ func (t *P2PTunnel) connectUnderlayTCP6() (c underlay, err error) {
}
_, buff, err := ul.ReadBuffer()
if err != nil {
return nil, fmt.Errorf("read start msg error:%s", err)
}
if buff != nil {
@@ -358,6 +504,7 @@ func (t *P2PTunnel) connectUnderlayTCP6() (c underlay, err error) {
}
ul.WriteBytes(MsgP2P, MsgTunnelHandshakeAck, []byte("OpenP2P,hello2"))
gLog.Println(LvDEBUG, "TCP6 connection ok")
t.linkModeWeb = LinkModeIPv6
return ul, nil
}
@@ -372,7 +519,6 @@ func (t *P2PTunnel) connectUnderlayTCP6() (c underlay, err error) {
ul.WriteBytes(MsgP2P, MsgTunnelHandshake, []byte("OpenP2P,hello"))
_, buff, err := ul.ReadBuffer()
if err != nil {
return nil, fmt.Errorf("read MsgTunnelHandshake error:%s", err)
}
if buff != nil {
@@ -380,7 +526,7 @@ func (t *P2PTunnel) connectUnderlayTCP6() (c underlay, err error) {
}
gLog.Println(LvINFO, "rtt=", time.Since(handshakeBegin))
gLog.Println(LvINFO, "TCP6 connection ok")
t.linkModeWeb = LinkModeIPv6
return ul, nil
}
@@ -389,7 +535,7 @@ func (t *P2PTunnel) readLoop() {
decryptData := make([]byte, ReadBuffLen+PaddingSize) // 16 bytes for padding
gLog.Printf(LvDEBUG, "%d tunnel readloop start", t.id)
for t.isRuning() {
t.conn.SetReadDeadline(time.Now().Add(TunnelHeartbeatTime * 2))
head, body, err := t.conn.ReadBuffer()
if err != nil {
if t.isRuning() {
@@ -401,23 +547,26 @@ func (t *P2PTunnel) readLoop() {
gLog.Printf(LvWARN, "%d head.MainType != MsgP2P", t.id)
continue
}
// TODO: replace some case implement to functions
switch head.SubType {
case MsgTunnelHeartbeat:
t.hbMtx.Lock()
t.hbTime = time.Now()
t.hbMtx.Unlock()
t.conn.WriteBytes(MsgP2P, MsgTunnelHeartbeatAck, nil)
gLog.Printf(LvDev, "%d read tunnel heartbeat", t.id)
case MsgTunnelHeartbeatAck:
t.hbMtx.Lock()
t.hbTime = time.Now()
t.hbMtx.Unlock()
gLog.Printf(LvDev, "%d read tunnel heartbeat ack", t.id)
case MsgOverlayData:
if len(body) < overlayHeaderSize {
gLog.Printf(LvWARN, "%d len(body) < overlayHeaderSize", t.id)
continue
}
overlayID := binary.LittleEndian.Uint64(body[:8])
gLog.Printf(LvDev, "%d tunnel read overlay data %d bodylen=%d", t.id, overlayID, head.DataLen)
s, ok := t.overlayConns.Load(overlayID)
if !ok {
// debug level, when overlay connection closed, always has some packet not found tunnel
@@ -437,25 +586,31 @@ func (t *P2PTunnel) readLoop() {
if err != nil {
gLog.Println(LvERROR, "overlay write error:", err)
}
case MsgNodeData:
t.handleNodeData(head, body, false)
case MsgRelayNodeData:
t.handleNodeData(head, body, true)
case MsgRelayData:
if len(body) < 8 {
continue
}
tunnelID := binary.LittleEndian.Uint64(body[:8])
gLog.Printf(LvDev, "relay data to %d, len=%d", tunnelID, head.DataLen-RelayHeaderSize)
if err := t.pn.relay(tunnelID, body[RelayHeaderSize:]); err != nil {
gLog.Printf(LvERROR, "%s:%d relay to %d len=%d error:%s", t.config.PeerNode, t.id, tunnelID, len(body), ErrRelayTunnelNotFound)
}
case MsgRelayHeartbeat:
req := RelayHeartbeat{}
if err := json.Unmarshal(body, &req); err != nil {
gLog.Printf(LvERROR, "wrong %v:%s", reflect.TypeOf(req), err)
continue
}
// TODO: debug relay heartbeat
gLog.Printf(LvDEBUG, "read MsgRelayHeartbeat from rtid:%d,appid:%d", req.RelayTunnelID, req.AppID)
// update app hbtime
t.pn.updateAppHeartbeat(req.AppID)
req.From = gConf.Network.Node
t.WriteMessage(req.RelayTunnelID, MsgP2P, MsgRelayHeartbeatAck, &req)
case MsgRelayHeartbeatAck:
req := RelayHeartbeat{}
err := json.Unmarshal(body, &req)
@@ -463,6 +618,7 @@ func (t *P2PTunnel) readLoop() {
gLog.Printf(LvERROR, "wrong RelayHeartbeat:%s", err)
continue
}
// TODO: debug relay heartbeat
gLog.Printf(LvDEBUG, "read MsgRelayHeartbeatAck to appid:%d", req.AppID)
t.pn.updateAppHeartbeat(req.AppID)
case MsgOverlayConnectReq:
@@ -472,7 +628,7 @@ func (t *P2PTunnel) readLoop() {
continue
}
// app connect only accept token(not relay totp token), avoid someone using the share relay node's token
if req.Token != gConf.Network.Token {
gLog.Println(LvERROR, "Access Denied:", req.Token)
continue
}
@@ -525,30 +681,40 @@ func (t *P2PTunnel) readLoop() {
default:
}
}
t.close()
gLog.Printf(LvDEBUG, "%d tunnel readloop end", t.id)
}
func (t *P2PTunnel) writeLoop() {
t.hbMtx.Lock()
t.hbTime = time.Now() // init
t.hbMtx.Unlock()
tc := time.NewTicker(TunnelHeartbeatTime)
defer tc.Stop()
gLog.Printf(LvDEBUG, "%s:%d tunnel writeLoop start", t.config.PeerNode, t.id)
defer gLog.Printf(LvDEBUG, "%s:%d tunnel writeLoop end", t.config.PeerNode, t.id)
for t.isRuning() {
select {
case buff := <-t.writeDataSmall:
t.conn.WriteBuffer(buff)
// gLog.Printf(LvDEBUG, "write icmp %d", time.Now().Unix())
default:
select {
case buff := <-t.writeDataSmall:
t.conn.WriteBuffer(buff)
// gLog.Printf(LvDEBUG, "write icmp %d", time.Now().Unix())
case buff := <-t.writeData:
t.conn.WriteBuffer(buff)
case <-tc.C:
// tunnel send
err := t.conn.WriteBytes(MsgP2P, MsgTunnelHeartbeat, nil)
if err != nil {
gLog.Printf(LvERROR, "%d write tunnel heartbeat error %s", t.id, err)
t.close()
return
}
gLog.Printf(LvDev, "%d write tunnel heartbeat ok", t.id)
}
}
}
}
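The nested select in `writeLoop` is what gives small packets (ICMP) priority over bulk data: the outer `select` with a `default` clause always checks the high-priority channel first, and only when it is empty does the inner `select` consider both channels plus the heartbeat ticker. A minimal, self-contained sketch of the same pattern (function and channel names are illustrative):

```go
package main

import "fmt"

// drainWithPriority mirrors writeLoop's nested select: the outer select with
// a default clause always prefers the high-priority channel; only when it is
// empty does the inner select pull from the bulk channel as well.
func drainWithPriority(small, big chan string, n int) []string {
	out := make([]string, 0, n)
	for i := 0; i < n; i++ {
		select {
		case v := <-small: // high priority, checked first
			out = append(out, v)
		default:
			select { // small is empty: take whichever is ready
			case v := <-small:
				out = append(out, v)
			case v := <-big:
				out = append(out, v)
			}
		}
	}
	return out
}

func main() {
	small := make(chan string, 4)
	big := make(chan string, 4)
	big <- "bulk1"
	big <- "bulk2"
	small <- "icmp1"
	small <- "icmp2"
	fmt.Println(drainWithPriority(small, big, 4)) // small items drained first
}
```

Note the duplicated `case v := <-small` in the inner select is deliberate: without it, a small packet arriving just after the outer `default` fired would have to wait behind bulk traffic.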
@@ -559,12 +725,12 @@ func (t *P2PTunnel) listen() error {
Error: 0,
Detail: "connect ok",
To: t.config.PeerNode,
From: gConf.Network.Node,
NatType: gConf.Network.natType,
HasIPv4: gConf.Network.hasIPv4,
// IPv6: gConf.Network.IPv6,
HasUPNPorNATPMP: gConf.Network.hasUPNPorNATPMP,
FromIP: gConf.Network.publicIP,
ConeNatPort: t.coneNatPort,
ID: t.id,
PunchTs: uint64(time.Now().UnixNano() + int64(PunchTsDelay) - t.pn.dt),
@@ -572,7 +738,7 @@ func (t *P2PTunnel) listen() error {
}
t.punchTs = rsp.PunchTs
// only private node set ipv6
if t.config.fromToken == gConf.Network.Token {
rsp.IPv6 = gConf.IPv6()
}
@@ -591,3 +757,50 @@ func (t *P2PTunnel) closeOverlayConns(appID uint64) {
return true
})
}
func (t *P2PTunnel) handleNodeData(head *openP2PHeader, body []byte, isRelay bool) {
gLog.Printf(LvDev, "%d tunnel read node data bodylen=%d, relay=%t", t.id, head.DataLen, isRelay)
ch := t.pn.nodeData
// if body[9] == 1 { // TODO: deal relay
// ch = t.pn.nodeDataSmall
// gLog.Printf(LvDEBUG, "read icmp %d", time.Now().Unix())
// }
if isRelay {
fromPeerID := binary.LittleEndian.Uint64(body[:8])
ch <- &NodeData{fromPeerID, body[8:]} // TODO: cache peerNodeID; encrypt/decrypt
} else {
ch <- &NodeData{NodeNameToID(t.config.PeerNode), body} // TODO: cache peerNodeID; encrypt/decrypt
}
}
func (t *P2PTunnel) asyncWriteNodeData(mainType, subType uint16, data []byte) {
writeBytes := append(encodeHeader(mainType, subType, uint32(len(data))), data...)
// if len(data) < 192 {
if data[9] == 1 { // icmp
select {
case t.writeDataSmall <- writeBytes:
// gLog.Printf(LvWARN, "%s:%d t.writeDataSmall write %d", t.config.PeerNode, t.id, len(t.writeDataSmall))
default:
gLog.Printf(LvWARN, "%s:%d t.writeDataSmall is full, drop it", t.config.PeerNode, t.id)
}
} else {
select {
case t.writeData <- writeBytes:
default:
gLog.Printf(LvWARN, "%s:%d t.writeData is full, drop it", t.config.PeerNode, t.id)
}
}
}
func (t *P2PTunnel) WriteMessage(rtid uint64, mainType uint16, subType uint16, req interface{}) error {
if rtid == 0 {
return t.conn.WriteMessage(mainType, subType, &req)
}
relayHead := new(bytes.Buffer)
binary.Write(relayHead, binary.LittleEndian, rtid)
msg, _ := newMessage(mainType, subType, &req)
msgWithHead := append(relayHead.Bytes(), msg...)
return t.conn.WriteBytes(mainType, MsgRelayData, msgWithHead)
}
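`WriteMessage` frames relayed traffic by prefixing the message with the 8-byte little-endian relay tunnel ID, so a relay node can route a `MsgRelayData` packet by reading only its first 8 bytes. A minimal sketch of both directions of that framing (function names are illustrative):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// frameRelay prefixes a payload with the 8-byte little-endian relay tunnel
// ID, the same framing WriteMessage builds before sending MsgRelayData.
func frameRelay(rtid uint64, payload []byte) []byte {
	head := new(bytes.Buffer)
	binary.Write(head, binary.LittleEndian, rtid)
	return append(head.Bytes(), payload...)
}

// peelRelay is the receive side: recover the tunnel ID and the inner
// message. Assumes len(b) >= 8, as the readLoop checks before calling.
func peelRelay(b []byte) (uint64, []byte) {
	return binary.LittleEndian.Uint64(b[:8]), b[8:]
}

func main() {
	framed := frameRelay(42, []byte("hello"))
	id, inner := peelRelay(framed)
	fmt.Println(id, string(inner))
}
```

Because the relay node never parses past the prefix, the inner message stays opaque (and, in the real tunnel, encrypted) end to end.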

24
core/p2ptunnel_test.go Normal file
View File

@@ -0,0 +1,24 @@
package openp2p
import (
"fmt"
"testing"
)
func TestSelectPriority(t *testing.T) {
writeData := make(chan []byte, WriteDataChanSize)
writeDataSmall := make(chan []byte, WriteDataChanSize/30)
for i := 0; i < 100; i++ {
writeData <- []byte("data")
writeDataSmall <- []byte("small data")
}
for i := 0; i < 100; i++ {
select {
case buff := <-writeDataSmall:
fmt.Printf("got small data:%s\n", string(buff))
case buff := <-writeData:
fmt.Printf("got data:%s\n", string(buff))
}
}
}

View File

@@ -10,12 +10,14 @@ import (
"time"
)

const OpenP2PVersion = "3.18.4"
const ProductName string = "openp2p"
const LeastSupportVersion = "3.0.0"
const SyncServerTimeVersion = "3.9.0"
const SymmetricSimultaneouslySendVersion = "3.10.7"
const PublicIPVersion = "3.11.2"
const SupportIntranetVersion = "3.14.5"
const SupportDualTunnelVersion = "3.15.5"
const (
IfconfigPort1 = 27180
@@ -82,6 +84,7 @@ const (
MsgRelay = 5
MsgReport = 6
MsgQuery = 7
MsgSDWAN = 8
)
// TODO: seperate node push and web push.
@@ -102,6 +105,9 @@ const (
MsgPushAPPKey = 13
MsgPushReportLog = 14
MsgPushDstNodeOnline = 15
MsgPushReportGoroutine = 16
MsgPushReportMemApps = 17
MsgPushServerSideSaveMemApp = 18
)
// MsgP2P sub type message
@@ -119,6 +125,8 @@ const (
MsgRelayData
MsgRelayHeartbeat
MsgRelayHeartbeatAck
MsgNodeData
MsgRelayNodeData
)
// MsgRelay sub type message
@@ -134,12 +142,15 @@ const (
MsgReportConnect
MsgReportApps
MsgReportLog
MsgReportMemApps
)
const (
ReadBuffLen = 4096 // for UDP maybe not enough
NetworkHeartbeatTime = time.Second * 30
TunnelHeartbeatTime = time.Second * 10 // some nat udp session expired time less than 15s. change to 10s
UnderlayTCPKeepalive = time.Second * 5
UnderlayTCPConnectTimeout = time.Second * 5
TunnelIdleTimeout = time.Minute
SymmetricHandshakeNum = 800 // 0.992379
// SymmetricHandshakeNum = 1000 // 0.999510
@@ -160,6 +171,10 @@ const (
ClientAPITimeout = time.Second * 10
UnderlayConnectTimeout = time.Second * 10
MaxDirectTry = 3
// sdwan
ReadTunBuffSize = 1600
ReadTunBuffNum = 10
)
// NATNone has public ip
@@ -182,6 +197,7 @@ const (
LinkModeUDPPunch = "udppunch"
LinkModeTCPPunch = "tcppunch"
LinkModeIPv4 = "ipv4" // for web
LinkModeIntranet = "intranet" // for web
LinkModeIPv6 = "ipv6" // for web
LinkModeTCP6 = "tcp6"
LinkModeTCP4 = "tcp4"
@@ -194,6 +210,11 @@ const (
MsgQueryPeerInfoRsp
)
const (
MsgSDWANInfoReq = iota
MsgSDWANInfoRsp
)
func newMessage(mainType uint16, subType uint16, packet interface{}) ([]byte, error) {
data, err := json.Marshal(packet)
if err != nil {
@@ -214,7 +235,7 @@ func newMessage(mainType uint16, subType uint16, packet interface{}) ([]byte, er
return writeBytes, nil
}
func NodeNameToID(name string) uint64 {
return crc64.Checksum([]byte(name), crc64.MakeTable(crc64.ISO))
}
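`NodeNameToID` derives a stable `uint64` node identifier by hashing the node name with CRC64 (ISO polynomial), so peers can address each other numerically without any central ID allocation. A minimal sketch of the same derivation:

```go
package main

import (
	"fmt"
	"hash/crc64"
)

// nodeID mirrors NodeNameToID: a CRC64 checksum (ISO polynomial) of the
// node name, giving a deterministic uint64 identifier for routing.
func nodeID(name string) uint64 {
	return crc64.Checksum([]byte(name), crc64.MakeTable(crc64.ISO))
}

func main() {
	// the same name always maps to the same ID; distinct names rarely collide
	fmt.Println(nodeID("mynode"))
	fmt.Println(nodeID("mynode") == nodeID("mynode"))
}
```

CRC64 is not collision-proof, but over a network's handful of node names a 64-bit checksum collision is vanishingly unlikely, and determinism is what the relay routing needs.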
@@ -233,6 +254,7 @@ type PushConnectReq struct {
AppKey uint64 `json:"appKey,omitempty"` // for underlay tcp
LinkMode string `json:"linkMode,omitempty"`
IsUnderlayServer int `json:"isServer,omitempty"` // Requset spec peer is server
UnderlayProtocol string `json:"underlayProtocol,omitempty"` // quic or kcp, default quic
}
type PushDstNodeOnline struct {
Node string `json:"node,omitempty"`
@@ -264,6 +286,7 @@ type LoginRsp struct {
Node string `json:"node,omitempty"`
Token uint64 `json:"token,omitempty"`
Ts int64 `json:"ts,omitempty"`
LoginMaxDelay int `json:"loginMaxDelay,omitempty"` // seconds
}
type NatDetectReq struct {
@@ -310,7 +333,9 @@ type RelayNodeRsp struct {
type AddRelayTunnelReq struct {
From string `json:"from,omitempty"`
RelayName string `json:"relayName,omitempty"`
RelayTunnelID uint64 `json:"relayTunnelID,omitempty"`
RelayToken uint64 `json:"relayToken,omitempty"`
RelayMode string `json:"relayMode,omitempty"`
AppID uint64 `json:"appID,omitempty"` // deprecated
AppKey uint64 `json:"appKey,omitempty"` // deprecated
}
@@ -321,6 +346,7 @@ type APPKeySync struct {
}
type RelayHeartbeat struct {
From string `json:"from,omitempty"`
RelayTunnelID uint64 `json:"relayTunnelID,omitempty"`
AppID uint64 `json:"appID,omitempty"`
}
@@ -356,6 +382,7 @@ type AppInfo struct {
AppName string `json:"appName,omitempty"`
Error string `json:"error,omitempty"`
Protocol string `json:"protocol,omitempty"`
PunchPriority int `json:"punchPriority,omitempty"`
Whitelist string `json:"whitelist,omitempty"`
SrcPort int `json:"srcPort,omitempty"`
Protocol0 string `json:"protocol0,omitempty"`
@@ -369,6 +396,7 @@ type AppInfo struct {
PeerIP string `json:"peerIP,omitempty"`
ShareBandwidth int `json:"shareBandWidth,omitempty"`
RelayNode string `json:"relayNode,omitempty"`
SpecRelayNode string `json:"specRelayNode,omitempty"`
RelayMode string `json:"relayMode,omitempty"`
LinkMode string `json:"linkMode,omitempty"`
Version string `json:"version,omitempty"`
@@ -438,15 +466,50 @@ type QueryPeerInfoReq struct {
PeerNode string `json:"peerNode,omitempty"`
}
type QueryPeerInfoRsp struct {
PeerNode string `json:"peerNode,omitempty"`
Online int `json:"online,omitempty"`
Version string `json:"version,omitempty"`
NatType int `json:"natType,omitempty"`
IPv4 string `json:"IPv4,omitempty"`
LanIP string `json:"lanIP,omitempty"`
HasIPv4 int `json:"hasIPv4,omitempty"` // has public ipv4
IPv6 string `json:"IPv6,omitempty"` // if public relay node, ipv6 not set
HasUPNPorNATPMP int `json:"hasUPNPorNATPMP,omitempty"`
}
type SDWANNode struct {
Name string `json:"name,omitempty"`
IP string `json:"ip,omitempty"`
Resource string `json:"resource,omitempty"`
Enable int32 `json:"enable,omitempty"`
}
type SDWANInfo struct {
ID uint64 `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Gateway string `json:"gateway,omitempty"`
Mode string `json:"mode,omitempty"` // default: fullmesh; central
CentralNode string `json:"centralNode,omitempty"`
ForceRelay int32 `json:"forceRelay,omitempty"`
PunchPriority int32 `json:"punchPriority,omitempty"`
Enable int32 `json:"enable,omitempty"`
Nodes []SDWANNode
}
const (
SDWANModeFullmesh = "fullmesh"
SDWANModeCentral = "central"
)
type ServerSideSaveMemApp struct {
From string `json:"from,omitempty"`
Node string `json:"node,omitempty"` // for server side findtunnel, maybe relayNode
TunnelID uint64 `json:"tunnelID,omitempty"` // save in app.tunnel or app.relayTunnel
RelayTunnelID uint64 `json:"relayTunnelID,omitempty"` // rtid, if not 0 relay
RelayMode string `json:"relayMode,omitempty"`
AppID uint64 `json:"appID,omitempty"`
}
const rootCA = `-----BEGIN CERTIFICATE-----
MIIDhTCCAm0CFHm0cd8dnGCbUW/OcS56jf0gvRk7MA0GCSqGSIb3DQEBCwUAMH4x
CzAJBgNVBAYTAkNOMQswCQYDVQQIDAJHRDETMBEGA1UECgwKb3BlbnAycC5jbjET

292
core/sdwan.go Normal file
View File

@@ -0,0 +1,292 @@
package openp2p
import (
"encoding/binary"
"encoding/json"
"fmt"
"net"
"runtime"
"strings"
"sync"
"time"
)
type PacketHeader struct {
version int
// src uint32
// prot uint8
protocol byte
dst uint32
port uint16
}
func parseHeader(b []byte, h *PacketHeader) error {
if len(b) < 20 {
return fmt.Errorf("small packet")
}
h.version = int(b[0] >> 4)
h.protocol = byte(b[9])
if h.version == 4 {
h.dst = binary.BigEndian.Uint32(b[16:20])
} else if h.version != 6 {
return fmt.Errorf("unknown version in ip header:%d", h.version)
}
if h.protocol == 6 || h.protocol == 17 { // TCP or UDP; assumes a 20-byte IPv4 header (IHL=5)
if len(b) < 24 {
return fmt.Errorf("small packet")
}
h.port = binary.BigEndian.Uint16(b[22:24])
}
return nil
}
}
type sdwanNode struct {
name string
id uint64
}
type p2pSDWAN struct {
nodeName string
tun *optun
sysRoute sync.Map // ip:sdwanNode
subnet *net.IPNet
gateway net.IP
virtualIP *net.IPNet
internalRoute *IPTree
}
func (s *p2pSDWAN) init(name string) error {
if gConf.getSDWAN().Gateway == "" {
gLog.Println(LvDEBUG, "not in sdwan, clear all")
}
if s.internalRoute == nil {
s.internalRoute = NewIPTree("")
}
s.nodeName = name
s.gateway, s.subnet, _ = net.ParseCIDR(gConf.getSDWAN().Gateway)
for _, node := range gConf.getDelNodes() {
gLog.Println(LvDEBUG, "deal deleted node: ", node.Name)
delRoute(node.IP, s.gateway.String())
s.internalRoute.Del(node.IP, node.IP)
ipNum, _ := inetAtoN(node.IP)
s.sysRoute.Delete(ipNum)
gConf.delete(AppConfig{SrcPort: 0, PeerNode: node.Name})
GNetwork.DeleteApp(AppConfig{SrcPort: 0, PeerNode: node.Name})
arr := strings.Split(node.Resource, ",")
for _, r := range arr {
_, ipnet, err := net.ParseCIDR(r)
if err != nil {
// fmt.Println("Error parsing CIDR:", err)
continue
}
if ipnet.Contains(net.ParseIP(gConf.Network.localIP)) { // local ip and resource in the same lan
continue
}
minIP := ipnet.IP
maxIP := make(net.IP, len(minIP))
copy(maxIP, minIP)
for i := range minIP {
maxIP[i] = minIP[i] | ^ipnet.Mask[i]
}
s.internalRoute.Del(minIP.String(), maxIP.String())
delRoute(ipnet.String(), s.gateway.String())
}
}
for _, node := range gConf.getAddNodes() {
gLog.Println(LvDEBUG, "deal add node: ", node.Name)
ipNet := &net.IPNet{
IP: net.ParseIP(node.IP),
Mask: s.subnet.Mask,
}
if node.Name == s.nodeName {
s.virtualIP = ipNet
gLog.Println(LvINFO, "start tun ", ipNet.String())
err := s.StartTun()
if err != nil {
gLog.Println(LvERROR, "start tun error:", err)
return err
}
gLog.Println(LvINFO, "start tun ok")
allowTunForward()
addRoute(s.subnet.String(), s.gateway.String(), s.tun.tunName)
// addRoute("255.255.255.255/32", s.gateway.String(), s.tun.tunName) // for broadcast
// addRoute("224.0.0.0/4", s.gateway.String(), s.tun.tunName) // for multicast
initSNATRule(s.subnet.String()) // for network resource
continue
}
ip, err := inetAtoN(ipNet.String())
if err != nil {
return err
}
s.sysRoute.Store(ip, &sdwanNode{name: node.Name, id: NodeNameToID(node.Name)})
s.internalRoute.AddIntIP(ip, ip, &sdwanNode{name: node.Name, id: NodeNameToID(node.Name)})
}
for _, node := range gConf.getAddNodes() {
if node.Name == s.nodeName { // skip its own resources
continue
}
if len(node.Resource) > 0 {
gLog.Printf(LvINFO, "handle added node: %s resource: %s", node.Name, node.Resource)
arr := strings.Split(node.Resource, ",")
for _, r := range arr {
// add internal route
_, ipnet, err := net.ParseCIDR(r)
if err != nil {
fmt.Println("Error parsing CIDR:", err)
continue
}
if ipnet.Contains(net.ParseIP(gConf.Network.localIP)) { // local ip and resource in the same lan
continue
}
minIP := ipnet.IP
maxIP := make(net.IP, len(minIP))
copy(maxIP, minIP)
for i := range minIP {
maxIP[i] = minIP[i] | ^ipnet.Mask[i]
}
s.internalRoute.Add(minIP.String(), maxIP.String(), &sdwanNode{name: node.Name, id: NodeNameToID(node.Name)})
// add sys route
addRoute(ipnet.String(), s.gateway.String(), s.tun.tunName)
}
}
}
gConf.retryAllMemApp()
gLog.Printf(LvINFO, "sdwan init ok")
return nil
}
func (s *p2pSDWAN) run() {
s.sysRoute.Range(func(key, value interface{}) bool {
node := value.(*sdwanNode)
GNetwork.ConnectNode(node.name)
return true
})
}
func (s *p2pSDWAN) readNodeLoop() {
gLog.Printf(LvDEBUG, "sdwan readNodeLoop start")
defer gLog.Printf(LvDEBUG, "sdwan readNodeLoop end")
writeBuff := make([][]byte, 1)
for {
nd := GNetwork.ReadNode(time.Second * 10) // TODO: read multi packet
if nd == nil {
gLog.Printf(LvDev, "waiting for node data")
continue
}
head := PacketHeader{}
parseHeader(nd.Data, &head)
gLog.Printf(LvDev, "write tun dst ip=%s,len=%d", net.IP{byte(head.dst >> 24), byte(head.dst >> 16), byte(head.dst >> 8), byte(head.dst)}.String(), len(nd.Data))
if PIHeaderSize == 0 {
writeBuff[0] = nd.Data
} else {
writeBuff[0] = make([]byte, PIHeaderSize+len(nd.Data))
copy(writeBuff[0][PIHeaderSize:], nd.Data)
}
n, err := s.tun.Write(writeBuff, PIHeaderSize)
if err != nil {
gLog.Printf(LvDEBUG, "write tun dst ip=%s,len=%d,error:%s", net.IP{byte(head.dst >> 24), byte(head.dst >> 16), byte(head.dst >> 8), byte(head.dst)}.String(), n, err)
}
}
}
func isBroadcastOrMulticast(ipUint32 uint32, subnet *net.IPNet) bool {
// return ipUint32 == 0xffffffff || (byte(ipUint32) == 0xff) || (ipUint32>>28 == 0xe)
return ipUint32 == 0xffffffff || (ipUint32>>28 == 0xe) // 255.255.255.255/32, 224.0.0.0/4
}
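The bitwise test above can be read off directly: 0xffffffff is the limited broadcast address 255.255.255.255, and any address whose top nibble is 0xe lies in the multicast range 224.0.0.0/4. A minimal standalone sketch (the `isBcastOrMcast` name is illustrative, not from the package):

```go
package main

import "fmt"

// isBcastOrMcast reports whether a host-order IPv4 address is the limited
// broadcast address or falls in the 224.0.0.0/4 multicast block.
func isBcastOrMcast(ip uint32) bool {
	return ip == 0xffffffff || ip>>28 == 0xe
}

func main() {
	fmt.Println(isBcastOrMcast(0xffffffff)) // 255.255.255.255
	fmt.Println(isBcastOrMcast(0xe0000001)) // 224.0.0.1
	fmt.Println(isBcastOrMcast(0x0a000001)) // 10.0.0.1
}
```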
func (s *p2pSDWAN) routeTunPacket(p []byte, head *PacketHeader) {
var node *sdwanNode
// v, ok := s.routes.Load(ih.dst)
v, ok := s.internalRoute.Load(head.dst)
if !ok || v == nil {
if isBroadcastOrMulticast(head.dst, s.subnet) {
gLog.Printf(LvDev, "multicast ip=%s", net.IP{byte(head.dst >> 24), byte(head.dst >> 16), byte(head.dst >> 8), byte(head.dst)}.String())
GNetwork.WriteBroadcast(p)
}
return
} else {
node = v.(*sdwanNode)
}
err := GNetwork.WriteNode(node.id, p)
if err != nil {
gLog.Printf(LvDev, "write packet to %s fail: %s", node.name, err)
}
}
func (s *p2pSDWAN) readTunLoop() {
gLog.Printf(LvDEBUG, "sdwan readTunLoop start")
defer gLog.Printf(LvDEBUG, "sdwan readTunLoop end")
readBuff := make([][]byte, ReadTunBuffNum)
for i := 0; i < ReadTunBuffNum; i++ {
readBuff[i] = make([]byte, ReadTunBuffSize+PIHeaderSize)
}
readBuffSize := make([]int, ReadTunBuffNum)
ih := PacketHeader{}
for {
n, err := s.tun.Read(readBuff, readBuffSize, PIHeaderSize)
if err != nil {
gLog.Printf(LvERROR, "read tun fail: %s", err)
return
}
for i := 0; i < n; i++ {
if readBuffSize[i] > ReadTunBuffSize {
gLog.Printf(LvERROR, "read tun overflow: len=%d", readBuffSize[i])
continue
}
parseHeader(readBuff[i][PIHeaderSize:readBuffSize[i]+PIHeaderSize], &ih)
gLog.Printf(LvDev, "read tun dst ip=%s,len=%d", net.IP{byte(ih.dst >> 24), byte(ih.dst >> 16), byte(ih.dst >> 8), byte(ih.dst)}.String(), readBuffSize[i])
s.routeTunPacket(readBuff[i][PIHeaderSize:readBuffSize[i]+PIHeaderSize], &ih)
}
}
}
func (s *p2pSDWAN) StartTun() error {
sdwan := gConf.getSDWAN()
if s.tun == nil {
tun := &optun{}
err := tun.Start(s.virtualIP.String(), &sdwan)
if err != nil {
gLog.Println(LvERROR, "open tun fail:", err)
return err
}
s.tun = tun
go s.readTunLoop()
go s.readNodeLoop() // multi-thread read will cause packets out of order, resulting in slower speeds
}
err := setTunAddr(s.tun.tunName, s.virtualIP.String(), sdwan.Gateway, s.tun.dev)
if err != nil {
gLog.Printf(LvERROR, "setTunAddr error:%s,%s,%s,%s", err, s.tun.tunName, s.virtualIP.String(), sdwan.Gateway)
return err
}
return nil
}
func handleSDWAN(subType uint16, msg []byte) error {
gLog.Printf(LvDEBUG, "handle sdwan msg type:%d", subType)
var err error
switch subType {
case MsgSDWANInfoRsp:
rsp := SDWANInfo{}
if err = json.Unmarshal(msg[openP2PHeaderSize:], &rsp); err != nil {
return ErrMsgFormat
}
gLog.Println(LvINFO, "sdwan init:", prettyJson(rsp))
if runtime.GOOS == "android" {
AndroidSDWANConfig <- msg[openP2PHeaderSize:]
}
// GNetwork.sdwan.detail = &rsp
gConf.setSDWAN(rsp)
err = GNetwork.sdwan.init(gConf.Network.Node)
if err != nil {
gLog.Println(LvERROR, "sdwan init fail: ", err)
if GNetwork.sdwan.tun != nil {
GNetwork.sdwan.tun.Stop()
GNetwork.sdwan.tun = nil
return err
}
}
go GNetwork.sdwan.run()
default:
}
return err
}

@@ -1,7 +1,6 @@
 package openp2p
 
 import (
-	"fmt"
 	"sync"
 	"time"
 )
@@ -44,7 +43,7 @@ func (sl *SpeedLimiter) Add(increment int, wait bool) bool {
 	sl.lastUpdate = time.Now()
 	if sl.freeCap < 0 {
 		// sleep for the overflow
-		fmt.Println("sleep ", time.Millisecond*time.Duration(-sl.freeCap*100)/time.Duration(sl.speed))
+		// fmt.Println("sleep ", time.Millisecond*time.Duration(-sl.freeCap*100)/time.Duration(sl.speed))
 		time.Sleep(time.Millisecond * time.Duration(-sl.freeCap*1000) / time.Duration(sl.speed)) // sleep ms
 	}
 	return true

@@ -25,8 +25,9 @@ func TestSymmetric(t *testing.T) {
 	speed := 20000 / 180
 	speedl := newSpeedLimiter(speed, 180)
 	oneBuffSize := 300
-	writeNum := 100
+	writeNum := 70
 	expectTime := (oneBuffSize*writeNum - 20000) / speed
+	t.Logf("expect %ds", expectTime)
 	startTs := time.Now()
 	for i := 0; i < writeNum; i++ {
 		speedl.Add(oneBuffSize, true)
@@ -41,7 +42,7 @@ func TestSymmetric2(t *testing.T) {
 	speed := 30000 / 180
 	speedl := newSpeedLimiter(speed, 180)
 	oneBuffSize := 800
-	writeNum := 50
+	writeNum := 40
 	expectTime := (oneBuffSize*writeNum - 30000) / speed
 	startTs := time.Now()
 	for i := 0; i < writeNum; {

@@ -18,7 +18,7 @@ func UDPWrite(conn *net.UDPConn, dst net.Addr, mainType uint16, subType uint16,
 	return conn.WriteTo(msg, dst)
 }
 
-func UDPRead(conn *net.UDPConn, timeout time.Duration) (ra net.Addr, head *openP2PHeader, result []byte, len int, err error) {
+func UDPRead(conn *net.UDPConn, timeout time.Duration) (ra net.Addr, head *openP2PHeader, buff []byte, length int, err error) {
 	if timeout > 0 {
 		err = conn.SetReadDeadline(time.Now().Add(timeout))
 		if err != nil {
@@ -27,15 +27,15 @@ func UDPRead(conn *net.UDPConn, timeout time.Duration) (ra net.Addr, head *openP
 		}
 	}
-	result = make([]byte, 1024)
-	len, ra, err = conn.ReadFrom(result)
+	buff = make([]byte, 1024)
+	length, ra, err = conn.ReadFrom(buff)
 	if err != nil {
 		// gLog.Println(LevelDEBUG, "ReadFrom error")
 		return nil, nil, nil, 0, err
 	}
 	head = &openP2PHeader{}
-	err = binary.Read(bytes.NewReader(result[:openP2PHeaderSize]), binary.LittleEndian, head)
-	if err != nil {
+	err = binary.Read(bytes.NewReader(buff[:openP2PHeaderSize]), binary.LittleEndian, head)
+	if err != nil || head.DataLen > uint32(len(buff)-openP2PHeaderSize) {
 		gLog.Println(LvERROR, "parse p2pheader error:", err)
 		return nil, nil, nil, 0, err
 	}

@@ -37,6 +37,7 @@ func DefaultReadBuffer(ul underlay) (*openP2PHeader, []byte, error) {
 func DefaultWriteBytes(ul underlay, mainType, subType uint16, data []byte) error {
 	writeBytes := append(encodeHeader(mainType, subType, uint32(len(data))), data...)
+	ul.SetWriteDeadline(time.Now().Add(TunnelHeartbeatTime / 2))
 	ul.WLock()
 	_, err := ul.Write(writeBytes)
 	ul.WUnlock()
@@ -44,6 +45,7 @@ func DefaultWriteBytes(ul underlay, mainType, subType uint16, data []byte) error
 }
 
 func DefaultWriteBuffer(ul underlay, data []byte) error {
+	ul.SetWriteDeadline(time.Now().Add(TunnelHeartbeatTime / 2))
 	ul.WLock()
 	_, err := ul.Write(data)
 	ul.WUnlock()
@@ -55,6 +57,7 @@ func DefaultWriteMessage(ul underlay, mainType uint16, subType uint16, packet in
 	if err != nil {
 		return err
 	}
+	ul.SetWriteDeadline(time.Now().Add(TunnelHeartbeatTime / 2))
 	ul.WLock()
 	_, err = ul.Write(writeBytes)
 	ul.WUnlock()

core/underlay_kcp.go (new file, 94 lines)

@@ -0,0 +1,94 @@
package openp2p
import (
"fmt"
"net"
"sync"
"time"
"github.com/xtaci/kcp-go/v5"
)
type underlayKCP struct {
listener *kcp.Listener
writeMtx *sync.Mutex
*kcp.UDPSession
}
func (conn *underlayKCP) Protocol() string {
return "kcp"
}
func (conn *underlayKCP) ReadBuffer() (*openP2PHeader, []byte, error) {
return DefaultReadBuffer(conn)
}
func (conn *underlayKCP) WriteBytes(mainType uint16, subType uint16, data []byte) error {
return DefaultWriteBytes(conn, mainType, subType, data)
}
func (conn *underlayKCP) WriteBuffer(data []byte) error {
return DefaultWriteBuffer(conn, data)
}
func (conn *underlayKCP) WriteMessage(mainType uint16, subType uint16, packet interface{}) error {
return DefaultWriteMessage(conn, mainType, subType, packet)
}
func (conn *underlayKCP) Close() error {
conn.UDPSession.Close()
return nil
}
func (conn *underlayKCP) WLock() {
conn.writeMtx.Lock()
}
func (conn *underlayKCP) WUnlock() {
conn.writeMtx.Unlock()
}
func (conn *underlayKCP) CloseListener() {
if conn.listener != nil {
conn.listener.Close()
}
}
func (conn *underlayKCP) Accept() error {
kConn, err := conn.listener.AcceptKCP()
if err != nil {
conn.listener.Close()
return err
}
kConn.SetNoDelay(0, 40, 0, 0)
kConn.SetWindowSize(512, 512)
kConn.SetWriteBuffer(1024 * 128)
kConn.SetReadBuffer(1024 * 128)
conn.UDPSession = kConn
return nil
}
func listenKCP(addr string, idleTimeout time.Duration) (*underlayKCP, error) {
gLog.Println(LvDEBUG, "kcp listen on ", addr)
listener, err := kcp.ListenWithOptions(addr, nil, 0, 0)
if err != nil {
return nil, fmt.Errorf("kcp.ListenWithOptions error:%s", err)
}
ul := &underlayKCP{listener: listener, writeMtx: &sync.Mutex{}}
err = ul.Accept()
if err != nil {
ul.CloseListener()
return nil, fmt.Errorf("accept KCP error:%s", err)
}
return ul, nil
}
func dialKCP(conn *net.UDPConn, remoteAddr *net.UDPAddr, idleTimeout time.Duration) (*underlayKCP, error) {
kConn, err := kcp.NewConn(remoteAddr.String(), nil, 0, 0, conn)
if err != nil {
return nil, fmt.Errorf("kcp.NewConn error:%s", err)
}
kConn.SetNoDelay(0, 40, 0, 0)
kConn.SetWindowSize(512, 512)
kConn.SetWriteBuffer(1024 * 128)
kConn.SetReadBuffer(1024 * 128)
ul := &underlayKCP{nil, &sync.Mutex{}, kConn}
return ul, nil
}

@@ -49,6 +49,7 @@ func (conn *underlayQUIC) WriteMessage(mainType uint16, subType uint16, packet i
 func (conn *underlayQUIC) Close() error {
 	conn.Stream.CancelRead(1)
 	conn.Connection.CloseWithError(0, "")
+	conn.CloseListener()
 	return nil
 }
 func (conn *underlayQUIC) WLock() {
@@ -86,7 +87,13 @@ func listenQuic(addr string, idleTimeout time.Duration) (*underlayQUIC, error) {
 	if err != nil {
 		return nil, fmt.Errorf("quic.ListenAddr error:%s", err)
 	}
-	return &underlayQUIC{listener: listener, writeMtx: &sync.Mutex{}}, nil
+	ul := &underlayQUIC{listener: listener, writeMtx: &sync.Mutex{}}
+	err = ul.Accept()
+	if err != nil {
+		ul.CloseListener()
+		return nil, fmt.Errorf("accept quic error:%s", err)
+	}
+	return ul, nil
 }
 func dialQuic(conn *net.UDPConn, remoteAddr *net.UDPAddr, idleTimeout time.Duration) (*underlayQUIC, error) {

@@ -13,6 +13,7 @@ import (
 type underlayTCP struct {
 	writeMtx *sync.Mutex
 	net.Conn
+	connectTime time.Time
 }
 
 func (conn *underlayTCP) Protocol() string {
@@ -47,7 +48,7 @@ func (conn *underlayTCP) WUnlock() {
 func listenTCP(host string, port int, localPort int, mode string, t *P2PTunnel) (*underlayTCP, error) {
 	if mode == LinkModeTCPPunch {
-		if compareVersion(t.config.peerVersion, SyncServerTimeVersion) == LESS {
+		if compareVersion(t.config.peerVersion, SyncServerTimeVersion) < 0 {
 			gLog.Printf(LvDEBUG, "peer version %s less than %s", t.config.peerVersion, SyncServerTimeVersion)
 		} else {
 			ts := time.Duration(int64(t.punchTs) + t.pn.dt - time.Now().UnixNano())
@@ -73,12 +74,36 @@ func listenTCP(host string, port int, localPort int, mode string, t *P2PTunnel)
 	}
 	t.pn.push(t.config.PeerNode, MsgPushUnderlayConnect, nil)
 	tid := t.id
-	if compareVersion(t.config.peerVersion, PublicIPVersion) == LESS { // old version
+	if compareVersion(t.config.peerVersion, PublicIPVersion) < 0 { // old version
 		ipBytes := net.ParseIP(t.config.peerIP).To4()
 		tid = uint64(binary.BigEndian.Uint32(ipBytes))
 		gLog.Println(LvDEBUG, "compatible with old client, use ip as key:", tid)
 	}
-	utcp := v4l.getUnderlayTCP(tid)
+	var utcp *underlayTCP
+	if mode == LinkModeIntranet && gConf.Network.hasIPv4 == 0 && gConf.Network.hasUPNPorNATPMP == 0 {
+		addr, _ := net.ResolveTCPAddr("tcp4", fmt.Sprintf("0.0.0.0:%d", localPort))
+		l, err := net.ListenTCP("tcp4", addr)
+		if err != nil {
+			gLog.Printf(LvERROR, "listen %d error:%s", localPort, err)
+			return nil, err
+		}
+		defer l.Close()
+		err = l.SetDeadline(time.Now().Add(UnderlayTCPConnectTimeout))
+		if err != nil {
+			gLog.Printf(LvERROR, "set listen timeout:%s", err)
+			return nil, err
+		}
+		c, err := l.Accept()
+		if err != nil {
+			return nil, err
+		}
+		utcp = &underlayTCP{writeMtx: &sync.Mutex{}, Conn: c}
+	} else {
+		if v4l != nil {
+			utcp = v4l.getUnderlayTCP(tid)
+		}
+	}
 	if utcp == nil {
 		return nil, ErrConnectPublicV4
 	}
@@ -89,9 +114,9 @@ func dialTCP(host string, port int, localPort int, mode string) (*underlayTCP, e
 	var c net.Conn
 	var err error
 	if mode == LinkModeTCPPunch {
-		gLog.Println(LvDEBUG, " send tcp punch: ", fmt.Sprintf("0.0.0.0:%d", localPort), "-->", fmt.Sprintf("%s:%d", host, port))
+		gLog.Println(LvDev, " send tcp punch: ", fmt.Sprintf("0.0.0.0:%d", localPort), "-->", fmt.Sprintf("%s:%d", host, port))
 		if c, err = reuse.DialTimeout("tcp", fmt.Sprintf("0.0.0.0:%d", localPort), fmt.Sprintf("%s:%d", host, port), CheckActiveTimeout); err != nil {
-			gLog.Println(LvDEBUG, "send tcp punch: ", err)
+			gLog.Println(LvDev, "send tcp punch: ", err)
 		}
 	} else {
@@ -99,9 +124,12 @@ func dialTCP(host string, port int, localPort int, mode string) (*underlayTCP, e
 	}
 	if err != nil {
-		gLog.Printf(LvERROR, "Dial %s:%d error:%s", host, port, err)
+		gLog.Printf(LvDev, "Dial %s:%d error:%s", host, port, err)
 		return nil, err
 	}
+	tc := c.(*net.TCPConn)
+	tc.SetKeepAlive(true)
+	tc.SetKeepAlivePeriod(UnderlayTCPKeepalive)
 	gLog.Printf(LvDEBUG, "Dial %s:%d OK", host, port)
 	return &underlayTCP{writeMtx: &sync.Mutex{}, Conn: c}, nil
 }

@@ -8,7 +8,6 @@ import (
 )
 
 type underlayTCP6 struct {
-	listener net.Listener
 	writeMtx *sync.Mutex
 	net.Conn
 }
@@ -42,14 +41,14 @@ func (conn *underlayTCP6) WLock() {
 func (conn *underlayTCP6) WUnlock() {
 	conn.writeMtx.Unlock()
 }
-func listenTCP6(port int, idleTimeout time.Duration) (*underlayTCP6, error) {
+func listenTCP6(port int, timeout time.Duration) (*underlayTCP6, error) {
 	addr, _ := net.ResolveTCPAddr("tcp6", fmt.Sprintf("[::]:%d", port))
 	l, err := net.ListenTCP("tcp6", addr)
 	if err != nil {
 		return nil, err
 	}
 	defer l.Close()
-	l.SetDeadline(time.Now().Add(UnderlayConnectTimeout))
+	l.SetDeadline(time.Now().Add(timeout))
 	c, err := l.Accept()
 	defer l.Close()
 	if err != nil {

@@ -11,6 +11,7 @@ import (
 	"io"
 	"io/ioutil"
 	"net/http"
+	"net/url"
 	"os"
 	"path/filepath"
 	"runtime"
@@ -38,7 +39,7 @@ func update(host string, port int) error {
 	}
 	goos := runtime.GOOS
 	goarch := runtime.GOARCH
-	rsp, err := c.Get(fmt.Sprintf("https://%s:%d/api/v1/update?fromver=%s&os=%s&arch=%s&user=%s&node=%s", host, port, OpenP2PVersion, goos, goarch, gConf.Network.User, gConf.Network.Node))
+	rsp, err := c.Get(fmt.Sprintf("https://%s:%d/api/v1/update?fromver=%s&os=%s&arch=%s&user=%s&node=%s", host, port, OpenP2PVersion, goos, goarch, url.QueryEscape(gConf.Network.User), url.QueryEscape(gConf.Network.Node)))
 	if err != nil {
 		gLog.Println(LvERROR, "update:query update list failed:", err)
 		return err
@@ -70,12 +71,10 @@ func update(host string, port int) error {
 	return nil
 }
 
-func updateFile(url string, checksum string, dst string) error {
-	gLog.Println(LvINFO, "download ", url)
-	tmpFile := filepath.Dir(os.Args[0]) + "/openp2p.tmp"
-	output, err := os.OpenFile(tmpFile, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0776)
+func downloadFile(url string, checksum string, dstFile string) error {
+	output, err := os.OpenFile(dstFile, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0776)
 	if err != nil {
-		gLog.Printf(LvERROR, "OpenFile %s error:%s", tmpFile, err)
+		gLog.Printf(LvERROR, "OpenFile %s error:%s", dstFile, err)
 		return err
 	}
 	caCertPool, err := x509.SystemCertPool()
@@ -109,6 +108,16 @@ func updateFile(url string, checksum string, dst string) error {
 	output.Close()
 	gLog.Println(LvINFO, "download ", url, " ok")
 	gLog.Printf(LvINFO, "size: %d bytes", n)
+	return nil
+}
+
+func updateFile(url string, checksum string, dst string) error {
+	gLog.Println(LvINFO, "download ", url)
+	tmpFile := filepath.Dir(os.Args[0]) + "/openp2p.tmp"
+	err := downloadFile(url, checksum, tmpFile)
+	if err != nil {
+		return err
+	}
 	backupFile := os.Args[0] + "0"
 	err = os.Rename(os.Args[0], backupFile) // the old daemon process was using the 0 file, so it will prevent override it
 	if err != nil {
@@ -203,9 +212,6 @@ func extractTgz(dst, src string) error {
 	if err != nil {
 		return err
 	}
-	if err != nil {
-		return err
-	}
 	defer outFile.Close()
 	if _, err := io.Copy(outFile, tarReader); err != nil {
 		return err

@@ -5,6 +5,7 @@ import (
 	"os"
 	"os/exec"
 	"path/filepath"
+	"strconv"
 	"strings"
 
 	"golang.org/x/sys/windows/registry"
@@ -23,6 +24,17 @@ func getOsName() (osName string) {
 	defer k.Close()
 	pn, _, err := k.GetStringValue("ProductName")
 	if err == nil {
+		currentBuild, _, err := k.GetStringValue("CurrentBuild")
+		if err != nil {
+			return
+		}
+		buildNumber, err := strconv.Atoi(currentBuild)
+		if err != nil {
+			return
+		}
+		if buildNumber >= 22000 {
+			pn = strings.Replace(pn, "Windows 10", "Windows 11", 1)
+		}
 		osName = pn
 	}
 	return

@@ -15,20 +15,20 @@ type v4Listener struct {
 }
 
 func (vl *v4Listener) start() error {
-	v4l.acceptCh = make(chan bool, 10)
+	v4l.acceptCh = make(chan bool, 500)
 	for {
 		vl.listen()
-		time.Sleep(time.Second * 5)
+		time.Sleep(UnderlayTCPConnectTimeout)
 	}
 }
 func (vl *v4Listener) listen() error {
-	gLog.Printf(LvINFO, "listen %d start", vl.port)
-	defer gLog.Printf(LvINFO, "listen %d end", vl.port)
-	addr, _ := net.ResolveTCPAddr("tcp", fmt.Sprintf("0.0.0.0:%d", vl.port))
-	l, err := net.ListenTCP("tcp", addr)
+	gLog.Printf(LvINFO, "v4Listener listen %d start", vl.port)
+	defer gLog.Printf(LvINFO, "v4Listener listen %d end", vl.port)
+	addr, _ := net.ResolveTCPAddr("tcp4", fmt.Sprintf("0.0.0.0:%d", vl.port))
+	l, err := net.ListenTCP("tcp4", addr)
 	if err != nil {
-		gLog.Printf(LvERROR, "listen %d error:", vl.port, err)
+		gLog.Printf(LvERROR, "v4Listener listen %d error:%s", vl.port, err)
 		return err
 	}
 	defer l.Close()
@@ -43,8 +43,8 @@ func (vl *v4Listener) listen() error {
 }
 func (vl *v4Listener) handleConnection(c net.Conn) {
 	gLog.Println(LvDEBUG, "v4Listener accept connection: ", c.RemoteAddr().String())
-	utcp := &underlayTCP{writeMtx: &sync.Mutex{}, Conn: c}
-	utcp.SetReadDeadline(time.Now().Add(time.Second * 5))
+	utcp := &underlayTCP{writeMtx: &sync.Mutex{}, Conn: c, connectTime: time.Now()}
+	utcp.SetReadDeadline(time.Now().Add(UnderlayTCPConnectTimeout))
 	_, buff, err := utcp.ReadBuffer()
 	if err != nil {
 		gLog.Printf(LvERROR, "utcp.ReadBuffer error:", err)
@@ -64,8 +64,18 @@ func (vl *v4Listener) handleConnection(c net.Conn) {
 		tid = binary.LittleEndian.Uint64(buff[:8])
 		gLog.Println(LvDEBUG, "hello ", tid)
 	}
+	// clear timeout connection
+	vl.conns.Range(func(idx, i interface{}) bool {
+		ut := i.(*underlayTCP)
+		if ut.connectTime.Before(time.Now().Add(-UnderlayTCPConnectTimeout)) {
+			vl.conns.Delete(idx)
+		}
+		return true
+	})
 	vl.conns.Store(tid, utcp)
+	if len(vl.acceptCh) == 0 {
 		vl.acceptCh <- true
+	}
 }
 func (vl *v4Listener) getUnderlayTCP(tid uint64) *underlayTCP {

@@ -2,7 +2,7 @@ FROM alpine:3.18.2
 # Replace the default Alpine repositories with Aliyun mirrors
 RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && \
-    apk add --no-cache ca-certificates && \
+    apk add --no-cache ca-certificates iptables && \
     rm -rf /tmp/* /var/tmp/* /var/cache/apk/* /var/cache/distfiles/*
 
 COPY get-client.sh /

example/dll/dll.cpp (new file, 14 lines)

@@ -0,0 +1,14 @@
#include <iostream>
#include <windows.h>
using namespace std;
typedef void (*pRun)(const char *);
int main(int argc, char *argv[])
{
HMODULE dll = LoadLibraryA("openp2p.dll");
if (!dll)
return 1;
pRun run = (pRun)GetProcAddress(dll, "RunCmd");
if (run)
run("-node 5800-debug2 -token YOUR-TOKEN");
FreeLibrary(dll);
return 0;
}


@@ -0,0 +1,42 @@
package main
import (
"fmt"
op "openp2p/core"
"time"
)
func main() {
op.Run()
for i := 0; i < 10; i++ {
go echoClient("5800-debug")
}
echoClient("5800-debug")
}
func echoClient(peerNode string) {
sendDatalen := op.ReadBuffLen
sendBuff := make([]byte, sendDatalen)
for i := 0; i < len(sendBuff); i++ {
sendBuff[i] = byte('A' + i/100)
}
// peerNode = "YOUR-PEER-NODE-NAME"
if err := op.GNetwork.ConnectNode(peerNode); err != nil {
fmt.Println("connect error:", err)
return
}
for i := 0; ; i++ {
sendBuff[1] = 'A' + byte(i%26)
if err := op.GNetwork.WriteNode(op.NodeNameToID(peerNode), sendBuff[:sendDatalen]); err != nil {
fmt.Println("write error:", err)
break
}
nd := op.GNetwork.ReadNode(time.Second * 10)
if nd == nil {
fmt.Printf("waiting for node data\n")
time.Sleep(time.Second * 10)
continue
}
fmt.Printf("read %d len=%d data=%s\n", nd.NodeID, len(nd.Data), nd.Data[:16]) // only print 16 bytes
}
}


@@ -0,0 +1,32 @@
package main
import (
"fmt"
op "openp2p/core"
"time"
)
func main() {
op.Run()
echoServer()
forever := make(chan bool)
<-forever
}
func echoServer() {
// peerID := fmt.Sprintf("%d", core.NodeNameToID(peerNode))
for {
nd := op.GNetwork.ReadNode(time.Second * 10)
if nd == nil {
fmt.Printf("waiting for node data\n")
// time.Sleep(time.Second * 10)
continue
}
// fmt.Printf("read %s len=%d data=%s\n", nd.Node, len(nd.Data), nd.Data[:16])
nd.Data[0] = 'R' // echo server mark as replied
if err := op.GNetwork.WriteNode(nd.NodeID, nd.Data); err != nil {
fmt.Println("write error:", err)
break
}
}
}

go.mod

@@ -1,15 +1,19 @@
 module openp2p
 
-go 1.18
+go 1.20
 
 require (
 	github.com/emirpasic/gods v1.18.1
 	github.com/gorilla/websocket v1.4.2
 	github.com/openp2p-cn/go-reuseport v0.3.2
 	github.com/openp2p-cn/service v1.0.0
-	github.com/openp2p-cn/totp v0.0.0-20230102121327-8e02f6b392ed
+	github.com/openp2p-cn/totp v0.0.0-20230421034602-0f3320ffb25e
+	github.com/openp2p-cn/wireguard-go v0.0.20240223
 	github.com/quic-go/quic-go v0.34.0
-	golang.org/x/sys v0.5.0
+	github.com/vishvananda/netlink v1.1.0
+	github.com/xtaci/kcp-go/v5 v5.5.17
+	golang.org/x/sys v0.21.0
+	golang.zx2c4.com/wireguard/windows v0.5.3
 )
 
 require (
@@ -17,13 +21,22 @@ require (
 	github.com/golang/mock v1.6.0 // indirect
 	github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 // indirect
 	github.com/kardianos/service v1.2.2 // indirect
+	github.com/klauspost/cpuid/v2 v2.2.5 // indirect
+	github.com/klauspost/reedsolomon v1.11.8 // indirect
 	github.com/onsi/ginkgo/v2 v2.2.0 // indirect
+	github.com/pkg/errors v0.9.1 // indirect
 	github.com/quic-go/qtls-go1-19 v0.3.2 // indirect
 	github.com/quic-go/qtls-go1-20 v0.2.2 // indirect
-	golang.org/x/crypto v0.4.0 // indirect
+	github.com/templexxx/cpu v0.1.0 // indirect
+	github.com/templexxx/xorsimd v0.4.2 // indirect
+	github.com/tjfoc/gmsm v1.4.1 // indirect
+	github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df // indirect
+	golang.org/x/crypto v0.24.0 // indirect
 	golang.org/x/exp v0.0.0-20221205204356-47842c84f3db // indirect
-	golang.org/x/mod v0.6.0 // indirect
+	golang.org/x/mod v0.18.0 // indirect
-	golang.org/x/net v0.7.0 // indirect
+	golang.org/x/net v0.26.0 // indirect
-	golang.org/x/tools v0.2.0 // indirect
+	golang.org/x/tools v0.22.0 // indirect
+	golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 // indirect
+	golang.zx2c4.com/wireguard v0.0.0-20231211153847-12269c276173 // indirect
 	google.golang.org/protobuf v1.28.1 // indirect
 )

lib/openp2p.go (new file, 14 lines)

@@ -0,0 +1,14 @@
package main
import (
op "openp2p/core"
)
import "C"
func main() {
}
//export RunCmd
func RunCmd(cmd *C.char) {
op.RunCmd(C.GoString(cmd))
}