mirror of https://github.com/bolucat/Archive.git
synced 2025-12-24 13:28:37 +08:00
Update On Thu Oct 16 20:51:18 CEST 2025
This commit is contained in:

nodepass/.github/copilot-instructions.md (vendored, new file, 214 lines)

@@ -0,0 +1,214 @@
# NodePass AI Coding Agent Instructions

## Project Overview

NodePass is an enterprise-grade TCP/UDP network tunneling solution with a three-tier architecture supporting server, client, and master modes. The core is written in Go with a focus on performance, security, and minimal configuration.

## Architecture Essentials

### Three-Tier S/C/M Architecture

1. **Server Mode** (`internal/server.go`): Accepts tunnel connections, manages connection pools, forwards traffic bidirectionally
2. **Client Mode** (`internal/client.go`): Connects to servers, supports single-end forwarding or dual-end handshake modes
3. **Master Mode** (`internal/master.go`): RESTful API for dynamic instance management with persistent state in `nodepass.gob`

### Critical Design Patterns

- **Separation of Control/Data Channels**:
  - Control channel: Unencrypted TCP for signaling (`np://` scheme with fragments)
  - Data channel: Configurable TLS (modes 0/1/2) for actual traffic

- **Connection Pooling**: Pre-established connections via `github.com/NodePassProject/pool` library
  - Server controls `max` pool capacity, passes it to the client during handshake
  - Client manages `min` capacity for persistent connections

- **Bidirectional Data Flow**: Automatic mode detection in `Common.runMode`
  - Mode 0: Auto-detect based on target address bindability
  - Mode 1: Reverse/single-end (server receives OR client listens locally)
  - Mode 2: Forward/dual-end (server sends OR client connects remotely)
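
The mode-0 auto-detection described above can be sketched by probing whether the target address is locally bindable. This is an illustrative approximation, not the actual logic of `Common.runMode`:

```go
package main

import (
	"fmt"
	"net"
)

// detectRunMode sketches mode=0 auto-detection: if the target address is
// locally bindable, assume reverse/single-end (mode 1); otherwise assume
// forward/dual-end (mode 2). Hypothetical helper, not NodePass code.
func detectRunMode(target string) int {
	ln, err := net.Listen("tcp", target)
	if err != nil {
		return 2 // not bindable locally: forward/dual-end
	}
	ln.Close()
	return 1 // bindable locally: reverse/single-end
}

func main() {
	// Port 0 asks the OS for any free local port, so binding succeeds.
	fmt.Println(detectRunMode("127.0.0.1:0")) // 1
}
```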

### External Dependencies (NodePassProject Ecosystem)

All critical networking primitives are in separate libraries:

- `github.com/NodePassProject/cert`: TLS certificate generation and management
- `github.com/NodePassProject/conn`: Custom connection types (`StatConn`, `TimeoutReader`, `DataExchange`)
- `github.com/NodePassProject/logs`: Structured logging with levels (None/Debug/Info/Warn/Error/Event)
- `github.com/NodePassProject/pool`: Connection pool management for both server and client

**Never modify these libraries directly** - they're external dependencies. Use their exported APIs only.

## Configuration System

### URL-Based Configuration

All modes use URL-style configuration: `scheme://[password@]host:port/target?param=value`

**Server**: `server://bind_addr:port/target_addr:port?max=1024&tls=1&log=debug`
**Client**: `client://server_addr:port/local_addr:port?min=128&mode=0&rate=100`
**Master**: `master://api_addr:port/prefix?log=info&tls=2&crt=path&key=path`

### Query Parameters

- `log`: none|debug|info|warn|error|event (default: info)
- `tls`: 0=plain, 1=self-signed, 2=custom cert (server/master only)
- `min`/`max`: Connection pool capacity (client sets `min`, server sets `max`)
- `mode`: 0=auto, 1=reverse/single-end, 2=forward/dual-end
- `read`: Timeout duration (e.g., 1h, 30m, 15s)
- `rate`: Bandwidth limit in Mbps (0=unlimited)
- `slot`: Max concurrent connections (default: 65536)
- `proxy`: PROXY protocol v1 support (0=off, 1=on)

### Environment Variables for Tuning

See `internal/common.go` for all `NP_*` environment variables:

- `NP_TCP_DATA_BUF_SIZE`: TCP buffer size (default: 16384)
- `NP_UDP_DATA_BUF_SIZE`: UDP buffer size (default: 2048)
- `NP_HANDSHAKE_TIMEOUT`: Handshake timeout (default: 5s)
- `NP_POOL_GET_TIMEOUT`: Pool connection timeout (default: 5s)
- `NP_REPORT_INTERVAL`: Health check interval (default: 5s)
- `NP_RELOAD_INTERVAL`: TLS cert reload interval (default: 1h)

## Development Workflow

### Building

```bash
# Development build
go build -o nodepass ./cmd/nodepass

# Release build (mimics .goreleaser.yml)
go build -trimpath -ldflags="-s -w -X main.version=dev" -o nodepass ./cmd/nodepass
```

### Testing Manually

No automated test suite exists currently. Test via real-world scenarios:

```bash
# Terminal 1: Server with debug logging
./nodepass "server://0.0.0.0:10101/127.0.0.1:8080?log=debug&tls=1&max=256"

# Terminal 2: Client
./nodepass "client://localhost:10101/127.0.0.1:9090?log=debug&min=64"

# Terminal 3: Master mode for API testing
./nodepass "master://0.0.0.0:9090/api?log=debug&tls=0"
```

Test all TLS modes (0, 1, 2) and protocol types (TCP, UDP). Verify graceful shutdown with SIGTERM/SIGINT.

### Release Process

Uses GoReleaser on tag push (`v*.*.*`). See `.goreleaser.yml` for the build matrix (Linux, Windows, macOS, FreeBSD across multiple architectures).

## Code Patterns & Conventions

### Error Handling

Always wrap errors with context using `fmt.Errorf("function: operation failed: %w", err)`.

### Logging

Use the injected `logger` instance with appropriate levels:

```go
logger.Debug("Detailed info: %v", detail)  // Verbose debugging
logger.Info("Operation: %v", status)       // Normal operations
logger.Warn("Non-critical issue: %v", err) // Recoverable problems
logger.Error("Critical error: %v", err)    // Functionality affected
logger.Event("Traffic stats: %v", stats)   // Important events
```

### Goroutine Management

All long-running goroutines must:

1. Check `ctx.Err()` regularly for cancellation
2. Use proper cleanup with `defer` statements
3. Handle panics in critical sections
4. Release resources (slots, buffers, connections) on exit

### Buffer Pooling

Always use `Common.getTCPBuffer()` / `Common.putTCPBuffer()` or UDP equivalents to minimize allocations:

```go
buf := c.getTCPBuffer()
defer c.putTCPBuffer(buf)
// ... use buf
```

### Connection Slot Management

Before creating connections:

```go
if !c.tryAcquireSlot(isUDP) {
    return fmt.Errorf("slot limit reached")
}
defer c.releaseSlot(isUDP)
```

### Comments Style

Maintain bilingual (Chinese/English) comments for public APIs and exported functions:

```go
// NewServer 创建新的服务端实例
// NewServer creates a new server instance
func NewServer(parsedURL *url.URL, ...) (*Server, error) { ... }
```

## Master Mode Specifics

### API Structure

RESTful endpoints at `/{prefix}/*` (default `/api/*`):

- Instance CRUD: POST/GET/PATCH/PUT/DELETE on `/instances` and `/instances/{id}`
- Real-time events: SSE stream at `/events` (types: initial, create, update, delete, shutdown, log)
- OpenAPI docs: `/openapi.json` and `/docs` (Swagger UI)

### State Persistence

All instances are stored in `nodepass.gob` using Go's `encoding/gob`:

- Auto-saved on instance changes via `saveMasterState()`
- Restored on startup via `restoreMasterState()`
- Mutex-protected writes with `stateMu`

### API Authentication

API Key in `X-API-Key` header. The special instance ID `********` is used for key regeneration via the PATCH action `restart`.

## Common Pitfalls

1. **Don't modify NodePassProject libraries**: These are external dependencies, not internal packages
2. **Always decode before using tunnel URLs**: Use `Common.decode()` for base64+XOR encoded data
3. **TLS mode is server-controlled**: Clients receive the TLS mode during the handshake; don't override it
4. **Pool capacity coordination**: Server sets `max`, client sets `min` - they must align correctly
5. **UDP session cleanup**: Sessions in `targetUDPSession` require explicit cleanup with timeouts
6. **Certificate hot-reload**: Only applies to `tls=2` mode, with periodic checks every `ReloadInterval`
7. **Graceful shutdown**: Use context cancellation propagation; don't abruptly close connections

## Key Files Reference

- `cmd/nodepass/main.go`: Entry point, version variable injection
- `cmd/nodepass/core.go`: Mode dispatch, TLS setup, CLI help formatting
- `internal/common.go`: Shared primitives (buffer pools, slot management, encoding, config init)
- `internal/server.go`: Server lifecycle, tunnel handshake, forward/reverse modes
- `internal/client.go`: Client lifecycle, single-end/dual-end modes, tunnel connection
- `internal/master.go`: HTTP API, SSE events, instance subprocess management, state persistence
- `docs/en/how-it-works.md`: Detailed architecture documentation
- `docs/en/configuration.md`: Complete parameter reference
- `docs/en/api.md`: Master mode API specification

## Documentation Requirements

When adding features:

1. Update the relevant `docs/en/*.md` and `docs/zh/*.md` files
2. Add examples to `docs/en/examples.md`
3. Document new query parameters in `docs/en/configuration.md`
4. Update API endpoints in `docs/en/api.md` if touching master mode
5. Keep the README.md feature list current

## Additional Notes

- Project uses Go 1.25+ features; maintain compatibility
- Single binary with no external runtime dependencies (except TLS cert files for mode 2)
- Focus on zero-configuration deployment - defaults should work for most use cases
- Performance-critical paths: buffer allocation, connection pooling, data transfer loops
- Security considerations: TLS mode selection, API key protection, input validation on master API

@@ -47,7 +47,7 @@ English | [简体中文](README_zh.md)

- **📈 Performance**
  - Intelligent scheduling, auto-tuning, ultra-low resource usage.
  - Stable under high concurrency and heavy load.
  - Health checks, auto-reconnect, self-healing.
  - Load balancing, health checks, self-healing and more.

- **💡 Visualization**
  - Rich cross-platform visual frontends.

@@ -100,7 +100,7 @@ The [NodePassProject](https://github.com/NodePassProject) organization develops

- **[npsh](https://github.com/NodePassProject/npsh)**: A collection of one-click scripts that provide simple deployment for API or Dashboard with flexible configuration and management.

- **[NodePass-ApplePlatforms](https://github.com/NodePassProject/NodePass-ApplePlatforms)**: An iOS/macOS application that offers a native experience for Apple users.
- **[NodePass-ApplePlatforms](https://github.com/NodePassProject/NodePass-ApplePlatforms)**: A service-oriented iOS/macOS application that offers a native experience for Apple users.

- **[nodepass-core](https://github.com/NodePassProject/nodepass-core)**: Development branch, featuring previews of new functionalities and performance optimizations, suitable for advanced users and developers.

@@ -112,12 +112,16 @@ The [NodePassProject](https://github.com/NodePassProject) organization develops

## 📄 License

Project `NodePass` is licensed under the [BSD 3-Clause License](LICENSE).
Project **NodePass** is licensed under the [BSD 3-Clause License](LICENSE).

## ⚖️ Disclaimer

This project is provided "as is" without any warranties. Users assume all risks and must comply with local laws for legal use only. Developers are not liable for any direct, indirect, incidental, or consequential damages. Secondary development requires commitment to legal use and self-responsibility for legal compliance. Developers reserve the right to modify software features and this disclaimer at any time. Final interpretation rights belong to the developers.

## 🔗 NFT Support

Support **NodePass** in a unique way by checking out our NFT collection on [OpenSea](https://opensea.io/collection/nodepass).

## 🤝 Sponsors

<table>

@@ -47,7 +47,7 @@

- **📈 High-Performance Optimization**
  - Intelligent traffic scheduling and automatic connection tuning with minimal resource usage.
  - Excellent system stability under high concurrency and heavy load.
  - Health checks, auto-reconnect, and self-healing ensure continuous high availability.
  - Load balancing, health checks, and self-healing ensure continuous high availability.

- **💡 Visual Management**
  - Companion cross-platform, diverse management frontends with visual configuration capabilities.

@@ -100,7 +100,7 @@ nodepass "master://:10101/api?log=debug&tls=1"

- **[npsh](https://github.com/NodePassProject/npsh)**: An easy-to-use collection of NodePass one-click scripts, covering installation and deployment of the API master and Dash panel, flexible configuration, and management assistance.

- **[NodePass-ApplePlatforms](https://github.com/NodePassProject/NodePass-ApplePlatforms)**: An iOS/macOS application that provides Apple users with a native experience.
- **[NodePass-ApplePlatforms](https://github.com/NodePassProject/NodePass-ApplePlatforms)**: A service-oriented iOS/macOS application that provides Apple users with a native experience.

- **[nodepass-core](https://github.com/NodePassProject/nodepass-core)**: A development branch containing previews of new features and performance-optimization testing, suitable for advanced users and developers.

@@ -112,11 +112,15 @@ nodepass "master://:10101/api?log=debug&tls=1"

## 📄 License

Project `NodePass` is licensed under the [BSD 3-Clause License](LICENSE).
Project **NodePass** is licensed under the [BSD 3-Clause License](LICENSE).

## ⚖️ Disclaimer

This project is provided "as is"; the developers make no express or implied warranties. Users assume all risks, must comply with local laws and regulations, and may use it for lawful purposes only. The developers are not liable for any direct, indirect, incidental, or consequential damages. Secondary development requires a commitment to lawful use and self-responsibility for legal compliance. The developers reserve the right to modify software features and this disclaimer at any time. Final interpretation rights belong to the developers.

## 🔗 NFT Support

Support **NodePass** in a unique way by checking out our NFT collection on [OpenSea](https://opensea.io/collection/nodepass).

## 🤝 Sponsors

@@ -67,6 +67,18 @@ API Key authentication is enabled by default, automatically generated and saved

    "url": "...",
    "config": "server://0.0.0.0:8080/localhost:3000?log=info&tls=1&max=1024&mode=0&read=1h&rate=0&slot=65536&proxy=0",
    "restart": true,
    "meta": {
      "peer": {
        "sid": "550e8400-e29b-41d4-a716-446655440000",
        "type": "1",
        "alias": "remote-service"
      },
      "tags": {
        "environment": "production",
        "region": "us-west",
        "owner": "team-alpha"
      }
    },
    "mode": 0,
    "ping": 0,
    "pool": 0,

@@ -85,6 +97,15 @@ API Key authentication is enabled by default, automatically generated and saved

- `tcprx`/`tcptx`/`udprx`/`udptx`: Cumulative traffic statistics
- `config`: Instance configuration URL with complete startup configuration
- `restart`: Auto-restart policy
- `meta`: Metadata for instance organization and peer identification
  - `peer`: Peer connection information (remote endpoint details)
    - `sid`: Service ID of the remote service, using UUID v4 format (e.g., `550e8400-e29b-41d4-a716-446655440000`)
    - `type`: Remote service type, using standard enumeration values
      - `"0"`: Single-end Forwarding mode
      - `"1"`: NAT Traversal mode
      - `"2"`: Tunnel Forwarding mode
    - `alias`: Service alias of the remote endpoint (no format restriction)
  - `tags`: Custom key-value tags for flexible categorization and filtering

### Instance URL Format

@@ -131,9 +152,25 @@ async function regenerateApiKey() {
  const result = await response.json();
  return result.url; // New API Key
}

// Get Master ID
async function getMasterID() {
  const response = await fetch(`${API_URL}/instances/${apiKeyID}`, {
    method: 'GET',
    headers: {
      'X-API-Key': 'current-api-key'
    }
  });

  const result = await response.json();
  return result.data.config; // Master ID (16-character hex)
}
```

**Note**: API Key ID is fixed as `********` (eight asterisks). In the internal implementation, this is a special instance ID used to store and manage the API Key.
**Note**:
- API Key ID is fixed as `********` (eight asterisks). In the internal implementation, this is a special instance ID used to store and manage the API Key.
- The API Key instance's `config` field stores the **Master ID**, a 16-character hexadecimal string (e.g., `1a2b3c4d5e6f7890`) used to uniquely identify the master service.
- The Master ID is automatically generated on first startup and persisted, remaining constant throughout the master service's lifecycle.

### Using SSE for Real-time Event Monitoring

@@ -531,9 +568,91 @@ To properly manage lifecycles:
  const data = await response.json();
  return data.success;
}

// Update instance metadata
async function updateInstanceMetadata(instanceId, metadata) {
  const response = await fetch(`${API_URL}/instances/${instanceId}`, {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey // If API Key is enabled
    },
    body: JSON.stringify({ meta: metadata })
  });

  const data = await response.json();
  return data.success;
}
```

5. **Auto-restart Policy Management**: Configure automatic startup behavior
5. **Metadata Management**: Organize and categorize instances with metadata

```javascript
// Set peer connection information
async function setPeerInfo(instanceId, peerInfo) {
  const response = await fetch(`${API_URL}/instances/${instanceId}`, {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey
    },
    body: JSON.stringify({
      meta: {
        peer: {
          sid: peerInfo.serviceId, // UUID v4 format
          type: peerInfo.type,     // "0" | "1" | "2"
          alias: peerInfo.alias
        },
        tags: {} // Preserve existing tags
      }
    })
  });

  const data = await response.json();
  return data.success;
}

// Add or update instance tags
async function updateInstanceTags(instanceId, tags) {
  const response = await fetch(`${API_URL}/instances/${instanceId}`, {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey
    },
    body: JSON.stringify({
      meta: {
        peer: {}, // Preserve existing peer info
        tags: tags
      }
    })
  });

  const data = await response.json();
  return data.success;
}

// Complete metadata update
async function updateCompleteMetadata(instanceId, peerInfo, tags) {
  const response = await fetch(`${API_URL}/instances/${instanceId}`, {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey
    },
    body: JSON.stringify({
      meta: {
        peer: peerInfo,
        tags: tags
      }
    })
  });

  const data = await response.json();
  return data.success;
}
```

6. **Auto-restart Policy Management**: Configure automatic startup behavior

```javascript
async function setAutoStartPolicy(instanceId, enableAutoStart) {
  const response = await fetch(`${API_URL}/instances/${instanceId}`, {

@@ -585,6 +704,201 @@ To properly manage lifecycles:
}
```

#### Metadata Management Usage Examples

Here are comprehensive examples showing how to use metadata for instance organization and management:

```javascript
// Example 1: Establish a peer-to-peer tunnel with metadata
async function establishPeerTunnel(localConfig, remoteConfig) {
  // Create local server instance
  const localInstance = await createNodePassInstance({
    type: 'server',
    port: localConfig.port,
    target: localConfig.target
  });

  // Create remote client instance
  const remoteInstance = await createNodePassInstance({
    type: 'client',
    serverHost: localConfig.serverHost,
    port: remoteConfig.port,
    target: remoteConfig.target
  });

  if (localInstance.success && remoteInstance.success) {
    // Set peer information on the local instance
    await updateCompleteMetadata(
      localInstance.data.id,
      {
        sid: remoteConfig.serviceId, // UUID format
        type: "2",                   // Tunnel forwarding
        alias: remoteConfig.serviceName
      },
      {
        tunnel_type: 'peer-to-peer',
        protocol: 'tcp',
        encryption: 'tls'
      }
    );

    // Set peer information on the remote instance
    await updateCompleteMetadata(
      remoteInstance.data.id,
      {
        sid: localConfig.serviceId, // UUID format
        type: "2",                  // Tunnel forwarding
        alias: localConfig.serviceName
      },
      {
        tunnel_type: 'peer-to-peer',
        protocol: 'tcp',
        encryption: 'tls'
      }
    );

    console.log('Peer tunnel established with metadata');
  }
}

// Example 2: Organize instances by environment and region
async function organizeInstancesByEnvironment(instances) {
  for (const instance of instances) {
    const tags = {
      environment: instance.isProduction ? 'production' : 'development',
      region: instance.deploymentRegion,
      team: instance.owningTeam,
      cost_center: instance.costCenter,
      criticality: instance.isCritical ? 'high' : 'normal'
    };

    await updateInstanceTags(instance.id, tags);
    console.log(`Tagged instance ${instance.id} with environment metadata`);
  }
}

// Example 3: Query instances by metadata tags
async function findInstancesByTags(requiredTags) {
  const response = await fetch(`${API_URL}/instances`, {
    headers: { 'X-API-Key': apiKey }
  });
  const data = await response.json();

  if (data.success) {
    return data.data.filter(instance => {
      if (!instance.meta || !instance.meta.tags) return false;

      // Check if all required tags match
      return Object.entries(requiredTags).every(([key, value]) =>
        instance.meta.tags[key] === value
      );
    });
  }
  return [];
}

// Example 4: Update metadata based on operational status
async function updateMetadataOnStatusChange(instanceId, newStatus) {
  const instance = await fetch(`${API_URL}/instances/${instanceId}`, {
    headers: { 'X-API-Key': apiKey }
  });
  const data = await instance.json();

  if (data.success && data.data.meta) {
    const updatedTags = {
      ...data.data.meta.tags,
      last_status_change: new Date().toISOString(),
      current_status: newStatus,
      status_change_count: (parseInt(data.data.meta.tags.status_change_count || '0') + 1).toString()
    };

    await updateInstanceTags(instanceId, updatedTags);
  }
}
```

#### Metadata Best Practices

1. **Peer Information**: Use the `peer` object to track connections between instances
   - `sid`: Service unique identifier (required, UUID v4 format, e.g., `550e8400-e29b-41d4-a716-446655440000`)
     - Use the standard UUID v4 format to ensure global uniqueness
     - Can be generated with JavaScript's `crypto.randomUUID()` or a third-party library
   - `type`: Service type identifier (required, string enumeration value)
     - `"0"`: Single-end Forwarding - for simple client forwarding scenarios
     - `"1"`: NAT Traversal - for scenarios requiring NAT traversal
     - `"2"`: Tunnel Forwarding - for establishing encrypted tunnels
   - `alias`: Friendly name of the remote service (no format restriction, max 256 chars)

2. **Frontend Integration Standards**: To ensure consistency, frontends should follow these standards

   **Service ID (sid) Generation Standards:**
   ```javascript
   // Use the browser's native API to generate a UUID v4
   const serviceId = crypto.randomUUID();
   // Example output: "550e8400-e29b-41d4-a716-446655440000"

   // Or use a third-party library (e.g., uuid)
   import { v4 as uuidv4 } from 'uuid';
   const serviceId = uuidv4();
   ```

   **Service Type (type) Usage Standards:**
   ```javascript
   // Define the service type enumeration
   const ServiceType = {
     SINGLE_END: "0",    // Single-end forwarding: client unidirectional forwarding, no server callback needed
     NAT_TRAVERSAL: "1", // NAT traversal: traverse NAT for internal network access
     TUNNEL: "2"         // Tunnel forwarding: establish end-to-end encrypted tunnel
   };

   // Usage example
   const peerInfo = {
     sid: crypto.randomUUID(),
     type: ServiceType.NAT_TRAVERSAL,
     alias: "Web Server"
   };
   ```

   **Type Selection Guide:**
   - **Single-end Forwarding ("0")**:
     - Scenario: Client only needs to forward traffic to a remote server
     - Feature: One-way connection, no server callback required
     - Example: Local app connecting to a cloud database

   - **NAT Traversal ("1")**:
     - Scenario: Need to access internal network services from an external network
     - Feature: Traverses NAT and firewall restrictions
     - Example: Remote access to a home NAS or internal web services

   - **Tunnel Forwarding ("2")**:
     - Scenario: Need to establish a secure end-to-end connection
     - Feature: Encrypted transmission, bidirectional communication
     - Example: Secure interconnection between branch offices and headquarters

3. **Tags Organization**: Design a consistent tagging strategy
   - Use lowercase keys with underscores (e.g., `cost_center`, `deployment_region`)
   - Limit tag values to meaningful, searchable strings
   - Common tag categories:
     - Environment: `production`, `staging`, `development`
     - Location: `us-west`, `eu-central`, `ap-southeast`
     - Ownership: `team-alpha`, `ops-team`, `platform-team`
     - Function: `database-tunnel`, `web-proxy`, `api-gateway`
     - Criticality: `high`, `medium`, `low`

4. **Field Length Limits**: All metadata fields have length requirements
   - `peer.sid`: Fixed 36 characters (UUID v4 format, e.g., `550e8400-e29b-41d4-a716-446655440000`)
   - `peer.type`: Fixed 1 character (enumeration value: `"0"` | `"1"` | `"2"`)
   - `peer.alias`: Max 256 chars (no specific format required)
   - Tag keys and values: Max 256 chars each

5. **Tag Uniqueness**: Ensure tag keys are unique within an instance
   - Duplicate keys will result in a 400 Bad Request error

6. **Filtering and Search**: Use metadata for instance filtering
   - Client-side filtering by tags for dashboard views
   - Query instances by peer information for relationship mapping
   - Group instances by tags for batch operations
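
The field constraints from item 4 above can be sketched as a server-side validation helper; `validatePeer` is hypothetical, and the master's real checks may differ in details:

```go
package main

import (
	"fmt"
	"regexp"
)

// uuidV4 matches the documented 36-char UUID v4 shape (version nibble 4,
// variant nibble 8/9/a/b).
var uuidV4 = regexp.MustCompile(`^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$`)

// validatePeer checks the documented metadata constraints: sid must be a
// UUID v4, type one of "0"/"1"/"2", alias at most 256 chars. Sketch only.
func validatePeer(sid, typ, alias string) error {
	if !uuidV4.MatchString(sid) {
		return fmt.Errorf("sid must be UUID v4: %q", sid)
	}
	if typ != "0" && typ != "1" && typ != "2" {
		return fmt.Errorf(`type must be "0", "1", or "2": %q`, typ)
	}
	if len(alias) > 256 {
		return fmt.Errorf("alias exceeds 256 chars")
	}
	return nil
}

func main() {
	fmt.Println(validatePeer("550e8400-e29b-41d4-a716-446655440000", "1", "remote-service")) // <nil>
	fmt.Println(validatePeer("not-a-uuid", "1", "x") != nil)                                 // true
}
```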

#### Complete Auto-restart Policy Usage Example

Here's a comprehensive example showing how to implement auto-restart policy management in real scenarios:

@@ -798,6 +1112,18 @@ The instance object in API responses contains the following fields:

  "url": "server://...", // Instance configuration URL
  "config": "server://0.0.0.0:8080/localhost:3000?log=info&tls=1&max=1024&mode=0&read=1h&rate=0&slot=65536&proxy=0", // Complete configuration URL
  "restart": true, // Auto-restart policy
  "meta": { // Metadata for organization and peer tracking
    "peer": {
      "sid": "550e8400-e29b-41d4-a716-446655440000", // Remote service ID (UUID format)
      "type": "1", // Remote service type (0=Single-end, 1=NAT Traversal, 2=Tunnel)
      "alias": "remote-service" // Remote service friendly name
    },
    "tags": { // Custom key-value tags
      "environment": "production",
      "region": "us-west",
      "team": "platform"
    }
  },
  "mode": 0, // Instance mode
  "tcprx": 1024, // TCP received bytes
  "tcptx": 2048, // TCP transmitted bytes

@@ -811,6 +1137,17 @@ The instance object in API responses contains the following fields:

- `config` field contains the instance's complete configuration URL, auto-generated by the system
- `mode` field indicates the current runtime mode of the instance
- `restart` field controls the auto-restart behavior of the instance
- `meta` field contains structured metadata for instance organization
  - `peer` object tracks remote endpoint information for peer-to-peer connections
    - `sid`: Service unique identifier, must use UUID v4 format (36 chars, e.g., `550e8400-e29b-41d4-a716-446655440000`)
    - `type`: Service type identifier, string enumeration value (`"0"` | `"1"` | `"2"`)
      - `"0"`: Single-end Forwarding - Client unidirectional forwarding
      - `"1"`: NAT Traversal - Traverse NAT for internal network access
      - `"2"`: Tunnel Forwarding - Establish end-to-end encrypted tunnel
    - `alias`: Custom string, max 256 chars, no format restriction
  - `tags` map allows flexible categorization with custom key-value pairs
    - Tag keys and values have a 256-character maximum length
    - Tag keys must be unique within an instance

### Instance Configuration Field

@@ -1018,9 +1355,18 @@ const instance = await fetch(`${API_URL}/instances/abc123`, {
```

#### PATCH /instances/{id}
- **Description**: Update instance state, alias, or perform control operations
- **Description**: Update instance state, alias, metadata, or perform control operations
- **Authentication**: Requires API Key
- **Request body**: `{ "alias": "new alias", "action": "start|stop|restart|reset", "restart": true|false }`
- **Request body**: `{ "alias": "new alias", "action": "start|stop|restart|reset", "restart": true|false, "meta": {...} }`
- **Metadata Structure**:
  - `peer`: Object with fields (all optional):
    - `sid`: Service ID (UUID v4 format, 36 chars, e.g., `550e8400-e29b-41d4-a716-446655440000`)
    - `type`: Service type (enumeration value: `"0"` | `"1"` | `"2"`)
      - `"0"`: Single-end Forwarding
      - `"1"`: NAT Traversal
      - `"2"`: Tunnel Forwarding
    - `alias`: Service alias (max 256 chars, no format restriction)
  - `tags`: Object with custom key-value pairs (keys and values max 256 chars, keys must be unique)
- **Example**:

```javascript
// Update alias and restart policy
@@ -1047,6 +1393,48 @@ await fetch(`${API_URL}/instances/abc123`, {
|
||||
action: "restart"
|
||||
})
|
||||
});
|
||||
|
||||
// Update metadata with peer information and tags
|
||||
await fetch(`${API_URL}/instances/abc123`, {
|
||||
method: 'PATCH',
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'X-API-Key': apiKey
|
||||
},
|
||||
body: JSON.stringify({
|
||||
meta: {
|
||||
peer: {
|
||||
sid: "550e8400-e29b-41d4-a716-446655440000", // UUID format
|
||||
type: "1", // NAT Traversal
|
||||
alias: "remote-api-server"
|
||||
},
|
||||
tags: {
|
||||
environment: "production",
|
||||
region: "us-east",
|
||||
team: "backend",
|
||||
criticality: "high"
|
||||
}
|
||||
}
|
||||
})
|
||||
});
|
||||
|
||||
// Update only tags (peer info remains unchanged)
|
||||
await fetch(`${API_URL}/instances/abc123`, {
|
||||
method: 'PATCH',
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'X-API-Key': apiKey
|
||||
},
|
||||
body: JSON.stringify({
|
||||
meta: {
|
||||
peer: {}, // Empty object preserves existing peer info
|
||||
tags: {
|
||||
environment: "staging",
|
||||
updated_at: new Date().toISOString()
|
||||
}
|
||||
}
|
||||
})
|
||||
});
|
||||
```
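
Before sending a `meta` payload, a client can pre-check the documented constraints (UUID v4 `sid`, the `type` enumeration, 256-character limits). The following is a minimal validation sketch under those assumptions; `validateMeta` is a hypothetical helper, and the server still performs its own validation:

```javascript
// Client-side validation sketch for the documented meta constraints
// (hypothetical helper; not part of the NodePass API itself).
const UUID_V4 = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

function validateMeta(meta) {
  const errors = [];
  const peer = meta.peer || {};
  if (peer.sid !== undefined && !UUID_V4.test(peer.sid)) {
    errors.push('peer.sid must be a UUID v4 (36 chars)');
  }
  if (peer.type !== undefined && !["0", "1", "2"].includes(peer.type)) {
    errors.push('peer.type must be "0", "1", or "2"');
  }
  if (peer.alias !== undefined && peer.alias.length > 256) {
    errors.push('peer.alias must be at most 256 chars');
  }
  for (const [key, value] of Object.entries(meta.tags || {})) {
    if (key.length > 256 || String(value).length > 256) {
      errors.push(`tag "${key}" exceeds the 256-char limit`);
    }
  }
  return errors; // an empty array means the payload passes these checks
}
```

Note that object keys are unique by construction in JavaScript, so the "keys must be unique" rule cannot be violated when the payload is built as a plain object.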

#### PUT /instances/{id}

@@ -1101,6 +1489,7 @@ await fetch(`${API_URL}/instances/abc123`, {
- **Authentication**: Requires API Key
- **Request body**: `{ "alias": "new alias" }`
- **Response**: Complete master information (same as GET /info)
- **Note**: Master alias is stored in the `alias` field of the API Key instance (ID `********`)
- **Example**:
```javascript
// Update master alias
@@ -1118,6 +1507,18 @@ console.log('Updated alias:', data.alias);
// Response contains full system info with updated alias
```

**Retrieving Master ID**: The Master ID is stored in the `config` field of the API Key instance and can be retrieved as follows:
```javascript
// Get Master ID
async function getMasterID() {
  const response = await fetch(`${API_URL}/instances/********`, {
    headers: { 'X-API-Key': apiKey }
  });
  const data = await response.json();
  return data.data.config; // Returns 16-character hex Master ID
}
```

#### GET /tcping
- **Description**: TCP connection test, checks connectivity and latency to the target address
- **Authentication**: Requires API Key

@@ -264,6 +264,63 @@ nodepass "server://0.0.0.0:10101/0.0.0.0:8080?log=info&tls=1&proxy=1&rate=100"
- The header format follows the HAProxy PROXY protocol v1 specification
- If the target service doesn't support PROXY protocol, connections may fail or behave unexpectedly

## Target Address Groups and Load Balancing

NodePass supports configuring multiple target addresses to achieve high availability and load balancing. Target address groups apply only to the egress side (the final destination of traffic) and should not be used on the ingress side.

### Target Address Group Configuration

Target address groups are configured by separating multiple addresses with commas. NodePass automatically performs round-robin rotation and failover across these addresses:

```bash
# Server with multiple backend targets (forward mode, mode=2)
nodepass "server://0.0.0.0:10101/backend1.example.com:8080,backend2.example.com:8080,backend3.example.com:8080?mode=2&tls=1"

# Client with multiple local services (single-end forwarding mode, mode=1)
nodepass "client://127.0.0.1:1080/app1.local:8080,app2.local:8080?mode=1"
```

### Rotation Strategy

NodePass employs a Round-Robin algorithm that combines failover and load balancing:

- **Load Balancing**: After each successful connection, automatically switches to the next target address for even traffic distribution
- **Failover**: When a connection to an address fails, the next address is tried immediately to keep the service available
- **Automatic Recovery**: Failed addresses are retried in subsequent rotation cycles and automatically resume receiving traffic after recovery
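
The rotation described above can be sketched in a few lines. This is an illustrative JavaScript model, not NodePass's actual Go implementation; `createRotator` and the `dial` callback are hypothetical names:

```javascript
// Illustrative sketch of round-robin rotation with failover over a
// target address group (hypothetical helper, not NodePass internals).
function createRotator(targets) {
  let index = 0; // NodePass uses an atomic index; JS is single-threaded

  return {
    // Try each target at most once, starting from the current index.
    async connect(dial) {
      for (let attempt = 0; attempt < targets.length; attempt++) {
        const target = targets[index % targets.length];
        index++; // advance even on success -> even traffic distribution
        try {
          return await dial(target);
        } catch (err) {
          // failover: fall through and try the next target
        }
      }
      throw new Error('all targets failed');
    }
  };
}
```

Because the index keeps advancing past failed entries, a recovered target is naturally retried on a later rotation cycle, matching the automatic-recovery behavior described above.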

### Use Cases

Target address groups are suitable for the following scenarios:

- **High Availability Deployment**: Multiple backend servers for automatic failover
- **Load Balancing**: Even traffic distribution across multiple backend instances
- **Canary Releases**: Gradually shifting traffic to new service versions
- **Geographic Distribution**: Selecting optimal paths based on network topology

### Important Notes

- **Egress Only**: Target address groups can only be configured at the final traffic destination
  - ✓ Server forward mode (mode=2): `server://0.0.0.0:10101/target1:80,target2:80`
  - ✓ Client single-end forwarding mode (mode=1): `client://127.0.0.1:1080/target1:80,target2:80`
  - ✗ Tunnel addresses are not supported: do not use multi-address configuration for tunnel addresses

- **Address Format**: All addresses must use the same port or explicitly specify the port for each address
- **Protocol Consistency**: All addresses in the group must support the same protocol (TCP/UDP)
- **Thread Safety**: The rotation index uses atomic operations, supporting high-concurrency scenarios

Example configurations:

```bash
# Correct example: Server with 3 backend web servers
nodepass "server://0.0.0.0:10101/web1.internal:8080,web2.internal:8080,web3.internal:8080?mode=2&log=info"

# Correct example: Client with 2 local database instances
nodepass "client://127.0.0.1:3306/db-primary.local:3306,db-secondary.local:3306?mode=1&log=warn"

# Incorrect example: Do not use multiple addresses for tunnel addresses (causes parsing errors)
# nodepass "server://host1:10101,host2:10101/target:8080" # ✗ Wrong usage
```

## URL Query Parameter Scope and Applicability

NodePass allows flexible configuration via URL query parameters. The following table shows which parameters are applicable in server, client, and master modes:

@@ -301,10 +358,10 @@ NodePass behavior can be fine-tuned using environment variables. Below is the co
| `NP_SEMAPHORE_LIMIT` | Signal channel buffer size | 65536 | `export NP_SEMAPHORE_LIMIT=2048` |
| `NP_TCP_DATA_BUF_SIZE` | Buffer size for TCP data transfer | 16384 | `export NP_TCP_DATA_BUF_SIZE=65536` |
| `NP_UDP_DATA_BUF_SIZE` | Buffer size for UDP packets | 2048 | `export NP_UDP_DATA_BUF_SIZE=16384` |
| `NP_HANDSHAKE_TIMEOUT` | Timeout for handshake operations | 5s | `export NP_HANDSHAKE_TIMEOUT=30s` |
| `NP_UDP_READ_TIMEOUT` | Timeout for UDP read operations | 30s | `export NP_UDP_READ_TIMEOUT=60s` |
| `NP_TCP_DIAL_TIMEOUT` | Timeout for establishing TCP connections | 5s | `export NP_TCP_DIAL_TIMEOUT=60s` |
| `NP_UDP_DIAL_TIMEOUT` | Timeout for establishing UDP connections | 5s | `export NP_UDP_DIAL_TIMEOUT=30s` |
| `NP_POOL_GET_TIMEOUT` | Timeout for getting connections from pool | 5s | `export NP_POOL_GET_TIMEOUT=60s` |
| `NP_MIN_POOL_INTERVAL` | Minimum interval between connection creations | 100ms | `export NP_MIN_POOL_INTERVAL=200ms` |
| `NP_MAX_POOL_INTERVAL` | Maximum interval between connection creations | 1s | `export NP_MAX_POOL_INTERVAL=3s` |

@@ -362,7 +419,7 @@ For applications relying heavily on UDP traffic:
- For applications allowing intermittent transmission, increase this value to avoid false timeout detection

- `NP_UDP_DIAL_TIMEOUT`: Timeout for establishing UDP connections
  - Default (5s) provides a good balance for most applications
  - Increase for high-latency networks or applications with slow response times
  - Decrease for low-latency applications requiring quick failover

@@ -376,14 +433,14 @@ For optimizing TCP connections:
- Consider increasing to 65536 or higher for bulk data transfers and streaming

- `NP_TCP_DIAL_TIMEOUT`: Timeout for establishing TCP connections
  - Default (5s) is suitable for most network conditions
  - Increase for unstable network conditions
  - Decrease for applications that need quick connection success/failure determination

### Pool Management Settings

- `NP_POOL_GET_TIMEOUT`: Maximum time to wait when getting connections from pool
  - Default (5s) provides sufficient time for connection establishment
  - Increase for high-latency environments or when using large pool sizes
  - Decrease for applications requiring fast failure detection
  - In client single-end forwarding mode, connection pools are not used and this parameter is ignored
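
As a sketch, a low-latency profile might combine several of the variables documented above; the values below are illustrative choices, not project recommendations:

```shell
# Illustrative low-latency tuning profile (values are assumptions,
# adjust for your own network conditions).
export NP_TCP_DIAL_TIMEOUT=2s     # fail over faster than the 5s default
export NP_UDP_DIAL_TIMEOUT=2s
export NP_POOL_GET_TIMEOUT=3s     # detect pool exhaustion quickly
export NP_MIN_POOL_INTERVAL=50ms  # refill the pool more aggressively
echo "dial timeout: $NP_TCP_DIAL_TIMEOUT"
```

NodePass would then be started in the same shell so the exported values take effect.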

@@ -235,9 +235,84 @@ This setup:
- Enables developers to access environments without direct network exposure
- Maps remote services to different local ports for easy identification

## High Availability and Load Balancing

### Example 14: Multi-Backend Server Load Balancing

Use target address groups for even traffic distribution and automatic failover:

```bash
# Server side: Configure 3 backend web servers
nodepass "server://0.0.0.0:10101/web1.internal:8080,web2.internal:8080,web3.internal:8080?mode=2&tls=1&log=info"

# Client side: Connect to server
nodepass "client://server.example.com:10101/127.0.0.1:8080?log=info"
```

This configuration:
- Automatically distributes traffic across 3 backend servers using round-robin load balancing
- Automatically switches to other available servers when one backend fails
- Automatically resumes sending traffic to recovered servers
- Uses TLS encryption to secure the tunnel

### Example 15: Database Primary-Replica Failover

Configure primary and replica database instances for high availability access:

```bash
# Client side: Configure primary and replica database addresses (single-end forwarding mode)
nodepass "client://127.0.0.1:3306/db-primary.local:3306,db-secondary.local:3306?mode=1&log=warn"
```

This setup:
- Prioritizes connections to the primary database and automatically switches to the replica on primary failure
- Single-end forwarding mode provides a high-performance local proxy
- Applications require no modification for transparent failover
- Logs only warnings and errors to reduce output

### Example 16: API Gateway Backend Pool

Configure multiple backend service instances for an API gateway:

```bash
# Server side: Configure 4 API service instances
nodepass "server://0.0.0.0:10101/api1.backend:8080,api2.backend:8080,api3.backend:8080,api4.backend:8080?mode=2&tls=1&rate=200&slot=5000"

# Client side: Connect from API gateway
nodepass "client://apigateway.example.com:10101/127.0.0.1:8080?rate=100&slot=2000"
```

This configuration:
- 4 API service instances form a backend pool with round-robin request distribution
- The server limits bandwidth to 200 Mbps with a maximum of 5000 concurrent connections
- The client limits bandwidth to 100 Mbps with a maximum of 2000 concurrent connections
- A single instance failure doesn't affect overall service availability

### Example 17: Geo-Distributed Services

Configure multi-region service nodes to optimize network latency:

```bash
# Server side: Configure multi-region nodes
nodepass "server://0.0.0.0:10101/us-west.service:8080,us-east.service:8080,eu-central.service:8080?mode=2&log=debug"
```

This setup:
- Configures 3 service nodes in different regions
- The round-robin algorithm automatically distributes traffic across regions
- Debug logging helps analyze traffic distribution and failure scenarios
- Suitable for globally distributed application scenarios

**Target Address Group Best Practices:**
- **Address Count**: Recommend configuring 2-5 addresses; too many increases failure detection time
- **Health Checks**: Ensure backend services have their own health check mechanisms
- **Port Consistency**: All addresses use the same port or explicitly specify the port for each address
- **Monitoring & Alerts**: Configure monitoring systems to track failover events
- **Testing & Validation**: Verify failover and load balancing behavior in test environments before deployment

## PROXY Protocol Integration

### Example 18: Load Balancer Integration with PROXY Protocol

Enable PROXY protocol support for integration with load balancers and reverse proxies:

@@ -256,7 +331,7 @@ This configuration:
- Compatible with HAProxy, Nginx, and other PROXY protocol aware services
- Useful for maintaining accurate access logs and IP-based access controls

### Example 19: Reverse Proxy Support for Web Applications

Enable web applications behind NodePass to receive original client information:

@@ -280,7 +355,7 @@ This setup:
- Supports compliance requirements for connection auditing
- Works with web servers that support PROXY protocol (Nginx, HAProxy, etc.)

### Example 20: Database Access with Client IP Preservation

Maintain client IP information for database access logging and security:

@@ -307,7 +382,7 @@ Benefits:

## Container Deployment

### Example 21: Containerized NodePass

Deploy NodePass in a Docker environment:

@@ -342,7 +417,7 @@ This configuration:

## Master API Management

### Example 22: Centralized Management

Set up a central controller for multiple NodePass instances:

@@ -379,7 +454,7 @@ This setup:
- Offers a RESTful API for automation and integration
- Includes a built-in Swagger UI at http://localhost:9090/api/v1/docs

### Example 23: Custom API Prefix

Use a custom API prefix for the master mode:

@@ -398,7 +473,7 @@ This allows:
- Custom URL paths for security or organizational purposes
- Swagger UI access at http://localhost:9090/admin/v1/docs

### Example 24: Real-time Connection and Traffic Monitoring

Monitor instance connection counts and traffic statistics through the master API:

@@ -67,6 +67,18 @@ API Key authentication is enabled by default; the key is generated automatically on first startup and saved in `nodepass.gob`
    "url": "...",
    "config": "server://0.0.0.0:8080/localhost:3000?log=info&tls=1&max=1024&mode=0&read=1h&rate=0&slot=65536&proxy=0",
    "restart": true,
    "meta": {
      "peer": {
        "sid": "550e8400-e29b-41d4-a716-446655440000",
        "type": "1",
        "alias": "Remote service"
      },
      "tags": {
        "environment": "production",
        "region": "us-west",
        "owner": "team-alpha"
      }
    },
    "mode": 0,
    "ping": 0,
    "pool": 0,

@@ -85,6 +97,15 @@ API Key authentication is enabled by default; the key is generated automatically on first startup and saved in `nodepass.gob`
- `tcprx`/`tcptx`/`udprx`/`udptx`: cumulative traffic statistics
- `config`: instance configuration URL containing the complete startup configuration
- `restart`: auto-start policy
- `meta`: metadata used for instance organization and peer identification
  - `peer`: peer connection information (remote endpoint details)
    - `sid`: the remote service's service ID in UUID v4 format (e.g. `550e8400-e29b-41d4-a716-446655440000`)
    - `type`: the remote service type, using standard enumeration values
      - `"0"`: Single-end Forwarding
      - `"1"`: NAT Traversal
      - `"2"`: Tunnel Forwarding
    - `alias`: service alias of the remote endpoint (no format restriction)
  - `tags`: custom key-value tags for flexible classification and filtering

### Instance URL Format

@@ -131,9 +152,25 @@ async function regenerateApiKey() {
  const result = await response.json();
  return result.url; // New API Key
}

// Get the Master ID
async function getMasterID() {
  const response = await fetch(`${API_URL}/instances/${apiKeyID}`, {
    method: 'GET',
    headers: {
      'X-API-Key': 'current-api-key'
    }
  });

  const result = await response.json();
  return result.data.config; // Master ID (16-character hex)
}
```

**Note**:
- The API Key ID is fixed as `********` (eight asterisks). Internally, this is a special instance ID used to store and manage the API Key.
- The `config` field of the API Key instance stores the **Master ID**, a 16-character hexadecimal string (e.g. `1a2b3c4d5e6f7890`) that uniquely identifies the master service.
- The Master ID is generated automatically on first startup and persisted; it remains unchanged throughout the master service's lifecycle.

### Real-time Event Monitoring with SSE

@@ -531,6 +568,88 @@ NodePass master mode provides an automatic backup feature that periodically backs up the state file to prevent
  const data = await response.json();
  return data.success;
}

// Update instance metadata
async function updateInstanceMetadata(instanceId, metadata) {
  const response = await fetch(`${API_URL}/instances/${instanceId}`, {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey // If API Key is enabled
    },
    body: JSON.stringify({ meta: metadata })
  });

  const data = await response.json();
  return data.success;
}
```

5. **Metadata Management**: organize and classify instances using metadata
```javascript
// Set peer connection information
async function setPeerInfo(instanceId, peerInfo) {
  const response = await fetch(`${API_URL}/instances/${instanceId}`, {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey
    },
    body: JSON.stringify({
      meta: {
        peer: {
          sid: peerInfo.serviceId, // UUID v4 format
          type: peerInfo.type, // "0" | "1" | "2"
          alias: peerInfo.alias
        },
        tags: {} // Preserve existing tags
      }
    })
  });

  const data = await response.json();
  return data.success;
}

// Add or update instance tags
async function updateInstanceTags(instanceId, tags) {
  const response = await fetch(`${API_URL}/instances/${instanceId}`, {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey
    },
    body: JSON.stringify({
      meta: {
        peer: {}, // Preserve existing peer information
        tags: tags
      }
    })
  });

  const data = await response.json();
  return data.success;
}

// Full metadata update
async function updateCompleteMetadata(instanceId, peerInfo, tags) {
  const response = await fetch(`${API_URL}/instances/${instanceId}`, {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey
    },
    body: JSON.stringify({
      meta: {
        peer: peerInfo,
        tags: tags
      }
    })
  });

  const data = await response.json();
  return data.success;
}
```

6. **Auto-start Policy Management**: configure automatic startup behavior
@@ -585,6 +704,201 @@ NodePass master mode provides an automatic backup feature that periodically backs up the state file to prevent
  }
```

#### Metadata Management Usage Examples

The following comprehensive examples show how to use metadata to organize and manage instances:

```javascript
// Example 1: Establish a peer-to-peer tunnel with metadata
async function establishPeerTunnel(localConfig, remoteConfig) {
  // Create the local server instance
  const localInstance = await createNodePassInstance({
    type: 'server',
    port: localConfig.port,
    target: localConfig.target
  });

  // Create the remote client instance
  const remoteInstance = await createNodePassInstance({
    type: 'client',
    serverHost: localConfig.serverHost,
    port: remoteConfig.port,
    target: remoteConfig.target
  });

  if (localInstance.success && remoteInstance.success) {
    // Set peer information on the local instance
    await updateCompleteMetadata(
      localInstance.data.id,
      {
        sid: remoteConfig.serviceId, // UUID format
        type: "2", // Tunnel Forwarding
        alias: remoteConfig.serviceName
      },
      {
        tunnel_type: 'peer-to-peer',
        protocol: 'tcp',
        encryption: 'tls'
      }
    );

    // Set peer information on the remote instance
    await updateCompleteMetadata(
      remoteInstance.data.id,
      {
        sid: localConfig.serviceId, // UUID format
        type: "2", // Tunnel Forwarding
        alias: localConfig.serviceName
      },
      {
        tunnel_type: 'peer-to-peer',
        protocol: 'tcp',
        encryption: 'tls'
      }
    );

    console.log('Peer tunnel established with metadata');
  }
}

// Example 2: Organize instances by environment and region
async function organizeInstancesByEnvironment(instances) {
  for (const instance of instances) {
    const tags = {
      environment: instance.isProduction ? 'production' : 'development',
      region: instance.deploymentRegion,
      team: instance.owningTeam,
      cost_center: instance.costCenter,
      criticality: instance.isCritical ? 'high' : 'normal'
    };

    await updateInstanceTags(instance.id, tags);
    console.log(`Environment metadata tags set for instance ${instance.id}`);
  }
}

// Example 3: Query instances by metadata tags
async function findInstancesByTags(requiredTags) {
  const response = await fetch(`${API_URL}/instances`, {
    headers: { 'X-API-Key': apiKey }
  });
  const data = await response.json();

  if (data.success) {
    return data.data.filter(instance => {
      if (!instance.meta || !instance.meta.tags) return false;

      // Check that all required tags match
      return Object.entries(requiredTags).every(([key, value]) =>
        instance.meta.tags[key] === value
      );
    });
  }
  return [];
}

// Example 4: Update metadata on a status change
async function updateMetadataOnStatusChange(instanceId, newStatus) {
  const instance = await fetch(`${API_URL}/instances/${instanceId}`, {
    headers: { 'X-API-Key': apiKey }
  });
  const data = await instance.json();

  if (data.success && data.data.meta) {
    const updatedTags = {
      ...data.data.meta.tags,
      last_status_change: new Date().toISOString(),
      current_status: newStatus,
      status_change_count: (parseInt(data.data.meta.tags.status_change_count || '0') + 1).toString()
    };

    await updateInstanceTags(instanceId, updatedTags);
  }
}
```

#### Metadata Best Practices

1. **Peer Information**: use the `peer` object to track connections between instances
   - `sid`: unique service identifier (required, UUID v4 format, e.g. `550e8400-e29b-41d4-a716-446655440000`)
     - Use the standard UUID v4 format to guarantee global uniqueness
     - Can be generated with JavaScript's `crypto.randomUUID()` or a third-party library
   - `type`: service type identifier (required, string enumeration value)
     - `"0"`: Single-end Forwarding, for simple client forwarding scenarios
     - `"1"`: NAT Traversal, for scenarios that must traverse NAT
     - `"2"`: Tunnel Forwarding, for scenarios that establish encrypted tunnels
   - `alias`: friendly name of the remote service (no format restriction, max 256 characters)

2. **Frontend Integration Standards**: frontends should follow these standards for consistency

   **Service ID (`sid`) generation standard:**
   ```javascript
   // Generate a UUID v4 with the browser's native API
   const serviceId = crypto.randomUUID();
   // Example output: "550e8400-e29b-41d4-a716-446655440000"

   // Or use a third-party library (e.g. uuid)
   import { v4 as uuidv4 } from 'uuid';
   const serviceId = uuidv4();
   ```

   **Service type (`type`) usage standard:**
   ```javascript
   // Define the service type enumeration
   const ServiceType = {
     SINGLE_END: "0", // Single-end forwarding: one-way client forwarding, no server callback needed
     NAT_TRAVERSAL: "1", // NAT traversal: traverse NAT for intranet access
     TUNNEL: "2" // Tunnel forwarding: end-to-end encrypted tunnel
   };

   // Usage example
   const peerInfo = {
     sid: crypto.randomUUID(),
     type: ServiceType.NAT_TRAVERSAL,
     alias: "Web server"
   };
   ```

   **Type selection guide:**
   - **Single-end Forwarding (`"0"`)**:
     - Scenario: the client only needs to forward traffic to a remote server
     - Characteristics: one-way connection, no active server callback required
     - Example: a local application connecting to a cloud database

   - **NAT Traversal (`"1"`)**:
     - Scenario: intranet services must be reachable from the public network
     - Characteristics: traverses NAT and firewall restrictions
     - Example: remote access to a home NAS or intranet web services

   - **Tunnel Forwarding (`"2"`)**:
     - Scenario: a secure end-to-end connection is required
     - Characteristics: encrypted transport, bidirectional communication
     - Example: secure interconnection between branch offices and headquarters

3. **Tag Organization**: design a consistent tagging strategy
   - Use lowercase keys with underscores (e.g. `cost_center`, `deployment_region`)
   - Restrict tag values to meaningful, searchable strings
   - Common tag categories:
     - Environment: `production`, `staging`, `development`
     - Location: `us-west`, `eu-central`, `ap-southeast`
     - Ownership: `team-alpha`, `ops-team`, `platform-team`
     - Function: `database-tunnel`, `web-proxy`, `api-gateway`
     - Criticality: `high`, `medium`, `low`

4. **Field Length Limits**: length requirements for metadata fields
   - `peer.sid`: fixed 36 characters (UUID v4 format, e.g. `550e8400-e29b-41d4-a716-446655440000`)
   - `peer.type`: fixed 1 character (enumeration value: `"0"` | `"1"` | `"2"`)
   - `peer.alias`: up to 256 characters (no specific format requirement)
   - Tag keys and values: up to 256 characters each

5. **Tag Uniqueness**: ensure tag keys are unique within an instance
   - Duplicate keys cause a 400 Bad Request error

6. **Filtering and Search**: use metadata to filter instances
   - Clients filter by tags to render dashboard views
   - Query instances by peer information for relationship mapping
   - Group instances by tags for batch operations
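
The grouping point above can be sketched with a small helper; `groupInstancesByTag` is a hypothetical name, and the instance objects are assumed to follow the `meta.tags` shape shown earlier:

```javascript
// Sketch: group instances by the value of one tag key so batch
// operations can be applied per group (hypothetical helper).
function groupInstancesByTag(instances, tagKey) {
  const groups = new Map();
  for (const instance of instances) {
    // Instances without the tag fall into an 'untagged' bucket.
    const value = instance.meta?.tags?.[tagKey] ?? 'untagged';
    if (!groups.has(value)) groups.set(value, []);
    groups.get(value).push(instance);
  }
  return groups; // Map of tag value -> array of instances
}
```

Each group can then be iterated to apply the same PATCH (for example, a restart-policy change) to every member.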

#### Complete Auto-start Policy Usage Example

The following comprehensive example shows how to implement auto-start policy management in real-world scenarios:
@@ -798,6 +1112,18 @@ The instance object in API responses contains the following fields:
    "url": "server://...", // Instance configuration URL
    "config": "server://0.0.0.0:8080/localhost:3000?log=info&tls=1&max=1024&mode=0&read=1h&rate=0&slot=65536&proxy=0", // Full configuration URL
    "restart": true, // Auto-start policy
    "meta": { // Metadata for organization and peer tracking
      "peer": {
        "sid": "550e8400-e29b-41d4-a716-446655440000", // Remote service ID (UUID format)
        "type": "1", // Remote service type (0=single-end forwarding, 1=NAT traversal, 2=tunnel forwarding)
        "alias": "Remote service" // Friendly name of the remote service
      },
      "tags": { // Custom key-value tags
        "environment": "production",
        "region": "us-west",
        "team": "platform"
      }
    },
    "mode": 0, // Run mode
    "tcprx": 1024, // TCP bytes received
    "tcptx": 2048, // TCP bytes sent

@@ -811,6 +1137,17 @@ The instance object in API responses contains the following fields:
- The `config` field contains the instance's complete configuration URL, generated automatically by the system
- The `mode` field indicates the instance's current run mode
- The `restart` field controls the instance's auto-start behavior
- The `meta` field contains structured metadata for instance organization
  - The `peer` object tracks remote endpoint information for point-to-point connections
    - `sid`: unique service identifier, which must use UUID v4 format (36 characters, e.g. `550e8400-e29b-41d4-a716-446655440000`)
    - `type`: service type identifier, a string enumeration value (`"0"` | `"1"` | `"2"`)
      - `"0"`: Single-end Forwarding, the client forwards traffic one-way
      - `"1"`: NAT Traversal, traverses NAT for intranet access
      - `"2"`: Tunnel Forwarding, establishes an end-to-end encrypted tunnel
    - `alias`: custom string, up to 256 characters, no format restriction
  - The `tags` map allows flexible classification with custom key-value pairs
    - Tag keys and values have a maximum length of 256 characters
    - Tag keys must be unique within an instance

### Instance Configuration Fields

@@ -1018,9 +1355,18 @@ const instance = await fetch(`${API_URL}/instances/abc123`, {
```

#### PATCH /instances/{id}
- **Description**: Update instance state, alias, metadata, or perform control operations
- **Authentication**: Requires API Key
- **Request body**: `{ "alias": "new alias", "action": "start|stop|restart|reset", "restart": true|false, "meta": {...} }`
- **Metadata Structure**:
  - `peer`: object with the following fields (all optional):
    - `sid`: service ID (UUID v4 format, 36 characters, e.g. `550e8400-e29b-41d4-a716-446655440000`)
    - `type`: service type (enumeration value: `"0"` | `"1"` | `"2"`)
      - `"0"`: Single-end Forwarding
      - `"1"`: NAT Traversal
      - `"2"`: Tunnel Forwarding
    - `alias`: service alias (max 256 characters, no format restriction)
  - `tags`: object of custom key-value pairs (keys and values max 256 characters, keys must be unique)
- **Example**:
```javascript
// Update alias and auto-start policy
@@ -1047,6 +1393,48 @@ await fetch(`${API_URL}/instances/abc123`, {
    action: "restart"
  })
});

// Update metadata (with peer information and tags)
await fetch(`${API_URL}/instances/abc123`, {
  method: 'PATCH',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': apiKey
  },
  body: JSON.stringify({
    meta: {
      peer: {
        sid: "550e8400-e29b-41d4-a716-446655440000", // UUID format
        type: "1", // NAT Traversal
        alias: "Remote API server"
      },
      tags: {
        environment: "production",
        region: "us-east",
        team: "backend",
        criticality: "high"
      }
    }
  })
});

// Update only tags (peer information stays unchanged)
await fetch(`${API_URL}/instances/abc123`, {
  method: 'PATCH',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': apiKey
  },
  body: JSON.stringify({
    meta: {
      peer: {}, // An empty object preserves the existing peer information
      tags: {
        environment: "staging",
        updated_at: new Date().toISOString()
      }
    }
  })
});
```

#### PUT /instances/{id}

@@ -1101,6 +1489,7 @@ await fetch(`${API_URL}/instances/abc123`, {
- **Authentication**: Requires API Key
- **Request body**: `{ "alias": "new alias" }`
- **Response**: Complete master information (same as GET /info)
- **Note**: The master alias is stored in the `alias` field of the API Key instance (ID `********`)
- **Example**:
```javascript
// Update the master alias
@@ -1118,6 +1507,18 @@ console.log('Updated alias:', data.alias);
// The response contains the full system information, including the updated alias
```

**Retrieving the Master ID**: The Master ID is stored in the `config` field of the API Key instance and can be retrieved as follows:
```javascript
// Get the Master ID
async function getMasterID() {
  const response = await fetch(`${API_URL}/instances/********`, {
    headers: { 'X-API-Key': apiKey }
  });
  const data = await response.json();
  return data.data.config; // Returns the 16-character hex Master ID
}
```

#### GET /tcping
- **Description**: TCP connection test; checks connectivity and latency to the target address
- **Authentication**: Requires API Key

@@ -264,6 +264,63 @@ nodepass "server://0.0.0.0:10101/0.0.0.0:8080?log=info&tls=1&proxy=1&rate=100"
- The header format follows the HAProxy PROXY protocol v1 specification
- If the target service does not support the PROXY protocol, connections will fail

## Target Address Groups and Load Balancing

NodePass supports configuring multiple target addresses for high availability and load balancing. Target address groups apply only to the egress side (the final destination of traffic) and must not be used on the ingress side.

### Target Address Group Configuration

Target address groups are configured by separating multiple addresses with commas; NodePass automatically rotates and fails over across them:

```bash
# Server with multiple backend targets (forward mode, mode=2)
nodepass "server://0.0.0.0:10101/backend1.example.com:8080,backend2.example.com:8080,backend3.example.com:8080?mode=2&tls=1"

# Client with multiple local services (single-end forwarding mode, mode=1)
nodepass "client://127.0.0.1:1080/app1.local:8080,app2.local:8080?mode=1"
```

### Rotation Strategy

NodePass uses a Round-Robin algorithm that combines failover and load balancing:

- **Load Balancing**: after each successful connection, the next target address is selected so traffic is distributed evenly
- **Failover**: when a connection to an address fails, the next address is tried immediately to keep the service available
- **Automatic Recovery**: failed addresses are retried in later rotation cycles and automatically resume receiving traffic once they recover

### Use Cases

Target address groups are suitable for the following scenarios:

- **High Availability Deployment**: multiple backend servers with automatic failover
- **Load Balancing**: even traffic distribution across multiple backend instances
- **Canary Releases**: gradually shifting traffic to a new service version
- **Geographic Distribution**: selecting optimal paths based on network topology

### Important Notes

- **Egress Only**: target address groups can only be configured at the final traffic destination
  - ✓ Server forward mode (mode=2): `server://0.0.0.0:10101/target1:80,target2:80`
  - ✓ Client single-end forwarding mode (mode=1): `client://127.0.0.1:1080/target1:80,target2:80`
  - ✗ Not supported for tunnel addresses: do not use a multi-address configuration for tunnel addresses

- **Address Format**: all addresses must use the same port or explicitly specify a port for each address
- **Protocol Consistency**: all addresses in a group must support the same protocol (TCP/UDP)
- **Thread Safety**: the rotation index uses atomic operations and supports high-concurrency scenarios

Example configurations:

```bash
# Correct example: server with 3 backend web servers
nodepass "server://0.0.0.0:10101/web1.internal:8080,web2.internal:8080,web3.internal:8080?mode=2&log=info"

# Correct example: client with 2 local database instances
nodepass "client://127.0.0.1:3306/db-primary.local:3306,db-secondary.local:3306?mode=1&log=warn"

# Incorrect example: do not use multiple addresses for tunnel addresses (causes parsing errors)
# nodepass "server://host1:10101,host2:10101/target:8080" # ✗ Wrong usage
```
## URL Query Parameters and Their Scope

NodePass supports flexible configuration through URL query parameters; the table below shows whether each parameter applies in server, client, and master modes:

@@ -282,7 +339,6 @@ NodePass支持通过URL查询参数进行灵活配置,不同参数在 server
| `slot` | Maximum connection limit | `65536` | O | O | X |
| `proxy` | PROXY protocol support | `0` | O | O | X |

- O: parameter takes effect; configure it to suit your scenario
- X: parameter has no effect; the setting is ignored
@@ -302,10 +358,10 @@ NodePass支持通过URL查询参数进行灵活配置,不同参数在 server
| `NP_SEMAPHORE_LIMIT` | Semaphore buffer size | 65536 | `export NP_SEMAPHORE_LIMIT=2048` |
| `NP_TCP_DATA_BUF_SIZE` | TCP data transfer buffer size | 16384 | `export NP_TCP_DATA_BUF_SIZE=65536` |
| `NP_UDP_DATA_BUF_SIZE` | UDP packet buffer size | 2048 | `export NP_UDP_DATA_BUF_SIZE=16384` |
| `NP_HANDSHAKE_TIMEOUT` | Handshake operation timeout | 5s | `export NP_HANDSHAKE_TIMEOUT=30s` |
| `NP_UDP_READ_TIMEOUT` | UDP read operation timeout | 30s | `export NP_UDP_READ_TIMEOUT=60s` |
| `NP_TCP_DIAL_TIMEOUT` | TCP connection establishment timeout | 5s | `export NP_TCP_DIAL_TIMEOUT=60s` |
| `NP_UDP_DIAL_TIMEOUT` | UDP connection establishment timeout | 5s | `export NP_UDP_DIAL_TIMEOUT=30s` |
| `NP_POOL_GET_TIMEOUT` | Timeout for acquiring a connection from the pool | 5s | `export NP_POOL_GET_TIMEOUT=60s` |
| `NP_MIN_POOL_INTERVAL` | Minimum interval between connection creations | 100ms | `export NP_MIN_POOL_INTERVAL=200ms` |
| `NP_MAX_POOL_INTERVAL` | Maximum interval between connection creations | 1s | `export NP_MAX_POOL_INTERVAL=3s` |
@@ -363,7 +419,7 @@ NodePass支持通过URL查询参数进行灵活配置,不同参数在 server
  - For applications with intermittent transmission, increase this value to avoid spurious timeouts

- `NP_UDP_DIAL_TIMEOUT`: UDP connection establishment timeout
  - The default (5s) provides a good balance for most applications
  - Increase it for high-latency networks or slow-responding applications
  - Decrease it for low-latency applications that need fast failover
@@ -377,14 +433,14 @@ NodePass支持通过URL查询参数进行灵活配置,不同参数在 server
  - Consider increasing it to 65536 or higher for bulk data transfer and media streaming

- `NP_TCP_DIAL_TIMEOUT`: TCP connection establishment timeout
  - The default (5s) suits most network conditions
  - Increase it in environments with unstable network conditions
  - Decrease it for applications that need to determine connection success quickly

### Connection Pool Management Settings

- `NP_POOL_GET_TIMEOUT`: maximum wait when acquiring a connection from the pool
  - The default (5s) allows ample time for connection establishment
  - Increase it in high-latency environments or when using large connection pools
  - Decrease it for applications that need fast failure detection
  - In client single-end forwarding mode no connection pool is used, so this parameter is ignored
@@ -235,9 +235,84 @@ nodepass "server://tunnel.example.com:10101/127.0.0.1:3001?log=warn&tls=1"
- Enables developers to access environments without direct network exposure
- Maps remote services to distinct local ports for easy identification

## High Availability and Load Balancing

### Example 14: Load Balancing Across Multiple Backend Servers

Use a target address group to distribute traffic evenly with automatic failover:

```bash
# Server: 3 backend web servers
nodepass "server://0.0.0.0:10101/web1.internal:8080,web2.internal:8080,web3.internal:8080?mode=2&tls=1&log=info"

# Client: connect to the server
nodepass "client://server.example.com:10101/127.0.0.1:8080?log=info"
```

This configuration:
- Rotates traffic across the 3 backend servers for load balancing
- Switches automatically to the remaining servers when one backend fails
- Re-admits a failed server into rotation once it recovers
- Secures the tunnel with TLS encryption

### Example 15: Database Primary/Secondary Failover

Configure primary and secondary database instances for highly available access:

```bash
# Client: primary and secondary database addresses (single-end forwarding mode)
nodepass "client://127.0.0.1:3306/db-primary.local:3306,db-secondary.local:3306?mode=1&log=warn"
```

This setup:
- Connects to the primary database first and fails over to the secondary when the primary goes down
- Provides a high-performance local proxy via single-end forwarding mode
- Achieves failover transparently, with no application changes required
- Logs only warnings and errors, reducing log output

### Example 16: API Gateway Backend Pool

Configure multiple backend service instances for an API gateway:

```bash
# Server: 4 API service instances
nodepass "server://0.0.0.0:10101/api1.backend:8080,api2.backend:8080,api3.backend:8080,api4.backend:8080?mode=2&tls=1&rate=200&slot=5000"

# Client: connect from the API gateway
nodepass "client://apigateway.example.com:10101/127.0.0.1:8080?rate=100&slot=2000"
```

This configuration:
- Forms a backend pool of 4 API instances with round-robin request distribution
- Caps the server at 200 Mbps bandwidth and 5000 concurrent connections
- Caps the client at 100 Mbps bandwidth and 2000 concurrent connections
- Keeps the overall service available when a single instance fails

### Example 17: Geographically Distributed Services

Configure service nodes in multiple regions to optimize network latency:

```bash
# Server: multi-region nodes
nodepass "server://0.0.0.0:10101/us-west.service:8080,us-east.service:8080,eu-central.service:8080?mode=2&log=debug"
```

This setup:
- Configures 3 service nodes in different regions
- Distributes traffic across the regions via round-robin
- Uses debug logging to analyze traffic distribution and failures
- Suits globally distributed application scenarios

**Target address group best practices:**
- **Address count**: 2-5 addresses are recommended; more increases failure-detection time
- **Health checks**: ensure backend services have their own health-check mechanisms
- **Port consistency**: use the same port for all addresses or specify a port per address explicitly
- **Monitoring and alerting**: configure monitoring to track failover events
- **Test before deploying**: verify failover and load-balancing behavior in a test environment first
## PROXY Protocol Integration

### Example 18: Load Balancer Integration with the PROXY Protocol

Enable PROXY protocol support to integrate with load balancers and reverse proxies:

@@ -256,7 +331,7 @@ nodepass "client://tunnel.example.com:10101/127.0.0.1:3000?log=info&proxy=1"
- Compatible with HAProxy, Nginx, and other services that support the PROXY protocol
- Helps maintain accurate access logs and IP-based access control

### Example 19: Reverse Proxy Support for Web Applications

Let web applications behind NodePass receive the original client information:

@@ -280,7 +355,7 @@ nodepass "server://0.0.0.0:10101/127.0.0.1:8080?log=warn&tls=2&crt=/path/to/cert
- Supports compliance requirements for connection auditing
- Works with web servers that support the PROXY protocol (Nginx, HAProxy, etc.)

### Example 20: Database Access with Client IP Preservation

Preserve client IP information for database access logging and security:

@@ -307,7 +382,7 @@ nodepass "client://dbproxy.example.com:10101/127.0.0.1:5432?proxy=1"

## Container Deployment

### Example 21: Containerized NodePass

Deploy NodePass in a Docker environment:

@@ -342,7 +417,7 @@ docker run -d --name nodepass-client \

## Master API Management

### Example 22: Centralized Management

Set up a central controller for multiple NodePass instances:

@@ -379,7 +454,7 @@ curl -X PUT http://localhost:9090/api/v1/instances/{id} \
- Provides a RESTful API for automation and integration
- Includes a built-in Swagger UI at http://localhost:9090/api/v1/docs

### Example 23: Custom API Prefix

Use a custom API prefix for master mode:

@@ -398,7 +473,7 @@ curl -X POST http://localhost:9090/admin/v1/instances \
- Custom URL paths for security or organizational purposes
- Swagger UI available at http://localhost:9090/admin/v1/docs

### Example 24: Real-Time Connection and Traffic Monitoring

Monitor instance connection counts and traffic statistics through the master API:
@@ -63,7 +63,7 @@ func NewClient(parsedURL *url.URL, logger *logs.Logger) (*Client, error) {
func (c *Client) Run() {
    logInfo := func(prefix string) {
        c.logger.Info("%v: client://%v@%v/%v?min=%v&mode=%v&read=%v&rate=%v&slot=%v&proxy=%v",
            prefix, c.tunnelKey, c.tunnelTCPAddr, c.getTargetAddrsString(),
            c.minPoolCapacity, c.runMode, c.readTimeout, c.rateLimit/125000, c.slotLimit, c.proxyProtocol)
    }
    logInfo("Client started")
@@ -33,8 +33,9 @@ type Common struct {
    tunnelKey      string           // tunnel key
    tunnelTCPAddr  *net.TCPAddr     // tunnel TCP address
    tunnelUDPAddr  *net.UDPAddr     // tunnel UDP address
    targetTCPAddrs []*net.TCPAddr   // target TCP address group
    targetUDPAddrs []*net.UDPAddr   // target UDP address group
    targetIdx      uint64           // target address index
    targetListener *net.TCPListener // target listener
    tunnelListener net.Listener     // tunnel listener
    tunnelTCPConn  *net.TCPConn     // tunnel TCP connection
@@ -75,9 +76,9 @@ var (
    semaphoreLimit   = getEnvAsInt("NP_SEMAPHORE_LIMIT", 65536)                      // semaphore limit
    tcpDataBufSize   = getEnvAsInt("NP_TCP_DATA_BUF_SIZE", 16384)                    // TCP buffer size
    udpDataBufSize   = getEnvAsInt("NP_UDP_DATA_BUF_SIZE", 2048)                     // UDP buffer size
    handshakeTimeout = getEnvAsDuration("NP_HANDSHAKE_TIMEOUT", 5*time.Second)       // handshake timeout
    tcpDialTimeout   = getEnvAsDuration("NP_TCP_DIAL_TIMEOUT", 5*time.Second)        // TCP dial timeout
    udpDialTimeout   = getEnvAsDuration("NP_UDP_DIAL_TIMEOUT", 5*time.Second)        // UDP dial timeout
    udpReadTimeout   = getEnvAsDuration("NP_UDP_READ_TIMEOUT", 30*time.Second)       // UDP read timeout
    poolGetTimeout   = getEnvAsDuration("NP_POOL_GET_TIMEOUT", 5*time.Second)        // pool get timeout
    minPoolInterval  = getEnvAsDuration("NP_MIN_POOL_INTERVAL", 100*time.Millisecond) // minimum pool interval
@@ -207,6 +208,9 @@ func (c *Common) decode(data []byte) ([]byte, error) {
func (c *Common) getAddress(parsedURL *url.URL) error {
    // Parse the tunnel address
    tunnelAddr := parsedURL.Host
    if tunnelAddr == "" {
        return fmt.Errorf("getAddress: no valid tunnel address found")
    }

    // Resolve the tunnel TCP address
    if tunnelTCPAddr, err := net.ResolveTCPAddr("tcp", tunnelAddr); err == nil {
@@ -222,26 +226,101 @@ func (c *Common) getAddress(parsedURL *url.URL) error {
        return fmt.Errorf("getAddress: resolveUDPAddr failed: %w", err)
    }

    // Parse the target address group
    targetAddr := strings.TrimPrefix(parsedURL.Path, "/")
    if targetAddr == "" {
        return fmt.Errorf("getAddress: no valid target address found")
    }

    addrList := strings.Split(targetAddr, ",")
    tempTCPAddrs := make([]*net.TCPAddr, 0, len(addrList))
    tempUDPAddrs := make([]*net.UDPAddr, 0, len(addrList))

    for _, addr := range addrList {
        addr = strings.TrimSpace(addr)
        if addr == "" {
            continue
        }

        // Resolve the target TCP address
        tcpAddr, err := net.ResolveTCPAddr("tcp", addr)
        if err != nil {
            return fmt.Errorf("getAddress: resolveTCPAddr failed for %s: %w", addr, err)
        }

        // Resolve the target UDP address
        udpAddr, err := net.ResolveUDPAddr("udp", addr)
        if err != nil {
            return fmt.Errorf("getAddress: resolveUDPAddr failed for %s: %w", addr, err)
        }

        tempTCPAddrs = append(tempTCPAddrs, tcpAddr)
        tempUDPAddrs = append(tempUDPAddrs, udpAddr)
    }

    if len(tempTCPAddrs) == 0 || len(tempUDPAddrs) == 0 || len(tempTCPAddrs) != len(tempUDPAddrs) {
        return fmt.Errorf("getAddress: no valid target address found")
    }

    // Store the target address group
    c.targetTCPAddrs = tempTCPAddrs
    c.targetUDPAddrs = tempUDPAddrs
    c.targetIdx = 0

    return nil
}

// getTargetAddrsString returns the string representation of the target address group
func (c *Common) getTargetAddrsString() string {
    addrs := make([]string, len(c.targetTCPAddrs))
    for i, addr := range c.targetTCPAddrs {
        addrs[i] = addr.String()
    }
    return strings.Join(addrs, ",")
}

// nextTargetIdx returns the next target address index
func (c *Common) nextTargetIdx() int {
    if len(c.targetTCPAddrs) <= 1 {
        return 0
    }
    return int((atomic.AddUint64(&c.targetIdx, 1) - 1) % uint64(len(c.targetTCPAddrs)))
}

// dialWithRotation dials the target address group in rotation
func (c *Common) dialWithRotation(network string, timeout time.Duration) (net.Conn, error) {
    var addrCount int
    var getAddr func(int) string

    if network == "tcp" {
        addrCount = len(c.targetTCPAddrs)
        getAddr = func(i int) string { return c.targetTCPAddrs[i].String() }
    } else {
        addrCount = len(c.targetUDPAddrs)
        getAddr = func(i int) string { return c.targetUDPAddrs[i].String() }
    }

    // Single target address: fast path
    if addrCount == 1 {
        return net.DialTimeout(network, getAddr(0), timeout)
    }

    // Multiple target addresses: load balancing + failover
    startIdx := c.nextTargetIdx()
    var lastErr error

    for i := range addrCount {
        currentIdx := (startIdx + i) % addrCount
        conn, err := net.DialTimeout(network, getAddr(currentIdx), timeout)
        if err == nil {
            return conn, nil
        }
        lastErr = err
    }

    return nil, fmt.Errorf("dialWithRotation: all %d targets failed: %w", addrCount, lastErr)
}

// getTunnelKey retrieves the tunnel key from the URL
func (c *Common) getTunnelKey(parsedURL *url.URL) {
    if key := parsedURL.User.Username(); key != "" {
@@ -418,19 +497,19 @@ func (c *Common) initTunnelListener() error {

// initTargetListener initializes the target listeners
func (c *Common) initTargetListener() error {
    if len(c.targetTCPAddrs) == 0 || len(c.targetUDPAddrs) == 0 {
        return fmt.Errorf("initTargetListener: no target address")
    }

    // Initialize the target TCP listener
    targetListener, err := net.ListenTCP("tcp", c.targetTCPAddrs[0])
    if err != nil {
        return fmt.Errorf("initTargetListener: listenTCP failed: %w", err)
    }
    c.targetListener = targetListener

    // Initialize the target UDP listener
    targetUDPConn, err := net.ListenUDP("udp", c.targetUDPAddrs[0])
    if err != nil {
        return fmt.Errorf("initTargetListener: listenUDP failed: %w", err)
    }
@@ -510,11 +589,6 @@ func (c *Common) stop() {
    if c.rateLimiter != nil {
        c.rateLimiter.Reset()
    }
}

// shutdown shared graceful shutdown
@@ -1045,9 +1119,9 @@ func (c *Common) commonTCPOnce(signalURL *url.URL) {
    defer c.releaseSlot(false)

    // Connect to a target TCP address in rotation
    targetConn, err := c.dialWithRotation("tcp", tcpDialTimeout)
    if err != nil {
        c.logger.Error("commonTCPOnce: dialWithRotation failed: %v", err)
        return
    }

@@ -1130,9 +1204,10 @@ func (c *Common) commonUDPOnce(signalURL *url.URL) {
        return
    }

    // Create a new session
    newSession, err := c.dialWithRotation("udp", udpDialTimeout)
    if err != nil {
        c.logger.Error("commonUDPOnce: dialWithRotation failed: %v", err)
        c.releaseSlot(true)
        return
    }
@@ -1250,7 +1325,7 @@ func (c *Common) singleEventLoop() error {
    now := time.Now()

    // Try connecting to a target address
    if conn, err := net.DialTimeout("tcp", c.targetTCPAddrs[c.nextTargetIdx()].String(), reportInterval); err == nil {
        ping = int(time.Since(now).Milliseconds())
        conn.Close()
    }

@@ -1312,9 +1387,9 @@ func (c *Common) singleTCPLoop() error {
    defer c.releaseSlot(false)

    // Try to establish the target connection
    targetConn, err := c.dialWithRotation("tcp", tcpDialTimeout)
    if err != nil {
        c.logger.Error("singleTCPLoop: dialWithRotation failed: %v", err)
        return
    }

@@ -1391,9 +1466,9 @@ func (c *Common) singleUDPLoop() error {
    }

    // Create a new session
    newSession, err := c.dialWithRotation("udp", udpDialTimeout)
    if err != nil {
        c.logger.Error("singleUDPLoop: dialWithRotation failed: %v", err)
        c.releaseSlot(true)
        c.putUDPBuffer(buffer)
        continue
@@ -65,6 +65,7 @@ const swaggerUIHTML = `<!DOCTYPE html>
// Master implements master mode functionality
type Master struct {
    Common         // embedded common functionality
    mid     string // master ID
    alias   string // master alias
    prefix  string // API prefix
    version string // NP version
@@ -94,7 +95,7 @@ type Instance struct {
    URL     string `json:"url"`     // instance URL
    Config  string `json:"config"`  // instance configuration
    Restart bool   `json:"restart"` // restart on startup
    Meta    Meta   `json:"meta"`    // metadata
    Mode    int32  `json:"mode"`    // instance mode
    Ping    int32  `json:"ping"`    // in-tunnel latency
    Pool    int32  `json:"pool"`    // pool connection count
@@ -108,6 +109,10 @@ type Instance struct {
    TCPTXBase  uint64 `json:"-" gob:"-"` // TCP TX byte baseline (not serialized)
    UDPRXBase  uint64 `json:"-" gob:"-"` // UDP RX byte baseline (not serialized)
    UDPTXBase  uint64 `json:"-" gob:"-"` // UDP TX byte baseline (not serialized)
    TCPRXReset uint64 `json:"-" gob:"-"` // TCP RX reset offset (not serialized)
    TCPTXReset uint64 `json:"-" gob:"-"` // TCP TX reset offset (not serialized)
    UDPRXReset uint64 `json:"-" gob:"-"` // UDP RX reset offset (not serialized)
    UDPTXReset uint64 `json:"-" gob:"-"` // UDP TX reset offset (not serialized)
    cmd     *exec.Cmd     `json:"-" gob:"-"` // command object (not serialized)
    stopped chan struct{} `json:"-" gob:"-"` // stop signal channel (not serialized)
    deleted bool          `json:"-" gob:"-"` // deletion flag (not serialized)
@@ -115,10 +120,17 @@ type Instance struct {
    lastCheckPoint time.Time `json:"-" gob:"-"` // last checkpoint time (not serialized)
}

// Meta holds metadata information
type Meta struct {
    Peer Peer              `json:"peer"` // peer information
    Tags map[string]string `json:"tags"` // tag map
}

// Peer holds peer information
type Peer struct {
    SID   string `json:"sid"`   // service ID
    Type  string `json:"type"`  // service type
    Alias string `json:"alias"` // service alias
}

// InstanceEvent holds instance event information
@@ -250,13 +262,27 @@ func (w *InstanceLogWriter) Write(p []byte) (n int, err error) {

    stats := []*uint64{&w.instance.TCPRX, &w.instance.TCPTX, &w.instance.UDPRX, &w.instance.UDPTX}
    bases := []uint64{w.instance.TCPRXBase, w.instance.TCPTXBase, w.instance.UDPRXBase, w.instance.UDPTXBase}
    resets := []*uint64{&w.instance.TCPRXReset, &w.instance.TCPTXReset, &w.instance.UDPRXReset, &w.instance.UDPTXReset}
    for i, stat := range stats {
        if v, err := strconv.ParseUint(matches[i+6], 10, 64); err == nil {
            // cumulative = baseline + checkpoint value - reset offset
            if v >= *resets[i] {
                *stat = bases[i] + v - *resets[i]
            } else {
                // Restart detected: fall back to baseline + value and clear the offset
                *stat = bases[i] + v
                *resets[i] = 0
            }
        }
    }

    w.instance.lastCheckPoint = time.Now()

    // Automatically restore the running status
    if w.instance.Status == "error" {
        w.instance.Status = "running"
    }

    // Store and emit update events only while the instance is not deleted
    if !w.instance.deleted {
        w.master.instances.Store(w.instanceID, w.instance)

@@ -266,6 +292,13 @@ func (w *InstanceLogWriter) Write(p []byte) (n int, err error) {
        continue
    }

    // Detect instance errors and mark the status
    if w.instance.Status != "error" && !w.instance.deleted &&
        (strings.Contains(line, "Server error:") || strings.Contains(line, "Client error:")) {
        w.instance.Status = "error"
        w.master.instances.Store(w.instanceID, w.instance)
    }

    // Append the instance ID to log output
    fmt.Fprintf(w.target, "%s [%s]\n", line, w.instanceID)
@@ -355,13 +388,26 @@ func (m *Master) Run() {
    if !ok {
        // Create an API Key instance if none exists
        apiKey = &Instance{
            ID:     apiKeyID,
            URL:    generateAPIKey(),
            Config: generateMID(),
            Meta:   Meta{Tags: make(map[string]string)},
        }
        m.instances.Store(apiKeyID, apiKey)
        m.saveState()
        m.logger.Info("API Key created: %v", apiKey.URL)
    } else {
        // Load the alias and master ID from the API Key instance
        m.alias = apiKey.Alias

        if apiKey.Config == "" {
            apiKey.Config = generateMID()
            m.instances.Store(apiKeyID, apiKey)
            m.saveState()
            m.logger.Info("Master ID created: %v", apiKey.Config)
        }
        m.mid = apiKey.Config

        m.logger.Info("API Key loaded: %v", apiKey.URL)
    }
@@ -734,7 +780,10 @@ func (m *Master) loadState() {
        instance.Config = m.generateConfigURL(instance)
    }

    // Initialize the tag map
    if instance.Meta.Tags == nil {
        instance.Meta.Tags = make(map[string]string)
    }

    m.instances.Store(id, instance)

@@ -742,6 +791,7 @@ func (m *Master) loadState() {
    if instance.Restart {
        m.logger.Info("Auto-starting instance: %v [%v]", instance.URL, instance.ID)
        m.startInstance(instance)
        time.Sleep(baseDuration)
    }
}
@@ -784,6 +834,13 @@ func (m *Master) handleInfo(w http.ResponseWriter, r *http.Request) {
    }
    m.alias = reqData.Alias

    // Persist the alias to the API Key instance
    if apiKey, ok := m.findInstance(apiKeyID); ok {
        apiKey.Alias = m.alias
        m.instances.Store(apiKeyID, apiKey)
        go m.saveState()
    }

    writeJSON(w, http.StatusOK, m.getMasterInfo())

default:
@@ -794,6 +851,7 @@ func (m *Master) handleInfo(w http.ResponseWriter, r *http.Request) {
// getMasterInfo returns the full master information
func (m *Master) getMasterInfo() map[string]any {
    info := map[string]any{
        "mid":   m.mid,
        "alias": m.alias,
        "os":    runtime.GOOS,
        "arch":  runtime.GOARCH,
@@ -1013,6 +1071,7 @@ func (m *Master) handleInstances(w http.ResponseWriter, r *http.Request) {
    URL:     m.enhanceURL(reqData.URL, instanceType),
    Status:  "stopped",
    Restart: true,
    Meta:    Meta{Tags: make(map[string]string)},
    stopped: make(chan struct{}),
}
@@ -1078,6 +1137,10 @@ func (m *Master) handlePatchInstance(w http.ResponseWriter, r *http.Request, id
    Alias   string `json:"alias,omitempty"`
    Action  string `json:"action,omitempty"`
    Restart *bool  `json:"restart,omitempty"`
    Meta    *struct {
        Peer *Peer             `json:"peer,omitempty"`
        Tags map[string]string `json:"tags,omitempty"`
    } `json:"meta,omitempty"`
}
if err := json.NewDecoder(r.Body).Decode(&reqData); err == nil {
    if id == apiKeyID {
@@ -1088,31 +1151,6 @@ func (m *Master) handlePatchInstance(w http.ResponseWriter, r *http.Request, id
        m.sendSSEEvent("update", instance)
    }
} else {

    // Update the instance alias
    if reqData.Alias != "" && instance.Alias != reqData.Alias {
        if len(reqData.Alias) > maxValueLen {
@@ -1128,10 +1166,106 @@ func (m *Master) handlePatchInstance(w http.ResponseWriter, r *http.Request, id
        m.sendSSEEvent("update", instance)
    }

    // Handle instance actions
    if reqData.Action != "" {
        // Validate the action
        validActions := map[string]bool{
            "start":   true,
            "stop":    true,
            "restart": true,
            "reset":   true,
        }
        if !validActions[reqData.Action] {
            httpError(w, fmt.Sprintf("Invalid action: %s", reqData.Action), http.StatusBadRequest)
            return
        }

        // Reset traffic statistics
        if reqData.Action == "reset" {
            instance.TCPRXReset = instance.TCPRX - instance.TCPRXBase
            instance.TCPTXReset = instance.TCPTX - instance.TCPTXBase
            instance.UDPRXReset = instance.UDPRX - instance.UDPRXBase
            instance.UDPTXReset = instance.UDPTX - instance.UDPTXBase
            instance.TCPRX = 0
            instance.TCPTX = 0
            instance.UDPRX = 0
            instance.UDPTX = 0
            instance.TCPRXBase = 0
            instance.TCPTXBase = 0
            instance.UDPRXBase = 0
            instance.UDPTXBase = 0
            m.instances.Store(id, instance)
            go m.saveState()
            m.logger.Info("Traffic stats reset: 0 [%v]", instance.ID)

            // Emit a traffic-stats reset event
            m.sendSSEEvent("update", instance)
        } else {
            // Handle start/stop/restart actions
            m.processInstanceAction(instance, reqData.Action)
        }
    }

    // Update the restart policy
    if reqData.Restart != nil && instance.Restart != *reqData.Restart {
        instance.Restart = *reqData.Restart
        m.instances.Store(id, instance)
        go m.saveState()
        m.logger.Info("Restart policy updated: %v [%v]", *reqData.Restart, instance.ID)

        // Emit a restart-policy change event
        m.sendSSEEvent("update", instance)
    }

    // Update metadata
    if reqData.Meta != nil {
        // Validate and update peer information
        if reqData.Meta.Peer != nil {
            if len(reqData.Meta.Peer.SID) > maxValueLen {
                httpError(w, fmt.Sprintf("Meta peer.sid exceeds maximum length %d", maxValueLen), http.StatusBadRequest)
                return
            }
            if len(reqData.Meta.Peer.Type) > maxValueLen {
                httpError(w, fmt.Sprintf("Meta peer.type exceeds maximum length %d", maxValueLen), http.StatusBadRequest)
                return
            }
            if len(reqData.Meta.Peer.Alias) > maxValueLen {
                httpError(w, fmt.Sprintf("Meta peer.alias exceeds maximum length %d", maxValueLen), http.StatusBadRequest)
                return
            }
            instance.Meta.Peer = *reqData.Meta.Peer
        }

        // Validate and update tags
        if reqData.Meta.Tags != nil {
            // Check key uniqueness and key/value lengths
            seen := make(map[string]bool)
            for key, value := range reqData.Meta.Tags {
                if len(key) > maxValueLen {
                    httpError(w, fmt.Sprintf("Meta tag key exceeds maximum length %d", maxValueLen), http.StatusBadRequest)
                    return
                }
                if len(value) > maxValueLen {
                    httpError(w, fmt.Sprintf("Meta tag value exceeds maximum length %d", maxValueLen), http.StatusBadRequest)
                    return
                }
                if seen[key] {
                    httpError(w, fmt.Sprintf("Duplicate meta tag key: %s", key), http.StatusBadRequest)
                    return
                }
                seen[key] = true
            }
            instance.Meta.Tags = reqData.Meta.Tags
        }

        m.instances.Store(id, instance)
        go m.saveState()
        m.logger.Info("Meta updated [%v]", instance.ID)

        // Emit a metadata update event
        m.sendSSEEvent("update", instance)
    }

    }
}
writeJSON(w, http.StatusOK, instance)
@@ -1681,13 +1815,20 @@ func (m *Master) generateConfigURL(instance *Instance) string {
    return parsedURL.String()
}

// generateID generates an instance ID
func generateID() string {
    bytes := make([]byte, 4)
    rand.Read(bytes)
    return hex.EncodeToString(bytes)
}

// generateMID generates a master ID
func generateMID() string {
    bytes := make([]byte, 8)
    rand.Read(bytes)
    return hex.EncodeToString(bytes)
}

// generateAPIKey generates an API Key
func generateAPIKey() string {
    bytes := make([]byte, 16)
@@ -1889,6 +2030,7 @@ func (m *Master) generateOpenAPISpec() string {
    "url": {"type": "string", "description": "Command string or API Key"},
    "config": {"type": "string", "description": "Instance configuration URL"},
    "restart": {"type": "boolean", "description": "Restart policy"},
    "meta": {"$ref": "#/components/schemas/Meta"},
    "mode": {"type": "integer", "description": "Instance mode"},
    "ping": {"type": "integer", "description": "TCPing latency"},
    "pool": {"type": "integer", "description": "Pool active count"},
@@ -1910,7 +2052,8 @@ func (m *Master) generateOpenAPISpec() string {
    "properties": {
        "alias": {"type": "string", "description": "Instance alias"},
        "action": {"type": "string", "enum": ["start", "stop", "restart", "reset"], "description": "Action for the instance"},
        "restart": {"type": "boolean", "description": "Instance restart policy"},
        "meta": {"$ref": "#/components/schemas/Meta"}
    }
},
"PutInstanceRequest": {
@@ -1918,9 +2061,25 @@ func (m *Master) generateOpenAPISpec() string {
    "required": ["url"],
    "properties": {"url": {"type": "string", "description": "New command string(scheme://host:port/host:port)"}}
},
"Meta": {
    "type": "object",
    "properties": {
        "peer": {"$ref": "#/components/schemas/Peer"},
        "tags": {"type": "object", "additionalProperties": {"type": "string"}, "description": "Key-value tags"}
    }
},
"Peer": {
    "type": "object",
    "properties": {
        "sid": {"type": "string", "description": "Service ID"},
        "type": {"type": "string", "description": "Service type"},
        "alias": {"type": "string", "description": "Service alias"}
    }
},
"MasterInfo": {
    "type": "object",
    "properties": {
        "mid": {"type": "string", "description": "Master ID"},
        "alias": {"type": "string", "description": "Master alias"},
        "os": {"type": "string", "description": "Operating system"},
        "arch": {"type": "string", "description": "System architecture"},
@@ -65,7 +65,7 @@ func NewServer(parsedURL *url.URL, tlsCode string, tlsConfig *tls.Config, logger
func (s *Server) Run() {
    logInfo := func(prefix string) {
        s.logger.Info("%v: server://%v@%v/%v?max=%v&mode=%v&read=%v&rate=%v&slot=%v&proxy=%v",
            prefix, s.tunnelKey, s.tunnelTCPAddr, s.getTargetAddrsString(),
            s.maxPoolCapacity, s.runMode, s.readTimeout, s.rateLimit/125000, s.slotLimit, s.proxyProtocol)
    }
    logInfo("Server started")