Mirror of https://github.com/kubenetworks/kubevpn.git, synced 2025-09-26 19:31:17 +08:00
refactor: refactor log make it more formal (#314)
README.md
@@ -96,41 +96,37 @@ kubectl delete -f https://raw.githubusercontent.com/kubenetworks/kubevpn/master/

 ```shell
 ➜ ~ kubevpn connect
 Password:
-start to connect
-get cidr from cluster info...
-get cidr from cluster info ok
-get cidr from cni...
-wait pod cni-net-dir-kubevpn to be running timeout, reason , ignore
-get cidr from svc...
-get cidr from svc ok
-get cidr successfully
-traffic manager not exist, try to create it...
-label namespace default
-create serviceAccount kubevpn-traffic-manager
-create roles kubevpn-traffic-manager
-create roleBinding kubevpn-traffic-manager
-create service kubevpn-traffic-manager
-create deployment kubevpn-traffic-manager
-pod kubevpn-traffic-manager-66d969fd45-9zlbp is Pending
+Starting connect
+Getting network CIDR from cluster info...
+Getting network CIDR from CNI...
+Getting network CIDR from services...
+Labeling Namespace default
+Creating ServiceAccount kubevpn-traffic-manager
+Creating Roles kubevpn-traffic-manager
+Creating RoleBinding kubevpn-traffic-manager
+Creating Service kubevpn-traffic-manager
+Creating MutatingWebhookConfiguration kubevpn-traffic-manager
+Creating Deployment kubevpn-traffic-manager
+
+Pod kubevpn-traffic-manager-66d969fd45-9zlbp is Pending
+Container     Reason             Message
+control-plane ContainerCreating
+vpn           ContainerCreating
+webhook       ContainerCreating
+
-pod kubevpn-traffic-manager-66d969fd45-9zlbp is Running
+Pod kubevpn-traffic-manager-66d969fd45-9zlbp is Running
+Container     Reason             Message
+control-plane ContainerRunning
+vpn           ContainerRunning
+webhook       ContainerRunning
+
-Creating mutatingWebhook_configuration for kubevpn-traffic-manager
-update ref count successfully
-port forward ready
-tunnel connected
-dns service ok
-+---------------------------------------------------------------------------+
-| Now you can access resources in the kubernetes cluster, enjoy it :)        |
-+---------------------------------------------------------------------------+
+Forwarding port...
+Connected tunnel
+Adding route...
+Configured DNS service
++----------------------------------------------------------+
+| Now you can access resources in the kubernetes cluster ! |
++----------------------------------------------------------+
 ➜ ~
 ```

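After `kubevpn connect` succeeds, in-cluster DNS names resolve from the local machine. A minimal sketch of a sanity check; the service name and port are illustrative, borrowed from the Bookinfo demo used later in this README:

```shell
# Hypothetical check: hit an in-cluster service from the laptop.
# "productpage" on port 9080 is the Bookinfo demo service, not created by connect.
➜ ~ curl productpage.default.svc.cluster.local:9080
```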
@@ -230,18 +226,16 @@ ID Mode Cluster Kubeconfig Namespace Status

 ```shell
 ➜ ~ kubevpn connect -n default --kubeconfig ~/.kube/dev_config --lite
-start to connect
-got cidr from cache
-get cidr successfully
-update ref count successfully
-traffic manager already exist, reuse it
-port forward ready
-tunnel connected
-adding route...
-dns service ok
-+---------------------------------------------------------------------------+
-| Now you can access resources in the kubernetes cluster, enjoy it :)        |
-+---------------------------------------------------------------------------+
+Starting connect
+Got network CIDR from cache
+Use exist traffic manager
+Forwarding port...
+Connected tunnel
+Adding route...
+Configured DNS service
++----------------------------------------------------------+
+| Now you can access resources in the kubernetes cluster ! |
++----------------------------------------------------------+
 ```

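With two clusters connected, the current state can be inspected; a sketch assuming a `kubevpn status` subcommand whose columns are the ones shown in the hunk context above (row values elided):

```shell
# List active connections; columns match the status table in this README.
➜ ~ kubevpn status
ID Mode Cluster Kubeconfig Namespace Status
...
```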
 ```shell
@@ -256,18 +250,15 @@ ID Mode Cluster Kubeconfig Namespace Status

 ```shell
 ➜ ~ kubevpn proxy deployment/productpage
-already connect to cluster
-start to create remote inbound pod for deployment/productpage
-workload default/deployment/productpage is controlled by a controller
-rollout status for deployment/productpage
+Connected to cluster
+Injecting inbound sidecar for deployment/productpage
+Checking rollout status for deployment/productpage
 Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
 Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
 deployment "productpage" successfully rolled out
-rollout status for deployment/productpage successfully
-create remote inbound pod for deployment/productpage successfully
-+---------------------------------------------------------------------------+
-| Now you can access resources in the kubernetes cluster, enjoy it :)        |
-+---------------------------------------------------------------------------+
+Rollout successfully for deployment/productpage
++----------------------------------------------------------+
+| Now you can access resources in the kubernetes cluster ! |
++----------------------------------------------------------+
 ➜ ~
 ```

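Once the proxy is active, traffic to the workload is intercepted and answered by the local service. A minimal sketch of exercising it; 9080 is the Bookinfo demo port used elsewhere in this README:

```shell
# With the proxy in place, this request is served by the local process.
➜ ~ curl productpage:9080
```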
@@ -348,18 +339,15 @@ Support HTTP, GRPC and WebSocket etc. with specific header `"a: 1"` will route t

 ```shell
 ➜ ~ kubevpn proxy deployment/productpage --headers a=1
-already connect to cluster
-start to create remote inbound pod for deployment/productpage
-patch workload default/deployment/productpage with sidecar
-rollout status for deployment/productpage
+Connected to cluster
+Injecting inbound sidecar for deployment/productpage
+Checking rollout status for deployment/productpage
 Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
 Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
 deployment "productpage" successfully rolled out
-rollout status for deployment/productpage successfully
-create remote inbound pod for deployment/productpage successfully
-+---------------------------------------------------------------------------+
-| Now you can access resources in the kubernetes cluster, enjoy it :)        |
-+---------------------------------------------------------------------------+
+Rollout successfully for deployment/productpage
++----------------------------------------------------------+
+| Now you can access resources in the kubernetes cluster ! |
++----------------------------------------------------------+
 ➜ ~
 ```

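With `--headers a=1`, only requests carrying the header `a: 1` are routed to the local machine; everything else keeps hitting the pod in the cluster. An illustrative pair of requests:

```shell
# Routed to the local service (header matches):
➜ ~ curl productpage:9080 -H "a: 1"
# Still served by the in-cluster pod (no matching header):
➜ ~ curl productpage:9080
```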
@@ -389,9 +377,12 @@ If you want to cancel proxy, just run command:

 ```shell
 ➜ ~ kubevpn leave deployments/productpage
-leave workload deployments/productpage
-workload default/deployments/productpage is controlled by a controller
-leave workload deployments/productpage successfully
+Leaving workload deployments/productpage
+Checking rollout status for deployments/productpage
+Waiting for deployment "productpage" rollout to finish: 0 out of 1 new replicas have been updated...
+Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
+Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
+Rollout successfully for deployments/productpage
 ```

### Dev mode in local Docker 🐳

@@ -401,23 +392,20 @@ the specified header to the local, or all the traffic to the local.

 ```shell
 ➜ ~ kubevpn dev deployment/authors --headers a=1 --entrypoint sh
-connectting to cluster
-start to connect
-got cidr from cache
-get cidr successfully
-update ref count successfully
-traffic manager already exist, reuse it
-port forward ready
-tunnel connected
-dns service ok
-start to create remote inbound pod for Deployment.apps/authors
-patch workload default/Deployment.apps/authors with sidecar
-rollout status for Deployment.apps/authors
-Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
+Starting connect
+Got network CIDR from cache
+Use exist traffic manager
+Forwarding port...
+Connected tunnel
+Adding route...
+Configured DNS service
+Injecting inbound sidecar for deployment/authors
+Patching workload deployment/authors
+Checking rollout status for deployment/authors
+Waiting for deployment "authors" rollout to finish: 0 out of 1 new replicas have been updated...
+Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
 deployment "authors" successfully rolled out
-rollout status for Deployment.apps/authors successfully
-create remote inbound pod for Deployment.apps/authors successfully
+Rollout successfully for Deployment.apps/authors
-tar: removing leading '/' from member names
-/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/4563987760170736212:/var/run/secrets/kubernetes.io/serviceaccount
 tar: Removing leading `/' from member names
@@ -457,16 +445,14 @@ OK: 8 MiB in 19 packages
 {"status":"Authors is healthy"} /opt/microservices # echo "continue testing pod access..."
 continue testing pod access...
 /opt/microservices # exit
-prepare to exit, cleaning up
-update ref count successfully
-tun device closed
-leave resource: deployments.apps/authors
-workload default/deployments.apps/authors is controlled by a controller
-leave resource: deployments.apps/authors successfully
-clean up successfully
-prepare to exit, cleaning up
-update ref count successfully
-clean up successfully
+Created container: default_authors
+Wait container default_authors to be running...
+Container default_authors is running now
+Disconnecting from the cluster...
+Leaving workload deployments.apps/authors
+Disconnecting from the cluster...
+Performing cleanup operations
+Clearing DNS settings
 ➜ ~
 ```

@@ -507,15 +493,13 @@ Example:

 ```shell
 ➜ ~ kubevpn dev deployment/authors --no-proxy
-connectting to cluster
-start to connect
-got cidr from cache
-get cidr successfully
-update ref count successfully
-traffic manager already exist, reuse it
-port forward ready
-tunnel connected
-dns service ok
+Starting connect
+Got network CIDR from cache
+Use exist traffic manager
+Forwarding port...
+Connected tunnel
+Adding route...
+Configured DNS service
-tar: removing leading '/' from member names
-/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/5631078868924498209:/var/run/secrets/kubernetes.io/serviceaccount
 tar: Removing leading `/' from member names
@@ -533,7 +517,7 @@ Created main container: authors_default_kubevpn_ff34b

 Now the main process will block and show you the logs.

-If you want to specify the image to start the container locally, you can use the parameter `--docker-image`. When the
+If you want to specify the image to start the container locally, you can use the parameter `--dev-image`. When the
 image does not exist locally, it will be pulled from the corresponding image registry. If you want to specify startup
 parameters, you can use the `--entrypoint` parameter, replacing it with the command you want to execute, such
 as `--entrypoint /bin/bash`; for more parameters, see `kubevpn dev --help`.
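A minimal sketch combining the two flags described above; the image tag is a placeholder, not from the original:

```shell
# Start the local dev container from a custom image with an interactive shell.
# ubuntu:22.04 is illustrative; any locally runnable image works.
➜ ~ kubevpn dev deployment/authors --dev-image ubuntu:22.04 --entrypoint /bin/bash
```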
@@ -548,50 +532,46 @@ need to special parameter `--network` (inner docker) for sharing network and pid

 Example:

 ```shell
-docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/config:/root/.kube/config --platform linux/amd64 naison/kubevpn:v2.0.0
+docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/config:/root/.kube/config --platform linux/amd64 naison/kubevpn:latest
 ```

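At the plain Docker level, sharing another container's network and PID namespaces uses the standard `--network container:<id>` and `--pid container:<id>` flags; a hedged sketch, where the container ID is illustrative (taken from the session below) and nginx stands in for any inner container:

```shell
# Join the kubevpn container's namespaces so this container rides its tunnel.
docker run -it --network container:d0b3dab8912a --pid container:d0b3dab8912a nginx:latest
```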
 ```shell
-➜ ~ docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/vke:/root/.kube/config --platform linux/amd64 naison/kubevpn:v2.0.0
-Unable to find image 'naison/kubevpn:v2.0.0' locally
-v2.0.0: Pulling from naison/kubevpn
-445a6a12be2b: Already exists
-bd6c670dd834: Pull complete
-64a7297475a2: Pull complete
-33fa2e3224db: Pull complete
-e008f553422a: Pull complete
-5132e0110ddc: Pull complete
-5b2243de1f1a: Pull complete
-662a712db21d: Pull complete
-4f4fb700ef54: Pull complete
-33f0298d1d4f: Pull complete
-Digest: sha256:115b975a97edd0b41ce7a0bc1d8428e6b8569c91a72fe31ea0bada63c685742e
-Status: Downloaded newer image for naison/kubevpn:v2.0.0
-root@d0b3dab8912a:/app# kubevpn dev deployment/authors --headers user=naison --entrypoint sh
-hostname is d0b3dab8912a
-connectting to cluster
-start to connect
-got cidr from cache
-get cidr successfully
-update ref count successfully
-traffic manager already exist, reuse it
-port forward ready
-tunnel connected
-dns service ok
-start to create remote inbound pod for Deployment.apps/authors
-patch workload default/Deployment.apps/authors with sidecar
-rollout status for Deployment.apps/authors
-Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
+➜ ~ docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/vke:/root/.kube/config --platform linux/amd64 naison/kubevpn:latest
+Unable to find image 'naison/kubevpn:latest' locally
+latest: Pulling from naison/kubevpn
+9c704ecd0c69: Already exists
+4987d0a976b5: Pull complete
+8aa94c4fc048: Pull complete
+526fee014382: Pull complete
+6c1c2bedceb6: Pull complete
+97ac845120c5: Pull complete
+ca82aef6a9eb: Pull complete
+1fd9534c7596: Pull complete
+588bd802eb9c: Pull complete
+Digest: sha256:368db2e0d98f6866dcefd60512960ce1310e85c24a398fea2a347905ced9507d
+Status: Downloaded newer image for naison/kubevpn:latest
+WARNING: image with reference naison/kubevpn was found but does not match the specified platform: wanted linux/amd64, actual: linux/arm64
+root@5732124e6447:/app# kubevpn dev deployment/authors --headers user=naison --entrypoint sh
+hostname is 5732124e6447
+Starting connect
+Got network CIDR from cache
+Use exist traffic manager
+Forwarding port...
+Connected tunnel
+Adding route...
+Configured DNS service
+Injecting inbound sidecar for deployment/authors
+Patching workload deployment/authors
+Checking rollout status for deployment/authors
+Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
 deployment "authors" successfully rolled out
-rollout status for Deployment.apps/authors successfully
-create remote inbound pod for Deployment.apps/authors successfully
+Rollout successfully for Deployment.apps/authors
-tar: removing leading '/' from member names
-/tmp/6460902982794789917:/var/run/secrets/kubernetes.io/serviceaccount
 tar: Removing leading `/' from member names
 tar: Removing leading `/' from hard link targets
 /tmp/5028895788722532426:/var/run/secrets/kubernetes.io/serviceaccount
-network mode is container:d0b3dab8912a
+Network mode is container:d0b3dab8912a
 Created container: nginx_default_kubevpn_6df63
 Wait container nginx_default_kubevpn_6df63 to be running...
 Container nginx_default_kubevpn_6df63 is running now
@@ -651,16 +631,14 @@ Hello world!/opt/microservices #
 Hello world!/opt/microservices #
 /opt/microservices # curl localhost:9080/health
 {"status":"Authors is healthy"}/opt/microservices # exit
-prepare to exit, cleaning up
-update ref count successfully
-tun device closed
-leave resource: deployments.apps/authors
-workload default/deployments.apps/authors is controlled by a controller
-leave resource: deployments.apps/authors successfully
-clean up successfully
-prepare to exit, cleaning up
-update ref count successfully
-clean up successfully
+Created container: default_authors
+Wait container default_authors to be running...
+Container default_authors is running now
+Disconnecting from the cluster...
+Leaving workload deployments.apps/authors
+Disconnecting from the cluster...
+Performing cleanup operations
+Clearing DNS settings
 root@d0b3dab8912a:/app# exit
 exit
 ➜ ~
@@ -673,7 +651,7 @@ during test, check what container is running
 CONTAINER ID   IMAGE                   COMMAND                  CREATED         STATUS         PORTS   NAMES
 1cd576b51b66   naison/authors:latest   "sh"                     4 minutes ago   Up 4 minutes           authors_default_kubevpn_6df5f
 56a6793df82d   nginx:latest            "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes           nginx_default_kubevpn_6df63
-d0b3dab8912a   naison/kubevpn:v2.0.0   "/bin/bash"              5 minutes ago   Up 5 minutes           upbeat_noyce
+d0b3dab8912a   naison/kubevpn:v2.0.0   "/bin/bash"              5 minutes ago   Up 5 minutes           upbeat_noyce
 ➜ ~
 ```

@@ -734,9 +712,10 @@ Then you can use this image, as follows:

 ```text
 ➜ ~ kubevpn connect --image [docker registry]/[namespace]/[repo]:[tag]
-got cidr from cache
-traffic manager not exist, try to create it...
-pod [kubevpn-traffic-manager] status is Running
+Starting connect
+Getting network CIDR from cluster info...
+Getting network CIDR from CNI...
+Getting network CIDR from services...
+...
 ```

@@ -758,24 +737,23 @@ f5507edfc283: Pushed
 ecc065754c15: Pushed
 feda785382bb: Pushed
 v2.0.0: digest: sha256:85d29ebb53af7d95b9137f8e743d49cbc16eff1cdb9983128ab6e46e0c25892c size: 2000
-start to connect
-got cidr from cache
-get cidr successfully
-update ref count successfully
-traffic manager already exist, reuse it
-port forward ready
-tunnel connected
-dns service ok
-+---------------------------------------------------------------------------+
-| Now you can access resources in the kubernetes cluster, enjoy it :)        |
-+---------------------------------------------------------------------------+
+Starting connect
+Got network CIDR from cache
+Use exist traffic manager
+Forwarding port...
+Connected tunnel
+Adding route...
+Configured DNS service
++----------------------------------------------------------+
+| Now you can access resources in the kubernetes cluster ! |
++----------------------------------------------------------+
 ➜ ~
 ```

### 2. When using `kubevpn dev` you get error code 137; how to resolve it?

 ```text
-dns service ok
+Configured DNS service
 tar: Removing leading `/' from member names
 tar: Removing leading `/' from hard link targets
 /var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/7375606548554947868:/var/run/secrets/kubernetes.io/serviceaccount
@@ -783,11 +761,8 @@ Created container: server_vke-system_kubevpn_0db84
 Wait container server_vke-system_kubevpn_0db84 to be running...
 Container server_vke-system_kubevpn_0db84 is running on port 8888/tcp: 6789/tcp:6789 now
 $ Status: , Code: 137
-prepare to exit, cleaning up
-port-forward occurs error, err: lost connection to pod, retrying
-update ref count successfully
-ref-count is zero, prepare to clean up resource
-clean up successfully
+Performing cleanup operations
+Clearing DNS settings
 ```

This is because the resources allocated to your docker-desktop are less than what the running pod requests, so the container was OOM killed; increase the CPU and memory available to docker-desktop.
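For reference, exit code 137 is 128 + 9, i.e. the process was terminated with SIGKILL, which is what the kernel OOM killer sends. One way to check how much memory the Docker VM currently has (a hedged sketch; the output is the total in bytes):

```shell
# Total memory available to the Docker daemon/VM, in bytes.
docker info --format '{{.MemTotal}}'
```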