feat: update README.md

fengcaiwen, 2023-09-30 22:46:51 +08:00 (committed by naison)
parent fb93cb5748
commit cf138d24b6
2 changed files with 447 additions and 409 deletions

README.md

@@ -67,7 +67,7 @@ kubectl apply -f https://raw.githubusercontent.com/KubeNetworks/kubevpn/master/s
### Connect to k8s cluster network
```shell
➜ ~ kubevpn connect
Password:
start to connect
get cidr from cluster info...
@@ -84,13 +84,13 @@ create roles kubevpn-traffic-manager
create roleBinding kubevpn-traffic-manager
create service kubevpn-traffic-manager
create deployment kubevpn-traffic-manager
pod kubevpn-traffic-manager-66d969fd45-9zlbp is Pending
Container     Reason              Message
control-plane ContainerCreating
vpn           ContainerCreating
webhook       ContainerCreating
pod kubevpn-traffic-manager-66d969fd45-9zlbp is Running
Container     Reason              Message
control-plane ContainerRunning
vpn           ContainerRunning
@@ -107,43 +107,44 @@ dns service ok
➜ ~
```
**After you see this prompt, leave this terminal alone; open a new terminal and continue there.**
```shell
➜ ~ kubectl get pods -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE              NOMINATED NODE   READINESS GATES
authors-dbb57d856-mbgqk                    3/3     Running   0          7d23h   172.29.2.132   192.168.0.5       <none>           <none>
details-7d8b5f6bcf-hcl4t                   1/1     Running   0          61d     172.29.0.77    192.168.104.255   <none>           <none>
kubevpn-traffic-manager-66d969fd45-9zlbp   3/3     Running   0          74s     172.29.2.136   192.168.0.5       <none>           <none>
productpage-788df7ff7f-jpkcs               1/1     Running   0          61d     172.29.2.134   192.168.0.5       <none>           <none>
ratings-77b6cd4499-zvl6c                   1/1     Running   0          61d     172.29.0.86    192.168.104.255   <none>           <none>
reviews-85c88894d9-vgkxd                   1/1     Running   0          24d     172.29.2.249   192.168.0.5       <none>           <none>
```
```shell
➜ ~ ping 172.29.2.134
PING 172.29.2.134 (172.29.2.134): 56 data bytes
64 bytes from 172.29.2.134: icmp_seq=0 ttl=63 time=55.727 ms
64 bytes from 172.29.2.134: icmp_seq=1 ttl=63 time=56.270 ms
64 bytes from 172.29.2.134: icmp_seq=2 ttl=63 time=55.228 ms
64 bytes from 172.29.2.134: icmp_seq=3 ttl=63 time=54.293 ms
^C
--- 172.29.2.134 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 54.293/55.380/56.270/0.728 ms
```
```shell
➜ ~ kubectl get services -o wide
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                              AGE     SELECTOR
authors                   ClusterIP   172.21.5.160   <none>        9080/TCP                             114d    app=authors
details                   ClusterIP   172.21.6.183   <none>        9080/TCP                             114d    app=details
kubernetes                ClusterIP   172.21.0.1     <none>        443/TCP                              319d    <none>
kubevpn-traffic-manager   ClusterIP   172.21.2.86    <none>        8422/UDP,10800/TCP,9002/TCP,80/TCP   2m28s   app=kubevpn-traffic-manager
productpage               ClusterIP   172.21.10.49   <none>        9080/TCP                             114d    app=productpage
ratings                   ClusterIP   172.21.3.247   <none>        9080/TCP                             114d    app=ratings
reviews                   ClusterIP   172.21.8.24    <none>        9080/TCP                             114d    app=reviews
```
```shell
➜ ~ curl 172.21.10.49:9080
<!DOCTYPE html>
<html>
<head>
@@ -183,30 +184,19 @@ reviews ClusterIP 172.27.255.155 <none> 9080/TCP 9m6s app=
```shell
➜ ~ kubevpn proxy deployment/productpage
already connect to cluster
start to create remote inbound pod for deployment/productpage
workload default/deployment/productpage is controlled by a controller
rollout status for deployment/productpage
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
deployment "productpage" successfully rolled out
rollout status for deployment/productpage successfully
create remote inbound pod for deployment/productpage successfully
+---------------------------------------------------------------------------+
| Now you can access resources in the kubernetes cluster, enjoy it :)       |
+---------------------------------------------------------------------------+
➜ ~
```
```go
@@ -238,30 +228,19 @@ Support HTTP, GRPC and WebSocket etc. with specific header `"a: 1"` will route t
```shell
➜ ~ kubevpn proxy deployment/productpage --headers a=1
already connect to cluster
start to create remote inbound pod for deployment/productpage
patch workload default/deployment/productpage with sidecar
rollout status for deployment/productpage
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
deployment "productpage" successfully rolled out
rollout status for deployment/productpage successfully
create remote inbound pod for deployment/productpage successfully
+---------------------------------------------------------------------------+
| Now you can access resources in the kubernetes cluster, enjoy it :)       |
+---------------------------------------------------------------------------+
➜ ~
```
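With the `--headers a=1` proxy in place, only requests carrying that header are routed to your local copy; requests without it still reach the pod in the cluster. A quick way to verify, as a sketch assuming the `productpage` service from the tables above:
```shell
# Routed to the local workload (header matches the proxy rule)
➜ ~ curl productpage:9080 -H "a: 1"
# Still served by the original pod in the cluster (no header)
➜ ~ curl productpage:9080
```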
```shell
@@ -287,58 +266,72 @@ Run the Kubernetes pod in the local Docker container, and cooperate with the ser
the specified header to the local, or all the traffic to the local.
```shell
➜ ~ kubevpn dev deployment/authors --headers a=1 -it --rm --entrypoint sh
connectting to cluster
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
start to create remote inbound pod for Deployment.apps/authors
patch workload default/Deployment.apps/authors with sidecar
rollout status for Deployment.apps/authors
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
deployment "authors" successfully rolled out
rollout status for Deployment.apps/authors successfully
create remote inbound pod for Deployment.apps/authors successfully
tar: removing leading '/' from member names
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/4563987760170736212:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/4044542168121221027:/var/run/secrets/kubernetes.io/serviceaccount
create docker network 56c25058d4b7498d02c2c2386ccd1b2b127cb02e8a1918d6d24bffd18570200e
Created container: nginx_default_kubevpn_a9a22
Wait container nginx_default_kubevpn_a9a22 to be running...
Container nginx_default_kubevpn_a9a22 is running on port 80/tcp:80 8888/tcp:8888 9080/tcp:9080 now
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Created main container: authors_default_kubevpn_a9a22
/opt/microservices # ls
app
/opt/microservices # ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 nginx: master process nginx -g daemon off;
   29 101       0:00 nginx: worker process
   30 101       0:00 nginx: worker process
   31 101       0:00 nginx: worker process
   32 101       0:00 nginx: worker process
   33 101       0:00 nginx: worker process
   34 root      0:00 {sh} /usr/bin/qemu-x86_64 /bin/sh sh
   44 root      0:00 ps -ef
/opt/microservices # apk add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (8.0.1-r0)
(4/4) Installing curl (8.0.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 8 MiB in 19 packages
/opt/microservices # ./app &
/opt/microservices # 2023/09/30 13:41:58 Start listening http port 9080 ...
/opt/microservices # curl localhost:9080/health
{"status":"Authors is healthy"}/opt/microservices # exit
prepare to exit, cleaning up
update ref count successfully
tun device closed
leave resource: deployments.apps/authors
workload default/deployments.apps/authors is controlled by a controller
leave resource: deployments.apps/authors successfully
clean up successfully
prepare to exit, cleaning up
update ref count successfully
clean up successfully
➜ ~
```
You can see that it starts two containers with Docker, mapping to the pod's two containers, and they share ports with the same
@@ -348,42 +341,44 @@ truly consistent with the kubernetes runtime. Makes develop on local PC comes tr
```shell
➜ ~ docker ps
CONTAINER ID   IMAGE                   COMMAND                  CREATED          STATUS          PORTS                                                                NAMES
afdecf41c08d   naison/authors:latest   "sh"                     37 seconds ago   Up 36 seconds                                                                        authors_default_kubevpn_a9a22
fc04e42799a5   nginx:latest            "/docker-entrypoint.…"   37 seconds ago   Up 37 seconds   0.0.0.0:80->80/tcp, 0.0.0.0:8888->8888/tcp, 0.0.0.0:9080->9080/tcp   nginx_default_kubevpn_a9a22
➜ ~
```
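The two containers can reach each other over localhost because they are joined to the same network namespace. Conceptually it is similar to linking containers by hand with plain Docker; a minimal sketch with hypothetical names, not what kubevpn literally runs:
```shell
# Start one container, then attach a second one to its network and PID namespaces;
# both then share localhost and a process table, like containers in one pod
docker run -d --name demo-nginx nginx:latest
docker run -it --network container:demo-nginx --pid container:demo-nginx alpine sh
```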
If you just want to start up a docker image, you can use a simpler way like this:
```shell
kubevpn dev deployment/authors --no-proxy -it --rm
```
Example
```shell
➜ ~ kubevpn dev deployment/authors --no-proxy -it --rm
connectting to cluster
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
tar: removing leading '/' from member names
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/5631078868924498209:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/1548572512863475037:/var/run/secrets/kubernetes.io/serviceaccount
create docker network 56c25058d4b7498d02c2c2386ccd1b2b127cb02e8a1918d6d24bffd18570200e
Created container: nginx_default_kubevpn_ff34b
Wait container nginx_default_kubevpn_ff34b to be running...
Container nginx_default_kubevpn_ff34b is running on port 80/tcp:80 8888/tcp:8888 9080/tcp:9080 now
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Created main container: authors_default_kubevpn_ff34b
2023/09/30 14:02:31 Start listening http port 9080 ...
```
Now the main process will hang to show you logs.
@@ -402,13 +397,26 @@ need to special parameter `--network` (inner docker) for sharing network and pid
Example:
```shell
docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/config:/root/.kube/config --platform linux/amd64 naison/kubevpn:v1.2.0
```
```shell
➜ ~ docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/vke:/root/.kube/config --platform linux/amd64 naison/kubevpn:v2.0.0
Unable to find image 'naison/kubevpn:v2.0.0' locally
v2.0.0: Pulling from naison/kubevpn
445a6a12be2b: Already exists
bd6c670dd834: Pull complete
64a7297475a2: Pull complete
33fa2e3224db: Pull complete
e008f553422a: Pull complete
5132e0110ddc: Pull complete
5b2243de1f1a: Pull complete
662a712db21d: Pull complete
4f4fb700ef54: Pull complete
33f0298d1d4f: Pull complete
Digest: sha256:115b975a97edd0b41ce7a0bc1d8428e6b8569c91a72fe31ea0bada63c685742e
Status: Downloaded newer image for naison/kubevpn:v2.0.0
root@d0b3dab8912a:/app# kubevpn dev deployment/authors --headers user=naison -it --entrypoint sh
----------------------------------------------------------------------------------
Warn: Use sudo to execute command kubevpn can not use user env KUBECONFIG.
@@ -416,66 +424,47 @@ root@4d0c3c4eae2b:/# kubevpn -n kube-system --image naison/kubevpn:v1.2.0 --head
Current env KUBECONFIG value:
----------------------------------------------------------------------------------
hostname is d0b3dab8912a
connectting to cluster
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
start to create remote inbound pod for Deployment.apps/authors
patch workload default/Deployment.apps/authors with sidecar
rollout status for Deployment.apps/authors
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
deployment "authors" successfully rolled out
rollout status for Deployment.apps/authors successfully
create remote inbound pod for Deployment.apps/authors successfully
tar: removing leading '/' from member names
/tmp/6460902982794789917:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/tmp/5028895788722532426:/var/run/secrets/kubernetes.io/serviceaccount
network mode is container:d0b3dab8912a
Created container: nginx_default_kubevpn_6df63
Wait container nginx_default_kubevpn_6df63 to be running...
Container nginx_default_kubevpn_6df63 is running now
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Created main container: authors_default_kubevpn_6df5f
/opt/microservices # ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 {bash} /usr/bin/qemu-x86_64 /bin/bash /bin/bash
   14 root      0:02 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn kubevpn dev deployment/authors --headers
   25 root      0:01 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn /usr/local/bin/kubevpn daemon
   37 root      0:04 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn /usr/local/bin/kubevpn daemon --sudo
   53 root      0:00 nginx: master process nginx -g daemon off;
(4/4) Installing curl (8.0.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 8 MiB in 19 packages
/opt/microservices # curl localhost:80
<!DOCTYPE html>
@@ -503,13 +492,41 @@ Commercial support is available at
</html>
/opt/microservices # ls
app
/opt/microservices # ls -alh
total 6M
drwxr-xr-x    2 root     root        4.0K Oct 18  2021 .
drwxr-xr-x    1 root     root        4.0K Oct 18  2021 ..
-rwxr-xr-x    1 root     root        6.3M Oct 18  2021 app
/opt/microservices # ./app &
/opt/microservices # 2023/09/30 14:27:32 Start listening http port 9080 ...
/opt/microservices # curl authors:9080/health
/opt/microservices # curl authors:9080/health
{"status":"Authors is healthy"}/opt/microservices #
/opt/microservices # curl localhost:9080/health
{"status":"Authors is healthy"}/opt/microservices # exit
prepare to exit, cleaning up
update ref count successfully
tun device closed
leave resource: deployments.apps/authors
workload default/deployments.apps/authors is controlled by a controller
leave resource: deployments.apps/authors successfully
clean up successfully
prepare to exit, cleaning up
update ref count successfully
clean up successfully
root@d0b3dab8912a:/app# exit
exit
➜ ~
```
```text
➜ ~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1cd576b51b66 naison/authors:latest "sh" 4 minutes ago Up 4 minutes authors_default_kubevpn_6df5f
56a6793df82d nginx:latest "/docker-entrypoint.…" 4 minutes ago Up 4 minutes nginx_default_kubevpn_6df63
d0b3dab8912a naison/kubevpn:v2.0.0 "/bin/bash" 5 minutes ago Up 5 minutes upbeat_noyce
➜ ~
```
### Multiple Protocol
@@ -543,23 +560,23 @@ Answer: here are two solution to solve this problem
Example:
``` shell
➜ ~ kubevpn version
KubeVPN: CLI
    Version: v1.2.0
    DaemonVersion: v1.2.0
    Image: docker.io/naison/kubevpn:v1.2.0
    Branch: feature/daemon
    Git commit: 7c3a87e14e05c238d8fb23548f95fa1dd6e96936
    Built time: 2023-09-30 22:01:51
    Built OS/Arch: darwin/arm64
    Built Go version: go1.20.5
```
Image is `docker.io/naison/kubevpn:v1.2.0`, transfer this image to private docker registry
```text
docker pull docker.io/naison/kubevpn:v1.2.0
docker tag docker.io/naison/kubevpn:v1.2.0 [docker registry]/[namespace]/[repo]:[tag]
docker push [docker registry]/[namespace]/[repo]:[tag]
```
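The transferred image can then be used by passing it to `kubevpn connect` via `--image`, for example:
```shell
kubevpn connect --image [docker registry]/[namespace]/[repo]:[tag]
```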
@@ -578,32 +595,31 @@ pod [kubevpn-traffic-manager] status is Running
Example
```shell
➜ ~ kubevpn connect --transfer-image --image nocalhost-team-docker.pkg.coding.net/nocalhost/public/kubevpn:v1.2.0
v1.2.0: Pulling from naison/kubevpn
Digest: sha256:450446850891eb71925c54a2fab5edb903d71103b485d6a4a16212d25091b5f4
Status: Image is up to date for naison/kubevpn:v1.2.0
The push refers to repository [nocalhost-team-docker.pkg.coding.net/nocalhost/public/kubevpn]
ecc065754c15: Preparing
f2b6c07cb397: Pushed
448eaa16d666: Pushed
f5507edfc283: Pushed
3b6ea9aa4889: Pushed
ecc065754c15: Pushed
feda785382bb: Pushed
v1.2.0: digest: sha256:85d29ebb53af7d95b9137f8e743d49cbc16eff1cdb9983128ab6e46e0c25892c size: 2000
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
+---------------------------------------------------------------------------+
| Now you can access resources in the kubernetes cluster, enjoy it :)       |
+---------------------------------------------------------------------------+
➜ ~
```
### 2, When using `kubevpn dev`, got error code 137, how to resolve?

README_ZH.md

@@ -66,76 +66,83 @@ kubectl apply -f https://raw.githubusercontent.com/KubeNetworks/kubevpn/master/s
```shell
➜ ~ kubevpn connect
Password:
start to connect
get cidr from cluster info...
get cidr from cluster info ok
get cidr from cni...
wait pod cni-net-dir-kubevpn to be running timeout, reason , ignore
get cidr from svc...
get cidr from svc ok
get cidr successfully
traffic manager not exist, try to create it...
label namespace default
create serviceAccount kubevpn-traffic-manager
create roles kubevpn-traffic-manager
create roleBinding kubevpn-traffic-manager
create service kubevpn-traffic-manager
create deployment kubevpn-traffic-manager
pod kubevpn-traffic-manager-66d969fd45-9zlbp is Pending
Container     Reason              Message
control-plane ContainerCreating
vpn           ContainerCreating
webhook       ContainerCreating
pod kubevpn-traffic-manager-66d969fd45-9zlbp is Running
Container     Reason              Message
control-plane ContainerRunning
vpn           ContainerRunning
webhook       ContainerRunning
Creating mutatingWebhook_configuration for kubevpn-traffic-manager
update ref count successfully
port forward ready
tunnel connected
dns service ok
+---------------------------------------------------------------------------+
| Now you can access resources in the kubernetes cluster, enjoy it :)       |
+---------------------------------------------------------------------------+
➜ ~
```
**After you see this prompt, do not close the current terminal; open a new terminal and run the next steps there.**
```shell
➜ ~ kubectl get pods -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE              NOMINATED NODE   READINESS GATES
authors-dbb57d856-mbgqk                    3/3     Running   0          7d23h   172.29.2.132   192.168.0.5       <none>           <none>
details-7d8b5f6bcf-hcl4t                   1/1     Running   0          61d     172.29.0.77    192.168.104.255   <none>           <none>
kubevpn-traffic-manager-66d969fd45-9zlbp   3/3     Running   0          74s     172.29.2.136   192.168.0.5       <none>           <none>
productpage-788df7ff7f-jpkcs               1/1     Running   0          61d     172.29.2.134   192.168.0.5       <none>           <none>
ratings-77b6cd4499-zvl6c                   1/1     Running   0          61d     172.29.0.86    192.168.104.255   <none>           <none>
reviews-85c88894d9-vgkxd                   1/1     Running   0          24d     172.29.2.249   192.168.0.5       <none>           <none>
```
```shell
➜ ~ ping 172.29.2.134
PING 172.29.2.134 (172.29.2.134): 56 data bytes
64 bytes from 172.29.2.134: icmp_seq=0 ttl=63 time=55.727 ms
64 bytes from 172.29.2.134: icmp_seq=1 ttl=63 time=56.270 ms
64 bytes from 172.29.2.134: icmp_seq=2 ttl=63 time=55.228 ms
64 bytes from 172.29.2.134: icmp_seq=3 ttl=63 time=54.293 ms
^C
--- 172.29.2.134 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 54.293/55.380/56.270/0.728 ms
```
```shell
➜ ~ kubectl get services -o wide
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                              AGE     SELECTOR
authors                   ClusterIP   172.21.5.160   <none>        9080/TCP                             114d    app=authors
details                   ClusterIP   172.21.6.183   <none>        9080/TCP                             114d    app=details
kubernetes                ClusterIP   172.21.0.1     <none>        443/TCP                              319d    <none>
kubevpn-traffic-manager   ClusterIP   172.21.2.86    <none>        8422/UDP,10800/TCP,9002/TCP,80/TCP   2m28s   app=kubevpn-traffic-manager
productpage               ClusterIP   172.21.10.49   <none>        9080/TCP                             114d    app=productpage
ratings                   ClusterIP   172.21.3.247   <none>        9080/TCP                             114d    app=ratings
reviews                   ClusterIP   172.21.8.24    <none>        9080/TCP                             114d    app=reviews
```
```shell
➜ ~ curl 172.21.10.49:9080
<!DOCTYPE html>
<html>
<head>
@@ -175,30 +182,19 @@ reviews ClusterIP 172.27.255.155 <none> 9080/TCP 9m6s app=
```shell
➜ ~ kubevpn proxy deployment/productpage
already connect to cluster
start to create remote inbound pod for deployment/productpage
workload default/deployment/productpage is controlled by a controller
rollout status for deployment/productpage
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
deployment "productpage" successfully rolled out
rollout status for deployment/productpage successfully
create remote inbound pod for deployment/productpage successfully
+---------------------------------------------------------------------------+
| Now you can access resources in the kubernetes cluster, enjoy it :)       |
+---------------------------------------------------------------------------+
➜ ~
```
```go
@@ -230,30 +226,19 @@ Hello world!%
```shell
➜ ~ kubevpn proxy deployment/productpage --headers a=1
already connect to cluster
start to create remote inbound pod for deployment/productpage
patch workload default/deployment/productpage with sidecar
rollout status for deployment/productpage
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
deployment "productpage" successfully rolled out
rollout status for deployment/productpage successfully
create remote inbound pod for deployment/productpage successfully
+---------------------------------------------------------------------------+
| Now you can access resources in the kubernetes cluster, enjoy it :)       |
+---------------------------------------------------------------------------+
➜ ~
```
```shell
@@ -278,58 +263,72 @@ Hello world!%
Run the Kubernetes pod in a local Docker container, and with the service mesh intercept traffic carrying the specified header, or all traffic, to the local machine. This development mode depends on local Docker.
```shell
➜ ~ kubevpn dev deployment/authors --headers a=1 -it --rm --entrypoint sh
connectting to cluster
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
start to create remote inbound pod for Deployment.apps/authors
patch workload default/Deployment.apps/authors with sidecar
rollout status for Deployment.apps/authors
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
deployment "authors" successfully rolled out
rollout status for Deployment.apps/authors successfully
create remote inbound pod for Deployment.apps/authors successfully
tar: removing leading '/' from member names
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/4563987760170736212:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/4044542168121221027:/var/run/secrets/kubernetes.io/serviceaccount
create docker network 56c25058d4b7498d02c2c2386ccd1b2b127cb02e8a1918d6d24bffd18570200e
Created container: nginx_default_kubevpn_a9a22
Wait container nginx_default_kubevpn_a9a22 to be running...
Container nginx_default_kubevpn_a9a22 is running on port 80/tcp:80 8888/tcp:8888 9080/tcp:9080 now
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Created main container: authors_default_kubevpn_a9a22
/opt/microservices # ls
app
/opt/microservices # ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 nginx: master process nginx -g daemon off;
   29 101       0:00 nginx: worker process
   30 101       0:00 nginx: worker process
   31 101       0:00 nginx: worker process
   32 101       0:00 nginx: worker process
   33 101       0:00 nginx: worker process
   34 root      0:00 {sh} /usr/bin/qemu-x86_64 /bin/sh sh
   44 root      0:00 ps -ef
/opt/microservices # apk add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (8.0.1-r0)
(4/4) Installing curl (8.0.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 8 MiB in 19 packages
/opt/microservices # ./app &
/opt/microservices # 2023/09/30 13:41:58 Start listening http port 9080 ...
/opt/microservices # curl localhost:9080/health
{"status":"Authors is healthy"}/opt/microservices # exit
prepare to exit, cleaning up
update ref count successfully
tun device closed
leave resource: deployments.apps/authors
workload default/deployments.apps/authors is controlled by a controller
leave resource: deployments.apps/authors successfully
clean up successfully
prepare to exit, cleaning up
update ref count successfully
clean up successfully
➜ ~
```
At this point two containers are started locally, corresponding to the two containers of the pod; they share ports, so you can access the other container directly via localhost:port,
@@ -337,42 +336,44 @@ clean up successfully
```shell
➜ ~ docker ps
CONTAINER ID   IMAGE                   COMMAND                  CREATED          STATUS          PORTS                                                                NAMES
afdecf41c08d   naison/authors:latest   "sh"                     37 seconds ago   Up 36 seconds                                                                        authors_default_kubevpn_a9a22
fc04e42799a5   nginx:latest            "/docker-entrypoint.…"   37 seconds ago   Up 37 seconds   0.0.0.0:80->80/tcp, 0.0.0.0:8888->8888/tcp, 0.0.0.0:9080->9080/tcp   nginx_default_kubevpn_a9a22
➜ ~
```
If you just want to start the image locally, there is a simpler way:
```shell
kubevpn dev deployment/authors --no-proxy -it --rm
```
For example:
```shell
➜ ~ kubevpn dev deployment/authors --no-proxy -it --rm
connectting to cluster
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
tar: removing leading '/' from member names
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/5631078868924498209:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/1548572512863475037:/var/run/secrets/kubernetes.io/serviceaccount
create docker network 56c25058d4b7498d02c2c2386ccd1b2b127cb02e8a1918d6d24bffd18570200e
Created container: nginx_default_kubevpn_ff34b
Wait container nginx_default_kubevpn_ff34b to be running...
Container nginx_default_kubevpn_ff34b is running on port 80/tcp:80 8888/tcp:8888 9080/tcp:9080 now
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Created main container: authors_default_kubevpn_ff34b
2023/09/30 14:02:31 Start listening http port 9080 ...
```
At this point the process will hang, showing logs by default.
@@ -388,13 +389,26 @@ Created container: authors_default_kubevpn_08ab9
For example:
```shell
docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/config:/root/.kube/config --platform linux/amd64 naison/kubevpn:v1.2.0
```
```shell
➜ ~ docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/vke:/root/.kube/config --platform linux/amd64 naison/kubevpn:v2.0.0
Unable to find image 'naison/kubevpn:v2.0.0' locally
v2.0.0: Pulling from naison/kubevpn
445a6a12be2b: Already exists
bd6c670dd834: Pull complete
64a7297475a2: Pull complete
33fa2e3224db: Pull complete
e008f553422a: Pull complete
5132e0110ddc: Pull complete
5b2243de1f1a: Pull complete
662a712db21d: Pull complete
4f4fb700ef54: Pull complete
33f0298d1d4f: Pull complete
Digest: sha256:115b975a97edd0b41ce7a0bc1d8428e6b8569c91a72fe31ea0bada63c685742e
Status: Downloaded newer image for naison/kubevpn:v2.0.0
root@d0b3dab8912a:/app# kubevpn dev deployment/authors --headers user=naison -it --entrypoint sh
----------------------------------------------------------------------------------
Warn: Use sudo to execute command kubevpn can not use user env KUBECONFIG.
@@ -402,66 +416,47 @@ root@4d0c3c4eae2b:/# kubevpn -n kube-system --image naison/kubevpn:v1.2.0 --head
Current env KUBECONFIG value:
----------------------------------------------------------------------------------
hostname is d0b3dab8912a
connectting to cluster
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
start to create remote inbound pod for Deployment.apps/authors
patch workload default/Deployment.apps/authors with sidecar
rollout status for Deployment.apps/authors
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
deployment "authors" successfully rolled out
rollout status for Deployment.apps/authors successfully
create remote inbound pod for Deployment.apps/authors successfully
tar: removing leading '/' from member names
/tmp/6460902982794789917:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/tmp/5028895788722532426:/var/run/secrets/kubernetes.io/serviceaccount
network mode is container:d0b3dab8912a
Created container: nginx_default_kubevpn_6df63
Wait container nginx_default_kubevpn_6df63 to be running...
Container nginx_default_kubevpn_6df63 is running now
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Created main container: authors_default_kubevpn_6df5f
/opt/microservices # ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 {bash} /usr/bin/qemu-x86_64 /bin/bash /bin/bash
   14 root      0:02 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn kubevpn dev deployment/authors --headers
   25 root      0:01 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn /usr/local/bin/kubevpn daemon
   37 root      0:04 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn /usr/local/bin/kubevpn daemon --sudo
   53 root      0:00 nginx: master process nginx -g daemon off;
(4/4) Installing curl (8.0.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 8 MiB in 19 packages
/opt/microservices # curl localhost:80
<!DOCTYPE html>
@@ -489,13 +484,41 @@ Commercial support is available at
</html>
/opt/microservices # ls
app
/opt/microservices # ls -alh
total 6M
drwxr-xr-x    2 root     root        4.0K Oct 18  2021 .
drwxr-xr-x    1 root     root        4.0K Oct 18  2021 ..
-rwxr-xr-x    1 root     root        6.3M Oct 18  2021 app
/opt/microservices # ./app &
/opt/microservices # 2023/09/30 14:27:32 Start listening http port 9080 ...
/opt/microservices # curl authors:9080/health
/opt/microservices # curl authors:9080/health
{"status":"Authors is healthy"}/opt/microservices #
/opt/microservices # curl localhost:9080/health
{"status":"Authors is healthy"}/opt/microservices # exit
prepare to exit, cleaning up
update ref count successfully
tun device closed
leave resource: deployments.apps/authors
workload default/deployments.apps/authors is controlled by a controller
leave resource: deployments.apps/authors successfully
clean up successfully
prepare to exit, cleaning up
update ref count successfully
clean up successfully
root@d0b3dab8912a:/app# exit
exit
➜ ~
```
```text
➜ ~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1cd576b51b66 naison/authors:latest "sh" 4 minutes ago Up 4 minutes authors_default_kubevpn_6df5f
56a6793df82d nginx:latest "/docker-entrypoint.…" 4 minutes ago Up 4 minutes nginx_default_kubevpn_6df63
d0b3dab8912a naison/kubevpn:v2.0.0 "/bin/bash" 5 minutes ago Up 5 minutes upbeat_noyce
➜ ~
```
### Multiple Protocol
@@ -527,30 +550,30 @@ Windows
For example:
``` shell
➜ ~ kubevpn version
KubeVPN: CLI
    Version: v1.2.0
    DaemonVersion: v1.2.0
    Image: docker.io/naison/kubevpn:v1.2.0
    Branch: feature/daemon
    Git commit: 7c3a87e14e05c238d8fb23548f95fa1dd6e96936
    Built time: 2023-09-30 22:01:51
    Built OS/Arch: darwin/arm64
    Built Go version: go1.20.5
```
The image is `docker.io/naison/kubevpn:v1.2.0`; transfer this image to your own registry.
```text
docker pull docker.io/naison/kubevpn:v1.2.0
docker tag docker.io/naison/kubevpn:v1.2.0 [docker registry]/[namespace]/[repo]:[tag]
docker push [docker registry]/[namespace]/[repo]:[tag]
```
Then you can use this image, as follows:
```text
➜ ~ kubevpn connect --image [docker registry]/[namespace]/[repo]:[tag]
got cidr from cache
traffic manager not exist, try to create it...
pod [kubevpn-traffic-manager] status is Running
@@ -561,32 +584,31 @@ pod [kubevpn-traffic-manager] status is Running
For example:
```shell
➜ ~ kubevpn connect --transfer-image --image nocalhost-team-docker.pkg.coding.net/nocalhost/public/kubevpn:v1.2.0
v1.2.0: Pulling from naison/kubevpn
Digest: sha256:450446850891eb71925c54a2fab5edb903d71103b485d6a4a16212d25091b5f4
Status: Image is up to date for naison/kubevpn:v1.2.0
The push refers to repository [nocalhost-team-docker.pkg.coding.net/nocalhost/public/kubevpn]
ecc065754c15: Preparing
f2b6c07cb397: Pushed
448eaa16d666: Pushed
f5507edfc283: Pushed
3b6ea9aa4889: Pushed
ecc065754c15: Pushed
feda785382bb: Pushed
v1.2.0: digest: sha256:85d29ebb53af7d95b9137f8e743d49cbc16eff1cdb9983128ab6e46e0c25892c size: 2000
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
+---------------------------------------------------------------------------+
| Now you can access resources in the kubernetes cluster, enjoy it :)       |
+---------------------------------------------------------------------------+
➜ ~
```
### 2, When entering dev mode with `kubevpn dev`, error code 137 appeared; how to resolve it?