Using NetworkPolicy in a CCE Cluster
NetworkPolicy is a Kubernetes resource for defining Pod-level network isolation policies. It describes whether a group of Pods is allowed to communicate with other groups of Pods and with other network endpoints. This article demonstrates how to implement NetworkPolicy on CCE using the open-source tools felix or kube-router.
Choose which component to deploy according to the container network mode of your cluster.
felix
Note: felix can only be used with the veth network mode (see the advanced options of the "VPC network" mode).
felix is a component of the open-source container networking project Calico; it runs on every node and is responsible for programming routes, ACLs, and related configuration.
CCE has modified and adapted felix to provide the container network policy feature.
Deploying felix
Deploy felix on a CCE Kubernetes cluster with the following YAML:
---
# Source: calico-felix/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cce-calico-felix
  namespace: kube-system
---
# Source: calico-felix/templates/cce-reserved.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-felix-cce-reserved
  namespace: kube-system
  labels:
    heritage: Helm
    release: RELEASE-NAME
    chart: calico-felix-1.0.0
    app: cce-calico-felix
data:
  hash: "22ec24f7bfe36fe18917ff07659f9e6e3dfd725af4c3371d3e60c7195744bea4"
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
---
# Source: calico-felix/templates/rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cce-calico-felix
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "namespaces", "configmaps", "serviceaccounts"]
    verbs: ["get", "watch", "list", "update"]
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: ["crd.projectcalico.org"]
    resources: ["*"]
    verbs: ["*"]
---
# Source: calico-felix/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cce-calico-felix
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cce-calico-felix
subjects:
  - kind: ServiceAccount
    name: cce-calico-felix
    namespace: kube-system
---
# Source: calico-felix/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cce-calico-felix
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cce-calico-felix
  template:
    metadata:
      labels:
        app: cce-calico-felix
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      hostPID: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - key: node.cloudprovider.kubernetes.io/uninitialized
          value: "true"
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      terminationGracePeriodSeconds: 0
      serviceAccountName: cce-calico-felix
      hostNetwork: true
      containers:
        - name: policy
          image: registry.baidubce.com/cce-plugin-pro/cce-calico-felix:v3.5.8
          command: ["/bin/policyinit.sh"]
          imagePullPolicy: Always
          env:
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: FELIX_INTERFACEPREFIX
              value: veth
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
              host: localhost
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy-config
  namespace: kube-system
  labels:
    app: kube-proxy-config
spec:
  selector:
    matchLabels:
      app: kube-proxy-config
  template:
    metadata:
      labels:
        app: kube-proxy-config
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - operator: "Exists"
      restartPolicy: Always
      hostNetwork: true
      containers:
        - name: busybox
          image: busybox
          command:
            - sh
            - /tmp/update-proxy-yaml.sh
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: etc-k8s
              mountPath: /etc/kubernetes/
            - name: shell
              mountPath: /tmp/
      terminationGracePeriodSeconds: 0
      volumes:
        - name: etc-k8s
          hostPath:
            path: /etc/kubernetes/
            type: "DirectoryOrCreate"
        - name: shell
          configMap:
            name: update-proxy-yaml-shell
            optional: true
            items:
              - key: update-proxy-yaml.sh
                path: update-proxy-yaml.sh
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: update-proxy-yaml-shell
  namespace: kube-system
data:
  update-proxy-yaml.sh: |-
    #!/bin/sh

    if [[ -e /etc/kubernetes/proxy.yaml ]]; then
      sed -i 's/masqueradeAll: true/masqueradeAll: false/g' /etc/kubernetes/proxy.yaml
      if grep -q "masqueradeAll: false" /etc/kubernetes/proxy.yaml; then
        echo "update config successfully"
      else
        exit 1
      fi
    else
      echo "/etc/kubernetes/proxy.yaml not exists"
      exit 1
    fi
    sleep infinity
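Save the manifest above to a file and apply it, then confirm that the DaemonSet has a Pod running on every node. The file name felix.yaml below is only an example:
$kubectl apply -f felix.yaml
$kubectl get daemonset cce-calico-felix -n kube-system
$kubectl get pods -n kube-system -l app=cce-calico-felix -o wide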
kube-router
Note: kube-router can only be used with the kubenet network mode (see the advanced options of the "VPC network" mode).
kube-router is a container networking solution for Kubernetes; its project site is https://www.kube-router.io and its source code is hosted at https://github.com/cloudnativelabs/kube-router.
kube-router provides three major features:
- Pod Networking;
- IPVS/LVS based service proxy;
- Network Policy Controller.
CCE ships its own container networking implementation, so this article only uses kube-router's Network Policy Controller feature.
Deploying kube-router
Deploy kube-router on a CCE Kubernetes cluster with the following YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-router
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-router
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
      - pods
      - services
      - nodes
      - endpoints
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "networking.k8s.io"
    resources:
      - networkpolicies
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - extensions
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-router
subjects:
  - kind: ServiceAccount
    name: kube-router
    namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-router-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-router
data:
  cni-conf.json: |
    {
      "name":"kubernetes",
      "type":"bridge",
      "bridge":"kube-bridge",
      "isDefaultGateway":true,
      "ipam": {
        "type":"host-local"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-router
  namespace: kube-system
  labels:
    k8s-app: kube-router
spec:
  selector:
    matchLabels:
      k8s-app: kube-router
  template:
    metadata:
      labels:
        k8s-app: kube-router
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kube-router
      containers:
        - name: kube-router
          image: registry.baidubce.com/cce-plugin-pro/kube-router:latest
          args: ["--run-router=false", "--run-firewall=true", "--run-service-proxy=false"]
          securityContext:
            privileged: true
          imagePullPolicy: Always
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          livenessProbe:
            httpGet:
              path: /healthz
              port: 20244
            initialDelaySeconds: 10
            periodSeconds: 3
          volumeMounts:
            - name: lib-modules
              mountPath: /lib/modules
              readOnly: true
            - name: cni-conf-dir
              mountPath: /etc/cni/net.d
      initContainers:
        - name: install-cni
          image: registry.baidubce.com/cce-plugin-pro/kube-router-busybox:latest
          imagePullPolicy: Always
          command:
            - /bin/sh
            - -c
            - set -e -x;
              if [ ! -f /etc/cni/net.d/10-kuberouter.conf ]; then
                TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
                cp /etc/kube-router/cni-conf.json ${TMP};
                mv ${TMP} /etc/cni/net.d/10-kuberouter.conf;
              fi
          volumeMounts:
            - name: cni-conf-dir
              mountPath: /etc/cni/net.d
            - name: kube-router-cfg
              mountPath: /etc/kube-router
      hostNetwork: true
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoSchedule
          key: node.kubernetes.io/not-ready
          operator: Exists
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: cni-conf-dir
          hostPath:
            path: /etc/cni/net.d
        - name: kube-router-cfg
          configMap:
            name: kube-router-cfg
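As with felix, save the manifest to a file (kube-router.yaml here is only an example name), apply it, and check that the kube-router Pods are running:
$kubectl apply -f kube-router.yaml
$kubectl get daemonset kube-router -n kube-system
$kubectl get pods -n kube-system -l k8s-app=kube-router -o wide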
Example walkthrough
1. Create namespaces
$kubectl create namespace production
$kubectl create namespace staging
2. Start the nginx service
Create an nginx Deployment in each of the two namespaces:
$kubectl apply -f nginx.yaml --namespace=production
$kubectl apply -f nginx.yaml --namespace=staging
The contents of nginx.yaml are as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: hub.baidubce.com/cce/nginx-alpine-go:latest
          ports:
            - containerPort: 80
Verify that the Pods started successfully:
# staging environment
$kubectl get pods -n staging
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7fbd5f4c55-2xgd4   1/1     Running   0          45s
nginx-deployment-7fbd5f4c55-5xr75   1/1     Running   0          45s
nginx-deployment-7fbd5f4c55-fn6lr   1/1     Running   0          20m

# production environment
$kubectl get pods -n production
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7fbd5f4c55-m764f   1/1     Running   0          10s
nginx-deployment-7fbd5f4c55-pdhhz   1/1     Running   0          10s
nginx-deployment-7fbd5f4c55-r98w5   1/1     Running   0          20m
Without any NetworkPolicy in place, every Pod can reach every other Pod; you can ping a Pod IP directly, as shown below.
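For example, a Pod in production can reach a Pod in staging before any policy is applied (the Pod name and IP come from the listings in this walkthrough; substitute the values from your own cluster):
$kubectl get pods -n staging -o wide
$kubectl exec -it nginx-deployment-7fbd5f4c55-m764f /bin/sh -n production
/ # ping -c 3 172.16.0.92
The ping receives replies, confirming that traffic is allowed by default.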
Testing NetworkPolicy
1. Default deny all ingress traffic
Deny all ingress traffic to Pods in the staging namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
    - Ingress
The fields have the following meanings (a fuller example follows this list):
- podSelector: selects the Pods that the policy applies to;
- policyTypes: the policy types. NetworkPolicy divides traffic into ingress and egress, i.e. inbound and outbound. If omitted, policyTypes defaults to Ingress, plus Egress when the policy contains egress rules;
- ingress: the inbound whitelist. Each rule specifies from (the allowed sources, which can be of three kinds: ipBlock, namespaceSelector, or podSelector) and ports (the allowed destination ports);
- egress: the outbound whitelist, analogous to ingress. Each rule specifies to (the allowed destinations) and ports (the allowed destination ports).
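To illustrate how these fields combine, here is a sketch that is not part of the walkthrough below: it allows the nginx Pods in staging to receive TCP traffic on port 80 only from Pods in a namespace labeled env=production (a label you would add yourself, e.g. kubectl label namespace production env=production), from Pods labeled app=nginx within staging itself, and from the example address range 10.0.0.0/16:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-selected-sources   # example name
  namespace: staging
spec:
  podSelector:
    matchLabels:
      app: nginx                 # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/16    # example CIDR, adjust to your network
        - namespaceSelector:
            matchLabels:
              env: production    # requires labeling the namespace beforehand
        - podSelector:
            matchLabels:
              app: nginx
      ports:
        - protocol: TCP
          port: 80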
After the default-deny policy above is created, Pod IPs in the staging namespace can no longer be reached from any Pod. For example, accessing one from a Pod in production:
$kubectl exec -it nginx-deployment-7fbd5f4c55-m764f /bin/sh -n production
/ # ping 172.16.0.92
PING 172.16.0.92 (172.16.0.92): 56 data bytes
The ping receives no replies, because all ingress traffic to staging is now denied.
2. Default allow all ingress traffic
Allow all ingress traffic to Pods in the staging namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: staging
spec:
  podSelector: {}
  ingress:
    - {}
  policyTypes:
    - Ingress
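Because NetworkPolicies are additive (traffic is permitted as soon as any policy allows it), applying this policy re-opens ingress to staging even if the earlier default-deny policy still exists. Repeating the earlier test should now succeed; the file name below is only an example:
$kubectl apply -f allow-all-ingress.yaml
$kubectl exec -it nginx-deployment-7fbd5f4c55-m764f /bin/sh -n production
/ # ping -c 3 172.16.0.92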
3. Default deny all egress traffic
Deny all egress traffic from Pods in the production namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
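With this policy applied, Pods in production can no longer initiate outbound connections. A quick check from one of the production Pods (names and IP taken from the earlier listings; kubectl exec itself still works because it does not go through the Pod network):
$kubectl exec -it nginx-deployment-7fbd5f4c55-m764f /bin/sh -n production
/ # ping -c 3 172.16.0.92
No replies come back, because all egress traffic from the production namespace is now denied.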
4. Default allow all egress traffic
Allow all egress traffic from Pods in the production namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: production
spec:
  podSelector: {}
  egress:
    - {}
  policyTypes:
    - Egress
5. Default deny all ingress and all egress traffic
Deny both ingress and egress traffic for all Pods (in the namespace where the policy is created):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
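Note that this manifest sets no metadata.namespace, and NetworkPolicy is a namespaced resource, so the policy only affects the namespace it is created in; there is no single cluster-wide deny. To lock down both namespaces in this example you would create it in each one, for instance (the file name is only an example):
$kubectl apply -f default-deny-all.yaml -n staging
$kubectl apply -f default-deny-all.yaml -n production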