A First Look at a Kubernetes Cluster

Overview:

Servers

Nodes

I prepared three nodes.

Operating system: AlmaLinux 9

Allocation: 1 master, 2 workers

192.168.137.11 k8s-master-01
192.168.137.20 k8s-worker-01
192.168.137.21 k8s-worker-02
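
sealos only needs the IP addresses, but it is convenient when every node resolves the same hostnames. One option (not required by sealos) is to append the entries above to /etc/hosts on each machine:

cat >> /etc/hosts <<EOF
192.168.137.11 k8s-master-01
192.168.137.20 k8s-worker-01
192.168.137.21 k8s-worker-02
EOF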

Initialization

yum update -y

yum install tar mdadm socat net-tools git vim nano wget mlocate bash-completion tree -y

Installation

sealos run labring/kubernetes:v1.24.3 labring/cilium:v1.12.1 --masters 192.168.137.11 --nodes 192.168.137.20,192.168.137.21 -p <cluster server password>
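
If the cluster needs to grow later, sealos can add nodes in place. A sketch, assuming the new machine (the IP below is hypothetical) is reachable with the same SSH credentials sealos recorded for the cluster:

sealos add --nodes 192.168.137.22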

Cluster node status

[root@k8s-master-01 ~]# kubectl get node -A
NAME            STATUS   ROLES           AGE     VERSION
k8s-master-01   Ready    control-plane   3m23s   v1.24.3
k8s-worker-01   Ready    <none>          3m1s    v1.24.3
k8s-worker-02   Ready    <none>          3m      v1.24.3

Cluster pod status

[root@k8s-master-01 ~]# kubectl get pod -A
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   cilium-operator-7bffd48d4d-gklh9        1/1     Running   0          2m48s
kube-system   cilium-qlc2p                            1/1     Running   0          2m48s
kube-system   cilium-xgnfh                            1/1     Running   0          2m48s
kube-system   cilium-zkf9p                            1/1     Running   0          2m48s
kube-system   coredns-6d4b75cb6d-kvq92                1/1     Running   0          2m59s
kube-system   coredns-6d4b75cb6d-p8hpj                1/1     Running   0          2m59s
kube-system   etcd-k8s-master-01                      1/1     Running   0          3m15s
kube-system   kube-apiserver-k8s-master-01            1/1     Running   0          3m15s
kube-system   kube-controller-manager-k8s-master-01   1/1     Running   0          3m15s
kube-system   kube-proxy-9twf9                        1/1     Running   0          3m
kube-system   kube-proxy-jb9dz                        1/1     Running   0          2m54s
kube-system   kube-proxy-pz4m9                        1/1     Running   0          2m55s
kube-system   kube-scheduler-k8s-master-01            1/1     Running   0          3m13s
kube-system   kube-sealyun-lvscare-k8s-worker-01      1/1     Running   0          2m35s
kube-system   kube-sealyun-lvscare-k8s-worker-02      1/1     Running   0          2m34s

Cilium status

[root@k8s-master-01 ~]# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
Containers:       cilium             Running: 3
                  cilium-operator    Running: 1
Cluster Pods:     2/2 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.1: 3
                  cilium-operator    quay.io/cilium/operator:v1.12.1: 1
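
Beyond cilium status, the cilium CLI also ships a connectivity test that spins up probe pods and exercises pod-to-pod and pod-to-service traffic. A quick sanity check, assuming the CLI on the master includes it (the name of the test namespace may vary by CLI version):

cilium connectivity test
# clean up the probe pods once the test has finished
kubectl delete namespace cilium-test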

Initial configuration

Set the node roles

kubectl label node k8s-worker-01 kubernetes.io/role=worker
kubectl label node k8s-worker-02 kubernetes.io/role=worker
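
The ROLES column in kubectl get node is rendered from node labels, so the change can be checked directly, and undone with a trailing dash if needed:

kubectl get node k8s-worker-01 --show-labels | grep role
kubectl label node k8s-worker-01 kubernetes.io/role-   # removes the label again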

The change took effect:

[root@k8s-master-01 ~]# kubectl get node -A
NAME            STATUS   ROLES           AGE     VERSION
k8s-master-01   Ready    control-plane   5m12s   v1.24.3
k8s-worker-01   Ready    worker          4m50s   v1.24.3
k8s-worker-02   Ready    worker          4m49s   v1.24.3

Default storage

Kubernetes supports several kinds of storage: local storage (hostPath/emptyDir), network storage (iscsi/nfs), distributed network storage (glusterfs/rbd/cephfs), and cloud storage.
By default, files written inside a container are lost whenever the container is recreated. To avoid this, files that need to survive are usually stored in a persistent storage location outside the container.

Install dependencies

I chose NFS as the default storage.

sealos exec "yum install nfs-utils -y" # every node in the cluster needs this package
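
The same sealos exec fan-out can confirm the package really landed on every node:

sealos exec "rpm -q nfs-utils"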

Configure the share

NFS server host: since my machines are limited, the NFS service is provided by k8s-master-01. Shared directory: on the NFS server host, add the directory to share and the allowed network segment to /etc/exports.

/data/share 192.168.137.0/24(insecure,rw,async,no_root_squash)
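
The exported directory has to exist on the NFS host; create it on k8s-master-01 (and reload the export table if nfs-server is already running):

mkdir -p /data/share
exportfs -ra   # only needed when nfs-server is already running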

Enable and start the NFS service on k8s-master-01 (the host that holds the NFS disk):

systemctl enable nfs-server
systemctl start nfs-server
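
To confirm the export is visible, check on the master and from any worker (showmount is part of nfs-utils, which was installed on every node above):

exportfs -v                   # on k8s-master-01
showmount -e 192.168.137.11   # from any worker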

Install the NFS provisioner

Prepare the tooling

$ sealos run labring/helm:v3.8.2
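
A quick check that the helm binary is now available on the master:

helm version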

Install

Note: the NFS provisioner image used here is a mirror of the upstream image (k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2).

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set image.repository=ka1i137/sig-storage-nfs-subdir-external-provisioner \
    --set image.tag=v4.0.2 \
    --set storageClass.name=nfs-client \
    --set storageClass.defaultClass=true \
    --set nfs.server=192.168.137.11 \
    --set nfs.path=/data/share
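
The chart deploys a single provisioner pod into whatever namespace helm was pointed at (default here, since no -n was given); it should reach Running before the StorageClass is usable:

helm list
kubectl get pod | grep nfs-subdir-external-provisioner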

Verify

[root@k8s-master-01 data]# kubectl get sc -A
NAME                   PROVISIONER                                      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   28s
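
Since nfs-client is marked as the default StorageClass, a PVC with no storageClassName should bind automatically, and a backing subdirectory should appear under /data/share. A minimal sketch (the PVC name test-pvc is just an example):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-pvc    # STATUS should turn Bound within a few seconds
kubectl delete pvc test-pvc # clean up the test claim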

Deploy a service

Create a namespace

kubectl create namespace app

Write the YAML

whoami.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: app
  labels:
    app: whoami

spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: app
spec:
  type: NodePort
  ports:
  - name: http-80
    port: 80
    nodePort: 30888
    targetPort: 80
  selector:
    app: whoami

Deploy

[root@k8s-master-01 app]# kubectl apply -f whoami.yaml
deployment.apps/whoami created
service/whoami created
[root@k8s-master-01 app]# kubectl get pod -A | grep "^app"
app           whoami-6bbfdbb69c-tss9n   0/1   ContainerCreating   0     22s
app           whoami-6bbfdbb69c-v26d4   0/1   ContainerCreating   0     22s
[root@k8s-master-01 app]# kubectl get svc -n app
NAME     TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
whoami   NodePort   10.96.1.5    <none>        80:30888/TCP   36s

Verify the deployment

[root@k8s-master-01 app]# curl 192.168.137.11:30888 # any of the cluster node IPs works
Hostname: whoami-6bbfdbb69c-v26d4
IP: 127.0.0.1
IP: ::1
IP: 10.0.2.67
IP: fe80::5019:c7ff:fe03:42e9
RemoteAddr: 10.0.0.220:45186
GET / HTTP/1.1
Host: 192.168.137.11:30888
User-Agent: curl/7.76.1
Accept: */*
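
Hitting the NodePort a few more times should show Hostname switching between the two replicas, confirming that the Service load-balances across them:

for i in $(seq 1 4); do curl -s 192.168.137.11:30888 | grep Hostname; done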

Very Nice!