春风十里不如你 —— Taozi - kubernetes
https://www.xiongan.host/index.php/tag/kubernetes/

[K8s] Deploying Nginx with a kubectl Deployment
https://www.xiongan.host/index.php/archives/205/
2023-05-10T21:00:59+08:00

Deploying an Nginx service
Overview: use a Deployment to run Nginx and manage it with rolling updates.

Create the Deployment
On the master node, create a labfile/deplofile directory to hold the configuration files; the Deployment YAML files created later are saved there.

[root@master ~]# mkdir labfile
[root@master ~]# cd labfile/
[root@master labfile]# mkdir deplofile
[root@master labfile]# cd deplofile/
[root@master deplofile]# vim nginx-dy.yaml

The deployment file contains the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dy
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Deploy nginx-dy:

[root@master deplofile]# kubectl apply -f nginx-dy.yaml
deployment.apps/nginx-dy created

Check the details: the Deployment has been created and its ReplicaSet exists.

Scaling the Deployment
Edit the nginx-dy.yaml created earlier, change the replica count to 5, and apply the changed YAML file:

[root@master deplofile]# kubectl apply -f nginx-dy.yaml
deployment.apps/nginx-dy configured
[root@master deplofile]# kubectl get pod

Rolling upgrade of the Deployment
Copy the original Nginx YAML into two new versions:

[root@master deplofile]# cp nginx-dy.yaml nginx-dy-v2.yaml
[root@master deplofile]# cp nginx-dy.yaml nginx-dy-v3.yaml

After changing the image version in each copy (the edit itself is sketched at the end of this post), perform the rolling update:

[root@master deplofile]# kubectl apply -f nginx-dy-v2.yaml --record

Check the update status; the earlier output shows the pre-update version. Looking at the ReplicaSets, there is now a new one holding the 5 pods and the original pods are gone. The Deployment's update events can also be inspected.

Update to v3:

[root@master deplofile]# kubectl apply -f nginx-dy-v3.yaml --record

View the Deployment's rollout history:

[root@master deplofile]# kubectl rollout history deployment nginx-dy

View the details of revision 2:

[root@master deplofile]# kubectl rollout history deployment nginx-dy --revision=2

Roll back to revision 2:

[root@master deplofile]# kubectl rollout undo deployment nginx-dy --to-revision=2

The Deployment is now back on revision 2.

Delete the Deployment:

[root@master deplofile]# kubectl delete deployment nginx-dy

Lab exercise: inspect Deployment information and build httpd from a YAML file
Create a Deployment from a YAML file with the following requirements: use the httpd:2.4 image and run 4 replicas.

[root@master deplofile]# vim httpd-v1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-dy
  labels:
    app: httpd
spec:
  replicas: 4
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4
        ports:
        - containerPort: 8080

Create it. Then remove one pod from this Deployment by editing the YAML file down to 3 replicas and applying the change.

Upgrade the Deployment's image to latest: copy the v1 YAML to a v2 version, change the image tag, apply it, and confirm the image version is now latest.

Finally, find which node each pod of the Deployment runs on, and the Deployment's creation timestamp (Creation Timestamp).
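The post leaves the commands for this last task to the reader. A minimal sketch of one way to do it, assuming the Deployment name httpd-dy and the label app=httpd from the YAML above:

[root@master deplofile]# kubectl get pod -l app=httpd -o wide        # the NODE column shows the node each pod runs on
[root@master deplofile]# kubectl describe deployment httpd-dy | grep CreationTimestamp
[root@master deplofile]# kubectl get deployment httpd-dy -o jsonpath='{.metadata.creationTimestamp}'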
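As an aside on the rolling-upgrade steps earlier in this post: the text copies nginx-dy.yaml to nginx-dy-v2.yaml and nginx-dy-v3.yaml but never shows the edit in between, which is presumably just a change of the container image tag. A sketch of the only lines that would differ, using nginx:1.8.1 and nginx:1.9.1 purely as assumed example tags:

# in nginx-dy-v2.yaml (assumed example tag)
        image: nginx:1.8.1
# in nginx-dy-v3.yaml (assumed example tag)
        image: nginx:1.9.1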
Deploying a k8s cluster with kubeadm
https://www.xiongan.host/index.php/archives/186/
2022-12-07T08:58:00+08:00

Environment preparation
First node (master): 192.168.123.200 master
Second node (worker): 192.168.123.201 slave
The following files need to be downloaded separately from the cloud-drive link in the original post.

Configure /etc/hosts name resolution (both nodes)

[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.123.200 master-tz
192.168.123.201 slave01-tz

Disable the firewall, SELinux and swap (both nodes)
1. systemctl disable firewalld --now
2. setenforce 0
3. In /etc/selinux/config, change the relevant line to SELINUX=disabled
4. swapoff -a
5. Comment out the swap line in /etc/fstab:
   #/dev/mapper/centos-swap swap swap defaults 0 0

Configure kernel parameters so that traffic crossing the bridge also goes through iptables/netfilter (both nodes)

[root@master ~]# modprobe br_netfilter
[root@master ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf

Install basic packages; the virtual machines must be able to reach the Internet (both nodes)

yum -y install wget vim ntpdate

Configure time synchronization:

ntpdate ntp1.aliyun.com

Configure the yum repositories (both nodes)

[root@master ~]# rm -rf /etc/yum.repos.d/*
[root@master ~]# wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
[root@master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@master ~]# wget -O kubernetes.sh https://www.xiongan.host/sh/kubernetes.sh && sh kubernetes.sh

Enable ipvs (both nodes)
Upload ipvs.modules to the /etc/sysconfig/modules directory, then:

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
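The ipvs.modules script itself is not shown in the post; it comes from the downloaded lab files. A typical version of such a script, given here as an assumption rather than the exact file used, simply loads the IPVS-related kernel modules:

#!/bin/bash
# load the kernel modules kube-proxy needs for IPVS mode
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4    # named nf_conntrack on newer kernels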
Install kubeadm and related packages (both nodes)

yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
systemctl enable kubelet

Note: the docker version is 20.10.8.
Note on what each package does:
kubeadm: a tool used to initialize the k8s cluster.
kubelet: installed on every node of the cluster; it starts the Pods.
kubectl: used to deploy and manage applications, view all kinds of resources, and create, delete and update components.

Initialize the k8s cluster with kubeadm
Upload k8simage-1-20-6.tar.gz to both nodes and load it:

docker load -i k8simage-1-20-6.tar.gz

Run kubeadm on the master node (master node only):

[root@master ~]# kubeadm init --kubernetes-version=1.20.6 --apiserver-advertise-address=192.168.123.200 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification

--kubernetes-version specifies the k8s version.
--apiserver-advertise-address must be specified if the master node has more than one network interface.
--pod-network-cidr specifies the pod network range.
--image-repository registry.aliyuncs.com/google_containers manually sets the image repository to registry.aliyuncs.com/google_containers. kubeadm pulls images from k8s.gcr.io by default, but k8s.gcr.io is not reachable here, so the images are pulled from registry.aliyuncs.com/google_containers instead.

Set up kubectl's config file, which in effect authorizes kubectl so that it can use this certificate to manage the k8s cluster:

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

On the slave node, join the cluster using the join command printed by kubeadm init:

[root@slave01-tz ~]# kubeadm join 192.168.123.200:6443 --token d32tmx.utjgdkqxhy9sk517 \
> --discovery-token-ca-cert-hash sha256:d6a0bb61368c23be10444d7a18eab071b750c97c45186020980714fd57b13bdd

Check the nodes again on the master:

[root@master-tz ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
master-tz    NotReady   control-plane,master   8m31s   v1.20.6
slave01-tz   NotReady   <none>                 12s     v1.20.6

The cluster is still in the NotReady state because no network plugin has been installed yet.
To add more nodes later (master node): run kubeadm token create --print-join-command on the master and execute the printed command on the new node.

Install the Calico network component (master node)
Upload calico.yaml to the master node:

[root@master ~]# kubectl apply -f calico.yaml

Running kubectl get nodes again now shows the nodes in the Ready state.

Install the dashboard (master node)
Upload dashboard_2_0_0.tar.gz and metrics-scrapter-1-0-1.tar.gz to both nodes, and kubernetes-dashboard.yaml to the master node.

[root@master ~]# docker load -i dashboard_2_0_0.tar.gz
[root@master ~]# docker load -i metrics-scrapter-1-0-1.tar.gz
[root@master ~]# kubectl apply -f kubernetes-dashboard.yaml
[root@master-tz ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7445d59dfd-p572g   1/1     Running   0          10s
kubernetes-dashboard-54f5b6dc4b-5zxpm        1/1     Running   0          10s

This shows the dashboard was installed successfully.

Look at the dashboard's Service:

[root@master-tz ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.101.56.238   <none>        8000/TCP   97s
kubernetes-dashboard        ClusterIP   10.97.126.230   <none>        443/TCP    97s

Change the Service type to NodePort:

[root@master ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
[root@master ~]# kubectl get svc -n kubernetes-dashboard

Then access it from a browser: https://192.168.123.200:30245

Log in to the dashboard with a token (master node)
Create an administrator token that can view any namespace and manage all resource objects:

[root@master-tz ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created

[root@master-tz ~]# kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-scvqs                kubernetes.io/service-account-token   3      14m
kubernetes-dashboard-certs         Opaque                                0      14m
kubernetes-dashboard-csrf          Opaque                                1      14m
kubernetes-dashboard-key-holder    Opaque                                2      14m
kubernetes-dashboard-token-bs98s   kubernetes.io/service-account-token   3      14m

[root@master-tz ~]# kubectl describe secret kubernetes-dashboard-token-bs98s -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-bs98s
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: d0842b14-e79e-4129-b6e3-bfd3c7039334
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkdHTy1lQ2tndl9qQ29INUtEMEREMW1iUWhWeENOODB1Q2lOOERSYnN6OTQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1iczk4cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQwODQyYjE0LWU3OWUtNDEyOS1iNmUzLWJmZDNjNzAzOTMzNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.f6iGY-QbB5YQFuaTkU6qR9UBTbFiIcDbpgT40E_ceQZGh3kdWyKzeTB-pWUkrJV1gWFaQt3Er7_brB-T7juO8eywunXkE6Xd_xH7XzaiWbNYFYfr3gMMXI8SmbnpqDKHclqw_tUIgun37ao7YYY_22_mYDdcTSIVFvx9XehK48eJWVfdyy-snuZiTKoR2pKMH0Rau3oXKlw7is8bV7yezeucZnaMPa60N-1KIMAvRM7gXlMX9m_BKiqvxEoru-2FDEoOkiCFXV-juGclxM_Qtn70i9R2JVjPgE5VX_gP7RFHDoXIEwykyjJqOg2fguE9Vy8nKnrfOo0c99aGXxnW_g

Log in with this token value to test the dashboard.

Create an Nginx deployment to test the cluster
Pull the nginx image:

docker pull nginx

Create the nginx application:

kubectl create deployment ngix-deployment1 --image nginx --port=80 --replicas=2

Create the Service:

kubectl expose deployment ngix-deployment1 --name=nginx --port=80 --target-port=80 --type=NodePort

Access the nginx service:

[root@master-tz ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        109m
nginx        NodePort    10.110.242.218   <none>        80:31079/TCP   9s
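As a quick final check (not part of the original post), the NodePort shown above can be curled from any machine that can reach the node; 31079 is simply whatever port the cluster happened to assign here:

[root@master-tz ~]# curl http://192.168.123.200:31079
# expect the default "Welcome to nginx!" page in the response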