目录
项目名称
项目架构图
项目环境
项目概述
项目准备
项目步骤
一、修改每台主机的ip地址同时设置永久关闭防火墙和selinux修改好主机名在firewalld服务器上开启路由功能并配置snat策略。
1. 在firewalld服务器上配置ip地址、设置永久关闭防火墙和selinux并修改好主机名
2. 在firewalld服务器上开启路由功能并配置snat策略使内网服务器能上网
3. 配置剩下的服务器的ip地址永久关闭防火墙和selinux并修改好主机名
二、部署dockerk8s环境实现1个master和2个node节点的k8s集群
1. 在k8s集群那3台服务器上安装好docker这里根据官方文档进行安装
2. 创建k8s集群这里采用 kubeadm方式安装
2.1 确认docker已经安装好启动docker并且设置开机启动
2.2 配置 Docker使用systemd作为默认Cgroup驱动
2.3 关闭swap分区
2.4 修改hosts文件和内核会读取的参数文件
2.5 安装kubeadm,kubelet和kubectl
2.6 部署Kubernetes Master
2.7 node节点服务器加入k8s集群
2.8 安装网络插件flannel
2.9 查看集群状态
三、编译安装nginx制作自己的镜像并上传到docker hub上给node节点下载使用
1. 在master建立一个一键安装nginx的脚本
2. 建立一个Dockerfile文件
3. 创建镜像
4. 将制作的镜像推送到docker hub上供node节点下载
5. node节点去docker hub上拉取这个镜像
四、创建NFS服务器为所有的节点提供相同Web数据结合使用pvpvc和卷挂载保障数据的一致性并用探针对pod中容器的状态进行检测
1. 用ansible部署nfs服务器环境
1.1 在ansible服务器上对k8s集群和nfs服务器建立免密通道
1.2 安装ansible自动化运维工具在ansible服务器上并写好主机清单
1.3 编写安装nfs脚本
1.4 编写playbook实现nfs安装部署
1.5 检查yaml文件语法
1.6 执行yaml文件
1.7 验证nfs是否安装成功
2. 将web数据页面挂载到容器上并使用探针技术对容器状态进行检查
2.1 创建web页面数据文件
2.1.1 先在nfs服务器上创建web页面数据共享文件
2.2 创建nginx.conf配置文件
2.2.1 先再nfs服务器上下载nginx使用前面的一键编译安装nginx的脚本下载得到nginx.conf配置文件
2.2.2 修改nginx.conf的配置文件添加就绪探针和存活性探针的位置块
2.3 编辑/etc/exports文件并让其生效
2.4 挂载web页面数据文件
2.4.1在master服务器上创建pv
2.4.2 在master服务器上创建pvc,用来使用pv
2.5 挂载nginx.conf配置文件
2.5.1在master服务器上创建pv
2.5.2 在master服务器上创建pvc,用来使用pv
2.6 在master服务器上创建pod使用pvc
2.7 创建service服务发布出去
2.8 在firewalld服务器上配置dnat策略将web服务发布出去
2.9 测试访问
五、采用HPA技术当cpu使用率达到40%的时候pod进行自动水平扩缩最小10个最多20个pod
1. 安装metrics服务
2. 配置HPA当cpu使用率达到50%的时候pod进行自动水平扩缩最小20个最多40个pod
2.1 在原来的deployment yaml文件中配置资源请求
2.2 创建hpa
3. 对集群进行压力测试
3.1 在其他机器上安装ab软件
3.2 对该集群进行ab压力测试
4. 查看hpa效果观察变化
5. 观察集群性能
6. 优化整个web集群
六、使用ingress对象结合ingress-controller给web业务实现负载均衡功能
1. 用ansible部署ingress环境
1.1 将配置ingress controller需要的配置文件传入ansible服务器上
1.2 编写拉取ingress镜像的脚本
1.3 编写playbook实现ingress controller的安装部署
1.4 查看是否成功
2. 执行ingress-controller-deploy.yaml 文件去启动ingress controller
3. 启用ingress 关联ingress controller 和service
3.1 编写ingrss的yaml文件
3.2 执行文件
3.3 查看效果
3.4 查看ingress controller 里的nginx.conf 文件里是否有ingress对应的规则
4. 测试访问
4.1 获取ingress controller对应的service暴露宿主机的端口
4.2 在其他的宿主机或者windows机器上使用域名进行访问
4.2.1 修改host文件
4.2.1 测试访问
5. 启动第2个服务和pod
6. 再次测试访问查看www.xin.com的是否能够访问到
七、在k8s集群里部署Prometheus对web业务进行监控结合Grafana成图工具进行数据展示
1. 搭建prometheus监控k8s集群
1.1 采用daemonset方式部署node-exporter
1.2 部署Prometheus
1.3 测试
2. 搭建garafana结合prometheus出图
2.1 部署grafana
2.2 测试
2.2.1 增添Prometheus数据源
2.2.2 导入模板
2.3 出图效果
八、构建CI/CD环境使用gitlab集成Jenkins、Harbor构建pipeline流水线工作实现自动相关拉取代码、镜像制作、上传镜像等功能
1. 部署gitlab环境
1.1 安装gitlab
1.1.1设置gitlab的yum源使用清华镜像源安装GitLab
1.1.2 安装 gitlab
1.1.3 配置GitLab站点Url
1.2 启动并访问GitLab
1.2.1 重新配置并启动
1.2.2 在firewalld服务器上配置dnat策略使windows能访问进来
1.2.3 在window上访问
1.2.4 配置默认访问密码
1.2.5 登录访问
1.3 配置使用自己创建的用户登录
2. 部署jenkins环境
2.1 先到官网下载通用java项目war包建议选择LTS长期支持版
2.2 下载javajdk11以上版本并安装安装后配置jdk的环境变量
2.2.1 yum安装
2.2.2 查找JAVA安装目录
2.2.3 配置环境变量
2.3 将刚刚下载下来的jenkins.war包传入服务器
2.4 启动jenkins服务
2.5 测试访问
3. 部署harbor环境
3.1 安装docker、docker-compose
3.1.1 安装docker
3.1.2 安装docker-compose
3.2 安装harbor
3.2.1 下载harbor的源码上传到linux服务器
3.2.2 解压并修改内容
3.3 登录harbor
4. gitlab集成jenkins、harbor构建pipeline流水线任务实现相关拉取代码、镜像制作、上传镜像等流水线工作
4.1 jenkins服务器上需要安装docker且配置可登录Harbor服务拉取镜像
4.1.1 jenkins服务器上安装docker
4.1.2 jenkins服务器上配置可登录Harbor服务
4.1.3 测试登录
4.2 在jenkins上安装git
4.3 在jenkins上安装maven
4.3.1 下载安装包
4.3.2 解压下载的包
4.3.3 配置环境变量
4.3.4 mvn校验
4.4 gitlab中创建测试项目
4.5 在harbor上新建dev项目
4.6 在Jenkins页面中配置JDK和Maven
4.7 在Jenkins开发视图中创建流水线任务pipeline
4.7.1 流水线任务需要编写pipeline脚本编写脚本的第一步应该是拉取gitlab中的项目
4.7.2 编写pipeline
5. 验证
九、部署跳板机限制用户访问内部网络的权限
1. 在firewalld上配置dnat策略实现用户ssh到firewalld服务后自动转入到跳板机服务器
2. 在跳板机服务器上配置只允许192.168.31.0/24网段的用户ssh进来
3. 将跳板机与内网其他服务器都建立免密通道
4. 验证
十、安装zabbix对所有服务器进行监控，监控其CPU、内存、网络带宽等
十一、使用ab软件对整个k8s集群和相关服务器进行压力测试
1. 安装ab软件
2. 测试
项目遇到的问题
1. 重启服务器后发现除了firewalld服务器其他服务器的xshell连接不上了
2. pod启动不起来发现是pvc与pv的绑定出错了原因是pvc和pv的yaml文件中的storageClassName不一致
3. 测试访问时发现访问的内容不足自己设置的即web数据文件挂载失败但是nginx.conf配置文件挂载成功
4. pipeline执行最后一步报错
5. pipeline执行最后一步报错登录不了harbor
项目心得
项目名称
基于SNAT、DNAT发布内网K8S及Jenkins、gitlab、Harbor模拟CI/CD的综合项目
项目架构图
项目环境
centos 7.9
docker 24.0.5
docker compose 2.7.0
kubelet 1.23.6
kubeadm 1.23.6
kubectl 1.23.6
nginx 1.21.1
ansible
ingress
prometheus
grafana
zabbix
gitlab
jenkins
harbor
ab
项目概述
项目名称：基于SNAT、DNAT发布内网K8S及Jenkins、gitlab、Harbor模拟CI/CD的综合项目
项目环境：centos 7.9（共11台：3台k8s集群2核2G，1台gitlab 4核8G，其余7台1核1G），docker 24.0.5，nginx 1.21.1，prometheus，grafana，gitlab，Jenkins，Harbor，zabbix，ansible 等
项目描述：本项目模拟企业里的生产环境，通过snat、dnat发布内网服务，部署了一个跳板机限制用户访问内部网络的权限；部署web、nfs、ansible、harbor、zabbix、gitlab、jenkins环境，基于docker+k8s构建一个高可用、高性能的web集群；在k8s中用prometheus+grafana对web集群资源做监控和出图；同时模拟CI/CD流程，深刻体会应用开发中的高度持续自动化。
项目步骤
规划好整个集群架构；部署好防火墙服务器，开启路由功能并配置SNAT策略；使用k8s实现web集群部署(1个master、2个node)；编译安装nginx，制作自己的镜像供web集群内部的服务器使用；部署nfs为web集群所有节点提供相同数据，结合使用pv、pvc和nfs卷挂载保障数据的一致性，同时使用探针技术(就绪探针和存活性探针)对容器状态进行检查；同时配置DNAT策略，让外面用户能访问到web集群的数据；采用HPA技术，当cpu使用率达到40%的时候pod进行自动水平扩缩，最小10个、最多20个pod；使用ingress对象结合ingress-controller给web业务实现基于域名的负载均衡功能；在k8s-web集群里部署Prometheus对web业务进行监控，结合Grafana出图工具进行数据展示；构建CI/CD环境，使用gitlab集成Jenkins、Harbor构建pipeline流水线工作，实现自动拉取代码、镜像制作、上传镜像等功能；部署跳板机，限制用户访问内部网络的权限；使用zabbix对所有web集群外的服务器进行监控，监控其CPU、内存、网络带宽等；使用ab软件对整个集群进行压力测试，了解其系统资源瓶颈。
项目心得
通过网络拓扑图规划整个集群的架构，提高了项目整体的落实和效率；对于k8s的使用和集群的部署更加熟悉；对prometheus+grafana和zabbix两种监控方式理解更深入；通过gitlab集成Jenkins、Harbor构建pipeline流水线工作，深刻体会CI/CD流程的持续自动化。查看日志对排错的帮助很大，提升了自己的trouble shooting的能力。
项目准备
11台Linux服务器，网络模式全部使用桥接模式(其中firewalld要配置两块网卡)，配置好ip地址，修改好主机名，同时关闭防火墙和selinux并设置开机不自启，为后面做项目做好准备，以免影响项目进度。
IP地址                           角色
192.168.31.69、192.168.107.10    firewalld防火墙服务器
192.168.107.11                   master
192.168.107.12                   node1
192.168.107.13                   node2
192.168.107.14                   jump_server跳板机
192.168.107.15                   nfs
192.168.107.16                   zabbix
192.168.107.17                   gitlab
192.168.107.18                   jenkins
192.168.107.19                   harbor
192.168.107.20                   ansible
项目步骤
一、修改每台主机的ip地址，同时设置永久关闭防火墙和selinux，修改好主机名，在firewalld服务器上开启路由功能并配置snat策略。
修改每台主机的ip地址和主机名。本项目所有主机的网络模式为桥接，注意firewalld有两张网卡，要配置两个IP地址。
1. 在firewalld服务器上配置ip地址、设置永久关闭防火墙和selinux并修改好主机名
备注信息只做提示用建议配置时删掉
[root@fiewalld ~]# cd /etc/sysconfig/network-scripts
[root@fiewalld network-scripts]# ls
ifcfg-ens33 ifdown ifdown-ippp ifdown-post ifdown-sit ifdown-tunnel ifup-bnep ifup-ipv6 ifup-plusb ifup-routes ifup-TeamPort init.ipv6-global ifdown-bnep ifdown-ipv6 ifdown-ppp ifdown-Team ifup ifup-eth ifup-isdn ifup-post ifup-sit ifup-tunnel network-functions
ifcfg-lo ifdown-eth ifdown-isdn ifdown-routes ifdown-TeamPort ifup-aliases ifup-ippp ifup-plip ifup-ppp ifup-Team ifup-wireless network-functions-ipv6
[root@fiewalld network-scripts]# vi ifcfg-ens33
BOOTPROTO=none      #将dhcp改为none，为了实验的方便，防止后面由于ip地址改变而出错，将ip地址静态化
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.31.69      #WAN口ip地址
PREFIX=24
GATEWAY=192.168.31.1
DNS1=114.114.114.114
然后配置这台机器的另一个网卡的ip地址：
先复制一个同样的ifcfg-ens33，在同一路径改名为ifcfg-ens36，修改里面的内容如下（LAN口不需要配置网关和dns）：
[root@fiewalld network-scripts]# cp ifcfg-ens33 ifcfg-ens36
[root@fiewalld network-scripts]# ls
ifcfg-ens33 ifdown ifdown-ippp ifdown-post ifdown-sit ifdown-tunnel ifup-bnep ifup-ipv6 ifup-plusb ifup-routes ifup-TeamPort init.ipv6-global
ifcfg-ens36 ifdown-bnep ifdown-ipv6 ifdown-ppp ifdown-Team ifup ifup-eth ifup-isdn ifup-post ifup-sit ifup-tunnel network-functions
ifcfg-lo ifdown-eth ifdown-isdn ifdown-routes ifdown-TeamPort ifup-aliases ifup-ippp ifup-plip ifup-ppp ifup-Team ifup-wireless network-functions-ipv6
[root@fiewalld network-scripts]# vi ifcfg-ens36
BOOTPROTO=none
NAME=ens36
DEVICE=ens36
ONBOOT=yes
IPADDR=192.168.107.10     #LAN口ip地址
PREFIX=24
然后重启网络：
[root@fiewalld network-scripts]# service network restart
查看修改ip地址是否生效，可以看到ip地址配置成功
永久关闭防火墙和selinux
[root@fiewalld ~]# systemctl disable firewalld    #永久关闭防火墙
[root@fiewalld ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled       #修改这里
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
修改主机名：
[root@fiewalld ~]# hostnamectl set-hostname firewalld
[root@fiewalld ~]# su - root
2. 在firewalld服务器上开启路由功能并配置snat策略，使内网服务器能上网
编写一个脚本执行：
[root@fiewalld ~]# vim snat_dnat.sh
#!/bin/bash
iptables -F
iptables -t nat -F

# enable route 开启路由功能
echo 1 > /proc/sys/net/ipv4/ip_forward

# enable snat 让192.168.107.0网段的主机能够通过WAN口上网
iptables -t nat -A POSTROUTING -s 192.168.107.0/24 -o ens33 -j SNAT --to-source 192.168.31.69

执行脚本：
[root@fiewalld ~]# bash snat_dnat.sh
查看是否搭建成功
[root@fiewalld ~]# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
SNAT       all  --  192.168.107.0/24     0.0.0.0/0            to:192.168.31.69
#出现这一条规则说明搭建成功
3. 配置剩下的服务器的ip地址永久关闭防火墙和selinux并修改好主机名
这里以其中一台为例
[root@nfs ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.107.15
PREFIX=24
GATEWAY=192.168.107.10    #注意，这里要以firewalld服务器的LAN口为网关，因为是通过它出去上网
DNS1=114.114.114.114
然后重启网络：
[root@nfs ~]# service network restart
查看修改ip地址是否生效，可以看到ip地址已经修改好了
测试是否能够上网，可见firewalld服务器的snat策略配置成功，内网服务器已经可以上网。
永久关闭防火墙和selinux
[root@nfs ~]# systemctl disable firewalld    #永久关闭防火墙
[root@nfs ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled       #修改这里
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
修改主机名（这里以nfs服务器为例）：
[root@nfs ~]# hostnamectl set-hostname nfs
[root@nfs ~]# su - root
二、部署dockerk8s环境实现1个master和2个node节点的k8s集群
1. 在k8s集群那3台服务器上安装好docker这里根据官方文档进行安装
[root@master ~]# yum remove docker \
                 docker-client \
                 docker-client-latest \
                 docker-common \
                 docker-latest \
                 docker-latest-logrotate \
                 docker-logrotate \
                 docker-engine
[root@master ~]# yum install -y yum-utils
[root@master ~]# yum-config-manager \
                 --add-repo \
                 https://download.docker.com/linux/centos/docker-ce.repo
[root@master ~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
[root@master ~]# systemctl start docker      #启动docker
[root@master ~]# docker --version            #查看docker是否安装成功
Docker version 24.0.5, build ced0996
2. 创建k8s集群，这里采用 kubeadm方式安装
2.1 确认docker已经安装好启动docker并且设置开机启动
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
[root@master ~]# ps aux|grep docker
root       2190  1.4  1.5 1159376 59744 ?      Ssl  16:22   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       2387  0.0  0.0 112824   984 pts/0   S+   16:22   0:00 grep --color=auto docker
2.2 配置 Docker使用systemd作为默认Cgroup驱动
每台服务器上都要操作（master和node上都要操作）
[root@master ~]# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@master ~]# systemctl restart docker    #重启docker
2.3 关闭swap分区
因为k8s不想使用swap分区来存储数据使用swap会降低性能每台服务器都需要操作
[root@master ~]# swapoff -a    #临时关闭
[root@master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    #永久关闭
2.4 修改hosts文件和内核会读取的参数文件
每台机器上的/etc/hosts文件都需要修改：
[root@master ~]# cat >> /etc/hosts <<EOF
192.168.107.11 master
192.168.107.12 node1
192.168.107.13 node2
EOF
修改每台机器上（master和node）的内核参数，永久修改：
[root@master ~]# cat <<EOF >> /etc/sysctl.conf    #追加到内核会读取的参数文件里
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
[root@master ~]# sysctl -p    #让内核重新读取数据，加载生效
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
2.5 安装kubeadm,kubelet和kubectl
kubeadm 是k8s的管理程序，在master上运行，用来建立整个k8s集群，背后是执行了大量的脚本，帮助我们去启动k8s。
kubelet 是在node节点上用来管理容器的代理，是master和node通信用的——管理docker，告诉docker程序去启动容器。它是在集群中每个节点(node)上运行的代理，保证容器(containers)都运行在 Pod 中。kubectl 是在master上用来给node节点发号施令的程序，用来控制node节点，告诉它们做什么事情，是命令行操作的工具。
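下面用几条命令直观感受一下三者的分工（仅为示意）：

# kubeadm：在master上初始化、管理集群（后文 2.6 会用到 kubeadm init）
kubeadm version
# kubelet：每个节点上的常驻代理，由systemd管理
systemctl status kubelet
# kubectl：在master上向集群下发指令的命令行客户端
kubectl get nodes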
添加kubernetes YUM软件源
集群里的每台服务器都需要安装
[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
安装kubeadm,kubelet和kubectl
[rootmaster ~]# yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
#最好指定版本因为1.24的版本默认的容器运行时环境不是docker了
设置开机自启因为kubelet是k8s在node节点上的代理必须开机要运行的
[rootmaster ~]# systemctl enable kubelet
2.6 部署Kubernetes Master
只是master主机执行
提前准备coredns:1.8.4的镜像后面需要使用,需要在每台机器上下载镜像
[rootmaster ~]# docker pull coredns/coredns:1.8.4
[rootmaster ~]# docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4
初始化操作在master服务器上执行
[root@master ~]# kubeadm init \
	--apiserver-advertise-address=192.168.107.11 \
	--image-repository registry.aliyuncs.com/google_containers \
	--service-cidr=10.1.0.0/16 \
	--pod-network-cidr=10.244.0.0/16
# 192.168.107.11 是master的ip
# --service-cidr string        Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")  服务发布暴露--》dnat
# --pod-network-cidr string    Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
执行成功后将下面这段记录下来，为后面node节点加入集群做准备：
kubeadm join 192.168.107.11:6443 --token i25xkd.0xrlqnee2gbky4uv \
	--discovery-token-ca-cert-hash sha256:7384e64dabec0ea4eb9f0b82729aa696f90ae8c8d9f6f7b2c87c33f71c611741
完成初始化的新建目录和文件操作在master上完成
[rootmaster ~]# mkdir -p $HOME/.kube
[rootmaster ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[rootmaster ~]# chown $(id -u):$(id -g) $HOME/.kube/config
2.7 node节点服务器加入k8s集群
测试node1节点是否能和master通信
[rootnode1 ~]# ping master
PING master (192.168.107.24) 56(84) bytes of data.
64 bytes from master (192.168.107.24): icmp_seq1 ttl64 time0.765 ms
64 bytes from master (192.168.107.24): icmp_seq2 ttl64 time1.34 ms
^C
--- master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev 0.765/1.055/1.345/0.290 ms
在所有的node节点上执行
[root@node1 ~]# kubeadm join 192.168.107.11:6443 --token i25xkd.0xrlqnee2gbky4uv \
	--discovery-token-ca-cert-hash sha256:7384e64dabec0ea4eb9f0b82729aa696f90ae8c8d9f6f7b2c87c33f71c611741
在master上查看node是否已经加入集群
[root@master ~]# kubectl get node
NAME     STATUS     ROLES                  AGE    VERSION
master   NotReady   control-plane,master   5m2s   v1.23.6
node1    NotReady   <none>                 61s    v1.23.6
node2    NotReady   <none>                 58s    v1.23.6
2.8 安装网络插件flannel
在master节点执行
实现master上的pod和node节点上的pod之间通信
实现master上的pod和node节点上的pod之间通信，先将flannel文件传入master主机，再部署flannel：
[root@master ~]# kubectl apply -f kube-flannel.yml    #执行
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
2.9 查看集群状态
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   9m49s   v1.23.6
node1    Ready    <none>                 5m48s   v1.23.6
node2    Ready    <none>                 5m45s   v1.23.6
此过程可能需要等一会，看见都是Ready状态了，则表示k8s环境搭建成功了
三、编译安装nginx制作自己的镜像并上传到docker hub上给node节点下载使用
1. 在master建立一个一键安装nginx的脚本
[root@master ~]# mkdir /nginx
[root@master ~]# cd /nginx
[root@master nginx]# vim onekey_install_nginx.sh
#!/bin/bash
#解决软件的依赖关系，需要安装的软件包
yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel gcc gcc-c++ autoconf automake make psmisc net-tools lsof vim wget
#下载nginx软件
mkdir /nginx
cd /nginx
curl -O http://nginx.org/download/nginx-1.21.1.tar.gz
#解压软件
tar xf nginx-1.21.1.tar.gz
#进入解压后的文件夹
cd nginx-1.21.1
#编译前的配置
./configure --prefix=/usr/local/nginx1 --with-http_ssl_module --with-threads --with-http_v2_module --with-http_stub_status_module --with-stream
#编译
make -j 2
#编译安装
make install
2. 建立一个Dockerfile文件
[root@master nginx]# vim Dockerfile
FROM centos:7                            #指明基础镜像
ENV NGINX_VERSION 1.21.1                 #将1.21.1这个数值赋值给NGINX_VERSION这个变量
ENV AUTHOR zhouxin                       #作者zhouxin
LABEL maintainer="cali695811769@qq.com"  #标签
RUN mkdir /nginx                         #在容器中运行的命令
WORKDIR /nginx                           #指定进入容器的时候在哪个目录下
COPY . /nginx                            #复制宿主机里的文件或者文件夹到容器的/nginx目录下
# 在容器里执行一键安装nginx的脚本，并安装一些工具
RUN set -ex; \
    bash onekey_install_nginx.sh; \
    yum install vim iputils net-tools iproute -y
EXPOSE 80                                #声明开放的端口号
ENV PATH=/usr/local/nginx1/sbin:$PATH    #定义环境变量
STOPSIGNAL SIGQUIT                       #屏蔽信号
CMD ["nginx", "-g", "daemon off;"]       #在前台启动nginx程序：-g daemon off; 将off值赋给daemon这个变量，告诉nginx不要在后台启动，在前台启动（daemon是守护进程，默认在后台启动）
3. 创建镜像
[root@master nginx]# docker build -t zhouxin_nginx:1.0 .
查看镜像，可以看到zhouxin_nginx:1.0已经制作成功
4. 将制作的镜像推送到docker hub上供node节点下载
将自己制作的镜像推送到我的docker hub仓库，以供其他2个node节点服务器使用。首先要在docker hub创建自己的账号并创建自己的仓库，我已经创建了zhouxin03/nginx的仓库。在master上将自己制作的镜像打标签：
[rootmaster nginx]# docker tag zhouxin_nginx:1.0 zhouxin03/nginx
登录docker hub
[rootmaster nginx]# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you dont have a Docker ID, head over to https://hub.docker.com to create one.
Username: zhouxin03
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-storeLogin Succeeded
然后再推到自己的docker hub仓库里
[rootmaster nginx]# docker push zhouxin03/nginx
Using default tag: latest
The push refers to repository [docker.io/zhouxin03/nginx]
52bbda705d25: Pushed
41e872683328: Pushed
5f70bf18a086: Pushed
5376459cbb05: Pushed
174f56854903: Mounted from library/centos
latest: digest: sha256:39801c440d239b8fec21fda5a750b38f96d64a13eef695c9394ffe244c5034a6 size: 1362
此时在docker hub上查看镜像，可见镜像已经被推送到docker hub上了
5. node节点去docker hub上拉取这个镜像
[rootnode1 ~]# docker pull zhouxin03/nginx:latest #拉取镜像
latest: Pulling from zhouxin03/nginx
2d473b07cdd5: Pull complete
63fe9f4e3ea7: Pull complete
4f4fb700ef54: Pull complete
947ca89e3d17: Pull complete
0d4cea36d8fd: Pull complete
Digest: sha256:39801c440d239b8fec21fda5a750b38f96d64a13eef695c9394ffe244c5034a6
Status: Downloaded newer image for zhouxin03/nginx:latest
docker.io/zhouxin03/nginx:latest
[rootnode1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
zhouxin03/nginx latest 31274f1e297c 17 minutes ago 636MB
rancher/mirrored-flannelcni-flannel v0.19.2 8b675dda11bb 12 months ago 62.3MB
rancher/mirrored-flannelcni-flannel-cni-plugin v1.1.0 fcecffc7ad4a 15 months ago 8.09MB
registry.aliyuncs.com/google_containers/kube-proxy v1.23.6 4c0375452406 16 months ago 112MB
registry.aliyuncs.com/google_containers/coredns v1.8.6 a4ca41631cc7 23 months ago 46.8MB
registry.aliyuncs.com/google_containers/pause 3.6 6270bb605e12 2 years ago 683kB
coredns/coredns 1.8.4 8d147537fb7d 2 years ago 47.6MB
registry.aliyuncs.com/google_containers/coredns    v1.8.4    8d147537fb7d   2 years ago      47.6MB
四、创建NFS服务器为所有的节点提供相同Web数据，结合使用pv、pvc和卷挂载保障数据的一致性，并用探针对pod中容器的状态进行检测
1. 用ansible部署nfs服务器环境
1.1 在ansible服务器上对k8s集群和nfs服务器建立免密通道
这里展示对nfs服务器建立免密通道的过程
[rootansible ~]# ssh-keygen #生成密钥对
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory /root/.ssh.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:GtLchZ2flfBGzV5K3yqXePoIc9f1oT1WUOZzZ0AQdpw rootansible
The keys randomart image is:
---[RSA 2048]----
| o|
| o o E*|
| . .*B|
| o . . . .oB|
| . S o. o|
| . o o B.|
| . o .*.o|
| .o. .|
| ... |
+----[SHA256]-----+
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.15    # 将公钥传到要建立免密通道的服务器上
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: /root/.ssh/id_rsa.pub
The authenticity of host 192.168.107.15 (192.168.107.15) cant be established.
ECDSA key fingerprint is SHA256:/y4BmyQxo26qq5BDptWmP9KVykKwBX7YrugbGtSwN1Q.
ECDSA key fingerprint is MD5:8e:26:8d:24:1a:35:94:79:3e:b5:5a:1a:d3:9e:99:83.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.107.15's password:     #第一次传送公钥到远程服务器上，要输入远程服务器的登录密码

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.107.15'"
and check to make sure that only the key(s) you wanted were added.

[root@ansible ~]# ssh root@192.168.107.15    #验证免密通道是否建立成功
Last login: Sat Sep  2 16:26:00 2023 from 192.168.31.67
[root@nfs ~]#
其他服务器只需要把ansible的公钥传到各个服务器上即可：
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.11    # 将公钥传到master
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.12    # 将公钥传到node1
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.13    # 将公钥传到node2
1.2 安装ansible自动化运维工具在ansible服务器上并写好主机清单
[rootansible ~]# yum install -y epel-release
[rootansible ~]# yum install ansible -y
[rootansible ~]# cd /etc/ansible/
[rootansible ansible]# ls
ansible.cfg hosts roles
[rootansible ansible]# vim hosts
[nfs]
192.168.107.15 #nfs
[web]
192.168.107.11 #master
192.168.107.12 #node1
192.168.107.13 #node2
1.3 编写安装nfs脚本
在nfs服务器上要安装好nfs软件包，并设置nfs服务开机自启：
[root@ansible ~]# vim nfs_install.sh
yum install -y nfs-utils    #安装nfs软件包
systemctl start nfs         #启动nfs服务
systemctl enable nfs        #设置nfs开机自启
在k8s集群里要安装好nfs软件包
[rootansible ~]# vim web_nfs_install.sh
yum install -y nfs-utils #安装nfs软件包
1.4 编写playbook实现nfs安装部署
[rootansible ansible]# vim nfs_install.yaml
- hosts: nfs
  remote_user: root
  tasks:
  - name: install nfs in nfs
    script: /root/nfs_install.sh
- hosts: web
  remote_user: root
  tasks:
  - name: install nfs in web
    script: /root/web_nfs_install.sh
script模块：把本地的脚本传到远端执行
1.5 检查yaml文件语法
[root@ansible ansible]# ansible-playbook --syntax-check /etc/ansible/nfs_install.yaml

playbook: /etc/ansible/nfs_install.yaml
1.6 执行yaml文件
[rootansible ansible]# ansible-playbook nfs_install.yaml
1.7 验证nfs是否安装成功
在nfs服务器看查看是否启动nfsd进程
[rootnfs ~]# ps aux|grep nfs
root 1693 0.0 0.0 0 0 ? S 17:05 0:00 [nfsd4_callbacks]
root 1699 0.0 0.0 0 0 ? S 17:05 0:00 [nfsd]
root 1700 0.0 0.0 0 0 ? S 17:05 0:00 [nfsd]
root 1701 0.0 0.0 0 0 ? S 17:05 0:00 [nfsd]
root 1702 0.0 0.0 0 0 ? S 17:05 0:00 [nfsd]
root 1703 0.0 0.0 0 0 ? S 17:05 0:00 [nfsd]
root 1704 0.0 0.0 0 0 ? S 17:05 0:00 [nfsd]
root 1705 0.0 0.0 0 0 ? S 17:05 0:00 [nfsd]
root 1706 0.0 0.0 0 0 ? S 17:05 0:00 [nfsd]
root       1745  0.0  0.0 112824   976 pts/0    R+   17:06   0:00 grep --color=auto nfs
可见nfs安装部署成功了
2. 将web数据页面挂载到容器上并使用探针技术对容器状态进行检查
要用到探针技术，需要修改nginx的配置文件。我这里采用就绪探针(readinessProbe)和存活性探针(livenessProbe)，就要将就绪探针和存活性探针的位置块添加到nginx配置中，因此需要在nfs服务器上修改nginx的配置文件后，再将nginx的配置文件挂载到容器里。
所以这里需要挂载两个文件。
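先给出探针部分的一个最小配置示意（假设容器监听80端口，/healthz 与 /isalive 即后文在 nginx.conf 中添加的两个location，完整的Deployment配置见后文 pv_pod.yaml）：

# 仅为Pod容器中探针相关字段的片段示意，数值与后文 pv_pod.yaml 保持一致
readinessProbe:              # 就绪探针：未就绪的Pod会被从service的endpoints里摘除
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 5
livenessProbe:               # 存活性探针：探测失败会触发容器重启
  httpGet:
    path: /isalive
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 10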
2.1 创建web页面数据文件
2.1.1 先在nfs服务器上创建web页面数据共享文件
[rootnfs ~]# mkdir /web
[rootnfs ~]# cd /web
[rootnfs web]# vim index.html
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>
2.2 创建nginx.conf配置文件
2.2.1 先在nfs服务器上下载nginx，使用前面的一键编译安装nginx的脚本，得到nginx.conf配置文件
[root@nfs nginx]# vim onekey_install_nginx.sh
#!/bin/bash
#解决软件的依赖关系，需要安装的软件包
yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel gcc gcc-c++ autoconf automake make psmisc net-tools lsof vim wget
#下载nginx软件
mkdir /nginx
cd /nginx
curl -O http://nginx.org/download/nginx-1.21.1.tar.gz
#解压软件
tar xf nginx-1.21.1.tar.gz
#进入解压后的文件夹
cd nginx-1.21.1
#编译前的配置
./configure --prefix=/usr/local/nginx1 --with-http_ssl_module --with-threads --with-http_v2_module --with-http_stub_status_module --with-stream
#编译
make -j 2
#编译安装
make install
[rootnfs nginx]# bash onekey_install_nginx.sh #执行脚本
2.2.2 修改nginx.conf的配置文件添加就绪探针和存活性探针的位置块
[rootnfs ~]# cd /usr/local
[rootnfs local]# ls
bin etc games include lib lib64 libexec nginx1 sbin share src
[rootnfs local]# cd nginx1
[rootnfs nginx1]# ls
conf html logs sbin
[rootnfs nginx1]# cd conf
[rootnfs conf]# ls
fastcgi.conf fastcgi_params koi-utf mime.types nginx.conf scgi_params uwsgi_params win-utf
fastcgi.conf.default fastcgi_params.default koi-win mime.types.default nginx.conf.default scgi_params.default uwsgi_params.default
[root@nfs conf]# vim nginx.conf
在http的server块中添加：
location /healthz {
    access_log off;
    return 200 ok;
}
location /isalive {
    access_log off;
    return 200 ok;
}
2.3 编辑/etc/exports文件并让其生效
[root@nfs web]# vim /etc/exports
/web 192.168.107.0/24 (rw,sync,all_squash)
/usr/local/nginx1/conf 192.168.107.0/24 (rw,sync,all_squash)

/web 是我们共享的文件夹的路径--》使用绝对路径
192.168.107.0/24 允许过来访问的客户机的ip地址网段
(rw,all_squash,sync) 表示权限的限制：
    rw 表示可读可写 read and write
    ro 表示只能读 read-only
    all_squash 任何客户机上的用户过来访问的时候，都把它认为是普通的用户
    root_squash 当NFS客户端以root管理员访问时，映射为NFS服务器匿名用户
    no_root_squash 当NFS客户端以root管理员访问时，映射为NFS服务器的root管理员
    sync 同时将数据写入到内存与硬盘中，保证不丢失数据
    async 优先将数据保存到内存，然后再写入硬盘，效率更高，但可能丢失数据
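补充说明（个人理解，供参考）：/etc/exports 里客户机网段和括号中的选项之间如果留有空格，NFS会把它们当成两条独立的授权来解析——网段那一条使用默认选项，而 (rw,sync,all_squash) 会被套在通配主机 * 上，这也是下面 exportfs -av 会出现 "No options for …" 告警并额外导出 *(rw,sync,all_squash) 的原因。如果想消除告警，可以写成下面这种不带空格的形式：

/web                    192.168.107.0/24(rw,sync,all_squash)
/usr/local/nginx1/conf  192.168.107.0/24(rw,sync,all_squash)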
让/etc/exports文件其生效
[rootnfs web]# exportfs -av
exportfs: No options for /web 192.168.107.0/24: suggest 192.168.107.0/24(sync) to avoid warning
exportfs: No host name given with /web (rw,sync,all_squash), suggest *(rw,sync,all_squash) to avoid warning
exportfs: No options for /usr/local/nginx1/conf 192.168.107.0/24: suggest 192.168.107.0/24(sync) to avoid warning
exportfs: No host name given with /usr/local/nginx1/conf (rw,sync,all_squash), suggest *(rw,sync,all_squash) to avoid warning
exporting 192.168.107.0/24:/usr/local/nginx1/conf
exporting 192.168.107.0/24:/web
exporting *:/usr/local/nginx1/conf
exporting *:/web
设置共享目录的权限
[rootnfs web]# chown nobody:nobody /web
[rootnfs web]# ll -d /web
drwxr-xr-x 2 nobody nobody 24 9月 2 17:08 /web
[rootnfs web]# chown nobody:nobody /usr/local/nginx1/conf
[rootnfs web]# ll -d /usr/local/nginx1/conf
drwxr-xr-x 2 nobody nobody 333 9月  2 18:25 /usr/local/nginx1/conf
2.4 挂载web页面数据文件
2.4.1 在master服务器上创建pv
[root@master ~]# mkdir /pod
[root@master ~]# cd /pod
[root@master pod]# vim pv_nfs.yaml
apiVersion: v1
kind: PersistentVolume                       #资源类型
metadata:
  name: zhou-nginx-pv                        #创建的pv的名字
  labels:
    type: zhou-nginx-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany                          #访问模式：多个客户端读写
  persistentVolumeReclaimPolicy: Recycle     #回收策略-可以回收
  storageClassName: nfs                      #存储类名字，后面创建pvc的时候要用一样的
  nfs:
    path: /web                               # nfs共享目录的路径
    server: 192.168.107.15                   # nfs服务器的ip
    readOnly: false                          #是否只读
执行pv的yaml文件
[rootmaster pod]# kubectl apply -f pv_nfs.yaml
persistentvolume/zhou-nginx-pv created
[rootmaster pod]# kubectl get pv #查看
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
zhou-nginx-pv   5Gi        RWX            Recycle          Available           nfs                     17s
2.4.2 在master服务器上创建pvc，用来使用pv
[root@master pod]# vim pvc_nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zhou-nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs      #注意：这里要用与前面pv相同的
执行并查看
[rootmaster pod]# kubectl apply -f pvc_nfs.yaml
persistentvolumeclaim/zhou-nginx-pvc created
[rootmaster pod]# kubectl get pvc #查看
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zhou-nginx-pvc   Bound    zhou-nginx-pv   5Gi        RWX            nfs            8s
2.5 挂载nginx.conf配置文件
其实这里也可以用configmap实现
参考https://mp.csdn.net/mp_blog/creation/editor/129893723
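如果改用configmap的思路，大致做法是先在master上用修改好的nginx.conf生成ConfigMap（例如 kubectl create configmap nginx-conf --from-file=nginx.conf），再在Deployment里以卷的形式挂载。以下仅为片段示意，ConfigMap名称 nginx-conf 是假设的，本项目实际采用的仍是下面的pv/pvc方式：

# Pod模板中的片段示意（假设ConfigMap名为nginx-conf）
volumes:
- name: nginx-conf
  configMap:
    name: nginx-conf
containers:
- name: zhou-pv-container-nfs
  volumeMounts:
  - name: nginx-conf
    mountPath: /usr/local/nginx1/conf/nginx.conf   # 用subPath只覆盖这一个文件
    subPath: nginx.conf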
2.5.1在master服务器上创建pv
[root@master pod]# vim pv_nginx.yaml
apiVersion: v1
kind: PersistentVolume                       #资源类型
metadata:
  name: zhou-nginx-conf-pv                   #创建的pv的名字
  labels:
    type: zhou-nginx-conf-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany                          #访问模式：多个客户端读写
  persistentVolumeReclaimPolicy: Recycle     #回收策略-可以回收
  storageClassName: nginx-conf               #存储类名字，后面创建pvc的时候要用一样的
  nfs:
    path: /usr/local/nginx1/conf             # nfs共享目录的路径
    server: 192.168.107.15                   # nfs服务器的ip
    readOnly: false                          #是否只读
执行并查看
[rootmaster pod]# kubectl apply -f pv_nginx.yaml
persistentvolume/zhou-nginx-conf-pv created
[rootmaster pod]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
zhou-nginx-conf-pv 5Gi RWX Recycle Available nginx-conf 8s
zhou-nginx-pv        5Gi        RWX            Recycle          Bound       default/zhou-nginx-pvc   nfs                     81m
2.5.2 在master服务器上创建pvc，用来使用pv
[root@master pod]# vim pvc_nginx.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zhou-nginx-conf-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nginx-conf     #注意：这里要用与前面pv相同的
执行并查看
[rootmaster pod]# kubectl apply -f pvc_nginx.yaml
persistentvolumeclaim/zhou-nginx-conf-pvc created
[rootmaster pod]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zhou-nginx-conf-pvc Bound zhou-nginx-conf-pv 5Gi RWX nginx-conf 3s
zhou-nginx-pvc        Bound    zhou-nginx-pv        5Gi        RWX            nfs            113m
看到两个都是绑定(Bound)状态则成功
2.6 在master服务器上创建pod使用pvc
[root@master pod]# vim pv_pod.yaml
apiVersion: apps/v1
kind: Deployment                             #用副本控制器deployment创建
metadata:
  name: nginx-deployment                     #deployment的名称
  labels:
    app: zhou-nginx
spec:
  replicas: 10                               #建立10个副本
  selector:
    matchLabels:
      app: zhou-nginx
  template:                                  #根据此模版创建Pod的副本实例
    metadata:
      labels:
        app: zhou-nginx
    spec:
      volumes:
      - name: zhou-pv-storage-nfs
        persistentVolumeClaim:
          claimName: zhou-nginx-pvc          #使用前面创建的pvc
      - name: zhou-pv-storage-conf-nfs
        persistentVolumeClaim:
          claimName: zhou-nginx-conf-pvc     #使用前面创建的pvc
      containers:
      - name: zhou-pv-container-nfs          #容器名字
        image: zhouxin03/nginx:latest        #使用之前自己制作的镜像
        ports:
        - containerPort: 80                  #容器应用监听的端口号
          name: http-server
        volumeMounts:                        #两个挂载点写在同一个volumeMounts列表里
        - mountPath: /usr/local/nginx1/html  #挂载到容器里的目录，这里是自己编译安装的nginx下的html路径
          name: zhou-pv-storage-nfs
        - mountPath: /usr/local/nginx1/conf  #挂载到容器里的目录，这里是自己编译安装的nginx下的conf路径
          name: zhou-pv-storage-conf-nfs
        readinessProbe:                      #配置就绪探针
          httpGet:                           #使用httpGet检查机制
            path: /healthz                   #使用nginx.conf配置文件里的路径
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        livenessProbe:                       #配置存活性探针
          httpGet:
            path: /isalive                   #使用nginx.conf配置文件里的路径
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
执行并查看
[rootmaster pod]#kubectl apply -f pv_pod.yaml
[rootmaster pod]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 20/20 20 20 2m18s
[rootmaster pod]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-79878f849f-5gzfl 1/1 Running 0 2m46s 10.244.1.13 node1 none none
nginx-deployment-79878f849f-6nrrf 1/1 Running 0 2m46s 10.244.2.9 node2 none none
nginx-deployment-79878f849f-6pl8g 1/1 Running 0 2m46s 10.244.1.6 node1 none none
nginx-deployment-79878f849f-82g94 1/1 Running 0 2m46s 10.244.1.14 node1 none none
nginx-deployment-79878f849f-8zssk 1/1 Running 0 2m46s 10.244.1.15 node1 none none
nginx-deployment-79878f849f-9n8ql 1/1 Running 0 2m46s 10.244.2.4 node2 none none
nginx-deployment-79878f849f-bwp9s 1/1 Running 0 2m46s 10.244.1.10 node1 none none
nginx-deployment-79878f849f-ct5k4 1/1 Running 0 2m46s 10.244.2.8 node2 none none
nginx-deployment-79878f849f-hdj5f 1/1 Running 0 2m46s 10.244.1.7 node1 none none
nginx-deployment-79878f849f-hhw4c 1/1 Running 0 2m46s 10.244.1.8 node1 none none
这个过程可能需要等一会，才能看到全部变成Running状态；且READY是1/1，则表示pod启动成功
如果不是Running状态，或READY是0/1，表示出错了，可以通过 kubectl describe pod <pod的名字> 来排错
测试访问
[root@master pod]# curl 10.244.1.13
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>
查看nginx.conf的配置文件是否挂载成功
[rootmaster pod]# kubectl exec -it nginx-deployment-79878f849f-r4zsq -- bash
[rootnginx-deployment-79878f849f-r4zsq nginx]# cd /usr/local/nginx1/conf
[rootnginx-deployment-79878f849f-r4zsq conf]# ls
fastcgi.conf fastcgi_params koi-utf mime.types nginx.conf scgi_params uwsgi_params win-utf
fastcgi.conf.default fastcgi_params.default koi-win mime.types.default nginx.conf.default scgi_params.default uwsgi_params.default
[root@nginx-deployment-79878f849f-r4zsq conf]# vim nginx.conf
看到配置文件里有前面添加的 /healthz 和 /isalive 这两个location，说明挂载成功
2.7 创建service服务发布出去
[root@master pod]# vim my_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nfs          #service的名字，后面配置ingress会用到
  labels:
    run: my-nginx-nfs
spec:
  type: NodePort
  ports:
  - port: 8070
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: zhou-nginx     #注意：这里要用app的形式，跟前面的pv_pod.yaml文件对应（有些使用方法是run），不要搞错了
执行并查看
[rootmaster pod]# kubectl apply -f my_service.yaml
service/my-nginx-nfs created
[rootmaster pod]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.1.0.1 none 443/TCP 46h
my-nginx-nfs NodePort 10.1.32.204 none 8070:32621/TCP 9s
#这里的32621就是宿主机暴露的端口号验证时用浏览器访问宿主机的这个端口号
2.8 在firewalld服务器上配置dnat策略将web服务发布出去
[root@fiewalld ~]# vim snat_dnat.sh
#!/bin/bash
iptables -F
iptables -t nat -F

# enable route 开启路由功能
echo 1 > /proc/sys/net/ipv4/ip_forward

# enable snat 让192.168.107.0网段的主机能够通过WAN口上网
iptables -t nat -A POSTROUTING -s 192.168.107.0/24 -o ens33 -j SNAT --to-source 192.168.31.69

# 添加下面的dnat策略
# enable dnat 让外网能够访问内网数据
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.11
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.12
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.13

查看配置的防火墙规则生效没，可见已经生效
2.9 测试访问
使用浏览器访问3台k8s集群服务器任意一台的32621端口，都能显示出nfs-server服务器上的定制页面
五、采用HPA技术，当cpu使用率达到40%的时候pod进行自动水平扩缩，最小10个最多20个pod
1. 安装metrics服务
HPA的指标数据是通过metrics服务来获得必须要提前安装好
Metrics Server 从 Kubelets 收集资源指标并通过Metrics API在 Kubernetes apiserver 中公开它们 以供Horizontal Pod Autoscaler(HPA)和Vertical Pod Autoscaler (VPA)使用比如CPU、文件描述符、内存、请求延时等指标metric-server收集数据给k8s集群内使用如kubectl,hpa,scheduler等。还可以通过 访问指标 API kubectl top从而更轻松地调试自动缩放管道
[rootmaster ~]# vim metrics.yaml
apiVersion: v1
kind: ServiceAccount
metadata:labels:k8s-app: metrics-servername: metrics-servernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:k8s-app: metrics-serverrbac.authorization.k8s.io/aggregate-to-admin: truerbac.authorization.k8s.io/aggregate-to-edit: truerbac.authorization.k8s.io/aggregate-to-view: truename: system:aggregated-metrics-reader
rules:
- apiGroups:- metrics.k8s.ioresources:- pods- nodesverbs:- get- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:k8s-app: metrics-servername: system:metrics-server
rules:
- apiGroups:- resources:- pods- nodes- nodes/stats- namespaces- configmapsverbs:- get- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:labels:k8s-app: metrics-servername: metrics-server-auth-readernamespace: kube-system
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccountname: metrics-servernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:k8s-app: metrics-servername: metrics-server:system:auth-delegator
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:auth-delegator
subjects:
- kind: ServiceAccountname: metrics-servernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:k8s-app: metrics-servername: system:metrics-server
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:metrics-server
subjects:
- kind: ServiceAccountname: metrics-servernamespace: kube-system
---
apiVersion: v1
kind: Service
metadata:labels:k8s-app: metrics-servername: metrics-servernamespace: kube-system
spec:ports:- name: httpsport: 443protocol: TCPtargetPort: httpsselector:k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:labels:k8s-app: metrics-servername: metrics-servernamespace: kube-system
spec:selector:matchLabels:k8s-app: metrics-serverstrategy:rollingUpdate:maxUnavailable: 0template:metadata:labels:k8s-app: metrics-serverspec:containers:- args:- --cert-dir/tmp- --secure-port4443- --kubelet-preferred-address-typesInternalIP,ExternalIP,Hostname- --kubelet-use-node-status-port- --metric-resolution15s- --kubelet-insecure-tlsimage: registry.cn-shenzhen.aliyuncs.com/zengfengjin/metrics-server:v0.5.0imagePullPolicy: IfNotPresentlivenessProbe:failureThreshold: 3httpGet:path: /livezport: httpsscheme: HTTPSperiodSeconds: 10name: metrics-serverports:- containerPort: 4443name: httpsprotocol: TCPreadinessProbe:failureThreshold: 3httpGet:path: /readyzport: httpsscheme: HTTPSinitialDelaySeconds: 20periodSeconds: 10resources:requests:cpu: 100mmemory: 200MisecurityContext:readOnlyRootFilesystem: truerunAsNonRoot: truerunAsUser: 1000volumeMounts:- mountPath: /tmpname: tmp-dirnodeSelector:kubernetes.io/os: linuxpriorityClassName: system-cluster-criticalserviceAccountName: metrics-servervolumes:- emptyDir: {}name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:labels:k8s-app: metrics-servername: v1beta1.metrics.k8s.io
spec:group: metrics.k8s.iogroupPriorityMinimum: 100insecureSkipTLSVerify: trueservice:name: metrics-servernamespace: kube-systemversion: v1beta1versionPriority: 100 可见metrics已经安装成功
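yaml执行完之后，可以先确认一下 Metrics API 是否注册成功（补充的验证方式，仅供参考）：

# AVAILABLE一列为True，说明v1beta1.metrics.k8s.io已经由metrics-server正常提供
kubectl get apiservice v1beta1.metrics.k8s.io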
查看节点的状态信息
[rootmaster ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 115m 5% 1101Mi 29%
node1 61m 3% 766Mi 20%
node2    61m          3%     740Mi           20%
查看pod资源消耗：
[rootmaster pod]# kubectl top pods
NAME CPU(cores) MEMORY(bytes)
nginx-deployment-6fd9b4f959-754lc 1m 1Mi
nginx-deployment-6fd9b4f959-94p97 1m 1Mi
nginx-deployment-6fd9b4f959-d66t7 1m 1Mi
nginx-deployment-6fd9b4f959-hcffl 1m 1Mi
nginx-deployment-6fd9b4f959-hjbfb 1m 1Mi
nginx-deployment-6fd9b4f959-k2hvs 1m 1Mi
nginx-deployment-6fd9b4f959-mgb6m 1m 1Mi
nginx-deployment-6fd9b4f959-nb4sd 1m 1Mi
nginx-deployment-6fd9b4f959-rcfnj 1m 1Mi
nginx-deployment-6fd9b4f959-tv7t4 1m 1Mi
这个命令需要由metric-server服务提供数据没有安装metrics的话会报错error: Metrics API not available
2. 配置HPA当cpu使用率达到50%的时候pod进行自动水平扩缩最小20个最多40个pod
2.1 在原来的deployment yaml文件中配置资源请求
要配置HPA功能，需要在Deployment YAML文件中配置资源请求；由于前面的deployment没有配置资源请求，因此先删除前面用deployment创建的pod
[rootmaster ~]# cd /pod
[rootmaster pod]# ls
my_service.yaml pvc_nfs.yaml pvc_nginx.yaml pv_nfs.yaml pv_nginx.yaml pv_pod.yaml
[rootmaster pod]# kubectl delete -f pv_pod.yaml
deployment.apps "nginx-deployment" deleted
修改pv_pod.yaml配置文件，增加资源请求的配置：
[root@master pod]# vim pv_pod.yaml
apiVersion: apps/v1
kind: Deployment                             #用副本控制器deployment创建
metadata:
  name: nginx-deployment                     #deployment的名称
  labels:
    app: zhou-nginx
spec:
  replicas: 10                               #建立10个副本
  selector:
    matchLabels:
      app: zhou-nginx
  template:                                  #根据此模版创建Pod的副本实例
    metadata:
      labels:
        app: zhou-nginx
    spec:
      volumes:
      - name: zhou-pv-storage-nfs
        persistentVolumeClaim:
          claimName: zhou-nginx-pvc          #使用前面创建的pvc
      - name: zhou-pv-storage-conf-nfs
        persistentVolumeClaim:
          claimName: zhou-nginx-conf-pvc     #使用前面创建的pvc
      containers:
      - name: zhou-pv-container-nfs          #容器名字
        image: zhouxin03/nginx:latest        #使用之前自己制作的镜像
        ports:
        - containerPort: 80                  #容器应用监听的端口号
          name: http-server
        volumeMounts:
        - mountPath: /usr/local/nginx1/html  #挂载到容器里的目录，这里是自己编译安装的nginx下的html路径
          name: zhou-pv-storage-nfs
        - mountPath: /usr/local/nginx1/conf  #挂载到容器里的目录，这里是自己编译安装的nginx下的conf路径
          name: zhou-pv-storage-conf-nfs
        readinessProbe:                      #配置就绪探针
          httpGet:                           #使用httpGet检查机制
            path: /healthz                   #使用nginx.conf配置文件里的路径
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        livenessProbe:                       #配置存活性探针
          httpGet:
            path: /isalive                   #使用nginx.conf配置文件里的路径
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
        #############################添加下面的内容##############################
        resources:
          requests:
            cpu: 300m                        # 这里设置了CPU的请求为300m
          limits:
            cpu: 500m                        # 这里设置了CPU的限制为500m
执行并查看
[rootmaster pod]# kubectl apply -f pv_pod.yaml
deployment.apps/nginx-deployment created
[rootmaster pod]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6fd9b4f959-754lc 1/1 Running 0 36s
nginx-deployment-6fd9b4f959-94p97 1/1 Running 0 36s
nginx-deployment-6fd9b4f959-d66t7 1/1 Running 0 36s
nginx-deployment-6fd9b4f959-hcffl 1/1 Running 0 36s
nginx-deployment-6fd9b4f959-hjbfb 1/1 Running 0 36s
nginx-deployment-6fd9b4f959-k2hvs 1/1 Running 0 36s
nginx-deployment-6fd9b4f959-mgb6m 1/1 Running 0 36s
nginx-deployment-6fd9b4f959-nb4sd 1/1 Running 0 36s
nginx-deployment-6fd9b4f959-rcfnj 1/1 Running 0 36s
nginx-deployment-6fd9b4f959-tv7t4   1/1     Running   0          36s
2.2 创建hpa
[root@master ~]# vim hpa.yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment      #这里用前面的deployment的名字
  minReplicas: 10               #最少10个
  maxReplicas: 20               #最多20个
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 30  #cpu平均使用率达到30%时触发扩容

执行并查看：
[root@master ~]# kubectl apply -f hpa.yaml
[root@master ~]# kubectl get hpa
NAME     REFERENCE                     TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
my-hpa   Deployment/nginx-deployment   0%/30%    10        20        10         48s
该过程可能需要等一会，才能看到TARGETS变成0%/30%
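顺带一提（仅为等效写法的示意）：如果不想单独写hpa.yaml，也可以直接用命令行创建同样的HPA：

# 与上面hpa.yaml等效：基于CPU利用率30%，副本数10~20
kubectl autoscale deployment nginx-deployment --cpu-percent=30 --min=10 --max=20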
3. 对集群进行压力测试
3.1 在其他机器上安装ab软件
[root@ansible pod]# yum install httpd-tools -y
3.2 对该集群进行ab压力测试
# 1000个并发数，100000000个请求数
[root@ansible ~]# ab -c 1000 -n 100000000 http://192.168.107.11:32621/
This is ApacheBench, Version 2.3 $Revision: 1430300 $
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/Benchmarking 127.0.0.1 (be patient)
4. 查看hpa效果观察变化
[rootmaster pod]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
my-hpa Deployment/nginx-deployment 46%/30% 10 20 17 3m4s
可以看出hpa的TARGETS达到了46%，需要扩容，pod数自动扩展到了17个
5. 观察集群性能
查看吞吐率，经过多次测试，看到最高吞吐率为4480左右
6. 优化整个web集群
可以通过修改内核参数或nginx配置文件中的参数来优化（思路见下面的示意）。
这里使用ulimit命令：
[root@master ~]# ulimit -n 10000    #扩大并发连接数（临时生效）
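下面是一个可选的调优示意（参数值仅供参考，并非本项目实际采用的配置）：

# 内核层面：调大本地端口范围和全连接队列（写入 /etc/sysctl.conf 后执行 sysctl -p 生效）
net.ipv4.ip_local_port_range = 1024 65535
net.core.somaxconn = 65535

# nginx层面：在 nginx.conf 中调大 worker 相关参数
# worker_processes auto;
# worker_rlimit_nofile 65535;
# events { worker_connections 65535; }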
六、使用ingress对象结合ingress-controller给web业务实现负载均衡功能
1. 用ansible部署ingress环境
1.1 将配置ingress controller需要的配置文件传入ansible服务器上
1.2 编写拉取ingress镜像的脚本
直接下载github上的 deploy.yaml 部署即可
由于网络问题镜像如果拉取失败可以使用下面hub.docker 上的镜像
这里是参考博客ingress-nginx-controller 部署以及优化 - 小兔几白又白 - 博客园 (cnblogs.com)
[rootansible ~]# vim ingress_images.sh
docker pull koala2020/ingress-nginx-controller:v1
docker pull koala2020/ingress-nginx-kube-webhook-certgen:v1
1.3 编写playbook实现ingress controller的安装部署
编写主机清单：ingress-controller-deploy.yaml文件只需要传到master上，拉取ingress镜像要在k8s集群的所有节点上进行
[rootansible etc]# vim /etc/ansible/hosts
[nfs]
192.168.107.15
[web]
192.168.107.11
192.168.107.12
192.168.107.13
[master] #添加
192.168.107.11
编写playbook：
[root@ansible ansible]# vim ingress_install.yaml
- hosts: web
  remote_user: root
  tasks:
  - name: install ingress controller
    script: /root/ingress_images.sh
- hosts: master
  remote_user: root
  tasks:
  - name: copy ingress controller deployment file
    copy: src=/root/ingress-controller-deploy.yaml dest=/root/
检查yaml文件语法：
[root@ansible ansible]# ansible-playbook --syntax-check /etc/ansible/ingress_install.yaml

playbook: /etc/ansible/ingress_install.yaml

执行yaml文件：
[root@ansible ansible]# ansible-playbook ingress_install.yaml
1.4 查看是否成功
发现镜像拉取成功，文件也传送到master上了
2. 执行ingress-controller-deploy.yaml 文件去启动ingress controller
在master机器上
[root@master ~]# kubectl apply -f ingress-controller-deploy.yaml
查看ingress controller的相关命名空间
查看ingress controller的相关service：
[rootk8smaster 4-4]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.99.160.10 none 80:30092/TCP,443:30263/TCP 91s
ingress-nginx-controller-admission ClusterIP 10.99.138.23 none 443/TCP 91s
查看ingress controller的相关pod
[rootmaster ~]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-fbz67 0/1 Completed 0 110s
ingress-nginx-admission-patch-4fsjz 0/1 Completed 1 110s
ingress-nginx-controller-7cd558c647-dgfbd 1/1 Running 0 110s
ingress-nginx-controller-7cd558c647-g9vvt   1/1     Running     0          110s
3. 启用ingress 关联ingress controller 和service
3.1 编写ingrss的yaml文件
[root@master ~]# vim zhou_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zhou-ingress                      #ingress的名字
  annotations:
    kubernetes.io/ingress.class: nginx    #注释：这个ingress是关联ingress controller的
spec:
  ingressClassName: nginx                 #关联ingress controller
  rules:
  - host: www.zhou.com                    #根据域名做负载均衡
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-nginx-nfs            #用前面发布的service名字
            port:
              number: 80
  - host: www.xin.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-nginx-nfs2           #后面发布service的时候要用到
            port:
              number: 80
3.2 执行文件
[rootmaster ~]# kubectl apply -f zhou_ingress.yaml
ingress.networking.k8s.io/zhou-ingress created
3.3 查看效果
[rootmaster ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
zhou-ingress   nginx   www.zhou.com,www.xin.com   192.168.107.12,192.168.107.13   80      85s
该过程需要等几分钟才能看到ADDRESS中的ip地址
3.4 查看ingress controller 里的nginx.conf 文件里是否有ingress对应的规则
[rootmaster ~]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-fbz67 0/1 Completed 0 12m
ingress-nginx-admission-patch-4fsjz 0/1 Completed 1 12m
ingress-nginx-controller-7cd558c647-dgfbd 1/1 Running 0 12m
ingress-nginx-controller-7cd558c647-g9vvt 1/1 Running 0 12m
[rootmaster ~]# kubectl exec -n ingress-nginx -it ingress-nginx-controller-7cd558c647-dgfbd -- bash
bash-5.1$ cat nginx.conf | grep zhou.com
	## start server www.zhou.com
	server_name www.zhou.com ;
	## end server www.zhou.com
bash-5.1$ cat nginx.conf | grep xin.com
	## start server www.xin.com
	server_name www.xin.com ;
	## end server www.xin.com
bash-5.1$ cat nginx.conf | grep -C3 upstream_balancer
	error_log /var/log/nginx/error.log notice;
	upstream upstream_balancer {
		server 0.0.0.1:1234;   # placeholder
		balancer_by_lua_block {
4. 测试访问
4.1 获取ingress controller对应的service暴露宿主机的端口
访问宿主机和相关端口就可以验证ingress controller是否能进行负载均衡
[rootmaster ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.1.58.218 none 80:30289/TCP,443:32195/TCP 19m
ingress-nginx-controller-admission   ClusterIP   10.1.241.17   <none>        443/TCP                      19m
4.2 在其他的宿主机或者windows机器上使用域名进行访问
这里在ansible服务器上访问
4.2.1 修改host文件
[root@ansible ansible]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.107.12 www.zhou.com
192.168.107.13 www.xin.com
因为我们是基于域名做的负载均衡的配置，所以必须要在浏览器里使用域名去访问，不能使用ip地址；同时ingress controller做负载均衡的时候，是基于http协议的7层负载均衡
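如果不方便修改/etc/hosts，也可以直接在请求里带上Host头来验证基于域名的转发（仅为示意，192.168.107.12为其中一个node的ip）：

# 用Host头模拟域名访问，效果等同于修改/etc/hosts后再 curl www.zhou.com
curl -H "Host: www.zhou.com" http://192.168.107.12/
curl -H "Host: www.xin.com"  http://192.168.107.12/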
4.2.1 测试访问
[root@ansible ansible]# curl www.zhou.com
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>

[root@ansible ansible]# curl www.xin.com
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@ansible ansible]#
这里看到访问www.zhou.com能正常访问到，而www.xin.com没有访问到，出现503错误，原因是我们只发布了一个service服务，没有发布另一个
5. 启动第2个服务和pod
[root@master ~]# vim zhou_nginx_svc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zhou-nginx-deploy
  labels:
    app: zhou-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zhou-nginx
  template:
    metadata:
      labels:
        app: zhou-nginx
    spec:
      containers:
      - name: zhou-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nfs2        #要用与前面zhou_ingress.yaml中一样的名字
  labels:
    app: my-nginx-nfs2
spec:
  selector:
    app: zhou-nginx
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80

执行并查看：
[rootmaster ~]# kubectl apply -f zhou_nginx_svc.yaml
deployment.apps/zhou-nginx-deploy created
service/my-nginx-nfs2 created
[rootmaster ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.1.0.1 none 443/TCP 2d1h
my-nginx-nfs NodePort 10.1.32.204 none 8070:32621/TCP 173m
my-nginx-nfs2 ClusterIP 10.1.202.196 none 80/TCP 43s
[rootmaster ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.1.58.218 none 80:30289/TCP,443:32195/TCP 33m
ingress-nginx-controller-admission ClusterIP 10.1.241.17 none 443/TCP 33m
[rootmaster ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
zhou-ingress   nginx   www.zhou.com,www.xin.com   192.168.107.12,192.168.107.13   80      23m
6. 再次测试访问，查看www.xin.com是否能够访问到
[root@ansible ansible]# curl www.zhou.com
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>

[root@ansible ansible]# curl www.xin.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
可见这次访问成功，ingress负载均衡配置成功
七、在k8s集群里部署Prometheus对web业务进行监控结合Grafana成图工具进行数据展示
这里参考了https://blog.csdn.net/rzy1248873545/article/details/125758153这篇博客
监控node的资源，可以放一个node_exporter，这是监控node资源的。node_exporter是Linux上的采集器，放上去就能采集到当前节点的CPU、内存、网络IO等。
监控容器，k8s内部提供cadvisor采集器，pod、容器的指标都可以采集到；这些都是内置的，不需要单独部署，只要知道怎么去访问这个cadvisor就可以了。
监控k8s资源对象，会部署一个kube-state-metrics服务，它会定时从API中获取这些指标，存取到Prometheus里；要是告警的话，通过Alertmanager发送给接收方；最后通过Grafana可视化展示。
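补充一个小示意：后面prometheus配置里的 kubernetes-service-endpoints 抓取任务是按注解来发现目标的，如果想让自己的service被Prometheus自动抓取，可以在service上加类似下面的注解（端口9100仅为示例）：

# Service的metadata片段示意：带上prometheus.io/*注解即可被自动发现
metadata:
  name: node-exporter
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9100"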
1. 搭建prometheus监控k8s集群
1.1 采用daemonset方式部署node-exporter
[rootmaster /]# mkdir /prometheus
[rootmaster /]# cd /prometheus
[root@master prometheus]# vim node_exporter.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
执行
[rootmaster prometheus]# kubectl apply -f node-exporter.yaml
daemonset.apps/node-exporter created
service/node-exporter created
1.2 部署Prometheus
[rootmaster prometheus]# vim prometheus_rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:name: prometheus
rules:
- apiGroups: []resources:- nodes- nodes/proxy- services- endpoints- podsverbs: [get, list, watch]
- apiGroups:- extensionsresources:- ingressesverbs: [get, list, watch]
- nonResourceURLs: [/metrics]verbs: [get]
---
apiVersion: v1
kind: ServiceAccount
metadata:name: prometheusnamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: prometheus
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: prometheus
subjects:
- kind: ServiceAccountname: prometheusnamespace: kube-system
[rootmaster prometheus]# vim prometheus_comfig.yaml
apiVersion: v1
kind: ConfigMap
metadata:name: prometheus-confignamespace: kube-system
data:prometheus.yml: |global:scrape_interval: 15sevaluation_interval: 15sscrape_configs:- job_name: kubernetes-apiserverskubernetes_sd_configs:- role: endpointsscheme: httpstls_config:ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crtbearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/tokenrelabel_configs:- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]action: keepregex: default;kubernetes;https- job_name: kubernetes-nodeskubernetes_sd_configs:- role: nodescheme: httpstls_config:ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crtbearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/tokenrelabel_configs:- action: labelmapregex: __meta_kubernetes_node_label_(.)- target_label: __address__replacement: kubernetes.default.svc:443- source_labels: [__meta_kubernetes_node_name]regex: (.)target_label: __metrics_path__replacement: /api/v1/nodes/${1}/proxy/metrics- job_name: kubernetes-cadvisorkubernetes_sd_configs:- role: nodescheme: httpstls_config:ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crtbearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/tokenrelabel_configs:- action: labelmapregex: __meta_kubernetes_node_label_(.)- target_label: __address__replacement: kubernetes.default.svc:443- source_labels: [__meta_kubernetes_node_name]regex: (.)target_label: __metrics_path__replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor- job_name: kubernetes-service-endpointskubernetes_sd_configs:- role: endpointsrelabel_configs:- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]action: keepregex: true- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]action: replacetarget_label: __scheme__regex: (https?)- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]action: replacetarget_label: __metrics_path__regex: (.)- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]action: replacetarget_label: __address__regex: ([^:])(?::\d)?;(\d)replacement: $1:$2- action: labelmapregex: __meta_kubernetes_service_label_(.)- source_labels: [__meta_kubernetes_namespace]action: replacetarget_label: kubernetes_namespace- source_labels: [__meta_kubernetes_service_name]action: replacetarget_label: kubernetes_name- job_name: kubernetes-serviceskubernetes_sd_configs:- role: servicemetrics_path: /probeparams:module: [http_2xx]relabel_configs:- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]action: keepregex: true- source_labels: [__address__]target_label: __param_target- target_label: __address__replacement: blackbox-exporter.example.com:9115- source_labels: [__param_target]target_label: instance- action: labelmapregex: __meta_kubernetes_service_label_(.)- source_labels: [__meta_kubernetes_namespace]target_label: kubernetes_namespace- source_labels: [__meta_kubernetes_service_name]target_label: kubernetes_name- job_name: kubernetes-ingresseskubernetes_sd_configs:- role: ingressrelabel_configs:- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]action: keepregex: true- source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]regex: (.);(.);(.)replacement: ${1}://${2}${3}target_label: __param_target- target_label: __address__replacement: blackbox-exporter.example.com:9115- source_labels: [__param_target]target_label: instance- action: labelmapregex: __meta_kubernetes_ingress_label_(.)- source_labels: 
[__meta_kubernetes_namespace]target_label: kubernetes_namespace- source_labels: [__meta_kubernetes_ingress_name]target_label: kubernetes_name- job_name: kubernetes-podskubernetes_sd_configs:- role: podrelabel_configs:- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]action: keepregex: true- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]action: replacetarget_label: __metrics_path__regex: (.)- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]action: replaceregex: ([^:])(?::\d)?;(\d)replacement: $1:$2target_label: __address__- action: labelmapregex: __meta_kubernetes_pod_label_(.)- source_labels: [__meta_kubernetes_namespace]action: replacetarget_label: kubernetes_namespace- source_labels: [__meta_kubernetes_pod_name]action: replacetarget_label: kubernetes_pod_name
[rootmaster prometheus]# vim prometheus_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:labels:name: prometheus-deploymentname: prometheusnamespace: kube-system
spec:replicas: 1selector:matchLabels:app: prometheustemplate:metadata:labels:app: prometheusspec:containers:- image: prom/prometheus:v2.0.0name: prometheuscommand:- /bin/prometheusargs:- --config.file/etc/prometheus/prometheus.yml- --storage.tsdb.path/prometheus- --storage.tsdb.retention24hports:- containerPort: 9090protocol: TCPvolumeMounts:- mountPath: /prometheusname: data- mountPath: /etc/prometheusname: config-volumeresources:requests:cpu: 100mmemory: 100Milimits:cpu: 500mmemory: 2500MiserviceAccountName: prometheusvolumes:- name: dataemptyDir: {}- name: config-volumeconfigMap:name: prometheus-config
[rootmaster prometheus]# vim prometheus_service.yaml
kind: Service
apiVersion: v1
metadata:labels:app: prometheusname: prometheusnamespace: kube-system
spec:type: NodePortports:- port: 9090targetPort: 9090nodePort: 30003selector:app: prometheus
执行
[rootmaster prometheus]# kubectl apply -f prometheus_rbac.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[rootmaster prometheus]# kubectl apply -f prometheus_comfig.yaml
configmap/prometheus-config created
[rootmaster prometheus]# kubectl apply -f prometheus_deployment.yaml
deployment.apps/prometheus created
[rootmaster prometheus]# kubectl apply -f prometheus_service.yaml
service/prometheus created查看
[rootmaster prometheus]# kubectl get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.1.0.1 none 443/TCP 2d1h
default my-nginx-nfs NodePort 10.1.32.204 none 8070:32621/TCP 3h9m
default my-nginx-nfs2 ClusterIP 10.1.202.196 none 80/TCP 15m
ingress-nginx ingress-nginx-controller NodePort 10.1.58.218 none 80:30289/TCP,443:32195/TCP 47m
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.1.241.17 none 443/TCP 47m
kube-system kube-dns ClusterIP 10.1.0.10 none 53/UDP,53/TCP,9153/TCP 2d1h
kube-system metrics-server ClusterIP 10.1.33.66 none 443/TCP 152m
kube-system node-exporter NodePort 10.1.199.144 none 9100:31672/TCP 6m14s
kube-system     prometheus                           NodePort    10.1.178.35    <none>        9090:30003/TCP               98s
1.3 测试
用浏览器访问192.168.107.11:31672，这是node-exporter采集的数据。
访问192.168.107.11:30003，这是Prometheus的页面，依次点击Status——Targets，可以看到已经成功连接到k8s的apiserver。
2. 搭建grafana结合prometheus出图
2.1 部署grafana
[rootmaster prometheus]# vim grafana_deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:6.1.4
        name: grafana-core
        imagePullPolicy: IfNotPresent
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        #volumeMounts:                          # no volume mounted for now
        #- name: grafana-persistent-storage
        #  mountPath: /var
      #volumes:
      #- name: grafana-persistent-storage
      #  emptyDir: {}
[root@master prometheus]# vim grafana_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
  - port: 3000
  selector:
    app: grafana
    component: core
[root@master prometheus]# vim grafana_ing.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
spec:
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
Apply the manifests:
[root@master prometheus]# kubectl apply -f grafana_deploy.yaml
deployment.apps/grafana-core created
[root@master prometheus]# kubectl apply -f grafana_svc.yaml
service/grafana created
[root@master prometheus]# kubectl apply -f grafana_ing.yaml
ingress.networking.k8s.io/grafana created
Check the result:
[root@master prometheus]# kubectl get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default         kubernetes                            ClusterIP   10.1.0.1       <none>   443/TCP                      2d1h
default         my-nginx-nfs                          NodePort    10.1.32.204    <none>   8070:32621/TCP               3h17m
default         my-nginx-nfs2                         ClusterIP   10.1.202.196   <none>   80/TCP                       24m
ingress-nginx   ingress-nginx-controller              NodePort    10.1.58.218    <none>   80:30289/TCP,443:32195/TCP   56m
ingress-nginx   ingress-nginx-controller-admission    ClusterIP   10.1.241.17    <none>   443/TCP                      56m
kube-system     grafana                               NodePort    10.1.254.118   <none>   3000:30276/TCP               71s
kube-system     kube-dns                              ClusterIP   10.1.0.10      <none>   53/UDP,53/TCP,9153/TCP       2d1h
kube-system     metrics-server                        ClusterIP   10.1.33.66     <none>   443/TCP                      160m
kube-system     node-exporter                         NodePort    10.1.199.144   <none>   9100:31672/TCP               14m
kube-system     prometheus                            NodePort    10.1.178.35    <none>   9090:30003/TCP               9m55s
2.2 Test
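A quick reachability check of the Grafana NodePort before opening the browser; a sketch using the IP and port from the listing above:
[root@master prometheus]# curl -s -o /dev/null -w '%{http_code}\n' http://192.168.107.11:30276/login    # expect 200 once Grafana is up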
Visit 192.168.107.11:30276, which is the Grafana page; the account and password are both admin.
2.2.1 Add the Prometheus data source
2.2.2 Import a dashboard template
Enter the template ID; templates can be found at this site:
Dashboards | Grafana Labs
2.3 Dashboard results
VIII. Build the CI/CD environment: integrate GitLab with Jenkins and Harbor to build a pipeline that automatically pulls code, builds images, and pushes images
1. Deploy the GitLab environment
1.1 Install GitLab
This section references https://blog.csdn.net/weixin_56270746/article/details/125427722
1.1.1 Set up the GitLab yum repository (install GitLab from the Tsinghua mirror)
gitlab-ce is the community edition; gitlab-ee is the enterprise edition, which is paid.
Create a new gitlab-ce.repo under /etc/yum.repos.d/:
[root@gitlab ~]# cd /etc/yum.repos.d/
[root@gitlab yum.repos.d]# vim gitlab-ce.repo
[gitlab-ce]
name=gitlab-ce
baseurl=https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/
gpgcheck=0
enabled=1
[root@gitlab yum.repos.d]# yum clean all && yum makecache
1.1.2 Install GitLab
Install the latest version directly:
[root@gitlab yum.repos.d]# yum install -y gitlab-ce
After a successful installation you will see the banner printed by gitlab-ce.
1.1.3 Configure the GitLab site URL
GitLab's default configuration file is /etc/gitlab/gitlab.rb.
The default site URL setting is external_url 'http://gitlab.example.com'.
Here I change the GitLab site URL to http://192.168.107.17:8000.
[root@gitlab gitlab]# cd /etc/gitlab
[root@gitlab gitlab]# vim gitlab.rb
external_url 'http://192.168.107.17:8000'    # change this line
1.2 Start and access GitLab
1.2.1 Reconfigure and start
[root@gitlab gitlab]# gitlab-ctl reconfigure
When it finishes you will see output like the following.
1.2.2 Configure a DNAT rule on the firewalld server so that Windows can reach GitLab
[root@fiewalld ~]# vim snat_dnat.sh
#!/bin/bash
iptables -F
iptables -t nat -F

# enable route
echo 1 > /proc/sys/net/ipv4/ip_forward

# enable snat
iptables -t nat -A POSTROUTING -s 192.168.107.0/24 -o ens33 -j SNAT --to-source 192.168.31.69

# enable dnat
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.11
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.12
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.13

# add the rule below; note that the port is 8000
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 8000 -j DNAT --to-destination 192.168.107.17
1.2.3 Access from Windows
Open a browser, enter the GitLab server address and register a user as shown in the figure. After registering, logging in at http://192.168.107.17:8000 requires an account and password; logging in with the newly registered user reports an error because the administrator account has to be initialized first.
1.2.4 Configure the default access password
[root@gitlab gitlab]# cd /opt/gitlab/bin/    # switch to the directory the command is run from
[root@gitlab bin]# gitlab-rails console -e production    # initialize the password
--------------------------------------------------------------------------------
 Ruby:         ruby 3.0.6p216 (2023-03-30 revision 23a532679b) [x86_64-linux]
 GitLab:       16.3.1 (ea817127f2a) FOSS
 GitLab Shell: 14.26.0
 PostgreSQL:   13.11
------------------------------------------------------------[ booted in 62.10s ]
Loading production environment (Rails 7.0.6)
irb(main):001:0> u = User.where(id: 1).first
=> #<User id:1 @root>
irb(main):002:0> u.password = 'sc123456'
=> "sc123456"
irb(main):003:0> u.password_confirmation = 'sc123456'
=> "sc123456"
irb(main):004:0> u.save!
=> true
irb(main):005:0> exit
Seeing true means the password was set successfully.
You can now log in to the page with root / sc123456.
1.2.5 Log in
The root user logs in successfully.
1.3 Configure logging in with the user you created
The new user has to be approved with the root account first; after that, logging in again succeeds.
At this point the GitLab environment is set up.
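As an extra check before moving on, the bundled GitLab services can be listed with gitlab-ctl; a small sketch:
[root@gitlab ~]# gitlab-ctl status    # each component should show a "run:" line with a pid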
2. Deploy the Jenkins environment
2.1 Download the generic Java war package from the official site (the LTS long-term-support version is recommended)
Download address:
https://www.jenkins.io/download/
Download the generic war package here.
2.2 Download and install Java (JDK 11 or later), then configure the JDK environment variables
Reference: https://blog.csdn.net/m0_37048012/article/details/120519348
2.2.1 Install with yum
[root@jenkins javadoc]# yum install -y java-11-openjdk java-11-openjdk-devel    # install
[root@jenkins javadoc]# java -version    # check that it installed successfully
openjdk version "11.0.20" 2023-07-18 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1.el7_9) (build 11.0.20+8-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.20.0.8-1.el7_9) (build 11.0.20+8-LTS, mixed mode, sharing)
2.2.2 Find the Java installation directory
[root@jenkins javadoc]# whereis java
java: /usr/bin/java /usr/lib/java /etc/java /usr/share/java /usr/share/man/man1/java.1.gz
If it shows /usr/bin/java, run the following commands:
[root@jenkins javadoc]# ls -lr /usr/bin/java
lrwxrwxrwx 1 root root 22 9月 3 19:46 /usr/bin/java -> /etc/alternatives/java
[root@jenkins javadoc]# ls -lrt /etc/alternatives/java
lrwxrwxrwx 1 root root 64 9月 3 19:46 /etc/alternatives/java -> /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64/bin/java
2.2.3 Configure environment variables
[root@jenkins ~]# vim /etc/profile
####### add the following ########
#JAVA environment
JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64
JRE_HOME=$JAVA_HOME/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
#PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME PATH CLASS_PATH
Make the environment variables take effect:
[root@jenkins ~]# source /etc/profile
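A quick check that the variables took effect in the current shell (a sketch):
[root@jenkins ~]# echo $JAVA_HOME    # should print the JDK path configured above
[root@jenkins ~]# java -version      # should report OpenJDK 11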
2.3 Upload the jenkins.war package you just downloaded to the server
2.4 Start the Jenkins service
[root@jenkins ~]# nohup java -jar jenkins.war &    # let it run in the background
[root@jenkins local]# ps aux|grep jenkins
root 11790 106 13.6 2492292 136172 pts/0 Sl 20:40 0:06 java -jar jenkins.war
root 11824 0.0 0.0 112824 980 pts/1 R 20:40 0:00 grep --color=auto jenkins
By default Jenkins listens on port 8080. To start it on a different port, run it as: java -jar jenkins.war --httpPort=80
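Since nohup will not bring Jenkins back after a reboot, it can alternatively be run as a systemd service; a minimal sketch, assuming the war stays at /root/jenkins.war and java is at /usr/bin/java (both taken from this setup):
[root@jenkins ~]# cat > /etc/systemd/system/jenkins.service <<'EOF'
[Unit]
Description=Jenkins (standalone war)
After=network.target

[Service]
ExecStart=/usr/bin/java -jar /root/jenkins.war --httpPort=8080
Restart=on-failure
User=root

[Install]
WantedBy=multi-user.target
EOF
[root@jenkins ~]# systemctl daemon-reload
[root@jenkins ~]# systemctl enable --now jenkins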
2.5 Test access
Browse to the Jenkins server address on port 8080. This step takes a while.
When the "Unlock Jenkins" page appears, the Jenkins setup is working; you need to enter the administrator password here. The page says the administrator password is in /root/.jenkins/secrets/initialAdminPassword; open that file to get the password and enter it.
[root@jenkins local]# cat /root/.jenkins/secrets/initialAdminPassword
80e0160b23cf4187a0abe4974e6e9ac1
After clicking Continue you will see the plugin installation page; wait for all plugins to finish installing. Some plugins fail to install because they have prerequisites; when the first pass finishes, click Retry in the lower right corner to install the rest. When everything is installed, click Continue.
Create a user. At this point Jenkins is installed and you can start using it for continuous integration.
3. Deploy the Harbor environment
3.1 Install Docker and docker-compose
3.1.1 Install Docker
[root@harbor ~]# yum install -y yum-utils
[root@harbor ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
[root@harbor ~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
[root@harbor ~]# systemctl start docker
[root@harbor ~]# docker -v    # check that Docker installed successfully
Docker version 24.0.5, build ced0996
3.1.2 Install docker-compose
Download and install the Compose command-line plugin.
[root@harbor ~]# DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
[root@harbor ~]# echo $DOCKER_CONFIG
/root/.docker
[root@harbor ~]# mkdir -p $DOCKER_CONFIG/cli-plugins
Upload the docker-compose binary to the Linux host and place it in /root/.docker/cli-plugins/:
[root@harbor ~]# mv docker-compose /root/.docker/cli-plugins/
[root@harbor ~]# cd /root/.docker/cli-plugins/
[root@harbor cli-plugins]# ls
docker-compose
[root@harbor cli-plugins]# chmod +x docker-compose    # make it executable
[root@harbor cli-plugins]# cp docker-compose /usr/bin/    # put docker-compose in a directory on PATH
[root@harbor cli-plugins]# docker-compose --version    # check that it installed successfully
Docker Compose version v2.7.0
3.2 Install Harbor
3.2.1 Download the Harbor offline installer and upload it to the Linux server
3.2.2 Unpack it and modify the configuration
[root@harbor ~]# tar xf harbor-offline-installer-v2.1.0.tgz
[root@harbor ~]# ls
anaconda-ks.cfg harbor harbor-offline-installer-v2.1.0.tgz
[root@harbor ~]# cd harbor
[root@harbor harbor]# ls
common.sh harbor.v2.1.0.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@harbor harbor]# vim harbor.yml
Modify the following two settings: set hostname to this server's IP and change the http port (8089 is used later in this project), then comment out the https configuration block.
3.3 Log in to Harbor
[root@harbor harbor]# ./install.sh
On the Windows machine, open the site to configure Harbor: http://192.168.107.19:8089/
The default login username and password are admin / Harbor12345.
At this point the whole environment deployment is complete.
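Optionally, Harbor's state can also be checked from the command line before moving on; a small sketch (the health endpoint path is the one exposed by Harbor 2.x, so treat it as an assumption for other versions):
[root@harbor harbor]# docker compose ps    # all Harbor containers should be Up (healthy)
[root@harbor harbor]# curl -s http://192.168.107.19:8089/api/v2.0/health    # overall status should be "healthy"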
4. Integrate GitLab with Jenkins and Harbor to build a pipeline task that pulls code, builds images and pushes them
Reference: https://www.cnblogs.com/linanjie/p/13986198.html
When the pipeline task runs in Jenkins, it pulls the code from GitLab, packages it with Maven, builds a Docker image, and pushes the image to Harbor.
4.1 The Jenkins server needs Docker installed and must be able to log in to the Harbor service to push and pull images
4.1.1 Install Docker on the Jenkins server
[root@jenkins ~]# yum install -y yum-utils
[root@jenkins ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
[root@jenkins ~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
[root@jenkins ~]# systemctl start docker
[root@jenkins ~]# docker -v    # check that Docker installed successfully
Docker version 24.0.5, build ced0996
4.1.2 Configure the Jenkins server to log in to the Harbor service
[root@jenkins local]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.107.19:8089"]
}
Restart Docker:
[root@jenkins local]# systemctl daemon-reload
[root@jenkins local]# systemctl restart docker
4.1.3 Test the login
[root@jenkins local]# docker login 192.168.107.19:8089
Username: admin    # use the default username and password from earlier
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
The login succeeded.
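To avoid the plaintext-password warning shown above, the password can be piped in on stdin instead; a small sketch:
[root@jenkins local]# echo 'Harbor12345' | docker login 192.168.107.19:8089 -u admin --password-stdin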
4.2 Install git on the Jenkins server
[root@jenkins .ssh]# yum install -y git
4.3 Install Maven on the Jenkins server
Reference: https://blog.csdn.net/liu_chen_yang/article/details/130106529
4.3.1 Download the package
Go to the Tsinghua University open-source mirror site to find the download.
Search for apache, go into the apache directory, find maven and download it. Choose the version you need (major versions at the top level, minor versions inside). I clicked the latest maven-4, then 4.0.0-alpha-7, then binaries, and chose the zip format. After downloading, upload it to the server and unpack it.
4.3.2 Unpack the downloaded package
[root@jenkins ~]# mkdir -p /usr/local/maven
[root@jenkins ~]# ls
anaconda-ks.cfg apache-maven-4.0.0-alpha-7-bin.zip jenkins.war nohup.out
[root@jenkins ~]# mv apache-maven-4.0.0-alpha-7-bin.zip /usr/local/maven
[root@jenkins ~]# cd /usr/local/maven
[root@jenkins ~]# yum install unzip -y
[root@jenkins ~]# unzip apache-maven-4.0.0-alpha-7-bin.zip
4.3.3 Configure environment variables
[root@jenkins ~]# vim /etc/profile
###### add the following
MAVEN_HOME=/usr/local/maven/apache-maven-4.0.0-alpha-7
export PATH=${MAVEN_HOME}/bin:${PATH}
Make the environment variables take effect:
[root@jenkins ~]# source /etc/profile
4.3.4 Verify mvn
[root@jenkins ~]# mvn -v
Unable to find the root directory. Create a .mvn directory in the root directory or add the root="true" attribute on the root project's model to identify it.
Apache Maven 4.0.0-alpha-7 (bf699a388cc04b8e4088226ba09a403b68de6b7b)
Maven home: /usr/local/maven/apache-maven-4.0.0-alpha-7
Java version: 11.0.20, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64
Default locale: zh_CN, platform encoding: UTF-8
OS name: linux, version: 3.10.0-1160.el7.x86_64, arch: amd64, family: unix
The output above shows that the installation succeeded.
4.4 Create a test project in GitLab
Reference: https://www.cnblogs.com/linanjie/p/13986198.html
Here I create a Spring project from a template; pick any project name. The template project is created successfully.
4.5 Create a new dev project in Harbor
4.6 Configure the JDK and Maven in the Jenkins global tool configuration
After editing, click Apply and then Save.
4.7 Create a pipeline task in the Jenkins view
The plugins needed in Jenkins are:
Pipeline, docker-build-step, Docker Pipeline, Docker plugin, and Role-based Authorization Strategy
Make sure the plugins above are installed in Jenkins.
4.7.1 The pipeline task needs a pipeline script, and the first step of the script should be pulling the project from GitLab. Click Pipeline Syntax, then click Add, select the credential you just created, and record the generated snippet:
git credentialsId: '0e0ecf12-6c3d-449b-a957-124d18f2fbb7', url: 'http://192.168.107.17:8001/zhouxin/spring.git'
4.7.2 Write the pipeline
pipeline{
    agent any
    environment {
        // the Harbor address
        HARBOR_HOST = "192.168.107.19:8089"
        BUILD_VERSION = createVersion()
    }
    tools{
        // the names are the aliases defined in the Jenkins global tool configuration
        jdk 'jdk11'
        maven 'maven4.0.0'
    }
    stages{
        stage('Pull code'){
            // check out the code
            steps {
                // use the credential snippet generated earlier
                git credentialsId: 'f7c7796f-810c-4ba5-83cb-573f1be3e707', url: 'http://192.168.107.17:8001/zhouxin/my-spring.git'
            }
        }
        stage('Maven build'){
            steps {
                sh "mvn clean package -Dmaven.test.skip=true"
            }
        }
        stage('Build the docker image and push it to Harbor'){
            // docker build and push
            steps {
                sh """
                docker build -t springproject:$BUILD_VERSION .
                docker tag springproject:$BUILD_VERSION ${HARBOR_HOST}/dev/springproject:$BUILD_VERSION
                """
                // use your own Harbor username and password here
                sh "docker login -u admin -p Harbor12345 ${HARBOR_HOST}"
                sh "docker push ${HARBOR_HOST}/dev/springproject:$BUILD_VERSION"
            }
        }
    }
}

def createVersion() {
    // generate a version number for this build, e.g. 20201116165759_1
    return new Date().format('yyyyMMddHHmmss') + "_${env.BUILD_ID}"
}
Make sure the dev project has already been created in Harbor. You can learn the pipeline syntax on your own; the script should normally not contain plaintext passwords. For demonstration purposes I used the Harbor password directly here, but the proper approach is to create another credential for the Harbor username and password and have the script read them from that credential.
After writing the script, click Apply and then Save.
Go back to the view page and build the pipeline task you just created. The first build takes relatively long because Maven has to download its dependencies, so wait patiently for it to finish; mine was quick because the dependencies had already been downloaded. After several attempts and some troubleshooting (the errors are written up at the end of this article), it succeeded.
5. Verify
Check in Harbor: the image has been uploaded. At this point the pipeline work is complete.
IX. Deploy a jump server to restrict users' access to the internal network
1. Configure a DNAT rule on the firewalld server so that a user who sshes to it is automatically forwarded to the jump server
[root@fiewalld ~]# vim snat_dnat.sh
######### add the following rule #####
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 22 -j DNAT --to-destination 192.168.107.14:22
Test: ssh from Windows to the firewalld server and check whether the session is automatically forwarded to the jump server. It is, so the configuration works.
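After adding the rule, re-run the script and confirm the rule is active; a quick sketch:
[root@fiewalld ~]# bash snat_dnat.sh
[root@fiewalld ~]# iptables -t nat -L PREROUTING -n --line-numbers | grep 'dpt:22'    # the DNAT rule to 192.168.107.14 should be listed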
2. On the jump server, allow ssh only from the 192.168.31.0/24 network
[root@jump_server ~]# yum install iptables -y
[root@jump_server ~]# iptables -A INPUT -p tcp --dport 22 -s 192.168.31.0/24 -j ACCEPT
Note that this only acts as a whitelist if ssh traffic from other sources is also dropped, for example by appending a DROP rule for port 22 after this ACCEPT rule.
3. Set up passwordless ssh from the jump server to every other internal server
Only one server is shown here; the others are done the same way, just copy the public key to each of the other servers in turn.
[root@jump_server ~]# ssh-keygen    # generate a key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory /root/.ssh.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9axtEvUoHVNh2MQCRO7UgwHn8CV6M05XOeQeCVgPg0 root@jump_server
The key's randomart image is:
+---[RSA 2048]----+
| .E*. |
| oOo**o.. |
| . Xo o|
| .o** *..|
| S . o .|
| . * . |
| o |
| o |
| |
+----[SHA256]-----+
[root@jump_server ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.19    # copy the public key to the server you want passwordless access to
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: /root/.ssh/id_rsa.pub
The authenticity of host '192.168.107.19 (192.168.107.19)' can't be established.
ECDSA key fingerprint is SHA256:YeJAjO9gERUBkV531t5TE3PJy74ezOWN5XlC98sMqxQ.
ECDSA key fingerprint is MD5:04:ab:31:bc:ad:88:80:7c:53:3d:77:95:55:01:9c:b0.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.107.19's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.107.19'"
and check to make sure that only the key(s) you wanted were added.

[root@jump_server ~]# ssh root@192.168.107.19    # test that it works
Last login: Mon Sep 4 20:41:37 2023 from 192.168.31.67
[root@harbor ~]#
4. Verify
Log in to the firewalld server from a machine on the 192.168.107.0/24 network and check whether the session is forwarded to the jump server: it is not.
Then log in to the firewalld server from a machine on the 192.168.31.0/24 network: the session is automatically forwarded to the jump server.
At this point the jump server is working.
X. Install Zabbix to monitor all the servers (CPU, memory, network bandwidth, and so on)
XI. Use ab to stress-test the whole k8s cluster and the related servers
The ansible server is used to run the stress tests.
1. Install ab
[root@ansible ~]# yum install httpd-tools -y
2. Test
A stress test against one server is shown here; the other endpoints are tested the same way (a loop over them is sketched below).
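A sketch of such a loop; the URL list below is an assumption based on the DNAT rule and NodePort services created earlier, and only the headline numbers are kept from each run:
[root@ansible ~]# for url in http://192.168.31.69/ http://192.168.107.11:32621/ http://192.168.107.11:30003/graph; do echo "== $url =="; ab -n 1000 -c 100 -r "$url" | grep -E 'Requests per second|Failed requests'; done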
[root@ansible ~]# ab -n 1000 -c 1000 -r http://192.168.31.69/
This is ApacheBench, Version 2.3 $Revision: 1430300 $
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.31.69 (be patient)    # progress of the run
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:                          # server software and version
Server Hostname:        192.168.31.69     # server hostname
Server Port:            80                # server port

Document Path:          /                 # page being tested
Document Length:        0 bytes           # size of the page

Concurrency Level:      1000              # number of concurrent requests, i.e. the number of simulated clients
Time taken for tests:   0.384 seconds     # total time taken by the test
Complete requests:      1000              # number of completed requests
Failed requests:        2000              # number of failed requests
   (Connect: 0, Receive: 1000, Length: 0, Exceptions: 1000)
Write errors:           0
Total transferred:      0 bytes           # total data transferred during the test, including headers
HTML transferred:       0 bytes           # actual HTML bytes transferred during the test
Requests per second:    2604.40 [#/sec] (mean)    # requests handled per second, a key measure of server throughput; (mean) means this is an average
Time per request:       383.966 [ms] (mean)       # average request response time; (mean) means this is an average
# The 0.384 [ms] figure below is the average actual running time of each request across all concurrent requests;
# because the CPU does not process concurrent requests truly simultaneously but round-robins them by time slice,
# the first Time per request is roughly the second Time per request multiplied by the concurrency level.
Time per request:       0.384 [ms] (mean, across all concurrent requests)
Transfer rate:          0.00 [Kbytes/sec] received    # transfer rate, average traffic per second; helps rule out excessive network traffic as the cause of slow responses

Connection Times (ms)                     # connection times
              min  mean[+/-sd] median   max
Connect: 0 0 0.0 0 0
Processing: 0 0 1.0 0 7
Waiting: 0 0 0.0 0 0
Total:          0    0   1.0      0       7

Percentage of the requests served within a certain time (ms)    # percentage of requests served within a given time
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      3
  98%      5
  99%      5
 100%      7 (longest request)
[root@ansible ~]#

Problems encountered during the project
1. After rebooting the servers, Xshell could no longer connect to any server except the firewalld server
Troubleshooting:
Check whether the sshd process is running: it is, so that is not the problem.
Check the firewall rules on the firewalld server: the SNAT rule configured earlier is no longer there, because the script that sets it up does not run again after a reboot.
Fix: bash snat_dnat.sh
Check the firewall rules again: the SNAT rule is back in effect, and Xshell can connect to the other servers again.
So that the SNAT rule survives every reboot, add bash snat_dnat.sh to a boot-time script.
The steps are as follows:
[root@fiewalld ~]# chmod +x /root/snat_dnat.sh    # make the script executable
[root@fiewalld ~]# vi /etc/rc.d/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run chmod x /etc/rc.d/rc.local to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
/root/snat_dnat.sh    # add this line
[root@fiewalld ~]# chmod +x /etc/rc.d/rc.local    # in CentOS 7 the permissions on /etc/rc.d/rc.local were reduced, so it must be made executable for this to work
2. A pod would not start: the PVC failed to bind to the PV, because the storageClassName in the PVC and PV yaml files did not match
3. When testing access, the returned content was not what I had configured: the web data file mount failed even though the nginx.conf configuration file mount succeeded
4. The last step of the pipeline failed. Looking at the error message, the cause was that Docker was not running.
Fix: start Docker on the Jenkins server.
[root@jenkins ~]# service docker start
Redirecting to /bin/systemctl start docker.service
[root@jenkins ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5. The last step of the pipeline failed again: it could not log in to Harbor
Error message: the login defaults to port 443, which we never enabled.
Fix: restart Harbor.
[root@harbor ~]# cd harbor
[root@harbor harbor]# ./install.sh
Test:
[root@jenkins ~]# docker login -u admin -p Harbor12345 192.168.107.19:8089
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
The login succeeded.

Project takeaways
I am now much more familiar with how SNAT and DNAT rules work and how to use them, and with using k8s and deploying a cluster; reading logs is a big help when troubleshooting. Plan the project architecture diagram ahead of time and be careful while setting up the environment. I gained a deeper understanding of the Docker and k8s techniques used here, including PV, PVC and NFS volume mounts for data consistency, image building, and probes, and watching the HPA behaviour made its purpose and mechanism much clearer. I also understand the Prometheus and Zabbix monitoring approaches better. Building the CI/CD pipeline only succeeded after many attempts, so I now know that workflow much better; I am more familiar with how Ingress implements load balancing, and I understand how GitLab, Jenkins and Harbor are wired together to run a pipeline, including the principles behind it, after more than ten tries to get it working, so keep a steady mindset and do not give up. Running many virtual machines at the same time can make the computer lag, so be patient and do not rush; when troubleshooting keeps failing, stay calm and approach the problem from multiple angles. Finally, I now have a solid understanding of how a jump server works and of the purpose of stress testing.