1. Introduction

Docs: https://kubernetes.io/zh/docs/home/

Architecture:

(architecture diagram)

minikube

https://github.com/kubernetes/minikube

Blog: https://juejin.cn/post/6952331691524358174?share_token=4fcf3bb4-c4b8-4107-ad1e-c9a95160271a#heading-4

Background

(diagram: traditional vs. virtualized vs. containerized deployment)

Traditional deployment: applications run directly on physical servers, and there is no way to bound their resource usage, so one application can take most of the resources and degrade the others.

Virtualized deployment: virtual machines isolate applications and make better use of resources, but each VM virtualizes a full set of hardware and runs its own operating system.

Containerized deployment: containers are similar to VMs in that they have their own filesystem, CPU share, memory, and so on, but they are lightweight and fast to create. They enable continuous integration and deployment, resource isolation, efficient resource usage, consistency across development, test, and production environments, and observability of both OS-level and application-level information.

When a container fails it has to be restarted without taking the service down; this is what Kubernetes provides:

  • Service discovery and load balancing
    • Exposes containers by DNS name or IP address; if traffic to a container is high, it can be load-balanced across replicas
  • Storage orchestration
    • Automatically mounts a storage system of your choice: local storage, public cloud providers, etc.
  • Automated rollouts and rollbacks
  • Automatic bin packing: you declare how much CPU and RAM each container is allowed
  • Self-healing: restarts containers that fail
  • Secret and configuration management: stores and manages sensitive information such as passwords, tokens, and SSH keys

Architecture

(cluster architecture diagram)

Components

A cluster is made up of a group of machines called nodes, with at least one worker node.

Control plane components

The control plane components (Control Plane Components) make global decisions for the cluster.

Usually all control plane components are started on the same machine.

kube-apiserver

Exposes the Kubernetes API; the front end of the Kubernetes control plane.

etcd

A consistent, highly available key-value store used as the backing database for all cluster data.

kube-scheduler

Watches for newly created Pods that have no node (Node) assigned, and picks a node for them to run on.

kube-controller-manager

Node controller: notices and responds when nodes go down

Job controller: watches Job objects that represent one-off tasks and creates Pods to run those tasks to completion

Endpoints controller: populates Endpoints objects (joins Services and Pods)

Service account and token controllers: create default accounts and API access tokens for new namespaces

kubelet

Responsible for maintaining the container lifecycle, and also manages Volumes and networking.

Container runtime

Responsible for image management and for actually running Pods and containers.

Node components

kubelet

An agent that runs on every node and makes sure containers are running in Pods.

kube-proxy

A network proxy that runs on every node; it maintains the network rules on the node and allows network sessions from inside or outside the cluster to communicate with Pods.

Add-ons

CoreDNS

  • Provides a DNS service that resolves domain names to IPs for Services (SVC) in the cluster

Dashboard

  • Provides a browser-based (B/S) entry point to the K8s cluster

Ingress Controller

  • Kubernetes itself only provides layer-4 proxying; Ingress adds layer-7 proxying

Prometheus

  • Provides resource monitoring for the K8s cluster

Federation

  • Provides unified management of multiple K8s clusters across data centers, i.e. clusters that span availability zones

Basic objects

  • Pod -> the basic unit in the cluster
    • DaemonSet: runs one on every machine
    • StatefulSet: for stateful applications
  • Service -> solves how to reach the services inside Pods; a unified entry point for an application
  • Volume -> a storage volume in the cluster
  • Namespace -> provides virtual isolation within the cluster (see the sketch below)
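
A minimal kubectl sketch of how these objects show up in practice (the names and image used here are arbitrary examples):

kubectl create namespace demo                          # Namespace
kubectl create deployment web --image=nginx -n demo    # Pods managed by a Deployment
kubectl expose deployment web --port=80 -n demo        # Service in front of the Pods
kubectl get pod,svc -n demo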

2. Installation

virtualbox

https://www.virtualbox.org/wiki/Downloads

Open the virtualbox application, open the global settings from the File menu, and set a new default folder for the boxes.

Network configuration


DNS resolution

# add a DNS server to the NIC config, then restart networking
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DNS1="114.114.114.114"
service network restart

VM network adapters

Originally

NAT (network address translation): both the VM and the host can reach the Internet, but every VM gets the same IP and is reached through port forwarding

Host-only network: the VMs and the host share an internal network, which makes it easy to connect from development tools

NAT Network: what we use for k8s; the VMs form a LAN

If plain NAT is used, the VMs all get the same IP, which causes problems

Create a NAT Network in the global settings


For each VM, change network adapter 1 to the NAT Network and regenerate its MAC address


File location

The path must not contain Chinese characters!


Select the three VMs and start them headless

10.0.2.6/24
10.0.2.15/24
10.0.2.5/24
ping 10.0.2.6 # try it
ping baidu.com

Firewall, etc.

# right-click > send the commands to all sessions: turn off the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
cat /etc/selinux/config

swapoff -a # temporarily (swap comes back after a reboot)
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanently
free -g # verify: swap must be 0
Service          Protocol        Action   Start port   End port   Notes
ssh              TCP             allow    22
etcd             TCP             allow    2379         2380
apiserver        TCP             allow    6443
calico           TCP             allow    9099         9100
bgp              TCP             allow    179
nodeport         TCP             allow    30000        32767
master           TCP             allow    10250        10258
dns              TCP             allow    53
dns              UDP             allow    53
local-registry   TCP             allow    5000                     needed for offline environments
local-apt        TCP             allow    5080                     needed for offline environments
rpcbind          TCP             allow    111                      needed when using NFS
ipip             IPENCAP / IPIP  allow                             Calico needs the IPIP protocol
metrics-server   TCP             allow    8443
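
If firewalld is kept running instead of being disabled, the ports above can be opened explicitly; a minimal sketch for the master node (adjust to the services actually in use):

firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10258/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload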

Hostname to IP address mapping [may change]

ip addr
vi /etc/hosts
10.0.2.6 k8s-node1
10.0.2.15 k8s-node2
10.0.2.5 k8s-node3

# to set a specific hostname
hostnamectl set-hostname newhostname
# where newhostname is the new hostname

cat /etc/hosts

Traffic

# pass bridged IPv4 traffic to iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF

sysctl --system
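
If the two keys are reported as unknown, the br_netfilter module is probably not loaded yet; a quick check (sketch, standard CentOS 7 commands):

modprobe br_netfilter
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables    # should print 1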

Vagrant

https://www.vagrantup.com/

Box mirror addresses

# try: move the Vagrant home directory
setx VAGRANT_HOME "E:\vm\virtualbox" /M
# download a single box
# centos
vagrant init centos7 https://mirrors.ustc.edu.cn/centos-cloud/centos/7/vagrant/x86_64/images/CentOS-7.box
# ubuntu
vagrant init ubuntu-bionic https://mirrors.tuna.tsinghua.edu.cn/ubuntu-cloud-images/bionic/current/bionic-server-cloudimg-amd64-vagrant.box

vagrant up

Vagrantfile

Remember to set the VMs' network adapters to the NAT network

Disk expansion: vagrant plugin install vagrant-disksize

sudo fdisk -l
sudo fdisk /dev/sda
# p: print the partition table (sda1 and sda2 by default)
# n: create a new partition
# p: make it a primary partition
# 3: use partition number 3
# press Enter twice to accept the default start and end sectors
# t: change the partition type
# 3: select partition 3
# 8e: set the type to Linux LVM
# w: write the changes and exit
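# Sketch of the follow-up LVM steps (assumption: the root filesystem sits on LVM;
# volume group / logical volume names vary by box -- check them with vgs/lvs first):
partprobe                                # re-read the partition table
pvcreate /dev/sda3                       # turn the new partition into a physical volume
vgextend centos /dev/sda3                # add it to the volume group (assumed to be "centos")
lvextend -l +100%FREE /dev/centos/root   # grow the root logical volume
xfs_growfs /                             # grow the XFS filesystem online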
Vagrant.configure("2") do |config|
   (1..3).each do |i|
        config.vm.define "k8s-node#{i}" do |node|
            # Set the box for the VM
            node.vm.box = "centos/7"
            config.vm.box_url = "https://mirrors.ustc.edu.cn/centos-cloud/centos/7/vagrant/x86_64/images/CentOS-7.box"
            config.disksize.size = "100GB"
            # Set the VM hostname
            node.vm.hostname="k8s-node#{i}"

            # Set the VM IP
            node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"

            # Shared folder between the host and the VM
            # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"

            # VirtualBox-specific settings
            node.vm.provider "virtualbox" do |v|
                # VM name
                v.name = "k8s-node#{i}"
                # VM memory (MB)
                v.memory = 4096
                # Number of CPUs
                v.cpus = 4
            end
        end
   end
end

Commands

vagrant up
vagrant ssh k8s-node1
# initial setup
su root
vagrant            # enter the password (the box default is "vagrant")
vi /etc/ssh/sshd_config
# change
#PasswordAuthentication no
# to
PasswordAuthentication yes
service sshd restart

sudo passwd root

exit
exit

You may also need to adjust the VM network adapters; with the Vagrantfile above, node1's private IP is 192.168.56.100.

Docker

 yum remove docker \
     docker-client \
     docker-client-latest \
     docker-common \
     docker-latest \
     docker-latest-logrotate \
     docker-logrotate \
     docker-engine

sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2

sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum list docker-ce.x86_64  --showduplicates |sort -r 
yum install -y docker-ce-18.06.3.ce-3.el7     # install a pinned version
# sudo yum install -y docker-ce docker-ce-cli containerd.io # installing without pinning the version errored out here

sudo mkdir -p /etc/docker

cat > /etc/docker/daemon.json <<EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "registry-mirrors": ["https://kpw8rst3.mirror.aliyuncs.com"]
}
EOF

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://kpw8rst3.mirror.aliyuncs.com"]
}
EOF



sudo systemctl daemon-reload
sudo systemctl restart docker

systemctl enable docker
docker ps -a
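
A quick sanity check that Docker picked up the settings from daemon.json (sketch):

docker info | grep -i "cgroup driver"      # should print: Cgroup Driver: systemd
docker info | grep -A1 "Registry Mirrors"  # should list the Aliyun mirror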

Base packages

  • kubeadm: the command used to bootstrap the cluster.
  • kubelet: runs on every node in the cluster and starts Pods and containers.
  • kubectl: the command-line tool for talking to the cluster.
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# check that the packages are available:
yum list|grep kube

# install:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

# enable at boot:
systemctl enable kubelet
systemctl start kubelet
# kubelet cannot run successfully yet; it keeps restarting until the node has joined a master
systemctl status kubelet

kubectl

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

k8s-master [may change]

vim master_images.sh
#!/bin/bash

images=(
    kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
    kube-controller-manager:v1.17.3
    kube-scheduler:v1.17.3
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
#   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName  k8s.gcr.io/$imageName
done

# pull all the component images
chmod 700 master_images.sh
./master_images.sh
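
# quick check that the images were pulled (sketch)
docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers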





# change the advertise address to this machine's IP
kubeadm init \
--apiserver-advertise-address=10.0.2.6 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.17.3 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16


# error seen here: mismatched docker version



# printed by kubeadm init; run these as your normal user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config



# !!!!!!!!  printed by kubeadm init; the join needs the network add-on, so apply the [flannel] manifest below first
kubeadm join 10.0.2.4:6443 --token a0l0br.8ucwd8fc5uidq6xv \
    --discovery-token-ca-cert-hash sha256:fe39185433efc499546acc5dcd4e9a8cb9b95134170ff4a379e4d8888dd8a548





# if the join fails
# the node's clock may be wrong
        # kubeadm reset        # reset and start over
        # rm -rf $HOME/.kube
        # as a last resort:
        # kubeadm join *** --ignore-preflight-errors=all

Token expiry

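If the token from kubeadm init has expired, a new join command can be generated on the master; a minimal sketch using the standard kubeadm subcommands:

kubeadm token list
kubeadm token create --print-join-command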

Network add-on

## not reachable from here
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Use the kube-flannel.yml from the resources instead: it defines the pod security policy and cluster config and runs on every node. If the flannel image cannot be pulled, swap it for a Docker Hub mirror or search for flannel on Gitee
# quay.io/coreos/flannel:v0.11.0-arm64 -> jmgao1983/flannel:v0.11.0-amd64
kubectl apply -f kube-flannel.yml
# delete
kubectl get pods -A
# make sure kube-flannel-ds-amd64-hrrg6 is Running
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f9c544f75-z68bl            0/1     Pending   0          4m55s
kube-system   coredns-7f9c544f75-z75jq            0/1     Pending   0          4m55s
kube-system   etcd-k8s-node1                      1/1     Running   0          4m51s
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          4m51s
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          4m51s
kube-system   kube-flannel-ds-amd64-dw6ss         1/1     Running   0          16s
kube-system   kube-proxy-w2n27                    1/1     Running   0          4m55s
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          4m51s

# cluster nodes should be Ready
kubectl get nodes

kubectl get pods --all-namespaces
# make sure kube-flannel-ds-amd64-hrrg6 is Running

# then copy the join command above and run it on the [worker] servers
kubeadm join 10.0.2.6:6443 --token tvf99z.wsd4ocawda7clok3 \
    --discovery-token-ca-cert-hash sha256:9bf9e1948f7300652e87d530b0750b5b9520f87172e7658a2b9543a86f26c6a7

# if this fails, the clocks may be out of sync
ntpdate cn.pool.ntp.org 
date

[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   9m48s   v1.17.3
k8s-node2   Ready    <none>   2m9s    v1.17.3
k8s-node3   Ready    <none>   2m9s    v1.17.3

If a node stays NotReady, the network add-on has to be configured on it as well.

# pods across the cluster
kubectl get pod -n kube-system -o wide
# wait until they are all Running
kube-flannel-ds-amd64-6g8ln         1/1     Running   0          88s   10.0.2.5     k8s-node3   <none>           <none>
kube-flannel-ds-amd64-jjvxc         1/1     Running   0          31m   10.0.2.6     k8s-node1   <none>           <none>
kube-flannel-ds-amd64-kzxcb         1/1     Running   0          91s   10.0.2.15    k8s-node2   <none>           <none>

flannel.yaml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Ingress

Controls access to the Service network and exposes a single external entry point; implemented with nginx.

DaemonSet: every server runs an ingress controller.

yaml

kubectl apply -f ingress-controller.yaml

Defines the namespace, the service account, the RBAC rules, and a DaemonSet (so every server runs one).
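
After applying ingress-controller.yaml, a quick check that the controller DaemonSet is running on every node (sketch):

kubectl get ds -n ingress-nginx
kubectl get pods -n ingress-nginx -o wide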

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: DaemonSet 
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: siriuszg/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  #type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

kubectl apply -f ingress-tomcat6.yaml

Using it with Tomcat

Routing by domain name


ingress-tomcat6.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: tomcat6.atguigu.com
    http:
      paths:
        - backend:
           serviceName: tomcat6
           servicePort: 80


apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: mingyue
spec:
  rules:
  - host: mingyuetest.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: psedu-admin-front
          servicePort: 80  # the Service's port
      - path: /prod-api
        backend:
          serviceName: psedu-gateway
          servicePort: 8080

kubectl apply -f ingress.yaml

kubectl get ingress
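
The tomcat6 Ingress above assumes a Service named tomcat6 already exists; a minimal sketch to create one and test the routing (the image tag and node IP are assumptions, adjust to your setup):

kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
kubectl expose deployment tomcat6 --port=80 --target-port=8080
# add a hosts entry for tomcat6.atguigu.com on the client, or fake the Host header:
curl -H "Host: tomcat6.atguigu.com" http://192.168.56.100/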

minikube

# root
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
install minikube-linux-amd64 /usr/local/bin/minikube


# if minikube refuses to run as root, create a normal user
useradd tomx
passwd tomx          # set a password
usermod -aG docker tomx && newgrp docker
su tomx              # switch to that user

# alternative: do not use the docker driver (run directly on the host)
yum install epel-release
yum install conntrack-tools
minikube start --image-mirror-country=cn --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --vm-driver=none
# downloads kubectl on first use
minikube kubectl -- get po -A
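
Optional checks once it is up (sketch):

minikube status
alias kubectl="minikube kubectl --"
kubectl get nodes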

dashboard

Default manifest


apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Expose the Service on NodePort 30001 (NodeIP:30001)
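
The Service above is ClusterIP by default; one way to switch it to NodePort 30001 (a sketch, this could equally be done by editing the yaml before applying it):

kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'
kubectl -n kube-system get svc kubernetes-dashboard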

A token is required to log in; the command to get it:

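A minimal sketch for creating an admin account and reading its token (the account name is arbitrary):

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')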

StorageClass

This apparently could not be made to work.

# a default storageClass is required
kubectl get sc

# docs: https://v2-1.docs.kubesphere.io/docs/zh-CN/appendix/install-openebs/
kubectl get node -o wide
kubectl describe node k8s-node1 | grep Taint
# Taints:             node-role.kubernetes.io/master:NoSchedule
# the taint has to be removed first
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
# openebs
kubectl create ns openebs
# this URL no longer works; use openebs.yaml below
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.5.0.yaml
kubectl apply -f openebs.yaml
# see https://blog.csdn.net/RookiexiaoMu_a/article/details/119859930

# wait until everything is Running
kubectl get pod --all-namespaces
kubectl get sc

# set the default storageClass
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# check the OpenEBS pods; if they are all Running, the storage layer installed successfully
kubectl get pod -n openebs

# restore the taint
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule
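
A minimal sketch to verify that the default StorageClass can provision a volume (names are arbitrary; a local hostpath PVC may stay Pending until a Pod actually mounts it):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-pvc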

openebs-operator

#
#                             DEPRECATION NOTICE
#    This operator file is deprecated in 2.11.0 in favour of individual operators
#       for each storage engine and the file will be removed in version 3.0.0
#
# Further specific components can be deploy using there individual operator yamls
#
# To deploy cStor:
# https://github.com/openebs/charts/blob/gh-pages/cstor-operator.yaml
#
# To deploy Jiva:
# https://github.com/openebs/charts/blob/gh-pages/jiva-operator.yaml
#
# To deploy Dynamic hostpath localpv provisioner:
# https://github.com/openebs/charts/blob/gh-pages/hostpath-operator.yaml
#
#
# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context

# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
  name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openebs-maya-operator
  namespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openebs-maya-operator
rules:
- apiGroups: ["*"]
  resources: ["nodes", "nodes/proxy"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["statefulsets", "daemonsets"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["resourcequotas", "limitranges"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "certificatesigningrequests"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
  verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshots", "volumesnapshotdatas"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["openebs.io"]
  resources: [ "*"]
  verbs: ["*" ]
- apiGroups: ["cstor.openebs.io"]
  resources: [ "*"]
  verbs: ["*" ]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
- apiGroups: ["*"]
  resources: ["poddisruptionbudgets"]
  verbs: ["get", "list", "create", "delete", "watch"]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openebs-maya-operator
subjects:
- kind: ServiceAccount
  name: openebs-maya-operator
  namespace: openebs
roleRef:
  kind: ClusterRole
  name: openebs-maya-operator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maya-apiserver
  namespace: openebs
  labels:
    name: maya-apiserver
    openebs.io/component-name: maya-apiserver
    openebs.io/version: 2.12.0
spec:
  selector:
    matchLabels:
      name: maya-apiserver
      openebs.io/component-name: maya-apiserver
  replicas: 1
  strategy:
    type: Recreate
    rollingUpdate: null
  template:
    metadata:
      labels:
        name: maya-apiserver
        openebs.io/component-name: maya-apiserver
        openebs.io/version: 2.12.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: maya-apiserver
        imagePullPolicy: IfNotPresent
        image: openebs/m-apiserver:2.12.0
        ports:
        - containerPort: 5656
        env:
        # OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for maya api server version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        # OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for maya api server version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://172.28.128.3:8080"
        # OPENEBS_NAMESPACE provides the namespace of this deployment as an
        # environment variable
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
        # environment variable
        - name: OPENEBS_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        # OPENEBS_MAYA_POD_NAME provides the name of this pod as
        # environment variable
        - name: OPENEBS_MAYA_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default
        # storageclass and storagepool will not be created.
        - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
          value: "true"
        # OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be
        # configured as a part of openebs installation.
        # If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.
        # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
        # is set to true
        - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
          value: "false"
        # OPENEBS_IO_INSTALL_CRD environment variable is used to enable/disable CRD installation
        # from Maya API server. By default the CRDs will be installed
        # - name: OPENEBS_IO_INSTALL_CRD
        #   value: "true"
        # OPENEBS_IO_BASE_DIR is used to configure base directory for openebs on host path.
        # Where OpenEBS can store required files. Default base path will be /var/openebs
        # - name: OPENEBS_IO_BASE_DIR
        #   value: "/var/openebs"
        # OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath
        # to be used for saving the shared content between the side cars
        # of cstor volume pod.
        # The default path used is /var/openebs/sparse
        #- name: OPENEBS_IO_CSTOR_TARGET_DIR
        #  value: "/var/openebs/sparse"
        # OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath
        # to be used for saving the shared content between the side cars
        # of cstor pool pod. This ENV is also used to indicate the location
        # of the sparse devices.
        # The default path used is /var/openebs/sparse
        #- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR
        #  value: "/var/openebs/sparse"
        # OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath
        # to be used for default Jiva StoragePool loaded by OpenEBS
        # The default path used is /var/openebs
        # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
        # is set to true
        #- name: OPENEBS_IO_JIVA_POOL_DIR
        #  value: "/var/openebs"
        # OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath
        # to be used for default openebs-hostpath storageclass loaded by OpenEBS
        # The default path used is /var/openebs/local
        # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
        # is set to true
        #- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR
        #  value: "/var/openebs/local"
        - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
          value: "openebs/jiva:2.12.1"
        - name: OPENEBS_IO_JIVA_REPLICA_IMAGE
          value: "openebs/jiva:2.12.1"
        - name: OPENEBS_IO_JIVA_REPLICA_COUNT
          value: "3"
        - name: OPENEBS_IO_CSTOR_TARGET_IMAGE
          value: "openebs/cstor-istgt:2.12.0"
        - name: OPENEBS_IO_CSTOR_POOL_IMAGE
          value: "openebs/cstor-pool:2.12.0"
        - name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE
          value: "openebs/cstor-pool-mgmt:2.12.0"
        - name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE
          value: "openebs/cstor-volume-mgmt:2.12.0"
        - name: OPENEBS_IO_VOLUME_MONITOR_IMAGE
          value: "openebs/m-exporter:2.12.0"
        - name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE
          value: "openebs/m-exporter:2.12.0"
        - name: OPENEBS_IO_HELPER_IMAGE
          value: "openebs/linux-utils:2.12.0"
        # OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage
        # events to Google Analytics
        - name: OPENEBS_IO_ENABLE_ANALYTICS
          value: "true"
        - name: OPENEBS_IO_INSTALLER_TYPE
          value: "openebs-operator"
        # OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours)
        # for periodic ping events sent to Google Analytics.
        # Default is 24h.
        # Minimum is 1h. You can convert this to weekly by setting 168h
        #- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL
        #  value: "24h"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - /usr/local/bin/mayactl
            - version
          initialDelaySeconds: 30
          periodSeconds: 60
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - /usr/local/bin/mayactl
            - version
          initialDelaySeconds: 30
          periodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:
  name: maya-apiserver-service
  namespace: openebs
  labels:
    openebs.io/component-name: maya-apiserver-svc
spec:
  ports:
  - name: api
    port: 5656
    protocol: TCP
    targetPort: 5656
  selector:
    name: maya-apiserver
  sessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-provisioner
  namespace: openebs
  labels:
    name: openebs-provisioner
    openebs.io/component-name: openebs-provisioner
    openebs.io/version: 2.12.0
spec:
  selector:
    matchLabels:
      name: openebs-provisioner
      openebs.io/component-name: openebs-provisioner
  replicas: 1
  strategy:
    type: Recreate
    rollingUpdate: null
  template:
    metadata:
      labels:
        name: openebs-provisioner
        openebs.io/component-name: openebs-provisioner
        openebs.io/version: 2.12.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: openebs-provisioner
        imagePullPolicy: IfNotPresent
        image: openebs/openebs-k8s-provisioner:2.12.0
        env:
        # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://10.128.0.12:8080"
        # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
        # that provisioner should forward the volume create/delete requests.
        # If not present, "maya-apiserver-service" will be used for lookup.
        # This is supported for openebs provisioner version 0.5.3-RC1 onwards
        #- name: OPENEBS_MAYA_SERVICE_NAME
        #  value: "maya-apiserver-apiservice"
        # LEADER_ELECTION_ENABLED is used to enable/disable leader election. By default
        # leader election is enabled.
        #- name: LEADER_ELECTION_ENABLED
        #  value: "true"
        # OPENEBS_IO_JIVA_PATCH_NODE_AFFINITY is used to enable/disable setting node affinity
        # to the jiva replica deployments. Default is `enabled`. The valid values are 
        # `enabled` and `disabled`.
        #- name: OPENEBS_IO_JIVA_PATCH_NODE_AFFINITY
        #  value: "enabled"
         # Process name used for matching is limited to the 15 characters
         # present in the pgrep output.
         # So fullname can't be used here with pgrep (>15 chars).A regular expression
         # that matches the entire command name has to specified.
         # Anchor `^` : matches any string that starts with `openebs-provis`
         # `.*`: matches any string that has `openebs-provis` followed by zero or more char
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - test `pgrep -c "^openebs-provisi.*"` = 1
          initialDelaySeconds: 30
          periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-snapshot-operator
  namespace: openebs
  labels:
    name: openebs-snapshot-operator
    openebs.io/component-name: openebs-snapshot-operator
    openebs.io/version: 2.12.0
spec:
  selector:
    matchLabels:
      name: openebs-snapshot-operator
      openebs.io/component-name: openebs-snapshot-operator
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-snapshot-operator
        openebs.io/component-name: openebs-snapshot-operator
        openebs.io/version: 2.12.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: snapshot-controller
          image: openebs/snapshot-controller:2.12.0
          imagePullPolicy: IfNotPresent
          env:
          - name: OPENEBS_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
         # Process name used for matching is limited to the 15 characters
         # present in the pgrep output.
         # So fullname can't be used here with pgrep (>15 chars).A regular expression
         # that matches the entire command name has to specified.
         # Anchor `^` : matches any string that starts with `snapshot-contro`
         # `.*`: matches any string that has `snapshot-contro` followed by zero or more char
          livenessProbe:
            exec:
              command:
              - sh
              - -c
              - test `pgrep -c "^snapshot-contro.*"` = 1
            initialDelaySeconds: 30
            periodSeconds: 60
        # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
        # that snapshot controller should forward the snapshot create/delete requests.
        # If not present, "maya-apiserver-service" will be used for lookup.
        # This is supported for openebs provisioner version 0.5.3-RC1 onwards
        #- name: OPENEBS_MAYA_SERVICE_NAME
        #  value: "maya-apiserver-apiservice"
        - name: snapshot-provisioner
          image: openebs/snapshot-provisioner:2.12.0
          imagePullPolicy: IfNotPresent
          env:
          - name: OPENEBS_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
          # that snapshot provisioner  should forward the clone create/delete requests.
          # If not present, "maya-apiserver-service" will be used for lookup.
          # This is supported for openebs provisioner version 0.5.3-RC1 onwards
          #- name: OPENEBS_MAYA_SERVICE_NAME
          #  value: "maya-apiserver-apiservice"
          # LEADER_ELECTION_ENABLED is used to enable/disable leader election. By default
          # leader election is enabled.
          #- name: LEADER_ELECTION_ENABLED
          #  value: "true"
         # Process name used for matching is limited to the 15 characters
         # present in the pgrep output.
         # So fullname can't be used here with pgrep (>15 chars).A regular expression
         # that matches the entire command name has to specified.
         # Anchor `^` : matches any string that starts with `snapshot-provis`
         # `.*`: matches any string that has `snapshot-provis` followed by zero or more char
          livenessProbe:
            exec:
              command:
              - sh
              - -c
              - test `pgrep -c "^snapshot-provis.*"` = 1
            initialDelaySeconds: 30
            periodSeconds: 60
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.5.0
  creationTimestamp: null
  name: blockdevices.openebs.io
spec:
  group: openebs.io
  names:
    kind: BlockDevice
    listKind: BlockDeviceList
    plural: blockdevices
    shortNames:
      - bd
    singular: blockdevice
  scope: Namespaced
  versions:
    - additionalPrinterColumns:
        - jsonPath: .spec.nodeAttributes.nodeName
          name: NodeName
          type: string
        - jsonPath: .spec.path
          name: Path
          priority: 1
          type: string
        - jsonPath: .spec.filesystem.fsType
          name: FSType
          priority: 1
          type: string
        - jsonPath: .spec.capacity.storage
          name: Size
          type: string
        - jsonPath: .status.claimState
          name: ClaimState
          type: string
        - jsonPath: .status.state
          name: Status
          type: string
        - jsonPath: .metadata.creationTimestamp
          name: Age
          type: date
      name: v1alpha1
      schema:
        openAPIV3Schema:
          description: BlockDevice is the Schema for the blockdevices API
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: DeviceSpec defines the properties and runtime status of a BlockDevice
              properties:
                aggregateDevice:
                  description: AggregateDevice was intended to store the hierarchical information in cases of LVM. However this is currently not implemented and may need to be re-looked into for better design. To be deprecated
                  type: string
                capacity:
                  description: Capacity
                  properties:
                    logicalSectorSize:
                      description: LogicalSectorSize is blockdevice logical-sector size in bytes
                      format: int32
                      type: integer
                    physicalSectorSize:
                      description: PhysicalSectorSize is blockdevice physical-Sector size in bytes
                      format: int32
                      type: integer
                    storage:
                      description: Storage is the blockdevice capacity in bytes
                      format: int64
                      type: integer
                  required:
                    - storage
                  type: object
                claimRef:
                  description: ClaimRef is the reference to the BDC which has claimed this BD
                  properties:
                    apiVersion:
                      description: API version of the referent.
                      type: string
                    fieldPath:
                      description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.'
                      type: string
                    kind:
                      description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
                      type: string
                    name:
                      description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
                      type: string
                    namespace:
                      description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
                      type: string
                    resourceVersion:
                      description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
                      type: string
                    uid:
                      description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
                      type: string
                  type: object
                details:
                  description: Details contain static attributes of BD like model,serial, and so forth
                  properties:
                    compliance:
                      description: Compliance is standards/specifications version implemented by device firmware  such as SPC-1, SPC-2, etc
                      type: string
                    deviceType:
                      description: DeviceType represents the type of device like sparse, disk, partition, lvm, crypt
                      enum:
                        - disk
                        - partition
                        - sparse
                        - loop
                        - lvm
                        - crypt
                        - dm
                        - mpath
                      type: string
                    driveType:
                      description: DriveType is the type of backing drive, HDD/SSD
                      enum:
                        - HDD
                        - SSD
                        - Unknown
                        - ""
                      type: string
                    firmwareRevision:
                      description: FirmwareRevision is the disk firmware revision
                      type: string
                    hardwareSectorSize:
                      description: HardwareSectorSize is the hardware sector size in bytes
                      format: int32
                      type: integer
                    logicalBlockSize:
                      description: LogicalBlockSize is the logical block size in bytes reported by /sys/class/block/sda/queue/logical_block_size
                      format: int32
                      type: integer
                    model:
                      description: Model is model of disk
                      type: string
                    physicalBlockSize:
                      description: PhysicalBlockSize is the physical block size in bytes reported by /sys/class/block/sda/queue/physical_block_size
                      format: int32
                      type: integer
                    serial:
                      description: Serial is serial number of disk
                      type: string
                    vendor:
                      description: Vendor is vendor of disk
                      type: string
                  type: object
                devlinks:
                  description: DevLinks contains soft links of a block device like /dev/by-id/... /dev/by-uuid/...
                  items:
                    description: DeviceDevLink holds the mapping between type and links like by-id type or by-path type link
                    properties:
                      kind:
                        description: Kind is the type of link like by-id or by-path.
                        enum:
                          - by-id
                          - by-path
                        type: string
                      links:
                        description: Links are the soft links
                        items:
                          type: string
                        type: array
                    type: object
                  type: array
                filesystem:
                  description: FileSystem contains mountpoint and filesystem type
                  properties:
                    fsType:
                      description: Type represents the FileSystem type of the block device
                      type: string
                    mountPoint:
                      description: MountPoint represents the mountpoint of the block device.
                      type: string
                  type: object
                nodeAttributes:
                  description: NodeAttributes has the details of the node on which BD is attached
                  properties:
                    nodeName:
                      description: NodeName is the name of the Kubernetes node resource on which the device is attached
                      type: string
                  type: object
                parentDevice:
                  description: "ParentDevice was intended to store the UUID of the parent Block Device as is the case for partitioned block devices. \n For example: /dev/sda is the parent for /dev/sda1 To be deprecated"
                  type: string
                partitioned:
                  description: Partitioned represents if BlockDevice has partitions or not (Yes/No) Currently always default to No. To be deprecated
                  enum:
                    - "Yes"
                    - "No"
                  type: string
                path:
                  description: Path contain devpath (e.g. /dev/sdb)
                  type: string
              required:
                - capacity
                - devlinks
                - nodeAttributes
                - path
              type: object
            status:
              description: DeviceStatus defines the observed state of BlockDevice
              properties:
                claimState:
                  description: ClaimState represents the claim state of the block device
                  enum:
                    - Claimed
                    - Unclaimed
                    - Released
                  type: string
                state:
                  description: State is the current state of the blockdevice (Active/Inactive/Unknown)
                  enum:
                    - Active
                    - Inactive
                    - Unknown
                  type: string
              required:
                - claimState
                - state
              type: object
          type: object
      served: true
      storage: true
      subresources: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.5.0
  creationTimestamp: null
  name: blockdeviceclaims.openebs.io
spec:
  group: openebs.io
  names:
    kind: BlockDeviceClaim
    listKind: BlockDeviceClaimList
    plural: blockdeviceclaims
    shortNames:
      - bdc
    singular: blockdeviceclaim
  scope: Namespaced
  versions:
    - additionalPrinterColumns:
        - jsonPath: .spec.blockDeviceName
          name: BlockDeviceName
          type: string
        - jsonPath: .status.phase
          name: Phase
          type: string
        - jsonPath: .metadata.creationTimestamp
          name: Age
          type: date
      name: v1alpha1
      schema:
        openAPIV3Schema:
          description: BlockDeviceClaim is the Schema for the blockdeviceclaims API
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: DeviceClaimSpec defines the request details for a BlockDevice
              properties:
                blockDeviceName:
                  description: BlockDeviceName is the reference to the block-device backing this claim
                  type: string
                blockDeviceNodeAttributes:
                  description: BlockDeviceNodeAttributes is the attributes on the node from which a BD should be selected for this claim. It can include nodename, failure domain etc.
                  properties:
                    hostName:
                      description: HostName represents the hostname of the Kubernetes node resource where the BD should be present
                      type: string
                    nodeName:
                      description: NodeName represents the name of the Kubernetes node resource where the BD should be present
                      type: string
                  type: object
                deviceClaimDetails:
                  description: Details of the device to be claimed
                  properties:
                    allowPartition:
                      description: AllowPartition represents whether to claim a full block device or a device that is a partition
                      type: boolean
                    blockVolumeMode:
                      description: 'BlockVolumeMode represents whether to claim a device in Block mode or Filesystem mode. These are use cases of BlockVolumeMode: 1) Not specified: VolumeMode check will not be effective 2) VolumeModeBlock: BD should not have any filesystem or mountpoint 3) VolumeModeFileSystem: BD should have a filesystem and mountpoint. If DeviceFormat is    specified then the format should match with the FSType in BD'
                      type: string
                    formatType:
                      description: Format of the device required, eg:ext4, xfs
                      type: string
                  type: object
                deviceType:
                  description: DeviceType represents the type of drive like SSD, HDD etc.,
                  nullable: true
                  type: string
                hostName:
                  description: Node name from where blockdevice has to be claimed. To be deprecated. Use NodeAttributes.HostName instead
                  type: string
                resources:
                  description: Resources will help with placing claims on Capacity, IOPS
                  properties:
                    requests:
                      additionalProperties:
                        anyOf:
                          - type: integer
                          - type: string
                        pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                        x-kubernetes-int-or-string: true
                      description: 'Requests describes the minimum resources required. eg: if storage resource of 10G is requested minimum capacity of 10G should be available TODO for validating'
                      type: object
                  required:
                    - requests
                  type: object
                selector:
                  description: Selector is used to find block devices to be considered for claiming
                  properties:
                    matchExpressions:
                      description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
                      items:
                        description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
                        properties:
                          key:
                            description: key is the label key that the selector applies to.
                            type: string
                          operator:
                            description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
                            type: string
                          values:
                            description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
                            items:
                              type: string
                            type: array
                        required:
                          - key
                          - operator
                        type: object
                      type: array
                    matchLabels:
                      additionalProperties:
                        type: string
                      description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
                      type: object
                  type: object
              type: object
            status:
              description: DeviceClaimStatus defines the observed state of BlockDeviceClaim
              properties:
                phase:
                  description: Phase represents the current phase of the claim
                  type: string
              required:
                - phase
              type: object
          type: object
      served: true
      storage: true
      subresources: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
  labels:
    openebs.io/component-name: ndm-config
data:
  # udev-probe is the default/primary probe; it must be enabled for ndm to run.
  # filterconfigs contains the configs of the filters. To provide a group of include
  # and exclude values, add them as a comma-separated string.
  node-disk-manager.config: |
    probeconfigs:
      - key: udev-probe
        name: udev probe
        state: true
      - key: seachest-probe
        name: seachest probe
        state: false
      - key: smart-probe
        name: smart probe
        state: true
    filterconfigs:
      - key: os-disk-exclude-filter
        name: os disk exclude filter
        state: true
        exclude: "/,/etc/hosts,/boot"
      - key: vendor-filter
        name: vendor filter
        state: true
        include: ""
        exclude: "CLOUDBYT,OpenEBS"
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/md,/dev/dm-,/dev/rbd,/dev/zd"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openebs-ndm
  namespace: openebs
  labels:
    name: openebs-ndm
    openebs.io/component-name: ndm
    openebs.io/version: 2.12.0
spec:
  selector:
    matchLabels:
      name: openebs-ndm
      openebs.io/component-name: ndm
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: openebs-ndm
        openebs.io/component-name: ndm
        openebs.io/version: 2.12.0
    spec:
      # By default the node-disk-manager will be run on all kubernetes nodes
      # If you would like to limit this to only some nodes, say the nodes
      # that have storage attached, you could label those nodes and use
      # nodeSelector.
      #
      # e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"
      # kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"
      #nodeSelector:
      #  "openebs.io/nodegroup": "storage-node"
      serviceAccountName: openebs-maya-operator
      hostNetwork: true
      # host PID is used to check status of iSCSI Service when the NDM
      # API service is enabled
      #hostPID: true
      containers:
      - name: node-disk-manager
        image: openebs/node-disk-manager:1.6.1
        args:
          - -v=4
          # The feature-gate is used to enable the new UUID algorithm.
          - --feature-gates="GPTBasedUUID"
        # Detect mount point and filesystem changes without restart.
        # Uncomment the line below to enable the feature.
        # --feature-gates="MountChangeDetection"
        # The feature gate is used to start the gRPC API service. The gRPC server
        # starts at 9115 port by default. This feature is currently in Alpha state
        # - --feature-gates="APIService"
        # The feature gate is used to enable NDM, to create blockdevice resources
        # for unused partitions on the OS disk
        # - --feature-gates="UseOSDisk"
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        volumeMounts:
        - name: config
          mountPath: /host/node-disk-manager.config
          subPath: node-disk-manager.config
          readOnly: true
          # make udev database available inside container
        - name: udev
          mountPath: /run/udev
        - name: procmount
          mountPath: /host/proc
          readOnly: true
        - name: devmount
          mountPath: /dev
        - name: basepath
          mountPath: /var/openebs/ndm
        - name: sparsepath
          mountPath: /var/openebs/sparse
        env:
        # namespace in which NDM is installed will be passed to NDM Daemonset
        # as environment variable
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # pass hostname as env variable using downward API to the NDM container
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        # specify the directory where the sparse files need to be created.
        # if not specified, then sparse files will not be created.
        - name: SPARSE_FILE_DIR
          value: "/var/openebs/sparse"
        # Size(bytes) of the sparse file to be created.
        - name: SPARSE_FILE_SIZE
          value: "10737418240"
        # Specify the number of sparse files to be created
        - name: SPARSE_FILE_COUNT
          value: "0"
        livenessProbe:
          exec:
            command:
            - pgrep
            - "ndm"
          initialDelaySeconds: 30
          periodSeconds: 60
      volumes:
      - name: config
        configMap:
          name: openebs-ndm-config
      - name: udev
        hostPath:
          path: /run/udev
          type: Directory
      # mount /proc (to access mount file of process 1 of host) inside container
      # to read mount-point of disks and partitions
      - name: procmount
        hostPath:
          path: /proc
          type: Directory
      - name: devmount
      # the /dev directory is mounted so that we have access to the devices that
      # are connected at runtime of the pod.
        hostPath:
          path: /dev
          type: Directory
      - name: basepath
        hostPath:
          path: /var/openebs/ndm
          type: DirectoryOrCreate
      - name: sparsepath
        hostPath:
          path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-ndm-operator
  namespace: openebs
  labels:
    name: openebs-ndm-operator
    openebs.io/component-name: ndm-operator
    openebs.io/version: 2.12.0
spec:
  selector:
    matchLabels:
      name: openebs-ndm-operator
      openebs.io/component-name: ndm-operator
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-ndm-operator
        openebs.io/component-name: ndm-operator
        openebs.io/version: 2.12.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: node-disk-operator
          image: openebs/node-disk-operator:1.6.1
          imagePullPolicy: IfNotPresent
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # the service account of the ndm-operator pod
            - name: SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
            - name: OPERATOR_NAME
              value: "node-disk-operator"
            - name: CLEANUP_JOB_IMAGE
              value: "openebs/linux-utils:2.12.0"
            # OPENEBS_IO_IMAGE_PULL_SECRETS environment variable is used to pass the image pull secrets
            # to the cleanup pod launched by NDM operator
            #- name: OPENEBS_IO_IMAGE_PULL_SECRETS
            #  value: ""
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8585
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8585
            initialDelaySeconds: 5
            periodSeconds: 10
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-admission-server
  namespace: openebs
  labels:
    app: admission-webhook
    openebs.io/component-name: admission-webhook
    openebs.io/version: 2.12.0
spec:
  replicas: 1
  strategy:
    type: Recreate
    rollingUpdate: null
  selector:
    matchLabels:
      app: admission-webhook
  template:
    metadata:
      labels:
        app: admission-webhook
        openebs.io/component-name: admission-webhook
        openebs.io/version: 2.12.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: admission-webhook
          image: openebs/admission-server:2.12.0
          imagePullPolicy: IfNotPresent
          args:
            - -alsologtostderr
            - -v=2
            - 2>&1
          env:
            - name: OPENEBS_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: ADMISSION_WEBHOOK_NAME
              value: "openebs-admission-server"
            - name: ADMISSION_WEBHOOK_FAILURE_POLICY
              value: "Fail"
          # Process name used for matching is limited to the 15 characters
          # present in the pgrep output.
          # So the full name can't be used here with pgrep (>15 chars). A regular expression
          # that matches the entire command name has to be specified.
          # Anchor `^` : matches any string that starts with `admission-serve`
          # `.*` : matches any string that has `admission-serve` followed by zero or more characters
          livenessProbe:
            exec:
              command:
              - sh
              - -c
              - test `pgrep -c "^admission-serve.*"` = 1
            initialDelaySeconds: 30
            periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-localpv-provisioner
  namespace: openebs
  labels:
    name: openebs-localpv-provisioner
    openebs.io/component-name: openebs-localpv-provisioner
    openebs.io/version: 2.12.0
spec:
  selector:
    matchLabels:
      name: openebs-localpv-provisioner
      openebs.io/component-name: openebs-localpv-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-localpv-provisioner
        openebs.io/component-name: openebs-localpv-provisioner
        openebs.io/version: 2.12.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: openebs-provisioner-hostpath
        imagePullPolicy: IfNotPresent
        image: openebs/provisioner-localpv:2.12.0
        args:
          - "--bd-time-out=$(BDC_BD_BIND_RETRIES)"
        env:
        # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://10.128.0.12:8080"
        # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        #  This sets the number of times the provisioner should try 
        #  with a polling interval of 5 seconds, to get the Blockdevice
        #  Name from a BlockDeviceClaim, before the BlockDeviceClaim
        #  is deleted. E.g. 12 * 5 seconds = 60 seconds timeout
        - name: BDC_BD_BIND_RETRIES
          value: "12"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
        # environment variable
        - name: OPENEBS_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: OPENEBS_IO_ENABLE_ANALYTICS
          value: "true"
        - name: OPENEBS_IO_INSTALLER_TYPE
          value: "openebs-operator"
        - name: OPENEBS_IO_HELPER_IMAGE
          value: "openebs/linux-utils:2.12.0"
        - name: OPENEBS_IO_BASE_PATH
          value: "/var/openebs/local"
        # LEADER_ELECTION_ENABLED is used to enable/disable leader election. By default
        # leader election is enabled.
        #- name: LEADER_ELECTION_ENABLED
        #  value: "true"
        # OPENEBS_IO_IMAGE_PULL_SECRETS environment variable is used to pass the image pull secrets
        # to the helper pod launched by local-pv hostpath provisioner
        #- name: OPENEBS_IO_IMAGE_PULL_SECRETS
        #  value: ""
        # Process name used for matching is limited to the 15 characters
        # present in the pgrep output.
        # So the full name can't be used here with pgrep (>15 chars). A regular expression
        # that matches the entire command name has to be specified.
        # Anchor `^` : matches any string that starts with `provisioner-loc`
        # `.*`: matches any string that has `provisioner-loc` followed by zero or more characters
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - test `pgrep -c "^provisioner-loc.*"` = 1
          initialDelaySeconds: 30
          periodSeconds: 60
---

Kubesphere 3.0.0

https://v3-0.docs.kubesphere.io/zh/docs/pluggable-components/devops/

Prerequisites:

Kubernetes version 1.15.x, 1.16.x, 1.17.x, or 1.18.x

Available CPU > 1 core; memory > 2 GB

The Kubernetes cluster has a default StorageClass configured (confirm with kubectl get sc).

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f


kubectl get pod --all-namespaces

kubectl get svc/ks-console -n kubesphere-system

Kubesphere 3.1.1


# A default storageClass is required
kubectl get sc

# Docs: https://v2-1.docs.kubesphere.io/docs/zh-CN/appendix/install-openebs/
kubectl get node -o wide
kubectl describe node k8s-node1 | grep Taint
# Taints:             node-role.kubernetes.io/master:NoSchedule
# The taint has to be removed first
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
# openebs
kubectl create ns openebs
# This URL is no longer valid
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.5.0.yaml
# See https://blog.csdn.net/RookiexiaoMu_a/article/details/119859930

# Wait until everything is running
kubectl get pod --all-namespaces
kubectl get sc

# Set the default storageClass
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# Check the OpenEBS pods; if they are all Running, the storage layer was installed successfully
kubectl get pod -n openebs

# Restore the taint
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule

Preparation complete


######################################
# Preparation complete
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f


# All pods should be Running: 9 pods in the kubesphere-system namespace; this takes quite a while
kubectl get pod --all-namespaces

kubectl get svc/ks-console -n kubesphere-system

# Make sure port 30880 is open in the security group, then access the web console
# through the NodePort (IP:30880) with the default account and password (admin/P@88w0rd).

Plugin installation

Note: worker nodes need 8 GB of RAM and 5 CPUs

# This attempt failed
kubectl edit cm -n kubesphere-system ks-installer

# Wrong approach; these steps are unnecessary. Follow the official docs instead:
# https://v3-1.docs.kubesphere.io/zh/docs/pluggable-components/metrics-server/
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml
vi cluster-configuration.yaml

metrics_server:
  enabled: true # Change "false" to "true".

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml

# Enabling the component after installation
Log in to the console as admin. Click Platform in the top-left corner and select Cluster Management.
Under Custom Resources (CRD), enter clusterconfiguration in the search bar and click the result to open its detail page.
In the resource list, click the menu icon on the right of ks-installer and select Edit YAML.

metrics_server:
  enabled: true # Change "false" to "true".

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

kubectl get pod -n kube-system
# metrics-server-6c767c9f94-hfsb7 

Enabled:

devops (sonarqube), console, notification, alerting

Disabled:

metrics_server, logging, openpitrix

  metrics_server:               
    enabled: false

  console:
    enableMultiLogin: true 
    port: 30880

  logging:             
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2

  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G)
    enabled: true
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1000Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g

KubeSphere 3.2.0

Supports DevOps, but the cluster requirements are high (Kuboard's requirements are much lower)

All-in-one: installs everything

https://kubesphere.io/zh/

# Start the installation
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/kubesphere-installer.yaml

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/cluster-configuration.yaml
# Installation log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
# Are the pods healthy?
kubectl get pod --all-namespaces
# Check the console port
kubectl get svc/ks-console -n kubesphere-system
External access: public IP + console NodePort

Plugins

kubectl edit cm -n kubesphere-system ks-installer
  metrics_server:               
    enabled: false

  console:
    enableMultiLogin: true 
    port: 30880

  logging:             
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2

  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G)
    enabled: true
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1000Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g

Older KubeSphere versions

# Blocked (unreachable from mainland China)
curl -L https://git.io/get_helm.sh | bash

Install Helm

2.17.0

https://github.com/helm/helm/tags

wget https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz
tar -zxvf helm-v2.17.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm help

RBAC permissions

kubectl apply -f helm_rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Tiller

helm init --service-account=tiller --tiller-image=sapcc/tiller:v2.16.3 --history-max 300
# --tiller-image pins the image; the default one is blocked
kubectl get deployment -n kube-system
# To delete:
# kubectl delete deployment tiller-deploy -n kube-system

Commands

# The hosted file is gone
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v2.1.1/kubesphere-minimal.yaml

https://github.com/kubesphere/ks-installer/blob/v2.1.1/kubesphere-minimal.yaml

kubectl apply -f kubesphere-minimal.yaml

yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
data:
  ks-config.yaml: |
    ---
    persistence:
      storageClass: ""
    etcd:
      monitoring: False
      endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9
      port: 2379
      tlsEnable: True
    common:
      mysqlVolumeSize: 20Gi
      minioVolumeSize: 20Gi
      etcdVolumeSize: 20Gi
      openldapVolumeSize: 2Gi
      redisVolumSize: 2Gi
    metrics_server:
      enabled: False
    console:
      enableMultiLogin: False  # enable/disable multi login
      port: 30880
    monitoring:
      prometheusReplicas: 1
      prometheusMemoryRequest: 400Mi
      prometheusVolumeSize: 20Gi
      grafana:
        enabled: False
    logging:
      enabled: False
      elasticsearchMasterReplicas: 1
      elasticsearchDataReplicas: 1
      logsidecarReplicas: 2
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      containersLogMountedPath: ""
      kibana:
        enabled: False
    openpitrix:
      enabled: False
    devops:
      enabled: False
      jenkinsMemoryLim: 2Gi
      jenkinsMemoryReq: 1500Mi
      jenkinsVolumeSize: 8Gi
      jenkinsJavaOpts_Xms: 512m
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
      sonarqube:
        enabled: False
        postgresqlVolumeSize: 8Gi
    servicemesh:
      enabled: False
    notification:
      enabled: False
    alerting:
      enabled: False
kind: ConfigMap
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v2.1.1
        imagePullPolicy: "Always"

Status

kubectl get pod --all-namespaces
# Shows the installation progress and the default account/password
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

三、New Installation

image-20211215120059391

https://www.yuque.com/leifengyang/oncloud/ghnb83#LRzAY

Initialization

Prepare three cloud servers running CentOS 7.9 (remember to release the IPs when you are done)

p31: resource management, create a work order, 3 IPs

Private network:

  • VPC IPv4 172.31.0.0/16

  • Subnet: k8s-cluster-01 172.31.0.0/24, automatic IP assignment

Billed by traffic

Enable private-network interconnection

image-20211215130219065

Ping the machines from one another to verify connectivity

k8s-port: open ports 30000-32767

IP assignment

159.75.218.183 172.31.0.9

159.75.201.179 172.31.0.15

159.75.253.222 172.31.0.6

Docker

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# maybe
visudo
vim sudoers # tab
  myuser ALL=(ALL) ALL # user
  %sudo ALL=(ALL) ALL # user group
groupadd sudo

# sudo yum install -y docker-ce docker-ce-cli containerd.io


# The pinned versions below are the ones used for the k8s install
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7  containerd.io-1.4.6

# Start docker and enable it on boot
systemctl enable docker --now

# Configure the registry mirror (accelerator)
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
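
Optionally, verify that the daemon picked up the systemd cgroup driver and the registry mirror:

docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd
docker info | grep -A1 -i "registry mirrors"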

Cluster installation

Prerequisites

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based distributions, as well as some distributions that ship without a package manager

  • 2 GB or more of RAM per machine (less leaves little room for your applications)

  • 2 or more CPU cores

  • Full network connectivity between all machines in the cluster (public or private network both work)

  • Configure firewall allow rules

  • Unique hostname, MAC address, and product_uuid on every node; see the docs for details

  • Set a different hostname on each machine

  • Open the required ports on the machines; see the docs for details

  • Mutual trust over the private network

  • Swap disabled: you MUST disable swap for the kubelet to work properly

# Set each machine's own hostname
# master / node1 / node2
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

hostname

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a  
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
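
Optionally, check that the br_netfilter module is loaded and the sysctl values took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# both values should print 1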

The three core components

kubelet kubeadm kubectl

# Where yum should download the k8s packages from
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF


sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

systemctl status kubelet

Prepare the images

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh

Master and worker nodes

159.75.218.183     172.31.0.9
159.75.201.179    172.31.0.15
159.75.253.222   172.31.0.6

# !!! Add the master hostname mapping on every machine; replace the IP below with your own
echo "101.200.169.229  master" >> /etc/hosts

sudo tee /etc/hosts <<-'EOF'
101.200.169.229  master
112.124.15.81 node1
EOF

ping master

# For a setup over public IPs, see the separate notes on public-network clusters

# Initialize the master node
kubeadm init \
--apiserver-advertise-address=172.31.0.9 \
--control-plane-endpoint=172.31.0.9 \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
# None of the network ranges may overlap

# kubeadm join on a worker node may fail with [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forw
# Run the following on all nodes
echo "echo 1 > /proc/sys/net/ipv4/ip_forward" >> /etc/rc.d/rc.local \
&& echo 1 > /proc/sys/net/ipv4/ip_forward \
&& chmod +x /etc/rc.d/rc.local \
&& ll /etc/rc.d/rc.local \
&& cat /proc/sys/net/ipv4/ip_forward



mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes

# A network plugin is required: calico  https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-more-than-50-nodes

curl https://docs.projectcalico.org/manifests/calico.yaml -O
# If you changed --pod-network-cidr above, update the 192.168.0.0/16 setting in calico.yaml accordingly

kubectl apply -f calico.yaml

watch -n 1 kubectl get pod -A

kubectl get nodes

# Run on the other worker nodes; the token is valid for 24 hours, be careful not to copy it wrong
kubeadm join 101.200.169.229:6443 --token b9pw5c.siandzqm0v2mhhh5 \
    --discovery-token-ca-cert-hash sha256:033a93c57189d27835f2fadaa1f4fb6d643d4f7f249be028197f87f314bfe099
# One or more conditions for hosting a new control plane instance is not satisfied.

# If the token has expired, generate a new join command
kubeadm token create --print-join-command

kubectl get nodes
watch -n 1 kubectl get pod -A

dashboard

https://www.yuque.com/leifengyang/oncloud/ghnb83#5axOO

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

yum install -y wget

yaml

vi dashboard.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Port

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort

kubectl get svc -A |grep kubernetes-dashboard
## Find the assigned port and open it in the security group
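
If you would rather not edit the Service interactively, the same change can be made with kubectl patch (equivalent to the edit above):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'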

image-20211215154104722

Access: https://<any-node-IP>:<port>, e.g. https://139.198.165.238:32759

Access account

# Create an access account; prepare a yaml file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
kubectl apply -f dash.yaml

Token login

# Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

四、Core Hands-On Practice

Namespace

Isolates resources, not the network (network isolation can be configured separately)

kubectl get ns

kubectl create ns prod
kubectl delete ns prod

kubectl get pods -A
kubectl get pods -n prod

# Enter a container
kubectl exec -it mynginx -- /bin/bash
apiVersion: v1
kind: Namespace
metadata:
  name: prod
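
For example, the manifest above could be saved as prod-ns.yaml (file name assumed here) and used like this; resources created with -n prod live in that namespace and are removed together with it:

kubectl apply -f prod-ns.yaml
kubectl run mynginx --image=nginx -n prod
kubectl get pods -n prod
kubectl delete -f prod-ns.yaml   # deleting the namespace deletes everything inside it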

Pod

A group of running containers; the Pod is the smallest unit of an application in Kubernetes, roughly a "dorm room" that its containers share

kubectl run mynginx --image=nginx
kubectl get pod -owide

# List the Pods in the default namespace
kubectl get pod
# Describe a Pod
kubectl describe pod <your-pod-name>
# Delete a Pod
kubectl delete pod mynginx
# View a Pod's logs
kubectl logs <pod-name>

# k8s assigns every Pod its own IP
kubectl get pod -owide
# Access it with the Pod IP plus the port of the container running inside
curl 192.168.169.136
Any machine in the cluster, and any application, can reach this Pod through its assigned Pod IP.

A Pod with two containers

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat

image-20211215212040579

Containers inside the same Pod ("dorm room") reach each other simply via 127.0.0.1
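
A quick way to see this, assuming the two-container myapp Pod above is running: exec into the nginx container and request tomcat over the Pod's loopback address (use wget -qO- instead if curl is not present in the image).

kubectl exec -it myapp -c nginx -- curl -s http://127.0.0.1:8080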

Deployment

Controls Pods: gives them multiple replicas, self-healing, scaling, and related capabilities

# Delete all Pods first, then compare the effect of the following two commands
kubectl run mynginx --image=nginx

# Self-healing
kubectl create deployment mytomcat --image=tomcat:8.5.68

Multiple replicas

kubectl create deployment my-dep --image=nginx --replicas=3
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx

Scaling

kubectl scale --replicas=5 deployment/my-dep

kubectl edit deployment my-dep
# modify the replicas field

Self-healing & failover

  • Node shutdown

  • Pod deleted

  • Container crash

Rolling update

kubectl set image deployment/my-dep nginx=nginx:1.16.1 --record
kubectl rollout status deployment/my-dep
# alternatively edit directly: kubectl edit deployment/my-dep

Rollback

# Revision history
kubectl rollout history deployment/my-dep


# Details of a specific revision
kubectl rollout history deployment/my-dep --revision=2

# Roll back (to the previous revision)
kubectl rollout undo deployment/my-dep

# Roll back (to a specific revision)
kubectl rollout undo deployment/my-dep --to-revision=2
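
To confirm which image the Deployment ended up on after a rollback, the spec can be inspected directly:

kubectl get deployment/my-dep -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl rollout history deployment/my-dep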

Other workloads

image-20211215221710824

Service

An abstraction that exposes a group of Pods as a network service

# Expose the Deployment
kubectl expose deployment my-dep --port=8000 --target-port=80
# Service information
kubectl get service

# Select Pods by label
kubectl get pod -l app=my-dep
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80

ClusterIP

Accessible only from inside the cluster

image-20211215223741175

Frontend: accessed from inside a pod container

# equivalent to omitting --type
kubectl expose deployment my-dep --port=8000 --target-port=80   #--type=ClusterIP
# access via clusterIP:port
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: ClusterIP
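
Inside the cluster the Service is also reachable by its DNS name (resolved by CoreDNS), for example from the mynginx Pod created earlier, assuming it is still running:

# <service>.<namespace>.svc resolves to the ClusterIP
kubectl exec -it mynginx -- curl -s http://my-dep.default.svc:8000
# or use the ClusterIP shown by kubectl get svc (it falls in the 10.96.0.0/16 service CIDR)
curl <cluster-ip>:8000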

NodePort

Also accessible from outside the cluster

The NodePort range is 30000-32767

image-20211215224421668

kubectl expose deployment my-dep --port=8000 --target-port=80 --type=NodePort
kubectl get svc
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: NodePort
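
If a fixed port is needed instead of a randomly assigned one, nodePort can be set explicitly in the Service spec, as long as it falls inside the 30000-32767 range (30080 below is only an example value):

  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
    nodePort: 30080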

Ingress

Overview

A unified gateway entry point

A Service is the entry point to Pods

An Ingress is the entry point to Services (see the example rule after the install steps below)

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml

# Change the image
vi deploy.yaml
# Set the image value to:
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0

# Apply the modified manifest
kubectl apply -f deploy.yaml

# Check the installation result
kubectl get pod,svc -n ingress-nginx

# Finally, don't forget to open the NodePorts exposed by the svc in the security group
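
Once the controller is running, routing rules are created as Ingress resources. Below is a minimal sketch that routes an illustrative host name to the my-dep Service from the Service section (hello.example.com and the file name ingress-my-dep.yaml are assumptions for the example); the controller's full deploy.yaml follows in the next block.

# vi ingress-my-dep.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-my-dep
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-dep
            port:
              number: 8000

# kubectl apply -f ingress-my-dep.yaml
# test through the controller's HTTP NodePort from outside:
# curl -H "Host: hello.example.com" http://<any-node-ip>:<http-nodeport>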

yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
      - v1beta1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

使用

https://kubernetes.github.io/ingress-nginx/

就是nginx做的

https://139.198.163.211:32401/

http://139.198.163.211:31405/

image-20211216112948180
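
The two addresses above go through the NodePorts assigned to the controller Service; a quick way to look them up, assuming the Service name and namespace from the manifest above:

# the 80/443 NodePorts behind the two URLs above come from this Service
kubectl get svc -n ingress-nginx ingress-nginx-controller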

测试

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000

域名访问

apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"  # 直接带路径,把请求会转给下面的服务,下面的服务一定要能处理这个路径,不能处理就是404
        backend:
          service:
            name: nginx-demo  ## java,比如使用路径重写,去掉前缀nginx
            port:
              number: 8000
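
A quick test of the two rules, assuming 139.198.163.211 is a node IP and 31405 is still the controller's HTTP NodePort as shown above (alternatively, point the two domains at a node IP in /etc/hosts):

curl -H "Host: hello.atguigu.com" http://139.198.163.211:31405/
curl -H "Host: demo.atguigu.com"  http://139.198.163.211:31405/nginx
# the second request returns 404 from nginx-demo, since plain nginx has no /nginx path (as the comment above notes)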

路径重写

annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /$2

apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"  # 把请求会转给下面的服务,下面的服务一定要能处理这个路径,不能处理就是404
        backend:
          service:
            name: nginx-demo  ## java,比如使用路径重写,去掉前缀nginx
            port:
              number: 8000
      - pathType: Prefix
        path: "/prod-api(/|$)(.*)"  # 把请求会转给下面的服务,下面的服务一定要能处理这个路径,不能处理就是404
        backend:
          service:
            name: psedu-gateway  ## java,比如使用路径重写,去掉前缀nginx
            port:
              number: 8000
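
With rewrite-target /$2 and the path /nginx(/|$)(.*), the /nginx prefix is stripped before the request reaches the backend. A quick check, reusing the same node IP and NodePort as before:

curl -H "Host: demo.atguigu.com" http://139.198.163.211:31405/nginx       # forwarded to nginx-demo as /
curl -H "Host: demo.atguigu.com" http://139.198.163.211:31405/nginx/abc   # forwarded as /abc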

流量限制

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.atguigu.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
kubectl get ing
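
limit-rps: "1" allows roughly one request per second (plus a small burst); requests beyond that are rejected with HTTP 503. A quick way to see it, reusing the node IP and NodePort from above:

for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" -H "Host: haha.atguigu.com" http://139.198.163.211:31405/
done
# the first few requests return 200, the rest 503 until the rate drops again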

存储抽象

With plain docker you simply mount host directories into containers.

In k8s, however, a crashed pod is recreated, possibly on another node, and the new pod no longer sees the data written to the old node's local directory.

k8s therefore unifies storage into a storage layer (GlusterFS, NFS, CephFS, ...); data written through it is replicated across the storage nodes.

环境

主从

#所有机器安装
yum install -y nfs-utils

# 主节点

#nfs主节点
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

mkdir -p /nfs/data
systemctl enable rpcbind --now
systemctl enable nfs-server --now
#配置生效
exportfs -r


# on a worker node (e.g. 192.168.56.100): list what the NFS server exports
showmount -e 172.31.0.4

# mount the NFS server's shared directory to the same local path /nfs/data
mkdir -p /nfs/data

# 192.168.56.100
mount -t nfs 172.31.0.4:/nfs/data /nfs/data
# 写入一个测试文件
echo "hello nfs server" > /nfs/data/test.txt

原生挂载

修改主节点地址

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          nfs:
            server: 172.31.0.4
            path: /nfs/data/nginx-pv
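
One thing the manifest assumes: the sub-directory /nfs/data/nginx-pv must already exist on the NFS server, otherwise the pods fail to mount it. A small check, assuming the first two commands run on the NFS server:

# on the NFS server: create the exported sub-directory and drop a test page into it
mkdir -p /nfs/data/nginx-pv
echo "hello from nfs" > /nfs/data/nginx-pv/index.html

# on the master: confirm a pod of the deployment sees the file
POD=$(kubectl get pod -l app=nginx-pv-demo -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- cat /usr/share/nginx/html/index.html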

PV&PVC

PV: Persistent Volume, the piece of storage where application data is persisted at a specified location

PVC: Persistent Volume Claim, a declaration of the persistent storage (size, access mode) an application needs

PV池

静态供应

#nfs主节点
mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03

创建PV,需要修改server 192.168.56.100

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 172.31.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 172.31.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 172.31.0.4

PVC

创建声明

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
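
When this claim is applied, Kubernetes binds it to the smallest PV of storage class nfs that can satisfy the 200Mi request, which here is pv02-1gi (pv01-10m is too small). This can be confirmed with:

kubectl get pvc nginx-pvc   # STATUS Bound, VOLUME pv02-1gi
kubectl get pv              # pv02-1gi shows CLAIM default/nginx-pvc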

创建Pod,绑定PVC

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: nginx-pvc

ConfigMap

抽取应用配置,自动更新

redis示例

创建配置集

vim redis.conf
appendonly yes

# 创建配置,redis保存到k8s的etcd;
kubectl create cm redis-conf --from-file=redis.conf

kubectl get cm

The equivalent yaml can be inspected with kubectl get cm redis-conf -o yaml:

apiVersion: v1
data:    #data是所有真正的数据,key:默认是文件名   value:配置文件的内容
  redis.conf: |
    appendonly yes
kind: ConfigMap
metadata:
  name: redis-conf
  namespace: default

创建pod

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:
      - redis-server
      - "/redis-master/redis.conf"  #指的是redis容器内部的位置
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: redis-conf
        items:
        - key: redis.conf
          path: redis.conf

检查修改配置

kubectl exec -it redis -- redis-cli



# edit the ConfigMap and add, for example, requirepass 123456; redis will NOT hot-reload it
kubectl edit cm redis-conf

127.0.0.1:6379> CONFIG GET appendonly
127.0.0.1:6379> CONFIG GET requirepass
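
The mounted file does pick up the edit after a while, but redis only reads its configuration at startup, so the new requirepass does not take effect until the Pod is recreated. A minimal sketch, assuming the Pod manifest above was saved as redis-pod.yaml (hypothetical filename):

# the projected file eventually reflects the ConfigMap edit (can take a minute or so)
kubectl exec redis -- cat /redis-master/redis.conf

# recreate the Pod so redis re-reads the file
kubectl delete pod redis
kubectl apply -f redis-pod.yaml   # hypothetical filename for the Pod yaml above

For comparison, another ConfigMap example: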
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-redis-config
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru 

Secret

Sensitive data (passwords, OAuth tokens, SSH keys) is safer and more flexible in a Secret than defined in a Pod spec or baked into a container image.

Note that Secret values are only base64-encoded, not encrypted.

kubectl create secret docker-registry leifengyang-docker \
--docker-username=leifengyang \
--docker-password=Lfy123456 \
--docker-email=534096094@qq.com

##命令格式
kubectl create secret docker-registry registry-name \
  --docker-server=<你的镜像仓库服务器> \
  --docker-username=<你的用户名> \
  --docker-password=<你的密码> \
  --docker-email=<你的邮箱地址>

使用

apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: leifengyang/guignginx:v1.0
  imagePullSecrets:
  - name: leifengyang-docker
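
Because Secret values are only base64-encoded, anyone who can read the Secret can recover the plaintext. A quick check against the docker-registry Secret created above (the data key for this Secret type is .dockerconfigjson):

kubectl get secret leifengyang-docker -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d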

五、入门操作

# 使用资源中的kube-flannel.yml:
kubectl apply -f kube-flannel.yml

kubectl get pods -n kube-system #查看指定名称空间的pods
kubectl get pods --all-namespaces #查看指定所有名称空间的pods

watch kubectl get pod -n kube-system -o wide #监控pod进度

# 1、部署一个tomcat
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
# add --dry-run -o yaml to print the generated yaml instead of creating the resource
kubectl get pods -o wide   # shows the tomcat pods and which nodes they run on

# 2、expose the tomcat deployment
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
# the Service listens on port 80 and forwards traffic to port 8080 inside the Pods

# 3、动态扩容测试
kubectl get deployment
#应用升级:kubectl set image(--help查看帮助)
#扩容:kubectl scale --replicas=3 deployment tomcat6
#扩容了多份,所以无论访问哪个node的指定端口,都可以访问到tomcat6

# 4、删除
kubectl get all
kubectl delete deploy/nginx
kubectl delete service/nginx-service

# 流程:创建deployment会管理replicas,replicas控制pod数量,有pod故障会自动拉起新的pod

# 5、Ingress(底层使用nginx)
通过Service发现Pod进行关联,基于域名访问。
通过Ingress Controller实现Pod负载均衡

kubectl命令

参数

get       #显示一个或多个资源
describe  #显示资源详情
create    #从文件或标准输入创建资源
update   #从文件或标准输入更新资源
delete   #通过文件名、标准输入、资源名或者 label 删除资源
log       #输出 pod 中一个容器的日志
rolling-update  #对指定的 RC 执行滚动升级
exec  #在容器内部执行命令
port-forward #将本地端口转发到 Pod
proxy   #为 Kubernetes API server 启动代理服务器
run     #在集群中使用指定镜像启动容器
expose   #将 SVC 或 pod 暴露为新的 kubernetes service
label     #更新资源的 label
config   #修改 kubernetes 配置文件
cluster-info #显示集群信息
api-versions #以”组/版本”的格式输出服务端支持的 API 版本
version       #输出服务端和客户端的版本信息
help         #显示各个命令的帮助信息
ingress-nginx  #管理 ingress 服务的插件(官方安装和使用方式)

配置

# Kubectl自动补全
$ source <(kubectl completion zsh)
$ source <(kubectl completion bash)

# 显示合并后的 kubeconfig 配置
$ kubectl config view

# 获取pod和svc的文档
$ kubectl explain pods,svc

创建资源对象

# yaml
kubectl create -f xxx-rc.yaml
kubectl create -f xxx-service.yaml

# json
kubectl create -f ./pod.json
cat pod.json | kubectl create -f -

# yaml2json
kubectl create -f docker-registry.yaml --edit -o json

# 创建多个
kubectl create -f xxx-service.yaml -f xxx-rc.yaml
# 创建目录下多个
kubectl create -f <目录>
# url
kubectl create -f https://git.io/vPieo

查看资源对象

Node或者namespace

kubectl get nodes
kubectl get namespace

pod

# 列出所有namespace中的所有pod
kubectl get pods --all-namespaces

# 查看子命令帮助信息
kubectl get --help

# 列出默认namespace中的所有pod
kubectl get pods

# 列出指定namespace中的所有pod
kubectl get pods --namespace=test


# 列出所有pod并显示详细信息
kubectl get pods -o wide
kubectl get replicationcontroller web
kubectl get -k dir/
kubectl get -f pod.yaml -o json
kubectl get rc/web service/frontend pods/web-pod-13je7
kubectl get pods/app-prod-78998bf7c6-ttp9g --namespace=test -o wide
kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}

# 列出该namespace中的所有pod包括未初始化的
kubectl get pods,rc,services --include-uninitialized

资源描述

# pod详细
kubectl describe pods/nginx
kubectl describe pods my-pod
kubectl describe -f pod.json
kubectl describe pod redis-6fd6c6d6f9-qht6n -n <namespace>
# node详细
kubectl describe nodes c1
# rc关联pod
kubectl describe pods <rc-name>

删除资源

# yaml文件名字按照你创建时的文件一致
kubectl delete -f xxx.yaml

# 删除包括某个 label 的 pod 对象
kubectl delete pods -l name=<label-name>

# 删除包括某个 label 的 service 对象
kubectl delete services -l name=<label-name>
# 删除包括某个 label 的 pod 和 service 对象
kubectl delete pods,services -l name=<label-name>
# 删除所有 pod/services 对象
kubectl delete pods --all
kubectl delete service --all
kubectl delete deployment --all

编辑资源

# 编辑名为docker-registry的service
kubectl edit svc/docker-registry

直接执行命令

在宿主机直接执行

# 执行 pod 的 date 命令,默认使用 pod 的第一个容器执行
kubectl exec mypod -- date
kubectl exec mypod --namespace=test -- date
# 指定 pod 中某个容器执行 date 命令
kubectl exec mypod -c ruby-container -- date
# 进入某个容器
kubectl exec mypod -c ruby-container -it -- bash

移除node

kubectl drain k8s-node-4 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node-4

日志

# one-shot output (no follow)
kubectl logs mypod
kubectl logs mypod --namespace=test
# 查看日志实时刷新
kubectl logs -f mypod -c ruby-container

操作集群

命令操作

1、部署一个tomcat

kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
kubectl get pods -o wide   # shows the tomcat pods and which nodes they run on

2、expose the tomcat deployment

kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
The Service listens on port 80 and forwards traffic to port 8080 inside the Pods

3、动态扩容测试

kubectl get deployment

应用升级:kubectl set image(--help查看帮助)

扩容:kubectl scale --replicas=3 deployment tomcat6
扩容了多份,所以无论访问哪个node的指定端口,都可以访问到tomcat6

4、删除

kubectl get all
kubectl delete deploy/nginx
kubectl delete service/nginx-service

流程:创建deployment会管理replicas,replicas控制pod数量,有pod故障会自动拉起新的pod

5、Ingress(底层使用nginx)

通过Service发现Pod进行关联,基于域名访问。
通过Ingress Controller实现Pod负载均衡

yaml操作tomcat6集群

kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
kubectl apply -f tomcat6-deployment.yaml
kubectl get all
# 暴露端口
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort --dry-run -o yaml

部署暴露放一起

tomcat6.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort

删除容器

kubectl delete deployment.apps/tomcat6
# 再次运行,部署+暴露
kubectl apply -f tomcat6-deployment.yaml

deployment.apps/tomcat6 created
service/tomcat6 created

kubectl get all

# tomcat is now reachable via any node IP + NodePort; next, install the ingress controller with the yaml shown earlier
kubectl apply -f ingress-controller.yaml 
kubectl get pods --all-namespaces

# two controller pods appear; the container images run on the worker nodes, the master only schedules them
kubectl apply -f ingress-tomcat6.yaml

# add "192.168.56.102 tomcat6.atguigu.com" to the local hosts file
# then http://tomcat6.atguigu.com opens the app directly, no ip+port needed
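
ingress-tomcat6.yaml itself is not reproduced in these notes; a minimal sketch of what it could contain, assuming it simply routes tomcat6.atguigu.com to the tomcat6 Service defined above (older clusters may need the networking.k8s.io/v1beta1 API and serviceName/servicePort fields instead):

cat > ingress-tomcat6.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tomcat6
spec:
  rules:
  - host: tomcat6.atguigu.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: tomcat6
            port:
              number: 80
EOF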

六、KubeSphere

KubeSphere v3.1.1

https://v3-1.docs.kubesphere.io/zh/docs/quick-start/minimal-kubesphere-on-k8s/

初始化

前置准备

  • 如需在 Kubernetes 上安装 KubeSphere v3.1.1,您的 Kubernetes 版本必须为:1.17.x、1.18.x、1.19.x 或 1.20.x。
  • 确保您的机器满足最低硬件要求:CPU > 1 核,内存 > 2 GB。
  • 在安装之前,需要配置 Kubernetes 集群中的默认存储类型。

NFS

# 在每个机器
yum install -y nfs-utils



# 执行以下命令,启动 nfs 服务;创建共享目录
mkdir -p /nfs/data

# 在master 执行以下命令 
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
# 使配置生效
exportfs -r
#检查配置是否生效
exportfs

# 子节点
showmount -e 101.200.169.229


mount -t nfs 101.200.169.229:/nfs/data /nfs/data

设置默认存储

The two NFS server IPs in the yaml below (the NFS_SERVER env value and the nfs volume server) must be changed to your own address, e.g. 192.168.56.100!

vi storage.yaml

kubectl apply -f storage.yaml

kubectl get sc

## 创建了一个存储类
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## 删除pv的时候,pv的内容是否要备份

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 101.200.169.229 ## 指定自己nfs服务器地址
            - name: NFS_PATH  
              value: /nfs/data  ## nfs服务器共享的目录
      volumes:
        - name: nfs-client-root
          nfs:
            server: 101.200.169.229 ## 指定自己地址
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

检查

创建pvc,动态供应

vi pvc.yaml

kubectl apply -f pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
kubectl get pvc
kubectl get pv

metrics-server

Cluster metrics component. The KubeSphere installer would install it anyway, but the image download often fails, so install it in advance.

The yaml below needs no modification.

vi metrics-server.yaml

kubectl apply -f metrics-server.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

检查

watch -n 1 kubectl get pod -A
# wait until all pods are Running
kubectl top nodes
kubectl top pods -A

慢安装

https://www.yuque.com/leifengyang/oncloud/gz1sls#EbmTY

# https://github.com.cnpmjs.org/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
# https://github.com.cnpmjs.org/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

修改cluster-configuration

在 cluster-configuration.yaml中指定我们需要开启的功能

参照官网“启用可插拔组件” https://kubesphere.com.cn/docs/pluggable-components/overview/

vi cluster-configuration.yaml

monitoring: true
endpointIps: 【内网IP】
redis.enable: true
openldap.enabled: true
alert.enabled: true
auditing.enabled: true
devops: true
events: true
logging: true
metrics_server: false # 已经安装,否则将失败
networkpolicy: true
ippool: calico # 网络插件
openpitrix.store: true
servicemesh: true

kubectl apply -f kubesphere-installer.yaml
# Terminating 失败 https://blog.csdn.net/shykevin/article/details/107242163   
# 启动失败 docker pull kubesphere/ks-installer:v3.1.1
watch -n 1 kubectl get pod -A
# kubectl describe pod -n kubesphere-system   ks-installer-54c6bcf76b-q745g 

kubectl apply -f cluster-configuration.yaml

watch -n 1 kubectl get pod -A -owide

# 进度
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
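
Once the installer log reports success, the web console is exposed on NodePort 30880 (console.port in the configuration below); the default account documented by KubeSphere is admin / P@88w0rd.

# confirm the console Service and its NodePort
kubectl get svc/ks-console -n kubesphere-system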

附录

kubesphere-installer.yaml

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
    - cc

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kubeedge.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v3.1.1
        imagePullPolicy: "Always"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time

cluster-configuration.yaml

172.31.0.4

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  etcd:
    monitoring: true       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 172.31.0.4  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    redis:
      enabled: true
    openldap:
      enabled: true
    minioVolumeSize: 20Gi # Minio PVC size.
    openldapVolumeSize: 2Gi   # openldap PVC size.
    redisVolumSize: 2Gi # Redis PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
    es:   # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # The total number of master nodes. Even numbers are not allowed.
      # elasticsearchDataReplicas: 1     # The total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # The volume size of Elasticsearch data nodes.
      logMaxAge: 7                     # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
    port: 30880
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true         # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true         # Enable or disable the KubeSphere Auditing Log System. 
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # Enable or disable the KubeSphere DevOps System.
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true         # Enable or disable the KubeSphere Events System.
    ruler:
      enabled: true
      replicas: 2
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true         # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # prometheusReplicas: 1          # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1          # AlertManager Replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: true # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: calico # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: true   # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Docker镜像地址

cat /etc/docker/daemon.json

cat > /etc/docker/daemon.json <<EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "registry-mirrors": ["https://kpw8rst3.mirror.aliyuncs.com"]
}
EOF


sudo systemctl daemon-reload
sudo systemctl restart docker

单节点一键安装

4c8g;centos7.9;防火墙放行 30000~32767;指定hostname

hostnamectl set-hostname node1
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
chmod +x kk
#可能需要下面命令
yum install -y conntrack
./kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.1

多节点一键安装

  • 4c8g (master)
  • 8c16g * 2(worker)
  • centos7.9
  • 内网互通
  • 每个机器有自己域名
  • 防火墙开放30000~32767端口
  • 每个节点:yum install conntrack-tools
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
chmod +x kk
#./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
./kk create cluster -f config-sample.yaml
# 进度
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

config-sample.yaml

修改节点ip

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 10.140.126.6, internalAddress: 10.140.126.6, user: root, password: Hello777}
  - {name: node1, address: 10.140.122.56, internalAddress: 10.140.122.56, user: root, password: Hello777}
  - {name: node2, address: 10.140.122.39, internalAddress: 10.140.122.39, user: root, password: Hello777}
  roleGroups:
    etcd:
    - master
    master: 
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""       
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""        
  etcd:
    monitoring: false      
    endpointIps: localhost  
    port: 2379             
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi 
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi  
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:  
      elasticsearchMasterVolumeSize: 4Gi   
      elasticsearchDataVolumeSize: 20Gi   
      logMaxAge: 7          
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""  
  console:
    enableMultiLogin: true 
    port: 30880
  alerting:       
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:    
    enabled: false
  devops:           
    enabled: true
    jenkinsMemoryLim: 2Gi     
    jenkinsMemoryReq: 1500Mi 
    jenkinsVolumeSize: 8Gi   
    jenkinsJavaOpts_Xms: 512m  
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:          
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:         
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:             
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi  
    prometheusVolumeSize: 20Gi  
  multicluster:
    clusterRole: none 
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:    
    enabled: false  
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: 
          - ""           
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

maven修改

ks-devops-agent: change the Maven repository (mirror) in its configuration

实战

多租户

image-20211217120020385

中间件

mysql

配置添加

项目成员,项目内创建配置,mysql-conf

key: my.cnf

value:

[client]
default-character-set=utf8mb4

[mysql]
default-character-set=utf8mb4

[mysqld]
init_connect='SET collation_connection = utf8mb4_unicode_ci'
init_connect='SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
lower_case_table_names=1
存储卷

存储管理,创建存储卷

mysql-pvc

有状态服务一般使用单节点读写,无状态多节点

副本集

应用负载、工作负载、有状态副本集

his-mysql,添加镜像mysql:5.7,使用默认端口

添加环境变量,同步主机时区

docker run -p 3306:3306 --name mysql-01 \
-v /mydata/mysql/log:/var/log/mysql \
-v /mydata/mysql/data:/var/lib/mysql \
-v /mydata/mysql/conf:/etc/mysql/conf.d \
-e MYSQL_ROOT_PASSWORD=root \
--restart=always \
-d mysql:5.7 
添加卷和配置

数据卷: mysql-pvc 读写、/var/lib/mysql

配置: 只读,/etc/mysql/conf.d

If the ConfigMap is modified, the mounted file is synced automatically after a while, but mysql does not hot-reload it; the workload has to be redeployed for the change to take effect.

暴露服务

默认暴露了一个服务,

DNS: his-mysql-9y3w.his,在集群内部都可以使用

kubectl get all -n his

mysql -uroot -h his-mysql-9y3w -p

可以删除默认暴露的服务

内网访问

创建指定工作负载,his-mysql,访问类型:集群内部通过后端直接访问服务

指定工作负载,添加有状态

端口:http,http-3306,容器3306,服务3306

(The service just created is cluster-internal only; NodePort external access cannot be enabled on it, so a separate service is needed for that.)

外网访问

创建指定工作负载,his-mysql-node,访问类型:集群内部IP

外网访问,Node

3306:32670/TCP

创建外网也会有内网 his-mysql-node.his

redis

创建配置:redis-conf redis.conf:value

让其自动映射到配置文件

创建有状态副本集 his-redis

镜像 redis

启动命令:redis-server 参数:/etc/redis/redis.conf

同步主机时区

#创建配置文件
## 1、准备redis配置文件内容
mkdir -p /mydata/redis/conf && vim /mydata/redis/conf/redis.conf


##配置示例
appendonly yes
port 6379
bind 0.0.0.0


#docker启动redis
docker run -d -p 6379:6379 --restart=always \
-v /mydata/redis/conf/redis.conf:/etc/redis/redis.conf \
-v  /mydata/redis-01/data:/data \
 --name redis-01 redis:6.2.5 \
 redis-server /etc/redis/redis.conf

Volume template: redis-pvc, created automatically. For stateful workloads the template approach is recommended: each replica gets its own volume, created automatically when scaling out; otherwise all replicas would share a single volume.

挂载路径:读写 /data

配置文件,redis-conf,只读,/etc/redis,会自动将configMap的key:redis.conf 作为文件名,value为文件值写入

进入容器

cat /etc/redis/redis.conf

删除默认服务,

内部访问

创建服务,his-redis,集群内部,指定工作负载,

http-6379 6379 6379

外部访问

his-redis-node,IP

http-6379 6379 6379

外网访问,NodePort

6379:32005

elasticSearch

# 创建数据目录
mkdir -p /mydata/es-01 && chmod 777 -R /mydata/es-01

# 容器启动,具名卷挂载,没有文件会用默认的
docker run --restart=always -d -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-v es-config:/usr/share/elasticsearch/config \
-v /mydata/es-01/data:/usr/share/elasticsearch/data \
--name es-01 \
elasticsearch:7.13.4

docker exec -it es-01 /bin/bash
cd /usr/share/elasticsearch/config

注意: 子路径挂载,配置修改后,k8s不会对其Pod内的相关配置文件进行热更新,需要自己重启Pod

es配置: es-conf

keys: elasticsearch.yml and jvm.options

工作负载:

有状态,his-es,elasticsearch:7.13.4

环境变量:

discovery.type = single-node
ES_JAVA_OPTS = -Xms512m -Xmx512m

同步时区

存储卷模板 es-pvc

读写 /usr/share/elasticsearch/data

Mounting the es-conf config read-only at /usr/share/elasticsearch/config would be wrong: it replaces the whole directory, while only individual files need to be overridden. Mount each key via a sub path instead:

/usr/share/elasticsearch/config/elasticsearch.yml,子路径elasticsearch.yml,选定指定路径

/usr/share/elasticsearch/config/jvm.options,子路径 jvm.options,选定指定路径

不使用默认服务

his-es

Do not allocate a cluster IP (headless service).

http http-9200 9200 9200

tcp tcp-9300 9300 9300

his-es.his

进入服务,curl http://his-es.his:9200

his-es-node 外网访问 NodePort http://192.168.56.104:32705/

elasticSearch配置

elasticsearch.yml

cluster.name: "docker-cluster"
network.host: 0.0.0.0

jvm.options

################################################################
##
## JVM configuration
##
################################################################
##
## WARNING: DO NOT EDIT THIS FILE. If you want to override the
## JVM options in this file, or set any additional options, you
## should create one or more files in the jvm.options.d
## directory containing your adjustments.
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html
## for more information.
##
################################################################



################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## and the min and max should be set to the same value. For
## example, to set the heap to 4 GB, create a new file in the
## jvm.options.d directory containing these lines:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################


################################################################
## Expert settings
################################################################
##
## All settings below here are considered expert settings. Do
## not adjust them unless you understand what you are doing. Do
## not edit them in this file; instead, create a new file in the
## jvm.options.d directory containing your adjustments.
##
################################################################

## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
# NOTE: G1 GC is only supported on JDK version 10 or later
# to use G1GC, uncomment the next two lines and update the version on the
# following three lines to your version of the JDK
# 10-13:-XX:-UseConcMarkSweepGC
# 10-13:-XX:-UseCMSInitiatingOccupancyOnly
14-:-XX:+UseG1GC

## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log

## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m

应用商店

Helm中的charts

https://artifacthub.io/

应用管理->应用仓库 -> 添加应用仓库 https://charts.bitnami.com/bitnami

image-20211217174331660

创建自制应用

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/redis

RuoYiCloud

https://gitee.com/y_project/RuoYi-Cloud

img

本地运行

nacos 2.0.3:https://nacos.io/zh-cn/docs/what-is-nacos.html

修改配置文件,导入ry-config的nacos配置

单机模式支持mysql

startup.cmd -m standalone starts nacos in standalone mode; with the mysql config in place, everything nacos would otherwise write to its embedded database goes to mysql (a too-short connection timeout can make this fail).

Adjust the nacos configs for the cloud environment (database and redis addresses).

上云注意

image-20211217201936858

注意点:

中间件(有状态、数据导入)

  • mysql(ruoyi-config、ruoyi-cloud、ruoyi-seata)
  • redis
  • nacos(service暴露nacos,cluster.conf中的各个ip应该使用nacos固定域名)

微服务(无状态、镜像)

访问地址、配置(生产配置分离URL)

1、每个微服务准备 bootstrap.properties,配置 nacos 2.0.3 地址信息。默认使用本地

2、每个微服务准备Dockerfile,启动命令,指定线上nacos配置等。

3、每个微服务制作自己镜像。

负载、子目录挂载

service暴露

Nacos

Exposing a clustered nacos v2.0.3 through a Service this way causes errors, so a single standalone instance is used instead.

The IPs in cluster.conf should be replaced with the pods' stable in-cluster DNS names.

Data lives in mysql, but a few config files still have to be changed and mounted from a ConfigMap using sub paths under /home/nacos/conf/xxx:

  • application.properties(数据库地址,his-mysql.his:3306)
  • cluster.conf( his-nacoe-v1-0.his.svc.cluster.local:8848 )

无状态,镜像:nacos/nacos-server:v2.0.3,指定暴露tcp 8848,环境变量 MODE = standalone

Sync the host time zone

健康检查器,容器存活检查,Http请求 /nacos 8848,初始延迟20s,超时时间3s,10秒频率

所有配置文件进行clone,文件dev改为prod

存活探针:

After a cluster restart nacos may come up before mysql; if the database is not ready, nacos fails to connect and stays broken.

A healthy nacos should answer curl 127.0.0.1:8848/nacos

So configure a liveness probe: when the check fails the container is restarted, which makes nacos retry the mysql connection.

Dockerfile

FROM openjdk:8-jdk
LABEL maintainer=leifengyang


#docker run -e PARAMS="--server.port 9090"
ENV PARAMS="--server.port=8080 --spring.profiles.active=prod --spring.cloud.nacos.discovery.server-addr=his-nacos.his:8848 --spring.cloud.nacos.config.server-addr=his-nacos.his:8848 --spring.cloud.nacos.config.namespace=prod --spring.cloud.nacos.config.file-extension=yml"
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone

COPY target/*.jar /app.jar
EXPOSE 8080

#
ENTRYPOINT ["/bin/sh","-c","java -Dfile.encoding=utf8 -Djava.security.egd=file:/dev/./urandom -jar app.jar ${PARAMS}"]
  • 容器默认以8080端口启动

  • 时间为CST

  • 环境变量 PARAMS 可以动态指定配置文件中任意的值

  • nacos集群内地址为 his-nacos.his:8848

微服务默认启动加载 nacos中 服务名-激活的环境.yml 文件,所以线上的配置可以全部写在nacos中。
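结合上面 Dockerfile 中 PARAMS 用到的参数名,bootstrap.properties 大致如下(服务名等取值仅为示例,实际以各微服务为准):

# 应用名决定了从 nacos 加载“服务名-激活的环境.yml”
spring.application.name=ruoyi-system
spring.profiles.active=prod
# nacos 注册中心与配置中心地址(集群内地址为示例值)
spring.cloud.nacos.discovery.server-addr=his-nacos.his:8848
spring.cloud.nacos.config.server-addr=his-nacos.his:8848
spring.cloud.nacos.config.namespace=prod
spring.cloud.nacos.config.file-extension=yml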

微服务

步骤:

maven打包成jar

docker将jar打包成镜像

推送镜像到阿里云仓库(harbor)

应用部署,无状态,不能外网访问,只暴露网关

$ docker login --username=forsum**** registry.cn-hangzhou.aliyuncs.com

#把本地镜像,改名,成符合阿里云名字规范的镜像。
$ docker tag [ImageId] registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/镜像名:[镜像版本号]
## docker tag 461955fe1e57 registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-visual-monitor:v1

$ docker push registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/镜像名:[镜像版本号]
## docker push registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-visual-monitor:v1

应用一启动会获取到 “应用名-激活的环境标识.yml”,配置可以只使用server-addr,省略discovery.server-addr & config.server-addr

每次部署应用的时候,需要提前修改nacos线上配置,确认好每个中间件的连接地址是否正确

redis: his-redis.his

mysql: his-mysql.his

gateway的nacos配置添加

# Spring
spring: 
  cloud:
    sentinel:
      # 取消控制台懒加载
      eager: true
      transport:
        # 控制台地址
        dashboard: 127.0.0.1:8718
      # nacos配置持久化
      datasource:
        ds1:
          nacos:
            server-addr: his-nacos.his:8848
            dataId: sentinel-ruoyi-gateway
            groupId: DEFAULT_GROUP
            data-type: json

前端

不应该这样

vue.config.js:     

target: http://ruoyi-gate.his:8080

应该在 nginx 中配置反向代理,由 nginx 把前端的 API 请求转发到网关(配置示意见下)

ruoyi-ui 需要外网访问,Service 用 NodePort 方式暴露
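一个 nginx 反向代理的示意(/prod-api 前缀、网关 Service 名 ruoyi-gate.his 均按上文假设,以前端实际打包配置为准):

server {
    listen 80;
    # 前端静态资源
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }
    # 假设前端生产环境以 /prod-api/ 作为 API 前缀,统一转发到网关
    location /prod-api/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://ruoyi-gate.his:8080/;
    }
}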

所有镜像

docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-auth:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-file:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-gateway:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-job:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-system:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-visual-monitor:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-ui:v2

七、公网搭建

外网成功

初始化

# 开放端口
10250/10260 TCP端口:给kube-scheduler、kube-controller-manager、kube-proxy、kubelet等使用
6443 TCP端口:给kube-apiserver使用
2379 2380 2381 TCP端口:etcd使用
8472 UDP端口:vxlan使用端口

groupadd sudo
visudo
  myuser ALL=(ALL) ALL # 用户
  %sudo ALL=(ALL) ALL # 用户组
sudo mkdir test


hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

cat >> /etc/hosts <<EOF  # 追加,不要覆盖原有的 localhost 解析
101.200.169.229  master
112.124.15.81 node1
101.227.11.219 node2
EOF

# 设置系统时区为 中国/上海
timedatectl set-timezone Asia/Shanghai
# 将当前的UTC时间写入硬件时钟
timedatectl set-local-rtc 0
# 重启依赖于系统时间的服务
systemctl restart rsyslog
systemctl restart crond
systemctl stop postfix && systemctl disable postfix

cat > k8s.conf <<EOF
#开启网桥模式
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
#开启转发
net.ipv4.ip_forward = 1
##关闭ipv6
net.ipv6.conf.all.disable_ipv6=1
EOF
cp k8s.conf /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf

setenforce 0
swapoff -a

关闭无用服务

mkdir /var/log/journal # 持久化保存日志的目录
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# 持久化
Storage=persistent
# 压缩历史日志
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# 最大占用空间 10G
SystemMaxUse=10G
# 单日志文件最大 200M
SystemMaxFileSize=200M
# 日志保存时间 2 周
MaxRetentionSec=2week
# 不将日志转发到 syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald

网卡

| 主机 | 外网 | 内网 |
| --- | --- | --- |
| master | 101.200.169.229 | 172.25.50.35/18 |
| node1 | 112.124.15.81 | 172.26.18.224/18 |
| node2 | 101.227.11.219 | 172.31.0.254/24 |

# 所有节点
cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=【公网IP】
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF

vim /etc/sysconfig/network-scripts/ifcfg-eth0:1
cat /etc/sysconfig/network-scripts/ifcfg-eth0:1


systemctl restart network

ip addr


# 安装ipvs(所有节点)
yum install -y ipvsadm

Docker

# 安装命令

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install wget

yum makecache

wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm

yum -y install containerd.io-1.2.6-3.3.el7.x86_64.rpm

yum install -y docker-ce docker-ce-cli containerd.io

systemctl enable docker --now


# 修改docker的cgroup驱动,低版本需要修改(不确定是否需要可以先改,如果docker重启后起不来,说明不需要,删掉即可);这里已改用下面的 daemon.json 方式替代
#vim /usr/lib/systemd/system/docker.service
# 在ExecStart命令结尾添加:
#--exec-opt native.cgroupdriver=systemd

#systemctl daemon-reload
#systemctl restart docker

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

三大件

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf 

# !!!在 ExecStart 行的 $KUBELET_EXTRA_ARGS 后面追加 --node-ip=节点公网IP(用空格分隔),示意见下
--node-ip=101.200.169.229
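修改后 10-kubeadm.conf 的 ExecStart 行大致如下(仅示意,$KUBELET_* 变量保持文件原有内容不动,只在末尾追加 --node-ip):

ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=101.200.169.229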


systemctl enable --now kubelet

Master

vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.20.13
apiServer:
  certSANs:
  - master
  - 101.200.169.229
  - 172.25.50.35
  - 10.96.0.1
controlPlaneEndpoint: 101.200.169.229:6443
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
kubeadm init --config=kubeadm-config.yaml 

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

apiserver

master

vim /etc/kubernetes/manifests/kube-apiserver.yaml
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 101.200.169.229:6443
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=101.200.169.229
    - --bind-address=0.0.0.0

flannel

master

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
vim kube-flannel.yml

kubectl apply -f kube-flannel.yml

kubectl get pod -A -owide # 是否是外网地址
containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --public-ip=$(PUBLIC_IP) #add
        - --iface=eth0 #add

 # env下
 env:
 - name: PUBLIC_IP # add
   valueFrom:          
     fieldRef:          
       fieldPath: status.podIP 

       # 加在原有 POD_NAME 等环境变量之前

IPVS(all)

modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# 是否要开机自启动
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

swapoff -a
yum install -y ipvsadm
ipvsadm -Ln

# !@master
kubectl edit configmaps -n kube-system kube-proxy
kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"  # 填入`ipvs`
    nodePortAddresses: null

worker 加入

# master
kubectl get pod -A -owide
kubeadm token create --print-join-command
# worker
systemctl enable --now kubelet

验证

kubectl get nodes
kubectl get pod -A -owide
kubectl create deployment my-dep --image=nginx --replicas=3
kubectl get pod -A -owide
curl 其他节点的nginx pod ip,获取正常说明成功

master也变为node

kubectl taint node --all node-role.kubernetes.io/master-
# 恢复
kubectl taint node --all node-role.kubernetes.io/master="":NoSchedule

删除

rm -rf /etc/kubernetes/*
rm -rf /root/.kube
kubeadm reset
rm -rf $HOME/.kube/config

外网Kuboard

https://kuboard.cn/

dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
type: ClusterIP 改为 type: NodePort
kubectl get svc -A |grep kubernetes-dashboard
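也可以不用 kubectl edit,直接 patch 成 NodePort(等价做法,示意):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl get svc -A | grep kubernetes-dashboard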


#创建访问账号,准备一个yaml文件; vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dash.yaml


#获取访问令牌
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
kubectl get svc -A
https://IP:31428/#/login

kuboard

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
# kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3-swr.yaml
#kubectl delete daemonset kuboard-etcd -n kuboard
#为节点添加标签
#kubectl label nodes your-node-name k8s.kuboard.cn/role=etcd
#执行 kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
kubectl get svc -A
http://IP:30080

微服务

存储类

https://kuboard.cn/guide/cluster/storage.html#%E5%9C%A8-kuboard-%E5%88%9B%E5%BB%BA%E5%AD%98%E5%82%A8%E7%B1%BB

mkdir /tmp/testnfs \
&& mount -t nfs -o vers=4,minorversion=0,noresvport 101.200.169.229:/nfs/data /tmp/testnfs \
&& echo "hello nfs" >> /tmp/testnfs/test.txt \
&& cat /tmp/testnfs/test.txt
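如果暂时不按上面文档创建存储类,也可以先用该 NFS 路径手工建一个静态 PV 做验证(名称、容量均为假设值,仅示意):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-demo            # 假设的名称
spec:
  capacity:
    storage: 10Gi              # 假设的容量
  accessModes:
    - ReadWriteMany
  nfs:
    server: 101.200.169.229    # 即上面挂载测试用的 NFS 服务器
    path: /nfs/data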

流程

https://blog.csdn.net/chen645800876/article/details/105833835

虚拟网卡+kubeadm安装 centos7.6

准备

# hostname
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

# 内核参数
cat > k8s.conf <<EOF
#开启网桥模式
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
#开启转发
net.ipv4.ip_forward = 1
##关闭ipv6
net.ipv6.conf.all.disable_ipv6=1
EOF
cp k8s.conf /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf
# 调整系统时区
# 设置系统时区为 中国/上海
timedatectl set-timezone Asia/Shanghai
# 将当前的UTC时间写入硬件时钟
timedatectl set-local-rtc 0
# 重启依赖于系统时间的服务
systemctl restart rsyslog
systemctl restart crond
# 关闭系统不需要的服务
systemctl stop postfix && systemctl disable postfix
# 使用journald关闭rsyslogd
mkdir /var/log/journal # 持久化保存日志的目录
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# 持久化
Storage=persistent
# 压缩历史日志
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# 最大占用空间 10G
SystemMaxUse=10G
# 单日志文件最大 200M
SystemMaxFileSize=200M
# 日志保存时间 2 周
MaxRetentionSec=2week
# 不将日志转发到 syslog
ForwardToSyslog=no
EOF

systemctl restart systemd-journald

swapoff -a

# ipvs前置条件准备
# step1
modprobe br_netfilter

# step2
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

docker


# docker
# step 1: 安装必要的一些系统工具
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: 添加软件源信息
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: 更新并安装Docker-CE
yum makecache fast
yum -y install docker-ce
# Step 4: 开启Docker服务
systemctl start docker


sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# 创建 /etc/docker 目录并配置 daemon(另一种最小化配置,与上面的 daemon.json 二选一即可)
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF



# 启动docker
systemctl daemon-reload && systemctl restart docker && systemctl enable docker

虚拟网卡(所有节点)



# step1 ,注意替换你的公网IP进去
cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=101.200.169.229
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF

cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=112.124.15.81
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF

# step2 如果是centos8,需要重启
systemctl restart network
# step3 查看新建的IP是否进去
ip addr

下载kube(不好用)

# 添加源
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# 关闭selinux
setenforce 0

# 安装kubelet、kubeadm、kubectl
yum install -y kubelet kubeadm kubectl

# 设置为开机自启
systemctl enable kubelet 

kubelet(所有)

三大件
# 此文件安装kubeadm后就存在了
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# 注意,这步很重要,如果不做,节点仍然会使用内网IP注册进集群
# 在末尾添加参数 --node-ip=公网IP
$KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=47.74.22.13

kubeadm(主节点)

相关镜像
docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.0             43940c34f24f        4 weeks ago         117MB
k8s.gcr.io/kube-controller-manager   v1.18.0             d3e55153f52f        4 weeks ago         162MB
k8s.gcr.io/kube-scheduler            v1.18.0             a31f78c7c8ce        4 weeks ago         95.3MB
k8s.gcr.io/kube-apiserver            v1.18.0             74060cea7f70        4 weeks ago         173MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        2 months ago        683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        3 months ago        43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        6 months ago        288MB
quay.io/coreos/flannel               v0.11.0-amd64       ff281650a721        15 months ago       52.6MB

准备镜像
# step1 添加配置文件,注意替换下面的IP
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images
kubernetesVersion: v1.20.9
apiServer:
  certSANs:
  - master
  - 101.200.169.229
  - 172.25.50.35
  - 10.96.0.1
controlPlaneEndpoint: 101.200.169.229:6443
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
--- 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
apiServer:
  certSANs:    #填写所有kube-apiserver节点的hostname、IP、VIP
  - master    #请替换为hostname
  - 47.74.22.13   #请替换为公网
  - 175.24.19.12  #请替换为私网
  - 10.96.0.1   #不要替换,此IP是API的集群地址,部分服务会用到
controlPlaneEndpoint: 47.74.22.13:6443 #替换为公网IP
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
---
# 将默认调度方式改为ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF




# step2 如果是1核心或者1G内存的请在末尾添加参数(--ignore-preflight-errors=all),否则会初始化失败
# 同时注意,此步骤成功后,会打印,两个重要信息
kubeadm init --config=kubeadm-config.yaml 

# 信息1 上面初始化成功后,将会生成kubeconfig文件,用于请求api服务器,请执行下面操作
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 信息2 此信息用于后面工作节点加入主节点使用
kubeadm join 101.200.169.229:6443 --token ifcqkw.hu6kev51um9w40u5 \
    --discovery-token-ca-cert-hash sha256:580afbe4454973b6108ca975a524c0f64bf342d5d46b944725f348d891d6478a

kube-apiserver(主)

# 修改两处信息:添加--bind-address,修改--advertise-address
vim /etc/kubernetes/manifests/kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 47.74.22.13:6443  #修改为公网IP
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=47.74.22.13  #修改为公网IP
    - --bind-address=0.0.0.0 #添加此参数
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.18.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 175.24.19.12
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}

worker加入

## 工作节点相关镜像
k8s.gcr.io/kube-proxy    v1.18.0             43940c34f24f        4 weeks ago         117MB
k8s.gcr.io/pause         3.2                 80d28bedfe5d        2 months ago        683kB
k8s.gcr.io/coredns       1.6.7               67da37a9a360        3 months ago        43.8MB
quay.io/coreos/flannel   v0.11.0-amd64       ff281650a721        15 months ago       52.6MB

# 需要虚拟IP和kubelet启动参数都改成功后,再执行
kubeadm join 47.74.22.13:6443 --token sdfs.dsfsdfsdfijdth \
    --discovery-token-ca-cert-hash sha256:sdfsdfsdfsdfsdfsdfsdfsdfg9a460f44b118050091245c1d

# 成功后,INTERNAL-IP均显示公网IP
kubectl get nodes -o wide

flannel

# 下载flannel的yaml配置文件

# 共修改两个地方,一个是args下,添加
 args:
 - --public-ip=$(PUBLIC_IP) # 添加此参数,申明公网IP
 - --iface=eth0             # 添加此参数,绑定网卡


 # 然后是env下
 env:
 - name: PUBLIC_IP     #添加环境变量
   valueFrom:          
     fieldRef:          
       fieldPath: status.podIP 

检查网络

# 检查pod是否都是ready状态
kubectl get pods -o wide --all-namespaces
...

# 手动创建一个pod
kubectl create deployment nginx --image=nginx

# 查看pod的ip
kubectl get pods -o wide

# 主节点或其它节点,ping一下此ip,看看是否能ping通

# ping 不通的话,检查前面「初始化」部分列出的端口是否已开放

开启ipvs

# 前面都成功了,但是有时候默认并不会启用`IPVS`模式,那就手动修改一下,只修改一处
# 修改后,如果没有及时生效,请删除kube-proxy的Pod(命令见下方示意),会自动重新创建,然后使用ipvsadm -Ln命令,查看是否生效
# ipvsadm没有安装的,使用yum install ipvsadm安装
kubectl edit configmaps -n kube-system kube-proxy

---
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"  # 如果为空,请填入`ipvs`
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
...
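按上面注释的说法,修改 configmap 后可以删除 kube-proxy 的 Pod 让 DaemonSet 重建,再用 ipvsadm 验证(示意,k8s-app=kube-proxy 是 kube-proxy DaemonSet 的默认标签):

# 删除 kube-proxy Pod,DaemonSet 会自动重建
kubectl delete pod -n kube-system -l k8s-app=kube-proxy
# 查看 ipvs 转发规则是否生成
ipvsadm -Ln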

移除虚拟网卡

# 移除虚拟网卡
 mv /etc/sysconfig/network-scripts/ifcfg-eth0\:1 /root/
 # 重启
 reboot
# 内核版本较高时(4.19+),nf_conntrack_ipv4 已合并进 nf_conntrack,加载下面的模块即可
modprobe -- nf_conntrack

cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=101.200.169.229
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF
# node1 上将 IPADDR 改为 112.124.15.81

网络

| 主机 | 外网 | 内网 |
| --- | --- | --- |
| master | 101.200.169.229 | 172.25.50.35/18 |
| node1 | 112.124.15.81 | 172.26.18.224/18 |
| node2 | 101.227.11.219 | 172.31.0.254/24 |

用公网 IP 代替内网 IP 注册进集群,还需要创建虚拟网卡并绑定公网 IP

添加网卡 https://blog.csdn.net/weixin_30588729/article/details/95464281

vim /etc/yum.repos.d/nux-misc.repo

[nux-misc]
name=Nux Misc
baseurl=http://li.nux.ro/download/nux/misc/el7/x86_64/
enabled=0
gpgcheck=1
gpgkey=http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro

yum --enablerepo=nux-misc install tunctl

tunctl -t eth1 -u root    # eth1 是虚拟网卡名字
# 可能需要自启动
ifconfig eth1 101.200.169.229 netmask 255.255.255.0 promisc
ip a
mkdir /root/boot-shells
vi /root/boot-shells/iptables.sh
iptables -t nat -A OUTPUT -d 172.25.50.35  -j DNAT --to-destination 101.200.169.229
iptables -t nat -A OUTPUT -d 172.26.18.224  -j DNAT --to-destination 112.124.15.81
iptables -t nat -A OUTPUT -d 172.31.0.254  -j DNAT --to-destination 101.227.11.219

vim /etc/rc.d/rc.local
# 以root用户执行脚本,添加
su - root -c 'cd /root/boot-shells/ && sh iptables.sh'

#授予 rc.local 执行权限
chmod +x /etc/rc.d/rc.local

#重启服务器,使生效
reboot

ping 172.25.50.35
ping 172.26.18.224

节点

master 101.200.169.229

vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# 添加
--node-ip=101.200.169.229




vim /etc/kubernetes/manifests/etcd.yaml
    - --listen-client-urls=https://0.0.0.0:2379  # 修改的位置
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://0.0.0.0:2380  # 修改的位置

sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
cp /etc/kubernetes/manifests/kube-apiserver.yaml ./
cp -f ./kube-apiserver.yaml /etc/kubernetes/manifests

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=101.200.169.229 #update
    - --bind-address=0.0.0.0 # add

kubeadm init \
--apiserver-advertise-address=101.200.169.229 \
--control-plane-endpoint=101.200.169.229 \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

kubeadm token create --print-join-command

# 使用flannel:Network 原本是 10.244.0.0/16,这里改成与 --pod-network-cidr 一致的 192.168.0.0/16(JSON 内不能写注释)
net-conf.json: |
    {
      "Network": "192.168.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

# delete
kubeadm reset
rm -rf $HOME/.kube/config
sudo systemctl enable --now kubelet
systemctl status kubelet


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

卡住后

sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
cp /etc/kubernetes/manifests/kube-apiserver.yaml ./
cp -f ./kube-apiserver.yaml /etc/kubernetes/manifests
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=101.200.169.229 #update
    - --bind-address=0.0.0.0 # add

vim /etc/kubernetes/manifests/etcd.yaml

    - --listen-client-urls=https://0.0.0.0:2379  # 修改的位置
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://0.0.0.0:2380  # 修改的位置



kube-flannel.yml
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --public-ip=$(PUBLIC_IP)  # new
        - --iface=eth0  # new
        - --kube-subnet-mgr


        env:
        - name: PUBLIC_IP # new
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE

calico(X)



# calico无法连接网络,需要添加网卡,添加iptable映射,sudo iptables -P FORWARD ACCEPT
# 添加网卡 https://blog.csdn.net/weixin_30588729/article/details/95464281
sudo ifconfig eth0:1 101.200.169.229
sudo ifconfig eth0:1 112.124.15.81
# 不行,calico/node is not ready: BIRD is not ready: BGP not established with $ip
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=skip-interface=eth0:1
# or
vim calico.yaml
- name: CLUSTER_TYPE 
  value: "k8s,bgp" 
- name: IP_AUTODETECTION_METHOD  # add and next line
  value: "interface=eth0.*" 
- name: IP
  value: "autodetect" 

ip a
# 查看
kubectl get pod -A
kubectl describe pod -n kube-system  
watch -n 1 kubectl get pod -A

# 删除
kubectl get pod -A
kubectl delete pod -n kube-system   
kubectl delete -f calico.yaml
vim calico.yaml
kubectl apply -f calico.yaml

ERROR

2380 bind

k8s etcdmain: listen tcp xx.xx.xx.xx:2380: bind: cannot assign requested address

apiserver、etcd 容器都是 exited 状态;etcd 容器日志报了绑定不了 IP 端口的错误。查看网卡 IP 可以看到,公网 IP 并没有绑定在网卡上。

修改/etc/kubernetes/manifests/etcd.yaml,将--listen-client-urls、--listen-peer-urls后面的ip都设置为0.0.0.0

master

    - --listen-client-urls=https://0.0.0.0:2379  # 修改的位置
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://0.0.0.0:2380  # 修改的位置

ping 不通

由于主机内看到的只有内网 IP, 而且几台云服务器位于不同的内网, 直接搭建会将内网 IP 注册进集群导致搭建不成功。解决方案:使用虚拟网卡绑定公网 IP, 使用该公网 IP 来注册集群

# 所有主机都要创建虚拟网卡,并绑定对应的公网 ip
sudo ifconfig eth0:1 139.198.108.103

该设置方式在重启服务器后会失效,持久化需要将配置写入 /etc/network/interfaces 或 /etc/netplan/50-cloud-init.yaml(示意见下)
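以 /etc/netplan/50-cloud-init.yaml 为例的持久化片段(仅示意,网卡名 eth0、IP 均以实际环境为准,修改后执行 netplan apply):

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      addresses:
        - 139.198.108.103/32   # 追加要绑定的公网 IP,保留原有配置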

calico

sudo iptables -P FORWARD ACCEPT

docker 从 1.13 版本开始会将 iptables FORWARD chain 的默认策略设置为 DROP,该设置会导致 ping 其他 node 上的 Pod IP 失败,因此需要执行上面的命令放行转发

cs unhealthy

使用kubectl get cs查看显示controller-manager、scheduler状态为Unhealthy

kubectl get cs  

NAME                 STATUS      MESSAGE                    ERROR                               
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}       

在这两个文件中都把 - --port=0 这一行注释掉(即 # - --port=0),示意见下
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
vi /etc/kubernetes/manifests/kube-scheduler.yaml
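注释后的片段大致如下(以 kube-controller-manager.yaml 为例,kube-scheduler.yaml 同理):

spec:
  containers:
  - command:
    - kube-controller-manager
    # - --port=0    # 注释掉后 10252 的 healthz 端口恢复监听,kubectl get cs 恢复 Healthy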

metrics

metrics-server 在 docker 环境中一直部署失败,按下面文章的方式修改启动参数

https://cloud.tencent.com/developer/article/1819955

    command:
    - /metrics-server
    - --kubelet-insecure-tls  #添加
    - --kubelet-preferred-address-types=InternalIP
    - --metric-resolution=30s #添加

八、DevOps

https://kubesphere.com.cn/docs/devops-user-guide/

Dev开发,Ops运维,CI持续集成,CD持续交付

sonarqube:代码质量检查

尚医通

https://gitee.com/leifengyang/yygh-parent

https://gitee.com/leifengyang/yygh-admin

https://gitee.com/leifengyang/yygh-site

yygh-parent
|---common                  // 通用模块
|---hospital-manage         // 医院后台                [9999]   
|---model                    // 数据模型
|---server-gateway            // 网关                  [80]
|---service                    // 微服务层
|-------service-cmn            // 公共服务                [8202]
|-------service-hosp        // 医院数据服务          [8201]
|-------service-order        // 预约下单服务          [8206]
|-------service-oss            // 对象存储服务          [8205]
|-------service-sms            // 短信服务                [8204]
|-------service-statistics    // 统计服务                [8208]
|-------service-task        // 定时服务                [8207]
|-------service-user        // 会员服务                [8203]

====================================================================

yygh-admin                    // 医院管理后台        [9528]
yygh-site                    // 挂号平台              [3000]

| 中间件 | 集群内地址 | 外部访问地址 |
| --- | --- | --- |
| Nacos | his-nacos.his:8848 | http://139.198.165.238:30349/nacos |
| MySQL | his-mysql.his:3306 | 139.198.165.238:31840 |
| Redis | his-redis.his:6379 | 139.198.165.238:31968 |
| Sentinel | his-sentinel.his:8080 | http://139.198.165.238:31523/ |
| MongoDB | mongodb.his:27017 | 139.198.165.238:32693 |
| RabbitMQ | rabbitm-yp1tx4-rabbitmq.his:5672 | 139.198.165.238:30375 |
| ElasticSearch | his-es.his:9200 | 139.198.165.238:31300 |

流水线

拉取代码:添加步骤,指定容器,在任务里嵌套 shell 步骤执行 ls

base:git

maven:git、maven

添加凭证(账号密码)

项目编译:指定容器、任务,执行 mvn clean package -Dmaven.test.skip=true;注意修改 pom 的 finalName(p115)

maven 镜像仓库修改:在 配置中心 -> 配置 中修改 ks-devops-agent 的配置,在 maven setting 中添加镜像加速地址

已经下载的jar包,下次不会重复下载

构建镜像:容器 maven,bash:ls、bash:docker build -t hospital-manage:latest -f hospital-manage/Dockerfile ./hospital-manage

成功后,使用并行阶段

推送镜像:要使用统一的镜像仓库(阿里云、harbor),bash:docker build -t hospital-manage:latest -f hospital-manage/Dockerfile ./hospital-manage

docker tag hospital-manage:latest $REGISTRY/$DOCKERHUB_NAMESPACE/hospital-manage:SNAPSHOT-$BUILD_NUMBER
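tag 之后同样用流水线变量推送(示意,仓库地址与命名空间取决于 environment 中的定义):

docker push $REGISTRY/$DOCKERHUB_NAMESPACE/hospital-manage:SNAPSHOT-$BUILD_NUMBER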

sh 步骤能动态取值,environment 中定义的变量可以在流水线的各个步骤中取出使用

image-20211222121152513

九、工具

helm

artifacthub.io 类似于 dockerhub

十、部署应用

  • 工作负载:部署(无状态微服务)、有状态副本集(数据库)、守护进程集(每个机器都要的日志收集器 )

  • 数据存储

  • 网络访问