Installing Kubernetes from Binaries
Preparation
- Disable swap
- Check that the MAC address and product_uuid are unique on every node (example commands below)
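For example, swap can be turned off and the identifiers checked like this (the sed line comments out swap entries in /etc/fstab; adjust it to your own layout):

swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab       # keep swap off across reboots

ip link show                                # MAC addresses must be unique per node
cat /sys/class/dmi/id/product_uuid          # product_uuid must be unique per node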
Edit the hosts file
192.168.1.31 k8s-master01
192.168.1.32 k8s-master02
192.168.1.33 k8s-master03
192.168.1.41 k8s-node01
192.168.1.42 k8s-node02
192.168.1.51 k8s-apiserver
192.168.1.51 should be set to a virtual IP (VIP) that fronts the kube-apiservers.
Load kernel modules
cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service
- ip_vs: implements load balancing and high availability by distributing incoming requests from a front-end proxy to back-end real servers, providing high-performance, scalable network services.
- ip_vs_rr: an IPVS scheduling algorithm that distributes requests to back-end servers in round-robin order.
- ip_vs_wrr: an IPVS scheduling algorithm that distributes requests using weighted round-robin, according to the configured weights.
- ip_vs_sh: an IPVS scheduling algorithm that distributes requests by hashing the source and destination IP addresses.
- nf_conntrack: tracks and manages network connections (TCP, UDP, ICMP, etc.); it is the foundation of stateful firewalling.
- ip_tables: provides IP packet filtering and NAT support on Linux.
- ip_set: extends iptables with efficient IP address set operations.
- xt_set: extends iptables with efficient packet matching against IP sets.
- ipt_set: iptables support for matching against IP sets, used together with xt_set and the ipset tooling.
- ipt_rpfilter: implements reverse-path filtering, which helps prevent IP spoofing and DDoS attacks.
- ipt_REJECT: an iptables target that rejects packets and sends the sender a response indicating the packet was rejected.
- ipip: IP-over-IP tunneling; it creates virtual tunnels between networks to carry IP packets.
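After restarting systemd-modules-load you can confirm that the modules actually loaded, for example:

lsmod | grep -e ip_vs -e nf_conntrack -e br_netfilter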
Tune kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_intvl=15
net.ipv4.tcp_max_tw_buckets=36000
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_max_orphans=327680
net.ipv4.tcp_orphan_retries=3
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_max_syn_backlog=16384
net.ipv4.ip_conntrack_max=65536
net.ipv4.tcp_timestamps=0
net.core.somaxconn=16384

net.ipv6.conf.all.disable_ipv6=0
net.ipv6.conf.default.disable_ipv6=0
net.ipv6.conf.lo.disable_ipv6=0
net.ipv6.conf.all.forwarding=1
EOF
sysctl --system
- net.ipv4.ip_forward=1: allow the server to forward packets, i.e. act as a router.
- vm.overcommit_memory=1: always allow memory overcommit; the kernel keeps granting allocations even when memory is already fully committed.
- vm.panic_on_oom=0: do not panic (crash and reboot) when the system runs out of memory.
- fs.inotify.max_user_watches=89100: upper limit on the number of files a user's inotify instances may watch.
- fs.file-max=52706963: system-wide limit on the number of open files.
- fs.nr_open=52706963: upper limit on the number of file descriptors a single process may open.
- net.netfilter.nf_conntrack_max=2310720: maximum number of connection-tracking table entries.
- net.ipv4.tcp_keepalive_time=600: idle time (seconds) on a TCP socket before the kernel starts sending keepalive probes.
- net.ipv4.tcp_keepalive_probes=3: number of unanswered keepalive probes before the connection is dropped.
- net.ipv4.tcp_keepalive_intvl=15: interval (seconds) between keepalive probes.
- net.ipv4.tcp_max_tw_buckets=36000: maximum number of sockets held in TIME_WAIT.
- net.ipv4.tcp_tw_reuse=1: allow TIME_WAIT sockets to be reused for new connections.
- net.ipv4.tcp_max_orphans=327680: maximum number of orphaned TCP sockets (not attached to any user file handle) the system keeps.
- net.ipv4.tcp_orphan_retries=3: number of retries for orphaned TCP sockets before they are dropped.
- net.ipv4.tcp_syncookies=1: enable TCP SYN cookies to protect against SYN flood attacks.
- net.ipv4.tcp_max_syn_backlog=16384: maximum length of the half-open (SYN) connection queue.
- net.ipv4.ip_conntrack_max=65536: maximum number of connection-tracking entries (legacy IPv4-specific name for the same limit).
- net.ipv4.tcp_timestamps=0: disable TCP timestamps, for slightly better security.
- net.core.somaxconn=16384: maximum length of the kernel's listen (accept) queue.
- net.ipv6.conf.all.disable_ipv6=0: keep IPv6 enabled on all interfaces.
- net.ipv6.conf.default.disable_ipv6=0: keep IPv6 enabled by default on new interfaces.
- net.ipv6.conf.lo.disable_ipv6=0: keep IPv6 enabled on the loopback interface.
- net.ipv6.conf.all.forwarding=1: allow IPv6 packet forwarding.
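A quick spot check that the new values are active:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.core.somaxconn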
Install the container runtime (containerd)
- Download
wget https://github.com/containerd/containerd/releases/download/v2.0.1/containerd-2.0.1-linux-amd64.tar.gz
curl -sSL -O https://github.com/containerd/containerd/releases/download/v2.0.1/containerd-2.0.1-linux-amd64.tar.gz
- Install
tar Cxzvf /usr/local containerd-2.0.1-linux-amd64.tar.gz
- Create the systemd service
cat <<EOF > /lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target dbus.service

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

LimitNPROC=infinity
LimitCORE=infinity

TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
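Since the kubelet configuration later in this guide uses cgroupDriver: systemd, it is usually worth generating containerd's default configuration and switching the runc cgroup driver to systemd as well. The exact TOML section name depends on the containerd config version, so treat this as a sketch:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Set SystemdCgroup = true in the runc runtime options section of
# /etc/containerd/config.toml (the section path differs between containerd 1.x
# and 2.x), then restart containerd after it has been enabled below.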
- Install runc
wget https://github.com/opencontainers/runc/releases/download/v1.2.3/runc.amd64
curl -sSL -O https://github.com/opencontainers/runc/releases/download/v1.2.3/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
- Install the CNI plugins
wget https://github.com/containernetworking/plugins/releases/download/v1.6.1/cni-plugins-linux-amd64-v1.6.1.tgz
curl -sSL -O https://github.com/containernetworking/plugins/releases/download/v1.6.1/cni-plugins-linux-amd64-v1.6.1.tgz
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.1.tgz
- Enable and start containerd
systemctl daemon-reload
systemctl enable --now containerd
Install ipvsadm
apt install ipvsadm ipset sysstat conntrack -y
Control-plane node installation
Using k8s-master01 as the example; the unified apiserver entry point is k8s-apiserver.
Install etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.5.17/etcd-v3.5.17-linux-amd64.tar.gz
curl -sSL -O https://github.com/etcd-io/etcd/releases/download/v3.5.17/etcd-v3.5.17-linux-amd64.tar.gz
tar xf etcd-v3.5.17-linux-amd64.tar.gz
mv etcd-v3.5.17-linux-amd64/etcd* /usr/local/bin/
Install the Kubernetes components
version="v1.32.0"
# wget
wget https://dl.k8s.io/${version}/bin/linux/amd64/kube-apiserver -O /usr/local/bin/kube-apiserver
wget https://dl.k8s.io/${version}/bin/linux/amd64/kube-controller-manager -O /usr/local/bin/kube-controller-manager
wget https://dl.k8s.io/${version}/bin/linux/amd64/kube-scheduler -O /usr/local/bin/kube-scheduler
wget https://dl.k8s.io/${version}/bin/linux/amd64/kube-proxy -O /usr/local/bin/kube-proxy
wget https://dl.k8s.io/${version}/bin/linux/amd64/kubelet -O /usr/local/bin/kubelet
wget https://dl.k8s.io/${version}/bin/linux/amd64/kubectl -O /usr/local/bin/kubectl

# curl
curl -sSL https://dl.k8s.io/${version}/bin/linux/amd64/kube-apiserver -o /usr/local/bin/kube-apiserver
curl -sSL https://dl.k8s.io/${version}/bin/linux/amd64/kube-controller-manager -o /usr/local/bin/kube-controller-manager
curl -sSL https://dl.k8s.io/${version}/bin/linux/amd64/kube-scheduler -o /usr/local/bin/kube-scheduler
curl -sSL https://dl.k8s.io/${version}/bin/linux/amd64/kube-proxy -o /usr/local/bin/kube-proxy
curl -sSL https://dl.k8s.io/${version}/bin/linux/amd64/kubelet -o /usr/local/bin/kubelet
curl -sSL https://dl.k8s.io/${version}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl

chmod +x /usr/local/bin/kube-apiserver
chmod +x /usr/local/bin/kube-controller-manager
chmod +x /usr/local/bin/kube-scheduler
chmod +x /usr/local/bin/kube-proxy
chmod +x /usr/local/bin/kubelet
chmod +x /usr/local/bin/kubectl
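A quick sanity check of the downloaded binaries:

kubelet --version
kubectl version --client
kube-apiserver --version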
Generate the etcd certificates
The etcd certificates are stored in /etc/kubernetes/pki/etcd.
mkdir -p /etc/kubernetes/pki/etcd
cd /etc/kubernetes/pki/etcd
Generate the etcd CA certificate
openssl ecparam -genkey -name prime256v1 -noout -out ca.key
openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=etcd-ca" -addext "basicConstraints=critical,CA:TRUE" -addext "keyUsage=critical,digitalSignature,keyEncipherment,Certificate Sign"
Generate the server certificate, used for the etcd server side.

openssl ecparam -genkey -name prime256v1 -noout -out server.key
openssl req -x509 -new -nodes -key server.key -sha256 -days 3650 -CA ca.crt -CAkey ca.key -out server.crt -subj "/CN=k8s-master01" -addext "extendedKeyUsage=clientAuth,serverAuth" -addext "subjectAltName=DNS:localhost,IP:127.0.0.1,DNS:k8s-master01,IP:192.168.1.31" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile ca.crt server.crt
Generate the peer certificate, used for communication between etcd members.

openssl ecparam -genkey -name prime256v1 -noout -out peer.key
openssl req -x509 -new -nodes -key peer.key -sha256 -days 3650 -CA ca.crt -CAkey ca.key -out peer.crt -subj "/CN=k8s-master01" -addext "extendedKeyUsage=clientAuth,serverAuth" -addext "subjectAltName=DNS:localhost,IP:127.0.0.1,DNS:k8s-master01,IP:192.168.1.31" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile ca.crt peer.crt
Generate the kube-apiserver-etcd-client certificate, used by kube-apiserver to access etcd.

openssl ecparam -genkey -name prime256v1 -noout -out kube-apiserver-etcd-client.key
openssl req -x509 -new -nodes -key kube-apiserver-etcd-client.key -sha256 -days 3650 -CA ca.crt -CAkey ca.key -out kube-apiserver-etcd-client.crt -subj "/CN=kube-apiserver-etcd-client" -addext "extendedKeyUsage=clientAuth" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile ca.crt kube-apiserver-etcd-client.crt
Generate the Kubernetes certificates
The Kubernetes certificates are stored in /etc/kubernetes/pki.

mkdir -p /etc/kubernetes/pki
cd /etc/kubernetes/pki
Generate the Kubernetes CA certificate

openssl ecparam -genkey -name prime256v1 -noout -out ca.key
openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=kubernetes" -addext "basicConstraints=critical,CA:TRUE" -addext "keyUsage=critical,digitalSignature,keyEncipherment,Certificate Sign"
Generate the apiserver certificate, used for the apiserver server side. Include the first IP of the service_cluster_ip_range in the certificate, e.g. 10.10.0.1.

openssl ecparam -genkey -name prime256v1 -noout -out apiserver.key
openssl req -x509 -new -nodes -key apiserver.key -sha256 -days 3650 -out apiserver.crt -CA ca.crt -CAkey ca.key -subj "/CN=kube-apiserver" -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:localhost,DNS:k8s-apiserver,IP:10.10.0.1,IP:127.0.0.1,DNS:k8s-master01,IP:192.168.1.31" -addext "extendedKeyUsage=serverAuth" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile ca.crt apiserver.crt
Generate the apiserver-kubelet-client certificate, used by the apiserver to communicate with kubelets.

openssl ecparam -genkey -name prime256v1 -noout -out apiserver-kubelet-client.key
openssl req -x509 -new -nodes -key apiserver-kubelet-client.key -sha256 -days 3650 -out apiserver-kubelet-client.crt -CA ca.crt -CAkey ca.key -subj "/CN=apiserver-kubelet-client" -addext "extendedKeyUsage=clientAuth" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile ca.crt apiserver-kubelet-client.crt
Generate the kube-controller-manager certificate, used by kube-controller-manager to communicate with the apiserver.

openssl ecparam -genkey -name prime256v1 -noout -out kube-controller-manager.key
openssl req -x509 -new -nodes -key kube-controller-manager.key -sha256 -days 3650 -out kube-controller-manager.crt -CA ca.crt -CAkey ca.key -subj "/CN=system:kube-controller-manager" -addext "extendedKeyUsage=clientAuth" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile ca.crt kube-controller-manager.crt
Generate the kube-scheduler certificate, used by kube-scheduler to communicate with the apiserver.

openssl ecparam -genkey -name prime256v1 -noout -out kube-scheduler.key
openssl req -x509 -new -nodes -key kube-scheduler.key -sha256 -days 3650 -out kube-scheduler.crt -CA ca.crt -CAkey ca.key -subj "/CN=system:kube-scheduler" -addext "extendedKeyUsage=clientAuth" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile ca.crt kube-scheduler.crt
Generate the kube-proxy certificate, used by kube-proxy to communicate with the apiserver.

openssl ecparam -genkey -name prime256v1 -noout -out kube-proxy.key
openssl req -x509 -new -nodes -key kube-proxy.key -sha256 -days 3650 -out kube-proxy.crt -CA ca.crt -CAkey ca.key -subj "/CN=system:kube-proxy" -addext "extendedKeyUsage=clientAuth" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile ca.crt kube-proxy.crt
Generate the admin certificate, used by the cluster administrator to communicate with the apiserver.

openssl ecparam -genkey -name prime256v1 -noout -out admin.key
openssl req -x509 -new -nodes -key admin.key -sha256 -days 3650 -out admin.crt -CA ca.crt -CAkey ca.key -subj "/CN=kubernetes-super-admin/O=system:masters" -addext "extendedKeyUsage=clientAuth" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile ca.crt admin.crt
Generate the front-proxy CA

openssl ecparam -genkey -name prime256v1 -noout -out front-proxy-ca.key
openssl req -x509 -new -nodes -key front-proxy-ca.key -sha256 -days 3650 -out front-proxy-ca.crt -subj "/CN=front-proxy-ca" -addext "subjectAltName=DNS:front-proxy-ca" -addext "basicConstraints=critical,CA:TRUE" -addext "keyUsage=critical,digitalSignature,keyEncipherment,Certificate Sign"
Issue the front-proxy-client certificate

openssl ecparam -genkey -name prime256v1 -noout -out front-proxy-client.key
openssl req -x509 -new -nodes -key front-proxy-client.key -sha256 -days 3650 -out front-proxy-client.crt -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -subj "/CN=front-proxy-client" -addext "extendedKeyUsage=clientAuth" -addext "basicConstraints=critical,CA:FALSE" -addext "keyUsage=critical,digitalSignature,keyEncipherment"

# Verify the certificate
openssl verify -CAfile front-proxy-ca.crt front-proxy-client.crt
Generate the ServiceAccount key pair

openssl ecparam -genkey -name prime256v1 -out sa.key
openssl ec -in sa.key -pubout -out sa.pub
File tree
/etc/kubernetes/
└── pki
    ├── admin.crt
    ├── admin.key
    ├── apiserver.crt
    ├── apiserver.key
    ├── apiserver-kubelet-client.crt
    ├── apiserver-kubelet-client.key
    ├── ca.crt
    ├── ca.key
    ├── etcd
    │   ├── ca.crt
    │   ├── ca.key
    │   ├── kube-apiserver-etcd-client.crt
    │   ├── kube-apiserver-etcd-client.key
    │   ├── peer.crt
    │   ├── peer.key
    │   ├── server.crt
    │   └── server.key
    ├── front-proxy-ca.crt
    ├── front-proxy-ca.key
    ├── front-proxy-client.crt
    ├── front-proxy-client.key
    ├── kube-controller-manager.crt
    ├── kube-controller-manager.key
    ├── kube-proxy.crt
    ├── kube-proxy.key
    ├── kube-scheduler.crt
    ├── kube-scheduler.key
    ├── sa.key
    └── sa.pub
Generate the kubeconfig files
Generate controller-manager.conf
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://k8s-apiserver:6443 \
  --kubeconfig=/etc/kubernetes/controller-manager.conf

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/kube-controller-manager.crt \
  --client-key=/etc/kubernetes/pki/kube-controller-manager.key \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.conf

kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.conf

kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.conf
Generate scheduler.conf
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://k8s-apiserver:6443 \
  --kubeconfig=/etc/kubernetes/scheduler.conf

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/kube-scheduler.crt \
  --client-key=/etc/kubernetes/pki/kube-scheduler.key \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.conf

kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.conf

kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.conf
Generate admin.conf
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://k8s-apiserver:6443 \
  --kubeconfig=/etc/kubernetes/admin.conf

kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin.crt \
  --client-key=/etc/kubernetes/pki/admin.key \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/admin.conf

kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=/etc/kubernetes/admin.conf

kubectl config use-context kubernetes-admin@kubernetes \
  --kubeconfig=/etc/kubernetes/admin.conf
Generate kube-proxy.conf
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://k8s-apiserver:6443 \
  --kubeconfig=/etc/kubernetes/kube-proxy.conf

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.crt \
  --client-key=/etc/kubernetes/pki/kube-proxy.key \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-proxy.conf

kubectl config set-context kube-proxy@kubernetes \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=/etc/kubernetes/kube-proxy.conf

kubectl config use-context kube-proxy@kubernetes \
  --kubeconfig=/etc/kubernetes/kube-proxy.conf
Generate bootstrap-kubelet.conf
kubectl config set-cluster bootstrap \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --server=https://k8s-apiserver:6443 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

kubectl config set-credentials kubelet-bootstrap \
  --token=0e5634.55bf9fc480d29395 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

kubectl config set-context bootstrap \
  --user=kubelet-bootstrap \
  --cluster=bootstrap \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

kubectl config use-context bootstrap \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
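The token 0e5634.55bf9fc480d29395 used above is only an example. You can generate your own token ID and secret, as long as the same values are used later in bootstrapping.yaml:

TOKEN_ID=$(openssl rand -hex 3)       # 6 characters, e.g. 0e5634
TOKEN_SECRET=$(openssl rand -hex 8)   # 16 characters, e.g. 55bf9fc480d29395
echo "${TOKEN_ID}.${TOKEN_SECRET}"    # value for --token and the bootstrap Secret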
File tree
/etc/kubernetes/
├── admin.conf
├── bootstrap-kubelet.conf
├── controller-manager.conf
├── kube-proxy.conf
├── pki
│   ├── admin.crt
│   ├── admin.key
│   ├── apiserver.crt
│   ├── apiserver.key
│   ├── apiserver-kubelet-client.crt
│   ├── apiserver-kubelet-client.key
│   ├── ca.crt
│   ├── ca.key
│   ├── etcd
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── kube-apiserver-etcd-client.crt
│   │   ├── kube-apiserver-etcd-client.key
│   │   ├── peer.crt
│   │   ├── peer.key
│   │   ├── server.crt
│   │   └── server.key
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── kube-controller-manager.crt
│   ├── kube-controller-manager.key
│   ├── kube-proxy.crt
│   ├── kube-proxy.key
│   ├── kube-scheduler.crt
│   ├── kube-scheduler.key
│   ├── sa.key
│   └── sa.pub
└── scheduler.conf
Create the systemd services
Create the etcd service
cat <<EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --advertise-client-urls=https://192.168.1.31:2379 \\
  --cert-file=/etc/kubernetes/pki/etcd/server.crt \\
  --client-cert-auth=true \\
  --data-dir=/var/lib/etcd \\
  --experimental-initial-corrupt-check=true \\
  --experimental-watch-progress-notify-interval=5s \\
  --initial-advertise-peer-urls=https://192.168.1.31:2380 \\
  --initial-cluster=k8s=https://192.168.1.31:2380 \\
  --key-file=/etc/kubernetes/pki/etcd/server.key \\
  --listen-client-urls=https://127.0.0.1:2379,https://192.168.1.31:2379 \\
  --listen-metrics-urls=http://127.0.0.1:2381,http://192.168.1.31:2381 \\
  --listen-peer-urls=https://192.168.1.31:2380 \\
  --name=k8s \\
  --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \\
  --peer-client-cert-auth=true \\
  --peer-key-file=/etc/kubernetes/pki/etcd/peer.key \\
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \\
  --snapshot-count=10000 \\
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
Create the kube-apiserver service
cat <<EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=192.168.1.31 \\
  --allow-privileged=true \\
  --authorization-mode=Node,RBAC \\
  --client-ca-file=/etc/kubernetes/pki/ca.crt \\
  --enable-admission-plugins=NodeRestriction \\
  --enable-bootstrap-token-auth=true \\
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \\
  --etcd-certfile=/etc/kubernetes/pki/etcd/kube-apiserver-etcd-client.crt \\
  --etcd-keyfile=/etc/kubernetes/pki/etcd/kube-apiserver-etcd-client.key \\
  --etcd-servers=https://127.0.0.1:2379 \\
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \\
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \\
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \\
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key \\
  --requestheader-allowed-names=front-proxy-client \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --secure-port=6443 \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
  --service-cluster-ip-range=10.10.0.0/16 \\
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \\
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
Create the kube-controller-manager service
cat <<EOF > /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --allocate-node-cidrs=true \\
  --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf \\
  --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/etc/kubernetes/pki/ca.crt \\
  --cluster-cidr=10.11.0.0/16 \\
  --node-cidr-mask-size-ipv4=24 \\
  --node-cidr-mask-size-ipv6=120 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \\
  --cluster-signing-key-file=/etc/kubernetes/pki/ca.key \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kubeconfig=/etc/kubernetes/controller-manager.conf \\
  --leader-elect=true \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \\
  --root-ca-file=/etc/kubernetes/pki/ca.crt \\
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
  --service-cluster-ip-range=10.10.0.0/16 \\
  --use-service-account-credentials=true
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
Create the kube-scheduler service
cat <<EOF > /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --authentication-kubeconfig=/etc/kubernetes/scheduler.conf \\
  --authorization-kubeconfig=/etc/kubernetes/scheduler.conf \\
  --bind-address=0.0.0.0 \\
  --kubeconfig=/etc/kubernetes/scheduler.conf \\
  --leader-elect=true
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
Create the kubelet service
cat <<EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --config=/etc/kubernetes/kubelet-conf.yaml \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --node-labels=node.kubernetes.io/node=

[Install]
WantedBy=multi-user.target
EOF
Generate the /etc/kubernetes/kubelet-conf.yaml configuration file
cat <<EOF > /etc/kubernetes/kubelet-conf.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.10.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
    text:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
serverTLSBootstrap: true
EOF
Create the kube-proxy service
cat <<EOF > /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --cluster-cidr=10.11.0.0/16 \\
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
Generate /etc/kubernetes/kube-proxy.yaml
cat <<EOF > /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.conf
  qps: 5
clusterCIDR: 10.11.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
Start the control-plane services
systemctl daemon-reload
systemctl enable --now etcd.service
systemctl enable --now kube-apiserver.service
systemctl enable --now kube-controller-manager.service
systemctl enable --now kube-scheduler.service
systemctl enable --now kube-proxy.service
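Once the services are up they can be checked, for example like this (etcdctl comes from the etcd tarball extracted earlier):

etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health

kubectl --kubeconfig=/etc/kubernetes/admin.conf get --raw='/readyz?verbose'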
Set up TLS bootstrapping
bootstrapping.yaml
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-0e5634
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: "0e5634"
  token-secret: "55bf9fc480d29395"
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers:default-node-token
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers:default-node-token
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f bootstrapping.yaml
Start kubelet
systemctl enable --now kubelet.service
Configure the node
Copy admin.conf
mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
Check the kubelet certificate signing requests (CSRs)
kubectl get csr
Approve the pending certificate requests
kubectl certificate approve <name>
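If there are several pending requests, they can also be approved in one go:

kubectl get csr -o name | xargs -r kubectl certificate approve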
Install CoreDNS
Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Add the CoreDNS Helm repository
helm repo add coredns https://coredns.github.io/helm
cat <<EOF > coredns-values.yaml
service:
  clusterIP: "10.10.0.10"
EOF

Install the CoreDNS chart
helm -n kube-system install coredns coredns/coredns -f coredns-values.yaml
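A quick check that the release deployed and that the Service picked up the clusterIP from coredns-values.yaml:

helm -n kube-system status coredns
kubectl -n kube-system get svc,pods -o wide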
Install a network plugin
Flannel
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
curl -sSL -O https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Edit kube-flannel.yml and set Network to the pod CIDR (10.11.0.0/16 in this setup); the sed example below shows one way to do it. Then install Flannel:
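The upstream manifest ships with 10.244.0.0/16 as the default Network; check the value in your downloaded copy before replacing it, for example:

sed -i 's|10.244.0.0/16|10.11.0.0/16|' kube-flannel.yml
grep -n '"Network"' kube-flannel.yml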
kubectl apply -f kube-flannel.yml
Worker node installation
Copy the following files from a control-plane node (example commands after the list):
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/bootstrap-kubelet.conf
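For example, run this on k8s-master01 (the node name is illustrative; /etc/kubernetes/kube-proxy.conf is also needed if kube-proxy will run on the worker, since the kube-proxy.yaml below references it):

ssh root@k8s-node01 "mkdir -p /etc/kubernetes/pki"
scp /etc/kubernetes/pki/ca.crt root@k8s-node01:/etc/kubernetes/pki/
scp /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/kube-proxy.conf root@k8s-node01:/etc/kubernetes/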
Install the Kubernetes components
version="v1.32.0"
# wget
wget https://dl.k8s.io/${version}/bin/linux/amd64/kube-proxy -O /usr/local/bin/kube-proxy
wget https://dl.k8s.io/${version}/bin/linux/amd64/kubelet -O /usr/local/bin/kubelet

# curl
curl -sSL https://dl.k8s.io/${version}/bin/linux/amd64/kube-proxy -o /usr/local/bin/kube-proxy
curl -sSL https://dl.k8s.io/${version}/bin/linux/amd64/kubelet -o /usr/local/bin/kubelet

chmod +x /usr/local/bin/kube-proxy
chmod +x /usr/local/bin/kubelet
Set up the systemd services
Create the kubelet service
cat <<EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --config=/etc/kubernetes/kubelet-conf.yaml \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --node-labels=node.kubernetes.io/node=

[Install]
WantedBy=multi-user.target
EOF
Generate the /etc/kubernetes/kubelet-conf.yaml configuration file
cat <<EOF > /etc/kubernetes/kubelet-conf.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.10.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
    text:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
serverTLSBootstrap: true
EOF
Create the kube-proxy service
cat <<EOF > /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --cluster-cidr=10.11.0.0/16 \\
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
Generate /etc/kubernetes/kube-proxy.yaml
cat <<EOF > /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.conf
  qps: 5
clusterCIDR: 10.11.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
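With the unit files and configuration in place (and containerd already running from the preparation steps), start the worker services; the kubelet registers through the bootstrap kubeconfig, and its CSR still has to be approved from a control-plane node as described above:

systemctl daemon-reload
systemctl enable --now kubelet.service
systemctl enable --now kube-proxy.service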