乱读天书, 不求甚解 — Zhou Yijun's (周祎骏) personal cloud notes
Kubernetes 01.02 Manually installing a highly available K8S cluster
2023-05-17 02:22:14
> Three of the nodes act as masters.

# Architecture

- k8s-master-1: etcd, api-server
- k8s-master-2: etcd, api-server
- k8s-master-3: etcd, api-server
- k8s-node-1
- k8s-node-2

## Preparation

```
apt update
mkdir /k8s
useradd -m -s /bin/bash k8s
chown k8s:k8s /k8s/
```

# The certificate system

K8S needs several sets of certificates, each used in a different context.

References:
https://kubernetes.io/docs/setup/best-practices/certificates/
https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl
https://github.com/kubernetes/kubeadm/blob/main/docs/design/design_v1.10.md

## A set of certificates consists of the following files:

1. CA private key (one key file, ca.key)
2. CA certificate (generated from ca.key, ca.crt)
3. Server private key (one per server, xxx.key)
4. Certificate signing request (generated from xxx.key, one per server, xxx.csr)
5. Server certificate (generated from ca.key + xxx.csr, one per server)

## Which scenarios need certificates:

**etcd**

1. etcd serving clients
2. etcd peer-to-peer communication
3. api-server talking to etcd as a client

**api-server**

1. api-server serving clients
2. kube-scheduler / kube-controller-manager / kube-proxy / kubelet talking to api-server as clients
3. api-server talking to kubelet as a client

**kubelet**

1. kubelet serving requests

**service account**

1. kube-controller-manager signing service account tokens
2. api-server verifying service account tokens (one private/public key pair)

## Generating the certificates the way kubeadm does:

1. Create the certificate directory and other prep work

```
mkdir -p /etc/kubernetes/pki/etcd
cat << EOF > ~/ssl.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ req_ext ]
subjectAltName = @alt_names
[ v3_req ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = localhost
DNS.7 = k8s-master-1
DNS.8 = k8s-master-2
DNS.9 = k8s-master-3
IP.1 = 169.169.0.1
IP.2 = 127.0.0.1
IP.3 = 172.16.1.174
IP.4 = 172.16.1.172
IP.5 = 172.16.1.171
EOF
```

2. Create the etcd root certificate plus the server and client certificates

```
# Create the root certificate
cd /etc/kubernetes/pki/etcd/
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ./ca.key -subj "/CN=etcd-ca" \
  -days 36500 -out ./ca.crt

# Create the etcd server certificate
openssl genrsa -out server.key 2048
openssl req -new -key ./server.key -config ~/ssl.conf -subj "/CN=kube-etcd" -out server.csr
openssl x509 -req -in ./server.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out server.crt

# Create the etcd peer certificate
openssl genrsa -out peer.key 2048
openssl req -new -key ./peer.key -config ~/ssl.conf -subj "/CN=kube-etcd-peer" -out peer.csr
openssl x509 -req -in ./peer.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out peer.crt

# Create the healthcheck-client certificate
openssl genrsa -out healthcheck-client.key 2048
openssl req -new -key ./healthcheck-client.key -config ~/ssl.conf -subj "/CN=kube-etcd-healthcheck-client" -out healthcheck-client.csr
openssl x509 -req -in ./healthcheck-client.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out healthcheck-client.crt

# Create the etcd client certificate used by api-server
cd ../
openssl genrsa -out apiserver-etcd-client.key 2048
openssl req -new -key ./apiserver-etcd-client.key -config ~/ssl.conf -subj "/CN=kube-apiserver-etcd-client/O=system:masters" -out apiserver-etcd-client.csr
openssl x509 -req -in ./apiserver-etcd-client.csr -CA ./etcd/ca.crt -CAkey ./etcd/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out apiserver-etcd-client.crt
```

3. Create the api-server certificates

```
# Create the root certificate
cd /etc/kubernetes/pki/
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ./ca.key -subj "/CN=kubernetes-ca" -days 36500 -out ./ca.crt

# Create the API server certificate
openssl genrsa -out apiserver.key 2048
openssl req -new -key ./apiserver.key -config ~/ssl.conf -subj "/CN=kube-apiserver" -out apiserver.csr
openssl x509 -req -in ./apiserver.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out apiserver.crt

# Create the client certificate api-server uses to reach kubelet
openssl genrsa -out apiserver-kubelet-client.key 2048
openssl req -new \
  -key ./apiserver-kubelet-client.key -config ~/ssl.conf -subj "/CN=kube-apiserver-kubelet-client/O=system:masters" -out apiserver-kubelet-client.csr
openssl x509 -req -in ./apiserver-kubelet-client.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out apiserver-kubelet-client.crt
```

4. Create the front proxy certificates

```
# Create the root certificate
cd /etc/kubernetes/pki/
openssl genrsa -out front-proxy-ca.key 2048
openssl req -x509 -new -nodes -key ./front-proxy-ca.key -subj "/CN=kubernetes-front-proxy-ca" -days 36500 -out ./front-proxy-ca.crt

# Create the front-proxy-client certificate
openssl genrsa -out front-proxy-client.key 2048
openssl req -new -key ./front-proxy-client.key -config ~/ssl.conf -subj "/CN=front-proxy-client" -out ./front-proxy-client.csr
openssl x509 -req -in ./front-proxy-client.csr -CA ./front-proxy-ca.crt -CAkey ./front-proxy-ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out front-proxy-client.crt
```

5. Inspect the .crt certificates

```
# Inspect a certificate file
openssl x509 -noout -text -in ./xxx.crt

# Check the server-side CA certificate
curl https://k8s-master-1:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication \
  --cacert ~/certs/api-server/api-server_ca.crt

# Check server-side CA + client certificates together
curl https://k8s-master-1:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication \
  --cacert ~/certs/api-server/api-server_ca.crt \
  --cert ~/certs/api-client/api-client.crt \
  --key ~/certs/api-client/api-client.key
```

6. Create the service account key pair

```
cd /etc/kubernetes/pki/
openssl genrsa -out ./sa.key 2048
openssl rsa -in ./sa.key -pubout -out ./sa.pub
```

7. Create the certificates used by the kubeconfigs

```
cd /etc/kubernetes/
openssl genrsa -out kubeconfig-admin.key 2048
openssl req -new -key ./kubeconfig-admin.key -config ~/ssl.conf -subj "/CN=kubernetes-admin/O=system:masters" -out ./kubeconfig-admin.csr
openssl x509 -req -in ./kubeconfig-admin.csr -CA ./pki/ca.crt -CAkey ./pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out \
  ./kubeconfig-admin.crt

openssl genrsa -out kubeconfig-controller-manager.key 2048
openssl req -new -key ./kubeconfig-controller-manager.key -config ~/ssl.conf -subj "/CN=system:kube-controller-manager" -out ./kubeconfig-controller-manager.csr
openssl x509 -req -in ./kubeconfig-controller-manager.csr -CA ./pki/ca.crt -CAkey ./pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out ./kubeconfig-controller-manager.crt

openssl genrsa -out kubeconfig-scheduler.key 2048
openssl req -new -key ./kubeconfig-scheduler.key -config ~/ssl.conf -subj "/CN=system:kube-scheduler" -out ./kubeconfig-scheduler.csr
openssl x509 -req -in ./kubeconfig-scheduler.csr -CA ./pki/ca.crt -CAkey ./pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out ./kubeconfig-scheduler.crt

# run for every node
export NodeName=k8s-node-1
openssl genrsa -out kubeconfig-kubelet-$NodeName.key 2048
openssl req -new -key ./kubeconfig-kubelet-$NodeName.key -config ~/ssl.conf -subj "/CN=system:node:$NodeName/O=system:nodes" -out ./kubeconfig-kubelet-$NodeName.csr
openssl x509 -req -in ./kubeconfig-kubelet-$NodeName.csr -CA ./pki/ca.crt -CAkey ./pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out ./kubeconfig-kubelet-$NodeName.crt

openssl genrsa -out kubeconfig-proxy.key 2048
openssl req -new -key ./kubeconfig-proxy.key -config ~/ssl.conf -subj "/CN=system:kube-proxy/O=system:node-proxier" -out ./kubeconfig-proxy.csr
openssl x509 -req -in ./kubeconfig-proxy.csr -CA ./pki/ca.crt -CAkey ./pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile ~/ssl.conf -out ./kubeconfig-proxy.crt
```

8. Permissions

```
chown -R k8s:k8s /etc/kubernetes/
```

# Install etcd on the masters

run as k8s

```
wget https://github.com/etcd-io/etcd/releases/download/v3.4.24/etcd-v3.4.24-linux-amd64.tar.gz
tar -xzvf etcd-v3.4.24-linux-amd64.tar.gz
mv etcd-v3.4.24-linux-amd64 etcd
cd etcd/
mkdir -p data logs
cat <<EOF > ./etcd.conf
ETCD_NAME=etcd-1
ETCD_DATA_DIR=/k8s/etcd/data
ETCD_CERT_FILE=/etc/kubernetes/pki/etcd/server.crt
ETCD_KEY_FILE=/etc/kubernetes/pki/etcd/server.key
ETCD_TRUSTED_CA_FILE=/etc/kubernetes/pki/etcd/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_PEER_CERT_FILE=/etc/kubernetes/pki/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/kubernetes/pki/etcd/peer.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/pki/etcd/ca.crt
ETCD_PEER_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://172.16.1.174:2379
ETCD_ADVERTISE_CLIENT_URLS=https://172.16.1.174:2379
ETCD_LISTEN_PEER_URLS=https://172.16.1.174:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://172.16.1.174:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd-1=https://k8s-master-1:2380,etcd-2=https://k8s-master-2:2380,etcd-3=https://k8s-master-3:2380"
# new: create a new cluster if none exists; existing: fail to start if no cluster exists
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LOG_OUTPUTS=stdout
EOF
```

Repeat on each master, adjusting ETCD_NAME and the listen/advertise URLs to that node's own name and IP.

run as root

```
cat << EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd system
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
User=k8s
EnvironmentFile=/k8s/etcd/etcd.conf
ExecStart=/k8s/etcd/etcd
StandardOutput=file:/k8s/etcd/logs/stdout.log
StandardError=file:/k8s/etcd/logs/stderr.log
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl restart etcd && systemctl enable etcd
```

Check that the installation succeeded:

```
/k8s/etcd/etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key=/etc/kubernetes/pki/apiserver-etcd-client.key \
  --endpoints=https://k8s-master-1:2379,https://k8s-master-2:2379,https://k8s-master-3:2379 \
  endpoint health
# Newer versions log a warning here; harmless, fix pending: https://github.com/etcd-io/etcd/issues/12713
```

# Install the api-server on the masters

run as k8s

```
wget https://dl.k8s.io/v1.27.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp /k8s/kubernetes/server/bin/kubectl /bin/
cd kubernetes/server
mkdir logs
cat << 'EOF' > ./api-server.conf
KUBE_API_ARGS=" \
  --secure-port=6443 \
  --authorization-mode=Node,RBAC \
  --allow-privileged=true \
  --enable-admission-plugins=NodeRestriction \
  --enable-bootstrap-token-auth=true \
  --etcd-servers=https://k8s-master-1:2379,https://k8s-master-2:2379,https://k8s-master-3:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
  --service-cluster-ip-range=169.169.0.0/16 \
  "
EOF
```

run as root

```
cat << 'EOF' > /usr/lib/systemd/system/k8s-apiserver.service
[Unit]
Description=k8s API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=k8s
EnvironmentFile=/k8s/kubernetes/server/api-server.conf
ExecStart=/k8s/kubernetes/server/bin/kube-apiserver $KUBE_API_ARGS
StandardOutput=file:/k8s/kubernetes/server/logs/api-server.stdout
StandardError=file:/k8s/kubernetes/server/logs/api-server.stderr
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl restart k8s-apiserver && systemctl enable \
  k8s-apiserver
cp /k8s/kubernetes/server/bin/kubectl /usr/bin/
```

# Create the kubeconfigs (needed by all the following services)

```
cd /k8s/kubernetes/server/bin

export KUBECONFIG=/etc/kubernetes/admin.conf
./kubectl config set-cluster default-cluster --server=https://k8s-master-1:6443 --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs
./kubectl config set-credentials default-admin --client-key /etc/kubernetes/kubeconfig-admin.key --client-certificate /etc/kubernetes/kubeconfig-admin.crt --embed-certs
./kubectl config set-context default-system --cluster default-cluster --user default-admin
./kubectl config use-context default-system

export KUBECONFIG=/etc/kubernetes/controller-manager.conf
./kubectl config set-cluster default-cluster --server=https://k8s-master-1:6443 --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs
./kubectl config set-credentials default-controller-manager --client-key /etc/kubernetes/kubeconfig-controller-manager.key --client-certificate /etc/kubernetes/kubeconfig-controller-manager.crt --embed-certs
./kubectl config set-context default-system --cluster default-cluster --user default-controller-manager
./kubectl config use-context default-system

export KUBECONFIG=/etc/kubernetes/scheduler.conf
./kubectl config set-cluster default-cluster --server=https://k8s-master-1:6443 --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs
./kubectl config set-credentials default-scheduler --client-key /etc/kubernetes/kubeconfig-scheduler.key --client-certificate /etc/kubernetes/kubeconfig-scheduler.crt --embed-certs
./kubectl config set-context default-system --cluster default-cluster --user default-scheduler
./kubectl config use-context default-system

# run for every node
export NodeName=k8s-node-1
export KUBECONFIG=/etc/kubernetes/kubelet-$NodeName.conf
./kubectl config set-cluster default-cluster --server=https://k8s-master-1:6443 --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs
./kubectl config set-credentials default-auth \
  --client-key /etc/kubernetes/kubeconfig-kubelet-$NodeName.key --client-certificate /etc/kubernetes/kubeconfig-kubelet-$NodeName.crt --embed-certs
./kubectl config set-context default-system --cluster default-cluster --user default-auth
./kubectl config use-context default-system

export KUBECONFIG=/etc/kubernetes/kubeconfig-proxy.conf
./kubectl config set-cluster default-cluster --server=https://k8s-master-1:6443 --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs
./kubectl config set-credentials kube-proxy --client-key /etc/kubernetes/kubeconfig-proxy.key --client-certificate /etc/kubernetes/kubeconfig-proxy.crt --embed-certs
./kubectl config set-context default-system --cluster default-cluster --user kube-proxy
./kubectl config use-context default-system
```

# Install kube-controller-manager

run as k8s

```
cd /k8s/kubernetes/server
cat << 'EOF' > ./controller-manager.conf
KUBE_CONTROLLER_MANAGER_ARGS="\
  --allocate-node-cidrs=true \
  --leader-elect=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf \
  --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --root-ca-file=/etc/kubernetes/pki/ca.crt \
  --cluster-cidr=192.168.0.0/16 \
  --cluster-name=kubernetes \
  --service-cluster-ip-range=169.169.0.0/16 \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca.key \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
  --use-service-account-credentials=true \
  "
EOF
```

run as root

```
cat << 'EOF' > /usr/lib/systemd/system/k8s-controller-manager.service
[Unit]
Description=k8s Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=k8s
EnvironmentFile=/k8s/kubernetes/server/controller-manager.conf
ExecStart=/k8s/kubernetes/server/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
StandardOutput=file:/k8s/kubernetes/server/logs/controller-manager.stdout
StandardError=file:/k8s/kubernetes/server/logs/controller-manager.stderr
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl restart k8s-controller-manager && systemctl enable k8s-controller-manager
```

# Install kube-scheduler

run as k8s

```
cd /k8s/kubernetes/server
cat << 'EOF' > ./scheduler.conf
KUBE_SCHEDULER_ARGS="\
  --authentication-kubeconfig=/etc/kubernetes/scheduler.conf \
  --authorization-kubeconfig=/etc/kubernetes/scheduler.conf \
  --kubeconfig=/etc/kubernetes/scheduler.conf \
  --leader-elect=true \
  "
EOF
```

run as root

```
cat << 'EOF' > /usr/lib/systemd/system/k8s-scheduler.service
[Unit]
Description=k8s Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=k8s
EnvironmentFile=/k8s/kubernetes/server/scheduler.conf
ExecStart=/k8s/kubernetes/server/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
StandardOutput=file:/k8s/kubernetes/server/logs/scheduler.stdout
StandardError=file:/k8s/kubernetes/server/logs/scheduler.stderr
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl restart k8s-scheduler && systemctl enable k8s-scheduler
```

# Install kubelet on the nodes

run as root

```
apt install -y containerd
mkdir /etc/containerd
containerd config default | sed 's|sandbox_image = "registry.k8s.io/pause|sandbox_image = "registry.aliyuncs.com/google_containers/pause|g' > /etc/containerd/config.toml
sed 's/SystemdCgroup = false/SystemdCgroup = true/g' -i /etc/containerd/config.toml
echo "runtime-endpoint: unix:///run/containerd/containerd.sock" >> /etc/crictl.yaml
systemctl daemon-reload
systemctl restart containerd
modprobe br_netfilter && echo br_netfilter >> /etc/modules
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf && sysctl -p
```

run as k8s

```
wget https://dl.k8s.io/v1.27.0/kubernetes-node-linux-amd64.tar.gz
tar -xzvf kubernetes-node-linux-amd64.tar.gz
```

run as root

```
cp /k8s/kubernetes/node/bin/kubelet /k8s/kubernetes/node/bin/kube-proxy /usr/bin/
mkdir -p /var/lib/kubelet/ /etc/kubernetes/manifests
cat << 'EOF' > /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 169.169.0.100
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
EOF
cat << 'EOF' > /var/lib/kubelet/kubelet.env
KUBELET_ARGS="\
  --kubeconfig=/etc/kubernetes/kubelet-k8s-node-1.conf \
  --config=/var/lib/kubelet/config.yaml \
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 \
  "
EOF
cat << 'EOF' > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
EnvironmentFile=/var/lib/kubelet/kubelet.env
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
StandardOutput=file:/k8s/kubelet.stdout
StandardError=file:/k8s/kubelet.stderr
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl restart kubelet && systemctl enable kubelet
```

# Install kube-proxy on the nodes

run as root

```
mkdir \
  /var/lib/kube-proxy/
cat << 'EOF' > /var/lib/kube-proxy/config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /etc/kubernetes/kubeconfig-proxy.conf
  qps: 0
clusterCIDR: 192.168.0.0/16
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocal:
  bridgeInterface: ""
  interfaceNamePrefix: ""
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  localhostNodePorts: null
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: ""
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
winkernel:
  enableDSR: false
  forwardHealthCheckVip: false
  networkName: ""
  rootHnsEndpointName: ""
  sourceVip: ""
EOF
cat << 'EOF' > /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=kube-proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/usr/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
StandardOutput=file:/k8s/proxy.stdout
StandardError=file:/k8s/proxy.stderr
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl restart kube-proxy && systemctl enable kube-proxy
```

# Install the Calico network plugin (operator)

```
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
#kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml -o custom-resources.yaml
sed 's/encapsulation: VXLANCrossSubnet/encapsulation: IPIP/g' -i custom-resources.yaml
kubectl create -f \
  custom-resources.yaml
```

Check: the nodes should turn Ready.

```
kubectl get nodes
```

# Install coreDNS

```
git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes/
./deploy.sh -r 169.169.0.0/16 -i 169.169.0.100 -s | sed 's/\(# replicas: not specified here:.*\)/\1\n replicas: 2/g' | kubectl apply -f -
```

# Install nodelocaldns

https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/

```
export kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`
export domain=cluster.local
export localdns=169.254.20.10
cd /k8s/kubernetes/ && mkdir src
cp kubernetes-src.tar.gz ./src
cd src && tar -xzvf kubernetes-src.tar.gz
cd cluster/addons/dns/nodelocaldns
cp nodelocaldns.yaml nodelocaldns.yaml.bk
# If kube-proxy runs in iptables mode
# (curl localhost:10249/proxyMode on a node returns "iptables")
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
# If you are in mainland China, try a mirror registry
sed -i 's|registry.k8s.io/dns/k8s-dns-node-cache:1.22.20|rancher/k8s-dns-node-cache:1.15.7|g' nodelocaldns.yaml
kubectl apply -f ./nodelocaldns.yaml
```
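Every certificate set in this guide follows the same five-file pattern (CA key → CA cert → server key → CSR → signed cert). As an aside, here is that flow as a minimal, self-contained sketch in a throwaway directory — `demo-ca` and `demo-server` are placeholder CNs, not the cluster's real ones, and no SAN config file is used:

```shell
# Scratch directory so nothing under /etc/kubernetes is touched
CERTDIR=$(mktemp -d)
cd "$CERTDIR"

# 1. Root CA: private key + self-signed certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=demo-ca" -days 365 -out ca.crt

# 2. Server: private key + certificate signing request
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=demo-server" -out server.csr

# 3. CA signs the CSR, producing the server certificate
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out server.crt

# 4. Verify the chain: server.crt must validate against ca.crt
openssl verify -CAfile ca.crt server.crt
```

If the chain is intact, the last command reports the certificate as OK; this is the same check the api-server and etcd effectively perform on every TLS handshake.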
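Two of the certificate steps above are marked "run for every node" and rely on re-exporting NodeName by hand; they can be wrapped in a loop instead. A sketch with a throwaway CA and the two node names from the architecture section — in the real cluster you would sign with /etc/kubernetes/pki/ca.key and pass `-config ~/ssl.conf` as shown earlier:

```shell
# Scratch directory with a throwaway CA standing in for /etc/kubernetes/pki/ca.{key,crt}
NODEDIR=$(mktemp -d)
cd "$NODEDIR"
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=demo-kubernetes-ca" -days 365 -out ca.crt

# One key/CSR/cert per node, with the system:node:<name> subject the kubelet needs
for NodeName in k8s-node-1 k8s-node-2; do
  openssl genrsa -out kubeconfig-kubelet-$NodeName.key 2048
  openssl req -new -key kubeconfig-kubelet-$NodeName.key \
    -subj "/CN=system:node:$NodeName/O=system:nodes" \
    -out kubeconfig-kubelet-$NodeName.csr
  openssl x509 -req -in kubeconfig-kubelet-$NodeName.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
    -out kubeconfig-kubelet-$NodeName.crt
done
ls kubeconfig-kubelet-*.crt
```

The matching `kubectl config set-cluster/set-credentials/set-context` calls could be placed in the same loop body, one KUBECONFIG per node.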