Installing kubernetes (k8s) with kubeadm
(2022-03-19 16:18:27)
I. Environment setup - VMware
"exec-opts":
["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf
$HOME/.kube/config
sudo chown $(id -u):$(id -g)
$HOME/.kube/config
export
KUBECONFIG=/etc/kubernetes/admin.conf
https://kubernetes.io/docs/concepts/cluster-administration/addons/
1. master - 2 CPU - 3 GB RAM - IP 192.168.23.39
2. node1 - 2 CPU - 2 GB RAM - IP 192.168.23.40
3. node2 - 2 CPU - 2 GB RAM - IP 192.168.23.41
II. Install kubernetes (k8s)
1. Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
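To confirm the binary is on PATH and executable, a quick client-only check (no cluster needed yet):
kubectl version --client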
2. Disable the firewall - or open the required ports on each machine
systemctl stop firewalld && systemctl disable firewalld
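If you prefer to keep firewalld running instead of disabling it, a sketch of opening the ports kubeadm and the kubelet use (port list taken from the Kubernetes documentation; adjust to your setup):
# control-plane node
firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, scheduler, controller-manager
# worker nodes
firewall-cmd --permanent --add-port=10250/tcp       # kubelet
firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services
firewall-cmd --reload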
3. Disable swap
swapoff -a
vim /etc/fstab - comment out the swap entry (the last line)
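Alternatively, the swap entry can be commented out non-interactively; this sed is a sketch, so double-check /etc/fstab afterwards:
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out every line that mentions swap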
4. Add /etc/hosts entries - map the node hostnames to their IPs in advance
echo "192.168.23.39 master" >> /etc/hosts
echo "192.168.23.40 node1" >> /etc/hosts
echo "192.168.23.42 node2" >> /etc/hosts
5. Set bridge parameters - let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
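The modules-load.d file only takes effect on the next boot; to load the module right away and confirm the sysctls applied:
sudo modprobe br_netfilter
lsmod | grep br_netfilter                    # module is loaded
sysctl net.bridge.bridge-nf-call-iptables    # should print 1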
6. If Docker is not installed - install Docker
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
systemctl enable docker && systemctl daemon-reload && systemctl start docker
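kubeadm expects the kubelet and the container runtime to use the same cgroup driver, which is why daemon.json sets native.cgroupdriver=systemd above; a quick check that Docker picked it up:
docker info | grep -i 'cgroup driver'   # should show: Cgroup Driver: systemd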
7. Install kubeadm, kubelet, and kubectl
#cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
#[kubernetes]
#name=Kubernetes
#baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
#enabled=1
#gpgcheck=1
#repo_gpgcheck=1
#gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
#exclude=kubelet kubeadm kubectl
#EOF
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
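A quick sanity check that all three components installed and report the same minor version (here v1.23.x, matching the images pulled below):
kubeadm version
kubelet --version
kubectl version --client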
8. Initialize with kubeadm - run only on the master
kubeadm init pulls its images from k8s.gcr.io by default; replace the image repository with the Aliyun mirror and pre-pull the images first:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
kubeadm init --image-repository registry.aliyuncs.com/google_containers
Or, adding the pod network CIDR that flannel's default manifest expects:
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr 10.244.0.0/16
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [km kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.23.39]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [km localhost] and IPs [192.168.23.39 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [km localhost] and IPs [192.168.23.39 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.004215 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node km as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node km as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: uls8na.09040heqbwbk7e7u
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.23.39:6443 --token uls8na.09040heqbwbk7e7u \
  --discovery-token-ca-cert-hash sha256:e0a2baba820581f76434dfd5b68011ce2ed2e644bb50dd73cd84bdeca00bce52
On the master, configure kubectl for the current user (clearing any stale config first):
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
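With the kubeconfig in place, kubectl can reach the cluster; the master usually reports NotReady until the pod network add-on from the next step is installed:
kubectl get nodes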
9. Install a Pod network add-on
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
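To confirm the add-on came up, watch for the flannel and CoreDNS pods to reach Running and for the nodes to turn Ready (pod names vary per cluster):
kubectl get pods --all-namespaces | grep -E 'flannel|coredns'
kubectl get nodes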
10. Join the worker nodes
kubeadm join 192.168.23.39:6443 --token uls8na.09040heqbwbk7e7u \
  --discovery-token-ca-cert-hash sha256:e0a2baba820581f76434dfd5b68011ce2ed2e644bb50dd73cd84bdeca00bce52
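Bootstrap tokens expire after 24 hours by default; if the token above no longer works, a fresh join command can be printed on the master:
kubeadm token create --print-join-command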
11. Basic commands
kubectl get nodes
kubectl get pods --all-namespaces
kubectl logs podName -n NameSpaceName
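A few more commands that help when a pod or node misbehaves (the names below are placeholders):
kubectl get pods --all-namespaces -o wide       # which node each pod was scheduled on
kubectl describe node node1                     # node conditions, taints, allocated resources
kubectl describe pod podName -n NameSpaceName   # events for a stuck pod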
12. Miscellaneous
# Time synchronization
yum install -y ntpdate
ntpdate time.windows.com
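ntpdate performs a one-off sync; for continuous timekeeping an NTP daemon such as chrony can be enabled instead (a sketch):
yum install -y chrony
systemctl enable --now chronyd
chronyc tracking   # check sync status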