[ERROR CRI]: container runtime is not running: output: time="2024-02-08T20:28:24+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
Fix: edit /etc/containerd/config.toml (vi /etc/containerd/config.toml) and comment out the line disabled_plugins = ["cri"].
Then restart containerd (systemctl restart containerd) and run the initialization again.
If the second kubeadm init complains that the ports are already in use, simply reset first: kubeadm reset.
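The steps above can be sketched as a short script; the config path is containerd's default, so adjust it if yours differs (sed keeps a .bak backup of the original file):

```shell
# Comment out the line that disables the cri plugin in containerd's config,
# then restart containerd so the CRI v1 runtime service comes up.
CONFIG=/etc/containerd/config.toml
sudo sed -i.bak 's/^disabled_plugins = \["cri"\]/#&/' "$CONFIG"
sudo systemctl restart containerd
# If re-running kubeadm init then complains about ports in use:
# sudo kubeadm reset
```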
timed out waiting for the condition
Although this problem ultimately turned out to be my own silly typo (a single dropped character), I'll still share how I tracked it down.
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
2月 08 22:17:14 k8s-main kubelet[12197]: E0208 22:17:14.105521 12197 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp 34.96.108.209:443: i/o timeout"
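This journalctl line is the real culprit: registry.k8s.io is unreachable from this network, so the pause:3.6 sandbox image cannot be pulled and every pod sandbox creation times out. One common workaround is to point containerd's sandbox image at a mirror; the Aliyun mirror name below is an assumption — substitute any registry you can actually reach:

```toml
# /etc/containerd/config.toml (containerd config v2 layout)
# Override the pause image kubeadm expects with a reachable mirror.
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
```

Restart containerd afterwards (systemctl restart containerd) and re-run kubeadm init.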
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join cluster-endpoint:6443 --token wdvggh.980xtzhyrr2g0iti \
--discovery-token-ca-cert-hash sha256:04e247aff627e00fdee90715ab2df601641e5494cae46d6c03854a28ad2d36e4 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join cluster-endpoint:6443 --token wdvggh.980xtzhyrr2g0iti \
--discovery-token-ca-cert-hash sha256:04e247aff627e00fdee90715ab2df601641e5494cae46d6c03854a28ad2d36e4
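Note that the bootstrap token in the join commands above expires (24 hours by default), so a node joined later needs a fresh one. A minimal sketch, assuming you run it on the control-plane node:

```shell
# Prints a complete "kubeadm join ..." line with a newly minted token
kubeadm token create --print-join-command
```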
# Create an access account: prepare a yaml file, vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
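After applying the ServiceAccount and ClusterRoleBinding, the dashboard still needs a login token. A hedged sketch, assuming Kubernetes v1.24+ where the `kubectl create token` subcommand exists:

```shell
kubectl apply -f dash.yaml
# Issue a short-lived token for admin-user; paste it into the dashboard login page
kubectl -n kubernetes-dashboard create token admin-user
```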
kubectl explain pod.spec.containers.lifecycle.postStart
KIND: Pod
VERSION: v1
FIELD: postStart <LifecycleHandler>
DESCRIPTION:
PostStart is called immediately after a container is created. If the handler
fails, the container is terminated and restarted according to its restart
policy. Other management of the container blocks until the hook completes.
More info:
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
LifecycleHandler defines a specific action that should be taken in a
lifecycle hook. One and only one of the fields, except TCPSocket must be
specified.
FIELDS:
exec <ExecAction>
# run a command inside the container
Exec specifies the action to take.
httpGet <HTTPGetAction>
# send an HTTP GET request
HTTPGet specifies the http request to perform.
tcpSocket <TCPSocketAction>
Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for
the backward compatibility. There are no validation of this field and
lifecycle hooks will fail in runtime when tcp handler is specified.