How to Use a Local Cluster with Skaffold When Kubernetes Is Deployed with Kubeadm


Problem Description

The asker wants to deploy a Node.js application to a local Kubernetes cluster with Skaffold, but running skaffold fails with the following output:

DEBU[0018] Pod "expiration-depl-7989dc5ff4-lkpvw" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] marking resource failed due to error code STATUSCHECK_IMAGE_PULL_ERR  subtask=-1 task=Deploy
 - deployment/expiration-depl: container expiration is waiting to start: learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e can't be pulled
    - pod/expiration-depl-7989dc5ff4-lkpvw: container expiration is waiting to start: learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e can't be pulled
 - deployment/expiration-depl failed. Error: container expiration is waiting to start: learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e can't be pulled.
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] pod statuses could not be fetched this time due to following errors occurred context canceled  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] pod statuses could not be fetched this time due to following errors occurred context canceled  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] setting skaffold deploy status to STATUSCHECK_IMAGE_PULL_ERR.  subtask=-1 task=Deploy
Cleaning up...
DEBU[0018] Running command: [kubectl --context kubernetes-admin@kubernetes delete --ignore-not-found=true --wait=false -f -]  subtask=-1 task=DevLoop
 - deployment.apps "auth-depl" deleted
 - service "auth-srv" deleted
 - deployment.apps "auth-mongo-depl" deleted
 - service "auth-mongo-srv" deleted
 - deployment.apps "client-depl" deleted
 - service "client-srv" deleted
 - deployment.apps "expiration-depl" deleted
 - deployment.apps "expiration-redis-depl" deleted
 - service "expiration-redis-srv" deleted
 - ingress.networking.k8s.io "ingress-service" deleted
 - deployment.apps "nats-depl" deleted
 - service "nats-srv" deleted
 - deployment.apps "orders-depl" deleted
 - service "orders-srv" deleted
 - deployment.apps "orders-mongo-depl" deleted
 - service "orders-mongo-srv" deleted
 - deployment.apps "payments-depl" deleted
 - service "payments-srv" deleted
 - deployment.apps "payments-mongo-depl" deleted
 - service "payments-mongo-srv" deleted
 - deployment.apps "tickets-depl" deleted
 - service "tickets-srv" deleted
 - deployment.apps "tickets-mongo-depl" deleted
 - service "tickets-mongo-srv" deleted
INFO[0054] Cleanup completed in 35.7 seconds             subtask=-1 task=DevLoop
DEBU[0054] Running command: [tput colors]                subtask=-1 task=DevLoop
DEBU[0054] Command output: [256]                         subtask=-1 task=DevLoop
1/12 deployment(s) failed

Here is the content of the expiration-depl.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: expiration-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: expiration
  template:
    metadata:
      labels:
        app: expiration
    spec:
      containers:
        - name: expiration
          image: learnertester/expiration
          env:
            - name: NATS_CLIENT_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NATS_URL
              value: 'http://nats-srv:4222'
            - name: NATS_CLUSTER_ID
              value: ticketing
            - name: REDIS_HOST
              value: expiration-redis-srv

Here is the content of the expiration-redis-depl.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: expiration-redis-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: expiration-redis
  template:
    metadata:
      labels:
        app: expiration-redis
    spec:
      containers:
        - name: expiration-redis
          image: redis
---
apiVersion: v1
kind: Service
metadata:
  name: expiration-redis-srv
spec:
  selector:
    app: expiration-redis
  ports:
    - name: db
      protocol: TCP
      port: 6379
      targetPort: 6379

Information

  • Skaffold version: v2.0.3
  • Operating system: Ubuntu 22.04 LTS
  • Installation method: Snap
  • Contents of skaffold.yaml:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: learnertester/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/ticketing-client
      context: client
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: '**/*.js'
            dest: .
    - image: learnertester/tickets
      context: tickets
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/orders
      context: orders
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/expiration
      context: expiration
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/payments
      context: payments
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .

Solutions

Note: the steps below may differ between versions; back up your configuration before making changes.

Solution 1

Based on the information provided, a container image cannot be pulled. This is usually caused by a network problem or by restricted access to the image registry. Here are some possible fixes (a few of the commands are sketched after this list):
1. Check network connectivity: make sure the machine can reach the internet and has no connectivity problems.
2. Check registry access: if you are using a private registry, make sure you have valid credentials. You can run docker pull manually to verify that the image is reachable.
3. Check the image name and tag: make sure the image name and tag referenced in expiration-depl.yaml are correct, and try pulling exactly that name and tag with docker pull.
4. Check the Skaffold configuration: verify skaffold.yaml, making sure the build context and Dockerfile path are correct and that the image name matches the one used in expiration-depl.yaml.
5. Check the cluster status: make sure the Kubernetes cluster itself is healthy.
6. Check the node status: make sure all nodes are running normally and are able to pull images.
7. Check Kubernetes events: run kubectl get events and look for errors or warnings related to image pulls.
8. Check the container logs: run kubectl logs against the affected workload and look for pull-related errors or warnings.
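
For steps 2, 3, 7, and 8, a minimal command sketch could look like the following. The image tag is copied from the Skaffold output above and the kube-context from the cleanup log; both are assumptions to adjust for your environment. Also note that because skaffold.yaml sets push: false, the hash-tagged image only exists in the local Docker daemon, so a manual pull of that exact tag from Docker Hub is expected to fail unless it was actually pushed.

# Try pulling the exact image reference that the pod is failing on
docker pull learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e

# Inspect why the pod cannot start, and look for pull-related events
kubectl --context kubernetes-admin@kubernetes describe pod -l app=expiration
kubectl --context kubernetes-admin@kubernetes get events --sort-by=.lastTimestamp

# Container logs are only available once the image has been pulled and the container has started
kubectl --context kubernetes-admin@kubernetes logs deployment/expiration-depl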

If none of the above resolves the issue, further investigation and troubleshooting will be needed. You can open an issue in the Skaffold GitHub repository or ask for help in the Kubernetes community forums.

Solution 2

When using a local cluster, make sure the Kubernetes cluster is configured correctly and that all required components and add-ons are installed. If the cluster was deployed with Kubeadm, you can check and fix problems along these lines (the corresponding commands are sketched after this list):
1. Check the Kubeadm configuration: make sure all required parameters are set correctly. The current configuration is stored in the kubeadm-config ConfigMap in the kube-system namespace and can be read with kubectl (the older kubeadm config view command has been removed from recent Kubernetes releases).
2. Check the Kubernetes version: make sure it is compatible with Skaffold. You can check the current version with kubectl version.
3. Check the node status: run kubectl get nodes and make sure every node is Ready and working normally.
4. Check the core components: run kubectl get pods -n kube-system and make sure all control-plane components are Running.
5. Check the network configuration: make sure the required network add-on is installed and configured correctly; its pods are also visible in kubectl get pods -n kube-system.
6. Check the storage configuration: if the Skaffold-deployed manifests use persistent volumes, make sure the storage add-on is installed and configured correctly, and check the volumes and claims with kubectl get pv and kubectl get pvc.
7. Check Kubernetes events and logs: use kubectl get events to look for errors or warnings related to Skaffold and Kubeadm, and kubectl logs to inspect the relevant containers.
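
A rough command sequence for these checks, assuming the kubernetes-admin@kubernetes context from the question is the active one:

# Versions and node health
kubectl version
kubectl get nodes -o wide

# Core components and network add-on pods
kubectl get pods -n kube-system

# Current kubeadm configuration (stored in the kubeadm-config ConfigMap)
kubectl -n kube-system get configmap kubeadm-config -o yaml

# Persistent volumes and claims, if any are used
kubectl get pv
kubectl get pvc --all-namespaces

# Recent events across all namespaces, newest last
kubectl get events -A --sort-by=.lastTimestamp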

If these checks do not reveal the cause, consider opening an issue in the Skaffold GitHub repository or asking for help in the Kubernetes community forums.
