Deploying Harbor on Kubernetes (Latest Version)

Published 2025-02-21 on 千家信息网 · last updated 2025-02-21 · by the 千家信息网 editors

Containers, images, and registries are often called the three basic building blocks of the container world. If you work with Kubernetes, there is no escaping the task of setting up an image registry, and the need for a private registry hardly needs restating. This article walks through the complete process of deploying a private Harbor registry for Kubernetes in a lab environment.

Must Harbor be your Kubernetes image registry? Not necessarily. But once you compare the options, you will find that Harbor has become, in practically every respect, the obvious choice, much as Kubernetes itself is the de facto standard for container orchestration: there is hardly a better alternative.

That is why the author took the trouble to get this deployment working end to end and write it up for readers.

Without further ado, here is the lab environment:

1. CentOS 7 Minimal
2. A single-node Kubernetes master, v1.15.5 (v1.16 introduced significant breaking changes, so we stay on the latest 1.15 release)
3. Helm 2.15
4. Harbor


Deploying Helm
1. Install the Helm Client


There are many ways to install Helm; here we install the binary release. See the official Helm documentation for other methods.

One option is the official one-step installation script:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

2. Install the Helm Server (Tiller)

Note: first install socat on every node of the Kubernetes cluster (yum install -y socat), otherwise you will see the following error:

error forwarding port 44134 to pod dc6da4ab99ad9c497c0cef1776b9dd18e0a612d507e2746ed63d36ef40f30174, uid : unable to do port forwarding: socat not found.
Error: cannot connect to Tiller

On my CentOS 7 systems socat is installed by default, so I skip this step here, but please confirm it is installed on yours.

Tiller runs as a Deployment in the Kubernetes cluster; a single command installs it:

helm init

3. Grant Tiller Permissions

Helm's server side, Tiller, is a Deployment in the kube-system namespace of the cluster; it connects to the API server to create and delete applications in Kubernetes.
Since Kubernetes 1.6, the API server has RBAC authorization enabled. The Tiller deployment currently defines no authorized ServiceAccount, so its requests to the API server are rejected. We therefore need to grant Tiller permissions explicitly.
Create a Kubernetes service account for Tiller and bind a role to it:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
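For readers who prefer declarative configuration, the two commands above correspond roughly to applying the following manifest (a sketch, shown only to make the created objects explicit; the names match the commands):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

Binding cluster-admin is convenient for a lab but very broad; in production you would scope Tiller's permissions more tightly.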

Update the API object with kubectl patch:

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Check that the authorization took effect:

kubectl get deploy --namespace kube-system tiller-deploy --output yaml | grep serviceAccount
      serviceAccount: tiller
      serviceAccountName: tiller

4. Verify That Tiller Is Installed

kubectl -n kube-system get pods | grep tiller
tiller-deploy-6d68f5c78f-nql2z          1/1       Running   0          5m

helm version
Client: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}

Installing Harbor

The official documentation is at https://github.com/goharbor/harbor-helm.
Add the Helm repository:

helm repo add harbor https://helm.goharbor.io

The official tutorial assumes everyone is already an expert (I grumbled about that quietly); here are the basic steps in more detail:

1. Search for the Harbor chart:

helm search harbor

2. Fetch the chart locally so that values.yaml can be edited:

helm fetch harbor/harbor

Unpack the downloaded chart archive, change into the extracted directory, and edit values.yaml:

tar zxvf harbor-1.2.1.tgz
cd harbor
vim values.yaml

You can follow the official documentation to tune the parameters, but for beginners only the persistence settings need changing; leave everything else at its defaults and revisit the rest once you are more familiar with the chart:

Change every storageClass field in values.yaml to storageClass: "nfs". This refers to an NFS storage class I deployed in advance; if you missed that step, see my earlier tutorial 《初探Kubernetes动态卷存储(NFS)》 (A First Look at Dynamic Volume Storage on Kubernetes with NFS) and set it up first: https://blog.51cto.com/kingda/2440315.

You can also make the change with a single command:

sed -i 's#storageClass: ""#storageClass: "nfs"#g' values.yaml
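If you want to sanity-check the substitution before touching the real file, you can run it against a throwaway copy. The excerpt below is a hypothetical fragment of values.yaml (the real chart has storageClass fields under several components, so your match count will differ):

```shell
# Create a small values.yaml-like fragment (hypothetical excerpt)
cat > /tmp/values-demo.yaml <<'EOF'
persistence:
  persistentVolumeClaim:
    registry:
      storageClass: ""
    database:
      storageClass: ""
EOF

# Same substitution as in the article, applied to the demo copy
sed -i 's#storageClass: ""#storageClass: "nfs"#g' /tmp/values-demo.yaml

# Count how many fields were rewritten
grep -c 'storageClass: "nfs"' /tmp/values-demo.yaml   # prints 2
```

The # delimiter in the sed expression avoids having to escape the quotes and colons inside the pattern.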

Leave everything else at the defaults, then start the installation:

helm install --name harbor-v1 .  --wait --timeout 1500 --debug --namespace harbor

The automatic creation of PVs and PVCs may take longer than you expect, so many pods will report errors at first; be patient and let them restart until they become Ready.

The install command above may look stuck while it runs. Again, be patient: Helm returns only after it detects that all of the pods have started successfully.


Because we installed with the default settings, the chart exposes the Harbor service through an Ingress. If you have not installed an ingress controller beforehand, Harbor itself still runs normally, but you cannot reach it.

So next, we install an ingress controller:

The Kubernetes project documents this in its source repositories; here is the all-in-one manifest:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  #replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

Save the manifest above to a file (e.g. ingress-nginx.yaml) and install it with kubectl apply -f ingress-nginx.yaml.

If you have already resolved the chart's default Ingress domain to any node of the cluster, you can simply log in with the default account and password.
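For a quick test from a workstation, you can fake that DNS resolution with an /etc/hosts entry. The hostnames below are the harbor-helm chart's defaults (expose.ingress.hosts); the IP is a placeholder for one of your node addresses:

```
# /etc/hosts on the client machine — replace 192.168.1.10 with a real node IP
192.168.1.10  core.harbor.domain  notary.harbor.domain
```

Then browse to https://core.harbor.domain and sign in as admin with the chart's default password Harbor12345 (the harborAdminPassword value); change it immediately after the first login.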
