An Example Analysis of the Resource Metrics API and Custom Metrics API in Docker


This article shares an example analysis of the resource metrics API and the custom metrics API in Docker. The editor found it quite practical, so it is shared here for your reference; let's take a look together.

Previously, heapster was used to collect resource metrics for viewing; now heapster is being deprecated.

Starting with k8s v1.8, new functionality was introduced that exposes resource metrics through an API.

Resource metrics: metrics-server

Custom metrics: prometheus, k8s-prometheus-adapter

Hence, the new-generation architecture:

1) Core metrics pipeline: composed of kubelet, metrics-server, and the API exposed by the API server; it covers cumulative CPU utilization, real-time memory utilization, pod resource usage, and container disk usage.

2) Monitoring pipeline: collects all kinds of metric data from the system and supplies it to end users, storage systems, and the HPA. It contains the core metrics as well as many non-core metrics. Non-core metrics cannot be parsed by k8s.

metrics-server is itself an API server that collects only CPU utilization, memory utilization, and the like. Once it is registered, the Metrics API can be queried straight through the API server, as sketched below.
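A minimal sketch, assuming the metrics-server deployed later in this article is already running:

# List node metrics through the aggregated Metrics API
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
# List pod metrics in a given namespace
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"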

[root@master ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1

Resource metrics (metrics)

Visit https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server

Download the files to a local directory. Note: be sure to download from the directory matching your own k8s cluster version; for example, mine is k8s v1.11.2. Otherwise the metrics pod will fail to run after installation.

[root@master metrics-server]# cd kubernetes-1.11.2/cluster/addons/metrics-server
[root@master metrics-server]# ls
auth-delegator.yaml  metrics-apiservice.yaml         metrics-server-service.yaml
auth-reader.yaml     metrics-server-deployment.yaml  resource-reader.yaml

Note the changes that need to be made:

In metrics-server-deployment.yaml, change the source line:
# - --source=kubernetes.summary_api:''
- --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true

In resource-reader.yaml, add nodes/stats to the resources list:
resources:
  - pods
  - nodes
  - namespaces
  - nodes/stats   # newly added
[root@master metrics-server]# kubectl apply -f ./
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
configmap/metrics-server-config created
deployment.extensions/metrics-server-v0.3.1 created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master metrics-server]# kubectl get pods -n kube-system -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
metrics-server-v0.2.1-fd596d746-c7x6q   2/2       Running   0          1m        10.244.2.49    node2
[root@master metrics-server]# kubectl api-versions
metrics.k8s.io/v1beta1

Now metrics shows up in the api-versions output.

[root@master ~]# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
[root@master ~]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes",
      "singularName": "",
      "namespaced": false,
      "kind": "NodeMetrics",
      "verbs": [
        "get",
        "list"
      ]
    },
    {
      "name": "pods",
      "singularName": "",
      "namespaced": true,
      "kind": "PodMetrics",
      "verbs": [
        "get",
        "list"
      ]
    }
  ]
}
[root@master metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods
{
  "kind": "PodMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/pods"
  },
  "items": [
    {
      "metadata": {
        "name": "pod1",
        "namespace": "dev",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/dev/pods/pod1",
        "creationTimestamp": "2018-10-15T09:26:57Z"
      },
      "timestamp": "2018-10-15T09:26:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "myapp",
          "usage": {
            "cpu": "0",
            "memory": "2940Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "rook-ceph-osd-0-b9b94dc6c-ffs8z",
        "namespace": "rook-ceph",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/rook-ceph/pods/rook-ceph-osd-0-b9b94dc6c-ffs8z",
        "creationTimestamp": "2018-10-15T09:26:57Z"
      },
      "timestamp": "2018-10-15T09:26:00Z",
      "window": "1m0s",
      "containers": [
        {
[root@master metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "node2",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node2",
        "creationTimestamp": "2018-10-15T09:27:26Z"
      },
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "90m",
        "memory": "1172044Ki"
      }
    },
    {
      "metadata": {
        "name": "master",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/master",
        "creationTimestamp": "2018-10-15T09:27:26Z"
      },
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "186m",
        "memory": "1582972Ki"
      }
    },
    {
      "metadata": {
        "name": "node1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node1",
        "creationTimestamp": "2018-10-15T09:27:26Z"
      },
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "68m",
        "memory": "1079332Ki"
      }
    }
  ]
}

We can see data in items now, which means resource usage on each node and pod can be collected. Note: if you don't see anything, wait a while; if items is still empty after a long time, check the metrics container's logs for errors. To view the logs:

[root@master metrics-server]# kubectl get pods -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
metrics-server-v0.2.1-84678c956-jdtr5   2/2       Running   0          14m
[root@master metrics-server]# kubectl logs metrics-server-v0.2.1-84678c956-jdtr5 -c metrics-server -n kube-system
I1015 09:26:57.117323       1 reststorage.go:93] No metrics for pod rook-ceph/rook-ceph-osd-prepare-node1-8r6lz
I1015 09:26:57.117336       1 reststorage.go:140] No metrics for container rook-ceph-osd in pod rook-ceph/rook-ceph-osd-prepare-node2-vnr97
I1015 09:26:57.117347       1 reststorage.go:93] No metrics for pod rook-ceph/rook-ceph-osd-prepare-node2-vnr97

With that in place, the kubectl top command works:

[root@master ~]# kubectl top nodes
NAME      CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
master    131m         3%        1716Mi          46%
node1     68m          1%        1169Mi          31%
node2     96m          2%        1236Mi          33%
[root@master manifests]# kubectl top pods
NAME                            CPU(cores)   MEMORY(bytes)
myapp-deploy-69b47bc96d-dfpvp   0m           2Mi
myapp-deploy-69b47bc96d-g9kkz   0m           2Mi
[root@master manifests]# kubectl top pods -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
canal-4h3ww                             11m          49Mi
canal-6tdxn                             11m          49Mi
canal-z2tp4                             11m          43Mi
coredns-78fcdf6894-2l2cf                1m           9Mi
coredns-78fcdf6894-dkkfq                1m           10Mi
etcd-master                             14m          242Mi
kube-apiserver-master                   26m          527Mi
kube-controller-manager-master          20m          68Mi
kube-flannel-ds-amd64-6zqzr             2m           15Mi
kube-flannel-ds-amd64-7qtcl             2m           17Mi
kube-flannel-ds-amd64-kpctn             2m           18Mi
kube-proxy-9snbs                        2m           16Mi
kube-proxy-psmxj                        2m           18Mi
kube-proxy-tc8g6                        2m           17Mi
kube-scheduler-master                   6m           16Mi
kubernetes-dashboard-767dc7d4d-4mq9z    0m           12Mi
metrics-server-v0.2.1-84678c956-jdtr5   0m           29Mi

Custom metrics (prometheus)

As you can see, metrics is now working. However, metrics can only monitor CPU and memory; other metrics, such as user-defined monitoring metrics, are beyond its reach. That is where another component, prometheus, comes in.

Deploying prometheus is quite involved.

node_exporter is the agent;

PromQL is the SQL-like query language used to query the data;

k8s-prometheus-adapter: prometheus metrics cannot be parsed by k8s directly; k8s-prometheus-adapter is needed to convert them into an API (a rule sketch follows this list);

kube-state-metrics aggregates the data.
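For reference, the adapter discovers and exposes Prometheus series through rules in its config map. A minimal sketch of one such rule, assuming an application-exposed series named http_requests_total (an assumption; nothing in this cluster defines it yet):

# One discovery rule from custom-metrics-config-map.yaml (sketch; series name assumed)
rules:
- seriesQuery: 'http_requests_total{namespace!="",pod!=""}'   # which Prometheus series to expose
  resources:
    overrides:                       # map Prometheus labels onto k8s resources
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total$"          # strip the _total suffix
    as: "${1}"                       # exposed as pods/http_requests
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'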

Now let's deploy it.

Visit https://github.com/ikubernetes/k8s-prom

[root@master pro]# git clone https://github.com/iKubernetes/k8s-prom.git

First create a namespace called prom:

[root@master k8s-prom]# kubectl apply -f namespace.yaml
namespace/prom created

Deploy node_exporter:

[root@master k8s-prom]# cd node_exporter/
[root@master node_exporter]# ls
node-exporter-ds.yaml  node-exporter-svc.yaml
[root@master node_exporter]# kubectl apply -f .
daemonset.apps/prometheus-node-exporter created
service/prometheus-node-exporter created
[root@master node_exporter]# kubectl get pods -n prom
NAME                             READY     STATUS    RESTARTS   AGE
prometheus-node-exporter-dmmjj   1/1       Running   0          7m
prometheus-node-exporter-ghz2l   1/1       Running   0          7m
prometheus-node-exporter-zt2lw   1/1       Running   0          7m

Deploy prometheus:

[root@master k8s-prom]# cd prometheus/
[root@master prometheus]# ls
prometheus-cfg.yaml  prometheus-deploy.yaml  prometheus-rbac.yaml  prometheus-svc.yaml
[root@master prometheus]# kubectl apply -f .
configmap/prometheus-config created
deployment.apps/prometheus-server created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created

Look at all the resources in the prom namespace:

[root@master prometheus]# kubectl get all -n prom
NAME                                     READY     STATUS    RESTARTS   AGE
pod/prometheus-node-exporter-dmmjj       1/1       Running   0          10m
pod/prometheus-node-exporter-ghz2l       1/1       Running   0          10m
pod/prometheus-node-exporter-zt2lw       1/1       Running   0          10m
pod/prometheus-server-65f5d59585-6l8m8   1/1       Running   0          55s

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/prometheus                 NodePort    10.111.127.64   <none>        9090:30090/TCP   56s
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         10m

NAME                                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3         3            3           <none>          10m

NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-server   1         1         1            1           56s

NAME                                           DESIRED   CURRENT   READY     AGE
replicaset.apps/prometheus-server-65f5d59585   1         1         1         56s

Above we can see that, via NodePort, the application inside the prometheus container can be reached through port 30090 on the host, for example to run PromQL queries like the one below.
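From the Prometheus web UI you can run PromQL queries against what node_exporter and cAdvisor scraped. One sketch of such a query (the pod_name label follows cAdvisor's naming in this k8s era and is an assumption):

# Per-pod CPU usage over the last 5 minutes (PromQL; label name assumed)
sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod_name)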

It's best to mount a PVC for storage; otherwise the monitoring data will be gone before long. A sketch follows.
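A minimal sketch of such a claim; the storageClassName and the wiring into prometheus-deploy.yaml are assumptions to adapt to your cluster:

# prometheus-data-pvc.yaml (sketch; storageClassName is assumed)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
  namespace: prom
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: rook-ceph-block    # assumed; use one your cluster provides
  resources:
    requests:
      storage: 10Gi

Then reference the claim as a volume in prometheus-deploy.yaml and mount it at Prometheus's data directory in place of the default emptyDir.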

Deploy kube-state-metrics to aggregate the data:

[root@master k8s-prom]# cd kube-state-metrics/
[root@master kube-state-metrics]# ls
kube-state-metrics-deploy.yaml  kube-state-metrics-rbac.yaml  kube-state-metrics-svc.yaml
[root@master kube-state-metrics]# kubectl apply -f .
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created
[root@master kube-state-metrics]# kubectl get all -n prom
NAME                                      READY     STATUS    RESTARTS   AGE
pod/kube-state-metrics-58dffdf67d-v9klh   1/1       Running   0          14m

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kube-state-metrics   ClusterIP   10.111.41.139   <none>        8080/TCP   14m

Deploy k8s-prometheus-adapter; this one needs a self-made certificate:

[root@master k8s-prometheus-adapter]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077; openssl genrsa -out serving.key 2048)
Generating RSA private key, 2048 bit long modulus
...........................................................................................+++
...............+++
e is 65537 (0x10001)

Create the certificate request:

[root@master pki]#  openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"

Sign the certificate:

[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key

Create a secret from the key and certificate:

[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom
secret/cm-adapter-serving-certs created

Note: cm-adapter-serving-certs is the name used inside the custom-metrics-apiserver-deployment.yaml file (see the volume fragment below).
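For reference, the deployment mounts that secret as a volume roughly like this (a sketch based on the adapter's manifests; the exact mount path may differ by version):

# Fragment of custom-metrics-apiserver-deployment.yaml (sketch)
volumeMounts:
- name: volume-serving-cert
  mountPath: /var/run/serving-cert   # assumed path; check your manifest
  readOnly: true
volumes:
- name: volume-serving-cert
  secret:
    secretName: cm-adapter-serving-certs   # must match the secret created above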

[root@master pki]# kubectl get secrets -n prom
NAME                             TYPE                                  DATA      AGE
cm-adapter-serving-certs         Opaque                                2         51s
default-token-knsbg              kubernetes.io/service-account-token   3         4h
kube-state-metrics-token-sccdf   kubernetes.io/service-account-token   3         3h
prometheus-token-nqzbz           kubernetes.io/service-account-token   3         3h

Deploy k8s-prometheus-adapter:

[root@master k8s-prom]# cd k8s-prometheus-adapter/
[root@master k8s-prometheus-adapter]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml   custom-metrics-apiserver-service.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml              custom-metrics-apiservice.yaml
custom-metrics-apiserver-deployment.yaml                            custom-metrics-cluster-role.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml  custom-metrics-resource-reader-cluster-role.yaml
custom-metrics-apiserver-service-account.yaml                       hpa-custom-metrics-cluster-role-binding.yaml

Because k8s v1.11.2 is incompatible with the latest k8s-prometheus-adapter, the workaround is to visit https://github.com/DirectXMan12/k8s-prometheus-adapter/tree/master/deploy/manifests, download the latest custom-metrics-apiserver-deployment.yaml, and change the namespace inside it to prom; likewise, download custom-metrics-config-map.yaml locally and change its namespace to prom as well (the sed one-liner below is one way).
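A quick way to do the namespace swap; a sketch that assumes the upstream files say namespace: custom-metrics, so check before running:

# Replace the upstream namespace with prom (assumes "namespace: custom-metrics" upstream)
sed -i 's/namespace: custom-metrics/namespace: prom/g' \
    custom-metrics-apiserver-deployment.yaml custom-metrics-config-map.yaml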

[root@master k8s-prometheus-adapter]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created
[root@master k8s-prometheus-adapter]# kubectl get all -n prom
NAME                                           READY     STATUS    RESTARTS   AGE
pod/custom-metrics-apiserver-65f545496-64lsz   1/1       Running   0          6m
pod/kube-state-metrics-58dffdf67d-v9klh        1/1       Running   0          4h
pod/prometheus-node-exporter-dmmjj             1/1       Running   0          4h
pod/prometheus-node-exporter-ghz2l             1/1       Running   0          4h
pod/prometheus-node-exporter-zt2lw             1/1       Running   0          4h
pod/prometheus-server-65f5d59585-6l8m8         1/1       Running   0          4h

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/custom-metrics-apiserver   ClusterIP   10.103.87.246   <none>        443/TCP          36m
service/kube-state-metrics         ClusterIP   10.111.41.139   <none>        8080/TCP         4h
service/prometheus                 NodePort    10.111.127.64   <none>        9090:30090/TCP   4h
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         4h

NAME                                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3         3            3           <none>          4h

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/custom-metrics-apiserver   1         1         1            1           36m
deployment.apps/kube-state-metrics         1         1         1            1           4h
deployment.apps/prometheus-server          1         1         1            1           4h

NAME                                                  DESIRED   CURRENT   READY     AGE
replicaset.apps/custom-metrics-apiserver-5f6b4d857d   0         0         0         36m
replicaset.apps/custom-metrics-apiserver-65f545496    1         1         1         6m
replicaset.apps/custom-metrics-apiserver-86ccf774d5   0         0         0         17m
replicaset.apps/kube-state-metrics-58dffdf67d         1         1         1         4h
replicaset.apps/prometheus-server-65f5d59585          1         1         1         4h

Finally, all the resources in the prom namespace are in the Running state.

[root@master k8s-prometheus-adapter]# kubectl api-versions
custom.metrics.k8s.io/v1beta1

The custom.metrics.k8s.io/v1beta1 API is now visible.

Open a proxy:

[root@master k8s-prometheus-adapter]# kubectl proxy --port=8080

Now the metric data is visible:

[root@master pki]# curl http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/
    {
      "name": "pods/ceph_rocksdb_submit_transaction_sync",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "jobs.batch/kube_deployment_created",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "jobs.batch/kube_pod_owner",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
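Individual metrics can then be read per object; the URL pattern is namespaces/<ns>/<resource>/*/<metric>. A sketch using a metric and namespace from the listing above (the namespace choice is an assumption):

# Fetch one custom metric for all pods in a namespace
curl "http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/namespaces/rook-ceph/pods/*/ceph_rocksdb_submit_transaction_sync"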

Now we can happily create HPAs (Horizontal Pod Autoscalers).

In addition, prometheus can be integrated with grafana. The steps are as follows.

First download the grafana.yaml file from https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/grafana.yaml

[root@master pro]# wget

Modify the contents of grafana.yaml:

Change namespace: kube-system to prom (there are two occurrences).
Comment out these two lines under env:
        # - name: INFLUXDB_HOST
        #   value: monitoring-influxdb
Add type: NodePort on the last line of the service:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
[root@master pro]# kubectl apply -f grafana.yaml
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
[root@master pro]# kubectl get pods -n prom
NAME                                 READY     STATUS    RESTARTS   AGE
monitoring-grafana-ffb4d59bd-gdbsk   1/1       Running   0          5s

The grafana pod is now running.

[root@master pro]# kubectl get svc -n prom
NAME                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
monitoring-grafana   NodePort   10.106.164.205   <none>        80:32659/TCP   19m

We can now browse to the host IP: http://172.16.1.100:32659

Then the corresponding data can be seen in the UI.

Download a grafana dashboard template for monitoring k8s-prometheus from the website below:

Then import the downloaded template in the grafana UI:

After importing the template, the monitoring data is visible:

HPA (horizontal pod autoscaling)

When pods come under heavy load, the pod count is automatically scaled out based on load to spread the pressure.

Currently, HPA comes in two versions. The v1 version only supports core metric definitions (it can only scale pods based on the CPU utilization metric); the v2 version, covered further below, supports custom metrics as well.

[root@master pro]# kubectl explain hpa.spec.scaleTargetRef
scaleTargetRef: specifies the target whose metrics are used as the basis for scaling pods
[root@master pro]# kubectl api-versions |grep auto
autoscaling/v1
autoscaling/v2beta1

Above we can see that both hpa v1 and hpa v2 are supported.

Now let's use the command line to re-create a myapp pod with resource limits:

[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
service/myapp created
deployment.apps/myapp created
[root@master ~]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
myapp-6985749785-fcvwn   1/1       Running   0          58s

Next, let's make the myapp pod scale horizontally and automatically with kubectl autoscale, which effectively defines an HPA controller.

[root@master ~]# kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60
horizontalpodautoscaler.autoscaling/myapp autoscaled

--min: the minimum number of pods

--max: the maximum number of pods

--cpu-percent: the target CPU utilization (an equivalent manifest sketch follows)
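For reference, the autoscale command above is roughly equivalent to this autoscaling/v1 manifest; a sketch, not something generated by the command itself:

# Equivalent of: kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 60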

[root@master ~]# kubectl get hpa
NAME      REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp     Deployment/myapp   0%/60%    1         8         1          4m
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
myapp        ClusterIP   10.105.235.197   <none>        80/TCP    19

Next, change the service to NodePort:

[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type": "NodePort"}}'
service/myapp patched
[root@master ~]# kubectl get svc
NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
myapp        NodePort   10.105.235.197   <none>        80:31990/TCP   22m
[root@master ~]# yum install httpd-tools   # mainly to get the ab load-testing tool
[root@master ~]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-6985749785-fcvwn   1/1       Running   0          25m       10.244.2.84   node2

Start load-testing with the ab tool:

[root@master ~]# ab -c 1000 -n 5000000 http://172.16.1.100:31990/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.1.100 (be patient)

Wait a while and you will see pod CPU utilization reach 98%, so it needs to scale out to 2 pods:

[root@master ~]# kubectl describe hpa
resource cpu on pods  (as a percentage of request):  98% (49m) / 60%
Deployment pods:                                       1 current / 2 desired
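This matches the HPA scaling rule documented by Kubernetes: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). Here that works out to ceil(1 × 98% / 60%) = ceil(1.63) = 2, which is exactly the "1 current / 2 desired" shown above.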
[root@master ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
myapp-6985749785-fcvwn   49m          3Mi
(49m is right at the 50m total CPU we set.)
[root@master ~]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
myapp-6985749785-fcvwn   1/1       Running   0          32m       10.244.2.84    node2
myapp-6985749785-sr4qv   1/1       Running   0          2m        10.244.1.105   node1

Above we can see it has automatically scaled out to 2 pods. Wait a bit longer and, as CPU pressure keeps rising, it scales out to 4 or more pods:

[root@master ~]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
myapp-6985749785-2mjrd   1/1       Running   0          1m        10.244.1.107   node1
myapp-6985749785-bgz6p   1/1       Running   0          1m        10.244.1.108   node1
myapp-6985749785-fcvwn   1/1       Running   0          35m       10.244.2.84    node2
myapp-6985749785-sr4qv   1/1       Running   0          5m        10.244.1.105   node1

Once the load test stops, the pod count shrinks back to normal.

Above we used hpa v1 for horizontal pod autoscaling; as mentioned earlier, hpa v1 can only scale pods horizontally based on CPU utilization.

Next we introduce hpa v2, which can scale pods horizontally based on custom metric utilization.

Before using hpa v2, delete the hpa v1 object created earlier so it doesn't conflict with our hpa v2 test:

[root@master hpa]# kubectl delete hpa myapp
horizontalpodautoscaler.autoscaling "myapp" deleted

OK, now let's create an hpa v2:

[root@master hpa]# cat hpa-v2-demo.yaml
apiVersion: autoscaling/v2beta1   # this marks it as hpa v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:                 # what to autoscale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1                  # minimum number of replicas
  maxReplicas: 10
  metrics:                        # which metrics to evaluate
  - type: Resource                # evaluate based on a resource
    resource:
      name: cpu
      targetAverageUtilization: 55   # scale out when pod CPU usage exceeds 55%
  - type: Resource
    resource:
      name: memory                # hpa v1 could only evaluate cpu; hpa v2 can also evaluate memory
      targetAverageValue: 50Mi    # scale out when pod memory usage exceeds 50Mi
[root@master hpa]# kubectl apply -f hpa-v2-demo.yaml
horizontalpodautoscaler.autoscaling/myapp-hpa-v2 created
[root@master hpa]# kubectl get hpa
NAME           REFERENCE          TARGETS                MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp   3723264/50Mi, 0%/55%   1         10        1          37s

We can see there is only one pod at the moment:

[root@master hpa]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-6985749785-fcvwn   1/1       Running   0          57m       10.244.2.84   node2

Start the load test:

[root@master ~]# ab -c 100 -n 5000000 http://172.16.1.100:31990/index.html

Check what hpa v2 detects:

[root@master hpa]# kubectl describe hpa
Metrics:                                               ( current / target )
  resource memory on pods:                             3756032 / 50Mi
  resource cpu on pods  (as a percentage of request):  82% (41m) / 55%
Min replicas:                                          1
Max replicas:                                          10
Deployment pods:                                       1 current / 2 desired
[root@master hpa]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
myapp-6985749785-8frq4   1/1       Running   0          1m        10.244.1.109   node1
myapp-6985749785-fcvwn   1/1       Running   0          1h        10.244.2.84    node2

It has automatically scaled out to 2 pods. Once the load test stops, the pod count shrinks back to normal.

Going forward, hpa v2 lets us scale the pod count not only on CPU and memory utilization but also on metrics like HTTP concurrency.

For example:

[root@master hpa]# cat hpa-v2-custom.yaml
apiVersion: autoscaling/v2beta1   # this marks it as hpa v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:                 # what to autoscale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1                  # minimum number of replicas
  maxReplicas: 10
  metrics:                        # which metrics to evaluate
  - type: Pods                    # evaluate based on a pod metric
    pods:
      metricName: http_requests   # a custom resource metric
      targetAverageValue: 800m    # here m counts items: a concurrency of 800

Thanks for reading! That's it for "An example analysis of the resource metrics API and custom metrics API in Docker". I hope the content above helps you and teaches you something; if you found the article good, share it so more people can see it!
