Kubernetes Cluster Installation

This article describes how to deploy clickvisual to a Kubernetes cluster using helm or kubectl.

1. Deployment requirements

  • Kubernetes >= 1.17
  • Helm >= 3.0.0
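
You can confirm the versions of your cluster and helm client with:

```bash
kubectl version
helm version
```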

2. Deploy fluent-bit (for reference)

You can deploy fluent-bit by following the official documentation at https://docs.fluentbit.io/; the only requirement is that every record written to Kafka contains the following two fields (a sample record follows the list):

  • time
  • log
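
For example, logs collected from the Docker json-file log driver already carry both fields. An illustrative record as written to Kafka might look like this (all values are placeholders):

```json
{
  "time": "2022-01-01T12:00:00.000000000Z",
  "log": "GET /healthz HTTP/1.1 200\n",
  "stream": "stdout"
}
```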

If you deploy fluent-bit as a DaemonSet, you can use the following manifest. Note that the ConfigMap must be mounted. fluentbit-daemonset.yaml:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-system
  labels:
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: fluent-bit-logging
  template:
    metadata:
      labels:
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: fluent-bit
          image: bitnami/fluent-bit:1.8.12
          imagePullPolicy: Always
          env:
            - name: CLUSTER_NAME
              value: ${CLUSTER_NAME}
            - name: KAFKA_BROKERS
              value: ${KAFKA_BROKERS}
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
          resources:
            requests:
              cpu: 5m
              memory: 32Mi
            limits:
              cpu: 500m
              memory: 512Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc/
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: fluent-bit-config
          configMap:
            name: fluent-bit-config
```
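
The manifest references ${CLUSTER_NAME} and ${KAFKA_BROKERS}, which must be substituted before applying. A minimal sketch using envsubst (the tool choice is an assumption; any templating mechanism works):

```bash
# placeholder values -- set these to your cluster name and Kafka brokers
export CLUSTER_NAME=dev
export KAFKA_BROKERS=127.0.0.1:9092
# substitute only these two variables, then apply the manifest
envsubst '${CLUSTER_NAME} ${KAFKA_BROKERS}' < fluentbit-daemonset.yaml | kubectl apply -f -
```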

The mounted fluentbit-configmap.yaml can be configured as follows:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kube-system
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush             1
        Log_Level         info
        Daemon            off
        Parsers_File      parsers.conf
        HTTP_Server       On
        HTTP_Listen       0.0.0.0
        HTTP_Port         2020
    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE filter-modify.conf
    @INCLUDE output-kafka.conf
    # Disable these when the env vars are already set in the DaemonSet
    #@Set CLUSTER_NAME=shimodev
    #@Set KAFKA_BROKERS=127.0.0.1:9092
  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        # Tag identifies the data source; it is used to select records in the later FILTER and OUTPUT stages
        Tag               ingress.*
        Path              /var/log/containers/nginx-ingress-controller*.log
        Parser            docker
        DB                /var/log/flb_ingress.db
        Mem_Buf_Limit     15MB
        Buffer_Chunk_Size 32k
        Buffer_Max_Size   64k
        # Skip lines longer than Buffer_Max_Size; if Skip_Long_Lines is Off, collection stops when an oversized line is hit
        Skip_Long_Lines   On
        Refresh_Interval  10
        # Read files with no offset recorded in the DB from the head; on large log files this raises fluent-bit memory usage and can trigger an OOM kill
        #Read_from_Head   On
    [INPUT]
        Name              tail
        # Tag identifies the data source; it is used to select records in the later FILTER and OUTPUT stages
        Tag               ingress_stderr.*
        Path              /var/log/containers/nginx-ingress-controller*.log
        Parser            docker
        DB                /var/log/flb_ingress_stderr.db
        Mem_Buf_Limit     15MB
        Buffer_Chunk_Size 32k
        Buffer_Max_Size   64k
        # Skip lines longer than Buffer_Max_Size; if Skip_Long_Lines is Off, collection stops when an oversized line is hit
        Skip_Long_Lines   On
        Refresh_Interval  10
        # Read files with no offset recorded in the DB from the head; on large log files this raises fluent-bit memory usage and can trigger an OOM kill
        #Read_from_Head   On
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*_default_*.log,/var/log/containers/*_release_*.log
        Exclude_path      *fluent-bit-*,*mongo-*,*minio-*,*mysql-*
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     15MB
        Buffer_Chunk_Size 1MB
        Buffer_Max_Size   5MB
        # Skip lines longer than Buffer_Max_Size; if Skip_Long_Lines is Off, collection stops when an oversized line is hit
        Skip_Long_Lines   On
        Refresh_Interval  10
    [INPUT]
        Name              tail
        Tag               ego.*
        Path              /var/log/containers/*_default_*.log,/var/log/containers/*_release_*.log
        Exclude_path      *fluent-bit-*,*mongo-*,*minio-*,*mysql-*
        Parser            docker
        DB                /var/log/flb_ego.db
        Mem_Buf_Limit     15MB
        Buffer_Chunk_Size 1MB
        Buffer_Max_Size   5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               ingress.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     ingress.var.log.containers.
        # Merge_Log=On parses the JSON content of the log field and lifts it to the root level, attached under the field named by Merge_Log_Key
        Merge_Log           Off
        #Merge_Log_Key      log_processed
        #Merge_Log_Trim     On
        # Whether to keep the original log field after merging it
        Keep_Log            On
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off
        #Regex_Parser
    [FILTER]
        Name                kubernetes
        Match               ingress_stderr.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     ingress_stderr.var.log.containers.
        # Merge_Log=On parses the JSON content of the log field and lifts it to the root level, attached under the field named by Merge_Log_Key
        Merge_Log           Off
        # Whether to keep the original log field after merging it
        Keep_Log            Off
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off
        #Regex_Parser
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           Off
        Keep_Log            On
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off
    [FILTER]
        Name                kubernetes
        Match               ego.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     ego.var.log.containers.
        Merge_Log           Off
        Keep_Log            Off
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off
  filter-modify.conf: |
    [FILTER]
        Name         nest
        Match        *
        Wildcard     pod_name
        Operation    lift
        Nested_under kubernetes
        Add_prefix   kubernetes_
    [FILTER]
        Name   modify
        Match  *
        Rename time _time_
        Rename log _log_
        Rename stream _source_
        Rename kubernetes_host _node_name_
        Rename kubernetes_namespace_name _namespace_
        Rename kubernetes_container_name _container_name_
        Rename kubernetes_pod_name _pod_name_
        Remove kubernetes_pod_id
        Remove kubernetes_docker_id
        Remove kubernetes_container_hash
        Remove kubernetes_container_image
        Add    _cluster_ ${CLUSTER_NAME}
        Add    _log_agent_ ${HOSTNAME}
        # ${NODE_IP} is injected via the env configured in the DaemonSet
        Add    _node_ip_ ${NODE_IP}
    [FILTER]
        Name    grep
        Match   ingress.*
        #Regex  container_name ^nginx-ingress-controller$
        #Regex  stream ^stdout$
        Exclude _source_ ^stderr$
        # Exclude TCP proxy logs (their different format interferes with collection)
        Exclude log ^\[*
    [FILTER]
        Name    grep
        Match   ingress_stderr.*
        Exclude _source_ ^stdout$
    [FILTER]
        Name    grep
        Match   kube.*
        #Regex  stream ^stdout$
        Exclude log (ego.sys)
    [FILTER]
        Name   grep
        Match  ego.*
        #Regex lname ^(ego.sys)$
        Regex  log ("lname":"ego.sys")
    # [FILTER]
    #     Name        modify
    #     Match       ego.*
    #     Hard_rename ts _time_
  output-kafka.conf: |
    [OUTPUT]
        Name          kafka
        Match         ingress.*
        Brokers       ${KAFKA_BROKERS}
        Topics        ingress-stdout-logs-${CLUSTER_NAME}
        #Timestamp_Key @timestamp
        Timestamp_Key _time_
        Retry_Limit   false
        # hides "Receive failed: Disconnected" errors when kafka kills idle connections
        rdkafka.log.connection.close false
        # the producer buffer is not included in http://fluentbit.io/documentation/0.12/configuration/memory_usage.html#estimating
        rdkafka.queue.buffering.max.kbytes 10240
        # for logs you'll probably want this to be 0 or 1, not more
        rdkafka.request.required.acks 1
    [OUTPUT]
        Name          kafka
        Match         ingress_stderr.*
        Brokers       ${KAFKA_BROKERS}
        Topics        ingress-stderr-logs-${CLUSTER_NAME}
        #Timestamp_Key @timestamp
        Timestamp_Key _time_
        Retry_Limit   false
        # hides "Receive failed: Disconnected" errors when kafka kills idle connections
        rdkafka.log.connection.close false
        # the producer buffer is not included in http://fluentbit.io/documentation/0.12/configuration/memory_usage.html#estimating
        rdkafka.queue.buffering.max.kbytes 10240
        # for logs you'll probably want this to be 0 or 1, not more
        rdkafka.request.required.acks 1
    [OUTPUT]
        Name          kafka
        Match         kube.*
        Brokers       ${KAFKA_BROKERS}
        Topics        app-stdout-logs-${CLUSTER_NAME}
        Timestamp_Key _time_
        Retry_Limit   false
        # hides "Receive failed: Disconnected" errors when kafka kills idle connections
        rdkafka.log.connection.close false
        # the producer buffer is not included in http://fluentbit.io/documentation/0.12/configuration/memory_usage.html#estimating
        rdkafka.queue.buffering.max.kbytes 10240
        # for logs you'll probably want this to be 0 or 1, not more
        rdkafka.request.required.acks 1
    [OUTPUT]
        Name          kafka
        Match         ego.*
        Brokers       ${KAFKA_BROKERS}
        Topics        ego-stdout-logs-${CLUSTER_NAME}
        Timestamp_Key _time_
        Retry_Limit   false
        # hides "Receive failed: Disconnected" errors when kafka kills idle connections
        rdkafka.log.connection.close false
        # the producer buffer is not included in http://fluentbit.io/documentation/0.12/configuration/memory_usage.html#estimating
        rdkafka.queue.buffering.max.kbytes 10240
        # for logs you'll probably want this to be 0 or 1, not more
        rdkafka.request.required.acks 1
  parsers.conf: |
    [PARSER]
        Name        apache
        Format      regex
        Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name        apache2
        Format      regex
        Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name        apache_error
        Format      regex
        Regex       ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
    [PARSER]
        Name        nginx
        Format      regex
        Regex       ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name        json
        Format      json
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
        # Similar in effect to Merge_Log=On in the FILTER stage: parses the JSON content of the log field, but cannot lift it to the root level
        #Decode_Field_As escaped_utf8 kubernetes do_next
        #Decode_Field_As json kubernetes
    [PARSER]
        # http://rubular.com/r/tjUt3Awgg4
        Name        cri
        Format      regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S
```
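
After applying both fluentbit-configmap.yaml and the DaemonSet, you can check that the agent is running and that its built-in HTTP server (enabled on port 2020 in the [SERVICE] section above) responds:

```bash
# pods carry the k8s-app label set in the DaemonSet manifest
kubectl get pods -n kube-system -l k8s-app=fluent-bit-logging
# forward the fluent-bit monitoring port and query its metrics endpoint
kubectl -n kube-system port-forward ds/fluent-bit 2020:2020 &
curl -s http://127.0.0.1:2020/api/v1/metrics
```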
3. Deploy clickvisual

Clone the repository:

```bash
git clone https://github.com/clickvisual/clickvisual.git
cd clickvisual && cp api/config/default.toml data/helm/clickvisual/default.toml
```

Edit the mysql, auth, and other sections in data/helm/clickvisual/default.toml, replacing mysql.dsn, auth.redisAddr, and auth.redisPassword with your own values.
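
As a rough sketch, the relevant entries look like the following; all values are placeholders, and the exact layout of default.toml may differ between versions:

```toml
[mysql]
# replace with your own MySQL DSN (placeholder value)
dsn = "user:password@tcp(127.0.0.1:3306)/clickvisual?charset=utf8mb4&parseTime=True&loc=Local"

[auth]
# replace with your own Redis address and password (placeholder values)
redisAddr = "127.0.0.1:6379"
redisPassword = "your-redis-password"
```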

Method 1: [recommended] Install directly with helm:

```bash
helm install clickvisual data/helm/clickvisual --set image.tag=latest --namespace default
```

If you have already pushed the clickvisual image to your own harbor registry, you can override the repository address with --set image.repository:

```bash
helm install clickvisual data/helm/clickvisual --set image.repository=${YOUR_HARBOR}/${PATH}/clickvisual --set image.tag=latest --namespace default
```
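
After the install completes, you can confirm the release and its pods (assuming the release name and namespace used above):

```bash
helm status clickvisual --namespace default
kubectl get pods --namespace default
```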

Method 2: [optional] Render the yaml with helm, then install it manually with kubectl:

```bash
# Render the installation yaml with helm template
helm template clickvisual data/helm/clickvisual --set image.tag=latest > clickvisual.yaml
# You can use "--set image.repository" to override the default image repository
# helm template clickvisual data/helm/clickvisual --set image.repository=${YOUR_HARBOR}/${PATH}/clickvisual --set image.tag=latest > clickvisual.yaml
# Check that clickvisual.yaml looks correct, then apply it with kubectl
kubectl apply -f clickvisual.yaml --namespace default
```
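
Once the pods are running, a port-forward is a quick way to reach the clickvisual UI. The service name and port below follow the chart defaults and are assumptions; adjust them to match your deployment:

```bash
# forward the clickvisual service locally (assumed service name and port)
kubectl port-forward svc/clickvisual 19001:19001 --namespace default
# then open http://localhost:19001 in a browser
```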