Reference architecture: up to 3,000 users

Source: https://docs.gitlab.com/ee/administration/reference_architectures/3k_users.html

This page describes the GitLab reference architecture for up to 3,000 users. For a full list of reference architectures, see the available reference architectures.

NOTE: The 3,000-user reference architecture documented below is designed to help your organization achieve a highly available GitLab deployment. If you do not have the expertise or need to maintain a highly available environment, you can have a simpler and less expensive-to-operate environment by following the 2,000-user reference architecture instead.

  • Supported users (approximate): 3,000
  • High availability: Yes
  • Tested requests per second (RPS) rates: API: 60 RPS, Web: 6 RPS, Git: 6 RPS
Service | Nodes | Configuration | GCP | AWS | Azure
--------|-------|---------------|-----|-----|------
External load balancing node | 1 | 2 vCPU, 1.8 GB memory | n1-highcpu-2 | c5.large | F2s v2
Redis | 3 | 2 vCPU, 7.5 GB memory | n1-standard-2 | m5.large | D2s v3
Consul + Sentinel | 3 | 2 vCPU, 1.8 GB memory | n1-highcpu-2 | c5.large | F2s v2
PostgreSQL | 3 | 2 vCPU, 7.5 GB memory | n1-standard-2 | m5.large | D2s v3
PgBouncer | 3 | 2 vCPU, 1.8 GB memory | n1-highcpu-2 | c5.large | F2s v2
Internal load balancing node | 1 | 2 vCPU, 1.8 GB memory | n1-highcpu-2 | c5.large | F2s v2
Gitaly | 2 (minimum) | 4 vCPU, 15 GB memory | n1-standard-4 | m5.xlarge | D4s v3
Sidekiq | 4 | 2 vCPU, 7.5 GB memory | n1-standard-2 | m5.large | D2s v3
GitLab Rails | 3 | 8 vCPU, 7.2 GB memory | n1-highcpu-8 | c5.2xlarge | F8s v2
Monitoring node | 1 | 2 vCPU, 1.8 GB memory | n1-highcpu-2 | c5.large | F2s v2
Object storage | n/a | n/a | n/a | n/a | n/a
NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | n1-highcpu-4 | c5.xlarge | F4s v2

The architectures were built and tested on GCP using the Intel Xeon E5 v3 (Haswell) CPU platform. On different hardware you may find that adjustments, either lower or higher, are required for your CPU or node counts. For more information, a Sysbench benchmark of the CPU can be found here.

For data objects such as LFS, uploads, and artifacts, an object storage service is recommended over NFS where possible, due to better performance and availability. Since this doesn't require a node to be set up, it's marked as not applicable (n/a) in the table above.

Setup components

To set up GitLab and its components to accommodate up to 3,000 users:

  1. Configure the external load balancing node that will handle the load balancing of the GitLab application services nodes.
  2. Configure Redis.
  3. Configure Consul and Sentinel.
  4. Configure PostgreSQL, the database for GitLab.
  5. Configure PgBouncer.
  6. Configure the internal load balancing node.
  7. Configure Gitaly, which provides access to the Git repositories.
  8. Configure Sidekiq.
  9. Configure the main GitLab Rails application to run Puma/Unicorn, Workhorse, GitLab Shell, and to serve all frontend requests (UI, API, and Git over HTTP/SSH).
  10. Configure Prometheus to monitor your GitLab environment.
  11. Configure the object storage used for shared data objects.
  12. Configure NFS (optional, not recommended) to have shared disk storage service as an alternative to Gitaly and/or object storage. NFS is required for GitLab Pages; you can skip this step if you don't use that feature.

The servers start on the same 10.6.0.0/16 private network range, and can connect to each other freely on these addresses.

Here is a list and description of each machine and the assigned IP:

  • 10.6.0.10: External load balancer
  • 10.6.0.61: Redis primary
  • 10.6.0.62: Redis replica 1
  • 10.6.0.63: Redis replica 2
  • 10.6.0.11: Consul/Sentinel 1
  • 10.6.0.12: Consul/Sentinel 2
  • 10.6.0.13: Consul/Sentinel 3
  • 10.6.0.31: PostgreSQL primary
  • 10.6.0.32: PostgreSQL secondary 1
  • 10.6.0.33: PostgreSQL secondary 2
  • 10.6.0.21: PgBouncer 1
  • 10.6.0.22: PgBouncer 2
  • 10.6.0.23: PgBouncer 3
  • 10.6.0.20: Internal load balancer
  • 10.6.0.51: Gitaly 1
  • 10.6.0.52: Gitaly 2
  • 10.6.0.71: Sidekiq 1
  • 10.6.0.72: Sidekiq 2
  • 10.6.0.73: Sidekiq 3
  • 10.6.0.74: Sidekiq 4
  • 10.6.0.41: GitLab application 1
  • 10.6.0.42: GitLab application 2
  • 10.6.0.43: GitLab application 3
  • 10.6.0.81: Prometheus

Configure the external load balancer

NOTE: This architecture has been tested and validated with HAProxy as the load balancer. Although other load balancers with similar feature sets could also be used, those load balancers have not been validated.

In an active/active GitLab configuration, you will need a load balancer to route traffic to the application servers. The specifics on which load balancer to use, or its exact configuration, are beyond the scope of the GitLab documentation. We assume that if you're managing multi-node systems like GitLab, you already have a load balancer of choice. Some examples include HAProxy (open-source), F5 Big-IP LTM, and Citrix Net Scaler. This documentation outlines which ports and protocols you need to use with GitLab.

The next question is how you will handle SSL in your environment. There are several different options:

Application node terminates SSL

Configure your load balancer to pass connections on port 443 as TCP rather than HTTP(S). This will pass the connection untouched to the application node's NGINX service. NGINX will have the SSL certificate and listen on port 443.

See the NGINX HTTPS documentation for details on managing SSL certificates and configuring NGINX.

Load balancer terminates SSL without backend SSL

Configure your load balancer to use the HTTP(S) protocol rather than TCP. The load balancer will then be responsible for managing SSL certificates and terminating SSL.

Since communication between the load balancer and GitLab will not be secure, there is some additional configuration needed. See the NGINX proxied SSL documentation for details.

Load balancer terminates SSL with backend SSL

Configure your load balancer(s) to use the ‘HTTP(S)’ protocol rather than ‘TCP’. The load balancer(s) will be responsible for managing SSL certificates that end users will see.

Traffic will also be secure between the load balancer and NGINX in this scenario. There is no need to add configuration for proxied SSL, since the connection will be secure all the way. However, configuration will need to be added to GitLab to configure SSL certificates. See the NGINX HTTPS documentation for details on managing SSL certificates and configuring NGINX.

Ports

The basic ports to be used are shown in the table below.

LB Port | Backend Port | Protocol
--------|--------------|---------
80      | 80           | HTTP (1)
443     | 443          | TCP or HTTPS (1) (2)
22      | 22           | TCP

  • (1): Web terminal support requires your load balancer to correctly handle WebSocket connections. When using HTTP or HTTPS proxying, this means your load balancer must be configured to pass through the Connection and Upgrade hop-by-hop headers. See the web terminal integration guide for more details.
  • (2): When using the HTTPS protocol for port 443, you will need to add an SSL certificate to the load balancer. If you wish to terminate SSL at the GitLab application server instead, use the TCP protocol.
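
As a rough sketch of this port mapping in HAProxy terms (the frontend and backend names are illustrative assumptions, and port 443 is shown as a TCP passthrough to match the "application node terminates SSL" option above; adjust this if you terminate SSL at the load balancer instead):

    frontend gitlab-http-in
        bind *:80
        mode http
        default_backend gitlab-rails-http

    frontend gitlab-https-in
        bind *:443
        mode tcp                      # pass SSL through to NGINX on the application nodes
        default_backend gitlab-rails-https

    frontend gitlab-ssh-in
        bind *:22
        mode tcp
        default_backend gitlab-rails-ssh

    backend gitlab-rails-http
        mode http
        server gitlab1 10.6.0.41:80 check
        server gitlab2 10.6.0.42:80 check
        server gitlab3 10.6.0.43:80 check

    backend gitlab-rails-https
        mode tcp
        server gitlab1 10.6.0.41:443 check
        server gitlab2 10.6.0.42:443 check
        server gitlab3 10.6.0.43:443 check

    backend gitlab-rails-ssh
        mode tcp
        server gitlab1 10.6.0.41:22 check
        server gitlab2 10.6.0.42:22 check
        server gitlab3 10.6.0.43:22 check

The application node IPs (10.6.0.41-10.6.0.43) are the ones listed earlier on this page; everything else in the snippet is an assumption to be adapted to your load balancer of choice.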

If you're using GitLab Pages with custom domain support, you will need some additional port configuration. GitLab Pages requires a separate virtual IP address. Configure DNS to point the pages_external_url from /etc/gitlab/gitlab.rb at the new virtual IP address. See the GitLab Pages documentation for more information.

LB Port | Backend Port | Protocol
--------|--------------|---------
80      | Varies (1)   | HTTP
443     | Varies (1)   | TCP (2)

  • (1): The backend port for GitLab Pages depends on the gitlab_pages['external_http'] and gitlab_pages['external_https'] settings. See the GitLab Pages documentation for more details.
  • (2): Port 443 for GitLab Pages should always use the TCP protocol. Users can configure custom domains with custom SSL, which would not be possible if SSL was terminated at the load balancer.

Alternate SSH Port

Some organizations have policies against opening SSH port 22. In this case, it may be helpful to configure an alternate SSH hostname that allows users to use SSH over port 443. An alternate SSH hostname will require a new virtual IP address compared to the other GitLab HTTP configuration above.

Configure DNS for an alternate SSH hostname, such as altssh.gitlab.example.com.

LB Port | Backend Port | Protocol
--------|--------------|---------
443     | 22           | TCP
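
With HAProxy, for example, this could be a dedicated frontend bound to the alternate virtual IP that forwards port 443 to SSH on the application nodes. This is only a sketch: the virtual IP 10.6.0.15 and the names used here are made up for illustration.

    frontend gitlab-altssh-in
        bind 10.6.0.15:443            # additional virtual IP for altssh.gitlab.example.com (example value)
        mode tcp
        default_backend gitlab-rails-altssh

    backend gitlab-rails-altssh
        mode tcp
        server gitlab1 10.6.0.41:22 check
        server gitlab2 10.6.0.42:22 check
        server gitlab3 10.6.0.43:22 check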

Back to setup components

Configure Redis

Using Redis in a scalable environment is possible with a Primary x Replica topology and a Redis Sentinel service to watch and automatically start the failover procedure.

Redis requires authentication if used with Sentinel. See the Redis security documentation for more information. We recommend using a combination of a Redis password and tight firewall rules to secure your Redis service. You are highly encouraged to read the Redis Sentinel documentation before configuring Redis with GitLab, to fully understand the topology and architecture.

In this section, you'll be guided through configuring an external Redis instance to be used with GitLab. The following IPs will be used as an example:

  • 10.6.0.61: Redis primary
  • 10.6.0.62: Redis replica 1
  • 10.6.0.63: Redis replica 2

Provide your own Redis instance

A managed Redis service from a cloud provider, such as AWS ElastiCache, will work. If these services support high availability, be sure it is not the Redis Cluster type.

Redis version 5.0 or higher is required, as this is what ships with Omnibus GitLab packages starting with GitLab 13.0. Older Redis versions do not support an optional count argument to SPOP, which is now required for Merge Trains.

Note the Redis node's IP address or hostname, port, and password (if required). These will be necessary later when configuring the GitLab application servers.
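
As a rough sketch of how those details are used (the hostname and password below are placeholders, not values from this page), an externally provided Redis instance is pointed to from /etc/gitlab/gitlab.rb on the GitLab Rails and Sidekiq nodes along these lines:

    # /etc/gitlab/gitlab.rb (sketch; host and password are placeholders)
    redis['enable'] = false                              # don't run the bundled Redis
    gitlab_rails['redis_host'] = 'redis.example.com'     # address of the managed Redis instance
    gitlab_rails['redis_port'] = 6379
    gitlab_rails['redis_password'] = '<redis-password>'  # only if the instance requires AUTH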

Standalone Redis using Omnibus GitLab

This is the section where we install and set up the new Redis instances.

The requirements for a Redis setup are the following:

  1. All Redis nodes must be able to talk to each other and accept incoming connections over the Redis (6379) and Sentinel (26379) ports (unless you change the default ports).
  2. The server that hosts the GitLab application must be able to access the Redis nodes.
  3. Protect the nodes from access from external networks (the Internet), using a firewall.

NOTE: Redis nodes (both primary and replica) will need the same password defined in redis['password']. At any time during a failover, the Sentinels can reconfigure a node and change its status from primary to replica and vice versa.

Configuring the primary Redis instance

  1. SSH in to the primary Redis server.
  2. Download and install the Omnibus GitLab package of your choice using steps 1 and 2 from the GitLab downloads page.
    • Make sure you select the correct Omnibus package, with the same version and type (Community or Enterprise editions) as your current install.
    • Do not complete any other steps on the download page.
  3. Edit /etc/gitlab/gitlab.rb and add the following contents:

    # Specify server role as 'redis_master_role'
    roles ['redis_master_role']

    # IP address pointing to a local IP that the other machines can reach to.
    # You can also set bind to '0.0.0.0' which listen in all interfaces.
    # If you really need to bind to an external accessible IP, make
    # sure you add extra firewall rules to prevent unauthorized access.
    redis['bind'] = '10.6.0.61'

    # Define a port so Redis can listen for TCP requests which will allow other
    # machines to connect to it.
    redis['port'] = 6379

    # Set up password authentication for Redis (use the same password in all nodes).
    redis['password'] = 'redis-password-goes-here'

    ## Enable service discovery for Prometheus
    consul['enable'] = true
    consul['monitoring_service_discovery'] = true

    ## The IPs of the Consul server nodes
    ## You can also use FQDNs and intermix them with IPs
    consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
    }

    # Set the network addresses that the exporters will listen on
    node_exporter['listen_address'] = '0.0.0.0:9100'
    redis_exporter['listen_address'] = '0.0.0.0:9121'
    redis_exporter['flags'] = {
      'redis.addr' => 'redis://10.6.0.61:6379',
      'redis.password' => 'redis-password-goes-here',
    }

    # Disable auto migrations
    gitlab_rails['auto_migrate'] = false
  4. Reconfigure Omnibus GitLab for the changes to take effect.

NOTE: You can specify multiple roles, like Sentinel and Redis, as: roles ['redis_sentinel_role', 'redis_master_role']. Read more about roles.

You can list the current Redis primary and replica status via:

  /opt/gitlab/embedded/bin/redis-cli -h <host> -a 'redis-password-goes-here' info replication

Show the running GitLab services via:

  gitlab-ctl status

The output should be similar to the following:

  run: consul: (pid 30043) 76863s; run: log: (pid 29691) 76892s
  run: logrotate: (pid 31152) 3070s; run: log: (pid 29595) 76908s
  run: node-exporter: (pid 30064) 76862s; run: log: (pid 29624) 76904s
  run: redis: (pid 30070) 76861s; run: log: (pid 29573) 76914s
  run: redis-exporter: (pid 30075) 76861s; run: log: (pid 29674) 76896s

Configuring the replica Redis instances

  1. SSH in to the replica Redis server.
  2. Download and install the Omnibus GitLab package of your choice using steps 1 and 2 from the GitLab downloads page.
    • Make sure you select the correct Omnibus package, with the same version and type (Community or Enterprise editions) as your current install.
    • Do not complete any other steps on the download page.
  3. Edit /etc/gitlab/gitlab.rb and add the following contents:

    # Specify server role as 'redis_replica_role'
    roles ['redis_replica_role']

    # IP address pointing to a local IP that the other machines can reach to.
    # You can also set bind to '0.0.0.0' which listen in all interfaces.
    # If you really need to bind to an external accessible IP, make
    # sure you add extra firewall rules to prevent unauthorized access.
    redis['bind'] = '10.6.0.62'

    # Define a port so Redis can listen for TCP requests which will allow other
    # machines to connect to it.
    redis['port'] = 6379

    # The same password for Redis authentication you set up for the primary node.
    redis['password'] = 'redis-password-goes-here'

    # The IP of the primary Redis node.
    redis['master_ip'] = '10.6.0.61'

    # Port of primary Redis server, uncomment to change to non default. Defaults
    # to `6379`.
    #redis['master_port'] = 6379

    ## Enable service discovery for Prometheus
    consul['enable'] = true
    consul['monitoring_service_discovery'] = true

    ## The IPs of the Consul server nodes
    ## You can also use FQDNs and intermix them with IPs
    consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
    }

    # Set the network addresses that the exporters will listen on
    node_exporter['listen_address'] = '0.0.0.0:9100'
    redis_exporter['listen_address'] = '0.0.0.0:9121'
    redis_exporter['flags'] = {
      'redis.addr' => 'redis://10.6.0.62:6379',
      'redis.password' => 'redis-password-goes-here',
    }

    # Disable auto migrations
    gitlab_rails['auto_migrate'] = false
  4. Reconfigure Omnibus GitLab for the changes to take effect.

  5. Go through the steps again for all the other replica nodes, and make sure to set up the IPs correctly.

NOTE: You can specify multiple roles, like Sentinel and Redis, as: roles ['redis_sentinel_role', 'redis_master_role']. Read more about roles.

These values don't have to be changed again in /etc/gitlab/gitlab.rb after a failover: the nodes will be managed by the Sentinels, and even after a gitlab-ctl reconfigure, they will get their configuration restored by the same Sentinels.

Advanced configuration options are supported and can be added if needed.

Back to setup components

Configure Consul and Sentinel

NOTE: If you are using an external Redis Sentinel instance, be sure to exclude the requirepass parameter from the Sentinel configuration. This parameter will cause clients to report NOAUTH Authentication required. Redis Sentinel 3.2.x does not support password authentication.

Now that the Redis servers are all set up, let's configure the Sentinel servers. The following IPs will be used as an example:

  • 10.6.0.11: Consul/Sentinel 1
  • 10.6.0.12: Consul/Sentinel 2
  • 10.6.0.13: Consul/Sentinel 3

To configure the Sentinel:

  1. SSH in to the server that will host Consul/Sentinel.
  2. Download and install the Omnibus GitLab Enterprise Edition package using steps 1 and 2 from the GitLab downloads page.
    • Make sure you select the correct Omnibus package, with the same version the GitLab application is running.
    • Do not complete any other steps on the download page.
  3. Edit /etc/gitlab/gitlab.rb and add the following contents:

    roles ['redis_sentinel_role', 'consul_role']

    # Must be the same in every sentinel node
    redis['master_name'] = 'gitlab-redis'

    # The same password for Redis authentication you set up for the primary node.
    redis['master_password'] = 'redis-password-goes-here'

    # The IP of the primary Redis node.
    redis['master_ip'] = '10.6.0.61'

    # Define a port so Redis can listen for TCP requests which will allow other
    # machines to connect to it.
    redis['port'] = 6379

    # Port of primary Redis server, uncomment to change to non default. Defaults
    # to `6379`.
    #redis['master_port'] = 6379

    ## Configure Sentinel
    sentinel['bind'] = '10.6.0.11'

    # Port that Sentinel listens on, uncomment to change to non default. Defaults
    # to `26379`.
    # sentinel['port'] = 26379

    ## Quorum must reflect the amount of voting sentinels it takes to start a failover.
    ## Value must NOT be greater than the amount of sentinels.
    ##
    ## The quorum can be used to tune Sentinel in two ways:
    ## 1. If the quorum is set to a value smaller than the majority of Sentinels
    ##    we deploy, we are basically making Sentinel more sensitive to primary failures,
    ##    triggering a failover as soon as even just a minority of Sentinels is no longer
    ##    able to talk with the primary.
    ## 2. If the quorum is set to a value greater than the majority of Sentinels, we are
    ##    making Sentinel able to failover only when there are a very large number (larger
    ##    than majority) of well connected Sentinels which agree about the primary being down.
    sentinel['quorum'] = 2

    ## Consider unresponsive server down after x amount of ms.
    # sentinel['down_after_milliseconds'] = 10000

    ## Specifies the failover timeout in milliseconds. It is used in many ways:
    ##
    ## - The time needed to re-start a failover after a previous failover was
    ##   already tried against the same primary by a given Sentinel, is two
    ##   times the failover timeout.
    ##
    ## - The time needed for a replica replicating to a wrong primary according
    ##   to a Sentinel current configuration, to be forced to replicate
    ##   with the right primary, is exactly the failover timeout (counting since
    ##   the moment a Sentinel detected the misconfiguration).
    ##
    ## - The time needed to cancel a failover that is already in progress but
    ##   did not produce any configuration change (REPLICAOF NO ONE yet not
    ##   acknowledged by the promoted replica).
    ##
    ## - The maximum time a failover in progress waits for all the replicas to be
    ##   reconfigured as replicas of the new primary. However even after this time
    ##   the replicas will be reconfigured by the Sentinels anyway, but not with
    ##   the exact parallel-syncs progression as specified.
    # sentinel['failover_timeout'] = 60000

    ## Enable service discovery for Prometheus
    consul['enable'] = true
    consul['monitoring_service_discovery'] = true

    ## The IPs of the Consul server nodes
    ## You can also use FQDNs and intermix them with IPs
    consul['configuration'] = {
      server: true,
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
    }

    # Set the network addresses that the exporters will listen on
    node_exporter['listen_address'] = '0.0.0.0:9100'
    redis_exporter['listen_address'] = '0.0.0.0:9121'

    # Disable auto migrations
    gitlab_rails['auto_migrate'] = false
  4. Reconfigure Omnibus GitLab for the changes to take effect.

  5. Go through the steps again for all the other Consul/Sentinel nodes, and make sure you set up the correct IPs.

NOTE: A Consul leader will be elected when the provisioning of the third Consul server is complete. Viewing the Consul logs with sudo gitlab-ctl tail consul will display ...[INFO] consul: New leader elected: ...

You can list the current Consul members (server, client):

  sudo /opt/gitlab/embedded/bin/consul members

You can verify the GitLab services are running:

  sudo gitlab-ctl status

The output should be similar to the following:

  run: consul: (pid 30074) 76834s; run: log: (pid 29740) 76844s
  run: logrotate: (pid 30925) 3041s; run: log: (pid 29649) 76861s
  run: node-exporter: (pid 30093) 76833s; run: log: (pid 29663) 76855s
  run: sentinel: (pid 30098) 76832s; run: log: (pid 29704) 76850s

Back to setup components

Configure PostgreSQL

In this section, you'll be guided through configuring an external PostgreSQL database to be used with GitLab.

Provide your own PostgreSQL instance

If you're hosting GitLab on a cloud provider, you can optionally use a managed service for PostgreSQL. For example, AWS offers a managed Relational Database Service (RDS) that runs PostgreSQL.

If you use a cloud-managed service, or provide your own PostgreSQL:

  1. Set up PostgreSQL according to the database requirements document.
  2. Set up a gitlab username with a password of your choice. The gitlab user needs privileges to create the gitlabhq_production database.
  3. Configure the GitLab application servers with the appropriate connection details. This step is covered in Configuring the GitLab Rails application; a minimal connection sketch is shown below.
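
As a rough sketch of that last step (the host and password are placeholders, and the full set of options depends on your provider), the externally provided database is wired in via /etc/gitlab/gitlab.rb on the Rails nodes:

    # /etc/gitlab/gitlab.rb (sketch; host and password are placeholders)
    postgresql['enable'] = false                       # don't run the bundled PostgreSQL
    gitlab_rails['db_adapter'] = 'postgresql'
    gitlab_rails['db_encoding'] = 'unicode'
    gitlab_rails['db_host'] = 'postgres.example.com'   # managed PostgreSQL endpoint
    gitlab_rails['db_port'] = 5432
    gitlab_rails['db_username'] = 'gitlab'
    gitlab_rails['db_password'] = '<postgresql_user_password>'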

Standalone PostgreSQL using Omnibus GitLab

The following IPs will be used as an example:

  • 10.6.0.31: PostgreSQL primary
  • 10.6.0.32: PostgreSQL secondary 1
  • 10.6.0.33: PostgreSQL secondary 2

First, make sure to install the Linux GitLab package on each node. Following the steps, install the necessary dependencies from step 1, and add the GitLab package repository from step 2. When installing GitLab in the second step, do not supply the EXTERNAL_URL value.

PostgreSQL primary node

  1. SSH in to the PostgreSQL primary node.
  2. Generate a password hash for the PostgreSQL username/password pair. This assumes you will use the default username of gitlab (recommended). The command will request a password and confirmation. Use the value that is output by this command in the next step as the value of <postgresql_password_hash>:

    sudo gitlab-ctl pg-password-md5 gitlab
  3. Generate a password hash for the PgBouncer username/password pair. This assumes you will use the default username of pgbouncer (recommended). The command will request a password and confirmation. Use the value that is output by this command in the next step as the value of <pgbouncer_password_hash>:

    sudo gitlab-ctl pg-password-md5 pgbouncer
  4. Generate a password hash for the Consul database username/password pair. This assumes you will use the default username of gitlab-consul (recommended). The command will request a password and confirmation. Use the value that is output by this command in the next step as the value of <consul_password_hash>:

    sudo gitlab-ctl pg-password-md5 gitlab-consul
  5. On the primary database node, edit /etc/gitlab/gitlab.rb, replacing the values noted in the # START user configuration section:

    # Disable all components except PostgreSQL and Repmgr and Consul
    roles ['postgres_role']

    # PostgreSQL configuration
    postgresql['listen_address'] = '0.0.0.0'
    postgresql['hot_standby'] = 'on'
    postgresql['wal_level'] = 'replica'
    postgresql['shared_preload_libraries'] = 'repmgr_funcs'

    # Disable automatic database migrations
    gitlab_rails['auto_migrate'] = false

    # Configure the Consul agent
    consul['services'] = %w(postgresql)

    # START user configuration
    # Please set the real values as explained in Required Information section
    #
    # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
    postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
    # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
    postgresql['sql_user_password'] = '<postgresql_password_hash>'
    # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
    # This is used to prevent replication from using up all of the
    # available database connections.
    postgresql['max_wal_senders'] = 4
    postgresql['max_replication_slots'] = 4

    # Replace XXX.XXX.XXX.XXX/YY with Network Address
    postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
    repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)

    ## Enable service discovery for Prometheus
    consul['enable'] = true
    consul['monitoring_service_discovery'] = true

    # Set the network addresses that the exporters will listen on for monitoring
    node_exporter['listen_address'] = '0.0.0.0:9100'
    postgres_exporter['listen_address'] = '0.0.0.0:9187'
    postgres_exporter['dbname'] = 'gitlabhq_production'
    postgres_exporter['password'] = '<postgresql_password_hash>'

    ## The IPs of the Consul server nodes
    ## You can also use FQDNs and intermix them with IPs
    consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
    }
    #
    # END user configuration
  6. Reconfigure GitLab for the changes to take effect.

  7. You can list the current status of the PostgreSQL primary and secondary nodes via:

    sudo /opt/gitlab/bin/gitlab-ctl repmgr cluster show
  8. Verify the GitLab services are running:

    sudo gitlab-ctl status

    The output should be similar to the following:

    run: consul: (pid 30593) 77133s; run: log: (pid 29912) 77156s
    run: logrotate: (pid 23449) 3341s; run: log: (pid 29794) 77175s
    run: node-exporter: (pid 30613) 77133s; run: log: (pid 29824) 77170s
    run: postgres-exporter: (pid 30620) 77132s; run: log: (pid 29894) 77163s
    run: postgresql: (pid 30630) 77132s; run: log: (pid 29618) 77181s
    run: repmgrd: (pid 30639) 77132s; run: log: (pid 29985) 77150s

Back to setup components

PostgreSQL secondary nodes

  1. On both secondary nodes, add the same configuration specified above for the primary node, plus an additional setting that will tell gitlab-ctl that they are standby nodes initially and that there's no need to attempt to register them as a primary node:

    # Disable all components except PostgreSQL and Repmgr and Consul
    roles ['postgres_role']

    # PostgreSQL configuration
    postgresql['listen_address'] = '0.0.0.0'
    postgresql['hot_standby'] = 'on'
    postgresql['wal_level'] = 'replica'
    postgresql['shared_preload_libraries'] = 'repmgr_funcs'

    # Disable automatic database migrations
    gitlab_rails['auto_migrate'] = false

    # Configure the Consul agent
    consul['services'] = %w(postgresql)

    # Specify if a node should attempt to be primary on initialization.
    repmgr['master_on_initialization'] = false

    # START user configuration
    # Please set the real values as explained in Required Information section
    #
    # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
    postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
    # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
    postgresql['sql_user_password'] = '<postgresql_password_hash>'
    # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
    # This is used to prevent replication from using up all of the
    # available database connections.
    postgresql['max_wal_senders'] = 4
    postgresql['max_replication_slots'] = 4

    # Replace XXX.XXX.XXX.XXX/YY with Network Address
    postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
    repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)

    ## Enable service discovery for Prometheus
    consul['enable'] = true
    consul['monitoring_service_discovery'] = true

    # Set the network addresses that the exporters will listen on for monitoring
    node_exporter['listen_address'] = '0.0.0.0:9100'
    postgres_exporter['listen_address'] = '0.0.0.0:9187'
    postgres_exporter['dbname'] = 'gitlabhq_production'
    postgres_exporter['password'] = '<postgresql_password_hash>'

    ## The IPs of the Consul server nodes
    ## You can also use FQDNs and intermix them with IPs
    consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
    }
    # END user configuration
  2. Reconfigure GitLab for the changes to take effect.

Advanced configuration options are supported and can be added if needed.

Back to setup components

PostgreSQL post-configuration

SSH in to the primary node:

  1. Open a database prompt:

    gitlab-psql -d gitlabhq_production
  2. Enable the pg_trgm extension:

    CREATE EXTENSION pg_trgm;
  3. Exit the database prompt by typing \q and Enter.

  4. Verify the cluster is initialized with one node:

    gitlab-ctl repmgr cluster show

    The output should be similar to the following:

    Role      | Name     | Upstream | Connection String
    ----------+----------|----------|----------------------------------------
    * master  | HOSTNAME |          | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
  5. Note down the hostname or IP address in the connection string: host=HOSTNAME. We will refer to it in the next section as <primary_node_name>. If the value is not an IP address, it must be a resolvable name (via DNS or /etc/hosts).

SSH in to the secondary node:

  1. Set up the repmgr standby database:

    gitlab-ctl repmgr standby setup <primary_node_name>

    Do note that this will remove the existing data on the node. The command has a wait time.

    The output should be similar to the following:

    Doing this will delete the entire contents of /var/opt/gitlab/postgresql/data
    If this is not what you want, hit Ctrl-C now to exit
    To skip waiting, rerun with the -w option
    Sleeping for 30 seconds
    Stopping the database
    Removing the data
    Cloning the data
    Starting the database
    Registering the node with the cluster
    ok: run: repmgrd: (pid 19068) 0s

Before moving on, make sure the databases are configured correctly. Run the following command on the primary node to verify that replication is working properly and that the secondary nodes appear in the cluster:

  gitlab-ctl repmgr cluster show

The output should be similar to the following:

  Role      | Name    | Upstream | Connection String
  ----------+---------|----------|------------------------------------------------
  * master  | MASTER  |          | host=<primary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
    standby | STANDBY | MASTER   | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
    standby | STANDBY | MASTER   | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr

If the 'Role' column for any node says 'FAILED', check the Troubleshooting section before proceeding.

Also, check that the repmgr-check-master command works successfully on each node:

  su - gitlab-consul
  gitlab-ctl repmgr-check-master || echo 'This node is a standby repmgr node'

This command relies on exit codes to tell Consul whether a particular node is a master or secondary. The most important thing here is that this command does not produce errors. If there are errors, it's most likely due to incorrect gitlab-consul database user permissions. Check the Troubleshooting section before proceeding.

Back to setup components

Configure PgBouncer

Now that the PostgreSQL servers are all set up, let's configure PgBouncer. The following IPs will be used as an example:

  • 10.6.0.21: PgBouncer 1
  • 10.6.0.22: PgBouncer 2
  • 10.6.0.23: PgBouncer 3
  1. On each PgBouncer node, edit /etc/gitlab/gitlab.rb, and replace <consul_password_hash> and <pgbouncer_password_hash> with the password hashes you set up previously:

    # Disable all components except Pgbouncer and Consul agent
    roles ['pgbouncer_role']

    # Configure PgBouncer
    pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
    pgbouncer['users'] = {
      'gitlab-consul': {
        password: '<consul_password_hash>'
      },
      'pgbouncer': {
        password: '<pgbouncer_password_hash>'
      }
    }

    # Configure Consul agent
    consul['watchers'] = %w(postgresql)
    consul['enable'] = true
    consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
    }

    # Enable service discovery for Prometheus
    consul['monitoring_service_discovery'] = true

    # Set the network addresses that the exporters will listen on
    node_exporter['listen_address'] = '0.0.0.0:9100'
    pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
  2. Reconfigure Omnibus GitLab for the changes to take effect.

  3. Create a .pgpass file so Consul is able to reload PgBouncer. Enter the PgBouncer password twice when asked:

    gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
  4. Ensure each node is talking to the current primary:

    gitlab-ctl pgb-console # You will be prompted for PGBOUNCER_PASSWORD

    If there is an error psql: ERROR: Auth failed after entering the password, make sure you previously generated the MD5 password hashes in the correct format. The correct format is the password concatenated with the username: PASSWORDUSERNAME. For example, Sup3rS3cr3tpgbouncer would be the text needed to generate an MD5 password hash for the pgbouncer user.

  5. Once the console prompt is available, run the following queries:

    show databases ; show clients ;

    The output should be similar to the following:

    name                | host        | port | database            | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
    --------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
    gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production |            | 20        | 0            |           | 0               | 0
    pgbouncer           |             | 6432 | pgbouncer           | pgbouncer  | 2         | 0            | statement | 0               | 0
    (2 rows)

    type | user      | database  | state  | addr      | port  | local_addr | local_port | connect_time        | request_time        | ptr       | link | remote_pid | tls
    -----+-----------+-----------+--------+-----------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
    C    | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1  | 6432       | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 |      | 0          |
    (2 rows)
  6. Verify the GitLab services are running:

    sudo gitlab-ctl status

    The output should be similar to the following:

    run: consul: (pid 31530) 77150s; run: log: (pid 31106) 77182s
    run: logrotate: (pid 32613) 3357s; run: log: (pid 30107) 77500s
    run: node-exporter: (pid 31550) 77149s; run: log: (pid 30138) 77493s
    run: pgbouncer: (pid 32033) 75593s; run: log: (pid 31117) 77175s
    run: pgbouncer-exporter: (pid 31558) 77148s; run: log: (pid 31498) 77156s

Back to setup components

Configure the internal load balancer

If you're running more than one PgBouncer node as recommended, then at this time you'll need to set up a TCP internal load balancer to serve each correctly.

The following IP will be used as an example:

  • 10.6.0.20: Internal load balancer

Here's how you could do it with HAProxy:

  global
      log /dev/log local0
      log localhost local1 notice
      log stdout format raw local0

  defaults
      log global
      default-server inter 10s fall 3 rise 2
      balance leastconn

  frontend internal-pgbouncer-tcp-in
      bind *:6432
      mode tcp
      option tcplog

      default_backend pgbouncer

  backend pgbouncer
      mode tcp
      option tcp-check

      server pgbouncer1 10.6.0.21:6432 check
      server pgbouncer2 10.6.0.22:6432 check
      server pgbouncer3 10.6.0.23:6432 check

Refer to your preferred load balancer's documentation for further guidance.

Back to setup components

Configure Gitaly

Deploying Gitaly on its own server can benefit GitLab installations that are larger than a single machine.

The Gitaly node requirements are dependent on customer data, specifically the number of projects and their repository sizes. Two nodes are recommended as an absolute minimum. Each Gitaly node should store no more than 5 TB of data and have the number of gitaly-ruby workers set to 20% of the available CPUs. Additional nodes should be considered in conjunction with a review of expected data size and spread, based on the recommendations above.

It is also strongly recommended that all Gitaly nodes be set up with SSD disks with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write operations, as Gitaly has heavy I/O. These IOPS values are recommended only as a starting point; they may be adjusted higher or lower over time depending on the scale of your environment's workload. If you're running the environment on a cloud provider, you may need to refer to their documentation on how to configure IOPS correctly.
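
As an illustration of the 20% guidance above (the node size and the resulting worker count here are assumptions, not values prescribed by this architecture, and the setting name should be verified against the Gitaly documentation for your version), a 10-vCPU Gitaly node could be configured roughly like this in /etc/gitlab/gitlab.rb:

    # /etc/gitlab/gitlab.rb on a Gitaly node (sketch; assumes a 10-vCPU machine)
    # 20% of 10 available CPUs -> 2 gitaly-ruby workers
    gitaly['ruby_num_workers'] = 2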

Some things to note:

  • The GitLab Rails application shards repositories into repository storages.
  • A Gitaly server can host one or more storages.
  • A GitLab server can use one or more Gitaly servers.
  • Gitaly addresses must be specified in such a way that they resolve correctly for all Gitaly clients.
  • Gitaly servers must not be exposed to the public internet, as Gitaly's network traffic is unencrypted by default. The use of a firewall is highly recommended to restrict access to the Gitaly server. Another option is to use TLS.

TIP: For more information about Gitaly's history and network architecture, see the standalone Gitaly documentation.

NOTE: The token referred to throughout the Gitaly documentation is just an arbitrary password selected by the administrator. It is unrelated to tokens created for the GitLab API or other similar web API tokens.

Below we describe how to configure two Gitaly servers, with the following IPs and domain names:

  • 10.6.0.51: Gitaly 1 (gitaly1.internal)
  • 10.6.0.52: Gitaly 2 (gitaly2.internal)

The secret token is assumed to be gitalysecret, and your GitLab installation is assumed to have three repository storages:

  • default on Gitaly 1
  • storage1 on Gitaly 1
  • storage2 on Gitaly 2

On each node:

  1. Download and install the Omnibus GitLab package of your choice using steps 1 and 2 from the GitLab downloads page, but do not provide the EXTERNAL_URL value.
  2. Edit /etc/gitlab/gitlab.rb to configure the storage paths, enable the network listeners, and configure the token:

    # /etc/gitlab/gitlab.rb

    # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
    # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
    # The following two values must be the same as their respective values
    # of the GitLab Rails application setup
    gitaly['auth_token'] = 'gitalysecret'
    gitlab_shell['secret_token'] = 'shellsecret'

    # Avoid running unnecessary services on the Gitaly server
    postgresql['enable'] = false
    redis['enable'] = false
    nginx['enable'] = false
    puma['enable'] = false
    unicorn['enable'] = false
    sidekiq['enable'] = false
    gitlab_workhorse['enable'] = false
    grafana['enable'] = false
    gitlab_exporter['enable'] = false

    # If you run a separate monitoring node you can disable these services
    alertmanager['enable'] = false
    prometheus['enable'] = false

    # Prevent database connections during 'gitlab-ctl reconfigure'
    gitlab_rails['rake_cache_clear'] = false
    gitlab_rails['auto_migrate'] = false

    # Configure the gitlab-shell API callback URL. Without this, `git push` will
    # fail. This can be your 'front door' GitLab URL or an internal load
    # balancer.
    # Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
    gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'

    # Make Gitaly accept connections on all network interfaces. You must use
    # firewalls to restrict access to this address/port.
    # Comment out following line if you only want to support TLS connections
    gitaly['listen_addr'] = "0.0.0.0:8075"

    ## Enable service discovery for Prometheus
    consul['enable'] = true
    consul['monitoring_service_discovery'] = true

    # Set the network addresses that the exporters will listen on for monitoring
    gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
    node_exporter['listen_address'] = '0.0.0.0:9100'
    gitlab_rails['prometheus_address'] = '10.6.0.81:9090'

    ## The IPs of the Consul server nodes
    ## You can also use FQDNs and intermix them with IPs
    consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
    }
  3. Append the following to /etc/gitlab/gitlab.rb for each respective server:

    1. On gitaly1.internal:

      git_data_dirs({ 'default' => { 'path' => '/var/opt/gitlab/git-data' }, 'storage1' => { 'path' => '/mnt/gitlab/git-data' }, })
    2. On gitaly2.internal:

      git_data_dirs({ 'storage2' => { 'path' => '/mnt/gitlab/git-data' }, })
  4. Save the file and reconfigure GitLab.

  5. Confirm that Gitaly can perform callbacks to the internal API:

    sudo /opt/gitlab/embedded/service/gitlab-shell/bin/check -config /opt/gitlab/embedded/service/gitlab-shell/config.yml
  6. Verify the GitLab services are running:

    sudo gitlab-ctl status

    The output should be similar to the following:

    run: consul: (pid 30339) 77006s; run: log: (pid 29878) 77020s
    run: gitaly: (pid 30351) 77005s; run: log: (pid 29660) 77040s
    run: logrotate: (pid 7760) 3213s; run: log: (pid 29782) 77032s
    run: node-exporter: (pid 30378) 77004s; run: log: (pid 29812) 77026s

Gitaly TLS support

Gitaly supports TLS encryption. To communicate with a Gitaly instance that listens for secure connections, you will need to use the tls:// URL scheme in the gitaly_address of the corresponding storage entry in the GitLab configuration.

You will need to bring your own certificates, as this isn't provided automatically. The certificate, or its certificate authority, must be installed on all Gitaly nodes (including the Gitaly node using the certificate) and on all client nodes that communicate with it, following the procedure described in GitLab custom certificate configuration.

NOTE: The self-signed certificate must specify the address you use to access the Gitaly server. If you are addressing the Gitaly server by a hostname, you can either use the Common Name field for this or add it as a Subject Alternative Name. If you are addressing the Gitaly server by its IP address, you must add it as a Subject Alternative Name to the certificate; gRPC does not support using an IP address as the Common Name in a certificate.

NOTE: It is possible to configure Gitaly servers with both an unencrypted listening address (listen_addr) and an encrypted listening address (tls_listen_addr) at the same time. This allows you to do a gradual transition from unencrypted to encrypted traffic if necessary.

To configure Gitaly with TLS:

  1. Create the /etc/gitlab/ssl directory and copy your key and certificate there:

    sudo mkdir -p /etc/gitlab/ssl
    sudo chmod 755 /etc/gitlab/ssl
    sudo cp key.pem cert.pem /etc/gitlab/ssl/
    sudo chmod 644 key.pem cert.pem
  2. Copy the cert to /etc/gitlab/trusted-certs so Gitaly will trust the cert when calling into itself:

    sudo cp /etc/gitlab/ssl/cert.pem /etc/gitlab/trusted-certs/
  3. Edit /etc/gitlab/gitlab.rb and add:

    gitaly['tls_listen_addr'] = "0.0.0.0:9999"
    gitaly['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
    gitaly['key_path'] = "/etc/gitlab/ssl/key.pem"
  4. Delete gitaly['listen_addr'] to allow only encrypted connections.

  5. Save the file and reconfigure GitLab.

Back to setup components

Configure Sidekiq

Sidekiq requires connections to the Redis, PostgreSQL and Gitaly instances. The following IPs will be used as an example:

  • 10.6.0.71: Sidekiq 1
  • 10.6.0.72: Sidekiq 2
  • 10.6.0.73: Sidekiq 3
  • 10.6.0.74: Sidekiq 4

To configure the Sidekiq nodes, on each one:

  1. SSH in to the Sidekiq server.
  2. Download and install the Omnibus GitLab package of your choice using steps 1 and 2 from the GitLab downloads page. Do not complete any other steps on the download page.
  3. Open /etc/gitlab/gitlab.rb with your editor:

    ########################################
    #####        Services Disabled       ###
    ########################################
    nginx['enable'] = false
    grafana['enable'] = false
    prometheus['enable'] = false
    gitlab_rails['auto_migrate'] = false
    alertmanager['enable'] = false
    gitaly['enable'] = false
    gitlab_workhorse['enable'] = false
    nginx['enable'] = false
    puma['enable'] = false
    postgres_exporter['enable'] = false
    postgresql['enable'] = false
    redis['enable'] = false
    redis_exporter['enable'] = false
    gitlab_exporter['enable'] = false

    ########################################
    ####              Redis              ###
    ########################################
    ## Must be the same in every sentinel node
    redis['master_name'] = 'gitlab-redis'

    ## The same password for Redis authentication you set up for the master node.
    redis['master_password'] = '<redis_primary_password>'

    ## A list of sentinels with `host` and `port`
    gitlab_rails['redis_sentinels'] = [
      {'host' => '10.6.0.11', 'port' => 26379},
      {'host' => '10.6.0.12', 'port' => 26379},
      {'host' => '10.6.0.13', 'port' => 26379},
    ]

    #######################################
    ###              Gitaly             ###
    #######################################
    git_data_dirs({
      'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
      'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
      'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
    })
    gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'

    #######################################
    ###            Postgres             ###
    #######################################
    gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
    gitlab_rails['db_port'] = 6432
    gitlab_rails['db_password'] = '<postgresql_user_password>'
    gitlab_rails['db_adapter'] = 'postgresql'
    gitlab_rails['db_encoding'] = 'unicode'
    gitlab_rails['auto_migrate'] = false

    #######################################
    ###      Sidekiq configuration      ###
    #######################################
    sidekiq['listen_address'] = "0.0.0.0"

    #######################################
    ###     Monitoring configuration    ###
    #######################################
    consul['enable'] = true
    consul['monitoring_service_discovery'] = true

    consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
    }

    # Set the network addresses that the exporters will listen on
    node_exporter['listen_address'] = '0.0.0.0:9100'

    # Rails Status for prometheus
    gitlab_rails['monitoring_whitelist'] = ['10.6.0.81/32', '127.0.0.0/8']
    gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
  4. Save the file and reconfigure GitLab.

  5. Verify the GitLab services are running:

    sudo gitlab-ctl status

    The output should be similar to the following:

    run: consul: (pid 30114) 77353s; run: log: (pid 29756) 77367s
    run: logrotate: (pid 9898) 3561s; run: log: (pid 29653) 77380s
    run: node-exporter: (pid 30134) 77353s; run: log: (pid 29706) 77372s
    run: sidekiq: (pid 30142) 77351s; run: log: (pid 29638) 77386s

TIP: You can also run multiple Sidekiq processes.

Back to setup components

Configure GitLab Rails

NOTE: In our architectures we run each GitLab Rails node using the Puma webserver, with its worker count set to 90% of available CPUs along with four threads. For nodes that are running Rails with other components, the worker value should be reduced accordingly: we've found 50% achieves a good balance, but this is dependent on workload.
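
As a rough illustration of that guidance (the node size and the resulting numbers below are assumptions, not values taken from this page), a dedicated 8-vCPU Rails node could be configured along these lines in /etc/gitlab/gitlab.rb:

    # /etc/gitlab/gitlab.rb on a dedicated 8-vCPU GitLab Rails node (sketch)
    # 90% of 8 available CPUs -> 7 Puma workers, each with 4 threads
    puma['worker_processes'] = 7
    puma['min_threads'] = 4
    puma['max_threads'] = 4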

This section describes how to configure the GitLab application (Rails) component. Perform the following on each node:

  1. If you're using NFS:

    1. If necessary, install the NFS client utility packages using the following commands:

      # Ubuntu/Debian
      apt-get install nfs-common

      # CentOS/Red Hat
      yum install nfs-utils nfs-utils-lib
    2. Specify the necessary NFS mounts in /etc/fstab. The exact contents of /etc/fstab will depend on how you chose to configure your NFS server. See the NFS documentation for examples and the various options.

    3. Create the shared directories. These may be different depending on your NFS mount locations.

      mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
  2. Download and install Omnibus GitLab using steps 1 and 2 from the GitLab downloads page. Do not complete other steps on the download page.

  3. Create or edit /etc/gitlab/gitlab.rb and use the following configuration. To maintain uniformity of links across nodes, the external_url on the application server should point to the external URL that users will use to access GitLab. This would be the URL of the external load balancer which will route traffic to the GitLab application server:

    external_url 'https://gitlab.example.com'

    # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
    # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
    # The following two values must be the same as their respective values
    # of the Gitaly setup
    gitlab_rails['gitaly_token'] = 'gitalysecret'
    gitlab_shell['secret_token'] = 'shellsecret'

    git_data_dirs({
      'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
      'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
      'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
    })

    ## Disable components that will not be on the GitLab application server
    roles ['application_role']
    gitaly['enable'] = false
    nginx['enable'] = true
    sidekiq['enable'] = false

    ## PostgreSQL connection details
    # Disable PostgreSQL on the application node
    postgresql['enable'] = false
    gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
    gitlab_rails['db_port'] = 6432
    gitlab_rails['db_password'] = '<postgresql_user_password>'
    gitlab_rails['auto_migrate'] = false

    ## Redis connection details
    ## Must be the same in every sentinel node
    redis['master_name'] = 'gitlab-redis'

    ## The same password for Redis authentication you set up for the Redis primary node.
    redis['master_password'] = '<redis_primary_password>'

    ## A list of sentinels with `host` and `port`
    gitlab_rails['redis_sentinels'] = [
      {'host' => '10.6.0.11', 'port' => 26379},
      {'host' => '10.6.0.12', 'port' => 26379},
      {'host' => '10.6.0.13', 'port' => 26379}
    ]

    ## Enable service discovery for Prometheus
    consul['enable'] = true
    consul['monitoring_service_discovery'] = true

    # Set the network addresses that the exporters used for monitoring will listen on
    node_exporter['listen_address'] = '0.0.0.0:9100'
    gitlab_workhorse['prometheus_listen_addr'] = '0.0.0.0:9229'
    sidekiq['listen_address'] = "0.0.0.0"
    puma['listen'] = '0.0.0.0'

    ## The IPs of the Consul server nodes
    ## You can also use FQDNs and intermix them with IPs
    consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
    }

    # Add the monitoring node's IP address to the monitoring whitelist and allow it to
    # scrape the NGINX metrics
    gitlab_rails['monitoring_whitelist'] = ['10.6.0.81/32', '127.0.0.0/8']
    nginx['status']['options']['allow'] = ['10.6.0.81/32', '127.0.0.0/8']
    gitlab_rails['prometheus_address'] = '10.6.0.81:9090'

    ## Uncomment and edit the following options if you have set up NFS
    ##
    ## Prevent GitLab from starting if NFS data mounts are not available
    ##
    #high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
    ##
    ## Ensure UIDs and GIDs match between servers for permissions via NFS
    ##
    #user['uid'] = 9000
    #user['gid'] = 9000
    #web_server['uid'] = 9001
    #web_server['gid'] = 9001
    #registry['uid'] = 9002
    #registry['gid'] = 9002
  4. If you're using Gitaly with TLS support, make sure the git_data_dirs entry is configured with tls instead of tcp:

    git_data_dirs({
      'default' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
      'storage1' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
      'storage2' => { 'gitaly_address' => 'tls://gitaly2.internal:9999' },
    })
    1. Copy the cert into /etc/gitlab/trusted-certs:

      sudo cp cert.pem /etc/gitlab/trusted-certs/
  5. Save the file and reconfigure GitLab.

  6. Run sudo gitlab-rake gitlab:gitaly:check to confirm the node can connect to Gitaly.
  7. Tail the logs to see the requests:

    sudo gitlab-ctl tail gitaly
  8. Verify the GitLab services are running:

    sudo gitlab-ctl status

    The output should be similar to the following:

    run: consul: (pid 4890) 8647s; run: log: (pid 29962) 79128s
    run: gitlab-exporter: (pid 4902) 8647s; run: log: (pid 29913) 79134s
    run: gitlab-workhorse: (pid 4904) 8646s; run: log: (pid 29713) 79155s
    run: logrotate: (pid 12425) 1446s; run: log: (pid 29798) 79146s
    run: nginx: (pid 4925) 8646s; run: log: (pid 29726) 79152s
    run: node-exporter: (pid 4931) 8645s; run: log: (pid 29855) 79140s
    run: puma: (pid 4936) 8645s; run: log: (pid 29656) 79161s

NOTE: When you specify https in the external_url, as in the example above, GitLab assumes you have SSL certificates in /etc/gitlab/ssl/. If certificates are not present, NGINX will fail to start. See the NGINX documentation for more information.

GitLab Rails post-configuration

  1. Ensure that all migrations ran:

    gitlab-rake gitlab:db:configure

    NOTE: If you encounter a rake aborted! error stating that PgBouncer is failing to connect to PostgreSQL, it may be that your PgBouncer node's IP address is missing from PostgreSQL's trust_auth_cidr_addresses in gitlab.rb on your database nodes. See PgBouncer error ERROR: pgbouncer cannot connect to server in the Troubleshooting section before proceeding.

  2. Configure fast lookup of authorized SSH keys in the database.
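
As a sketch of what that step involves on an Omnibus install (verify the exact paths and the remaining steps, such as disabling writes to the authorized_keys file in the Admin Area, against the fast lookup documentation for your version), sshd on each Rails node is pointed at GitLab's authorized-keys check via /etc/ssh/sshd_config:

    # /etc/ssh/sshd_config (sketch; confirm paths against the fast lookup documentation)
    Match User git    # Apply the AuthorizedKeysCommand to the git user only
      AuthorizedKeysCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-keys-check git %u %k
      AuthorizedKeysCommandUser git
    Match all

Reload sshd afterwards (for example, sudo systemctl reload sshd) for the change to take effect.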

Back to setup components

Configure Prometheus

The Omnibus GitLab package can be used to configure a standalone Monitoring node running Prometheus and Grafana:

  1. SSH in to the Monitoring node.
  2. Download and install the Omnibus GitLab package of your choice using steps 1 and 2 from the GitLab downloads page. Do not complete any other steps on the download page.
  3. Edit /etc/gitlab/gitlab.rb and add the following contents:

    external_url 'http://gitlab.example.com'

    # Disable all other services
    gitlab_rails['auto_migrate'] = false
    alertmanager['enable'] = false
    gitaly['enable'] = false
    gitlab_exporter['enable'] = false
    gitlab_workhorse['enable'] = false
    nginx['enable'] = true
    postgres_exporter['enable'] = false
    postgresql['enable'] = false
    redis['enable'] = false
    redis_exporter['enable'] = false
    sidekiq['enable'] = false
    puma['enable'] = false
    unicorn['enable'] = false
    node_exporter['enable'] = false
    gitlab_exporter['enable'] = false

    # Enable Prometheus
    prometheus['enable'] = true
    prometheus['listen_address'] = '0.0.0.0:9090'
    prometheus['monitor_kubernetes'] = false

    # Enable Login form
    grafana['disable_login_form'] = false

    # Enable Grafana
    grafana['enable'] = true
    grafana['admin_password'] = '<grafana_password>'

    # Enable service discovery for Prometheus
    consul['enable'] = true
    consul['monitoring_service_discovery'] = true
    consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
    }
  4. Save the file and reconfigure GitLab.

  5. In the GitLab UI, set admin/application_settings/metrics_and_profiling > Metrics - Grafana to http[s]://<MONITOR NODE>/-/grafana.
  6. Verify the GitLab services are running:

    sudo gitlab-ctl status

    The output should be similar to the following:

    run: consul: (pid 31637) 17337s; run: log: (pid 29748) 78432s
    run: grafana: (pid 31644) 17337s; run: log: (pid 29719) 78438s
    run: logrotate: (pid 31809) 2936s; run: log: (pid 29581) 78462s
    run: nginx: (pid 31665) 17335s; run: log: (pid 29556) 78468s
    run: prometheus: (pid 31672) 17335s; run: log: (pid 29633) 78456s

Back to setup components

Configure the object storage

GitLab supports using an object storage service for holding numerous types of data. It's recommended over NFS and, in general, it's better in larger setups, as object storage is typically much more performant, reliable, and scalable.

Object storage options that GitLab has tested, or is aware of customers using, include cloud provider services such as Amazon S3 and Google Cloud Storage, as well as on-premises hardware, appliances, and MinIO.

To configure GitLab to use object storage, refer to the following guides, based on which features you intend to use:

  1. Configure object storage for backups.
  2. Configure object storage for job artifacts, including incremental logging.
  3. Configure object storage for LFS objects.
  4. Configure object storage for uploads.
  5. Configure object storage for merge request diffs.
  6. Configure object storage for the Container Registry (optional feature).
  7. Configure object storage for Mattermost (optional feature).
  8. Configure object storage for packages (optional feature).
  9. Configure object storage for the Dependency Proxy (optional feature).
  10. Configure object storage for the Pseudonymizer (optional feature).
  11. Configure object storage for autoscale Runner caching (optional, for improved performance).
  12. Configure object storage for Terraform state files.

Using separate buckets for each data type is the recommended approach for GitLab.
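
As a minimal sketch of what one of these guides leads to (the bucket name, region, and credentials below are placeholders, and the exact keys depend on the data type and your GitLab version), configuring uploads to use an S3 bucket looks roughly like this in /etc/gitlab/gitlab.rb on the Rails and Sidekiq nodes:

    # /etc/gitlab/gitlab.rb (sketch; all values are placeholders)
    gitlab_rails['uploads_object_store_enabled'] = true
    gitlab_rails['uploads_object_store_remote_directory'] = 'gitlab-uploads'   # a dedicated bucket for this data type
    gitlab_rails['uploads_object_store_connection'] = {
      'provider' => 'AWS',
      'region' => 'us-east-1',
      'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
      'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
    }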

A limitation of our configuration is that each use of object storage is configured separately. We have an issue open for improving this, which would allow a single bucket with separate folders to be used.

Using a single bucket causes at least one specific problem: restoring from backup does not work properly when GitLab is deployed with the Helm chart, unless separate buckets are used.

One risk of using a single bucket is therefore that your organization may decide in the future to migrate GitLab to a Helm deployment. GitLab would run, but the backup situation might not be realized until the organization has a critical requirement for backups to work.

Back to setup components

Configure NFS (optional)

For improved performance, object storage, along with Gitaly, is recommended over using NFS whenever possible. However, if you intend to use GitLab Pages, NFS is currently required.

See how to configure NFS.

Back to setup components

Troubleshooting

See the troubleshooting documentation.

Back to setup components