6.4.6.2 X-Pack on Logstash


Configuring Security in Logstash

Refer to the official Configuring Security in Logstash documentation to complete the deployment. The steps are as follows:

Run bin/logstash-plugin install from the Logstash installation directory.

bin/logstash-plugin install x-pack

The plugin install scripts require direct internet access to download and install X-Pack. If your server doesn’t have internet access, specify the location of the X-Pack zip file that you downloaded to a temporary directory.

bin/logstash-plugin install file:///path/to/file/x-pack-6.2.4.zip

Logstash needs to be able to manage index templates, create indices, and write and delete documents in the indices it creates.

  1. Use the Management > Roles UI in Kibana or the role API to create a logstash_writer role. For cluster privileges, add manage_index_templates and monitor. For indices privileges, add write, create, delete, and create_index.

    POST _xpack/security/role/logstash_writer
    {
      "cluster": ["manage_index_templates", "monitor", "manage_ilm"], 
      "indices": [
        {
          "names": [ "logstash-*" ], 
          "privileges": ["write","create","delete","create_index","manage","manage_ilm"]  
        }
      ]
    }
    Note: If you use a custom Logstash index pattern, specify your custom pattern instead of the default logstash-* pattern.

  2. Create a logstash_internal user and assign it the logstash_writer role. You can create users from the Management > Users UI in Kibana or through the user API:

    POST _xpack/security/user/logstash_internal
    {
      "password" : "x-pack-test-password",
      "roles" : [ "logstash_writer"],
      "full_name" : "Internal Logstash User"
    }
  3. Configure Logstash to authenticate as the logstash_internal user you just created. You configure credentials separately for each of the Elasticsearch plugins in your Logstash .conf file. For example:

    input {
      elasticsearch {
        ...
        user => logstash_internal
        password => x-pack-test-password
      }
    }
    filter {
      elasticsearch {
        ...
        user => logstash_internal
        password => x-pack-test-password
      }
    }
    output {
      elasticsearch {
        ...
        user => logstash_internal
        password => x-pack-test-password
      }
    }
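Steps 1 and 2 above can also be scripted against the X-Pack security REST API instead of the Kibana UI. The sketch below only builds the request bodies and endpoint URLs; ES_URL and the password are placeholders, and the actual POST (via curl, urllib, or similar, authenticated as an admin user) is deliberately left out:

```python
import json

ES_URL = "http://localhost:9200"  # placeholder; point at your cluster

# Role granting Logstash the privileges listed above (the ILM-related
# ones are only needed when index lifecycle management is enabled).
logstash_writer_role = {
    "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
    "indices": [
        {
            "names": ["logstash-*"],  # use your custom pattern if you have one
            "privileges": ["write", "create", "delete",
                           "create_index", "manage", "manage_ilm"],
        }
    ],
}

# User that Logstash authenticates as in its .conf file.
logstash_internal_user = {
    "password": "x-pack-test-password",  # placeholder; use a real secret
    "roles": ["logstash_writer"],
    "full_name": "Internal Logstash User",
}

def endpoint(kind: str, name: str) -> str:
    """Build the X-Pack security API URL for a role or user."""
    return f"{ES_URL}/_xpack/security/{kind}/{name}"

if __name__ == "__main__":
    # POST these bodies to the printed endpoints to create the role and user.
    print(endpoint("role", "logstash_writer"))
    print(json.dumps(logstash_writer_role, indent=2))
    print(endpoint("user", "logstash_internal"))
    print(json.dumps(logstash_internal_user, indent=2))
```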

Monitoring for Logstash

Key parameters

  • xpack.monitoring.enabled

    Set to true or false to enable or disable monitoring. Disabled by default.

  • xpack.monitoring.elasticsearch.url

    The Elasticsearch address(es) that receive the metrics Logstash sends, e.g. ["http://es-prod-node-1:9200", "http://es-prod-node-2:9200"].

  • xpack.monitoring.elasticsearch.username and xpack.monitoring.elasticsearch.password

    If Elasticsearch has authentication enabled, provide the username and password of the logstash_system user.

  • xpack.monitoring.elasticsearch.sniffing

    Set sniffing to true to discover the other nodes of the Elasticsearch cluster. Defaults to false.

  • xpack.monitoring.collection.interval

    Controls how often metrics are collected and published on the Logstash side. Defaults to 10s.

Enable Monitoring

Edit logstash.yml and add the following:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: lBJPQyPl0NBwGE6Cq8d0
xpack.monitoring.elasticsearch.url:
  - "http://es-welog02cn-p004.pek3.example.net:9200"
  - "http://es-welog02cn-p005.pek4.example.net:9200"
xpack.monitoring.elasticsearch.sniffing: true
xpack.monitoring.collection.interval: 10s
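Time values such as xpack.monitoring.collection.interval take the form 10s or 5s. If you template or lint logstash.yml from a script, a small validator can catch malformed values before a restart; this parser is a hypothetical helper (and its accepted units an assumption), not part of Logstash:

```python
import re

# Units accepted here are an assumption covering the common cases.
_UNITS = {"ms": 0.001, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_interval(value: str) -> float:
    """Parse a time value like '10s' or '5s' into seconds."""
    m = re.fullmatch(r"(\d+)(ms|s|m|h)", value.strip())
    if not m:
        raise ValueError(f"malformed interval: {value!r}")
    return int(m.group(1)) * _UNITS[m.group(2)]
```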

Centralized pipeline management

With X-Pack centralized pipeline management enabled, you no longer need to maintain a logstash.conf file or start Logstash with -f pointing at it. To enable this feature:

Key parameters

  • xpack.management.enabled

    Set to true to enable X-Pack centralized configuration management for Logstash.

  • xpack.management.logstash.poll_interval

    How often the Logstash instance polls Elasticsearch for pipeline changes. Defaults to 5s.

  • xpack.management.pipeline.id

    A comma-separated list of pipeline IDs to register for centralized pipeline management. After changing this setting, restart Logstash for the change to take effect. The pipelines themselves must be created in Kibana under Management > Logstash > Pipelines.

  • xpack.management.elasticsearch.url

    The Elasticsearch instance that stores the Logstash pipeline configurations and metadata. It can be the same instance as in `outputs`, or a different one. Defaults to `http://localhost:9200`.

  • xpack.management.elasticsearch.username and xpack.management.elasticsearch.password

    If your Elasticsearch cluster is protected with basic authentication, these settings provide the username and password the Logstash instance uses to authenticate when accessing the configuration data. The user specified here must have the `logstash_admin` role, which grants access to the `.logstash-*` indices.

Enable centralized pipeline management

Edit logstash.yml and add the following:

xpack.management.enabled: true
xpack.management.pipeline.id:  
  - "prod_output_es"
  - "prod_output_kafka_plain"
  - "prod_output_kafka_json"
  - "prod_output_k8s_es"
  - "dev_output_es"
xpack.management.elasticsearch.username: logstash_admin_user
xpack.management.elasticsearch.password: MzZqbT44FU2oVJxatz4b
xpack.management.elasticsearch.url:
  - "http://es-welog02cn-p004.pek3.example.net:9200"
  - "http://es-welog02cn-p005.pek4.example.net:9200"
xpack.management.logstash.poll_interval: 5s
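Every ID listed under xpack.management.pipeline.id must have a matching pipeline created in Kibana (Management > Logstash > Pipelines); the created pipelines are stored as documents in the .logstash index. A minimal sketch for diffing the configured IDs against the created ones — fetching the created IDs from Elasticsearch is assumed to happen elsewhere and is not shown:

```python
def missing_pipelines(configured, created):
    """Return configured pipeline IDs that were never created in Kibana.

    `configured` mirrors xpack.management.pipeline.id from logstash.yml;
    `created` would come from the .logstash index (fetch not shown here).
    """
    return sorted(set(configured) - set(created))

# IDs from the logstash.yml example above.
configured_ids = [
    "prod_output_es",
    "prod_output_kafka_plain",
    "prod_output_kafka_json",
    "prod_output_k8s_es",
    "dev_output_es",
]
```

Any ID the function reports should be created in Kibana before (re)starting Logstash, otherwise that pipeline will not run.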

Note on the logstash_writer role created above: the manage_ilm cluster privilege and the manage and manage_ilm index privileges are only required if index lifecycle management (ILM) is enabled. With ILM enabled, they allow Logstash to load index lifecycle policies, create rollover aliases, and create and manage rollover indices.