Revision as of 17:23, 28 January 2016
Heka
Heka is an open source stream processing software system developed by Mozilla. Heka is a “Swiss Army Knife” type tool for data processing, useful for a wide variety of different tasks, such as:
- Loading and parsing log files from a file system.
- Accepting statsd type metrics data for aggregation and forwarding to upstream time series data stores such as graphite or InfluxDB.
- Launching external processes to gather operational data from the local system.
- Performing real time analysis, graphing, and anomaly detection on any data flowing through the Heka pipeline.
- Shipping data from one location to another via the use of an external transport (such as AMQP) or directly (via TCP).
- Delivering processed data to one or more persistent data stores.
Configuration overview
All LMA Heka config files are located in the /etc/lma_collector folder; e.g., on a controller there are the following configuration files:
amqp-openstack_error.toml amqp-openstack_info.toml amqp-openstack_warn.toml decoder-collectd.toml decoder-http-check.toml decoder-keystone_7_0.toml decoder-keystone_wsgi.toml decoder-mysql.toml decoder-notification.toml decoder-openstack.toml decoder-ovs.toml decoder-pacemaker.toml decoder-rabbitmq.toml decoder-swift.toml decoder-system.toml encoder-elasticsearch.toml encoder-influxdb.toml encoder-nagios_afd_nodes_debug.toml encoder-nagios_afd_nodes.toml encoder-nagios_gse_global_clusters.toml encoder-nagios_gse_node_clusters.toml filter-afd_api_backends.toml filter-afd_api_endpoints.toml filter-afd_node_controller_cpu.toml filter-afd_node_controller_log-fs.toml filter-afd_node_controller_root-fs.toml filter-afd_node_mysql-nodes_mysql-fs.toml filter-afd_service_apache_worker.toml filter-afd_service_cinder-api_http_errors.toml filter-afd_service_glance-api_http_errors.toml filter-afd_service_heat-api_http_errors.toml filter-afd_service_keystone-admin-api_http_errors.toml filter-afd_service_keystone-public-api_http_errors.toml filter-afd_service_mysql_node-status.toml filter-afd_service_neutron-api_http_errors.toml filter-afd_service_nova-api_http_errors.toml filter-afd_service_rabbitmq_disk.toml filter-afd_service_rabbitmq_memory.toml filter-afd_service_rabbitmq_queue.toml filter-afd_service_swift-api_http_errors.toml filter-afd_workers.toml filter-gse_global.toml filter-gse_node.toml filter-gse_service.toml filter-heka_monitoring.toml filter-http_metrics.toml filter-influxdb_accumulator.toml filter-influxdb_annotation.toml filter-instance_state.toml filter-resource_creation_time.toml filter-service_heartbeat.toml global.toml httplisten-collectd.toml httplisten-http-check.toml input-aggregator.toml logstreamer-keystone_7_0.toml logstreamer-keystone_wsgi.toml logstreamer-mysql.toml logstreamer-openstack_7_0.toml logstreamer-openstack_dashboard.toml logstreamer-ovs.toml logstreamer-pacemaker.toml logstreamer-rabbitmq.toml logstreamer-swift.toml logstreamer-system.toml 
multidecoder-aggregator.toml output-aggregator.toml output-dashboard.toml output-elasticsearch.toml output-influxdb.toml output-nagios_afd_nodes.toml output-nagios_gse_global_clusters.toml output-nagios_gse_node_clusters.toml scribbler-aggregator_flag.toml splitter-openstack.toml splitter-rabbitmq.toml
Heka's configuration files can be divided into the following groups:
- Inputs
- Splitters
- Decoders
- Filters
- Encoders
- Outputs
Inputs
On a controller there are the following input groups:
AMQPInput
AMQP input (https://hekad.readthedocs.org/en/v0.10.0/config/inputs/amqp.html)
There are the following AMQP inputs:
- amqp-openstack_error.toml
- amqp-openstack_info.toml
- amqp-openstack_warn.toml
All AMQP inputs look like this:
[openstack_error_amqp]
type = "AMQPInput"
url = "amqp://nova:nova_password@192.168.0.2:5673/"
exchange = "nova"
exchange_type = "topic"
exchange_durability = false
exchange_auto_delete = false
queue_auto_delete = false
queue = "lma_notifications.error"
routing_key = "lma_notifications.error"
decoder = "notification_decoder"
splitter = "NullSplitter"
can_exit = true
The only difference between the AMQP inputs is the queue and routing_key parameters:
queue = "lma_notifications.info"
routing_key = "lma_notifications.info"
All AMQP inputs use a single decoder, notification_decoder, to decode AMQP messages; its configuration can be found in the decoder-notification.toml file.
The LMA plugin configures OpenStack services to use 'lma_notifications' as notification_topics, e.g.:
# cat /etc/nova/nova.conf | grep lma
notification_topics=lma_notifications
so Heka is able to consume messages from the queue and decode them.
It is also possible to see RabbitMQ messages using the trace plugin; for details see: http://wiki.sirmax.noname.com.ua/index.php/Rabbitmq_trace#RabbitMQ_log_messages
HttpListenInput
HttpListenInput plugins start a webserver listening on the specified address and port. For more details see: https://hekad.readthedocs.org/en/v0.10.0/config/inputs/httplisten.html
The following HttpListen inputs are configured in LMA (controller):
- httplisten-collectd.toml
- httplisten-http-check.toml
httplisten-collectd
[collectd_httplisten]
type = "HttpListenInput"
address = "127.0.0.1:8325"
decoder = "collectd_decoder"
splitter = "NullSplitter"
httplisten-http-check
[http-check_httplisten]
type = "HttpListenInput"
address = "192.168.0.2:5566"
decoder = "http-check_decoder"
splitter = "NullSplitter"
This 'opened port' is used for the haproxy HTTP check.
As you can see in the haproxy config, this port is used only to check whether Heka is running, so that haproxy can expose port 5565 from the input-aggregator.
/etc/haproxy/conf.d/999-lma.cfg
listen lma
  bind 192.168.0.7:5565
  balance roundrobin
  mode tcp
  option httpchk
  option tcplog
  server node-6 192.168.0.2:5565 check port 5566
TcpInput
- input-aggregator.toml
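This file defines Heka's TCP listener on the aggregator side. A minimal sketch of what such a config typically looks like, based on Heka's TcpInput documentation; the section name, address, splitter and decoder below are assumptions, not copied from the deployed file:

```toml
# Hypothetical sketch in the spirit of input-aggregator.toml;
# all values here are assumed, not taken from the real file
[aggregator_tcpinput]
type = "TcpInput"
# 5565 is the port haproxy load-balances to in 999-lma.cfg
address = "192.168.0.2:5565"
splitter = "HekaFramingSplitter"
decoder = "ProtobufDecoder"
```

HekaFramingSplitter and ProtobufDecoder are Heka's defaults for its native protobuf stream, which is why they are a plausible choice for Heka-to-Heka traffic.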
LogstreamerInput
- logstreamer-keystone_7_0.toml
- logstreamer-keystone_wsgi.toml
- logstreamer-mysql.toml
- logstreamer-openstack_7_0.toml
- logstreamer-openstack_dashboard.toml
- logstreamer-ovs.toml
- logstreamer-pacemaker.toml
- logstreamer-rabbitmq.toml
- logstreamer-swift.toml
- logstreamer-system.toml
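Each of these files configures a LogstreamerInput, which tails log files matching a regular expression. A rough sketch of what logstreamer-system.toml may contain (the directory, file pattern and decoder name are assumptions based on Heka's LogstreamerInput documentation):

```toml
# Hypothetical sketch of a LogstreamerInput; values are assumed
[system_logstreamer]
type = "LogstreamerInput"
log_directory = "/var/log"
file_match = 'syslog'
decoder = "system_decoder"
```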
Splitters
Splitter details: https://hekad.readthedocs.org/en/v0.10.0/config/splitters/index.html
There is only one custom splitter:
[openstack_splitter]
type = "RegexSplitter"
delimiter = '(<[0-9]+>)'
delimiter_eol = false
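The delimiter regex matches a syslog-style priority token such as <134>, and delimiter_eol = false tells Heka the token starts a record rather than ends one. The effect can be illustrated stand-alone with grep (the sample payload below is made up):

```shell
# Emit one record per '<NNN>' token, keeping the token attached to the
# text that follows it -- the same grouping the RegexSplitter produces
printf '<134>nova-api: request done<131>nova-api: request failed' \
  | grep -oE '<[0-9]+>[^<]*'
# <134>nova-api: request done
# <131>nova-api: request failed
```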
Decoders
- decoder-collectd.toml
- decoder-libvirt.toml
- decoder-openstack.toml
- decoder-ovs.toml
- decoder-system.toml
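Most LMA decoders are Lua sandbox decoders: the .toml file just points Heka at a Lua script that does the actual parsing, using Heka's SandboxDecoder plugin. A hypothetical sketch (the section name and script path are assumptions, not the real LMA layout):

```toml
# Hypothetical sandbox decoder config; the Lua script path is assumed
[system_decoder]
type = "SandboxDecoder"
filename = "/usr/share/lma_collector/decoders/system.lua"

# Optional parameters are passed to the Lua script via a .config table
[system_decoder.config]
hostname = "node-6"
```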
Heka Debugging
To dump messages passing through the pipeline into a file, a FileOutput with the RstEncoder can be used:

[RstEncoder]

[output_file]
type = "FileOutput"
#message_matcher = "Fields[aggregator] == NIL && Type == 'heka.sandbox.afd_node_metric'"
message_matcher = "Fields[aggregator] == NIL"
path = "/var/log/heka-debug.log"
perm = "666"
flush_count = 100
flush_operator = "OR"
#encoder = "nagios_afd_nodes_encoder_debug"
encoder = "RstEncoder"