=Heka=

Heka is an open source stream processing software system developed by Mozilla. Heka is a “Swiss Army Knife” type tool for data processing, useful for a wide variety of different tasks, such as:
 
* Loading and parsing log files from a file system.
* Accepting statsd type metrics data for aggregation and forwarding to upstream time series data stores such as graphite or InfluxDB.
* Launching external processes to gather operational data from the local system.
* Performing real time analysis, graphing, and anomaly detection on any data flowing through the Heka pipeline.
* Shipping data from one location to another via the use of an external transport (such as AMQP) or directly (via TCP).
* Delivering processed data to one or more persistent data stores.

==Configuration overview==

All LMA Heka configuration files are located in the /etc/lma_collector directory. For example, the following configuration files are present on a controller node:

<PRE>
amqp-openstack_error.toml
amqp-openstack_info.toml
amqp-openstack_warn.toml
decoder-collectd.toml
decoder-http-check.toml
decoder-keystone_7_0.toml
decoder-keystone_wsgi.toml
decoder-mysql.toml
decoder-notification.toml
decoder-openstack.toml
decoder-ovs.toml
decoder-pacemaker.toml
decoder-rabbitmq.toml
decoder-swift.toml
decoder-system.toml
encoder-elasticsearch.toml
encoder-influxdb.toml
encoder-nagios_afd_nodes_debug.toml
encoder-nagios_afd_nodes.toml
encoder-nagios_gse_global_clusters.toml
encoder-nagios_gse_node_clusters.toml
filter-afd_api_backends.toml
filter-afd_api_endpoints.toml
filter-afd_node_controller_cpu.toml
filter-afd_node_controller_log-fs.toml
filter-afd_node_controller_root-fs.toml
filter-afd_node_mysql-nodes_mysql-fs.toml
filter-afd_service_apache_worker.toml
filter-afd_service_cinder-api_http_errors.toml
filter-afd_service_glance-api_http_errors.toml
filter-afd_service_heat-api_http_errors.toml
filter-afd_service_keystone-admin-api_http_errors.toml
filter-afd_service_keystone-public-api_http_errors.toml
filter-afd_service_mysql_node-status.toml
filter-afd_service_neutron-api_http_errors.toml
filter-afd_service_nova-api_http_errors.toml
filter-afd_service_rabbitmq_disk.toml
filter-afd_service_rabbitmq_memory.toml
filter-afd_service_rabbitmq_queue.toml
filter-afd_service_swift-api_http_errors.toml
filter-afd_workers.toml
filter-gse_global.toml
filter-gse_node.toml
filter-gse_service.toml
filter-heka_monitoring.toml
filter-http_metrics.toml
filter-influxdb_accumulator.toml
filter-influxdb_annotation.toml
filter-instance_state.toml
filter-resource_creation_time.toml
filter-service_heartbeat.toml
global.toml
httplisten-collectd.toml
httplisten-http-check.toml
input-aggregator.toml
logstreamer-keystone_7_0.toml
logstreamer-keystone_wsgi.toml
logstreamer-mysql.toml
logstreamer-openstack_7_0.toml
logstreamer-openstack_dashboard.toml
logstreamer-ovs.toml
logstreamer-pacemaker.toml
logstreamer-rabbitmq.toml
logstreamer-swift.toml
logstreamer-system.toml
multidecoder-aggregator.toml
output-aggregator.toml
output-dashboard.toml
output-elasticsearch.toml
output-influxdb.toml
output-nagios_afd_nodes.toml
output-nagios_gse_global_clusters.toml
output-nagios_gse_node_clusters.toml
scribbler-aggregator_flag.toml
splitter-openstack.toml
splitter-rabbitmq.toml
</PRE>

Heka's configuration files can be divided into the following groups, matching the stages of Heka's processing pipeline: inputs acquire data, splitters divide the stream into individual records, decoders parse records into Heka messages, filters process and aggregate messages, and encoders serialize them for the outputs (a minimal wiring sketch follows the list):

* Inputs
* Splitters
* Decoders
* Filters
* Encoders
* Outputs
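
As a sketch of how plugins from different groups reference each other, here is an encoder paired with an output; the section names and values are illustrative and are not copied from the shipped files:

<PRE>
# Illustrative only: the encoder serializes messages, the output ships them.
[elasticsearch_encoder]
type = "ESJsonEncoder"
index = "log-%{%Y.%m.%d}"

[elasticsearch_output]
type = "ElasticSearchOutput"
server = "http://192.168.0.2:9200"
message_matcher = "Type == 'log'"
encoder = "elasticsearch_encoder"
flush_count = 100
</PRE>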
 
===Inputs===
 
There are two types of input plugins used in Heka (a sample TOML configuration for each is sketched after the list):
 
* HttpListenInput
** 127.0.0.1:8325; collectd_decoder
* LogstreamerInput
** /var/log/libvirt; libvirt_decoder
** '/var/log/dashboard\.log$'; decoder = "openstack_decoder"; splitter = "TokenSplitter"
** file_match = '(?P<Service>nova|cinder|keystone|glance|heat|neutron|murano)-all\.log$'; differentiator = [ 'openstack.', 'Service' ]; decoder = "openstack_decoder"; splitter = "openstack_splitter"
** file_match = '(?P<Service>ovs\-vswitchd|ovsdb\-server)\.log$'; differentiator = [ 'Service' ]; decoder = "ovs_decoder"; splitter = "TokenSplitter"
** file_match = '(?P<Service>daemon\.log|cron\.log|haproxy\.log|kern\.log|auth\.log|syslog|messages|debug)'; differentiator = [ 'system.', 'Service' ]; decoder = "system_decoder"
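
In TOML form, a pair of such inputs would look roughly like the following sketch; the section names are hypothetical and the values are taken from the bullets above, not verbatim from the shipped configs:

<PRE>
# HttpListenInput: accepts collectd metrics over HTTP on the loopback interface.
[collectd_httplisten]
type = "HttpListenInput"
address = "127.0.0.1:8325"
decoder = "collectd_decoder"

# LogstreamerInput: tails matching log files and tags each stream by Service.
[system_logs]
type = "LogstreamerInput"
log_directory = "/var/log"
file_match = '(?P<Service>daemon\.log|cron\.log|haproxy\.log|kern\.log|auth\.log|syslog|messages|debug)'
differentiator = [ 'system.', 'Service' ]
decoder = "system_decoder"
</PRE>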
 
   
   
===Splitters===

Splitter details: https://hekad.readthedocs.org/en/v0.10.0/config/splitters/index.html

There is only one custom splitter:

<PRE>
[openstack_splitter]
type = "RegexSplitter"
delimiter = '(<[0-9]+>)'
delimiter_eol = false
</PRE>
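
This RegexSplitter cuts the incoming stream at every syslog-style priority token; because delimiter_eol = false, the captured delimiter is attached to the start of the following record rather than the end of the preceding one. Roughly (the message text here is made up):

<PRE>
input stream:  <182>first openstack record<45>second openstack record
record 1:      <182>first openstack record
record 2:      <45>second openstack record
</PRE>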

===Decoders===

The following decoder configuration files are used:

<PRE>
decoder-collectd.toml
decoder-libvirt.toml
decoder-openstack.toml
decoder-ovs.toml
decoder-system.toml
</PRE>
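
Each of these files wires up a decoder plugin; in the LMA collector these are typically Lua sandbox decoders. A minimal sketch, assuming hypothetical Lua file and module paths:

<PRE>
[system_decoder]
type = "SandboxDecoder"
# Both paths below are illustrative assumptions, not taken from the package.
filename = "/usr/share/lma_collector/decoders/generic_syslog.lua"
module_directory = "/usr/share/lma_collector/common"
</PRE>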

==Heka Debugging==

A debug output can be configured that dumps messages to a local file using the RstEncoder:

<PRE>
[RstEncoder]

[output_file]
type = "FileOutput"
#message_matcher = "Fields[aggregator] == NIL && Type == 'heka.sandbox.afd_node_metric'"
message_matcher = "Fields[aggregator] == NIL"
path = "/var/log/heka-debug.log"
perm = "666"
flush_count = 100
flush_operator = "OR"
#encoder = "nagios_afd_nodes_encoder_debug"
encoder = "RstEncoder"
</PRE>