Heka


Heka is an open source stream processing software system developed by Mozilla. Heka is a “Swiss Army Knife” type tool for data processing, useful for a wide variety of different tasks, such as:

  • Loading and parsing log files from a file system.
  • Accepting statsd type metrics data for aggregation and forwarding to upstream time series data stores such as graphite or InfluxDB.
  • Launching external processes to gather operational data from the local system.
  • Performing real time analysis, graphing, and anomaly detection on any data flowing through the Heka pipeline.
  • Shipping data from one location to another via the use of an external transport (such as AMQP) or directly (via TCP).
  • Delivering processed data to one or more persistent data stores.
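
For instance, the statsd use case maps onto three stock Heka plugins. The following is a minimal sketch, assuming a Graphite relay listening on graphite.example.com:2003 (an illustrative placeholder, not part of the LMA setup):

 # Listen for statsd counters/timers on UDP
 [StatsdInput]
 address = "127.0.0.1:8125"

 # Aggregate raw stats and emit 'heka.statmetric' messages every 10 seconds
 [StatAccumInput]
 ticker_interval = 10

 # Forward the aggregated metrics to Graphite's carbon listener
 [CarbonOutput]
 message_matcher = "Type == 'heka.statmetric'"
 address = "graphite.example.com:2003"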

Data flow and LMA HA overview

In LMA HA clusters we need to aggregate data on the primary controller. The "primary" controller is the controller holding the VIP (virtual IP), so aggregated data can be sent to the VIP regardless of which node currently holds it. This may look confusing at first, so a more detailed explanation follows.

  • 3 controllers, all are up and running


      [VIP] <--------------------+-------------------------+-------------------------+
  Aggregation                    |                         |                         |
  Input                          |                         |                         |
 +--------------+                |   +--------------+      |   +--------------+      |
 | Controller 1 |                |   | Controller 2 |      |   | Controller 3 |      |
 | Hekad        |                |   | Hekad        |      |   | Hekad        |      |
 | Aggregation--+----------------+   | Aggregation--+------+   | Aggregation--+------+
 | Output       |                    | Output       |          | Output       |
 +------+-------+                    +------+-------+          +------+-------+
        |                                   |                         |
        +-----------------------------------+---->---->---->---->----+------> [ElasticSearch, InfluxDB, Nagios, etc]

  • Controller 1 is down, the VIP moved to Controller 3


                                                  [VIP] <--+-------------------------+
                                            Aggregation    |                         |
                                            Input          |                         |
 +--------------+                    +--------------+      |   +--------------+      |
 | Controller 1 |                    | Controller 2 |      |   | Controller 3 |      |
 | Hekad        |                    | Hekad        |      |   | Hekad        |      |
 | DOWN         |                    | Aggregation--+------+   | Aggregation--+------+
 |              |                    | Output       |          | Output       |
 +--------------+                    +------+-------+          +------+-------+
                                            |                         |
                                            +---->---->---->---->----+------> [ElasticSearch, InfluxDB, Nagios, etc]

To avoid an endless loop, the aggregator field is used: messages that have already passed through the aggregator carry this field and are not forwarded to the VIP again, e.g.:

message_matcher = "Fields[aggregator] == NIL"
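
A sketch of that pattern is shown below. The plugin names, the decoder, and the port are illustrative assumptions, not the literal LMA files; the key point is that the TCP output matches only messages without the aggregator field, while the decoder behind the aggregator input sets it:

 # On every controller: forward not-yet-aggregated messages to the VIP
 [aggregator_output]
 type = "TcpOutput"
 address = "<VIP>:5565"
 message_matcher = "Fields[aggregator] == NIL"

 # On the node holding the VIP: accept those messages; the decoder
 # sets Fields[aggregator], so they can never match the output again
 [aggregator_input]
 type = "TcpInput"
 address = "0.0.0.0:5565"
 decoder = "aggregator_decoder"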

Configuration overview

All LMA Heka config files are located in the /etc/lma_collector directory, e.g. on a controller there are the following configuration files:

amqp-openstack_error.toml
amqp-openstack_info.toml
amqp-openstack_warn.toml
decoder-collectd.toml
decoder-http-check.toml
decoder-keystone_7_0.toml
decoder-keystone_wsgi.toml
decoder-mysql.toml
decoder-notification.toml
decoder-openstack.toml
decoder-ovs.toml
decoder-pacemaker.toml
decoder-rabbitmq.toml
decoder-swift.toml
decoder-system.toml
encoder-elasticsearch.toml
encoder-influxdb.toml
encoder-nagios_afd_nodes_debug.toml
encoder-nagios_afd_nodes.toml
encoder-nagios_gse_global_clusters.toml
encoder-nagios_gse_node_clusters.toml
filter-afd_api_backends.toml
filter-afd_api_endpoints.toml
filter-afd_node_controller_cpu.toml
filter-afd_node_controller_log-fs.toml
filter-afd_node_controller_root-fs.toml
filter-afd_node_mysql-nodes_mysql-fs.toml
filter-afd_service_apache_worker.toml
filter-afd_service_cinder-api_http_errors.toml
filter-afd_service_glance-api_http_errors.toml
filter-afd_service_heat-api_http_errors.toml
filter-afd_service_keystone-admin-api_http_errors.toml
filter-afd_service_keystone-public-api_http_errors.toml
filter-afd_service_mysql_node-status.toml
filter-afd_service_neutron-api_http_errors.toml
filter-afd_service_nova-api_http_errors.toml
filter-afd_service_rabbitmq_disk.toml
filter-afd_service_rabbitmq_memory.toml
filter-afd_service_rabbitmq_queue.toml
filter-afd_service_swift-api_http_errors.toml
filter-afd_workers.toml
filter-gse_global.toml
filter-gse_node.toml
filter-gse_service.toml
filter-heka_monitoring.toml
filter-http_metrics.toml
filter-influxdb_accumulator.toml
filter-influxdb_annotation.toml
filter-instance_state.toml
filter-resource_creation_time.toml
filter-service_heartbeat.toml
global.toml
httplisten-collectd.toml
httplisten-http-check.toml
input-aggregator.toml
logstreamer-keystone_7_0.toml
logstreamer-keystone_wsgi.toml
logstreamer-mysql.toml
logstreamer-openstack_7_0.toml
logstreamer-openstack_dashboard.toml
logstreamer-ovs.toml
logstreamer-pacemaker.toml
logstreamer-rabbitmq.toml
logstreamer-swift.toml
logstreamer-system.toml
multidecoder-aggregator.toml
output-aggregator.toml
output-dashboard.toml
output-elasticsearch.toml
output-influxdb.toml
output-nagios_afd_nodes.toml
output-nagios_gse_global_clusters.toml
output-nagios_gse_node_clusters.toml
scribbler-aggregator_flag.toml
splitter-openstack.toml
splitter-rabbitmq.toml

Heka's configuration files can be divided into the following groups:

  • Inputs
  • Splitters
  • Decoders
  • Filters
  • Encoders
  • Outputs
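
The file name prefixes map directly onto these groups. Each file holds one or more TOML sections, each declaring a plugin instance; e.g. a decoder file could look like the following sketch (an illustration, not the shipped file content):

 [mysql_log_decoder]
 type = "SandboxDecoder"
 filename = "/usr/share/lma_collector/decoders/mysql_log.lua"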


Data flow in Heka is the following:

           - read logs ---------------------> | log reader decoder for service A log format |
                                              | log reader decoder for service B log format |
 --Inputs -- listen on http/tcp/udp --------> | http decoder for input from service C       |--> Routing --+--> Filters (may inject new messages)
                                              | tcp/udp decoder for input from service D    |              |
           - other inputs (not used in LMA) > | Other/custom decoders                       |              +--> Encoders (TXT/JSON/...) --> Outputs (to file, TCP, HTTP PUT)

So, what does Heka REALLY do?

  • Reads data from multiple sources
  • Splits the data into messages
  • Decodes each message and saves it in Heka's internal format
  • Sends messages to filters, if configured
    • A filter can create new messages, e.g. aggregate data, and inject them back into Heka
  • Encodes (transforms) the output messages
  • Sends the messages out of Heka to a file or another process (a minimal sketch follows)
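
Put together, a minimal pipeline covering these steps could look like the sketch below. The plugin names, paths, and the decoder are illustrative assumptions, not taken from the LMA configuration:

 # Steps 1-2: read a log file and split it into one message per line
 [nova_logs]
 type = "LogstreamerInput"
 log_directory = "/var/log/nova"
 file_match = 'nova-api\.log'
 differentiator = ["nova-api"]
 splitter = "TokenSplitter"   # splits on newlines by default
 decoder = "nova_decoder"     # step 3: a hypothetical SandboxDecoder

 # Steps 6-7: encode each matched message and write it to a file
 [PayloadEncoder]
 append_newlines = true

 [nova_file_output]
 type = "FileOutput"
 path = "/var/log/heka/nova.log"
 message_matcher = "Logger == 'nova-api'"
 encoder = "PayloadEncoder"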

It does not sound too complicated (and resembles other log collectors, e.g. Logstash), but Heka's configuration can be very complex.

Heka Inputs

Input plugins acquire data from the outside world and inject it into the Heka pipeline. They can do this by reading files from a file system, actively making network connections to acquire data from remote servers, listening on a network socket for external actors to push data in, launching processes on the local system to gather arbitrary data, or any other mechanism.
Input plugins must be written in Go.
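
For example, the httplisten-collectd.toml file from the listing above configures an input that accepts HTTP POSTs from collectd. A generic sketch, with an assumed address and decoder name:

 [collectd_httplisten]
 type = "HttpListenInput"
 address = "127.0.0.1:8325"
 decoder = "collectd_decoder"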

Splitters

Splitter plugins receive the data that is being acquired by an input plugin and slice it up into individual records. They must be written in Go.
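
Heka ships a TokenSplitter and a RegexSplitter, which cover the common cases: one record per line, or records separated by a pattern. Both sections below are illustrative sketches:

 # One record per newline-terminated line (the default delimiter)
 [TokenSplitter]
 delimiter = "\n"

 # Records separated by blank lines, e.g. multi-line reports
 [blank_line_splitter]
 type = "RegexSplitter"
 delimiter = '\n\n'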

Decoders

Decoder plugins convert data that comes in through the Input plugins to Heka’s internal Message data structure. Typically decoders are responsible for any parsing, deserializing, or extracting of structure from unstructured data that needs to happen.
Decoder plugins can be written entirely in Go, or the core logic can be written in sandboxed Lua code.
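
As a sketch of the Go-plugin style, a regex decoder can lift structure out of a plain-text payload into message fields (the regex and field names here are illustrative):

 [mysql_slow_query_decoder]
 type = "PayloadRegexDecoder"
 match_regex = '^# User@Host: (?P<user>\S+)'

 # Captured groups are interpolated into the new message's fields
 [mysql_slow_query_decoder.message_fields]
 Type = "mysql.slow-query"
 User = "%user%"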

Filters

Filter plugins are Heka’s processing engines. They are configured to receive messages matching certain specific characteristics (using Heka’s Message Matcher Syntax) and are able to perform arbitrary monitoring, aggregation, and/or processing of the data. Filters are also able to generate new messages that can be reinjected into the Heka pipeline, such as summary messages containing aggregate data, notification messages in cases where suspicious anomalies are detected, or circular buffer data messages that will show up as real time graphs in Heka’s dashboard.
Filters can be written entirely in Go, or the core logic can be written in sandboxed Lua code. It is also possible to configure Heka to allow Lua filters to be dynamically injected into a running Heka instance without needing to reconfigure or restart the Heka process, nor even to have shell access to the server on which Heka is running.
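
A Lua filter is declared much like a Lua decoder, plus a message matcher and an optional timer that drives periodic injection. A sketch with assumed names and path:

 [http_metrics_filter]
 type = "SandboxFilter"
 filename = "/usr/share/lma_collector/filters/http_metrics.lua"
 message_matcher = "Type == 'log' && Fields[http_method] != NIL"
 ticker_interval = 10     # run the timer_event Lua hook every 10s
 preserve_data = true     # keep Lua global state across restarts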

Encoders

Encoder plugins are the inverse of Decoders. They generate arbitrary byte streams using data extracted from Heka Message structs. Encoders are embedded within Output plugins; Encoders handle the serialization, Outputs handle the details of interacting with the outside world.
Encoder plugins can be written entirely in Go, or the core logic can be written in sandboxed Lua code.
You can find an example of a simple Nagios encoder in the AFD filters examples document.
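
For example, encoder-elasticsearch.toml from the listing declares Heka's stock ESJsonEncoder, roughly along these lines (the index pattern is an assumption):

 [elasticsearch_encoder]
 type = "ESJsonEncoder"
 index = "log-%{%Y.%m.%d}"        # one Elasticsearch index per day
 es_index_from_timestamp = true   # use the message timestamp, not wall clock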

Outputs

Output plugins send data that has been serialized by an Encoder to some external destination. They handle all of the details of interacting with the network, filesystem, or any other outside resource. They are, like Filters, configured using Heka’s Message Matcher Syntax so they will only receive and deliver messages matching certain characteristics.
Output plugins must be written in Go.
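
For example, an Elasticsearch output that pairs with the encoder sketched above (the server URL is a placeholder):

 [elasticsearch_output]
 type = "ElasticSearchOutput"
 server = "http://elasticsearch.example.com:9200"
 message_matcher = "Type == 'log'"
 encoder = "elasticsearch_encoder"
 use_buffering = true   # buffer to disk so ES outages do not lose messages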

Heka Debugging