logging



Background

The logging role is an abstraction layer for provisioning and configuring the logging system. Currently, rsyslog is the only supported provider.

By the nature of logging, there are multiple ways to read logs and multiple ways to output them. For instance, the logging system may read logs from local files, read them from systemd/journal, or receive them from another logging system over the network. The logs may then be stored in local files in the /var/log directory, sent to Elasticsearch, or forwarded to another logging system. The combination of inputs and outputs needs to be flexible. For instance, you may want inputs from the journal stored only in a local file, while inputs read from files are stored in local log files and also forwarded to another logging system.

To satisfy such requirements, the logging role provides three primary variables: logging_inputs, logging_outputs, and logging_flows. The inputs are represented as a list of logging_inputs dictionaries, the outputs as a list of logging_outputs dictionaries, and the relationships between them are defined as a list of logging_flows dictionaries. The details are described in Logging Configuration Overview.

Requirements

This role is supported on RHEL 7+, CentOS Stream 8+, and Fedora distributions.

Collection requirements

The role requires the firewall role and the selinux role from the fedora.linux_system_roles collection, if logging_manage_firewall and logging_manage_selinux are set to true, respectively. (See also the variables in the Other options section.)

If you are using the logging role from the fedora.linux_system_roles collection or from the Fedora RPM package, this requirement is already satisfied.

The role requires external collections for management of rpm-ostree nodes. These are listed in the meta/collection-requirements.yml. You do not need them if you do not want to manage rpm-ostree systems.

If you need to install additional collections based on the above, please run:

ansible-galaxy collection install -r meta/collection-requirements.yml

Definitions

Logging Configuration Overview

The logging role uses the variables logging_inputs, logging_outputs, and logging_flows, together with additional options, to configure a logging system such as rsyslog.

Currently, the logging role supports five types of logging inputs: basics, files, ovirt, relp, and remote, and five types of outputs: elasticsearch, files, forwards, relp, and remote_files. To deploy configuration files with these inputs and outputs, specify the inputs as logging_inputs and the outputs as logging_outputs. To define the flows from inputs to outputs, use logging_flows. Each logging_flows item has three keys: name, inputs, and outputs, where inputs is a list of logging_inputs name values and outputs is a list of logging_outputs name values.

The following schematic configuration shows log messages from input_nameA being passed to output_name0 and output_name1, while log messages from input_nameB are passed only to output_name1.

---
- name: a schematic logging configuration
  hosts: all
  roles:
    - linux-system-roles.logging
  vars:
    logging_inputs:
      - name: input_nameA
        type: input_typeA
      - name: input_nameB
        type: input_typeB
    logging_outputs:
      - name: output_name0
        type: output_type0
      - name: output_name1
        type: output_type1
    logging_flows:
      - name: flow_nameX
        inputs: [input_nameA]
        outputs: [output_name0, output_name1]
      - name: flow_nameY
        inputs: [input_nameB]
        outputs: [output_name1]

Variables

Logging_inputs options

logging_inputs: A list of the following dictionaries to configure inputs.

logging_inputs common keys

logging_inputs basics type

basics input supports reading logs from systemd journal or systemd unix socket.

Available options:

logging_inputs files type

files input supports reading logs from local files.

Available options:

logging_inputs ovirt type

ovirt input supports oVirt specific inputs.

Available options:

Available options for engine and vdsm:

Available options for collectd:

logging_inputs relp type

relp input supports receiving logs from a remote logging system over the network using RELP (Reliable Event Logging Protocol).

Available options:

logging_inputs remote type

remote input supports receiving logs from a remote logging system over the network.

Available options:

Note: There are 3 types of items in the remote type - udp, plain tcp, and tls tcp. The udp type is configured using udp_ports; the plain tcp type is configured using tcp_ports without tls or with tls: false; the tls tcp type is configured using tcp_ports together with tls: true. Note that there may be only one instance of each of the three types. For example, if there are two udp type items, the deployment fails.

  # Valid configuration example
  - name: remote_udp
    type: remote
    udp_ports: [514, ...]
  - name: remote_ptcp
    type: remote
    tcp_ports: [514, ...]
  - name: remote_tcp
    type: remote
    tcp_ports: [6514, ...]
    tls: true
    pki_authmode: x509/name
    permitted_clients: ['*.example.com']
  # Invalid configuration example 1; duplicated udp
  - name: remote_udp0
    type: remote
    udp_ports: [514]
  - name: remote_udp1
    type: remote
    udp_ports: [1514]
  # Invalid configuration example 2; duplicated tcp
  - name: remote_implicit_tcp
    type: remote
  - name: remote_tcp
    type: remote
    tcp_ports: [1514]

logging_custom_templates

logging_custom_templates: A list of custom template definitions, for use with logging_outputs type files and type forwards. You can specify the template for a particular output to use by setting the template field in a particular logging_outputs specification, or by setting the default for all such outputs to use in logging_files_template_format and logging_forwards_template_format.

Specify custom templates like this, in either the legacy format or the new style format:

logging_custom_templates:
  - |
    template(name="tpl1" type="list") {
        constant(value="Syslog MSG is: '")
        property(name="msg")
        constant(value="', ")
        property(name="timereported" dateFormat="rfc3339" caseConversion="lower")
        constant(value="\n")
        }
  - >-
    $template precise,"%syslogpriority%,%syslogfacility%,%timegenerated::fulltime%,%HOSTNAME%,%syslogtag%,%msg%\n"

Then use like this:

logging_outputs:
  - name: custom_file_output
    type: files
    path: /var/log/custom_file_output.log
    template: tpl1  # override logging_files_template_format if set

Logging_outputs options

logging_outputs: A list of the following dictionaries to configure outputs.

logging_outputs common keys

logging_outputs general queue parameters

logging_outputs:
  - name: files_output
    type: files
    queue:
      size: 100

logging_outputs general action parameters

logging_outputs:
  - name: forwards_output
    type: forwards
    target: your_target_host
    action:
      writeallmarkmessages: "on"

logging_outputs elasticsearch type

elasticsearch output supports sending logs to Elasticsearch. It is available only when the input is ovirt. It assumes Elasticsearch is already configured and running.

Available options:

logging_elasticsearch_password: If basic HTTP authentication is deployed, specify the password with this global variable. Be careful: logging_elasticsearch_password is a global variable, placed at the same level as logging_outputs, logging_inputs, and logging_flows. Also be aware that logging_elasticsearch_password is shared among all elasticsearch outputs; that is, if there are multiple Elasticsearch servers, they must share one password. In addition, the uid and password are configured only if both are present in the playbook. For instance, if there are multiple elasticsearch outputs and one of them is missing the uid key, the configured output has neither the uid nor the password.
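As a sketch of the placement (the password value is hypothetical), note that logging_elasticsearch_password sits at the top level, next to logging_outputs, and takes effect only for outputs that also define uid:

```yaml
# Hypothetical sketch: the password is a global variable, not an output key.
logging_elasticsearch_password: your_es_password
logging_outputs:
  - name: elasticsearch_output
    type: elasticsearch
    uid: elasticsearch  # uid and password are configured only when uid is present
```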

logging_outputs files type

files output supports storing logs in local files, usually under /var/log.

Available options:

Global options:

logging_files_template_format: Sets the default template for the files output. Allowed values are traditional, syslog, and modern, or one of the templates defined in logging_custom_templates. Defaults to modern.
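A minimal sketch of setting the default (the path and template name are taken from this document; the per-output override behavior is described above):

```yaml
# Use the traditional format for all files outputs by default;
# an individual output can still override it with its own template key.
logging_files_template_format: traditional
logging_outputs:
  - name: files_output
    type: files
    path: /var/log/messages
```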

Note: Selector options and property-based filter options are mutually exclusive. If property-based filter options are defined, selector options are ignored.

Note: If none of the above options are given, the following local file outputs are configured.

  kern.*                                      /dev/console
  *.info;mail.none;authpriv.none;cron.none    /var/log/messages
  authpriv.*                                  /var/log/secure
  mail.*                                      -/var/log/maillog
  cron.*                                      -/var/log/cron
  *.emerg                                     :omusrmsg:*
  uucp,news.crit                              /var/log/spooler
  local7.*                                    /var/log/boot.log

logging_outputs forwards type

forwards output sends logs to a remote logging system over the network.

Available options:

Global options:

logging_forwards_template_format: Sets the default template for the forwards output. Allowed values are traditional, syslog, and modern, or one of the templates defined in logging_custom_templates. Defaults to modern.

Note: Selector options and property-based filter options are mutually exclusive. If property-based filter options are defined, selector options are ignored.

logging_outputs relp type

relp output sends logs to a remote logging system over the network using RELP.

Available options:

logging_outputs remote_files type

remote_files output stores logs in local files, separated by the remote host and program name that originated the logs.

Available options:

Note: Selector options and property-based filter options are mutually exclusive. If property-based filter options are defined, selector options are ignored.

Note: If neither remote_log_path nor remote_sub_path is specified, the remote_files output is configured with the following settings.

  template(
    name="RemoteMessage"
    type="string"
    string="/var/log/remote/msg/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log"
  )
  template(
    name="RemoteHostAuthLog"
    type="string"
    string="/var/log/remote/auth/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log"
  )
  template(
    name="RemoteHostCronLog"
    type="string"
    string="/var/log/remote/cron/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log"
  )
  template(
    name="RemoteHostMailLog"
    type="string"
    string="/var/log/remote/mail/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log"
  )
  ruleset(name="unique_remote_files_output_name") {
    authpriv.*   action(name="remote_authpriv_host_log" type="omfile" DynaFile="RemoteHostAuthLog")
    *.info;mail.none;authpriv.none;cron.none action(name="remote_message" type="omfile" DynaFile="RemoteMessage")
    cron.*       action(name="remote_cron_log" type="omfile" DynaFile="RemoteHostCronLog")
    mail.*       action(name="remote_mail_service_log" type="omfile" DynaFile="RemoteHostMailLog")
  }

Logging_flows options

Security options

These variables are set at the same level as logging_inputs, logging_outputs, and logging_flows.

logging_pki_files

Specifies the paths of the ca_cert, cert, and key files on the control host, the corresponding paths on the managed host, or both. When a TLS connection is configured, ca_cert_src and/or ca_cert is required. To configure the certificate of the logging system, cert_src and/or cert is required. To configure the private key of the logging system, private_key_src and/or private_key is required.

logging_domain

The default DNS domain used to accept remote incoming logs from remote hosts. Defaults to "{{ ansible_domain if ansible_domain else ansible_hostname }}".
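A minimal sketch (example.com is a hypothetical domain):

```yaml
# Override the computed default with a fixed domain.
logging_domain: example.com
```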

Server performance optimization options

These variables are set at the same level as logging_inputs, logging_outputs, and logging_flows.

Other options

These variables are set at the same level as logging_inputs, logging_outputs, and logging_flows.

For example, certificates can be requested from an IPA CA by setting the logging_certificates variable:

    logging_certificates:
      - name: logging_cert
        dns: ['localhost', 'www.example.com']
        ca: ipa

The created private key and certificate, together with the CA certificate, can then be referenced, e.g., in logging_pki_files as follows:

  logging_pki_files:
    - ca_cert: /etc/ipa/ca.crt
      cert: /etc/pki/tls/certs/logging_cert.crt
      private_key: /etc/pki/tls/private/logging_cert.key

or in the relp parameters as follows:

  logging_inputs:
    - name: relp_server
      type: relp
      tls: true
      ca_cert: /etc/ipa/ca.crt
      cert: /etc/pki/tls/certs/logging_cert.crt
      private_key: /etc/pki/tls/private/logging_cert.key
      [snip]

NOTE: The certificate role, unless using IPA and joining the systems to an IPA domain, creates self-signed certificates, so you will need to explicitly configure trust, which is not currently supported by the system roles.

Update and Delete

Because of Ansible's idempotency, running ansible-playbook multiple times without changing any variables or options makes no changes after the first run. If some changes are made, only the rsyslog configuration files affected by those changes are recreated. To delete existing rsyslog input or output config files generated by a previous ansible-playbook run, add "state: absent" to the dictionaries to be deleted (in this case, input_nameA and output_name0), and remove the flow dictionaries that reference them, as follows.

logging_inputs:
  - name: input_nameA
    type: input_typeA
    state: absent
  - name: input_nameB
    type: input_typeB
logging_outputs:
  - name: output_name0
    type: output_type0
    state: absent
  - name: output_name1
    type: output_type1
logging_flows:
  - name: flow_nameY
    inputs: [input_nameB]
    outputs: [output_name1]

If you want to remove all the configuration files previously configured, in addition to setting state: absent on each logging_inputs and logging_outputs item, add logging_enabled: false to the configuration variables as follows. This removes the global and common configuration files as well. Alternatively, use logging_purge_confs: true to wipe out all previous configuration and replace it with your given configuration.

logging_enabled: false
logging_inputs:
  - name: input_nameA
    type: input_typeA
    state: absent
  - name: input_nameB
    type: input_typeB
    state: absent
logging_outputs:
  - name: output_name0
    type: output_type0
    state: absent
  - name: output_name1
    type: output_type1
    state: absent
logging_flows:
  - name: flow_nameY
    inputs: [input_nameB]
    outputs: [output_name1]

Configuration Examples

Standalone configuration

Deploying basics input reading logs from systemd journal and implicit files output to write to the local files. This also deploys two custom files to the /etc/rsyslog.d/ directory.

---
- name: Deploying basics input and implicit files output
  hosts: all
  roles:
    - linux-system-roles.logging
  vars:
    logging_custom_config_files:
      - files/90-my-custom-file.conf
      - files/my-custom-file.rulebase
    logging_inputs:
      - name: system_input
        type: basics

The following playbook generates the same logging configuration files.

---
- name: Deploying basics input and files output
  hosts: all
  roles:
    - linux-system-roles.logging
  vars:
    logging_custom_config_files:
      - files/90-my-custom-file.conf
      - files/my-custom-file.rulebase
    logging_inputs:
      - name: system_input
        type: basics
    logging_outputs:
      - name: files_output
        type: files
    logging_flows:
      - name: flow0
        inputs: [system_input]
        outputs: [files_output]

Deploying basics input reading logs from systemd unix socket and files output to write to the local files.

---
- name: Deploying basics input using systemd unix socket and files output
  hosts: all
  roles:
    - linux-system-roles.logging
  vars:
    logging_inputs:
      - name: system_input
        type: basics
        use_imuxsock: true
    logging_outputs:
      - name: files_output
        type: files
    logging_flows:
      - name: flow0
        inputs: [system_input]
        outputs: [files_output]

Deploying basics input reading logs from systemd journal and files output to write to the individually configured local files. This also shows how to specify ownership/permission for log files/directories created by the logger.

---
- name: Deploying basic input and configured files output
  hosts: all
  roles:
    - linux-system-roles.logging
  vars:
    logging_inputs:
      - name: system_input
        type: basics
    logging_outputs:
      - name: files_output0
        type: files
        severity: info
        exclude:
          - authpriv.none
          - auth.none
          - cron.none
          - mail.none
        path: /var/log/messages
      - name: files_output1
        type: files
        facility: authpriv,auth
        path: /var/log/secure
      - name: files_output2
        type: files
        severity: info
        path: /var/log/myapp/my_app.log
        mode: "0600"
        owner: logowner
        group: loggroup
        dir_mode: "0700"
        dir_owner: logowner
        dir_group: loggroup
    logging_flows:
      - name: flow0
        inputs: [system_input]
        outputs: [files_output0, files_output1]

Deploying files input reading logs from local files and files output to write to the individually configured local files.

---
- name: Deploying files input and configured files output
  hosts: all
  roles:
    - linux-system-roles.logging
  vars:
    logging_inputs:
      - name: files_input0
        type: files
        input_log_path: /var/log/containerA/*.log
      - name: files_input1
        type: files
        input_log_path: /var/log/containerB/*.log
    logging_outputs:
      - name: files_output0
        type: files
        severity: info
        exclude:
          - authpriv.none
          - auth.none
          - cron.none
          - mail.none
        path: /var/log/messages
      - name: files_output1
        type: files
        facility: authpriv,auth
        path: /var/log/secure
    logging_flows:
      - name: flow0
        inputs: [files_input0, files_input1]
        outputs: [files_output0, files_output1]

Deploying files input reading logs from local files and files output to write to the local files based on the property-based filters.

---
- name: Deploying files input and configured files output
  hosts: all
  roles:
    - linux-system-roles.logging
  vars:
    logging_inputs:
      - name: files_input0
        type: files
        input_log_path: /var/log/containerA/*.log
      - name: files_input1
        type: files
        input_log_path: /var/log/containerB/*.log
    logging_outputs:
      - name: files_output0
        type: files
        property: msg
        property_op: contains
        property_value: error
        path: /var/log/errors.log
      - name: files_output1
        type: files
        property: msg
        property_op: "!contains"
        property_value: error
        path: /var/log/others.log
    logging_flows:
      - name: flow0
        inputs: [files_input0, files_input1]
        outputs: [files_output0, files_output1]

Client configuration

Deploying basics input reading logs from systemd journal and forwards output to forward the logs to the remote rsyslog.

---
- name: Deploying basics input and forwards output
  hosts: clients
  roles:
    - linux-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: forward_output0
        type: forwards
        severity: info
        target: your_target_hostname
        udp_port: 514
      - name: forward_output1
        type: forwards
        facility: mail
        target: your_target_hostname
        tcp_port: 514
    logging_flows:
      - name: flows0
        inputs: [basic_input]
        outputs: [forward_output0, forward_output1]

Deploying files input reading logs from a local file and forwards output to forward the logs to the remote rsyslog over tls, assuming the ca_cert, cert, and key files are prepared at the specified paths on the control host. The files are deployed to the default locations /etc/pki/tls/certs/, /etc/pki/tls/certs/, and /etc/pki/tls/private/, respectively.

---
- name: Deploying files input and forwards output with certs
  hosts: clients
  roles:
    - linux-system-roles.logging
  vars:
    logging_pki_files:
      - ca_cert_src: /local/path/to/ca_cert
        cert_src: /local/path/to/cert
        private_key_src: /local/path/to/key
    logging_inputs:
      - name: files_input
        type: files
        input_log_path: /var/log/containers/*.log
    logging_outputs:
      - name: forwards_output
        type: forwards
        target: your_target_host
        tcp_port: your_target_port
        pki_authmode: x509/name
        permitted_server: '*.example.com'
    logging_flows:
      - name: flows0
        inputs: [files_input]
        outputs: [forwards_output]

Server configuration

Deploying remote input reading logs from remote rsyslog and remote_files output to write the logs to the local files under the directory named by the remote host name.

---
- name: Deploying remote input and remote_files output
  hosts: server
  roles:
    - linux-system-roles.logging
  vars:
    logging_inputs:
      - name: remote_udp_input
        type: remote
        udp_ports: [514, 1514]
      - name: remote_tcp_input
        type: remote
        tcp_ports: [514, 1514]
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: flow_0
        inputs: [remote_udp_input, remote_tcp_input]
        outputs: [remote_files_output]

Deploying remote input reading logs from remote rsyslog and remote_files output to write the logs to the configured local files with a tls setup supporting 20 clients, assuming the ca_cert, cert, and key files are prepared at the specified paths on the control host. The files are deployed to the default locations /etc/pki/tls/certs/, /etc/pki/tls/certs/, and /etc/pki/tls/private/, respectively.

---
- name: Deploying remote input and remote_files output with certs
  hosts: server
  roles:
    - linux-system-roles.logging
  vars:
    logging_pki_files:
      - ca_cert_src: /local/path/to/ca_cert
        cert_src: /local/path/to/cert
        private_key_src: /local/path/to/key
    logging_inputs:
      - name: remote_tcp_input
        type: remote
        tcp_ports: [6514, 7514]
        permitted_clients: ['*.example.com', '*.test.com']
    logging_outputs:
      - name: remote_files_output0
        type: remote_files
        remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log
        async_writing: true
        client_count: 20
        io_buffer_size: 8192
      - name: remote_files_output1
        type: remote_files
        remote_sub_path: others/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log
    logging_flows:
      - name: flow_0
        inputs: [remote_tcp_input]
        outputs: [remote_files_output0, remote_files_output1]

Client configuration with Relp

Deploying basics input reading logs from systemd journal and relp output to send the logs to the remote rsyslog over relp.

---
- name: Deploying basics input and relp output
  hosts: clients
  roles:
    - linux-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: relp_client
        type: relp
        target: logging.server.com
        port: 20514
        tls: true
        ca_cert_src: /path/to/ca.pem
        cert_src: /path/to/client-cert.pem
        private_key_src: /path/to/client-key.pem
        pki_authmode: name
        permitted_servers:
          - '*.server.com'
    logging_flows:
      - name: flow
        inputs: [basic_input]
        outputs: [relp_client]

Server configuration with Relp

Deploying relp input reading logs from remote rsyslog and remote_files output to write the logs to the local files under the directory named by the remote host name.

---
- name: Deploying remote input and remote_files output
  hosts: server
  roles:
    - linux-system-roles.logging
  vars:
    logging_inputs:
      - name: relp_server
        type: relp
        port: 20514
        tls: true
        ca_cert_src: /path/to/ca.pem
        cert_src: /path/to/server-cert.pem
        private_key_src: /path/to/server-key.pem
        pki_authmode: name
        permitted_clients:
          - '*.client.com'
          - '*.example.com'
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: flow
        inputs: [relp_server]
        outputs: [remote_files_output]

Port Managed by Firewall and SELinux Role

When a port is specified in the logging role configuration and logging_manage_firewall is set to true, the firewall role is automatically included and the port is managed by firewalld.

The port is then configured by the selinux role, if logging_manage_selinux is set to true, and given an appropriate syslog SELinux port type depending on the associated TLS value.
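A sketch of enabling this management, using the logging_manage_firewall and logging_manage_selinux variables from the Collection requirements section (the port value is hypothetical):

```yaml
logging_manage_firewall: true  # open the specified ports via the firewall role
logging_manage_selinux: true   # assign syslog SELinux port types via the selinux role
logging_inputs:
  - name: remote_tcp_input
    type: remote
    tcp_ports: [6514]
    tls: true  # a tls tcp port receives the syslog_tls_port_t SELinux type
```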

You can verify the changes with the following command lines.

For firewall,

firewall-cmd --list-ports

For SELinux,

semanage port --list | grep "syslog"

The newly specified port will be added to this default set.

syslog_tls_port_t     tcp   6514, 10514
syslog_tls_port_t     udp   6514, 10514
syslogd_port_t        tcp   601, 20514
syslogd_port_t        udp   514, 601, 20514

Providers

Tests

tests/README.md - This documentation shows how to execute CI tests in the tests directory as well as how to debug when the test fails.

rpm-ostree

See README-ostree.md