Ansible playbooks are commonly used for configuration management, but we also use them to verify security or baseline compliance. The targets could be operating systems, appliances, or even switches. The main difference in this kind of playbook is that it only checks configuration and reports status; it doesn't make changes. Some may ask: why not apply a role to those inventories instead, so that your systems are always compliant? Well, you want the playbook to run on a schedule and report its findings. In a tightly controlled environment, you do not want changes made to your systems unless they are planned. Of course, if the risk and impact are properly managed and negotiated with your change and risk teams, enforcing compliance directly is still possible.
What sets such a playbook apart from most is that it is used for reporting: the findings are written either to another system or to a file where the report can be read. You cannot depend on ansible.log for this information.
For this, I have created three task files for such playbooks:
- To prune old log files
- To output results to a log file
- To output results to syslog (in my case vRealize Log Insight)
prune_logfiles.yml
This file is required so that you don't fill up the disk with log files on every run. It is just a basic pair of tasks that removes files older than 10 days.
# pre-requisite vars
# - logpath
# - logpattern

- name: Get old log files
  find:
    paths: "{{ logpath }}"
    age: 10d
    patterns: "{{ logpattern }}"
  register: oldfiles
  run_once: true
  tags: always

- name: Prune old logs
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ oldfiles.files }}"
  loop_control:
    label: "{{ item.path }}"
  run_once: true
  tags: always
write_logfile.yml
This task writes a structured message to a log file. You can set up a CSV or tab-delimited log, but I chose to use "|" as the delimiter.
# to use this task, you need to define the following vars before you call it
# - logfile: full path of the output file
# - logsubject: title of the event
# - logstatus: PASS|FAIL|SUCCESS, etc.
# - logdesc: description of the subject
# - logmsg: the message for this event

- name: Write to logfile
  vars:
    logtext: "{{ lookup('pipe', 'date --iso-8601=seconds') + '|' + inventory_hostname + '|' + logdesc + '|' + logstatus + '|' + logmsg }}"
  lineinfile:
    path: "{{ logfile }}"
    line: "{{ logtext }}"
    insertafter: EOF
    create: true
  delegate_to: localhost
  tags: always
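As a quick illustration, here is a hypothetical call to this task file (the values for logfile, logdesc, logstatus, and logmsg are made up for the example; the timestamp and hostname in the sample output line are illustrative, not real output):

- name: Record a sample result
  vars:
    logfile: /mypath/log/sample.log
    logdesc: "Sample check"
    logstatus: "PASS"
    logmsg: "Configuration is correct"
  include_tasks: ./common/write_logfile.yml

# the appended line would look something like:
# 2024-05-01T10:15:30+08:00|myhost01|Sample check|PASS|Configuration is correct

Because the fields are "|"-delimited, the resulting file can be split cleanly into columns when you import it elsewhere.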
write_vrli_log.yml
Lastly, this task writes a structured syslog message to your syslog server. In this case, we are writing to a vRealize Log Insight syslog server using its API. Obviously, you need to make sure all variables, especially the syslog server, are defined before executing. The HOSTNAME task ensures that the syslog message itself reports which host the log came from.
# to use this task, you need to define the following vars before you call it
# - logserver: syslog server
# - logsubject: title of the event
# - logstatus: PASS|FAIL|SUCCESS, etc.
# - logdesc: description of the subject
# - logmsg: the message for this event

- name: Get ansible hostname
  ansible.builtin.shell: echo $HOSTNAME
  register: result
  delegate_to: localhost
  tags: always
- name: Send event to syslog
  vars:
    logurl: "https://{{ logserver }}:9543/api/v1/events/ingest/ansible-playbook"
    thishost: "{{ result.stdout }}"
  ansible.builtin.uri:
    url: "{{ logurl }}"
    method: POST
    body:
      events:
        - fields:
            - name: "appname"
              content: "ansible"
            - name: "status"
              content: "{{ logstatus }}"
          text: >
            {{ thishost }} ansible
            subject=[{{ logsubject }}] host={{ inventory_hostname }}
            status={{ logstatus }} desc=[{{ logdesc }}] msg=[{{ logmsg }}]
    body_format: json
    headers:
      Content-Type: application/json
    validate_certs: false
  delegate_to: localhost
  tags: always
main.yml
Here is a sample of an NSX-T compliance baseline playbook. The playbook itself has two plays. The first is used only to clear old logs; it needs to run just once, against localhost, where the logs are kept.
The second play is the main one. This is not a full playbook; I snipped a short sample to show you how it's done, but in general this is what the structure looks like. I loosely adopted parts of the structure from what I found in the STIG Ansible playbooks.
You will notice that I used "serial: 1" in the run. Since this is for compliance reporting, you want the log entries to be created serially, so that the raw report reads cleanly host by host. Of course, if you are dealing with a large inventory, you may want to remove it; to read the report, just import the logs into Excel (for example) and sort the data accordingly.
You can see one task is used to list all the compliance items. Keeping them all in this section makes it easy to see at a glance which items are being checked. Any values or lists used to evaluate pass/fail conditions are also defined here, prefixed with the item reference.
Each compliance item consists of at least three tasks (more if required), starting with a task to get the values you need and register them in a variable. The next two tasks simply write pass or fail entries to the log file and syslog, based on positive or negative conditions using "when". Each cluster of tasks is written so that it is easy to copy and paste with minimal changes. Tags are generally used during testing of your playbook.
- name: Clear out old log files
  hosts: localhost
  vars:
    logpath: /mypath/log/   # required by prune_logfiles.yml
    logpattern: 'nsxt_security_baseline_check*'
  tasks:
    - name: Task to prune logfiles
      include_tasks: ./common/prune_logfiles.yml

- name: NSX-T security baseline check
  hosts: nsxt
  serial: 1
  vars:
    logpath: /mypath/log/
    date: "{{ lookup('pipe', 'date +%Y%m%d') }}"
  tasks:
    - name: Initialize vars
      set_fact:
        logfile: "{{ logpath }}nsxt_security_baseline_check_{{ inventory_hostname }}_{{ date }}.log"
      delegate_to: localhost
      tags: always

    - name: Define logging facts
      set_fact:
        logserver: "mysyslogserver1.com"
        logsubject: "NSX-T security baseline compliance"
      delegate_to: localhost
      run_once: true
      tags: always

    - name: Define security baseline vars
      set_fact:
        nsxt001: "NSXT-001: Ensure Custom certificate is used"
        nsxt002: "NSXT-002: Ensure syslog server is correct"
        nsxt002_syslog_servers: ["syslog1"]
        nsxt003: "NSXT-003: Ensure NTP servers are correct"
        nsxt003_ntp_servers: ["ntp1","ntp2"]
        nsxt004: "NSXT-004: Ensure backup is configured"
        nsxt005: "NSXT-005: Ensure SNMP v2 is disabled"
        nsxt006: "NSXT-006: Password length must be at least 15 chars"
        nsxt006_minlen: 15
      delegate_to: localhost
      run_once: true
      tags: always
...
    - name: "{{ nsxt005 }}"
      vars:
        nsxt_api: "/api/v1/node/services/snmp?show_sensitive_data=false"
      ansible.builtin.uri:
        url: "https://{{ inventory_hostname }}{{ nsxt_api }}"
        user:
        password:
        method: GET
        force_basic_auth: true
        validate_certs: false
        headers:
          Content-Type: application/json
      register: snmp_config
      delegate_to: localhost
      tags: NSXT-005

    - name: Write to logs (PASS)
      vars:
        logmsg: "Configuration is correct"
        logstatus: "PASS"
        logdesc: "{{ nsxt005 }}"
      include_tasks: "{{ item }}"
      loop:
        - ./common/write_logfile.yml
        - ./common/write_vrli_log.yml
      when: not snmp_config.json.service_properties.v2_configured
      tags: NSXT-005

    - name: Write to logs (FAIL)
      vars:
        logmsg: "Configuration is NOT correct"
        logstatus: "FAIL"
        logdesc: "{{ nsxt005 }}"
      include_tasks: "{{ item }}"
      loop:
        - ./common/write_logfile.yml
        - ./common/write_vrli_log.yml
      when: snmp_config.json.service_properties.v2_configured
      tags: NSXT-005
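To add another item, you copy the three-task cluster and adjust the API path, the registered variable, the condition, and the tag. A sketch for the NTP item defined above might look like this (the /api/v1/node/services/ntp path and the shape of the returned JSON are assumptions; check the NSX-T API documentation for your version):

    - name: "{{ nsxt003 }}"
      vars:
        nsxt_api: "/api/v1/node/services/ntp"   # assumed NSX-T node API path
      ansible.builtin.uri:
        url: "https://{{ inventory_hostname }}{{ nsxt_api }}"
        user:
        password:
        method: GET
        force_basic_auth: true
        validate_certs: false
        headers:
          Content-Type: application/json
      register: ntp_config
      delegate_to: localhost
      tags: NSXT-003

    - name: Write to logs (PASS)
      vars:
        logmsg: "Configuration is correct"
        logstatus: "PASS"
        logdesc: "{{ nsxt003 }}"
      include_tasks: "{{ item }}"
      loop:
        - ./common/write_logfile.yml
        - ./common/write_vrli_log.yml
      # assumed field name; compare the configured list against the baseline
      when: ntp_config.json.service_properties.servers == nsxt003_ntp_servers
      tags: NSXT-003

    - name: Write to logs (FAIL)
      vars:
        logmsg: "Configuration is NOT correct"
        logstatus: "FAIL"
        logdesc: "{{ nsxt003 }}"
      include_tasks: "{{ item }}"
      loop:
        - ./common/write_logfile.yml
        - ./common/write_vrli_log.yml
      when: ntp_config.json.service_properties.servers != nsxt003_ntp_servers
      tags: NSXT-003

Only the names, the condition, and the tag change between items, which is what makes the copy-and-paste pattern workable at scale.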
...
Other stuff
You may notice that the playbook tasks are littered with "tags: always". This is important if you run the playbook with tags: with include_tasks, the tasks inside an included file do not inherit the tag applied to the include task itself, so without "tags: always" they will not run even when the including task is tagged to run.
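For example, with a run like "ansible-playbook main.yml --tags NSXT-005", the include below is matched by the tag, but the task inside the included file only runs because it carries "tags: always":

# in the main playbook, matched by --tags NSXT-005
- name: Write to logs (PASS)
  include_tasks: ./common/write_logfile.yml
  tags: NSXT-005

# inside write_logfile.yml: without "tags: always" this task would be
# skipped on a tagged run, because included tasks do not inherit the
# tag from the include_tasks task
- name: Write to logfile
  lineinfile:
    path: "{{ logfile }}"
    line: "{{ logtext }}"
  tags: always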