2.1.2. Ansible configuration from OpenNAC OVA

This section covers the automatic configuration of the OpenNAC Enterprise servers for two possible scenarios:

  • When you need a first-time deployment from OVA.

  • When you are adding a new node.

Prerequisites

  • Make sure you have an accessible IP address and an internet connection.

  • Before configuring the nodes, ensure that you have deployed the necessary OVAs. Each node type requires its own image, which serves as the template for creating multiple instances of that node type.

Download the images from the following URL: https://repo-opennac.opencloudfactory.com/ova/1.2.3

The repository images configure the following roles:

  • opennac_core_<ONCORE_FULL_VERSION>_img.ova → Principal, Worker, and Proxy roles.

  • opennac_analytics_<ONNALYTICS_FULL_VERSION>_img.ova → Analytics, Aggregator, Aggregator+Analytics.

  • opennac_sensor_<ONSENSOR_FULL_VERSION>_img.ova → Sensor role.
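
For example, you can fetch the Core image directly from the CLI (a sketch; it assumes the images sit directly under the URL above and uses the same repository credentials required later for the playbooks):

wget https://<username>:<password>@repo-opennac.opencloudfactory.com/ova/1.2.3/opennac_core_<ONCORE_FULL_VERSION>_img.ova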

Note

The playbooks will be launched from the OpenNAC Enterprise Principal node, as it is the one that has the code.

2.1.2.1. Giving the nodes an IP

Some of the OVAs, like the Analytics one, come preconfigured with two network interfaces (eth0 and eth1). You can use the eth0 interface for management and for communication between nodes.

Other OVAs only have one interface by default. Configure them according to your needs.

To assign an IP to the node, execute the graphical network manager:

nmtui
../../../_images/nmtui1.png


On the initial window, select Edit a connection:

../../../_images/nmtui2.png


Select the interface and press Edit:

../../../_images/nmtui3.png


On IPv4 Configuration, select “Manual”, then display the IPv4 settings by selecting the <Show> option:

../../../_images/nmtui4.png


  • Addresses: Add node IP address with netmask (<IP>/<MASK>)

  • Gateway: Add a default gateway

  • DNS Servers: Add a DNS server (e.g., Google's 8.8.8.8)

Set the option Require IPv4 addressing for this connection. Finalize by selecting <OK> at the bottom right corner.
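
Alternatively, the same settings can be applied non-interactively with nmcli (a sketch; the connection name eth0, the addresses, and the DNS server are placeholders to adapt, and the last line mirrors the deactivate/activate step described next):

# Set a static IPv4 address, gateway, and DNS server on the connection
nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.168.69.101/24 ipv4.gateway 192.168.69.254 ipv4.dns 8.8.8.8
# Bounce the connection to apply the changes
nmcli connection down eth0 && nmcli connection up eth0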

At this point, you must activate and deactivate the interface to apply changes. On the NetworkManager menu, select the option Activate a connection.

../../../_images/nmtui5.png


Deactivate and activate the interface, then go back to the initial menu.

../../../_images/nmtui6.png


The interface is now configured. You can verify it by typing ifconfig or ip a in the CLI.

ifconfig
../../../_images/nmtui7.png


Note

The interface names may vary depending on the OS version, e.g., ens18.

2.1.2.2. Creating and copying the SSH key pair

These steps are only necessary on the Core Principal node.

On the console, type:

ssh-keygen -t rsa -C ansibleKey

Press “Enter” at each prompt to accept the default values.

Now copy the generated public key to the nodes:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@<nodes_IP>

Where <nodes_IP> are the IPs of all the available nodes: the ON Principal itself, ON Worker, ON Proxy, ON Analytics, and/or ON Sensor.

Note

When copying the keys, it is important to do it against all the nodes, including the ON Principal itself from where it is executed.
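
When there are several nodes, a small shell loop saves repetition (a sketch; the IPs below are the sample inventory addresses, replace them with your own):

# Copy the public key to every node, including the ON Principal itself
for ip in 192.168.69.101 192.168.69.102 192.168.69.103; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$ip"
done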

../../../_images/sshkeys.png


2.1.2.3. Provisioning Ansible

Download and unpack the Ansible configuration playbooks on the ON Principal node:

wget https://<username>:<password>@repo-opennac.opencloudfactory.com/1.2.3/opennacansible8-1.2.3.zip
unzip -d opennacansible8-1.2.3 opennacansible8-1.2.3.zip
cd opennacansible8-1.2.3

2.1.2.4. Configuration steps

The Ansible package downloaded in the previous step contains many files; the important ones are the following:

  • Configuration files: They will carry out the entire deployment (installs and configures). You will see the following configuration playbooks:

    • all_config.yml: Executes configurations for all roles.

    • all_core_config.yml: Configures principal, worker, and proxy.

    • all_analytics_config.yml: Configures analytics, aggregator, analy_agg, and sensor.

  • Inventory: In the inventory.sample file you will find the server names and IP addresses of the nodes.

  • Ansible configuration: ansible.cfg.sample file contains the basic Ansible configuration. Here, you need to specify the path to the previously created SSH key.

  • Variables files: Inside the vars/ directory, you will find files containing variables. Replace the values of these variables as explained in more detail later.

Carefully follow the steps in the specified order.

2.1.2.4.1. Build the inventory file

vim inventory

The structure followed is:

<hostname> ansible_ssh_host=<PUBLIC_IP> private_ip_address=<PRIVATE_IP> role_opennac=<role>

Where:

  • <hostname>: The name of the server. If the server's current hostname does not match it, the playbook will change it to the one you write in the inventory.

  • <PUBLIC_IP>: The accessible IP needed to make the SSH connection.

  • <PRIVATE_IP>: The internal IP needed in the servers to fill the /etc/hosts file or to communicate with each other. Sometimes you may not have this IP; in that case, fill it with the <PUBLIC_IP> as well.

  • <role>: The OpenNAC role needed to know, for example, what healthcheck configures. It can be one of these: [ aggregator | analytics | analy+agg | sensor | principal | proxy | worker].

Note

  • The special parameter zone_id only needs to be written in analytics servers when configuring an ELK HA architecture.

  • The two extra parameters farm_name and root_ssh_password only need to be written for the vpngw role.

  • You can add or comment servers according to your needs. Do NOT comment or delete the groups.

inventory.sample file:

; This is the file where you must declare your server names and IP addresses

; The general syntax followed is:
; [group]
; <hostname> ansible_ssh_host=<SSH_IP> private_ip_address=<PRIVATE_IP> role_opennac=<role>
; The extra parameter zone_id only goes in analytics servers when configuring an ELK HA architecture

; The hostname chosen will be changed on the server
; In some cases, public and private IP may be the same
; The role_opennac can be one of these: [ aggregator | analytics | analy+agg | sensor | principal | proxy | worker]


[principal]
onprincipal ansible_ssh_host=192.168.69.101 private_ip_address=10.10.39.17 role_opennac=principal

[worker]
;onworker01 ansible_ssh_host=10.10.36.121 private_ip_address=192.168.69.1 role_opennac=worker
;onworker02 ansible_ssh_host=10.10.36.122 private_ip_address=192.168.69.2 role_opennac=worker
;onworker03 ansible_ssh_host=10.10.36.123 private_ip_address=192.168.69.3 role_opennac=worker

[proxy]
;onproxy ansible_ssh_host=10.10.36.103 private_ip_address=192.168.69.7 role_opennac=proxy

[analytics]
onanalytics ansible_ssh_host=192.168.69.102 private_ip_address=10.10.39.18 role_opennac=analy+agg
;onanalytics01 ansible_ssh_host=10.10.36.137 private_ip_address=192.168.69.11 role_opennac=analytics zone_id=1
;onanalytics02 ansible_ssh_host=10.10.36.138 private_ip_address=192.168.69.12 role_opennac=analytics zone_id=1
;onanalytics03 ansible_ssh_host=10.10.36.139 private_ip_address=192.168.69.13 role_opennac=analytics zone_id=1

[aggregator]
;onaggregator ansible_ssh_host=10.10.36.132 private_ip_address=192.168.69.14 role_opennac=aggregator

[sensor]
;onsensor ansible_ssh_host=192.168.69.103 private_ip_address=10.10.39.19 role_opennac=sensor


[cores:children]
principal
worker
proxy

; Please note that the group "principal" must always be uncommented

2.1.2.4.2. Build the Ansible configuration file

vim ansible.cfg

ansible.cfg.sample file:

[defaults]
timeout = 30
inventory = inventory
host_key_checking = False
remote_user = root
private_key_file = ~/.ssh/id_rsa
log_path = /var/log/ansible.log
roles_path = ./roles

[ssh_connection]
control_path = %(directory)s/%%h-%%r
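
With the inventory and ansible.cfg in place, you can confirm that Ansible reaches every node using its built-in ping module (a read-only connectivity check; nothing is changed on the nodes):

ansible all -m ping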

2.1.2.4.3. Configure the variables

There is a vars/ directory inside opennacansible8-1.2.3/ in which you will find the variables. It is crucial to look at all the variables and understand their usage, as explained below.

These are the variables you will find inside each file:

  • vars_analytics.yml: Variables related to the Analytics, Aggregator, Analytics+Aggregator, and Sensor deployments, including the ones for the installation (without the OpenNAC OVA deployed), and the ones for the configuration.

  • vars_core.yml: Variables related to Core roles including the ones for the installation (without the OpenNAC OVA deployed), principal configuration, proxy configuration, worker configuration and worker replication.

Edit the necessary files to provide values for the variables (most of them can keep their default values). A variable misconfiguration can lead to an execution error.
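
After editing, a quick syntax check catches YAML mistakes before anything is executed (it only parses the playbook and its variable files, without touching the nodes):

ansible-playbook all_config.yml --syntax-check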

2.1.2.4.3.1. vars_analytics.yml

Common variables (mandatory for every deployment)

##########
# COMMON #
##########

inventory: 'static'
timezone_custom: 'Europe/Madrid'
ntpserv: # NTP servers to synchronize with. Add or delete lines if necessary.
  - 'hora.roa.es'
  - '3.es.pool.ntp.org'


# The version packages that we want to be installed
# It could be the stable version or the testing one
# Change it if necessary
deploy_testing_version: 'yes'

# The necessary user:password to access the repository
repo_auth: 'user:password' # Change to the actual user and password
portal_user: 'admin'
portal_pass: 'opennac'

ansible_ssh_timeout: '7200'

config: 'true'

  • inventory: The default value is static. Set it to dynamic when deploying in the cloud with AWS and tags.

  • timezone_custom: The timezone where the server is set. You can execute the command timedatectl list-timezones to list valid timezones.

  • ntpserv: NTP servers to synchronize with.

  • deploy_testing_version: Set to “yes” to use the testing version, or “no” for the stable version (the default).

  • repo_auth: The user/password to access the OpenNAC repository.

  • ansible_ssh_timeout: Timeout duration in seconds to establish an SSH connection. The default value is ‘7200’.

  • config: Whether to configure the nodes during deployment. The default value is ‘true’.

Warning

For the Ansible playbooks to execute correctly, you must enter valid repository credentials in the repo_auth variable.
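
You can sanity-check the credentials before launching the playbooks (a sketch; an HTTP 200 response means the repository accepts the user and password):

curl -s -o /dev/null -w '%{http_code}\n' -u '<username>:<password>' https://repo-opennac.opencloudfactory.com/1.2.3/opennacansible8-1.2.3.zip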

Analytics and Aggregator

############################
# ANALYTICS AND AGGREGATOR #
############################

cluster_name: 'on_cluster' # Desired name for the cluster (to configure elasticsearch.yml)
number_of_shards: 6 # The number of shards to split the indexes
number_of_replicas: 1 # At least one replica per primary shard

  • cluster_name: Desired name for the cluster (to configure elasticsearch.yml).

  • number_of_shards: The number of shards to split the indexes into.

  • number_of_replicas: The number of replicas per primary shard. It should be at least 1.
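
Once the Analytics nodes are deployed, these values can be verified against the running cluster (assuming Elasticsearch listens on its default port 9200 on the Analytics node):

curl -s 'http://localhost:9200/_cluster/health?pretty'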

Sensor

##########
# SENSOR #
##########

LOCAL_MGMT_IFACE: 'ens34'  # From where we are going to access the device for management
SNIFFER_INTERFACE: 'ens35' # The interface that captures the packets

# There are two capture methods: SPAN mode and SSH mode
# If SSH mode deployment wanted, change the following variable to 'SSH'
deployment_mode: 'SPAN'


# SSH MODE (if you have selected deployment_mode: 'SSH')
# To configure /usr/share/opennac/sensor/sniffer.sh
remoteHostIP: '10.10.36.121' # The remote IP of the device that needs to be captured
remoteInterface: 'eth0' # The remote interface where the information is going to be collected
remoteHostPassword: opennac
ansible_ssh_timeout: '7200'

ansible_python_interpreter: '/usr/bin/python3'

  • LOCAL_MGMT_IFACE: The interface through which the device is accessed for management. It is important to select the interface assigned for management; otherwise, the installation will fail.

  • SNIFFER_INTERFACE: The interface that captures packets. It is important to select the assigned SPAN interface; otherwise, the installation will fail.

  • deployment_mode: There are two capture methods: SPAN mode and SSH mode. Set this variable to ‘SPAN’ or ‘SSH’ accordingly. The default value is ‘SPAN’.

  • remoteHostIP: The remote IP of the device that needs to be captured (if you have selected deployment_mode: ‘SSH’).

  • remoteInterface: The remote interface where the information is going to be collected (if you have selected deployment_mode: ‘SSH’).

  • remoteHostPassword: The password of the remote host (if you have selected deployment_mode: ‘SSH’).

  • ansible_ssh_timeout: Timeout duration in seconds to establish an SSH connection. The default value is ‘7200’.

  • ansible_python_interpreter: Path to the Python interpreter (‘/usr/bin/python3’).
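
Before launching the playbook, it is worth confirming that the interface names set above actually exist on the Sensor (ens34 and ens35 are only sample values):

# List all interfaces in brief format, with their state
ip -br link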

2.1.2.4.3.2. vars_core.yml

Installation

################
# INSTALLATION #
################

inventory: 'static'
timezone_custom: 'Europe/Madrid'
ntpserv: # NTP servers to synchronize with. Add or delete lines if necessary.
  - 'hora.roa.es'
  - '3.es.pool.ntp.org'

# The version packages that we want to be installed
# It could be the stable version or the testing one
# Change it if necessary
deploy_testing_version: 'no'
repo_auth: 'user:password' # Change to the actual user and password
portal_user: 'admin'
portal_pass: 'opennac'
ansible_ssh_timeout: '7200'

config: 'true'

Principal configuration

###########################
# PRINCIPAL CONFIGURATION #
###########################

criticalAlertEmail: 'notify1@opennac.org,notify2@opennac.org'
criticalAlertMailTitle: 'openNAC policy message [%MSG%]'
criticalAlertMailContent: 'Alert generated by policy [%RULENAME%], on %DATE%.\n\nData:\nMAC: %MAC%\nUser: %USERID%\nIP Switch: %SWITCHIP%\nPort: %SWITCHPORT% - %SWITCHPORTID%\n'

# Variables to configure /etc/raddb/clients.conf
clients_data:
  -
    ip: '192.168.0.0/16'
    shortname: 'internal192168'
    secret: 'testing123'
  -
    ip: '10.10.36.0/24'
    shortname: 'internal1010'
    secret: 'testing123'
  -
    ip: '172.16.0.0/16'
    shortname: 'internal17216'
    secret: 'testing123'
  -
    ip: '10.0.0.0/8'
    shortname: 'internal10'
    secret: 'testing123'

# Variables to configure /etc/postfix/main.cf and /etc/postfix/generic
relayhostName: 'relay.remote.com'
relayhostPort: '25'
mydomain: 'acme.local'
emailAddr: 'openNAC@notifications.mycompany.com'

  • criticalAlertEmail: Email or comma-separated list of emails where you want to receive alerts.

  • criticalAlertMailTitle: Title of the alert email.

  • criticalAlertMailContent: Content of the critical email.

  • clients_data: Clients to configure in /etc/raddb/clients.conf; add as many as you need:

    • ip: ‘X.X.X.X/X’

    • shortname: desired client name

    • secret: desired password

  • relayhostName: FQDN of the SMTP server to relay the emails through (the next-hop destination for non-local email).

  • relayhostPort: Port of the “relayhostName” server to relay the emails on.

  • mydomain: The local internet domain name.

  • emailAddr: The email address used as the sender of the alert emails.

These four variables are used to configure /etc/postfix/main.cf and /etc/postfix/generic.
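
After the playbook has run, the applied values can be read back on the node with postconf, which prints parameters from the live Postfix configuration (a quick verification sketch):

postconf mydomain relayhost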

Worker replication

######################
# WORKER REPLICATION #
######################

mysql_replication_password: opennac
mysql_root_password: opennac # Password for mysql root
mysql_replication_password_nagios: 'Simpl3PaSs'
path: /tmp/ # The path to save the dump .sql file

  • mysql_replication_password: Password for the MySQL replication user.

  • mysql_root_password: Password for the MySQL root user.

  • mysql_replication_password_nagios: Password for the MySQL nagios user.

  • path: The path to save the dump .sql file.
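
Once replication has been configured, its health can be checked from the MySQL console on the worker (a sketch; the exact statement may differ slightly depending on the MariaDB/MySQL version shipped):

# Look for Slave_IO_Running: Yes and Slave_SQL_Running: Yes in the output
mysql -u root -p -e "SHOW SLAVE STATUS\G"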

Proxy

#######################
# PROXY CONFIGURATION #
#######################

sharedkey: 'CHANGE_ME' # The string to encrypt the packets between the Proxy Servers and Backends

# PROXY.CONF
# Edit the following lines in order to configure /etc/raddb/proxy.conf
# You may either need to add new lines (follow the "-" structure) or to delete some.
pools_data:
  -
    namepool: 'auth'
    namerealm: 'DEFAULT'


# CLIENTS.CONF
# Edit the following lines in order to configure /etc/raddb/clients.conf
# You may either need to add new lines or to delete some.
# Follow the structure indicated below:
clients_data_PROXY:
  -
    ip: '192.168.0.0/16'
    shortname: 'internal192168'
    secret: '{{ sharedkey }}'
  -
    ip: '172.16.0.0/16'
    shortname: 'internal17216'
    secret: '{{ sharedkey }}'
  -
    ip: '10.0.0.0/8'
    shortname: 'internal10'
    secret: '{{ sharedkey }}'

ansible_python_interpreter: '/usr/bin/python3'

  • sharedkey: The string used to encrypt the packets between the Proxy servers and the backends.

  • pools_data: Pools to configure in /etc/raddb/proxy.conf; add or delete as many as you need:

    • namepool: the name of the pool

    • namerealm: the name of the realm

  • clients_data_PROXY: Clients to configure in /etc/raddb/clients.conf; add or delete as many as you need:

    • ip: ‘X.X.X.X/X’

    • shortname: desired client name

    • secret: the previously defined shared key (do not change this variable)
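
Once the proxy is configured, a client secret can be exercised end to end with radtest from the FreeRADIUS utilities (a sketch; the user, password, and proxy IP are placeholders, and the tool must be available on the testing host):

radtest testuser testpassword 10.10.36.103 0 CHANGE_ME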

2.1.2.5. Launch the playbooks

To launch the playbooks, go to the previously downloaded opennacansible8-1.2.3/ folder.

Running the following command configures all the nodes you previously declared in your inventory:

ansible-playbook all_config.yml
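
If you only need to configure a subset of nodes, the run can be restricted with Ansible's standard --limit option (the host name below is one from the sample inventory):

ansible-playbook all_config.yml --limit onanalytics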

The playbook execution output should look as follows. This example configures onprincipal, onanalytics, and onsensor:

../../../_images/all_deploy.png


Once the installation process finishes, the result should look as follows:

../../../_images/all_deploy_output.png


Note

The counts of “ok”, “changed”, “skipped”, and “ignored” tasks may vary depending on the playbook, the number of nodes, etc.

The only counters that must always be 0 are unreachable and failed. Otherwise, something went wrong and you should review the failing tasks.