2.1.1.2. Ansible node deployment

This section covers the automatic deployment of new OpenNAC Enterprise nodes on an empty Rocky Linux 8 system for existing OpenNAC Enterprise environments. Following this procedure only makes sense in environments where it is not possible to deploy the OpenNAC Enterprise OVAs, such as a cloud environment.

This process only deploys the OpenNAC packages on the new nodes; it does not configure them. After deployment, the nodes will be in the same state as a freshly deployed OVA. That is why, once the deployment is complete, you should configure them following the Configuration from OVA section.

Warning

In order to proceed with the configuration, ensure that the operating system language is set to English.

  • Open the /etc/locale.conf file and ensure that the following line reflects the English language configuration:

LANG="en_US.UTF-8"
  • Save and exit the file, then reboot the system to apply the changes. You can verify the resulting locale as shown below.
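A minimal verification sketch (optional; localectl is available on standard Rocky Linux 8 installations):

cat /etc/locale.conf    # should contain LANG="en_US.UTF-8"
localectl status        # "System Locale" should report LANG=en_US.UTF-8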

For more information about the Rocky Linux 8 installation, you can visit the official Rocky Linux site.

Before deploying the nodes, it is necessary to create the proper disk partitions. Each role requires different partitions:

Note

The recommended file system is xfs.

Partition    Minimum recommended size   Principal   Proxy      Worker     Analytics   Sensor
/backup      24GB (xfs)                 Required    Required   Required
/var/log     58GB (xfs)                 Required    Required   Required
/var         40GB (xfs)                                                   Required    Required
/            40GB (xfs)                 Required    Required   Required   Required    Required
swap         RAM SIZE (xfs)
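Once the partitions are in place, you can review the layout from the CLI (an optional, illustrative check; device names and mount points will differ depending on the node role):

lsblk -f     # lists block devices with their file systems and mount points
df -hT       # shows size, used space and file system type (should report xfs) per mount point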

Once you have the partitions created, make sure you have an accessible IP address and an internet connection.

Note

The playbooks will be launched from the OpenNAC Enterprise Principal node, to standardize and simplify the process.

2.1.1.2.1. Giving the nodes an IP

To assign this IP address, execute the NetworkManager text user interface:

nmtui
../../../_images/nmtui1.png


On the initial window, select Edit a connection

../../../_images/nmtui2.png


Select the interface and press Edit

../../../_images/nmtui3.png


On IPv4 Configuration, select Manual. Display the IPv4 configuration by selecting the <Show> option.

../../../_images/nmtui4.png


  • Addresses: Add node IP address with netmask (<IP>/<MASK>)

  • Gateway: Add a default gateway

  • DNS Servers: Add a DNS server (e.g. Google's 8.8.8.8). It is recommended to use the corporate DNS server.

Set the Require IPv4 addressing for this connection option. Finish by selecting <OK> at the bottom.

At this point, you must deactivate and reactivate the interface to apply the changes. In the NetworkManager menu, select the Activate a connection option.

../../../_images/nmtui5.png


Deactivate and activate the interface, then go back to the initial menu.

../../../_images/nmtui6.png


The interface is now configured and can be verified by typing the “ifconfig” or “ip a” command in the CLI:

ifconfig
../../../_images/nmtui7.png


Note

The interface names may vary depending on the OS version, e.g. ens18.
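If you prefer the command line to nmtui, the same settings can be applied with nmcli. This is only an illustrative sketch: the connection name (ens18) and the addresses below are example values that must be replaced with your own:

nmcli con mod ens18 ipv4.method manual ipv4.addresses 192.168.69.101/24 ipv4.gateway 192.168.69.254 ipv4.dns 8.8.8.8
nmcli con down ens18 && nmcli con up ens18   # reactivate the connection to apply the changes
ip a                                         # verify the new address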

2.1.1.2.2. Creating and copying the SSH key pair

These steps are only necessary on the Core Principal node.

On the console, type:

ssh-keygen -t rsa -C ansibleKey

Answer all prompts by pressing “Enter” to use the default values.

Now, copy the generated public key to the nodes:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@<nodes_IP>

Where <nodes_IP> are the IPs of all the available nodes: Core Principal itself (where we are connected), Core Worker, Core Proxy, Analytics and/or Sensor.
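If you have several nodes, a small loop can simplify the key distribution. This is only a sketch: the IP addresses below are examples and must be replaced with the IPs of your own nodes:

for ip in 10.10.36.101 10.10.36.123 10.10.36.103; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$ip"
done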

Note

When copying the keys, it is important to copy them to all the nodes, including the ON Principal itself, from which the command is executed.

../../../_images/sshkeys.png


2.1.1.2.3. Prerequisites

Before you proceed with any further steps, ensure that you fulfill the following prerequisites. A quick verification example is shown after these steps.

1. Ensure that the following packages are installed on the machines where you intend to execute the Ansible playbooks:

dnf install -y epel-release unzip wget vim
  • Then install the Ansible packages:

dnf install -y ansible

2. If the machines you are configuring belong to a network that accesses the Internet through a proxy, you must configure wget to use that proxy so it can download files. To do so, follow these instructions:

  • Open the wget file:

vi /etc/wgetrc
  • Within the file, edit the following lines and replace the placeholders with your proxy configuration:

https_proxy = {{ your_https_proxy }}
http_proxy = {{ your_http_proxy }}
  • Still within the file, ensure the following line is set to on:

# If you do not want to use proxy at all, set this to off.
use_proxy = on
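As a quick sanity check (an optional sketch, not part of the official procedure), you can confirm that Ansible is installed and that wget can reach the repository:

ansible --version    # prints the installed Ansible version
wget -q --spider https://repo-opennac.opencloudfactory.com ; echo "wget exit code: $?"
# Exit code 0 or 8 (server error response) means the server was reached; 4 usually points to a network or proxy problem.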

2.1.1.2.4. Provisioning Ansible

After performing the previous steps, you can get the Ansible playbooks from our public repository. Download and extract the bundle on a machine with access to the nodes where you will execute the playbooks, e.g. the Core Principal, using the following commands:

wget https://<username>:<password>@repo-opennac.opencloudfactory.com/1.2.3/opennacansible8-1.2.3.zip
unzip -d opennacansible8-1.2.3 opennacansible8-1.2.3.zip
cd opennacansible8-1.2.3
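You can then list the extracted content to confirm that the playbooks and sample files described in the next section are present (illustrative; the exact file list depends on the downloaded version):

ls -1 *.yml              # deployment and configuration playbooks
ls -1 *.sample vars/     # inventory.sample, ansible.cfg.sample and the variables files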

2.1.1.2.5. Configuration steps

The Ansible bundle downloaded in the previous step contains many files; the most important ones are the following:

  • Deployment files: They carry out the entire deployment (installation and configuration). You will see the following deployment playbooks:

    • all_core_deploy.yml: Configures principal, worker, proxy, and captive.

    • all_analytics_deploy.yml: Configures analytics, aggregator, analy_agg, and sensor.

    • all_deploy.yml and all_config.yml: Execute the deployment and the configuration, respectively, for all roles.

Note

The all_deploy.yml playbook automatically calls the all_config.yml playbook. As a result, there is no need to run the configuration playbook separately if you have already executed the deployment playbook.

  • Inventory: In the inventory.sample file you will find the server names and IP addresses of the nodes.

  • Ansible configuration: ansible.cfg.sample file contains the basic Ansible configuration. Here, you need to specify the path to the previously created SSH key.

  • Variables files: Inside the vars/ directory, you will find files containing variables. Replace the values of these variables as explained in more detail later.

Carefully follow the steps in the specified order.

2.1.1.2.5.1. Build the inventory file

Note

You should only fill the inventory with the nodes you are going to deploy.

Use the servers’ IPs to populate the inventory. First, copy the inventory.sample to the inventory and then edit the file (using a tool like Vim, for instance) to add the IPs as shown below. Always check the .sample file first for any potential updates.

cp inventory.sample inventory
vim inventory

The structure followed is:

<hostname> ansible_ssh_host=<PUBLIC_IP> private_ip_address=<PRIVATE_IP> role_opennac=<role>

Where:

  • <hostname>: The name of the server. If the current hostname does not match, the playbook will change it to the one you write in the inventory.

  • <PUBLIC_IP>: The accessible IP needed to make the SSH connection.

  • <PRIVATE_IP>: The internal IP needed in the servers to fill the /etc/hosts file or to communicate with each other. Sometimes you may not have this IP; in that case, fill it with the <PUBLIC_IP> as well.

  • <role>: The OpenNAC role, needed to know, for example, which healthcheck to configure. It can be one of these: [ aggregator | analytics | analy+agg | sensor | principal | proxy | worker ].

Note

  • The special parameter zone_id only needs to be written in analytics servers when configuring an ELK HA architecture.

  • The two extra parameters farm_name and root_ssh_password only need to be written for the vpngw role.

  • You can add or comment servers according to your needs. Do NOT comment or delete the groups.

Example of an inventory file adding onworker03 and onproxy:

; This is the file where you must declare your server names and IP addresses

; The general syntax followed is:
; [group]
; <hostname> ansible_ssh_host=<SSH_IP> private_ip_address=<PRIVATE_IP> role_opennac=<role>
; The extra parameter zone_id only goes in analytics servers when configuring an ELK HA architecture

; The hostname chosen will be changed on the server
; In some cases, public and private IP may be the same
; The role_opennac can be one of these: [ aggregator | analytics | analy+agg | sensor | principal | proxy]


[principal]
;onprincipal ansible_ssh_host=192.168.69.101 private_ip_address=10.10.39.17 role_opennac=principal

[worker]
;onworker01 ansible_ssh_host=10.10.36.121 private_ip_address=192.168.69.1 role_opennac=worker
;onworker02 ansible_ssh_host=10.10.36.122 private_ip_address=192.168.69.2 role_opennac=worker
onworker03 ansible_ssh_host=10.10.36.123 private_ip_address=192.168.69.3 role_opennac=worker

[proxy]
onproxy ansible_ssh_host=10.10.36.103 private_ip_address=192.168.69.7 role_opennac=proxy

[analytics]
;onanalytics ansible_ssh_host=192.168.69.102 private_ip_address=10.10.39.18 role_opennac=analy+agg
;onanalytics01 ansible_ssh_host=10.10.36.137 private_ip_address=192.168.69.11 role_opennac=analytics zone_id=1
;onanalytics02 ansible_ssh_host=10.10.36.138 private_ip_address=192.168.69.12 role_opennac=analytics zone_id=1
;onanalytics03 ansible_ssh_host=10.10.36.139 private_ip_address=192.168.69.13 role_opennac=analytics zone_id=1

[aggregator]
;onaggregator ansible_ssh_host=10.10.36.132 private_ip_address=192.168.69.14 role_opennac=aggregator

[sensor]
;onsensor ansible_ssh_host=192.168.69.103 private_ip_address=10.10.39.19 role_opennac=sensor

[cores:children]
principal
worker
proxy

; Please note that the group "principal" must always be uncommented
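Once the inventory is saved, you can optionally verify that Ansible parses it as expected:

ansible-inventory -i inventory --list    # dumps the parsed inventory as JSON
ansible-inventory -i inventory --graph   # shows the group/host hierarchy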

2.1.1.2.5.2. Build the Ansible configuration file

Similar to the inventory setup, copy the ansible.cfg.sample file to ansible.cfg and then edit it to include the path to your private key (the SSH key whose public key you previously copied to the servers).

Also indicate the path to your inventory file. There are more variables in this file you may want to change, but these are the recommended and basic ones. Always check the .sample file first for any potential updates.

cp ansible.cfg.sample ansible.cfg
vim ansible.cfg

ansible.cfg.sample file:

[defaults]
timeout = 30
inventory = inventory
host_key_checking = False
remote_user = root
private_key_file = ~/.ssh/id_rsa
log_path = /var/log/ansible.log
roles_path = ./roles

[ssh_connection]
control_path = %(directory)s/%%h-%%r
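With ansible.cfg and the inventory in place, a quick connectivity test from the opennacansible8-1.2.3/ directory helps catch SSH or key problems early (optional):

ansible all -m ping    # every node declared in the inventory should answer with "pong"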

2.1.1.2.5.3. Configure the variables

There is a vars/ directory inside opennacansible8-1.2.3/ in which you will find the variables. It is crucial to look at all the variables and understand their usage, as explained below.

Important

DO NOT comment out or delete the variables unless it is specifically stated that you can delete lines. Never delete a variable name; if you are not going to use a variable, leave it with its default value.

For example, if your deployment does not have a Sensor, leave the Sensor variables with the default value.

These are the variables you will find inside each file:

  • vars_analytics.yml: Variables related to the Analytics, Aggregator, Analytics+Aggregator, and Sensor deployments, including the ones for the installation (without the OpenNAC OVA deployed), and the ones for the configuration.

  • vars_core.yml: Variables related to Core roles including the ones for the installation (without the OpenNAC OVA deployed), principal configuration, proxy configuration, worker configuration and worker replication.

Edit the necessary files to provide values for the variables (most of them can stay with the default value). A variable misconfiguration can lead to an execution error.
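Before running any playbook, it can be useful to check that no placeholder values remain in the variables files (an illustrative grep; extend the patterns as needed):

grep -n "CHANGE_ME\|user:password" vars/*.yml    # any match still needs a real value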

2.1.1.2.5.3.1. vars_analytics.yml

Common variables (mandatory for every deployment)

##########
# COMMON #
##########

inventory: 'static'
timezone_custom: 'Europe/Madrid'
ntpserv: # A NTP server where you must get the synchronization. Add or delete lines if necessary.
  - 'hora.roa.es'
  - '3.es.pool.ntp.org'


# The version packages that we want to be installed
# It could be the stable version or the testing one
# Change it if necessary
deploy_testing_version: 'yes'

# The necessary user:password to access the repository
repo_auth: 'user:password' # Change to the actual user and password
portal_user: 'admin'
portal_pass: 'opennac'

ansible_ssh_timeout: '7200'

config: 'true'
  • inventory: The default value is static. Set to dynamic when deploying in cloud with AWS and tags.

  • timezone_custom: The timezone where the server is set. You can execute the command timedatectl list-timezones to list valid timezones.

  • ntpserv: NTP servers where you must get the synchronization.

  • deploy_testing_version: Set to “yes” to install the testing version, or “no” to install the stable version.

  • repo_auth: The user/password to access the OpenNAC repository.

  • ansible_ssh_timeout: Timeout duration in seconds to establish an SSH connection. The default value is ‘7200’.

  • config: Whether to configure the nodes during the deployment. The default value is ‘true’.

Warning

For the Ansible playbooks to execute correctly, you must enter valid repository credentials in the repo_auth variable.

Analytics and Aggregator

############################
# ANALYTICS AND AGGREGATOR #
############################

cluster_name: 'on_cluster' # Desired name for the cluster (to configure elasticsearch.yml)
number_of_shards: 6 # The number of shards to split the indexes
number_of_replicas: 1 # At least one replica per primary shard
  • cluster_name: Desired name for the cluster (to configure elasticsearch.yml).

  • number_of_shards: The number of shards to split the indexes

  • number_of_replicas: The number of replicas per primary shard. It should be at least 1.

Sensor

##########
# SENSOR #
##########

LOCAL_MGMT_IFACE: 'ens34'  # From where we are going to access the device for management
SNIFFER_INTERFACE: 'ens35' # The interface that captures the packets

# There are two capture methods: SPAN mode and SSH mode
# If SSH mode deployment wanted, change the following variable to 'SSH'
deployment_mode: 'SPAN'


# SSH MODE (if you have selected deployment_mode: 'SSH')
# To configure /usr/share/opennac/sensor/sniffer.sh
remoteHostIP: '10.10.36.121' # The remote IP of the device that needs to be captured
remoteInterface: 'eth0' # The remote interface where the information is going to be collected
remoteHostPassword: opennac
ansible_ssh_timeout: '7200'

ansible_python_interpreter: '/usr/bin/python3'
  • LOCAL_MGMT_IFACE: The interface used to access the device for management. It is important to select the interface assigned for management; otherwise, the installation will fail.

  • SNIFFER_INTERFACE: The interface that captures packets. It is important to select the assigned SPAN interface; otherwise, the installation will fail.

  • deployment_mode: There are two capture methods: SPAN mode and SSH mode. Set the variable to ‘SSH’ or ‘SPAN’. The default value is “SPAN”.

  • remoteHostIP: The remote IP of the device that needs to be captured (if you have selected deployment_mode: ‘SSH’).

  • remoteInterface: The remote interface where the information is going to be collected (if you have selected deployment_mode: ‘SSH’).

  • remoteHostPassword: The password of the remote host, e.g. opennac (if you have selected deployment_mode: ‘SSH’).

  • ansible_ssh_timeout: Timeout duration in seconds to establish an SSH connection. The default value is ‘7200’.

  • ansible_python_interpreter: Path to Python interpreter ‘/usr/bin/python3’.

2.1.1.2.5.3.2. vars_core.yml

Installation

################
# INSTALLATION #
################

inventory: 'static'
timezone_custom: 'Europe/Madrid'
ntpserv: # A NTP server where you must get the synchronization. Add or delete lines if necessary.
  - 'hora.roa.es'
  - '3.es.pool.ntp.org'

# The version packages that we want to be installed
# It could be the stable version or the testing one
# Change it if necessary
deploy_testing_version: 'no'
repo_auth: 'user:password' # CHANGE the actual user and password
portal_user: 'admin'
portal_pass: 'opennac'
ansible_ssh_timeout: '7200'

config: 'true'

Principal configuration

###########################
# PRINCIPAL CONFIGURATION #
###########################

criticalAlertEmail: 'notify1@opennac.org,notify2@opennac.org'
criticalAlertMailTitle: 'openNAC policy message [%MSG%]'
criticalAlertMailContent: 'Alert generated by policy [%RULENAME%], on %DATE%.\n\nData:\nMAC: %MAC%\nUser: %USERID%\nIP Switch: %SWITCHIP%\nPort: %SWITCHPORT% - %SWITCHPORTID%\n'

# Variables to configure /etc/raddb/clients.conf
clients_data:
  -
    ip: '192.168.0.0/16'
    shortname: 'internal192168'
    secret: 'testing123'
  -
    ip: '10.10.36.0/24'
    shortname: 'internal1010'
    secret: 'testing123'
  -
    ip: '172.16.0.0/16'
    shortname: 'internal17216'
    secret: 'testing123'
  -
    ip: '10.0.0.0/8'
    shortname: 'internal10'
    secret: 'testing123'

# Variables to configure /etc/postfix/main.cf and /etc/postfix/generic
relayhostName: 'relay.remote.com'
relayhostPort: '25'
mydomain: 'acme.local'
emailAddr: 'openNAC@notifications.mycompany.com'
  • criticalAlertEmail: Email or set of emails where you want to receive alerts.

  • criticalAlertMailTitle: Title of the alert email.

  • criticalAlertMailContent: Content of the critical email.

  • clients_data: To configure /etc/raddb/clients.conf, add as many clients as you need

    • ip: ‘X.X.X.X/X’

    • shortname: desired client name

    • secret: desired password

  • relayhostName: FQDN of the SMTP server used to relay the emails (the next-hop destination for non-local email). Used to configure /etc/postfix/main.cf and /etc/postfix/generic.

  • relayhostPort: Port of the “relayhostName” server used to relay the emails. Used to configure /etc/postfix/main.cf and /etc/postfix/generic.

  • mydomain: The mydomain parameter specifies the local internet domain name. Used to configure /etc/postfix/main.cf and /etc/postfix/generic.

  • emailAddr: The email address used as the sender of the alert emails. Used to configure /etc/postfix/main.cf and /etc/postfix/generic.

Worker replication

######################
# WORKER REPLICATION #
######################

mysql_replication_password: opennac
mysql_root_password: opennac # Password for mysql root
mysql_replication_password_nagios: 'Simpl3PaSs'
path: /tmp/ # The path to save the dump .sql file
  • mysql_replication_password: Password for mysql replication.

  • mysql_root_password: Password for mysql root user.

  • mysql_replication_password_nagios: Password for mysql nagios user.

  • path: The path to save the dump .sql file.

Proxy

#######################
# PROXY CONFIGURATION #
#######################

sharedkey: 'CHANGE_ME' # The string to encrypt the packets between the Proxy Servers and Backends

# PROXY.CONF
# Edit the following lines in order to configure /etc/raddb/proxy.conf
# You may either need to add new lines (follow the "-" structure) or to delete some.
pools_data:
  -
    namepool: 'auth'
    namerealm: 'DEFAULT'


# CLIENTS.CONF
# Edit the following lines in order to configure /etc/raddb/clients.conf
# You may either need to add new lines or to delete some.
# Follow the structure indicated below:
clients_data_PROXY:
  -
    ip: '192.168.0.0/16'
    shortname: 'internal192168'
    secret: '{{ sharedkey }}'
  -
    ip: '172.16.0.0/16'
    shortname: 'internal17216'
    secret: '{{ sharedkey }}'
  -
    ip: '10.0.0.0/8'
    shortname: 'internal10'
    secret: '{{ sharedkey }}'

ansible_python_interpreter: '/usr/bin/python3'
  • sharedkey: The string to encrypt the packets between the Proxy Servers and Backends.

  • pools_data: To configure /etc/raddb/proxy.conf, add or delete as many as you need

    • namepool: the name of the pool

    • namerealm: the name of the realm

  • clients_data_PROXY: To configure /etc/raddb/clients.conf, add or delete as many clients as you need

    • ip: ‘X.X.X.X/X’

    • shortname: desired client name

    • secret: the previously defined shared key (do not change this variable)

2.1.1.2.6. Launch the playbooks

To launch the playbooks, go to the previously downloaded opennacansible8-1.2.3/ folder.
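Optionally, before the actual run, you can check the playbook syntax and review which hosts will be affected (both are standard ansible-playbook options):

ansible-playbook all_deploy.yml --syntax-check   # parses the playbook without executing anything
ansible-playbook all_deploy.yml --list-hosts     # lists the target hosts taken from your inventory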

Warning

This will only deploy the new nodes added to the OpenNAC Enterprise environment.

Running the following command will deploy all the nodes you have previously declared in your inventory:

ansible-playbook all_deploy.yml -e "config=false"
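If you only want to act on a subset of the nodes declared in the inventory, you can restrict the run with the standard --limit option (the host names below are taken from the inventory example above and are purely illustrative):

ansible-playbook all_deploy.yml -e "config=false" --limit "onworker03,onproxy"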

The playbook execution output should look as follows. This example configures onprincipal, onanalytics, and onsensor:

../../../_images/all_deploy_config_false.png


Once the installation process finishes, the result should look as follows:

../../../_images/all_deploy_output.png


Note

The numbers of each “ok”, “changed”, “skipped” or “ignored” may vary depending on the playbook, number of nodes, etc.

The only numbers that must always be 0 are unreachable and failed. Otherwise, something went wrong and you should review the tasks.

Now that you have deployed the nodes, it is necessary to configure them into the OpenNAC Enterprise environment. At this point, the scenario is the same as the OVA configuration. To continue with the configuration, follow the instructions described in the Configuration from OVA section.