2.2.1.3. Rocky node deployment
This section covers the automatic deployment of new OpenNAC Enterprise nodes on an empty Rocky Linux 9 system for existing OpenNAC Enterprise environments.
Following this procedure only makes sense in environments where it is not possible to deploy the OpenNAC Enterprise OVAs, such as a cloud environment.
This process will deploy and configure OpenNAC packages on the new nodes.
Ensure that you fulfill the Rocky Linux Deployment Requirements before continuing with the following steps.
2.2.1.3.1. ON Analytics node deployment
Note
If you move from a standalone ON Analytics node to a cluster, keep in mind that the new cluster will be generated without data.
If you want to migrate the data from the standalone Analytics node to the new cluster, you will need to perform a manual data migration process.
The procedure will involve backing up the data from the standalone node, shutting down the node, configuring it to join the cluster, and then restoring the data into the cluster. This process may require some downtime, depending on the size of the data and the network speed.
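As a rough orientation, this migration can be performed with the Elasticsearch snapshot and restore API. The commands below are only a minimal sketch: they assume Elasticsearch answers on localhost:9200 and that a filesystem repository path (/mnt/es_backup here is purely hypothetical) is declared in path.repo and reachable from both the standalone node and the new cluster. Adapt repository, snapshot, and index names to your environment.
# Register a snapshot repository on the standalone node (hypothetical path)
curl -X PUT "http://localhost:9200/_snapshot/migration_repo" -H 'Content-Type: application/json' -d '{"type": "fs", "settings": {"location": "/mnt/es_backup"}}'
# Take a snapshot of the indices
curl -X PUT "http://localhost:9200/_snapshot/migration_repo/snapshot_1?wait_for_completion=true"
# Once the new cluster is up and the same repository is registered on it, restore the data
curl -X POST "http://localhost:9200/_snapshot/migration_repo/snapshot_1/_restore"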
2.2.1.3.2. Creating and copying the SSH key pair
Note
Perform the following steps on the ON Core Principal node.
If you do not have an SSH key, execute the following command on the console:
ssh-keygen -t rsa -C ansibleKey
Answer all questions by pressing "Enter" to use the default values. If an SSH key pair already exists, you will be asked whether you want to overwrite it; answer "y" to do so.
Now, copy the generated public key to the nodes:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<nodes_IP>
Where <nodes_IP> are the IPs of all the available nodes: the ON Principal itself, ON Worker, ON Proxy, ON VPNGW, ON Captive, ON Analytics, and/or ON Sensor.
Note
When copying the keys, it’s crucial to do so for all nodes, including the ON Core Principal itself from where the operation is executed.
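For example, if the new environment consists of the Principal itself, one worker, and two analytics nodes, the key can be copied in a single loop (the IP addresses below are purely illustrative):
for ip in 10.10.36.101 10.10.36.122 10.21.65.228 10.21.65.229; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip
done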

2.2.1.3.3. Prerequisites
Note
Before you proceed with any further steps, ensure that you fulfill the following prerequisites.
GENERAL
These are the prerequisites required to be executed on each node that will be deployed:
1. If the machines you are configuring belong to a network that accesses the Internet through a proxy, you must configure the proxy so that wget can download files. To do so, follow these instructions:
Open the wget file:
vi /etc/wgetrc
Within the file, edit the following lines and replace the placeholders with your proxy configuration:
https_proxy = {{ your_https_proxy }}
http_proxy = {{ your_http_proxy }}
Still within the file, ensure the following line is set to on:
# If you do not want to use proxy at all, set this to off.
use_proxy = on
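For example, with a hypothetical proxy listening on proxy.example.local port 3128, the relevant lines in /etc/wgetrc would end up looking like this:
https_proxy = http://proxy.example.local:3128/
http_proxy = http://proxy.example.local:3128/
use_proxy = on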
2. Create the SSHD configuration file to enable SSH access with root login on all nodes you are deploying. This configuration is essential because all other nodes are accessed from the Principal node.
vi /etc/ssh/sshd_config.d/00-opennac.conf
# openNAC SSHD config file
#
# From "man sshd_config": For each keyword, the first obtained value will be used.
PermitRootLogin yes
GSSAPIAuthentication no
RekeyLimit 1G 1h
Banner /etc/issue
Restart SSHD:
systemctl restart sshd
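Optionally, you can validate the SSHD configuration syntax and confirm that the service is running:
sshd -t
systemctl status sshd
sshd -t prints nothing when the configuration is valid; any syntax error in 00-opennac.conf will be reported there.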
2.2.1.3.4. Provisioning Ansible
After performing the previous steps, you can proceed to get the Ansible playbooks from our public repository by executing the following commands in the specified order:
Note
Download the Ansible file on the ON Core Principal machine, since the playbooks will be launched from this node.
wget --user "<username>" --ask-password https://repo-opennac.opencloudfactory.com/1.2.4/opennacansible8-1.2.4.zip
unzip -d opennacansible8-1.2.4 opennacansible8-1.2.4.zip
cd opennacansible8-1.2.4
2.2.1.3.5. Configuration steps
The Ansible package downloaded in the previous step contains many files; the important ones are the following:
Deployment files: They will carry out the entire deployment (installation and configuration). You will see the following deployment playbooks:
all_core_deploy.yml: Configures principal, worker, and proxy.
all_analytics_deploy.yml: Configures analytics, aggregator, analy+agg, and sensor.
all_captive_deploy.yml: Configures captive.
all_vpngw_deploy.yml: Configures vpngw.
all_deploy.yml and all_config.yml: Execute configurations for all roles.
Note
The all_deploy.yml playbook automatically calls the all_config.yml playbook. As a result, there is no need to run the configuration playbook separately if you have already executed the deployment playbook.
Inventory: In the inventory.sample file you will find the server names and IP addresses of the nodes.
Ansible configuration: The ansible.cfg.sample file contains the basic Ansible configuration. Here, you need to specify the path to the previously created SSH key.
Variables files: Inside the vars/ directory, you will find files containing variables. Replace the values of these variables as explained in more detail later.
Carefully follow the steps in the specified order.
2.2.1.3.5.1. Build the inventory file
Note
You should only fill the inventory with the nodes you are going to deploy.
Use the servers' IPs to populate the inventory. First, copy inventory.sample to inventory and then edit the file (using a tool like Vim, for instance) to add the IPs as shown below. Always check the .sample file first for any potential updates.
cp inventory.sample inventory
vim inventory
You will see the following file:
; This is the file where you must declare your server names and IP addresses
; The general syntax followed is:
; [group]
; <hostname> ansible_ssh_host=<SSH_IP> private_ip_address=<PRIVATE_IP> role_opennac=<role>
; The extra parameter zone_id only goes in analytics servers when configuring an ELK HA architecture
; The hostname chosen will be changed on the server
; In some cases, public and private IP may be the same
; The role_opennac can be one of these: [ aggregator | analytics | analy+agg | sensor | principal | proxy | worker | vpngw ]
[principal]
;onprincipal ansible_ssh_host=192.168.69.101 private_ip_address=10.10.39.101 role_opennac=principal
[worker]
;for a single worker
;onworker ansible_ssh_host=10.10.36.120 private_ip_address=192.168.69.120 role_opennac=worker
;for more than one worker
;onworker01 ansible_ssh_host=10.10.36.121 private_ip_address=192.168.69.121 role_opennac=worker
;onworker02 ansible_ssh_host=10.10.36.122 private_ip_address=192.168.69.122 role_opennac=worker
;onworker03 ansible_ssh_host=10.10.36.123 private_ip_address=192.168.69.123 role_opennac=worker
[proxy]
;for a single proxy
;onproxy ansible_ssh_host=10.10.36.133 private_ip_address=192.168.69.133 role_opennac=proxy
;for more than one proxy
;onproxy01 ansible_ssh_host=10.10.36.134 private_ip_address=192.168.69.134 role_opennac=proxy
;onproxy02 ansible_ssh_host=10.10.36.135 private_ip_address=192.168.69.135 role_opennac=proxy
[captive]
;for a single captive
;oncaptive ansible_ssh_host=10.10.36.140 private_ip_address=192.168.69.140 role_opennac=captive
;for more than one captive
;oncaptive01 ansible_ssh_host=10.10.36.141 private_ip_address=192.168.69.141 role_opennac=captive
;oncaptive02 ansible_ssh_host=10.10.36.142 private_ip_address=192.168.69.142 role_opennac=captive
;oncaptive03 ansible_ssh_host=10.10.36.143 private_ip_address=192.168.69.143 role_opennac=captive
[analytics]
;for a single analy+agg
;onanalytics ansible_ssh_host=192.168.69.151 private_ip_address=10.10.39.151 role_opennac=analy+agg
;for a single analytics
;onanalytics ansible_ssh_host=192.168.69.156 private_ip_address=10.10.39.156 role_opennac=analytics
;for an analytics cluster
;onanalytics01 ansible_ssh_host=10.10.36.152 private_ip_address=192.168.69.152 role_opennac=analytics zone_id=1
;onanalytics02 ansible_ssh_host=10.10.36.153 private_ip_address=192.168.69.153 role_opennac=analytics zone_id=1
;onanalytics03 ansible_ssh_host=10.10.36.154 private_ip_address=192.168.69.154 role_opennac=analytics zone_id=1
[aggregator]
;for a single aggregator
;onaggregator ansible_ssh_host=10.10.36.160 private_ip_address=192.168.69.160 role_opennac=aggregator
;for an aggregator cluster
;onaggregator01 ansible_ssh_host=10.10.36.161 private_ip_address=192.168.69.161 role_opennac=aggregator
;onaggregator02 ansible_ssh_host=10.10.36.162 private_ip_address=192.168.69.162 role_opennac=aggregator
;onaggregator03 ansible_ssh_host=10.10.36.163 private_ip_address=192.168.69.163 role_opennac=aggregator
[sensor]
;for a single sensor
;onsensor ansible_ssh_host=192.168.69.171 private_ip_address=10.10.39.171 role_opennac=sensor
;for more than one sensor
;onsensor01 ansible_ssh_host=192.168.69.172 private_ip_address=10.10.39.172 role_opennac=sensor
;onsensor02 ansible_ssh_host=192.168.69.173 private_ip_address=10.10.39.173 role_opennac=sensor
[vpngw]
;for a single vpngw
;onvpngw ansible_ssh_host=10.10.36.181 private_ip_address=192.168.69.181 role_opennac=vpngw root_ssh_password=password farm_name=farm01
;for more than one vpngw
;onvpngw01 ansible_ssh_host=10.10.36.182 private_ip_address=192.168.69.182 role_opennac=vpngw root_ssh_password=password farm_name=farm01
;onvpngw02 ansible_ssh_host=10.10.36.183 private_ip_address=192.168.69.183 role_opennac=vpngw root_ssh_password=password farm_name=farm01
[cores:children]
principal
worker
proxy
; Please note that the group "principal" must always be uncommented
Understand the structure of the inventory:
<hostname> ansible_ssh_host=<PUBLIC_IP> private_ip_address=<PRIVATE_IP> role_opennac=<role>
<hostname> ansible_ssh_host=<PUBLIC_IP> private_ip_address=<PRIVATE_IP> role_opennac=analytics zone_id=<zone_id>
<hostname> ansible_ssh_host=<PUBLIC_IP> private_ip_address=<PRIVATE_IP> role_opennac=vpngw farm_name=<farm_name> root_ssh_password=<root_ssh_password>
Where:
<hostname>: The name of the server. If the server's current hostname does not match, the playbook will change it to the one you write in the inventory.
<PUBLIC_IP>: The accessible IP needed to make the SSH connection.
<PRIVATE_IP>: The internal IP needed in the servers to fill the /etc/hosts file or to communicate with each other. Sometimes you may not have this IP; in that case, fill it with the <PUBLIC_IP> as well.
<role>: The OpenNAC role needed to know, for example, what healthcheck configures. It can be one of these: [ aggregator | analytics | analy+agg | sensor | principal | proxy | worker | vpngw | captive].
<zone_id>: This special parameter only needs to be written in analytics servers when configuring an ELK HA architecture.
<farm_name> and <root_ssh_password>: These special parameters only need to be written for the vpngw role.
Note
You can add or comment servers according to your needs. Do NOT comment or delete the groups.
Do not add any extra space between strings.
After editing the inventory, remember to save before exiting the file.
The following command will help you see the hosts you have configured in your inventory. Ensure that all the nodes are set correctly.
grep -v ';\|^$' inventory
The output should look like the following one. It displays only the nodes you have in your inventory. In our example, we populate the inventory with one new worker node and two new analytics nodes:
Since we already have one worker in our environment, the new worker node should be placed in the "for more than one worker" group, occupying the position onworker02.
Just like in the worker node case, as we already have one analytics node, the two new analytics nodes should occupy the positions onanalytics02 and onanalytics03.
[root@rocky9base opennacansible8-1.2.4]# grep -v ';\|^$' inventory
[principal]
[worker]
onworker02 ansible_ssh_host=10.10.36.122 private_ip_address=192.168.69.122 role_opennac=worker
[proxy]
[captive]
[analytics]
onanalytics02 ansible_ssh_host=10.21.65.228 private_ip_address=10.21.65.228 role_opennac=analytics
onanalytics03 ansible_ssh_host=10.21.65.229 private_ip_address=10.21.65.229 role_opennac=analytics
[aggregator]
[sensor]
[vpngw]
[cores:children]
principal
worker
proxy
2.2.1.3.5.2. Build the Ansible configuration file
Similar to the inventory setup, copy the ansible.cfg.sample file to ansible.cfg and then edit it to include the path to your private key (the SSH key that you previously copied to the servers).
Also indicate the path to your inventory file. There are more variables in this file that you may want to change, but these are the recommended and basic ones. Always check the .sample file first for any potential updates.
cp ansible.cfg.sample ansible.cfg
vim ansible.cfg
ansible.cfg.sample file:
[defaults]
timeout = 30
inventory = inventory
host_key_checking = False
remote_user = root
private_key_file = ~/.ssh/id_rsa
log_path = /var/log/ansible.log
roles_path = ./roles
display_skipped_hosts = False
show_task_path_on_failure = True
[ssh_connection]
control_path = %(directory)s/%%h-%%r
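Before moving on to the variables, it is a good idea to confirm that Ansible can reach every node declared in the inventory over SSH. A quick check with the built-in ping module:
ansible all -m ping
All hosts should answer with "pong". If any host is unreachable, review the SSH key distribution and the IPs in your inventory before launching the playbooks.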
2.2.1.3.5.3. Configure the variables
There is a vars/ directory inside the opennacansible8-1.2.4/ directory in which you will find the variables. It is crucial to look at all the variables and understand their usage, as explained below.
[root@localhost ansible]# pwd
opennacansible8-1.2.4/
[root@localhost ansible]# ls -la
total 68
drwxr-xr-x. 5 apache apache 4096 Feb 20 15:11 .
drwxr-xr-x. 21 apache apache 4096 Feb 20 15:11 ..
-rw-r--r-- 1 apache apache 1347 Feb 20 01:25 all_analytics_config.yml
-rw-r--r-- 1 apache apache 593 Feb 20 01:25 all_captive_config.yml
-rw-r--r-- 1 apache apache 936 Feb 20 01:25 all_config.yml
-rw-r--r-- 1 apache apache 1323 Feb 20 01:25 all_core_config.yml
-rw-r--r-- 1 apache apache 509 Feb 20 01:25 all_vpngw_config.yml
-rw-r--r-- 1 apache apache 28161 Feb 20 01:25 analytics_check.yml
-rw-r--r-- 1 apache apache 232 Feb 20 01:25 ansible.cfg.sample
drwxr-xr-x. 3 apache apache 23 Feb 20 01:25 checks
-rw-r--r-- 1 apache apache 3924 Feb 20 01:25 inventory.sample
drwxr-xr-x. 41 apache apache 4096 Feb 20 01:25 roles
drwxr-xr-x. 2 apache apache 144 Feb 20 15:11 vars
[root@localhost ansible]# ll vars/
total 20
-rw-r--r-- 1 apache apache 2252 Feb 20 01:25 vars_NUC.yml
-rw-r--r-- 1 apache apache 1190 Feb 20 01:25 vars_analytics.yml
-rw-r--r-- 1 apache apache 709 Feb 20 01:25 vars_cmix_NUC.yml
-rw-r--r-- 1 apache apache 2195 Feb 20 01:25 vars_core.yml
-rw-r--r-- 1 apache apache 676 Feb 20 01:25 vars_general.yml
-rw-r--r-- 1 apache apache 0 Feb 20 01:25 vars_vpngw.yml
Important
DO NOT comment or delete the variables unless it is specifically said that you can delete lines. Never delete the variable name; if you are not going to use a variable, leave it with its default value.
For example, if your deployment does not have a Sensor, leave the Sensor variables with the default value.
These are the variables you will find inside each file:
vars_general.yml: These are the mandatory common variables for every deployment.
vars_analytics.yml: Variables related to the Analytics, Aggregator, Analytics+Aggregator, and Sensor deployments, including the ones for the installation (without the OpenNAC OVA deployed), and the ones for the configuration.
vars_core.yml: Variables related to Core roles including the ones for the installation (without the OpenNAC OVA deployed), principal configuration, proxy configuration, captive configuration, worker configuration and worker replication.
vars_vpngw.yml: Variables related to the VPNGW deployment, including the ones for the installation (without the OpenNAC OVA deployed), and the ones for the configuration.
Edit the necessary files to provide values for the variables (most of them can stay with the default value). A variable misconfiguration can lead to an execution error.
2.2.1.3.5.3.1. vars_general.yml
vim vars/vars_general.yml
Common variables (mandatory for every deployment)
##########
# COMMON #
##########
inventory: 'static'
timezone_custom: 'Europe/Madrid'
# A NTP server where you must get the synchronization. Add or delete lines if necessary.
ntpserv:
- 'hora.roa.es'
- '3.es.pool.ntp.org'
# The version packages that we want to be installed
# It could be the stable version or the testing one
# Change it if necessary
deploy_testing_version: 'no'
# The necessary user:password to access the repository
# Change to the actual repo user
repo_user: 'user'
# Change to the actual repo password
repo_pass: 'password'
# The portal password for the user admin
portal_pass: 'opennac'
# Configure nodes if in deploy
config: 'true'
# Collectd password
collectd_password: "changeMeAsSoonAsPossible"
# Do not touch the following variables
ansible_ssh_timeout: '7200'
ansible_python_interpreter: '/usr/bin/python3'
inventory: The default value is static. Set to dynamic when deploying in cloud with AWS and tags.
timezone_custom: The timezone where the server is set. You can execute the command timedatectl list-timezones to list valid timezones.
ntpserv: NTP servers where you must get the synchronization.
deploy_testing_version: Set to “yes” if you want to use the testing version, or “no” for the stable version. The default value is “no” as it is the stable version.
repo_user: The user to access the OpenNAC repository.
repo_pass: The password to access the OpenNAC repository.
portal_pass: The password to access the OpenNAC portal.
config: Configure nodes if in deploy, the default value is set to ‘true’.
collectd_password: The CollectD password.
ansible_ssh_timeout: Timeout duration in seconds to establish an SSH connection. The default value is '7200'.
ansible_python_interpreter: Path to Python interpreter ‘/usr/bin/python3’.
Warning
For the correct Ansible playbooks execution, it is necessary to properly enter the repository credentials in the repo_user and repo_pass variables.
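If you want to double-check the credentials before launching the playbooks, you can test them against the same repository URL used in the Provisioning Ansible step (adjust the version if needed); curl will prompt for the password:
curl -u "<username>" -I https://repo-opennac.opencloudfactory.com/1.2.4/opennacansible8-1.2.4.zip
An HTTP 200 response indicates that the credentials are valid.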
2.2.1.3.5.3.2. vars_analytics.yml
vim vars/vars_analytics.yml
Analytics and Aggregator
############################
# ANALYTICS AND AGGREGATOR #
############################
# Desired name for the cluster (to configure elasticsearch.yml)
cluster_name: 'on_cluster'
# The number of shards to split the indexes
number_of_shards: 6
# At least one replica per primary shard
number_of_replicas: 1
cluster_name: Desired name for the cluster (to configure elasticsearch.yml).
number_of_shards: The number of shards to split the indexes.
number_of_replicas: The number of replicas per primary shard. It should be at least 1.
These variables only impact clusters. You can leave the default values. Increasing the number of shards can improve load distribution in a cluster environment, while configuring replicas ensures redundancy and fault tolerance.
For example, increasing the number of shards in an environment with 5 analytics nodes can help distribute the load evenly among nodes, ensuring availability. For further details, see the ON Analytics cluster section.
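Once the cluster has been deployed, you can verify how the shards and replicas were actually allocated. The following assumes Elasticsearch answers on port 9200 of an analytics node:
curl -X GET "http://localhost:9200/_cluster/health?pretty"
curl -X GET "http://localhost:9200/_cat/shards?v"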
Sensor
##########
# SENSOR #
##########
# The interface that captures the packets, if it is not valid, it will be taken automatically
SNIFFER_INTERFACE: 'ens35'
# There are two capture methods: SPAN mode and SSH mode
# If SSH mode deployment wanted, change the following variable to 'SSH'
deployment_mode: 'SPAN'
# SSH MODE (if you have selected deployment_mode: 'SSH')
# To configure /usr/share/opennac/sensor/sniffer.sh
remoteHostIP: '10.10.36.121' # The remote IP of the device that needs to be captured
remoteInterface: 'eth0' # The remote interface where the information is going to be collected
remoteHostPassword: opennac
SNIFFER_INTERFACE: The interface that captures packets. It is important to set it to the assigned SPAN interface; otherwise, an interface will be selected automatically.
deployment_mode: There are two capture methods: SPAN mode and SSH mode. Set this variable to 'SSH' or 'SPAN'. The default value is 'SPAN'.
remoteHostIP: The remote IP of the device that needs to be captured (if you have selected deployment_mode: 'SSH').
remoteInterface: The remote interface where the information is going to be collected (if you have selected deployment_mode: 'SSH').
remoteHostPassword: The password used to connect to the remote host (if you have selected deployment_mode: 'SSH').
2.2.1.3.5.3.3. vars_core.yml
vim vars/vars_core.yml
Principal configuration
###########################
# COMMON #
###########################
# Mail variables
criticalAlertEmail: 'notify1@opennac.org,notify2@opennac.org'
criticalAlertMailTitle: 'openNAC policy message [%MSG%]'
criticalAlertMailContent: 'Alert generated by policy [%RULENAME%], on %DATE%.\n\nData:\nMAC: %MAC%\nUser: %USERID%\nIP Switch: %SWITCHIP%\nPort: %SWITCHPORT% - %SWITCHPORTID%\n'
# CLIENTS.CONF
# Edit the following lines in order to configure /etc/raddb/clients.conf
# You may either need to add new lines or to delete some.
# Follow the structure indicated below:
clients_data:
-
ip: '192.168.0.0/16'
shortname: 'internal192168'
secret: 'testing123'
-
ip: '172.16.0.0/16'
shortname: 'internal17216'
secret: 'testing123'
# Variables to configure /etc/postfix/main.cf and /etc/postfix/generic
relayhostName: 'relay.remote.com'
relayhostPort: '25'
mydomain: 'acme.local'
emailAddr: 'openNAC@notifications.mycompany.com'
criticalAlertEmail: Email or set of emails where you want to receive alerts.
criticalAlertMailTitle: Title of the alert email.
criticalAlertMailContent: Content of the critical email.
clients_data: To configure /etc/raddb/clients.conf, add as many clients as you need
ip: ‘X.X.X.X/X’
shortname: desired client name
secret: desired password
relayhostName: FQDN of the SMTP server to relay the emails (next-hop destination(s) for non-local email). Configure /etc/postfix/main.cf and /etc/postfix/generic.
relayhostPort: Port of the “relayhostName” to relay the emails on. Configure /etc/postfix/main.cf and /etc/postfix/generic.
mydomain: The mydomain parameter specifies the local internet domain name. Configure /etc/postfix/main.cf and /etc/postfix/generic.
emailAddr: The email address used as the sender of the alert emails. Configure /etc/postfix/main.cf and /etc/postfix/generic.
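Once the playbooks have applied these values, a quick way to confirm what Postfix actually picked up is to query the resulting configuration on the node (assuming Postfix is present on it):
postconf relayhost mydomain
The output should show the relay host and domain you defined above.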
Worker replication
########################
# MYSQL #
########################
# mysql passwords
mysql_root_password: "opennac" # Password for mysql root
mysql_healthcheck_password: "Simpl3PaSs"
mysql_replication_password: "opennac"
mysql_opennac_service_password: "opennac"
path: /tmp/ # The path to save the dump .sql file
# Only necessary if you are going to change the mysql password with the change_passwords role
# If that is not the case, leave it as default
# If you have a virgin OVA, the old root password will be opennac
# It is important that all the nodes with mysql have the same root password configured
current_mysql_root_password: "opennac"
mysql_root_password: Password for mysql root user.
mysql_healthcheck_password: Password for the healthcheck service.
mysql_replication_password: Password for the MySQL replication user.
mysql_opennac_service_password: Password for the OpenNAC Enterprise API user.
path: The path to save the dump .sql file.
current_mysql_root_password: Only necessary if you are going to change the mysql password with the change_passwords role. If that is not the case, leave it as default.
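After the worker replication has been configured, its status can be checked from a worker node with the MySQL/MariaDB client, using the root password defined above (a minimal check, assuming local client access):
mysql -u root -p -e "SHOW SLAVE STATUS\G"
Slave_IO_Running and Slave_SQL_Running should both report Yes.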
Proxy
#######################
# PROXY CONFIGURATION #
#######################
proxy_workers_radius_sharedkey: 'CHANGE_ME' # The string to encrypt the packets between the Proxy Servers and Backends
# PROXY.CONF
# Edit the following lines in order to configure /etc/raddb/proxy.conf
# You may either need to add new lines (follow the "-" structure) or to delete some.
pools_data:
-
namepool: 'auth'
namerealm: 'DEFAULT'
# CLIENTS.CONF
# Edit the following lines in order to configure /etc/raddb/clients.conf
# You may either need to add new lines or to delete some.
# Follow the structure indicated below:
clients_data_PROXY:
-
ip: '192.168.0.0/16'
shortname: 'internal192168'
secret: 'testing123'
-
ip: '172.16.0.0/16'
shortname: 'internal17216'
secret: 'testing123'
proxy_workers_radius_sharedkey: The string to encrypt the packets between the Proxy Servers and Backends.
pools_data: To configure /etc/raddb/proxy.conf, add or delete as many as you need
namepool: the name of the pool
namerealm: the name of the realm
clients_data_PROXY: To configure /etc/raddb/clients.conf, add or delete as many clients as you need. If you have a proxy, here you will configure the proxy IP and password.
ip: ‘X.X.X.X/X’
shortname: desired client name
secret: the previously defined shared key (do not change this variable)
2.2.1.3.5.3.4. vars_vpngw.yml
Currently, there are no particular variables for the VPNGW.
2.2.1.3.6. Launch the playbooks
Note
Make sure to launch Ansible within a screen. This is to ensure continuity in case the connection to the Principal is lost.
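For example, a detachable session can be created and later reattached as follows (the session name is arbitrary):
screen -S opennac_deploy
screen -r opennac_deploy
Run the ansible-playbook command inside the session; if the connection drops, reattach with the second command.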
To launch the playbooks, go to the previously downloaded opennacansible8-1.2.4/ folder.
Warning
This will only deploy the new nodes added to the OpenNAC Enterprise environment.
Running the following command will deploy all the nodes you have previously declared in your inventory:
ansible-playbook all_deploy.yml -e "config=false"
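If you need to restrict the run to only some of the hosts declared in the inventory, the standard --limit option of ansible-playbook can be used. The hostname below is the hypothetical worker from the earlier inventory example:
ansible-playbook all_deploy.yml -e "config=false" --limit onworker02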
The playbook execution output should look as follows. This example deploys onprincipal, onanalytics, and onsensor:

Once the Ansible execution process finishes, the result should look as follows:

Note
The numbers of each "ok", "changed", "skipped", or "ignored" may vary depending on the playbook, number of nodes, etc.
The only numbers that must always be 0 are unreachable and failed. Otherwise, something went wrong and you should review the tasks.
2.2.1.3.7. Add all nodes to the inventory file
Now that you have deployed the nodes, the next step is to configure them within the OpenNAC Enterprise environment. To do this, you need to add all the nodes of the environment to the inventory file. This includes both the original nodes and the newly added ones.
You will repeat the steps you performed in the Build the inventory file section, but this time including all the nodes.
Use the servers' IPs to populate the inventory. First, copy inventory.sample to inventory and then edit the file (using a tool like Vim, for instance) to add the IPs as shown below. Always check the .sample file first for any potential updates.
cp inventory.sample inventory
vim inventory
To help you organize your configuration flow, you can execute a command to visualize which nodes you have to configure in your file.
grep -v ';\|^$' inventory
The output would look like the following one. It displays only the nodes you have in your environment. In our example, 1 onprincipal, 2 onworker, 1 onproxy, and 1 onanalytics:
[root@rocky9base opennacansible8-1.2.4]# grep -v ';\|^$' inventory
[principal]
onprincipal ansible_ssh_host=10.21.65.219 private_ip_address=10.21.65.219 role_opennac=principal
[worker]
onworker01 ansible_ssh_host=10.21.65.235 private_ip_address=10.21.65.235 role_opennac=worker
onworker02 ansible_ssh_host=10.21.65.235 private_ip_address=10.21.65.235 role_opennac=worker
[proxy]
onproxy ansible_ssh_host=10.21.65.202 private_ip_address=10.21.65.202 role_opennac=proxy
[captive]
[analytics]
onanalytics ansible_ssh_host=10.21.65.228 private_ip_address=10.21.65.228 role_opennac=analytics
[aggregator]
[sensor]
[vpngw]
[cores:children]
principal
worker
proxy
The structure followed is:
<hostname> ansible_ssh_host=<PUBLIC_IP> private_ip_address=<PRIVATE_IP> role_opennac=<role>
Where:
<hostname>: The name of the server. If the server's current hostname does not match, the playbook will change it to the one you write in the inventory.
<PUBLIC_IP>: The accessible IP needed to make the SSH connection.
<PRIVATE_IP>: The internal IP needed in the servers to fill the /etc/hosts file or to communicate with each other. Sometimes you may not have this IP; in that case, fill it with the <PUBLIC_IP> as well.
<role>: The OpenNAC role needed to know, for example, what healthcheck configures. It can be one of these: [ aggregator | analytics | analy+agg | sensor | principal | proxy | worker | vpngw | captive].
Note
The special parameter zone_id only needs to be written in analytics servers when configuring an ELK HA architecture.
The two extra parameters farm_name and root_ssh_password only need to be written for the vpngw role.
You can add or comment servers according to your needs. Do NOT comment or delete the groups.
2.2.1.3.8. Launch the playbooks and deploy all nodes
Note
Make sure to launch Ansible within a screen. This is to ensure continuity in case the connection to the Principal is lost.
To launch the playbooks, go to the previously downloaded opennacansible8-1.2.4/ folder.
Running the following command will configure all the nodes you have declared in your inventory:
ansible-playbook all_config.yml
The playbook execution output should look as follows. This example configures onprincipal, onanalytics, and onsensor:

Note
The numbers of each "ok", "changed", "skipped", or "ignored" may vary depending on the playbook, number of nodes, etc.
The only numbers that must always be 0 are unreachable and failed. Otherwise, something went wrong and you should review the tasks.