2.3.2.1. ON Analytics basic configuration
This section covers the basic configuration of the ON Analytics component.
The default SSH credentials (username/password) for all images are root/opennac.
Warning
Change the default password to a stronger password.
2.3.2.1.1. Network configuration
ON Analytics OVAs come configured with two network interfaces (eth0 and eth1). We can use the eth0 interface as a management and communication interface between nodes.
We will edit the network configuration of the eth0 interface (address 192.168.56.253/24 is taken as an example):
To assign this IP, we run the text-based network manager:
nmtui
In the initial window we select Edit a connection.

Select the interface and press Edit.

In the IPv4 Configuration section we select Manual.

We display the IPv4 configuration by selecting the <Show> option.

Addresses: We add the IP of the node with the corresponding network mask (<IP>/<MASK>).
Gateway: We add the gateway of the node.
DNS Servers: We add a DNS server (for example, Google).
We mark the option Require IPv4 addressing for this connection.
We finish by selecting <OK> at the bottom. At this point, we must deactivate and reactivate the edited interface to apply the changes. In the menu, we select the Activate a connection option.

We deactivate and activate the interface and return to the initial menu.

Now that the node is configured, we can verify the addressing with the ifconfig or ip a command.
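For reference, the relevant part of the ip a output should look similar to the following (the MAC address is hypothetical and the exact values will differ per deployment):
ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:aa:bb:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.253/24 brd 192.168.56.255 scope global noprefixroute eth0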

We must modify the /etc/hosts file to include the appropriate address of ON Analytics (127.0.0.1 if there is no cluster-mode infrastructure), identified as onanalytics and onaggregator, as well as the other nodes (onprincipal, oncore, onsensor, and onvpngw) when they form part of the module architecture. An example is shown below:
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 onanalytics
127.0.0.1 onaggregator
192.168.56.254 onprincipal
192.168.56.252 onsensor
10.10.10.184 onvpngw
We need to make sure that firewalld is disabled:
systemctl stop firewalld
systemctl disable firewalld
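We can verify that firewalld is stopped and will not start at boot:
systemctl is-active firewalld
systemctl is-enabled firewalld
The commands should report inactive and disabled, respectively.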
2.3.2.1.2. Change the hostname
To modify the hostname of the node we must use the following command:
hostnamectl set-hostname <hostname>
It is recommended to use the same name defined in the previous section (/etc/hosts), in this case, onanalytics.
Once the hostname has been modified, it is necessary to reboot the node to apply the change.
reboot
To verify the hostname and obtain information about the system, we can use the hostnamectl command.
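For reference, a trimmed example of its output (remaining fields omitted):
hostnamectl
   Static hostname: onanalytics
   ...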
2.3.2.1.3. Iptables configuration
It is necessary to configure iptables.
First of all, we need to copy the analy+agg iptables template to the iptables file:
yes | cp /usr/share/opennac/analytics/iptables_analy+agg /etc/sysconfig/iptables
To have the analy+agg iptables configured correctly, we need to substitute the node variables in the following file (a substitution example is shown after the listing):
vi /etc/sysconfig/iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
# SNMP rule would be enabled only from openNAC Core IPs
#-A INPUT -s <oncoreXX_ip> -p udp -m state --state NEW -m udp --dport 161 -j ACCEPT
# SSH port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
# AGGREGATOR ports
## LOGSTASH
#-A INPUT -s <oncoreXX_ip> -p tcp -m state --state NEW -m tcp --dport 5000 -j ACCEPT
#-A INPUT -s <onsensorXX_ip> -p tcp -m state --state NEW -m tcp --dport 5001 -j ACCEPT
# ANALYTICS ports
## KIBANA
#-A INPUT -s <oncoreXX_ip> -p tcp -m state --state NEW -m tcp --dport 5601 -j ACCEPT
## ELASTICSEARCH
#-A INPUT -s <onaggregatorXX_ip> -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT
#-A INPUT -s <oncoreXX_ip> -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT
#### CLUSTER
#-A INPUT -s <onanalyticsXX_cluster_ip> -p tcp -m state --state NEW -m tcp --dport 9300 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
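As an illustration, assuming an ON Core at 192.168.56.254 and an ON Sensor at 192.168.56.252 (the example addresses used in /etc/hosts above), the variables can be substituted and the corresponding rules uncommented with sed. This is only a sketch; adapt the addresses, and handle the <onaggregatorXX_ip> and <onanalyticsXX_cluster_ip> variables the same way in clustered deployments:
sed -i 's/#-A INPUT -s <oncoreXX_ip>/-A INPUT -s 192.168.56.254/g' /etc/sysconfig/iptables
sed -i 's/#-A INPUT -s <onsensorXX_ip>/-A INPUT -s 192.168.56.252/g' /etc/sysconfig/iptables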
Finally, restart the iptables service:
systemctl restart iptables
2.3.2.1.4. NTP Configuration
First, we must stop the NTP service before modifying its parameters:
systemctl stop chronyd
The /etc/chrony.conf file is modified to include a valid NTP server, for example, hora.roa.es. If you have your own NTP server, it can be configured here instead:
server <ip_server_ntp>
We can also add an NTP pool with the line:
pool <IP_pool_ntp>
The file is saved and the service is started:
systemctl start chronyd
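Once the service is running, we can check that the configured servers are reachable and that the clock is synchronizing:
chronyc sources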
2.3.2.1.5. Repository configuration and corporate proxy
Before updating the nodes, it is necessary that:
The nodes have internet access, either directly or through a corporate proxy.
We have access credentials to the OpenNAC Enterprise repositories for solution updates.
For this we can use the configUpdates.sh script available on each node.
In ON Analytics we will find the script in:
/usr/share/opennac/analytics/scripts/configUpdates.sh
Corporate Proxy Authorization:
/usr/share/opennac/analytics/scripts/configUpdates.sh --proxy --add --proxyurl="http://<PROXY_IP>:<PROXY_PORT>"

Repository credentials:
/usr/share/opennac/analytics/scripts/configUpdates.sh --repo --add

We can verify the correct configuration of the repository in the /etc/yum.repos.d/opennac.repo file:
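A minimal sketch of its typical contents (the baseurl, credentials, and gpgcheck value below are placeholders; the real values depend on your subscription):
[opennac]
name = opennac
baseurl = https://<REPO_USER>:<REPO_PASSWORD>@<REPO_URL>
enabled = 1
gpgcheck = 0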

2.3.2.1.6. Update to latest version
One of the recommended steps to take when the system is newly deployed is to update it to the latest version available.
To update the ON Analytics machine, go to the ON Analytics Update Section. Once the update is finished, we must continue from this same point.
2.3.2.1.7. Healthcheck configuration
Healthcheck is the system monitoring module. It has service controls that ensure that all necessary system services are running properly.
On a fresh installation of ON Analytics, the first step is to check if the package is installed:
rpm -qa | grep opennac-healthcheck
If the healthcheck package is not installed, we need to install it:
dnf install opennac-healthcheck -y
After installing it, we need to configure it:
If the ON Aggregator is not included on the same machine:
cd /usr/share/opennac/healthcheck/
cp -rfv healthcheck.ini.analytics healthcheck.ini
cp -rfv application.ini.sample application.ini
If the ON Aggregator is on the same machine as ON Analytics, we use the analy+agg template, which also monitors the logstash and redis services:
cd /usr/share/opennac/healthcheck/
cp -rfv healthcheck.ini.analy+agg healthcheck.ini
cp -rfv application.ini.sample application.ini
2.3.2.1.8. Services Configuration
To start the ON Analytics services, we use the following commands:
systemctl start elasticsearch
systemctl start kibana
If the ON Aggregator is on the same machine as ON Analytics, we also need to start the logstash and redis services:
systemctl start logstash
systemctl start redis
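If any of these services are not already enabled to start at boot (this depends on how the OVA was built), they can be enabled as well:
systemctl enable elasticsearch kibana
systemctl enable logstash redis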
If any issue related to Logstash data processing (or anything else) arises, we can check the /var/log/logstash/logstash-plain.log file.
Communication between Logstash and Kibana with Elasticsearch is done using TCP port 9200.
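We can confirm locally that Elasticsearch is answering on that port:
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
A healthy node reports a status of green or yellow.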
There is a cron task defined in /etc/cron.d/opennac-purge, which checks disk space. If disk occupancy exceeds 90%, data is purged from oldest to newest. This is only a contingency mechanism; proper purging control must be done through the Curator.
There is another cron task defined in /etc/cron.d/opennac-curator, which checks the number of days of stored information categorized by indexes. If you want to modify it, you have to go to /etc/elastCurator/action.yaml and modify the unit_count of each of the indexes:
For example, in the opennac-* index the information is deleted when it has been stored for 15 days by default:
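A sketch of such an action, following the standard Curator action-file format (the exact content of /etc/elastCurator/action.yaml may differ), where unit_count controls the retention in days:
actions:
  1:
    action: delete_indices
    description: Delete opennac-* indices older than 15 days
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: opennac-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 15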

2.3.2.1.9. Unnecessary Services
Since the ON Analytics OVA contains the ON Sensor node services, it is necessary to disable and uninstall them from the system to improve performance.
Disable services:
systemctl stop dhcp-helper-reader
systemctl stop zeek
systemctl stop pf_ring
systemctl stop filebeat
systemctl disable dhcp-helper-reader
systemctl disable zeek
systemctl disable pf_ring
systemctl disable filebeat
Delete services:
dnf remove zeek filebeat opennac-dhcp-helper-reader pfring
When asked if you really want to delete the services, indicate yes by entering the letter Y.
If the ON Aggregator is not used, we also need to disable its services:
systemctl stop logstash
systemctl stop redis
systemctl disable logstash
systemctl disable redis
Redis can’t be uninstalled because we need it for the TIME_SYNC healthcheck.
2.3.2.1.10. Collectd configuration
Collectd is used to send the trending information to the ON Principal. To configure Collectd it is necessary to edit the following file:
vim /etc/collectd.d/network.conf
Inside the file we should have something like:
# openNAC collectd file
#
# This file *WON'T* be automatically upgraded with the default configuration,
# because it is only configured at the deployment stage
LoadPlugin "network"
## When this server is a Core Principal or
## collect all data from other nodes
## Default port: 25826
#<Plugin "network">
# <Listen "{{ principal_IP }}">
# SecurityLevel "Encrypt"
# AuthFile "/etc/collectd.d/auth_file"
# </Listen>
#</Plugin>
## When this server send info to another
## server (ex. a Core Principal or similar)
## Default port: 25826
<Plugin "network">
<Server "onprincipal">
SecurityLevel "Encrypt"
Username "opennac"
Password "changeMeAsSoonAsPossible"
</Server>
</Plugin>
To identify the collectd messages sent by this device, we need to edit:
vi /etc/collectd.conf
We need to add the server's hostname in the Hostname parameter.
#Hostname "localhost"
Hostname "onanalytics-XX"
Finally, it is necessary to restart the collectd service.
systemctl restart collectd