2.3.1.1. ON Core basic configuration

This section describes the basic configuration of the ON Core component.

The default SSH username/password for all images is root/opennac.

The default HTTPS username/password for the Principal node is admin/opennac.

Warning

Change the default passwords to stronger ones as soon as possible.

2.3.1.1.1. Network configuration

ON Core OVAs come configured with two network interfaces (eth0 and eth1). We can use the eth0 interface as a management and communication interface between nodes.

We will edit the network configuration of the eth0 interface (address 192.168.56.254/24 is taken as an example):

To assign this IP, we launch the NetworkManager text user interface:

nmtui

In the initial window we select Edit a connection.

../../../_images/core.png


Select the interface and press Edit.

../../../_images/core1.png


In the IPv4 Configuration section we select Manual.

../../../_images/core2.png


We display the IPv4 configuration by selecting the <Show> option.

../../../_images/core3.png


  • Addresses: We add the IP of the node with the corresponding network mask (<IP>/<MASK>).

  • Gateway: We add the gateway of the node.

  • DNS Servers: We add a DNS server (for example, Google's public DNS, 8.8.8.8).

We mark the option Require IPv4 addressing for this connection.

We finish the configuration by selecting <OK> at the bottom. To apply the changes, we must now deactivate and reactivate the interface we have just edited. In the menu, we select the option Activate a connection.

../../../_images/core4.png


We deactivate and activate the interface and return to the initial menu.

../../../_images/core5.png
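
Alternatively, the interface can be bounced non-interactively with nmcli (a sketch assuming the connection profile is named eth0; the actual profile name may differ and can be checked with nmcli connection show):

nmcli connection down eth0 && nmcli connection up eth0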


Now that the node is configured, we can verify it with the ifconfig or ip a command:

../../../_images/core6.png
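
For example, with the sample address used above, we can confirm that the address has been applied (illustrative; the interface name may differ on your deployment):

ip a show eth0 | grep "inet "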


We must modify the /etc/hosts file and include the appropriate IP address of the ON Core identified as onprincipal (127.0.0.1 if there is no cluster infrastructure), as well as the other nodes (onanalytics, onaggregator, onsensor, and onvpngw) if they are part of the modular architecture. In this file, the administration IP of each node is indicated.

vim /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1       onprincipal
192.168.56.253      onanalytics
192.168.56.253      onaggregator
192.168.56.252      onsensor
10.10.10.184        onvpngw
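
As a quick, optional check that name resolution works as expected (illustrative; any of the hostnames above can be used):

getent hosts onanalytics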

Finally, we must edit the iptables rules to enable the corresponding ports.

vim /etc/sysconfig/iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
# Collectd: receive data from all nodes to onprincipal
#-A INPUT -s <other_openNAC> -p udp -m state --state NEW -m udp --dport 25826 -j ACCEPT
# Radius ports
-A INPUT -p udp -m state --state NEW -m udp --dport 1812 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 1813 -j ACCEPT
# Syslog port, when it's necessary receive syslog messages
#-A INPUT -p udp -m state --state NEW -m udp --dport 514 -j ACCEPT
# Snmptrap port
#-A INPUT -p udp -m state --state NEW -m udp --dport 162 -j ACCEPT
# SNMP rule would be enabled only from openNAC Core IPs and monitoring systems
#-A INPUT -s <other_openNAC_Core> -p udp -m state --state NEW -m udp --dport 161 -j ACCEPT
# DHCP port
-A INPUT -p udp -m state --state NEW -m udp --dport 67 -j ACCEPT
# DNS ports
#-A INPUT -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT
#-A INPUT -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT
# SSH port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
# HTTP ports
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
# MySQL port, for replication
#-A INPUT -s <worker_ip> -p tcp -m state --state NEW -m tcp --dport 3306 -j ACCEPT
# Gearman port, to send jobs to queues from external nodes
#-A INPUT -s <external_node> -p tcp -m state --state NEW -m tcp --dport 4730 -j ACCEPT
# Redis port, to access onprincipal cache and send healthcheck status
#-A INPUT -s <other_openNAC> -p tcp -m state --state NEW -m tcp --dport 6379 -j ACCEPT
# Log rule, to enable when unexpected REJECTs are processed
#-A INPUT -j LOG
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

We need to make sure that the firewalld service is stopped and disabled:

systemctl stop firewalld
systemctl disable firewalld

And then restart the iptables service and the related services:

systemctl restart iptables
systemctl restart php-fpm
systemctl restart mysql
systemctl restart redis
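
Optionally, we can list the active INPUT rules to confirm that the new rule set has been loaded:

iptables -L INPUT -n --line-numbers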

2.3.1.1.2. Change the hostname

To modify the hostname of the machine, the new hostname must already have been added to the /etc/hosts file as a local address (127.0.0.1), as described in the previous section.

Then we modify the system hostname with the following command:

hostnamectl set-hostname <HOSTNAME>

Once the hostname has been modified, a reboot is necessary to apply the changes.

reboot

To verify the hostname and obtain information about the machine, we can use the hostnamectl command.
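
For example, to confirm that the new hostname has been applied (assuming onprincipal was set above):

hostnamectl | grep "Static hostname"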

2.3.1.1.3. NTP Configuration

First, we must stop the NTP service before modifying its parameters. We must then specify a valid NTP server, for example hora.roa.es:

systemctl stop chronyd
  • If you have your own NTP server, it can be configured.

The /etc/chrony.conf file is modified, and the appropriate servers are included:

server <ip_server_ntp>

We can also add an NTP pool with the line:

pool <IP_pool_ntp>
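
For example, using the hora.roa.es server mentioned above and a public pool (illustrative values; replace them with your own NTP sources):

server hora.roa.es iburst
pool 2.pool.ntp.org iburst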

The file is saved and the service is started:

systemctl start chronyd

2.3.1.1.4. Time zone configuration

To modify the time zone of the ON Core node we must execute the following command:

timedatectl set-timezone <Time_Zone>

We can see a full list of available time zones using the command:

timedatectl list-timezones

To verify the correct application of the time zone we can use the command “timedatectl”.
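
For example, assuming the node should use the Madrid time zone (illustrative value):

timedatectl set-timezone Europe/Madrid
timedatectl | grep "Time zone"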

2.3.1.1.5. Repository configuration and corporate proxy

Before updating the nodes, make sure that:

  • The nodes have access to the internet, either directly or through a corporate proxy.

  • We have the credentials to access the OpenNAC Enterprise repositories for node updates.

For this, we can use the configUpdates.sh script available on each node.

In ON Core, we will find the script in:

/usr/share/opennac/api/scripts/configUpdates.sh

Corporate Proxy Authorization:

/usr/share/opennac/api/scripts/configUpdates.sh --proxy --add --proxyurl="http://<PROXY_IP>:<PROXY_PORT>"
../../../_images/core7.png


Repository credentials:

/usr/share/opennac/api/scripts/configUpdates.sh --repo --add
../../../_images/core8.png


We can verify the correct configuration of the repository in the /etc/yum.repos.d/opennac.repo file:

../../../_images/core9.png
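
Optionally, once the credentials (and proxy, if any) are in place, we can confirm that the repository is reachable by listing the enabled repositories:

dnf repolist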


2.3.1.1.6. Update to latest version

One of the recommended steps to take when the system is newly deployed is to update it to the latest version available.

To update the ON Core machine, go to the ON Core Update Section. Once the update is finished, we must continue from this same point.

2.3.1.1.7. Admin Portal Access

Now that we have IP connectivity with ON Core, we can access the Administration portal.

The management portal URL is: https://<ip_management>/admin
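
If no browser is at hand, a quick reachability check can be done from the command line (a sketch; -k is used because the default certificate is assumed to be self-signed):

curl -k -I https://<ip_management>/admin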

ON Core includes default credentials. Use them for the first login and change them as soon as possible.

User: admin
Password: opennac

The first time we access the administration portal, the initial configuration wizard may appear; at that point, we can cancel this configuration.

2.3.1.1.8. Healthcheck configuration

Healthcheck is the system monitoring module. It has service controls that ensure that all necessary system services are running properly.

On a fresh installation of ON Core, the first step is to check if the package is installed:

rpm -qa | grep opennac-healthcheck

If the healthcheck package is not installed, we need to install it:

dnf install opennac-healthcheck -y

After installing the healthcheck, we need to configure it:

  • ON Principal

cd /usr/share/opennac/healthcheck/
cp -rfv healthcheck.ini.principal healthcheck.ini
  • ON Worker

cd /usr/share/opennac/healthcheck/
cp -rfv healthcheck.ini.worker healthcheck.ini
  • ON Principal Secondary

cd /usr/share/opennac/healthcheck/
cp -rfv healthcheck.ini.worker healthcheck.ini
  • ON Proxy

cd /usr/share/opennac/healthcheck/
cp -rfv healthcheck.ini.proxy healthcheck.ini
  • ON Captive Portal

cd /usr/share/opennac/healthcheck/
cp -rfv healthcheck.ini.portal healthcheck.ini
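
As a quick sanity check for any of the roles above, we can confirm that the resulting configuration file is in place:

ls -l /usr/share/opennac/healthcheck/healthcheck.ini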

2.3.1.1.9. Add GPG checks for the repository

To allow verifying the integrity of the software downloaded from the repository, all packages published to it are signed with a GPG key, so they can be checked against the provided public GPG key.

To enable this GPG verification, follow the steps below:

  1. Install or download the public GPG key.

  2. Enable “gpgcheck” in the OpenNAC Enterprise repository (/etc/yum.repos.d/opennac.repo), specifying the public GPG key that will be used.

These steps are done by installing the “opennac-gpg-key” package:

dnf -y install opennac-gpg-key
Once it is installed, you can check it as follows:

# ls /etc/pki/rpm-gpg/RPM-GPG-KEY-OCF
/etc/pki/rpm-gpg/RPM-GPG-KEY-OCF
# cat /etc/yum.repos.d/opennac.repo
[OPENNAC]
name=Opennac repo
baseurl=https://repo-opennac.opencloudfactory.com/x86_64
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OCF
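
Optionally, the downloaded public key can be inspected before it is used (a sketch assuming the gnupg2 tools are installed, which is usually the case):

gpg --show-keys /etc/pki/rpm-gpg/RPM-GPG-KEY-OCF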

2.3.1.1.10. Collectd configuration

Collectd is used to send the trending information to the ON Principal. To configure Collectd it is necessary to edit the following file:

vim /etc/collectd.d/network.conf

If we are configuring a principal, the file should look like this:

# openNAC collectd file
#
# This file *WON'T* be automatically upgraded with default configuration,
# due to is only configured in deployment stage

LoadPlugin "network"

## When this server is a Core Principal or
## collect all data from other nodes
## Default port: 25826
<Plugin "network">
        <Listen "{{ principal_IP }}">
                SecurityLevel "Encrypt"
                AuthFile "/etc/collectd.d/auth_file"
        </Listen>
</Plugin>

## When this server send info to another
## server (ex. a Core Principal or similar)
## Default port: 25826
#<Plugin "network">
#       <Server "onprincipal">
#               SecurityLevel "Encrypt"
#               Username "opennac"
#               Password "changeMeAsSoonAsPossible"
#       </Server>
#</Plugin>

If we are configuring another ON Core role, the configuration should look like this:

# openNAC collectd file
#
# This file *WON'T* be automatically upgraded with default configuration,
# due to is only configured in deployment stage

LoadPlugin "network"

## When this server is a Core Principal or
## collect all data from other nodes
## Default port: 25826
#<Plugin "network">
#        <Listen "{{ principal_IP }}">
#                SecurityLevel "Encrypt"
#                AuthFile "/etc/collectd.d/auth_file"
#        </Listen>
#</Plugin>

## When this server send info to another
## server (ex. a Core Principal or similar)
## Default port: 25826
<Plugin "network">
    <Server "onprincipal">
            SecurityLevel "Encrypt"
            Username "opennac"
            Password "changeMeAsSoonAsPossible"
    </Server>
</Plugin>

To identify this device in the collectd messages it sends, we need to edit the following file:

vi /etc/collectd.conf

We need to set this server's hostname in the Hostname parameter.

#Hostname    "localhost"
Hostname    "oncore-XX"

Finally, it is necessary to restart the collectd service.

systemctl restart collectd
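
On the ON Principal (the node with the Listen block enabled), we can optionally confirm that collectd is listening on the default port:

ss -lunp | grep 25826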