2.3.1.2.2. ON Core worker
In this section, we explain the process to follow in order to deploy a worker node in a working ON Core Cluster.
First, you have to create an ON Core instance that will act as the Core Principal.
Once the Principal is up and running, deploy an ON Core Worker instance.
Establish a secure connection between the different nodes.
Once you have the Principal and the Worker up and running, you must establish a load balancing policy.
This kind of deployment can be followed with or without a Proxy RADIUS architecture.
To add and deploy a new node, follow these steps:
2.3.1.2.2.1. MySQL replication
2.3.1.2.2.1.1. On all Servers
Stop services:
systemctl stop opennac
systemctl stop gearmand
systemctl stop httpd
systemctl stop radiusd
2.3.1.2.2.1.2. ON Principal Configuration
Before executing any configuration, we must stop the following services:
systemctl stop opennac
systemctl stop gearmand
systemctl stop httpd
systemctl stop radiusd
Note
Remember to perform the Basic Configuration before proceeding.
Edit the /etc/my.cnf file, uncomment the “Replication (Principal)” section, and make sure the server-id is 1.
vim /etc/my.cnf
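For reference, and by analogy with the “Replication (Worker)” section shown later in this guide, the uncommented Principal section should look similar to the sketch below; verify the exact contents against the section actually shipped in your /etc/my.cnf.

```
#Replication (Principal)
server-id = 1
log_bin = /usr/share/opennac/mysql/mysql-bin.log
binlog_do_db = opennac
expire_logs_days = 5
```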

Restart mysql service.
systemctl restart mysqld
Access mysql.
mysql -u root -p<mysql_root_password> opennac

Grant permissions to the different workers. Execute the following command for each worker, using its IP address.
GRANT REPLICATION SLAVE ON *.* TO 'onworker'@'<worker_ip>' IDENTIFIED BY '<password>';
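With several workers it is easy to miss one. The following sketch (a hypothetical helper, not part of OpenNAC) generates one GRANT statement per worker IP so you can paste the whole list into the MySQL CLI; the IP addresses and password are placeholders to replace with your own values.

```shell
# Hypothetical helper: print one GRANT statement per worker IP.
WORKER_IPS="10.10.36.21 10.10.36.22"   # placeholder worker addresses
REPL_PASSWORD='changeme'               # placeholder; use your vaulted password

STATEMENTS=""
for ip in $WORKER_IPS; do
  # Build the same GRANT shown above, one line per worker
  STATEMENTS="${STATEMENTS}GRANT REPLICATION SLAVE ON *.* TO 'onworker'@'${ip}' IDENTIFIED BY '${REPL_PASSWORD}';
"
done
printf '%s' "$STATEMENTS"
```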

Note
This password must be unique and should be stored somewhere safe, such as a password vault.
It will be used to configure all workers.
Grant privileges
GRANT ALL PRIVILEGES ON opennac.* TO 'admin'@'<worker_ip>' IDENTIFIED BY '<admin_password>';

Note
Run this command for each worker with its own IP address.
The admin_password must match the value of the ‘resources.multidb.dbW.password’ field in the ‘/usr/share/opennac/api/application/configs/application.ini’ file on each worker.

Flush privileges
FLUSH PRIVILEGES;

Still inside MySQL, check the master status and note the File and Position values for later use. Then exit MySQL.
SHOW MASTER STATUS;
exit
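As an illustration only (the values on your system will differ), the output resembles:

```
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      154 | opennac      |                  |
+------------------+----------+--------------+------------------+
```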

Note
Remember that the file and position values will be used in the Worker configuration.
Generate a dump of the OpenNAC Enterprise database.
mysqldump -u root -p<mysql_root_password> opennac > opennac.sql
Insert the firewall rule into the Principal’s iptables.
vim /etc/sysconfig/iptables
Add the following line (where <worker_ip> is the IP address of the worker that will replicate the database).
-A INPUT -s <worker_ip> -p tcp -m state --state NEW -m tcp --dport 3306 -j ACCEPT

Note
You need to configure a rule for each worker device with its own IP address.
Restart the iptables service.
systemctl restart iptables
Now, send this dump to all the workers (where <worker_ip> is the IP address of each worker).
scp opennac.sql root@<worker_ip>:
2.3.1.2.2.2. ON Worker Configuration
Edit the hosts file.
vim /etc/hosts
Change the onprincipal entry to indicate the IP of the ON Principal from which the database will be replicated.
<principal_ip> onprincipal

Import the opennac.sql database file.
mysql -u root -p<mysql_root_password> opennac < opennac.sql
Edit /etc/my.cnf and uncomment the “Replication (Worker)” section.
vim /etc/my.cnf
Change the server-id to a unique number (the Principal uses server-id=1, so if you have 3 workers, the first worker will be 2, the second 3, the third 4, and so on…)

When deploying an ON Core Secondary, add the line log-slave-updates = true to the end of the Replication (Worker) section, so the entire code block reads as follows:
#Replication (Worker)
server-id=X
relay-log = /usr/share/opennac/mysql/mysql-relay-bin.log
log_bin = /usr/share/opennac/mysql/mysql-bin.log
binlog_do_db = opennac
expire_logs_days = 5
slave-net-timeout = 60
log-slave-updates = true
Note
The “log-slave-updates = true” line makes this secondary server, since it is also a worker, write the updates received from the Principal (and applied by the worker’s SQL thread) to its own binary log, so that they can in turn be replicated to other worker servers when this database acts as principal.
Restart the mysql service.
systemctl restart mysqld
Access the MySQL CLI.
mysql -u root -p<mysql_root_password> opennac

Configure the replication.
Here you will use the password created during the ON Principal configuration (the “Grant permissions” step).
The file and position must be the values obtained from the onprincipal when checking the master status.
Stop the worker.
STOP SLAVE;

Synchronize the file and position with the principal database.
CHANGE MASTER TO MASTER_HOST='onprincipal',MASTER_USER='onworker',MASTER_PASSWORD='<password>',MASTER_LOG_FILE='<file>',MASTER_LOG_POS=<position>;
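As a usage sketch with hypothetical values (mysql-bin.000001 and 154 stand in for the File and Position actually reported by your Principal, and the password is the replication password from your vault):

```
CHANGE MASTER TO MASTER_HOST='onprincipal',MASTER_USER='onworker',MASTER_PASSWORD='changeme',MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=154;
```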

Start the worker.
START SLAVE;

Check the replica status and verify that there are no errors.
show slave status\G
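On a healthy worker, the output (abbreviated here to the fields worth checking, with illustrative values) should show both replication threads running and no errors:

```
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
Last_Error:
```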

2.3.1.2.2.3. On all Servers
Start the services on all servers:
systemctl start opennac
systemctl start gearmand
systemctl start httpd
systemctl start radiusd
2.3.1.2.2.4. Verify Worker Status
From the MySQL CLI on each worker, check that replication is still running and that there are no errors.
show slave status\G
