2.3.1.2.2. ON Core worker

This section explains how to deploy a worker node in a working ON Core Cluster.

  • First, create an ON Core instance that will act as the Core Principal.

  • Once the Principal is up and running, deploy an ON Core Worker instance.

  • Establish a secure connection between the different nodes.

Once the Principal and the Worker are up and running, you must establish a load balancing policy.

This type of deployment can be used with or without a Proxy RADIUS architecture.

To add and deploy a new node, follow the steps below:

2.3.1.2.2.1. MySQL replication

2.3.1.2.2.1.1. On all Servers

Stop services:

systemctl stop opennac
systemctl stop gearmand
systemctl stop httpd
systemctl stop radiusd
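
To confirm that the services are stopped, you can check their state; systemctl is-active prints “inactive” for each stopped unit:

systemctl is-active opennac gearmand httpd radiusd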

2.3.1.2.2.1.2. ON Principal Configuration

Before making any configuration changes, make sure the following services are stopped:

systemctl stop opennac
systemctl stop gearmand
systemctl stop httpd
systemctl stop radiusd

Note

Remember to perform the Basic Configuration before proceeding.

  1. Edit the /etc/my.cnf file and uncomment the “Replication (Principal)” section. Make sure the server-id is 1.

vim /etc/my.cnf
../../../_images/princiaplconf.png


  2. Restart the mysql service.

systemctl restart mysqld
  3. Access MySQL.

mysql -u root -p<mysql_root_password> opennac
../../../_images/ddbbin.png


  4. Grant replication permissions to the workers. Execute the following command for each worker, using its IP address.

GRANT REPLICATION SLAVE ON *.* TO 'onworker'@'<worker_ip>' IDENTIFIED BY '<password>';
../../../_images/grantpermissionsprincipal.png


Note

  • This password must be unique; store it somewhere safe, such as a password vault.

  • This password will be used to configure all workers.
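
To verify that the replication user was created for each worker, you can list it from the mysql.user table:

SELECT user, host FROM mysql.user WHERE user = 'onworker';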

  5. Grant privileges.

GRANT ALL PRIVILEGES ON opennac.* TO 'admin'@'<worker_ip>' IDENTIFIED BY '<admin_password>';
../../../_images/grantprivilegesprincipal.png


Note

  • Run this command for each worker, using its own IP address.

  • The admin_password must match the value of the ‘resources.multidb.dbW.password’ field in the file ‘/usr/share/opennac/api/application/configs/application.ini’ on each worker.

../../../_images/principaldbpass.png
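
To look up this value on a worker, you can read the field directly from the file:

grep 'resources.multidb.dbW.password' /usr/share/opennac/api/application/configs/application.ini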


  6. Flush privileges.

FLUSH PRIVILEGES;
../../../_images/flushprivileges.png


  7. Still inside MySQL, check the master status and note the File and Position values for later use. Then exit MySQL.

show master status;

exit
../../../_images/masterstatus.png
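
The output looks similar to the following (illustrative values; your File and Position will differ):

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      613 | opennac      |                  |
+------------------+----------+--------------+------------------+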


Note

Remember that the file and position values will be used in the Worker configuration.

  8. Generate a dump of the OpenNAC Enterprise database.

mysqldump -u root -p<mysql_root_password> opennac > opennac.sql
  9. Insert the firewall rule into the master’s iptables.

vim /etc/sysconfig/iptables

Add the following line, where worker_ip is the IP of the core that contains the replicated database:

-A INPUT -s <worker_ip> -p tcp -m state --state NEW -m tcp --dport 3306 -j ACCEPT
../../../_images/iptables.png


Note

You need to configure a rule for each worker device with its own IP address.

  10. Restart the iptables service.

systemctl restart iptables
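
Once iptables has been restarted, you can confirm the rule is loaded:

iptables -nL INPUT | grep 3306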
  11. Now, send this dump to all the workers, where worker_ip is the IP of the core that will contain the replicated database.

scp opennac.sql root@<worker_ip>:
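
Optionally, compare checksums on both ends to confirm the dump transferred intact (a quick sanity check; any checksum tool will do):

md5sum opennac.sql                          # on the principal
ssh root@<worker_ip> 'md5sum opennac.sql'   # on the worker, after the copy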

2.3.1.2.2.2. ON Worker Configuration

  1. Edit the hosts file.

vim /etc/hosts

Change the onprincipal entry to the IP of the ON Principal from which the database will be replicated.

<principal_ip>     onprincipal
../../../_images/workerconf.png
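
To confirm that the entry resolves as expected before continuing, you can query it:

getent hosts onprincipal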


  2. Import the opennac.sql database file.

mysql -u root -p<mysql_root_password> opennac < opennac.sql
  3. Edit the /etc/my.cnf file and uncomment the “Replication (Worker)” section.

vim /etc/my.cnf

Change the server-id to a unique number (if you have three workers, the master keeps server-id=1, the first worker uses 2, the second 3, the third 4, and so on…).

../../../_images/workerconf1.png


When deploying an ON Core Secondary, add the line log-slave-updates = true at the end of the Replication (Worker) section, so that the entire block looks as follows:

#Replication (Worker)
server-id=X
relay-log = /usr/share/opennac/mysql/mysql-relay-bin.log
log_bin  = /usr/share/opennac/mysql/mysql-bin.log
binlog_do_db = opennac
expire_logs_days = 5
slave-net-timeout = 60
log-slave-updates = true

Note

The “log-slave-updates = true” line makes this secondary server, since it is also a worker, write the updates it receives from the principal (and applies through its SQL thread) to its own binary log. These updates are then replicated to other worker servers when this database acts as principal.

  4. Restart the mysql service.

systemctl restart mysqld
  5. Access the MySQL CLI.

mysql -u root -p<mysql_root_password> opennac
../../../_images/ddbbin.png


  6. Configure the replication.

    • Use the password created in the ON Principal configuration in step 4, Grant permissions.

    • The file and position must be the values obtained from the ON Principal in step 7, when checking the master status.

    • Stop the worker:

STOP SLAVE;
../../../_images/workerconf5.png


  • Synchronize the file and position with the principal database.

CHANGE MASTER TO MASTER_HOST='onprincipal',MASTER_USER='onworker',MASTER_PASSWORD='<password>',MASTER_LOG_FILE='<file>',MASTER_LOG_POS=<position>;
../../../_images/workerconf6.png


  7. Start the worker.

START SLAVE;
../../../_images/workerconf4.png


  8. Check that there are no errors.

show slave status\G
../../../_images/workerconf3.png
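
In the output, the key fields are Slave_IO_Running and Slave_SQL_Running (both should show Yes) and Seconds_Behind_Master (it should be at or near 0). A quick way to filter them from the shell:

mysql -u root -p<mysql_root_password> -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master|Last_Error'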


2.3.1.2.2.3. On all Servers

Start the services on all servers:

systemctl start opennac
systemctl start gearmand
systemctl start httpd
systemctl start radiusd
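
To confirm that all four services came back up, check their state; systemctl is-active should print “active” for each unit:

systemctl is-active opennac gearmand httpd radiusd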

2.3.1.2.2.4. Verify Worker Status

  1. Check that replication is still working and that there are no errors.

show slave status\G
../../../_images/workerconf6.png