- Details
- Category: Information Technology
- Hits: 194
1. Implement a firewall
The de facto firewall for Ubuntu is ufw (Uncomplicated Firewall). I personally love this tool because it is easy to configure and maintain. Remember that CIS Benchmark I have referred to numerous times in this article and others? Yeah. Follow that. There are a couple of other firewall packages referenced in the guides. Pick one and use the recommended settings.
By default, I recommend blocking all inbound traffic from the internet, allowing all outbound traffic from the server, and blocking all but port 22/TCP (for SSH access) and any other required ports on the local network. My personal go-to is to block everything by default and then add rules only for services I explicitly need. In some cases, this will require research into the product or service you are installing on the server. As an example, see the port requirements mentioned in my article on building a highly available cluster for MySQL. There are ports for the SQL service, remote control, and cluster management services. Proper research in advance will save you hours of frustration trying to figure out why something is not working the way it should.
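As a minimal sketch of that default-deny baseline with ufw (the 192.168.1.0/24 subnet is an example; substitute your own local network range):

```shell
# Default-deny baseline: drop everything inbound, allow everything outbound.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH only from the local network (example subnet; adjust to yours).
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp

# Enable the firewall only after the management port is allowed.
sudo ufw enable
```

Add one `ufw allow` line per additional service your research turns up, rather than opening ranges you might not need.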
If you have an existing system that is already providing services on your network and you want to tighten it down, use a tool such as ss or netstat to see which services are listening on which ports and determine what rules should be applied to your firewall. Some traffic never leaves the host; we only need to worry about traffic that is not local.
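For example, ss (the modern successor to netstat, shipped with iproute2) can show exactly what is listening:

```shell
# List listening TCP (-t) and UDP (-u) sockets, numeric ports (-n),
# listening only (-l); -p adds the owning process (needs root for full detail).
ss -tulpn

# Sockets bound to 127.0.0.1 or ::1 are local-only and need no firewall rule;
# anything bound to 0.0.0.0 or your LAN address is a candidate for a rule.
```

On older systems without ss, `netstat -tulpn` produces equivalent output.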
2. Monitor for threats
Aside from the tools recommended in the security guides above, you can utilize free tools such as Greenbone or Wazuh to monitor your systems for potential vulnerabilities or configuration issues that you might want to address within your network. Neither of these tools takes a ton of resources, but the feedback they provide will go a long way toward helping you identify potential holes in your security. I run both of them.
3. Periodically check for CVEs which impact your systems
A great tool that Ubuntu provides is its OVAL reports. These use OpenSCAP to check your systems against a list of known vulnerabilities (CVEs). It will not fix them for you, but it will produce an easy-to-read HTML report you can view in any browser to determine where to focus your patching efforts. Most of these vulnerabilities can be fixed by regularly running an apt upgrade against your servers. Some of them can only be fixed with an ESM subscription, as those repositories are limited to Ubuntu Pro subscribers.
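As a sketch of the workflow, based on Canonical's published OVAL data (the download URL and package name follow Ubuntu's documentation; verify both against your release, as the scanner package name has changed between releases):

```shell
# Install the OpenSCAP scanner (package is openscap-scanner on newer
# releases, libopenscap8 on older ones).
sudo apt install -y openscap-scanner

# Download and unpack the OVAL data matching this Ubuntu release.
wget "https://security-metadata.canonical.com/oval/com.ubuntu.$(lsb_release -cs).usn.oval.xml.bz2"
bunzip2 "com.ubuntu.$(lsb_release -cs).usn.oval.xml.bz2"

# Evaluate the system and write an HTML report you can open in a browser.
oscap oval eval --report report.html "com.ubuntu.$(lsb_release -cs).usn.oval.xml"
```

Re-running the same three commands after patching gives you the before-and-after comparison discussed below.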
I generally recommend a reboot after patching, primarily due to the number of kernel patches that are released. There is no set schedule as there is with Microsoft's Patch Tuesday. This is arguably a good thing, as patches come out more frequently to take care of issues immediately instead of waiting a month or two for a vulnerability to be patched. Others may complain that the lack of any schedule makes checking for patches a daily task to keep their systems current. Follow your best practices or operating procedures to make sure you can hit maintenance windows and avoid impacts to other teams.
It is generally a good practice to re-run the OVAL report after any patching is performed to ensure that the vulnerability is fixed. In some situations, the vulnerability will not be fixed, but the report will provide links to details about the vulnerability which may give you more information on how to mitigate the risk.
Above all, keep in communication with stakeholders for the systems you are patching. Yes, people tend to blame you if a problem occurs after patching. That is just the nature of any job in IT from the support desk staff to the level three engineers. You learn to not let it bother you over time. Communication goes a long way in tempering a lot of those complaints. Just remember to answer all potential questions in your communication — who, what, where, when, and why. Users don’t really care about or need to know the how, but the how should be clearly defined in any change request tickets you may be required to file.
4. Monitoring
Monitoring is one of those tools that you don’t think you need until you realize that you do. It allows you to be more proactive instead of reactive which goes a long way toward showing your value to an employer. Would you rather be known for fixing hundreds of impacts to servers you manage, or for preventing those problems from occurring in the first place?
There are multiple tools that provide monitoring for your infrastructure. My current favorite is Zabbix, which also provides high availability. It is an open source project that is free to use. It just requires a bit of setting up. Once it is configured, it requires very little maintenance.
My preferred way of setting up Zabbix is for high availability which allows monitoring to always be running even during maintenance windows on the Zabbix systems themselves. Refer to my article on setting up a HA MySQL cluster as a base for getting started. The external resources within that document will also guide you to setting up the application and front-end layers so the system remains always on.
The default Zabbix dashboard will provide you with a list of servers that are consuming the most resources. It will also give you a list of the currently active alerts to help guide you to the problem areas that you might want to address first. If you integrate monitoring with email, you can receive notifications before a user even realizes there might be a problem. Checking the resource utilization can quickly guide you to where modifications in your hosting platform should be made to add more drive space, RAM, or CPU.
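On the monitored hosts themselves, pointing the agent at an HA pair is only a few lines of agent configuration. A minimal sketch (all hostnames are hypothetical examples; the semicolon-separated ServerActive form is the HA failover syntax introduced in Zabbix 6.0):

```ini
# /etc/zabbix/zabbix_agent2.conf (hostnames are examples)
# Passive checks: accept connections from any cluster node.
Server=zabbix01.example.lan,zabbix02.example.lan
# Active checks: semicolon-separated nodes enable HA failover (Zabbix 6.0+).
ServerActive=zabbix01.example.lan;zabbix02.example.lan
Hostname=web01.example.lan
```

Restart the agent after editing, and the host will keep reporting no matter which cluster node is currently active.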
5. Use a Pi-Hole for ad filtering
Ok. First, do not do this at work without lots of discussion with leadership or teams that focus on controlling network traffic. There are ways to do this as a Docker container on your local machine so that it only affects you and not a larger group of users. Be aware that if you start filtering traffic from network devices other than yours you might be interfering with someone’s ability to do their job. This information is mainly for the home tinkerer that wants to better control their bandwidth usage.
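For the try-it-locally approach, the project publishes an official container image. A minimal sketch (ports and variables abbreviated; WEBPASSWORD applies to the v5-era image, and newer image versions use different variable names, so check the image documentation for your tag):

```shell
# Run Pi-hole as a local container: DNS on port 53, admin UI on 8080.
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8080:80/tcp \
  -e TZ="America/Denver" \
  -e WEBPASSWORD="changeme" \
  pihole/pihole
```

Point only your own machine's DNS at it while testing, so nobody else's traffic is affected.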
This software really takes minimal resources and was designed to run from a low-powered Raspberry Pi. I have found that it works just as well from a Docker or LXD container. In fact, I have it running as the primary DNS point for all devices in my home network to prevent all sorts of metrics monitoring and tracking by third parties. Over time, I have found that about 25% of the requests leaving my network are going to ad or tracking sites.
My outbound traffic through Pi-Hole is not limited to just my web browser. ALL traffic on my network that has to perform DNS look-ups goes through my Pi-Hole. I have found suspicious tracking on AppleTV, Roku, and Chrome devices as well as Amazon’s Echo brand of devices. Even my internet-connected thermostat phones home to the manufacturer hundreds of times per day for some unknown reason.
While I prefer to purchase devices that don’t rely on subscriptions or third party cloud services, that is becoming harder to avoid as manufacturers move to fee-based subscription services for the basic functionality for which I purchased the device. I have a whole soapbox about paying $100 for a security camera and then losing all functionality when the company goes defunct because it was fronting the cost of third party video storage at Amazon Web Services. And why does Amazon’s Eero wifi router require a subscription for basic functionality? Do not get me started. Pay attention to requirements before you buy devices.
The Firebog has a good curated list of lists to add to your Pi-Hole over at https://firebog.net/. I can use all of the items in green without any issue. If you use the lists in blue, you might start running into problems with applications or services not working, such as Paramount+, which uses some of the services on those lists to deliver its required ads.
Keep in mind that some services might require you to whitelist some entries for them to function properly. Most subscriptions for streaming services have moved to an ad-supported tier and increased prices for ad-free services. In so doing, the app may not play properly if you are on an ad-supported tier and using a Pi-Hole or any other form of ad blocking with them. I wait until I experience any such problem and then check my blocking logs on the Pi-Hole to see what the culprit might be. The Pi-Hole has a button next to each log item to allow you to quickly add the destination to the whitelist to see if that corrects the issue.
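The same whitelisting can also be done from the command line; on a v5-era Pi-hole the syntax is as follows (the domain here is a hypothetical example; use the actual domain you see in your query log):

```shell
# Whitelist a domain seen blocked in the query log (hypothetical example domain).
pihole -w ads.example.com
```

Newer Pi-hole releases have reworked the CLI, so check `pihole --help` on your version before relying on this form.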
6. Use a VPN
Another one for home users only, consider using a VPN to keep your business your business. I have used NordVPN for years for multiple reasons.
a. I can set my country of origin so that I can test access to resources from remote locations,
b. They do not maintain log files, which makes it much harder for others to reconstruct my web usage,
c. Comcast or CenturyLink (the two major players in my area) cannot track my behavior for ad-serving purposes,
d. All traffic including DNS queries are encrypted when leaving my network keeping prying eyes out of my business.
VPN services are also useful if you live in a country where your entire digital life is monitored. Russia’s blocking of Twitter is an ideal example. If your government runs the risk of serving itself over serving the people, why give it any more power or make its goals easier to achieve? To misappropriate the frequently quoted line from Benjamin Franklin, “Those who would give up essential liberty, to purchase a little temporary safety, deserve neither liberty nor safety.” Granted, he was writing about taxation and frontier defense, but that is a discussion for a political blog to handle. The sentiment, albeit misplaced, is apropos to privacy and security in the private sector and technology spaces as well.
7. Isolate IoT devices on their own network
Does your coffee maker need to see all of the traffic on your network or interact with your desktop? No. Nor does Siri, Alexa, Google, your thermostat, or any other internet-connected device on your network. They may need to communicate with each other, but they don’t need to communicate with your laptop.
Any IoT device should have its own interface with which you interact. They do not need to be on the same network as your web browser in order to function fully. In most cases, these devices have not kept up with the latest technology and only support wifi over the 2.4GHz spectrum, which is severely limited in speed. Some devices cannot even use your newer wifi 6, 6E, or 7 networks. I have even found devices that do not support encryption newer than WEP. Others don’t support any encryption at all. Manufacturers have little to no incentive to keep these devices up to date with newer technologies. And we really don’t want to replace something that is otherwise functioning just to fix vulnerabilities in decades-old protocols like WEP.
Some devices might have to access resources on your network such as an AppleTV device that is streaming movies from your Plex server or iTunes libraries. Those can still be on a separate subnet with a firewall routing only required traffic into your home network where the resources reside.
Another advantage of isolating IoT devices to their own network is performance. If only your laptops, desktops, and servers are communicating on one network, they are not constantly interrupted by traffic that is going on in the other network. Granted, there is not that much of a performance gain, but there is a security gain that compounds the benefits.
Most wifi routers still have the ability to set up a secondary or guest network that runs over 2.4GHz and meets the low security requirements of older devices. I configure the guest network on my home system for IoT devices and isolate that traffic over there. I do not leave it as an open wifi, but keep it secured with a password. If someone should ever crack that password, they will only get into my IoT devices. Just make sure that your baby monitors or any other audio video monitoring devices are as secure as possible and consider changing the guest network password at least once per year. Better yet, change it when you change the smoke detector batteries.
8. T-Pot 24.04
This suggestion is a bit more obscure and is recommended for experienced network administrators. It could invite disaster if not done properly, so proceed at your own risk. This machine should not be left powered on at all times if it is directly accessible from the internet. Be prepared to quickly power off the VM when it is not in use or if attacks grow so large that they consume your bandwidth. Even with a 1Gbps internet connection, attackers can consume 100% of your bandwidth and take down services on other servers. Never underestimate an attacker.
A honeypot is for all intents and purposes a trap. Have you seen Winnie the Pooh get his head stuck in a tree or pot of honey when trying to grab every last drop? It is like that. In computer terms, a honeypot is a fake server that appears to be legitimate to the outside world. It is a temporary server that provides a vulnerable service as a temptation to hackers. Once a hacker locates the service, they can proceed to attempt a variety of exploits against the service. Meanwhile, the honeypot is logging everything and sending that information to a logging system outside of the honeypot so that you can review what happened later and learn how the system was exploited.
Deutsche Telekom - the owner of T-Mobile - has provided an open source collection of honeypots for a number of years. The system provides popular applications such as SSH servers, web servers, and databases among others. It also provides a web interface, using Kibana and Grafana, that the local administrator can use to see the number of attacks and the country of origin of each. It used to be much easier to set up because it shipped as an installation ISO complete with an operating system. You were able to deploy it as a VM, point your DMZ at it, and start monitoring traffic. But the current version is not too difficult to get running.
In its current iteration, it is no longer an ISO from which you can install. Instead, it is an installation script that builds out various containers providing fake systems and services as an invitation for hackers to attempt to exploit. Most hackers will see these and play around on them for about a minute before they realize they are a trap. But they will give you an idea of where holes might exist in your firewall, and provide a starting point for where to focus on tightening security.
To begin, you will need to build a minimal installation Linux VM. Any major distribution should suffice, but I choose Debian for my minimal installations. The only optional software that should be installed is OpenSSH and curl, which you will need to grab the installation script. The system will need at least 128GB of storage and 8-16GB of RAM. CPU does not seem to be as important, but I select one socket with four cores. Follow one of my other guides on this blog to build out a system before continuing.
Don’t worry about it not having any services other than OpenSSH installed. The installer will build out containers within the system to provide the necessary services to allow attackers to locate the honeypots.
Once the base system is installed and patched, you can execute the following command to install the entire T-Pot collection.
env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/install.sh)"
Follow the instructions printed on the screen, check for possible conflicts with any other services, and reboot the system before using. You can get full details on how to best use the software and configure your firewall at https://github.security.telekom.com/2024/04/honeypot-tpot-24.04-released.html. Make sure that you do not expose ports over 64,000 as these higher ports are used for your management of the system. All lower ports are potential targets for attacks.
9. Centralized User Management
One of the benefits of working in a corporate network is the implementation of a centralized database of user accounts and credentials. Most often, this is done with Active Directory (AD), Microsoft’s directory service that combines LDAP and Kerberos into a single database which can be administered from centralized tools. AD is a required skillset for any system administrator. You could use evaluation versions of Windows to set things up, but long-term access is not guaranteed unless you purchase a license. And Microsoft licenses can get costly - even if it is only for home use.
There are several ways to get centralized user management on Linux systems that will not cost you a lot of money, but they may take you a lot of time. It is time well spent as it will help you develop the skills to understand the underpinnings of identity management. Some of the tools you can research and try are below. Some are free, and some are commercial. I tend to stick with free. They are more difficult to set up, but still worth the effort.
- OpenLDAP
- 389 Directory Server
- Samba
- Zentyal
- ClearOS
- NethServer
- Univention
Once you have your system of choice in place, you can configure your servers as members of the infrastructure and create accounts that can be shared across servers and workstations. This means that you will no longer have to remember the username and password for each system on the network. You can have a single username and password that works across all of them. With even further configuration, you may even be able to use the same account to manage non-PC devices such as routers, firewalls, switches, and others.
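On the Linux client side, the glue between the server and the directory is typically sssd. A minimal sketch for a generic LDAP directory (every hostname and base DN here is a hypothetical placeholder; your directory of choice will dictate the real values):

```ini
# /etc/sssd/sssd.conf -- must be owned by root with mode 0600
[sssd]
services = nss, pam
domains = example.lan

[domain/example.lan]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.lan
ldap_search_base = dc=example,dc=lan
cache_credentials = true
```

With this in place, accounts defined once in the directory can log in on every enrolled machine, and cached credentials keep logins working if the directory server is briefly unreachable.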
While centralized user management does not really add to security, it does make user management much less of a headache.
10. Sign up for mailing lists
While not a direct security countermeasure that has any impact on the security of your system, there are a variety of security mailing lists you can subscribe to that will provide you with relatively current news of new threat vectors, vulnerability discoveries, available patches, and a plethora of security-related news. Here are some of my favorites.
Bruce Schneier - https://www.schneier.com/
While there is a lot of opinion and paranoia in his blog, Bruce Schneier is one of the leading technologists in the security sector. His blog and newsletter have been available for over twenty years and include information about new threats as well as countermeasures.
SANS Institute - https://www.sans.org/newsletters/
One of the most recognized and respected places to learn about security and to obtain security-focused certifications is SANS. There are currently three main newsletters you can sign up for. They will send periodic emails without inundating your inbox with tons of spam.
NewsBites - This newsletter is released twice each week and gives a timely update on recent headlines related to cybersecurity. It covers major breaches and exploits you should pay particular attention to.
@Risk - This weekly email gives a more detailed look at newly discovered vulnerabilities along with more in-depth coverage of how some exploits work from time to time.
Ouch! - This is a monthly newsletter that gives you the highlights of computer security tuned more for the lay-person. It will provide tips for detecting phishing attempts or other ways that we common folk are constantly hit with attempts to steal our personal information. It will also suggest ways to avoid situations where security can be easily compromised.
CVE Announce - https://www.cve.org/Media/News/NewsletterSignup
This might be one of the noisier mailing lists and works well if you have filters in place to redirect emails from them to a subfolder that you can review from time to time. Every time a new vulnerability is found or analyzed, this mailing list sends out notification of the new attack vector, and the status of any fix along with links to further guidance on fixing or mitigating the risks.
Summary
There are a ton of things that can be done to secure your home network, but with ever increasing numbers of newly found exploits and vulnerabilities, nothing is 100% secure unless it is completely air-gapped from the internet (i.e. no network card or wifi, and all USB ports disabled). With our reliance on network-connected information, that is not a very common situation. Though you can never be certain your network is secure, you can take multiple steps to keep up with the latest security information and make your systems as secure as possible.
Remember that anything running on an internet connected device poses some security risk. Our job is to mitigate these risks as well as possible without impeding our ability to get things accomplished. The only way to fully avoid risk is to power off any device that is not being used.
- Details
- Category: Information Technology
- Hits: 334
In my previous article, I showed you how to configure high availability clustering for MySQL or MariaDB. But what if you are working in a secure environment where you need to have all ports locked down except those required for functionality? There are a variety of ports that will need to be open depending on required functionality. But they do not have to be open to the world. This article is working under the assumption that the firewall is enabled and already configured to restrict access as needed for your environment.
The most obvious port to be opened for any SQL server is TCP port 3306 to other servers that need to access the database. Another obvious choice is protected access on TCP port 22 if you are using a terminal session to remotely manage the server. If you are using the HA configuration with Pacemaker and Corosync, there are six other ports that need to be open between the nodes of the cluster. They do not need to be open to any other servers or services.
RedHat recommends a simple two-command approach to opening the ports required for high availability. I argue that this is not restrictive enough for secure environments. The commands are:
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --add-service=high-availability
My issue with this approach is that it opens those ports to the world. Granted, you most likely have a separate firewall protecting your network perimeter, so this is not a problem with users outside of your network. Let's assume you have a bad actor within your network. It could be a disgruntled employee, or it could be a system that has been compromised by an external threat. Since these commands open those ports to any device on your network and not just the cluster nodes, this system is now at risk on those ports. How big of a risk depends on future vulnerabilities yet to be detected in the service software. Why take the risk?
There are very specific ports that are required for communication within the cluster itself. They are:
- TCP ports: 2224, 3121, and 21064
- UDP ports: 5404, 5405, and 5406
To limit our exposure, we only want to open these ports between the specific server nodes in the cluster so that the servers can communicate over those ports only with each other.
The default firewall tool in Ubuntu is the Uncomplicated Firewall (ufw). Since the guide uses Ubuntu as the base server for our configuration, I am only addressing that configuration here. This information should easily translate to firewalld or any other firewall software you may be using.
As a recap of the previous article, we have MariaDB running on the following IPs to provide a database backend for any services that rely on data storage. We are not concerned with the VIP of the cluster for this level of inter-server communication.
- 192.168.50.231
- 192.168.50.232
- 192.168.50.233
The same configuration can be used on each of the servers. These must be added using an account with sudo privileges. Since there are three IPs and six ports, there are a total of eighteen commands to hit every combination of port and IP. You can shorten this by using a slash notation for the IP range, but there is a risk of other IPs in that range coming online and being able to access the services. Because my motto is "perfect paranoia is perfect awareness," I prefer to err on the side of caution and specify every possible rule. But you do you.
sudo ufw allow from 192.168.50.231 proto tcp to any port 2224
sudo ufw allow from 192.168.50.232 proto tcp to any port 2224
sudo ufw allow from 192.168.50.233 proto tcp to any port 2224
sudo ufw allow from 192.168.50.231 proto tcp to any port 3121
sudo ufw allow from 192.168.50.232 proto tcp to any port 3121
sudo ufw allow from 192.168.50.233 proto tcp to any port 3121
sudo ufw allow from 192.168.50.231 proto tcp to any port 21064
sudo ufw allow from 192.168.50.232 proto tcp to any port 21064
sudo ufw allow from 192.168.50.233 proto tcp to any port 21064
sudo ufw allow from 192.168.50.231 proto udp to any port 5404
sudo ufw allow from 192.168.50.232 proto udp to any port 5404
sudo ufw allow from 192.168.50.233 proto udp to any port 5404
sudo ufw allow from 192.168.50.231 proto udp to any port 5405
sudo ufw allow from 192.168.50.232 proto udp to any port 5405
sudo ufw allow from 192.168.50.233 proto udp to any port 5405
sudo ufw allow from 192.168.50.231 proto udp to any port 5406
sudo ufw allow from 192.168.50.232 proto udp to any port 5406
sudo ufw allow from 192.168.50.233 proto udp to any port 5406
By running the above commands, you have now enabled these ports through the software firewall native to each server. You can verify the existing rules by running ufw show added.
Of course, there are other ways to store your rules by modifying configuration files, but the above commands will write these for you. If you do modify the configuration files directly, run ufw reload to ensure the new rules are loaded. And don't forget to enable the firewall by entering ufw enable, but only after you are sure you have allowed whatever port you are using to manage your server, be it SSH, NoVNC, or some other service. Google is your friend here. Once the firewall is enabled, you can use ufw status to show the list of current rules.
Don't forget to check the status of your cluster after making any changes. To do this, just run pcs status to display the current status of the cluster. You should see results like the following.
Cluster name: mysql-ha
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: u24-mysql03 (version 2.1.6-6fdc9deea29) - partition with quorum
* Last updated: Wed Oct 9 00:53:56 2024 on u24-mysql01
* Last change: Tue Oct 8 12:37:50 2024 by root via cibadmin on u24-mysql01
* 3 nodes configured
* 1 resource instance configured
Node List:
* Online: [ u24-mysql01 u24-mysql02 u24-mysql03 ]
Full List of Resources:
* Resource Group: mysql_ha_cluster:
* virtual_ip (ocf:heartbeat:IPaddr2): Started u24-mysql01
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
As you can see from the above example output, all three nodes are showing online, the VIP resource is started, and the daemons are all active/enabled.
The above information should give you a decent head start to figuring out how to apply additional rules to allow web servers access to SQL services on TCP port 3306 on these servers. It should also provide guidance on configuring your web servers for access on TCP ports 80 and 443 if required.
Remember to only open ports that are required for the role of the server that is being modified. If you are not running web services, don't allow web service ports.
- Details
- Category: Information Technology
- Hits: 344
This document is intended to walk you through setting up multiple servers to store your databases. You can of course get by with a single server, but then this document would not be for you. Our requirement is to configure a cluster of servers to allow us to perform maintenance on any one server in the cluster while maintaining access to the data stored in the cluster.
It is important to note that this will not speed up access to data. That is not the goal. Only one node will be operating as the master at any given time. The others are replication partners that are grabbing any changes from the master and copying those changes into their copy of the database. They are not directly serving any applications or performing any function other than ensuring that the data exists in multiple locations for use in the event the master is no longer available.
To keep data in sync between servers, each acts as both a master and a slave. Node 1 is the master replicating data to the slave of node 2. Node 2 is also working as a master replicating its data to the slave of node 3. Finally, node 3 is working as a master replicating its data back to node 1. This round robin approach keeps the data consistent between all nodes. To keep from having to determine which server is the master, we assign a separate name and IP address to the cluster and let the clustering service determine who the master is at any given time.
This does require increased resources overall, but since the slave nodes are replaying the changes in the background, they can generally do this faster than the time taken to make modifications in your application unless you have super-human AI typing abilities. Even then, the nodes will eventually catch up and data will remain consistent between nodes.
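The ring described above is configured per node in the MariaDB server configuration. A minimal sketch for node 1 (the file path and values are illustrative; each node needs its own unique server_id, and the full procedure comes from the referenced guides):

```ini
# /etc/mysql/mariadb.conf.d/99-replication.cnf -- node 1 sketch
[mysqld]
server_id         = 1               # unique per node: 1, 2, 3
log_bin           = /var/log/mysql/mysql-bin.log
log_slave_updates = ON              # pass replicated changes on around the ring
gtid_strict_mode  = ON              # GTID-based replication, as covered below
bind-address      = 0.0.0.0         # listen beyond localhost; restrict with the firewall
```

The log_slave_updates setting is what lets a node forward changes it received as a slave to the next node in the ring, which is what keeps all three copies converging.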
External resources and acknowledgments
These instructions are based on a very well written resource by Markku Leiniö. The original source document can be found at https://majornetwork.net/2020/01/high-availability-clustering-with-zabbix-on-debian/. In addition, I have incorporated information from Edmunds Vesmanis from a presentation he gave at the Zabbix Summit 2019 in China on setting up HA clusters for use in Zabbix. That source presentation can be found at https://assets.zabbix.com/files/events/conference_china_2019/Edmunds_Vesmanis-Zabbix_HA_cluster_setups.pdf. There is also a video presentation from the event at https://www.youtube.com/watch?v=vdoUWkwk9QU. Both authors are worth the read and use slightly different approaches. I have opted to follow Markku’s approach of using GTID for replication because it is easy to follow and works for my situation.
If you are using PostgreSQL, these instructions will not work. For that, I would recommend the “PostgreSQL 12 High Availability Cookbook - Third Edition” written by Shaun Thomas for Packt Publishing. It is a much more involved process and utilizes different technologies for replication between nodes. There may be a newer version by the time you read this, so make sure that the book you are purchasing covers high availability.
Yes, the above source links are heavily focused on Zabbix, but the instructions below should work for any application that relies on MySQL as the database backend.
Servers
For this exercise, we are going to need three servers. One node will be the master that handles all incoming requests to the cluster. The others will be slaves initially, but their role can change to master if the primary server experiences a fault and is unreachable. One also acts as a witness, which helps with the election process in determining who the master should be. You should always have an odd number of servers in your cluster to aid with the election process. If you have an even number of servers, the election could result in a tie and cause the cluster to fail.
I use Proxmox in my home network, so I have chosen to create three new virtual servers. Make sure you know how many resources you will need for your project before building the servers. It is much easier to start out with appropriate resources than it is to go back and modify them later; although, it is always possible to extend resources at a later date if necessary. In my case, each has two virtual CPUs, 4GB of RAM, and 50GB of storage space.
I have chosen Ubuntu 24.04 as the operating system for these servers. This is a Debian-based operating system, and the commands below will be specific to Debian-based systems. You can use any operating system you desire as MySQL can run on any flavor of Linux or Windows. I tend to stay away from Windows due to licensing costs and limited resources. You don’t really need to have a desktop environment for any of these services, so why waste the resources on things you will never use?
If you are using a Red Hat-based system, apt is replaced by dnf (formerly yum). Gentoo uses portage as its package manager, and Arch uses pacman. On Mac computers, you can use Homebrew. Just be sure to modify the commands in the subsequent instructions to accommodate your package manager. I will not get into the minutiae of modifying the commands below to cover every situation; I am sticking to Debian commands only.
Don’t forget to make sure that all of the current patches are applied to the servers. You can do this with the following command.
apt update && apt upgrade -y
Pay attention to any packages that have been held back. You can manually install these by specifying their names. For example, my new systems installed all updates with the exception of python3-distupgrade and ubuntu-release-upgrader-core. Running the following command got those to install for me along with all of their dependencies.
apt install python3-distupgrade ubuntu-release-upgrader-core
You can run the following command to ensure that there are no further updates that need to be installed.
apt update && apt list --upgradable
The apt update step should report "All packages are up to date." and the upgradable list should come back empty.
Network configuration
Before creating the cluster, we want to make sure that we have a range of four contiguous IP addresses on our network. This is not required, but it makes things a lot easier to remember and keep track of. One of the IP addresses will be the master address for the entire cluster. The cluster services will listen on this address and internally handle processing of information to each node of the cluster. To make things simple for me, I choose an address in the high end of my range ending in zero. The three subsequent IP addresses are used for node 1, node 2, and node 3 in the cluster.
I am using a single class C network of 192.168.50.0/24 as issued by my router. Since this is a new network, I can see that nothing is using 192.168.50.230 through 192.168.50.233. You can test this from a terminal by pinging each of the IP addresses. They should come up with either a time-out or no response. After confirming, I have the following network settings.
192.168.50.230 | This will be the master IP address of the cluster |
192.168.50.231 | Node 1 |
192.168.50.232 | Node 2 |
192.168.50.233 | Node 3 |
We also have to have names for the servers. In my case, I like to be able to look at the name of the server to determine what it does and what operating system is installed. To quickly determine that I am using Ubuntu 24.04 as the server operating system, my machine names begin with u24-. The next part is the service that the server is performing. Even though I am using MariaDB, this is for all intents and purposes MySQL — just a more freely available and less restrictive licensed version of the same software. I then place the node number at the end of the server name.
Don’t forget to name the virtual IP as well. We don’t want to have to constantly remember things by IP address. Since the virtual IP does not have its own server behind it, I eliminated the u24- prefix but added -ha to the end of the name to indicate that it is the highly available cluster address. My chosen list of server names is as follows.
mysql-ha | The master name of the cluster |
u24-mysql01 | Node 1 |
u24-mysql02 | Node 2 |
u24-mysql03 | Node 3 |
If you are using a DHCP server on your router, it is also a good idea to make sure the IP addresses will remain static and not be assigned to other devices on your network. Because of this, I set reservations in the DHCP server based on the MAC address of the new virtual servers. Whenever the network card with the given MAC address tries to get an IP from DHCP, it will be assigned the appropriate address in the table. We will not have a MAC address for the cluster itself, so we must tell DHCP to never issue this address to any device on the network.
There are multiple ways to determine the MAC address of your server. If you are using a virtualization solution such as VMware, Proxmox, or Nutanix, you can look at the hardware of your virtual server. The network interface card will have a 6-octet hexadecimal number associated with it. This is the MAC address.
You can also determine the MAC address within the terminal of the server itself by running the command 'ip a'. This will give you a list of all network cards and their settings. You will want the MAC address of the card that has the assigned IP. Ignore the interface named lo with an address of 127.0.0.1; this is just the loopback interface that lets the machine reach itself over TCP/IP.
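As a sketch of what to look for, the following parses a captured sample of 'ip a' output for the link/ether line; on a live server you would pipe 'ip a' straight into the awk filter. The interface name and MAC here are illustrative values only.

```shell
# Sample "ip a" output (illustrative values).
# On a real server: ip a | awk '/link\/ether/ { print $2; exit }'
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    link/ether bc:24:11:43:46:cb brd ff:ff:ff:ff:ff:ff'
# Print the MAC from the first link/ether line.
# The lo interface shows link/loopback, not link/ether, so it is skipped.
printf '%s\n' "$sample" | awk '/link\/ether/ { print $2; exit }'
# -> bc:24:11:43:46:cb
```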
mysql-ha | 192.168.50.230 | no MAC address |
u24-mysql01 | 192.168.50.231 | BC:24:11:43:46:CB |
u24-mysql02 | 192.168.50.232 | BC:24:11:61:8A:5D |
u24-mysql03 | 192.168.50.233 | BC:24:11:83:00:00 |
Security
Finally, we need a couple of passwords for services. There is a user and password used for internal replication between database services, and there is a master password for the cluster. Since these passwords will never change and need to be secure, choose a randomly generated lengthy password. I like to make mine at least 60 characters in length. Due to their complexity, make sure you are storing these in a password manager so that you can access them if you ever have to replace or add a node to the cluster. These are the only times after the initial build that these passwords should be needed, but if you don’t make a record of them, you will most likely have to rebuild your cluster from scratch if they are lost or misplaced.
You will need passwords for the following two accounts.
- hacluster
- replicator
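One hedged way to generate such passwords from the shell (assuming a Linux box with /dev/urandom; the helper name and character set are just suggestions):

```shell
# Generate a random password of the requested length (default 60 characters).
# LC_ALL=C keeps tr byte-oriented regardless of the system locale.
gen_pw() {
  LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "${1:-60}"
  echo
}
gen_pw 60   # one for hacluster
gen_pw 60   # one for replicator
```

Copy each result straight into your password manager before using it anywhere.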
Initial DB Server Setup
Now we are ready to get into the complicated portion of the install — installing the database server and configuring the high availability. Before beginning, you should have completed all of the requirements listed above. If you have not, you will most definitely run into issues.
Configure the hosts file - All nodes
On each server to be used in the cluster, we need to modify the hosts file to reference each node in the cluster. If you are using DNS within your network, your DNS zones should also be modified to include A records for these servers.
Using your favorite editor (mine is vim but nano will work as well), open the file /etc/hosts. It should look like the following.
127.0.0.1 localhost
127.0.1.1 u24-mysql01
The server name on the second line will be different for each server and will be the name of that server. There may also be a section for IPv6 addresses. We are not concerned with this section as we will be sticking exclusively to IPv4 addressing.
Retain the first line for localhost, but remove the second line beginning 127.0.1.1 and insert the following text immediately after the localhost entry. Ensure you have adapted it for your network configuration as outlined in the prior steps.
192.168.50.230 mysql-ha mysql-ha.antimidas.net
192.168.50.231 u24-mysql01 u24-mysql01.antimidas.net
192.168.50.232 u24-mysql02 u24-mysql02.antimidas.net
192.168.50.233 u24-mysql03 u24-mysql03.antimidas.net
Save the file and close it.
Install required software
We need to install the services we will be using to provide the clustering and replication. This will need to be done on all servers in our cluster. The first three applications work together to provide the tools necessary to manage the cluster and the backend replication resources. The last is, of course, the database server software.
apt install corosync pacemaker pcs mariadb-server
We also want to stop the database service because we are not yet ready to use it until it is configured.
systemctl stop mariadb
Set the password for the cluster resources - All nodes
Now we get to use the first of the exceedingly long passwords that we created earlier. Replace <PASSWORD> in the command below with the password that you chose for your cluster. We are setting the same password on every node so that they know how to communicate with other nodes in the cluster.
echo 'hacluster:<PASSWORD>' | chpasswd
Backup the default configuration - All nodes
We are not going to use the default configuration for Corosync. But we want to be able to reference it in the future if needed. To keep the settings from being applied, we will simply rename the file.
mv /etc/corosync/corosync.conf /etc/corosync/corosync.conf.orig
Configure the database server - All nodes
This configuration will be slightly different on each of the servers in our cluster. Changes will need to be made to the last three lines of the configuration based on which node is being configured. It is important to pay attention to this part so that you do not duplicate configurations between servers. Although each server is nearly identical, the last three lines are unique to each node.
Using your preferred editor, create a new file named /etc/mysql/mariadb.conf.d/90-ha.cnf with the following information and then save it.
[mysqld]
skip_name_resolve
bind_address = 0.0.0.0
log_slave_updates
max_binlog_size = 1G
expire_logs_days = 5
innodb_buffer_pool_size = 1G # 70-80% of total RAM
innodb_buffer_pool_instances = 1 # each instance should be at least 1GB
innodb_flush_log_at_trx_commit = 2 # default = 1
innodb_flush_method = O_DIRECT # default = fsync
innodb_io_capacity = 500 # HDD = 500-800, SSD = 2000
query_cache_size = 0
# Change the following values for each server accordingly!
log_basename = u24-mysql01
log_bin = u24-mysql01-bin
server_id = 231 # The last number of the server IP address
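Since the three per-node lines follow a simple pattern, a small sketch like the following can generate them. This is a hypothetical helper, not part of the original procedure; the values are hardcoded for node 1 here, and on a real node you might pull them from hostname -s and the interface address instead.

```shell
# Hypothetical generator for the per-node lines of 90-ha.cnf.
# Hardcoded for node 1 as an illustration; adjust per node, or derive
# node_name from "hostname -s" and node_ip from "ip -4 addr".
node_name="u24-mysql01"
node_ip="192.168.50.231"
cat <<EOF
log_basename = ${node_name}
log_bin = ${node_name}-bin
server_id = ${node_ip##*.}  # The last number of the server IP address
EOF
```

Running it on each node keeps you from accidentally copying one node's unique values to another.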
Now that our configuration is written, we can start the database service on each node.
systemctl start mariadb
Initial clustering configuration
Our base installation is now set, and it is time to configure the clustering. We are going to start with the first node and configure it slightly differently from all of the others. We will circle back at the end to configure the slave component of that server, but first we must have the master configuration in place.
These steps will be very particular about which server is being configured. Pay attention to which server you are on. If the commands are run on the wrong server, you will most likely not get the desired replication topology.
Configuring the cluster
On node 1, we want to create the cluster and set the password for the cluster controller. The following command tells the service which nodes are members of the cluster and sets the password for communication within the cluster. Make sure to replace <PASSWORD> with the credentials created previously for the hacluster user.
pcs host auth u24-mysql01 u24-mysql02 u24-mysql03 -u hacluster -p <PASSWORD>
Now that we have specified which nodes will make up the cluster, we need to add them to the cluster itself and start the initial cluster. We are going to use --force to ensure that the cluster is created as described.
pcs cluster setup mysql24-ha u24-mysql01 u24-mysql02 u24-mysql03 --force
We then want to start the clustering service on all nodes. The following will start the service on all member nodes that were just defined.
pcs cluster start --all
We then need to enable the other cluster management services now that we have everything defined and running. This command will start the services and ensure that they are started if the node is rebooted.
systemctl enable corosync pacemaker
We need to ensure that one node does not “kill off” another node. Using the verb “kill” might sound a bit hyperbolic, but the stonith setting below does stand for “shoot the other node in the head,” so it is not too far off. A typical cluster has the ability to remove another node from the pool, and we don’t want that to happen in our case.
pcs property set stonith-enabled=false
We also need to set a parameter that controls how strongly a service prefers to stay running on a node. We are going to use a conservative default of 100.
pcs resource defaults resource-stickiness=100
We also need to create the cluster IP and assign it to the cluster, along with a monitor interval that controls how often the cluster checks that the address is still healthy.
pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.50.230 op monitor interval=5s --group mysql_ha_cluster
On the remaining nodes, we now need to enable the cluster services so they start automatically, just as we did on node 1. They will use the settings we just applied on the primary node. Run the following command on each of the remaining nodes.
systemctl enable corosync pacemaker
Checking the status of the cluster
At any point in the future, we can check the status of the cluster to see the state of each node and confirm that all nodes are communicating with each other by running the following command.
pcs status
This will display a “prettified” list of the configuration on each node telling you whether the cluster is online or not, the list of resources assigned to the cluster, and the number of nodes.
Configuring database replication
We now need to configure MySQL itself to replicate data between servers. This is the portion where we set up the replication ring (node 1 → node 2 → node 3 → back to node 1) so that all additions, modifications, and deletions of data are applied to each copy of the database across the nodes. Note in the following sections that although the commands appear similar, we change the IPs to configure replication to the next node in the chain.
On the first node, open mysql from the terminal and run the following commands to set the second node as its slave. Replace <PASSWORD> with the password created for the replicator user.
stop slave;
grant replication slave on *.* to 'replicator'@'192.168.50.232' identified by '<PASSWORD>';
show global variables like 'gtid_current_pos';
Make a note of the gtid_current_pos value. It should be 0-231-1 or something similar.
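For reference, a MariaDB GTID breaks down into three fields, which a quick shell sketch can pull apart. The middle field matches the server_id we set in 90-ha.cnf.

```shell
# A MariaDB GTID has the form domain-server_id-sequence.
# 0-231-1 therefore means domain 0, server_id 231 (node 1), first transaction.
gtid="0-231-1"
domain=${gtid%%-*}                      # everything before the first dash
sequence=${gtid##*-}                    # everything after the last dash
rest=${gtid#*-}; server_id=${rest%-*}   # the middle field
echo "domain=$domain server_id=$server_id sequence=$sequence"
# -> domain=0 server_id=231 sequence=1
```

If the middle number does not match a server_id from your configuration, you are probably looking at the wrong node's GTID.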
On the second server, open mysql and enter the following commands to configure node 1 as its master and node 3 as its slave. Replace <PASSWORD> with the password created for the replicator user.
stop slave;
set global gtid_slave_pos = '0-231-1'; # The GTID you noted earlier
change master to master_host='192.168.50.231', master_user='replicator', master_password='<PASSWORD>', master_use_gtid=slave_pos;
grant replication slave on *.* to 'replicator'@'192.168.50.233' identified by '<PASSWORD>';
reset master;
start slave;
show slave status\G
On the third server, open mysql and enter the following commands to configure node 2 as its master and node 1 as its slave. Replace <PASSWORD> with the password created for the replicator user.
stop slave;
set global gtid_slave_pos = '0-231-1'; # The GTID you noted earlier
change master to master_host='192.168.50.232', master_user='replicator', master_password='<PASSWORD>', master_use_gtid=slave_pos;
grant replication slave on *.* to 'replicator'@'192.168.50.231' identified by '<PASSWORD>';
reset master;
start slave;
show slave status\G
We now need to complete the ring on the first server to set node 3 as its master and start the slave service. Replace <PASSWORD> with the password created for the replicator user.
stop slave;
set global gtid_slave_pos = '0-231-1';
change master to master_host='192.168.50.233', master_user='replicator', master_password='<PASSWORD>', master_use_gtid=slave_pos;
start slave;
show slave status\G
Conclusion
The HA cluster is now configured, but there are no databases on the server. You can follow whatever instructions you have for your application by using the mysql console on the first server in the cluster. Ideally, your application should make remote connections to the VIP that was created. In this case, it is 192.168.50.230. Any changes made should replicate to the other two servers within five seconds depending on transient network conditions.
As mentioned above, you can also use DNS within your network to ensure that the server names are associated with the appropriate IP addresses using A records. This includes creating an A record for the cluster name itself (mysql-ha). You would then be able to make connections to the name of the cluster rather than its IP address.