Well, it’s been a while since I posted. I realized today that I never wrote about moving my self-hosted, Hugo-generated website from a Red Hat virtual web server on my homelab to the free-tier hosting on Netlify. The move also included finally learning the basics of Git and using a remote Git repository on Bitbucket.
It was a surprisingly easy transition, moving from my local server to Netlify. The most difficult part, for me, was learning how to get the local Git repo holding my Hugo files up to Bitbucket. I’m not a developer, and anything code related, even the simple use of Git, is a bit of a learning hurdle for me. I simply followed the steps from this site, along with other resources from my web search, changed the DNS on my domain from my local WAN IP address to point at Netlify’s, and all was working. I was really surprised how easy it was to migrate. The only difference was that I ended up with a simpler netlify.toml config than the one on the site referenced above.
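A minimal netlify.toml for a Hugo site looks something like this; mine is essentially this shape, though the exact Hugo version pin below is illustrative:

```toml
[build]
  publish = "public"   # Hugo's default output directory
  command = "hugo"     # the build command Netlify runs on each push

[build.environment]
  HUGO_VERSION = "0.125.4"  # illustrative; pin whatever version builds your site locally
```

And for anyone else new to Git, the “get my local repo to Bitbucket” step boils down to a couple of commands like these (the repository URL is a placeholder):

```bash
cd ~/sites/myblog                 # local Hugo project, already an initialized git repo
git remote add origin git@bitbucket.org:username/myblog.git   # placeholder URL
git push -u origin main           # after this, Netlify builds the site on every push
```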
I’ve been a Docker user for years. I’ve never been an expert; I’ve only used it enough to get by deploying containers on my home lab for self-hosting services. Recently, I decided it was time to use Podman instead of Docker, just so I could learn the differences. The major difference, as I understand it, is that Podman is more secure because it can run rootless containers. So, I set up a new virtual machine running AlmaLinux 9, installed the Podman packages, and started by setting up the Homarr dashboard using a Docker Compose file, shown below. I deployed it as usual, this time using the podman-compose command instead of docker-compose, as a standard non-root user. From there, I migrated a couple of other containers (Freshrss and flatnote) from a different machine, where they had been running under Docker, to the new virtual machine running Podman, and they all started up nicely. What I have found, so far, is that most things translate easily from Docker to Podman. Fair enough, easy enough.
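For reference, the compose file I started from is roughly the stock Homarr example (double-check the current Homarr docs, since images and paths change over time), and it comes up rootless with a plain podman-compose up -d as my regular user:

```yaml
# compose.yml - started with `podman-compose up -d` as a normal, non-root user
services:
  homarr:
    container_name: homarr
    image: ghcr.io/ajnart/homarr:latest
    restart: unless-stopped
    volumes:
      - ./homarr/configs:/app/data/configs   # dashboard configuration
      - ./homarr/icons:/app/public/icons     # custom icons
      - ./homarr/data:/data
    ports:
      - "7575:7575"                          # web UI
```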
In part 1 of Linux 101, in my attempt to inform those interested in the basics of Linux, I gave a brief breakdown of why I am doing these posts. In this part 2, I will continue by breaking down the different types of Linux distributions.
In the graphic above, I have a desktop distribution list and a server distribution list. While these lists don’t cover every distribution available, they do include the more common and popular distros as of this post. To get a complete list of what’s available, visit the DistroWatch website.
In a previous post I discussed creating a RHEL or AlmaLinux KVM virtual machine using a Kickstart file and a bash shell script. I recently decided I wanted an easier approach: quickly creating a virtual machine from a pre-existing template I had already built. In my research, I found there are quite a few ways to complete this task, but the method below is the one that works for me, using AlmaLinux, and it may also work for you.
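The core of the clone-from-template approach is virt-clone plus virt-sysprep. Here is a minimal sketch with placeholder VM names and paths, rather than my exact commands:

```bash
# clone the shut-off template into a new VM (names and paths are placeholders)
sudo virt-clone --original alma9-template \
  --name alma9-web01 \
  --file /var/lib/libvirt/images/alma9-web01.qcow2

# scrub machine-specific state (SSH host keys, machine-id, logs) and set a hostname
sudo virt-sysprep -d alma9-web01 --hostname alma9-web01

# boot the new VM
sudo virsh start alma9-web01
```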
I have a young child in my home. As she gets older (she’s turning 9 years old this month), she has more and more exposure to the internet. For any parent this is a challenge: I need to keep her safe from malicious content on the internet. I have used Ubiquiti network devices on my LAN for many years and have had no major complaints with their products. However, one thing that isn’t very clear in their UniFi Network web interface, used to manage all my devices, is that you can manage internet use for a specific client. Not many people seem to know about this feature, mainly, I believe, because Ubiquiti is geared toward small-office to enterprise-level networks, where there is an expectation that an experienced network admin knows their products and has the expertise to manage the network.
I love Linux. I am passionate about Linux. I am a Linux geek. I think I have stated that before in this blog but I wanted to state that claim again for this post.
There was a time when I attempted to run a YouTube channel to help those interested in learning Linux. However, I found that, with the limited time I had between personal life, work life, and school life, it was a very difficult task to maintain. Since I now have this blogging site, I figured I would try to post some of the content from that video series for those interested in learning about Linux. The information mostly came from my personal knowledge, but I also pulled from the Linux Professional Institute’s Linux Essentials and LPIC-1 certification content. Both of those certifications cover some very basic information about Linux but also include more advanced content for those interested in really learning and obtaining certifications for Linux systems administration. In fact, if you wanted to, you could go to their site and download the study guides for free. So, to recycle, this post, and future posts, will carry some of that same basic content from my old channel.
I previously posted about setting up a Raspberry Pi with Raspbian 11 as a photo album kiosk. Since then, I wanted to test the same setup with Debian and Fedora on an x86 machine, just in case my Raspberry Pi died, or in case I decided I needed a faster machine and wanted to use my spare Beelink S12. This is a great setup if you have a spare machine and monitor lying around for displaying family photos (again, see the previous post for my example). Well, in testing on a virtual machine, I came up with the basic steps below.
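One minimal way to do it, sketched here with feh as the image viewer (treat the package names, paths, and timing as illustrative rather than my exact steps):

```bash
# Debian: install a bare X stack and the feh image viewer (use dnf on Fedora)
sudo apt install --no-install-recommends xserver-xorg xinit feh

# ~/.xinitrc - start a full-screen, randomized slideshow (photo path is a placeholder)
cat > ~/.xinitrc <<'EOF'
#!/bin/sh
exec feh --fullscreen --hide-pointer --randomize --slideshow-delay 15 /home/kiosk/photos
EOF

startx   # wire this up to auto-login for a hands-off kiosk
```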
I recently purchased a second Beelink mini S12, mainly because, at $140 USD, it was such a good deal that I couldn’t pass it up. The difference between this model and the S12 Pro I purchased for my Jellyfin server (see previous post) is that the S12 Pro has an Intel N100 while the S12 has an Intel N95. Well, it is true that you get what you pay for: the S12 (N95) has a bad wired network interface.
Server monitoring has many levels. From my own experience in the I.T. world as a Systems Administrator, I have used many products for this purpose (e.g. SolarWinds). You can monitor servers simply for their availability status, or you can get as detailed as monitoring performance and resource statistics. On my home network, I have tried many solutions, from Nagios to Zabbix to Checkmk. All have been solid, but, truthfully, a bit of overkill. At the moment, the most basic monitoring I have in place is Uptime Kuma internally, which sends notifications to Discord if a server goes down. Externally, since I self-host my own web server (here), I use the free version of UptimeRobot for uptime checks with notifications straight to my email.
In my backup strategy, I have a few tiers of backup for restore scenarios. My primary data folder syncs to my Nextcloud server on my home network using the Nextcloud client, so I can sync that same data to multiple devices on my LAN, and even off my LAN using my Tailscale network. That Nextcloud instance has an NFS share mounted from my 8TB Synology NAS (one of these days I’ll post about my Nextcloud setup). At the same time, I also rsync that same data folder to a separate directory on my Synology NAS, and that directory, along with everything else on the NAS, rsyncs to an external 8TB USB drive; all of this is scheduled twice a week via crontab (sketched below). For off-site backup of my primary data folder, I also use a service called MEGA. I’d been using it for over a year without any issues, until now.
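For the curious, the scheduled rsync tier boils down to crontab entries shaped like these (the times and paths are placeholders, not my actual crontab):

```bash
# crontab -e on the machine holding the primary data folder:
# Mondays and Thursdays at 02:00, mirror the data folder to the NAS share
0 2 * * 1,4  rsync -a --delete /home/me/data/ /mnt/synology/backup/data/

# on the NAS itself: mirror everything to the external 8TB USB drive
30 3 * * 1,4 rsync -a --delete /volume1/ /mnt/usb8tb/
```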