Monday, August 4, 2014

Linux Spacewalk / Satellite Server

This is my first attempt at setting up a Satellite server, so I learned quite a bit. Before I had all my VMs, I never had a need for one. Now, having it will definitely speed up patching and installs.

Some features I plan on using:

  • remote installs of applications (see the API sketch after this list)
  • application baselines across multiple servers
  • pushing updates to all the servers
  • pushing config files that need updating (e.g. yum.conf, snmp.conf, ...)
  • kickstarting servers
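
Most of these tasks can also be scripted against Spacewalk's XML-RPC API once the server is up. Here's a minimal sketch in Python that lists registered systems and schedules a package install; the hostname, credentials, and system/package IDs are placeholders, and the exact method set depends on your Spacewalk version:

    #!/usr/bin/env python
    # Minimal sketch against Spacewalk's XML-RPC API.
    # spacewalk.example.com, the credentials, and the system/package
    # IDs below are placeholders -- substitute your own.
    # Note: a self-signed certificate may need an unverified SSL context.
    import xmlrpc.client  # xmlrpclib on Python 2
    from datetime import datetime

    client = xmlrpc.client.ServerProxy("https://spacewalk.example.com/rpc/api")
    key = client.auth.login("admin", "changeme")  # session key for later calls
    try:
        # List every system registered with the Spacewalk server.
        for system in client.system.listSystems(key):
            print(system["id"], system["name"])
        # Schedule a package install (1234 = system ID, 5678 = package ID).
        client.system.schedulePackageInstall(
            key, 1234, [5678], xmlrpc.client.DateTime(datetime.now()))
    finally:
        client.auth.logout(key)
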
There's plenty of information on the web to follow. Compared to Microsoft SCCM, the install is a little more complex. The main page I used for the install is https://fedorahosted.org/spacewalk/wiki/HowToInstall. Another site (http://wiki.centos.org/HowTos/PackageManagement/Spacewalk) does a good job of explaining everything, but the first link worked better for me for the installation itself.

The biggest things to note are to ensure DNS is set up correctly and that dependencies are available. The client needs to resolve the server and vice versa, and the client will need a few dependencies in place to install some of the client components.
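
A quick way to sanity-check the DNS requirement before registering a client is to confirm forward and reverse lookups from both machines. A small Python sketch (the hostnames are placeholders; run it from both sides):

    #!/usr/bin/env python
    # Check forward and reverse DNS for the Spacewalk server and a
    # client. Hostnames are placeholders -- substitute your own.
    import socket

    for host in ["spacewalk.example.com", "client01.example.com"]:
        try:
            ip = socket.gethostbyname(host)        # forward lookup
            name = socket.gethostbyaddr(ip)[0]     # reverse lookup
            ok = name.split(".")[0] == host.split(".")[0]
            print("%s -> %s -> %s [%s]" % (host, ip, name, "OK" if ok else "MISMATCH"))
        except socket.error as exc:
            print("%s: lookup failed (%s)" % (host, exc))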

Good luck with your install!

Year In Review

It's been one year since I started the build of my new server, and overall, I'm very, very pleased with the outcome. It's been a very busy year with family, but I was still able to chip away at a few things. On average, I have about 16 VMs running at any given time without any major performance issues (granted, only one or two machines are really doing anything heavy at any given time). The biggest limitation is the hard drives.

The servers that have proven very useful are the Plex, FreePBX, FreeNAS, web, and VPN servers. They get used just about every day, and I'm able to expand my experience and knowledge with them as well. The remaining VMs are for monitoring and/or management purposes, along with four recently built BOINC servers that soak up the extra CPU cycles I have to spend. The BOINC servers run under a resource pool, so they consume only the amount of CPU that I want to provide at any point in time.

The ESXi build has also given me the opportunity to play around with a few technologies that are new to me this year as well. It's been able to support a nested ESXi cluster environment, NetApp simulators, a GNS3 network lab, and testing with NoMachine's NX client (they just recently added support for Android and iOS (iPad only at the moment)). I've also been able to dedicate some time to an upcoming migration to pfSense as my primary router.

My ongoing issue remains storage. Since I wasn't able to use the RAID card due to the hardware passthrough limitation, I'm limited to two non-RAIDed drives supporting all my VMs. I've thought a lot about how to resolve this, and I'm leaning towards using FreeNAS (running as a VM) to present iSCSI target(s) to the ESXi host.

The goal is to add the new disks (potentially four) to VMware by creating a datastore for each disk, create a virtual disk for the FreeNAS VM on each new datastore consuming all the available space, create a single ZFS volume within FreeNAS across all the drives, then create an iSCSI block device on the new volume. This would then get presented back to ESXi, which would create another datastore for the VMs to use (disk -> VMFS datastore -> virtual disk -> ZFS volume within FreeNAS -> iSCSI block device -> VMFS datastore). This is of course a bit more complicated than it needs to be, but I'm not sure I can afford a new machine dedicated to running FreeNAS. A physical box would also likely run into issues with its 1 Gb Ethernet connection, and even with the virtual setup I will need to do some testing, as I still might be limited by a 1 Gb virtual Ethernet connection.
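
As a rough check on that last concern: a gigabit link tops out around 125 MB/s before protocol overhead, which is in the same ballpark as a single spinning disk, so a four-disk ZFS volume could easily outrun the NIC for sequential work. A quick back-of-the-envelope sketch (the per-disk throughput is an assumed figure for a typical 7200 RPM SATA drive; measure your own):

    # Back-of-the-envelope: can a 1 Gb (virtual) Ethernet link keep up
    # with a four-disk ZFS volume served over iSCSI?
    link_mb_s = 1000 / 8.0          # 1 Gb/s = 125 MB/s theoretical max
    usable = link_mb_s * 0.9        # rough guess at iSCSI/TCP efficiency
    per_disk = 150.0                # assumed MB/s sequential per disk
    pool = 4 * per_disk             # striped sequential ceiling
    print("link ~%.0f MB/s, pool ~%.0f MB/s -> bottleneck: %s"
          % (usable, pool, "network" if usable < pool else "disks"))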