Friday, January 2, 2015

FreeNAS Migration to Hardware


After one of the hard drives in my 4 bay eSATA SAN failed, it got me thinking while shopping for a new drive. I priced out a replacement and saw drives that were bigger and better for the same price. The four drives I'm using are outside their warranty, so the remaining drives might not last much longer either.

I definitely don't want to purchase a replacement drive identical to what I have when I could upgrade to something bigger and better. With my 4 bay SAN, I think I'm limited on drive size (it might be 3 TB), and it currently holds 2.5 TB drives (even if I did buy a bigger drive and it worked, the extra space would sit unused until I replaced all the drives and started from scratch). I recently read that FreeNAS can grow the zpool / volume automatically as individual drives are replaced with larger ones, so I'm going to take advantage of this (I already tested it in the lab, and it works great!). I definitely want to take this into consideration, since it would let me easily expand my storage in 3 or 4 years (once the warranty expires and the drives start to fail) without having to build a new machine and migrate data.
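
For reference, the underlying ZFS operations look roughly like the sketch below. FreeNAS drives this from its GUI, and the pool and device names here (tank, ada1, ada5) are just placeholders for illustration.

# Let the pool grow automatically once every disk in a vdev has been swapped for a larger one
zpool set autoexpand=on tank
# Replace one disk at a time and let the resilver finish before the next swap
zpool replace tank ada1 ada5
zpool status tank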

So, the plan is to build a new machine (going with some of the hardware recommendations from the FreeNAS forums) with four WD Red 3 TB drives in a RAID 10 configuration. This should give me the best performance I can get out of the slow 5400 RPM drives. From my experience running FreeNAS as a VM supporting all my VMs, I don't see the gigabit link being saturated (the workload is mainly disk IO intensive), so I should be OK.
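
In ZFS terms, RAID 10 is a stripe of mirrors. Below is a minimal command line sketch of that layout; FreeNAS builds the same thing through its volume manager, and the pool and device names are placeholders.

# Two mirrored pairs striped together: roughly 6 TB raw usable from four 3 TB drives
zpool create tank mirror ada0 ada1 mirror ada2 ada3
zpool status tank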

**** Hardware Purchased ****
I think the total price will come to around $1300. I'm hoping it'll last at least 5 years, so it'll be about $22/month for just over 5 TB of usable RAID 10 storage that's local and very versatile. And hopefully all I'll need to do after that is invest another $400 in new HDDs to get another 5 years out of it.

  • SuperMicro motherboard X10SLM-F (http://www.supermicro.com/products/motherboard/Xeon/C220/X10SLM-F.cfm)
  • Intel Xeon E3-1220 v3 Haswell 3.1 GHz CPU (http://ark.intel.com/products/75052/Intel-Xeon-Processor-E3-1220-v3-8M-Cache-3_10-GHz#@specifications)
  • 4 x 8GB sticks of Crucial DDR3 SDRAM ECC unbuffered 1600 server memory (CT2KIT102472BA160B)
  • 4 x 3TB WD Red NAS drives (6 Gb/s, 5400 RPM)
  • Case - purchased a used one from a friend
  • Power Supply - pending
  • USB drive for install of FreeNAS - pending

**** Update - March 20, 2015 ****
Overall, I would say that I'm satisfied with the new FreeNAS server. It's opened up a lot of options, and I was able to learn a lot about FreeNAS. There were a few setbacks along the way, but overall, things are looking good now.

The new FreeNAS server needed three network cables: one for the IPMI port (this is really nice to have), one dedicated to the NFS traffic hosting the VMs, and one for the remaining traffic. The interface supporting the VMs uses a crossover Ethernet cable to the ESXi box, since I only have one ESXi host and I don't want to waste two ports on my already limited gigabit switch.

My first problems started when two of my new hard drives failed within FreeNAS. After two RMAs, I was back on track. I replicated my datasets from the VM FreeNAS to the new FreeNAS, re-added the VMs to the inventory, and moved on to using the new setup.
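
FreeNAS handles this through its replication tasks, but underneath it boils down to ZFS snapshots piped between the two boxes. A rough sketch, with a hypothetical dataset name (tank/vms) and hostname (new-freenas):

# On the old (VM) FreeNAS: snapshot the dataset and send it to the new box over SSH
zfs snapshot tank/vms@migrate
zfs send tank/vms@migrate | ssh new-freenas zfs receive -F tank/vms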

With the new setup, I only ran into issues recently, after the latest updates for ESXi 5.5 (I was on a patch from ~April 2014 and I patched it with the latest Feb 2015 patch). I started seeing issues where the datastores would become unavailable. From either the ESXi host or FreeNAS, I couldn't ping the other. The ESXi logs showed the vmnic with a watchdog alert. After some searching, it appeared the issue was related to the Realtek driver/card I was using. I updated the driver without any success. Then, I switched the interfaces on the vSwitches, moving the NFS traffic to the Broadcom NIC and the user traffic to the Realtek. Since I only saw the issue when the datastore was under high load, I should be OK, and since making the change I haven't had any issues. I do not recommend using Realtek cards in an ESXi install.
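
Swapping the uplinks can be done from the vSphere client or from the ESXi shell. A minimal sketch of the shell version; the vmnic and vSwitch names below are placeholders for my actual layout:

# Move the NFS vSwitch off the Realtek NIC and onto the Broadcom NIC
esxcli network vswitch standard uplink remove --uplink-name=vmnic1 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch1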

Overall, the performance is great for what I'm doing. However, I had to disable NFS sync writes. When it was enabled (and it is highly recommended that it is), everything was very slow. If I had some extra money, I would probably invest in some SSDs for the ZIL. But, since I have good backups, I'm not too worried about it right now. I can now reboot several VMs at the same time without noticing any delay. And when starting a Spacewalk verify scan, esxtop showed about 2200 IOPS for the RAID 10 array (I'm assuming most of this speed is from the read cache). And with the RAID 10 array, my writes are much faster than they would be with RAID 5 or 6.
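
Disabling sync writes is a per-dataset ZFS property. A quick sketch, assuming the VMs live on a dataset named tank/vms (the usual caveat applies: with sync disabled, a power loss can drop the last few seconds of writes, which is why the SSD ZIL is the recommended fix):

# Acknowledge NFS writes immediately instead of waiting for them to hit stable storage
zfs set sync=disabled tank/vms
zfs get sync tank/vms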

I also finished configuring a remote FreeNAS for off site replication (yeah!). I installed the phpVirtualBox jail, attached a 5 TB NTFS-formatted USB drive, granted the jail access to the USB drive, created a FreeNAS VM on the USB drive, and replicated all the datasets I had configured. A friend is hosting it now on his Windows machine running VirtualBox. I created two volumes: the first uses encryption with a passphrase, and the other just has encryption. My data is on the volume with the passphrase, so every time the FreeNAS VM reboots, I have to connect and unlock it. The second volume holds the OpenVPN configuration files. FreeNAS already had OpenVPN installed, but I had to replace the binary with one built with read-password-from-file support, copied from a compiled version running on another FreeBSD jail. I then have a cron job that monitors the connection and restarts the service if the connection goes down.
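
The watchdog itself is simple. A minimal sketch of what the cron job does, assuming the OpenVPN server hands out 10.8.0.1 as the tunnel gateway and the script lives at /root/vpn-watchdog.sh (both hypothetical):

#!/bin/sh
# vpn-watchdog.sh - restart OpenVPN if the tunnel endpoint stops answering
if ! ping -c 3 10.8.0.1 > /dev/null 2>&1; then
    service openvpn restart
fi

And the crontab entry, checking every five minutes:

*/5 * * * * /root/vpn-watchdog.sh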

I'm hoping the above protects me from someone who tries to break into the FreeNAS OS during a reboot, as they still wouldn't be able to access my data. The worst case is that they would get my configs for the OpenVPN client; however, the client is limited in what it can see, and I'll be monitoring the connection closely. To recover my data, I would only need to attach the USB drive to the FreeNAS server, set up VirtualBox, power on the VM, and then begin replicating data.




Wednesday, December 31, 2014

Recording Live TV

A home PVR has been another hobby of mine for quite some time. I ran MythTV back in the day and did enjoy it, but there were some flaws. So, for this new setup, I wanted to re-evaluate the available options before deciding on a final solution.

My requirements for my final solution:
 - Be able to record live TV (both OTA and from commercial providers)
 - Have access to guide data and ability to record using a web browser
 - Be able to record and convert shows into m4v (the most compatible format across multiple platforms, IMO)
 - Be able to watch the converted shows via Plex
 - Be able to watch live TV (with pause ability) on all my devices (phones, tablets, TVs, laptops, ...)

There are lots of options for recording. Here is a list of the ones I reviewed:
 - Windows Media Center = Doesn't have a web-based guide option; couldn't find any plugins to watch live TV

 - MythTV = Live TV is still experimental, and it was not fun getting the drivers to work with my Hauppauge card.

 - WinTV with Extend (from Hauppauge) = Live TV works from the phone app, but I couldn't extend it to other devices. I also didn't see guide data when choosing a channel.

 - HD Homerun with Plex HD SurferWave plugin = I saw a YouTube video on this; it looked nice for watching live TV, but by itself it didn't support recording shows (and could run into resource contention over tuner availability)

 - Next PVR = I've been using this for a few weeks now. I found a plugin for Plex that supports live TV, though I'm currently having issues watching two shows at a time through Plex. However, I like the support for my Hauppauge WinTV-HVR-2255 tuner - it can record all of the channels in a single digital group on a single tuner (i.e. 5.1, 5.2, 5.3, 5.4). It can also stream recorded TV using its built-in web server, and I can stream live TV from that same web server using VLC.

 - Tablo = Really awesome, but a bit out of my price range. But, it definitely takes all the work out of it.


**** Current Solution - as of Dec 2014 ****

I've been using NextPVR / NPVR for a couple of months now. It records the shows, then an application called MCE Buddy pulls out the commercials, compresses the file, and moves it to a folder that Plex is configured to pull into its library. Plex does the rest, delivering the content to all my Plex devices. And, if I want to watch live TV (which isn't too often), I can use VLC to connect to the NextPVR web page and stream it, or I use a NextPVR plugin within Plex. I use Schedules Direct for TV guide data.


Saturday, October 4, 2014

VM Migration to NFS Export hosted by a FreeNAS VM

I mentioned earlier in my Year in Review that I didn't have RAID set up for the VMs (the onboard motherboard RAID wasn't supported by stock ESXi, and I couldn't use my RAID card because it's passed through to some of the VMs); they were just spread across two 2 TB drives. To support the migration, I was using four older SATA I 250 GB drives for my file shares and an external SAN for non-OS data.

The goal was to have something that would protect against drive failure, and I didn't want to purchase any additional hardware. My solution was to provide virtual disks to a FreeNAS VM, create a ZFS volume using RAID, and set up an NFS export that ESXi could mount. I know it isn't the most elegant solution, but it does offer some benefits:
 1) I didn't have to purchase any new hardware
 2) I can sleep better at night knowing I can survive a drive failing
 3) I can set up off site replication for my VMs, since they're now on a ZFS volume

I opted for a second FreeNAS VM dedicated to the VMs, with the other FreeNAS VM acting more like a development environment for testing patches and configurations (if I need to reboot the FreeNAS hosting the VMs, I have to shut down all the VMs first). The FreeNAS VM itself was kept on an ESXi datastore that's available at boot.

I wanted to set up FreeNAS with a 10 Gb NIC, but I initially ran into driver issues with the VMXNET 3 NIC. So I set up two NICs on the FreeNAS VM and gave each NFS export its own dedicated NIC. Once I figure this out on my development FreeNAS, I'll fix it. For what I'm doing, I'm not seeing much of a performance hit on the gigabit links, but the extra bandwidth wouldn't hurt.

The process was a bit slow to migrate all the data around (it took about a week). The steps I took are below:
 1) Migrated my CIFS data off the 4 drive RAID to the external SAN
 2) Created a new ZFS volume on the 4 drives where the CIFS data was (I went with RAID 10)
 3) Migrated some of the VMs to the new RAID 10 ZFS volume
 4) Migrated the remaining VMs to the external SAN
 5) Once the two 2 TB drives were empty, I created a new virtual disk on each and set up a second RAID 1 ZFS volume, then migrated some of the VMs to it

To wrap this up, I've been using this setup for 20+ days now, and haven't had any issues. And as I said, it's not that elegant, but it does provide peace of mind against drive failure, and I'll feel really good once I have the data replicated off site. :)


*** I recently had a hard drive go bad in my 4 bay external eSATA SAN, which is way past warranty. The cost to replace the drive at its current size is actually more than getting a bigger drive when the bigger drives are on sale. So, I started reading more about FreeNAS and how it could solve some of my problems. While reading, though, I learned that running FreeNAS as a VM is not recommended. I personally haven't had an issue (it's been wonderful), but apparently there are stories of people losing data. I really don't want to lose everything I've done, so now I plan to build a new server fully dedicated to storage (this was always the ideal, but I was trying to cut down on costs). I plan to directly connect it to the ESXi server via a crossover Ethernet cable (I don't want to waste two ports on my gigabit switch, I currently only have one virtual host server, and the future FreeNAS box will have two NICs with the second one supporting CIFS and offsite replication).

The other HUGE benefit is that I gain about 12 GB of memory in my ESXi box. I was using 8 for my prod FreeNAS and 4 for the CIFS / dev FreeNAS. I'll still need to test FreeNAS updates in a VM, but that's something I could spin up, test, and then shut back down.

The plan is to present storage to ESXi via NFS, as it seems easier than iSCSI (I don't want to deal with increasing iSCSI extent sizes). I'll just keep the data separated by datasets for replication purposes. I've been monitoring the performance of the virtual FreeNAS machine, and bandwidth isn't an issue (I'm mainly concerned with disk IO).
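
Mounting the export on the ESXi side is a one-liner from the shell (it can also be done from the vSphere client). A sketch with a placeholder address and export path:

# Mount the FreeNAS NFS export as a datastore named "freenas-vms"
esxcli storage nfs add --host=192.168.1.50 --share=/mnt/tank/vms --volume-name=freenas-vms
esxcli storage nfs list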

I've started testing growing volumes by replacing the disks with larger ones, so I feel confident in building something that supports growth for three years (through the hard drive warranty period). Then, as the hard drives fail, I would purchase larger ones to replace them. I'm targeting RAID 10 with four drives (probably the WD Red NAS drives, even though they're 5400 RPM) for the best performance I can get on a small motherboard.



Monday, September 1, 2014

Upgrade to ESXi 5.5

I finally had some time to dedicate towards an upgrade to ESXi 5.5!

The goal was to install on a new SSD hard drive I had just for this occasion. I wanted to keep the original hard drive as is in the event I ran into issues and had to revert back (and it was close, I almost had to revert back!).

I had some issues with the 5.5 installer seeing my hard drive, which I think was caused by the drive being attached to an ASMedia SATA controller that's no longer supported (something I didn't realize at the time). So, I temporarily moved the drive to another controller, did the install, and eventually moved it back (with a few adjustments to make it work).

When I initially moved the drive back to the ASMedia controller, ESXi would boot, but it would not recognize its own datastore. ESXi also couldn't find the network card I was using, nor the SAN connected via eSATA off that same ASMedia controller. I then thought I'd try a fresh install of 5.0 and then upgrade (hoping it wouldn't remove drivers that were in use), which was mostly successful: the upgraded 5.5 installation could now see the network card, but it still couldn't see the drives attached to the ASMedia controller.

After some searching, I caught a lucky break and came across some drivers to load (http://www.v-front.de/2013/11/how-to-make-your-unsupported-sata-ahci.html). Thanks for the support! (I remembered afterwards that somebody posted this same issue as a comment to one of my earlier posts - dang, should have remembered!) This did the trick. After loading the driver and a quick reboot, everything was successful! I then moved the new drive back to its original SATA controller and everything came up wonderfully!
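
Installing a community VIB like that is done from the ESXi shell. A minimal sketch, assuming the VIB has already been copied to a datastore (the path and filename below are placeholders):

# Allow community-supported packages, install the VIB, then reboot
esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /vmfs/volumes/datastore1/sata-xahci.vib
reboot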

The first items on the agenda included configuring the hardware pass through and licensing. Both were easy without any issues.

Then it was on to adding the virtual machines back to the inventory and powering them on. I encountered no issues with the inventory, but I was forced to remove the PCI devices configured for hardware pass through from each VM and add them back. I also ran into an issue starting a VM that had USB passed through.

Error:
Error message from ESX1: Failed to register the
device pciPassthru0 for 0:29.0 due to unavailable
hardware or software support

Some more searching, plus a successful boot test on another machine that also had USB passed through, led me to add the pciHole.start/.end parameters to the VM's configuration file. This is the same configuration I originally used to power on a VM that was using a video card.

pciHole.start = "1200"
pciHole.end = "2200"

That about sums it up. I also loaded the latest ESXi 5.5 patch (ESXi550-201407001.zip) without any issues. Now I'm able to use a version of ESXi fully licensed to support my 64 GB of memory! Yeah!

As for upgrading the VM tools and hardware versions, I'll be doing that later. I want to make sure everything is stable before doing that. But, when I do the hardware version upgrade, I'll be going to version 9 from the command line (version 10, from what I've read, can only be managed with the vSphere web client). The process is fairly simple (I had to do this when testing a nested ESXi 5.5 cluster at work - hardware version 9 offered better support for something).








Monday, August 4, 2014

Linux Spacewalk / Satellite Server

This was my first attempt at setting up a Satellite server, so I learned quite a bit. Before I had all my VMs, I never had a need for it. Now, having it will definitely speed up patching and installs.

Some features I plan on using:

  • remote installs of applications
  • application baselines across multiple servers
  • pushing updates to all the servers
  • pushing config files that need updating (i.e. yum.conf, snmp.conf, ...)
  • kickstarting servers

There's plenty of information out on the web to follow. Compared to Microsoft SCCM, the install is a little more complex. The main page I used for the install is https://fedorahosted.org/spacewalk/wiki/HowToInstall. Another site (http://wiki.centos.org/HowTos/PackageManagement/Spacewalk) does a good job of explaining everything, but the first link worked better for me for the installation itself.

The key things to note are to ensure DNS is set up correctly and that dependencies are available. The client needs to resolve the server and vice versa, and the client will need some dependencies in place to install the client components.
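
As an example, registering a CentOS/RHEL client against the new server looks roughly like this (the server name and activation key are hypothetical, and the Spacewalk client repo needs to be configured first so the packages resolve):

# Install the client tools, then register against the Spacewalk server with an activation key
yum install rhn-client-tools rhn-check rhn-setup rhnsd yum-rhn-plugin
rhnreg_ks --serverUrl=https://spacewalk.example.com/XMLRPC \
  --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT \
  --activationkey=1-base-key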

Good luck with your install!

Year In Review

It's been one year since I started the build of my new server, and overall, I'm very, very pleased with the outcome. It's been a very busy year with family, but I was still able to chip away at a few things. On average, I have about 16 VMs running at any given time without any major performance issues (given that only one or two machines are really doing anything heavy at any given time). The biggest limitation is the hard drives.

The servers that have proven most useful are the Plex, FreePBX, FreeNAS, web, and VPN servers. They get used just about every day, and I'm able to expand my experience and knowledge with them as well. The remaining VMs are for monitoring and/or management purposes, plus four recent builds of BOINC servers since I have extra CPU cycles to spend. The BOINC servers run under a resource pool, so they consume only the amount of CPU that I want to provide at that point in time.

The ESXi build has also given me the opportunity to play around with a few technologies that are new to me this year as well. It's been able to support a nested ESXi clustered environment, NetApp simulators, a GNS3 network lab, and testing with NoMachine's NX client (they just recently added support for Android and iOS (iPad only at the moment)). I've also been able to dedicate some time to an upcoming migration to pfSense as my primary router.

My ongoing issue remains storage. Since I wasn't able to use the RAID card due to the hardware pass through limitation, I'm limited to two non-RAIDed drives supporting all my VMs. I've thought a lot about how to resolve this, and I'm leaning towards using FreeNAS (running as a VM) to host iSCSI target(s) for the ESXi host. The plan is to add the new disks (potentially four) to VMware by creating a datastore for each disk, create a virtual disk for the FreeNAS VM on each new datastore consuming all the available space, create a single ZFS volume within FreeNAS across all those virtual disks, then create the iSCSI block device on the new volume. This would then be presented to ESXi, which would create another datastore for the VMs to use (disk -> VMFS datastore -> virtual disk -> RAID volume within FreeNAS -> iSCSI block device -> VMFS datastore). This is of course a bit more complicated than it needs to be, but I'm not sure I can afford a new machine that would run FreeNAS. I'll also need to do some testing, as I might still be limited by the 1 Gb virtual Ethernet connection.
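
The iSCSI block device would be backed by a zvol on that new ZFS volume. A quick sketch of just that piece, with hypothetical names and size (FreeNAS then exposes the zvol as an extent through its iSCSI target configuration):

# Create a sparse 1 TB zvol to back the iSCSI extent presented to ESXi
zfs create -s -V 1T tank/esxi-iscsi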







Tuesday, October 29, 2013

High System Interrupts

I just finished fixing a problem with one of my VMs that uses hardware pass-through. It was one of the machines getting the second USB controller. When I went into the OS (Windows), it was very slow. Task manager showed the first CPU floating around 30% with the second CPU at idle, but looking at the processes, nothing was consuming CPU cycles. Then, I saw in Resource Monitor that it was hardware system interrupts.

I isolated it to the USB controller rather than the video card by removing both and adding them back one at a time. I tried a couple of different scenarios, switching the controller to other VMs and whatnot, but had no luck.

To resolve the problem, I de-selected all the hardware for pass-through, rebooted, reconfigured the pass-through, and rebooted a second time. I checked my VMs to ensure they were still configured with the correct hardware items and powered everything back on. I think it might have happened when I was adding additional hardware for pass-through and it just got out of sync somewhere.

Within Linux, I'm fairly certain you can use iostat to help track down the culprits behind high system interrupts. I think it was the tool I used a few years ago to track something down (it was either the analog phone card or a dying hard drive, from what I can remember).
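
For future reference, a couple of standard Linux commands that help narrow down interrupt storms (just a sketch; the offending device will obviously vary):

# Per-CPU interrupt counts by source; run it twice and compare to spot the noisy device
cat /proc/interrupts
# The "in" column shows total interrupts per second
vmstat 1 5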