Wednesday, December 31, 2014

Recording Live TV

A home PVR has been another hobby of mine for quite some time. I ran MythTV back in the day and did enjoy it, but there were some flaws. So during this new setup, I wanted to re-evaluate the available options before deciding on a final solution.

My requirements for my final solution:
 - Be able to record live TV (both OTA and from commercial providers)
 - Have access to guide data and ability to record using a web browser
 - Be able to record and convert shows into M4V (the most compatible format across multiple platforms, IMO)
 - Be able to watch the converted shows via Plex
 - Be able to watch live TV (with pause ability) on all my devices (phones, tablets, TVs, laptops, ...)

There are lots of options for recording. Here is a list of the ones I reviewed:
 - Windows Media Center = No web-based guide option, and I couldn't find any plugins for watching live TV

 - MythTV = Live TV is still experimental, and it was not fun getting the drivers to work with my Hauppauge card.

 - WinTV with Extend (from Hauppauge) = Live TV worked from the phone app, but I couldn't extend it to other devices. I also didn't see guide data when choosing a channel.

 - HD Homerun with Plex HD SurferWave plugin = I saw a YouTube video on this; it looked nice for watching live TV, but by itself it didn't support recording shows (and could run into resource contention over available tuners)

 - Next PVR = This is what I've been using for the past few weeks. I found a plugin for Plex that supports live TV, though I'm currently having issues watching two shows at a time through Plex. I like the support for my Hauppauge WinTV-HVR-2255 tuner: it can record all of the channels in a single digital group on one tuner (i.e., 5.1, 5.2, 5.3, 5.4). It can also stream recorded TV using its built-in web server, and I can stream live TV with VLC via that same web server.

 - Tablo = Really awesome, but a bit out of my price range. It definitely takes all the work out of it, though.


***** Current Solution - as of Dec 2014 ******

I've been using NextPVR (NPVR) for a couple of months now. It records shows, and then an application called MCE Buddy pulls out the commercials, compresses the file, and moves it to a folder that Plex is configured to pull into its library. Plex does the rest, delivering the content to all my Plex devices. If I want to watch live TV (which isn't too often), I can use VLC to connect to the NextPVR web server and stream it, or I can use a NextPVR plugin within Plex. I use Schedules Direct to receive TV guide data.
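
For reference, watching live TV this way is just a matter of pointing VLC at the stream exposed by the NextPVR web interface. A minimal sketch, assuming NextPVR's default web port of 8866 (the actual stream path comes from the NextPVR web page itself, so treat it as a placeholder):

vlc "http://nextpvr-host:8866/<stream path copied from the NextPVR web page>"

Anything that can play an HTTP stream should work the same way; VLC just happens to be installed on everything I use.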


Saturday, October 4, 2014

VM Migration to NFS Export hosted by a FreeNAS VM

I mentioned earlier in my Year in Review that I didn't have RAID set up for the VMs (the onboard motherboard RAID isn't supported by stock ESXi, and I couldn't use my RAID card together with hardware pass-through to some of the VMs), so they were just spread across two 2 TB drives. To support the migration, I was using four older SATA I 250 GB drives for my file shares and an external SAN for non-OS data.

The goal was to have something that would protect against drive failure, and I didn't want to purchase any additional hardware. My solution was to present virtual disks to a FreeNAS VM, create a ZFS volume using RAID, and set up an NFS export that gets mounted within ESXi. I know it isn't the most elegant solution, but it does offer some benefits (a rough sketch of the commands involved follows this list):
 1) I didn't have to purchase any new hardware
 2) I can sleep better at night knowing I can survive a drive failing
 3) I can set up off-site replication for my VMs, since my VMs are now on a ZFS volume
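
To give an idea of what this looks like under the hood, here's a minimal sketch. It assumes the four virtual disks show up inside FreeNAS as da1-da4 and uses made-up pool, dataset, and IP names; in practice the pool and NFS share are created through the FreeNAS web UI rather than the shell:

# On the FreeNAS VM: two mirrored pairs, striped ("RAID 10")
zpool create tank mirror da1 da2 mirror da3 da4
zfs create tank/vmstore
# The NFS export for /mnt/tank/vmstore is then defined under Sharing > UNIX (NFS)

# On the ESXi host: mount the export as a datastore
esxcli storage nfs add --host=192.168.1.50 --share=/mnt/tank/vmstore --volume-name=freenas-vms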

I did opt for a second FreeNAS VM that I could dedicate to the VMs; the other FreeNAS VM acts more like a development environment for testing patches and configurations (if I need to reboot the FreeNAS hosting the VMs, I have to shut down all the VMs first). The FreeNAS VM dedicated to the VMs was kept on an ESXi datastore that is available at boot.

I wanted to set up FreeNAS with a 10 Gb NIC, but I initially ran into driver issues with the VMXNET 3 NIC. So I set up two NICs on the FreeNAS VM and gave each NFS export its own dedicated NIC. Once I figure this out on my development FreeNAS, I'll fix it. With what I'm doing, I'm not seeing much of a performance hit on the 1 Gb links, but the extra bandwidth wouldn't hurt.

The process of migrating all the data around was a bit slow (it took about a week). The steps I took are below (a rough sketch of the datastore-to-datastore copy follows the list):
 1) Migrated my CIFS data off the 4 drive RAID to the external SAN
 2) Created a new ZFS volume on the 4 drives where the CIFS data was (I went with RAID 10)
 3) Migrated some of the VMs to the new RAID 10 ZFS volume
 4) Migrated the remainder VMs to the external SAN
 5) Once the two 2 TB drives were empty, I created a new virtual disk on each, set up a second RAID 1 ZFS volume, and migrated some of the VMs to it
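
For anyone curious how a VM actually moves between datastores on a standalone ESXi host (no Storage vMotion here), it mostly comes down to cloning the disks with vmkfstools while the VM is powered off. A rough sketch with made-up VM and datastore names:

# Copy the virtual disk to the new datastore (VM must be powered off)
mkdir /vmfs/volumes/freenas-vms/webserver
vmkfstools -i /vmfs/volumes/datastore1/webserver/webserver.vmdk /vmfs/volumes/freenas-vms/webserver/webserver.vmdk
# Copy the .vmx (plus nvram/log files if wanted), then re-register the VM
cp /vmfs/volumes/datastore1/webserver/webserver.vmx /vmfs/volumes/freenas-vms/webserver/
vim-cmd solo/registervm /vmfs/volumes/freenas-vms/webserver/webserver.vmx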

To wrap this up, I've been using this setup for 20+ days now, and haven't had any issues. And as I said, it's not that elegant, but it does provide peace of mind against drive failure, and I'll feel really good once I have the data replicated off site. :)


*** I recently had a hard drive go bad in my 4-bay external eSATA SAN, which is way past warranty. The cost to replace the drive at its current size is actually more than getting a bigger drive when the bigger drives go on sale. So I started reading more about FreeNAS and how it could solve some of my problems. While reading about running FreeNAS as a VM, though, I learned it's not recommended. I personally haven't had an issue (it's been wonderful), but apparently there are stories of people losing data. I really don't want to lose everything I've done, so I now plan to build a new server fully dedicated to storage (this was always the ideal setup, but I was trying to cut down on costs). I plan to connect it directly to the ESXi server via a crossover Ethernet cable (I don't want to waste two ports on my gigabit switch, I currently only have one virtual host server, and the future FreeNAS box will have two NICs, with the second one supporting CIFS and offsite replication).

The other HUGE benefit is that I gain back about 12 GB of memory in my ESXi box. I was using 8 GB for my prod FreeNAS and 4 GB for the CIFS / dev FreeNAS. I'll still need to test FreeNAS updates in a VM, but that's something I can spin up, test, and then shut back down.

The plan is to present storage to ESXi via NFS, as it seems easier than iSCSI (I don't want to deal with increasing iSCSI extent sizes). I'll keep the data separated by datasets for replication purposes. I've been monitoring the performance of the virtual FreeNAS machine, and bandwidth isn't an issue (I'm mainly concerned with disk IO).
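
Keeping each kind of data in its own dataset means replication can be done per dataset with snapshots and zfs send/receive. A minimal sketch with made-up dataset and host names (FreeNAS can schedule this kind of snapshot/replication from the web UI, which is what I'd actually lean on):

zfs snapshot tank/vms@offsite-1
zfs send tank/vms@offsite-1 | ssh backup-host zfs receive backup/vms
# Later runs only need to send the changes since the last snapshot:
zfs snapshot tank/vms@offsite-2
zfs send -i tank/vms@offsite-1 tank/vms@offsite-2 | ssh backup-host zfs receive backup/vms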

I've started testing growing volumes by replacing the disks with larger ones, so I feel confident in building something that supports growth for three years (the length of the hard drive warranty). Then, as the hard drives fail, I'd purchase larger ones to replace them. I'm targeting RAID 10 with four drives (probably the WD Red NAS drives, even though they're 5400 RPM) for the best performance I can get on a small motherboard.
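
The grow-by-replacement test is basically the standard ZFS procedure: swap one disk at a time, let it resilver, and once every disk in a vdev is larger the extra space shows up. A sketch with made-up pool and device names:

zpool set autoexpand=on tank
zpool replace tank da1 da5     # swap one half of a mirror for a larger disk
zpool status tank              # wait for the resilver to finish before touching the next disk
# Repeat for the other disk in the mirror; the added capacity appears once both are replaced.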



Monday, September 1, 2014

Upgrade to ESXi 5.5

I finally had some time to dedicate towards an upgrade to ESXi 5.5!

The goal was to install on a new SSD I had set aside just for this occasion. I wanted to keep the original hard drive as-is in case I ran into issues and had to revert (and it was close - I almost had to!).

I had some issues with the 5.5 installer seeing my hard drive, which I think was caused by the drive being on a SATA controller that is no longer supported (ASMedia; I didn't realize support had been dropped). So I temporarily moved the drive to another controller, did the install, and eventually moved it back (with a few adjustments to make it work).

When I initially moved the drive back to the ASMedia controller, ESXi would boot, but it wouldn't recognize its own datastore. ESXi also couldn't find the network card I was using, nor the SAN connected via eSATA off the same ASMedia controller. I then tried a fresh install of 5.0 followed by an upgrade (hoping the upgrade wouldn't remove drivers that were in use), which was mostly successful: the upgraded 5.5 installation could now see the network card, but it still couldn't see the drives attached to the ASMedia controller.

After some searching, I caught a lucky break and came across some drivers to load (http://www.v-front.de/2013/11/how-to-make-your-unsupported-sata-ahci.html). Thanks for the support! (I remembered afterwards that somebody had posted this same fix as a comment on one of my earlier posts - dang, should have remembered!) This did the trick: after loading the driver and a quick reboot, everything worked. I then moved the new drive back to its original SATA controller and everything came up wonderfully!
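
For future reference, loading a community driver package like that boils down to something along these lines (the VIB name and path here are from memory, so treat them as placeholders):

esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /tmp/sata-xahci.vib
# Reboot afterwards so the new driver mapping takes effect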

The first items on the agenda included configuring the hardware pass through and licensing. Both were easy without any issues.

Then it was on to adding the virtual machines back to the inventory and powering them on. I encountered no issues with the inventory itself, but I was forced to remove the PCI devices configured for hardware pass-through and add them back to each VM. I also ran into an issue starting a VM with USB passed through.

Error:
Error message from ESX1: Failed to register the
device pciPassthru0 for 0:29.0 due to unavailable
hardware or software support

Some more searching, plus a test with another machine that also had USB passed through and booted successfully, led me to adding the pciHole.start/.end parameters to the VM's configuration (.vmx) file. This is the same configuration I originally used to power on a VM with a video card passed through.

pciHole.start = "1200"
pciHole.end = "2200"

That about sums it up. I also loaded the latest ESXi 5.5 patch (ESXi550-201407001.zip) without any issues. Now I'm running a version of ESXi fully licensed to support my 64 GB of memory! Yeah!

As for upgrading the VM tools and hardware versions, I'll be doing that later - I want to make sure everything is stable first. When I do the hardware version upgrade, I'll be going to version 9 from the command line (version 10, from what I've read, is only supported with the vSphere Web Client). The process is fairly simple (I had to do it when testing a nested ESXi 5.5 cluster at work - hardware version 9 offered better support for something).
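
If memory serves, the command-line upgrade is roughly this, run against a powered-off VM (the ID comes from the first command):

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/upgrade <vmid> vmx-09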


Monday, August 4, 2014

Linux Spacewalk / Satellite Server

This is my first attempt at setting up a Spacewalk/Satellite server, so I learned quite a bit. Before I had all these VMs, I never had a need for one. Now, having it will definitely speed up patching and installs.

Some features I plan on using:

  • remote installs of applications
  • application baselines across multiple servers
  • pushing updates to all the servers
  • pushing config files that need updating (i.e. yum.conf, snmp.conf, ...)
  • kickstarting servers

There's plenty of information out on the web to follow. Compared to Microsoft SCCM, the install is a little more complex. The main page I used for the install is https://fedorahosted.org/spacewalk/wiki/HowToInstall. Another site (http://wiki.centos.org/HowTos/PackageManagement/Spacewalk) does a good job of explaining everything, but the first link worked better for me for the installation itself.

The biggest things to note are to make sure DNS is set up correctly and that dependencies are available. The client needs to resolve the server and vice versa, and the client will need some dependencies installed for some of the client components.
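
Once the client packages are installed, registration is typically a single command. A hedged example with a made-up hostname and activation key:

rhnreg_ks --serverUrl=https://spacewalk.example.com/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=1-mykey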

Good luck with your install!

Year In Review

It's been one year since I started the build of my new server, and overall I'm very, very pleased with the outcome. It's been a very busy year with family, but I was still able to chip away at a few things. On average, I have about 16 VMs running at any given time without any major performance issues (granted, only one or two machines are really doing anything heavy at any given time). The biggest limitation is the hard drives.

The servers that have proven most useful are the Plex, FreePBX, FreeNAS, web, and VPN servers. They get used just about every day, and I'm able to expand my experience and knowledge with them as well. The remaining VMs are for monitoring and/or management purposes, plus four recent builds of BOINC servers since I have extra CPU cycles to spend. The BOINC servers run under a resource pool, so they consume only the amount of CPU that I want to provide at any point in time.

The ESXi build has also given me the opportunity to play around with a few technologies that are new to me this year: a nested ESXi clustered environment, NetApp simulators, a GNS3 network lab, and NoMachine's NX client (they just recently added support for Android and iOS - iPad only at the moment). I've also been able to dedicate some time to an upcoming migration to pfSense as my primary router.

My ongoing issue remains storage. Since I wasn't able to use the RAID card due to the hardware pass-through limitation, I'm limited to two non-RAIDed drives supporting all my VMs. I've thought a lot about how to resolve this, and I'm leaning towards using FreeNAS (running as a VM) to host iSCSI target(s) for the ESXi host. The goal is to add the new disks (potentially four) to VMware by creating a datastore for each disk, create a virtual disk for the FreeNAS VM on each new datastore consuming all the available space, create a single ZFS volume within FreeNAS across all those virtual disks, and then create an iSCSI block device on the new volume. This would be presented back to ESXi, which would create another datastore for the VMs to use (disk -> VMFS datastore -> virtual disk -> RAID volume within FreeNAS -> iSCSI block device -> VMFS datastore). This is of course more complicated than it needs to be, but I'm not sure I can afford a new machine to run FreeNAS. I also suspect I'd run into issues with the 1 Gb Ethernet connection, so I'll need to do some testing, since I might be limited even by a 1 Gb virtual Ethernet connection.
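
If I do go the iSCSI route, the FreeNAS side of it would look roughly like this (pool name, zvol size, adapter name, and IP are all made up; the iSCSI target and extent themselves get defined in the FreeNAS web UI):

# On FreeNAS: create a zvol (block device) to back the iSCSI extent
zfs create -V 1800G tank/esxi-vms

# On ESXi: enable the software iSCSI initiator and point it at the FreeNAS target
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50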