Category: VMware vSphere

New Additions to My Home Lab: HP MicroServer & Synology NAS

I have been preparing for my VMware Certified Professional (VCP) exam. Early this year I decided to invest in an HP ProLiant MicroServer G8 and a Synology DS414slim NAS appliance to expand my home lab.

I’m one of those people who learn better by doing rather than reading, and I wanted to rely on practice labs and hands-on experience instead of books and practice tests to pass the exam.

I bought everything from newegg.com. The HP MicroServer came with 8GB of RAM installed, which I upgraded to 16GB. I then downloaded the HP-customized ESXi 5.5 ISO directly from the HP website; it includes all the drivers required to run ESXi on HP ProLiant servers.

I’m running a few VMs on the HP server, including the VMware vCenter Server Appliance (vCSA), which manages my ESXi hosts. The HP server is one of two ESXi hosts I have running; the other runs nested inside VMware Fusion on my iMac. Because the HP server and the iMac have CPUs of different generations with different feature sets, I had to enable VMware Enhanced vMotion Compatibility (EVC) to present a common CPU baseline and support vMotion and DRS between them.
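Since EVC is a cluster-level setting, it can also be enabled from PowerCLI. Below is a minimal sketch, assuming a cluster named "Lab", a vCenter at vcsa.lab.local (both made-up names), and a PowerCLI release recent enough to support the -EVCMode parameter on Set-Cluster; pick the EVC baseline that matches your oldest CPU:

# Connect to vCenter (hypothetical hostname)
Connect-VIServer -Server vcsa.lab.local

# Check the cluster's current EVC mode (blank means EVC is disabled)
Get-Cluster -Name "Lab" | Select-Object Name, EVCMode

# Enable EVC with an Intel baseline; every host in the cluster must support it
Set-Cluster -Cluster "Lab" -EVCMode "intel-sandybridge" -Confirm:$false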

I populated the Synology appliance with two SSDs configured in RAID 1 and carved out an NFS datastore on it for my VMs. I also store all of my ISO and OVA files there.
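For reference, this is roughly how an NFS export gets mounted as a datastore from the ESXi shell; a sketch, assuming the Synology sits at 192.168.1.50 and exports /volume1/nfs_datastore (both made-up values):

# Mount the NFS export as a datastore named synology_nfs
esxcli storage nfs add -H 192.168.1.50 -s /volume1/nfs_datastore -v synology_nfs

# Verify the datastore is mounted and accessible
esxcli storage nfs list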

I’m happy with both devices so far. The HP MicroServer is relatively quiet compared to other servers I have seen: according to my iPhone noise meter, its fan generates about 40 dBA on average, roughly what a Dell Latitude laptop fan puts out. The Synology NAS appliance is also pretty quiet; its fan only comes on for a few seconds when the CPU is under heavy load. I keep both devices in my home office, which is where I do most of my work.

One thing I wanted to do was schedule an automatic shutdown at night to save power. I searched online for a script to do so, but the problem I ran into was that in vSphere 5.5 the host has to be put into maintenance mode before it can shut down gracefully. That meant the server would come back up in maintenance mode every time it powered on, and I would have to intervene and take it out of maintenance mode manually.

After experimenting with a few esxcli commands, and with some help from the online community, I came up with the following AppleScript (hack). It shuts down the powered-on VMs, puts the host in maintenance mode, and then issues a shutdown command with a delay of 10 seconds. Before the delay timer expires, the script executes one more command (the last one below) to take the host out of maintenance mode. When the timer finally expires, the host shuts down gracefully and will boot back up out of maintenance mode.

-- "esxi-host" is a placeholder for the ESXi host's IP or hostname
-- Shut down the powered-on VM; "1" is its VM ID (list IDs with vim-cmd vmsvc/getallvms)
do shell script "ssh -i sshkey root@esxi-host vim-cmd vmsvc/power.shutdown 1"

-- Enter maintenance mode so the host can shut down gracefully
do shell script "ssh -i sshkey root@esxi-host esxcli system maintenanceMode set -e true -t 0"

-- Power off the host with a 10-second delay
do shell script "ssh -i sshkey root@esxi-host esxcli system shutdown poweroff -d 10 -r Shell"

-- Exit maintenance mode before the delay expires, so the host boots up out of maintenance mode
do shell script "ssh -i sshkey root@esxi-host esxcli system maintenanceMode set -e false -t 0"

From there, I scheduled a repeating event in Apple Calendar to launch the script every night.

(Photo: HP MicroServer G8 + Synology NAS)

In future posts I will share some of the lessons I learned during my prep journey, so stay tuned for that.

Anas

@anastarsha

 

Additional Information:

Install VMware ESXi 5.5 on HP ProLiant MicroServer G8

HP ProLiant MicroServer G8 Links 



Troubleshooting vMotion Connectivity Issues

I attended a panel discussion session at VMUG this week. Someone in the audience asked how to troubleshoot connectivity issues after moving a VM: he had a single flat VLAN with one IP subnet, and every time he vMotioned a VM to another host, users lost connectivity to that VM.

In response, one of the panelists advised him to create another VLAN/IP subnet, move the VM to the newly created subnet, and then try the vMotion again.

Well, obviously that wasn’t good advice, and changing the VM’s IP address would not have helped in this case. In this post I will explain what happens when a VM moves and the steps you can take to troubleshoot network delays and connectivity issues related to vMotion.

First, before you do any vMotion, ensure that the VLAN is configured on all switches and allowed on all necessary trunk ports in the network. On a Cisco IOS switch, you can verify this with show vlan and show interfaces trunk.
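For example, assuming the VM lives in VLAN 110 (a made-up number):

! Confirm the VLAN exists and is active
show vlan brief | include 110

! Confirm the VLAN is allowed and forwarding on the relevant trunk ports
show interfaces trunk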

When you create a new VM, the host allocates and assigns a MAC address to it. The physical switch the host is connected to eventually learns that MAC address (this happens when the VM ARPs for its gateway, when the VM starts sending traffic and the switch sees it, or when the Notify Switches option in vSphere is turned on).

When a VM moves to another host (which could be connected to another port on the same physical switch or to a different switch), somebody has to tell the network to change its destination port for that MAC address so traffic continues to reach the VM. In vSphere, that is normally handled by the host when the “Notify Switches” option is turned on: the host notifies the network by sending several RARP frames on behalf of the VM, which causes the upstream physical switches to update their MAC tables.
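If you want to confirm the host actually sends those notifications, you can capture them on the uplink from the ESXi shell. A sketch, assuming ESXi 5.5 or later (which ships with pktcap-uw) and vmnic0 as the uplink carrying the VM’s traffic (a made-up name); run it on the destination host while the vMotion completes:

# RARP uses EtherType 0x8035; capture only those frames on the uplink
pktcap-uw --uplink vmnic0 --ethtype 0x8035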

So the first step in troubleshooting this specific issue is to ensure that the “Notify Switches” setting is set to Yes before you vMotion the VM. In vSphere 5.5, go to the vSwitch/vDS settings and then to Teaming and Failover to verify the setting.

(Screenshot: the Notify Switches setting)
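You can also check, and fix, the setting with PowerCLI. A minimal sketch, assuming a standard vSwitch named vSwitch0 on a host named esx01.lab.local (both made-up names); on a vDS the equivalent setting lives in the distributed port group’s teaming policy:

# Show the current teaming policy, including the NotifySwitches flag
Get-VirtualSwitch -VMHost esx01.lab.local -Name vSwitch0 | Get-NicTeamingPolicy

# Turn Notify Switches on if it is off
Get-VirtualSwitch -VMHost esx01.lab.local -Name vSwitch0 |
    Get-NicTeamingPolicy | Set-NicTeamingPolicy -NotifySwitches $true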

 

If that does not help, try the following:

On the physical switch the destination host is connected to, look up the MAC address table to find out which port the switch is using to reach the VM. On a Cisco Catalyst switch, for example, you can use the “show mac address-table” command. If the switch is still using the old MAC-to-port mapping, then either the switch is not receiving the RARP notification from the host, or the host is not sending it.

If you don’t do anything to correct the problem, the aging timer for that MAC address (which defaults to 5 minutes on Cisco switches) will eventually expire and the switch will relearn the MAC address via the new port. To speed things up, you can always flush the VM’s MAC address from the switch’s MAC table (clear mac address-table on a Cisco Catalyst).
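Putting those two steps together on a Catalyst switch, assuming the VM’s MAC address is 0050.5601.0203 (a made-up address):

! Find which port the switch currently maps the VM's MAC address to
show mac address-table address 0050.5601.0203

! If it still points at the old port, flush the stale entry
clear mac address-table dynamic address 0050.5601.0203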

Obviously, having to intervene manually every time you move a VM defeats the whole purpose of vMotion, which is supposed to migrate a VM live while preserving all of its active network connections. But hopefully the steps above give you some pointers on where to start looking for the root cause.

Additionally, VMware recommends disabling the following on the physical ports connected to the host to minimize networking delay (see the sample port configuration after the list):

– Port Aggregation Protocol (PAgP) and Link Aggregation Control Protocol (LACP). 

– Dynamic Trunking Protocol (DTP) or trunk negotiation.

– Spanning Tree Protocol (STP).
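On a Cisco Catalyst switch, those recommendations translate into something like the interface configuration below (a sketch; the port number and VLAN list are made up). Note that in practice STP usually stays enabled globally and PortFast is used instead, so the host-facing port simply skips the listening/learning delay:

interface GigabitEthernet1/0/1
 description Uplink to ESXi host
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 ! Disable DTP trunk negotiation
 switchport nonegotiate
 ! Start forwarding immediately instead of waiting through STP states
 spanning-tree portfast trunk
 ! PAgP/LACP are avoided by simply not configuring channel-group on the port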

 



Private VLAN and How It Affects Virtual Machine Communication

Private VLAN (PVLAN) is a security feature which has been available for quite some time on most modern switches. It adds a layer of security by letting network admins restrict communication between servers in the same network segment (VLAN). For example, say you have an email server and a web server in the DMZ on the same VLAN, and you don’t want them to communicate with each other while each still communicates with the outside world. One way to prevent the servers from talking directly to each other is to place each server in a separate VLAN and apply ACLs on the firewall/router blocking traffic between the two VLANs. That solution, however, requires multiple VLANs and IP subnets, and in an existing environment it also requires re-IPing the servers. But what if you are running out of VLANs or IP subnets, or re-IPing is too disruptive? Then you can use PVLAN instead.

With PVLAN you can provide network isolation between servers or VMs which are in the same VLAN without introducing any additional VLANs or having to use MAC access control on the switch itself. 

While you can configure PVLAN on any modern physical switch, this post focuses on deploying PVLAN on a distributed virtual switch in a VMware vSphere environment.

(Diagram: Private VLAN and VMware vSphere)

But first, let me briefly explain how PVLAN works. The basic concept behind Private VLAN (PVLAN) is to divide the existing VLAN (now referred to as the primary PVLAN) into multiple segments, called secondary PVLANs (see the sample switch configuration after the list). Each secondary PVLAN is one of the following types:

  • Promiscuous: VMs in this PVLAN can talk to any other VM in the same promiscuous PVLAN or in any other secondary PVLAN. In the diagram above, VM E can communicate with A, B, C, and D.
  • Community: VMs in this secondary PVLAN can communicate with any VM in the same community PVLAN, as well as with the promiscuous PVLAN as explained above. However, VMs in this PVLAN cannot talk to the isolated PVLAN. So in the diagram, VMs C and D can communicate with each other and also with E.
  • Isolated: A VM in this secondary PVLAN cannot communicate with any VM in the same isolated PVLAN, nor with any VM in the community PVLAN. It can only communicate with the promiscuous PVLAN. Looking at the diagram again, VMs A and B cannot communicate with each other, nor with C or D, but they can communicate with E.
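To make this concrete, here is roughly what the PVLAN definitions look like on a PVLAN-capable Cisco switch, assuming primary VLAN 100 with isolated VLAN 101 and community VLAN 102 (all made-up numbers; on many platforms the switch must be in VTP transparent mode for these commands to be accepted):

vlan 101
 private-vlan isolated
vlan 102
 private-vlan community
vlan 100
 private-vlan primary
 private-vlan association 101,102

! Host port for an isolated server (VM A or B in the diagram)
interface GigabitEthernet1/0/5
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101

! Promiscuous port (the role VM E plays in the diagram)
interface GigabitEthernet1/0/1
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101,102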

There are a few things you need to be aware of when deploying PVLAN in a VMware vSphere environment:

  • PVLAN is supported only on distributed virtual switches, which require an Enterprise Plus license. PVLAN is not supported on a standard vSwitch.
  • PVLAN is supported on the vDS in vSphere 4.0 or later, and on the Cisco Nexus 1000v version 1.2 or later.
  • All traffic between VMs in the same PVLAN on different ESXi hosts needs to traverse the upstream physical switch, so the upstream switch must be PVLAN-aware and configured accordingly. Note that this is required only if you are deploying PVLAN on a vSphere vDS: VMware applies PVLAN enforcement at the destination, while the Cisco Nexus 1000v applies enforcement at the source, which allows PVLAN to work without upstream switch awareness.
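For the vDS case, the uplink from the physical switch to each ESXi host is a regular trunk that must carry the primary and all secondary VLANs, and the same vlan/private-vlan definitions shown earlier must exist on every physical switch in the path. A sketch with the same made-up VLAN numbers:

interface GigabitEthernet1/0/10
 description Trunk to ESXi host running the vDS
 switchport mode trunk
 switchport trunk allowed vlan 100-102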
Next, I’m going to demonstrate how to configure PVLAN on the VMware vDS and the Cisco Nexus 1000v, so stay tuned for that. In the meantime, feel free to leave some comments below.

 

