There are times when you want to leverage public cloud but want to keep some workloads in your data center. You can do that with Hybrid Cloud.
According to Gartner, the most important use case for hybrid cloud is cloud bursting: running workloads internally in the data center and bursting to a public cloud during peak times. Another common use case for hybrid cloud is disaster recovery, where you migrate workloads to a public cloud during disasters or maintenance windows.
Migrating a workload to the cloud is time-consuming, especially if you have to re-IP all the VMs.
In this post I will show how to migrate an application to the cloud without changing its IP address, subnet mask, default gateway, or security policies. This makes the migration much easier, especially if you are migrating the workload temporarily and do not want to change IP addresses or DNS settings.
Why I Like This Solution
You can certainly enable workload mobility between two data centers using a Layer 2 extension technology such as VPLS or Cisco OTV. However, I prefer LISP over any L2 extension simply because it is based on Layer 3: it does not extend the broadcast/failure domain the way L2 extensions do, and you get all the benefits and scale of L3 routing.
I also think that using LISP is better than relying on a cloud extension technology such as Cisco Nexus 1000v InterCloud or Verizon CloudSwitch, because those technologies add a shim layer to allow the VM to run in a cloud provider environment, and that shim degrades VM performance. With this LISP design the VM runs natively on the hypervisor and behaves like any VM local to the cloud provider environment.
– To get this solution to work you need two Cisco routers that support the Locator/ID Separation Protocol (LISP). The data center router can be either virtual or physical, while the cloud provider router has to be virtual. I will be using two Cisco CSR 1000v routers running IOS XE 3.12S for this solution.
– I will briefly explain how LISP works along the way, focusing only on the components required for this design. You can find more information on LISP on Cisco.com.
– I will be using Verizon Terremark as my cloud provider. You may use other providers like Microsoft Azure or AWS if you wish.
Before The Migration
The diagram above shows a VM running in a data center/private cloud with IP address 10.122.139.92. The default gateway for this VM is currently CSR-1, and from the outside DNS resolves to the VM's public IP address, which is part of the data center addressing scheme. Because CSR-1 is a LISP-enabled router, and because it is also acting as the LISP map-server/map-resolver, it keeps the VM's IP address (its EID) in its mapping database so it can inform other LISP routers of the VM's location when asked.
In the cloud provider, CSR-3 is also a LISP-enabled router and is configured to query CSR-1 for the location of any host it is trying to reach. So if CSR-3 receives traffic destined for the VM, it queries CSR-1 for the location of that VM. CSR-1 replies and tells CSR-3 to encapsulate and send all traffic destined for the VM to CSR-1 itself, because the VM is located on its LAN.
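To make the rest of the post easier to follow, here is a minimal sketch of the relevant `router lisp` configuration on both routers. Only the EID 10.122.139.92, the dynamic-EID name Cloud-dyn, and the locator-set name DC1 come from the outputs shown in this post; the RLOC addresses (10.0.20.1 for CSR-1, 10.0.20.3 for CSR-3), the /24 EID prefix, the CLOUD locator-set name, and the authentication key are assumptions for illustration.

```
! CSR-1: map-server/map-resolver plus xTR for the data center LAN
router lisp
 locator-set DC1
  10.0.20.1 priority 1 weight 100          ! assumed RLOC of CSR-1
  exit
 eid-table default instance-id 0
  dynamic-eid Cloud-dyn
   database-mapping 10.122.139.0/24 locator-set DC1
   exit
  exit
 site DC
  authentication-key MY-KEY                ! assumed shared key
  eid-prefix 10.122.139.0/24 accept-more-specifics
  exit
 ipv4 map-server
 ipv4 map-resolver
 ipv4 itr map-resolver 10.0.20.1
 ipv4 etr map-server 10.0.20.1 key MY-KEY
 ipv4 itr
 ipv4 etr
!
! CSR-3: xTR in the cloud provider, pointing at CSR-1 for mapping services
router lisp
 locator-set CLOUD
  10.0.20.3 priority 1 weight 100          ! assumed RLOC of CSR-3
  exit
 eid-table default instance-id 0
  dynamic-eid Cloud-dyn
   database-mapping 10.122.139.0/24 locator-set CLOUD
   exit
  exit
 ipv4 itr map-resolver 10.0.20.1
 ipv4 etr map-server 10.0.20.1 key MY-KEY
 ipv4 itr
 ipv4 etr
```

With `accept-more-specifics`, either router can register a /32 for an individual host it discovers under the covering /24, which is what makes per-VM mobility possible.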
The output below shows that the VM is located on CSR-1's local subnet at the moment:
CSR_1#sh ip lisp forwarding eid local
The next output shows that CSR-1 (10.0.20.1) is currently the locator for the VM:
CSR_1#sh ip lisp data
LISP ETR IPv4 Mapping Database for EID-table default (IID 0), LSBs: 0x1, 2 entries
10.122.139.92/32, dynamic-eid Cloud-dyn, locator-set DC1
Locator Pri/Wgt Source State
10.0.20.1 1/100 cfg-addr site-self, reachable
On CSR-3, if I issue the "where is the VM" command, I see that CSR-1 (10.0.20.1) is currently the locator:
CSR_3#lig 10.122.139.92
Mapping information for EID 10.122.139.92 from 10.0.20.1 with RTT 49 msecs
10.122.139.92/32, uptime: 00:02:28, expires: 23:59:59, via map-reply, complete
Locator Uptime State Pri/Wgt
10.0.20.1 00:02:28 up 1/100
That’s good. Both routers are in sync and know that the VM is located in the data center. Let’s now migrate this bad boy to the public cloud.
After the Migration
Now the VM has been cold-migrated to the public cloud as shown in the diagram above. When the VM boots up for the first time, it ARPs for its default gateway (which is CSR-3 in this case). When CSR-3 detects that the VM is now on its LAN, it registers it with the map-server (CSR-1) and declares itself the new locator for that VM. The DNS mapping for the VM has not changed, and the outside world still thinks the VM resides in the enterprise data center. So when CSR-1 receives traffic destined for that VM, it does a lookup in its database, encapsulates the traffic, and sends it to the new locator (CSR-3), which in turn decapsulates the traffic and sends it off to the VM.
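The detection mechanism is the `lisp mobility` command on the LAN interface: the router learns a dynamic-EID when it sees the host's traffic (such as that first ARP) on the wire. Both routers present the same gateway address to the VM, which is why the VM's IP, mask, and default gateway never change. The interface names and gateway address below are assumptions for illustration:

```
! CSR-1 data center LAN interface
interface GigabitEthernet2
 ip address 10.122.139.1 255.255.255.0    ! assumed gateway address
 lisp mobility Cloud-dyn                  ! detect dynamic-EIDs appearing on this LAN
!
! CSR-3 cloud provider LAN interface: same gateway IP, so the migrated VM
! comes up with its addressing and default gateway unchanged
interface GigabitEthernet2
 ip address 10.122.139.1 255.255.255.0
 lisp mobility Cloud-dyn
```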
Now back to the router CLI. If I issue the "where is the VM now" command again, this time on CSR-1, I see that CSR-3 (10.0.20.3) is now the locator:
CSR_1#lig 10.122.139.92
Mapping information for EID 10.122.139.92 from 10.0.20.3 with RTT 48 msecs
10.122.139.92/32, uptime: 00:02:27, expires: 23:59:59, via map-reply, complete
Locator Uptime State Pri/Wgt
10.0.20.3 00:02:27 up 1/100
If I ping the VM from CSR-1, sourcing the packets from a loopback interface, the ping is successful (losing the first packet while the LISP map-cache entry is being populated is expected behavior):
CSR_1#ping 10.122.139.92 source loopback 10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.122.139.92, timeout is 2 seconds:
Packet sent with a source address of 10.10.10.1
Success rate is 80 percent (4/5), round-trip min/avg/max = 47/47/48 ms
LISP allows migrating workloads to the cloud without relying on L2 extensions. Use this design if you are migrating your workload temporarily. If you are migrating the workload permanently, you might want to consider adding a LISP router at the WAN edge or at the branch office to redirect user traffic to the public cloud provider before it enters the enterprise data center.
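For that permanent-migration case, the WAN-edge or branch router would act as a LISP ITR (or proxy-ITR for traffic coming from non-LISP parts of the network), resolving the VM's location through CSR-1 and encapsulating user traffic straight to CSR-3. A minimal sketch, with an assumed local RLOC of 10.0.20.5 on the edge router:

```
! WAN-edge/branch router: resolve EIDs through CSR-1 and encapsulate
! user traffic directly to whichever locator currently hosts the VM
router lisp
 ipv4 itr map-resolver 10.0.20.1
 ipv4 itr
 ipv4 proxy-itr 10.0.20.5                 ! assumed local RLOC; for non-LISP sources
```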
Ping me if you want the full configurations.
Part 2 of this post is coming soon, make sure to subscribe to the blog or follow me on Twitter to get notified when new content is added.