
Deploying Secure Hybrid Cloud Extension Using LISP For Workload Mobility – Part 2

This post assumes a working knowledge of the Locator/ID Separation Protocol (LISP). You may want to review Part 1 before reading on.

In Part 1 we created two LISP sites and enabled connectivity between these two sites. We also enabled VM mobility and showed how you can migrate a VM to the cloud without changing its IP address, mask, or default gateway.

In this post I will expand on Part 1 and enable connectivity between non-LISP and LISP sites. As shown in the diagram below, I have a branch office (a non-LISP site) that needs to reach the VM (IP address 10.122.139.92) in the cloud.

[Diagram: a branch office (non-LISP site) that needs to reach the VM in the cloud through the LISP-enabled data center and cloud routers]

To do so, we have three design options:

Option 1 is to enable LISP on the branch edge router. In this design, traffic leaving the branch is redirected by the branch edge router to the cloud. However, this option is costly because it requires deploying a LISP-capable router at every branch.

Option 2 is to deploy a LISP proxy router in the service provider WAN to intercept traffic and redirect it to the cloud. This option is less expensive than option 1, but it requires routing all traffic through the proxy router, which could become a choke point.

Option 3 is to configure the already LISP-enabled router in the data center as a proxy that redirects traffic to the cloud. This is the least expensive option, but it obviously does not provide optimal routing because traffic has to be backhauled to the data center first. That said, this can be an acceptable design when the branch has no local internet access and internet-bound traffic needs to traverse the data center anyway, or when the customer wants branch traffic to pass through the data center firewall first for policy enforcement.

In this post I will show you what you need to do to enable option 3.

To add the proxy functionality to the design, we first need to configure CSR1 as a Proxy Tunnel Router (PxTR), which tells CSR1 to intercept traffic coming from any non-LISP site and redirect it to the cloud:

CSR_1#sh run | i ipv4

 no ipv4 itr

 ipv4 proxy-etr

 ipv4 proxy-itr 1.1.1.1
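
For reference, here is roughly how those three lines fit under router lisp on CSR1. This is only a sketch: the ETR and map-server/map-resolver lines are carried over from the Part 1 setup, and the shared key is a placeholder.

router lisp
 ! xTR and map-server/map-resolver roles carried over from Part 1
 ipv4 itr map-resolver 1.1.1.1
 ipv4 etr map-server 1.1.1.1 key <shared-key>
 ipv4 etr
 ipv4 map-server
 ipv4 map-resolver
 ! PxTR role: stop acting as a regular ITR and proxy on behalf of
 ! non-LISP sites, using 1.1.1.1 as the RLOC for encapsulated traffic
 no ipv4 itr
 ipv4 proxy-etr
 ipv4 proxy-itr 1.1.1.1
 exit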

Now if I run a show command, I can see that CSR1 is configured as a PxTR:

CSR_1#sh ip lisp

  Instance ID:                      0

  Router-lisp ID:                   0

  Locator table:                    default

  EID table:                        default

  Ingress Tunnel Router (ITR):      disabled

  Egress Tunnel Router (ETR):       enabled

  Proxy-ITR Router (PITR):          enabled RLOCs: 1.1.1.1

  Proxy-ETR Router (PETR):          enabled

The second step is to configure CSR3 to use CSR1 as its proxy ETR (PETR). This is necessary because CSR3 has no way to get back to the source 2.2.2.2 on its own; this prefix is not a LISP EID and does not exist in CSR3's map cache.

CSR_3# show run | b lisp

 router lisp

 ipv4 use-petr 1.1.1.1
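
For context, this is roughly where that line sits in CSR3's LISP configuration. The ITR/ETR and map-server/map-resolver lines below are assumptions based on the Part 1 setup (CSR1 at 1.1.1.1 acting as map-server/map-resolver), and the shared key is a placeholder.

router lisp
 ! Normal xTR role from Part 1: resolve EIDs through and register with CSR1
 ipv4 itr map-resolver 1.1.1.1
 ipv4 itr
 ipv4 etr map-server 1.1.1.1 key <shared-key>
 ipv4 etr
 ! New in this step: LISP-encapsulate traffic destined to non-LISP
 ! prefixes (such as 2.2.2.2) toward the proxy ETR at 1.1.1.1
 ipv4 use-petr 1.1.1.1
 exit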

Now let’s run a ping test from R2 (2.2.2.2) to see if we can reach the VM:

R2#ping 10.122.139.92 source 2.2.2.2

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.122.139.92, timeout is 2 seconds:

Packet sent with a source address of 2.2.2.2 

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 48/51/63 ms

 

Comment below if you have any questions. If you like this post, please help me spread the word and share it with your network on Twitter/Facebook/LinkedIn.

 

Anas

Twitter: @anastarsha



Deploying Secure Hybrid Cloud Extension Using LISP For Workload Mobility – Part 1

There are times when you want to leverage public cloud but want to keep some workloads in your data center. You can do that with Hybrid Cloud.

According to Gartner, the most important use case for hybrid cloud is cloud bursting: running workloads internally in the data center and bursting to a public cloud during peak times. Another common use case is disaster recovery, where you migrate workloads to a public cloud during disasters or maintenance windows.

Migrating a workload to the cloud is time-consuming, especially if you have to re-IP all the VMs.

In this post I will show how to migrate an application to the cloud without changing its IP address, mask, default gateway, or security policies. This makes the migration easier, especially if you are migrating the workload temporarily and do not want to change the IP addresses or DNS settings.

Why I Like This Solution

You can certainly enable workload mobility between two data centers using a Layer 2 extension technology such as VPLS or Cisco OTV. However, I prefer LISP over any L2 extension simply because it is based on Layer 3: it does not extend the broadcast/failure domain the way L2 extensions do, and you get all the benefits and scale of L3 routing.

I also think that using LISP is better than relying on a cloud extension technology like Cisco Nexus 1000v InterCloud or Verizon CloudSwitch, because those technologies add a shim layer to allow the VM to run in the cloud provider's environment, which degrades VM performance. With this LISP design, the VM runs natively on the hypervisor and behaves like any VM local to the cloud provider environment.

Prerequisites

– To get this solution to work, you need two Cisco routers that support the Locator/ID Separation Protocol (LISP). The data center router can be either virtual or physical, while the cloud provider router has to be virtual. I will be using two Cisco CSR 1000v routers running IOS XE 3.12S for this solution.

– I will briefly explain how LISP works along the way and will focus only on the components required for this design. You can find more information on LISP at Cisco.com.

– I will be using Verizon Terremark as my cloud provider. You may use other providers like Microsoft Azure or AWS if you wish.

Before The Migration

[Diagram: the VM (10.122.139.92) running in the data center/private cloud, with CSR-1 as its default gateway and CSR-3 at the cloud provider]

The diagram above shows a VM running in a data center/private cloud with IP address 10.122.139.92. The default gateway for this VM is currently CSR-1, and from the outside, DNS resolves to the VM's public IP address, which is part of the data center addressing scheme. Because CSR-1 is a LISP-enabled router and is also acting as the LISP map-server/map-resolver, it keeps the VM's IP address in its mapping database so it can inform other LISP routers of the VM's location when asked.

On the cloud provider side, CSR-3 is also a LISP-enabled router and is configured to query CSR-1 for any host it is trying to reach. So if CSR-3 receives traffic destined for the VM, it queries CSR-1 for the location of that VM. CSR-1 replies to CSR-3 and tells it to encapsulate all traffic destined for the VM and send it to CSR-1, because the VM is located on CSR-1's LAN.
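
To make this more concrete, here is a minimal sketch of what the CSR-1 side of this design can look like. The dynamic-EID and locator-set names (Cloud-dyn, DC1) are taken from the show output further down; the VM subnet, the LAN interface name, and the shared key are placeholders from my lab, and the exact syntax varies slightly between IOS XE releases.

router lisp
 locator-set DC1
  1.1.1.1 priority 1 weight 100
  exit
 eid-table default instance-id 0
  ! Register the VM as a dynamic /32 EID when it is detected on the LAN
  dynamic-eid Cloud-dyn
   database-mapping 10.122.139.0/24 locator-set DC1
   exit
  exit
 ipv4 itr map-resolver 1.1.1.1
 ipv4 itr
 ipv4 etr map-server 1.1.1.1 key <shared-key>
 ipv4 etr
 ! CSR-1 is also the map-server/map-resolver for both sites
 ipv4 map-server
 ipv4 map-resolver
 exit
!
interface GigabitEthernet2
 ! Placeholder LAN interface name; this ties the interface to the
 ! Cloud-dyn dynamic-EID policy so the VM can be discovered and registered
 lisp mobility Cloud-dyn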

The output below shows that the VM is located on CSR-1's local subnet at the moment:

CSR_1#sh ip lisp forwarding eid local

Prefix

10.122.139.92/32

The next output below shows that CSR-1 (1.1.1.1) is currently the locator for the VM:

CSR_1#sh ip lisp data

LISP ETR IPv4 Mapping Database for EID-table default (IID 0), LSBs: 0x1, 2 entries

 

10.122.139.92/32, dynamic-eid Cloud-dyn, locator-set DC1

  Locator  Pri/Wgt  Source     State

  1.1.1.1    1/100  cfg-addr   site-self, reachable

On CSR-3, if I issue the lig ("where is the VM") command, I see that CSR-1 (1.1.1.1) is currently the locator:

CSR_3#lig 10.122.139.92

Mapping information for EID 10.122.139.92 from 10.0.20.1 with RTT 49 msecs

10.122.139.92/32, uptime: 00:02:28, expires: 23:59:59, via map-reply, complete

  Locator  Uptime    State      Pri/Wgt

  1.1.1.1  00:02:28  up           1/100

That’s good. Both routers are in sync and know that the VM is located in the data center. Let’s now migrate this bad boy to the public cloud.

After the Migration

[Diagram: the VM (10.122.139.92) after migrating to the public cloud, now sitting behind CSR-3]

Now the VM has migrated (cold migration) to the public cloud, as shown in the diagram above. When the VM boots up for the first time, it ARPs for its default gateway (CSR-3 in this case). When CSR-3 detects that the VM is now on its LAN, it registers it with the map-server (CSR-1) and declares itself the new locator for that VM. The DNS mapping for the VM has not changed, and the outside world still thinks the VM resides in the enterprise data center. So when CSR-1 receives traffic destined for that VM, it does a lookup in its database, encapsulates the traffic, and sends it to the new locator (CSR-3), which in turn decapsulates the traffic and sends it off to the VM.
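
The detection on CSR-3 relies on the same dynamic-EID mechanism. Below is a minimal sketch of the CSR-3 side; the locator-set name, LAN interface name, and shared key are placeholders, and the dynamic-EID name matches the one used on CSR-1.

router lisp
 locator-set CLOUD
  3.3.3.3 priority 1 weight 100
  exit
 eid-table default instance-id 0
  ! Same dynamic-EID prefix as on CSR-1, but tied to CSR-3's own locator
  dynamic-eid Cloud-dyn
   database-mapping 10.122.139.0/24 locator-set CLOUD
   exit
  exit
 ipv4 itr map-resolver 1.1.1.1
 ipv4 itr
 ipv4 etr map-server 1.1.1.1 key <shared-key>
 ipv4 etr
 exit
!
interface GigabitEthernet2
 ! Placeholder LAN interface name; when the VM ARPs for its gateway here,
 ! CSR-3 detects it and registers 10.122.139.92/32 with the map-server
 lisp mobility Cloud-dyn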

Now back to the router CLI. If I issue the lig ("where is the VM now") command again, this time on CSR-1, I see that CSR-3 (3.3.3.3) is now the locator:

CSR_1#lig 10.122.139.92

Mapping information for EID 10.122.139.92 from 10.0.20.3 with RTT 48 msecs

10.122.139.92/32, uptime: 00:02:27, expires: 23:59:59, via map-reply, complete

  Locator  Uptime    State      Pri/Wgt

  3.3.3.3  00:02:27  up           1/100

If I ping the VM from CSR-1, sourcing my packets from a local interface (Loopback10, 10.10.10.1), the ping is successful. Losing the first packet is normal while the LISP mapping is being resolved:

CSR_1#ping 10.122.139.92 source loopback 10

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.122.139.92, timeout is 2 seconds:

Packet sent with a source address of 10.10.10.1

.!!!!

Success rate is 80 percent (4/5), round-trip min/avg/max = 47/47/48 ms

 

Conclusion

LISP allows migrating workloads to the cloud without relying on L2 extensions. Use this design if you are migrating your workload temporarily. If you are migrating the workload permanently, you might want to consider adding a LISP router at the WAN edge or at the branch office to redirect user traffic to the public cloud provider before it enters the enterprise data center.

Ping me if you want the full configurations.

Part 2 of this post is coming soon, make sure to subscribe to the blog or follow me on Twitter to get notified when new content is added.

 

Anas

Twitter: @anastarsha

 

 

