When troubleshooting traffic flows through the Palo Alto NGFW, it can be difficult to see what’s happening. Using the logs from the GUI can help.

An alternative is to use the command line: “show session all filter destination [IP]”. This shows a filtered view of the stateful sessions going through the firewall, with information such as state, source/destination, and translated IP/port. An example is shown below. (Use the context-sensitive help if you need more filter options.)

admin@PA-FW> show session all filter destination 74.200.26.232

--------------------------------------------------------------------------------
ID Application State Type Flag Src[Sport]/Zone/Proto (translated IP[Port])
Vsys Dst[Dport]/Zone (translated IP[Port])
--------------------------------------------------------------------------------
37167 web-browsing ACTIVE FLOW *NS 192.168.20.115[55927]/trust/6 (1.1.1.1[40416])
vsys1 74.200.26.232[443]/untrust (74.200.26.232[443])

Once you have the session ID (in the case above the session ID is 37167), use the command “show session id 37167” to drill down further into the information within that session.

admin@PA-FW> show session id 37167
Session 37167

c2s flow:
 source: 192.168.20.115 [trust]
 dst: 74.200.26.232
 proto: 6
 sport: 55927 dport: 443
 state: ACTIVE type: FLOW
 src user: test\robert
 dst user: unknown
 qos node: ethernet1/1, qos member Qid 0
 match src interface: any
 match src address: ('any ',)

s2c flow:
 source: 74.200.26.232 [untrust]
 dst: 1.1.1.1
 proto: 6
 sport: 443 dport: 40416
 state: ACTIVE type: FLOW
 src user: unknown
 dst user: test\robert

start time : Fri Jan 26 10:03:53 2018
 timeout : 60 sec
 time to live : 48 sec
 total byte count(c2s) : 1924
 total byte count(s2c) : 5115
 layer7 packet count(c2s) : 14
 layer7 packet count(s2c) : 10
 vsys : vsys1
 application : web-browsing
 rule : Rule 2
 session to be logged at end : True
 session in session ager : True
 session synced from HA peer : False
 address/port translation : source + destination
 nat-rule : NAT-Outside(vsys1)
 layer7 processing : completed
 URL filtering enabled : True
 URL category : Rule 2
 session via syn-cookies : False
 session terminated on host : False
 session traverses tunnel : False
 captive portal session : False
 ingress interface : ethernet1/2
 egress interface : ethernet1/1
 session QoS rule : N/A (class 4)
 tracker stage l7proc : proxy timer expired

As you can see above, the amount of information can be very helpful for troubleshooting. If you suspect a particular session is causing a problem, you can clear it with the command “clear session id 37167”.
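You can also stack filters to narrow the view down before clearing. A rough sketch (the filter values are just examples; use the context-sensitive help to see which filter keywords your PAN-OS version supports):

admin@PA-FW> show session all filter destination 74.200.26.232 destination-port 443 application web-browsing
admin@PA-FW> clear session id 37167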

RH


This feature is similar to Cisco IPSLA, in that it tracks the reachability of a destination and can remove static routes based on the ping response.

The topology is simple: a Palo Alto connected to the internet, using path monitoring on the default route. The internal interface peers OSPF with the core router and redistributes the static route, but only while the pings succeed.

Capture

First create the static route
Network -> Virtual Routers -> (router) -> Static Routes -> Add+

Virtual Router - Static Routes

In this scenario the path monitor will ping the opposite side of the link and Google DNS; both must fail for the condition to be met. The interval and count are left at their defaults (5 pings, 3 seconds apart). Once the pings fail, the route is removed from the routing table. When the firewall can ping the destination again after a failure, it waits 2 minutes before re-installing the route; this is the default preemptive hold time and can be changed.

Next, create a redistribution profile that redistributes your routes. What I found was that redistributing ‘0.0.0.0/0’ means all routes; if you have other routes you don’t want to redistribute, just match them with a lower priority and choose ‘No Redist’.

Network -> Virtual Routers -> (router) -> Redistribution Profile -> Add+

Redistribution Profile

Configure OSPF as you normally would on any other device; no difference here, the usual attributes must match. Area 0 is the same as Area 0.0.0.0.

Next, apply the redistribution profile to OSPF and check ‘Allow Redistribute Default Route’. You have the option to set the external type, metric and tag.

Network -> Virtual Routers -> (router) -> OSPF -> Export Rules -> Add+

Export Rules

The Palo-Alto should now have formed a neighbor relationship with the core router and be redistributing the default route. This can be seen under Network -> Virtual Routers -> More Runtime Stats, where you can also view the routing table, the forwarding table, OSPF neighbors, etc.

Run Time Stats

Currently the core router receives the route from the Palo-Alto.

Core

Next, fail the routing on the internet router to see the impact on the path monitoring. The outcome: the route is withdrawn (seen with “debug ip routing”).

Route withdrawn

Path Monitor (down)

On the core device you may have a floating static or default route with a higher metric from a different IGP, waiting to take over in the event of a failure to the Palo-Alto.
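As a rough sketch on a Cisco core device (the next-hop is a placeholder), a floating static default route with a high administrative distance would look like this; with an AD of 250 it only enters the routing table once the OSPF-learned default from the Palo-Alto disappears.

ip route 0.0.0.0 0.0.0.0 [backup next-hop] 250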

When routing is restored you can view the preempt hold timer counting down. After the 2 minutes the route is reinstated.

Preempt hold

That’s it, works great.

RH

Dynamic Virtual Tunnel Interface.

It’s a similar concept to DMVPN, but with a few differences:

  • dVTI adds a smaller packet header, only 4 bytes, compared to DMVPN’s additional 28 bytes
  • dVTI does not use NHRP
  • dVTI has backwards compatibility with IPsec direct encapsulation
  • dVTI requires IP unnumbered interfaces
  • dVTI requires the use of a dynamic routing protocol instead of keepalives
  • dVTI tunnels must be initiated by the remote branch towards the head-end

I personally like the fact that the interface is unnumbered as it reduces the amount of IP address space that you need to manage. Each virtual template can be configured with different characteristics on the head-end device so that common branch offices share the same settings. This type of solution is ideal for a hub and spoke design.

I’m not saying dVTI is better or worse than DMVPN it’s just different.

The spoke forms an EIGRP neighbor with the HUB, you can then advertise a default route to the spoke. If you want to apply policies like QoS you can do this directly on the template interface.  The spoke will advertise the LAN network to the Hub and you can then summarise at the Hub into the data centre for multiple spokes.
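For example, a shaping or marking policy can be applied directly to the template (the policy name here is just a placeholder for whatever policy you build); every virtual-access interface cloned from the template then inherits it:

interface Virtual-Template1 type tunnel
  service-policy output BRANCH_SHAPER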

Here is the configuration with a front VRF; this was tested in the lab on live equipment.

Hub Configuration

vrf definition internet
 rd 1:1
 address-family ipv4
 
 crypto keyring KEYRING vrf internet 
  pre-shared-key address [IP ADDRESS of SPOKE or match all] key cisco
 
 crypto isakmp policy 10
  encr aes
  hash sha256
  authentication pre-share
  group 14

crypto isakmp profile ISAKMPPROFILE
  keyring KEYRING
  match identity address [IP ADDRESS of SPOKE or match all] internet
  virtual-template 1

crypto ipsec transform-set TSET_SECURE esp-aes esp-sha256-hmac

crypto ipsec profile IPSECPROFILE_SECURE
  set transform-set TSET_SECURE

interface Loopback0
  ip address 172.16.255.1 255.255.255.255
 
 interface gig0/0
  vrf forwarding internet
  description outside_interface
  ip address [WAN Interface IP and Subnet Mask] 
  no ip redirects
  no ip unreachables
  no ip proxy-arp

interface gig0/1
  description inside_interface
  ip address 192.168.255.50 255.255.255.0
  

interface Virtual-Template1 type tunnel
  ip unnumbered Loopback0
  ip mtu 1408
  ip summary-address eigrp 1 0.0.0.0 0.0.0.0
  tunnel mode ipsec ipv4
  tunnel vrf internet
  tunnel protection ipsec profile IPSECPROFILE_SECURE

router eigrp 1
  network 172.16.255.1 0.0.0.0

ip route vrf internet 0.0.0.0 0.0.0.0 [next-hop to internet]

Spoke Configuration

vrf definition internet
 rd 1:1
  address-family ipv4

crypto keyring 1 vrf internet 
  pre-shared-key address [IP address of HUB] key cisco

crypto isakmp policy 10
  encr aes
  hash sha256
  authentication pre-share
  group 14

crypto ipsec transform-set TSET_SECURE esp-aes esp-sha256-hmac

crypto ipsec profile IPSECPROFILE_SECURE
  set transform-set TSET_SECURE

interface Loopback0
 ip address 172.16.255.2 255.255.255.255
 
 interface Tunnel0
  ip unnumbered Loopback0
  ip mtu 1408
  tunnel source gig0/1
  tunnel mode ipsec ipv4
  tunnel destination [IP address of HUB]
  tunnel vrf internet
  tunnel protection ipsec profile IPSECPROFILE_SECURE

interface gig0/1
  description outside_interface
  vrf forwarding internet
  ip address [WAN Interface IP and Subnet Mask or DHCP]

interface vlan 1 
  description inside_interface
  ip address 172.16.1.1 255.255.255.252
  no shut

router eigrp 1
  network 172.16.255.2 0.0.0.0
  network 172.16.1.1 0.0.0.0
  eigrp stub connected summary

To verify you can use commands like below.

“show ip int brief”
“show ip interface virtual-access1”
“show ip eigrp neighbors”
“show crypto isakmp sa”
“show crypto engine connections active”

Update: You might notice I have used classic EIGRP instead of EIGRP named mode. The reason was that under the af-interface Virtual-Template I wasn’t able to set the summary route; the command was accepted but the summary wasn’t being sent to the neighbor. Perhaps this is a bug in the version I was running.

 

One of the most interesting features I’ve noticed about LISP is the ability to encapsulate IPv6 packets and transport them between IPv4-only RLOCs. This can be used as a transition strategy.

White paper: white_paper_c11-629044.pdf

The best parts are you don’t have to configure tunnel endpoints, it works in a multi-homed environment and supports communications between LISP and non-LISP sites (with a PxTR). The document describes this as ‘IPv6 islands over an IPv4 core’.

I read about LISP online and watched the Cisco Live presentation from Berlin 2017 (BRKRST-3045). I wanted to see this for myself.

The lab is direct from http://lisp.cisco.com/lisp_tech.html If you want to lab this, go grab the PDF. The lab is pretty good and gives you the chance to see LISP in action, all the configs and step by step guides are provided.

Topology.

Capture

Look at the packet capture from Wireshark. This is a ping between the R111 host and the R117 host, captured at the exit of core R113 (before decapsulation). It has the outer IP header from R112 (ETR) to R116 (ITR), then UDP dst. port 4341 (LISP), then the LISP data header, then the IPv6 header and the ICMP message. So it’s easy to see how this works.

packet Capture

For traceroute, the only hops that are visible are IPv6.

traceroute

Before I could connect to anything in Site2, the ETR (R112) had to request mapping information from the MS/MR; once it was returned, R112 knew how to forward traffic towards 2001:DB8:B:1::3. From R112’s perspective it now has a mapping for the prefix 2001:DB8:B::/48 with the RLOC 10.0.9.2.

map-cache
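The cache in the screenshot can also be checked from the CLI; the command below is the standard IOS one, although the exact output format varies by platform and release:

R112#show ipv6 lisp map-cache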

If you want to know how LISP can best be described, read this article by Ethan Banks: http://ethancbanks.com/2013/07/30/lisp-not-exactly-a-routing-protocol.

RH.

DMVPN | Phase 3 | IPsec | VRF | Per-Tunnel QoS

Covering the configuration, confirmation and troubleshooting of DMVPN Phase 3 with IPsec and Per-Tunnel QoS. The main reason behind this is to help out my friend who is implementing a production design similar to this.

Below is the high-level topology used for this lab. It’s a fairly simple diagram but it gives you the idea; the devil is in the detail (the config).

DMVPN

For the dynamic routing protocol I have chosen EIGRP; you could use BGP if you want a hyper-scale design. Phase 3 DMVPN is chosen simply to enable spoke-to-spoke communication while maintaining a default route to the spokes. Phase 3 allows this by using NHRP redirect messages and shortcut routing.

The purpose of this blog is not to explain DMVPN, there are plenty of resources for that.

Here is the config for the HUB.

vrf definition internet
!
address-family ipv4
exit-address-family
ip route vrf internet 0.0.0.0 0.0.0.0 [internet next-hop IP]
!
interface Tunnel0
ip address 10.100.0.4 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp authentication cisco
ip nhrp map multicast dynamic
ip nhrp map group Spoke-5 service-policy output DMVPN_5000kbps
ip nhrp map group Spoke-1 service-policy output DMVPN_1500kbps
ip nhrp map group Spoke-2 service-policy output DMVPN_10000kbps
ip nhrp network-id 123
ip nhrp redirect
ip tcp adjust-mss 1360
qos pre-classify
tunnel source Ethernet0/0
tunnel mode gre multipoint
tunnel key 123
tunnel vrf internet
tunnel protection ipsec profile DMVPN
!
router eigrp DMVPN
!
address-family ipv4 unicast autonomous-system 123
!
af-interface Tunnel0
summary-address 0.0.0.0 0.0.0.0
no split-horizon
exit-af-interface
!
topology base
exit-af-topology
network 10.0.0.0
network 10.100.0.0 0.0.0.255
exit-address-family
crypto keyring dmvpn vrf internet
pre-shared-key address 0.0.0.0 0.0.0.0 key topsecret!
!
crypto isakmp policy 100
encr aes 256
authentication pre-share
group 5
!
crypto ipsec transform-set DMVPN esp-aes esp-sha-hmac
!
crypto ipsec profile DMVPN
set transform-set DMVPN

 

QoS – On the hub tunnel interface you can map QoS policies to group names; on the spoke you then set the command to have the tunnel subscribe to a particular group.

The QoS policies need to be set up in a specific way, with parent and child policies; this lets you be specific about which spoke gets which shaper settings. Create a policy with your required settings, see the example below.

policy-map DMVPN
class icmp
police 8000
class telnet
set precedence 4
class www
police 80000
set precedence 6

 

Then create your shaping policy and attach the child policy like below.

policy-map DMVPN_5000kbps
class class-default
shape average 5000000
service-policy DMVPN

 

Front Door VRF – The reason for this is to allow a default route for internet access and, at the same time, a default route for LAN traffic. The internet-facing interface sits in its own VRF and its routes are not part of the global routing table. We simply tell the tunnel to use the VRF for its NBMA routing with the command “tunnel vrf [name of VRF]”. Once this is enabled we can have a default route for LAN traffic via the tunnel and a separate default route for internet traffic.
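Pulling the FVRF tie-in out into a minimal sketch (interface names, addresses and next-hops are placeholders; only the commands that bind things to the VRF are shown):

vrf definition internet
 address-family ipv4
!
! the internet-facing (NBMA) interface lives in the VRF
interface Ethernet0/0
 vrf forwarding internet
 ip address [WAN IP] [mask]
!
! the tunnel does its NBMA routing in the VRF, the tunnel IP itself stays in the global table
interface Tunnel0
 tunnel vrf internet
!
! default route for internet/NBMA reachability, present only inside the VRF
ip route vrf internet 0.0.0.0 0.0.0.0 [internet next-hop IP]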

A great write-up by Denise Fishburne can be found here: http://www.networkingwithfish.com/tunnels-and-the-use-of-front-door-vrfs

IPsec – When using the front door VRF, we have to adjust the crypto keyring slightly to apply the VRF to the keyring and its pre-shared key address.

Confirmation – From the hub, one command tells us all we need to know from a DMVPN perspective. It shows the tunnel interface and the number of peers, the DMVPN peers with the QoS profiles they are subscribed to, and the IPsec details.

HUB#show dmvpn detail

 

Interface Tunnel0 is up/up, Addr. is 10.100.0.4, VRF “”
Tunnel Src./Dest. addr: 172.16.3.10/MGRE, Tunnel VRF “internet”
Protocol/Transport: “multi-GRE/IP”, Protect “DMVPN”
Interface State Control: Disabled
nhrp event-publisher : Disabled
Type:Hub, Total NBMA Peers (v4/v6): 3
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb Target Network
----- --------------- --------------- ----- -------- ----- -----------------
1 172.16.1.10 10.100.0.1 UP 02:54:32 D 10.100.0.1/32
NHRP group: Spoke-1
Output QoS service-policy applied: DMVPN_1500kbps
1 172.16.2.10 10.100.0.2 UP 02:54:32 D 10.100.0.2/32
NHRP group: Spoke-2
Output QoS service-policy applied: DMVPN_100kbps
1 172.16.5.10 10.100.0.5 UP 02:54:32 D 10.100.0.5/32
NHRP group: Spoke-5
Output QoS service-policy applied: DMVPN_5000kbps
Crypto Session Details:
--------------------------------------------------------------------------------
Interface: Tunnel0
Session: [0xF384C6B8]
IKEv1 SA: local 172.16.3.10/500 remote 172.16.1.10/500 Active
Capabilities:(none) connid:1005 lifetime:21:05:26
Crypto Session Status: UP-ACTIVE
fvrf: internet, Phase1_id: 172.16.1.10
IPSEC FLOW: permit 47 host 172.16.3.10 host 172.16.1.10
Active SAs: 2, origin: crypto map
Inbound: #pkts dec’ed 2635 drop 0 life (KB/Sec) 4150500/3474
Outbound: #pkts enc’ed 2655 drop 0 life (KB/Sec) 4150500/3474
Outbound SPI : 0x B4E148D, transform : esp-aes esp-sha-hmac
Socket State: Open

 

Here is the config for one of the spokes. The crypto config is identical to the HUB so omitted for brevity.

vrf definition internet
!
address-family ipv4
exit-address-family
ip route vrf internet 0.0.0.0 0.0.0.0 [internet next-hop ip]
interface Tunnel0
ip address 10.100.0.1 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp authentication cisco
ip nhrp group Spoke-1
ip nhrp map 10.100.0.4 [NBMA – HUB]
ip nhrp map multicast [NBMA – HUB]
ip nhrp network-id 123
ip nhrp nhs 10.100.0.4
ip nhrp shortcut
ip tcp adjust-mss 1360
qos pre-classify
tunnel source Ethernet0/0
tunnel mode gre multipoint
tunnel key 123
tunnel vrf internet
tunnel protection ipsec profile DMVPN
router eigrp DMVPN
!
address-family ipv4 unicast autonomous-system 123
!
topology base
exit-af-topology
network 10.0.0.0
network 10.100.0.0 0.0.0.255
eigrp stub connected summary
exit-address-family

 

Confirmation on the spoke is similar to the hub; the “show dmvpn detail” command lists everything you need to know from a DMVPN and IPsec perspective.

Just to confirm everything is OK, let’s run some commands from the spoke.

Spoke1# ping 10.0.2.1 sou lo 2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.2.1, timeout is 2 seconds:
Packet sent with a source address of 10.0.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 9/15/22 ms
!
Spoke1#traceroute 10.0.2.1 sou lo 2
Type escape sequence to abort.
Tracing the route to 10.0.2.1
VRF info: (vrf in name/id, vrf out name/id)
1 10.100.0.2 13 msec 9 msec 8 msec
!
Spoke1#sh ip nhrp shortcut
10.0.2.0/24 via 10.100.0.2
Tunnel0 created 00:00:26, expire 00:01:13
Type: dynamic, Flags: router used rib
NBMA address: 172.16.2.10
!
Spoke1#sh ip route nhrp
H 10.0.2.0/24 [250/1] via 10.100.0.2, 00:00:31, Tunnel0
!
Spoke1#sh ip route eigrp
Gateway of last resort is 10.100.0.4 to network 0.0.0.0
D* 0.0.0.0/0 [90/76800640] via 10.100.0.4, 03:07:27, Tunnel0
!
Spoke1#sh ip eigrp ne
EIGRP-IPv4 VR(DMVPN) Address-Family Neighbors for AS(123)
H Address Interface Hold Uptime SRTT RTO Q Seq
(sec) (ms) Cnt Num
0 10.100.0.4 Tu0 13 03:08:04 23 1398 0 30
!
Spoke1#show ip cef 10.0.2.1

10.0.2.0/24
nexthop 10.100.0.2 Tunnel0

 

We can see spoke-to-spoke ping is OK, the traceroute goes direct to the other spoke, NHRP has a shortcut, and the routing table has an NHRP route plus a default route (in the global routing table) with one EIGRP neighbor. All looks good.

Troubleshoot – The best command to use is “debug dmvpn detail all”; this will debug both crypto and NHRP. It can be a lot to look at, so you might want to run “debug crypto isakmp” or “debug crypto ipsec” individually, depending on which part is failing.
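Before turning on any debugs, a few standard (and less intrusive) show commands are also worth a look:

“show dmvpn detail”
“show ip nhrp”
“show crypto session detail”
“show crypto ipsec sa”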

RH

IPv4 MTU issues can be hard to spot initially. There is a solution and it’s called Path MTU Discovery (RFC 1191). The RFC describes it as “a technique for using the Don’t Fragment (DF) bit in the IP header to dynamically discover the PMTU of a path”.

Further to that, the RFC states: “The basic idea is that a source host initially assumes that the PMTU of a path is the (known) MTU of its first hop, and sends all datagrams on that path with the DF bit set. If any of the datagrams are too large to be forwarded without fragmentation by some router along the path, that router will discard them and return ICMP Destination Unreachable messages with a code meaning ‘fragmentation needed and DF set’” (Type 3, Code 4).

The unfortunate issue is that the message that’s sent back doesn’t actually say what the MTU is.

A colleague of mine, who is a Windows 7 expert, has reliably informed me that Windows 7 has PMTUD enabled by default.

The important point to focus on is the ICMP unreachable (Type 3, Code 4). Put simply, if you don’t receive an ICMP message back with the code for fragmentation needed, your PC will assume that the MTU is fine and continue to send the packets, even though somewhere in the path they are potentially being dropped.

There can be a number of reasons for this, including firewalls blocking the message, ICMP unreachables disabled on an interface, or a transparent host between the two endpoints (often seen in service provider networks) that has a lower MTU value.

I recently ran into an issue where IP connectivity between two sites looked to be fine: ping, traceroute and SSH were all working, but certain applications and protocols were not, most notably HTTPS.

Below I will explain how to spot this issue.

Take a look at the diagram below; I have deliberately used a transparent device as it’s most likely what you might see in an L3VPN (MPLS) network. The last-mile provider provides a layer 2 path (perhaps L2TPv3) from CE to PE and the underlying hops are hidden from us. From the service provider’s perspective the routers are directly connected.

This is perhaps where an MTU issue has crept in. For this scenario I have reduced the MTU quite significantly for effect.

Capture3

Let’s say, for example, you have a perfectly functioning network where the MTU is fine along the path. Initially you can send a ping with 1460 bytes and you will get a reply. Now increase this to something we know is too big (1550 bytes). In a perfectly functioning network you receive an ICMP Type 3 and see the “Packet needs to be fragmented but DF set” message.

Capture2
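In text form, that healthy-path test looks something like this from a Windows host (the destination is a placeholder); -f sets the DF bit and -l sets the payload size:

C:\>ping [destination IP] -f -l 1550

Pinging [destination IP] with 1550 bytes of data:
Packet needs to be fragmented but DF set.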

Now let’s try that through our network where the MTU is set lower but the sending device doesn’t know about it.

Capture4

At first you think it’s OK because you can ping along the path and get a reply, and SSH works too. Now let’s try pings of different sizes. Remember, your PC doesn’t receive the ICMP message this time, so what you get is a “Request timed out” message.

Capture5

The reason for that is the packet is being dropped and the ICMP message isn’t being returned. If I ping with a size lower than the 1000-byte MTU I get a reply.

Capture6

Now the question: why would HTTPS not work? Well, in some cases web applications or your client might set the Don’t Fragment bit in the IP header of the SYN request. This means the packet must not be fragmented, so when we send it across our network with the bad MTU in the path, the packet is dropped and the sending device never receives the ICMP message. It never knows that it has to reduce the MTU value. The packet capture below shows where the DF bit is set.

Capture7

I had a look through RFC 2246 for TLS 1.0 and it doesn’t specify that the DF bit should be set. It’s most likely a vendor or O/S specific setting, so your observed results may differ from vendor to vendor.

RH

Ethernet traffic is transported in frames; the maximum payload a frame can carry is known as the Maximum Transmission Unit (MTU). By default an Ethernet interface has an MTU of 1500 bytes. A frame carrying more than 1500 bytes of payload is known as a jumbo frame.

When a network endpoint receives a packet that is larger than its MTU, it can either fragment the packet into smaller parts or drop the packet.

Fragmentation is the process of breaking a large packet into smaller chunks before sending; the receiving host then re-assembles them. This is not desirable when trying to optimize your network, mainly because if one fragment is lost the entire datagram is lost. IP has no error correction, UDP has no re-transmission feature, and TCP re-transmits the whole datagram if one fragment is lost.

An excellent document which explains fragmentation in detail can be found at the link below. 

http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html

Increasing the MTU can improve network efficiency by carrying larger payloads in each packet. Changing the MTU on a LAN should be planned out correctly; performance issues can be expected if the MTU of one device is different from the MTU of another. What you might think should improve performance is likely to have the opposite effect.

The recommended approach is to increase the MTU on all of your network devices in the LAN; the network then benefits from less fragmentation and less overhead. Your network topology and traffic flows should be fully understood before making these changes.

To demonstrate, we can use the ping utility on a Windows 7 PC. This particular PC is connected to a switch with an MTU of 1500 bytes.

A ping packet has a default payload of 32 bytes. We will try a ping with the default size and then a ping with a jumbo-sized payload and see what happens.

C:\PC1>ping 10.0.0.1

Pinging 10.0.0.1 with 32 bytes of data:
Reply from 10.0.0.1: bytes=32 time=12ms TTL=254
Reply from 10.0.0.1: bytes=32 time=12ms TTL=254
Reply from 10.0.0.1: bytes=32 time=12ms TTL=254
Reply from 10.0.0.1: bytes=32 time=12ms TTL=254

We can see the latency is 12 milliseconds.

Now send a packet that needs a jumbo frame and we will see the impact of fragmentation. The -l switch allows us to increase the packet size.

C:\Users\rhill>ping 10.0.0.1 -l 9000

Pinging 10.0.0.1 with 9000 bytes of data:
Reply from 10.0.0.1: bytes=9000 time=19ms TTL=254
Reply from 10.0.0.1: bytes=9000 time=24ms TTL=254
Reply from 10.0.0.1: bytes=9000 time=20ms TTL=254
Reply from 10.0.0.1: bytes=9000 time=20ms TTL=254

We can see that the latency has increased; this is due to the fragmentation that has taken place. You may not notice increased latency on a local LAN, but the packets are still being fragmented.

A Wireshark capture shows the extra work that needs to be done on the sending device to fragment the packet; the receiving device then has to re-assemble it at the opposite end.

The output below shows the packet being split into 6 different parts in order to send the jumbo frame.

fragmentation

As you can see, simply increasing the MTU on a sending device may not actually improve the efficiency of your network.

It’s worth noting that when using TCP, the MSS is exchanged in the 3-way handshake and the lower of the two advertised values is used.
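For example, on a standard 1500-byte MTU path each side typically advertises an MSS of 1500 - 20 (IP header) - 20 (TCP header) = 1460 bytes; an endpoint behind a smaller MTU advertises a smaller MSS, and that lower value is what the connection uses.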

RH.

I ran into this at the weekend when I was working on some data centre switches; within the fabric the switches use a jumbo MTU. So when I tried to peer with a device that was not part of the fabric, I got stuck in EXSTART.

Run “debug ip ospf adj” and you will get a message similar to this.

OSPF-1 ADJ Gi0/0: Nbr 10.1.1.1 has smaller interface MTU

Answer: match the MTU on both sides.
I had a read through TCP/IP Volume 1 and it doesn’t mention the MTU size anywhere, but RFC 2328 does.

If the Interface MTU field in the Database Description packet
indicates an IP datagram size that is larger than the router can
accept on the receiving interface without fragmentation, the
Database Description packet is rejected.

This is where wireshark comes in handy.

The MTU isn’t sent in the Hello packet; it’s sent in the Type 2 DBD packet, after the neighbors acknowledge each other (2-WAY). See below.

mtu-ospf
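If matching the MTU genuinely isn’t an option, IOS can also be told to ignore the mismatch on the interface (the underlying MTU difference remains, so use with care). A minimal sketch, with the interface name as a placeholder:

interface GigabitEthernet0/0
  ip ospf mtu-ignore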

RH

Some time ago I set up SIP trunking and had to configure voice translation rules on the CUBE. Here are a few I set up that might help you (numbers changed). Once the numbers were translated they would match dial patterns and be routed respectively.

Using regular expressions.

Number 1 translates an incoming call to a 4 digit extension.

voice translation-rule 1
rule 1 /^02081234567/ /1111/

R1#test voice translation-rule 1 02081234567
Matched with rule 1
Original number: 02081234567 Translated number: 1111

Number 2 takes any number and adds a 9 to the beginning.

voice translation-rule 2
rule 1 /^.*/ /9\0/

R1#test voice translation-rule 2 02081234567
Matched with rule 1
Original number: 02081234567 Translated number: 902081234567

Number  3 translates a dialed number beginning with +44 or + to 90 or 900.

voice translation-rule 3
rule 1 /^\+44/ /90/
rule 2 /^\+/ /900/

R1#test voice translation-rule 3 +442081234567
Matched with rule 1
Original number: +442081234567 Translated number: 902081234567

R1#test voice translation-rule 3 +1123456789
Matched with rule 2
Original number: +1123456789 Translated number: 9001123456789

Number 4 takes a 4-digit number beginning with 5, drops the 5 and prefixes the remaining digits with a country code and an area code.

voice translation-rule 4
rule 1 /^5\(...$\)/ /00442081234\1/

R1#test voice translation-rule 4 5678
Matched with rule 1
Original number: 5678 Translated number: 00442081234678

Number 5 translates any number beginning with 901* to 0044 and any number beginning with 900* drops the 9.
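The rule definitions for number 5 didn’t make it into the post, so here is a reconstruction that matches the test output below; treat the exact regex as an assumption rather than the original config.

voice translation-rule 5
 ! reconstructed to match the test results below
 rule 1 /^90\([1-9]\)/ /0044\1/
 rule 2 /^900/ /00/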

R1#test voice translation-rule 5 9020812345678
Matched with rule 1
Original number: 9020812345678 Translated number: 004420812345678

R1#test voice translation-rule 5 90012345678910
Matched with rule 2
Original number: 90012345678910 Translated number: 0012345678910

RH

I was asked a question about MPLS labels within a VRF. The question was: “when I type the command ‘show mpls ldp bindings vrf xxx’ I don’t see any labels for my remote site prefixes.”

In this case these labels aren’t assigned by LDP; they are assigned by BGP.

The command he was looking for was “show ip bgp vpnv4 vrf xxx labels”. I think a light went on in his head when I explained this to him!
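For reference, the syntax is simply the following (the VRF name is a placeholder); the output lists each VRF prefix with its next hop and the in/out label pair:

R1#show ip bgp vpnv4 vrf CUSTOMER_A labels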

If you’re still confused, imagine the (unidirectional) LSP R1 -> R4 as a virtual connection, where R4 is the egress LSR for the packet. The packet travels R1 -> R2 -> R3 -> R4. The next hop is actually listed as R4 even though the packet has to pass through R2 and R3, which, by the way, have their own labels added on top of the bottom-of-stack label for R4.

Capture

R1 pushes on the label 1234 for R4 and another label on top for R2, which is 22. Next, R2 removes its own label (22) and adds the label for R3, which is 33. R3 removes its own label (33) and forwards the packet with the label 1234 to R4. The bottom-of-stack bit is set to 1; R4 removes the label and forwards the packet.

https://tools.ietf.org/html/rfc3107

RH