OTV – Configuration and Verification

In my previous post here, I discussed the concepts of OTV, so if you are not familiar with OTV concepts perhaps go read that post first. In this post I intend to dive a little deeper into OTV configuration and verification.

The assumption is that IP connectivity is in place, within the Data Centre and between Data Centres.

One of the first configuration steps is to define the OTV site VLAN and OTV site identifier.

As mentioned, the local OTV edge devices need to communicate as part of the AED election process. The requirement for this AED election is that the participating devices are connected via a local VLAN. Note: this site VLAN must NOT be stretched over the OTV link, but rather trunked between the OTV edge devices. Thus on each of the OTV edge devices a site VLAN needs to be configured, such as:

otv site-vlan 999


This can be the same VLAN in each site, but must not be extended over the Overlay interface. It enables the OTV edge devices to discover each other and determine their roles on a per-VLAN basis as the nominated OTV edge device. Later in this post you will see, via the ‘show otv vlan’ command, that even VLANs are active on one OTV edge device whilst odd VLANs are active on the other.

OTV uses the site identifier to identify the OTV edge devices which exist in a specific site (where a site is a single geographic data centre) and which can form an adjacency with another site. Thus in a site with dual AEDs, the site identifier needs to match on both devices, but must be unique per site. For example, in one site the identifier may be 0x001 on both OTV edge devices, whilst in the other site it may be 0x002:

otv site-identifier 0x001

At a high level the overlay interface configuration would look like the following, where port-channel1 is defined as the join interface, which as per the previous post is the L3 physical link or port-channel used to route upstream towards the DCI / Core. The Overlay interface is the logical OTV tunnel interface which performs the encapsulation and where the OTV configuration is done.

interface Overlay1
 description Overlay Network
 otv join-interface port-channel1
 otv extend-vlan 100, 205
 otv use-adjacency-server 172.16.50.1 unicast-only
 no shutdown

The Join interface is the L3 interface on the OTV edge device connecting to the DCI or Core (IP transport network). This interface is used as the source of OTV encapsulation and assigned to the logical ‘Overlay’ interface.

interface port-channel1
 description OTV/GRE uplink to Core / DCI
 mtu 9216
 ip address 172.16.50.1/30

The Internal interface is a L2 interface, typically configured as a trunk or access port, which takes part in STP and learns MAC addresses per normal. This is typically the interface that connects the device performing L3 gateway / SVI functionality to the OTV edge device.

interface port-channel10
 description To L3 Router VDC
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 100,205
 spanning-tree port type normal
 mtu 9216
 vpc 10

As mentioned, the local OTV edge devices need to communicate as part of the AED election process. The site VLAN used for this is local to these devices and NOT stretched over the OTV link, but rather trunked between the OTV edge devices. This enables the OTV edge devices to discover each other and determine their roles as the AED on a per-VLAN basis.

Once the configuration is in place, all of the OTV edge devices need to form an adjacency. This allows each OTV edge device to learn and distribute the list of neighbours to which it can replicate control packets. Every OTV edge device which joins the OTV domain registers with the Adjacency Server, whilst the other OTV edge devices are discovered dynamically through that server; thus all are aware of each other and are updated when OTV devices join or leave.
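For completeness, a minimal sketch of the Adjacency Server side of this, assuming (as in the examples above) that the edge device with join-interface address 172.16.50.1 acts as the server:

```
interface Overlay1
 description Overlay Network
 otv join-interface port-channel1
 otv extend-vlan 100, 205
 otv adjacency-server unicast-only
 no shutdown
```

The remaining edge devices then point at it with ‘otv use-adjacency-server 172.16.50.1 unicast-only’, as shown in the earlier Overlay1 example.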

To check the OTV Adjacency use the command “show otv adjacency”

Hostname                         System-ID      Dest Addr       Up Time   State
otv-site1-2                      4055.3905.64c1 172.16.50.2     2w1d      UP
otv-site2-2                      4055.3905.b6c1 172.16.50.50    2w1d      UP
otv-site2-1                      4055.3905.c641 172.16.50.46    2w1d      UP

The command “show otv overlay 1” also provides good information, including the Adjacency Server details.

#show otv overlay 1

OTV Overlay Information
Site Identifier 0000.0000.0100

Overlay interface Overlay1

VPN name : Overlay1
VPN state : UP
Extended vlans : 100 205 (Total:2)
Join interface(s) : Po1 (172.16.50.1)
Site vlan : 999 (up)
AED-Capable : Yes
Capability : Unicast-Only
Is Adjacency Server : Yes
Adjacency Server(s) : 172.16.50.1

To confirm which VLANs have been stretched over the Overlay interface, and to identify which OTV edge device is their active AED, the command “show otv vlan” can be used.

#show otv vlan

OTV Extended VLANs and Edge Device State Information (* - AED)

Legend:
(NA) - Non AED, (VD) - Vlan Disabled, (OD) - Overlay Down
(DH) - Delete Holddown, (HW) - HW: State Down
 (NFC) - Not Forward Capable 

VLAN   Auth. Edge Device                     Vlan State               Overlay
----   -----------------------------------   ----------------------   --------
 100*  otv-site-1                            active                   Overlay1
 205   otv-site-2                            inactive(NA)             Overlay1

As can be seen above, on this OTV edge device VLAN 100 is active (marked with *), meaning this device is the AED responsible for encapsulating that VLAN’s traffic and sending it to the other site. VLAN 205 is also stretched across the OTV, but this device’s neighbour is the active AED for that odd VLAN.

The command “show otv route” can be utilised to see where a specific MAC address is learnt from. In the following example the MAC address for the host in VLAN100 is learnt over the OTV link, whilst the MAC address for the host in VLAN205 is learnt from the downstream gateway device local to this site.

#show otv route

OTV Unicast MAC Routing Table For Overlay1

VLAN MAC-Address     Metric  Uptime    Owner      Next-hop(s)
---- --------------  ------  --------  ---------  -----------
 100 0050.568d.16d7  42      1w5d      overlay    otv-site-1
 205 0050.568d.5b2d  1       1w5d      site       port-channel10

On the gateway router where the SVIs exist it is always a good idea to check that the MAC address is being learnt locally and that the HSRP/VRRP status is what you expect. In this example no FHRP (HSRP/VRRP) filtering is done, thus traffic to/from the end hosts is always routed via the same gateway in the same site. There is an issue with this approach, as it can cause traffic to trombone across the OTV link, adding latency and providing a less than optimal path, but that is a post for another time.

#show hsrp brief
*:IPv6 group #:group belongs to a bundle
P indicates configured to preempt.
|
 Interface   Grp  Prio P State    Active addr      Standby addr     Group addr
  Vlan100     1    120  P Active   local            172.24.24.2      172.24.24.1 (conf)

It is also worth checking that whichever device is the primary gateway can see the appropriate MAC addresses being learnt:

#show ip arp vrf VPN-GW-1 | incl Vlan100
172.24.24.10    00:07:22  0050.568d.16d7  Vlan100
172.24.24.11    00:00:26  0050.568d.43db  Vlan100

Note: whilst the MTU in this example has been set to 9216, as this is supported by the IP transport which OTV runs over, the gateway SVIs should be set lower to allow for the OTV overhead added by the OTV edge devices.

Just for completeness the configuration of the SVI gateway for VLAN100 is as follows:

interface Vlan100
 description : Gateway_Test_OTV-SPAN
 no shutdown
 mtu 9000
 vrf member VPN-GW-1
 ip address 172.24.24.0/24
 ip unreachables
 hsrp version 2
 hsrp 1
   preempt
   priority 120
   ip 172.24.24.1

Overlay Transport Virtualization (OTV)

OTV has been around for a while, and in general I think that stretching broadcast domains (layer 2 VLANs) is not a great idea. However, in most enterprises I have worked in it is typically done for either long-distance vMotion or server clustering, the latter specifically for databases.

There are many great posts explaining why stretching layer 2 VLANs (bridging) between sites is not a good idea, so I won’t go into them here, but will just leave it with this.

bridgingdoesntscale

Anyway, if you do need to span a bridging domain between data centres over a data centre interconnect (DCI), then one of the more robust options is Cisco’s Overlay Transport Virtualization (OTV). It should be noted that whilst I’m discussing Cisco’s proprietary OTV implementation, there is an RFC draft for OTV which can be found here.

NOTE: the Cisco implementation and the standards draft are likely to be non-interoperable as they use different encapsulation methods.

Thus this post is specifically focused on the Cisco implementation of OTV, which is actually Ethernet over MPLS over GRE over IP (EoMPLSoGREoIP). For OTV to work, assuming you have devices that support it (Nexus 7K & Cisco ASR), all that is needed from a transport perspective is IP connectivity.

As OTV is an EoMPLSoGREoIP encapsulation, it adds 42 bytes of header to the original packet, which comprises:

  • 4 bytes – MPLS
  • 4 bytes – GRE
  • 20 bytes – IP Header
  • 14 bytes – Outer Ethernet frame
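The overhead breakdown above is worth sanity-checking when sizing MTUs; a quick sketch (the 9216 transport MTU is an assumed jumbo-frame figure taken from the configuration examples in this series, not mandated by OTV):

```python
# OTV header overhead, per the breakdown above.
OTV_OVERHEAD = 4 + 4 + 20 + 14  # MPLS + GRE + IP header + outer Ethernet

def max_inner_mtu(transport_mtu: int) -> int:
    """Largest frame the extended VLANs can carry without the
    (DF-marked, non-fragmentable) OTV packet exceeding the transport MTU."""
    return transport_mtu - OTV_OVERHEAD

print(OTV_OVERHEAD)         # 42
print(max_inner_mtu(9216))  # 9174
print(max_inner_mtu(1500))  # 1458
```

This is why the gateway SVIs must be set lower than the transport MTU: a 1500-byte transport, for instance, leaves only 1458 bytes for the inner frame.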

One of the current limitations of OTV is that it cannot run in the same device or VDC as the layer 3 gateway or SVI. Thus when deploying OTV it must be in one of two topologies: either ‘on a stick’ or ‘inline’. The on-a-stick method is typically preferred as it means that non-OTV traffic does not need to traverse the OTV VDC, and thus is more flexible; however the configuration is largely the same in either scenario.

otv-topology-2

  • Red Line – represents the OTV traffic path.
  • Purple Line – represents other (non OTV) traffic path.

Additionally, OTV can be deployed in multicast mode or unicast mode, where the former requires that the DCI also supports and runs multicast. The benefit of multicast mode over unicast comes when more than two DCs are connected, as traffic can be sent more optimally between source and destination rather than being replicated to all OTV participants regardless of whether they need to see it. However, in most scenarios OTV is used to connect two sites, and thus unicast is generally fine, and arguably simpler to configure, as it does not require multicast in the DCI.

NOTE: The rest of this post will assume a 2 site DC connected via unicast OTV.

Whilst the diagram above only has a single device for the Layer 3 and OTV functions, each of these would typically be a pair of devices for redundancy purposes. Thus when OTV forms, it needs to decide which of the OTV pair is the Authoritative Edge Device (AED). This is done via the AED election process, which in turn is negotiated over a ‘site VLAN’.

As mentioned, the site VLAN is an internal VLAN connecting the two local OTV edge devices (not spanned over OTV) and is used for the AED election. This allows each edge device to be active for some VLANs and redundant for others. What typically ends up happening is that all even VLANs use one edge device, say edge device 1, as their active OTV device, and all odd VLANs use the other, say edge device 2, with the remaining device being redundant (secondary) for those VLANs. Each site is identified via a site ID, which is shared between the edge devices at the same site and must be unique per site.
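The resulting even/odd split can be illustrated with a toy sketch (purely illustrative of the behaviour described above, not Cisco’s actual election algorithm; the device names are hypothetical):

```python
# Illustrative only: one edge device ends up active (AED) for even
# VLANs, its peer for odd VLANs, with the other device standing by.
def active_aed(vlan: int, edge1: str, edge2: str) -> str:
    return edge1 if vlan % 2 == 0 else edge2

print(active_aed(100, "otv-site1-1", "otv-site1-2"))  # otv-site1-1
print(active_aed(205, "otv-site1-1", "otv-site1-2"))  # otv-site1-2
```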

The main components of OTV are the Overlay interface, the Join interface and the Internal interface. The Overlay interface is the logical OTV tunnel interface which performs the encapsulation, whilst the Join interface is the L3 physical link or port-channel used to route upstream towards the DCI. The Internal interface is the L2 interface, typically configured as a trunk or access port, which takes part in STP and learns MAC addresses as normal. This is typically the interface that connects the L3 gateway device to the OTV device.

OTV uses IS-IS as the routing protocol to form adjacencies between edge devices and to advertise MAC reachability information. OTV runs IS-IS in the background, and thus no IS-IS configuration is required. Additionally, the OTV IS-IS instance will not interfere with any other IS-IS process which may be running on or between these devices.

OTV does have some limitations:

  • Cannot run OTV in the same VDC as Layer 3 SVIs.
  • OTV does not support fragmentation, and sets the DF (Don’t Fragment) bit on all packets.
  • OTV adds 42 bytes of header (so the DCI needs to cater for this additional header).
  • Limited number of extended VLANs per system across all configured overlays, which is around 1,500 at the time of writing.

One of the key benefits of OTV over other layer 2 stretch technologies, such as VPLS, is that it keeps the STP domain isolated. That is, STP BPDUs are not transported over the OTV, and thus an STP issue in one site should not affect the other.

The intent of this post is to provide a high level overview of OTV and its components. The next post will provide more detail into the configuration and troubleshooting of OTV.

All things French

I am fortunate to have some very generous friends that share a passion for wine, thus every month, or thereabouts, we get together for dinner and wine. This month’s theme, as the title suggests, was French wine, which is obviously a very broad and varied genre, however for this occasion that was specifically the plan.

We started the night with the 2014 Domaine Weinbach Riesling Schlossberg Cuvee Sainte Catherine, Alsace Grand Cru (The Krug owner was 10 mins late and we were thirsty).

I have tried the Weinbach Schlossberg Alsace Grand Cru Riesling before and have always been a big fan of it and the ’14 is an excellent example. It is a very rich, ripe and complex dry Riesling, with a nose of lemon, apple, and peach. Powerful and complex with great texture and a line of vibrant mineral, and good acidity – 96/100.

Next up, as the owner of this bottle had arrived, was the Krug Grande Cuvee. Whilst this is a non vintage, or as Krug calls it a “multi-vintage” it does have a high percentage of aged wine between 1990 and 2007 as the ID code on the label identified.

This particular cuvée is a blend of 44% Pinot Noir, 37% Chardonnay and 19% Pinot Meunier. The bottles are then aged for at least six years in the cellar (the minimum requirement for a non-vintage Champagne being 18 months), which gives it great complexity and richness. It is a very iconic Krug style (nothing wrong with that), with aromas of stone fruit, honey and brioche. The palate is full, rich and concentrated, with flavours of toast, citrus and nutty butter, great acid and a long crisp finish – 96/100.

Next up was the 2014 Vincent Girardin Pouilly-Fuissé Les Vieilles Vignes Chardonnay. This wine is put into French oak casks of 500 liters (10% new) for malolactic fermentation for 14 months, with some barrels then being aged on lees a month before with others in stainless steel tanks, before being blended together and bottled.

Great aromas of pears and apple with a light buttery texture and a palate of passion fruit, citrus and pineapple. The acidity was a little too evident and will perhaps fade with some age – 93/100.

To finish the whites we had the 2014 Domaine Christian Moreau Pere & Fils Les Clos, Chablis Grand Cru. This showed great aroma of tropical fruits, citrus and white flowers. The palate was vibrant with tropical fruit, nectarine, and citrus with silky texture and a slight salty minerality on the long finish, a great wine – 95/100. 

Next we tried the two Bordeaux side by side: the 2005 Chateau Fourcas-Hosten, Listrac-Medoc and the 2009 Chateau Haut-Beausejour, Saint-Estephe, with both being decanted, the latter for a couple of hours.

The Chateau Fourcas-Hosten was medium-bodied, with a nose of bright red fruits. On the palate it had some sour cherry, pencil, eucalyptus and slightly drying tannins. A good wine but nothing particularly interesting – 90/100.

The Chateau Haut-Beausejour, Saint-Estephe was also medium-bodied, with bright red fruit on the nose which carried through to the palate, with blackcurrants, touches of spice and mint, and well-rounded tannins. It was slightly more progressed than I expected, even given the couple of hours of decanting, and whilst a good wine it had a slightly meaty character which did not appeal to me – 91/100.

The last red of the night was a 2000 Chateau Mont-Redon Chateauneuf-du-Pape from the southern Rhone, in magnum. This was a medium- to full-bodied red. The bouquet was a blend of red and black berry fruits. On the palate were strawberry and plums, licorice and spice, with a touch of smoky and leather notes. The tannins were well integrated, and the wine benefited from being tasted from a magnum bottle – 93/100.

The highlight of the night for me was probably the Weinbach Schlossberg Riesling, but with such a good line-up of both white and red French wines the whole night was one to remember!

french_wine_dinner

Multiprotocol Label Switching (MPLS) Notes

Multiprotocol Label Switching (MPLS) is widely used in many large enterprise networks and as with all networking technologies it is the concepts which are important to remember and understand. Thus the following is just some general information about MPLS rather than configuration examples, which are easy to find on the interweb.

Unlike a traditional IP network, which performs a routing lookup based on IP addresses to determine the next hop, MPLS does label switching instead. Basically, instead of looking up the next hop based on the IP address, it finds the destination router, based on a predefined label-to-destination-network association, and applies the appropriate label(s) to get to that router via a pre-determined path. Once the traffic reaches the destination router (PE) the label is removed (or at the penultimate P router if penultimate hop popping is enabled, which in most deployments it is) and the packets are delivered locally via normal IP routing.

A typical example of this is when a tenant advertises its IP subnet (pick your favorite routing protocol) associated with its VRF to the PE router which will associate that subnet to a label. The PE then exports those tenant routes from the tenant’s VRF into MPLS and transmits them across the cloud / backbone, to their destination. Those routes are then imported back into the destination VRF and locally advertised by a routing protocol, thus creating a virtual private network. Note: Private in this instance does not imply any encryption but rather segregation of information from other tenants.

Because the PE associated the tenant’s IP subnet with a label, and those labels are communicated via the control plane to all MPLS-participating PE devices as an MP-BGP extended attribute, other PEs know what label to use to get back to that tenant’s IP subnet. When traffic is sent across the MPLS core, the PE adds the destination PE’s label, which it already knows via control plane learning, and then if required also adds an additional label for the next router in the predefined path towards the destination.

These pre-determined paths, or label-switched paths (LSPs), are established via the Label Distribution Protocol (LDP), which creates unidirectional tunnels between the PE routers.

MPLS is typically deployed in an enterprise as a method to connect tenant environments across a shared backbone and/or to segregate tenants on a shared backbone from each other. Whilst there is some perception that MPLS is faster than performing an IP route lookup, and this is likely true, given today’s router processing speeds this is of negligible benefit for all but the largest networks.

For pure IP routing to work the router must use control plane protocols, like OSPF, to first populate the IP routing table and then populate the CEF Forwarding Information Base (FIB).

Similarly, for MPLS forwarding to work, MPLS relies on control plane protocols to learn which MPLS labels to use to reach each IP prefix, and then populate both the FIB & LFIB with the correct labels.

A diagram I find useful is as follows:

mpls-diagram

The LFIB resides in the data plane and contains a local label to next-hop label mapping along with the outgoing interface, which is used to forward labeled packets.

A unique MPLS label is allocated for each VPNv4 prefix, and is inserted between the L2 and L3 headers. Multiple labels can be inserted; in fact this is how MPLS VPNs work, by stacking multiple labels.

For example, the ingress PE will place two labels on the packet: label 1 (L1) is the path label (provided by LDP), and label 2 (L2) is the VPN label (provided by BGP).

Thus, as per the following example, MPLS will populate the LFIB with labels associated with prefixes and the outgoing interface / next hop. Also, if this router is the last MPLS hop for a destination prefix, the label is removed, or ‘popped off’, before the packets are sent to the local VRF (VRF-BLUE).

router1#sho mpls forwarding-table
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
16         Pop Label  IPv4 VRF[V]      2941227361    aggregate/VRF-BLUE
17         No Label   10.0.42.0/24[V]  496031973389  Vl3500     172.16.50.70
18         157        172.16.6.24/30   0             Po101      172.16.51.17
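The forwarding logic behind that output can be modelled as a simple table lookup. A toy sketch, with entries loosely based on the example above (labels, interfaces and next hops are illustrative only):

```python
# Toy LFIB: local (incoming) label -> (outgoing label action,
# outgoing interface, next hop). Mirrors the show output above.
LFIB = {
    16: ("pop",  "aggregate/VRF-BLUE", None),           # last hop: pop, deliver locally
    17: ("none", "Vl3500", "172.16.50.70"),             # no label: plain IP forwarding
    18: (157,    "Po101",  "172.16.51.17"),             # swap label and forward
}

def forward(local_label: int) -> str:
    out_label, out_if, next_hop = LFIB[local_label]
    if out_label == "pop":
        return f"pop label, deliver via {out_if}"
    if out_label == "none":
        return f"strip label, route out {out_if} to {next_hop}"
    return f"swap to label {out_label}, out {out_if} to {next_hop}"

print(forward(18))  # swap to label 157, out Po101 to 172.16.51.17
```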

Virtual Routing and Forwarding (VRF)
VRFs can be used to store routes separately for different tenants (customers, groups, domains). Each VRF has three main components:

  1. An IP routing table (RIB)
  2. A CEF FIB, populated based on the VRFs RIB
  3. A separate instance or process of the routing protocol used to exchange routes.

Route Distinguisher (RD):
RDs allow BGP to advertise and distinguish between duplicate IPv4 prefixes. It does this by prepending the RD to the IPv4 prefix, creating what is called a VPNv4 address (96 bits in total), which is comprised of two parts:

  1. A 64-bit RD
  2. A 32-bit IPv4 prefix
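In display form a VPNv4 prefix is simply the RD prepended to the IPv4 prefix. A trivial sketch (the RD value, in ASN:nn format, is hypothetical):

```python
# A VPNv4 address is just a 64-bit RD prepended to a 32-bit IPv4
# prefix; in text form it is conventionally written RD:prefix.
def vpnv4(rd: str, prefix: str) -> str:
    return f"{rd}:{prefix}"

print(vpnv4("64512:100", "10.0.42.0/24"))  # 64512:100:10.0.42.0/24
```

Because the RD differs per VRF, two tenants can both advertise 10.0.42.0/24 without the prefixes colliding in BGP.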

Route Targets (RT):
PE routers advertise RTs in BGP updates as BGP extended community path attributes (PAs). MPLS uses RTs to determine into which VRF a PE places iBGP-learned prefixes.

NOTE: RD & RT are separate, independent values. While a particular prefix can have only one RD, it can have one or more RTs assigned to it.

Misc

  • Labels are locally significant (similar to frame-relay DLCI, or VLANs)
  • MPLS forwarding is label-based, not tied to the routing table!
  • Always ensure basic connectivity and routing is functioning correctly before implementing MPLS

The capability vrf-lite command disables the DN-bit (down bit) and domain-tag checks in OSPF. Since the CE router acts as the PE router in VRF-lite, these checks should be disabled, because the PE routers advertise VPN routes with DN-bit set to the CE routers

When VPN routing and forwarding (VRF) is used on a router that is not a PE (that is, one that is not running BGP), the checks can be turned off to allow correct population of the VRF routing table with routes to IP prefixes.

Digioia-Royer Chambolle-Musigny 2010

Whilst I am but an amateur wine enthusiast, I’m even more naive when it comes to french wine. Thus what follows is perhaps my feeble attempt to classify and characterise this wine based on my limited exposure of french vino.

Name: Digioia-Royer Chambolle-Musigny
Appellation: Burgundy – Chambolle Musigny
Varietal: Pinot Noir
Vintage: 2010
Date Tasted: 05/11/16 – 06/11/16
Tasting Notes: Medium body, dark ruby colour with red cherries on the palette. The tannin and acid are a little prickly and not ideally integrated on first taste but these seem to settle down when tasting again the next day.

This is a young example but I don’t think that age will greatly alter the balance although it should help make it more approachable, not that it is unapproachable now, just a little rough around the edged.

Overall a good wine, but not a great one, and perhaps I was expecting to much as I’ve found most of the Chambolle-Musigny I’ve tried (not many) have tended to be more elegant and silky than this example showed. Obviously typical disclaimers about bottle variation, etc apply although I did not detect any faults.
Score: 90/100.

20161105_195249-2

Passwords are so passe

Passwords are ubiquitous when dealing with user authentication, but are perhaps also the weakest link in security authentication. They generally require the user to maintain a complex yet easy-to-remember string, which is somewhat of a contradiction: the requirement to recall the string generally leads to it being based on, or related to, a known word or something personal to the user, which reduces complexity and randomness. A possible workaround is to not allow user (human) generated passwords, and rather have the password automatically generated by an application using suitable complexity; however, this tends to lead to other issues, such as users documenting their passwords or reusing the same password on many systems.

Perhaps the best method to date is to use a password manager. I started doing this myself a couple of years back and, while each has its pros and cons, I’ve never looked back.

In 2010 an analysis was performed on the 32 million passwords that were publicly published from the December 2009 Rockyou.com breach.

Some of the key findings of the study include:

  • About 30% of users chose passwords whose length is equal or below six characters.
  • Moreover, almost 60% of users chose their passwords from a limited set of alpha-numeric characters.
  • Nearly 50% of users used names, slang words, dictionary words or trivial passwords (consecutive digits, adjacent keyboard keys, and so on). The most common password among Rockyou.com account owners is “123456”.

Additionally, further studies show that this insecure trend sadly doesn’t shift as 26% of users reuse the same password for important accounts such as email, banking or shopping and social networking sites.

To provide some context the following tables represent the approximate maximum time required to guess each password using a simple brute force “key-search” attack.

mixed-62

As can be seen, using only mixed alphabetic and numeric characters, even a password of 8 characters can feasibly be retrieved in a short time. It should also be noted that there are many ways to improve the speed at which these passwords could be cracked.

mixed-96

Even using all 96 mixed alphabetic, numeric and symbol characters, a 6-character password does not provide enough complexity.
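The tables above boil down to simple arithmetic: keyspace size divided by guess rate. A back-of-the-envelope sketch (the guess rate of one billion guesses per second is an assumed figure for illustration, not taken from the tables):

```python
# Worst-case brute-force ("key-search") time: the attacker must try
# at most charset_size ** length candidates.
GUESSES_PER_SECOND = 1_000_000_000  # assumed attacker speed

def keyspace(charset_size: int, length: int) -> int:
    return charset_size ** length

def worst_case_seconds(charset_size: int, length: int) -> float:
    return keyspace(charset_size, length) / GUESSES_PER_SECOND

print(keyspace(62, 8))                    # 218340105584896
print(round(worst_case_seconds(62, 8)))   # 218340 (~2.5 days)
print(round(worst_case_seconds(96, 6)))   # 783 (~13 minutes)
```

This illustrates why length beats a larger character set alone: adding two characters multiplies the keyspace far more than broadening the alphabet does.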

The NASA guidelines recommend that all passwords be at least eight characters and contain a mix of four different types of characters: upper case letters, lower case letters, numbers, and special characters such as !@#$%^&*,;”. If there is only one letter or special character, it should not be either the first or last character in the password.

In addition to password complexity guidelines, other factors should be taken into account, such as:

  • Not displaying the password as it is being entered or obscuring it as it is typed by using asterisks (*) or bullets (•).
  • Requiring users to re-enter their password after a period of inactivity (screensaver)
  • Using encrypted tunnels / protocols (SSH, IPSec, SSL) to protect transmitted passwords.
  • Limiting the number of allowed failures within a given time period (to prevent repeated password guessing).
  • Introducing a delay between password submission attempts to slow down automated password guessing programs.
  • Requiring passwords are not shared between users / systems.
  • Requiring periodic password changes. The frequency of periodic password changes is a widely debated topic, and whilst the accepted dogma was to force password changes somewhere between 3–6 months, recently some evidence has emerged suggesting that forcing password changes is perhaps not a good idea, and in fact less secure.

More details can be found here.

However, given the general insecurity of relying on passwords for authentication, it is recommended that these be coupled with some other form of security, such as two-factor authentication, limiting access, and regular password assessments.

My View:

All systems should enforce that mixed alphabetic, numeric and symbol characters be used, with a minimum of 8 characters, to ensure suitable complexity.

Additionally, user and administrator passwords should be periodically audited to ensure they meet the complexity requirements, are not based on easily guessable or brute-forceable dictionary words, and that the same passwords are not used by the same person on multiple systems with differing security risk levels.

If possible, a password manager should be mandated. Whilst this is a cost for the company, it is IMHO far outweighed by the increased security and ease of use. Most modern password management applications also support auto-filling forms and passwords, which can greatly improve the user experience whilst only requiring the user to remember one secure password.

If possible users should be encouraged to use passphrases rather than passwords as these are generally longer and more complex than passwords.

Finally, one of the biggest security concerns with passwords is protecting them; ensuring they are salted and hashed when stored on any system is paramount, so that if they are stolen it is not feasible for the attacker to recover them.
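As a concrete illustration, a minimal sketch of salted password storage using PBKDF2 from Python’s standard library (the iteration count is an illustrative choice, and any real deployment should follow current guidance rather than this sketch):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password, salt=None):
    """Return (salt, digest); a fresh random salt is generated if none given."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("123456", salt, digest))                        # False
```

The per-user random salt means two users with the same password store different digests, defeating precomputed rainbow tables.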

Yabby Lake Vineyard Single Vineyard Chardonnay 2014

Whilst I am far from a wine critic or connoisseur (an expert judge in matters of taste), I do purchase, own, and thus drink my fair share of wine. So, perhaps more as a record for myself, but also just to share my experiences, I’ve decided to attempt to write about the wines that I drink.

Therefore for no other reason than I’ve been meaning to do this for a while and I have a few spare minutes, and a glass of wine in front of me, I thought now would be as good a time as any to begin… so here goes.

Name: Yabby Lake Vineyard Single Vineyard Chardonnay.
Appellation: Mornington Peninsula
Varietal: Chardonnay
Vintage: 2014
Date Tasted: 03/11/16
Tasting Notes: Intense body, grapefruit, peach, tightly wound with a hint of minerality and flint. Great line and length with light well-integrated oak. Fruit needs a little more time to develop.
Score: 94/100

yabby-lake-chardy-2014

Wine Maker Comments:

Each parcel of chardonnay was carefully handpicked between 27 February and 5 March, in pristine condition. Every parcel was handled separately in the winery, with minimal intervention. The chardonnay was crushed, pressed and transferred with solids into tight-grain 500L French oak puncheons (20% new) to undergo natural fermentation. It was then left to mature on lees for 11 months, without malolactic fermentation proceeding, until bottling in February 2015.

Firewall Rule Guidelines

Whilst reviewing my team’s implementation plan I came across some ACLs and firewall rules which I assume had been created some time ago and then continually added to (by another group), as even from a quick glance it was clear that a lot of the rules were redundant, or blatantly incorrect.

It made me recall a simple document I had written a few years back describing my thoughts on guidelines, or principles, for firewall rule management, so I thought it worth repeating here…

  • Access should be specifically permitted.
  • IP address ranges and ports, defined in rules should be as restrictive as practical to match source and destination hosts and ports.
  • Sequential IP addresses that match CIDR boundaries should be combined into as few rules as possible.
  • Rules should be ordered, descending from most frequently to least frequently hit rules.
  • At a minimum, rules should be applied to traffic that ingresses the firewall.
  • The use of NAT should be considered a form of routing, not a type of firewall.
  • The last rule in every ACL should be an explicit deny to all traffic with logging enabled.
  • All rules should be routinely checked for adequacy and removed if not required.
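As a sketch, several of the guidelines above (specific permits, restrictive matches, and a final explicit deny with logging) might translate into an ACL like the following. This uses Cisco IOS-style syntax, and the ACL name, addresses, and ports are hypothetical:

```
! Permit only the specific flows required, most frequently hit first
ip access-list extended DMZ-IN
 permit tcp 203.0.113.0 0.0.0.255 host 192.0.2.10 eq 443
 permit tcp 203.0.113.0 0.0.0.255 host 192.0.2.11 eq 25
 ! Final rule: explicit deny-all with logging enabled
 deny ip any any log
```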

In addition to the above, the following guidelines should be considered and adhered to for firewalls that sit between the public Internet and the organization’s network:

  • Organizations should deny inbound traffic that uses a source or destination IP address from the RFC 1918 ranges (private IP addresses).
  • Organizations should deny outbound traffic that does not use a source IP address in use by the organization.
  • Organizations should deny inbound traffic that uses a source IP address in use by the organization (i.e. spoofed traffic).
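The three Internet-edge guidelines above could be sketched as follows, again in Cisco IOS-style syntax, assuming a hypothetical organization range of 198.51.100.0/24:

```
ip access-list extended INTERNET-IN
 ! Deny inbound traffic sourced from private (RFC 1918) ranges
 deny ip 10.0.0.0 0.255.255.255 any log
 deny ip 172.16.0.0 0.15.255.255 any log
 deny ip 192.168.0.0 0.0.255.255 any log
 ! Deny inbound traffic claiming to be sourced from our own range (spoofing)
 deny ip 198.51.100.0 0.0.0.255 any log
ip access-list extended INTERNET-OUT
 ! Permit outbound traffic only when sourced from our own range
 permit ip 198.51.100.0 0.0.0.255 any
 deny ip any any log
```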

The following depicts how a firewall rule life-cycle may be managed:

fw-rule-flow

My Recommendations:

Firewalls (yes, even the ones which call themselves next-gen firewalls) provide very coarse protection and thus should not be viewed as a complete security solution, especially those which are deployed at the boundary to the Internet.

Ideally they are coupled with other security controls to provide a more complete protection layer.

It is perhaps preferable to separate the more in-depth protocol analysis into another device to ensure that the firewall is not impacted by this function and to simplify its management as not all traffic that traverses it will require such in-depth analysis.

It is also recommended that the firewall rules are validated and tested periodically, preferably every quarter to ensure integrity, protection and adherence to the known state of configuration.

Finally, it is recommended that the guidelines and firewall rule life-cycle described above are incorporated into change management and service life-cycle processes and policies.

iRule Optimisation Guidelines

I have been using F5’s Load Balancers, or Application Delivery Controllers (ADCs), for over 7 years now, where I initially implemented them as a replacement for another load balancer product at a company.

Soon after installing the F5’s I started tinkering with creating iRules. I had some previous experience with the TCL language, but that’s a tale for another time, so felt relatively comfortable.

Whilst iRules are a powerful tool and have helped me out of many tight spots (usually by allowing me to fairly quickly identify problems, typically not related to the load balancers themselves, but rather weird application behaviour), they need to be treated with care. Like all things in the network, they should be tested in a non-production environment before deploying, to understand how they affect the traffic flow.

Thus the purpose of this post: while iRules can add some interesting functionality, they can also introduce latency, especially if they are not optimised appropriately.

Therefore here are some notes I gathered along the way…

  • Use a Profile first, if possible!
  • It’s better to use chained “if”/”elseif” statements than to use separate “if” statements.
  • It’s better to use “switch” than any form of “if”, whenever possible.
  • It’s better to use “switch” (even with -glob) instead of “matchclass” for comparisons of 100 elements or less.
  • Order your if/elseif statements with the most frequently hit at the top. Maybe put a temporary hit/log counter in your evaluations to identify their frequency.
  • It’s better to use a command like HTTP::uri than to place the value in a variable.
  • The amount of code in an unused switch statement or an un-called if block doesn’t seem to be a performance consideration.
  • Use your operators wisely: “equals” is better than “contains”, and “string match”/“switch -glob” is better than regular expressions (regex is cool, but a CPU hog!).
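To illustrate a couple of the points above, here is a sketch of a “switch -glob” URI comparison, which is generally cheaper than the equivalent if/elseif chain. The pool names and URI prefixes are hypothetical:

```tcl
when HTTP_REQUEST {
  # switch -glob instead of chained if/elseif statements,
  # with the most frequently hit patterns listed first
  switch -glob [HTTP::uri] {
    "/images*" { pool image_pool }
    "/api*"    { pool api_pool }
    default    { pool default_pool }
  }
}
```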

Also, when thinking of applying iRules to traffic flows, you need to cater for both the client-to-server and server-to-client directions. Thus, when redirecting HTTP to HTTPS via an iRule such as:

when HTTP_REQUEST {
  # save hostname for use in response
  set fqdn_name [HTTP::host]
}

You need to consider the responses from the server side. The server will not be aware that the client thinks it is communicating via HTTPS (SSL), as the client’s HTTPS session actually terminates on the F5; therefore the server behind the load balancer will send redirects back to the client instructing it to use HTTP rather than HTTPS.

This can be catered for (as most things can) in an iRule such as:

when HTTP_REQUEST {
  # save hostname for use in response
  set fqdn_name [HTTP::host]
}
when HTTP_RESPONSE {
  if { [HTTP::is_redirect] }{
    if { [HTTP::header Location] starts_with "/" }{
      HTTP::header replace Location "https://$fqdn_name[HTTP::header Location]"
    } else {
      HTTP::header replace Location "[string map {"http://" "https://"} [HTTP::header Location]]"
    }
  }
}

However as per the first point above, a perhaps simpler and possibly more efficient way to handle this is via a profile. The LTM HTTP profile contains the “Rewrite Redirects” option which supports rewriting server-set redirects to the HTTPS protocol with a hostname matching the one requested by the client.
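For reference, the profile-based approach can also be set up from the CLI via tmsh; a sketch, where the profile name is hypothetical:

```
tmsh create ltm profile http http_rw_redirects redirect-rewrite all
```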

Variables

These can be powerful, however they should only be used if required, for example if you need to do repetitive evaluations on a value (and keep variable names short).

For example rather than this:

when HTTP_REQUEST {
  set host [HTTP::host]
  set uri [HTTP::uri]
  if { $host equals "bob.com" } {
    log "Host = $host; URI = $uri"
    pool http_pool1
  }
}

Do this:

when HTTP_REQUEST {
  if { [HTTP::host] equals "bob.com" } {
    log "Host = [HTTP::host]; URI = [HTTP::uri]"
    pool http_pool1
  }
}

Whilst it may not seem like a big deal to use short, concise variable names, each connection that comes in (and there may be thousands) needs to store its variables in memory, so it adds up…

Timing

The timing iRule command is a great tool to help you get the best performance out of your iRule. CPU cycles are stored on a per event level and you can turn “timing on” for any and all events.

These CPU metrics are stored and can be retrieved by the CLI, or the administrative GUI.

  • CPU Cycles/Request
  • Run Time (ms)/Request
  • Percent CPU Usage/Request
  • Max Concurrent Requests
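From the CLI these per-event statistics can be pulled with tmsh, for example (the rule name is hypothetical):

```
tmsh show ltm rule my_irule
```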

Just make sure you turn those timing values off when you are done optimizing your iRule as they incur an overhead!

For example, to enable timing for the entire iRule:

timing on
when RULE_INIT {
   log local0. "Rule initialized with timing enabled."
}
when HTTP_REQUEST {
   pool my_pool
}

or for a single event:

when RULE_INIT {
   log local0. "Rule initialized -- timing not enabled."
}
when HTTP_REQUEST timing on {
   log local0. "This event only is being timed"
   pool my_pool
}

Persistence

There are many great examples for doing persistence, however (it’s been a while since I looked, so this may have changed) most of them do not seem to consider the health of the pool member they are persisting to, which can cause issues.

Thus my recommendation when using an iRule for persistence is to always check the pool member status, for example:

if { [LB::status pool <pool_name>] eq "up" }

This will stop a connection persisting to a node if the node goes down after the initial connection is established, regardless of the persistence mechanism being used (cookie, hash, IP, etc.).
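Putting that together, a simplified cookie-based persistence iRule with a member health check might look like the following sketch. The pool name, cookie name, and port are hypothetical, and storing a member address in a plain cookie is for illustration only:

```tcl
when HTTP_REQUEST {
  if { [HTTP::cookie exists "app_node"] } {
    set node_ip [HTTP::cookie "app_node"]
    # Only honour the persistence cookie if that member is still up
    if { [LB::status pool http_pool1 member $node_ip 80] eq "up" } {
      pool http_pool1 member $node_ip 80
      return
    }
  }
  # Otherwise fall through to normal load balancing
  pool http_pool1
}
when HTTP_RESPONSE {
  # Record the member that actually served the request
  HTTP::cookie insert name "app_node" value [LB::server addr]
}
```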

Good Luck with those iRules!

Getting GRUBy with it

I have a laptop which I dual boot, well actually I have 3 distros running on it.

I currently have Linux Mint 18, which is my daily driver; I find it very stable and simple, with less bloat than Ubuntu. I also have Windows 10, which I hardly use but can be good for some gaming.

The third and last partition I typically use as a bit of a play pen where I install various other distros I want to try, and have recently been running Kali Linux. However, this morning I decided to install a newish distro called Solus.

Anyway, after I installed Solus on my EFI-based laptop it overwrote my boot loader (GRUB) and I lost the ability to boot into anything but Solus. This is not uncommon when installing another OS onto the same disk, but it usually takes me longer than it should to remember how to recover, so I thought I should document the process here.

So let’s get started…

Basically, the first thing to do is to boot up using a USB key/drive with a live Linux distro such as Ubuntu or Linux Mint.

Once booted, if you are not sure, you need to find where your root (/) and EFI partitions are. Whilst it is not obvious, my partitions are:

sda1 = /boot/efi
sda4 = Windows 10
sda6 = Linux Mint root (/)
sda8 = Solus root (/)
sda9 = Swap


Once you know these you can start by mounting these partitions, with the following commands:

mount /dev/sda6 /mnt
mount /dev/sda1 /mnt/boot/efi

Next you also need to bind-mount the virtual filesystems you will need, with the following command:

for i in /dev /dev/pts /proc /sys; do sudo mount -B $i /mnt$i; done

To ensure our kernel has loaded the UEFI module, enabling efibootmgr to access the boot manager variables, we run:

modprobe efivars

Then we chroot to /mnt and use that as our base filesystem:

chroot /mnt

Once this is done we can reinstall GRUB, which will also scan the partitions and set up boot entries for the OSes it finds.

apt-get install --reinstall grub-efi-amd64

Once this is finished you can type ‘exit’ to leave the chrooted environment.

Once that is complete you can umount the file systems and reboot.

for i in /sys /proc /dev/pts /dev; do sudo umount /mnt$i; done
umount /mnt/boot/efi
umount /mnt
reboot

That is it. Once you have rebooted, you should get a new GRUB boot loader with all of your OSes listed and accessible.