Zero Trust Network – Hype Cycle?

As the hype cycle of artificial intelligence and machine learning starts to wane, a new contender for the marketecture spotlight has emerged. Well, it has been around for many years, but it is getting a lot more attention recently, with almost every network and security vendor now featuring it, or a reference to it, on their front page… That is, of course, ‘zero trust’.

I for one welcome the focus on zero trust, even though it is somewhat of a misnomer (more on that later), as it directs attention to an area of network security that I think has been a struggle for a long time. The concept has existed in more niche areas for many years, mainly in wireless deployments, where user mobility is inherent and network location therefore cannot be relied upon to provide a comprehensive security posture. Typically this was part of a mobility strategy where the user’s or system’s identity formed the basis of how security posture and controls were applied.

Fast forward 5-10 years, and with the increasing adoption of public cloud, which has further eroded, or at least stretched and evolved, the traditional boundaries of a network, a more holistic approach to leveraging identity for network controls and access is gaining momentum.

Therefore when I discuss the meaning of zero trust, my foundation is treating every connection the same: no connection carries implied trust or untrust, and the goal is enabling the right access to the right destination at the right time. The benefit of this is that being “off-net” is no longer an inhibitor, and security controls can be proactively extended to all applications. It is key to understand that zero trust is not a product, technology, standard, pattern or process, but rather a principle that spans all technology domains.

Additionally, contrary to much vendor and industry marketing, the perimeter did not disappear, nor has trust become unnecessary. Rather, how trust is assigned and leveraged is now another tool in the tool belt: trust is granted based on the identity, posture and requirements of an entity, rather than inherited from its location or connectivity medium. It is still important to understand the boundaries of the network, as this enables better definition of policies for users and resources, and of the criteria to log, monitor and inspect activities within those boundaries, with micro-segmentation and identity providing further insight into expected behaviours.

It is also important to understand that a zero trust implementation is a marathon, not a sprint: focus on the greatest risks first and iterate over time. In the network, do not attempt to control every connection, especially early on. Instead, work towards grouping connectivity based on identity and segmentation, keeping the controls at the edge of each segment while leveraging the richer information provided by identity, visibility and logging within the network to make more informed security and control decisions.

Once the identity of an entity requesting a connection is known, a control can authenticate and authorize the connection to the destination based on a policy. For example, a firewall could block all traffic to an application by default, but allow a connection to pass based on its verified knowledge of the identity of the requesting entity. This can be extended to specific destinations and specific times, all defined in policy, regardless of the entity’s location.

An important capability of a zero trust approach is not just enabling conditional access, but also ensuring that access is secure by preventing exploits, vulnerabilities and other attacks. This requires both a clear understanding of what should and should not be traversing the network, and the visibility to measure, learn and adapt. Network controls can therefore no longer focus just on layer 4; whilst that is still important, they also need better insight into layer 7.

Conceptually, the steps an organisation needs to undertake to adopt a zero trust approach are:

  • Define the landscape to which zero trust will be applied.
  • Identify the users and entities requesting access.
  • Map each identity to the access it is authorised for.
  • Distribute the policy to the controls which will enforce that access.
  • Monitor each connection to ensure it maintains compliance with the policy.

This is an iterative process, cycling back to the start as the landscape evolves.

To enable the adoption of a zero trust approach, the network (meaning both the traffic traversing it and the devices enabling it) needs to support identity-based controls, the ability to segment or isolate, and the ability to remove any undesired or compromised component or traffic flow on demand.

Underpinning the ability to define the landscape is a micro-segmentation approach in the network, where workloads are segmented based on security, support and operational requirements, with well-defined zones for administration activities and shared services. This not only simplifies controls but also aids visibility of compromised or misconfigured components.

Final Thoughts

As I mentioned at the start of this thought dump, zero trust is often misunderstood or misportrayed as meaning that no entity, be it user, application or system, should be trusted at all. Trust is still required; for example, you need to trust your identity store and the links used to connect components. Rather, trust should not be implied without proper consideration of what, how and why a connection is required. This is likely a long journey which cannot be completed with the purchase or implementation of a technology, but rather by adopting a micro-segmentation approach, which allows policies to be tailored to network zones and the expected behaviours and capabilities within those zones, combined with identifying the requestor of each connection and who or what is making the request.

The zero trust question cannot be solved with technology alone; it also requires a new approach and a new way of thinking, acknowledging that most connectivity will originate from, or be destined to, an entity outside the organisation’s network, be it administrators working from home or applications deployed to a Platform as a Service (PaaS), all with the goal of providing the least amount of access required for a user or function to accomplish a specific task.

To realise this zero trust approach, network controls need to incorporate identity information to make decisions about what access to resources is enabled and what the user is authorised to do, in a dynamic and automated way, along with uplifting ways of working to leverage these capabilities.

Therefore the best place to start a zero trust journey is with the way you think about security and the mindset with which you apply controls, expanding the focus from the deeply ingrained network-centric approach to a more holistic view of what is actually required. Attempting this without an underpinning of network automation will likely lead to lax or overbearing controls.

BGP VxLAN EVPN – Part 2: Underlay

In the previous post, found here, I provided an overview of BGP VxLAN EVPN and mentioned that various IGPs could be utilised to provide the underlay. In this post I am going to flesh out what a potential underlay setup may look like, based on OSPFv2.
There are some initial considerations which need to be defined when planning the underlay design. Some of these considerations are:
  • MTU
  • Unicast Routing Protocol
  • IP addressing
  • Multicast for BUM traffic replication

VxLAN adds 50 bytes to the original Ethernet frame, which needs to be catered for to avoid fragmentation. The simplest way of doing this is to enable jumbo frames in the IP network where VxLAN will run. As most servers utilise a jumbo frame of 9000 bytes, it is recommended that the switches be configured with a jumbo frame of 9192 or 9216, depending on what the hardware model supports. This caters for the servers’ 9000 bytes plus the VxLAN overhead.
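
As a rough sketch of how this might be applied on the Nexus 9000 switches used later in this post (the interface number is an assumption, and some L2-only platforms require a network-qos policy instead of a per-interface MTU):

system jumbomtu 9216

interface Ethernet1/43
  mtu 9216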

The next consideration is which IGP (unicast routing protocol) to utilise; however, as mentioned, this post will focus on OSPF.

IP addressing for the underlay needs to cater for the P2P links between the spine and leaf switches, the loopback interfaces on each spine and leaf switch and the multicast Rendezvous-Point (RP) address.

Whilst discussed in more detail later in this post, it should be noted that the mode of multicast utilised will likely depend on the hardware model being used. For example, on the Cisco Nexus range, unfortunately, not all models support the same multicast mode. Below is a list of what is supported on each Nexus model:

  • Nexus 1000v – IGMP v2/v3
  • Nexus 3000 – PIM ASM
  • Nexus 5600 – PIM BiDir
  • Nexus 7000/F3 – PIM ASM / PIM BiDir
  • Nexus 9000 – PIM ASM

In this example we will leverage a loopback address for our multicast RP address. As an illustration of scale, a medium-sized spine and leaf deployment utilising 4 spine switches and 20 leaf switches needs to consider the following IP address usage:

  • 4 Spine x 20 leaf = 80 P2P Links
  • 80 links, with an IP address at each end = 160 P2P IP addresses
  • 24 devices in total = 24 Loopback IP addresses.
  • Total = 160 P2P IP + 24 Loopback IP = 184 IP Addresses

Also note that to conserve IP addresses, ‘ip unnumbered loopback0’ may be used on the P2P interfaces, which means one IP address per device. This should be seriously considered for large deployments. However, for simplicity, in this example I am going to utilise 2 spine switches and 3 leaf switches with a unique IP address everywhere, meaning I need to cater for:

2 Spine x 3 leaf = 6 P2P links x 2 = 12 P2P IP addresses + 6 Loopback IP addresses.
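
(As an aside, had I gone the unnumbered route, each P2P interface would simply borrow the loopback address. A sketch, where the interface number is an assumption and the OSPF process is the UNDERLAY process defined later in this post:)

interface Ethernet1/1
  medium p2p
  ip unnumbered loopback0
  ip ospf network point-to-point
  no ip ospf passive-interface
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode
  no shutdown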

Also, I am going to assume in this example that the servers are utilising the 10/8 IP address range, and thus I have opted to use the 192.168/16 range for the loopback interfaces, which are also used as the router IDs, and the 172.16/12 range for the physical layer 3 P2P interfaces.

Also, for reference, whilst most of the theory is independent of the vendor and hardware, in this example I am using Cisco Nexus 9000 switches to implement this network technology. As with all Nexus switches, the features first need to be enabled, thus I have enabled the following:

Spine-1# show run | incl feature
feature nxapi
feature ospf
feature bgp
feature pim
feature interface-vlan
feature vn-segment-vlan-based
feature lacp
feature lldp
feature nv overlay

As the spine switches are the simplest to configure, I’ll start with the first spine switch. As mentioned, depending on how MAC address replication and flooding is configured in the environment, multicast may be required. I’ll explain this in more detail later, but in this example I have enabled multicast and also nominated this spine switch as one of the RPs, with the following commands:
ip pim rp-address 192.168.1.0
ip pim anycast-rp 192.168.1.0 192.168.1.1
ip pim anycast-rp 192.168.1.0 192.168.1.2
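
For the anycast RP address 192.168.1.0 to be reachable, it also needs to live on a loopback on each spine and be advertised into the underlay. A sketch of how this might look (loopback1 is an assumption, and it joins the UNDERLAY OSPF process configured next):

interface loopback1
  description Anycast-RP
  ip address 192.168.1.0/32
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode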

Once this is done, the next step is to enable the underlay routing protocol. As I am using OSPF to provide IP reachability across the fabric, the first step is to configure the loopback interface which will be used as the router ID, and then configure OSPF itself.

interface loopback0
description Router-ID – Spine1
ip address 192.168.1.1/32

router ospf UNDERLAY
router-id 192.168.1.1
log-adjacency-changes
maximum-paths 12
auto-cost reference-bandwidth 100000 Mbps
passive-interface default

The router ID is the same IP I will use on the loopback0 interface, and for all router IDs defined on this switch.

The OSPF configuration is standard and should be familiar to anyone who has configured OSPF before; however, the command ‘maximum-paths’ may not be. This is enabled to provide equal-cost multi-pathing (ECMP) between my leaf and spine switches. I chose 12 just to have a large number and likely never need to worry about it again, but as long as it is equal to, or greater than, the number of physical links it will be fine. It is also good practice to define the reference bandwidth; in this example I have configured 100000 Mbps, which is 100 Gbps and should cater for the largest link this environment will have. Finally, I prefer to manually nominate the interfaces I wish to participate in OSPF, thus I have configured interfaces to be passive by default.

TIP: By default OSPF uses the broadcast network type for message propagation and DR election; however, we want to utilise the point-to-point network type, so ensure that ‘ip ospf network point-to-point’ is configured on the loopback and P2P interfaces.

Once this is done I can go back into the loopback interface and assign the OSPF and multicast parameters so it participates in these protocols, with the following configuration:

interface loopback0
  description Router-ID – Spine1
  ip address 192.168.1.1/32
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode

The next step is to configure the point-to-point interfaces and enable OSPF and multicast on them. As we are using VxLAN, we are going to increase the MTU to cater for the additional header size. Technically only an additional 50 bytes is required, but for simplicity I’ve decided to enable jumbo frames and set the MTU to 9216 on all physical interfaces.
interface Ethernet1/43
  description – DC01-LSL06-03 [Eth1/47]
  mtu 9216
  ip address 172.16.1.1/30
  ip ospf network point-to-point
  no ip ospf passive-interface
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode
  no shutdown

It’s important to configure OSPF as point-to-point here to ensure there is no DR/BDR election, as well as giving a more optimised LSA database and avoiding a full SPF calculation on a link failure. Also, as we nominated passive-interface default in OSPF, we need to enable each interface to participate in OSPF with the command ‘no ip ospf passive-interface’. I have also used a /30 for the point-to-point link, which is not ideal for preserving IP address space and may cause scale issues in a very large deployment, but for simplicity of configuration and troubleshooting I’ve decided the trade-off here is fine.

All the interconnects between the leaf and spine switches are via 2 x 10G interfaces thus I need to replicate the above configuration on an additional interface as per the following configuration.

interface Ethernet1/44
  description – DC01-LSL06-03 [Eth1/48]
  mtu 9216
  ip address 172.16.1.5/30
  ip ospf network point-to-point
  no ip ospf passive-interface
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode
  no shutdown
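
For completeness, here is a sketch of what the matching configuration on one leaf might look like, after enabling the same features as on the spine. The descriptions and interface numbers are assumptions, with the addressing chosen to line up with the neighbor output shown below:

router ospf UNDERLAY
  router-id 192.168.1.13
  log-adjacency-changes
  maximum-paths 12
  auto-cost reference-bandwidth 100000 Mbps
  passive-interface default

ip pim rp-address 192.168.1.0

interface loopback0
  description Router-ID – DC01-LSL06-03
  ip address 192.168.1.13/32
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode

interface Ethernet1/47
  description – Spine-1 [Eth1/43]
  mtu 9216
  ip address 172.16.1.2/30
  ip ospf network point-to-point
  no ip ospf passive-interface
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode
  no shutdown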

This should be repeated for all links between each spine and leaf, adjusting the IP addresses as required, until all of your switches form neighbor relationships, as shown here:
Spine-1# show ip ospf neighbors
 OSPF Process ID UNDERLAY VRF default
 Total number of neighbors: 6
 Neighbor ID     Pri State            Up Time  Address         Interface
 192.168.1.13      1 FULL/ –          1w5d     172.16.1.2      Eth1/43
 192.168.1.13      1 FULL/ –          1w5d     172.16.1.6      Eth1/44
 192.168.1.12      1 FULL/ –          1w5d     172.16.1.10     Eth1/45
 192.168.1.12      1 FULL/ –          1w5d     172.16.1.14     Eth1/46
 192.168.1.11      1 FULL/ –          1w5d     172.16.1.18     Eth1/47
 192.168.1.11      1 FULL/ –          1w5d     172.16.1.22     Eth1/48

Also, as we enabled PIM multicast earlier, we can confirm it has formed the appropriate neighbor relationships with the following commands:
Spine-1# show ip pim neighbor
PIM Neighbor Status for VRF “default”
Neighbor        Interface            Uptime    Expires   DR       Bidir-  BFD
                                                         Priority Capable State
172.16.1.2      Ethernet1/43         1w5d      00:01:42  1        yes     n/a
172.16.1.6      Ethernet1/44         1w5d      00:01:35  1        yes     n/a
172.16.1.10     Ethernet1/45         1w5d      00:01:26  1        yes     n/a
172.16.1.14     Ethernet1/46         1w5d      00:01:23  1        yes     n/a
172.16.1.18     Ethernet1/47         1w5d      00:01:34  1        yes     n/a
172.16.1.22     Ethernet1/48         1w5d      00:01:44  1        yes     n/a
Spine-1# show ip pim interface brief
PIM Interface Status for VRF “default”
Interface            IP Address      PIM DR Address  Neighbor  Border
                                                     Count     Interface
Ethernet1/43         172.16.1.1      172.16.1.2      1         no
Ethernet1/44         172.16.1.5      172.16.1.6      1         no
Ethernet1/45         172.16.1.9      172.16.1.10     1         no
Ethernet1/46         172.16.1.13     172.16.1.14     1         no
Ethernet1/47         172.16.1.17     172.16.1.18     1         no
Ethernet1/48         172.16.1.21     172.16.1.22     1         no
loopback0            192.168.1.1     192.168.1.1     0         no

Note: As this example is from the spine switch, and each spine has 2 x 10G links to each of the 3 leaf switches, there are 6 entries above, plus the loopback (depending on which command is used).

We have now formed the underlay network with OSPF and multicast, and can build the overlay and control plane on top of it. It is critical that reachability across the underlay is consistent throughout the fabric, so this is a good point to test failure scenarios for the underlay. It is also a good point to finish this post, with the next providing the overlay and control plane configuration details.
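
Before moving on, a few sanity checks are worth running from each switch. These are sketches based on the addressing used in this post (exact byte accounting for the jumbo frame test varies by platform, so treat the packet size as an assumption):

Spine-1# show ip route ospf-UNDERLAY
Spine-1# ping 192.168.1.13 source 192.168.1.1
Spine-1# ping 172.16.1.2 packet-size 9000 df-bit count 5

The first confirms all loopbacks and P2P subnets are learned via OSPF, the second confirms loopback-to-loopback reachability, and the third confirms the jumbo MTU holds end to end without fragmentation.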

 

To Architect or Design, that is the question?

In IT there are various roles, such as architect and designer, and the line between these two seems to get blurred; I find it often means different things to different people and companies. This can also make understanding a potential candidate’s strengths hard, as there is no clear formal definition: whilst a person might have the title of network designer, they may be performing more of a network architecture function, and vice versa.

In the IT industry the terms designer and architect largely follow the broader definitions used in other industries, but unlike other industries, which may have very clear descriptions, in IT they are often used interchangeably. However, I believe there is a significant difference between the two, which, based on my own experience, I will try to discuss here, and maybe provide some insight and perspective. I also think both skills are critical to a successful IT department in any mid to large size organisation.

ISO/IEC 42010:2007 defines “architecture” as: “The fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution.”

TOGAF embraces, but does not strictly adhere to, ISO/IEC 42010:2007 terminology. In TOGAF (based on my 9.1 certification and knowledge), “architecture” has two meanings depending upon the context:

  • A formal description of a system, or a detailed plan of the system at a component level to guide its implementation.
  • The structure of components, their inter-relationships, and the principles and guidelines governing their design and evolution over time.

My view of the role of an architect is to optimise the often fragmented legacy processes, technologies and capabilities in a way that is responsive to change and enables the delivery of the business strategy. It enables the effective utilisation of information and technology to help the business achieve a competitive advantage, and enhances the user experience both internal and external to the business.

IT architecture focuses on the broader, holistic view of how systems interoperate with each other and the principles they should adhere to. It typically defines the choice of framework, capabilities, scope, goals, and the high level methodologies which will be utilised.

IT design focuses on planning how systems will be organised, how the components of a system will work and integrate, how the system will be implemented, and the specifications which should be met during, and at the end of, the implementation and/or integration.

Whilst these may seem in large part like the same thing, I believe IT architecture is more objective focused, analysing the requirements, the system and how it will be measured, whilst design is more subjective, as it is based more on the usage of a system and how it will operate and be managed.

Simply put, IT architecture often involves looking at all the features from a business and IT perspective, how they interrelate, the inputs and outputs of how the system will be supported or utilised, and the broader implications to the business as a whole. Design is typically more focused on the system itself and its technical aspects, features and constraints.

That said, as mentioned, both skills are important: an architect may focus on the overall aesthetics of the system and its integration with the business, whilst a designer is typically looking for the purest technical solution. Architecture faces towards strategy, structure and the abstract; design faces towards implementation and practice, towards the concrete. Therefore, when combined, a design defines how a chosen architecture is applied to the given requirements.

Architecture without design does nothing: it can too easily remain stuck in an ‘ivory-tower’ world, seeking ever finer and more idealised abstractions and solutions, at the risk of never realising practical outcomes.

Design without architecture tends toward point-solutions that are optimized solely for a single task and context, often developed only for the current techniques and technologies, and often with high levels of hidden ‘technical debt’.

Having skills in both disciplines can sometimes be challenging, but for effective and efficient IT in a mid to large size organisation, both architecture and design, in use and in appropriate balance, are essential to arrive at appropriate, useful, maintainable solutions.

Final Thoughts

I have worked from technician to designer to solution architect to domain architect, and seen the benefits and limitations of all of these roles. I believe, perhaps slightly egocentrically, that having experience in all areas helps round out what is needed for the organisation. Whilst in large organisations these roles are typically filled by different people or groups, they can be a single person or group.

Whilst it is often important to deliver to the goals and objectives of a specific project, being able to ensure this aligns with the organisation’s overall strategy and leaves minimal tech debt (gap) is ideal. I have briefly discussed this in a previous post, IT Architecture Process.

I guess the answer is that both architecture and design are important, one may be more so depending on the situation, and they are often not disparate skills; more focus or weight can be applied to one area over the other, depending on the problem being solved.

Overlay Transport Virtualization (OTV)

OTV has been around for a while, and in general I think that stretching broadcast domains (layer 2 VLANs) is not a great idea. However, in most enterprises I have worked in it is done anyway, typically for either long distance vMotion or server clustering, the latter specifically for databases.

There are many great posts explaining why stretched layer 2 VLANs (bridging) are not a good idea, so I won’t go into them here, but will just leave it with this:

[Image: “bridging doesn’t scale”]

Anyway, if you do need to span a bridging domain between data centres over a data centre interconnect (DCI), then one of the, perhaps, more robust options is Cisco’s Overlay Transport Virtualization (OTV). It should be noted that whilst I’m discussing Cisco’s proprietary OTV implementation, there is an RFC draft for OTV which can be found here.

NOTE: the Cisco implementation and the standard draft are likely to be non-interoperable as they use different encapsulation methods.

Thus this post is specifically focused on the Cisco implementation of OTV, which is actually Ethernet over MPLS over GRE over IP (EoMPLSoGREoIP). For OTV to work, assuming you have devices that support it (Nexus 7K and Cisco ASR), all that is needed from a transport perspective is IP connectivity.

As OTV is an EoMPLSoGREoIP encapsulation, it adds 42 bytes of header to the original packet, comprising (with the MTU implication worked through after the list):

  • 4 bytes – MPLS
  • 4 bytes – GRE
  • 20 bytes – IP Header
  • 14 bytes – Outer Ethernet frame
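
The practical consequence, assuming standard 1500-byte frames inside each data centre, is that the transit path must support:

$$\mathrm{MTU}_{\mathrm{DCI}} \ge 1500 + 42 = 1542\ \text{bytes}$$

and, as noted in the limitations below, OTV sets the DF bit, so this headroom must exist without relying on fragmentation.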

One of the current limitations of OTV is that it cannot run in the same device or VDC as the layer 3 gateway or SVI. Thus when deploying OTV it must be in one of two topologies: either ‘on a stick’ or ‘inline’. The on-a-stick method is typically preferred, as it means that non-OTV traffic does not need to traverse the OTV VDC and is thus more flexible; however, the configuration is largely the same in either scenario.

[Diagram: OTV ‘on a stick’ topology]

  • Red Line – represents the OTV traffic path.
  • Purple Line – represents other (non OTV) traffic path.

Additionally, OTV can be deployed in multicast mode or unicast mode, where the former requires that the DCI also supports and runs multicast. The benefit of multicast mode over unicast comes when more than 2 DCs are connected, as traffic can be sent more optimally between source and destination rather than being replicated to all OTV participants regardless of whether they need to see it. However, in most scenarios OTV is utilised to connect 2 sites, and thus unicast is generally fine, and arguably simpler to configure, as it does not require multicast in the DCI.

NOTE: The rest of this post will assume a 2 site DC connected via unicast OTV.

Whilst the diagram above only has a single device for the layer 3 and OTV functions, each would typically be a pair of devices for redundancy purposes. Thus when OTV forms, it needs to decide which of the OTV pair is the Authoritative Edge Device (AED) for each VLAN. This is done via the AED election process, which in turn is negotiated over a ‘site VLAN’.

As mentioned, the site VLAN is an internal VLAN connecting the two local edge devices (it is not spanned over OTV) and is utilised for the AED election. This allows each edge device to be active for some VLANs and redundant for others; what typically ends up happening is that all even VLANs use one edge device as their active OTV device and all odd VLANs use the other, with the peer being redundant (secondary) in each case. Each site is identified via a site ID, which is shared between the edge devices at the same site and must be unique per site.

The main components of OTV are the Overlay interface, the Join interface and the Internal interface. The Overlay interface is the logical OTV tunnel interface which performs the encapsulation, whilst the Join interface is the L3 physical link or port-channel used to route upstream towards the DCI. The Internal interface is the L2 interface, typically configured as a trunk or access port, which takes part in STP and learns MAC addresses as normal; this is typically the interface that connects the L3 gateway device to the OTV device.
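
As a preview of the next post, here is a minimal sketch of how these components hang together in unicast mode; the site VLAN, site identifier, VLAN range, adjacency server address and interface number are all assumptions:

feature otv

otv site-vlan 99
otv site-identifier 0x1

interface Overlay1
  otv join-interface Ethernet1/1
  otv extend-vlan 100-150
  otv use-adjacency-server 10.1.1.1 unicast-only
  no shutdown

Here Ethernet1/1 is the Join interface, Overlay1 is the Overlay interface, and the extended VLANs arrive on a regular trunk (the Internal interface); one edge device per overlay would also be configured as the adjacency server itself, with ‘otv adjacency-server unicast-only’ under its overlay interface.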

OTV uses IS-IS as its control plane protocol to form adjacencies between edge devices and to advertise MAC reachability information. OTV utilises IS-IS in the background, so no IS-IS configuration is required, and the OTV IS-IS instance will not interfere with any other IS-IS process which may be running on or between these devices.

OTV does have some limitations:

  • Cannot run OTV in the same VDC as layer 3 SVIs.
  • OTV does not support Fragmentation, and sets the DF (Don’t Fragment) bit on all packets.
  • OTV adds 42 bytes of header (So DCI needs to cater for this additional header).
  • A limited number of extended VLANs per system across all configured overlays, around 1500 at the time of writing.

One of the key benefits of OTV over other layer 2 stretch technologies, such as VPLS, is that it keeps the STP domain isolated. That is, STP BPDUs are not transported over OTV, and thus an STP issue in one site should not affect the other.

The intent of this post is to provide a high level overview of OTV and its components. The next post will provide more detail into the configuration and troubleshooting of OTV.

Passwords are so passé

Passwords are ubiquitous when dealing with user authentication, but are perhaps also the weakest link in security authentication. They generally require the user to maintain a complex yet easy to remember string, which is somewhat of a contradiction: the requirement to recall the string generally leads to it being based on, or related to, a known word or something personal to the user, and thus easy for a human to remember, which reduces complexity and randomness. A possible workaround is to not allow user (human) generated passwords, and instead have the password automatically generated by an application using suitable complexity; however, this tends to lead to other issues, such as users writing down their password or reusing the same password on many systems.

Perhaps the best method to date is to use a password manager. I started doing this myself a couple of years back, and while each has its pros and cons, I’ve never looked back.

In 2010 an analysis was performed on the 32 million passwords that were publicly published from the December 2009 Rockyou.com breach.

Some of the key findings of the study include:

  • About 30% of users chose passwords whose length is equal or below six characters.
  • Moreover, almost 60% of users chose their passwords from a limited set of alpha-numeric characters.
  • Nearly 50% of users used names, slang words, dictionary words or trivial passwords (consecutive digits, adjacent keyboard keys, and so on). The most common password among Rockyou.com account owners is “123456”.

Additionally, further studies show that this insecure trend sadly doesn’t shift as 26% of users reuse the same password for important accounts such as email, banking or shopping and social networking sites.

To provide some context, the following tables represent the approximate maximum time required to guess a password using a simple brute force “key-search” attack.

[Table: brute-force times for passwords built from 62 mixed-case alphanumeric characters]

As can be seen, using only mixed alpha and numeric characters, even for a password of 8 characters it is still feasible to retrieve the password in a short time. It should also be noted that there are many ways to improve the speed at which these passwords can be cracked.

[Table: brute-force times for passwords built from all 96 mixed alpha, numeric and symbol characters]

Even using all 96 mixed alpha, numeric and symbol characters, a 6 character password does not provide enough complexity.
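
As a rough back-of-envelope check (the guess rate of $10^{10}$ attempts per second is an assumption; real rates vary enormously with the hashing algorithm and hardware), the worst-case search time is simply the keyspace divided by the guess rate:

$$\frac{62^8}{10^{10}/\mathrm{s}} \approx \frac{2.2\times10^{14}}{10^{10}/\mathrm{s}} \approx 6\ \text{hours}, \qquad \frac{96^6}{10^{10}/\mathrm{s}} \approx \frac{7.8\times10^{11}}{10^{10}/\mathrm{s}} \approx 78\ \text{seconds}$$

whereas $96^8 \approx 7.2\times10^{15}$ pushes the same attack out to roughly 8 days, which is why both length and character set matter.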

The NASA guidelines recommend that all passwords be at least eight characters and contain a mix of four different types of characters: upper case letters, lower case letters, numbers, and special characters such as !@#$%^&*,;” If there is only one number or special character, it should not be the first or last character in the password.

Additional to password complexity guidelines other factors should be taken into account such as:

  • Not displaying the password in plain text as it is being entered, instead obscuring it with asterisks (*) or bullets (•).
  • Requiring users to re-enter their password after a period of inactivity (screensaver).
  • Using encrypted tunnels / protocols (SSH, IPSec, SSL) to protect transmitted passwords.
  • Limiting the number of allowed failures within a given time period (to prevent repeated password guessing).
  • Introducing a delay between password submission attempts to slow down automated password guessing programs.
  • Requiring passwords are not shared between users / systems.
  • Requiring periodic password changes.*

* The frequency of periodic password changes is a widely debated topic; whilst the accepted dogma was to force password changes somewhere between 3 and 6 months, evidence has recently emerged suggesting that forcing password changes is perhaps not a good idea, and in fact less secure.

More details can be found here.

However, given the general insecurity of relying on passwords for authentication, it is recommended that they be coupled with some other form of security, such as two-factor authentication, limiting access, and regular password assessments.

My View:

All systems should enforce that mixed alpha, numeric and symbol characters be used, with a minimum of 8 characters, to ensure suitable complexity.

Additionally, user and administrator passwords should be periodically audited to ensure they meet the complexity requirements, are not based on easily guessable or brute-forceable dictionary words, and that the same password is not used by the same person on multiple systems with differing security risk levels.

If possible, a password manager should be mandated. Whilst this is a cost to the company, it is IMHO far outweighed by the increased security and ease of use. Most modern password management applications also support auto-filling forms and passwords, which can greatly improve the user experience whilst only requiring the user to remember one secure password.

If possible, users should be encouraged to use passphrases rather than passwords, as these are generally longer and more complex.

Finally, one of the biggest security concerns with passwords is protecting them, thus ensuring they are salted and hashed when stored on any system is paramount, so that if they are stolen it is not feasible for the attacker to recover them.

Thoughts on IT Architecture Process

I should probably start by pointing out that this, and in fact most of my posts, are expressed through a network and/or security lens, as those are my domains of specialty; however, hopefully they are still relevant to other IT domains.

When discussing principles and processes I try not to be constrained to a silo, as one of the issues I see in the IT industry is that practitioners are typically focused only on their silo. I strongly feel that whilst having strengths in specific domains is fine, one should always strive to break down silos and understand other perspectives.

Anyway, back to the point…

Typically when performing network and security architecture you are working in an environment that already has an existing network and/or security devices deployed.

Therefore you need to be able to quickly get a lay of the land and work with, and within, the existing tools and processes. This is not always easy; as anyone responsible for designing and recommending the deployment of new technologies will know, one of the hardest discussions with the business is explaining the reasons it should spend more money to replace something that may, for the most part, currently be working fine.

You know: if it ain’t broke, don’t fix it. For the most part I would agree; however, from experience, most businesses do not even know when something is broken, and by this I don’t just mean it does not perform its primary function correctly. For example, a firewall may be adequately blocking everything other than HTTP (port 80), but if the code has vulnerabilities, or the fail-over mechanism is buggy, or the firewall introduces significant latency, or it cannot inspect and determine that the port 80 traffic it is permitting is actually valid HTTP, then the technology may, from a business perspective, be broken and thus need to be addressed.

Inversely if you start recommending that every piece of technology be replaced with the latest and greatest you are not likely to last long either.

Therefore a key objective, once you have obtained all your required inputs (business goals, strategy, compliance, etc.), is to get the best out of the existing technology and augment it where needed to address the most significant pain points and gain the most benefit.

When I plan the network and security architectural process I will follow, and which will hopefully be implemented by the business, I typically use the following high level process:

[Diagram: network and security architecture process]

Minimalistic Architecture Principles

As with most things, a good place to start is with principles. There are many good references to principles on the interweb; these are just the ones I have developed, which I try to use to provide guidance and consistency when doing IT architecture.

This is in no way a complete or exhaustive list, and it does not include the rationale or detail for each principle; it is rather a starting point and, as the name suggests, a set of minimalistic high level guiding principles.

Let’s start by discussing at a high level what principles are:

Principles are fundamental statements which express a belief about the future and/or future direction. They articulate the organisation’s vision and are the cornerstone for managing change. Principles provide an agreed reference or policy to be used for evaluation of alternatives and decisions.

The principles are equal in the sense that they all have to be taken into account for any decision, but their importance may vary depending on the particular decision to be taken.

Now, with that out of the way, to the minimalist architecture principles:

General Architecture Principles

•  Components should be loosely coupled.
•  Good solutions should be re-used, not invented.
•  There shall be a single source of truth.
•  Re-use before buy, buy before build.
•  All solutions will be architected for change.
•  All solutions should be exposed to other solutions.
•  Only vendor supported solutions will be deployed.
•  Only enough information will be provided to make informed decisions.
•  Applications will be designed for reuse.
•  Information is an asset that has value and should be managed accordingly.
•  Information is protected from unauthorized use and disclosure.
•  Systems should allow information exchange encapsulating both business rules and data.
•  Services should have clearly defined boundaries.
•  Architecture and systems should be kept as simple as possible.
•  Good enough architecture is always better than perfection.
•  Functionality and business logic should be applied as early as possible.
•  Principles are not universal truths.

These are the generic principles I try to use in my work when making decisions; however, if a decision can reasonably be made by someone with a narrower scope of responsibility, defer the decision to that person or group.