Cisco VSS

VSS has been around for some years now; it allows you to virtualise two Cisco 6500 chassis and morph them into a single, logical unit. Once configured, your single switching system is known as a Virtual Switching System 1440 (VSS1440*).

The main benefits include operational efficiency, a single point of management/configuration and scaling of the system’s bandwidth, i.e. pooling the resources of two chassis.

VSS_Cisco_Pic

VSS is made up of the following:

  1. Virtual Switch Members – these are your 6500 chassis
  2. Virtual Switch Links (VSL) – these are 10 Gigabit Ethernet connections (max of 8) and are the links between the VSMs.

VSLs can carry regular traffic in addition to the management comms between the two 6500s.

VSL links are required; however, you will also want to configure fast-hello links – ideally a pair. These links provide dual-active detection, i.e. if a disgruntled employee were to sever all the VSL links, the VSS would still be able to determine which switch is the active member. If these additional links are not configured you can end up with a split-brain scenario.
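For reference, a minimal sketch of enabling fast-hello dual-active detection – the domain number and interface here are examples, and exact syntax varies by IOS release:

```
switch virtual domain 100
 dual-active detection fast-hello
!
interface GigabitEthernet1/5/48
 dual-active fast-hello
```

The key point is that the fast-hello interfaces are dedicated, directly connected links that carry nothing but the dual-active hellos.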

Split Brain

If the standby switch detects a complete loss of the VSL, it assumes the current active chassis has failed and will take over as the active member.

This is not to say your network will not have an outage: if the VSL links were to be lost, the previously active switch (detected via the fast-hello links) will go into recovery mode. In this mode, ALL ports except the VSL ports are shut down until the VSL links recover, at which point the switch will reload into its normal state.

I recently had an issue with a client where the VSL links were temporarily severed, and although we were running fast-hello links a split brain still occurred and caused a widespread outage. After the VSL links were re-patched and the switches rebooted, service was restored.

Troubleshooting this at the time, the switches did not reboot and recover automatically after the VSL links were re-established. I still ponder why…

*1440 refers to the two Supervisor 720 cards (one in each chassis) being active at the same time, thus combined gives you 720×2 = 1440.

I.


Favourite AWS Services

I’m a fan of Amazon Web Services – mainly from a technical perspective, as it’s not necessarily cheaper to move from on-prem to on-cloud, so always read the small print before uplifting your whole datacentre ;). In fact, it interested me so much I sat the Certified Solutions Architect exam last year and thoroughly enjoyed going through the material and labbing along the way.

I like to keep track of updates to current AWS services, as well as new ones that are released, so thought I’d highlight 5 of my current favourite offerings.

5. Elastic Compute Cloud (Amazon EC2)

EC2_Icon

EC2 is the bread and butter of AWS. It provides you with all the compute grunt you could ever wish for or need. Need 5 Linux VMs for a web server cluster? Or how about the ability to auto-scale when demand requires it, then spin those same servers down automatically when demand tails off? Don’t worry, EC2 can do just that, as well as a vast amount more.

To spin up an EC2 instance (VM) you have a few options. You can:

  • Use their quick start utility, which provides you with ~30 of the most popular AMIs (Amazon Machine Images) to choose from. Think your standard, hardened versions of Amazon Linux, RedHat, SUSE and Fedora, plus the Windows and Ubuntu variants too
  • Choose an AMI that you have created yourself, perhaps a specific build of server with pre-installed software
  • Head over to the AWS Marketplace and utilise for free, or buy, specific software that runs in the cloud. Think BIG-IP from F5, Splunk or Juniper etc.
  • Launch a community AMI that has been created by a member of the community

It’s frighteningly easy to get up and running – just make sure to terminate the instance(s) when you’re finished playing, otherwise the costs can soon start to build without you even knowing.
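If you prefer the CLI to the console, the spin-up-and-terminate cycle can be sketched with the AWS CLI – the AMI ID, key pair name and instance ID below are placeholders, not real values:

```
# Launch a single t2.micro from a chosen AMI (IDs are placeholders)
aws ec2 run-instances --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro --count 1 --key-name my-key-pair

# ...and terminate it when you're done playing, to stop the billing
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```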

Intro to EC2 Video

4. Kinesis

Kinesis_Icon

If you’re interested in processing or analyzing streams of data – think Twitter for example – then Kinesis is a really useful service.

You can use it to build custom applications to collect and analyze streaming data for a bespoke set of needs or requirements. One example could be monitoring Twitter for every time the tag #JustinBieber (whoever he is….) is seen, then pushing that data through Firehose to the analytics engine to present users with personalised content – graphs, diagrams, feeds etc. Powerful stuff.
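For a flavour of the API, pushing a single record into a stream from the AWS CLI looks roughly like this – the stream name and payload are invented for illustration:

```
aws kinesis put-record --stream-name tweet-stream \
    --partition-key user123 --data "#JustinBieber spotted again"
```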

As per the AWS Kinesis FAQs, a Kinesis stream flow:

Kinesis_Flow

Amazon Kinesis Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream.

3. Trusted Advisor

Trusted_Advisor

Trusted Advisor is like having your own AWS architect on-hand, 24 hours a day, to audit your AWS account and tell you where it’s vulnerable, where you could save money and how you could increase performance. Whenever you want.

Trusted_Advisor_Checks

It’s pretty simple – if you use AWS, you should be using TA.

2. Identity & Access Management

IAM

IAM is certainly in the top 3 of the most important AWS services. With it you can pretty much control all access to all of your account’s resources, whether for groups or individuals.

Straight out of the box you will want to create users (then swallow your root credentials to keep them safe…) and manage their identities by granting generic or bespoke permissions. This way they’ll only have access to the resources they need.
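As a minimal sketch of that workflow via the AWS CLI (the user name is just an example), you might create a user and attach one of AWS’s generic managed policies:

```
# Create the user, then grant read-only access via an AWS managed policy
aws iam create-user --user-name alice
aws iam attach-user-policy --user-name alice \
    --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
```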

1. Virtual Private Cloud (VPC)

VPC

As a Network bod myself, VPC is of real interest to me. It allows you to provision your own isolated CIDR block, allocate subnets and configure routing tables, all within AWS. You can then architect your solutions in a virtual network that you have defined and could, in theory, replicate your on-prem, private IP schemas in the cloud!
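That kind of address planning is easy to sanity-check before you build anything. Here’s a quick sketch using Python’s standard ipaddress module – the CIDR block is just an example allocation:

```python
import ipaddress

# A hypothetical VPC allocation, mirroring an on-prem RFC 1918 schema
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 subnets, e.g. one per tier per Availability Zone
subnets = list(vpc.subnets(new_prefix=24))

print(subnets[0])    # 10.0.0.0/24
print(subnets[1])    # 10.0.1.0/24
print(len(subnets))  # 256 subnets available
```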

You can also create a hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.

AWS VPC FAQ.

I feel that VPC gives a little bit back to the Network Engineer: having just seen half their DC shifted to VMs in the cloud, they still get to play with IP subnetting and IP allocation.

A Quick AWS explanation of VPC can be found here.

If you want more AWS content than any normal person could ever be able to digest, then head over to the AWS YouTube channel.

I.


IP SLA – ICMP

Cisco IOS IP SLA is a network performance and diagnostic utility that uses real-time monitoring to check the health of an IP-enabled device. This involves generating traffic in a reliable and predictable way.

Simple ICMP Monitoring

Topology:

IPSLA_Topology

The routers could be geographically far apart or relatively near one another, but I want to utilise IP SLA to continuously monitor the f0/0 link on R2 from R1 – perhaps for failover purposes, or because the interface is getting over-utilised and we want to see if this is leading to dropped packets, and we don’t have any other monitoring tools.

The configuration for your basic ICMP IP SLA is really easy. As we’re only sending ICMP packets, we don’t need to configure the destination host with a responder – we’ll get to that in another post – think VoIP!

Note IOS version used – C3725-ADVENTERPRISEK9-M, Version 12.4(25d)

Configure R1’s f0/0 interface with the following:

IPSLA_Port_Config
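In case the screenshot is hard to read, the probe on this 12.4 image uses the older ip sla monitor syntax – roughly the following, with the target address and frequency as examples:

```
ip sla monitor 1
 type echo protocol ipIcmpEcho 10.0.0.2 source-interface FastEthernet0/0
 frequency 10
!
ip sla monitor schedule 1 life forever start-time now
```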

Once this config is in place we can run some commands to see the information we’re receiving.

IPSLA_ShowList

Those I find most useful:

show ip sla monitor operational-state 1

IPSLA_Port_OpState

  • Entry number – The number we configured for our ICMP IP SLA
  • Operations attempted – How many ICMP packets have been attempted to be sent
  • Operational state of entry – Is our IP SLA working or not
  • Latest RTT – How long in milliseconds did the last IP SLA test Round Trip Time take
  • Latest operation return code – Did the latest test return ok – yes!

show ip sla monitor statistics 1

IPSLA_Stats

  • Number of successes – How many tests have succeeded in total
  • Number of failures – How many tests have failed in total

As you can see, there’s quite a lot of good information, even with just a simple ICMP IP SLA configured.

As well as ICMP you can also configure tests such as DNS, specific UDP ports, TCP ports, HTTP (really cool) and a really important one if you’re looking to roll out VoIP – UDP jitter!

I.

My Tech Visits

During the working day – over lunch usually, or when I have some time to waste – I’ll frequent a number of tech-related sites, so I thought I’d jot them down here for reference.

Packet Pushers

Packet_Pushers_Logo

http://packetpushers.net/

I don’t visit the Packet Pushers website too often, but they are probably my most frequently listened to podcast. There are some informative blog posts on their site and you can also access all of the podcasts there too.

The Register

the_register_2

https://www.theregister.co.uk/

The Register is easily the IT news website I visit most. It not only has up-to-date news, but the writers add their own satirical/comical slant, which I really like. Highly recommended.

IPSpace

IP_Space

http://www.ipspace.net/

IPSpace is a networking-orientated blog, not affiliated with any vendor (for the record), that’s run by CCIE #1354 Ivan Pepelnjak. On the blog you’ll find excellent articles, webinars and books relating to architectures, real-life solutions, technologies and more.

Packet Life

Packet_Life

http://packetlife.net/

Packet Life is a blog by Jeremy Stretch, an extremely knowledgeable network engineer who enjoys sharing what he’s learned with his readers.

There are some fantastic networking cheat sheets on this site, along with lots of great tech posts, packet capture trace files, software and book recommendations and much, much more. Definitely worth a look.

I.


TCPDUMP’ing

Within Network Security & Support roles it’s normal practice to, at some point, have to jump onto a CLI and start messing around with tcpdump.

For me it’s usually when there’s a connectivity issue between two hosts, i.e. server/client or server/server. That’s where tcpdump is a massive help, as packets don’t lie. If you can show said server/developer bod, in black and white, what is going on, they can’t argue – although they still may. 😉

If I had a penny for every time a colleague blamed a firewall or the network for why their server isn’t communicating as it should, I’d be a wealthy man.

Below are some of my most frequently utilised tcpdump commands and switches, and what they do and print.

My Favourite TCPDUMP Commands

tcpdump -i any host 10.1.1.1 and host 10.2.2.2

This is a good starter to see all traffic between two hosts. After you’ve confirmed you’re receiving output you can then look to add further switches to filter further.

Useful Switches

tcpdump -D shows the interfaces you can capture on

tcpdump -i any will capture traffic on all available interfaces

Use -c to stop your tcpdump after a specific number of packets have been captured

To display IPs and not hostnames use the -n switch

The default packet capture size is 65535 bytes, i.e. the full packet; if you only want to capture the first 64 or 96 bytes of each packet, use the -s switch

-S will turn off relative sequence numbers and give you the complete, harder to read strings

Use the -w <filename>.pcap switch to write a capture to a file, and add -v to display a count of how many packets have been captured to the file.

You can read your written capture files back on the CLI (with -r), and they display just the same as a live capture; I prefer to review them in Wireshark though.
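Putting several of those switches together into one capture – the hosts, port and filename below are only examples:

```
# 200 packets, all interfaces, numeric addresses, first 96 bytes of each
# packet, written to a file with a running count printed as we go
sudo tcpdump -i any -n -c 200 -s 96 -v -w web_issue.pcap \
    host 10.1.1.1 and tcp port 443
```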

That’s a whole series of blog posts in itself though, happy capturing 🙂

I.


Ansible & GNS3 Lab

With the constantly moving landscape in IT it’s always worth your while to get to know new stuff, if nothing more than to know what someone in a meeting is talking about.

To that end I’ve recently been playing around with Ansible, which is a tool to automate IT infrastructure – networking kit in my realm. I’d read through a few articles on the web, and so far I’ve built the beginnings of a Cisco Ansible lab within GNS3, so wanted to share it with you.

Taken from the Ansible website:

“Ansible delivers simple IT automation that ends repetitive tasks and frees up DevOps teams for more strategic work.”

Or, for me as a Network Engineer, it can stop me having to log into 30 different switches to create a new VLAN :p.

What I Have

At the moment I have the following setup, which I’ll run through:

  • 2 routers set up in GNS3
  • an Ubuntu server VM
  • Ansible comms from the VM into GNS3, i.e. the ability to run Ansible commands on the Ubuntu server and retrieve output from the routers in GNS3

How It’s Setup

Firstly, I downloaded the latest Ubuntu Desktop image from their website and created a VM – within VMware Workstation in my case, but you can use Oracle VirtualBox or VMware Player.

Ubuntu_Desktop

You will then want to update the VM with the latest repository code and install Ansible, so you’ll need to make sure the VM has internet access.

1. Update all of your packages

sudo apt-get update -y

sudo raises your privileges to those of the root user, and the -y switch accepts any forthcoming yes/no prompts during the update.

2. Update your VM firewall – May be required depending on Ubuntu version

sudo ufw allow 22

3. Install Ansible on your Ubuntu VM

sudo apt-get install software-properties-common

sudo apt-add-repository ppa:ansible/ansible

sudo apt-get update -y

sudo apt-get install ansible

4. Create your lab within GNS3

Create a new project and drag a Cloud onto your topology window. Then configure your Cloud to reside on the same subnet you plan to have your 2, 3, 4 or 20 routers on.

GNS3_Cloud_Config

In my case I have all of my devices on Host-only network 1; however, I have also given my Ubuntu server a second NIC, which is NAT’d to my local host so it has internet access. Oh, and I changed the icon of my cloud to be a server, as it looks prettier…

VM_Network

5. Configure your end-hosts that you want to pull config from using Ansible. In our case these are our routers.

*Your IPs will obviously relate to the subnet your hosts reside in, and your interface will be whatever you’ve chosen.

conf t

interface fa0/0

ip address 192.168.134.25 255.255.255.0

no shut

You should now, all being well, be able to ping between your Ubuntu VM and your routers, and vice versa.

R1#ping 192.168.134.131

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.134.131, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/12/16 ms
R1#

R2#ping 192.168.134.131

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.134.131, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/9/12 ms
R2#

ish@ubuntu:~$ ping 192.168.134.25
PING 192.168.134.25 (192.168.134.25) 56(84) bytes of data.
64 bytes from 192.168.134.25: icmp_seq=1 ttl=255 time=9.18 ms
64 bytes from 192.168.134.25: icmp_seq=2 ttl=255 time=4.23 ms
^C
--- 192.168.134.25 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 4.230/7.758/11.212/2.658 ms
ish@ubuntu:~$
ish@ubuntu:~$ ping 192.168.134.30
PING 192.168.134.30 (192.168.134.30) 56(84) bytes of data.
64 bytes from 192.168.134.30: icmp_seq=1 ttl=255 time=9.48 ms
64 bytes from 192.168.134.30: icmp_seq=2 ttl=255 time=11.1 ms
^C
--- 192.168.134.30 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 4.042/7.733/11.194/2.785 ms
ish@ubuntu:~$

* If ping doesn’t work, it’s always worth turning off your Windows firewall temporarily and re-checking.

6. Configure SSH on your end-hosts

I am using 3725 series Cisco Routers in my lab – IOS c3725-adventerprisek9-mz.124-25d.bin, but you should be OK using any router image as long as it supports K9 – just remember to set that Idle PC!

conf t

ip domain name lab

crypto key generate rsa general-keys modulus 1024

aaa new-model

aaa authentication login default local

username cisco secret cisco

enable secret cisco

7. Add your end-host IP addresses to the /etc/ansible/hosts file within your Ubuntu VM

ish@ubuntu:~$ cd /etc/ansible/

ish@ubuntu:/etc/ansible$ sudo nano hosts

[Routers]

R1 ansible_host=192.168.134.25

R2 ansible_host=192.168.134.30

Ctrl+X, then Y (and Enter), to save your edited file and exit out

ish@ubuntu:/etc/ansible$ cat hosts

[Routers]
R1 ansible_host=192.168.134.25
R2 ansible_host=192.168.134.30

ish@ubuntu:/etc/ansible$

8. Test your configuration

We can run the following command from the command line of our Ubuntu VM.

cd /etc/ansible

ansible all -m raw -a 'show version | i uptime' -u cisco -k

You should be prompted for the device password and, if that’s entered correctly, the following should be printed.

ish@ubuntu:/etc/ansible$ ansible all -m raw -a 'show version | i uptime' -u cisco -k
SSH password:
R2 | SUCCESS | rc=0 >>
R2 uptime is 2 hours, 4 minutes
Shared connection to 192.168.134.30 closed.

R1 | SUCCESS | rc=0 >>
R1 uptime is 2 hours, 4 minutes
Shared connection to 192.168.134.25 closed.

ish@ubuntu:/etc/ansible$

There we have it. I plan to delve into this much more, including the use of Ansible Playbooks, but to simply test Ansible commands over SSH this should do nicely.
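As a taste of where the Playbook side goes, the same uptime check could be sketched as a playbook – treat this as an outline only, since module names and connection settings vary between Ansible versions:

```
# uptime.yml - run with: ansible-playbook uptime.yml -u cisco -k
- hosts: Routers
  gather_facts: no
  connection: network_cli
  vars:
    ansible_network_os: ios
  tasks:
    - name: Grab each router's uptime
      ios_command:
        commands:
          - show version | include uptime
      register: output

    - debug:
        var: output.stdout_lines
```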

Router Config, if required, can be found here.

I.

GNS3_Lab

HP Proliant Gen8 Home Server

Home servers have become really popular over the last 5 years or so, whether as a regular NAS or a more fully fledged home server running MS Windows, Linux or a hypervisor as the OS.

I decided to jump aboard the bandwagon, so picked up a Gen8 HP Proliant from eBuyer to utilise as a mix of the below:

  1. Plex Media Server
  2. Home Lab for study

I have since added a 4TB WD Red hard drive and will be picking up 16GB (2×8GB) of RAM in the coming weeks to max out its two DIMM slots. I will, at some point, also add additional disks and employ RAID – with 0 (striping) or 1 (mirroring) being the options.

Plex Media Server

Plex is a client-server media player system that allows you to consolidate all of your pictures, films, music etc. in one location, and access it all from anywhere with an internet connection. You can stream your movies on a range of devices (iPads, smartphones etc), though you do have to pay £4.99 for the privilege – but in my eyes it’s well worth it.

Home Lab

This was the main reason I picked up a decent home server. As I work away the Gen8 allows me to remotely connect onto my home server and lab/test away in my own virtualised environments!

I initially went with Xubuntu as my server OS, which is perfect for home use as it’s lightweight and you don’t need to be a Linux developer to navigate around it. However, although the Gen8 supports RedHat Linux (RHEL) out of the box, to go above a 640×480 resolution you have to create your own bespoke driver!

Having to do this just for a usable resolution suggested there would be other issues to encounter down the line, so I decided to wuss out and rebuild it with something more friendly – Windows Server 2016 Essentials!

After a couple of weeks running Server 2016 I decided to start afresh, and went for a hypervisor – my choice being the most popular, VMware’s ESXi. This now means I can spin up as many VMs as I desire (resources allowing) – for example, I have a Server 2016 VM which sits on my LAN happily as my Plex server. I then access all of my VMs using the vSphere Client below.

vSphere Client

Other VMs include Linux distros – Mint, Ubuntu etc – and also a Cisco 1000v virtual router so I can try my hand at some Ansible Playbooks.

Bugbears

There are a few issues I had/have with the Gen8; for all its positives, here are a few negatives.

  • iLO requires a licence to mount a virtual CD – there is a 60-day trial workaround
  • No DVI or HDMI, just VGA
  • NTFS pen drives are not supported, only FAT32, which has a 4GB file-size limit
  • To install an OS you need to load the relevant disk drivers before the OS will see the array you created earlier in the BIOS – I had them on a USB pen drive, and you can grab them from here. You can also circumvent this using the HP Intelligent Provisioning utility, but I prefer the old-fashioned way.

I.

HP_Pro_Gen8

Website Resilience in AWS

In February of this year Amazon Web Services suffered a pretty bad outage on its S3 (Simple Storage Service) platform, which is used by millions of its customers, predominantly for hosting websites, and the issues caused many of those sites to go dark.

Now, although one should expect externally hosted content to be unavailable at some point when it sits in the public cloud (they don’t offer 100% availability, derr!), it would appear those impacted decided to skimp on resilience.

Non-resilient Website

The above diagram illustrates a regular website being hosted on AWS. You type in a domain name, a lookup is performed via AWS’s DNS service – Route 53 – you’re forwarded on to a Linux or Windows VM running your web server code, and your content is served to the requester via S3 buckets. If you’re popular enough to have comments/feedback etc. then this is stored in a back-end RDS database.

Now let’s take S3 buckets: data is replicated across multiple facilities within a region, but not across geographical regions. You can configure cross-region replication, but must pay for the benefit.

Website_Resilient_Regions
Website Resilience

In this scenario you mitigate any real possibility of your business-critical website going dark: even if Amazon have S3 issues in an AZ, or even an entire region, for two regions to go dark would require something pretty spectacular (read: devastating) to occur.

You have DNS resolution occurring using multiple ELBs (Elastic Load Balancers), therefore if one lookup fails you still have a second juicy AZ or region to fall back on and point your requesting users at.

There are a few more bells and whistles in the above diagram, notably a CloudFront distribution to serve cached files to users from geographically closer servers, and the use of Auto Scaling groups to automatically scale the web server cluster up and down if demand warrants it.

The recent AWS outage shows us that we all need to think about how important, and how costly, it would be if domain X were to go offline.

I.

website

The First (of Many)

Hi, welcome to my blog. I hope in the coming weeks, months and years this blog will be filled with useful posts to interest a wide range of tech readers.

The main focus will involve aspects of my job, as a Network Security & Support Engineer, and notably revolve around Cisco, but also a few other vendors. I tend to lab plenty, and I plan to share those labs with you.

I am a Cisco Certified Networking Professional, hence the Cisco focus, but also an AWS Certified Solutions Architect, so expect some posts on the current king of the cloud too.

Thanks for coming by.

I.

