TCPDUMP’ing

In Network Security & Support roles you will, at some point, find yourself jumping onto a CLI and messing around with tcpdump.

For me it’s usually when there’s a connectivity issue between two hosts, i.e. server/client or server/server. That’s where tcpdump is a massive help, as packets don’t lie. If you can show said server/developer bod, in black and white, what is going on, they can’t argue – although they still may. 😉

If I had a penny for every time a colleague blamed a firewall or the network for why their server isn’t communicating as it should, I’d be a wealthy man.

Below are some of my most frequently utilised tcpdump commands and switches, and what they do and print.

My Favourite TCPDUMP Commands

tcpdump -i any host 10.1.1.1 and host 10.2.2.2

This is a good starter to see all traffic between two hosts. Once you’ve confirmed you’re receiving output, you can add further switches to narrow the capture down.
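
For example – a quick sketch, with port 22 picked purely for illustration – you could narrow the same capture down to just the SSH traffic between the two hosts:

tcpdump -i any host 10.1.1.1 and host 10.2.2.2 and port 22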

Useful Switches

tcpdump -D shows the interfaces you can capture on

tcpdump -i any will capture traffic on all available interfaces

Use -c to stop your tcpdump after a specific number of packets have been captured

To display IPs and not hostnames use the -n switch

The default packet capture byte size is 65535, i.e. the full packet, therefore if you only want to capture the first 64 or 96 bytes of each packet, use the -s switch

-S will turn off relative sequence numbers and print the absolute ones – complete, but harder to read

Use the -w <filename>.pcap switch to write a capture to a file, and add -v to display a count of how many packets have been captured to the file.

You can read your written capture files back on the CLI with the -r switch, however the output displays just the same as a live capture, therefore I prefer to review them in Wireshark.
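
Putting a few of those switches together – a minimal sketch, where the interface, host and filename are my own placeholders – the first command writes the first 100 packets to disk and the second reads them back:

sudo tcpdump -i eth0 -n -c 100 -s 96 -w capture.pcap host 10.1.1.1

tcpdump -n -r capture.pcap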

That’s a whole series of blog posts in itself though, happy capturing 🙂

I.


Ansible & GNS3 Lab

With the constantly moving landscape in IT it’s always worth your while getting to know new stuff, if only so you know what someone in a meeting is talking about.

To that end I’ve recently been playing around with Ansible, which is a tool for automating IT infrastructure – networking kit in my realm. I’d read through a few articles on the web, and so far I’ve built the beginnings of a Cisco Ansible lab within GNS3, so I wanted to share it with you.

Taken from the Ansible website:

“Ansible delivers simple IT automation that ends repetitive tasks and frees up DevOps teams for more strategic work.”

Or, for me as a Network Engineer, it can stop me having to log into 30 different switches just to create a new VLAN :p

What I Have

At the moment I have the following setup, which I’ll run through:

  • 2 routers set up in GNS3
  • an Ubuntu server VM
  • Ansible comms from the VM into GNS3, i.e. the ability to run Ansible commands on the Ubuntu server and retrieve output from the routers in GNS3

How It’s Setup

Firstly, I downloaded the latest Ubuntu Desktop image off their website and created myself a VM – within VMware Workstation in my case, but you can use Oracle VirtualBox or VMware Player.

[Screenshot: Ubuntu_Desktop]

You will then want to update the VM with the latest packages and install Ansible, so you’ll need to make sure the VM has internet access.

  1. Update all of your packages

sudo apt-get update && sudo apt-get upgrade -y

sudo raises your privileges to the root user; apt-get update refreshes the package lists, apt-get upgrade then installs the available updates, and the -y switch accepts any forthcoming yes/no prompts.

2. Update your VM firewall – May be required depending on Ubuntu version

sudo ufw allow 22
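
Assuming ufw is actually enabled on your build, you can confirm the rule is in place with:

sudo ufw status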

3. Install Ansible on your Ubuntu VM

sudo apt-get install software-properties-common

sudo apt-add-repository ppa:ansible/ansible

sudo apt-get update -y

sudo apt-get install ansible
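
With the install finished, a quick sanity check confirms Ansible is on your path – the exact version printed will depend on what the PPA is serving at the time:

ansible --version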

4. Create your lab within GNS3

Create a new project and drag a Cloud onto your topology window. Then configure your Cloud to reside on the same subnet as you plan to have your 2, 3, 4 or 20 routers on.

[Screenshot: GNS3_Cloud_Config]

In my case I have all of my devices on Host-only network 1, however I have also given my Ubuntu server a second NIC, which is NAT’d to my local host so it has internet access. Oh, and I changed the icon of my cloud to a server, as it looks prettier…

[Screenshot: VM_Network]

5. Configure your end-hosts that you want to pull config from using Ansible. In our case these are our routers.

*Your IPs will obviously relate to the subnet your hosts reside in, and your interface will be whatever you’ve chosen.

conf t

interface fa0/0

ip address 192.168.134.25 255.255.255.0

no shut

You should now, all being well, be able to ping between your Ubuntu VM and your routers, and vice versa.

R1#ping 192.168.134.131

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.134.131, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/12/16 ms
R1#

R2#ping 192.168.134.131

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.134.131, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/9/12 ms
R2#

ish@ubuntu:~$ ping 192.168.134.25
PING 192.168.134.25 (192.168.134.25) 56(84) bytes of data.
64 bytes from 192.168.134.25: icmp_seq=1 ttl=255 time=9.18 ms
64 bytes from 192.168.134.25: icmp_seq=2 ttl=255 time=4.23 ms
^C
--- 192.168.134.25 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 4.230/7.758/11.212/2.658 ms
ish@ubuntu:~$
ish@ubuntu:~$ ping 192.168.134.30
PING 192.168.134.30 (192.168.134.30) 56(84) bytes of data.
64 bytes from 192.168.134.30: icmp_seq=1 ttl=255 time=9.48 ms
64 bytes from 192.168.134.30: icmp_seq=2 ttl=255 time=11.1 ms
^C
--- 192.168.134.30 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 4.042/7.733/11.194/2.785 ms
ish@ubuntu:~$

* If ping doesn’t work it’s always worth turning off your Windows firewall temporarily and re-checking.

6. Configure SSH on your end-hosts

I am using 3725 series Cisco routers in my lab – IOS c3725-adventerprisek9-mz.124-25d.bin – but you should be OK using any router image as long as it’s a K9 (crypto) image. Just remember to set that Idle PC!

conf t

ip domain name lab

crypto key generate rsa general-keys modulus 1024

aaa new-model

aaa authentication login default local

username cisco secret cisco

enable secret cisco
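
To confirm SSH is now up on the router, the following should report SSH as enabled along with the version in use:

show ip ssh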

7. Add your end-host IP addresses to the /etc/ansible hosts file within your Ubuntu VM

ish@ubuntu:~$ cd /etc/ansible/

ish@ubuntu:/etc/ansible$ sudo nano hosts

[Routers]

R1 ansible_host=192.168.134.25

R2 ansible_host=192.168.134.30

Ctrl+X, then Y (and Enter) to save your edited file and exit out

ish@ubuntu:/etc/ansible$ cat hosts

[Routers]
R1 ansible_host=192.168.134.25
R2 ansible_host=192.168.134.30

ish@ubuntu:/etc/ansible$

8. Test your configuration

We can run the following command from the command line of our Ubuntu VM.

cd /etc/ansible

ansible all -m raw -a 'show version | i uptime' -u cisco -k

You should be prompted for the device password and, if that’s entered correctly, the following should be printed.

ish@ubuntu:/etc/ansible$ ansible all -m raw -a 'show version | i uptime' -u cisco -k
SSH password:
R2 | SUCCESS | rc=0 >>
R2 uptime is 2 hours, 4 minutes
Shared connection to 192.168.134.30 closed.

R1 | SUCCESS | rc=0 >>
R1 uptime is 2 hours, 4 minutes
Shared connection to 192.168.134.25 closed.

ish@ubuntu:/etc/ansible$

There we have it. I plan to delve into this much more, along with the use of Ansible Playbooks, but to simply test Ansible commands over SSH this should do nicely.
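
As a taste of where the Playbooks side goes, the ad-hoc command above translates into a minimal playbook something like the below – a sketch only, with the filename and task name being my own inventions:

---
- hosts: Routers
  gather_facts: no
  tasks:
    - name: Grab the uptime from each router
      raw: show version | i uptime

Save it as, say, uptime.yml, run ansible-playbook uptime.yml -u cisco -k, and you should see the same uptime lines come back from both routers.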

Router Config, if required, can be found here.

I.


HP Proliant Gen8 Home Server

Home servers have become really popular over the last 5 years or so, whether they be a regular NAS or a more fully-fledged home server, with MS Windows, Linux or a hypervisor as the OS.

I decided to jump aboard the bandwagon, so picked up a Gen8 HP ProLiant from eBuyer to utilise as a mix of the below:

  1. Plex Media Server
  2. Home Lab for study

I have since added a 4TB WD Red hard drive and will be picking up 16GB (2×8) of RAM in the coming weeks to max out its two DIMM slots. I will, at some point, also add additional disks and employ RAID – with 0 (striping) or 1 (mirroring) being the options.

Plex Media Server

Plex is a client-server media player system that allows you to consolidate all of your pictures, films, music etc. in one location and access it all from anywhere with an internet connection. You can stream your movies on a range of devices (iPads, smartphones etc.), but you do have to pay £4.99 for the privilege – in my eyes it’s well worth it.

Home Lab

This was the main reason I picked up a decent home server. As I work away the Gen8 allows me to remotely connect onto my home server and lab/test away in my own virtualised environments!

I initially went with Xubuntu as my server OS, which is perfect for home use as it’s lightweight and you don’t need to be a Linux developer to navigate around it. However, although the Gen8 supports Red Hat Linux (RHEL) out of the box, to go above a 640×480 resolution you have to create your own bespoke driver!

If I had to do that just to get a usable resolution, I assumed there would be other issues I’d encounter down the line too, so I decided to wuss out and rebuild it with something more friendly – Windows Server 2016 Essentials!

After a couple of weeks running Server 2016 I decided to start fresh again, so went for a hypervisor. My choice was the most popular: VMware’s ESXi. This now means that I can spin up as many VMs as I desire (resources allowing) – for example I have a Server 2016 VM, which sits happily on my LAN as my Plex server. I then access all of my VMs using the vSphere Client, below.

[Screenshot: vSphere Client]

Other VMs include Linux distros – Mint, Ubuntu etc. – and also a Cisco 1000v virtual router so I can try my hand at some Ansible Playbooks.

Bugbears

There are a few issues I had/have with the Gen8; for all its positives, here are a few negatives.

  • iLO requires a licence to mount a virtual CD – although a 60-day workaround exists
  • No DVI or HDMI, just VGA
  • NTFS pen-drives aren’t supported, only FAT32, which carries a 4GB file-size limit
  • To install an OS you need to load the relevant disk drivers before the OS will see the array you created beforehand in the BIOS – I had them on a USB pen-drive and you can grab them from here. You can also circumvent this using the HP Intelligent Provisioning utility, but I prefer the old-fashioned way.

I.


Website Resilience in AWS

In February of this year Amazon Web Services suffered a pretty bad outage on its S3 (Simple Storage Service) platform, which is used by millions of its customers, predominantly for hosting website content, and the issues caused many of these sites to go dark.

Now, although one should, to some degree, expect their content to be unavailable at some point when it’s hosted externally in the public cloud (they don’t offer 100% availability, derr!), it would appear those impacted decided to skimp on resilience.

[Diagram: Non-resilient Website]

The above diagram illustrates a regular website being hosted on AWS. You type in a domain name, a lookup is performed via AWS’s DNS service – Route 53 – and you’re forwarded on to a Linux or Windows VM running your web server code, with your content served to the requester from S3 buckets. If you’re popular enough to have comments/feedback etc. then this is stored in a back-end RDS database.

Now let’s take S3 buckets: your data is replicated across multiple facilities within a region, but not across geographical regions – you can configure cross-region replication, but must pay for the benefit.
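
As an aside – a rough sketch only, with the bucket name and JSON filename as placeholders – switching on cross-region replication is a couple of CLI calls, and versioning must be enabled on the source and destination buckets first:

aws s3api put-bucket-versioning --bucket my-site-bucket --versioning-configuration Status=Enabled

aws s3api put-bucket-replication --bucket my-site-bucket --replication-configuration file://replication.json

The replication.json names the destination bucket (in another region) and an IAM role that S3 assumes to copy objects across.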

[Diagram: Website_Resilient_Regions]
Website Resilience

In this scenario you mitigate any real possibility of your business-critical website going dark: even if Amazon has S3 issues in an AZ or even a region, for two regions to go dark at once would require something pretty spectacular (read: devastating) to occur.

You have DNS resolution spread across multiple ELBs (Elastic Load Balancers), therefore if one side fails you still have a second juicy AZ or region to fall back on and point your requesting users at.

There are a few more bells and whistles in the above diagram, notably a CloudFront distribution to serve cached files to users from geographically closer servers, and the use of Auto Scaling groups to automatically scale the web server cluster up and down if demand warrants it.

The recent AWS outage shows us that we all need to think about how important, and how costly, it would be if domain X were to go offline.

I.


The First (of Many)

Hi, welcome to my blog. I hope in the coming weeks, months and years this blog will be filled with useful posts to interest a wide range of tech readers.

The main focus will involve aspects of my job as a Network Security & Support Engineer, notably revolving around Cisco, but with a few other vendors too. I tend to lab plenty, so I plan to share those labs with you.

I am a Cisco Certified Network Professional, hence the Cisco focus, but also an AWS Certified Solutions Architect, therefore expect some posts on the current king of the cloud too.

Thanks for coming by.

I.
