Terraforming – Part 1

Infrastructure as code (IaC) is now somewhat of a necessity for a Network Engineer in 2024, and if you think about it, it makes perfect sense. With more and more workloads moving to cloud platforms, why would you waste time in the GUI when you can code your environments to deploy, replicate them with ease for Test and Pre-Prod, amend them when required, and always have an as-is state view – how it should look, written down in code!

The tools I am using to embark down this rabbit hole are (I’m on a Mac):

  • Homebrew – a great package manager for macOS.
  • Terraform – installed via Homebrew.
  • AWS CLI – as above, via Homebrew.
  • Docker Desktop – if you want to write code for automating container creation/deletion etc. as another provider to play with.
  • Import AWS IAM Access Keys to the AWS CLI.
  • A Code Editor – I like VSCode because it’s free, but also because there are extensions for many programming languages that help greatly.

Building the environment

Install Homebrew. I ran the curl command from the Homebrew website directly in a Terminal session window. Just make sure to watch out for the additional steps at the end – I missed them the first time.

Install Terraform. I followed the Homebrew on macOS instructions on the developer.hashicorp.com website – obviously, if your OS is different, choose the option that suits.

Install the AWS CLI. I ran the Command Line Installer (Terminal) from the AWS CLI install guide. Again, depending on your flavour of OS, choose accordingly.

Extra! Install Docker Desktop. If you’d like to use Terraform for spinning up Docker containers too, I’ve found Docker Desktop to be great. Grab the relevant version here.

Import AWS IAM Access Key Credentials. Note: make sure to download the access key file and store it somewhere secure (the key is only available at time of creation!) – or look at utilising Roles (short-term access). This import permits Terraform to authenticate against the relevant Provider – AWS in this instance. Alternatively, run aws configure once the AWS CLI has been installed to input your access key credentials, which gives the AWS CLI what it needs to make AWS API calls.
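
If you go the aws configure route, the prompts look like this (the key values below are placeholders – use your own, and whichever region suits you):

aws configure
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Default region name [None]: eu-west-2
# Default output format [None]: json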

Code Editor. VSCode tends to come out top of many a chart, therefore I went with it, but there are plenty of others to choose from, so see which you like – e.g. Sublime Text, Notepad++, Espresso (Mac).

Once that’s in place you can launch VSCode and open up a terminal window to create your AWS working directory. Note: each Terraform configuration must be in its own working directory, e.g. AWS, Docker, GCP, Azure.

Within your Terminal window, you can make a new directory to work from.

mkdir terraform-aws

Then navigate into your newly created directory.

cd terraform-aws

Create a new file, which will hold our Terraform configuration.

touch main.tf

Then edit this file to begin building out your code. Note: Other text editors are also available.

nano main.tf

Once the directory is created I open the folder within VSCode so I can write code from within the app’s editor, as opposed to the more clunky, but adequate, terminal window. File > Open Folder > the local directory created above. This will then open in the left Explorer pane. For added quality of life you can install some beneficial extensions such as HashiCorp Terraform, Terraform and Terraform Autocomplete.
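
To give a hedged flavour of where we’re heading before Part 2, a minimal main.tf pointing Terraform at the AWS provider might look something like the below (the version constraint and region are example values only):

cat > main.tf <<'EOF'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS provider - credentials come from the AWS CLI setup above
provider "aws" {
  region = "eu-west-2"
}
EOF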

Now that we have our environment set up, we can start writing Terraform code to build out AWS resources and/or Docker containers. In Part 2 we’ll look at the following – the matching commands are sketched after the list:

  • Building out our code in the main.tf file
  • Initialising the new configuration to pull down the relevant Providers
  • Formatting and validating our config
  • Creating the infrastructure!
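
For reference, each of those steps maps to a core Terraform command:

terraform init      # pull down the relevant Providers (AWS, Docker etc.)
terraform fmt       # tidy the formatting of your .tf files
terraform validate  # sanity-check the configuration
terraform apply     # create the infrastructure!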

Thanks,
Ish

Home Network – with pictures

Having decided on ESXi for my type 1 hypervisor, over alternatives such as Proxmox, I thought I’d put together a simple home network diagram to get me visualising everything.

I used draw.io as I like the look of it, but there are many more network diagram tools to choose from if you do a quick Google.

I’m yet to decide if I want to VLAN off my virtual estate with vSwitches (internal VMware ESXi L2 switches) and Port Groups, but if I feel it will add value I may.

I’m utilising the OpenVPN capabilities of my ASUS router to VPN onto my LAN and access my Lab (and Plex library :)) when I’m away from home – using certificates and a username/password. I’m also thinking about picking up a Dell iDRAC licence, which will give me the ability to turn on the R720 remotely when required.

Home Lab Take Two

Having sold my older Dell R710 server a few months back I decided, with a recent purchase of Cisco Modelling Labs, to pick up a more capable replacement. After scouring eBay for a few weeks I found an R720 (non-XD model) within my price range.

The specs of the new server are:

  • Model – Dell PowerEdge R720 2U Rack Server
  • CPU – Two Intel Xeon E5-2660 v2 2.2GHz, 10 cores each
  • Memory – 256GB
  • Storage – None included, but eight 3.5” front bays

I picked up two 600GB 15K SAS 3.5” hard disks to slot in and provide some generic storage, and a Fusion-io ioDrive II 1.2TB internal SSD to use up one of the PCIe slots on the server’s motherboard.

The server’s main function will be to run my Cisco CML lab VM, as I have another HP Gen 8 server to run my other VMs – notably my Plex and Valheim servers :). CML can get ridiculously resource hungry if you want to spin up labs with 10/15+ nodes that include NX-OS devices, hence the requirement for plenty of CPU cores and memory.

To launch CML I’ll be running VMware ESXi as a bare-metal installation and installing the OVA file as a VM. I have chosen ESXi v6.7, as unfortunately v7 does not support the Fusion-io SSD drivers – thanks, Dell! The R720, like the R710 I had before it, has an onboard USB slot, so I’ll be booting and running the OS from a USB stick to save external/PCIe storage space.

On a storage note, I can live with disk failure or VM loss, so will only be running the virtual disk in a RAID 0 configuration to utilise all the disk space.

When I have it all spun up I’ll put another post up.

Building out a VPC Part 2

Following on from my previous post, we’ll now continue building out our VPC and perform some tests to prove all is as it should be (and as secure as it should be).

Security Groups

Security Groups are our firewalls in the AWS cloud; they allow us to permit/deny access at the protocol and port level.

We’re going to create two new Security Groups: one to permit the outside world in over the internet to our public-facing instance – a web server, for example.

Our second SG will permit our EC2/public-facing resources to talk to our backend – perhaps we have our database tier in the private addressing space.

Navigate to your VPC Dashboard and select “Security Groups” from the left hand pane.

Create a new Security Group. Give it a useful name, description and associate it with your VPC. Hit “Create”.

Select your newly created SG and add your required rules. This is our web-facing instance in our public subnet, so for this web server we’re going to allow SSH and HTTP access from the internet.

Here you can see I have added two rules into the “Inbound Rules” tab within the SG. The top rule allows me to SSH into my instances sharing the same Security Group, and the second allows clients to come in over TCP/80 – HTTP.
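If you prefer the CLI, the equivalent would be something like the below – the VPC and SG IDs are placeholders, so substitute your own:

# create the security group for the public-facing web server
aws ec2 create-security-group \
  --group-name web-public-sg \
  --description "Public web server SG" \
  --vpc-id vpc-0123456789abcdef0

# allow SSH (TCP/22) and HTTP (TCP/80) in from the internet
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0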

To prove this SG we’ll spin up an EC2 instance and check two things:

  • If we can SSH into the instance
  • If the instance can get out to the internet.

There are a few details we need to make sure are correct on the “Configure Instance Details” screen; these are:

  • Network – You want the instance provisioned into your VPC
  • Subnet – You want the instance in the correct public subnet we created earlier
  • Auto-assign Public IP – As this is going to be a public facing instance/server, we want a public IP address to be automatically configured for us.

Before we try to SSH to our newly created instance we need to assign it to the Security Group we created earlier for our public facing instances.

Navigate to “Actions > Networking > Change Security Groups”.

Deselect the wizard SG, as that was created when we spun up the instance, and select the appropriate SG that we created earlier to permit SSH and HTTP. This step could have been done during instance creation, but I find it easier to do it afterwards, as I get a better feel for what the new instance can and can’t do.

SSH

Attempt to SSH into the instance via PuTTY. If you’ve not done this before I suggest reading this article, as the process is different for Mac and Windows.
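
As an aside, on Mac or Linux you can skip PuTTY entirely and use the stock ssh client with the .pem key you downloaded – a quick sketch, assuming an Amazon Linux instance:

chmod 400 my-keypair.pem                    # ssh refuses keys with loose permissions
ssh -i my-keypair.pem ec2-user@<public-ip>  # ec2-user is the default Amazon Linux user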

Grab your instance’s public IP address – right-hand side of the “Description” tab – and head over to PuTTY.

Populate your hostname and save the session for future use if required. Then navigate down the left pane to SSH > Auth and browse for your PPK file, which will authenticate you and permit you into the instance.

Head back to the Session window and hit Open; if successful you’ll receive the below screen. If this fails it’s likely to be a PuTTY issue with the PPK file, a Security Group issue, or an issue with your instance being in the wrong subnet.

Hit Yes here – this is PuTTY asking whether to trust and cache the server’s host key.

We’ve now successfully SSH’d into our AWS EC2 instance, which resides in our configured subnet, inside our VPC.

Increase your privileges using the command “sudo su” – this is a Linux VM we’ve created.

Let’s see if our instance in the public subnet can get out to the internet – to the Yum repositories, for example, to update. We know we can get in, but can it get out?
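
A hedged sketch of the sort of checks I ran from the instance:

ping -c 4 8.8.8.8    # basic reachability out through the Internet Gateway
yum update -y        # can we pull from the Yum repositories? (we're root after sudo su)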

Looks good.

Web Server

We know we can SSH into our instance, as we’ve just done that, but how can we tell if port 80 (HTTP) is open? To test that we’re going to install Apache onto the instance and create a little web page for us to target.


The following command will install Apache onto our instance and turn it into a web server (assuming an Amazon Linux instance, hence yum):
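
sudo yum install -y httpd    # install the Apache web server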

Now change directory to /var/www/html and create an index.html file, which will be the file our web server will present when we try and access it over the web!
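
cd /var/www/html
sudo nano index.html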

Type a message, or add some ASCII art 😉, then exit and save via CTRL+X and Return.

As you can see, our newly created index.html file now sits in the /var/www/html directory for us to hit when we browse to our server – all going well.

The final thing to do before testing is to start the httpd service, which will make our web server listen for incoming requests on port 80.
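
sudo service httpd start    # on systemd-based distros: sudo systemctl start httpd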

We can confirm the server is now listening by looking at its open ports:
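
netstat -tlnp | grep :80    # or: ss -tlnp | grep :80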


The acid test is bringing up the web page (index.html) in a browser, so let’s try that.

And there we are, we’re now serving web pages to anyone on the internet from our EC2 web server. This sits in our defined subnet, inside our VPC, and is restricted by our Security Groups.

That was another long one, so in part 3 we’ll:

  • Spin up a backend server and drop it into our Private Subnet
  • Secure the backend, so only the instances in our public subnet can talk to it – not anyone on the internet!
  • Create a NAT Gateway, so our instances in the private subnet can get secure internet access.

I.

Building out a VPC Part 1

In AWS, a VPC (Virtual Private Cloud) allows you to build out your own piece of the AWS cloud the way you want it – mirroring your data centre schema, for example, if you’re migrating.

I’ve been going through the material to recertify my Solutions Architect cert, so I thought I’d put it down in writing for reference.

Create your CIDR block

Within the console, navigate to “VPC”. Once you’re in the VPC dashboard you can launch the VPC Wizard, but you don’t really learn much going that route. Instead, navigate down the left pane and select “Your VPCs”.

Hit Create VPC and you will be presented with the following screen, which will ask you for certain information.


Give your VPC a useful name and specify your Classless Inter-Domain Routing (CIDR) block. You can select the radio button to assign an IPv6 block, but I didn’t, and I left Tenancy at “Default” instead of “Dedicated”, as I don’t need my VPC running on dedicated AWS hardware.

If successful you’ll receive a confirmation.

By creating a new VPC, you’ll automatically receive the following:

  • A new Routing table
  • A new default Network ACL (Access Control List)
  • A new default Security Group.
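
As an aside, the same VPC creation from the AWS CLI is a one-liner (the CIDR here is just an example):

aws ec2 create-vpc --cidr-block 10.0.0.0/16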

Subnets

The next step is to create your individual subnets, which will be carved out of your VPC’s CIDR block. On the left-hand pane select “Subnets”.

You will find a handful of subnets already listed, but these are the default subnets for the default VPC. The new subnets we create will be in addition to these.

Give the first subnet a useful name, assign it into the new VPC you’ve just created and drop it into an “Availability Zone” of your choosing. I shall be making two subnets – a Public and a Private, therefore each will go into a different AZ for additional resilience.

Follow the same steps for the second, private subnet and hit “Create”. We now have two subnets, one for our Public facing services and a second, Private subnet for our backend.
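
The CLI equivalent, sketched with placeholder IDs, CIDRs and AZs – substitute your own:

# public subnet
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.1.0/24 --availability-zone eu-west-2a
# private subnet, in a different AZ for resilience
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.2.0/24 --availability-zone eu-west-2b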

At the moment we have no means of internet access out of our newly provisioned VPC and subnets, therefore we need to remedy that so our resources can update, talk out etc.

Routing Table

We don’t want newly provisioned resources in our VPC to use the default routing table, therefore we need to create a new one, associate it with our Public facing subnet and give it a Gateway out.

On the left pane in the VPC Dashboard navigate to “Route Tables” and create a new route table.
Give your Routing table a useful name, associate it with your VPC and hit “Create”. Highlighting your newly created Routing table will display a number of tabs:

Select the “Subnet Associations” tab and hit “Edit subnet associations” to link your new, public subnet to this new routing table.

Make sure to select the Public subnet and hit Save, as we are now going to create an Internet Gateway and specify a default route in our Routing table to forward non-local traffic out to the internet via our IGW.

Internet Gateway

Navigate down the left pane of the VPC Dashboard to “Internet Gateways” and create a new IGW.

Highlight your new Internet Gateway and select “Actions -> Attach to VPC”

Select your new VPC and hit “Attach”. Now go back to your Route Tables and highlight your newly created Routing Table for your public subnet.

Go to the Routes tab and then “Edit routes”. Add a new default route with a destination of 0.0.0.0/0 (anywhere other than the routes you know about), select your newly created Internet Gateway as the Target, and hit “Save routes”.

You will now have a default route below your local route, which will forward all non-local traffic to the Internet Gateway.
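
Pulling the routing and IGW steps together, the CLI version would look roughly like this (all IDs are placeholders):

aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0            # the public subnet
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0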

In Part 2 we’ll finish off by:

  • Creating suitably secure Security Groups for our Public and Private instances.
  • Creating an EC2 instance as a web server and confirming all the routing and necessary security is in place.
  • Creating a NAT Gateway to provide the private subnet with means to get to the internet.

To be continued…

I.

Home Lab

In an attempt to help me test new network-related stuff and, in some instances, prove solutions before offering them up in a design document, I’ve been putting together a home lab.

I spent a while skimming eBay looking for second-hand servers and eventually settled upon a Dell PowerEdge R715. This particular one came with an AMD Opteron 6272 processor at 2.1GHz (16 cores), 128GB of RAM and just under a TB of 10K SAS disk space, but there are some sellers (I found mine on eBay) that will spec the server how you want it – at a cost, of course. I paid £250 delivered for mine, which was a pretty good deal.

I did have (still may) plans to utilise some other hardware I had lying around:

  • x2 Cisco 3750G switches
  • Cisco ASA 5510 firewall

But, as I play around with, i.e. break/reboot, the kit frequently, I didn’t want to suffer the wrath of an angry family member who can’t access Netflix or play Angry Birds :).

The setup is really nice and simple at the moment, and it means I can hop over the internet onto VMware ESXi and build/test/break stuff without any problems.

MTU Fun

I recently ran into an issue whereby traffic ceased passing over a link that had been in use for weeks and months without any issues. After investigation and packet captures, the root cause ended up being a mismatch in MTU size, but I thought I’d share my experience…

The Maximum Transmission Unit (MTU) is the largest number of bytes an individual datagram can have on a particular data communications link. When encapsulation, encryption or overlay network protocols are used the end-to-end effective MTU size is reduced. Some applications may not work well with the reduced MTU size and fail to perform Path MTU Discovery. In response, it would be nice to be able to increase the MTU size of the network links.

Source: Wikipedia

For most Ethernet networks the MTU is set to 1500 bytes by default; however, with today’s networks running numerous overlays and encapsulation on top of encapsulation, it can be difficult to determine what MTU you should be setting throughout your network and across your WAN.

If we take the below image as a reference and pick SMTP as our example protocol, we have the corporate LAN on the left, where our traffic will be sourced, and the remote DC on the right, where our Exchange servers reside.

[Image: LAN-DC topology diagram]

The communication should flow as follows:

  1. Mail sent from Client to local Exchange server
  2. Exchange processes and determines next-hop, which will then likely be a mail gateway of sorts.
  3. Mail gateway performs its own interrogation, i.e. potentially blocking attachments/keywords that are not permitted to leave the network; if processed successfully, the traffic will be forwarded on towards the egress point of the network. Additional upstream appliances could include IDPS, additional proxies etc.
  4. Traffic reaches the egress firewall and, provided there is an ACL in place to permit it, is forwarded on to the CE router and onto the WAN.

Let’s say the egress interface on the corporate firewall has an MTU of 1500 bytes, but the CE router interface has an MTU of 1400 bytes – what’s going to happen?

Well, in my recent experience I found SMTP traffic was leaving the firewall destined for the DC, a successful 3-way handshake was performed and a connection established. However, data was leaving the firewall at 1500 bytes, being chopped up by the router and sent on its way. The reply from the remote DC device (using ICMP) was being dropped by the firewall (as why would you want your egress firewall to be pingable…).

Upon performing a packet capture from the corporate network I could see SMTP traffic leaving but receiving TCP retransmit messages, along with messages stating:

Timeout waiting for client input

On first look I assumed this related to authentication, but after some expert googling it turned out to be an idiosyncrasy of Windows, which actually related to the network and the MTU size of the packets being transmitted.

There had been a change performed on the CE router which reduced said MTU size, and although SMTP 3-way handshakes were still successful, large numbers of drops were occurring, grinding mail to a standstill.

Tip: MTU size is always a great thing to check, but so are the approved changes that took place around the time of the issue 😉
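
A quick way to test path MTU yourself is to ping with the Don’t Fragment bit set – 1472 bytes of ICMP payload plus 28 bytes of IP/ICMP headers makes a 1500-byte packet, so this fails if anything in the path has a smaller MTU:

ping -M do -s 1472 <destination>   # Linux: -M do sets the DF bit
ping -D -s 1472 <destination>      # macOS equivalent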

The below list is great to have when looking to determine your MTU size and how many bytes each encapsulation will add to the frame:

  • GRE (IP Protocol 47, RFC 2784) adds 24 bytes (20-byte IPv4 header, 4-byte GRE header)
  • 6in4 encapsulation (IP Protocol 41, RFC 4213) adds 20 bytes
  • 4in6 encapsulation (e.g. DS-Lite, RFC 6333) adds 40 bytes
  • Any time you add another outer IPv4 header, it adds 20 bytes
  • IPsec encryption performed by DMVPN adds 73 bytes for ESP-AES-256 and ESP-SHA-HMAC overhead (overhead depends on transport or tunnel mode and the encryption/authentication algorithm and HMAC)
  • MPLS adds 4 bytes for each label in the stack
  • IEEE 802.1Q tag adds 4 bytes (Q-in-Q would add 8 bytes)
  • VXLAN adds 50 bytes
  • OTV adds 42 bytes
  • LISP adds 36 bytes for IPv4 and 56 bytes for IPv6 encapsulation
  • NVGRE adds 42 bytes
  • STT adds 54 bytes

Source: Network World
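
As a worked example from that list: run VXLAN over a standard 1500-byte Ethernet link and you’re left with 1500 - 50 = 1450 bytes for the inner frame, which is why overlay fabrics typically raise the underlay MTU (to 1550+, or jumbo frames) rather than squeeze the payload.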

I.


Certs, because.

I’ve been having a good think about which certifications to renew/take going forward; being an IT Contractor newbie, I feel it can be even more important, as I’m frequently in the shop window.

To Renew

I decided it would be unwise to let my Cisco CCNP lapse, as it took me several years to achieve and that knowledge has certainly come in useful with the handful of Support, Design & Implementation contracts I’ve had these past 2 years.

A lot of people tend to go for the Troubleshoot exam to re-certify, but I felt the Switch exam would be more beneficial – mainly so I could brush up on some of my FHRP knowledge, but also so I could get a taste of what Cisco are putting into their most recent Switch exam (300-115). I was surprised to find nothing on their new 9k switches, for example.

I have not, as of yet renewed my Amazon AWS Certified Solutions Architect, but I have until Q2 2019 to do that so I have a little bit of time to determine if it’s worth it or not. I did enjoy learning and labbing the material, so I think I will.

To Take

I’ve been looking at a few different certifications, taking into consideration that I don’t want to have 3, 4, 5 certs that I’m constantly having to re-certify!

Those that I’m leaning towards looking at in 2019 are:

  • EC-Council Certified Ethical Hacker (CEH)

As my contracts tend to be Networks Support/Design/Security/Architecture, a cyber security certification would be really useful.

  • (ISC)2 Certified Cloud Security Professional (CCSP)

As a vendor-agnostic cloud security certification, this could perhaps be even more useful than the AWS cert.

  • Cisco CCIE Routing and Switching

Although I have the prerequisite for this certification, and achieving it would be great, my view is that the time would be better spent becoming more of an IT generalist instead of a subject matter expert in the R&S space.

Ish


Cisco VSS

VSS has been around for some years now; it allows you to virtualise two Cisco 6500 chassis and morph them into a single, logical unit. Once configured, your single switching system is known as a Virtual Switching System 1440 (VSS1440*).

The main benefits include operational efficiency, a single point of management/configuration and scaling of the system’s bandwidth, i.e. pooling the resources of two chassis.

[Image: Cisco VSS diagram]

VSS is made up of the following:

  1. Virtual Switch Members – these are your 6500 chassis
  2. Virtual Switch Links (VSL) – these are 10Gb Ethernet connections (max of 8) and are the links between the VSMs.

VSLs can carry regular traffic in addition to the management comms between the two 6500s.

VSL links are required; however, you will also want to configure fast-hello links – ideally a pair. These links provide dual-active detection, i.e. if a disgruntled employee were to sever all the VSL links, the VSS would still be able to determine which switch is the active member. If these additional links are not configured you can end up with a split-brain scenario.

Split Brain

If the standby switch detects a complete loss of the VSL, it assumes the current active chassis has failed and will take over as the active member.

This is not to say your network will not have an outage: if the VSL links are lost, the active switch (via the fast-hello links) will go into recovery mode. In this mode, ALL ports except the VSL ports are shut down until the VSL links recover, at which point the switch reloads into its normal state.

I recently had an issue with a client where the VSL links were temporarily severed, and although we were running fast-hello links a split brain still occurred and caused a widespread outage. After the VSL links were re-patched and the switches rebooted, service was restored.

Troubleshooting this at the time, the switches did not reboot and recover automatically after the VSL links were re-established. I still ponder why…

*1440 refers to the two Supervisor 720 cards (one in each chassis) being active at the same time; combined, they give you 720×2 = 1440.

I.


Favourite AWS Services

I’m a fan of Amazon Web Services, mainly from a technical perspective, as it’s not necessarily cheaper to move from on-prem to on-cloud – so always read the small print before uplifting your whole datacentre ;). In fact, it interested me so much I sat the Certified Solutions Architect exam last year and thoroughly enjoyed going through the material and labbing along the way.

I like to keep track of updates to current AWS services, but also new ones that are released, and thought I’d highlight 5 of my current favourite offerings.

5. Elastic Compute Cloud (Amazon EC2)


EC2 is the bread and butter of AWS. It provides you with all the compute grunt you could ever wish for or need. Need 5 Linux VMs for a web server cluster? Or how about the ability to auto-scale when demand requires it, then spin those same servers down automatically when demand tails off? Don’t worry, EC2 can do just that, as well as a vast amount more.

To spin up an EC2 instance (VM) you have a few options. You can:

  • Use their quick start utility, which provides you with ~30 of the most popular AMIs (Amazon Machine Images) to choose from. Think your standard, hardened versions of Amazon Linux, Red Hat, SUSE, Fedora, and then your Windows and Ubuntu variants too
  • Choose an AMI that you have created yourself, perhaps a specific build of server with pre-installed software
  • Head over to the AWS Marketplace and utilise for free, or buy, specific software that runs in the cloud. Think F5 BIG-IP, Splunk or Juniper etc
  • Launch a community AMI that has been created by a member of the community

It’s frighteningly easy to get up and running, just make sure to terminate the instance/s when you’re finished playing, otherwise the costs can soon start to build without you even knowing.
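
On that note, a quick sketch of tearing an instance down from the CLI (the instance ID is a placeholder):

aws ec2 terminate-instances --instance-ids i-0123456789abcdef0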

Intro to EC2 Video

4. Kinesis


If you’re interested in processing or analyzing streams of data – think Twitter, for example – then Kinesis is a really useful service.

You can use it to build custom applications to collect and analyze streaming data for a bespoke set of needs or requirements. One example could be monitoring Twitter for every time the tag #JustinBieber (whoever he is….) is seen, then pushing that data through Firehose to the analytics engine to present users with personalised content – graphs, diagrams, feeds etc. Powerful stuff.
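
To get a feel for the plumbing, creating a stream and pushing a record into it from the CLI is only a couple of commands (the names and values here are examples):

aws kinesis create-stream --stream-name tweet-stream --shard-count 1
aws kinesis put-record --stream-name tweet-stream \
  --partition-key user123 --data "example payload"
# note: AWS CLI v2 expects --data base64-encoded unless you pass
# --cli-binary-format raw-in-base64-out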

As per the AWS Kinesis FAQs, a Kinesis stream flow looks like this:

[Image: Kinesis stream flow diagram]

Amazon Kinesis Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream.

3. Trusted Advisor


Trusted Advisor is like having your own AWS architect on-hand, 24 hours a day, to audit your AWS account and tell you where it’s vulnerable, where you could save money and how you could increase performance. Whenever you want.


It’s pretty simple – if you use AWS, you should be using TA.

2. Identity & Access Management


IAM is certainly in the top 3 of the most important AWS services. With it you can pretty much control all access to all of your account’s resources, whether they be groups or individuals.

Straight out of the box you will want to create users (then lock away your root credentials to keep them safe…) and manage their identities by granting generic or bespoke permissions. This way they’ll only have access to the resources they need.
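
A minimal sketch of that from the CLI (the user name and policy are examples):

aws iam create-user --user-name alice
aws iam attach-user-policy --user-name alice \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess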

1. Virtual Private Cloud (VPC)


As a Network bod myself, VPC is of real interest to me. It allows you to provision your own isolated CIDR block, allocate subnets and configure routing tables, all within AWS. You can then architect your solutions in a virtual network that you have defined and could, in theory, replicate your on-prem, private IP schemas in the cloud!

You can also create a hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.

AWS VPC FAQ.

I feel that the VPC gives a little bit back to the Network Engineer – they’ve just seen half their DC shifted to VMs in the cloud, but they still get to play with IP subnetting and IP allocation.

A Quick AWS explanation of VPC can be found here.

If you want more AWS content than any normal person could ever be able to digest, then head over to the AWS YouTube channel.

I.

