
Just another day in paradise

rant about cloud computing and other Microsoft stuff

Traditional networking is dead

There has been a lot of activity over the last 18 months regarding SDN and similar technologies disrupting the traditional network architecture stack.

Today we have vendors such as BigSwitch Networks disrupting the traditional network vendors by letting customers purchase whitebox network hardware and manage it with their SDN controller. Microsoft's Windows Server 2016 release will include an installable Network Controller role, plus support for VXLAN and NVGRE, leveraging the same technology stack Microsoft uses in its Azure cloud services.

Microsoft also recently announced the fully open-sourced "Software for Open Networking in the Cloud" (SONiC) for running network devices like switches, built in collaboration with leading networking industry vendors Arista, Broadcom, Dell, BigSwitch and Mellanox.


Vendors such as Cisco are playing catch-up with ACI, which has been known to cause instability within customer environments. There is no question these events are creating doubt in customers' minds as to whether SDN is really ready for prime time. I seem to recall certain vendors creating the same doubt in customers' minds about cloud technologies around four years ago.

There is a good industry overview from Brad Casemore, Research Director, Datacenter Networks at IDC, available from BigSwitch Networks, which I'd recommend you take a look at.

If you are planning a technology refresh of your network infrastructure, you would be crazy these days not to consider having some form of SDN working within your environment within the next 2-3 years. Hybrid cloud is how customers want to consume cloud services in the short to medium term, and the only way to provide network connectivity in a consistent, easy-to-manage and rapidly deployable way is with SDN that allows openness and customer choice.

VMM 2012 Console launch command line options

I couldn't find any information on the internet about this, so when I received some details from a Microsoft buddy, I thought I'd share.

 

VmmAdminUI.exe /Connect:"[server],[port],[user role],[use current creds (true/false)]"

Note that the quotes are mandatory. User role and use current creds are optional. Use "da" for the role if you want to log on as the built-in administrator role:

 

.\VmmAdminUI.exe /Connect:"vmm2012r2a,8100,da"

.\VmmAdminUI.exe /Connect:"vmm2012r2a,8100"

 

If you get into a bind, you can also specify /SpecifyNewServer to reset:

.\VmmAdminUI.exe /SpecifyNewServer

 

Full examples:

.\VmmAdminUI.exe /Connect:"vmm2012r2a,8100,Administrator,true"

.\VmmAdminUI.exe /Connect:"vmm2012r2a,8100,TenantAdmin,true"

 

Hope this helps.

Size Matters – Not Brand

This post should generate some pretty healthy debate within your own mind…Cloud, Compute and Brand.

I think we're now at a point in the evolution of compute resources where the brand of the equipment really doesn't make any difference. Sure, mean time between failures plays a part, but when the components all come from pretty much the same manufacturers, it really comes down to assembly quality.

So what is really important? Cost and Compute Unit Measurement. How do you calculate such a thing?

In simple terms, we could look at it with this scenario. In this example we'll use the following server specs:

  • 2 RU server
  • 256GB RAM
  • Dual processors with 10 cores each, running at 3.0GHz (60GHz of processing in total)
  • Purchase price of $10,000

For this example, let's assume you're going to have quite a few of these servers providing compute capacity using some virtualisation technology; pick your own vendor, but my preference is Hyper-V 🙂 Let's also assume you're never going to run this server at more than 80% capacity, so the initial numbers need to be wound back to 204.8GB and 48GHz respectively. (I honestly think you could assume 90% of the CPU capacity, but your own experience should tell you this.)

Let's use a measurement of 1 CCU (Cloud Compute Unit) = 1GB of RAM and 500MHz of CPU, which from my experience is a pretty good average RAM to CPU ratio; in real life I've seen the CPU requirement be a lot less.

So in this example, that piece of hardware could hold 96 CCUs (the 48GHz of usable CPU divided by 500MHz per CCU runs out first), which totally under-utilises the RAM, but let's go with it, as it shows a design characteristic which needs to be considered upfront.

To calculate the cost of 1 CCU per month, use the following formula:

$10,000 / 96 CCUs / 36 months = $2.89 per CCU per month

An important point: if you can get the RAM to CPU ratio right for the workloads you are running, you can fit more CCUs onto a single server. Drive the purchase price down and the number of CCUs you can host on a server up, and your cost to provide CCUs just keeps coming down.

There are plenty of other variables you need to consider when calculating a CCU, such as hypervisor software, network, rack space and people costs, but this should be a good starting point.
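If you want to play with the numbers yourself, here's a minimal sketch of the calculation using the example figures above. Every input is just an assumption from this post, so substitute your own hardware, utilisation and amortisation numbers.

________________________________________
# Rough CCU cost model using the example figures from this post.
# All inputs below are assumptions - plug in your own numbers.

purchase_price = 10_000        # server purchase price ($)
ram_gb = 256                   # installed RAM
cpu_ghz = 2 * 10 * 3.0         # 2 sockets x 10 cores x 3.0GHz = 60GHz
utilisation = 0.80             # never run the host past 80%
amortisation_months = 36       # write the hardware off over 3 years

# 1 CCU = 1GB RAM + 500MHz of CPU
ccu_ram_gb = 1.0
ccu_cpu_ghz = 0.5

usable_ram = ram_gb * utilisation      # 204.8GB
usable_cpu = cpu_ghz * utilisation     # 48GHz

# Capacity is limited by whichever resource runs out first (CPU in this example).
ccus = int(min(usable_ram / ccu_ram_gb, usable_cpu / ccu_cpu_ghz))

cost_per_ccu_month = purchase_price / ccus / amortisation_months
print(f"{ccus} CCUs at ${cost_per_ccu_month:.2f} per CCU per month")  # 96 CCUs at $2.89
________________________________________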

 

So, does the badge on the front of the piece of tin really make any difference in delivering the service?

 

Cheers

Luke

 

 

Monitoring Amazon Web Services with SCOM 2012

There's a new kid in town: the AWS Management Pack for SCOM 2012. The AWS documentation details the steps you need to get the service up and running quite well, and can be found here – http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AWSManagementPack.html

Here is an example of the kind of information you can get out of this management pack:

EBS Volumes view (screenshot: EBS Volumes in SCOM)

 

 

EC2 Instances view, including detailed information (screenshot: EC2 view in SCOM 2012)

 

Download, install and get monitoring!

 

Backing up MySQL on Windows using DPM

Backing up MySQL using DPM… rubbish, you say? Not quite 🙂

Microsoft DPM allows pre- and post-backup scripts to be run, which means you can write a script which backs up MySQL and dumps the backup to disk, and DPM will then back up this file.

Step 1 – Create a MySQL backup script
Here's an example script which could work for you:

mysqlbackup.cmd
________________________________________
@echo off
rem Build a YYYYMMDD stamp from the system date (adjust the substring offsets for your locale)
set currentdate=%date:~-4,4%%date:~-7,2%%date:~-10,2%
rem Keep the previous dump as an OLD copy, then take a fresh full dump
move /y c:\mysqlbackup\publicsite-*.sql c:\mysqlbackup\publicsiteold.sql
mysqldump --user=backupuser --password=yourpassword databasename > c:\mysqlbackup\publicsite-%currentdate%-full.sql
________________________________________

Replace the database name, username and password to match your environment. The idea of this script is to back up the database with the current date in the file name and keep a copy of the previous backup as an OLD file.

Step 2 – Create a DPM protection group to back up the MySQL dump
Here, all we need to do is create a protection group with the settings you would like; just make sure you are backing up the directory where the MySQL backup files are stored, something like C:\MySQLBackup.

Step 3 – Modify ScriptingConfig.xml on the DPM agent server
C:\Program Files\Microsoft Data Protection Manager\DPM\Scripting\ScriptingConfig.xml
This is the file used to configure pre- and post-backup scripts. Below is a copy of the file I used, but you can modify it for your environment. The important thing to note is the "DataSourceName", which needs to match the data source you are backing up. If the directory you are backing up is on C: or D:, make sure this is reflected in the value.
___________________________________________________________
<?xml version="1.0" encoding="utf-8"?>
<ScriptConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns="http://schemas.microsoft.com/2003/dls/ScriptingConfig.xsd">
<DatasourceScriptConfig DataSourceName="C:">
<PreBackupScript>c:\mysqlbackup\mysqlbackup.cmd</PreBackupScript>
<PostBackupScript></PostBackupScript>
<TimeOut>30</TimeOut>
</DatasourceScriptConfig>
</ScriptConfiguration>
___________________________________________________________
If the pre-backup script fails, so will the DPM job. This is fantastic because now all we have to do is monitor DPM job status. Huzzah!

Cheers Luke

Website keyword filtering – stopping the bad guys

We host quite a few public websites, and attempted hacks are a regular occurrence on a lot of the sites. One of the things I was interested in doing was blocking certain keywords from even being accepted in the URL. Because we currently use TMG to publish all our websites, you can configure the HTTP filter to block keywords on each publishing rule (cool!). Check out this site on how to do it – http://www.elmajdal.net/ISAServer/Keyword_Filtering_With_ISA_Server_2006.aspx

So I went to our Splunk instance to do some searching to find keywords I could block, while making sure those keywords didn't appear in any legitimate traffic. Some of the keywords I found I could block pretty easily were as follows:

– union, having, select, blackhat, information_schema

You can also look at blocking certain user agent strings, for example "Havij", which is a popular SQL injection tool, and also "sqlmap".
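TMG does this filtering at the publishing rule, but just to illustrate the idea, here's a minimal sketch of the same sort of check: decode the URL, then match it and the user agent against a block list. The keyword and user agent lists are only the examples above, and the function name is mine, so treat it as a starting point rather than a complete filter.

________________________________________
from urllib.parse import unquote

# Example lists only - tune these against your own Splunk data so you don't
# end up blocking legitimate traffic.
BLOCKED_KEYWORDS = ["union", "having", "select", "blackhat", "information_schema"]
BLOCKED_USER_AGENTS = ["havij", "sqlmap"]

def should_block(url: str, user_agent: str) -> bool:
    """Return True if the request looks like an injection or scanner attempt."""
    decoded_url = unquote(url).lower()
    ua = user_agent.lower()
    if any(keyword in decoded_url for keyword in BLOCKED_KEYWORDS):
        return True
    return any(agent in ua for agent in BLOCKED_USER_AGENTS)

# A classic UNION-based injection attempt gets caught; a normal request does not.
print(should_block("/products.aspx?id=1+UNION+SELECT+1,2,3", "Mozilla/5.0"))  # True
print(should_block("/contact.aspx", "Mozilla/5.0"))                           # False
________________________________________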

There are some others as well, but I consider those a bit of our IP, so unfortunately I'm not going to share them all with you. However, I encourage you to log all your traffic to Splunk and do some keyword searching of your own. I'm sure if you pointed some security tools like Nessus at a website, you'd find a heap more.

Happy splunking!

**19/07/2012 – added information_schema to list of keywords and user agent string blocking

Using Orchestrator to automate OSSEC

System Center Orchestrator is a fantastic automation and scheduling tool from Microsoft which has just gone through a new revision with the System Center 2012 wave of products. It integrates natively with a number of systems including SCOM, SCCM and VMM, as well as providing a bunch of pre-built scheduling tasks. CodePlex has great Integration Packs available and is worth checking out; that's where I downloaded the Exchange Mail IP.

One of my passions is automation; in the hosting world it's how you keep your costs down and drive efficiency out of repetitive tasks. Over the last few years we've been running, I've been able to collect a lot of great information around alerts, especially OSSEC alerts. We use OSSEC as our IDS, and if you haven't heard of it before I'd recommend checking it out, as it's a fantastic open source IDS which is very configurable. We use OSSEC for host-based agents and also network-based detection with Snort. Everything is also indexed and pumped into Splunk, which gives us superior searching capability across our entire stack of firewalls, switches, IDS and reverse proxy servers.

We generate OSSEC alerts via email, so Orchestrator with the Exchange Mail IP lets you configure monitoring of a mailbox and wrap some rules around it. Here's an example:

Monitor – Support Mailbox (connection you have setup in the IP previously)

Folder Name – Inbox (folder to monitor)

Body Format – Plain Text

Read Mail Filter – Unread Only

Now we need to configure the rule filters, i.e. what criteria should be matched in order to trigger this Runbook.

Subject – Contains “Alert level 10”

Body – Contains “WEB-IIS cmd.exe access”

Now you have the ability to get Orchestrator to do "something" if the rules are matched, like adding the IP address from the email body to a firewall blocking rule. Additionally, instead of monitoring a mailbox, you could get Orchestrator to monitor a log file for the exact same criteria.
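The matching itself happens in the Runbook, but as a rough sketch of the "something" you might do next, here's what pulling the source IP out of an alert body could look like. The sample alert text and the block action below are assumptions for illustration only, not a copy of our Runbook or of OSSEC's exact email format.

________________________________________
import re

def extract_source_ip(alert_body: str) -> str | None:
    """Grab the address from the 'Src IP:' line that OSSEC alerts typically include."""
    match = re.search(r"Src IP:\s*(\d{1,3}(?:\.\d{1,3}){3})", alert_body)
    return match.group(1) if match else None

def block_ip(ip: str) -> None:
    # Placeholder - in Orchestrator this step would be a firewall or netsh activity.
    print(f"Adding {ip} to the firewall block list")

# Mock alert body matching the rule filters above (level 10, WEB-IIS cmd.exe access).
sample_alert = """OSSEC HIDS Notification - Alert level 10
Rule fired: "WEB-IIS cmd.exe access"
Src IP: 203.0.113.45"""

ip = extract_source_ip(sample_alert)
if ip:
    block_ip(ip)
________________________________________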

I’ve assumed a certain level of Orchestrator knowledge with this post but it shouldn’t be too difficult to work out 🙂

Snapshots are bad, mmmkay

You might think that taking a snapshot of a virtual machine before you make any changes to it is a lifesaver as an IT pro, but I can assure you it is not.

Take a step back and think about what you are doing… you are taking a snapshot of a running server during its normal operations so that you can roll back quickly if the shit hits the fan. It's a running server… what about transactions which may or may not have been written to disk if it's a SQL or Exchange server?

Yeah, but it snapshots the VM's memory as well, you say? Well, guess what: between taking a snapshot of the memory and the disk, there could still be transactions in flight between the two or on the network. Either way, you've just increased the risk of not being able to successfully restore the VM to a consistent state after it's screwed the pooch.

So smarty pants, what should I do?

Don't get me wrong, snapshots are great, but shut the VM down and then take the snapshot. This way you have a consistent state on the VM, so you can roll back quickly. If shutting down the VM is not an option, then stop all services which may be processing transactions.

Latency based DNS load balancing with Amazon Web Services

AWS just keeps on innovating. They have recently announced the ability to load balance DNS queries based on the user's latency to an EC2 region… say wha?

http://docs.amazonwebservices.com/Route53/latest/DeveloperGuide/CreatingLatencyRRSets.html

So if a user is based in Australia and you have an application or website running on an EC2 instance in Singapore, you can create a DNS record so the user's request is routed to the instance nearest to them. Similarly, if you have an instance in the US, a user's DNS query can be directed to a US-based instance in EC2.
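The Route 53 docs linked above walk through doing this in the console, but if you'd rather script it, a latency record set can be created along these lines with boto3. The hosted zone ID is a placeholder and the region names are my guesses for the two elastic IPs that appear in the nslookup output further down, so adjust to suit.

________________________________________
import boto3

route53 = boto3.client("route53")

def create_latency_record(zone_id: str, name: str, region: str, ip: str) -> None:
    """Create an A record that Route 53 answers with based on the caller's latency to `region`."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "A",
                    "SetIdentifier": region,   # must be unique per latency record
                    "Region": region,          # the EC2 region hosting this copy of the site
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }],
        },
    )

# One record per region; Route 53 serves whichever is closest to the resolver.
create_latency_record("ZEXAMPLE12345", "www.cloudcomputingperth.com", "ap-southeast-1", "122.248.243.93")
create_latency_record("ZEXAMPLE12345", "www.cloudcomputingperth.com", "us-west-1", "184.169.151.178")
________________________________________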

The result is a user experience served from the nearest EC2 region your application is running in. To test this, I created a version of our website http://www.cloudcomputingperth.com and created the necessary DNS entries. You can see from my nslookup output that I am directed to the Singapore EC2 elastic IP while my DNS server is set to a local one, yet if I use a US-based DNS server, I am directed to my US EC2 elastic IP.

C:\Users\Luke>nslookup www.cloudcomputingperth.com
Server: dns1.mydomain.local
Address: 172.20.0.1

Non-authoritative answer:
Name: www.cloudcomputingperth.com
Address: 122.248.243.93

C:\Users\Luke>ipconfig /flushdns

Windows IP Configuration

Successfully flushed the DNS Resolver Cache.

C:\Users\Luke>nslookup www.cloudcomputingperth.com
Server: 111.118.175.56.reverse.crucialx.net
Address: 111.118.175.56

Non-authoritative answer:
Name: www.cloudcomputingperth.com
Address: 184.169.151.178

Pretty damn cool, but does it actually work? Of course!

So I created two copies of the website and slightly changed the footer of the site for each region it's hosted in.

I then accessed the URL from my machine with a local DNS server, then changed the DNS server to a US-based one.
