Multiple GitHub accounts & SSH keys on the same machine

If, like me, you have 2 or more different GitHub accounts on the go, then accessing and committing as both on the same machine can be a challenge.
In my case, I have 2 accounts: one for work, associated with my company email, and a second for my own personal code.

If you’d like to be able to check out, code and commit against different repos across different GitHub accounts on the same machine, then you can do so by setting up multiple SSH keys and having hostname aliases configured in your ~/.ssh/config file.

First of all, you’ll need to generate your SSH keys. If you haven’t done this already, you can use the following commands to generate your keys.

$ ssh-keygen -t rsa -C ""
Generating public/private rsa key pair.
Enter file in which to save the key (/c/Users/eoin/.ssh/id_rsa): id_rsa_eoin_at_work

$ ssh-keygen -t rsa -C ""
Generating public/private rsa key pair.
Enter file in which to save the key (/c/Users/eoin/.ssh/id_rsa): id_rsa_eoin_at_home

Once you’ve created your 2 keys, you’ll see 2 files per key pair (the private key file you specified and a .pub) in your ~/.ssh directory. You can go ahead and add the respective public key files to each of your GitHub accounts; it’s in the GitHub > Settings > SSH and GPG keys section of your settings. You’ll also need to add the private keys to your SSH agent.
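Loading keys into the agent is done with ssh-add. The sketch below is self-contained, so it generates a throwaway key in a temporary directory; in practice you’d point ssh-add at your real ~/.ssh/id_rsa_* files instead.

```shell
# Start an agent, load a key into it, and list what's loaded.
# The key here is a throwaway generated just for the demonstration;
# substitute your real ~/.ssh/id_rsa_* paths.
dir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -q -f "$dir/id_rsa_demo"

eval "$(ssh-agent -s)" > /dev/null    # starts an agent, exports SSH_AUTH_SOCK
ssh-add "$dir/id_rsa_demo" 2> /dev/null
loaded=$(ssh-add -l)                  # fingerprints of the loaded keys
echo "$loaded"

ssh-agent -k > /dev/null              # stop the demo agent again
```

Run ssh-add once per key (work and personal) and both identities are available to ssh at the same time.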

Next you’ll want to create an ssh config file in your ~/.ssh directory. You can see mine below; the Host value on the first line of each section is an alias of your own choosing, while HostName is the real server that both aliases point at.

    Host work.github.com
        HostName github.com
        User git
        IdentityFile ~/.ssh/id_rsa_eoin_at_work

    Host personal.github.com
        HostName github.com
        User git
        IdentityFile ~/.ssh/id_rsa_eoin_at_home

Here’s the trick: when you execute a git clone command to clone a repo, the host in that command does not have to be a real DNS hostname. It is matched against the Host entry specified on the first line of each section in the above file, so you can very easily change it. Now, if I want to check out work-related projects with my work account, I can use:
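You can confirm how SSH resolves an alias without ever connecting by using ssh -G, which prints the effective client configuration for a host. This sketch writes a minimal config to a throwaway file with an illustrative work.github.com alias and checks what it resolves to:

```shell
# ssh -G prints the effective, fully-resolved client config for a host
# without making a connection (available in OpenSSH 6.8+).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host work.github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_eoin_at_work
EOF

resolved=$(ssh -G -F "$cfg" work.github.com)
echo "$resolved" | grep -E '^(hostname|user) '
```

The grep shows that the alias resolves to the real hostname github.com with user git, which is exactly what git will use when it shells out to ssh.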

git clone git@work.github.com:<company>/<project>.git

# don't forget to set your git config to use your work metadata.
git config user.name "eoincgreenfinch"
git config user.email "<your-work-email>"

But if I want to check out code from my personal account, I can easily modify the clone URI with the following.

git clone git@personal.github.com:<username>/<project>.git

# don't forget to set your git config to use your personal metadata.
git config user.name "eoincampbell"
git config user.email "<your-personal-email>"
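Note that plain git config (without --global) writes to the current repository’s .git/config only, which is exactly what you want when identities differ per project. A quick runnable sketch, using an example email address since the real ones aren’t shown here:

```shell
# Initialise a scratch repo and give it its own identity;
# settings in .git/config override the global ones.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" config user.name  "eoincampbell"
git -C "$repo" config user.email "eoin@personal.example"   # example address
git -C "$repo" config user.name
```

Run the two config commands once after each clone and every commit in that repo carries the right identity.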

~Eoin Campbell

SLA: How many 9’s do I need?


I recently had a conversation with a colleague regarding service level agreements and what kind of up-time SLAs we were required to provide (or would recommend) to some of our customers. This is something that comes up more and more, particularly in relation to software delivery on cloud hosting platforms. Azure, Amazon AWS, OpenStack, Rackspace, Google App Engine and so on all offer ever-increasing levels of up-time around their cloud offerings, and this trickles down to the ISVs who build software on these platforms. So how many 9’s does your organisation’s system need?

Percentage availability

Availability is the ability for your users to access or use the system. If they can’t access it because it’s locked up, or offline, or the underlying hardware has failed, then it is unavailable.

For the uninitiated, measuring availability in 9’s is industry parlance for what percentage of time your application is available. The following table maps out the equivalent allowed downtime described by those numbers.

Description            Up-time   Downtime per year   Downtime per month
two 9’s                99%       ~3.65 days          ~7.2 hours
three 9’s              99.9%     ~8.7 hours          ~43 minutes
three and a half 9’s   99.95%    ~4.3 hours          ~21 minutes
four 9’s               99.99%    ~52 minutes         ~4.3 minutes
five 9’s               99.999%   ~5.25 minutes       ~25 seconds
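The figures above fall straight out of the percentage: subtract the availability from 100% and apply the remainder to the minutes in a year. A quick awk sketch:

```shell
# Downtime allowed per year/month for a given availability percentage.
availability=99.95
awk -v a="$availability" 'BEGIN {
  year_min  = (100 - a) / 100 * 365 * 24 * 60   # minutes in a year = 525,600
  month_min = year_min / 12
  printf "%.1f minutes/year, %.1f minutes/month\n", year_min, month_min
}'
```

For 99.95% that gives 262.8 minutes (~4.4 hours) a year, in line with the three and a half 9’s row.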

Service Level Agreements

How many 9’s a company or service’s SLA specifies does not necessarily mean that the system will always adhere to or guarantee that level of up-time. No doubt, there are mission-critical systems out there that would need guaranteed/consistent up-time and multiple layers of fail-over/redundancy in case those guarantees are not met. However, more often than not, these numbers are goals to be attained, and customers might be offered a rebate/credit if the availability did not reach those goals.

Take Amazon S3 storage services for example. Their service commitment goal is to maintain a three 9’s level of up-time in each month; in the event that they do not, they offer a customer credit of:
– 10% in the case where they drop below three 9’s
– 25% in the case where they drop below two 9’s

Microsoft Azure has a similar service commitment for their IaaS Virtual Machines. In this case, while they offer a similar credit rebate for dropping below 99.95%, they also caveat that you must have a minimum of 2 virtual machines configured in an availability set across different fault domains (areas of their data center infrastructure that ensure resources like power & network are redundantly supplied).

What are your requirements?

Our business is predominantly focused on providing our customers with line of business applications. The large majority of their usage is by end-users between 8 am and 6 pm on business days. As a result, we have a level of flexibility with our customers to co-ordinate releases, planned outages and system maintenance in a way that minimally impacts the user base.

In the past, however, I’ve built and maintained systems that were both financially and time critical; SMS-based revenue generation tied to 30-second TV ad spots, for example, has a very different business use case, requiring a different level of service availability. If your system is offline during the 90-second window from the start of the advert, then you risk having lost that customer.

When identifying your own requirements, you need to think about the following:

  • When do you need your system or application to be available?
  • Do you have different levels of availability requirements depending on time of day, month or year?
    • LOB application that needs to be available 9-5/M-F
    • FinSrv application required for high availability at end of month but low availability throughout the month
    • An e-commerce application requiring 24/7 availability across multiple geographic locations & overlapping timezones
  • What are the implications for your system being unavailable?
    • Are there financial implications?
    • Is the usage/availability time critical/sensitive?
    • Are other systems upstream/downstream dependent upon you and if so, what SLA do they provide?
  • If one component of your system is unavailable, is the entirety of the system unusable?
    • Is component availability mutually exclusive?

The cost of higher levels of availability

Requiring higher levels of availability (more 9’s) means having a more complex, robust and resilient hardware infrastructure and software system. If your system is complicated, that may mean ensuring that the various constituent components can each independently satisfy the SLA, e.g.:

  • Clustering your database in a Master-Master replication setup over multiple servers
  • Load-balancing your web application across multiple virtual machines
  • Redesigning to remove single points of failure in your application architecture such as in process session-state
  • Externalising certain services to 3rd parties that provide commercial solutions. (Azure Service Bus, Amazon S3 Storage etc…)

And all these things come at a cost.

John’s E-Commerce Site

John runs an e-commerce website where he sells high-value consumer goods. During the year his system generates ~€12m in revenue. Spread over the year, that revenue equates to the following averages; however, since his business is low volume/high margin, missing even a single sale/transaction could be costly.

  • €1,000,000 per month
  • €33,333.33 per day
  • €1,388.89 per hour
  • €23.15 per minute
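These averages correspond to a 360-day year (twelve 30-day months); you can reproduce each figure directly from the €12m annual total:

```shell
# Reproduce John's revenue breakdown from the €12m annual figure,
# assuming twelve 30-day months as the per-day figure implies.
awk 'BEGIN {
  yearly = 12000000
  printf "per month:  %.2f\n", yearly / 12
  printf "per day:    %.2f\n", yearly / 360
  printf "per hour:   %.2f\n", yearly / 360 / 24
  printf "per minute: %.2f\n", yearly / 360 / 24 / 60
}'
```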

John’s application currently only offers two 9’s of availability as it’s implemented on a single VPS and has numerous single points of failure. Planned outages are kept to a minimum but required to perform updates, releases and patches.

John is considering attempting to increase his platform’s availability to four 9’s. Should he do it?

Quantifying the value of higher levels of availability

If you take a purely financial view of John’s situation, the cost implications of two 9’s vs. four 9’s are significant.

SLA      Outage Window   Formula          Total Cost of Max. Outages
99%      3.65 days       3.65 * €33,333   €121,665.45
99.99%   52 minutes      52 * €23.15      €1,203.80
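The worst-case figures are simply the maximum outage multiplied by the average revenue for that unit of time:

```shell
# Worst-case annual revenue at risk under each SLA.
awk 'BEGIN {
  printf "99%%    : %.2f\n", 3.65 * 33333   # days of outage * revenue/day
  printf "99.99%% : %.2f\n", 52   * 23.15   # minutes of outage * revenue/minute
}'
```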

Ultimately, he needs to understand whether this is an accurate estimate of the cost impact, and if it is, whether it would cost him more than €120K year on year to increase the up-time of his system. There are numerous other business and technical considerations here on both sides of the equation.

  • Revenue estimation year on year may or may not be accurate
  • Revenue generation may not be evenly distributed through the year; if he can maintain high availability through the Black Friday and Christmas shopping seasons, it may alleviate most of his losses.
  • There may be other, less tangible impacts on recurring revenue from the bad user experience of arriving while the site is down.
  • Downtime may have a detrimental impact on his brand.

On the other hand, what is the cost of the upgrade?

  • Development costs to upgrade the system.
  • Additional hosting costs to move to a cloud platform or to additional 3rd parties
  • On-going support costs to maintain this new system
  • There may be other considerations where adopting a new technology (a high-availability cache, for example) would remove the need for an increased SLA on a data store.

Assuming that the system can be initially upgraded and maintained year on year for less than €120K, the return on investment would make sense for John to undertake this work. It would be a different conversation the next time though when he wants to go to five 9’s availability.


Deciding on an appropriate level for your SLA is complicated, and there is a myriad of considerations and inputs which will dictate the “right” answer for your particular situation. Whatever you decide, attempting to achieve higher and higher levels of availability for your system will most probably lead to higher costs and smaller returns on investment. So make sure the level you choose is appropriate from both a business and technical perspective.

~Eoin Campbell

When’s a Deep Dive not a Deep Dive?

Global Windows Azure Bootcamp

This weekend, I attended the Global Windows Azure Deep Dive conference in the National College of Ireland, Dublin. This was a community-organised event, run in conjunction with Microsoft, where local & national IT organisations, educational institutions & .NET communities ran a series of events in parallel in a number of cities around the world. The purpose: deep dive into the latest Windows Azure technology, as well as take part in a massively parallel lab where participants from all over the world would spin up worker roles to contribute to 3D graphics rendering based on depth data from a Kinect. Alas, Deep it was not, and Dive we didn’t.

I suppose I can’t complain too much. You get what you pay for and it was a free event but I’d have serious reservations about attending this type of session again. Don’t get me wrong, I don’t want to sound ungrateful, and fair dues to the organisers for holding the event but if you’re going to advertise something as a “Deep Dive” or a “Bootcamp” then that has certain connotations that there would actually be some Advanced Hands-on learning.

Instead the day would barely have qualified as a Level 100 introduction to 2 or 3 Windows Azure technologies, interspersed with sales pitches, student demos of their project work and filler talks relating to cloud computing in general. Probably most disappointingly, we didn’t actually take part in the RenderLab experiment, which kinda torpedoed the “Global” aspect of the day as well. You can see the agenda below; the practical aspects were the handful of Demo and Lab slots.

Time Topic
0930 Welcome – Dr Pramod Pathak, Dean, School of Computing, NCI
0935 Schedule for the day – Vikas Sahni, Lecturer, School of Computing, NCI
0940 How ISIN can help – Dave Feenan, Manager, ISIN
0945 Microsoft’s Best Practice in Data Centre Design – Mark O’Neill, Data Center Evangelist, Microsoft
1000 Virtual Machines – Demo and Lab 1 – Vikas Sahni, Lecturer, School of Computing, NCI
1100 Careers in the Cloud – Dr Horacio Gonzalez-Velez, Head, Cloud Competency Center, School of Computing, NCI
1110 Graduates available today – Robert Ward, Head of Marketing, NCI
1120 Break
1135 Web Sites – Demo and Lab 2 – Vikas Sahni, Lecturer, School of Computing, NCI
1235 Building the Trusted Cloud – Terry Landers, Regional Standards Officer for Western Europe, Microsoft
1300 Lunch
1400 Tools for Cloud Development – Colum Horgan, InverCloud
1410 Windows Azure Mobile Services – Overview and Showcase –  Vikas Sahni, Lecturer, School of Computing, NCI and Students of NCI
1440 Developing PaaS applications – Demo – Michael Bradford, Lecturer, School of Computing, NCI
1530 Break
1545 Windows Azure – The Big Picture – Vikas Sahni, Lecturer, School of Computing, NCI
1645 Q&A

Alas, even the practical aspects of the day were extremely basic and the kind of thing that most people in the room had done/could do in their own spare time.

  • During the Virtual Machines Lab, we spun up a Virtual Machine from the Windows Azure Gallery and remote desktop connected into it.
  • During the Websites Lab, we deployed a WordPress install… unless you were feeling brave enough to do something else. To be fair, I hadn’t done a hands-on GitHub deploy of the code before, so that was interesting.
  • During the PaaS Application Demo… well, it was supposed to be a Hello World web/worker role deployment, but god love the poor chap, he was out of his depth with Visual Studio, had a few technical hiccups, and it was just a bad demo. The upshot was that we ran out of time before there was any opportunity for hands-on time in the room.

At 15:30 we left… I didn’t have another lecture in me, although at least we’d had the common courtesy to stay that long. Half the room didn’t come back after lunch.

The takeaways: I know that a lot of time and effort goes into these events, and particularly when they are free, that time and effort is greatly appreciated. But you need to make sure you get your audience right. If you advertise advanced and deliver basic, people will be disappointed. That was clear from the mass exodus that occurred during the day… I’m kinda curious to know if there was anyone around for the Q&A at all. I’ll be sure as heck checking the agenda on these types of events before committing my time to them in future. We aren’t currently using Windows Azure in our company yet, and embarrassingly I had been promoting it internally and had convinced several of my colleagues to give up their Saturday for it.

~Eoin C

Unexpected Variable Behaviour in DOS Batch and Delayed Expansion

What would you expect the following piece of code to print, if the directory ‘a’ doesn’t exist?

IF '1'=='1' (
        CD a
        ECHO %ERRORLEVEL%
)

The system cannot find the path specified.
0

Not very intuitive, right?

This is because the DOS batch processor treats the whole IF statement as one command, expanding the variables only once, before it executes the conditional block. So %ERRORLEVEL% is expanded to its value, which is 0, before the block even starts, and the failed CD never changes what gets echoed. You can get around this by enabling delayed expansion. As the name suggests, this forces the batch processor to expand variables only when it is required to do so, in the middle of execution.
To enable this behaviour you need to do 2 things.
  1. SETLOCAL ENABLEDELAYEDEXPANSION at the top of your script.
  2. Replace %-delimited variables with exclamation marks, i.e. %ERRORLEVEL% becomes !ERRORLEVEL!

Now our script looks like this, and behaves as expected.

Working Script

REM Enable Delayed Expansion
SETLOCAL ENABLEDELAYEDEXPANSION
IF '1'=='1' (
        CD a
        REM Use exclamation marks instead of percentages
        ECHO !ERRORLEVEL!
)
For when PowerShell just isn’t retro enough 😉
~Eoin C