Xamarin & Azure Notification Hubs

A few weeks back the Azure Dublin user group hosted members of the Microsoft and Xamarin teams in Dublin for a half-day conference. While the Microsoft & Xamarin teams were there to cover some introductory sessions on using Xamarin, they also invited some industry partners to talk about other technical topics. I presented a quick 15-minute intro on how to use the Azure Notification Hubs platform to send push notifications to the platform-specific notification services for each of the major mobile platforms.

Azure Notification Hubs is Microsoft Azure's scaled-out infrastructure for multi-platform push notifications. It provides a single hosted platform which you can configure to relay your push notifications to all the major platform-specific push notification services.

Currently, Azure Notification Hubs supports sending notifications to:

  • Windows Notification Service (WNS) for Windows Phone & Universal apps on Windows 8 and Windows 10
  • The legacy Microsoft Push Notification Service (MPNS) for older Windows Phone 8 apps
  • Apple Push Notification Service (APNS) for Apple devices such as iPhones and Macs running iOS and OS X
  • Firebase Cloud Messaging (FCM) and Google Cloud Messaging (GCM) for Android devices & Chrome apps
  • Baidu Cloud Push for Android in China
  • Amazon Device Messaging (ADM) for Amazon Kindles

Setup can be a little tricky for the respective platforms. To configure Google, for example, you'll need to log in to the Google Developer Console and enable the Firebase/Google Cloud Messaging API, recording your API key. You'll also need to configure a project under the IAM & Admin sections and take note of your Project Number, which will be used in your source code.

Apple's Push Notification Service, on the other hand, requires that you generate a CSR and upload it to the Apple Developer site. That allows you to create a certificate, which is in turn uploaded to the Azure Notification Hub configuration.

Once you've configured the various platform notification services in Azure, you can start pushing notifications out to various subsets of your user base. Notification Hub clients support connecting with tag configurations. This allows you to dynamically tag your client base by device, demographic, specific user or some other categorisation, and then target those subsets of users.
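If you're using the .NET back-end SDK, sending to a tagged subset only takes a few lines. The following is a minimal sketch using the Microsoft.Azure.NotificationHubs NuGet package; the connection string, hub name, tag and payloads are placeholders rather than values from the demo repository.

using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;

public static class PushSender
{
    public static async Task SendToTagAsync()
    {
        // Connect to the hub using its full-access connection string (placeholder values)
        var hub = NotificationHubClient.CreateClientFromConnectionString(
            "<DefaultFullSharedAccessSignature connection string>", "<hub-name>");

        // Only devices registered with this tag will receive the notifications
        const string tag = "sales-team";

        // APNS expects an 'aps' payload; GCM/FCM expects a 'data' payload
        await hub.SendAppleNativeNotificationAsync(
            "{\"aps\":{\"alert\":\"Hello from Notification Hubs\"}}", tag);
        await hub.SendGcmNativeNotificationAsync(
            "{\"data\":{\"message\":\"Hello from Notification Hubs\"}}", tag);
    }
}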

You'll find some demo code to get you started in our source code demo repository on GitHub, and a copy of the presentation slides from the demonstration below.

~Eoin C

Multiple Github accounts & SSH keys on the same machine

If, like me, you have two or more different GitHub accounts on the go, then accessing and committing as both on the same machine can be a challenge.
In my case, I have two accounts: one for work, associated with my company email, and a second for my own personal code.

If you'd like to be able to check out, code and commit against different repos across different GitHub accounts on the same machine, you can do so by setting up multiple SSH keys and adding hostname aliases to your .ssh config file.

First of all, you'll need to generate your SSH keys. If you haven't done this already, you can use the following commands to generate them.


$ ssh-keygen -t rsa -C "eoin@work.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/c/Users/eoin/.ssh/id_rsa): id_rsa_eoin_at_work

$ ssh-keygen -t rsa -C "eoin@home.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/c/Users/eoin/.ssh/id_rsa): id_rsa_eoin_at_home

Once you've created your keys, you'll see two files for each key pair (the file you specified and a .pub) in your ~/.ssh directory. You can go ahead and add the respective public keys to each of your GitHub accounts; it's under GitHub > Settings > SSH and GPG keys. You'll also need to add the private keys to your local SSH agent, as shown below.
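If the keys aren't picked up automatically, register them with your SSH agent. A quick sketch (from Git Bash on Windows, or any *nix shell), using the file names generated above:

# start an ssh-agent session and add both private keys
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa_eoin_at_work
ssh-add ~/.ssh/id_rsa_eoin_at_home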

Next you’ll want to create an ssh config file in your ~/.ssh directory. You can see mine below.

Host github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_eoin_at_work

Host personal.github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_eoin_at_home

Here's the trick: when you execute a git clone command, the host in that command is not a real DNS hostname. It is the host entry specified on the first line of each section in the above file, so you can very easily change it. Now, if I want to check out work-related projects from my work account, I can use:

git clone git@github.com:eoincgreenfinch/heartbeat.git

# don't forget to set your git config to use your work metadata.
git config user.name "eoincgreenfinch"
git config user.email "eoin@work.com" 

But if I want to check out code from my personal account, I can easily modify the clone URI as follows:

git clone git@personal.github.com:eoincampbell/combinatorics.git

# don't forget to set your git config to use your personal metadata.
git config user.name "eoincampbell"
git config user.email "eoin@home.com" 

~Eoin Campbell

Running SQL Server on Azure Virtual Machines

Introduction

We recently started hitting some capacity issues with an SQL Server Reporting Services box hosted
on Microsoft's Azure cloud platform. The server had been set up around the time Microsoft end-of-lifed
their platform-as-a-service report server offering and forced everyone back onto standalone instances.
The server was a Basic A2 class VM (3.5GB RAM, 2 cores). Originally, it only had to handle a small amount of report
creation load, but in recent times that load has gone up significantly, and due to the "peaky"
nature of the customer's usage, we would regularly see periods where the box could not keep up with
report generation requests.

In the past week, we've moved the customer to a new SQL Server 2014 Standard Edition install. Here are a few of the things we've
learned along the way about setting up SQL Server as a standalone instance on an Azure VM.

This information is based on the service offerings and availabilities in the Azure North Europe region as of February 2016

Which Virtual Machine Class?

First off, you should choose a DS-class virtual machine. At the time of writing, Microsoft offer 4 different VM classes
in the North Europe region: A, D, DS and D_V2. Only the DS-class machines currently support Premium Locally
Redundant Storage (Premium LRS), which allows you to attach permanent SSD storage to your server.

Within the DS set, DS1-DS4 have a slightly lower memory-to-core ratio, while the DS11-DS14 set have a higher
starting memory footprint for the same core counts. We went with a DS3 server (4 cores / 14GB), which we can downscale to
a DS1 during out-of-hours periods.


Which Storage Account?

During setup, ensure that you've selected a Premium Locally Redundant Storage account, which will
give you access to additional attachable SSDs for your SQL Server. This can be found under
Optional Configuration > Storage > Create Storage Account > Pricing Tier.


External Security

Security will be somewhat dependent on your specific situation. In our case, this was a
standalone SQL Server with no failover cluster or domain management. The server was set up
with a long username and password (not the john.doe account in the screenshots).

We also locked down the management ports for Remote Desktop and Windows RM, as well as the added
HTTPS and SQL ports. To do this, add the public-to-private port mapping configurations under
Optional Configuration > Endpoints.

Endpoint Configuration

Once you've finished setting up the configuration and Azure has provisioned the server,
you'll want to re-enter the management blades and add ACL rules to lock down port access
to only the IP ranges that should be able to access it: in our case, our development site, customer
site, and Azure-hosted services.

You can add "permit" rules for specific IP addresses to access your server. Once a single
permit rule is added, all other IP addresses/ranges are blocked by default.

Endpoint ACLs

Automated Backups

SQL Server VMs on Azure can now leverage an automated off-server database backup service
which will place your backups directly into Blob Storage. Select SQL Automated
Backup and enable it. You will be asked to specify where you would like to store your
backups and for how long. We chose to use a non-premium storage account
for this, and depending on the inherent value of your backups and whether you
intend to subsequently off-site them yourself, you might want to choose a storage
setup with zone or geo redundancy. You can also enable backup encryption by providing
a password here.

Automated SQL Backup to Storage

Disk Configuration

Now that your server is up and running, you can log in via Remote Desktop. The first
thing you'll want to do is patch your server. As of mid-February 2016, the base image
for SQL Server 2014 on Windows Server 2012 R2 Standard is missing quite a number of
patches: approximately 70 critical updates and another 80 optional updates need to
be installed.

Once you've got your server patched, you can take a look at the disk setup. If you've
chosen a DS-class server, you'll notice that you have two disks: a regular OS disk and
an SSD temp disk. This temp disk is NOT to be used for real data; it is local to
the VM while it's running and will be deallocated and purged if you shut the server
down.

You can, however, purchase additional SSD disks very easily. Head back out to the Azure
Management Portal, find your VM, go to Settings and choose Disks. In the following
screenshot, we've chosen to add an additional 2 x 128GB (P10 class) disks to
the server. The SQL Server best practices document recommends using the 1TB (P30 class) disks,
which do give a significant I/O bump, but they are also more expensive.

Ensure that you specify "Read Only" host caching for your data disk and no caching for
your log disk to improve performance.

Adding Extra Disks
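If you prefer scripting to the portal, the same disks can be attached with the AzureRM PowerShell module. This is just a sketch assuming unmanaged disks in a premium storage account; the resource group, VM, storage account and VHD names are placeholders.

# Attach two empty 128GB (P10) data disks with the recommended caching settings
$vm = Get-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MySqlVm"

Add-AzureRmVMDataDisk -VM $vm -Name "sql-data" -Lun 0 -DiskSizeInGB 128 -CreateOption Empty `
    -Caching ReadOnly -VhdUri "https://mypremiumstorage.blob.core.windows.net/vhds/sql-data.vhd"

Add-AzureRmVMDataDisk -VM $vm -Name "sql-logs" -Lun 1 -DiskSizeInGB 128 -CreateOption Empty `
    -Caching None -VhdUri "https://mypremiumstorage.blob.core.windows.net/vhds/sql-logs.vhd"

Update-AzureRmVM -ResourceGroupName "MyResourceGroup" -VM $vm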

Once your disks are attached, you can access and map them inside your VM. We chose to
set up the disks using the newer Windows Server 2012 Resilient File System (ReFS) rather
than NTFS. Previously there were potential issues with using ReFS in conjunction with
SQL Server, particularly in relation to sparse files and the use of DBCC CHECKDB; however,
these issues have been resolved in SQL Server 2014.

Disk Configuration
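For reference, initialising and formatting the newly attached disks can also be scripted rather than done through Disk Management. A rough sketch; the volume label is arbitrary, and you can reassign the drive letters afterwards to match the M: and L: layout used below.

# Bring the raw disks online, partition them and format them as ReFS
Get-Disk | Where-Object PartitionStyle -Eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "SQLDisk" -Confirm:$false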

Moving your Data Files

SQL Server VM images come pre-installed with SQL Server, so we'll need to do a little bit
of reconfiguration to make sure all our data and log files end up in the correct place. In the
following sections, disk letters & paths refer to the following:

  • C: (OS Disk)
  • D:\SQLTEMP (Temp/Local SSD)
  • M:\DATA\ (Attached Perm SSD intended for Data)
  • L:\LOGS\ (Attached Perm SSD intended for Logs)

First, we need to give SQL Server permission to access these other disks. Assuming
you haven't changed the default service accounts, your SQL Server instance will
be running as the NT SERVICE\MSSQLSERVER account. You'll need to give this account full
permissions on each of the locations where you intend to store data and log files.

Folder Permissions
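If you'd rather script this than click through the Explorer security dialogs, something like the following should grant the service account full control on each location (paths per the layout above):

REM Grant the SQL Server service account full control on each data location
icacls "M:\DATA" /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)F
icacls "L:\LOGS" /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)F
icacls "D:\SQLTEMP" /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)F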

Once the permissions are correct, we can specify those directories as new defaults
for our Data, Logs and Backups.

Setting Default Paths for Data & Logs
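The same defaults can also be set with T-SQL rather than the SSMS Server Properties dialog. A sketch using the instance-aware registry procedure; the backup folder here is just a placeholder, and the new defaults only take effect after the service restarts.

-- Set the default data, log and backup locations for the instance
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'M:\DATA';
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'L:\LOGS';
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer', N'BackupDirectory', REG_SZ, N'M:\BACKUPS';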

Next, we'll move our master MDF and LDF files by performing the following steps.

  1. Launch SQL Server Configuration Manager
  2. Under SQL Server Services, select the main server instance and stop it
  3. Right-click the server instance, go to Properties and review the Startup Parameters tab
  4. Modify the -d and -l parameters to point to the new locations of the master data and log files, and the -e parameter if you also want to move the error log (see the example below)
  5. Open Explorer and navigate to the default directory where the MDF files and LDF files are located (C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\). Move the master MDF and LDF to your new paths
  6. Restart the server

Moving the Master Database
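For reference, after step 4 the startup parameters should look something like this: -d points at the master data file, -l at the master log file and -e at the error log (paths per the layout above).

-dM:\DATA\master.mdf
-eC:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\ERRORLOG
-lL:\LOGS\mastlog.ldf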

When our server comes back online, we can move the remainder of the default databases.
Running the following series of SQL commands will update the system to expect the MDFs
and LDFs at the new locations on the next start-up.

ALTER DATABASE [msdb] MODIFY FILE ( NAME = MSDBData , FILENAME = 'M:\DATA\MSDBData.mdf' )
ALTER DATABASE [msdb] MODIFY FILE ( NAME = MSDBLog , FILENAME = 'L:\LOGS\MSDBLog.ldf' )
ALTER DATABASE [model] MODIFY FILE ( NAME = modeldev , FILENAME = 'M:\DATA\model.mdf' )
ALTER DATABASE [model] MODIFY FILE ( NAME = modellog , FILENAME = 'L:\LOGS\modellog.ldf' )
ALTER DATABASE [tempdb] MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLTEMP\tempdb.mdf');
ALTER DATABASE [tempdb] MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLTEMP\templog.ldf');

--You can verify them with this command
SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files 
	

Shut down the SQL instance one more time, physically move your MDF and LDF files to
their new locations in Explorer, and finally restart the instance. If there are any
problems with the setup or the server fails to start, you can review the error log at
C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\ERRORLOG

Conclusions


There are a number of other steps that you can then perform to tune your server.

You should also set up SSL/TLS for any endpoints exposed to the outside world
(e.g. if you're going to run the server as an SSRS box). Hopefully you will now have a far
more performant SQL instance running in the Azure cloud.

~Eoin Campbell

When's a Deep Dive not a Deep Dive?

Global Windows Azure Bootcamp

This weekend, I attended the Global Windows Azure Deep Dive conference in the National College of Ireland, Dublin. This was a community-organised event, run in conjunction with Microsoft, where local & national IT organisations, educational institutions & .NET communities ran a series of events in parallel in a number of cities around the world. The purpose: deep dive into the latest technology available from Microsoft, as well as take part in a massively parallel lab where participants from all over the world would spin up worker roles to contribute to 3D graphics rendering based on depth data from a Kinect. Alas, Deep it was not, and Dive we didn't.

I suppose I can't complain too much. You get what you pay for, and it was a free event, but I'd have serious reservations about attending this type of session again. Don't get me wrong, I don't want to sound ungrateful, and fair dues to the organisers for holding the event, but if you're going to advertise something as a "Deep Dive" or a "Bootcamp" then that carries certain connotations that there will actually be some advanced, hands-on learning.

Instead, the day would barely have qualified as a Level 100 introduction to 2 or 3 Windows Azure technologies, interspersed with sales pitches, student demos of their project work and filler talks relating to cloud computing in general. Probably most disappointingly, we didn't actually take part in the RenderLab experiment, which kinda torpedoed the "Global" aspect of the day as well. You can see the agenda below. I've highlighted the practical aspects in red.

Time Topic
0930 Welcome – Dr Pramod Pathak, Dean, School of Computing, NCI
0935 Schedule for the day – Vikas Sahni, Lecturer, School of Computing, NCI
0940 How ISIN can help – Dave Feenan, Manager, ISIN
0945 Microsoft’s Best Practice in Data Centre Design – Mark O’Neill, Data Center Evangelist, Microsoft
1000 Virtual Machines – Demo and Lab 1 – Vikas Sahni, Lecturer, School of Computing, NCI
1100 Careers in the Cloud – Dr Horacio Gonzalez-Velez, Head, Cloud Competency Center, School of Computing, NCI
1110 Graduates available today – Robert Ward, Head of Marketing, NCI
1120 Break
1135 Web Sites – Demo and Lab 2 – Vikas Sahni, Lecturer, School of Computing, NCI
1235 Building the Trusted Cloud – Terry Landers, Regional Standards Officer for Western Europe, Microsoft
1300 Lunch
1400 Tools for Cloud Development – Colum Horgan, InverCloud
1410 Windows Azure Mobile Services – Overview and Showcase – Vikas Sahni, Lecturer, School of Computing, NCI and Students of NCI
1440 Developing PaaS applications – Demo – Michael Bradford, Lecturer, School of Computing, NCI
1530 Break
1545 Windows Azure – The Big Picture – Vikas Sahni, Lecturer, School of Computing, NCI
1645 Q&A

Alas, even the practical aspects of the day were extremely basic and the kind of thing that most people in the room had done or could do in their own spare time.

  • During the Virtual Machines Lab, we spun up a Virtual Machine from the Windows Azure Gallery and remote desktop connected into it.
  • During the Websites Lab, we deployed a WordPress install… unless you were feeling brave enough to do something else. To be fair, I hadn't done a hands-on GitHub deploy of the code before, so that was interesting.
  • During the PaaS Application demo… well, it was supposed to be a Hello World web/worker role deployment, but God love the poor chap, he was out of his depth with Visual Studio, had a few technical hiccups, and it was just a bad demo. The upshot was that we ran out of time before there was an opportunity for any hands-on time in the room.

At 15:30 we left… I didn’t have another lecture in me, although at least we’d had the common courtesy to stay that long. Half the room didn’t come back after lunch.

The takeaways: I know that a lot of time and effort goes into these events, and particularly when they are free, that time and effort is greatly appreciated. But you need to make sure you get your audience right. If you advertise advanced and deliver basic, people will be disappointed. That was clear from the mass exodus that occurred during the day… I'm kinda curious to know if there was anyone around for the Q&A at all. I'll be sure as heck checking the agenda on these types of events before committing my time to them in future. We aren't using Windows Azure in our company yet, and embarrassingly I had been promoting it internally and had convinced several of my colleagues to give up their Saturday for it.

~Eoin C

Automatically update the AssemblyFileVersion attribute of a .NET Assembly

Automatic AssemblyFileVersion Updates

There is support in .NET for automatically incrementing the AssemblyVersion of a project by using the “.*” notation. e.g.
[assembly: AssemblyVersion("0.1.*")]

Unfortunately the same functionality isn't available for the AssemblyFileVersion. Often, I don't want to bump the AssemblyVersion of an assembly, as it will affect the strong name signature of the assembly, and perhaps the change (a bug fix) isn't significant enough to warrant it. However, I do want to automatically increment the file version so that, in a deployed environment, I can right-click the file and establish when it was built & released.
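In an AssemblyInfo.cs (or GlobalAssemblyInfo.cs) that combination looks something like the snippet below: a fixed AssemblyVersion so the strong name stays stable, and an AssemblyFileVersion that the pre-build script rewrites. The version numbers shown are just placeholders.

using System.Reflection;

// Fixed: bumping this changes the assembly's strong name, so only do it deliberately
[assembly: AssemblyVersion("0.1.0.0")]

// Rewritten on every build by Update-AssemblyFileVersion.ps1 using the formulas below
[assembly: AssemblyFileVersion("0.1.5901.12345")]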

Enter the Update-AssemblyFileVersion.ps1 file.

This PowerShell script (heavily borrowed from David J Wise's article) runs as a pre-build command on a .NET project. Simply point the command at an assembly info file (or GlobalAssemblyInfo.cs if you're following my suggested versioning tactics) and, ta-da, automatically updating AssemblyFileVersions.

The Build component of the version number will be set using the following formula, based on a day count since the year 2000.

# Build = (201X-2000)*366 + (1==>366)
#
    $build = [int32](((get-date).Year-2000)*366)+(Get-Date).DayOfYear
 

The Revision component of the version number will be set using the following formula, based on seconds in the current day.

# Revision = (1==>86400)/2 # .net standard
#
    $revision = [int32](((get-date)-(Get-Date).Date).TotalSeconds / 2)
 

The Major & Minor components are not set to update, although they could be. Simply add the following command to your pre-build event and you're all set.

%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe 
    -File "C:\Path\To\Update-AssemblyFileVersion.ps1"  
    -assemblyInfoFilePath "$(SolutionDir)\Project\Properties\AssemblyInfo.cs"

~Eoin C