
Migrating SharePoint to Azure

Overcoming the challenges of hosting SharePoint 2007 and 2013 within the same infrastructure

Andrew Walman
Brian Jones


Fuse has recently completed the migration of a customer's SharePoint 2007 farm from a "traditional" hosting environment to the Azure IaaS cloud. The farm runs four major websites, including the main website of our local county council (Northamptonshire), so it was critical that downtime was kept to a minimum. At the same time as migrating the farm, we were also introducing a new SharePoint 2013 farm to support a newly developed search-driven site, and ultimately to upgrade and host the existing 2007 sites. In this post, I'll explain some of the challenges and successes we experienced.


Northamptonshire County Council is a customer we've worked with for many years, particularly around SharePoint. The 2007 farm hosting was originally built by Fuse as a combination of physical and virtual (VMware) servers in a three-tier farm, with a separate UAT environment. Active Directory services were provided by local AD servers replicating over a VPN link back to the customer's site. Shared load balancers and firewalls were managed by the datacentre providers, as part of a managed service that included OS monitoring and patching. Over three years, the infrastructure worked pretty much faultlessly, with the few incidents of unexpected downtime caused by factors outside of anyone's control.

The sites themselves are developed by a combination of Fuse and NCC, with all support provided by Fuse. Over the years, the SharePoint sites have grown to include many custom solutions, forms-based authentication, custom data-driven applications, and data pulled from web services hosted in Northampton and elsewhere. A custom mobile solution and multiple SSL sites add to the overall complexity of the hosting environment. Even though we carry out every deployment and have overall responsibility for support, the prospect of moving the farm to a new infrastructure was a daunting one.

So Why Move?

Three main factors drove the move to a new infrastructure:

  1. SharePoint 2007 is an ageing platform, and any new sites would need to be developed on 2013. That requires a new farm, while the existing one must be maintained as the sites are upgraded – the cost of hosting both farms simultaneously with the current provider, who requires an annual commitment, was prohibitive.
  2. NCC is now part of LGSS (another site hosted by Fuse on Azure in this farm!), who will be building their own hosting environments in the medium to long term. Using Azure, we could build an environment that could easily be moved back on-premises at any time, or simply taken over by LGSS.
  3. We had worked with other customers to build large-scale SharePoint farms on Azure, and found the reliability, features and flexibility to surpass that of any hosting environment we'd ever come across. Combined with the seemingly ever decreasing costs and increasing features, Azure ticks many boxes.

Having convinced the customer that Azure was the right way to go, we engaged with Microsoft's UK Azure team to set up the subscription as part of NCC's EA agreement. This gave them improved pricing and access to the Azure super-portal that allows management of multiple subscriptions – a useful feature that enabled NCC to delegate access to an entire subscription for us (something we and Microsoft guided NCC through setting up), and will allow additional subscriptions to be used for other projects with separate budgets. Azure's pay-as-you-go model doesn't at first glance seem to fit with local government procurement, but done the right way, it's very simple to set up, and is just an extension of an existing agreement. This also covers off any licensing issues, which can be complex with a mixed on-premises/cloud infrastructure.

The Challenges

We've built several SharePoint 2013 farms in Azure, and they've all been relatively straightforward. As SharePoint 2013 and SQL 2012 are built for Windows 2012, which sits very comfortably on Azure, there are no real issues to overcome – essentially you build the farm as you would on-premises, with certain tweaks for optimising performance on Azure, particularly with the SQL servers. A SharePoint 2007 farm is a different matter:

  • Only supports Windows 2008 R2 – which is still OK on Azure
  • No support for SQL 2012 or Availability groups
  • Requires sticky-session support in a load-balanced environment
  • No SNI (Server Name Indication) support in IIS 7

We also had our own challenges to overcome with this particular environment:

  • VPN needed to replicate the Active Directory, for user authentication of content editors from NCC, using their corporate credentials
  • Transferring the existing content and application data
  • Ensuring all the various features of each site were operational prior to go live
  • Co-ordinating all this at the same time as launching a brand-new site

Here's some detail on how we overcame each challenge to build a successful solution.

Two Farms in One

In order to minimise the costs of hosting two farms, we knew we needed to share as much as possible between the farms, while still ensuring high availability. The sites in the farm regularly exceed 15,000 visitors per day, so the infrastructure must be able to comfortably handle those traffic levels. During certain periods (bad weather, for example) traffic can quadruple. So we designed the following architecture for production:

  • Shared Infrastructure
    • Two Windows 2012 Active Directory Servers, running DNS and replicating with the AD servers at NCC
    • Two Windows 2012 servers, in a Windows cluster, running two instances of SQL 2012 – an Availability group for SharePoint 2013, and a mirror/witness set for 2007
    • Azure configured to run a front end service for 2007, and a separate service for 2013 - effectively two distinct load balancers
    • All servers connected to the same virtual network, divided into subnets, and connected to NCC's infrastructure through a VPN tunnel
  • Two SharePoint 2007 front end servers running on Windows 2008 R2
  • One SharePoint 2007 server running on Windows 2008 R2 as a search crawler/indexer etc.
  • Four SharePoint 2013 servers running on Windows 2012, with two acting as the application/batch processing tier
  • UAT environment consists of two SharePoint 2013 servers and one SharePoint 2007 server, connected to a single SQL server running two instances of SQL. The servers are joined to the same AD as production.

All the SharePoint 2007 servers have more memory, disk space and processing power than the existing farm, but are still cheaper to host on Azure.

SQL Issues with 2007

Although SharePoint 2007 doesn't officially support SQL 2012, adding another pair of large SQL servers when we already had a decent set seemed worth avoiding, particularly as the intention was to wind down 2007 over the life of the solution. Before we put the solution together we looked into what it would take to make SharePoint 2007 work on SQL 2012, and it turns out, not a lot – there are some good blog posts out there explaining the necessary steps, and we've found no issues so far. We have kept SharePoint 2007 on its own instance, and limited memory accordingly for each instance, so we don't make any SQL changes that could affect our 2013 farm's supportability.
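As a sketch of that per-instance memory cap (the 4096 MB figure here is illustrative, not the value from our farm), the limit is set with sp_configure against each instance:

```sql
-- Run against the SharePoint 2007 instance to cap its memory use,
-- leaving the remainder for the 2013 availability group instance.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- illustrative value
RECONFIGURE;
```

Setting a ceiling on each instance stops one farm's buffer pool from starving the other when both run on the same Windows cluster.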

This has also meant we could take different high-availability approaches for each farm. 2013 natively supports all the SQL 2012 HA features, so we've gone for the one that best suits Azure: availability groups on a Windows cluster. 2007 doesn't (though we did try hard to convince it!), so instead we've opted for a witness/mirror configuration.
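For reference, a witness/mirror pairing is established per database with ALTER DATABASE; the server names below are placeholders, and the content database must first be restored on the mirror server WITH NORECOVERY:

```sql
-- On the mirror server (after restoring the database WITH NORECOVERY):
ALTER DATABASE WSS_Content
    SET PARTNER = 'TCP://sql-principal.contoso.local:5022';

-- On the principal server:
ALTER DATABASE WSS_Content
    SET PARTNER = 'TCP://sql-mirror.contoso.local:5022';

-- Adding a witness enables automatic failover in high-safety mode:
ALTER DATABASE WSS_Content
    SET WITNESS = 'TCP://sql-witness.contoso.local:5022';
```

Each SharePoint 2007 content database needs its own mirroring session, whereas the 2013 availability group fails over all its databases as one unit.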

Load Balancing

In a previous blog post I've mentioned how good Azure can be for load balancing SharePoint – but this was for SharePoint 2013, which handles sessions across the farm much better than 2007. Although anonymous users are fine without sticky sessions, there are a number of features within the sites that require a sticky session, which Azure load balancing doesn't provide. The first issue this caused was with site editing for Windows-authenticated users, which on postback would generate an invalid view state if the request went to the other server – this is easily fixed by having the same machine key in each application's web.config, which fixes a similar issue for forms-based authentication too.
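A minimal sketch of that fix, with placeholder key values – the same <machineKey> element goes into the web application's web.config on every front end, so view state generated by one server validates on the other:

```xml
<!-- Identical on every web front end for the same web application.
     The key values below are placeholders, not real keys; generate
     your own rather than copying these. -->
<system.web>
  <machineKey
    validationKey="0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF"
    decryptionKey="FEDCBA9876543210FEDCBA9876543210"
    validation="SHA1"
    decryption="AES" />
</system.web>
```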

All the other transaction-based custom applications within the farm seem to handle non-sticky sessions well, but this could just be luck – so we looked at some different solutions in case our luck doesn't hold out. The first was a virtual load-balancing device from Kemp, which is available from the Azure store. This is essentially a full-featured load balancer that runs behind the Azure load balancer and in front of the SharePoint servers, and can handle sticky sessions. Secondly, we looked at the Application Request Routing (ARR) module on IIS 8, which turns any Windows server into a full-featured load balancer/cache server. As we also needed an SSL solution, this is what we went with, though currently it's only handling our SSL issue.


Because our 2007 farm hosts multiple SharePoint applications, and some of those use SSL, we needed a way of having multiple SSL certificates on the same port. In our old hosting environment, our hosting providers gave us another IP address and we simply forwarded that on a different internal port to SharePoint. With Azure, things aren't quite so simple – another IP address requires another "service", and virtual machines can only exist in one service at a time. SNI is an option, but only for the 2013 farm, as it's a feature of IIS 8. For our 2007 farm we needed a similar solution to our existing hosting, and came up with using ARR to forward the requests from a new Azure service to our SharePoint 2007 front ends, again on an internal port. As the ARR servers are also Windows 2012, we can use SNI on them to identify the requested site and forward it correctly to the SharePoint 2007 servers, enabling us to host multiple SSL sites without requiring any more servers or services.
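As an illustration of the forwarding step (the hostname and internal port are hypothetical, not our actual values), a URL Rewrite rule on the ARR servers might look like this – SNI terminates the SSL connection, then the matched host is rewritten to the 2007 front ends:

```xml
<!-- Sketch of a URL Rewrite rule on the ARR server. SNI selects the
     certificate for the requested host; this rule then forwards the
     decrypted traffic to the SharePoint 2007 front ends on an
     internal port. Hostname and port are illustrative. -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Forward-SecureSite" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^secure\.example\.gov\.uk$" />
        </conditions>
        <action type="Rewrite" url="http://sp2007-frontends:8081/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

One rule per SSL site keeps the mapping between certificate and internal port explicit and easy to audit.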

VPN and Active Directory Replication

Our existing hosting environment already used a VPN to enable AD traffic between sites. From this and other Azure implementations, we knew that the Azure site-to-site VPN and virtual networks would provide a workable solution, but we also knew that any VPN into a large, complex environment is never straightforward. We also had to ensure the replication topology could handle the two branch sites. This took a lot of work with the customer's networking and AD teams, and had we not had previous experience with Azure VPNs, we might not have persisted. I'm glad to say it does all work, and enables other solutions, such as on-premises backup, monitoring and so on, to be used seamlessly with Azure – effectively the Azure servers in Dublin are now as much a part of the NCC infrastructure as those in Northampton.

Data Transfer

In order to perform the migration with minimal downtime, we had set the farms up and configured them with all the required settings and solutions over a number of weeks, testing against a copy of the content and application databases we had copied across as SQL backup files. The final step to making them live was to freeze content and migrate the data one more time, before switching DNS entries to make the new farm available to the public. However, we'd found that the initial data copy, where we'd used the two VPN tunnels to copy the live SQL data up to NCC and then back out to Azure, had taken much too long.

We found a much better way was to use the Azure/SQL backup tool, which can automatically transfer any SQL backup, as it's created, to Azure storage – by connecting this to storage available to our new farm servers, the backups were ready in minutes instead of hours and meant we could commence the content restore that night, shortening the entire migration process by around a day. We simply scheduled the SQL backup to occur after the content freeze and waited for it to arrive at Azure.
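The same result can also be achieved natively from T-SQL on SQL Server 2012 SP1 CU2 and later using BACKUP TO URL – a sketch with illustrative storage account, container and credential names, not the exact setup we used:

```sql
-- One-off: store the Azure storage account credentials in SQL Server.
-- Account name, container and key below are placeholders.
CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',
    SECRET = '<storage access key>';

-- Back the content database up straight to Azure blob storage,
-- where the new farm's servers can restore it directly.
BACKUP DATABASE WSS_Content
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/WSS_Content.bak'
    WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;
```

Backing up straight to blob storage removes the double hop over the VPN tunnels entirely, which is where the hours were being lost.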

Planning, Testing and Project Management

Of course, we can't pretend the whole process went completely smoothly, and there were still some minor issues once the sites went live. However, these were quickly identified and fixed, thanks to the way the project was managed between NCC and Fuse, with frequent update meetings, shared project resources and effective communication. A well-developed test plan ensured that once the migration was completed and the new farm made live, all the features could be tested and the integration endpoints updated – this would not have been possible without both teams working together.

Successful Delivery

This was a major set of changes for NCC, with significant risks and a large number of people involved on both sides. Fuse overcame a number of technical challenges to deliver a solid platform that has saved a significant amount of public money and enabled NCC's digital team to continue delivering award-winning services to the public.

 About us

Fuse Collaboration Services is a Cloud Solution Provider and Microsoft Gold Partner specialising in delivering SharePoint, Skype for Business, and Azure cloud-based solutions, based in Northampton, UK.
