Keynote
During the //build/ keynote today, there were several impressions Microsoft wanted to ensure everyone walked away with.
The first impression is “Microsoft is listening to your feedback” – during the Windows 10 portion of the keynote, Microsoft spoke about continually refining how Windows 10 looks and interacts with users. I was happy to see Start menu changes that make Windows 10 feel more like Windows 7.
The second impression is “Microsoft wants to give you what you want.” Here is the reality check – the development community is more than just .NET developers. In fact, .NET is a small part of the larger development community, yet .NET developers have done a remarkable job of staying insulated from that broader community for far too long. Microsoft has realized just that, and now, with its open source initiative and focus on other platforms, .NET developers will have no choice but to embrace alternate platforms and the wider development community. Two concrete examples – the open sourcing of .NET and Visual Studio Code (Microsoft’s new micro-IDE for Windows, Mac, and Linux). As an aside, the groundwork for Visual Studio Code has been laid over the past year – I would have been surprised if an announcement like Visual Studio Code hadn’t been made.
The third and final impression is “Microsoft is an innovator, and holographic technology will change the way in which we interact with each other and with technology.” I can say no more. There were three amazing HoloLens demonstrations today. Each showcased a different use of HoloLens, almost as a justification or explanation of why we would want and use this technology. Each demonstration was simply beautiful, and by the conclusion of the HoloLens segment, I would have (without question) dropped $1,000 to buy a developer kit immediately.
Sessions
The remainder of my day was spent in sessions and speaking with various Microsoft and third-party vendor employees. Before I move on to the sessions I attended, I want to say that I enjoyed meeting new people and learning about the new features and products that are just around the corner. Truly awesome. One of the highlights was working with the successor to Microsoft’s Perceptive Pixel acquisition from several years ago. Imagine an 80” multi-point touch screen that allows users to collaborate on content while using Skype for Business. You draw on the screen, I draw on the screen, the remote parties collaborate by interacting with OneNote – and when you’re finished, the large “Surface” emails all meeting participants the OneNote notebook. Sign me up.
Azure App Service Architecture
The lesser Scotts – Scott Hunter and Scott Hanselman – led this session looking at the new Azure App Service offerings – Web Apps, API Apps, BizTalk Apps, and Logic Apps. Web, API, and BizTalk are really just a rebranding effort (with some price reductions from bundling), but Logic Apps are something entirely new. This feature went into preview on the Azure Portal several weeks ago, but its true power was on display today. The worst part of being a developer is writing “plumbing code” – the boilerplate code you write over and over and over again. Logic Apps help you out by simplifying complex workflows and long-running tasks, providing the framework needed to orchestrate them. Logic Apps ARE the glue that binds sequential tasks together – and with this glue, you can easily orchestrate complex data flows from an on-premises solution to cloud-based solutions, all while staying up to date on progress via social media and other communication channels.
Cross-Platform Continuous Delivery with Release Management to Embrace DevOps
In this session, Donovan Brown began by introducing the audience to Release Management – Microsoft’s automated deployment and configuration tool that ships as a component of Team Foundation Server. Next, Donovan talked briefly about DSC – Desired State Configuration, a technology used to describe how an environment SHOULD look (in terms of installed and configured software and prerequisites) and then force the environment to automatically configure itself to those specifications.
I came to this session because I love TFS, Release Management, and the concept of automated build, test, and deploy (BTD) pipelines. Donovan showed us some of the pre-release (alpha) features of Visual Studio Online with respect to Release Management. The alpha bits of Release Management have tie-ins to Chef and DSC. In the demo, we saw a full release pipeline for two apps (.NET and Java) following BTD. The .NET app used MSBuild and the Java app used Maven for compilation – all in the hosted Visual Studio Online environment.
As for testing, there are built-in integrations with MSTest and Selenium to run automated coded UI tests that can be configured from within Release Management. With the current version of the Release Management agent, running coded UI tests requires you to pre-install the test agents and do quite a bit of configuration. The newer version of Release Management and the new Release Management agent make running coded UI tests simple: you tell Release Management to run coded UI tests, and the new agents take care of everything else.
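To make that concrete, here is a minimal sketch of the kind of Selenium-driven UI test a pipeline like this would kick off. I’m assuming the standard Selenium WebDriver .NET bindings with MSTest – not any Release Management-specific API – and the URL and element ID are hypothetical placeholders.

```csharp
// A minimal, hypothetical coded UI test of the sort Release Management
// can trigger. Assumes the Selenium WebDriver .NET bindings and MSTest;
// the URL and element ID are placeholders for illustration only.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class SmokeTests
{
    [TestMethod]
    public void HomePage_DisplaysWelcomeBanner()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            // Point the test at whatever environment the release
            // pipeline just deployed to (QA, Prod, etc.).
            driver.Navigate().GoToUrl("http://qa.myapp.example.com/");

            IWebElement banner = driver.FindElement(By.Id("welcome-banner"));
            Assert.IsTrue(banner.Displayed, "Welcome banner should be visible.");
        }
    }
}
```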
Similar to on-premises Release Management, you have Environments and Tasks. An Environment is a named collection of resources to which you deploy – for example, you may have a “QA” environment and a “Prod” environment. Release Management online also lets you create configuration options (like tokens) that can (and will) be different for each environment, as sketched below. The user interface appears to be much friendlier than the on-premises WPF client.
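As an illustration of why those per-environment tokens matter, here is a hedged sketch of the pattern: the application reads its settings at runtime, so the exact same build can be promoted from QA to Prod while the deployment substitutes the token values. The key name and token syntax here are my assumptions, not a documented Release Management format.

```csharp
// Hypothetical example of environment-agnostic configuration.
// In the packaged config file, the value is a token such as
//   <add key="ApiBaseUrl" value="__ApiBaseUrl__" />
// and the release pipeline substitutes the real value per environment.
using System.Configuration;

static class EnvConfig
{
    // The same compiled binary works in QA and Prod because the
    // environment-specific value is injected at deploy time.
    public static string ApiBaseUrl
    {
        get { return ConfigurationManager.AppSettings["ApiBaseUrl"]; }
    }
}
```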
The GA release date for these Release Management components is currently TBD.
Surviving Success: Architecting Web Sites and Services for Rapid Growth
In this session, I learned more about how to architect an application for different massive-scale usage scenarios. The presenter’s overall theme was that before you can really scale an application or solution, you must first understand your workloads, your application telemetry data, and how each component of your system can be measured and analyzed by both development and operations teams. Once you understand these components, you can begin to grow your solutions.
In order to understand your application’s workload, you should consider: whether you have bursts of activity, when those bursts occur, whether the application data is primarily read-only (or read/write, or write-only), the data read latency (how stale your data can be), and whether the data you are delivering is user-specific or global, system-wide data. Each of these questions, once answered, gives you an idea of how to best expand the solution and architect it for cloud services.
For measuring application telemetry, the actual tool does not matter. What matters is that you are collecting and analyzing this data so you understand how your application behaves on a normal day. Without this baseline, you will never know when something goes wrong or underperforms until it’s too late (often when you’re in the middle of a crisis). The most common shortcoming in organizations is the lack of monitoring and understanding of data flows and application performance – most solutions are IIS apps deployed with default settings, no logging enabled, and no telemetry data available.
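Even something as simple as the following sketch is a start toward that baseline. This is a generic timing wrapper of my own devising – the operation names and the Console sink are placeholders you would swap for a real telemetry store.

```csharp
// A minimal, illustrative timing wrapper for building a performance
// baseline. Replace the Console sink with a real telemetry store
// (Application Insights, ETW, log files, ...).
using System;
using System.Diagnostics;

static class Telemetry
{
    public static T Measure<T>(string operation, Func<T> work)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            return work();
        }
        finally
        {
            stopwatch.Stop();
            // Recorded on success and failure alike, so slow failures
            // show up in the baseline too.
            Console.WriteLine("{0}: {1} ms", operation, stopwatch.ElapsedMilliseconds);
        }
    }
}

// Usage: var orders = Telemetry.Measure("LoadOrders", () => repository.GetOrders());
```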
So, how do you move from an on-premises solution to the cloud? The presenter gave the following basic formula:
- Migrate single servers/application components to Azure, using the corresponding PaaS (Platform as a Service) counterparts, e.g., SQL Server migrates to SQL Azure and IIS apps migrate to Web Apps
- Automate your deployments to this environment
- Be sure to use Azure Resource Manager
- Do not rely on the point-and-click Azure portal for deployments – FULLY automate
- Use Azure Web App staging capabilities to pre-deploy and “flip” between your staged and production environments
- Enable monitoring/telemetry and analyze these auto-deployed systems to gain insight into their current baseline performance
- Ensure your data is actually flowing through your environments
- Ensure development and operations staff are comfortable using and analyzing this telemetry data
- Set up Azure Traffic Manager in front of this single environment
- Add additional cloned environments (based on the running environment)
- Configure your CI (continuous integration) builds to push to these additional environments
- Enable Azure SQL DB geo-replication to create read-only secondary databases in these new environments
- Keep your primary SQL Azure database read/write, with the others read-only
- Maintain two sets of connection strings in your applications – one for writing data and one for reading data (see the sketch after this list)
- The write connection string always points to the read/write database in the primary environment
- The read connection string always pulls from the local, read-only SQL Azure secondary
- Link additional environments into the Azure Traffic Manager
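To make the two-connection-string idea concrete, here is a hedged sketch of how an application might route writes to the primary and reads to the local geo-replicated secondary. The connection string names are assumptions for illustration; the deployment pipeline would inject the environment-specific values.

```csharp
// Illustrative read/write connection split. "SqlPrimary" and
// "SqlLocalSecondary" are hypothetical connection string names whose
// values the release pipeline injects per environment.
using System.Configuration;
using System.Data.SqlClient;

static class Db
{
    // Writes always go to the single read/write primary database.
    public static SqlConnection OpenWriteConnection()
    {
        return Open("SqlPrimary");
    }

    // Reads come from the geo-replicated, read-only secondary in the
    // same region as this environment, keeping read latency low.
    public static SqlConnection OpenReadConnection()
    {
        return Open("SqlLocalSecondary");
    }

    private static SqlConnection Open(string name)
    {
        string cs = ConfigurationManager.ConnectionStrings[name].ConnectionString;
        var connection = new SqlConnection(cs);
        connection.Open();
        return connection;
    }
}
```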
This example was just one of many recommendations that the Azure Patterns and Practices team has assembled on GitHub.
Wrap Up
//build/ day 1 was a fantastic day. I learned quite a bit and am looking forward to day 2. Perhaps there’s still a chance we’ll get a HoloLens.