All posts by Dustin Ewers

Grouping Events by Date Ranges In SQL Server

Let’s say you have a table of events. Each event has a beginning and an ending date. How would you get a list of the events that happened within each quarter?

Example: An event that begins on 2-1-2017 and ends on 8-1-2017 would have entries for Q1 2017, Q2 2017, and Q3 2017.

If you’re dealing with a single value, you can just use DATEPART(quarter, [DateValue]), but if you’re looking to figure out if an event occurs in a range, it’s a little more complicated. Here’s how to do it.

Getting Date Ranges

The first step to getting this data is to get a list of quarters with their respective date ranges. One way to do this is to use a recursive common table expression. CTEs are snippets of SQL that you can put above your main query to create named result sets to join against. A recursive CTE creates values by calling itself. Here’s a CTE that’ll give us a list of quarters from the start date to the current date. When using recursive CTEs, make sure you have a termination condition or you’ll have a nice infinite loop on your hands.

DECLARE @Start date = '01-01-2010';
DECLARE @End date = GETDATE();

WITH Quarters_CTE AS
(
    SELECT @Start AS [Start],
           DATEADD(quarter, 1, @Start) AS [End],
           DATEPART(quarter, @Start) AS [Quarter],
           DATEPART(year, @Start) AS [Year]
    UNION ALL
    SELECT DATEADD(quarter, 1, [Start]),
           DATEADD(quarter, 1, [End]),
           DATEPART(quarter, DATEADD(quarter, 1, [Start])),
           DATEPART(year, DATEADD(quarter, 1, [Start]))
    FROM Quarters_CTE
    WHERE [End] < @End
)
SELECT * FROM Quarters_CTE

Matching Quarters To Events

The next step is to compare our events with the quarters. We want a separate entry for each quarter an event occurs in. To do this, we join the events table to the quarters CTE. Join conditions aren’t limited to equality comparisons, so we can filter by date range. Here’s the complete query:

DECLARE @Start date = '01-01-2010';
DECLARE @End date = GETDATE();

WITH Quarters_CTE AS
(
    SELECT @Start AS [Start],
           DATEADD(quarter, 1, @Start) AS [End],
           DATEPART(quarter, @Start) AS [Quarter],
           DATEPART(year, @Start) AS [Year]
    UNION ALL
    SELECT DATEADD(quarter, 1, [Start]),
           DATEADD(quarter, 1, [End]),
           DATEPART(quarter, DATEADD(quarter, 1, [Start])),
           DATEPART(year, DATEADD(quarter, 1, [Start]))
    FROM Quarters_CTE
    WHERE [End] < @End
)
SELECT *
FROM Quarters_CTE Q
-- An event belongs to a quarter when it starts before the quarter ends
-- and either hasn't ended yet or ends after the quarter starts.
JOIN [Events] E ON E.StartDate < Q.[End] AND (E.EndDate IS NULL OR E.EndDate > Q.[Start])

From there you can aggregate your data as needed.

10 Handy NuGet Packages for Your Next .NET Project

One of the great things about modern development is how easily you can leverage code written by others. Package managers like NPM and NuGet make this process even easier. The only problem is that there’s so much available code to choose from. For example, there are over 100,000 packages on NuGet. In my travels as a .NET developer, I’ve found a few packages that made my life a little easier. Here’s a list of 10 handy packages you can use in your next project:

Refit

Refit is a library that helps you automate calls to REST APIs. To use it, you first define your APIs as an annotated interface. Then you use Refit to turn that interface into a class. I love this package because it allows you to create incredibly terse API callers.
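Here’s a rough sketch of how that looks. RestService.For and the [Get] attribute are Refit’s real entry points; the IGitHubApi interface and User class are just examples:

using System.Threading.Tasks;
using Refit;

// Refit generates a concrete implementation of the interface at runtime.
var api = RestService.For<IGitHubApi>("https://api.github.com");
var user = await api.GetUser("octocat");

public interface IGitHubApi
{
    // The attribute maps this method to an HTTP GET call.
    [Get("/users/{user}")]
    Task<User> GetUser(string user);
}

public class User
{
    public string Login { get; set; }
    public string Name { get; set; }
}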

Swashbuckle

This package should be installed by default on any .NET API project. It automatically adds Swagger documentation to your API. In addition to that, Swashbuckle creates a handy web page you can use to document and explore your API. I’m a big fan of this package because it links your docs with your actual code. It becomes even more important in a microservices environment where you have lots of services to keep track of.
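Wiring it up is only a couple of lines in Startup. This is a minimal sketch; the exact option types vary a bit between Swashbuckle.AspNetCore versions:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Swashbuckle.AspNetCore.Swagger;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        // Register the Swagger generator and define a single document for the API.
        services.AddSwaggerGen(c => c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" }));
    }

    public void Configure(IApplicationBuilder app)
    {
        // Serve the generated swagger.json and the interactive UI page.
        app.UseSwagger();
        app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1"));
        app.UseMvc();
    }
}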

Polly

Polly is a fault handling library that allows you to write more resilient code. To do this, you create policies. These policies define what happens when a particular operation fails. These policies include things like Retry, Circuit Breaker, and Fallback. It’s great for operations where there’s a chance of failure, like a REST call over a flaky network connection.
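For example, here’s a sketch of a retry policy with exponential backoff wrapped around an HTTP call (the URL is a placeholder):

using System;
using System.Net.Http;
using Polly;

var httpClient = new HttpClient();

// Retry up to three times, waiting 2, 4, then 8 seconds between attempts.
var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

var body = await retryPolicy.ExecuteAsync(
    () => httpClient.GetStringAsync("https://example.com/api/products"));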

MoreLINQ

LINQ is my preferred way to deal with enumerable objects. MoreLINQ adds lots of useful operators to traditional LINQ.
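A couple of the operators I reach for, sketched against a made-up Order list:

using System.Collections.Generic;
using MoreLinq;

var orders = new List<Order>
{
    new Order { Id = 1, CustomerId = 10 },
    new Order { Id = 2, CustomerId = 10 },
    new Order { Id = 3, CustomerId = 20 }
};

// Batch splits a sequence into fixed-size chunks, handy for paged processing.
var pages = orders.Batch(2);

// DistinctBy keeps the first order it sees for each customer.
var firstOrderPerCustomer = orders.DistinctBy(o => o.CustomerId);

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
}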

Dapper

Dapper is a micro-ORM made by the same folks that gave us Stack Overflow. If you hate fiddling with Entity Framework configurations and just want to write some damn SQL, Dapper is for you. It’s been around for a while, but I’ve only recently gotten a chance to use it on a project. After trying it, I’m a huge fan of this approach. I’ve spent way more hours than I’d care to admit trying to shoehorn Entity Framework’s API into getting the data I want. I’ve also spent countless hours trying to figure out why my five lines of C# produced 3000+ lines of sketchy-looking SQL. Sometimes it’s easier to do things yourself.
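The core of Dapper is a handful of extension methods on IDbConnection. A quick sketch (the Product class, table, and connection string are made up):

using System.Data.SqlClient;
using Dapper;

var connectionString = "your connection string here";

// Write the SQL yourself; Dapper maps the result rows onto your objects.
using (var connection = new SqlConnection(connectionString))
{
    var products = connection.Query<Product>(
        "SELECT Id, Name FROM Products WHERE CategoryId = @CategoryId",
        new { CategoryId = 5 });
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}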

Filesystem Abstractions

Generally, when you want to unit test an operation that interacts with the file system, you would need to write a separate interface and class to abstract away the IO operations. With Filesystem Abstractions, you no longer need to do that. Filesystem Abstractions wraps System.IO in a useful interface. Instead of making your own file system access class, you can inject an IFileSystem and get access to all of the IO methods you know and love. Filesystem Abstractions also includes classes to mock the file system, so testing IO operations becomes really easy.
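Here’s a rough sketch of the idea. The ReportReader class is hypothetical; IFileSystem, FileSystem, and MockFileSystem come from the System.IO.Abstractions packages:

using System.Collections.Generic;
using System.IO.Abstractions;
using System.IO.Abstractions.TestingHelpers;

// In production code, inject the real file system.
var reader = new ReportReader(new FileSystem());

// In a unit test, inject a mock file system instead of touching the disk.
var mockFileSystem = new MockFileSystem(new Dictionary<string, MockFileData>
{
    { @"C:\reports\today.txt", new MockFileData("totals: 42") }
});
var testReader = new ReportReader(mockFileSystem);
var contents = testReader.ReadReport(@"C:\reports\today.txt");

public class ReportReader
{
    private readonly IFileSystem _fileSystem;

    public ReportReader(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public string ReadReport(string path) => _fileSystem.File.ReadAllText(path);
}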

Moq

Moq is a handy library for mocking interfaces in unit tests. You’re probably already using this one.
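In case you’re not, the pattern looks roughly like this (the repository interface and Product class are made up):

using Moq;

// Set up the mock to return a canned value for the call the test cares about.
var repository = new Mock<IProductRepository>();
repository.Setup(r => r.GetById(1)).Returns(new Product { Id = 1, Name = "Widget" });

// repository.Object is the generated fake you hand to the class under test.
var product = repository.Object.GetById(1);

public interface IProductRepository
{
    Product GetById(int id);
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}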

Xunit

Xunit is a unit testing framework for .NET code. It’s like MSTest and NUnit, but it has a few interesting features. One of those is the use of theories. A theory allows you to run the same test with several different parameters. It’s a handy way to test multiple values without having to write multiple tests.
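For example, one theory can cover several input combinations:

using Xunit;

public class MathTests
{
    // The test runs once per InlineData row.
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(2, 2, 4)]
    [InlineData(-1, 1, 0)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.Equal(expected, a + b);
    }
}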

MiniProfiler

MiniProfiler injects a small, unobtrusive profiler into your MVC application. As you complete operations, MiniProfiler displays the time it took to complete each step. Tools like this make it easy to diagnose performance bottlenecks.
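Once it’s wired into the request pipeline, you can time individual chunks of code with named steps. A small sketch (the controller and the slow call are stand-ins):

using System.Collections.Generic;
using StackExchange.Profiling;

public class ProductController
{
    public List<string> Index()
    {
        // The elapsed time for this block shows up as a named step in the MiniProfiler widget.
        using (MiniProfiler.Current.Step("Load products"))
        {
            // Pretend this is the slow call you want to measure.
            return new List<string> { "Widget", "Gadget" };
        }
    }
}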

CSV Helper

CsvHelper makes it easy to create and read delimited files. If you work in an environment where you have to create or process lots of CSVs, this package is a good one to employ.
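Reading a file into typed objects is only a few lines. A rough sketch (the Product class and file name are made up, and older versions of the library take just a TextReader instead of a culture):

using System.Globalization;
using System.IO;
using System.Linq;
using CsvHelper;

// Read each row of products.csv into a Product instance.
using (var reader = new StreamReader("products.csv"))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
    var products = csv.GetRecords<Product>().ToList();
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}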

 

2018 Technology Predictions

Ever wish you could see into the future? I’d love to have known about Bitcoin eventually hitting 17k back when you could pick one up for a dollar. A $200 investment at the time would make for a nice retirement today. Too bad foresight isn’t 20/20.

Even if you don’t want to know everything that happens in the future, you make predictions every day. You predict that you’ll get paid regularly (if you’re a salaried employee). You make certain assumptions about your health and abilities. You likely predict that the technologies you’re currently learning will be useful in the future. The skill of predicting the future is essential.

It’s also a skill that most of us aren’t very good at. To improve my own prediction skills, I’ve read several books over the past year about prediction and how to discern the future. Here are a few of the better reads.

The Inevitable – Kevin Kelly

Kevin Kelly lays out 12 different technological trends that will shape our world in the coming decades. I found this to be an interesting look ahead. Different trends include the dissolution of specific versions of software, increased personalization through automation, and the rise of virtual reality.

The Signal and the Noise – Nate Silver

Nate Silver is one of the top predictors of election outcomes and other events. His book talks about some of the difficulties in creating useful models. He emphasizes the importance of probabilistic thinking and accounting for your own biases.

Black Swan – Nassim Nicholas Taleb

In Black Swan, Taleb talks about the need to prepare for unpredictable events, which he calls “black swans”. Even in situations that look stable (e.g., the real estate market), certain system shocks can suddenly appear and destroy the fortunes of the unprepared. A single black swan event can cancel out the gains of all the previous years. Black swans work both ways, though. You can profit if you can jump onto a positive black swan event.

The Age of Spiritual Machines – Ray Kurzweil

Ray Kurzweil is regarded as one of the most accurate futurists in the industry. Age of Spiritual Machines was written in 1999 and contains predictions for 2009 and 2019. While he credits himself with being mostly right, I laughed out loud at some of the predictions while I was listening to the book in my car. Even the best futurists often fail.

Crystal Ball Time

One of the major themes throughout all these books is the need to write down your own predictions and see how well (or poorly) you did. People have self-serving biases that distort past predictions. Writing them down prevents you from forgetting how awesome you thought MySpace and Betamax were going to be. Learning about your own fallibility makes it easier to correct your judgement and improve. To improve my own prediction abilities, and to provide my future self with some entertainment, I’m going to make a series of predictions. I’ve put a reminder in my calendar to check back over the next few years to see how well I did.

Disclaimer: Don’t bet the farm on this stuff.

Technology Over The Next 1-3 years

We won’t see a major new JS framework this year. The market has stabilized around React, Angular, and Vue.

Rationale: Web frameworks have reached a stability point. Each framework generation has offered diminishing returns and the time to learn them isn’t getting any shorter. Unless something major comes along, we probably aren’t going to see any major new web frameworks.

Web Components will move towards the mainstream.

Rationale: Web components have the possibility of lowering framework dependence. I think they’ll be welcomed by people who are tired of having to keep up with a yearly JavaScript framework. While this may trigger a new series of frameworks, my guess is that web components will integrate with existing frameworks.

AI is over-hyped.

Rationale: With the advent of commoditized AI, people are freaking out about various doomsday scenarios. While I think AI is important, it’s probably not going to be all that different from every other major technological shift.

XR (augmented and virtual reality) will continue to advance, but people won’t find the “killer app” for it yet.

Rationale: There’s a ton of long-term potential with XR, but I don’t think we’ve figured out the “killer app” yet. We’re still in the phase where people are experimenting and building stuff that’s not useful.

Analog technologies will continue to grow in popularity.

Rationale: During the late 1800s and the 1900s, people began to go to the woods for recreation. This was a reaction to the urban life of the time. The urban populace wanted to get away from the concrete jungle. As cities have incorporated more green elements (parks, gardens, etc.), the desire to camp has flattened. As technology advances, people tend to modernize past activities into new forms of recreation (camping, hunting, hiking, fishing, etc.). The current trend towards analog books, vinyl records, and tabletop board games is an offshoot of this broader trend.

3-4 major cloud service providers will reach feature parity and crowd most of the other contestants out of the market.

Rationale: Most major technical markets tend to settle on 3-5 competitors. Every major technology category tends to have a large number of competitors at first who get acquired or go out of business. Cloud providers will be no different. AWS is the current champion, but I think Azure and Google will also be in the top three. I also think each cloud provider will copy the features of its main competitors so there’s not a huge difference between platforms.

The Bitcoin / crypto bubble will burst, but prices will be higher than they were before the bubble.

Rationale: This has happened several times before and will likely happen several more times. The end game will either be crypto stabilizing at a high price or tanking. I’m guessing it’s going to eventually stabilize.

Crypto-currencies will move away from proof of work algorithms because they consume too much power.

Technology 5-10 Years Out

XR will become commonplace as people figure out what to do with it. Augmented reality glasses will replace cell phones.

Self-driving car technology will mature, but won’t be commonplace due to regulatory issues.

There will be a trillion-dollar tech company.

The market capitalization of social media companies is going to tank.

Rationale: The centralized nature of the big social media companies has led many people to criticize them.  They are a huge target for anti-tech criticism. Social media in general will fall out of favor as more people learn about the abusive practices of the social media companies. Once enough people leave the network, it’ll trigger a spiral where the whole network tanks. Look at MySpace for reference.

Social media will fragment as people realize that they don’t want to be exposed to the whole public.

Rationale: Our society is becoming increasingly polarized while major social media companies are creating arbitrary content filtering policies. Platforms like Discourse and Diaspora allow people to build decentralized communities that can cater to their users’ needs.

There will be a wave of rapidly falling prices on things that were once labor intensive as AI makes labor far more effective.

There will be at least a few big crypto bubbles before crypto-currency settles on a stable valuation.

Bitcoin will lose out to another crypto-currency like Ethereum or Litecoin.

Rationale: I think Bitcoin is the Apple Newton of crypto-currencies. I think that other coins are going to innovate past Bitcoin and render it obsolete.

Analog and digital will merge due to technologies like 3d printing and the Internet of Things.

Rationale: Things that are analog will become digital, and digital things will be able to become analog very easily. A big part of why analog products are succeeding in the modern age is technology. Just as cars and lightweight materials made camping a recreational activity, things like easy customization through automated manufacturing and platforms like Etsy will make analog products more appealing. The Internet of Things also gives us the ability to embed intelligence in our analog items.

The web will decentralize as a response to abuse by large technology companies.

There will be at least one major industry shock on par with the financial crash of 2008.

Rationale: At least one highly regulated industry will melt down. There are several industries that have consolidated power and become ossified over the past few decades. Government regulations tend to push industries into a consolidated oligopoly because larger companies are more able to bear the cost of regulatory burdens. When there aren’t many competitors to pick up the slack if one company fails, the whole industry becomes fragile. That fragility exposes the industry to “black swan” events that can cause major damage. Candidates for explosion include: banking (again), telecom, energy companies, and health insurance.

10+ years out

AI and other automation technologies will trigger major political shifts within the next 10-15 years.

Self-driving cars will be commonplace in 10-15 years.

Rationale: The technology is close today, but cultural and regulatory factors are going to slow down the adoption of self-driving cars.

Major medical breakthroughs will increase health-span and lifespan.

Rationale: Life expectancy will climb to new highs and retirement will be a financial goal as opposed to something that happens due to disability. This has major political implications as social security will become even less viable.

Basic income will be standard in most industrialized countries.

Rationale: Technology creates asymmetric effects where small numbers of people can command extremely high amounts of wealth. Technology can also melt down entire industries and leave lots of people scrambling for work. While I think that we won’t have the crazy waves of unemployment that most tech skeptics portray, governments will need to simplify their welfare systems to account for people having to take regular career breaks to retrain in new skills.

The American university system will be replaced by new technology-enabled educational processes.

Rationale: The American university system has ever-increasing prices for ever-decreasing value. Getting a college degree used to guarantee a well-paying job. Now it only guarantees a large amount of debt. People are going to look for more vocational skills training along with shorter paths to marketable skills. Since people will likely be retraining several times over their careers, traditional institutions won’t be able to keep up. The technology industry is almost there today. Platforms like Pluralsight, Udemy, and Safari allow technology professionals to constantly train in new skills.

Prognosis: The Future is Tricky

While it’s fun to make random speculations about the future of technology, being able to predict major shifts and prepare for them in advance is an important skill. I’m sure I’ll come back to this post in a few years and have a good laugh at some of these predictions.

Architecture Patterns for Angular and .NET Core

Modern web application development is fraught with choices. There are many ways to build a modern web application. Some of the choices include: front-end architecture style (SPA, MVC, Razor Pages), front-end framework (React, Angular, jQuery), front-end build tools, CSS pre-processor, JavaScript build tool (Webpack, System.js, Gulp), and back-end architecture. Each of those choices spawns other architectural choices. Depending on what you pick, there could be several dozen decision points in the process. 
 
One of the things I like about Angular is that you don’t need to make as many of these choices. If you use Angular, you don’t have to spend hours searching NPM and GitHub for the different pieces of your web app. There are still choices to make though. In this post, we’re going to focus on how to integrate your front end (Angular) with your back-end (ASP.NET Core).
 
We are going to look at three ways to build a web application. We’ll begin with the time-tested monolith. Then we’ll move on to the shiny new “serverless” architecture style. After that, we’ll explore the microservices architecture. After this post, you’ll have a good grasp of each of these styles and know when to use each one for your own applications.

Monoliths

If you’ve been building web applications for any amount of time, you’ve probably worked on at least one monolith. In a monolith, the entire application’s workflow is in one code base. In the .NET world, that means everything can fit into one big solution file. A monolith doesn’t mean we can’t have a reasonable separation of code. You can create module boundaries and separate your code into projects. You can even implement an n-tier architecture and separate your application into layers. The key to the monolith is that the whole process is in one self-contained code base.
 
Monolith architecture gets bashed on, but it’s the default for a reason. There are several advantages to the monolith style. For one, everything is in one place. If you need to refactor, you don’t have to worry about moving across service boundaries. Tools like ReSharper make this even easier. Monoliths are also easy to deploy. If you are building a small or medium sized application, monolith architecture is a good choice.

While monoliths are fine much of the time, there are situations where they tend to fall down. The first problem with monoliths is that they are hard to scale. If you have one piece of an application that’s experiencing a heavier load, you still have to scale out your whole application. Monoliths also deter code reuse. In many corporate environments, there are resources that are used by more than one application. With a monolith, you end up copying the code to access those resources. Because there’s so much code in one place, larger monoliths can be hard to understand. This can make it harder to onboard new team members. Once your application gets to a certain size, you should peel microservices off of your monolith.
 

There are a few ways to implement a monolithic Angular application. The easiest way is to use the built-in template. Visual Studio comes with an Angular template that includes some nice features. I prefer to use the Angular CLI, so I don’t use the included template. If you also want to use the Angular CLI, I have a post on how to use Angular CLI with a .NET Core project.

Angular CLI with .NET Core

As I learn more about Angular, I’m beginning to favor another approach. The truth is that .NET doesn’t add much to the Angular front-end party. Given a clean slate, I’d run the Angular half of the code as its own project and use .NET Core for the API. Angular CLI is perfectly happy self-hosting, or you can deploy the static assets to an IIS site.

Serverless

The serverless architecture is the most recent pattern on the list. It’s also the worst named pattern on the list. The servers [obviously] exist, but you don’t have to care about them. It’s “serverless” to you.
 
Serverless architectures rely on services that host and run your code for you. Instead of managing VMs or physical servers, the service abstracts that away. If 500,000 users come knocking on the door to your site, you don’t even need to push up a slider bar. The service scales for you.
 
When people think of serverless, they’re usually thinking about Functions as a Service (FAAS). FAAS platforms allow you to host small bits of code in the cloud, typically a single endpoint per function. FAAS is not the only way to be serverless though. Services like Firebase can encapsulate your whole back-end.
 
Here’s a list of serverless platforms: https://github.com/anaibol/awesome-serverless
 
To integrate this with Angular, you compile your Angular application into a static web site. Then you call the APIs you create in whatever serverless service you’re using. This post has details on how to build a serverless Angular application using Azure Functions and an Angular site hosted in an Azure web site. You should be able to adapt these instructions for your favorite cloud provider.
 
Serverless isn’t for every situation, but there’s some definite advantages. The biggest advantage is that you can quickly get your code into production. You don’t need to configure servers or set anything up. You just drop your code into the cloud and go. Server level concerns like scalability and server management are gone. Serverless architectures are also very cheap to start with. FAAS platforms charge you by usage, so low traffic apps are dirt cheap. Static websites are also fast and cheap. Since most static assets are cached, you can serve a ton of users with very little impact.
 
There are a few problems with serverless. The big one for me is complexity. Once you have a certain number of functions, they become hard to manage. I’d rather have a web service or series of web services over a suite of cloud functions. Functions also have somewhat unpredictable pricing. A misconfiguration in your service or a bug can leave you with a huge cloud bill. The serverless pattern really shines when building prototypes and minimum viable products (MVP). You can get code into production and start iterating. Later on, you can refactor your functions into services if your application takes off.

Microservices

The microservices architectural pattern addresses some of the issues found on larger products. Microservices divide the functionality into tightly scoped services. These services are loosely coupled, so you can work on them independently. You can even build them using separate technology stacks.
 
Microservices architecture has several advantages. First, it’s easy to scale microservices. If you have one part of your application that’s receiving lots of traffic, you can scale only that part. Microservices are also easy to understand. Because they are small, it’s easy to drop into a new service and find your way around. Each service is easy to reason about. This is my favorite feature of the microservices architecture. Getting up to speed is much easier than on a big monolith application. Microservices also encourage reuse. In enterprise environments, you often have several applications hitting the same data. With microservices, you can make one service that serves many applications.
 
While microservices are great, there are some drawbacks. The biggest drawback I’ve seen is performance. Because the services are loosely coupled, you use JSON as a communication mechanism between services. This means you end up doing a lot of serialization and deserialization. Each service boundary you cross incurs an overhead cost. On larger requests, those costs are significant. These service boundaries also make it harder to troubleshoot problems. Tracking down which service in a chain of calls is breaking down can be frustrating.
 
Another pitfall of microservice development is deployment complexity. Even though services are loosely coupled, you still need to manage the dependencies between them when deploying larger features. If you don’t have a good continuous deployment and integration pipeline, you are going to have a bad time.
 
If you want to see a sample microservices and Angular application, I have one here.

Conclusion

Each of the architectures in this post shines in a particular set of circumstances. Monoliths are great for small to medium sized projects, but are not ideal for larger applications. Serverless architecture is great for small apps, but not for large enterprise applications. Microservices architecture works well for large applications, but isn’t worth the overhead on small ones. Figure out the application you want to build and choose wisely. 

How to Build a Serverless Angular App on Azure

Building web applications is hard work. Not only do you have to build the application, you need to figure out where to host it. Ever want to skip all the frustrating server provisioning and focus on your code? If so, then serverless architecture is worth a look. In this article, you will learn how to set up a serverless Angular app using Azure.

What exactly do you mean by “serverless”?

The term “serverless” is somewhat misleading. There are still servers, but you don’t have to care about them.

Serverless architecture runs on managed services that host your code. Server level concerns like figuring out how much memory you need, fault tolerance, and scalability are abstracted away. You don’t need to pick out a VM or service tier. The service scales automatically. You upload your code and go. The service takes care of the rest.

Serverless platforms come in several different flavors including APIs like Firebase and functions as a service (FAAS). FAAS is where you host small bits of code in the cloud. Think micro-services to the nth degree. All of the major cloud providers have some flavor of FAAS. Examples include: AWS Lambda, Google Cloud Functions, and Azure Functions. You can even host your own FAAS using OpenFaaS, though that kinda defeats the point. (comprehensive list of serverless platforms)

For the purposes of this application, we are going to use Azure Functions for our backend API.

Where do we put the web site?

While Angular applications are complex to develop, they compile down to a handful of static files. There’s no reason you can’t throw these files up on any commodity static host. In this case, we’re going to use Azure Web Sites. While this solution isn’t 100% serverless (you have to pick a hosting tier), hosting a handful of files on the free tier is close enough for me. To add another layer of performance, we’re also going to use Azure CDN to cache the files so they load quickly.

Roadmap

Here’s what we’re going to do:

  • Build an API using Azure Functions
  • Setup an Azure Web App to host our Angular application
  • Setup Azure CDN to serve those files more quickly
  • Add CORS headers on our function app so we can use it with our Angular app
  • Setup a SPA redirect to prevent unintended 404 errors

Building an API with Azure Functions

We start off by creating a new Azure Function app. Starting from the Azure Portal, click “New”, search for “function”, and click “Function App”. This will bring you to the new function app blade. Type in your app name (it has to be unique), add the rest of your info, and click Create.

Now we’re going to make a function. Either hit the plus sign or click “New Function”. This brings you to the function template screen. Functions can be activated by a variety of triggers including DB requests, files, and manual activation. We’re going to use the HTTP Trigger. HTTP triggers are activated by an HTTP request. Select “HttpTrigger – C#”.

You should now have this lovely boilerplate. This code can handle any HTTP request and sends back a response.

Let’s ditch that boilerplate and drop in our API code. This is just going to be a simple GET request. Feel free to pretend we’re calling a database here instead of just returning a list.
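Something along these lines will do. This is a sketch in the C# script flavor that Functions used at the time; the exact signature varies between Functions runtime versions, and the product list is a stand-in for a database call:

using System.Collections.Generic;
using System.Net;
using System.Net.Http;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("Products API was called.");

    // Pretend this list came from a database.
    var products = new List<string> { "Widget", "Gadget", "Sprocket" };

    return req.CreateResponse(HttpStatusCode.OK, products);
}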

If you’d like to test your shiny new function, click the “get function url” link and drop that into Postman. As you can see here, things are working splendidly.

You can also test using the built-in testing utility on the right side of the function window.

Setting up an Azure Web Site

Now that we have our backend set up, let’s put together our front end. In this case, we’re going to set up an Angular application built with the Angular CLI. Since this post is about setting up the architecture, I’m not going to go into great detail about the application itself, which can be found in this GitHub repo. If you’re using another front-end library like React or Vue.js, this setup should be mostly the same.

To begin, create an Azure Web App. (New -> Web App). The free tier should be adequate for our needs. We’re just hosting a few files. If you want to use your own custom domain, you’ll need to use a higher tier.

Now we need to get the static files to host. If you’re using Angular CLI (and you should be…), the command to do this is:

ng build --prod --aot

After that command runs, head over to the dist folder of your Angular app. It should look something like this:

The next step is to get your static files into the web app. I used FTP, though you can download a publish profile if you want. Drop your static files into the wwwroot folder. At this point, you should be able to go to the URL and your app will load.

The next step is to set up a CDN. Since these are just static files, we can use a CDN to serve them. Azure CDN also allows us to serve the site over HTTPS without having to pay for a plan that supports SSL. To set one up, go to New, search “CDN”, click “CDN”, and click “Create”.

This brings up the CDN profile blade. Type in your name, select “Standard” for your pricing tier, and select your web site from the dropdown. Then click “Create”. Now you have a CDN for your static files.

Setting up CORS

Now that we have our web app in place, we need to set up the CORS (cross origin resource sharing) headers on our function. Browsers, by default, prevent your website from accessing APIs that don’t match the URL of your web application (CORS error pictured below). This same-origin restriction helps protect users from malicious cross-site requests, but it is sometimes kind of annoying. If you want to use your function API from your web application, you’ll need to add a CORS header to your API.

To begin, go back to your function app and go to the “Platform features” tab and click “CORS”.

This brings you to the CORS screen. Add the URL(s) of your web app. You can use “*”, the wildcard origin, but you shouldn’t because it’s rather insecure. It’s best to use your web app’s URL here.

Setting up a SPA Redirect

At this point, we have a functioning web application, but there’s still one more issue to resolve. Like most SPA applications, Angular supports client-side URLs that don’t correspond to server-side resources. All of our application code is served from index.html on the root URL, but we use the Angular router to map URLs to parts of the application. For example, in our sample app, if we navigate from the home page to the Products page it works great, but if we hit refresh we get a 404 (error below). This is because there’s no page on the server for that URL. To fix this we need to add a URL rewrite rule to redirect anything that’s not a static asset back to the index page.

To do this, we’re going to create a web.config file with our rewrite rule.

Here’s our rule (finished product):
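A rule along these lines does the job. Treat it as a rough sketch rather than the exact finished rule; adjust the rule name and asset extensions to fit your app:

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="AngularRoutes" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <!-- Skip real files and directories. -->
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <!-- Skip static assets so they are still served directly. -->
            <add input="{REQUEST_URI}" pattern="\.(html|js|css)$" negate="true" />
          </conditions>
          <action type="Rewrite" url="/index.html" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>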

This rule takes anything that isn’t a file, directory, or an asset (*.html, *.js, *.css) and redirects it to the index page. If you have fonts or images in your application, you should probably add rules for those file extensions as well.

Then, take that web.config file and drop it into the wwwroot folder of our web app. App URLs should now redirect appropriately.

Conclusion

If you’d like to build an Angular application and host it for cheap in the cloud, this is a great way to do it. The serverless architecture is great for MVPs, simple applications, and proof of concept applications. It’s an easy way to get code into the world with minimum fuss.

References / Additional Information

If you’d like additional information or access to the code referenced in this post, check out the links below.

Demo Code

Cloud hosting for a static website

Azure Functions Docs

URL Rewrites in an Azure Web App

URL Rewrite Docs

Speaker Tip: Zooming the Chrome Tools Window

When doing a technology demo, it’s important that everyone in the room can see your screen. This usually means bumping up your view to 150%-200% depending on the screen, room, and relative blindness of the people in your audience. I demo a lot of web technologies, so I find myself showing the Chrome developer tools. Fun fact: when you zoom web pages in Chrome, that setting doesn’t apply to the developer tools. Fortunately, this is easy to fix.

TL;DR

To zoom in Chrome developer tools:
open the developer tools window,
hold down ctrl,
press + or – to zoom in and out.

Avoid The Class Hierarchy Jungle: Favor Composition Over Inheritance

Have you ever worked on an application with a jungle-like class inheritance hierarchy? Everything in the app inherits from two layers of classes and it’s impossible to follow a single line of functionality. You get code reuse, but at the cost of incomprehensible spaghetti code. I, for one, find that price too steep. In this post, we’re going to learn how to build code that’s easy to reuse, easy to test, and most importantly, easy to read.

Composition > Inheritance

We have many design patterns in object-oriented programming. One of the most useful is the composite reuse principle (aka composition over inheritance). The term sounds a little opaque, but it’s not hard to understand. It means that you should design your classes as a series of loosely coupled components instead of using a multilayered class hierarchy for code reuse.

Here’s an example:
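The sketch below shows the idea in C#. All of the types are hypothetical; the point is that ReportController is assembled from small services instead of inheriting its behavior from a base class chain:

public class Report
{
    public string Title { get; set; }
}

// Instead of ReportController : BaseDataController : BaseController,
// compose the controller from small, focused services.
public interface IReportRepository { Report GetReport(int id); }
public interface IReportFormatter { string Format(Report report); }

public class ReportController
{
    private readonly IReportRepository _repository;
    private readonly IReportFormatter _formatter;

    // The services are injected, so each one can be swapped or mocked independently.
    public ReportController(IReportRepository repository, IReportFormatter formatter)
    {
        _repository = repository;
        _formatter = formatter;
    }

    public string GetFormattedReport(int id) => _formatter.Format(_repository.GetReport(id));
}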

Practically speaking, this means breaking down inheritance hierarchies into pluggable services. Then you can inject those services using your favorite dependency injection framework.

Why Bother?

While using composition may seem more complicated, there are several advantages.

  1. It’s far easier to reason about the code. If you divide up your functionality into small components, each component is simple. You’re dividing up the complexity of the application into manageable chunks. When you’re using complex inheritance, it’s difficult to figure out what block of code is executing. This is especially true once you start selectively overriding methods.
  2. It’s much easier to reuse a single component than to glue a class onto a hierarchy.
  3. It’s easy to unit test loosely coupled components. Building the appropriate mocks to test a complex class is painful. Mocking a few interfaces is much easier.

Spotting Refactoring Opportunities

There are a few potential anti-patterns to keep an eye out for.

“Base<thing>” classes, especially base controllers (MVC), base pages (Web Forms), and other base classes for classes that process data. Base classes for data storage objects are usually OK (e.g., an Administrator that inherits from a Person class). Using inheritance for processes is a bad idea.
 
More than two layers of inheritance. It’s hard to imagine anything that needs more than two layers of inheritance.
 
Ginormous “God” classes that span thousands of lines. While not strictly related to using composition over inheritance, this goes against the idea of building a suite of simple components. Large classes are difficult to read and to test. Flattening a class hierarchy into a “God” class is not an improvement.
 
Base classes with only one class that inherits from them. The base class here is superfluous. Feel free to get rid of it. 

 

Conclusion

If you’re using class hierarchy for code reuse, ditch that approach and favor composition instead. Your code base will thank you for it.

Further Reading

https://en.wikipedia.org/wiki/Composition_over_inheritance

https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)

 

Cmder is the Cadillac of Windows Consoles

Like fashion, computing works in cycles. Things that were once looked at as passé come back with a retro vengeance. For .NET development, the console is back. Consoles combined with lightweight editors like Visual Studio Code are becoming increasingly popular. Call me a hipster, but I’m all for this change. I hate waiting around for my editor, and I never have to do that in a console or Visual Studio Code.

Unfortunately, if you’re living in the world of Windows, the default command line options are lacking. Not only are CMD and PowerShell ugly, they lack basic usability features found in Linux and OSX. There is a better way. Cmder is a Windows console emulator that bundles several command line tools together into one fantastic package. It’s the Cadillac of Windows consoles.

Here’s what it looks like:

Cmder has a nice tabbed interface. You can run multiple consoles without having to deal with a bunch of windows. It also supports several different types of consoles including CMD (enhanced with Clink), PowerShell, and Bash.

You also get full control of the appearance of the shells including the font, color, etc… Cmder has many themes, but the default Monokai theme is good enough for me.

More importantly, you can create custom tasks. A custom task is a specific command window that you can define. You can specify the shell, what parameters it’s called with, and what directory is opened. You can have a command line setup for each application you work on. You no longer need to open a command line and manually navigate to your app folder each time you open up the console.

To make a custom task, do the following:

From Cmder, type Win + Alt + T. This takes you to the tasks window. You can also click the arrow next to the plus sign and click “Setup Tasks”.

This window allows you to reorder and reorganize the different defaults in Cmder. Hit the plus sign to add your own.

In this case, this is a console task that opens a specific project I’m working on.

After you setup your new task, click “Save Settings”. You should see your new shell in the list of available presets. Then you can open that exact shell whenever you want.

If you want to supercharge your Windows console, check out Cmder.

A Free Windows Tool For Recording Tests (Steps Recorder)

Documenting the exact reproduction steps for a bug can be a real pain. Some bugs need several specific reproduction steps. Even the most detail-oriented tester can miss a step. Fortunately, there’s a hidden tool built into Windows that can help you out. It’s called Problem Steps Recorder.
 
Problem Steps Recorder is an easy to use tool that will track your steps and produce a handy report. This report has information on each click and a screenshot for each step. It makes it easy to reproduce a bug.
 
Here’s a sample of what the tool produces: 

Problem Steps Recorder

To get access to this tool, type “psr” into your favorite command shell. That’ll open up the program. After the program is open, you can pin it to your toolbar. You don’t need to install anything; Steps Recorder is built into Windows.

The Steps Recorder app is easy to use. Click “New Recording” and work through your test. When you’re done, click “Stop”. Afterwards, you get a document detailing every step. This document can then be saved into a zipped MHTML file which you can easily attach to a bug report or an email. This document is better than a video because you can scan it quickly.
 
I’m not sure why more people don’t know about this tool. It makes recording test steps a breeze.

4 Tips For Blazing Fast ASP.NET Core Applications

I’m a huge fan of ASP.NET Core. It’s a great iteration on the ASP.NET platform and it should be your default choice for any new web development. I’m also a big fan of apps that don’t take a week to load. Fortunately, ASP.NET Core doesn’t skimp in the speed department. The framework has some great features for building fast applications. Some things that used to be hard are now easy.

In this post, I’m going to go over a few tips for building blazing fast ASP.NET applications.

Async Everything

 

One important way to help your application scale is to use asynchronous methods. The async and await keywords make building asynchronous code as easy as building synchronous code. Using async and await frees up your threads while waiting for calls to return. Because you are using up fewer threads, more people can use your application at the same time.

Async is usually a good default practice, but it’s especially important when calling slower processes, like database calls and service requests. You don’t want to hog a thread waiting around for database results to come back. When building your application, favor async versions of methods when they are available. Entity Framework has async versions of most data access methods, so make use of them.

One thing to keep in mind is that using async and await will help you scale, but it won’t run your requests in parallel. If you have several slow requests, consider running them in parallel using the Task Parallel Library. That will compress your wait time to the longest running call, as opposed to waiting for each one to return sequentially.
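Here’s a sketch of the difference. The slow calls are simulated with Task.Delay, standing in for database or service requests:

using System.Threading.Tasks;

public class DashboardLoader
{
    // Hypothetical slow calls; imagine these are EF queries or HTTP requests.
    private static async Task<int> CountOrdersAsync(int customerId)
    {
        await Task.Delay(500);
        return 42;
    }

    private static async Task<decimal> GetOutstandingBalanceAsync(int customerId)
    {
        await Task.Delay(500);
        return 99.95m;
    }

    public static async Task<(int orders, decimal balance)> LoadAsync(int customerId)
    {
        // Start both slow calls without awaiting them yet...
        var orderTask = CountOrdersAsync(customerId);
        var balanceTask = GetOutstandingBalanceAsync(customerId);

        // ...then wait for them together. The total wait is the slower of the two
        // calls (~500 ms) instead of the sum of both (~1000 ms).
        await Task.WhenAll(orderTask, balanceTask);

        return (await orderTask, await balanceTask);
    }
}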

Cache Rules Everything Around Me

Getting data out of a database is the largest performance bottleneck in most applications. One way to reduce that cost is to cache things that are slow to retrieve or slow to change. ASP.NET Core has a few built in cache mechanisms.

The easiest cache to use is the built-in memory cache. This cache stores items in the memory of your application server. While this is easy to use, there are two downsides. The first downside is that your cache goes away if your server goes down. Often, this is a non-issue, but if your application caches things that are costly to calculate, this can be a real downer. The second problem is that if you want to scale your application to more than one server, you’re out of luck.

The solution to this problem is to use a distributed cache. This cache uses either a Redis instance or a SQL Server table to hold your cached data. I’ve used both flavors and they both work great. One thing I liked about using the SQL Server cache is that I could add fields to the table to enable more detailed caching logic.

Regardless of which caching mechanism you use, you should hide it behind your own cache abstraction (I call mine ICacheProvider). By using your own cache abstraction, you can easily swap out one caching mechanism for another. Most people start off with memory caching, but eventually outgrow it. If you put your caching behind an abstraction, you can swap it out for a distributed cache without having to change a bunch of places in your app.
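Here’s a rough sketch of what such an abstraction might look like, backed by the built-in IMemoryCache (registered with services.AddMemoryCache()). The interface shape is just illustrative:

using System;
using Microsoft.Extensions.Caching.Memory;

public interface ICacheProvider
{
    T GetOrCreate<T>(string key, TimeSpan lifetime, Func<T> factory);
}

// A memory-cache-backed implementation. Swapping to a distributed cache later
// just means writing another implementation of ICacheProvider.
public class MemoryCacheProvider : ICacheProvider
{
    private readonly IMemoryCache _cache;

    public MemoryCacheProvider(IMemoryCache cache)
    {
        _cache = cache;
    }

    public T GetOrCreate<T>(string key, TimeSpan lifetime, Func<T> factory)
    {
        return _cache.GetOrCreate(key, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = lifetime;
            return factory();
        });
    }
}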

Crush Those JSON Responses With Middleware

By default, ASP.NET only compresses a few types of responses. These include the content of Razor pages, but not the results of API calls (JSON is not compressed by default). This means that if you have an API-request-heavy application (e.g., a SPA), you can save some serious bandwidth by compressing those responses. This is especially important if you are serving mobile customers, who may have a low bandwidth signal.

Unlike the previous version of ASP.NET, Core has a handy built-in middleware that you can add to your app. You can specify the type of compression and what MIME types to compress. I’ve tested this in my application and the compression saves a noticeable amount of bandwidth.
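The wiring is only a couple of lines in Startup using the Microsoft.AspNetCore.ResponseCompression package. This is a minimal sketch; tune the providers and MIME types for your own app:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.ResponseCompression;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // Register the compression services and use gzip.
        services.AddResponseCompression(options =>
        {
            options.Providers.Add<GzipCompressionProvider>();
            // options.MimeTypes can be customized to control which content types get compressed.
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // The compression middleware should run early in the pipeline.
        app.UseResponseCompression();
        app.UseMvc();
    }
}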

Bundle Those Client Side Assets

Modern applications tend to use lots of JavaScript libraries, images, and fonts. Getting those assets to the client efficiently is important, especially if those clients are on a low bandwidth connection. We rely on two strategies to minimize what we send to the client. The first strategy is bundling. This is where you take several assets and send them down in one request. This saves you bandwidth because fewer requests and headers go over the wire. The other strategy is minification. This is where you run JavaScript through a process that strips out any extraneous code, shrinking down the file size.

In the ASP.NET world, there are two paths to do this. If you are building a JavaScript-heavy application, like a SPA, use a JavaScript build tool. Webpack is my preferred JavaScript build tool. It can iterate through your dependencies and then bundle them into files. If you’re using Angular (2+), you should use the Angular CLI. It uses Webpack under the hood, but hides away its complexity.

The second way to bundle and minify assets is to use the Bundler and Minifier Visual Studio extension. This extension compiles your client side assets on build. It is easier to use than JavaScript build tools like Webpack. If you’re using Razor views with a little bit of client side code, this is the way to go.

 

How about you?

If you have a good performance tip, feel free to leave it in the comments.
