One of the great things about modern development is how easily you can leverage code written by others. Package managers like NPM and NuGet make this process even easier. The only problem is that there’s so much available code to choose from. For example, there are over 100,000 packages on NuGet. In my travels as a .NET developer, I’ve found a few packages that made my life a little easier. Here’s a list of 10 handy packages you can use in your next project:
Refit is a library that helps you automate calls to REST APIs. To use it, you first define your APIs as an annotated interface. Then you use Refit to turn that interface into a class. I love this package because it allows you to create incredibly terse API callers.
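A minimal sketch of what that looks like (the endpoint and the User type here are illustrative examples, not from any real spec):

```csharp
using System.Threading.Tasks;
using Refit; // NuGet: Refit

// A hypothetical API described as an annotated interface.
public interface IGitHubApi
{
    [Get("/users/{username}")]
    Task<User> GetUser(string username);
}

public class User
{
    public string Login { get; set; }
    public int Id { get; set; }
}

// Refit generates the implementation for you:
// var api = RestService.For<IGitHubApi>("https://api.github.com");
// var user = await api.GetUser("octocat");
```

That interface plus two lines of setup is the entire API caller.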
This package should be installed by default on any .NET API project. It automatically adds Swagger documentation to your API. In addition to that, Swashbuckle creates a handy web page you can use to document and explore your API. I’m a big fan of this package because it links your docs with your actual code. It becomes even more important in a microservices environment where you have lots of services to keep track of.
Polly is a fault handling library that allows you to write more resilient code. To do this, you create policies. These policies define what happens when a particular operation fails. These policies include things like Retry, Circuit Breaker, and Fallback. It’s great for operations where there’s a chance of failure, like a REST call over a flaky network connection.
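For example, a retry policy with exponential backoff might look like this (the exception type, retry count, and URL are illustrative):

```csharp
using System;
using System.Net.Http;
using Polly; // NuGet: Polly

var retryPolicy = Policy
    .Handle<HttpRequestException>()              // what counts as a failure
    .WaitAndRetryAsync(3, attempt =>             // retry up to 3 times
        TimeSpan.FromSeconds(Math.Pow(2, attempt))); // waiting 2s, 4s, 8s

var client = new HttpClient();
var response = await retryPolicy.ExecuteAsync(
    () => client.GetAsync("https://example.com/api/items"));
```

The policy is defined once and can be reused around any operation you wrap in ExecuteAsync.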
LINQ is my preferred way to deal with enumerable objects. MoreLINQ adds lots of useful operators to traditional LINQ.
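A couple of illustrative operators (assuming the morelinq NuGet package):

```csharp
using System.Linq;
using MoreLinq; // NuGet: morelinq

var numbers = Enumerable.Range(1, 10);

// Batch splits a sequence into fixed-size chunks:
// [1,2,3], [4,5,6], [7,8,9], [10]
var batches = numbers.Batch(3);

// MaxBy finds the element with the largest key,
// not just the key itself like plain LINQ's Max.
var longest = new[] { "cat", "horse", "ox" }.MaxBy(s => s.Length);
```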
Dapper is a micro-ORM made by the same folks that gave us Stack Overflow. If you hate fiddling with Entity Framework configurations and just want to write some damn SQL, Dapper is for you. It’s been around for a while, but I’ve only recently gotten a chance to use it on a project. After trying it, I’m a huge fan of this approach. I’ve spent way more hours than I’d care to admit trying to coax Entity Framework’s API into producing the data I want. I’ve also spent countless hours trying to figure out why my five lines of C# produced 3,000+ lines of sketchy-looking SQL. Sometimes it’s easier to do things yourself.
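A quick sketch of Dapper in action (the connection string and Dogs table are made up):

```csharp
using System.Data.SqlClient;
using Dapper; // NuGet: Dapper

public class Dog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Query maps rows straight onto your objects; parameters are passed
// as an anonymous object, so values are properly parameterized.
using (var conn = new SqlConnection("your-connection-string"))
{
    var dogs = conn.Query<Dog>(
        "SELECT Id, Name FROM Dogs WHERE Age > @Age",
        new { Age = 3 });
}
```

You write the SQL; Dapper handles the mapping. That’s the whole trick.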
Generally, when you want to unit test an operation that interacts with the file system, you would need to write a separate interface and class to abstract away the IO operations. With Filesystem Abstractions, you no longer need to do that. Filesystem Abstractions wraps System.IO in a useful interface. Instead of making your own file system access class, you can inject an IFileSystem and get access to all of the IO methods you know and love. Filesystem Abstractions also includes classes to mock the file system, so testing IO operations becomes really easy.
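Here’s roughly what that looks like (the class and file names are illustrative):

```csharp
using System.Collections.Generic;
using System.IO.Abstractions;                 // NuGet: System.IO.Abstractions
using System.IO.Abstractions.TestingHelpers;  // for MockFileSystem

public class LogReader
{
    private readonly IFileSystem _fileSystem;

    // Inject IFileSystem instead of calling System.IO directly.
    public LogReader(IFileSystem fileSystem) => _fileSystem = fileSystem;

    public string ReadLog(string path) => _fileSystem.File.ReadAllText(path);
}

// In production: new LogReader(new FileSystem());
// In a unit test, no real disk required:
// var mockFs = new MockFileSystem(new Dictionary<string, MockFileData>
// {
//     { @"C:\logs\app.log", new MockFileData("hello") }
// });
// var reader = new LogReader(mockFs);
```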
Moq is a handy library for mocking interfaces in unit tests. You’re probably already using this one.
xUnit is a unit testing framework for .NET code. It’s like MSTest and NUnit, but it has a few interesting features. One of those is the use of theories. A theory allows you to run the same test with several different parameters. It’s a handy way to test multiple values without having to write multiple tests.
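A theory looks like a regular test with parameters (a trivial example):

```csharp
using Xunit; // NuGet: xunit

public class AdditionTests
{
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(-1, 1, 0)]
    [InlineData(0, 0, 0)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        // Each InlineData row runs as its own test case.
        Assert.Equal(expected, a + b);
    }
}
```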
MiniProfiler injects a small, unobtrusive profiler into your MVC application. As you complete operations, MiniProfiler will display the time it took to complete each step. Tools like this make it easy to diagnose performance bottlenecks.
CsvHelper makes it easy to create and read delimited files. If you work in an environment where you have to create or process lots of CSVs, this package is a good one to employ.
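A minimal read example (the Person type and file name are assumptions; newer CsvHelper versions require the CultureInfo argument shown here):

```csharp
using System.Globalization;
using System.IO;
using System.Linq;
using CsvHelper; // NuGet: CsvHelper

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

// Maps CSV columns onto Person properties by header name.
using (var reader = new StreamReader("people.csv"))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
    var people = csv.GetRecords<Person>().ToList();
}
```

Writing is just as simple with CsvWriter and WriteRecords.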
There are a few ways to implement a monolithic Angular application. The easiest way is to use the built-in template. Visual Studio comes with an Angular template that includes some nice features. I prefer to use the Angular CLI, so I don’t use the included template. If you also want to use the Angular CLI, I have a post on how to use Angular CLI with a .NET Core project.
Each of the architectures in this post shines in a particular set of circumstances. Monoliths are great for small to medium sized projects, but are not ideal for larger applications. Serverless architecture is great for small apps, but not for large enterprise applications. Microservices architecture works well for large applications, but isn’t worth the overhead on small ones. Figure out the application you want to build and choose wisely.
Building web applications is hard work. Not only do you have to build the application, you need to figure out where to host it. Ever want to skip all the frustrating server provisioning and focus on your code? If so, then serverless architecture is worth a look. In this article, you will learn how to set up a serverless Angular app using Azure.
The term “serverless” is somewhat misleading. There are still servers, but you don’t have to care about them.
Serverless architecture runs on managed services that host your code. Server level concerns like figuring out how much memory you need, fault tolerance, and scalability are abstracted away. You don’t need to pick out a VM or service tier. The service scales automatically. You upload your code and go. The service takes care of the rest.
Serverless platforms come in several different flavors, including backend-as-a-service APIs like Firebase and functions as a service (FaaS). FaaS is where you host small bits of code in the cloud. Think microservices to the nth degree. All of the major cloud providers have some flavor of FaaS. Examples include AWS Lambda, Google Cloud Functions, and Azure Functions. You can even host your own FaaS using OpenFaaS, though that kinda defeats the point. (comprehensive list of serverless platforms)
For the purposes of this application, we are going to use Azure Functions for our backend API.
While Angular applications are complex to develop, they compile into a handful of static files. There’s no reason you can’t throw these files up on any commodity static host. In this case, we’re going to use Azure Web Sites. While this solution isn’t 100% serverless (you have to pick a hosting tier), hosting a handful of files on the free tier is close enough for me. To add another layer of performance, we’re also going to use Azure CDN to cache the files so they load quickly.
Here’s what we’re going to do:
We start off by creating a new Azure Function app. Starting from the Azure Portal, click “New”, search for “function”, and click “Function App”. This will bring you to the new function app blade. Type in your app name (it has to be unique), add the rest of your info, and click “Create”.
Now we’re going to make a function. Either hit the plus sign or click “New Function”. This brings you to the function template screen. Functions can be activated by a variety of triggers, including DB requests, files, and manual activation. We’re going to use the HTTP trigger. HTTP triggers are activated by an HTTP request. Select “HttpTrigger – C#”.
You should now have this lovely boilerplate. This code handles any HTTP request and sends back a response.
Let’s ditch that boilerplate and drop in our API code. This is just going to be a simple GET request. Feel free to pretend we’re calling a database here instead of just returning a list.
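Something along these lines, in the C# script style the portal gives you (the product list is obviously made up):

```csharp
// run.csx
using System.Net;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("Products requested.");

    // Pretend this came from a database.
    var products = new[]
    {
        new { Id = 1, Name = "Longbow" },
        new { Id = 2, Name = "Recurve" },
        new { Id = 3, Name = "Arrows" }
    };

    return req.CreateResponse(HttpStatusCode.OK, products);
}
```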
If you’d like to test your shiny new function, click the “Get function URL” link and drop that URL into Postman. As you can see here, things are working splendidly.
You can also test using the built-in testing utility on the right side of the function window.
Now that we have our backend set up, let’s put together our front end. In this case, we’re going to set up an Angular application built with the Angular CLI. Since this post is about setting up the architecture, I’m not going to go into great detail about the application itself, which can be found in this GitHub repo. If you’re using another front-end library like React or Vue.js, this setup should be mostly the same.
To begin, create an Azure Web App. (New -> Web App). The free tier should be adequate for our needs. We’re just hosting a few files. If you want to use your own custom domain, you’ll need to use a higher tier.
Now we need to get the static files to host. If you’re using Angular CLI (and you should be…), the command to do this is:
ng build --prod --aot
After that command runs, head over to the dist folder of your Angular app. It should look something like this:
The next step is to get your static files into the web app. I used FTP, though you can download a publish profile if you want. Drop your static files into the wwwroot folder. At this point, you should be able to go to the URL and your app will load.
The next step is to set up a CDN. Since these are just static files, we can use a CDN to serve them. Azure CDN also allows us to serve the site over HTTPS without having to pay for a plan that supports SSL. To set one up, go to “New”, search for “CDN”, click “CDN”, and click “Create”.
This brings up the CDN profile blade. Type in your name, select “Standard” for your pricing tier, and select your web site from the dropdown. Then click “Create”. Now you have a CDN for your static files.
Now that we have our web app in place, we need to set up the CORS (cross-origin resource sharing) headers on our function. Browsers, by default, prevent your website from accessing APIs that don’t match the URL of your web application (CORS error pictured below). This feature helps to prevent Cross Site Request Forgery (CSRF) attacks, but is sometimes kind of annoying. If you want to use your function API from your web application, you’ll need to add a CORS header to your API.
To begin, go back to your function app and go to the “Platform features” tab and click “CORS”.
This brings you to the CORS screen. Add the URL(s) of your web app. You can use “*”, the wildcard, but you shouldn’t because it’s rather insecure. It’s best to use your web app’s URL here.
At this point, we have a functioning web application, but there’s still one more issue to resolve. Like most SPAs, Angular supports client-side URLs that don’t correspond to server-side resources. All of our application code is served from index.html on the root URL, but we use the Angular router to map URLs to parts of the application. For example, in our sample app, if we navigate from the home page to the Products page, it works great, but if we hit refresh, we get a 404 (error below). This is because there’s no page on the server for that URL. To fix this, we need to add a URL rewrite rule to redirect anything that’s not a static asset back to the index page.
To do this, we’re going to create a web.config file with our rewrite rule.
Here’s our rule (finished product):
This rule takes anything that isn’t a file, directory, or an asset (*.html, *.js, *.css) and redirects it to the index page. If you have fonts or images in your application, you should probably add rules for those file extensions as well.
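Sketched as a web.config fragment, that rule looks something like this (the rule name is arbitrary, and the extension list matches the description above):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="AngularRoutes" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_URI}" pattern="\.(html|js|css)$" negate="true" />
          </conditions>
          <action type="Rewrite" url="/index.html" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```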
Then, take that web.config file and drop it into the wwwroot folder of your web app. App URLs should now redirect appropriately.
If you’d like to build an Angular application and host it for cheap in the cloud, this is a great way to do it. The serverless architecture is great for MVPs, simple applications, and proof of concept applications. It’s an easy way to get code into the world with minimum fuss.
If you’d like additional information or access to the code referenced in this post, check out the links below.
Have you ever worked on an application with a jungle-like class inheritance hierarchy? Everything in the app inherits from two layers of classes and it’s impossible to follow a single line of functionality. You get code reuse, but at the cost of incomprehensible spaghetti code. I, for one, find that price too steep. In this post, we’re going to learn how to build code that’s easy to reuse, easy to test, and most importantly, easy to read.
We have many design patterns in object-oriented programming. One of the most useful is the composite reuse principle (a.k.a. composition over inheritance). This term sounds a little opaque, but it’s not hard to understand. It means that you should design your classes as a series of loosely coupled components instead of using a multilayered class hierarchy for code reuse.
Here’s an example:
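A minimal C# sketch of the idea (the classes are illustrative): instead of a Report base class that hard-codes its formatting, the formatter is a component you plug in.

```csharp
using System;

// The behavior we want to reuse lives behind a small interface...
public interface IFormatter
{
    string Format(string body);
}

public class UpperCaseFormatter : IFormatter
{
    public string Format(string body) => body.ToUpperInvariant();
}

// ...and Report is composed with it rather than inheriting from it.
public class Report
{
    private readonly IFormatter _formatter;

    public Report(IFormatter formatter) => _formatter = formatter;

    public string Render(string body) => _formatter.Format(body);
}

public static class Program
{
    public static void Main()
    {
        // Swapping behavior means passing a different component,
        // not rearranging a class hierarchy.
        var report = new Report(new UpperCaseFormatter());
        Console.WriteLine(report.Render("quarterly numbers"));
    }
}
```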
Practically speaking, this means breaking down inheritance hierarchies into pluggable services. Then you can inject those services using your favorite dependency injection framework.
While using composition may seem more complicated, there are several advantages.
If you’re using class hierarchy for code reuse, ditch that approach and favor composition instead. Your code base will thank you for it.
What do traditional archers and old Linux hackers have in common?
Terrible looking websites.
I’ve recently started getting into traditional archery. Like any other geek, the first place I looked for information was the Internet. In my quest to suck less at hitting targets, I scoured the Internet for resources. I found lots of content, especially video content, but most of the websites were so outdated it made them hard to read. Not only were they unusable on my phone, but many sites didn’t even look good on my computer. While there are lots of videos on YouTube, I could only find one well designed site. (http://www.archery360.com/)
The thing about traditional archery is that it’s been around for thousands of years. It’s about the most evergreen content you can find. A well written archery site should be able to last indefinitely.
This got me thinking about how to build a future proof (or at least future resistant) web application. The benefits of future proof apps are obvious, but even if you don’t plan for longevity, your app is probably going to be running longer than you expect. I’ve seen many instances of businesses running ancient apps that no one thought would last that long. Think about the trillions of lines of COBOL happily chugging away in server rooms across the world.
Additionally, most people have better things to do than redesign their sites every year. Not everyone needs to keep up with the latest web design trend. This is especially true for people who are producing evergreen content, as opposed to news sites and blogs.
Great software is software that can be efficiently maintained. Pay attention to proper design principles. Separate your concerns, don’t get too clever, and keep things simple whenever possible. Program to interfaces to isolate dependencies. There’s a reason why we call these concepts “best practices”. Additionally, use a robust suite of automated tests. Automated tests will save your ass when trying to make changes. Software that can be properly maintained lasts longer and is easier to deal with. It’s also less expensive. Realize that most of the stuff you write isn’t going to be scrapped in the next few years. The web is a mature platform and most applications are going to be around for a while.
While the Lindy Effect is useful, relying on it alone will steer you wrong. COBOL and FORTRAN have been around since forever, but there’s no way in hell anyone should be writing new code in them. Think about what other people are doing with your chosen stack today. Are people writing anything new in it, or is the technology on its way out?
Another predictor of longevity is who’s backing the technology in question. ASP.NET has Microsoft behind it. Ruby on Rails and PHP both have large communities of users. Angular.js has Google (and to a lesser degree, Microsoft) behind it. You don’t need a large corporate backer to be viable, but if you see lots of people and companies get behind a technology, the likelihood of it being around in ten years is high.
People access the web from all sorts of devices, from computers to phones to Internet-enabled refrigerators. One thing we know about the future is that the number of device form factors is not going down. When I was looking for archery sites, the most common problem was that the text was too small. This was because most of the sites were designed with one viewport in mind.
There are two things we can do to make our designs future proof.
How dated does it look? Can you still use it?
While Google isn’t the most beautiful site, its simplicity gives it long-term appeal. When building something to last, stick with a simple design. You don’t need to go as spartan as Google, but keep the content-to-chrome ratio firmly on the content side.
With frameworks like Bootstrap and Foundation, it’s easier than ever to make your site responsive. Responsive design should be the default for everything you build on the web. You never know what kind of device someone is going to visit your page with, but with a little bit of effort, you can make your web application work on most of them.
Future proofing should be an important design consideration. By making good choices in the beginning, you can ensure your web app will live a long life.
Now if you’ll excuse me, I have a traditional archery website to build…
There’s nothing like replacing a legacy software system to stoke the fires of self-righteousness. You get to pull some poor users out of the dark ages and save them from a system that may have been state of the art last decade, but is junk now. (Ignoring the fact that the old system lasted that long because it did what the users wanted it to do.) It’s especially fun when the old system is so laughably bad that it was obsolete the day it was written. Unfortunately, these projects come with a few pitfalls.
The first pitfall is that you can end up rebuilding the same legacy system in a new technology. It’s hard to transfer old paradigms into new ones, and if you’re not careful, you can end up repeating old mistakes. I worked at a company that was creating an ASP.NET web app built over an existing database. Many of the tables in this database had no keys or relational data. The original system was built on a mainframe using flat files. When that system was upgraded in the 90’s, the flat files were just copied over, with no regard for modern relational database structuring. When the ASP.NET version came along, the plan was to just copy over each of the old pages, one by one. Management didn’t take modern web and object-oriented practices into consideration. Needless to say, there was some debate between the developers and management.
The second pitfall can come while gathering requirements. When gathering requirements for the new system, it’s easy (and often correct) to reference the old system. There are two problems with using the old system as the primary reference for requirements. First, the system sometimes does things a certain way because of technical constraints. I once worked on replacing a system that had lots of batch processes. Many of those processes were not needed because modern systems were fast enough to process those records on the fly. Second, old systems often don’t reflect the business practices of the people who use them. There are lots of instances where users have to do something convoluted to get their old system to behave how they need it to. Watch out for these types of scenarios; fixing them is a good way to score brownie points with your customers.
The third pitfall happens when the builders of the new system don’t recognize the advantages of the old system. In the system I mentioned in the first point, the legacy system allowed for rapid keyboard entry. The new system, being a web application, did not optimize for keyboard entry. In the beginning, the developers ignored the users because of the “obvious” superiority of the new way of doing things. Eventually, we realized our error and created a keyboard entry optimized set of web controls. The users were much happier. While it may be fun to mock the old system, you should also pay attention to your users and make sure you aren’t creating a system that’s worse than what they started with. Just because it’s pretty and new doesn’t mean it’s good for the customer.
1. Make sure your new system is using new paradigms. Don’t repeat legacy design.
2. Be careful when gathering requirements from the old system. Especially when it comes to implementation details.
3. Don’t make a new system that’s worse than the system you’re replacing.
Software developers spend most of their time taming chaos into orderly systems. Along the way, there are often lots of random issues that can throw you off track. Heavy processes like waterfall development sought to eliminate as much risk as possible. Unfortunately, this risk avoidance often led to lots of pain at the end of a project. After several decades of failure, we now use agile practices that expect chaos. Dealing with chaos and risk is important for anyone who builds software for a living.
I recently read Antifragile by Nassim Nicholas Taleb. Antifragile is one of those books that changes how you see the world. Once you understand the concept of antifragility, it becomes a useful analytic tool. As a software developer, I started thinking about how to build less fragile software.
Antifragile is about how some things become better when exposed to volatility. For example, when you lift a heavy weight, your body makes itself stronger in response to the stress. Conversely, if you never exercise, your body becomes weak. The human body needs some stress to be healthy. The author talks about three different states.
Fragile things suffer from volatility. Think of a teacup. If you drop it, it will break. Fragile things tend to be rigid and over-optimized. Think of a complex code base with lots of dependencies. One small change can cause the whole system to come crashing down. Systems that are fragile tend to suffer from a small number of devastating events.
Resilient things are indifferent to volatility. Think of a rock. If you drop it, nothing happens. Resiliency can come from redundancy or structural integrity. Think of a well structured code base with lots of automated tests. Failures are generally caught and corrected for before they can cause a problem. Many of the principles of software development revolve around building resilient software programs.
Some systems go a step beyond resiliency and become antifragile. Antifragile things actually gain with disorder (up to a point). Changes are generally frequent and have a small impact. Free market economies are a good example of antifragile systems. Individual businesses are fragile, but the system is strong because the best businesses survive. Evolution is another example of an antifragile system. Individuals of a species can die, but the best adapted animals get to reproduce. This causes the species as a whole to get stronger with volatility.
Other summaries of Antifragile
There are lots of software development practices that capitalize on chaos. The key is to use frequent failures as learning experiences. Here are some examples:
Agile Software Development
Agile processes revolve around building software in tight iterations with frequent customer feedback. Frequent exposure to criticism reduces the impact of criticism. It also allows you to gather more accurate information about what the users want. Additionally, your process continually improves, which means your team becomes more effective over time. This is in contrast to waterfall processes that front-load understanding and are fragile. One mistake in understanding can have a large and painful impact in waterfall projects.
Frequent releases / Continuous Integration
If you do hard things frequently, like releasing software, they become easier. A frequent release cycle breeds processes that make deployment easy and reversible. Continuous integration allows companies like Facebook to push several updates a day. This gives them a huge advantage over slower companies.
The chaos monkey is a tool built by engineers at Netflix. It is a program that randomly breaks processes. The introduced chaos allows developers to handle problems in a controlled way. It also encourages resilient design in the first place. Contrast this to companies that avoid chaos: random failures occur at inconvenient times, and the fix is likely to consist of bubblegum and duct tape.
Minimum Viable Product
A minimum viable product is a small product used to test the market. Because the product is cheap to build, you can run many iterations. Each iteration allows you to learn more about your potential customers. These experiments allow companies to find profitable products quickly. This process allows smaller companies to exploit market situations that bigger companies cannot.
The world of software development is chaotic. We should not fear the chaos, but instead build systems that capitalize on it. Modern software development practices expect change and use it to create better software. Using these processes, we can build software that isn’t just resilient, but is also antifragile.
If you’re interested in learning more, check out Antifragile