Avoid The Class Hierarchy Jungle: Favor Composition Over Inheritance

Have you ever worked on an application with a jungle-like class inheritance hierarchy? Everything in the app inherits from two layers of classes and it’s impossible to follow a single line of functionality. You get code reuse, but at the cost of incomprehensible spaghetti code. I, for one, find that price too steep. In this post, we’re going to learn how to build code that’s easy to reuse, easy to test, and most important, easy to read.

Composition > Inheritance

We have many design patterns in object-oriented programming. One of the most useful is the composite reuse principle (a.k.a. composition over inheritance). This term sounds a little opaque, but it’s not hard to understand. It means that you should design your classes as a series of loosely coupled components instead of using a multilayered class hierarchy for code reuse.

Here’s an example:
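This is a minimal C# sketch rather than production code, and the class and interface names are purely illustrative. The point is that ReportGenerator is composed from a small, swappable IReportFormatter component instead of inheriting its formatting behavior from a base class.

public interface IReportFormatter
{
    string Format(Report report);
}

public class HtmlReportFormatter : IReportFormatter
{
    public string Format(Report report) => $"<h1>{report.Title}</h1>";
}

public class ReportGenerator
{
    private readonly IReportFormatter _formatter;

    // The formatter is injected, so it can be swapped out or mocked in tests.
    public ReportGenerator(IReportFormatter formatter)
    {
        _formatter = formatter;
    }

    public string Generate(Report report) => _formatter.Format(report);
}

public class Report
{
    public string Title { get; set; }
}

Compare that to a ReportGeneratorBase with a protected virtual Format() method. The composed version does the same job, but each piece can be understood, reused, and tested on its own.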

Practically speaking, this means breaking down inheritance hierarchies into pluggable services. Then you can inject those services using your favorite dependency injection framework.

Why Bother?

While using composition may seem more complicated, there are several advantages.

  1. It’s far easier to reason about the code. If you divide up your functionality into small components, each component is simple. You’re dividing up the complexity of the application into manageable chunks. When you’re using complex inheritance, it’s difficult to figure out what block of code is executing. This is especially true once you start selectively overriding methods.
  2. It’s much easier to reuse a single component than to glue a class onto a hierarchy.
  3. It’s easy to unit test loosely coupled components. Building the appropriate mocks to test a complex class is painful. Mocking a few interfaces is much easier.

Spotting Refactoring Opportunities

There are a few potential anti-patterns to keep an eye out for.

“Base<thing>” classes. Especially base controllers (MVC), base pages (in Web Forms), and other base classes for classes that process data. Base classes for data storage objects are usually fine (e.g., an Administrator class that inherits from a Person class), but using inheritance for processes is a bad idea.

More than two layers of inheritance. It’s hard to imagine anything that needs more than two layers of inheritance.

Ginormous “God” classes that span thousands of lines. While not strictly related to using composition over inheritance, these go against the idea of building a suite of simple components. Large classes are difficult to read and to test. Flattening a class hierarchy into a “God” class is not an improvement.

Base classes with only one class that inherits from them. The base class here is superfluous. Feel free to get rid of it.

 

Conclusion

If you’re using a class hierarchy for code reuse, ditch that approach and favor composition instead. Your code base will thank you for it.

Further Reading

https://en.wikipedia.org/wiki/Composition_over_inheritance

https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)

 

Cmder is the Cadillac of Windows Consoles

Like fashion, computing works in cycles. Things that were once looked at as passé come back with a retro vengeance. For .NET development, the console is back. Consoles combined with lightweight editors like Visual Studio Code are becoming increasingly popular. Call me a hipster, but I’m all for this change. I hate waiting around for my editor, and I never have to do that in a console or Visual Studio Code.

Unfortunately, if you’re living in the world of Windows, the default command line options are lacking. Not only are CMD and PowerShell ugly, they lack basic usability features found in Linux and OS X. There is a better way. Cmder is a Windows console emulator that bundles several command line tools together into one fantastic package. It’s the Cadillac of Windows consoles.

Here’s what it looks like:

Cmder has a nice tabbed interface. You can run multiple consoles without having to deal with a bunch of windows. It also supports several different types of consoles, including CMD (enhanced with Clink), PowerShell, and Bash.

You also get full control of the appearance of the shells including the font, color, etc… Cmder has many themes, but the default Monokai theme is good enough for me.

More importantly, you can create custom tasks. A custom task is a specific command window that you can define. You can specify the shell, what parameters it’s called with, and what directory is opened. You can have a command line setup for each application you work on. You no longer need to open a command line and manually navigate to your app folder each time you open up the console.

To make a custom task, do the following:

From Cmder, press Win + Alt + T. This takes you to the tasks window. You can also click the arrow next to the plus sign and click “Setup Tasks”.

This window allows you to reorder and reorganize the different defaults in Cmder. Hit the plus sign to add your own.

In this case, I created a console task that opens a specific project I’m working on.

After you set up your new task, click “Save Settings”. You should see your new shell in the list of available presets. Then you can open that exact shell whenever you want.

If you want to supercharge your Windows console, check out Cmder.

A Free Windows Tool For Recording Tests (Steps Recorder)

Documenting the exact reproduction steps for a bug can be a real pain. Some bugs need several specific reproduction steps. Even the most detail oriented tester can miss a step. Fortunately, there’s a hidden tool built into Windows that can help you out. It’s called Problem Steps Recorder. 
 
Problem Steps Recorder is an easy to use tool that will track your steps and produce a handy report. This report has information on each click and a screenshot for each step. It makes it easy to reproduce a bug.
 
Here’s a sample of what the tool produces: 

Problem Steps Recorder

To get access to this tool, type “psr” into your favorite command shell. That’ll open up the program. After the program is open, you can pin it to your toolbar. You don’t need to install anything; Steps Recorder is built into Windows.

The Steps Recorder app is easy to use. Click “New Recording” and work through your test. When you’re done, click “Stop”. Afterwards, you get a document detailing every step. This document can then be saved into a zipped MHTML file, which you can easily attach to a bug report or an email. This document is better than a video because you can scan it quickly.
 
I’m not sure why more people don’t know about this tool. It makes recording test steps a breeze.

4 Tips For Blazing Fast ASP.NET Core Applications

I’m a huge fan of ASP.NET Core. It’s a great iteration on the ASP.NET platform and it should be your default choice for any new web development. I’m also a big fan of apps that don’t take a week to load. Fortunately, ASP.NET Core doesn’t skimp in the speed department. The framework has some great features for building fast applications. Some things that used to be hard are now easy.

In this post, I’m going to go over a few tips for building blazing fast ASP.NET Core applications.

Async Everything

 

One important way to help your application scale is to use asynchronous methods. The async and await keywords make building asynchronous code as easy as building synchronous code. Using async and await frees up your threads while waiting for calls to return. Because you are using up fewer threads, more people can use your application at the same time.

Async is usually a good default practice, but it’s especially important when calling slower processes, like database calls and service requests. You don’t want to hog a thread waiting around for database results to come back. When building your application, favor async versions of methods when they’re available. Entity Framework has async versions of most data access methods, so make use of them.
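For example, a Web API controller action that reads from Entity Framework might look like the sketch below. The StoreContext, Orders, and IsOpen names are made up for illustration; the important part is the await on ToListAsync().

using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

[Route("api/[controller]")]
public class OrdersController : Controller
{
    private readonly StoreContext _context; // hypothetical DbContext

    public OrdersController(StoreContext context)
    {
        _context = context;
    }

    [HttpGet]
    public async Task<IActionResult> GetOpenOrders()
    {
        // The request thread goes back to the pool while the query runs.
        var orders = await _context.Orders
            .Where(o => o.IsOpen)
            .ToListAsync();

        return Ok(orders);
    }
}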

One thing to keep in mind is that using async and await will help you scale, but it won’t run your requests in parallel. If you have several slow requests, consider running them in parallel using the Task Parallel Library. That will compress your wait time to the longest running call, as opposed to waiting for each one to return sequentially.
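Here’s a rough sketch of that pattern. The customer and order services are hypothetical injected dependencies; the point is that both calls start before either one is awaited.

public async Task<IActionResult> GetDashboard(int customerId)
{
    // _customerService and _orderService are injected (hypothetical) services.
    // Start both calls without awaiting, so they run concurrently.
    var customerTask = _customerService.GetCustomerAsync(customerId);
    var ordersTask = _orderService.GetOrdersAsync(customerId);

    // Total wait time is roughly the slowest call, not the sum of both.
    await Task.WhenAll(customerTask, ordersTask);

    return Ok(new
    {
        Customer = await customerTask,
        Orders = await ordersTask
    });
}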

Cache Rules Everything Around Me

Getting data out of a database is the largest performance bottleneck in most applications. One way to reduce that cost is to cache things that are slow to retrieve or slow to change. ASP.NET Core has a few built in cache mechanisms.

The easiest cache to use is the built-in memory cache. This cache stores items in the memory of your application server. While this is easy to use, there are two downsides. The first downside is that your cache goes away if your server goes down. Often, this is a non-issue, but if your application caches things that are costly to calculate, this can be a real downer. The second problem is that if you want to scale your application to more than one server, you’re out of luck.

The solution to this problem is to use a distributed cache. This cache uses either a Redis instance or a SQL Server table to hold your cached data. I’ve used both flavors and they both work great. One thing I liked about using the SQL Server cache is that I could add fields to the table to enable more detailed caching logic.

Regardless of which caching mechanism you use, you should hide it behind your own cache abstraction. (I call mine ICacheProvider.) By using your own cache abstraction, you can easily swap out one caching mechanism for another. Most people start off with the memory cache, but eventually outgrow it. If you put your caching behind an abstraction, you can swap it out for a distributed cache without having to change a bunch of places in your app.
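As a sketch, the abstraction can be as small as the interface below, with an in-memory implementation behind it. The method shape here is just how I’d approach it, not a framework type.

using System;
using Microsoft.Extensions.Caching.Memory;

// The rest of the app only ever talks to this interface.
public interface ICacheProvider
{
    T GetOrCreate<T>(string key, TimeSpan lifetime, Func<T> factory);
}

// In-memory implementation. Swap this class out for a Redis or
// SQL Server backed version without touching any of the callers.
public class MemoryCacheProvider : ICacheProvider
{
    private readonly IMemoryCache _cache;

    public MemoryCacheProvider(IMemoryCache cache)
    {
        _cache = cache;
    }

    public T GetOrCreate<T>(string key, TimeSpan lifetime, Func<T> factory)
    {
        return _cache.GetOrCreate(key, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = lifetime;
            return factory();
        });
    }
}

Register it in ConfigureServices() with services.AddMemoryCache() and services.AddSingleton<ICacheProvider, MemoryCacheProvider>(), and the rest of the app never needs to know which cache is behind it.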

Crush Those JSON Responses With Middleware

By default, ASP.NET only compresses a few types of responses. These include the content of Razor pages, but not the results of API calls (JSON is not compressed by default). This means that if you have an API-request-heavy application (i.e., a SPA), you can save some serious bandwidth by compressing those responses. This is especially important if you are serving mobile customers, who may have a low bandwidth signal.

Unlike previous versions of ASP.NET, ASP.NET Core has handy built-in response compression middleware that you can add to your app. You can specify the type of compression and which MIME types to compress. I’ve tested this in my application and the compression saves a noticeable amount of bandwidth.
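Wiring it up takes a couple of lines in Startup.cs. Here’s a minimal sketch using the Microsoft.AspNetCore.ResponseCompression package with gzip; adjust the provider and MIME type list to fit what your app serves.

// In ConfigureServices():
services.AddResponseCompression(options =>
{
    // GzipCompressionProvider lives in Microsoft.AspNetCore.ResponseCompression.
    options.Providers.Add<GzipCompressionProvider>();
});

// In Configure(), before the MVC middleware:
app.UseResponseCompression();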

Bundle Those Client Side Assets

Modern applications tend to use lots of JavaScript libraries, images, and fonts. Getting those assets to the client efficiently is important, especially if those clients are on a low bandwidth connection. We rely on two strategies to minimize what we send to the client. The first strategy is bundling. This is where you take several assets and send them down in one request. This saves you bandwidth because fewer requests and fewer headers go over the wire. The other strategy is minification. This is where you run JavaScript through a process that strips out any extraneous code, shrinking down the file size.

In the ASP.NET world, there are two paths to do this. If you are building a JavaScript-heavy application, like a SPA, use a JavaScript build tool. Webpack is my preferred JavaScript build tool. It can iterate through your dependencies and then bundle them into files. If you’re using Angular (2+), you should use the Angular CLI. It uses Webpack under the hood, but hides away its complexity.

The second way to bundle and minify assets is to use the Bundler and Minifier Visual Studio extension. This extension compiles your client side assets on build. It is easier to use than JavaScript build tools like Webpack. If you’re using Razor views with a little bit of client side code, this is the way to go.

 

How about you?

If you have a good performance tip, feel free to leave it in the comments.

Speaker Tip: Warm Up Your Audience With Conversation

Ever given a talk where you walk up to the lectern and everyone is chatting and not paying attention? You end up wasting several minutes of your time slot while everyone settles in. It kills your talk’s momentum and makes it harder to get everyone’s energy up. You’re starting your talk in the hole.
 
This is an easy problem to fix. 
 
When you begin to set up your room, start a conversation with people in your audience. Try to get in right after the previous speaker leaves. That way you can set up your gear and still have plenty of time to schmooze. Start off by asking your audience to help you with AV. Most rooms are a little different, so you usually need to tweak your settings. (Bonus tip: Don’t wait until your presentation to adjust this stuff; that’s an amateur move.) “Is this text big enough?” “Can you hear me okay?” And so on. If you do this enough times, you’ll be close, but it’s nice to get some feedback.
 
After you get set up, ask your audience more questions. Ask them where they are from. Ask them about their tech stack. Gather information for your talk. Use this information to customize your presentation to the people who are in the room.
 
Have a bunch of Java people? Reference some of their culture or relevant technologies. Doing a web talk in a room full of web noobs? Spend more time on the basics. Have a room full of .NET people? Explain new concepts using familiar terms. For example, I use attributes in C# to explain decorators in TypeScript. Use local jokes and references. Ask people about their concerns and try to address them in your talk.
 
Make your talk a conversation, not a monologue. 
 
Ask people about previous talks in the conference or other conference activities. Specifically, ask them about talks they enjoyed. Besides being fun, it gets people to associate you with other good speakers. If you have a bunch of people from a previous talk, you can reference that talk in your own. This is also a good way to learn about new speakers or topics to check out.
 
There are other benefits to starting with conversation. Leading the conversation allows you to take control of the room early. That way, when your time slot begins, you’ll already have everyone’s attention. This maximizes your speaking time and the value you deliver. By leading the discussion, you can build energy. You can joke around with your audience and build a rapport. You can begin your talk with an engaged and energetic audience, which is ideal.

The next time you give a tech talk or presentation at work, show up early and start a conversation. It’s a great way to get things moving in the right direction.

R for .NET Developers: Why Bother?

I spend most of my time working with Microsoft web technologies, or as I like to refer to it, “.NET and Friends”. While I’m a big fan of the web, I’m always looking into new areas of development. One of those areas is data analysis. We are awash in data and learning how to process it is a valuable skill. Until recently, there wasn’t much in the Microsoft ecosystem for doing this kind of work. This isn’t a bad thing, but it’s nice to be able to use familiar tools to learn new things.
 
Fortunately, Microsoft has made some serious investments in the data analysis space. You aren’t going to be using C#, but Visual Studio now supports R. R is a language made “by statisticians for statisticians”. It’s one of the premier data science technologies and a great way to learn statistics. Microsoft also has R support in SQL Server.
 
In this post, I’m going to cover a few of the reasons R is worth a look, even if you are not planning on donning the data scientist hat anytime soon.

The Power of Polyglot

This is sometimes forgotten in the .NET world, but different languages are good for different things. If you build web applications, you already know this. For example, if you want to build a modern web application, you need at least three different languages (JavaScript, CSS, and HTML). More likely you’re looking at six or more (JavaScript, Typescript, SASS, CSS, C#, HTML, XML, and Markdown).
 
Every language does certain things better. You should use the language that does the job best, rather than trying to shoehorn in your language of choice. The data analysis space is no different. The two most popular languages for data analysis are R and Python. While Python is a viable option (and supported in Visual Studio as well), R is purpose-built for data analysis. You can do data analysis in either, but R does it with less code.

In addition to the productivity benefits of using the right tool for the right job, it’s good for your personal development to learn new programming languages. The Pragmatic Programmer recommends learning a new one each year. Learning new languages improves your thinking and makes you better at your primary development stack.

Data Is The New Oil

“There’s gold in them servers.”

Data is money. Large companies are using the data you generate as a goldmine. Uses range from optimizing advertising to making products even more addictive. In addition to user generated data, we also have the mountains of data generated by IoT devices. Sometimes we use it for small gains, like a Nest Thermostat optimizing your heating and cooling, but sensor networks can have a much greater impact. We have access to more data than at any point in human history. If you can figure out how to mine insights from that data, you will be rewarded handsomely.

If You Care About Truth, Data is For You

“There are three kinds of lies: lies, damned lies, and statistics.”

With plenty of data comes plenty of people using that data to manipulate you. Every political cause has a stable of statistics behind it. Even if they fall apart under scrutiny, people believe them because numbers sound fancy. People trying to sell you something use numbers to appear more credible. If you want to thrive in our data soaked economy, it’s essential to become data literate, so you can spot these manipulations.

R is for Learners

R has several features that make it a great tool for learning about data analysis. First, it’s really easy to learn. R is a simple language that you can pick up in a few hours. Additionally, R has an easy-to-use built-in help system. If you need info on any command or method, it’s a few keystrokes away. R also ships with a lot of built-in data sets to practice statistical techniques on, including many of the popular demo data sets that are well known in the statistics community.

Playing Nice With Others

As data analysis becomes more prevalent in the enterprise, you’re probably going to wind up working with data analysts and data scientists. Learning about some of the tools and techniques they use gives you common ground. It’s the same reason software developers should develop business and industry knowledge. Being able to connect with your team members on their terms makes you more than a run of the mill software developer.

Conclusion

If you’re an enterprise developer, R is worth a look. You can use R to learn valuable new skills using familiar tools. With a little effort, you’ll be able to slice and dice data for fun and profit.

To learn more, check out my post on R Resources

A Few of My Favorite R Resources

In an effort to improve my data analysis skills, I’ve been learning and speaking about the R programming language. Even if you don’t want to be a data scientist, (whatever the hell that means this week) learning some analysis skills can pay dividends. Data literacy is an essential skill in our data soaked economy and R is a good learning tool for analysis skills.

One of the harder things to do when starting in a new area is finding useful resources. It’s tough to find the digital needle in the web powered haystack. To make your life a little easier, here’s a list of the R resources I found to be useful.

Setting Up R

There are three paths to getting R set up on your machine. If you’re a Visual Studio 2017 user, the easiest way to get R is to install the Data Science workload in Visual Studio. This will get you the Microsoft flavor of R and R Tools for Visual Studio.

Installing R Tools for Visual Studio

If you’re not into Visual Studio, you can also install an R interpreter and R Studio. R Studio is a free R IDE. For interpreters, you can go with either the Microsoft flavor or the standard CRAN flavor of R.

R Windows Installer
Microsoft R (optional)
R Studio

If neither of those options works for you, you can also run R in a Jupyter Notebook. Jupyter is a web-based environment that makes it really easy to mix text and code. It’s used in many contexts, including scientific research and virtual textbooks. To set up a local copy, start off by installing Anaconda. Anaconda is a data science environment that includes a plethora of handy analysis tools. After you install Anaconda, you’ll need to install R using the conda package manager. Then you can run Jupyter using the “jupyter notebook” command.

Anaconda Download

Commands:

conda install -c r r-essentials
jupyter notebook

Sites

R Studio Cheat Sheets
A collection of useful R related guides in PDF format.

R Tutor Tutorials
This site came in handy a few times while trying to find specific R issues.

Flowing Data
Flowing data has a variety of useful articles on R and other data topics.

Don’t forget about the built-in R help system. Prefix any command with a question mark and it’ll search the R documentation for you. (Example: “?kmeans”)

Books

I skimmed through a bunch of books on R, but the one I really liked was R: Recipes for Analysis, Visualization and Machine Learning. The writing was clear and the content was pragmatic. The task based format was easy to follow and implement. Another book that I used was R for Data Science.

R: Recipes for Analysis, Visualization and Machine Learning
R for Data Science

This list of resources is enough to help you get started in learning R. Go forth and learn how to slice and dice your data.

Solution – New Project Hanging When Using R Tools for Visual Studio 2017

I’m really enjoying using R Tools for Visual Studio. It’s nice to learn something new (R) with something familiar (Visual Studio). I did, however, hit a snag when trying out the new IDE.

Upon installing the data science workload in Visual Studio (which is how you install R Tools), I couldn’t open or create a project. File -> New Project just hung indefinitely. Usually, you expect these things to actually work, so I dropped a bug onto the R Tools for Visual Studio GitHub page. To my surprise, within about thirty minutes (on a Sunday night), someone asked me about my issue. While they didn’t give me an exact answer, they gave me the hint I needed to fix my issue. I was impressed by the speed and helpfulness of their response.

As some of you already know, our good friends at Microsoft maintain their own version of R. This version is faster, but it’s a point release behind the latest one. It’s a basic trade-off between shiny and speedy. It turns out R Tools for Visual Studio doesn’t yet support the latest version. I had previously installed the non-Microsoft R, which was at v3.4, and Visual Studio defaulted to that version.

There are two ways to solve the issue. The easiest way is to just uninstall R 3.4 and use the Microsoft versions of R. If you are using R Studio as well, Microsoft’s R works fine. The second way is to go to R-Tools -> Windows -> Workspaces. From there, you can pick the version of R that’s being used by Visual Studio.

Regardless of which solution you go with, this issue, while vexing, is easy enough to fix.

Happy data hunting.

Angular CLI With .NET Core

Angular is a fantastic platform for building rich client side apps, but let’s not forget about the back end. My choice for the back end is ASP.NET Core. If you’re not shy about spending a bunch of time setting up Webpack, Angular and ASP.NET Core make a fantastic combination.

However, I prefer to spend my time building applications, not configuring build tools. Angular CLI takes a lot of the pain out of using Angular, but it’s a self contained command line tool. Fortunately, Angular CLI and ASP.NET Core can happily co-exist. In this post, we’re going to build an application using these two technologies together. By the end, you’ll be ready to have all the goodness of Angular and ASP.NET Core without spending a week setting up Webpack.

Prerequisites

Make sure you have .NET Core, Node.js and Angular CLI installed.

https://www.microsoft.com/net/core#windowsvs2017

https://nodejs.org/en/

npm install -g @angular/cli 

Getting Started: ASP.NET Core App Setup

Fire up Visual Studio (I’m using VS 2017) and create a new ASP.NET Core Web Application. Select the Web API Starter. This template doesn’t bring in any client-side scaffolding, which is ideal. We’re going to use Angular CLI to generate the client files.

Then go ahead and disable automatic TypeScript compilation. We want Angular CLI to compile our TypeScript, not Visual Studio. Do this by adding a “TypeScriptCompileBlocked” property (set to true) to your csproj file.

Head on over to the Startup.cs and add in the static files middleware. You’ll have to install the “Microsoft.AspNetCore.StaticFiles” NuGet package. After that, add app.UseDefaultFiles() and app.UseStaticFiles() to your Configure() method.
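The relevant part of Configure() ends up looking roughly like this. (A minimal sketch; your method will also contain whatever else the template generated.)

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Serve index.html when the root URL is requested, then serve the
    // static assets that Angular CLI drops into wwwroot.
    app.UseDefaultFiles();
    app.UseStaticFiles();

    app.UseMvc();
}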

After you get those set up, you’ll want to add some middleware to redirect those pesky 404s to the root file. This will allow you to navigate the application without having to start at the root each time you hit refresh. For this app, API routes are prefixed with “api”, so we’re excluding those routes from the check. We’re also excluding routes with a file extension, since those are likely static assets.
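Here’s one way to write that as a small inline middleware at the top of Configure(), before the static file handlers. This is a sketch of the common SPA fallback pattern rather than anything specific to Angular CLI; Path.HasExtension comes from System.IO.

app.Use(async (context, next) =>
{
    await next();

    // If nothing handled the request, and it isn't an API call or a file,
    // rewrite the path to the Angular entry point and run the pipeline again.
    if (context.Response.StatusCode == 404 &&
        !Path.HasExtension(context.Request.Path.Value) &&
        !context.Request.Path.Value.StartsWith("/api", StringComparison.OrdinalIgnoreCase))
    {
        context.Request.Path = "/index.html";
        await next();
    }
});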

To prevent the app from loading the api/values endpoint by default, go to properties/launchSettings.json and change the “launchUrl” keys from “api/values” to an empty string.

At this point, our .NET Core app should be ready to go. Let’s move on to the Angular code.

Setting Up Angular CLI

We’re going to generate our Angular app on top of our ASP.NET Core app. To do this, go to the command line and navigate to the directory of your solution file. Then run “ng new <your app name>”.

You want the <app name> part to be the same as your .NET Core source folder. This will drop all of the CLI files into your ASP.NET Core root folder. It should look like this:

Gluing It All Together

Angular has a default file structure, but it’s not ideal for the existing ASP.NET Core application. Fortunately, Angular CLI has some options that will allow us to change this structure. We want the client side code to be in a folder called “client-src” and the client side build artifacts to go to the wwwroot folder. To do this, rename the “src” folder to “client-src”. Then go to .angular-cli.json. This is the primary configuration file for Angular CLI. First, change “src” to “client-src”. Then change the “outDir” attribute from “dist” to “wwwroot”. This will drop all of the compiled assets into the wwwroot folder.

At this point, we can build the Angular application using Angular CLI. Navigate a command prompt into the main application folder and run “ng build”. This command will build the client side part of the application, dropping the build artifacts into wwwroot. The wwwroot folder should look like this:

  

At this point we should be able to run the app by hitting the Run button in Visual Studio. Unfortunately, we still have a two stage build pipeline: first running “ng build” to generate the client side files, then running the .NET Core app. To fix this, we can drop in a post-build script: “ng build --aot”. This will compile the client side files (with ahead-of-time compilation) after the app builds.

Bonus Points

If you’re using git, you’ll want to add the wwwroot folder to your .gitignore file. These files are generated, so you probably don’t want to check them in.

Example Code

All of the demo code used in this post is available here:

https://github.com/DustinEwers/shiny-angular-demos/tree/master/ninjas-quest-cli-core/ninjas-quest

This demo includes a full Angular application on top of ASP.NET Core. Feel free to use this as a template or something to look at while building your own. You now have everything you need to get started building shiny applications using Angular CLI and ASP.NET Core.

Book Review – Crucial Conversations

Ever find yourself in high stakes situations where even the slightest miscommunication can bring everything crashing into the ground? If so, Crucial Conversations has you covered. Crucial Conversations is a book about how to better navigate high stakes conversations. Unlike most business books, Crucial Conversations is packed with actionable information.

Why bother?

Why is this an important skill for developers? People think of software development as a process where people in a windowless basement turn pizza and caffeine into software. The reality is that software development is more about communication than technology. We build software in teams. We build software for people. We need to figure out what those people want. We need to be able to have honest conversations when things don’t go as planned (which is always). Creating a free flowing dialog is absolutely essential to creating valuable software.

Beyond building software, developers who want a lucrative career find themselves in high stakes negotiations. These include project scope discussions, salary negotiations, and job role discussions. Learning how to navigate these situations can add thousands of dollars to your lifetime earnings. Not bad for a $10 book and a few hours of reading time.

Main Ideas

Everyone has high stakes conversations. These include high pressure negotiations, impassioned arguments, and delicate interventions. Crucial conversations come in many different flavors. What links them together is that the results of these sorts of conversations have an out-sized impact on your life. Screw up one of these and you could be feeling the pain for years to come.

The key to navigating crucial conversations is to keep a free flowing dialog between the participants. To create free flowing dialog, maintain psychological safety. The primary goal of someone in a crucial conversation is to create and maintain a psychologically safe space where both parties can express themselves without fear of anger or retribution. If everyone can get everything onto the table, you can usually figure out the correct path.

To cultivate psychological safety, you need to control your own emotions. Many people cast their own stories into “victim and villain” narratives. Playing the victim causes other people to get defensive. This defensiveness erodes psychological safety. Without psychological safety, people retreat to “silence or violence”. They either shut down or defend themselves with hostility. Usually emotional and verbal hostility, but sometimes physical hostility. Responding to a conversation with silence or aggression is “the fool’s choice”. Avoid the fool’s choice at all costs.

The book describes many techniques to maintain dialog. I’m not going to list them all out here, but a few include:

Shared Purpose
People generally have some shared goal in the conversation. Reminding people of that goal can inspire mutual cooperation.

Contrast and Clarification
Use contrast to clarify what you want. Prevent misinterpretation. Everyone has a plethora of cognitive biases. It’s easy to misinterpret wants and needs in high pressure situations. Contrast what you actually want with what people think you want.

“Start with Heart”
Figure out what you actually want from a situation and take your ego out of the equation.

Related Concepts

Radical Candor
Radical candor is where you are willing to challenge people directly, but with a high degree of empathy. It’s the useful alternative to being a wimp or an asshole.

Find out more about it here: https://www.radicalcandor.com/about-radical-candor/

Cognitive Distortions
People have a variety of intellectual distortions. These are also referred to as cognitive biases. Watch out for cognitive distortions in yourself and others. There are dozens of these, but Psychology Tools has put together a handy chart detailing some of the major ones:

Web site – http://psychologytools.com/unhelpful-thinking-styles.html
PDF – http://media.psychologytools.com/worksheets/english_us/unhelpful_thinking_styles_en-us.pdf

Ego is the Enemy
A big part of being a better negotiator is learning how to disarm your ego. Lots of people forget their mutual goal and try to “win” an argument. This is usually a waste of time. Focus more on your goal and less on yourself. Ryan Holiday has a fantastic book about this.

Ego is the Enemy (Amazon)

Conclusion

Being able to successfully navigate tough conversations is an essential developer skill. Crucial Conversations has a variety of techniques to better navigate high stakes conversations. For the sake of yourself and everyone who has to work with you, work on your communication skills.

Crucial Conversations (Amazon)
