Merge conflicts in csproj files

In a recent version of GitHub for Windows, we made a quiet change that had a subtle effect you might have noticed. We changed the default merge strategy for *.csproj and similar files. If you make changes to a .csproj file in a branch and then merge it to another branch, you'll probably run into more merge conflicts now than before.

Why?

Well, it used to be that we would do a union merge for *.csproj files. The git merge-file documentation describes the union option like so:

Instead of leaving conflicts in the file, resolve conflicts favouring our (or their or both) side of the lines.

For those who don't speak Commonwealth English, "favouring" is a common British misspelling of the one true spelling, "favoring". :trollface:

So when a conflict occurs, Git tries to resolve it by accepting both sets of changes, more or less. It's basically a cop-out.

If you really want this behavior for your repository, you can set this strategy in a .gitattributes file like so:

*.csproj  merge=union

But let me show you why you probably don't want to do that, and why we ended up changing it.

Union Merges Gone Wild

Suppose we start with the following simplified foo.csproj file in our master branch along with that .gitattributes file:

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

After creating that file, let's make sure we commit it.

git init .
git add -A
git commit -m "Initial commit of .gitattributes and foo.csproj"

We then create a branch (git checkout -b branch) creatively named "branch" and insert the following snippet into foo.csproj in between the AAA.cs and DDD.cs elements.

    <Page Include="BBB.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>

For those who lack imagination, here's the result that we'll commit to this branch.

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="BBB.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

Don't forget to commit this if you're following along.

git commit -am "Add BBB.cs element"

Ok, so let's switch back to our master branch.

git checkout master

And then insert the following snippet into the same location.

    <Page Include="CCC.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>

The result now in master is this:

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="CCC.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

Ok, commit that.

git commit -am "Add CCC.cs element"

Still with me?

Ok, now let's merge our branch into our master branch.

git merge branch

Here's the end result with the union merge.

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="CCC.cs">
    <Page Include="BBB.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

Eww, that did not turn out well. Notice that "BBB.cs" is nested inside of "CCC.cs" and we don't have enough closing </Page> tags. That's pretty awful.

Without that .gitattributes file in place and using the standard merge strategy, the last merge command would result in a merge conflict which forces you to fix it. In our minds, this is better than a quiet failure that leaves your project in this weird state. Here's what that conflict looks like:

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
<<<<<<< HEAD
    <Page Include="CCC.cs">
=======
    <Page Include="BBB.cs">
>>>>>>> branch
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

Obviously, in some idyllic parallel universe, git would merge the full CCC element after the BBB element without fudging it up and without bothering us with these pesky merge conflicts. We don't live in that universe, but maybe ours could become more like that one. I hear it's cool over there.

What's this gotta do with Visual Studio?

I recently asked folks on Twitter to vote up this User Voice issue asking the Visual Studio team to support file patterns in project files. Wildcards in .csproj files are already supported by MSBuild, but Visual Studio doesn't deal with them very well.

One of the big reasons to do this is to ease the pain of merge conflicts. If I could wildcard a directory, I wouldn't need to add an entry to the .csproj every time I add a file.

Another way would be to write a proper XML merge driver for Git, but that's quite a challenge, as my co-worker Markus Olsson can attest. If it were easy, or even moderately hard, it would have been done already. Though I wonder: if we limited it to common .csproj scenarios, could we write one that isn't perfect, but good enough to handle the common merge conflicts? Perhaps.
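
For what it's worth, wiring up such a driver is the easy part; writing the tool behind it is the hard part. Roughly, you declare the driver in your git config and point the file pattern at it in .gitattributes. The csproj-merge command here is hypothetical:

# .git/config (or ~/.gitconfig)
[merge "csproj"]
    name = XML-aware merge driver for project files
    driver = csproj-merge %O %A %B

# .gitattributes
*.csproj merge=csproj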

Even if we did this, the merge driver only solves the problem for one version control system, though arguably the only one that really matters. :trollface:

It's been suggested that if Visual Studio sorted its elements, that would help mitigate the problem. It does reduce the incidental conflicts caused by Visual Studio's apparently non-deterministic ordering of elements, but it doesn't make merge conflicts go away. In the example I presented, every element remained sorted throughout. Any time two different branches add files that would end up adjacent, you run the risk of this conflict, and that happens quite frequently.

Wildcard support would make this problem almost completely go away. Note, I said almost. There would still be the occasional conflict in the file, but they'd be very rare.
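
For reference, here's roughly what wildcard support looks like at the MSBuild level today: a single glob replaces all of the per-file entries that cause these conflicts. This is just a sketch of the item group, not something Visual Studio will currently maintain for you.

<ItemGroup>
  <!-- Compile every .cs file under the project directory -->
  <Compile Include="**\*.cs" />
</ItemGroup>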

A less terrible .NET project build with NuGet

According to Maarten Balliauw, Building .NET projects is a world of pain. He should know: he is a co-founder of MyGet.org, which provides private NuGet feeds along with build services for those packages.

He's also a co-author of the Pro NuGet book, though I might argue he's most famous for his contribution to Let Me Bing That For You.

His post gives voice to a frustration I've long had. For example, if you want to build a project library that targets Windows 8 RT, you have to install Visual Studio on your build machine. That's just silly fries! (By the way, if you have a solution that doesn't require Visual Studio, I'd love to hear it!)

Maarten doesn't just rant about this situation, he proposes a solution (emphasis mine):

I do not think we can solve this quickly and change history. But I do think from now on we have to start building SDK’s differently. Most projects only require an MSBuild .targets file and some assemblies, either containing MSBuild tasks or reference assemblies, to do their compilation work. What if… we shipped the minimum files required to successfully build a project as NuGet packages?

This philosophy aligns well with my personal philosophy on self-contained builds and was a key design goal with NuGet. One of the guiding principles I wrote about when we first announced NuGet:

Works with your source code. This is an important principle which serves to meet two goals: The changes that NuGet makes can be committed to source control and the changes that NuGet makes can be x-copy deployed. This allows you to install a set of packages and commit the changes so that when your co-worker gets latest, her development environment is in the same state as yours. This is why NuGet packages do not install assemblies into the GAC as that would make it difficult to meet these two goals. NuGet doesn’t touch anything outside of your solution folder. It doesn’t install programs onto your computer. It doesn’t install extensions into Visual studio. It leaves those tasks to other package managers such as the Visual Studio Extension manager and the Web Platform Installer.

There's a caveat that NuGet does store packages in a machine-specific location outside of the solution, but that's an optimization. The point is, a developer should ideally be able to check out your code from GitHub or some other source hosting service and build the solution. Bam! Done! If there are many more steps than that, it's a pain to contribute.
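
In concrete terms, the goal is that getting to a working build looks something like this (the repository URL and solution name are placeholders; nuget restore is the NuGet 2.7+ command-line restore):

git clone https://github.com/example/some-project.git
cd some-project
nuget restore SomeProject.sln
msbuild SomeProject.sln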

Fortunately, there are some great features in NuGet that can help package authors reach this goal!

Import MSBuild targets and props files into project

NuGet 2.5 introduces the ability to import MSBuild targets and props files into a project. As more projects take advantage of this feature, we'll hopefully see the demise of MSIs being required just to work on a project. As Maarten points out, MSIs (or Visual Studio extensions) are still valuable for adding extra tooling. But they shouldn't be required in order to build a project.
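
The convention, roughly, is that a package ships a .targets (and/or .props) file named after the package in its build folder, and NuGet wires up the corresponding Import in the consuming project when the package is installed. Here's a minimal sketch of such a nuspec; MyTool is a made-up package id:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyTool</id>
    <version>1.0.0</version>
    <authors>Example</authors>
    <description>Build-time tooling delivered as a NuGet package.</description>
  </metadata>
  <files>
    <!-- build\MyTool.targets gets imported into projects that install the package -->
    <file src="build\MyTool.targets" target="build" />
    <file src="tools\MyTool.exe" target="tools" />
  </files>
</package>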

Development only dependencies

In tandem with importing MSBuild targets, NuGet 2.7 adds the ability to specify development-only dependencies.

This feature was contributed by Adam Ralph and allows package authors to declare that a package is only used at development time and should not become a dependency of the package you produce. By adding a developmentDependency="true" attribute to a package in packages.config, nuget.exe pack will no longer include that package as a dependency.

These are packages that do not get deployed with your application. These packages might include MSBuild targets, code contract assemblies, or source code only packages.

You can see an example of this in use with Octokit.net in its packages.config.

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="DocPlagiarizer" version="0.1.1" targetFramework="net45" developmentDependency="true" />
  <package id="SimpleJson" version="0.34.0" targetFramework="net45" developmentDependency="true" />
</packages>

My recommendation to package authors is to consider a separate *.Runtime package that contains just the assemblies that need to be deployed, and a main package that depends on it and brings in all the build-time dependencies such as MSBuild targets and whatnot. It keeps a nice separation and works well for non-Visual Studio NuGet consumers such as WebMatrix, ASP.NET Web Pages, Xamarin, etc.

Related dependencies feature

At the end of his post, Maarten notes that there is good progress towards build sanity.

P.S.: A lot of the new packages like ASP.NET MVC and WebApi, the OData packages and such are being shipped as NuGet packages which is awesome. The ones that I am missing are those that require additional build targets that are typically shipped in SDK's. Examples are the Windows Azure SDK, database tools and targets, ... I would like those to come aboard the NuGet train and ship their Visual Studio tooling separately from the artifacts required to run a build.

This reminds me of a feature proposal I wrote a draft specification for a long time ago called Related Dependencies. You can tell it's old because it refers to the old name for NuGet.

These are basically "optional" dependencies that can bring in tooling from other package managers, such as the Visual Studio Extensions gallery. In the spec I mentioned "prompting," but the goal would be a non-obtrusive way for packages to highlight other tooling related to the package dependency and make it easy for developers to install all of them.

In my mind, this would be similar to how you are notified of updates in the Visual Studio Extension Manager (now called "Extensions and Updates" dialog). Perhaps there's another tab that lets you see extensions related to the packages installed in your solution and an easy way to install them all.

But these would have to be optional. You should be able to build the solution without them. Installing them just makes the development experience a bit better.

AutoMapper 3.2.0 released

Full release notes on the GitHub site

Big features/improvements:

  • LINQ queryable extensions greatly improved
    • ICollection supported
    • MaxDepth supported
    • Custom MapFrom expressions supported (including aggregations)
    • Inherited mapping configuration applied
    • Windows Universal Apps supported
  • Fixed NuGet package to not have DLL in project
  • iOS confirmed to work
  • ReverseMap ignores both directions (only one Ignore() or IgnoreMap attribute needed)
  • Pre conditions on member mappings (called before resolving anything)
  • Exposing ResolutionContext everywhere, including current mapping engine instance

A lot of small improvements, too. I’ve ensured that every new extension to the public API includes code documentation. The toughest part of this release was coming up with a good solution to the multi-platform support and MSBuild’s refusal to copy indirect references to all projects.

As always, if you find any issues with this release, please report over on GitHub.

Enjoy!


Working hard and enjoying every minute of it.

I have not blogged in almost a year; I am a total slacker. But I really want to share what I have been doing and what my team and I have learned, so in the coming months I will be getting into painstaking detail about some concepts and implementations that I think have really helped my team deliver value.

 

Where am I?

About a year ago I left my role as Chief Architect for the largest .NET ecommerce site, www.dell.com. I found that I was spending more time teaching the fundamentals to teams and management, when I really wanted to spend my time moving quickly and getting things done. So I left for a startup: QuarterSpot. My role there is CTO, and I am responsible for all of the technology decisions, which is great, because if something is not working, I am accountable and empowered to change it. QuarterSpot is a peer-to-peer financial company that specializes in lending money to small businesses. I feel great about our mission, which is to help small businesses get money when banks will not lend, or when the process is so time consuming that by the time they get approved, the small business has lost the opportunity it needed the money for. (QuarterSpot CEO on Small Business Lending Panel at LendIt Conference)

 

What am I doing?

My team is responsible for building all of the technology to enable our business. Since the peer-to-peer space is a newer business model, we need to move fast and innovate, which is what the promise of Agile was all about. Since we are in the financial space, quality is of the highest importance, and this is where my experience with extreme programming (XP) practices really pays off. Mix this together with Continuous Delivery and we have all the components to deliver software at a rapid pace in a business that needs to rely on technology innovation to stay ahead of its competition.

We are building the websites and backend systems to process and service loans, utilizing machine learning to analyze our customers so we can discover better algorithms to serve the business. We are able to use whatever tools make the most sense for us to move quickly, and it is so much fun to deploy code to production on a frequent basis.

 

We push code to production frequently, which means I am usually exhausted after a full day of work. It is very rewarding, but it takes a lot of mental energy to stay diligent about quality and make sure each feature is truly complete.

 

Topics that I will be covering in upcoming posts

  • What is continuous delivery and how is it different from continuous deployment?
  • The importance of keeping code out of your UI / Web frameworks.
  • Using the Command Query Separation pattern.
  • Transparency in your development and production support process, utilizing dashboards.
  • Utilizing cloud infrastructure to move quickly.
  • Automate everything.
  • How my preferred development stack has changed since 2009.
  • Importance of a consistent architecture / application implementation.
  • Keeping your architectural concept count low.
  • Optimizing performance when it matters and not before.
  • Machine Learning and statically typed models.

 

If any of these topics are interesting to you, let me know in the comments and I will get to those posts first.


Using AutoMapper to perform LINQ aggregations

In the last post I showed how AutoMapper and its LINQ projection can prevent SELECT N+1 problems and other lazy loading problems. That was pretty cool, but wait, there’s more! What about complex aggregation? LINQ can support all sorts of interesting queries that, when done in memory, could result in really inefficient code.

Let’s start small, what if in our model of courses and instructors, we wanted to display the number of courses an instructor teaches and the number of students in a class. This is easy to do in the view:

@foreach (var item in Model.Instructors)
{
    <tr>
        <td>
            @item.Courses.Count()
        </td>
    </tr>
}
<!-- later down -->
@foreach (var item in Model.Courses)
{
    <tr class="@selectedRow">
        <td>
            @item.Enrollments.Count()
        </td>
    </tr>
}

But at runtime this will result in another SELECT for each row to count the items:

[SQL Profiler screenshot: a separate COUNT query is issued for each row]

We could eagerly fetch those rows ahead of time, but that is still less efficient than just performing a correlated SQL subquery to calculate that count. With AutoMapper, we can just declare this property on our ViewModel class:

public class CourseModel
{
    public int CourseID { get; set; }
    public string Title { get; set; }
    public string DepartmentName { get; set; }
    public int EnrollmentsCount { get; set; }
}

AutoMapper recognizes extension methods and automatically looks for the System.Linq extension methods, so a destination member named EnrollmentsCount maps to Enrollments.Count(). The underlying expression created looks something like this:

courses =
    from i in db.Instructors
    from c in i.Courses
    where i.ID == id
    select new InstructorIndexData.CourseModel
    {
        CourseID = c.CourseID,
        DepartmentName = c.Department.Name,
        Title = c.Title,
        EnrollmentsCount = c.Enrollments.Count()
    };

LINQ providers can recognize that aggregation and use it to alter the underlying query. Here’s what that looks like in SQL Profiler:

SELECT 
    [Project1].[CourseID] AS [CourseID], 
    [Project1].[Title] AS [Title], 
    [Project1].[Name] AS [Name], 
    (SELECT 
        COUNT(1) AS [A1]
        FROM [dbo].[Enrollment] AS [Extent5]
        WHERE [Project1].[CourseID] = [Extent5].[CourseID]) AS [C1]
    FROM --etc etc etc

That’s pretty cool, just create the property with the right name on your view model and you’ll get an optimized query built for doing an aggregation.

But wait, there’s more! What about more complex operations? It turns out that we can do whatever we like in MapFrom as long as the query provider supports it.

Complex aggregations

Let’s do something more complex. How about counting the number of students whose name starts with the letter “A”? First, let’s create a property on our view model to hold this information:

public class CourseModel
{
    public int CourseID { get; set; }
    public string Title { get; set; }
    public string DepartmentName { get; set; }
    public int EnrollmentsCount { get; set; }
    public int EnrollmentsStartingWithA { get; set; }
}

Because AutoMapper can’t infer what the heck this property means, since there’s no match on the source type even including extension methods, we’ll need to create a custom mapping projection using MapFrom:

cfg.CreateMap<Course, InstructorIndexData.CourseModel>()
    .ForMember(m => m.EnrollmentsStartingWithA, opt => opt.MapFrom(
        c => c.Enrollments.Where(e => e.Student.LastName.StartsWith("A")).Count()
    )
);

At this point, I need to make sure I pick the overloads of the aggregation methods that my LINQ query provider supports. There's an overload of Count() that takes a predicate to filter items, but it isn't supported here, so instead I chain a Where and then Count(). The generated SQL is still efficient:

SELECT 
    [Project2].[CourseID] AS [CourseID], 
    [Project2].[Title] AS [Title], 
    [Project2].[Name] AS [Name], 
    [Project2].[C1] AS [C1], 
    (SELECT 
        COUNT(1) AS [A1]
        FROM  [dbo].[Enrollment] AS [Extent6]
        INNER JOIN [dbo].[Person] AS [Extent7]
            ON ([Extent7].[Discriminator] = N'Student')
            AND ([Extent6].[StudentID] = [Extent7].[ID])
        WHERE ([Project2].[CourseID] = [Extent6].[CourseID])
            AND ([Extent7].[LastName] LIKE N'A%')) AS [C2]

This is a lot easier than pulling back all the students and looping through them in memory. I can go pretty crazy here, but as long as those query operators are supported by your LINQ provider, AutoMapper will pass your MapFrom expression through to the final Select expression it outputs. Here's the equivalent Select LINQ projection for the above example:

courses =
    from i in db.Instructors
    from c in i.Courses
    where i.ID == id
    select new InstructorIndexData.CourseModel
    {
        CourseID = c.CourseID,
        DepartmentName = c.Department.Name,
        Title = c.Title,
        EnrollmentsCount = c.Enrollments.Count(),
        EnrollmentsStartingWithA = c.Enrollments
            .Where(e => e.Student.LastName.StartsWith("A")).Count()
    };

As long as you can LINQ it, AutoMapper can build it. This combined with preventing lazy loading problems is a compelling reason to go the view model/AutoMapper route, since we can rely on the power of our underlying LINQ provider to build out the correct, efficient SQL query better than we can. That, I think, is wicked awesome.


Using AutoMapper to prevent SELECT N+1 problems

Back in my post about efficient querying with AutoMapper, LINQ and future queries, one piece I glossed over was how View Models and LINQ projection can prevent SELECT N+1 problems. In the original controller action, I had code like this:

public ActionResult Index(int? id, int? courseID)
{
    var viewModel = new InstructorIndexData();
 
    viewModel.Instructors = db.Instructors
        .Include(i => i.OfficeAssignment)
        .Include(i => i.Courses.Select(c => c.Department))
        .OrderBy(i => i.LastName);
 
    if (id != null)
    {
        ViewBag.InstructorID = id.Value;
        viewModel.Courses = viewModel.Instructors.Where(
            i => i.ID == id.Value).Single().Courses;
    }
 
    if (courseID != null)
    {
        ViewBag.CourseID = courseID.Value;
        viewModel.Enrollments = viewModel.Courses.Where(
            x => x.CourseID == courseID).Single().Enrollments;
    }
 
    return View(viewModel);
}

See that “Include” part? That’s because the view shows information from navigation and collection properties on my Instructor model:

public class Instructor : Person
{
    [DataType(DataType.Date)]
    [DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}", ApplyFormatInEditMode = true)]
    [Display(Name = "Hire Date")]
    public DateTime HireDate { get; set; }

    public virtual ICollection<CourseInstructor> Courses { get; set; }
    public virtual OfficeAssignment OfficeAssignment { get; set; }
}

public abstract class Person
{
    public int ID { get; set; }

    [Required]
    [StringLength(50)]
    [Display(Name = "Last Name")]
    public string LastName { get; set; }
    [Required]
    [StringLength(50, ErrorMessage = "First name cannot be longer than 50 characters.")]
    [Column("FirstName")]
    [Display(Name = "First Name")]
    public string FirstMidName { get; set; }

    [Display(Name = "Full Name")]
    public string FullName
    {
        get
        {
            return LastName + ", " + FirstMidName;
        }
    }
}

If I just use properties on the Instructor/Person table, only one query is needed. However, if my view happens to use other information on different tables, additional queries are needed. If I’m looping through a collection association, I could potentially have a query issued for each loop iteration. Probably not what was expected!

ORMs let us address this by eagerly fetching associations via JOINs. In EF this can be done via the "Include" method on a LINQ query. In NHibernate, it can be done via Fetch (depending on the query API you use). This addresses the symptom, but it is not a good long-term solution.

Because our domain model exposes all data available, it’s easy to just show extra information on a view without batting an eye. However, unless we keep a database profiler open at all times, it’s not obvious to me as a developer that a given association will result in a new query. This is where AutoMapper’s LINQ projections come into play. First, we have a View Model that contains only the data we wish to show on the screen, and nothing more:

public class InstructorIndexData
{
    public IEnumerable<InstructorModel> Instructors { get; set; }

    public class InstructorModel
    {
        public int ID { get; set; }

        [Display(Name = "Last Name")]
        public string LastName { get; set; }
            
        [Display(Name = "First Name")]
        public string FirstMidName { get; set; }

        [DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}", ApplyFormatInEditMode = true)]
        [Display(Name = "Hire Date")]
        public DateTime HireDate { get; set; }

        public string OfficeAssignmentLocation { get; set; }

        public IEnumerable<InstructorCourseModel> Courses { get; set; } 
    }

    public class InstructorCourseModel
    {
        public int CourseID { get; set; }
        public string Title { get; set; }
    }
}

At this point, if we used AutoMapper's normal Map method, we could still potentially have SELECT N+1 problems. Instead, we'll use the LINQ projection capabilities of AutoMapper:

var viewModel = new InstructorIndexData();

viewModel.Instructors = db.Instructors
    .OrderBy(i => i.LastName)
    .Project().To<InstructorIndexData.InstructorModel>();

This results in exactly one query to fetch all Instructor information, using LEFT JOINs to pull in the various associations. So how does this work? The LINQ projection is quite simple – it merely looks at the destination type to build out the Select portion of the query. Here's the equivalent LINQ query:

from i in db.Instructors
orderby i.LastName
select new InstructorIndexData.InstructorModel
{
    ID = i.ID,
    FirstMidName = i.FirstMidName,
    LastName = i.LastName,
    HireDate = i.HireDate,
    OfficeAssignmentLocation = i.OfficeAssignment.Location,
    Courses = i.Courses.Select(c => new InstructorIndexData.InstructorCourseModel
    {
        CourseID = c.CourseID,
        Title = c.Title
    }).ToList()
};

Since Entity Framework recognizes our SELECT projection and can automatically build the JOINs based on the data we include, we don’t have to do anything to Include any navigation or collection properties in our SQL query – they’re automatically included!

With AutoMapper’s LINQ projection capabilities, we eliminate any possibility of lazy loading or SELECT N+1 problems in the future. That, I think, is awesome.


GitHub Secrets Talk

If you happen to be on Oahu next week (lucky you!), come see my talk on GitHub Secrets at the University of Hawaii on Wednesday, April 9, 2014 at 5:30 PM (lucky me!). Did I mention good food will be served?!

What am I speaking about? Well I asked a few dear friends of mine what questions they would want answered in a talk by me and this is what they came up with.

  • "What's the software industry like?" Great question, DJ Pauly D! (Photo by Eva Rinaldi, license CC BY-SA 2.0)
  • "What's the secret to success as a developer?" I have a few ideas, Mr. Bill Gates! (Photo by World Economic Forum, license CC BY-SA 2.0)
  • "What's the secret to GitHub's success?" Well, it's a combination of factors, Ms. Marissa Mayer. (Photo by Michael Tippet, license CC BY-SA 2.0)
  • "Tell me GitHub.com secrets for great success." You got it, Mr. Mark Zuckerberg! (Photo by Jason McElWeenie, license CC BY 2.0)

It's really an opportunity to talk to developers and students about topics that are near and dear to my heart.

Share your knowledge when you travel

This is the second time I'm giving a talk while on vacation in Hawaii. The first time was a couple of years ago. When I went back home to Alaska, I also gave a talk there.

I've found that places that are outside of the usual tech-hubs tend to be very welcoming to outside speakers. It can be hard to maintain a software user group when you don't have a large pool of speakers to draw from as you would in Seattle or San Francisco.

So if you find yourself on vacation somewhere like Alaska, Hawaii, or elsewhere, you should consider getting in touch with a local user group if you have something interesting to share. It may be that your fresh perspective is exactly what they'd like to see.

But here's a pro tip. Giving a talk while on vacation does introduce an element of stress that you're probably going on vacation to avoid in the first place. I advise trying to schedule the talk near the beginning of your vacation. The amazing feeling of relief after giving a talk will help you relax the rest of the trip.

Sadly, I did not think of this until after I scheduled this talk. But I think I've prepared enough in advance that I'll be able to relax. After all, I'll be in Hawaii. How can I complain?

Blogging while Broken

I'm going through a bit of a funk with work and writing. They seem somewhat intertwined. Writing this blog has been such an important outlet for me that it's rough when I can't seem to muster the energy to just keep writing.

So what do you do when you have blogger's block? You blog about blogging of course! This isn't the first time I've done it.

Looking at this list, which isn't even comprehensive, I now realize I have a bit of a blogging problem. I mean, just look at that last entry in the list. That has to be when blogging about blogging jumped the shark. Then again, after Fonzie jumped the shark, Happy Days continued on as a top television show for six more seasons. Jumping the shark doesn't necessarily lead to decline.

But I digress.

These days, I have a new tool for fighting blogger's block. On Twitter today, I asked people to do me a favor. Now that my blog is hosted on GitHub.com using Jekyll, there's an associated repository with issues!

So I asked folks to log an issue with a topic you'd like me to write about. In other words, for some crazy reason, you think I might have something interesting to write about this topic.

A big challenge for me is each blog post (well, every one except this one) takes me a lot of time and effort to write. So I end up looking at that effort and decide to watch Game of Thrones or Archer instead.

Sometimes though, an idea will just grab me by the neck and not let go until I absolutely have to write about it. I can't promise I'll address every idea posted in my blog's issues. I might not even do any. But my hope is one of the posted issues will grab me hard and toss me in front of the keyboard until all the words spill out.

One topic I plan to write about is how I've been using GitHub repositories and issues to manage many aspects of my life apart from software lately. I also want to make sure I get back to my roots and blog more about code. But I am curious to hear what you're interested in. Thanks!

Empathy In Your Best Interest

If I had to pick only one trait I hope to instill in my children, it's empathy. It's on my mind because of this beautiful post by Reg Braythwayt.

Empathy is not seeing the world with your eyes from where someone else is standing, it’s seeing the world with their eyes, from their perspective, coloured with their hopes and fears, their life experience.

Empathy is putting yourself in someone else’s shoes and then overcoming your own thoughts of what you would do in their shoes and imagining what it feels like to be them in their shoes.

You'll note the unnecessary "u" in "coloured". Reg is Canadian, but don't let that stop you from reading the whole post. It's brief but wonderful.

In fact, it's so good, part of the point of my blog post is to draw attention to his post. Especially the iconic image in his post that is a powerful illustration of real empathy.

But this reminds me of a scene from an early-2000s television sitcom known for exploring the dark recesses of human psychology, Malcolm in the Middle. In an episode entitled Reese Cooks, Reese, an older brother to Malcolm, exhibits mild psychopathic tendencies. In an effort to show him more attention, ~~Heisenberg~~ Hal, his dad, signs him up for a cooking class.

[Image: Malcolm in the Middle episode "Reese Cooks" - Heisenberg in his previous marriage]

Reese discovers he has a natural talent for cooking and really takes to the class. The parents are amazed at his transformation until a cooking contest, where he ends up sabotaging the other contestants' dishes because "It was fun!"

His mom, Lois, and dad then attempt to teach him about empathy.

Lois:
How would you feel if you were that poor woman whose quiche you salted?
Reese:
…Fat?
Hal:
Reese, do you know what empathy is?
Reese:
No.
Hal:
Well, empathy is putting yourself in other people's shoes so you can feel what they do. If you hurt someone, empathy makes you hurt as well.
Reese:
Then why would you want empathy?

Why would you want it indeed? It sounds kind of, well, painful. Why would anyone want to be empathetic? How do you explain the benefit to someone who's not inclined to be empathetic? How do you explain it to someone who seems to look out only for himself or herself?

It's in your own best interest to be empathetic.

I don't mean this in some vague karmic fashion, but in a concrete sense.

It makes for better relationships with others.

It's hard to carry on meaningful relationships with others when you constantly misunderstand the intentions and motivations of those around you. This applies to your friends, family, and work relationships. You can imagine that being around someone who constantly misinterprets your intentions would lead to unnecessary conflict.

Empathy helps people better understand the mindset of those around them. This helps people address the real issues rather than talking past each other or working at cross purposes.

It helps you make better choices for your own well being.

Everyone views the world through a lens of their own experience. In effect, our own biases feed us misinformation, which affects our ability to make decisions. Empathy helps one see the truth of a situation and act accordingly.

Too often, people spend much of their time engaging in behavior that is ultimately not in their long term self interest for an apparent short term gain. Sometimes it's obvious. It might feel real good to smoke that cigarette, but in the long term you know you'd be better off quitting.

Sometimes it's more subtle. For example, when a marginalized person speaks out against some abuse they've faced, it seems inevitable that there's a strong backlash from people who, although they are not involved in that particular incident in any way, feel a sense of being attacked.

I ascribe this to a lack of empathy. People jump to a conclusion that ascribes the worst motives and demonize others who don't share the same worldview.

Empathy makes you realize that everybody has their struggles in life and is just trying to get by. People spend their time concerned about their own well-being, not on negatively affecting yours. As a friend once told me, we're all just squirrels trying to get a nut in this world.

Spending a lot of time demonizing others who don't conform to your worldview leads to a pretty unhealthy existence. This isn't to say that you must agree with everyone, but rather that you recognize the lives of others are not so black and white, much as yours isn't.

It makes you a more effective person

All too often I see leaders who flip the bozo bit on an employee, or color their experiences through their own lens. This makes the leader extremely ineffective at motivating people to do their best work. It creates an environment where those who don't see things the same way as the leader are demoralized, even though they may be doing great work otherwise.

Likewise, I often see employees flip the bozo bit on a leader because of a lack of empathy for the challenges and pressures of being a leader. This makes the employee ineffective. It's hard to influence decisions when you lack basic empathy for the viewpoint you're arguing against.

Conclusion

Someone truly concerned about their own well-being in the long run would see the benefits of empathy.

This isn't the first time I've written about empathy and won't be the last. You might find my other posts that talk about empathy in various contexts helpful.

Successful IoC container usage

Every now and again I hear the meme that IoC containers are bad: they lead to bad developer practices, they’re too complicated, and on and on. IoC containers – like any sharp tool – can be easily abused. Dependency injection, as a concept, is here to stay. Heck, it’s even in Angular.

Good usage of IoC containers goes hand in hand with good OO design. Dependency injection won’t make your design better, but it can enable better design.

So what do I use IoC containers for? First and foremost, dependency injection. If I have a 3rd-party dependency, I’ll inject it. This enables me to swap implementations or isolate that dependency behind a façade. Additionally, if I want to provide different configurations of that component for different environments, dependency injection allows me to modify that behavior without modifying services using that component.
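
As a concrete, made-up example of what I mean by injecting a third-party dependency and isolating it behind a façade (the PaymentsClient SDK type here is hypothetical):

// A hypothetical third-party SDK client, standing in for whatever library you depend on.
public class ThirdPartyPaymentsClient
{
    public void CreateCharge(string customerId, decimal amount) { /* SDK call */ }
}

// The façade my own code depends on.
public interface IPaymentGateway
{
    void Charge(string customerId, decimal amount);
}

// The only class that knows about the SDK; swapping or reconfiguring the SDK
// doesn't touch the services that consume IPaymentGateway.
public class SdkPaymentGateway : IPaymentGateway
{
    private readonly ThirdPartyPaymentsClient _client;

    public SdkPaymentGateway(ThirdPartyPaymentsClient client)
    {
        _client = client;
    }

    public void Charge(string customerId, decimal amount)
    {
        _client.CreateCharge(customerId, amount);
    }
}

// Services take the abstraction via constructor injection; the container decides
// which implementation (and which configuration) gets used in each environment.
public class CheckoutService
{
    private readonly IPaymentGateway _payments;

    public CheckoutService(IPaymentGateway payments)
    {
        _payments = payments;
    }

    public void Complete(string customerId, decimal total)
    {
        _payments.Charge(customerId, total);
    }
}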

I am, however, very judicious in my facades. I don’t wrap 3rd party libraries, like a Repository does with your DbContext or ISession. If a library needs simplification or unification (Adapter pattern), that’s where wrapping the dependency helps.

I also don’t create deep compositional graphs. I don’t get stricken with service-itis, where every function has to have an IFooService and FooService implementation.

Instead, I focus on capturing concepts in my application. In one I’m looking at, I have concepts for:

  • Queries
  • Commands
  • Validators
  • Notifications
  • Model binders
  • Filters
  • Search providers
  • PDF document generators
  • REST document readers/writers

Each of these has multiple implementers of a common interface, often as a generic interface. These are all examples of the good OO design patterns – the behavioral patterns, including:

  • Chain of responsibility
  • Command
  • Mediator
  • Strategy
  • Visitor

I strive to find concepts in my system and build abstractions around those concepts. The IModelBinderProvider interface, for example, is a chain of responsibility implementation: we have a concept of providing a model binder based on inputs, and each provider decides whether or not to provide one.
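
A rough sketch of that shape, with the names simplified (this is not the actual ASP.NET MVC interface, just the concept):

using System;
using System.Collections.Generic;

// A simplified chain of responsibility: each provider gets a chance to supply
// a binder for a model type, and the first non-null answer wins.
public interface IModelBinder
{
    object Bind(object rawValue);
}

public interface IModelBinderProvider
{
    IModelBinder GetBinder(Type modelType);
}

public class ModelBinderChain
{
    private readonly IEnumerable<IModelBinderProvider> _providers;

    // The container injects every registered provider, in registration order.
    public ModelBinderChain(IEnumerable<IModelBinderProvider> providers)
    {
        _providers = providers;
    }

    public IModelBinder ResolveBinder(Type modelType)
    {
        foreach (var provider in _providers)
        {
            var binder = provider.GetBinder(modelType);
            if (binder != null)
                return binder; // this provider handled it; stop walking the chain
        }

        return null; // no provider volunteered; fall back to default binding
    }
}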

The final usage is around lifecycle/lifetime management. This is much easier if you have a container and ecosystem that provide explicit scoping using child/nested containers. Web API, for example, has an “IDependencyScope” which acts as a composition root for each request. I either have singleton components, composition-root-scoped components (like your DbContext/ISession), or resolve-scoped components (instantiated once per call to Resolve).
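
The post doesn't prescribe a particular container, but to make those three lifetimes concrete, here's an illustrative sketch using Autofac's registration API; the component types are made up:

using Autofac;

// Made-up component types, purely for illustration.
public interface ISettingsCache { }
public class SettingsCache : ISettingsCache { }
public class AppDbContext { }
public class PlaceOrderHandler
{
    public PlaceOrderHandler(AppDbContext db) { }
}

public static class CompositionRoot
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();

        // Singleton: one instance for the lifetime of the container.
        builder.RegisterType<SettingsCache>().As<ISettingsCache>().SingleInstance();

        // Composition-root scoped: one instance per lifetime scope (e.g. per request);
        // this is where DbContext/ISession-style components live.
        builder.RegisterType<AppDbContext>().AsSelf().InstancePerLifetimeScope();

        // Resolve-scoped: a fresh instance per call to Resolve.
        builder.RegisterType<PlaceOrderHandler>().AsSelf().InstancePerDependency();

        return builder.Build();
    }

    public static void HandleRequest(IContainer container)
    {
        // Each request opens its own scope, which acts as the composition root.
        using (var scope = container.BeginLifetimeScope())
        {
            var handler = scope.Resolve<PlaceOrderHandler>();
            // ... use the handler for the duration of the request
        }
    }
}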

Ultimately, successful container usage comes down to proper OO, limiting abstractions, and focusing on concepts. Composition can be achieved in many forms – often supported directly in the language, such as pattern matching or mixins – but no language has it perfect, so being able to rely on dependency injection without a lot of fuss can be extremely powerful.
