Software Design Blog

Simple solutions to solve complex problems

Late binding principle: defer resource binding to improve flow


Donald G. Reinertsen, author of “The Principles of Product Development Flow”, asked the question: should we assign a resource to a task at the start of a project?

Let’s explore this question in the software examples below. Will delaying assignment produce lighter classes, lower memory consumption and less computing waste?

The problem is: how can you delay resource demands during object construction? For example, when using dependency injection (DI), how do you inject dependencies into a class while delaying the construction of those dependencies?

The solution is to use automatic factories to delay resource demands until the resource is actually used.

Download Source Code

Setting the scene

This post will use an illustration of a cloud storage service that relies on an expensive online connection resource. Let’s assume that we cannot modify the constructor behaviour of the resource class. The interfaces and resource implementation are shown below.

  
    public interface IStorageService
    {
        void Save(string path, Stream stream);
        bool HasSufficientSpace(int requiredSizeMb);
    }

    public interface IStorageResource
    {
        void UploadStream(string path, Stream stream);
    }

    public class CloudStorage : IStorageResource
    {
        public CloudStorage()
        {
            Console.WriteLine("Establishing cloud connection...");
            // Simulate expensive initialisation
            System.Threading.Thread.Sleep(5000); 
        }

        public void UploadStream(string path, Stream stream)
        {
            // your code here
        }
    }

    public class CloudStorageService : IStorageService
    {
        private readonly IStorageResource _resource;

        public CloudStorageService(IStorageResource resource)
        {
            if (resource == null) throw new ArgumentNullException("resource");
            _resource = resource;
        }

        public void Save(string path, Stream stream)
        {
            Console.WriteLine("Called Save");
            _resource.UploadStream(path, stream);
        }

        public bool HasSufficientSpace(int requiredSizeMb)
        {
            Console.WriteLine("Called HasSufficientSpace");
            return true; // No need to check, the cloud service has unlimited space
        }
    }

Evaluating the results

Let’s run the code and observe the output.

  
            var container = new UnityContainer();
            container.RegisterType<IStorageService, CloudStorageService>();
            container.RegisterType<IStorageResource, CloudStorage>();
            var storageService = container.Resolve<IStorageService>();

            if (storageService.HasSufficientSpace(100))
            {
                using (var fileStream = System.IO.File.OpenRead(@"C:\Temp\File.txt"))
                {
                    storageService.Save(@"Files\File01.txt", fileStream);    
                } 
            }
Establishing cloud connection...
Called HasSufficientSpace
Called Save

The CloudStorageService class is poorly implemented because:

  • The resource dependency is demanded in the constructor, causing a significant start-up delay. By default, Windows will refuse to start a Windows service that takes longer than 30 seconds to initialise.
  • System memory and computing cycles are wasted since the resource may never be used.

Late binding with an Auto Factory

We are only going to change the CloudStorageService class. Everything else will remain the same, which minimises the risk of potential regression.

The auto factory version is shown below.

  

    public class CloudStorageServiceAutoFactory : IStorageService
    {
        private readonly Lazy<IStorageResource> _resource;

        public CloudStorageServiceAutoFactory(Func<IStorageResource> resource)
        {
            if (resource == null) throw new ArgumentNullException("resource");
            _resource = new Lazy<IStorageResource>(resource);
        }

        private IStorageResource Resource
        {
            get { return _resource.Value; }
        }

        public void Save(string path, Stream stream)
        {
            Console.WriteLine("Called Save");
            Resource.UploadStream(path, stream);
        }

        public bool HasSufficientSpace(int requiredSizeMb)
        {
            Console.WriteLine("Called HasSufficientSpace");
            return true;  // Cloud service has unlimited space
        }
    }

Let’s call the improved CloudStorageServiceAutoFactory class instead.
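Only the service registration changes; the sketch below assumes Unity's automatic factory (deferred resolution) support, which injects a Func<IStorageResource> that resolves the resource from the container on first invocation.

    var container = new UnityContainer();
    container.RegisterType<IStorageService, CloudStorageServiceAutoFactory>();
    container.RegisterType<IStorageResource, CloudStorage>();
    var storageService = container.Resolve<IStorageService>();

    if (storageService.HasSufficientSpace(100))
    {
        using (var fileStream = System.IO.File.OpenRead(@"C:\Temp\File.txt"))
        {
            storageService.Save(@"Files\File01.txt", fileStream);
        }
    }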

Called HasSufficientSpace
Called Save
Establishing cloud connection...

Why is this a great solution?

Here is an alternative version as covered in a DevTrends post:

 
    public class CloudStorageServiceBad : IStorageService
    {
        private readonly Func<IStorageResource> _factory;
        private IStorageResource _resource;

        public CloudStorageServiceBad(Func<IStorageResource> factory)
        {
            if (factory == null) throw new ArgumentNullException("factory");
            _factory = factory;
        }

        private IStorageResource Resource
        {
            get { return _resource ?? (_resource = _factory()); }
        }
        
        // The Save and HasSufficientSpace methods go here
    }

The code above is bad because:

  • A reference is required to the factory and resource, yet the factory is only used once
  • Performing lazy loading in the Resource property is not thread safe

The solution as shown in the CloudStorageServiceAutoFactory class will pass the factory to Lazy<IStorageResource>, which requires less code and is thread safe. See this post about thread safety.
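As a minimal sketch of what the auto factory relies on: the Lazy<T>(Func<T>) constructor defaults to LazyThreadSafetyMode.ExecutionAndPublication, so only one thread ever executes the factory, even under concurrent access.

    // System.Threading.LazyThreadSafetyMode.ExecutionAndPublication is the default:
    // concurrent callers block until a single thread has created the value,
    // after which every caller observes the same instance.
    var lazyResource = new Lazy<IStorageResource>(
        () => new CloudStorage(),
        LazyThreadSafetyMode.ExecutionAndPublication);

    var resource = lazyResource.Value; // "Establishing cloud connection..." is printed once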

Let’s compare the output of the two solutions by executing the method in multiple threads:

 
            Parallel.Invoke(() => storageService.Save(@"Files\File01.txt", null),
                            () => storageService.Save(@"Files\File01.txt", null));

CloudStorageServiceBad output:

Called Save
Called Save
Establishing cloud connection...
Establishing cloud connection...

CloudStorageServiceAutoFactory output:

Called Save
Called Save
Establishing cloud connection...

Manual Construction

For those who haven’t transitioned to DI yet (I highly recommend that you do), here is how you would wire it up manually:

 
    var service = new CloudStorageServiceAutoFactory(() => new CloudStorage());

Summary

This post illustrated how to improve memory utilisation and to reduce computing waste using automatic factories to delay expensive resource demands.

Delaying the demand of resources until the code paths are actually executed can significantly improve application performance.

How to structure DI registration code cleanly


The previous post introduced dependency injection (DI). This post will provide a practical approach to wire up DI cleanly.

The problem is where do we keep our dependency injection registration bootstrap/wiring code?

  • We don’t want a giant central registration assembly or a monolithic global.asax class
  • We don’t want to bleed DI framework references into our implementation assemblies
  • We don’t want to be locked into a specific DI framework

The solution is to componentise registrations and keep DI frameworks out of our implementation and interface libraries.

Download Source Code

What is Unity?

Unity is a DI container which facilitates building loosely coupled apps. It provides features such as object lifecycle management, registration by convention and instance/type interception.

What is MEF?

The Managed Extensibility Framework (MEF) provides DI container capabilities to build loosely coupled apps. It allows an app to discover dependencies implicitly with no configuration required by declaratively specifying the dependencies (known as imports) and what capabilities (known as exports) that are available.

MEF vs Unity

MEF and Unity provide similar capabilities but they both have strengths and weaknesses.

  • Unity has greater DI capabilities – such as interception
  • Unity is less invasive since MEF requires developers to sprinkle [Import] and [Export] attributes all throughout the code
  • MEF has great discoverability features that are not available in Unity

Setup

The solution layout of the previous DI introduction post is shown below.


All of the bootstrapping code currently lives in the OrderApplication console assembly. The registration can quickly get out of hand in a large app with many components. The registration is also not discoverable, which means we can’t simply drop in new dlls to replace existing functionality or add new functionality automatically.

Clean Wiring

Let’s get started with clean wiring in 3 steps.

1. Move each component’s DI registration to its own assembly
  
    // Orders.Bootstrap.dll
    public class Component
    {
        private readonly IUnityContainer _container;
  
        public Component(IUnityContainer container)
        {
            if (container == null) throw new ArgumentNullException("container");
            _container = container;
        }

        public void Register()
        {
            _container.RegisterType<IOrderRepository, OrderRepository>();
            _container.RegisterType<IEmailService, EmailService>();
            _container.RegisterType<IRenderingService, RazaorRenderingService>();
            _container.RegisterType<IMailClient, SmtpMailClient>();
            _container.RegisterType<IOrderService, OrderService>();

            var baseTemplatePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, 
                                                "EmailTemplates");
            _container.RegisterType<ITemplateLocator, TemplateLocator>(
                            new InjectionConstructor(baseTemplatePath));
        }
    }
2. Discover and bootstrap your registration

MEF is great at discovering assemblies so let’s create an interface that can be discovered by exporting the bootstrap interface.

  
    // Bootstrap.Interfaces.dll
    public interface IBootstrap
    {
        void Register();
    }

Add the MEF export attribute to the bootstrap component created in step 1.

  
    [Export(typeof(IBootstrap))]
    public class Component : IBootstrap
    {
        private readonly IUnityContainer _container;
  
        [ImportingConstructor]
        public Component(IUnityContainer container)
        {
            if (container == null) throw new ArgumentNullException("container");
            _container = container;
        }

        public void Register()
        {
            // Registration code from step 1
        }
    }

The ResolvingFactory below will perform the discovery and Unity registration.

  
    // Bootstrap.Implementation.dll
    public class ResolvingFactory
    {
        [ImportMany(typeof(IBootstrap))]
        private IEnumerable<IBootstrap> Bootstraps { get; set; }

        private readonly IUnityContainer _container;

        public ResolvingFactory()
        {
            _container = new UnityContainer();
            Initialise();
        }

        private void Initialise()
        {
            var catalog = new AggregateCatalog();
            catalog.Catalogs.Add(new DirectoryCatalog(GetBinPath()));
            using (var mefContainer = new CompositionContainer(catalog))
            {
                mefContainer.ComposeExportedValue(_container);
                mefContainer.SatisfyImportsOnce(this);
            }

            foreach (var bootstrap in Bootstraps)
            {
                bootstrap.Register();
            }
        }

        public string GetBinPath()
        {
            var domainSetup = AppDomain.CurrentDomain.SetupInformation;
            return !string.IsNullOrEmpty(domainSetup.PrivateBinPath) ? 
                    domainSetup.PrivateBinPath : domainSetup.ApplicationBase;
        }

        public T Resolve<T>()
        {
            return _container.Resolve<T>();
        }
    }
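To illustrate the drop-in discoverability, an additional component only needs its own exported bootstrap class in a new assembly; the assembly name and service types below are hypothetical, and nothing in the host application changes.

    // Billing.Bootstrap.dll (hypothetical additional component)
    [Export(typeof(IBootstrap))]
    public class Component : IBootstrap
    {
        private readonly IUnityContainer _container;

        [ImportingConstructor]
        public Component(IUnityContainer container)
        {
            if (container == null) throw new ArgumentNullException("container");
            _container = container;
        }

        public void Register()
        {
            // Hypothetical services owned by this component
            _container.RegisterType<IInvoiceService, InvoiceService>();
        }
    }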

The new project assembly layout is shown below.

3. Run the solution

Let’s run the new Unity + MEF solution.

  
        // OrderApplication.dll
        static void Main(string[] args)
        {
            var orderModel = new OrderModel()
            {
                Description = "Design Book",
                Customer = new CustomerModel()
                {
                    Email = "customer@yoursite.com",
                    Name = "Jay"
                }
            };

            var factory = new ResolvingFactory();
            var orderService = factory.Resolve<IOrderService>();
            orderService.Create(orderModel);
        }

Summary

DI code often becomes messy with large amounts of registrations.

This post shows how DI registration can be componentised to keep code clean, simple and discoverable.

Problem solving beyond the basics

Chain of Responsibility

Part of my role is to review code and to coach developers.

Reviewing code provides insight into a range of techniques to solve problems. Like many developers, we often steal each other’s ideas. To become a good thief, you really need to be able to identify what is valuable so that you don’t steal someone else’s rubbish code.

Whilst coaching developers, I often take the code submitted for a code review and ask the person I’m coaching to review it. This technique helps to assess a developer’s ability to identify good and bad code. It also exposes new skills that can be coached, or confirms that a developer has mastered a skill.

The aim of this post is to show a working solution and discuss potential problems. A follow up post will provide an alternative that will solve these problems.

Setup

The problem we are trying to solve is to validate a credit card number based on the credit card type. The credit card context model / data transfer object (DTO) is shown below.

  
    public class CreditCard
    {
        public string Type { get; set; }
        public string Name { get; set; }
        public string Number { get; set; }
        public string Expiry { get; set; }
    }

    public interface ICreditCardValidator
    {
        void Validate(CreditCard creditCard);
    }

Code Review

The solution that was submitted for a code review is shown below.

  
    public class CreditCardValidator : ICreditCardValidator
    {
        public void Validate(CreditCard creditCard)
        {
            if (creditCard.Type.ToLower() == "visa")
            {
                var visaRegEx = new Regex("^4[0-9]{6,}$");
                if (!visaRegEx.IsMatch(creditCard.Number)) 
                      throw new Exception("Invalid card");
            }
            else if (creditCard.Type.ToLower() == "mastercard")
            {
                var masterCardRegEx = new Regex("^5[1-5][0-9]{5,}$");
                if (!masterCardRegEx.IsMatch(creditCard.Number)) 
                      throw new Exception("Invalid card");
            }
            else
            {
                throw new Exception(string.Format("Card type {0} is unsupported.", 
                                                  creditCard.Type));
            }
        }
    }

Code Smells

Let's run through a standard set of heuristics to identify code smells.

  • Duplicate code – both if statements use a fairly similar pattern, but it doesn’t appear to justify refactoring
  • Long method – the method length appears to be acceptable
  • Large class – the class fits on one screen and it doesn’t appear to be too large
  • Too many parameters – it only has one parameter so that's fine
  • Overuse of inheritance – no inheritance at all
  • Not testable – the class implements an interface, which suggests that clients rely on a contract instead of a concrete implementation; this promotes mocking and the class appears to be easily testable

Code Problems

Even though the code doesn't appear to have code smells, the code is not great. Here is a list of problems with the code:

  1. Guard Exception – an ArgumentNullException should be thrown at the top of Validate when the client calls the validator with a null credit card instance
  2. NullReferenceException – the client will receive an “object reference not set to an instance of an object” exception on the first creditCard.Type access when the card type is null
  3. Throwing Exceptions – throwing custom exceptions such as InvalidCreditCardNumberException is better than throwing a generic Exception which is harder to catch and bubble up
  4. Immutable String Comparison – at least the comparison is case insensitive thanks to .ToLower(), but strings are immutable, so each .ToLower() call creates a new copy of the string for every if statement. We could instead use a string comparison such as creditCard.Type.Equals("visa", StringComparison.OrdinalIgnoreCase)
  5. Constants – we should avoid magic strings and numbers in code and use constants instead, for example for the credit card type names. A CreditCardType enum could be an alternative if the card types are fixed.
  6. Readability – as the number of supported credit cards grows, readability can be improved with a switch statement
  7. Globalisation – error messages can be placed in resource files to provide multilingual support
  8. Performance – a new regular expression is created for every credit card number that is validated; the expressions can instead be created once and reused as static readonly fields. We can also use RegexOptions.Compiled to improve execution time.
An alternative solution using a switch statement is shown below.
  
    public class CreditCardValidator : ICreditCardValidator
    {
        private static readonly Regex VisaRegEx = 
                   new Regex("^4[0-9]{6,}$", RegexOptions.Compiled);
        private static readonly Regex MasterCardRegEx = 
                   new Regex("^5[1-5][0-9]{5,}$", RegexOptions.Compiled);

        public void Validate(CreditCard creditCard)
        {
            if (creditCard == null) throw new ArgumentNullException("creditCard");
            if (string.IsNullOrWhiteSpace(creditCard.Type)) 
                   throw new ArgumentException("The card type must be specified.");

            switch (creditCard.Type.ToLower())
            {
                case "visa":
                    if (!VisaRegEx.IsMatch(creditCard.Number)) 
                         throw new InvalidCardException("Invalid card");
                    break;
                case "mastercard":
                    if (!MasterCardRegEx.IsMatch(creditCard.Number)) 
                         throw new InvalidCardException("Invalid card");
                    break;
                default:
                    throw new InvalidCardException(
                         string.Format("Card type {0} is unsupported.", creditCard.Type));
            }
        }
    }

Even though the code was improved, the design is still poor and breaks SOLID principles.

  1. Open/Closed Principle - the class might be small, but it cannot be extended to support additional credit cards without modifying its code
  2. Single Responsibility Principle - The class knows too much about various credit card types and how to validate them

Summary

The post identified issues with a working solution and discussed fundamental coding practices.

If you can spot more issues, please let me know. If you know how to solve the problem then I’d love to hear from you too.

The next post will discuss how the design can be improved by applying a design pattern.

How to get started with Microsoft Message Queuing MSMQ

MSMQ Fundamentals

The previous post described how to design a highly scalable solution using queue oriented architecture.

This post will cover the fundamentals to get started with Microsoft Message Queuing (MSMQ). Code examples are provided to illustrate how to create a queue, write messages to a queue and read messages from a queue synchronously and asynchronously.

Download Source Code

Setup

The pre-requisite is to install MSMQ, which comes free with Windows.

Creating a queue

The following code was used for creating the queue named orders.

  
using System.Messaging;

if (!MessageQueue.Exists(@".\Private$\Orders"))
{
    MessageQueue.Create(@".\Private$\Orders");
} 
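During development it can also be handy to purge or remove the queue again; here is a small sketch using the same System.Messaging API.

if (MessageQueue.Exists(@".\Private$\Orders"))
{
    using (var queue = new MessageQueue(@".\Private$\Orders"))
    {
        queue.Purge(); // discard any messages left on the queue
    }
    MessageQueue.Delete(@".\Private$\Orders"); // remove the queue itself
}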

The orders queue and messages on the queue can be viewed using Computer Management as shown below.

MSMQ Computer Management

Writing Messages

The following code was used for writing the OrderModel DTO instance (message) to the queue using a BinaryMessageFormatter.

  
    [Serializable]
    public class OrderModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    using (var queue = new MessageQueue(@".\Private$\Orders"))
    {
        queue.Formatter = new BinaryMessageFormatter();

        var order = new OrderModel()
        {
            Id = 123,
            Name = "Demo Order"
        };
        queue.Send(order);
    }

Reading Messages

Blocking Synchronous Read

The following code was used for reading a message from the queue. The thread will be blocked on the receive method on line 5 until a message is available.
  
    using (var queue = new MessageQueue(@".\Private$\Orders"))
    {
        queue.Formatter = new BinaryMessageFormatter();

        var message = queue.Receive();
        var order = (OrderModel)message.Body;
    }

A read timeout duration can be specified. The code below uses a one-second timeout and catches the MessageQueueException raised when the timeout elapses. A custom ReadTimeoutException is then thrown to notify the client that a timeout has occurred.

  
    public class ReadTimeoutException : Exception
    {
        public ReadTimeoutException(string message, Exception innerException) 
               : base(message, innerException)
        {
            
        }
    }

    using (var queue = new MessageQueue(@".\Private$\Orders"))
    {
        queue.Formatter = new BinaryMessageFormatter();
        Message message = null;

        try
        {
            message = queue.Receive(TimeSpan.FromSeconds(1));
        }
        catch (MessageQueueException mqException)
        {
           if (mqException.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
           {
               throw new ReadTimeoutException("Reading from the queue timed out.", 
                            mqException);
           }
           throw;
        }
    }

Asynchronous Read

The following code was used to perform an async read in conjunction with a Task.
  
private async Task<OrderModel> ReadAsync()
{
    using (var queue = new MessageQueue(@".\Private$\Orders"))
    {
        queue.Formatter = new BinaryMessageFormatter();

        var message = await Task.Factory.FromAsync<Message>(
                              queue.BeginReceive(), queue.EndReceive);
        return (OrderModel)message.Body;
    }
}
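A minimal caller for the method above could look like the snippet below; blocking on the task is for console demonstration purposes only.

    static void Main(string[] args)
    {
        // Wait synchronously because this console entry point cannot be async.
        // Assumes ReadAsync is declared static (or call it on the owning instance).
        var order = ReadAsync().Result;
        Console.WriteLine("Received order {0}: {1}", order.Id, order.Name);
        Console.ReadLine();
    }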

Summary

This post covered the fundamentals of using MSMQ to read and write messages.

The next post describes various message delivery strategies.

How to make your application 5x faster with background processing

5X Performance Gain

Processing everything in real-time is easy but often makes an application tightly coupled, prone to failure and difficult to scale.

Let’s assume a user just clicked on the purchase button to kick off the following workflow:

  1. Save the order
  2. Send a confirmation email (mask the credit card number)
  3. Update the user’s loyalty points in an external system

The problem is that non-critical workloads (step 2 and 3) can negatively impact an application's performance and recoverability. What happens when the mail server is down and the email confirmation fails? How do we monitor failure? Do we roll the transaction back or replay the workflow by taking a checkpoint after each successful step?

The solution is to ensure that application flow isn't impeded by waiting on non-critical workloads to complete. Queue based systems are effective at solving this problem.

This post will use a purchase order application that accepts XML requests. The application will sanitise the credit card details by masking most of the digits before sending the request to a mailbox.

Download Source Code

Setup

The following XML document will be used as the purchase order. It is about 512KB to simulate a decent payload.

  


  
    <!-- Abbreviated purchase order XML: customer "Test User",
         address "123 Abc Road, Sydney, Australia",
         payment "Visa 4111111111111111",
         and a book catalogue including "XML Developer's Guide" by Gambardella, Matthew ... -->
The PurchaseOrderProcessor class was intentionally kept small to focus on solving the main problem, which is to minimise the performance impact of the mailing system.
  
    public interface IMailer
    {
        void Send(string message);
    }

    public class PurchaseOrderProcessor
    {
        private readonly IMailer _mailer;

        public PurchaseOrderProcessor(IMailer mailer)
        {
            if (mailer == null) throw new ArgumentNullException("mailer");
            _mailer = mailer;
        }

        public void Process(string xmlRequest)
        {
            var doc = new XmlDocument();
            doc.LoadXml(xmlRequest);

            // Process the order

            _mailer.Send(xmlRequest);
        }
    }
The SMTP mailer will be configured to write the mail to disk to simplify the illustration.
  
    public class FileSmtpMailer : IMailer
    {
        private readonly string _path;
        private readonly string _from;
        private readonly string _recipients;
        private readonly string _subject;

        public FileSmtpMailer(string path, string from, string recipients, string subject)
        {
            if (path == null) throw new ArgumentNullException("path");
            _path = path;
            _from = @from;
            _recipients = recipients;
            _subject = subject;
        }

        public void Send(string message)
        {
            using (var client = new SmtpClient())
            {
                // This can be configured in the app.config
                client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
                client.PickupDirectoryLocation = _path;

                using (var mailMessage = 
                          new MailMessage(_from, _recipients, _subject, message))
                {
                    mailMessage.IsBodyHtml = true;
                    client.Send(mailMessage);    
                }             
            }
        }
    }
The MaskedMailerDecorator will be used for masking the credit card details.
  
    public class MaskedMailerDecorator : IMailer
    {
        private readonly Regex _validationRegex;
        private readonly IMailer _next;
        private const char MaskCharacter = '*';
        private const int MaskDigits = 4;

        public MaskedMailerDecorator(Regex regex, IMailer next)
        {
            if (regex == null) throw new ArgumentNullException("regex");
            if (next == null) throw new ArgumentNullException("next");
            _validationRegex = regex;
            _next = next;
        }

        public void Send(string message)
        {
            if (_validationRegex.IsMatch(message))
            {
                message = _validationRegex.Replace(message, 
                               match => MaskNumber(match.Value));
            }
            _next.Send(message);
        }

        private static string MaskNumber(string value)
        {
            return value.Length <= MaskDigits ?
               new string(MaskCharacter, value.Length) :
               string.Format("{0}{1}", 
                          new string(MaskCharacter, value.Length - MaskDigits),
                          value.Substring(value.Length - MaskDigits, MaskDigits));
        }
    }
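As a quick, hypothetical illustration of the masking behaviour (the regex and sample text below are assumptions, and the NullMailer from the Baseline section is used as a sink):

    var cardRegex = new Regex(@"\b4[0-9]{15}\b", RegexOptions.Compiled); // 16-digit Visa-style number
    var maskedMailer = new MaskedMailerDecorator(cardRegex, new NullMailer());
    maskedMailer.Send("Visa 4111111111111111");
    // The inner mailer receives "Visa ************1111"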

Baseline

Let's establish a performance baseline by running the application with a NullMailer that doesn't do anything. Refer to the Reject the Null Checked Object post if you are new to the Null Object pattern.

  
    public class NullMailer : IMailer
    {
        public void Send(string message)
        {
            // intentionally blank
        }
    }

    static void Main(string[] args)
    {
        var path = Path.Combine(Directory.GetCurrentDirectory(), "PurchaseOrder.xml");
        var request = File.ReadAllText(path); 

        var nullMailer = new NullMailer();
        var orderProcessor = new PurchaseOrderProcessor(nullMailer);

        var stopWatch = Stopwatch.StartNew();
        Parallel.For(0, 1000, i => orderProcessor.Process(request));
        stopWatch.Stop();

        Console.WriteLine("Seconds: {0}", stopWatch.Elapsed.TotalSeconds);
        Console.ReadLine();
    }
Seconds: 6.3404086

Real-Time Processing

Let’s measure the performance when MaskedMailerDecorator and FileSmtpMailer are used.
  
            Directory.CreateDirectory(@"C:\temp");
            var ccRegEx = new Regex(@"(?:\b4[0-9]{12}(?:[0-9]{3})?\b|\b5[1-5][0-9]{14}\b)",
                                    RegexOptions.Compiled);

            var path = Path.Combine(Directory.GetCurrentDirectory(), "PurchaseOrder.xml");
            var request = File.ReadAllText(path);
            
            // Use Unity for doing the wiring          
            var fileMailer = new FileSmtpMailer(@"C:\temp", "p@mail.com", "a@mail.com", "Order");
            var maskedMailer = new MaskedMailerDecorator(ccRegEx, fileMailer);
            var orderProcessor = new PurchaseOrderProcessor(maskedMailer);

            var stopWatch = Stopwatch.StartNew();
            Parallel.For(0, 1000, i => orderProcessor.Process(request));
            stopWatch.Stop();

            Console.WriteLine("Seconds: {0}", stopWatch.Elapsed.TotalSeconds);
            Console.ReadLine();
Seconds: 32.0430142

Background Processing

Let's extend the solution by adding an in-memory queue to buffer the messages. The queue effectively acts as an outbox for sending mail without overwhelming the mail server with parallel requests.

    public class QueuedMailerDecorator : IMailer, IDisposable
    {
        private readonly IMailer _next;
        private BlockingCollection<string> _messages;

        public QueuedMailerDecorator(IMailer next)
        {
            if (next == null) throw new ArgumentNullException("next");
            _next = next;
            _messages = new BlockingCollection<string>();

            Task.Factory.StartNew(() =>
            {
                try
                {
                    // Block the thread until a message becomes available
                    foreach (var message in _messages.GetConsumingEnumerable())
                    {
                        _next.Send(message);
                    }
                }
                finally
                {
                    _messages.Dispose();
                }
            }, TaskCreationOptions.LongRunning);
        }

        public void Send(string message)
        {
            if (_messages == null || _messages.IsAddingCompleted)
            {
                return;
            }
            try
            {
                _messages.TryAdd(message);
            }
            catch (ObjectDisposedException)
            {
                Trace.WriteLine("Add failed since the queue was disposed.");
            }
        }

        public void Dispose()
        {
            if (_messages != null && !_messages.IsAddingCompleted)
            {
                _messages.CompleteAdding();
            }
            GC.SuppressFinalize(this);
        }
    }

Here is a diagram that depicts the collection of decorated mailers to intercept the communication between the PurchaseOrderProcessor and the FileSmtpMailer.

SMTP Mailer Decorators

Let's run the code below to evaluate if the queue made a performance difference.
             
            // Use Unity for doing the wiring          
            var fileMailer = new FileSmtpMailer(@"C:\temp", "p@mail.com", "a@mail.com", "Order");
            var maskedMailer = new MaskedMailerDecorator(creditCardRegEx, fileMailer);
            var queuedMailer = new QueuedMailerDecorator(maskedMailer);
            var orderProcessor = new PurchaseOrderProcessor(queuedMailer);

            var stopWatch = Stopwatch.StartNew();
            Parallel.For(0, 1000, i => orderProcessor.Process(request));
            stopWatch.Stop();

            Console.WriteLine("Seconds: {0}", stopWatch.Elapsed.TotalSeconds);
            Console.ReadLine();
Seconds: 6.3908034 

The drawback of the in-memory queue used in this post is that it requires memory to store messages temporarily before they are processed. It is also possible to lose messages if the application crashes or stops unexpectedly before all of the requests have been processed.

These issues can be addressed with a locally persisted queue such as MSMQ or a cloud based queue such as Azure Service Bus. Queues provide many benefits that will be covered in the next post.
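For instance, the same decorator shape could sit in front of a durable MSMQ queue instead of an in-memory collection. The sketch below is an assumption that mirrors the MSMQ fundamentals post elsewhere on this blog; the queue path is illustrative and the background consumer that drains the queue is not shown.

    // Requires a reference to System.Messaging
    public class MsmqMailerDecorator : IMailer
    {
        private readonly string _queuePath;

        public MsmqMailerDecorator(string queuePath)
        {
            if (queuePath == null) throw new ArgumentNullException("queuePath");
            _queuePath = queuePath;
        }

        public void Send(string message)
        {
            // Persist the message; a separate background consumer reads the queue
            // and forwards each message to the real mailer, so pending mail survives restarts.
            using (var queue = new MessageQueue(_queuePath))
            {
                queue.Formatter = new BinaryMessageFormatter();
                queue.Send(message);
            }
        }
    }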

Summary

This post provided evidence of the performance degradation caused by processing non-critical workloads in real time. Using queues can be an effective technique to keep an application responsive by buffering and processing tasks in the background.

Null Check Performance Improvement and Failure Reduction

Calling optional dependencies such as logging, tracing and notifications should be fast and reliable.

The Null Object pattern can be used for reducing code complexity by managing optional dependencies with default behaviour as discussed in the Reject the Null Checked Object post.

This post aims to illustrate the problem with the Null Object pattern and how to resolve it using a simple lambda expression.

The problem is that the Null Object pattern can still lead to performance degradation and unnecessary failure: the arguments passed to a do-nothing implementation are fully evaluated before the call is made.

The solution is to avoid executing unnecessary operations especially for default behaviour.

Download Source Code

Setup

The intent of the sample application is to read and parse XML documents for a book store. The document reader is responsible for logging the book titles using the code below.

    public interface ILogger
    {
        void Log(string message);
    }

    public class ConsoleLogger : ILogger
    {
        public void Log(string message)
        {
            Console.WriteLine(message);
        }
    }

    public class DocumentReader
    {
        private readonly ILogger _logger;

        public DocumentReader(ILogger logger)
        {
            if (logger == null) throw new ArgumentNullException("logger");
            _logger = logger;
        }

        public void Read(XmlDocument document)
        {
            if (document == null) throw new ArgumentNullException("document");
            var books = document.SelectNodes("catalog/book");
            if (books == null) throw new XmlException("Catalog/book missing.");

            _logger.Log(string.Format("Titles: {0}.", 
                        string.Join(", ", GetBookTitles(document))));
            
            // Implementation
        }

        private static IEnumerable<string> GetBookTitles(XmlNode document)
        {
            Console.WriteLine("Retrieving the book titles");
            var titlesNodes = document.SelectNodes("catalog/book/title");
            if (titlesNodes == null) yield break;
            foreach (XmlElement title in titlesNodes)
            {
                yield return title.InnerText;
            }
        }
    }

Problem

Here is an example that illustrates the execution of the application.

 

        static void Main(string[] args)
        {
            var document = new XmlDocument();
            document.LoadXml(@"<catalog>
                                 <book><title>Developer's Guide</title></book>
                                 <book><title>Tester's Guide</title></book>
                               </catalog>");

            var logger = new ConsoleLogger();
            var docReader = new DocumentReader(logger);

            docReader.Read(document);
            Console.ReadLine();
        }
Retrieving the book titles
Titles: Developer's Guide, Tester's Guide.
The solution works well when an actual logger is used but what happens if we replace the logger with the NullLogger as shown below?
 
    public class NullLogger : ILogger
    {
        public void Log(string message)
        {
            // Purposefully provides no behaviour
        }
    }

    var logger = new NullLogger();
    var docReader = new DocumentReader(logger);

    docReader.Read(document);
    Console.ReadLine();
Retrieving the book titles

Even with the NullLogger in place, the book titles are still retrieved and the log message is still built, only to be thrown away. That wasted work (and any exception it might raise along the way) is the hidden cost of the pattern.

Solution

Here is an example that illustrates the improved version. The logger was modified to expose two methods: the first takes a string for simple logging operations, and the second takes a lambda function that produces the string only when it is needed.

 
    public interface ILogger
    {
        void Log(string message);
        void Log(Func<string> messageFunc);
    }

    public class NullLogger : ILogger
    {
        public void Log(string message)
        {
            // Purposefully provides no behaviour
        }

        public void Log(Func<string> messageFunc)
        {
            // Purposefully provides no behaviour
        }
    }

    public class ConsoleLogger : ILogger
    {
        public void Log(string message)
        {
            Console.WriteLine(message);
        }

        public void Log(Func<string> messageFunc)
        {
            try
            {
                Console.WriteLine(messageFunc.Invoke());
            }
            catch (Exception ex)
            {
                Console.WriteLine("Failed to log the result. Error: {0}", ex);
            } 
        }
    }

    public class DocumentReader
    {
        public void Read(XmlDocument document)
        {
            _logger.Log(() => string.Format("Titles: {0}.", 
                              string.Join(", ", GetBookTitles(document))));
           ...
         }
    }

Running the example using the Console Logger will produce the same result as the original example.

Let's run the Null Logger example again and note that the "Retrieving the book titles" message no longer appears: the NullLogger never invokes the lambda, so the title retrieval and string formatting are skipped entirely.

 
    var logger = new NullLogger();
    var docReader = new DocumentReader(logger);

    docReader.Read(document);
    Console.ReadLine();

Summary

The Null Object pattern simplifies solutions by removing the need to check for null. The drawback is the potential risk of impeding performance and causing failure. Passing around lambda expressions can be a subtle way to overcome the problem without overcomplicating the code.

Reject the Null Checked Object

Software solutions become overcomplicated with dozens of long and ugly checks for null. They are like potholes: you will eventually fall into one when you’re not looking carefully.

The problem is that methods return null instead of real objects, so every client must check for null to prevent the application from blowing up with a NullReferenceException (object reference not set to an instance of an object).

The solution is to return a null object that exhibits default behaviour. This is called the Null Object design pattern.

Download Source Code

Example 1: Debug Mode

Here is an example where additional tasks are performed such as logging verbose messages when the application is in debug mode.

The cyclomatic complexity of the application has increased due to the number of checks, as shown below.

Null Checking

            if (isDebugMode)
            {
                logger.Log("Saving - Step 1");
            }

            // Save Step 1

            if (isDebugMode)
            {
                logger.Log("Saving - Step 2");
            }

            // Save Step 2
The code can be simplified as shown below since the logger is always called. A real logger can be registered while in debug mode; otherwise a null logger that logs nowhere is used by default.

Null Object Design Pattern

            
public class NullLogger : ILogger
{
    public void Log(string message)
    {
        // Purposefully provides no behaviour
    }
}

logger.Log("Saving - Step 1");
// Save Step 1
logger.Log("Saving - Step 2");
// Save Step 2

Example 2: Registration System

The use case is to create a service to confirm reservations and to send optional confirmation notices. The service is also responsible for retrieving reservations that can be filtered based on the confirmation status.

The following interfaces will be used in the illustration.

    public interface INotification<in T>
    {
        void Notifiy(T message);
    }

    public interface IReservationRepository
    {
        void Save(Confirmation request);
        IEnumerable<Reservation> GetReservations();
    }

Null Checking

Here is the complicated example that performs null checks.

    public class ReservationServiceV1 : IReservationService
    {
        private readonly IReservationRepository _repository;
        private readonly INotification<Confirmation> _notifier;

        public ReservationServiceV1(IReservationRepository repository, 
                                    INotification<Confirmation> notifier)
        {
            if (repository == null) throw new ArgumentNullException("repository");
            _repository = repository;
            _notifier = notifier;
        }

        public void Confirm(Confirmation request)
        {
            _repository.Save(request);
            if (_notifier != null) _notifier.Notifiy(request);
        }

        public IEnumerable<Reservation> GetReservations(bool confirmed)
        {
            var reservations = _repository.GetReservations();
            return reservations == null ? null : 
                   reservations.Where(reservation => reservation.Confirmed == confirmed);
        }
    }

Null Object Design Pattern

Here is the simplified version that works with default behaviour.

 
public class NullNotification<T> : INotification<T>
{
    public void Notifiy(T message)
    {
        // Purposefully provides no behaviour
    }
}
   
public class ReservationServiceV2 : IReservationService
{
    private readonly IReservationRepository _repository;
    private readonly INotification<Confirmation> _notifier;

    public ReservationServiceV2(IReservationRepository repository, 
                                INotification<Confirmation> notifier)
    {
        if (repository == null) throw new ArgumentNullException("repository");
        if (notifier == null) throw new ArgumentNullException("notifier");
        _repository = repository;
        _notifier = notifier;
    }

    public void Confirm(Confirmation request)
    {
        _repository.Save(request);
        _notifier.Notifiy(request);
    }

    public IEnumerable<Reservation> GetReservations(bool confirmed)
    {
        return _repository.GetReservations()
                          .Where(reservation => reservation.Confirmed == confirmed);
    }
}

Summary

Avoid returning null and return default behaviour instead, such as an empty list. The Null Object design pattern will simplify code and reduce the slip-ups that cause unexpected failure.
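For example, a repository can hand back an empty sequence rather than null; the method below is a minimal sketch and the LoadReservationsFromStore call is hypothetical.

    public IEnumerable<Reservation> GetReservations()
    {
        var reservations = LoadReservationsFromStore(); // hypothetical data access call
        return reservations ?? Enumerable.Empty<Reservation>();
    }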

10x Performance Gain: IEnumerable vs IQueryable

This post compares IEnumerable against IQueryable using an experiment to illustrate the behaviour and performance differences. A Func versus Expression<Func> filter bug is easy to miss: the caller’s syntax stays the same, but it can have a 10x performance impact on your application.

Download Source Code

Setup

SQL Server 2014 was used for hosting the database. The GeoAllCountries table content was sourced from GeoNames and contains just over 10 million rows. Entity Framework 6 was used for the LINQ to SQL integration.

Predicate Function

The code below will query the GeoAllCountries table and use a filter predicate function to filter the results starting with "Aus".
        static void Main(string[] args)
        {
            var stopWatch = Stopwatch.StartNew();
            var countryNames = GetCountryNames(name => name.StartsWith("Aus"));

            foreach (var name in countryNames)
            {
                Console.WriteLine(name);
            }

            stopWatch.Stop();
            Console.WriteLine("Running time: {0}", stopWatch.Elapsed.TotalSeconds);
            Console.ReadLine();
        }

        public static IEnumerable<string> GetCountryNames(Func<string, bool> filterFunc)
        {
            using (var context = new TestDatabaseDataContext())
            {
                IQueryable<string> names = (from country in context.GeoAllCountries 
                                            select country.Name);
         
                foreach (var name in names.Where(filterFunc))
                {
                    yield return name;
                }
            }  
        }
Running time: 8.6558463
SQL Server Profiler captured the following query between the application and the database:
SELECT [t0].[Name] FROM [dbo].[GeoAllCountries] AS [t0]

Expression Predicate Function

The code below will query the GeoAllCountries table and use an expression filter predicate function to filter the results starting with "Aus".
        static void Main(string[] args)
        {
            var stopWatch = Stopwatch.StartNew();
            var countryNames = GetCountryNames(name => name.StartsWith("Aus"));

            foreach (var name in countryNames)
            {
                Console.WriteLine(name);
            }

            stopWatch.Stop();
            Console.WriteLine("Running time: {0}", stopWatch.Elapsed.TotalSeconds);
            Console.ReadLine();
        }

        public static IEnumerable<string> GetCountryNames(
                      Expression<Func<string, bool>> filterFunc)
        {
            using (var context = new TestDatabaseDataContext())
            {
                IQueryable<string> names = (from country in context.GeoAllCountries 
                                            select country.Name);
         
                foreach (var name in names.Where(filterFunc))
                {
                    yield return name;
                }
            }  
        }
Running time: 0.8633603
SQL Server Profiler captured the following query between the application and the database:
exec sp_executesql N'SELECT [t0].[Name]
FROM [dbo].[GeoAllCountries] AS [t0]
WHERE [t0].[Name] LIKE @p0',N'@p0 nvarchar(4000)',@p0=N'Aus%'
Note that the client code did not change. Wrapping the Func in an Expression made a world of difference. It is easy to add the expression syntax, but just as easy to miss in a code review unless you have the eye to spot the issue and understand the implications.
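The difference comes down to which Where overload the compiler binds to. The signatures below are paraphrased from the base class library for comparison.

    // Enumerable.Where - takes a compiled delegate, so filtering can only happen
    // in memory after every row has been streamed from the database.
    public static IEnumerable<TSource> Where<TSource>(
        this IEnumerable<TSource> source, Func<TSource, bool> predicate);

    // Queryable.Where - takes an expression tree, which the LINQ provider
    // can translate into SQL and execute at the database.
    public static IQueryable<TSource> Where<TSource>(
        this IQueryable<TSource> source, Expression<Func<TSource, bool>> predicate);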

Summary

IEnumerable executes the select query at the database and filters the data in-memory at the application layer.

IQueryable executes the select query and all of the filters at the database.

The database filtering reduced network traffic and application memory load resulting in a significant 10x performance gain.

Butcher the LINQ to SQL Resource Hog

Has your LINQ to SQL repository ever thrown a "cannot access a disposed object" exception? You can fix it by calling ToList on the LINQ query, but that can impede your application’s performance and scalability.

This post covers common pitfalls and how to avoid them when dealing with unmanaged resources such as the lifecycle of a database connection in a pull-based IEnumerable repository. An investigation is made to uncover when Entity Framework and LINQ to SQL resources are disposed of and how to implement an effective solution.

Download Source Code

Setup

The following repository class will be used to model the same behaviour as an actual LINQ to SQL database repository.
    public class Model
    {
        public string Message { get; set; }
    }

    public class Repository : IDisposable
    {
        public IEnumerable<Model> Records
        {
            get
            {
                if (_disposed) throw new InvalidOperationException("Disposed");
                Console.WriteLine("Building message one");
                yield return new Model() { Message = "Message one" };
                if (_disposed) throw new InvalidOperationException("Disposed");
                Console.WriteLine("Building message two");
                yield return new Model() { Message = "Message two" };
            }
        }

        private bool _disposed = false;

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }
        
        protected virtual void Dispose(bool disposing)
        {
            if (_disposed) return;
            _disposed = true;
        }
    }

LINQ to SQL: Cannot access a disposed object

Let's execute the LINQ query below to call the repository and write the results to the console.
        static void Main(string[] args)
        {
            var records = GetLinqRecords();
            foreach (var record in records)
            {
                Console.WriteLine(record);    
            }
            Console.ReadLine();
        }

        private static IEnumerable<string> GetLinqRecords()
        {
            using (var repository = new Repository())
            {
                return (from model in repository.Records select model.Message);
            }
        }

A LINQ to SQL application would raise the following exception:

An unhandled exception of type 'System.ObjectDisposedException' occurred in System.Data.Linq.dll
Additional information: Cannot access a disposed object.

LINQ to SQL: ToList

Let's execute the LINQ query below by materialising the records to a list first:
        
        static void Main(string[] args)
        {
            var records = GetLinqRecordsToList();
            foreach (var record in records)
            {
                Console.WriteLine(record);    
            }
            Console.ReadLine();
        }

        private static IEnumerable<string> GetLinqRecordsToList()
        {
            using (var repository = new Repository())
            {
                return (from model in repository.Records select model.Message).ToList();
            }
        }
Building message one
Building message two
Message one
Message two

Yield to the rescue

Let's execute the code below using yield instead:
        static void Main(string[] args)
        {
            var records = GetYieldRecords();
            foreach (var record in records)
            {
                Console.WriteLine(record);    
            }
            Console.ReadLine();
        }

        private static IEnumerable<string> GetYieldRecords()
        {
            using (var repository = new Repository())
            {
                foreach (var record in repository.Records)
                {
                    yield return record.Message;
                }
            }
        }
Building message one
Message one
Building message two
Message two

Don’t refactor your code

Let's see what happens when we run a refactored version of the code:
        static void Main(string[] args)
        {
            var records = GetRefactoredYieldRecords();
            foreach (var record in records)
            {
                Console.WriteLine(record);    
            }
            Console.ReadLine();
        }

        private static IEnumerable<string> GetRefactoredYieldRecords()
        {
            using (var repository = new Repository())
            {
                return YieldRecords(repository.Records);
            }
        }

        private static IEnumerable<string> YieldRecords(IEnumerable<Model> records)
        {
            if (records == null) throw new ArgumentNullException("records");
            foreach (var record in records)
            {
                yield return record.Message;
            }
        }

Déjà Vu. The same error occurred as seen in the LINQ to SQL example. Take a closer look at the IL produced by the compiler using a tool such as ILSpy.

In both the refactored version and the LINQ to SQL version, the method does not return an iterator over the repository directly; it returns an inner IEnumerable from within an outer method. Effectively, it is an IEnumerable within an IEnumerable. The outer method’s using block manages the connection lifecycle and disposes the repository as soon as the inner IEnumerable is returned, before the caller ever starts enumerating.

Keep it simple, return the IEnumerable function directly to the caller.

Yield IEnumerable vs List Building

This post describes the use of yield and compares it to building and returning a list behind an IEnumerable<T> interface.

Download Source Code

Setup

The example consists of a contact store that will allow the client to retrieve a collection of contacts.

The IStore.GetEnumerator method must return IEnumerable<T>, which is a strongly typed generic interface that describes the ability to fetch the next item in the collection.

The actual implementation of the collection can be decided by the concrete implementation. For example, the collection could consist of an array, generic list or yielded items.

       
    public interface IStore<out T>
    {
        IEnumerable<T> GetEnumerator();
    }

    public class ContactModel
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

Calling GetEnumerator

Let's create two different stores, call the GetEnumerator on each store and evaluate the console logs to determine if there is a difference between the List Store and the Yield Store.

List Store

The code below is a common pattern I've observed during code reviews, where a list is instantiated, populated and returned once ALL of the records have been constructed.
    
    public class ContactListStore : IStore<ContactModel>
    {
        public IEnumerable<ContactModel> GetEnumerator()
        {
            var contacts = new List<ContactModel>();
            Console.WriteLine("ContactListStore: Creating contact 1");
            contacts.Add(new ContactModel() { FirstName = "Bob", LastName = "Blue" });
            Console.WriteLine("ContactListStore: Creating contact 2");
            contacts.Add(new ContactModel() { FirstName = "Jim", LastName = "Green" });
            Console.WriteLine("ContactListStore: Creating contact 3");
            contacts.Add(new ContactModel() { FirstName = "Susan", LastName = "Orange" });
            return contacts;
        }
    }

    static void Main(string[] args)
    {
        var store = new ContactListStore();
        var contacts = store.GetEnumerator();

        Console.WriteLine("Ready to iterate through the collection.");
        Console.ReadLine();
    }
ContactListStore: Creating contact 1
ContactListStore: Creating contact 2
ContactListStore: Creating contact 3
Ready to iterate through the collection.

Yield Store

The yield alternative is shown below, where each instance is returned as soon as it is produced.
    
    public class ContactYieldStore : IStore<ContactModel>
    {
        public IEnumerable<ContactModel> GetEnumerator()
        {
            Console.WriteLine("ContactYieldStore: Creating contact 1");
            yield return new ContactModel() { FirstName = "Bob", LastName = "Blue" };
            Console.WriteLine("ContactYieldStore: Creating contact 2");
            yield return new ContactModel() { FirstName = "Jim", LastName = "Green" };
            Console.WriteLine("ContactYieldStore: Creating contact 3");
            yield return new ContactModel() { FirstName = "Susan", LastName = "Orange" };
        }
    }

    static void Main(string[] args)
    {
        var store = new ContactYieldStore();
        var contacts = store.GetEnumerator();

        Console.WriteLine("Ready to iterate through the collection.");
        Console.ReadLine();
    }
Ready to iterate through the collection.
Let's call the collection again and observe the behaviour when we fetch the first contact in the collection.
  
        static void Main(string[] args)
        {
            var store = new ContactYieldStore();
            var contacts = store.GetEnumerator();
            Console.WriteLine("Ready to iterate through the collection");
            Console.WriteLine("Hello {0}", contacts.First().FirstName);
            Console.ReadLine();
        }
Ready to iterate through the collection
ContactYieldStore: Creating contact 1
Hello Bob

Possible multiple enumeration of IEnumerable

Have you ever noticed the "possible multiple enumeration of IEnumerable" warning from ReSharper? ReSharper is warning us about a potential double handling issue, particularly for deferred execution functions such as yield and LINQ. Have a look at the results produced from the code below.
  
        static void Main(string[] args)
        {
            var store = new ContactYieldStore();
            var contacts = store.GetEnumerator();
            Console.WriteLine("Ready to iterate through the collection");

            if (contacts.Any())
            {
                foreach (var contact in contacts)
                {
                    Console.WriteLine("Hello {0}", contact.FirstName);
                }
            }
            
            Console.ReadLine();
        }
Ready to iterate through the collection
ContactYieldStore: Creating contact 1
ContactYieldStore: Creating contact 1
Hello Bob
ContactYieldStore: Creating contact 2
Hello Jim
ContactYieldStore: Creating contact 3
Hello Susan

Notice that contact 1 was created twice: Any() enumerated the sequence to check for at least one element, and the foreach loop then started a fresh enumeration from the beginning.

IEnumerable.ToList()

What if we have a requirement to materialize (build) the entire collection immediately? The answer is shown below.

  
        static void Main(string[] args)
        {
            var store = new ContactYieldStore();
            var contacts = store.GetEnumerator().ToList();
            Console.WriteLine("Ready to iterate through the collection");
            Console.ReadLine();
        }
ContactYieldStore: Creating contact 1
ContactYieldStore: Creating contact 2
ContactYieldStore: Creating contact 3
Ready to iterate through the collection

Calling the .ToList() extension method (from the System.Linq namespace) on an IEnumerable<T> will build the entire collection up front.

Comparison

The list implementation loaded all of the contacts immediately whereas the yield implementation provided a deferred execution solution.

In the list example, the caller doesn't have the option to defer execution. The yield approach provides greater flexibility since the caller can decide to pre-load the data or pull each record as required. A common trap to avoid is performing multiple enumerations of the same collection, since yield iterators and LINQ queries will repeat their work for every enumeration; one way to avoid this is sketched below.
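
If the result will be consumed more than once, a simple option (continuing the ContactYieldStore example from above) is to materialise the sequence a single time and reuse the resulting list, which also clears the ReSharper warning:

        static void Main(string[] args)
        {
            var store = new ContactYieldStore();

            // Materialise once; Any() and the foreach below reuse the same list
            var contacts = store.GetEnumerator().ToList();

            if (contacts.Any())
            {
                foreach (var contact in contacts)
                {
                    Console.WriteLine("Hello {0}", contact.FirstName);
                }
            }

            Console.ReadLine();
        }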

In practice, it is often desirable to perform the minimum amount of work needed in order to reduce the resource consumption of an application.

For example, we may have an application that processes millions of records from a database. The following benefits can be achieved when we use IEnumerable in a deferred execution pull-based model:

  • Scalability, reliability and predictability are likely to improve since the number of records does not significantly affect the application’s resource requirements.
  • Performance and responsiveness are likely to improve since processing can start immediately instead of waiting for the entire collection to be loaded first.
  • Recoverability and utilisation are likely to improve since the application can be stopped, started or interrupted, or can fail, and only the items in progress will be lost; compare this to pre-fetching all of the data when only a portion of the results is actually used.
  • Continuous processing is possible in environments where constant workload streams are added.
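
As a rough illustration of this pull-based model (the Contacts table, query and System.Data.SqlClient mapping below are hypothetical and not part of the contact store sample), a yield iterator can stream records from a database one row at a time, so the caller only pays for the rows it actually consumes:

    public class ContactDatabaseStore : IStore<ContactModel>
    {
        private readonly string _connectionString;

        public ContactDatabaseStore(string connectionString)
        {
            _connectionString = connectionString;
        }

        public IEnumerable<ContactModel> GetEnumerator()
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand("SELECT FirstName, LastName FROM Contacts", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // Each row is yielded as soon as it is read; the full
                    // result set is never held in memory at the same time.
                    while (reader.Read())
                    {
                        yield return new ContactModel
                        {
                            FirstName = reader.GetString(0),
                            LastName = reader.GetString(1)
                        };
                    }
                }
            }
        }
    }

If the caller stops enumerating early (for example after a Take(100)), the remaining rows are simply never read.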

Evolution of the Singleton Design Pattern in C#

This post is focused on the evolutionary process of implementing the singleton design pattern, which restricts the instantiation of a class to a single object.

We will start with a simple implementation and move on to a thread-safe example using locking. We will then compare this with lazy initialisation and complete the post with a loosely coupled singleton solution using Dependency Injection.

Download Source Code

Singleton Components

We will use logging as our singleton use-case. The logging interface and implementation shown below write to the console whenever a ConsoleLogger is constructed, so we can observe exactly how many instances are created.

       
    public interface ILogger
    {
        void Log(string message);
    }

    public class ConsoleLogger : ILogger
    {
        public ConsoleLogger()
        {
            Console.WriteLine("{0} constructed", GetType().Name);
        }

        public void Log(string message)
        {
            Console.WriteLine(message);
        }
    }

Single Thread Singleton

Below is a simple implementation of a singleton.

    
    public class SingleThreadLogger
    {
         private static ILogger _instance;

         public static ILogger Instance
         {
             get
             {
                 if (_instance == null)
                 { 
                     _instance = new ConsoleLogger();
                 }
                 return _instance;
             }
         }
    }
Calling the singleton in a single thread from the console:
    
    static void Main(string[] args)
    {
         SingleThreadLogger.Instance.Log("Hello World");
         SingleThreadLogger.Instance.Log("Hello World");     
         Console.ReadLine();
    }
  ConsoleLogger constructed
  Hello World
  Hello World
Calling the singleton using multiple threads from the console:
    
    static void Main(string[] args)
    {
         Parallel.Invoke(() => SingleThreadLogger.Instance.Log("Hello World"),
                         () => SingleThreadLogger.Instance.Log("Hello World"));
         Console.ReadLine();
    }
  ConsoleLogger constructed
  Hello World
  ConsoleLogger constructed
  Hello World

Two instances were constructed because both threads evaluated the null check before either of them had assigned the _instance field.

Thread Safe Singleton

Below is an example of a poorly implemented locking approach around the constructor of the ConsoleLogger:
    
    public class ThreadLockConstructorLogger
    {
        private static ILogger _instance;

        private static readonly Object _lock = new object();

        public static ILogger Instance
        {
            get
            {
                if (_instance == null)
                {
                     lock (_lock)
                    {
                         // WARNING: Multiple instantiation
                        _instance = new ConsoleLogger();
                    }       
                }
                return _instance;
            }
        }
    }
Calling the singleton using multiple threads from the console:
    
    static void Main(string[] args)
    {
         Parallel.Invoke(() => ThreadLockConstructorLogger.Instance.Log("Hello World"),
                         () => ThreadLockConstructorLogger.Instance.Log("Hello World")); 
         Console.ReadLine();
    }
  ConsoleLogger constructed
  Hello World
  ConsoleLogger constructed
  Hello World

The ThreadLockConstructorLogger class still has a race condition: the null check is performed only once, before the lock is taken. If two threads pass the check at the same time, the first thread acquires the lock and constructs the ConsoleLogger while the second thread waits at the lock statement. As soon as the lock is released, the second thread acquires it and creates another instance, because the check is never repeated inside the lock.

We can solve the problem by implementing the double-checked locking solution shown below, where we check again, after acquiring the lock, that the instance has not already been created.

    
    public class ThreadLockWriteLogger
    {
        private static ILogger _instance;

        private static readonly Object _lock = new object();

        public static ILogger Instance
        {
            get
            {
                if (_instance == null)
                {
                    lock (_lock)
                    {
                        if (_instance == null)
                        {
                            _instance = new ConsoleLogger();
                        }             
                    }
                }
                return _instance;
            }
        }
    }
Calling the singleton using multiple threads from the console:
    
    static void Main(string[] args)
    {
         Parallel.Invoke(() => ThreadLockWriteLogger.Instance.Log("Hello World"),
                         () => ThreadLockWriteLogger.Instance.Log("Hello World")); 
         Console.ReadLine();
    }
  ConsoleLogger constructed
  Hello World
  Hello World

Lazy Instantiated Singleton

A simpler approach is to use lazy instantiation: by default, Lazy<T> objects are thread-safe, as shown below.

    
    public class LazyLogger
    {
        private static readonly Lazy<ILogger> _instance = 
            new Lazy<ILogger>(() => new ConsoleLogger());

        public static ILogger Instance
        {
            get { return _instance.Value; }
        }
    }
Calling the lazy singleton using multiple threads from the console:
    
    static void Main(string[] args)
    {
         Parallel.Invoke(() => LazyLogger.Instance.Log("Hello World"),
                         () => LazyLogger.Instance.Log("Hello World"));
         Console.ReadLine();
    }
  ConsoleLogger constructed
  Hello World
  Hello World
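
For reference, the single-argument Lazy<T> constructor used above is equivalent to passing LazyThreadSafetyMode.ExecutionAndPublication (from System.Threading) explicitly, which guarantees that only one thread can run the factory delegate and that every caller observes the same instance. The same LazyLogger class with the mode spelled out looks like this:

    public class LazyLogger
    {
        private static readonly Lazy<ILogger> _instance =
            new Lazy<ILogger>(() => new ConsoleLogger(),
                              LazyThreadSafetyMode.ExecutionAndPublication);

        public static ILogger Instance
        {
            get { return _instance.Value; }
        }
    }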

Dependency Injection Singleton

A singleton is often considered an anti-pattern for the following reasons:

  • Hidden dependencies make it hard to tell what a class depends on when the dependencies are not explicitly declared (i.e. via the constructor).
  • Tight coupling occurs when singletons are hardcoded into an application as static calls, which unnecessarily complicates mocking dependencies in automated tests.
  • The Single Responsibility Principle is violated since object creation and lifecycle management are mixed in with the class's actual responsibility.

Dependency Injection (DI) can remedy the problems above. Let's have a look at the example below that consists of a WriteMessageAction class with a dependency on the ILogger interface.

    
    public interface IAction<in T>
    {
        void Do(T message);
    }

    public class WriteMessageAction : IAction<string>
    {
        private readonly ILogger _logger;

        public WriteMessageAction(ILogger logger)
        {
            if (logger == null) throw new ArgumentNullException("logger");
            _logger = logger;
        }

        public void Do(string message)
        {
            _logger.Log(message);
        }
    }
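
Because the logger is injected through the constructor, it is trivial to substitute a test double. The FakeLogger and NUnit-style test below are a minimal sketch for illustration only; they are not part of the original sample:

    public class FakeLogger : ILogger
    {
        public readonly List<string> Messages = new List<string>();

        public void Log(string message)
        {
            Messages.Add(message);
        }
    }

    [Test]
    public void Do_WritesMessageToLogger()
    {
        var logger = new FakeLogger();
        var action = new WriteMessageAction(logger);

        action.Do("Hello World");

        Assert.AreEqual("Hello World", logger.Messages.Single());
    }
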
We can use Unity to manage the lifecycle of the ILogger instance as shown below.
    
static void Main(string[] args)
{
     var container = new UnityContainer();
     container.RegisterType<ILogger, ConsoleLogger>(
                  new ContainerControlledLifetimeManager());
     container.RegisterType<IAction<string>, WriteMessageAction>("One");
     container.RegisterType<IAction<string>, WriteMessageAction>("Two");

     var actions = container.ResolveAll<IAction<string>>();
     foreach (var action in actions)
     {
         action.Do("Hello World");
     }

     Console.ReadLine();
}
  ConsoleLogger constructed
  Hello World
  Hello World
The key is the ContainerControlledLifetimeManager instance passed to RegisterType<ILogger, ConsoleLogger>(), which tells Unity to manage the registration as a singleton. By default, Unity will create a new instance of the object each time it is resolved when we omit the ContainerControlledLifetimeManager constructor parameter, as shown below:
    
static void Main(string[] args)
{
     var container = new UnityContainer();
     container.RegisterType<ILogger, ConsoleLogger>();
     container.RegisterType<IAction<string>, WriteMessageAction>("One");
     container.RegisterType<IAction<string>, WriteMessageAction>("Two");

     var actions = container.ResolveAll<IAction<string>>();
     foreach (var action in actions)
     {
         action.Do("Hello World");
     }

     Console.ReadLine();
}
  ConsoleLogger constructed
  ConsoleLogger constructed
  Hello World
  Hello World