Contextual Friendship Framework
 

Recently, my colleague Rahul Rathore and I had a conversation about object-oriented techniques, and we both agreed that they draw a lot of inspiration from the real world. Below is the background of our conversation and what emerged from it.

Background:

Object-oriented languages like C# and Java are closer to the real world; we can easily mimic real-world behaviors in them. Scenarios are well captured because of their object-oriented abilities. Inheritance, polymorphism, and encapsulation are principles derived from the real world. But there are more real-world concepts that apply to humans and are not well mimicked in the computer world. One of them is friendship (between objects).

For better code reuse, object-oriented programming languages like C++ have a feature called friend classes. A class in C++ grants its friend classes access to all of its private and protected members.

But in the real world, we share only a few things with our friends, based on the context we are in. We have complete control over what we want to share with our friends and family. This is not the case with C++ friend classes: a class shares all of its private and protected members with its friends. This poses a threat to the object-oriented principle of encapsulation.

Because of this threat, newer programming languages like C# and Java have removed friendship between objects entirely. But friendship can significantly increase code reuse and cohesion in objects rather than breaking them. We wanted to have friendship in C# and Java while still following the other object-oriented principles.

We observed that C# is very easy to use and highly extensible, so we embraced C# and extended it with a custom module that enables friendship between classes.

Contextual Friendship Framework:

The framework extends the Microsoft .NET Framework to enable friendship among classes. It lets developers add attributes to classes and their members to declare friendships. There are two attributes available:

  1. FriendOf – for classes
  2. AvailableToFriends – for members

The FriendOf attribute is applied to a class whose private and protected members need to be made available to specific friend classes; it takes an array of .NET Types as a parameter. The AvailableToFriends attribute is applied to a class member to allow access to specific friends only.
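A minimal sketch of how these two attributes might be declared (the framework's actual definitions are not shown in this post, so treat the shapes below as assumptions):

    using System;

    [AttributeUsage(AttributeTargets.Class)]
    public class FriendOfAttribute : Attribute
    {
        // The .NET types that are allowed to befriend the decorated class.
        public Type[] Friends { get; }

        public FriendOfAttribute(params Type[] friends)
        {
            Friends = friends;
        }
    }

    [AttributeUsage(AttributeTargets.Method | AttributeTargets.Property | AttributeTargets.Field)]
    public class AvailableToFriendsAttribute : Attribute
    {
        // Only these friend types may call the decorated member.
        public Type[] Friends { get; }

        public AvailableToFriendsAttribute(params Type[] friends)
        {
            Friends = friends;
        }
    }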

A friend class can access a private/protected member of its friend class by using the 'MakeFriendlyCall' extension method exposed by the friendship framework. Internally, MakeFriendlyCall uses .NET reflection to reach private and protected members, and it only allows calls to members decorated with the AvailableToFriends attribute.
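The post does not show the framework's internals, but a simplified sketch of what MakeFriendlyCall could look like, assuming the attribute shapes above, might be the following (caller discovery via the stack is just one possible approach, and it is fragile under inlining):

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Reflection;

    public static class FriendshipExtensions
    {
        // Simplified sketch; the real framework's implementation may differ.
        public static object MakeFriendlyCall(
            this object target, string methodName, params object[] args)
        {
            var method = target.GetType().GetMethod(
                methodName, BindingFlags.Instance | BindingFlags.NonPublic);
            if (method == null)
                throw new MissingMethodException(target.GetType().Name, methodName);

            // Only members explicitly shared with the caller's type are reachable.
            var shared = method.GetCustomAttribute<AvailableToFriendsAttribute>();
            var caller = new StackFrame(1).GetMethod().DeclaringType;
            if (shared == null || !shared.Friends.Contains(caller))
                throw new MethodAccessException(
                    methodName + " is not available to " + caller.Name);

            return method.Invoke(target, args);
        }
    }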

Friendship enables selective sharing of members with friend class based on the class definition. The module provides facility to decorate members of a class for granting access to its friend. Let’s understand this using the below example

 
    [FriendOf(typeof(World))]
    public class Hello
    {
        public string Name { get; private set; }
 
        public void HelloPub()
        {
            Console.WriteLine("Public Hello");
        }
 
        [AvailableToFriends(typeof(World))]
        private string PrivateMethodsAvlToFriends(string name)
        {
            Name = name;
            Console.WriteLine("Private Hello : " + Name);
            return Name;
        }
 
        private void PrivateMethod(string d)
        {
            Console.WriteLine("Private method" + d);
        }
    }
 

    public class World
    {
        private void DoSomething()
        {
            var h = new Hello();
            h.MakeFriendlyCall("PrivateMethodsAvlToFriends", "John");
        }
    } 

Here we have two classes, Hello and World. The Hello class has two private methods, "PrivateMethodsAvlToFriends" and "PrivateMethod". PrivateMethodsAvlToFriends is decorated with the AvailableToFriends attribute, so the friend class "World" can call this private method using the generically exposed 'MakeFriendlyCall' extension method.

Class diagram of the dependencies:

[FriendshipFramework class diagram]

Alternative solutions:

  • Friend classes in C++
  • Friend assemblies in C#, which make all internals of an assembly visible to another assembly via the InternalsVisibleTo attribute. This is coarser-grained than the C++ friend class and does not allow selective access.

Comparison of C++ friend class vs. the Friendship Framework:

| Criteria | C++ Friend Class | Friendship Framework |
| --- | --- | --- |
| Security | All members are available to friends | Granular control over what is available to friends |
| Encapsulation | Breaks encapsulation | Enhances encapsulation |
| Re-usability | Part of the C++ language, so re-usable | Fully re-usable; the framework ships as a package |
| Design pattern | Access modifier | Decorator pattern; can also be implemented using access modifiers |

Future scope:

We have enabled friendship between classes without breaking encapsulation and security. However, this design can be extended to objects of classes, which makes it even closer to the real world. After all, we do not share our car with a friend all the time.

Also, the friendship attributes 'FriendOf' and 'AvailableToFriends' could be converted into new access modifiers alongside public, private, and protected. This could be done using Roslyn, the .NET compiler platform. The friendship framework is available for .NET, but it can be implemented for Java as well.

 

 

Perspective Designing
 

Recently, I was working with a colleague on refactoring one of our projects. As we added tests, we found a few code issues and continued refactoring. I was feeling happy, as our unit tests were reaping benefits. However, we know TDD or unit testing does not guarantee clean code. As we progressed, naming conventions consumed a lot of our time, and eventually that brought us to a discussion about why specific naming conventions can create a better design. I thought I would share our discussion and practices here.

When we design classes for an application, we often think of them as a subject different from ourselves (the programmer). By different subject, I mean we think of the class as an object and not as a person. When a programmer treats classes/interfaces as personalities and thinks from the perspective of the class, the design can change drastically. This is what we call "perspective designing". Let's take an example:

    using System.Collections.Generic;
    using System.Linq;

    public interface ITotalTaxCalculator
    {
        decimal Calculate(IEnumerable<Product> products);
    }

    public class TotalTaxCalculator : ITotalTaxCalculator
    {
        public decimal Calculate(IEnumerable<Product> products)
        {
            decimal total = 0m;
            // add total of products etc....
            foreach (var product in products)
            {
                using (var dbContext = new ProductContext())
                {
                    // Product, ProductContext, Price and TaxRate are assumed
                    // domain types; they are not defined in the post.
                    var productInDb = dbContext.Products.FirstOrDefault(prod => prod.Id == product.Id);
                    total += product.Price * productInDb.TaxRate;
                }
            }
            return total;
        }
    }

In the above example, the names of the class and interface are perfectly fine. But they are impersonal, and it's very hard to think of them as persons and bring in perspective thinking with these names. So we refactored them to 'ICanCalculateTotalTax' and 'TotalTaxMan'.

    public interface ICanCalculateTotalTax
    {
        decimal Calculate(IEnumerable<Product> products);
    }

    public class TotalTaxMan : ICanCalculateTotalTax
    {
        public decimal Calculate(IEnumerable<Product> products)
        {
            decimal total = 0m;
            // add total of products etc....
            foreach (var product in products)
            {
                total += product.Price * product.TaxRate;
            }
            return total;
        }
    }

These naming conventions draw a lot of inspiration from NServiceBus and its class/interface names. With the new class and interface names, it's easy to think of them as personalities. However, this alone does not guarantee good design, so we still needed refactoring. Perspective thinking comes in handy especially while refactoring. When my colleague and I started putting ourselves in the place of each of the classes, we had very reasonable questions which triggered our object-oriented thinking.

Example 1: As 'ICanCalculateTotalTax', why am I carrying database-related behavior?

Example 2: As 'ICanCalculateTax', why am I holding the logic for which language the result should be presented in?
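As an illustration, the database question from Example 1 could be resolved by giving the lookup its own personality. The 'ICanFindProducts' abstraction below is hypothetical, a sketch of the direction rather than the exact refactoring we did:

    using System.Collections.Generic;

    // Hypothetical personality: the database behavior moves behind it.
    public interface ICanFindProducts
    {
        Product FindById(int id);
    }

    public class TotalTaxMan : ICanCalculateTotalTax
    {
        private readonly ICanFindProducts _productFinder;

        public TotalTaxMan(ICanFindProducts productFinder)
        {
            _productFinder = productFinder;
        }

        public decimal Calculate(IEnumerable<Product> products)
        {
            decimal total = 0m;
            foreach (var product in products)
            {
                // TotalTaxMan no longer knows anything about the database.
                var productInDb = _productFinder.FindById(product.Id);
                total += product.Price * productInDb.TaxRate;
            }
            return total;
        }
    }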

These questions helped us refactor the code to follow good design principles. When we implement such interfaces/abstract classes, we have clarity on what the class is capable of doing. So we generalized this naming convention and questioning attitude and derived the two rules below for doing perspective designing (think like a class).

  • Give personality to the names of classes/interfaces (example: ICanCalculateTax)
  • Use the Agile user-story way of articulating what the class should and should not do (example: As 'ICanCalculateTax', I should be able to provide behavior to calculate tax)

I think "perspective designing" can make classes more object-oriented, and best practices like the SOLID principles automatically fall in line. Let me know your thoughts.

SOLID Principles
 

Many experienced programmers have adopted the SOLID principles, and they are taught in lots of schools these days. Yet I find many people do not clearly understand them. I think these principles are fundamental to creating good object-oriented software, and every developer should learn them. So I have a series of posts on these principles.

Single Responsibility: A piece of software always has a single responsibility!
What this basically translates into is that a piece of code should have one and only one functionality. The corollary is that a piece of code should have only a single reason to change. Some advantages: when requirements change, the change touches a minimal number of existing classes, which means fewer classes to re-test, which cuts down testing effort and improves turnaround times, which leads to a lower overall cost of ownership for the code, which saves $$.

Take the example of a logging class. As a software developer, chances are pretty high that you have at some point used some form of a logger, be it log4j, its .NET port log4net, Serilog, Elmah, or any number of other logging frameworks out there. At their core, one thing all these libraries have in common is their focus on one thing: logging messages, be it to a flat file, an RDBMS, or other forms of persistence. At no point do these libraries attempt to do anything more than just that.

Now, a lot of you have also used the same libraries to do something seemingly different, such as sending out emails. The beauty of their design is that at their core these libraries work at a slightly higher level of abstraction: they take the data to be logged, including the message level/severity, from a source and deliver it to a sink (which in most cases happens to be a flat file). While writing to flat files is usually the default sink implementation provided by the frameworks, at their true level of abstraction they are simply conduits for data, providing solid low-level support for different logging levels based on configuration. Even the basic things we take for granted, such as file lock management and rolling log files, are not truly in the hands of the core framework. These auxiliary features are handled by specific implementations of the log sinks, from DBMS connectors to file writers and even SMTP appenders. This brings us to the second principle.
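Before we get there, here is a minimal sketch of that source-to-sink shape (ILogSink, Logger, and FileSink are illustrative names, not any particular framework's API):

    using System;
    using System.Collections.Generic;
    using System.IO;

    // The logger's single responsibility: route messages from a source to sinks.
    public interface ILogSink
    {
        void Write(string level, string message);
    }

    public class Logger
    {
        private readonly List<ILogSink> _sinks;

        public Logger(IEnumerable<ILogSink> sinks)
        {
            _sinks = new List<ILogSink>(sinks);
        }

        public void Log(string level, string message)
        {
            foreach (var sink in _sinks)
            {
                sink.Write(level, message);
            }
        }
    }

    // The file sink's single responsibility: persist messages to a flat file.
    public class FileSink : ILogSink
    {
        private readonly string _path;

        public FileSink(string path)
        {
            _path = path;
        }

        public void Write(string level, string message)
        {
            File.AppendAllText(_path, level + ": " + message + Environment.NewLine);
        }
    }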

Open-Closed Principle: Software should be open for extension and closed for modification. This weird-sounding principle basically attempts to convey that code should be designed in such a way that its behavior can be modified without actually modifying the source code of the class itself.

Now, historically, this principle has been interpreted in a few different ways, but all of them have relied on inheritance to achieve the purported goal.

One interpretation of this principle suggests that a class is 'open' for extension if its structure enables us to add new properties or functions in addition to the existing ones. The 'closed' part of this interpretation applies when the module itself is available for other modules to use through its publicly available interface.

The other interpretation of the open-closed principle (based on a polymorphic view) refers to the use of abstractions and interfaces to provide and extend certain behaviors, while keeping other (often core) implementation details hidden and closed to change.

The logger example from the first SOLID principle is a prime example of this approach, where the implementations of the data sinks (often referred to as appenders) are supported purely through configuration, without touching the core conduit between the source and the sink. The file appender writing to flat files still maintains the Single Responsibility Principle by assuming responsibility only for writing to flat files, while also taking care of flat-file-specific features and issues such as managing file locks, and often providing out-of-the-box support for rolling log files, which prevents log files from becoming too large to manage and view in ordinary text editors.
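Continuing the earlier sketch, extension happens by adding a new ILogSink implementation; the Logger itself stays closed to modification (again illustrative, mirroring how appenders are plugged in through configuration):

    using System.Net.Mail;

    // A new destination is a new sink; Logger and FileSink are untouched.
    public class SmtpSink : ILogSink
    {
        private readonly SmtpClient _client;
        private readonly string _from;
        private readonly string _to;

        public SmtpSink(SmtpClient client, string from, string to)
        {
            _client = client;
            _from = from;
            _to = to;
        }

        public void Write(string level, string message)
        {
            // Only errors are worth an email; other levels are ignored.
            if (level == "ERROR")
            {
                _client.Send(new MailMessage(_from, _to, "Log alert", message));
            }
        }
    }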

Liskov Substitution Principle: A child class object can be substituted for a base class variable. This one is a slightly more theoretical concept than the others, as it tries to enforce consistency of behavior between parent and child classes rather than just the syntax of the classes themselves.

The gist of the matter is that if S is a subtype of base type T, then all instances of T in a program should be replaceable with instances of S without altering the desirable properties of the program. Note the use of the term 'desirable', which has a more semantic implication than, say, the easier-to-observe consistency of method signatures often associated with overriding/hiding of method implementations in class hierarchies. Without going into the technical details of the precise rules proposed under this principle, it is safe to say that if you see a method or property in a subtype behaving drastically differently from the implementation in its parent, chances are that this principle is being violated in some form.
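The classic rectangle/square illustration (a textbook example, not from this post's logger discussion) shows how identical signatures can still hide behavior that breaks substitutability:

    public class Rectangle
    {
        public virtual int Width { get; set; }
        public virtual int Height { get; set; }
        public int Area { get { return Width * Height; } }
    }

    public class Square : Rectangle
    {
        // A square keeps its sides equal, silently changing its sibling property.
        public override int Width
        {
            get { return base.Width; }
            set { base.Width = value; base.Height = value; }
        }

        public override int Height
        {
            get { return base.Height; }
            set { base.Width = value; base.Height = value; }
        }
    }

    // Rectangle r = new Square();
    // r.Width = 2;
    // r.Height = 3;
    // r.Area is now 9, not the 6 any Rectangle caller expects:
    // Square cannot substitute Rectangle without altering desirable behavior.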

Interface Segregation: Many client-specific interfaces are better than one single monolithic interface.

Dependency Inversion: Depend on abstractions, not on implementations.
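As a one-glance illustration of dependency inversion (a hypothetical report class, reusing the ILogSink abstraction from the sketches above):

    // ReportService depends on the ILogSink abstraction, never on FileSink.
    public class ReportService
    {
        private readonly ILogSink _log;

        public ReportService(ILogSink log)
        {
            _log = log;
        }

        public void Generate()
        {
            _log.Write("INFO", "Report generated");
        }
    }

    // The concrete sink is chosen at composition time:
    // var service = new ReportService(new FileSink("app.log"));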

Learning the SOLID principles has changed my programming life, and the real one too. We shall look at each of these principles in more detail in the coming posts.

Language of the Developers
 

There are lots of people who tell me that they need to understand a specific design pattern (factory, observer, chain of responsibility, etc.). After we spend quite some time discussing the topic they brought up, I ask them why they wanted to understand it in the first place. The answers were quite surprising to me. Most of them said, "Design patterns are the trend now". Some of them said, "I want to write good code". Some candidly accepted, "I was asked about this in an interview".

Well, design patterns are the new trend in the software industry and are asked about in most senior-level technical interviews. For me, those reasons don't justify setting out to understand design patterns. But the answer "I want to write good code" was quite interesting: since the person wants to write good code, he/she is finding ways to improve it by applying patterns. Of course, design patterns are good to understand, and knowing how a requirement/problem has been solved before saves time. But if we take a step back and ask whether we really need to understand design patterns to write good software, the answer is a big NO. Good software emerged even before the GoF patterns were documented. So why do we need to understand design patterns?

When we write the code required to solve the problem, when we write optimal code, when we write code which is readable, testable, and maintainable, we have already applied the correct strategies/principles to solve the problem. But we should realize that when we write code, we create a document not only for the compiler but also for other developers to read. How can we explain our code to another developer in a short time? We need a language! Christopher Alexander (the father of design patterns) calls this the pattern language: a language that professional designers can use to communicate designs. He defines patterns with respect to this language.

The elements of this language are entities called patterns. Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice. — Christopher Alexander

In software terms, design patterns are the constructs of a language that professional software developers can use to communicate their designs. Until developers need to communicate their design/code, they need not know design patterns! But developers communicate mostly with code, and when the code communicates, patterns in the code help the reader understand it quickly.

When to Choose Async and TPL
 

Recent versions of the .NET Framework provide two good features.

  1. Async/Await
  2. Task Parallel Library (TPL)

I hear from a lot of people who are confused about when to use these features. The two solve two different problems of computer science:

  1. Need for faster processing (CPU)
  2. Waiting for dependent operations

The Task Parallel Library (TPL) makes optimal use of multi-core CPUs and reduces the time spent on complex computations (like finding the millionth prime number). The async/await keywords, on the other hand, help reclaim the time wasted waiting for dependent operations to complete (like waiting for a file write, or for a port/buffer to become available).
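A small sketch of the distinction (the sum-of-squares computation and the method names are illustrative placeholders):

    using System.IO;
    using System.Threading;
    using System.Threading.Tasks;

    public static class Examples
    {
        // CPU-bound: the TPL spreads a heavy computation across all cores.
        public static long SumOfSquares(long[] numbers)
        {
            long total = 0;
            Parallel.For(0, numbers.Length,
                () => 0L,                                          // per-thread subtotal
                (i, state, subtotal) => subtotal + numbers[i] * numbers[i],
                subtotal => Interlocked.Add(ref total, subtotal)); // merge subtotals
            return total;
        }

        // I/O-bound: async/await frees the thread while the file write is pending.
        public static async Task SaveAsync(string path, string content)
        {
            using (var writer = new StreamWriter(path))
            {
                await writer.WriteAsync(content);
            }
        }
    }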

I hope this gives you a basic understanding of how these two features differ and when to use each.

 

A Developer is an Author
 

When I started this blog, I was skeptical about whether I could be an author. But what appeared more important to me was spreading knowledge and my thoughts, so I continued blogging. After writing a few articles, what I realized is that writing a blog is not much different from my favorite profession (writing code).

A few similarities between coding and authoring:

  1. Both require deep knowledge of the subject (Domain)
  2. Both require good use of vocabulary
  3. Both require good organizing and formatting
  4. Both require analytical and logical skills
  5. Both focus on the reader

If someone is a good writer, he/she can be a good programmer as well. I want to put it simply like this…

Programmer = Author + Technical Knowledge

Most of us (developers/testers) are focused on attaining higher levels in technology and, unfortunately, less on authoring skills. This is because we think there is a magical box (called a compiler) which will convert our code to machine language, and that after that there is no use for our code. This is not true! When we write code, we are already authors; we are already creating an article/document/story. And apart from the compiler, there are other readers of your code (other developers and testers!). And when they read it, they expect to read it like an article. I know this is not a new concept; people like Robert C. Martin and Martin Fowler spoke/wrote about this a long time ago. I am just re-emphasizing it from a different perspective.

Let us consider a small snippet from the novel "A Stone for Danny Fisher" by Harold Robbins.

[Excerpt from "A Stone for Danny Fisher" by Harold Robbins]

If you observe the paragraphs, each one has a unique purpose. Beautiful short paragraphs doing only one thing at a time (Single Responsibility). This makes it easy for the reader to understand the subject.

Learning: Single Responsibility

Also, observe that the order of the paragraphs tells a story. The first paragraph sets the stage for the court, the second one describes the start of the session, and so on.

Learning: Order the methods in the order they are called. Tell the story of your code.

There are lots of quotations in the above snippet. They represent the words of the characters. This helps the fast skimmer skip the unnecessary parts and read just the important words.

Learning: Characterize your code, give personality to your classes.

Observe that almost all the sentences are in active voice. They represent the voice of a person. It’s more assertive and stronger.

Learning: Method names should be assertive.

There are lots of similarities between good code and a good novel; we just need to read between the lines to understand them! Besides, each author is different and has their own style of writing, and so will be the choices made by developers. These styles have long-term implications for the success of the project.

Re-Organization
 

We see organizational restructuring happen to enable greater collaboration and innovation. I have observed this many times when a multi-million dollar project fails. "What? A simple re-organization to fix a failed project of this size?"

I often wondered how re-organization could help teams and their daily activities (code). Today, I started searching the internet and found a few interesting articles about how a team's collaboration and attitude reflect on the code they are writing. There are lots of research papers which analyze this and explain why organizations re-define their teams to enable greater collaboration.

Later, I found Conway's law:

“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”.

I had to read and re-read this a couple of times and associate it with our work before I could understand it. Below is my understanding of this law.

If there is more collaboration between teams, their code will have more collaboration as well. If the teams are distributed, the code will be more componentized and distributed. If the teams don't talk to each other, neither does their code. Everything that we do as a team reflects on the code.

You can search for more details about Conway's law and the research which substantiates it.

Eventual Consistency
 

Distributed computing has been a research area for quite some time, and now the industry is embracing it with both hands. New buzzwords in the industry like NoSQL, NServiceBus, Hadoop, Big Data, etc. are all one or another form of distributed computing. Multiple algorithms/principles evolved before distributed computing became what it is today. One such approach is eventual consistency across distributed systems.

So what is it?

Eventual consistency is a way of maintaining the consistency of system/application state across machines with acceptable delays. This means that data in a distributed system will not be the same everywhere for some period of time before it achieves consistency. Eventual consistency is not limited to database systems; it applies to other systems as well. Below are a few places where we can see eventual consistency at work:

  •  NoSQL databases like (Cassandra, Gemfire, RavenDB etc.)
  •  Messaging based systems (including NServiceBus)
  •  Domain Name System (DNS)
  •  SQL Server Replication (of course it’s been there for some time)
  •  Amazon’s Dynamo

Why eventual consistency:

General-purpose relational databases like SQL Server, Oracle, MySQL, etc. follow the ACID (Atomicity, Consistency, Isolation, and Durability) properties. In order to follow these rules, they depend heavily on resource locking (rows, pages, tables, and databases). Let's take an example:

UPDATE dbo.employee SET salary=salary + 500 WHERE employee_id=1508

Here we are updating the salary information of an employee. In order to execute this query, the database server needs to lock the row/page to ensure that no one else is viewing stale data (depending on the isolation level) or updating it while the query execution is in progress. The DB server needs to do this for every query executed against the database. If 2 million rows are updated, the database server has to create locks for each of the 2 million rows to ensure data consistency (although lock escalation will eventually kick in). This is a lot for any database engine to handle, and the result of all this locking is poor performance.

Data analytics reveals that 80% of applications do not require immediate consistency but do require very good performance. So in distributed systems, the data is scattered across multiple machines, and replicas are maintained to ensure good read performance. Since the data is distributed, traditional locking mechanisms cannot be used. A few good engineers decided to move the locking logic out of the database systems and place it in the application, so that they have more control over data consistency. The result of this approach is called eventual consistency.

In eventually consistent systems, the data/application state is not updated in all the replicas at the same time. Instead, the primary data owner is updated, and the replicas get the update eventually. The update can be triggered on read, on write, or asynchronously. Eventually consistent systems need to maintain a history of transactions so that they can make the application state consistent, or tolerate the inconsistency and take corrective actions.
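A toy sketch of that flow: the primary applies the write immediately and acknowledges it, while replicas catch up later from a change log. Every name here is illustrative; no real product works exactly like this:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    // A change that will eventually reach every replica.
    public class SalaryUpdated
    {
        public int EmployeeId { get; set; }
        public decimal NewSalary { get; set; }
        public DateTime OccurredAt { get; set; }
    }

    public class Replica
    {
        public ConcurrentDictionary<int, decimal> Salaries { get; } =
            new ConcurrentDictionary<int, decimal>();

        public void Apply(SalaryUpdated change)
        {
            Salaries[change.EmployeeId] = change.NewSalary;
        }
    }

    public class PrimaryStore
    {
        private readonly Replica _primary = new Replica();
        private readonly List<Replica> _replicas;
        private readonly Queue<SalaryUpdated> _changeLog = new Queue<SalaryUpdated>();

        public PrimaryStore(List<Replica> replicas)
        {
            _replicas = replicas;
        }

        // Acknowledged as soon as the primary has it; replicas stay stale for now.
        public void UpdateSalary(int employeeId, decimal newSalary)
        {
            var change = new SalaryUpdated
            {
                EmployeeId = employeeId,
                NewSalary = newSalary,
                OccurredAt = DateTime.UtcNow
            };
            _primary.Apply(change);
            _changeLog.Enqueue(change);
        }

        // Triggered on read, on write, or by a background task: replicas
        // drain the change log and eventually converge with the primary.
        public void Replicate()
        {
            while (_changeLog.Count > 0)
            {
                var change = _changeLog.Dequeue();
                foreach (var replica in _replicas)
                {
                    replica.Apply(change);
                }
            }
        }
    }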

Can we tolerate inconsistency?

This purely depends on the application. Let's take a few examples where NoSQL (or any eventually consistent implementation) is used.

In a sales application, a user (a salesperson) has updated a consumer's date of birth/age, address, and phone number. Let's assume the information is updated in only one of the copies. What if another user (say, a billing person) reads the same consumer information from another data replica which is not yet in sync with the latest update? Of course, the data is inconsistent, and the second user will read stale data (unless we have read-triggered consistency). How important is this information? What action is the second user going to take based on it? At a casual glance, we may say the date of birth or age can be inconsistent for some time. What about the address and phone number? Can we tolerate this inconsistency? Yes and no. Yes, when there is no action taken by the billing system. No, when the billing system is going to use the consumer's address for a shipment; then we cannot work with stale data.


Let's take another example. In a ticket booking system, two users are trying to book a ticket at a cinema hall. Let's assume both of them are trying to block the same seat (say A21) by connecting to two different data replicas. Can both users be allowed to block the same seat? Yes, both users can block the same seat until one of them makes the payment and confirms the ticket. When the second person makes the payment, we need to check whether the seat has already been taken. If the seat is already booked, we need to inform the user and refund the amount.

Critical section:

Every workflow has some portion of logic where data inconsistency cannot be tolerated. We can call this portion of code a critical section. In the billing system above, using the consumer's address for shipment is the critical section for the address data. In the ticketing system, the seat availability data enters its critical section when we receive the payment confirmation. The critical section of the workflow is the place where the application must know whether the data is consistent (a dirty flag), so that it can decide to live with the inconsistency or to enforce consistency.

[CriticalSection diagram]
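In code, the ticket example's critical section might look something like this sketch (IBookingStore, IPaymentGateway, and the seat states are all hypothetical):

    public enum SeatState { Free, Blocked, Booked }

    // Hypothetical store: TryBookAuthoritative consults the consistent
    // source of truth (e.g. the primary), not a possibly stale replica.
    public interface IBookingStore
    {
        bool TryBlock(string seat, int userId);
        bool TryBookAuthoritative(string seat, int userId);
    }

    public interface IPaymentGateway
    {
        void Refund(int userId, decimal amount);
    }

    public class TicketService
    {
        private readonly IBookingStore _store;
        private readonly IPaymentGateway _payments;

        public TicketService(IBookingStore store, IPaymentGateway payments)
        {
            _store = store;
            _payments = payments;
        }

        // Outside the critical section: stale reads are tolerable, so two users
        // on different replicas may both manage to block seat A21.
        public bool TryBlockSeat(string seat, int userId)
        {
            return _store.TryBlock(seat, userId);
        }

        // Critical section: at payment time the seat data must be consistent.
        // If we lost the race, run the corrective workflow (refund).
        public bool ConfirmSeat(string seat, int userId, decimal amount)
        {
            if (_store.TryBookAuthoritative(seat, userId))
            {
                return true;
            }

            _payments.Refund(userId, amount);
            return false;
        }
    }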

A few points to think about before we implement eventual consistency in our applications:

  • We need to identify the data-specific critical sections in our application where data inconsistencies cannot be tolerated.
  • If there are inconsistencies, what are the rollback or corrective workflows that will resolve them?
  • What are the policies around the corrective workflows? They should be clearly documented and made available to the consumers.

End Note:

In SQL Server, changing the isolation level can significantly change the way locks are handled. I remember early versions of SQL Server had only 4 isolation levels, while SQL Server 2012 supports several more, including snapshot-based ones. This is also a sign of how developers are increasingly taking control of parallelism in their applications rather than relying on the DBMS to handle it. With distributed databases, the control is even more in the hands of developers. The world is moving towards faster and more distributed systems, and eventual consistency is one of the things fueling this. I know this is a hot topic for everyone, so don't hold back; please feel free to share your comments/thoughts on the topic.

Read more about micro-services below:

Micro-Services, Eventual Consistency and Event sourcing patterns