A Developer is an Author

When I started this blog, I was skeptical about whether I could be an author. But what mattered more to me was spreading knowledge and my thoughts, so I continued blogging. After writing a few articles, I realized that writing a blog is not much different from my favorite profession (writing code).

A few similarities between coding and authoring:

  1. Both require deep knowledge of the subject (Domain)
  2. Both require good use of vocabulary
  3. Both require good organizing and formatting
  4. Both require analytic and logical skills
  5. Both require a focus on the reader

If someone is a good writer, he/she can be a good programmer as well. I would simply put it like this…

Programmer = Author + Technical Knowledge

Most of us (developers/testers) are focused on attaining higher levels in technology and, unfortunately, far less on authoring skills. This is because we think there is a magical box (called a compiler) that will convert our code to machine language, and that after that our code has no further use. This is not true! When we write code, we are already authors; we are already creating an article/document/story. And apart from the compiler, there are other readers of your code (other developers and testers!). And when they read it, they expect to read it like an article. I know this is not a new concept; people like Robert C. Martin and Martin Fowler spoke and wrote about this a long time ago. I am just re-emphasizing it from a different perspective.

Let us consider a small snippet from the novel “A Stone for Danny Fisher” by Harold Robbins.

[Image: excerpt from “A Stone for Danny Fisher” by Harold Robbins]

If you observe the paragraphs, each one has a unique purpose. Beautiful, short paragraphs, each doing only one thing at a time (Single Responsibility). This makes it easy for the reader to understand the subject.

Learning: Single Responsibility

Also, observe that the order of the paragraphs tells a story. The first paragraph sets the stage for the court, the second one describes the start of the court session, and so on.

Learning: Order your methods in the order they are called. Tell the story of your code.

There are lots of quotations in the above snippet. They represent the words of the characters. This helps a fast skimmer skip the less important parts and read just the key words.

Learning: Characterize your code, give personality to your classes.

Observe that almost all the sentences are in active voice. They represent the voice of a person. Active voice is more assertive and stronger.

Learning: Method names should be assertive.
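To make these learnings concrete, here is a small, hypothetical Python sketch (all names are invented for illustration): each method does one thing, the methods appear in the order they are called, and the names read assertively, so the class tells its story from top to bottom.

class CourtSession:
    """A hypothetical class that reads like the novel's court scene."""

    def run(self):
        # The table of contents: the story, in the order it happens.
        self.set_the_stage()
        self.call_the_session_to_order()
        self.hear_the_witnesses()
        self.announce_the_verdict()

    def set_the_stage(self):
        print("The courtroom fills and falls silent.")

    def call_the_session_to_order(self):
        print("The judge calls the session to order.")

    def hear_the_witnesses(self):
        print("Each witness tells their part of the story.")

    def announce_the_verdict(self):
        print("The verdict is announced.")

CourtSession().run()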

There are lots of similarities between good code and a good novel. We just need to read between the lines to understand them! Besides, each author is different and has their own style of writing, and the same goes for the choices made by developers. These styles have long-term implications for the success of the project.

Re-Organization

We often see organizational restructuring happen to enable greater collaboration and innovation. I have observed this many times when a multi-million dollar project fails. “What? A simple re-organization to fix a failed project of this size?”

I often wondered how re-organization could help teams and their daily activities (code). Today, I started searching the internet and found a few interesting articles about how a team’s collaboration and attitude are reflected in the code it writes. There are lots of research papers that analyze this and explain why organizations redefine their teams to enable greater collaboration.

Later I found Conway’s law.

“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”.

I had to read and re-read this a couple of times and relate it to our work before I could understand it. Below is my understanding of the law.

If there is more collaboration between teams, their code will collaborate more as well. If the teams are distributed, the code will be more componentized and distributed. If the teams don’t talk to each other, neither does their code. Everything that we do as a team reflects in the code.

You can search for more details about Conway’s law and the research that substantiates it.

Eventual Consistency

Distributed computing has been a research area for quite some time, and now the industry is embracing it with both hands. New buzzwords in the industry like NoSQL, NServiceBus, Hadoop, Big Data, etc. are all one form of distributed computing or another. Multiple algorithms and principles evolved before distributed computing became what it is today. One such approach is eventual consistency across distributed systems.

So what is it?

Eventual consistency is a way of maintaining the consistency of system/application state across machines with acceptable delays. This means that data in a distributed system will not be the same everywhere for some period of time before it achieves consistency. Eventual consistency is not limited to database systems; it applies to other systems as well. Below are a few places where we can see eventual consistency at work:

  •  NoSQL databases (Cassandra, GemFire, RavenDB, etc.)
  •  Messaging-based systems (including NServiceBus)
  •  Domain Name System (DNS)
  •  SQL Server Replication (of course it’s been there for some time)
  •  Amazon’s Dynamo

Why eventual consistency?

Any general-purpose relational database like SQL Server, Oracle, or MySQL follows the ACID (Atomicity, Consistency, Isolation and Durability) properties. In order to follow these rules, they are heavily dependent on resource locking (rows, pages, tables, and the database). Let’s take an example:

UPDATE dbo.employee SET salary=salary + 500 WHERE employee_id=1508

Here we are updating the salary information of an employee. In order to execute this query, the database server needs to lock the row/page to ensure that no one else is viewing stale data (depending on the isolation level) or updating it while the query execution is in progress. The DB server needs to do this for every query executed against the database. If 2 million rows are updated, the database server has to create locks for each of the 2 million rows to ensure data consistency (of course, lock escalation will take care of this). This is a lot for any database engine to handle, and the result of all this locking is poor performance.

Experience suggests that the large majority of applications do not require immediate consistency but do require very good performance. So in distributed systems, the data is scattered across multiple machines and replicas are maintained to ensure good read performance. Since the data is distributed, traditional locking mechanisms cannot be used. Some good engineers decided to move the locking logic out of the database system and into the application, so that the application has more control over data consistency. The result of this approach is called eventual consistency.

In eventually consistent systems, the data/application state is not updated in all the replicas at the same time. Instead, the primary data owner is updated and the replicas get the update eventually. The update can be propagated on a READ, on a WRITE, or asynchronously. Eventually consistent systems need to maintain a history of changes so that they can either make the application state consistent or tolerate the inconsistency and take corrective actions.
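To make this concrete, here is a heavily simplified Python sketch (not any particular product’s implementation; all names are hypothetical) of a replicated key-value store: a write is acknowledged after updating only the primary replica, and the other replicas catch up through read-triggered repair using last-write-wins timestamps.

import time

class Replica:
    # One copy of the data; stores (value, timestamp) per key.
    def __init__(self, name):
        self.name = name
        self.data = {}

    def write(self, key, value, ts):
        current = self.data.get(key)
        # Last-write-wins: keep only the newest version.
        if current is None or ts > current[1]:
            self.data[key] = (value, ts)

class EventuallyConsistentStore:
    # Writes go to the primary replica first; others catch up on read.
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, key, value):
        # Acknowledge after updating only the primary (first) replica.
        self.replicas[0].write(key, value, time.time())

    def read(self, key):
        # Pick the newest version any replica currently has...
        versions = [r.data[key] for r in self.replicas if key in r.data]
        if not versions:
            return None
        value, ts = max(versions, key=lambda v: v[1])
        # ...and push it to the lagging replicas (read repair).
        for r in self.replicas:
            r.write(key, value, ts)
        return value

store = EventuallyConsistentStore([Replica("r1"), Replica("r2")])
store.write("employee:1508", {"salary": 2500})   # only r1 knows this yet
print(store.read("employee:1508"))               # read repairs r2 as well

For a while, a reader talking directly to a lagging replica would still see the old value; that window of staleness is the inconsistency the rest of this post talks about.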

Can we tolerate inconsistency?

This purely depends on the application. Let’s take a few examples where NoSQL (or any other eventually consistent implementation) is used.

In a sales application, a user (a salesperson) has updated a consumer’s date of birth/age, address, and phone number. Let’s assume the information is updated in only one of the copies. What if another user (say, a billing person) reads the same consumer information from another replica which is not yet in sync with the latest update? Of course, the data is inconsistent and the second user will read stale data (unless we have read-triggered consistency). How important is this information? What action is the second user going to take based on it? At a casual glance, we may say the date of birth or age can be inconsistent for some time. What about the address and phone number? Can we tolerate this inconsistency? Yes and no. Yes, when no action is taken by the billing system. No, when the billing system is going to use the consumer’s address for shipment; then we cannot work with stale data.

[Figure: Eventual Consistency]

Let’s take another example. In a ticket booking system, two users are trying to book a ticket for a cinema hall. Let’s assume both of them try to block the same seat (say, A21) by connecting to two different data replicas. Can both users be allowed to block the same seat? Yes, both users can block the same seat until one of them makes the payment and confirms the ticket. When the second person makes the payment, we need to check whether the seat has already been taken. If the seat is already booked, we need to inform the user and refund the amount.
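Here is a minimal Python sketch of this confirm-or-compensate idea (the in-memory dictionary, the function names, and the refund callback are all hypothetical; a real system would persist these records):

# Seat blocks are tentative; the conflict is resolved only at payment time,
# with a compensating refund if we lose the race.
confirmed_seats = {}   # seat -> booking_id, the authoritative record

def block_seat(seat, booking_id):
    # Blocking is optimistic: every replica may accept a block for the
    # same seat, so no cross-replica coordination is needed here.
    return {"seat": seat, "booking_id": booking_id, "status": "blocked"}

def confirm_payment(block, refund):
    seat = block["seat"]
    if seat in confirmed_seats and confirmed_seats[seat] != block["booking_id"]:
        # Someone else confirmed first: compensate instead of locking upfront.
        refund(block["booking_id"])
        return "refunded"
    confirmed_seats[seat] = block["booking_id"]
    return "confirmed"

# Usage: two users race for seat A21; the second confirmation is refunded.
b1 = block_seat("A21", booking_id=1)
b2 = block_seat("A21", booking_id=2)
print(confirm_payment(b1, refund=lambda b: print("refund booking", b)))  # confirmed
print(confirm_payment(b2, refund=lambda b: print("refund booking", b)))  # refunded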

Critical section:

Every workflow has some portion of logic where data inconsistency cannot be tolerated. We can call this portion of code a critical section. In the billing system above, using the consumer’s address for shipment is the critical section for the address data. In the ticketing system, the seat-availability data enters its critical section when we receive the payment confirmation. The critical section of the workflow is the place where the application should know whether the data is consistent or not (a dirty flag), so that it can decide either to live with the inconsistency or to enforce consistency.
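A tiny sketch of what such a guard might look like, assuming a hypothetical ship_order step, a customer record carrying a dirty flag, and a refresh_address callback that re-reads the primary copy:

def ship_order(order_id, customer, refresh_address):
    # Outside the critical section (reports, UI, ...) stale data is tolerated.
    if customer.get("address_dirty", False):
        # Inside the critical section we enforce consistency first,
        # e.g. by re-reading the address from the primary copy.
        customer["address"] = refresh_address(customer["id"])
        customer["address_dirty"] = False
    return f"Shipping order {order_id} to {customer['address']}"

customer = {"id": 1508, "address": "old address", "address_dirty": True}
print(ship_order(42, customer, refresh_address=lambda cid: "new address"))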

[Figure: Critical Section]

A few points to think about before we implement eventual consistency in our applications:

  • We need to identify the data-specific critical sections in our application where data inconsistencies cannot be tolerated.
  • If there are inconsistencies, what are the rollback or corrective workflows that will solve the problems caused by the inconsistency?
  • What are the policies around the corrective workflows? They should be clearly documented and made available to the consumers.

End Note:

In SQL Server, changing the isolation level can significantly change the way locks are handled. I remember that SQL Server 7.0 had only the four ANSI isolation levels, while recent versions such as SQL Server 2012 add snapshot-based isolation on top of those. This is also a sign of how developers are increasingly taking control of concurrency in their applications rather than relying on the DBMS to handle it. With distributed databases, the control is even more in the hands of developers. The world is moving towards faster and more distributed systems, and eventual consistency is one of the things fueling this. I know this is a hot topic for everyone, so don’t hold back; please feel free to share your comments/thoughts on the topic.

Read more about micro-services below:

Micro-Services, Eventual Consistency and Event sourcing patterns

SQL Server Transaction Logs Explained in Plain English

Transaction logs are one of the most important parts of SQL Server that any DBA has to understand. Technologies like log shipping, replication, mirroring, etc. depend on the transaction log. I have seen a lot of articles about the transaction log on the net; most of them talk about its structure and inner workings, but few are simple enough for a newbie. So I thought of writing an article with newbies in mind. Please provide your feedback and suggestions in the comments.

In this article, we shall talk about the transaction log and how it works, without delving deep into its internal structure.

Database files and Pages

Any database in SQL Server will have at least one data file and one log file. Data files (with extension .mdf or .ndf) store the actual table structures and the data. SQL Server stores data in the data file in 8 KB blocks called pages, which are the basic unit of storage for tables and indexes. This means the records in a table are stored in the form of pages.

The log file is also called the transaction log, and it is what we are going to discuss in detail. The transaction log is an additional database file that helps make transactions faster and easier to recover in the event of a system failure.

Why do we require a transaction log?

Let us understand what happens internally when SQL Server manipulates data for an insert, update, or delete statement.

When SQL Server performs a data manipulation (like an insert, update, or delete) on a table, it checks whether the required pages are available in memory (physical or virtual, depending on what the memory manager has allocated). If the pages are not in memory, SQL Server fetches the relevant pages from disk, and a copy is kept in memory for performing the modifications. Once the pages are available in memory, the data is manipulated. For faster processing, the changes are made to the pages in memory and are not written to the data file on disk immediately. At this point, the data in memory corresponding to the manipulated records is different from the data in the .mdf file. If the same records are fetched again for other manipulations, SQL Server checks whether the pages are in memory; since they are, the latest/updated data is available for processing.

If there is a server failure at this point, the data modified in memory will be lost, as it has not been written to disk or permanent storage. Hence there is a risk of losing data if the system fails, even though the transaction was committed. In order to mitigate this risk, Microsoft introduced an additional file, called the transaction log, to keep track of the changes happening in the database. When a transaction commits, SQL Server makes sure the information about what changed (this happened, then that happened, etc.) is written to the transaction log file (the .ldf file). This way, all the changes made to the pages in memory are tracked and written to the log file on disk/permanent storage once the transaction is committed. Whenever the system fails, the data in the log file can be used to recover the data that was modified in memory.
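To visualize this idea, here is a heavily simplified Python sketch (a toy model, not SQL Server’s actual implementation; all names are invented): pages are changed only in memory, while the commit is acknowledged only after its log records have been appended to durable storage.

import json, os

class ToyDatabase:
    def __init__(self, log_path):
        self.pages = {}            # in-memory buffer: page_id -> value
        self.log_path = log_path   # the toy "ldf": survives a crash

    def update(self, tx_id, page_id, value):
        # Change the page in memory; nothing goes to the data file yet.
        self.pages[page_id] = value
        self._append_log({"tx": tx_id, "page": page_id, "value": value})

    def commit(self, tx_id):
        # Only after this record reaches disk is the transaction durable.
        self._append_log({"tx": tx_id, "commit": True})

    def _append_log(self, record):
        with open(self.log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
            log.flush()
            os.fsync(log.fileno())   # force the record to permanent storage

db = ToyDatabase("toy.ldf")
db.update(tx_id=1, page_id="employee:1508", value={"salary": 2500})
db.commit(tx_id=1)   # recoverable now, even if the in-memory pages are lost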

Restart recovery and Checkpoint

When a failure happens and SQL Server comes back online, it reads the transaction log and re-runs all the actions that happened up to the time of the failure. This process of replaying data manipulations on a database after a failure is called restart recovery. During restart recovery, all the pages referenced in the transaction log are fetched from disk into memory and the manipulations are performed again, just as they happened during the original transaction.

Restart recovery can run for a long time, depending on the number of transactions carried out before the failure. In order to reduce the time taken, SQL Server performs a checkpoint on each database at frequent intervals. A checkpoint writes the modified pages in memory to disk, irrespective of whether the corresponding transactions are committed or not. With checkpoints in place, SQL Server has to perform restart recovery only for the activity that happened after the latest checkpoint, which reduces the time taken during restart recovery.
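Continuing the toy sketch above (again, hypothetical names and a drastically simplified model), restart recovery could scan the log from the last checkpoint, roll forward the transactions that have a commit record, and simply not reapply the ones that do not:

import json

def restart_recovery(log_path):
    with open(log_path) as log:
        records = [json.loads(line) for line in log]

    # Start scanning from the most recent checkpoint record, if any.
    start = 0
    for i, rec in enumerate(records):
        if rec.get("checkpoint"):
            start = i + 1
    tail = records[start:]

    committed = {rec["tx"] for rec in tail if rec.get("commit")}
    pages = {}
    for rec in tail:
        if "page" in rec and rec["tx"] in committed:
            pages[rec["page"]] = rec["value"]   # roll forward (redo)
        # Changes belonging to uncommitted transactions are not reapplied,
        # which is this toy model's equivalent of rolling them back.
    return pages

print(restart_recovery("toy.ldf"))   # {'employee:1508': {'salary': 2500}}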

System failure Example

Let us consider the following scenarios to explain the above concepts.

Consider five transactions at different stages of execution while checkpoints happen at frequent intervals. The figure shows the progress of each transaction over time. There are two checkpoints, C1 and C2, and a system failure, SF. The figure also shows that:

Transaction 1 (T1) completed before checkpoint C1. Hence the data it modified is retained in memory until the checkpoint writes it to disk.

Transaction 2 (T2) started before the checkpoint but was still in progress when the checkpoint happened. At the checkpoint, all the data modified in memory up to that point is written to disk, even though the transaction is not yet committed. After the checkpoint, the transaction keeps modifying pages in memory, so memory and disk diverge again. But the transaction completed before the failure, so the log has a record showing that it committed.

Transaction 3 (T3) started before the checkpoint and was still in progress when the system failed. This is similar to Transaction 2, but the transaction did not complete before the failure, so the log has no record that it committed.

Transaction 4 (T4) started after the checkpoint and completed before the system failure. All of its data was modified in memory and logged to disk, but none of it reached the data file, so no updated information is available in the data file after the failure.

Transaction 5 (T5) started after the checkpoint and was still running at the time of the failure. Again, no updated information is available in the data file at the time of the failure.

[Figure: TransactionLog-Checkpoint]

Fig: Actions taken for different transactions after recovery

The following actions will be taken during the restart recovery for each of the scenarios considered.

  1. Transaction 1: No action is taken for Transaction 1, as the data it modified in memory was already written to disk by the checkpoint.
  2. Transaction 2: Since part of the transaction was already written to disk, the relevant pages are read from disk into memory. Because the transaction completed before the failure, all of its activities are available in the transaction log, and every data manipulation done after the checkpoint is re-run. This is called roll forward.
  3. Transaction 3: Part of the transaction was already written to disk, so the pages are fetched into memory and the actions carried out after the checkpoint are read from the transaction log and redone. But the transaction is then rolled back, as there is no record that it committed.
  4. Transaction 4: All the data modified in memory was lost, as the failure happened before the next checkpoint. The transaction log, however, has information about what changed and that the transaction committed. Hence all of its activities are performed again and the transaction is completed.
  5. Transaction 5: All the data modified in memory was lost, as the failure happened before the next checkpoint, and the transaction log has no record that the transaction committed. Hence it is rolled back.

Conclusion

The transaction log is a vital component in making databases faster and easier to recover after a failure. Understanding its basics is important for anyone who wants to be a good DBA or programmer. The transaction log is also the basis for data reliability and scalability technologies like log shipping, mirroring, replication, change data capture, change tracking, etc. For further reading on the transaction log, here are a few pointers:

http://msdn.microsoft.com/en-us/library/ms345583.aspx

http://www.sqlservercentral.com/articles/Design+and+Theory/63350/