AI / ML – Past, Present & Future – Part 2
 

A quick recap: in my last update we traced the advent of the industrial revolutions and the current revolution of the ‘Machines’ that we are now living through. New-age machines that learn will, by using AI/ML, mould the world we are going to experience. In this excerpt we will understand the machine itself, the raw material that constitutes it, and how the world of AI/ML comes alive.

New Machine: A system of intelligence that combines software, hardware, data and human input:

  • Software that learns
  • Massive hardware processing power
  • Huge amounts of data

Any new machine exhibiting AI has three main elements:

1. Digital Process Logic

  • Transform many manual processes into automated ones
  • Compare the car dispatch process of a traditional taxi company with Uber’s
  • A digitized process multiplied over millions of transactions revolutionizes an industry; structuring the process is the hardest part
    • thereby new large databases that are stable, scalable and tested (e.g. Hadoop) are finding favor over Oracle/SAP

2. Machine Intelligence

  • A combination of algorithms, automated processes, machine learning and neural networks, fed by an ever richer data set; this is the HEART of the new machine
    • thereby highly efficient, always-on plumbing

3. Software Ecosystem

  • Multiple systems of intelligence connected through APIs. For example, Uber uses Twilio for cloud communications, Google for maps, Braintree for payments, SendGrid for email, etc.
    • thereby an intelligent system in action

So, finally, what it takes to run an AI system is a combination of the above: not just a system, but a very intelligent system based on new and enhanced learning.

Figure: Intelligent System working (Source: Internet)

To illustrate this with a live example from the most successful Internet media-services provider and production company in North America, I have put together the to-be story of the system of intelligence that they built.

There is a big difference between merely having all the necessary ingredients of the new machines and actually getting them to perform at a high level. An intelligent system that can help you be the Michael Phelps of whatever race you are in will have all or most of the characteristics below.

  • Learns more than any other system
  • Is open to changes and corrections
  • Is not just automated but also involves human inputs
  • Is focused on a confined problem
  • Treats the individual experience as the top priority
  • Is constantly improving

Once the intelligent system is in place, you finally need a way to measure whether it is doing the right things:

  1. An intelligent system has to keep getting better, and that depends heavily on the quality of the data being fed into it
  2. Intelligent systems have to be a journey for the organization, not just an individual contribution
  3. The system should take ownership of more and more of the data analysis and reduce human intervention

Every day that passes gives us more evidence and strengthens our conviction that the intelligent systems we are trying to understand in this part are the engines of the fourth industrial revolution. Individuals and companies that get on this bandwagon early are the ones reaping rich benefits by solving their major problems. So it is obvious that we need to go the AI/ML way sooner or later. Are you ready?

Stay tuned… In Part III of this foray we will quickly look at the making of the data that makes an AI system successful, and then dwell upon the Digital Business Models and Solutions: the “Robots” that outline the design and delivery of the AI platform.

AI / ML – Past, Present & Future – Part I
 

The world has seen development and growth primarily driven by the industrial revolutions. Each of these revolutions marked a time of economic dislocation, when old ways of production became defunct and had to give way to far better, newer ways of production that could harness the improvements brought in by new machines. The First Industrial Revolution was powered by the invention of the loom, the second by the steam engine and the third by the assembly line; the fourth will be powered by machines that seem to think. We are HERE, in the fourth one.

Between one industrial revolution and the next there is a long and bumpy road connecting one era of business and technology to the other; the evolution of each industrial revolution follows the path of an S-curve (as shown in the figure below):

  1. IDEA BURST: Breakthroughs; high concentration of wealth; new industries created; new technology generates press clippings but has no impact yet on existing industries
  2. BUMPY ROAD: The revolution stalls; skepticism about the value created in phase 1; economic models and value chains get created; change begins in existing industries
  3. MASSIVE LIFT-OFF: Everybody is richly rewarded; national GDP gets a vertical lift-off; wealth is distributed widely

Just to see the impact of the AI-driven world in front of us, here are a few commonsensical uses of AI in the future:

  • One third of all food produced goes to waste; with AI it could be moved to third-world countries to address the hunger prevalent there
  • 12 million medical misdiagnoses in the US alone contribute to about 400,000 deaths; with the right usage of AI, most of these deaths can be avoided
  • Driverless cars are reducing the annual number of accidents from 4.2 to 3.2 per million miles driven, and this will keep improving as days go by

Now that the machines are in, we need to see what we are supposed to expect:

  • Technology will be embedded into everything (IoT – Internet of Things)
  • As machines become better, it is obvious that by the standards of the year 2030 the current framework of machines will look primitive; expect continuous improvement of these machines
  • Becoming digital means mastering the three Ms (raw Materials, new Machines, and business Models)

While the statistics on job displacement below look detrimental to many human futures, the pace of elimination will be slow. Consider the following:

  • Most likely scenario: 12% of jobs eliminated in the next 10 to 15 years
    • 3 scenarios
      • Job Automation: 12% are at risk
      • Job Enhancement: 75% of existing jobs will be altered
      • Job Creation: 13% net new jobs will get created due to new machine requirements or new job categories

The 13% of net new jobs, together with the jobs that cannot be automated or enhanced, will still need human intervention and will keep the true need for humans in place vis-a-vis machines replacing each one of us. Scary, isn’t it?

Source: internet

So let me introduce you to some key definitions to keep us on track on this arduous journey.

What is AI?

Artificial Intelligence – Coined in 1956 by Dartmouth Assistant Professor John McCarthy, ‘Artificial Intelligence’ (AI) is a general term that refers to hardware or software that exhibits behaviour which appears intelligent. In the words of Professor McCarthy, it is “the science and engineering of making intelligent machines, especially intelligent computer programs.”

Other sources term AI (Artificial Intelligence) as an area of computer science that focuses on machines that learn. There are three types of AI prevalent:

  • Narrow AI (ANI)/Applied AI: purpose-built, with a business focus on a specific task, e.g. driving a car, reviewing an X-ray, tracking financial trades
  • General AI (AGI)/Strong AI: the pursuit of a machine with the same general intelligence as a human, e.g. figuring out how to make coffee in an average American home
  • Super AI: ten (or a thousand) steps ahead of us; a technological genie that could wreak havoc around us

By the way, AI has existed for decades, via rules-based programs that deliver rudimentary displays of ‘intelligence’ in specific contexts. Progress, however, has been limited, because algorithms to tackle many real-world problems are too complex for people to program by hand. Resolving this class of complex problems is the world of ML (Machine Learning).

Machine learning (ML) is a sub-set of AI. All machine learning is AI, but not all AI is machine learning . Machine learning lets us tackle problems that are too complex for humans to solve by shifting some of the burden to the algorithm. As AI pioneer Arthur Samuel wrote in 1959, machine learning is the ‘field of study that gives computers the ability to learn without being explicitly programmed.’

The goal of most machine learning is to develop a prediction engine for a particular use case. An algorithm will receive information about a domain (say, the films a person has watched in the past) and weigh the inputs to make a useful prediction (the probability of the person enjoying a different film in the future).  Machine learning algorithms learn through training. An algorithm initially receives examples whose outputs are known, notes the difference between its predictions and the correct outputs, and tunes the weightings of the inputs to improve the accuracy of its predictions until they are optimized.
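
As a rough illustration of that training loop (the data, the learning rate and the single weight below are made-up assumptions, not from any particular library), here is a minimal sketch in C#: a one-weight predictor receives examples with known outputs, notes the prediction error, and nudges its weighting until the predictions settle.

using System;

class MiniLearner
{
    static void Main()
    {
        // Toy examples with known outputs: the hidden relationship is output = 2 x input.
        double[] inputs  = { 1, 2, 3, 4 };
        double[] outputs = { 2, 4, 6, 8 };

        double weight = 0.0;        // the single weighting being tuned
        double learningRate = 0.01; // how strongly each error adjusts the weight

        for (int pass = 0; pass < 1000; pass++)
        {
            for (int i = 0; i < inputs.Length; i++)
            {
                double prediction = weight * inputs[i];
                double error = prediction - outputs[i];      // difference from the correct output
                weight -= learningRate * error * inputs[i];  // tune the weighting to shrink the error
            }
        }

        Console.WriteLine($"Learned weight: {weight:F3}");             // approaches 2.0
        Console.WriteLine($"Prediction for input 5: {weight * 5:F2}"); // approaches 10
    }
}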

Why is AI important?

AI is important because it tackles difficult problems in a way our human brain would, but much faster and with fewer errors, obviously resulting in human well-being. Since the 1950s, AI research has focused on five fields of enquiry:

  1. Reasoning: the ability to solve problems through logical deduction
  2. Knowledge: the ability to represent knowledge about the world (the understanding that there are certain entities, events and situations in the world; those elements have properties; and those elements can be categorised.)
  3. Planning: the ability to set and achieve goals (there is a specific future state of the world that is desirable, and sequences of actions can be undertaken that will effect progress towards it)
  4. Communication: the ability to understand written and spoken language.
  5. Perception: the ability to deduce things about the world from visual images, sounds and other sensory inputs.

AI has thus already gone past imagination and is already part of our homes, workplaces, communities and what not. To say it simply, it is infiltrating all the frameworks that drive the global economy. From Siri, Alexa and Google Home to Nest and Uber, the world is covered with smart machines operating on extremely strong software platforms that are in self-learning mode. And I am not sure if this is the best part or the scary part: this is just the BEGINNING! I call it scary because these new inventions are always “ready to learn” and constantly “adding intelligence”, which will very soon challenge and enhance the intellect and experience of the savviest professionals in every sector.

Stay tuned… In Part II of this foray we will dwell upon the Raw Materials and New Machines that form the core of the AI platform.

Agile Myths
 

Myth 1. Stand-up is a status meeting
Reality – The daily stand-up is a planning meeting. It promotes collaboration and self-organization, enables inspection and adaptation, and keeps the focus on achieving the outcome.

Myth 2. Pair programming decreases the productivity
Reality – Pair programming increases productivity in the majority of cases when done the right way.

Myth 3. Agile doesn’t need project manager
Reality – The Scrum Master plays the role of a project manager.

Myth 4. Agile is another word for Scrum.
Reality – Scrum is one of the Agile frameworks. There are many more, like Kanban, XP, FDD etc.

Myth 5. Agile is faster than Waterfall
Reality – Agile is less about delivering software faster and more about delivering value faster.

Myth 6. Agile means no documentation
Reality – Agile doesn’t do documentation for documentation’s sake. Documentation is like any other deliverable on an Agile project: it gets estimated and prioritized like any other user story.

Myth 7. We’re a self-organizing Agile team, so we can do anything we want
Reality – The core principles are not allowed to be tailored. Agile is a lightweight process, but the core practices are expected to be followed.

Myth 8. Agile is a silver bullet
Reality – Projects done in Agile do fail. Agile cannot be an excuse to stop thinking; it promotes a fail-fast philosophy so that one can learn from failures.

Myth 9. Story points can be exactly translated to hours.
Reality – As the team matures, you can translate story points into approximate, but never exact, hours.

Myth 10. Product owners, developers and scrum masters get to do whatever they like.
Reality – Agile is a very disciplined framework. Every role has a set of responsibilities, and the core responsibilities should not be switched.

Myth 11. There is always a right size for a user story.
Reality – Every team is different, hence there is no universal right size for a user story. A team producing 30 story points can be more productive than a team doing 45 story points.

Myth 12. Agile means no planning, no design
Reality – Agile typically needs more planning and design than waterfall; both activities happen throughout development.

Simple introduction to NServiceBus
 

What is NServiceBus

NServiceBus is a Microsoft .NET based framework for implementing the Enterprise Service Bus pattern. It is a framework for implementing reliable messaging over various transports without having to write boilerplate code. NServiceBus also provides facilities for pub/sub, retries, format abstraction, encryption, workflow orchestration, and scalability. Below are a few highlights of the system.

  • Seamlessly work with various messaging systems (Transport)
  • Automatic Serialisation & De-serialisation
  • Enforcing a Bus architecture
  • Transaction Scope (Unit of Work)
  • Simple model to handle long running transactions (Sagas)

Building a simple Unicast bus using NServiceBus

Let’s use .NET Core 2.0 for building a simple bus. Everything in NServiceBus can be configured using a simple class called “EndpointConfiguration”. An endpoint is a service that uses the NServiceBus framework for handling messages.

var endpointConfiguration = new EndpointConfiguration("UnicastBus");

NServiceBus uses persistence to store configuration and temporary data so that the data is not lost when there is a failure (think of the fallacies of the network). We can configure NServiceBus persistence using the code below. We are using in-memory persistence for the sample app, but in real applications a more durable persistence option such as SQL, RavenDB, Azure Service Fabric etc. would be used.

endpointConfiguration.UsePersistence<InMemoryPersistence>();

NServiceBus is a framework and is not a message queue on its own. However, it uses an adapter pattern to fit any open source or commercial queuing system. In our case, we are wiring it up with RabbitMQ (an open source message queuing system written in Erlang).

var transportExtensions = endpointConfiguration.UseTransport<RabbitMQTransport>();

transportExtensions.ConnectionString("host=localhost;username=guest;password=guest");

RabbitMQ also has a concept of routing topology, where a user can configure how data is passed between exchanges and queues. In our case, we are going to use the ‘direct routing’ topology.

transportExtensions.UseDirectRoutingTopology();

var routing = transportExtensions.Routing();
routing.RouteToEndpoint(assembly: typeof(Order).Assembly, destination: "Sales");

Through this configuration, all messages whose types live in the Order assembly and are sent from the “UnicastBus” endpoint are pushed to the “Sales” endpoint. That’s it; we are good to start the unicast bus.

var endpointInstance = await Endpoint.Start(endpointConfiguration).ConfigureAwait(false);

Now, sending messages using the bus that we just created is simple.

await endpointInstance.Send(new Order { OrderId = "1" }).ConfigureAwait(false);

To see the result, we should run the program and look for messages in the ‘Sales’ queue in RabbitMQ management tool.
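
To complete the picture, here is a minimal sketch of what the receiving side could look like. The Order message class and the handler below are assumptions for illustration (they are not shown in the snippets above); the handler follows the NServiceBus IHandleMessages<T> pattern.

using System;
using System.Threading.Tasks;
using NServiceBus;

// Assumed message contract, shared by the sending and receiving endpoints.
public class Order : ICommand
{
    public string OrderId { get; set; }
}

// Hosted by the "Sales" endpoint; NServiceBus invokes Handle for every Order it receives.
public class OrderHandler : IHandleMessages<Order>
{
    public Task Handle(Order message, IMessageHandlerContext context)
    {
        Console.WriteLine($"Sales received order {message.OrderId}");
        return Task.CompletedTask;
    }
}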

Conclusion:

NServiceBus is one of the frameworks for doing reliable messaging and for handling the fallacies of distributed computing. This is just a gentle introduction to NServiceBus; we shall see more advanced concepts in future posts.

Docker – Basics and Benefits
 

Docker

How many times have you encountered a situation where software built on one platform doesn’t work on another? It is a very expensive affair to ensure that your software works on all platforms (mobile, tablets, PC and so on). This is one of the crucial problems Docker helps you resolve, but it is not the only one. There is much more to Docker, which we will explore in my upcoming blogs.

Docker is a container management service, a tool designed to help developers deploy and run applications using containers. It ensures that code behaves the same regardless of the environment or platform.

Key components of Docker

  1. Docker File – A Dockerfile is a text document that contains the series of commands a user would run to create a Docker image; Docker builds images automatically by reading the instructions in a Dockerfile (a minimal example follows this list).
  2. Docker Image – A lightweight snapshot of a virtual machine: essentially an application with all its requirements and dependencies (a read-only copy of the OS libraries, application libraries etc.). When you run a Docker image, you get a Docker container. Once a Docker image is created, it is guaranteed to run on any Docker platform, and it can be retrieved to create a container in any environment.
  3. Docker Registry – Docker images can be stored in an online repository called Docker Hub (https://hub.docker.com/). You can also save these images to your version control system.
  4. Docker Container – A Docker container is nothing but a runtime instance of a Docker image.
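
As a minimal sketch of how these pieces fit together (the base image, file names, repository and tag below are assumptions for illustration, not taken from this post), a Dockerfile for a small .NET Core app and the commands to build, push and run it might look like this:

# Dockerfile – assumed sample that packages a published .NET Core app into an image
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
# copy the locally published output into the image
COPY ./publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]

docker build -t myrepo/myapp:1.0 .   # build the image from the Dockerfile
docker push myrepo/myapp:1.0         # store the image in a registry such as Docker Hub
docker run -d myrepo/myapp:1.0       # start a container from the image on any Docker host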

Benefits of Docker

  1. Build your application once and use it in multiple environments without needing to build or configure it again. An application built in the dev environment is guaranteed to work in the prod environment.
  2. A centralized Docker repository makes it very easy to store and retrieve images. Moreover, you don’t need to build an image from scratch; you can always leverage an existing image and go from there. Sharing an image becomes very simple as well.
  3. Version Control – You can always create the next image and version-control your images.
  4. Isolation – Every application works inside its own container and never interferes with other containers. If you no longer need an application, you can delete its container. Every container has its own resources, hence there is no contention.
  5. Security – Isolation ensures that applications running in containers are completely segregated. A container cannot look into, or control, processes running in other containers. This greatly enhances security.
  6. Multi-cloud platforms/Portability – An image built on Amazon EC2 can very well be ported to Microsoft Azure.
  7. Productivity – This is an implicit benefit of using Docker. The speed of development is much faster, as the main focus is writing code and business logic rather than worrying extensively about deployment and testing.

Why choose Couch DB over Relational Database?
 

Choosing a correct database is very important in software development.

A relational database maintains ACID rules. The relational model requires the database engine to manage writes in a special way: it locks rows and tables to maintain atomicity and consistency. Protecting referential integrity across tables and rows increases locking time, and increased locking time means higher latency and a slower application. Developers face problems with relational databases such as:

Object-Relational Impedance Mismatch: when an RDBMS is used by an application written in an object-oriented programming language, objects or class definitions need to be mapped to database tables defined by a relational schema. This misalignment between application objects and tables and rows is called the impedance mismatch.

As an example, consider the application code below:

class Foo
{
    int Id;
    string[] colours;
}

In this example, the object Foo contains a field Id and a field colours (a collection of strings). To store this in a relational database we need to create three tables:

  • Main object
  • Colour information
  • Relation between colour and main object

This forces the developer to write a mapping layer, such as Entity Framework or ADO.NET, to translate between the objects in memory and what is saved in the database.

Many developers use object-oriented languages. Objects are not rows and tables, and not all objects are uniform; mapping those objects into rows can be painful.

Couch DB is schema-less. There is no fixed relation between a collection of objects: a developer can store any type of document, and documents can be flat and simple or as complex as the application requires.

Couch DB document for “Foo”:

{
  "Id": 1,
  "Colours": ["red", "yellow"]
}

Scalability Issues: scaling out and replicating data to other servers requires even more locking. A relational database tries to stay consistent, which increases locking time, and as a result the application gets slower.

Replication is one of the core features of Couch DB. Replication takes advantage of clustering to achieve scalability: we just need to mention the source and destination databases, and Couch DB will handle replicating the data to the destination. This can also be triggered through a REST call, a POST request to the “_replicate” endpoint with the source and destination servers specified in the body of the request.
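
As a sketch of that REST call (the database names and host below are assumptions for illustration), the replication request could look like this:

POST /_replicate HTTP/1.1
Content-Type: application/json

{
  "source": "orders",
  "target": "http://replica-host:5984/orders"
}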

RDBMS with MS SQL Server       | NoSQL with Couch DB
Defined tables (schema)        | No schema
Rows and columns               | Documents
Dynamic queries                | Predefined queries
Joins/relations                | Not required
T-SQL                          | MapReduce
OLEDB/ODBC/EF/ADO.NET          | REST API
Management Studio              | Futon
Constraints, triggers, SPs     | Validations, show & list functions

 

Couch DB stores documents as JSON, uses JavaScript for MapReduce queries and HTTP for its API. Couch DB can be considered a B-tree manager with an HTTP interface: it uses a data structure called a B-tree to index its documents and views. It maintains optimistic concurrency via MVCC (multi-version concurrency control), so previous versions of a document remain available until the database is compacted.
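
As a sketch of MVCC in practice (the document id, revision values and database name are assumptions for illustration): every document carries a _rev revision token, an update must quote the revision it read, and an update quoting a stale revision is rejected with a conflict rather than being blocked by a lock.

PUT /foo/1 HTTP/1.1
Content-Type: application/json

{ "_id": "1", "_rev": "1-abc123", "Colours": ["red", "yellow", "green"] }

If another writer has already produced a newer revision, Couch DB rejects the stale update:

HTTP/1.1 409 Conflict

{ "error": "conflict", "reason": "Document update conflict." }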

Couch DB only appends data to the database file that keeps the B-tree on disk, so the file grows at the end. The B-tree delivers fast searches, inserts, and deletes.

Couch DB also features server-side document validation and on-the-fly document transformation. Although documents are stored as JSON, a document can be served as XML or CSV.

Architecture of Couch DB:

The lowest tier is a simple JSON-based file store. The storage engine is responsible for accepting JSON documents and serializing them to disk, and it is the only component with direct access to the JSON store. The query engine provides fast access to the stored data; query definitions are JavaScript functions stored in the database, and a B-tree-structured index is built for every query and stored in the database, which helps the query engine read the data fast. The replication engine provides master-master bidirectional replication. Through the REST API we can access any of the three capabilities.

 

 

 

 

Service Oriented Architecture – Basics
 

Service Oriented Architecture (SOA) is not a technology; it is a collection of services. Precisely, it is an architectural style for building business applications using loosely coupled services which communicate with each other.

SOA = Messages + Services

Services – Self-contained business functionality that communicates using messages.

Messages – A message is a discrete unit of communication. It should be cross-platform, secure, asynchronous, reliable, follow industry standards, and be able to describe and discover services.

The SOA shifts your thinking from classes and objects to messages.

A web service is nothing but a service available over the web (or a method of communication between two devices over a network), while an API (Application Programming Interface) acts as an interface between two different applications so that they can communicate with each other. An API may or may not be web-based. Having said that, all web services are APIs but not all APIs are web services. A web service always needs a network, while an API may or may not.

Web services are mainly of two types:

  1. SOAP
  2. RESTful

Let’s understand the differences between SOAP and REST

  1. SOAP stands for Simple Object Access Protocol. REST stands for Representational State Transfer.
  2. SOAP is an XML-based protocol which uses WSDL for communication between source and recipient. REST is an architectural style which uses XML and JSON to communicate between consumer and provider.
  3. SOAP uses RPC-style calls while REST directly uses URLs (resources).
  4. SOAP messages can be transferred over HTTP, FTP, SMTP and other protocols, while REST uses HTTP only.
  5. SOAP is harder to implement than REST.
  6. SOAP’s performance is slower compared to REST.
  7. SOAP is considered more secure since it defines its own security. REST is less secure on its own; the transport defines the security (smart consumer and provider, dumb pipe).

The SOAP message is large. The core element of a SOAP message is the SOAP envelope, which defines the start and end of the message. It has two parts: 1. the SOAP header and 2. the SOAP body.
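
As a sketch of that structure (the namespace prefix and the placeholder comments are illustrative), a SOAP 1.1 envelope has this basic shape:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- optional metadata such as security tokens, routing or transaction context -->
  </soap:Header>
  <soap:Body>
    <!-- the actual request or response payload -->
  </soap:Body>
</soap:Envelope>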

WSDL – Web Service Description Language

The WSDL is an XML-based interface description used to describe the functionality of a web service.

Microservices to Micro-Frontends
 

Microservices

Microservices is one of the newer concepts and a variant of Service Oriented Architecture (SOA). SOA has been around for almost two decades, while microservices came into existence around 2012.

The idea is to have small autonomous services work together to build a large, complex application. The approach focuses on individual business sub-domains, building small services that are easier to maintain, and promotes independently deployable pieces, thus ensuring that internal changes in one service do not affect or require the redeployment of other services. In today’s software development landscape most applications are monolithic, and one of the drawbacks of this approach is that business owners get to take only a limited number of decisions in a year (slower response times because of dependencies). For instance, upgrading a product or adding significant new functionality within a set of related services requires a concerted effort from all concerned parties to deliver changes in a synchronized manner. Microservices allow you to take far-reaching business decisions more spontaneously, as each microservice works independently and the individual team managing it has good control over its changes. This is also helped by the fact that well-implemented microservices steer clear of the ‘shared database’ model of development (mentioned later in this post).

The microservices architecture allows each team to decide the technology and infrastructure that works best for them, which may be completely different from the other microservices it interacts with for the very same product. Another aspect of choice is seen when attempting to scale a monolithic application, where you need to scale every component because all components of the product typically run in the same process. This reduces flexibility when only a small subset of the features needs to be scaled (e.g. a performance bottleneck in one piece of a payment processing pipeline). Given such circumstances, scaling a monolithic application as a whole may soon turn into an expensive affair. On the other hand, each set of microservices can (potentially) be scaled independently of the others, thus helping focus resources on the areas where the problem truly lies.

I have been reading Building Microservices by Sam Newman and really liked the eight key principles explained by him. In this post I am attempting to provide a high-level overview of those principles, and I highly encourage you to read his book for a more detailed understanding of these concepts.

The eight key principles are

1. Modelled around business domain 

Focus on your business domain and identify individual subdomains and build services. A good way to start would be to follow the principles of Domain Driven Design.

2. Culture of automation

Infrastructure automation is what smart companies are focusing on today.  Provisioning a new machine, operating system and service should be automated.

Automation testing and continuous delivery are critical as well so as to deploy/release your software frequently and reliably.

3. Hide implementation details.

Hide your database, and hide your functionality. Each service should have its own database, and if information is needed from other services, leverage service endpoints designed for the specific subdomain to extract what is expected. While migrating a monolithic application to a microservice(ish) structure, it is often considered easiest to tease apart the application-level code while leaving the (shared) underlying database as is. A variety of rationale is offered for doing this, ranging from a lack of confidence in the success of the migration to potential issues for data analysis and report aggregation. However, that shared database continues to serve as a source of coupling between the supposedly independent services, far greater than the decoupling achieved by spinning off the application-level services.

4. Decentralise all the things

Focus on autonomy (giving people as much freedom as possible to do the job at hand), self-service (do you have to create a ticket to provision a machine, or can you do it all by yourself?), shared governance (making the architecture work) and avoiding complex messaging.

5. Deploy independently

If you have 4 services and all of them have to be deployed together due to dependencies, fix that before you create a 5th service. Having one service per host makes life very easy for everyone; Docker is getting a lot of traction for this kind of isolation of operating environments.

In such an environment of independent deployments, consumers drive the contracts: when you make a change, the consumer services expect not to face breaking changes when you deploy.

When there are co-existing endpoints, for instance in an upgrade scenario, a consumer service can switch to the newer version while the provider continues supporting the existing version for a limited period, giving other services time to migrate without holding the entire system hostage to its changes.

6. Consumer First

As the creator of an API, it is very important that you make your service easy to consume. The documentation plays an important role here.

7. Isolate Failures

A microservice architecture doesn’t automatically make your systems more stable. On the contrary, it makes the overall system more vulnerable to certain types of network and hardware related issues (more points of failure). You should have ways to isolate failures and to recover from them, such as failover caching and retry logic.
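
As a minimal sketch of the retry idea in C# (the attempt count, delay and the GetOrderAsync call in the usage comment are illustrative assumptions, not a recommendation of specific values), a call to a flaky downstream service could be wrapped like this:

using System;
using System.Threading.Tasks;

public static class Retry
{
    // Retries an async operation a few times with a simple exponential back-off.
    public static async Task<T> WithRetries<T>(Func<Task<T>> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Wait a little longer after each failure before trying again.
                await Task.Delay(TimeSpan.FromMilliseconds(200 * Math.Pow(2, attempt)));
            }
        }
    }
}

// Usage: var order = await Retry.WithRetries(() => ordersClient.GetOrderAsync(orderId));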

8. Highly observable

It is very important to know what is happening in your system with so many moving parts. Each service may depend on multiple services and vice versa.  There needs to be constant observation and monitoring to ensure the whole integration is smooth.

Microservices often communicate via HTTP/REST (synchronous) or use asynchronous protocols like JMS, RabbitMQ etc. It is perfectly fine and acceptable to use synchronous protocols for public APIs, while for internal communication between microservices you should prefer asynchronous protocols.

In a typical monolith application, when you want to fetch the data to show in a search screen, all you have to do is join multiple tables and present the result to the user. If you want to achieve the same using microservices, there is going to be a big performance hit if you retrieve the data from each and every microservice on the fly, which may not be a good idea. In this situation I would recommend going with Elasticsearch: display the high-level data coming from a single search index, and only when a user clicks on an individual item go to the individual microservices, which you can call in parallel on top of it. This can significantly reduce the pain of performance bottlenecks in homegrown solutions. There are multiple scenarios like this which entice teams and organizations to avoid microservices even though an easier way out exists. I will be discussing some of the most common challenges and their resolutions in detail in my upcoming blogs.
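
As a sketch of that parallel fan-out (the service URLs, endpoints and the raw string results are assumptions for illustration), the per-item detail calls to the individual microservices can be issued concurrently instead of one after another:

using System.Net.Http;
using System.Threading.Tasks;

public class ItemDetailsAggregator
{
    private static readonly HttpClient http = new HttpClient();

    // Calls several microservices for one item in parallel and waits for all of them.
    public static async Task<string[]> FetchDetailsAsync(string itemId)
    {
        Task<string> pricing   = http.GetStringAsync($"http://pricing-service/api/prices/{itemId}");
        Task<string> inventory = http.GetStringAsync($"http://inventory-service/api/stock/{itemId}");
        Task<string> reviews   = http.GetStringAsync($"http://review-service/api/reviews/{itemId}");

        // All three requests are in flight at the same time, so the total latency is
        // roughly that of the slowest call rather than the sum of all calls.
        return await Task.WhenAll(pricing, inventory, reviews);
    }
}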

Micro-Frontends

The microservices architecture gives you enormous benefits when done right. When implementing a microservices architecture, you certainly want to keep your services small. Most of the companies and teams I have come in contact with have a tendency to do so only for the backend, primarily for two reasons.

  1. It is expensive in terms of time and money.
  2. Knowledge gap

Although you are better off than with a full monolith, you would not gain most of the benefits, as you cannot deploy your backend services independently without also touching the front end. In such scenarios, not only is application scaling a challenge, you also need to update the front end whenever an API is deployed with breaking changes.

 

The idea with a Micro-Frontend is to decompose your application into smaller units based on screens representing domain-specific functionality, instead of writing a large monolithic front-end application. The front-ends are self-contained and can be deployed independently. SPAs (single-page applications) are the best way identified so far to go down this route, and domain-driven design helps achieve this to a great extent. You can have the backend, frontend, data access layer and database, everything required for a subdomain, in one service. Every piece of the service should be owned by an independent team. Collaboration and communication play an important role here, and as long as you adhere to the best practices and principles of microservices, you are most likely to be successful and gain the maximum out of your product/software.

Benefits of Micro-Frontends

  1. Each development team can choose its own technology.
  2. Development and deployment are very quick.
  3. The benefits of microservices can be leveraged in a much better way; dependencies are drastically reduced.
  4. Helps in continuous deployment.
  5. Maintenance and support are very easy, as an individual team owns a specific area.
  6. Testing becomes simple as well, since for every small change you don’t have to touch the entire application.

Challenges

  1. UX consistency is an important aspect. The user experience may become a challenge if individual teams go in their own directions, hence there should be some common medium to ensure the UX is not compromised.
  2. Dependencies need to be managed properly. Collaboration can become a challenge at times; the multiple teams working on one product should stay aligned and share a common understanding.

You can also read:

Micro-Services, Eventual Consistency and Event sourcing patterns