AI / ML – Past, Present & Future – Part 2
 

Just to recap: in my last update we traced the advent of the industrial revolutions, culminating in the current revolution of the ‘Machines’ that we are experiencing. These new-age machines learn, and by utilizing AI/ML they will mould the world we are about to experience. In this excerpt we are going to understand the machine itself, the raw material that constitutes it, and how the world of AI/ML comes alive.

New Machine: a system of intelligence that combines software, hardware, data and human input:

  • Software that learns
  • Massive hardware processing power
  • Huge amounts of data

Any new machine exhibiting AI has three main elements:

1. Digital Process Logic

  • Transforms many manual processes into automated ones
  • Compare the car-dispatch process of a traditional taxi company with Uber’s
  • A digitized process multiplied over millions of transactions revolutionizes an industry – structuring the process is the hardest part
    • thereby new large databases that are stable, scalable and tested (e.g. Hadoop) are finding favor over Oracle/SAP

2. Machine Intelligence

  • A combination of algorithms, automation processes, machine learning and neural networks working over an ever-richer data set – the HEART of the machine
    • thereby highly efficient, always-on plumbing

3. Software Ecosystem

  • Multiple systems of intelligence connected through APIs. For example, Uber uses Twilio for cloud communications, Google for maps, Braintree for payments, SendGrid for email, etc.
    • thereby an intelligent system in action

So, finally, what will work for us to run an AI system is a combination of all the above: not just a system, but a very intelligent system based on new and enhanced learning.

Figure: Intelligent System working (Source: Internet)

To illustrate this with a live example from the most successful Internet media-service provider and production company in North America, I have put together the to-be story of the system of intelligence that they built.

There is a big difference between merely having all the necessary ingredients of the new machines and actually getting them to perform at a high level. An intelligent system that can help you be the Michael Phelps of whatever race you are in will have all or most of the characteristics below.

  • Learns more than any other system
  • Open to changes and corrections
  • Not just automated, but also involving human inputs
  • Focused on a confined problem
  • Treats individual experience as top priority
  • Constantly improving

Once the intelligent system is in place, you finally need a way to measure whether it is doing the right things or not:

  1. An intelligent system has to become better and better, and that depends solely on the quality of the data being fed into it
  2. Intelligent systems have to be a journey for the organization, not just an individual contribution
  3. The system should take ownership of more and more of the data analysis and reduce human intervention

Every day that passes gives us more evidence and strengthens our conviction that the intelligent systems we are trying to understand in this part are the engines of the fourth industrial revolution. Individuals and companies that board this bandwagon early are the ones reaping rich benefits from solving their major problems. So it’s obvious that we need to go the AI/ML way sooner or later. Are you ready?

Stay tuned…. In Part III of this foray, we will quickly look through the making of the data that makes an AI system successful, and then dwell upon Digital Business Models and Solutions – the “Robots” that outline the design and delivery of the AI platform.

AI / ML – Past, Present & Future – Part I
 

The world has seen development and growth driven primarily by the industrial revolutions. Each of these revolutions arrived at a time of economic dislocation, when old ways of production became defunct and had to give way to far better, newer ways of production that could harness the improvements brought in by new machines. The first industrial revolution was powered by the invention of the loom, the second by the steam engine and the third by the assembly line; the fourth, however, will be powered by machines that seem to think. We are HERE, in the fourth one.

Between each industrial revolution and the next there is a long and bumpy road connecting one era of business and technology to the next; the evolution of each industrial revolution follows the path of an S-curve (as shown in the figure below):

  1. IDEA BURST: breakthroughs; high concentration of wealth; new industries created; new technologies generate press clippings but have no impact on existing industries
  2. BUMPY ROAD: the revolution stalls; skepticism about the value created in phase 1; new economic models and value chains are created; existing industries begin to change
  3. MASSIVE LIFT-OFF: everybody is richly rewarded; national GDP gets a vertical lift-off; wealth is distributed widely

Just to see the impact of the AI-driven world in front of us, a few commonsense uses of AI in the future world are listed below:

  • One third of all food produced goes to waste; with AI it could be moved to third-world countries to address the hunger prevalent there
  • 12 million medical misdiagnoses in the US alone contribute to 400,000 deaths; with the right usage of AI, most of these deaths can be avoided
  • Driverless cars are reducing the annual number of accidents from 4.2 to 3.2 per million miles driven, and this will improve as days go by

Now that the machines are in, we need to see what it is that we are supposed to expect:

  • Technology will be embedded into everything (IoT – Internet of Things)
  • As machines become better, it is but obvious that by year-2030 standards the current framework of machines will look primitive; expect a steady stream of improvements on these machines
  • Becoming Digital – mastering the three Ms (raw Materials, new Machines, and business models)

While job displacement is detrimental to many human futures, the pace of elimination will be slow. Consider the three most likely scenarios for the next 10 to 15 years:

  • Job Automation: 12% of jobs are at risk of elimination
  • Job Enhancement: 75% of existing jobs will be altered
  • Job Creation: 13% net new jobs will be created by new machine requirements or new job categories

The advent of 13% new jobs, plus the jobs that can’t be automated or enhanced, will still need human intervention and will keep the true need for humans in place vis-à-vis machines replacing each one of us. Scary, isn’t it?

Source: internet

So let me introduce you to some key definitions to keep us on track on this arduous journey.

What is AI?

Artificial Intelligence – Coined in 1956 by Dartmouth Assistant Professor John McCarthy, ‘Artificial Intelligence’ (AI) is a general term that refers to hardware or software that exhibits behaviour which appears intelligent. In the words of Professor McCarthy, it is “the science and engineering of making intelligent machines, especially intelligent computer programs.”

Other sources describe AI (Artificial Intelligence) as an area of computer science that focuses on machines that learn. There are three types of AI prevalent:

  • Narrow AI (ANI)/Applied AI: purpose-built and business-focused on a specific task, e.g. driving a car, reviewing an X-ray, tracking financial trades
  • General AI (AGI)/Strong AI: the pursuit of a machine with the same general intelligence as a human, e.g. figuring out how to make coffee in an average American home
  • Super AI: 10 (or 1,000) steps ahead of us – a technological genie that could wreak havoc around us

By the way, AI has existed for decades, via rules-based programs that deliver rudimentary displays of ‘intelligence’ in specific contexts. Progress, however, has been limited, because algorithms to tackle many real-world problems are too complex for people to program by hand. Resolving this class of complex problems is the world of ML (Machine Learning).

Machine learning (ML) is a subset of AI: all machine learning is AI, but not all AI is machine learning. Machine learning lets us tackle problems that are too complex for humans to solve by shifting some of the burden to the algorithm. As AI pioneer Arthur Samuel wrote in 1959, machine learning is the ‘field of study that gives computers the ability to learn without being explicitly programmed.’

The goal of most machine learning is to develop a prediction engine for a particular use case. An algorithm will receive information about a domain (say, the films a person has watched in the past) and weigh the inputs to make a useful prediction (the probability of the person enjoying a different film in the future).  Machine learning algorithms learn through training. An algorithm initially receives examples whose outputs are known, notes the difference between its predictions and the correct outputs, and tunes the weightings of the inputs to improve the accuracy of its predictions until they are optimized.
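A minimal sketch of this train-and-predict loop, using a toy single-weight linear model (the data, learning rate and the "films watched" framing below are illustrative, not from any real recommender):

```python
# Toy version of the training loop described above: receive examples with
# known outputs, note the prediction error, and tune the weighting until
# predictions are close to the correct outputs.

def train(examples, lr=0.01, epochs=500):
    """Tune a single weight so predictions approach the known outputs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x          # current prediction for this example
            error = pred - y      # difference from the correct output
            w -= lr * error * x   # adjust the weighting to reduce the error
    return w

# Known examples: (input, correct output); here the true rule is output = 2 * input.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(examples)
prediction = w * 4.0              # predict for an unseen input
```

After training, `w` has converged close to 2.0, so the prediction for the unseen input 4.0 lands near 8.0 without the rule ever being explicitly programmed.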

Why is AI important?

AI is important because it tackles difficult problems the way our human brain would, but much faster and with fewer errors, obviously resulting in human well-being. Since the 1950s, AI research has focused on five fields of enquiry:

  1. Reasoning: the ability to solve problems through logical deduction
  2. Knowledge: the ability to represent knowledge about the world (the understanding that there are certain entities, events and situations in the world; those elements have properties; and those elements can be categorised.)
  3. Planning: the ability to set and achieve goals (there is a specific future state of the world that is desirable, and sequences of actions can be undertaken that will effect progress towards it)
  4. Communication: the ability to understand written and spoken language.
  5. Perception: the ability to deduce things about the world from visual images, sounds and other sensory inputs.

AI has thus already gone past our imagination and is part of our homes, workplaces, communities and more. Simply put, it’s infiltrating all the frameworks that drive the global economy. From Siri, Alexa and Google Home to Nest and Uber, the world is covered with smart machines operating on extremely strong software platforms which are in self-learning mode. And I am not sure if this is the best part or the scariest part: this is just the BEGINNING! I call it scary because these new inventions are always “ready to learn” and constantly “adding intelligence”, which will very soon challenge and enhance the intellect and experience of the savviest professionals in every sector.

Stay tuned…. In Part II of this foray, we will dwell upon the Raw Materials and New Machines that form the core of the AI platform.

Docker – Basics and Benefits
 

Docker

How many times have you encountered a situation where software built on one platform doesn’t work on another? It is a very expensive affair to ensure that your software works on all platforms (mobile, tablet, PC and so on). This is one of the crucial problems Docker helps you resolve, but it is not the only one; there is much more to Docker, which we will explore in my upcoming blogs.

Docker is a container management service: a tool designed to help developers deploy and run applications using containers. It ensures that code behaves the same regardless of environment or platform.

Key components of Docker

  1. Dockerfile – A Dockerfile is a text document that contains the series of commands a user can make use of to create a Docker image. Docker images are built automatically by reading the instructions from a Dockerfile.
  2. Docker Image – It’s a lightweight snapshot of a virtual machine: essentially an application with all its requirements and dependencies (read-only copies of OS libraries, application libraries etc.). When you run a Docker image, you get a Docker container. Once a Docker image is created, it is guaranteed to run on any Docker platform, and it can be retrieved to create a container in any environment.
  3. Docker Registry – Docker images can be stored in an online repository called Docker Hub. You can store your images in this cloud repository (https://hub.docker.com/) or save them in your version control system.
  4. Docker Container – A Docker container is nothing but a runtime instance of a Docker image.
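Tying the first two components together: a minimal, hypothetical Dockerfile for a small Python web app might look like this (the file names app.py and requirements.txt are illustrative placeholders, not from any real project):

```dockerfile
# Hypothetical Dockerfile for a small Python web app (illustrative only)
FROM python:3.11-slim            # base image: OS libraries + language runtime
WORKDIR /app                     # working directory inside the image
COPY requirements.txt .          # copy the dependency list first (better layer caching)
RUN pip install -r requirements.txt
COPY . .                         # copy the application code
EXPOSE 8000                      # port the app listens on
CMD ["python", "app.py"]         # command run when a container starts
```

Running docker build -t myapp . against this file produces an image, and docker run -p 8000:8000 myapp starts a container from it.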

Benefits of Docker

  1. Build your application once and use it in multiple environments without the need to build or configure it again. An application built in the dev environment is guaranteed to work in the prod environment.
  2. A centralized Docker repository makes images very easy to store and retrieve. Moreover, you don’t need to build an image from scratch; you can always leverage an existing image and go from there. Sharing an image becomes very simple as well.
  3. Version Control – You can always create the next image and keep your images under version control.
  4. Isolation – Every application works inside its own container and never interferes with other containers. If you no longer need an application, you can delete its container. Every container has its own resources, so there is no contention between applications.
  5. Security – The isolation ensures that applications running in containers are completely segregated. A container cannot look into, or control, processes running in other containers. This greatly enhances security.
  6. Multi-cloud platforms/Portability – An image built on Amazon EC2 can very well be ported to Microsoft Azure.
  7. Productivity – This is an implicit benefit of using Docker. The speed of development is much faster, as the main focus is on writing code and business logic rather than worrying extensively about deployment and testing.

Why choose Couch DB over Relational Database?
 


Choosing the correct database is very important in software development.

A relational database maintains the ACID rules. The relational model requires the database engine to manage writes in a special way: it locks rows and tables to maintain atomicity and consistency. Protecting referential integrity across tables and rows increases locking time, and increased locking time means higher latency and a slower application. Developers face several problems with relational databases, such as:

Object-Relational Impedance Mismatch: when an RDBMS is served by an application program written in an object-oriented programming language, objects or class definitions need to be mapped to database tables defined by a relational schema. This misalignment of application objects with tables and rows is called impedance mismatch.

As an example, below is the application code:

class Foo
{
    int Id;
    string[] colours;
}

In this example, the object Foo contains the fields Id and colours (a bunch of strings). To store this in a relational database we need to create three tables:

  • The main object
  • The colour information
  • The relation between colour and the main object

This forces the developer to write a mapping layer, such as Entity Framework or ADO.NET, to translate between the objects in memory and what is saved in the database.

Many developers use object-oriented languages in development. Objects are not rows and tables, and objects are not uniform; mapping those objects into rows can be painful.
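The mapping pain above can be made concrete with a small sketch. This uses Python's built-in sqlite3 as a stand-in relational engine; the table and column names are made up for illustration:

```python
import sqlite3

# Persisting a Foo (an id plus a list of colours) relationally needs three
# tables plus hand-written mapping code in both directions - exactly the
# impedance mismatch described above.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foo (id INTEGER PRIMARY KEY);
    CREATE TABLE colour (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE foo_colour (foo_id INTEGER REFERENCES foo(id),
                             colour_id INTEGER REFERENCES colour(id));
""")

def save_foo(foo_id, colours):
    """Mapping layer, one way: one object becomes rows in three tables."""
    conn.execute("INSERT INTO foo VALUES (?)", (foo_id,))
    for name in colours:
        cur = conn.execute("INSERT INTO colour (name) VALUES (?)", (name,))
        conn.execute("INSERT INTO foo_colour VALUES (?, ?)",
                     (foo_id, cur.lastrowid))

def load_foo(foo_id):
    """Mapping layer, the other way: rebuild the object from its rows."""
    rows = conn.execute(
        "SELECT c.name FROM colour c "
        "JOIN foo_colour fc ON fc.colour_id = c.id "
        "WHERE fc.foo_id = ? ORDER BY c.id", (foo_id,))
    return {"id": foo_id, "colours": [r[0] for r in rows]}

save_foo(1, ["red", "yellow"])
foo = load_foo(1)
```

Note that the document version shown next stores the same object as a single record, with no mapping layer at all.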

CouchDB is schema-less. There is no fixed relation between a collection of objects: a developer can store any type of document, and the documents can be flat and simple or as complex as the application requires.

The CouchDB document for Foo:

{
  "ID": 1,
  "Colours": ["red", "yellow"]
}

Scalability Issues: scaling out and replicating data to other servers increases locking. A relational database tries to stay consistent and so increases the locking time, and as a result the application gets slower.

Replication is one of the core features of CouchDB. Replication takes advantage of clustering to achieve scalability: we just need to specify the source and destination databases, and CouchDB handles replicating the data to the destination. This can also be achieved through a REST call: a POST request to the “_replicate” endpoint with the source and destination servers specified in the body of the request.
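A sketch of what that REST call's body looks like. The server addresses and database name below are hypothetical, and since an actual POST needs a running CouchDB, this only builds and serializes the JSON payload:

```python
import json

# Body of the POST to CouchDB's /_replicate endpoint.
# Server URLs and the "orders" database name are illustrative placeholders.
payload = {
    "source": "http://localhost:5984/orders",
    "target": "http://replica.example.com:5984/orders",
    "continuous": True,   # keep replicating as new changes arrive
}

body = json.dumps(payload)

# The actual call would then be, roughly:
#   POST http://localhost:5984/_replicate
#   Content-Type: application/json
#   <body>
```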

RDBMS with MS SQL Server    | NoSQL with CouchDB
Defined table schema        | No schema
Rows and columns            | Documents
Dynamic queries             | Predefined queries (views)
Joins/relations             | Not required
T-SQL                       | MapReduce
OLEDB/ODBC/EF/ADO.NET       | REST API
Management Studio           | Futon
Constraints, triggers, SPs  | Validations, show & list functions

 

CouchDB stores documents as JSON, uses JavaScript for MapReduce queries and exposes HTTP as its API. CouchDB can be considered a B-tree manager with an HTTP interface: it uses a data structure called a B-tree to index its documents and views. It maintains optimistic concurrency via MVCC (multi-version concurrency control), so previous versions of a document remain available until the database is compacted.

CouchDB only appends data to the end of the database file that keeps the B-tree on disk, so the file grows at the end. The B-tree delivers fast searches, inserts and deletes.
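The interplay of append-only storage, MVCC and compaction can be sketched with a toy store. This is an illustration of the idea, not CouchDB's actual internals (real revision ids, the B-tree and the file format are all simplified away):

```python
# Toy MVCC on an append-only log: every update appends a new revision,
# older revisions stay readable until compaction rewrites the store
# keeping only the latest revision of each document.

class AppendOnlyStore:
    def __init__(self):
        self._log = []            # the "file" that only grows at the end

    def put(self, doc_id, doc):
        """Append a new revision; never overwrite in place."""
        rev = sum(1 for i, _ in self._log if i == doc_id) + 1
        self._log.append((doc_id, {"_rev": rev, **doc}))
        return rev

    def get(self, doc_id, rev=None):
        """Latest revision by default; older ones until compaction."""
        revs = [d for i, d in self._log if i == doc_id]
        if rev is None:
            return revs[-1] if revs else None
        return next((d for d in revs if d["_rev"] == rev), None)

    def compact(self):
        """Drop everything except the newest revision of each document."""
        latest = {}
        for doc_id, doc in self._log:
            latest[doc_id] = doc
        self._log = list(latest.items())

db = AppendOnlyStore()
db.put("foo", {"colour": "red"})
db.put("foo", {"colour": "yellow"})
old = db.get("foo", rev=1)        # previous version is still readable
db.compact()
gone = db.get("foo", rev=1)       # compaction removed the old revision
```

Readers never block writers here: a reader holding revision 1 keeps a consistent view even while revision 2 is being appended, which is the optimistic-concurrency behaviour described above.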

CouchDB features server-side document validation and on-the-fly document transformation. Although a document is stored as JSON, it can be served as XML or CSV.

Architecture of CouchDB:

The lowest tier is a simple JSON-based file store. The storage engine is responsible for accepting JSON documents and serializing them to disk, and it is the layer that accesses the JSON store. The query engine provides fast access to the data held in the JSON store: query definitions are JavaScript functions stored in the database, and a B-tree-structured index is built for every query and stored in the database, which lets the query engine read the data quickly. The replication engine provides master-master bidirectional replication. Through the REST API we can access any of these three capabilities.
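The JavaScript query definitions mentioned above live in design documents stored in the database itself. A hypothetical design document defining a single view over the Foo documents from earlier might look like this (the design-doc and view names are made up):

```json
{
  "_id": "_design/colours",
  "views": {
    "by_colour": {
      "map": "function (doc) { if (doc.Colours) { doc.Colours.forEach(function (c) { emit(c, doc._id); }); } }"
    }
  }
}
```

CouchDB builds and maintains the B-tree index for this view, and the view is then queried over HTTP, e.g. with a GET request to /mydb/_design/colours/_view/by_colour.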