Machine Learning – The Primer – Part 4

Just a recap: in my previous post, I focused primarily on the tools, techniques, frameworks, and hardware that are prominent in implementing an AI/ML framework for any organization. In this post, we will delve a little deeper into the algorithms at the core of the ML/DL space and how humans remain at the helm of the impact these machines have.

Algorithms are becoming an integral part of our daily lives. About 80% of the viewing hours on Netflix and 35% of the retail on Amazon are due to automated recommendations by so-called AI/ML engines. Designers in companies like Facebook know how to use notifications and gamification to increase user engagement, exploiting and amplifying human vulnerabilities such as the need for social approval and instant gratification. In short, we are nudged and sometimes even tricked into making choices by algorithms. From the products and services we buy to the information we consume and the people we mingle with online, algorithms play an important role in practically every aspect of our lives.

In the world of AI, a challenge that is being increasingly discussed is the biases that creep into algorithms. Because of these biases, leaving decisions to algorithms can have unintended consequences. More so, as algorithms deployed by tech companies are used by billions of people, the damage caused by their biases can be significant. Moreover, we have a tendency to believe that algorithms are predictable and rational, so we tend to overlook many of their side effects.

How do today’s algorithms differ?

In the past, developing an algorithm involved writing a series of steps that a machine could execute repeatedly without getting tired or making a mistake. In comparison, today’s algorithms, based on machine learning, do not follow a programmed sequence of instructions: they ingest data, figure out for themselves the most logical sequence of steps, and then keep improving as they consume more and more data.

Machine learning itself varies in sophistication. In traditional (supervised) ML, a programmer usually specifies what patterns to look for. The performance of these methods improves as they are exposed to more data, but only up to a point. In deep learning, programmers do not specify what patterns to look for; the algorithm evaluates the training data in many different ways to identify the patterns that truly matter, some of which human beings might never identify.

Deep learning models contain an input layer of data, an output layer producing the desired prediction, and multiple hidden layers in between that combine patterns from previous layers to identify increasingly abstract and complex patterns in the data.

Unlike traditional algorithms, the performance of deep learning algorithms keeps improving as they are fed more data.
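To make the layered structure concrete, here is a minimal pure-Python sketch of a feedforward network with one hidden layer; the weights below are invented for illustration (a real network would learn them from data):

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Toy network: 2 inputs -> 3 hidden units -> 1 output.
hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.4]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[1.0, -1.0, 0.5]]
output_b = [0.2]

def predict(x):
    h = layer(x, hidden_w, hidden_b)        # hidden layer combines raw inputs
    return layer(h, output_w, output_b)[0]  # output layer combines hidden patterns

print(predict([1.0, 0.0]))  # a number between 0 and 1
```

Training consists of adjusting those weight lists so the output matches known answers; stacking more hidden layers lets later layers combine the patterns found by earlier ones.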


Decision making and avoiding unintended consequences

AI involves enabling computers to do the tasks that human beings can handle. This means computers must be able to reason, understand language, navigate the visual world and manipulate objects. Machine learning enhances this by learning from experience. As algorithms become more sophisticated and develop new capabilities, they are moving beyond their original role of decision support into decision making. The flip side is that as algorithms become more powerful, there are growing concerns about their opaqueness and unknown biases. The benefits of algorithms seem to far outweigh the small chance of an algorithm going rogue now and then, but it is important to recognize that while algorithms do an exceptionally good job of achieving what they are designed to achieve, they are not completely predictable. Like some medicines, they have side effects. These consequences are of three types.

Perverse results affect precisely what is being measured and therefore have a better chance of being detected. Unexpected drawbacks, by contrast, do not affect the exact performance metrics being tracked, which makes them difficult to avoid. Facebook’s Trending Topics algorithm is a good example. The algorithm ensured that the highlighted stories were genuinely popular, but it failed to question the credibility of sources and inadvertently promoted fake news. As a result, inaccurate and fabricated stories were widely circulated in the months leading up to the 2016 US Presidential election. The top 20 false stories in that period received greater engagement on Facebook than the top 20 legitimate ones.

Content and Collaborative filtering systems

Content-based recommendation systems start with detailed information about a product’s characteristics and then search for other products with similar qualities. In the same way, content-based algorithms can match people based on similarities in demographic attributes (age, occupation, location) and shared interests and ideas discussed on social media.
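As an illustration, content-based matching can be sketched with cosine similarity over feature vectors; the films and genre scores below are invented for the example:

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical catalog: each film described by [action, romance, sci-fi] scores.
catalog = {
    "film_a": [0.9, 0.1, 0.8],
    "film_b": [0.8, 0.2, 0.9],
    "film_c": [0.1, 0.9, 0.0],
}

def most_similar(item, catalog):
    """Find the catalog entry whose features are closest to `item`'s."""
    return max(
        (other for other in catalog if other != item),
        key=lambda other: cosine_similarity(catalog[item], catalog[other]),
    )

print(most_similar("film_a", catalog))  # → film_b
```

The key point is that the recommendation depends only on the products' own characteristics, not on who else uses them.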

Collaborative filtering recommendation algorithms do not focus on the product’s characteristics. Instead, they look for people who use the same products we do. For example, two of us may not be connected on LinkedIn, but if we have more than a hundred mutual connections, we will get a notification suggesting that we connect.
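A minimal sketch of the collaborative idea, using Jaccard overlap between users' item sets; the names and products here are hypothetical:

```python
def jaccard(a, b):
    """Overlap between two users' item sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b)

# Hypothetical usage data: which products each person uses.
users = {
    "asha":  {"p1", "p2", "p3"},
    "ben":   {"p1", "p2", "p4"},
    "chloe": {"p5", "p6"},
}

def recommend(name, users):
    """Suggest items used by the most similar other user but not yet by `name`."""
    me = users[name]
    neighbor = max(
        (other for other in users if other != name),
        key=lambda other: jaccard(me, users[other]),
    )
    return sorted(users[neighbor] - me)

print(recommend("asha", users))  # → ['p4']
```

Notice that nothing here describes what the products *are*; the recommendation comes purely from overlapping behavior, which is exactly what distinguishes collaborative filtering from the content-based approach.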

Digital Framework

Algorithms also leverage the principle of the digital neighborhood. One of the earliest pioneers of this principle was Google. In the late 1990s, when the internet was about to take off, the most popular online search engines relied primarily on the text content within web pages to determine relevance. Google’s insight was different: if a lot of other sites link to our website, then our website must be worth reading. It is not what we know but how many people know us that gets our website higher rankings. Research reveals that when Oprah Winfrey recommends a book, sales increase significantly, but books recommended by Amazon also get a significant boost. That is why digital neighborhoods are so important: for products that are recommended on many other product pages, recommendation algorithms drive a dramatic increase in sales. Spotify initially used a collaborative filter but later combined it with a content-based method.
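Google's PageRank formalizes this "how many people know us" idea: a page's rank is fed by the ranks of the pages linking to it. A simplified power-iteration sketch, over a hypothetical link graph:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]}. Returns an importance score per page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each linking page passes on a share of its own rank.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

# Hypothetical graph: both "a" and "b" link to "hub".
links = {
    "hub": ["a"],
    "a": ["hub"],
    "b": ["hub"],
}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # → hub
```

Because "hub" is linked to by the most pages, it ends up with the highest rank, regardless of what text appears on it.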

The predictability-resilience paradox

AI began with expert systems, i.e., systems that capture the knowledge and experience of experts. These systems suffer from two drawbacks:

  • They do not automate the decision-making process.
  • They cannot encode a response to every kind of situation.

We can either create intelligent algorithms in highly controlled environments, expert-systems style, to ensure their behavior is highly predictable; but these algorithms will encounter problems they were not prepared for. Alternatively, we can use machine learning to create algorithms that are resilient but also somewhat unpredictable. This is the predictability-resilience paradox. Much as we may desire fully explainable and interpretable algorithms, the balance between predictability and resilience seems inevitably to be tilting toward the latter.

Technology is most useful when it helps human beings solve the most sophisticated problems, which involve creativity. To solve such problems, we will have to move away from fully predictable systems. One way to resolve the predictability-resilience paradox is to use multiple approaches. Thus, in a self-driving car, machine learning might run the show, but in case of confusion about a road sign, a set of rules might kick in.
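That hybrid approach can be sketched as follows; the classifier, confidence threshold, and fallback rule are all hypothetical stand-ins, not any real self-driving system:

```python
# Hybrid controller: a learned model handles the common case, and a
# hand-written rule takes over when the model is not confident.

CONFIDENCE_THRESHOLD = 0.8

def ml_classify_sign(observation):
    """Stand-in for a trained classifier: returns (label, confidence).
    In a real system this would be a neural network's prediction."""
    return observation.get("predicted_label", "unknown"), observation.get("confidence", 0.0)

def rule_based_fallback(observation):
    """Conservative, fully predictable default when the model is unsure."""
    return "slow_down"

def decide(observation):
    label, confidence = ml_classify_sign(observation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                       # trust the resilient learned model
    return rule_based_fallback(observation)  # fall back to predictable rules

print(decide({"predicted_label": "stop", "confidence": 0.95}))  # → stop
print(decide({"predicted_label": "stop", "confidence": 0.40}))  # → slow_down
```

The design choice is exactly the trade-off described above: the learned component supplies resilience on inputs no rule anticipated, while the rule supplies predictability whenever the model's own uncertainty is high.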

Environmental Factors

Human behavior is shaped by both hereditary and environmental factors, and the same is true of algorithms. There are three components we need to consider: the data, the algorithms themselves, and the people who build and use them.

While data, algorithms and people each play a significant role in determining the outcomes of the system, the whole is greater than the sum of its parts: the complex interactions among the various components have a big impact.

The Need for Trust

Many professions will be redefined if algorithmic systems are adopted intelligently by users. But if there are public failures, we cannot take user adoption for granted. We might successfully build intelligent diagnostic systems and effective driverless cars, but in the absence of trust, doctors and passengers will be unwilling to use them. So it is important to increase trust in algorithms; otherwise, they will not gain acceptance. According to some estimates, driverless cars would save up to 1.5 million lives in the US alone and close to 50 million lives globally over the next 50 years. Yet, in a poll conducted in April 2018, 50% of respondents said they considered autonomous cars less safe than cars driven by human beings.

Rules and Regulations

Decision making is the most human of our abilities. Today’s algorithms are advancing rapidly into this territory. So we must develop a set of rights, responsibilities and regulations to manage and indeed thrive in this world of technological innovations. Such a bill of rights should have four components:

AI systems have mastered games – now it’s time to master reality! Some time back, Facebook developed two bots and trained them in negotiation skills. The bots were exposed to thousands of negotiation games and taught how conversations evolve in a negotiation and how they should respond. The outcome of this training far exceeded expectations: the bots learnt how to trade items and developed their own shorthand language. When the bots were then made to negotiate with human beings, the people on the other side did not even realize they were talking to machines!

Stay tuned…. In Part 5 of this foray, we will look into the details of the top 8 examples of organizations that have successfully implemented an AI/ML framework and how they are benefiting from it.

Please feel free to review my earlier series of posts 


Machine Learning – The Primer – Part 3

Just a recap: in my previous post, I focused primarily on the amount and structure of data that needs to be fed as input to machine learning and the process used for its implementation. In this post, we will continue to learn more about the tools, techniques and hardware required for these processes to excel.

While most of us are still appreciating the early applications of machine learning, the field continues to evolve at quite a promising pace, introducing more advanced approaches like deep learning. Why is DL so good? It is simply great in terms of accuracy when trained with huge amounts of data, and it plays a significant role in filling the gap where a scenario is challenging even for the human brain. Logically enough, this has contributed to a whole slew of new frameworks. Below are the top 10 frameworks available and their intended use in the relevant industry space.


This was meant to provide a comprehensive analysis of the best tools I would recommend. So what is the final advice?

The speed of your algorithm is dependent on the size and complexity of your data, the algorithm itself, and your available hardware. In this section, we’re going to focus on hardware considerations when:

  • Training the model
  • Running the model in production

Some things to remember:

If you need results quickly, try machine learning algorithms first. They are generally quicker to train and require less computational power. The main factor in training time will be the number of variables and observations in the training data.

Deep learning models will take time to train. Pre-trained networks and public datasets have shortened the time to train deep learning models through transfer learning, but it is easy to underestimate the real-world practicalities of incorporating your training data into these networks. These algorithms can take anywhere from a minute to a few weeks to train depending on your hardware and computing power. 

Training the Model

Desktop CPUs: are sufficient for training most machine learning models but may prove slow for deep learning models.

CPU Clusters: Big data frameworks such as Apache Spark™ spread the computation across a cluster of CPUs.

GPUs: are the norm for training most deep learning models because they offer dramatic speed improvements over training on a CPU. Because deep learning models take a long time to train (often on the order of hours or days), it is common for practitioners to have several models training in parallel, with the hope that one (or some) of them will provide improved results; this requires additional hardware.

The cluster or cloud option has gained popularity due to the high costs associated with obtaining GPUs, since it lets the hardware be shared by several researchers.

Running the Model in Production

The trend toward smarter and more connected sensors is moving more processing and analytics closer to the sensors. This shrinks the amount of data that is transferred over the network, which reduces the cost of transmission and can reduce the power consumption of wireless devices.

Several factors will drive the architecture of the production system:

  • Will a network connection always be available?
  • How often will the model need to be updated?
  • Do you have specialized hardware to run deep learning models?

Will a network connection always be available?

Machine learning and deep learning models that run on hardware at the edge will provide quick results and will not require a network connection.

How often will the model need to be updated?

Powerful hardware will need to be available at the edge to run the machine learning model, and it will be more difficult to push out updates to the model than if the model resided on a centralized server.

Tools are available that can convert machine learning models, which are typically developed in high-level interpreted languages, into standalone C/C++ code, which can be run on low-power embedded devices.
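Real conversion tools are far more sophisticated, but as a toy illustration of the idea, here is a sketch that emits standalone C source for a simple linear model whose weights are assumed to have been learned already:

```python
def emit_c_predictor(weights, bias):
    """Emit standalone C code for a linear model y = w.x + b,
    suitable for compiling onto a low-power embedded device."""
    n = len(weights)
    w_init = ", ".join(f"{w}f" for w in weights)
    return (
        f"static const float W[{n}] = {{{w_init}}};\n"
        f"static const float B = {bias}f;\n"
        f"float predict(const float x[{n}]) {{\n"
        f"    float y = B;\n"
        f"    for (int i = 0; i < {n}; i++) y += W[i] * x[i];\n"
        f"    return y;\n"
        f"}}\n"
    )

# Hypothetical weights learned during training in a high-level language.
c_source = emit_c_predictor([0.5, -1.2, 3.0], 0.1)
print(c_source)
```

The point is that once training is done, the model is just numbers; inference can be reduced to a small, dependency-free function that needs no interpreter or network connection at the edge.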

Do you have specialized hardware to run deep learning models?

For deep learning models, specialized hardware is typically required due to the higher memory and compute requirements.

We have now looked through the different frameworks, tools and techniques prevalent in the industry for machine and deep learning initiatives in your organization, along with recommendations on which framework works for your specific needs. We also looked at the hardware required to run these models and make the frameworks operational.

Stay tuned…. In Part 4 of this foray, we will look into the usage of all these machine and deep learning algorithms, models and frameworks that are prevalent in the industry.

Please feel free to review my earlier series of posts on AI-ML Past, Present and Future – distributed across 8 blogs.


Machine Learning – The Primer – Part 2

Just a recap: in my previous post, I focused primarily on the basic terms in machine learning and the process used for its implementation. In this post, we will continue to learn more about the steps in the machine learning process and how it can be made more efficient.

There is huge significance in being on top of domain knowledge in your industry or process: it will help filter the right data set to be considered for machine learning. The core of this discussion revolves around data and the structure in which it exists in your organization.

To deduce the data present in an organization and its structure, we pose three questions:

1. Is Your Data Tabular?

Traditional machine learning techniques were designed for tabular data, which is organized into independent rows and columns. In tabular data, each row represents a discrete piece of information (e.g., an employee’s address).

There are ways to transform tabular data to work with deep learning models, but this may not be the best option to start off with.

Tabular data can be numeric or categorical (though eventually the categorical data would be converted to numeric).
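For example, a categorical column can be converted to numeric via one-hot encoding; the tiny employee table below is hypothetical:

```python
def one_hot_encode(rows, column):
    """Replace a categorical column with one 0/1 column per category."""
    categories = sorted({row[column] for row in rows})
    encoded = []
    for row in rows:
        new_row = {k: v for k, v in row.items() if k != column}
        for cat in categories:
            new_row[f"{column}_{cat}"] = 1 if row[column] == cat else 0
        encoded.append(new_row)
    return encoded

# Hypothetical employee table: each row is one discrete piece of information.
rows = [
    {"salary": 50000, "dept": "sales"},
    {"salary": 62000, "dept": "engineering"},
]
print(one_hot_encode(rows, "dept"))
```

After encoding, every column is numeric, which is the form traditional machine learning algorithms expect.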

2. If You Have Non-Tabular Data, What Type Is It?

Images and Video: Deep learning is more common for image and video classification problems. Convolutional neural networks are designed to extract features from images that often result in state-of-the-art classification accuracies – making it possible to discern high-level differences such as cat vs. dog.

Sensor and Signal: The classical approach is to extract features from signals and then use those features with a machine learning algorithm. More recently, signals have been passed directly to LSTM (Long Short-Term Memory) networks, or converted to images (for example by computing the signal’s spectrogram) so that the image can be used with a convolutional neural network. Wavelets provide yet another way to extract features from signals.

Text:  Text can be converted to a numerical representation via bag-of-words models and normalization techniques and then used with traditional machine learning techniques such as support vector machines or naïve Bayes. Newer techniques use text with recurrent or convolutional neural network architectures. In these cases, text is often transformed into a numeric representation using a word-embedding model such as word2vec.
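A minimal bag-of-words sketch in pure Python (the documents are invented examples):

```python
from collections import Counter

def bag_of_words(documents):
    """Convert texts to numeric count vectors over a shared vocabulary."""
    vocab = sorted({word for doc in documents for word in doc.lower().split()})
    vectors = [
        [Counter(doc.lower().split())[word] for word in vocab]
        for doc in documents
    ]
    return vocab, vectors

docs = ["the cat sat", "the dog sat on the mat"]
vocab, vectors = bag_of_words(docs)
print(vocab)    # shared vocabulary
print(vectors)  # one count vector per document
```

Each document becomes a fixed-length numeric vector, which is exactly the representation that support vector machines or naive Bayes can consume; word-embedding models like word2vec replace these sparse counts with dense learned vectors.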

3. Is Your Data Labeled?

To train a supervised model, whether for machine learning or deep learning, you need labeled data.

If You Have No Labeled Data

Focus on machine learning techniques (in particular, unsupervised learning techniques). Labeling for deep learning can mean annotating objects in an image, or each pixel of an image or video, for semantic segmentation. The process of creating these labels, often referred to as “ground-truth labeling,” can be prohibitively time-consuming.

If You Have Some Labeled Data

Use transfer learning: because it focuses on training a smaller number of parameters in the deep neural network, it requires a smaller amount of labeled data.

Another approach for dealing with small amounts of labeled data is to augment that data. For example, it is common with image datasets to augment the training data with various transformations on the labeled images (such as reflection, rotation, scaling, and translation).
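A sketch of such augmentation on a tiny 2-D "image" represented as a list of rows; only reflection and rotation are shown (real pipelines also add scaling, translation, and more):

```python
def reflect_horizontal(image):
    """Mirror a 2-D image (list of rows) left-to-right."""
    return [row[::-1] for row in image]

def rotate_90(image):
    """Rotate a 2-D image 90 degrees clockwise."""
    return [list(col) for col in zip(*image[::-1])]

def augment(image):
    """Produce several labeled variants from one labeled image."""
    return [image, reflect_horizontal(image), rotate_90(image)]

tiny = [[1, 2],
        [3, 4]]
for variant in augment(tiny):
    print(variant)
```

Because a mirrored or rotated cat is still a cat, each labeled image yields several training examples at no extra labeling cost.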

If You Have Lots of Labeled Data  

With plenty of labeled data, both machine learning and deep learning are available options. The more labeled data you have, the more likely it is that deep learning techniques will be the more accurate ones.

The first step in initiating any machine learning project is to identify the different steps/tasks that make up a business process. While one task alone might be more suited to machine learning, your full application might involve multiple steps that, when taken together, are better suited to deep learning. If you have a large data set, deep learning techniques can produce more accurate results than machine learning techniques, because deep learning uses more complex models with more parameters that can be more closely “fit” to the data.

Some areas are more suited to machine learning or deep learning techniques. Here we present six common tasks:

We thus look at each of the above tasks: its related examples, applications, required inputs, common algorithms, and whether it is more commonly approached through machine learning or deep learning.

So, how much data is a “large” dataset? It depends. Some popular image classification networks available for transfer learning were trained on a dataset of 1.2 million images from 1,000 different categories. If you want to use machine learning with a laser focus on accuracy, be careful not to overfit your data.

Overfitting happens when your algorithm fits your training data too closely and cannot generalize to a wider data set: the model can’t properly handle new data that doesn’t match its narrow expectations. The data needs to be representative of your real-world data, and you need to have enough of it. Once your model is trained, use test data to check that it is performing well; the test data should be completely new data.
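The held-out test set can be produced with a simple split; this is a minimal sketch (real workflows often also stratify the split or cross-validate):

```python
import random

def train_test_split(data, test_fraction=0.2, seed=42):
    """Hold out a fraction of the data that the model never sees in training."""
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)   # fixed seed for reproducibility
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

samples = list(range(100))           # stand-in for 100 labeled examples
train, test = train_test_split(samples)
print(len(train), len(test))         # → 80 20
```

If the model scores well on the training set but poorly on the held-out test set, that gap is the signature of overfitting.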

We have now seen the amount and format of data that is required, and how it has to be structured, for machine learning or deep learning processes to be implemented. Stay tuned…. In Part 3 of this foray, we will look into the details of the tools, techniques and hardware required to support the machine learning process as we go deeper into the learning portions of this AI journey we have embarked upon.

Please feel free to review my earlier series of posts on AI-ML Past, Present and Future – distributed across 8 blogs.


Machine Learning – The Primer – Part I

In this series of posts, I will focus primarily on the world of machine learning. We’ll start with an overview of how machine learning models work and how they are used. This may feel basic if you’ve done statistical modeling or machine learning before. By giving ‘computers the ability to learn’, we mean passing the task of optimization — of weighing the variables in the available data to make accurate predictions about the future — to the algorithm. Sometimes we can go further, offloading to the program the task of specifying the features to consider in the first place. Let us first understand some basic definitions.

Machine learning: Machine learning lets us tackle problems that are too complex for humans to solve by shifting some of the burden to the algorithm. The goal of most machine learning is to develop a prediction engine for a particular use case. Common machine learning techniques include decision trees, support vector machines, and ensemble methods.

Deep learning: A subset of machine learning modeled loosely on the neural pathways of the human brain. Deep refers to the multiple layers between the input and output layers. In deep learning, the algorithm automatically learns what features are useful. Common deep learning techniques include convolutional neural networks (CNNs), recurrent neural networks (such as long short-term memory, or LSTM), and deep Q networks.

Algorithm: The set of rules or instructions that will train the model to do what you want it to do. An algorithm will receive information about a domain (say, the films a person has watched in the past) and weigh the inputs to make a useful prediction (the probability of the person enjoying a different film in the future).

Model: The trained program that predicts outputs given a set of inputs.

The terms are related hierarchically: artificial intelligence encompasses the world of machine learning, and machine learning in turn encompasses the world of deep learning.

Why does machine learning matter?

Artificial intelligence will shape our future more powerfully than any other innovation in this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.

The rate of acceleration is already astounding. Over the past four decades, the field went through a couple of AI winters and periods of false hope, caused by limited computing power, intractable solutions, and the difficulty of common-sense knowledge and reasoning in tasks like face recognition, as well as of qualifying the right problems. Rapid advances in data storage and computer processing power have dramatically changed the game in recent years. In 2015, Google trained a conversational agent that could not only convincingly interact with humans as a tech-support helpdesk, but also discuss morality, express opinions, and answer general fact-based questions.

So how is all this happening? How are machines becoming smart enough to beat humans at most games, as DeepBlue did in chess, AlphaGo in Go, and DeepMind’s agents in 49 Atari games? Much of our day-to-day technology is powered by artificial intelligence. Point your camera at the menu during your next trip to Taiwan and the restaurant’s selections will magically appear in English via the Google Translate app; Google Translate has come such a long way that it can even perform a Kinyarwanda translation to accurate, verbatim English. Today AI is used to design evidence-based treatment plans for cancer patients, instantly analyze results from medical tests to escalate to the appropriate specialist immediately, and conduct scientific research for drug discovery. In everyday life, it’s increasingly commonplace to discover machines in roles traditionally occupied by humans. Really, don’t be surprised if a little housekeeping delivery bot shows up instead of a human next time you call the hotel desk to send up some toothpaste.

In this series, we’ll explore the core machine learning concepts behind these technologies. By the end, you should be able to describe how they work at a conceptual level. The process to implement machine learning is given below. Needless to say, it’s a process that needs to be institutionalized so that it is ever-improving.

Gathering Data: You can acquire data from many sources; it might be data that’s held by your organization or open data from the Internet. There might be one dataset, or there could be ten or more.

Cleaning Data: You must come to accept that data will need to be cleaned and checked for quality before any processing can take place. These processes occur during the prepare phase.

Build Models: The processing phase is where the work gets done. The machine learning routines that you have created perform this phase.

Gain Insights and Report: Finally, the results are presented. Reporting can happen in a variety of ways, such as reinvesting the data back into a data store or reporting the results as a spreadsheet or report.
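The four phases above can be sketched as a minimal pipeline; all the data and the "model" logic here are hypothetical stand-ins for real sources and real learning routines:

```python
def gather_data():
    """Acquire raw records (here: a hard-coded stand-in for real data sources)."""
    return [{"age": 34, "bought": 1}, {"age": None, "bought": 0}, {"age": 51, "bought": 1}]

def clean_data(rows):
    """Prepare phase: drop records with missing values."""
    return [row for row in rows if all(v is not None for v in row.values())]

def build_model(rows):
    """Processing phase: a trivial 'model' (the average age of buyers)."""
    ages = [row["age"] for row in rows if row["bought"]]
    return sum(ages) / len(ages)

def report(model):
    """Present the result, e.g. for a spreadsheet or dashboard."""
    return f"Average buyer age: {model:.1f}"

print(report(build_model(clean_data(gather_data()))))  # → Average buyer age: 42.5
```

The value of structuring the work this way is that each phase can be improved or rerun independently as new data arrives, which is what makes the process ever-improving.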

Stay tuned…. In Part 2 of this foray, we will continue to dwell upon the steps in the machine learning process as we go deeper into the learning portions of this AI journey we have embarked upon.

Please feel free to review my earlier series of posts on AI-ML Past, Present and Future – distributed across 8 blogs.


AI / ML – Past, Present & Future – Part 6

Just a recap: in my last update we learnt the different ways to harness the new-age machine to enhance the market competitiveness of your products and services. In this excerpt we are going to dwell upon the different ways to harness the new-age machine to raise the innovation quotient that we need to invest in.

As we have seen throughout this series of posts, innovation related to intelligent systems and the digital economy is both a catalyst for, and an outcome of, the changes that will allow your organization to discover opportunities that were never before visible or addressable. With innovation as the centerpiece, it can’t be a nice-to-have side project; it’s central to remaining relevant in the great digital build-out that we are experiencing and that, of course, lies ahead of us. While machines will do more and more of our work, the process of innovation will allow us to discover entirely new things to do that are impossible to imagine and hard to predict, but they will be at the core of what we do in the future.

Today, reports suggest that a new economy is emerging with a flurry of job categories that even a few years ago would have been hard to predict: social media consultants, search engine optimizers, full-stack engineers, content curators, and chief happiness officers. These are all jobs that not even the tech economy’s equivalents of Charles Babbage among the early pioneers would have imagined. AI is changing our world already, but in reality we have only begun to scratch the surface of where it will take us over the next 20, 50, 100 years. Your job is to imagine the new forms of value you can create with the new machines of the new revolution. Institutionalizing the role and importance of being open to the fruits of innovation is a hugely important role that you, as a leader of the future, need to play. Openness to innovation is not just the job of the company’s formal R&D department; it has to be a culture of innovation that each and every one of you exhibits.

We have learned right through this series that the new machine will be your platform for innovation. Once you are instrumenting, automating, tracking and analyzing the core operations of your business and applying machine learning, innovation opportunities will be consistently unearthed. Innovation is thus a rich term with many different attributes and applications. It can be applied to many areas, including some I have listed below.

With product innovation, your team will gain continual insight into how your products are being used, what customer frustration points exist, and where the obvious areas for improvement lie. These inputs can’t realistically be gauged without a thorough AI system in place. Once you have automated, instrumented and enhanced your company’s activities, the associated AI engines can be applied to innovation. The innovation process will be greatly enhanced by the application of the new machine, primarily because it radically accelerates the scale and speed of innovation. As the new machine is widely adopted, the rate of human progress in the 21st century (as defined by the cumulative growth of human knowledge and the pace of innovation) could be at least 1,000 times the average rate of the 20th century. Of course, the general factors affecting innovation in an organization (opinions, ideas, emotions, organizational inertia, etc.) might bring that prediction down by a couple of orders of magnitude. Even so, an innovation quotient of 10 times is far higher than traditional R&D approaches deliver.

One of the core principles in these posts is that while machines can do many things, practical application should be focused on specific business processes and customer experiences. When you are making discovery investments, start at the process and experience level and imagine how the process can be restructured and reinvented with digital.

Discovery can be a risk. Invest too much in the wrong ideas and you go broke; wait for somebody else to do it and you can miss the market opportunity of a lifetime. So what’s the best practice for bringing about this new form of innovation? We find too many managers looking for “the next great breakthrough”, but that doesn’t work. The opposite approach is to ask how the new machine adds the most value: that is, by looking for continuous, incremental improvements and hitting singles on a consistent basis. This caters to “change for the better”, implemented as small, continuous improvements that in time have a large impact. Your goal should be to become a know-it-all business via instrumentation, sensors, big data and analytics. In the real world, organizations should establish a portfolio of initiatives focused on discovery, with a clear life-cycle methodology that manages these initiatives from inception through to ultimate success or failure. Central to their generative acts will be the belief that something better can be created. The true core of discovery is, after all, hope.

Obviously, please don’t forget that with the gods come the devils as well. While most of my posts portray AI as an age of miracles and technological wonder, we should also keep in sight a world of robots and ever more powerful, human-like machines taking over. Thus, strike a balance.

In these posts, we have argued that the information technology innovations and investments of the past 4 decades are merely a precursor to the next waves of digitization, which will have truly revolutionary impacts on every aspect of work, society, and life.

As the last S-curve’s growth rate continues its inexorable journey south, the new S-curve is gathering momentum, and so are the companies poised to lead this new charge. These are the companies that have learned how to master the 3 Ms: aligning the new raw materials of the digital age (data), the new machines (intelligent systems), and the new models (business models that optimize the monetization of data-based personalization). These are the companies that understand how to build and operate a know-it-all business, that understand that intelligent machines aren’t to be feared but embraced and harnessed, and that are energized by the unwritten future rather than just trying to hang onto the glories of the past.

Below are a few essential steps that any organization should embark upon, and that its leaders should help implement.

The companies that are getting ahead are the ones acting on these ideas. Some companies we work with emphasize one 'play' over another, while others recognize the holistic connection between all of the plays: automation enables enhancement, discovery uncovers how to achieve excessiveness, and so on. All of them, however, understand the need to act now – to not wait for more certain times, or for more clarity over exactly what AI is and what it will become. All of them recognize that the rise of machine intelligence is the ultimate game changer we face today. All of them know that inaction will result in irrelevance. All of them know that fortune favors the brave and punishes the timid.

This is the last blog in the series on AI/ML and the related space. I hope you all enjoyed reading through the posts. Stay tuned while I come back with yet another series on a technology topic.


AI / ML – Past, Present & Future – Part 5b


Just a recap: in my last update we learnt the different ways to harness the new-age machine to enhance the human experience and the related processes and analysis. In this excerpt we are going to delve into the different ways to harness the new-age machine to enhance market competitiveness.

The loom led to excessive clothing, the steam engine to excessive travel, and the factory model to excessive refrigerators and televisions finding their way into homes all around the world. Before the revolutions that spurred them, these products were rare luxuries. So the concept of excessiveness is really quite simple, and old: as prices go down, demand goes up. As the new machine drives prices down, markets of excessiveness will be established, driving sales up to unimagined levels. The question now becomes: will you seize the advantage with the new excessiveness that is available, or fall victim to it?

In the past, we have used raw materials, new machines, and hybrid business models to create an unprecedented excessiveness that in turn makes luxury goods easily available to the common masses. A very good example is heart surgery, where sheer need has pushed organizations to pour excessiveness and innovation in, eventually bringing costs down and making it readily available to the common masses. The cost reduction delivered in heart surgeries is neither magic nor corner-cutting. Yes, there are salary disparities between India and other countries, but these account for only a fraction of the cost difference. The vast majority of these dramatic savings come from the digitization of key processes. The business models here were purely hybrid: some portions must of course remain highly physical – human-centric work performed by medical professionals on an actual patient – whereas others can be significantly digitized, such as monitoring patients and machines. By breaking down the processes of surgery preparation, operating room management, and intensive care unit operations into discrete processes and experiences, and then applying new technologies, hospitals cut costs to the point that they can now provide high-quality care to many more people. In this instance, digital is literally saving lives.

The key point to note here is that setting a new price point is not a one-time thing but a continual process. Once automation takes hold and your product's creation and delivery transition from being human-based to machine-based, they become inherently digital, and thus able to benefit from the widely held observation that the speed and capability of our computers doubles roughly every two years.
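As a rough, back-of-the-envelope illustration of that compounding, here is a small Python sketch of how a unit cost falls when the underlying compute cost halves every two years; the starting cost and horizon are invented numbers, purely illustrative:

```python
# Illustrative sketch: projected unit cost of a digitized product when the
# underlying cost roughly halves every two years (a Moore's-law-style decline).
# The $100 starting cost and the horizon are hypothetical, not from the post.

def projected_cost(initial_cost: float, years: int, halving_period: float = 2.0) -> float:
    """Cost after `years`, assuming it halves every `halving_period` years."""
    return initial_cost * 0.5 ** (years / halving_period)

for year in (0, 2, 4, 6, 8):
    print(f"year {year}: ${projected_cost(100.0, year):.2f}")
# year 0: $100.00 ... year 8: $6.25
```

The same arithmetic explains why re-pricing must be continual: a price point that is aggressive today sits well above the digitized cost base just a few years later.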

At this stage, it's natural to wonder how to kick-start the excessiveness thought process quickly. Below are seven approaches that might lead you to positive results.

Focus on disruptive thinking

Organizations should now focus on new, disruptive thinking, keeping their eyes and ears open to the new companies coming after their business. The key is to empower a team to take an objective view of the tech-based companies looking to bring excessiveness into the industry and potentially eat your pie of the business. Where you can clearly see that such a player is coming for a portion of your company, you need to marshal an appropriate response right away – whether that is to buy, to build, or to partner to address the threat.

Analyze areas of weakness

The new generation is key to spotting where the current organization falls short. These fresh thinkers can offer a unique and highly valuable point of view on your traditional ways of doing business. As a best practice, a sub-group of them should focus on the weaknesses that could put your company out of business.

Plan for a future cost reduction model

Leadership must ask hard, even painful, questions about the implications of current products and services moving from expensive and rare to cheap and available nearly everywhere. Of course, nothing will be truly costless; one way or another, you need to find ways to grow revenue. However, it's healthy for organizations to start conceiving of their products and services as the sum of their parts, which add up to a certain price. Each of these parts must then be analyzed from a digital perspective to make it as close to costless as possible, bringing down the cost of the overall product or service.

Innovative profit making

The question here is how to price products differently, aimed at very different customer segments and entailing very different margin economics. It does companies a world of good to start thinking about how to bring down costs while still making adequate profits.

Search for Technical prowess

This movement comprises individuals, teams, and companies enthusiastic about building new devices that live at the intersection of new functionality and low cost. We find such movements around individuals who, while holding down their day job, are itching to really focus on their weekend avocation. Don't look upon such individuals as uncommitted to the work at hand; rather, harness their talents, energy, and passions by putting them in places and positions where their personal innovations can become your corporate innovations.

Personalize Product line

Focusing on personalizing any product or service for your customer opens up an entirely new horizon of doing business. This is not a function of scale; it's simply a function of applying the new machine to establishing one-to-one connections with your customers. After all, this has arguably been the goal all along – entire value chains and customer value propositions have been focused on this pursuit. For instance, personalization is the new battleground in the apparel industry, as we saw in one of the earlier posts.
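To make the one-to-one idea concrete, here is a minimal, hypothetical Python sketch of tag-based personalization; the catalog items, tags, and customer preferences are invented for illustration and are not from any real apparel system:

```python
# Minimal sketch of one-to-one personalization: recommend the items whose
# attribute tags best overlap a single customer's observed preferences.
# Catalog, tags, and preferences are made up for illustration.

catalog = {
    "slim-fit shirt": {"casual", "slim", "cotton"},
    "formal blazer":  {"formal", "wool", "tailored"},
    "linen trousers": {"casual", "linen", "summer"},
}

def recommend(preferences: set, top_n: int = 2) -> list:
    # Score each item by how many of its tags match this customer's preferences.
    scored = sorted(catalog.items(),
                    key=lambda kv: len(kv[1] & preferences),
                    reverse=True)
    return [name for name, _ in scored[:top_n]]

print(recommend({"casual", "cotton"}))  # ['slim-fit shirt', 'linen trousers']
```

Real recommendation engines of course use far richer signals (behavior, collaborative filtering, deep models), but even this toy shows how personalization is per-customer computation rather than a function of scale.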

Apply Digital Thought

The cost-lowering paradigm is all about finding dramatic cost savings to open new markets – and about how best to find these breakthroughs and apply the new technologies of AI to the parts that make up the whole. It was advanced long ago that almost every work activity could and should be broken down into discrete tasks and measured in time, motion, and output. More importantly, performance levels and best practices can be applied to repetitive tasks to make them efficient. This will drive competitiveness at your company.

Applying the above seven levers, and making sure the organization actively pursues each one of them, will keep it alert – always in search of best practices to enhance and improve its products and services while keeping costs low. The obvious end result is an enhanced competitive stature for the organization in the market.

Stay tuned… in Part VI of this foray, we will continue to delve into the different ways to harness the new-age machine for, by and large, the innovation quotient that we need to invest in.


AI / ML – Past, Present & Future – Part 5a

Just a recap: in my last update we learnt the different ways to harness the new-age machine through automation and instrumentation and the related processes and analysis. In this excerpt we are going to delve into the different ways to harness the new-age machine to enhance the human experience.

Let us recognize that all these scenarios, in one way or another, are enhancing the human experience. Driving places today is so easy compared with following directions from a printout. With smart GPS systems, whether as an app on our smartphones or embedded in our vehicle's dashboard, it's far more difficult to get lost nowadays. The GPS systems we now take for granted provide a preview of coming attractions: the new machines enhancing more and more of our work and personal lives.

Below are some scenarios which illustrate how current intelligent systems are enhancing the human experience in each of these areas. Obviously, this is by no means an exhaustive list.

When it comes to personal choices, it's easy to see that the vast majority of us will prefer to work with an enhanced human – one equipped with all the details an intelligent system can put at their side. For these reasons, we see the forces of enhancement as positive and the concerns over automation-driven job substitution as ill-considered. As outlined earlier, the vast majority of white-collar work won't be replaced by these new machines but will be made better with them. I believe that more than 80% of teaching jobs, nursing jobs, legal jobs, and coding jobs will be made more productive, more beneficial, and more satisfying by computers – in other words, enhanced.

The story of human evolution is, of course, in many ways the story of our tools, from sharpened stones to the intelligent machines used by deep-learning pioneers today. This is the progression we are witnessing as we move into the digital age, isn't it?


One important but often overlooked aspect of enhancing work is recognizing the relationship between enhancing a job, role, or process and automating it. In many ways, automation and enhancement exist in a symbiotic, two-sides-of-the-same-coin way: to effectively enhance, one needs to automate. So the winners will be those who continue to believe in the progress created by technology, those who enhance, and those who understand the power of tools and adapt to using them effectively.

All of us, including senior management, need to enhance our current skills when it comes to engaging with others, leading, reasoning and interpreting, applying judgement, being creative, and applying the human touch. These behaviors and activities are still far outside the purview of current and near-future technologies and will remain so for years to come, even as the new machines become more capable. By 2020, senior executives project that employees will need to improve their performance in the areas below.


Major companies today are proving that even in a world of enhancement solutions, where people and machines work together in new ways, there's still value in being human. Our work ahead will require us to double down on the activities where humans have, and will continue to have, an advantage over silicon.

We are in an incredible time, when technology is significantly extending the envelope of human capability. Intelligent systems now allow us to do things at a level of productivity and profitability that even a few years ago would have seemed far-fetched and implausible. All of these possibilities, and many more, are being created by the injection of intelligence into our tools. We have the potential to become smarter because our tools are becoming smarter. It's these tools that are really at the heart of the progress we have made so far and the progress we will make ahead. Enhancement will be the force that raises the bar for every one of us, in every organization and every country in the world. If you can enhance the value you generate, you are doing the right things as machines begin to do everything. Enhancement also introduces new avenues of opportunity that we need to explore to keep ahead.

Stay tuned… in Part Vb of this foray, we will continue to delve into the different ways to harness the new-age machine to enhance market competitiveness and, by and large, the innovation quotient that we need to invest in.


AI / ML – Past, Present & Future – Part 4


Just a recap: in my last update we learnt the different aspects of what constitutes a business model that an organization should follow and how it impacts your overall foray into AI/ML implementation. In this excerpt we are going to delve into the different ways to harness the new-age machine through automation and instrumentation.

As I have pointed out repeatedly in the previous parts of this blog series on AI/ML, industry is riding the cusp of a huge new wave of automated work that is going to fundamentally change what millions and millions of people all around the world do, Monday through Friday, eight hours a day. Automating existing parts of your business with the new machine provides an opportunity to change the cost structure of your firm while increasing the velocity and quality of your operations. We need to understand what automation actually is, which parts of your business are best suited to be automated, which jobs will be most impacted, the benefits you can expect, and the problems to avoid.

Automation is the first step in a journey that exhibits the tendency of industrial change to continuously destroy old economic structures and replace them with new ones. This will result in revenue increases for the industry but, more importantly, in overall cost savings. Studies available online provide numbers that back up this general idea across the industries where automation is most prevalent.


This trend of applying automation technology to lower costs and improve productivity is playing out in nearly every industry. Like it or not, your competitor across the street will soon gain the massive benefit of digital automation of core processes. If you don't keep pace, your cost structure will soon be unsustainable. Additionally, the savings generated through automation are what will then pay for the coming digital innovations. Fortunately, most of us have a running start: we have been consuming automation for a long time, and much as with AI, once used it's not even noticed. Consider some examples of automation that we encounter, yet barely notice, while travelling:

  • Automated toll collection through EZ Passes, as we pass through the toll booth without stopping
  • Parking pass generation as we arrive at the airport parking
  • Receiving a boarding pass and checking in baggage at an airport kiosk
  • Getting cash from an ATM while walking to the departure gate

From your house to the airport gate, your trip was at least half an hour faster than in pre-automation days, if not more. In any organization, therefore, such automation is best targeted at core operations areas not visible to your customers. What if you could run these functions or processes at half the cost and with double the throughput? With continuous improvement and quality control – every transaction fully instrumented and recorded? With the new machine, you can.

Identifying your targets for immediate automation is the low-hanging fruit you should start with. Look for:

  1. Highly repetitive tasks
  2. Tasks with low level of human judgement
  3. Tasks requiring low level of empathy
  4. Tasks generating and handling high volumes of data
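As an illustration of screening work against these four criteria, here is a hypothetical Python sketch that scores candidate tasks; the task names and ratings are invented, and in practice they would come from a real process-analysis exercise:

```python
# Hypothetical sketch: rank candidate tasks by how many of the four
# automation criteria above they satisfy. Tasks and ratings are invented.

CRITERIA = ("repetitive", "low_judgement", "low_empathy", "high_data_volume")

def automation_score(task: dict) -> int:
    """Count how many of the four criteria a task satisfies (0-4)."""
    return sum(1 for c in CRITERIA if task.get(c, False))

tasks = [
    {"name": "invoice matching", "repetitive": True, "low_judgement": True,
     "low_empathy": True, "high_data_volume": True},
    {"name": "customer grievance handling", "repetitive": False,
     "low_judgement": False, "low_empathy": False, "high_data_volume": True},
]

# Highest-scoring tasks are the low-hanging fruit for automation.
for t in sorted(tasks, key=automation_score, reverse=True):
    print(t["name"], automation_score(t))
```

A task scoring 4 (like the invoice-matching example) is a prime candidate; a task scoring 1, dominated by judgement and empathy, is better treated as an enhancement opportunity than an automation target.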

Identifying your automation targets will give your teams a clear path to success, but a significant hurdle still remains in managing change within your organization. You will need to adapt the path depending on the complexity of what you are doing; what follows is a high-level walk-through, but the seven steps put together below for automating any process or task are basically the same.


As I have shown above, automation is our new loom, our new steam engine. The cost savings generated from these next levels of automation will provide the cash needed to fuel investments in new markets and new ideas. The data generated by automation is at the heart of creating new products, better customer relationships, and more transparency. Leaders who create ongoing momentum from automation – looking every quarter for new automation opportunities using the criteria and guidelines I have presented – will ensure they have the fuel needed to win. Also, please realize and remember that automation is not an end in its own right; it is simply a means to an end.

We are slowly moving into the instrumentation zone, where once an automated process is instrumented and tracked, an invisible framework of code emerges around the object – and this often provides more insight and value than the actual physical item itself. For instance, Amazon and Netflix know your tastes in literature and movies better than your family and friends do, without ever meeting you as a consumer. The race is on to win through instrumentation, and established companies are changing the rules of competition across many industries.


There are now three key rules of competition when it comes to instrumentation.


Why instrument everything and build solutions around information? Because doing so sets you on the path to being a "Know-It-All" business. With sensors and instrumentation, it's now possible to collect and analyze information about everything – to know everything about everything.
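To make the "know-it-all" idea concrete, here is a small, hypothetical Python sketch: once assets are instrumented, even simple aggregation of their readings yields operational insight. The asset names, sensor values, and threshold are made up for illustration:

```python
# Sketch of the "know-it-all" business: instrumented assets stream readings,
# and even basic aggregation surfaces which assets need attention.
# Asset names, temperature samples, and the threshold are invented.

from statistics import mean

readings = {
    "truck-17": [71.2, 73.5, 88.9, 90.1],   # engine temperature samples
    "truck-23": [69.8, 70.4, 71.1, 70.9],
}

def flag_anomalies(data: dict, threshold: float = 80.0) -> list:
    """Return assets whose average reading exceeds the threshold."""
    return [asset for asset, vals in data.items() if mean(vals) > threshold]

print(flag_anomalies(readings))  # ['truck-17']
```

Production telemetry pipelines are of course far more elaborate (streaming ingestion, anomaly models, alerting), but the principle is the same: instrument first, and the data tells you where to look.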

While instrumenting and collecting data from every physical thing in your enterprise offers a vast opportunity to review, analyze, and take business decisions, it also exposes sensitive, analyzed data to hackers. This is where organizations have to be careful to keep their internal, competitively sensitive data secure and not misused. Some key hacking horror stories below give a fair idea of why we need to be careful.


Competing with instrumentation is now becoming the default model for our modern economy. Instrumenting everything, accessing data scientists and other big data/analytics talent, and avoiding the dark side of instrumentation are all tactics you must adopt to get started. Fortunes will be won and lost depending on your organization's ability to leverage the upsides of instrumentation and mitigate its downsides.

Stay tuned… in Part V of this foray, we will continue to delve into the different ways to harness the new-age machine through enhancement of the human experience, market competitiveness and, by and large, the innovation quotient that we need to invest in.