Machine Learning – The Primer – Part 5

A quick recap: my previous post focused on the algorithms at the core of the ML/DL space and on how humans remain at the helm of machines' impact. This concluding post of the series looks at applications and organizations that have successfully implemented AI/ML frameworks, and how they are benefiting from them.

Have you ever wondered how Google Maps predicts traffic so accurately, how Amazon recommends products for you, or how self-driving cars work? If so, let us look at the top eight applications of Machine Learning.

  1. Google Maps Traffic Prediction

We will start with an application of Machine Learning we use in our day-to-day lives: Google Maps' traffic prediction. Google Maps is remarkably accurate at predicting traffic. Google uses anonymized location data from phones running the app to estimate how many cars are on a road and how fast they are moving. The more people who use the app, the more accurate the traffic data becomes.

It also incorporates traffic data from the Waze app, monitors traffic reports from local transportation departments, and keeps a history of traffic patterns on specific roads, all of which make its predictions far more accurate. In short, Google Maps uses Machine Learning algorithms to analyze and predict traffic from your data: the more data you feed it, the more accurate it becomes!
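The core idea of aggregating anonymized speed reports per road segment can be sketched in a few lines of Python. Everything here (the segment IDs, speeds, and congestion threshold) is invented for illustration and is not Google's actual pipeline:

```python
from collections import defaultdict

def estimate_congestion(pings, free_flow_kmh=60):
    """Average reported speeds per road segment and flag the slow ones.

    `pings` is a list of (segment_id, speed_kmh) tuples, standing in for the
    anonymized location reports phones send while navigating.
    """
    speeds = defaultdict(list)
    for segment, speed in pings:
        speeds[segment].append(speed)
    report = {}
    for segment, values in speeds.items():
        avg = sum(values) / len(values)
        # Mark a segment congested when traffic moves at under half free-flow speed
        report[segment] = {"avg_kmh": round(avg, 1),
                           "congested": avg < free_flow_kmh / 2}
    return report

print(estimate_congestion([("A1", 55), ("A1", 58), ("B2", 12), ("B2", 20)]))
```

With more phones reporting, the per-segment averages become more reliable, which is exactly why more users mean better predictions.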


  2. Google Translate

Google Translate enables us to translate documents, sentences, and websites instantly. These translations come from computers using statistical machine translation. When teaching someone a new language, we usually start with vocabulary and grammatical rules and then move on to constructing sentences, but those rules come with a great many exceptions. Combine all of these exceptions in a computer program, and the quality of the translation begins to break down.

Google Translate therefore took a slightly different approach. Instead of being taught every rule of a language, the computer finds the rules by itself. It does this with the help of Machine Learning, by examining billions of documents that have already been translated by human translators. Google Translate collects text from multiple sources; once the data is collected, the machine scans the text for patterns. When it detects a pattern, it reuses that pattern to translate similar text, and by repeating this process it accumulates millions of patterns. Google's translation is by no means perfect, but by constantly being fed newly translated text, it keeps getting smarter and translating better. This is how Google translates your text and speech.

Now, we will move on to the applications of Machine Learning by looking at Facebook’s Automatic Alt Text.


  3. Facebook’s Automatic Alt Text

Facebook’s Automatic Alt Text is a wonderful application of Machine Learning for blind users. Facebook rolled out this feature, called Automatic Alternative Text, to let blind users explore the Internet and, with it, experience more of the outside world. Blind users rely on screen readers that describe websites and apps. Facebook estimates that more than a billion photos are shared every day, but those pictures are of little use to blind users unless they come with text outlining what is in them. Facebook resolves this problem with Automatic Alt Text: when the built-in screen reader is turned on and a user taps on a picture, Facebook’s Machine Learning algorithms recognize the features of the image and generate alt text, which the screen reader then uses to describe the picture.

Recently, Twitter has also added a feature that makes use of alt text for images.

This was all about the application of Machine Learning that Facebook developed to help blind people experience the world.

Further, in this blog on ‘Applications of Machine Learning,’ we will see another application of Machine Learning, that is, Amazon’s recommendation engine.


  4. Amazon’s Recommendation Engine

Amazon uses Machine Learning with Big Data to power its recommendation engine. It involves three stages: events, ratings, and filtering.

In the events phase, Amazon tracks and stores data regarding customer behavior and their activities on the site. Every click the user makes is an event, and the record of the user is logged in the database. This way, different types of events are captured for different kinds of actions like a user liking a product, adding a product to the cart, or purchasing a product.

The next phase is ratings. Ratings are important as they reveal what a user feels about a product. The recommendation system assigns implicit values to different kinds of user actions, such as four stars for a purchase, three stars for a like, two stars for a click, and so on.

Amazon’s recommendation system also uses Natural Language Processing to analyze the feedback which is provided by the user. The feedback can be something like ‘the product was great but the packaging was not good at all.’ With the help of Natural Language Processing, the recommendation system calculates the sentiment score and then classifies the feedback as positive, negative, or neutral.
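A toy version of such sentiment scoring can be sketched with a hand-built word lexicon. This is only a minimal illustration of the idea; Amazon's real NLP models are far more sophisticated, and the word lists below are invented for the example:

```python
POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "awful"}
NEGATORS = {"not", "never", "hardly"}

def sentiment(feedback):
    """Score a review by counting lexicon hits, flipping polarity after a negator."""
    words = feedback.lower().replace(".", " ").replace(",", " ").split()
    score = 0
    for i, word in enumerate(words):
        negated = i > 0 and words[i - 1] in NEGATORS
        if word in POSITIVE:
            score += -1 if negated else 1
        elif word in NEGATIVE:
            score += 1 if negated else -1
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A mixed review: the positive hit ("great") cancels the negated positive ("not good")
print(sentiment("the product was great but the packaging was not good at all"))  # neutral
```

Counting lexicon hits with a simple negation flip is the crudest form of sentiment analysis; production systems learn these scores from labeled data instead.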

Now, the last phase is filtering. In this step, the machine filters the product based on the ratings and other user data. The recommendation system uses different kinds of filtering such as collaborative filtering, user-based filtering, and hybrid filtering.

In collaborative filtering, the choices of all users are compared to generate recommendations. For example, user X likes products A, B, C, and D, while user Y likes products A, B, C, D, and E. There is then a good chance that user X will also like product E, so the machine recommends product E to user X.

Next comes user-based filtering, in which a user's own browsing history (likes, purchases, and ratings) is taken into account before recommendations are made.

Finally, hybrid filtering mixes and matches collaborative and user-based filtering.
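The collaborative-filtering example above (user X and user Y) can be sketched as follows. This is a deliberately minimal illustration of the idea, not Amazon's production system:

```python
def recommend(target, likes):
    """Suggest items liked by the most similar user that `target` has not seen.

    `likes` maps each user to the set of products they liked.
    """
    seen = likes[target]
    best_user, best_overlap = None, 0
    for user, items in likes.items():
        if user == target:
            continue
        overlap = len(seen & items)  # how many liked products the two users share
        if overlap > best_overlap:
            best_user, best_overlap = user, overlap
    return sorted(likes[best_user] - seen) if best_user else []

likes = {"X": {"A", "B", "C", "D"},
         "Y": {"A", "B", "C", "D", "E"}}
print(recommend("X", likes))  # ['E'] -- product E is recommended to user X
```

Real systems compare millions of users with weighted similarity measures rather than raw overlap counts, but the principle is the same.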

So, this is how Amazon recommends products for you. The applications of Machine Learning are not limited to just Amazon; organizations such as Alibaba, eBay, and Flipkart also use the same approach.

Going ahead in this blog on ‘Applications of Machine Learning,’ we will see about spam detection in Gmail.


  5. Spam Detection in Gmail

Spam detection is the most commonly used filtering mechanism in our day-to-day lives. Its algorithms are regularly updated based on newly discovered threats, advances in technology, and how users react to mail marked as spam. Spam filters remove threats using, among other things, text filters and filters based on the sender's history; Gmail combines text filters, client filters, and spam filters of this kind.
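As a rough illustration of how such a text filter can work, here is a minimal naive Bayes classifier trained on a few hypothetical messages. This is a sketch of the general technique, not Gmail's actual implementation:

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies per class from (text, label) training pairs."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in messages:
        word_counts[label].update(text.lower().split())
        label_counts[label] += 1
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the class with the highest log-probability, using add-one smoothing."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

training = [("win a free prize now", "spam"),
            ("free money click now", "spam"),
            ("meeting notes attached", "ham"),
            ("see you at the meeting", "ham")]
word_counts, label_counts = train(training)
print(classify("free prize now", word_counts, label_counts))  # spam
```

A production filter would add many more signals (sender reputation, links, user feedback) on top of text statistics like these.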

That is the outline of spam detection and of how Gmail decides which emails are spam. The real-time process is far more complex and consumes a great deal of data, and similar techniques are also used in fraud detection.

After spam detection in the applications of Machine Learning, we will move on to Amazon Alexa, which is another wonderful application of Machine Learning.


  6. Amazon Alexa

Amazon Alexa is the brain and voice of the Echo. It can handle several tasks, such as giving the weather report and playing your favorite song. The word 'Alexa' is a wake word: as soon as you say it, the device starts recording your voice, and when you finish speaking, it sends the recording to Amazon. The service that processes this recording, run by Amazon, is called the Alexa Voice Service (AVS), and it is one of the magnificent applications of Machine Learning.

AVS interprets the command from the recorded audio. It is essentially a voice-recognition service that can work with many other online services.

Commands interpreted by Alexa can be simple, like asking for the time or a weather report. Once the command has been recognized, AVS responds with an audio file sent from Amazon's servers, telling you the time or the weather.

You can also give it more complex tasks; for example, if you ask Alexa to tell you about the 'Applications of Machine Learning,' AVS will search for the keywords you have set up on its servers.

Alexa can also control your home appliances by voice commands if you are using smart electronic devices such as Philips smart bulbs. You can give commands to Alexa to switch on or off the lights. You can even link it to Domino’s and order pizza by giving commands to Alexa. Isn’t it a magnificent application of Machine Learning?

Echo and Alexa can perform a lot of things. Amazon is continuously adding more skills to Alexa that will make it better.

Well, if you consider these products, Amazon is not the only company in the market with this application of Machine Learning. Google uses 'Ok Google' for its voice command service, Apple uses 'Siri,' and Microsoft uses 'Cortana.' Even though they use the same approach, i.e., voice commands processed on cloud servers, they are arguably not as good as Alexa.

Now, we will move on to another superb application of Machine Learning, i.e., self-driving cars!


  7. Tesla’s Self-driving Cars

Tesla's self-driving cars are another prominent application of Machine Learning. A recent study showed that over 90 percent of road accidents are caused by human error, and these mistakes are often catastrophic, leading to a massive number of unnecessary deaths that safer driving could have prevented. This is where self-driving cars come into the picture, taking the automobile industry in a new and safer direction. These autonomous cars can be safer than human-driven cars in part because they are not affected by factors such as a driver's illness or emotions.

Self-driving cars persistently observe their environment, scanning in all directions before making a move. Because they never lapse in observation, they can react more consistently than human drivers.

Working of Self-driving Cars

Self-driving cars are a real-world example of Machine Learning that mainly uses three different technologies: IoT sensors, IoT connectivity, and software algorithms.

Self-driving cars are not limited to Tesla; today the most famous ones are made by Tesla and Google. Tesla cars examine their surroundings using a software system called Autopilot. Just as we use our eyes to see the world around us, Autopilot uses high-tech cameras to recognize objects; it then interprets that information and acts on the best conclusion it can draw. This major application of Machine Learning is revolutionizing the automobile industry.

Next, in this blog on the applications of Machine Learning, we will look at the Netflix movie recommendation system.

  8. Netflix Movie Recommendation

Netflix's recommendation system drives about 80% of the movies and TV shows that are streamed, which means the majority of what we decide to watch on Netflix is the result of decisions made by its algorithm.

Netflix uses Machine Learning algorithms to recommend movies and shows that you might not have chosen on your own. To do this, it looks for patterns within the content.

Netflix's system rests on three main tools: the first is Netflix members, the second is taggers who understand everything about the content, and the third is the Machine Learning algorithms that take all of this data and put it together.

Netflix draws on different kinds of data from these profiles. It keeps track of what you watch, what you watch next after finishing your current video, what you watched earlier (even a year ago), and at what time of day you watch. This data is the first leg of the metaphorical tool.

This information is then combined with more data to understand the content of the shows you watch. That data is gathered from dozens of in-house and freelance staff who watch every minute of every show on Netflix and tag them. All the tags and user-behavior data are then fed into a very sophisticated Machine Learning algorithm that figures out what matters most.

Together, these three tools are used to analyze the tastes of communities around the world: people who watch the same kinds of things that you watch. Viewers are grouped into thousands of different taste groups, which determine the rows of similar recommended content that appear at the top of their screens.

Across the globe, the tags used are the same for all the Machine Learning algorithms. The data Netflix feeds into its algorithms can be broken down into two types: implicit and explicit.

Explicit data is what you literally tell Netflix: you give a thumbs-up to Friends, for example, and Netflix records it.

Implicit data is behavioral data. You did not explicitly tell Netflix that you like Black Mirror, but you watched the whole thing in two nights, so Netflix infers your preference from that behavior. As a matter of fact, the majority of useful data is implicit.

There are many more real-world applications of Machine Learning, but those described in this blog are the major ones. Now that we have seen the various applications of Machine Learning that are revolutionizing the world, we are moving into a next generation in which full-fledged Machine Learning technology will set a whole new direction.

This is the last blog in the series on Machine Learning and the related space. I hope you all enjoyed reading through the posts as much as I enjoyed putting them together. Stay tuned until I come back with yet another series on a technology topic.

Please feel free to review my other series of posts 

Authored by Venugopala Krishna Kotipalli

Machine Learning – The Primer – Part 4

A quick recap: my previous post focused on the tools, techniques, frameworks, and hardware that are prominent in implementing an AI/ML framework in any organization. In this post, we dig a little deeper into the algorithms at the core of the ML/DL space and how humans remain at the helm of machines' impact.

Algorithms are becoming an integral part of our daily lives. About 80% of the viewing hours on Netflix and 35% of retail sales on Amazon are driven by automated recommendations from so-called AI/ML engines. Designers at companies like Facebook know how to use notifications and gamification to increase user engagement, exploiting and amplifying human vulnerabilities such as the need for social approval and instant gratification. In short, we are nudged, and sometimes even tricked, into making choices by algorithms that are constantly learning. Whether it is the products and services we buy, the information we consume, or whom we mingle with online, algorithms play an important role in practically every aspect of our lives.

In the world of AI, a challenge that is increasingly discussed is the bias that creeps into algorithms. Because of these biases, leaving decisions to algorithms can have unintended consequences, and since the algorithms deployed by tech companies are used by billions of people, the damage caused by bias can be significant. Moreover, we have a tendency to believe that algorithms are predictable and rational, so we tend to overlook many of their side effects.

How do today’s algorithms differ?

In the past, developing an algorithm involved writing a series of steps that a machine could implement repeatedly without getting tired or making a mistake. In comparison, today’s algorithms, based on machine learning, do not follow a programmed sequence of instructions but ingest data and figure out for themselves the most logical sequence of steps and then keep working on improvement as they consume more and more data.

Machine learning itself has grown more sophisticated. In traditional (supervised) ML, a programmer usually specifies what patterns to look for; the performance of these methods improves as they are exposed to more data, but only up to a point. In deep learning, programmers do not specify what patterns to look for; the algorithm evaluates the training data in different ways to identify the patterns that truly matter, some of which human beings may not be able to identify themselves.

Deep learning models contain an input layer of data, an output layer holding the desired prediction, and multiple hidden layers in between that combine patterns from previous layers to identify increasingly abstract and complex patterns in the data.

Unlike traditional algorithms, the performance of deep learning algorithms keeps improving as more data is fed.
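The layered structure described above can be sketched as a tiny forward pass in plain Python. The layer sizes and random weights here are arbitrary illustrations; a real network would learn its weights from data:

```python
import random

random.seed(0)  # deterministic weights for the example

def relu(vector):
    """Nonlinearity applied between layers."""
    return [max(0.0, x) for x in vector]

def layer(inputs, weights, biases):
    # One dense layer: each output is a weighted sum of the inputs plus a bias
    return [sum(w * x for w, x in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# Input layer (4 features) -> two hidden layers -> output layer (1 prediction)
W1, b1 = make_layer(4, 8)
W2, b2 = make_layer(8, 8)
W3, b3 = make_layer(8, 1)

def forward(x):
    h1 = relu(layer(x, W1, b1))   # early hidden layer: simple patterns
    h2 = relu(layer(h1, W2, b2))  # deeper hidden layer: combinations of patterns
    return layer(h2, W3, b3)      # output layer: the prediction

print(len(forward([0.5, -0.2, 0.1, 0.9])))  # 1
```

Training consists of adjusting all these weights so the output layer's prediction matches the labeled data, and performance keeps improving as more data is fed in.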


Decision making and avoiding unintended consequences

AI involves enabling computers to do tasks that human beings can handle. This means computers must be able to reason, understand language, navigate the visual world, and manipulate objects; machine learning enhances this by learning from experience. As algorithms become more sophisticated and develop newer capabilities, they are going beyond their original role of decision support into decision making. The flip side is that as algorithms become more powerful, there are growing concerns about their opaqueness and unknown biases. The benefits of algorithms seem to far outweigh the small chance of an algorithm going rogue now and then, but it is important to recognize that while algorithms do an exceptionally good job of achieving what they are designed to achieve, they are not completely predictable. Like some medicines, they have side effects, and these consequences are of three types.

Perverse results affect precisely what is measured and therefore have a better chance of being detected. Unexpected drawbacks do not affect the exact performance metrics being tracked, which makes them difficult to avoid. Facebook's Trending Topics algorithm is a good example: it ensured that the highlighted stories were genuinely popular, but it failed to question the credibility of sources and inadvertently promoted fake news. Inaccurate and fabricated stories were widely circulated in the months leading up to the 2016 US Presidential election; the top 20 false stories in that period received greater engagement on Facebook than the top 20 legitimate ones.

Content and Collaborative filtering systems

Content-based recommendation systems start with detailed information about a product's characteristics and then search for other products with similar qualities. In the same way, content-based algorithms can match people based on similarities in demographic attributes (age, occupation, location) and in shared interests and ideas discussed on social media.

Collaborative filtering recommendation algorithms do not focus on a product's characteristics; they look for people who use the same products we do. For example, two of us may not be connected on LinkedIn, but if we have more than a hundred mutual connections, we will get a notification suggesting that we connect.
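The content-based side can be sketched as a similarity search over product attributes. The catalog and threshold below are invented purely for illustration:

```python
def jaccard(a, b):
    """Overlap between two attribute sets (0 = nothing shared, 1 = identical)."""
    return len(a & b) / len(a | b)

def content_based_matches(target, catalog, threshold=0.5):
    """Recommend products whose attributes resemble the target's."""
    attrs = catalog[target]
    return sorted(product for product, a in catalog.items()
                  if product != target and jaccard(attrs, a) >= threshold)

catalog = {
    "trail shoe":  {"outdoor", "running", "waterproof"},
    "road shoe":   {"running", "lightweight"},
    "hiking boot": {"outdoor", "waterproof", "leather"},
    "dress shoe":  {"formal", "leather"},
}
print(content_based_matches("trail shoe", catalog))  # ['hiking boot']
```

Note the contrast with collaborative filtering: here only the products' characteristics matter, and no user behavior is consulted at all.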

Digital Framework

Algorithms also leverage the principle of the digital neighborhood, and one of the earliest pioneers of this principle was Google. In the late 1990s, when the internet was about to take off, the most popular online search engines relied primarily on the text content within web pages to determine relevance. Google's insight was different: if a lot of other sites link to our website, then our website must be worth reading. It is not what we know but how many people know us that earns a website higher rankings. Research reveals that when Oprah Winfrey recommends a book, sales increase significantly, but books recommended by Amazon also get a significant boost. That is why digital neighborhoods are so important: for products that are recommended on many other product pages, recommendation algorithms drive a dramatic increase in sales. Spotify initially used a collaborative filter but later combined it with a content-based method.
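The "links as votes" idea described here is the basis of Google's PageRank algorithm, which can be sketched as a simple iterative computation over a toy link graph (the four-page web below is invented for the example):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Rank pages by incoming links: each page shares its rank among its outlinks."""
    pages = list(links)
    rank = {page: 1 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Toy web: 'c' is linked to by both 'b' and 'd', so it ends up ranked highest
links = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # c
```

A link from a highly ranked page counts for more than a link from an obscure one, which is exactly the digital-neighborhood effect the text describes.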

The predictability-resilience paradox

AI began with expert systems, i.e., systems that capture the knowledge and experience of experts. These systems suffer from two drawbacks:

  • They do not automate the decision-making process.
  • They cannot encode a response to every kind of situation.

We can create intelligent algorithms in highly controlled environments, expert-systems style, to ensure their behavior is highly predictable, but such algorithms will stumble on problems they were not prepared for. Alternatively, we can use machine learning to create resilient but somewhat unpredictable algorithms. This is the predictability-resilience paradox. Much as we may desire fully explainable and interpretable algorithms, the balance between predictability and resilience seems to be tilting inevitably toward the latter.

Technology is most useful when it helps human beings solve sophisticated problems that involve creativity, and solving such problems means moving away from fully predictable systems. One way to resolve the predictability-resilience paradox is to use multiple approaches: in a self-driving car, machine learning might run the show, but when there is confusion about a road sign, a set of rules might kick in.

Environmental Factors

Human behavior is shaped by both hereditary and environmental factors, and the same is true of algorithms. There are three components we need to consider: the data, the algorithms themselves, and the people who build and use them.

While data, algorithms, and people each play a significant role in determining the outcomes of the system, the sum is greater than the parts: the complex interactions among the various components have a big impact of their own.

The Need for Trust

Many professions will be redefined if algorithmic systems are adopted intelligently by users, but if there are public failures, we cannot take user adoption for granted. We might successfully build intelligent diagnostic systems and effective driverless cars, yet in the absence of trust, doctors and passengers will be unwilling to use them. It is therefore important to increase trust in algorithms; otherwise, they will not gain acceptance. According to some estimates, driverless cars could save up to 1.5 million lives in the US alone and close to 50 million lives globally over the next 50 years. Yet in a poll conducted in April 2018, 50% of respondents said they considered autonomous cars less safe than cars driven by human beings.

Rules and Regulations

Decision making is the most human of our abilities, and today's algorithms are advancing rapidly into this territory. We must therefore develop a set of rights, responsibilities, and regulations to manage, and indeed thrive in, this world of technological innovation. Such a bill of rights should have four components.

AI systems have mastered games; now it is time to master reality. Some time back, Facebook developed two bots and trained them in negotiation skills. The bots were exposed to thousands of negotiation games and taught how conversations evolve in a negotiation and how they should respond. The outcome of this training far exceeded expectations: the bots learned how to trade items and even developed their own shorthand language. When the bots were set to negotiate with human beings again, the people on the other side did not even realize it!

Stay tuned! In Part 5 of this foray, we will look into the details of the top 8 examples of organizations that have successfully implemented AI/ML frameworks and how they are benefiting from them.



Machine Learning – The Primer – Part 3

A quick recap: my previous post looked at the amount and structure of data that needs to be fed into machine learning and the process used for its implementation. In this post, we continue with the tools, techniques, and hardware required for these processes to excel.

While most of us are still appreciating the early applications of machine learning, the field continues to evolve at quite a promising pace, introducing more advanced approaches such as Deep Learning. Why is DL so good? It is simply great in terms of accuracy when trained with huge amounts of data, and it plays a significant role in filling the gap when a scenario is challenging even for the human brain. Logically enough, this has contributed to a whole slew of new frameworks. Below are the top 10 frameworks available and their intended use in the relevant industry space.


The intention here was to provide a comprehensive analysis of the best tools and a clear recommendation. So what is the final advice?

The speed of your algorithm is dependent on the size and complexity of your data, the algorithm itself, and your available hardware. In this section, we’re going to focus on hardware considerations when:

  • Training the model
  • Running the model in production

Some things to remember:

If you need results quickly, try machine learning algorithms first. They are generally quicker to train and require less computational power. The main factor in training time will be the number of variables and observations in the training data.

Deep learning models will take time to train. Pre-trained networks and public datasets have shortened the time to train deep learning models through transfer learning, but it is easy to underestimate the real-world practicalities of incorporating your training data into these networks. These algorithms can take anywhere from a minute to a few weeks to train depending on your hardware and computing power. 

Training the Model

Desktop CPUs: are sufficient for training most machine learning models but may prove slow for deep learning models.

CPU Clusters: Big data frameworks such as Apache Spark™ spread the computation across a cluster of CPUs.

The cluster or cloud option has gained popularity due to the high costs associated with obtaining the GPUs, since this option lets the hardware be shared by several researchers. Because deep learning models take a long time to train (often on the order of hours or days), it is common to have several models training in parallel, with the hope that one (or some) of them will provide improved results.

GPUs: are the norm for training most deep learning models because they offer dramatic speed improvements over training on a CPU. To reduce training time it is common for practitioners to have multiple deep learning models training simultaneously (which requires additional hardware).

Running the Model in Production

The trend toward smarter and more connected sensors is moving more processing and analytics closer to the sensors. This shrinks the amount of data that is transferred over the network, which reduces the cost of transmission and can reduce the power consumption of wireless devices.

Several factors will drive the architecture of the production system:

  • Will a network connection always be available?
  • How often will the model need to be updated?
  • Do you have specialized hardware to run deep learning models?

Will a network connection always be available?

Machine learning and deep learning models that run on hardware at the edge will provide quick results and will not require a network connection.

How often will the model need to be updated?

Powerful hardware will need to be available at the edge to run the machine learning model, and it will be more difficult to push out updates to the model than if the model resided on a centralized server.

Tools are available that can convert machine learning models, which are typically developed in high-level interpreted languages, into standalone C/C++ code, which can be run on low-power embedded devices.

Do you have specialized hardware to run deep learning models?

For deep learning models, specialized hardware is typically required due to the higher memory and compute requirements.

We have now looked through the different frameworks, tools, and techniques prevalent in the industry for machine and deep learning initiatives in your organization, along with recommendations on which framework fits your specific needs and the hardware required to run these models and frameworks.

Stay tuned! In Part 4 of this foray, we will look into the usage of all these machine and deep learning algorithms, models, and frameworks that are prevalent in the industry.


Machine Learning – The Primer – Part 2

A quick recap: my previous post looked at the basic terms in machine learning and the process used for its implementation. In this post, we continue with the steps in the machine learning process and how it can be enhanced and made more efficient.

There is huge value in staying on top of domain knowledge in your industry or process, as it helps you filter the right data set for machine learning. The core of this discussion revolves around the data that exists in your organization and its structure.

To characterize the data present in an organization and its structure, we pose the following three questions:

1.       Is Your Data Tabular?

Traditional machine learning techniques were designed for tabular data, which is organized into independent rows and columns. In tabular data, each row represents a discrete piece of information (e.g., an employee’s address).

There are ways to transform tabular data to work with deep learning models, but this may not be the best option to start off with.

Tabular data can be numeric or categorical (though eventually the categorical data would be converted to numeric).
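The conversion of categorical data to numeric mentioned above is commonly done with one-hot encoding, sketched here on a tiny invented table:

```python
def one_hot_encode(rows, column):
    """Replace a categorical column with one 0/1 column per category."""
    categories = sorted({row[column] for row in rows})
    encoded = []
    for row in rows:
        new_row = {k: v for k, v in row.items() if k != column}
        for category in categories:
            new_row[f"{column}_{category}"] = 1 if row[column] == category else 0
        encoded.append(new_row)
    return encoded

rows = [{"salary": 50, "dept": "sales"},
        {"salary": 70, "dept": "tech"}]
print(one_hot_encode(rows, "dept"))
```

After this step every column is numeric, which is the form most traditional machine learning algorithms expect.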

2.       If You Have Non-Tabular Data, What Type Is It?

Images and Video: Deep learning is more common for image and video classification problems. Convolutional neural networks are designed to extract features from images that often result in state-of-the-art classification accuracies – making it possible to discern high-level differences such as cat vs. dog.

Sensor and Signal: Extracting features from signals and then using these features with a machine learning algorithm. More recently, signals have been passed directly to LSTM (Long Short Term Memory) networks, or converted to images (for example by calculating the signal’s spectrogram), and then that image is used with a convolutional neural network. Wavelets provide yet another way to extract features from signals.

Text:  Text can be converted to a numerical representation via bag-of-words models and normalization techniques and then used with traditional machine learning techniques such as support vector machines or naïve Bayes. Newer techniques use text with recurrent or convolutional neural network architectures. In these cases, text is often transformed into a numeric representation using a word-embedding model such as word2vec.
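A minimal bag-of-words representation like the one described can be sketched as follows (normalization steps such as stemming or TF-IDF weighting are omitted for brevity):

```python
def bag_of_words(documents):
    """Turn raw text into fixed-length numeric vectors of word counts."""
    vocab = sorted({word for doc in documents for word in doc.lower().split()})
    vectors = [[doc.lower().split().count(word) for word in vocab]
               for doc in documents]
    return vocab, vectors

vocab, vectors = bag_of_words(["the cat sat", "the cat ate the fish"])
print(vocab)    # ['ate', 'cat', 'fish', 'sat', 'the']
print(vectors)  # [[0, 1, 0, 1, 1], [1, 1, 1, 0, 2]]
```

These fixed-length count vectors are what traditional classifiers such as support vector machines or naive Bayes consume; word-embedding models like word2vec replace the counts with dense learned vectors.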

3.       Is Your Data Labeled?

To train a supervised model, whether for machine learning or deep learning, you need labeled data.

If You Have No Labeled Data

Focus on machine learning techniques (in particular, unsupervised learning techniques). Labeling for deep learning can mean annotating objects in an image, or each pixel of an image or video, for semantic segmentation. The process of creating these labels, often referred to as “ground-truth labeling,” can be prohibitively time-consuming.

If You Have Some Labeled Data

Consider transfer learning: because it focuses on training only a small number of parameters in the deep neural network, it requires a smaller amount of labeled data.

Another approach for dealing with small amounts of labeled data is to augment that data. For example, it is common with image datasets to augment the training data with various transformations on the labeled images (such as reflection, rotation, scaling, and translation).
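A minimal sketch of such augmentation, operating on a raw NumPy array standing in for an image (libraries like `torchvision.transforms` or `imgaug` do this at scale, with many more transformations):

```python
import numpy as np

def augment(image):
    """Yield the original image plus simple label-preserving variants."""
    yield image
    yield np.fliplr(image)           # horizontal reflection
    yield np.flipud(image)           # vertical reflection
    for k in (1, 2, 3):
        yield np.rot90(image, k)     # 90/180/270-degree rotations

image = np.arange(16).reshape(4, 4)  # stand-in for a 4x4 grayscale image
variants = list(augment(image))
print(len(variants))                 # 6 training samples from 1 labeled image
```

Because the label of a cat photo is still "cat" after a flip or rotation, each labeled example is multiplied several times over at no labeling cost.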

If You Have Lots of Labeled Data  

With plenty of labeled data, both machine learning and deep learning are available options. The more labeled data you have, the more likely it is that deep learning techniques will be the more accurate choice.

The first step in initiating any machine learning project is to identify the different steps/tasks that make up the business process at hand. If you have a large data set, deep learning techniques can produce more accurate results than machine learning techniques, because deep learning uses more complex models with more parameters that can be more closely fit to the data.

Some areas are more suited to machine learning, others to deep learning. Here we present six common tasks:

We then look at each of these tasks and cover related examples, applications, the inputs required, the common algorithms applied, and whether the task is more often approached with machine learning or deep learning.

While one task alone might be more suited to machine learning, your full application might involve multiple steps that, taken together, are better suited to deep learning. So, how much data makes a "large" dataset? It depends. Some popular image classification networks available for transfer learning were trained on a dataset of 1.2 million images from 1,000 different categories. If you want to use machine learning and have a laser focus on accuracy, be careful not to overfit your data.

Overfitting happens when your algorithm is too closely fitted to its training data and cannot generalize to a wider data set; the model can't properly handle new data that doesn't match its narrow expectations. The data needs to be representative of your real-world data, and you need to have enough of it. Once your model is trained, use test data to check that it is performing well; the test data should be completely new data.

Now we have seen the amount and format of data required, and how it must be structured, for machine learning or deep learning processes to be implemented. Stay tuned…. In Part 3 of this foray, we will look into the details of the tools, techniques, and hardware required to support the machine learning process as we go deeper into the learning portions of this AI journey we have embarked upon.

Please feel free to review my other series of posts 

Authored by Venugopala Krishna Kotipalli

Machine Learning – The Primer – Part I

In this series of posts, I will focus primarily on the world of machine learning. We'll start with an overview of how machine learning models work and how they are used. This may feel basic if you've done statistical modeling or machine learning before. By giving "computers the ability to learn," we mean passing the task of optimization (weighing the variables in the available data to make accurate predictions about the future) to the algorithm. Sometimes we can go further, offloading to the program the task of specifying which features to consider in the first place. Let us first understand some basic definitions.

Machine learning: Machine learning lets us tackle problems that are too complex for humans to solve directly by shifting some of the burden to the algorithm. The goal of most machine learning is to develop a prediction engine for a particular use case. Common machine learning techniques include decision trees, support vector machines, and ensemble methods.

Deep learning: A subset of machine learning modeled loosely on the neural pathways of the human brain. Deep refers to the multiple layers between the input and output layers. In deep learning, the algorithm automatically learns what features are useful. Common deep learning techniques include convolutional neural networks (CNNs), recurrent neural networks (such as long short-term memory, or LSTM), and deep Q networks.

Algorithm: The set of rules or instructions that will train the model to do what you want it to do. An algorithm will receive information about a domain (say, the films a person has watched in the past) and weigh the inputs to make a useful prediction (the probability of the person enjoying a different film in the future).

Model: The trained program that predicts outputs given a set of inputs.

The terms are related as follows: artificial intelligence encompasses the world of machine learning, and machine learning in turn encompasses the world of deep learning.

Why does machine learning matter?

Artificial intelligence will shape our future more powerfully than any other innovation in this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.

The rate of acceleration is already astounding. Over the past four decades we endured a couple of AI winters and periods of false hope, caused by limited computing power, intractable solutions, and the difficulty of common-sense knowledge and reasoning (such as face recognition) and of qualifying the right problems. Rapid advances in data storage and computer processing power have dramatically changed the game in recent years. In 2015, Google trained a conversational agent (AI) that could not only convincingly interact with humans as a tech-support helpdesk, but also discuss morality, express opinions, and answer general fact-based questions.

So how is all this happening? How are machines becoming smart enough to beat humans at most games, as DeepBlue did in chess, AlphaGo in Go, and DeepMind's agents in 49 Atari games? Much of our day-to-day technology is powered by artificial intelligence. Point your camera at the menu during your next trip to Taiwan and the restaurant's selections will magically appear in English via the Google Translate app. Today AI is used to design evidence-based treatment plans for cancer patients, instantly analyze results from medical tests so they can be escalated to the appropriate specialist immediately, and conduct scientific research for drug discovery. In everyday life, it's increasingly commonplace to find machines in roles traditionally occupied by humans. Really, don't be surprised if a little housekeeping delivery bot shows up instead of a human the next time you call the hotel desk to send up some toothpaste.

In this series, we’ll explore the core machine learning concepts behind these technologies. By the end, you should be able to describe how they work at a conceptual level. The process for implementing machine learning is given below. Needless to say, it's a process that needs to be institutionalized so that it keeps improving.

Gathering Data: You can acquire data from many sources; it might be data that’s held by your organization or open data from the Internet. There might be one dataset, or there could be ten or more.

Cleaning Data: You must come to accept that data will need to be cleaned and checked for quality before any processing can take place. These processes occur during the prepare phase.

Build Models: The processing phase is where the work gets done. The machine learning routines that you have created perform this phase.

Gain Insights and Report: Finally, the results are presented. Reporting can happen in a variety of ways, such as reinvesting the data back into a data store or reporting the results as a spreadsheet or report.
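The four phases above can be sketched end to end in a few lines. Everything here (the data, the trivial least-squares "model") is made up purely for illustration; the point is the shape of the pipeline, not the model.

```python
def gather():
    # Gather: in practice, pull from databases, files, APIs, or open data.
    return [(1.0, 2.1), (2.0, 3.9), (None, 5.0), (3.0, 6.2), (4.0, 7.8)]

def clean(rows):
    # Clean: drop rows with missing values (real cleaning is far more involved).
    return [(x, y) for x, y in rows if x is not None and y is not None]

def build_model(rows):
    # Build: fit a one-feature linear model by ordinary least squares.
    xs, ys = zip(*rows)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in rows)
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def report(model):
    # Report: write back to a data store, spreadsheet, or report.
    slope, intercept = model
    print(f"y = {slope:.2f} * x + {intercept:.2f}")

report(build_model(clean(gather())))
```

In a real project each phase is its own substantial effort, but the flow (gather, clean, build, report) stays the same and repeats as the model is improved.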

Stay tuned…. In Part 2 of this foray, we will continue with the steps of the machine learning process as we go deeper into the learning portions of this AI journey we have embarked upon.

Please feel free to review my other series of posts 

Authored by Venugopala Krishna Kotipalli


AI / ML – Past, Present & Future – Part 6

Just a recap: in my last update, we learnt the different ways to harness the new-age machine to enhance the market competitiveness of your products and services. In this excerpt we are going to dwell upon the different ways to harness the new-age machine to raise, by and large, the innovation quotient that we need to invest in.

As we have seen throughout this series of posts, the innovation related to intelligent systems and the digital economy is both a catalyst and an outcome: it will allow your organization to discover opportunities that were never before visible or addressable. With innovation as the centre stone, it can't be a nice-to-have side project; it is central to remaining relevant in the great digital build-out that we are experiencing and that, of course, lies ahead of us. While machines will do more and more of our work, the process of innovation will allow us to discover entirely new things to do, things impossible to imagine and hard to predict, but they will be at the core of what we do in the future.

Reports today suggest that a new economy is emerging, with a flurry of job categories that even a few years ago would have been hard to predict: social media consultants, search engine optimizers, full-stack engineers, content curators, and chief happiness officers. These are all jobs that few of the tech economy's early pioneers, the Charles Babbages of their day, would have imagined. AI is changing our world already, but in reality we have only begun to scratch the surface of where it will take us over the next 20, 50, 100 years. Your job is to imagine the new forms of value you can create with the new machines of the new revolution. Institutionalizing openness to the fruits of innovation is a hugely important role that you, as a leader of the future, need to play. Openness to innovation is not just the job of the company's formal R&D department; it has to be a culture that each and every one of you exhibits.

We have learned right through this series that the new machine will be your platform for innovation. Once you are instrumenting, automating, tracking, and analyzing the core operations of your business and applying machine learning, innovation opportunities will be consistently unearthed. Innovation is thus a rich term with many different attributes and applications. It can be applied to many areas, including some I have listed below.

With product innovation, your team will gain continual insight into how your products are being used, what customer frustration points exist, and where the obvious areas for improvement are. These inputs can't be gauged without a thorough AI system in place. Once you have automated, instrumented, and enhanced your company's activities, the associated AI engines can be applied to innovation. The team will be greatly enhanced by the application of the new machine, primarily because it radically accelerates the scale and speed of the innovation process. As such, once the new machine is widely adopted, the rate of human progress in the 21st century (as defined by the cumulative growth of human knowledge and the pace of innovation) could be as much as 1,000 times the average rate of the 20th century. Obviously, the general factors affecting innovation in an organization (opinions, ideas, emotions, organizational inertia, etc.) could bring that prediction down by perhaps two orders of magnitude. Even so, an innovation quotient of 10 times is far higher than traditional R&D ways.

One of the core principles in these posts is that while machines can do many things, practical application should be focused on specific business processes and customer experiences. When you are making discovery investments, start at the process and experience level and imagine how the process can be restructured and reinvented with digital.

Discovery can be a risk. Invest too much in the wrong ideas and you go broke. Wait for somebody else to do it and you can miss the market opportunity of a lifetime. So what's the best practice for bringing about this new form of innovation? We find too many managers looking for "the next great breakthrough," but that doesn't work. The opposite approach is to ask how the new machine adds the most value; that is, by looking for continuous, incremental improvements, hitting singles on a consistent basis. This primarily caters to "change for the better," implemented as small, continuous improvements that in time have a large impact. Your goal should be to become a Know-It-All business via instrumentation, sensors, big data, and analytics. In the real world, organizations should establish a portfolio of initiatives focused on discovery, with a clear life-cycle methodology that manages these initiatives from inception through to ultimate success or failure. Central to their generative acts will be the belief that something better can be created. The true core of discovery is, after all, hope.

Obviously, please don't forget that with the gods come the devils as well. While most of my posts usher in AI as an age of miracles and technological marvels, we should also foresee a world of robots and ever more powerful human-like machines taking over. Thus, strike a balance.

In these posts, we have argued that the information technology innovations and investments of the past four decades are merely a precursor to the next waves of digitization, which will have truly revolutionary impacts on every aspect of work, society, and life.

As the last S-curve's growth rate continues its inexorable journey south, the new S-curve is gathering momentum, and so are the companies poised to lead this new charge. These are the companies that have learned how to master the 3 Ms: the new raw materials of the digital age (data), the new machines (intelligent systems), and the new models (business models that optimize the monetization of data-based personalization). These are the companies that understand how to build and operate a know-it-all business, that understand that intelligent machines aren't to be feared but embraced and harnessed, and that are energized by the unwritten future rather than just trying to hang onto the glories of the past.

Below are the few mandatory steps that any organization should embark upon and leaders from organizations should help them implement.

The companies that are getting ahead are the ones acting on these ideas. Some companies we work with emphasize one "play" over another, while others recognize the holistic connection between all of the plays: automation enables enhancement, discovery uncovers how to achieve excessiveness, and so on. All of them, however, understand the need to act now, not to wait for more certain times ahead or for more clarity over exactly what AI is and what it will become. All of them recognize that the rise of machine intelligence is the ultimate game changer we face today. All of them know that inaction will result in irrelevance. All of them know that fortune favors the brave and punishes the timid.

This is the last blog in the series on AI/ML and the related space. Hope you all enjoyed reading through the posts. Stay tuned while I come back with yet another series on a technology topic.

Please feel free to review my other series of posts 

Authored by Venugopala Krishna Kotipalli

AI / ML – Past, Present & Future – Part 5b


Just a recap: in my last update, we learnt the different ways to harness the new-age machine to enhance human experience and its related processes and analysis. In this excerpt we are going to dwell upon the different ways to harness the new-age machine to enhance market competitiveness.

The loom led to excessive clothing, the steam engine to excessive travel, and the factory model led to excessive refrigerators and televisions finding their way into homes all around the world. Before the revolutions that spurred them, these products were rare luxuries. So the concept of excessiveness is really quite simple, and old: as prices go down, demand goes up. As the new machine drives prices down, markets of excessiveness will be established, driving sales up to unimagined levels. The question now becomes: will you seize the advantage of the new excessiveness that is available, or fall victim to it?

In the past, we have used raw materials, new machines, and hybrid business models that support them to create an unprecedented excessiveness, which in turn makes luxury goods easily available to the common masses. A very good example is heart surgery, where demand has pushed organizations toward excessiveness and innovation, eventually bringing costs down and making the procedure readily available to the common masses. The cost reduction delivered in heart surgeries comes neither by magic nor by cutting corners. Yes, there are salary disparities between India and other countries, but these account for only a fraction of the cost difference. The vast majority of these dramatic savings come from the digitization of key processes. The business models considered here were purely hybrid: some portions must of course remain highly physical (human-centric work performed by medical professionals on an actual patient), whereas others can be significantly digitized, such as monitoring patients and machines. By breaking down surgery preparation, operating room management, and intensive care unit operations into discrete processes and experiences, and then applying new technologies, hospitals cut costs to the point that they can now provide high-quality care to many more people. In this instance, digital is literally saving lives.

The key point to note here is that setting a new price point is not a one-time thing but a continual process. Once automation takes hold and your product creation and delivery transition from being human-based to machine-based, they become inherently digital, and thus able to benefit from the well-known observation that the speed and capability of our computers doubles roughly every two years.

At this stage, it's natural to wonder how to kick-start the excessiveness thought process quickly. Below are seven approaches that might lead you to deliver positive results.

Focus on disruptive thinking

Organizations should now focus on new, disruptive thinking and keep their eyes and ears open to the new companies coming after their business. The key is that this team should be empowered to take an objective view of the tech-based companies that are looking to bring excessiveness into the industry and potentially eat into your business. In such cases, you can see clearly which portion of your company the industry is coming for, and you need to marshal an appropriate response right away, whether that is to buy, to build, or to partner to address the threat.

Analyze areas of weakness

The new generation is key to identifying where the current organization falls short. These fresh thinkers can offer a unique and highly valuable point of view on your traditional ways of doing business. As a best practice, a sub-group of these fresh thinkers should focus on the weaknesses that could put your company out of business.

Plan for a future cost reduction model

Leadership must ask hard, even painful, questions about the implications of current products and services moving from expensive and rare to cheap and nearly available everywhere. Of course, nothing will be truly costless; one way or another, you need to find ways to grow revenue. However, it's healthy for organizations to start conceiving of their products and services as the sum of their parts, which add up to a certain price. Each of these parts can then be analyzed from a digital perspective to make it cheaper, bringing down the cost of the overall product or service.

Innovative profit making

The question is how to price our products differently, aimed at very different customer segments and entailing very different margin economics. It does companies a world of good to start thinking about how to bring costs down while still making adequate profits.

Search for Technical prowess

The movement comprises individuals, teams, and companies enthusiastic about building new devices that live at the intersection of new functionality and low cost. We find such movements around individuals who, while holding down their day jobs, are itching to focus on their weekend avocation. Don't look upon such individuals as uncommitted to the work at hand; rather, harness their talents, energy, and passions by putting them in places and positions where their personal innovations can become your corporate innovations.

Personalize Product line

Focusing on personalizing any product or service for your customer opens up an entirely new horizon of doing business. This is not a function of scale; it's simply a function of applying the new machine to establish one-to-one connections with your customers. After all, this has been the goal all along: entire value chains and customer value propositions have been focused on this pursuit. For instance, personalization is the new battleground in the apparel industry, as we saw in one of our last posts.

Apply Digital Thought

The lowering-cost paradigm is all about finding dramatic cost savings to open new markets: how best to find these breakthroughs and apply the new technologies of AI to the parts of the whole. It was argued long ago that almost every work activity can and should be broken down into discrete tasks and measured in time, motion, and output. More importantly, performance levels and best practices can be applied to repetitive tasks to make them efficient. This will go a long way toward driving competitiveness at your company.

Applying the above seven levers, and making sure your organization is actively pursuing each one of them, will keep it alert, always in search of best practices to enhance and improve its products and services while keeping costs lower. The result, at the end of it, is an enhanced competitive stature for the organization in the market.

Stay tuned…. In Part VI of this foray, we will continue to dwell upon the different ways to harness the new-age machine to raise, by and large, the innovation quotient that we need to invest in.

Please feel free to review my other series of posts 

Authored by Venugopala Krishna Kotipalli

AI / ML – Past, Present & Future – Part 4


Just a recap: in my last update, we learnt the different aspects of what constitutes a business model that an organization should follow, and how that impacts your overall foray into AI/ML implementation. In this excerpt we are going to dwell upon the different ways to harness the new-age machine through automation and instrumentation.

As I have pointed out repeatedly in the previous parts of this blog series on AI/ML, industry is riding the cusp of a huge new wave of automated work that is going to fundamentally change what millions and millions of people all around the world do, Monday through Friday, eight hours a day. Automating existing parts of your business with the new machine provides an opportunity to change your firm's cost structure while at the same time increasing the velocity and quality of your operations. We need to understand what automation actually is, which parts of your business are best suited to it, which jobs will be most impacted, the benefits you can expect, and the problems to avoid.

Automation is the first step in a journey that exhibits the tendency of industrial change to continuously destroy old economic structures and replace them with new ones. This results in revenue increases for the industry but, more importantly, in overall cost savings. Studies on the internet show numbers that back up this general idea across the industries where automation is most prevalent.


This trend of applying automation technology to lower costs and improve productivity is playing out in nearly every industry. Like it or not, your competitor across the street will soon gain the massive benefit of digital automation of core processes. If you don't keep pace, your cost structure will soon be unsustainable. Additionally, the savings generated through automation are what will pay for the coming digital innovations. Fortunately, most of us have a running start. We have been consuming automation for a long time and, much as with AI, once it is in use it's not even noticed. Consider some of the automations we encounter, yet barely notice, while travelling out of station:

  • Automated toll collection through E-ZPass on the highway, passing the toll booth without stopping
  • Parking-pass generation when you arrive at the airport parking lot
  • Receipt of your boarding pass and check-in of baggage at an airport kiosk
  • Getting cash from an ATM while walking to your departure gate

From your house to the airport gate, your trip was at least half an hour faster than in pre-automation days, if not more. In any organization, therefore, such automations are best targeted at core operations areas not visible to your customers. What if you could run these functions or processes at half the cost and with double the throughput? With continuous improvements and quality control, and with every aspect of every transaction fully instrumented and recorded? With the new machine, you can.

Finding your process targets for immediate automation is the low-hanging fruit you should go after. The best candidates are:

  1. Highly repetitive tasks
  2. Tasks with low level of human judgement
  3. Tasks requiring low level of empathy
  4. Tasks generating and handling high volumes of data
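As a purely illustrative sketch (the task names and ratings below are hypothetical), the four criteria can be turned into a simple screening score for ranking candidate processes:

```python
# Rank candidate tasks by counting how many of the four automation
# criteria each satisfies. Real assessments would weight criteria and
# rate them on finer scales; this is only a back-of-the-envelope screen.

CRITERIA = ("repetitive", "low_judgement", "low_empathy", "high_volume")

def automation_score(task):
    """Count how many of the four criteria a task satisfies (0-4)."""
    return sum(task[c] for c in CRITERIA)

tasks = {
    "invoice data entry":  {"repetitive": 1, "low_judgement": 1, "low_empathy": 1, "high_volume": 1},
    "customer complaints": {"repetitive": 1, "low_judgement": 0, "low_empathy": 0, "high_volume": 1},
    "strategic planning":  {"repetitive": 0, "low_judgement": 0, "low_empathy": 1, "high_volume": 0},
}
ranked = sorted(tasks, key=lambda name: automation_score(tasks[name]), reverse=True)
print(ranked[0])  # invoice data entry -- the clearest target
```

Tasks scoring highest on all four criteria are the low-hanging fruit; tasks heavy on judgement or empathy stay with humans.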

Identifying your automation targets will give your teams a clear path to success, but a significant hurdle remains in managing change within your organization. You will need to adapt the path to the complexity of what you are doing; what follows is a high-level walkthrough, but the seven steps below for automating any process or task are basically the same.


I have therefore shown you above that automation is our new loom, our new steam engine. The cost savings generated from these next levels of automation will provide the cash needed to fuel investments in new markets and new ideas. The data generated by automation is at the heart of creating new products, better customer relationships, and more transparency. Leaders who create ongoing momentum from automation, looking every quarter for new automation opportunities using the criteria and guidelines I have presented, will ensure they have the fuel needed to win. Also, please realize and remember that automation is not an end in its own right; it is simply a means to an end.

We are slowly moving into the instrumentation zone, where once an automated process is instrumented and tracked, an invisible framework of code emerges around the object, often providing more insight and value than the physical item itself. For instance, Amazon and Netflix know your tastes in literature and movies better than your family and friends do, without ever meeting you as a consumer. The race is on to win through instrumentation, and established companies are changing the rules of competition across many industries.


There are now three key rules of competition when it comes to instrumentation:


Why instrument everything and build solutions around information? Because doing so sets you on the path to being a "Know-It-All" business. With sensors and instrumentation, it's now possible to collect and analyze information about everything: to know everything about everything.

While instrumenting and collecting data from every physical thing in your enterprise offers a vast opportunity to review, analyze, and make business decisions, it also exposes sensitive, analyzed data to hackers. This is where organizations have to be careful to keep their internal, important, competitive data secure and free from misuse. Some key hacking horror stories give a fair idea of why we need to be careful.


Competing with instrumentation is now becoming the default model for our modern economy. Instrumenting everything, accessing data scientists and other big-data/analytics talent, and avoiding the dark side of instrumentation are all tactics you must adopt to get started. Fortunes will be won and lost depending on your organization's ability to leverage the upsides of instrumentation and mitigate its downsides.

Stay tuned…. In Part V of this foray, we will continue to dwell upon the different ways to harness the new-age machine through enhancements to human experience and market competitiveness, and, by and large, the innovation quotient that we need to invest in.

Please feel free to review my other series of posts 

Authored by Venugopala Krishna Kotipalli