In this series of posts, I will focus primarily on the world of machine learning. We’ll start with an overview of how machine learning models work and how they are used. This may feel basic if you’ve done statistical modeling or machine learning before. By giving ‘computers the ability to learn’, we mean passing the task of optimization — of weighing the variables in the available data to make accurate predictions about the future — to the algorithm. Sometimes we can go further, offloading to the program the task of specifying the features to consider in the first place. Let us first understand some basic definitions.
Machine learning: Machine learning lets us tackle problems that are too complex for humans to solve by shifting some of the burden to the algorithm. The goal of most machine learning is to develop a prediction engine for a particular use case. Common machine learning techniques include decision trees, support vector machines, and ensemble methods.
Deep learning: A subset of machine learning modeled loosely on the neural pathways of the human brain. Deep refers to the multiple layers between the input and output layers. In deep learning, the algorithm automatically learns what features are useful. Common deep learning techniques include convolutional neural networks (CNNs), recurrent neural networks (such as long short-term memory, or LSTM), and deep Q networks.
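To make the "multiple layers" idea concrete, here is a minimal sketch of a forward pass through a tiny feedforward network with one hidden layer. This is a hypothetical, untrained illustration with random weights (all names and sizes are my own choices, not from a real library); a deep network simply stacks more such layers, and training adjusts every weight so the hidden layers learn useful features on their own.

```python
import random

random.seed(0)

def relu(v):
    # Common hidden-layer activation: zero out negative values.
    return [max(0.0, x) for x in v]

def linear(weights, bias, v):
    # One fully connected layer: weighted sum of inputs plus a bias.
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

# Randomly initialised weights: 3 inputs -> 4 hidden units -> 1 output.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [[random.uniform(-1, 1) for _ in range(4)]]
b2 = [0.0]

def forward(x):
    hidden = relu(linear(W1, b1, x))   # hidden layer: candidate feature detectors
    out = linear(W2, b2, hidden)       # output layer: combines those features
    return out[0]

score = forward([0.5, -1.2, 3.0])
```

In a real deep learning setting, the same structure is expressed with a framework such as TensorFlow or PyTorch, and the weights are learned from data rather than fixed at random.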
Algorithm: The set of rules or instructions that will train the model to do what you want it to do. An algorithm will receive information about a domain (say, the films a person has watched in the past) and weigh the inputs to make a useful prediction (the probability of the person enjoying a different film in the future).
Model: The trained program that predicts outputs given a set of inputs.
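The algorithm/model distinction from the two definitions above can be sketched in a few lines. Below, logistic regression trained by gradient descent plays the role of the "algorithm", and the trained prediction function is the "model". The film-viewing numbers are made up purely for illustration.

```python
import math

# Toy data (hypothetical): (hours of sci-fi watched, hours of drama watched),
# with label 1 if the person enjoyed a new sci-fi film, else 0.
X = [(9.0, 1.0), (8.0, 2.0), (1.0, 7.0), (2.0, 9.0)]
y = [1, 1, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# The "algorithm": weigh the inputs, compare predictions with the known
# outcomes, and nudge the weights to reduce the error (gradient descent).
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(1000):
    for (x1, x2), label in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - label
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# The "model": the trained weights packaged as a prediction function that
# maps new inputs to a probability of enjoying the film.
def model(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b)
```

Calling `model(10.0, 0.0)` for a heavy sci-fi watcher yields a high probability, while `model(0.0, 10.0)` yields a low one; the algorithm did the weighing for us.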
The diagram below depicts how these terms are related: artificial intelligence encompasses the world of machine learning, and machine learning in turn encompasses the world of deep learning.
Why machine learning matters
Artificial intelligence will shape our future more powerfully than any other innovation in this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.
The rate of acceleration is already astounding. Over the past four decades, the field weathered a couple of AI winters and periods of false hope, held back by limited computing power, the intractability of many solutions, the difficulty of encoding common-sense knowledge and reasoning, and the challenge of choosing the right problems (such as face recognition). Rapid advances in data storage and computer processing power have dramatically changed the game in recent years. In 2015, Google trained a conversational agent (AI) that could not only convincingly interact with humans as a tech-support helpdesk, but also discuss morality, express opinions, and answer general fact-based questions.
So how is all this happening? How have machines become smart enough to beat humans at games — IBM’s Deep Blue in chess, DeepMind’s AlphaGo in Go, and DeepMind’s agents in 49 Atari games? Much of our day-to-day technology is powered by artificial intelligence. Point your camera at the menu during your next trip to Taiwan and the restaurant’s selections will magically appear in English via the Google Translate app. Today AI is used to design evidence-based treatment plans for cancer patients, instantly analyze results from medical tests to escalate to the appropriate specialist immediately, and conduct scientific research for drug discovery. In everyday life, it’s increasingly commonplace to discover machines in roles traditionally occupied by humans. Really, don’t be surprised if a little housekeeping delivery bot shows up instead of a human next time you call the hotel desk to send up some toothpaste.
In this series, we’ll explore the core machine learning concepts behind these technologies. By the end, you should be able to describe how they work at a conceptual level. The process to implement machine learning is given below. Needless to say, it’s a process that needs to be institutionalized so that it keeps improving.
Gathering Data: You can acquire data from many sources; it might be data that’s held by your organization or open data from the Internet. There might be one dataset, or there could be ten or more.
Cleaning Data: Accept that data will need to be cleaned and checked for quality before any processing can take place. These steps occur during the prepare phase.
Build Models: The processing phase is where the work gets done. The machine learning routines that you have created perform this phase.
Gain Insights and Report: Finally, the results are presented. Reporting can happen in a variety of ways, such as reinvesting the data back into a data store or reporting the results as a spreadsheet or report.
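The four phases above can be sketched end to end. This is a minimal, hypothetical illustration — the helper names, the in-memory "dataset", and the trivial price-per-size model are all my own stand-ins for real data sources and real learning routines.

```python
def gather_data():
    # Gather: in practice this might read CSVs, query a database, or call APIs.
    return [{"size": 50, "price": 150}, {"size": 80, "price": 240},
            {"size": None, "price": 300}, {"size": 120, "price": 360}]

def clean_data(rows):
    # Clean: drop records with missing values before any modelling.
    return [r for r in rows if all(v is not None for v in r.values())]

def build_model(rows):
    # Build: fit a trivial one-variable model (average price per unit of size).
    ratio = sum(r["price"] / r["size"] for r in rows) / len(rows)
    return lambda size: ratio * size

def report(model, rows):
    # Report: present results, here as (actual, predicted) pairs; in practice
    # this might write back to a data store or produce a spreadsheet.
    return [(r["price"], round(model(r["size"]), 1)) for r in rows]

data = clean_data(gather_data())
price_model = build_model(data)
results = report(price_model, data)
```

In a real project each function would be far richer, but the shape is the same: the output of each phase feeds the next, which is why the process lends itself to being institutionalized and repeated.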
Stay tuned… In Part 2, we will continue walking through the steps in the machine learning process as we go deeper into the learning portion of this AI journey we have embarked upon.
Please feel free to review my other series of posts
- AI-ML Past, Present and Future – distributed across 8 blogs
- The World of IoT – distributed across 4 blogs
- Digital Transformation – The Buzzword Simplified – distributed across 4 blogs
- The Culture Code by Daniel Coyle – a quick Book Review
Authored by Venugopala Krishna Kotipalli