In this series of posts, I will take a deep dive into Digital Transformation – the word/feeling/term that has taken the world by storm. I have heard it referred to at lunches, coffee catch-ups and informal meets around the street corner, not to forget at almost every formal leadership summit. Hence I felt it would be useful to delve further into this subject for the sake of all you netizens. Over the series, I will look at the evolution of the digital economy and then follow up with its elements, approaches, areas, strategies, industry-specific implementations and, last but not least, its importance beyond technology.
Digital transformation is the thoughtful transformation of business and organizational activities, processes, competencies and models to fully leverage the changes and opportunities of a mix of digital technologies and their accelerating impact across society, in a strategic and prioritized way and with present and future shifts in mind. To keep it simple, an alternate definition describes it as converting processes, activities and models to meet digital-economy requirements until the company is a fully networked digital organization.
Businesses have always been changing and innovating, technologies have always come with challenges and opportunities, and regulations and ecosystems have always evolved. That’s nothing new. It is the degree of interconnectedness and of various accelerations, requiring profound enterprise-wide change, that makes digital (business) transformation more than a buzzword: it is a challenge, a force and, most of all, an opportunity that enables organizations to build the core business competencies they need to succeed in rapidly changing environments. So why is it needed? Let us see below.
We need to make sure we speak the same language and it’s important to emphasize that digital transformation is not just about:
Digital marketing, even if that’s an important part of business activities and the context in which digital transformation is often used.
Digital customer behavior, although it plays a role and customers are increasingly ‘digital and mobile’.
Technological disruptions, because disruptions are always about customers, workers, markets, competitors and stakeholders, even if they are related to technological evolution and even knowing that ‘emerging’ technologies can indeed have a ‘disruptive’ effect.
The transformation of paper into digital information as originally meant, nor the digitization of information (flows) and business processes, which is simply a condition sine qua non.
Components of Digital Transformation: The seven basic elements of digital transformation that an organization requires to lead a digital change are listed below.
Leadership and Vision: An organization must inculcate the right thought process and a clear vision in its leadership. In particular, leaders and management must stimulate the right digital culture. They should focus on improving the organization’s operations, revenue, customer experience and competitive position; hold a holistic view of digital threats and opportunities; and be able to utilize collective intelligence to reshape the competitive landscape.
Formulating Strategies: The organization should define the outcome or result that will help achieve its business goals and then work backwards to build a compelling digital transformation strategy. It should optimally harness technologies to deliver the value proposition, and it can utilize niches like augmented reality, geo-location and social media integration to extend the overall capabilities of the journey.
Information Governance: The organization should formulate a strong governance plan, apply a strategic approach, and then move to cloud applications to expand its reach.
Focus on Customer Journey: The organization should identify customer needs through an accurate study of customer behavior and then diversify customer experiences by utilizing customer data, advanced analytics, online customer surveys and data mining tools.
Define a Technology Road Map: The organization should align its strategic priorities towards long-term planning, before the actual tool investment, with a well-defined technology road map. It should then enhance the existing technology stack with the latest tools to match current and future business needs. This will help leverage technology to increase ROI, improve the product and service portfolio, boost productivity and enhance customer satisfaction.
Business Disruptors: Businesses need disruptive elements like digital automation, collaboration tools and enterprise analytics platforms. These help optimize, evolve or transform the entire value chain. Disruptive elements help organizations by modifying or replacing old methods with faster, more integrated ways of working across all levels.
Utilizing Technology: Big data gives businesses an incredible amount of data to analyze and guide business decisions. AI carries out precise predictive or prescriptive analysis for active strategy development.
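To make the predictive side of that last component a little more concrete, here is a minimal sketch of trend-based forecasting. The revenue figures and the simple least-squares model are illustrative assumptions, not a prescription; real predictive analytics would use far richer data and models.

```python
# Illustrative sketch: fit a linear trend to monthly revenue so the next
# period can be projected. All figures below are hypothetical.

def fit_linear_trend(values):
    """Least-squares fit of y = a + b*x over x = 0..n-1."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

def predict_next(values):
    """Project the trend one step past the last observation."""
    a, b = fit_linear_trend(values)
    return a + b * len(values)

monthly_revenue = [100, 110, 121, 130, 142, 151]  # invented figures
print(round(predict_next(monthly_revenue), 1))
```

The point is not the arithmetic but the workflow the component describes: historical data in, a forward-looking number out, feeding an active strategy decision.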
In essence, these components help you find your way around digital transformation and drive the digital way of change for your business.
Let us also quickly take a look at the general challenges that we face in the digital transformation journey so as to keep them in mind when we look at key approaches.
The right approach to digital transformation allows companies to adopt new technologies that meet ever-increasing customer expectations. Cultural transformation is also critical as part of the process: it must align with organizational norms and the organization’s risk-taking capability to define and deliver compelling customer experiences. From these experiences, new business models and operational processes are derived, which in turn drive the investments in technologies.
Stay tuned… In Part 2 of this foray, we will look at key approaches to digital transformation. Please feel free to review my earlier series of posts.
Just a recap: in my previous post, I took a deep dive into the growth and trends in the IoT space. This will be the concluding post of the series, where we discuss the industries in which IoT has been successfully implemented.
According to Internet of Things spending data and forecasts published in early 2017 by IDC, the three main industries in terms of IoT spending in 2016 were, respectively, manufacturing, transportation and utilities. Consumer Internet of Things spending ranked fourth.
While globally manufacturing will remain the major industry in the period until 2020 (except in Western Europe), there will be changes in this top three. Among the fastest-growing segments until 2020 are insurance, healthcare, retail, consumer and, as mentioned, cross-industry initiatives.
Obviously, there is a difference between Internet of Things spend and number of Internet of Things projects.
A report by IoT Analytics, essentially a list of 640 real-life Internet of Things projects, indicates that from the perspective of number of projects, connected industry ranks first, closely followed by smart city implementations in second place.
The Internet of Things in MANUFACTURING
The manufacturing industry has always taken the lead in the implementation of IoT, given the origins of IoT in RFID. The earliest typical use cases have kept this industry in the lead, though perhaps not for long. In 2015, it was estimated that there were 307 million installed units in the manufacturing industry, where systems with sensors have long been embedded in manufacturing and automation processes, and that spending would reach $98.8 billion by 2018 in manufacturing operations, through efficiency optimization and connecting the automated areas. By and large, the top three IoT use cases in this industry are listed below.
A majority of manufacturers have deployed devices to collect, analyze/measure and act upon data. More than 34.6 percent of these companies had already implemented devices and sensors to gather this data, and another 9.6 percent were about to implement IoT devices within a year. Only 24 percent of manufacturing companies had no plans to implement devices to collect, analyze and act upon data.
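As a hedged illustration of that "collect, analyze and act upon data" loop, here is a minimal sketch. The sensor names, temperature limit and ticket-raising action are all invented for illustration; a real plant would use a proper historian and alerting stack.

```python
# Minimal sketch of the collect -> analyze -> act loop on the factory floor.
# Machine ids and the tolerance are hypothetical.

TEMP_LIMIT_C = 85.0  # assumed safe operating temperature

def analyze(readings):
    """Return the machine ids whose latest reading exceeds the limit."""
    return [machine for machine, temp in readings.items() if temp > TEMP_LIMIT_C]

def act(alerts):
    """Stand-in action: in practice this might open a maintenance ticket."""
    for machine in alerts:
        print(f"maintenance ticket raised for {machine}")

readings = {"press-01": 72.4, "press-02": 91.3, "lathe-07": 66.0}
act(analyze(readings))
```

The value comes less from any single check than from running this loop continuously across hundreds of connected assets.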
The Internet of Things in the RETAIL business
Retail is moving up fast, both in operations and in customer-facing contexts. The emphasis is primarily on efforts to digitize the consumer experience. It is mainly in the optimization of processes and of logistics that the Internet of Things offers immediate benefits to retailers. However, the customer-facing and inventory-related aspects obviously matter a lot too. The use of the Internet of Things in retail, among others, changes customer experience, leads to better customer insights, enables new collaborations and business models, and further blurs the line between digital and physical in an in-store context.
Retailers are working with the Internet of Things for several innovative and immersive approaches, ranging from virtual closets and self-checkouts to smart shelves (inventory accuracy) and connected vending machines.
The Internet of Things in GOVERNMENT AND CITIES
The Internet of Things is already used across several government activities and layers. The government sector is a vast ecosystem, and so are the many IoT use cases within it. Probably the best-known usage of the Internet of Things in a government context concerns smart cities, in reality mainly smart city applications.
Smart city projects are what people hear about most and they get a lot of attention, among other reasons because smart city applications are close to the daily lives of residents. Another reason why smart cities are often mentioned is that, de facto, smart city projects account for a big portion of Internet of Things deployments. Think about smart waste management (often a local matter), smart parking and environment monitoring.
Another area where we see the Internet of Things popping up is citizen-facing public services. To a large extent, smart city use cases overlap with Internet of Things use cases in public services, as one of the key tasks of a city is to serve its citizens. However, public services also go beyond the local/urban level and include areas such as smart energy. The degree of overlap depends on how government services are organized in a particular country or region.
Improving citizen satisfaction is the main objective when considering or implementing the Internet of Things and other emerging technologies. Moreover, governments have a role in public health, which can be enhanced by taking Internet of Things initiatives in collaboration with private and state-sponsored partners. The same goes for public safety, by the way. An example: collaborations between governments and insurance firms, leveraging telematics.
There are really hundreds of ways in which governments leverage and can leverage the Internet of Things to improve citizen experience, realize cost savings and, not to forget, generate new revenue streams.
The latter is quite important, as many IoT projects have an impact on the funding of cities. A simple example: if you have a perfectly working smart parking solution in a city, you lose parking revenues for all the obvious reasons. So it’s not just a matter of technologies but also of finding creative ways to turn enhanced citizen experience and citizen services into an overall picture that is beneficial for everyone.
This takes time, planning and, as you can imagine, given the complexity of the government ecosystems, lots of alignment and coordination.
The Internet of Things in BUILDING AND FACILITIES
The Internet of Things plays an important role in facility management, including, among others, data centers and smart buildings. The integration of IT (Information Technology) and OT (Operational Technology) plays an important role in this regard. Smart buildings are among the fastest-growing cross-industry Internet of Things use cases in the period until 2020. Moreover, research indicates that data collection from buildings and other structures, such as HVAC systems, is already high. Last but not least, the market and evolution of the BMS (Building Management System) are strongly impacted by the Internet of Things.
As the graphic below indicates, building management systems are becoming the centers of connectivity in a world of ever more endpoints in buildings, and data analytics and actionable data play a key role in the evolution of building design, the connected building and building management. As data collection from endpoints increases and next-generation technologies make analytics and insights key in building systems, the connected BMS becomes a center of visualization, insights and actions.
Leveraging data from IoT-enabled facility assets, along with new Internet of Things platforms for facility management with embedded capabilities, is leading to possibilities and benefits in building management areas such as:
Smarter building security systems.
Smarter heating, ventilation and air conditioning (HVAC).
Safer and more comfortable/healthy workplaces and buildings.
Facility service quality optimization.
Cost reductions, including in a green building context through reduced energy and water consumption.
Better planning, operational efficiencies and enhanced resource allocation.
Predictive maintenance and facility maintenance planning.
Facility equipment control, configuration and regulation.
Building management and building automation.
Light and room control, comfort.
This list is far from comprehensive. As there are various sorts of buildings, each with its own challenges, infrastructure, technologies and, most of all, goals, the landscape of building automation and management is very broad. In light and room control alone there are several controls, such as blind controls and AC unit controls, and literally dozens more.
The overall building automation and management landscape has existed since long before the Internet of Things and is composed of various specializations, each with its own standards (e.g. KNX in room control or BACnet in building management systems), certification programs for green buildings (ecology and energy regulations are key drivers) and for OT channel partners, technologies, networks, solutions and, of course, goals (the goal of an IoT-enabled office space, building or even meeting room is not the same as that of a hospital, even if there are always overlaps).
However, with the Internet of Things these worlds are converging (and the standards already evolved to IP). This is a challenge and opportunity for the various players who all have their skill sets but rarely are able to offer the full picture.
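To make one of the building-management areas listed above more concrete, here is a small, hypothetical sketch of a predictive-maintenance check: compare an HVAC unit’s recent energy draw against a longer baseline and flag sustained drift. The windows, drift threshold and readings are all invented assumptions, not a real BMS algorithm.

```python
# Hedged sketch of a predictive-maintenance signal for an HVAC unit:
# sustained above-baseline power draw often precedes a fault.

from statistics import mean

def needs_inspection(power_kw, baseline_window=10, recent_window=3, drift=0.15):
    """True if the recent average draw exceeds the baseline by `drift` (15%)."""
    if len(power_kw) < baseline_window + recent_window:
        return False  # not enough history to judge
    baseline = mean(power_kw[:baseline_window])
    recent = mean(power_kw[-recent_window:])
    return recent > baseline * (1 + drift)

history = [5.0] * 10 + [5.1, 6.2, 6.4]  # hypothetical hourly kW readings
print(needs_inspection(history))
```

A real deployment would fold in many more signals (vibration, runtime hours, ambient temperature), but the shape of the check — baseline versus recent behavior — is the same.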
The Internet of Things in HEALTHCARE
The Internet of Things has been present in healthcare in many shapes and forms for several years.
With remote healthcare monitoring and medical/hospital asset tracking, monitoring and maintenance as typical examples of these initial applications, the face of the Internet of Things in healthcare is changing fast.
Among the evolutions and drivers of the Internet of Things in healthcare:
An increasing consciousness and engagement from the consumer/patient side leads to new models, leveraging personal healthcare devices.
In a more integrated perspective, data from biosensors, wearables and monitors are used in real-time health systems and to save time for caregivers, detect patterns, be more aware and increase quality of care.
A broad range of innovations in fields such as smart pills and ever better delivery robots help in making healthcare more efficient and in saving resources, while also increasing quality of care.
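As a hedged illustration of the second driver above — biosensor data feeding real-time health systems — here is a tiny sketch of pattern detection over a stream of heart-rate readings. The thresholds and streak length are invented; clinical systems use validated criteria, not these numbers.

```python
# Illustrative sketch: scan a stream of heart-rate readings for a
# sustained out-of-range pattern, the kind of check a real-time health
# system might run on wearable or biosensor data.

def sustained_alert(heart_rates, low=50, high=120, run=3):
    """True if `run` consecutive readings fall outside the safe range."""
    streak = 0
    for bpm in heart_rates:
        streak = streak + 1 if (bpm < low or bpm > high) else 0
        if streak >= run:
            return True
    return False

print(sustained_alert([72, 75, 130, 128, 131, 90]))  # three high readings in a row
```

Requiring a run of readings rather than a single spike is what saves caregivers’ time: isolated sensor glitches don’t page anyone.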
This underscores the importance of remote monitoring as the main healthcare use case from a spending perspective until 2020, with ongoing growth in the years after that for vital sign monitoring devices, followed by the ways healthcare providers and payers plan to leverage the Internet of Things and, finally, smart healthcare market growth data.
Some evolutions and forecasts in healthcare IoT in numbers:
Research shows that by 2019, 89% of all healthcare organizations will have adopted IoT technology.
Among the main perceived benefits of healthcare IoT in the future are increased workforce productivity (57%), cost savings (57%), the creation of new business models (36%) and better collaboration with colleagues and patients (27%). The key benefits as reported in March 2017, however, are increased innovation (80%), visibility across the organization (76%) and cost savings (73%).
Other research shows that wearables will play a key role in health care plans, that clinical IoT device data will free up clinicians’ time significantly by 2019 (up to 30%), and that there will be an increasing role for IoT-enabled biosensors and robots for medication and supplies delivery in hospitals by 2019.
The Internet of Things in UTILITIES AND ENERGY
Facing huge challenges and transformations for several reasons, utility firms have 299 million units installed, according to Gartner. On top of utilities in the traditional sense, there is also a lot happening in oil and gas and in energy.
Among the many typical use cases in utility firms are smart meters, which improve energy efficiency both from a household perspective (savings, better monitoring, etc.) and from a utility company perspective (billing, better processes and, of course, dealing more efficiently with natural resources, which are not endless), and smart grids (which are about more than the Internet of Things).
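A tiny sketch of the utility-side smart-meter scenario just described: interval readings aggregated into a billing total per household. The tariff, meter ids and readings are all invented; real billing involves time-of-use tariffs, taxes and validation steps.

```python
# Simple sketch of the utility's view of smart metering: interval
# readings rolled up into a priced total. All values are hypothetical.

TARIFF_PER_KWH = 0.12  # assumed flat tariff

def monthly_bill(interval_kwh):
    """Sum interval readings and price them at the flat tariff."""
    return round(sum(interval_kwh) * TARIFF_PER_KWH, 2)

meters = {
    "meter-0042": [9.5, 10.1, 11.0, 9.9],  # invented weekly kWh totals
    "meter-0043": [7.2, 6.8, 7.5, 7.1],
}
for meter_id, usage in meters.items():
    print(meter_id, monthly_bill(usage))
```

The same interval data serves both perspectives mentioned above: the household sees consumption patterns, the utility automates billing from the identical stream.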
The Internet of Things in AUTOMOTIVE
Connected cars and all the other evolutions in the automotive industry are driving the market as well. Connected vehicles are the hottest US market in the overall picture. The connected car is one of those typical examples where the Consumer Internet of Things and the Industrial Internet of Things overlap.
The Internet of Things in OTHER SECTORS
Other industries include healthcare, transportation (where “smart devices” and sensors have existed for quite some time), logistics, agriculture and more. Add to that the consumer context and you know why it is such a hot topic.
In summary, the biggest drivers for IoT projects are listed below.
This is the last blog in the series on the world of IoT and the related space. I hope you all enjoyed reading these posts as much as I enjoyed putting them together. Stay tuned while I come back with yet another series on a technology topic.
Just a recap: in my previous post, I focused primarily on the key definitions of and approaches to the Internet of Things. In this post, we take a deep dive into the growth and trends in the IoT space.
The exact predictions regarding the size and evolution of the Internet of Things landscape tend to focus on the number of devices, appliances and other ‘things’ that are connected and the staggering growth of this volume of IP-enabled IoT devices, as well as the data they generate, with mind-blowing numbers for many years to come.
It makes it look as if the Internet of Things is still nowhere. Make no mistake though: it is already bigger than many believe and used in far more applications than those which are typically mentioned in mainstream media.
At the same time, it is true that the increase in connected devices is staggering and accelerating. As we wrote the first edition of this Internet of Things guide, approximately a million new connections were being made every hour and there were about 5 to 6 billion different items connected to the Internet. By 2020, Cisco expected there would be 20 billion devices in the Internet of Things. Estimations for 2030 went up to a whopping 50 billion devices, and some predictions were even more bullish, stating that by 2025 there would be up to 100 billion devices. The truth is that we will have to wait and see, and by the time we have written about recent predictions, new ones will already have been published.
Regardless of the exact numbers, one thing is clear: there is a LOT that can still be connected, and it’s safe to assume we’ll probably reach the lower estimates of connected devices (20-30 billion) by 2020. Moreover, it’s not so much the growth of connected devices that matters but how they are used in the broader context of the Internet of Things, whereby the intersection of connected and IP-enabled devices, big data (analytics), people, processes and purposeful projects affects several industries.
The data aspect is also critical (again with mind-blowing forecasts), as is how all this (big) data is analyzed, leveraged and turned into actions or actionable intelligence that creates enhanced customer experience, increased productivity, better processes, societal improvements, innovative models and all possible other benefits and outcomes. The impact of the IoT from a sheer data volume and digital universe perspective is amazing. And the Internet of Things will surpass mobile phones as the largest category of connected devices, with 16 billion connected devices being IoT devices.
There are numerous reasons for the growing attention to the Internet of Things. While you will often read about the decreasing costs of storage, processing and materials, or about the third platform with the cloud, big data, smart (mobile) technologies/devices, etc., there certainly is also a societal/people dimension with a strong consumer element.
A factor that has also contributed a lot to the rise of the Internet of Things, certainly in a context of the industrial Internet of Things and smart buildings, to name a few, is the convergence of IT and OT (Operational Technology) whereby sensors, actuators and so forth remove the barriers between these traditionally disconnected worlds.
As companies increasingly started investing in Internet of Things technologies and scalable deployments instead of just pilot projects, it quickly became clear that the Internet of Things as a term covered completely different realities with little in common. The majority of the Internet of Things hype focused on consumer-oriented devices such as wearables or smart home gadgets. Yet, we can’t repeat it enough, there is a huge difference between a personal fitness tracker and the usage of IoT in industrial markets such as manufacturing, where the IoT takes center stage in the vision of Industry 4.0 (think, for instance, of IoT-connected or IoT-enabled devices such as large industrial robots or IoT logistics systems). That’s why a distinction was made between the Industrial Internet of Things and the Consumer Internet of Things to begin with.
The Industrial Internet of Things (IIoT) is “machines, computers and people enabling intelligent industrial operations using advanced data analytics for transformational business outcomes”. The main value and applications of IoT are found in this so-called Industrial Internet of Things. In all honesty, one of the main reasons we started talking about the Industrial Internet of Things is to distinguish it from the more popular view of the Internet of Things as it has become increasingly used in recent years: that of the Consumer Internet of Things, or consumer electronics applications such as wearables in a connected context or smart home applications.
Typical use cases of the Industrial Internet of Things include smart lighting and smart traffic solutions in smart cities, intelligent machine applications, industrial control applications, factory floor use cases, condition monitoring, use cases in agriculture, smart grid applications and oil refinery applications.
It’s important to know that the Industrial Internet of Things is not just about saving costs and optimizing efficiency though. Companies also have the possibility to realize important transformations and can find new opportunities thanks to IIoT.
Those who can overcome the challenges, understand the benefits beyond the obvious and deal with the industrial data challenge have golden opportunities to innovate, create competitive advantages and even build entirely new business models.
The Consumer Internet of Things (CIoT)
About five years ago, consumers rarely saw what the Internet of Things would mean for their private lives. Today, they increasingly do: not just because they are interested in technology but mainly because a range of new applications and connected devices has hit the market.
These devices and their possibilities are getting major attention on virtually every news outlet and website that covers technology. Wearables and smartwatches, connected and smart home applications (with Google’s Nest being a popular one but certainly not the first): the examples are ample and well known.
Although it is said that some technology fatigue is appearing, the combination of applications in a consumer context and of technology fascination undoubtedly plays a role in the growing attention for the Internet of Things. That consumer fascination comes on top of all the real-life possibilities as they start getting implemented, and of the contextual and technological realities, making the Internet of Things one of those many pervasive technological umbrella terms. Obviously, the Consumer Internet of Things market is not just driven by fascination with new technology: manufacturers push the market heavily, as adoption means new business possibilities with a key role for data.
Below are some consumer electronics challenges to tackle first:
Smarter devices. Consumers are waiting for smarter generations of wearables and Internet of Things products that can fulfil more functions without being too dependent on smartphones, as is the case with many such devices today (think of the first generations of smartwatches, which need a smartphone).
Security. Consumers don’t trust the Internet of Things yet, a sentiment further strengthened by breaches and the coverage of those breaches. Moreover, it’s not just about the security of the devices but also, among others, about the security of data communication protocols (and Internet of Things operating systems). An example: the home automation standard Zigbee was proven easy to crack in November 2016.
Data and privacy. On top of security concerns, there are also concerns regarding data usage and privacy. The lack of trust with regard to data, privacy and security was already an issue before these breaches, as we cover in our overview of consumer electronics market evolutions.
A “compelling reason to buy”. The current devices categorized as Consumer Internet of Things appliances are still relatively expensive, “dumb” and hard to use. They also often lack a unique benefit that would make consumers buy them massively.
Whereas the focus of the Industrial Internet of Things is more on the benefits of applications, the Consumer Internet of Things is more about new and immersive customer-centric experiences. As mentioned, the Consumer Internet of Things typically is about smart wearables and smart home appliances but also about smart televisions, drones for consumer applications and a broad range of gadgets with Internet of Things connectivity.
The Internet of Everything (IoE) brings together people, process, data and things to make networked connections more relevant and valuable than ever before, turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries.
The term Internet of Things focuses too much on the things and, as mentioned, is also very broadly used. This is why some started distinguishing between the just-mentioned Consumer Internet of Things and the Industrial Internet of Things.
Cisco and others prefer the term Internet of Everything, partially because of that umbrella-term issue, partially because of the focus on things and partially to provide context for their views and offerings. But it’s not just marketing. The Internet of Everything, or IoE, depicts crucial aspects of IoT, namely people, data, things and processes; in other words, what makes a business. It’s this mix that matters. Moreover, the classic illustration of the Internet of Everything also made clear what, for instance, machine-to-machine (M2M) is all about.
We’ve based ourselves on that classic depiction and added the dimensions of value and data analysis.
The four relevant key drivers for IoE are listed below.
The Internet of Robotic Things (IoRT) is a concept in which intelligent devices can monitor events, fuse sensor data from a variety of sources, use local and distributed intelligence to determine a best course of action, and then act to control or manipulate objects in the physical world, in some cases while physically moving through that world.
One of the major characteristics of the Internet of Things is that it enables us to build far stronger bridges between the physical and digital (cyber) worlds. You see it in all IoT use cases, and in the Industrial Internet of Things you see it in what are called cyber-physical systems.
Yet, in most cases, the focus is predominantly on the ‘cyber’ part, whereby data from sensors are essentially leveraged to achieve a particular outcome with human intervention and with a focus on data analytics and ‘cyber’ platforms. As ABI Research, which came up with the IoRT concept (a reality today), puts it: essentially, many applications and business models are built upon passive interaction. The Internet of Robotic Things market is expected to be valued at USD 21.44 billion by 2022.
By adding robotics to the equation and turning devices (robots) into truly intelligent devices, with embedded monitoring capabilities, the ability to fuse sensor data from other sources, and local and distributed intelligence that lets them determine which actions to take and take them within a pre-defined scope, you get a device that can control and manipulate objects in the physical world.
Collaborative industrial robots, warehouse automation (Amazon Robotics) and even personal robots for cleaning and so forth make it more tangible. It is still early days for the IoRT, but the projects and realizations in this next stage are real. IoRT is not tied to the consumer/industrial IoT distinction; it is present across both.
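The monitor, fuse, decide, act loop that defines IoRT can be sketched in a few lines. Everything here is a hedged toy: the sensors, the fusion rule and the pick-and-place decision for a hypothetical conveyor robot are invented for illustration, not drawn from any real robotics stack.

```python
# Toy sketch of an IoRT-style loop: fuse two sensor sources into a
# belief about the world, then decide an action within a fixed scope.

def fuse(camera_sees_item, weight_grams):
    """Combine camera and scale readings into one belief about the conveyor."""
    return camera_sees_item and weight_grams > 0

def decide(item_present, item_fragile):
    """Pick an action from a pre-defined scope of possible actions."""
    if not item_present:
        return "wait"
    return "pick-gently" if item_fragile else "pick"

weight = 180  # hypothetical scale reading in grams
item_present = fuse(camera_sees_item=True, weight_grams=weight)
print(decide(item_present, item_fragile=weight < 200))
```

The fusion step is what distinguishes this from a plain sensor trigger: neither the camera nor the scale alone decides; the combined belief does, which is exactly the "fuse sensor data from a variety of sources" part of the IoRT definition.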
The Internet of Things is used in various industries for numerous use cases which are typical for these industries. On top of that, there is a long list of Internet of Things use cases that is de facto cross-industry. As the Internet of Things is embraced and deployed at different speeds throughout consumer and industrial sectors, we take a look at some of the main industries and use cases which drive the Internet of Things market and Internet of Things projects.
Patterns and shifts in the vertical industry and Internet of Things use case spend
Note that the biggest and/or fastest-growing use cases are not always related to the biggest and/or fastest-growing industries in terms of Internet of Things spending. Several factors explain this:
The costs and scope of the investments. A full-blown, enterprise-wide Internet of Things project in industrial settings such as manufacturing or logistics is far more expensive than a smart home implementation.
The shifts in the major Internet of Things use cases and industries. Remember that the Internet of Things mainly started as an industrial and business-sector phenomenon. Industries with many existing physical assets can realize fast cost savings and efficiencies of scale. That’s why today they spend more on Internet of Things projects than consumer segments, where we see more ‘new’ devices rather than existing assets.
The Consumer Internet of Things catching up. While industries are expected to lead the current waves of Internet of Things spending until 2020, the advent of ever more consumer use cases and better (safer and more useful) solutions means that consumer Internet of Things spending gradually catches up with Industrial Internet of Things spending.
The rise of cross-industry Internet of Things applications and of scenarios whereby consumers and businesses meet each other in business-driven initiatives (for instance, the push for telematics in insurance, the push for smart meters in utilities) has a leveling effect on the adoption of the Internet of Things and on spending.
Stay tuned. In Part 4 of this foray, we will look into the eight best example use cases in the world of IoT.
Please feel free to review my earlier series of posts
In this series of posts, I will focus primarily on the world of IoT (Internet of Things). We'll start by looking at the origins of IoT and its common elements and approaches, then look at the market growth and how it has changed the horizon for future tech devices. We will also touch on the newer extensions of IoT such as IIoT (Industrial IoT), CIoT (Consumer IoT), IoE (Internet of Everything) and IoRT (Internet of Robotic Things).
IoT is an umbrella term for a broad range of underlying technologies and services that depend on the use case and that are in turn part of a broader technology ecosystem, which includes related technologies such as AI, cloud computing, cybersecurity, analytics, big data, various connectivity/communication technologies, digital twin simulation, augmented reality and virtual reality, blockchain and more.
The idea of the Internet of Things goes back quite some time. RFID was a key development on the road to the Internet of Things, and the term Internet of Things was coined in an RFID (and NFC) context, whereby RFID was used to track items in operations such as supply chain management and logistics.
The roots and origin of the Internet of Things go beyond just RFID. Think about machine-to-machine (M2M) networks. Or think about ATMs (automated teller machine or cash machines), which are connected to interbank networks, just as the point of sales terminals where you pay with your ATM cards. M2M solutions for ATMs have existed for a long time, just as RFID. These earlier forms of networks, connected devices and data are where the Internet of Things comes from. Yet, it’s not the Internet of Things.
The Role and Impact of RFID
In the nineties, technologies such as RFID, sensors and a few wireless innovations led to several applications connecting devices and "things". Most real-life implementations of RFID in those days happened in logistics, mainly warehousing and industrial logistics and the supply chain in general, as RFID was still expensive. However, there were many challenges and hurdles to overcome.
The use of RFID became popular in areas beyond logistics and supply chain management: public transport, identification (from pets to people), electronic toll collection, access control and authentication, traffic monitoring, and retail and outdoor advertising. That growing usage was driven by, among other factors, the decreasing cost of RFID tags, increasing standardization and NFC (Near Field Communication).
Journey of RFID to IoT
The possibility of tagging, tracking, connecting, "reading" and analyzing data from objects would become known as the Internet of Things around the beginning of this millennium.
It was obvious that connecting the kinds of "things" and applications we saw with RFID and NFC to the Internet would change a lot. It might surprise you, but the concept of a connected refrigerator telling you that you need to buy milk, the concept of what is now known as smart cities, and the vision of an immersive shopping experience (without bar code scanning, leveraging smart real-time information obtained via connected devices and goods) all go back to before the term Internet of Things even existed. The attention for IoT in numerous other areas has without a doubt led to the growing attention for it, as you'll read further.
Coining of IoT Term
According to the large majority of sources, the term Internet of Things was coined in 1999 by Kevin Ashton at MIT.
RFID existed years before Ashton talked about the Internet of Things as a system connecting the physical world and the Internet via omnipresent sensors. As Wired reports, his team wanted to solve a typical logistics and supply chain problem: empty shelves for a specific product. When shelves are empty, obviously no one can buy what is supposed to be there. The solution was found in RFID tags, which were still far too expensive to put on each product. Once the benefit was realized, many invested in the then-expensive RFID tags to reap it. The rest, from building a standard system and solving miniaturization challenges to lowering RFID tag prices, is history.
Definition of IoT
The internet of things, or IoT, is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
Physical devices are either designed for the Internet of Things or are existing assets, including living beings, which are equipped with data-sensing and transmitting electronics. Beyond this endpoint dimension of devices, sensors, actuators and communication systems, the Internet of Things also describes what is effectively done with the data acquired from connected things.
The Internet of Things describes a range of applications, protocols, standards, architectures and data acquisition and analysis technologies whereby devices and items (appliances, clothes, animals and so on), equipped with sensors, specifically designed software and/or other digital and electronic systems, are connected to the Internet and/or other networks via a unique IP address or URI, with a societal, industrial, business and/or human purpose in mind. As you can read below, data, and how it is acquired, analyzed and combined into information value chains and benefits, is key. In fact, the true value of the Internet of Things lies in the way it enables us to leverage entirely new sources and types of data for entirely new business models, insights, forms of engagement, ways of living and societal improvements.
The Internet of Things is not a thing. Data which is acquired, submitted, processed or sent to devices in most cases travels across the Internet, fixed lines, cloud ecosystems or (tailored) wireless connectivity technologies developed for specific IoT applications.
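As an illustration of the endpoint dimension just described, here is a minimal Python sketch of a "thing" that packages a sensed value with a unique identifier (UID) for transmission; the field names, temperature range and simulated sensor are hypothetical, not from any specific IoT platform.

```python
import json
import random
import time
import uuid


def read_sensor():
    """Simulate a temperature reading; a real device would query hardware."""
    return round(random.uniform(18.0, 26.0), 1)


def build_payload(device_id):
    """Package a reading with a unique identifier and timestamp,
    ready to be sent over MQTT, HTTP, or another IoT protocol."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        "temperature_c": read_sensor(),
    })


device_id = str(uuid.uuid4())  # the unique identifier from the IoT definition
payload = build_payload(device_id)
print(payload)
```

In a real deployment, the JSON payload would travel across one of the networks mentioned above to a platform that aggregates and analyzes readings from many such endpoints.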
Bridging digital, physical and human spheres through networks, connected processes and data, turned into knowledge and action, is an essential aspect in this equation. In recent years the focus in the Internet of Things has shifted from the pure aspect of connecting devices and gathering data to this interconnection of devices, data, business goals, people and processes, certainly in IIoT.
Elements of IoT
Most IoT definitions have several aspects in common. Here are those shared elements:
1. Internet of Things Connectivity
All IoT definitions include the connectivity and network aspect: a network of things, devices, sensors, objects and/or assets, depending on the source. It’s pretty clear that a dimension of networks and connectedness, we would even say hyper-connectedness, needs to be present in any decent IoT definition.
2. The Things in the Internet of Things
IoT-enabled assets, devices, physical objects, sensors, anything connected to the physical world, appliances, endpoints: the list goes on. They are all terms to describe what is an essential part of a network of things. Some add words such as smart or intelligent to the devices. Let's say that they contain technology that grants them the additional capability of 'doing something': measuring temperature or moisture levels, capturing location data, sensing movement, or capturing any other form of action and context that can be turned into data.
3. The Internet of Things and Data
This is part of that intelligent notion but it also brings us far closer to the essence. You can define the Internet of Things by simply describing all characteristics (“what it is”) but you also need to look at its purpose (“the why”).
4. Communication in the Internet of Things
Data as such is maybe not without value but it sure is without meaning unless it is used for a purpose and it is turned into meaning, insights, intelligence and actions. The data gathered and sensed by IoT devices needs to be communicated in order to even start turning it into actionable information, let alone knowledge, insights, wisdom or actions.
5. Internet of Things, Intelligence and action
We just touched upon this aspect. However, in most definitions we see that intelligence is attributed to just the network(s) and/or the devices. While we certainly need, for instance, ‘intelligent networking technologies’ in many cases and while connected devices have a capacity of action, the real intelligence and action sits in the analysis of the data and the smart usage of this data to solve a challenge, create a competitive benefit, automate a process, improve something, whatever possible action our IoT solution wants to tackle.
There is always a degree of automation, no matter the scope of the project or the type of Internet of Things application. Most IoT applications are essentially about automation, and that often comes with costs and benefits. Industrial automation, business process automation or the automatic updating of software: it all plays a role, depending on the context.
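To make the intelligence-and-action element concrete, here is a minimal sketch of a rule that turns a sensor reading into an automated action; the threshold value and action names are illustrative, not from any specific IoT product.

```python
def decide_action(temperature_c, threshold=30.0):
    """Turn a raw sensor value into an action: the step where
    data becomes intelligence in an IoT pipeline."""
    if temperature_c > threshold:
        return "start_cooling"
    return "no_action"


# A stream of readings from a (hypothetical) temperature endpoint
readings = [24.5, 31.2, 29.9, 33.0]
actions = [decide_action(t) for t in readings]
print(actions)  # ['no_action', 'start_cooling', 'no_action', 'start_cooling']
```

Real deployments replace the fixed threshold with learned models and route the resulting action back to an actuator, but the data-to-decision-to-action loop is the same.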
Meaning and hyper-connectedness are what we miss in many answers to the question of what the Internet of Things is. We stay too descriptive, focus on just the technologies, and don't look at purpose and intelligent action enough.
While the above-mentioned elements come back in all Internet of Things definitions, a few are missing that are essential in the evolving views regarding the Internet of Things as it moves from devices and data to outcomes and actionable intelligence, and ultimately to a hyper-connected world of digital transformation (DX) and business.
The aspect of hyper-connectivity and integration is often lacking. In a reality whereby devices, people, processes and information are more interconnected than ever before, an Internet of Things definition and approach simply needs to mention these aspects, as the Internet of Things is part of something broader and is more about data, meaning and purpose than about objects. A key element of that hyper-connectivity in the Internet of Things sphere is the ongoing bridging of digital and physical environments, along with human environments, with processes and data as the glue, enabler and condition to create value when properly used for connected purposes.
Then there is also the possibility to create new ecosystems where connected device usage by groups of people can lead to new applications and forms of community ecosystems. Last but not least and we’ve mentioned this often before: no Internet of Things without security.
Stay tuned. In Part 2 of this foray, we will look into key definitions and approaches for IoT.
Please feel free to review my earlier series of posts
Just a recap: in my previous post, I had primarily focused on the algorithms that are at the core of the ML/DL space and how humans are at the helm of this impact of machines. This will be the concluding post of this series, where we will discuss the applications and organizations that have successfully implemented AI/ML frameworks and how they are benefiting from them.
Have you ever wondered how Google Maps predicts traffic so accurately, how Amazon recommends products for you, or even how self-driving cars work? If yes, then let us see the top eight applications of Machine Learning.
Google Maps Traffic Prediction
We will start with an application of Machine Learning that we use in our day-to-day life: Google Maps' traffic prediction. Google Maps is very accurate in predicting traffic. Google uses information from the phones the app is installed on to calculate how many cars are on the road and how fast they are moving. The more people are on-boarded onto the app, the more accurate the traffic data becomes.
It also incorporates traffic data from an app called 'Waze', monitors traffic reports from local transportation departments, and keeps a history of traffic patterns on specific roads, so its predictions are far more accurate. So, this was about Google Maps using a Machine Learning algorithm to analyze your data and predict the result. The more data you feed it, the more accurate it becomes!
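The blending of historical traffic patterns with live phone reports can be sketched roughly as follows; the weighting scheme and all the numbers are purely illustrative, not Google's actual algorithm.

```python
def predict_speed(historical_avg, live_speeds, weight_live=0.7):
    """Blend the historical pattern for a road segment with live speed
    reports from phones currently on it; with no live reports, fall
    back to the historical average."""
    if not live_speeds:
        return historical_avg
    live_avg = sum(live_speeds) / len(live_speeds)
    return weight_live * live_avg + (1 - weight_live) * historical_avg


# Historical average for this road at 6pm is 40 km/h;
# three phones currently report 20, 25, and 30 km/h.
print(predict_speed(40.0, [20.0, 25.0, 30.0]))  # 29.5
```

More phones on the road mean more live samples, which is one way to see why more users make the prediction more accurate.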
Google Translate

Google Translate enables us to translate documents, sentences, and websites instantly. All these translations come from computers that use statistical machine translation. When teaching someone a new language, we usually start with the vocabulary and grammatical rules and then explain how to construct sentences, but those rules contain a lot of exceptions. When you combine all of these exceptions in a computer program, the quality of the translation begins to break down. Hence, Google Translate took a slightly different approach: instead of teaching the computer every rule of a language, it lets the computer find the rules by itself. It does this with the help of Machine Learning, by examining billions of documents that have already been translated by human translators. Google Translate collects text from multiple sources; once the text is collected, the machine scans it to find patterns. Once the machine detects a pattern, it uses that pattern many times over to translate similar text. Repeating this process lets the machine detect millions of patterns, bringing it ever closer to being a perfect translator. Google's translation is by no means perfect yet, but by constantly being fed newly translated text, it gets smarter and translates better. This is how Google translates your speech.
Now, we will move on to the applications of Machine Learning by looking at Facebook’s Automatic Alt Text.
Facebook’s Automatic Alt Text
Facebook's Automatic Alt Text is one of the wonderful applications of Machine Learning for the blind. Facebook has rolled out a feature, called Automatic Alternative Text, that lets blind users explore the Internet. With its help, the blind are getting tools through which they can experience the outside world and the Internet. Blind people use screen readers that describe websites or apps. Facebook estimates that more than a billion photos are shared every day. However, the shared pictures would be of no use to the blind if they don't come with text that outlines the picture. Facebook resolves this problem with 'Automatic Alt Text': when the built-in screen reader is turned on and we tap on a picture, Facebook's Machine Learning algorithms try to recognize the features of the image and then create alt text. This alt text describes the picture through the screen reader.
Recently, Twitter has also added a feature that makes use of alt text for images.
This was all about the application of Machine Learning which Facebook developed to help blind people experience the world.
Further, in this blog on ‘Applications of Machine Learning,’ we will see another application of Machine Learning, that is, Amazon’s recommendation engine.
Amazon’s Recommendation Engine
Amazon uses Machine Learning with Big Data to power its recommendation engine. It involves three stages: events, ratings, and filtering.
In the events phase, Amazon tracks and stores data regarding customer behavior and their activities on the site. Every click the user makes is an event, and the record of the user is logged in the database. This way, different types of events are captured for different kinds of actions like a user liking a product, adding a product to the cart, or purchasing a product.
The next phase is ratings. Ratings are important as they reveal what the user feels about a product. The recommendation system assigns implicit values to different kinds of user actions: for example, four stars for a purchase, three stars for a like, two stars for a click, and so on.
Amazon’s recommendation system also uses Natural Language Processing to analyze the feedback which is provided by the user. The feedback can be something like ‘the product was great but the packaging was not good at all.’ With the help of Natural Language Processing, the recommendation system calculates the sentiment score and then classifies the feedback as positive, negative, or neutral.
Now, the last phase is filtering. In this step, the machine filters the product based on the ratings and other user data. The recommendation system uses different kinds of filtering such as collaborative filtering, user-based filtering, and hybrid filtering.
Collaborative filtering is one in which all the users’ choices are compared and they get a recommendation. For example, a user X likes products A, B, C, and D, and the user Y likes products A, B, C, D, and E. So, there is a chance that the user X will also like the product E, and the machine will recommend the product E to the user X.
After that comes user-based filtering. Here, the user's browsing history, such as likes, purchases, and ratings, is taken into account before providing a recommendation.
Finally, in hybrid filtering, there is a mix and match of both the collaborative and the user-based filtering.
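A toy version of the collaborative-filtering example above (user X likes products A-D, user Y likes A-E) might look like this; the helper function and data are invented for illustration and are obviously far simpler than a production recommender.

```python
def collaborative_recommend(user, all_users):
    """Recommend items liked by users whose tastes overlap with ours."""
    own = all_users[user]
    recommendations = set()
    for other, items in all_users.items():
        if other == user:
            continue
        # If the two users share liked products, suggest the remainder.
        if own & items:
            recommendations |= items - own
    return recommendations


users = {
    "X": {"A", "B", "C", "D"},
    "Y": {"A", "B", "C", "D", "E"},
}
print(collaborative_recommend("X", users))  # {'E'}
```

Real systems weight the overlap (for instance by the star values described above) instead of treating every shared item equally, but the core idea of recommending E to X is the same.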
So, this is how Amazon recommends products for you. The applications of Machine Learning are not limited to just Amazon; organizations such as Alibaba, eBay, and Flipkart also use the same approach.
Going ahead in this blog on ‘Applications of Machine Learning,’ we will see about spam detection in Gmail.
Spam Detection in Gmail
Spam detection is one of the most commonly used mechanisms in our day-to-day life that makes use of filters. The algorithms are regularly updated based on newly found potential threats, advancements in technology, and users' reactions to mails marked as spam. Spam filters remove threats using text filters combined with the sender's history and client-side filters.
That's the outline of spam detection and how Gmail decides which email is spam. However, the real-time process is a lot more complex and consumes a lot of data. The same approach is also used in fraud detection.
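A heavily simplified sketch of such a text-and-sender filter is shown below; real spam filters use learned statistical models rather than a fixed keyword list, so treat the keywords, addresses and threshold here as placeholders.

```python
SPAM_KEYWORDS = {"winner", "free", "prize", "urgent"}


def is_spam(sender, text, blocked_senders):
    """A toy filter: block known bad senders outright, otherwise
    flag mails containing several spammy keywords."""
    if sender in blocked_senders:
        return True
    words = set(text.lower().split())
    return len(words & SPAM_KEYWORDS) >= 2


blocked = {"promo@scam.example"}
print(is_spam("friend@mail.example", "Lunch tomorrow?", blocked))               # False
print(is_spam("unknown@mail.example", "You are a winner free prize", blocked))  # True
```

In practice the keyword weights themselves are learned from user feedback (marking mail as spam or not spam), which is why the filters improve over time.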
After spam detection in the applications of Machine Learning, we will move on to Amazon Alexa, which is another wonderful application of Machine Learning.
Amazon Alexa

The brain, or voice, of Echo is known as Amazon Alexa. It is capable of several tasks such as giving the weather report and playing your favorite song. The word 'Alexa' is a wake word: as you say it, the device starts recording your voice. When you finish speaking, it sends the recording to Amazon. The service that processes this recording is called Alexa Voice Service, or AVS, and it is one of the magnificent applications of Machine Learning. This service is run by Amazon.
AVS interprets the command from the recorded audio. It is also called a voice detection service that can work with many other online services.
The commands interpreted by Alexa can be requests such as asking for the time or a weather report. After the command has been noted, it is sent to Amazon. Then, AVS gives the response, telling you the time or the weather report via an audio file sent back by Amazon's servers.
You can also give it more complex tasks: if you ask Alexa for the 'Applications of Machine Learning', AVS will search for the keywords you have set up on the servers.
Alexa can also control your home appliances by voice commands if you are using smart electronic devices such as Philips smart bulbs. You can give commands to Alexa to switch on or off the lights. You can even link it to Domino’s and order pizza by giving commands to Alexa. Isn’t it a magnificent application of Machine Learning?
Echo and Alexa can perform a lot of things. Amazon is continuously adding more skills to Alexa that will make it better.
Well, if you consider these products, Amazon is not the only company in the market that has used this application of Machine Learning. Google uses ‘Ok Google’ as its voice command services, Apple uses ‘Siri,’ and Microsoft uses ‘Cortana.’ Even if they are using the same approach, i.e., voice commands processed in cloud servers, they are not as good as Alexa.
Now, we will move on to another superb application of Machine Learning, i.e., self-driving cars!
Tesla’s Self-driving Cars
Tesla's self-driving cars are another of the most talked-about applications of Machine Learning. A recent study has shown that over 90 percent of road accidents are caused by human error, and these mistakes are often catastrophic. The accidents have led to a massive number of unnecessary deaths; lives that could have been saved had people been driving safely. This is where self-driving cars come into the picture. This real-world application of Machine Learning has led the automobile industry in a new and safer direction. These autonomous cars are safer than cars driven by humans because they are not affected by factors like the illness or emotions of a driver.
Self-driving cars persistently observe the environment, scan in all directions, and make their moves. Because they never lag in observation, self-driving cars can react more consistently than human drivers.
Working of Self-driving Cars
Self-driving cars are a real-world example of Machine Learning that mainly uses three different technologies: IoT sensors, IoT connectivity, and software algorithms.
Self-driving cars are not limited to Tesla; in today's world, the most famous self-driving cars are those made by Tesla and Google. Tesla's cars examine their surroundings using a software system called Autopilot. As we use our eyes to visualize the surrounding world, Autopilot uses hi-tech cameras to recognize objects. It then interprets the information and makes the best decision based on it. This major application of Machine Learning is revolutionizing the automobile industry.
Next, in this blog on the applications of Machine Learning, we will look at the Netflix movie recommendation system.
Netflix Movie Recommendation
Netflix's movie recommendation engine drives 80% of the movies and TV shows that are streamed, which means that the majority of what we decide to watch on Netflix is the result of decisions made by its algorithm.
Netflix uses Machine Learning algorithms to recommend the list of movies and shows that you might have not initially chosen. To do this, it looks at threads within the content.
Netflix relies on three main tools: the first is its members; the second is taggers, who understand everything about the content; and the third is the Machine Learning algorithms that take all of this data and put it together.
Netflix uses different kinds of data from these profiles. It keeps track of what you watch, what you watch after completing your current video, what you watched earlier, even a year ago, and at what time of day you watch. This data is the first leg of the metaphorical tool.
Netflix then combines this information with more data to understand the content of the shows you are watching. This data is gathered from dozens of in-house and freelance staff who watch every minute of every show on Netflix and tag them. All the tags and user-behavior data are fed into a very sophisticated Machine Learning algorithm that figures out what is most important.
These three tools are used to analyze the tastes of communities around the world: people who watch the same kinds of things that you watch. Viewers are fitted into thousands of different taste groups, which determine the recommendations that pop up at the top of their screens as rows of similar content.
Across the globe, the tags used are the same for all the Machine Learning algorithms. The data Netflix feeds into its algorithms can be broken down into two types: implicit and explicit.
Explicit data is what you literally tell Netflix. For example, you give a thumbs up to Friends, and Netflix gets it.
Implicit data is behavioral data. You did not explicitly tell Netflix that you like Black Mirror, but you binge-watched it in two nights, so Netflix understands the behavior. As a matter of fact, the majority of useful data is implicit data.
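One hypothetical way to combine an explicit signal with implicit behavior into a single taste score is sketched below; the weights and formula are invented for illustration and are not Netflix's actual model.

```python
def taste_score(explicit_rating, episodes_watched, nights_to_finish):
    """Blend an explicit signal (thumbs up/down encoded as +1/-1/0)
    with an implicit one (binge speed, capped at 1.0)."""
    binge_signal = episodes_watched / max(nights_to_finish, 1)
    return 0.4 * explicit_rating + 0.6 * min(binge_signal / 3.0, 1.0)


# No explicit rating, but six episodes in two nights: a strong implicit signal.
print(round(taste_score(0, 6, 2), 2))  # 0.6
```

Note how the implicit term can dominate even with no explicit rating at all, matching the observation that most useful data is implicit.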
There are many more real-world applications of Machine Learning, but those described in this blog are the major ones. Now that we have seen the various applications of Machine Learning that are revolutionizing the world, we are moving into the next generation with full-fledged Machine Learning technology that will help give it a whole new direction.
This is the last blog in the series on Machine Learning and the related space. Hope you all enjoyed reading through the posts as much as I enjoyed putting them together. Stay tuned while I come back with yet another series on a technology topic.
Please feel free to review my earlier series of posts
Just a recap, in my previous post, I had primarily focused on the tools/techniques/frameworks and hardware that are prominent in the implementation of an AI/ML framework for any organization. In this post, we are going to dwell a little deeper into the Algorithms that are at the core of the ML/DL space and how humans are at the helm of this impact of machines.
Algorithms are becoming an integral part of our daily lives. About 80% of the viewing hours on Netflix and 35% of the retail on Amazon are due to automated recommendations by so-called AI/ML engines. Designers in companies like Facebook know how to use notifications and gamification to increase user engagement, exploiting and amplifying human vulnerabilities such as the need for social approval and instant gratification. In short, we are nudged, and sometimes even tricked, into making choices by algorithms that are still learning. From the products and services we buy to the information we consume and whom we mingle with online, algorithms play an important role in practically every aspect of our lives.
In the world of AI, a new challenge that is being increasingly discussed is the biases that creep into algorithms. Due to these biases, when we leave decisions to algorithms, there can be unintended consequences. More so, as algorithms deployed by tech companies are used by billions of people, the damage their biases cause can be significant. Moreover, we have a tendency to believe that algorithms are predictable and rational, so we tend to overlook many of their side effects.
How do today's algorithms differ?
In the past, developing an algorithm involved writing a series of steps that a machine could implement repeatedly without getting tired or making a mistake. In comparison, today’s algorithms, based on machine learning, do not follow a programmed sequence of instructions but ingest data and figure out for themselves the most logical sequence of steps and then keep working on improvement as they consume more and more data.
Machine learning itself has grown more sophisticated. In traditional (supervised) ML, a programmer usually specifies what patterns to look for. The performance of these methods improves as they are exposed to more data, but only up to a point. In deep learning, programmers do not specify what patterns to look for; instead, the algorithm evaluates the training data in different ways to identify the patterns that truly matter, including some that human beings may never have identified.
Deep learning models contain an input layer of data, an output layer for the desired prediction, and multiple hidden layers in between that combine patterns from previous layers to identify abstract and complex patterns in the data. In image recognition, for instance, early layers may detect simple edges while deeper layers combine them into shapes and whole objects.
Unlike traditional algorithms, the performance of deep learning algorithms keeps improving as more data is fed.
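The layered structure described above can be sketched as a tiny forward pass; the layer sizes and random weights here are arbitrary, and a real model would of course learn its weights from data.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0)


# Input layer (4 features) -> two hidden layers -> one output prediction.
# Each hidden layer recombines the previous layer's patterns into
# more abstract ones, as described above.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(8, 1))


def forward(x):
    h1 = relu(x @ W1)   # first hidden layer: simple patterns
    h2 = relu(h1 @ W2)  # second hidden layer: combinations of patterns
    return h2 @ W3      # output layer: the prediction


x = rng.normal(size=(1, 4))
print(forward(x).shape)  # (1, 1)
```

Training would adjust W1, W2 and W3 to minimize prediction error, which is how feeding in more data keeps improving the model.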
Decision making and avoiding unintended consequence
AI involves enabling computers to do the tasks that human beings can handle. This means computers must be able to reason, understand language, navigate the visual world and manipulate objects. Machine learning enhances this by learning from experience. As algorithms become more and more sophisticated and develop newer capabilities, they are going beyond their original role of decision support to decision making. The flip side is that as algorithms become more powerful, there are growing concerns about their opaqueness and unknown biases. The benefits of algorithms seem to far outweigh the small chance of an algorithm going rogue now and then. It is important to recognize that while algorithms do an exceptionally good job of achieving what they are designed to achieve, they are not completely predictable. They do have side effects like some medicines. These consequences are of three types
Perverse results affect precisely what is measured and so have a better chance of being detected. Unexpected drawbacks do not affect the exact performance metrics being tracked, which makes them difficult to avoid. Facebook's Trending Topics algorithm is a good example: it ensured that the highlighted stories were genuinely popular, but it failed to question the credibility of sources and inadvertently promoted fake news. Inaccurate and fabricated stories were widely circulated in the months leading up to the 2016 US Presidential election; the top 20 false stories in that period received greater engagement on Facebook than the top 20 legitimate ones.
Content and Collaborative filtering systems
Content based recommendation systems start with detailed information about a product’s characteristics and then search for other products with similar qualities. Thus, content based algorithms can match people based on similarities in demographic attributes- age, occupation, location, shared interests and ideas discussed on social media.
Collaborative filtering recommendation algorithms do not focus on the product's characteristics. Instead, they look for people who use the same products we do. For example, two of us may not be connected on LinkedIn, but if we have more than a hundred mutual connections, we will get a notification that we should perhaps connect.
Algorithms also leverage the principle of the digital neighborhood. One of the earliest pioneers of this principle was Google. In the late 1990s, when the internet was about to take off, the most popular online search engines relied primarily on the text content within web pages to determine their relevance. Google's insight was different: if a lot of other sites link to our website, then our website must be worth reading. It is not what we know but how many people know us that gets our website higher rankings. Research reveals that when Oprah Winfrey recommends a book, sales increase significantly, but books recommended by Amazon also get a significant boost. That is why digital neighborhoods are so important: for products that are recommended on many other product pages, recommendation algorithms drive a dramatic increase in sales. Spotify initially used a collaborative filter but later combined it with a content-based method.
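The digital-neighborhood idea can be sketched by simply counting incoming links; real PageRank additionally weights each link by the importance of the linking page, so this is only a first approximation with made-up page names.

```python
def link_popularity(links):
    """Rank pages by how many other pages link to them: the
    'it is how many people know us' principle."""
    counts = {}
    for source, targets in links.items():
        for target in targets:
            counts[target] = counts.get(target, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)


web = {
    "blog": ["shop", "news"],
    "news": ["shop"],
    "forum": ["shop", "blog"],
}
print(link_popularity(web))  # 'shop' (3 incoming links) ranks first
```

The same counting logic applies to products: an item recommended on many other product pages sits in a dense digital neighborhood and gets surfaced more often.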
AI began with expert systems, i.e., systems which capture the knowledge and experience of human experts. These systems suffer from two drawbacks:
They do not automate the decision-making process.
They cannot encode a response to every kind of situation.
We can create intelligent algorithms for highly controlled environments, expert-system style, to ensure they are highly predictable in behavior; but such algorithms will stumble on problems they were not prepared for. Alternatively, we can use machine learning to create algorithms that are resilient but also somewhat unpredictable. This is the predictability-resilience paradox. Much as we may desire fully explainable and interpretable algorithms, the balance between predictability and resilience seems to be tilting inevitably toward the latter.
Technology is most useful when it helps human beings solve sophisticated problems that involve creativity. To solve such problems, we have to move away from fully predictable systems. One way to resolve the predictability-resilience paradox is to combine multiple approaches. In a self-driving car, for instance, machine learning might run the show, but when the system is confused about a road sign, a set of hand-coded rules might kick in.
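This hybrid pattern can be pictured with a small sketch. Everything here is hypothetical (a stub `learned_classifier` standing in for a trained network, invented labels and confidence values), not any real self-driving stack: the learned model answers when it is confident, and a predictable rule takes over when it is not.

```python
def learned_classifier(sign_image):
    """Stand-in for a trained model: returns (label, confidence).
    A real system would run a neural network here."""
    return ("speed_limit_60", 0.42)  # hypothetical low-confidence prediction

def rule_based_fallback(sign_image):
    """Predictable, conservative hand-coded rule."""
    return "slow_down_and_alert_driver"

def classify_sign(sign_image, threshold=0.9):
    label, confidence = learned_classifier(sign_image)
    if confidence >= threshold:
        return label                        # resilient, learned answer
    return rule_based_fallback(sign_image)  # predictable safety net

print(classify_sign("blurry_sign.png"))  # slow_down_and_alert_driver
```

The threshold is the dial between the two poles of the paradox: raise it and the system becomes more predictable; lower it and the learned, more resilient component decides more often.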
Environmental Factors that Support Algorithms
Human behavior is shaped by both hereditary and environmental factors. The same is true of algorithms. There are three components we need to consider: the data, the algorithms themselves, and the people who use them.
While data, algorithms, and people each play a significant role in determining the outcomes of the system, the whole is greater than the sum of its parts: the complex interactions among the various components have a big impact.
Many professions will be redefined if algorithmic systems are adopted intelligently by users. But if there are public failures, we cannot take user adoption for granted. We might successfully build intelligent diagnostic systems and effective driverless cars, but in the absence of trust, doctors and passengers will be unwilling to use them. So it is important to increase trust in algorithms; otherwise, they will not gain acceptance. According to some estimates, driverless cars could save up to 1.5 million lives in the US alone, and close to 50 million lives globally, over the next 50 years. Yet, in a poll conducted in April 2018, 50% of respondents said they considered autonomous cars less safe than cars driven by human beings.
Rules and Regulations
Decision making is the most human of our abilities, and today’s algorithms are advancing rapidly into this territory. So we must develop a set of rights, responsibilities, and regulations to manage, and indeed thrive in, this world of technological innovation. Such a bill of rights should have four components.
AI systems have mastered games; now it’s time to master reality. Some time back, Facebook developed two bots and trained them in negotiation skills. The bots were exposed to thousands of negotiation games and taught how conversations evolve in a negotiation and how they should respond. The outcome of this training far exceeded expectations: the bots learnt how to trade items and even developed their own shorthand language. When the bots were then made to negotiate with human beings, the people on the other side did not even realize they were talking to bots!
Stay tuned…. In Part 5 of this foray, we will look into the details of the top 8 examples of organizations that have successfully implemented AI/ML frameworks and how they are benefiting from them.
Please feel free to review my earlier series of posts
Just to recap: in my previous post, I primarily looked at the amount and structure of data that needs to be fed as input to machine learning, and the process used for its implementation. In this post, we are going to continue learning about the tools, techniques, and hardware required for these processes to excel.
While most of us are still appreciating the early applications of machine learning, the field continues to evolve at quite a promising pace, introducing more advanced approaches such as deep learning. Why is DL so good? It delivers excellent accuracy when trained with a huge amount of data, and it plays a significant role in filling the gap where a scenario is challenging even for the human brain. Logically enough, this has contributed to a whole slew of new frameworks appearing. Please see below the top 10 frameworks that are available and their intended use in the relevant industry space.
The above is intended as a comprehensive analysis of the best tools I would recommend. So what is the final advice?
The speed of your algorithm is dependent on the size and complexity of your data, the algorithm itself, and your available hardware. In this section, we’re going to focus on hardware considerations when:
Training the model
Running the model in production
Some things to remember:
If you need results quickly, try machine learning algorithms first. They are generally quicker to train and require less computational power. The main factor in training time will be the number of variables and observations in the training data.
Deep learning models will take time to train. Pre-trained networks and public datasets have shortened the time to train deep learning models through transfer learning, but it is easy to underestimate the real-world practicalities of incorporating your training data into these networks. These algorithms can take anywhere from a minute to a few weeks to train depending on your hardware and computing power.
Training the Model
Desktop CPUs: are sufficient for training most machine learning models but may prove slow for deep learning models.
CPU Clusters: Big data frameworks such as Apache Spark™ spread the computation across a cluster of CPUs.
GPUs: are the norm for training most deep learning models because they offer dramatic speed improvements over training on a CPU. Because deep learning models take a long time to train (often on the order of hours or days), it is common to have several models training in parallel, in the hope that one (or some) of them will provide improved results; this requires additional hardware.
GPU Clusters and Cloud: The cluster or cloud option has gained popularity due to the high cost of obtaining GPUs, since it lets the hardware be shared by several researchers.
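The "several models training in parallel" idea can be pictured on a single machine with Python's standard library. This is only a local analogy, not Apache Spark (which distributes work across many machines), and `evaluate_model`, the parameters, and the scoring rule are all invented placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_model(params):
    """Stand-in for one independent training/evaluation job.
    A real job would train a model with these hyperparameters."""
    return sum(params) % 7  # deterministic placeholder "score"

candidate_params = [(1, 2), (3, 4), (5, 6), (7, 8)]

# Map the independent jobs across a pool of workers, Spark-style.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(evaluate_model, candidate_params))

best = candidate_params[scores.index(max(scores))]
print(scores, best)
```

The key property being illustrated is that the jobs are independent, so they can be sharded across whatever hardware is available: threads here, CPU cluster nodes or GPUs in practice.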
Running the Model in Production
The trend toward smarter and more connected sensors is moving more processing and analytics closer to the sensors. This shrinks the amount of data that is transferred over the network, which reduces the cost of transmission and can reduce the power consumption of wireless devices.
Several factors will drive the architecture of the production system:
Will a network connection always be available?
How often will the model need to be updated?
Do you have specialized hardware to run deep learning models?
Will a network connection always be available?
Machine learning and deep learning models that run on hardware at the edge will provide quick results and will not require a network connection.
How often will the model need to be updated?
Powerful hardware will need to be available at the edge to run the machine learning model, and it will be more difficult to push out updates to the model than if the model resided on a centralized server.
Tools are available that can convert machine learning models, which are typically developed in high-level interpreted languages, into standalone C/C++ code, which can be run on low-power embedded devices.
Do you have specialized hardware to run deep learning models?
For deep learning models, specialized hardware is typically required due to the higher memory and compute requirements.
We have now looked at the different frameworks, tools, and techniques prevalent in the industry for machine and deep learning initiatives in your organization, along with recommendations on which framework works for your specific needs. We also looked at the hardware required to run these models and make the frameworks effective.
Stay tuned…. In Part 4 of this foray, we will look into the usage of all these machine and deep learning algorithms, models, and frameworks that are prevalent in the industry.
Please feel free to review my earlier series of posts on AI-ML Past, Present and Future – distributed across 8 blogs.
Just to recap: in my previous post, I primarily looked at the basic terms in machine learning and the process used for its implementation. In this post, we are going to continue learning about the steps in the machine learning process and how it can be enhanced and made more efficient.
Being on top of the domain knowledge in your industry or process is hugely significant: it helps you filter the right data set to be considered for machine learning. The core of this discussion revolves around the data and its structure as it exists in your organization.
To determine what data is present in an organization and how it is structured, we pose three questions:
1. Is Your Data Tabular?
Traditional machine learning techniques were designed for tabular data, which is organized into independent rows and columns. In tabular data, each row represents a discrete piece of information (e.g., an employee’s address).
There are ways to transform tabular data to work with deep learning models, but this may not be the best option to start off with.
Tabular data can be numeric or categorical (though eventually the categorical data would be converted to numeric).
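The categorical-to-numeric conversion mentioned above can be sketched in plain Python. In practice you would use a library routine (e.g., one-hot encoding in pandas or scikit-learn); the `employees` rows and the `one_hot` helper here are illustrative inventions:

```python
def one_hot(rows, column):
    """Replace a categorical column with one 0/1 column per category."""
    categories = sorted({row[column] for row in rows})
    encoded = []
    for row in rows:
        new_row = {k: v for k, v in row.items() if k != column}
        for cat in categories:
            new_row[f"{column}={cat}"] = 1 if row[column] == cat else 0
        encoded.append(new_row)
    return encoded

# Each row is one discrete piece of information, as in tabular data.
employees = [
    {"years": 3, "dept": "sales"},
    {"years": 7, "dept": "engineering"},
]
print(one_hot(employees, "dept"))
```

After encoding, every column is numeric, which is the form traditional machine learning algorithms expect.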
2. If You Have Non-Tabular Data, What Type Is It?
Images and Video: Deep learning is more common for image and video classification problems. Convolutional neural networks are designed to extract features from images that often result in state-of-the-art classification accuracies – making it possible to discern high-level differences such as cat vs. dog.
Sensor and Signal: The common approach is extracting features from signals and then using these features with a machine learning algorithm. More recently, signals have been passed directly to LSTM (Long Short-Term Memory) networks, or converted to images (for example, by calculating the signal’s spectrogram), with the resulting image used with a convolutional neural network. Wavelets provide yet another way to extract features from signals.
Text: Text can be converted to a numerical representation via bag-of-words models and normalization techniques and then used with traditional machine learning techniques such as support vector machines or naïve Bayes. Newer techniques use text with recurrent or convolutional neural network architectures. In these cases, text is often transformed into a numeric representation using a word-embedding model such as word2vec.
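The bag-of-words conversion is simple enough to sketch by hand; real pipelines would use a library vectorizer or a word-embedding model such as word2vec, and the two toy documents here are invented:

```python
from collections import Counter

def bag_of_words(documents):
    """Turn each document into a vector of word counts over a shared vocabulary."""
    tokenized = [doc.lower().split() for doc in documents]
    vocab = sorted({word for doc in tokenized for word in doc})
    vectors = [[Counter(doc)[word] for word in vocab] for doc in tokenized]
    return vocab, vectors

docs = ["the cat sat", "the cat ate the fish"]
vocab, vectors = bag_of_words(docs)
print(vocab)    # ['ate', 'cat', 'fish', 'sat', 'the']
print(vectors)  # [[0, 1, 0, 1, 1], [1, 1, 1, 0, 2]]
```

These numeric vectors are what a support vector machine or naïve Bayes classifier would then consume; word order is discarded, which is exactly the limitation newer recurrent and convolutional approaches address.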
3. Is Your Data Labeled?
To train a supervised model, whether for machine learning or deep learning, you need labeled data.
If You Have No Labeled Data
Focus on machine learning techniques (in particular, unsupervised learning techniques). Labeling for deep learning can mean annotating objects in an image, or each pixel of an image or video, for semantic segmentation. The process of creating these labels, often referred to as “ground-truth labeling,” can be prohibitively time-consuming.
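To see what unsupervised learning can do without any labels, here is a minimal 1-D k-means sketch on invented sensor readings (real work would use a library implementation such as scikit-learn's KMeans, with better initialization):

```python
def kmeans_1d(points, k=2, iters=20):
    """Minimal 1-D k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    centroids = sorted(points)[:k]  # simplistic deterministic initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

readings = [0.8, 1.0, 1.2, 9.8, 10.0, 10.5]  # no labels, two obvious groups
centroids, clusters = kmeans_1d(readings)
print(sorted(round(c, 2) for c in centroids))  # [1.0, 10.1]
```

No ground-truth labels were needed: the algorithm discovered the two groups from the structure of the data alone, which is exactly what makes unsupervised techniques attractive when labeling is prohibitively time-consuming.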
If You Have Some Labeled Data
Use transfer learning: because it focuses on training a smaller number of parameters in the deep neural network, it requires a smaller amount of labeled data.
Another approach for dealing with small amounts of labeled data is to augment that data. For example, it is common with image datasets to augment the training data with various transformations on the labeled images (such as reflection, rotation, scaling, and translation).
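Augmentation is easy to picture with a tiny "image" represented as a list of lists; each transform yields another labeled training example. This toy 2x3 image and label are invented, and real pipelines apply library transforms (including scaling and translation) to actual image tensors:

```python
def reflect_horizontal(image):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in image]

def rotate_90(image):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

# One labeled 2x3 "image"; every transform becomes another labeled example.
image, label = [[1, 2, 3],
                [4, 5, 6]], "cat"
augmented = [(image, label),
             (reflect_horizontal(image), label),
             (rotate_90(image), label)]
print(rotate_90(image))  # [[4, 1], [5, 2], [6, 3]]
```

One hand-labeled example has become three, which is the whole point: the label is reused because a reflected or rotated cat is still a cat.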
If You Have Lots of Labeled Data
With plenty of labeled data, both machine learning and deep learning are options. The more labeled data you have, the more likely it is that deep learning techniques will be more accurate. A typical example below illustrates the approach when you have a large amount of labeled data.
The first step in initiating any machine learning project is to identify the different steps/tasks within the business process. While one task alone might be more suited to machine learning, your full application might involve multiple steps that, taken together, are better suited to deep learning. If you have a large data set, deep learning techniques can produce more accurate results than machine learning techniques, because deep learning uses more complex models with more parameters that can be more closely “fit” to the data.
Some areas are more suited to machine learning or deep learning techniques. Here we present 6 common tasks:
We will look at each of the above tasks along with related examples, applications, required inputs, common algorithms, and whether each is better approached through machine learning or deep learning.
So, how much data is a “large” dataset? It depends. Some popular image classification networks available for transfer learning were trained on a dataset consisting of 1.2 million images from 1,000 different categories. If you want to use machine learning with a laser focus on accuracy, be careful not to overfit your data.
Overfitting happens when your algorithm is fitted too closely to your training data and cannot generalize to a wider data set: the model can’t properly handle new data that doesn’t fit its narrow expectations. The training data needs to be representative of your real-world data, and you need enough of it. Once your model is trained, use test data to check that it performs well; the test data should be completely new data.
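An extreme caricature of overfitting, using invented data and pure Python: a "model" that simply memorizes its training pairs scores perfectly on the training set but fails on anything new, while a simple general rule transfers to the held-out test data.

```python
train = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]
test = [(3, "low"), (7, "high")]  # completely new data

# Overfit "model": memorize every training example exactly.
memorized = dict(train)
def overfit_predict(x):
    return memorized.get(x, "unknown")  # no idea about unseen inputs

# Simpler model: one learned threshold generalizes beyond the training set.
def simple_predict(x, threshold=5):
    return "low" if x < threshold else "high"

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

print(accuracy(overfit_predict, train), accuracy(overfit_predict, test))  # 1.0 0.0
print(accuracy(simple_predict, train), accuracy(simple_predict, test))    # 1.0 1.0
```

The gap between training accuracy and test accuracy is the telltale sign of overfitting, and it is only visible because the test data was kept completely separate.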
We have now seen the amount and format of data required, and how it has to be structured, for machine learning or deep learning processes to be implemented. Stay tuned…. In Part 3 of this foray, we will look into the details of the tools, techniques, and hardware required to support the machine learning process as we move deeper into the learning portions of this AI foray we have embarked upon.