Everyone’s talking about it. You read about it constantly. It seems like everyone’s doing it, but does anyone actually know what it is?
Today, you can’t go more than a day without hearing about artificial intelligence (A.I.) or machine learning (ML), but these concepts have actually been around for a long time. Recent advancements in computing power and algorithms, though, have brought about a resurgence in the field. Now, it seems as if everyone thinks A.I. will change the world.
For those without a background in data science, it’s easy to get lost in all the hype. Fortunately, it’s easy to distill the main ideas down to a few fundamental concepts that can provide a great, high-level understanding of what’s going on in the field today, and where things might go in the future.
But the question remains: what is A.I. and what is machine learning?
At its core, A.I. is an area of research, development, and application that’s focused on getting machines to act increasingly rationally and autonomously. Today, much of the research focuses on the development of algorithms (sequences of steps) that use applied math to create formulas and functions that optimize for a very specific task.
Today, machines can do things that were, at one point, reserved for living, breathing creatures. New accomplishments in the fields of machine vision, natural language processing, voice recognition, and content generation have resulted in a renewed interest in A.I. Much of this is due to advancements in machine learning.
Machine learning is generally considered a subfield of A.I. It focuses on developing and applying mathematical processes that find common patterns in data, patterns that can then be used to achieve some goal.
The recent ability of computers to process vast amounts of data in a reasonable amount of time has allowed researchers to more efficiently implement these processes, which have been around since at least the ‘80s. In general, these processes rely on building and tweaking the parameters of a formula that learns some objective function (like “predict the price of this house”, or “identify the most likely object in this image”). Machine learning algorithms tweak (or learn) these parameters by examining vast quantities of historical examples to find the most relevant patterns that can help in accurately achieving the objective.
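To make “tweaking the parameters of a formula” concrete, here’s a minimal sketch of an algorithm learning the two parameters of a house-pricing formula via gradient descent. The data and the pricing relationship are invented purely for illustration:

```python
# A minimal sketch of "learning the parameters of a formula":
# fit price ≈ w * size + b to a few historical examples using
# gradient descent. All numbers here are made up for illustration.

sizes = [1.0, 2.0, 3.0, 4.0]           # house size (1000s of sq ft)
prices = [150.0, 250.0, 350.0, 450.0]  # sale price (1000s of dollars)

w, b = 0.0, 0.0   # the parameters the algorithm will "tweak"
lr = 0.01         # learning rate: how big each tweak is

for _ in range(20000):
    # gradient of the mean squared error with respect to w and b
    grad_w = grad_b = 0.0
    for x, y in zip(sizes, prices):
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(sizes)
        grad_b += 2 * err / len(sizes)
    # nudge each parameter in the direction that reduces the error
    w -= lr * grad_w
    b -= lr * grad_b

# the learned formula should approximate price = 100 * size + 50
print(round(w, 1), round(b, 1))
```

The loop never sees the “true” formula; it only sees historical examples, and repeated small corrections to `w` and `b` recover the pattern, which is the essence of what L-earning means here.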
These techniques can be generalized across many different areas, and their ability to learn specific tasks or objectives from large amounts of data makes them well suited to a wide variety of tasks.
While machine learning is one of the hottest areas in A.I. today, many other subfields have been popular over time and may become more popular in the future. Many researchers today believe that machine learning cannot lead to artificial general intelligence (AGI): human-level abilities to reason, act, and learn across a wide variety of scenarios.
So while we’re far from having to worry about machines taking over the world, we are seeing many new advancements in areas where machines can learn from data.
Machine learning has made progress in a variety of different fields. These algorithms power recommended new products for you to buy on Amazon. They decide what articles you should be shown on social media. They can even identify individual faces within large crowds. And while ML algorithms have been applied to a variety of different industries, in general they have been categorized into three main areas:
- Supervised learning
- Unsupervised learning
- Reinforcement learning

Each of these areas leverages specific techniques and types of data to accomplish its objectives, and we’ll briefly discuss each one below.
Supervised learning involves a set of algorithms and processes where machines are given a specific objective (e.g. optimize the probability that someone will click on an ad that we show them), provided with individual examples of that objective along with supporting data for each example (e.g. the advertiser and product for every ad that was shown to a group of people, along with whether they clicked that ad), and are then trained to find patterns within the data that can optimize that objective going forward (e.g. predict as closely as possible the likelihood that someone clicks on a new ad).
At a high level, supervised learning algorithms are provided with examples of what has previously happened and are asked to use that information to predict (or classify) what might happen in future scenarios given similar data. Basically, machines are told what to look for within the data they receive. This type of data is called “labeled” data, because each example is labeled with what happened, and that label is used to optimize the objective we care about.
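As an illustrative sketch of learning from labeled data, here’s a tiny logistic-regression-style model trained on invented ad-click examples; the single “relevance” feature and the click labels are assumptions made up for this example:

```python
import math

# A toy supervised-learning sketch: learn click probability from
# labeled examples. Each example is (ad relevance score, clicked?);
# both the feature and the labels are invented for illustration.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]

def sigmoid(z):
    # squashes any number into a probability between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.0, 0.0
lr = 0.5
for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)   # predicted click probability
        grad_w += (p - y) * x    # push prediction toward the label
        grad_b += (p - y)
    w -= lr * grad_w / len(data)
    b -= lr * grad_b / len(data)

# a more relevant ad should now get a higher predicted click probability
print(sigmoid(w * 0.9 + b) > sigmoid(w * 0.1 + b))
```

The labels (clicked or not) are exactly what makes this “supervised”: the model is corrected against known outcomes on every pass.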
You might have guessed that if supervised learning involves “labeled” data, unsupervised learning involves “unlabeled” data. Specifically, unsupervised learning involves a set of techniques where machines are asked to simply find patterns that occur frequently in data that they’re provided, without being provided any additional information about a goal or objective.
While this approach may at first sound to have little value, these methods are frequently used to group data into clusters or segments and to identify products that are frequently sold together. Unsupervised algorithms can also be used to identify important attributes that are later used for supervised learning.
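As a sketch of this clustering idea, here’s a minimal k-means pass over a handful of made-up one-dimensional values (think purchase amounts), grouping them into two segments without being given any labels:

```python
# A minimal unsupervised-learning sketch: k-means clustering in one
# dimension, grouping unlabeled values into two segments. The data
# points are invented for illustration.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [points[0], points[-1]]   # naive initialization

for _ in range(10):
    # assignment step: attach each point to its nearest center
    clusters = [[], []]
    for p in points:
        nearest = min((0, 1), key=lambda k: abs(p - centers[k]))
        clusters[nearest].append(p)
    # update step: move each center to the mean of its cluster
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

# the two centers settle near the two natural groups in the data
print(sorted(round(c, 2) for c in centers))  # → [1.0, 8.07]
```

No label or objective is ever supplied; the algorithm simply discovers that the values fall into two groups, which is exactly the “find patterns that occur frequently” behavior described above.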
Unsupervised techniques are also frequently applied when using a very small amount of labeled data to learn patterns among a larger set of unlabeled data, a technique known as semi-supervised learning.
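A toy sketch of that idea: below, two labeled seed points lend their labels to nearby unlabeled points, and the class centers are then re-estimated from everything, a simple self-training scheme. All values and class names are invented for illustration:

```python
# A toy semi-supervised sketch (self-training): a tiny labeled set
# guides the labeling of a larger unlabeled set. All values are
# invented for illustration.
labeled = {0.5: "A", 9.5: "B"}            # the small labeled set
unlabeled = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]

centers = {"A": 0.5, "B": 9.5}
for _ in range(5):
    # assign each unlabeled point to the nearest class center
    guesses = {p: min(centers, key=lambda c: abs(p - centers[c]))
               for p in unlabeled}
    # re-estimate centers from labeled + pseudo-labeled points
    for c in centers:
        members = [p for p, g in guesses.items() if g == c]
        members += [p for p, l in labeled.items() if l == c]
        centers[c] = sum(members) / len(members)

print(guesses)  # every unlabeled point now carries a pseudo-label
```

Two labels were enough to organize six unlabeled points, which is the appeal of semi-supervised methods when labeling is expensive.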
While supervised and unsupervised learning are frequently used to predict outcomes or identify patterns, reinforcement learning is used to learn and optimize a sequence of actions over time in an uncertain environment. These methods, which are frequently used in video games to optimize long-term strategies, are designed to learn the best behaviors and actions to take to maximize long-term rewards. They learn by balancing the need to explore uncertain actions with the need to exploit high-value actions within their environments.
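A minimal sketch of that explore/exploit balance is an epsilon-greedy agent on a three-armed bandit (three possible actions with hidden payoffs); the payoff values below are invented for illustration:

```python
import random

# A minimal reinforcement-learning sketch: an epsilon-greedy agent
# balancing exploration and exploitation across three actions.
# The hidden payoffs are invented for illustration.
random.seed(0)
payoffs = [0.2, 0.5, 0.9]    # true (hidden) reward of each action
estimates = [0.0, 0.0, 0.0]  # the agent's learned value estimates
counts = [0, 0, 0]
epsilon = 0.2                # how often to explore a random action

for _ in range(2000):
    if random.random() < epsilon:
        action = random.randrange(3)               # explore
    else:
        action = estimates.index(max(estimates))   # exploit best-known
    reward = payoffs[action]
    counts[action] += 1
    # incremental average: nudge the estimate toward observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

# the agent should settle on the highest-paying action
print(counts.index(max(counts)))
```

Nobody labels the “right” action; the agent discovers it through trial, error, and reward, which is what distinguishes reinforcement learning from the other two areas.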
At this point, it’s important to mention a generic learning technique that has been the primary catalyst for the re-emergence of interest in machine learning and A.I. This technique, known as deep learning, can be used in any of the areas mentioned above, and is arguably the most powerful learning algorithm available to us today.
Without getting into technical specifics, deep learning is a family of algorithms loosely modeled on the human brain. In these learning algorithms, a huge number of neurons (small, self-contained mathematical functions) “communicate” with each other (essentially, they compute functions whose results feed into one another) to learn and identify extremely complicated and nuanced patterns in data. They can be used to identify objects in images (supervised learning), to learn from a small amount of labeled data and then predict on a much larger amount of unlabeled data, as with labeled radiology images (semi-supervised learning), or to learn how to intelligently represent words as numerical vectors (unsupervised learning). Deep learning methods are also used to learn and predict what to do in reinforcement learning algorithms.
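To make “neurons feeding into each other” concrete, here’s a tiny network of three neurons. The weights are hand-picked for this illustration (in practice they would be learned from data) so that two hidden neurons and one output neuron together compute XOR, a pattern no single neuron can capture on its own:

```python
import math

# A tiny "deep" network with hand-set weights, just to show neurons
# (small mathematical functions) feeding into each other. In practice
# the weights would be learned; these are hand-picked so the network
# computes XOR.

def neuron(inputs, weights, bias):
    # weighted sum of inputs squashed through a sigmoid activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def network(x, y):
    h1 = neuron([x, y], [20, 20], -10)     # roughly computes OR
    h2 = neuron([x, y], [-20, -20], 30)    # roughly computes NAND
    return neuron([h1, h2], [20, 20], -30)  # AND of h1, h2 -> XOR

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, round(network(x, y)))
```

Each hidden neuron captures a simple sub-pattern, and the output neuron combines them; stacking many such layers is what lets deep networks represent far more nuanced patterns than any single formula.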
While deep learning has dramatically changed the field of artificial intelligence, it faces a couple of key hurdles: it can require vast amounts of data to learn effectively, and the predictions it generates are very hard to interpret. Deep learning algorithms are frequently referred to as “black boxes” due to this lack of transparency, and while that may not matter in some industries, the ability for humans to understand their output is a prerequisite in others (for example, insurance companies must be able to explain why they made a certain pricing decision for a customer).
While many organizations have already embraced machine learning, a large number are still actively examining how they can take advantage of this new technology. As more and more businesses incorporate machine learning into their products and services, they’ll unlock increasingly greater efficiencies, which will enable them to produce better products with lower costs, leading to greater market share.
While organizations are actively working on incorporating existing machine learning capabilities, researchers are actively working on creating new and improved techniques. Today, the future of A.I. looks bright. Funding continues to pour in through both venture capital and government grants, and research and development is moving quickly in both academia and industry. The continued improvement of A.I. techniques will create new industries, jobs, and opportunities, and the businesses and individuals that gain the most will be those that embrace A.I. as early and as often as they can.