Reading through last week’s post got me a little annoyed. Specifically, I dislike using technical jargon – AI, in this case – without explaining what I’m talking about. I’m going to remedy that here. If you’ve ever wondered what Artificial Intelligence (AI) is, why or how it’s useful, or how it relates to things like Machine Learning (ML) and Deep Learning, then this post is for you.
What the heck is it?
Defining Artificial Intelligence has taken a whole lot more brainpower than you might expect because a) it’s difficult and b) researchers like to argue. That said, here’s my working definition: technology that makes decisions based on previous experience. I realize that’s pretty vague. Rather than trying to refine it, though, let’s have an example or two.
Image recognition is one example that you may have come across. In it, a program’s set up with some handy algorithms, a goal (identifying cats, for example), and a bunch of pictures, each with a label saying whether or not it contains a cat. The software then modifies its algorithms to ID cats in the labelled photos as accurately as possible.
The key here is that no one tells the program how to identify cats — it figures that out on its own through iteration. In effect, the software teaches itself how to find felines.
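Here’s a minimal sketch of that “learn from labelled examples” pattern, using scikit-learn with made-up numbers standing in for real cat photos (the data, the model choice, and all the sizes below are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up stand-in for labelled training data: each row is a (flattened)
# image, and each label says whether that image contains a cat (1) or not (0).
rng = np.random.default_rng(0)
images = rng.random((200, 64))           # 200 tiny 8x8 "images"
labels = rng.integers(0, 2, size=200)    # 1 = cat, 0 = not a cat

# "Training" is where the software adjusts its internal parameters so that
# its guesses match the labels as closely as possible.
model = LogisticRegression(max_iter=1000)
model.fit(images, labels)

# Once trained, the model can guess whether a brand-new image contains a cat.
new_image = rng.random((1, 64))
print(model.predict(new_image))          # e.g. [1] -> "probably a cat"
```

Real image-recognition systems swap the logistic regression for a deep neural network and the random numbers for millions of actual photos, but the train-then-predict loop is the same.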
Companies use image recognition AI (part of the broader field of computer vision) across a dizzying array of applications, from self-driving cars and speedier car rentals to sorting produce. AI can also generate new images, though the results aren’t always pretty.
A similar process applies to text-based sentiment analysis and content generation, including the auto-complete features in email and messaging apps. Each of these examples, like everything I’ve discussed so far, falls under the heading of deep learning. That’s only one kind of AI, though. Let’s clarify how it relates to another category of AI: Machine Learning.
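Before diving into Machine Learning proper, here’s a toy sketch of the auto-complete idea. Real systems use deep neural networks trained on enormous amounts of text; the version below just counts which word tends to follow which (the tiny corpus and the suggest function are invented for illustration):

```python
from collections import Counter, defaultdict

# A tiny made-up corpus of past messages; real systems learn from far more text.
corpus = [
    "see you at the meeting",
    "see you at the office",
    "see you tomorrow",
]

# "Learning" here is just counting which word tends to follow each word.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def suggest(word):
    """Suggest the most likely next word, based on past messages."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("you"))  # -> "at" (it follows "you" twice; "tomorrow" only once)
```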
Learning about Machine Learning
Deep learning can do some amazing stuff, but that amazing stuff can be pretty costly due to its complexity. For example, consider the kind of algorithm commonly used in image recognition.
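Here’s a rough sketch, assuming a small convolutional neural network written in PyTorch; every layer size, and the cat-versus-not-cat output, is invented purely for illustration:

```python
import torch
import torch.nn as nn

# A small convolutional neural network, the kind of architecture often used
# for image recognition. Every number below is a design decision someone had
# to make (and probably tune).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # look for simple patterns (edges, textures)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # shrink the image, keep the strongest signals
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine patterns into more complex shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64),                  # mix everything together
    nn.ReLU(),
    nn.Linear(64, 2),                             # final verdict: cat or not-cat
)

# One fake 64x64 colour image, just to show data flowing through all of that.
fake_image = torch.randn(1, 3, 64, 64)
print(model(fake_image).shape)  # torch.Size([1, 2])
```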
Yikes! Luckily, not every AI solution involves the incredible complexity inherent to deep neural nets. A lot of these simpler alternatives still allow the system to “teach itself,” just like deep learning algorithms do. As a consequence, they’re often lumped together with deep learning in the field known as Machine Learning (which is a subset of Artificial Intelligence).
ML encompasses a wide range of techniques, from complex and costly deep learning approaches to dimension-reduction algorithms that used to be computed by feeding punch cards into computers. Not surprisingly, their applications have been just as far-ranging.
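As a taste of the simpler end of that range, here’s principal component analysis (a classic dimension-reduction technique) via scikit-learn, run on made-up data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Made-up data: 100 "customers", each described by 10 numeric features.
rng = np.random.default_rng(1)
data = rng.random((100, 10))

# PCA squeezes those 10 features down to the 2 combinations that capture
# the most variation -- no neural network (or punch cards) required.
pca = PCA(n_components=2)
reduced = pca.fit_transform(data)

print(reduced.shape)                  # (100, 2)
print(pca.explained_variance_ratio_)  # how much variation each kept dimension explains
```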
Like other recommendation engines, Slick Predict Recommends uses a family of ML algorithms that falls outside the realm of deep learning. That lets it initialize (i.e., “learn”) quickly during installation and serve value-generating recommendations within a few milliseconds while a customer browses a shop’s product pages.
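To give a flavour of that style of ML (not necessarily what Slick Predict Recommends does under the hood), here’s a minimal item-to-item co-occurrence sketch; the orders and product names are invented:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Invented purchase history: each inner list is one customer's order.
orders = [
    ["tent", "sleeping bag", "headlamp"],
    ["tent", "sleeping bag"],
    ["headlamp", "batteries"],
    ["tent", "headlamp"],
]

# "Learning" is just counting which products show up in the same order --
# quick to build at install time, quick to query while a shopper browses.
bought_together = defaultdict(Counter)
for order in orders:
    for a, b in combinations(set(order), 2):
        bought_together[a][b] += 1
        bought_together[b][a] += 1

def recommend(product, n=2):
    """Return the products most often bought alongside `product`."""
    return [item for item, _ in bought_together[product].most_common(n)]

print(recommend("tent"))  # e.g. ['sleeping bag', 'headlamp']
```

Once the counts are built, a lookup like recommend("tent") is just a dictionary read, which is part of why this style of system can answer in milliseconds.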
Easy does it
Given the examples above, it’s likely that you rely on AI-based systems in your daily life. With their sometimes-complex nature and occasionally unexpected results, why are they becoming so ubiquitous? In one word: ease. These systems save time by finishing texts, making restaurant reservations, or driving for us — and they can do these things reliably and with little user input.
This is why we built our first product on top of ML algorithms. No one wants to spend hours building lists of related products or wondering why their shop’s product collections don’t show up. Plus, AI can just do some things better than we can, especially when those things involve lots of data.
Here’s hoping I’ve cleared a thing or two up. Let me know in the comments either way!