
You’re Being Misled About AI!!!

Man Behind the Curtain

Talent acquisition is always evolving. Adapt or get left behind. Staying ahead means embracing new technology, and right now, everyone is buzzing about Artificial Intelligence (AI).

And most of what people are saying is misleading, to say the least.

Now to be fair, some AI is absolutely legit. We at Randall Reilly have been using it for years (keep an eye out for more content on how 😉).

But some out there are using “AI” as little more than a buzzword to reel in those who want to be on the cutting edge of technology, but may not know the ins and outs of AI.

Not if I’ve got something to say about it.

I have two goals in writing this article:

  1. Teach you what AI can actually do
  2. Show you how to avoid being taken in by snake oil AI


Hopefully, by the end of this, you’ll understand how to recognize AI that is legitimate, and have the tools to call out the fakes when you see them. 

When we say “AI,” what are we talking about? 

According to Randall Reilly’s Chief Technology Officer Nick Reid in one of our Digging Deeper episodes, the average person right now hears “AI” and immediately thinks it’s a mix of the Matrix & Skynet. Unfortunately (fortunately?), the reality is it’s kind of boring when you break it down.

Most of the commercially available AI can be dumped into one of two buckets: 

  1. Predictive AI
  2. Generative AI


Predictive AI

Predictive AI is heavy-duty stuff. Think of a budget calculator. We’ve all used one of these. Budget calculators take into account different variables to give you a predicted final amount. Predictive AI does the same thing, but on an exponentially larger scale. 

This kind of program is very math-intensive and hard to fake. Typically trained on massive amounts of proprietary data using closely guarded algorithms, it is a more bespoke form of AI, built to predict outcomes for specific scenarios.

Ideally, this kind of program will take available data to predict things like:

  • Whether a customer will purchase a certain product
  • How likely a candidate is to accept an offer
  • How well a candidate will perform at a company
  • The average lifetime value of a hire
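To make that concrete, here is a minimal sketch of the idea behind a predictive model. The features, weights, and numbers are entirely made up for illustration; a real predictive AI learns its weights from large proprietary datasets rather than having them hand-picked:

```python
import math

# Toy illustration of predictive AI: a logistic model scoring how likely a
# candidate is to accept an offer. All features and weights here are
# hypothetical -- a real model would learn them from training data.

def accept_probability(salary_ratio, commute_minutes, days_since_apply):
    # Hand-picked (hypothetical) weights standing in for a trained model.
    score = (1.0                              # baseline willingness
             + 3.0 * (salary_ratio - 1.0)     # offer vs. expected salary
             - 0.04 * commute_minutes         # longer commutes hurt
             - 0.05 * days_since_apply)       # slow processes lose candidates
    return 1.0 / (1.0 + math.exp(-score))     # squash to a 0-1 probability

# A competitive offer, short commute, fast turnaround:
print(round(accept_probability(1.10, 15, 7), 2))
# A low offer, long commute, slow process:
print(round(accept_probability(0.90, 60, 30), 2))
```

The point is only the shape of the thing: many variables go in, one probability comes out, and the quality of the prediction depends entirely on the data and training behind the weights.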

Generative AI

Generative AI, on the other hand, is trained on vast amounts of publicly available text from the internet to generate content in response to a text prompt.

In other words: you type in what you want and the AI spits out a response. 
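To see that core idea on a tiny scale, here is a toy “next-word” generator. It is nothing like a real LLM in scale or sophistication, and the training text is invented for illustration, but the loop is the same: predict the next word from what came before, then repeat.

```python
import random

# A toy "generative" model: a table of which word follows which, built
# from a few made-up sentences. Real LLMs do something vastly more
# sophisticated over billions of words, but the predict-then-repeat
# structure is the same.

corpus = ("hiring great drivers takes great recruiting and "
          "great recruiting takes great data").split()

# Map each word to the list of words that follow it in the corpus.
next_words = {}
for current, following in zip(corpus, corpus[1:]):
    next_words.setdefault(current, []).append(following)

def generate(prompt, length=6, seed=0):
    random.seed(seed)                  # fixed seed so the demo is repeatable
    word, output = prompt, [prompt]
    for _ in range(length):
        if word not in next_words:     # dead end: no word ever followed this one
            break
        word = random.choice(next_words[word])
        output.append(word)
    return " ".join(output)

print(generate("great"))
```

The prompt goes in, a plausible-sounding string comes out. Scale the table up from twelve words to most of the internet and you have the intuition behind an LLM.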


Building this kind of LLM (Large Language Model) from the ground up is a massive cost that only tech juggernauts can afford, but there are plenty of publicly available LLMs.

These LLMs are available to the public because the more people use them, the more they learn, so it’s a win-win for both user and developer.

While predictive AI is more math-based, generative AI is more text-based. Because the “libraries” of text data that this software is built on are widely available, this is typically the most common type of AI we encounter. 

What to Look Out For

Most people who are pushing AI are rolling out white-labeled versions of generative AI, taking advantage of Large Language Models (LLMs) like ChatGPT.

LLMs are a specific type of AI model that’s been trained on absurd amounts of text to learn reading comprehension and provide human-like responses. 

In other words, it “talks” back to you. 

Some companies may be hawking software with “AI capabilities” that is just a repackaged LLM, or may be nothing but smoke and mirrors.

How can AI be faked?

A good example is what Amazon tried to do with their Amazon Fresh store. 

If you’re not familiar, Amazon Fresh was an idea where customers walk in, scan their Amazon app, pick up their items, and just walk out of the store. The pitch was that dozens of cameras and “AI” would then tally up your total and charge your Amazon account.

In reality, the “Artificial Intelligence” they were touting was actually “Human Intelligence” … hundreds of individuals watching video recordings of customers to account for all the items taken. 

Think The Wizard of Oz. These companies want you to “pay no attention to that man behind the curtain.” But we know better. 

How to Weed Out the Phonies

Alright, so now you’re on high alert. You’ve got your eyes peeled for fishy AI. But how can you start to sort the wheat from the chaff?

You have to do a little bit more homework.

Play around with the AI that’s currently available to get a feel for what the current version of AI can do. 

Set aside some time to test out Google Gemini or OpenAI’s Sora and ChatGPT to learn their capabilities and limitations.

Talk to more experts. Just understanding the basics better will help you spot when something sounds off. 

If it’s too good to be true, it probably is.

“Even if you don’t know what you’re really talking about, you have to kick the tires.” 

Predictive AI programs in particular aren’t off-the-shelf, pick-up-and-use products. If you partner with a vendor to build a model for you, make sure you’re involved in the process of training it.

If they’re throwing a bunch of acronyms at you, ask them to explain them. Some basic acronyms to look out for are:

  • LLM (Large Language Model)
  • NLP (Natural Language Processing)
  • NLU (Natural Language Understanding)
  • ML (Machine Learning)
  • DL (Deep Learning)


If you need help getting your interrogation started, here is a list of questions to test the knowledge of the next person who tries to sell you AI.

What to ask, and what to listen for:

  • “What LLM are you using?” Listen for a model name and version (e.g., GPT-3, GPT-4, BERT, RoBERTa). If it’s a proprietary model, ask for details on its development and unique features.
  • “What framework are you using?” Listen for which neural network framework they’re using (e.g., PyTorch from Meta, TensorFlow from Google).
  • “How long will it take to train?” Listen for how many “epochs” the training will run. More epochs typically lead to better performance, but longer training times. An “epoch” is one complete pass of the data set through the algorithm.
  • “What data was your AI trained on?” Listen for information about the dataset used to train the model (e.g., volume, diversity, sources).
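If the “epoch” answer still sounds abstract, here is a minimal sketch of a training loop, on made-up numbers, purely to show what “one complete pass of the data set” looks like in code:

```python
# What an "epoch" looks like in practice: one full pass over the training
# data. This fits a single weight with gradient descent on invented
# (input, target) pairs just to show the loop structure vendors describe.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up examples, y is roughly 2x
weight = 0.0
learning_rate = 0.05

for epoch in range(20):                # 20 epochs = 20 passes over the data
    for x, target in data:             # one epoch: visit every example once
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x   # nudge the weight to cut the error

print(round(weight, 2))                # settles near 2.0, the true slope
```

So when a vendor quotes you epochs, they are quoting how many of those outer loops the training will run, which is why more epochs means more compute time.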

These kinds of questions will force them to show you whether THEY really know what they’re talking about or whether they’re just riding the bandwagon with AI buzzwords.

The AI industry right now is like the wild west. You have to keep your head on a swivel and your eyes peeled, but more than anything you have to know what to look out for. 

Hopefully this helps.