False Positives in PPC and what's "Machine Learning" got to do with it?
Ad Tech tools that claim to be using "ML & AI" might actually set their users up to become victims of false positives. Check out our webinar #5 to find out! 📺
In our fifth webinar session and first-ever podcast episode (we know some of you have been waiting for this for a long time), marketer extraordinaire Jason Pittock sits down with ClickGUARD’s co-founder and CTO, Miloš Đekić. Tune in to learn what false positives are and the role that machine learning plays in this phenomenon.
Jason and Miloš explain in detail how Ad Tech tools that claim to use ML/AI to block fraudulent traffic can be detrimental to the performance of an ad campaign. What’s more, these tools might actually set their users up to become victims of false positives.
What are False Positives?
If you’re to avoid falling victim to something, it’s important to understand the threat. So what exactly are false positives?
Miloš sheds light on the matter, starting with the top three types of false positives that PPC advertisers should be aware of:
- False conversions due to faulty or superficial attribution setup
Think of ad vendors such as Facebook or Google. If you’ve ever used any of these platforms to promote your product or service and followed up on the data, you’re probably well aware of how eager they are to show that a button click is advertising success. But, Miloš points out, let’s not forget that bots can click buttons! What we should count as a successful interaction is a subscription or a sale, as opposed to a click with no indication of who’s behind it.
- False interactions with ads and websites that contribute to faulty audience models
Ad targeting has been elevated to an art of sorts in recent years. One thing that giant ad vendors have in common is that they rely heavily on behavioral data to build and refine targeted audiences. Miloš’ example shows how, in Facebook Ads, you can target a sub-audience you’ve pulled into your funnel when they interact with your pricing page or feature comparison page. But these interactions can be false, producing polluted audience data that Facebook will, of course, charge you for nonetheless, wasting your ad budget.
- False positives in identifying wasteful ad traffic
Last but not least, Miloš warns that ad fraud prevention itself can go awry. The usual approach to combating wasteful ad traffic is to exclude certain sources of ad clicks. Here, false positives can lead to the exclusion of legitimate, high-quality click sources, which ends up damaging your ROAS because you stop serving ads to an audience with intent to convert.
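That third failure mode is easy to sketch. Below is a minimal, hypothetical example (the IPs, click counts, and threshold are invented for illustration, not from the webinar) of a naive clicks-per-IP exclusion rule that blocks a legitimate shared office IP right alongside an actual bot:

```python
# Hypothetical sketch: a naive "block any IP with many clicks" rule.
# Shared/corporate IPs (many real users behind one NAT) trip the same
# threshold as a bot -- a false positive that excludes converting traffic.

clicks_per_ip = {
    "203.0.113.7": 40,   # bot hammering the ad
    "198.51.100.9": 35,  # large office NAT: dozens of real users
    "192.0.2.15": 2,     # ordinary visitor
}

THRESHOLD = 20  # naive cutoff: any IP above this is treated as "fraud"

blocked = [ip for ip, clicks in clicks_per_ip.items() if clicks > THRESHOLD]
print(blocked)  # the office NAT gets blocked alongside the bot
```

A rule like this "works" in the sense that it catches the bot, but without behavioral context it cannot tell a bot apart from a high-intent audience sharing one network, which is exactly the ROAS damage described above.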
Data Bias and How it Influences Machine Learning
In answering Jason’s question about how machine learning affects false positives, Miloš advises taking a step back and looking at the bigger picture. In this case, that translates into seeing what’s behind ML and AI.
Miloš brings up a 2019 Forbes article that coined the phrase “Data Isn’t Truth” and goes on to explain that data science (of which ML is an instrument) is no different from any other science.
Simply put, this is how data science works: it builds data models that seek to represent reality, then runs experiments against them to gather evidence that confirms or denies a given proposition.
How about “data bias”? Well, just as in any other scientific experiment, varying environmental conditions influence the results and often stand in the way of accurate conclusions.
Can you think of any sources of bias in data science? If you can’t, don’t fret. Miloš very thoughtfully points them all out for us: the way datasets are designed, the way data is collected, the way it is used, and the way it is deployed.
Before hearing Jason and Miloš talk, you could (and in fact, many people would) have fallen into the trap of thinking that just because machine learning is behind certain conclusions about a dataset, accuracy is guaranteed.
This is not the case.
To further debunk this myth, ClickGUARD’s co-founder and CTO asks the viewer one important question: “who designs the dataset, the core, the basis of what ML uses?” Humans. Humans, with their experience and their biases, are also in charge of attribution models, of how data is collected, and of designing the ML algorithms applied to that data. The bias in data, therefore, comes from bias in humans.
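One way to see why “ML was involved” doesn’t guarantee accuracy is the base-rate problem: when the thing a classifier hunts for is rare, even a seemingly accurate model flags mostly innocents. A back-of-the-envelope sketch (the 2% fraud rate and the 95%/3% figures are assumptions for illustration, not numbers from the webinar):

```python
# Base-rate sketch: out of 10,000 clicks, only 2% are actually fraudulent.
total_clicks = 10_000
actual_fraud = int(total_clicks * 0.02)       # 200 fraudulent clicks
actual_legit = total_clicks - actual_fraud    # 9,800 legitimate clicks

# A classifier that catches 95% of fraud but wrongly flags 3% of legit clicks.
true_positives = int(actual_fraud * 0.95)     # fraud correctly flagged
false_positives = int(actual_legit * 0.03)    # legit clicks wrongly flagged

flagged = true_positives + false_positives
precision = true_positives / flagged
print(f"{false_positives} of {flagged} flagged clicks are legitimate "
      f"(precision {precision:.0%})")
```

With these assumed numbers, more than half of everything the model flags is legitimate traffic: the false positives (294) outnumber the real catches (190), no matter how sophisticated the algorithm behind the flagging is.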
How Machine Learning Affects False Positives
So far, we’ve learned what false positives are and why machine learning doesn’t guarantee accuracy in interpreting a dataset. Jason sets our next goal: we’re about to learn how Machine Learning can affect False Positives and what outcomes these can have on people’s ad campaigns.
Without a doubt, Machine Learning is awesome, and Miloš gives it its due credit. Having a powerful machine run simulations on data to gain knowledge can be tremendously beneficial!
But, Miloš stresses, in the context of using ML to optimize advertising efforts, all the factors listed above may lead to misinterpreting data and thus to false conclusions about which campaign optimizations are necessary. These can mean missed opportunities and lost business. More than simply employing machine learning, it’s about how it’s used and about the quality of the data that can be, and is being, collected.
Can Ad Tech That Claims To Be Using Machine Learning Do More Bad Than Good?
Nudged on by Jason, Miloš paints a very vivid picture of how Ad Tech tools that claim to use machine learning or AI can actually harm a campaign's performance through false positives.
When you start researching ad tech that claims to use AI or ML to optimize, protect, and boost advertising campaigns, you will often find a “one-size-fits-all” kind of solution with a bit of a “sit-back-and-relax” approach.
Sound familiar? It’s the same sales pitch Google Ads uses for its “Smart” campaigns, and the same “plug in your credit card and go” mode of operation. Both promise to do everything for you. Take Miloš’ challenge and ask yourself: when has “do nothing” been a productive strategy for anyone, in business or in life?
Before you fall victim to one of these too-good-to-be-true tools, ask yourself: who’s behind the AI system running your campaign if you’re not doing any of the work? Whose bias have you put in charge of your advertising budget and, essentially, the success of your business?
If that wasn’t enough, there’s a second dimension to this problem, and that, according to Miloš, is the lack of paper trail.
A system that “does the work for you” requires you to put blind faith into it. Instead of providing a paper trail for every optimization activity they perform on your advertising campaign, these systems, which ClickGUARD’s CTO refers to as “black boxes,” issue vague reports of supposed results, achieved in ways that you, the user, cannot verify.
What that means for your finances is that a “black box” system might be costing you big bucks by killing your advertising performance, or simply by preventing your campaigns from generating better ROI, and you wouldn’t even be able to tell how.
To conclude, Miloš wholeheartedly recommends using ML to boost ad campaigns, just as long as you don’t let someone else take the driver’s seat. Stay on top of the paper trail and make modifications to the system based on your own experience and intimate knowledge of your campaigns. Otherwise, you can end up losing big!
AI and ML - Buzzwords To Deceive Users or Legit Claims?
When talking about ad tech that uses machine learning, Miloš briefly mentions the words “transparency” and “control,” which trigger Jason’s next questions: could it be that ad tech vendors are deliberately using buzzwords like AI and ML to deceive? Or do they genuinely intend to enable their users to run profitable ad campaigns?
In Miloš’s experience, there’s no one right answer. There are solutions out there that utilize ML the right way, using proper tech and making no false claims. There are, however, more than a few bad apples on the market that can’t be overlooked. One example Miloš mentions (without naming names) is a popular solution in the ad tech space that claims to use AI and to work wonders. But when it comes to proving their worth, they perform optimizations on ad campaigns without taking into account any visitor behavior data. They simply do not analyze how people behave on your website after clicking an ad and landing there, which speaks volumes about the quality of the data they use.
A second real-life example of just how (un)transparent ad tech really is comes to Miloš from people migrating to ClickGUARD from other platforms. More often than not, these users ask to see how many clicks they got from their competitors, an expectation created by ad tech services that report “Clicks from Competitors.” It’s madness to think that anyone can actually track every individual competitor and prove that a click originated from one of their devices. Still, services get away with it by not providing a paper trail or explaining how they determined this. They’ll say AI did it, and they’re out of the woods.
Miloš stresses that ClickGUARD has made it their mission to educate users that chasing such numbers isn’t the goal. Instead, we get them to focus on what every marketer who deserves their paycheck knows is truly important: cost per acquisition, ROAS, and sales.
Are You a Victim of False Positives?
I guess when you work with ClickGUARD, people have a lot of questions for you. So it’s no wonder that Jason got asked, a few days before this interview, how they would know if they were a victim of false positives.
Jason genuinely advised them to use ClickGUARD (and gain unmatched insight into their Google Ads data while at it), but the question left him wondering: what are Miloš’s thoughts on observing data and knowing whether you are a victim of false positives?
Miloš’s response will most likely not surprise you if you’ve been reading carefully: you can’t know anything without data. And using a system that only shows you supposed results (with no paper trail), without letting you do a deep dive into the data, won’t get you any closer to even spotting false positives.
With that being said, he goes one step further and makes the following claim: you most certainly are a victim of false positives. They are unavoidable, regardless of who controls the optimization process. What makes the difference is having the knowledge, the data, and the transparency to do something about it.
One final piece of advice Miloš had for our viewers: don’t be afraid to experiment, always ask for a paper trail, and strive to understand. And if you need a reliable partner, one of ClickGUARD’s crucial pitches is that we’re all about transparency: in the data we collect, in how we process it, and in the optimization actions we perform. So you can see what’s going on, stay in control, and make sure you get the most value and the fewest false positives.