We are constantly looking to answer the pervasive question: "Did it work?" And our guidepost is always, "What does success look like?"
In the consumer behavior/marketing world, answering these questions is no easy task. The key is to do your planning and homework upfront, so that you clearly define measurable ways to answer these questions with confidence.
So how do you do this? You first need to adopt a culture of "true experimentation." While it sounds logical, it's not as easy as it sounds. The primary tenet here is to have a "probabilistically equivalent" sample: one where "there are no systematic differences between the groups in their characteristics or in how they would respond to [the] ads." This, friends, is the key. Without probabilistic equivalency, you introduce additional variables that confound the causal effect you're attempting to explain.
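In practice, probabilistic equivalency is typically achieved through simple random assignment. As a minimal sketch (using hypothetical customer data, not any real campaign), shuffling the pool before splitting it means neither group systematically differs on any characteristic, observed or not; a quick balance check on a known covariate can confirm the randomization behaved as expected:

```python
import random
import statistics

random.seed(42)

# Hypothetical customer pool: each customer carries a prior-spend covariate.
customers = [{"id": i, "prior_spend": random.gauss(100, 20)} for i in range(10_000)]

# Random assignment is what makes the groups probabilistically equivalent:
# no characteristic, measured or unmeasured, systematically differs between them.
random.shuffle(customers)
test = customers[: len(customers) // 2]
control = customers[len(customers) // 2 :]

test_mean = statistics.mean(c["prior_spend"] for c in test)
control_mean = statistics.mean(c["prior_spend"] for c in control)

# Balance check: with true randomization the covariate means should be close;
# a large gap would signal a broken assignment mechanism.
print(f"test mean prior spend:    {test_mean:.2f}")
print(f"control mean prior spend: {control_mean:.2f}")
print(f"difference:               {abs(test_mean - control_mean):.2f}")
```

Any difference in outcomes between groups built this way can then be attributed to the ad exposure itself, rather than to pre-existing differences between the people in each group.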
This is a favorite case study from Northwestern University/Kellogg School of Management that speaks to the issue of causality in consumer behavior as a result of Facebook advertising: A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook
Dr. Florian Zettelmeyer, a renowned data scientist and Professor of Marketing at Northwestern University, is one of the researchers on this project; we had the privilege of attending a Kellogg executive seminar that he led. It's an eye-opening analysis, with concepts to keep front-and-center whenever you're evaluating data and attempting to explain the often-elusive "why." While most of us will never become data scientists in our spare time, we can use this framework to ask the right questions and to look "under the hood" at the source of the data that comes our way.
The takeaway? If you don't have true experimentation with probabilistic equivalency, you don't have much.