The development of more sophisticated analytics practices and tools has increased our ability to understand customer churn, value, channel performance, and other behaviors. Additionally, the greater accessibility of data, data processing, and visualization has made smart, data-focused marketers that much more powerful. As a result, many of today’s marketers are investing in and leveraging data to answer questions like “What happened?” and “Why did it happen?” Using basic analytics and BI tools, it’s easy for businesses to answer these questions and make decisions based on their findings.

However, the companies that are seeing the best results in their businesses are going one step further and answering the question, “What will happen?” Leveraging predictive analytics empowers marketers and businesses to predict what will happen and, as a result, puts them in a great position to make better decisions, improve targeting, and realize an improved ROI.

For some, “predictive analytics” might sound complicated. But, it’s not when you understand the basics and have software to support the build and implementation phases.

The Basics

Predictive analytics is built on predictive modeling. A predictive model attempts to describe the relationship between a dependent variable and one or more independent variables. The dependent variable is the response variable and is usually a customer attribute that you would like to predict, such as response rate, yearly revenue, or likelihood to buy. The independent variables are the predictors and are used to make predictions of the response. Typical attributes used as predictors are:

  • Customer transaction history: Previous stays and stay lengths, historical responses, packages purchased
  • Customer demographics: Age, income, zip code, gender
  • Appended data: Customer psychographics, lifestyles, attitudes
  • Marketing data: Product behavior, historical campaign data, offer/discount data
  • Additional: Transactional, customer, or household-level data

All of this data is used to understand the relationship between the response and the predictors. Finding the model and determining how good it is are what the predictive modeling process is all about. This process often requires several iterations, and the quality of the model depends on an understanding of the business problem and the data, the modeling algorithms and their parameters, the software, and implementation practices.

Customer data is used to quantify customer attributes and behavior. Predictive models can be built using those attributes to better learn customer behavior patterns.
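
To make this concrete, here is a minimal sketch of fitting a simple predictive model in Python with scikit-learn. It is not a depiction of any particular vendor’s tool, and the file name and columns (“age”, “income”, “prior_purchases”, “responded”) are hypothetical placeholders for your own attributes.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")              # hypothetical customer-level file
X = df[["age", "income", "prior_purchases"]]   # independent variables (predictors)
y = df["responded"]                            # dependent variable (the response to predict)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predicted likelihood to respond for each held-out customer
scores = model.predict_proba(X_test)[:, 1]
print(scores[:5])
```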

Why Would You Want to Use Predictive Analytics?

Businesses look to predictive analytics and modeling to predict customer behavior so they can make more efficient and profitable business and marketing decisions.

Here are some examples of what marketing teams use this approach to predict:

  • Response to a campaign for customers or prospects
  • Likelihood of a second stay following the first for a hotel chain
  • Cross-sell opportunities based on product or category preferences
  • Number of purchases per year
  • Offer amount that is most likely to entice response but still be profitable
  • Revenue or profitability over the year

And, here’s a client example:

One of our clients, PNT Marketing Services, used predictive modeling to improve the effectiveness of email campaigns that they executed for their clients—digital marketing agencies and education firms.

PNT Marketing Services began by constructing a data warehouse to track all contacts and a 12-month history of emails sent. The data they chose to record for each contact included email clicks, opens, click-throughs, page visits, conversion data, and call center information. RFM-type metrics such as the number of contacts made in the last year, time since the last contact, and percent of emails clicked in the last year were also collected. All of this data was then migrated to LityxIQ software to perform modeling.

Separate models were built for different subsets of their clients’ data and for different prospect behaviors. The fast-changing nature of the clients’ email programs and marketing conditions required the models to be refreshed regularly to stay relevant. The models were designed in such a way as to empower the PNT client services team to test results from different algorithms and control more technical settings without any complex coding.

PNT’s clients run email campaigns on a continuous basis, which results in a nonstop flow of new data for the models. Because of this, PNT updated its scoring datasets weekly. PNT set up a connection to the raw data so the datasets were automatically refreshed by LityxIQ as they changed. Once fresh data was detected, the models would run, and the new scores were sent to client systems for integration into their email campaign tools for ongoing marketing efforts. This resulted in significant increases in key email metrics, including:

  • 116% increase in click-through rate and 57% increase in click-to-lead rate
  • Fast build and deployment of over a dozen predictive models by non-statisticians
  • Weekly database scoring of over 5 million records to support email campaigns

How does one begin to use predictive modeling to find success like this? Start by working through the stages of model development.

The Six Stages of Predictive Model Development

1) Business Understanding

This first phase focuses on understanding the goals for the business. Are you trying to:

  • Retain more customers?
  • Grow the customer base?
  • Make profitable customers more loyal?
  • Increase response rates to campaigns/promotions?
  • Reduce churn or increase loyalty?

Once you’ve defined a goal, you need to define how success will be measured. Are you working toward:

  • Profit/revenue increase?
  • Increased market share?
  • More customers?
  • Higher retention?

2) Data Understanding

Based on the business objectives and measurements you’ve defined, you now need to determine what data is available for modeling. This includes:

  • Attributes (both response and predictor variables)
  • Timeframe of available data
  • Quality of data available
  • Quantity of data available

Different objectives require unique views of the data, different attributes, and specific modeling techniques. Once you’ve determined the data that’s available, you must retrieve it and assemble it into a modeling dataset. This typically involves three things:

  • Pulling from internal databases/tables, external data, merges, etc.
  • Converting to a row/column flat file format for use in the modeling tool
  • Sampling (this reduces time and computational effort with little accuracy tradeoff; see the sketch after this list)
    • In a simple random sample, all records are equally likely to be sampled
    • In a stratified sample, some categories of data are oversampled, e.g. a higher percentage of responders than non-responders
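
As a rough illustration of the two sampling approaches, here is a minimal pandas sketch. The file name, the “responded” column, and the sampling fractions are hypothetical.

```python
import pandas as pd

df = pd.read_csv("contacts.csv")   # hypothetical contact-level file

# Simple random sample: every record is equally likely to be selected
random_sample = df.sample(frac=0.10, random_state=1)

# Stratified sample: oversample the rarer responders relative to non-responders
responders = df[df["responded"] == 1].sample(frac=0.50, random_state=1)
non_responders = df[df["responded"] == 0].sample(frac=0.05, random_state=1)
stratified_sample = pd.concat([responders, non_responders])
```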

Data quality is one of the biggest challenges we see. Bad data in, bad data out. It’s critical that you use clean data. To determine the quality of your data, look at:

  • Comprehensiveness: Are there missing values in the attributes you’re going to use? Are there missing attributes that you’ll have to append from another source?
  • Outliers: Are there data errors or significant outliers that you want to clean out?
  • Accuracy: How accurate is the data you’re using?

Once the data is collected, explore it through interactive, iterative data analysis. During the exploration, look for interesting patterns and correlations in order to form an initial view of which attributes will be important for making predictions.
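
Here is a minimal sketch of these quality checks and the initial exploration, assuming a pandas DataFrame loaded from a hypothetical customer file with a numeric “revenue” response column.

```python
import pandas as pd

df = pd.read_csv("customers.csv")   # hypothetical customer-level file

# Comprehensiveness: what share of each attribute is missing?
print(df.isna().mean().sort_values(ascending=False))

# Outliers: flag numeric values more than 3 standard deviations from the mean
numeric = df.select_dtypes("number")
z_scores = (numeric - numeric.mean()) / numeric.std()
print((z_scores.abs() > 3).sum())

# Exploration: which attributes correlate most with the response ("revenue" here)?
print(numeric.corr()["revenue"].sort_values(ascending=False))
```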

3) Data Preparation

Before you work with your data in the context of the models, you’ll want to prepare it to ensure your results are as effective and accurate as possible. Skipping this step leaves you modeling on a large volume of unreliable data. There are a few ways you can properly prepare your data:

  • Variable Transformations: Transforming raw attributes into forms better suited to modeling.
  • Data Cleaning: This involves removing, keeping, or imputing any missing values in your data. It also means detecting and appropriately handling outliers and bad data.
  • Data Reduction: In many instances, the number of attributes available can be enormous, and modeling effort grows quickly with each additional variable. Use techniques to reduce the number of attributes used for modeling: remove redundant or highly correlated variables, or use factor analysis or principal components analysis to find a small number of transformations of the original variables that contain most of the important information (see the sketch after this list).
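
Here is a minimal sketch of the cleaning and reduction steps using scikit-learn. The file name is hypothetical, and in practice you would exclude the response variable before reducing the predictors.

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA

df = pd.read_csv("customers.csv")        # hypothetical customer-level file
predictors = df.select_dtypes("number")  # numeric predictors (exclude the response first in practice)

# Data cleaning: impute missing values with the column median
imputed = SimpleImputer(strategy="median").fit_transform(predictors)

# Data reduction: keep the principal components explaining 95% of the variance
reduced = PCA(n_components=0.95).fit_transform(imputed)
print(predictors.shape[1], "original attributes ->", reduced.shape[1], "components")
```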

Data preparation is a key component of predictive modeling. While some of these steps can have drawbacks, such as reducing or removing important relationships in the data, the benefits usually outweigh the risks. Data reduction, in particular, delivers real performance savings and is essential to the process. Some aspects of data preparation can also be integrated into the model building process, described below.

4) Building the Models

Now it’s time to actually build the models. This consists of a number of essential subtasks, which are reviewed below.

i) Select the modeling algorithm(s) to use. Some tools automate this process, though parts of it are still manual. There are many different considerations when selecting your modeling algorithm, including:

    • Availability in your tool
    • Suitability for the particular modeling problem
    • Analyst capabilities
    • Time
    • Ease of deployment
    • Need for interpretability

ii) Select the parameter settings for the algorithms. A single algorithm is often run more than once using different parameter settings. Parameters come in two kinds: those that define the mathematical form of the model itself, and those that control the details of how the model is built. In either case, changing a parameter setting usually results in a different model. (With some algorithms, even re-running with the same settings can produce a different model.)

Most tools have built-in default settings for parameters. Changing these may have little effect, but an advanced user can often find a combination of settings that produces more accurate models. Novice users can still experiment with the different settings, but they run the risk of wasting time if they’re unclear on what the settings mean.

iii) Run the models using the tool. This requires complex mathematical calculations and, as such, should always be a computer-automated task. You may end up building dozens of models depending on the number of chosen algorithms and the number of parameter settings for each. Because of this, running models can take an uncertain amount of time: some can take days to finish, while others can take months.

Running the models is a multistep process that results in a listing of the completed models, their mathematical formulations, and their performance.
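
As an illustration of steps i) through iii), here is a minimal scikit-learn sketch that tries two algorithms with several parameter settings each and lists each algorithm’s best settings and performance. The algorithms, parameter grids, and training data (reused from the earlier sketch) are illustrative assumptions, not a prescription.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Candidate algorithms and the parameter settings to try for each
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1, 10]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300],
                                              "max_depth": [3, 6, None]}),
]

results = []
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, scoring="roc_auc", cv=5)
    search.fit(X_train, y_train)   # training data from the earlier sketch
    results.append((type(estimator).__name__, search.best_params_, search.best_score_))

# A simple listing of the completed models and their performance
for name, params, score in results:
    print(f"{name}: AUC = {score:.3f} with {params}")
```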

5) Evaluation and Selection

Once you’ve run the models, you’ll have the mathematical formula that represents the results of each model. The next step is to evaluate these models and select a ‘winning’ model. 

First, estimate the performance of the models. In order to fully understand which model is the ‘winner,’ you must know how well each model works. To do this, there are certain accuracy measurements you’ll analyze, such as response lift and revenue gain. This is usually performed simultaneously with the model building step using a software tool like LityxIQ.

Estimating the performance of the models has two parts. The first is the performance measure itself; the second is the method used to estimate that measure for each model. Each has some important considerations.

Performance measure:

  • Lift: This computes the percent of responses found by the model at each decile of the modeling file and compares it to the percent of responses that would be returned by a random mailing (see the sketch after this list). Measuring lift on the same data used to build the model, however, rewards ‘overfitting’ and gives a misleading performance indicator. To guard against this, it’s best to use some of the modeling data to build the model and the rest to validate its performance. This provides an unbiased measure of performance, also referred to as ‘generalization performance.’
  • Percent accuracy
  • Profit
  • Mean squared error
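
Here is a minimal sketch of the decile lift calculation described above, reusing the held-out scores (“scores” and “y_test”) from the earlier model-fitting sketch.

```python
import pandas as pd

# "scores" and "y_test" come from the earlier model-fitting sketch
scored = pd.DataFrame({"score": scores, "responded": y_test.values})
scored["decile"] = pd.qcut(scored["score"].rank(method="first"), 10, labels=False)

overall_rate = scored["responded"].mean()
lift_by_decile = scored.groupby("decile")["responded"].mean() / overall_rate
print(lift_by_decile.sort_index(ascending=False))   # top deciles should sit well above 1.0
```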

Performance validation methods:

  • Holdout set: Randomly select a set of data to build the model, and set aside the remaining data to measure the model’s performance.
  • K-fold cross-validation (CV): This makes more efficient use of the training data (see the sketch after this list). Split the data into k folds; leave one fold out and build a model with the rest, then measure that model’s performance on the left-out fold. Repeat, leaving each fold out once, and average the performance across the iterations. The final model can then be trained on all of the data.
  • Bootstrapping
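
Here is a minimal sketch of k-fold cross-validation with scikit-learn, reusing the predictors and response (“X” and “y”) from the earlier sketches; the choice of algorithm and of five folds is an assumption for illustration.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each of the 5 folds is held out once while the model is trained on the rest;
# the reported score is the average over the held-out folds.
cv_scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, scoring="roc_auc", cv=5)
print(cv_scores.mean(), cv_scores.std())
```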

Based on the different criteria available, pick the best model for your data. Performance is the easiest criterion since it is numeric and easily ranks the models. There are many other things to consider, though, including the general sensibility of the model, ease of implementation, and how interpretable the model is. It’s also advisable to pick two or more good models and test each in a live setting or a market test.

6) Deployment

Once you’ve completed the previous steps, it’s time to actually deploy your model. To do this, export the model formula to a database or other program, score prospects or customers according to the formula, and take the appropriate marketing action based on those scores. This could mean sending or not sending a promotion, taking anti-churn action, etc. Then, measure how well the model performs in the real-world setting!
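
Here is a minimal sketch of the scoring step at deployment time, reusing the trained model from the earlier sketches; the prospect file, column names, and the 0.20 score cutoff are hypothetical.

```python
import pandas as pd

# "model" is the trained model from the earlier sketches
prospects = pd.read_csv("prospects.csv")
prospects["score"] = model.predict_proba(prospects[["age", "income", "prior_purchases"]])[:, 1]

# Mail only the prospects above the chosen cutoff, then measure actual response
prospects[prospects["score"] > 0.20].to_csv("mail_file.csv", index=False)
```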

Businesses have come to use dozens and in some cases thousands of models to make predictions for a variety of problems and situations. To scale to this level, models cannot be built manually, and automation is needed. This is the concept of analytics as a service. It gives organizations the capability to better understand their audience and opportunities without investing enormous resources into tools, people, and process.

 

