Humans have always relied on as much information as possible before making a decision. However, in the past couple of decades the internet has dramatically changed the volume of information available; the estimated 2.5 quintillion bytes of data produced every day are an overwhelming workload for the human brain.
Gaining useful insights from complex datasets is a far from trivial task. While current website tracking methods generate vast amounts of invaluable user behaviour information, in many cases that information is not taken into account and decisions are made intuitively. Big datasets can be partially explored by cherry-picking a few parameters based on initial hypotheses, but such an approach might provide half-truths, or neglect unintuitive facts that would be evident in the remaining unexplored part of the data.
This fact, combined with the aforementioned data explosion, led to the development of various techniques that utilise the processing capabilities of computers in order to extract usable information from data. Those methods fall under the broad term of data mining, a buzzword used whenever large-scale data processing or analysis is performed. In reality, data mining describes a multidisciplinary subfield of computer science which, among other fields, involves statistics, database management, data visualisation and machine learning: another term that everyone is talking about these days.
What is machine learning?
While the term machine learning sounds a bit overwhelming, it describes a rather simple concept. The goal is to feed available data to a machine (i.e. a computer), which will use it to learn without being explicitly programmed. This is usually done by means of a generic model which adapts to the input data (or training data) and can then be used to solve a pre-specified problem. Depending on the application and its objective, a machine learning task can be classified as supervised or unsupervised learning.
Supervised machine learning tasks
Supervised learning models involve labelled data, which means that the data at hand have a specified parameter of interest that we would like to explore. This parameter is usually called a target or label. For example, a dataset containing web session information of an ecommerce website could have data points labelled as “True” or “False” depending on whether a user session included a conversion (e.g. a purchase or a newsletter signup).
A typical supervised learning task would be to use historical data in order to predict the value of the target parameter of new unlabelled data points. This can be applied in various situations such as email spam detection, handwriting recognition or credit scoring.
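To make the idea concrete, here is a minimal sketch of a supervised classifier on toy, made-up session data. The features (pages viewed, minutes on site), the labels and the k-nearest-neighbours approach are all illustrative assumptions, not a description of any particular production system:

```python
from math import dist

# Toy training data: each session is (pages_viewed, minutes_on_site),
# labelled True if the session ended in a conversion.
# All values are invented for illustration.
training_sessions = [
    ((2, 1.0), False),
    ((3, 2.5), False),
    ((1, 0.5), False),
    ((12, 9.0), True),
    ((15, 11.0), True),
    ((10, 8.5), True),
]

def predict_conversion(session, k=3):
    """Label a new, unlabelled session by majority vote of its
    k nearest labelled neighbours (a basic k-NN classifier)."""
    neighbours = sorted(training_sessions, key=lambda t: dist(t[0], session))
    votes = [label for _, label in neighbours[:k]]
    return votes.count(True) > k // 2

print(predict_conversion((11, 9.5)))  # an engaged session -> True
print(predict_conversion((2, 0.8)))   # a brief visit -> False
```

The same pattern — learn from labelled historical examples, then predict the label of new points — underlies spam detection and credit scoring as well; real systems simply use richer features and more sophisticated models.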
Unsupervised machine learning tasks
Unsupervised learning models are used with unlabelled data, which means that the algorithm has to discover a hidden structure in the provided dataset without focusing on a specific target.
This usually involves grouping data points so that the points within one group are more similar to each other than to those in other groups; this task is known as clustering. Since the similarity of the data points will be evaluated based on the parameters included in the dataset, these should be carefully chosen and be relevant to the objective of the problem.
Typical unsupervised applications include, among others, image segmentation, social network analysis and market segmentation.
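A minimal clustering sketch can show the idea. The example below runs a simplified one-dimensional k-means on hypothetical annual spend figures; the numbers and the single "spend" feature are assumptions for illustration, and real segmentation would use several features at once:

```python
from statistics import mean

def k_means_1d(values, k=2, iterations=20):
    """A minimal 1-D k-means: repeatedly assign each point to its
    nearest centroid, then move each centroid to its group's mean."""
    # Spread the initial centroids across the range of the data.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        centroids = [mean(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

# Hypothetical annual spend per customer (GBP) with two obvious tiers.
spend = [40, 55, 35, 60, 50, 820, 900, 760, 880]
low, high = k_means_1d(spend, k=2)
print(sorted(low))   # the low-spend segment
print(sorted(high))  # the high-spend segment
```

No labels were provided; the algorithm discovered the two spending tiers purely from the structure of the data, which is exactly what distinguishes unsupervised from supervised learning.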
How can machine learning be used?
Optimisation of marketing strategy heavily relies on exploring all information available prior to making a crucial decision. Exploring your website’s tracking and transactional data enables you to understand your customers and provide personalised marketing campaigns and user experience.
For example, an ecommerce website could use a supervised machine learning algorithm to predict which of its customers are at risk of churning in the near future, based on their previous browsing and/or purchasing behaviour. These customers differ significantly from visitors who, according to their behaviour, are likely to convert soon. Therefore, a different data-driven strategy could be implemented for each type of customer, reducing campaign costs and improving acquisition and retention rates.
Cluster analysis is another popular tool among market researchers for partitioning the general population of consumers into market segments. Grouping your customers in a clever way based on a set of properties (e.g. customer value or product preference) can result in more personalised marketing campaigns, lower costs and increased revenue.
Stages of the data mining process
It must be stressed that data mining is an explorative and iterative process; there is no cookbook that guarantees successful results. Formulating the problem, the questions and how the results are going to be used in advance is essential; this way, the most relevant data and the most appropriate machine learning algorithms for each application can be selected. However, as new insight might be discovered during the analysis, those initial questions might be modified.
Another task to be considered is pre-processing the data. This is often needed before the data can actually be analysed and might include aggregating data or extracting new attributes from existing ones based on domain knowledge. Pre-processed data can eventually be used to perform the actual analysis task with relevant machine learning algorithms.
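As a small sketch of what such pre-processing might look like, the snippet below aggregates hypothetical raw session rows into per-customer attributes. The row layout, field names and derived attributes are all assumptions chosen for illustration:

```python
from collections import defaultdict

# Hypothetical raw tracking rows: (customer_id, pages_viewed, purchased)
sessions = [
    ("c1", 3, False),
    ("c1", 8, True),
    ("c2", 1, False),
    ("c2", 2, False),
    ("c1", 5, False),
]

def aggregate_per_customer(rows):
    """Derive per-customer attributes from raw session rows: total
    sessions, average pages per session, and conversion rate."""
    grouped = defaultdict(list)
    for customer, pages, purchased in rows:
        grouped[customer].append((pages, purchased))
    features = {}
    for customer, visits in grouped.items():
        n = len(visits)
        features[customer] = {
            "sessions": n,
            "avg_pages": sum(p for p, _ in visits) / n,
            "conversion_rate": sum(b for _, b in visits) / n,
        }
    return features

print(aggregate_per_customer(sessions)["c1"])
```

Derived attributes like these, rather than the raw rows, are typically what gets fed into the learning algorithm in the analysis step.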
Finally, evaluating the output of the analysis depends on the algorithms that were used. There is often a trade-off between ease of model interpretation and predictive accuracy: black-box approaches (i.e. those that provide only the final outcome as an output) fail to deliver an understanding of the underlying mechanism behind the findings. For example, if the objective is to understand the behaviour pattern of particular groups of customers in order to develop marketing strategies, a black-box predictive algorithm would most likely be useless.
The latest advances in machine learning involve the development of cloud machine learning platforms that offer a complete set of web services, including data storage, model building and cloud computing. Amazon Web Services (AWS) and Microsoft Azure are only a few of the options currently available, while Google’s Cloud Machine Learning is currently in limited preview but will soon be launched for public access.
RocketMill Forecaster: Try it out!
If you are interested in forecasting your website’s marketing performance, you can use Forecaster, a free tool developed by RocketMill that uses machine learning algorithms to crunch your Google Analytics data and provide accurate marketing forecasts in minutes.
Do you apply machine learning techniques to gain insight from your website’s data? Do you use this information to plan your marketing campaigns? Let us know @RocketMill.