In an earlier blog post on Predictive Analysis (To predict or not to Predict), the basics of Predictive Analysis (PA) were explained. The three PA models (Descriptive, Predictive and Decision) were visualized, and it was discussed that all of these have to be taken sequentially in order to mature your company in the area of Analytics.

Recently SAP introduced the HANA Analysis Process (HAP), which enables you to execute Descriptive and Predictive PA models within SAP BW. This blog post takes a deeper dive into the following questions:

- What technology stack do I need to use the HAP?

- What happens in the HAP?

- What can I do with the HAP?

- Will there be more analytic models added to the HAP?

The prerequisite, from a technology-stack point of view, is a BW 7.4 environment powered by SAP HANA. This is due to the fact that a code pushdown to the HANA platform is required, because the different kinds of available analytics reside there. This also ensures that all analytics are processed in-memory.

A visual representation of the above is depicted in the following picture, which resembles an (SAP BW) transformation. A (source) InfoProvider is connected to a (target) InfoProvider via a transformation, the transformation in this case being the HAP. BW Process Management manages the process via a DTP (Data Transfer Process) executed within a Process Chain.

The difference between the two is that a traditional transformation transforms the data record by record, while the HAP fetches the whole dataset in one go and executes the algorithms (sometimes in multiple passes) before it delivers a statistically correct result.
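To illustrate the difference, here is a minimal, hypothetical Python sketch (not HAP, PAL or SQLScript code): a record-by-record rule can be applied one row at a time, whereas a clustering algorithm such as K-Means needs the complete dataset, because every centroid is recomputed from all assigned rows on every pass.

```python
# Conceptual illustration only: contrast a per-record mapping with a
# whole-dataset algorithm. Nothing here is actual SAP HAP/PAL code.

def record_by_record(rows, rule):
    """Classic transformation: each row is mapped independently of the rest."""
    return [rule(row) for row in rows]

def k_means_1d(values, k=2, iterations=10):
    """Whole-dataset algorithm: repeated passes over ALL values."""
    centroids = sorted(values)[:k]  # naive initialization for the sketch
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:                      # assignment needs every value
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        for i, c in enumerate(clusters):      # centroid = mean of ALL members
            if c:
                centroids[i] = sum(c) / len(c)
    return centroids

rows = [1.0, 1.2, 0.9, 10.0, 10.5, 9.8]
print(record_by_record(rows, lambda r: r * 2))  # works one row at a time
print(sorted(k_means_1d(rows)))                 # needs the full dataset
```

The per-record rule never looks beyond the current row; the K-Means loop, by contrast, cannot produce a single result without visiting the entire dataset repeatedly, which is exactly why the HAP fetches everything in one go.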

The source InfoProvider may be an SAP BW InfoProvider or a database table which resides in SAP HANA, on which analytics (PAL, BFL, R scripts, etc.) will be applied. The target InfoProvider can be a DataStore Object (DSO), an SAP HANA database table or another HAP (a so-called Stacked HAP). The latter implies that the output of the first HAP will be used as input for the second HAP.
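The stacked setup can be sketched as follows, again as a hypothetical plain-Python illustration (in BW the chaining is modeled between HAPs, not hand-coded): the output table of the first step, here a simple ABC classification, is consumed unchanged as the input of the second step.

```python
# Hypothetical sketch of a "stacked" flow: step 1's output is step 2's input.
# Function names and the 80%/95% ABC thresholds are illustrative assumptions.

def step_abc_classification(sales):
    """Step 1: tag each (customer, revenue) pair with an ABC class."""
    total = sum(rev for _, rev in sales)
    ranked = sorted(sales, key=lambda x: -x[1])
    out, running = [], 0.0
    for cust, rev in ranked:
        running += rev
        share = running / total
        cls = "A" if share <= 0.8 else ("B" if share <= 0.95 else "C")
        out.append((cust, rev, cls))
    return out

def step_filter_class(rows, wanted="A"):
    """Step 2: consumes step 1's output table as its own input."""
    return [r for r in rows if r[2] == wanted]

sales = [("c1", 500.0), ("c2", 300.0), ("c3", 120.0), ("c4", 80.0)]
stage1 = step_abc_classification(sales)   # first HAP's output
stage2 = step_filter_class(stage1)        # second, "stacked" HAP reads it
print(stage2)
```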

The analytic functions currently available in the HAP can be grouped as follows:

- Clustering algorithms: ABC & K-Means.

- Time Series algorithms: Single, Double and Triple Exponential Smoothing.

- Association algorithm: Apriori (better known as Market Basket Analysis).

- Outlier detection and projection analysis.
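To get a feel for what these functions do, here is a minimal Python sketch of Single Exponential Smoothing, one of the Time Series algorithms listed above (for illustration only; in the HAP the algorithm runs in-memory inside SAP HANA via the predictive libraries).

```python
# Illustrative sketch, not HAP/PAL code: Single Exponential Smoothing
# weights recent observations more heavily via the smoothing factor alpha.

def single_exponential_smoothing(series, alpha=0.5):
    """s[t] = alpha * x[t] + (1 - alpha) * s[t-1], seeded with the first value."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

demand = [100, 120, 110, 130, 125]        # e.g. weekly demand figures
print(single_exponential_smoothing(demand, alpha=0.3))
```

A higher alpha makes the smoothed series follow the raw data more closely; the double and triple variants add trend and seasonality components on top of this same recursion.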

McCoy’s advice would be to first work with the available (out-of-the-box) analytics, because they will provide insight into how the HAP operates. Each analytic function requires a specific input structure and will generate a different output structure.

Please be aware that the most commonly used analytic functions have already been released, and more will follow in the (near) future.

When you want to proceed with the next step, whether that is creating your own R procedures or setting up a proof of concept, McCoy's consultants are specialized in Predictive Analytics and are more than willing to help you on the road of Predictive Analytics.

If you want to know more, feel free to contact us.