SCN Blog List - SAP BW Powered by SAP HANA

Week 1 Open SAP Course BW Powered by HANA


I just completed week 1 of this course - you can still register at SAP Business Warehouse powered by SAP HANA - Marc Hartz and Ulrich Christ

 

I did things a little backwards: I watched Lothar Henkes' introduction video last, even after completing the assignment.  Having started work in BW 3.x, I would say the changes to BW covered in the course are far greater than the initial change from BW 3.x to BW 7.0.  This slide says it all:

 

[Image: BW simplification slide - Source: SAP]

 

You can see from the above that the Composite Provider is the "successor" to the MultiProvider and the InfoSet.  The presenters said the Composite Provider can do inner joins.

 

Bigger still: there are no more InfoCubes.  The speaker also said you can migrate your existing BW models to BW on HANA without remodeling.

 

The other big change is Eclipse-based development.  This is consistent with the HANA Studio.

 

You also hear the question asked, "If I have HANA, do I still need BW on HANA?"  I wondered this myself yesterday while completing the Design Studio exercises, which were based on HANA Calculation Views.  This question is answered in week 1 of the course.

 

I encourage you to carefully listen to everything the presenters say in this course.  Even if you don't have a background in BW but will be consuming BW in the future, I think this course provides a great background to get you started.

 

This course reminds me of how I first learned BW many moons ago - in my previous job the BW team quit, and I had to learn BW quickly.  I used this book: BW: A Step by Step Guide - Amazon says I purchased the book back in 2002.  The week 1 instruction covers the basic concepts of BW: the foundation, modeling, provisioning and more.

 

Another course tip: you can speed up or slow down the videos.  I didn't know you could do that, but I noticed it today - see below:

[Image: video playback speed controls]

 

However, I do not recommend doing that for this course; carefully listen to everything they have to say.  Also pay attention to the 62 slides provided.

 

Also, I didn't see any system exercises for week 1 of this course, even though I see an entry for it in the Cloud Appliance Library - see below:

[Image: Cloud Appliance Library entries]

I activated OpenSAP BW Powered by HANA, but it seemed to have the same image and guide as the previous entry, OpenSAP BI Clients & Applications on SAP HANA.  Maybe this will get resolved next week.

 

In the introduction Lothar Henkes says it will take about 4 hours per week to complete the course.  I encourage you to sign up today at SAP Business Warehouse powered by SAP HANA - Marc Hartz and Ulrich Christ - it is free.  There is no marketing with this course; it's all about what the product can do.


Experiencing Week 2 Open SAP BW On HANA Course


Last week (Week 1 Open SAP Course BW Powered by HANA) I was looking for the exercises/instance for the course.  It turns out there are no formal exercises.  This is good, because I don't feel compelled to complete anything beyond the lectures.  But looking closely at the cover page at SAP Business Warehouse powered by SAP HANA - Marc Hartz and Ulrich Christ, they do list an instance at http://www.saphana.com/bwonhanatrial

 

I went to http://www.saphana.com/bwonhanatrial, but when I brought up the Getting Started Guide (which is very well documented), the exercises didn't correlate with what you learn in week 2.  After reviewing the exercises I decided to terminate my instance (quickly).

 

Week 2 is all about the new Advanced DataStore Object; in certain modeling situations you no longer need InfoObjects.  The instructors made the case that this is for agile BI: they listened to customers who had data sources of over 220 fields, where previously you needed to manually create an InfoObject for each one (painful and tedious).

 

 

Time is spent on provisioning in areas that I haven't used before:

[Image: data provisioning options - Source: SAP]

 

Open ODS views allow you to model without InfoObjects.

[Image: Open ODS view modeling - Source: SAP]

Most of the demonstrations are on BW 740 SP08; I tried following along in my own sandbox system, which is on BW 740 SP04, and it was impossible due to all the enhancements and changes.

 

For example, here is the new RSA7 transaction shown during class:

[Image: the new RSA7 transaction]

 

ABAP Managed Database Procedures (AMDP) are covered with the transformations.  I confess, all my ABAP knowledge and use with BW came from an SAP Press book, so I continue to have a lot to learn.
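For context, an AMDP transformation routine in BW is an ABAP class method whose body is SQLScript executed inside HANA; the importing and exporting parameters are conventionally named inTab and outTab. A minimal sketch of what such a SQLScript body could look like (the field names are illustrative assumptions, not from the course):

    -- Hypothetical SQLScript body of an AMDP transformation routine:
    -- derive a margin column while the data flows from inTab to outTab,
    -- entirely inside HANA and without round trips to the ABAP server.
    outTab = SELECT doc_number,
                    material,
                    revenue,
                    cost,
                    revenue - cost AS margin
             FROM :inTab;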

 

Overall there was still so much covered here to learn - the logical data warehouse, and how BW on HANA can support “Big Data” and Hadoop scenarios.  Interestingly, any time the instructors ran a query against BW on HANA they used Analysis for Office; I was expecting a BEx Query.

 

I think I will have to go back and watch these videos again once I get a system that is on SP08; still, I thank the instructors and SAP for filling these videos with so much knowledge.  Again, this openSAP course is all about what the product can do, with no marketing - kudos!

What is the impact of the Advanced DSO to your existing SAP BW landscape?


As described in my previous blogs (What’s new in SAP BW 7.4 SP8 and ADSO: Installation), SP8 for SAP BW 7.4 has introduced BW’s newest modeling object: the Advanced DSO. The Advanced DSO is BW’s new major modeling object that will play a central role in any SAP BW on HANA data model.


Image 1: Simplification of BW modeling objects



LSA vs LSA++


In classical non-HANA BW environments, the Layered Scalable Architecture (LSA) approach has been around for quite some time now. Focusing on creating a persistent corporate memory and creating datamarts for specific information needs, most SAP BW data models have been created using several persistent data layers.

When running BW on HANA, there is much less need for multiple layers of persistence. From both a performance aspect and modeling perspective, HANA offers possibilities to do a lot of joins/unions or even calculations using non-persistent objects such as CompositeProviders or even Open ODS views. The new modeling principles when running on HANA are bundled in the LSA++ approach.

So what is the actual impact of the introduction of the Advanced DSO on the LSA++ principles? Not much from an architectural point of view. The only real thing that changes is the building blocks used in the persistent layer(s) of your data model.

For in-depth information about LSA vs LSA++, please see this presentation by Juergen Haupt.

 

 

Advanced DSO as the main persistence object

 

The ADSO is the only object you need for persistence. That promise from SAP raises the immediate question of how to deal with the specific characteristics of the classical modeling objects in BW:

  • The field-based structure of the PSA
  • The fast, no-activation-required loading of the write-optimized DSO
  • The 3-table approach of the standard DSO
  • The ‘every characteristic is key’ approach of the InfoCube

 

The Advanced DSO manages to replace all of these objects by essentially being all of them. In the ADSO settings you can configure it to behave like any one of these objects, and SAP has provided specific templates for each use case:

  • Data Acquisition Layer
  • Corporate Memory
  • Data Propagation Layer
  • Reporting Layer

 

Each of these templates is made up of a specific combination of settings. The data propagation layer, for example, requires you to check the checkboxes below. These settings create an object with an active table to report on and a change log table for further data provisioning - basically a classical standard DSO.

 

Image 2: Settings for ADSO with Data Propagation template


Image 3: Schematic overview of an Advanced DSO with Data Propagation settings



Added value

 

So what is the actual added value of the Advanced DSO compared to the ‘classical’ objects? There are a number of reasons to use ADSOs for new developments:

  • Simplification of object types. Over the years, new functionality in SAP BW often meant new object types with their own strengths and weaknesses. If those objects added value, that meant remodeling, reloading and added complexity. The ADSO is now your one object for persistence, with settings to fine-tune it to your specific needs.
  • Flexibility in data modeling. Because the ADSO is manageable by settings, you can start out modeling your ADSO using the Reporting Layer settings. If requirements change or you come across new insights, you can simply change the settings to turn that object into, for example, the propagation layer. No new object is needed, and in most cases this can even be done without losing data.

 

 

Concluding:

 

What is the impact of the introduction of the Advanced DSO on your existing landscape? Well, for starters it changes the way of working for most developers. The flexibility for prototyping and iterative development offered by settings-based object typing and field-based modeling is complementary to using Open ODS views for these use cases. As for the immediate impact on existing data models: SAP does not yet offer migration tools for this and advises customers to wait until they are available. The best advice is: use the ADSO for new developments, but leave existing developments as-is.


SAP HANA Smart Data Access for BW [SDA] Part 2


Please find Part 1 here.

 

In the previous blog post (Part 1) we saw how to implement SDA. In this part we will perform the same steps using SQL.

 

Please go through Part 1 before you start with this.

 

1. Creating the Remote Data Source


See SQL Syntax below:

 

[Image: SQL syntax for creating a remote source]

 

Example SQL is shown below


[Image: example SQL for creating a remote source]
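Since the screenshot does not reproduce here, a hedged sketch of what such a statement typically looks like, assuming a Sybase IQ target (the source name, driver, host and credentials are illustrative):

    -- Create a remote source pointing at a Sybase IQ server
    CREATE REMOTE SOURCE "MY_IQ_SOURCE"
      ADAPTER "iqodbc"
      CONFIGURATION 'Driver=libdbodbc16.so;ServerNode=iqhost:2638'
      WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=dba;password=<password>';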


2. Creating the Virtual Table

See SQL Syntax below

 

[Image: SQL syntax for creating a virtual table]

 

Example SQL is shown below


[Image: example SQL for creating a virtual table]
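Again as a hedged sketch (schema, remote source and table names are illustrative; the <NULL> placeholder is used when the remote database part does not apply):

    -- Create a virtual table in HANA that points at a table in the remote source
    CREATE VIRTUAL TABLE "MYSCHEMA"."VT_SALES"
      AT "MY_IQ_SOURCE"."<NULL>"."dba"."SALES";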


3. Display Data

a. Data can be visualized through Display Content or via SQL on the created virtual table.

b. The SQL used here is highlighted below.


[Image: displaying data from the virtual table]
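A hedged sketch of the kind of statement used (names are illustrative):

    -- Query the virtual table exactly like a local HANA table
    SELECT * FROM "MYSCHEMA"."VT_SALES" LIMIT 100;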

 

I hope this series is helpful.

SAP HANA Smart Data Access for BW [SDA] Part 1


SAP Smart Data Access is a capability of HANA which can be utilized in BW when running on HANA. For BW 7.4 on HANA, this comes as new functionality by default.

 

Smart Data Access makes it possible to access data from multiple connected databases, and to report on top of it, without any need for ETL. The databases connected to HANA can be utilized without even replicating their data to HANA. This is beneficial not only in terms of ROI, as it reduces the cost to implement, run and manage, but it also brings homogeneity to disparity: various unlike databases can be combined, and reported on, without thinking much about the technology behind them. Especially in BW, where we have the concept of a Sybase IQ NLS holding cold data, reporting can, if required, combine warm or hot data with this cold data without any need to physically move the data. Unified HANA models for reporting can be created and utilized across non-homogeneous sources of data. Today's applications handling Big Data, e.g. Hadoop, can be connected using the Hive interface [HWI].

 

[Image: SDA architecture overview]

 

 

Some of the databases which can be connected at the moment are:

o SAP Sybase ASE
o SAP Sybase IQ
o Intel Distribution for Apache Hadoop
o Teradata
o SAP HANA
o SQL
o XRDBMS
o IBM PureData System for Analytics


The various HANA-optimized engines are leveraged, and the disparity in syntax between the various databases is taken care of by this HANA-based technology. All necessary performance statistics and details are recorded in HANA.


A definite prerequisite for this is the presence in HANA of all relevant drivers for the databases to be connected; these need to be installed on HANA beforehand.


Once SDA is established with a database, virtual tables can be created as required, and users can write SQL in HANA against these virtual tables.  The SAP HANA query processor prepares a plan for the query and executes the relevant parts in the target database; the results are then returned to HANA. Query execution can be optimized by enabling the smart cache functionality, which enables running parts of the query in the target databases.
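To make this concrete, here is a hedged sketch of such a federated query, assuming a local table with hot data and a virtual table exposing cold data from the NLS (all names are illustrative):

    -- Combine hot data in HANA with cold data behind a virtual table
    SELECT material, SUM(amount) AS amount
    FROM (
        SELECT material, amount FROM "MYSCHEMA"."SALES_HOT"        -- hot data in HANA
        UNION ALL
        SELECT material, amount FROM "MYSCHEMA"."VT_SALES_ARCHIVE" -- cold data via SDA
    ) u
    GROUP BY material;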


The following are the high-level steps to implement SDA.

[Image: high-level steps in implementing SDA]


1. Creating the Remote Data Source:


Select "New Remote Source..."from Provisioning --> Smart Data Acess --> Remote Source --> New Remote Source ...


[Image: creating a new remote source]


Select the required adapter.

Fill in the values for all the connection properties and provide credentials.

Prerequisite: all relevant drivers must be installed beforehand on the HANA server.


[Image: remote source connection properties]

2. Create Virtual Table:

Select the table from the connected database and add it as a virtual table.

[Image: adding a virtual table]

3. Display Data

a. Data can be visualized through Display Content or via SQL on the created virtual table.

b. The SQL used here is highlighted below.


[Image: displaying data from the virtual table]


I will continue on this topic in Part 2.

How to Implement Virtual Master Data


This is a new type of master data InfoProvider possible in SAP BW 7.4. Such a characteristic can be used as virtual master data in a VirtualProvider. It is possible to use HANA Attribute, Analytic and Calculation Views to create virtual master data in BW.

 

Steps to Implement

 

We will create virtual master data for employee master data from an attribute view with the following structure. Note: field EMP_ID in the attribute view is of type VARCHAR with length 4.

 

[Image: Step 1 - structure of the employee attribute view]
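As a quick sanity check before modeling in BW, the attribute view can be previewed with SQL. A hedged sketch, assuming the view is called AT_EMPLOYEE in a package named mypackage (both names are illustrative):

    -- Attribute views are exposed as column views under the _SYS_BIC schema
    SELECT "EMP_ID", "NAME" FROM "_SYS_BIC"."mypackage/AT_EMPLOYEE" LIMIT 10;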

 

The start of creation is exactly the same as for any normal/standard characteristic InfoObject. (Virtual key figures cannot be created.)

 

[Image: Step 2]

 

As EMP_ID is of type VARCHAR with length 4, the Data Type and Length are entered to match exactly.

 

[Image: Step 3]

 

In the Master Data/Texts tab, further down, select SAP HANA View as the master data access, then select the SAP HANA package and the SAP HANA view.

 

[Image: Step 4]

 

With Master Data checked, Medium Text is selected.

 

[Image: Step 5]

 

Now click on Propose HANA-Assignments.

 

[Image: Step 6]

Under Propose Mapping, select only those fields which need to be added as attributes. EMP_ID and NAME will not be mapped as attributes, so they are left unchecked: the EMP_ID field of the HANA view will be mapped to the main master data object, and NAME will be mapped to the medium text.

 

[Image: Step 7]

 

Now a second pop-up will request the corresponding BW InfoObjects. The system proposes similar InfoObjects; select the relevant one for each.

 

[Image: Step 8]

The attributes now show in the Attributes tab; these are the assigned BW InfoObjects. To see all HANA field-to-BW mapping assignments, and to map the main master data object and the text, follow the steps shown next.

 

[Image: Step 9]

 

Now click on Maintain HANA-Assignments.

 

[Image: Step 10]

 

The assignment of a HANA view field is missing for the main master data object ZVIRTCH2; we need to add the HANA view field EMP_ID for it. Similarly, the HANA view field NAME will be mapped to the medium text.

[Image: Step 11]

 

Fields added and transferred; the virtual master data ZVIRTCH2 is activated.

[Image: Step 12]

 

Right-click and choose Display Data: it shows the data from the HANA attribute view.

 

[Image: Step 13]


Creating an ADSO from Open ODS view


The Open ODS View is one of SAP BW 7.4's main enhancements and one of the strategic modeling objects for SAP BW on HANA in SAP's simplification approach (see SAP BW 7.4 SP8 powered by SAP HANA and Roadmap | SCN and What’s new in SAP NW BW 7.4 SP8).

 

One of the key features of the Open ODS View is developing BW objects without having actual data persistency in SAP BW. The Open ODS View reads data from a remote source while leveraging BW functionality such as master data access when working with InfoObjects. The great thing about this is that the SAP BW developer can actually start developing, testing and prototyping on data before having to worry about the underlying data model, changes in requirements, etc. It really supports agile development and provides more flexibility in projects.

 

In a lot of situations, though, there will come a certain point in your project where data persistence is required and a traditional ETL process needs to be put in place. That's where the Advanced DSO (ADSO) comes into play. As of SAP BW 7.4 SP8, it is possible to generate an Advanced DSO from your existing Open ODS View, inheriting all InfoObject assignments and field information. When used on an SAP ERP DataSource, it even creates a transformation to that DataSource and a Data Transfer Process.

 

An example:

 

I created a basic generic extractor on table AUFK in SAP ECC, and created a simple Open ODS View of type FACT, without associations to other Open ODS Views or InfoObjects.

 

[Images: Open ODS View creation and settings]

 

After checking and activating, the data preview shows us about 6417 rows (demo system):

[Image: Open ODS View data preview]

 

Then, in the RSA1 modeling workbench, we go into change mode for this Open ODS View and select the 'Generate Dataflow' option. Clicking this button opens a dialog with settings for the dataflow generation. We choose the ADSO name here, and we can choose between source-system data types for the fields or BW data types. Since both systems are ABAP systems in this case, we just go for source-system types.

[Image: Generate Dataflow option in RSA1]

[Image: dataflow generation settings]

 

After successful completion of the process, we now have an Advanced DSO and a corresponding data flow leading up to this ADSO! (The InfoPackage has to be created manually.)

 

[Image: generated data flow]

Loading and activating the data shows us the same 6417 data records we had using the Open ODS View, but now persistent in BW. The data is identical to the data we previewed with the Open ODS View earlier:

[Image: data load monitor]

[Image: ADSO data preview]
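If you also want to verify the record count on the database, here is a hedged sketch, assuming the default ADSO table naming convention where /BIC/A<name>2 is the active-data table (the schema and ADSO names are illustrative):

    -- Count the activated records of the generated ADSO directly in HANA
    SELECT COUNT(*) FROM "SAPBWP"."/BIC/AZAUFK2";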


SAP HANA Live versus SAP BW (powered by HANA): When to choose what?


Lately I'm receiving more and more reporting-related questions regarding when to opt for "SAP BW (on HANA)" and when to opt for "SAP HANA Live".

As SAP BW has been around for years and years it probably doesn't need any introduction.

SAP HANA Live on the other hand does, so here I go:


SAP HANA Live is about real-time operational reporting directly on a (Business) Suite running on SAP HANA, without any redundancy or latency, and it provides SAP-delivered content (similar in concept to SAP BW Business Content).

 

Going back to the question "when to use what", I luckily stumbled upon the following decision tree, which was presented in the DMM204 (SAP HANA Live and SAP BW - A Perfect Match) session at TechEd Berlin:

[Image: decision tree]

By answering the above 15 questions, points are granted to either HANA Live or SAP BW.

 

  • Scoring 11-14 points for HANA Live? --> Go for HANA Live
  • Scoring 05-10 points for HANA Live? --> Go for a hybrid approach
  • Scoring 00-04 points for HANA Live? --> Go for SAP BW (powered by SAP HANA)

 

Hopefully this comes in handy!

In-memory data is hot! Wait, it's actually warm! - Multi-temperature data for SAP BW on top of SAP HANA


When BW on top of SAP HANA was first introduced to the market, the idea was simple: your whole database needs to fit in RAM. This implied having enough RAM and having a HANA license for that amount of memory.

 

But sometimes companies forget the good practices for keeping the BW database lean: archiving and housekeeping tasks.

Not performing these tasks makes it difficult to plan the BW on HANA sizing, since you don't know the “real” size and growth rate of the database. These good practices need to be carried out always, even if you are not on SAP HANA; otherwise your main database will grow and grow, meaning more costs (disks, backup, etc.) and more risk (recovering a huge DB from a backup is riskier).

 

With the latest releases of SAP HANA and SAP BW, not all the data needs to fit in RAM; now you have multi-temperature data:

 

[Image: multi-temperature data overview]

Hot data has the highest performance and a smaller data volume. Hot data is kept in main memory (RAM), while warm data is kept on the disk of the HANA box, and cold data resides in the NLS.

This concept is called multi-temperature; you can find more information here: Multi-Temperature Categories and BW Architecture (LSA) - Using the SAP HANA Database - SAP Library

 

Warm data is enabled via two technologies in SAP BW on SAP HANA:

  • The non-active data concept (ON by default) for PSA tables and write-optimized DSOs. This works on parts of the table (such as partitions, requests, and so on).
  • The warm store, which can be set for write-optimized DSOs and DataSources.

The data in the warm area is available for all kinds of operations (read/write) without any change to the BW processes.
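To see which column-store tables are currently fully loaded into memory and which have been displaced, the HANA monitoring view M_CS_TABLES can be queried. A hedged sketch (the schema name and the PSA naming filter are illustrative assumptions):

    -- LOADED is TRUE, FALSE or PARTIALLY per table/partition
    SELECT TABLE_NAME, LOADED, MEMORY_SIZE_IN_TOTAL
    FROM M_CS_TABLES
    WHERE SCHEMA_NAME = 'SAPBWP'
      AND TABLE_NAME LIKE '/BIC/B%'   -- PSA tables typically follow this pattern
    ORDER BY MEMORY_SIZE_IN_TOTAL DESC;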

 

The NLS is a separate box where you store or archive data that is read-only and is not going to change any more (e.g. last year’s invoices). The magic of the NLS is that queries, without any modification, are able to access both the main database and the NLS. Performance, of course, is different when accessing the NLS.

To keep hot and warm data volumes under control, you can periodically archive (move) the data into the NLS.

[Image: NLS archiving overview]

For the NLS box there are solutions from several vendors like SAP, IBM, and others. The good news here is that some NLS solutions are using columnar storage and compression making the data in the archive faster to access and cheaper to store.

All these concepts help reduce the required main memory size and the license costs of your SAP HANA box, making the SAP BW on HANA project more cost-efficient.

 

After this oversimplified intro, my idea is to get your feedback about:

-Your NLS solution: Which vendor are you using? How is the performance?

-How is the SAP HANA warm data performance?

Tools for migrating BW to HANA (BoH)


SAP NetWeaver Business Warehouse powered by SAP HANA, or BW on HANA (BoH), has been around for some time already. It was introduced in the second quarter of 2012 and has meanwhile gotten a lot of attention. BW is the usual choice when companies are evaluating HANA: it is natural that they first just migrate to HANA as the database, and after some time start to implement new projects using the new BW objects that leverage HANA functionality, or begin re-implementing current BW data models for HANA. In any case, BW gets a lot of attention as the first system to be migrated to HANA.


It may be very tricky for people who are new to HANA to even start with a basic evaluation of what it means to migrate to HANA. To support tasks like this, SAP provides a few tools. In the text below I list the tools and provide basic information about them.

[Image: BW Migration Cockpit]

 

The tools are embraced by a central tool called the BW Migration Cockpit for SAP HANA. From the cockpit, the other tools for migrating BW systems to HANA are accessible.

 

Task Type | ABAP Report / Tcode | Docu | Description

N/A | ZBW_HANA_MIGRATION_COCKPIT | Note 1909597 | BW Migration Cockpit
Check | ZBW_HANA_CHECKLIST (ZBW_HANA_CHECKLIST_3X for BW 3.x) | Note 1729988 | Automation of BW checks for HANA migration prerequisites and guidelines for operation
Check | Tcode: RSRV | link | BW object checks - consistency checks on the data and metadata stored in a BI system
Check | RSPLS_PLANNING_ON_HDB_ANALYSIS | Note 1824516 | Determines whether a planning function is executed in ABAP or in HANA
Size | /SDF/HANA_BW_SIZING | Note 1736976 | Sizing report for BW on HANA
Downsizing - configure NLS | Tcode: RSDANLCON | link | Customizing: set up near-line storage connections
Downsizing - configure data archiving process | Tcode: RSDAP (SAPLRSDA_DAP_MAINTAIN) | link | Setup of a data archiving process in order to archive BW data in cubes and DSOs
Downsizing - execution | Tcode: SARA (SAPMAADM) | link | Execute archiving processes to reduce system size - Data Archiving (ADK)
Housekeeping - automation | Tcode: STC01, task list SAP_BW_HOUSEKEEPING | Notes 1614266, 1707321 | Executes routine housekeeping activities; to be run on a regular basis
Housekeeping - tasks | Tcode: SLG2 (ABAP: SBAL_DELETE) | N/A | Application log deletion of expired logs
Housekeeping - tasks | RSDDSTAT_DATA_DELETE | Note 934848 | BW statistics deletion
Housekeeping - tasks | Tcode: SARA | N/A | IDoc archiving
Housekeeping - tasks | RSBTCDEL | N/A | Job log deletion
Housekeeping - tasks | Tcode: BPS_STAT0 (UPC_STATISTIC3) | Notes 540398, 990000 | Planning statistics deletion
Housekeeping - tasks | Tcode: RSREQARCH (SAPLRSREQARCH) | Note 2092315 | Management of archived requests
Housekeeping - tasks | N/A | Note 706478 | List of large system tables
Migrate - pre-migration tasks | Tcode: /ASU/START (/ASU/ASUSTART) | Note 1000009 | Tasks to be performed before starting an upgrade or migration to SAP HANA
Migrate - pre-migration tasks | SMIGR_CREATE_DDL | Note 1908075 | Generates DDL statements for the migration
Migrate - automation | Tcode: STC01, task lists SAP_BW* | N/A | Task Manager - execute task lists
Migrate - automation | Tcode: STC02, task lists SAP_BW* | N/A | Task Monitor - to resolve errors that occurred while running a task list
Migrate - automation | Tcode: SNOTE, Z_SAP_NOTE_ANALYZER | link | Notes Analyzer - checks the prerequisites for the usage of BW PCA (Post Copy Automation) and housekeeping; lists the Notes to be implemented
Migrate - automation | N/A | Note 1614266 | Basis XML - XML file containing the list of Basis Notes
Migrate - automation | N/A | Note 1707321 | BW XML - XML file containing the list of BW Notes
Warehouse management | Tcode: RSMIGRATE (SAPLRSMIGRATE_FRONTEND) | link | Migration of update rules, 3.x InfoSources and transfer rules
Warehouse management | ZBW_TRANSFORM_FINDER (ZBW_TRANSFORM_FINDER_3X for BW 3.x) | Note 1908367 | BW Transformation Finder - checks for certain types of transformation rules
Warehouse management | LogSys: RS_LOGSYS_ACTIVATE; InfoObject: RSDG_IOBJ_ACTIVATE; DSO: RSDG_ODSO_ACTIVATE; InfoCube: RSDG_CUBE_ACTIVATE; HybridProvider: RSDG_HYBR_ACTIVATE; InfoSet: RSQ_ISET_MASS_OPERATIONS; MultiProvider: RSDG_MPRO_ACTIVATE; DataSource: RSDS_DATASOURCE_ACTIVATE_ALL; InfoSource: RS_COMSTRU_ACTIVATE_ALL; transfer rules: RS_TRANSTRU_ACTIVATE_ALL; update rules: RSAU_UPDR_REACTIVATE_ALL; transformations: RSDG_TRFN_ACTIVATE; DTP: RSBKDTPREPAIR | N/A | Object activation tool - collection of several tools for activating different BW objects
BEx | BEx Query Designer | link | Starts the BEx Query Designer
BEx | Tcode: RSRT | link | Starts the BEx Query Monitor
BEx | Tcode: RSRTQ (RSRQ_QUERYDEFINITION) | N/A | Starts the BEx Query Definition
BEx | BEx Web Application | link | Starts the BEx Web Application
BEx | Web Item Conversion | Note 832712 | Starts the Web Item Conversion
BEx | BEx Analyzer | Note 1901268 | Starts the BEx Analyzer
BEx | RSZW_ITEM_MIGRATION_3X_TO_70 | link | Starts the Workbook Conversion
Security - authorizations | Tcode: RSECADMIN | link | Maintenance of analysis authorization objects
Security - authorizations | SAPLRSEC_MIGRATION | link | Conversion of 3.x reporting authorizations to 7.x analysis authorizations
Security - roles | Tcode: PFCG | link | Maintenance of BW roles
Optimize | ZBW_ABAP_ANALYZER (ZBW_ABAP_ANALYZER_3X for BW 3.x) | Note 1847431 | BW ABAP Routine Analyzer - scans custom ABAP routines used in process chains, transformations, transfer rules (IS/DS/SS), update rules (IC, IO, IS), planning areas/levels/functions and ADPs
Optimize | Tcode: SCI | link | ABAP Code Inspector

I am looking forward to hearing about your experience using these tools.

 

PS: This blog is cross-posted on my personal blog.

How to determine if a BW on HANA query performance problem is HANA database related


I have seen a growing number of so-called BW on HANA performance-related incidents where the problem actually has nothing to do with the HANA database, and sometimes is not related to BW at all. It is often the case that the performance problem lies with the frontend tool; this could be Webi, Design Studio or any third-party tool that connects to the BW backend system and uses a BW query or InfoProvider as a data source for a frontend report.


The first step in the analysis is to try to identify where the problem lies, i.e. in the frontend tool, the BW backend or the HANA database.

Where the frontend tool connects directly to the BW application server (frontend reporting tools can also connect directly to the HANA database and not via the BW application server; in that case the below is not relevant), you can use the query monitor transaction RSRT to execute the query and collect the runtime statistics. RSRT is a frontend-independent tool, so if you compare the query runtime in RSRT with the overall runtime of the frontend report, you can make a judgement on where the performance problem lies and whether the BW application, the HANA database or the frontend reporting tool is responsible.

The video below explains how you can execute a query with RSRT and how to collect the runtime statistics for the query. The runtime statistics are important, as from them we can see in which part of the query processing the time is lost:

How to execute BW Query and collect runtime statistics with RSRT transaction
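If you prefer to inspect the recorded statistics directly, they also sit in the BW statistics tables. A hedged sketch against RSDDSTAT_OLAP in the BW schema, assuming its standard columns (the date filter is illustrative):

    -- Sum the recorded runtime per statistics event across recent query executions
    SELECT EVENTID, COUNT(*) AS executions, SUM(EVTIME) AS total_seconds
    FROM RSDDSTAT_OLAP
    WHERE CALDAY >= '20150101'
    GROUP BY EVENTID
    ORDER BY total_seconds DESC;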


There is often the expectation that after a BW on HANA migration from another database, all BW queries will be much faster. Of course, if the query was already slow before the database migration to HANA because of calculations happening on the BW application server, and these calculations cannot be pushed down to the HANA DB level, then HANA cannot help for these use cases without a redesign of the BW query.


Good BW Query Use Cases:


A good BW query use case, from a performance improvement point of view when migrating to HANA, is one where a significant portion, if not the vast majority, of the query's overall runtime is database time.

 

Apart from database time, what other parts of the query processing can be faster with BW on HANA?

 

[Figure 1: components of BW query runtime]

 

As shown in the diagram above, the different parts that make up the runtime of a query are database time, time in the Analytic Manager, and client and network time. The first thing to point out is that we cannot improve the network or client time: this is the time taken to send the data across the network from the database server to the BW application server, or from the BW application server to the client, and then display the output on the end user's PC, laptop or mobile device.


What we can possibly improve with HANA, apart from the database time which we have already looked at above, is the time spent in the Analytic Manager. The Analytic Manager, previously known as the OLAP engine, is the brain of BW when it comes to query processing: it is responsible for navigation, filtering and aggregation, among other things, in the query execution. With the Analytic Manager, what we are really talking about, as shown in the diagram above, is taking more and more calculations that currently happen on the BW application server and pushing them down to the HANA database level with every BW on HANA release.
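To illustrate what such a pushdown means in database terms, consider a distinct-customer count: without pushdown, the detail rows travel to the application server, where the Analytic Manager counts them; with pushdown, HANA returns only the finished numbers. A hedged sketch (table and field names are illustrative):

    -- Without pushdown: detail records are fetched and aggregated in ABAP
    SELECT material, customer, amount FROM "SAPBWP"."SALES";

    -- With pushdown: the exception aggregation happens inside HANA
    SELECT material, COUNT(DISTINCT customer) AS customer_count
    FROM "SAPBWP"."SALES"
    GROUP BY material;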

 

 

The key takeaway here is that, as of BW 740 running on HANA, more and more Analytic Manager functions and calculations are being optimized and pushed down to the database level. It is therefore expected that the OLAP portion of a BW query will become even faster on HANA with future developments. One important example among many is BW formula exception aggregation: with SP09 for BW 740, as shown in the diagram below, this feature is pushed down to the HANA database:

 

[Figure 2: formula exception aggregation pushdown with BW 740 SP09]

To get the current list of Analytic Manager operations that are pushed down to the HANA database level, please check the information in SAP Note 2063449.

 

It is difficult to estimate by how much the OLAP time can be improved on HANA; it really depends on which OLAP functionality you use in your BW query and which OLAP features have been pushed down to the HANA DB level in the BW on HANA release that you have. Good query design is also important, in terms of the number of records that are sent from the database back to the application server and the number and complexity of the BW query calculations that must be performed in the Analytic Manager on the BW application server because they cannot be pushed down to HANA.

Upgrading was never easier ...


With the release of the current SL Toolset 1.0 SP12 - SL Toolset 1.0 SPS 12: improved Software Logistics Tools - there is now a new option available for ABAP- and Java-based upgrades.

 

Already introduced with the database migration option (DMO), the new SAPUI5-based frontend opens up even more possibilities when it comes to monitoring any upgrade process, regardless of the underlying database - see SUM: New SL Common UI available for AS ABAP scenarios.


This Graphic shows the old and new UI access:

[Image: old and new SUM UI access]

All common browser types are supported, i.e. IE 11, Google Chrome, Firefox and Safari in their latest versions.

Usage on mobile devices with the same browsers should work as well, as the access to the backend is HTML5-based.

[Image: the new SUM UI]

One of the major advantages is the fact that the SAP Host Agent, which is part of every SAP-based installation, takes care of the complete communication between SAPup and the frontend, without an open online window to the OS.


Various options allow flexible usage of the new SUM UI during the upgrade process.

[Image: SUM UI options]

Different colors show individual process steps with different importance, or errors:

[Images: SUM UI process steps, colors and log details]

Additional possibilities, like access to important log files, are available via "tail mode", independent of the backend OS used.


See also SAP Note 2107392 - SL Common UI for AS ABAP scenarios in SUM for additional technical background and current participation in the pilot usage.


 


Best Regards

Roland Kramer, PM SAP EDW (BW/HANA)


New Query Designer in Eclipse with SAP BW 7.4 powered by SAP HANA


SAP BW 7.4 SP9 powered by SAP HANA comes with a first version of a query designer integrated into the BW Modeling Tools (BWMT) in Eclipse. With this feature it is now possible to define not only the InfoProviders of SAP BW 7.4 (Advanced DataStore Object, CompositeProvider, Open ODS View), but also queries with specific functionality in Eclipse. This enables an end-to-end modeling experience in Eclipse.


With the current feature set available with BWMT 1.6 you can choose between various functions:

  • Create structures in the row and column axes of the query
  • Use reusable restricted and calculated key figures
  • Define conditions & exceptions
  • Add currency and unit conversion

 

At present this is a subset of what the BEx Query Designer can offer, and it should be used in addition to that tool when simple queries are sufficient. We’ll further enhance the Eclipse-based query definition in the future.


See a query example in Eclipse below:

[Images: query definition in Eclipse]

Furthermore, a new feature of the BW query in Eclipse is the ability to generate SAP HANA views based on BW queries. (This is also possible with SAP BW 7.4 SP8 via a generation report.)


If a query definition can be expressed as a flat view, the calculation can take place fully in SAP HANA and a corresponding HANA view can be generated. To do so, the query in Eclipse offers a flag to create an “External SAP HANA view”.

 

This will generate a HANA view based on the query definition into a separate HANA schema, which allows consuming the data, for example via SQL, or modeling it further with native HANA views.


[Image: External SAP HANA view flag]


At present, the following query functions are supported for generation into SAP HANA views:

  • Currency/unit conversion, variables, restricted key figures, global restrictions, the inventory key figure (closing balance) and formulas (SAP Note 2080686)
  • All other features will either not be supported, and then prevent the generation of the SAP HANA view, or will simply be ignored during generation
  • Please find a detailed list of the offered functionality and prerequisites here

 

The main use case of the HANA view generation for queries is to leverage the possibilities of the query with regard to conversions, filters and formulas. Having this feature available allows us, for example, to filter at the earliest possible layer if further logic will be added in HANA or if the data is consumed externally via SQL.


[Image: filtering at the earliest possible layer]

See an example of a generated view according to the query definition below:

[Image: generated HANA view]


This can now be used in any kind of HANA modeling or for direct SQL consumption, as shown in the screenshot below:

[Image: SQL consumption of the generated view]
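Since the screenshot does not reproduce here, a hedged sketch of such a consumption statement, assuming the generated view lands under _SYS_BIC in the default target package for external BW views (the package, query and field names are illustrative):

    -- Consume the generated external HANA view of the BW query via plain SQL
    SELECT "0CALYEAR", SUM("SALES_AMOUNT") AS sales
    FROM "_SYS_BIC"."system-local.bw.bw2hana/ZSALES_QUERY"
    GROUP BY "0CALYEAR";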

The following video demonstrates the functionality in the system:

 

The “advanced” DataStoreObject – renovating BW's persistency layer


With BW 7.4 SP08 another “feature package” was delivered, with a lot of new functionality and optimizations based on the HANA platform (see http://scn.sap.com/docs/DOC-35096). In accordance with the BW strategy to deliver the HANA-based innovations and renovations non-disruptively to our customers, SP08 delivered a first step in renovating our persistence layer. With the possibilities of HANA we are able to consolidate all BW InfoProviders with persistent data into a single object with a single and simple database layout. This is similar to the consolidation and simplification in the virtual layer based on the CompositeProvider.

[Image: provider consolidation]

Since its properties, table layout and services are closest to today’s DataStoreObject, we simply kept the name. Only when we need to explicitly distinguish between the current object and the new object do we add “advanced” DataStoreObject or “classic” DataStoreObject. This also emphasizes the role of BW's DataStoreObject as THE central building block for a Data Warehouse architecture.

[Image: advanced DSO table layout]
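On the database, these tables follow a simple naming convention: as a hedged sketch, for an ADSO named ZSALES in schema SAPBWP (both illustrative), /BIC/AZSALES1 is the inbound queue, /BIC/AZSALES2 the active data and /BIC/AZSALES3 the change log. They can be listed from the HANA catalog:

    -- List the generated tables of one advanced DSO
    SELECT TABLE_NAME, RECORD_COUNT
    FROM M_TABLES
    WHERE SCHEMA_NAME = 'SAPBWP'
      AND TABLE_NAME LIKE '/BIC/AZSALES%';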

Major features

New intuitive Eclipse-based modeling UI

The UI renovation continues. The ultimate goal: bring the heavy-weight modeling UIs to the Eclipse world and the administration and monitoring UIs to the web. With the Eclipse UI for the advanced DSO it is now possible to model all InfoProviders of a complete end-to-end scenario in the BW Modeling Tools (from an Open ODS View to the advanced DSO, CompositeProvider and BW Query). All this is deeply integrated and aligned with the HANA Studio and Modeler for all “mixed-case scenarios” and a unique modeling experience.

[Image: Eclipse-based modeling UI]

Combine field- and InfoObject-based modeling

The “higher” up in your architecture an InfoProvider is located, the more important are the rich features of BW's master data modeled in an InfoObject (like re-usability, data consistency, visibility and quality). But for new scenarios the InfoObject approach is a high hurdle, since it requires a top-down approach right from the start. With the Open ODS View we already introduced, in BW 7.4 SP05, a virtual field-based approach to integrate data without the need for InfoObjects. With the advanced DSO this is now also possible for the persistence layer, i.e. you can load data into BW without assigning an InfoObject to each and every field, and without losing functionality.

In combination with the Open ODS View, this approach allows you to integrate new data easily and quickly, switching seamlessly from, e.g., purely virtual data access to a managed persistence with an increasing level of data integrity. All this especially taps into “non-classic BW data”, with the support of additional data types and adapters.

[Image: combining field- and InfoObject-based modeling]

High-frequency data loads – based on optimized request management

The request management is, if you like, the cornerstone of BW's managed approach to Data Warehousing, since it organizes the availability and status of the data from the time it is requested from the source system up to the caching by the Analytic Manager (aka OLAP engine) or the delta buffers of integrated planning. With the size of BW systems growing and growing, the complexity increasing, and the demand for more and more (close-to-) real-time scenarios, we decided to re-write our request and process management for the advanced DSO.

The current request management still works for the “classic” objects (DSOs, InfoCubes, …), and a single dataflow can work with both types as well. The “new request ID” (internally called “TSN” – Transaction Sequence Number) is no longer an INT4 value, but a timestamp plus a generated, increasing suffix. This not only removes the 2-billion limitation (on the system-wide number of requests), but also allows for new logic and new semantics derived directly from the request. The new request management comes with a new “manage UI”, directly accessible from the Data Warehousing Workbench, that allows you to quickly navigate through very large sets of requests and protocols and perform manual tasks or monitoring activities.

Reporting performance optimizations

With the goal to replace the InfoCube, we also had to make sure that some of the more complex (or “exotic”) reporting scenarios perform well. Two features are important to achieve this goal. Only in the case of an INNER JOIN between the table with the facts and the master data table can HANA perform optimally and BW push OLAP operations down to HANA's calculation engine. The check of whether an INNER JOIN can be used (instead of a LEFT OUTER JOIN) is BW's referential integrity check during loading or activating the data (aka SID creation, though it is much more than just determining an INTEGER value). This check exists for the classic DSO as well (the so-called “BEx flag” – the “create SIDs” option). But for the advanced DSO it is possible to set this flag not only at the InfoProvider level, but individually per characteristic of the DSO. The data integrity check is then only executed for the selected characteristics.

The second feature is also related to the JOIN with the master data tables. HANA, too, benefits from the fact that for an InfoCube the JOINs to the master data tables are via INT4 values, compared to STRING JOINs for a DSO. In rare cases the performance difference can be crucial, e.g. if you have a compound, high-cardinality InfoObject like 0MAT_PLANT and the reporting mostly includes navigational attributes of this InfoObject, therefore forcing this JOIN to be executed very often. In such cases you can, for individual InfoObjects(!), turn on persistence of the InfoObject SID in the DSO tables. An additional column will then be created in the active data table to hold the SID, and it will be filled during data activation.

[Image: SID settings per characteristic]

And More

Just to mention a few additional highlights: up to 120 key fields are supported; template-based design using references to classic BW InfoProvider types or best-practice LSA++ templates; integrated remodeling for structural or type changes; DB partitioning on all key fields; … and so on …

[Image: template-based design]

 

Positioning

The advanced DSO will be the central persistency object in BW-on-HANA, replacing especially the InfoCube, classic DSO, HybridProvider and PSA. While there are still some gaps in covering the complete functionality, we recommend considering the advanced DSO as the central (and only) persistency object for all new projects. A widespread conversion of existing scenarios to advanced DSOs is currently not recommended; it should be done only case by case and demand-driven. We plan to provide a conversion tool to support the conversion of objects in future SPs. The current persistency objects are of course still supported and co-exist with the advanced DSO; there are no plans to change this.

 

Roadmap

New minor features will be added and some gaps will be closed in subsequent SPs (SP10, SP11, SP12) – a list can be found attached to Note 2070577. Another “feature package” of the BW 7.4 release is planned with SP13 (scheduled for Q4 2015). This “feature package” shall then close all major gaps between today’s functionality for InfoCubes, DSOs and PSAs and the advanced DSO. Additionally, the advanced DSO is planned to support several new features like Dynamic Tiering (fka Extended Storage) support, direct and bulk load enablement, etc. A conversion tool will then also allow you to convert existing InfoProviders into an advanced DSO.

 

Availability

Please check note 2070577 for the details according to your support package level. Additional and detailed information can be found in the SAP online help.

 

See here for a short video demonstrating the most important aspects.


Prototyping your scenarios with BW7.4-on-HANA in the Cloud – piece of cake with SDATA


Introduction

I hope you have all heard about the exciting new functionality that is offered by BW-on-HANA, especially with the latest feature package BW 7.4 SP08 (SCN or openSAP course). And of course, you know of the easy way to get a free trial version of BW 7.4 SP08 on HANA (link). Now the big question is: how can you test your reporting scenarios, with your data, in such an environment? With the SDATA tool we have made exactly this extremely easy and comfortable!

 

The SDATA tool allows you to export a complete reporting scenario - all data and metadata - from your production, test or development system, and then import it, e.g., into a new trial cloud system. There you can run it, test it, play around with it, and gain experience with the new possibilities not on an abstract and artificial data set, but with your data.

 

I guess I have the attention of most of you by now ;-), so let's have a deeper look at how this works.

 

SDATA

The SDATA tool is available with the following releases:

  • BW7.4 SP08 or higher,
  • BW7.31 SP11 or higher,
  • BW7.30 SP11 or higher.

For pilot scenarios the SDATA tool can be made available on lower SP levels of the above releases. Please see note 2117680 for details.

 

My demo example is a BW Query with sales data, SDATADEMO. It is defined on a MultiProvider and contains a key figure with exception aggregation (Customer Count) and a calculated key figure (Plan-Actual). The source system is on release BW 7.30 on a classic database. I want to test whether this query is sped up by the exception aggregation pushdown if I migrate to HANA on the latest BW release.

[Image: demo query definition]
Scenario Export

Transaction RSDATA allows you to specify a transfer path; in my case the source is a local BW system and the target is a folder on my PC. In the next step I specify my scenario. A scenario can be a simple InfoProvider (an InfoCube or a DSO) or a MultiProvider, but it can also be a specific BW Query with complex calculations attached.

[Image: scenario selection in RSDATA]

Once I have identified one or several scenarios, I start the “object collection”. As part of this process step, all dependent objects that are necessary to run this scenario are collected. For our query scenario this means all required query elements (like CKFs, RKFs, variables, …), the MultiProvider and its definition, the PartProviders and the InfoObjects. In contrast to a metadata transport collection, this collection also includes the data itself. In the resulting list after the collection you see not only an entry for the metadata, but also the transactional data (request-wise) and the master data, including texts and hierarchy data.

 

In this screen I can also deselect parts of the scenario. E.g. since my query only reads data from year 2010 and the “plan” InfoCube, I could deselect the data files (object type DREQ) from the Sales InfoCube 2006 and so reduce the data volume for the transfer.

[Image: object collection]

Once I start the transfer, the data of these objects is written into the specified folder in a compressed, internal format. The data files cannot be read directly since they are compressed; they can only be imported into another BW system where the SDATA tool is available. The metadata files are stored as (“readable”) XML files.

Scenario Import

In the target system I again start transaction RSDATA. This time my transfer path is not from BW system to file, but from file (folder) to the BW system. I then specify the name of my scenario, here the name of my BW Query SDATADEMO.

SDATA reads the files and collects the objects that are included in these files: queries, query elements, InfoProviders, etc. Once I start the object collection, SDATA determines the status of these objects in the target system – in my case it is an “empty” system, so all objects are marked as “does not exist”. Similar to the export process, I can deselect parts of the scenario, e.g. some of the data packages for InfoProviders that are not relevant here.

[Image: status check in the target system]

I can then start the data import – currently only in dialog, but in the future also in batch mode (which will then require the files to be located on the application server). The process creates all InfoObjects, InfoProviders and other TLOGO objects in the correct sequence and then starts the load of transactional and master data.

[Image: data import]

Once the complete scenario is available, I can see my InfoProviders in the Admin Workbench and I can immediately execute the query. But I can also load additional data, re-model the scenario using the new BW-on-HANA objects CompositeProvider and/or advanced DSO, and run additional tests. All this using my metadata and my data!

Enjoy!

 

Some remarks

Please note that SDATA does not replace, and should not be used instead of, the standard BW metadata transports. SDATA is designed only for copying test and prototype reporting scenarios into non-productive systems, e.g. test scenarios within cloud systems!

This example shows the easiest case, because the import is done into an “empty” system. In this case there are no metadata overwrites or merges; everything is newly created in exactly the same way as it was in the source system. If you import a second scenario into this system, the SDATA tool will determine what already exists and doesn't have to be re-imported. If the metadata is consistent between source and target, the import is not problematic; if there are differences, the SDATA tool tries to merge the metadata.

The data is exported and imported package-wise, so technically there are no limits with respect to size. Nevertheless, the missing batch mode and lack of parallelization currently force you to be restrictive here, e.g. by deselecting the data of InfoProviders that are not required. To accommodate long export and import times, it may be necessary to increase the maximum runtime for dialog processes in the system. We have tested SDATA with real customer systems and successfully transferred scenarios with several tens of millions of rows.

SDATA supports the most important InfoProviders like MultiProviders, InfoCubes, InfoObjects and DSOs (active data table only). It does not currently support customer exits or other customizing like currency settings, fiscal year variants, etc. So if you import into an “empty” system like the AWS trial system, you may have to do some customizing first to adapt the system to your specifications. We are working on more tool support in this area as well. For a list of the current restrictions, please see Note 2098307.

Instead of importing into a cloud system, you can of course use the same mechanism for an import into an on-premise sandbox system.

SDATA offers many more options than described in this document. Please refer to the official online help and the more detailed description attached as a PDF to Note 2018326.

SAP BW 7.4 - Feature Overview and Platform Availability


Please find below a list of all major features delivered with SAP BW 7.4. This list will be enhanced and updated with each new SAP BW support package. It is intended as an addition to the overview SAP Notes for OLAP functions, planning features and BW Modeling Tools:

  • SAP Note 2063449 - Push down of BW OLAP functionalities to SAP HANA
  • SAP Note 1637199 - Details about availability of pushed-down planning functionality
  • SAP Note 1905207 - BW Modeling Tools for SAP BW on HANA

 

Feature | Type | HANA only | Available since
HANA-optimized BW Business Content | Enhancement | Yes | BW 7.3 SP5
XXL attributes (characteristic values <= 255 chars, long texts with 1333 chars) | Enhancement | No | BW 7.4 SP5
High-cardinality InfoObject (SID-less InfoObject) | Enhancement | Yes | BW 7.4 SP5
BW Modeling Tools in Eclipse (CompositeProvider, Open ODS View, …) | New Feature | Yes | BW 7.4 SP5
Availability of the new CompositeProvider 7.4 | New Feature | Yes | BW 7.4 SP5
HANA model generation for BW InfoProviders | New Feature | Yes | BW 7.4 SP5
Virtual master data (InfoObjects based on Calculation Views) | New Feature | Yes | BW 7.4 SP5
Inventory key figures for DSO, VirtualProvider, CompositeProvider | Enhancement | Yes | BW 7.4 SP5
OLAP: calculation push-down | Optimization | Yes | BW 7.4 SP5
OLAP: stock coverage key figure | New Feature | Yes | BW 7.4 SP5
OLAP: FIX operator | New Feature | No | BW 7.4 SP5
OLAP: multi-dimensional FAGGR | New Feature | No | BW 7.4 SP5
OLAP: Current Member | New Feature | No | BW 7.4 SP5
PAK enhancements & optimizations | Optimization | Yes | BW 7.4 SP5
Planning on local providers in BW Workspaces | New Feature | Yes | BW 7.4 SP5
Planning function push-down | Optimization | Yes | BW 7.4 SP5
Planning: OData & Easy Query extensions | New Feature | No | BW 7.4 SP5
Planning: support of HANA views for facts and master data | New Feature | Yes | BW 7.4 SP5
Availability of Open ODS Views | New Feature | Yes | BW 7.4 SP5
Availability of Smart Data Access support | New Feature | Yes | BW 7.4 SP5
Availability of the HANA Analysis Process | New Feature | Yes | BW 7.4 SP5
HANA-optimized transformations | New Feature | Yes | BW 7.4 SP5
Open Hub: push data into a connected database | New Feature | No | BW 7.4 SP5
Operational Data Provisioning: PSA becomes optional | New Feature | No | BW 7.4 SP5
Operational Data Provisioning: ODQ for SLT | New Feature | Yes | BW 7.4 SP5
Operational Data Provisioning: DTP for HANA views | New Feature | Yes | BW 7.4 SP5
Operational Data Provisioning: Data Services integration | New Feature | No | BW 7.4 SP5
Data request housekeeping and monitoring | Enhancement | No | BW 7.4 SP5
Monitoring integrated in DBA Cockpit for Sybase IQ | Enhancement | No | BW 7.4 SP5
Optimized query access to NLS data in Sybase IQ leveraging SDA | Optimization | No | BW 7.4 SP5
BW Workspace enhancements: data cleansing | New Feature | Yes | BW 7.4 SP5
Remodeling toolbox enhancements | Enhancement | Yes | BW 7.4 SP5
New Web Dynpro-based master data value maintenance | Enhancement | No | BW 7.4 SP5
HANA-optimized BW Business Content (data flow optimizations) | Enhancement | Yes | BW 7.4 SP5
New source object types for Open ODS Views (transformations, ADSO) | Enhancement | Yes | BW 7.4 SP8
Automated data flow generation for Open ODS Views | Enhancement | Yes | BW 7.4 SP8
Support for UNION CompositeProviders in BW-IP scenarios | Enhancement | Yes | BW 7.4 SP8
Stacked UNION CompositeProviders | Enhancement | Yes | BW 7.4 SP8
CompositeProvider joins with Open ODS Views | Enhancement | Yes | BW 7.4 SP8
Availability of advanced DataStore objects (see SAP Note 2070577 for details about functions and gaps) | New Feature | Yes | BW 7.4 SP8
HANA model generation for BW queries | Enhancement | Yes | BW 7.4 SP8
Generated SAP HANA models able to read data from near-line storage | Enhancement | Yes | BW 7.4 SP8
ABAP-managed database procedures (AMDP) in transformations | New Feature | Yes | BW 7.4 SP8
Availability of SAP HANA dynamic tiering for SAP BW | New Feature | Yes | BW 7.4 SP8
Enhanced NLS operations (archiving proposals, automated DAP generation, NLS DBA Cockpit integration) | Enhancement | No | BW 7.4 SP8
Query pruning for NLS InfoProviders | Enhancement | No | BW 7.4 SP8
NLS support for non-cumulative key figures | Enhancement | No | BW 7.4 SP8
XXL attributes (MIME types, long strings) | Enhancement | No | BW 7.4 SP8
BW search (new Eclipse UI, optimized for HANA) | Enhancement | Yes | BW 7.4 SP8
ODP support of hierarchy loads | Enhancement | No | BW 7.4 SP8
Remodeling toolbox support of SPOs | Enhancement | No | BW 7.4 SP8
Process Chain Monitor (SAP UI5 mobile app) | New Feature | No | BW 7.4 SP8
Query design in the BW Modeling Tools in Eclipse | New Feature | Yes | BW 7.4 SP9

3 tips to survive as a SAP BI consultant in 2015


OK, I admit it. The title may be too alarming, and it was written on purpose to catch your attention. However, you should be worried. If you think the SAP BI consultant role changed dramatically after the BusinessObjects acquisition, you have not yet begun to realize what may be coming.

 

If you are an experienced SAP BI consultant and started your BI career around 1999 or 2001 (as I did, by the way), you are from a time when proposing tools to address our customers' issues went more or less like this:

 

Scenario 1

Customer: “I need a tool to create an ETL (Extraction, Transformation and Load) process”

SAP BI Consultant: “You should use SAP NetWeaver BW (update rule, infopackage, infopackage group, etc.)”


Scenario 2

Customer: “I need a tool to create an operational analysis of my DSO (Days Sales Outstanding)”

SAP BI Consultant: “You should use SAP NetWeaver BW (Query Designer)”


Scenario 3

Customer: “I need a tool to create a beautiful dashboard for my CFO”

SAP BI Consultant: “You should use SAP NetWeaver BW (Web Application Designer – WAD)”

 

Then, in the mid-2000s, SAP invested heavily in analytics, with the improvements in SAP NetWeaver BW 7.0 and the acquisitions of BusinessObjects and Business Planning and Consolidation (BPC), to name the two most relevant.

 

At that point in time, the role of the SAP BI consultant clearly evolved: the “one-man show” that was initially the “BW consultant” had to specialize in - at least - four different roles:

  • A SAP NetWeaver BW consultant, focusing basically on data warehousing
  • A SAP BusinessObjects consultant, focusing basically on the presentation layer
  • A SAP Data Services consultant, focusing basically on the ETL layer
  • A SAP Planning & Consolidation consultant, focusing basically on planning processes

 

I know that some people may have a different view on the number of roles and on my simplistic take on the “history of SAP BI”, but I believe you get my meaning.

 

The fact is, however, that we still see a few SAP BI consultants who did not adapt to that. They are still “BW consultants” or “BO consultants”; they have no holistic view, and they are stuck in a 10+ year-old view of the world, focused on tools and not on solutions.

 

While that is still the case, a new move will change the game once more. This time there will be no evolution, but revolution! With the release of SAP Smart Finance and the S-Innovations, SAP made clear its bold move to realize Hasso Plattner's vision of bringing the transactional and analytical worlds together.

 

At first glance it may look as if this is only marketing, but not when you look at it closely. Reviewing the analytical content provided with Suite on HANA (SoH) and SAP Smart Financials, powered by SAP HANA Live, for example, it is really easy to see that several historical requirements can now be met in a beautiful HTML5-based interface that is subject-oriented (rather than transaction-oriented) and has the unprecedented power to allow insight-to-action analysis. All this in real time and at (almost) line-item level!

 

In short, the “common” operational analytics – which historically have been delegated to SAP BW – are now delivered as part of SoH. What is more, they are nicer, more flexible, more user-friendly and faster than ever before.

 

Once more, I can “see” in the faces of many people reading this text the old (and outdated) question: “Are you saying BW is dead?”. The answer is still no – and I will not go into much detail about this again. The short answer is simple: whenever a customer needs a data warehouse or an enterprise data warehouse, SAP NetWeaver BW (powered by SAP HANA) is still the best choice!

 

Having said that, it is necessary to understand that the role of the SAP BI consultant must change to address the new reality. In short, the major challenges are:

  • Analytics is coming to the Suite. That is a fact: operational analytics, operational KPIs (Key Performance Indicators) and real-time analytics will happen in the Suite;
  • SAP BI solutions are not about tools. They never were and never will be; tools will forever be changing, being updated and being replaced.
  • Not all customers need a complex and sometimes complicated data warehouse solution, but those that need one really need it, and we must not allow adaptations that lose value along the way.

 

With this picture and challenges in mind, here are the 3 tips to survive as a SAP BI Consultant:

  1. Avoid the 5 most common mistakes of Business Intelligence projects
  2. Understand SAP's roadmap, ambitions and strategy. It is not worth fighting against it: SAP has a clear roadmap, ambitions and strategy, and studying it, learning it and bringing it into our daily work is the only way to propose and design long-lasting, simple solutions.
  3. Update your technical skillset. Yes, there are a lot of new tools, concepts and possibilities to study; there is no way around it. You have to get SoH, Smart Business, Simple Finance, etc. into your brain, and you must be able to handle them. The next time your customer asks for a “BI report”, the best solution may be based on a view built on SAP HANA Live, deployed straight into the SAP ERP system, using SAP Fiori to allow easy information consumption via mobile device (and not a SAP BW development with a presentation layer in SAP BusinessObjects).

 

Conclusion

The Business Intelligence subject area is changing. What started with simple data visualization now has to deal with the Internet of Things (IoT), Big Data, real time, mobile, self-service BI, cloud computing and more. It is time to prepare oneself for the challenges to come and to support our customers in making the best of every single opportunity!

 

All the best,

Eduardo

Efficiently managing your data in SAP BW on HANA


 


There is some confusion around the different options available for managing data in BW, so I am writing this to clear up that confusion and, hopefully, arrive at a clear understanding of which options are available and which are best suited for what purpose.

In a typical non-HANA environment, most customers retain all of their data in SAP BW, and some retire data to tape or disk or to a near-line storage solution with a secondary database.

 

When it comes to running SAP BW on HANA, the cost of putting all data in RAM in HANA can be high if the volumes are large. Moreover, not all the data needs to be in memory: typically, only 30-50% of the entire BW data set in an organization is used very actively for reporting and other operations, and hence it is the ideal candidate to fully utilize the in-memory capabilities of HANA. The other 50-70% of the data is infrequently accessed and can therefore be managed on a low-cost plan.

 

SAP BW on HANA offers a number of ways to manage these different data temperatures so you can achieve an overall lower TCO for your investment. For customers this becomes an interesting area, because it encourages the adoption of an archiving policy which, when managed and maintained efficiently, can limit the need to buy more HANA and thus save heavy OPEX costs.

 

Broadly, there are three data temperatures:

[Figure: the three data temperatures - HOT, WARM and COLD]

 

HOT DATA


This is the area where 100% of your PRIMARY IMAGE DATA is in the HANA in-memory space (RAM) and is instantly available for all operations.

In the BW world, this is typically the InfoCubes and Standard DSOs, as they constitute the reporting and harmonization (EDW) areas, respectively, as shown below. They are very frequently accessed for reporting and harmonization purposes and hence are the ideal candidates to be fully in-memory and to benefit fully from the HANA capabilities.

 

[Figure: InfoCubes and Standard DSOs as the reporting and harmonization (EDW) layers]

Although access is frequent, the data that is typically requested for reporting purposes is at most 2-3 years old. Hence this most recently accessed portion of the information is the real hot data that needs to be in memory all the time to deliver top-level performance.
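If you want a quick feel for which tables actually sit in HANA memory, the monitoring view M_CS_TABLES can be queried directly. A minimal sketch in HANA SQL, assuming the usual BW naming convention where custom InfoCube fact tables match /BIC/F% (the pattern and the missing schema filter are simplifications for illustration):

    -- Rough sketch: list column-store tables matching the InfoCube
    -- fact table pattern, ordered by their current in-memory footprint.
    SELECT TABLE_NAME,
           ROUND(MEMORY_SIZE_IN_TOTAL / 1024 / 1024, 1) AS SIZE_MB,
           RECORD_COUNT,
           LOADED
      FROM M_CS_TABLES
     WHERE TABLE_NAME LIKE '/BIC/F%'
     ORDER BY MEMORY_SIZE_IN_TOTAL DESC;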

 

The older data (typically beyond 3 years) is rarely accessed for reporting but still has to be retained for regulatory and compliance purposes. Hence this older data can be safely archived on a very low-cost plan using the COLD data management option with SAP IQ, as explained in the next section.

The data in the PSAs and write-optimized (w/o) DSOs constitutes the staging area and the corporate memory. Although these objects require frequent access, they tend to be used primarily for non-reporting purposes, i.e. for data propagation and harmonization. Hence they can be moved to a WARM store area, which is explained in the next section.

 

The diagram below shows the areas where the HOT, WARM and COLD concepts apply in a typical SAP BW EDW architecture.

 

[Figure: HOT, WARM and COLD areas in a typical SAP BW EDW architecture]

 

Access: VERY FREQUENT operations that run every few seconds, every minute or every hour

Response time: REALLY FAST, fully in-memory

Use case: to provide fast access to queries, data loading, data activation, data transformations and data look-ups

Likely candidates: RECENT DATA from InfoCubes, Standard DSOs and Open DSOs; all master data, transformations and related look-up DSOs

 

COLD DATA


In the context of this document I am only discussing SAP IQ as the cold store, although for BW there are other certified partners providing NLS solutions, such as PBS Software and DataVard. You can search for “NLS” on the partner site at http://go.sap.com/partner.html.

 

This is the area where 100% of your PRIMARY IMAGE DATA is in a SECONDARY DATABASE (ON DISK). The response is slightly slower than HANA, but it still offers reasonably fast READ-ONLY access to the data for reporting purposes, as if everything were in one database.


 

In the BW world, the standard DSOs and InfoCubes constitute the harmonization and reporting layers, but typically only the last 2-3 years of data are requested frequently. The older data (typically beyond 3 years) is accessed very infrequently for reporting but still has to be retained for occasional reporting or for regulatory and compliance purposes. Hence this older data can be safely archived on a very low-cost plan.

 

This is where NLS comes into play. Keeping the existing models and architecture the same, you can remove the older sets of data from these InfoProviders (typically slicing the data by country, region, year, etc.), take them out of the primary HANA database and move them to a secondary low-cost/low-maintenance IQ database. READ access to IQ NLS is in most cases much faster than READ access to traditional databases. For customers running BW on another database (xDB) and using IQ as NLS, the NLS database actually turns into an “accelerator” and provides much faster response times than the primary database.

 

The NLS4IQ adapter in SAP BW offers tight integration between SAP BW and SAP IQ, such that all data management, retirement and control processes can be done through SAP BW using the Data Archiving Process (DAP). Many new enhancements have recently been added with the BW 7.4 SPx releases that help to manage the entire end-to-end archiving life cycle in a much simpler and more efficient way.

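Under the hood, the optimized NLS access in BW 7.4 leverages Smart Data Access (SDA), as listed in the feature overview above. As a rough, hypothetical sketch of what such a connection looks like in plain HANA SQL - the source name, DSN, credentials and table names are all invented here, and in a real BW system the NLS connection is configured by the system rather than created manually like this:

    -- Illustrative only: register an SAP IQ database as an SDA remote source.
    CREATE REMOTE SOURCE "NLS_IQ" ADAPTER "iqodbc"
        CONFIGURATION 'DSN=NLS_IQ_DSN'
        WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=nlsuser;password=secret';

    -- Expose an IQ table in the HANA catalog as a virtual table;
    -- queries against it are federated to IQ transparently.
    CREATE VIRTUAL TABLE "SANDBOX"."VT_SALES_ARCHIVE"
        AT "NLS_IQ"."iqdemo"."DBA"."SALES_ARCHIVE";

    SELECT COUNT(*) FROM "SANDBOX"."VT_SALES_ARCHIVE";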

As for SAP IQ itself, it offers columnar tables for fast read access and up to 90% compression, and it runs on conventional hardware, thus offering overall lower TCO. In addition, it is a highly mature database with a large install base built up over the past 15+ years. Hence it is a trusted environment in which to retire old data as a low-cost/low-maintenance database option while still having all the benefits of accessing it in near real time whenever needed or requested.

 

Also, for historical data the SLAs are usually not the same as for high-availability data, and hence the NLS process helps by moving the bulk of the inactive data out of the primary database into an environment with slightly relaxed SLAs. Secondly, what NLS provides is an online archiving solution, so as the volume grows and data gets older, it can be seamlessly moved out of the primary HANA database. This way you can reduce OPEX by significantly reducing the need to buy more HANA, thus reducing the TCO of the landscape dramatically.

 

Access: SPORADIC; typically data that is older than 2-3 years but still required for regulatory, statistical or compliance reporting

Response time: TYPICALLY 5-10% slower than the HOT store

Use case: data retirement; you REMOVE part of your data (HISTORIC DATA) from your PRIMARY STORAGE and MOVE it to a low-cost database, typically as an archiving scenario, while still making the data available anytime and anywhere with near-real-time access on request

Likely candidates: HISTORIC DATA from InfoCubes, Standard DSOs and SPOs

 


WARM DATA


This is the area where the PRIMARY IMAGE DATA is ON DISK in the PRIMARY DATABASE of the HANA instance, but is always available as and when required or requested. Using this, you can manage your LESS RECENT and LESS FREQUENTLY accessed data more efficiently within the HANA database, such that the data is instantly available for all operations (READ, WRITE, UPDATE, etc.) while still offering lower TCO.


In the BW world, PSAs and write-optimized DSOs constitute the staging area and the corporate memory. The data in the PSAs is valuable as long as it is the newest data. Once it has been loaded to the upper-level targets, the value of that data diminishes, and it is only required if there are discrepancies in the report results and a trace-back/reload is needed. Although some customers do regular housekeeping, the PSAs persist the data for a few days to a few weeks to a few months, depending on the SLAs and policies. Hence their size can grow very quickly, blocking a lot of space and memory which could otherwise be used for other important processes. Similarly, the corporate memory objects are primarily used to support transformations, harmonizations, reconstructions, etc.; hence they are only needed when such activities are taking place.

 

There are two options for implementing the WARM concept:

 

1. Non-Active Concept

 

The non-active data concept has been available since SAP BW 7.3 SP8 and is primarily used to manage the available HANA memory space efficiently.

 

This concept primarily applies to PSAs and w/o DSOs. These objects are partitioned by data request, which means that each complete data request is written to a partition. Once the threshold value for the number of rows in a partition is exceeded, a new partition is created. The default threshold is 5 million rows for PSAs and 20 million rows for write-optimized DSOs.


Using the non-active concept, the PSAs and w/o DSOs can be classified as low-priority objects, so whenever there is a shortage of memory, only the older partitions containing the inactive data are quickly displaced from memory to disk, thus making room for other high-priority objects/processes to use the freed memory. The new/most recent partition of the PSAs and w/o DSOs is never displaced and always remains in memory for the operations that are required as part of the data loading process.


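Technically, this low-priority classification corresponds to the table unload priority in HANA (0-9, where higher values are displaced earlier). A minimal sketch in HANA SQL - the PSA table name is hypothetical, and in practice BW maintains this setting itself when an object is flagged for early unload, so you would not normally run it manually:

    -- Illustrative only: raise the unload priority of a (hypothetical) PSA
    -- table so that it is displaced first when memory becomes scarce.
    ALTER TABLE "/BIC/B0000123000" UNLOAD PRIORITY 7;

    -- Displacement and reload can also be triggered manually for testing:
    UNLOAD "/BIC/B0000123000";       -- push the table out of memory to disk
    LOAD "/BIC/B0000123000" ALL;     -- load all of its columns back into memory

    -- Check the current load state:
    SELECT TABLE_NAME, LOADED
      FROM M_CS_TABLES
     WHERE TABLE_NAME = '/BIC/B0000123000';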

 

Although the concept can technically be applied to InfoCubes and Standard DSOs, this is a HIGHLY UNRECOMMENDED option; please check SAP Note 1767880. Since cubes and standard DSOs are not partitioned by request, the approach of making them low-priority objects and displacing and reloading them does not work efficiently. As they can hold large volumes of data, whenever a load or activation is requested the entire table has to be brought back into memory, and this results in a drop in performance. For these InfoProviders it is ideal either to keep ALL of their data HOT, or to keep the newer data HOT and move the older data sets to a COLD store like IQ using the NLS concept.

 

Access: MEDIUM-FREQUENT data

Response time: REALLY FAST if all partitions are in memory. If displaced partitions have to be reloaded into memory, there is a considerable lag, depending on the data volume and the strength of the infrastructure (see the considerations below).

Use case: to efficiently manage the low-value data (PSAs and w/o DSOs) in the HANA in-memory space and preserve the available HANA memory

Likely candidates: PSAs and w/o DSOs only

 

Some considerations -

* The non-active concept is not a way to retire or store your older data on a low-cost plan; rather, it is a way to manage the limited available memory efficiently, so that when higher-priority objects/processes require memory, the lower-priority objects are displaced and memory is made available for the higher-priority tasks.

* The non-active concept only takes effect when there is a memory shortage. This means that the entire PSA or w/o DSO will always be in memory unless there is a shortage, during which ONLY the older partitions are flushed out of memory to disk, while the recent/new partition is always retained in memory.

* If data has been displaced from the older partitions and a BW process later requires it, these older partitions are reloaded into memory. This causes a considerable lag, depending on the data volume and the strength of the infrastructure. This is one of the key reasons why the non-active concept is not highly recommended in a very active data warehousing environment: pulling data back into memory from disk has negative performance implications.

* The non-active concept does not reduce the required in-memory space, as the objects still occupy it. A large number of such objects can therefore block a significant part of the HANA memory. This is one of the main reasons why we have the Dynamic Tiering solution.

 

2. Dynamic Tiering

 

Dynamic Tiering is ONLY available from SAP BW 7.4 SP8 and HANA 1.0 SPS 09 onwards, and it currently applies only to PSAs and w/o DSOs, with support for advanced DSOs planned for the future.

 

Recall that the non-active concept only takes effect when there is a memory shortage: the entire PSA or w/o DSO is always in memory unless there is a shortage, during which ONLY the older partitions of these objects are flushed out of memory to disk. This means that the recent/new partition always stays in memory and thus occupies space. Also, whenever the older partitions need to be accessed by any BW operation, they are brought back into memory, occupying even more space. So effectively this concept occupies HANA memory at all times, and there is a risk that, if over-utilized, it could result in slower performance and impact other processes.

 

Dynamic Tiering is very different from what the non-active concept offers. In the DT concept, all data of a PSA or w/o DSO is 100% on disk, which means that the entire image of the object resides on the PRIMARY disk. There is no concept of high-priority objects and no displacement mechanism. This effectively keeps the entire data of these objects in a separate low-cost area while at the same time offering an integrated mechanism to access it whenever required, with near in-memory performance.

 


The tables in the DT concept are called extended tables (ET), and they sit on a separate warm store “host” in the same storage system, as shown in the diagram below. Logically, the extended tables are located in the SAP HANA database catalog and can be used as if they were persistent SAP HANA tables. Physically, however, they are located in disk-based data storage that has been integrated into the SAP HANA system. The user sees the entire system as one single database, and the persistence of data written to an extended table is hard-disk-based rather than main-memory-based: any data written to an extended table goes directly to the disk-based data storage.

 

[Figure: extended tables in the dynamic tiering warm store, integrated into the SAP HANA system]
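Conceptually, an extended table is defined with regular HANA SQL once the dynamic tiering service is installed. A minimal sketch - the schema, table name and columns are invented for illustration, and in a BW system these tables are generated automatically when an object is assigned to extended storage:

    -- Illustrative only: create a table whose persistence lives in the
    -- dynamic tiering (extended storage) host instead of main memory.
    CREATE TABLE "SANDBOX"."PSA_DEMO" (
        REQUEST  NVARCHAR(30),
        RECORD   INTEGER,
        MATERIAL NVARCHAR(18),
        QUANTITY DECIMAL(17,3)
    ) USING EXTENDED STORAGE;

    -- The table appears in the normal catalog and is queried like any other:
    SELECT MATERIAL, SUM(QUANTITY) AS TOTAL_QTY
      FROM "SANDBOX"."PSA_DEMO"
     GROUP BY MATERIAL;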

 

DT offers a single, consolidated way of managing the less frequently used and less critical data in a very low-cost manner while still giving performance close to the hot store. This is possible because DT uses main memory for caching and processing, thus offering in-memory performance benefits, while the data in the warm store is accessed using algorithms optimized for disk-based storage, allowing the data itself to be kept on disk. All data load processes and queries are processed within the warm store; this is transparent for all operations, and hence no changes to BW processes are required.

 

Unlike with the non-active concept, SAP HANA main memory is not required for data persistence in extended tables. Dynamic Tiering can therefore optimize main memory resource management even further than the non-active data concept, by completely moving the staging area data from the hot store into a separate low-cost warm store. This has a positive effect on hardware sizing, especially when dealing with large quantities of warm data in PSAs and write-optimized DataStore objects.

 

Access: MEDIUM-FREQUENT data

Response time: MEDIUM FAST; slightly lower performance than the HOT store

Use case: to efficiently manage the low-value, low-frequency data outside the HANA in-memory space and significantly lower the overall HANA memory footprint

Likely candidates: PSAs, w/o DSOs and advanced DSOs only

 

* Currently there are certain limitations to using Dynamic Tiering in a true data centre operation, because of the limited scope of disaster recovery and the limited automation for high availability, but this is intended to be addressed with HANA SPS 10.

 

Summary


Looking at the two warm options, the non-active concept has overheads in terms of HANA memory sizing and can result in performance drawbacks if over-utilized, whereas the Dynamic Tiering concept largely replaces the non-active concept by allocating dedicated disk-based storage that manages big volumes on a very low-cost plan while still delivering near in-memory performance.

 

As for Dynamic Tiering, it is an area that holds current data, is accessed frequently and supports all normal HANA operations (READ, WRITE, CREATE, UPDATE, DELETE, etc.). The DT concept works by differentiating between the less critical and the most critical layers of the EDW, effectively giving the less critical layers a dedicated store while still managing them as an integral part of the solution.


 

As for the COLD store, it is quite clearly an area with very sporadic READ-only access; it is ideally an online archiving solution that retains and maintains historic information on a very low-cost plan. The NLS concept works by differentiating between new data and old data, effectively moving the old data to a low-cost COLD storage solution while maintaining tight integration with the primary database, so that it is always online for reporting.


 

So where are the savings? Let's quickly look at an example:

 

Let's assume customer ABC needs a 1 TB BW-on-HANA system to migrate their current BW system running on a classic database. If ABC retains all of that data HOT, they will need to license 1 TB of HOT store and buy 1 TB of HANA hardware. As volumes and requirements grow, there will be a further need to invest in additional HOT licenses and additional HOT memory hardware.

 

SAP BW on HANA Solution = SAP BW on HANA


If instead we apply the WARM/COLD concepts and enforce a proper data management policy, we can split the data according to usage/value/frequency and keep the less valuable parts in a low-cost storage solution. If we assume a 20:40 split for WARM/COLD, the requirement for the HOT store shrinks to merely 40%: in the 1 TB example, roughly 200 GB go to WARM, 400 GB to COLD and only 400 GB remain in HOT HANA memory. So as volumes and requirements grow, the low-value/low-usage/low-frequency data can be pushed directly to the low-cost storage systems without even impacting the HOT store, thus avoiding the need to invest in any further HOT storage licenses or hardware.

 

SAP BW on HANA solution = SAP HANA (HOT) + Dynamic Tiering (WARM) + SAP NLS IQ (COLD).


So effectively, SAP is offering a fair proposition with different components that complement each other and fit well into the EDW architecture of SAP BW running on SAP HANA, thus providing an efficient way of managing different data temperatures depending on their usage, value and frequency.

#BWonHANA Customer Stories


This is a collection of public customer references - articles, comments, videos, demos, tweets, etc. - related to BW-on-HANA. I've compiled the list for my own purposes but think it might be useful for others too. Please point me - e.g. via Twitter direct message or a tweet including my handle @tfxz - to any article or other valuable reference that is missing. These are only references that I'm aware of, and I plan to update this list over time.

 

Last update: 4 Feb 2015

 

This list is also published here. You can follow me on Twitter via @tfxz.

 

Date | Type | Customer | Summary | Keywords
29 Jan 2015 | tweet (slide) | Bayern München (DE) | Bayern München reports performance improvements of 70% for ERP and 80% for BW | #BWonHANA, #ERPonHANA, performance, migration
28 Jan 2015 | tweet (slide) | Kindred Healthcare (US) | Kindred Healthcare uses #BWonHANA #ASUG #SAPDesignStudio webcast | #BWonHANA, Design Studio
19 Jan 2015 | customer testimonial (2'42") | Velux (DK) | VELUX - Agile BI Video Testimonial | #BWonHANA, #BPConHANA, agility
1 Jan 2015 | article | PMC-Sierra (US) | PMC-Sierra Scales New BI Heights | #BWonHANA, #BPConHANA, performance, migration
27 Dec 2014 | article (in Spanish) | FCC (ES) | SAP announces performance improvements at FCC | #BWonHANA, performance, lower TCO
23 Oct 2014 | video (7'09") | Beyond Technologies (CA) | SAP HANA: Are You Ready? | #BWonHANA
28 Oct 2014 | blog | Siemens Energy (DE) | PAK in a real-case scenario at our customer Siemens Energy | #BWonHANA, PAK
27 Oct 2014 | video interview (5'07") | Johnsonville Sausage (US) | Johnsonville Sausage does TPM on SAP HANA | #BWonHANA implementation, CRM TPM
22 Oct 2014 | slide (TechEd session) | Molson Coors (CA) | Benefits of #BWonHANA as perceived by a customer #SAPtd #dmm141 | #BWonHANA implementation
22 Oct 2014 | slide (TechEd session) | Molson Coors (CA) | Another summary by a customer of a successful migration to #BWonHANA #SAPtd #dmm141 | #BWonHANA implementation
22 Oct 2014 | slide (TechEd session) | PepsiCo (US) | "Cost stays flat" after migrating from 10 TB BW-on-DB2 to #BWonHANA says @PepsiCo. That's the fundamental comparison! #SAPtd #itm134 | #BWonHANA, migration
22 Oct 2014 | slide (TechEd session) | PepsiCo (US) | PCA tool played a big role in migrating a 10 TB #SAPBW instance from DB2 to #SAPHANA in 12h #SAPtd | #BWonHANA, migration
22 Oct 2014 | slide (TechEd session) | PepsiCo (US) | #BWonHANA exceeding expectations @PepsiCo. Look at the poc1 column. #SAPtd #itm134 | #BWonHANA, migration
22 Oct 2014 | slide (TechEd session) | Colgate (US) | Simpler! #SAP trade promotion planning at @Colgate using #BWonHANA #SAPtd #dev104 | #BWonHANA, CRM TPM
22 Oct 2014 | slide (TechEd session) | Colgate (US) | Performance gains in #SAP trade promotion planning at @Colgate using #BWonHANA #SAPtd #dev104 | #BWonHANA, CRM TPM
21 Oct 2014 | slide (TechEd session) | Devon Energy (US) | "One of the smoothest migrations I've ever seen" @DevonEnergy on #BWonHANA at #SAPtd | #BWonHANA, migration
15 Oct 2014 | tweet | BSH (DE) | #BSH runs #SAPBW, #SAPCRM and fraud management in #HANA Enterprise Cloud #HEC - more to be decided #dsagjk14 | #BWonHANA, HEC
15 Oct 2014 | tweet (in German) | BSH (DE) | #dsagjk14 customer keynote BSH: Dr. Sturm reports on improvements with BW on HANA | #BWonHANA, HEC
30 Sep 2014 | article | University of Amsterdam (NL) | The reason why you should go abroad | #BWonHANA
23 Sep 2014 | article | Johnsonville Sausage (US) | Johnsonville Puts the Sizzle into Trade Promotions | #BWonHANA implementation, CRM TPM
3 Sep 2014 | article (interview) | Village Roadshow (AU) | How Village Roadshow Gets the Right Information to the Right People at the Right Time | #BWonHANA implementation
4 Jun 2014 | ASUG session | Hidrovias do Brasil (BR) | Start Doing Real-Time Business Faster in the Cloud | #BWonHANA, HEC
4 Jun 2014 | SAPPHIRE session (19'42") | JetBlue (US) | JetBlue Models the Future with Powerful Technology from SAP | #BWonHANA, #BPConHANA
4 Jun 2014 | ASUG session | Scotts Miracle-Gro (US) | Inside Story of Scotts Miracle Gro's SAP NetWeaver BW on SAP HANA Implementation in the Cloud | #BWonHANA implementation, cloud
19 May 2014 | blog | Evonik (DE) | Evonik - specialty chemicals for the easy life | #BWonHANA implementation, NLS, BWA
6 May 2014 | blog | SHS Group (UK) | BW Powered by HANA for SHS Group: the roadmap for optimised analytics | #BWonHANA implementation
25 Mar 2014 | presentation video (57'55") | Molson Coors (CA) | SAP BW 7.4 Launch: Molson Coors Provides a New Approach to Data Warehousing with SAP BW 7.4 Powered by SAP HANA | #BWonHANA implementation
25 Mar 2014 | video interview (5'21") | Molson Coors (CA) | Molson Coors talks SAP BW on HANA | #BWonHANA implementation
25 Mar 2014 | slides | Molson Coors (CA) | Molson Coors presenting on their #BWonHANA implementation - slides: 1 - 2 - 3 - 4 | #BWonHANA implementation
26 Feb 2014 | article | Wharfedale (UK) | New Offering Helps Organizations Migrate SAP NetWeaver BW to SAP HANA in a Private Cloud | migration, private cloud
27 Feb 2014 | tweet | Novartis (CH) | @Novartis presenting at #SAP #dcode: run a big, multi-node, high-availability #BWonHANA instance for their #Alcon division | scale-out, high availability
16 May 2013 | tweet | Petrobras (BR) | Excellent results & stats on #BWonHANA presented by @petrobras #ASUG2013 #sapphirenow | SAPPHIRE 2013
15 May 2013 | tweet | Medtronic, Adidas, Synopsys, Marathon Oil | Customers Medtronic, Adidas, Synopsys, Marathon Oil in the #BWonHANA Panel #sapphirenow | SAPPHIRE 2013 panel
26 Apr 2013 | 2 slides | Vaillant (DE) | View this slideshow to see how Vaillant group achieved 33% savings with #BWonHANA | -
17 Jan 2013 | blog | Eldorado (RU) | HANA successful in mission to the Eldorado Group in Moscow | #BWonHANA, MAP
15 Nov 2012 | tweet | Lenovo (CN) | #Lenovo took 2.5 months to implement BW on #HANA. Really experience the power of HANA. Kicked out #TeraData. | SAP TechEd 2012
15 Nov 2012 | tweet | Vaillant (DE) | More than 10% of Vaillant's employees use #BWonHANA | SAP TechEd 2012
26 Jun 2012 | blog | Aareal Bank (DE) | Aareal Bank went live with SAP BW Powered by SAP HANA | -
14 May 2012 | video interview (1'30") | Hilti (CH) | Christian Ritter from Hilti talks about their experience adopting BW-on-HANA | margin bridge, HANA benefits
2 May 2012 | blog | various | Real-World Load Performance Results On BW-on-HANA | ramp-up load results
22 Mar 2012 | blog | Puig (ES) | Sweet Smell of Success… PUIG 1st BW on SAP HANA Powered by IBM Customer Live in Spain | ramp-up experience
23 Nov 2011 | blog | Red Bull (AT) | Red Bull's Migration of a BW to a BW-on-HANA System | keynote demo, SAP TechEd 2011