SCN Blog List: SAP BW Powered by SAP HANA

BW on HANA Elevator Pitch


So you find yourself in the proverbial elevator with a colleague.  Let’s say it is someone sceptical about why your organisation should move to BW on HANA:

 

Colleague: I just don’t see the point of us migrating our existing BW system to HANA.


You: Why is that?


Colleague: Well, our data loads and reports are acceptably fast already, we’ve designed and tuned them carefully over the years.  We don’t need real-time reports either.  I just don’t see any other benefits.


You: …

 

Do you know what you might say?  Without a specific business problem or opportunity in mind (“our pricing process takes 5 days to run each month”), how would you describe the generalised benefits?

 

Up front we need to make the reasonable assumptions that you can move to a minimum version of BW 7.4 SP8 on HANA SP8, and that you can redesign some areas of your BW system to follow LSA++ design principles where it makes sense to do so.  Given these assumptions you should expect at least four main benefits, many of which are interrelated, summarised as S.O.A.P.:

[Image: the four S.O.A.P. benefit areas]

Simplicity

Simpler means cheaper to design and develop:

 

  • BW on HANA has fewer modelling objects.  The Composite Provider, Advanced DSO and Open ODS View replace most other data provider objects.  There can be fewer modelling layers because you can stage less and virtualise more.  You can report directly on lower inbound layers, even when the data just exists as fields.  A further aspect to consider is that in BW today there may be complex designs that exist purely to enable performance – it may be that these can also be simplified.
  • HANA comes with a lot of functionality built in, for example Spatial Analytics, Text Analytics (like sentiment analysis), Predictive Analysis and its own XS Engine for native application development.  In fact HANA can be thought of as a platform in its own right.  If you need any of these features it certainly simplifies your architecture to not have to buy or build them.
  • Having a single vendor means you should expect long term benefits of deeper integration between application and database.

 

Open standards

Open standards means connectivity, libraries and skills should be readily available:

 

  • SQL, JavaScript for backend and frontend processing and OData are open standards.  Skills in these areas should be already available, or easily learned.
  • HANA offers open connectivity such that data can be exposed to a wide set of SQL-based clients (see the sketch after this list).
  • Open standards also means a huge existing ecosystem of tutorials, forums and libraries is available to support development, in particular for JavaScript.
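
As a concrete illustration of that openness, here is a minimal sketch of a query that any generic SQL client (connecting via ODBC or JDBC) could run against a HANA calculation view.  The view path and column names are hypothetical; activated HANA views do appear as column views under the _SYS_BIC schema:

-- query a (hypothetical) calculation view exposed under _SYS_BIC
SELECT "CALMONTH", SUM("SALES_AMOUNT") AS "SALES"
  FROM "_SYS_BIC"."sales.reporting/CV_SALES"
 GROUP BY "CALMONTH"
 ORDER BY "CALMONTH";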

 

Agility

Agility in software is a much-used term, and means different things to different people.  Here I use it to mean quicker development and quicker deployment of both new builds and changes.  This is a big benefit, and from a business perspective it could be the most tangible.  That’s why this benefit gets the emphasis box around it in the diagram:

 

  • Simplified BW data flows (fewer layers) means faster development.  More importantly, you can mix BW data flows with HANA models, and HANA models can be adjusted without huge regression test obligations.
  • HANA models can be adjusted and deployed without data reloads.  Imagine not having to deal with all the hassle of planning and testing data reloads.  In addition, if you structure your layers of HANA models to minimise regression impact, you should be able to move them live more readily.  Both of these give us faster deployment.
  • BW on HANA also offers us new modelling options not available before, the so-called hybrid scenarios where BW and HANA functionality can be used together.  For example, you might have a wonderful set of harmonised, high quality master data in BW.  You can expose that to native HANA models, and use these models as agile data marts to quickly report on e.g. non-SAP transaction data merged with that BW master data, as sketched below.
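
To make the hybrid idea tangible, here is a minimal sketch in HANA SQL.  All object names are hypothetical: a table of non-SAP point-of-sale data loaded directly into HANA is joined to a view exposing harmonised BW material master data:

-- join non-SAP transaction data to harmonised BW master data (illustrative names)
SELECT m."MATERIAL_GROUP",
       SUM(s."QUANTITY") AS "TOTAL_QTY"
  FROM "EXTDATA"."POS_SALES"     AS s   -- non-SAP data loaded straight into HANA
  JOIN "BWVIEWS"."MATERIAL_ATTR" AS m   -- view over BW-managed material attributes
    ON s."MATERIAL" = m."MATERIAL"
 GROUP BY m."MATERIAL_GROUP";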

 

Performance

Lastly there is the benefit that things generally go faster in HANA. In our example, your colleague is not so interested in this (for the reasons they gave) but there can still be benefits here:

 

  • With just a technical upgrade you can expect to see faster loads, activations and perhaps queries, all “for free” (look through some of the customer stories).  Crucially, there is also the opportunity for further performance improvement, for example code push-down during loading (see the ABAP for HANA course), something not available before HANA.
  • If you already use BIA/BWA then query performance may not change greatly.  This depends on what BW/BWA version you’re moving from, as for example push down of exception aggregation calculations could make a big difference to certain queries.
  • There is the opportunity for reporting to be closer to real time.

 

So there we have it. SOAP doesn’t just stand for the boring old Simple Object Access Protocol. To explore the benefits of BW on HANA further a good place to start is the BW on HANA OpenSAP course.

 

Your colleague may make a mental note not to be left alone in an elevator with you again, but hopefully they’ve got a broader idea of the potential benefits of a migration.


Align SAP BW with SAP BO authorizations (automatically)


Currently my customer is in the middle of a migration from SAP BW 7.x towards SAP BW on HANA.

This is not a database migration, but the implementation of a new (greenfield) SAP BW system based on best practices.

One of the best practices we’re currently implementing is the alignment of SAP BW and SAP BO authorizations.

 

When authorizations need to be assigned for SAP BW and SAP BO, this assignment needs to be executed in both systems separately.

Needless to say, simplifying this process would be a great time saver when many users require (updated) authorizations.


Pieter Verstraeten, one of the members of ‘my’ migration team, came up with a great manual. This manual explains how to align SAP BW and SAP BO authorizations, based on an example containing the following three types of users:

- End Users: able to execute reports (BW and BO) within a specific subarea

- Expert Users: able to create BEx queries and BO reports within a specific subarea

- Key Users: able to create BEx queries and BO reports within a specific area

 

If I have sparked your interest in how to assign SAP BW roles to users and have them automatically gain access to the SAP BO environment, feel free to download Pieter’s manual here.

S/4HANA and Data Warehousing


One of the promises of S/4HANA is that analytics is integrated into the [S/4HANA] applications to bring analyses (insights) and the potentially resulting actions closely together. The HANA technology provides the prerequisites, as it can easily handle "OLTP and OLAP workloads". The latter is sometimes translated into a statement that data warehouses become obsolete in the light of S/4HANA. However, the actual translation should read: "I don't have to offload data from my application into a data warehouse anymore in order to analyse that data in an operational (isolated) context."  The fundamental point here is that analytics is not restricted to pure operational analytics. This blog elaborates on that difference.

To put it simply: a business application manages a business process. Just take the Amazon website: it's an application that handles Amazon's order process. It allows users to create, change and read orders. Those orders are stored in a database. A complex business (i.e. an enterprise) has many such business processes, thus many apps that support those processes. Even though some apps share a database - like in SAP's Business Suite or S/4HANA - there are usually multiple databases involved in running a modern enterprise:

  • Simply take a company's email server, which is part of a communications process. The emails, the address book, the traffic logs etc. sit in a database and constitute valuable data for analysis.
  • Take a company's webserver: it's a simple app that manages access to information on products, services and other company assets. The clickstream tracked in log files constitutes a form of (non-transactional) database.
  • Cash points (till, check-outs) in a retail or grocery store form part of the billing process and write to the billing database.
  • Some business processes incorporate data from 3rd parties like partners, suppliers or market research companies meaning that their databases get incorporated too.

The list can easily be extended when considering traditional processes (order, shipping, billing, logistics, ...) and all the big data scenarios that arise on a daily basis; see here for a sample. The latter add to the list of new, additional databases and, thus, potential data sources to be analysed. From all of that, it becomes obvious that not all of those applications will be hosted within S/4HANA. It is even unlikely that all the underlying data is physically stored within one single database. It is quite probable that it needs to be brought either physically or, at least, logically to one single place in order to be analysed. That single place hosts the analytic processing environment, i.e. some engines that apply semantics to the data.

Now, whatever the processing environment is (HANA, Hadoop, Exadata, BLU, Watson, ...) and whatever technical power it provides, there is one fundamental fact: if the data to be processed is not consistent, meaning harmonised and clean, then the results of the analyses will be poor. "Garbage in - garbage out" applies here. Even if all originating data sources are consistent and clean, the union of their data is unlikely to be consistent. It starts with non-matching material codes, country IDs or customer numbers, stretches to noisy sensor data, and goes up to DB clocks (whose values are materialised in timestamps) that are not in sync - simply look at Google's efforts to tackle that problem.

In summary: while analytics in S/4HANA is operational, there are two facts that make non-operational (i.e. beyond a single, isolated business process) and strategic analyses challenging:
  1. It is likely that enterprise data sits in more than one system.
  2. Data that originates from various systems is probably not clean and consistent when combined.

A popular choice for tackling that challenge is a data warehouse. It has the fundamental task of exposing the enterprise data in a harmonised and consistent way ("single version of the truth"). This can be done by physically copying data into a single DB and then transforming, cleansing and harmonising the data there. It can also be done by exposing data in a logical way via views that comprise code to transform, cleanse and harmonise the data (federation). Both approaches do the same thing, simply at different moments in time: before or during query execution. But both approaches do cleanse and harmonise; there is no way around it. So data warehousing, either physical or logical, is a task that does not go away. Operational analytics in S/4HANA cannot and does not intend to replace the strategic, multi-system analytics of a physical or logical data warehouse. This should not be obscured by the fact that they can leverage the same technical assets, e.g. HANA.
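
To make the physical-versus-logical distinction concrete, here is a minimal sketch in plain SQL of the logical variant: a federated view that harmonises customer data from two source systems with non-matching country codes at query time.  All object names are hypothetical:

-- harmonise two source systems' customers at query time (illustrative names)
CREATE VIEW "EDW"."V_CUSTOMER_HARMONISED" AS
  SELECT "CUSTOMER_ID", "COUNTRY" AS "COUNTRY_ISO"
    FROM "SRC_A"."CUSTOMER"                  -- already uses ISO country codes
  UNION ALL
  SELECT K."CUSTOMER_ID", M."ISO_CODE" AS "COUNTRY_ISO"
    FROM "SRC_B"."KUNDEN"    AS K            -- legacy country codes
    JOIN "EDW"."COUNTRY_MAP" AS M            -- centrally maintained mapping table
      ON K."LAND" = M."LEGACY_CODE";

A physical data warehouse would execute the very same mapping logic in an ETL job before query time; the cleansing work itself is identical.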

This blog has deliberately remained neutral regarding the underlying product or approach used for data warehousing. This avoids mixing up technical product features with general tasks. In a subsequent blog, I will tackle the relationship between S/4HANA and BW-on-HANA.

 

This blog has been cross-published here. You can follow me on Twitter via @tfxz.

DMO: optimizing system downtime ...


Increasing DMO performance


Since the DMO functionality became available within the SUM framework, a lot has happened under the hood as well. Besides the SAP First Guidance - Migration BW on HANA using the DMO option in SUM (which is updated constantly), it is also worth having a look at the official Database Migration Option (DMO) of SUM 1.0 SP12 guide for some performance improvements available right out of the box with DMO.

 

Oracle-based systems in particular, which cover the majority of our customer base, need some performance improvements right from the beginning to ensure a stable and performant migration to BW on HANA, regardless of system size or other challenges along the way.

 

Performance Optimization: Table Migration Durations

You can provide the Software Update Manager with information about table migration durations from a previous DMO run. SUM uses this data to optimize the performance of subsequent DMO runs on the same system. Although SUM does consider the table sizes for the migration sequence, other factors can influence the migration duration. In other words, more criteria than just the table size have to be considered for the duration of a table migration.

The real durations are the best criterion, but they are only known after the tables have been migrated.

 

During a migration, SUM creates text files with the extension LST which contain the information about the migration duration of each migrated table.
The files are created in the directory SUM/abap/htdoc/ and are called:

- MIGRATE_UT_DUR.LST for the uptime migration, and

- MIGRATE_DT_DUR.LST for the downtime migration

 

This can improve the downtime by up to 50%, thanks to the more "aggressive" table splitting in subsequent runs based on the results in the LST files.

 

[Image: SUM phase PREP_CONFIGURATION_INITSUBST]

To provide SUM with these migration durations to optimize the next DMO run, proceed as follows:

1. Copy the above-mentioned LST-files to a different directory (such as the download folder) to prevent them from being overwritten.

2. Create the file SAPup_add.par in directory SUM\abap\bin and add to this file the following parameter as content:

/clonepar/clonedurations=<absolute_path>\MIGRATE_UT_DUR.LST, <absolute_path>\MIGRATE_DT_DUR.LST

(<absolute_path> is the placeholder for the directory to which you copied the LST files in step 1.)
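
For illustration only, assuming the LST files were copied to a hypothetical folder D:\dmo_durations, the resulting SAPup_add.par entry would read:

/clonepar/clonedurations = D:\dmo_durations\MIGRATE_UT_DUR.LST, D:\dmo_durations\MIGRATE_DT_DUR.LST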


Whatever you find out: you can trust the DMO optimization when it comes to table splitting. The algorithm behind it is smarter than you think.

     Overruling this via a manual setup of the file EUCLONEDEFS_ADD.LST is technically possible on request, but not advisable due to the manual overhead.

     The SAP First Guidance - Migration BW on HANA using the DMO option in SUM also describes how to use the file SUM/abap/htdoc/UPGANA.XML to optimize the runtime further.


Don't forget the BW housekeeping tasks before you start the DMO procedure, and don't underestimate the importance of saving time and space! The blog http://scn.sap.com/docs/DOC-55138 gives you a good overview of the "waste" you have collected in your BW system. Together with the manual table splitting option, you can take over tables without content. See also: SAP First Guidance - BW Housekeeping and BW-PCA.

 

Oracle: Suppress Long-Running Phases EU_CLONE_DT_SIZES/EU_CLONE_UT_SIZES

During the update with DMO, the following phases can be long-running:

- EU_CLONE_DT_SIZES

- EU_CLONE_UT_SIZES

 

In the course of these phases, the system updates the database statistics regarding the space that the tables occupy on the database. The aim is a better distribution of the tables during the system cloning.

Before you start the update, you have the option to suppress these long-running phases using the following procedure:

 

1. Log on to the host where the Oracle database instance is running. Use user ora<dbsid> on UNIX systems, or user <sapsid>adm on Windows.

2. Open a command prompt and execute the following command:
brconnect -u / -c -f stats -o <schema_owner> -t all -f allsel,collect,space -p 8

3. Add to file SAPup_add.par the following line: /ORA/update_spacestat = 0

The file SAPup_add.par is located in the subdirectory SUM/abap/bin of the SUM-directory. If this file does not exist yet, create it.

In this way, you can suppress the above mentioned phases.

 

Especially Oracle-based systems (still the largest share of SAP customer installations) need special attention to the DB statistics, no matter which version you are running.
     "An old statistic is a dead statistic." Old can mean 10 seconds, or an empty table as well. You can always see in transaction SM50 which table is busy, and run an updated statistic with transaction DB20. This can already help a lot, but of course it can be time consuming, so have a look at the following SAP Notes as well.

 

Don't go by the SAP Note titles, and don't mix them up with recommendations for manual heterogeneous system copies.
     DMO is so highly optimized that no custom-built migration script or monitor would reach its performance, and such tooling is not supported in this context anyway.

Note 936441 - Oracle settings for R3load based system copy

Note 1045847 - ORACLE DIRECT PATH LOAD SUPPORT IN R3LOAD

Note 1918774 - Performance issues when running a SAP Installation / System Copy

 

  So for most systems, this example of the file SUM/abap/bin/SAPup_add.par would increase the performance a lot:

/clonepar/imp/procenv = HDB_MASSIMPORT=YES

/clonepar/indexcreation = after_load

/clonepar/clonedurations =

/ORA/update_spacestat = 0

 

SAP Kernel handling - always use the latest version of the R3* tools and LibDBSL

During the migration to SAP HANA, DMO of SUM has to deal with three kernel versions. These are, in detail:

Kernel currently used by the system for the source DB (e.g. 7.20 EXT for Oracle)

Kernel for the target release and source DB (e.g. 740 for Oracle - used for shadow system)

Kernel for the target release and SAP HANA

The kernel currently used by the system can usually be found in /sapmnt/<SID>/exe/...

The other two target kernel versions (for AnyDB and SAP HANA) can be found in the SUM directory.

At the beginning of the migration process those directories look like this:

SUM/abap/exe contains the target kernel for AnyDB

SUM/abap/exe_2nd contains the target kernel for SAP HANA


During downtime (phase MIG2NDDB_SWITCH) the directories are switched. After the switch it looks like this:

SUM/abap/exe (target kernel for AnyDB) has been moved to SUM/abap/exe_1st

SUM/abap/exe_2nd (target kernel for SAP HANA) has been moved to SUM/abap/exe

  As usual in SUM, later on (phase KX_SWITCH) the kernel is copied from SUM/abap/exe to the system.


Together with the R3* tools, always exchange the LibDBSL as well, for both the source and target DB.

     Currently, for kernel 7.42 (which is needed for SAP BW 7.40 SP08 and higher), these latest patches are needed:

Note 2124912 - R3load sporadically produces empty task files

Note 2118195 - R3load aborts during unicode conversion and declustering

Note 2130541 - SAP HANA: Deactivate fast data access in a non-Unicode system


With the release of SUM 1.0 SP13, a new, functionally improved and consolidated UI5 interface will also be available for all SUM/DMO procedures (except dual stacks).

     See also the blog - Upgrade was never easier ...

 

[Image: the new DMO UI5 interface]

 

Best Regards

Roland Kramer PM BW/In-Memory

How to Copy HANA CompositeProviders


SAP BW 7.4 SP8 has added more capability to the new type of CompositeProvider (loosely called the HANA CompositeProvider). Hence, we're seeing this type of CompositeProvider used more frequently and preferred over MultiProviders. Unfortunately, at least in SP8, there is no standard way to copy these CompositeProviders. I have written a program to fill this gap, and this blog post contains a copy of that program. The program allows a user to specify the CompositeProvider to be copied, the InfoArea into which the new CompositeProvider will be placed, and the name of the new CompositeProvider. The selection screen looks like this:

[Image: ZBW_HCPR_COPY selection screen]

The program works by invoking the underlying ABAP classes of a HANA CompositeProvider, namely CL_RSO_HCPR and CL_RSO_HCPR_VERSION. The underlying tables that store the definition of a CompositeProvider are modeled a bit differently from the tables for the old BW objects (e.g. DSO, InfoCube). In particular, an XML representation of the object is now stored in a table. For CompositeProviders, the main configuration table is RSOHCPR. The field XMLUI of this table contains the XML definition of a CompositeProvider. Hence, a good deal of the code in the attached program is for manipulating the XML string to change the name of the CompositeProvider and its InfoArea.
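
For the curious, the stored definition can be inspected directly. A minimal sketch in SQL, assuming a CompositeProvider named ZMYCOMP (a placeholder); field names besides XMLUI follow the usual RSO* table conventions, so verify them in SE11 before relying on them:

-- peek at the XML definition of a CompositeProvider (verify field names in SE11)
SELECT HCPRNM, OBJVERS, XMLUI
  FROM RSOHCPR
 WHERE HCPRNM  = 'ZMYCOMP'   -- placeholder CompositeProvider name
   AND OBJVERS = 'A';        -- 'A' = active version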

BW on HANA Useful Design Patterns


When you use BW on HANA, there are more options available to you for data modelling.  There are design patterns available that are not possible in BW on non-HANA scenarios.  Some of these have been mentioned before on SCN and in the BW on HANA OpenSAP course (see week 3).

 

However, I'd like to highlight some more patterns that you may not yet be familiar with.  The document linked below is a PDF of potentially useful design patterns for BW on HANA scenarios:

 

     BW on HANA - Useful Design Patterns

 

What the document covers is summarised below:

 

1) Activation-less Loading / Near-Real Time BW Master Data

This pattern allows the loading of BW master data without a master data activation step.  This is useful if you want to provide near real-time master data.

 

2) Navigational Attribute of a Navigational Attribute

This pattern allows modelling a navigational attribute of a navigational attribute.

 

3) Horizontal Partitioning of BW Master Data

This pattern involves partitioning master data records in a similar way to the logical partitioning of InfoProvider data.  Master data can be distributed across multiple smaller InfoObjects and unioned together. This could be useful when there are very large master data volumes.

 

4) Vertical Partitioning of BW Master Data

This pattern involves partitioning master data records by attribute.  Each partitioned InfoObject has a different structure but they share the same key.  This is useful for template configurations where separate global / local master data designs are desirable, but they need to be viewed as one object in reporting.

 

5) Modelling Lifetime to Date Key Figures

Sometimes “inception to date” or “lifetime to date” key figures are required in reports over document transaction data.  This design pattern describes one possible solution to this (without using any balance carry forward or non-cumulative concepts).

 

6) Handling BW Extraction and HANA Live Replication of Similar Data

There is a design challenge when the same source ECC data is extracted via HANA Live/SLT and via normal BW extractors.  There is redundancy if the same data comes into BW via two different routes.  This design pattern explains the problem and a possible solution for certain cases.

S/4HANA and #BWonHANA


For many years, the absence of powerful analytic modeling and processing within SAP's R/3 and Business Suite applications led customers to install an instance of Business Warehouse (BW) next to such a system. Essentially, BW closed the analytic gap of those solutions. To that end, data was loaded from R/3 or the Business Suite into BW. Maintaining those load processes and two systems, plus the resulting time lag between data being created in R/3 or the Business Suite and that data being visible in BW reporting, was the price of this workaround. Now that operational analytics is possible within S/4HANA, this workaround has become obsolete. This is sometimes misperceived as BW becoming obsolete. Let's look into the situation to understand what is dead and what is alive.

In my recent blog on S/4HANA and Data Warehousing, I made the point that data warehouses - in general - are still necessary, even with the advent of operational analytics in S/4HANA. Here, I'd like to tackle the topic from BW's point of view. To that end, it is important to understand that BW can be deployed in two ways; fig. 1 shows this visually:

  • embedded: into any application that runs on Netweaver, like SAP CRM, SCM, HR, FIN etc., and
  • stand-alone: as an enterprise data warehouse (EDW) that harmonises data originating from various, disconnected systems so that it can be analysed in a consistent way. This provides an understanding across the many disparate business processes that exist in an enterprise.

Figure 1: Deployment options for BW.

 

So in both deployment options, BW exists and can be used for operational or cross-system analytics. This leads to 4 potential situations, also depicted in fig. 2:

  1. Embedded, used for operational analytics
    • BW reuses application's storage, semantics, security.
    • No redundancy.
  2. Stand-alone, used for operational analytics
    • Data, semantics and security are copied to BW as a workaround because no analytics options are available in the app.
    • This is the case that should be replaced as it is sufficient but not necessary.
    • It is not ideal but a workaround to fill a gap within the app.
  3. Embedded, used for cross-system analytics
    • BW within Netweaver (on which S/4HANA, SAP Business Suite, SAP CRM etc are built) is used as a DW.
    • Technically this is possible.
    • However, it is currently not recommended due to concerns around workload and governance of the system.
  4. Stand-alone, used for cross-system analytics
    • BW as a stand-alone data warehouse.
    • This is the most frequent deployment of BW.

Figure 2: Theoretical combinations of BW use cases and BW deployment options.

 

Case 2 is solved in S/4HANA by replacing it with case 1 - not a prominent fact, but a fact nonetheless. This frequently leads to the misunderstanding that "BW is obsolete", which should actually read that case 2 is mostly obsolete (in the context of S/4HANA). Furthermore, case 4 continues to be valid, as does any data warehouse approach that blends S/4HANA data with data from other systems for deeper insight into what is going on in the enterprise. The advent of IoT scenarios makes this even more imperative than before.

 

This blog has been cross-published here. You can follow me on Twitter via @tfxz.

SAP HANA enables Dairy Farm to make better business decisions


Some BW on HANA projects start with a narrow focus on improved performance. Pan-Asian retailer Dairy Farm took a much broader approach, incorporating their BW on HANA projects into a focus on process innovation and SAP platform enhancement.

 

Prior to undertaking their BW on HANA project with Bluefin Solutions, Dairy Farm’s corporate leadership had already initiated a program of retail process improvement. Along a different track, Dairy Farm’s technical leadership was reviewing their SAP installation, deciding whether their SAP system could support modern analytics requirements. Eventually, the two re-evaluation projects converged. To support these bigger goals, David West (BI Group Manager) and his team decided to upgrade their reporting systems via a multi-region BW on HANA deployment.

Dairy Farm’s re-assessment of their business and technical systems also provided a business case framework for the BW on HANA projects. As West and his team proceeded with the first BW on HANA rollout in Indonesia, they understood how improved reporting fit into Dairy Farm’s corporate goals. The lessons learned from the Indonesia project, now nearing go-live, will be applied to Dairy Farm’s upcoming Chinese rollout. In China, BW on HANA will be packaged and tied into a new SAP installation.

 

Dairy Farm’s BW on HANA projects are based on two guiding principles: modernize performance and expand reporting across the user base. “Prior to HANA, our BW system was not getting it done,” recalls West. “We have a lot of reports that run against granular data, which was not feasible from a performance standpoint. To compete in retail, we need our users to have access to self-service information. HANA looked like a feasible way to provide fast, direct access to daily market information for a mass user base.”

 

To achieve their process improvement goals, Dairy Farm must transform into what West calls “an analytics culture.” West sees BW on HANA as a way to help drive that transformation. “In the past, access to BW was limited to a few business analysts who were Excel experts,” he says. “Their reports used to take one to two hours to complete - now that’s down to 15 to 20 seconds.” One of West’s first self-service projects? Merchandising. “Quick access to information is one of our success measures,” says West. “People can go in and check their numbers. We now have 50 to 60 people doing it. Before BW on HANA, it was only a handful of people sending out PDFs.”

 

Due to improved reporting speeds, West’s team can now move to a “push” model where information is shared with users via a BusinessObjects UI. Currently 130 people are being trained on how to use these analytics capabilities within Indonesia, including Dairy Farm’s convenience, health and beauty divisions.

 

West knew that user adoption of the new BW on HANA system was critical - so far, so good. “We need to get our older systems decommissioned quickly,” explains West. “That means getting user uptake just as quickly. So far, we’ve gotten very good feedback on performance. For users, reliable access is the main thing. Getting the right design patterns to optimize BW on HANA is our biggest challenge. As much as possible, you want your models to run against the base data. We’ve now achieved a solid foundation; we’re quite happy with the design patterns we’ve developed, which force the processing down to the HANA layer where the speed benefits lie.”

 

To measure the HANA benefits, West and company will return to the fundamentals of their business case. “This is part of an overall SAP retail change initiative,” says West. “With this new system, we are now getting transparent margin data for the first time, which enables better pricing decisions. That’s a superior operating model.” Examples of retail metrics Dairy Farm will track in their pending China rollout include sales improvement, margin improvement, and stock reduction. West’s team will evaluate the success of BW on HANA in the context of the company’s SAP retail change goals.

 

Other vital metrics include user adoption and reinforcing executive buy-in. User adoption could include bulk stats like the number of system queries, but West’s ultimate goal is to see users embracing an analytics culture. “We want to train people to use information access to make better decisions,” says West. “For example, taking action on an out-of-stock issue. We want to make it simpler for users to get to problems quickly, or identify opportunities.”

 

As the Dairy Farm pushes their retail envelope further, West anticipates an even more compelling HANA business case. “I expect we’ll look at real-time retail scenarios like dynamic pricing, e-commerce processing, and real-time inventory management,” says West.  “That could be transformational.”

Find ASUG BW HANA Content and more - at ASUG Annual Conference


Here is a Spring update from our ASUG EDW/BW SIG, which is led by ASUG volunteers Allison Levine and Steve Ruggiero.


In this issue:

  • Upcoming: SAP HANA and BW related webcasts
  • ASUG Annual Pre-Conference SAP BusinessObjects BI 4.1 with SAP NetWeaver BW Powered by SAP HANA – Deep Dive May 4th
  • BW/EDW/HANA Sessions to Consider for ASUG Annual Conference
  • Coming soon: Call for Speakers for ASUG sessions at SAP TechEd

Upcoming SAP HANA and BW related webcasts:

Please join the ASUG BI Community and the Enterprise Data Warehousing SIG for the following upcoming webcasts – not to be missed:

 

 

Have questions about migrating SAP BW to HANA?  Ask an expert on April 8th by attending our webcast.  If you cannot make it, submit your questions here in advance.  You need to be logged on to ASUG.com

 

 

In case you missed our past webcasts and want to review them, check them out here (ASUG logon required).

 

Join us May 4th for the ASUG Annual Conference Pre-Conference: SAP BusinessObjects BI 4.1 with SAP NetWeaver BW Powered by SAP HANA – Deep Dive

 

Register today:

 

Previous pre-conference sessions have sold out; please see comments on the last pre-conference session from SAP TechEd:

 

“Ingo Hilgefort is a SAP Superstar and the most knowledgeable SAP person on this topic.”

 

“Ingo Hilgefort did a great job explaining the different tools of BI Suite.  This workshop demonstrated his technical knowledge of the topic and he and the entire team was helpful during activities.  The activities material was very well documented and I will use it for future reference.”

 

“Ingo Hilgefort - great product knowledge”

 

“ Ingo Hilgefort very knowledgeable on the topic good communicator, clear and understandable”

 

ASUG is offering a first-of-its-kind workshop on how to use the latest BI tools, including Analysis Office 2.0, Design Studio and Lumira, with BI 4.1/BW/HANA.  Register today!

[Image: pre-conference workshop photo with Ingo Hilgefort]

Source: ASUG

 

ASUG volunteer and SAP Mentor  Ingo Hilgefort is leading this session.  Could this be you in this picture with Ingo?

 

BW/EDW/HANA Sessions to Consider for ASUG Annual Conference

 

Several sessions are in this track for you to consider – have you registered? (sapandasug.com)

BI536 – The impact and role of Logical Data Warehousing in the modern age of in-memory based Enterprise Architectures

BI328 – Migration of SAP NetWeaver Business Warehouse to SAP NetWeaver BW powered by SAP HANA

BI1956 – Graybar Uses Cloud to Jump Start HANA Future

BI490 – How ULTA used BW on HANA Mixed Scenario for Effective Promotion Analysis

BI120 – Business Benefits Gained by TOMS Shoes by Migrating SAP BW to HANA

BI2324 – SAP HANA Roadmap

BI378 – How NBA.com Delivered Real-Time Statistics to Fans with SAP HANA

BI474 – Big Data Analytics Using SAP HANA Dynamic Tiering

BI180 – SAP HANA Live and SAP BW Complement Each Other Perfectly

BI1249 – BI for BW Roadmap and Examples

BI1726 – Learn How Uni-Select Successfully Migrated to BW 7.4 running in the HANA Enterprise Cloud

BI461 – Under Armour: BW on HANA migration – A Story of Failed Start to Amazing & Successful Ending

BI1377 – SAP BW 7.4 powered by SAP HANA and further Roadmap

BI2061 – W. R. Grace’s SAP Business Warehouse Landscape Upgrade success story

BI2241 – Extending the reach of LSA++ using the New SAP BW 7.40 artifacts

BI1291 – An Enterprise Data Repository Empowered by HANA Modeling at ConAgra Foods

 

Also, please see the Excel version of the grid schedule - see here.

 

Check out this justification letter if you need justification to attend.  This track had over 200 submissions alone - so the above reflects top-notch content selected by ASUG Volunteers.

 

ASUG is at SAP TechEd

SAP TechEd Las Vegas (October 19-23) is SAP's premier technology education event. Learn about today's hottest topics, including in-memory computing, Big Data and real-time analytics, cloud management and security, Hadoop and the SAP HANA platform, and user experience.

 

Learn more about SAP TechEd here. Call for speakers for ASUG sessions starts on April 20th.

 

See you online or in person at ASUG Annual Conference.

3 tips to survive as a SAP BI consultant in 2015


OK, I admit it. The title is maybe too alarming, and made on purpose to catch your attention. However, you should be worried. If you think the SAP BI Consultant role changed dramatically after the BusinessObjects acquisition, you have not yet begun to realize what might come.

 

If you are an experienced SAP BI Consultant and started your BI career around 1999 or 2001 (as I did, by the way), you are from a time when proposing tools to address our customers' issues went, more or less, like this:

 

Scenario 1

Customer: “I need a tool to create an ETL (Extraction, Transformation and Load) process”

SAP BI Consultant: “You should use SAP NetWeaver BW (update rule, infopackage, infopackage group, etc.)”


Scenario 2

Customer: “I need a tool to create an operational analysis of my DSO (Days of Sales outstanding)”

SAP BI Consultant: “You should use SAP NetWeaver BW (Query Designer)”


Scenario 3

Customer: “I need a tool to create a beautiful Dashboard to my CFO”

SAP BI Consultant: “You should use SAP NetWeaver BW (Web Application Designer – WAD)”

 

Then, in the mid-2000s, SAP invested heavily in Analytics with the improvements in SAP NetWeaver BW 7.0 and the acquisitions of BusinessObjects and Business Planning and Consolidation (BPC), to name the two most relevant.

 

At that point in time, the role of the SAP BI Consultant clearly evolved: the “one man show” that was the “BW Consultant” initially had to specialize in at least four different roles:

  • A SAP NetWeaver BW consultant, focusing basically on data warehousing
  • A SAP BusinessObjects consultant, focusing basically on the presentation layer
  • A SAP Data Services consultant, focusing basically on the ETL layer
  • A SAP Planning & Consolidation consultant, focusing basically on planning processes

 

I know that some people may have a different view on the number of roles and on my simplistic take on the “history of SAP BI”, but I believe you get my meaning.

 

Fact is, however, that we still see a few SAP BI consultants who did not adapt to that. They are still “BW Consultants” or “BO Consultants”; they have no holistic view and are stuck in a 10+ year old view of the world, focused on tools and not on solutions.

 

While that is still the case, a new move will change the game once more. This time there will be no evolution, but revolution! With the release of SAP Smart Finance and the S-Innovations, SAP has made clear its bold move to realize Hasso Plattner's vision of bringing the transactional and analytical worlds together.

 

At first glance this may look like mere marketing, but not when you look at it closely. Reviewing the analytical content provided with Suite on HANA (SoH) and SAP Smart Financials, powered by SAP HANA Live, for example, it is really easy to see that several historical requirements can now be met in a beautiful HTML5-based interface that is subject-oriented (not transaction-oriented), with the unprecedented power to allow insight-to-action analysis. All this in real time and at (almost) line-item level!

 

In short, the “common” operational analytics, which historically has been delegated to SAP BW, is now delivered as part of SoH. What's more, it is nicer, more flexible, more user-friendly and faster than ever before.

 

Once more, I can “see” on the faces of many people reading this text the old (and outdated) question: “Are you saying BW is dead?”. The answer is still no, and I’ll not go into much detail about this again. The short answer is simple: whenever a customer needs a Data Warehouse or an Enterprise Data Warehouse, SAP NetWeaver BW (powered by SAP HANA) is still the best choice!

 

That said, it is necessary to understand that the role of the SAP BI Consultant must change to address this new reality. In short, the major challenges are:

  • Analytics is coming to the Suite. That is a fact: operational analytics, operational KPIs (Key Performance Indicators) and real-time analytics will happen in the Suite;
  • SAP BI solutions are not about tools. They never were and never will be; tools will forever be changing, updating and being replaced.
  • Not all customers need a complex and sometimes complicated Data Warehouse solution, but those that need it really need it, and we must not allow adaptations that lose value along the way.

 

With this picture and challenges in mind, here are the 3 tips to survive as a SAP BI Consultant:

  1. Avoid the 5 most common mistakes of Business Intelligence projects
  2. Understand SAP's roadmap, ambitions and strategy. It is not worth fighting against it; SAP has a clear roadmap, ambitions and strategy, and studying it, learning it and bringing it into our daily work is the only way to propose and design long-lasting, simple solutions.
  3. Update your technical skillset. Yes, there are a lot of new tools, concepts and possibilities to study; there is no way around it. You have to get SoH, Smart Business, Simple Finance, etc. into your brain and you must be able to handle them. Next time your customer asks for a “BI report”, the best solution may be based on a view built on SAP HANA Live, deployed straight into the SAP ERP, using SAP Fiori to allow easy information consumption via mobile devices (and not a SAP BW development with a presentation layer in SAP BusinessObjects).

 

Conclusion

The Business Intelligence subject area is changing. What started with simple data visualization now has to deal with the Internet of Things (IoT), Big Data, real time, mobile, self-service BI, cloud computing and more. It is time to prepare oneself for the challenges to come and to support our customers in making the best of every single opportunity!

 

All the best,

Eduardo

Efficiently managing your data in SAP BW on HANA (Dynamic Tiering, NLS)


There is some confusion around the different options available for managing data in BW. Hence I am writing this to ease that confusion and hopefully achieve a clear understanding of what options are available and which are best suited for what purpose.

 

In a typical non-HANA environment, most customers retain all of their data in SAP BW, and some retire data to tape or disk or use a near-line storage solution with a secondary DB.

 

When it comes to running SAP BW on HANA, the cost of putting all data in RAM can be high if the volumes are large. Moreover, not all the data needs to be in-memory: typically only 30-50% of the entire BW data in an organization is used very actively for reporting and other operations, and this portion is the ideal candidate to fully utilize the in-memory capabilities of HANA. The other 50-70% of the data is infrequently accessed and can therefore be managed on a low-cost plan.

 

SAP BW on HANA offers a number of ways to manage these different data temperatures so you can achieve an overall lower TCO for your investment. For customers this becomes an interesting area because it encourages the adoption of an archiving policy which, when managed and maintained efficiently, can limit the need to buy more HANA and thus save heavy opex costs.

 

Broadly, there are three data temperatures:

[Image: the three data temperatures: hot, warm and cold]

 

HOT DATA


This is the area where 100 % of your PRIMARY IMAGE DATA is in the HANA in-memory space (RAM) and is instantly available for all operations.

In the BW world, this is typically the InfoCubes and Standard DSOs, as they constitute the reporting and harmonization (EDW) areas respectively, as shown below. They are very frequently accessed for reporting and harmonization purposes and hence are the ideal candidates for being fully in-memory, to benefit fully from the HANA capabilities.

 

[Image: InfoCubes and Standard DSOs in the reporting and harmonization (EDW) areas]

Typically, though, the data accessed very frequently for reporting is only the most recent 2-3 years' worth. Hence this most recently accessed portion of the information is the real hot data that needs to be in-memory all the time to deliver top-level performance.

 

Older data (typically beyond 3 years) is rarely accessed for reporting but still needs to be retained for regulatory and compliance purposes. Hence it can be safely archived to a very low-cost plan using the COLD DATA management option with SAP IQ, as explained in the next section.


The data in the PSAs and w/o (write-optimized) DSOs constitutes the staging area and corporate memory. Although frequently accessed, it tends to be used primarily for non-reporting purposes, i.e. for data propagation and harmonization. Hence it can be moved to a WARM store area, which is explained in the next section.

 

The diagram below shows the areas where the HOT, WARM and COLD concepts apply in a typical SAP BW EDW architecture.

 

[Image: HOT, WARM and COLD areas in a typical SAP BW EDW architecture]

 

Access: VERY FREQUENT OPERATIONS, running every few seconds, every minute or every hour

Response Time: REALLY FAST, fully in-memory

Use case: To provide fast access to queries, data loading, data activation, data transformations and data look-ups

Likely candidates: RECENT DATA from InfoCubes, Standard DSOs and Open DSOs, plus all master data, transformations and related look-up DSOs

 

COLD DATA


In the context of this document I am only discussing SAP IQ as the cold storage, whereas for BW there are other certified partners providing near-line storage solutions, such as PBS Software and DataVard. You can search for “NLS” on the partner site at http://go.sap.com/partner.html

 

This is the area where 100% of your PRIMARY IMAGE DATA is in a SECONDARY DATABASE (ON DISK). The response is slightly slower than HANA, but it still offers reasonably fast READ-ONLY access to data for reporting purposes, as if everything were in one database.


 

In the BW world, the Standard DSOs and InfoCubes constitute the harmonization and reporting layers, but typically only the last 2-3 years of data is frequently requested. Older data (typically beyond 3 years) is very infrequently accessed but still needs to be retained for occasional reporting or for regulatory and compliance purposes. Hence this older data can be safely archived to a very low-cost plan.

 

This is where NLS comes into play. Keeping the existing models and architecture the same, you can remove the older sets of data from these InfoProviders (typically slicing the data along time dimensions, or moving the entire data set) out of the primary HANA database and into a secondary low-cost/low-maintenance IQ database. READ access to IQ NLS is in most cases much faster than READ access to traditional databases. For customers running BW on xDB and using IQ as NLS, the NLS DB actually turns into an ‘accelerator’ and provides much faster response times than the primary database.

 

The NLS4IQ adaptor in SAP BW offers tight integration between SAP BW and SAP IQ, such that all data management, retiring and control processes can be done through SAP BW using the Data Archiving Process (DAP). A lot of new enhancements have recently been added with the BW 7.4 SPx releases that help manage the entire end-to-end archiving life cycle in a much simpler and more efficient way.


As for SAP IQ itself, it offers columnar tables for faster read access, up to 90% compression, and runs on conventional hardware, thus offering overall lower TCO benefits; plus it is a highly mature database with a large install base built over the past 15+ years. Hence it is a trusted environment in which to retire old data as a low-cost/low-maintenance DB option while still having all the benefits of accessing it in near real-time whenever needed or requested.

 

Also, SLAs for historical data are usually not the same as for high-availability data, so the NLS process helps by moving the bulk of the inactive data out of the primary database into a slightly relaxed SLA environment. Secondly, NLS provides an online archiving solution: as volumes grow and data ages, data can be seamlessly moved out of the primary HANA database. This way you can reduce opex by significantly reducing the need to buy more HANA, dramatically lowering the TCO of the landscape.

 

Access: SPORADIC, typically data older than 2-3 years that is still required for regulatory, statistical or compliance reporting

Response Time: TYPICALLY 5-10% slower than the HOT store

Use case: Data retiring: you REMOVE part of your data (historic data) from your PRIMARY STORAGE and MOVE it to a low-cost database, typically as an archiving scenario, while still making the data available anytime and anywhere with near real-time access on request

Likely candidates: HISTORIC DATA from InfoCubes, Standard DSOs and SPOs

 


WARM DATA


This is the area where the PRIMARY IMAGE DATA is in the DISK storage of the HANA database instance, but is always available on request. Using this, you can manage your LESS RECENT and LESS FREQUENTLY ACCESSED data more efficiently within the HANA database, such that the data is instantly available for READ, WRITE, UPDATE, etc. (all operations), while still offering lower TCO benefits.


In the BW world, PSAs and write-optimized DSOs constitute the staging area and corporate memory. The data in the PSAs is valuable as long as it is the newest data. Once it has been loaded to the upper-level targets, its value diminishes and it is only needed if there are discrepancies in report results and a trace-back/reload is required. Although some customers maintain regular housekeeping, the PSAs persist the data for anywhere from a few days to a few weeks or months, depending on SLAs and policies. Hence their size can grow very quickly, blocking a lot of space and memory which could otherwise have been used for other important processes. Similarly, corporate memory objects are primarily used to support transformations, harmonisations, reconstructions etc.; hence they are only needed when such activities are taking place.

 

There are two options for implementing the WARM concept:

 

1. Non-Active Concept

 

The non-active concept has been available since SAP BW 7.3 SP8 and is primarily used to manage the available HANA memory space efficiently.

 

This concept primarily applies to PSAs and write-optimized DSOs. These objects are partitioned by data request, which means that a complete data request is written to one partition. Once a threshold number of rows in a partition is exceeded, a new partition is created. The default threshold is 5 million rows for PSAs and 20 million rows for write-optimized DSOs (so, for example, a PSA holding 60 million rows would typically span around 12 partitions).


Using the non-active concept, the PSAs and write-optimized DSOs can be classified as low-priority objects, so that whenever there is a shortage of memory, only the older partitions containing the inactive data are quickly displaced from memory, making room for higher-priority objects/processes to use the freed memory. The new/recent partitions of the PSAs and write-optimized DSOs are never displaced; they always remain in memory for the operations that are required as part of the data loading process.
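
Under the hood, this low-priority classification corresponds to table unload priorities in HANA. BW sets these itself via the object's "early unload" setting, so the following is only a minimal sketch of the underlying mechanism, with a made-up PSA table name:

-- PSA tables follow the /BIC/B* naming pattern; this name is illustrative
ALTER TABLE "/BIC/B0001234000" UNLOAD PRIORITY 7;   -- 7 = unload early under memory pressure

-- check the current setting in the catalog
SELECT TABLE_NAME, UNLOAD_PRIORITY
  FROM SYS.TABLES
 WHERE TABLE_NAME = '/BIC/B0001234000';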



 

Although the concept can be applied to InfoCubes and Standard DSOs, it is a HIGHLY UNRECOMMENDED option; please check SAP Note 1767880. Since cubes and standard DSOs are not partitioned by request, the approach of making them low-priority objects and displacing and reloading them does not work efficiently. As they can hold large volumes of data, whenever a load or activation is requested the entire table has to be brought back into memory, which results in a drop in performance. For these InfoProviders it is best either to keep ALL of their data HOT, or to keep the newer data HOT and move the older data sets to a COLD store like IQ using the NLS concept.

 

Access: MEDIUM-FREQUENCY DATA

Response Time: REALLY FAST if all partitions are in-memory. If data has been displaced and the partitions must be reloaded into memory, there is a considerable lag depending on the data volume and the infrastructure strength. This is one of the key reasons why the non-active concept is not highly recommended in a very active data warehousing solution: pulling data back into memory from disk has negative performance implications.

Use case: To efficiently manage the low-value data (PSAs and write-optimized DSOs) in the HANA in-memory space and preserve the available HANA memory

Likely candidates: PSAs and write-optimized DSOs only

 

Some considerations -

* The non-active concept is not a way to retire or store your older data on a low-cost plan; rather, it is a way to manage the limited available memory more efficiently, so that when higher-priority objects/processes require memory, the lower-priority objects are displaced and memory is made available for the higher-priority tasks.

* The non-active concept only takes effect when there is a memory shortage. This means the entire PSA and write-optimized DSO are always in-memory unless a shortage occurs, during which ONLY the older partitions are flushed from memory to disk; the recent/new partition always remains in memory.

* If data has been displaced from the older partitions and some BW process later requires it, these older partitions are reloaded back into memory. This causes a considerable lag depending on the data volume and the infrastructure strength. This is one of the key reasons why the non-active concept is not a highly recommended option in a very active data warehousing environment: pulling data back into memory from disk has negative performance implications.

* The non-active concept does not reduce the in-memory space required, as the objects still occupy their space. If there are large numbers of such objects, this can block a significant amount of HANA memory. This is one of the main reasons why the Dynamic Tiering solution exists.

 

2. Dynamic Tiering

 

Dynamic Tiering is ONLY available from SAP BW 7.4 SP8 and HANA 1.0 SPS 09 onwards, and it currently applies only to PSAs and write-optimized DSOs, with support for Advanced DSOs to follow.

 

Recall that the non-active concept only takes effect when there is a memory shortage, meaning the entire PSA and write-optimized DSO are always in-memory unless a shortage occurs, during which ONLY the older partitions of these objects are flushed from memory to disk. The recent/new partition is always in memory and thus always occupies space. Also, whenever older partitions need to be accessed by any BW operation, they are brought back into memory, occupying more space again. So effectively this concept occupies HANA memory at all times, and there is a risk that, if over-utilized, it could slow down performance and impact other processes.

 

Dynamic Tiering is very different from what the non-active concept offers. In the DT concept, all data of a PSA or write-optimized DSO is 100% on disk, which means the entire image of the object resides on the PRIMARY disk. There is no concept of high-priority objects and no displacement mechanism. Effectively, the entire data of these objects is kept in a separate low-cost area, with an integrated mechanism to access it whenever required with optimal performance.

 

8.png

The tables in the DT concept are called extended tables (ET), and they sit in a separate warm-store "host" on the same storage system, as shown in the diagram below. Logically, the extended tables are located in the SAP HANA database catalog and can be used as if they were persistent SAP HANA tables. Physically, however, they are located in disk-based data storage that has been integrated into the SAP HANA system. The user sees the entire system as one single database, yet the persistence of data written to an extended table is hard-disk-based rather than main-memory-based: any data written to an extended table goes directly to the disk-based data storage.

 

9.png

 

DT offers a single, consolidated way of managing less frequent and less critical data at very low cost, while still delivering performance close to that of the hot store. This is possible because DT uses the main memory for caching and processing, giving in-memory performance benefits, while the data in the warm store is accessed using algorithms optimized for disk-based storage, allowing the data itself to reside on disk. All data load processes and queries are processed within the warm store; this is transparent for all operations, and hence no changes to BW processes are required. (A small sketch follows below.)
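To make the "used as if they were persistent SAP HANA tables" point concrete: in SAP HANA dynamic tiering, a table is placed in the warm store by creating it with the USING EXTENDED STORAGE clause, after which the same SQL works as for any other catalog table. The sketch below is illustrative only - in a BW system the PSA/DSO tables are created and managed by BW itself, and the table definition and connection details here are assumptions.

from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015,
                     user="DT_DEMO", password="secret")
cursor = conn.cursor()

# USING EXTENDED STORAGE places the table in the dynamic tiering warm store:
# it appears in the normal catalog, but its persistence is disk-based.
cursor.execute("""
    CREATE TABLE demo_sales_history (
        doc_id  INTEGER PRIMARY KEY,
        calday  DATE,
        amount  DECIMAL(15, 2)
    ) USING EXTENDED STORAGE
""")

# Reads and writes are transparent -- the same SQL as for an in-memory
# column table; the engine optimizes the access for disk-based storage.
cursor.execute("INSERT INTO demo_sales_history VALUES (?, ?, ?)",
               (1, "2015-01-31", 199.99))
conn.commit()

cursor.execute("SELECT COUNT(*) FROM demo_sales_history")
print(cursor.fetchone()[0])  # -> 1

cursor.close()
conn.close()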

 

Unlike with the non-active concept, SAP HANA's main memory is not required for data persistence in extended tables. The Dynamic Tiering concept can therefore optimize main-memory resource management even further than the non-active data concept, by moving the staging-area data completely from the hot store into separate low-cost warm storage. This has a positive effect on hardware sizing, especially when dealing with large quantities of warm data in PSAs and write-optimized DataStore objects.

 

Access: MEDIUM FREQUENT DATA

Response time:

17.png

Medium fast: slightly lower performance than the HOT store.

Use case: To efficiently manage low-value, low-frequency data outside the HANA in-memory space and thereby offer a significantly lower overall HANA memory footprint.

Likely candidates: PSAs, W/O DSOs and Advanced DSOs only.

 

* Currently there are certain limitations to using Dynamic Tiering in a true data centre operation, owing to the limited scope of Disaster Recovery and the limited automation for High Availability; these capabilities are intended to become available with HANA SPS 10.

 

Summary


Looking at the two warm options, the non-active concept and the Dynamic Tiering concept: the non-active concept has overheads in terms of HANA memory sizing and can cause performance drawbacks if over-utilized, whereas the Dynamic Tiering concept largely replaces it by allocating dedicated disk-based storage that can manage big volumes indefinitely on a very low-cost plan while still delivering performance close to in-memory.

 

The Dynamic Tiering store is an area that holds current data, sees frequent access and supports all the normal HANA operations (READ, WRITE, CREATE, UPDATE, DELETE, etc.). The DT concept works by differentiating between the less critical and the most critical layers of the EDW, effectively giving the less critical layers a dedicated storage while still managing them as an integral part of the solution.

10.png

 

As for the COLD storage, it is quite clearly an area with very sporadic, READ-only access, and it is ideally an online archiving solution that retains and maintains historic information on a very low-cost plan. The NLS concept works by differentiating between new data and old data, effectively moving the old data to a low-cost COLD storage solution while still maintaining tight integration with the primary database, so that the data is always online for reporting.

11.png

 

So where are the savings? Let's quickly look at the example below.

 

Let's assume customer ABC needs a 1 TB BW on HANA system to migrate their current BW-on-DB system. If ABC retains all that data in HOT, they will need to license 1 TB of HOT store and buy 1 TB of HANA hardware. As the volumes and requirements grow, there will be a further need to invest in additional HOT licenses and additional HOT memory hardware.

 

SAP BW on HANA Solution = SAP BW on HANA

12.png

If we instead apply the WARM/COLD concepts and enforce a proper data management policy, we can split the data according to usage/value/frequency and maintain it in a low-cost storage solution. If we assume a 20:40 split for WARM:COLD, the requirement for the HOT store reduces to merely 40%. As volumes and requirements grow, the low-value/low-usage/low-frequency data can be pushed directly to the low-cost storage systems without even touching the HOT storage, avoiding the need to invest in any further HOT storage licenses or hardware. (The short calculation after the diagram below makes this concrete.)

 

SAP BW on HANA solution = SAP HANA (HOT) + Dynamic Tiering (WARM) + SAP NLS IQ (COLD).

13.png
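As a worked version of the arithmetic (the 1 TB total and the 20:40 split come from the example above; the assumption that licenses and hardware scale linearly with the retained HOT volume is ours):

# Worked version of the sizing example above.
total_tb = 1.0      # customer ABC's total BW data volume
warm_share = 0.20   # Dynamic Tiering (WARM) share
cold_share = 0.40   # NLS IQ (COLD) share

hot_share = 1.0 - warm_share - cold_share
print(f"HOT  : {hot_share * total_tb:.2f} TB ({hot_share:.0%})")    # 0.40 TB (40%)
print(f"WARM : {warm_share * total_tb:.2f} TB ({warm_share:.0%})")  # 0.20 TB (20%)
print(f"COLD : {cold_share * total_tb:.2f} TB ({cold_share:.0%})")  # 0.40 TB (40%)

# Only the HOT share needs HANA in-memory licenses and RAM, so the
# memory/license requirement drops from 1 TB to 0.4 TB in this example.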

So, effectively, SAP is offering a fair proposition with different components that complement each other and fit well into the EDW architecture of SAP BW running on SAP HANA, providing an efficient way of managing different data temperatures depending on their usage, value and frequency.

S/4HANA and Data Warehousing


One of the promises of S/4HANA is that analytics is integrated into the [S/4HANA] applications to bring analyses (insights) and the potentially resulting actions closely together. The HANA technology provides the prerequisites, as it can easily handle both OLTP and OLAP workloads. The latter is sometimes translated into a statement that data warehouses become obsolete in the light of S/4HANA. However, the actual translation should read: "I no longer have to offload data from my application into a data warehouse in order to analyse that data in an operational (isolated) context." The fundamental point here is that analytics is not restricted to purely operational analytics. This blog elaborates that difference.

To put it simply: a business application manages a business process. Just take the Amazon website: it is an application that handles Amazon's order process. It allows orders to be created, changed and read. Those orders are stored in a database. A complex business (i.e. an enterprise) has many such business processes, and thus many apps that support those processes. Even though some apps share a database - as in SAP's Business Suite or S/4HANA - there are usually multiple databases involved in running a modern enterprise:

  • Simply take a company's email server, which is part of a communications process. The emails, the address book, the traffic logs etc. sit in a database and constitute valuable data for analysis.
  • Take a company's webserver: it's a simple app that manages access to information on products, services and other company assets. The clickstream tracked in log files constitutes a form of (non-transactional) database.
  • Cash points (tills, check-outs) in a retail or grocery store form part of the billing process and write to the billing database.
  • Some business processes incorporate data from third parties like partners, suppliers or market research companies, meaning that their databases get incorporated too.

The list is easily extended when considering traditional processes (order, shipping, billing, logistics, ...) and all the big data scenarios that arise on a daily basis; see here for a sample. The latter add to the list of new, additional databases and, thus, potential data sources to be analysed. From all of that, it becomes obvious that not all of those applications will be hosted within S/4HANA. It is even unlikely that all the underlying data is physically stored within one single database. It is quite probable that the data needs to be brought either physically or, at least, logically to one single place in order to be analysed. That single place hosts the analytic processing environment, i.e. some engines that apply semantics to the data.

Now, whatever the processing environment is (HANA, Hadoop, Exadata, BLU, Watson, ...) and whatever technical power it provides, there is one fundamental fact: if the data to be processed is not consistent - meaning harmonised and clean - then the results of the analyses will be poor. "Garbage in - garbage out" applies here. Even if all originating data sources are consistent and clean, the union of their data is unlikely to be consistent. It starts with non-matching material codes, country IDs or customer numbers, stretches to noisy sensor data, and goes up to DB clocks (whose values are materialised in timestamps) that are not in sync - simply look at Google's efforts to tackle that problem.
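As a toy illustration of the harmonisation problem (the systems, codes and mapping table below are invented for the example): two source systems identify the same country differently, and combining their data without a mapping silently splits the truth.

# Two source systems deliver revenue per country, but use different country IDs.
crm_rows = [("DE", 100.0), ("US", 250.0)]     # CRM uses ISO alpha-2 codes
legacy_rows = [("GER", 40.0), ("USA", 60.0)]  # legacy app uses its own codes

# Without harmonisation, a naive union treats "DE" and "GER" as different
# countries -- "garbage in, garbage out".

# The data warehouse's job: map every source code to one harmonised key.
COUNTRY_MAP = {"DE": "DE", "GER": "DE", "US": "US", "USA": "US"}

revenue = {}
for code, amount in crm_rows + legacy_rows:
    key = COUNTRY_MAP[code]  # harmonise before combining
    revenue[key] = revenue.get(key, 0.0) + amount

print(revenue)  # {'DE': 140.0, 'US': 310.0}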

In summary: while analytics in S/4HANA is operational, there are two facts that make non-operational (i.e. beyond a single, isolated business process) and strategic analyses challenging:
  1. It is likely that enterprise data sits in more than one system.
  2. Data that originates from various systems is probably not clean and consistent when combined.

A popular choice for tackling that challenge is a data warehouse. It has the fundamental task of exposing the enterprise data in a harmonised and consistent way ("single version of the truth"). This can be done by physically copying data into a single DB and then transforming, cleansing and harmonising the data there. It can also be done by exposing data logically via views that contain code to transform, cleanse and harmonise the data (federation). Both approaches do the same thing, simply at different moments in time: before or during query execution. But both approaches do cleanse and harmonise; there is no way around it. So data warehousing, whether physical or logical, is a task that does not go away. Operational analytics in S/4HANA cannot and does not intend to replace the strategic, multi-system analytics of a physical or logical data warehouse. This should not be obscured by the fact that they can leverage the same technical assets, e.g. HANA.

This blog has deliberately stayed neutral about the underlying product or approach used for data warehousing; this avoids technical product features being mixed up with general tasks. In a subsequent blog, I will tackle the relationship between S/4HANA and BW-on-HANA.

 

You can follow me on Twitter via @tfxz.

Query Performance Optimisation Tips in BW 7.3


This post shares my experiences with query optimisation in SAP BW 7.3.

 

1. From BW 7.3 on, there is a newly implemented OLAP cache mode called "Query - Aggregate Cache", whose use is recommended.


1.jpg  

Set the cache mode to "Query - Aggregate Cache" for faster query runtimes.

2.jpg


Note: For objects newly developed in BW 7.3 onwards, this option is set automatically.


2. Ensure the correct setting is chosen in the DSO to avoid delays during data activation.

 

3.jpg

 

For example, the step that generates the SIDs (option 2) is only necessary if the DSO is used for reporting in BEx queries, so make sure the right setting is chosen to avoid delaying your process chains.

Sometimes the performance of queries on top of DSOs suffers with the option "During Reporting"; in such cases, consider generating the SIDs in the backend instead by choosing the option "During Activation".


3. Avoid using InfoSets, as the performance of queries on top of InfoSets is poor compared to cubes. Use CompositeProviders instead, which combine the advantages of InfoSets (on-the-fly joins) and InfoCubes.


Note: CompositeProviders can be used from BW 7.3 SPS 8.0.


4. Do not schedule the collection and monitoring jobs too frequently, as this unnecessarily increases resource consumption and thereby reduces overall performance under high-load situations.


5. Use "Query Pruning" if the MultiProvider contains multiple cubes (for multiple domains). Without pruning, the query accesses all the domain providers even though it only needs data from a single domain's provider, which can cost query performance. In BW on HANA, it also means data is kept in memory even though it is not required. (A toy sketch of the pruning idea follows.)
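To illustrate the idea only - this is a conceptual toy, not BW's actual pruning implementation, and the provider names and metadata are invented - pruning keeps a small map of which value ranges live in which part-provider, so a query touches only the providers that can contain matching data:

# Each part-provider of the MultiProvider covers one domain (here: fiscal year).
# Pruning metadata: provider -> the domain values it contains.
PRUNING_MAP = {
    "SALES_2013": {"2013"},
    "SALES_2014": {"2014"},
    "SALES_2015": {"2015"},
}

def providers_to_read(filter_years):
    """Return only the part-providers whose domain overlaps the query filter."""
    return [prov for prov, years in PRUNING_MAP.items()
            if years & filter_years]

# Without pruning, every provider is read; with pruning, only the relevant ones.
print(providers_to_read({"2015"}))          # ['SALES_2015']
print(providers_to_read({"2013", "2014"}))  # ['SALES_2013', 'SALES_2014']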

Query Results position - When there is a Characteristic in "Columns" Axis and when there is no Characteristic in Column Axis


Usually, when defining a BW query, we have the characteristics in rows and the key figures in columns.

 

However, it is quite possible that a characteristic is needed on the column axis.

In the example below, the characteristic "CTRY_A1" is on the column axis of the query definition, and the resulting query view looks like this:

 


b1.png

 

In the view above, the "Overall Result" is shown on the right-hand side.

If there is a requirement that the results should be shown first (on the left), before the individual values, we can change a setting in the query designer to make this happen.

In the Eclipse editor of the BW query, navigate to the "General" tab and, under "Result Location", select "Left" for the "Columns" option.

After saving and re-executing the query, the results appear as below, with the "Overall Result" shown first (on the left-hand side):


b2.png

 

 

Note:

(a) In the BEx Query Designer, this setting is found in the query properties, on the "Rows/Columns" tab.

(b) In the Web Java runtime, invoke the context menu on the characteristic on the column axis and choose "Properties" -> "Axis". In the "Column Axis Properties" dialog, under the "Data Formatting" tab, select "Left" for the "Result Position" option.

 

 

Thank you

Aneesh Kumar


We migrated SAP BW to HANA.. Now what?


Many organizations are migrating, or preparing to migrate, their existing SAP BW landscape to SAP HANA. A migration to SAP HANA is always 'non-disruptive', meaning you can migrate to HANA as-is and unlock many benefits instantly. However, simply migrating to HANA without optimizing your system would discard the full potential of the new database platform for BW. This article explains a 4-step approach to optimize your SAP BW landscape and benefit from all the innovation HANA has to offer.


Step 1: Instant gain: Performance!

First of all and most obviously, you will experience a significant performance improvement compared to traditional databases like Oracle or DB2. This performance gain has several aspects.

 

  • Front-end query performance

Thanks to SAP HANA's in-memory, column-based character, reads on the database are much faster than on non-HANA BW. This enables you not only to speed up your badly performing queries, but also to design queries on datasets or with calculations that could not be executed before because they would time out or run for ages before returning results.



  • Back end performance

Long runtimes in daily data loading and processing are often a big problem in SAP BW environments. Problems during nightly data loads frequently cause delays in data availability, with business users unable to run their critical reports in time, or at all. SAP BW on HANA greatly improves the performance of BW back-end loading processes, significantly reducing the runtime of nightly data loads. Problems can therefore be addressed earlier and faster, resulting in quicker data availability for reporting and far less business reporting downtime.

 

 

To benefit the most from performance improvements, it is important to migrate to the latest SAP BW release: new improvements and optimizations are added with every release. For example, the most recent release, SAP BW 7.4 SP9, contains numerous new functionalities improving the SAP BW landscape and its performance.

 

Performance statistics as collected by several of our clients after the HANA migration (before further optimizations):

Performanceresults

 

 

Step 2: Identify optimizations

 

Optimizing your BW on HANA system should be the first step after migrating. Some sweet spots:

 

  • Optimizing ABAP code for HANA

As SAP HANA is a column-based database, different rules apply when performance-optimizing your custom code. Optimizing your ABAP code can make all the difference in loading performance after migration. SAP delivers several checks and programs that identify possible optimizations and give hints on what can be improved.



  • Redefine daily process chain schedules

The increased back-end performance HANA offers makes it possible to execute the daily loading process much faster. By redefining the process chain schedule and setup, you can put business-critical process chains to the very start of the schedule. This makes it possible for BW support teams to fix failed loading processes long before they negatively impact critical business reports.



  • Making transformations ‘HANA-Enabled’

SAP BW 7.4 SP8 on HANA introduced 'push down' functionality for transformations to the database, greatly increasing their loading performance. The functionality supported to run on the database is, however, still quite limited: custom ABAP code in transformations is not supported yet, for example, and not all formulas are supported, though more are added with every SP. If a transformation contains any functionality not supported for pushdown to HANA, the check function in the transformation will point out what is not supported.

 

Making slight adjustments to your data model in order to maximize the number of HANA-enabled transformations can make quite the difference in loading performance.

Pushdown
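For intuition, pushdown means replacing record-by-record processing in the application server with one set-based statement executed inside HANA. Below is a minimal, hypothetical sketch: the table and column names are invented, the hdbcli connection details are placeholders, and in practice BW generates the pushed-down processing for you.

from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015,
                     user="ETL_USER", password="secret")
cursor = conn.cursor()

# Classic (non-pushed-down) transformation: fetch all rows to the application
# server, loop over them, write back -- data travels back and forth.
#
# Pushed down: a single set-based statement, executed entirely inside HANA.
cursor.execute("""
    INSERT INTO target_dso (doc_id, amount_eur)
    SELECT doc_id, amount * exchange_rate   -- the transformation rule
    FROM   source_psa
""")
conn.commit()

cursor.close()
conn.close()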

 

 

Step 3: Simplify your landscape/minimize redundancy

 

The best way to reduce throughput times for transformations and activations is simply to get rid of them. When running on HANA, InfoCubes no longer bring a performance benefit. In fact, making an InfoCube HANA-optimized makes it very similar to a DSO from a table point of view.

 

In classical BW environments, many InfoCubes were created for the sole purpose of performance and are simply a 1:1 copy of the data in the DSO layer underneath. By making a few minor adjustments to the DSO and redirecting your MultiProvider (or query) to the DSO instead of the InfoCube, you can decommission the InfoCube completely. That removes the load step entirely (a 100% improvement for that step), simplifies the data flow, and also removes a place where errors can occur. This can be a huge benefit for data flows with logically partitioned InfoCubes or Semantically Partitioned Objects.

 

 

Step 4: Use new functionality

 

SAP BW on HANA offers a variety of new possibilities that let organizations move on from the classical BW data warehouse to what SAP calls 'the virtual data warehouse'. With HANA's processing power, data models are not only simplified; there is also a range of possibilities to virtualize data layers in your BW data model. By not persisting every step of your BW data model, development speeds up, flexibility increases, and the memory footprint shrinks.

 

With SAP BW on HANA, SAP has (finally) made modeling based on non-SAP data sources much easier compared to previous versions of the product. With field-based modeling, Smart Data Access and Open ODS views, using BW to combine SAP with non-SAP data is now a very good option. The ability to expose BW providers as HANA models also makes it possible to use SQL on BW InfoProviders, making BW better suited for consumption by SQL-based front-ends. (A small sketch follows below.)
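For illustration, here is a minimal sketch of consuming a BW provider through its generated external SAP HANA view over plain SQL, using SAP's hdbcli Python driver. The schema and view name follow the usual generation pattern but are assumptions here, as are the field names and connection details - check what your system actually generates.

from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015,
                     user="REPORTING_USER", password="secret")
cursor = conn.cursor()

# With "external SAP HANA view" generation switched on for a BW provider,
# BW generates a view that any SQL client can read. The generated views
# typically land in _SYS_BIC under the package system-local.bw.bw2hana;
# the provider and field names below are invented for the example.
cursor.execute("""
    SELECT "0CALMONTH", SUM("0AMOUNT") AS total_amount
    FROM   "_SYS_BIC"."system-local.bw.bw2hana/SALES_DSO"
    GROUP  BY "0CALMONTH"
    ORDER  BY "0CALMONTH"
""")

for calmonth, total in cursor.fetchall():
    print(calmonth, total)

cursor.close()
conn.close()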

 

SAP HANA supports many very advanced libraries, for example for extensive statistical use. Libraries like R, PAL and AFL can be consumed using a HANA Analysis Process in BW, which provides the results of those analytical processes back to BW.

For a full overview of new functionalities in the newest release of SAP BW, please see What's new in SAP BW 7.4 SP8


What's in store for BW 7.4 at ASUG Annual Conference | SAPPHIRE NOW 2015


At ASUG Annual Conference 2014, Bhanu Gupta and I had the wonderful experience of presenting a session on BW Workspaces - An Agile BI Environment That Empowers Users to Enhance and Enrich Enterprise Data. Because of the wide interest in this topic, we also shared it through an @ASUG_BI webcast.

 

That success made us enthusiastic to submit more sessions this year, and we are grateful and lucky to get an opportunity to present Extending the Reach of LSA++ Using New SAP BW 7.40 Artifacts. We had submitted another session, on HANA Smart Data Access with Hadoop, which I thought was more interesting, but the committee picked this one. Nevertheless, we are excited to be at the event next week - for our session and, of course, a lot, lot more!

In this session we will warm up with a quick intro to enterprise data warehousing, followed by the Layered Scalable Architecture, LSA (check out the blog by Juergen Haupt), which is the basis of LSA++; then what is different from LSA to LSA++ - basically a reduction in the data warehouse layers... and there is a lot more to it.

Then we introduce the SAP BW 7.4 modeling and provisioning artifacts, namely ODP and ODQ, the Open ODS view, the Advanced DSO and the CompositeProvider. Another exciting feature is how you can now generate HANA views automatically. BW 7.4 SP9 powered by HANA is the next milestone for enterprise data warehousing with BW on HANA and provides the next level of simplification for BW. In addition, SAP BW on HANA's virtual data warehouse capabilities have been enhanced for even more flexibility.

 

To learn more, join our session on May 7. Take a minute to introduce yourself and talk about your plans for BW 7.4.

 

There are a lot of good sessions where you can learn more about BW 7.4. Be sure to mark your agenda:

 

Here is the overall ASUG BI/BW/HANA schedule with session links, nicely done by Tammy Powlas.

Also take a look at the other sessions on A-TPM, BPC and BW on HANA being presented by my TekLink colleagues.

 

Looks like it is going to be another great event…see you there!

Deciding factors for BW on HANA, Enterprise HANA and HANA Live


I am sure most of us have been asked to choose one, or a combination, of the following tools to build an enterprise data warehouse.

(1) BW on HANA

(2) Enterprise HANA

(3) HANA Live

 

It's an interesting topic, and I have discussed it with various folks. There are a number of factors to consider before finalizing on these tools, and they vary with the customer, the ecosystem in the landscape (source, ETL and reporting tools), the support team, etc. I have listed some of the key factors below and compared the three tools against them.

 

Criteria                                                      | BW on HANA | Enterprise HANA | HANA Live
--------------------------------------------------------------+------------+-----------------+----------
Significant current BW investment                             | Yes        | -               | -
Significant SAP sources                                       | Yes        | -               | -
Significant non-SAP sources                                   | -          | Yes             | -
Zero data latency / operational reporting from SAP systems    | -          | -               | Yes
Business content delivery                                     | Yes        | -               | Yes
Data cleansing & data harmonization                           | Yes        | Yes             | -
Time-dependent master data                                    | More       | Limited         | Limited
Reuse of features (e.g. hierarchies) from SAP Business Suite  | -          | -               | Yes
Integration with big data                                     | Limited    | More            | -
Planning capabilities                                         | More       | Less            | -
Predictive analytics                                          | Limited    | More            | -
Maturity of tool                                              | More       | Less            | Less
Availability of competent resources                           | More       | Less            | Less

 

Here is the summary, if we need to put it in one line:

  • You may go with Enterprise HANA + HANA Live if BW is currently not in the landscape.
  • You may go with BW on HANA + HANA Live if BW has been in the landscape for a while.