SCN : Blog List - SAP BW Powered by SAP HANA

What’s the Difference Between a Classic #SAPBW and #BWonHANA?


This is yet another question that I get from all angles: partners, customers and even colleagues. BW has been the spearhead SAP application to run on HANA; it is also one of the top drivers of HANA revenue. We've created the picture in figure 1 to describe, at a high level, what has happened. I believe that this not only tells the story of BW's evolution but also underlines the overall HANA strategy of becoming not just a super-fast DBMS but a compelling and powerful overall platform.


Fig. 1: High level comparison between a classic BW and the two versions of BW-on-HANA. Here as PPT.

Classic BW

Classic BW (7.3ff) follows the classic architecture with a central DBMS server with one or more application servers attached. The latter communicate with the DBMS in SQL via the DBSL layer. Features and functions of BW - the red boxes in the left-most picture of fig. 1 - are (mostly) implemented in ABAP on the application server.

BW 7.3 on HANA

At SAPPHIRE Madrid in November 2011, BW 7.3 was the first version to be released on HANA as a DBMS. There, the focus was (a) to enable HANA as a DBMS underneath BW and (b) to provide a few dedicated and extremely valuable performance improvements by pushing the run-time (!) of certain BW features down to the HANA server. The latter is shown in the centre of fig. 1 by moving some of the red boxes from the application server into the HANA server. As the BW features and functions are still parameterised, defined and orchestrated from within the BW code in the application server, they are still represented as striped boxes in the application server. Customers and their users do not notice a difference in usage other than better performance. Examples are: faster query processing, planning performance (PAK) and DSO activation. Frequently, these features have been implemented in HANA using specialised HANA engines (most prominently the calculation and planning engines) or libraries that go well beyond a SQL scope. The latter are core components of the HANA platform and are accessed via proprietary, optimised protocols.

BW 7.4 on HANA

The next step in the evolution of BW has been the 7.4 release on HANA. Beyond additional functions being pushed down into HANA, there have been a number of features (pictured as dark blue boxes in fig. 1) that extend the classic BW scope and allow things that were not possible before: the HANA analysis process (e.g. using PAL or R) and the reworked modeling environment, with new Eclipse-based UIs that smoothly integrate with (native) HANA modeling UIs and concepts, leading also to a reduced set of InfoProvider types necessary to create the data warehouse. Especially the latter have triggered comments like

  • "This is not BW."
  • "Unbelievable but BW has been completely renewed."
  • "7.4 doesn't do justice to the product! You should have given it a different name!"

It is especially those dark blue boxes that surprise many, both inside and outside SAP. They are the essence of what makes dual approaches, like within the HANA EDW, possible, which, in turn, leads to a simplified environment for the customer.

 

This blog has been cross-published here. You can follow me on Twitter via @tfxz.


BW on HANA - Performance Comparison of Different Exception Aggregations


This article compares the performance of three different ways of doing a simple exception aggregation in a BW on HANA scenario.  The goal is to see which design gives the best performance for a BEx query that uses exception aggregation.

 

Introduction

A performance problem can be experienced in BW when a large amount of exception aggregation has to be done at query run-time.  Before BW 7.3, exception aggregation happened on the application server during the OLAP part of query execution.  It was not done on the database layer.  This meant that a potentially large volume of data had to make the journey from the database to the application server.  With BW 7.3 (and BWA 7.2), or with BW on HANA, it became possible to "push down" some of these exception aggregations to the database layer.

 

The performance benefit of pushing down these exception aggregations can be considerable.  This push down is well documented (see chapter 4 of this document) and takes only a switch in RSRT to implement.  In RSRT you make this setting:

rsrt setting.png

By making this setting, the system will attempt to perform the exception aggregation in the database/BWA layer, but depending on query complexity this may not always be possible.

 

But could performance be improved further?

 

If we consider a mixed scenario in a BW on HANA landscape, then we have more options available to us, so perhaps the answer is yes.  With HANA as the database there is the possibility to build custom HANA models and so push down the entire query calculation, leaving the query itself as more or less a "pass through".  This is only possible if the query and the exception aggregation are fairly simple.  As queries get more complex, BEx may be the better option, and in general you don't want to mix complex BEx with complex HANA models in one report.  For the sake of this experiment imagine performance is the only driver and the query has only simple exception aggregation.

 

Data Flow for Test Queries

So let's consider three queries that can be used to give the same exception aggregation output and see which performs best as the number of records increases:

dataflow.png

The different queries/test cases are named like this:

 

1) "Vanilla"

A standard BW cube with a BEx query on top.  Exception aggregation is defined in the query as normal and executed in the OLAP layer at runtime.  I've called this vanilla as it is the default situation.

 

2) "Exc.Aggr."

This is the same BW cube as case 1, with a different query on top.  The query has its RSRT setting "Operations in BWA/HANA" = "6 Exception Aggregation".  In this case, exception aggregation is still defined in the BEx query, but it is executed in the database layer (or on BWA if that were the scenario).

 

3) "Custom"

This uses a custom HANA model.  The same BW cube is used as per cases 1 and 2, but here we use a system-generated HANA Analytic View on top of the cube.  Above that, a custom Calculation View is used to do the equivalent of an exception aggregation, in fact all the required report calculation work is done here, not in the query.  A Virtual Cube above that lets BW see the custom HANA model, and lastly a BEx query that does pretty much nothing sits on the very top.

 

Test Scenario

Let's consider a very simple case where the exception aggregation is used to count occurrences.  Consider the scenario where you have measurements of man hours of effort associated with line items on a document.  Perhaps these are measurements of effort on help-desk tickets, or efforts to process financial documents.  Exception aggregation can be used to count documents and so you can get averages at a higher level, for example average man hours of effort per document across different company codes.  Here is some sample data from the cube YYCUB02 (see data flow diagram above):

base data.png

The above sample data is used to give the following report output, where exception aggregation is used to count the number of documents:

report.png
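To make the counting concrete, here is roughly what the report computes, expressed as plain SQL. This is a minimal sketch with illustrative table and column names; effort_line_items, doc_number and man_hours are assumptions, not the generated tables of cube YYCUB02:

-- Count distinct documents per company code and derive the average
-- man hours per document - the figure the exception aggregation delivers.
SELECT company_code,
       SUM(man_hours)                              AS total_hours,
       COUNT(DISTINCT doc_number)                  AS document_count,
       SUM(man_hours) / COUNT(DISTINCT doc_number) AS avg_hours_per_doc
FROM   effort_line_items
GROUP BY company_code;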

Generating Test Data

To generate the test data, a small data set was flat-filed into the cube, then a self-load was used to generate more random records based on the initial set.  This self-load was then repeated to double up the volumes with each load, with measurements running from 1m to 100m rows.
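In SQL terms, each self-load pass works roughly like this (a sketch only; in BW this was done with a DTP from the cube to itself, and effort_line_items is again an illustrative name). Note the 7-digit random document number, which becomes relevant later on:

-- Each pass doubles the volume: read all existing rows and insert
-- them again with newly generated (up to 7-digit) document numbers.
INSERT INTO effort_line_items (doc_number, company_code, man_hours)
SELECT CAST(RAND() * 10000000 AS INTEGER),
       company_code,
       man_hours
FROM   effort_line_items;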

 

Gathering Test Results

To gather the test results, each query was run with an increasing number of records in the cube.  Performance measurements were taken from RSRT (RSRT -> Execute + Debug -> Display Statistics):

rsrt stats.png

These raw performance measurements from RSRT were then grouped into DB and OLAP time using the groupings defined in help page Aggregation of Query Runtime Statistics.  Since RSRT was used in all cases the Frontend time can be regarded as constant and was ignored.

 

Test Results

Comparing the 3 scenarios with increasing records produced these charts:

results.png

The Exc.Aggr. and Custom scenarios both perform much better than the Vanilla scenario, giving a 95% drop in runtime.  We can zoom in to see how these two scenarios are made up, by separating their OLAP time and DB time:

results2.png

The above shows that the OLAP time for both these scenarios is very low, as we'd expect, since the query work is being done in the database layer rather than in OLAP.  The difference lies in the DB time, where the Custom model outperforms the Exc.Aggr model.

 

Excursion into Big O Notation

If you've not come across it before, Big O Notation is a way to describe how a process or algorithm behaves when one of its inputs changes.  You'd expect a report to get slower as the number of records it has to process increases, but how much slower could it be?  Big O Notation can be used to describe the broad categories of how much slower something gets.
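For reference, the formal definition behind the notation: a runtime f(n) is O(g(n)) if, beyond some input size, it is bounded above by a constant multiple of g(n):

f(n) = O(g(n))  ⇔  there exist c > 0 and n0 such that f(n) ≤ c · g(n) for all n ≥ n0

So O(n) means the runtime grows at most proportionally with the number of records, which is why a steeper straight line is still O(n).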

 

This chart shows the main Big O curves (the chart below is taken from http://apelbaum.wordpress.com/2011/05/05/big-o/):

bigo.png

In the chart above, as the number of records increases on the x-axis, the runtime on the y-axis also changes.  The names of each curve are the Big O Notations.  Looking back at our test results, the query execution times can be seen to form straight lines, so we can say they are all O(n).  It is true that the Vanilla scenario is a worse O(n) than the other scenarios, in that its slope is steeper, but they are all still categorised as O(n).

 

Real World Examples of Big O

O(log n) - is what you'd see using an ABAP BINARY SEARCH, for example in a transformation.  O(log n) is a good place to be if volumes get high.

O(n) - is what we see in our query runtime results.

O(n²) and worse - is what you'd see if there are nested loops in a transformation.  You may be familiar with performance being fine in a development box with limited data, then suddenly becoming very bad in a test environment.  A nested loop in a transformation can cause this O(n²) pattern.

 

When I first carried out these tests, my results looked like this:

resultserr.png

This looked like the Vanilla case was showing O(log n), but that didn't make any sense!  How could increasing records cause performance to stabilise?  On further investigation this turned out to be an error in my design of the test.  The random number generator was only generating values up to 7 digits, or 10 million values.  As the number of records ramped up to 100 million, the random generator was tending to repeat the document numbers rather than create new ones.  The amount of processing done in the OLAP layer was then becoming more constant.  Lesson learned - always sanity check the results!

 

Conclusion

Pushing down exception aggregation using the RSRT setting gives a huge improvement for virtually no effort.  In this simple test case, a hand-crafted custom HANA model did perform a little better, but that would need to be weighed against the additional effort to build and maintain it.

How to create Composite provider on top of Multiprovider and SPO (Semantic Partitioned Object) in SAP BI7.3



Hi All,

 

Could you please let me know how to create a CompositeProvider on top of a MultiProvider and an SPO (Semantically Partitioned Object built on an InfoCube) in SAP BI 7.3?

 

 

 

Regards,

Raghavendra

SAP HANA optimized Transformation – Part 1


While exploring the features of BW on HANA, I came across the HANA optimized transformation. I believe everyone may be aware of this already; I just wanted to share my observations on it.

____________________________________________________________________________________________________________________________________

 

In BW 7.4 on HANA, we can even move the transformation processing to HANA DB level.

As we see in the figure below, the transformation processing itself (mappings, formulas, conversions, etc.) can now also run at the HANA DB level.

 

overview.png

Figure 1 (taken from an SAP document)


HANA optimized transformation is possible only in the following cases:

  • Only DSOs as targets.
  • The source should be one of: PSA, DSO, InfoCube, MultiProvider, SPO.
  • No ABAP routines are written in the transformation (start, end or field routines, transfer routines, expert routines).
  • Only mappings, conversions, formulas and reading data from master data/DSOs are used.
  • Transformations with a SAP HANA expert script are also supported.

__________

 

  • There is a check in the transformation which tells us whether it is possible to push the processing down to the HANA DB.
  • On activation, the system will enable the checkbox “SAP HANA Processing possible” if the transformation satisfies the rule types for HANA processing.

               trans1.png

                                                                                    Figure 2: Transformation

 

  • On activation, we can see two programs generated for the transformation.
  • One is for normal processing and the other for HANA processing.

                         trans2.png

                                                                                               Figure 3: Transformation

  • If you click on the HANA transformation, you can see that a HANA Analysis Process (Source -> Function/Script -> Target) has been created for our object.

                              trans3.png

                                                                           Figure 4: Generated HANA Transformation

  • Since the source used here is a PSA, the generated process was created with the PSA table as its source database table and my DSO as the target.
  • In between, a standard class is used for the mappings etc.
  • The data analysis part is executed at DB level (conceptually along the lines of the SQL sketched below).
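Conceptually, the generated HANA transformation applies the rule logic as one set-based operation inside the database instead of looping over data packages in ABAP. A rough SQL sketch with hypothetical table and field names (the real process uses generated objects and the standard class mentioned above):

-- Hypothetical example: map PSA fields to the DSO target, applying
-- a conversion and a formula entirely inside HANA.
INSERT INTO "/BIC/AZSALES40"                 -- assumed DSO inbound table
SELECT "DOC_NUMBER",
       UPPER("CUSTOMER")    AS "CUSTOMER",   -- conversion rule
       "QUANTITY" * "PRICE" AS "AMOUNT"      -- formula rule
FROM   "/BIC/B0001234000";                   -- assumed PSA table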

 

 

  • In the DTP, we can choose to process the transformation either in HANA or with normal processing (on the application server).
  • For HANA execution, semantic grouping/error handling must be disabled in the DTP; otherwise the SAP HANA execution option won’t appear as a processing mode.

dtp.png

                                                                 Figure: DTP

 

So this is how the transformation is pushed down to the HANA DB level, by generating a HANA transformation (which is a HANA Analysis Process).

 

 

P.S.: I haven't yet compared the performance of these processing modes. I will share the results once I have done so.

Migrating to BW on HANA - Real Life Lessons - ASUG Webcast


The other week Paul Townsend of Johnsonville Sausage provided this webcast.  He is also presenting an ASUG session at SAP TechEd && d-code Las Vegas next month as well: Learn How to Successfully Upgrade your SAP BW System to SAP BW on SAP HANA

 

Paul is the BI coach at Johnsonville Sausage.  According to Paul:

 

"This session will cover a template implementation plan for a phased approach to a Business Warehouse on HANA (BWoH) landscape. Presenters will discuss the lessons learned, success factors of managing project landscape, and upgrading to BW 7.4. Learn a little about what changes in BW, how best to optimize the environment, what is required activity for the conversion, and what can be planned for future enhancement."

 

Paul said along the way they upgraded to BW 7.4

1fig.png

Figure 1: Source: Johnsonville

 

Figure 1 shows the application landscape.  Paul said "if SAP has an acronym, they try to own it".

 

Most of these modules feed BW for reporting, and this involves a "significant amount of testing"

2fig.png

Figure 2: Source: Johnsonville

 

Johnsonville Sausage is a relatively small company, so why did they go with HANA?

 

Paul said they built a business case while looking at the Demand Signal Management tool and trade promotion management.  This was "the fuel that sits on top of BW" and "everyone benefits"

 

He said they leverage existing BW / BI investments and competencies and don’t need to invest in learning HANA as a sidecar

 

They were looking for performance improvements – with TPM and BPC

 

He said performance was a barrier to adoption, as the traditional BW system took a while to fetch data back; they looked to mitigate this with HANA

 

He also said the "new visualization tools work fast".  He said they have a lean team.

 

3fig.png

Figure 3: Source: Johnsonville

 

In 2005 they had BW on mainframe as shown in Figure 3.  Paul said the upgrades were slow in the past.

4fig.png

Figure 4: Source: Johnsonville

 

 

Figure 4 shows their optimization workshop to understand what optimization means

 

Paul said to look at ABAP in data flow to make sure optimized for HANA

 

They had to convert 3.x data flows to 7.x

 

DSO's had no change with migration to HANA

 

He said they wanted a single test cycle – everything tested once by users and not take them through the test cycles

5fig.png

Figure 5: Source: Johnsonville

 

Figure 5 shows a template

 

He suggested using Figure 5 as a starting point and work backwards

 

He said the longest time was optimization

 

They combined UAT with Integration testing

6fig.png

Figure 6: Source: Johnsonville

 

Figure 6  shows that most of the team was remote  with different locations including India and Malaysia

7fig.png

Figure 7: Source: Johnsonville

 

Paul said to look at using SAP Database Migration option – puts it all in one step

 

They had one single downtime; important to Johnsonville

 

They had one single testing cycle to engage the business to ensure things are happening

 

Analyze ABAP – run until you have no red lights.  This would not touch custom Z programs

 

Converting 3.x data flows applies to DSOs and cubes.  Master data and attributes are not required to be updated during this upgrade – limit scope

RSMIGRATE – where possible – 3 to 5 that had problems.

8fig.png

 

Figure 8: Source: Johnsonville

 

Figure 8 shows how long it takes to collapse the table structures

 

He said transaction RSMIGRHANADB simplifies scope, Request ID table and cube table – instead of links to DIM table you have the SID values

 

The BEx Query upgrade was mostly for APO implementation – simple, took little time to do

9fig.png

Figure 9: Source: Johnsonville

 

Figure 9 shows their landscape pre-HANA, with BW 731 SP10 and no BW sandbox

10fig.png

Figure 10: Source: Johnsonville

 

Figure 10 shows their parallel landscape where they have the same system source name

 

They had a development freeze

 

They have two HANA boxes with different appliances

 

1 appliance – 1 shared for DEV/QA – using Cisco.  They have a dedicated appliance for production, have a blade on standby

11fig.png

Figure 11: Source: Johnsonville

 

Their pre-migration activities included "cleaning house"

 

The procure/rack/patch phase did take time, from procurement until the hardware was in the data center; it took 6-8 weeks or more

 

Paul advises understand your lead times

 

He said to clean transports and clear locked objects (true for any upgrade) – this allowed them to copy the BW production system back to BW development

 

Basis also has clean-up including space for back-ups.

 

He said to be sure to discuss backups as they are different

 

He said PSA cleanups, didn’t take long

 

He said they had soft-deleted PSAs that had been growing since 2008; if you want a clean house, do this

 

He said they have no formal archiving process; they looked at cubes to purge data that was not necessary

12fig.png

Figure 12: Source: Johnsonville

 

Figure 12 shows they had a development freeze.  They did a production trial run and saw the different times it took

 

They learned config and checks

 

Times were different when they compressed & repaired SQL Server – didn’t do that for BP1

 

The Dev and QA files were smaller

 

He recommends trial run; easier to do with a parallel landscape

13fig.png

Figure 13: Source: Johnsonville

 

He said they found compression to be 3:1

 

They found a 67% performance boost – 8 hours down to 2.75 hours

 

DSO activations are almost instantaneous

 

LISTCUBE transaction, 1 material (brats) COPA – 98% time savings

14fig.png

Figure 14: Source: Johnsonville

 

BusinessObjects team – number of rows being selected & returned to BusinessObjects layer with a 60% savings 1 week after go-live

 

Finance – P&L – sub 10 second response requirement; almost appeared instantly

 

He said they are "setting new norm for business users"

15fig.png

Figure 15: Source: Johnsonville

 

Paul said to find a partner and do the optimization workshop

 

He suggested scheduling a development freeze to keep the environment frozen

16fig.png

Figure 16: Source: Johnsonville

 

DMO bugs have all been worked out and the process is mature

 

He said to keep testing short, test as close to real life as you can

 

He said to run process chains a few times

 

They held daily scrum sessions

 

They have been live since Memorial Day

 

Question & Answer

 

Q:  Do you migrate all data to HANA, including old, cold data? Do you use near line storage?

A:  They migrated all data in the instance to HANA.  They do not use near-line storage yet but are looking at it for a project.

________________________________________________________________

Q:  BW Prod refresh to Dev is accompanied with ECC prod to ECC Dev ?

A:  BW production refresh did not include an ECC refresh.  A few pointer and delta issues resulted

________________________________________________________________

Q:  Do you use business objects?  If so, did change to Hana affect the olap connection definitions and was there any impact to existing webi, dashboard content?

A:  Yes a BOBJ shop.  UNV universes connections changed to new app servers - no retro required

________________________________________________________________

Q:  Have you had any unexpected outages in production?  Can you talk about backup and recovery with Hana?

A:  No unexpected outages while live.  After production box was racked an issue with Dev box occurred.  But no outages since live

________________________________________________________________

Q:  I missed the cost effective point on first slide - is it really lower cost for you?  Where did you get savings?

A:  Johnsonville has a runtime edition license, which has limitations; the savings are not really there, but it is an investment for the trade promotion optimization project

________________________________________________________________

Q:  Does your IT dept handle necessary updates/backups/etc for the OS where HANA resides or is the handled by a 3rd party? Have there been any hiccups with the HANA database since go-live that have required a reboot and/or restore?

A:  Basis contracted with a vendor.  The vendor manages Linux patches. HANA patches are evaluated; they are not current with patches right now.  No hiccups with the HANA DB since go-live

________________________________________________________________

Q:  Do your 3 Cisco blades share storage, or do each have their own?

A:  all share storage; Production 2 blades with one on standby.  All three active nodes

________________________________________________________________

Q:  Is the ECC system moved on SAP HANA along with SAP BW or just BW?

A:  No just BW not ECC currently

A:  Looking at moving to Suite on HANA in the future

________________________________________________________________

Q:  Is it considered TDI( tailored data center integration as opposed to an appliance )? (related to the Cisco blade ?)

A:  Believes yes but will follow up on the question

 

Meet Paul next month at his ASUG session at SAP TechEd && d-code in Las Vegas.  Special thanks to Paul for sharing his customer story more than once with ASUG members.

 

Related - from ASUG Newsletter:

Upcoming ASUG webcasts:


ASUG at SAP TechEd && d-code

Join ASUG members and BI Community Volunteers at SAP TechEd && d-code, October 20-24, in Las Vegas.

Review the ASUG Session List. The Enterprise Analytics, Business Intelligence, and Planning track includes:

  1. Consuming, Using, and Interacting with Enterprise Data
  2. Managing Analytics Technologies Behind the Scenes
  3. Developing and Customizing Analytics
  4. Overview Enterprise Analytics
  5. Preparing and Managing Data for Analytics

Learn & Be Heard: Save space in your agenda for ASUG Influence Councils:

  • EA117 SAP Analysis Influence Council
  • EA210 SAP® BusinessObjects™ Design Studio Influence Council
  • EA304 SAP Lumira in the Enterprise Influence Council
  • EA213 SAP BusinessObjects Mobile BI Influence Council

More Information: Blog Collection - Summary of speaker bios, session lists, and more.

ASUG Pre-Conference Seminars at SAP TechEd && d-code

Jumpstart your conference experience by attending an ASUG Pre-Conference Seminar on Monday, October 20.

Topics include:

SAP BusinessObjects Business Intelligence (BI) 4.1 is the latest suite release from SAP and offers a broad set of BI tools to choose from. Each tool has its own special focus in the area of reporting and analytics. This full-day, hands-on seminar is a series of activities focused on SAP BusinessObjects BI 4.1 products in combination with SAP Business Warehouse (SAP BW), SAP BW on SAP HANA, and SAP ERP. Attendees will learn to use the different products from the SAP BusinessObjects BI suite 4.1 in combination with SAP data.

Speaker: Ingo Hilgefort, SAP

Security Unification - BW on HANA


This blog explains security unification in BW on HANA. In older versions (i.e. BW on a traditional DB), if we create a user in the SAP system, the data security restrictions are confined to that particular SAP system. But in the latest versions of BW on HANA, a new DBMS tab is enabled through which users created in the SAP system are automatically created in the back-end HANA DB without any additional effort. The next steps of this blog explain how security is unified between BW and the HANA DB, and how user administration is done.

 

Create User:TEST in t.code SU01

                             1.jpg

During user creation, assign SAP_ALL (as an example) in the Profiles tab

                             3.jpg

Click on the DBMS tab (this tab can be enabled in SU01 by implementing certain steps and SAP Notes)

 

On saving user TEST, the role PUBLIC, which has basic authorizations, is automatically assigned by the back-end HANA DB. Now user TEST exists both in the SAP system (application server) and in the HANA DBMS

                           3_1.jpg

Now log in to HANA Studio rev. 74 (you may use any HANA Studio revision greater than 74) and navigate to the Security folder - Users

                                                                   4.jpg      5.jpg

 

As we created the user in SU01 along with the DBMS user, user TEST is automatically replicated to the HANA DB

 

Here, for user TEST, all the security and data restrictions are automatically replicated to the HANA DB, where end users can consume BW-generated models for reporting purposes

 

                                                                                        6.jpg
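Instead of browsing the Security folder, the replication can also be verified from the SQL console in HANA Studio. A minimal sketch, using the standard SYS.USERS system view:

-- Check that the DBMS user created via SU01 now exists in HANA
SELECT USER_NAME, CREATOR, CREATE_TIME
FROM   SYS.USERS
WHERE  USER_NAME = 'TEST';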

Now let us look at the snapshot of user administration in two aspects

 

1. Deleting user in SU01

 

In t.code:SU01 try deleting user:TEST

 

The system will prompt whether the DBMS user that was created in the HANA DB should be deleted as well. If "Yes" is clicked, the user is deleted both in the SAP system (application server) and in the HANA DB, so no inconsistencies arise

                                             7.jpg

 

As explained earlier, user TEST is deleted in the HANA DBMS as well

                                                                                 8.jpg

 

Now let us recreate user TEST in the BW system, which will also recreate it in the HANA DBMS

 

                                                      9.jpg

 

2. Deleting user in HANA DBMS

 

 

Now delete user TEST in the HANA DB by navigating to Security -> Users, right-clicking on user TEST and clicking Delete; the user will be deleted

                                                           10.jpg

 

In the above case there will be an inconsistency, because the HANA database administrator might have deleted the DBMS user without the NetWeaver application server administrator knowing about it. To remove this user inconsistency, perform the steps below

 

Go to T.Code:SA38

 

Enter Program: RSUSR_DBMS_USERS_CHECK and Click on Execute

 

                                                      11.jpg

 

Now enter user TEST, choose "Select inconsistent users" and click Execute to check whether the user is consistent

                                        12.jpg

As the HANA DB administrator has deleted DBMS user TEST, the report shows that the DBMS user does not exist. This means the user is inconsistent, since it was created from SU01 along with the user in the application server

 

                                  13.jpg

Now select the option "Remove DBMS user mapping" and click Execute; the DBMS user mapping will be removed and the user will henceforth be consistent

                                  14.jpg

 

As the DBMS user mapping is adjusted/removed, user TEST will now be consistent

                                   15.jpg

 

This demonstrates that there is security unification between BW and HANA. The same security/data restrictions are also replicated to Design Studio, Lumira and HANA Live for BW-generated information models.

Boost lead generation with the Blitz Days in a Box for SAP Business Warehouse powered by SAP HANA


Now live on Virtual Agency, the “Blitz Days in a Box for SAP Business Warehouse powered by SAP HANA” helps accelerate demand generation for SAP Business Warehouse powered by SAP HANA in the SME space.

The kit, in English, contains the new features in SAP BW 7.4, plus probing questions and value messages for IT, Finance, Sales & Marketing and Supply Chain audiences. You will find a telescript including objection handling, and useful documents such as step-by-step instructions to ensure your Blitz is successful.

 

 

Boost your demand generation… Use this new Blitz Days in a Box!

https://sapvirtualagency.com/Orgs/Initiative.aspx?id=1000000359

 

What’s new in SAP NW BW 7.4 SP8


Recently, SP8 for SAP BW 7.4 has become generally available. This new release of SAP BW brings a number of important changes to the SAP BW landscape from a modeling perspective.

 

Introduction of the Advanced DataStore Object


In SAP NW BW 7.4 SP8, the Advanced DataStore Object, or Advanced DSO, is introduced in order to simplify the SAP BW landscape. The current SAP BW landscape offers a large number of objects to be included in data models, each with their own pros and cons.

The Advanced DSO consolidates DataStore Object, PSA, InfoCube and HybridProvider functionality into a single object and completely replaces those objects in newly developed data flows.  The Advanced DSO:

 

  • Supports Field-based modeling and InfoObject-based modeling
  • Can serve as a source object for Open ODS views
  • Supports high-frequency data loads
  • Can contain up to 120 key fields (compared to 16 in standard DSO)
  • Can be modeled in the Eclipse-based SAP BW modeler

 

The Advanced DSO, however, does not seem to be available directly after upgrading SAP BW to 7.4 SP8. Modeling the Advanced DSO in the Eclipse-based environment is only available from release 1.5 of the BW Modeling Tools, due to be released in October 2014. As of yet it is unclear whether workbench access to this functionality will be available before the release of BW 7.4 SP9.

 

CompositeProvider enhancements


With SAP NW BW 7.4 SP8 several enhancements have been made to the Composite Provider. These are:

 

  • New use cases
    • Support of planning functionalities for UNION cases
    • Possibility to create a CompositeProvider on top of Composite Providers (only UNION supported)
  • Use of Open ODS views as Source Objects and allowing JOINS to be used
  • Support of navigational attributes (similar to the use of navigation attributes in MultiProviders)
  • Support of input parameters when Open ODS views are used as source objects;

 

A first check on our demo environment* reveals that it is only possible to create a CompositeProvider with a JOIN operation that includes Open ODS views when the 'Calcscenario used' property of the Open ODS view is set to 'yes'. This is not possible for all Open ODS view sources.

* Intenzz demo system details:


  • SAP BW 7.4 SP8
  • SAP HANA Studio SP8, Rev. 82
  • BW Modeling Tools 1.4

 

CompositeProvider.png

New features for CompositeProvider in SP8 (credit image: SAP)


Landscape simplification

 

With the new functionality the Advanced DSO (ADSO) offers and the new enhancements in the CompositeProvider, the SAP BW modeling landscape is simplified drastically. Using the new development objects, only four BW modeling object types remain to be used: the InfoObject and ADSO act as building blocks for everything persistent, the CompositeProvider and Open ODS view for everything virtual.

 

LandscapeSimplification.png

Landscape simplification (credit image: SAP)


Performance: Additional functionality pushed down to HANA


The main advantage SAP NW BW 7.4 offers over SAP NW BW 7.3 is the fact that, next to the data load and activation processes, transformation logic has been pushed down to the HANA database as well. This provides a huge performance improvement for transformations. In SP8, the number of supported formulas has been extended, and ABAP managed database procedures are now integrated as well.

 

Automated HANA model generation


Generation of HANA models based on BW InfoProviders and queries has been further developed, introducing the following enhancements:

  • Changes in BW InfoProviders will automatically re-generate the HANA model, representing the changes.
  • Models based on Queries now include security
  • It is now possible to read from Nearline storage (NLS) with SAP IQ. Only supported for InfoCubes and SID-enabled DSO’s.
  • Supported functionalities now also include currency/unit conversion, variables, restricted key figures, global restrictions, inventory key figure (closing balance)

HANA SP8: What’s really available today and future planning?


With SAP NW BW 7.4 SP8, a number of really interesting new features should be available. Assessing these new features in our demo landscape, however, we found that not all new functionality is available yet. This raises the question of what is and isn't available at this point in time and, subsequently, what SAP's plans are regarding the missing functionality. The new features were listed and explained briefly in the previous sections; to avoid listing them all again, you will find below only those that are not yet available:


  • The Advanced DataStore Object (available when BW Modeling Tools 1.5 is released in October 2014)

 

Future direction?

With regard to the Advanced DSO, it is expected at the latest with the release of SP9 later this year (probably December). It is also expected that a future release will support conversion of existing InfoProviders to Advanced DSOs. Whether this functionality will be available in the short term, however, is unsure.

 

See below the future planning as announced by SAP earlier this month:

BWRoadmap.png

SAP BW on HANA Roadmap (credit image: SAP)

 

By W. van de Rozenberg and Sjoerd van Middelkoop

cross-posted from What’s new in SAP NW BW 7.4 SP8: "SAP BW. Simplified.“


#BWonHANA: Extended Storage and Dynamic Tiering


There is a bit of confusion around the terminology of extended storage and dynamic tiering within HANA. This short blog attempts to resolve this. Details can be found in a PDF presentation attached to OSS note 1983178.

Extended Storage (Only)

This was introduced and demoed at last year's TechEd in Amsterdam, with a follow-up and bigger demo here. The notion of an extended table is introduced, which is technically based on IQ technology. The underlying IQ server needs to be installed, updated and maintained separately from HANA. Within BW 7.4 SP5 on HANA SP7, it was possible to pilot this mechanism, namely in the context of tables underlying the PSA and write-optimized DataStore objects.

Dynamic Tiering

HANA's dynamic tiering capability is planned to be shipped with SP9. In combination with BW 7.4 SP8, it substitutes the pilot setup described above. There will still be extended tables, but the separate IQ server is replaced by an ES server that is integrated into the HANA installer, update, management, backup and recovery mechanisms. There will be a designated node on the HANA hardware running that ES server. This setup will be released for general availability (GA). There will be no migration option from the pilot to the GA version. Details can be found in a PDF presentation attached to OSS note 1983178 or in the online documentation.
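For illustration, this is roughly how an extended table is declared once dynamic tiering is in place. A sketch only: in BW the tables are generated by the system rather than created by hand, and the column list here is purely hypothetical:

-- Sketch: rows in an extended table reside in the ES (warm) store
-- rather than in the in-memory column store.
CREATE TABLE my_psa_history (
    request_id NVARCHAR(30),
    record_no  INTEGER,
    payload    NVARCHAR(1000)
) USING EXTENDED STORAGE;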

 

This blog has been cross-published here. You can follow me on Twitter via @tfxz.

Composite provider @ BEx Report Part - 1


Hi friends, I just experienced a strange thing with a CompositeProvider in a BEx report.

In general, while reporting on virtual InfoProviders (MultiProvider, CompositeProvider, etc.), we can see an extra InfoObject called 0INFOPROV; its values are the objects that were used to build the virtual provider.

 

Coming to the actual scenario: I developed a CompositeProvider on top of two SPOs (Semantically Partitioned Objects).

 

SPO structure looks like this.

 

1.png

 

If we look at the technical names:

 

SPO tech name = ZTEST

Partition tech names: North America = ZTEST01 and South America = ZTEST02

 

In the BEx report, the 0INFOPROV values look like this:

 

2.png

 

So I created a report for North America, restricted the key figure with ZTEST01 and executed the report. Strangely, it showed “No data to display”, even though data is available in the SPO for ZTEST01.

 

I struggled a lot and, to get to the bottom of it, dragged 0INFOPROV into the rows and executed the report. I was surprised by the report output.

 

3.png

 

0INFOPROV shows the SPO technical name, NOT the partition name, even though BEx shows the partition names in the value help.

 

So in the report I filtered the amount key figure with ZTEST and executed it; this time I was able to see data.

 

Please keep this behaviour in mind while working with a CompositeProvider on top of an SPO.

 

Thanks for reading this blog.

SAP BW on HANA Enterprise Cloud - "relocation without disruption"


This was an SAP webcast on how to consume BW through the HANA Enterprise Cloud (HEC) and Cloud Lifecycle Management, combining three topics into one session.

1fig.png

Figure 1:

 

Need to replicate data

 

The SAP Speaker read from the slides so no additional notes from me on this

2fig.png

Figure 2: Source: SAP

 

HANA Enterprise Cloud is a virtual private cloud with single tenancy infrastructure – it is not public – it is virtual

 

It is an extension of your corporate network

 

Three components on the left of Figure 2

 

Core services are described in roles & responsibilities document

 

Core services are a technical operation service

 

Infrastructure and core services are subscription based

 

IT planning, RDS, application services can be complemented; can be optional on a subscription basis

 

A technical assessment service is provided (plan, build, run) – how much data BW should have, expected growth, kinds of systems connected

 

System provisioning is through cloud lifecycle management tool

 

Core services keeps BW running through SLA’s

 

Plan depends on customer scenario.  Two use cases – customer relocate BW services or net-new installed BW

3fig.png

Figure 3: Source: SAP

 

Figure 3 refers to “build phase”

 

Two flavors of HEC – one flavor is Cloud Start and the other is Cloud Productive

 

They differ in SLA and commercial framework

 

It is up to customer which environment to begin with; depends on contracts signed

 

Cloud start lasts up to 1 year, weekly subscription

 

Cloud productive is up to 5 years with monthly subscription

 

Systems in cloud start have 95% availability; these are for non-productive systems

 

Systems can shift from cloud start to cloud productive after the contract is signed with higher SLA’s  (for POC’s)

4fig.png

Figure 4: Source: SAP

 

A switching service moves systems from Cloud Start to Cloud Productive

 

Cloud Productive is managed by SAP Core Services

 

Customers run their BW, leveraged on BW on HANA

 

Productive system remains your EDW

 

Service levels in cloud productive – clearly defined infrastructure activities (in roles & responsibilities document)

 

Availability is 99.5%, 24 x 7 support, defined windows for patching, etc.

 

Activity reports, remediation, dedicated personal contact, customer engagement manager, technical landscape manager – have customer-specific data recovery architecture

5fig.png

Figure 5: Source: SAP

 

Cloud lifecycle management tools support two ways – new installation and solution relocation

 

Simplest is the new installation

 

Relocation of existing BW solution to HEC- hybrid cloud scenario to connect ERP to BW or other data centers

 

SAP BW extractors are used to stage extractions

 

Aspects of WAN connected to ERP are covered during the planning phase

6fig.png

Figure 6: Source: SAP

 

You can get an empty SAP BW system – SAP BI content is not pre-activated; customer decides which BI content is activated

 

From pre-assembled RDS packages you can get SAP BW semi-activated BI content

 

Source system dependent objects would need to be reactivated after you connect to BW


There is an option for a full active BI content RDS; example data is imported for analytic scenarios

7fig.png

Figure 7: Source: SAP

 

SAP BW solution is relocated; hardware remains in your data centers

 

Two options – 1 is a system copy to HEC with the database migrated

 

Use the software provision manager for the system copy

 

It includes a stack split and Unicode conversion (SUM includes update and database migration)

8fig.png

Figure 8: Source: SAP

 

Figure 8 shows relocating SAP BW to HEC using SUM, with the import running in HEC

 

Relocate SAP solutions to HEC – there is no need to make a copy of on-premise system in the cloud

 

The original system is using the database migration option (DMO)

 

The Import phase runs in HEC

9fig.png

Figure 9: Source: SAP

 

There are two downtimes – less than 48 hours or greater than 48 hours

 

Managing of downtimes depends on the methods of SAP BW relocation

 

Copy & SUM in HEC scenario – system downtime can vary

 

1:1 copy can be performed in different ways – back up and restore or another tool

 

First scenario – only downtime for the export phase – SAP BW use case – is used to set up sandbox systems. Consider the integration aspects of extraction from ERP – stop batch jobs, relocate BW system, and after export you can restart the extractions to BW system on premise

 

Second scenario – takes into account business integrity – should be used for productive solutions

 

Downtime of export to import could be over 48 hours; if BW is big you may require more than 48 hours

 

As an example, the speaker said 30TB compressed data from non SAP database, moved using MPLS connectivity, accomplished in 1 weekend

 

Third scenario copy and update for optimizing downtime – set up in ERP systems – delta queue cloning – logical delta queue – achieve 2 short business downtimes

10fig.png

Figure 10: Source: SAP

For solution relocation or a new installation, you could use SAP RDS; an SAP BW to SAP HANA RDS is coming

 

Question & Answer

Q: Will get details on transfer of data to cloud?

A: No difference to on premise scenario

HEC extends network of customer; no additional tools

 

Q: How many customers are in HEC who are live?

A: Can’t give detailed numbers.  Most customers are existing BW customers and SAP manages the relocation as described.

UPDATE: I asked Steve Lucas yesterday on a SAP Chat for customers using BW on HANA in HEC and he said "Bosch, Siemens, 99c stores+++ all using BW on HANA in HEC"

 

Q: How would users run BEx or BI reports with this hybrid architecture?

A: for the BEX you need BI Java component (?) in HEC

BusinessObjects scenario – have customers run next to BW in HEC or if on premise can run in hybrid mode

Have both scenarios running productively in HEC

 

Q: For an existing customer what will be the impact of having BW fully activated?  BI content fully activated

A: It is not assumed to provide fully activated BI content for a relocation scenario.  Relocation is system copy or migration to HEC

 

Q: Would full data reload work the same as on premise architecture?

A: in SAP BW it remains the same

Typically in planning phase you evaluate volume

 

Q: Is BI Content for all industries supported?

A: new type of BI content model – EDW/Analytical data model, takes into the account LSA++

Industries  - refer to the BI content documentation

 

Q: Does that mean the data models would have to be redefined from relocation of on-premise system?

A: No the data models stay as-is with the relocation – you can consume BW same and once on HANA DB to take advantage of HANA DB

“relocation without disruption”

SAP Business Warehouse powered by SAP HANA: New openSAP Course


To support your company's analysis, reporting and other business intelligence functions, a data warehouse acts as a central entry point and single point of truth for various data sources. It consolidates and harmonizes data models and stores data for historic analysis. SAP Business Warehouse (BW), a long-standing and well-adopted enterprise data warehouse application, offers various integrated services that support, for example, building data models and extracting, loading and transforming data, and it helps to operate data warehousing processes efficiently.

Introducing SAP HANA as a database for SAP BW was a major breakthrough and tremendously improved many core functionalities. As the next milestone in the SAP data warehousing strategy, it offers many simplifications and optimizations in the core areas of SAP BW.

openSAP is now offering a free course on SAP Business Warehouse powered by SAP HANA which will introduce the renewed concepts and possibilities of SAP BW powered by SAP HANA. This course introduces the major guiding points and semantics of a data warehouse, demonstrating them in a business scenario. With this course, you will learn and experience the latest developments with SAP BW 7.4 as well as opportunities for real-time data integration and agile data modelling approaches which haven’t been possible before. With SAP BW, reporting and planning can run on the same platform, which will be demonstrated with the optimizations with SAP HANA. You will see how data can be consumed via different modern front-end tools and understand the concepts of how SAP HANA helps speed up data analysis.

The course starts on November 13 on openSAP and enrollment is now open.


Join openSAP and find out more about the value of running SAP Business Warehouse on SAP HANA.

BW on HANA scaleout Migration Part 1: Preparation Checklist and known issues


I recently worked on a BW on HANA scale-out migration project, and in this blog I would like to describe the preparation that was done for the migration and, in particular, problems that came up that you can avoid in your own projects. In Part 2 I will cover the migration itself and the issues I encountered. In Part 3 I will cover the post-migration activities and the problems we needed to resolve.


A good starting point that gives a good overview of the complete process for a BW on HANA scale-out migration is Marc Hartz’s best practice document, which you can find here: http://scn.sap.com/docs/DOC-39682. This document is constantly updated and is a must-read before starting any BW on HANA scale-out migration project.


Before starting the Migration you need to ensure that the following is done:

 

1: You have the Latest HANA Revision installed on the HANA DB.


The import may use a lot of log space, so it is highly recommended to set the log mode to overwrite on the HANA DB before starting the import, to avoid a log-full scenario. Please do not forget to restart the HANA DB after changing the log mode; this is important! Where to find the parameter is shown below:


In the HANA Studio look under TAB Configuration>Global.ini>:

studio.png


Afterwards, when the migration is finished, you can set the log mode back to normal if you want to have log backups of the system.
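The same change can also be made from the SQL console instead of the Configuration tab; this is the standard ALTER SYSTEM syntax (the restart after changing the log mode is still required):

-- Before the import: avoid log-full situations during mass inserts
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('persistence', 'log_mode') = 'overwrite' WITH RECONFIGURE;

-- After the migration: switch back to re-enable log backups
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;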

 

2a: You are using the latest BW media for the import. You can find the BW media on the SAP Service Marketplace under: Installations and Upgrades > Browse our Download Catalog > SAP NetWeaver and complementary products > SAP NetWeaver BW powered by SAP HANA > SAP NetWeaver BW 7.3 powered by SAP HANA > BW ON HANA INSTALLATION 1.0. From here you need to download the relevant kernel file, the SL Controller file and the latest SAP HANA Platform Edition files. Store the BW media and the software you downloaded on the application server where you will install the BW system.


2b: To do the migration I used the Software Provisioning Manager tool (at the time, SP01 Patch 02 was the latest version). This is intended to replace sapinst, and if you use this tool for the migration you don't need to do a lot of the manual configuration described in the note https://service.sap.com/sap/support/notes/1715048. You can find more information on the Software Provisioning Manager in the user guide: Software Provisioning Manager

By default, the Software Provisioning Manager creates the installation directory in the tmp directory (/tmp in our environment), so before starting you need to make sure there is enough space there. The documentation says at least 600 MB, but from my experience you need about 1-2 GB to be comfortable. If you need to start the installation again, the Software Provisioning Manager creates a new install directory each time; you can delete the old install directories to free up space. The other option is to change the tmp directory to some other directory with more space before starting the installation. How to change the default directory for SWPM is explained on page 61 under the heading Useful Information About the Installer.

Before doing the export or the import using the Software Provisioning Manager, please read the notes https://service.sap.com/sap/support/notes/1775293 and https://service.sap.com/sap/support/notes/1706930 for the latest known problems and available workarounds.

 

3: For the export that you use for the DB migration, it is critical that the export is done using the latest kernel, DBSL and R3load versions and patches, and also the latest BW support package when you are working on a BW scale-out scenario. For large tables it is really important that good table splitting is done, for the performance of both the export and the import. Something I did not know at the time of the migration is that, unlike the 720 kernel, the 721_EXT kernel contains the so-called MASSLOADER functionality, which improves the load performance with LOB data. To enable it you have to set the environment variable HDB_MASSIMPORT=YES. So you should use this kernel release for export and import if possible, to improve performance.

 

4: To ensure that the tables were distributed correctly across the nodes during the import into the HANA scale-out DB, we did the customizing described in SAP note https://service.sap.com/sap/support/notes/1819123 (if your HANA revision is 63 or higher, please follow the steps in note 1908075 instead). In short, two steps were required in our case:

 

A:

As part of the export execute the report SMIGR_CREATE_DDL and activate all SAP HANA-specific options, in particular the option "Row Store List". With this option a file ESTIMATED_ROW_COUNT.TXT is generated that contains the estimated number of rows of the BW tables. Copy this file to the /ABAP/DB directory of SWPM. As of SWPM 1.0 Support Package 2 this file is copied automatically.

B:

The file TABLE_PLACEMENT.TXT is attached to the SAP Note https://service.sap.com/sap/support/notes/1819123 Download this file and save it to a directory that can be accessed by SWPM during the Import. In the dialog phase of SWPM for SAP HANA landscape reorg parameters, choose the option "Use a parameter file" and enter the complete path including the file name as the "Parameter File".

 

During the Import you will come to this screen where you can point to the file:

 

swmp.png
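After the import you can sanity-check how the tables ended up distributed across the scale-out hosts, for example with a query along these lines (a sketch; the schema name SAPSBW is an assumption derived from the SID SBW used in this project):

-- Rough overview of column-store tables and rows per scale-out host
SELECT HOST,
       COUNT(*)          AS num_tables,
       SUM(RECORD_COUNT) AS total_rows
FROM   M_CS_TABLES
WHERE  SCHEMA_NAME = 'SAPSBW'   -- assumed ABAP schema for SID SBW
GROUP BY HOST
ORDER BY HOST;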

5: Before starting the import, ensure that the BW application server and the HANA DB have the same time zone setting; the note https://service.sap.com/sap/support/notes/1706930 has further information on this.


6: Another useful source that I was pointed to after the migration which applies not just to BW but to all Netweaver ABAP stacks is the “End-to-End + Best Practice Guide – Database Migration to SAP NetWeaver Application Server ABAP on SAP HANA”, you can find the guide here:

Classical Migration to SAP HANA

BW on HANA scaleout Migration Part 2: Migration process and issues encountered


For a BW on HANA scale-out proof of concept that we recently worked on, I would like to discuss the migration process, the issues we encountered and how we resolved or worked around them. This is part two of a three-part blog; you can find the first part, “BW on HANA scale out Migration Part 1: Preparation Checklist and known issues”, here

 

http://scn.sap.com/community/hana-in-memory/blog/2014/10/15/bw-on-hana-scaleout-migration-part-1-preparation-checklist-and-known-issues

 

Starting the GUI for the Software Provisioning Manager:

 

The OS for the BW application server was SUSE Linux 11.1. The SLES server build includes vncserver, which is software to enable a virtual desktop on Linux. You can use this software to start a GUI connection to the SWPM. A VNC session runs on the server, so it can be disconnected and reconnected from the user's laptop without interrupting the installation. In addition, more than one person can connect and view what's happening in a particular session. You can download TightVNC from the Internet – it is a single, lightweight executable.

 

Login to the HANA server using putty and type vncserver to start a new virtual desktop:

 

swpmeditPNG.PNG

 

Multiple desktops can be started if more than one person is working on the machine. It can be left running after installation completes as a permanent desktop to use for any sapinst activity. Connect to the session using your vnc client:

2.png

Browse to the Directory that has the SWPM software and start using the command sapinst:

swpmedit2.PNG

Initially I could not get the GUI to start and got the following error message:

4.png

The reason was that there were still some old sapinst processes running in the background. I could verify this by executing the command ps aux | grep sapinst at OS level. I killed these processes using the kill command:

 

5.png

Sometimes these problems with hanging processes happen if sapinst was not stopped correctly. The correct way to stop sapinst (to stop it completely, not only to close the GUI) is to go to the GUI, choose the second menu option “SAPinst” and then “Cancel”.

 

After doing this it was possible to run the command sapinst:

 

6.png

Migration Process:

 

The migration was done using Software Provisioning Manager version 1.0 SP02. First, the ASCS instance was installed on the BW application server, then the database instance (during this step the actual data is copied from the export files to the HANA DB). The last step is to install the primary application server instance.

 

DB Migration Import Performance:

 

For the database instance step we had bad performance with the import of the R3load packages; this process took 120 hours. The main reason for this was poor I/O performance on one of the HANA scale-out nodes, caused by a hardware problem with the RAID controller firmware version and the cache batteries. According to development, the runtime for a migration where the export and import are performed in parallel can be as low as 15 to 20 hours for an 8 TB system. Other factors that influenced the import performance were that the kernel version we used did not have the MASSLOADER functionality (kernel 721 has a mass loader that improves performance that kernel 720 does not have), and that there was room for more extensive table splitting. Even with the table splitting done, there were a lot of R3load packages with 100 million rows or more:

7.png

Errors Encountered with R3load Packages:

 

Out of a total of 1,175 R3load jobs there were 5 errors. Two of the errors indicated a resource problem during the insert of the data, and the root cause was not clear. Without making any change, the problem could be solved by using the retry button in the Software Provisioning Manager, after which the jobs were successful.

 

The third error, “invalid index name: column list already indexed”, regarding the creation of an index on a fact table, could be ignored. We set the “ERROR” status in the TSK file to “ok” (after the change the entry read: I /BIC/4ESCO_C01F~P C ok) and then used the retry button in the Software Provisioning Manager.

The last error, for two tables, was rc=26, which means that a duplicate key error occurred. A possible cause is that a table has no primary key on the source system but does have one on the target system. The error was resolved by dropping the primary key on the two affected tables using the following SQL:

 

ALTER TABLE ZHYTEST1 DROP PRIMARY KEY;

ALTER TABLE "/BIC/4FSCO_C01F" DROP PRIMARY KEY;
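If you want to verify the duplicates before dropping a key, a quick check can be run in the HANA SQL console. This is a sketch with hypothetical key columns (KEYFIELD1, KEYFIELD2), not taken from the system above:

-- List key combinations that occur more than once (hypothetical key columns)
SELECT "KEYFIELD1", "KEYFIELD2", COUNT(*) AS "CNT"
  FROM "ZHYTEST1"
 GROUP BY "KEYFIELD1", "KEYFIELD2"
HAVING COUNT(*) > 1;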

 

If you encounter errors during the export/import, or need further help with tuning the performance of the migration process, please refer to the best practice migration guide at the link below:

Best Practice Guide - Classical Migration of SAP NetWeaver AS ABAP to SAP HANA

 

Stay tuned for part 3 on the post-migration steps for BW on HANA scaleout and the issues encountered :-)

BW on HANA scaleout Migration Part 3: Post Migration Activities and issues encountered


This is the third and final blog in the series for a BW on HANA scaleout migration PoC that I worked on. The links to parts one and two are below; part one covers the migration preparation checklist and known issues, and part two covers the DB migration process itself and the issues encountered:


Part 1:

http://scn.sap.com/community/hana-in-memory/blog/2014/10/15/bw-on-hana-scaleout-migration-part-1-preparation-checklist-and-known-issues


Part 2

http://scn.sap.com/community/hana-in-memory/blog/2014/10/15/bw-on-hana-scaleout-migration-part-2-migration-process-and-issues-encountered


In general, for the post-migration activities you can follow the mandatory steps in the cookbook at the link below:

https://cookbook.experiencesaphana.com/bw/deploying-bw-on-hana/post-installation/

Most of these steps are straightforward as described in the cookbook, so in this blog I only provide additional information to supplement the cookbook and cover the steps where we ran into issues. If pressed for time, the "optional" steps in the cookbook may not need to be executed.

 

Step 6: Reconfigure change and transport system:


In transaction SE06, complete the post-database-migration options: select the “Database copy or Database migration” radio button and click on perform post-installation activities, as shown below:

Transport org.png

In this case another BW SID (GBP) was already installed on the application server, so it was necessary to configure a new transport domain for SBW in transaction STMS.


Click on the “Other configuration” button:

no2.png

On the next screen, click on “Configure new domain”:

no3.png

Accept the domain that is proposed:

no4.png

On the next screen select the new standard password option:

no5.png

Then you are logged into the new domain controller and you can check the connection from here:

no6.png

Go to the import overview, and on the next screen click the check button to confirm that the transport configuration is okay.

 

Step 10: Import SAP licenses


Log on with user DDIC to client 000:

You can generate a license key from here:

https://websmp207.sap-ag.de/licensekey

no7.png

Click on submit and the license key will be generated; you can download it to your desktop, and a copy will also be emailed to you. Before you can apply the new license key in the system, you have to delete the old license keys in transaction SLICENSE:

no8.png

You also need to delete the still-valid temporary license key. Then click on the Install button in transaction SLICENSE and browse to where you saved the downloaded license key. The new license key is successfully installed:

no9.png

You then need to execute the following steps, also in client 000. Go to transaction SECSTORE, execute, and delete the following entry if it is marked red:

 

/HMAC_INDEP/RFC_INTERNAL_TICKET_4_TRUSTED_SYSTEM :

no10.png

Go to SE38 and execute the report RS_TT_CLEANUP_SECSTORE. After deleting the entry and executing the report, you should see two green entries in transaction SECSTORE (you don't need to worry about the other red entries):

no11.png


Step 15: Check, repair or restore BW source systems


Execute transaction RSA1. Normally for a PoC we don't have any connected source system apart from the Myself connection and flat files. It is very important that the Myself connection works correctly. You can check the connection in transaction RSA1 under the option Source Systems:

no12.png

By default the RFC name of the Myself connection is the same as the logical system name (in this case SBWCLNT200, as per the screenshot above). You can check the Myself connection (BW's connection to itself) in SM59.

For PoC purposes I removed the hostname, user name and client, leaving these fields blank as shown below, with the system set to Unicode. See SAP note 538052 (https://service.sap.com/sap/support/notes/538052) for further information on maintaining the Myself connection:

 

Technical Settings Tab:

no13.png

Logon and Security Tab:

no14.png

Unicode Tab:

no15.png

Step 17: Run BW post-migration reports


Run report RS_BW_POST_MIGRATION in background with variant SAP&POSTMGRHDB.

You may get an error message like this with the report:

no16.png

The easiest way to fix this is to create a dummy destination in SM59 with the logical system name (in this case Q96CLNT200); you don't need to enter any other information on the SM59 screen. It should also be possible to delete the source system from RSA1, as long as you don't need it for the PoC.

 

Transforming InfoCubes & DataStore Objects to HANA-optimized objects


When executing the report RSDRI_CONVERT_CUBE_TO_INMEMORY to in-memory-optimize some of the DSOs, I got the following error:

no17.png
The reason for the error message was the following: DataStore objects with 3.x dataflows can be converted only if the outbound dataflow is on SAP NetWeaver BW 7.x technology (the inbound dataflow can be on 3.x or 7.x technology). It was therefore necessary to migrate some of the 3.x dataflows of the related DSOs to 7.x dataflows; this migration converts transfer/update rules to transformations and InfoPackages to DTPs.


HOW TO MIGRATE A 3.X DATAFLOW:

 

I give an example here, but an important point to remember is that as of Support Package 10 for BW 7.30 you no longer need to in-memory-optimize a DSO, as the standard DSO has the same performance on HANA!

 

Select the data flow you want to migrate:

no18.png

Give a name for the Migration Project:

no19.png

Flag all the data flow objects you want to migrate on the following screen and click on save:

20c.PNG

Click on execute “Migration/ Recovery”:

21c.PNG

Flag all required steps again and click on migrate/recovery:

no22.png

Click on yes on the following screen:

23c.PNG

The migration was successful for all steps except the DataSource conversion, since the required source system is not connected to our PoC environment:

24c.PNG

After migration of the 3.x dataflow, the DSO can be HANA-optimized:

no25.png

DATA STAGING ERRORS


We also encountered some issues regarding the technical characteristic 0REQUID when testing the data loading; see the log from transaction SM21 below:

no26.png

The problem was that an old version of the technical characteristic was installed that did not have the required master data read class. We reinstalled 0REQUID from the business content to resolve the problem. You need to use the overwrite option, not merge-and-copy, when installing from content. Overwriting should not be a problem for technical characteristics, as they are SAP-owned objects and should not be changed by customers.

 

This is a known issue that can happen when the master data read class is missing; you can find further information here:

https://service.sap.com/sap/support/notes/1695112


We also had some loads failing with the duplicate key error shown below:

 

no27.png

The reason is that before the change in SAP note 1645181 (https://service.sap.com/sap/support/notes/1645181) it was possible to have duplicate records with the same technical key in one data package. To resolve the problem, we added some coding to the end routines as per the SAP recommendation in note 1223532 (https://service.sap.com/sap/support/notes/1223532).

 

An example of the code change is shown below:

no28.png
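For orientation, here is a minimal, hypothetical sketch of the deduplication pattern typically used in a transformation end routine; the actual coding is in the screenshot above and the referenced notes, and the key fields DOC_NUMBER and FISCPER are illustrative placeholders only:

* Sketch only: remove records with the same technical key from the data package.
* DOC_NUMBER and FISCPER are illustrative key fields, not taken from this system.
SORT result_package BY doc_number fiscper.
DELETE ADJACENT DUPLICATES FROM result_package COMPARING doc_number fiscper.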

This is the end of this BW on HANA scaleout blog series. I hope you find the information useful. If you have any feedback or comments, please let me know :-)


Visualising BW Query Usage Statistics on a Globe using HANA, XSJS, SAPUI5 & D3


This blog explains how to expose standard BW business content for consumption on a 3D globe in a browser.  The relevant source code and HANA build are available in a GitHub repository.

 

Introduction

Some of the most interesting opportunities opened up when you are in a BW-on-HANA environment are the so-called "hybrid scenarios" or "mixed scenarios".  These mixed scenarios allow BW data to be exposed to HANA and vice versa.  At first glance this might seem quite boring, but really it's a very underrated and very real benefit of using BW-on-HANA.  For the past 15 or so years BW has operated in a pretty closed, SAP-centric environment, but suddenly, when you sit it on HANA, your BW data can be very easily exposed to a whole range of new technologies, including the D3 JavaScript libraries for data visualisation.  I say "new", but of course these technologies are not really that new – they're just new to BW data.  This isn't to say you couldn't do these things with BW-on-non-HANA, it's just that now it is trivially easy to do so.

 

How it will look

Ok, so an example.  Most BW implementations will make use of the BW query usage statistics at one time or another.  Normally you'd install business content to get cubes like 0TCT_C01 available to show query usage and performance.  In this example we're going to adapt this business content to visualise the same data on a 3D globe in a browser.  The result looks like this:

 

StatsOverview.png

On the left above are the usual BW statistics you'd get from business content.  On the right is the result you can get in a browser using the same data with some added user location information, developed using HANA, XSJS and the D3 visualisation libraries.  A moving view gives a better idea: the query usage statistics are displayed as animated, expanding circles over time, where the maximum circle size represents the number of query navigation steps:

 

 

It would be challenging to produce such an output using standard BW reporting tools (certainly using BEx Excel would be tough!), but in a mixed scenario we can do this easily.

 

How to Build it

 

So how do we build this?  The data flow diagram looks like this:

DataFlowDiagram.png

On the left-hand side of the above diagram, it's pretty straightforward business content.  The parts on the right are where we've exposed the same data to HANA, incorporated user location data from an external table, and made the result available to a browser.  So let's walk through the steps needed to build each of the items in the above diagram:

 

Step 1 - Expose Business Content Cube as HANA model

This part is as simple as ticking a tick-box on the business content cube 0TCT_C01 in transaction RSA1 in BW:

BWTickBox.png

Ticking the above generates an Analytic View in HANA, which we can see in the repository:

GeneratedModel.png

We will later use this generated Analytic View in our own custom HANA model.

 

Step 2 - Add Location Table

Now we need a table to hold user location data.  The BW query usage statistics hold data by user ID, and we'll also hold locations by user ID.  In a live scenario you'd probably want to associate a user with an address and then associate that address with a latitude and longitude coordinate.  I created a table called ZUSERLOCATION.hdbtable like this:

 

table.schemaName = "ZBWSTATS";
table.tableType = COLUMNSTORE;
table.description = "User locations";
table.columns = [
    {name = "USERNAME"; sqlType = NVARCHAR; length = 12; nullable = false; comment = "User name"; },
    {name = "LAT"; sqlType = DECIMAL; nullable = false; precision = 12; scale = 3; comment = "Latitude"; },
    {name = "LON"; sqlType = DECIMAL; nullable = false; precision = 12; scale = 3; comment = "Longitude"; }
];
table.primaryKey.pkcolumns = ["USERNAME"];

 

I used HANA SP7 to build this, where it isn't possible to enter spatial data types directly.  Jon-Paul Boyd explains why and gives a workaround in his question about spatial data.  So I used that workaround and ALTERed the table directly from the SQL console to add the spatial column:

 

ALTER TABLE "ZBWSTATS"."ZBWSTATS.Data::ZUSERLOCATION" ADD (LATLON ST_POINT(4326));

Then the finished table definition looks like this, with a spatial data type ST_POINT on the end:

 

ZUSERLOCATION.png

 

Step 3 - Adding Locations to Location Table

Now we need to add some locations to the location table.  I cobbled location information together from several sources based on cities.  The full city list used, together with the latitude and longitude, is available in GitHub.  This data was then linked to user IDs and loaded into the ZUSERLOCATION table.  Because one of the data types is spatial, I found it easier to load the data using an SQL script rather than a file upload.  From the city location and user information I used Excel to generate a whole list of SQL statements, one per user, like this:

 

insert into "ZBWSTATS"."ZBWSTATS.Data::ZUSERLOCATION" values('BWDEVELOPER', 12.463, 41.929, new ST_POINT(12.463, 41.929));

Note the "new ST_POINT" part used to create the spatial data.  Once loaded, the data can be viewed using:

 

SELECT "USERNAME", "LAT", "LON", "LATLON".ST_AsGeoJSON() from "ZBWSTATS"."ZBWSTATS.Data::ZUSERLOCATION"

The output of the above is:

UserLocationData.png

 

Step 4 - Join BW Usage Stats with Locations in a Custom HANA Model

Now we have the base data we need: one part is the generated Analytic View from 0TCT_C01 (which I then wrapped in another view called CV_LA_STATS) and the other part is the ZUSERLOCATION table.  The model is simply a join over these:

JoinStatsLocations.png

Now the HANA model is done.  Again, because we are on SP7, the data preview fails because we have a spatial data type, so we have to explicitly convert it using SQL like this:

 

SELECT "TCTUSERNM", "CALDAY", "SESSIONCOUNT", "LATLON".ST_AsGeoJSON() FROM "_SYS_BIC"."ZBWSTATS.Models/CV_LE_STATS_PT"

 

Step 5 - Build and Attach a Frontend

When I started looking around for a suitable frontend for this, I initially looked at Lumira.  On balance, Lumira isn't really suited to this job (as a developer I think you look for more control over a frontend tool), and it doesn't do 3D globes.  I also looked at a GIS tool called ArcGIS.  That tool was at the other extreme: too powerful, with many more features than I needed for this little demo.

 

For this particular use case, I think it's hard to beat the frontend developed by Aron McDonald in his HANA Earthquake demo.  So I pretty much stole that, tweaked it a bit to report on query usage instead of earthquakes, and stole a shaded globe from another site to give it a slightly different look and feel.

 

As far as connecting up the frontend goes, I'd normally look to use OData to expose a HANA model.  However, I could not get OData to support the ST_POINT spatial type; again, I guess this is because I was on SP7.  Therefore I used an XSJS service that does the data selection and provides a JSON output of the data.  Here is the GUI project:

UI.png

The entry point from a browser is: <server:port>/ZBWSTATS/UI/globe/index.html.  The structure of the UI project is as follows: services/quakeLocation.xsjs does the data selection (named "quake" because of where I stole it from!); usageModel, usageView and usageStyle are an approximation at splitting model and view; and the world-110m.json file contains the data for the globe.  All code for the project is available in the GitHub repository.
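For illustration, here is a minimal sketch of what such an XSJS service might look like.  This is not the actual quakeLocation.xsjs from the repository; the column types are assumed and error handling is omitted:

// Minimal XSJS sketch: read usage statistics plus locations and return JSON.
var conn = $.db.getConnection();
var pstmt = conn.prepareStatement(
    'SELECT "TCTUSERNM", "CALDAY", "SESSIONCOUNT", "LATLON".ST_AsGeoJSON() ' +
    'FROM "_SYS_BIC"."ZBWSTATS.Models/CV_LE_STATS_PT"');
var rs = pstmt.executeQuery();

var rows = [];
while (rs.next()) {
    rows.push({
        user: rs.getString(1),
        day: rs.getString(2),                  // CALDAY assumed to be a string (YYYYMMDD)
        sessions: rs.getDouble(3),             // key figure assumed numeric
        location: JSON.parse(rs.getString(4))  // GeoJSON point from ST_AsGeoJSON()
    });
}
rs.close();
pstmt.close();
conn.close();

$.response.contentType = "application/json";
$.response.setBody(JSON.stringify(rows));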

SAP BW 7.4 SP8: Advanced DSO: Installation


In my previous blog What’s new in SAP NW BW 7.4 SP8, I wrote about the Advanced DSO (ADSO) having large potential in the SAP BW modeling landscape, and about the fact that it unfortunately wasn't available yet.

 

Well, now it is.

 

The Advanced DSO is now available for any BW on HANA installation running 7.4 SP8. SAP, however, makes a clear statement on using the ADSO on SP8 (i.e. for modeling new scenarios): the installation and preparation of the BW system is more complex and risky than waiting for SP9, and a set of functionalities is not yet available when running SP8/SP9. These limitations are well described in SAP note 2070577 - (advanced) DataStore Object - availability in BW7.4 SP08.

 

Now, since I just want to get my hands on the ADSO and am not running a productive system, I went ahead and asked SAP to provide me with the stuff I need to use the ADSO. First of all, it is important to understand that SAP only supplies the note bundle and installation instructions when specifically asked for them. To do so, check the contact information in the SAP note I just mentioned. SAP will set up support and provide information from that point on.

 

One important piece of advice: before starting to implement, make sure you have a recent backup or take one first!

 

After doing so, a total of 46 notes needs to be implemented in the BW system, running UDO programs in between and downloading a number of prerequisite notes for some of them. When implementing these notes, make sure to follow the implementation sequence exactly as instructed.

Next, make sure you enable Dynamic Range Partitioning in HANA according to note 2081135.


After finishing up all activities, you should see the option to create an ADSO in the BW modeling tools in HANA studio. Please note that using the ADSO requires the latest version of the BW Eclipse Modeling Tools, version 1.5.


BWMT options.png


After creating and configuring the ADSO, it will become visible in the workbench as well.


Workbench view.png

 

Finally, a huge thanks to Rainer Hoeltke and Achim Seubert for outstanding support on every small (self-induced ;-)) hiccup I faced in this process!

 

Sjoerd van Middelkoop

Open ODS View in HANA EDW – The best of BW and HANA combined


Since the introduction of SAP HANA, many customers have wished to add to HANA the more robust features they are familiar with from SAP BW. Those customers can now adopt BW on HANA, a SQL-oriented data warehouse, or a combination of both; the combination of SAP BW and the SQL-minded approach is termed the ‘HANA EDW’. Especially if you have implemented ERP (or CRM, etc.) on HANA, you can start from SAP HANA with a SQL approach for operational reporting and move into the SAP BW side in a virtual fashion for data warehousing.

 

With BW 7.4 SP5+, we have a new acquisition layer: the ‘Open ODS View’, based on fields rather than InfoObjects, to allow for ‘Function before Integration’ modelling. Prior to BW 7.4, nearly all common function modelling – BEx queries, InfoProviders, dataflows, authorisations and operations – was based on InfoObjects. It is the InfoObjects that define the integrated EDW model, and hence: no InfoObjects, no functions. By contrast, with ‘Function before Integration’ modelling, the Open ODS field-based approach makes the following possible:

  • Function modelling on any field data
  • Crossing the border – Open ODS Function modelling on remote data (HANA Smart Data Access)
  • Integration of field data into EDW context – association of InfoObjects to Open ODS Views
  • BEx Queries atop Open ODS Views
  • Embed Open ODS Views into Composite Providers (Only Union is allowed for Open ODS Views up to SP7, Join is supported as of SP8)

 

What does this mean in practice? We will be demonstrating the technical capability of Open ODS Function modelling on remote data. In the NTT DATA landscape, we have BW on HANA DB (namely: H04) accessing a remote ECC on HANA DB (namely: H03), which has the HANA Live Virtual Data Models installed. The landscape is illustrated below:

NTT+BW+Landscape.jpg

This landscape is represented in the HANA Studio as follows:


NTT+BW+Landscape+HANA+Studio.jpg

The scenario we are demonstrating is Cost Centre Plan and Actual Totals. In H03, we have created our own custom Calculation View: Z_CC_PLAN_ACTUAL_TOTAL with a simple join of 2 HANA Live Reuse Views: CostCenterPlanActualCostTotals and ControllingArea:

HANA_Model.png


In the node Projection_1, we created a calculated column for FiscalYearPeriod:

HANA View Calculated Column.png
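The exact expression is visible in the screenshot. Purely as a hypothetical illustration, a derivation of FiscalYearPeriod in the BW-style YYYYPPP format could look like this in SQL terms (FISCALYEAR, FISCALPERIOD and the source table are assumed names, not taken from the view above):

-- Hypothetical sketch: combine year and zero-padded period into YYYYPPP, e.g. 2014001
SELECT "FISCALYEAR" || LPAD("FISCALPERIOD", 3, '0') AS "FISCALYEARPERIOD"
  FROM "MYSCHEMA"."COST_TOTALS";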


In the next step, let's move to BW (HB2) to create an Open ODS View, which will allow us to consume/expose the HANA Calculation View Z_CC_PLAN_ACTUAL_TOTAL in BW. (Please note that it's also possible to do this within the BW Modelling Tools, an Eclipse plugin.)

Prior to the creation of an Open ODS View, make sure a DB Connection is available:


DB_Con.png


Then we created an Open ODS View ZCOOM_OV01 with the type ‘Virtual Table via HANA Smart Data Access’, based on our Calculation View:

Open_ODS_View.png


Using the ‘Display Data’ functionality, we see that the FiscalYearPeriod field is correctly populated via the calculated column in the Calculation View. In fact, all the data available in the remote HANA system H03 is now available through an InfoProvider in BW:


Display_Data.png


To integrate fields into an EDW context, we associated fields with InfoObjects (it is also possible to associate fields with other Open ODS Views). The purpose of InfoObject association is to leverage some of the features offered by the BW Analytic Manager (aka OLAP engine), for example hierarchy support and InfoObjects' navigational attributes. Below is an illustration of the association to 0COSTCENTER:

Association.jpg


As 0COSTCENTER is compounded to 0CO_AREA, we also specify the compounding for COSTCENTER:

Compounding.png


In addition, 0PROFIT_CTR is added as a navigational attribute:

Nav_Attr.png


If a field is assigned to an InfoObject, BW's analysis authorisations can be leveraged, as the ‘authorisation relevancy’ of the InfoObject is inherited. If a field is not assigned to any InfoObject, BW also supports creating an analysis authorisation directly on the field. Below is an analysis authorisation created on the field CompanyCode:

Auth1.png


Auth2.png


In the final step, a BEx query is created on top of the Open ODS View with the standard cost centre hierarchy enabled for display, which saves you from dealing with hierarchies within HANA. Please note that fields are also supported in the BEx query:

Query1.png


Query2.png

 

Whereto from here?

Starting from SAP BW 7.4 you can embed Open ODS Views in a CompositeProvider (the latest version only allows this via the BW modelling perspective in Eclipse). Unfortunately, if you are not on SP8 this allows you to do unions only. Another limitation is that you cannot create an Open ODS View on most HANA Live query views due to their input parameters. BW 7.4 SP8, now generally available, provides the following exciting features:

  • Support of planning functionalities for UNION cases
  • Possibility to create a CompositeProvider on top of Composite Providers (only UNION supported)
  • Use of Open ODS views as Source Objects and allowing JOINS to be used
  • Support of input parameters when Open ODS views are used as source objects

We will be providing more demos based on the new functionalities introduced in SP8.

Bex Query Designer and HANA views


Hello,

This morning I opened the good old BEx Query Designer after a long time away from it and, surprise!, all the HANA modeling objects that we developed – analytic views, calculation views and attribute views – are available for BEx use as transient providers.

 

1.jpg

The package names start with [2H-]:

2.jpg

3.jpg

The parameters of the calculation view are placed in the data part in the Query Designer; in the case of mandatory parameters, the characteristic must be restricted.

 

4.jpg

Enjoy!

Amir

BW on HANA Eclipse Based Modeling: Integration Scenarios - 1


This blog covers two scenarios: a) integration of BW master data with transaction data retrieved from a HANA Analytical view via a BW composite provider, and b) integration of a HANA view in BW using a transient provider for reporting in BEx.

 

a) Integration of BW master data with transaction data retrieved from a HANA Analytical view via a BW composite provider

 

In this scenario we have used the new Eclipse-based modeling environment, where BW modeling can be done in the HANA Studio BW Modeling perspective.

 

Let us consider the integration scenario shown below, where BW master data is integrated with transaction data from a HANA view using a composite provider.

                                                       0.png

We have created a new composite provider ZSCM_COMP in the Eclipse environment.

 

Three tabs are available when creating a composite provider:

  • Overview: provides the properties of the composite provider
  • Scenario: where all the mappings of the objects are done
  • Output: where the expected output fields are available

 

                             1.png

 

 

In the Scenario tab, a union operation is done between the HANA Analytical view ZTEST_DS and the BW master data InfoObject ZCUST_SC.

 

Here customer data is populated from the customer InfoObject and transaction data from the HANA view ZTEST_DS.

                             2.png

 

One-to-one mapping is done between the objects:

 

 

                              3.png

 

Below is the expected output for the composite provider:

 

                                        4.png

After successful activation of the composite provider, the data can be previewed in HANA Studio as below, showing the number of products purchased and the cost for each city, sales area and customer.

 

                                5.png

 

As the composite provider is created in the backend HANA DB, it is automatically reflected in the BW system as shown below, where we can create BEx queries as per the reporting requirements.

 

 

 

                                                      6.png

 

 

b) Integration of HANA view in BW using Transient Provider for reporting in BEx.

 

This scenario explains how a HANA Analytical view can be converted to a BW transient provider so that it can be consumed for BW reporting purposes.

                                             7_0.png

 

Go to transaction RSDD_HM_PUBLISH, where we can generate a transient provider from a HANA Analytical view.

                    7.png

Note: we are reusing the already available HANA view ZTEST_DS.

 

Enter the desired information model by selecting the relevant HANA package to generate a transient provider:

                              8.png

New analytic indexes will be created:

 

                     9.png

 

The transient provider @3ZTEST_DS has now been created successfully:

 

                                10.png

 

The BEx query ZTEST_SCN is created based on the transient provider @3ZTEST_DS, and the data is read from the HANA Analytical view on query execution.

                                 11.png

 

Below is the report output, where the data is retrieved from the HANA view via the transient provider:

 

                             12.png

 

To conclude, many other integration scenarios are available from BW 7.40 SP5 and above, giving end users the flexibility to use objects based on client requirements.
