SCN Blog List - SAP BW Powered by SAP HANA

Dutch BI Podcast: Episode 16; S/4HANA Analytics with CDS and SAP BW ABAP


Yesterday we recorded, via Google Hangouts, yet another episode of "De Nederlandse BI Podcast". We're proud to mention that this time we had two renowned special guests: Ulrich Christ and Juergen Haupt, who explained the ins and outs of S/4HANA Analytics with CDS views and BW as its evolutionary extension. You can watch our video podcast by playing the YouTube video below:

This blog has been cross-posted on www.TheSventor.com


Composite Provider issue: Internal SQL error occurred in DBSL; SY-SUBRC from ADBC(DBSL) = 16; maybe it is due to memory shortage


Hi,

Recently I faced an issue while activating a composite provider that included 16 DSOs and one InfoCube as a union, plus six master data InfoObjects as a join.

The composite provider contained nearly 700 fields (InfoObjects and key figures).

During activation the system raised the error below and the provider could not be activated:

"EC:9999 Internal SQL error occurred in DBSL; SY-SUBRC from ADBC(DBSL) = 16; maybe it is due to memory shortage, try to increase parameter ztta/short_area; refer to developer trace file dev_w7 for further information

Creation of column view for @3ZCP_TEST failed

CompositeProvider ZCP_TEST not created"



Resolution:

The reason for the error is a parameter with an obsolete value.

The standard value of the profile parameter ztta/short_area is 3,200,000 bytes; its maximum value is 6,400,000 bytes.

In the case of very large InfoProviders, the associated model can become very large if the InfoProvider has many characteristics, key figures and/or PartProviders. The model is sent to the SAP HANA database via an SQL statement. Since this statement can be many megabytes in size, the value of ztta/short_area may be exceeded, and the result is the error described above.

Our Basis team changed the ztta/short_area parameter as per the note and restarted the system. After this change we were able to activate the composite provider.
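For illustration only (the exact value should be taken from the SAP Note and your own system sizing), the corresponding entry in the instance profile might look like this:

    # Instance profile excerpt - illustrative value, not a recommendation
    # (3,200,000 bytes is the standard value; 6,400,000 bytes is the documented maximum)
    ztta/short_area = 6400000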

 

 

 

 

 

Regards,

Madhukar

Migration Sizing for SAP BW on HANA


Dear SAP Users,

This time I would like to share my experience with a BW on HANA migration.

As an initial prerequisite, customers who have an existing SAP BW running on a "traditional" database platform and plan to migrate to HANA need to understand migration sizing.

The purpose of this exercise is to predict the resource requirements of a BW system on an SAP HANA database.

The sizing report /SDF/HANA_BW_SIZING is a convenient method to estimate the memory requirements of a BW system after migration to SAP HANA. The report requires ST-PI 2008_1_7xx SP8 and SAP NetWeaver BW 7.0 SP 1 or higher.

Before running the report /SDF/HANA_BW_SIZING, please consider the following:

  • In order to run the report in parallel, make sure that a number of dialog work processes corresponding to the specified degree of parallelism is available.
  • Run the report on up-to-date database statistics.
  • Try to set the dialog timeout parameter rdisp/max_wprun_time to a minimum of 7,200 seconds.

 

Report Selection Screen: SE38 - ‘/SDF/HANA_BW_SIZING’



  • Check the flag 'Store output file' to save the output to the specified file in the AL11 directory; alternatively, it can be saved as a text file.
  • Specify the number of parallel processes used to analyze tables. Make sure you have enough free work processes.
  • Tables smaller than 1 MB can be suppressed in the output (but are still counted for sizing).
  • Select the precision. Higher precision results in higher sampling rates and higher runtimes; usually, low precision delivers sufficiently reliable results.
  • Specify the number of years to be considered for growth and check the flag to enable the future growth calculation.
  • Enter a value for relative or absolute growth.
  • Decide whether you want to specify yearly growth as an absolute value or relative to the current size.


Sizing Report Results: Summary:

The result screen of the Sizing Report contains very detailed information on the source system and its corresponding HANA sizing:

  • Overview of the size of the source DB, based on sampled data. Here the sizes are based on an ABAP-internal representation of the data.
  • A summary of the resources that this system requires at minimum when running on a HANA database.
  • Extrapolated resource requirements in the future, based on the specified growth rate.
  • Overall sizes for row and column store tables, both for master and worker nodes.
  • Detailed size information per table, including the estimated ABAP size based on the sample, the derived HANA size, and the record count. This detailed list can be used to directly determine database tables that should be targeted by housekeeping measures.



Sizing Report Results: How to Read:

SOURCE DB CONTENTS:

  • Summary on tables found in source DB
  • Size figures reflect the sizes of tables as if they were loaded into ABAP internal tables (ABAP size), based on a sample set of records read from each table.

 

 

NOTE: This section describes the contents of the source database as derived from data sampling. Also note that indexes, temporary space, etc. of the source DB are NOT reflected here!


MINIMUM SIZING RECOMMENDATION:

  • Minimum amount of memory required to operate the given system on a HANA database. This amount includes space for storing database tables as well as space for runtime objects, and it needs to be physically available in the target landscape (HANA size).


 

SIZING RECOMMENDATION - FUTURE GROWTH:

  • This section shows an estimate of the future growth of the HANA system, based on the specified growth rate. The anticipated growth rate of the source system should be specified. Growth is then split between system tables (10%) and tables with business data (90%).

 

SYSTEM INFORMATION:

  • This section shows information on the system on which the sizing report was executed, runtime information and the most important parameters of the report.


 

TABLE DETAILS:

  • Detailed list of memory consumption figures per database table, including the non-active and secondary index share (if applicable), but without runtime uplift. Important for identifying the impact of the largest tables and of tables subject to housekeeping (ABAP and HANA size).
  • Provides details on row store and column store tables, with an additional classification of tables with respect to write-optimized DSOs, change logs of standard DSOs, and PSA tables.




BW on HANA Sizing: Scale Out

If a single HANA node cannot accommodate data due to limited memory, data has to be distributed across multiple nodes (scale-out).

  • Master node will handle system load and transactional load: ABAP system tables and general operational data of the BW are stored on the master node. Note that this includes both column store and row store data. DDL statements are executed on this node, global locks are acquired here.
  • Worker nodes will handle OLAP queries as well as loading/staging/activation/merging. BW data (master data plus cubes/DSOs/PSAs - all tables that have been generated by BW) is distributed across the column stores of all workers. This ensures a balanced utilization of the available CPU and memory resources. Note that no column store data (except system tables) may be stored on the master node.


BW on HANA Sizing: How to Size HANA Systems:

HANA cannot allocate the complete physical memory of each node. Some space must be reserved for the operating system and other services. As a rule, 10% of the first 64 GB and another 3% of the remaining memory will be reserved exclusively for OS purposes.
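As a quick sanity check, the rule of thumb can be worked through for a hypothetical 512 GB node; the statement below is just a back-of-envelope calculation in SQL, not an official sizing formula:

    -- 10% of the first 64 GB plus 3% of the remaining 448 GB is reserved for the OS:
    -- 6.4 GB + 13.44 GB = ~19.8 GB reserved, leaving roughly 492 GB usable
    SELECT 512 - (0.10 * 64 + 0.03 * (512 - 64)) AS usable_gb FROM DUMMY;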


How to check the SAP BW Query push down to SAP HANA?


The execution of an SAP BW query consists of many different steps which together make up the total runtime. With SAP BW running on SAP HANA this has been optimized, as we all know. For further details, please read Klaus Nagel's paper about the various query steps and the runtime improvements over time by BW release.

 

Besides the fact that every single query benefits from the technology and performance of HANA by default, we have implemented further improvements to push logic down to the database. This means we are moving the calculation logic of certain operations, which so far had been executed on the application server, into the database layer. This is what we call "push down", and it enables BW on HANA to improve even very complex OLAP calculations. By the way, this is one major difference between BW on HANA and all other deployment options of BW.


This blog gives you some tips and tricks on how to verify whether a BW query's operations are being pushed down or not.

 

1. Check if used OLAP functions in a query are enabled to be pushed down to SAP HANA

  • The focus of push down is complex OLAP operations, so not every BW query has to be executed via a so-called calculation scenario, which represents the logic as an artifact in the database. There is an SAP Note, constantly updated, showing the functions already optimized for SAP HANA and those planned:

2063449 Push down of BW OLAP functionalities to SAP HANA


2. Check BW Query Statistics and understand how a push down works

  • Go to RSRT and check the query statistics. To do so, please use "Execute & Debug" without any cache and with the statistics displayed:


  • The query result is showing three key figures: Sales Order Value (Aggregation), No. of Sales Orders (Counter), Average Sales Order Value (Formula).


  • (Please use F3 or the back button to get to the statistics after the query result.)
  • Executed normally (with Operations Mode = "6") on SAP HANA, the query shows a runtime of 4.5-5 seconds (in our not particularly well-configured test system) and an overall selected data volume of ~13.8 million data records, which are aggregated and calculated on database level down to ~1,200 records. This amount of records is transferred to the application server, where the OLAP processor takes over to produce the query output we have already seen.


  • There is even a so-called "HANA Calculation Engine Layer" tab, where you can see the different processing steps in HANA; overall this is a good indicator that the query was executed in this bottom-most layer.


  • Understand the mechanism of push down:
  • The easiest way to see what’s happening during query execution is to compare the query statistics with a push down – like we did already – and without. This can easily be simulated via RSRT:
      • “Mode 0” will use the SQL interface which is the most classic way to execute a query
      • “Mode 2” which is a HANA/BWA specific option which does not use pushdown of exception aggregations like counters
      • "Mode 3" optimized access in HANA / BWA without exception aggr. push down
      • “Mode 6” will push down as many query steps to SAP HANA as possible

      In our example to see the difference easily we’ll use the SQL interface with Mode 0.


  • Looking at the query statistics, a few things are obvious:
    • The runtime switched from ~5 seconds to ~154 seconds for the identical query result
    • The transfer of data records to the application server increased to ~5.4 million records
    • This amount of granular records is processed on the application server in the Analytic Manager (OLAP processor), which took ~115 seconds instead of 0.2 seconds
    • There is no HANA Calculation Layer shown in the statistics, which indicates that no push down of OLAP calculations happened


3.  Check if a SAP HANA Calc. Scenario is being generated and used

  • A calculation scenario is a HANA artifact generated on database level that includes the logic to be processed, so in most cases it is the key to pushing logic down.
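If you have SQL access to the HANA database, one hedged way to verify that a calculation scenario exists is to query the corresponding monitoring view; the LIKE pattern below is an assumption, as the exact naming of generated scenarios differs between releases:

    -- List calculation scenarios whose name contains the query name
    SELECT SCENARIO_NAME
      FROM SYS.M_CE_CALCSCENARIOS
     WHERE SCENARIO_NAME LIKE '%ZMYQUERY%';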



 

4. Check if exception aggregation can be pushed down

  • Execute the query again and use the "Explain" functionality to see whether the query contains exception aggregation and whether it can be executed on database level



The output of the Explain is a log showing which exception aggregations can be pushed down and which cannot. Please keep in mind that this tool is built for support cases, so the output may sometimes not be obvious to a non-developer, or may even seem misleading at first.


 


SAP HANA Security – Create Roles and Privileges from BW System in SAP HANA


This blog post explains how to create roles with privileges in SAP HANA from the BW system's DBMS user profiles, in order to give users the ability to see views on generated SAP HANA BW content (such as column views generated from the new composite provider and SAP BW 7.4 queries).

 

Create SAP HANA Studio User via BW

First off, it is possible to create an SAP HANA user from the BW system; this is triggered in transaction SU01 on the DBMS tab.

Type in a DBMS user name and password, and choose and assign any already existing roles to the user profile. When you save, the user is created in SAP HANA. After creation, assigning new roles or changing the password directly from BW is also possible.

 


 

Generating SAP HANA Authorizations via Transaction RS2HANA_ADMIN

Be aware that the user needs authorization to open a query within BW on the same provider with the corresponding authorization-relevant InfoObjects in order to have sufficient privileges in SAP HANA Studio.


Open transaction RS2HANA_ADMIN, go to "Consistency check tool" and click the button "Generate SAP HANA Authorizations".



Another possible way to generate SAP HANA authorizations is to execute report RS2HANA_AUTH_RUN in SE38.


Both open up to the same screen.

Select one or more InfoProviders to which the user needs reading/reporting access, and select one or more users who will get the authorization for these providers.

Start with "Simulation mode" for a quick check whether the generation will run without errors; afterwards, with "Force generation" checked, generate the roles containing the necessary privileges for the selected SAP HANA objects.

Afterwards a pop-up window shows the roles that were created and assigned to the chosen user profile.


When going back to transaction SU01 the newly created roles are already assigned within the DBMS tab for the selected users.


Furthermore, you can check the created authorizations and their content in table RS2HANA_AUTH_STV.
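If you prefer to cross-check on the database side, a hedged sketch in SQL (the user name MYREPORTUSER is an assumption) is to list the roles granted to the generated DBMS user:

    -- Roles granted to the DBMS user, including the generated RS2HANA roles
    SELECT ROLE_NAME
      FROM SYS.GRANTED_ROLES
     WHERE GRANTEE = 'MYREPORTUSER';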


In this example the user now has roles with privileges in SAP HANA for accessing the composite provider ZPTEX05, with authorization for company code 0001, matching his restrictions in the BW system.

Because the user is only authorized for one company code, he will get an authorization error when trying to access all data.


When selecting within the limitation of his newly generated privileges, the data is shown.


SAP HANA - Advanced DSO Features


Features of Advanced DataStore Objects

 

This blog post describes the features of ADSOs and how to use them.

 

First let's have a look at the prerequisites for using ADSOs. The system has to be BW 7.40 SP8 at a minimum.

On top of that, a number of SAP Notes have to be implemented to make the system ready for the new functions.

 

It is very advisable to go to at least 7.40 SP9 or SP10, just to have an easier and more comfortable start.

 

What's really new is the modeling user interface: for the first time in BW history, InfoProviders have to be modeled in an Eclipse environment; there is no way to use the 'classical SAP GUI' to create advanced DSOs.

 

Normally, when you work with new technology, you read some documentation to get a clue of how to do the implementation.

However, at this point of the project we were already stuck: unfortunately, documents dealing with ADSOs are rarely available.

Therefore, we figured out how it works ourselves.

 

In the following we would like to give a short overview of the most important points (from our point of view!) to take into account.

 

When creating the object you have to decide on the properties the InfoProvider shall have: should it have a change log? Shall data be activated? Should it behave like the well-known InfoCube?

 

 

The second step is to define the structure of the object: which InfoObjects or fields will be in the key or the data part?

If no key is defined, the so-called 'REQUEST TSN' or, if defined on the appropriate tab strip, the partition will represent the key of the object.

 

Once these steps are finished you can start to take care of the ETL process. With ADSOs you can use normal transformations and DTPs; we did not encounter any problems here.

Caution: the technical names of ADSO tables differ from those of classic DSOs.

All tables are created when the ADSO is created, regardless of whether they are used or not (e.g. the change log).

The naming convention follows this pattern:

/BIC/A…….1 = New data

/BIC/A…….2 = Active data

/BIC/A…….3 = Changelog

/BIC/A.......6 = View for extraction

/BIC/A.......7 = View for reporting
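Following this pattern, the active data of an ADSO can be read directly with SQL; the ADSO name ZSALES and the schema SAPSR3 below are assumptions, purely for illustration:

    -- Active-data table of an ADSO named ZSALES (suffix 2 = active data, see above)
    SELECT TOP 10 * FROM "SAPSR3"."/BIC/AZSALES2";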



One additional topic concerns navigational attributes: up to now, navigational attributes had to be flagged in the InfoProvider to be usable in the MultiProvider on top.

With ADSOs you can't select/flag navigational attributes! It is not necessary or even possible anymore; from now on, navigational attributes have to be selected/flagged only in the composite provider!

Negative side effect: you are lost if you want to use navigational attributes for ETL purposes. As far as I know, a solution will come in future releases.

 

We did not see any restrictions in the transformation part (e.g. coding).

The transformations are modeled within the SAP GUI environment.

 

The next step is to include the model in regular data loads; usually process chains are the best option.

For activating data, the already existing process type can be used.

One thing missing at the moment is the possibility to delete the change log via a process type, but according to SAP this functionality is to come in the future.

 

Other functions that will be provided in the future:

  • Planning functionality
  • Semantically partitioned objects

 

SUMMARY

Our experience with advanced DSOs is good; we did not encounter severe problems. Everything worked once we had gained the knowledge of how to use it.

Modeling of Composite Provider in HANA studio (BWMT- CP)


Modeling of Composite Provider in HANA Studio (BWMT - CP)

 

 

At the HANA Studio level (BW Modeling Tools (BWMT)):

 

 

After logging in to HANA Studio, go to the BW modeling perspective. (If it is not available you need to install BWMT: go to Help in the menu bar, open the install/update dialog, enter https://tools.hana.ondemand.com/luna and click Add. You will now see a few add-ons; select the checkbox for BW Modelling.)

Go to the InfoArea folder where you want to create the composite provider, right-click on the Composite Provider folder and select New > Composite Provider.

 

  
 
The prompt below appears. Fill in a technical name in the NAME field and a description, and select the root operation radio button as per your requirement; in my example I used Union.


 

 

 

After you click Finish, you will see the screen below.

 


 

 

Now you are on the General tab, where you can find two checkboxes: one can be checked to expose this composite provider as an external SAP HANA view, and the other makes this composite provider available for re-use in another composite provider.

 

Click on Scenarios for graphical modeling.

 


 

 


Move the cursor over the Union icon in the graphical area, which highlights the database cylinder icon. Click on it to add an object. Along with the database cylinder icon you can also see an InfoSet icon and a delete icon.




 

We can see that the source and target areas of the graphical panel are still empty.


 

 

We can search for the object, select it and click OK. In my case the object is a DSO, but it can be any object: a master data InfoObject, a DSO, an InfoSet, a MultiProvider or another composite provider.


 

 

Once the object is added, the source column is filled with the fields contained in that object.


 

 

Right-click on the object in the source area as shown and click on 'Create assignments'. All the fields present in the source will be mapped to the target. Alternatively, you can add only the required fields from a particular object by selecting each field and moving it into the target (use Ctrl to select multiple fields and drag and drop them into the target).


 

 

To add one more object, follow the same procedure: click on the cylinder icon, select the required object and click OK. Once you are done with this step, you will see the screen below.


 


And create assignments.

 


 

 



 

The common fields from both objects are mapped automatically, and the unique fields remain mapped to their respective source objects.


 

 

In this way you can add a few more objects; however, the more objects you add, the more performance degrades. It is recommended not to use more than 10 objects in one composite provider.


 

 



If you want to add a join condition using other objects, right-click on the Union and select 'Join with...', or click on the join icon; you will get the pop-up below, then click OK.



 

You can make it a left outer join or an inner join; by default an inner join is used.
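Conceptually, the generated column view behaves like the following SQL; all table and field names here are assumed, purely to illustrate the union/join semantics:

    -- Union of two providers; a join to a third object would be added on top
    SELECT "DOC_NO", "CALDAY", "AMOUNT" FROM "/BIC/AZDSO_A2"
    UNION ALL
    SELECT "DOC_NO", "CALDAY", "AMOUNT" FROM "/BIC/AZDSO_B2";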


 

 



Thanks

Madhukar M

Comparing BW ETL Process


This is my first post, and I'd like to share some experiments I carried out for a real business scenario, not a lab setup.


We are building a business scenario for planning and we have to load data from SAP ECC 6.0 to compose the analyses; in this case we are using the DataSource 0PU_IS_PS_32.

 

The image below shows part of the scenario. In this case we use a DSO object as a propagation layer that receives all data from the DataSource.


The comparison covers two scenarios for loading data:

  • Using a DTP with HANA execution: in this case it is mandatory to use an InfoPackage, because we need a PSA to activate that configuration.
  • Using a DTP with direct extraction: in this case HANA execution is disabled and we don't have a PSA.

 

DTP configuration using HANA execution:


 

Process Chain to execute ETL scenario:


This is the job in the source system (SAP ECC) when we used an InfoPackage with PSA.


This is the monitor of the DTP loading data using HANA execution and PSA.

 


Process Chain Start time: 09:42:55

Process Chain Finish time: 10:00:35

Process Chain execution time: 00:17:40

 

 

Now I changed the configuration to load data via DTP only; in this case the HANA execution option is unavailable. The filters are the same as in the InfoPackage above.

 


 


 

 

 

The process chain was changed too.


 

This is the DTP monitor with the execution data in this case.


 

Process Chain Start time: 10:05:01

Process Chain Finish time: 10:18:10

Process Chain execution time: 00:13:09

 


Migration and Implementation Scenarios for SAP BW on SAP HANA


There are different scenarios for Migration:

 

1. New Installation:

 

The simplest case is usually a new installation. For customers who have not used any SAP BW system so far, this may be an interesting approach because a new installation is less elaborate than a migration, which involves time-intensive preparatory and post-processing work. If you anticipate large data volumes before an SAP BW implementation and if reporting speed is one of your top priorities, it's worth relying on SAP HANA technology today. It is no longer recommended to implement a new SAP BW system in combination with Business Warehouse Accelerator (BWA; see "SAP BW on SAP HANA vs. Business Warehouse Accelerator" below).

 

BW on SAP HANA vs. Business Warehouse Accelerator:

SAP BW on SAP HANA and the Business Warehouse Accelerator (BWA) both accelerate the execution of BEx queries at runtime. However, the technical architecture of the two approaches differs considerably. When using BWA, you deploy two storage technologies (the relational database of the SAP BW system and BWA) in parallel for data retention. In the case of SAP BW on SAP HANA, however, you can only use SAP HANA as the primary database system. In this case, all SAP BW data is retained in the main memory, and reporting is accelerated for all data. When you use BWA, in contrast, reporting can be accelerated only for specific data: to do so, you must transfer the data of individual InfoProviders explicitly to BWA (indexing). In contrast to BWA, the speed of data load processes is also increased with SAP BW on SAP HANA. On the one hand, time-intensive steps in the process chains are omitted, for example the generation of indexes for InfoCubes. On the other hand, performance-intensive processes, such as the activation of DSO requests, run directly on the SAP HANA database. By deploying SAP HANA and the Planning Application Kit (PAK), you can also considerably speed up the execution of planning functions. Ultimately, the remodeling of InfoCubes in SAP BW on SAP HANA is also significantly easier thanks to the simplified data model.

 

2. Migration:


When you perform an SAP BW on SAP HANA migration, the primary database of the SAP BW system is replaced by SAP HANA. It is irrelevant whether the SAP BW system has run on Oracle, DB2, MS SQL Server, or MaxDB so far; for this reason, this is often referred to as a migration from AnyDB to SAP HANA. The previous database is no longer required after the migration is completed and can usually be switched off completely. All data, SAP BW objects, and data models are still available after migration. This is also referred to as a nondisruptive approach because the SAP BW system's operation can continue without major interruptions. A BWA that is potentially used for performance optimization loses its right to exist in an SAP BW on SAP HANA scenario: while BWA optimizes data of only selected InfoProviders for fast reporting, SAP HANA retains all data in the main memory and thus accelerates all reports.

 

2.1 Manual migration:

 

For an SAP BW system, the database and possibly BWA are replaced with SAP HANA within the scope of a migration. Experience has shown that manual migration (also referred to as the classic migration method) is deployed most frequently. The tools involved have proven successful when implementing a heterogeneous system copy (a system migration including a database or operating system change). Several additional tools and special reports are provided for SAP HANA that must be used before and after the actual migration phase.

 

2.2 DMO and PCA migration options:

 

An alternative to manual migration is to use the database migration option (DMO), frequently in combination with post copy automation (PCA). DMO allows you to run a system upgrade and a migration to SAP HANA in just one step. This can be particularly useful if the SAP BW system doesn't yet have the version required for a migration. The basic idea of PCA is to automate the numerous individual steps, particularly for post-processing an SAP BW on SAP HANA migration. This is supposed to ensure, among other things, that all migration steps are executed completely and in the correct sequence.


Rapid Deployment Solutions:

 

RDS packages are provided optionally to accelerate the implementation of SAP solutions; three RDS packages are relevant for SAP BW on SAP HANA. For example, SAP provides the RDS package Rapid Database Migration of SAP BW to SAP HANA, with which you can speed up an SAP BW on SAP HANA migration.

 

High-Level System Architecture Before and After SAP BW on SAP HANA Migration:



Terminations on BEx Queries due to missing/inconsistent logical indexes on SAP HANA


Hi BEx Community,

 

When consuming a BEx query (or even a BW InfoProvider directly) or reading characteristic values (F4 value help), you might face error messages like the ones below. These terminations are propagated to all front-ends that consume the services described above, such as Web Intelligence (WebI) and other BO tools, BEx Analyzer, Analysis for Office (AO), Business Planning and Consolidation (BPC), the Java Portal and many others.

 

Sometimes the error message received can differ a little from the ones below or, depending on the front-end's message handling, the error message is not displayed very clearly. In the latter case I recommend testing the corresponding query in transaction RSRT (HTML mode) to get a clear view of the error messages.

 

PS: If you are using an InfoProvider instead of a BEx query, you can execute the InfoProvider's default query in RSRT by using the pattern <infoprovider>/!<infoprovider>. E.g., for an InfoProvider called ZTEST, enter ZTEST/!ZTEST.

 

PS2: If transaction RSRT cannot be used for any reason, I recommend collecting an RSTT trace. Then play the trace back afterwards and you should get clear messages. More information in SAP Note 1925924 - How to enable RSTT Trace in SAP BW for Frontend Tool (SAP BI & 3rd-party Tools).

 

 

Some of the error messages that indicate this sort of issue are the following ones:

 

"Error(#99) TREX_DBS_AGGREGATE"

"Error (#901) Error 2,949 has occurred in the BW/SAP HANA server"

"Error initializing joins: Could not handle join condition.."

"Error 2.999 has occurred in the BWA/SAP HANA server"

"Error 2.991 has occurred in the BWA/SAP HANA server"

"table config for index ...en is not valid"

"Error reading the data of InfoProvider ...$X"

 

These errors are due to missing or inconsistent column views (also known as "logical indexes") in the SAP HANA database.

 

In the SAP HANA database, each InfoCube has a corresponding column view for the performance-optimized reading of data records. The same applies to DataStores with the "SID Flag" (SID generation during activation), to master data providers, and to InfoObject value helps.
In most cases, the symptoms described occur because the relevant column views are missing in the database. However, you may not know, or may not be able to reproduce, why the column view is missing or was deleted.

 

The solution for most of these cases is to execute report RSDDB_LOGINDEX_CREATE in order to (re)create the logical index for the affected object. The selection screen shows the object types for which you can run this program (taken from an SAP BW 7.4 system).
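If you want to double-check on the database side whether the column view is really missing, a hedged sketch (the LIKE pattern is an assumption; BW-generated column views typically contain the InfoProvider name) is:

    -- Look for BW-generated column views belonging to InfoProvider ZTEST
    SELECT SCHEMA_NAME, VIEW_NAME
      FROM SYS.VIEWS
     WHERE VIEW_NAME LIKE '%ZTEST%';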

 


 

Please find full details in the SAP Note:

 

1656582 - Query terminations - InfoCubes, DSOs, master data in HANA DB

 

Hope this helps.

 

Cheers,
Eduardo Provenzano

Lessons learned in SAP HANA Greenfield Implementation


I wanted to share some lessons that we learned while implementing HANA (i.e. SAP BW on HANA) in a recent SAP greenfield implementation project. These may or may not be applicable to your project.

 

1. Architecture design issues: There were initial architectural issues, such as whether BW on HANA or HANA as a sidecar should be implemented. Early systems were installed with Oracle as the database; midway it was decided that BW on HANA would be implemented, resulting in a migration of sandbox and development from Oracle to HANA, and hence loss of time and effort plus additional time spent on the migration. Other architectural issues concerned the late decision between multiple schemas in one database or separating them into multiple databases: should the BW schema (SAPSR3), the BOBJ reporting schema (data via SLT from ERP) and the HANA Live schema exist in one database or in three databases, each hosting one schema? Due to resource constraints and the cost of HANA licenses, and since SAP does not support multiple databases on one production HANA appliance, the decision was taken to put everything into one database. This decision resulted in dismantling the multiple databases on the HANA appliance and putting all schemas into one database, which again meant loss of time and effort and additional work. An SAP architect is required.

 

2. Late identification of requirements: New requirements were constantly identified, which caused delays in the project. Some of them were single sign-on between BW and BOBJ, between BOBJ (WebI) and HANA, and between BOBJ (Analysis Office) and HANA. Some of these requirements were not even supported by SAP at the time; they are only supported now, and even now no proper documentation exists, so we had to work with SAP to create it. For example, see the SCN document Setting up SAML SSO between Analysis Office 2.x to HANA SP9. The lesson here is that proper requirements analysis should be done.

 

3. HANA constant upgrades: Many functionalities and bug fixes are only available in higher releases, so there is a need to constantly upgrade HANA. Initially we had HANA on Rev 78, which was upgraded to Rev 82; now the plan is to upgrade to Rev 92.02. Some examples:

 

  • The SAP XS engine does not run scheduled jobs when they are rescheduled: a bug in Rev 82, solved in Rev 84. We had to change our plan and upgrade to Rev 96 (instead of Rev 84) due to some new functionalities required by BW/BOBJ development.
  • A HANA security issue (https://threatpost.com/static-encryption-key-found-in-sap-hana-database/113393); see SAP Note 2183624 - Potential information leakage using default SSFS master key in HANA on this issue. The fix is available in Rev 97.1, forcing us to change our upgrade plan from Rev 82 - Rev 96 to Rev 82 - Rev 97.1.
  • The embedded statistics server created huge logs, making the tables so large that HANA used to hang in Rev 96. We worked with SAP, and SAP released Note 2170779 - SAP HANA DB: Big statistics server table leads to performance impact on the system. The issue is solved in Rev 97.2, again forcing us to revise our upgrade target from Rev 97.1 to Rev 97.2.

Any customer should be ready to upgrade its HANA box all the time, as new versions of HANA are released in short timeframes.


4. SAP security issues: HANA security has many new concepts. There were a lot of authorization issues due to incorrect assignments and misunderstandings, which caused delays in implementation, testing and support. The customer had to bring in an SAP AG consultant to fix the current issues and train the customer's security team. The SAP AG consultant also designed and created many security procedures for automating the security tasks. The lesson here is to properly train the SAP security team or hire an experienced external consultant.


5. SAP HANA transport management: This is an area where constant changes were requested and implemented, resulting in loss of time and effort. The transports were done:

  • At first via Export – Import of Deployment units or Packages
  • Then it was changed in the middle to HANA Native transports
  • Then it was changed to HANA – CTS+ without Change recording
  • Then it was changed to HANA-CTS+ with Change recording.


All these changes caused a lot of churn in the transport management strategy and the deletion/creation of transport objects and functions, resulting in loss of data and transport sequencing issues. This should be planned ahead of time.


6. Hardware support vs. software support: Hardware vendors provide their own monitoring solutions and support. If the customer already has a support team, there should be a proper separation of support duties, for example:

  • Does the hardware vendor monitor the OS level only or also the HANA appliance/ Software
  • Who handles the appliance related alerts? Vendor or Customer support Team?
  • Who handles the upgrade of the HANA appliance?
  • Since the support team will be tasked with refreshing the SAP system (BW on HANA) or creating test systems, who should create the new database instance?
  • Does the customer's UNIX team provide support for SUSE Linux, or should they contact the vendor for OS patching/support?
  • If there is a security patch released for SUSE/RHEL but not yet tested by SAP for HANA what should be the approach?

 

7. Communication issues regarding SAP support tickets: Project stakeholders require constant updates on an issue, which is not possible while an OSS ticket is with SAP. SAP needs to be proactive in keeping the customer updated on the ticket.


Though these lessons learned are not comprehensive, they provide some insight into what to do and what not to do in an SAP HANA project from a Basis point of view.

HANA Data Warehousing: The #HANADW


With this blog, I'd like to shed some light on the direction that SAP is taking towards a unified offering for building a data warehouse on top of HANA. The unofficial working title is the HANA DW. I've divided the blog into three sections, each addressing the most pressing questions that I've received from customers who have already seen flavours of this.

 

The Vision for Data Warehousing on HANA: the HANA DW.

As outlined in my blog Data Warehousing on HANA: Managed or Freestyle? BW or Native?, there are two approaches (preferences) for building a DW, not only on HANA but in general:

  1. SQL-based: Meaning that the DW architects use SQL as the main building paradigm which gives them a lot of freedom but also bears the risk that too much diversity jeopardises the lifecycle of the DW as it becomes increasingly complex to manage dependencies (e.g. impact of changes) and integration (e.g. same entities - like products, customers - represented in different ways, using different data types etc).
  2. Managed via best practices: Here, high-value building blocks (like data flows, transformations, hierarchies, BW's DSOs, BW's data requests but also naming conventions) are used to construct and manage the DW. This is a faster way as long as the building blocks serve the need. It gets cumbersome whenever there is a scenario that requires deviating from the standard path offered via the building blocks.

In recent years, BW-on-HANA has offered approach #2, extended and combined with #1: the so-called mixed scenario. A tangible example is described here. Many customers have adopted such a mixed approach; in fact, it has become the mainstream for BW-on-HANA. The HANA DW takes a similar direction but starts with #1 and complements it with #2 which, in the end, yields the same result. It goes along the following notion:

  • Start with a naked HANA DB that offers all sorts of SQL capabilities that you need. Fundamentally, you can now write your SQL code in Notepad, Emacs, VI etc, store that SQL code in files and execute them in HANA either manually or via generic tools like cron.
  • Now, writing SQL code from scratch in a text editor is cumbersome, even if there is some syntax highlighting or automatic syntax completion. Most people acquire tools that allow them to graphically model / design / create stuff to generate the underlying SQL statements.
  • Whichever method you use to get to the SQL statements, there will be a need to maintain them. Scenarios get extended or adjusted, and this translates into changes on the SQL level. For purposes like auditing, or simply to have the option to return to an earlier setup, it is good practice to track the evolution (i.e. the changes) and to keep versions of those (SQL or higher-level) artifacts. This is no different from other programming environments, and one can borrow infrastructure from there, like Git. The latter and services related to it are (or will be) offered by the HANA platform. They constitute a repository.
    There are two more tasks that the repository should support:
    • managing the dependencies between the objects (e.g. a transformation using certain stored procedures who, in turn, use certain tables), and
    • the release management of those (SQL or higher-level) artifacts, e.g. to allow them being developed and tested in one system w/o jeopardising the production system.
  • Finally, there are certain recurring patterns of SQL: things that you need to do over and over again. Examples are tracking incoming data (e.g. via something like the data request in BW), how to derive data changes (like in a DSO), how to store hierarchies etc. Such "patterns" basically translate into higher-level (abstract) artifacts that are created and maintained at the abstraction level to then be translated into a series of SQL statements.
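To make the last point tangible, here is a minimal sketch of one such recurring pattern, expressed in plain HANA SQL (all names are assumed): deriving and "activating" data changes from an inbound request table into an active table, similar to what a DSO does.

    -- Inbound table: records arrive tagged with a request identifier
    CREATE COLUMN TABLE sales_inbound (
      request_tsn BIGINT,
      doc_id      NVARCHAR(10),
      amount      DECIMAL(15,2));
    -- Active table: one current record per key
    CREATE COLUMN TABLE sales_active (
      doc_id      NVARCHAR(10) PRIMARY KEY,
      amount      DECIMAL(15,2));
    -- "Activation" of request 42: new keys are inserted, existing keys are updated
    UPSERT sales_active
    SELECT doc_id, amount FROM sales_inbound WHERE request_tsn = 42;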

The HANA DW will support this process in the following way; figure 1 below visualises this:

  • The HANA DB provides all the SQL functionality you need.
  • The HANA platform will provide the development infrastructure, especially to support a repository and related services.
  • Tooling on top will create either direct HANA SQL* or higher-level artifacts that translate into HANA SQL*.
  • Those tools will keep their artifacts in the HANA repository, allowing the complete lifecycle to be supported, incl. auditing, versioning and dependency analysis (especially also between artifacts maintained by different tools).
  • Tools constitute optional added value that you can use but that you don't have to use. Consider BW-on-HANA as such a tool too.

It is planned to bring the currently existing SAP products related to data warehousing into this HANA DW setup. This will allow SQL-based data warehousing (1.) enriched via higher-level / higher-purpose artifacts (2.). The second pillar in figure 2 describes that evolution. The third pillar indicates that tooling will evolve, potentially into a series of apps or services that can also manage a cloud-based DW.

 

Figure 1: The vision for the HANA DW.

 

Figure 2: Short-, mid- and long-term evolution of the HANA DW.

 

The Role of BW-on-HANA.

From the above, it should have become obvious that BW-on-HANA will form an important, but optional part of the HANA DW. If it is convenient for the purpose of the DW, then it should be used or added to a HANA DW. Another potential scenario is that an existing BW-on-HANA will gradually evolve into a HANA DW as it is complemented with other tooling in the fashion described above. The border line will be blurry. In any case, BW-on-HANA will extend and enhance its existing functionality enabling more and more direct SQL access + options and leveraging / interacting with the HANA repository. A stand-alone BW-on-HANA system, as it exists today, can be considered as a special instance of a HANA DW. It will continue to exist, evolve, excel. Anyone investing into BW-on-HANA today is on a safe track.

 

The Role of HANA Vora and Hadoop: the HANA Big DW.

Many customers are looking at ways to complement existing data warehouses with Hadoop. HANA Vora will play a pivotal role in combining the HANA and Hadoop platforms. Therefore, HANA Vora will allow to extend the HANA DW into a HANA Big DW (current working title). We will elaborate on that at a later stage.


* Please consider HANA SQL here as a placeholder comprising all sorts of more specialised languages and extensions like MDX, SQLscript, calc engine expressions etc.


You can follow me on Twitter via @tfxz.

How to make a HANA-optimized InfoCube?


When an SAP NetWeaver BW system is migrated to an SAP HANA database, the InfoCubes are not automatically converted to SAP HANA-optimized InfoCubes.

The conversion of standard InfoCubes to SAP HANA-optimized InfoCubes ensures that the cubes are set up most efficiently for SAP HANA and get the performance benefits for data loading and reporting.

How to do this?

Go to transaction RSMIGRHANADB or run program RSDRI_CONVERT_CUBE_TO_INMEMORY, enter the standard InfoCube that needs to be converted, and click Execute:


The job is executed in the background as a stored procedure. After the job is finished, the standard InfoCube is converted to an SAP HANA-optimized InfoCube in which the dimension tables are removed and the master data tables are linked directly with the F fact table.
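A hedged way to see the effect on the database side (the cube name ZSALESC and the pattern are assumptions) is to list the cube's tables before and after the conversion; the /BIC/D* dimension tables disappear:

    -- Tables belonging to InfoCube ZSALESC; after conversion essentially only
    -- the fact table (and package dimension) remains
    SELECT TABLE_NAME FROM SYS.TABLES WHERE TABLE_NAME LIKE '/BIC/_ZSALESC%';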

Once an InfoCube is HANA-optimized, you will see a "square marquee" icon against it.

Remarks: Please note that you have to make standard InfoCubes HANA-optimized manually in each system, i.e. Dev, Acceptance and Production. For the moment, this conversion is not supported under the Change and Transport System (CTS).

SAP TechEd 2015 – Round Up on EDW, BW 7.5 and S/4HANA


I started my TechEd adventure on Saturday evening, arriving in Vegas and staying at the beautiful Venetian Resort...


First stop was the EDW Roundtable on Sunday. For those of you who do not know, the Enterprise Data Warehousing Roundtable event at SAP TechEd is designed to provide direct feedback to SAP teams regarding the current status and future plans of BW on HANA. Additionally, it is also a forum for interactive discussion and networking with customers and SAP product management and development teams.

It was a very packed day, with a lot of customers sharing their experiences with BW 7.4 on HANA and their EDW road maps for the next 5 years. Very interesting to see how things will change and evolve with BW 7.5, HANA and Big Data.


My colleague from Teklink, Andreas Wilmsmeier, was here all the way from Europe with a thought-provoking discussion on Why S/4HANA Does Not Replace SAP BW.

We also had presentations from Target, General Mills, AmerisourceBergen, ExxonMobil, Uni-Select, Johnsonville Sausage (for every good question the participants were offered a free sausage), Mohawk Industries and, at the end, a road map presentation from Marc Hartz, SAP.


I always enjoy the SAP keynotes and this year's TechEd was no exception: an inspiring keynote on Monday from Steve Lucas and Bernd Leukert on the digital economy. "The SAP board meetings now run in a digital way," Leukert said. "There are no PowerPoints anymore; there are no back-office people preparing pages of documents; we run and we analyze our business in real time, in our Digital Boardroom."

SAP also announced "SAP Cloud for Analytics" and how it can help digital enterprises to discover, visualize and predict, all in one. At a time when most customers talk about simplification of the toolset, this appears to be one tool that can do it all. It will be interesting to learn more details and to see how soon customers embrace this offering.


One of the most awaited sessions for me was the BW 7.5 road map from Lothar Henkes, SAP. Others included discussions on BW on HANA, Data Services, S/4HANA, business planning and analytics.

I was excited about presenting a session on Smart Data Access (SDA), session code DMM221. It was scheduled for the last day of the conference, by which time most folks are tired of walking around and of the information overload; still, there was tremendous interest, with attendees even standing at the back of the room. Thanks for your support and participation!


 

Here is a summary of key points I noted from the BW 7.5 and S/4HANA sessions:

Simplification

  • Enhancements for Advanced DataStore Object and CompositeProvider
  • Features complete for Eclipse based Query Designer
  • Extended InfoObject maintenance in Eclipse
  • BW Workspaces with local hierarchies
  • New BW Workspace Query Designer

 

Platform integration

Introduction of SAP HANA Smart Data Integration capabilities for SAP BW

For more, read: http://scn.sap.com/community/developer-center/hana/blog/2014/12/08/hana-sps09-smart-data-integration--adapters


Big Data

  • Enhanced support for SAP HANA Dynamic Tiering
  • Extended near-line storage capabilities


SAP BW 7.5, edition for SAP HANA

  • Simplified governance and faster time to market
  • Option to run SAP BW in a simplified mode only using HANA optimized objects


“SAP S/4HANA Overview, Strategy, and Road Map” was very well presented by Rudolf Hois, SAP

His session talked about the SAP framework for digital business and how adapting to the digital economy requires a simple and powerful strategy framework.



 

The other elements of the conference, like the Demo Jam, show floor sessions and of course the concert, added to the overall experience. I should also mention the meals at TechEd: I was very happy to see a lot of vegetarian food choices. Above all, making connections with other SAP professionals and customers, and continuing the conversation after the event, is the best value for us all.

 

Next stop, and perhaps the last event for me this year, will be the SAP Insider Reporting and Analytics conference (Nov 17-19). I will be presenting a session on Optimizing BEx query performance for SAP HANA. Stop by and say hi if you are there!

TOOLS TO CONNECT HANA DB - PART 2


Hi all,

In my previous blog I covered the HANA client tool, ODBC and Design Studio.

In this blog I am going to cover the topics below:

SAP LUMIRA

SAP ANALYSIS OFFICE

TABLEAU


SAP LUMIRA: Media

Log in to SAP Service Marketplace and download SAP Lumira.

INSTALLATION: Run the setup file with administrator rights; the installation setup starts.

Click Next through the screens that follow, then click Finish. The installation completes successfully.

Connecting to SAP HANA using SAP Lumira

Open SAP Lumira and go to File --> New (create a new dataset).

Select 'Connect to SAP HANA One', click Next and provide the necessary credentials.

Click Connect and select the required table, then click Next.

Click Create. Here we can view the tables that are in HANA.

SAP ANALYSIS OFFICE: Media

Log in to SAP Service Marketplace and download SAP Analysis for Microsoft Office.

INSTALLATION: Run the setup file with administrator rights.

To complete the installation successfully, Microsoft Office 2013 (64-bit) or Microsoft Office 2010 (64-bit) and the Microsoft Visual Studio 2010 Tools for Office runtime are required. Having installed the relevant Microsoft Office and Microsoft Visual Studio 2010 components, click Next.

The installation process starts and completes successfully.

Connecting to HANA using Analysis Office 2.0

Go to the Analysis tab --> click Insert Data Source --> Select Data Source.

Click Skip to avoid connecting to an SAP BO system.

In Analysis Office, choose Insert Data Source, right-click in the 'Select data source' dialog and select 'Create New SAP HANA Connection'.

The connection dialog is displayed; fill in the following details:

Description: the system name

Select HTTP

Give a valid host name

Port should be 80xx (xx = instance number), e.g. 8000 for instance 00

Click the Create button / Create and Logon.

Provide the necessary credentials and move forward. Now we are able to connect to HANA successfully.


TABLEAU:

Download Tableau from the link below:

http://www.tableau.com/products/desktop/download?os=windows

INSTALLATION:

Run the setup file with administrator rights, tick the license agreement and click Install.

Enter the product key and activate Tableau.

Connecting to SAP HANA using Tableau

Go to --> More Servers --> SAP HANA, and provide user credentials to connect to SAP HANA.

Connected to SAP HANA successfully.
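Once any of these tools is connected, a simple smoke test is to run a statement against the DUMMY table, which every HANA system has:

    -- Verify the connection and see which user/schema you are logged in as
    SELECT CURRENT_USER, CURRENT_SCHEMA FROM DUMMY;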

This concludes the 'Tools to connect to the HANA database' installation and connection part.

 

Best Regards,

Rajesh K


SAP BW Query Optimization: Use of Selection of Structure Elements


The "Use Selection of Structure Elements" setting gives a performance boost to a BW query. With this function, the system passes to the database only the selections and key figures (KFs) of the columns (structure elements) that are currently used in the query.

 

The result set might have fewer rows than the result set calculated with the flag set to false. Normally this function should be activated, except under the following conditions:

 

  • The function cannot be activated if the query has read mode 'A'


  • You are using virtual KFs and are doing calculations/changes on KFs that are hidden in the query result (SAP Note 2118240)
  • You are using a VirtualProvider based on a function module that requires the information of KFs hidden in the query result
  • You are using constant selection on KFs where the characteristic used for the constant selection is hidden in the query

 

How to use this?

 

Go to RSRT, enter your BW query and hit the "Properties" button.


HANA composite provider (HCPR): Practical Tips


A distinctive feature of the composite provider is its ability to combine BW InfoProviders with analytic indexes and HANA views in various ways. A short historical perspective on this group of metadata objects can be found in the introduction of this document. Several kinds of composite provider already exist:

  • the local composite provider (object type COPR) refers to a BW workspace (RSWSP) and makes it possible to combine central providers with local data, in order to give business departments some options for uploading their local data (via the BW Workspace Designer). This has been available since BW 7.3 SP05 with BWA 7.20/HANA 1.0.
  • the ad-hoc composite provider (same object type COPR) (RSLIMO/RSLIMOBW) is used for rapid prototyping, for combining providers with analytic indexes, and as an alternative to InfoSets for joining data. It is also available from the APD (RSANWB -> Environment -> Edit Composite Provider, which opens RSLIMO) and can be created without assigning it to a BW workspace.
  • the central composite provider (HCPR) has been available since BW 7.4 SP05 and can be created from the Eclipse BW modeling tools, but its data can also be viewed via SAP Logon (RSOHCPR).

Good guidance through the relevant documents can be found here, and in this text we'll have a look at some practical questions that arise when building a central composite provider (HCPR) (from here on just 'composite provider'), based on our own experience. We'll structure them by development step.

Providers

 

First, one has to choose the providers of data for the composite provider. We'll have a look at the different options for this.

 

Types of sources: BW and HANA

It is possible to choose as a provider either an InfoProvider available in the BW metadata dictionary (analytic indexes are visible as transient providers) or a HANA view. By default, only BW metadata objects are available for selection when adding a provider to your composite provider.



 

To start using HANA views, you must attach the HANA database system (in the SAP HANA Administration Console perspective) with an HDB user/password assigned.

 


 

Then the system can be attached to the BW system by using the context menu of the BW project in the BW modeling perspective.


 

From experience, you need to be already logged in to the attached HANA system (for example, in the SAP HANA Administration Console perspective); after that you can log in to the BW system in the BW modeling perspective and also see the attached HANA system library. As a successful result of this setup, you'll be able to choose whether to add an InfoProvider or an SAP HANA view to your composite provider.



More info-providers

When building your virtual data mart layer, a combined application of the new metadata objects can deliver more flexibility. By default it is possible to use all InfoProviders available for a MultiProvider, plus analytic indexes and the new metadata objects such as the Open ODS view and the advanced DSO. Adding a DSO to the composite provider requires SID generation to be enabled, which has certain disadvantages:

  • it would, potentially, require an extra ETL step, for example, for quick data marts without data cleansing.
  • it requires an extra ETL step when using a direct update DSO: take an example of an APD scenario where the results of analysis are stored into a direct update DSO.

An analytic index can be an alternative, but it cannot be transported. Provided all performance considerations have been taken into account, you can still add a DSO without SID generation to your composite provider: you just have to create an Open ODS view on top of the data table of your DSO and then add the Open ODS view to your composite provider. But, from my experience, changing the structure of the DSO will require re-creating the Open ODS view, because the Open ODS view does not (yet) get updated automatically. Since BW 7.4 SPS 11 / 7.5 SPS 00 it is possible to swap the source objects of an Open ODS view and in this way update the list of fields.

Operations

 

Root operation

It is possible to use UNION, INNER JOIN or LEFT OUTER JOIN operations in a composite provider. As a first step (when creating the composite provider) you have to choose the root operation. If JOIN is chosen, it can still be changed to UNION later on by selecting the root object and adding another operation to it. This is useful to know because the UNION operation is available as the root operation only.

 

Naamloos5.png

 

Operations and Output structure

When creating the assignments of fields to the output structure, everything is straightforward with the UNION operation, because similar fields from several sources are assigned to one target field in the output structure. For the JOIN operation, you can use the similar fields as JOIN conditions and thus also have them combined. There can also be cases where both similar fields are required in the output structure as separate fields: for example, the ‘document date’ of the sales document and the ‘document date’ of the invoice. When using the command ‘create assignments’, you’ll get only the last selected field assigned to the output structure, and the assignment of the similar field from the first source will be lost. In this case you can (after assigning the field from the first source) change the name of this field in the output structure and subsequently create an assignment from the field of the second source, which results in a new output field.

Fields of output structure

The fields of a composite provider can be associated with an info-object or with an Open ODS view. This gives you access not only to the navigation attributes available for selection in the output structure of the composite provider, but also to master data at report runtime.

Field length

It is good to remember that the maximum length of a field name is twelve characters; any characters exceeding this limit are cut off in the output structure. This should not be a problem for fields coming from BW info-objects, but it can add a naming-convention task for fields of HANA views, whose names can be much longer than twelve characters.
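As a hypothetical illustration: a HANA view field named SALES_DOCUMENT_DATE would be truncated to SALES_DOCUME, so two view fields sharing the same first twelve characters would collide in the output structure; a convention like DOC_DATE_SLS and DOC_DATE_INV avoids this.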

Fields from HANA views

The fields from HANA views are available as normal fields in the output structure. Special attention should be paid to HANA input parameters. They are available (since BW 7.4 SPS 08 and BW modeling tools 1.4) as normal fields and can thus be added to the output structure. This way you can, for example, transmit selections from queries built on top of the composite provider to the HANA input parameters. Associating the field of a HANA input parameter with an info-object will also give access to the selection value help (master data) at report runtime. In the BEx query designer, the use of fields based on HANA input parameters is limited to the filter pane. A clear field naming convention will help to apply them correctly.
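To illustrate what the composite provider passes down, here is a minimal sketch of how an input parameter is supplied when a HANA calculation view is queried directly in SQL; the view name test.demo/CV_SALES and the parameter P_FISCAL_YEAR are purely hypothetical:

    -- minimal sketch: querying a calculation view with an input parameter
    -- (view and parameter names are invented for illustration)
    SELECT "SALES_ORG", SUM("AMOUNT") AS "AMOUNT"
      FROM "_SYS_BIC"."test.demo/CV_SALES"
           ('PLACEHOLDER' = ('$$P_FISCAL_YEAR$$', '2016'))
     GROUP BY "SALES_ORG";

Conceptually, a filter value entered in a BEx query on the corresponding composite provider field is handed over to the input parameter in the same way.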

Association with info-objects

The output structure fields can be associated with info-objects and Open ODS views. Referring to the case of assigning similar fields from several joined sources to several output fields, this can also be an alternative to the good old creation of info-objects with reference. It is now possible to associate one info-object with several fields of the composite provider and thus re-use the same master data many times. It is worth noting that this requires system-wide unique field names (instead of the direct usage of the info-object name), which has an effect on queries. BEx variables of the associated info-object are visible for the field with the system-wide unique name. But the reverse rule doesn’t apply: BEx variables created for fields with system-wide unique names are not visible for the associated info-objects.

Group folders

Since BW 7.4 SPS 10 (or with an OSS note in earlier versions) and BW modeling tools 1.8, it is possible to structure the output of the composite provider by creating group folders and assigning output fields to them. Just click on ‘manage groups’ to create more group folders, and then either drag and drop the fields into the folders or right-click on the fields and choose ‘move to folder’. It is even possible to create hierarchies of groups rather than just a flat list.

 

Naamloos6.png

 

The group folders can be used as so-called logical dimensions because they are visible in the BEx query designer.

Some final checks

After the composite provider is built, it is good to perform some final checks and controls, such as:

  • Check the fields evaluated in authorizations
  • Check assigned units and currencies in measures
  • Check compound objects are assigned
  • Check all fields have a connected source field

 

Alternatives?

Of course, there are cases where a better alternative to a composite provider could be considered:

  1. If an anti-JOIN is required, this functionality is available in a BW info-set. But the operations of info-sets are not performed in memory.
  2. If the existing multi-provider has a considerable number of queries on top of it and there is a requirement to use HANA views, it is still possible to add to the multi-provider a virtual provider built on top of the HANA view. On the other hand, specific tools (for example, program RSO_CONVERT_IPRO_TO_HCPR and transaction RSZC) will minimize the effort required for a migration to a composite provider.
  3. If more joins are required but not all of them are expected to be used at the same time in queries, it is probably better to use dynamic joins in the HANA view. The ‘dynamic join’ property tells the system to execute a specific join only when fields from the joined table are requested in the query.

 

In conclusion, the composite provider is the object of choice for combining data in the virtual data mart layer of BW-on-HANA: this metadata object has been continuously enhanced in recent support packages and, next to UNION and JOIN operations in memory, it provides some very nice, flexible features for building elements of a semantic layer.

SAP BW 7.5 SP1 powered by SAP HANA Features and Roadmap – ASUG Webcast Part 1


This is part 1 of a recap of an ASUG webcast provided yesterday by Marc Hartz, SAP.

 

BW 7.5 SP1 will be delivered later this month (planned)

1afig.jpg

Source: SAP

 

The usual legal disclaimer applies: anything shown for the future is subject to change.

1fig.jpg

Figure 1: Source: SAP

 

Figure 1 shows BW adoption

 

BW is the most strongly adopted product with HANA

2fig.jpg

Figure 2: Source: SAP

 

In the early days, BW on HANA was an “on top” delivery, focusing on performance optimization

 

The last release, 7.4, was about making the application simpler again

 

With 7.5 – simplify further and make modeling more efficient

 

HANA is growing as a platform

 

Features are coming from a more and more mature platform

 

Another driver for BW is big data

 

SAP delivers BW with a certain set of features; the edition for SAP HANA brings the focus on HANA features

 

“BW run simpler”

3fig.jpg

Figure 3: Source: SAP

 

Highlights – big buzzword is simplification; it is getting easier

 

Run the modeling environment in Eclipse, with more possibilities for BW workspaces

 

Features are coming from HANA

 

BW 7.5 edition for HANA – easier governance, faster time to market

 

Simplification

4fig.jpg

Figure 4: Source: SAP

 

SAP BW 7.40 introduced the advanced DataStore object – a persistent data object in BW and the successor object, exclusively available on HANA, that covers the use cases of previous providers like the DSO and InfoCube

 

The new UI is in Eclipse

 

They are aligning the modeling experience with HANA modeling

5fig.jpg

Figure 5: Source: SAP

 

Graphical way of modeling

 

Can model complex scenarios

 

Join on top of join

 

Drag-and-drop experience to bring in sources and model the output

 

Semantic groups for the output

 

An open spot is the temporal join

6fig.jpg

Figure 6: Source: SAP

 

The plan is to offer the query modeling experience in Eclipse

 

In the past you used the BEx query designer; this will be completely in Eclipse, feature complete compared to BEx

7fig.jpg

Figure 7: Source: SAP

 

New data type, maintain attributes

8fig.jpg

Figure 8: Source: SAP

 

View generation possible for different types of objects

9fig.jpg

Figure 9: Source: SAP

 

Workspaces – an ad hoc scenario – allow business users to upload their own data and combine it with IT-maintained data

 

Now available for hierarchies – a local hierarchy: users can upload their own version of a hierarchy, created as an Excel sheet

10fig.jpg

Figure 10: Source: SAP

 

New tool – the Workspace Query Designer – makes the concept of Workspaces easier to use, with a modern UI5 interface

 

The user has the full functionality of BW, with authorizations applied, and can combine it with local data

 

SAP will publish a video on SCN. It comes delivered with 7.5 – no add-on required

 

Part 2 (when I have time) will cover platform integration and the roadmap.

 

ASUG Survey

If you are an ASUG member, and want more webcasts like this, please take the ASUG Business Intelligence Community’s annual survey here.

Increasing the SAP-NLS Performance



With the introduction of smart data access (SDA), especially between SAP HANA and SAP IQ, the data provisioning process can be optimized. Nevertheless, some additional parameters have to be set on the ABAP and HANA backends as well.

The implementation of SDA between SAP HANA and IQ is already discussed in - SAP First Guidance - SAP-NLS Solution with SAP IQ | SCN. This is a mandatory first task.

 

A good starting point is also the following document, which covers the settings for SAP-NLS in the SAP HANA database - Open ODS View on a Virtual Table | SCN and SAP BW on SAP HANA & SAP HANA Smart Data Access

 

Note 2128579 - Data Load into SAP IQ during Copy Phase utilizes only one server-side Thread (7.40 SP11)

With this SAP Note, two additional parameters were introduced that significantly speed up the LOAD statement used to write data into SAP-NLS.

Parameter LOAD_STRIPE_SIZE: setting this parameter to a value n > 1 parallelizes the load.

Parameter LOAD_STRIPE_WIDTH: the parallel degree, multiplied with SYBASE_IQ_BUFFER_SIZE.

Parameter SYBASE_IQ_LOAD_DIR can be changed at the database connection level (DBCO) and is by default the data directory of the SAP instance. If you plan to load a large amount of data, make sure that you have enough space left, or specify another directory/device.

LOAD_STRIPE_SIZE=4; LOAD_STRIPE_WIDTH=4
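As a hypothetical example (the directory path is invented), the connection-level settings could then be combined as follows:

SYBASE_IQ_LOAD_DIR=/usr/sap/<SID>/nls_load; LOAD_STRIPE_SIZE=4; LOAD_STRIPE_WIDTH=4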


SAP-NLS_11.JPG



Furthermore, there are several SAP Notes discussing the optimization:

Note 2063449 - Push down of BW OLAP functionalities to SAP HANA

Note 2165650 - FAQ: BW Near-Line Storage with SAP HANA Smart Data Access

Note 2100962 - FAQ: BW Near-Line Storage with HANA Smart Data Access: Query Performance

Note 2198480 - FAQ: BW Open ODS View - Query Execution

 

Note 2000002 - FAQ: SAP HANA SQL Optimization

Note 1987132 - SAP HANA: Parameter setting for SELECT FOR ALL ENTRIES

Note 2042333 - Deactivating the "Fast Data Access" optimization (FDA)

 

The following SAP Corrections are necessary (SAP BW 7.40/7.50):

Note 2099102 - SFAE implementation of LOOKUP has poor performance (7.40 SP10)

Note 2109015 - Continuation of Archiving Requests for Copy, Verification, and Deletion Phase in parallel (7.40 SP11)

Note 2130587 - SYB IQ: performance enhancement for LOAD statement (latest ASE LibDBSL 7.22, 7.42, 7.50)

Note 2198386 - BW HANA SDA: Performance Improvement for Creation of Database Statistics for Virtual Tables of Open ODS Views or NLS-Archives (7.40 SP13/7.50 SP01)

Note 2203484 - BW Near-Line Storage with HANA Smart Data Access: DTP Extraction slow (7.40 SP13/7.50 SP01)

Note 2233194 - BW Near-Line Storage with HANA Smart Data Access - PartProvider Pruning not correct (7.40 SP14)

Note 2210552 - BW Near-Line Storage with HANA Smart Data Access: Correction for Note 2203484 (7.40 SP14/7.50 SP02)

Note 2212633 - Near-Line Storage with HANA Smart Data Access doesn't read on node level (7.40 SP14/7.50 SP02)

Note 2214892 - BW HANA SDA: Process Type for creating Statistics for Virtual Tables (7.50 SP01)

Note 2202052 - BW Near-Line Storage with HANA Smart Data Access: Poor Query Performance with InfoCubes (7.40 SP13)

Queries on InfoCubes show poor performance because filter conditions are pushed down to the HANA database as "SID-based" filters instead of using the "key values" for filtering. With SID-based filters, the SID table must be joined to the virtual table in order to apply the filter. This join adds complexity to the SQL statement, which makes the query harder to optimize in a federated database environment.
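To see why this matters, consider the following minimal, hypothetical sketch of the two filter variants (the virtual table name and the SID column are invented; /BI0/SMATERIAL follows the usual naming convention for the SID table of 0MATERIAL):

    -- key-value filter: can be pushed down directly to the remote IQ source
    SELECT SUM("AMOUNT")
      FROM "SAPSR3"."VT_NLS_SALES"
     WHERE "MATERIAL" = 'MAT-4711';

    -- SID-based filter: the SID table must be joined in before the filter
    -- can be applied, which complicates pushdown in a federated setup
    SELECT SUM(F."AMOUNT")
      FROM "SAPSR3"."VT_NLS_SALES" AS F
      JOIN "SAPSR3"."/BI0/SMATERIAL" AS S
        ON S."SID" = F."SID_MATERIAL"
     WHERE S."MATERIAL" = 'MAT-4711';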

 

The following SAP corrections are necessary for aDSO support with BW 7.50:

Note 2221933 - BW Near-Line Storage with Advanced DataStore Objects (ADSO): Query Terminations (7.50 SP02)

Note 2233002 - Advanced DataStore Object with Near-Line Storage: Wrong Pruning Behavior when used in Composite Provider with Join (7.50 SP02)

Note 2233471 - Query for aDSO does not read any data in near-line storage (7.50 SP02)

Note 2238384 - Advanced DataStore Object with Near-Line Storage: SQL Error with characteristic 0REQTSN (7.50 SP02)


 

Currently the following parameters should be changed on the SAP HANA server (a sketch of how to set one of them follows the list):

  • semi_join_virtual_table_threshold
  • virtual_table_format
  • join_relocation
  • fda_enabled
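These parameters live in the SAP HANA configuration (ini) files; the exact values and the relevant ini section should be taken from the SAP Notes above. As a minimal sketch, assuming semi_join_virtual_table_threshold resides in the smart_data_access section of indexserver.ini and using a purely illustrative value, such a parameter can be set with SQL:

    -- illustrative only: check the relevant SAP Notes for file, section and value
    ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
      SET ('smart_data_access', 'semi_join_virtual_table_threshold') = '100000'
      WITH RECONFIGURE;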

 

SAP-NLS_2.JPG

 

Currently the following parameters should be changed on the SAP ABAP server (see the note after the list on where these are maintained):

  • rsdb/supports_fda_prot = 0
  • rsdb/max_blocking_factor = 50
  • rsdb/max_in_blocking_factor = 1024
  • rsdb/prefer_union_all = 0
  • rsdb/prefer_in_itab_opt = 1
  • rsdb/prefer_join_with_fda = 1
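These are ABAP instance profile parameters: they are typically maintained in the instance profile (transaction RZ10), their current values can be checked with transaction RZ11, and static parameters only take effect after an instance restart.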

 

 


With these actions you can optimize the SAP-NLS solution via SDA:


Note 2231332 - Control of Query Optimization on Near-line Storage on InfoProvider Level (7.40 SP14/7.50 SP02).

This note adds a checkbox to the "Near-line Storage" tab of the Data Archiving Process (DAP) maintenance to switch query optimization on (the default) or off. You must activate the DAP in order to make your setting effective. If optimization is switched on but query optimization is not configured or not available for the near-line connection, query access will use the non-optimized implementation via the VirtualProvider interface, utilizing the near-line provider implementation via the standard near-line interface. If Smart Data Access is configured for the near-line connection, the name of the HANA virtual table is also shown.

 

SAP-NLS_3.JPG

 

SAP-NLS_4.JPG

 

 

Note 2202815 - Corrections for Column Views for Open ODS Views, Nearline Storage with SDA, Advanced DataStore Object with Dynamic Tiering (7.40 SP13/7.50 SP01).
This correction provides an optional feature that replaces the aggregation nodes on top of InfoObject tables with a projection node. The enhancement can be activated via the RSADMIN parameter RSSDA_CV_WITH_MD_PROJ:

Table RSADMIN => RSSDA_CV_WITH_MD_PROJ = X
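The entry can be created, for example, with the standard report SAP_RSADMIN_MAINTAIN (transaction SE38): enter RSSDA_CV_WITH_MD_PROJ as OBJECT and X as VALUE, select the INSERT option, and execute.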

 

SAP-NLS_5.JPG

 

SAP-NLS_6.JPG

 


Note 2198386 - BW HANA SDA: Performance Improvement for Creation of Database Statistics for Virtual Tables of Open ODS Views or NLS-Archives (7.40 SP13/7.50 SP01).
The report RSSDA_CREATE_TABLE_STAT can be used to create database statistics for HANA virtual tables, which are used in the context of HANA Smart Data Access. The execution can be quite time-consuming. As of HANA SPS 10, a new statistics type RECORD COUNT is available for virtual tables: it computes only the number of records and should take much less time than the types SIMPLE or HISTOGRAM. From a query execution point of view, however, the HANA query optimizer then has less information, which can lead to less optimized query execution. This type should therefore only be used if it is too expensive to create SIMPLE or HISTOGRAM statistics.
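For reference, creating such statistics manually would look like the following minimal SQL sketch (schema, virtual table, and column names are hypothetical):

    -- record-count statistics: cheap to compute, table-level only
    CREATE STATISTICS ON "SAPSR3"."VT_NLS_SALES" TYPE RECORD COUNT;

    -- histogram statistics: more expensive, but better input for the optimizer
    CREATE STATISTICS ON "SAPSR3"."VT_NLS_SALES" ("MATERIAL") TYPE HISTOGRAM;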

 

SAP-NLS_7.JPG

 

SAP-NLS_8.JPG

 

 

Best Regards

Roland Kramer, PM EDW and SAP-NLS, SAP SE

roland.kramer@sap.com

@RolandKramer

FAQ: SAP BW & Operational Data Provisioning Framework (ODP)

  • What is the ODP Framework?
    • It is an infrastructure that unifies the data exchange between providers and consumers
      • Enables "extract once, deploy many" architectures for sources

      • Unified configuration and monitoring for all provider and subscriber types

      • Time stamp based recovery mechanism for all provider types with configurable data retention periods

      • Highly efficient compression enables data compression rates of up to 90% in the Operational Delta Queue (ODQ)

      • Quality of service: "exactly once in order" for all providers

      • Intelligent parallelization options for subscribers in high volume scenarios


  • What are the major use cases with ODP and BW?
    • Data transfer from extractors in SAP ERP (ODQ) to SAP BW
    • Real-time replication of tables and DB views via SAP SLT (ODQ) to SAP BW
    • Data transfer between SAP BW (ODQ) and SAP BW

 

ODP FAQ.docx.jpg

 

  • Should we change to ODP based extraction with all existing extractors?
    • No, but consider ODP as the framework for all your future implementations of new data flows into your BW system for ECC and SLT extraction.
    • ODP is the strategically relevant source system connection to SAP sources in SAP BW.

 

  • Can ODP be deployed in parallel with the traditional delta queue approach?
    • Yes, it is possible, but it multiplies the data. ODP is a new source system for BW and would add the DataSource in a new context to the system.


  • Is there a runtime advantage using ODP?
    • ODP allows you to skip the PSA layer and load directly with a DTP from the source system into a DSO
      • Runtime is reduced by more than 40% in lab results
      • Scenario: loading from the Operational Delta Queue (transaction ODQMON) in the source system via DTP into a DSO, compared to loading from the BW Service API delta queue (transaction RSA7) via InfoPackage into a PSA and then via DTP into a DSO
      • A throughput of > 35 million records per hour is achieved without tuning (three times parallel processing)
      • If the extractor logic is the bottleneck, the throughput won't change

 

  • Does ODP have an impact on how the extractors work?
    • ODP doesn’t change the implementation of application extractors; all features and capabilities (delta support, RDA enablement) remain the same.

 

  • When is ODP available or supported?
    • Provider (Source):

To use the ODP interface, you must run one of the following releases of ERP and PI_BASIS (or higher) in your ODQ system (e.g. the ERP system as source system):

  • PI_BASIS 2005_1_700 SP 24 (part of SAP NetWeaver 7.00 SP 24)
  • PI_BASIS 2006_1_700 SP 14
  • PI_BASIS 701 SP 9 (part of SAP NetWeaver 7.01 SP 9)
  • PI_BASIS 702 SP 8 (part of SAP NetWeaver 7.02 SP 8)
  • PI_BASIS 730 SP 3 (part of SAP NetWeaver 7.30 SP 3)
  • PI_BASIS 731 SP 1 (part of SAP NetWeaver 7.03 SP 1 and 7.31 SP 1)

See SAP Note 1521883 - ODP Data Replication API for further details.

 

    • Consumer BW:
    • The recommended starting release is BW 7.40
    • Supported for all databases

 

  • How can I enable standard or generic extractors using ODP?
    • The SAP Note Releasing ERP Extractors for ODP API, together with SAP Note 1558737 - Data Sources released for ODP data replication API, describes which DataSources have been released for use with the ODP data replication API:
      • Examples: 0FI_GL_50, 0HR_PA_EC_03, 0MATERIAL_ATTR, 2LIS_11_V_ITM, 0BPARTNER_ATTR, 0CO_OM_CCA_1, 0EC_CS_3, 0CO_PC_ACT_1
      • SAP is currently analyzing whether further extractors can be ODP-enabled by default. This list will be updated continuously.
    • To use the ODP data replication API for a generic DataSource (extraction method view extraction or domain extraction) you need to implement SAP Note 1585204.

 

  • Do I have to consider ODQ in my source system sizing?
    • ODP stores the data to be transferred to BW in the source system, within the ODQ; to be precise, only for the active and enabled extractors. The data is stored there for a retention period of 24 hours (by default, but adjustable) after all subscribers of a source system have received the data. Sizing therefore only concerns the data that falls within this timeframe and still has to be loaded into BW, which in most cases is negligible.
      • In the case of a non-HANA ERP system, a very high compression rate for storing the data in the ODQ can be expected. As a rule of thumb, consider 10% of the overall data growth that is loaded to BW (see the worked example after this list).
      • HANA-based ERP systems already apply HANA compression by default, so you can take the overall data growth that is loaded to BW 1:1.
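A hypothetical worked example of this rule of thumb: if a non-HANA ERP source delivers 200 GB of new data per year to BW, plan roughly 200 GB × 10% = 20 GB of additional space for the ODQ; for a HANA-based ERP source, the same volume would be planned 1:1, i.e. about 200 GB.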


  • How can I monitor the data exchange via the ODP framework?
    • Call transaction ODQMON in the provider system. Note that in certain cases the provider and the consumer may be the same system.


  • Do I need a PSA for ODP-based extraction?
    • Since BW 7.4, the DTP can write directly into the target InfoProvider for ODP-based source systems, without using a PSA table.
    • This is possible because the ODQ already provides many of the services of the PSA table.

 

  • Can I change the data in the ODQ directly?
    • No; this is one use case where you would still use a PSA table in combination with ODP sources.